DennisTraub committed Apr 25, 2024
1 parent f9972ef commit d472581
Showing 4 changed files with 129 additions and 23 deletions.
123 changes: 110 additions & 13 deletions .doc_gen/metadata/bedrock-runtime_metadata.yaml
@@ -519,10 +519,10 @@ bedrock-runtime_InvokeJurassic2:
services:
bedrock-runtime: {InvokeModel}

-bedrock-runtime_InvokeLlama2:
-  title: Invoke the Meta Llama 2 Chat model on &BR; for text generation
+bedrock-runtime_Llama2_InvokeLlama:
+  title: Invoke Meta Llama 2 on &BR; using Meta's native request and response payloads
   title_abbrev: "Meta Llama 2: Text generation"
-  synopsis: invoke the Meta Llama 2 Chat model on &BR; for text generation.
+  synopsis: get started sending prompts to Meta Llama 2 and printing the response.
category: Invoke model examples
languages:
Go:
@@ -538,20 +538,17 @@ bedrock-runtime_InvokeLlama2:
- sdk_version: 2
github: javav2/example_code/bedrock-runtime
       excerpts:
-        - description: Asynchronously invoke the Meta Llama 2 Chat foundation model to generate text.
+        - description: Send your first prompt to Meta Llama 2.
           snippet_tags:
-            - bedrock-runtime.java2.invoke_llama2_async.main
-        - description: Invoke the Meta Llama 2 Chat foundation model to generate text.
-          snippet_tags:
-            - bedrock-runtime.java2.invoke_llama2.main
+            - bedrock-runtime.java2.InvokeModel_Llama2_Quickstart
   JavaScript:
     versions:
       - sdk_version: 3
         github: javascriptv3/example_code/bedrock-runtime
         excerpts:
-          - description: Invoke the Meta Llama 2 Chat foundation model to generate text.
-            snippet_files:
-              - javascriptv3/example_code/bedrock-runtime/models/meta_llama2/llama2_chat.js
+          - description: Send your first prompt to Meta Llama 2.
+            snippet_tags:
+              - javascript.v3.bedrock-runtime.InvokeModel_Llama2_Quickstart
PHP:
versions:
- sdk_version: 3
@@ -565,9 +562,9 @@ bedrock-runtime_InvokeLlama2:
- sdk_version: 3
github: python/example_code/bedrock-runtime
       excerpts:
-        - description: Invoke the Meta Llama 2 Chat foundation model to generate text.
+        - description: Send your first prompt to Meta Llama 2.
           snippet_tags:
-            - python.example_code.bedrock-runtime.InvokeMetaLlama2
+            - python.example_code.bedrock-runtime.InvokeModel_Llama2_Quickstart
.NET:
versions:
- sdk_version: 3
@@ -579,6 +576,106 @@ bedrock-runtime_InvokeLlama2:
services:
bedrock-runtime: {InvokeModel}

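For reference, the "Send your first prompt to Meta Llama 2" excerpts listed above correspond to code along these lines — a minimal sketch using boto3's `bedrock-runtime` client, assuming the `meta.llama2-13b-chat-v1` model ID and an AWS account with credentials and Bedrock model access configured:

```python
import json

# Assumed model ID; Bedrock also offers 70B chat variants.
MODEL_ID = "meta.llama2-13b-chat-v1"


def build_llama2_body(prompt, max_gen_len=512, temperature=0.5):
    """Serialize Meta's native request payload, wrapping the prompt
    in Llama 2's [INST] instruction format."""
    return json.dumps({
        "prompt": f"<s>[INST] {prompt} [/INST]",
        "max_gen_len": max_gen_len,
        "temperature": temperature,
    })


def invoke_llama2(prompt):
    """Send one prompt and return the generated text.
    Requires AWS credentials and Bedrock model access."""
    import boto3  # deferred so the payload helper stays testable offline
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(modelId=MODEL_ID, body=build_llama2_body(prompt))
    # Meta's native response payload carries the completion in "generation".
    return json.loads(response["body"].read())["generation"]
```

The actual snippets referenced by the metadata live in the repository; this sketch only illustrates the request/response shape they share.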
bedrock-runtime_Llama2_InvokeModelWithResponseStream:
title: Invoke Meta Llama 2 on &BR; using Meta's native request and response payloads with a response stream
title_abbrev: "Meta Llama 2: Text generation with response stream"
synopsis: get started sending prompts to Meta Llama 2 and printing the response stream in real time.
category: Invoke model examples
languages:
Java:
versions:
- sdk_version: 2
github: javav2/example_code/bedrock-runtime
excerpts:
- description: Send your first prompt to Meta Llama 2.
snippet_tags:
- bedrock-runtime.java2.InvokeModelWithResponseStream_Llama2_Quickstart
JavaScript:
versions:
- sdk_version: 3
github: javascriptv3/example_code/bedrock-runtime
excerpts:
- description: Send your first prompt to Meta Llama 2.
snippet_tags:
- javascript.v3.bedrock-runtime.InvokeModelWithResponseStream_Llama2_Quickstart
Python:
versions:
- sdk_version: 3
github: python/example_code/bedrock-runtime
excerpts:
- description: Send your first prompt to Meta Llama 2.
snippet_tags:
- python.example_code.bedrock-runtime.InvokeModelWithResponseStream_Llama2_Quickstart
services:
bedrock-runtime: {InvokeModelWithResponseStream}


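The response-stream entry above maps to the InvokeModelWithResponseStream API, which yields the completion in chunks instead of one blocking response. A sketch of the pattern (model ID and streaming-event shape assumed to match Meta's native payload on Bedrock; requires boto3 with credentials and model access):

```python
import json

MODEL_ID = "meta.llama2-13b-chat-v1"  # assumed model ID


def parse_chunk(event):
    """Decode one streaming event into its text fragment."""
    return json.loads(event["chunk"]["bytes"]).get("generation", "")


def stream_llama2(prompt):
    """Yield generated text fragments as they arrive, instead of
    waiting for the full completion. Requires AWS credentials."""
    import boto3  # deferred so parse_chunk stays testable offline
    client = boto3.client("bedrock-runtime")
    body = json.dumps({"prompt": f"<s>[INST] {prompt} [/INST]", "max_gen_len": 512})
    response = client.invoke_model_with_response_stream(modelId=MODEL_ID, body=body)
    for event in response["body"]:
        yield parse_chunk(event)


# Usage (prints tokens as they stream in):
# for fragment in stream_llama2("Name three colors."):
#     print(fragment, end="", flush=True)
```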
bedrock-runtime_Llama3_InvokeLlama:
title: Invoke Meta Llama 3 on &BR; using Meta's native request and response payloads
title_abbrev: "Meta Llama 3: Text generation"
synopsis: get started sending prompts to Meta Llama 3 and printing the response.
category: Invoke model examples
languages:
Java:
versions:
- sdk_version: 2
github: javav2/example_code/bedrock-runtime
excerpts:
- description: Send your first prompt to Meta Llama 3.
snippet_tags:
- bedrock-runtime.java2.InvokeModel_Llama3_Quickstart
JavaScript:
versions:
- sdk_version: 3
github: javascriptv3/example_code/bedrock-runtime
excerpts:
- description: Send your first prompt to Meta Llama 3.
snippet_tags:
- javascript.v3.bedrock-runtime.InvokeModel_Llama3_Quickstart
Python:
versions:
- sdk_version: 3
github: python/example_code/bedrock-runtime
excerpts:
- description: Send your first prompt to Meta Llama 3.
snippet_tags:
- python.example_code.bedrock-runtime.InvokeModel_Llama3_Quickstart
services:
bedrock-runtime: {InvokeModel}

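The main practical difference between the Llama 2 and Llama 3 entries is the native prompt template: Llama 3 uses header tokens rather than `[INST]` tags. A sketch of the request payload (the template and the `meta.llama3-8b-instruct-v1:0` model ID are assumptions based on Meta's published prompt format):

```python
import json

MODEL_ID = "meta.llama3-8b-instruct-v1:0"  # assumed model ID


def build_llama3_body(prompt, max_gen_len=512):
    """Embed the prompt in Llama 3's header-token template and
    serialize Meta's native request payload."""
    formatted = (
        "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n"
        f"{prompt}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n"
    )
    return json.dumps({"prompt": formatted, "max_gen_len": max_gen_len})
```

The body produced here is passed to `invoke_model` exactly as in the Llama 2 case; only the prompt wrapping and model ID change.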
bedrock-runtime_Llama3_InvokeModelWithResponseStream:
title: Invoke Meta Llama 3 on &BR; using Meta's native request and response payloads with a response stream
title_abbrev: "Meta Llama 3: Text generation with response stream"
synopsis: get started sending prompts to Meta Llama 3 and printing the response stream in real time.
category: Invoke model examples
languages:
Java:
versions:
- sdk_version: 2
github: javav2/example_code/bedrock-runtime
excerpts:
- description: Send your first prompt to Meta Llama 3.
snippet_tags:
- bedrock-runtime.java2.InvokeModelWithResponseStream_Llama3_Quickstart
JavaScript:
versions:
- sdk_version: 3
github: javascriptv3/example_code/bedrock-runtime
excerpts:
- description: Send your first prompt to Meta Llama 3.
snippet_tags:
- javascript.v3.bedrock-runtime.InvokeModelWithResponseStream_Llama3_Quickstart
Python:
versions:
- sdk_version: 3
github: python/example_code/bedrock-runtime
excerpts:
- description: Send your first prompt to Meta Llama 3.
snippet_tags:
- python.example_code.bedrock-runtime.InvokeModelWithResponseStream_Llama3_Quickstart
services:
bedrock-runtime: {InvokeModelWithResponseStream}

bedrock-runtime_Scenario_InvokeModels:
title: Invoke various foundation models on &BR;
title_abbrev: Invoke multiple foundation models on &BR;
5 changes: 4 additions & 1 deletion javascriptv3/example_code/bedrock-runtime/README.md
@@ -51,7 +51,10 @@ functions within the same service.
- [Anthropic Claude 2: Text generation](models/anthropic_claude/claude_2.js)
- [Anthropic Claude 3: Text generation](models/anthropic_claude/claude_3.js)
- [Anthropic Claude Instant: Text generation](models/anthropic_claude/claude_instant_1.js)
-- [Meta Llama 2: Text generation](models/meta_llama2/llama2_chat.js)
+- [Meta Llama 2: Text generation](models/meta/llama2/invoke_model_quickstart.js#L4)
+- [Meta Llama 2: Text generation with response stream](models/meta/llama2/invoke_model_with_response_stream_quickstart.js#L4)
+- [Meta Llama 3: Text generation](models/meta/llama3/invoke_model_quickstart.js#L4)
+- [Meta Llama 3: Text generation with response stream](models/meta/llama3/invoke_model_with_response_stream_quickstart.js#L4)
- [Mistral AI: Text generation with Mistral 7B Instruct](models/mistral_ai/mistral_7b.js)
- [Mistral AI: Text generation with Mixtral 8x7B Instruct](models/mistral_ai/mixtral_8x7b.js)

9 changes: 6 additions & 3 deletions javav2/example_code/bedrock-runtime/README.md
@@ -41,14 +41,17 @@ functions within the same service.
### Invoke model examples

- [AI21 Labs Jurassic-2: Text generation](src/main/java/com/example/bedrockruntime/InvokeModelAsync.java#L205)
-- [Amazon Titan: Image generation](src/main/java/com/example/bedrockruntime/InvokeModelAsync.java#L399)
+- [Amazon Titan: Image generation](src/main/java/com/example/bedrockruntime/InvokeModelAsync.java#L338)
 - [Anthropic Claude 2: Real-time response stream processing](src/main/java/com/example/bedrockruntime/Claude2.java#L65)
 - [Anthropic Claude 2: Text generation](src/main/java/com/example/bedrockruntime/InvokeModel.java#L112)
 - [Anthropic Claude 3: Real-time response stream processing](src/main/java/com/example/bedrockruntime/Claude3.java#L49)
-- [Meta Llama 2: Text generation](src/main/java/com/example/bedrockruntime/InvokeModelAsync.java#L268)
+- [Meta Llama 2: Text generation](src/main/java/com/example/bedrockruntime/models/meta/llama2/InvokeModelQuickstart.java#L11)
+- [Meta Llama 2: Text generation with response stream](src/main/java/com/example/bedrockruntime/models/meta/llama2/InvokeModelWithResponseStreamQuickstart.java#L12)
+- [Meta Llama 3: Text generation](src/main/java/com/example/bedrockruntime/models/meta/llama3/InvokeModelQuickstart.java#L13)
+- [Meta Llama 3: Text generation with response stream](src/main/java/com/example/bedrockruntime/models/meta/llama3/InvokeModelWithResponseStreamQuickstart.java#L14)
 - [Mistral AI: Text generation with Mistral 7B Instruct](src/main/java/com/example/bedrockruntime/InvokeModelAsync.java#L33)
 - [Mistral AI: Text generation with Mixtral 8x7B Instruct](src/main/java/com/example/bedrockruntime/InvokeModelAsync.java#L88)
-- [Stable Diffusion: Image generation](src/main/java/com/example/bedrockruntime/InvokeModelAsync.java#L329)
+- [Stable Diffusion: Image generation](src/main/java/com/example/bedrockruntime/InvokeModelAsync.java#L268)


<!--custom.examples.start-->
15 changes: 9 additions & 6 deletions python/example_code/bedrock-runtime/README.md
@@ -39,15 +39,18 @@ python -m pip install -r requirements.txt
### Invoke model examples

- [AI21 Labs Jurassic-2: Text generation](bedrock_runtime_wrapper.py#L79)
-- [Amazon Titan: Image generation](bedrock_runtime_wrapper.py#L275)
-- [Anthropic Claude 2: Real-time response stream processing](bedrock_runtime_wrapper.py#L320)
+- [Amazon Titan: Image generation](bedrock_runtime_wrapper.py#L238)
+- [Anthropic Claude 2: Real-time response stream processing](bedrock_runtime_wrapper.py#L283)
 - [Anthropic Claude 2: Text generation](bedrock_runtime_wrapper.py#L39)
 - [Anthropic Claude 3: Multimodal invocation](models/anthropic/claude_3.py#L94)
 - [Anthropic Claude 3: Text generation](models/anthropic/claude_3.py#L33)
-- [Meta Llama 2: Text generation](bedrock_runtime_wrapper.py#L115)
-- [Mistral AI: Text generation with Mistral 7B Instruct](bedrock_runtime_wrapper.py#L152)
-- [Mistral AI: Text generation with Mixtral 8x7B Instruct](bedrock_runtime_wrapper.py#L192)
-- [Stable Diffusion: Image generation](bedrock_runtime_wrapper.py#L232)
+- [Meta Llama 2: Text generation](models/meta/llama2/invoke_model_quickstart.py#L4)
+- [Meta Llama 2: Text generation with response stream](models/meta/llama2/invoke_model_with_response_stream_quickstart.py#L4)
+- [Meta Llama 3: Text generation](models/meta/llama3/invoke_model_quickstart.py#L4)
+- [Meta Llama 3: Text generation with response stream](models/meta/llama3/invoke_model_with_response_stream_quickstart.py#L4)
+- [Mistral AI: Text generation with Mistral 7B Instruct](bedrock_runtime_wrapper.py#L115)
+- [Mistral AI: Text generation with Mixtral 8x7B Instruct](bedrock_runtime_wrapper.py#L155)
+- [Stable Diffusion: Image generation](bedrock_runtime_wrapper.py#L195)


<!--custom.examples.start-->
Expand Down
