This repository was archived by the owner on Aug 7, 2025. It is now read-only.

Commit 66ba59d: "add docs fully" (1 parent 4059aa1)

File tree: 1 file changed (64 additions, 3 deletions)

content/en/user-guide/aws/bedrock/index.md
## Introduction

Bedrock is a fully managed service provided by Amazon Web Services (AWS) that makes foundation models from various LLM providers accessible via an API.
LocalStack allows you to use the Bedrock APIs to test and develop AI-powered applications in your local environment.
The supported APIs are available on our [API Coverage Page](https://docs.localstack.cloud/references/coverage/coverage_bedrock/), which provides information on the extent of Bedrock's integration with LocalStack.
## Getting started

This guide is designed for users new to AWS Bedrock and assumes basic knowledge of the AWS CLI and our `awslocal` wrapper script.

Start your LocalStack container using your preferred method, with the `LOCALSTACK_ENABLE_BEDROCK=1` configuration variable set.
We will demonstrate how to use Bedrock by following these steps:

1. Listing available foundation models
2. Invoking a model for inference
3. Using the conversation API
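As a sketch of the first step above, assuming the LocalStack CLI is installed, the container can be started with the flag set inline:

```shell
# Start LocalStack with the Bedrock provider enabled.
# Assumes the LocalStack CLI and a running Docker daemon are available;
# Bedrock is an Enterprise-image feature, so a valid auth token may also be required.
LOCALSTACK_ENABLE_BEDROCK=1 localstack start
```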
### List available foundation models

You can view all available foundation models using the [`ListFoundationModels`](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListFoundationModels.html) API.
This will show you which models are available for use in your local environment.

Run the following command:

{{< command >}}
$ awslocal bedrock list-foundation-models
{{< / command >}}
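If you are scripting against LocalStack rather than using the CLI, the model IDs can be pulled out of a `ListFoundationModels`-style response. The payload below is a hypothetical sample mirroring the documented `modelSummaries` shape, not real LocalStack output:

```python
# Sketch: extracting model IDs from a ListFoundationModels-style response.
# A real call would look like:
#   import boto3
#   client = boto3.client("bedrock", endpoint_url="http://localhost:4566")
#   response = client.list_foundation_models()

# Hypothetical sample response following the documented shape.
sample_response = {
    "modelSummaries": [
        {"modelId": "meta.llama3-8b-instruct-v1:0", "providerName": "Meta"},
        {"modelId": "amazon.titan-text-lite-v1", "providerName": "Amazon"},
    ]
}

# Collect just the model identifiers from each summary entry.
model_ids = [m["modelId"] for m in sample_response["modelSummaries"]]
print(model_ids)
```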

### Invoke a model

You can use the [`InvokeModel`](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) API to send requests to a specific model.
In this example, we'll use the Llama 3 model to process a simple prompt.

Run the following command:

{{< command >}}
$ awslocal bedrock-runtime invoke-model \
  --model-id "meta.llama3-8b-instruct-v1:0" \
  --body '{
    "prompt": "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\nSay Hello!\n<|eot_id|>\n<|start_header_id|>assistant<|end_header_id|>",
    "max_gen_len": 2,
    "temperature": 0.9
  }' --cli-binary-format raw-in-base64-out outfile.txt
{{< / command >}}

The output will be available in `outfile.txt`.
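The same request body can be built programmatically, which avoids hand-escaping the Llama 3 prompt template inside shell quotes. This is a minimal sketch; the `build_llama3_body` helper is illustrative, not part of any SDK:

```python
import json

def build_llama3_body(user_prompt: str, max_gen_len: int = 2, temperature: float = 0.9) -> str:
    """Build an InvokeModel request body using Meta's Llama 3 instruct prompt format."""
    prompt = (
        "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n"
        f"{user_prompt}\n<|eot_id|>\n"
        "<|start_header_id|>assistant<|end_header_id|>"
    )
    return json.dumps({
        "prompt": prompt,
        "max_gen_len": max_gen_len,
        "temperature": temperature,
    })

body = build_llama3_body("Say Hello!")
# This string is what the CLI passes via --body; with boto3 it could be sent as:
#   client = boto3.client("bedrock-runtime", endpoint_url="http://localhost:4566")
#   client.invoke_model(modelId="meta.llama3-8b-instruct-v1:0", body=body)
print(body)
```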

### Use the conversation API

With the [`Converse`](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html) API, Bedrock provides a higher-level conversation interface that makes it easier to maintain context in a chat-like interaction.
You can specify both system prompts and user messages.

Run the following command:

{{< command >}}
$ awslocal bedrock-runtime converse \
  --model-id "meta.llama3-8b-instruct-v1:0" \
  --messages '[{
    "role": "user",
    "content": [{
      "text": "Say Hello!"
    }]
  }]' \
  --system '[{
    "text": "You'\''re a chatbot that can only say '\''Hello!'\''"
  }]'
{{< / command >}}
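Building the `messages` and `system` structures in Python sidesteps the awkward single-quote escaping the shell version needs. A minimal sketch of the same request:

```python
# Sketch: the messages/system structure the Converse API expects.
# With boto3 this could be sent as:
#   client = boto3.client("bedrock-runtime", endpoint_url="http://localhost:4566")
#   response = client.converse(
#       modelId="meta.llama3-8b-instruct-v1:0", messages=messages, system=system)

messages = [
    {"role": "user", "content": [{"text": "Say Hello!"}]},
]
system = [
    {"text": "You're a chatbot that can only say 'Hello!'"},
]

# The assistant's reply would then be read from
#   response["output"]["message"]["content"][0]["text"]
print(messages[0]["content"][0]["text"])
```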

## Limitations

* The LocalStack Bedrock implementation is mock-only and does not run any LLM model locally.
* Currently, GPU models are not supported by the LocalStack Bedrock implementation.
