## Introduction
Bedrock is a fully managed service provided by Amazon Web Services (AWS) that makes foundation models from various LLM providers accessible via an API.
LocalStack allows you to use the Bedrock APIs to test and develop AI-powered applications in your local environment.
The supported APIs are available on our [API Coverage Page](https://docs.localstack.cloud/references/coverage/coverage_bedrock/), which provides information on the extent of Bedrock's integration with LocalStack.
## Getting started
This guide is designed for users new to AWS Bedrock and assumes basic knowledge of the AWS CLI and our `awslocal` wrapper script.
Start your LocalStack container using your preferred method with the `LOCALSTACK_ENABLE_BEDROCK=1` configuration variable.
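For example, if you use the `localstack` CLI (an assumption; any startup method works as long as the variable is set), you can pass it inline:

{{< command >}}
$ LOCALSTACK_ENABLE_BEDROCK=1 localstack start
{{< / command >}}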
We will demonstrate how to use Bedrock by following these steps:
1. Listing available foundation models
2. Invoking a model for inference
3. Using the conversation API
### List available foundation models
You can view all available foundation models using the [`ListFoundationModels`](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListFoundationModels.html) API.
This will show you which models are available for use in your local environment.
Run the following command:
{{< command >}}
$ awslocal bedrock list-foundation-models
{{< / command >}}
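The full response is fairly verbose; to list only the model IDs, you can use the AWS CLI's generic `--query` option (standard JMESPath filtering, not Bedrock-specific):

{{< command >}}
$ awslocal bedrock list-foundation-models --query 'modelSummaries[].modelId'
{{< / command >}}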
### Invoke a model
You can use the [`InvokeModel`](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) API to send requests to a specific model.
In this example, we'll use the Llama 3 model to process a simple prompt.
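A minimal sketch of such a call, assuming the `bedrock-runtime invoke-model` CLI, Meta's Llama request schema (`prompt`, `max_gen_len`), and AWS CLI v2's `--cli-binary-format` flag:

{{< command >}}
$ awslocal bedrock-runtime invoke-model \
  --model-id "meta.llama3-8b-instruct-v1:0" \
  --body '{"prompt": "Say Hello!", "max_gen_len": 50}' \
  --cli-binary-format raw-in-base64-out \
  outfile.txt
{{< / command >}}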
The output will be available in `outfile.txt`.
### Use the conversation API
Bedrock provides a higher-level conversation API that makes it easier to maintain context in a chat-like interaction using the [`Converse`](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html) API.
58
+
You can specify both system prompts and user messages.
59
+
60
+
Run the following command:
{{< command >}}
$ awslocal bedrock-runtime converse \
  --model-id "meta.llama3-8b-instruct-v1:0" \
  --messages '[{
    "role": "user",
    "content": [{
      "text": "Say Hello!"
    }]
  }]' \
  --system '[{
    "text": "You'\''re a chatbot that can only say '\''Hello!'\''"
  }]'
{{< / command >}}
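To carry context across turns, include the model's previous reply as an `assistant` message in the next request. A minimal sketch with abbreviated contents, assuming the same model:

{{< command >}}
$ awslocal bedrock-runtime converse \
  --model-id "meta.llama3-8b-instruct-v1:0" \
  --messages '[
    {"role": "user", "content": [{"text": "Say Hello!"}]},
    {"role": "assistant", "content": [{"text": "Hello!"}]},
    {"role": "user", "content": [{"text": "Say it again!"}]}
  ]'
{{< / command >}}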
## Limitations
* The LocalStack Bedrock implementation is mock-only and does not run any LLM model locally.
* Currently, GPU models are not supported.