This repository was archived by the owner on Aug 7, 2025. It is now read-only.
Bedrock can process large batches of model invocation requests stored in S3 buckets via the [`CreateModelInvocationJob`](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_CreateModelInvocationJob.html) API.
First, you need to create a `JSONL` file that contains all your prompts:

{{< command >}}
$ cat batch_input.jsonl
{"prompt": "Tell me a quick fact about Vienna.", "max_tokens": 50, "temperature": 0.5}
{"prompt": "Tell me a quick fact about Zurich.", "max_tokens": 50, "temperature": 0.5}
{"prompt": "Tell me a quick fact about Las Vegas.", "max_tokens": 50, "temperature": 0.5}
{{< / command >}}
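For larger batches, you can also generate the input file in a loop instead of writing it by hand. A minimal sketch (the city list and prompt template are illustrative):

```shell
# Write one JSON line per prompt into batch_input.jsonl (cities are illustrative)
for city in Vienna Zurich "Las Vegas"; do
  printf '{"prompt": "Tell me a quick fact about %s.", "max_tokens": 50, "temperature": 0.5}\n' "$city"
done > batch_input.jsonl
```

Each iteration emits one self-contained JSON object, which is exactly the line-delimited format the batch job expects.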
Then, you need to create buckets for the input and the output, and upload the file to the input bucket:

{{< command >}}
$ awslocal s3 mb s3://in-bucket
make_bucket: in-bucket

$ awslocal s3 cp batch_input.jsonl s3://in-bucket
upload: ./batch_input.jsonl to s3://in-bucket/batch_input.jsonl

$ awslocal s3 mb s3://out-bucket
make_bucket: out-bucket
{{< / command >}}
Afterwards, you can run the invocation job like this:
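A sketch of such a call against a running LocalStack instance, assuming the buckets created above; the job name, model ID, and IAM role ARN are placeholder values:

```shell
# Sketch: job name, model ID, and role ARN below are illustrative placeholders
awslocal bedrock create-model-invocation-job \
  --job-name my-batch-job \
  --model-id mistral.mistral-small-2402-v1:0 \
  --role-arn arn:aws:iam::000000000000:role/bedrock-batch-role \
  --input-data-config '{"s3InputDataConfig": {"s3Uri": "s3://in-bucket/batch_input.jsonl"}}' \
  --output-data-config '{"s3OutputDataConfig": {"s3Uri": "s3://out-bucket/"}}'
```

The `--input-data-config` and `--output-data-config` parameters point the job at the input file and the output bucket, respectively.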