content/en/references/configuration.md

@@ -94,7 +94,9 @@ This section covers configuration options that are specific to certain AWS services
| Variable | Example Values | Description |
| - | - | - |
| `LOCALSTACK_ENABLE_BEDROCK` | `1` | Use the Bedrock provider |
| `BEDROCK_PREWARM` | `0` (default) \| `1` | Pre-warm the Bedrock engine directly on LocalStack startup instead of on demand. |
| `DEFAULT_BEDROCK_MODEL` | `qwen2.5:0.5b` (default) | The model used to handle text model invocations in Bedrock. Any text-based model available for Ollama can be used. |
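As a sketch, these variables can be passed as container environment variables when starting LocalStack with Docker. The image name and port mapping below are assumptions; adjust them to your edition and setup:

```shell
# Hedged sketch: start LocalStack with the Bedrock provider enabled,
# pre-warm the engine, and override the default emulation model.
# The image name may differ (e.g. a Pro image) depending on your setup.
docker run --rm -it \
  -p 4566:4566 \
  -e LOCALSTACK_ENABLE_BEDROCK=1 \
  -e BEDROCK_PREWARM=1 \
  -e DEFAULT_BEDROCK_MODEL=qwen2.5:0.5b \
  localstack/localstack
```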
### BigData (EMR, Athena, Glue)

content/en/user-guide/aws/bedrock/index.md
@@ -25,18 +25,27 @@ We will demonstrate how to use Bedrock by following these steps:
### List available foundation models

You can view all available foundation models using the [`ListFoundationModels`](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListFoundationModels.html) API.
This will show you which models are available on AWS Bedrock.

{{< callout "note" >}}
The actual model used for emulation will differ from the ones defined in this list.
You can define the model to use with `DEFAULT_BEDROCK_MODEL`.
{{< /callout >}}
Run the following command:

{{< command >}}
$ awslocal bedrock list-foundation-models
{{< /command >}}
### Pre-warming the Bedrock engine

The startup of the Bedrock engine can take some time.
By default, we only start it once you send a request to one of the `bedrock-runtime` APIs.
However, if you want to start the engine when LocalStack starts, to avoid long wait times on your first request, you can set the flag `BEDROCK_PREWARM`.
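For example, assuming the `localstack` CLI is installed, the flag can be set in the environment before startup. This is a sketch; the same variables can also be passed via Docker or docker-compose:

```shell
# Sketch: enable Bedrock and pre-warm its engine at startup so the
# first bedrock-runtime request does not pay the engine boot cost.
export LOCALSTACK_ENABLE_BEDROCK=1
export BEDROCK_PREWARM=1
localstack start
```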
### Invoke a model

You can use the [`InvokeModel`](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) API to send requests to a specific model.
In this example, we selected the Llama 3 model to process a simple prompt.
However, the actual model that processes the prompt is defined by the `DEFAULT_BEDROCK_MODEL` environment variable.

Run the following command:
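The unchanged command itself is collapsed in this diff view. As an illustrative sketch only (the model ID, request body shape, and output file name are assumptions, not confirmed by this page), an `InvokeModel` call against LocalStack could look like:

```shell
# Illustrative only: the model ID and JSON body shape are assumptions.
# LocalStack routes the request to the model set via DEFAULT_BEDROCK_MODEL.
awslocal bedrock-runtime invoke-model \
  --model-id "meta.llama3-8b-instruct-v1:0" \
  --cli-binary-format raw-in-base64-out \
  --body '{"prompt": "Say hello!"}' \
  outfile.json
```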
@@ -75,5 +84,5 @@ $ awslocal bedrock-runtime converse \

## Limitations

* LocalStack Bedrock currently only officially supports text-based models.
* Currently, GPU models are not supported by the LocalStack Bedrock implementation.