Add new bedrock docs
silv-io committed Nov 14, 2024
1 parent 452b3c3 commit 5407f1e
Showing 2 changed files with 17 additions and 6 deletions.
3 changes: 2 additions & 1 deletion content/en/references/configuration.md
@@ -94,7 +94,8 @@ This section covers configuration options that are specific to certain AWS services

| Variable | Example Values | Description |
| - | - | - |
| `LOCALSTACK_ENABLE_BEDROCK` | `1` | Use the Bedrock provider |
| `BEDROCK_PREWARM` | `0` (default) \| `1` | Pre-warm the Bedrock engine directly on LocalStack startup instead of on demand. |
| `DEFAULT_BEDROCK_MODEL` | `qwen2.5:0.5b` (default) | The model used to handle text model invocations in Bedrock. Any text-based model available for Ollama can be used. |
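
These variables can be passed to the LocalStack container at startup. As an illustration, here is a minimal sketch assuming a Docker-based setup with the Pro image and `LOCALSTACK_AUTH_TOKEN` exported in your shell:

{{< command >}}
$ docker run --rm -it -p 4566:4566 \
    -e LOCALSTACK_AUTH_TOKEN \
    -e BEDROCK_PREWARM=1 \
    -e DEFAULT_BEDROCK_MODEL=qwen2.5:0.5b \
    localstack/localstack-pro
{{< / command >}}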

### BigData (EMR, Athena, Glue)

20 changes: 15 additions & 5 deletions content/en/user-guide/aws/bedrock/index.md
@@ -15,28 +15,38 @@ The supported APIs are available on our [API Coverage Page](https://docs.localst

This guide is designed for users new to AWS Bedrock and assumes basic knowledge of the AWS CLI and our `awslocal` wrapper script.

Start your LocalStack container using your preferred method using the `LOCALSTACK_ENABLE_BEDROCK=1` configuration variable.
Start your LocalStack container using your preferred method with or without pre-warming the Bedrock engine.
We will demonstrate how to use Bedrock by following these steps:

1. Listing available foundation models
2. Invoking a model for inference
3. Using the conversation API

### Pre-warming the Bedrock engine

The startup of the Bedrock engine can take some time.
By default, it is only started once you send the first request to one of the `bedrock-runtime` APIs.
However, if you want the engine to start when LocalStack starts, to avoid a long wait on your first request, you can set the `BEDROCK_PREWARM` flag.
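
For example, to have the engine ready as soon as the container is up, you can enable pre-warming and pin the default model at startup. This is a minimal sketch assuming the `localstack` CLI is installed and passes these configuration variables through to the container:

{{< command >}}
$ BEDROCK_PREWARM=1 DEFAULT_BEDROCK_MODEL=qwen2.5:0.5b localstack start
{{< / command >}}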

### List available foundation models

You can view all available foundation models using the [`ListFoundationModels`](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListFoundationModels.html) API.
This will show you which models are available for use in your local environment.
This will show you which models are available on AWS Bedrock.
{{< callout "note">}}
The model that is actually used for emulation will differ from the models in this list.
You can select it with the `DEFAULT_BEDROCK_MODEL` configuration variable.
{{< / callout >}}

Run the following command:

{{< command >}}
$ awslocal bedrock list-foundation-models
{{< / command >}}
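
If you are only interested in the model identifiers, you can narrow the output with a JMESPath query. This is a sketch that assumes the response uses the standard `modelSummaries` field:

{{< command >}}
$ awslocal bedrock list-foundation-models --query "modelSummaries[*].modelId" --output table
{{< / command >}}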

### Invoke a model

You can use the [`InvokeModel`](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) API to send requests to a specific model.
In this example, we'll use the Llama 3 model to process a simple prompt.
In this example, we select the Llama 3 model to process a simple prompt.
However, the model that actually handles the request is determined by the `DEFAULT_BEDROCK_MODEL` environment variable.

Run the following command:

@@ -75,5 +85,5 @@ $ awslocal bedrock-runtime converse \

## Limitations

* LocalStack Bedrock implementation is mock-only and does not run any LLM model locally.
* LocalStack Bedrock currently offers official support for text-based models only.
* Currently, GPU models are not supported by the LocalStack Bedrock implementation.
