From 5407f1ed1ce2a0ae80eeb0d27b6e624d687f443e Mon Sep 17 00:00:00 2001
From: Silvio Vasiljevic <silvio.vasiljevic@gmail.com>
Date: Thu, 14 Nov 2024 16:37:13 +0100
Subject: [PATCH] Add new bedrock docs

---
 content/en/references/configuration.md     |  3 ++-
 content/en/user-guide/aws/bedrock/index.md | 37 +++++++++++++++++++++++++++++++++----
 2 files changed, 35 insertions(+), 5 deletions(-)

diff --git a/content/en/references/configuration.md b/content/en/references/configuration.md
index 0c124dc119..5fecb0be49 100644
--- a/content/en/references/configuration.md
+++ b/content/en/references/configuration.md
@@ -94,7 +94,8 @@ This section covers configuration options that are specific to certain AWS servi
 
 | Variable | Example Values | Description |
 | - | - | - |
-| `LOCALSTACK_ENABLE_BEDROCK` | `1` | Use the Bedrock provider |
+| `BEDROCK_PREWARM` | `0` (default) \| `1` | Pre-warm the Bedrock engine directly on LocalStack startup instead of on demand. |
+| `DEFAULT_BEDROCK_MODEL` | `qwen2.5:0.5b` (default) | The model used to handle text model invocations in Bedrock. Any text-based model available in Ollama can be used. |
 
 ### BigData (EMR, Athena, Glue)
 
diff --git a/content/en/user-guide/aws/bedrock/index.md b/content/en/user-guide/aws/bedrock/index.md
index ef642e157c..a6476ed99c 100644
--- a/content/en/user-guide/aws/bedrock/index.md
+++ b/content/en/user-guide/aws/bedrock/index.md
@@ -15,28 +15,57 @@ The supported APIs are available on our [API Coverage Page](https://docs.localst
 
 This guide is designed for users new to AWS Bedrock and assumes basic knowledge of the AWS CLI and our `awslocal` wrapper script.
 
-Start your LocalStack container using your preferred method using the `LOCALSTACK_ENABLE_BEDROCK=1` configuration variable.
+Start your LocalStack container using your preferred method, with or without pre-warming the Bedrock engine.
 We will demonstrate how to use Bedrock by following these steps:
 
 1. Listing available foundation models
 2. Invoking a model for inference
 3. Using the conversation API
 
+### Pre-warming the Bedrock engine
+
+The startup of the Bedrock engine can take some time.
+By default, it is only started when you send the first request to one of the `bedrock-runtime` APIs.
+However, if you want to start the engine on LocalStack startup to avoid a long wait on your first request, you can set the configuration variable `BEDROCK_PREWARM=1`.
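+
+For example, assuming you start LocalStack via its CLI, you can enable pre-warming by setting the variable in the startup environment:
+
+{{< command >}}
+$ BEDROCK_PREWARM=1 localstack start
+{{< / command >}}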
+
 ### List available foundation models
 
 You can view all available foundation models using the [`ListFoundationModels`](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListFoundationModels.html) API.
-This will show you which models are available for use in your local environment.
+This will show you which models are available on AWS Bedrock.
+{{< callout "note" >}}
+The model that actually handles a request can differ from the models in this list.
+You can define the model to be used with the `DEFAULT_BEDROCK_MODEL` configuration variable.
+{{< / callout >}}
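+
+For example, assuming the `llama3.2:1b` model is available in Ollama, you could have it handle all text model invocations by starting LocalStack like this:
+
+{{< command >}}
+$ DEFAULT_BEDROCK_MODEL=llama3.2:1b localstack start
+{{< / command >}}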
 
 Run the following command:
 
 {{< command >}}
 $ awslocal bedrock list-foundation-models
 {{< / command >}}
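+
+If you are only interested in the model IDs, you can filter the output with a standard AWS CLI JMESPath query (assuming the usual `modelSummaries` response shape):
+
+{{< command >}}
+$ awslocal bedrock list-foundation-models --query 'modelSummaries[*].modelId'
+{{< / command >}}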
 
 ### Invoke a model
 
 You can use the [`InvokeModel`](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) API to send requests to a specific model.
-In this example, we'll use the Llama 3 model to process a simple prompt.
+In this example, we select the Llama 3 model to process a simple prompt.
+However, the request will actually be handled by the model defined in the `DEFAULT_BEDROCK_MODEL` configuration variable.
 
 Run the following command:
 
@@ -75,5 +104,5 @@ $ awslocal bedrock-runtime converse \
 
 ## Limitations
 
-* LocalStack Bedrock implementation is mock-only and does not run any LLM model locally.
+* LocalStack Bedrock currently offers official support for text-based models only.
 * Currently, GPU models are not supported by the LocalStack Bedrock implementation.