feat(inference): add tutorial how to implement RAG with managed infer… #3744

Merged
merged 27 commits into from Oct 4, 2024
Changes from 10 commits

Commits (27)
2ff288b
feat(inference): add tutorial how to implement RAG with managed infer…
Laure-di Sep 24, 2024
73f71c2
configure dev env part
Laure-di Sep 24, 2024
83490a3
Managed database setup
Laure-di Sep 24, 2024
ca9beb9
last part of the first version
Laure-di Sep 24, 2024
d9f4ef2
add more informations on the tuto
Laure-di Sep 26, 2024
d1c41bc
fix title
Laure-di Sep 27, 2024
969b299
fix: remove unused env var and add missing one in code
Laure-di Sep 27, 2024
08218ba
database management improve
Laure-di Sep 27, 2024
6f66eda
develop Document loader
Laure-di Sep 27, 2024
a04d4c3
format
Laure-di Sep 27, 2024
4a72e84
Update tutorials/how-to-implement-rag/index.mdx
Laure-di Oct 3, 2024
004699b
Update tutorials/how-to-implement-rag/index.mdx
Laure-di Oct 3, 2024
b298435
Update tutorials/how-to-implement-rag/index.mdx
Laure-di Oct 3, 2024
52f5265
Update tutorials/how-to-implement-rag/index.mdx
Laure-di Oct 3, 2024
93b07a4
Update tutorials/how-to-implement-rag/index.mdx
Laure-di Oct 3, 2024
bf83379
Update tutorials/how-to-implement-rag/index.mdx
Laure-di Oct 3, 2024
a3f42df
Update tutorials/how-to-implement-rag/index.mdx
Laure-di Oct 3, 2024
29aa648
Update tutorials/how-to-implement-rag/index.mdx
Laure-di Oct 3, 2024
ea655a8
Update tutorials/how-to-implement-rag/index.mdx
Laure-di Oct 3, 2024
f5b4d4d
switch metodo
Laure-di Oct 3, 2024
69bc345
pre-defined prompt
Laure-di Oct 3, 2024
403fd5e
add custom prompt
Laure-di Oct 4, 2024
69c9974
exemple of endpoint
Laure-di Oct 4, 2024
8419eb7
Apply suggestions from code review
Laure-di Oct 4, 2024
bb72b38
Apply suggestions from code review
Laure-di Oct 4, 2024
d35ff83
Apply suggestions from code review
bene2k1 Oct 4, 2024
a77d197
Apply suggestions from code review
bene2k1 Oct 4, 2024
275 changes: 275 additions & 0 deletions tutorials/how-to-implement-rag/index.mdx
@@ -0,0 +1,275 @@
---
meta:
title: Step-by-Step Guide Implementing Retrieval-Augmented Generation (RAG) with LangChain and Scaleway Managed Inference
description: Master Retrieval-Augmented Generation (RAG) with LangChain and Scaleway Managed Inference
content:
h1: Step-by-Step Guide Implementing Retrieval-Augmented Generation (RAG) with LangChain and Scaleway Managed Inference
tags: inference managed postgresql pgvector object storage RAG langchain machine learning AI language models
categories:
- inference
---

Retrieval-Augmented Generation (RAG) supercharges language models by enabling real-time retrieval of relevant information from external datasets. This hybrid approach boosts both the accuracy and contextual relevance of model outputs, making it essential for advanced AI applications.

In this comprehensive guide, you'll learn how to implement RAG using LangChain, one of the leading frameworks for developing robust language model applications. We'll combine LangChain with ***Scaleway’s Managed Inference***, ***Scaleway’s PostgreSQL Managed Database*** (featuring pgvector for vector storage), and ***Scaleway’s Object Storage*** for seamless integration and efficient data management.

#### Why LangChain?
LangChain simplifies the process of enhancing language models with retrieval capabilities, allowing developers to build scalable, intelligent applications that access external datasets effortlessly. By leveraging LangChain’s modular design and Scaleway’s cloud services, you can unlock the full potential of Retrieval-Augmented Generation.

#### What You’ll Learn:
- How to embed text using a sentence transformer with ***Scaleway Managed Inference***
- How to store and query embeddings using ***Scaleway’s Managed PostgreSQL Database*** with pgvector
- How to manage large datasets efficiently with ***Scaleway Object Storage***

<Macro id="requirements" />

## Before you start

To complete the actions presented below, you must have:
- A Scaleway account logged into the [console](https://console.scaleway.com)
- [Owner](/identity-and-access-management/iam/concepts/#owner) status or [IAM permissions](/identity-and-access-management/iam/concepts/#permission) allowing you to perform actions in the intended Organization
- A valid [API key](/identity-and-access-management/iam/how-to/create-api-keys/)
- [Inference Deployment](/ai-data/managed-inference/how-to/create-deployment/): Set up an inference deployment using [sentence-transformers/sentence-t5-xxl](/ai-data/managed-inference/reference-content/sentence-t5-xxl/) on an L4 instance to efficiently process embeddings.
- [Inference Deployment](/ai-data/managed-inference/how-to/create-deployment/) with the large language model of your choice.
- [Object Storage Bucket](/storage/object/how-to/create-a-bucket/) to store all the data you want to inject into your LLM model.
- [Managed Database](/managed-databases/postgresql-and-mysql/how-to/create-a-database/) to securely store all your embeddings.

## Configure your development environment

1. Install the required packages by running the following command:

```sh
pip install langchain langchain-openai langchain-postgres psycopg2 python-dotenv boto3
```
2. Configure your environment variables: create a .env file and add the following variables. These will store your API keys, database connection details, and other configuration values.

```sh
# .env file

# Scaleway API credentials
SCW_ACCESS_KEY=your_scaleway_access_key
SCW_API_KEY=your_scaleway_secret_key

# Scaleway managed database (PostgreSQL) credentials
SCW_DB_NAME=your_scaleway_managed_db_name
SCW_DB_USER=your_scaleway_managed_db_username
SCW_DB_PASSWORD=your_scaleway_managed_db_password
SCW_DB_HOST=your_scaleway_managed_db_host # The IP address of your database instance
SCW_DB_PORT=your_scaleway_managed_db_port # The port number for your database instance

# Scaleway Object Storage bucket configuration
SCW_BUCKET_NAME=your_scaleway_bucket_name
SCW_BUCKET_ENDPOINT=your_scaleway_bucket_endpoint # S3 endpoint, e.g., https://s3.fr-par.scw.cloud

# Scaleway Inference API configuration (embeddings)
SCW_INFERENCE_EMBEDDINGS_ENDPOINT=your_scaleway_inference_embeddings_endpoint # Endpoint for your sentence-transformers/sentence-t5-xxl deployment

# Scaleway Inference API configuration (LLM deployment)
SCW_INFERENCE_DEPLOYMENT_ENDPOINT=your_scaleway_inference_endpoint # Endpoint for your LLM deployment
```

### Set Up Managed Database

To perform these actions, you'll need to connect to your PostgreSQL database. You can use any PostgreSQL client, such as psql. The following steps will guide you through setting up your database to handle vector storage and document tracking.

1. Install the pgvector extension
pgvector is essential for storing and indexing high-dimensional vectors, which are critical for retrieval-augmented generation (RAG) systems. Ensure that it is installed by executing the following SQL command:

```sql
CREATE EXTENSION IF NOT EXISTS vector;
```
2. Create a table to track processed documents
To prevent reprocessing documents that have already been loaded and vectorized, you should create a table to keep track of them. This will ensure that new documents added to your object storage bucket are only processed once, avoiding duplicate downloads and redundant vectorization:

```sql
CREATE TABLE IF NOT EXISTS object_loaded (id SERIAL PRIMARY KEY, object_key TEXT);
```

3. Connect to PostgreSQL programmatically using Python
You can also connect to your PostgreSQL instance and perform the same tasks programmatically.

```python
# rag.py file

from dotenv import load_dotenv
import psycopg2
import os

# Load environment variables
load_dotenv()

# Establish connection to PostgreSQL database using environment variables
conn = psycopg2.connect(
database=os.getenv("SCW_DB_NAME"),
user=os.getenv("SCW_DB_USER"),
password=os.getenv("SCW_DB_PASSWORD"),
host=os.getenv("SCW_DB_HOST"),
port=os.getenv("SCW_DB_PORT")
)

# Create a cursor to execute SQL commands
cur = conn.cursor()
```
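
If you prefer to run the setup SQL from Python rather than from a client like psql, here is a minimal sketch using the cursor created above; it simply replays the statements from steps 1 and 2:

```python
# rag.py (optional) - run the setup SQL from steps 1 and 2 programmatically

cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("CREATE TABLE IF NOT EXISTS object_loaded (id SERIAL PRIMARY KEY, object_key TEXT);")
conn.commit()  # persist both changes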



### Set Up Document Loaders for Object Storage

In this section, we will use LangChain to load documents stored in your Scaleway Object Storage bucket. The document loader retrieves the contents of each document for further processing, such as vectorization or embedding generation.

1. Storing Data for RAG
Ensure that all the documents and data you want to inject into your Retrieval-Augmented Generation (RAG) system are stored in this Scaleway Object Storage bucket. These could include text files, PDFs, or any other format that will be processed and vectorized in the following steps.

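If your documents are still on your local machine, one way to push them into the bucket is with boto3 pointed at the Scaleway endpoint. This is an optional sketch, not part of the tutorial's main script: the local `./docs` folder is a placeholder you should adapt to your own data.

```python
# upload_docs.py (optional) - push local files into the bucket; "./docs" is a placeholder path
import os
import boto3
from dotenv import load_dotenv

load_dotenv()

s3 = boto3.client(
    "s3",
    endpoint_url=os.getenv("SCW_BUCKET_ENDPOINT"),
    aws_access_key_id=os.getenv("SCW_ACCESS_KEY"),
    aws_secret_access_key=os.getenv("SCW_API_KEY"),
)

# Upload every file found in the local folder to the bucket, keeping the file name as the object key
for name in os.listdir("./docs"):
    s3.upload_file(os.path.join("./docs", name), os.getenv("SCW_BUCKET_NAME"), name)
```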
2. Import Required Modules
Before setting up the document loader, you need to import the necessary modules from LangChain and other libraries. Here's how to do that:

```python
# rag.py

from langchain.document_loaders import S3DirectoryLoader, S3FileLoader  # S3FileLoader is used in the loading step below
import os
```

3. Set Up the Document Loader
The S3DirectoryLoader class, part of LangChain, is specifically designed to load documents from S3-compatible storage (in this case, Scaleway Object Storage).
Now, let’s configure the document loader to pull files from your Scaleway Object Storage bucket using the appropriate credentials and environment variables:

```python
# rag.py

document_loader = S3DirectoryLoader(
    bucket=os.getenv("SCW_BUCKET_NAME"),
    endpoint_url=os.getenv("SCW_BUCKET_ENDPOINT"),
    aws_access_key_id=os.getenv("SCW_ACCESS_KEY"),
    aws_secret_access_key=os.getenv("SCW_API_KEY")
)
```

This section highlights that you're leveraging LangChain’s document loader capabilities to connect directly to your Scaleway Object Storage. LangChain simplifies the process of integrating external data sources, allowing you to focus on building a RAG system without handling low-level integration details.

### Embeddings and Vector Store Setup
1. Import the required modules
```python
# rag.py

from langchain_openai import OpenAIEmbeddings
from langchain_postgres import PGVector
```

2. We will utilize the OpenAIEmbeddings class from LangChain and store the embeddings in PostgreSQL using the PGVector integration.

```python
# rag.py

embeddings = OpenAIEmbeddings(
    openai_api_key=os.getenv("SCW_API_KEY"),
    openai_api_base=os.getenv("SCW_INFERENCE_EMBEDDINGS_ENDPOINT"),
    model="sentence-transformers/sentence-t5-xxl",
    tiktoken_enabled=False,
)
```

#### Key Parameters:
- openai_api_key: This is your API key for accessing the OpenAI-powered embeddings service, in this case, deployed via Scaleway’s Managed Inference.
- openai_api_base: This is the base URL that points to your deployment of the sentence-transformers/sentence-t5-xxl model on Scaleway's Managed Inference. This URL serves as the entry point to make API calls for generating embeddings.
- model="sentence-transformers/sentence-t5-xxl": This defines the specific model being used for text embeddings. sentence-transformers/sentence-t5-xxl is a powerful model optimized for generating high-quality sentence embeddings, making it ideal for tasks like document retrieval in RAG systems.
- tiktoken_enabled=False: This is an important parameter, which disables the use of TikToken for tokenization within the embeddings process.

#### What is tiktoken_enabled?

tiktoken is a tokenization library developed by OpenAI, which is optimized for working with GPT-based models (like GPT-3.5 or GPT-4). It transforms text into smaller token units that the model can process.

#### Why set tiktoken_enabled=False?

In the context of using Scaleway’s Managed Inference and the sentence-t5-xxl model, TikToken tokenization is not necessary because the model you are using (sentence-transformers) works with raw text and handles its own tokenization internally.
Moreover, leaving tiktoken_enabled as True causes issues when sending data to Scaleway’s API because it results in tokenized vectors being sent instead of raw text. Since Scaleway's endpoint expects text and not pre-tokenized data, this mismatch can lead to errors or incorrect behavior.
By setting tiktoken_enabled=False, you ensure that raw text is sent to Scaleway's Managed Inference endpoint, which is what the sentence-transformers model expects to process. This guarantees that the embedding generation process works smoothly with Scaleway's infrastructure.
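As a quick sanity check that raw text reaches your embeddings endpoint, you can embed a single string and look at the returned vector. This is a minimal sketch; the exact dimensionality depends on your deployment.

```python
# rag.py (optional sanity check) - embed one string and inspect the vector

sample_vector = embeddings.embed_query("Managed Inference handles tokenization on the server side.")
print(len(sample_vector))  # dimensionality of the embedding returned by your deployment
```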

3. Next, configure the connection string for your PostgreSQL instance and create a PGVector store to store these embeddings.

```python
# rag.py

connection_string = f"postgresql+psycopg2://{conn.info.user}:{conn.info.password}@{conn.info.host}:{conn.info.port}/{conn.info.dbname}"
vector_store = PGVector(connection=connection_string, embeddings=embeddings)
```

PGVector: This creates the vector store in your PostgreSQL database to store the embeddings.
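
Once documents have been embedded and stored (next section), you can sanity-check the store with a plain similarity search. A minimal sketch, assuming the store already contains some embeddings:

```python
# rag.py (optional) - verify that the vector store answers similarity queries

results = vector_store.similarity_search("How do I create a database?", k=2)
for doc in results:
    print(doc.page_content[:100])  # first 100 characters of each matching chunk
```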

### Load and Process Documents

Use the S3FileLoader to load documents and split them into chunks. Then, embed and store them in your PostgreSQL database.

1. Lazily load documents: this method efficiently loads and processes documents one by one from Scaleway Object Storage. Instead of loading all documents at once, it loads them lazily, allowing us to inspect each file before deciding whether to embed it.
```python
files = document_loader.lazy_load()
```
#### Why lazy loading?
The key reason for using lazy loading here is to avoid reprocessing documents that have already been embedded. In the context of Retrieval-Augmented Generation (RAG), reprocessing the same document multiple times is redundant and inefficient. Lazy loading enables us to check if a document has already been embedded (by querying the database) before actually loading and embedding it.

```python
# rag.py

from langchain.text_splitter import RecursiveCharacterTextSplitter

text_splitter = RecursiveCharacterTextSplitter(chunk_size=480, chunk_overlap=20)

for file in files:
    # Skip objects that were already embedded in a previous run
    cur.execute("SELECT object_key FROM object_loaded WHERE object_key = %s", (file.metadata["source"],))
    if cur.fetchone() is None:
        fileLoader = S3FileLoader(
            bucket=os.getenv("SCW_BUCKET_NAME"),
            key=file.metadata["source"].split("/")[-1],
            endpoint_url=os.getenv("SCW_BUCKET_ENDPOINT"),
            aws_access_key_id=os.getenv("SCW_ACCESS_KEY"),
            aws_secret_access_key=os.getenv("SCW_API_KEY")
        )
        file_to_load = fileLoader.load()

        # Split the freshly loaded document into chunks and embed each chunk
        chunks = text_splitter.split_text(file_to_load[0].page_content)
        embeddings_list = [embeddings.embed_query(chunk) for chunk in chunks]
        for chunk, embedding in zip(chunks, embeddings_list):
            vector_store.add_embeddings(texts=[chunk], embeddings=[embedding])

        # Record the object key so the next run skips this document
        cur.execute("INSERT INTO object_loaded (object_key) VALUES (%s)", (file.metadata["source"],))
        conn.commit()
```

1. S3FileLoader: The S3FileLoader loads each file individually from your ***Scaleway Object Storage bucket*** using the file's object_key (extracted from the file's metadata). It ensures that only the specific file is loaded from the bucket, minimizing the amount of data being retrieved at any given time.
2. RecursiveCharacterTextSplitter: The RecursiveCharacterTextSplitter breaks each document into smaller chunks of text. This is crucial because embeddings models, like those used in Retrieval-Augmented Generation (RAG), typically have a limited context window (the number of tokens they can process at once).
- Chunk Size: Here, the chunk size is set to 480 characters, with an overlap of 20 characters. The choice of 480 characters is based on the context size supported by the embeddings model. Models have a maximum number of tokens they can process in a single pass, often around 512 tokens or fewer, depending on the specific model you are using. To ensure that each chunk fits within this limit, 480 characters provide a buffer, as different models tokenize characters into variable-length tokens.
- Chunk Overlap: The 20-character overlap ensures continuity between chunks, which helps prevent loss of meaning or context between segments.
3. Embedding the Chunks: For each document, the text is split into smaller chunks using the text splitter, and an embedding is generated for each chunk using the embeddings.embed_query(chunk) function. This function transforms each chunk into a vector representation that can later be used for similarity search.
4. Embedding Storage: After generating the embeddings for each chunk, they are stored in a vector database (e.g., PostgreSQL with pgvector) using the vector_store.add_embeddings() method. Each embedding is stored alongside its corresponding text chunk, enabling retrieval during a query.
5. Avoiding Redundant Processing: The script checks the object_loaded table in PostgreSQL to see if a document has already been processed (i.e., the object_key exists in the table). If it has, the file is skipped, avoiding redundant downloads, vectorization, and database inserts. This ensures that only new or modified documents are processed, reducing the system's computational load and saving both time and resources.

#### Why 480 characters?

The chunk size of 480 characters is chosen to fit comfortably within the context size limits of typical embeddings models, which often range between 512 and 1024 tokens. Since most models tokenize text into smaller units (tokens) based on words, punctuation, and subwords, the exact number of tokens for 480 characters will vary depending on the language and the content. By keeping chunks small, we avoid exceeding the model’s context window, which could lead to truncated embeddings or poor performance during inference.
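
As a small illustration (a standalone sketch with an arbitrary sample string), you can confirm that every chunk produced by the splitter stays within the 480-character budget:

```python
# chunk_demo.py (optional) - check that chunks respect the 480-character budget
from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(chunk_size=480, chunk_overlap=20)
sample_text = "Scaleway Object Storage keeps the documents used by the RAG system. " * 30
chunks = splitter.split_text(sample_text)

print(len(chunks))                          # number of chunks produced
print(max(len(chunk) for chunk in chunks))  # longest chunk, never above 480 characters
```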

This approach ensures that only new or modified documents are loaded into memory and embedded, saving significant computational resources and reducing redundant work.

#### Why store both chunk and embedding?

Storing both the chunk and its corresponding embedding allows for efficient document retrieval later.
When a query is made, the RAG system will retrieve the most relevant embeddings, and the corresponding text chunks will be used to generate the final response.

### Query the RAG System

Now, set up the RAG system to handle queries using RetrievalQA and the LLM.

```python
# rag.py

from langchain.chains import RetrievalQA
from langchain_openai import ChatOpenAI

retriever = vector_store.as_retriever(search_kwargs={"k": 3})
llm = ChatOpenAI(
    base_url=os.getenv("SCW_INFERENCE_DEPLOYMENT_ENDPOINT"),
    api_key=os.getenv("SCW_API_KEY"),
    model="your_llm_model_name",  # replace with the model name of your LLM deployment
)

qa_stuff = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever)

query = "What are the commands to set up a database with the CLI of Scaleway?"
response = qa_stuff.invoke(query)

print(response['result'])
```


### Conclusion

This pipeline is essential for efficiently processing and storing large document datasets for RAG. By using lazy loading, the system handles large datasets without overwhelming memory, while chunking ensures that each document is processed in a way that maximizes the performance of the LLM. The embeddings are stored in PostgreSQL via pgvector, allowing for fast and scalable retrieval when responding to user queries.

By combining Scaleway’s Managed Object Storage, PostgreSQL with pgvector, and LangChain’s embedding tools, you can implement a powerful RAG system that scales with your data and offers robust information retrieval capabilities.