LangChain simplifies the process of enhancing language models with retrieval capabilities.

## Configure your development environment

### Step 1: Install Required Packages

Run the following command to install the required packages:

```sh
pip install langchain langchain-openai langchain-postgres langchain-community boto3 psycopg2 python-dotenv
```
### Step 2: Create a .env File

Create a .env file and add the following variables. These will store your API keys, database connection details, and other configuration values.

```sh
# .env file

# Scaleway credentials and Object Storage configuration
SCW_ACCESS_KEY=your-access-key
SCW_SECRET_KEY=your-secret-key
SCW_DEFAULT_REGION=fr-par
SCW_BUCKET_NAME=your-bucket-name
SCW_BUCKET_ENDPOINT=https://s3.fr-par.scw.cloud

# Managed Inference deployment endpoint
SCW_INFERENCE_DEPLOYMENT_ENDPOINT=your-inference-endpoint-url

# Managed PostgreSQL connection details
# (variable names are examples; keep them consistent with the ones read in rag.py)
SCW_DB_USER=your-db-user
SCW_DB_PASSWORD=your-db-password
SCW_DB_NAME=your-db-name
SCW_DB_HOST=your-db-host
SCW_DB_PORT=5432
```

## Setting Up Managed Databases

### Step 1: Connect to Your PostgreSQL Database

To perform these actions, you'll need to connect to your PostgreSQL database. You can use any PostgreSQL client, such as psql. The following steps will guide you through setting up your database to handle vector storage and document tracking.

### Step 2: Install the pgvector Extension

pgvector is essential for storing and indexing high-dimensional vectors, which are critical for retrieval-augmented generation (RAG) systems. Ensure that it is installed by executing the following SQL command:

```sql
CREATE EXTENSION IF NOT EXISTS vector;
```
### Step 3: Create a Table to Track Processed Documents

To prevent reprocessing documents that have already been loaded and vectorized, you should create a table to keep track of them. This will ensure that new documents added to your object storage bucket are only processed once, avoiding duplicate downloads and redundant vectorization:

```sql
CREATE TABLE IF NOT EXISTS object_loaded (id SERIAL PRIMARY KEY, object_key TEXT);
```

### Step 4: Connect to PostgreSQL Programmatically

Connect to your PostgreSQL instance and perform tasks programmatically.

```python
# rag.py file

import os
import psycopg2
from dotenv import load_dotenv

# Load the variables defined in the .env file
load_dotenv()

# Variable names below are examples; use the ones defined in your .env file
conn = psycopg2.connect(
    database=os.getenv("SCW_DB_NAME"),
    user=os.getenv("SCW_DB_USER"),
    password=os.getenv("SCW_DB_PASSWORD"),
    host=os.getenv("SCW_DB_HOST"),
    port=os.getenv("SCW_DB_PORT"),
)

cur = conn.cursor()
```
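
If you prefer to keep the whole setup in Python, you can also run the SQL from the previous steps through this same connection. The snippet below is a minimal sketch that reuses the `conn` and `cur` objects created above:

```python
# rag.py (optional): run the setup SQL from Python instead of a psql client
cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS object_loaded (
        id SERIAL PRIMARY KEY,
        object_key TEXT
    );
""")
conn.commit()
```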

## Embeddings and Vector Store Setup

### Step 1: Import Required Modules

```python
# rag.py

from langchain_openai import OpenAIEmbeddings
from langchain_postgres import PGVector
```

### Step 2: Configure OpenAI Embeddings

We will utilize the OpenAIEmbeddings class from LangChain and store the embeddings in PostgreSQL using the PGVector integration.

```python
# rag.py

# Configure the embeddings client against your Managed Inference deployment
# (a sketch: the endpoint variable name and model below should match your deployment)
embeddings = OpenAIEmbeddings(
    base_url=os.getenv("SCW_INFERENCE_DEPLOYMENT_ENDPOINT"),
    api_key=os.getenv("SCW_SECRET_KEY"),
    model="sentence-t5-xxl",
    tiktoken_enabled=False,
)
```

In the context of using Scaleway’s Managed Inference and the sentence-t5-xxl model, the tiktoken_enabled parameter is set to False because the embeddings are generated by a sentence-transformers model, not by an OpenAI model that relies on the tiktoken tokenizer.
Moreover, leaving tiktoken_enabled as True causes issues when sending data to Scaleway’s API because it results in tokenized vectors being sent instead of raw text. Since Scaleway's endpoint expects text and not pre-tokenized data, this mismatch can lead to errors or incorrect behavior.
By setting tiktoken_enabled=False, you ensure that raw text is sent to Scaleway's Managed Inference endpoint, which is what the sentence-transformers model expects to process. This guarantees that the embedding generation process works smoothly with Scaleway's infrastructure.

### Step 3: Create a PGVector Store

Configure the connection string for your PostgreSQL instance and create a PGVector store to store these embeddings.

```python
# rag.py

connection_string = f"postgresql+psycopg2://{conn.info.user}:{conn.info.password}@{conn.info.host}:{conn.info.port}/{conn.info.dbname}"
vector_store = PGVector(connection=connection_string, embeddings=embeddings)
```

PGVector: This creates the vector store in your PostgreSQL database to store the embeddings.
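
At this point, you can optionally sanity-check the vector store before loading any real documents. The snippet below is a minimal sketch (not part of the main flow) that assumes the `vector_store` object created above; it writes a throwaway text and queries it back by similarity.

```python
# rag.py (optional sanity check)
vector_store.add_texts(["Vector store connectivity test"])
results = vector_store.similarity_search("connectivity test", k=1)
print(results[0].page_content)
```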

## Load and Process Documents

Use the S3FileLoader to load documents and split them into chunks. Then, embed and store them in your PostgreSQL database.

### Step 1: Load Metadata for Improved Efficiency

By loading the metadata for all objects in your bucket, you can speed up the process significantly: it lets you quickly check whether a document has already been embedded without downloading the entire document.

```python
# rag.py

import boto3

endpoint_s3 = f"https://s3.{os.getenv('SCW_DEFAULT_REGION', '')}.scw.cloud"
session = boto3.session.Session()
client_s3 = session.client(service_name='s3', endpoint_url=endpoint_s3,
                           aws_access_key_id=os.getenv("SCW_ACCESS_KEY", ""),
                           aws_secret_access_key=os.getenv("SCW_SECRET_KEY", ""))

# List the bucket contents page by page
# (BUCKET_NAME is assumed to come from the SCW_BUCKET_NAME environment variable)
BUCKET_NAME = os.getenv("SCW_BUCKET_NAME", "")
paginator = client_s3.get_paginator('list_objects_v2')
page_iterator = paginator.paginate(Bucket=BUCKET_NAME)
```

In this code sample, we:

- Create the S3 client: We configure a boto3 client that points to the Scaleway Object Storage endpoint using your credentials.
- Set Up Pagination for Listing Objects: We prepare pagination to handle potentially large lists of objects efficiently.
- Iterate Through the Bucket: This initiates the pagination process, allowing us to list all objects within the specified Scaleway Object bucket seamlessly.

### Step 2: Iterate Through Metadata

Next, iterate through the metadata to determine whether each object has already been embedded. If an object has not been processed yet, embed it and load it into the database.

```python
# rag.py

import logging

from langchain_community.document_loaders import S3FileLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings

logger = logging.getLogger(__name__)

text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0, add_start_index=True, length_function=len, is_separator_regex=False)

for page in page_iterator:
    for obj in page.get('Contents', []):
        # Skip objects that have already been embedded
        cur.execute("SELECT object_key FROM object_loaded WHERE object_key = %s", (obj['Key'],))
        response = cur.fetchone()
        if response is None:
            file_loader = S3FileLoader(
                bucket=BUCKET_NAME,
                key=obj['Key'],
                endpoint_url=endpoint_s3,
                aws_access_key_id=os.getenv("SCW_ACCESS_KEY", ""),
                aws_secret_access_key=os.getenv("SCW_SECRET_KEY", "")
            )
            file_to_load = file_loader.load()
            chunks = text_splitter.split_text(file_to_load[0].page_content)
            try:
                embeddings_list = [embeddings.embed_query(chunk) for chunk in chunks]
                vector_store.add_embeddings(chunks, embeddings_list)
                # Record the object as processed only once its embeddings are stored
                cur.execute("INSERT INTO object_loaded (object_key) VALUES (%s)",
                            (obj['Key'],))
            except Exception as e:
                logger.error(f"An error occurred: {e}")

conn.commit()
```

- S3FileLoader: The S3FileLoader loads each file individually from your ***Scaleway Object Storage bucket*** using the file's object_key (extracted from the file's metadata). It ensures that only the specific file is loaded from the bucket, minimizing the amount of data being retrieved at any given time.
## Query the RAG System

When a query is made, the RAG system retrieves the most relevant embeddings and passes the corresponding context to the LLM so it can generate an accurate, grounded answer.

Now, set up the RAG system to handle queries:

```python
import time

from langchain import hub
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

# `deployment.model_name` assumes you retrieved your Managed Inference deployment
# object earlier via the Scaleway SDK; you can also hard-code the model name here.
llm = ChatOpenAI(
    base_url=os.getenv("SCW_INFERENCE_DEPLOYMENT_ENDPOINT"),
    api_key=os.getenv("SCW_SECRET_KEY"),
    model=deployment.model_name,
)

prompt = hub.pull("rlm/rag-prompt")
retriever = vector_store.as_retriever()

# Compose the retrieval chain: fetch context, fill the prompt, call the LLM, parse to text
# (a typical LCEL composition; adjust it to your needs)
rag_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

for r in rag_chain.stream("Your question"):
    print(r, end="", flush=True)
    time.sleep(0.1)
```
- LLM Initialization: We initialize the ChatOpenAI instance using the endpoint and API key from the environment variables, along with the specified model name.


### Query the RAG system with your own prompt template

Personalizing your prompt template allows you to tailor the responses from your RAG (Retrieval-Augmented Generation) system to better fit your specific needs. This can significantly improve the relevance and tone of the answers you receive. Below is a detailed guide on how to create a custom prompt for querying the system.

```python
# rag.py

from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url=os.getenv("SCW_INFERENCE_DEPLOYMENT_ENDPOINT"),
    api_key=os.getenv("SCW_SECRET_KEY"),
    model=deployment.model_name,
)

prompt = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer. Always finish your answer by "Thank you for asking". {context} Question: {question} Helpful Answer:"""
custom_rag_prompt = PromptTemplate.from_template(prompt)
retriever = vector_store.as_retriever()
custom_rag_chain = create_stuff_documents_chain(llm, custom_rag_prompt)

# Replace this with the question you want to ask
new_message = "Your question"

context = retriever.invoke(new_message)
for r in custom_rag_chain.stream({"question": new_message, "context": context}):
    print(r, end="", flush=True)
    time.sleep(0.1)
```

- Prompt Template: The prompt template is designed to guide the model's responses. It instructs the model on how to utilize the provided context and how to handle situations where it lacks information. For a more engaging touch, you might consider adding a light-hearted conclusion, making the responses feel more personable.
- Retrieving Context: The retriever.invoke(new_message) method fetches relevant information from your vector store based on the user’s query. It’s crucial that this step retrieves high-quality context to ensure the model's responses are accurate and helpful.
- Creating the RAG Chain: The create_stuff_documents_chain function links the language model with the custom prompt. This integration allows the model to process the retrieved context effectively and respond appropriately to the question.
- Streaming Responses: The loop that streams responses from the custom_rag_chain provides a dynamic way to output answers. This can enhance user experience by allowing them to see responses as they are generated, rather than waiting for the entire output at once.
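
If you do not need token-by-token streaming, the same chain can also be called with `invoke()` to return the full answer in one go. The sketch below assumes the `retriever`, `custom_rag_chain`, and `new_message` objects defined above:

```python
# Non-streaming alternative: get the complete answer in a single call
context = retriever.invoke(new_message)
answer = custom_rag_chain.invoke({"question": new_message, "context": context})
print(answer)
```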

## Conclusion

In this tutorial, we explored essential techniques for efficiently processing and storing large document datasets within a Retrieval-Augmented Generation (RAG) system. By leveraging metadata, we ensured that our system avoids redundant data handling, allowing for smooth and efficient operations. The use of chunking optimizes document processing, maximizing the performance of the language model. Storing embeddings in PostgreSQL via pgvector enables rapid and scalable retrieval, ensuring quick responses to user queries.

Furthermore, you can continually enhance your RAG system by implementing mechanisms to retain chat history. Keeping track of past interactions allows for more contextually aware responses, fostering a more engaging user experience. This historical data can be used to refine your prompts, adapt to user preferences, and improve the overall accuracy of responses.


By integrating Scaleway’s Managed Object Storage, PostgreSQL with pgvector, and LangChain’s embedding tools, you can build a powerful RAG system that scales with your data while offering robust information retrieval capabilities. This approach equips you with the tools necessary to handle complex queries and deliver accurate, relevant results efficiently.
With ongoing refinement and adaptation, your RAG system can evolve to meet the changing needs of your users, ensuring that it remains a valuable asset in your AI toolkit.
