Commit

fix title
Laure-di committed Sep 27, 2024
1 parent d9f4ef2 commit abc4837
Showing 1 changed file with 14 additions and 6 deletions.
tutorials/how-to-implement-rag/index.mdx (14 additions, 6 deletions)
@@ -1,17 +1,25 @@
---
meta:
-  title: How to implement RAG with managed inference
-  description: Learn how to implement Retrieval-Augmented Generation (RAG) using Scaleway's managed inference, PostgreSQL, pgvector, and object storage.
+  title: "Step-by-Step Guide: Implementing Retrieval-Augmented Generation (RAG) with LangChain and Scaleway Managed Inference"
+  description: Master Retrieval-Augmented Generation (RAG) with LangChain and Scaleway Managed Inference. Learn how to integrate Scaleway Managed PostgreSQL Database, pgvector, and Scaleway Object Storage for seamless data retrieval and embedding storage.
content:
-  h1: How to implement RAG with managed inference
-tags: inference, managed, postgresql, pgvector, object storage, RAG
+  h1: "Step-by-Step Guide: Implementing Retrieval-Augmented Generation (RAG) with LangChain and Scaleway Managed Inference"
+tags: inference, managed, postgresql, pgvector, object storage, RAG, langchain, machine learning, AI, language models
categories:
- inference
---

-Retrieval-Augmented Generation (RAG) enhances the power of language models by enabling them to retrieve relevant information from external datasets. In this tutorial, we’ll implement RAG using Scaleway’s Managed Inference, PostgreSQL, pgvector, and Scaleway’s Object Storage.
+Retrieval-Augmented Generation (RAG) supercharges language models by enabling real-time retrieval of relevant information from external datasets. This hybrid approach boosts both the accuracy and contextual relevance of model outputs, making it essential for advanced AI applications.

-With Scaleway's fully managed services, integrating RAG becomes a streamlined process. You'll use a sentence transformer for embedding text, store embeddings in a PostgreSQL database with pgvector, and leverage object storage for scalable data management.
+In this comprehensive guide, you'll learn how to implement RAG using LangChain, one of the leading frameworks for developing robust language model applications. We'll combine LangChain with ***Scaleway’s Managed Inference***, ***Scaleway’s Managed PostgreSQL Database*** (featuring pgvector for vector storage), and ***Scaleway’s Object Storage*** for seamless integration and efficient data management.

+***Why LangChain?***
+LangChain simplifies the process of enhancing language models with retrieval capabilities, allowing developers to build scalable, intelligent applications that access external datasets effortlessly. By leveraging LangChain’s modular design and Scaleway’s cloud services, you can unlock the full potential of Retrieval-Augmented Generation.

+***What You’ll Learn:***
+- How to embed text using a sentence transformer with ***Scaleway Managed Inference***
+- How to store and query embeddings using ***Scaleway’s Managed PostgreSQL Database*** with pgvector
+- How to manage large datasets efficiently with ***Scaleway Object Storage***
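
To make the workflow concrete, here is a minimal sketch of the flow outlined above (not part of the committed tutorial). It assumes the Managed Inference deployment exposes an OpenAI-compatible embeddings endpoint and uses LangChain's community `PGVector` store; the endpoint URL, model name, API key, and connection string are placeholders rather than values from the tutorial.

```python
# Minimal sketch of the RAG ingestion/retrieval flow described above.
# Assumptions: the Managed Inference deployment exposes an OpenAI-compatible
# /v1 endpoint, and the pgvector extension is available on the Managed
# PostgreSQL instance. All URLs, keys, model names, and connection details
# below are placeholders.
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import PGVector
from langchain_core.documents import Document

# Embeddings served by a Scaleway Managed Inference deployment (placeholder endpoint/model).
embeddings = OpenAIEmbeddings(
    openai_api_key="<SCW_SECRET_KEY>",                      # placeholder credential
    openai_api_base="https://<your-inference-endpoint>/v1",  # placeholder endpoint URL
    model="<embedding-model-name>",                          # placeholder model name
    tiktoken_enabled=False,  # the endpoint is OpenAI-compatible, not an OpenAI model
)

# Managed PostgreSQL connection string (placeholder).
connection_string = "postgresql+psycopg2://<user>:<password>@<db-host>:5432/<db-name>"

# Embed a document and store the resulting vector in PostgreSQL via pgvector.
vector_store = PGVector.from_documents(
    documents=[Document(page_content="Scaleway Managed Inference serves embedding models.")],
    embedding=embeddings,
    connection_string=connection_string,
    collection_name="rag_demo",
)

# Retrieve the chunks whose embeddings are closest to the question's embedding:
# this is the retrieval half of RAG.
for doc in vector_store.similarity_search("What does Managed Inference serve?", k=3):
    print(doc.page_content)
```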

<Macro id="requirements" />
