- In this project I built an end-to-end advanced RAG system using an open-source LLM (Mistral's Mixtral model) served through the Groq inference engine.
- You can try the project live here.
- The project showcases an advanced RAG pipeline that uses a Groq-hosted LLM to answer questions about LangSmith.
Steps I followed:
- Used the `WebBaseLoader` from the `langchain_community` document loaders to load the data from the https://docs.smith.langchain.com/ webpage.
- Split the text into chunks of 1000 characters using the `RecursiveCharacterTextSplitter` imported from `langchain.text_splitter`.
- Stored the vector embeddings, created with `HuggingFaceInstructEmbeddings`, in a `FAISS` vector store.
- Set up the LLM `ChatGroq` with the model name `mixtral-8x7b-32768`.
- Set up a `ChatPromptTemplate`.
- Finally, created the `document_chain` and `retrieval_chain`, chaining the LLM to the prompt and the `retriever` to the `document_chain`, respectively.
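The steps above can be sketched as a single build function. This is a minimal sketch, not the project's actual code: the `chunk_overlap` value and the prompt wording are assumptions, and running it requires the pinned packages, network access, and a `GROQ_API_KEY` in the environment.

```python
def build_retrieval_chain():
    """Build the LangSmith-docs RAG chain described in the steps above."""
    # Imports are local so the sketch can be read without the packages installed.
    from langchain_community.document_loaders import WebBaseLoader
    from langchain.text_splitter import RecursiveCharacterTextSplitter
    from langchain_community.embeddings import HuggingFaceInstructEmbeddings
    from langchain_community.vectorstores import FAISS
    from langchain_groq import ChatGroq
    from langchain_core.prompts import ChatPromptTemplate
    from langchain.chains.combine_documents import create_stuff_documents_chain
    from langchain.chains import create_retrieval_chain

    # 1. Load the LangSmith documentation page.
    docs = WebBaseLoader("https://docs.smith.langchain.com/").load()

    # 2. Split into 1000-character chunks (the overlap of 200 is an assumption).
    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
    chunks = splitter.split_documents(docs)

    # 3. Embed the chunks and index them in a FAISS vector store.
    vectorstore = FAISS.from_documents(chunks, HuggingFaceInstructEmbeddings())

    # 4. LLM: Mixtral 8x7B served by Groq (reads GROQ_API_KEY from the env).
    llm = ChatGroq(model_name="mixtral-8x7b-32768")

    # 5. Prompt template; {context} receives the retrieved chunks.
    prompt = ChatPromptTemplate.from_template(
        "Answer the question based only on the provided context.\n"
        "<context>\n{context}\n</context>\n"
        "Question: {input}"
    )

    # 6. document_chain stuffs the retrieved docs into the prompt for the LLM;
    #    retrieval_chain wires the FAISS retriever to the document_chain.
    document_chain = create_stuff_documents_chain(llm, prompt)
    return create_retrieval_chain(vectorstore.as_retriever(), document_chain)

# Usage (requires the pinned packages, network access, and GROQ_API_KEY):
#   chain = build_retrieval_chain()
#   print(chain.invoke({"input": "What is LangSmith?"})["answer"])
```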
- langchain==0.1.20
- langchain-community==0.0.38
- langchain-core==0.1.52
- langchain-groq==0.1.3
- faiss-cpu==1.8.0
- python-dotenv
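The `python-dotenv` dependency suggests the Groq API key is loaded from a `.env` file in the project root. A minimal example, using `GROQ_API_KEY` (the environment variable `langchain-groq` reads by default):

```env
GROQ_API_KEY=your-groq-api-key
```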
- Prerequisites
- Git
- Python 3
- Command line familiarity
- Clone the repository:
  ```bash
  git clone https://github.com/NebeyouMusie/End-To-End-Advanced-RAG-Project-using-Open-Source-LLM-Models-And-Groq-Inferencing.git
  ```
- Create and activate a virtual environment (recommended):
  ```bash
  python -m venv venv
  source venv/bin/activate
  ```
- Navigate to the project directory using your terminal:
  ```bash
  cd ./End-To-End-Advanced-RAG-Project-using-Open-Source-LLM-Models-And-Groq-Inferencing
  ```
- Install the libraries:
  ```bash
  pip install -r requirements.txt
  ```
- Run the app:
  ```bash
  streamlit run app.py
  ```
- Open the link displayed in the terminal in your preferred browser.
- Collaboration is welcome ❤️
- I would like to thank Krish Naik
- LinkedIn: Nebeyou Musie
- Gmail: [email protected]
- Telegram: Nebeyou Musie