Mistral + Haystack Collection: build RAG pipelines that rock 🤘
Collection of notebooks and resources to build Retrieval Augmented Generation pipelines using Mistral models and the Haystack LLM framework. A minimal pipeline sketch follows the table below.
💻 For other great Haystack Notebooks, check out the 👩🏻‍🍳 Haystack Cookbook
Model | Haystack version | Link | Details | Author |
---|---|---|---|---|
Mistral-7B-Instruct-v0.1 | 1.x | 🎸 Notebook | RAG on a collection of Rock music resources, using the free Hugging Face Inference API | @anakin87 |
Mixtral-8x7B-Instruct-v0.1 | 1.x | 📄🔎 Notebook | RAG on a PDF file, using the free Hugging Face Inference API | @AlessandroDiLauro |
Mixtral-8x7B-Instruct-v0.1 | 1.x | 📊 Notebook 📝 Blog post | RAG from CSV, product description analysis | @AlessandroDiLauro |
Mixtral-8x7B-Instruct-v0.1 | 2.x | 🕸️💬 Notebook | RAG on the Web, using the free Hugging Face Inference API | @TuanaCelik |
Zephyr-7B Beta | 2.x | 💪 Article and notebook | Article on how to make this great model (fine-tuned from Mistral) run locally on Colab | @TuanaCelik @anakin87 |
Mixtral-8x7B-Instruct-v0.1 | 2.x | 🩺💬 Article and notebook | Healthcare chatbot with Mixtral, Haystack, and PubMed | @annthurium |
Mixtral-8x7B-Instruct-v0.1 | 2.x | 🇮🇹🇬🇧🎧 Notebook | Multilingual RAG from a podcast | @anakin87 |
Mixtral-8x7B-Instruct-v0.1 | 2.x | 📰 Notebook | Building a Hacker News Top Stories TL;DR | @TuanaCelik |
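
Most of these notebooks share the same basic recipe: retrieve relevant documents, stuff them into a prompt, and generate an answer with a Mistral model. Here is a minimal sketch of that recipe with Haystack 2.x and the free Hugging Face Inference API. It assumes `haystack-ai` is installed and an `HF_API_TOKEN` environment variable is set; the sample documents and prompt are made up for illustration.

```python
from haystack import Document, Pipeline
from haystack.components.builders import PromptBuilder
from haystack.components.generators import HuggingFaceAPIGenerator
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack.document_stores.in_memory import InMemoryDocumentStore

# Illustrative documents -- replace with your own data (PDFs, web pages, CSVs...)
document_store = InMemoryDocumentStore()
document_store.write_documents([
    Document(content="Mixtral-8x7B is a sparse Mixture of Experts model by Mistral AI."),
    Document(content="Haystack is an open-source framework for building LLM applications."),
])

template = """Answer the question based only on the given context.

Context:
{% for doc in documents %}
{{ doc.content }}
{% endfor %}

Question: {{ question }}
Answer:"""

pipe = Pipeline()
pipe.add_component("retriever", InMemoryBM25Retriever(document_store=document_store))
pipe.add_component("prompt_builder", PromptBuilder(template=template))
pipe.add_component("llm", HuggingFaceAPIGenerator(
    api_type="serverless_inference_api",  # free Inference API; reads HF_API_TOKEN
    api_params={"model": "mistralai/Mixtral-8x7B-Instruct-v0.1"},
))
pipe.connect("retriever.documents", "prompt_builder.documents")
pipe.connect("prompt_builder.prompt", "llm.prompt")

question = "What kind of model is Mixtral?"
result = pipe.run({"retriever": {"query": question}, "prompt_builder": {"question": question}})
print(result["llm"]["replies"][0])
```

The notebooks in the table add the interesting parts on top of this skeleton: different data sources, converters, retrievers, and prompts.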
Useful resources on related topics:
- Great and deep blog post by Hugging Face on the MoE architecture, which is the basis of Mixtral 8x7B (a toy routing sketch closes this README).
- Zephyr: Direct Distillation of LM Alignment. Technical report by the Hugging Face H4 team, explaining how they trained Zephyr, a strong 7B model fine-tuned from Mistral. Its central question is: ⚗️ how do you effectively distill the capabilities of GPT-4 into smaller models? The report is insightful and well worth reading. I have summarized it here.
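
To make the MoE post above more concrete, here is a toy sketch of its core idea, sparse top-k expert routing: a small router scores the experts for each token and only the best k of them actually run. Everything here (sizes, weights, names) is illustrative, not Mixtral's real implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2  # toy sizes, far smaller than Mixtral's

# Each "expert" is a toy linear layer; in a real MoE they are full FFN blocks.
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
router = rng.normal(size=(d_model, n_experts))  # gating weights

def moe_layer(x: np.ndarray) -> np.ndarray:
    logits = x @ router                # score every expert for this token
    top = np.argsort(logits)[-top_k:]  # keep only the top-k experts
    gate = np.exp(logits[top])
    gate /= gate.sum()                 # softmax over the selected experts only
    # Only the selected experts compute anything: sparse activation.
    return sum(g * (x @ experts[i]) for g, i in zip(gate, top))

token = rng.normal(size=d_model)
print(moe_layer(token).shape)  # (8,) -- same shape out, but only 2 of 4 experts ran
```

This is why Mixtral can hold 8x7B expert parameters yet run at roughly the cost of a much smaller dense model: per token, its router activates only 2 of the 8 experts.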