cerbos/cerbos-rag-demo

Requirements

  • Docker (with Compose)
  • Ollama

Setup

  • Grab the embedding model: ollama pull mxbai-embed-large
  • Grab the LLM: ollama pull llama3.1

Startup

  • docker compose up

In the console, the app logs will tell you which port the application is running on, typically http://localhost:3000/

The application is split into sections that demonstrate the data store, the embedding process, the vector store, and finally a full RAG chatbot.
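The vector-store step above boils down to ranking stored document embeddings by similarity to a query embedding. A minimal sketch of that retrieval logic, using toy hand-written vectors in place of real mxbai-embed-large embeddings from Ollama (the document names and vectors here are illustrative, not from the demo):

```python
import math

# Toy stand-in for the embedding step: in the real demo, vectors come from
# the mxbai-embed-large model served by Ollama. Tiny fake vectors keep this
# runnable on its own.
documents = {
    "expense policy": [0.9, 0.1, 0.0],
    "vacation policy": [0.1, 0.9, 0.0],
    "security policy": [0.0, 0.2, 0.9],
}

def cosine_similarity(a, b):
    # Standard cosine similarity: dot product over the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

def retrieve(query_vector, k=1):
    # Rank stored documents by similarity to the query embedding,
    # as a vector store would, and return the top-k matches.
    ranked = sorted(
        documents.items(),
        key=lambda item: cosine_similarity(query_vector, item[1]),
        reverse=True,
    )
    return [name for name, _ in ranked[:k]]

# A query embedding closest to "vacation policy":
print(retrieve([0.2, 0.8, 0.1]))  # → ['vacation policy']
```

In the full chatbot, the top-k documents retrieved this way are injected into the LLM prompt (here, llama3.1) as context for the answer.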
