Open-source monitoring & observability for AI apps and agents
LLMonitor helps AI devs monitor their apps in production, with features such as:
- 💵 Cost, token & latency analytics
- 🔍 Log & edit prompts
- 🐛 Trace agents/chains to debug easily
- 👪 Track users
- 🏷️ Label and export fine-tuning datasets
- 🖲️ Collect feedback from users
- 🧪 Unit tests & prompt evaluations (soon)
It is also designed to be:
- 🤖 Usable with any model, not just OpenAI
- 📦 Easy to integrate (2 minutes)
- 🧑‍💻 Simple to self-host (deploy to Vercel & Supabase)
Demo video: demo720.mp4
LLMonitor natively supports:
- LangChain (JS & Python)
- OpenAI
- LiteLLM
Additionally, you can use it with any other framework by wrapping the relevant methods yourself.
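As a rough illustration of the wrapping approach, here is a minimal, framework-agnostic sketch in Python. Note that `track` and the `report_event` callback are hypothetical names invented for this example, not part of the LLMonitor SDK; in practice the callback would POST the event to your LLMonitor instance.

```python
import functools
import time

def track(report_event):
    """Decorator factory: wraps any LLM call and passes latency,
    call name, and error info to a reporting callback.
    (Illustrative only; not the actual LLMonitor API.)"""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            error = None
            try:
                return fn(*args, **kwargs)
            except Exception as exc:
                error = repr(exc)
                raise
            finally:
                # Report the event whether the call succeeded or failed.
                report_event({
                    "name": fn.__name__,
                    "latency_ms": round((time.perf_counter() - start) * 1000, 2),
                    "error": error,
                })
        return wrapper
    return decorator

# Usage: collect events in a list here instead of sending them over HTTP.
events = []

@track(events.append)
def fake_completion(prompt):
    # Stand-in for a real model call from any framework.
    return {"text": "hello", "usage": {"total_tokens": 5}}

fake_completion("Hi")
```

The same pattern works for any client library: wrap the completion method once, and every call is timed and logged without touching call sites.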
Full documentation is available on the website.
We offer a hosted version with a free plan of up to 1k requests per day.
With the hosted version:
- 👷 don't worry about DevOps or managing updates
- 🙋 get priority 1:1 support with our team
- 🇪🇺 your data is stored safely in Europe
Need help or have questions? Chat with us on Discord or email one of the founders: vince [at] llmonitor.com. We're here to support you every step of the way.
This project is licensed under the Apache 2.0 License.