diff --git a/docs/docs/guides/ecosystem/imgs/mistral_ops.png b/docs/docs/guides/ecosystem/imgs/mistral_ops.png
new file mode 100644
index 00000000000..9997d5c1877
Binary files /dev/null and b/docs/docs/guides/ecosystem/imgs/mistral_ops.png differ
diff --git a/docs/docs/guides/ecosystem/mistral_trace.png b/docs/docs/guides/ecosystem/imgs/mistral_trace.png
similarity index 100%
rename from docs/docs/guides/ecosystem/mistral_trace.png
rename to docs/docs/guides/ecosystem/imgs/mistral_trace.png
diff --git a/docs/docs/guides/ecosystem/mistral.md b/docs/docs/guides/ecosystem/mistral.md
index e3e0df7585b..ed9ca3b4a4a 100644
--- a/docs/docs/guides/ecosystem/mistral.md
+++ b/docs/docs/guides/ecosystem/mistral.md
@@ -1,52 +1,68 @@
 ---
 sidebar_position: 1
-hide_table_of_contents: false
+hide_table_of_contents: true
 ---
 
 # MistralAI
 
-Weave automatically tracks and logs LLM calls made via the [MistralAI Python library](https://github.com/mistralai/client-python), after `weave.init()` is called.
+Weave automatically tracks and logs LLM calls made via the [MistralAI Python library](https://github.com/mistralai/client-python).
 
-## Setup
+## Traces
 
-1. Install the MistralAI Python library:
-   ```bash
-   pip install mistralai weave
-   ```
+It’s important to store traces of LLM applications in a central database, both during development and in production. You’ll use these traces for debugging, and as a dataset that will help you improve your application.
 
-2. Initialize Weave in your Python script:
-   ```python
-   import weave
-   weave.init("cheese_recommender")
-   ```
-   :::note
-   We patch the mistral `chat_completion` method for you to keep track of your LLM calls.
-   :::
+Weave will automatically capture traces for [mistralai](https://github.com/mistralai/client-python). You can use the library as usual; just start by calling `weave.init()`:
 
-3. Use the MistralAI library as usual:
+```python
+import weave
+weave.init("cheese_recommender")
 
-   ```python
-   import os
-   from mistralai.client import MistralClient
-   from mistralai.models.chat_completion import ChatMessage
+# then use mistralai library as usual
+import os
+from mistralai.client import MistralClient
+from mistralai.models.chat_completion import ChatMessage
 
-   api_key = os.environ["MISTRAL_API_KEY"]
-   model = "mistral-large-latest"
+api_key = os.environ["MISTRAL_API_KEY"]
+model = "mistral-large-latest"
 
-   client = MistralClient(api_key=api_key)
+client = MistralClient(api_key=api_key)
 
-   messages = [
-       ChatMessage(role="user", content="What is the best French cheese?")
-   ]
+messages = [
+    ChatMessage(role="user", content="What is the best French cheese?")
+]
+
+chat_response = client.chat(
+    model=model,
+    messages=messages,
+)
+```
+
+Weave will now track and log all LLM calls made through the MistralAI library. You can view the traces in the Weave web interface.
+
+[![mistral_trace.png](imgs/mistral_trace.png)](https://wandb.ai/capecape/mistralai_project/weave/calls)
+
+## Wrapping with your own ops
+
+Weave ops make results *reproducible* by automatically versioning code as you experiment, and they capture their inputs and outputs. Simply create a function decorated with [`@weave.op()`](https://wandb.github.io/weave/guides/tracking/ops) that calls into [`mistralai.client.MistralClient.chat()`](https://docs.mistral.ai/capabilities/completion/), and Weave will track the inputs and outputs for you. Let's see how we can do this for our cheese recommender:
+
+```python
+# highlight-next-line
+@weave.op()
+def cheese_recommender(region: str, model: str) -> str:
+    "Recommend the best cheese in a given region"
+
+    messages = [ChatMessage(role="user", content=f"What is the best cheese in {region}?")]
 
     chat_response = client.chat(
         model=model,
         messages=messages,
     )
+    return chat_response.choices[0].message.content
 
-    print(chat_response.choices[0].message.content)
-    ```
+cheese_recommender(region="France", model="mistral-large-latest")
+cheese_recommender(region="Spain", model="mistral-large-latest")
+cheese_recommender(region="Netherlands", model="mistral-large-latest")
+```
 
-Weave will now track and log all LLM calls made through the MistralAI library. You can view the logs and insights in the Weave web interface.
+[![mistral_ops.png](imgs/mistral_ops.png)](https://wandb.ai/capecape/mistralai_project/weave/calls)
 
-[![mistral_trace.png](mistral_trace.png)](https://wandb.ai/capecape/mistralai_project/weave/calls)
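A possible follow-up to the "Wrapping with your own ops" section added above: `@weave.op()` functions compose, so an op that calls `cheese_recommender` shows up as a parent with nested child calls in the trace view. The sketch below is not part of this diff; it assumes the `client`, `ChatMessage`, and `cheese_recommender` definitions from the snippets above, and `compare_regions` is a hypothetical helper name.

```python
import weave

weave.init("cheese_recommender")

# highlight-next-line
@weave.op()
def compare_regions(regions: list[str], model: str) -> dict:
    "Recommend a cheese per region; each inner call is traced under this op."
    # cheese_recommender is the @weave.op() defined in the doc snippet above;
    # calling one op from another produces a nested call tree in the Weave UI.
    return {region: cheese_recommender(region=region, model=model) for region in regions}

compare_regions(regions=["France", "Spain", "Netherlands"], model="mistral-large-latest")
```

The nesting needs no extra configuration: any op (or patched `client.chat()` call) invoked from inside another op is recorded as a child call, and keeping `region` and `model` as op inputs makes the captured calls easy to filter and compare later.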