docs: Mistral docs v2 (#1674)
* add weave.op

* refactor in Shawn's style

* add images to folder

* remove table of contents

* highlight weave decorator

---------

Co-authored-by: Jason Zhao <[email protected]>
tcapelle and jlzhao27 authored May 24, 2024
1 parent e16040c commit 217348d
Showing 3 changed files with 46 additions and 30 deletions.
Binary file added docs/docs/guides/ecosystem/imgs/mistral_ops.png
76 changes: 46 additions & 30 deletions docs/docs/guides/ecosystem/mistral.md
@@ -1,52 +1,68 @@
---
sidebar_position: 1
hide_table_of_contents: true
---

# MistralAI

Weave automatically tracks and logs LLM calls made via the [MistralAI Python library](https://github.com/mistralai/client-python).

## Traces

It's important to store traces of LLM applications in a central database, both during development and in production. You'll use these traces for debugging and as a dataset that will help you improve your application.

Start by installing the MistralAI Python library alongside Weave:

```bash
pip install mistralai weave
```

:::note
We patch the Mistral `chat_completion` method for you to keep track of your LLM calls.
:::
Weave will automatically capture traces for [mistralai](https://github.com/mistralai/client-python). You can use the library as usual; just start by calling `weave.init()`:

```python
import weave
weave.init("cheese_recommender")

# then use the mistralai library as usual
import os
from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage

api_key = os.environ["MISTRAL_API_KEY"]
model = "mistral-large-latest"

client = MistralClient(api_key=api_key)

messages = [
    ChatMessage(role="user", content="What is the best French cheese?")
]

chat_response = client.chat(
    model=model,
    messages=messages,
)
print(chat_response.choices[0].message.content)
```

Weave will now track and log all LLM calls made through the MistralAI library. You can view the traces in the Weave web interface.

[![mistral_trace.png](imgs/mistral_trace.png)](https://wandb.ai/capecape/mistralai_project/weave/calls)
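If you stream completions instead, the pattern is the same. Below is a minimal sketch using the client's `chat_stream` method; whether streamed calls are captured exactly like regular `chat` calls may depend on your Weave version, so treat it as an illustration rather than a guarantee:

```python
# A sketch of streaming with the same client. It assumes `chat_stream` is
# available in your installed mistralai version; whether streamed calls are
# logged exactly like `chat` calls may depend on your Weave version.
stream = client.chat_stream(
    model=model,
    messages=[ChatMessage(role="user", content="Describe Camembert in one sentence.")],
)

for chunk in stream:
    # each chunk carries an incremental piece of the reply
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="")
print()
```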

## Wrapping with your own ops

Weave ops make results *reproducible* by automatically versioning code as you experiment, and they capture their inputs and outputs. Simply create a function decorated with [`@weave.op()`](https://wandb.github.io/weave/guides/tracking/ops) that calls into [`mistralai.client.MistralClient.chat()`](https://docs.mistral.ai/capabilities/completion/) and Weave will track the inputs and outputs for you. Let's see how we can do this for our cheese recommender:

```python
# highlight-next-line
@weave.op()
def cheese_recommender(region: str, model: str) -> str:
    "Recommend the best cheese in a given region"

    messages = [ChatMessage(role="user", content=f"What is the best cheese in {region}?")]

    chat_response = client.chat(
        model=model,
        messages=messages,
    )
    return chat_response.choices[0].message.content

cheese_recommender(region="France", model="mistral-large-latest")
cheese_recommender(region="Spain", model="mistral-large-latest")
cheese_recommender(region="Netherlands", model="mistral-large-latest")
```

Weave will now track and log the `cheese_recommender` op along with the underlying LLM calls. You can view the logs and insights in the Weave web interface.
[![mistral_ops.png](imgs/mistral_ops.png)](https://wandb.ai/capecape/mistralai_project/weave/calls)
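Ops can also call other ops, and the calls should appear nested in the resulting trace. As a quick sketch (the `cheese_tour` helper below is only illustrative, not part of the original docs):

```python
# Sketch: a parent op that calls cheese_recommender for several regions, so the
# individual Mistral calls should appear nested under one parent trace.
# (`cheese_tour` is a hypothetical helper, not part of the original docs.)
# highlight-next-line
@weave.op()
def cheese_tour(regions: list[str], model: str) -> dict:
    "Recommend a cheese for each region in the list."
    return {region: cheese_recommender(region=region, model=model) for region in regions}

cheese_tour(regions=["France", "Spain", "Netherlands"], model="mistral-large-latest")
```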

