Mistral >= 1.0.0 support #79

Merged · 16 commits · Sep 22, 2024
20 changes: 10 additions & 10 deletions docs/tutorial/providers.md
@@ -2,16 +2,16 @@

## List of all providers

| Provider name | Extra for installation | Guide |
|---------------------|------------------------|--------------------------------------------------------------------------------------------------|
| Anthropic | `anthropic` | [Guide for Anthropic :octicons-link-16:](providers/anthropic.md) |
| Cohere | `cohere` | [Guide for Cohere :octicons-link-16:](providers/cohere.md) |
| Google Gemini | `google-generativeai` | [Guide for Google Gemini :octicons-link-16:](providers/google.md) |
| Hugging Face Hub | `huggingface-hub` | [Guide for Hugging Face Hub :octicons-link-16:](providers/huggingface_hub.md) |
| LiteLLM | `litellm` | [Guide for LiteLLM :octicons-link-16:](providers/litellm.md) |
| Mistral AI | `mistralai` | [Guide for Mistral AI :octicons-link-16:](providers/mistralai.md) |
| OpenAI | `openai` | [Guide for OpenAI :octicons-link-16:](providers/openai.md) |
| Azure OpenAI | `openai` | [Guide for Azure OpenAI :octicons-link-16:](providers/openai.md#compatibility-with-azure-openai) |


## Chat Completions
48 changes: 32 additions & 16 deletions docs/tutorial/providers/mistralai.md
@@ -1,5 +1,9 @@
# Mistral AI

??? warning "Deprecation of Mistral AI v0"

    The Mistral AI Python client with version `<1.0.0` will no longer be supported by EcoLogits. See the official [migration guide from v0 to v1 :octicons-link-external-16:](https://github.com/mistralai/client-python/blob/main/MIGRATION.md).

This guide focuses on the integration of :seedling: **EcoLogits** with the [Mistral AI official python client :octicons-link-external-16:](https://github.com/mistralai/client-python).
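
Before following the examples below, it can help to confirm which client version is installed (for example after installing via the `mistralai` extra listed in the providers table, such as `pip install "ecologits[mistralai]"`). A minimal sketch, using only the standard library and `packaging`, which EcoLogits already depends on:

```python
from importlib.metadata import version

from packaging.version import Version

# Check whether the installed mistralai client exposes the v1 API
# (`from mistralai import Mistral`) or the deprecated v0 API
# (`from mistralai.client import MistralClient`).
installed = Version(version("mistralai"))
if installed >= Version("1.0.0"):
    print("mistralai v1: use `Mistral` and `client.chat.complete(...)`")
else:
    print("mistralai v0 (deprecated): use `MistralClient` and `client.chat(...)`")
```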

Official links:
@@ -27,16 +31,16 @@ Integrating EcoLogits with your applications does not alter the standard outputs

=== "Sync"

```python
``` { .python .annotate }
from ecologits import EcoLogits
from mistralai.client import MistralClient
from mistralai import Mistral

# Initialize EcoLogits
EcoLogits.init()

client = MistralClient(api_key="<MISTRAL_API_KEY>")
response = client.chat(
client = Mistral(api_key="<MISTRAL_API_KEY>")

response = client.chat.complete(# (1)!
messages=[
{"role": "user", "content": "Tell me a funny joke!"}
],
@@ -46,21 +50,23 @@ Integrating EcoLogits with your applications does not alter the standard outputs
# Get estimated environmental impacts of the inference
print(response.impacts)
```

1. Use `client.chat` for Mistral AI v0.

=== "Async"

```python
import asyncio
from ecologits import EcoLogits
from mistralai.async_client import MistralAsyncClient
from mistralai import Mistral

# Initialize EcoLogits
EcoLogits.init()

client = MistralAsyncClient(api_key="<MISTRAL_API_KEY>")
client = Mistral(api_key="<MISTRAL_API_KEY>")

async def main() -> None:
response = await client.chat(
response = await client.chat.complete_async(# (1)!
messages=[
{"role": "user", "content": "Tell me a funny joke!"}
],
@@ -73,6 +79,9 @@ Integrating EcoLogits with your applications does not alter the standard outputs

asyncio.run(main())
```

1. Use `client.chat` for Mistral AI v0.
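
Beyond printing the whole object, individual impact criteria can be read from `response.impacts`. The attribute names below (`energy` and `gwp`, each with `value` and `unit`) are assumptions based on the EcoLogits impacts schema and are shown for illustration only; refer to the EcoLogits documentation for the authoritative structure.

```python
# Illustrative only: `response` comes from one of the examples above, and the
# `energy` / `gwp` attributes with `value` and `unit` fields are assumed here.
impacts = response.impacts
print(f"Energy consumption: {impacts.energy.value} {impacts.energy.unit}")
print(f"Global warming potential: {impacts.gwp.value} {impacts.gwp.unit}")
```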


### Streaming example

@@ -82,14 +91,14 @@ Integrating EcoLogits with your applications does not alter the standard outputs

```python
from ecologits import EcoLogits
from mistralai.client import MistralClient
from mistralai import Mistral

# Initialize EcoLogits
EcoLogits.init()

client = MistralClient(api_key="<MISTRAL_API_KEY>")
client = Mistral(api_key="<MISTRAL_API_KEY>")

stream = client.chat_stream(
stream = client.chat.stream(# (1)!
messages=[
{"role": "user", "content": "Tell me a funny joke!"}
],
@@ -98,23 +107,26 @@ Integrating EcoLogits with your applications does not alter the standard outputs

for chunk in stream:
# Get cumulative estimated environmental impacts of the inference
print(chunk.impacts)
print(chunk.data.impacts) # (2)!
```

1. Use `client.chat_stream` for Mistral AI v0.
2. Use `chunk.impacts` for Mistral AI v0.

=== "Async"

```python
import asyncio
from ecologits import EcoLogits
from mistralai.async_client import MistralAsyncClient
from mistralai import Mistral

# Initialize EcoLogits
EcoLogits.init()

client = MistralAsyncClient(api_key="<MISTRAL_API_KEY>")
client = Mistral(api_key="<MISTRAL_API_KEY>")

async def main() -> None:
response = await client.chat(
            stream = await client.chat.stream_async(# (1)!
messages=[
{"role": "user", "content": "Tell me a funny joke!"}
],
@@ -124,8 +136,12 @@ Integrating EcoLogits with your applications does not alter the standard outputs
async for chunk in stream:
# Get cumulative estimated environmental impacts of the inference
            if hasattr(chunk.data, "impacts"):
print(chunk.impacts)
print(chunk.data.impacts) # (2)!


asyncio.run(main())
```

1. Use `client.chat_stream` for Mistral AI v0.
2. Use `chunk.impacts` for Mistral AI v0.
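
Because the impacts reported on each chunk are cumulative, a common pattern is to keep only the value carried by the last chunk once the stream is exhausted. A minimal sketch, assuming the v1 client and the same `chunk.data.impacts` attribute used in the streaming examples above:

```python
# Sketch: iterate the stream and retain the final (cumulative) impacts.
final_impacts = None
for chunk in stream:
    if hasattr(chunk.data, "impacts"):
        final_impacts = chunk.data.impacts

if final_impacts is not None:
    print(final_impacts)
```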

14 changes: 10 additions & 4 deletions ecologits/_ecologits.py
@@ -1,10 +1,12 @@
import importlib.metadata
import importlib.util
from dataclasses import dataclass, field
from typing import Optional, Union

from packaging.version import Version

from ecologits.exceptions import EcoLogitsError
from ecologits.log import logger


def init_openai_instrumentor() -> None:
@@ -25,17 +27,21 @@ def init_anthropic_instrumentor() -> None:

def init_mistralai_instrumentor() -> None:
if importlib.util.find_spec("mistralai") is not None:
from ecologits.tracers.mistralai_tracer import MistralAIInstrumentor
version = Version(importlib.metadata.version("mistralai"))
if version < Version("1.0.0"):
logger.warning("MistralAI client v0.*.* will soon no longer be supported by EcoLogits.")
from ecologits.tracers.mistralai_tracer_v0 import MistralAIInstrumentor
else:
from ecologits.tracers.mistralai_tracer_v1 import MistralAIInstrumentor

instrumentor = MistralAIInstrumentor()
instrumentor.instrument()


def init_huggingface_instrumentor() -> None:
if importlib.util.find_spec("huggingface_hub") is not None:
from huggingface_hub import __version__

if Version(__version__) >= Version("0.22.0"):
version = Version(importlib.metadata.version("huggingface_hub"))
if version >= Version("0.22.0"):
from ecologits.tracers.huggingface_tracer import HuggingfaceInstrumentor

instrumentor = HuggingfaceInstrumentor()
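
The `init_mistralai_instrumentor` change above follows a version-gated dispatch pattern for optional dependencies: check that the package is installed, read its installed version, and load the tracer that matches. A condensed sketch of that pattern, with a hypothetical package and tracer names standing in for the real ones:

```python
import importlib.metadata
import importlib.util

from packaging.version import Version


def init_example_instrumentor() -> None:
    # Skip entirely if the optional provider package is not installed.
    if importlib.util.find_spec("someprovider") is None:  # hypothetical package
        return

    # Choose the tracer matching the installed major version.
    installed = Version(importlib.metadata.version("someprovider"))
    if installed < Version("1.0.0"):
        tracer = "tracer_v0"  # hypothetical legacy tracer module
    else:
        tracer = "tracer_v1"  # hypothetical current tracer module
    print(f"would instrument someprovider {installed} with {tracer}")
```
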
13 changes: 7 additions & 6 deletions ecologits/tracers/cohere_tracer.py
@@ -12,13 +12,14 @@
from cohere import AsyncClient, Client
from cohere.types.non_streamed_chat_response import NonStreamedChatResponse as _NonStreamedChatResponse
from cohere.types.streamed_chat_response import StreamedChatResponse
from cohere.types.streamed_chat_response import StreamedChatResponse_StreamEnd as _StreamedChatResponse_StreamEnd
from cohere.types.streamed_chat_response import StreamEndStreamedChatResponse as _StreamEndStreamedChatResponse
except ImportError:
from pydantic import BaseModel
Client = object()
AsyncClient = object()
_NonStreamedChatResponse = object()
_NonStreamedChatResponse = BaseModel
StreamedChatResponse = object()
_StreamedChatResponse_StreamEnd = object()
_StreamEndStreamedChatResponse = BaseModel


PROVIDER = "cohere"
@@ -31,7 +32,7 @@ class Config:
arbitrary_types_allowed = True


class StreamedChatResponse_StreamEnd(_StreamedChatResponse_StreamEnd): # noqa: N801
class StreamEndStreamedChatResponse(_StreamEndStreamedChatResponse):
impacts: Impacts

class Config:
@@ -91,7 +92,7 @@ def cohere_stream_chat_wrapper(
request_latency=request_latency,
electricity_mix_zone=EcoLogits.config.electricity_mix_zone
)
yield StreamedChatResponse_StreamEnd(**event.dict(), impacts=impacts)
yield StreamEndStreamedChatResponse(**event.dict(), impacts=impacts)
else:
yield event

@@ -113,7 +114,7 @@ async def cohere_async_stream_chat_wrapper(
request_latency=request_latency,
electricity_mix_zone=EcoLogits.config.electricity_mix_zone
)
yield StreamedChatResponse_StreamEnd(**event.dict(), impacts=impacts)
yield StreamEndStreamedChatResponse(**event.dict(), impacts=impacts)
else:
yield event

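The switch from `object()` to `BaseModel` in the `except ImportError` branch matters because the classes defined below the import subclass those names and declare an extra `impacts` field; the stub therefore has to be a pydantic model so the subclass definitions still evaluate when `cohere` is not installed. A reduced sketch of the pattern (`ResponseWithImpacts` is a hypothetical name for illustration):

```python
from typing import Any

from pydantic import BaseModel

try:
    # Real provider type when the optional dependency is installed.
    from cohere.types.non_streamed_chat_response import (
        NonStreamedChatResponse as _BaseResponse,
    )
except ImportError:
    # Pydantic stub so the subclass below still evaluates without `cohere`.
    _BaseResponse = BaseModel


class ResponseWithImpacts(_BaseResponse):
    impacts: Any  # EcoLogits attaches its Impacts model here

    class Config:
        arbitrary_types_allowed = True
```
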
@@ -55,7 +55,7 @@ def mistralai_chat_wrapper(
return response


def mistralai_chat_wrapper_stream_wrapper(
def mistralai_chat_wrapper_stream(
wrapped: Callable, instance: MistralClient, args: Any, kwargs: Any # noqa: ARG001
) -> Iterable[ChatCompletionStreamResponse]:
timer_start = time.perf_counter()
@@ -101,7 +101,7 @@ async def mistralai_async_chat_wrapper(
return response


async def mistralai_async_chat_wrapper_stream_wrapper(
async def mistralai_async_chat_wrapper_stream(
wrapped: Callable,
instance: MistralAsyncClient, # noqa: ARG001
args: Any,
@@ -144,12 +144,12 @@ def __init__(self) -> None:
{
"module": "mistralai.client",
"name": "MistralClient.chat_stream",
"wrapper": mistralai_chat_wrapper_stream_wrapper,
"wrapper": mistralai_chat_wrapper_stream,
},
{
"module": "mistralai.async_client",
"name": "MistralAsyncClient.chat_stream",
"wrapper": mistralai_async_chat_wrapper_stream_wrapper,
"wrapper": mistralai_async_chat_wrapper_stream,
},
]

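The registry above maps each client method (`module` plus `name`) to the wrapper that times the call and attaches impacts. A sketch of how such a registry can be applied by monkey-patching, assuming `wrapt.wrap_function_wrapper` as the patching helper (the exact mechanism EcoLogits uses internally may differ):

```python
from wrapt import wrap_function_wrapper


def apply_wrappers(wrapped_methods: list) -> None:
    # Patch each (module, attribute) pair so calls are routed through the
    # wrapper, which measures latency and attaches estimated impacts.
    for method in wrapped_methods:
        wrap_function_wrapper(method["module"], method["name"], method["wrapper"])
```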