
finish_reason not set in AzureOpenAIChatCompletionClient.create_stream #4213

Open
MohMaz opened this issue Nov 15, 2024 · 0 comments · May be fixed by #4311

@MohMaz (Contributor) commented Nov 15, 2024

What happened?

The provided code snippet works fine for the .create call of AzureOpenAIChatCompletionClient, but fails on the .create_stream call:

Creating client with config: {'model': 'gpt-4o', 'azure_endpoint': 'https://xxxxxxxx.openai.azure.com', 'azure_deployment': 'gpt-4o', 'api_version': '2024-08-01-preview', 'model_capabilities': {'vision': False, 'function_calling': False, 'json_output': False}, 'azure_ad_token_provider': <function get_bearer_token_provider.<locals>.wrapper at 0x108205da0>}
-----> Print output of .create call
/Users/mohammadmazraeh/Projects/autogen/python/packages/autogen-core/samples/distributed-group-chat/test_aoi.py:26: UserWarning: Resolved model mismatch: gpt-4o-2024-08-06 != gpt-4o-2024-05-13. Model mapping may be incorrect.
  single_output = await client.create(messages=messages)
-----> CreateResult(finish_reason='stop', content='The autumn leaves painted the park in vibrant shades of red and gold.', usage=RequestUsage(prompt_tokens=17, completion_tokens=14), cached=False, logprobs=None) - * 50
-----> Print output of .create_stream call
Traceback (most recent call last):
  File "/Users/mohammadmazraeh/Projects/autogen/python/packages/autogen-core/samples/distributed-group-chat/test_aoi.py", line 34, in <module>
    asyncio.run(main())
  File "/usr/local/Cellar/[email protected]/3.11.10/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/usr/local/Cellar/[email protected]/3.11.10/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/Cellar/[email protected]/3.11.10/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/Users/mohammadmazraeh/Projects/autogen/python/packages/autogen-core/samples/distributed-group-chat/test_aoi.py", line 31, in main
    async for chunk in stream_output:
  File "/Users/mohammadmazraeh/Projects/autogen/python/packages/autogen-ext/src/autogen_ext/models/_openai/_openai_client.py", line 662, in create_stream
    choice.finish_reason
AttributeError: 'NoneType' object has no attribute 'finish_reason'
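My reading of the traceback is that at least one streamed chunk arrives without a usable choice (Azure deployments are known to emit chunks with an empty choices list, e.g. for content-filter annotations or usage reporting), so choice ends up None and choice.finish_reason blows up. As a rough sketch only, not the actual _openai_client.py code, the streaming loop could guard against such chunks like this:

# Hypothetical guard, not the actual autogen-ext implementation. `raw_stream`
# stands in for the AsyncStream[ChatCompletionChunk] returned by the openai SDK.
from typing import AsyncIterable, AsyncIterator

from openai.types.chat import ChatCompletionChunk


async def stream_text(raw_stream: AsyncIterable[ChatCompletionChunk]) -> AsyncIterator[str]:
    finish_reason = None
    async for chunk in raw_stream:
        # Azure can send chunks whose choices list is empty (content-filter
        # results, usage-only chunks); skip them instead of dereferencing None.
        if not chunk.choices:
            continue
        choice = chunk.choices[0]
        if choice.delta and choice.delta.content:
            yield choice.delta.content
        if choice.finish_reason is not None:
            # Remember the finish reason for building the final result object.
            finish_reason = choice.finish_reason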

What did you expect to happen?

I expected the .create_stream call to return a reasonable response as well.

How can we reproduce it (as minimally and precisely as possible)?

import asyncio

from autogen_core.components.models._types import UserMessage
from autogen_ext.models._openai._openai_client import AzureOpenAIChatCompletionClient
from azure.identity import DefaultAzureCredential, get_bearer_token_provider

if __name__ == "__main__":

    async def main():
        # Azure OpenAI client configuration (endpoint redacted).
        config = {
            "model": "gpt-4o",
            "azure_endpoint": "https://xxxxxxx.openai.azure.com",
            "azure_deployment": "gpt-4o",
            "api_version": "2024-08-01-preview",
            "model_capabilities": {"vision": False, "function_calling": False, "json_output": False},
            "azure_ad_token_provider": get_bearer_token_provider(
                DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
            ),
        }

        print(f"Creating client with config: {config}")
        client = AzureOpenAIChatCompletionClient(**config)

        messages = [UserMessage(content="Generate one short sentence on some topic!", source="system")]

        # Non-streaming call: works as expected.
        print("-----> Print output of .create call")
        single_output = await client.create(messages=messages)
        print("----->", single_output, "- * 50")

        # Streaming call: raises the AttributeError shown in the traceback above.
        print("-----> Print output of .create_stream call")
        stream_output = client.create_stream(messages=messages)
        async for chunk in stream_output:
            print(chunk)

    asyncio.run(main())
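For reference, this is how I would expect to consume the stream once this is fixed, assuming (as the .create output above suggests) that create_stream yields str deltas followed by a final CreateResult:

# Hypothetical consumer loop; `client` and `messages` are the objects from the repro above.
async for chunk in client.create_stream(messages=messages):
    if isinstance(chunk, str):
        print(chunk, end="", flush=True)  # incremental text delta
    else:
        # Expected to be the final CreateResult carrying finish_reason and usage.
        print("\nfinish_reason:", chunk.finish_reason)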

AutoGen version

0.4.0.dev6

Which package was this bug in

Extensions

Model used

gpt-4o

Python version

3.11.10

Operating system

macOS Sequoia Version 15.1 (24B83)

Any additional info you think would be helpful for fixing this bug

No response

@ekzhu ekzhu added this to the 0.4.0 milestone Nov 15, 2024
@MohMaz MohMaz self-assigned this Nov 15, 2024
@MohMaz MohMaz linked a pull request Nov 22, 2024 that will close this issue