
add AzureAIInference client, wrapper and test #3301

Closed · wants to merge 10 commits

Conversation


@Josephrp Josephrp commented Aug 6, 2024

Why are these changes needed?

This PR introduces a new GithubLLM class to autogen, allowing users to leverage GitHub's inference endpoint with automatic fallback to Azure. It provides a seamless way to use GitHub's LLM capabilities within the autogen ecosystem, handling rate limits and ensuring high availability through Azure fallback.
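The fallback behaviour described above can be sketched as a small wrapper. This is a hypothetical illustration of the idea, not the PR's actual implementation; the function and exception names (`complete_with_fallback`, `RateLimitError`, the two endpoint stubs) are assumptions for the example.

```python
# Hypothetical sketch: try the GitHub inference endpoint first, and fall back
# to Azure when the request is rate limited. Names are illustrative only.

class RateLimitError(Exception):
    """Raised when an endpoint rejects a request due to rate limiting."""

def complete_with_fallback(primary, fallback, prompt):
    """Call primary(prompt); on a rate limit, retry via fallback(prompt)."""
    try:
        return primary(prompt)
    except RateLimitError:
        return fallback(prompt)

# Usage: a primary endpoint that is rate limited, falling back to Azure.
def github_endpoint(prompt):
    raise RateLimitError("GitHub endpoint quota exhausted")

def azure_endpoint(prompt):
    return f"azure-reply:{prompt}"

print(complete_with_fallback(github_endpoint, azure_endpoint, "hi"))
# prints "azure-reply:hi"
```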

Related Issues

Closes #3300

Checks

Tasks

  • EmbeddingsClient
  • ImageEmbeddingsClient
  • ChatCompletionsClient
  • Inference
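The clients in the task list above are driven with a chat-completions-style request. The sketch below shows the payload shape with a plain helper; the model name and endpoint URL in the comments are illustrative assumptions, not values taken from this PR, and the commented-out SDK calls require `pip install azure-ai-inference`.

```python
# Hedged sketch of the request shape a chat completions client consumes.
# The model name "gpt-4o-mini" is an assumption for illustration.

def build_chat_request(model, system_prompt, user_prompt):
    """Assemble the payload shape expected by a chat completions API."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

request = build_chat_request("gpt-4o-mini", "You are helpful.", "Hello!")

# With the real SDK (assumed usage, requires azure-ai-inference):
#   import os
#   from azure.ai.inference import ChatCompletionsClient
#   from azure.core.credentials import AzureKeyCredential
#   client = ChatCompletionsClient(
#       endpoint="https://models.inference.ai.azure.com",
#       credential=AzureKeyCredential(os.environ["GITHUB_TOKEN"]),
#   )
#   response = client.complete(**request)
```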

Tests

  • AssistantMessage
  • AsyncStreamingChatCompletions
  • ChatChoice
  • ChatCompletions
  • ChatCompletionsFunctionToolCall
  • ChatCompletionsFunctionToolDefinition
  • ChatCompletionsFunctionToolSelection
  • ChatCompletionsNamedFunctionToolSelection
  • ChatCompletionsNamedToolSelection
  • ChatCompletionsResponseFormat
  • ChatCompletionsToolCall
  • ChatCompletionsToolDefinition
  • ChatCompletionsToolSelectionPreset
  • ChatRequestMessage
  • ChatResponseMessage
  • ChatRole
  • CompletionsFinishReason
  • CompletionsUsage
  • ContentItem
  • EmbeddingEncodingFormat
  • EmbeddingInput
  • EmbeddingInputType
  • EmbeddingItem
  • EmbeddingsResult
  • EmbeddingsUsage
  • FunctionCall
  • FunctionDefinition
  • ImageContentItem
  • ImageDetailLevel
  • ImageUrl
  • ModelInfo
  • ModelType
  • StreamingChatChoiceUpdate
  • StreamingChatCompletions
  • StreamingChatCompletionsUpdate
  • SystemMessage
  • TextContentItem
  • ToolMessage
  • UserMessage

Refs

Josephrp (Author) commented Aug 6, 2024

@microsoft-github-policy-service agree company="Tonic-AI"


codecov-commenter commented Aug 6, 2024

Codecov Report

Attention: Patch coverage is 0% with 106 lines in your changes missing coverage. Please review.

Project coverage is 20.44%. Comparing base (6279247) to head (1e4a8c0).
Report is 70 commits behind head on main.

Files                  | Patch % | Lines
autogen/oai/github.py  | 0.00%   | 104 Missing and 2 partials ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##             main    #3301       +/-   ##
===========================================
- Coverage   32.90%   20.44%   -12.47%     
===========================================
  Files          94      104       +10     
  Lines       10235    11005      +770     
  Branches     2193     2501      +308     
===========================================
- Hits         3368     2250     -1118     
- Misses       6580     8508     +1928     
+ Partials      287      247       -40     
Flag      | Coverage Δ
unittests | 20.39% <0.00%> (-12.51%) ⬇️

Flags with carried forward coverage won't be shown.


LittleLittleCloud (Collaborator) commented Aug 7, 2024

@Josephrp Does the GitHub LLM support function calls?

Josephrp (Author) commented Aug 7, 2024

> @Josephrp Does github LLM support function call

So it's actually an Azure endpoint serving several models, and some of these can return JSON, but it's not capable of running commands because it doesn't have a code execution environment (my understanding).

LittleLittleCloud (Collaborator)

> so it's actually an azure endpoint with several models, and some of these can return a JSON

Got it. So can I run inference against the GitHub model using the existing configuration, just replacing the endpoint and token with the GH Models endpoint and token?

(Review comment on autogen/oai/github.py: outdated, resolved)
@Josephrp Josephrp changed the title add githubllm client , wrapper and test add AzureAIInference client , wrapper and test Aug 9, 2024
@Josephrp Josephrp marked this pull request as draft August 9, 2024 14:21
Josephrp (Author)

> so it's actually an azure endpoint with several models, and some of these can return a JSON
>
> Got it, so can I inference the github model using existing configuration but replace the endpoint and token with gh model endpoint and token?

Normally yes. BTW, I'm starting again on this item these days; if you would like to work together, there are some models with special cases, plus linking the class to the message classes... :-) fun! 🚀
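The reuse asked about above can be sketched as an OpenAI-style autogen config entry where only the endpoint and token change. This is a hedged illustration: the endpoint URL and model name are assumptions for the example (not confirmed by this PR), and `GITHUB_TOKEN` is a placeholder for a GitHub personal access token.

```python
# Hypothetical sketch: reuse an existing OpenAI-style config entry by
# swapping base_url and api_key for the GitHub Models endpoint and token.
# Endpoint and model name are illustrative assumptions.
import os

config_list = [
    {
        "model": "gpt-4o-mini",                               # a model served via GitHub Models
        "base_url": "https://models.inference.ai.azure.com",  # assumed GitHub inference endpoint
        "api_key": os.environ.get("GITHUB_TOKEN", "<token>"), # GitHub PAT placeholder
    }
]
```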

@Josephrp Josephrp closed this Sep 23, 2024
@Josephrp Josephrp reopened this Sep 23, 2024
Josephrp (Author)

Moved here: #3559
