`.` minor | `-` Fix | `+` Addition | `^` improvement | `!` Change | `*` Refactor
IMPORTANT:

- `0.1.x` will still have some breaking changes in patches.
- Make sure to lock your version, e.g., `genai = "=0.1.15"`.
- Version `0.2.x` will follow semver more strictly.
- API changes will be denoted as "`!` API CHANGE ...".
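The version-lock advice above goes in `Cargo.toml`; a minimal sketch (the `0.1.15` version is just the example from the note above — pin whatever patch you are actually on):

```toml
# Cargo.toml
[dependencies]
# "=" pins the exact patch version, since 0.1.x patches may still break APIs
genai = "=0.1.15"
```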
- `+` add back `AdapterKind::default_key_env_name`
- `+` adapter - xAI adapter
- `+` `ServiceTargetResolver` added (support for custom endpoint) (check out examples/c06-target-resolver.rs)
- `.` ollama - now uses the OpenAI v1 API to list models
- `.` test - add test for `Client::all_model_names`
- `*` major internal refactor
- `.` ollama - removed workaround for multi-system lack of support (for old ollama)
- `+` add stop_sequences support for cohere
- `+` stop_sequences - for openai, ollama, groq, gemini, cohere
- `+` stop_sequences - for anthropic (thanks @semtexzv)
- `.` minor update on LLM model names
- `^` `ChatRole` - impl Display
- `^` `ChatRequest` - added `from_messages` and `append_messages`
- `^` anthropic - updated the default max_tokens to the max for the given model (i.e., 3-5 will be 8k)
- `+` tool - first pass at adding Function Calling for OpenAI and Anthropic (rel #24)
  - NOTE: tool support is still a work in progress, but this should be a good first start.
- `.` update version to 0.1.11-WIP

(minor release)
- `^` `ChatRequest` - add `ChatRequest::from_user(...)`
- `.` openai - add o1-preview, o1-mini to openai list
- `.` update groq models (llama 3.2)
- `.` Added .github with Github Bug Report template (#26)
- `.` minor readme update to avoid a browser issue when scrolling down to the video section
- `^` `AdapterKind` - expose `default_key_env_name`
- `.` openai - add 'o1-' model prefix to point to OpenAI Adapter
- `.` comments proofing (using genai with custom devai script)
- `.` #23 - add documentation
- `.` fix printer comment
- `.` updated to v0.1.9-wip
- `.` printer - now uses `printer::Error` (rather than box dyn) (rel #21)
- `+` NEW - structured output - for gemini & OpenAI
- `!` soft deprecation (for now) - use `ChatResponseFormat::JsonMode` (was `ChatOptions::json_mode` flag)
- `*` Make most public types De/Serializable
- `.` openai - fix chatgpt prefix; update current model lists
- `.` add json test for anthropic
- `.` makes `webc::Error` public (relates to: #12)
- `+` Added ModelMapper scheme (`client_builder::with_model_mapper_fn`)
- `!` API CHANGE - Removed `AdapterKindResolver` (should use ModelMapper) (see examples/c03-mapper.rs)
- `!` API CHANGE - Renamed `ModelInfo` to `ModelIden`
- `!` API CHANGE - `AuthResolver` - refactor AuthResolver Scheme/API (see examples/c02-auth.rs)
- `!` API CHANGE - completely remove `AdapterConfig` (see `AuthResolver`)
- `.` test groq - switch to llama3-groq-8b-8192-tool-use-preview for testing so that test_chat_json works as expected
- `^` chore: make stream Send
- `.` test - `ChatOptions` - add tests for temperature
- `.` fix a typo in the OpenAI adapter that made the temperature chat option unusable
- `.` unit test - first value_ext insert
- `+` `ChatOptions` - add json mode for openai-type models
- `.` groq - added the Llama 3.1 previews, and groq-..-tool-use.. to the groq model list names
- `!` now `chat::printer::print_chat_stream` (was `utils::print_chat_stream`)
- `!` Now `ChatOptions` (was `ChatRequestOptions`)
- `!` Remove `client_builder.with_default_chat_request_options` (available with `client_builder.with_chat_options`)
- `.` readme - add youtube videos doc
- `!` API CHANGE - now `ClientBuilder::insert_adapter_config` (was `with_adapter_config`)
- `.` code clean
- `!` API CHANGE - refactor Error
  - With new `ModelInfo`
  - Back to `genai::Error` (`adapter::Error` was wrongly exposing internal responsibility)
- `.` update tests and examples from 'gpt-3.5-turbo' to 'gpt-4o-mini'
- `-` Fix naming `ClientConfig::with_adapter_kind_resolver` (was wrongly `...auth_resolver`)
- `*` refactor code layout, internal Adapter calls to use ModelInfo
- `+` Add ModelName and ModelInfo types for more efficient request/error context
- `!` API CHANGE - now `Client::resolve_model_info(model)` (was `Client::resolve_adapter_kind(model)`)
- `^` `ChatRequest` - add `ChatRequest::from_system`
- `.` updated provider supported list
- `^` openai - added `gpt-4o-mini` and switched all openai examples/tests to it
- `!` API CHANGE - New `MessageContent` type for `ChatMessage.content`, `ChatResponse.content`, and `StreamEnd.captured_content` (only ::Text variant for now)
  - This is in preparation for multimodal support
- `!` API CHANGE - (should be minor, as `Into` implemented) - `ChatMessage` now takes `MessageContent` with only `::Text(String)` variant for now
- `!` API CHANGE - Error refactor - added `genai::adapter::Error` and `genai::resolver::Error`, and updated `genai::Error` with appropriate `From`s
- `+` Added token usage for ALL adapters/providers - `ChatResponse.usage` and `ChatRequestOptions` `.capture_usage`/`.capture_content` (for streaming) support for all Adapters (see note in Readme for Ollama for streaming)
- `!` API CHANGE: `ClientConfig::with_chat_request_options` (was `with_default_chat_request_options`)
- `!` API CHANGE: `PrintChatStreamOptions::from_print_events` (was `from_stream_events`)
- `^` `AdapterKind` - added `as_str` and `as_lower_str`
- `^` `ChatRequest` - added `.iter_systems()` and `.combine_systems()` (includes eventual `chat_req.system` as part of the system messages)
- `!` API CHANGE: `Client::all_model_names(..)` (was `Client::list_models(..)`)
- `^` groq - add gemma2-9b-it to the list of Groq models
- `!` API CHANGE: `genai::Client` (was `genai::client::Client`, same for `ClientBuilder`, `ClientConfig`)
- `-` groq - remove groq whisper model from list_models as it is not a chat completion model
- `^` ollama - implement live list_models for ollama
- `!` Makes AdapterDispatcher crate-only (should be internal only)
- `+` `ChatRequestOptions` - added `temperature`, `max_tokens`, `top_p` for all adapters (see readme for property mapping)
- `!` `SyncAdapterKindResolverFn` - change signature to return `Result<Option>` (rather than `Result`)
- `.` made public `client.resolve_adapter_kind(model)`
- `+` implement groq completions
- `-` gemini - proper stream message error handling
- `.` print_chat_stream - minor refactor to ensure flush
- `-` ollama - improve Ollama Adapter to support multi system messages
- `-` gemini - fix adapter to set "systemInstruction" (supported in v1beta)
- `+` Added AdapterKindResolver
- `-` `Adapter::list_models` api impl and change
- `^` chat_printer - added PrintChatStreamOptions with print_events