v0.8.1
Features:
- Re-added purging the KV cache from previous versions
- Added a check for whether to load the KV cache based on the previous model ID (see the sketch after this list)
- Added a settings option to bypass `context_length`, allowing you to send all messages regardless of whether they fit in context. It is never recommended to use this unless you know what you're doing!
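
As a rough illustration of the KV cache check above, here is a minimal TypeScript sketch. The names (`KVCacheMeta`, `loadKVCache`, `purgeKVCache`, `restoreOrPurge`) are illustrative assumptions, not the app's actual API; the idea is simply that a saved cache is only reloaded if it came from the same model, and is purged otherwise.

```typescript
// Hypothetical sketch: decide whether a previously saved KV cache can be reused.
// All names here are illustrative, not the app's real implementation.

interface KVCacheMeta {
    modelId: string // model that produced the cached KV state
    tokenCount: number // number of tokens covered by the cache
}

// The cache is only valid if it was produced by the currently loaded model.
function shouldLoadKVCache(saved: KVCacheMeta | null, currentModelId: string): boolean {
    if (!saved) return false
    return saved.modelId === currentModelId
}

// Stand-ins for the app's real cache handlers.
function loadKVCache(): void {
    console.log('Reusing saved KV cache')
}

function purgeKVCache(): void {
    console.log('Purging stale KV cache')
}

function restoreOrPurge(saved: KVCacheMeta | null, currentModelId: string): void {
    if (shouldLoadKVCache(saved, currentModelId)) {
        loadKVCache() // same model: reuse the cached prompt state
    } else {
        purgeKVCache() // different model or old version: discard stale state
    }
}

// Example: a cache saved by a different model is purged rather than reloaded.
restoreOrPurge({ modelId: 'llama-3-8b', tokenCount: 512 }, 'mistral-7b')
```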
Fixes:
- Fixed UI features that use the FadeBackdrop component (such as Drawers and Alerts) always closing the modal when clicking on non-button components on screen.
- Fixed the OpenAI prompt builder always having a context length of 0. This fix also allows you to set the generated length instead.
- Fixed the ChatCompletions context limit being controlled by `max_tokens` instead of context length
- With the two fixes above, both OpenAI and Chat Completions API options now use `max_context_length` to control the context builder, since the OpenAI model response does not include a context limit (see the sketch after this list).
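
The TypeScript sketch below roughly illustrates a context builder driven by `max_context_length`: messages are packed newest-first until adding another would exceed the limit minus the reserved generation budget, and the new bypass setting skips trimming entirely. The `Message` type, `countTokens`, and the parameter names are assumptions for illustration, not the app's actual code.

```typescript
// Hypothetical sketch of a context builder keyed off max_context_length.
interface Message {
    role: 'system' | 'user' | 'assistant'
    content: string
}

// Crude token estimate; the app would use its real tokenizer here.
function countTokens(text: string): number {
    return Math.ceil(text.length / 4)
}

function buildContext(
    history: Message[],
    maxContextLength: number,
    generatedLength: number,
    bypassContextLength = false
): Message[] {
    // The bypass setting sends all messages regardless of the limit.
    if (bypassContextLength) return [...history]

    // Reserve room for the response, then pack messages newest-first.
    const budget = maxContextLength - generatedLength
    const selected: Message[] = []
    let used = 0
    for (let i = history.length - 1; i >= 0; i--) {
        const cost = countTokens(history[i].content)
        if (used + cost > budget) break
        selected.unshift(history[i])
        used += cost
    }
    return selected
}

// Example: trim a chat history to fit a 4096-token context with 512 tokens reserved for output.
const trimmed = buildContext(
    [
        { role: 'user', content: 'Hello!' },
        { role: 'assistant', content: 'Hi, how can I help?' },
    ],
    4096,
    512
)
console.log(trimmed.length)
```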