
Update OpenAI model types #698

Open · wants to merge 6 commits into main
Conversation

TheDome0

Add GPT_3_5_TURBO_0125 and update the default GPT_3_5_TURBO context window according to https://platform.openai.com/docs/models/gpt-3-5-turbo
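For reference, a minimal sketch of the kind of change this implies in xef's model definitions. This is illustrative only: the enum shape and property names here are assumptions, and the 16,385-token figure is the one the linked OpenAI docs list for gpt-3.5-turbo-0125, which the gpt-3.5-turbo alias now points to.

```kotlin
// Illustrative sketch, not the actual xef source: names and shape are assumed.
enum class ModelType(val modelName: String, val maxContextLength: Int) {
  // The gpt-3.5-turbo alias now resolves to gpt-3.5-turbo-0125,
  // so its window grows from the old 4_097 to 16_385 tokens.
  GPT_3_5_TURBO("gpt-3.5-turbo", maxContextLength = 16_385),
  GPT_3_5_TURBO_0125("gpt-3.5-turbo-0125", maxContextLength = 16_385),
  GPT_3_5_TURBO_16K("gpt-3.5-turbo-16k", maxContextLength = 16_385)
}
```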

TheDome0 and others added 3 commits March 23, 2024 19:51
Add GPT_3_5_TURBO_0125 and update default GPT_3_5_TURBO context window
@TheDome0
Author

TheDome0 commented Apr 2, 2024

Unfortunately I'm not quite sure why the tests failed; I'm completely new to this codebase. Should I open an issue to track this in case it turns out to be a more complex task?

The outdated token limits sadly prevent any reasonable use of the latest models, which also come with a significant price cut.

@raulraja
Contributor

raulraja commented Apr 4, 2024

Hi @TheDome0
Seems to be failing with:

com.xebia.functional.xef.conversation.ConversationSpec[jvm] > " | GPT_3_5_TURBO model has 4097 max context length | when the number of token in the conversation is greater than | the space allotted for the message history in the prompt configuration | the number of messages in the request must have fewer messages than | the total number of messages in the conversation |[jvm] FAILED
    java.lang.AssertionError: 100 should be < 100
        at com.xebia.functional.xef.conversation.ConversationSpec$1$2.invokeSuspend(ConversationSpec.kt:88)

        Caused by:
        java.lang.AssertionError: 100 should be < 100
            at com.xebia.functional.xef.conversation.ConversationSpec$1$2.invokeSuspend(ConversationSpec.kt:88)

That test may need to be updated at https://github.com/xebia-functional/xef/blob/main/core/src/commonTest/kotlin/com/xebia/functional/xef/conversation/ConversationSpec.kt#L54
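For context, the failing matcher is strict: Kotest's `shouldBeLessThan` rejects equal values, which is exactly what the "100 should be < 100" message indicates. A self-contained repro with illustrative values:

```kotlin
import io.kotest.matchers.ints.shouldBeLessThan

fun main() {
  // With the larger context window nothing gets evicted from the history,
  // so both counts land on 100 and the strict less-than assertion throws:
  // java.lang.AssertionError: 100 should be < 100
  val messagesSizePlusMessageResponse = 100
  val memoriesSize = 100
  messagesSizePlusMessageResponse shouldBeLessThan memoriesSize
}
```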

@TheDome0
Author

TheDome0 commented Apr 5, 2024

Well, shouldn't the check simply be the same as for the 16k model right below, since the default 3.5 model now also has a 16k context length?
So instead of `messagesSizePlusMessageResponse shouldBeLessThan memories.size` it should be `messagesSizePlusMessageResponse shouldBe memories.size`.
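A sketch of what that change to ConversationSpec might look like (the surrounding test setup is assumed unchanged, and the values here are illustrative):

```kotlin
import io.kotest.matchers.shouldBe

fun main() {
  val memoriesSize = 100
  val messagesSizePlusMessageResponse = 100

  // Before: assumed the 4,097-token window truncates the message history.
  // messagesSizePlusMessageResponse shouldBeLessThan memoriesSize

  // After: the full history fits in the 16k window, so the counts match,
  // in line with the existing gpt-3.5-turbo-16k test just below this one.
  messagesSizePlusMessageResponse shouldBe memoriesSize
}
```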

TheDome0 and others added 2 commits April 5, 2024 16:56
The context window for the new gpt-3.5-turbo model (which currently points to gpt-3.5-turbo-0125) has increased and is now on par with gpt-3.5-turbo-16k