
feat: Chat with LLM (PT-188527956) #17

Merged: 29 commits from 188527956-chat-with-llm into main, Dec 11, 2024
Conversation

@emcelroy (Contributor) commented Dec 10, 2024

#188527956

These changes allow the plugin to connect with an LLM for basic chat functionality via the OpenAI API. In addition, they:

  • Add two tool functions to provide to the assistant: get_attributes and create_graph. These are basic proof-of-concept functions for interaction with CODAP; more will be added in a separate story. (A sketch of possible definitions follows this list.)
  • Add Markdown support to the chat transcript. This required some significant updates to the Jest configuration.
  • Incorporate MobX State Tree, used to create new assistant, chat transcript, and app config models.
  • Add basic configuration options and related context. For now, this is mainly so we could add a "developer" mode, which provides additional UI for minimizing unwanted chat artifacts created in our OpenAI account during development. In the future we will allow other configuration options to be set and modified.
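For reference, here is a minimal sketch of how the two tool functions might be declared for the assistant. Only the function names come from this PR's description; the parameter schemas, descriptions, and the toolDefinitions variable are illustrative assumptions.

    // Hypothetical sketch of the two function tools in the OpenAI Assistants
    // API "function" tool shape. Parameter schemas are assumptions.
    const toolDefinitions = [
      {
        type: "function",
        function: {
          name: "get_attributes",
          description: "Return the attributes of a CODAP data context.",
          parameters: {
            type: "object",
            properties: {
              // Assumed parameter; the real schema lives in the PR.
              dataContext: { type: "string", description: "Name of the CODAP data context." },
            },
            required: ["dataContext"],
          },
        },
      },
      {
        type: "function",
        function: {
          name: "create_graph",
          description: "Create a graph component in CODAP.",
          parameters: {
            type: "object",
            properties: {
              xAttribute: { type: "string" },
              yAttribute: { type: "string" },
            },
            required: ["xAttribute", "yAttribute"],
          },
        },
      },
    ];

Definitions like these would be passed in the assistant's tools array when it is created or updated.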

@emcelroy marked this pull request as ready for review December 10, 2024 14:18
@emcelroy requested a review from bgoldowsky December 10, 2024 14:18

@bgoldowsky left a comment


This looks great. I did raise some questions/concerns.
Some of them you might want to defer to other PRs.

Review threads (resolved):
  • src/app-config.json
  • src/app-config-context.ts
  • src/hooks/use-app-config-context.ts
  • src/models/app-config-model.ts
  • src/models/assistant-model.ts
  • src/utils/anthropic-utils.ts
Comment on lines 72 to 74:

    while (runState.status !== "completed" && runState.status !== "requires_action") {
      runState = yield davai.beta.threads.runs.retrieve(self.thread.id, run.id);
    }


This loop worries me. If the state is something like "canceled" or "failed", wouldn't it loop forever? And if it just takes a while to complete, is it going to be calling the API every millisecond to check on it?

@emcelroy (Contributor, Author) replied Dec 10, 2024


Good point. This will be addressed in a separate branch.
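(For context, one plausible shape for such a fix, as a sketch only: the terminal-state list, the one-second interval, and the pollRunUntilDone helper name are all assumptions, not the code that later landed.)

    import OpenAI from "openai";

    // Sketch: treat failure states as terminal and wait between polls
    // instead of retrieving in a tight loop.
    const POLL_INTERVAL_MS = 1000;
    const TERMINAL_STATES = new Set(["completed", "requires_action", "cancelled", "failed", "expired"]);

    export async function pollRunUntilDone(client: OpenAI, threadId: string, runId: string) {
      let run = await client.beta.threads.runs.retrieve(threadId, runId);
      while (!TERMINAL_STATES.has(run.status)) {
        await new Promise((resolve) => setTimeout(resolve, POLL_INTERVAL_MS));
        run = await client.beta.threads.runs.retrieve(threadId, runId);
      }
      if (run.status !== "completed" && run.status !== "requires_action") {
        throw new Error(`Run ended in state "${run.status}"`);
      }
      return run;
    }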

Review threads (resolved):
  • src/models/assistant-model.ts (two threads)
Comment on lines 156 to 163:

    const response = yield fetch(`${process.env.REACT_APP_OPENAI_BASE_URL}threads/${threadId}`, {
      method: "DELETE",
      headers: {
        Authorization: `Bearer ${process.env.REACT_APP_OPENAI_API_KEY}`,
        "OpenAI-Beta": "assistants=v2",
        "Content-Type": "application/json",
      },
    });


This feels like it should be in a utility class. Surprising that the library doesn't give you a method for it and you have to construct the http headers yourself.
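(A sketch of the suggested extraction, keeping the raw fetch from the snippet above in a standalone utility; the module path and function name are hypothetical, while the env vars are the ones already used in the PR. The v4 openai Node SDK also exposes client.beta.threads.del(threadId), which might make the hand-built headers unnecessary.)

    // Hypothetical src/utils/openai-utils.ts
    export const deleteThread = async (threadId: string) => {
      const response = await fetch(`${process.env.REACT_APP_OPENAI_BASE_URL}threads/${threadId}`, {
        method: "DELETE",
        headers: {
          Authorization: `Bearer ${process.env.REACT_APP_OPENAI_API_KEY}`,
          "OpenAI-Beta": "assistants=v2",
          "Content-Type": "application/json",
        },
      });
      if (!response.ok) {
        throw new Error(`Failed to delete thread ${threadId}: ${response.status}`);
      }
      return response.json();
    };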

@emcelroy merged commit c130354 into main on Dec 11, 2024
5 checks passed
@emcelroy deleted the 188527956-chat-with-llm branch December 11, 2024 15:30