Cannot read properties of undefined (reading 'toolCalls') #860
Comments
I have this issue too; I tried the Docker installation as well. Hope they fix it. |
Same issue here. |
Can you try another provider? |
Same issue here; I have tried running it with Node.js on my local computer. |
The same issue occurs even when I use other providers. I tested it with these providers: However, it worked with local Ollama (Llama 3.1 8B). Device: EDIT: I inspected the browser's network traffic to identify the issue. The response body is empty; in the payload I can see that the API key is included and everything looks fine, yet the request comes back with an error. |
This happens in two scenarios. One is that your models were not selected correctly and it is using the default model instead of the selected one. Can you set the API keys in the UI and switch the model dropdown to some other model and then back to the one you want to use ("model", not "provider")? Also, can you specify which version and commit hash you are using? |
Were you able to check the selected model? It is in the messages, prefixed in the content of the user message. |
The same happens to me; it just stopped working after I finally got it running. I got this in the console: TypeError: Cannot read properties of undefined (reading 'toolCalls') |
Same issue here; it was working fine until I attempted to upload a .py file, and then the error occurred. Using Claude 3.5 Sonnet. I also have an OpenAI API key I tried using; 4o-mini worked, but 4o didn't. |
Yes, I'm able to check it. |
Any fixes? |
As a temporary fix, after you activate the venv, you can do the following to roll back the huggingface_hub package, which is the source of the issue. |
I have the same issue when connecting to Claude; I have saved the API key to .env.local, but it's still not working. An Ollama model running locally works, though. |
I'm having the same issue. I'm on a Mac M1 running Sequoia 15.2. |
I have the same problem as all of you; I am on Sequoia 15.1.1. |
Will look into this. |
Thank you for your efforts. |
I read that it was related to Docker. I ran Docker as per the setup instructions, but that didn't work either. |
Me too; I think it's linked to the M1 chip. |
Same here; I'm using an M-series chip too. This is probably what's causing the problem. |
I have the same issue, and I do not believe it is related to the M1 chip, since I am seeing it on a 7th-gen AMD processor. |
In any case, I can't wait to use it on my computer once the bug is fixed. |
I hope it is fixed soon as well. Is it possible that we messed up the setup and did not configure it properly? I guess it is. |
I don't think so because I've done all 3 possible installations. |
I have an M4 and have the same problem (installed via Pinokio). Has anyone found a solution? |
Looking at the error, I believe it's related to the Vercel API being used for model switching, but it's just a theory. |
I'm having this issue as well. I've spent days trying to fix it. Cannot read properties of undefined (reading 'toolCalls') |
This actually did the job for me, and now everything works locally using Ollama. |
@galaridor I just tried this, but it's not working for me :( |
@galaridor may be on to something. |
@BrianLFuller I have not tested with Docker. It works locally for me with qwen 2.5 coder using Ollama, but I believe you still have to set some env variables inside the Dockerfile or docker-compose.yaml files. |
Any update? |
Hello, any update? |
Can you try this PR: #895 |
Getting the same error even when I run it via Pinokio. |
@mutlumehmet I think the M1 chip is a problem for MacBook users. I'm using an M1 Mac. (By the way, were you able to solve it?) |
Hello, same issue. Any update or solution? |
Pinokio released 3.2.0, and Bolt released script version 3.0. I had the same problem, but after I tried the new version the problem seems to have disappeared. |
Another observation: if you post an image and the model does not support vision, this issue occurs. |
I don't think it's related to Pinokio; I'm getting the same issue running locally. It happens when sending prompts with images; regular text prompts work fine. |
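The pattern described above (an image prompt sent to a model without vision support) is consistent with reading `toolCalls` off an undefined object in the streamed response. A minimal defensive sketch in TypeScript, with hypothetical names (the actual bolt.diy types and code differ):

```typescript
// Hypothetical sketch: the StreamStep shape below is illustrative, not
// bolt.diy's actual type. When a model rejects an image prompt, the response
// step can come back undefined, and accessing `step.toolCalls` directly then
// throws "Cannot read properties of undefined (reading 'toolCalls')".
interface StreamStep {
  toolCalls?: { name: string }[];
}

function extractToolCalls(step: StreamStep | undefined): { name: string }[] {
  // Optional chaining plus a fallback turns the crash into an empty result.
  return step?.toolCalls ?? [];
}

console.log(extractToolCalls(undefined).length); // 0
console.log(extractToolCalls({ toolCalls: [{ name: "writeFile" }] }).length); // 1
```

This only suppresses the symptom; the underlying cause (an empty or error response from the provider) still needs handling upstream.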
Hello. The way to solve this: remove bolt.diy from Pinokio and install it again. Do not uninstall Pinokio itself; just reinstall bolt.diy, and no more problems. It was a Pinokio issue with the wrong fork: it seems the installation did not have the latest navigation routing to the right bolt.diy fork. This happened to me, and following these simple steps worked. |
So there are multiple reasons for this to occur; many of those affected are using Pinokio. |
Good news: we have a Docker image published. Note: you have to use docker login to sign in to ghcr.io with your GitHub access token to access the image. |
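For reference, authenticating to GitHub Container Registry works as follows; the username and image path below are placeholders, since the exact published image name is not given in this thread:

```shell
# Sign in to ghcr.io with a GitHub personal access token that has the
# read:packages scope. USERNAME and GITHUB_TOKEN are placeholders.
echo "$GITHUB_TOKEN" | docker login ghcr.io -u USERNAME --password-stdin

# Then pull the published image (illustrative path; use the one from the repo).
docker pull ghcr.io/OWNER/bolt.diy:latest
```

Passing the token via `--password-stdin` keeps it out of your shell history and process list.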
Error 404: page not found. |
Ah, the page's visibility is private; I will change that to public shortly. |
I had the issue with a prompt using qwen2.5-coder but not llama3.2. I didn't get it resolved, but I wanted to add my experience. This is with the latest dev branch, which includes PR #895. I wasn't able to try the Docker image, as it is still private. |
Describe the bug
I have installed qwen2.5-coder. I launched Bolt in Pinokio, and I get an error.
Link to the Bolt URL that caused the error
/
Steps to reproduce
Launch Bolt in Pinokio
Select Ollama - qwen2.5-coder:32b
Enter a prompt
Expected behavior
It should start.
Screen Recording / Screenshot
Platform
Provider Used
ollama
Model Used
qwen2.5-coder:32b
Additional context
This happens when the prompt is enhanced.