
Cannot read properties of undefined (reading 'toolCalls') #860

Open
500Kilogram opened this issue Dec 21, 2024 · 35 comments

Comments

@500Kilogram

Describe the bug

I have installed qwen2.5-coder. I launched Bolt in Pinokio, and I'm getting an error.

Link to the Bolt URL that caused the error

/

Steps to reproduce

Launch Bolt in Pinokio.
Select Ollama as the provider and qwen2.5-coder:32b as the model.
Submit a prompt.

Expected behavior

Generation starts normally.

Screen Recording / Screenshot

(screenshot attached)

Platform

  • OS: [e.g. macOS, Windows, Linux]
  • Browser: [e.g. Chrome, Safari, Firefox]
  • Version: [e.g. 91.1]

Provider Used

ollama

Model Used

qwen2.5-coder:32b

Additional context

(screenshot attached)
This happens when the prompt is enhanced.

@nzgl-g

nzgl-g commented Dec 21, 2024

I have this issue too; I tried the Docker installation as well. Hope they fix it.

@b22-dev

b22-dev commented Dec 21, 2024

Same issue here.

@thecodacus
Collaborator

Can you try another provider?

@ZakhilAmin

Same issue here. I have tried running it with Node.js on my local computer.

@kayed85

kayed85 commented Dec 21, 2024

Same issue here. MacBook Pro (M1 Pro), macOS Sequoia 15.2.

@Ibaad-Ur-Rehman

Same issue for me.

@b22-dev

b22-dev commented Dec 21, 2024

The same issue occurs even when I use other providers.

I tested it with these providers:
TogetherAI
Groq
xAI

However, it worked with local Ollama (Llama 3.1 8B).

Device:
MacBook M1 Pro.

EDIT: I inspected the browser's network traffic to identify the issue. The response to the request is empty; in the payload I can see that the API key is set and everything looks fine, but the request still comes back with an error.
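For anyone else inspecting the traffic: a request like this can be replayed outside the browser to see the raw status and body. This is a sketch under assumptions from this thread only — that the dev server listens on port 5173, that the chat route is /api/chat, and that the user message carries the [Model: ...]/[Provider: ...] prefix quoted later in this thread — none of which is confirmed API documentation.

curl -i http://localhost:5173/api/chat \
  -H 'Content-Type: application/json' \
  -d '{"messages":[{"role":"user","content":"[Model: grok-2-1212]\n\n[Provider: xAI]\n\nhello"}]}'

An empty body or a non-2xx status here would confirm the request fails server-side before any stream is produced, which matches the empty responses described above.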

@thecodacus
Collaborator

thecodacus commented Dec 21, 2024

This happens in two scenarios: either the model was not selected correctly and it's using the default model instead of the selected one, or the API key is set wrong.

Can you set the API keys in the UI, then switch the model dropdown to some other model and back to the one you want to use? The "model", not the "provider".

Also, can you specify which version and commit hash you are using?
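For anyone unsure where to find these: the version should be in the repo's package.json, and the commit hash of a local checkout can be printed with

git rev-parse HEAD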

@thecodacus
Collaborator

> EDIT: I inspected the browser's network traffic to identify the issue. The response to the request is empty; in the payload I can see that the API key is set and everything looks fine, but the request still comes back with an error.

Were you able to check the selected model? It's in the messages, prefixed to the content of the user message.

@beenycool

The same happens to me; it just stopped working after I finally got it running. I got this in the console:

TypeError: Cannot read properties of undefined (reading 'toolCalls')
at Object.flush (file:///C:/Users//Documents/soody/website/bolt.diy-0.0.3/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/ai/core/generate-text/stream-text.ts:569:33)
at ensureIsPromise (node:internal/webstreams/util:192:19)
at transformStreamDefaultSinkCloseAlgorithm (node:internal/webstreams/transformstream:569:5)
at Object.close (node:internal/webstreams/transformstream:366:14)
at ensureIsPromise (node:internal/webstreams/util:192:19)
at writableStreamDefaultControllerProcessClose (node:internal/webstreams/writablestream:1142:28)
at writableStreamDefaultControllerAdvanceQueueIfNeeded (node:internal/webstreams/writablestream:1222:5)
at writableStreamDefaultControllerClose (node:internal/webstreams/writablestream:1189:3)
at writableStreamClose (node:internal/webstreams/writablestream:699:3)
at writableStreamDefaultWriterClose (node:internal/webstreams/writablestream:1071:10)
at writableStreamDefaultWriterCloseWithErrorPropagation (node:internal/webstreams/writablestream:1063:10)
at node:internal/webstreams/readablestream:1439:15
at complete (node:internal/webstreams/readablestream:1312:9)
at processTicksAndRejections (node:internal/process/task_queues:95:5)
21:38:24 [vite] Internal server error: Cannot read properties of undefined (reading 'toolCalls')
(same stack trace as above; logged x4)
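The flush frame in these traces points into the Vercel AI SDK (the ai package) at stream-text.ts. Here is a minimal TypeScript sketch of the apparent failure mode — a reconstruction for illustration, not the SDK's actual source — assuming flush reads toolCalls off the last recorded step:

// Reconstruction of the failure mode only; not the `ai` package source.
type Step = { text: string; toolCalls: unknown[] };

const recordedSteps: Step[] = [];

// A step would normally be recorded for each successful provider response;
// an empty or errored response records nothing at all.
function onProviderResponse(chunks: string[]): void {
  for (const chunk of chunks) {
    recordedSteps.push({ text: chunk, toolCalls: [] });
  }
}

// Sketch of the flush logic implicated by the stack trace.
function flush(): void {
  const lastStep = recordedSteps[recordedSteps.length - 1];
  // With zero recorded steps, lastStep is undefined, and this line throws:
  // TypeError: Cannot read properties of undefined (reading 'toolCalls')
  console.log(lastStep.toolCalls);
}

onProviderResponse([]); // empty provider response, as reported in this thread
flush();                // throws the reported TypeError

If that reading is right, the TypeError is only a downstream symptom; the root cause is the provider call returning nothing (wrong key, wrong model name, or unreachable base URL), which fits the empty network responses reported above.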

@vrcnx

vrcnx commented Dec 22, 2024

Same issue here. It was working fine, then I attempted to upload a .py file and the error occurred. Using Claude 3.5 Sonnet.

I also have an OpenAI API key I tried; 4o-mini worked, but 4o didn't.

@b22-dev

b22-dev commented Dec 22, 2024

> EDIT: I inspected the browser's network traffic to identify the issue. The response to the request is empty; in the payload I can see that the API key is set and everything looks fine, but the request still comes back with an error.

> Were you able to check the selected model? It's in the messages, prefixed to the content of the user message.

Yes, I'm able to check it.

[Model: grok-2-1212]\n\n[Provider: xAI]\n\n

@beenycool

Any fixes?

@digason

digason commented Dec 22, 2024

As a temporary fix, after you activate the venv, you can run the following to roll back the huggingface_hub package, which is the source of the issue:

pip uninstall -y huggingface_hub
pip install huggingface_hub==0.25.2
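If you try this, you can confirm the pinned version actually took effect with

pip show huggingface_hub

which prints the installed Version.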

@devictang

I have the same issue when connecting to Claude. I have saved the API key to .env.local, but it's still not working. An Ollama model running locally works, though.

@BrianLFuller

I'm having the same issue. I'm on a Mac M1 running Sequoia 15.2.
I tried running in Docker and with pnpm, and tried Chrome Canary and the Brave browser.
It happens with local Ollama and with OpenAI plus a key. I've tried running from localhost:5173 and 172.0.0.1:5173. Same issue across the board.

@ThibaultBarral

I have the same problem as all of you. I'm on Sequoia 15.1.1.

@thecodacus
Collaborator

Will look into this.

@b22-dev

b22-dev commented Dec 25, 2024

> Will look into this.

Thank you for your efforts.

@devanubhav

I read that it was related to Docker. I ran Docker as per the setup instructions, but it didn't work either.

@ThibaultBarral

Me too. I think it's linked to the M1 chip.

@devanubhav

(screenshot attached) It worked with llama3 (8B), but as visible in the screenshot, it didn't load in the container.

@devictang

> Me too. I think it's linked to the M1 chip.

Same; I'm using M-series chips too. This is probably what's causing the problem.

@galaridor

I have the same issue, and I do not believe it is related to the M1 chip, since I'm seeing it on a 7th-gen AMD processor.

@ThibaultBarral

In any case, I can't wait for the bug to be fixed so I can use it on my computer.

@galaridor

Hope it is fixed soon as well. Is it possible that we messed up the setup and did not configure it properly? I guess it is.

@ThibaultBarral

I don't think so, because I've tried all three possible installation methods.

@webita

webita commented Dec 26, 2024

I have an M4 and have the same problem (installed via Pinokio). Has anyone found a solution?
I have tried downgrading huggingface-hub, but it doesn't work.

@galaridor

Installing dependencies using pnpm instead of npm led to a different problem. Seems like the old one is not showing anymore, though.

(screenshot attached)

@thecodacus
Collaborator

> (screenshot attached) It worked with llama3 (8B), but as visible in the screenshot, it didn't load in the container.

This is the LLM not following instructions. To open a container, the LLM has to write the code inside special tags, and it's not doing that.
I often see smaller LLMs fail to follow the system prompt correctly. Try other models, e.g. via Together AI; it's free.
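For context on "special tags": bolt parses artifact tags out of the model's reply and only creates files or runs commands when they are present. A compliant reply looks roughly like this (shape simplified for illustration; tag and attribute names follow bolt's prompt conventions as I understand them, and the id/title/file values are placeholders):

<boltArtifact id="hello-app" title="Hello App">
  <boltAction type="file" filePath="index.js">
console.log('hello');
  </boltAction>
  <boltAction type="shell">
node index.js
  </boltAction>
</boltArtifact>

A model that answers with plain prose or a bare code block, as smaller models often do, gives the parser nothing to act on, so no container activity appears.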

@ReEnvisionAi

Looking at the error, I believe it's related to the Vercel AI SDK being used for model switching. But it's just a theory.

@twestphq

I'm having this issue as well. I've spent days trying to fix it.

  • Different browsers
  • Different versions of bolt
  • Different LLMs
  • Docker, normal install
  • Different projects
  • Troubleshooting different projects, uninstalling, reinstalling dependencies, etc., etc.

The list goes on. I start from scratch, get my project set up, get X part of the way through, then error, troubleshoot, unfixable, start again.

Cannot read properties of undefined (reading 'toolCalls')
at Object.flush (file:///C:/Users/twest/bolt.diy/node_modules/ai/core/generate-text/stream-text.ts:569:33)
at ensureIsPromise (node:internal/webstreams/util:192:19)
at transformStreamDefaultSinkCloseAlgorithm (node:internal/webstreams/transformstream:569:5)
at Object.close (node:internal/webstreams/transformstream:366:14)
at ensureIsPromise (node:internal/webstreams/util:192:19)
at writableStreamDefaultControllerProcessClose (node:internal/webstreams/writablestream:1142:28)
at writableStreamDefaultControllerAdvanceQueueIfNeeded (node:internal/webstreams/writablestream:1222:5)
at writableStreamDefaultControllerClose (node:internal/webstreams/writablestream:1189:3)
at writableStreamClose (node:internal/webstreams/writablestream:699:3)
at writableStreamDefaultWriterClose (node:internal/webstreams/writablestream:1071:10)
at writableStreamDefaultWriterCloseWithErrorPropagation (node:internal/webstreams/writablestream:1063:10)
at node:internal/webstreams/readablestream:1439:15
at complete (node:internal/webstreams/readablestream:1312:9)
at processTicksAndRejections (node:internal/process/task_queues:95:5)

@galaridor

  1. Rename the env file to just ".env" and set the API key or base URL there. If you are using a local model, don't use localhost; use 127.0.0.1 instead.
  2. Change the "dev" script in the package.json file to "remix vite:dev --host 0.0.0.0 --port {your port} --open".

This actually did the job for me, and now everything works locally using Ollama.
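Concretely, the two steps above amount to something like the following. The variable names follow bolt.diy's .env.example as far as I know; the port, key, and Ollama port are placeholders, so adjust to your setup.

.env:
OLLAMA_API_BASE_URL=http://127.0.0.1:11434
ANTHROPIC_API_KEY=your-key-here

package.json:
"scripts": {
  "dev": "remix vite:dev --host 0.0.0.0 --port 5173 --open"
}

Binding the dev server to 0.0.0.0 and addressing Ollama by 127.0.0.1 sidesteps localhost/IPv6 resolution quirks, which may be why this works.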

@basil2style

@galaridor I just tried this, but it's not working for me :(

@BrianLFuller

@galaridor may be on to something.
Instead of renaming my .env.local, I made a copy of it and named it .env; I now have both in the root of my project.
After grabbing the latest main branch and changing my dev script to this:
"dev": "node pre-start.cjs && remix vite:dev --host 0.0.0.0 --port 5173 --open",
I did a pnpm install and pnpm run dev. I was able to get a response using the Llama 3.2 model via Ollama. Llama 3.3, I think, is timing out, and my OpenAI attempts still don't work.
Running in Docker still gives the "Cannot read properties of undefined (reading 'toolCalls')" popup on all LLMs.
