Compiling Custom Model Fails to Load Into Web-LLM #633

Open

Justin-Cignal opened this issue Nov 21, 2024 · 0 comments

Justin-Cignal commented Nov 21, 2024

I followed the instructions here to add a new model and load it locally: https://llm.mlc.ai/docs/deploy/webllm.html. I was able to successfully convert the weights, generate a config, and compile the model. However, I get this error when trying to initialize the model:

Error: Cannot find global function mlc.grammar.BNFGrammarGetGrammarOfJSON
[screenshot: browser console showing the error above]
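
For reference, this is roughly how I register and initialize the locally compiled model (a minimal sketch; the localhost URLs and the model_id are placeholders for my actual setup):

```typescript
import * as webllm from "@mlc-ai/web-llm";

// Custom app config pointing at the locally compiled artifacts.
// The localhost URLs are placeholders for wherever the converted
// weights and the compiled .wasm are actually served from.
const appConfig: webllm.AppConfig = {
  model_list: [
    {
      model: "http://localhost:8000/SmolLM2-135M-Instruct-q4f32_1-MLC",
      model_id: "SmolLM2-135M-Instruct-q4f32_1-MLC",
      model_lib: "http://localhost:8000/SmolLM2-135M-Instruct-q4f32_1-webgpu.wasm",
    },
  ],
};

// Initialization is where the error is thrown:
// "Error: Cannot find global function mlc.grammar.BNFGrammarGetGrammarOfJSON"
const engine = await webllm.CreateMLCEngine(
  "SmolLM2-135M-Instruct-q4f32_1-MLC",
  { appConfig },
);
```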

I have also run the compilation on this model from Hugging Face, https://huggingface.co/mlc-ai/SmolLM2-135M-Instruct-q4f32_1-MLC, and got the same issue, so I know the problem is with the compilation to the .wasm file. When I use the prebuilt wasm file from GitHub (SmolLM-135M-Instruct-q4f32_1-ctx2k_cs1k-webgpu.wasm), the model loads fine.
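
For comparison, this is roughly how the working path looks (a minimal sketch; I'm assuming the model_id is present in web-llm's bundled prebuilt model list, and that the library falls back to its prebuiltAppConfig when no appConfig is passed):

```typescript
import * as webllm from "@mlc-ai/web-llm";

// No custom appConfig: web-llm uses its bundled prebuiltAppConfig,
// whose model_lib entries point at the prebuilt .wasm files hosted
// on GitHub. This path initializes without the grammar error.
const engine = await webllm.CreateMLCEngine("SmolLM2-135M-Instruct-q4f32_1-MLC");
```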

I also tested this on gemma-2b-it-q4f32_1-MLC and hit the same issue.
Compilation was tested on mlc-llm v0.18 and v0.17.2.

Compilation was tested on the following systems:

  • Ubuntu 22.04.5 LTS, NVIDIA GeForce RTX 3060
  • Ubuntu 22.04 LTS, NVIDIA GeForce RTX 4090
  • Xubuntu 22.04.5 LTS, NVIDIA GeForce RTX 3060

Loading with web-llm was tested on the following browsers:

  • Chrome Canary v133.0.6852.0, Windows
  • Google Chrome Stable v130.0.6723.91, Ubuntu