Issues: ngxson/wllama

#121 · calling .exit() has unexpected result: wllamaExit is not a function
opened Sep 29, 2024 by flatsiedatsie
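
Issue #121 concerns tearing down the runtime. A minimal sketch of the call pattern in question, assuming the `Wllama` constructor, `loadModelFromUrl`, and `exit()` from `@wllama/wllama` as documented in the repo's README; the asset paths are placeholders:

```ts
import { Wllama } from '@wllama/wllama';

// Asset paths are illustrative; real paths depend on how the
// wllama WASM files are served by your bundler.
const CONFIG_PATHS = {
  'single-thread/wllama.wasm': '/esm/single-thread/wllama.wasm',
  'multi-thread/wllama.wasm': '/esm/multi-thread/wllama.wasm',
};

const wllama = new Wllama(CONFIG_PATHS);
await wllama.loadModelFromUrl('https://example.com/model.gguf');

// #121: this call should tear down the worker and free WASM memory,
// but instead throws "wllamaExit is not a function".
await wllama.exit();
```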

#120 · cannot find tokenizer merges in model file
Label: llama.cpp related (issues related to llama.cpp upstream source code, mostly unrelated to wllama)
opened Sep 27, 2024 by flatsiedatsie

#110 · createCompletion stuck when it runs out of context
Label: bug (something isn't working)
opened Aug 19, 2024 by ngxson
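
#110 describes generation that never terminates once the context window fills. A hedged sketch of the call involved, continuing from the snippet above and assuming `createCompletion`'s `nPredict` and `sampling` options; capping `nPredict` is a caller-side workaround, not the fix the issue asks for:

```ts
// Bounding nPredict is a caller-side guard so generation cannot
// loop indefinitely once the context is exhausted (#110); the
// underlying hang is wllama's to fix.
const output = await wllama.createCompletion('Tell me a story.', {
  nPredict: 256, // hard cap on generated tokens
  sampling: { temp: 0.7, top_p: 0.9 },
});
console.log(output);
```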

#83 · Add support for AbortController on downloading model
Label: enhancement (new feature or request)
opened Jul 3, 2024 by flatsiedatsie
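
#83 asks for `AbortController` support while a model downloads. A sketch of the standard web pattern the request points at, using plain `fetch` rather than any actual wllama download API:

```ts
// A caller-supplied AbortSignal cancels an in-flight download.
const controller = new AbortController();

async function downloadModel(url: string): Promise<ArrayBuffer> {
  const res = await fetch(url, { signal: controller.signal });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return await res.arrayBuffer();
}

// e.g. wired to a "Cancel" button:
// controller.abort(); // rejects the pending fetch with an AbortError
```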

#66 · Add WebGPU support
Label: llama.cpp related
opened Jun 8, 2024 by ngxson

#56 · Feature request: diversify error messages when loading a model fails
Label: enhancement
opened May 24, 2024 by flatsiedatsie
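
#56: a failed model load currently surfaces as a generic error. A sketch of the caller's side, assuming only that `loadModelFromUrl` rejects on failure; the distinct error kinds mentioned in the comments are hypothetical, illustrating what the issue asks wllama to expose:

```ts
try {
  await wllama.loadModelFromUrl('https://example.com/model.gguf');
} catch (err) {
  // Was it a network error? A malformed GGUF? Out of memory?
  // Today the message does not say.
  console.error('Model load failed:', err);
}
```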

#12 · PostMessage: Data cannot be cloned, out of memory
Label: bug
opened May 4, 2024 by flatsiedatsie
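
#12 reports `postMessage` failing with an out-of-memory clone error when shipping model bytes to a worker. The usual remedy is to transfer the buffer instead of cloning it; a generic Worker sketch, not wllama's internal code:

```ts
// Structured clone copies the buffer, which can OOM for multi-GB
// models. Listing the buffer as a transferable moves ownership to
// the worker instead, so no second copy is ever allocated.
const worker = new Worker(new URL('./worker.ts', import.meta.url), {
  type: 'module',
});
const res = await fetch('https://example.com/model.gguf');
const modelBuffer = await res.arrayBuffer();

// After this call, modelBuffer is detached on the main thread.
worker.postMessage({ type: 'load', buf: modelBuffer }, [modelBuffer]);
```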

#6 · Feature request: GitHub build workflow
Label: enhancement
opened Apr 28, 2024 by flatsiedatsie