Issues: lmstudio-ai/mlx-engine
#68 · Add pre-commit hook for ruff format and ruff lint and check pre-commit in CI · labels: good first issue · opened Dec 30, 2024 by neilmehta24
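Issue #68 asks for ruff-based pre-commit hooks. A minimal `.pre-commit-config.yaml` sketch using the official `ruff-pre-commit` repo; the pinned `rev` is an assumption and should be set to the latest release:

```yaml
# Sketch for issue #68: run ruff lint and ruff format before each commit.
# The rev below is an assumed version pin; update it to the current release.
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.8.4
    hooks:
      - id: ruff          # linter (optionally add: args: [--fix])
      - id: ruff-format   # formatter
```

In CI, the same checks can be exercised with `pre-commit run --all-files`, which fails the job if any hook modifies files or reports errors.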
#66 · Cache pre-processing of large documents fully embedded into context · labels: enhancement · opened Dec 26, 2024 by neilmehta24
#57 · Failed to load the model (The requested number of bits 3 is not supported.) · labels: beta-available, fixed-in-next-release (the next release of LM Studio fixes this issue) · opened Dec 8, 2024 by certik
#49 · Add tests for multi-image VLM prompts, and for followup prompts · labels: enhancement, good first issue · opened Nov 27, 2024 by neilmehta24
#42 · Tokens returned with a GenerationResult are off when compared to text · labels: bug · opened Nov 22, 2024 by mattjcly
#40 · Set wired limit before starting generation · labels: enhancement, fixed-in-next-release · opened Nov 22, 2024 by neilmehta24
#37 · Add logprobs to generation result · labels: enhancement · opened Nov 15, 2024 by neilmehta24
#33 · "Failed to Index Model" error with mlx-community/Mamba-Codestral-7B-v0.1-8bit · labels: bug, fixed-in-next-release · opened Nov 8, 2024 by YorkieDev
#31 · Add KV cache quantization feature · labels: enhancement · opened Nov 8, 2024 by neilmehta24
#29 · Phi 3.5 Vision Instruct fails to load with "Trust remote code" error · labels: enhancement · opened Nov 6, 2024 by YorkieDev
#28 · Pixtral 12B context size cannot be configured beyond 2048 in LM Studio · labels: bug, fixed-in-next-release · opened Nov 6, 2024 by neilmehta24
#27 · Repeated-generation regression with Qwen2-VL-7B-Instruct-4bit and default LM Studio generation config · labels: bug, fixed-in-next-release · opened Nov 4, 2024 by mattjcly
#25 · LM Studio (0.3.5) fails to load mllama model · labels: fixed-in-next-release · opened Nov 1, 2024 by Aaronthecowboy
#17 · Qwen2-VL-7B giving error on 0.3.5 · labels: fixed-in-next-release · opened Oct 22, 2024 by ThakurRajAnand
#13 · Ministral 8B downloads but fails to load · labels: fixed-in-next-release · opened Oct 17, 2024 by bhupesh-sf