Issues: ml-explore/mlx-examples
mlx_lm with llama-3.3-70b-instruct works like a base model in some cases (#1162, opened Dec 15, 2024 by chigkim)
[Feature Request] Unable to unload or swap adapters at runtime (#1108, opened Nov 18, 2024 by chimezie)
load_custom_hf_dataset not handling the text_feature argument properly (#1087, opened Nov 3, 2024 by chimezie)
[Feature Request] Support for LLM2Vec encoder-decoder LLMs (#1052, opened Oct 16, 2024 by HydrogenBombaklot)
Enable custom masks as optional input for models for batch processing (#1044, opened Oct 13, 2024 by nath1295)
Error: llama runner process has terminated: GGML_ASSERT(src1t == GGML_TYPE_F32) failed (#1043, opened Oct 13, 2024 by lhwong)
T5 tokenizer decoding error with CodeT5+ [bug] (#1021, opened Oct 9, 2024 by zcbenz)
Support for Nvidia Nemotron and NVLM 1.0 [enhancement] (#1007, opened Oct 1, 2024 by vlbosch)
I tried madlad400, but the output is incorrect when running in float16 (#980, opened Sep 7, 2024 by otmb)
[Feature Request] mlx_lm.cache_prompt: save the cached prompt as plaintext in the KV-cache-file metadata [enhancement] (#978, opened Sep 6, 2024 by mark-lord)
[Feature Request] mlx_lm: add support for the GLM4 family of language models (#952, opened Aug 25, 2024 by Nekuromento)
ProTip! Find all open issues with in-progress development work using the linked:pr search qualifier.
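For example, a full query combining linked:pr with the standard GitHub issue filters (a sketch; is:issue and is:open are standard GitHub search qualifiers) can be entered in the issues search box:

is:issue is:open linked:pr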