Issues: meta-llama/llama-models
- #241: Bad output after 80k tokens: rope scaling_factor 8 is used for 1B and 3B 3.2 models (opened Dec 10, 2024 by yieldthought)
- #240: Handling token limit issues in Llama 3.2:3b-Instruct model (2048 tokens max) (opened Dec 10, 2024 by pandiyan90)
- #237: Llama 3.3 70B Instruct download hits client error '403 Forbidden' for url (opened Dec 6, 2024 by BowenBao)
- #235: llama_models/scripts/example_chat_completion.py fails (opened Dec 4, 2024 by ganeshkinkar)
- #232: Question about Llama 3.2 3B Instruct, multi-shot prompts, and how to represent examples (opened Dec 3, 2024 by andrewfr)
- #229: Inconsistent tool-calling behavior with Llama 3.1 70B model on AWS Bedrock (opened Nov 29, 2024 by nileshmalode11)
- #212: NotADirectoryError: [WinError 267] The directory name is invalid: (opened Nov 7, 2024 by taras-kamo)
- #200: Unable to determine the device handle for GPU0000:17:00.0: Unknown Error (opened Oct 27, 2024 by Fujiaoji)
- #194: OSError with Llama3.2-3B-Instruct-QLORA_INT4_EO8 - missing files? (opened Oct 25, 2024 by StephenQuirolgico)