Issues: OpenGVLab/InternVL
Not a bug, just a newbie question: does this model work under "lm-studio", or does it currently have to be run through the CLI?
#697 · opened Nov 9, 2024 by kundeng
[Bug] Turn off flash attention during finetuning because of NPUs
#696 · opened Nov 9, 2024 by BrenchCC
Does Intern-VL support CPU deployment? Is there a tutorial?
#694 · opened Nov 7, 2024 by fengqiliang93
How to use the Intern-VL 2-1B model for a VQA classification task?
#693 · opened Nov 4, 2024 by Aritra02091998
[Bug] ModuleNotFoundError: No module named 'flash_attn'
#690 · opened Nov 1, 2024 by DankoZhang
[Bug] Loss in continual training that loads only the PEFT adapter does not keep decreasing from the previous round
#689 · opened Nov 1, 2024 by 14H034160212
After SFT of the 40B model with LoRA only, inference raises "Expected all tensors to be on the same device"
#685 · opened Oct 29, 2024 by hahapt
Diff of the pretrain-stage configuration between the 34B and 8B models
#683 · opened Oct 28, 2024 by royzhang12
[Feature] When will the InternVL2 paper get released?
#631 · opened Oct 18, 2024 by KeesariVigneshwarReddy