I'm looking to reproduce some of the open-source model results from the VWA paper:
(1) Mixtral-8x7B as the LLM backbone for the caption-augmented agent
(2) CogVLM as the multimodal model
Could someone share the flags/commands or instructions needed to set up these configurations for evaluation?