Replies: 1 comment
@Miscend
I've been experimenting with using Mixtral as the LLM for GPT Pilot instead of GPT-4, but I'm running into problems getting the clarification loop to work properly.
Specifically, when I integrate Mixtral into the `create_gpt_chat_completion` call, the clarification questions get stuck in endless loops rather than terminating appropriately with "EVERYTHING_CLEAR".

I've found a workaround: use GPT-4 for the initial high-level prompts up through the first dev step, then switch to Mixtral by modifying the LLM integration. This seems to avoid getting stuck.
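For anyone hitting the same endless loop, one defensive approach is to bound the clarification loop and match the sentinel loosely, since weaker models often wrap or lowercase it. This is just a minimal sketch of the idea; `ask_llm`, `MAX_CLARIFY_ROUNDS`, and the conversation shape are my own illustrative names, not GPT Pilot's actual API:

```python
# Hypothetical sketch: a bounded clarification loop.
# ask_llm(convo) -> str is an assumed callable, not GPT Pilot's real interface.
MAX_CLARIFY_ROUNDS = 10

def run_clarification(convo, ask_llm):
    """Ask clarifying questions until the model signals it is done,
    with a hard cap so a weaker model cannot loop forever."""
    for _ in range(MAX_CLARIFY_ROUNDS):
        response = ask_llm(convo)
        # Some models wrap or change the case of the sentinel, so match loosely.
        if "EVERYTHING_CLEAR" in response.upper():
            return convo
        convo.append(response)
    # Hit the cap: treat it as "clear enough" rather than spinning forever.
    return convo
```

The loose `in response.upper()` check is the important part: in my experience, open models rarely emit the sentinel verbatim on its own line the way GPT-4 does.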
However, it would be ideal to get Mixtral working end-to-end, so I wanted to check whether anyone has tips on getting the clarification loop to terminate reliably with alternative models.
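In the meantime, the workaround above can be expressed as a simple per-step model router. The step names and model identifiers here are assumptions for illustration, not GPT Pilot's actual configuration:

```python
# Hypothetical sketch of the interim workaround: route the early
# high-level steps to GPT-4 and later dev steps to Mixtral.
# Step names and model IDs are illustrative assumptions.
EARLY_STEPS = {"project_description", "architecture", "user_stories"}

def pick_model(step_name: str) -> str:
    """Return the model to use for a given pipeline step."""
    if step_name in EARLY_STEPS:
        return "gpt-4"        # handles open-ended clarification reliably
    return "mixtral-8x7b"     # cheaper/local once steps become concrete
```

A router like this keeps the switch in one place, so it's easy to move the boundary (or drop GPT-4 entirely) once Mixtral terminates correctly.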
I'm very new to experimenting with alternatives to GPT-4, so any guidance would be much appreciated! Please let me know if there are any other details I can provide that would help troubleshoot this issue.
Thanks for creating such a great framework for leveraging LLMs! Looking forward to contributing back as I learn more.