There are cases where you might want to change the model (or even the prompt) used for a given request based on the current context. Adding a new `PromptSelector` plugin would let a wave evaluate the current context and select the appropriate prompt and model to use. Here are a few scenarios this would enable:
- Choosing an alternate model for longer prompts. Use a low-cost model for short tasks, but switch to GPT-4 when you need the added tokens.
- Choosing an alternate model based on query complexity. You could use a classifier to select the model based on the complexity of the current query.
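
For illustration, here's a minimal sketch of what the plugin interface could look like, with a length-based selector covering the first scenario. Every name and shape here (`SelectionContext`, `PromptSelection`, `PromptSelector`, the model IDs) is hypothetical, not an existing API:

```typescript
// Hypothetical shapes - illustrative only, not the library's actual API.
interface SelectionContext {
    input: string;        // current user input
    isFeedback?: boolean; // true when re-prompting the model with feedback
}

interface PromptSelection {
    prompt: string; // name of the prompt template to render
    model: string;  // model to call, e.g. 'gpt-3.5-turbo' or 'gpt-4'
}

interface PromptSelector {
    // Called by the wave before each request so the plugin can inspect
    // the current context and pick the prompt + model to use.
    selectPrompt(context: SelectionContext): Promise<PromptSelection>;
}

// Example: stay on a cheaper model for short inputs, switch to GPT-4
// once the input exceeds a configurable length threshold.
class LengthBasedPromptSelector implements PromptSelector {
    constructor(private prompt: string, private maxShortLength = 2000) {}

    async selectPrompt(context: SelectionContext): Promise<PromptSelection> {
        const model = context.input.length > this.maxShortLength
            ? 'gpt-4'
            : 'gpt-3.5-turbo';
        return { prompt: this.prompt, model };
    }
}
```

A classifier-based selector for the second scenario would have the same shape, just replacing the length check with a call to whatever classifier ranks query complexity.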
There would be 2 default implementations provided: the `DefaultPromptSelector` would simply select the configured prompt every time, while the `FeedbackPromptSelector` would choose an alternate prompt when giving the model feedback.
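
Sketches of those two defaults might look like the following, continuing the hypothetical `PromptSelector` interface from the block above (again, names and constructor shapes are assumptions for illustration):

```typescript
// Always returns the statically configured prompt and model.
class DefaultPromptSelector implements PromptSelector {
    constructor(private selection: PromptSelection) {}

    async selectPrompt(): Promise<PromptSelection> {
        return this.selection;
    }
}

// Uses the normal prompt for the initial request and switches to a
// feedback prompt when the model is being re-prompted with feedback.
class FeedbackPromptSelector implements PromptSelector {
    constructor(
        private normal: PromptSelection,
        private feedback: PromptSelection
    ) {}

    async selectPrompt(context: SelectionContext): Promise<PromptSelection> {
        return context.isFeedback ? this.feedback : this.normal;
    }
}
```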
Hi @Stevenic, this project is awesome. I've been using it at work for a little R&D; the application is in the media industry, so I've been using it to generate automated translations of SRT files. It's been a great first step, and the idea is to increase output and automate the operational workflow as much as possible. I've been thinking of using it for some side projects as well, and I was wondering if you were open to contributors; I'd love to pitch in.
Of course @rmolinamir... I'd love contributors... Let me set up discussions for the project and I'll invite you as a collaborator. I'm in the middle of adding support for OpenAI functions, so there are quite a few moving pieces right now.