Enhancement: Better ways to send prompts #4708
Comments
Presets fill this need, but I can see the value of applying a one-time system message in the same UI/UX as prompts, as well as keeping them private when shared with other users.
The preset feature cannot quickly switch models, and the same prompt cannot be quickly applied to multiple models from multiple vendors; everything requires manual setup, which is not as convenient or fast as using a prompt.
This is exactly the problem I encountered when I configured LibreChat recently. I wonder if you have solved it in a better way? At present, my method is implemented by configuring Model Specs; for details, see the docs and #1617. I am not sure this setting is appropriate, although it achieves the result I want, because I am also not used to using . But when using it, I found a problem: this setting causes very large token consumption! The reason may be that the content of
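For reference, a Model Specs setup of the kind described above is configured in librechat.yaml. A minimal sketch follows; the field layout (`modelSpecs`, `list`, `preset`, `promptPrefix`) is taken from the LibreChat docs, while the spec name, label, model, and prompt text are placeholders, not the commenter's actual config:

```yaml
# Hypothetical librechat.yaml fragment: a Model Spec that bundles a
# model choice with a reusable prompt via promptPrefix.
modelSpecs:
  list:
    - name: "translator-gpt4o"       # placeholder identifier
      label: "Translator (GPT-4o)"   # label shown in the model picker
      preset:
        endpoint: "openAI"
        model: "gpt-4o"
        promptPrefix: "You are a careful translator. Reply only with the translation."
```

Note that `promptPrefix` is added to every request, so a long prefix is re-sent with each turn, which is consistent with the token-consumption concern raised above.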
LibreChat has no other "system prompt" beyond this, apart from other fields that add to it or to the current message. The docs were recently updated and cover many of these fields in detail:
Thank you very much, Danny! I think I know why my LibreChat consumes a lot of tokens: it is because I enabled
What features would you like to see added?
The prompts in the prompt library are currently sent to the API as user messages, but I believe it would be more appropriate to send them as system messages. I hope the prompt library can be improved in this regard; many other GPT clients on the market work this way.
More details
Sending prompts as user messages disrupts the user's reading of their own messages, and if a prompt is very long, it creates a lengthy message history on the page.
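The distinction requested above can be sketched with OpenAI-style chat payloads (plain dicts, no API call; the prompt text and user message are illustrative, not from LibreChat's code):

```python
# Sketch of the two behaviors discussed above.
prompt = "You are a concise assistant."  # example prompt-library entry

# Current behavior: the library prompt is prepended as a *user*
# message, so it appears in the visible conversation history.
as_user = [
    {"role": "user", "content": prompt},
    {"role": "user", "content": "Summarize this article."},
]

# Requested behavior: the same prompt is sent as a *system*
# message, which chat UIs typically keep out of the history view.
as_system = [
    {"role": "system", "content": prompt},
    {"role": "user", "content": "Summarize this article."},
]
```

In both cases the model receives the prompt; only the role changes, which is what determines whether the prompt clutters the on-page message history.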
Which components are impacted by your request?
No response
Pictures
No response