
Enhancement: Better ways to send prompts #4708

Open
1 task done
Passerby1011 opened this issue Nov 13, 2024 · 5 comments
Labels
enhancement New feature or request

Comments

@Passerby1011
Contributor

What features would you like to see added?

Prompts from the prompt library are currently sent to the API as user messages, but I believe it would be more appropriate to send them as system messages. I hope the prompt library can be improved in this regard; many other GPT clients on the market already work this way.

More details

Sending prompts as user messages clutters the user's view of their own messages, and a very long prompt produces a lengthy message history on the page.
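
For illustration, the difference at the API level looks roughly like this (a minimal sketch of OpenAI-style chat payloads; the prompt text is a placeholder, and this is not LibreChat's actual code):

```python
# Today: the library prompt goes out as a user message, so it shows up
# in the visible chat history and pushes the real question down the page.
messages_as_user = [
    {"role": "user", "content": "<long prompt from the prompt library>"},
    {"role": "user", "content": "The user's actual question"},
]

# Requested: the prompt goes out as a system message, steering the model
# without appearing as a turn in the conversation view.
messages_as_system = [
    {"role": "system", "content": "<long prompt from the prompt library>"},
    {"role": "user", "content": "The user's actual question"},
]
```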

Which components are impacted by your request?

No response

Pictures

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct
@Passerby1011 Passerby1011 added the enhancement New feature or request label Nov 13, 2024
@danny-avila
Owner

Presets fill this need, but I can see the value of applying a one-time system message through the same UI/UX as prompts, as well as keeping them private when shared with other users.

@Passerby1011
Contributor Author

Passerby1011 commented Nov 13, 2024

Presets cannot switch models quickly, and the same prompt cannot be quickly applied to multiple models from multiple vendors; everything requires manual setup, which is not as convenient or fast as using a prompt.
Sending prompts to the AI as user messages is inelegant on the visual level; on the functional level, it consumes a user message, and the user must wait for the AI's reply before starting the actual conversation.

@harrisonhxy

harrisonhxy commented Jan 10, 2025

This is exactly the problem I ran into when configuring LibreChat recently. I wonder if you have solved it in a better way?

At present, my method is to configure Model Specs (a sketch follows at the end of this comment). For details, see docs and #1617

I am not sure this setup is appropriate, although it achieves the result I want, since I am also not used to using Presets.

But in use I found a problem: this setup causes very large token consumption! The reason may be that the promptPrefix content does not override the 'system prompt'. So I also want to ask: is it possible not to use a 'system prompt' (or system message) in Model Specs? How should I override the system prompt? Where is LibreChat's 'system prompt'? I can't find it. @danny-avila
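
For reference, a minimal sketch of the Model Specs setup described above (the spec name, label, model, and prompt text are placeholders; see the linked docs for the full field list):

```yaml
# librechat.yaml (sketch only; field names per the model_specs docs)
modelSpecs:
  list:
    - name: "writing-helper"      # internal identifier (placeholder)
      label: "Writing Helper"     # shown in the model selector
      preset:
        endpoint: "openAI"
        model: "gpt-4o"
        # promptPrefix is the text that ends up steering the model
        promptPrefix: "You are a concise writing assistant."
```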

@danny-avila
Owner

promptPrefix does get added to the chat history as a system message. A system message must be re-sent on every run in order to persist, just like all other history.

There is no other "system prompt" for LibreChat, aside from other fields that add to this, or add to the current message.

The docs were recently updated and go into many of the fields in detail:
https://www.librechat.ai/docs/configuration/librechat_yaml/object_structure/model_specs
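
In pseudocode, the behavior described above amounts to something like this (a rough sketch, not LibreChat's actual implementation):

```python
def build_payload(prompt_prefix: str, history: list[dict]) -> list[dict]:
    """Rebuild the message list for every run: the promptPrefix is
    prepended as a system message each time, because system messages
    only persist by being re-sent along with the rest of the history."""
    messages = []
    if prompt_prefix:
        messages.append({"role": "system", "content": prompt_prefix})
    messages.extend(history)  # all prior user/assistant turns
    return messages
```

So the token cost of the prefix recurs on every request, on top of the growing history.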

@harrisonhxy


Thank you very much, danny! I think I know why my LibreChat consumes so many tokens: I had enabled Artifacts and shadcn/ui 😂
By the way, Artifacts and shadcn/ui can only be set globally in 'Settings->Beta feature'; they can't be configured in Model Specs at present, right?
