Prompt_security #920
base: develop
Conversation
Another piece of feedback: as long as we have Colang 1.0, one should not use the same flow name for both input and output rails, which this PR currently does. When both input and output rails are activated and the interaction is multi-round, in the subsequent rounds both the user and bot messages might be available in a context variable (one can argue that this is a bug). So passing them explicitly in the action definition is the appropriate way to do it. I will highlight the code lines that need this change.
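For illustration, here is a minimal sketch of an action that receives the messages explicitly; the parameter names and the placeholder blocking logic are assumptions, not the final implementation:

from typing import Optional

from nemoguardrails.actions import action


@action(name="protect_text")
async def protect_text(
    user_prompt: Optional[str] = None, bot_response: Optional[str] = None
) -> bool:
    """Return True when the given text should be blocked (sketch only)."""
    # The input rail passes the user message explicitly and the output rail
    # passes the bot message explicitly, so the action never has to guess
    # which context variables happen to be set in the current round.
    text = bot_response if bot_response is not None else user_prompt
    if not text:
        return False
    # Placeholder verdict; a real implementation would call the Prompt
    # Security API with `text` and return its decision.
    return "ignore all previous instructions" in text.lower()

On the Colang 1.0 side, the input flow would then call execute protect_text(user_prompt=$user_message) and the output flow would call execute protect_text(bot_response=$bot_message).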
Applied the suggestion in this comment.
The Colang 2.0 flows also need a change, but I will provide the code.
Thank you @lior-ps, it looks great. I just tried to run the test without the mocks and am facing some issues. Would you please have a look? For example, once I comment out the relevant lines of test_prompt_security_protection_input:

import pytest

from nemoguardrails import RailsConfig
from tests.utils import TestChat


@pytest.mark.unit
def test_prompt_security_protection_input():
    config = RailsConfig.from_content(
        yaml_content="""
models: []
rails:
  input:
    flows:
      - protect prompt
""",
        colang_content="""
define user express greeting
  "hi"

define flow
  user express greeting
  bot express greeting

define bot inform answer unknown
  "I can't answer that."
""",
    )
    chat = TestChat(
        config,
        llm_completions=[
            " express greeting",
            ' "Hi! My name is John as well."',
        ],
    )
    # chat.app.register_action(retrieve_relevant_chunks, "retrieve_relevant_chunks")
    # chat.app.register_action(mock_protect_text(True), "protect_text")
    chat >> "Hi! I am Mr. John! And my email is [email protected]"
    chat << "I can't answer that."

I get an error.
Hi @Pouyanp, I fixed the pytest code, can you please check again?
Thank you @lior-ps, it looks good (maybe we can add more tests later). Would you please just sign your commits and run pre-commit per the contributing guidelines? You can do an interactive rebase to just sign them and apply the pre-commit hooks.
Description
Prompt Security is a startup specializing in security services for LLMs and generative AI. By adding guardrails, we can protect prompts and responses against a wide variety of risks such as prompt injection, jailbreak, sensitive data disclosure, and inappropriate content.
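A minimal sketch of how the integration could be enabled in a guardrails configuration; the flow names ("protect prompt" for input and "protect response" for output) and the config layout are assumptions based on the discussion in this PR, not the final documented setup:

from nemoguardrails import RailsConfig

# Sketch only: flow names are assumptions; a real configuration would also
# declare an LLM under `models` and provide Prompt Security credentials.
config = RailsConfig.from_content(
    yaml_content="""
models: []
rails:
  input:
    flows:
      - protect prompt
  output:
    flows:
      - protect response
"""
)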
Related Issue(s)
None
Checklist