Hi, I have just run the example three_key_questions.ipynb notebook with gpt-4o as the model, and it seems that the agents instructed to be aggressive and confrontational (Alice and Dorothy) completely disregard those prompts and act in a cooperative, non-aggressive fashion (exactly how ChatGPT acts: "let me apologize for the inconvenience", "I admit I was wrong, let's work together towards a solution").
Is this the intended behavior?
Have you encountered this kind of issue in your experiments?
You've discovered that gpt-4o is terrible at role-playing. I agree, it is! They've probably made it so agreeable via RLHF that it just ignores you when you ask it to role-play a less nice character. Try any other model; most of them are better at this than gpt-4o.
I guess it would be nice to add a corresponding note to the tutorial notebook and/or change the default model from gpt-4o to one that is better at role-playing.
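For anyone who wants to sanity-check a candidate model before wiring it into the notebook, here is a minimal sketch using the official OpenAI Python client. The persona text and the model names in the loop are illustrative assumptions, not taken from three_key_questions.ipynb; the point is just to see whether a model stays in an aggressive character or drifts into apologies.

```python
# Minimal persona-adherence probe (not part of the notebook).
# Assumes the official OpenAI Python client (v1) and OPENAI_API_KEY
# in the environment; persona and model names are illustrative.
from openai import OpenAI

client = OpenAI()

PERSONA = (
    "You are Alice. You are aggressive and confrontational, you never "
    "apologize, and you refuse to compromise."
)

def probe(model: str) -> str:
    """Ask the model to respond in character and return its reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": "You broke the shared printer again."},
        ],
    )
    return response.choices[0].message.content

# Swap in whichever models you want to compare.
for model in ["gpt-4o", "gpt-4-turbo"]:
    print(f"--- {model} ---")
    print(probe(model))
```

If a model answers the probe with "I apologize for the inconvenience", it will likely break character in the simulation too, and is probably a poor choice for the default.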