Hi,

I've observed that running inference with p-tuning v2 using an all-zero prefix still changes the output of the original model. I'd like to know whether it is feasible to add a prefix prompt that leaves the original model's behavior completely unchanged, or whether there is a problem with my experiment. Any input would be greatly appreciated. Thank you.

I zeroed the prefix key/value cache as follows:
```python
# Replace the learned prefix KV cache with all zeros before inference
past_key_values = tuple(torch.zeros_like(pkv) for pkv in past_key_values)
```
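For context, here is a minimal standalone sketch (not from the original issue, and assuming standard scaled dot-product attention with the prefix positions unmasked) of why an all-zero key/value prefix is not a no-op: zero keys produce attention logits of 0, so each prefix position still contributes exp(0) = 1 to the softmax numerator, which inflates the denominator and rescales the attention paid to the real tokens.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d = 8                  # head dimension (arbitrary for the demo)
q = torch.randn(1, d)  # one query
k = torch.randn(4, d)  # keys of the real tokens
v = torch.randn(4, d)  # values of the real tokens

def attend(q, k, v):
    # Standard scaled dot-product attention
    scores = (q @ k.T) / d ** 0.5
    return F.softmax(scores, dim=-1) @ v

out_plain = attend(q, k, v)

# Prepend an all-zero prefix of length 3, mimicking the zeroed
# past_key_values above.
n_prefix = 3
k_pref = torch.cat([torch.zeros(n_prefix, d), k])
v_pref = torch.cat([torch.zeros(n_prefix, d), v])
out_prefixed = attend(q, k_pref, v_pref)

# The zero values contribute nothing to the weighted sum, but the
# softmax weights on the real tokens shrink, so the outputs differ.
print(torch.allclose(out_plain, out_prefixed))   # False
print((out_plain - out_prefixed).abs().max())    # nonzero gap
```

If this reasoning is right, a prefix could only be exactly behavior-preserving if the prefix positions were excluded from attention (e.g. masked out), not merely zeroed.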