Sample command #1
Thanks for the kind words, though I can't take much credit as it was @liuyanchen1015 who led this project! You can find the script which runs the RoBERTa experiments in . The datasets are all loaded from HuggingFace, where they were preprocessed. Is there a particular issue running this?
Thanks for your swift response! To clarify, I see the Python files, but there is no specification of the arguments, so I am not sure which arguments to set. For example:
```python
from src.Dialects import HongKongDialect

hke = HongKongDialect()
print(hke)
print(hke.transform("I talked with them yesterday"))
print(hke.executed_rules())
```

The mapping between dialects and features is sourced from e-WAVE. You can find that untransformed data in a machine-readable format in
In the meantime, here's an attempt at clarifying the idea of the experiment, which I hope helps. Let's say we have an adapter A trained on syntactic feature F. We'd hope that adapter A has a larger activation when feature F is present than it does in general; otherwise, adapter A is getting used even on data it wasn't trained to handle. We want to distinguish between these two cases, which gives us two averages: the average activation of A when feature F is present, and the average activation of A across all data (including when feature F isn't present). The difference between these two gives us a sense of the correlation between adapter activation and the actual presence of the type of syntax that adapter was trained to handle!
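The metric described above can be sketched as follows. This is just an illustrative helper, not code from the repo: `activation_gap`, its arguments, and the toy data are all hypothetical.

```python
import numpy as np

def activation_gap(activations, feature_present):
    """Hypothetical sketch of the metric described above: the mean
    activation of adapter A on examples where feature F is present,
    minus the mean activation of A across all examples."""
    activations = np.asarray(activations, dtype=float)
    feature_present = np.asarray(feature_present, dtype=bool)
    mean_when_present = activations[feature_present].mean()
    mean_overall = activations.mean()
    return mean_when_present - mean_overall

# Toy example: the adapter fires harder on feature-F examples,
# so the gap is positive (activation correlates with F).
gap = activation_gap([0.9, 0.8, 0.2, 0.3], [True, True, False, False])
print(gap)
```

A large positive gap suggests the adapter's activation tracks the feature it was trained on; a gap near zero suggests it is activated indiscriminately.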
Hi @Helw150, thanks for the wonderful work!
Could you please add sample scripts (e.g., .sh files) to reproduce the experiments?