I fine-tuned a Hugging Face model (hfl/chinese-roberta-wwm-ext) on a sequence classification task with my own dataset. The fine-tuning process followed the official getting-started guide (https://transformers.run/), and the fine-tuned model successfully completed the sequence classification task. I am now trying to do interpretability analysis with this package, but I have run into some problems. I would greatly appreciate any help or insights regarding this issue.
Here is part of my code. I first instantiated the model and loaded the weights from the saved fine-tuned checkpoint, and I also loaded the pretrained tokenizer:
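(The original code block was not preserved; below is a minimal sketch of what this step could look like. The checkpoint filename `finetuned_model_weights.bin` and `num_labels=2` are placeholders, not values from the original issue.)

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "hfl/chinese-roberta-wwm-ext"

# Instantiate the architecture, then load the fine-tuned weights.
# The weights path and label count are placeholders for the real ones.
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
model.load_state_dict(torch.load("finetuned_model_weights.bin"))
model.eval()

# The tokenizer is loaded from the same pretrained checkpoint.
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
```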
Then I made an explainer and tried:
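(Again, the original snippet was not preserved. Assuming the package in question is transformers-interpret, this step would look roughly like the sketch below; the sample sentence is a placeholder.)

```python
from transformers_interpret import SequenceClassificationExplainer

# Wrap the fine-tuned model and its tokenizer in an explainer.
cls_explainer = SequenceClassificationExplainer(model, tokenizer)

# Attribute the prediction for a sample sentence (placeholder text).
word_attributions = cls_explainer("这是一个测试句子。")
```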
Here I got the error:
I suspect the input text was not converted by the tokenizer into the expected mapping format ({input_ids, attention_mask, token_type_ids}), but I have no idea how to fix this. Any assistance or guidance on this matter would be greatly appreciated!
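For reference, calling the tokenizer directly does produce that mapping, and the model accepts it; a quick sanity check (sample text is a placeholder):

```python
# Sanity check: the tokenizer returns a BatchEncoding with exactly the
# keys the model expects.
inputs = tokenizer("这是一个测试句子。", return_tensors="pt")
print(inputs.keys())  # dict_keys(['input_ids', 'token_type_ids', 'attention_mask'])

# The model runs fine when given that mapping directly.
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)
```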