Dear Yige,
thanks a lot for sharing the code!
I was wondering if you could provide some more detail on "further pre-training" on the IMDB dataset, e.g. the hyperparameter settings for it.
Alternatively, would it be possible to share the BERT model after LM pre-training on the IMDB dataset?
Thank you for your issue!
We list the hyperparameter settings in our paper (see Section 5.2).
For the BERT checkpoints after further pre-training, we share a link in our README (see the section "Further Pre-Trained Checkpoints").
After following the link, you can find the IMDB-based checkpoint in the file pytorch_model_len128_imdb.bin.
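In case it helps, here is a minimal sketch of how such a checkpoint file can be loaded with PyTorch. This is only an illustration, not the repo's own loading code: the tiny dummy state dict stands in for the real downloaded file so the pattern runs offline, and the key name is a typical BERT parameter name, not taken from the actual checkpoint.

```python
import torch

# Simulate the downloaded checkpoint with a tiny state dict so this
# example runs without the real file (hypothetical key name).
dummy_state = {"bert.embeddings.word_embeddings.weight": torch.zeros(4, 8)}
torch.save(dummy_state, "pytorch_model_len128_imdb.bin")

# Load the checkpoint; map_location="cpu" avoids requiring a GPU.
state_dict = torch.load("pytorch_model_len128_imdb.bin", map_location="cpu")

# In practice, you would then restore these weights into a matching
# BERT architecture, e.g. model.load_state_dict(state_dict).
print(sorted(state_dict.keys()))
```

The same `torch.load(...)` / `load_state_dict(...)` pattern applies to the real pytorch_model_len128_imdb.bin once downloaded, provided the model you build matches the checkpoint's architecture.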