Code on ML-20m can't achieve the same performance as in the paper #13
@outside-BUPT how many steps did you train the model for?
I tried running run_ml-1m.sh and also got poor evaluation results, shown below.
dcg@1:0.016556291390728478, hit@1:0.016556291390728478, ndcg@5:0.041968522744744094, hit@5:0.06837748344370861, ndcg@10:0.06059810508379426, hit@10:0.12682119205298012, ap:0.06324608749550562, valid_user:6040.0
@oscarriddle The default number of training steps in this script needs to be increased to reproduce the reported result. See our reproducibility paper for details.
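As a rough sanity check on whether a given step count covers enough epochs, one can back out steps per epoch from the training-set size and batch size. The numbers below are placeholder assumptions for illustration, not the values hard-coded in run_ml-1m.sh (the actual number of training sequences also depends on how many duplicated masked copies the data generation step produces):

```python
import math

# Hypothetical values -- replace with the ones actually used by the script.
num_training_sequences = 6040   # ML-1M has 6,040 users; duplication increases this
batch_size = 256
target_epochs = 400

steps_per_epoch = math.ceil(num_training_sequences / batch_size)
total_steps = target_epochs * steps_per_epoch
print(f"{steps_per_epoch} steps/epoch -> {total_steps} total training steps")
```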
After training for about 35,000,000 steps, the metrics are still stuck around what looks like a rather low saddle point, only marginally above the SASRec performance reported in the paper.
| Metric  | My result | Paper  |
|---------|-----------|--------|
| hit@1   | 0.2399    | 0.3440 |
| ndcg@5  | 0.393     | 0.4967 |
| hit@5   | 0.536     | 0.6323 |
| ndcg@10 | 0.4379    | 0.5340 |
| hit@10  | 0.6718    | 0.7473 |
| ap      | 0.3794    | 0.4785 |
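For anyone comparing their own evaluation code against these numbers, here is a minimal sketch of how leave-one-out hit@k, ndcg@k, and AP are typically computed from the rank of the single held-out item among the scored candidates (e.g. 1 positive plus 100 sampled negatives). This is a generic illustration, not the evaluation code from this repository:

```python
import math

def leave_one_out_metrics(ranks, ks=(1, 5, 10)):
    """Average HIT@k, NDCG@k and AP over users.

    `ranks` holds one entry per user: the 0-based rank of the held-out
    positive item among the scored candidates.
    """
    metrics = {f"hit@{k}": 0.0 for k in ks}
    metrics.update({f"ndcg@{k}": 0.0 for k in ks})
    metrics["ap"] = 0.0

    for rank in ranks:
        # With a single relevant item, AP reduces to the reciprocal rank.
        metrics["ap"] += 1.0 / (rank + 1)
        for k in ks:
            if rank < k:
                metrics[f"hit@{k}"] += 1.0
                # Single relevant item: DCG = 1/log2(rank+2), IDCG = 1.
                metrics[f"ndcg@{k}"] += 1.0 / math.log2(rank + 2)

    return {name: value / len(ranks) for name, value in metrics.items()}

# Example: three users whose positives ranked 0, 4 and 12.
print(leave_one_out_metrics([0, 4, 12]))
```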