Different accuracy results after save_test_submission for MAG240M-LSC dataset? #141
tadpole started this conversation in MAG240M-LSC
Replies: 1 comment 7 replies
-
Hi! How were you able to call […]? Otherwise, please give us more context on how you save your data, how you call evaluation, etc.
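For reference, a minimal sketch of how evaluation on the validation split is typically set up with the ogb.lsc API, assuming MAG240MDataset exposes get_idx_split, paper_label, and num_classes, and that the evaluator's eval method takes a dict with 'y_true' and 'y_pred'; the root path and the random predictions below are placeholders, not taken from this thread:

```python
import numpy as np
from ogb.lsc import MAG240MDataset, MAG240MEvaluator

dataset = MAG240MDataset(root='dataset')   # illustrative root path
evaluator = MAG240MEvaluator()

split_idx = dataset.get_idx_split()        # assumed keys: 'train', 'valid', 'test'
valid_idx = split_idx['valid']

# Ground-truth labels for the validation papers.
y_true = dataset.paper_label[valid_idx]

# In practice y_pred comes from the trained model; random class ids are
# only a placeholder so this snippet runs end to end.
y_pred = np.random.randint(0, dataset.num_classes, size=valid_idx.shape[0])

result = evaluator.eval({'y_true': y_true, 'y_pred': y_pred})
print(result)                              # assumed to contain an 'acc' entry
```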
-
Hi,
When I test my results on the MAG240M-LSC dataset with the provided RGAT baseline via "python rgnn.py --device=0 --model=rgat --evaluate", the accuracy on the validation data is 0.6843. But after I save the predicted labels with save_test_submission, load the file, and evaluate the result with MAG240MEvaluator().eval(), the accuracy is 0.6804, a difference of about 0.5%. Why is that?
Thanks!
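A minimal sketch of the round-trip described above, assuming save_test_submission accepts the prediction dict plus an output directory and writes a compressed .npz file (the filename y_pred_mag240m.npz and the saved key 'y_pred' are assumptions, as are the .npy files holding the baseline's validation outputs); comparing the in-memory predictions with the reloaded array (values and dtype) is one way to narrow down where the 0.5% gap appears:

```python
import os.path as osp
import numpy as np
from ogb.lsc import MAG240MEvaluator

evaluator = MAG240MEvaluator()

# Hypothetical files holding the validation labels and the baseline's
# predicted labels; in the actual run these arrays live in memory.
y_true = np.load('valid_true.npy')
y_pred = np.load('valid_pred.npy')

# Evaluate the in-memory predictions directly.
print('direct:', evaluator.eval({'y_true': y_true, 'y_pred': y_pred}))

# Round-trip: save the predictions, reload the saved file, evaluate again.
evaluator.save_test_submission({'y_pred': y_pred}, dir_path='results')
saved = np.load(osp.join('results', 'y_pred_mag240m.npz'))  # assumed output filename
y_pred_reloaded = saved['y_pred']                            # assumed key

# If these two numbers differ, inspect how y_pred and y_pred_reloaded
# differ (dtype, shape, values) before and after the save step.
print('reloaded:', evaluator.eval({'y_true': y_true, 'y_pred': y_pred_reloaded}))
```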