Hyperparameters #3
Sorry to bother you, but I'd like to ask why these two issues are occurring. I would appreciate it if you could answer.
1. File "C:\Users\User\anaconda3\envs\gnn_model_stealing\lib\site-packages\torch\utils\data\dataloader.py", line 1004, in _try_get_data
2. File "C:\Users\User\anaconda3\envs\gnn_model_stealing\lib\multiprocessing\spawn.py", line 135, in _check_not_importing_main
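The second traceback (`_check_not_importing_main`) typically appears on Windows, where multiprocessing uses the spawn start method: DataLoader worker processes re-import the main script, so the script's entry point must be guarded. A minimal sketch of the usual fix (the dataset here is a placeholder, not the repo's actual data):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def main(num_workers: int = 2) -> int:
    # Placeholder dataset; stands in for the real attack data.
    data = TensorDataset(torch.randn(8, 4), torch.zeros(8, dtype=torch.long))
    # num_workers > 0 spawns worker processes. On Windows the spawn start
    # method re-imports this script, so the __main__ guard below is required
    # (alternatively, set num_workers=0 to iterate in the main process).
    loader = DataLoader(data, batch_size=4, num_workers=num_workers)
    return sum(1 for _ in loader)  # number of batches consumed

if __name__ == "__main__":  # required on Windows when num_workers > 0
    main()
```

If adding the guard is inconvenient, passing `num_workers=0` to every `DataLoader` avoids spawning workers entirely and usually makes both errors disappear.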
Hi, the test accuracy is calculated using the attack testing dataset, i.e., when you run attack.py, it should summarise the target test accuracy, attack accuracy, and fidelity.
Hi, thanks for your reply. However, if I understand correctly, the target test accuracy is calculated with the trained target model, which is saved after running train_target_model.py. The point is that after I trained and saved the target model by running train_target_model.py, the target test accuracy was always lower than that in Table 4. Thus, I am wondering if you could kindly share the hyperparameters used to train the target models? Or, if it's more convenient for you, could you share one pretrained target model? Thanks a lot.
Yes, the target model is trained and saved using train_target_model.py (and the hyperparameters are the same as in that file). After the target model is trained, we perform the attack and use the same set of data (the attack testing dataset) to test the performance of both the target model and the surrogate model. The performance in Table 4 is calculated this way.
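The evaluation described above (same attack testing dataset for both models) can be sketched as follows. This is an illustrative helper, not the repo's actual code; `evaluate`, and the model and loader arguments, are hypothetical names. Fidelity is the fraction of test nodes on which the surrogate's prediction agrees with the target's:

```python
import torch

@torch.no_grad()
def evaluate(target_model, surrogate_model, loader):
    """Compute target accuracy, surrogate (attack) accuracy, and fidelity
    on the same attack testing dataset (illustrative sketch)."""
    correct_t = correct_s = agree = total = 0
    for x, y in loader:
        pred_t = target_model(x).argmax(dim=1)
        pred_s = surrogate_model(x).argmax(dim=1)
        correct_t += (pred_t == y).sum().item()
        correct_s += (pred_s == y).sum().item()
        # Fidelity counts agreement with the target, not the true label.
        agree += (pred_t == pred_s).sum().item()
        total += y.numel()
    return correct_t / total, correct_s / total, agree / total
```

Because all three numbers come from the same held-out split, the target accuracy reported by attack.py is directly comparable to the fidelity and attack accuracy next to it.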
Thanks for your explanation. I have used the hyperparameters in the train_target_model.py file, but the testing accuracy is still lower than expected. |
Hi, could you specify the datasets and model architectures? We performed some small experiments and found that the performance was similar to Table 4, e.g., for GAT trained on citeseer_full when we perform the attack. Also, the difference may be caused by randomness, so different runs may produce different results.
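To rule out run-to-run randomness when comparing against Table 4, it can help to fix all seed sources before training. A hedged sketch of such a helper (the repo's scripts may seed differently, and `set_seed` is an illustrative name):

```python
import random
import numpy as np
import torch

def set_seed(seed: int = 0) -> None:
    """Fix the main sources of randomness so repeated runs are comparable.
    Illustrative helper; not taken from the repository."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # no-op on CPU-only machines
    # Make cuDNN deterministic, at some cost in speed.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```

Even with fixed seeds, the random selection of training nodes between runs can still shift the reported accuracy by a noticeable margin.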
Hi, I tried to reproduce the experiments on all datasets and models, but the testing accuracy of the target models is generally lower than in Table 4. While reproducing, I noted two things about training:
Could you please clarify which parameters will let me reproduce the results, and comment on the overlapping splits?
I take full responsibility for the mismatch between the hyperparameters used in the code and those specified in the paper.
For your convenience, I have provided two sample files (for illustration purposes only, use with caution) to assist you. Additionally, it is worth noting that variations in the results (Table 4) may arise due to the random selection of training data. You can observe such discrepancies by executing the following bash command.
@xujing1994 Sorry for the late reply. Regarding the parameters, you can refer to the comment above. Regarding the overlap, we consider that the attacker can sample data from the same dataset, e.g., social networks. In this case, the sampled dataset may contain nodes that were used to train the target model. We will also clarify the parameters and the overlap in our paper.
Hi, it's an interesting work. Thanks for sharing the code.
I am following the code to train the target model, but I find that the testing accuracy for each model and dataset is consistently lower than the performance in Table 4.
Could you please share the hyperparameters of training the target models? Thanks in advance!