Device error when evaluating MACE. #136

Answered by ilyes319
jungsdao asked this question in Q&A

Hey @jungsdao, thank you very much for using MACE.

Due to a (probable) TorchScript bug, a model needs to be saved on the CPU in order to be loaded on the CPU.
If you want to do this, you can add the --save_cpu flag to your input script, and it should work fine.
For a model you have already trained, you can simply restart training for 0 epochs with this flag added; it will write out a suitable model.
Alternatively, you can do the following trick on a machine with a GPU:

import torch

# On the GPU machine: load the GPU-trained model, move it to the CPU, and save the CPU copy
model = torch.load(path_to_model)
model_cpu = model.to("cpu")
torch.save(model_cpu, path_to_model_cpu)

Then you can use the model_cpu for your CPU evals.
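As a minimal sketch of the CPU-side load (the path here is a placeholder, not a fixed MACE filename):

import torch

# Placeholder path to the CPU-saved model produced above; adjust to your setup
path_to_model_cpu = "my_mace_model_cpu.model"

# map_location="cpu" is a defensive extra in case any tensor still carries a CUDA device tag
model_cpu = torch.load(path_to_model_cpu, map_location="cpu")
model_cpu.eval()  # switch to inference mode before evaluating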

Best,

Ilyes

This discussion was converted from issue #127 on July 31, 2023 11:36.