
Issues with create_lammps_model.py #698

Answered by MusannaGalib
MusannaGalib asked this question in Q&A

If --save_cpu was not used during training, the solution is to re-save the model, converting the GPU .model file to a CPU .model file:

import torch
# This must be run on a machine with a GPU, since the original .model was saved on GPU

# Load the model (on a machine with GPU)
model = torch.load("/home/galibubc/scratch/ML_MACE/trial_alumina/MACE_models/MACE_alumina_model.model")

# Move the model to the CPU
model_cpu = model.to("cpu")

# Save the model as a CPU-compatible .model file
torch.save(model_cpu, "/home/galibubc/scratch/ML_MACE/trial_alumina/MACE_models/MACE_alumina_model_cpu.model")
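As an aside, if no GPU machine is handy, torch.load's map_location argument can usually pull a GPU-pickled module straight onto the CPU. A minimal, self-contained sketch of that round-trip (using a tiny stand-in module, since the real MACE .model is just a pickled torch module; paths here are placeholders):

```python
import os
import tempfile

import torch
import torch.nn as nn

# Tiny stand-in for the pickled MACE model
model = nn.Linear(4, 2)

path = os.path.join(tempfile.mkdtemp(), "tiny.model")
torch.save(model, path)

# map_location="cpu" remaps all tensors to the CPU at load time,
# so this works even on a machine without a GPU.
try:
    # Recent PyTorch (>= 2.6) defaults weights_only=True, which rejects
    # full pickled modules; pass weights_only=False explicitly.
    loaded = torch.load(path, map_location="cpu", weights_only=False)
except TypeError:
    # Older PyTorch without the weights_only keyword
    loaded = torch.load(path, map_location="cpu")

print(all(p.device.type == "cpu" for p in loaded.parameters()))  # True
```

Whether this is sufficient for a given MACE checkpoint may depend on how the model was serialized, so the re-save-on-GPU route above remains the safe option.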

Then create_lammps_model.py can be run on the CPU model to produce the lammps.pt file:

!python /home/galibubc/scratch/ML_MACE/trial_alumina/mace/mace/cli/c…

Answer selected by MusannaGalib