How does MACE implement multi-GPU LAMMPS inference? #566
-
To the best of my knowledge, there are two modern MLIPs that can run inference on multiple GPUs. One is Allegro, which is not an MPNN and therefore does not need to handle message passing across GPUs. The other is SevenNet, which implements forward and backward communication during LAMMPS inference. I can't find a similar implementation in MACE's code. Could you explain how you make multi-GPU inference usable?
-
Communicating internal messages between domains is not a requirement; it just makes things more efficient by reducing the impact of ghost atoms. To be clear, this impact of ghost atoms is not a product of message passing per se, but of a large receptive field: if you build an Allegro model with a 10 Å receptive field, you will have a similar problem. With a message-passing model you can simply do better. Work is being done on an interface that includes internal inter-GPU communication for MACE in LAMMPS.
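
A rough back-of-envelope sketch of the point about ghost atoms (not MACE or LAMMPS code, and the function names, densities, and domain shapes are illustrative assumptions): with plain domain decomposition, each GPU's domain must carry ghost atoms out to the model's full receptive field, and for a message-passing model that receptive field is roughly the number of interaction layers times the per-layer cutoff.

```python
# Illustrative sketch only: estimate how the receptive field of a
# message-passing MLIP sets the ghost-atom halo each spatial domain must carry
# when no internal inter-GPU message communication is used.
# Assumes uniform atom density and cubic domains.

def receptive_field(cutoff_per_layer: float, num_interaction_layers: int) -> float:
    """Effective receptive field: each interaction layer gathers neighbours
    within the per-layer cutoff, so after L layers an atom 'sees' L * r_cut."""
    return cutoff_per_layer * num_interaction_layers


def ghost_fraction(domain_edge: float, halo: float) -> float:
    """Fraction of atoms handled by a domain that are ghost atoms,
    assuming uniform density: local atoms fill a cube of side `domain_edge`,
    while local + halo fills a cube of side `domain_edge + 2 * halo`."""
    local = domain_edge ** 3
    total = (domain_edge + 2.0 * halo) ** 3
    return (total - local) / total


if __name__ == "__main__":
    # e.g. a 2-layer model with a 5 Å per-layer cutoff has a 10 Å receptive
    # field, the same halo an Allegro model with a 10 Å cutoff would need.
    halo = receptive_field(cutoff_per_layer=5.0, num_interaction_layers=2)
    for edge in (20.0, 40.0, 80.0):  # domain edge length in Å
        print(f"domain edge {edge:5.1f} Å -> ghost fraction {ghost_fraction(edge, halo):.2f}")
```

The numbers show why internal communication helps rather than being required: as the simulation is split over more GPUs, each domain shrinks and the ghost fraction grows, so more work is duplicated on ghost atoms; communicating internal messages between domains would cut that duplication, but the decomposition works without it.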