Integrate OpenFold into the nf-core/proteinfold pipeline. OpenFold is a faithful but trainable PyTorch reproduction of AlphaFold2.
OpenFold has the following advantages over the reference implementation:
Faster inference on GPU, sometimes by as much as 2x. The greatest speedups are achieved on recent (Ampere or newer) GPUs.
Inference on extremely long chains, made possible by our implementation of low-memory attention (Rabe & Staats 2021). OpenFold can predict the structures of sequences with more than 4000 residues on a single A100, and even longer ones with CPU offloading.
Custom CUDA attention kernels modified from FastFold's kernels support in-place attention during inference and training. They use 4x and 5x less GPU memory than equivalent FastFold and stock PyTorch implementations, respectively.
Efficient alignment scripts using the original AlphaFold HHblits/JackHMMER pipeline or ColabFold's, which uses the faster MMseqs2 instead.
FlashAttention support greatly speeds up MSA attention.
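To make the low-memory attention point concrete, here is a minimal NumPy sketch of query chunking, a simplification of the Rabe & Staats (2021) approach: only a slice of the full attention matrix is materialized at a time. This is illustrative only; OpenFold's actual implementation also chunks over keys/values with a streaming softmax, and the function names here are hypothetical.

```python
import numpy as np

def _softmax(x):
    # Numerically stable softmax over the last axis.
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def chunked_attention(q, k, v, chunk_size=1024):
    """Attention computed over query chunks, so only a
    (chunk_size x L) slice of the L x L attention matrix is
    ever live, instead of the whole matrix at once."""
    scale = q.shape[-1] ** -0.5
    out = []
    for i in range(0, q.shape[-2], chunk_size):
        q_chunk = q[..., i:i + chunk_size, :]  # (..., chunk, d)
        scores = np.einsum("...qd,...kd->...qk", q_chunk, k) * scale
        out.append(np.einsum("...qk,...kd->...qd", _softmax(scores), v))
    return np.concatenate(out, axis=-2)
```

The result is numerically identical to full attention; the memory saved per chunk is what allows inference on very long chains.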
Source code: https://github.com/aqlaboratory/openfold
Publication: https://www.biorxiv.org/content/10.1101/2022.11.20.517210v2