Ring Attention leverages blockwise computation of self-attention across multiple GPUs, enabling training and inference on sequences that would be too long to fit on a single device.
This repository contains notebooks, experiments and a collection of links to papers and other material related to Ring Attention.
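As a rough mental model of the blockwise computation described above, here is a minimal single-process sketch (not the implementation of any of the repositories linked below): each (K, V) block that in Ring Attention would live on a different device and travel around the ring is here just a chunk of a local tensor, and a running max plus running normalizer (online softmax) keeps the result exact. Shapes and the block count are illustrative assumptions.

```python
# Minimal single-process sketch of the blockwise attention math behind
# Ring Attention. In the real algorithm each (K, V) block lives on a
# different device and is passed around a ring with torch.distributed
# send()/recv(); here all blocks stay local so the script runs anywhere.
import torch

def blockwise_attention(q, k_blocks, v_blocks):
    """Accumulate softmax(q @ k^T) @ v one KV block at a time,
    keeping a running max and normalizer (online softmax)."""
    scale = q.shape[-1] ** -0.5
    acc = torch.zeros_like(q)                           # unnormalized output (q and v share the head dim here)
    row_max = torch.full(q.shape[:-1], float("-inf"))   # running max per query row
    denom = torch.zeros(q.shape[:-1])                   # running softmax denominator

    for k_blk, v_blk in zip(k_blocks, v_blocks):        # "one hop around the ring"
        scores = (q @ k_blk.transpose(-1, -2)) * scale
        new_max = torch.maximum(row_max, scores.amax(dim=-1))
        # rescale the previous accumulators to the new max before adding this block
        correction = torch.exp(row_max - new_max)
        p = torch.exp(scores - new_max.unsqueeze(-1))
        acc = acc * correction.unsqueeze(-1) + p @ v_blk
        denom = denom * correction + p.sum(dim=-1)
        row_max = new_max

    return acc / denom.unsqueeze(-1)

# sanity check against full attention
torch.manual_seed(0)
q = torch.randn(2, 8, 16)                               # (batch, q_len, head_dim)
k = torch.randn(2, 32, 16)
v = torch.randn(2, 32, 16)
out = blockwise_attention(q, k.chunk(4, dim=1), v.chunk(4, dim=1))
ref = torch.softmax((q @ k.transpose(-1, -2)) * 16 ** -0.5, dim=-1) @ v
print(torch.allclose(out, ref, atol=1e-5))
```

The correction factor `exp(row_max - new_max)` is the key trick: it lets each newly arriving KV block be folded into the accumulators without ever materializing the full attention matrix.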
- Paper: Ring Attention with Blockwise Transformers for Near-Infinite Context, code: lhao499/ring-attention
- Paper: World Model on Million-Length Video And Language With RingAttention, code: LargeWorldModel/LWM, project site: largeworldmodel.github.io, models: HF/LargeWorldModel
- Paper: Striped Attention: Faster Ring Attention for Causal Transformers, code: exists-forall/striped_attention
- Paper (2022): Sequence Parallelism: Long Sequence Training from System Perspective (4D parallelism)
- Related: Flash-Decoding for long-context inference (together.ai blog)
- Paper: Online normalizer calculation for softmax (NVIDIA, 2018)
- LWM model in ollama: https://ollama.com/ifioravanti/lwm
- Phil Wang's (lucidrains) PyTorch implementation: lucidrains/ring-attention-pytorch
- Zilin Zhu's nice zhuzilin/ring-flash-attention implementation
- Incremental Softmax (to understand the algorithm in 'high-level' PyTorch)
- Naive flash-attn (to understand the algorithm in 'high-level' PyTorch)
- NVIDIA Collective Communication Library (NCCL) Documentation
- PyTorch Distributed Overview
- Distributed communication package - torch.distributed (`send()`, `recv()`, `broadcast()`, etc.); see the ring pass sketch after this list
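For the communication side, the sketch below shows (under assumptions, not code from this repository) how the ring pass can be expressed with the torch.distributed primitives listed above: each rank keeps its local Q block and, at every step, sends its current KV block to the next rank while receiving one from the previous rank. The script name, the gloo backend, and the toy tensor are illustrative choices for easy local testing.

```python
# Hypothetical sketch of the ring pass in Ring Attention using
# torch.distributed: every rank sends its current KV block to the next rank
# and receives one from the previous rank, world_size - 1 times.
# Launch with e.g. `torchrun --nproc_per_node=2 ring_pass.py`
# (script name, backend, and tensor sizes are illustrative assumptions).
import torch
import torch.distributed as dist

def ring_pass(block: torch.Tensor) -> torch.Tensor:
    """Send `block` to rank+1 and receive the corresponding block from rank-1."""
    rank, world = dist.get_rank(), dist.get_world_size()
    recv_buf = torch.empty_like(block)
    send_req = dist.isend(block, dst=(rank + 1) % world)  # non-blocking send
    dist.recv(recv_buf, src=(rank - 1) % world)            # blocking receive
    send_req.wait()
    return recv_buf

if __name__ == "__main__":
    dist.init_process_group(backend="gloo")   # use "nccl" on GPUs
    rank, world = dist.get_rank(), dist.get_world_size()
    kv_block = torch.full((4,), float(rank))  # stand-in for this rank's K/V block
    for _ in range(world - 1):                # world - 1 hops = one full ring
        kv_block = ring_pass(kv_block)
        # ...attend the local Q block against the freshly received KV block here...
    print(f"rank {rank} finished holding the block that started on rank {int(kv_block[0])}")
    dist.destroy_process_group()
```

After `world_size - 1` hops every rank has seen every KV block exactly once, which is the point at which the blockwise accumulation shown earlier has covered the whole sequence.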
Contact us on the GPU MODE Discord server: https://discord.gg/gpumode. PRs are welcome (please create an issue first).