change version of rwkv-fla
jahatef committed Nov 12, 2024
1 parent c2d6c85 commit bdb3658
Showing 2 changed files with 3 additions and 3 deletions.
4 changes: 2 additions & 2 deletions megatron/model/rwkv/v6/rwkv.py
@@ -10,12 +10,12 @@
 from megatron import mpu
 from megatron.mpu import gather_from_model_parallel_region, reduce_from_model_parallel_region, scatter_to_model_parallel_region
 try:
-    from fla.ops.rwkv6 import chunk_rwkv6, fused_recurrent_rwkv6, native_recurrent_rwkv6
+    from fla.ops.rwkv6 import chunk_rwkv6
     import einops
 except ModuleNotFoundError:
     print(
         "Unable to import RWKV FLA kernels. Install them from our requirements/requirements-rwkv.txt, \
-        or directly from https://github.com/TorchRWKV/flash-linear-attention/tree/stable, or use CUDA kernels."
+        or directly from https://github.com/sustcsonglin/flash-linear-attention.git, or use CUDA kernels."
     )
     pass

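For context, a minimal sketch of how the chunked kernel kept by this change might be invoked. Only the import path (fla.ops.rwkv6.chunk_rwkv6) comes from the diff above; the argument names, tensor layout, dtypes, and return values below are assumptions about the flash-linear-attention API and may differ between versions.

# Hypothetical usage sketch, not part of this commit; the call signature is assumed.
import torch
from fla.ops.rwkv6 import chunk_rwkv6  # same import kept by the diff above

B, H, T, D = 1, 8, 128, 64  # batch, heads, sequence length, head dim (illustrative values)
r = torch.randn(B, H, T, D, device="cuda", dtype=torch.bfloat16)  # receptance / query
k = torch.randn(B, H, T, D, device="cuda", dtype=torch.bfloat16)  # key
v = torch.randn(B, H, T, D, device="cuda", dtype=torch.bfloat16)  # value
w = torch.randn(B, H, T, D, device="cuda", dtype=torch.bfloat16)  # data-dependent decay (log-space)
u = torch.randn(H, D, device="cuda", dtype=torch.bfloat16)        # per-head bonus term
# Assumed call shape: returns the output sequence and, optionally, the final recurrent state.
out, state = chunk_rwkv6(r, k, v, w, u, initial_state=None, output_final_state=True)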
2 changes: 1 addition & 1 deletion requirements/requirements-rwkv.txt
@@ -1 +1 @@
-rwkv-fla>=0.1.202410200535
+git+https://github.com/sustcsonglin/flash-linear-attention

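Note on the requirements change: pip resolves VCS URLs both on the command line and inside requirements files, so the dependency is now pulled from the upstream flash-linear-attention repository rather than the rwkv-fla release on PyPI. Installation itself is unchanged:

pip install -r requirements/requirements-rwkv.txt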