[fix] Fix the activation checkpointing when using SwiGLUPackedFusedOp
According to the docs (https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function), the forward() method should not be called directly; the apply() method must be used instead.
After removing the direct forward() call, activation checkpointing works. (alternative variant #2)
warpuv committed Oct 17, 2024
1 parent 46d2823 commit 6563898
Showing 1 changed file with 5 additions and 0 deletions.
5 changes: 5 additions & 0 deletions xformers/csrc/swiglu/swiglu_packedw.cpp
@@ -211,7 +211,12 @@ at::Tensor swiglu_packedw_cuda(
    const std::optional<at::Tensor> b1b2,
    const at::Tensor w3,
    const std::optional<at::Tensor> b3) {
  if (torch::GradMode::is_enabled()) {
    return SwiGLUPackedWeights::apply(x, w1w2, b1b2, w3, b3);
  } else {
    return SwiGLUPackedWeights::forward(
        /* ctx */ nullptr, x, w1w2, b1b2, w3, b3);
  }
}
} // namespace

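For context, a minimal sketch of the pattern this fix relies on, written against the LibTorch custom-autograd API. The op name Square and the free function square() are hypothetical, used only for illustration and not part of xformers: apply() is taken whenever grad mode is enabled so an autograd node is recorded, and the static forward() is called directly with a null context only on the no-grad path.

#include <torch/torch.h>

// Hypothetical op for illustration; not part of xformers.
struct Square : public torch::autograd::Function<Square> {
  static at::Tensor forward(
      torch::autograd::AutogradContext* ctx,
      const at::Tensor x) {
    if (ctx != nullptr) {
      // Save inputs for backward only when an autograd context exists.
      ctx->save_for_backward({x});
    }
    return x * x;
  }

  static torch::autograd::variable_list backward(
      torch::autograd::AutogradContext* ctx,
      torch::autograd::variable_list grad_outputs) {
    auto saved = ctx->get_saved_variables();
    // d(x^2)/dx = 2x
    return {2 * saved[0] * grad_outputs[0]};
  }
};

at::Tensor square(const at::Tensor x) {
  if (torch::GradMode::is_enabled()) {
    // apply() records an autograd node, so backward and checkpoint
    // recomputation see a proper graph.
    return Square::apply(x);
  }
  // Grad mode is off: no graph is needed, so calling the static forward()
  // directly with a null context skips the autograd bookkeeping.
  return Square::forward(/* ctx */ nullptr, x);
}

This matters for activation checkpointing because the checkpointed forward is re-executed under grad mode during the backward pass; routing that path through apply() lets autograd rebuild the graph, whereas an unconditional direct forward() call would not.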
