
[Codegen] Remove memref optimizations from OptimizeTensorInsertExtractSlices #18732

Open · Groverkss wants to merge 2 commits into main
Conversation

@Groverkss (Contributor) commented on Oct 9, 2024:

This pass is meant to run on tensor code, but it has slowly accumulated unrelated memref optimizations. This patch restricts the pass to tensor optimizations only.

No new tests are added because this pass only has tensor tests.

Depends on #18731

Comment on lines -243 to +246
-  linalg::hoistRedundantVectorTransfers(cast<func::FuncOp>(funcOp));
+  IRRewriter rewriter(funcOp->getContext());
+
+  funcOp.walk([&](scf::ForOp forOp) { moveLoopInvariantCode(forOp); });
   LDBG("after hoisting loop invariant code\n" << funcOp);
@Groverkss (Contributor, Author) commented:
linalg::hoistRedundantVectorTransfers is a combination of LICM and memref optimizations. We run only the LICM part to keep the previous behavior for tensor code.
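As a rough illustration, here is a minimal sketch of running only plain LICM over the `scf.for` loops in a function, which is what the replacement code above does. It assumes upstream MLIR's `moveLoopInvariantCode` from `mlir/Transforms/LoopInvariantCodeMotion.h`; the helper name is hypothetical:

```cpp
#include "mlir/Dialect/SCF/IR/SCF.h"
#include "mlir/Interfaces/FunctionInterfaces.h"
#include "mlir/Transforms/LoopInvariantCodeMotion.h"

// Hypothetical helper: hoist loop-invariant, speculatable ops out of every
// scf.for in the function. Unlike linalg::hoistRedundantVectorTransfers,
// this performs no memref-level transfer hoisting or store-to-load
// forwarding.
static void runPlainLICM(mlir::FunctionOpInterface funcOp) {
  funcOp.walk([](mlir::scf::ForOp forOp) {
    // moveLoopInvariantCode returns the number of ops moved; unused here.
    (void)mlir::moveLoopInvariantCode(forOp);
  });
}
```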

Comment on lines -105 to +107
-// CHECK: vector.transfer_read {{.+}} vector<1x8xf16>
+// CHECK: vector.transfer_read {{.+}} vector<8xf16>
 // CHECK: vector.transfer_write
-// CHECK: vector.transfer_read {{.+}} vector<1x8xf16>
+// CHECK: vector.transfer_read {{.+}} vector<8xf16>
@Groverkss (Contributor, Author) commented:

Test changes caused by switching the pass to optimize-vector-transfer, which also drops vector unit dims.
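For context, a hedged sketch of the kind of unit-dim dropping involved, which rewrites `vector<1x8xf16>` transfers into `vector<8xf16>` ones. It assumes upstream MLIR's `populateCastAwayVectorLeadingOneDimPatterns`; the wrapper function is hypothetical:

```cpp
#include "mlir/Dialect/Vector/Transforms/VectorRewritePatterns.h"
#include "mlir/Transforms/GreedyPatternRewriteDriver.h"

// Hypothetical wrapper: apply patterns that cast away leading unit dims on
// vector ops, e.g. turning a vector<1x8xf16> transfer into a vector<8xf16>
// transfer plus a shape cast.
static mlir::LogicalResult dropLeadingUnitDims(mlir::Operation *root) {
  mlir::RewritePatternSet patterns(root->getContext());
  mlir::vector::populateCastAwayVectorLeadingOneDimPatterns(patterns);
  return mlir::applyPatternsAndFoldGreedily(root, std::move(patterns));
}
```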

@qedawkins (Contributor) left a comment:

Nice!

@@ -549,7 +549,9 @@ void addGPUMatmulSimtPassPipeline(OpPassManager &funcPassManager,
   // still rely on buffer level transformations for transfer ops hoisting and
   // store to load forwarding. This relies on shaky alias analysis and we need
   // to move this to tensor level once we have better abstractions.
   funcPassManager.addPass(createOptimizeTensorInsertExtractSlicesPass());
+  // TODO: We should be able to start hoisting accumulator load/store out
+  // after https://github.com/llvm/llvm-project/pull/111533.
A contributor commented:

Reminder to drop this TODO before landing.

@Groverkss (Contributor, Author) replied:

oh good catch, thanks!

@hanhanW (Contributor) left a comment:

Okay, I have a better understanding now. Thanks for the good writeup in llvm/llvm-project#111533 (comment)!
