[Codegen] Remove memref optimizations from OptimizeTensorInsertExtractSlices #18732
base: main
Conversation
linalg::hoistRedundantVectorTransfers(cast<func::FuncOp>(funcOp));
IRRewriter rewriter(funcOp->getContext());

funcOp.walk([&](scf::ForOp forOp) { moveLoopInvariantCode(forOp); });
LDBG("after hoisting loop invariant code\n" << funcOp);
linalg::hoistRedundantVectorTransfers is a combination of LICM and memref optimizations. We run only LICM here to keep the previous behavior for tensor code.
// CHECK: vector.transfer_read {{.+}} vector<1x8xf16>
// CHECK: vector.transfer_read {{.+}} vector<8xf16>
// CHECK: vector.transfer_write
// CHECK: vector.transfer_read {{.+}} vector<1x8xf16>
// CHECK: vector.transfer_read {{.+}} vector<8xf16>
Test changes caused by switching the pass to optimize-vector-transfer, which also drops vector unit dims.
Nice!
@@ -549,7 +549,9 @@ void addGPUMatmulSimtPassPipeline(OpPassManager &funcPassManager,
  // still rely on buffer level transformations for transfer ops hoisting and
  // store to load forwarding. This relies on shaky alias analysis and we need
  // to move this to tensor level once we have better abstractions.
  funcPassManager.addPass(createOptimizeTensorInsertExtractSlicesPass());
  // TODO: We should be able to start hoisting accumulator loads/stores out
  // after https://github.com/llvm/llvm-project/pull/111533.
reminder to drop before landing
oh good catch, thanks!
Okay, I have a better understanding now. Thanks for the good writeup in llvm/llvm-project#111533 (comment)!
This pass is meant to run on tensor code, but it has slowly accumulated unrelated memref optimizations. This patch restricts the pass to tensor optimizations only.
No test additions because this pass only has tensor tests.
Depends on #18731