[Codegen] Remove memref optimizations from OptimizeTensorInsertExtractSlices #18732

Open · wants to merge 2 commits into main
@@ -240,8 +240,11 @@ struct CastLikeInsertSliceOpFolder final

void OptimizeTensorInsertExtractSlicesPass::runOnOperation() {
auto funcOp = getOperation();
linalg::hoistRedundantVectorTransfers(cast<func::FuncOp>(funcOp));
IRRewriter rewriter(funcOp->getContext());

funcOp.walk([&](scf::ForOp forOp) { moveLoopInvariantCode(forOp); });
LDBG("after hoisting loop invariant code\n" << funcOp);
Comment on lines -243 to +246 (Contributor Author):

linalg::hoistRedundantVectorTransfers is a combination of LICM + memref optimizations. We only run LICM to keep the previous behavior for tensor code.
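For context, a minimal sketch of the memref-side rewrite that hoistRedundantVectorTransfers performs and that this pass now leaves to buffer-level passes (illustrative IR, not taken from this PR; `%buf`, `%v`, `%cst`, and the loop bounds are assumed to be defined above):

```mlir
// Before: the accumulator is re-read from and re-written to the buffer on
// every iteration, even though the indices are loop-invariant.
scf.for %i = %c0 to %c4 step %c1 {
  %acc = vector.transfer_read %buf[%c0], %cst : memref<16xf32>, vector<16xf32>
  %sum = arith.addf %acc, %v : vector<16xf32>
  vector.transfer_write %sum, %buf[%c0] : vector<16xf32>, memref<16xf32>
}

// After memref-level hoisting: the transfer pair moves out of the loop and the
// running value is carried as an iter_arg instead.
%init = vector.transfer_read %buf[%c0], %cst : memref<16xf32>, vector<16xf32>
%res = scf.for %i = %c0 to %c4 step %c1 iter_args(%acc = %init) -> (vector<16xf32>) {
  %sum = arith.addf %acc, %v : vector<16xf32>
  scf.yield %sum : vector<16xf32>
}
vector.transfer_write %res, %buf[%c0] : vector<16xf32>, memref<16xf32>
```

Plain LICM (moveLoopInvariantCode) only moves side-effect-free loop-invariant ops, so it cannot perform this read/write forwarding; that part depends on the alias analysis mentioned above.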


// TODO: walking in some reverse / inside-out order would be more efficient
// and would capture more cases.
funcOp.walk(
@@ -252,11 +255,8 @@ void OptimizeTensorInsertExtractSlicesPass::runOnOperation() {
hoistSubsetWithLoopInvariantTensor(rewriter, forOp);
});
LDBG("after hoisting subset loop invariant tensors" << funcOp);
vector::transferOpflowOpt(rewriter, funcOp);
MLIRContext *context = &getContext();

LDBG("after hoisting redundant transfers on tensors\n" << funcOp);

MLIRContext *context = &getContext();
RewritePatternSet patterns(context);
populateVectorTransferTensorSliceTransforms(patterns);
scf::ForOp::getCanonicalizationPatterns(patterns, context);
4 changes: 3 additions & 1 deletion compiler/src/iree/compiler/Codegen/LLVMGPU/Passes.cpp
@@ -549,7 +549,9 @@ void addGPUMatmulSimtPassPipeline(OpPassManager &funcPassManager,
// still rely on buffer level transformations for transfer ops hoisting and
// store to load forwarding. This relies on shaky alias analysis and we need
// to move this to tensor level once we have better abstractions.
funcPassManager.addPass(createOptimizeTensorInsertExtractSlicesPass());
// TODO: We should be able to start hoisting accumulator load/store out
// after https://github.com/llvm/llvm-project/pull/111533.
Contributor:

reminder to drop before landing

Contributor Author:

oh good catch, thanks!

funcPassManager.addPass(createOptimizeVectorTransferPass());

// Hoist loop invariant code to avoid pipelining it.
funcPassManager.addPass(createLoopInvariantCodeMotionPass());
@@ -43,9 +43,9 @@ func.func @winograd_input_transform() {
}
// CHECK-LABEL: func.func @winograd_input_transform
// CHECK-NOT: memref.alloc
// CHECK: vector.transfer_read
// CHECK: vector.transfer_read
// CHECK: scf.for
// CHECK: vector.transfer_read
// CHECK: vector.transfer_read
// CHECK: scf.for
// CHECK: scf.for
// CHECK: vector.transfer_read
@@ -71,9 +71,9 @@ func.func @winograd_output_transform() {
}
// CHECK-LABEL: func.func @winograd_output_transform
// CHECK-NOT: memref.alloc
// CHECK: vector.transfer_read
// CHECK: vector.transfer_read
// CHECK: scf.for
// CHECK: vector.transfer_read
// CHECK: vector.transfer_read
// CHECK: scf.for
// CHECK: scf.for
// CHECK: vector.transfer_read
@@ -1,5 +1,5 @@
// RUN: iree-opt --split-input-file --iree-gpu-test-target=volta@vulkan \
// RUN: --pass-pipeline='builtin.module(func.func(iree-spirv-tile-to-cooperative-ops, iree-codegen-generic-vectorization, iree-spirv-vectorize-to-cooperative-ops, iree-codegen-optimize-tensor-insert-extract-slices, canonicalize, cse))' \
// RUN: --pass-pipeline='builtin.module(func.func(iree-spirv-tile-to-cooperative-ops, iree-codegen-generic-vectorization, iree-spirv-vectorize-to-cooperative-ops, iree-codegen-optimize-vector-transfer, canonicalize, cse))' \
// RUN: %s | FileCheck %s

#pipeline_layout = #hal.pipeline.layout<bindings = [
@@ -102,9 +102,9 @@ func.func @matmul_256x1024x128_div_add() attributes {translation_info = #transla
// CHECK: vector.transfer_write %[[ZERO]], {{.+}} : vector<16x16xf16>, memref<16x16xf16, strided<[128, 1], offset: ?>>
// CHECK: scf.for %{{.+}} = %[[C0]] to %[[C1024]] step %[[C32]]
// CHECK: gpu.barrier
// CHECK: vector.transfer_read {{.+}} vector<1x8xf16>
// CHECK: vector.transfer_read {{.+}} vector<8xf16>
// CHECK: vector.transfer_write
// CHECK: vector.transfer_read {{.+}} vector<1x8xf16>
// CHECK: vector.transfer_read {{.+}} vector<8xf16>
Comment on lines -105 to +107 (Contributor Author):

Test changes are caused by switching the pass to optimize-vector-transfer, which also drops vector unit dims.
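For reference, a minimal sketch of the unit-dim dropping at the IR level (illustrative names and types, not copied from this test; `%src`, `%src1d`, and `%pad` are assumed):

```mlir
// Before: the read produces a vector with a leading unit dimension.
%v0 = vector.transfer_read %src[%c0, %c0], %pad
    : memref<1x8xf16>, vector<1x8xf16>

// After: the source and result are rank-reduced (e.g. via a rank-reducing
// subview), and a shape_cast bridges any user that still expects the 1x8
// shape; the cast folds away once the users are rewritten as well.
%v1 = vector.transfer_read %src1d[%c0], %pad : memref<8xf16>, vector<8xf16>
%v2 = vector.shape_cast %v1 : vector<8xf16> to vector<1x8xf16>
```

This matches the CHECK changes here: the vector<1x8xf16> reads become vector<8xf16> reads.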

// CHECK: vector.transfer_write
// CHECK: gpu.barrier
// CHECK: scf.for %[[IV_Y:.+]] = %[[OFFSET_Y]] to %[[C32]] step %[[C32]]
@@ -219,7 +219,7 @@ func.func @matmul_256x1024x128_div_add() attributes {translation_info = #transla
// CHECK-DAG: %[[C16:.+]] = arith.constant 16 : index
// CHECK-DAG: %[[C32:.+]] = arith.constant 32 : index
// CHECK-DAG: %[[C512:.+]] = arith.constant 512 : index
// CHECK-DAG: %[[ZERO:.+]] = arith.constant dense<0.000000e+00> : vector<1x16x16xf16>
// CHECK-DAG: %[[ZERO:.+]] = arith.constant dense<0.000000e+00> : vector<16x16xf16>

// CHECK-DAG: %[[ID_X:.+]] = gpu.thread_id x
// CHECK-DAG: %[[ID_Y:.+]] = gpu.thread_id y
@@ -234,13 +234,13 @@ func.func @matmul_256x1024x128_div_add() attributes {translation_info = #transla
// CHECK: scf.for %{{.+}} = %[[ID_Z]] to %[[C1]] step %[[C1]]
// CHECK: scf.for %{{.+}} = %[[OFFSET_Y]] to %[[C32]] step %[[C32]]
// CHECK: scf.for %{{.+}} = %[[OFFSET_X]] to %[[C32]] step %[[C32]]
// CHECK: vector.transfer_write %[[ZERO]], {{.+}} : vector<1x16x16xf16>, memref<1x16x16xf16, strided<[32768, 256, 1], offset: ?>>
// CHECK: vector.transfer_write %[[ZERO]], {{.+}} : vector<16x16xf16>, memref<1x16x16xf16, strided<[32768, 256, 1], offset: ?>>

// CHECK: scf.for %{{.+}} = %[[C0]] to %[[C512]] step %[[C32]]
// CHECK: gpu.barrier
// CHECK: vector.transfer_read {{.+}} vector<1x1x8xf16>
// CHECK: vector.transfer_read {{.+}} vector<8xf16>
// CHECK: vector.transfer_write
// CHECK: vector.transfer_read {{.+}} vector<1x1x8xf16>
// CHECK: vector.transfer_read {{.+}} vector<8xf16>
// CHECK: vector.transfer_write
// CHECK: gpu.barrier
// CHECK: scf.for %[[IV_Z:.+]] = %[[ID_Z]] to %[[C1]] step %[[C1]]
@@ -254,16 +254,16 @@ func.func @matmul_256x1024x128_div_add() attributes {translation_info = #transla
// CHECK: %[[READ3:.+]] = vector.transfer_read %[[RHS_VIEW]][%[[C0]], %[[C16]], %[[C0]]]
// CHECK: %[[READ4:.+]] = vector.transfer_read %{{.+}}[%[[C0]], %[[C0]], %[[C0]]]
// CHECK: %[[CT0:.+]] = vector.contract
// CHECK-SAME: %[[READ0]], %[[READ2]], %[[READ4]] : vector<1x16x16xf16>, vector<1x16x16xf16> into vector<1x16x16xf16>
// CHECK-SAME: %[[READ0]], %[[READ2]], %[[READ4]] : vector<16x16xf16>, vector<16x16xf16> into vector<16x16xf16>
// CHECK: %[[CT1:.+]] = vector.contract
// CHECK-SAME: %[[READ1]], %[[READ3]], %[[CT0]] : vector<1x16x16xf16>, vector<1x16x16xf16> into vector<1x16x16xf16>
// CHECK-SAME: %[[READ1]], %[[READ3]], %[[CT0]] : vector<16x16xf16>, vector<16x16xf16> into vector<16x16xf16>
// CHECK: vector.transfer_write %[[CT1]], %{{.+}}[%[[C0]], %[[C0]], %[[C0]]]
// CHECK: scf.for %{{.+}} = %[[ID_Z]] to %[[C1]] step %[[C1]]
// CHECK: scf.for %{{.+}} = %[[OFFSET_Y]] to %[[C32]] step %[[C32]]
// CHECK: scf.for %{{.+}} = %[[OFFSET_X]] to %[[C32]] step %[[C32]]
// CHECK: %[[READ5:.+]] = vector.transfer_read %{{.+}}[%[[C0]], %[[C0]], %[[C0]]]
// CHECK: %[[READ6:.+]] = vector.transfer_read %{{.+}}[%[[C0]], %[[C0]], %[[C0]]]
// CHECK: %[[DIV:.+]] = arith.divf %[[READ6]], %[[READ5]] : vector<1x16x16xf16>
// CHECK: %[[DIV:.+]] = arith.divf %[[READ6]], %[[READ5]] : vector<16x16xf16>
// CHECK: vector.transfer_write %[[DIV]], %{{.+}}[%[[C0]], %[[C0]], %[[C0]]]

// -----
@@ -362,9 +362,9 @@ func.func @matmul_256x1024x128_mixed_signedness_int8() {
// CHECK: vector.transfer_write %[[ZERO]], {{.+}} : vector<16x16xi32>, memref<16x16xi32, strided<[128, 1], offset: ?>>
// CHECK: scf.for %{{.+}} = %[[C0]] to %[[C1024]] step %[[C32]]
// CHECK: gpu.barrier
// CHECK: vector.transfer_read {{.+}} vector<1x8xi8>
// CHECK: vector.transfer_read {{.+}} vector<8xi8>
// CHECK: vector.transfer_write
// CHECK: vector.transfer_read {{.+}} vector<1x8xi8>
// CHECK: vector.transfer_read {{.+}} vector<8xi8>
// CHECK: vector.transfer_write
// CHECK: gpu.barrier
// CHECK: scf.for %[[IV_Y:.+]] = %[[OFFSET_Y]] to %[[C32]] step %[[C32]]
2 changes: 1 addition & 1 deletion third_party/llvm-project