[WIP] LLVM/MHLO uplift 09/04/2023 #2484

Closed
wants to merge 7 commits into from
2 changes: 1 addition & 1 deletion docs/BuildOnLinuxOSX.md
@@ -15,7 +15,7 @@ Firstly, install MLIR (as a part of LLVM-Project):
``` bash
git clone -n https://github.com/llvm/llvm-project.git
# Check out a specific branch that is known to work with ONNX-MLIR.
-cd llvm-project && git checkout 91088978d712cd7b33610c59f69d87d5a39e3113 && cd ..
+cd llvm-project && git checkout 6098d7d5f6533edb1b873107ddc1acde23b9235b && cd ..
```

[same-as-file]: <> (utils/build-mlir.sh)
2 changes: 1 addition & 1 deletion docs/BuildOnWindows.md
@@ -52,7 +52,7 @@ Install MLIR (as a part of LLVM-Project):
```shell
git clone -n https://github.com/llvm/llvm-project.git
# Check out a specific branch that is known to work with ONNX-MLIR.
-cd llvm-project && git checkout 91088978d712cd7b33610c59f69d87d5a39e3113 && cd ..
+cd llvm-project && git checkout 6098d7d5f6533edb1b873107ddc1acde23b9235b && cd ..
```

[same-as-file]: <> (utils/build-mlir.cmd)
2 changes: 1 addition & 1 deletion docs/Testing.md
@@ -36,7 +36,7 @@ The all_test_names.txt is automatically generated with command "make check-onnx-

### Adding ONNX-supported test cases to the current set of backend tests

-When the ONNX-to-Krnl conversion of an operator is added, the corresponding backend tests for this operator should be added to test.py. The available test cases can be found in `third_part/onnx/onnx/backend/test/case/node`. You can identify new tests by looking for the new operator in `test/backend/all_test_names.txt`. Once you have located new tests, you may add the new tests in the `test/backend/inference_backend.py.` Please note to add suffix `_cpu` to the onnx test name. Associated with the test, you can define how to run the tests for the new operator. For example:
+When the ONNX-to-Krnl conversion of an operator is added, the corresponding backend tests for this operator should be added to test.py. The available test cases can be found in `third_party/onnx/onnx/backend/test/case/node`. You can identify new tests by looking for the new operator in `test/backend/all_test_names.txt`. Once you have located new tests, you may add the new tests in the `test/backend/inference_backend.py.` Please note to add suffix `_cpu` to the onnx test name. Associated with the test, you can define how to run the tests for the new operator. For example:
```
"test_and2d_cpu": {STATIC_SHAPE:{}, DYNAMIC_SHAPE:{-1:{-1}}, CONSTANT_INPUT:{-1}},
```
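The registration entry above can be sketched in self-contained Python. This is an illustration only, not the project's actual code: `STATIC_SHAPE`, `DYNAMIC_SHAPE`, and `CONSTANT_INPUT` are real constants in `inference_backend.py`, but their values here are stand-ins, and `modes_for` is a hypothetical helper showing how such an entry might be queried.

```python
# Stand-in values; the real constants live in test/backend/inference_backend.py.
STATIC_SHAPE = "static"
DYNAMIC_SHAPE = "dynamic"
CONSTANT_INPUT = "constant"

# One registry entry, shaped like the example above. By convention in the
# real registry, -1 means "apply to all inputs" / "all dimensions".
test_to_enable = {
    "test_and2d_cpu": {STATIC_SHAPE: {}, DYNAMIC_SHAPE: {-1: {-1}}, CONSTANT_INPUT: {-1}},
}

def modes_for(test_name):
    """Return, sorted, the compilation modes a registered test runs under."""
    return sorted(test_to_enable[test_name].keys())
```

With this shape, adding a new operator's test is one dictionary entry; the backend driver then runs it once per listed mode.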
4 changes: 2 additions & 2 deletions docs/UpdatingLLVMCommit.md
@@ -2,7 +2,7 @@

# Updating the LLVM commit or MLIR-HLO submodule

-ONNX-MLIR depends on `llvm-project` (among various other projects such as `mlir-hlo`). The `llvm-project` dependency is captured in [utils/clone-mlir.sh](clone-mlir.sh). `mlir-hlo` is a submodule found in the `third_party` directory.
+ONNX-MLIR depends on `llvm-project` (among various other projects such as `mlir-hlo`). The `llvm-project` dependency is captured in [utils/clone-mlir.sh](../utils/clone-mlir.sh). `mlir-hlo` is a submodule found in the `third_party` directory.

We plan to update `llvm-project` a couple of times a month in order to keep up-to-date with the advancements made in `mlir`, but also to decrease the complexity of each update. There is currently no plan to update `mlir-hlo` on any given schedule, though for a specific LLVM update it may be necessary to also update the `mlir-hlo` submodule for the build to continue working correctly. This is because `mlir-hlo` itself also has a dependency on `mlir`.

@@ -17,7 +17,7 @@ We've started an update rotation that is described [here](https://github.com/onn
## What is the update process?

1. **Lookup green commit hashes**: From the Github issue https://github.com/llvm/torch-mlir/issues/1178, find the LLVM and MLIR-HLO green commits for the week when ONNX-MLIR is being updated.
-2. **Update the `llvm-project` commit**: Update the LLVM commit referenced in the source tree to the green commit hash for the LLVM project from Step 1. The current locations that need to be updated are [utils/clone-mlir.sh](clone-mlir.sh), [docs/BuildOnLinuxOSX.md](BuildOnLinuxOSX.md) and [docs/BuildOnWindows.md](BuildOnWindows.md).
+2. **Update the `llvm-project` commit**: Update the LLVM commit referenced in the source tree to the green commit hash for the LLVM project from Step 1. The current locations that need to be updated are [utils/clone-mlir.sh](../utils/clone-mlir.sh), [docs/BuildOnLinuxOSX.md](BuildOnLinuxOSX.md) and [docs/BuildOnWindows.md](BuildOnWindows.md).
3. **Update the `mlir-hlo` submodule**: In the `third-party/mlir-hlo` directory, run `git fetch` followed by `git checkout <mlir-hlo-commit-hash>` (where `<mlir-hlo-commit-hash>` is the green commit hash for the MLIR-HLO project from Step 1).
4. **Rebuild and test ONNX-MLIR**: This might involve fixing various API breakages introduced upstream (they are likely unrelated to what you are working on). If these fixes are too complex, please file a work-in-progress PR explaining the issues you are running into asking for help so that someone from the community can help.
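Step 2 above is a mechanical search-and-replace across three pinned locations. A minimal Python sketch of that step follows; the file list comes from the document, but the helper itself is hypothetical, not ONNX-MLIR tooling, and it assumes the pinned hash always appears as a full 40-hex-digit `git checkout` argument.

```python
import re

# Files that pin the LLVM commit, per Step 2 of the update process.
PINNED_FILES = [
    "utils/clone-mlir.sh",
    "docs/BuildOnLinuxOSX.md",
    "docs/BuildOnWindows.md",
]

# Matches a full 40-hex-digit commit hash used as a `git checkout` argument.
COMMIT_RE = re.compile(r"git checkout [0-9a-f]{40}")

def bump_commit(text: str, new_commit: str) -> str:
    """Replace every pinned `git checkout <hash>` with the new green commit."""
    return COMMIT_RE.sub(f"git checkout {new_commit}", text)

def bump_files(new_commit: str) -> None:
    """Apply bump_commit to each pinned file in place."""
    for path in PINNED_FILES:
        with open(path) as f:
            updated = bump_commit(f.read(), new_commit)
        with open(path, "w") as f:
            f.write(updated)
```

Step 3 (the `mlir-hlo` submodule) stays a plain `git fetch` + `git checkout` inside `third_party/mlir-hlo`, since submodule pins live in git metadata rather than in text files.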

4 changes: 2 additions & 2 deletions test/mlir/conversion/instrument/add.mlir
@@ -6,7 +6,7 @@ func.func @test_instrument_add_onnx(%arg0 : tensor<10x10xf32>, %arg1 : tensor<10
}

// CHECK-LABEL: func.func @test_instrument_add_onnx
-// CHECK: "krnl.runtime_instrument"() {nodeName = "model/add1", opName = "onnx.Add", tag = 5 : i64} : () -> ()
+// CHECK: "krnl.runtime_instrument"() <{nodeName = "model/add1", opName = "onnx.Add", tag = 5 : i64}> : () -> ()
// CHECK: [[RES_:%.+]] = memref.alloc()
// CHECK: affine.for [[I_0_:%.+]] = 0 to 10 {
// CHECK: affine.for [[I_1_:%.+]] = 0 to 10 {
@@ -16,7 +16,7 @@ func.func @test_instrument_add_onnx(%arg0 : tensor<10x10xf32>, %arg1 : tensor<10
// CHECK: affine.store [[VAR_3_]], [[RES_]]{{.}}[[I_0_]], [[I_1_]]{{.}} : memref<10x10xf32>
// CHECK: }
// CHECK: }
-// CHECK: "krnl.runtime_instrument"() {nodeName = "model/add1", opName = "onnx.Add", tag = 6 : i64} : () -> ()
+// CHECK: "krnl.runtime_instrument"() <{nodeName = "model/add1", opName = "onnx.Add", tag = 6 : i64}> : () -> ()
// CHECK: return
// CHECK: }

4 changes: 2 additions & 2 deletions test/mlir/conversion/instrument/onnx_add.mlir
@@ -6,7 +6,7 @@ func.func @test_instrument_add_onnx(%arg0 : tensor<10x10xf32>, %arg1 : tensor<10
}

// CHECK-LABEL: func.func @test_instrument_add_onnx
-// CHECK: "krnl.runtime_instrument"() {nodeName = "model/add1", opName = "onnx.Add", tag = 5 : i64} : () -> ()
+// CHECK: "krnl.runtime_instrument"() <{nodeName = "model/add1", opName = "onnx.Add", tag = 5 : i64}> : () -> ()
// CHECK: [[RES_:%.+]] = memref.alloc()
// CHECK: affine.for [[I_0_:%.+]] = 0 to 10 {
// CHECK: affine.for [[I_1_:%.+]] = 0 to 10 {
@@ -16,7 +16,7 @@ func.func @test_instrument_add_onnx(%arg0 : tensor<10x10xf32>, %arg1 : tensor<10
// CHECK: affine.store [[VAR_3_]], [[RES_]]{{.}}[[I_0_]], [[I_1_]]{{.}} : memref<10x10xf32>
// CHECK: }
// CHECK: }
-// CHECK: "krnl.runtime_instrument"() {nodeName = "model/add1", opName = "onnx.Add", tag = 6 : i64} : () -> ()
+// CHECK: "krnl.runtime_instrument"() <{nodeName = "model/add1", opName = "onnx.Add", tag = 6 : i64}> : () -> ()
// CHECK: return
// CHECK: }

4 changes: 2 additions & 2 deletions test/mlir/conversion/instrument/onnx_all.mlir
@@ -6,7 +6,7 @@ func.func @test_instrument_add_onnx(%arg0 : tensor<10x10xf32>, %arg1 : tensor<10
}

// CHECK-LABEL: func.func @test_instrument_add_onnx
-// CHECK: "krnl.runtime_instrument"() {nodeName = "model/add1", opName = "onnx.Add", tag = 5 : i64} : () -> ()
+// CHECK: "krnl.runtime_instrument"() <{nodeName = "model/add1", opName = "onnx.Add", tag = 5 : i64}> : () -> ()
// CHECK: [[RES_:%.+]] = memref.alloc()
// CHECK: affine.for [[I_0_:%.+]] = 0 to 10 {
// CHECK: affine.for [[I_1_:%.+]] = 0 to 10 {
@@ -16,7 +16,7 @@ func.func @test_instrument_add_onnx(%arg0 : tensor<10x10xf32>, %arg1 : tensor<10
// CHECK: affine.store [[VAR_3_]], [[RES_]]{{.}}[[I_0_]], [[I_1_]]{{.}} : memref<10x10xf32>
// CHECK: }
// CHECK: }
-// CHECK: "krnl.runtime_instrument"() {nodeName = "model/add1", opName = "onnx.Add", tag = 6 : i64} : () -> ()
+// CHECK: "krnl.runtime_instrument"() <{nodeName = "model/add1", opName = "onnx.Add", tag = 6 : i64}> : () -> ()
// CHECK: return
// CHECK: }

@@ -5,10 +5,10 @@

// Check nested if lowering (function computes scalar Sign).
func.func @test_if_sign(%arg0: tensor<f32>) -> tensor<i32> {
-%zero = onnx.Constant {value = dense<0> : tensor<i32>} : tensor<i32>
-%plus = onnx.Constant {value = dense<1> : tensor<i32>} : tensor<i32>
-%minus = onnx.Constant {value = dense<-1> : tensor<i32>} : tensor<i32>
-%0 = onnx.Constant {value = dense<0.0> : tensor<f32>} : tensor<f32>
+%zero = "onnx.Constant"() <{value = dense<0> : tensor<i32>}> : () -> tensor<i32>
+%plus = "onnx.Constant"() <{value = dense<1> : tensor<i32>}> : () -> tensor<i32>
+%minus = "onnx.Constant"() <{value = dense<-1> : tensor<i32>}> : () -> tensor<i32>
+%0 = "onnx.Constant"() <{value = dense<0.0> : tensor<f32>}> : () -> tensor<f32>
%1 = "onnx.Less"(%arg0, %0) : (tensor<f32>, tensor<f32>) -> tensor<i1>
%2 = "onnx.If"(%1) ({
onnx.Yield %minus : tensor<i32>
@@ -25,10 +25,10 @@ func.func @test_if_sign(%arg0: tensor<f32>) -> tensor<i32> {
// mlir2FileCheck.py
// CHECK-LABEL: func.func @test_if_sign
// CHECK-SAME: ([[PARAM_0_:%.+]]: memref<f32>) -> memref<i32> {
-// CHECK-DAG: [[CONSTANT_1_:%.+]] = "krnl.global"() {name = "constant_{{[0-9]+}}", shape = [], value = dense<0> : tensor<i32>} : () -> memref<i32>
-// CHECK-DAG: [[CONSTANT_2_:%.+]] = "krnl.global"() {name = "constant_{{[0-9]+}}", shape = [], value = dense<1> : tensor<i32>} : () -> memref<i32>
-// CHECK-DAG: [[CONSTANT_3_:%.+]] = "krnl.global"() {name = "constant_{{[0-9]+}}", shape = [], value = dense<-1> : tensor<i32>} : () -> memref<i32>
-// CHECK-DAG: [[VAR_0_:%.+]] = "krnl.global"() {name = "constant_{{[0-9]+}}", shape = [], value = dense<0.000000e+00> : tensor<f32>} : () -> memref<f32>
+// CHECK-DAG: [[CONSTANT_1_:%.+]] = "krnl.global"() <{name = "constant_{{[0-9]+}}", shape = [], value = dense<0> : tensor<i32>}> : () -> memref<i32>
+// CHECK-DAG: [[CONSTANT_2_:%.+]] = "krnl.global"() <{name = "constant_{{[0-9]+}}", shape = [], value = dense<1> : tensor<i32>}> : () -> memref<i32>
+// CHECK-DAG: [[CONSTANT_3_:%.+]] = "krnl.global"() <{name = "constant_{{[0-9]+}}", shape = [], value = dense<-1> : tensor<i32>}> : () -> memref<i32>
+// CHECK-DAG: [[VAR_0_:%.+]] = "krnl.global"() <{name = "constant_{{[0-9]+}}", shape = [], value = dense<0.000000e+00> : tensor<f32>}> : () -> memref<f32>
// CHECK-DAG: [[RES_:%.+]] = memref.alloc() : memref<i1>
// CHECK-DAG: [[LOAD_PARAM_0_MEM_:%.+]] = krnl.load [[PARAM_0_]][] : memref<f32>
// CHECK-DAG: [[LOAD_VAR_0_MEM_:%.+]] = krnl.load [[VAR_0_]][] : memref<f32>
7 changes: 5 additions & 2 deletions test/mlir/conversion/onnx_to_krnl/ControlFlow/Loop.mlir
@@ -116,7 +116,9 @@ func.func @test_loop(%arg0: tensor<i64>, %arg1: tensor<i1>, %arg2: tensor<?xf32>
// CHECK-DAG: [[VAR_dim_7_:%.+]] = memref.dim [[PARAM_2_]], [[CST_0_1_]] : memref<?xf32>
// CHECK-DAG: [[CST_0_2_:%.+]] = arith.constant 0 : index
// CHECK: [[VAR_dim_9_:%.+]] = memref.dim [[PARAM_2_]], [[CST_0_2_]] : memref<?xf32>
-// CHECK: [[VAR_11_:%.+]] = affine.max [[MAP_0_]]([[VAR_dim_7_]], [[VAR_dim_9_]])
+// CHECK-DAG: [[VAR_11_:%.+]] = affine.max [[MAP_0_]]([[VAR_dim_7_]], [[VAR_dim_9_]])
+// CHECK-DAG: [[CST_1_1_:%.+]] = arith.constant 1 : index
+// CHECK-NOT: separator of consecutive DAGs
// CHECK-DAG: [[RES_3_:%.+]] = memref.alloc([[VAR_11_]]) {{.*}}: memref<?xf32>
// CHECK-DAG: [[LOOP_1_:%.+]] = krnl.define_loops 1
// CHECK-DAG: [[CST_0_3_:%.+]] = arith.constant 0 : index
@@ -148,7 +150,7 @@ func.func @test_loop(%arg0: tensor<i64>, %arg1: tensor<i1>, %arg2: tensor<?xf32>
// CHECK: krnl.iterate([[LOOP_2_]]) with ([[LOOP_2_]] -> [[I_2_:%.+]] = [[CST_0_]] to [[VAR_4_]]){
// CHECK: [[VAR_8_1_:%.+]] = krnl.get_induction_var_value([[LOOP_2_]]) : (!krnl.loop) -> index
// CHECK: "krnl.region"() ({
-// CHECK-DAG: [[LOAD_RES_1_MEM_1_:%.+]] = "krnl.seqextract"([[RES_]], [[VAR_8_1_]]) {copy = 0 : ui1} : (memref<?xmemref<?xf32>>, index) -> memref<?xf32>
+// CHECK-DAG: [[LOAD_RES_1_MEM_1_:%.+]] = "krnl.seqextract"([[RES_]], [[VAR_8_1_]]) <{copy = 0 : ui1}> : (memref<?xmemref<?xf32>>, index) -> memref<?xf32>
// CHECK-DAG: [[LOOP_3_:%.+]] = krnl.define_loops 1
// CHECK-DAG: [[CST_0_8_:%.+]] = arith.constant 0 : index
// CHECK-DAG: [[CST_0_9_:%.+]] = arith.constant 0 : index
Expand All @@ -163,3 +165,4 @@ func.func @test_loop(%arg0: tensor<i64>, %arg1: tensor<i1>, %arg2: tensor<?xf32>
// CHECK: return [[RES_4_]] : memref<?x?xf32>
// CHECK: }
}

@@ -18,10 +18,10 @@ func.func @test_loop_tiny_yolo() -> tensor<?xi32> {
// CHECK-LABEL: func @test_loop_tiny_yolo
// CHECK-SAME: () -> memref<?xi32> {
// CHECK-DAG: [[ZERO:%.+]] = arith.constant 0 : index
-// CHECK-DAG: [[ONE_:%.+]] = "krnl.global"() {name = {{.*}}, shape = [], value = dense<1> : tensor<i32>} : () -> memref<i32>
-// CHECK-DAG: [[VAR_0_:%.+]] = "krnl.global"() {name = {{.*}}, shape = [], value = dense<7> : tensor<i64>} : () -> memref<i64>
-// CHECK-DAG: [[VAR_1_:%.+]] = "krnl.global"() {name = {{.*}}, shape = [], value = dense<true> : tensor<i1>} : () -> memref<i1>
-// CHECK-DAG: [[VAR_2_:%.+]] = "krnl.global"() {name = {{.*}}, shape = [], value = dense<0> : tensor<i32>} : () -> memref<i32>
+// CHECK-DAG: [[ONE_:%.+]] = "krnl.global"() <{name = {{.*}}, shape = [], value = dense<1> : tensor<i32>}> : () -> memref<i32>
+// CHECK-DAG: [[VAR_0_:%.+]] = "krnl.global"() <{name = {{.*}}, shape = [], value = dense<7> : tensor<i64>}> : () -> memref<i64>
+// CHECK-DAG: [[VAR_1_:%.+]] = "krnl.global"() <{name = {{.*}}, shape = [], value = dense<true> : tensor<i1>}> : () -> memref<i1>
+// CHECK-DAG: [[VAR_2_:%.+]] = "krnl.global"() <{name = {{.*}}, shape = [], value = dense<0> : tensor<i32>}> : () -> memref<i32>
// CHECK-DAG: [[RES_:%.+]] = memref.alloc() : memref<i32>
// CHECK-DAG: [[LOAD_VAR_0_MEM_:%.+]] = krnl.load [[VAR_0_]][] : memref<i64>
// CHECK-DAG: [[VAR_5_:%.+]] = arith.index_cast [[LOAD_VAR_0_MEM_]] : i64 to index