ttnn doesn't support all data types which come from forge. One example: int32 is mapped to UInt32. This can lead to some strange IR like this:

to_layout(%arg1) { dtype = UInt32 } (tensor<10x10xi32>) -> tensor<10x10xi32>

What happens is that during lowering to ttnn, when creating the dtype attribute for to_layout, we convert the output tensor data type to a ttnn-supported data type. Moreover, the IR doesn't correctly represent what happens at runtime, since we would convert all types from the IR into ttnn-supported types during translation to flatbuffer either way.

One approach is to do type conversion on the input graph from forge. This would update all types to ttnn-supported types. Maybe this is okay from the forge perspective, since they don't do anything with the MLIR graph, so changing types in the IR wouldn't make a difference to them.

A different approach is to change forge to do type conversion to dtypes which are supported in ttnn. This would make more sense, since both the input IR and the output IR from mlir would have identical data types.
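To make the mismatch concrete, here is a minimal sketch of the kind of element-type-to-DataType mapping that produces it during lowering. The helper name, the enum, and the exact mappings are assumptions for illustration, not tt-mlir's actual code:

```cpp
// Illustrative sketch only; names and mappings are hypothetical,
// not tt-mlir's actual lowering code.
#include "llvm/Support/ErrorHandling.h"
#include "mlir/IR/BuiltinTypes.h"

// Assumed subset of ttnn-supported data types.
enum class TTNNDataType { Float32, BFloat16, UInt32, UInt16 };

// Coerce an IR element type to the nearest ttnn-supported data type.
// Signless i32 has no exact match here, so it becomes UInt32 while the
// tensor type in the IR stays i32, which is exactly the mismatch shown above.
static TTNNDataType toTTNNDataType(mlir::Type elementType) {
  if (elementType.isF32())
    return TTNNDataType::Float32;
  if (elementType.isBF16())
    return TTNNDataType::BFloat16;
  if (elementType.isInteger(32))
    return TTNNDataType::UInt32; // lossy coercion: i32 -> UInt32
  if (elementType.isInteger(16))
    return TTNNDataType::UInt16;
  llvm_unreachable("element type not representable in ttnn");
}
```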
@mtopalovicTT, I think TTIR should be flexible and accept non-HW types like ints, bools, complex types, etc. We should handle this kind of data format conversion during a TTIR->TTIR pass.
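For concreteness, a minimal sketch of what such a TTIR->TTIR pass could look like using MLIR's TypeConverter. The i32 -> ui32 mapping mirrors the example above; everything else here is an assumption about how it might be wired up, not the actual implementation:

```cpp
// Sketch of a TTIR->TTIR data-format normalization, assuming the
// i32 -> ui32 mapping from the issue; not tt-mlir's actual pass.
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/Transforms/DialectConversion.h"

using namespace mlir;

namespace {
struct TTNNCompatibleTypeConverter : public TypeConverter {
  TTNNCompatibleTypeConverter() {
    // Fallback: leave already-supported types untouched.
    // (MLIR tries conversions in reverse registration order, so the
    // tensor-specific callback below is consulted first.)
    addConversion([](Type ty) { return ty; });
    // Rewrite tensor element types that ttnn cannot represent.
    addConversion([](RankedTensorType t) -> Type {
      if (t.getElementType().isSignlessInteger(32)) {
        Type ui32 =
            IntegerType::get(t.getContext(), 32, IntegerType::Unsigned);
        return RankedTensorType::get(t.getShape(), ui32, t.getEncoding());
      }
      return t;
    });
  }
};
} // namespace
```

A pass would plug this converter into a RewritePatternSet (e.g. via populateFunctionOpInterfaceTypeConversionPattern) so that function signatures and op result types are rewritten consistently, keeping the to_layout dtype attribute and the tensor element type in agreement.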
@mrakitaTT, @AleksKnezevic, any input from the stablehlo side? There have been many similar df conversion situations there.