Merge M0 ops #54
Conversation
This reverts commit fc09080.
Update tracer API (see merge request tenstorrent/torch-ttnn!22):
- ttnn.open(0) -> ttnn.open_device(device_id=0)
- ttnn.close(d) -> ttnn.close_device(d)
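Callers spanning both API versions could bridge the rename with a small compatibility helper. This is a hypothetical shim, not part of torch-ttnn; `open_device_compat`/`close_device_compat` are names invented here for illustration:

```python
# Hypothetical compatibility shim: prefer the renamed device API when it
# exists, and fall back to the old pre-rename names otherwise.
def open_device_compat(ttnn_module, device_id=0):
    if hasattr(ttnn_module, "open_device"):
        return ttnn_module.open_device(device_id=device_id)
    return ttnn_module.open(device_id)  # old API: ttnn.open(0)

def close_device_compat(ttnn_module, device):
    if hasattr(ttnn_module, "close_device"):
        return ttnn_module.close_device(device)
    return ttnn_module.close(device)  # old API: ttnn.close(d)
```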
Add conversion and unit test cases:
- add
- eq
- gt
- logical_and
- logical_or
- logical_xor
- lt
- maximum
- minimum
- mul
- ne
- pow
- sub
- xlogy
- aten.abs
- aten.acos
- aten.acosh
- aten.asin
- aten.asinh
- aten.atan
- aten.atan2 # binary
- aten.atanh
- aten.clone
- aten.cos
- aten.cosh
- aten.erf
- aten.exp
- aten.expm1
- aten.gelu
- aten.hardtanh
- aten.isinf
- aten.isnan
- aten.leaky_relu
- aten.log
- aten.log10
- aten.log1p
- aten.log2
- aten.logical_not
- aten.neg
- aten.reciprocal
- aten.relu
- aten.rsqrt
- aten.sigmoid
- aten.sign
- aten.sin
- aten.sinh
- aten.sqrt
- aten.tan
- aten.tanh
- addcdiv
- addcmul
- where
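For reference, the aten semantics these conversions must reproduce can be sketched on plain Python lists. This is only an illustration of the op definitions, not the actual lowering code:

```python
# Plain-Python sketches of the aten op semantics (element-wise, equal shapes).
def addcdiv(inp, t1, t2, value=1.0):
    # out[i] = inp[i] + value * t1[i] / t2[i]
    return [a + value * b / c for a, b, c in zip(inp, t1, t2)]

def addcmul(inp, t1, t2, value=1.0):
    # out[i] = inp[i] + value * t1[i] * t2[i]
    return [a + value * b * c for a, b, c in zip(inp, t1, t2)]

def where(cond, a, b):
    # out[i] = a[i] if cond[i] else b[i]
    return [x if c else y for c, x, y in zip(cond, a, b)]
```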
Also use fx.subgraph_rewriter:
- matmul
- linear
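The fx.subgraph_rewriter approach can be sketched as below. Since the real pass substitutes ttnn ops (which need a device), `torch.mm` is used here purely as a runnable stand-in replacement:

```python
import torch
from torch.fx import symbolic_trace, subgraph_rewriter

class M(torch.nn.Module):
    def forward(self, x, w):
        return torch.matmul(x, w) + 1

# Pattern to search for in the traced graph.
def pattern(x, w):
    return torch.matmul(x, w)

# What to splice in; the real pass would use a ttnn op here.
def replacement(x, w):
    return torch.mm(x, w)

traced = symbolic_trace(M())
matches = subgraph_rewriter.replace_pattern(traced, pattern, replacement)
```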
- ttnn.add (and other ops) don't have `__name__`, so torch.compile fails; we hard-patch the op with a `__name__`.
- ttnn now needs a to_layout before computation.
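The `__name__` hard patch can be illustrated in plain Python. `_FakeTtnnOp` is a stand-in callable invented here, not the actual ttnn op object:

```python
class _FakeTtnnOp:
    """Stand-in for a ttnn op object: callable, but lacks __name__."""
    def __call__(self, a, b):
        return a + b

add = _FakeTtnnOp()

# torch.compile introspects fn.__name__; a missing attribute crashes it,
# so we assign one up front -- the "hard patch" described above.
if not hasattr(add, "__name__"):
    add.__name__ = "add"
```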
All the lowering tests now either (x)pass or xfail for me. All the xfailed tests have corresponding issues (#64, tenstorrent/tt-metal#11925, #66, tenstorrent/tt-metal#12853) so that we can track them independently. There might still be 0-1 sporadic RNG-based numerical failures (when comparing inference results), but none concerning the graph itself (the identity, number, or placement of ops).
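The issue-linked xfail pattern described above looks roughly like this in pytest (a sketch; the test name is invented, the issue reference comes from the list above):

```python
import pytest

# A known-broken lowering is marked xfail so CI stays green while the
# upstream issue is tracked; the mark is removed once the issue is fixed.
@pytest.mark.xfail(reason="tracked upstream in tenstorrent/tt-metal#11925")
def test_unsupported_lowering():
    raise NotImplementedError
```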
Is there a way to skip approval for running the GitHub workflow/CI? That would let us spot errors without human intervention and fix things faster, especially given the time difference.
@jdh8 This is due to a GitHub limitation for first-time contributors working from forks.
    return self.call_function_prop_meta(ttnn.matmul, args, kwargs)

    if target == torch.ops.aten.linear.default:
        return self.call_function_prop_meta(ttnn.linear, args, kwargs)
(For #66) The conversion for aten.linear is here, but it is somehow unused.
xlogy asserts that its inputs have the same size for now.
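For context, xlogy computes x·log(y) with the convention xlogy(0, y) = 0. A plain-Python sketch carrying the same-size restriction mentioned above (not the actual lowering code):

```python
import math

def xlogy(xs, ys):
    # Mirrors the current restriction: inputs must have the same size
    # (no broadcasting support yet).
    assert len(xs) == len(ys), "xlogy: inputs must be the same size"
    # Convention: xlogy(0, y) == 0 even where log(y) would be -inf or nan.
    return [0.0 if x == 0.0 else x * math.log(y) for x, y in zip(xs, ys)]
```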
There are still conflicts to resolve. I kept conflict markers for a clear history containing atomic commits. The following unchecked files still contain conflicts.
Files containing conflicts
.gitignore
README.md
tests/test_fall_back.py
tests/tools/test_stats.py
tools/generate_report.py
torch_ttnn/__init__.py
torch_ttnn/backend.py
torch_ttnn/fx_graphviz.py
torch_ttnn/passes/eliminate_coreops_pass.py
torch_ttnn/passes/graphviz_pass.py
torch_ttnn/passes/lowering/eliminate_data_move_pass.py
Refactor tests into subdirectories:
- models/
  - test_real_world.py
- lowering/eltwise/
  - test_pointwise_trinary.py
- lowering/matmul/
- lowering/misc/
  - test_fall_back.py
- lowering/normalization/
  - layer_norm
- lowering/tensor_manipulation/