[LinalgToXeGPU] Support `linalg.matmul` with `transpose_b`
This can be implemented using `xegpu.load_nd %x {transpose = {1, 0}}` for B tiles.
We should support both patterns:

- `linalg.matmul_transpose_b`
- `linalg.transpose` followed by `linalg.matmul` (this is how the MLIR coming from OV will look):

  ```
  %b_tr = linalg.transpose %b ...
  %res = linalg.matmul %a, %b_tr, ...
  ```

This functionality is required for OV integration (#207).
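A minimal sketch of what the lowered IR for the B-transposed case could look like, using the load-time transpose mentioned above. The tensor descriptor shapes, element types, and the `create_nd_tdesc`/`dpas` operand layout here are illustrative assumptions, not verified against the current XeGPU dialect definitions:

```
// Hypothetical lowering sketch: B is stored transposed, so load its tile
// with {transpose = {1, 0}} to get the layout xegpu.dpas expects.
// All shapes/types below are placeholders.
%a_desc = xegpu.create_nd_tdesc %a[%i, %k] : memref<128x128xf16> -> !xegpu.tensor_desc<8x16xf16>
%b_desc = xegpu.create_nd_tdesc %b[%j, %k] : memref<128x128xf16> -> !xegpu.tensor_desc<16x16xf16>
%a_tile = xegpu.load_nd %a_desc : !xegpu.tensor_desc<8x16xf16> -> vector<8x16xf16>
// Transpose happens at load time; no separate transpose op or extra copy
// of the B tile is needed.
%b_tile = xegpu.load_nd %b_desc {transpose = {1, 0}} : !xegpu.tensor_desc<16x16xf16> -> vector<16x16xf16>
%res = xegpu.dpas %a_tile, %b_tile, %acc : vector<8x16xf16>, vector<16x16xf16>, vector<8x16xf32> -> vector<8x16xf32>
```

With this approach, both input patterns (the fused `linalg.matmul_transpose_b` op and the explicit `linalg.transpose` + `linalg.matmul` pair) can fold the transpose into the B-tile load instead of materializing a transposed copy.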
Label: xegpu.dpas
Assignee: dchigarev