E2E MaxPool2d Bringup #642
Conversation
LPanosTT commented Sep 6, 2024 (edited)
- Bring up MaxPool2d end to end: TTIR -> pass to add reshapes on input/output -> TTNN -> flatbuffer -> runtime
- Working for shapes where H, W, and C are tile-dim aligned and C is a power of 2 (see the sketch after this list)
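As a concrete illustration of that restriction, here is a minimal sketch (not code from this PR) of a check for whether an NHWC input falls within the current limitations, assuming the 32x32 tile size used by tt-metal (the tile size is not stated in this PR):

#include "llvm/Support/MathExtras.h"
#include <cstdint>

constexpr int64_t kTileDim = 32; // assumed tile dimension (32x32 tiles)

// True if an NHWC maxpool2d input meets the bringup limitations above:
// H, W, and C tile-dim aligned, and C a power of 2.
bool isSupportedShape(int64_t n, int64_t h, int64_t w, int64_t c) {
  (void)n; // batch size is not restricted here
  return h % kTileDim == 0 && w % kTileDim == 0 && c % kTileDim == 0 &&
         llvm::isPowerOf2_64(static_cast<uint64_t>(c));
}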
SI32Attr:$input_height,
SI32Attr:$input_width,
SI32Attr:$kernel_height,
SI32Attr:$kernel_width,
Same as conv op, we should remove / infer these 4 attributes.
Btw, all of these can be changed from:
SI32Attr:$stride_height,
to:
"int64_t":$stride_height,
and then TableGen will generate getters and setters that work with ints instead of IntAttrs, which is easier to work with.
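For illustration only, the call-site difference this enables might look like the following one-liner (getter name assumed from the usual ODS camel-casing of $stride_height, not taken from this PR):

int64_t strideH = op.getStrideHeight(); // plain integer, no IntegerAttr unwrapping needed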
@nsmithtt Since pooling doesn't have weights, there is nothing to infer the kernel size from. Furthermore, the TTNN maxpool2d op requires the input to be (1, 1, N*H*W, C), so the height, width, and batch size information is lost by the time we get to runtime unless they are stored as attributes.
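A quick worked example of that information loss, with illustrative shapes that are not taken from the PR:

// Two distinct NHWC inputs flatten to the same (1, 1, N*H*W, C) layout,
// so N, H, and W cannot be recovered from the flattened type alone:
// (2, 32, 32, 64) -> (1, 1, 2048, 64)
// (1, 64, 32, 64) -> (1, 1, 2048, 64)
static_assert(2 * 32 * 32 == 1 * 64 * 32, "both collapse to N*H*W == 2048");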
TTNN maxpool2d op requires the input be (1, 1, N*H*W, C)

Yes, the op should be defined this way in the TTNN dialect, but in TTIR we shouldn't define it this way. Agree about the kernel size, but input_height/width should be removed.
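A minimal sketch of what inferring those two values from the operand type could look like, assuming an NHWC layout and an ODS operand named $input (both are assumptions, not code from this PR):

auto inputTy = mlir::cast<mlir::RankedTensorType>(op.getInput().getType());
int64_t inputHeight = inputTy.getDimSize(1); // H in NHWC
int64_t inputWidth  = inputTy.getDimSize(2); // W in NHWC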
Force-pushed from 16afb37 to bd8285d (Compare)
I've generalized the pass which inserts reshapes onto MaxPool2d, as I hope to reuse the code to insert reshapes on Conv2d as well.
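Since the generalization is the interesting part, here is a hypothetical sketch of sharing one rewrite pattern between MaxPool2d and Conv2d via a template parameter. Only the core MLIR pattern machinery is real; the op names, the $input operand, and the insertReshape helper are assumptions for illustration, not code from this PR.

#include "llvm/ADT/SmallVector.h"
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/IR/PatternMatch.h"

namespace {

// Compute the (1, 1, N*H*W, C) shape the TTNN op expects from an NHWC type.
llvm::SmallVector<int64_t, 4> flattenedShape(mlir::RankedTensorType ty) {
  auto s = ty.getShape(); // {N, H, W, C}
  return {1, 1, s[0] * s[1] * s[2], s[3]};
}

// TOp is any sliding-window op whose input must be flattened before lowering,
// e.g. a MaxPool2d or Conv2d op in the TTIR dialect (op names assumed).
template <typename TOp>
struct FlattenSlidingWindowInput : public mlir::OpRewritePattern<TOp> {
  using mlir::OpRewritePattern<TOp>::OpRewritePattern;

  mlir::LogicalResult
  matchAndRewrite(TOp op, mlir::PatternRewriter &rewriter) const override {
    auto inputTy = mlir::cast<mlir::RankedTensorType>(op.getInput().getType());
    // Already flattened (or not 4-D): nothing to do, and avoid re-matching.
    if (inputTy.getRank() != 4 ||
        (inputTy.getDimSize(0) == 1 && inputTy.getDimSize(1) == 1))
      return mlir::failure();
    // insertReshape is a hypothetical helper that creates a reshape op to the
    // requested shape and returns the reshaped value; a matching reshape back
    // to NHWC would be inserted on the result in the same way.
    mlir::Value flat = insertReshape(rewriter, op.getLoc(), op.getInput(),
                                     flattenedShape(inputTy));
    rewriter.modifyOpInPlace(op, [&] { op.getInputMutable().assign(flat); });
    return mlir::success();
  }
};

} // namespace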
Force-pushed from 328f709 to 11a40cd (Compare)
return failure();
}

if (!llvm::isa<MaxPool2dOp>(op)) {
Is this needed?
Force-pushed from fef525d to ef5b219 (Compare)
Force-pushed from a67d176 to 04b5681 (Compare)
- Add reshape insertion pass for maxpool2d
- Cleanup maxpool
- Fix shapes
- Remove MLIR module dump in TTIRToTTNNPass.cpp, add more checks to verify() methods for TTIR/TTNN MaxPool2dOp
- Fix verify condition for maxpool2d
- Generalize reshape-inserting pass with template
- Fix build warnings
- Remove use of auto keyword
- Rebase fixes
- Make input height/width optional in TTIR as per Nick's suggestion
- Remove stray prints
- Add emitC pattern for maxpool2d
- Fix typo in error message
- Remove clang-tidy from cmake file
- Fix
Force-pushed from c13b1f4 to 1e20f6b (Compare)
- Add E2E maxpool2d op
- Add reshape insertion pass for maxpool2d