Replies: 2 comments
-
Just to make an example, I'm experimenting with a top-down approach: I have many doubts that it will work, but let's see what happens and whether it opens a clear path for contribution (/cc @stellaraccident).
-
Another top-down case is the 3D deformable convolution recently proposed in Keras-cv: /cc @SimonBiggs @axeldavy
-
We have a quite old topic about the tension between custom ops and XLA composability.
I really want to know what the strategy is for interacting with the XLA "consumer" projects when they hit the limit of expressing their computation in XLA/*HLO.
Very recently we again had a proposal to introduce native C++ ops, for the first time, in a historically pure-Python library like Keras, starting from the difficulty of expressing IOU3D.
But this problem is not exclusive to Keras or TF: PyTorch3D also needed to rely on C++/CUDA custom ops, so I suppose we will currently have the same problem with PyTorch/XLA and Torch-MLIR.
Then, to add yet another framework, JAX has recently been introducing something like a custom-ops API in the framework.
In general, I can understand wanting a fast "exit strategy" to call historically well-performing native implementations without contributing/interacting too much with the compiler stack/team, but IMHO the strategy for stress-testing XLA on a richer distribution of tensor programs is really not very clear.
When a "consumer" framework hits the barrier of expressing a tensor program for a specific operation, how is it going to interact with the OpenXLA team?
I think that taking the "custom-ops" shortcut every time, even in relatively peripheral Python projects, will not help to positively stress test the compiler technology and its standard golden set of *HLO operations (/cc @burmako)
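To make the composability point concrete, here is a minimal sketch (my own, not from the Keras-cv proposal) of pairwise 3D box IoU for the *axis-aligned* case, assuming boxes encoded as `[x1, y1, z1, x2, y2, z2]` min/max corners. It is pure array ops, so swapping `numpy` for `jax.numpy` and wrapping it in `jax.jit` would lower the whole computation to XLA with no custom op. The rotated-box case that actually motivated the native-op proposal is the harder one, which is exactly where the compiler stack gets stress-tested.

```python
import numpy as np

def iou3d_aligned(boxes_a, boxes_b):
    """Pairwise IoU of axis-aligned 3D boxes.

    boxes_a: (N, 6), boxes_b: (M, 6), rows are [x1, y1, z1, x2, y2, z2].
    Returns an (N, M) IoU matrix. Pure broadcasting/array ops throughout,
    so the same code compiles under jax.jit to XLA without a custom op.
    """
    a = boxes_a[:, None, :]  # (N, 1, 6)
    b = boxes_b[None, :, :]  # (1, M, 6)
    # Intersection box: max of the min-corners, min of the max-corners,
    # with negative extents (no overlap) clipped to zero.
    lo = np.maximum(a[..., :3], b[..., :3])
    hi = np.minimum(a[..., 3:], b[..., 3:])
    inter = np.prod(np.clip(hi - lo, 0.0, None), axis=-1)        # (N, M)
    vol_a = np.prod(boxes_a[:, 3:] - boxes_a[:, :3], axis=-1)[:, None]
    vol_b = np.prod(boxes_b[:, 3:] - boxes_b[:, :3], axis=-1)[None, :]
    return inter / (vol_a + vol_b - inter)

boxes = np.array([[0.0, 0.0, 0.0, 2.0, 2.0, 2.0],
                  [1.0, 1.0, 1.0, 3.0, 3.0, 3.0]])
print(iou3d_aligned(boxes, boxes))  # diagonal is 1.0; the two boxes overlap in a unit cube
```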