Try to build CUDA wheels #106
base: main
Conversation
Here is a pre-built version of the code in this pull request: wheels.zip. You can install it locally by unzipping it.
```yaml
pull_request:
    # Check all PR
```
Do we really want to build all CUDA wheels on every PR? I'm worried this will increase CI time a lot.
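One way to limit the CI cost would be to gate the CUDA wheel jobs behind a PR label while still running them on tags. This is only a sketch of the idea, not something in this PR; the job and label names are assumptions:

```yaml
# Hypothetical job-level guard: on PRs, build CUDA wheels only when a
# 'build-wheels' label is present; always build on release tags.
jobs:
  cuda-wheels:
    if: >-
      startsWith(github.ref, 'refs/tags/') ||
      contains(github.event.pull_request.labels.*.name, 'build-wheels')
    runs-on: ubuntu-latest
```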
```yaml
python -m build --sdist sphericart-torch --outdir sphericart-torch/dist
python -m cibuildwheel sphericart-torch/dist/*.tar.gz --output-dir sphericart-torch/dist
env:
    CIBW_BEFORE_ALL: bash /host/home/runner/work/sphericart/sphericart/scripts/cibw-cuda-setup.sh ${{ matrix.cuda }}
```
Shouldn't this also be done for the various sphericart (non-torch) wheels?
We don't provide any CUDA functionality for our Python (NumPy) interface.
```sh
@@ -0,0 +1,26 @@
#!/bin/bash
```
```sh
set -eu
```

`-e` to exit if any command fails, `-u` to error on undefined bash variables.
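As a minimal illustration of what those two flags buy you in a setup script like this one (a standalone demo, not part of the PR's script; the variable and directory names are made up):

```shell
#!/bin/bash
set -eu

# Under -u, expanding an unset variable is a fatal error, but the
# ${VAR:-default} fallback form is still allowed:
echo "CUDA setup arg: ${CUDA_ARG:-none}"

# Under -e, any command exiting non-zero aborts the script immediately,
# so a failed download or package install cannot be silently ignored:
mkdir -p /tmp/cibw-demo
echo "done"
```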
Regarding the distribution of the wheels, I had an idea last week: we could have some code in Python to dynamically pick a version of the shared library (depending on CUDA presence, CUDA version, and torch version for rascaline-torch). When doing a local build, there would be a single library matching the installation environment.

For PyPI releases, we could use the code in this PR to build for all variations of Torch/CUDA, and then merge the resulting shared libraries into a single wheel (there are tools to extract & repack wheels). The resulting wheel would be a bit larger, but not that bad overall since sphericart is fairly small.

The other concern I have is that we would be releasing something quite different from what the developers are testing, but it might still be worth it compared to the pain of setting up an alternative PyPI index and explaining to users how to use it.
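The dynamic-selection part of this idea could look roughly like the sketch below. The function name and the file-naming convention are assumptions for illustration, not sphericart's actual layout:

```python
import os

def pick_shared_library(lib_dir, torch_version, cuda_version):
    """Pick the bundled shared library matching the runtime environment.

    Hypothetically assumes the wheel bundles one file per variant, named
    'libsphericart_torch-<torch>-cuda<cuda>.so' for CUDA builds and
    'libsphericart_torch-<torch>-cpu.so' for CPU-only builds.
    """
    suffix = f"cuda{cuda_version}" if cuda_version else "cpu"
    candidate = os.path.join(
        lib_dir, f"libsphericart_torch-{torch_version}-{suffix}.so"
    )
    if not os.path.exists(candidate):
        raise RuntimeError(
            f"no pre-built sphericart library for torch {torch_version} ({suffix})"
        )
    return candidate
```

At import time the package would call something like this with `torch.__version__` and `torch.version.cuda`, then load the returned path (e.g. via `torch.ops.load_library`), so a single merged wheel can serve all the Torch/CUDA combinations it bundles.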
No description provided.