WIP: pytorch v2.6.0 #326
base: main
Conversation
…nda-forge-pinning 2025.01.18.07.29.32
Hi! This is the friendly automated conda-forge-linting service. I just wanted to let you know that I linted all conda-recipes in your PR. I do have some suggestions for making it better though... For recipe/meta.yaml:
This message was generated by GitHub Actions workflow run https://github.com/conda-forge/conda-forge-webservices/actions/runs/12863672331. Examine the logs at this URL for more detail.
This looks better than expected so far. Still have to double-check the dependency changes. Happy if someone could do that (even if it's just noting which bounds changed relative to the current recipe).
recipe/meta.yaml (outdated):

```diff
@@ -305,27 +288,28 @@ outputs:
         - typing_extensions
         - {{ pin_subpackage('libtorch', exact=True) }}
       run:
-        - {{ pin_subpackage('libtorch', exact=True) }}
```
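For context, a hedged sketch of what the two pin styles render to (the bounds and the commented-out alternative below are illustrative, not taken from this PR; the build string is borrowed from the error message further down):

```yaml
run:
  # exact pin: version + build string, so only one specific libtorch build matches
  - {{ pin_subpackage('libtorch', exact=True) }}         # -> libtorch 2.6.0 cpu_generic_habf3c96_0
  # version-bound alternative: any build (hash) of the pinned version matches
  # - {{ pin_subpackage('libtorch', max_pin='x.x.x') }}  # -> libtorch >=2.6.0,<2.6.1 (roughly)
```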
The old syntax was a workaround for the fact that, for the non-megabuilds, each libtorch may have a different hash, making it incompatible.
Or maybe something like that. Maybe you have addressed the core problem and this is the better way.
Even for the megabuild, libtorch has a unique hash, because there's only one. It's pytorch itself that gets different hashes, due to the different Python versions.
> Even for the megabuild, libtorch has a unique hash

Not true. See https://anaconda.org/conda-forge/libtorch/files
Obviously CUDA and non-CUDA builds create different libtorch hashes, but within a megabuild it's unique, which is what matters for pinning it in pytorch.
Unless the idea was to allow mixing CUDA-enabled pytorch with non-CUDA libtorch, but I don't see the sense in that.
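To make the hash argument concrete, a hedged sketch of the variant split (the `megabuild_outputs` key is not recipe syntax, and all build strings here are illustrative):

```yaml
megabuild_outputs:
  libtorch:                              # one build per megabuild -> unique hash
    - 2.6.0 cpu_generic_habf3c96_0
  pytorch:                               # one build per Python version -> several hashes
    - 2.6.0 cpu_generic_py310_h1111111_0
    - 2.6.0 cpu_generic_py311_h2222222_0
    - 2.6.0 cpu_generic_py312_h3333333_0
```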
It's more interesting for the blas_impl: pytorch could theoretically be independent of that (if all the BLAS calls go through libtorch), but we're already creating different pytorch hashes due to the {{ pin_subpackage("libtorch", exact=True) }} in the host dependencies, so AFAICT we're not materially changing the various installations here, just making it impossible to install untested/unsupported combinations.
For non-megabuilds, the idea was to allow libtorch from any of the builds (with the same features except the Python version) to work with any pytorch build. This way, you don't have to download a different libtorch for each Python version. Note that this is for non-megabuilds only, i.e. osx, where we don't have CUDA builds.
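A hedged sketch of what such a hash-independent run requirement can look like (the exact spelling is illustrative; note the comment further down warning that relying on PKG_BUILDNUM has pitfalls):

```yaml
run:
  # match any libtorch with the same version and build number, regardless of
  # build hash, so one libtorch artifact serves the pytorch builds for every
  # Python version
  - libtorch {{ version }} *_{{ PKG_BUILDNUM }}
```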
Yeah, that makes sense! Let me try to reflect that in the run-deps.
Aarch builds fail with:
Don't rely on PKG_BUILDNUM resolving this correctly; that's either racy, or implicitly depends on a separate render pass after setting build.number.
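A minimal sketch of the safer pattern, assuming a single Jinja variable as the source of truth for the build number (the variable name is hypothetical):

```yaml
{% set build = 0 %}              # hypothetical: set once at the top of the recipe

build:
  number: {{ build }}

outputs:
  - name: pytorch
    requirements:
      run:
        # reference the variable directly instead of PKG_BUILDNUM, which may
        # only reflect build.number after a separate render pass
        - libtorch {{ version }} *_{{ build }}
```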
Sigh, since when is conda-build applying patches through …
Force-pushed from bd0bec7 to 022f063
otherwise conda breaks

```
conda_build.exceptions.RecipeError: Mismatching hashes in recipe.
Exact pins in dependencies that contribute to the hash often cause this.
Can you change one or more exact pins to version bound constraints?
Involved packages were:
Mismatching package: libtorch (id cpu_generic_habf3c96_0); dep: libtorch 2.6.0.rc7 *0; consumer package: pytorch
```
on osx-64

skip?
Yeah, this minor accuracy violation indeed sounds skippable, but I've deprioritised this PR until we get the Windows builds for 2.5 fixed (and ideally your #318 merged as well).
ok, good to know
Worth pointing out that in 6 days' time, PyPI will have an up-to-date pytorch package whereas conda won't. Will have a look at that other PR.
Are you talking about RCs, or are we not looking at the same index? 2.6.0 GA hasn't been published AFAICT. Or are you saying that 2.6.0 will be released in 6 days? In any case, this is no reason to rush. We didn't have Windows packages for years, and I'm more concerned about fixing them than about lagging behind the PyPI release a bit (and we've often lagged for months in the past; this has gotten much better with the open-gpu server, but it still happens; 2.5.0 was released Oct 18th last year, and we had first builds on Nov 3rd).
yes
100%
Build the release candidates
Linux CI cancelled until builds for #322 are live