
Rebuild for CUDA 12 w/arch support #8

Conversation

@regro-cf-autotick-bot (Contributor) commented Aug 24, 2023

This PR has been triggered in an effort to update cuda120.

Notes and instructions for merging this PR:

  1. Please merge the PR only after the tests have passed.
  2. Feel free to push to the bot's branch to update this PR if needed.

Please note that if you close this PR we presume that the feedstock has been rebuilt, so if you are going to perform the rebuild yourself, don't close this PR until your rebuild has been merged.


Here are some more details about this specific migrator:

The transition to the CUDA 12 SDK includes new packages for all CUDA libraries and build tools. Notably, the cudatoolkit package no longer exists, and packages should depend directly on the specific CUDA libraries (libcublas, libcusolver, etc.) as needed. For an in-depth overview of the changes and to report problems, see conda-forge/conda-forge.github.io#1963. Please feel free to raise any issues encountered there. Thank you! 🙏


If this PR was opened in error or needs to be updated please add the bot-rerun label to this PR. The bot will close this PR and schedule another one. If you do not have permissions to add this label, you can use the phrase @conda-forge-admin, please rerun bot in a PR comment to have the conda-forge-admin add it for you.

This PR was created by the regro-cf-autotick-bot. The regro-cf-autotick-bot is a service to automatically track the dependency graph, migrate packages, and propose package version updates for conda-forge. Feel free to drop us a line if there are any issues! This PR was generated by https://github.com/regro/cf-scripts/actions/runs/5968582771, please use this URL for debugging.

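On the dependency change described above: one way to work out which of the new CUDA 12 libraries a package actually needs at run time is to inspect what its built extension modules link against. A minimal sketch, assuming a Linux build environment; the hoomd/dlext install path is illustrative, not taken from this recipe:

# List the shared libraries the built extensions pull in; any CUDA libraries that
# show up here (cudart, cublas, cusolver, ...) are candidates for explicit run dependencies.
ldd "$PREFIX"/lib/python*/site-packages/hoomd/dlext/*.so | grep -iE 'cudart|cublas|cusolver'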
@conda-forge-webservices

Hi! This is the friendly automated conda-forge-linting service.

I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.

@jakirkham mentioned this pull request Aug 24, 2023
@jakirkham (Member)

Looks like the builds are failing due to this line calling fix_cudart_rpath, which is undefined

Looks like that was removed upstream: glotzerlab/hoomd-blue#1400

@pabloferz (Contributor)

Thanks @jakirkham for looking into this! I will fix this over in the hoomd-dlext repo and tag a new patch release.

@jakirkham (Member)

Thanks Pablo! 🙏

@pabloferz (Contributor)

SSAGESLabs/hoomd-dlext#21 should fix the issue you're seeing here.

@jakirkham (Member) commented Aug 29, 2023

Thanks Pablo! 🙏

Looks like that just went into the 0.4.0 release. Is that right?

If so, maybe we can do a release here first and then follow up on this migration afterwards. Does that seem reasonable?

Edit: Ah nvm. Didn't see the release was also being folded into this PR. That works too 🙂

@jakirkham changed the title from "Rebuild for CUDA 12 w/arch support" to "Release 0.4.0 & Rebuild for CUDA 12 w/arch support" Aug 29, 2023
@jakirkham (Member)

@conda-forge-admin, please re-render

@pabloferz (Contributor)

I think we want different builds for the hoomd v3 and v4 series. I would need to update the PR again.

If releasing a new version and adding CUDA 12 support within the same PR works, let's go with that.

@pabloferz (Contributor)

@conda-forge-admin, please re-render

@pabloferz (Contributor)

Given that the bot also triggered #9, I went ahead and made sure that one worked first. We should probably go with that one first and then reopen/re-trigger this one.

@jakirkham (Member)

Makes sense. If you would rather merge that first, that seems reasonable to me. We can ask the bot to regenerate this PR afterwards.

@pabloferz (Contributor)

OK. I merged #9. Can we close and regenerate this one?

And again, thank you so much @jakirkham for staying so diligently on top of this!

@pabloferz (Contributor)

I had to do much more work than anticipated. But we can give it a go again.

@jakirkham (Member)

Sorry this was challenging.

Unfortunately, since the PR was closed, the bot won't re-run.

Had tried to manually re-add the feedstock to the migration ( regro/cf-graph-countyfair#5 ), but it seems that might not be doable either.

Think this will need to be migrated manually, either by reopening this PR and resolving conflicts, or by starting a new one that pulls in the migrator changes from this PR.

@pabloferz reopened this Oct 9, 2023
@conda-forge-webservices

Hi! This is the friendly automated conda-forge-linting service.

I was trying to look for recipes to lint for you, but it appears we have a merge conflict.
Please try to merge or rebase with the base branch to resolve this conflict.

Please ping the 'conda-forge/core' team (using the @ notation in a comment) if you believe this is a bug.

@jakirkham (Member)

@conda-forge-admin, please re-render

@conda-forge-webservices

Hi! This is the friendly automated conda-forge-linting service.

I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.

@jakirkham (Member)

@conda-forge-admin, please re-render

@github-actions (bot) commented Oct 9, 2023

Hi! This is the friendly automated conda-forge-webservice.

I tried to rerender for you, but it looks like there was nothing to do.

This message was generated by GitHub actions workflow run https://github.com/conda-forge/hoomd-dlext-feedstock/actions/runs/6460499425.

@jakirkham changed the title from "Release 0.4.0 & Rebuild for CUDA 12 w/arch support" to "Rebuild for CUDA 12 w/arch support" Oct 9, 2023
@jakirkham (Member)

Have tried to fix up the merge conflicts. Let's see how this goes

@jakirkham (Member) commented Oct 9, 2023

Interesting: it seems this build completed, but it didn't actually do anything 🤔

Edit: I see a few other builds like this as well that didn't do anything. Wonder if it has something to do with how these skips are configured:

skip: true # [win or py<36 or python_impl == "pypy"]
skip: true # [hoomd == "v2" and cuda_compiler_version != "None" and py>39]
skip: true # [hoomd == "v2" and cuda_compiler_version == "None" and py>310]
skip: true # [hoomd == "v3" and cuda_compiler_version not in ("11.2", "None")]
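If any one of those selectors evaluates true for a variant, conda-build drops that variant entirely, so the corresponding CI job can finish green without building anything. One way to check which variants survive the skips is to render the recipe locally; a minimal sketch, assuming conda-build is installed and the recipe lives in recipe/:

# Print the package outputs conda-build would produce for every variant; variants
# removed by a true `skip: true` selector simply do not appear in the list.
conda build recipe/ --output

# Or dump the fully rendered meta.yaml per variant to see how the selectors resolved.
conda render recipe/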

@jakirkham (Member)

Am seeing some builds with this error (for example):

++ /home/conda/feedstock_root/build_artifacts/hoomd-dlext_1696878018099/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold/bin/python -c 'import hoomd; print(getattr(hoomd, "__version__", "").startswith("2."), end="")'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/conda/feedstock_root/build_artifacts/hoomd-dlext_1696878018099/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold/lib/python3.11/site-packages/hoomd/__init__.py", line 81, in <module>
    from hoomd import hpmc
  File "/home/conda/feedstock_root/build_artifacts/hoomd-dlext_1696878018099/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold/lib/python3.11/site-packages/hoomd/hpmc/__init__.py", line 32, in <module>
    from hoomd.hpmc import pair
  File "/home/conda/feedstock_root/build_artifacts/hoomd-dlext_1696878018099/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold/lib/python3.11/site-packages/hoomd/hpmc/pair/__init__.py", line 11, in <module>
    from . import user
  File "/home/conda/feedstock_root/build_artifacts/hoomd-dlext_1696878018099/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold/lib/python3.11/site-packages/hoomd/hpmc/pair/user.py", line 18, in <module>
    from hoomd.hpmc import _jit
ImportError: libcuda.so.1: cannot open shared object file: No such file or directory

Meanwhile, other builds don't see this error at all (for example).

As there is no GPU available (and so libcuda.so.1 from the driver library is not around), it makes sense that loading this library would fail. Are there other options here outside of loading that library?
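One option, sketched here, would be to read the installed version from package metadata instead of importing hoomd, so the GPU-linked extension modules are never loaded. This assumes the distribution is registered under the name "hoomd" (not verified against this feedstock); importlib.metadata is in the standard library on Python 3.8+.

# Same check as the failing build-script line, but without importing the package
# (importing pulls in extension modules that need libcuda.so.1 at load time).
python -c 'from importlib.metadata import version; print(version("hoomd").startswith("2."), end="")'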

@pabloferz (Contributor)

Closing in favor of #13

@pabloferz closed this Dec 15, 2023