Suppressed UserWarning: TORCH_CUDA_ARCH_LIST #3160

Open · wants to merge 1 commit into base: develop

Conversation

devesh-2002

Changes

The TORCH_CUDA_ARCH_LIST warning is suppressed using torch.cuda.get_arch_list().

Please let me know if any changes are required.
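
A minimal sketch of the idea, set before the CUDA extension is compiled (the environment-variable handling and the "sm_XX" to "X.Y" conversion below are illustrative assumptions, not the actual PR diff):

import os

import torch

# Setting TORCH_CUDA_ARCH_LIST explicitly keeps torch.utils.cpp_extension from
# warning that it falls back to all visible architectures during extension builds.
if torch.cuda.is_available() and "TORCH_CUDA_ARCH_LIST" not in os.environ:
    sm_archs = [
        a[len("sm_"):]
        for a in torch.cuda.get_arch_list()
        if a.startswith("sm_") and a[len("sm_"):].isdigit()
    ]
    if sm_archs:
        # "75" -> "7.5", "90" -> "9.0"
        os.environ["TORCH_CUDA_ARCH_LIST"] = ";".join(f"{a[:-1]}.{a[-1]}" for a in sm_archs)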

Related tickets

Solves issue #3141

@devesh-2002 devesh-2002 requested a review from a team as a code owner December 17, 2024 16:10
@github-actions github-actions bot added the NNCF PT label (Pull requests that update NNCF PyTorch) Dec 17, 2024
@MaximProshin (Collaborator)

@AlexanderDokuchaev, @alexsu52, please take a look.

@alexsu52 (Contributor) left a comment

Thanks for the contribution!

I believe that this solution can be extended to all cases of using NNCF, not just the mobilenet_v2 example. I'm sure such a PR would be more valuable for NNCF users.

I would suggest the following:

  • Move the suppression code into the method which compiles the NNCF extension for GPU, i.e. the one calling
    async_result = pool.apply_async(self._loader.load)
  • Rework the warning suppression via warning filtering:
with warnings.catch_warnings():
    warnings.filterwarnings("ignore", message="TORCH_CUDA_ARCH_LIST is not set")
    pool = ThreadPool(processes=1)
    async_result = pool.apply_async(self._loader.load)
    self._loaded_namespace = async_result.get(timeout=timeout)
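
For reference, a self-contained sketch of how the message-based filter behaves (standalone demo, not NNCF code; the warning text is reproduced here only for illustration):

import warnings

def emit_arch_list_warning() -> None:
    # Stand-in for the UserWarning torch.utils.cpp_extension emits while building the extension.
    warnings.warn("TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation.")

with warnings.catch_warnings():
    # The filter matches the message prefix, so only this specific warning is hidden;
    # other warnings raised in the meantime are still reported.
    warnings.filterwarnings("ignore", message="TORCH_CUDA_ARCH_LIST is not set")
    emit_arch_list_warning()  # silenced

emit_arch_list_warning()  # shown again once the context manager restores the filters

Note that the warnings filter list is process-wide, so a filter installed in the main thread also covers the worker thread spawned by pool.apply_async while async_result.get(...) blocks.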
