
Fatal Python error when executing on second cuda device. #103

Closed
gRox167 opened this issue Feb 13, 2024 · 4 comments · Fixed by #104

Comments

@gRox167

gRox167 commented Feb 13, 2024

First, let me express my gratitude to the developers who brought us this brilliant package!
The issue I encountered is really strange. I am working on a server equipped with 2 H100 GPUs. pytorch-finufft executes smoothly on the cuda:0 device and on the CPU, but when I use the cuda:1 device it raises this error:

Fatal Python error: PyThreadState_Get: the function must be called with the GIL held, but the GIL is released (the current Python thread state is NULL)
Python runtime state: initialized

A minimal example that reproduces the error:

import numpy as np
import torch
from pytorch_finufft.functional import finufft_type1  # imports added; assumed import path

data = torch.view_as_complex(
    torch.stack((torch.randn(15, 80, 12000), torch.randn(15, 80, 12000)), dim=-1)
)
omega = torch.rand(2, 12000) * 2 * np.pi - np.pi

finufft_type1(
    omega.to("cuda:1"),
    data.to("cuda:1"),
    (320, 320),
    isign=-1,
    modeord=0,
)

I am not sure whether this happens only on my server.
I noticed that when I use cuda:0, only one process is created, on GPU 0; however, when I use cuda:1, two processes are created, one on each GPU.
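
A speculative workaround (not verified here, and assuming the crash comes from library code operating on the current/default CUDA device while the tensors live on cuda:1) would be to make cuda:1 the current device around the call:

import numpy as np
import torch
from pytorch_finufft.functional import finufft_type1  # assumed import path

data = torch.view_as_complex(
    torch.stack((torch.randn(15, 80, 12000), torch.randn(15, 80, 12000)), dim=-1)
)
omega = torch.rand(2, 12000) * 2 * np.pi - np.pi

# Make cuda:1 the current device so that any internal allocations made on the
# "current" device match the device of the inputs (speculative).
with torch.cuda.device("cuda:1"):
    result = finufft_type1(
        omega.to("cuda:1"),
        data.to("cuda:1"),
        (320, 320),
        isign=-1,
        modeord=0,
    )

If the bug really is a device mismatch inside the library, this scoping may avoid the crash until a proper fix lands.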

@WardBrian
Collaborator

Because this was done without requires_grad on either tensor, I suspected it was a bug in cufinufft rather than the pytorch_finufft wrapper. That seems to be the case; here is a modified minimal example that calls cufinufft directly:

import numpy as np
import torch
import cufinufft

data = torch.view_as_complex(
    torch.stack((torch.randn(15, 80, 12000), torch.randn(15, 80, 12000)), dim=-1)
)
omega = torch.rand(2, 12000) * 2 * np.pi - np.pi

cufinufft.nufft2d1(
    *omega.to("cuda:1"),
    data.reshape(-1, 12000).to("cuda:1"),
    (320, 320),
    isign=-1,
)

This leads to a slightly different error:

terminate called after throwing an instance of 'thrust::system::system_error'
  what():  exclusive_scan failed to synchronize: cudaErrorIllegalAddress: an illegal memory access was encountered
Fatal Python error: Aborted

Current thread 0x000015555552c4c0 (most recent call first):
  File "/mnt/home/bward/finufft/finufft/python/cufinufft/cufinufft/_plan.py", line 236 in setpts
  File "/mnt/home/bward/finufft/finufft/python/cufinufft/cufinufft/_simple.py", line 38 in _invoke_plan
  File "/mnt/home/bward/finufft/finufft/python/cufinufft/cufinufft/_simple.py", line 12 in nufft2d1
  File "/mnt/home/bward/finufft/finufft/mwe.py", line 14 in <module>

Extension modules: numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, torch._C, torch._C._fft, torch._C._linalg, torch._C._nested, torch._C._nn, torch._C._sparse, torch._C._special (total: 20)
Aborted (core dumped)

@WardBrian
Collaborator

I have opened flatironinstitute/finufft#420, where I believe further discussion should take place.

Thank you for reporting!

@WardBrian
Collaborator

@gRox167 I believe we have a fix for this in the latest main branch if you want to give that a try!
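
For anyone following along, a quick sanity check after installing the development version might look like the sketch below (not taken from this thread; the import path is an assumption):

import numpy as np
import torch
from pytorch_finufft.functional import finufft_type1  # assumed import path

# Re-run the original reproducer on the second GPU; with the fix this should
# complete without the fatal GIL / illegal-address errors.
omega = (torch.rand(2, 12000, device="cuda:1") * 2 * np.pi) - np.pi
data = torch.randn(15, 80, 12000, dtype=torch.complex64, device="cuda:1")

out = finufft_type1(omega, data, (320, 320), isign=-1, modeord=0)
print(out.shape, out.device)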

@gRox167
Author

gRox167 commented Feb 16, 2024

@gRox167 I believe we have a fix for this in the latest main branch if you want to give that a try!

I have tried it in my code, and it works perfectly fine! Thank you for raising the issue with the upstream package and for providing such a responsive fix!
