Hi,
I'm using test.py in Google Colab, and CUDA is available in the runtime (Tesla T4). The builder of the data-parallel module (build_dp) receives the device information as a parameter. However, when running the script on the GPU I get the following error:
```
RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor

/content/mmdetection/mmdet/models/backbones/resnet.py in forward(self, x)
    634             x = self.stem(x)
    635         else:
--> 636             x = self.conv1(x)
    637             x = self.norm1(x)
    638             x = self.relu(x)

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1049         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051             return forward_call(*input, **kwargs)
   1052         # Do not call functions when jit is used
   1053         full_backward_hooks, non_full_backward_hooks = [], []

RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
```
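For reference, this error just means the model's weights are on the GPU while the input tensor is still on the CPU. A minimal sketch (hypothetical, not the mmdetection code) that reproduces and resolves the same mismatch:

```python
import torch
import torch.nn as nn

# A toy conv layer standing in for resnet's self.conv1
conv = nn.Conv2d(3, 8, kernel_size=3)
x = torch.randn(1, 3, 32, 32)  # a plain torch.FloatTensor (CPU)

if torch.cuda.is_available():
    conv = conv.cuda()  # weights become torch.cuda.FloatTensor
    try:
        conv(x)  # mismatched devices: raises the RuntimeError quoted above
    except RuntimeError as e:
        print(type(e).__name__)
    out = conv(x.cuda())  # matching devices: works
else:
    out = conv(x)  # on CPU, input and weights already match

print(tuple(out.shape))
```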
torch.cuda.is_available() returns True; nonetheless, isinstance(img, torch.cuda.FloatTensor) at the end of the test-transforms pipeline (an adapted DefaultFormatBundle.transform) returns False.
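A side note on that check: isinstance against torch.cuda.FloatTensor only matches float32 CUDA tensors, while Tensor.is_cuda answers "is this tensor on a GPU?" regardless of dtype. A small illustration:

```python
import torch

img = torch.randn(3, 8, 8)  # a CPU tensor, like the pipeline output here
# .is_cuda is the dtype-agnostic device check; it is False for any CPU
# tensor, matching what the isinstance check reports in the pipeline.
print(img.is_cuda)
```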
Any tip on how to fix this, or any clue about the reason? When I tried to convert the tensor at that point with Tensor.to(device), I got a RuntimeError about multiprocessing, despite running on a single GPU (calling single_gpu_test):
```
RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
```
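If that Tensor.to(device) call runs inside a forked DataLoader worker (the transform pipeline executes in the workers when workers are enabled), this error would be expected: CUDA cannot be initialized in a forked subprocess. Under that assumption, the usual pattern is to keep pipeline outputs on the CPU and move batches to the GPU in the main process. A minimal sketch with a hypothetical ToyDataset, not the mmdetection pipeline:

```python
import torch
from torch.utils.data import DataLoader, Dataset

class ToyDataset(Dataset):
    """Hypothetical stand-in for the dataset + transform pipeline."""
    def __len__(self):
        return 4

    def __getitem__(self, idx):
        # Runs in a (possibly forked) worker process: return CPU tensors,
        # never call .cuda() / .to("cuda") here.
        return torch.randn(3, 8, 8)

loader = DataLoader(ToyDataset(), batch_size=2, num_workers=2)
device = "cuda" if torch.cuda.is_available() else "cpu"

batches = []
for batch in loader:
    # Move to the GPU in the main process, after the workers hand back
    # the batch; CUDA is only touched outside the forked subprocesses.
    batches.append(batch.to(device))

print(len(batches), tuple(batches[0].shape))
```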