Answered by Disty0 on Feb 21, 2025
This happens when you are using a PyTorch build that is not compiled with ROCm, and something tries to access PyTorch's ROCM_HOME variable: instead of a ROCm bundled with PyTorch, it resolves to the ROCm installed on your system. The log message is harmless, since ZLUDA uses a PyTorch build compiled with CUDA, so this is expected.
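If you want to confirm which backend your PyTorch wheel was built against, and see what ROCM_HOME resolves to, a minimal sketch using standard torch attributes (the printed values in the comments are illustrative, not from the original thread):

```python
import torch

# torch.version.cuda is set on CUDA wheels, torch.version.hip on ROCm wheels;
# whichever backend the wheel was not built for is None. A ZLUDA setup should
# report a CUDA version here.
print("CUDA build:", torch.version.cuda)  # e.g. "11.8" on a CUDA wheel
print("HIP build:", torch.version.hip)    # None on a CUDA wheel

# ROCM_HOME is resolved by torch.utils.cpp_extension when it is imported; even
# on a CUDA-only wheel it can point at a system-wide ROCm install (e.g.
# /opt/rocm), which is the situation described in the answer above.
from torch.utils.cpp_extension import ROCM_HOME
print("ROCM_HOME:", ROCM_HOME)
```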