
MPS Support #96

Open
trbutler opened this issue Sep 7, 2022 · 6 comments
@trbutler

trbutler commented Sep 7, 2022

Running neural-style-pt on Apple Silicon seems to require CPU-only mode; otherwise it terminates with this error:

raise AssertionError("Torch not compiled with CUDA enabled")

AssertionError: Torch not compiled with CUDA enabled

Given that PyTorch now supports native Apple Metal acceleration, is there a way to fix this so it'd use MPS?
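For reference, PyTorch (1.12 and later) exposes the Metal backend as the `mps` device, and its availability can be probed before falling back to CPU. A minimal sketch of the selection logic:

```python
import torch

# Pick the best available backend: CUDA first, then MPS (Apple
# Silicon / Metal), then CPU. torch.backends.mps.is_available()
# exists as of PyTorch 1.12.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

x = torch.randn(2, 3, device=device)
print(device.type, tuple(x.shape))
```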

@ProGamerGov
Owner

@trbutler Yeah, it should be relatively straightforward; however, I currently don't have a way to test it.

@ProGamerGov
Owner

@trbutler Try running this WIP neural-style-pt branch on MPS: https://github.com/ProGamerGov/neural-style-pt/tree/master-2

@trbutler
Author

trbutler commented Sep 14, 2022

@ProGamerGov Thanks! I just tried it, and it doesn't seem to honor the request to switch backends. For example, running:

python neural_style.py -style_image /Users/timothybutler/Downloads/f24.jpg -content_image /Users/timothybutler/Downloads/SCAN22122008_00000.tif -backend mps

It starts and then gives the same error:

Traceback (most recent call last):
  File "/Users/timothybutler/Experiments/neural-style-pt/neural_style.py", line 500, in <module>
    main()
  File "/Users/timothybutler/Experiments/neural-style-pt/neural_style.py", line 62, in main
    content_image = preprocess(params.content_image, params.image_size).to(backward_device)
  File "/Users/timothybutler/Experiments/miniconda3/envs/ldm/lib/python3.10/site-packages/torch/cuda/__init__.py", line 211, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

Perhaps the backend isn't getting passed along?

@trbutler
Author

@ProGamerGov I just realized that if I set both -backend and -gpu to mps, it does seem to work. Should setting -backend to mps also set -gpu to mps in setup_gpu when nothing is specified for the -gpu parameter?

Currently running neural_style on an image to see what happens, but it does show 97% GPU usage from the python process in macOS's Activity Monitor.
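One way to get that fallback behavior would be a small check in the option handling. The sketch below is hypothetical (`resolve_gpu` and the default value it tests are assumptions, not the project's actual code; setup_gpu is the function named above):

```python
def resolve_gpu(backend: str, gpu: str) -> str:
    """If -backend is mps but -gpu was left at a CUDA-style
    default, fall back to mps so the two flags agree.

    The default value "0" checked here is an assumption about
    neural-style-pt's -gpu default.
    """
    if backend == "mps" and gpu in ("0", None):
        return "mps"
    return gpu

print(resolve_gpu("mps", "0"))    # falls back to "mps"
print(resolve_gpu("cudnn", "0"))  # unchanged: "0"
```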

@trbutler
Author

Results using mps vs. cpu:

mps: python neural_style.py -style_image /Users/timothybutler/Downloads/f24.jpg  77.40s user 81.96s system 45% cpu 5:52.32 total
cpu: python neural_style.py -style_image /Users/timothybutler/Downloads/f24.jpg  856.05s user 451.82s system 261% cpu 8:20.04 total

I couldn't get -gpu mps,cpu to work -- I thought I'd try that out of curiosity, too, but it reports: AssertionError: The number of -multidevice_strategy layer indices minus 1, must be equal to the number of -gpu devices.

@e13h

e13h commented Dec 15, 2022

> I couldn't get -gpu mps,cpu to work -- I thought I'd try that out of curiosity, too, but that reports that AssertionError: The number of -multidevice_strategy layer indices minus 1, must be equal to the number of -gpu devices.

I think this is because the default for -multidevice_strategy is 4,7,29, which implies you are using 4 devices. If using -gpu mps,cpu, try setting -multidevice_strategy to a single number (7, for example).
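In other words, the check ties the two flags together roughly like this (a sketch, not the project's exact code; the error text is quoted from the thread):

```python
def check_multidevice(gpu_devices, strategy_indices):
    # Each -multidevice_strategy index is a split point, and N split
    # points partition the network into N + 1 chunks, one per -gpu
    # device.
    if len(strategy_indices) + 1 != len(gpu_devices):
        raise AssertionError(
            "The number of -multidevice_strategy layer indices minus 1, "
            "must be equal to the number of -gpu devices."
        )

# The default -multidevice_strategy 4,7,29 implies 4 devices:
check_multidevice(["cuda:0", "cuda:1", "cuda:2", "cuda:3"], [4, 7, 29])
# -gpu mps,cpu needs exactly one split point:
check_multidevice(["mps", "cpu"], [7])
```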


However, this leads to another error

RuntimeError: Invalid device string: 'cuda:mps'

But this can be easily resolved by modifying these lines...

    def name_devices(self, input_list):
        device_list = []
        for i, device in enumerate(input_list):
            if str(device).lower() != 'c':
                device_list.append("cuda:" + str(device))
            else:
                device_list.append("cpu")
        return device_list

...to the following:

    def name_devices(self, input_list):
        device_list = []
        for i, device in enumerate(input_list):
            if str(device).lower() not in ("cpu", "mps"):
                device_list.append("cuda:" + str(device))
            else:
                device_list.append(device)
        return device_list
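With that change, the method leaves cpu and mps tokens as-is and still prefixes CUDA device indices. Wrapped in a minimal stub class so it runs standalone (the real method lives on a larger class in neural_style.py), it behaves like this:

```python
class DeviceNamer:
    # Stub holder for the patched method; only name_devices is
    # reproduced here.
    def name_devices(self, input_list):
        device_list = []
        for i, device in enumerate(input_list):
            if str(device).lower() not in ("cpu", "mps"):
                device_list.append("cuda:" + str(device))
            else:
                device_list.append(device)
        return device_list

namer = DeviceNamer()
print(namer.name_devices(["mps", "cpu"]))  # ['mps', 'cpu']
print(namer.name_devices([0, 1]))          # ['cuda:0', 'cuda:1']
```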
