Using Fish-Speech with AMD GPUs on Windows (with ZLUDA) #874
patientx started this conversation in Show and tell
Hi there. I maintain the comfyui-zluda fork, which is based on ZLUDA, a project that "lets you run unmodified CUDA applications with near-native performance on AMD GPUs" (https://github.com/lshqqytiger/ZLUDA). I recently got interested in fish-speech and wanted to try the same approach with it. It works, but it would need deep changes to be maintainable alongside NVIDIA GPUs, so I forked it:
https://github.com/patientx/fish-speech-zluda
I primarily reused my zluda.py from comfyui-zluda, alongside code from https://github.com/AznamirWoW in #754.
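For anyone curious, the rough idea behind the zluda.py shim is sketched below (a minimal illustration, not the exact file from the repo; the function name is just for the example). ZLUDA exposes the AMD GPU as a CUDA device whose name ends with "[ZLUDA]", and since it does not support cuDNN or the fused flash / memory-efficient attention kernels, the shim detects that marker and falls back to the plain math SDP backend:

```python
import torch

def patch_for_zluda() -> None:
    """Minimal sketch of a ZLUDA shim: detect a ZLUDA-backed 'CUDA' device
    and disable the backends it cannot handle. The real zluda.py does more."""
    if not torch.cuda.is_available():
        return
    # ZLUDA reports the device name with a "[ZLUDA]" suffix.
    if torch.cuda.get_device_name(0).endswith("[ZLUDA]"):
        print("ZLUDA device detected, applying workarounds")
        # cuDNN and the fused attention kernels are not available under ZLUDA,
        # so force the plain math scaled-dot-product-attention path.
        torch.backends.cudnn.enabled = False
        torch.backends.cuda.enable_flash_sdp(False)
        torch.backends.cuda.enable_mem_efficient_sdp(False)
        torch.backends.cuda.enable_math_sdp(True)

patch_for_zluda()
```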
It needs a few dependencies pre-installed, but once that is done it also opens the way for other ZLUDA-related AI apps; at the very least you will have an idea of how to run them.
Everything is explained step by step. There is a simple installer that creates the virtual environment and installs the necessary packages, models, etc. Once it finishes, all you need to do is run the batch file to open the webui.
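If you want to sanity-check the environment after the installer finishes (before launching the webui), a quick test like the following, run from inside the created virtual environment, should print a device name ending in "[ZLUDA]" and complete a small matmul on the GPU. This is just an assumed check on my part, not something the installer runs:

```python
import torch

# Should print True and a device name ending with "[ZLUDA]"
# when the ZLUDA setup is working.
print("CUDA available:", torch.cuda.is_available())
print("Device:", torch.cuda.get_device_name(0))

# Tiny matmul on the "CUDA" (ZLUDA) device as a smoke test.
x = torch.randn(256, 256, device="cuda")
y = x @ x
torch.cuda.synchronize()
print("Matmul OK, result shape:", tuple(y.shape))
```

Keep in mind that the very first GPU operation under ZLUDA can take noticeably longer, since kernels are compiled and cached on first use.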
Please give it a try. (I am not a coder; I started using Stable Diffusion last year with my AMD GPU and, from there, tried to find ways to run these things better on my GPU, so this is a hobby at most.)