Reducing RAM usage? #6619
Replies: 2 comments 3 replies
-
You can increase the system swap/page file to 40-50 GB if you have the disk space, and enable fallback from VRAM to system RAM through the driver settings; in the driver, the setting should be applied to ComfyUI's embedded Python executable (X:...\ComfyUI\python_embedded\python.exe).
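On Windows the page file is grown through the system settings dialog; on Linux the equivalent swap-file setup is a few commands. A minimal sketch (run as root; the `/swapfile` path and 48G size are example values, not from the thread):

```shell
# Create and enable a 48 GB swap file on Linux (requires root).
# fallocate is faster than dd on filesystems that support it.
fallocate -l 48G /swapfile
chmod 600 /swapfile   # swap files must not be world-readable
mkswap /swapfile      # format the file as swap space
swapon /swapfile      # enable it immediately
# To persist across reboots, add this line to /etc/fstab:
# /swapfile none swap sw 0 0
```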
-
Today I did some tests, just for fun, loading the diffusion model, text encoders and VAE separately instead of from a single checkpoint. I ran the simple default workflow with the same parameters each time. GGUF gave a reduction of ~3 GB of RAM; not a big deal, but it's something, and I can't see any quality loss. I also tried the diffusion model in fp8 mode:
- Checkpoint: 14.9 GB RAM
- Diffusion model (GGUF): 11.9 GB RAM
- Diffusion model (fp8): 12.9 GB RAM
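The savings above follow directly from bytes per weight. A back-of-the-envelope sketch (the 2.6B parameter count is a hypothetical stand-in for an SDXL-class UNet, and the Q4 figure approximates GGUF K-quants; real usage adds activations, text encoders and the VAE on top):

```python
# Rough memory estimate for a diffusion model's weights at different
# precisions. Parameter count and bytes/weight are illustrative only.

BYTES_PER_PARAM = {
    "fp16": 2.0,        # 16-bit float
    "fp8": 1.0,         # 8-bit float
    "gguf_q4": 0.5625,  # ~4.5 bits/weight, approximating Q4_K-style quants
}

def weight_gib(n_params: float, fmt: str) -> float:
    """Approximate weight memory in GiB for n_params parameters."""
    return n_params * BYTES_PER_PARAM[fmt] / 2**30

n = 2.6e9  # hypothetical SDXL-class UNet parameter count
for fmt in BYTES_PER_PARAM:
    print(f"{fmt:>8}: {weight_gib(n, fmt):.2f} GiB")
```

Halving the precision roughly halves the weight footprint, which matches the direction (if not the exact size) of the measured deltas above.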
-
Hey, I've been trying ComfyUI and Stable Diffusion for a while now, and it's been really interesting; however, I constantly run into Out Of Memory errors with bigger workflows. Is there any way I could reduce ComfyUI's RAM/VRAM usage? I read a bit into it and found out about quantizing models, but I can't figure out how to get that to work with my checkpoint of choice (LEOSAM's HelloWorld XL 7.0).
I have a RTX 3050 mobile with 4GB of VRAM, an AMD Ryzen 7 6000, and 16GB of RAM. Thanks in advance!
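For a 4 GB card, ComfyUI's own launch flags are usually the first thing to try before quantizing. A hedged sketch; flag names can change between versions, so verify with `python main.py --help`:

```shell
cd ComfyUI

# Offload model weights to system RAM aggressively (slower, uses less VRAM):
python main.py --lowvram

# For very tight VRAM, keep everything in system RAM:
# python main.py --novram

# Last resort: run entirely on the CPU (very slow):
# python main.py --cpu
```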