PLZ HELP... My max image generation sizes keep getting significantly smaller and smaller #175
-
I'm on Windows 11 with an AMD 7900 XT. A couple of months ago I could generate just about anything up to a total max of 1024 x 1536. Now I can only run a total max of 720 x 864. Does having more (or certain) extensions or a larger number of installed models affect VRAM usage? Why does every new commit whittle down my ability to generate further and further? I've tried every combination of COMMANDLINE_ARGS I can find, I've added --backend=directml, I've reinstalled everything countless times, and I've tried deleting some things from the ARGS and changing some of the new optimization settings, but that just caused a bunch of errors. I'm losing my mind. At this point I'm about to give up... getting super bummed out and could really use some help.
My current webui-user.bat:
set PYTHON=
call webui.bat
-
I have a 7900XT as well and have had this same problem. I spent a long time tweaking settings, and my best results are with these COMMANDLINE_ARGS:
set COMMANDLINE_ARGS=--medvram --upcast-sampling --no-half-vae --disable-nan-check --use-cpu interrogate --opt-split-attention-v1
Also check under Settings > Optimizations that Cross attention optimization is set to V1; this setting seems to override the commandline arg. I've also had good luck setting Token Merging Ratio to 0.5 to speed up generations a lot (though with sub-quad attention it seems to produce only NaNs for me).
With these settings and an SD 1.5 model I can do batches of 4 at 512x768 with hires.fix at x2 (1024x1536 final resolution).
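For reference, a minimal webui-user.bat sketch using those arguments might look like the lines below. The empty PYTHON/GIT/VENV_DIR lines just mirror the stock template, and whether you also need a DirectML backend flag (such as the --backend=directml mentioned above) depends on which fork of the webui you installed, so keep whatever flag your install already required:

@echo off

rem Leave these empty to use the defaults from the stock template.
set PYTHON=
set GIT=
set VENV_DIR=

rem Arguments that worked for a 7900 XT as described above; adjust per your setup.
set COMMANDLINE_ARGS=--medvram --upcast-sampling --no-half-vae --disable-nan-check --use-cpu interrogate --opt-split-attention-v1

call webui.bat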