OS
Linux
GPU Library
CUDA 12.x
Python version
3.10
Describe the bug
I use a manual split a lot with 3 GPUs. I set something like [22.4, 23, 23] and most models split fine. When I try to load Qwen-VL, it goes OOM on GPU 0 and never loads onto the 3rd GPU. Setting autosplit works, but that isn't ideal for the many models that barely fit.
I just pulled today, but I don't think anything related to this changed between yesterday and today. Usually I run tensor parallel, which isn't supported for Qwen-VL yet, and the split works fine there. I will try the model I loaded previously once I'm done testing Qwen-VL.
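For reference, a minimal sketch of the split settings I mean. The key names (gpu_split_auto, gpu_split) are assumed from the sample config.yml; the values are just my setup:

```yaml
# Sketch of the relevant config.yml section (key names assumed from
# the sample config; values are from my 3-GPU setup).
model:
  gpu_split_auto: false        # autosplit loads, but leaves less usable headroom
  gpu_split: [22.4, 23, 23]    # GB reserved per GPU, in device order
```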
Reproduction steps
Set the manual split values in config.yml, then run CUDA_VISIBLE_DEVICES=0,1,2 ./start.sh
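As a concrete launch sketch (start.sh is the launcher from my setup; nvidia-smi is only there to watch per-GPU allocation):

```sh
# Expose all three GPUs and start the server with the manual split above.
CUDA_VISIBLE_DEVICES=0,1,2 ./start.sh

# In another terminal, watch memory climb on GPU 0 until it OOMs;
# GPU 2 never receives any weights.
watch -n 1 nvidia-smi
```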
Expected behavior
The model splits across the 3 GPUs like other models do.
Logs
OOM on GPU 0
Additional context
No response
Acknowledgements
I have looked for similar issues before submitting this one.
I have read the disclaimer, and this issue is related to a code bug. If I have a question, I will use the Discord server.
I understand that the developers have lives and my issue will be answered when possible.
I understand the developers of this program are human, and I will ask my questions politely.