Hello, thank you for your excellent work! I am trying to train the 2nd stage of UniAD on the Bench2Drive-base dataset, but it occupies more than 40 GB of CUDA memory, whereas the official UniAD figure is under 20 GB, so I hit a CUDA out-of-memory error partway through training. My parameters are set to samples_per_gpu=1, workers_per_gpu=1, and queue_length=3. How can I modify the configuration to reduce CUDA memory usage and avoid this error? I'd appreciate any suggestions at your convenience. Thanks for your help.
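For reference, a minimal sketch of where these settings typically live in an MMDetection-style Python config (the exact key names and layout in the UniAD/Bench2Drive config files are an assumption here, shown only to make the reported values concrete):

```python
# Sketch of the settings mentioned above, assuming an MMDetection-style config.
queue_length = 3  # number of past frames kept in the temporal queue per sample

data = dict(
    samples_per_gpu=1,   # batch size per GPU (already at the minimum)
    workers_per_gpu=1,   # dataloader worker processes; affects host RAM, not CUDA memory
)
```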