In Tables 14(a) and 14(b), you list a batch size of 4096 for pretraining and 1024 for finetuning.
Could you clarify whether these are per-GPU batch sizes or the global batch size across all GPUs on all nodes?
Could you also share the GPU memory size used to obtain the reported values?
Could the logs of pretraining and finetuning be made available?
The batch sizes in the appendix are global unless stated otherwise, and so are the learning rates. You can use any number of GPUs as long as the total batch size adds up to that number. We used a mix of A100 40 GB and A100 80 GB cards in our experiments, so most configs should work on GPUs with 40 GB or less if you use 64 of them.
If you run out of memory, you can always lower the per-GPU batch size while increasing the number of GPUs (which is equivalent, see the sketch below) or reduce the learning rate to compensate (which might not exactly reproduce the result, since training uses AdamW).
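Here is a minimal sketch of how the global batch size relates to the per-GPU batch size and gradient-accumulation steps in a standard data-parallel setup. The function and variable names are illustrative only and are not the repo's actual config keys.

```python
# Minimal sketch: derive a per-GPU micro-batch size (and gradient-accumulation
# steps, if needed) from a global batch size in a data-parallel setup.
# Names here are illustrative, not the repo's actual config options.

def split_global_batch(global_batch_size: int, num_gpus: int,
                       max_per_gpu_batch: int) -> tuple[int, int]:
    """Return (per_gpu_batch, grad_accum_steps) such that
    per_gpu_batch * grad_accum_steps * num_gpus == global_batch_size."""
    assert global_batch_size % num_gpus == 0, "global batch must divide evenly"
    per_gpu = global_batch_size // num_gpus
    accum = 1
    # If a micro-batch of this size would not fit in GPU memory,
    # halve it and accumulate gradients over more steps instead.
    while per_gpu > max_per_gpu_batch:
        assert per_gpu % 2 == 0, "cannot split further evenly"
        per_gpu //= 2
        accum *= 2
    return per_gpu, accum

# Example: pretraining with a global batch of 4096 on 64 GPUs, assuming
# (hypothetically) at most 32 samples fit on one 40 GB card.
per_gpu, accum = split_global_batch(4096, num_gpus=64, max_per_gpu_batch=32)
print(per_gpu, accum)  # -> 32, 2
```

Either way, the optimizer sees the same effective batch size per update step, so the reported configs should be reproducible on smaller GPUs.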
I'll look into whether we can release some training graphs.