At the moment, string-based batch sizes like `auto` or `auto:N` that lm-evaluation-harness accepts are not supported for API models; they fall back to the default batch size of 1. However, any integer batch size that is provided is indeed used for API models. So a few observations:
- Our current CR does not support non-integer batch sizes, so we don't support the auto feature either.
- I've found our default batch size of 8 to be too large in every evaluation.
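To make the fallback concrete, here is a minimal sketch of the behaviour described above. The helper name `resolve_batch_size` and its signature are hypothetical, not part of lm-evaluation-harness; it only illustrates that integer batch sizes are honoured for API models while string-based ones ("auto", "auto:N") fall back to 1.

```python
from typing import Union


def resolve_batch_size(batch_size: Union[int, str], is_api_model: bool) -> Union[int, str]:
    """Hypothetical illustration of the fallback behaviour described above.

    Integer batch sizes are used as-is for API models; string-based sizes
    like "auto" or "auto:4" are not supported there and fall back to 1.
    """
    if isinstance(batch_size, int):
        # Any explicit integer is honoured, including for API models.
        return batch_size
    if is_api_model:
        # "auto" / "auto:N" are not implemented for API models,
        # so they silently fall back to the default of 1.
        return 1
    # Local models may resolve "auto" dynamically (not shown here).
    return batch_size
```

For example, `resolve_batch_size("auto:4", is_api_model=True)` yields 1, while `resolve_batch_size(8, is_api_model=True)` yields 8, which is why only integer batch sizes take effect against API endpoints.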