I want to reduce the batch size to get around the dreaded "CUDA out of memory" error, so that I can generate much larger output images (at least ~10 MB). I don't see an argument for this, so is it perhaps hard-coded?
@Bird-NZ The batch size is always 1 for neural-style-pt, since batch size refers to the number of images being run through the network at once. You can reduce memory usage by using the Adam optimizer, and by using different (smaller) models for the later, higher-resolution steps.
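A rough sketch of what that could look like as a two-pass, multi-resolution run (flag names assumed from the standard neural_style.py CLI; run `python neural_style.py -h` to confirm them for your version):

```bash
# Sketch only, not an official recipe.
# Pass 1: render at a size that comfortably fits in VRAM.
python neural_style.py -content_image content.jpg -style_image style.jpg \
  -image_size 640 -output_image out_640.png

# Pass 2: rerun at a larger size with the Adam optimizer (lower memory than
# L-BFGS), initializing from the first pass so its result carries over.
python neural_style.py -content_image content.jpg -style_image style.jpg \
  -image_size 1536 -optimizer adam -init image -init_image out_640.png \
  -output_image out_1536.png
```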