Hi,

Thank you for sharing the code. When running MAML with the Conv4 backbone, memory usage accumulates as training progresses, eventually causing a CUDA out-of-memory error. The problem appears to come from `grad = torch.autograd.grad(set_loss, fast_parameters, create_graph=True)`. When I set `create_graph=False` (which approximates first-order MAML), memory usage stays normal. This suggests that the graph created by `create_graph=True` is not being released after each epoch.
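For reference, here is a minimal sketch of the inner-loop pattern I mean. The names `net`, `support_x`, `support_y`, and `inner_lr` are stand-ins for the repo's actual model, task data, and step size; only the `torch.autograd.grad` call mirrors the line quoted above:

```python
import torch
import torch.nn as nn

# Stand-in model and one task's support set (hypothetical shapes).
net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 5))
loss_fn = nn.CrossEntropyLoss()
inner_lr = 0.01
support_x = torch.randn(25, 10)
support_y = torch.randint(0, 5, (25,))

fast_parameters = list(net.parameters())
set_loss = loss_fn(net(support_x), support_y)

# create_graph=True keeps the inner-loop graph alive so the outer
# loop can backpropagate through the adaptation step (full MAML).
# If any reference to `grad` or the adapted parameters outlives the
# outer update, that graph cannot be freed and memory grows.
grad = torch.autograd.grad(set_loss, fast_parameters, create_graph=True)

# With create_graph=False the returned gradients are plain tensors,
# so no second-order graph is retained and memory stays flat
# (first-order MAML).
fast_parameters = [p - inner_lr * g for p, g in zip(fast_parameters, grad)]
```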
Did you run into this problem when training MAML? Any suggestions for resolving it would be appreciated.
Thanks!