Should model.state <- dsharp.load(modelFile) change which device the model is on?
Issue #427 | Created by @nhirschey | 2022-04-26 19:51:45 UTC
I built the dev branch locally to test the new save/load functionality from PR #425 and found something unexpected. Should `model.state <- dsharp.load(modelFile)` change which device the model is on?

When trying to run `rnn.fsx` with `#r "nuget: TorchSharp-cuda-windows, 0.96.0"`, restoring model state seems to always move the model to the CPU device, which causes an error unless I manually move it back to the GPU. Details below.
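For context, a minimal sketch of the manual workaround described above, assuming a model configured for the GPU whose state is restored from disk. The identifiers `model` and `modelFile` follow the issue; `dsharp.config`, `Device.GPU`, and `Model.move` are assumed from the DiffSharp API, so treat this as illustrative rather than a verified repro:

```fsharp
open DiffSharp

// Assumed setup: default device is the GPU, and `model` was built/trained there.
dsharp.config(device=Device.GPU)

// Restoring state appears to place the model's tensors on the CPU...
model.state <- dsharp.load(modelFile)

// ...so it must be moved back to the GPU manually before further use,
// otherwise subsequent GPU-tensor operations fail with a device mismatch.
model.move(Device.GPU)
```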
Related PyTorch docs here.
Edited with simpler self-contained repro: