# Explicit disposal discussion

Issue #392 | Created by @dsyme | 2021-11-08 23:06:54 UTC | enhancement
TorchSharp optionally allows explicit disposal of its tensor handles. DiffSharp currently doesn't support this - you can't explicitly dispose DiffSharp tensors. We should consider whether DiffSharp is also going to allow explicit disposal somehow.
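For reference, explicit disposal on the TorchSharp side looks roughly like this (a minimal sketch, assuming only that tensor handles implement `IDisposable`; the function itself is illustrative, not from either library):

```fsharp
// Sketch: deterministic release of native tensor memory via IDisposable.
// 'use' disposes the intermediate handle when the scope exits, rather
// than waiting for the GC to finalize it.
let sumOfSquares (input: Tensor) =
    use squared = input.mul(input)   // intermediate handle, disposed on exit
    squared.sum()                    // fresh result handle that the caller owns
```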
Explicit disposal brings precision with regard to memory and performance, but it is quite intrusive in the programming model. For example, TorchSharp guarantees that tensor functions always return new handles, so
```fsharp
let someFunction (input: Tensor) =
    if monday then
        input.add(100)
    else
        input
```
must become:
```fsharp
let someFunction (input: Tensor) =
    if monday then
        input.add(100)
    else
        input.alias() // create a new handle, because tensor functions always return new handles that can be explicitly disposed
```
This means tensor functions that create intermediaries can explicitly dispose of them. For example, this is valid (but relies absolutely on `f1` returning a new tensor handle - not necessarily a copy of the tensor, but certainly a fresh alias handle):
```fsharp
let compose f1 f2 x =
    use tmp = f1 x
    f2 tmp
```
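To see why the fresh-handle guarantee matters here, consider a hypothetical counter-example (the function names are illustrative, not from either library): if `f1` were allowed to return its argument directly, `compose` would dispose a handle the caller still owns.

```fsharp
// Hypothetical: a tensor function that breaks the fresh-handle contract
// by returning the caller's own handle instead of a new alias.
let badIdentity (x: Tensor) = x        // should be x.alias() under the contract
let square (x: Tensor) = x.mul(x)

// compose badIdentity square input
//   'use tmp = badIdentity input' binds tmp to input's own handle,
//   so tmp's disposal at the end of compose invalidates 'input' -
//   any later use of 'input' hits a disposed native handle.
```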
Now, TorchSharp doesn't have much compositional second-order code (objects composing functions) - the `Sequential` model is the most obvious example. DiffSharp, however, has a lot of it, so the pattern above would be very intrusive for DiffSharp programming. It's a bit of a conundrum, to be honest.
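As a rough illustration of why this is intrusive for second-order code (a sketch, not actual TorchSharp or DiffSharp source; it assumes tensor handles implement `IDisposable` and that `alias()` returns a fresh handle over the same storage, as described above): a `Sequential`-style container has to thread disposal through every intermediate activation.

```fsharp
// Sketch: a Sequential-style forward pass under the explicit-disposal
// contract. Every intermediate activation is a fresh handle that the
// container, not the user, must remember to dispose.
let forward (layers: (Tensor -> Tensor) list) (input: Tensor) =
    let mutable current = input.alias()   // take ownership of a fresh handle
    for layer in layers do
        let next = layer current          // fresh handle, per the contract
        current.Dispose()                 // release the previous intermediate
        current <- next
    current                               // caller owns the final, fresh handle
```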