moving tensors back and forth between CPU and GPU? #207

Super sorry if this is obvious, but -- how do I copy a tensor from CPU -> GPU and vice versa? I've been looking through the documentation and can't seem to find how to do this.

Comments
There might be a better way to do this, but ATen compiles a set of copy functions in THCTensorCopy. There might also be some convenience functions, like how PyTorch lets you call .cuda() and .cpu() on a tensor.
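For orientation, here is a minimal sketch of an explicit cross-device copy through the public ATen Tensor API rather than the THC functions directly. It assumes a build with CUDA support and uses Tensor::copy_, which copies into a pre-allocated destination tensor on another device.

```cpp
// Sketch only (assumes ATen was built with CUDA): explicit cross-device
// copies using Tensor::copy_, which fills a pre-allocated destination.
#include <ATen/ATen.h>
#include <iostream>

int main() {
  if (!at::hasCUDA()) {
    std::cout << "CUDA not available, skipping" << std::endl;
    return 0;
  }

  // Source data on the CPU.
  at::Tensor cpu = at::rand({3, 4});

  // Pre-allocated destination on the GPU; copy_ transfers the data over.
  at::Tensor gpu = at::empty({3, 4}, at::device(at::kCUDA).dtype(at::kFloat));
  gpu.copy_(cpu);

  // And back again.
  at::Tensor round_trip = at::empty_like(cpu);
  round_trip.copy_(gpu);

  std::cout << std::boolalpha << at::allclose(cpu, round_trip) << std::endl;
  return 0;
}
```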
I'm also running up against a wall here. The API is a little confusing for me. It looks like one should create the tensor on the GPU and then copy the CPU data over, but this seems wrong, as t_gpu contains just zeros after the operation. @ezyang What am I missing? Many thanks, chofer
If you are running a reasonably recent master, I think the following should work:
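As a rough illustration (a sketch, not necessarily the exact snippet that was posted here), the round trip on a recent ATen can be written with the high-level conversion calls, assuming Tensor::to(at::kCUDA) / Tensor::to(at::kCPU) are available in your build:

```cpp
// Sketch: CPU -> GPU -> CPU round trip with the high-level API.
#include <ATen/ATen.h>
#include <iostream>

int main() {
  at::Tensor t_cpu = at::ones({2, 3});

  at::Tensor t_gpu  = t_cpu.to(at::kCUDA);   // allocates a CUDA tensor and copies
  at::Tensor t_back = t_gpu.to(at::kCPU);    // copies back to host memory

  // t_cpu.cuda() and t_gpu.cpu() are equivalent shorthands.
  std::cout << std::boolalpha << at::allclose(t_cpu, t_back) << std::endl;
  return 0;
}
```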
We should make this easier.
Hi, my HEAD is a reasonably recent master. However, it looks like the problem is how the tensor is created in the first place. My workaround to bring externally allocated CPU data onto the GPU in a tensor is roughly the following:
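A minimal sketch of that kind of workaround, assuming the external buffer outlives the wrapper and that at::from_blob (which neither copies nor takes ownership) is used to view it before the device copy; the helper name external_to_cuda is only for illustration:

```cpp
// Sketch: wrap externally allocated CPU memory without copying, then make a
// real device copy of it. from_blob does not take ownership, so the buffer
// must stay alive at least until the .to(at::kCUDA) call has run.
#include <ATen/ATen.h>
#include <vector>

at::Tensor external_to_cuda(float* data, int64_t rows, int64_t cols) {
  at::Tensor cpu_view = at::from_blob(data, {rows, cols}, at::kFloat);
  return cpu_view.to(at::kCUDA);  // owns its own CUDA storage
}

int main() {
  std::vector<float> buf(6, 1.0f);  // externally managed CPU memory
  at::Tensor gpu = external_to_cuda(buf.data(), 2, 3);
  return gpu.defined() ? 0 : 1;
}
```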
So from my point of view, it seems that there is an issue transporting the memory from the CPU side onto the GPU. Cheers, c.hofer
@c-hofer look at https://github.com/zdevito/ATen/blob/31d00ab7fdf00c258b0fad5b1b05af77e92b64a9/aten/src/ATen/test/dlconvertor_test.cpp You can use the DLPack format, which is a cross-framework, well-specified and simple format that we support importing from: https://github.com/dmlc/dlpack/
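In outline, the linked test round-trips a tensor through DLPack via ATen's DLConvertor; a sketch along those lines, assuming at::toDLPack / at::fromDLPack are exposed by ATen/DLConvertor.h as in that test:

```cpp
// Sketch: exporting a tensor to the DLPack format and importing it back.
// toDLPack shares the underlying storage (no data copy); fromDLPack takes
// ownership of the DLManagedTensor it is given.
#include <ATen/ATen.h>
#include <ATen/DLConvertor.h>

int main() {
  at::Tensor a = at::rand({3, 4});

  DLManagedTensor* dlm = at::toDLPack(a);  // hand off to another framework
  at::Tensor b = at::fromDLPack(dlm);      // or import from one

  return at::allclose(a, b) ? 0 : 1;
}
```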
Thx, that's a valuable hint :)
You can also clone on the CPU first and then move it to the GPU, if that's feasible:
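Something along these lines, assuming the same kind of externally allocated buffer as above; clone() gives ATen its own copy of the data before the device transfer:

```cpp
// Sketch: clone the wrapped CPU data first (so ATen owns a copy of it),
// then move that copy to the GPU.
#include <ATen/ATen.h>
#include <vector>

int main() {
  std::vector<float> buf(6, 2.0f);  // external CPU memory
  at::Tensor cpu = at::from_blob(buf.data(), {2, 3}, at::kFloat).clone();
  at::Tensor gpu = cpu.to(at::kCUDA);  // device copy
  return gpu.defined() ? 0 : 1;
}
```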
Thx, this is surely more elegant. By the way, are there any plans for when the new ATen API will be more or less stable?