Thanks for your great work!
The SE3-Transformer is powerful, but it seems to be very memory-hungry.
I built a model with the following parameters and got a "CUDA out of memory" error when running it on a GPU (Nvidia V100, 32 GB).
```python
import torch
from se3_transformer_pytorch import SE3Transformer

model = SE3Transformer(
    dim = 20,
    heads = 4,
    depth = 2,
    dim_head = 5,
    num_degrees = 2,
    valid_radius = 5
)

num_points = 512
feats = torch.randn(1, num_points, 20)
coors = torch.randn(1, num_points, 3)
mask = torch.ones(1, num_points).bool()
```
Does this error relate to the PyTorch version, and how can I fix it?
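For anyone hitting the same wall, below is a minimal, self-contained sketch of the setup above with two generic memory-saving measures: mixed precision and gradient-free inference. The commented-out `num_neighbors` argument is an assumption, not a confirmed fix: some versions of se3-transformer-pytorch expose it to restrict attention to a local neighborhood, which reduces memory substantially, so check your installed version before relying on it.

```python
# Sketch only, assuming the se3-transformer-pytorch package from the original post.
import torch
from se3_transformer_pytorch import SE3Transformer

model = SE3Transformer(
    dim = 20,
    heads = 4,
    depth = 2,
    dim_head = 5,
    num_degrees = 2,
    valid_radius = 5,
    # num_neighbors = 16,  # assumption: only if your version supports it; limits attention to nearby points
).cuda()

num_points = 512
feats = torch.randn(1, num_points, 20).cuda()
coors = torch.randn(1, num_points, 3).cuda()
mask = torch.ones(1, num_points).bool().cuda()

# Mixed precision roughly halves activation memory; torch.no_grad() additionally drops the
# buffers autograd would keep for the backward pass (use it only if you are not training).
with torch.no_grad(), torch.cuda.amp.autocast():
    out = model(feats, coors, mask = mask)
```

If you do need gradients, dropping the `torch.no_grad()` context and keeping only autocast is the relevant variant, but expect the feasible number of points to shrink considerably.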
Same problem here. Our GPU is an A100 with 80 GB, but it still cannot run the above code.
Same here. On an A100 with 40 GB of VRAM, (125 points, 20-dimensional features) seems to be the upper bound. May I ask whether you have found a solution yet?
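To quantify that upper bound on your own hardware, a rough probe like the sketch below can walk the point count upward and report peak memory until it hits OOM. The helper is hypothetical (generic PyTorch only, reusing the model and feature dimension from the original post); it runs under torch.no_grad(), so the point count that fits for training will be noticeably lower.

```python
import torch

def probe_max_points(model, dim = 20, start = 64, step = 64):
    # Hypothetical helper: grows the point cloud until CUDA runs out of memory and
    # returns the largest size that still fit during gradient-free inference.
    n = start
    largest_ok = 0
    while True:
        try:
            torch.cuda.empty_cache()
            torch.cuda.reset_peak_memory_stats()
            feats = torch.randn(1, n, dim).cuda()
            coors = torch.randn(1, n, 3).cuda()
            mask = torch.ones(1, n).bool().cuda()
            with torch.no_grad():
                model(feats, coors, mask = mask)
            peak_gb = torch.cuda.max_memory_allocated() / 1e9
            print(f"{n} points fit, peak memory {peak_gb:.1f} GB")
            largest_ok = n
            n += step
        except RuntimeError as e:
            # Older PyTorch versions raise a plain RuntimeError for CUDA OOM.
            if "out of memory" in str(e):
                print(f"OOM at {n} points")
                return largest_ok
            raise
```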