Casting memmaps in ArrayProxies is slower than loading into memory first (optimization opportunity) #1371
Comments
If you know the
Thanks @effigies, I think these are good strategies for interacting with the memmap data. But I'm still confused why the simple Regardless, I will just set
I don't really know why the memmap is slower. I would probably profile it and see where it's spending the time.
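For reference, a minimal way to do that profiling; the filename here is a placeholder, not one of the files from the original report:

```python
import cProfile
import pstats

import nibabel as nib

# Profile a full load + read with and without memory mapping.
# "example.nii" is a placeholder path.
for mmap in (True, False):
    with cProfile.Profile() as prof:
        nib.load("example.nii", mmap=mmap).get_fdata()
    print(f"--- mmap={mmap} ---")
    pstats.Stats(prof).sort_stats("cumtime").print_stats(15)
```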
Ya, I did some quick profiling, with With I guess when the array is a memmap,
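For anyone wanting to reproduce the effect from the title in isolation, a rough benchmark along these lines should show it; the filename, shape, and dtype are made up for illustration, and the page cache should be warm (or dropped) before each timing for a fair comparison:

```python
import time

import numpy as np

# Hypothetical raw data file; in nibabel the memmap would come from the
# ArrayProxy over the NIfTI data block instead.
mm = np.memmap("example.img", dtype=np.int16, mode="r",
               shape=(100, 100, 100, 500))

t0 = time.perf_counter()
direct = mm.astype(np.float64)              # cast straight from the memmap
t1 = time.perf_counter()
staged = np.array(mm).astype(np.float64)    # copy into memory first, then cast
t2 = time.perf_counter()

print(f"cast from memmap: {t1 - t0:.2f} s")
print(f"copy, then cast:  {t2 - t1:.2f} s")
```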
I'm not sure there's much to be done here. That said, if you're up for it, you could try looking into and benchmarking other approaches to scaling, e.g., pre-allocating an array with That will move more of the branching logic (e.g., are the dtypes the same) into nibabel, and it has been nice to let numpy make those decisions and trust them to be efficient. It could be worth upstreaming the solution (if we find one) into numpy's
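For concreteness, a rough sketch of the pre-allocation idea; the function name and signature are illustrative, not nibabel internals:

```python
import numpy as np

def scaled_copy(raw, slope=1.0, inter=0.0, dtype=np.float64):
    """Apply slope/intercept scaling into a pre-allocated array.

    ``raw`` can be a memmap; the cast happens inside the ufunc call
    rather than via a separate .astype() on the memmap.
    """
    out = np.empty(raw.shape, dtype=dtype)
    # casting="unsafe" lets numpy cast whatever the on-disk dtype is
    # into the requested output dtype in one pass.
    np.multiply(raw, slope, out=out, casting="unsafe")
    if inter != 0.0:
        out += inter
    return out
```

This keeps the dtype decisions explicit in nibabel rather than delegating them to numpy, which is the trade-off mentioned above.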
Ya, I agree, I feel like there's not much worth doing at this point, especially since in this case
It seems that reading uncompressed NIfTI volumes takes significantly longer when memory mapping is enabled. In this example, I load four large 4D NIfTI volumes from the ABCD dataset, with and without memory mapping.
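A sketch of the kind of comparison described, with placeholder file names standing in for the actual ABCD volumes:

```python
import time

import nibabel as nib

# Placeholder paths standing in for the four large 4D ABCD volumes.
paths = ["vol1.nii", "vol2.nii", "vol3.nii", "vol4.nii"]

for mmap in (True, False):
    start = time.perf_counter()
    for path in paths:
        data = nib.load(path, mmap=mmap).get_fdata()
    print(f"mmap={mmap}: {time.perf_counter() - start:.1f} s")
```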
This is what I get, using nibabel v5.2.1:
Any idea why this might happen?