
Detector component may fail to load data under high memory pressure #441

Open
philsmt opened this issue Aug 25, 2023 · 1 comment
philsmt (Contributor) commented Aug 25, 2023

The reading code in extra.components tries to work around HDF5 problems with respect to slicing, and therefore reads entire datasets into memory for subsequent slicing. This relies on virtual memory tricks that allow allocating potentially very large amounts of memory, even overcommitting physical memory.

This has been shown to fail on shared nodes under high memory pressure, possibly due to different overcommit settings. Even after selecting a detector object down to a single train and pulse, MultimodKeyData.ndarray() still allocated an entire sequence's worth of memory, which failed.
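A minimal sketch of the failure mode, using a hypothetical mock in place of an HDF5 dataset (MockDataset and its allocation counter are illustrations, not extra.components API): reading the full dataset and slicing afterwards allocates memory proportional to the whole sequence, while pushing the slice into the read allocates only what was requested.

```python
class MockDataset:
    """Stand-in for an HDF5 dataset that tracks how many elements each read allocates."""

    def __init__(self, n):
        self.n = n
        self.allocated = 0  # total elements materialized in memory

    def __getitem__(self, sl):
        data = list(range(self.n))[sl]
        self.allocated += len(data)
        return data

# Eager pattern: read the whole dataset, then slice the in-memory copy.
eager = MockDataset(10_000)
frame = eager[:][0:1]   # allocates 10,000 elements for a 1-element result

# Lazy pattern: push the slice into the read itself.
lazy = MockDataset(10_000)
frame2 = lazy[0:1]      # allocates only the 1 requested element

print(eager.allocated, lazy.allocated)  # 10000 1
```

Under overcommit-restricted settings, the eager pattern is the one that can fail even though the final result is tiny.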

philsmt (Contributor, Author) commented Aug 25, 2023

This only happens when pulse selection is used, e.g.:
det['image.data'].select_trains(np.s_[0]).select_pulses(np.s_[0])
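One way to sketch a fix, under the simplifying assumption that pulses are stored contiguously per train (frame_range, its parameters, and the fixed pulses_per_train layout are all hypothetical, not the real EuXFEL file layout): compose the train and pulse selection into a single index range before touching the file, so only the requested frames are ever read.

```python
def frame_range(train, pulse, pulses_per_train):
    """Map a (train, pulse) selection to a contiguous one-frame slice.

    Assumes a fixed number of pulses per train stored back-to-back,
    which is a simplification of the real detector file layout.
    """
    start = train * pulses_per_train + pulse
    return slice(start, start + 1)

data = list(range(8 * 4))  # mock file: 8 trains x 4 pulses per train
sel = frame_range(train=2, pulse=3, pulses_per_train=4)
print(data[sel])  # [11] -- one frame read, not a whole sequence
```

Applying the composed slice directly to the underlying dataset keeps the allocation proportional to the selection rather than to the sequence length.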
