
Use multiprocessing.shared_memory to reduce pickling overhead in MultiprocessingEvaluator #239

Open
EwoutH opened this issue Apr 3, 2023 · 0 comments


EwoutH commented Apr 3, 2023

I noticed in a profiling run of the MultiprocessingEvaluator that pickling takes up a significant amount of the runtime. Since we require Python 3.8 or later, we can leverage the multiprocessing.shared_memory module to share large data structures between processes without having to pickle them, which should significantly reduce the serialization overhead. We could create a SharedMemory block to hold the data and then interact with it through a ShareableList or a numpy array backed by the shared buffer.
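
A minimal sketch of the idea (not the actual MultiprocessingEvaluator code): the parent process places a large numpy array in a shared memory block, and a worker attaches to it by name, so only the block name, shape, and dtype need to be pickled rather than the data itself.

```python
from multiprocessing import Process, shared_memory

import numpy as np


def worker(shm_name, shape, dtype):
    # Attach to the existing shared memory block by name; no copy of the
    # array is pickled or transferred, only the small (name, shape, dtype) tuple.
    existing = shared_memory.SharedMemory(name=shm_name)
    data = np.ndarray(shape, dtype=dtype, buffer=existing.buf)
    print("worker sees sum:", data.sum())
    existing.close()


if __name__ == "__main__":
    experiments = np.random.rand(10_000, 50)  # stand-in for large experiment data

    # Allocate a shared block once and copy the array into it.
    shm = shared_memory.SharedMemory(create=True, size=experiments.nbytes)
    shared = np.ndarray(experiments.shape, dtype=experiments.dtype, buffer=shm.buf)
    shared[:] = experiments[:]

    p = Process(target=worker, args=(shm.name, experiments.shape, experiments.dtype))
    p.start()
    p.join()

    shm.close()
    shm.unlink()  # release the shared block when no process needs it anymore
```

In the evaluator this would presumably apply to the experiment and outcome arrays that are currently sent to worker processes; how the block names get passed to workers and who is responsible for unlinking would need to be worked out.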
