Use multiprocessing.shared_memory to reduce pickling overhead in MultiprocessingEvaluator #239
I noticed in a profile run of the MultiprocessingEvaluator that pickling takes up a significant amount of the runtime. Since we require Python 3.8 or later, we can leverage the multiprocessing.shared_memory module to share large data structures between processes without having to pickle them for every task, which can significantly reduce the serialization overhead. We can create a SharedMemory object to hold the data, and then use a ShareableList or numpy arrays backed by the shared buffer to interact with it from the worker processes.
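As a rough illustration (not the actual MultiprocessingEvaluator internals, just a minimal sketch of the technique), the parent process could copy a large numpy array into a SharedMemory block once and pass only the block's name, shape, and dtype to the workers, which then attach to the block instead of receiving a pickled copy:

```python
import numpy as np
from multiprocessing import Pool, shared_memory


def worker(args):
    """Attach to the shared block by name and read from it without copying."""
    shm_name, shape, dtype, row = args
    shm = shared_memory.SharedMemory(name=shm_name)
    try:
        data = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
        return float(data[row].sum())
    finally:
        shm.close()  # detach only; the parent process owns and frees the block


if __name__ == "__main__":
    source = np.random.rand(1000, 1000)

    # Allocate a shared block and copy the data into it once.
    shm = shared_memory.SharedMemory(create=True, size=source.nbytes)
    shared = np.ndarray(source.shape, dtype=source.dtype, buffer=shm.buf)
    shared[:] = source[:]

    try:
        with Pool(4) as pool:
            # Only small metadata (name, shape, dtype, row index) is pickled per task.
            tasks = [(shm.name, source.shape, source.dtype, i) for i in range(10)]
            print(pool.map(worker, tasks))
    finally:
        shm.close()
        shm.unlink()  # release the shared block when all workers are done
```

The per-task payload shrinks to a few bytes of metadata regardless of how large the shared array is; the main design question for the evaluator would be which of its data structures are large, read-only, and array-like enough to benefit from this.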