
[FEAT]: send whole population (2D matrix) to obj func #171

Open

osamimi opened this issue Dec 12, 2024 · 2 comments

Comments

@osamimi

osamimi commented Dec 12, 2024

I am currently using scipy's differential_evolution(), which has an option (vectorized=True) to pass the whole n_variables × n_population_size matrix (in my case around 20 × 200) to the objective function once per iteration. This lets me vectorize the math (numba/numpy) and get very good performance, around 50 ms per iteration. It ends up being faster than using multiprocessing to dispatch individual solution vectors to multiple cores, because of the overhead involved (given the relatively cheap cost function).
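For reference, a minimal sketch of what the scipy pattern described above looks like, using a simple sphere function as a stand-in for the real objective (scipy >= 1.9; with vectorized=True the objective receives an array of shape (n_variables, population_size) and must return one value per column):

```python
import numpy as np
from scipy.optimize import differential_evolution

def sphere(x):
    # x has shape (n_variables, population_size); return one
    # objective value per candidate solution (per column).
    return np.sum(x**2, axis=0)

bounds = [(-5.0, 5.0)] * 20   # 20 decision variables, as in the example above

result = differential_evolution(
    sphere,
    bounds,
    popsize=10,        # total population = popsize * n_variables = 200
    vectorized=True,   # send the whole population to sphere() at once
    polish=False,      # skip the final (non-vectorized) polishing step
    seed=0,
)
print(result.x.shape, result.fun)
```

The entire population is evaluated in a single NumPy call per generation, which is what makes the vectorized path so much cheaper than per-solution dispatch.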

@thieu1995
Owner

Hi @armisamimi,

Indeed, evaluating fitness on the whole population at once can be 10x or even 100x faster than evaluating each agent's fitness individually. However, that only works for benchmark functions. You can't do it for other problems, for example training a neural network, hybridizing with machine learning, or hyper-parameter tuning of AI models.
My library is built for general optimization problems, meaning you can solve any optimization problem with it; it is not only for benchmark functions. I don't want to limit the usage of this library, which is why I did not design the obj_function evaluation that way in the first place.

Also, what you did is really wrong: you should not use multiprocessing for a benchmark function, because the time to start and tear down a subprocess is longer than the time to run a single obj_function call. You should use multithreading for benchmark functions instead; it will be much faster than multiprocessing in this case.
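A minimal sketch of the thread-based alternative suggested above, again using a hypothetical sphere objective: threads share memory with the parent, so there is no pickling of solution vectors and no process start-up cost (though for very cheap pure-Python objectives the GIL still limits the speed-up):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def sphere(solution):
    # Fitness of a single agent (one solution vector).
    return float(np.sum(solution**2))

# A hypothetical population: 200 agents, 20 variables each.
population = np.random.default_rng(0).uniform(-5, 5, size=(200, 20))

# Evaluate each agent's fitness in a thread pool; no subprocess overhead.
with ThreadPoolExecutor(max_workers=4) as pool:
    fitness = list(pool.map(sphere, population))

print(len(fitness))
```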

@osamimi
Author

osamimi commented Jan 5, 2025 via email
