
Unittest class_optimize #93

Closed
drbacke opened this issue Oct 4, 2024 · 7 comments
@drbacke
Contributor

drbacke commented Oct 4, 2024

No description provided.

@drbacke drbacke self-assigned this Oct 4, 2024
@NormannK
Contributor

NormannK commented Oct 4, 2024

Last night I started working on that. I haven't found a good solution yet for the case where we touch the parameters of the optimization.
To get a baseline idea of how good the genetic algorithm is, I ran your original class with the test.py values 25346 times. I collected `ergebnis["simulation_data"]["Gesamtbilanz_Euro"]`, since I thought that might be the best value to compare.
This is the result:
[image: distribution of the collected values]
Of the 25346 values, 21921 are -0.7150588584710738, so the percentage of runs that did not reach the lowest value is 13.51298%.
If someone touches the parameters, that could be a value to benchmark against.
output_org.txt
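The miss rate above can be computed with a small helper; this is a minimal sketch, assuming the per-run `Gesamtbilanz_Euro` values are collected into a plain list (the function name and signature are illustrative, not from the repo):

```python
from collections import Counter

def not_best_percentage(results):
    """Share of runs (in %) that did not reach the best (lowest) objective value."""
    counts = Counter(results)
    best = min(counts)  # lower Gesamtbilanz_Euro is better
    misses = sum(n for value, n in counts.items() if value != best)
    return 100.0 * misses / len(results)
```

With 21921 of 25346 runs at the lowest value, this yields the ~13.513% quoted above.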

@drbacke
Contributor Author

drbacke commented Oct 4, 2024

This is very interesting.
It may be important to know that there is a possibility to restart from the last valid solution. This helps a lot.
But this is an excellent benchmark 👍, especially for testing other algorithms.

@NormannK
Contributor

NormannK commented Oct 4, 2024

Thanks!
I’m aware that the solution gets added three times during the first generation, and I’ve experimented with that. However, I’ve noticed that if it’s not added three times, the results get much worse.

I'm assuming that the starting_solution in test.py is already a well-optimized solution rather than a random one, which was my initial assumption. Could you confirm if that’s the case?

Also, I started a longer benchmark for PR #88 about an hour ago. While I don’t expect any differences, I wanted to run it just to be certain.

@drbacke
Contributor Author

drbacke commented Oct 4, 2024

Yes, the base population only works well if you add the individual 3x. The optimization algorithm is not very efficient.
I think there is also a lot of potential in changing the parametrization in the future. The sub-methods like pv_akku are also inefficient. The focus is more on debugging at the moment.
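The seeding described here (adding the known-good individual three times before filling the rest of the population randomly) could be sketched like this; `start_individual` and `random_individual` are hypothetical stand-ins for the project's actual individual representation and random initializer:

```python
import copy

def build_start_population(start_individual, random_individual, pop_size, seed_copies=3):
    """Seed the initial population with several copies of a known-good individual.

    Deep copies are used so that mutation of one seeded individual does not
    silently change the others.
    """
    seeded = [copy.deepcopy(start_individual) for _ in range(seed_copies)]
    return seeded + [random_individual() for _ in range(pop_size - seed_copies)]
```

The deep copies matter: if the same object were inserted three times, in-place mutation operators would modify all three "copies" at once.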

@NormannK
Contributor

NormannK commented Oct 4, 2024

I've uploaded my draft for the test in PR #98
Suggestions are appreciated.

That's more like integration testing than unit testing, though.

@NormannK
Contributor

NormannK commented Oct 5, 2024

I recommend running this test only when changes are made to the class_optimize file. Since we already test all other classes, and this particular test is quite time-consuming and CPU-intensive, limiting it to relevant changes will optimize performance.
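One way to gate the expensive test is a small predicate fed with the PR's changed-file list from CI; this is a sketch under the assumption that the relevant file is matched by a `*class_optimize*` pattern (adjust to the actual repo layout):

```python
import fnmatch

def should_run_optimize_test(changed_files, patterns=("*class_optimize*",)):
    """Return True if the expensive class_optimize test needs to run.

    `changed_files` would come from the CI system (e.g. the diff of the PR).
    """
    return any(
        fnmatch.fnmatch(path, pattern)
        for path in changed_files
        for pattern in patterns
    )
```

In a workflow, the result could set an environment variable that a `pytest.mark.skipif` on the slow test checks.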

@michaelosthege
Collaborator

Closed by #98

The test is a bit slow, but 90 % coverage was reached.

We should also refactor the tests to load input/expectation data from JSON files, because then we can parametrize tests with different datasets without adding thousands of lines to the Python files: #107

3 participants