Ideas for improving granularity of garbage collection (gc) #749
Some work has been done towards this goal in the new-gc-master-merge branch. Specifically, PR #426 contains the bulk of the work. Here I will summarize the issues we have encountered so far.
Can we make a benchmark definition to test this? One example would be a function which increments to 1000000 and then just returns.
Using the following benchmark (here is the corresponding KORE for reference: count.kore.txt), if we call it we notice that the memory usage increases over time until the process finishes (as shown in the attached graph). This is because the current GC cannot do collections in the middle of function execution, so the useless memory objects that contain integer values from previous iterations linger in memory until the function terminates.
@theo25 to refresh his memory on what the status of this is.
Theo is working on rebasing the testing branch against the current version of the backend; it's drifted some way out of sync. Outstanding issues left to fix:
Could we redesign the object representation to carry sort categories around?
We can currently only do garbage collection at the granularity of rewrite steps. It would be nice to be able to do it more frequently, to handle large function evaluations that aren't working with very much live memory.