If this issue is intended to be a bug report, please fill out the following:
Expected behavior
I did not expect it to use much more RAM when running the full 5 second simulation than it used for the default 15 millisecond simulation, since I would think it is dealing with the same variables.
Actual behavior
By the time I woke up yesterday, I had run out of available RAM, with a little over 4 gibibytes of swap in use. I mounted another swap partition on another drive and created an additional swap file, bringing my total swap space to around 77 gibibytes. As I type this, the simulation is using around 62G of memory that I can see, and it keeps using more and more.
It has been on step 999998 for much longer than the image I pasted shows. Given that the stated system requirement is 60G of hard drive space, I am wondering whether it is holding all 60GB of that in RAM before it writes the files to the HDD. If that is not what it is doing, then I think there is a memory leak.
Steps to reproduce the behavior
Run the program with -d 5000 and let it run for its full duration.
If it's not a memory leak:
Seems like a program like this could be designed to work on segments of a frame, sort of like slices of the worm mounted on slides, except that each slice would be a segment made up of whole cells. Each segment could have a page of the synaptic sets its cells are associated with. Once a segment's cells have their new values calculated, and their synaptic sets have been given the values their members will work with in the next frame, the finished segment could be appended to the file for the new frame. The new values for each synaptic set could be written under the corresponding set label in a master file that lists and labels all of the synapses to be worked on in the next frame, once the new frame has been assembled from all of its segments.
I'm thinking each frame segment would get written to the hard disk, the master list of synapse sets for each frame would stay on the hard disk, and only the segment currently being worked on, along with its associated synapses, would get pulled into RAM.
Once a frame is complete, the program could render an image of the worm in its virtual box (if a certain number of frames have passed since the last image was rendered), discard the frame before the one just completed, and then start on a new frame, beginning with the first segment and its associated synapses.
Once all frames have been calculated for the full length of the worm's simulated activity, with images produced for groups of frames so that the video comes out at 60 images a second or something, the program could compile the video from the images and then discard them.
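To make the idea concrete, here's a toy sketch in Python of the scheme above. Everything in it is hypothetical (segment counts, file layouts, the dummy update rule) — Sibernetic itself is C++, and this is just an illustration of keeping one segment and one synapse page in RAM at a time:

```python
import json
import os
import struct

SEGMENTS_PER_FRAME = 8     # hypothetical slicing of the worm
CELLS_PER_SEGMENT = 1000   # hypothetical cell count per segment
IMAGE_EVERY_N_FRAMES = 16  # hypothetical render cadence

def frame_path(workdir, i):
    return os.path.join(workdir, f"frame_{i}.bin")

def load_segment(path, seg_idx):
    """Read just one segment's cells from a frame file on disk."""
    with open(path, "rb") as f:
        f.seek(seg_idx * CELLS_PER_SEGMENT * 8)
        data = f.read(CELLS_PER_SEGMENT * 8)
    return list(struct.unpack(f"{CELLS_PER_SEGMENT}d", data))

def run(workdir, n_frames):
    os.makedirs(workdir, exist_ok=True)
    n_cells = SEGMENTS_PER_FRAME * CELLS_PER_SEGMENT
    # Seed frame 0 and the master synapse file on disk.
    with open(frame_path(workdir, 0), "wb") as f:
        f.write(struct.pack(f"{n_cells}d", *([0.0] * n_cells)))
    synapse_file = os.path.join(workdir, "synapses.json")
    with open(synapse_file, "w") as f:
        json.dump({str(s): 0.0 for s in range(SEGMENTS_PER_FRAME)}, f)

    for frame in range(1, n_frames + 1):
        with open(frame_path(workdir, frame), "wb") as out:
            for seg in range(SEGMENTS_PER_FRAME):
                # Only this segment and its synapse page are in RAM.
                cells = load_segment(frame_path(workdir, frame - 1), seg)
                with open(synapse_file) as f:
                    synapses = json.load(f)
                weight = synapses[str(seg)]
                new_cells = [c + weight + 0.001 for c in cells]  # dummy update
                synapses[str(seg)] = sum(new_cells) / len(new_cells)
                with open(synapse_file, "w") as f:
                    json.dump(synapses, f)
                # Append the finished segment to the new frame's file.
                out.write(struct.pack(f"{CELLS_PER_SEGMENT}d", *new_cells))
        if frame % IMAGE_EVERY_N_FRAMES == 0:
            pass  # an image of this frame would be rendered here
        os.remove(frame_path(workdir, frame - 1))  # discard the old frame

run("worm_sketch", 32)
```

The point is that peak memory stays proportional to one segment plus one synapse page, no matter how many frames the run produces.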
But maybe it's just a memory leak somewhere.
Thanks for reporting this @Dartomic. To be honest, I've seen something like this myself when running long simulations. Thankfully I've had enough RAM for the sims to finish, but clearly it's not very good. The video generation was designed and tested with shorter runs, so any memory leak probably went unnoticed. Many of the Sibernetic videos were also generated natively on Windows rather than via the Docker container, and recent work with the Docker version has just focussed on getting the various drivers to install correctly for running the basic simulation: #338
So in short, yes, this needs to be investigated further and optimised.
One option is to tweak the script to run the default Sibernetic test configuration (the falling box), which executes much faster; the overhead from recording every frame to video should be the same as for the worm simulation...
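In the meantime, a quick way to tell steady growth (leak-like behaviour) from a one-off allocation is to log the process's resident set size over time. A minimal sketch, assuming a Linux host (or the Docker container) where /proc is available, and that you pass in the simulation's PID yourself:

```python
# Minimal RSS logger: samples VmRSS from /proc/<pid>/status once a
# minute so growth can be plotted over a long run. Linux-only.
# Usage: python log_rss.py <pid-of-simulation>
import sys
import time

def rss_kb(pid):
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # reported in kB
    return 0

if __name__ == "__main__":
    pid = int(sys.argv[1])
    while True:
        print(f"{time.strftime('%H:%M:%S')}  {rss_kb(pid)} kB", flush=True)
        time.sleep(60)
```

If the logged value climbs roughly linearly with step count, that points at a leak (or unbounded buffering) rather than a fixed working set.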