Hey, I have been using the codebase for quite some time and I noticed that the infer.py script has very high memory consumption. In my case it uses around 120 GB of RAM, which is a big deal: my machine has 2 TB of RAM, but I need to run several instances in parallel.

I ran a simple per-line RAM profiler and this is the output:

So it looks like most of the consumption comes from infer_loader. Any ideas on how we could improve this?

I am running on pretty long audios, ~1 hour.
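A minimal sketch of how a per-line RAM report like the one described above can be produced, assuming the memory_profiler package (pip install memory-profiler); the profiler actually used is not named here, and the function and array sizes below are placeholders rather than anything from the repo:

```python
# Per-line RAM profiling sketch using memory_profiler (pip install memory-profiler).
# The workload is a placeholder standing in for infer.py's real work.
from memory_profiler import profile

import numpy as np


@profile  # prints a line-by-line memory report when the function runs
def fake_inference(n_frames: int = 360_000, feat_dim: int = 80) -> int:
    # Stand-in for loading ~1 hour of features into memory at once
    # (placeholder: 10 ms frame shift, 80-dim features).
    feats = np.zeros((n_frames, feat_dim), dtype=np.float32)
    # Stand-in for model outputs kept alongside the inputs.
    logits = np.zeros((n_frames, 4), dtype=np.float32)
    return feats.nbytes + logits.nbytes


if __name__ == "__main__":
    print(f"allocated ~{fake_inference() / 1e6:.0f} MB")
```

Applying the same decorator to the function that calls infer_loader would show which lines account for the ~120 GB.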
Hi, thank you for the detailed information and sorry for the delay. I was aware that inference could take quite a lot of memory, but I had not run any profiler, so thank you for that. I would need some extra time to understand where the main problem is and how to solve it (if possible), and at the moment I do not have that time. I will keep the issue open to take a look at it in the future.

With the current version we have here, a compromise could be to increase the subsampling parameter. You will lose granularity in the output, but depending on the scenario it may not affect the final result much. We compared 50 ms and 100 ms in Table 4; there is quite some impact when using fine-tuning, but that was when evaluating with a 0 ms collar. In a more forgiving setup, the difference might not be that large.
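To make that trade-off concrete, here is a rough, purely illustrative calculation of how many output frames a ~1-hour recording produces at the 50 ms and 100 ms resolutions mentioned above; per-frame buffers shrink proportionally when the frame shift is doubled, though whether infer_loader's memory actually scales linearly with the frame count depends on the implementation:

```python
# Illustrative only: output frame counts for a ~1-hour recording at the
# 50 ms and 100 ms resolutions discussed above (not the repo's actual code).
DURATION_S = 3600  # roughly one hour of audio

for frame_shift_ms in (50, 100):
    n_frames = round(DURATION_S * 1000 / frame_shift_ms)
    print(f"{frame_shift_ms} ms shift -> {n_frames:,} output frames")
```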