Summary
Rendering takes much longer whenever Mitsuba encounters a new frame in a sequence of consecutive frames.
System configuration
Platform: Ubuntu 16.04
Compiler: CMake
Python version: 3.7.9
Mitsuba 2 version: latest
Compiled variants:
scalar_rgb
specular_rgb
gpu_rgb
Description
I am using Mitsuba 2 from Python to render a sequence of frames in which a single object deforms continuously between frames. My approach is to replace the vertex coordinates after rendering the current frame and then render the next one. However, when I measured the rendering time of each frame, I noticed something odd: rendering takes only ~20 ms per frame for frames that have already been rendered on this machine, but ~50 ms per frame for new frames. For example, in a first experiment I rendered frames 1–50, and in a second experiment I rendered frames 1–100. Strangely, in the second experiment the last 50 frames took much longer than the first 50.
I added some timers to the C++ source code and recompiled. The results show that the function Scene<Float, Spectrum>::ray_intersect accounts for most of the rendering time. I'm not familiar with this part of the code and wonder whether some cache in GPU memory, or some cache in the Mitsuba code, could be causing the difference in rendering time?
Steps to reproduce
Prepare a sequence of .obj files.
Initialize a scene object from an XML file.
Replace the vertex coordinates frame by frame to render the sequence of deformed objects.
Run the same code again with more frames (a minimal sketch of these steps is given below).
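A minimal sketch of these steps, not the actual script from this report. The scene filename, the traversal key 'mesh.vertex_positions_buf', and the way the .obj frames are loaded are assumptions and will differ in practice (inspect the output of traverse(scene) for the real key):

```python
import time
import numpy as np

import mitsuba
mitsuba.set_variant('gpu_rgb')

from mitsuba.core import Float
from mitsuba.core.xml import load_file
from mitsuba.python.util import traverse

scene = load_file('scene.xml')        # step 2: build the scene from XML
params = traverse(scene)
key = 'mesh.vertex_positions_buf'     # assumed parameter name; check `params` for yours

frames = []                           # fill with (V, 3) numpy arrays loaded from the .obj sequence

for i, verts in enumerate(frames):
    # step 3: overwrite the vertex coordinates with the current frame
    params[key] = Float(verts.ravel())
    params.update()

    t0 = time.time()
    scene.integrator().render(scene, scene.sensors()[0])
    print('frame %d: %.1f ms' % (i, (time.time() - t0) * 1e3))
```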
I'm not really sure how to reproduce this on my end. Could you maybe share a stripped-down version of your Python script that exhibits this weird behavior?
If you use scalar mode, do all of the frames take the same time to render?
There's indeed a cache used by Enoki (stored in ~/.enoki) to avoid recompiling kernels that have been seen before.
It's possible that one of the parameters you are changing between frames ends up being "baked" into the kernel, so the cache can't be used unless you've rendered that exact frame before.
You could identify which changes trigger this behavior by reducing the number of changes between frames, until you find the one that makes render time increase.
Once you find it, you could change the assignment to use the literal=False constructor argument of Enoki arrays. Here's an example where we use literal=False to prevent the learning rate value from being baked into kernels:
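The snippet referred to above is not included in this thread; the following is a sketch of the same idea, loosely modeled on Mitsuba 2's optimizer classes. The class and attribute names are illustrative, and it assumes a GPU/autodiff variant whose Float array type accepts the literal keyword:

```python
import enoki as ek

import mitsuba
mitsuba.set_variant('gpu_autodiff_rgb')
from mitsuba.core import Float


class Optimizer:
    def __init__(self, params, lr):
        self.params = params
        self.set_learning_rate(lr)

    def set_learning_rate(self, lr):
        # Keep a plain Python copy for bookkeeping.
        self.lr = lr
        # Wrap the value that enters GPU arithmetic with literal=False so the
        # JIT treats it as a kernel *input* rather than baking it into the
        # generated PTX as a constant. Without this, every new learning rate
        # would produce a different kernel and miss the ~/.enoki cache.
        self.lr_v = ek.detach(Float(lr, literal=False))
```

The same idea applies to any scalar that changes between frames: wrap it in an Enoki array constructed with literal=False before it enters the rendering computation, so the cached kernel can be reused for every value.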