warpprb only works properly when there is a triangle mesh in the scene #17

Open
gerwang opened this issue May 29, 2023 · 5 comments
gerwang commented May 29, 2023

I want to use the warpprb config instead of warp in a multiview setting, since it can model indirect illumination effects. However, it does not work properly when the scene contains only an SDF and no mesh. Strangely, it works properly when the scene contains a mesh, even one that is far away and not seen by any camera. Since adding a mesh to the scene greatly slows down the JIT compilation of the CUDA backend, I would really like to fix this issue and allow optimizing a scene that contains only an SDF.

How to reproduce:

  1. It seems that the warpprb config is only used for the mirror-opt scene. I can confirm it works properly on that scene:
python optimize.py mirror-opt --optconfig mirror-opt-hq --config warpprb --outputdir ./outputs --llvm --force

yields

convergence2.mp4
  2. Changing the --optconfig from mirror-opt-hq to no-tex-12 also works:
python optimize.py mirror-opt --optconfig no-tex-12 --config warpprb --outputdir ./outputs --llvm --force

yields

convergence2.mp4

Note that three views are completely blocked by the wall.

  3. Removing the <shape type="rectangle"> meshes in mirror-opt.xml (scene file: mirror-opt-2.zip), however, does not work properly:
python optimize.py mirror-opt-2 --optconfig no-tex-12 --config warpprb --outputdir ./outputs --llvm --force

yields

convergence2.mp4
  4. Finally, if I do not remove the mesh shapes but move them to very high positions, it works properly even though no camera can see the rectangle shapes (scene file: mirror-opt-3.zip):
python optimize.py mirror-opt-3 --optconfig no-tex-12 --config warpprb --outputdir ./outputs --llvm --force

yields

convergence2.mp4

Note

Some videos are shorter than others because I often encounter a crash:

Assertion failed in /project/ext/drjit-core/ext/nanothread/src/queue.cpp:354: remain == 1

Thus, the optimization stops before completing the full number of iterations.

dvicini (Member) commented May 31, 2023

Hi,

Thanks for the detailed bug report. I am not surprised that warpprb is not robust to many different scene configurations; this integrator was only used for that particular result in the paper.

I unfortunately don't have time right now to look at this properly. I would suggest checking whether the integrator base class does something different when a mesh is present. There should be some logic that switches between using OptiX (when meshes are present) and the simple case where we only have an SDF.
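To make the idea concrete, here is a minimal sketch of the kind of dispatch I mean. This is purely illustrative pseudocode, not the actual Mitsuba/integrator API: `SDFShape`, `MeshShape`, and `pick_intersection_backend` are made-up names.

```python
# Hypothetical sketch of a backend dispatch: choose the ray-intersection
# path based on whether the scene contains any triangle meshes.
# All names here are illustrative, not the real integrator code.

class SDFShape:
    """Stand-in for an SDF-based shape."""

class MeshShape:
    """Stand-in for a triangle mesh."""

def pick_intersection_backend(shapes):
    # With at least one mesh present, hardware (OptiX) traversal would be
    # used; otherwise the SDF-only sphere-tracing path should be taken.
    # A bug in a branch like this could explain why warpprb only behaves
    # correctly when a mesh exists somewhere in the scene.
    if any(isinstance(s, MeshShape) for s in shapes):
        return "optix"
    return "sdf_sphere_tracing"

print(pick_intersection_backend([SDFShape()]))               # sdf_sphere_tracing
print(pick_intersection_backend([SDFShape(), MeshShape()]))  # optix
```

If the SDF-only branch is never taken (or takes a subtly different code path) when no mesh is in the scene, that would match the behavior you describe.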

Otherwise, you can also try using an LLVM variant, as the LLVM compiler is a lot faster than the OptiX compiler.

For the last error (remain==1): are you using the latest Mitsuba/drjit? If not, can you try updating to their latest releases?

gerwang (Author) commented May 31, 2023

Thanks for your reply! I'm looking into the base ReparamIntegrator class for details.

For the last error (remain==1), I confirm that I am using the latest prebuilt PyPI packages of mitsuba, drjit, and fastsweep, on Python 3.10.11 on Linux. Should I open an issue in the Mitsuba 3 repo?

 ✗ pip list                                                                                           
Package         Version
--------------- --------
...
drjit           0.4.2
fastsweep       0.1.2
mitsuba         3.3.0
...

dvicini (Member) commented May 31, 2023

Posting an issue on the Mitsuba issue tracker only really makes sense if you can reproduce the problem in a more minimal example. Otherwise it would just be too difficult to debug. Does it also happen when running some of the tutorials?

gerwang (Author) commented May 31, 2023

Unfortunately, I only encounter this problem when running this project, and only in --llvm mode, not in CUDA mode. I will try to simplify the reproduction script.

dvicini (Member) commented May 31, 2023

I see. Yes, in that case your best bet is to simplify the script further and further until you can isolate the problem.
