Distant sensor plugin #143
base: master
Conversation
src/sensors/tests/test_distant.py
```python
sensor = scene.sensors()[0]
sampler = sensor.sampler()

n_rays = 10000
```
Could we use a smaller number here? Trying to keep the unit tests running under 20 minutes ^^
I reduced this loop to 1000 iterations, but the variance of results increased as a consequence.
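For context on the trade-off in this exchange: the standard error of a Monte Carlo estimator scales as 1/√N, so cutting the sample count from 10000 to 1000 widens the spread of results by roughly √10 ≈ 3.2. A self-contained sketch in plain Python (illustrative only, unrelated to the Mitsuba test code itself):

```python
import random
import statistics

def estimate_mean(n_rays, seed):
    """Monte Carlo estimate of the mean of a uniform [0, 1) variable."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(n_rays)) / n_rays

def spread(n_rays, n_trials=100):
    """Empirical standard deviation of the estimator across repeated runs."""
    estimates = [estimate_mean(n_rays, seed) for seed in range(n_trials)]
    return statistics.stdev(estimates)

# Standard error scales as 1/sqrt(n_rays): dropping from 10000 to 1000 rays
# should widen the spread by roughly sqrt(10), i.e. about 3.2x.
ratio = spread(1000) / spread(10000)
print(f"spread ratio (1000 vs 10000 rays): {ratio:.2f}")
```

This is why looser tolerances (or more test-level averaging) are usually needed when reducing the per-test ray count.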
Great work! A few minor comments regarding the unit tests. The C++ looks good to me!
Hi @Speierers, I pushed an update with a few important changes:
In addition, I had to add an object to the …
Great!
Ok
Don't we compute the radiant flux in this case?
You're right, it makes more sense to do that, although we'll get the spectral flux per unit solid angle. I'll change it again and update the docs.
Force-pushed from 2e7ef08 to 82f1e36.
I pushed the following changes:
Force-pushed from c8f8b11 to 5836654.
Hi @leroyvn,

Sorry, I am only getting back to this PR now. It would be great to have this plugin available for the upcoming release of `next`. If you have the time, could you please address the few comments below and rebase this branch onto `next`? Then we should be able to merge this PR, finally! 🚀
```cpp
    ray.o = m_target - 2.f * ray.d * m_bsphere.radius;
}

ray.update();
```
This was deprecated on `next`.
```cpp
// have differentials
ray.has_differentials = false;

ray.update();
```
ditto
```cpp
MTS_IMPLEMENT_CLASS_VARIANT(DistantSensor, Sensor)
MTS_EXPORT_PLUGIN(DistantSensor, "DistantSensor");
NAMESPACE_END(mitsuba)
```
Add empty line at the end of the file 😉
src/sensors/tests/test_distant.py
```python
scene = load_dict(dict_scene)
sensor = scene.sensors()[0]
scene.integrator().render(scene, sensor)
img = np.array(sensor.film().bitmap()).squeeze()
```
Would it be possible to avoid using numpy here, and use `Bitmap.data()` to get an Enoki array instead?
src/sensors/tests/test_distant.py
```python
sensor = scene.sensors()[0]
scene.integrator().render(scene, sensor)
img = np.array(sensor.film().bitmap()).squeeze()
assert np.allclose(np.array(img), expected)
```
Use `ek.allclose` and add an empty line at the end of the file.
Hi @Speierers, thanks for reviewing this PR. I completely rewrote this plugin a few months ago. It now has optional shape-based ray origin and target control, which makes it really usable for computing the reflectance of complex geometries (the reason why we needed this in the first place). I'd like to submit that version instead. What would be the deadline for submission?
Hi @leroyvn, sorry, but I am not sure I understand what you mean by "shape-based ray origin and target". Could you maybe give a concrete example? This sounds quite interesting! There is no concrete deadline, and we could always merge this after the release, so no need to hurry.
Target control allows the user to specify where rays should be directed. If your scene has a square footprint, you can target exactly that square. The current implementation also allows targeting a single point. Origin control is a similar mechanism used to control where rays start from. In practice, the plugin samples a target point, then projects it onto an origin shape. The default is the scene's bounding sphere, but you can specify any shape you like. I added this because, for some reason, I had intersection accuracy problems with very large scenes when ray origins were located on the bounding sphere (see #171).
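The mechanism described here (sample a target point, then back the ray origin off along the ray direction until it clears the scene's bounding sphere) can be sketched in a few lines. This is an illustration with assumed names and a hypothetical square target shape, not the plugin code:

```python
import math
import random

def sample_distant_ray(direction, target_half_size, bsphere_radius, rng):
    """Sketch: sample a target point on a square footprint in the z=0 plane,
    then place the ray origin outside the scene's bounding sphere by moving
    back along the ray direction (mirrors `o = target - 2 * d * radius`)."""
    target = (rng.uniform(-target_half_size, target_half_size),
              rng.uniform(-target_half_size, target_half_size),
              0.0)
    # 2 * radius is always enough to clear a sphere the ray passes through.
    origin = tuple(t - 2.0 * bsphere_radius * d
                   for t, d in zip(target, direction))
    return origin, target

rng = random.Random(0)
d = (0.0, 0.0, -1.0)  # rays travel straight down
origin, target = sample_distant_ray(d, 1.0, 5.0, rng)
```

By construction the origin lies on the line through the target with direction `d`, at distance `2 * bsphere_radius` before it, so every sampled ray hits the targeted footprint.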
I see, this sounds like a nice feature to have (especially the target control) to improve performance. I would be happy to see this code if it isn't too complicated. Those are very "primitive" plugins, so it is nice to keep them as simple as possible. If it isn't too difficult to implement, I would be in favor of adding this.
@leroyvn Interesting, what you described sounds a lot like the problem of sampling rays starting from an infinite emitter in the context of light tracing (aka particle tracing). See `mitsuba2/src/emitters/constant.cpp`, lines 69 to 99 at 7c9c539.
@merlinND Indeed, I'm actually following what you're doing as part of your light tracer PR, and I think there might be common interface issues to solve. This also holds true for sensor spectral sampling, which is very similar to emitter spectral sampling. @Speierers How about I first open a new PR with the current version (branching off from …)?
I ended up pushing the code to this branch. The tests will require some cleanup and a few render tests would also be nice, I think.
Hi @leroyvn,

Here are a few comments (mostly style, although I realize you were going to do another pass over the code).

I am a little concerned about the numerous combinations of parameters this plugin might take, and I would be in favor of reducing that number a bit. For instance, I am not sure of the utility of the `RAY_ORIGIN` feature, and IMO it doesn't really comply with the distant nature of this plugin.

I understand the `direction` and `orientation` parameters, but I am wondering whether this could be defined by the `to_world` matrix (e.g. using a `look_at`, defining target and up)? Also, when the resolution of the film is `NxM`, we should still be able to define the `up` vector, right?
Finally, regarding the target strategy, I am wondering whether things couldn't be simplified a bit. Basically we have the following combinations:

- `target=none`, `dir=single`: that's the original plugin ✔️
- `target=none`, `dir=width`: looks at the radiance coming from a sweep of directions, any use? ❓
- `target=none`, `dir=all`: looking at the scene from all around, I can see how this could be used ✔️
- `target=point`, `dir=single`: this could be useful, although I feel like it could deserve its own plugin
- `target=point`, `dir=width`: same as above ❓
- `target=point`, `dir=all`: isn't this the same as `target=none`, `dir=all`? ❓
- `target=shape`, `dir=single`: like the original plugin, but more efficient for a single shape and no missed rays ✔️
- `target=shape`, `dir=width`: same as above ❓
- `target=shape`, `dir=all`: looking from all around a shape, why not ✔️
```cpp
}

props.mark_queried("direction");
props.mark_queried("flip_directions");
```
What would be a use case for the `flip_directions` parameter?
When using the Dirac direction sampling strategy (1x1 film), the sensor looks, by default, in the direction opposite to the `direction` parameter. This is unintuitive, especially because the `directional` plugin uses a different convention. See a more general comment on this below.
```rst
The positioning of the origin of those rays can also be controlled using the
``ray_origin`` parameter. This is particularly useful when the scene has a
dimension much smaller than the others and it is not necessary that ray
origins be located on the scene's bounding sphere.
```
I would argue that we should remove this feature, as a distant sensor shouldn't have an origin.
See comments below: this is a workaround.
Hi @Speierers, thanks a lot for checking out this code so quickly! I think the chosen parametrisation and behaviour will be better understood with additional context. The general idea is that this plugin is used to estimate the BRDF of surfaces with complex geometry and/or an atmospheric layer. The illumination for such scenes consists of a …

**About ray direction sampling strategies**

The direction sampling strategy is driven by the film size, as you noticed. Here are the motivations for the 3 possible cases:
The Plane case is heavily used in practice because the principal plane contains many distinctive features and is much more quickly computed than the entire hemisphere.

**About direction and orientation**

At the time, … Arguably the functionality provided by those parameters can be achieved using …

**About origin control**

This feature is actually a workaround for #171. I agree with your comments, of course: I'm theoretically all in favour of dropping this. However, if the original problem is still here (I haven't checked if this is still problematic on the …

**About target control**

The None strategy is not very useful in practice. The Point strategy is used to trace rays from the top of the atmosphere in pseudo-1D scenes (no complex geometry such as trees or topography at the surface). The Shape strategy is used for similar purposes, but when the surface also features complex geometry (a "proper" 3D scene). Arguably, functionality very similar to the Point strategy could be achieved with the Shape strategy, e.g. by using a disk shape with a tiny radius. But why bother sampling a shape if it is not required? Also, user input is much simpler (point vs …

**Practical use case summary**
* Pseudo-1D geometry (flat surface, optical properties of the participating medium vary only along the vertical direction).

**Wrap-up**
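The three film-driven direction sampling modes discussed in this thread (Dirac for a 1x1 film, principal plane for Nx1, full hemisphere for NxM) could be sketched as follows. This is a hypothetical illustration in plain Python, not the plugin's actual mapping:

```python
import math

def sample_direction(film_width, film_height, u, v):
    """Sketch of film-driven direction sampling: a 1x1 film records a single
    (Dirac) direction, an Nx1 film sweeps a plane of directions, and an NxM
    film covers the whole hemisphere. (u, v) are film coordinates in [0, 1)."""
    if film_width == 1 and film_height == 1:
        # Dirac: single fixed direction (here, straight down).
        return (0.0, 0.0, -1.0)
    if film_height == 1:
        # Plane: zenith angle swept across the film width, azimuth fixed.
        theta = (u - 0.5) * math.pi  # in (-pi/2, pi/2)
        return (math.sin(theta), 0.0, -math.cos(theta))
    # Hemisphere: map (u, v) to spherical coordinates.
    theta, phi = 0.5 * math.pi * u, 2.0 * math.pi * v
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            -math.cos(theta))
```

The point of the sketch is only that the film resolution, rather than an explicit mode flag, selects the strategy, which is the behaviour debated above.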
Co-authored-by: @schunkes
This PR adds a distant directional sensor plugin similar to the distant directional emitter. It is useful to record radiance leaving the scene and, in practice, compute the BRDF of scenes featuring complex geometry.

The plugin features an optional `target` parameter, useful to restrict target location sampling to a single point in the scene. This feature makes the sensor equivalent to a `radiancemeter` pointing towards the target location and located on a bounding sphere centered at the target. It is useful to emulate the one-dimensional geometries heavily used in Earth observation applications.
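A hypothetical usage sketch of the point-target mode described above. Parameter names and values are assumptions based on the discussion in this PR, not the plugin's confirmed interface; the dictionary mirrors the `load_dict` style used in the tests:

```python
# Hypothetical scene fragment for a single-point distant sensor.
# With a 1x1 film and a point target, the sensor behaves like a
# radiancemeter on the bounding sphere aimed at the target.
scene_dict = {
    "type": "scene",
    "sensor": {
        "type": "distant",
        "direction": [0, 0, -1],  # rays travel straight down
        "target": [0, 0, 0],      # single-point target
        "film": {
            "type": "hdrfilm",
            "width": 1,           # 1x1 film: record a single direction
            "height": 1,
            "pixel_format": "luminance",
        },
        "sampler": {"type": "independent", "sample_count": 1000},
    },
}
```

Check the plugin's docstring for the actual parameter list before relying on these names.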