I understand the normal maps are computed in camera space, but could you elaborate on the exact transformation from world-space normals to camera space? For example, how are the x, y, z axes defined in camera space?
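For concreteness, here is a minimal sketch of the transformation I'm currently assuming, where the normals are rotated by the world-to-camera rotation R taken from the extrinsics; the part I'm unsure about is the axis convention (e.g. whether +y points up or down, and whether the camera looks along -z or +z):

```python
import numpy as np

def world_to_camera_normals(normals_world, R):
    """Rotate world-space unit normals into camera space.

    normals_world : (N, 3) array of unit normals in world coordinates.
    R             : (3, 3) world-to-camera rotation (rotation part of the
                    extrinsics); normals only need the rotation, not the
                    translation.
    """
    normals_cam = normals_world @ R.T
    # Re-normalize to guard against numerical drift.
    normals_cam /= np.linalg.norm(normals_cam, axis=1, keepdims=True)
    return normals_cam

# Example: with an identity rotation, a normal along world +z stays (0, 0, 1)
# in camera space under this convention.
print(world_to_camera_normals(np.array([[0.0, 0.0, 1.0]]), np.eye(3)))
```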
I've been looking at this code, but when I pass a predicted mesh given by the demo code to `_render_normal` I get a blank image (which suggests these are not the right transformations): pifuhd/lib/evaluator.py, line 83 in e47c4d9.
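For reference, this is roughly the sanity check I've been running instead. It is not the repo's `_render_normal`; it's my own crude orthographic splat, and the use of trimesh, the (n * 0.5 + 0.5) color mapping, and the "larger z is closer" depth test are all my assumptions:

```python
import numpy as np
import trimesh  # assumed available; used only for loading and vertex normals

def splat_normal_map(mesh_path, R, size=512):
    """Crude orthographic point-splat of camera-space vertex normals.

    Not the renderer in pifuhd/lib/evaluator.py -- just a sanity check:
    vertices are rotated by the world-to-camera rotation R, orthographically
    projected into a size x size image, and colored with n * 0.5 + 0.5.
    """
    mesh = trimesh.load(mesh_path, process=False)  # assumed single Trimesh
    verts_cam = np.asarray(mesh.vertices) @ R.T
    normals_cam = np.asarray(mesh.vertex_normals) @ R.T

    # Fit the projected x, y coordinates into the image (orthographic).
    xy = verts_cam[:, :2]
    xy = (xy - xy.min(0)) / (xy.max(0) - xy.min(0) + 1e-8)
    px = (xy * (size - 1)).astype(int)

    img = np.zeros((size, size, 3), dtype=np.float32)
    zbuf = np.full((size, size), -np.inf)
    for (x, y), z, n in zip(px, verts_cam[:, 2], normals_cam):
        row = size - 1 - y                       # image row 0 is the top
        if z > zbuf[row, x]:                     # assume larger z = closer
            zbuf[row, x] = z
            img[row, x] = n * 0.5 + 0.5          # map [-1, 1] to [0, 1]
    return img
```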
In the demo code, it looks like the normals are directly predicted by the network, so I'm having trouble deciphering what the coordinate system is: pifuhd/apps/recon.py, line 118 in e47c4d9.
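What I'm doing with the network output right now is simply re-normalizing it per pixel, under the assumption that it is a 3-channel map roughly in [-1, 1] (e.g. tanh-activated); if that assumption is wrong, it would explain part of my confusion:

```python
import numpy as np

def decode_predicted_normals(nml_tensor):
    """Convert a predicted normal image back to unit vectors.

    Assumes the network output is a (3, H, W) array roughly in [-1, 1];
    re-normalizes each pixel so the recovered normals have unit length
    before comparing coordinate frames.
    """
    n = np.transpose(np.asarray(nml_tensor), (1, 2, 0))      # (H, W, 3)
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    n = n / np.clip(norm, 1e-8, None)
    rgb = n * 0.5 + 0.5                                       # for visualization
    return n, rgb
```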
In summary, I'm trying to use the normal maps predicted by your pretrained pix2pix network, but to do so I need to know how the ground truth normal maps used to train this network were computed.
Thank you!