I am testing the provided model. By default, the input is in BGR format since it is read with `cv2.imread`. I found that if the images are converted to RGB (`cv2.COLOR_BGR2RGB`), the depth map is even better. I checked the training code, and it also reads images using `cv2.imread`, so I am wondering why this is the case. Has the author or anyone else seen a similar phenomenon?
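For reference, the conversion in question is just a reversal of the channel axis. A minimal sketch (using NumPy only, since `cv2.cvtColor(img, cv2.COLOR_BGR2RGB)` does the same thing on a `cv2.imread` result):

```python
import numpy as np

def bgr_to_rgb(img: np.ndarray) -> np.ndarray:
    # Reverse the last (channel) axis: BGR -> RGB.
    # Equivalent to cv2.cvtColor(img, cv2.COLOR_BGR2RGB).
    return img[..., ::-1].copy()

# Tiny 1x1 "image" with blue=10, green=20, red=30 in BGR order.
bgr = np.array([[[10, 20, 30]]], dtype=np.uint8)
rgb = bgr_to_rgb(bgr)
print(rgb[0, 0].tolist())  # [30, 20, 10]
```

If the model's pretrained backbone expects RGB input while training fed it BGR, a channel-order mismatch like this could plausibly explain the difference, though only the authors can confirm.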