Application on other dataset: Replica room0 bad convergence
Hi there, thanks for sharing this excellent code!
I am trying to apply it to the Replica dataset: https://github.com/facebookresearch/Replica-Dataset
The experiment configuration is as follows:
Hardware: Tesla V100 (32 GB), Ubuntu 16.04
Dataset: Replica room0; 21 views picked from the first 200 frames, processed by COLMAP (see the sketch after this list)
Training: 18:3 train/test split, 50k iterations
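For reproducibility, here is a minimal sketch of how the subsampling step could look. The frame filenames and the even spacing are my assumptions for illustration, not necessarily how the views were actually picked:

```python
import os
import shutil
import numpy as np

# Hypothetical layout: Replica room0 frames rendered as frame_000000.png, ...
src = "replica_room0_frames"
dst = "data/replica_room0_20v/images"
os.makedirs(dst, exist_ok=True)

# Evenly subsample 21 of the first 200 frames; COLMAP is then run on `dst`.
for i in np.linspace(0, 199, num=21, dtype=int):
    shutil.copy(os.path.join(src, f"frame_{i:06d}.png"), dst)
```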
Parameter config:

```
expname = replica_20v
basedir = ./logs/release
datadir = ./data/replica_room0_20v
dataset_type = llff
factor = 4
llffhold = 8
N_rand = 4096
N_samples = 64
N_importance = 128
use_viewdirs = True
raw_noise_std = 1e0
no_ndc = True
colmap_depth = True
depth_loss = True
depth_lambda = 0.1
i_testset = 5000
i_video = 10000
N_iters = 50000
```
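For context on `depth_lambda`: I assume the depth supervision enters the total loss roughly as in the sketch below, using a simple L2 depth term for illustration; the repo's actual depth loss formulation may differ:

```python
import torch

def total_loss(rgb_pred, rgb_gt, depth_pred, depth_gt, depth_lambda=0.1):
    # Photometric term: MSE between rendered and ground-truth colors.
    img_loss = torch.mean((rgb_pred - rgb_gt) ** 2)
    # Depth term: rendered depth vs. COLMAP sparse depth at keypoints
    # (an L2 penalty here, purely for illustration).
    d_loss = torch.mean((depth_pred - depth_gt) ** 2)
    return img_loss + depth_lambda * d_loss
```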
When evaluated on the 3 test images, the RGB results are PSNR = 16.997, 19.288, and 20.136, respectively. The rendered videos are not visually recognizable as the original scene, and the depth does not converge.
I have also trained vanilla NeRF on the exact same dataset for 50k iterations; RGB rendering works fine there (PSNR around 30), but the disparity hardly converges either.
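For reference, the PSNR values above are the standard per-image metric, computed as in this minimal sketch (assuming images normalized to [0, 1]):

```python
import numpy as np

def psnr(pred, gt):
    # Standard PSNR for images in [0, 1]: -10 * log10(MSE).
    mse = np.mean((pred - gt) ** 2)
    return -10.0 * np.log10(mse)
```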
This synthetic dataset resembles the office room scene demonstrated on your project homepage in terms of field of view and pose diversity. I am wondering what the problem might be; many thanks for any comments!