training with vlp-16 dataset #19
Comments
> For cameras with different intrinsic parameters than the KITTI cameras, there is a slightly more complicated image resizing process than simple cropping. In particular, we need to resize the image in a way such that the resized image has the same effective intrinsics as KITTI.
@XiaotaoGuo It might or might not work well with a different set of intrinsics, but there is simply no guarantee that the trained network would transfer directly to this new set of intrinsics (and image sizes). My suggestion is to keep the test images as close to the training images as possible.

Thanks! What if we use our own dataset to train the network and test with it?

Then there is no need for any resizing.
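For illustration, the "same effective intrinsics" resizing quoted at the top of this thread might look roughly like the sketch below. This is not the repository's actual preprocessing; `match_kitti_intrinsics`, the OpenCV dependency, and the hard-coded KITTI constants are all assumptions for the example.

```python
import numpy as np
import cv2  # assumption: OpenCV is available for resizing

# Placeholder KITTI intrinsics/size; substitute the calibration of the
# KITTI camera the network was actually trained on.
KITTI_K = np.array([[721.5377,   0.0,    609.5593],
                    [  0.0,    721.5377, 172.854 ],
                    [  0.0,      0.0,      1.0   ]])
KITTI_SIZE = (1242, 375)  # (width, height)

def match_kitti_intrinsics(img, K):
    """Rescale `img` so its focal length matches KITTI's, then
    center-crop to KITTI's resolution, updating `K` to match.
    Assumes the camera's fx/fy ratio is close to KITTI's and that
    the rescaled image is at least as large as KITTI_SIZE."""
    scale = KITTI_K[0, 0] / K[0, 0]   # match focal lengths (in pixels)
    img = cv2.resize(img, None, fx=scale, fy=scale,
                     interpolation=cv2.INTER_LINEAR)
    K = K.copy()
    K[:2, :] *= scale                 # fx, fy, cx, cy scale with the image

    w, h = KITTI_SIZE
    y0 = (img.shape[0] - h) // 2      # center-crop offsets
    x0 = (img.shape[1] - w) // 2
    img = img[y0:y0 + h, x0:x0 + w]
    K[0, 2] -= x0                     # principal point shifts by the crop
    K[1, 2] -= y0
    return img, K
```

The invariant this aims for is that focal length (in pixels) and image size end up identical to the training data, so each pixel subtends the same angle it did during training.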
Hi Fangchang,
Our lab is currently working on a project that requires generating depth maps from our VLP-16 LiDAR and camera setup, and your work looks like a great fit for the depth-map component. Since our input images are a different size, I think what we need to do to use this network is (1) read in our own calibration information (K) and (2) crop the input images so that width and height are both multiples of 16 (we got errors in the decoder layers with some other sizes). Is that right?
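For concreteness, step (2) might look like the sketch below. This is an assumption on my part that a center crop with a matching principal-point shift is acceptable; `crop_to_multiple_of_16` is a hypothetical helper, not something from this repo.

```python
import numpy as np

def crop_to_multiple_of_16(img, K):
    """Center-crop `img` so height and width are multiples of 16,
    shifting the principal point in `K` by the crop offsets."""
    h, w = img.shape[:2]
    new_h, new_w = (h // 16) * 16, (w // 16) * 16
    y0, x0 = (h - new_h) // 2, (w - new_w) // 2
    cropped = img[y0:y0 + new_h, x0:x0 + new_w]
    K = K.copy()
    K[0, 2] -= x0  # cx shifts with the horizontal crop
    K[1, 2] -= y0  # cy shifts with the vertical crop
    return cropped, K
```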
We've tested with a rather small dataset (only ~700 frames) and got results like the figure shown below.

We are wondering whether the dataset is too small, or whether the depth input from the VLP-16 is too sparse, since the results still show clearly visible projected scan lines. It would be great if you have any suggestions. Thanks!
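One way to pin down the "too sparse" question is to measure the density of the projected VLP-16 depth maps and compare it against the KITTI training data, which comes from a 64-beam sensor (roughly four times as many scan lines as a VLP-16). A minimal sketch, assuming the sparse depth maps are arrays with 0 marking missing pixels:

```python
import numpy as np

def depth_density(sparse_depth):
    """Fraction of pixels carrying a valid depth value (0 = missing)."""
    return float((sparse_depth > 0).mean())

# Hypothetical usage: a much lower density than the KITTI training data
# would suggest input sparsity, rather than dataset size, explains the
# visible projected scan lines in the output.
# vlp16_density = depth_density(np.load("our_sparse_depth.npy"))
```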