Real-time traffic scene mapping using a dense surfel representation.
This repo contains part of the code for our paper LADS: Lightweight Autonomous Driving Simulator Using Surfel Mapping and GAN Enhancement. SurfelMapping reconstructs a 3D traffic scene from images captured by the stereo camera of a driving vehicle. The 3D model is represented by surfels to make the rendered images more realistic.
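A surfel (surface element) is a small oriented disk. The sketch below illustrates the attributes a surfel map commonly stores (following the ElasticFusion-style layout this code is inspired by); the exact fields and the radius heuristic are assumptions for illustration, not necessarily what this repo uses.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Surfel:
    # Illustrative attributes of one surfel; the actual layout in this
    # repo may differ.
    position: np.ndarray   # 3D center of the disk, shape (3,)
    normal: np.ndarray     # unit surface normal, shape (3,)
    color: np.ndarray      # RGB in [0, 1], shape (3,)
    radius: float          # disk radius, typically grows with depth
    confidence: float      # accumulated measurement confidence
    timestamp: int         # frame index of the last update

def estimated_radius(depth, fx, normal_z):
    """Common heuristic (assumed here): radius proportional to depth over
    focal length, inflated when the surface is viewed at a grazing angle."""
    return (depth / fx) * (1.41 / max(abs(normal_z), 0.5))
```

Rendering the disks large enough to overlap is what makes the surfel map look like a continuous surface instead of a sparse point cloud.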
The code is inspired by ElasticFusion: https://www.imperial.ac.uk/dyson-robotics-lab/downloads/elastic-fusion/
- Ubuntu 18.04. We have only tested on 18.04, but the code should be compatible with other adjacent distributions.
- CMake
- OpenGL
- Eigen
- Pangolin (build from source)
- a GPU with sufficient memory
To build, run:
mkdir build
cd build
cmake ..
make
The program requires RGB images with corresponding depth and semantic maps as input. We use predictions from third-party learning methods to provide dense depth maps and semantic labels. You can download demo data from here (depth from PSMNet, semantics from PointRend). You can also use your own RGB, depth, and semantic data. The RGB, depth, and semantic subdirectories must live under the same parent directory, and their names must be set in the KittiReader. See KittiReader.cpp for details of the input paths. If you use the demo data, no changes are needed.
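The layout can be sanity-checked with a short script before running the mapper. The subdirectory names `rgb`, `depth`, and `semantic` below are placeholders; substitute whatever names you configured in KittiReader.

```python
import os

def check_data_dir(super_dir, subdirs=("rgb", "depth", "semantic")):
    """Verify each subdirectory exists and that all of them contain the
    same frame names (ignoring file extensions), so every RGB image has a
    matching depth and semantic map."""
    stems = []
    for sub in subdirs:
        path = os.path.join(super_dir, sub)
        if not os.path.isdir(path):
            raise FileNotFoundError(f"missing subdirectory: {path}")
        stems.append({os.path.splitext(f)[0] for f in os.listdir(path)})
    if not all(s == stems[0] for s in stems):
        raise ValueError("frame names differ between subdirectories")
    return len(stems[0])  # number of complete frames
```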
To build the surfel map, run
cd build
./build_map [path to the data super dir]
In the GUI window, uncheck the "pause" box to run the mapping. Click "save" to save the built map.
To load a saved map and generate new data, run
cd build
./load_map [path to the data super dir] [saved map path]
Click "path mode" and generate novel views as shown and instructed in loadmap.gif. Then click "Acquire Novel Images" to get new images of those views.
The code in the SPADE dir is forked from https://github.com/NVlabs/SPADE. We modified the input pipeline and some other parts. A postprocess.py script is also added for synthesizing the final images from the GAN-generated image and the rendered image. Please refer to the original SPADE repository for training and testing instructions.
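One plausible way such a synthesis step can work (an assumption for illustration, not necessarily what postprocess.py does) is to keep rendered pixels wherever the surfel map produced valid coverage and fill the remaining holes from the GAN output:

```python
import numpy as np

def composite(rendered, gan_image, valid_mask):
    """Take rendered pixels where valid_mask is True, GAN pixels elsewhere.
    rendered, gan_image: (H, W, 3) float arrays; valid_mask: (H, W) bool."""
    # Broadcast the 2D mask over the color channel before selecting.
    return np.where(valid_mask[..., None], rendered, gan_image)
```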