| Webpage | Full Paper | Dataset Download
TL;DR: A comprehensive multi-LiDAR, multi-scenario dataset that extensively incorporates segments of real-world geometric degeneracy.
- (20241025) The metadata.json file for the OS1-64 used in the Beta platform can be accessed via the following link: Google Drive.
- (20240910) Data can be downloaded from GEODE - Google Drive.
- (20240910) Dataset README is available.
- Review the overview of the GEODE dataset, including details about sensors, definitions of ROS topics and messages, and important caveats regarding localization evaluation.
- Download the dataset from GEODE - Google Drive. Additional information about each sequence and scenario is available on our homepage. Additional download options for users in Mainland China will be made available in the future. Currently, we only provide ROS1 bags. ROS2 users can convert these using the rosbags-convert toolkit.
- Adapt your SLAM algorithm using the provided dataset parameters, and calculate the error after obtaining the results.
- Device $\alpha$:
  - Velodyne VLP-16;
  - Stereo HikRobot MV-CS050-10GC cameras;
  - Xsens MTi-30 IMU;
- Device $\beta$:
  - Ouster OS1-64;
  - Stereo HikRobot MV-CS050-10GC cameras;
  - Xsens MTi-30 IMU;
- Device $\gamma$:
  - Livox AVIA;
  - Stereo HikRobot MV-CS050-10GC cameras;
  - Xsens MTi-30 IMU;
The GEODE dataset provides sensor raw data and the corresponding rosbags.
Sensor raw data marked with * is only available for
The calibration results are stored in three files, `alpha.yaml`, `beta.yaml`, and `gamma.yaml`, according to the acquisition device.
We provide the script `rmse.py` so that everyone can calculate the localization accuracy of the algorithms they run:

`python3 rmse.py <Your traj> <GT traj> <time offset>`
For example, the following command calculates the error between every TUM-format trajectory whose filename contains the `relead` field and the ground-truth trajectory, and then computes the average:

`python3 rmse.py relead <GT traj> 0`
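The internals of `rmse.py` are not reproduced here, but the underlying computation — the RMSE of absolute translational error between timestamp-associated poses — can be sketched as follows. This is a minimal illustration assuming nearest-neighbor timestamp matching with a small tolerance; the actual script may associate and align trajectories differently:

```python
import math

def load_tum(path):
    """Parse a TUM-format file: 'timestamp tx ty tz qx qy qz qw' per line."""
    stamps, positions = [], []
    with open(path) as f:
        for line in f:
            if line.startswith('#') or not line.strip():
                continue
            v = [float(x) for x in line.split()]
            stamps.append(v[0])
            positions.append(v[1:4])
    return stamps, positions

def ate_rmse(est, gt, time_offset=0.0, max_dt=0.02):
    """RMSE of translational error between nearest-in-time pose pairs.

    est, gt: (stamps, positions) tuples as returned by load_tum.
    time_offset is added to the estimate's stamps before matching.
    """
    est_t, est_p = est
    gt_t, gt_p = gt
    sq_errs = []
    for t, p in zip(est_t, est_p):
        t += time_offset
        # nearest ground-truth stamp (linear scan; fine for offline evaluation)
        i = min(range(len(gt_t)), key=lambda j: abs(gt_t[j] - t))
        if abs(gt_t[i] - t) <= max_dt:
            sq_errs.append(sum((a - b) ** 2 for a, b in zip(p, gt_p[i])))
    return math.sqrt(sum(sq_errs) / len(sq_errs))
```

This sketch only shows where the time offset and the pairing tolerance enter the computation; for reported numbers, use the provided `rmse.py`.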
For the sequences 'off-road', 'inland_waterways', and 'Metro_Tunnels_Tunneling', three sets of equipment were mounted on a rack constructed from aluminum profiles to simultaneously collect data, while only one set of GT pose equipment was utilized to capture motion trajectories. Consequently, the trajectories obtained from the algorithm need to be processed before proceeding with subsequent error calculations. Fortunately, due to the effectiveness of the time synchronization scheme, we only need to account for the spatial offsets between different sensors in these sequences.
For the 'off-road' and 'inland_waterways' sequences, where GT poses are collected using GNSS/INS, we align the GT poses to the coordinate system of the beta device. This alignment allows the trajectories derived from the beta device's data, processed by the algorithm, to be directly used for error calculation. For the alpha and gamma devices, the trajectories from the algorithm are transformed into the GT pose coordinate system using the scripts `alpha2GT_gnss.py` and `gamma2GT_gnss.py` before error calculations are performed.
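The conversion scripts are not reproduced here; conceptually, each SLAM pose is combined with the calibrated body-to-reference offset so that the reported position refers to the GT reference point. A minimal sketch, where the `lever_arm` value is a hypothetical placeholder for the extrinsics stored in the calibration YAML files:

```python
def quat_to_rot(qx, qy, qz, qw):
    """Unit quaternion (x, y, z, w — TUM order) -> 3x3 rotation matrix."""
    return [
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ]

def body_to_ref(pose, lever_arm):
    """Move one TUM pose from the sensor body to the GT reference point.

    pose: [t, tx, ty, tz, qx, qy, qz, qw] of the sensor body.
    lever_arm: position of the GT reference point in the body frame
               (hypothetical here; in practice it comes from calibration).
    Returns [t, x, y, z] with p_ref = R_body @ lever_arm + p_body.
    """
    t, tx, ty, tz, qx, qy, qz, qw = pose
    R = quat_to_rot(qx, qy, qz, qw)
    p_body = [tx, ty, tz]
    p_ref = [sum(R[i][k] * lever_arm[k] for k in range(3)) + p_body[i]
             for i in range(3)]
    return [t] + p_ref
```

Note that without the body attitude (the quaternion), the lever arm cannot be applied — which is exactly why the position-only Leica ground truth described below needs special handling.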
This means that when you want to evaluate the errors of these two sequences, 'off-road' and 'inland_waterways', there are three cases:
- For the alpha device, you need to convert the trajectories from multiple SLAM runs into the ground-truth coordinate system using the `alpha2GT_gnss.py` script. To do this, place the trajectories from this sequence into a folder (e.g., "Offroad1") and create another folder to store the converted trajectories (e.g., "tran2body"). Then update the paths for these two folders in the `raw_folder` and `output_folder` variables in the `alpha2GT_gnss.py` script. After running the script, you will obtain the trajectories transformed into the ground-truth coordinate system in the new folder (e.g., "tran2body"). Next, run the `rmse.py` script to calculate the RMSE.
- For the beta device, no additional processing is required. Simply run the `rmse.py` script to obtain the RMSE.
- For the gamma device, you similarly need to convert the trajectories from multiple SLAM runs into the ground-truth coordinate system using the `gamma2GT_gnss.py` script. As with the alpha device, place the trajectories into a folder (e.g., "Offroad1") and create another folder for the converted trajectories (e.g., "tran2body"). Then update the paths for these folders in the `raw_folder` and `output_folder` variables in the `gamma2GT_gnss.py` script. After running the script, you will obtain the converted trajectories in the ground-truth coordinate system in the new folder (e.g., "tran2body"). Finally, run the `rmse.py` script to calculate the RMSE.
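The conversion scripts described above share a folder-in, folder-out pattern via their `raw_folder` and `output_folder` variables. The loop can be sketched as follows, with a `transform_line` callback standing in for the device-specific transform implemented in the real scripts:

```python
import os

def batch_convert(raw_folder, output_folder, transform_line):
    """Apply `transform_line` to every pose line of every trajectory file
    in raw_folder, writing the results under output_folder.

    transform_line: callable str -> str implementing the per-pose
    coordinate transform (device-specific in the actual scripts).
    """
    os.makedirs(output_folder, exist_ok=True)
    for name in sorted(os.listdir(raw_folder)):
        src = os.path.join(raw_folder, name)
        dst = os.path.join(output_folder, name)
        with open(src) as fin, open(dst, "w") as fout:
            for line in fin:
                # pass comments and blank lines through untouched
                if line.startswith("#") or not line.strip():
                    fout.write(line)
                    continue
                fout.write(transform_line(line))
```

This is only an illustration of the workflow (folder of raw SLAM trajectories in, folder of GT-frame trajectories out); use the provided per-device scripts for actual conversion.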
In the 'Metro_Tunnels' sequences, where GT poses are obtained by tracking prisms with a Leica MS60, we align the ground truth to the alpha device. For the beta and gamma devices, the algorithm's trajectories are converted to the GT pose coordinate system using the scripts `beta2gt_leica.py` and `gamma2gt_leica.py`. The need for these coordinate transformations, which adds extra steps before errors can be calculated, arises because the GT poses from the Leica prism tracking contain only positions, not attitudes, so it is difficult to transfer the ground truth to the other two devices using the multi-LiDAR calibration results. To keep the processing uniform, we adopt the same method for all simultaneously recorded sequences.
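Because the prism-tracked ground truth contains positions only, relating it to a trajectory expressed in another frame typically involves a rigid, position-only alignment. A standard technique for this is the Kabsch/Umeyama method; the sketch below (not the actual `rmse.py` implementation) recovers the best-fit rotation and translation between matched position sets:

```python
import numpy as np

def kabsch_align(P, Q):
    """Best rigid transform (R, t) minimizing sum ||R @ p_i + t - q_i||^2.

    P: (N, 3) estimated positions; Q: (N, 3) ground-truth positions,
    matched by timestamp beforehand. Returns R (3x3) and t (3,).
    """
    mp, mq = P.mean(axis=0), Q.mean(axis=0)
    # cross-covariance of the centered point sets
    H = (P - mp).T @ (Q - mq)
    U, _, Vt = np.linalg.svd(H)
    # reflection guard: keep det(R) = +1
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mq - R @ mp
    return R, t
```

Position-only alignment like this fixes the frame offset but cannot recover per-pose attitudes, which is why the ground truth is anchored to one device and the others are transformed with the provided scripts.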
This indicates that when you want to evaluate the errors in the Metro_Tunnels scenario data (including both shield tunnel and tunneling tunnel sequences), there are three possible cases:
- For the alpha device, no additional processing is required. Simply run the `rmse.py` script to obtain the RMSE.
- For the beta device, you need to convert the SLAM output from multiple runs into the ground-truth coordinate system using the `beta2gt_leica.py` script. To do this, place the trajectories from this sequence into a folder (e.g., "shield_tunnel1") and create another folder to store the converted trajectories (e.g., "tran2body"). Then update the paths for these two folders in the `raw_folder` and `output_folder` variables in the `beta2gt_leica.py` script. After running the script, you will obtain the trajectories transformed into the ground-truth coordinate system in the new folder (e.g., "tran2body"). Next, run the `rmse.py` script to calculate the RMSE.
- For the gamma device, similarly, you need to convert the SLAM output from multiple runs into the ground-truth coordinate system using the `gamma2gt_leica.py` script. As with the beta device, place the trajectories into a folder (e.g., "shield_tunnel1") and create another folder for the converted trajectories (e.g., "tran2body"). Then update the paths for these folders in the `raw_folder` and `output_folder` variables in the `gamma2gt_leica.py` script. After running the script, you will obtain the converted trajectories in the ground-truth coordinate system in the new folder (e.g., "tran2body"). Finally, run the `rmse.py` script to calculate the RMSE.
For the “stair” sequence, we obtain the ground truth pose by using the PALoc algorithm to align the sensor data with the ground truth map. However, due to the small field of view of the Livox Avia LiDAR equipped with the
Click the button below to access detailed information (including scenarios, degeneration types, etc.) and to download the dataset.
Awesome-Algorithms-Against-Degeneracy
- SubT-MRS: Pushing SLAM Towards All-weather Environments, CVPR, 2024. [Paper] [website]
- ENWIDE Dataset (related paper: COIN-LIO: Complementary Intensity-Augmented LiDAR Inertial Odometry, ICRA, 2024. [arXiv] [code])
- LiDAR Degeneracy Datasets (related paper: Degradation Resilient LiDAR-Radar-Inertial Odometry, ICRA, 2024. [arXiv])
- WHU-Helmet: A helmet-based multi-sensor SLAM dataset for the evaluation of real-time 3D mapping in large-scale GNSS-denied environments, IEEE Transactions on Geoscience and Remote Sensing, 2023. [paper] [website]
- Open-source datasets released by the SubT teams
- Heterogeneous LiDAR Dataset for Benchmarking Robust Localization in Diverse Degenerate Scenarios. Zhiqiang Chen, Yuhua Qi, Dapeng Feng, Xuebin Zhuang, Hongbo Chen, Xiangcheng Hu, Jin Wu, Kelin Peng, Peng Lu. Under review. [arXiv]
If you have any other issues, please report them on the repository.