feat(reorganize): reorganize docs structure and add more detail about data read, visualization, etc.
Kin-Zhang committed Aug 24, 2024
1 parent 8eb397a commit e83d04b
Showing 10 changed files with 277 additions and 287 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/ci.yml
@@ -21,5 +21,5 @@ jobs:
key: ${{ github.ref }}
path: .cache
- run: pip install mkdocs-material
- run: pip install mkdocs-minify-plugin mkdocs-video mkdocs-git-committers-plugin mkdocs-git-revision-date-localized-plugin # mkdocs-git-revision-date-plugin # mkdocs-git-revision-date-localized-plugin mkdocs-git-authors-plugin
- run: pip install mkdocs-minify-plugin mkdocs-video mkdocs-git-committers-plugin-2 mkdocs-git-revision-date-localized-plugin # mkdocs-git-revision-date-plugin # mkdocs-git-revision-date-localized-plugin mkdocs-git-authors-plugin
- run: mkdocs gh-deploy --force
106 changes: 16 additions & 90 deletions README.md
@@ -1,103 +1,29 @@
A Dynamic Points Removal Benchmark in Point Cloud Maps
---

<!-- [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/a-dynamic-points-removal-benchmark-in-point/dynamic-point-removal-on-semi-indoor)](https://paperswithcode.com/sota/dynamic-point-removal-on-semi-indoor?p=a-dynamic-points-removal-benchmark-in-point) -->
[![arXiv](https://img.shields.io/badge/arXiv-2307.07260-b31b1b?logo=arxiv&logoColor=white)](https://arxiv.org/abs/2307.07260)
[![video](https://img.shields.io/badge/中文-Bilibili-74b9ff?logo=bilibili&logoColor=white)](https://www.bilibili.com/video/BV1bC4y1R7h3)
[![video](https://img.shields.io/badge/video-YouTube-FF0000?logo=youtube&logoColor=white)](https://youtu.be/pCHsNKXDJQM?si=nhbAnPrbaZJEqbjx)
[![poster](https://img.shields.io/badge/Poster-6495ed?style=flat&logo=Shotcut&logoColor=wihte)](https://hkustconnect-my.sharepoint.com/:b:/g/personal/qzhangcb_connect_ust_hk/EQvNHf9JNEtNpyPg1kkNLNABk0v1TgGyaM_OyCEVuID4RQ?e=TdWzAq)

Here is a preview of the README in the code repository. The task is to detect dynamic points in maps and remove them, enhancing the maps:
Author: [Qingwen Zhang](http://kin-zhang.github.io) (Kin)

<center>
<img src="assets/imgs/background.png" width="80%">
</center>
This is **our wiki page README**, please visit our [main branch](https://github.com/KTH-RPL/DynamicMap_Benchmark) for more information about the benchmark.

**Folder** quick view:
## Install

- `methods` : contains all the methods in the benchmark
- `scripts/py/eval`: evaluate the result PCD against the ground truth and get the quantitative table
- `scripts/py/data` : pre-process data before benchmarking. We also directly provide all the datasets we tested. We run this benchmark offline on a computer, so we extract only PCD files from custom rosbags or other data formats (KITTI, Argoverse 2).
If you want to try the MkDocs site locally, the only things you need are `Python` and a few Python packages. If you are worried it will break your environment, you can try a [virtual env](https://docs.python.org/3/library/venv.html) or [anaconda](https://www.anaconda.com/).

**Quick** try:

- Teaser data on KITTI sequence 00 (only 384.8 MB) from the [Zenodo online drive](https://zenodo.org/record/10886629)
```bash
wget https://zenodo.org/records/10886629/files/00.zip
unzip 00.zip -d ${data_path, e.g. /home/kin/data}
```
- Clone our repo:
```bash
git clone --recurse-submodules https://github.com/KTH-RPL/DynamicMap_Benchmark.git
```
- Go to the methods folder, then build and run:
```bash
cd methods/dufomap && cmake -B build -D CMAKE_CXX_COMPILER=g++-10 && cmake --build build
./build/dufomap_run ${data_path, e.g. /home/kin/data/00} ${assets/config.toml}
```

### News:

Feel free to open a pull request if you want to add more methods or datasets. Welcome! We will try our best to keep the methods and datasets in this benchmark up to date. Please give us a star 🌟 and cite our work 📖 if you find this useful for your research. Thanks!

- **2024/04/29** [BeautyMap](https://arxiv.org/abs/2405.07283) is accepted by RA-L'24. Benchmark updated with BeautyMap and DeFlow submodule instructions. Added the first data-driven method, [DeFlow](https://github.com/KTH-RPL/DeFlow/tree/feature/dynamicmap), to our benchmark. Feel free to check it out.
- **2024/04/18** [DUFOMap](https://arxiv.org/abs/2403.01449) is accepted by RA-L'24. Benchmark updated with DUFOMap and dynablox submodule instructions. Two demo datasets without ground truth were added to the download link. Feel free to check them out.
- **2024/03/08** **Fixed statements** in our ITSC'23 paper: the KITTI sequence poses are also from SemanticKITTI, which used SuMa. In the DUFOMap paper, Section V-C, Table III, we present dynamic removal results for different pose sources. See the discussion in the [DUFOMap](https://arxiv.org/abs/2403.01449) paper if you are interested.
- **2023/06/13** The [benchmark paper](https://arxiv.org/abs/2307.07260) was accepted by ITSC 2023, releasing four methods (Octomap, Octomap w GF, ERASOR, Removert) and four datasets (01, 05, av2, semindoor).
---
- [ ] 2024/04/19: I will update a documentation page soon (tutorial, manual, and new online leaderboard) and point out the commit for each paper, since there are some minor mistakes in the first version. Stay tuned!
## Methods:
Please check in [`methods`](methods) folder.
Online (w/o prior map):
- [x] DUFOMap (Ours 🚀): [RAL'24](https://arxiv.org/abs/2403.01449), [**Benchmark Instruction**](https://github.com/KTH-RPL/dufomap)
- [x] Octomap w GF (Ours 🚀): [ITSC'23](https://arxiv.org/abs/2307.07260), [**Benchmark improvement ITSC 2023**](https://github.com/Kin-Zhang/octomap/tree/feat/benchmark)
- [x] dynablox: [RAL'23 official link](https://github.com/ethz-asl/dynablox), [**Benchmark Adaptation**](https://github.com/Kin-Zhang/dynablox/tree/feature/benchmark)
- [x] Octomap: [ICRA'10 & AR'13 official link](https://github.com/OctoMap/octomap_mapping), [**Benchmark implementation**](https://github.com/Kin-Zhang/octomap/tree/feat/benchmark)

Learning-based (data-driven) (w pretrain-weights provided):
- [x] DeFlow (Ours 🚀): [ICRA'24](https://arxiv.org/abs/2401.16122), [**Benchmark Adaptation**](https://github.com/KTH-RPL/DeFlow/tree/feature/dynamicmap)
Offline (needs a prior map):
- [x] BeautyMap (Ours 🚀): [RAL'24](https://arxiv.org/abs/2405.07283), [**Official Code**](https://github.com/MKJia/BeautyMap)
- [x] ERASOR: [RAL'21 official link](https://github.com/LimHyungTae/ERASOR), [**benchmark implementation**](https://github.com/Kin-Zhang/ERASOR/tree/feat/no_ros)
- [x] Removert: [IROS 2020 official link](https://github.com/irapkaist/removert), [**benchmark implementation**](https://github.com/Kin-Zhang/removert)
Please note that we also provide the comparison methods, slightly modified so that we can run the experiments quickly, but without any changes to the methods' core. Please check the LICENSE of each method in its official link before using it.

You will find all methods of this benchmark under the `methods` folder, so you can easily reproduce the experiments. [Or click here to check our score screenshot directly](assets/imgs/eval_demo.png).
<!-- And we will also directly provide [the result data](TODO) so that you don't need to run the experiments by yourself. ... Where to save this? -->
Last but not least, feel free to open a pull request if you want to add more methods. Welcome!
## Dataset & Scripts
Download the PCD files mentioned in the paper from the [Zenodo online drive](https://zenodo.org/records/10886629), or create the unified format yourself with the [scripts we provide](scripts/README.md) for more open data or your own dataset. Please follow the LICENSE of each dataset before using it.
- [x] [Semantic-Kitti, outdoor small town](https://semantic-kitti.org/dataset.html) VLP-64
- [x] [Argoverse2.0, outdoor US cities](https://www.argoverse.org/av2.html#lidar-link) VLP-32
- [x] [UDI-Plane] Our own dataset, collected with a VLP-16 on a small vehicle.
- [x] [KTH-Campus] Our [Multi-Campus Dataset](https://mcdviral.github.io/), collected with a [Leica RTC360 3D Laser Scanner](https://leica-geosystems.com/products/laser-scanners/scanners/leica-rtc360). Only 18 frames are included in the demo download; please check [the official website](https://mcdviral.github.io/) for more.
- [x] [Indoor-Floor] Our own dataset, collected with a Livox Mid-360 on a quadruped robot.
<!-- - [ ] [HKUST-Building] Our [fusionportable Dataset](https://fusionportable.github.io/dataset/fusionportable/), collected by [Leica BLK360 Imaging Laser Scanner](https://leica-geosystems.com/products/laser-scanners/scanners/blk360) -->
<!-- - [ ] [KTH-Indoor] Our own dataset, Collected by VLP-16/Mid-70 in kobuki. -->
You are welcome to contribute your dataset with ground truth to the community through a pull request.
### Evaluation
First, all methods output a cleaned map; if you are only a **user of the map-cleaning task**, that is **enough**. For evaluation, however, we need to extract the ground truth labels that correspond to the cleaned map. Why do we need this? Because some methods downsample inside their pipeline, so the ground truth labels must be re-extracted on the downsampled map.
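
As an illustration of the downsampling issue, here is a minimal sketch (not the benchmark's actual evaluation code) of transferring ground truth labels onto a method's output map with a nearest-neighbor lookup; the file names and array layout are assumptions for the example.

```python
import numpy as np
from scipy.spatial import cKDTree  # nearest-neighbor lookup

# Hypothetical inputs (loaded from the PCD files however you prefer):
#   gt_points:  (N, 3) ground truth map points
#   gt_labels:  (N,)   per-point labels (e.g. 0 = static, 1 = dynamic)
#   out_points: (M, 3) the cleaned (possibly downsampled) map from a method
gt_points = np.load("gt_points.npy")
gt_labels = np.load("gt_labels.npy")
out_points = np.load("method_output_points.npy")

# For every point in the method's output, take the label of its nearest
# ground truth point, so both maps share a consistent labeling.
tree = cKDTree(gt_points)
_, nn_idx = tree.query(out_points, k=1)
out_labels = gt_labels[nn_idx]
```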

Check the [create dataset readme part](scripts/README.md#evaluation) in the scripts folder for more information. You can also directly download the dataset through the link we provided; then there is no need to read the creation section, just use the data you downloaded.
Main package (users may only need this sometimes; check the issues section):
```bash
pip install mkdocs-material
```

- Visualize the result PCD files in [CloudCompare](https://www.danielgm.net/cc/) or with the script we provide; one click gets all evaluation benchmarks and comparison images like those in the paper. Check [scripts/py/eval](scripts/py/eval).
Plugin packages:
```bash
pip install mkdocs-minify-plugin mkdocs-git-revision-date-localized-plugin mkdocs-git-authors-plugin mkdocs-video
```

- All color bars are also provided in CloudCompare; here is a [tutorial on how we make the animation video](TODO).
### Run
```bash
mkdocs serve
```

## Acknowledgements

72 changes: 6 additions & 66 deletions docs/data.md → docs/data/creation.md
@@ -1,73 +1,13 @@
# Data
# Data Creation

In this section, we will introduce the data format we use in the benchmark, and how to prepare the data (public datasets or collected by ourselves) for the benchmark.

## Format

We save all our data as PCD files; first, let me introduce the [PCD file format](https://pointclouds.org/documentation/tutorials/pcd_file_format.html):

The important fields for us are `VIEWPOINT`, `POINTS`, and `DATA`:

- **VIEWPOINT** - specifies an acquisition viewpoint for the points in the dataset. This could potentially be used later for building transforms between different coordinate systems, or for aiding features such as surface normals that need a consistent orientation.

The viewpoint information is specified as a translation (tx ty tz) + quaternion (qw qx qy qz). The default value is:

```bash
VIEWPOINT 0 0 0 1 0 0 0
```

- **POINTS** - specifies the number of points in the dataset.

- **DATA** - specifies the data type that the point cloud data is stored in. As of version 0.7, three data types are supported: ascii, binary, and binary_compressed. We save ours as binary for faster reading and writing.

### Example

```
# .PCD v0.7 - Point Cloud Data file format
VERSION 0.7
FIELDS x y z intensity
SIZE 4 4 4 4
TYPE F F F F
COUNT 1 1 1 1
WIDTH 125883
HEIGHT 1
VIEWPOINT -15.6504 17.981 -0.934952 0.882959 -0.0239536 -0.0058903 -0.468802
POINTS 125883
DATA binary
```
In `004390.pcd` we have 125883 points, and the pose (sensor center) of this frame is `-15.6504 17.981 -0.934952 0.882959 -0.0239536 -0.0058903 -0.468802`. All points are already transformed into the world frame.
## Download benchmark data
We have already processed the data for the benchmark; you can download it from the [following links](https://zenodo.org/records/10886629):
| Dataset | Description | Sensor Type | Total Frame Number | Size |
| --- | --- | --- | --- | --- |
| KITTI sequence 00 | in a small town with few dynamics (including one pedestrian around) | VLP-64 | 141 | 384.8 MB |
| KITTI sequence 05 | a straight road in a small town, with one tall vehicle; the benchmark paper's cover image is from this sequence. | VLP-64 | 321 | 864.0 MB |
| Argoverse2 | in a big city, crowded, with tall buildings (including cyclists, vehicles, people walking near buildings, etc.) | 2 x VLP-32 | 575 | 1.3 GB |
| KTH campus (no gt) | Collected by us (Thien-Minh) on the KTH campus. Lots of people move around on the campus. | Leica RTC360 | 18 | 256.4 MB |
| Semi-indoor | Collected by us, running on a small 1x2 vehicle with two people walking around the platform. | VLP-16 | 960 | 620.8 MB |
| Twofloor (no gt) | Collected by us (Bowen Yang) with a quadruped robot, in a two-floor environment with one pedestrian around. | Livox-mid 360 | 3305 | 725.1 MB |
Download command:
```bash
wget https://zenodo.org/api/records/10886629/files-archive.zip
# or download each sequence separately
wget https://zenodo.org/records/10886629/files/00.zip
wget https://zenodo.org/records/10886629/files/05.zip
wget https://zenodo.org/records/10886629/files/av2.zip
wget https://zenodo.org/records/10886629/files/kthcampus.zip
wget https://zenodo.org/records/10886629/files/semindoor.zip
wget https://zenodo.org/records/10886629/files/twofloor.zip
```
In this section, we demonstrate how to extract data in the expected format from public datasets (KITTI, Argoverse 2) and from data collected by ourselves (rosbag).
<!-- I will add soo -->
Still, I recommend downloading the benchmark data directly from the [Zenodo](https://zenodo.org/records/10886629) link without reading this section; go back to the [data download and visualization](index.md#download-benchmark-data) page.
This section is only needed if you want to **run more data of your own**.

## Create by yourself

If you want to process more data, you can follow the instructions below. (
If you want to process more data, you can follow the instructions below.

!!! Note
Feel free to skip this section if you only want to use the benchmark data.
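
As a rough sketch of what this creation step boils down to (an illustration under assumptions, not the repository's actual script; the file names and pose file are hypothetical): load a KITTI-style Velodyne `.bin` scan, transform it into the world frame with the frame's 4x4 pose, and keep the pose so it can be written into the PCD `VIEWPOINT` field.

```python
import numpy as np

# Hypothetical inputs: one KITTI-style Velodyne scan and its 4x4 sensor-to-world pose.
scan = np.fromfile("000000.bin", dtype=np.float32).reshape(-1, 4)  # x, y, z, intensity
pose = np.loadtxt("000000_pose.txt").reshape(4, 4)

# Transform the points into the world frame using homogeneous coordinates.
xyz_h = np.hstack([scan[:, :3], np.ones((scan.shape[0], 1), dtype=np.float32)])
xyz_world = (pose @ xyz_h.T).T[:, :3]

# The benchmark format then stores xyz_world (plus intensity) as a binary PCD,
# with the sensor pose written into the VIEWPOINT header field.
```
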
122 changes: 122 additions & 0 deletions docs/data/index.md
@@ -0,0 +1,122 @@
# Data Description

In this section, we will introduce the data format we use in the benchmark, and how to visualize the data easily.
The next section, on data creation, will show you how to create data in this format from your own data.

## Benchmark Unified Format

We save all our data as **PCD files**; first, let me introduce the [PCD file format](https://pointclouds.org/documentation/tutorials/pcd_file_format.html):

The important fields for us are `VIEWPOINT`, `POINTS`, and `DATA`:

- **VIEWPOINT** - specifies an acquisition viewpoint for the points in the dataset. This could potentially be used later for building transforms between different coordinate systems, or for aiding features such as surface normals that need a consistent orientation.

The viewpoint information is specified as a translation (tx ty tz) + quaternion (qw qx qy qz). The default value is:

```bash
VIEWPOINT 0 0 0 1 0 0 0
```

- **POINTS** - specifies the number of points in the dataset.

- **DATA** - specifies the data type that the point cloud data is stored in. As of version 0.7, three data types are supported: ascii, binary, and binary_compressed. We save ours as binary for faster reading and writing.

### A Header Example

Here is an example header, taken directly from `004390.pcd` in KITTI sequence 00:

```
# .PCD v0.7 - Point Cloud Data file format
VERSION 0.7
FIELDS x y z intensity
SIZE 4 4 4 4
TYPE F F F F
COUNT 1 1 1 1
WIDTH 125883
HEIGHT 1
VIEWPOINT -15.6504 17.981 -0.934952 0.882959 -0.0239536 -0.0058903 -0.468802
POINTS 125883
DATA binary
```
In `004390.pcd` we have 125883 points, and the pose (sensor center) of this frame is `-15.6504 17.981 -0.934952 0.882959 -0.0239536 -0.0058903 -0.468802`.
Again, all points in the data frames are ==already transformed into the world frame==, and VIEWPOINT is the sensor pose.
### How to read PCD files
In C++, we usually use the PCL library to read PCD files; here is a simple example:
```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>  // needed for pcl::io::loadPCDFile

// read the x, y, z, intensity fields into a point cloud
pcl::PointCloud<pcl::PointXYZI>::Ptr pcd(new pcl::PointCloud<pcl::PointXYZI>);
pcl::io::loadPCDFile<pcl::PointXYZI>("data/00/004390.pcd", *pcd);
```

In Python, we have a simple script to read PCD files in [the benchmark code](https://github.com/KTH-RPL/DynamicMap_Benchmark/blob/main/scripts/py/utils/pcdpy3.py), or from [my gist](https://gist.github.com/Kin-Zhang/bd6475bdfa0ebde56ab5c060054d5185); you don't need to read the script in detail, just use it directly.

```python
import pcdpy3 # the script I provided
pcd_data = pcdpy3.PointCloud.from_path('data/00/004390.pcd')
pc = pcd_data.np_data[:,:3] # shape (N, 3); N: number of points, 3: x y z
# if the header has an intensity or rgb field, you can get it by:
# pc_intensity = pcd_data.np_data[:,3] # shape (N,)
# pc_rgb = pcd_data.np_data[:,3:6] # shape (N, 3)
```
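
Since the stored points are in the world frame and `VIEWPOINT` is the sensor pose, you can recover the pose as a 4x4 matrix and, for example, bring one frame back into the sensor frame. Here is a small sketch, assuming SciPy is installed; note the PCD header quaternion order is `qw qx qy qz`, while SciPy's `Rotation.from_quat` expects `x, y, z, w`.

```python
import numpy as np
import pcdpy3  # the same helper script as above
from scipy.spatial.transform import Rotation

pc = pcdpy3.PointCloud.from_path('data/00/004390.pcd').np_data[:, :3]  # world-frame points

# VIEWPOINT of 004390.pcd from the header above: tx ty tz qw qx qy qz
tx, ty, tz, qw, qx, qy, qz = (-15.6504, 17.981, -0.934952,
                              0.882959, -0.0239536, -0.0058903, -0.468802)

# Build the 4x4 sensor-to-world transform.
T = np.eye(4)
T[:3, :3] = Rotation.from_quat([qx, qy, qz, qw]).as_matrix()  # SciPy expects x, y, z, w
T[:3, 3] = [tx, ty, tz]

# Bring the world-frame points back into the sensor frame with the inverse transform.
pc_h = np.hstack([pc, np.ones((pc.shape[0], 1))])
pc_sensor = (np.linalg.inv(T) @ pc_h.T).T[:, :3]
```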

## Download benchmark data

We have already processed the data for the benchmark; you can download it from the [following links](https://zenodo.org/records/10886629):


| Dataset | Description | Sensor Type | Total Frame Number | Size |
| --- | --- | --- | --- | --- |
| KITTI sequence 00 | in a small town with few dynamics (including one pedestrian around) | VLP-64 | 141 | 384.8 MB |
| KITTI sequence 05 | a straight road in a small town, with one tall vehicle; the benchmark paper's cover image is from this sequence. | VLP-64 | 321 | 864.0 MB |
| Argoverse2 | in a big city, crowded, with tall buildings (including cyclists, vehicles, people walking near buildings, etc.) | 2 x VLP-32 | 575 | 1.3 GB |
| KTH campus (no gt) | Collected by us (Thien-Minh) on the KTH campus. Lots of people move around on the campus. | Leica RTC360 | 18 | 256.4 MB |
| Semi-indoor | Collected by us, running on a small 1x2 vehicle with two people walking around the platform. | VLP-16 | 960 | 620.8 MB |
| Twofloor (no gt) | Collected by us (Bowen Yang) with a quadruped robot, in a two-floor environment with one pedestrian around. | Livox-mid 360 | 3305 | 725.1 MB |

Download command:
```bash
wget https://zenodo.org/api/records/10886629/files-archive.zip

# or download each sequence separately
wget https://zenodo.org/records/10886629/files/00.zip
wget https://zenodo.org/records/10886629/files/05.zip
wget https://zenodo.org/records/10886629/files/av2.zip
wget https://zenodo.org/records/10886629/files/kthcampus.zip
wget https://zenodo.org/records/10886629/files/semindoor.zip
wget https://zenodo.org/records/10886629/files/twofloor.zip
```

## Visualize the data

We provide a simple script to visualize the data in the benchmark; you can find it at [scripts/py/data/play_data.py](https://github.com/KTH-RPL/DynamicMap_Benchmark/blob/main/scripts/py/data/play_data.py). You may want to download the data and install the requirements first.

```bash
cd scripts/py

# download the data
wget https://zenodo.org/records/10886629/files/twofloor.zip

# https://github.com/KTH-RPL/DynamicMap_Benchmark/blob/main/scripts/py/requirements.txt
pip install -r requirements.txt
```

Run it:
```bash
python data/play_data.py --data_folder /home/kin/data/twofloor --speed 1 # speed 1 for normal speed, 2 for faster with 2x speed
```

It will pop up a window showing the point cloud data; you can use the mouse to rotate, zoom in/out, and move the view. The terminal shows help information on how to start/stop playback.

<center>
![type:video](https://github.com/user-attachments/assets/158040bd-02ab-4fd4-ab93-2dcacabf342a)
</center>

The axes here show the sensor frame. The video is played in the sensor frame, so you can see the sensor move around in the video.

