# Evaluation Toolbox for LiDAR Generation

This directory is a self-contained, memory-friendly, and mostly CUDA-accelerated toolbox of evaluation metrics for LiDAR generative models, including:

- **Perceptual metrics** (proposed in our work; see the sketch after this list):
  - Fréchet Range Image Distance (FRID)
  - Fréchet Sparse Volume Distance (FSVD)
  - Fréchet Point-based Volume Distance (FPVD)
- **Statistical metrics** (proposed in *Learning Representations and Generative Models for 3D Point Clouds*):
  - Minimum Matching Distance (MMD)
  - Jensen-Shannon Divergence (JSD)
- **Statistical pairwise metrics** (for reconstruction only):
  - Chamfer Distance (CD)
  - Earth Mover's Distance (EMD)
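
All three perceptual metrics compute a Fréchet distance between Gaussians fitted to deep features of the reference and generated data (as in FID), differing only in the backbone that extracts the features (see the Model Zoo below). For orientation, here is a minimal sketch of that distance, together with the Chamfer distance used for reconstruction; the function names are illustrative, not the toolbox API:

```python
import numpy as np
import torch
from scipy import linalg


def frechet_distance(feat_a: np.ndarray, feat_b: np.ndarray) -> float:
    """Fréchet distance between Gaussians fit to two (N, D) feature sets."""
    mu_a, mu_b = feat_a.mean(axis=0), feat_b.mean(axis=0)
    cov_a = np.cov(feat_a, rowvar=False)
    cov_b = np.cov(feat_b, rowvar=False)
    covmean = linalg.sqrtm(cov_a @ cov_b)  # matrix square root of the product
    if np.iscomplexobj(covmean):           # drop tiny imaginary numerical noise
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))


def chamfer_distance(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between point clouds x (N, 3) and y (M, 3)."""
    d = torch.cdist(x, y)  # (N, M) pairwise Euclidean distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()
```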

## Citation

If you find this project useful in your research, please consider citing:

```bibtex
@article{ran2024towards,
  title={Towards Realistic Scene Generation with LiDAR Diffusion Models},
  author={Ran, Haoxi and Guizilini, Vitor and Wang, Yue},
  journal={arXiv preprint arXiv:2404.00815},
  year={2024}
}
```

## Dependencies

Basic (install through pip):

- `scipy`
- `numpy`
- `torch`
- `pyyaml`

Required by FSVD and FPVD:

## Model Zoo

To evaluate with perceptual metrics on different types of LiDAR data, you can download either:

- all models at once, or
- the full directory of one specific model:
**64-beam LiDAR (trained on SemanticKITTI):**

| Metric | Model | Arch | Link | Code | Comments |
| --- | --- | --- | --- | --- | --- |
| FRID | RangeNet++ | DarkNet21-based UNet | Google Drive | `./models/rangenet/model.py` | range image input (our trained model, without the need of remission input) |
| FSVD | MinkowskiNet | Sparse UNet | Google Drive | `./models/minkowskinet/model.py` | point cloud input |
| FPVD | SPVCNN | Point-Voxel Sparse UNet | Google Drive | `./models/spvcnn/model.py` | point cloud input |

**32-beam LiDAR (trained on nuScenes):**

| Metric | Model | Arch | Link | Code | Comments |
| --- | --- | --- | --- | --- | --- |
| FSVD | MinkowskiNet | Sparse UNet | Google Drive | `./models/minkowskinet/model.py` | point cloud input |
| FPVD | SPVCNN | Point-Voxel Sparse UNet | Google Drive | `./models/spvcnn/model.py` | point cloud input |

## Usage

1. Place the unzipped `pretrained_weights` folder under the root Python directory, or modify the `DEFAULT_ROOT` variable in `__init__.py`.
2. Prepare the input data: the synthesized samples and the reference dataset. Note: the reference data should be point clouds projected back from range images, not raw point clouds (a minimal projection sketch follows this list).
3. Specify the data type (`32` or `64`) and the metrics to evaluate. Options: `mmd`, `jsd`, `frid`, `fsvd`, `fpvd`, `cd`, `emd`.
4. (Optional) To compute `frid`, `fsvd`, or `fpvd`, adjust the per-metric batch size through `MODAL2BATCHSIZE` in `__init__.py` according to your maximum GPU memory (default: ~24 GB).
5. Run the evaluation; all results will be printed.
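
Since step 2 requires point clouds that round-trip through the range-image representation, here is a minimal sketch of the spherical back-projection, assuming a sensor with the common SemanticKITTI vertical field of view (+3° up, -25° down); the exact parameters and image layout of your data may differ, so treat this as illustrative rather than the toolbox's own projection code:

```python
import numpy as np


def range_image_to_points(range_img: np.ndarray,
                          fov_up_deg: float = 3.0,
                          fov_down_deg: float = -25.0) -> np.ndarray:
    """Back-project an (H, W) range image into an (N, 3) xyz point cloud.

    Assumes the usual spherical layout: rows span elevation from fov_up
    down to fov_down, columns span azimuth from +pi to -pi.
    """
    h, w = range_img.shape
    fov_up = np.radians(fov_up_deg)
    fov_down = np.radians(fov_down_deg)
    # per-pixel azimuth (yaw) and elevation (pitch), broadcast to (H, W)
    yaw = np.linspace(np.pi, -np.pi, w, endpoint=False)[None, :]
    pitch = np.linspace(fov_up, fov_down, h)[:, None]
    r = range_img
    x = r * np.cos(pitch) * np.cos(yaw)
    y = r * np.cos(pitch) * np.sin(yaw)
    z = r * np.sin(pitch)
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3).astype(np.float32)
    return pts[r.reshape(-1) > 0]  # drop empty pixels
```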

Example:

```python
from .eval_utils import evaluate

data = '64'  # specify data type to evaluate: '32' or '64'
metrics = ['mmd', 'jsd', 'frid', 'fsvd', 'fpvd']  # specify metrics to evaluate

# reference and samples: lists of np.float32 arrays,
# each of shape (#points, #dim=3) holding xyz coordinates
# (NOTE: no need to input remission)
reference = ...
samples = ...

evaluate(reference, samples, metrics, data)
```
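
For instance, if each scan is stored as a KITTI-style `.bin` file (float32 `x, y, z, remission` per point), the two lists could be assembled as follows; the paths and the `load_bin_scan` helper are illustrative, not part of the toolbox:

```python
import glob

import numpy as np


def load_bin_scan(path: str) -> np.ndarray:
    """Load a KITTI-style .bin scan and keep only the xyz columns."""
    pts = np.fromfile(path, dtype=np.float32).reshape(-1, 4)
    return pts[:, :3]


reference = [load_bin_scan(p) for p in sorted(glob.glob('reference/*.bin'))]
samples = [load_bin_scan(p) for p in sorted(glob.glob('samples/*.bin'))]
```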

## Acknowledgement