Where2comm: Communication-Efficient Collaborative Perception via Spatial Confidence Maps
Yue Hu, Shaoheng Fang, Zixing Lei, Yiqi Zhong, Siheng Chen
Presented at NeurIPS 2022 (Spotlight)
See our code at https://github.com/MediaBrain-SJTU/Where2comm.
Abstract: Multi-agent collaborative perception can significantly improve perception performance by enabling agents to share complementary information with each other through communication. However, this inevitably leads to a fundamental trade-off between perception performance and communication bandwidth. To tackle this bottleneck, we propose a spatial confidence map, which reflects the spatial heterogeneity of perceptual information. It empowers agents to share only spatially sparse, yet perceptually critical information, contributing to deciding where to communicate.
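To make the idea concrete, below is a minimal sketch of confidence-map-based sparse feature selection. It is not the repository's actual API; the function name `select_sparse_features`, the `keep_ratio` parameter, and the tensor shapes are illustrative assumptions. Only the underlying idea, transmitting the top-scoring BEV locations instead of the full feature map, comes from the paper.

```python
# Minimal sketch (hypothetical API, not the repository's implementation) of the
# Where2comm idea: score every spatial location of an agent's BEV feature map
# with a confidence map, then transmit only the most informative locations.
import torch


def select_sparse_features(bev_feat: torch.Tensor,
                           confidence_map: torch.Tensor,
                           keep_ratio: float = 0.1):
    """Keep only the top-`keep_ratio` most confident spatial cells.

    bev_feat:        (C, H, W) BEV feature map of one agent.
    confidence_map:  (H, W) per-location confidence, e.g. from a detection head.
    Returns the sparse features, their flat indices, and the binary mask.
    """
    H, W = confidence_map.shape
    num_keep = max(1, int(keep_ratio * H * W))

    # Rank spatial locations by confidence and keep the top-k.
    flat_conf = confidence_map.flatten()
    topk_idx = torch.topk(flat_conf, num_keep).indices

    # Binary selection mask over the BEV grid.
    mask = torch.zeros(H * W, dtype=torch.bool)
    mask[topk_idx] = True
    mask = mask.view(H, W)

    # Only these features would be packed and sent to collaborators,
    # which is what keeps the communication volume low.
    sparse_feat = bev_feat[:, mask]  # (C, num_keep)
    return sparse_feat, topk_idx, mask


if __name__ == "__main__":
    feat = torch.randn(64, 100, 100)   # toy BEV features
    conf = torch.rand(100, 100)        # toy confidence map
    sparse_feat, idx, mask = select_sparse_features(feat, conf, keep_ratio=0.05)
    print(sparse_feat.shape, mask.float().mean().item())
```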
Dataset Support
- DAIR-V2X
- OPV2V
- V2X-Sim 2.0
Supported SOTA collaborative perception methods
- Where2comm [NeurIPS2022]
- V2VNet [ECCV2020]
- DiscoNet [NeurIPS2021]
- V2X-ViT [ECCV2022]
- When2com [CVPR2020]
- Late Fusion
- Early Fusion
Visualization
- BEV visualization
- 3D visualization
If you find this code useful in your research, please cite:
@inproceedings{Where2comm:22,
  author    = {Hu, Yue and Fang, Shaoheng and Lei, Zixing and Zhong, Yiqi and Chen, Siheng},
  title     = {Where2comm: Communication-Efficient Collaborative Perception via Spatial Confidence Maps},
  booktitle = {Thirty-sixth Conference on Neural Information Processing Systems (NeurIPS)},
  month     = {November},
  year      = {2022}
}
Thanks to the excellent cooperative perception codebases OpenCOOD and CoPerception.
Thanks to the excellent cooperative perception datasets DAIR-V2X, OPV2V, and V2X-Sim.
Thanks to YiFan Lu for the dataset and code support.
Thanks to the insightful previous works in the cooperative perception field:
- V2VNet: Vehicle-to-Vehicle Communication for Joint Perception and Prediction, ECCV 2020 [Paper]
- When2com: Multi-Agent Perception via Communication Graph Grouping, CVPR 2020 [Paper] [Code]
- Who2com: Collaborative Perception via Learnable Handshake Communication, ICRA 2020 [Paper]
- Learning Distilled Collaboration Graph for Multi-Agent Perception, NeurIPS 2021 [Paper] [Code]
- V2X-Sim: A Virtual Collaborative Perception Dataset and Benchmark for Autonomous Driving, RA-L 2021 [Paper] [Website] [Code]
- OPV2V: An Open Benchmark Dataset and Fusion Pipeline for Perception with Vehicle-to-Vehicle Communication, ICRA 2022 [Paper] [Website] [Code]
- V2X-ViT: Vehicle-to-Everything Cooperative Perception with Vision Transformer, ECCV 2022 [Paper] [Code] [Talk]
- Self-Supervised Collaborative Scene Completion: Towards Task-Agnostic Multi-Robot Perception, CoRL 2022 [Paper]
- CoBEVT: Cooperative Bird's Eye View Semantic Segmentation with Sparse Transformers, CoRL 2022 [Paper] [Code]
- DAIR-V2X: A Large-Scale Dataset for Vehicle-Infrastructure Cooperative 3D Object Detection, CVPR 2022 [Paper] [Website] [Code]
If you have any problem with this code, please feel free to contact [email protected].