ActFormer: Scalable Collaborative Perception via Active Queries

Official code for the paper "ActFormer: Scalable Collaborative Perception via Active Queries", accepted to the 2024 IEEE International Conference on Robotics and Automation (ICRA 2024). The current code is a preliminary version and will be updated as soon as possible. If you have any inquiries, feel free to contact [email protected]

Abstract

Collaborative perception leverages rich visual observations from multiple robots to extend a single robot's perception ability beyond its field of view. Many prior works receive messages broadcast from all collaborators, leading to a scalability challenge when dealing with a large number of robots and sensors.

In this work, we aim to address scalable camera-based collaborative perception with a Transformer-based architecture. Our key idea is to enable a single robot to intelligently discern the relevance of the collaborators and their associated cameras according to a learned spatial prior. This proactive understanding of the visual features' relevance does not require the transmission of the features themselves, enhancing both communication and computation efficiency. Specifically, we present ActFormer, a Transformer that learns bird's eye view (BEV) representations by using predefined BEV queries to interact with multi-robot multi-camera inputs. Each BEV query can actively select relevant cameras for information aggregation based on pose information, instead of interacting with all cameras indiscriminately. Experiments on the V2X-Sim dataset demonstrate that ActFormer improves the detection performance from 29.89% to 45.15% in terms of AP@0.5 with about 50% fewer queries, showcasing the effectiveness of ActFormer in multi-agent collaborative 3D object detection.

Methods

(Figure: overview of the ActFormer method.)
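
Below is a minimal, illustrative sketch (not the official implementation) of the active camera-selection idea described in the abstract: each BEV query projects its 3D reference point into every candidate camera across all robots and keeps only the cameras whose frustum actually covers that point, so features are requested only from relevant cameras. All function names, tensor shapes, and thresholds here are assumptions for illustration.

```python
# Hypothetical sketch of per-query camera selection; not the official ActFormer code.
import torch

def select_relevant_cameras(ref_points, cam_extrinsics, cam_intrinsics, img_hw):
    """
    ref_points:     (Q, 3)    3D reference point of each BEV query (world frame)
    cam_extrinsics: (C, 4, 4) world-to-camera transforms, all robots' cameras stacked
    cam_intrinsics: (C, 3, 3) pinhole intrinsics
    img_hw:         (H, W)    image resolution
    returns:        (Q, C)    boolean mask, True where camera c is relevant to query q
    """
    Q, C = ref_points.shape[0], cam_extrinsics.shape[0]
    H, W = img_hw

    # Homogeneous world coordinates, broadcast over all cameras: (Q, C, 4, 1)
    pts = torch.cat([ref_points, torch.ones(Q, 1)], dim=-1)
    pts = pts[:, None, :, None].expand(Q, C, 4, 1)

    # World -> camera -> image plane
    cam_pts = cam_extrinsics[None] @ pts                       # (Q, C, 4, 1)
    xyz = cam_pts[..., :3, 0]                                  # (Q, C, 3)
    depth = xyz[..., 2]
    uvw = (cam_intrinsics[None] @ xyz.unsqueeze(-1)).squeeze(-1)
    uv = uvw[..., :2] / uvw[..., 2:3].clamp(min=1e-5)

    # A camera is kept only if the query's reference point lies in front of it
    # and projects inside the image; only these cameras are queried for features.
    in_front = depth > 0.1
    in_image = (uv[..., 0] >= 0) & (uv[..., 0] < W) & (uv[..., 1] >= 0) & (uv[..., 1] < H)
    return in_front & in_image

if __name__ == "__main__":
    # Toy example: 100 BEV queries, 2 robots x 6 cameras with placeholder poses.
    mask = select_relevant_cameras(
        ref_points=torch.rand(100, 3) * 50.0,
        cam_extrinsics=torch.eye(4).repeat(12, 1, 1),
        cam_intrinsics=torch.eye(3).repeat(12, 1, 1),
        img_hw=(900, 1600),
    )
    print(mask.shape, mask.float().mean())  # fraction of (query, camera) pairs kept
```

In the actual model this relevance is learned from pose information rather than decided by a hard geometric test; the sketch only conveys the interface of filtering (query, camera) pairs before feature aggregation.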

Requirements

The environment requirements are the same as those of BEVFormer.

See details in Install requirements.

Data

Download the V2X-Sim dataset.

See details in Data Access.

Train & Test

See train.py and test.py.

See training, testing, and visualization details in Start.
