ContextualFusion: Context-Based Multi-Sensor Fusion for 3D Object Detection in Adverse Operating Conditions
This repository contains the code and resources for the paper:
"ContextualFusion: Context-Based Multi-Sensor Fusion for 3D Object Detection in Adverse Operating Conditions"
by Shounak Sural, Nishad Sahu, Ragunathan Rajkumar
Published at IEEE Intelligent Vehicles Symposium (IV) 2024, South Korea
[Read the paper]
This project builds on the MIT HAN Lab's BEVFusion repository, which has been extended to implement the ContextualFusion framework.
Access the AdverseOp3D dataset:
Download here
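After downloading, the dataset needs to be placed where the training and evaluation scripts can find it. The snippet below is a minimal sketch of one possible setup; the archive name and the `data/` target directory are assumptions based on the nuScenes-style layout used by the base BEVFusion repository, so adjust them to match the actual download.

```bash
# Hypothetical placement sketch: the archive name and data/ directory are
# assumptions inferred from the BEVFusion data layout, not confirmed paths.
mkdir -p data
unzip AdverseOp3D.zip -d data/
```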
Pretrained models are available for download:
Download models
- For night-time evaluation, use the model `CF_Night_trained_NuScenes.pth` (see the placement snippet below).
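The evaluation command below loads the checkpoint from `models/CF_Night_trained_NuScenes.pth`, so the downloaded file should be placed accordingly. This is a sketch; only the filename and the `models/` path are taken from this README.

```bash
# Move the downloaded checkpoint to the models/ path referenced by the
# evaluation command below.
mkdir -p models
mv CF_Night_trained_NuScenes.pth models/
```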
To run the evaluation on the NuScenes dataset at night-time, use the following command:
```bash
torchpack dist-run -np 2 python tools/test.py \
    configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/convfuser.yaml \
    models/CF_Night_trained_NuScenes.pth --eval bbox
```
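The `-np` flag of `torchpack dist-run` sets the number of worker processes, which should normally match the number of available GPUs. For example, a single-GPU run would look like the following (an untested sketch; only `-np` is changed from the command above):

```bash
# Single-process variant of the same evaluation, for machines with one GPU.
torchpack dist-run -np 1 python tools/test.py \
    configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/convfuser.yaml \
    models/CF_Night_trained_NuScenes.pth --eval bbox
```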
If you find this project useful, please cite the paper using the following BibTeX entry:
```bibtex
@INPROCEEDINGS{10588584,
  author={Sural, Shounak and Sahu, Nishad and Rajkumar, Ragunathan Raj},
  booktitle={2024 IEEE Intelligent Vehicles Symposium (IV)},
  title={ContextualFusion: Context-Based Multi-Sensor Fusion for 3D Object Detection in Adverse Operating Conditions},
  year={2024},
  pages={1534-1541},
  keywords={Solid modeling;Three-dimensional displays;Laser radar;Lighting;Object detection;Logic gates;Cameras;Autonomous Vehicles;3D Object Detection;Night-time Perception;Adverse Weather;Contextual Fusion},
  doi={10.1109/IV55156.2024.10588584}
}
```