ContextualFusion: Context-Based Multi-Sensor Fusion for 3D Object Detection in Adverse Operating Conditions

This repository contains the code and resources for the paper:

"ContextualFusion: Context-Based Multi-Sensor Fusion for 3D Object Detection in Adverse Operating Conditions"
by Shounak Sural, Nishad Sahu, Ragunathan Rajkumar
Published at IEEE Intelligent Vehicles Symposium (IV) 2024, South Korea
[Read the paper]


Overview

This project builds on the MIT HAN Lab BEVFusion repository, which has been extended to implement the ContextualFusion framework.

AdverseOp3D Dataset

Access the AdverseOp3D dataset:
Download here

Pretrained Models

Pretrained models are available for download:
Download models

  • For night-time evaluation, use the model: CF_Night_trained_NuScenes.pth

Evaluation Command

To run night-time evaluation on the nuScenes dataset, use the following command:

torchpack dist-run -np 2 python tools/test.py configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/convfuser.yaml models/CF_Night_trained_NuScenes.pth --eval bbox
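The command above assumes the config file and the downloaded checkpoint sit at those exact relative paths inside the repository. Before launching a multi-GPU run, it can help to confirm both files are in place; the helper below is a small illustrative sketch (not part of this repository) that reports any missing inputs:

```python
from pathlib import Path

def check_eval_inputs(config_path: str, checkpoint_path: str) -> list[str]:
    """Return the list of input files that are missing (empty means ready to run)."""
    missing = []
    for p in (config_path, checkpoint_path):
        if not Path(p).is_file():
            missing.append(p)
    return missing

# Paths taken from the evaluation command above; the checkpoint must be
# downloaded from the pretrained-models link first.
missing = check_eval_inputs(
    "configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/convfuser.yaml",
    "models/CF_Night_trained_NuScenes.pth",
)
if missing:
    print("Missing files:", ", ".join(missing))
```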

Citation

If you find this project useful, please cite the paper in the following format:

@INPROCEEDINGS{10588584,
  author={Sural, Shounak and Sahu, Nishad and Rajkumar, Ragunathan Raj},
  booktitle={2024 IEEE Intelligent Vehicles Symposium (IV)}, 
  title={ContextualFusion: Context-Based Multi-Sensor Fusion for 3D Object Detection in Adverse Operating Conditions}, 
  year={2024},
  volume={},
  number={},
  pages={1534-1541},
  keywords={Solid modeling;Three-dimensional displays;Laser radar;Lighting;Object detection;Logic gates;Cameras;Autonomous Vehicles;3D Object Detection;Night-time Perception;Adverse Weather;Contextual Fusion},
  doi={10.1109/IV55156.2024.10588584}
}
