PyTorch code for our lane line detection model (preview version)


Note: this is currently a preview release.

Dynamic Lane Feature Network: Multi-Scale Dynamic Weighted Lane Feature Network for Complex Scenes

PyTorch implementation of the paper "Dynamic Lane Feature Network: Multi-Scale Dynamic Weighted Lane Feature Network for Complex Scenes".

Introduction

Architecture

  • DLFNet builds on the BiFPN concept and on the way humans perceive and reason about lane lines in the real world, integrating global semantic information with local feature details.
  • On CULane and TuSimple, its performance is superior, especially at high IoU thresholds.

Installation

Prerequisites

Only tested on Ubuntu 18.04 and 20.04 with:

  • Python >= 3.8 (tested with Python 3.8)
  • PyTorch >= 1.6 (tested with PyTorch 1.6)
  • CUDA (tested with CUDA 10.2)
  • Other dependencies described in requirements.txt
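The Python floor above can be checked from inside the environment (a trivial sanity check, not part of the repo):

```python
import sys

# The README requires Python >= 3.8.
assert sys.version_info >= (3, 8), f"Python 3.8+ required, found {sys.version.split()[0]}"
print("Python version OK:", sys.version.split()[0])
```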

Clone this repository

Clone this code to your workspace. We refer to this directory as $DLFNET_ROOT.

git clone https://github.com/EADMO/DLFNet.git

Create a conda virtual environment and activate it (conda is optional)

conda create -n dlfnet python=3.8 -y
conda activate dlfnet

Install dependencies

# Install PyTorch first; the cudatoolkit version should match the CUDA version on your system.

conda install pytorch torchvision cudatoolkit=10.1 -c pytorch

# Or you can install via pip
pip install torch==1.8.0 torchvision==0.9.0

# Install the remaining Python packages and build this project
pip install -r requirements.txt
python setup.py build develop

Data preparation

CULane

Download CULane and extract it to $CULANEROOT, then create a symlink in the data directory.

cd $DLFNET_ROOT
mkdir -p data
ln -s $CULANEROOT data/CULane

For CULane, the directory structure should look like this:

$CULANEROOT/driver_xx_xxframe    # data folders x6
$CULANEROOT/laneseg_label_w16    # lane segmentation labels
$CULANEROOT/list                 # data lists
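A small helper can verify this layout before training (a hypothetical sketch, not part of the repo; it checks only the entries listed above, matching the six driver folders by prefix):

```python
from pathlib import Path

def check_culane_layout(root):
    """Return the expected CULane entries that are missing under `root`.

    Hypothetical helper, not part of the repo: it checks only the entries
    named in this README, matching the driver_* folders by prefix.
    """
    root = Path(root)
    missing = []
    if not any(root.glob("driver_*")):
        missing.append("driver_xx_xxframe folders")
    for name in ("laneseg_label_w16", "list"):
        if not (root / name).is_dir():
            missing.append(name)
    return missing
```

Run it against $CULANEROOT (or data/CULane after linking); an empty list means the layout matches.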

TuSimple

Download TuSimple and extract it to $TUSIMPLEROOT, then create a symlink in the data directory.

cd $DLFNET_ROOT
mkdir -p data
ln -s $TUSIMPLEROOT data/tusimple

For TuSimple, the directory structure should look like this:

$TUSIMPLEROOT/clips # data folders
$TUSIMPLEROOT/label_data_xxxx.json # label json files x4
$TUSIMPLEROOT/test_tasks_0627.json # test tasks json file
$TUSIMPLEROOT/test_label.json # test label json file

TuSimple does not provide segmentation annotations, so we need to generate segmentation labels from the JSON annotations.

python tools/generate_seg_tusimple.py --root $TUSIMPLEROOT
# this will generate seg_label directory
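The idea behind the generation step can be sketched as follows: each line of a TuSimple label file lists, per lane, the x coordinate at every sampled row in h_samples (with -2 meaning no point at that row), and these points are rasterized into a per-pixel lane-id mask. This is a simplified illustration; the repo's tools/generate_seg_tusimple.py presumably draws full polylines with some thickness:

```python
import json

def tusimple_label_to_seg(label_line, height=720, width=1280):
    """Rasterize one TuSimple JSON label line into a per-pixel lane-id mask.

    Simplified sketch only: it marks just the sampled points rather than
    drawing connected, thickened polylines as a real generator would.
    """
    rec = json.loads(label_line)
    mask = [[0] * width for _ in range(height)]
    # Lanes are numbered from 1; 0 stays background.
    for lane_id, xs in enumerate(rec["lanes"], start=1):
        for x, y in zip(xs, rec["h_samples"]):
            if x >= 0:  # TuSimple uses -2 for "no lane point at this row"
                mask[y][int(x)] = lane_id
    return mask
```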

Getting Started

Training

For training, run

python main.py [configs/path_to_your_config] --gpus [gpu_num]

For example, run

python main.py configs/resnet18_culane.py --gpus 0

Validation

For testing, run

python main.py [configs/path_to_your_config] --[test|validate] --load_from [path_to_your_model] --gpus [gpu_num]

For example, run

python main.py configs/dla34_culane.py --validate --load_from culane_dla34.pth --gpus 0

At present, this code can also output visualizations of the results and the ground truth. Simply add --view or --view_gt to the command; the visualizations are saved under work_dirs/xxx/xxx/visualization.

Results


CULane

| Backbone   | mF1   | F1@50 | F1@75 |
|------------|-------|-------|-------|
| ResNet-18  | 55.23 | 79.58 | 62.21 |
| ResNet-34  | 55.14 | 79.73 | 62.11 |
| ResNet-101 | 55.55 | 80.13 | 62.96 |
| DLA-34     | 55.64 | 80.47 | 62.78 |

TuSimple

| Backbone   | F1    | Acc   | FDR  | FNR  |
|------------|-------|-------|------|------|
| ResNet-18  | 97.89 | 96.84 | 2.28 | 1.92 |
| ResNet-34  | 97.82 | 96.87 | 2.27 | 2.08 |
| ResNet-101 | 97.62 | 96.83 | 2.37 | 2.38 |
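As a consistency check on the TuSimple numbers, F1 can be recovered from FDR and FNR, since precision = 1 − FDR and recall = 1 − FNR. This is just arithmetic on the table above, not code from the repo:

```python
def f1_from_fdr_fnr(fdr_pct, fnr_pct):
    """Recover F1 (%) from false discovery rate and false negative rate (%)."""
    precision = 1.0 - fdr_pct / 100.0  # FDR = FP / (TP + FP) = 1 - precision
    recall = 1.0 - fnr_pct / 100.0     # FNR = FN / (TP + FN) = 1 - recall
    return 100.0 * 2 * precision * recall / (precision + recall)

# ResNet-18 row: FDR=2.28, FNR=1.92 gives ~97.90, matching F1=97.89 to rounding
print(round(f1_from_fdr_fnr(2.28, 1.92), 2))
```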

Acknowledgement
