Progressively Complementary Network for Fisheye Image Rectification Using Appearance Flow

Introduction

This is a PyTorch implementation of Progressively Complementary Network for Fisheye Image Rectification Using Appearance Flow.


Requirements

  • Linux or Windows
  • Python 3
  • PyTorch 1.5

Dataset

To train the network, first download a perspective image dataset such as Places2 or COCO. Then move the downloaded images to

--data_prepare/picture

and run

python data_prepare/get_dataset.py

to generate the fisheye dataset. The generated fisheye images and the corresponding new GT will be placed in

--dataset/data/train 
--dataset/gt/train  
or 
--dataset/data/test
--dataset/gt/test
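
For reference, the fisheye synthesis can be thought of as warping each perspective image with a radial distortion model. Below is a minimal, hypothetical sketch of such a warp (the function name `make_fisheye` and the single-parameter polynomial model are illustrative only; the actual distortion model and parameter sampling in `data_prepare/get_dataset.py` may differ):

```python
# Hypothetical sketch: warp a perspective image into a fisheye-like image
# with a simple polynomial radial distortion. The real get_dataset.py may
# use a different model and parameter ranges.
import cv2
import numpy as np

def make_fisheye(img, k=0.4):
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    # Pixel grid normalized to [-1, 1] around the image center
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    x = (xs - cx) / cx
    y = (ys - cy) / cy
    r2 = x ** 2 + y ** 2
    # For each output pixel, sample the source at an outward-pushed radius,
    # which compresses the periphery toward the center (barrel distortion)
    factor = 1.0 + k * r2
    map_x = x * factor * cx + cx
    map_y = y * factor * cy + cy
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)

# Usage: fisheye = make_fisheye(cv2.imread("data_prepare/picture/example.jpg"))
```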

Training

Before training, make sure the fisheye images and their corresponding GT images have been placed in

--dataset/data/train
--dataset/gt/train

After that, generate your image lists

python dataset/flist.py

The generated file lists are in

--flist/dataset/train.flist 
--flist/dataset/train_gt.flist 
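
Each .flist file is just a text file with one image path per line. If `dataset/flist.py` does not match your directory layout, you can generate the lists yourself; a minimal sketch (`write_flist` is a hypothetical helper, not part of this repository):

```python
# Minimal sketch: write one image path per line into a .flist file.
# Adjust the input and output paths to match your layout.
import os

def write_flist(image_dir, out_path):
    exts = (".png", ".jpg", ".jpeg")
    paths = sorted(
        os.path.join(image_dir, name)
        for name in os.listdir(image_dir)
        if name.lower().endswith(exts)
    )
    os.makedirs(os.path.dirname(out_path), exist_ok=True)
    with open(out_path, "w") as f:
        f.write("\n".join(paths))

write_flist("dataset/data/train", "flist/dataset/train.flist")
write_flist("dataset/gt/train", "flist/dataset/train_gt.flist")
```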

Finally, train the network with

python train.py
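
For context, the training data is read through the paired .flist files. Below is a rough, hypothetical sketch of such paired loading (`PairedFlistDataset` is illustrative; the actual dataset class, preprocessing, and augmentation in this repository may differ):

```python
# Hedged sketch of a paired fisheye/GT dataset built from .flist files.
# The real data pipeline (resizing, normalization, augmentation) may differ.
from torch.utils.data import Dataset, DataLoader
from PIL import Image
import torchvision.transforms as T

class PairedFlistDataset(Dataset):
    def __init__(self, data_flist, gt_flist, size=256):
        with open(data_flist) as f:
            self.data_paths = f.read().splitlines()
        with open(gt_flist) as f:
            self.gt_paths = f.read().splitlines()
        assert len(self.data_paths) == len(self.gt_paths)
        self.tf = T.Compose([T.Resize((size, size)), T.ToTensor()])

    def __len__(self):
        return len(self.data_paths)

    def __getitem__(self, idx):
        fisheye = self.tf(Image.open(self.data_paths[idx]).convert("RGB"))
        gt = self.tf(Image.open(self.gt_paths[idx]).convert("RGB"))
        return fisheye, gt

# Usage:
# loader = DataLoader(
#     PairedFlistDataset("flist/dataset/train.flist",
#                        "flist/dataset/train_gt.flist"),
#     batch_size=8, shuffle=True)
```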

Testing

If you want to use our pre-trained model, you can download it from Baidu (extraction code: zv83) or Google Drive.

Put the pre-trained model in

--FISH-Net/release_model/pennet4_dataset_square256

Place the test fisheye images and their corresponding GT (real GT is not required, but the GT directory cannot be empty; you can copy the fisheye images there as placeholders) in

--dataset/data/test
--dataset/gt/test

Update the file lists

python dataset/flist.py

and run

python test.py
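
If real GT images are available, the rectification quality can be sanity-checked with PSNR. A minimal sketch, assuming the rectified outputs are saved with the same filenames in a `results/` directory (this path is an assumption; check where `test.py` actually writes its outputs):

```python
# Hedged sketch: compare rectified outputs with GT using PSNR.
# "results/" is a placeholder output directory, not guaranteed by test.py.
import os
import cv2
import numpy as np

def psnr(a, b):
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

gt_dir, out_dir = "dataset/gt/test", "results"
for name in sorted(os.listdir(gt_dir)):
    gt = cv2.imread(os.path.join(gt_dir, name))
    out = cv2.imread(os.path.join(out_dir, name))
    if gt is None or out is None:
        continue
    # Match sizes before comparison in case the network output resolution differs
    out = cv2.resize(out, (gt.shape[1], gt.shape[0]))
    print(name, psnr(gt, out))
```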