Akshay Rangesh, Bowen Zhang and Mohan Trivedi, "Driver Gaze Estimation in the Real World: Overcoming the Eyeglass Challenge," IEEE Intelligent Vehicles Symposium (IV), 2020.

Gaze Preserving CycleGAN (GPCycleGAN)

PyTorch implementation for the training procedure described in Driver Gaze Estimation in the Real World: Overcoming the Eyeglass Challenge.

Parts of the CycleGAN code have been adapted from the PyTorch-CycleGAN repository.

Installation

  1. Clone this repository.
  2. Install Pipenv:
pip3 install pipenv
  3. Install all requirements and dependencies in a new virtual environment using Pipenv:
cd GPCycleGAN
pipenv install
  4. Get the links for the desired PyTorch and Torchvision wheels from here and install them in the Pipenv virtual environment as follows:
pipenv install https://download.pytorch.org/whl/cu100/torch-1.2.0-cp36-cp36m-manylinux1_x86_64.whl
pipenv install https://download.pytorch.org/whl/cu100/torchvision-0.3.0-cp36-cp36m-linux_x86_64.whl

Dataset

  1. Download the complete IR dataset for driver gaze classification using this link.
  2. Unzip the file.
  3. Prepare the train, val and test splits as follows:
python prepare_gaze_data.py --dataset-dir=/path/to/lisat_gaze_data
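Before moving on to training, it can help to confirm the splits were actually written. The sketch below assumes prepare_gaze_data.py produces train, val and test subfolders under the dataset directory (the folder names are an assumption; adjust them if your local layout differs):

```python
from pathlib import Path

def check_gaze_splits(dataset_dir):
    """Return which of the expected split folders are missing under dataset_dir.

    Assumes prepare_gaze_data.py writes train/val/test subdirectories
    (hypothetical layout); an empty result means all splits are in place.
    """
    expected = ("train", "val", "test")
    return [s for s in expected if not (Path(dataset_dir) / s).is_dir()]
```

Run it against /path/to/lisat_gaze_data after step 3; an empty list means all three splits exist.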

Training

The prescribed three-step training procedure for the classification network can be carried out as follows:

Step 1: Train the gaze classifier on images without eyeglasses

pipenv shell # activate virtual environment
python gazenet.py --dataset-root-path=/path/to/lisat_gaze_data/all_data/ --version=1_1 --snapshot=./weights/squeezenet1_1_imagenet.pth --random-transforms

Step 2: Train the GPCycleGAN model using the gaze classifier from Step 1

python gpcyclegan.py --dataset-root-path=/path/to/lisat_gaze_data/ --version=1_1 --snapshot-dir=/path/to/trained/gaze-classifier/directory/ --random-transforms

Step 3.1: Create fake images using the trained GPCycleGAN model

python create_fake_images.py --dataset-root-path=/path/to/lisat_gaze_data/all_data/ --version=1_1 --snapshot-dir=/path/to/trained/gpcyclegan/directory/
cp /path/to/lisat_gaze_data/all_data/mean_std.mat /path/to/fake_data/mean_std.mat # copy over dataset mean/std information to fake data folder
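The mean_std.mat copy matters because the finetuned classifier should normalize inputs with the real dataset's statistics, not statistics recomputed on fake images. As a sketch of that normalization (the function name is hypothetical; the actual arrays could be read from mean_std.mat with scipy.io.loadmat, with key names depending on how the file was saved):

```python
import numpy as np

def normalize_image(img, mean, std):
    """Per-channel standardization sketch.

    img: HxWxC float array; mean, std: length-C arrays (e.g. loaded from
    mean_std.mat -- the exact keys inside that file are an assumption here).
    """
    img = np.asarray(img, dtype=np.float32)
    return (img - np.asarray(mean, dtype=np.float32)) / np.asarray(std, dtype=np.float32)
```
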

Step 3.2: Finetune the gaze classifier on all fake images

python gazenet-ft.py --dataset-root-path=/path/to/fake_data/ --version=1_1 --snapshot-dir=/path/to/trained/gaze-classifier/directory/ --random-transforms
exit # exit virtual environment

Inference

Inference can be carried out using this script as follows:

pipenv shell # activate virtual environment
python infer.py --dataset-root-path=/path/to/lisat_gaze_data/all_data/ --split=test --version=1_1 --snapshot-dir=/path/to/trained/models/directory/
exit # exit virtual environment

You can download our pre-trained (GPCycleGAN + gaze classifier) weights using this link.

Config files, logs, results and snapshots from running the above scripts will be stored in the GPCycleGAN/experiments folder by default.
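To see what a run produced, you can list everything saved under that folder. A minimal sketch, assuming only that outputs are written as files somewhere below GPCycleGAN/experiments (the exact subfolder layout depends on which scripts you ran):

```python
from pathlib import Path

def list_experiment_artifacts(exp_root="experiments"):
    """Recursively list files (configs, logs, results, snapshots) under the
    experiments folder. Returns paths relative to exp_root, sorted."""
    root = Path(exp_root)
    if not root.is_dir():
        return []
    return sorted(str(p.relative_to(root)) for p in root.rglob("*") if p.is_file())
```
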
