Official PyTorch implementation of "Learning similarity and dissimilarity in 3D face masked with PointMLP, PointNet++ and PointNet triplet network"
A PyTorch implementation of the "Learning similarity and dissimilarity in 3D face masked with PointMLP, PointNet++ and PointNet triplet network" paper for training a point-cloud-based facial recognition model using [Triplet loss][1]. Training is done on the Bosphorus dataset, pre-processed and augmented using only normal-expression, frontally oriented scans, combined with our in-house dataset, named the D415 dataset. PointNet, PointNet++ and PointMLP with different modifications are introduced to train on a small-VRAM (6 GB) graphics card using a gradient accumulation technique.
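To illustrate the gradient accumulation idea, here is a minimal, self-contained sketch. The toy embedding model, fake point-cloud batches, and `accumulation_steps` value are placeholders for illustration only, not the networks or loaders used by this repo's training scripts:

```python
import torch
import torch.nn as nn

# Minimal sketch of gradient accumulation with a triplet loss (illustrative only;
# the model and data below are placeholders, not the networks used in this repo).
model = nn.Sequential(nn.Flatten(), nn.Linear(1024 * 3, 128))  # toy embedding net
criterion = nn.TripletMarginLoss(margin=0.2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
accumulation_steps = 4  # effective batch = per-step batch * accumulation_steps

# Fake triplet batches: (anchor, positive, negative) point clouds of 1024 points.
batches = [tuple(torch.randn(8, 1024, 3) for _ in range(3)) for _ in range(8)]

optimizer.zero_grad()
for step, (anchor, positive, negative) in enumerate(batches):
    emb_a, emb_p, emb_n = model(anchor), model(positive), model(negative)
    loss = criterion(emb_a, emb_p, emb_n) / accumulation_steps  # scale for averaging
    loss.backward()                      # gradients accumulate across mini-batches
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()                 # one optimizer update per accumulated batch
        optimizer.zero_grad()
```

This is how gradient accumulation keeps memory usage within a 6 GB card: each backward pass only holds activations for a small mini-batch, while updates are applied at the effective (larger) batch size.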
Because of the regulations governing the dataset, we cannot provide it here. However, we do provide the code that builds the combined dataset used in this experiment.
Please let me know if you find mistakes or errors, or if you have ideas for improving the code or future training experiments. Feedback is greatly appreciated.
Operating System: Ubuntu 18.04 (you may face issues importing the packages from the requirements.yml file if your OS differs).
- Project page with the dataset download for training
- Logs and pretrained models downloadable from the project page
- Journal publication
- Paper/code release
- Testing code for face recognition
# step 1. clone this repo
git clone https://github.com/azhadzuraimi/3D-face-masked-recognition.git
cd 3D-face-masked-recognition
# step 2. create a conda virtual environment and activate it
conda create --name <environment_name> --file requirements.txt
conda activate <environment_name>
# Alternative to step 2: install the packages step by step
conda create -n <environment_name> python=3.7 -y
conda activate <environment_name>
conda install pytorch==1.10.1 torchvision==0.11.2 cudatoolkit=10.2 -c pytorch -y
pip install cycler einops h5py pyyaml==5.4.1 scikit-learn==0.24.2 scipy tqdm matplotlib==3.4.2
pip install pointnet2_ops_lib/.
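After installation, a quick sanity check can confirm that PyTorch, CUDA, and the compiled PointNet++ ops are available. The `pointnet2_ops` module name is an assumption based on how `pointnet2_ops_lib` is typically packaged; adjust the import if your local package differs:

```python
import torch

# Quick environment check: PyTorch version and CUDA availability.
print(torch.__version__, torch.cuda.is_available())

# The compiled PointNet++ ops from pointnet2_ops_lib are typically importable
# as `pointnet2_ops` (an assumption; adjust if the local package name differs).
from pointnet2_ops import pointnet2_utils
print(pointnet2_utils.furthest_point_sample)
```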
Link to download the pre-trained models using Triplet Loss is under development.
Notes:
- Training triplets are generated at the beginning of each epoch and saved in the 'datasets/generated_triplets' directory as numpy files. These can be loaded at the start of an epoch to begin training without repeating the triplet generation step (see the --training_triplets_path argument and the sketch after these notes).
- Each triplet batch is constrained to a fixed number of human identities (see the --num_human_identities_per_batch argument).
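As a rough sketch of how pre-generated triplets can be cached and reloaded: the file name and array layout below are hypothetical; only the 'datasets/generated_triplets' directory and the --training_triplets_path flag come from the notes above.

```python
import os
import numpy as np

# Hypothetical layout: one row per triplet (anchor, positive, negative sample indices).
triplets = np.array([
    [0, 1, 42],
    [0, 2, 99],
])
os.makedirs("datasets/generated_triplets", exist_ok=True)
np.save("datasets/generated_triplets/epoch_1_triplets.npy", triplets)

# A file like this could later be passed via --training_triplets_path
# to start an epoch without regenerating triplets from scratch.
loaded = np.load("datasets/generated_triplets/epoch_1_triplets.npy")
print(loaded.shape)
```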
- [1] Florian Schroff, Dmitry Kalenichenko, James Philbin, "FaceNet: A Unified Embedding for Face Recognition and Clustering", arXiv:1503.03832.
Our implementation is mainly based on the following codebases. We gratefully thank the authors for their wonderful works.
- facenet-pytorch-glint360k
- pointMLP-pytorch
- Pointnet2_PyTorch
- Pointnet_pytorch
- NVIDIA GTX 1660 Ti graphics card (6 GB VRAM).
- Intel Core i5-10400 CPU.
- 32 GB DDR4 RAM at 3600 MHz.
3D-face-masked-recognition is released under the Apache-2.0 license.