Real-time Face Recognition with a Lightweight Model

This repository implements a real-time face recognition system using a lightweight model based on the GhostFaceNetV2 architecture. The model delivers near state-of-the-art accuracy and is designed to run efficiently on a CPU, making it suitable for local deployment without a GPU.

Links to all mentioned datasets and model weights can be found at the bottom of the page.

Image Preprocessing

For dataset preprocessing, I developed an alignment module that uses RetinaFace for face and landmark detection. RetinaFace outputs bounding boxes and 5 facial landmarks; the alignment pipeline then uses cv2.warpAffine to rotate and crop faces based on these landmarks. All datasets were preprocessed with this script, ensuring consistent cropping and alignment.
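
As a rough illustration of the idea (not the exact code of the repository's alignment module), the sketch below takes the five landmarks RetinaFace returns for one face and warps the face to a 112x112 crop with cv2.warpAffine, using a similarity transform towards a canonical landmark template:

    import cv2
    import numpy as np

    # Canonical 5-point layout for a 112x112 crop (the commonly used ArcFace template):
    # left eye, right eye, nose tip, left mouth corner, right mouth corner.
    REFERENCE_LANDMARKS = np.array([
        [38.2946, 51.6963],
        [73.5318, 51.5014],
        [56.0252, 71.7366],
        [41.5493, 92.3655],
        [70.7299, 92.2041],
    ], dtype=np.float32)

    def align_face(image, landmarks, size=112):
        """Warp `image` so that the 5 detected landmarks match the reference layout.

        `landmarks` is a (5, 2) array of (x, y) points produced by RetinaFace.
        """
        landmarks = np.asarray(landmarks, dtype=np.float32)
        # Estimate a similarity transform (rotation + uniform scale + translation).
        matrix, _ = cv2.estimateAffinePartial2D(landmarks, REFERENCE_LANDMARKS,
                                                method=cv2.LMEDS)
        return cv2.warpAffine(image, matrix, (size, size))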


GhostFaceNetV2

GhostFaceNetV2 is a lightweight convolutional neural network (CNN) whose backbone is derived from MobileNet-style designs. It combines depth-wise and point-wise convolutional blocks with attention mechanisms, keeping the network both efficient and accurate.

This model closely approaches state-of-the-art performance while containing approximately 4 million parameters. This enables running the model in face recognition pipelines without the need for a GPU.
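
The sketch below is not the repository's exact implementation, but it shows in simplified PyTorch the Ghost module idea that GhostFaceNet-style backbones are built from: a small primary convolution produces "intrinsic" feature maps, and a cheap depth-wise convolution generates the remaining "ghost" maps:

    import torch
    import torch.nn as nn

    class GhostModule(nn.Module):
        """Simplified Ghost module: part of the output channels come from a normal
        convolution, the rest from a cheap depth-wise convolution.
        Assumes out_ch is a multiple of ratio."""

        def __init__(self, in_ch, out_ch, ratio=2, dw_kernel=3):
            super().__init__()
            init_ch = out_ch // ratio          # "intrinsic" channels
            ghost_ch = out_ch - init_ch        # cheaply generated "ghost" channels
            self.primary = nn.Sequential(
                nn.Conv2d(in_ch, init_ch, kernel_size=1, bias=False),
                nn.BatchNorm2d(init_ch),
                nn.ReLU(inplace=True),
            )
            self.cheap = nn.Sequential(
                nn.Conv2d(init_ch, ghost_ch, dw_kernel, padding=dw_kernel // 2,
                          groups=init_ch, bias=False),  # depth-wise convolution
                nn.BatchNorm2d(ghost_ch),
                nn.ReLU(inplace=True),
            )

        def forward(self, x):
            primary = self.primary(x)
            ghost = self.cheap(primary)
            return torch.cat([primary, ghost], dim=1)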


Original TensorFlow implementation

Training

Train notebook

For experimentation, we provide a notebook with a complete pipeline for training and testing the model.


Train Configuration

We provide a detailed training configuration file (config.json) with adjustable parameters such as the learning rate, batch size, and optimizer settings. For our training setup, we used the Stochastic Gradient Descent (SGD) optimizer with an initial learning rate of 0.1. Training ran for 30 epochs with a batch size of 256, and a MultiStepLR scheduler with a gamma of 0.1 lowered the learning rate every 3 epochs.

In the ArcMargin loss configuration, we used a margin of 0.5 and a scale factor of 32. These parameters shape the loss function's behaviour, specifically how strongly classes are separated during training: the margin introduces an angular margin between classes, while the scale factor rescales the cosine logits computed from the normalised embeddings, ultimately contributing to improved face recognition performance.

The complete training configuration used to train the model is available at the following link.
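
For orientation, a configuration expressing the hyper-parameters described above might look roughly like the sketch below. The exact field names and layout are defined by the repository's config.json, and values not stated above (momentum, weight decay, dataset path, workers) are purely illustrative:

    {
      "epochs": 30,
      "batch_size": 256,
      "optimizer": {"type": "SGD", "lr": 0.1, "momentum": 0.9, "weight_decay": 0.0005},
      "scheduler": {"type": "MultiStepLR", "milestones": [3, 6, 9, 12, 15, 18, 21, 24, 27], "gamma": 0.1},
      "loss": {"type": "ArcMargin", "margin": 0.5, "scale": 32},
      "dataset": {"root": "data/casia-webface-aligned", "num_workers": 8}
    }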

Dataset

The model was trained on the following open-source face dataset:

  • Casia-WebFace dataset - 0.5 million images / 10k persons. This dataset comprises images of 10,572 individuals. While it is a refined iteration of the Casia dataset, it still exhibits class imbalance. To address this, we advise using the WeightedRandomSampler technique, which balances how often each class is sampled during training (see the sketch below).
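
A minimal sketch of how such a sampler can be wired up in PyTorch (the dataset path and batch size are illustrative, and this is not the repository's exact code):

    import numpy as np
    import torch
    from torch.utils.data import DataLoader, WeightedRandomSampler
    from torchvision.datasets import ImageFolder

    dataset = ImageFolder("data/casia-webface-aligned")   # one sub-folder per identity
    labels = np.asarray(dataset.targets)

    # Weight every sample inversely to its class frequency, so rare identities
    # are drawn about as often as frequent ones.
    class_counts = np.bincount(labels)
    sample_weights = 1.0 / class_counts[labels]

    sampler = WeightedRandomSampler(
        weights=torch.as_tensor(sample_weights, dtype=torch.double),
        num_samples=len(sample_weights),
        replacement=True,
    )
    loader = DataLoader(dataset, batch_size=256, sampler=sampler)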

ArcFaceLoss

The model was trained using ArcFaceLoss, a sophisticated loss function tailored for face recognition tasks. ArcFaceLoss enhances the discriminative power between classes by adding an angular margin to the cosine similarity between feature embeddings and class centers. This margin encourages the model to learn more separable feature representations, resulting in improved performance in face recognition tasks.
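
Concretely, the loss is usually implemented as a margin-aware logit layer followed by ordinary cross-entropy. A condensed sketch with the margin of 0.5 and scale of 32 used here (omitting the numerical-stability tricks of full implementations; not the repository's exact code):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ArcMarginProduct(nn.Module):
        """Logits with an additive angular margin on the ground-truth class."""

        def __init__(self, embedding_size, num_classes, scale=32.0, margin=0.5):
            super().__init__()
            self.weight = nn.Parameter(torch.empty(num_classes, embedding_size))
            nn.init.xavier_uniform_(self.weight)
            self.scale, self.margin = scale, margin

        def forward(self, embeddings, labels):
            # Cosine similarity between normalised embeddings and class centres.
            cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
            theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
            # Add the angular margin only for the target class, then rescale.
            target = F.one_hot(labels, num_classes=cosine.size(1)).bool()
            logits = torch.where(target, torch.cos(theta + self.margin), cosine)
            return self.scale * logits   # feed into nn.CrossEntropyLoss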

Metrics

The repository provides the trained model weights; their evaluation metrics are reported in the train logs linked at the bottom of the page.

Recognition

The key feature of this model's architecture is its lightweight nature, making it ideal for deploying face recognition locally on a CPU. Without CUDA, the pipeline with the RetinaFace face detection model (size S) achieves 12-15 FPS on a consumer-grade 6-core processor; GPU inference reaches more than 30 FPS.
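
The recognition step itself boils down to comparing embeddings. A simplified sketch of how a query face can be matched against a gallery of known identities, such as the per-person photo database used by the desktop application below (not the repository's exact code; names and the threshold value are illustrative and play the role of the recognition confidence setting):

    import numpy as np

    def identify(query_embedding, gallery, threshold=0.4):
        """Return the best-matching identity for one face embedding.

        `gallery` maps a person's name to the L2-normalised mean embedding of
        their photos; matches at or below `threshold` are reported as unknown.
        """
        query = query_embedding / np.linalg.norm(query_embedding)
        best_name, best_score = "unknown", threshold
        for name, reference in gallery.items():
            score = float(np.dot(query, reference))   # cosine similarity
            if score > best_score:
                best_name, best_score = name, score
        return best_name, best_score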

How to Use

  1. Install Dependencies:

    pip install -r requirements.txt
  2. Create your config.json (see the Train Configuration section above for the parameters it should contain).

  3. Train the Model:

    To train the model, run the train.py script, specifying the path to your config file:

    python train.py --config cfg/config.json

Desktop Application

In addition to the model itself, we offer a user-friendly desktop application for real-time face recognition, providing a convenient interface for running it locally on your machine.


Features

  • Real-time Face Recognition: Experience seamless face recognition with live camera feed integration.
  • User-Friendly Interface: Intuitive and easy-to-use interface for effortless navigation and operation.
  • Customizable Settings: Manage your own face photo database and adjust recognition settings such as thresholds, device, and camera resolution.
  • Cross-Platform Compatibility: Compatible with Windows, macOS, and Linux operating systems for versatile usage.

Installation

To use the desktop application, follow these simple steps:

  1. Clone the Repository: Clone the repository to your local machine by running the following command:

    git clone https://github.com/wh1t3tea/face-recognition
  2. Navigate to the Desktop App Directory: Move to the directory containing the desktop application files:

    cd face-recognition/desktop
  3. Run the Application: Launch the desktop application by running:

    python desktop_app.py

Usage

Once the application is running, you can perform the following actions:

  • Adjust your own face dataset: Click the corresponding button in the application window to select or update your face photo database.

  • Face photo database: The face photo database should be organized so that each individual has their own folder, named after them, containing photos of that individual (for example, faces/Alice/photo_1.jpg and faces/Bob/photo_1.jpg; the folder and file names here are only illustrative).

  • Start Face Recognition: Click the "Start" button to initiate real-time face recognition.

  • Adjust Settings: Modify settings such as the face detection threshold, recognition confidence level, device, and camera resolution to optimize performance.

    If you intend to use GPU acceleration within the application, make sure the required CUDA drivers and cuDNN libraries are installed; they are needed for GPU inference.

  • View Recognition Results: View recognized faces and their corresponding labels in the application interface in real time. You can also adjust your callbacks.

Model weights

Train logs

Datasets

References

Notes

  • This project is developed for educational purposes and can be further refined and expanded for specific needs.
  • When using this code in your projects, please provide references to the original research papers and datasets.

EXE

To build the desktop app, we used PyInstaller and Inno Setup:

  • Build: you can specify any arguments you need in this command:

    pyinstaller --add-data "weights:weights" --noconsole --icon=static/icon.ico --manifest=manifest.xml --name IdentityX --uac-admin  desktop_app.py
  • Create installer: To create the installer, we used Inno Setup:

    1. Download [Inno Setup](https://jrsoftware.org/isinfo.php)
    2. Write your setup script or use the Wizard for a user-friendly UI
    

    Our setup script will be available soon.
