A robust video emotion analysis system combining OpenCV's Haar cascades with DeepFace's deep learning models for efficient facial emotion recognition in video streams.
- Hybrid Face Detection: Combines OpenCV Haar cascades with DeepFace verification (sketched after this list)
- Adaptive Frame Processing: Skips frames (configurable) to optimize performance
- Face Preprocessing Pipeline: Automatic contrast/brightness adjustment + resizing
- Confidence-based Filtering: Ignores low-confidence predictions (<0.8 threshold)
- Temporal Aggregation: Groups results by second for stable emotion reporting
- Error Resilience: Comprehensive exception handling at all processing stages
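A minimal sketch of the hybrid detection step, assuming the Haar cascade bundled with OpenCV and DeepFace's default detector; the function and variable names are illustrative, not the actual `app.py` API:

```python
import cv2
from deepface import DeepFace

# Fast first-pass detector: the frontal-face Haar cascade shipped with OpenCV.
_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detect_faces_hybrid(frame_bgr):
    """Return face crops proposed by the Haar cascade and confirmed by DeepFace."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    candidates = _cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    confirmed = []
    for (x, y, w, h) in candidates:
        crop = frame_bgr[y:y + h, x:x + w]
        try:
            # With enforce_detection=True, DeepFace raises ValueError if its own
            # detector finds no face in the crop, filtering Haar false positives.
            DeepFace.analyze(crop, actions=["emotion"], enforce_detection=True)
            confirmed.append(crop)
        except ValueError:
            continue  # Haar false positive rejected by DeepFace
    return confirmed
```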
```bash
pip install opencv-python-headless deepface numpy
```
```bash
python app.py <video_path>
```
Example Analysis:
```bash
python app.py demo_video.mp4
```
```text
Frame: 15 Emotion: happy (Confidence: 0.92)
Frame: 30 Emotion: neutral (Confidence: 0.85)
Second 1: Emotion changed from happy to neutral
```
- Frame Decoding (OpenCV VideoCapture)
- Hybrid Face Detection (Haar Cascade + DeepFace verification)
- Face Normalization (Contrast/Brightness adjustment + Resize to 224x224)
- Emotion Analysis (DeepFace's emotion model; see the sketch after this list)
- Temporal Aggregation (Per-second emotion statistics)
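The normalization and analysis stages could look roughly like the sketch below. The `alpha`/`beta` values and helper names are assumptions; DeepFace reports emotion scores as percentages, so the sketch rescales them to the 0-1 range shown in the example output:

```python
import cv2
from deepface import DeepFace

def normalize_face(face_bgr, alpha=1.3, beta=10):
    """Boost contrast/brightness, then resize to the 224x224 input size."""
    adjusted = cv2.convertScaleAbs(face_bgr, alpha=alpha, beta=beta)
    return cv2.resize(adjusted, (224, 224))

def analyze_emotion(face_bgr):
    """Return (dominant_emotion, confidence in 0-1) for a single face crop."""
    result = DeepFace.analyze(
        normalize_face(face_bgr), actions=["emotion"], enforce_detection=False
    )
    # Recent DeepFace versions return a list with one dict per detected face.
    if isinstance(result, list):
        result = result[0]
    emotion = result["dominant_emotion"]
    confidence = result["emotion"][emotion] / 100.0  # DeepFace scores are percentages
    return emotion, confidence
```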
```python
FRAME_SKIP = 5                      # Process every 5th frame
EMOTION_CONFIDENCE_THRESHOLD = 0.8  # Minimum confidence score
EMOTION_CHANGE_THRESHOLD = 0.3      # Relative change for emotion shift
```
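One plausible way these constants plug into the main loop, reusing the illustrative `detect_faces_hybrid` and `analyze_emotion` helpers from the sketches above (the real `app.py` may structure this differently):

```python
import cv2
from collections import Counter, defaultdict

def analyze_video(video_path):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS metadata is missing
    per_second = defaultdict(Counter)          # second index -> emotion counts

    frame_idx = 0
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if frame_idx % FRAME_SKIP == 0:  # process every FRAME_SKIP-th frame only
                for face in detect_faces_hybrid(frame):
                    emotion, confidence = analyze_emotion(face)
                    if confidence >= EMOTION_CONFIDENCE_THRESHOLD:  # drop uncertain predictions
                        per_second[int(frame_idx / fps)][emotion] += 1
            frame_idx += 1
    finally:
        cap.release()  # explicit release, including on errors
    return per_second
```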
- Selective Frame Processing: Reduces redundant computations
- Face Preprocessing: Standardizes input for better model accuracy
- Memory Management: Explicit video capture release post-processing
- Confidence Filtering: Eliminates uncertain predictions
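Tying the aggregation back to the per-second lines in the example output, a reporter over the `per_second` counts might look like this; the change rule here (print whenever the dominant emotion differs from the previous second) is an assumption and does not model `EMOTION_CHANGE_THRESHOLD`:

```python
def report_per_second(per_second):
    """Print the dominant emotion per second and flag changes between seconds."""
    previous = None
    for second in sorted(per_second):
        emotion, _count = per_second[second].most_common(1)[0]
        if previous is not None and emotion != previous:
            print(f"Second {second}: Emotion changed from {previous} to {emotion}")
        previous = emotion
```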
MIT License - See LICENSE for full text.
Issues and PRs welcome! Please follow standard GitHub workflows.