
Wrapper-Filter-Speech-Emotion-Recognition

Based on our paper "A Hybrid Deep Feature Selection Framework for Emotion Recognition from Human Speeches" under review in Multimedia Tools and Applications, Springer.

Overall Workflow

Requirements

To install the required dependencies, run the following in the command prompt:

pip install -r requirements.txt
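Optionally (not part of the original instructions), you can first create and activate a fresh virtual environment so the dependencies stay isolated:

python -m venv venv
venv\Scripts\activate        (Windows)  or  source venv/bin/activate  (Linux/macOS)
pip install -r requirements.txt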

Running the code

Required directory structure (the "data" directory contains class-wise spectrograms of the raw audio files in the original dataset; a sketch of how such spectrograms can be generated follows the tree):


+-- data
|   +-- .
|   +-- train
|   +-- val
+-- PasiLuukka.py
+-- WOA_FS.py
+-- __init__.py
+-- audio2spectrogram.py
+-- main.py
+-- model.py

Then, run the code from the command prompt as follows:

python main.py --data_dir "./data"

Available arguments:

  • --num_epochs: number of training epochs. Default = 100
  • --learning_rate: learning rate for training. Default = 0.0005
  • --batch_size: batch size for training. Default = 4
  • --optimizer: optimizer for training: SGD / Adam. Default = "SGD"
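
For example, to train for 50 epochs with the Adam optimizer and a batch size of 8 (values chosen only for illustration):

python main.py --data_dir "./data" --num_epochs 50 --learning_rate 0.0005 --batch_size 8 --optimizer "Adam"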
