This part of the application listens to the speaker and transcribes the words so that a deaf person can read what was said.
This part of the application uses the camera to capture the deaf person's gestures, interprets them, and speaks the corresponding words aloud for him or her.
This code uses the MediaPipe library together with a machine-learning model that you can calibrate yourself. Run Application.py and choose one of the three options.
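To give an idea of the MediaPipe side, here is a minimal sketch of reading hand landmarks from the webcam. It is a hypothetical example, not the exact code in Application.py, which also runs the classifier and the option menu on top of this loop.

```python
# Minimal sketch of hand-landmark extraction with MediaPipe (hypothetical example).
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
cap = cv2.VideoCapture(0)

with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB images.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand in results.multi_hand_landmarks:
                # 21 landmarks with normalized x/y coordinates:
                # these are the keypoints fed to the gesture classifier.
                coords = [(lm.x, lm.y) for lm in hand.landmark]
        cv2.imshow("hands", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

cap.release()
cv2.destroyAllWindows()
```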
Select 1 in the app. Speak to see the transcription. Say "quitter" or "fini" (French for "quit" or "done"), or press q, to exit.
Demo video: WhatsApp.Video.2022-12-12.at.00.34.16_Trim.mp4
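A minimal sketch of what option 1 does, assuming the SpeechRecognition package and Google's free recognizer; the actual Application.py may use a different speech-to-text backend.

```python
# Minimal speech-to-text loop, assuming the SpeechRecognition package (hypothetical sketch).
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    while True:
        audio = recognizer.listen(source)
        try:
            # French recognition, so the "quitter" / "fini" exit words can be detected.
            text = recognizer.recognize_google(audio, language="fr-FR")
        except sr.UnknownValueError:
            continue  # nothing intelligible was heard, keep listening
        print(text)
        if any(word in text.lower() for word in ("quitter", "fini")):
            break
```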
Select 2 in the app. Perform the gestures and the app will speak the corresponding word to you.
Demo video: WhatsApp.Video.2022-12-12.at.00.35.27_Trim.mp4
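Once a gesture has been classified, the app speaks the associated word. Below is a minimal sketch, assuming the pyttsx3 text-to-speech library; the project may use a different TTS engine.

```python
# Minimal text-to-speech sketch, assuming the pyttsx3 library (hypothetical example).
import pyttsx3

engine = pyttsx3.init()

def speak(label: str) -> None:
    """Speak the word associated with a recognized sign."""
    engine.say(label)
    engine.runAndWait()

speak("bonjour")  # e.g. the label predicted by the gesture classifier
```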
Type k on your keyboard to enter keypoint mode and train your model. Make your hand gesture and press the key for that gesture's index, from 0 to 9 and a to e; this stores the keypoints in the "/keypoint.csv" file. Edit the "/keypoint_classifier_label.csv" file to give each sign a name. Then run "keypoint_classification.ipynb" to train the model so that you can use it.
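As a rough illustration of the training step, the sketch below fits a simple scikit-learn classifier instead of the notebook's own model. It assumes the first CSV column is the class index and the remaining columns are the landmark coordinates; adjust the path to wherever keypoint.csv lives in the repository.

```python
# Rough illustration of training on keypoint.csv (the notebook trains its own model).
# Assumption: first column = class index, remaining columns = landmark coordinates.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

CSV_PATH = "keypoint.csv"  # adjust to the repository's actual keypoint.csv location

data = np.loadtxt(CSV_PATH, delimiter=",")
X, y = data[:, 1:], data[:, 0].astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)
print("validation accuracy:", model.score(X_test, y_test))
```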
Follow the same process with the point-history files to add moving gestures.
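For moving gestures the classifier looks at a short history of fingertip positions rather than a single frame. A minimal sketch of that idea follows; the history length of 16 and the use of the index fingertip are assumptions.

```python
# Minimal sketch of collecting a point history for moving gestures.
# The history length (16) and the choice of the index fingertip are assumptions.
from collections import deque

HISTORY_LENGTH = 16
point_history = deque(maxlen=HISTORY_LENGTH)

def update_history(landmarks):
    """Append the index fingertip (MediaPipe landmark 8) position for each new frame."""
    tip = landmarks[8]
    point_history.append((tip.x, tip.y))

def history_features():
    """Flatten the history into the feature vector logged to the point-history CSV."""
    return [coord for point in point_history for coord in point]
```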
Reference: https://github.com/Kazuhito00/hand-gesture-recognition-using-mediapipe