# Hand-Gesture-Recognition

This application has two parts that help deaf and hard-of-hearing people communicate.

## First part: Speech to Text

This part of the application listens to the speaker and transcribes their words so that a deaf person can read what was said.

## Second part: Gesture to Speech

This part of the application uses the camera to watch the deaf person's gestures, interprets them, and speaks the words aloud on their behalf.

## How to use the code

This code uses the MediaPipe library and a machine-learning model that you can train yourself. Run `Application.py` and choose one of the three options.

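As a rough illustration, the menu could be driven by logic like the sketch below. The prompt text and the function names are assumptions for illustration, not the actual contents of `Application.py`:

```python
# A minimal sketch of the option menu, not the actual Application.py.
# speech_to_text() and gesture_to_speech() are hypothetical placeholders
# for the two parts described above.

def speech_to_text() -> None: ...      # hypothetical: part one
def gesture_to_speech() -> None: ...   # hypothetical: part two

choice = input("1: Speech to Text, 2: Gesture to Speech, q: Quit > ")
if choice == "1":
    speech_to_text()
elif choice == "2":
    gesture_to_speech()
```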

### For the Speech to Text part

1. Select `1` in the app.
2. Speak to see the transcription.
3. Say "quitter" or "fini" (French for "quit" / "done"), or press `q`, to exit.
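The README does not say which speech engine is used; the sketch below shows one way such a loop could work, assuming the `speech_recognition` package (an assumption, not necessarily what `Application.py` does):

```python
# A minimal Speech to Text loop, assuming the speech_recognition package;
# this illustrates the idea, not the code actually used by Application.py.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as mic:
    while True:
        recognizer.adjust_for_ambient_noise(mic, duration=0.5)
        audio = recognizer.listen(mic)
        try:
            # French recognition, since the exit words are "quitter"/"fini"
            text = recognizer.recognize_google(audio, language="fr-FR")
        except sr.UnknownValueError:
            continue  # nothing intelligible was heard, keep listening
        print(text)
        if text.lower() in ("quitter", "fini"):
            break
```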

Demo video: `WhatsApp.Video.2022-12-12.at.00.34.16_Trim.mp4`

### For the Gesture to Speech part

1. Select `2` in the app.
2. Perform the gestures, and the app speaks the corresponding word aloud.
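A rough sketch of this loop follows, assuming OpenCV, MediaPipe Hands, and `pyttsx3` for the voice output; `classify()` is a hypothetical stand-in for the trained keypoint classifier:

```python
# A minimal Gesture to Speech loop; classify() is a hypothetical stand-in
# for the trained keypoint classifier, and pyttsx3 is an assumption for
# the voice output. This illustrates the idea, not the real Application.py.
import cv2
import mediapipe as mp
import pyttsx3

def classify(hand_landmarks):
    """Hypothetical: map the 21 hand landmarks to a word, or None."""
    ...

engine = pyttsx3.init()
hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB images; OpenCV captures BGR
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        word = classify(results.multi_hand_landmarks[0])
        if word:
            engine.say(word)
            engine.runAndWait()
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```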

Demo video: `WhatsApp.Video.2022-12-12.at.00.35.27_Trim.mp4`

### If you want to add gestures

1. Press `k` on your keyboard to enter keypoint mode and collect training data.
2. Hold your hand gesture and press the key for this gesture's index (`0` to `9` and `a` to `e`).
3. This stores your keypoints in the `/keypoint.csv` file (a minimal logging sketch follows this list).
4. Edit the `/keypoint_classifier_label.csv` file to give each sign a name.
5. Run `keypoint_classification.ipynb` to train the model so that the app can use it.
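As a rough picture of step 3, logging could be as simple as appending the gesture's label index plus the flattened landmark coordinates as one CSV row. This is a sketch assuming that row layout, not the project's exact logging code:

```python
# A minimal sketch of keypoint logging, assuming each CSV row holds the
# gesture's label index followed by the flattened landmark coordinates.
import csv

def log_keypoint(label_index: int, landmarks: list[float],
                 path: str = "keypoint.csv") -> None:
    """Append one labeled keypoint row to the training CSV."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([label_index, *landmarks])

# e.g. after pressing "3" while holding a gesture:
log_keypoint(3, [0.12, 0.34, 0.56])  # illustrative coordinates only
```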

Follow the same process with the point history to add moving gestures.

## Sources

- https://github.com/Kazuhito00/hand-gesture-recognition-using-mediapipe