# Sign-Language-Translator

## Project Idea

The end goal of this project is to develop an in-house device that can do the following:

- **Audio-Translation:** Speech from a person is taken as input, and the appropriate gesture (or combination of gestures) is displayed on the screen in real time (we will have to create standard templates for the gestures).

- **Gesture-Translation:** Images of a person's gestures are captured and processed in real time, and the corresponding output is produced as audio (with the text also displayed on the screen). A minimal sketch of this direction is given after this list.
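
To make the Gesture-Translation direction concrete, here is a minimal sketch of a real-time webcam loop using PyTorch and OpenCV. Everything model-specific is a placeholder: `GestureClassifier` is an untrained toy CNN standing in for whatever model the project settles on, and `GESTURE_LABELS` is an invented label set; a real implementation would load a trained checkpoint and use the label set of the chosen dataset.

```python
import cv2
import torch
import torch.nn as nn
import torchvision.transforms as T

# Hypothetical label set; a real project would use the chosen dataset's labels.
GESTURE_LABELS = ["hello", "thanks", "yes", "no", "please"]

class GestureClassifier(nn.Module):
    """Toy CNN placeholder; in practice, load a trained model from a checkpoint."""

    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Two 2x poolings reduce 224x224 input to 56x56 feature maps.
        self.head = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Convert a raw RGB frame into a normalized 224x224 tensor.
preprocess = T.Compose([T.ToPILImage(), T.Resize((224, 224)), T.ToTensor()])

model = GestureClassifier(num_classes=len(GESTURE_LABELS)).eval()

cap = cv2.VideoCapture(0)  # default webcam
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV gives BGR
        batch = preprocess(rgb).unsqueeze(0)          # shape: (1, 3, 224, 224)
        with torch.no_grad():
            pred = model(batch).argmax(dim=1).item()
        # Overlay the predicted gesture text on the live feed.
        cv2.putText(frame, GESTURE_LABELS[pred], (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.imshow("Gesture-Translation", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
finally:
    cap.release()
    cv2.destroyAllWindows()
```

Classifying single frames keeps the sketch short; gestures that unfold over time would call for a temporal model (e.g. a CNN feeding an RNN or transformer over a window of frames), which is exactly the kind of choice the Ideation step below is meant to settle.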

## Learning Outcomes

- Practical project experience in computer vision using the TensorFlow and/or PyTorch libraries

## Plan of Action / Timeline

The project starts with the basics, i.e. identifying the relevant datasets and papers, and ends with the deployment of the chosen algorithms.

1. **Literature Review:** Identify the relevant datasets and papers in this domain.
2. **Ideation:** Select a model (or modify an existing one), then implement and test it.
3. **Experimentation:** Run experiments on different datasets and with different parameters, recording the observations and results.
4. **Deployment:** Improve the model to increase its accuracy and/or reduce its inference time for deployment onto a device with low compute power; one common approach is sketched below.
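
One common way to meet the low-compute requirement in step 4 is post-training dynamic quantization. The sketch below is a minimal, non-authoritative example using PyTorch's built-in `torch.quantization.quantize_dynamic`; the `nn.Sequential` model is a toy stand-in for whatever trained model the experiments produce, and alternatives such as pruning or ONNX export may suit the final device better.

```python
import torch
import torch.nn as nn

# Toy stand-in for the trained model; any nn.Module with Linear layers works here.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(224 * 224 * 3, 128),
    nn.ReLU(),
    nn.Linear(128, 5),
)
model.eval()

# Post-training dynamic quantization: weights of the listed layer types are
# stored as int8, shrinking the model and often speeding up CPU inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Optionally compile to TorchScript so the model can run without a Python runtime.
scripted = torch.jit.script(quantized)
scripted.save("gesture_model_quantized.pt")
```

Dynamic quantization needs no calibration data, which makes it a cheap first experiment; if accuracy drops too far, static quantization or quantization-aware training are the usual next steps.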

## Prerequisites

1. Basic knowledge of Python and object-oriented programming in Python
2. Familiarity with Git
3. Basic theoretical concepts of computer vision
4. Passion for learning

## Expectations

You should be able to write clean, efficient code, with proper comments and documentation for each experiment.

## Project Mentors

## Faculty Advisor

Prof. Sravan Danda