Comparison of various machine learning and deep learning models to classify sign language actions

Sign Language Recognition using Sensors

In this project, we explore techniques for classifying signs from sensor data obtained using non-intrusive devices such as gloves or armbands.

This project is largely based on the implementation of the paper "Sign Language Recognition Using Temporal Classification" by H. Cate, F. Dalvi and Z. Hussain, which can be found here. Whenever we were stuck or needed guidance on how to proceed, we referred to their codebase. We have been standing on the shoulders of giants, and we would like to thank them.

The various methods which we have implemented are:

  1. Logistic Regression
  2. Support Vector Machines (tuned using Grid Search)
  3. Artificial Neural Networks (3 layers)
  4. Long Short-Term Memory

Our report is available here; it contains the results we obtained and our analysis of them.

The contributors to this project are:

  1. Parth Doshi
  2. Jimit Gandhi
  3. Deep Gosalia
