Synaera-TeamSemaphore

Android application that performs sign language to text translation in real time

Table of Contents

  • Our Team
  • Project repository structure
  • Features
  • User guide
  • How to use
  • Learn more

Our Team

  • Mehrin Firdousi - Team Lead; app and server development
  • Qusai Al-Kiswany - Application development
  • Aamir Ali - Computer Vision model development
  • Adam Ahsan - NLP model development
  • Ayesha Hassan - NLP model development & dataset preparation

Project repository structure

Synaera's development process involved four main components: the Android app that runs the sign language translation service among other features, a Computer Vision (CV) model for human gesture/action recognition, a Natural Language Processing (NLP) model that builds sentences from the predicted signs, and the streaming server that connects all of the above components and allows them to communicate over the network.

Contains the Synaera app's code (Kotlin and Java), ready to be tested with the streaming server. The app requires an internet connection; on launch, the app client establishes a session with our cloud-based server, which is then used to run the live translation service.
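
For illustration, here is a minimal Kotlin sketch of how such a client session might be opened. It assumes a WebSocket transport via OkHttp, a placeholder server URL, and a simple message format (JPEG-encoded camera frames sent up, caption text pushed back); these are assumptions made for this sketch and may differ from the actual app code in this repository.

import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.Response
import okhttp3.WebSocket
import okhttp3.WebSocketListener
import okio.ByteString

// Hypothetical endpoint; the real server address is configured inside the app.
private const val SERVER_URL = "ws://synaera-server.example:5000/stream"

class TranslationSession(private val onCaption: (String) -> Unit) {
    private val client = OkHttpClient()
    private var socket: WebSocket? = null

    // Open the session when the live translation screen is launched.
    fun start() {
        val request = Request.Builder().url(SERVER_URL).build()
        socket = client.newWebSocket(request, object : WebSocketListener() {
            override fun onOpen(webSocket: WebSocket, response: Response) {
                // Session established; camera frames can now be streamed.
            }

            override fun onMessage(webSocket: WebSocket, text: String) {
                // The server pushes back the translated English text.
                onCaption(text)
            }
        })
    }

    // Send one JPEG-encoded camera frame to the server.
    fun sendFrame(jpegBytes: ByteArray) {
        socket?.send(ByteString.of(*jpegBytes))
    }

    fun stop() {
        socket?.close(1000, "session ended")
    }
}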

Contains all code and components related to training and testing the CV model used by the server to recognize human gestures/actions as ASL signs. The model is hosted by the streaming server. The signs detected by the CV model are represented in gloss notation (for example, "YESTERDAY STORE I GO" rather than "I went to the store yesterday"), which is then processed by the NLP model to generate an English sentence.

Contains all code and components related to training and testing of the NLP model used to translate ASL gloss to English. This model is hosted by the streaming server to perform live translation of ASL using the CV model's output.

Contains all the code for the video streaming server, including setting up socket connections to communicate with the Android app client, hosting the two deep learning models, and performing pre/post-processing of model input/output. This server is currently hosted on an Azure Cloud VM, but it can also be run locally. For more information on how to test locally, see the Synaera-server repository linked under Learn more below.
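
As a rough sketch of the order of operations described above (not the actual Synaera-server implementation), each received frame is passed to the CV model, the predicted gloss tokens are buffered, and once a phrase is complete the NLP model turns the buffer into an English sentence that is sent back to the app client. All names below are hypothetical placeholders, with the two models and the phrase-boundary check passed in as plain functions.

// Minimal Kotlin sketch of the server-side pipeline order; illustrative only.
fun translateStream(
    frames: Sequence<ByteArray>,                       // raw frames read from the socket
    recognizeSign: (ByteArray) -> String?,             // CV model: frame -> gloss token, or null
    isEndOfPhrase: (List<String>) -> Boolean,          // e.g. detects a pause in signing
    glossToEnglish: (List<String>) -> String,          // NLP model: gloss tokens -> English sentence
    sendCaption: (String) -> Unit                      // write the result back to the app client
) {
    val glossBuffer = mutableListOf<String>()
    for (frame in frames) {
        val gloss = recognizeSign(frame) ?: continue   // skip frames with no confident prediction
        if (glossBuffer.lastOrNull() != gloss) {       // collapse repeated predictions of the same sign
            glossBuffer.add(gloss)
        }
        if (isEndOfPhrase(glossBuffer)) {
            sendCaption(glossToEnglish(glossBuffer))
            glossBuffer.clear()
        }
    }
}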

Features

  • Real-time translation of sign language to text from the phone camera

  • Chat with a sign language user

  • Upload a video containing ASL signing and generate a transcript

User guide

On-boarding Screens and Login/Registration steps

Synaera-login-demo.mp4

Real-time translation and Chat demo

Synaera-short-demo.mp4

Transcript generation for Video Upload demo

Synaera-video-upload.mp4

How to use

Prerequisites

  • Install the latest version of Android Studio
  • An Android mobile device running Android 5.0 (Lollipop) or higher

Option 1

Clone this repository on your machine, navigate to the 'Android App' directory, and open the project in Android Studio:

git clone https://github.com/MehrinFirdousi/Synaera-TeamSemaphore.git
cd 'Android App'

Option 2

Install the synaera-app.apk file, located at the root of this repository, directly on your Android device.
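
If the Android SDK platform tools are installed and USB debugging is enabled on the device, the APK can also be installed from the command line; otherwise, simply copy the file to the device and open it to sideload.

adb install synaera-app.apk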

Learn more

About us

https://synaera.wixsite.com/home

More about our deep learning models' development process and data collection

https://github.com/AdamJeddy/Grad-Project-Code

More about the server development process

https://github.com/MehrinFirdousi/Synaera-server