
Evaluate Lip Reading Using Deep Learning Techniques

This paper explores Silent Sound Technology, focusing on its potential to enhance communication in noisy environments through lip-reading and deep learning, with applications in hearing aids and security.

Datasets:

LRW, GRID, LRS2, LRS3-TED, VoxCeleb2, CAS-VSR-W1k (LRW-1000)

Contact

For any questions or feedback, please contact Debojyoti Bhuinya, Subhamay Ganguly, or Akash Das.

Proposed Methodology

In this section, we describe the methodology employed for preprocessing video data, aligning it with its textual transcripts, and training a neural network model for lip reading. The proposed methodology encompasses data loading, data preprocessing, and model training, as sketched below.
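The README does not reproduce the preprocessing code itself, so the following is a minimal sketch of what the loading and alignment step might look like, assuming GRID-style data (one video clip per utterance plus an `.align` transcript) and OpenCV + TensorFlow. The crop coordinates, vocabulary, and function names are illustrative assumptions, not values taken from the repository.

```python
# Minimal sketch of the data-loading/alignment step (illustrative names, paths, and crop).
# Assumes GRID-style data: one video per utterance plus an .align transcript file.
import cv2
import numpy as np
import tensorflow as tf

VOCAB = list("abcdefghijklmnopqrstuvwxyz'?!123456789 ")
char_to_num = tf.keras.layers.StringLookup(vocabulary=VOCAB, oov_token="")

def load_video(path, num_frames=75):
    """Read a clip, convert frames to grayscale, crop the mouth region, normalize."""
    cap = cv2.VideoCapture(path)
    frames = []
    while len(frames) < num_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        mouth = gray[190:236, 80:220]      # fixed crop; a landmark detector could replace this
        frames.append(mouth)
    cap.release()
    frames = np.array(frames, dtype=np.float32)
    frames = (frames - frames.mean()) / (frames.std() + 1e-6)   # per-clip normalization
    return frames[..., np.newaxis]         # shape: (T, H, W, 1)

def load_alignment(path):
    """Parse a GRID .align file into character ids, skipping silence tokens."""
    words = []
    with open(path) as f:
        for line in f:
            start, end, word = line.split()
            if word != "sil":
                words.append(word)
    text = " ".join(words)
    return char_to_num(tf.strings.unicode_split(text, "UTF-8"))
```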

Convolutional Neural Network:

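As a rough illustration of a convolutional front end for spatiotemporal feature extraction, the sketch below stacks a few `Conv3D`/`MaxPool3D` blocks in Keras. The filter counts and kernel sizes are assumptions, not values taken from the repository.

```python
# Illustrative 3D-CNN front end; layer sizes are assumptions, not the repo's exact values.
import tensorflow as tf
from tensorflow.keras import layers

def conv3d_frontend(input_shape=(75, 46, 140, 1)):
    inputs = tf.keras.Input(shape=input_shape)       # (time, height, width, channels)
    x = layers.Conv3D(128, 3, padding="same", activation="relu")(inputs)
    x = layers.MaxPool3D(pool_size=(1, 2, 2))(x)     # pool only spatially, keep the time axis
    x = layers.Conv3D(256, 3, padding="same", activation="relu")(x)
    x = layers.MaxPool3D(pool_size=(1, 2, 2))(x)
    x = layers.Conv3D(75, 3, padding="same", activation="relu")(x)
    x = layers.MaxPool3D(pool_size=(1, 2, 2))(x)
    # Flatten each time step's feature map so the sequence can feed an RNN.
    x = layers.TimeDistributed(layers.Flatten())(x)
    return tf.keras.Model(inputs, x, name="conv3d_frontend")
```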

Bi-Directional LSTM Architecture

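A bidirectional LSTM reads the per-frame features in both temporal directions, so each output step sees both past and future lip motion. A minimal Keras sketch, with illustrative unit counts and dropout rates:

```python
# Illustrative bidirectional LSTM stack (units and dropout are assumptions).
from tensorflow.keras import layers

def bilstm_block(x, units=128, dropout=0.5):
    # Two stacked BiLSTM layers that read the frame features forwards and backwards.
    x = layers.Bidirectional(layers.LSTM(units, return_sequences=True))(x)
    x = layers.Dropout(dropout)(x)
    x = layers.Bidirectional(layers.LSTM(units, return_sequences=True))(x)
    x = layers.Dropout(dropout)(x)
    return x
```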

Total Architecture

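Putting the pieces together, a common lip-reading assembly (LipNet-style) feeds the 3D-CNN features into the BiLSTM stack and ends with a per-frame character softmax for CTC decoding. The CTC output head is an assumption based on that common design, not something this README confirms. Reusing the sketches above:

```python
# Illustrative end-to-end assembly: 3D-CNN front end + BiLSTM stack + per-frame character softmax.
# The CTC-style output head is an assumption borrowed from common lip-reading pipelines.
import tensorflow as tf
from tensorflow.keras import layers

def build_lipreading_model(num_chars, input_shape=(75, 46, 140, 1)):
    inputs = tf.keras.Input(shape=input_shape)
    x = conv3d_frontend(input_shape)(inputs)     # defined in the CNN sketch above
    x = bilstm_block(x)                          # defined in the BiLSTM sketch above
    outputs = layers.Dense(num_chars + 1, activation="softmax")(x)  # +1 for the CTC blank token
    return tf.keras.Model(inputs, outputs, name="lipreader")

# char_to_num is the StringLookup defined in the preprocessing sketch above.
model = build_lipreading_model(num_chars=char_to_num.vocabulary_size())
model.summary()
```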

Performance Evaluation

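Lip-reading systems are typically evaluated with character or word error rate on held-out utterances. Since the README does not spell out its metric, the sketch below shows one plausible evaluation step: greedy CTC decoding of the model's per-frame outputs followed by a simple word error rate. `decode_batch` and `word_error_rate` are illustrative helpers, not repository functions.

```python
# Illustrative evaluation step: greedy CTC decoding followed by word error rate.
import tensorflow as tf

# char_to_num is the StringLookup defined in the preprocessing sketch above.
num_to_char = tf.keras.layers.StringLookup(
    vocabulary=char_to_num.get_vocabulary(), oov_token="", invert=True)

def decode_batch(y_pred):
    # y_pred: (batch, time, num_chars + 1) softmax outputs; the decoder wants time-major input.
    time_major = tf.transpose(y_pred, [1, 0, 2])
    seq_len = tf.fill([tf.shape(y_pred)[0]], tf.shape(y_pred)[1])
    decoded, _ = tf.nn.ctc_greedy_decoder(tf.math.log(time_major + 1e-8), seq_len)
    dense = tf.sparse.to_dense(decoded[0], default_value=-1).numpy()
    texts = []
    for seq in dense:
        chars = num_to_char(seq[seq != -1])
        texts.append(tf.strings.reduce_join(chars).numpy().decode("utf-8"))
    return texts

def word_error_rate(reference, hypothesis):
    # Word-level Levenshtein distance, normalized by reference length.
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(r)][len(h)] / max(len(r), 1)
```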
