In this project, we will develop a Sign Gesture Language Translator using MediaPipe and the Intel oneAPI platform. The translator will recognize sign gestures captured from a video stream and convert them into the corresponding text or spoken language.
Before getting started, make sure you have the following prerequisites installed:
- Intel OneAPI Base Toolkit
- Jupyter Lab (Intel oneAPI 2023) kernel
- MediaPipe
- TensorFlow
- Keras
- OpenCV
- Install the Intel oneAPI kernel: refer to Cheatsheet.txt
- Install MediaPipe: Run the following command in your terminal to install MediaPipe using pip:
!pip install mediapipe
- Install the Intel oneAPI Base Toolkit: Visit the Intel Developer Zone website (https://software.intel.com/content/www/us/en/develop/tools/oneapi/base-toolkit.html) and follow the instructions to download and install the Intel oneAPI Base Toolkit for your operating system.
- Clone the repository: Clone the project repository from GitHub using the following command:
!git clone https://github.com/Er-AI-GK/oneAPI-Sign-Language-Gesture-Translator.git
- Set up the environment: Open a terminal and navigate to the project directory, then activate the Intel oneAPI environment by running the following command:
source <path-to-intel-oneapi>/setvars.sh
- Run the translator: Open the following notebook in Jupyter Lab (using the Intel oneAPI kernel) and run its cells:
SLT Main.ipynb
The translator will launch and begin capturing video from your default camera.
- Translate sign gestures: Make different sign gestures in front of the camera, and the translator will recognize them and display the corresponding text or spoken-language output.
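The final display step can be sketched as follows. This is a minimal illustration, not the project's actual code: the label set and the `probabilities_to_text` helper are hypothetical placeholders for the classes the trained model actually predicts.

```python
import numpy as np

# Hypothetical label set -- the project's real classes come from its own dataset.
LABELS = ["hello", "thanks", "yes", "no", "please"]

def probabilities_to_text(probs, labels=LABELS, threshold=0.5):
    """Map a model's class-probability vector to a text label.

    Returns None when the top score is below the confidence threshold,
    so uncertain frames produce no output.
    """
    probs = np.asarray(probs)
    best = int(np.argmax(probs))
    if probs[best] < threshold:
        return None
    return labels[best]

# Example: a confident prediction for class 1 ("thanks").
print(probabilities_to_text([0.05, 0.85, 0.04, 0.03, 0.03]))  # thanks
```

Thresholding keeps the on-screen output stable: frames where the model is unsure simply display nothing instead of flickering between labels.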
- First, I import the libraries in the Intel oneAPI kernel
- Collect the dataset
- Preprocess the dataset
- Create and save the model
- Finally, I deploy the model
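The preprocessing step above can be sketched like this, assuming 21 (x, y) hand landmarks per frame as MediaPipe Hands produces; the notebook's actual preprocessing may differ in detail.

```python
import numpy as np

def preprocess_landmarks(landmarks):
    """Normalize 21 (x, y) hand landmarks into a model-ready feature vector.

    Translates so the wrist (landmark 0) is the origin, then scales by the
    largest absolute coordinate so values fall in [-1, 1]. This makes the
    features invariant to where the hand sits in the frame and to hand size.
    """
    pts = np.asarray(landmarks, dtype=np.float32).reshape(21, 2)
    pts = pts - pts[0]                  # wrist-relative coordinates
    scale = np.abs(pts).max()
    if scale > 0:
        pts /= scale                    # size-normalized
    return pts.flatten()                # 42-dimensional feature vector

# Example with dummy landmark data:
features = preprocess_landmarks(np.random.rand(21, 2))
print(features.shape)  # (42,)
```

Normalizing this way means the model learns hand shape rather than hand position, which is the usual reason landmark-based gesture pipelines outperform raw-pixel ones on small datasets.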
I chose oneDNN because it provides an optimized deep learning library together with the Python oneAPI kernel, which accelerates my project and yields high-accuracy output.
oneAPI Deep Neural Network Library (oneDNN) is an open-source, cross-platform performance library of basic building blocks for deep learning applications. oneDNN is part of [oneAPI](https://oneapi.io). The library is optimized for Intel(R) Architecture Processors, Intel Graphics, and Arm\* 64-bit Architecture (AArch64)-based processors. oneDNN has experimental support for the following architectures: NVIDIA\* GPU, AMD\* GPU, OpenPOWER\* Power ISA (PPC64), IBMz\* (s390x), and RISC-V.

oneDNN is intended for deep learning application and framework developers interested in improving application performance on Intel CPUs and GPUs. Deep learning practitioners should use one of the applications enabled with oneDNN.
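In a TensorFlow-based setup like this one, oneDNN optimizations are enabled by default from TensorFlow 2.9 onward; for earlier 2.x versions they can be force-enabled with an environment variable before launching the notebook:

```shell
# oneDNN optimizations are on by default in TensorFlow 2.9+;
# this flag force-enables them in earlier 2.x versions.
export TF_ENABLE_ONEDNN_OPTS=1
```

When the flag takes effect, TensorFlow logs "oneDNN custom operations are on" at import time.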
If you want to contribute to this project, please follow these steps:
- Fork the repository on GitHub.
- Create a new branch with a descriptive name for your feature or bug fix.
- Implement your changes and make sure the code passes all tests.
- Commit your changes and push them to your forked repository.
- Submit a pull request to the main repository.
- Intel oneAPI : Sign Language Gesture Translator => https://devmesh.intel.com/projects/oneapi-sign-language-translator
- Intel oneAPI : Emotion Recognition using NLP => https://devmesh.intel.com/projects/intel-oneapi-based-emotion-recognition-using-nlp-audio-text
- Intel DevCloud => https://devcloud.intel.com/oneapi/
- oneAPI open source => www.oneapi.io/open-source/