A digital assistant with full automation capabilities: natural language understanding, speech input and output, emotional intelligence, and web browsing. It also performs day-to-day activities such as sending messages and answering calls. It uses fine-tuned deep learning Large Language Models (LLMs) for emotional understanding and question answering.
- Front End: HTML, CSS, JavaScript
- Digital Assistant: Python (pyttsx3, NLTK, PyTorch, pandas, NumPy, scikit-learn, transformers)
- Backend and API: Django and Django REST Framework
- Deep Learning LLMs: Hugging Face API
Dataset Collection:
- Gather publicly available datasets from sources like Kaggle, Hugging Face, and other repositories (see the loading sketch after this list).
- Create custom-labelled datasets from movie dialogues, social media conversations, video transcripts, and closed-caption (CC) content to train the models for specific tasks.
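As a quick illustration, the snippet below loads a public emotion dataset with the Hugging Face `datasets` library; the dataset name (`dair-ai/emotion`) is an example choice, not necessarily the one used in this project.

```python
# Illustrative sketch: load a public emotion dataset from the Hugging Face Hub.
from datasets import load_dataset

emotion_ds = load_dataset("dair-ai/emotion")        # example dataset, not the project's exact source
print(emotion_ds)                                    # DatasetDict with train/validation/test splits
print(emotion_ds["train"][0])                        # {'text': ..., 'label': ...}
print(emotion_ds["train"].features["label"].names)   # e.g. ['sadness', 'joy', 'love', 'anger', 'fear', 'surprise']
```

Custom-labelled data can be stored in the same format (text plus label columns) so that both sources flow through one preprocessing path.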
Model Fine-Tuning:
- Fine-tune llama2-7b and BERT models on the collected datasets for sentiment analysis, question answering, and named entity recognition (NER) tasks (a fine-tuning sketch follows this list).
- Implement transfer learning techniques to adapt the models to various contexts and domains.
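A minimal fine-tuning sketch using the `transformers` Trainer is shown below; the checkpoint (`bert-base-uncased`), dataset, and hyperparameters are illustrative placeholders rather than the project's exact configuration.

```python
# Minimal sentiment/emotion fine-tuning sketch (assumed checkpoint, dataset, and hyperparameters).
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("dair-ai/emotion")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # Pad/truncate every utterance to a fixed length so batches stack cleanly
    return tokenizer(batch["text"], padding="max_length", truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True)

# 6 labels in this example dataset: sadness, joy, love, anger, fear, surprise
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=6)

args = TrainingArguments(
    output_dir="bert-emotion",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
)
trainer.train()
print(trainer.evaluate())  # validation loss after fine-tuning
```

The same pattern (swap the head and the dataset) carries over to the question-answering and NER fine-tuning tasks.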
Contextual Understanding:
- Utilize Q/A datasets containing contextual information with emotional annotations to enhance the models' understanding of emotional context (a question-answering example follows this list).
- Incorporate ontologies to interpret cultural contexts and social relationships influencing emotional expression.
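For illustration, the sketch below runs extractive question answering over a short emotional context using a public SQuAD-tuned checkpoint; the model name and example text are assumptions, not the project's fine-tuned model or data.

```python
# Hedged sketch: extractive QA over a context passage with a public checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = (
    "Riya sounded upset on the call because her flight was cancelled, "
    "but she felt relieved after rebooking for the next morning."
)
result = qa(question="Why was Riya upset?", context=context)
print(result["answer"], result["score"])  # answer span plus confidence score
```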
Reinforcement Learning:
- Implement a reinforcement learning mechanism based on user feedback (positive/negative ratings) to continually improve the models' performance in generating responses, as sketched below.
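The snippet below is a hypothetical, simplified stand-in for that mechanism: a small bandit-style selector that biases response choice toward templates with better cumulative user ratings. The class and response names are invented for illustration and are not the project's actual implementation.

```python
# Hypothetical feedback loop: +1/-1 user ratings steer which response template is picked next.
import random
from collections import defaultdict

class FeedbackSelector:
    def __init__(self, epsilon=0.1):
        self.scores = defaultdict(float)   # cumulative reward per response id
        self.counts = defaultdict(int)     # how often each response was shown
        self.epsilon = epsilon             # exploration rate

    def choose(self, candidate_ids):
        # Explore occasionally; otherwise pick the best-rated candidate so far
        if random.random() < self.epsilon:
            return random.choice(candidate_ids)
        return max(candidate_ids,
                   key=lambda cid: self.scores[cid] / max(self.counts[cid], 1))

    def record(self, response_id, rating):
        # rating: +1 for positive feedback, -1 for negative feedback
        self.scores[response_id] += rating
        self.counts[response_id] += 1

selector = FeedbackSelector()
chosen = selector.choose(["empathetic_reply", "neutral_reply", "cheerful_reply"])
selector.record(chosen, rating=+1)
```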
Our workflow follows a sequential process (illustrated with a sketch after the list):
- User Input: Users provide input queries or statements.
- Intent Recognition: Determine the intent behind the user input.
- Sentiment Analysis: Analyze the sentiment of the input.
- Response Generation: Generate appropriate responses based on the input and sentiment analysis.
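The sketch below walks one utterance through these four stages, using a public zero-shot classifier for intent and a default sentiment pipeline as stand-ins for the project's fine-tuned models; the intent labels and response templates are illustrative only.

```python
# Illustrative end-to-end pass: input -> intent -> sentiment -> response.
from transformers import pipeline

intent_clf = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
sentiment_clf = pipeline("sentiment-analysis")

def respond(user_input: str) -> str:
    # 1) Intent recognition via zero-shot classification over candidate intents
    intents = ["send message", "answer call", "web search", "small talk"]
    intent = intent_clf(user_input, candidate_labels=intents)["labels"][0]

    # 2) Sentiment analysis of the same utterance
    sentiment = sentiment_clf(user_input)[0]["label"]

    # 3) Response generation conditioned on intent and sentiment (templated here)
    tone = "I'm sorry to hear that. " if sentiment == "NEGATIVE" else ""
    return f"{tone}Sure, I can help with '{intent}'."

print(respond("I'm so frustrated, can you text my brother that I'll be late?"))
```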
Project Milestones:
- Milestone 1: Dataset Collection and Preprocessing
- Milestone 2: Model Fine-Tuning for Sentiment Analysis
- Milestone 3: Model Fine-Tuning for Question Answering
- Milestone 4: Model Fine-Tuning for Named Entity Recognition
- Milestone 5: Integration of Reinforcement Learning Mechanism
- Milestone 6: Testing and Evaluation
- Milestone 7: Documentation and Finalization
Setup and Installation:
- Clone the git repository locally.
git clone https://github.com/4darsh-Dev/MuskanAi.git
- Install Python and set up a virtual environment.
pip install virtualenv
cd MuskanAi
python -m venv myenv
- Activate the virtual environment (Windows):
.\myenv\Scripts\activate
- Activate the virtual environment (macOS/Linux):
source myenv/bin/activate
- Install the required modules and libraries.
pip install -r requirements.txt
- Run the Django development server.
python manage.py makemigrations
python manage.py migrate
python manage.py runserver
- The server will start at localhost (for example, http://127.0.0.1:8000/).
Detailed documentation on usage, contribution guidelines, and API integration can be found in the Documentation Link.
We express our gratitude to the incredible individuals who have contributed to the development and success of MuskanAi. Your dedication, passion, and insights have played a pivotal role in shaping this project.
Special thanks to the open-source community for their continuous support and collaborative spirit. Your contributions, whether big or small, have contributed to the growth and improvement of MuskanAi.
We value your feedback! Report issues at [email protected], propose features, or submit pull requests. Let's create a fair and transparent digital environment together!