From d51b50ae6a2b11109882086d7a7ab58f93b63a6e Mon Sep 17 00:00:00 2001
From: Nandini Singh
Date: Wed, 25 Oct 2023 01:15:07 +0530
Subject: [PATCH] Delete Projects/AIML-DATA-SCIENCE-PROJECTS/toxicity-grader-using-deep-learning directory

will be adding this readme to the main folder containing the toxicity grader project.
---
 .../README.md | 43 -------------------
 1 file changed, 43 deletions(-)
 delete mode 100644 Projects/AIML-DATA-SCIENCE-PROJECTS/toxicity-grader-using-deep-learning /README.md

diff --git a/Projects/AIML-DATA-SCIENCE-PROJECTS/toxicity-grader-using-deep-learning /README.md b/Projects/AIML-DATA-SCIENCE-PROJECTS/toxicity-grader-using-deep-learning /README.md
deleted file mode 100644
index e1ef190..0000000
--- a/Projects/AIML-DATA-SCIENCE-PROJECTS/toxicity-grader-using-deep-learning /README.md
+++ /dev/null
@@ -1,43 +0,0 @@
-## Toxicity Grader
-
-This repo contains code for a toxicity grading model using TensorFlow and Gradio. The model is trained to detect whether a comment is toxic, obscene, threatening, or involves identity hate. The steps below outline the process of installing dependencies, preprocessing the data, creating a sequential model, making predictions, evaluating the model, and testing it using Gradio.
-
-### 1. Install Dependencies
-
-Before running the code, please make sure to install the required dependencies by executing the following command:
-
-```
-!pip install tensorflow tensorflow-gpu pandas matplotlib sklearn
-```
-
-### 2. Preprocess
-
-The preprocessing step involves converting the comment text into numerical features using the `TextVectorization` layer from TensorFlow. The data is then split into training, validation, and test datasets.
-
-### 3. Create Sequential Model
-
-The sequential model is defined using the TensorFlow Keras API. It consists of an embedding layer, a bidirectional LSTM layer, fully connected layers for feature extraction, and a final output layer with sigmoid activation. The model is compiled with binary cross-entropy loss and the Adam optimizer.
-
-### 4. Train the Model
-
-The model is trained using the training and validation datasets. The training progress is visualized using a line plot showing the loss and metrics over epochs.
-
-### 5. Make Predictions
-
-The model can be used to make predictions on new comment data. The code provides examples of how to input a single comment or a batch of comments to obtain toxicity predictions.
-
-### 6. Evaluate Model
-
-The model's performance is evaluated using precision, recall, and accuracy metrics on the test dataset.
-
-### 7. Test and Gradio
-
-The code installs the Gradio library and saves the trained model. It then loads the model and defines a function to score comments based on toxicity. Gradio is used to create a user-friendly interface where users can input a comment and see the toxicity grades for different categories.
-
-To run the Gradio interface, please make sure to install the required dependencies by executing the command:
-
-```
-!pip install gradio jinja2
-```
-
-Once the interface is launched, users can input comments and obtain the toxicity grades in different categories.
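
For reviewers who want to see what the deleted README was describing, a minimal sketch of the preprocessing in step 2 follows. It assumes a Jigsaw-style `train.csv` with a `comment_text` column and six binary label columns; the vocabulary size, sequence length, and split ratios are illustrative and do not come from the patch itself.

```
# Sketch of step 2 (preprocessing) from the deleted README.
# Assumptions: a Jigsaw-style train.csv with a comment_text column and six
# binary label columns; vocabulary size, sequence length, batch size, and
# split ratios are illustrative, not values taken from the project.
import pandas as pd
import tensorflow as tf
from tensorflow.keras.layers import TextVectorization

df = pd.read_csv("train.csv")  # hypothetical path to the training data
X = df["comment_text"]
y = df[["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]].values

MAX_FEATURES = 200_000  # assumed vocabulary size

vectorizer = TextVectorization(
    max_tokens=MAX_FEATURES,
    output_sequence_length=1800,  # assumed fixed comment length
    output_mode="int",
)
vectorizer.adapt(X.values)

dataset = tf.data.Dataset.from_tensor_slices((vectorizer(X.values), y))
dataset = dataset.cache().shuffle(160_000).batch(16).prefetch(8)

# Rough 70/20/10 split into training, validation, and test partitions.
n_batches = len(dataset)
train = dataset.take(int(n_batches * 0.7))
val = dataset.skip(int(n_batches * 0.7)).take(int(n_batches * 0.2))
test = dataset.skip(int(n_batches * 0.9)).take(int(n_batches * 0.1))
```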
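
Step 3 names an embedding layer, a bidirectional LSTM, fully connected layers, and a sigmoid output trained with binary cross-entropy and the Adam optimizer; a sketch under those constraints might look like the following. The layer widths and single training epoch are assumptions, and `MAX_FEATURES`, `train`, and `val` carry over from the preprocessing sketch above.

```
# Sketch of steps 3-4 (model definition and training) from the deleted README.
# Layer widths and the epoch count are illustrative assumptions.
import pandas as pd
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, Dense

model = Sequential([
    Embedding(MAX_FEATURES + 1, 32),             # token embeddings
    Bidirectional(LSTM(32, activation="tanh")),  # context from both directions
    Dense(128, activation="relu"),               # fully connected feature extractors
    Dense(256, activation="relu"),
    Dense(128, activation="relu"),
    Dense(6, activation="sigmoid"),              # one independent probability per label
])
model.compile(loss="binary_crossentropy", optimizer="adam")

history = model.fit(train, epochs=1, validation_data=val)

# Visualize the training curves, as described in step 4.
pd.DataFrame(history.history).plot()
plt.show()
```

A sigmoid output with binary cross-entropy treats each category as an independent yes/no decision, which is what lets one comment be graded as both toxic and obscene at the same time.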
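
Steps 5 and 6 (scoring new comments and reporting precision, recall, and accuracy on the test split) could be exercised along these lines, reusing `vectorizer`, `model`, and `test` from the sketches above. The 0.5 decision threshold and the flattening of the multi-label matrix before updating the metrics are choices made for this sketch, not details taken from the patch.

```
# Sketch of steps 5-6 (predictions and evaluation) from the deleted README.
# The 0.5 threshold and the flattened multi-label metrics are choices made
# for this sketch.
from tensorflow.keras.metrics import Precision, Recall, BinaryAccuracy

# Score a single new comment (a batch of one).
sample = vectorizer(["You are a wonderful person!"])
print((model.predict(sample) > 0.5).astype(int))

# Evaluate on the held-out test partition.
precision, recall, accuracy = Precision(), Recall(), BinaryAccuracy()
for X_batch, y_batch in test.as_numpy_iterator():
    yhat = model.predict(X_batch)
    precision.update_state(y_batch.flatten(), yhat.flatten())
    recall.update_state(y_batch.flatten(), yhat.flatten())
    accuracy.update_state(y_batch.flatten(), yhat.flatten())

print(f"Precision: {precision.result().numpy():.3f}, "
      f"Recall: {recall.result().numpy():.3f}, "
      f"Accuracy: {accuracy.result().numpy():.3f}")
```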
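
Finally, step 7 saves the trained model, reloads it, and serves it through Gradio; a rough wiring is shown below, with the file name, label list, and widget configuration chosen purely for illustration.

```
# Sketch of step 7 (saving the model and serving it through Gradio) from the
# deleted README. The file name, label names, and widget settings are assumptions.
import gradio as gr
import tensorflow as tf

model.save("toxicity.h5")                        # persist the trained model
model = tf.keras.models.load_model("toxicity.h5")

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

def score_comment(comment):
    """Return a per-category toxicity grade for one comment."""
    probs = model.predict(vectorizer([comment]))[0]
    return {label: float(p) for label, p in zip(LABELS, probs)}

demo = gr.Interface(
    fn=score_comment,
    inputs=gr.Textbox(lines=2, placeholder="Comment to grade"),
    outputs=gr.Label(num_top_classes=6),
    title="Toxicity Grader",
)
demo.launch()
```

Calling `demo.launch()` prints a local URL and, when run in a notebook, renders the interface inline, so the grades for each category can be inspected interactively.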