
Develop AI Models for Response Categorization and Sentiment Analysis #294

Closed
jvJUCA opened this issue Jan 31, 2024 · 11 comments

@jvJUCA
Member

jvJUCA commented Jan 31, 2024

  • Develop machine learning models for categorizing user responses into predefined categories or topics relevant to the research objectives.
  • Implement sentiment analysis algorithms to automatically classify the sentiment expressed in user feedback as positive, negative, or neutral.
  • Train AI models on annotated datasets of user responses to improve accuracy and performance in categorization and sentiment analysis tasks.
  • Evaluate the effectiveness and reliability of the AI models through validation and testing against benchmark datasets.
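For the evaluation point above, a minimal sketch of benchmarking a first baseline could look like the following; the CSV path, column names, and the TF-IDF + logistic regression baseline are placeholders rather than project decisions:

```python
# Evaluation sketch; "annotated_responses.csv" and its "text"/"sentiment" columns are placeholders,
# and the TF-IDF + logistic regression model is only a baseline to compare stronger models against.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("annotated_responses.csv")
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["sentiment"], test_size=0.2, stratify=df["sentiment"], random_state=42)

baseline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
baseline.fit(X_train, y_train)
print(classification_report(y_test, baseline.predict(X_test)))
```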
@jvJUCA jvJUCA added the Enhancement New feature or request label Jan 31, 2024
@jvJUCA jvJUCA added this to the [M11] - AI analysis milestone Jan 31, 2024
@kindler-king

Hello @jvJUCA , I am Arya Sarkar, an NLP developer based in Kolkata, India. I am keen to start contributing to the project; I am relatively new to open source and hope to be part of GSoC 2024, working on the project titled Data Extraction for Sentiment Analysis from Usability Tests.

I will spend some time understanding the code behind the project; in the meantime, I had a question about this particular issue.
I would love to know a few things:

  1. How are user responses stored? (i.e. an example data structure of user feedback, so I know what I can work with)
  2. This task has two aspects, topic modelling and sentiment analysis. I was curious whether this is just for data annotation purposes, or do you also want to reliably convey the results in the interface? Or is that a separate task altogether?

I have experience working with a similar tech stack over the last couple of years and feel that I can help with this by leveraging that experience. Happy to be a part of the community :D

@kindler-king

@jvJUCA This got me thinking further about possible solutions and there are two ways to proceed:

  1. Using few-shot training examples with something like LLaMA-7B could help create a more generalised model, which can handle topic modelling, sentiment analysis, and maybe even summarization and Q/A in the future, depending on further requirements. However, although the model is open source, hosting it on a GPU server has extra costs attached. It might be possible to host it for free on Colab as shown here, but I feel this would make the solution more complicated for no reason.

  2. That's why I am more inclined towards the second option, i.e. using SOTA sentiment analysis and topic modelling models combined with feature engineering and standard text data pre-processing (we don't need LLMs every time XD). I feel this solution is simple, memory efficient, and will cover all the bases while being free and open source too :D
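To make the second option concrete, here is a minimal sketch using off-the-shelf Hugging Face pipelines; the checkpoint, candidate categories, and feedback strings are illustrative placeholders, not project decisions:

```python
# Illustrative sketch only: checkpoints, categories, and feedback strings are placeholders.
from transformers import pipeline

feedback = [
    "The signup flow was confusing and took too long.",
    "Loved the new dashboard, very easy to navigate.",
]

# Off-the-shelf sentiment analysis (positive / negative out of the box).
sentiment = pipeline("sentiment-analysis")
print(sentiment(feedback))

# Zero-shot categorization into predefined research topics, no fine-tuning needed.
categories = ["navigation", "performance", "visual design", "onboarding"]
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
for text in feedback:
    result = classifier(text, candidate_labels=categories)
    print(text, "->", result["labels"][0])
```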

What are your thoughts?
P.S. Would love it if you could share the Discord channel URL; I saw it mentioned in the readme file but did not see its link anywhere.

@harshkasat

Hello, my name is Harsh Kasat and I am from India. I am interested in contributing to the project as I am new to open source and looking to be part of GSoC 2024. I would like to work on the project Data Extraction for Sentiment Analysis from Usability Tests. I have the same question as @kindler-king: how are user responses saved? We need to work on the data first to analyze it (cleaning, grouping, and performing EDA on the data). Once we have done all the analytic work, we can move forward with the sentiment analysis.
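As a rough idea of that first pass, assuming responses end up as simple records with a text answer and some task metadata (the file name and field names below are guesses, not the project's actual schema):

```python
# First-pass EDA sketch; the JSON file and the "task"/"answer" fields are assumptions
# about how user responses might be stored, not the project's actual schema.
import pandas as pd

responses = pd.read_json("user_responses.json")

print(responses.info())                          # column types and missing values
print(responses["task"].value_counts())          # how many answers per task
responses["answer_len"] = responses["answer"].str.split().str.len()
print(responses["answer_len"].describe())        # length distribution of free-text answers
```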

@harshkasat

  • I would like to add some points regarding the above discussion. Firstly, while we can run the Llama-2-7B model on Google Colab, the free session disconnects when the notebook is idle. Therefore, we could consider using a server GPU such as AWS SageMaker, where we can upload the model and expose an inference API on Hugging Face. This may cost some money, but we can use a minimal model and GPU to keep the costs low.

  • Secondly, for sentiment analysis, we can use the Transformers library from Hugging Face. We can also pair it with a vector database, which could later support RAG (if we build Q/A using an LLM). I have previously worked with recent LLM models like GPT, Gemini, Gemma, Llama 2, Phind, and Mistral.
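To sketch what the vector-database part might look like, here is a minimal embedding-and-retrieval example with sentence-transformers and FAISS; the checkpoint and example texts are placeholders, and this only covers the retrieval half of a possible RAG setup:

```python
# Embedding + similarity search sketch for a future RAG/Q&A step; checkpoint and texts are placeholders.
import faiss
from sentence_transformers import SentenceTransformer

texts = [
    "The checkout page kept freezing on my phone.",
    "Search results felt slow but accurate.",
    "I could not find the settings menu.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(texts, normalize_embeddings=True).astype("float32")

index = faiss.IndexFlatIP(embeddings.shape[1])   # inner product == cosine on normalized vectors
index.add(embeddings)

query = model.encode(["Which responses mention performance problems?"],
                     normalize_embeddings=True).astype("float32")
scores, ids = index.search(query, 2)
print([texts[i] for i in ids[0]], scores[0])
```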

@kindler-king

kindler-king commented Feb 22, 2024

@harshkasat The SageMaker approach with minimal computing power like a t3-small instance makes response times very high. If we actually want to run an LLM and pricing is a potential issue, it would make sense to use models with a smaller footprint than Llama-7B. GPT4All, although nowhere near as powerful as Llama-7B, can serve a similar purpose in the event we implement a RAG/Q&A pipeline in the future.

In my experience, fine-tuning a RoBERTa-base model is the best possible option for SENTIMENT ANALYSIS tasks and requires no special computing power. Similarly, for topic modelling, a BERT model fine-tuned on topic modelling tasks is the best performer (I spent quite some time on both of these tasks last year).
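Not a final recipe, but roughly what that fine-tuning could look like with the Trainer API; the CSV files, three-way label count, and hyperparameters below are placeholders:

```python
# Rough fine-tuning sketch; train.csv / test.csv are placeholders with "text" and "label" columns,
# and the hyperparameters are arbitrary starting points.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=3)

args = TrainingArguments(output_dir="roberta-sentiment", num_train_epochs=3,
                         per_device_train_batch_size=16, evaluation_strategy="epoch")
trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"], eval_dataset=dataset["test"])
trainer.train()
```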

@jvJUCA I would love to see a data sample of user responses to get a high-level idea of the possible techniques to use, and would be happy to discuss this in more detail :D

@harshkasat

@kindler-king, I totally agree: if we don't want RAG/Q&A, a model with a large memory footprint makes no sense, so we should rather go for a smaller one. We can opt for BERT-family models such as BERT, RoBERTa, and DistilBERT. For quantization we can use I-BERT, or we can use LoRA-based fine-tuning.
This all depends on the data we're getting from the user responses. 😊
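If we do go down the LoRA route on a BERT-family model, a minimal sketch with the peft library might look like this; the rank/alpha/dropout values are arbitrary starting points:

```python
# Minimal LoRA setup sketch; r / lora_alpha / lora_dropout are arbitrary starting points,
# and "query"/"value" are RoBERTa's attention projection modules.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=3)

config = LoraConfig(task_type=TaskType.SEQ_CLS, r=8, lora_alpha=16,
                    lora_dropout=0.1, target_modules=["query", "value"])
model = get_peft_model(base, config)
model.print_trainable_parameters()   # only a small fraction of the weights will be trained
```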

@marcgc21
Member

@harshkasat and @kindler-king, I have to tell you that this issue is quite related to a project that we have not shared publicly, as it has been sent to a journal for publication.
The best way to start working on this might be to start thinking about a system that would allow doing that, and how it might work. What I can share with you is that this kind of sentiment analysis comes from a video of a user working through some tasks. So writing some code that helps you extract sentiment from a video might also be a good starting point.
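As one possible starting point in that direction, a rough frame-sampling sketch with an off-the-shelf facial-emotion library could look like the following; the video path, sampling rate, and choice of DeepFace are only illustrations:

```python
# Rough starting point: sample frames from a session recording and tag the dominant emotion.
# The video path, sampling interval, and choice of DeepFace are all illustrative.
import cv2
from deepface import DeepFace

cap = cv2.VideoCapture("usability_session.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30
frame_idx, emotions = 0, []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % int(fps) == 0:            # analyze roughly one frame per second
        # recent DeepFace versions return a list of dicts, one per detected face
        analysis = DeepFace.analyze(frame, actions=["emotion"], enforce_detection=False)
        emotions.append(analysis[0]["dominant_emotion"])
    frame_idx += 1

cap.release()
print(emotions)                              # e.g. ["neutral", "happy", "sad", ...]
```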

@marcgc21 marcgc21 self-assigned this Feb 25, 2024
@kindler-king

That sounds intriguing. I recently authored a paper on multi-modal emotion recognition, i.e. fusing TEXT data (from Twitter) with VIDEO data to improve the performance of sentiment analysis in attention space. I will start working on this issue then and try to implement a pretrained CNN for video-based sentiment analysis.

@marcgc21 Do you feel that we should proceed with:

  1. Extracting sentiment directly from videos,
  2. Extracting sentiment from the textual transcript obtained from the video,
  3. Creating a Hybrid mechanism to use both VIDEO and TEXTUAL sentiment analysis to determine the final sentiment being conveyed in a video?

Which do you think is closest to the use-case you have in mind? I can start working on that.
Also, can you share the Discord link? I couldn't find it anywhere, and it would be easier to connect closely with the community on Discord.

Thanks a lot for your input @marcgc21 Have a great day :D

@kindler-king

An option 4 that comes to my mind, and which seems reasonable, is:
Extracting the AUDIO from a video and performing SENTIMENT analysis on the AUDIO itself.
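A rough sketch of that audio route: extract the audio track with ffmpeg, transcribe it, and reuse the text sentiment model on the transcript (the paths and checkpoints are placeholders, and ffmpeg is assumed to be installed):

```python
# Audio-route sketch: extract the audio track, transcribe it, then score the transcript.
# Paths and checkpoints are placeholders; ffmpeg must be available on the system.
import subprocess
from transformers import pipeline

subprocess.run(["ffmpeg", "-y", "-i", "usability_session.mp4",
                "-ar", "16000", "-ac", "1", "session_audio.wav"], check=True)

asr = pipeline("automatic-speech-recognition", model="openai/whisper-small", chunk_length_s=30)
transcript = asr("session_audio.wav")["text"]

sentiment = pipeline("sentiment-analysis")
print(sentiment(transcript[:512]))           # truncate very long transcripts before scoring
```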

@marcgc21 How does that sound to you? Out of the 4 options, which do you think I should explore?

@kindler-king

kindler-king commented Feb 28, 2024

Hello @marcgc21 @jvJUCA Here's a simple implementation of a VIDEO SENTIMENT TRACKING system.
The neural network was fine-tuned on the FER2013 dataset, and I made this a year back. On my system it took a good 6 hours to train the model, but luckily I had the model checkpoints saved from my earlier experiments.

It is nowhere near 100% accurate, but it can serve to estimate the sentiment of USERS from their WEBCAM recordings or other videos.
Possible areas of enhancement include:

  1. Factoring in multi-modal data (i.e. also the sentiment from the AUDIO attached to the VIDEO)
  2. Using a better pre-trained model fine-tuned on a larger dataset containing more labelled samples.

Link to view the VIDEO sample of the result on STOCK videos I found on the web:
https://drive.google.com/file/d/18EE8zBlXzUpj0AgmUYCgsKrxCQJB98_C/view?usp=sharing

Another interesting thing is that this is REAL-TIME, and can be replicated on servers with much less computing power than a standard LLM.
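For context, the core loop of that kind of tracker could be wired up roughly like this; the checkpoint name is a placeholder, and the 48x48 grayscale input simply follows the FER2013 convention:

```python
# Sketch of a real-time webcam emotion loop; "fer2013_cnn.h5" is a placeholder checkpoint,
# and the 48x48 grayscale input follows the FER2013 format.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]
model = load_model("fer2013_cnn.h5")
face_detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                    # webcam; a video file path works the same way
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)[0]
        label = EMOTIONS[int(np.argmax(probs))]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("sentiment", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```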
Do let me know your thoughts and possible directions from here.

@kindler-king

@marcgc21 @jvJUCA Would appreciate some feedback on the previous step :D and possible directions to move forward from here.

@KarinePistili KarinePistili added Future Work and removed Enhancement New feature or request labels Apr 15, 2024
@ruxailab ruxailab locked and limited conversation to collaborators Apr 15, 2024
@jvJUCA jvJUCA closed this as completed Jun 19, 2024