A Speech Emotion Recognition (SER) system combines several components and works by analyzing audio signals to identify emotions. Emotion can be inferred from one component or a combination of them, but in this fun machine-learning project we will focus on the acoustic part of speech, including pitch, jitter, tone, etc.
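As a minimal sketch of what "acoustic features" could mean in practice, here is one way to pull pitch and MFCCs out of an audio clip, assuming the librosa library is used and with a hypothetical file path (`example.wav`); the exact feature set is an assumption, not a final design:

```python
import numpy as np
import librosa

def extract_acoustic_features(path):
    """Extract a few simple acoustic features from an audio file."""
    y, sr = librosa.load(path, sr=None)  # keep the original sampling rate

    # Pitch (fundamental frequency) estimated with the YIN algorithm
    f0 = librosa.yin(y, fmin=librosa.note_to_hz('C2'),
                     fmax=librosa.note_to_hz('C7'), sr=sr)

    # 13 MFCCs per frame, a common compact spectral representation
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

    # Summarise each feature over time with mean and standard deviation
    return np.concatenate([
        [np.mean(f0), np.std(f0)],
        np.mean(mfcc, axis=1),
        np.std(mfcc, axis=1),
    ])

features = extract_acoustic_features("example.wav")  # hypothetical file
print(features.shape)  # (28,) -> 2 pitch stats + 13 MFCC means + 13 MFCC stds
```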
Hey @prodigyinfotech, can you assign this issue to me?
For this project, I was thinking of using MFCC (Mel-frequency cepstral coefficients) for feature extraction and a Bi-directional LSTM network for sequence modeling.
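A rough sketch of that idea, assuming TensorFlow/Keras and MFCC sequences padded to a fixed length; the layer sizes, padding length, and the 8 emotion classes are placeholder assumptions rather than a settled architecture:

```python
import tensorflow as tf

N_MFCC = 13        # MFCC coefficients per frame (assumption)
MAX_FRAMES = 300   # padded sequence length (assumption)
N_CLASSES = 8      # e.g. the 8 emotion labels in RAVDESS (assumption)

def build_bilstm_model():
    """Bi-directional LSTM over a padded sequence of MFCC frames."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(MAX_FRAMES, N_MFCC)),
        # Skip the zero-padded frames when unrolling the LSTM
        tf.keras.layers.Masking(mask_value=0.0),
        tf.keras.layers.Bidirectional(
            tf.keras.layers.LSTM(128, return_sequences=True)),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_bilstm_model()
model.summary()
```

Stacking two bidirectional layers lets the first pass sequence context forward while the second condenses it into a fixed-size vector for classification; a single layer would also work for a first baseline.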