
Prediction failure #21

Open

Mlallena opened this issue Jun 22, 2021 · 0 comments

Comments

@Mlallena
Hello.

I am testing your idea and am trying to create a network that can distinguish between three different languages. However, I am running into a few problems.

The first is that the precision of the model never seems to rise above 65%. Could this be a matter of not having enough data to work with?

The second is that, when I try to use one of the saved states to run a test (using the same procedure you mention here), the prediction result is always the same (0.380580, 0.269690, 0.349730), no matter which spectrogram I use. What is the issue here? How can the system return the same prediction for very different audio files?
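For reference, this is roughly the test procedure I am following. A minimal sketch, assuming a Keras/TensorFlow model saved with `model.save()` and grayscale spectrogram PNGs; the file names, image size, and scaling here are hypothetical and may differ from the repository's actual pipeline:

```python
import numpy as np
from PIL import Image
from tensorflow.keras.models import load_model

# Hypothetical paths -- replace with the actual checkpoint and spectrogram files.
MODEL_PATH = "saved_state.h5"
SPECTROGRAM_PATH = "example_spectrogram.png"

# Load the full model (architecture + weights). If only weights were saved,
# the architecture has to be rebuilt identically before load_weights(),
# otherwise the network effectively predicts with untrained parameters.
model = load_model(MODEL_PATH)

# Preprocess the spectrogram exactly as during training: same size,
# same channel count, same scaling. A train/test preprocessing mismatch
# is one common cause of near-constant softmax outputs.
img = Image.open(SPECTROGRAM_PATH).convert("L").resize((128, 128))
x = np.asarray(img, dtype=np.float32) / 255.0
x = x.reshape(1, 128, 128, 1)  # add the batch dimension expected by predict()

probs = model.predict(x)[0]
print("Class probabilities:", probs)
```

If the output stays constant across very different inputs, I assume it points either at the weights not actually being loaded or at the preprocessing not matching training, but I would like to confirm what the intended procedure is.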
