
Tools: Python (OpenCV 3.0 + Keras Convolution2D neural network). Description: Research project for the Soft Computing course.


aleksandarbos/Sound-Recognition-Convo2D-Neural-Network


Sound Recognition

Using Neural Networks - Convolution2D

Technical specs

Python (Keras library for neural networks) + OpenCV 3.0

Summary

The main goal of this project is to recognize (fingerprint) short audio samples, such as a short speech command, a whistle, or any other sound from nature, and map them to a specific action.
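
The approach described above (audio samples classified by a Convolution2D network) could be sketched along the following lines. This is only a minimal illustration: the layer sizes, input shape, and number of classes are assumptions rather than the project's actual architecture, and it uses the current tf.keras API instead of the Keras 1.x Convolution2D layer the repository was built against.

```python
# Hypothetical sketch of a small Convolution2D classifier for
# black-and-white spectrogram images (not the repository's actual model).
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 4              # assumed number of sound classes (e.g. whistle patterns)
INPUT_SHAPE = (128, 128, 1)  # assumed size of the BW spectrogram image

model = keras.Sequential([
    layers.Input(shape=INPUT_SHAPE),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Each recognized class would then be mapped by the application to its specific action.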

Motivation

Using sound samples to reach a goal:

- Sound recognition of songs (music), sounds from nature, and the human voice
- Using the human voice to command a smartphone or a smart vehicle during a ride
- Can be of great use to people with major disabilities

Implemented:

- So far, the software is trained to recognize whistle melodies and short audio samples.
It can be easily upgraded to recognize specific types of sound.

Further implementation:

 - Sound recognition in real-time (live recording from a microphone instead of pre-recorded audio samples)
 - New data-sets and new training

Screenshots:

  • Main frame:
    [screenshot: main application frame]

Simple whistle ASCENDING test:

  • Application output:
    [screenshot: application output for the simple ascending whistle test]
  • Test analysis - FFT:
    [screenshot: FFT of the test sample]
  • Test analysis - Waveform:
    [screenshot: waveform of the test sample]
  • Test analysis - Spectrogram:
    [screenshot: spectrogram of the test sample]
  • Test analysis - Spectrogram - Black and White (ready as neural network input; see the preprocessing sketch below):
    [screenshot: black-and-white spectrogram used as neural network input]
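
As a rough illustration of the preprocessing step shown in the last screenshot, a short WAV sample could be converted into a fixed-size black-and-white spectrogram image with SciPy and OpenCV along these lines. The function name, spectrogram parameters, and output size below are assumptions for the sketch, not the project's exact code.

```python
# Hypothetical preprocessing sketch: WAV sample -> BW spectrogram image
# suitable as neural-network input (not the repository's exact code).
import cv2
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def wav_to_bw_spectrogram(path, size=(128, 128)):
    rate, samples = wavfile.read(path)
    if samples.ndim > 1:                  # mix stereo down to mono
        samples = samples.mean(axis=1)
    _, _, spec = spectrogram(samples, fs=rate)
    spec = 10 * np.log10(spec + 1e-10)    # log-power scale
    # scale to 0..255 so OpenCV can threshold the image
    img = cv2.normalize(spec, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.resize(bw, size)           # fixed-size input for the network
```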

Installation

Licence

- MIT

References

Great spectrogram article
University of Novi Sad, Faculty of Technical Sciences, AI-lab
