A web-based Optical Music Recognition (OMR) tool that translates the notes on monophonic scores into ABC notation and annotates the ABC notes onto the score image to facilitate the process of learning music. See the full article explaining this project here. This project was created in a two-day hackathon at YouthHacks 2019.
This web app is built with Flask on top of the TensorFlow model by Calvo-Zaragoza et al., published as "End-to-End Neural Optical Music Recognition of Monophonic Scores" in the Applied Sciences journal (2018).
To get started, follow the steps below:
- Install the following dependencies: TensorFlow 1.x, Flask, and OpenCV
- Download the semantic model developed by Calvo-Zaragoza et al.
- Download the semantic vocabulary
- Download the font Aaargh.ttf (needed to annotate the image with the ABC notation)
If you would like to train the semantic model yourself, head over to the tensorflow model Github repository for instructions and download the PrIMuS dataset.
Make sure your folder structure is as follows:
```
.
├── app.py
├── vocabulary_semantic.txt
├── Aaargh.ttf
├── Semantic-Model
│   ├── semantic_model.meta
│   ├── semantic_model.index
│   └── semantic_model.data-00000-of-00001
├── templates
│   ├── index.html
│   └── result.html
└── static
    └── css
        └── bulma.min.css
```
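Before launching the app, you can sanity-check the layout with a short stdlib-only script (a sketch; the file names simply mirror the tree above):

```python
from pathlib import Path

# Files the app expects, relative to the project root
# (matching the folder structure shown above).
REQUIRED_FILES = [
    "app.py",
    "vocabulary_semantic.txt",
    "Aaargh.ttf",
    "Semantic-Model/semantic_model.meta",
    "Semantic-Model/semantic_model.index",
    "Semantic-Model/semantic_model.data-00000-of-00001",
    "templates/index.html",
    "templates/result.html",
    "static/css/bulma.min.css",
]

def missing_files(root="."):
    """Return the expected files that are not present under `root`."""
    root = Path(root)
    return [f for f in REQUIRED_FILES if not (root / f).is_file()]

if __name__ == "__main__":
    missing = missing_files()
    if missing:
        print("Missing files:", ", ".join(missing))
    else:
        print("All files in place.")
```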
Once everything has been set up as above, open your terminal / command prompt, change into the directory containing `app.py`, and run `python app.py`. After a few seconds you should see a message with the URL at which the web app is served. Go to that URL, upload your music sheet, and get the result!
The annotated sheet will be saved to the same folder as `app.py` with the name `annotated.png`.
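For orientation, `app.py` wires these pieces together roughly like this (a minimal sketch only; the route paths and the `sheet` form-field name are assumptions, and the actual OMR pipeline is elided):

```python
from flask import Flask, request, render_template

app = Flask(__name__)

@app.route("/")
def index():
    # Serves the upload form (templates/index.html).
    return render_template("index.html")

@app.route("/upload", methods=["POST"])
def upload():
    # "sheet" is a hypothetical form-field name; the real app.py may differ.
    uploaded = request.files["sheet"]
    uploaded.save("input.png")
    # ... run the OMR model here, write annotated.png ...
    return render_template("result.html")

if __name__ == "__main__":
    # Flask prints the local URL (e.g. http://127.0.0.1:5000) on startup.
    app.run()
```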
A huge thanks to Calvo-Zaragoza et al. for building this awesome deep learning model, and for sharing the trained model, dataset, and code.