# Segmentation CNN

## Method

Convolutional neural networks (CNNs) for music segmentation. As in [1], a log-scaled Mel spectrogram is extracted from the audio signal, with the difference that the input spectrograms are max-pooled across beat times. Beat tracking was done using the MADMOM toolbox with the DBN beat tracking algorithm from [2]. Context windows of 16 bars are then classified by a CNN to determine whether the central beat is a segment boundary. The CNN training was implemented using Keras.
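For illustration, here is a minimal sketch of the beat-synchronous feature extraction described above, assuming librosa is used for the spectrogram; the repository's `feature_extraction.py` may differ in details such as the number of Mel bands or the pooling boundaries:

```python
# Hypothetical sketch of beat-synchronous log Mel features; assumes librosa
# and numpy, and that beat times come from the MADMOM beat tracker.
import numpy as np
import librosa

def beatwise_log_mel(audio_path, beat_times, n_mels=80, sr=22050):
    """Log-scaled Mel spectrogram, max-pooled between consecutive beats."""
    y, sr = librosa.load(audio_path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel)  # log scaling
    frame_times = librosa.frames_to_time(np.arange(log_mel.shape[1]), sr=sr)
    # Max-pool all spectrogram frames falling between consecutive beat times.
    pooled = []
    for start, end in zip(beat_times[:-1], beat_times[1:]):
        idx = (frame_times >= start) & (frame_times < end)
        if idx.any():
            pooled.append(log_mel[:, idx].max(axis=1))
    return np.stack(pooled, axis=1)  # shape: (n_mels, n_beats - 1)
```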

On the 'Internet Archive' portion of the SALAMI dataset, the method achieves a boundary detection f-measure of 59% at a tolerance of 2 beats for a random 0.9/0.1 train/test split. Audio files without a corresponding annotation were discarded.
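For reference, a minimal sketch of how such a tolerance-based boundary f-measure can be computed on the beat level; the actual evaluation (see TODO below) is done in MATLAB with the Beat Tracking Evaluation Toolbox and may count matches differently:

```python
# Hedged sketch of a boundary f-measure with a tolerance of two beats.
# Assumes estimated and reference boundaries are given as beat indices.
def boundary_f_measure(est_beats, ref_beats, tol=2):
    matched = set()
    hits = 0
    for e in est_beats:
        for i, r in enumerate(ref_beats):
            # Each reference boundary may be matched at most once.
            if i not in matched and abs(e - r) <= tol:
                matched.add(i)
                hits += 1
                break
    precision = hits / len(est_beats) if est_beats else 0.0
    recall = hits / len(ref_beats) if ref_beats else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```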

*Figures: an example of a beat-wise log Mel spectrogram, and the corresponding prediction output with ground truth segment annotations.*

Some more example outputs of the CNN with corresponding ground truth annotations can be found in the `Results` subfolder (the nicer ones :)

## Downloading the SALAMI data

1. Clone the public SALAMI data repository and put it in the `./Data` folder as a subdirectory.
2. Run the download script (which requires Python 2):

```bash
mkdir Audio
cd ./Python
python2 SALAMI_download.py ../Data/salami-data-public/metadata/id_index_internetarchive.csv ../Audio/
```

## Running the beat detection

To open the MP3 files provided by the SALAMI dataset, the ffmpeg library is needed. On Debian/Ubuntu it can be installed via

```bash
sudo apt install ffmpeg
```

After that, the beat tracking from the MADMOM library can be run on all files with

```bash
cd ./Audio
mkdir beats
DBNDownBeatTracker batch -o ./beats $(ls *.mp3)
```

This will take quite some time and use a lot of memory. After it finishes, the beat files (`*.beats.txt`) will be placed in the `beats` directory.
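The beat files contain one beat per line; a hypothetical helper for reading them, assuming the two-column time/beat-position format written by `DBNDownBeatTracker`:

```python
# Hypothetical loader for *.beats.txt files; assumes each line holds a beat
# time in seconds and the beat's position within the bar (1 = downbeat).
import numpy as np

def load_beats(beats_path):
    data = np.loadtxt(beats_path, ndmin=2)
    beat_times = data[:, 0]      # beat times in seconds
    beat_positions = data[:, 1]  # 1, 2, 3, ... within each bar
    return beat_times, beat_positions
```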

## Model training and prediction

After the beat times are extracted, the model can be trained by calling the feature extraction and training scripts:

```bash
cd ./Python
python3 feature_extraction.py
python3 train_segmentation_cnn.py
```
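As a rough illustration of what `train_segmentation_cnn.py` builds, here is a minimal Keras sketch of a boundary-classification CNN in the spirit of [1]; the input size and all layer parameters below are assumptions, not the repository's actual architecture:

```python
# Minimal Keras sketch of a boundary-classification CNN; all sizes are
# assumptions and may differ from the repository's actual model.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

n_mels, context = 80, 64  # assumed input size: Mel bands x beats in window

model = Sequential([
    Conv2D(16, (6, 8), activation='relu', input_shape=(n_mels, context, 1)),
    MaxPooling2D(pool_size=(3, 6)),
    Conv2D(32, (3, 6), activation='relu'),
    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.5),
    Dense(1, activation='sigmoid'),  # P(central beat is a segment boundary)
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
```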

The trained model will be saved in the data directory and can be used to predict segments for an unseen audio file:

```bash
python3 track_segmentation.py AUDIO_FILE
```
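Conceptually, prediction slides the context window over every beat and post-processes the resulting boundary probabilities. A hedged sketch of a simple peak-picking step (the threshold and the exact rule are assumptions; `track_segmentation.py` may post-process differently):

```python
# Hedged sketch: turn per-beat boundary probabilities into boundary times
# by keeping local maxima above a threshold.
import numpy as np

def pick_boundaries(probs, beat_times, threshold=0.5):
    """Return beat times whose probability is a local maximum >= threshold."""
    boundaries = []
    for i in range(1, len(probs) - 1):
        if (probs[i] >= threshold
                and probs[i] > probs[i - 1]
                and probs[i] >= probs[i + 1]):
            boundaries.append(beat_times[i])
    return np.asarray(boundaries)
```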

## TODO

This is work in progress! So far, the evaluation is run in MATLAB, whereas the CNN training uses the Keras Python library. Evaluation is done on the beat level, using beat-level labels constructed from the ground truth annotations. For computing the f-measure, the Beat Tracking Evaluation Toolbox was used. The feature extraction and evaluation are currently being ported to Python.

## Requirements

For the CNN training:

- Keras (Python 3)

Feature extraction in Python:

- Python 3, plus MADMOM and ffmpeg for the beat tracking step (and Python 2 for the SALAMI download script)

Evaluation:

- MATLAB with the Beat Tracking Evaluation Toolbox

## References

[1] Karen Ullrich, Jan Schlüter and Thomas Grill: "Boundary Detection in Music Structure Analysis Using Convolutional Neural Networks". ISMIR 2014.

[2] Sebastian Böck, Florian Krebs and Gerhard Widmer: "A Multi-Model Approach to Beat Tracking Considering Heterogeneous Music Styles". ISMIR 2014.