Today I read a paper titled “Robust Downbeat Tracking Using an Ensemble of Convolutional Networks”
The abstract is:
In this paper, we present a novel, state-of-the-art system for automatic downbeat tracking from music signals.
The audio signal is first segmented into frames synchronized at the tatum level of the music.
We then extract different kinds of features based on harmony, melody, rhythm, and bass content to feed convolutional neural networks that are adapted to take advantage of the characteristics of each feature.
The outputs of this ensemble of neural networks are combined to obtain one downbeat likelihood per tatum.
The downbeat sequence is finally decoded with a flexible and efficient temporal model which takes advantage of the metrical continuity of a song.
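To make the last two stages of the pipeline concrete, here is a minimal Python sketch of (a) combining per-tatum downbeat likelihoods from several networks and (b) decoding downbeat positions with a temporal model that rewards metrical continuity. This is my own simplified illustration, not the paper's method: the function names are invented, and the decoder below assumes a single fixed bar length in tatums, whereas the paper's temporal model is a more flexible probabilistic decoder.

```python
def combine_likelihoods(per_network):
    """Average per-tatum downbeat likelihoods across the network ensemble.

    per_network: list of lists, one likelihood sequence per network,
    all of the same length (one value per tatum).
    """
    n_nets = len(per_network)
    n_tatums = len(per_network[0])
    return [sum(net[t] for net in per_network) / n_nets
            for t in range(n_tatums)]


def decode_downbeats(likelihood, bar_length):
    """Decode downbeat positions under a crude continuity assumption.

    Assumes a constant meter of `bar_length` tatums per bar and picks
    the bar phase (offset) whose tatums accumulate the most downbeat
    likelihood, so all decoded downbeats are exactly one bar apart.
    """
    best_phase = max(range(bar_length),
                     key=lambda p: sum(likelihood[p::bar_length]))
    return list(range(best_phase, len(likelihood), bar_length))


# Toy example: two networks, eight tatums, four tatums per bar.
nets = [
    [0.1, 0.9, 0.1, 0.1, 0.2, 0.8, 0.1, 0.1],
    [0.1, 0.9, 0.1, 0.1, 0.2, 0.8, 0.1, 0.1],
]
combined = combine_likelihoods(nets)
downbeats = decode_downbeats(combined, bar_length=4)  # tatums 1 and 5
```

A real decoder would instead score transitions between metrical positions (e.g. with dynamic programming over an HMM), which tolerates local deviations while still favoring a continuous meter.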
We then evaluate our system on a large base of 9 datasets, compare its performance to 4 other published algorithms, and obtain a significant increase of 16.8 percentage points compared to the second-best system, at an altogether moderate cost in training and testing.
The influence of each step of the method is studied to show its strengths and shortcomings.