P7-05: Efficient Supervised Training of Audio Transformers for Music Representation Learning
Pablo Alonso-Jiménez (Universitat Pompeu Fabra)*, Xavier Serra (Universitat Pompeu Fabra), Dmitry Bogdanov (Universitat Pompeu Fabra)
Subjects (starting with primary): MIR tasks -> automatic classification; Knowledge-driven approaches to MIR -> machine learning/artificial intelligence for music; Musical features and properties -> representations of music; Musical features and properties -> timbre, instrumentation, and singing voice; Musical features and properties -> musical affect, emotion and mood; Musical features and properties -> musical style and genre
Presented In Person: 4-minute short-format presentation
In this work, we address music representation learning using convolution-free transformers. We build on top of existing spectrogram-based audio transformers such as AST and train our models on a supervised task using patchout training, similar to PaSST. In contrast to previous works, we study how specific design decisions affect downstream music tagging tasks instead of focusing on the training task. We assess the impact of initializing the training with different existing weights, using various input audio segment lengths, using learned representations from different blocks and tokens of the transformer for downstream tasks, and applying patchout at inference to speed up feature extraction. We find that 1) initializing the training from ImageNet or AudioSet weights and using longer input segments are beneficial for both the training task and the downstream tasks, 2) the best representations for the downstream tasks are located in the middle blocks of the transformer, and 3) using patchout at inference allows faster processing than our convolutional baselines while maintaining superior performance. The resulting models, MAEST, are publicly available and obtain the best performance among open models in music tagging tasks.
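As a rough illustration of the patchout idea the abstract builds on (following PaSST), the sketch below randomly drops a fraction of spectrogram patch tokens before they enter the transformer blocks; the function name, the unstructured dropping scheme, and the tensor layout are illustrative assumptions, not the authors' exact MAEST implementation.

```python
import torch

def patchout(tokens: torch.Tensor, drop_frac: float = 0.5,
             generator: torch.Generator | None = None) -> torch.Tensor:
    """Unstructured patchout: keep a random subset of patch tokens.

    tokens: (batch, num_patches, dim) patch embeddings with positional
    encodings already added, excluding class/distillation tokens.
    Returns the retained subset of shape (batch, num_kept, dim).
    """
    batch, num_patches, dim = tokens.shape
    num_kept = max(1, int(num_patches * (1.0 - drop_frac)))
    # Sample one permutation and keep its first num_kept indices, so the
    # same positions are dropped for every item in the batch.
    keep_idx = torch.randperm(num_patches, generator=generator)[:num_kept]
    return tokens[:, keep_idx, :]

# Example: halving the token sequence roughly halves the quadratic
# self-attention cost, which is what enables the inference speedups
# over convolutional baselines described in the abstract.
x = torch.randn(8, 1212, 768)   # hypothetical batch of patch embeddings
print(patchout(x, drop_frac=0.5).shape)  # torch.Size([8, 606, 768])
```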