P4-07: On the Effectiveness of Speech Self-Supervised Learning for Music
Yinghao MA (Queen Mary University of London)*, Ruibin Yuan (CMU), Yizhi Li (The University of Sheffield), Ge Zhang (University of Michigan), Chenghua Lin (University of Sheffield), Xingran Chen (University of Michigan), Anton Ragni (University of Sheffield), Hanzhi Yin (Carnegie Mellon University), Emmanouil Benetos (Queen Mary University of London), Norbert Gyenge (Sheffield University), Ruibo Liu (Dartmouth College), Gus Xia (New York University Shanghai), Roger B. Dannenberg (School of Computer Science, Carnegie Mellon University), Yike Guo (Hong Kong University of Science and Technology), Jie Fu (BAAI)
Subjects (starting with primary): Musical features and properties -> representations of music ; Knowledge-driven approaches to MIR -> machine learning/artificial intelligence for music ; Knowledge-driven approaches to MIR -> representations of music
Presented In Person: 4-minute short-format presentation
Self-supervised learning (SSL) has shown promising results in various speech and natural language processing applications. However, its efficacy in music information retrieval (MIR) remains largely unexplored. While previous SSL models pre-trained on music recordings have been mostly closed-source, recent speech models such as wav2vec 2.0 have shown promise in music modelling. Nevertheless, research exploring the effectiveness of applying speech SSL models to music recordings has been limited. We explore the music adaptation of SSL with two distinctive speech-related models, data2vec 1.0 and HuBERT, and refer to them as music2vec and musicHuBERT, respectively. We train 12 SSL models with 95M parameters under various pre-training configurations and systematically evaluate their performance on 13 different MIR tasks. Our findings suggest that training with music data can generally improve performance on MIR tasks, even when models are trained using paradigms designed for speech. However, we identify the limitations of such existing speech-oriented designs, especially in modelling polyphonic information. Based on these experimental results, we also give empirical suggestions for designing future musical SSL strategies and paradigms.