P2-03: A Dataset and Baselines for Measuring and Predicting the Music Piece Memorability
Li-Yang Tseng (National Yang Ming Chiao Tung University), Tzu-Ling Lin (National Yang Ming Chiao Tung University), Hong-Han Shuai (National Yang Ming Chiao Tung University)*, Jen-Wei Huang (National Yang Ming Chiao Tung University), Wen-Whei Chang (National Yang Ming Chiao Tung University)
Subjects (starting with primary): Evaluation, datasets, and reproducibility -> novel datasets and use cases ; MIR and machine learning for musical acoustics -> applications of machine learning to musical acoustics
Presented Virtually: 4-minute short-format presentation
Nowadays, humans are constantly exposed to music, whether through voluntary streaming or incidental encounters during commercial breaks. Despite this abundance, certain pieces remain more memorable and often gain greater popularity. Inspired by this phenomenon, we focus on measuring and predicting music memorability. To this end, we collect a new music-piece dataset with reliable memorability labels using a novel interactive experimental procedure. We then train baselines to predict and analyze music memorability, leveraging both interpretable features and audio mel-spectrograms as inputs. To the best of our knowledge, we are the first to explore music memorability with data-driven, deep-learning-based methods. Through a series of experiments and ablation studies, we demonstrate that, while there is room for improvement, predicting music memorability from limited data is feasible. Certain intrinsic elements, such as higher valence, higher arousal, and faster tempo, contribute to memorable music. As prediction techniques continue to evolve, real-life applications such as music recommendation systems and music style transfer stand to benefit from this new area of research.
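As a rough illustration of the kind of interpretable-feature baseline the abstract describes, the sketch below fits a logistic-regression model that predicts a binary "memorable" label from three features (valence, arousal, tempo). The data is synthetic and the feature weights are assumptions chosen to mirror the abstract's finding that higher valence, arousal, and tempo correlate with memorability; none of this is the authors' actual model or dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_synthetic(n=400):
    # Hypothetical data: three standardized interpretable features
    # (valence, arousal, tempo). The positive weights below encode the
    # assumed trend that higher values make a piece more memorable.
    X = rng.normal(size=(n, 3))
    logits = 1.5 * X[:, 0] + 1.0 * X[:, 1] + 0.8 * X[:, 2]
    y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(float)
    return X, y

def fit_logreg(X, y, lr=0.1, epochs=500):
    # Plain batch gradient descent on the logistic loss.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * float(np.mean(p - y))
    return w, b

X, y = make_synthetic()
w, b = fit_logreg(X, y)
pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
acc = float(np.mean(pred == y))
print(f"train accuracy: {acc:.2f}")
```

In a real pipeline these hand-picked features would be replaced by measured annotations (and, for the deep baselines, by mel-spectrogram inputs to a neural network), but the sketch shows why positive learned weights on valence, arousal, and tempo would support the abstract's interpretability claim.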