P5-16: Towards Improving Harmonic Sensitivity and Prediction Stability for Singing Melody Extraction

Keren Shao (UC San Diego)*, Ke Chen (UC San Diego), Taylor Berg-Kirkpatrick (UC San Diego), Shlomo Dubnov (UC San Diego)

Subjects (starting with primary): MIR fundamentals and methodology -> music signal processing; MIR tasks -> automatic classification; Musical features and properties -> melody and motives

Presented Virtually: 4-minute short-format presentation

Abstract:

In deep learning research, many melody extraction models rely on redesigning neural network architectures to improve performance. In this paper, we propose an input feature modification and a training objective modification based on two assumptions. First, harmonics in the spectrograms of audio data decay rapidly along the frequency axis. To enhance the model's sensitivity to the trailing harmonics, we modify the Combined Frequency and Periodicity (CFP) representation using the discrete z-transform. Second, vocal and non-vocal segments of extremely short duration are uncommon. To ensure a more stable melody contour, we design a differentiable loss function that prevents the model from predicting such segments. We apply these modifications to several models, including MSNet, FTANet, and a newly introduced model, PianoNet, adapted from a piano transcription network. Our experimental results demonstrate that the proposed modifications are empirically effective for singing melody extraction.
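To illustrate the first idea: evaluating the z-transform on a circle of radius r rather than the unit circle is equivalent to exponentially re-weighting the samples before a DFT, which can boost trailing components. The sketch below shows only that identity, not the paper's exact CFP pipeline; the function name and the radius r = 0.99 are illustrative assumptions.

```python
import numpy as np

def z_transform_frame(frame, r=0.99):
    """Evaluate the z-transform of a frame on the circle |z| = r.

    For z = r * exp(1j * omega), X(z) = sum_n x[n] * r**(-n) * exp(-1j * omega * n),
    i.e. the DFT of the re-weighted sequence x[n] * r**(-n). With r < 1 the
    weights r**(-n) grow with n, emphasizing later samples. Illustrative only;
    r = 0.99 is a hypothetical choice, not a value from the paper.
    """
    n = np.arange(len(frame))
    return np.fft.rfft(frame * r ** (-n))
```

For the second idea, one differentiable way to discourage extremely short vocal/non-vocal runs is to compare frame-level voicing probabilities against a local moving average, so that isolated flips are penalized. This is a minimal sketch under that assumption; min_len and the squared-error form are illustrative, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def short_segment_penalty(voicing_prob: torch.Tensor, min_len: int = 5) -> torch.Tensor:
    """Smooth penalty discouraging vocal/non-vocal runs shorter than min_len.

    voicing_prob: (batch, frames) sigmoid outputs in [0, 1]. Frames that
    deviate from the min_len-frame moving average of their neighborhood are
    penalized, a differentiable proxy for "no segment shorter than min_len
    frames". Hypothetical stand-in for the paper's loss, not its definition.
    """
    kernel = torch.ones(1, 1, min_len, device=voicing_prob.device) / min_len
    local_avg = F.conv1d(voicing_prob.unsqueeze(1), kernel,
                         padding=min_len // 2).squeeze(1)
    local_avg = local_avg[..., : voicing_prob.shape[-1]]  # trim even-kernel overshoot
    return ((voicing_prob - local_avg) ** 2).mean()
```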
