P2-10: Polyffusion: A Diffusion Model for Polyphonic Score Generation With Internal and External Controls
Lejun Min (Shanghai Jiao Tong University)*, Junyan Jiang (New York University Shanghai), Gus Xia (New York University Shanghai), Jingwei Zhao (National University of Singapore)
Subjects (starting with primary): MIR fundamentals and methodology -> symbolic music processing ; Knowledge-driven approaches to MIR -> machine learning/artificial intelligence for music ; MIR tasks -> music generation ; Knowledge-driven approaches to MIR -> representations of music
Presented In Person: 4-minute short-format presentation
We propose Polyffusion, a diffusion model that generates polyphonic music scores by representing music as image-like piano rolls. The model supports controllable music generation under two paradigms: internal control and external control. Internal control refers to the process in which users pre-define part of the music and let the model infill the rest, similar to masked music generation (or music inpainting). External control conditions the model on external yet related information, such as chord, texture, or other features, via the cross-attention mechanism. We show that, by using internal and external controls, Polyffusion unifies a wide range of music creation tasks, including melody generation given accompaniment, accompaniment generation given melody, arbitrary music segment inpainting, and music arrangement given chords or textures. Experimental results show that our model significantly outperforms existing Transformer-based and sampling-based baselines, and that using pre-trained disentangled representations as external conditions yields more effective control.
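As a rough illustration of the two control paradigms described in the abstract, the sketch below shows how internal control (masked infilling during sampling) and external control (cross-attention over a conditioning embedding) might be wired together for piano-roll tensors. This is a minimal toy example, not the authors' implementation: the names (ToyDenoiser, sample, chord_emb), the shapes, and the simplified denoising update are assumptions for illustration only.

```python
# Minimal sketch (assumed, not the authors' code) of internal control
# (inpainting via a fixed mask) and external control (cross-attention
# conditioning) over piano-roll tensors.
import torch
import torch.nn as nn

class ToyDenoiser(nn.Module):
    """Predicts noise for a piano roll, optionally attending to an
    external condition (e.g., a chord or texture embedding)."""
    def __init__(self, pitches=128, dim=64):
        super().__init__()
        self.embed = nn.Linear(pitches, dim)                # per-frame embedding
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.out = nn.Linear(dim, pitches)                  # back to piano-roll space

    def forward(self, x_noisy, cond=None):
        # x_noisy: (batch, frames, pitches); cond: (batch, cond_len, dim) or None
        h = self.embed(x_noisy)
        if cond is not None:                                # external control
            attn_out, _ = self.cross_attn(h, cond, cond)
            h = h + attn_out
        return self.out(h)                                  # predicted noise

@torch.no_grad()
def sample(model, known, known_mask, cond=None, steps=50):
    """Toy reverse-diffusion loop. `known` / `known_mask` implement internal
    control: user-specified regions are re-imposed at every step."""
    x = torch.randn_like(known)
    for _ in range(steps):
        eps = model(x, cond)
        x = x - eps / steps                                 # crude denoising update
        x = known_mask * known + (1 - known_mask) * x       # keep pre-defined notes
    return x

# Usage: infill the second half of a segment, conditioned on a chord embedding.
model = ToyDenoiser()
roll = torch.zeros(1, 64, 128)                              # user-provided piano roll
mask = torch.zeros_like(roll); mask[:, :32, :] = 1.0        # first half is fixed
chord_emb = torch.randn(1, 8, 64)                           # stand-in external condition
generated = sample(model, roll, mask, cond=chord_emb)
```

In this sketch the same sampler covers both paradigms: dropping `cond` gives plain inpainting, while an all-zero mask with a condition gives purely externally controlled generation.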