P3-11: VampNet: Music Generation via Masked Acoustic Token Modeling
Hugo F Flores Garcia (Northwestern University)*, Prem Seetharaman (Northwestern University), Rithesh Kumar (Descript), Bryan Pardo (Northwestern University)
Subjects (starting with primary): MIR tasks -> music synthesis and transformation ; MIR tasks -> music generation ; Applications -> music composition, performance, and production ; Knowledge-driven approaches to MIR -> machine learning/artificial intelligence for music
Presented In Person: 4-minute short-format presentation
We introduce VampNet, a masked acoustic token modeling approach to music synthesis, compression, inpainting, and variation.
We use a variable masking schedule during training, which allows us to sample coherent music from the model by applying a variety of masking approaches (called prompts) during inference. VampNet is non-autoregressive, leveraging a bidirectional transformer architecture that attends to all tokens in a single forward pass. With just 36 sampling passes, VampNet can generate coherent, high-fidelity musical waveforms. We show that by prompting VampNet in various ways, we can apply it to tasks like music compression, inpainting, outpainting, continuation, and looping with variation (vamping). Appropriately prompted, VampNet is capable of maintaining the style, genre, instrumentation, and other high-level aspects of the music. This flexible prompting capability makes VampNet a powerful music co-creation tool. Code and audio samples are available online.
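The non-autoregressive decoding described in the abstract (a small number of parallel sampling passes over a masked token sequence) can be sketched as a MaskGIT-style iterative loop. The sketch below is illustrative only: the function names, the cosine unmasking schedule, and the `dummy_predict` stand-in for the bidirectional transformer are assumptions, not VampNet's actual implementation.

```python
import math
import random

MASK = -1  # hypothetical sentinel value for a masked acoustic token


def dummy_predict(tokens, vocab_size=1024):
    # Stand-in for a bidirectional transformer forward pass: proposes a
    # (token, confidence) pair for every position at once. A real model
    # would condition on the unmasked tokens; this sketch just samples.
    return [(random.randrange(vocab_size), random.random()) for _ in tokens]


def iterative_masked_sampling(seq_len, num_steps, predict_fn):
    """MaskGIT-style parallel decoding sketch.

    Starts fully masked; each pass predicts all masked positions,
    commits the most confident predictions, and re-masks the rest,
    following an (assumed) cosine schedule over `num_steps` passes.
    """
    tokens = [MASK] * seq_len
    for step in range(num_steps):
        masked = [i for i, t in enumerate(tokens) if t == MASK]
        if not masked:
            break
        preds = predict_fn(tokens)
        # Fraction of the sequence that should remain masked after this pass.
        frac = math.cos(math.pi / 2 * (step + 1) / num_steps)
        num_masked_next = min(int(frac * seq_len), len(masked) - 1)
        # Commit the most confident predictions; leave the rest masked.
        masked.sort(key=lambda i: preds[i][1], reverse=True)
        for i in masked[: len(masked) - num_masked_next]:
            tokens[i] = preds[i][0]
    return tokens


random.seed(0)
out = iterative_masked_sampling(seq_len=64, num_steps=12, predict_fn=dummy_predict)
```

Note how a prompt falls out of this formulation for free: initializing `tokens` with some positions already filled (e.g. the first and last seconds of a clip, or a periodic subset of tokens) turns the same loop into inpainting, continuation, or vamping.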