Taegyun Kwon is a Ph.D. candidate at the Graduate School of Culture Technology (GSCT), Korea Advanced Institute of Science and Technology (KAIST). His research focuses on piano performance analysis, including real-time piano transcription, alignment, and expressive performance generation.
Joonhyung Bae is a Korean artist and Ph.D. candidate at GSCT, KAIST, and a member of the Music and Audio Computing Lab. His research focuses on sound-based virtual performer visualization for artistic expression using deep learning.
Jiyun Park is a Ph.D. student in the Music and Audio Computing Lab at GSCT, KAIST. Her research focuses on music performance analysis, including real-time music alignment and singing voice analysis. In this work, she was responsible for developing and operating the real-time lyrics tracking system.
Jaeran Choi is a master's student at GSCT, KAIST. Her research interests include human-AI musical interaction; in particular, she focuses on multimodal musical cue detection and reactive accompaniment systems.
Hyeyoon Cho is a second-year master's student at GSCT, KAIST. She received a Bachelor's degree in piano performance from the University of Texas at Austin and a Master's degree in piano performance from Indiana University. Her research interests include quantization in piano performance and music information retrieval.
Yonghyun Kim is currently pursuing a master's degree at GSCT, KAIST. His research interests span music, artificial intelligence, and HCI. He currently focuses on combining multimedia (especially audio and vision) with AI to enrich human musical experience and creation.
Dasaem Jeong is an Assistant Professor in the Department of Art & Technology at Sogang University, South Korea. He obtained his Ph.D. in culture technology from KAIST under the supervision of Juhan Nam. His research focuses on various music information retrieval tasks, including expressive performance modeling and symbolic music generation.
Juhan Nam is an Associate Professor in the Graduate School of Culture Technology at the Korea Advanced Institute of Science and Technology (KAIST) in South Korea and the director of the Music and Audio Computing Lab. He is interested in various topics at the intersection of music, audio signal processing, machine learning, and human-computer interaction.

We organized a collaborative vocal performance in which a singer interacted with an automated piano. The performance was an experiment in whether MIR technology could minimize the number of performers and operators while still producing a natural performance. It was held on June 27 at the KAIST Sports Complex with soprano Sumi Cho, and "Heidenröslein" (Wild Rose) was the first of the songs performed. The system consists of five components: (1) VirtuosoNet, an automatic expressive performance generation system; (2) a vocal-reactive accompaniment system that adjusts VirtuosoNet's timing; (3) virtual performer visualization; (4) automatic lyrics tracking; and (5) a control system.
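The text does not specify how the control system communicates with the other four components, so the following is only a minimal sketch of how such real-time routing might look, assuming OSC messages over UDP via the python-osc library. All addresses, ports, and message names here are hypothetical, not those of the actual system.

```python
# Hypothetical sketch of a control hub that fans vocal timing cues out to
# the other subsystems. Protocol, ports, and OSC addresses are assumptions.
from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

# One client per subsystem that must react to the singer's timing.
piano = SimpleUDPClient("127.0.0.1", 9001)    # automated piano (MIDI bridge)
visuals = SimpleUDPClient("127.0.0.1", 9002)  # virtual performer visualization
lyrics = SimpleUDPClient("127.0.0.1", 9003)   # lyrics display

def on_vocal_cue(address, beat_position):
    """Forward a detected vocal timing cue to every subsystem."""
    piano.send_message("/accomp/beat", beat_position)
    visuals.send_message("/avatar/beat", beat_position)
    lyrics.send_message("/lyrics/position", beat_position)

dispatcher = Dispatcher()
dispatcher.map("/tracker/vocal_cue", on_vocal_cue)

# Block and route incoming messages from the vocal tracking subsystem.
BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher).serve_forever()
```

A hub-and-spoke layout like this keeps every subsystem synchronized to a single stream of timing cues, which matches the description of the whole performance being driven by real-time communication.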
The piece begins without a human pianist; instead, an automated piano plays while a visualization of the performer's body and hands fills in the pianist's presence. "Heidenröslein" contains three fermatas across its repeated verses, at which the accompaniment system must wait for the singer and re-enter in time with her voice, displaying its interactivity at precisely the right moments. In addition, a dedicated lyrics-tracking system automatically followed and displayed the lyrics, and real-time communication among the subsystems allowed the entire performance to run without manual operation. Despite the reverberant and noisy acoustics of the hall, the system worked successfully in the actual performance, demonstrating that the technology can be applied systematically in a real concert venue rather than only in the laboratory.
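To make the fermata behavior concrete, here is a deliberately naive sketch: the accompaniment sustains the fermata chord until the singer is heard, then resumes. It assumes a simple RMS-energy onset detector over a microphone stream using the sounddevice library; the actual vocal-reactive system is certainly more robust than an energy threshold, especially in a reverberant, noisy hall, and the block size and threshold below are purely illustrative.

```python
# Hypothetical sketch of fermata waiting: hold until a vocal onset is
# detected, then return so the accompaniment can resume. The detector,
# threshold, and block size are assumptions for illustration only.
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 16000
BLOCK = 1024        # ~64 ms analysis hop at 16 kHz
THRESHOLD = 0.02    # RMS level taken to mean "the singer has re-entered"

def wait_for_vocal_onset():
    """Block until microphone RMS energy exceeds the threshold."""
    with sd.InputStream(samplerate=SAMPLE_RATE, channels=1,
                        blocksize=BLOCK) as stream:
        while True:
            block, _ = stream.read(BLOCK)
            if np.sqrt(np.mean(block ** 2)) > THRESHOLD:
                return  # singer detected: accompaniment may resume

# At each of the three fermatas: sustain the chord, wait, then resume.
wait_for_vocal_onset()
```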