Human-Computer Collaborative Music Making

PI: Zhiyao Duan
Students: Yujia Yan, Christodoulos Benetatos, Nan Jiang, Frank Cwitkowitz, Mojtaba Heydari
Award Number: 1846184
Award Title: CAREER: Human-Computer Collaborative Music Making
Award Amount: $499,219
Duration: June 1, 2019 to May 31, 2024 (Estimated)

Disclaimer: Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Overview of Goals and Challenges

Music is part of every culture on earth, and the enjoyment of music is nearly universal. Music performance is often highly collaborative: musicians harmonize their pitches, coordinate their timing, and reinforce each other's expressiveness to make music that touches the hearts of the audience. Interaction between humans and machines is becoming deeper and broader, yet it can hardly be called collaboration; the role of machines is assistive at best, far from equal to that of humans. Developing systems that allow us to truly collaborate with our increasingly important machine partners is one of the main missions of research in cyber-human systems, robotics, and artificial intelligence. Highly collaborative music performance is therefore an ideal testbed for research on human-computer collaboration.

We aim to build a human-computer collaborative music making system that allows humans to collaborate with machines in much the same way they collaborate with each other. The proposed system advances current automatic music accompaniment systems by equipping machines with 1) much stronger music perception skills (audio-visual attending to individual parts in ensemble performances vs. monophonic listening), 2) much more expressive music performance skills (expressive audio-visual rendering vs. timing adaptation of audio only), and 3) a much deeper understanding of music theory and compositional rules (improvisation skills vs. music theory novice).
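
As a rough, purely illustrative sketch (in Python) of how these three capabilities might connect in a single listen-decide-respond loop, consider the toy program below; the helper names (estimate_pitch, choose_response, render) and the autocorrelation pitch tracker are hypothetical placeholders, not the project's actual algorithms.

    import numpy as np

    def estimate_pitch(frame, sr=16000):
        """Toy monophonic pitch estimate via autocorrelation, standing in for the
        project's polyphonic, audio-visual perception front end."""
        frame = frame - frame.mean()
        corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        lo, hi = sr // 800, sr // 60          # restrict the lag search to roughly 60-800 Hz
        lag = lo + int(np.argmax(corr[lo:hi]))
        return sr / lag if corr[lag] > 0 else None

    def choose_response(human_pitch, scale=(60, 62, 64, 65, 67, 69, 71)):
        """Toy 'improvisation': answer with the C major scale tone nearest to the
        human's pitch, standing in for a learned music language model."""
        if human_pitch is None:
            return None
        midi = 69 + 12 * np.log2(human_pitch / 440.0)
        return min(scale, key=lambda m: abs(m - midi))

    def render(midi_note):
        """Toy 'expressive rendering': print the note instead of synthesizing audio."""
        if midi_note is not None:
            print(f"machine answers with MIDI note {midi_note}")

    if __name__ == "__main__":
        sr = 16000
        t = np.arange(sr // 10) / sr                  # one 100 ms audio frame
        frame = np.sin(2 * np.pi * 261.6 * t)         # the human plays roughly C4
        render(choose_response(estimate_pitch(frame, sr)))

Each placeholder corresponds to one of the three capabilities above: perception (estimate_pitch), music language modeling (choose_response), and expressive rendering (render).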

The Proposed Human-Computer Collaborative Music Making System

The project has four research thrusts with the following expected outcomes:

  1. Attending to Human Performances: algorithms for machine listening and visual analysis of multi-instrument polyphonic music performances (a simplified alignment sketch follows this list).
  2. Rendering Expressive Machine Performances: computational models of expressiveness and audio-visual rendering techniques for expressive performances.
  3. Modeling Music Language for Improvisation: computational models of compositional rules, and algorithms for music generation, harmonization, and improvisation.
  4. System Integration: a human-computer collaborative music making system, and a set of design principles backed by subjective evaluations.
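
To make the alignment problem in thrust 1 concrete, the sketch below warps a symbolic performance onto its score with classic dynamic time warping (DTW). This is a textbook, offline illustration on MIDI pitch sequences only, and the function name dtw_align is hypothetical; the project's actual trackers must work online and on polyphonic, audio-visual input.

    import numpy as np

    def dtw_align(performance, score):
        """Classic dynamic time warping between two pitch sequences (MIDI numbers).
        Returns the total alignment cost and the warping path."""
        n, m = len(performance), len(score)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(performance[i - 1] - score[j - 1])
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        # Backtrack to recover which score note each performed note maps to.
        path, i, j = [], n, m
        while i > 0 and j > 0:
            path.append((i - 1, j - 1))
            step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
            if step == 0:
                i, j = i - 1, j - 1
            elif step == 1:
                i -= 1
            else:
                j -= 1
        return cost[n, m], path[::-1]

    if __name__ == "__main__":
        score = [60, 62, 64, 65, 67]                 # the written melody
        performance = [60, 60, 62, 64, 64, 65, 67]   # the human lingers on some notes
        total_cost, path = dtw_align(performance, score)
        print("alignment cost:", total_cost)
        print("performance index -> score index:", path)

In practice the quadratic cost matrix and full backtracking would be replaced by an online variant with a bounded search band, so the machine can follow the human performer in real time.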

Current Results (including prior work)

Perception

Performance

Composition

Broader Impacts

The power and potential of the connection between music and technology are exemplified by the careers of great multidisciplinary thinkers such as Pythagoras, Galilei, Da Vinci, and Franklin. This project provides exciting opportunities to showcase this connection to a broad audience. For the general public, we plan to apply techniques developed in the research activities toward augmented concert experiences. In particular, we have been collaborating with the Eastman School of Music and the Chinese Choral Society of Rochester to automate the presentation of multimedia content (e.g., lyrics, pictures, sound effects) in sync with live performances in instrumental and choral concerts. For pre-college students, the PI offers a four-day summer mini-course on Music and Math each year to illustrate the interesting relations between the two. The PI also plans to host a half-day lab open house to showcase interactive demos of his research. For higher education, the PI is excited to teach and advise students in the Audio and Music Engineering program, a unique interdisciplinary enterprise that attracts students with diverse backgrounds to STEM disciplines through the door of music.

Publications

  1. C. Benetatos, J. VanderStel, and Z. Duan, BachDuet: A deep learning system for human-machine counterpoint improvisation, in Proc. International Conference on New Interfaces for Musical Expression (NIME), 2020.
  2. N. Jiang, S. Jin, Z. Duan, and C. Zhang, RL-Duet: Online music accompaniment generation using deep reinforcement learning, in Proc. AAAI Conference on Artificial Intelligence (AAAI), 2020.
  3. C. Benetatos and Z. Duan, BachDuet: A human-machine duet improvisation system, in Late-Breaking/Demo Session of the International Society for Music Information Retrieval Conference (ISMIR), 2019.