2022.04 - Several AIR lab members will go for summer internships at ByteDance, Yousician, and Tencent.
2022.04 - We are happy to release the web-based version of BachDuet. Everyone with a web browser can now improvise with AI in real time in the style of Bach counterpoint! Great work, Yongyi, Christos and Tianyu!
2021.10 - The lab is awarded a New York State Center of Excellence in Data Science grant and a University of Rochester Goergen Institute for Data Science grant. Thank you, NYS and UR!
2021.05 - Congratulations to Bochen Li on winning a 2021 Outstanding PhD Dissertation Award at the University of Rochester! Well deserved!
2021.05 - Several AIR lab members are going for summer internships this year at Adobe, ByteDance, Chordify, Tencent, and Pandora.
2020.08 - Check out AIR lab's YouTube channel!
2019.12 - Our Vroom! search engine for sounds using vocal imitation as queries is online!
2019.10 - Check out our demo video of BachDuet, a system for real-time interactive duet counterpoint improvisation between human and machine in the Bach chorale style. A brief description of the system is here.
2019.10 - Check out the AIR lab production for the ISMIR2019 Call for Music - Variations on ISMIR: some funny reflections on AI.
At the AIR lab, we conduct research in the emerging field of computer audition, i.e., designing computational systems that can analyze and understand sounds, including music, speech, and environmental sounds. We address fundamental problems such as parsing polyphonic auditory scenes (the cocktail party effect), and we design novel applications such as sound retrieval and music information retrieval. We also combine sound analysis with the analysis of other signal modalities, such as text and video, toward multi-modal scene analysis. Projects we have been working on include audio source separation, automatic music transcription, audio-score alignment, speech enhancement, speech diarization and emotion recognition, sound retrieval, sound event detection, and audio-visual scene understanding.
Our work is funded by the National Science Foundation under grants No. 1617107, titled "III: Small: Collaborative Research: Algorithms for Query by Example of Audio Databases" (project website), No. 1741472, titled "BIGDATA: F: Audio-Visual Scene Understanding" (project website), and No. 1846184, titled "CAREER: Human-Computer Collaborative Music Making". Our work is also funded by the University of Rochester internal pilot awards on AR/VR and health analytics.
We are looking for highly motivated students to join the AIR lab. Students are expected to have a solid background in mathematics, programming, and academic writing. Experience in music activities is a plus. Most importantly, students should be fascinated by the human ability to perceive and understand sounds, and eager to enable computers to achieve this capability! If you are interested, please apply to the ECE Ph.D. program and mention Prof. Zhiyao Duan in your application. If you are a master's or undergraduate student at UR and want to do a project/thesis in the AIR lab, please send Dr. Duan an email or stop by his office.