AIR Lab at the Engineering Quad (Spring 2023)
AIR Lab over Zoom (Fall 2021)
AIR Lab at the Eastman Quad (Spring 2019)
AIR Lab at the Hopeman Building (Spring 2016)


2024.03 - Neil, Yongyi, and I are proud to co-organize the Singing Voice Deepfake Detection (SVDD) challenge at the IEEE Spoken Language Technology Workshop (SLT) 2024, together with Jiatong Shi from CMU and Ryuichi Yamamoto and Prof. Tomoki Toda from Nagoya University.

2023.11 - Christos, Prof. Philippe Pasquier from Simon Fraser University, and I delivered a tutorial on Computer-Assisted Music-Making Systems: Taxonomy, Review, and Live Coding. In addition to a comprehensive review, it also features a live coding session on building a real-time musical agent using Euterpe, a prototyping framework for creating music interactions on the web.

2023.11 - Neil's NIJ fellowship and research were covered by News10NBC (a local Rochester TV station) in a nice article!

2023.10 - Neil Zhang received a 2023 National Institute of Justice (NIJ) Graduate Research Fellowship, one of only 24 awarded nationwide. Congratulations, Neil!

2023.09 - Yutong Wen received a 2023 University of Rochester Undergraduate Research Presentation Award and a WASPAA 2023 travel grant! These awards will support his travel to WASPAA 2023 to present his first-authored paper.

2023.07 - Undergrads in the AIR lab are doing awesome research! Yongyi and Yutong had their first first-authored publications at Interspeech 2023 and WASPAA 2023, respectively. They were both co-mentored by Neil!

2023.06 - Our paper co-authored by You (Neil) Zhang, Yuxiang Wang, and Zhiyao Duan was recognized as among the top 3% of all accepted papers at ICASSP 2023! Neil was also selected as one of the 24 presenters at the inaugural "Rising Star Program in Signal Processing". Zhiyao received an outstanding reviewer award.

2023.06 - Several AIR lab PhD students will head off to summer internships at Adobe, Meta, Microsoft, and Sony.

2023.05 - Congratulations to our MS graduates, Qiaoyu Yang (ECE) and Zehua Li (TEAM), and our undergraduate degree recipients, Yongyi Zang (AME) and Enting Zhou (CS)! All the best!

2023.05 - Yongyi Zang received a 2023 University of Rochester Undergraduate Research Presentation Award! This award will support his travel to Interspeech 2023 to present his first-authored paper.

2023.04 - Yongyi Zang gave an oral presentation and presented a poster on "Euterpe" at the Undergraduate Research Exposition. Yiyang Wang and Neil Zhang presented at the Graduate Research Symposium.

2023.03 - Zhiyao Duan and our collaboration with industry partner IngenID are featured in this article.

2023.02 - Four papers from the AIR lab were accepted to ICASSP 2023. Congrats to all students and collaborators!

2023.01 - Zhiyao Duan was on the WXXI Connections radio program together with Raffaella Borasi and Blaire Koerner, discussing how artificial intelligence may affect the music industry. This is an interview with host Mona Seghatoleslami about our NSF project "Toward an Ecosystem of Artificial Intelligence-Powered Music Production (TEAMuP)". Listen to the recording here.

2022.12 - The AIR lab was awarded a New York State Center of Excellence in Data Science grant to develop and deploy spoofing-aware speaker verification systems with IngenID. Thank you, NYS!

2022.11 - The AIR lab was awarded seed funding from the University of Rochester Goergen Institute for Data Science to investigate Personalized Immersive Spatial Audio with Physics-Informed Neural Fields.

2022.09 - NSF granted $1.8M to a fantastic and diverse team of researchers from UofR and Northwestern to build foundations for AI-powered music production ecosystems! The AIR lab and the Interactive Audio Lab at Northwestern will co-lead the technical component of this project. Thank you, NSF!

2022.04 - Several AIR lab members will head off to summer internships at ByteDance, Yousician, and Tencent.

2022.04 - We are happy to release the web-based version of BachDuet. Anyone with a web browser can now improvise with AI in real time in the style of Bach counterpoint! Great work, Yongyi, Christos, and Tianyu!

2021.10 - The AIR lab was awarded a New York State Center of Excellence in Data Science grant and a University of Rochester Goergen Institute for Data Science grant. Thank you, NYS and UR!

2021.05 - Congratulations to Bochen Li on winning a 2021 Outstanding PhD Dissertation Award at the University of Rochester! Well deserved!

2021.05 - Several AIR lab members are headed to summer internships this year, at Adobe, ByteDance, Chordify, Tencent, and Pandora.

2020.08 - Check out AIR lab's YouTube channel!

2019.12 - Our Vroom! search engine for sounds using vocal imitation as queries is online!

2019.10 - Check out our demo video of BachDuet, a system for real-time interactive duet counterpoint improvisation between a human and a machine in the Bach chorale style. A brief description of the system is here.

2019.10 - Check out the AIR lab production for the ISMIR2019 Call for Music - Variations on ISMIR: some funny reflections on AI.

Welcome to AIR!

At the AIR lab, we conduct research in the emerging field of computer audition, i.e., designing computational systems that are able to analyze and understand sounds, including music, speech, and environmental sounds. We address fundamental issues such as parsing polyphonic auditory scenes (the cocktail party effect), as well as designing novel applications such as sound retrieval and music information retrieval. We also combine sound analysis with the analysis of other signal modalities, such as text and video, toward multi-modal scene analysis. Projects that we have been working on include audio source separation, automatic music transcription, audio-score alignment, speech enhancement, speaker diarization and emotion recognition, sound retrieval, sound event detection, and audio-visual scene understanding.

Our work is funded by the National Science Foundation under grant No. 1617107, titled "III: Small: Collaborative Research: Algorithms for Query by Example of Audio Databases" (project website), No. 1741472, titled "BIGDATA: F: Audio-Visual Scene Understanding" (project website), No. 1846184, titled "CAREER: Human-Computer Collaborative Music Making" (project website), and No. 2222129, titled "Collaborative Research: FW-HTF-R: Toward an Ecosystem of Artificial Intelligence-Powered Music Production (TEAMuP)". Our work is also funded by the New York State Center of Excellence in Data Science and by University of Rochester internal awards on AR/VR, health analytics, and data science.

Position Openings

We are looking for highly motivated students to join the AIR lab. Students are expected to have a solid background in mathematics, programming, and academic writing. Experience in music activities is a plus. Most importantly, students should be passionate about doing research in the exciting fields of computer audition, music information retrieval, and multimodal learning. If you are interested, please apply to the ECE Ph.D. program and mention Prof. Zhiyao Duan in your application. If you are a master's or undergraduate student at UR and want to do a project/thesis in the AIR lab, please send Dr. Duan an email or stop by his office at Room 720 in the Computer Studies Building.