---
layout: page
title: Programme
---
Timings and session details are provided below. All times are UK local time (i.e., UTC).
| Time  | Session                    | Chair           |
|-------|----------------------------|-----------------|
| 09:00 | Welcome                    |                 |
| 09:05 | The Clarity CEC3 Overview  |                 |
| 09:40 | Challenge Papers           | Jennifer Firth  |
| 11:10 | Break                      |                 |
| 11:30 | Invited Talk               | Graham Naylor   |
| 12:30 | Clarity Future Directions  | Simone Graetzer |
| 13:30 | Close                      |                 |
Links to technical reports and videos for all talks are provided in the programme below.
## Invited talk: Deep learning-based denoising for hearing aids: Meeting the technological and audiological constraints of a wearable solution
Improving speech understanding in noise is the top priority for people with hearing loss. While amplifying sound is sufficient to improve speech intelligibility in quiet environments, removing background noise is essential to improve speech clarity in noisy situations. In this keynote, we will review traditional noise-cancelling approaches and discuss the promise of DNN-based denoising for improving speech intelligibility in noise. We will talk about the unique challenges of implementing DNN-based denoising solutions in wearables and the constraints imposed by hearing aids. We will provide insights into Phonak’s recently launched Sphere DNN and highlight the co-development of the software architecture, hardware architecture, and training pipeline. Our speech enhancement DNN operates within the latency, power, and computational constraints of hearing aids and improves speech intelligibility in noise for people with hearing loss. We will also discuss considerations for designing clinical studies and present results from a recent study that demonstrated the clinical benefit of Phonak’s Sphere DNN solution. Lastly, we will cover pitfalls in evaluating speech enhancement technologies for wearables, such as neglecting latency, open couplings, limitations of study setups, and using SNR as an outcome measure to demonstrate algorithm benefit.
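To make the latency and SNR points above concrete: in frame-based enhancement, the analysis frame length alone sets a floor on algorithmic latency regardless of how fast the network runs, and a global SNR figure can look impressive while saying little about real-world benefit. The sketch below (plain NumPy; the frame size, sample rate, and idealised 10 dB "denoiser" are illustrative assumptions, not Phonak parameters) computes both quantities for a toy signal.

```python
import numpy as np


def algorithmic_latency_ms(frame_len: int, sample_rate: int) -> float:
    """Latency floor of frame-based (overlap-add) processing: a full
    analysis frame must be buffered before any output can be produced."""
    return 1000.0 * frame_len / sample_rate


def snr_db(clean: np.ndarray, processed: np.ndarray) -> float:
    """Global SNR of `processed` against the `clean` reference, in dB."""
    residual = processed - clean
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(residual ** 2))


# Assumed hearing-aid-style configuration: a 64-sample frame at 16 kHz
# gives a 4 ms algorithmic latency floor, before any compute time.
SAMPLE_RATE = 16000
FRAME_LEN = 64
print(f"latency floor: {algorithmic_latency_ms(FRAME_LEN, SAMPLE_RATE):.1f} ms")

# Toy example: a clean tone in white noise, and an idealised "denoiser"
# that attenuates only the noise by exactly 10 dB. The SNR gain looks
# large, which is precisely why SNR alone can overstate real benefit.
rng = np.random.default_rng(0)
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
clean = np.sin(2 * np.pi * 440.0 * t)
noise = rng.normal(scale=0.3, size=t.size)
denoised = clean + noise * 10 ** (-10 / 20)

print(f"input SNR:  {snr_db(clean, clean + noise):.1f} dB")
print(f"output SNR: {snr_db(clean, denoised):.1f} dB")
```

Note that the 4 ms floor here ignores synthesis overlap and hardware buffering, both of which add to the true end-to-end latency of a wearable device.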
Challenge paper sessions will consist of oral presentations, with 15-20 minutes allotted per team plus 5 minutes for Q&A.
| Time        | Paper |
|-------------|-------|
| 09:40-10:00 | The JAIST System for Task 1 of The 3rd Clarity Enhancement Challenge. Huy Quoc Nguyen, Candy Olivia Mawalim, Masashi Unoki (Japan Advanced Institute of Science and Technology, Japan) |
| 10:00-10:20 | MIMO-DPRNN-ConvTasNet for the 3rd Clarity Enhancement Challenge. Robert Sutherland, Stefan Goetze and Jon Barker (University of Sheffield, UK) |
| 10:20-10:45 | End-to-End Multi-Channel Target Speech Enhancement System Based on Online SpatialNet. Yindi Yang, Ming Jiang, Zhihao Guo and Eric Miao (Elehear, Canada, China) |
| 10:45-11:10 | Spatial-FullSubNet for Hearing Aid Processing. Xiang Hao, Jibin Wu (The Hong Kong Polytechnic University, China) |