
Signal Processing and Machine Learning for Attention

Our devices work best when they understand what we are doing or trying to do. A large part of this problem is understanding what we are attending to. I’d like to talk about how we can do this in the visual (easy) and auditory (much harder and more interesting) domains. Eye tracking is a good but imperfect signal. Auditory attention is buried in the brain, and recent EEG (and ECoG and MEG) work gives us insight into it. These signals can be used to improve the user interface for speech recognition and the auditory environment. I’ll talk about using eye tracking to improve speech recognition (yes!), how we can use attention decoding to emphasize the most important audio signals, and how to gain insight into the cognitive load our users are experiencing. Long term, I’ll argue that listening effort is an important new metric for improving our interfaces. Listening effort is often measured by evaluating performance on a dual-task experiment, which involves divided attention.

By Malcolm Slaney, Google Machine Hearing Research

Information

Time: starting at 10:15 am

Place: room BC 420, EPFL

From: 16 Oct, 2019
To: 16 Oct, 2019

