Separation, localization, and comprehension of multiple, simultaneous speech signals by humans, machines, and human-machine systems
Nat Durlach
(Boston University, Hearing Research Center and Biomedical Engineering Dept.)

An attempt is made to provide an introductory overview of some issues, problems, and challenges relevant to the separation, localization, and comprehension of multiple, simultaneous speech sources. In addition to considering humans and autonomous machines, attention is given to systems that exploit human-machine collaboration, i.e., supernormal hearing aids in which machine sensing and processing are followed by human auditory sensing and processing. Included among the topics discussed are the relation of source segregation to source localization, the need to integrate various machine processing techniques, comparisons of human and machine processing, and the susceptibility of human performance to multiple maskers, informational masking, and reverberation.
