A Sharp-Eyed Sound System

Imagine an audio product with "eyes" able to focus on the head and ears of the listener. A USC-industry team goes beyond imagining.
by Eric Mankin
Interdisciplinary collaborators Chris Kyriakakis of the School of Engineering, left, and Tomlinson Holman of Cinema-Television and the TMH Corp. hope to expand the usefulness of the TMH "MicroTheater" system, seen in the background in an acoustically insulated studio on the USC campus, by giving it the capacity to actively adjust its imaged sound to follow the movements of a listener. While current research is aimed at helping film editors, future "immersive" sound systems may have applications giving auditory cues to the visually impaired, or directionally cued warning systems for pilots or air traffic controllers.

Photo by Irene Fertik
A UNIVERSITY-INDUSTRY team from the new Integrated Media Systems Center (IMSC) is giving sharp eyes to a sound system that already has superb audio quality.

"MicroTheater" is a specialized audio product created for motion picture editors by TMH Corp., a corporate partner in IMSC. Researchers from the schools of Engineering and Cinema-Television and TMH Corp. are working at the interdisciplinary IMSC to add a visual system that will locate first the head and then the ears of the listener, and then continuously focus a perfect, spatially accurate sound image on them.

This new "immersive sound" system will allow an editor to move around the controls of an editing bay making adjustments and still hear the soundtrack the way audiences will hear it in a theater, with sounds correctly placed in space.

The project, an example of IMSC's commitment to working closely with industry to apply new technology as fast as possible, may someday also help the visually impaired, as well as airplane pilots and air traffic controllers.

Film theater sound-reproduction systems use an array of three speakers placed behind the screen, along with additional "surround" speakers on the sides of the auditorium, to reproduce sounds so that they seem sharply located in space to audience members.

MicroTheater was created to let editors build such spatial effects into film sound tracks without renting an expensive dubbing stage ($1,000 per hour or more) equipped with such a multiple-speaker system.

It's impossible to place a central speaker directly behind a video monitor. Instead, MicroTheater uses two speakers, one mounted on each side of a video monitor, plus a third on the floor delivering bass. The center speaker information is processed electronically so that it appears to originate from the center of the screen even though there is no speaker there. The surround channel information is handled by two loudspeakers placed above and to the side of the listener.
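The simplest version of this idea is the "phantom center" used in ordinary stereo: feeding the center-channel signal equally to the left and right speakers makes it appear to come from a point midway between them for a centered listener. The sketch below illustrates only that basic principle; TMH's actual MicroTheater processing is proprietary and considerably more sophisticated.

```python
import numpy as np

def mix_phantom_center(left, right, center):
    """Fold a center channel into two stereo speakers as a phantom center.

    Splitting the center signal equally between the speakers at -3 dB
    preserves acoustic power and, for a centered listener, images the
    sound midway between them. (Illustrative sketch only.)
    """
    gain = 10 ** (-3.0 / 20.0)  # -3 dB, about 0.708
    return left + gain * center, right + gain * center

# Example: a 1 kHz tone on the center channel, silence on left/right
sample_rate = 48000
t = np.arange(sample_rate) / sample_rate
center = np.sin(2 * np.pi * 1000 * t)
silence = np.zeros_like(center)
L, R = mix_phantom_center(silence, silence, center)
```

Because the two speaker feeds are identical, the tone images at the center of the screen even though no speaker is there.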

MicroTheater is already widely used in film and television post-production. TMH President Tomlinson Holman, who is also an associate professor in the School of Cinema-Television, is part of an IMSC group that includes Chris Kyriakakis, a researcher in the department of electrical engineering/systems. The pair share a passion for state-of-the-art sound and hope the research will extend the usefulness of the MicroTheater system as well as yield new insights into sound design.

MicroTheater in its present configuration creates a properly oriented, perfect sound image only when the operator is in a relatively small area directly in front of the monitor. But professional mixing consoles are huge, and engineers must move considerable distances along them as they work.

USC SCIENTISTS are developing algorithms that will allow computers to search visual images of the area in front of the monitors for a head, locate the ears on the head, and appropriately change the sound signal produced by the speakers.
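One ingredient in adjusting the signal to a tracked head position is the interaural time difference: a sound off to one side reaches the nearer ear slightly before the farther one. A minimal sketch of that step, using the classical Woodworth spherical-head approximation (an assumption; the article does not describe the team's actual signal-processing model), might look like this:

```python
import math

HEAD_RADIUS = 0.0875    # meters, typical adult head (assumed value)
SPEED_OF_SOUND = 343.0  # meters per second in air

def interaural_time_difference(azimuth_deg):
    """Woodworth approximation of the delay between the two ears.

    azimuth_deg: source angle off straight ahead (0 = directly in front).
    Returns the delay in seconds; larger angles produce longer delays.
    """
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

def samples_of_delay(azimuth_deg, sample_rate=48000):
    """The same delay expressed as a whole-sample offset for processing."""
    return round(interaural_time_difference(azimuth_deg) * sample_rate)
```

A tracking system could recompute this offset each time the vision algorithm reports a new head position, delaying one speaker feed relative to the other so the image stays put as the listener moves.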

The head/ear-recognition software is based on work done by neurophysiologist Christoph von der Malsburg, who took basic research on the way the human brain processes visual information to create a computer system to recognize human faces.

Von der Malsburg's face-recognition algorithm was already set up to find landmarks on the human head, including ears. Hartmut Neven from von der Malsburg's group is working to adapt the algorithm for the sound system.

According to Kyriakakis, further work is needed to accelerate the process so that it can take place in real time, following fast movements. Once the algorithm is perfected, the next step would be to integrate it into a chip expressly designed for the function, speeding it up further.

TMH's Holman is famous in film-sound circles as the patent-holder for the THX sound system, which he created while working for USC alumnus George Lucas. He believes that "immersive sound" will not just be an advance for his company's product, but also for sound technology.

"I believe that immersive sound will see expanding uses in coming years," Holman said.

Other possible uses might include improved teleconferencing and telepresence, and augmented and virtual reality environments.

Immersive audio components for air-traffic control systems or pilot warning and guidance systems might give directional clues about hazards. A fighter pilot, for example, could hear a warning about a possible enemy aircraft coming from the direction of the threat.

Similarly, systems could be designed for the visually impaired that would create auditory displays of obstacles, properly located in space.

IMSC was designated May 23, 1996, as the principal site for National Science Foundation research in multimedia, or advanced technologies that combine digital video and audio, computer animation, text and graphics in interactive real time. The center encourages collaborative projects with corporations that expand the technical boundaries of the field.