±«Óãtv

Research & Development

Posted by Chris Pike and Tom Nixon

We recently gave an update on some of the great work happening within the ±«Óãtv Audio Research Partnership. The audio team here at ±«Óãtv R&D is working to turn the latest research into practice for the ±«Óãtv and its audiences. We've had some great success lately, including winning the EBU Technology & Innovation Award for a project we've worked on called the EBU ADM Renderer (EAR). This post gives a little more insight into this side of our work, and particularly into the EAR project.

Besides the Audio Research Partnership, working in the ±«Óãtv R&D audio team involves developing and evaluating production tools, running listening tests, supporting production trials, and developing training resources with the ±«Óãtv Academy. We also do a great deal of collaborative work with industry partners and in standardisation bodies.

Orpheus Project

We're just coming to the end of the Orpheus project, a three-year EU-funded collaboration with industry partners from across Europe that aims to make object-based audio a practical reality. We've talked before about the experimental studio that we built in Broadcasting House as part of this project. The project has now produced a report describing the end-to-end object-based audio system architecture that it developed, which has been published by the European Broadcasting Union (EBU). We don't promise that it gives the answer to life, the universe and everything, but if you want to know more about applying object-based audio in practice, it's a good place to start! The project also held a one-day workshop in Munich to share the outcomes with the rest of the industry; videos of all the talks are available online.

The Audio Definition Model (ADM)

A lot of our work in this area involves the Audio Definition Model (ADM), but what exactly is it? The ADM is a data model for describing audio experiences. Put like that, it might not sound very useful, but when the metadata is stored along with the audio in a WAV (BWAV/BW64) file, the result can be thought of as a file format for next-generation audio content. It can be used to say simple things about channel-based audio, like "this file contains stereo content" or "this file contains 5.1 content", replacing legacy WAV metadata, and it also supports newer types of content such as scene-based audio (often called higher-order ambisonics, or HOA) and object-based audio.
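To make that a bit more concrete, here is a minimal sketch (plain Python, with a made-up example filename) of where the ADM actually lives in such a file: in a BWAV/BW64 file the ADM metadata is carried as an XML document in an 'axml' chunk alongside the audio data, so it can be pulled out by walking the RIFF chunks.

    import struct

    def read_adm_xml(path):
        """Return the ADM XML from the 'axml' chunk of a WAV/BW64 file, or None.

        Simplified sketch: it only handles files whose chunk sizes fit the
        32-bit RIFF fields (very large BW64 files put the real sizes in a
        'ds64' chunk, which is not parsed here).
        """
        with open(path, "rb") as f:
            container, _size, form = struct.unpack("<4sI4s", f.read(12))
            if container not in (b"RIFF", b"BW64") or form != b"WAVE":
                raise ValueError("not a WAV/BW64 file")
            while True:
                header = f.read(8)
                if len(header) < 8:
                    return None  # reached the end without finding an axml chunk
                chunk_id, chunk_size = struct.unpack("<4sI", header)
                if chunk_id == b"axml":
                    return f.read(chunk_size).decode("utf-8")
                f.seek(chunk_size + (chunk_size % 2), 1)  # chunks are padded to even sizes

    print(read_adm_xml("object_based_mix.wav"))  # hypothetical example file

The XML itself carries the programme, object and channel descriptions, while a separate 'chna' chunk maps the audio tracks in the file to the IDs used in that XML.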

In addition to information about the audio, ADM metadata can represent programme information (title, language etc.), and has an object and interactivity model which can be used to build experiences which adapt to the requirements and desires of individual listeners.

The ADM is an open standard (Recommendation ITU-R BS.2076), so it can be implemented and used by anyone. We'd like to see the ADM adopted as an interchange format for programme production and delivery, and it's great to see this starting to happen: Avid Pro Tools has added support for the ADM, and MAGIX has done the same as part of the Orpheus project, but there's still a lot of work to be done.

On our side we’re continuing to work to improve the ADM, to standardise a form of ADM metadata which can be serialised and sent over a network to allow live production of ADM content, and to define ADM “profiles” – agreed subsets of the ADM which should be used for specific applications.

We’re also continuing to build tools for working with the ADM, for example…

The EBU ADM Renderer

We have also been collaborating within the EBU to create the EBU ADM Renderer (a.k.a. the EAR). This is a system for rendering the types of content defined by the ADM (channel-based, object-based and scene-based) to any defined loudspeaker system. We have worked with our partners to release this specification as EBU Tech 3388, with an accompanying open-source reference implementation. If you read about last week's EBU Technical Assembly held here in Salford, you might have noticed that the EAR won the EBU Technology & Innovation Award! We're thrilled to have won this award with our partners and hope that the industry will adopt the EAR in future.
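As a rough, hedged idea of how the reference implementation can be used (the exact tool and option names here are assumptions based on its documentation at the time of writing): it is a Python package, and assuming its 'ear-render' command-line tool is installed, rendering an ADM file to a 5.1 layout looks something like this.

    import subprocess

    # Render an ADM BW64 file to the 0+5+0 (5.1) layout of ITU-R BS.2051 using the
    # 'ear-render' tool from the EBU ADM Renderer reference implementation.
    # Both filenames are hypothetical examples.
    subprocess.run(
        ["ear-render", "-s", "0+5+0", "object_based_mix.wav", "rendered_5_1.wav"],
        check=True,
    )

Running the same command with a different layout name gives a stereo or 7.1+4 version of the same content, which is the point of the EAR: one set of content, rendered to whatever loudspeakers are available.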

±«Óãtv R&D - Libear - An Open-Source Library for the EBU ADM Renderer

The EBU ADM Renderer is part of our wider efforts to standardise open formats for working with so-called "next-generation audio" (NGA). In addition to the Audio Definition Model (ADM) as a way of representing NGA formats, a renderer (such as the EAR) is an important piece of the puzzle, since it defines what the parameters in the format definition mean in terms of the signals that are played out of the speakers. We are currently working within the ITU-R to standardise rendering techniques; we expect the EAR to be part of that and eventually to end up as the spatial audio renderer in a wide range of tools, such as digital audio workstations and mixing consoles. For more insight into the EAR and how it can be used, our partners at the IRT have created a neat introduction to it.
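To give a flavour of what "defining what the parameters mean" involves at the very simplest level, here is a toy illustration (deliberately not the EAR algorithm, which uses a far more general point-source panner for arbitrary 3D layouts) of turning an object's azimuth into loudspeaker gains for a plain stereo pair, using the classic tangent panning law.

    import math

    def stereo_gains(azimuth_deg, half_base_deg=30.0):
        """Toy tangent-law panner: object azimuth -> (left, right) gains.

        Azimuth is in degrees anticlockwise from straight ahead (positive =
        left, as in the ADM), for a standard +/-30 degree stereo pair.
        """
        ratio = math.tan(math.radians(azimuth_deg)) / math.tan(math.radians(half_base_deg))
        ratio = max(-1.0, min(1.0, ratio))  # clamp sources beyond the loudspeaker base
        left, right = (1.0 + ratio) / 2.0, (1.0 - ratio) / 2.0
        norm = math.hypot(left, right)      # constant-power normalisation
        return left / norm, right / norm

    print(stereo_gains(0.0))   # centred object: equal gains
    print(stereo_gains(15.0))  # object panned part-way towards the left loudspeaker

A real renderer does this kind of mapping for every object, for any target layout, in three dimensions, and with extent, screen-locking and the other ADM parameters taken into account.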

As with the ADM, this project is far from over. We're planning to release the next version of the specification in October to tie off some loose ends, and we're starting work on things like binaural and HOA output formats from the renderer, and on a real-time implementation.

We recently attended the Audio Engineering Society Convention in Milan, where we presented a workshop about the EAR. After dinner the night before the presentation we discovered this sculpture down a tiny back street:

[Photo: the sculpture we found in Milan]

This post is part of the Immersive and Interactive Content section
