±«Óãtv Research & Development

What we're doing

The ±«Óãtv Audio Research Partnership was launched in 2011 to bring together some of the world's best audio technology researchers to work on pioneering new projects. The original partners were the University of Surrey, University of Salford, Queen Mary University of London, University of Southampton and University of York. Since then, we have partnered with many more research groups and organisations, and we are always looking for opportunities to collaborate where there is a shared research interest.

Why it matters

Collaborating with university and industrial partners allows us to work directly with the best researchers and to steer the research to maximise the benefit to our audiences. By coming together, we can pool our resources to tackle some of the biggest challenges in broadcast. This partnership has led to pioneering developments in many areas including immersive audio, personalised and interactive content, object-based audio, accessibility, AI-assisted production, music discovery, audio augmented reality and enhanced podcasts.
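One of those areas, object-based audio, is easiest to grasp with a small example: programme sound travels as separate objects plus metadata, and the receiver decides how to render them for each listener. The sketch below is a toy illustration of that idea in TypeScript; the type and field names are invented for this page and are not the Audio Definition Model or any actual ±«Óãtv production format.

```typescript
// Toy sketch of object-based audio: content is delivered as individual
// objects with metadata, and rendering decisions happen at the receiver.
// All names here are invented for illustration.

interface AudioObject {
  id: string;
  kind: "dialogue" | "music" | "effects";
  gainDb: number;     // default mix level
  importance: number; // 1 (optional) .. 10 (essential)
}

// A very simple "personalised" render decision: keep only the objects
// that matter at the listener's chosen complexity level, and boost
// dialogue for listeners who ask for clearer speech.
function personalise(
  objects: AudioObject[],
  minImportance: number,
  dialogueBoostDb: number
): AudioObject[] {
  return objects
    .filter((o) => o.importance >= minImportance)
    .map((o) =>
      o.kind === "dialogue" ? { ...o, gainDb: o.gainDb + dialogueBoostDb } : o
    );
}

const scene: AudioObject[] = [
  { id: "narrator", kind: "dialogue", gainDb: 0, importance: 10 },
  { id: "bed-music", kind: "music", gainDb: -6, importance: 4 },
  { id: "crowd", kind: "effects", gainDb: -9, importance: 2 },
];

// A listener who wants clearer dialogue and a simpler mix:
console.log(personalise(scene, 4, 6));
```

Because the mix is assembled at the receiver rather than fixed at broadcast, the same content can adapt to hearing needs, devices and environments, which is what makes the approach attractive for accessibility and personalisation.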

Outcomes

Over the past decade or so, the partnership has given rise to a wide range of work, including large-scale collaborative projects, PhD studentships, industrial placements and public events.

Public Events

We run a series of public events to celebrate the most exciting and innovative developments in audio, both creative and technical. You can read about each event and watch video recordings of some of the talks.

Collaborative Projects

Several large-scale projects have resulted from the Audio Research Partnership. These have been funded by various bodies, with a total portfolio size in excess of £30M.

Dates Partners Description
2021-2026 University of Surrey, Lancaster University: Using AI and object-based media to enable media experiences that adapt to individual preferences, accessibility requirements, devices and location.
2020-2025 University of Surrey: Using application sector use cases to drive advances in core research on machine learning for sound.
2019-2027 Queen Mary University of London: Combining state-of-the-art expertise in artificial intelligence, machine learning and signal processing.
2019-2024 University of York: The future of immersive and interactive storytelling.
2019-2021 Imrsvray, University of Surrey: Building tools to produce six-degrees-of-freedom immersive content that combines lightfield capture and spatial audio.
2016-2019 University of Surrey, University of Salford: Using machine learning to extract information about non-speech and non-music sounds.
2014-2019 Queen Mary University of London, University of Oxford, University of Nottingham: Fusing audio and semantic technologies for intelligent music production and consumption.
2013-2019 University of Surrey, University of Salford, University of Southampton: Advanced personalised and immersive audio experiences in the home, using spatial and object-based audio.
2015-2018 IRT, Bayerischer Rundfunk, Fraunhofer IIS, IRCAM, B-COM, Trinnov Audio, Magix, Elephantcandy, Eurescom: Creating an end-to-end object-based audio broadcast chain.
2013-2016 Joanneum Research, Technicolor, VRT, iMinds, Bitmovin, Tools at Work: Investigating immersive coverage of large-scale live events.

PhD Projects

We have sponsored or hosted the following PhD students, covering a variety of topics:

Dates Student University Description
2020-2024 Jay Harrison (York): Context-aware personalised audio experiences
2020-2024 David Geary (York): Creative affordances of orchestrated devices for immersive and interactive audio and audio-visual experiences
2020-2024 Jemily Rime (York): Interactive and personalised podcasting with AI-driven audio production tools
2020-2024 Harnick Khera (QMUL): Informed source separation for multi-mic production
2019-2023 Angeliki Mourgela (QMUL): Automatic mixing for hearing-impaired listeners
2018-2022 Jeff Miller (QMUL): Music recommendation for ±«Óãtv Sounds
2018-2021 Daniel Turner (York): AI-driven soundscape design for immersive environments
2016-2021 Craig Cieciura (Surrey): Device orchestration rendering rules
2019-2020 Adrià Cassorla (York): Binaural monitoring for orchestrated experiences
2016-2020 Lauren Ward (Salford)
2012-2019 Chris Pike (York)
2013-2018 Chris Baume (Surrey)
2014-2018 Michael Cousins (Southampton)
2014-2018 Tim Walton (Newcastle)
2011-2016 Darius Satongar (Salford)
2011-2015 Paul Power (Salford)
2011-2015 Anthony Churnside (Salford)
2011-2015 Tobias Stokes (Surrey)

Industrial Placements

On occasion, we host PhD or Masters students for short industrial placements:

Year Student University Description
2021 Josh Gregg (York): Audio personalisation for accessible augmented reality narratives
2020 Edgars Grivcovs (York): Audio Definition Model production tools for next-generation audio (NGA) and XR
2020 Danial Haddadi (Manchester): Audio device orchestration tools and trial productions
2019 Valentin Bauer (QMUL): Audio augmented reality
2019 Ulfa Octaviani (QMUL): Remote study on enhanced podcast interaction
2019 Emmanouil Theofanis Chourdakis (QMUL): Automatic mixing for object-based media
2018 Jason Loveridge (York): Device simulation plug-in
2016 Michael Romanov (IEM): Ambisonics and renderer evaluation
2014 Adib Mehrabi (QMUL): Music thumbnailing for ±«Óãtv Music
2014 James Vegnuti (QMUL): User experience of personalised compression using the Web Audio API
2013 Nick Jillings, Zheng Ma (QMUL): Personalised compression using the Web Audio API (see the sketch after this table)
2011 Martin Morrell (QMUL): Spatial audio system design for surround video
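The personalised compression placements are easy to picture because the Web Audio API ships a built-in DynamicsCompressorNode that can be driven from a listener-facing control. The following is a minimal sketch of the idea, assuming a single invented "intensity" preference; the parameter mapping is illustrative, not the actual study code:

```typescript
// Minimal sketch: personalised dynamic range compression in the browser.
// The mapping from the user preference to compressor settings below is
// invented for illustration; only the Web Audio API calls are real.

function createPersonalisedCompressor(
  ctx: AudioContext,
  intensity: number // 0 = full dynamic range .. 1 = heavily compressed
): DynamicsCompressorNode {
  const comp = ctx.createDynamicsCompressor();
  // Lower the threshold and raise the ratio as the listener asks for a
  // flatter, easier-to-follow mix (e.g. for noisy environments).
  comp.threshold.value = -24 - 26 * intensity; // dB
  comp.knee.value = 30;                        // dB, soft knee
  comp.ratio.value = 2 + 10 * intensity;       // 2:1 .. 12:1
  comp.attack.value = 0.003;                   // seconds
  comp.release.value = 0.25;                   // seconds
  return comp;
}

// Usage: route programme audio through the compressor to the speakers.
const ctx = new AudioContext();
const source = new MediaElementAudioSourceNode(ctx, {
  mediaElement: document.querySelector("audio")!,
});
const comp = createPersonalisedCompressor(ctx, 0.7);
source.connect(comp).connect(ctx.destination);
```

Because the compression runs client-side, each listener can choose their own setting without the broadcaster transmitting multiple mixes, which is what makes the browser a convenient platform for this kind of personalisation study.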

This project is part of the Immersive and Interactive Content section and the Audio Research work stream.
