Responsible Machine Learning in the Public Interest

Developing machine learning and data-enabled technology in a responsible way that upholds ±«Óãtv values.

Published: 1 January 2018

±«Óãtv Research & Development is working with colleagues across the ±«Óãtv, as well as academic and expert institutions, to develop Machine Learning - and data-enabled technologies more generally - in ways that reflect and uphold core ±«Óãtv values and support the ±«Óãtv in delivering its remit.

Project from 2018 - present

Three people look at a wall of photographs

Why it matters

Machine learning (ML) has transformative potential across sectors such as health, education, media and transport - but this disruptive potential brings with it a set of societal challenges and raises important questions about the broader social implications and consequences of the technology.

The ±«Óãtv is currently developing machine learning applications and capabilities, and ±«Óãtv Research & Development is exploring potential future applications of artificial intelligence (AI) in the media. The risk of unintended societal consequences from ML has been well illustrated by recent cases where ML systems have been shown to make decisions that are biased or discriminatory. The opacity of many of these systems further complicates this problem, and in many cases there is a lack of clarity as to where accountability resides in these complex socio-technical systems. The ±«Óãtv is committed to anticipatory, evaluative and proactive research to advance machine learning in the public interest.

'Just as our broadcasting and journalism services are built on a number of fundamental principles, based on our public mission (...) the AI services that we build will have these same principles at their heart'


This work programme aims to deepen our knowledge of the key challenges facing the media industry, with a specific focus on public service broadcasting, to help keep the ±«Óãtv at the forefront of debates, developments and best practice. Our research agenda aims to develop an approach to ML in which core public service values - fairness, transparency and accountability, among others - are embedded and preserved in the future development, application and evaluation of machine learning technologies and automated systems more generally.


Current areas of work

  • Responsible AI and public service media
  • Intelligible AI by design
  • Public service approaches to personalisation and recommendation systems
  • Public understandings of AI and attitudes/expectations about the use of AI in the media

Following the 2017 ±«Óãtv conference on Artificial Intelligence and Society, ±«Óãtv R&D, in collaboration with key stakeholders across the ±«Óãtv, conducted scoping work into current debates about ethics and machine learning. We attended several key events, including one at the Royal Society and another organised by TechUK, and we conducted a comprehensive literature review on the topic. This work culminated in a scoping report, 'The case for ethical machine learning at the ±«Óãtv', which made recommendations for a ±«Óãtv research agenda to advance work in this area.

These recommendations have now been formalised into the following programme of research:

  • Responsible AI and Public Service Media: We are building case studies of ML at the ±«Óãtv to identify issues and necessary responses to help ensure fairness, transparency and accountability in workflows and systems. We are also supporting academic research into AI, media and bias to inform our work around responsible AI in the public sector.
  • Intelligible AI: We are interviewing industry stakeholders about ML and AI systems at the ±«Óãtv, and we are exploring key requirements for explainability.
  • Public Service Personalisation: We are investigating approaches to public service recommendations and personalisation that align with ±«Óãtv values, for example, by fostering principles of diversity exposure. This extends to considering new ways to articulate and measure public service value in these systems.
  • Audience Research: We are researching audience understandings, attitudes and expectations around automated decisions, ML and the media.
  • Convening an internal and external debate: We are working with key people across the wider ±«Óãtv to convene internal discussion forums and helping our colleagues in the ±«Óãtv Blue Room and ±«Óãtv Academy organise the 'AI, Society and the Media' conference, including an 'AI, media diversity' networking event hosted by the ±«Óãtv women in STEM network.
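The 'diversity exposure' idea mentioned in the personalisation strand above can be made concrete with a small sketch. The following is purely illustrative and not a ±«Óãtv algorithm: it shows one simple way a recommender could trade off predicted relevance against topical diversity, so that a list is not dominated by a single topic. The item catalogue, scores and the `diversity_weight` parameter are all hypothetical.

```python
# Illustrative sketch only: a greedy re-ranker that penalises items whose
# topic is already represented in the selection. All data here is made up.

def rerank_with_diversity(items, k, diversity_weight=0.5):
    """Greedily pick k items, balancing relevance against topic diversity.

    items: list of dicts with 'id', 'relevance' (0-1) and 'topic' keys.
    diversity_weight: relevance penalty for repeating an already-seen topic.
    """
    selected = []
    seen_topics = set()
    remaining = list(items)
    while remaining and len(selected) < k:
        def score(item):
            # Items from an already-selected topic lose diversity_weight.
            penalty = diversity_weight if item["topic"] in seen_topics else 0.0
            return item["relevance"] - penalty

        best = max(remaining, key=score)
        selected.append(best)
        seen_topics.add(best["topic"])
        remaining.remove(best)
    return selected


catalogue = [
    {"id": "a", "relevance": 0.95, "topic": "news"},
    {"id": "b", "relevance": 0.90, "topic": "news"},
    {"id": "c", "relevance": 0.60, "topic": "science"},
    {"id": "d", "relevance": 0.55, "topic": "arts"},
]

picks = rerank_with_diversity(catalogue, k=3)
print([p["id"] for p in picks])  # → ['a', 'c', 'd']
```

With pure relevance ranking the top three would be two news items plus one other; the diversity penalty instead surfaces one item from each topic. Measuring 'public service value' in such systems is precisely about choosing and justifying trade-offs like this one.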

How to get involved

This is a ±«Óãtv Research & Development programme of work carried out in collaboration with colleagues across the ±«Óãtv and with academic and expert institutions. Our approach is interdisciplinary and collaborative. If you are actively working in this area and want to share your work with us, or think there might be opportunities to collaborate, we want to hear from you.

Project Team

  • Bill Thompson

    Head of Public Value Research
  • Tim Cowlishaw

    Senior Software Engineer
  • Ahmed Razek

    Senior Technology Demonstrator, ±«Óãtv TS&A
  • Ali Shah

    Head of Technology Transfer & Partnerships, ±«Óãtv TS&A
  • Internet Research and Future Services section

    The Internet Research and Future Services section is an interdisciplinary team of researchers, technologists, designers, and data scientists who carry out original research to solve problems for the ±«Óãtv. Our work focuses on the intersection of audience needs and public service values, with digital media and machine learning. We develop research insights, prototypes and systems using experimental approaches and emerging technologies.
