BBC Research & Development

Posted by Rhianne Jones, Bronwyn Jones on , last updated

The media industry is not alone in facing challenges in innovating responsibly with artificial intelligence (AI). Working out how to tackle these vital ethical and practical questions is the role of a new research programme - BRAID, or ‘Bridging Responsible AI Divides’ - which brings insights from the arts and humanities to bear on today’s rapid technical development. As a core partner, the BBC hosted the BRAID launch event, bringing together a diverse community of policymakers, artists, academics and industry representatives. So what did we learn?

A Responsible AI (RAI) ecosystem can only be achieved if we all work together

Professor Shannon Vallor described how many corporate leaders are seeking to redefine RAI as a narrowly technical challenge of AI safety, but what's missing is the human question of what we collectively deserve and want from AI, including justice, accountability, equity and liberty to know, create and flourish. We need a richer understanding of the kinds of societies people want with AI - and that's where the arts and humanities can help us go beyond merely limiting harms to co-construct humane visions, knowledge and practices for AI.

Co-Directors Shannon Vallor and Ewa Luger on stage introducing the event in front of a screen displaying a 'Welcome to BRAID' message.

Co-Directors Shannon Vallor and Ewa Luger took a critical look at RAI progress to date

We're at a familiar inflection point, Professor Ewa Luger said, where we have proliferating sets of high-level ethical principles, but they are not stopping predictable harms and impacts on people's livelihoods - generative AI is a case in point. We see actors in AI innovation being 'just responsible enough' to keep society pacified while, at the same time, the 'human and the humane are being stripped out' so technical developments can progress. Nurturing new voices from the next generation of thinkers underpins BRAID's approach to supporting the future leaders of RAI.

We've been here before!

Dr Rumman Chowdhury took us through a potted history of RAI and how we've seen the same narratives (Is AI alive? Will AI take our jobs? etc.) recur with generative AI, seeming to pause the progress that had been made in industry practice and governance, government investment and policy development, and resourcing of civil society and independent third parties. She said the toys have changed, but the rules of the game have not - power still remains concentrated in the hands of the few.

Dr Rumman Chowdhury on stage in blue light, stood at a lectern in front of a screen reading: Building Responsible AI in the age of Generative AI.

Rumman Chowdhury said we don't need to reinvent the wheel for RAI - we can expand existing and ongoing work:

"What people want is to be able to take advantage of the technology. What is blocking them is not the technology, it's the broken institutions. So the fear every photographer, screenwriter, actor, etc., has is not that artificial intelligence will take their jobs. It's that corporations will put them out of a job by using this technology. Even though there are pathways forward that would actually enable and ensure everybody to be gainfully employed - for all of us to be benefiting from the technology."

Responsible AI requires more than technical safety

Panellists discussed how building good societies where AI works for everyone requires deliberately and effectively bringing in the voices of those currently excluded from the conversation, including diverse publics and underrepresented groups. It involves normative questions about whether, when and how we should use AI - and, importantly, we need to be able to say no to deploying these technologies when necessary. Finally, RAI is about practices and processes that need to be cultivated and iterated over time.

Panellists sat in a row in front of a screen reading: Panel 1: Lessons from the first wave of Responsible AI. Panellists, from left to right: Andrew Strait, Ada Lovelace Institute, Ali Shah, Accenture, Helen Kennedy, University of Sheffield, David Leslie, Alan Turing Institute, Dawn Bloxwich, DeepMind, Stephen Cave, University of Cambridge.

Left to right: Andrew Strait, Ada Lovelace Institute, chaired Panel 1 with Ali Shah, Accenture, Helen Kennedy, University of Sheffield, David Leslie, Alan Turing Institute, Dawn Bloxwich, DeepMind, Stephen Cave, University of Cambridge

Existential risk framings are a distraction from the here and now

AI systems are already affecting people's lives in important ways, and diverting the conversation to potential future harms can distract from the need to intervene in AI innovation now. ChatGPT and the recent wave of generative AI have brought critical questions of power into public discourse, and this represents an opportunity to refocus the debate away from existential risk and instead amplify new voices working to evidence and explore the real impacts.

Panellists on a purple stage talking. From left to right: Atoosa Kasirzadeh, Abeba Birhane, Mozilla Foundation, Yasmine Boudiaf, Ada Lovelace Institute, Arne Hintz, Cardiff Data Justice Lab, Carolyn Ten Holter, Oxford Internet Institute, Jack Stilgoe, UCL.

Left to right: Panel 2 was chaired by Atoosa Kasirzadeh and included Abeba Birhane, Mozilla Foundation, Yasmine Boudiaf, Independent Artist, Arne Hintz, Cardiff Data Justice Lab, Carolyn Ten Holter, Oxford Internet Institute, Jack Stilgoe, UCL

Our futures are not determined: Power matters but so do moral and creative imaginaries

Panellists discussed how creatives are both inspired by and experimenting with AI capabilities, while simultaneously feeling anxiety and fear about their jobs and professional futures. Looking back for historical lessons (e.g. interrogating material infrastructures, avoiding extremes and making room for nuance and detail) and leveraging the experience and expertise of diverse stakeholders in the present can help us build new, desirable futures.

Panellists sat in a row in front of a screen reading: Panel 3 - The Future of Responsible AI with the Arts and Humanities. From left to right, the panellists are: Rhianne Jones, BBC R&D, Jonnie Penn, Cambridge, Rebecca Fiebrink, Goldsmiths, Ramon Amaro, Nieuwe Instituut, Joel McKim, Birkbeck, and Franziska Schroeder, Queen’s University Belfast.

Left to right: Rhianne Jones, BBC R&D, chaired Panel 3 with Jonnie Penn, Cambridge, Rebecca Fiebrink, Goldsmiths, Ramon Amaro, Nieuwe Instituut, Joel McKim, Birkbeck and Franziska Schroeder, Queen's University Belfast

Alongside the talks, guests enjoyed an immersive curated exhibition of art, music and interactive visualisations in the BBC Media Cafe, featuring work by Patricia Wu Wu, Emma Varley, Wesley Goatley, Jake Elwes, Pip Thornton and a musical performance from Jess +.

Composite image of four photos from a musical performance by Jess Fisher, Deirdre Bencsik, Clare Bhabra and a robot, controlled by Jess.

Image of a musical performance by Jess Fisher, Deirdre Bencsik, Clare Bhabra and a robot controlled by Jess

The complexity and urgency of responsible AI require these important conversations to continue. For more information on how the BRAID programme will continue to do this, .

Interested in getting involved? See the . Have an idea for a project that responds to one of ? Get in contact with the BBC via BRAID or by emailing responsibleinnovationteam@bbc.co.uk

BRAID is dedicated to integrating Arts, Humanities and Social Science research more fully into the Responsible AI ecosystem. Funded by the UKRI Arts and Humanities Research Council, BRAID is the first Responsible AI programme of its scale in the UK. BRAID's ambition is to see a wider community of researchers, practitioners and publics collaborate with industry and policymakers to tackle some of the biggest ethical questions posed by AI, building public trust and ensuring the UK remains at the global forefront of the research, development and deployment of AI. To find out more about BRAID, you can read a , or visit our website at

A big thanks to the BBC-BRAID event launch team, led from the BBC side by Thom Hetherington, and to the BBC Radio Theatre production team in London.
