BBC Research & Development

Having good mental models of how AI works is important so we can collectively and individually make good decisions about the use of this widespread, often-invisible, hard-to-explain and fallible technology. As a recent report for the UK government puts it:
“It is about knowing enough to be a conscious and confident user of AI-related products; to know what questions to ask, what risks to look out for, what ethical and societal implications might arise, and what kinds of opportunities AI might provide. Without a basic literacy in AI specifically, the UK will miss out on opportunities created by AI applications, and will be vulnerable to poor consumer and public decision-making, and the dangers of over-persuasive hype or misplaced fear.”
We can think about explaining AI at a number of different layers, each representing a place to intervene with explanations and build better understanding.
 
Layers of explanation diagram
 
Within an AI system: Tools to help engineers and data scientists understand their machine learning models in detail. This is mainly the realm of explainable AI (XAI) research.
As part of an interface to an AI system: Explaining aspects of the AI in the user interface to a product or system; e.g. “… because you liked watching Line of Duty…” style explanations (sketched in code after this list).
Around an AI system: Explaining how the AI works in the context around it. This could be marketing copy, additional FAQs, tutorial videos or part of the onboarding process for a service.
AI in society and media: Changing the images, words, metaphors and stories that are widely used in society and culture to represent AI will affect how it’s generally understood. This could include explaining and representing AI better in our education system, whether that’s school, college or later in life; and influencing how it is featured or mentioned in the media, whether that’s in the news or in popular culture like films.
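
As an illustration of the interface layer, here is a minimal, hypothetical sketch of how a “because you liked…” explanation might be generated. The programme titles, embeddings and the cosine-similarity recommender are all invented stand-ins, not how any real product works.

```python
# A minimal, hypothetical sketch of an interface-layer explanation:
# a "because you liked..." message derived from item similarity.
# Titles, embeddings and the recommender itself are invented for
# illustration - not how any real recommendation system works.
import numpy as np

titles = ["Line of Duty", "Bodyguard", "Springwatch", "Blue Planet"]
embeddings = np.array([
    [0.9, 0.1, 0.0],   # made-up feature vectors, one row per programme
    [0.8, 0.2, 0.1],
    [0.1, 0.9, 0.3],
    [0.0, 0.8, 0.5],
])

def explain(watched: int, recommended: int) -> str:
    """Build an explanation from the cosine similarity the recommender used."""
    a, b = embeddings[watched], embeddings[recommended]
    sim = float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return (f"Because you liked '{titles[watched]}', "
            f"we suggest '{titles[recommended]}' (similarity {sim:.2f}).")

print(explain(0, 1))  # -> Because you liked 'Line of Duty', we suggest 'Bodyguard' ...
```

The useful property here is that the explanation is read off the same signal the system actually used to recommend, rather than written after the fact.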

 

Why it matters

This work should help promote understanding of this influential and widespread technology - reducing both unnecessary fears and unhelpful hype. It fits into our public purposes to "provide impartial news and information to help people understand and engage with the world around them" and to "support learning for people of all ages". The BBC has also made a commitment to explaining how AI and ML work.
 
We think it is particularly important to help people understand machine learning because of four characteristics of the technology.
 
Four characteristics of AI 
 
  1. AI is ubiquitous, affecting almost all of us in many areas of life. Voice assistants, TV recommendations, news feeds, photo search, insurance, health diagnoses - it’s all around us.
  2. AI is invisible. AI often powers decisions and predictions in the systems we use, yet we’re frequently unaware of it. Even in obvious examples, like searching for photos of dogs on your phone, there’s no indication that AI is being used, let alone how.
  3. AI is opaque and hard to explain. It’s hard to explain how AI works in general, or how particular AI systems come to their decisions. This is partly the nature of the technology: even experts and practitioners find it hard to understand exactly what’s going on inside.
  4. AI is fallible and goes wrong. It fails in unusual and unpredictable ways, rarely in the ways that humans fail. It might have biases or errors accidentally absorbed from training data or other AI code, which can’t simply be fixed like a bug - the toy sketch after this list illustrates one such bias. It can also be profoundly misused, amplifying human biases or lending credibility to dubious goals. When it does go wrong, it can have real-world consequences, from exam results to job applications to police surveillance and incarceration.
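
Here is that toy sketch of characteristic 4: a "model" that inherits a bias wholesale from imbalanced training data. The task and labels are entirely invented for illustration.

```python
# A toy illustration of bias absorbed from training data.
# The labels are deliberately skewed and entirely invented.
from collections import Counter

training_labels = ["approve"] * 95 + ["reject"] * 5

# The simplest possible "model": always predict the majority label.
majority_label = Counter(training_labels).most_common(1)[0][0]

def predict(application: dict) -> str:
    # Scores 95% on its own training data, yet approves everyone:
    # the skew lives in the data, so there is no single faulty line
    # of code to patch.
    return majority_label

print(predict({"applicant": "anyone"}))  # -> approve
```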

What we're doing

We are building prototypes to explore new ways to explain AI. Below is a screenshot of an experimental bird identification AI that lets users experiment with it, shows its uncertainty, uses understandable features, and explains its training data and the biases within it. We’re testing it with schoolchildren to learn more about what makes sense to them and what doesn’t.

 Bird identification tool
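
As a rough sketch of the kind of uncertainty display a prototype like this might use, here is how a classifier's raw scores can be turned into probabilities and shown as a ranked list rather than a single confident answer. The species names and scores below are made up; a real tool would take them from a trained image classifier.

```python
# A rough sketch of showing uncertainty instead of a single answer.
# Species names and scores are made up; a real tool would take the
# raw scores (logits) from a trained image classifier.
import numpy as np

species = ["robin", "blue tit", "wren", "blackbird"]
logits = np.array([2.1, 1.8, 0.3, -0.5])  # hypothetical raw model scores

# Softmax turns raw scores into probabilities summing to 1, which can
# be displayed as a ranked list of guesses with their confidence.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

for name, p in sorted(zip(species, probs), key=lambda pair: -pair[1]):
    print(f"{name:>10}: {p:.0%}")
```

Presenting the second and third guesses alongside the first is one way to make a model’s fallibility visible rather than hidden.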

Building on this, we will develop a series of mini-explainers aimed at children, illustrating the pitfalls and quirks of AI systems.

 
We are also running a project to create better images for representing ML and AI - understanding why the clichéd robots and blue brains are unhelpful, and thinking about what could replace them. We are running workshops with teams working on AI, working with external courses, and starting other collaborations.
 
Workshop ideas and sketches

Image ideas from a creative workshop

 

Icon credits
Error 404 by Aneeque Ahmed from the Noun Project
box icon by Fithratul Hafizd from the Noun Project
Ghost by Pelin Kahraman from the Noun Project
stack by Alex Fuller from the Noun Project

This project is part of the Internet Research and Future Services section
