Making AI More Understandable
Artificial Intelligence is ubiquitous and we are developing ways to make it more understandable for everyone in society.
Artificial Intelligence, or “AI” (often built using machine learning), is ubiquitous, but we might not know it is there, and it can go wrong. It’s a technology that underpins important decisions and has a major impact on individuals and society. This work is developing new approaches to making AI more understandable to non-expert audiences.
Project from - present
“It is about knowing enough to be a conscious and confident user of AI-related products; to know what questions to ask, what risks to look out for, what ethical and societal implications might arise, and what kinds of opportunities AI might provide. Without a basic literacy in AI specifically, the UK will miss out on opportunities created by AI applications, and will be vulnerable to poor consumer and public decision-making, and the dangers of over-persuasive hype or misplaced fear.”
![Layers of explanation diagram](/rd/sites/50335ff370b5c262af000004/assets/60a3d61906d63e96f20000d8/Untitled.001.png)
Why it matters
![Four characteristics of AI](/rd/sites/50335ff370b5c262af000004/assets/60a3d64d06d63e96f20000d9/Untitled.002.png)
- AI is ubiquitous. It affects almost all of us in many areas: voice assistants, TV recommendations, news feeds, photo search, insurance, health diagnoses. It’s all around us.
- AI is invisible. AI often powers decisions and predictions in the systems we use, but we’re rarely aware of it. Even in obvious examples, like searching for photos of dogs on your phone, there’s no indication that AI is being used, let alone how.
- AI is opaque and hard to explain. It’s hard to explain how AI works in general, or how a particular AI system reaches a decision. This is partly the nature of the technology: even experts and practitioners find it hard to understand exactly what’s going on inside.
- AI is fallible and goes wrong. It fails in unusual and unpredictable ways, quite unlike the ways humans fail. It can pick up biases or errors from its training data or from other AI code, and these can’t be fixed like an ordinary software bug. It can also be profoundly misused, amplifying human biases or lending credibility to dubious goals. When it does go wrong, the consequences are real, from exam results to job applications to police surveillance and incarceration.
What we're doing
We are building prototypes to explore new ways to explain AI. Below is a screenshot of an experimental bird-identification AI that lets users experiment with it, shows its uncertainty, uses understandable features, and explains its training data and the biases within it. We’re testing it with school children to learn more about what makes sense to them and what doesn’t.
Building on this, we will develop a series of mini-explainers aimed at children, illustrating some of the pitfalls and quirks of AI systems.
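One of the ideas the prototype explores, showing a model’s uncertainty alongside its answer, can be sketched in a few lines. This is not the project’s actual code; the bird labels, scores, and confidence threshold below are illustrative assumptions, showing how a classifier’s raw scores might be turned into the kind of hedged, plain-language answer a non-expert can weigh up.

```python
import math

def softmax(logits):
    """Convert a model's raw scores into probabilities that sum to 1."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def explain_prediction(labels, logits, unsure_below=0.6):
    """Return a plain-language answer that hedges when confidence is low."""
    probs = softmax(logits)
    ranked = sorted(zip(labels, probs), key=lambda lp: lp[1], reverse=True)
    top_label, top_prob = ranked[0]
    # Below the (arbitrary, illustrative) threshold, say so explicitly.
    verdict = "I think" if top_prob >= unsure_below else "I'm not sure, but it might be"
    lines = [f"{verdict} this is a {top_label} ({top_prob:.0%} confident)."]
    # Also surface the runners-up, so the answer isn't presented as certain.
    for label, prob in ranked[1:3]:
        lines.append(f"It could also be a {label} ({prob:.0%}).")
    return "\n".join(lines)

print(explain_prediction(["robin", "sparrow", "blackbird"], [2.0, 1.5, 0.2]))
```

Surfacing the runners-up and an explicit “I’m not sure” is one small way a system can be honest about the fallibility described above, rather than presenting every answer as a fact.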
![Workshop ideas and sketches](/rd/sites/50335ff370b5c262af000004/assets/60a3d71706d63eba2a0000b9/Untitled.004.png)
Image ideas from a creative workshop
This project is part of the Internet Research and Future Services section
People & Partners
Project Team
- David Man, Creative Director
- Libby Miller, Producer
- Alicia Grandjean, R&D Engineer
- Tristan Ferne, Lead Producer
- Henry Cooke, Senior Producer & Creative Technologist
- Ben Hughes, Junior Research Engineer
- Jess Bergs, Senior Creative Technologist