
AI and the Archive - the making of Made by Machine

How we used AI and machine learning to create mini-±«Óãtv Four style video montages.

Published: 5 September 2018

It had never been done before, and the idea would make many a broadcaster’s eyes go wide with wonder. Use artificial intelligence (AI) to help create two nights of experimental programming on network TV, on ±«Óãtv Four, by delving into the treasures of the vast ±«Óãtv Archive. And then dig deeper, and use AI and machine learning to create mini-±«Óãtv Four style segments in a programme presented by Hannah Fry, to illustrate how the technology works.

So why were we doing it? Here at R&D we’re already exploring how AI and machine learning technologies could transform the future of media production, and we are always looking to work with programme makers to support and future-proof the ±«Óãtv.

The ±«Óãtv has one of the largest broadcast archives in the world, and manually searching millions of programme hours is near impossible. It’d take decades for one person to watch them all.

The increasing digitisation of the archive gives us the opportunity to develop AI tech to help filmmakers and schedulers find hidden gems - programmes and content that might otherwise be overlooked or never seen again. And we can harness the speed of machines to help us: a computer’s gaze is much faster than ours, and it can scan a year’s worth of TV in a couple of days.

R&D have previously worked with ±«Óãtv Four, so it was great to collaborate again, and we agreed with ±«Óãtv Four Editor Cassian Harrison that artificial intelligence, an emerging trend, would be a great choice. Two ideas were eventually selected that could make the channel team’s working lives easier and best complement its schedule. And so ±«Óãtv 4.1 was born.

First, we would use machine learning to pick out likely shows for an evening’s entertainment on ±«Óãtv Four from the vast ±«Óãtv Archive. The channel's scheduler would then use this list to select what would be broadcast.

Now ±«Óãtv Four is quite a distinctive channel, so we’ve had to create the new technology ourselves, drawing on the work of developers around the world. It’s been done in-house by R&D and can’t be bought off the shelf. It’s in its early stages and we’ve been working with academics to refine it, including one from Mexico who is working on a PhD about teaching computers to enjoy television.

The AI examined the programmes that ±«Óãtv Four had shown in the past and their attributes, analysing their descriptions and subjects – be it music, history or science. Computers trawled through more than 270,000 programmes across the archive that were available in digital form, ranking the top 150 most relevant factual ones by what you could call their '±«Óãtv Four-ness'. Schedulers then used the list to select those to show across two nights, the 4th and 5th September 2018.
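The article doesn't say how that '±«Óãtv Four-ness' score was actually computed, but one way to picture the ranking step is as a text classifier trained on programme metadata: programmes ±«Óãtv Four has shown are positive examples, everything else is negative, and the whole digitised archive is then scored and sorted. The sketch below is purely illustrative - the field names, the scikit-learn choice and the model are assumptions, not R&D's real pipeline.

```python
# Illustrative sketch only: rank archive programmes by "±«Óãtv Four-ness"
# using their textual metadata. Field names, library and model choice are
# assumptions; the real R&D system is not described in the article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def metadata_text(programme):
    """Join the metadata fields the model is allowed to see."""
    return " ".join([programme["title"], programme["description"],
                     " ".join(programme["subjects"])])

def rank_by_four_ness(archive, four_history, other_history, top_n=150):
    # Positive examples: programmes ±«Óãtv Four has shown.
    # Negative examples: programmes broadcast elsewhere.
    texts = [metadata_text(p) for p in four_history + other_history]
    labels = [1] * len(four_history) + [0] * len(other_history)

    vectoriser = TfidfVectorizer(min_df=2, stop_words="english")
    classifier = LogisticRegression(max_iter=1000)
    classifier.fit(vectoriser.fit_transform(texts), labels)

    # Score every digitised archive programme and keep the most "Four-like".
    scores = classifier.predict_proba(
        vectoriser.transform([metadata_text(p) for p in archive]))[:, 1]
    ranked = sorted(zip(archive, scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:top_n]
```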

Now to the second idea, and the really challenging part of the project: using AI to help make TV in Made by Machine: When AI met the Archive. Our aim was to highlight how the technology worked, and using it to create mini-±«Óãtv Four style compilations from these top 150 programmes seemed like the perfect way to do it. But how would the machine 'watch' them? What would it see? And could it be trained to select and edit video in the beautiful way people do?

The aim was not only for the AI to create segments, but to let people 'see' inside the mind of the machine as it did so, albeit in a simplified way. Viewers could watch its processes and also spot the limits of its learning.

How did it do that? It broke each of those top 150 ±«Óãtv Four-like shows into bite-size chunks (more than 15,000 in total), and then chained them back together.
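The programme doesn't spell out how those chunks were cut. A reasonable guess at the general shape of the step is to detect shot boundaries from frame-to-frame colour changes and then merge adjacent shots until each chunk sits within the duration range mentioned later in the article (25 seconds to just under two minutes). The sketch below uses OpenCV; the thresholds and the method itself are assumptions for illustration.

```python
# Illustrative sketch: split a programme at visual shot boundaries using
# simple colour-histogram differencing, then merge adjacent shots into
# "bite-size chunks". OpenCV, thresholds and durations are assumptions.
import cv2

def shot_boundaries(video_path, threshold=0.5):
    """Return timestamps (in seconds) where the picture changes sharply."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    boundaries, prev_hist, frame_idx = [0.0], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            # Low correlation between consecutive histograms suggests a cut.
            if cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL) < threshold:
                boundaries.append(frame_idx / fps)
        prev_hist, frame_idx = hist, frame_idx + 1
    cap.release()
    return boundaries

def merge_into_chunks(boundaries, programme_length, min_s=25, max_s=115):
    """Greedily merge shots into chunks of roughly 25s to just under 2 minutes."""
    chunks, start = [], boundaries[0]
    for cut in boundaries[1:] + [programme_length]:
        if cut - start >= min_s:
            chunks.append((start, min(cut, start + max_s)))
            start = cut
    return chunks
```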

And to show a range of different ways computers could analyse video and then try to link it together to form some sort of story or narrative, R&D technologists devised four techniques:

  • Object & Scene Recognition: where the AI learns to identify what a scene consists of, including the type of landscape, identifying objects, whether people are featured and what they might be wearing. You then see how it attempts to create a compilation where each scene follows on from the last in some way.
  • Subtitle Analysis: the AI uses natural language processing as it scans the subtitles of archive programmes and then looks for connections between words, topics and themes as it pieces footage together (see the sketches after this list).
  • Visual Energy (or Dynamism): the AI analyses the video frame by frame to detect whether there’s a lot of activity on screen (high energy) or not (low energy). It then tries to create a compilation with a shifting pace: starting slow, building up energy, then dropping back for a breather before rising to a climax (again, see the sketches after this list).
  • Finally, the AI draws on what it’s learnt from all three techniques to create a new piece of content.
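The broadcast shows these decisions only as simplified on-screen text, so the internals are ours to guess at. As a rough illustration of the chaining idea behind the subtitle technique, the sketch below greedily picks, at each step, the unused clip whose subtitles share the most vocabulary with the clip just played; the same pattern could be driven by object and scene labels instead of words. The tokenisation, stop-word list and scoring are all assumptions.

```python
# Illustrative sketch of subtitle-driven chaining: repeatedly pick the unused
# clip whose subtitle vocabulary overlaps most with the current clip's.
# Tokenisation and scoring are assumptions, not the real R&D system.
import re

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "it", "is", "was",
             "that", "this", "for", "on", "with", "as", "but", "we", "you"}

def keywords(subtitle_text):
    """Lower-case word set with very common words removed."""
    words = re.findall(r"[a-z']+", subtitle_text.lower())
    return {w for w in words if w not in STOPWORDS and len(w) > 2}

def chain_clips(clip_subtitles, first_clip, max_clips=20):
    """clip_subtitles: dict of clip id -> subtitle text. Returns an ordered chain."""
    remaining = {cid: keywords(text) for cid, text in clip_subtitles.items()}
    sequence, current = [first_clip], remaining.pop(first_clip)
    while remaining and len(sequence) < max_clips:
        # The next clip is the one sharing the most words, topics or themes.
        next_id = max(remaining, key=lambda cid: len(current & remaining[cid]))
        if not current & remaining[next_id]:
            break  # nothing connects any more, so the chain ends here
        sequence.append(next_id)
        current = remaining.pop(next_id)
    return sequence
```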
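For the visual energy technique, a simple stand-in for 'activity on screen' is the average frame-to-frame pixel difference; clips can then be ordered into the slow start, build-up, breather and climax arc described above. Again this is a hedged sketch: the metric, the sampling rate and the arc shape are assumptions, not the broadcast system.

```python
# Illustrative sketch of "visual energy": score clips by average frame-to-frame
# difference, then arrange them into a slow start / build / breather / climax
# arc. Metric and ordering are assumptions for illustration only.
import cv2
import numpy as np

def visual_energy(video_path, sample_every=5):
    """Mean absolute difference between sampled greyscale frames."""
    cap = cv2.VideoCapture(video_path)
    prev, diffs, idx = None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:
            grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if prev is not None:
                diffs.append(float(np.mean(cv2.absdiff(prev, grey))))
            prev = grey
        idx += 1
    cap.release()
    return float(np.mean(diffs)) if diffs else 0.0

def arrange_by_energy(scored_clips):
    """scored_clips: list of (clip, energy). Returns clips in a pacing arc."""
    ordered = sorted(scored_clips, key=lambda cs: cs[1])   # calmest first
    breather, rest = ordered[:2], ordered[2:]              # hold two calm clips back
    quarter = max(1, len(rest) // 4)
    slow, build, peak = rest[:quarter], rest[quarter:-quarter], rest[-quarter:]
    # Slow opening, rising energy, a quiet breather, then the high-energy climax.
    return [clip for clip, _ in slow + build + breather + peak]
```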

For each technique we tried to show on screen the decisions that were being made, in the form of simplified text generated by the AI. The video compilations in the programme are virtually as the AI made them: the selection of all the clips, and pretty much all of the editing, was done by the machine with minimal human intervention.

We tested each of these techniques by creating compilations of up to 15 minutes. It didn’t always go to plan. The machine didn’t always get it right, and sometimes got stuck in its own data loops. At one point it had a particular fascination with buses; another test run focused almost solely on clips from one documentary about the garden of an English country house, with barely a mention of the other programmes. But there were fantastic moments of unexpected juxtaposition, and wonderful snippets of archive programmes which our team had never seen before and which we’d now like to watch in their entirety!

The AI reflects its training data, so any bias in that data was replicated: it failed to spot mobile phones, for example, but picked out 'cell phones' (incorrectly in some cases). And the machine's choices were guided by our engineers. For example, we opted for the bite-size chunks of programme to be created with durations between 25 seconds and just under two minutes. It also found some programmes that wouldn’t usually be picked up for scheduling due to complicated contractual and rights issues, so we filtered those out. Multiple versions were made using each technique, and then one of each was chosen by ±«Óãtv Four to go into Made by Machine: When AI met the Archive.

These sections were incorporated into the programme alongside presenter Dr Hannah Fry and a virtual host, '±«Óãtv 4.1', as well as colleagues across the ±«Óãtv, including the team at ±«Óãtv Archives in West London. As experimental as the programme was, we didn’t have the timeframe or resources for everything to be created by AI and machine learning. The virtual host '±«Óãtv 4.1’ was made by , in the spirit of ±«Óãtv Four and the programme. We also worked closely with our colleagues in ±«Óãtv Television & Media Operations on the programme’s production, with graphics designed with .

Made by Machine: When AI met the Archive aired at 9pm on ±«Óãtv Four on 5th September 2018, as part of ±«Óãtv 4.1 AI TV.

  • Internet Research and Future Services section

    The Internet Research and Future Services section is an interdisciplinary team of researchers, technologists, designers, and data scientists who carry out original research to solve problems for the ±«Óãtv. Our work focuses on the intersection of audience needs and public service values, with digital media and machine learning. We develop research insights, prototypes and systems using experimental approaches and emerging technologies.
