Interview with Max Sanderson, The Guardian

I recently interviewed Max Sanderson from The Guardian about their Science Weekly podcast series on artificial intelligence (AI). The interview was conducted as part of the Podcast Brunch Club Podcast, of which I am a co-host.

Here is the transcript of our interview, which you can also find in audio form on the PBC site or in the Podbean player below.

Audible Feast: Hello listeners, this is Sara DaSilva, and I am here with Max Sanderson from The Guardian’s Science Weekly Podcast. Thanks so much for doing this interview! Our PBC members love hearing from the creative minds behind the shows that we feature on the playlists, and I always like hearing from the creators too, because from the creative standpoint you hear something totally different from what you get on the listener end, so it’s pretty cool. Tell us a little about the show and your role with it. How long have you been working on the show?

MS: Since the podcast that you featured came out, I actually no longer actively produce the show, but I do exec [produce] it at the moment. I started life off actually as a neuroscientist, so it [my path] into podcasting was quite indirect. Then I realized I wasn’t going to be a very good scientist because I’m not very good with statistics, and I’ve got really shaky hands – I probably wouldn’t be very good in the lab. So I decided to talk about science as a professional and record other people talking about science. That’s how I ended up as a science communicator – that’s what we call it. I love podcasts primarily because I think the way that science is often portrayed to the public is in a kind of very dehumanized form: “We are scientists and We know this and We know that, and now We will tell you,” and there’s no interaction and no humanity.

My big thing has always been trying to put the humans back in the science. I eventually got a job in podcasting – I used to work for a company called Radio Wolfgang, and did a podcast called Scientish, which was looking at scientific themes in movies. From there I got picked up by The Guardian to do the Science Weekly podcast. When I got here 2.5 years ago the show was about what happened in science news that week – which is interesting and fun, but not really what I wanted to do. What I wanted to do was take some of the more timeless elements of science and pull out the characters and stories within them. I did the podcast for just over two years and then I moved on to oversee a couple of podcasts and into the documentary space a bit. I produced this show up until this miniseries on AI.

AF: Well that’s awesome! I am an engineer, so science communicator podcasts are literally one of my favorite things. What you said rings true for me – I’m an engineer, but I never wanted to be an engineer that sat at a desk and did calculations; I wanted to translate that to how it applies. I have to say, I love statistics – hahaha.

MS: Well, I love statistics, but I’m just not very good at them. That’s even worse, it’s an unrequited love.

AF: I think it’s great that you ended up in this role then. It’s perfect. So the episode of The Guardian’s Science Weekly that we’re featuring this month [November 2018] is the last in a four-part miniseries about artificial intelligence – AI. It aired earlier this year [in 2018]. This particular episode was about whether AI needs an off switch. Can you tell us a little more about the four-part miniseries and why you decided to end with this topic?

MS: When I joined Science Weekly I thought half an hour is good to explore some topics, but some topics are too big for that time frame. If you do try to fit it into half an hour, what you end up doing is not exploring very much [in detail]. We did some other miniseries – is time an illusion, is free will real … the philosophical stuff. The thing I’ve always found working with science journalists (I had three brilliant presenters on SW – Ian Sample, Nicola Davis, and Hannah Devlin, who primarily write) is that the side of them that produces the articles and is rightfully serious about science is very different from the side you get when the mics are off, or when they’re not interviewing people.

This miniseries came out of conversations between Ian Sample and myself – Ian is kind of obsessed with AI. He’d come back to the studio to review it and we’d have these conversations, up to an hour long, about the philosophical elements of AI. The thing we kept coming back to was what it can tell us about US – about humans. The fact that we’re now creating this technology where we have to reduce things like intelligence or vision or ethics to binary computations actually can reveal a lot about what it means to be human. That was one element that made us want to do the miniseries. We also wanted roundtable discussions – at the outset, they were all slightly skeptical.

The way AI is generally portrayed in the media is very binary – either very good or very bad. And there’s actually loads of disagreement between AI researchers as to what AI even is, and whether we’ll ever get to human-level AI. That’s why we wanted to have the roundtable discussions, to address this skepticism about AI (hence the miniseries title – Questioning AI). It also helped us flush out some of the disagreements within the field. My favorite episode was actually “What kind of intelligence would we create?” This became a discussion about what intelligence is – is human intelligence the ultimate intelligence? What other types of intelligence are out there? These discussions held up a mirror to what it means to be human.

We wanted to make the last episode quite sexy. When we think about AI we think about Terminator and taking over the world, we envision this big red switch that you can hit when they all go too crazy. But it was really about the ethics of it – what does it mean to create an ethical robot and is it even possible? You could ask the same question – do humans need an off switch? It depends what they’re doing, and whether they know what they’re doing is bad. That was kind of what we were trying to get at with that episode.

AF: Yeah, I think there’s this fear that if you have reason and ethics, then there’s this chance (or almost an inevitability) that world domination or control over everything will ultimately come with it. I think that’s people’s fear – automated weapons and stuff – it’s thinking of it as an extreme. There’s so much else in the whole spectrum of opportunity, bad and good, and that’s probably why there’s such disagreement about everything related to AI. Our minds go to these extremes – we could solve world hunger, or have world destruction. But really, there are so many things in between – maybe because the average person doesn’t know that much about what AI could do, our minds tend to go to these extremes.

MS: I’d completely agree, and I think a big problem is how this stuff is reported on. I’m doing another project at the moment on CRISPR and gene editing – these things are always treated as if they’re somehow “other” to us, as if they have some sense of agency that could save or ruin the world depending on what they choose to do with it. But these things are not good or bad – they have the potential to be good or bad, and what makes them that way is the humans using them. Sir Nigel Shadbolt, in the episode, talked about how the existential threats from AI won’t be the things that ruin humanity – what will ruin humanity is natural stupidity. It’s a very important point! AI has the potential to do bad and good things, and some that are both bad and good, worse and better, but it’s how we use it that will determine the outcome. That’s lost a lot of the time, especially in the way it’s reported.

AF: I also liked the discussion about how regulation and ethics have a tricky relationship – you can’t regulate people’s ethics per se, and the idea of regulating AI isn’t necessarily the answer. It’s not companies or institutions that are selling or creating the AI, it’s the people behind the programming. If you’re going to regulate, there’s some bias or subjectivity embedded in that. It’s almost impossible to be completely objective in a regulation. AI is developed by people who don’t have to be associated with a company or institution. This is a product of brilliant minds, not necessarily software. It becomes very individual.

MS: That was one of my favorite bits, the idea of regulation. How in God’s name do you regulate something like that? I’m no moral relativist, there are things that are right and wrong, but once you get into the middle bit, it’s hard to say what’s right and what’s wrong. Again, when it comes to humans, because we all kind of have this sense of moral agency or an ethical compass, it’s assumed that if the human makes the decision then it’s probably okay.

You touched on something – in this series we wanted to question not only the limits of AI but what kind of questions AI is prompting. Regulation, ethics – these are really important and we’ve never really had to grapple with them other than in the way we create laws that may evolve as society evolves. But we’ve never had to think about the famous trolley question (should you kill one person to save four, what if those people are related to you, what if they’re younger or older, what if they’re sick) … we’ve never had to think about why it is that we make the ethical choices we make. What is it that leads up to our choices? AI is really good at making us think about those choices and what drives our decision making.

AF: Well I loved this episode, and I’ll have to go back and listen to the other three parts of the series. We really appreciate you sharing a little more about the show. One last thing – we like to ask all of our guests what podcasts they’re listening to right now.

MS: I have two. The first is The Shadows by Kaitlyn Prest – it’s unbelievable. I loved The Heart, I loved the way she played around with documentary and fiction. The sound design is inspirational. I’m only about halfway through The Shadows and it’s totally gripping. Slightly uncomfortable sometimes if you listen on the train and there are scenes of a more adult nature and you’re kind of like oh my god … do these people know I’m listening to people doing stuff? The other one I’ve become obsessed with, and can’t quite put a finger on why, is Underdog. It’s kind of playing off the obsession with Beto O’Rourke, but it’s really simple, the sound design doesn’t take away too much, and it’s real-time – there’s an energy to it that you don’t often get with podcasts.

Thanks to Max for sharing about his love of science communication and for bringing The Guardian’s Science Weekly and other podcasts to the masses.
