Free will online: the illusion of choice

You might think you control your online life - but algorithms make most decisions for you...
22 September 2020

Interview with Kartik Hosanagar, Wharton School of the University of Pennsylvania


Thanks to the internet, it seems like today we have more choices than ever before: films to watch or books to read, places to go, people to talk to. But there’s so much information online that we can only ever see a fraction of it - and so most tech companies track everything you do on their platform, build a digital doppelganger of who you are, then compare that model of you to others, to try to figure out what you’d most like to see. Kartik Hosanagar researches the consequences of this process at the Wharton School of the University of Pennsylvania, and Chris Smith asked him who’s really in control here...
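As a rough, hypothetical illustration of that "compare your model of you to others" step, here is a minimal sketch of user-based collaborative filtering, one common way recommenders work; all names, ratings, and numbers below are made up, and real systems are vastly larger and more sophisticated.

```python
# Toy user-based collaborative filtering: represent each user as a vector
# of ratings, measure how similar other users are to you, and predict your
# interest in unseen items from what your closest "taste neighbours" liked.
# All data here is invented for illustration.
import numpy as np

# Rows = users, columns = items; 0 means "not yet seen/rated".
ratings = np.array([
    [5, 4, 0, 1, 0],   # you
    [4, 5, 5, 1, 0],   # user A: tastes similar to yours
    [1, 0, 1, 5, 4],   # user B: tastes quite different
])

def cosine(u, v):
    """Cosine similarity between two rating vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

you = ratings[0]
sims = np.array([cosine(you, other) for other in ratings[1:]])

# Predict your rating for each unseen item as a similarity-weighted
# average of the other users' ratings for that item.
for item in np.where(you == 0)[0]:
    pred = np.dot(sims, ratings[1:, item]) / sims.sum()
    print(f"item {item}: predicted rating {pred:.2f}")
```

Run as written, this predicts a high rating for the item your most similar neighbour loved and a low one for the item only the dissimilar user enjoyed; scaled up to millions of users, that is essentially the "digital doppelganger" comparison described above.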

Kartik - We all believe that we're making our own choices, but all of my research shows that's not the reality. For example, 80% of the viewing hours streamed on Netflix originated from automated recommendations. At YouTube, the statistic is very similar. Close to a third of the sales at Amazon originate from automated recommendations; and the vast majority of dating matches on apps like Tinder are initiated by algorithms. Really, if you think about it, there are very few decisions we make these days that aren't touched by algorithms built on top of big data.

Chris - So how are these algorithms actually doing what they do, and what are the risks?

Kartik - First, I want to acknowledge that algorithms create a tremendous amount of value. I mean, which of us wants to go back to whatever the TV network decides we need to watch? But at the same time these algorithms are not necessarily objective, infallible decision makers; they are prone to many of the same biases we associate with humans. For example, in a recent case involving algorithms used in US courtrooms to compute a defendant's risk of reoffending, it was shown that these algorithms were biased against black defendants. And to be clear, it's not that there's a human programmer who's programming these biases in; rather, the algorithms are learning from data. So if, in the past, there was a racial bias in the policing or criminal sentencing system, the algorithm will learn to assume that a black defendant is more likely to reoffend.
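To make that mechanism concrete, here is a deliberately simplified, synthetic sketch (not the actual courtroom system) of how a model can absorb bias from historically skewed labels without any programmer writing the bias in:

```python
# Synthetic illustration: when historical labels partly reflect past
# discrimination, a model trained on them learns to use the protected
# attribute, even though no one programmed it to.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)     # protected attribute (0 or 1)
risk = rng.normal(size=n)         # the "true" underlying risk signal

# Historical outcomes mix real risk with a skew against group 1,
# standing in for biased past policing/sentencing decisions.
label = (risk + 0.8 * group + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

model = LogisticRegression().fit(np.column_stack([risk, group]), label)
print("learned weight on group attribute:", round(model.coef_[0][1], 2))
# A clearly positive weight: the model has "learned" the historical bias.
```

The point is the one Hosanagar makes: the bias enters through the training data, so addressing it means auditing the data and the model's decisions, not hunting for a malicious line of code.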

Chris - But also these algorithms can be sussed out by savvy humans, who basically figure out how they work.

Kartik - Yeah, I think that's a good way to look at it. If you look at the world of advertising and marketing, even before computers, before digital technologies, most of marketing and advertising was focused on how do we... I don't want to use the word “con” people into making decisions, but certainly how do we persuade people? The new version of it is: how do we persuade algorithms to put us in front of people? I view this mostly as a positive, in the sense that this exercise helps ensure that the most relevant websites come in front of consumers; but at the same time, there is a dark side to it. Some of these companies are focused on, "how do we fool the algorithm into thinking our page is more relevant than it really is?" There's an element of grey here. And one, again, needs to be cautious about: how is the algorithm making the decision? Why did this particular recommendation get made?

Chris - Is there not a danger that this is narrowing our choices, because it just force-feeds us a monotonous diet of what we like, and it makes us less adventurous, less likely to think outside the box; and as a result, while our lives may seem simpler, they're potentially poorer for it? Are we losing our free will here?

Kartik - Yeah. In fact in my book, A Human's Guide to Machine Intelligence, I argue that most of us really do not have the free will that we think we do. And if you think about who we are, at the end of the day, it's the sum total of the choices we've made, what we read, what we bought; and those choices have ultimately shaped who we are today. To be clear again, I'm not saying we need to become Luddites and go back to a world before algorithms. The analogy I offer is that it's like an early human discovering fire and saying, “well, this can be tricky to control - let me stop using fire”. Instead you learn to use fire; you learn to control fire; you learn to have things like the fire department that can extinguish fires; you maybe even keep fire extinguishers at home and so on; and you use fire.
