The Algorithms That Rule Your Life
Review by Kate Storey-Fisher
Being Human in the Age of Algorithms
Book by Hannah Fry
In July 2016, an autopilot-controlled Tesla crashed into a white tractor-trailer on a highway in Florida. The navigation system, thinking the side of the trailer was just a bright sky, had directed the car full-speed into the truck. The driver of the car was killed. The Tesla continued driving underneath the trailer until it veered off and hit a tree.
Programming driverless cars is a formidable task, and not just because the stakes are so high. “What’s hard is all the problems with driving that have nothing to do with driving,” says Oxford robotics professor Paul Newman. It doesn’t matter if your car can switch lanes perfectly if it can’t tell the difference between smooth white metal and open sky. In other words, the difficulty with designing algorithms is teaching them how to be more human—but without human imperfections.
This tension lies at the core of Hannah Fry’s approachable if at times disconcerting new book, Hello World: Being Human in the Age of Algorithms. Fry showcases the almost miraculous potential of algorithms, while also being careful to illustrate their deep-set (and sometimes fatal) faults.
Fry, a math professor at University College London, takes on a big subject, whose scope I appreciate even more after reading Hello World. She gives a brief introduction to the types of algorithms programmers use, and hints at the power they wield in our lives. Fry then devotes the rest of the book to the myriad aspects of life in which algorithms are deeply embedded. In addition to discussing driverless cars, Fry takes us through justice and crime, describing software that sets jail sentences and uses facial recognition to locate alleged criminals. Algorithms, we learn, affect everything from mundane grocery purchases to life-saving cancer diagnoses. They are even wound up in the art world, with algorithms shaping our literary preferences and writing their own songs. Though the book achieves breadth at the expense of depth, at times jumping abruptly from one anecdote to the next, it reveals just how much algorithms influence our lives.
Fry is also a popular TV presenter and public speaker, and her ability to communicate technical concepts in an accessible and engaging way is the standout feature of Hello World. She doesn’t belabor the intricacies of the algorithms themselves, explaining them instead through (sometimes hilarious) metaphors. The Los Angeles Police Department’s Predictive Policing system (PredPol for short) becomes “the Kim Kardashian of algorithms: extremely famous, heavily criticized in the media, but without anyone really understanding what it does.” To give us insight into how it works, Fry compares PredPol to a bookie who calculates odds on where crimes will happen. Using data from 80 years of LA crime incidents, the algorithm was able to predict where one in five crimes occurred. When Fry discusses algorithms known as random forests that can, say, predict the next Netflix show you’d like to watch, she likens them to the “Ask The Audience” lifeline in Who Wants To Be A Millionaire. These “forests” are composed of mini-algorithms, each of which “votes” on whether you’ll like a particular show in your Netflix queue. The majority vote is chosen as the result. This gives a much more accurate answer than any single algorithm working alone—like the 91% success rate of asking the audience, versus 65% for phoning a friend.
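The "Ask the Audience" intuition is easy to demonstrate. The sketch below is not Netflix's actual system, just a toy simulation with invented numbers: each mini-voter guesses correctly 65% of the time (the "phone a friend" rate Fry cites), and we check how often a majority of 101 such voters gets the right answer.

```python
import random

def noisy_guesser(truth, accuracy, rng):
    """One weak 'voter': returns the true answer with probability `accuracy`."""
    return truth if rng.random() < accuracy else 1 - truth

def majority_vote(truth, n_voters, accuracy, rng):
    """Poll n_voters weak guessers and return the majority answer."""
    votes = sum(noisy_guesser(truth, accuracy, rng) for _ in range(n_voters))
    return 1 if votes > n_voters / 2 else 0

rng = random.Random(42)
trials = 2000
# How often a single 65%-accurate voter is right, vs. a 101-voter majority.
solo = sum(noisy_guesser(1, 0.65, rng) == 1 for _ in range(trials)) / trials
crowd = sum(majority_vote(1, 101, 0.65, rng) == 1 for _ in range(trials)) / trials
print(f"single voter: {solo:.2f}, 101-voter majority: {crowd:.2f}")
```

Run it and the crowd's accuracy lands near 100%, far above any single voter: the same effect that makes a forest of mediocre mini-algorithms beat one clever one, as long as their errors aren't all correlated.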
These and many of the other algorithms Fry describes fall into the category of machine learning (ML), the buzzword that refers to computers learning from data on their own. ML has permeated nearly every discipline. In my own field of astronomy, new telescopes will observe billions of galaxies, and we will need to label each one as a spiral or an elliptical galaxy. We can’t sort through all of them by hand, so we teach a computer to do it for us. We “train” the algorithm by feeding it images of galaxies from previous datasets already labeled as “spiral” or “elliptical” so it can learn what each label means. When we later give the algorithm images it has never seen before, it can sort them into spirals and ellipticals, almost like magic.
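The train-then-classify loop can be sketched in a few lines. This is a deliberately minimal stand-in, not real astronomy code: instead of images, each "galaxy" here is a single made-up feature (call it a spiral-arm strength score), and the classifier just labels a new galaxy like its nearest labeled neighbor.

```python
# Hypothetical training set: (spiral-arm score, label) pairs.
# High scores stand in for obviously spiral images, low for elliptical.
training_set = [
    (0.9, "spiral"), (0.8, "spiral"), (0.7, "spiral"),
    (0.3, "elliptical"), (0.2, "elliptical"), (0.1, "elliptical"),
]

def classify(score):
    """1-nearest-neighbour: give a new galaxy the label of its closest training example."""
    nearest = min(training_set, key=lambda example: abs(example[0] - score))
    return nearest[1]

print(classify(0.85))  # a strongly spiral-looking galaxy
print(classify(0.15))  # a strongly elliptical-looking one
```

Real pipelines swap the single score for thousands of pixel-derived features and the nearest-neighbor rule for something like a random forest or neural network, but the logic — learn from labeled examples, then generalize to unseen ones — is the same.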
Less enchanting is ML’s tendency to pick up the biases of the data it was trained on. If the training images were low quality, many spirals would be blurred out and mislabeled as elliptical. Even when given new high-quality images, the algorithm would then be biased towards labeling too many galaxies as elliptical.
When it comes to more down-to-earth problems, algorithms can be just as biased—sometimes with devastating consequences. Fry describes an ML algorithm known as COMPAS, which judges in Wisconsin use to predict recidivism and help them set jail terms. This program was intended as a way to avoid biases the human judges might hold. But it turns out that the algorithm, which had been trained on past sentences, picked up these very biases: it is twice as likely to mistakenly label black defendants as high risk compared to white defendants. The history of racial discrimination in the justice system continues to propagate through COMPAS. Fry suggests ways to temper this bias, but argues that the algorithm does have its benefits over human judges—as long as we take it with a grain of salt. “Perhaps,” she writes, “acknowledging that algorithms aren’t perfect, any more than humans are, might just have the effect of diminishing any assumption of their authority.”
While Fry is rightly critical of the COMPAS algorithm, I wish she had dug further into the implications and pervasiveness of machine bias. She fails to mention racial bias when she describes policing algorithms or facial recognition software, which is known to mischaracterize dark-skinned faces much more than white faces. Fry hardly touches gender discrimination, though there are well-documented instances of algorithms favoring men in recruitment and hiring. Algorithms have power over all of us, but we need to recognize that they most harm those who are already vulnerable.
Still, Hello World shines when Fry discusses her own research and brings to bear her scientific expertise. In the section on crime, she describes how her group used detailed crime report datasets to predict the targets of burglaries. (If your house does get robbed, you are up to twelve times more likely to be targeted again in the next week.) To describe how driverless cars decide whether an obstacle is a truck or a tumbleweed, she introduces Bayes’ theorem, a statistical method that “commands an almost cultish enthusiasm” among scientists. I can confirm this in the astrophysics community, where non-Bayesians are now hardly trusted to tell a star from a passing satellite.
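For readers curious what that cultish theorem actually does, here is a hedged sketch of a single Bayesian update for the truck-versus-tumbleweed question. All the probabilities are invented for illustration; the book does not supply these numbers, and a real perception system fuses many sensor readings, not one.

```python
def bayes_posterior(prior, p_obs_given_h, p_obs_given_not_h):
    """P(H | observation) via Bayes' theorem:
    posterior = likelihood * prior / total probability of the observation."""
    evidence = p_obs_given_h * prior + p_obs_given_not_h * (1 - prior)
    return p_obs_given_h * prior / evidence

# Hypothesis H: the obstacle is a truck. Observation: a large metallic radar return.
prior_truck = 0.01        # trucks are rare on this stretch of road (assumed)
p_return_if_truck = 0.95  # a truck almost always produces this return (assumed)
p_return_if_not = 0.02    # a tumbleweed rarely does (assumed)

posterior = bayes_posterior(prior_truck, p_return_if_truck, p_return_if_not)
print(f"P(truck | radar return) = {posterior:.2f}")
```

Even a strong sensor reading lifts a 1% prior only to about a 32% posterior here, which is exactly why these systems keep updating as new readings arrive rather than committing after one observation.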
One of the strongest takeaways of the book is just how insidious some algorithms have become. When Fry describes the business plan of 23andMe and other genetic sequencing companies, she warns: “you’re not using the product; you are the product.” While most of us know this on some level, we often choose to ignore it, placing more value on the chance to learn our predisposition to Alzheimer’s or identify a distant relative.
But once our data is in the hands of companies, they have complete control. Three London hospitals handed over the complete medical histories of 1.6 million patients to Google, in the hopes that it would use artificial intelligence to learn to identify kidney injuries. The patients were never notified, let alone asked for their consent. Even when we know that algorithms are analyzing our data, the code is often proprietary—as is the case for the Wisconsin sentencing algorithm. We are denied access to the inner workings of the technology empowered to decide our fates.
You could walk away from Hello World with the feeling that algorithms are always watching and manipulating you—they are. Their presence has become unavoidable.
But their power doesn’t have to be absolute. With proper regulation and oversight, Fry writes, algorithms could combine the best of humans and technology. Self-driving cars can act as safety nets for human drivers, and algorithms can pair up with radiologists to better detect cancers. Fry imagines a future where algorithms and humans play off of each other’s strengths and solve some of the world’s most pressing problems.
In other words: we can’t beat the algorithms, so we have to join them.
Kate Storey-Fisher is a Ph.D. student in Physics at New York University. She hopes an algorithm won’t learn to write a better book review than her anytime soon.