Where is artificial intelligence taking us?
Review by Dervla Kelly
Life 3.0
Being Human in the Age of Artificial Intelligence
Book by Max Tegmark
History tells us that we humans are really good at ignoring spectacular risks and being undone by our own inventions. Consider the modest example of asbestos: strong and resistant to fire, it raised suspicions early on of being a danger to health, but it was used widely in buildings for almost a century before we accepted that its airborne fibers caused lung disease. Other known but ignored dangers include tobacco and climate change. And now you can add artificial intelligence to that list, Max Tegmark informs us in his earnest, probing new book, Life 3.0.
To me, artificial intelligence (AI) is an invisible force, like garlic in sauce. Tegmark, a physicist and cosmologist at the Massachusetts Institute of Technology, tells me AI is non-biological intelligence. He provides accessible, factual examples of the tasks AI can accomplish (such as slick poker-playing computers and precocious language translation services) and how it accomplishes them. He then describes the development of what he calls “super AI” in machines and explores how such technology could infiltrate both our immediate world and possible cosmic futures billions of years away. The AI experts cited in the book disagree about whether super AI is even possible, given its computing power requirements and costs. Tegmark has faith in the ability of humans to develop super AI sometime in the next 100 years. But, he cautions, “It may take that long to make it safe.”
This tone of cautious, hedged speculation runs through Life 3.0 as Tegmark surveys the impact of superintelligent machines on political structures, national security, information, finances and communication technology. The book is every bit as political and social as it is scientific. Tegmark toys with our sense of security, offering speculative but unsettling story lines about robojudges, stock market manipulation, brain uploading, psychological manipulation and pandemic-spreading mosquito robots. He challenges us to think about how we can harness and control the power of AI, in the hope that readers will advocate for some aspects of AI and science research and militate against others. This is a tricky balancing act, because it is difficult to describe potential future developments of thinking machines without sounding either like a crackpot raving about fantastical alternate realities or like a manic futurist indulging in existential crises. Tegmark nonetheless pulls it off, guiding us through the nebulous nature of AI like a seasoned watchman.
Life 3.0 paints the marriage of super AI and humans in broad strokes. Tegmark dedicates a chapter to the cosmic civilization that could spring from intergalactic AI. This chapter is a playground in which his cosmologist self shines, and it was my favorite in the book. Tegmark envisions human life flourishing for billions of years, not merely in our solar system, but breaking out into the cosmos. The dark energies unleashed by spinning black holes and quasars threaten Tegmark’s space castle, so he dissects and studies them, much as he did in his earlier book Our Mathematical Universe. He envisions mind-boggling future engineering endeavors, such as wormhole construction, aided by superintelligent machines. Tegmark, serving as his own best critic, closes the chapter with a grim caveat: “If we don’t improve our technology, the question isn’t whether we will go extinct, but merely how.”
If Tegmark’s speculations are right, we might even have to worry about managing a daunting balance of power between far-flung future generations of humans and superintelligent machines. These visions of future civilizations are notably cautious: Tegmark, in the style of Star Trek’s Spock, filters them through a scientific lens, softening the speculation. The book does not rehash weak sci-fi plots; it is sci-fi strictly beholden to the laws of physics. At times, however, I couldn’t help but wonder how a science fiction writer would have described these same predictions and whether less hedged speculation would escalate our fears for the future.
It would be easy to dismiss the book as a brainstorming exercise curated by Tegmark, with business entrepreneurs and academics indulging their hobby as obsessive technologists. Given that the science and potential applications of AI are something that 99% of us do not understand, however, Tegmark believes he has a moral imperative to educate us. He is a founding member of the Future of Life Institute, whose mission is rooted in the safe development of AI. This imperative is strongly justified in one instance, where Tegmark cites an industry petition (bearing 17,000 signatures) warning against AI-based weapons that operate independently of humans. This fear comes not from a lone crackpot but reflects a community consensus.
One slice of the future largely overlooked by Tegmark is the medical applications of artificial intelligence. He provides a superficial description of potential modifications to the human body, including Ray Kurzweil’s descriptions of nanorobots replacing human digestive and endocrine systems and upgrades to skin, skeletons and brains. He frames medical advances in terms of overcoming constraints on self-assembly, self-repair and self-reproduction rather than exploring the technical science of manipulating genetics and immunology or the prospect of making humans immortal. Machines are getting really good at diagnosing illnesses by recognizing patterns in medical data such as imaging. I am excited to see which diseases will be eradicated for the children of tomorrow, but relying on AI for medical judgments requires a leap of faith. Whether humans will trust and engage with machines in the practice of medicine and beyond is an active research area that deserves more attention.
As an Irish person, I was brought up in a culture that strives for intergenerational security, respectability and upward mobility. As such, my family and friends all have their eyes open to the prospect of technology eliminating jobs. For those seeking practical advice on how to deal with the costs of AI to our society, this is not the book. Tegmark acknowledges upcoming changes in work patterns and global power structures but avoids forecasting the social and cultural repercussions of AI technology use. Perhaps Tegmark thought any attempt to describe the vast spectrum of human reactions to AI that stems from our diverse ideologies, religions and socioeconomic circumstances would be arbitrary or beyond his expertise.
Life 3.0 is indirectly a collaborative effort. Through many conferences and dinners, Tegmark has gathered perspectives on the evolution of AI from leaders in their fields, including entrepreneurs Elon Musk and Larry Page; AI researchers from academia and from companies such as DeepMind, Google, Facebook, and Apple; economists; legal scholars; and philosophers. The book doesn’t describe the personalities of the people pushing AI research. As an outsider, I picture skinny guys and girls in darkened rooms, encased in walls of glowing electronic letters and numbers, under the watch of the moneymen from industry, academia and government. I would have liked to read more about current AI research operations and culture, and about the ethics of technological disruptions that produce winners and losers.
Facing the reality of being outsmarted by machines in the near future, Tegmark expects to be humbled. As someone who is outsmarted by machines many times every day, I found this amusing. I find technology ridiculously impressive. It has my utmost deference.
Do I worry about the future of AI? Yes. My new phone is out of date already. And be forewarned: I felt old reading this book. Change is happening fast. I’m now assured the future, just like the present, will be painful. Industries will die and cultures with them. Luckily for me, my major worries are ultimately practical and self-centered. Plus, there is nothing I enjoy more than a good book to escape reality.
Dervla Kelly is a postdoctoral researcher at the Department of Pathology, New York University School of Medicine.