The big three threats today are climate change, artificial intelligence and genetic engineering. Weapons, especially nuclear weapons, pose a far lesser threat. Why? Humans learned long ago that weapons are fearsome and destructive, and that knowledge has long since become rooted in the fabric of our neurology. We have learned comparatively little about today's big three. Paradoxically, we yearn for the fruits of each: the commodities whose use drives climate change, the speed and efficiency of artificial intelligence in production and problem modeling, and the anti-Malthusian promise of more bountiful food.
These threats are ordinary and necessary in the sense that we are simply looking for better ways to be successful creatures. Ordinary, too, because we fail to acknowledge the threats despite their open and obvious state. Cognitive bias causes us to overlook problems plainly in front of us. We necessarily plod along, but the plod is half-blind, eyes cast to the ground or into an algorithm, brains humming with, perhaps, only the meaningless goal of agreeing with the TV news and seeming smart or agreeable to our friends. Maybe we can redirect this plodding to improve our chances against these threats: producing energy from renewable sources, and regulating artificial intelligence and genetic engineering.
We need morally sound scientists and practiced philosophers to lead. Unfortunately, instead of selecting and following masters of the sciences and arts, most people prefer to follow people they like – it’s terrible management, but that’s what people do. On big issues, such as these big three threats, people believe those with whom they associate, those who provide positive feedback, not those who pose a difficult question, identify the variables, weigh the evidence and seek answers. Quick repartee from an attractive TV talking head satisfies most people – that’s as much as they want to think about it.
Author Bill McKibben calls out these threats as self-inflicted. Weapons, too, are self-inflicted instruments against ourselves, but we know about weapons. We don’t know, and resist knowing, about the big three.
Environmental Thinker Bill McKibben Sounds Warning on Technology
Known for climate change work, the pioneer says global warming, AI and genetic engineering are self-inflicted threats to humanity
- By Annie Sneed on April 15, 2019
When author and environmentalist Bill McKibben wrote The End of Nature in 1989, the world was a very different place. The science behind climate change was not as sophisticated, the public’s understanding of the issue more limited, and the real-world effects of global warming far less obvious to the average person.
Thirty years and more than a degree Celsius of warming later, humanity has yet to seriously deal with the planet’s climate problem. Heat-trapping greenhouse gases continue to build up in the atmosphere, and the U.S. government has failed to take meaningful action to curb them. And in addition to climate change, McKibben now sees two other existential threats facing humanity: artificial intelligence and human genetic engineering. He writes about these issues in his new book, Falter: Has the Human Game Begun to Play Itself Out?
Scientific American spoke with McKibben, founder of the climate change organization 350.org, about the motivation behind his book, why he believes these problems are so perilous, and how humankind might address its self-created crises.
[An edited transcript of the interview follows.]
What is different, in terms of climate change, since you wrote The End of Nature in 1989?
In 1989 there was still something somewhat abstract about the threat [of climate change]. We knew it was coming, the science was clear, but you couldn’t yet take a picture of it. Thirty years later it’s the dominant fact of daily life for hundreds of millions of people every day around the world who are dealing with some flood, drought, wildfire or the steady rise of the oceans—all the things we’ve unleashed. So it is a very different situation in that way. It is real now.
You draw attention to what you call two other existential threats to humanity—artificial intelligence and human germline genetic engineering. Why do you think these three issues are the most alarming?
These are the other things that potentially shift what we’ve understood to be our place on this planet. Just as climate change dramatically undermines nature, so these shifts dramatically challenge human nature.
For artificial intelligence, people have speculated about all kinds of disaster scenarios. Maybe they’re true, maybe they’re not. But even if AI works as it’s supposed to, the inevitable result will be the supplanting of human beings as the measure of meaning in the world. And to what end? So that we can make things happen more quickly? Why do we want that? What is the point? It seems to me that those are the questions we haven’t even begun to ask.
There are obviously big practical worries about what it means to start etching inequality into our genes [by theoretically genetically engineering human embryos]—what are the health risks and so on. Those are all persuasive and powerful reasons not to do [human germline genetic engineering]. But they don’t get at the deepest reason, which has to do with the meaning of being human. Just as it turned out that we were wrong to take the stability of the physical world for granted, we’re also wrong to take the meaning of a human life for granted. I fear that it’s not going to prove very hard to undermine that.
Do you only see these two technologies (AI and germline engineering) as dangerous? Or do they also have something positive to offer people?
Technologies come with a suite of benefits and costs. The classic example is fossil fuels: they did amazing things for us until we reached a point where we were using them in such quantity that they were doing damage. Now we need to find other ways to power our lives.
I truthfully don’t think that human genetic engineering offers us much in the way of benefit. If people are worried about genetic disease, we can already deal with that through preimplantation genetic diagnosis, which is used in fertility clinics around the world. It lets you make sure your child doesn’t have a genetic disease, but it doesn’t let you improve your child. So I don’t think the benefits [of human germline genetic engineering] are very high, and the potential cost—in terms of meaning—is enormous.
Some might say that you’re being overly alarmist, especially in terms of AI and genetic engineering. How would you respond to that critique?
I’m used to it because people said it for years about climate change. I wish that I had been [overly alarmist]. If anything, we’ve been under-alarmist about it. I think the biggest problem around AI and human genetic engineering is that we’re barely discussing them in these terms. One of the points of this book is to get that discussion underway as fast as we can.
Given how little we’ve done to address climate change, where do we go from here? Do you see any potential solutions that could help us address the problem in time?
The good news is that the engineers have done their job with remarkable speed and power. Thirty years ago we didn’t know what was going to replace coal, oil and gas—we just knew they had to be replaced. But in the last decade, the price of a solar panel has dropped 90 percent. In most of the world now, solar and wind are the cheapest ways to produce energy.
But we’re not making the transition quickly enough—not for lack of engineering, but for lack of political will. The fossil fuel industry has so much political power and it’s willing to use it to maintain its business model, even at the expense of breaking the planet. So it’s going to take a one-two punch of great engineering and great movement building. Over the last 10 years we’ve built a climate movement really out of nothing, and it continues to grow. You saw remarkable stuff this year with schoolchildren across the planet leaving their classrooms, demanding action. We’re seeing young people push for a Green New Deal in the U.S. I don’t know exactly what form all this will eventually take, in terms of legislation and policy, but I do know that that ferment is a good sign. People are not going to take this lying down.
How do we address these other existential threats—AI and genetic engineering? Will the approach be similar to climate change?
We’re a little earlier in the curve, but you can already see people beginning to try and figure out responses to them—though I don’t think people are talking deeply enough yet and I imagine it’ll require movement-building in order to get those big conversations going. There are elements on both the political left and right that are made uneasy by these new technologies, and it will be interesting to see whether or not they can figure out how to work together.
Why did you write this book? What are you hoping it will accomplish?
When I wrote The End of Nature 30 years ago, my theory of change was simple. I was 27 years old; I thought people would read my book and then they would change. I now know that that’s not the way change happens. Books and arguments are one part of what needs to happen, but I spend most of my time now building movements. I think those are what really will move the needle. I hope that this book contributes a little to that movement-building process.
But I also just want to mark where we are. Thirty years ago my greatest fear for climate change, in a way, was that we’d walk off this cliff without even recognizing it. I think now at least there’s going to be a serious fight. And that is, at the very least, a more dignified way for humans to be engaging with this greatest of crises.