Cosmologist Martin Rees gives humanity a 50-50 chance of surviving the 21st century

Martin Rees is Britain’s astronomer royal, a professor at Cambridge University, and one of the leading cosmologists in the world. In a 2003 book, titled Our Final Hour, he gave civilization a 50-50 chance of surviving the 21st century, an estimate he reached after surveying all the ways humanity could destroy itself.

Rees has followed that book with another one about existential threats, titled On the Future: Prospects for Humanity. And the upshot of the new book is clear: The choices we make today, and in the next couple of decades, will likely determine the fate of life on Earth.

Rees’s biggest fear is our enhanced technological capacity, which gives just a few people the power to do more damage than ever before. For example, a handful of bad actors could release malicious code that upends computer networks around the world, or bioterrorists could unleash a deadly virus that quickly becomes a global pandemic, or overeager physicists could spawn a black hole by smashing protons together.

Then there’s the very real possibility that bioengineering technologies, like gene editing, will produce unprecedented inequalities in society that could transform life as we know it. There’s also the looming danger of artificial intelligence, which, depending on who you ask, is either an existential threat or a wildly overstated non-concern.

In spite of all this, Rees still calls himself a “techno-optimist.” Which is to say, he thinks we can harness science and technology to save ourselves and the planet. I spoke to him last week about why he remains hopeful in the face of all these threats, and why he thinks scientists have an ethical obligation to engage politically. I also asked him if he thinks human beings will have to flee Earth if we want to survive in the long run. (His answer might surprise you.)

A lightly edited transcript of our conversation follows.

Sean Illing

In your previous book, Our Final Hour, you said we had a 50 percent chance of surviving the 21st century. How do you feel about our odds today?

Martin Rees

Well, that was obviously a rough number, but I still believe that there could be serious setbacks to our civilization, and I feel more concerned now than I was then, because technology means that small groups or even individuals can, by error or by design, have a disruptive effect that cascades globally.

This is a relatively new thing, and I’m not sure we fully appreciate the dangers. Technology has not only increased the ways we could destroy ourselves, it’s also made it much easier for us to do it. So that means we’re always close, potentially, to a global disaster.

I worry more than I did about the collective impact we’re having on the resources and the environment. We keep building and expanding, and we’re demanding more energy and more resources, and we’re on what appears to be an unsustainable path. My concerns about this have only grown since 2003 when I wrote Our Final Hour.

Sean Illing

What would you say worries you the most right now? What keeps you up at night?

Martin Rees

In the short run, I worry about the disruptive effects of cyberattacks or some form of biological terror, like the intentional release of a deadly virus. These kinds of events can happen right now, and they can be carried out by small groups or even an individual. It’s extremely hard to guard against this kind of threat.

Disruptions of this kind will be a growing problem in our future, and they will heighten the tension between privacy, security, and liberty, a tension that will only become more acute as time goes on.

I also worry that our societies are more brittle now and less tolerant of disruption. In the Middle Ages, for example, when the Black Death killed off half the population of a town, the survivors sort of went on fatalistically.

But I think if we had some sort of pandemic today, once it got beyond the capacity of hospitals to cope with all the cases, there would be catastrophic social disruption long before the number of cases reached 1 percent of the population. The panic, in other words, would spread instantly and be impossible to contain.

“I worry about human folly and human greed and human error”

Sean Illing

Let’s step back from the ledge for a second and talk about science and technology. Do you think the pace of technological change is now too fast for society to keep up?

Martin Rees

I think it’s amazingly fast. Is it too fast for society? I don’t know. I do know that we’re struggling to cope with all these technologies. Just look at the impact of social media on geopolitics right now. And the risks of artificial intelligence and biotechnology far exceed social media. But these things also have potentially huge benefits to society, if we can manage them responsibly.

Sean Illing

Well, that’s sort of my point: Technology moves faster than culture, and the gap is growing. I see no reason to believe we can manage these innovations “responsibly.” In fact, we seem to be doing the opposite: Technology disrupts society, and then we struggle to adapt in the wake of these disruptions.

Martin Rees

I certainly take the point, and don’t necessarily disagree. The downsides are enormous, and the stakes keep getting higher. But these changes are coming, whether we want them to or not, so we have to try and maximize the benefits while at the same time minimizing the risks.

Sean Illing

Do you think our greatest existential threat at this point is ourselves and not some external threat from the natural world?

Martin Rees

I think the main threats are the ones we’re causing. I’m an astronomer, but I don’t worry about asteroids barreling into the Earth and destroying us, because we can see these things coming. I worry about human folly and human greed and human error. I worry much more about, say, a nuclear war than I do about a natural disaster. Human threats like this are growing much faster than traditional risks like asteroids, and in many cases, we’re just not prepared to deal with them.

Sean Illing

You talk a lot in the book about cooperation and the need for better decision-making. I often worry that our incentive structures — at the individual and collective level — are so misaligned with our actual interests that it’s almost impossible to imagine us making the sort of smart, long-term decisions we’ll have to make to navigate the future. I’m curious how you think about this, and what role you think science and technology play.

Martin Rees

I agree that the gap between the incentives driving our behavior and our actual interests is growing, and many of the issues we’re facing require international agreements and long-term planning, climate change being an obvious example. And we’re having a hard time convincing politicians to do what’s in our long-term interest when all they care about is being reelected.

As scientists, we must try to find solutions for these problems, but we also have to raise public consciousness and interest. Politicians care about what’s in the press, what’s in their inboxes, and scientists have to do what they can to keep these urgent problems on their radar. I consider this my obligation as a scientist.

At the same time, scientists don’t have any special wisdom when it comes to politics or ethics, so we don’t have the answers when it comes to decisions about what to value or do. The wider public has to be involved in that conversation, and scientists can help by educating them as much as possible.

Sean Illing

I’m glad you went there, because I think this is such a crucial point. We often forget that science is a tool that helps us get more of what we want, but it can’t tell us what we ought to want or do. But if you look at our culture now, it’s clear to me that we’re allowing our values to be decided by the technologies we’ve built, not the other way around.

Martin Rees

You make a great point, and you’re quite right in saying that we need a value system that science itself can’t provide. In the book, I talk about the atomic scientists who developed nuclear weapons during WWII, many of whom became politically involved after the war to do what they could to control the powers they helped unleash. They thought they had a special obligation.

And I think that is true of scientists in other fields. We’re seeing some of the big tech companies like Facebook and Twitter take responsibility, perhaps too late in the game, but there are other examples of scientists working in fields like bioengineering who understand the risks now and are going to great lengths to control them.

But the big difference now is that there are far more people around the world with expertise in all these technologies, especially in AI and bioengineering. And the commercial pressures to develop them are enormous, which means attempts to impose regulations will only be moderately successful.

So even if we develop an ethics to guide these technologies, I’m not sure we’ll ever be able to enforce it on a global level. And that is extremely scary.

“We have a billion people in the world in abject poverty, which could be alleviated by the wealth of the thousand richest people on the planet”

Sean Illing

People like Steven Pinker make the case that life is steadily improving, and that reason and technology are the prime drivers of that improvement. There is something undeniably true about this argument, but I think it also misses something fundamental about our nature and the fragility of the world we’ve created.

Martin Rees

I read Pinker’s book, and I’ve had exchanges with him on this. There’s no doubt he’s right about life expectancy improving and fewer people in poverty and all that, but I think he overlooks two things. The first is what I mentioned earlier about new technologies creating new threats that can be unleashed relatively easily by small groups of people or individuals.

He also seems to think that human beings have advanced ethically compared to earlier generations, and I’m not so sure about that. In the medieval period, life was miserable and there wasn’t anything people could do to improve it. Today, the gap between the way the world is and the way it could be is enormous.

We have a billion people in the world in abject poverty, which could be alleviated by the wealth of the thousand richest people on the planet. That we allow that to continue surely says something significant about how much — or little — moral progress we’ve made since medieval times.

Sean Illing

Do you believe that humanity will have to move beyond the Earth if it wants to survive in the long run?

Martin Rees

I certainly hope not. I hope that there will be a few pioneers who travel to space and form a little colony on Mars, but I think this should be left to the private sector. I don’t see any practical case for NASA sending people to space anymore. The private sector can afford to take more risks than NASA, and many adventurers are happy to live with the risks of space travel.

We can hope that these people will go to Mars and be at the forefront of developing new technologies, because they’ll have every incentive to adapt to a hostile environment. But I strongly disagree with Elon Musk and my late colleague Stephen Hawking, who talk about mass emigration to Mars. I think that’s a dangerous delusion, because Mars will be a more hostile environment than the top of Everest or the South Pole, and dealing with climate change here on Earth is far more important than terraforming Mars.

Sean Illing

You call yourself a “techno-optimist” despite writing two books about all the ways in which human life can be annihilated. Where does your optimism spring from?

Martin Rees

I’m an optimist in that I believe the ability of technology to provide a good life for everyone, not just in our own countries but throughout the world, is going to grow. But I’m also an ethical pessimist in that I recognize that this is not happening in the way it should. We have abject poverty in our own countries, we have whole regions of the world where people are in poverty, and this is a political failure. And this gap is getting wider, not narrower.

Sean Illing

Do you think humanity will have to evolve into something else, into something posthuman, in order to survive for another 100 centuries?

Martin Rees

Humanity hasn’t changed all that much in terms of physique and mentality. If, because of technology or space travel or some other development, evolution starts to happen on a much faster time scale, it will have important consequences for human life.

For instance, we can still enjoy the literature written by Greek and Roman authors more than 2,000 years ago, because the character of human beings hasn’t changed all that much, and we recognize their emotional lives in our own world. But if we think of what could happen with bioengineering techniques or artificial intelligence, it’s entirely possible that humans a century or two from now will have only an algorithmic understanding of us and what we were like.

If that happens, if we lose this continuity between generations of human beings, that will be a total game changer. I don’t know what comes next, but we will have entered a new phase of human evolution.

Source: vox.com
