Enjoyed the episode? Want to listen later? Subscribe here, or anywhere you get podcasts:

Update April 2019: The key theory Dr Sandberg puts forward for why aliens may delay their activities has been strongly disputed in a new paper, which claims it is based on an incorrect understanding of the physics of computation.

It seems tremendously wasteful to have stars shining. When you think about the sheer amount of energy they’re releasing, that seems like it’s a total waste. Except that it’s about 0.5 percent of the mass energy that gets converted into light and heat. The rest is just getting into heavy nuclei. If you can convert mass into energy, you might actually not care too much about stopping stars. If the process of turning off stars is more costly than 0.5% of the total mass energy, then you will not be doing it.

Anders Sandberg
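To make the arithmetic in the quote concrete, here is a rough back-of-the-envelope sketch in Python. It simply applies the ~0.5% figure Anders cites to a Sun-like star; the constants and the framing of a "cost to switch a star off" are illustrative assumptions, not figures from the episode.

```python
# Rough back-of-the-envelope sketch of the quote's arithmetic.
# All constants are illustrative assumptions, not numbers from the episode.

C = 2.998e8                 # speed of light, m/s
M_SUN = 1.989e30            # mass of a Sun-like star, kg
RADIATED_FRACTION = 0.005   # ~0.5% of mass-energy radiated as light and heat

total_mass_energy = M_SUN * C**2
radiated_energy = RADIATED_FRACTION * total_mass_energy

print(f"Total mass-energy of a Sun-like star: {total_mass_energy:.2e} J")
print(f"Energy 'wasted' by shining:           {radiated_energy:.2e} J")

# The quote's point: a civilization that can convert mass directly into energy
# only gains by switching a star off if doing so costs less than this ~0.5%.
```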

The universe is so vast, yet we don’t see any alien civilizations. If they exist, where are they? Oxford University’s Anders Sandberg has an original answer: they’re ‘sleeping’, and for a very compelling reason.

Because of the thermodynamics of computation, the colder it gets, the more computations you can do with a given amount of energy. The universe is getting exponentially colder as it expands, and as it cools, each joule of energy becomes worth more and more computation. If they wait long enough, this can become a 10,000,000,000,000,000,000,000,000,000,000x gain. So, if a civilization wanted to maximise its ability to perform computations, its best option might be to lie dormant for trillions of years.
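The source of that multiplier is the thermodynamics of computation: under the Landauer limit, erasing one bit costs at least k_B·T·ln 2 of energy, so the number of irreversible operations a joule can buy scales as 1/T. Below is a minimal sketch of the comparison; the far-future temperature of roughly 10^-30 K (on the order of the de Sitter horizon temperature) is an assumption chosen for illustration, not a figure from the episode.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def max_bit_erasures_per_joule(temperature_k: float) -> float:
    """Upper bound on irreversible bit erasures per joule at the Landauer limit."""
    return 1.0 / (K_B * temperature_k * math.log(2))

# Assumed temperatures for illustration (not quoted in the episode):
T_TODAY = 2.7           # cosmic microwave background today, K
T_FAR_FUTURE = 2.7e-30  # roughly the de Sitter horizon temperature, K

gain = max_bit_erasures_per_joule(T_FAR_FUTURE) / max_bit_erasures_per_joule(T_TODAY)

print(f"Bit erasures per joule today:      {max_bit_erasures_per_joule(T_TODAY):.2e}")
print(f"Bit erasures per joule far future: {max_bit_erasures_per_joule(T_FAR_FUTURE):.2e}")
print(f"Multiplier from waiting:           {gain:.1e}")  # ~1e30
```

With these assumed temperatures the multiplier comes out around 10^30, the same enormous ballpark as the figure above; note that the April 2019 critique mentioned at the top of the page disputes precisely this style of reasoning.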

Why would a civilization want to maximise the number of computations it can do? Because conscious minds are probably generated by computation, doing twice as many computations is like living twice as long in subjective time. Waiting will allow such a civilization to generate vastly more science, art, pleasure, or almost anything else it is likely to care about.

But there’s no point waking up to find another civilization has taken over and used up the universe’s energy. So they’ll need some sort of monitoring to protect their resources from potential competitors like us.

It’s plausible that such a civilization would want to keep the universe’s matter concentrated, so that each part stays within reach of the others even as the universe expands. But that would mean changing the trajectories of galaxies during this dormant period. The fact that we don’t see anything like that makes it more likely that these aliens have local outposts throughout the universe, and that we wouldn’t notice them until we broke their rules. But breaking their rules might be our last action as a species.

This ‘aestivation hypothesis’ is the invention of Dr Sandberg, a Senior Research Fellow at the Future of Humanity Institute at Oxford University, where he studies low-probability, high-impact risks, the capabilities of future technologies, and very long-range futures for humanity.

In this incredibly fun conversation we cover this and other possible explanations for the Fermi paradox, as well as questions like:

  • Should we want optimists or pessimists working on our most important problems?
  • How should we reason about low probability, high impact risks?
  • Would a galactic civilization want to stop the stars from burning?
  • What would be the best strategy for exploring and colonising the universe?
  • How can you stay coordinated when you’re spread across different galaxies?
  • What should humanity decide to do with its future?

If you enjoy this episode, make sure to check out part two where we talk to Anders about dictators living forever, the annual risk of nuclear war, solar flares, and more.

The 80,000 Hours podcast is produced by Keiran Harris.

Highlights

The basic question that made us interested in the Fermi paradox in the first place is: does the silence of the sky foretell our doom? We really wonder if the evidence that the universe seems to be pretty devoid of intelligent life is a sign that our future is in danger, that there are bad things ahead for us. One way of reasoning about this is the great filter idea from Robin Hanson. There has to be some step that is unlikely in going from inanimate matter to life, to intelligence, to some intelligence that makes a fuss that you can observe over astronomical distances. One of these transition probabilities must be very, very low, otherwise the universe would be full of aliens making parking lots on the moon and putting up adverts on the Andromeda galaxy.

It would be very obvious if we lived in that kind of universe, so you can say, “Well, it’s obvious we’re alone. The probability of life might be super low, or maybe it’s that life is easy but intelligence is rare.” In that case, we are lucky and we’re fairly alone, which might be a bit sad, but it also means we’re responsible for the rest of the universe, and the silence in the sky doesn’t actually say anything bad. The problem is, of course, that it also might be that intelligence is actually fairly common, but it doesn’t survive; there is something very dangerous about being an intelligent species. You tend to wipe yourself out or become something inert. Maybe all civilizations quickly discover something like World of Warcraft or other games and succumb to that. Or some other, more subtle convergence threat. Except that many of these explanations of what that bad and dangerous thing is are very strange explanations.
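One way to see the force of the great filter argument in this highlight is to multiply hypothetical step probabilities across the enormous number of star systems; every number in the sketch below is an assumption for illustration, not an estimate from the episode.

```python
import math

# Hypothetical "great filter" step probabilities, per star system.
# All values are assumptions for illustration only.
steps = {
    "abiogenesis": 0.1,
    "complex life": 0.1,
    "intelligence": 0.1,
    "detectable civilization": 0.1,
}

STARS = 1e22  # rough order of magnitude for the observable universe

p_per_star = math.prod(steps.values())
expected_detectable = STARS * p_per_star

print(f"P(detectable civilization per star): {p_per_star:.0e}")
print(f"Expected detectable civilizations:   {expected_detectable:.0e}")

# Even with every step at a modest 10%, we'd expect ~1e18 detectable
# civilizations. A silent sky suggests at least one transition probability
# is extremely small.
```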

It’s an interesting question when we know that certain boxes shouldn’t be opened. Sometimes we can have an a priori understanding that whatever effects a research field has will tend to be local. So if we open that box and there is bad stuff inside, there are only going to be local disasters. That might be much more acceptable than some other fields where, when you have effects, they tend to be global. If something bad exists in the box, it might affect an entire world. This is, for example, why I think we should be careful about technologies that produce something self-replicating, whether that is computer viruses, or biological organisms, or artificial intelligence that can copy itself. Or maybe even memes and ideas that can spread from mind to mind.

We want to avoid existential risks that could mean we never get this grand future. We might want to avoid doing stupid things that limit our future. We might want to avoid doing things that create enormous suffering or disvalue in these futures. So, what I’ve been talking about here is kind of our understanding of how big the future is, and that leads to questions like, “What do we need to figure out right now to get it?” Some things are obvious, like reducing existential risk. Making sure we survive and thrive. Making sure we have an open future.

Some of it might be more subtle, like how do we coordinate once we start spreading out very far? Right now, we are within one-seventh of a second of each other. All humans are on the same planet, or just above it. That’s not going to be true forever. Eventually, we are going to be dispersed so much that you can’t coordinate, and we might want to figure out some things that should be true for all our descendants.


About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'


If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.