Enjoyed the episode? Want to listen later? Subscribe here, or anywhere you get podcasts:

What if I kill you by driving to the shops and causing different reproductive events, that via a long causal chain result in your death? Is that still an action, or is it merely an omission?

Will MacAskill

You’re given a box with a set of dice in it. If you roll an even number, a person’s life is saved. If you roll an odd number, someone else will die. Each time you shake the box you get $10. Should you do it?

A committed consequentialist might say, “Sure! Free money!” But most people will think it obvious that you should say no: you’d get only a tiny benefit, in exchange for taking on moral responsibility over whether other people live or die.

And yet, according to today’s return guest, philosophy Professor Will MacAskill, in a real sense we’re shaking this box every time we leave the house, and those who think shaking the box is wrong should probably also be shutting themselves indoors and minimising their interactions with others.

To see this, imagine you’re deciding whether to redeem a coupon for a free movie. If you go, you’ll need to drive to the cinema. By affecting traffic throughout the city, you’ll have slightly impacted the schedules of thousands or tens of thousands of people. The average life is about 30,000 days, and over the course of a life the average person will have about two children. So — if you’ve impacted at least 7,500 days — then, statistically speaking, you’ve probably influenced the exact timing of a conception event. With 200 million sperm in the running each time, changing the moment of copulation, even by a fraction of a second, will almost certainly mean you’ve changed the identity of a future person.
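To make the ‘probably’ concrete, here is a minimal Python sketch of that back-of-the-envelope calculation. The 1-in-7,500 per-day rate is an assumption chosen to be consistent with the figures above (it is not a number taken from Will’s paper), and treating each affected person-day as an independent chance is a simplification; the point is only that a few thousand affected person-days is enough to make shifting a conception more likely than not.

```python
# A minimal back-of-the-envelope sketch of the "probably influenced a
# conception" claim. The 1-in-7,500 per-day rate is an assumption chosen
# to fit the figures above (about two children per roughly 30,000-day
# life); treating each affected person-day as independent is also a
# simplification.

def prob_shift_at_least_one_conception(affected_person_days, rate_per_day=1/7500):
    """Probability that at least one conception's timing gets shifted."""
    return 1 - (1 - rate_per_day) ** affected_person_days

for days in [1_000, 7_500, 15_000, 30_000]:
    p = prob_shift_at_least_one_conception(days)
    print(f"{days:>6} affected person-days -> {p:.0%} chance")
```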

That different child will now impact all sorts of things as they go about their life, including future conception events. And those new people will in turn impact further conception events, and so on. Thanks to these ripple effects, after 100 or maybe 200 years, basically everybody alive will be a different person because you went to the movies.

As a result, you’ll have changed when many people die. Take car crashes as one example: about 1.3% of people die in car crashes. Over that century, as everyone’s identity changes as a result of your action, many of the ‘new’ people will cause car crashes that wouldn’t have occurred in their absence, including crashes that prematurely kill people alive today.

Of course, in expectation, exactly the same number of people will have been saved from car crashes, and will die later than they would have otherwise.

So, if you go for this drive, you’ll save hundreds of people from premature death, and cause the early death of an equal number of others. But you’ll get to see a free movie (worth $10). Should you do it?

This setup forms the basis of ‘the paralysis argument’, explored in one of Will’s recent papers.

To see how it implies inaction as an ideal, recall the distinction between consequentialism and non-consequentialism. For consequentialists, who just add up the net consequences of everything, there’s no problem here. The benefits and costs perfectly cancel out, and you get to see a free movie.

But most ‘non-consequentialists’ endorse an act/omission distinction: it’s worse to knowingly cause a harm than it is to merely allow a harm to occur. And they further believe harms and benefits are asymmetric: it’s more wrong to hurt someone a given amount than it is right to benefit someone else an equal amount.

So, in this example, the fact that your actions caused X deaths should be given more moral weight than the fact that you also saved X lives.

It’s because of this that the non-consequentialist feels they shouldn’t roll the dice just to gain $10. But as we can see above, if they’re being consistent, then rather than leave the house they’re obligated to do whatever counts as an ‘inaction’, in order to avoid the moral responsibility of foreseeably causing people’s deaths.
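To see how the asymmetry does the work here, consider a toy tally. This is purely illustrative and not from Will’s paper: the counts and the 2× harm weight are made up. With equal expected harms and benefits, a plain consequentialist sum comes out to zero, while any weighting that counts caused harms more heavily than conferred benefits makes the drive come out net wrong.

```python
# Toy illustration of the harm/benefit asymmetry. All numbers are
# hypothetical: equal expected counts, and a made-up 2x weight on harms.

expected_deaths_caused = 100     # premature deaths your drive causes, in expectation
expected_deaths_prevented = 100  # premature deaths it prevents, by symmetry
harm_weight = 2.0                # hypothetical: a caused harm counts twice an equal benefit

# Plain consequentialist tally: benefits and harms cancel exactly.
consequentialist_score = expected_deaths_prevented - expected_deaths_caused

# Asymmetric tally: caused harms are weighted more heavily than benefits,
# so the same numbers come out net negative.
asymmetric_score = expected_deaths_prevented - harm_weight * expected_deaths_caused

print(consequentialist_score)  # 0      -> permissible; enjoy the movie
print(asymmetric_score)        # -100.0 -> impermissible; stay home
```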

What’s Will’s best idea for resolving this strange implication? In this episode we discuss a few options:

  • give up on the benefit/harm asymmetry
  • find a definition of ‘action’ under which leaving the house counts as an inaction
  • accept a ‘Pareto principle’, where actions can’t be wrong so long as everyone affected would approve or be indifferent to them before the fact.

Will is most optimistic about the last, but as we discuss, this would bring people a lot closer to full consequentialism than is immediately apparent.

Finally, a different escape — conveniently for Will, given his work — is to dedicate your life to improving the long-term future, and thereby do enough good to offset the apparent harms you’ll do every time you go for a drive. In this episode Rob and Will also cover:

  • Are we, or are we not, living at the most influential time in history?
  • The culture of the effective altruism community
  • Will’s new lower estimate of the risk of human extinction over the next hundred years
  • Why does AI now stand out a bit less to Will as a particularly pivotal technology?
  • How he’s getting feedback while writing his book
  • The differences between Americans and Brits
  • Does the act/omission distinction make sense?
  • The case for strong longtermism, and longtermism for risk-averse altruists
  • Caring about making a difference yourself vs. caring about good things happening
  • Why feeling guilty about characteristics you were born with is crazy
  • And plenty more.

Interested in applying this thinking to your career?

If you found this interesting, and are thinking through how considerations like these might affect your career choices, our team might be able to speak with you one-on-one. We can help you consider your options, make connections with others working on similar issues, and possibly even help you find jobs or funding opportunities.

Apply to speak with our team

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the transcript below.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

Highlights

Longtermism for risk-averse altruists

Is what I care about myself making a difference, or is what I care about that good things happen? If what I care about is myself making a difference, then absolutely: a standard account of risk aversion would say that you prefer the guarantee of saving one life over, let’s say, a 1% chance of saving 110 lives from the nuclear war example. Obviously it’s a smaller probability but a larger amount of good. However, as an altruist, should you care that you make the difference?

I mean, it’s quite antithetical to what effective altruism is about. I think in “Doing Good Better” I mentioned the example where a paramedic is coming to save someone’s life. They’re choking or something, they need CPR, and you push the paramedic out of the way and start making the difference yourself. So, in order to make this clearer, just imagine you’re going to learn about one of two scenarios. In the first scenario, some existential catastrophe happens, but you saved dozens of lives yourself. In the second scenario, no existential catastrophe happens and you don’t do any good yourself. You’re just going to find out which of those two things is true. Which should you hope is the case? Well, obviously you should hope that the scenario with no existential catastrophe, where you don’t do anything, is the one that happens. But if those are your preferences, then your preferences aren’t about you yourself making the difference. In fact, what you actually care about is good stuff happening.

Are we living in the most influential time in history?

On certain views that are popular in the effective altruism community, like the Bostrom-Yudkowsky scenario that’s closely associated with them — though I don’t want to claim that they think it’s very likely. On that view, there’s a period where we develop artificial general intelligence that moves very quickly to superintelligence, and basically everything that ever happens is determined at that point. Either it’s the values of the superintelligence, which can then do whatever it wants with the rest of the universe, or it’s the values of the people who manage to control it, which might be democratic, might be everyone in the world, might be a single dictator.

And so I think, just very intuitively, that sounds important. Intuitively, that would be the most important moment ever. And in fact there are two claims. One is that there is a moment where almost everything happens, where most of the variance in how the future could go actually gets determined by this one very small period of time; and secondly, that that time is now.

So one line of argument is just to say, “Well, it seems like that’s a very extraordinary claim”. We could try and justify that, and then there’s a question of spelling out what ‘extraordinary’ means, but insofar as it would be a really extraordinary claim, we should have low credence in it unless we’ve got very strong arguments in its favour. Then there’s a second argument, or second understanding of ‘influential’, that is very similar, but again, different enough that maybe it’s worth keeping separate, which is just the point at which it’s best to directly use our resources if we’re longtermists, where that’s just: how does the marginal cost-effectiveness of longtermist resources vary over time?

And here again, the thought is, well, we should expect that to go up and down over time. Perhaps there are some systematic reasons for it going down; perhaps there are some systematic reasons for it going up. Either way, it would seem surprising if now were the time when longtermist resources are most impactful. What that question is relevant to is that it’s one part of, but not the whole of, an answer to the question of whether we should be planning to spend our money now doing direct work, or should instead be trying to save for a later time period, whether that’s financial savings or movement building.

Implications for the effective altruism community

When I’m thinking about effective altruism movement strategy and what it should be aiming to be, I really think we should be treating this as a shift from before. It’s not like a startup, where you’ve got some kind of growth metric and you’re going as fast as you can, which makes sense if you’re in competition with other things where, if you get there a few months earlier, you win. Instead, we are creating this product or culture that could be very influential for a very long time period. The other thing to say is that, supposing we are trying to have influence in a hundred years’ time or 200 years’ time, that’s very hard to do in general. But here’s something you can do, which is create a set of ideas that propagate over time. That has a good track record of having a long-run influence.

What are some of the things this means? One is just getting the culture and the ideas exactly right. And so when we’re thinking about what sorts of things it’s committed to, we should be judging that in part by the question of ‘how’s that going to help it grow over the long term?’. So one thing I care about a lot is being friendly to other value systems, especially very influential value systems, rather than being combative towards them.

The case for strong longtermism

We distinguish longtermism in the sense of just being particularly concerned about ensuring the long-term future goes well. That’s analogous with environmentalism, which is the idea of being particularly concerned about the environment, or liberalism being particularly concerned with liberty. Strong longtermism is the stronger claim that the most important part of our actions is their long-run consequences. The core aim of the paper is just being very rigorous in the statement of that and in the defence of it. So for people who are already very sympathetic to this idea, I don’t think there’s going to be anything particularly novel or striking in it. The key target is just: what are the various ways in which you could depart from a standard utilitarian or consequentialist view that you might think would cause you to reject strong longtermism? We go through various objections one might have and argue that they’re not successful.

About the show

The 80,000 Hours Podcast features unusually in-depth conversations about the world's most pressing problems and how you can use your career to solve them. We invite guests pursuing a wide range of career paths — from academics and activists to entrepreneurs and policymakers — to analyse the case for and against working on different issues and which approaches are best for solving them.

The 80,000 Hours Podcast is produced and edited by Keiran Harris. Get in touch with feedback or guest suggestions by emailing [email protected].

What should I listen to first?

We've carefully selected 10 episodes we think it could make sense to listen to first, on a separate podcast feed:

Check out 'Effective Altruism: An Introduction'

If you're new, see the podcast homepage for ideas on where to start, or browse our full episode archive.