NOTE: This piece is now out of date. More current information on our plans and impact can be found on our Evaluations page.


Summary

This year, we focused on “upgrading” – getting engaged readers into our top priority career paths.

We do this by writing articles on why and how to enter the priority paths, providing one-on-one advice to help the most engaged readers narrow down their options, and making introductions to help them enter.

Some of our main successes this year include:

  1. We developed and refined this upgrading process, having been focused on introductory content last year. We made lots of improvements to coaching, and released 48 pieces of content.

  2. We used the process to grow the number of rated-10 plan changes 2.6-fold compared to 2016, from 19 to 50. We primarily placed people in AI technical safety, other AI roles, effective altruism nonprofits, earning to give and biorisk.

  3. We started tracking rated-100 and rated-1000 plan changes. We recorded 10 rated-100 and one rated-1000 plan change, so with this change, total new impact-adjusted significant plan changes (IASPC v2) doubled compared to 2016, from roughly 1200 to 2400. That means we’ve grown the annual rate of plan changes 23-fold since 2013. (If we ignore the rated-100+ category, then IASPC v1 grew 31% from 2016 to 2017, and 12-fold since 2013.)

  4. This meant that despite rising costs, cost per IASPC was flat. We updated our historical and marginal cost-effectiveness estimates, and think we’ve likely been highly cost-effective, though we have a lot of uncertainty.

  5. We maintained a good financial position, hired three great full-time core staff (Brenton Mayer as co-head of coaching; Peter Hartree came back as technical lead; and Niel Bowerman started on AI policy), and started training several managers.

Some challenges include: (i) people misunderstand our views on career capital, so are picking options we don’t always agree with; (ii) we haven’t made progress on team diversity since 2014; (iii) we had to abandon our target to triple IASPC; (iv) rated-1 plan changes from introductory content didn’t grow as we stopped focusing on them.

Over the next year, we intend to keep improving this upgrading process, with the aim of recording at least another 2200 IASPC. We think we can continue to grow our audience by releasing more content (it has grown 80% p.a. the last two years), getting better at spotting who from our audience to coach, and offering more value to each person we coach (e.g. doing more headhunting, adding a fellowship). By doing all of this, we can likely grow the impact of our upgrading process at least several-fold, and then we could scale it further by hiring more coaches.

We’ll continue to make AI technical safety and EA nonprofits a key focus, but we also want to expand more into other AI roles, other policy roles relevant to extinction risk, and biorisk.

Looking forward, we think 80,000 Hours can become at least another 10-times bigger, and make a major contribution to getting more great people working on the world’s most pressing problems.

We’d like to raise $1.02m this year. We expect 33-50% to be covered by Open Philanthropy, and are looking for others to match the remainder. If you’re interested in donating, the easiest way is through the EA Funds.

If you’re interested in making a large donation and have questions, please contact [email protected].

If you’d like to follow our progress during the year, subscribe to 80,000 Hours updates.

Table of Contents

In the rest of this review, we cover:

  1. A summary of our key metrics.
  2. What we do, and how our plans changed over the year.
  3. Our progress over the year, split into upgrading, introductory, research and capacity building.
  4. Updated estimates of our historical and marginal cost-effectiveness.
  5. Mistakes, issues and risks.
  6. Our plan for the next year.
  7. Our financial situation and fundraising targets.

See our previous annual reviews.

Key summary metrics

Plan changes

Our key metric is “significant plan changes”. We count one when someone tells us they switched their plans from path A to B due to us, and expect to have a greater social impact as a result.

We track the total number, but we also rate the plan changes 0.1 / 1 / 10 depending on their size and expected impact, and track the total in each category. Read more about the definition of significant plan change, and impact-adjustment.

This year, we decided to focus on plan changes rated 10, and we grew these 163% from 19 newly recorded in 2016 to 50 recorded this year.

However, plan changes rated 0.1 and 1 declined 8% and 3% respectively as we stopped focusing on them.

We also track the “impact-adjusted” total (the weighted sum), which grew 31%. However, this year we became more confident that the plan changes rated 10 are more than 10 times higher impact than those rated 1, perhaps 100 times more, so this undercounts our growth.

See a summary of all these metrics in the table below. Bear in mind, they’re highly imperfect proxies of our impact — we go into more detail in the full section on historical cost-effectiveness.

| 2013 | 2014 | 2015 | 2016 | 2017 | All time total
Total new significant plan changes | 21 | 75 | 245 | 1,465 | 1,410 | 3,216
Yearly growth rate | | 257% | 227% | 498% | -4% |
Rated 0.1 | 0 | 7 | 90 | 793 | 728 | 1,618
Yearly growth rate | | | 1,186% | 781% | -8% |
Rated 1 | 12 | 57 | 144 | 653 | 632 | 1,498
Yearly growth rate | | 375% | 153% | 353% | -3% |
Rated 10 | 9 | 11 | 11 | 19 | 50 | 100
Yearly growth rate | | 22% | 0% | 73% | 163% |
Impact-adjusted total | 102 | 168 | 263 | 922 | 1,205 | 2,660
Yearly growth rate | | 64% | 57% | 251% | 31% |

The reporting period ends in Nov each year

Here are annual and monthly growth charts for new plan changes rated 10.

Using a Dec-Nov year for these figures.

Below is the monthly growth chart for plan changes rated 1. As we stopped improving and driving traffic to our introductory content, plan changes driven by online sources stayed flat, while those from workshops went to zero because we stopped giving workshops.

Adjusted plan change weightings

This year we became more confident that the plan changes rated 10 should be split into 10/100/1000. Below, we have reconstructed our past metrics to show roughly how they would have looked if we’d used these categories too. (Note that if a change is recorded as 10 in 2015 and then re-rated 100 in 2016, it’ll appear in both years as +10 and +90)

Plan change rating | 2013 | 2014 | 2015 | 2016 | 2017 | Grand Total
Total significant plan changes | 21 | 75 | 248 | 1,465 | 1,414 | 3,223
Rated 0.1 | 0 | 7 | 90 | 793 | 728 | 1,618
Rated 1 | 12 | 57 | 144 | 653 | 632 | 1,498
Rated 10 | 9 | 10 | 10 | 16 | 50 | 95
Rated 100 | 0 | 1 | 4 | 3 | 3 | 11
Rated 1000 | 0 | 0 | 0 | 0 | 1 | 1

Summary funnel figures

The table below shows some key metrics earlier in the funnel (i.e. website visitors, new newsletter subscribers, people coached), and our financial and labour costs.

Traffic continued to grow as we released 48 new or updated pieces over the year – the most we’ve released in a year.

New newsletter subscribers grew 80%, but most of this was due to a spike in December 2016 – they have been flat otherwise because we’ve been directing traffic towards coaching rather than the newsletter.

The number coached almost tripled as we switched from workshops to coaching.

Our costs slightly more than doubled, as we grew the team (an average of 1.3 full-time staff extra over the year, as well as 1.5 full-time staff worth of freelancers), raised salaries and moved to the Bay Area.


The reporting period ends in Nov each year, apart from the last three rows which use a Jan-Dec reporting period.

What is 80,000 Hours?

80,000 Hours aims to get talented people working on the world’s most pressing problems.

Many people want to do good with their careers, but currently there’s no clear source of research-backed advice on how to do this most effectively.

We aim to provide this advice, by:

  1. Doing research to work out which career opportunities are highest-impact, and how individuals can fill them.

  2. Producing online content to bring people to our site, and tell them about these opportunities and how to enter them.

  3. Providing in-person advice to help our most engaged readers narrow down these options, enter them, and join the effective altruism community.

Each aspect complements the next. The research informs the online content; the online content brings people into the in-person advice and makes it faster to give; and speaking in-person helps us prioritise the research.

We do this work in the context of the rest of the effective altruism community. We see our role in the community as helping to allocate human capital as effectively as possible, which we think is especially important because the community is currently more skill-constrained than funding constrained.1 We aim to be the “Open Philanthropy of careers”.

As part of our coordination with the community, we work closely with the Centre for Effective Altruism, and are legally part of the same entity, though for most intents and purposes we operate as an independent organisation (under fiscal sponsorship). Read CEA’s independent annual review.

Our programmes (online content and in-person advice) aim to take people from the point at which they have a vague idea they want to do good but no specific plans, all the way to having a job in one of our priority career paths and in-depth knowledge of effective altruism.

Roughly, we divide this process into two stages, “introductory” and “upgrading”:

| Introductory stage | Upgrading stage
How do we do it? | Online career guide | Advanced online content & one-on-one advice (inc. community introductions)
What changes in this stage?
Knowledge | Learn about the basic principles in our career guide. | Learn about our priority paths and advanced research.
Motivation | Become somewhat more focused on social impact. | Make social impact a key career goal.
Community connections | Make some initial connections. | Make several strong connections.
Recorded output: plan change | Report a small plan change (rated 1 or 0.1), e.g. change graduate programme, take the GWWC pledge, aim towards a priority path in 3yr. | Successfully enter one of our priority paths, or other high-impact option. Report a large plan change (rated 10).

In 2016, we tripled the number of plan changes rated 1, while the number rated 10 only increased 73%, so we assessed that upgrading was the key bottleneck. In our 2016 annual review, we therefore decided to focus on the upgrading stage rather than the introductory one.

How our strategy and plans changed over the year

There were two main ways our strategy changed: (i) we switched our target from tripling IASPC to growing rated-10 plan changes 2.5-fold, and (ii) in order to do this, we stopped giving workshops and resumed one-on-one coaching. We’ll explain each in turn.

Despite deciding to focus on upgrading, we set an aggressive target to triple the total number of impact-adjusted significant plan changes (IASPC), matching our performance in 2016.

However, it takes a long time to get plan changes rated 10 (our latest estimate is about 2 years from joining the newsletter on average), and many of the projects we listed in the annual review were more helpful in finding rated-1 rather than rated-10 plan changes.

What’s more, we realised that some plan changes rated 10 are actually worth more like 100 times those rated 1. This meant that the IASPC metric, as defined at the start of 2017, undercounts the value of growth from plan changes rated 10.

As a result, we decided to drop the target, which we would in any case have failed to meet. (Though we did complete 4 out of 5 of the specific projects we listed in our 2016 plans – improving career reviews, mentor networks, user tracking and some efforts to make the guide more engaging – as well as one major marketing experiment.)

Instead, from June 2017, we decided to only focus on growing the number of plan changes rated 10, setting a target of 40 over the year, up from 19 in 2016. We exceeded this target, reaching 50 by the end of November. Going forward, we also intend to revise the IASPC metric so that it better captures the value of the top plan changes, and then return to using IASPC to set our goals.

As part of this shift to rated-10 plan changes, in February we also decided to shift the in-person team fully towards one-on-one advice, and stop giving workshops. This was for a number of reasons.

One was that we realised that if we put specific appeals on specialist content, we could find very high potential people to coach. For instance, at the end of our AI problem profile, we say “Want to work in AI safety? We can help.” Since this profile receives about 3000 unique views per month, this throws up 1-2 great coaching candidates each month.

We then found that if we coached these people, we’d get more impact-adjusted plan changes per hour than if we gave workshops, and many more plan changes rated 10. Although we can only deliver one-on-one advice to a smaller number of people, the narrower targeting turns out to more than offset this downside, making the one-on-one more effective overall.

After we realised this, we put two full-time staff on one-on-one advice, and made improving the process one of our top priorities.

Now we’ll outline the progress we made this year in more depth. Then we’ll analyse our cost-effectiveness, followed by our plans for next year.

We divide our progress into four categories of work, which we cover in sequence:

  1. Upgrading – helping engaged users enter our priority career paths.
  2. Introductory – telling new people about our basic ideas.
  3. Research – to identify the highest-impact career opportunities.
  4. Capacity building – to increase the abilities of the team in the long-term.

Progress 1: upgrading programmes

What process did we focus on after June to help with upgrading?

Probably our most significant progress this year was developing the following upgrading process – in brief, we promote a list of priority paths, then give people one-on-one help entering them.

We think the process is cost-effective and scalable, and it drove 4.4-fold growth in the number of rated-10 plan changes we track each month, compared to the start of the year.

Here’s how the approach works in a little more depth:

  1. In our research, we agree on a list of ‘priority paths’ — career types we’re especially excited about that people usually don’t consider. See a rough list here.
  2. We write up online content that makes the case for these paths, and then explains who’s well suited and how to enter. These are mainly career reviews and problem profiles, as well as our podcast.
  3. We promote this content to our existing audience and the effective altruism community.
  4. We ask readers to apply for coaching, and spot the people we can help the most.
  5. We give these people one-on-one help deciding which path to focus on, introductions to people in these areas, and help finding specific jobs and funding opportunities.
  6. We track plan changes from this group.

Here’s a rough user flow:

Here’s how the one-on-one advice works:

  • People apply and we select those we’re best placed to help and who have the best chance of getting into our priority paths.
  • They fill out preparatory work, writing out their plans and questions for us.
  • We do a 30-60 minute call via Skype. Initially we focus on helping them answer key uncertainties to narrow down their options, and recommending further reading. Then, we focus on helping them take action, by making introductions to specialists in the area, and pointing them to specific jobs and sources of funding.
  • With some fraction, we continue to follow up via email, and may arrange further meetings.
  • We aim for them to continue to engage with mentors and people in the effective altruism community.

Here are the key funnel metrics for Sept 2017, which was a typical month near the end of the year.

Stage | Number in Sept 2017 | Conversion rate from previous stage | Comments
1a) Reach - unique visitors to entire site | 162,640 | | Note that 60% of these are new readers, while it takes over a year on average to make a big plan change.
2) Unique visitors who spent at least 5 mins reading a problem profile or career review | 3,042 | 2% | Note that not all of these are priority paths.
3) Coaching applications | 225 | 7% |
4) Tier 1 coaching applicants | 43 | 19% | The top ~20% of applicants that we want to coach.
5) People coached | 41 | 95% |
6a) Expected rated-10 plan changes in 6 months | 4.1 | 10% | Based on recent conversion rates this year.
6b) Expected rated-10 plan changes in 18 months | 8 | 20% | Rough estimate. We expect further increases after this.
6c) Actual rated-10 plan changes recorded this month | 7 | 17% | Note that most of these were from coaching in previous months.
Total financial costs per month ($) | 65,000 | | Much of this was spent on investment rather than getting short-term plan changes.
Costs per expected rated-10 plan change in 18 months ($) | 8,100 | | Overestimate of the cost to cause a plan change (explained below under “costs”).

This is how people who made plan changes first found out about 80,000 Hours:

The EA community dominates because we mainly coach people who have some kind of involvement in the community (and only 10% of people involved in the community first found out about it from us).

On average it takes about 2 years between someone first engaging on the website and reporting a rated-10 plan change, so the people who are reporting plan changes today are those who first started reading in 2015 (when our traffic was about one third the size). This is both because it takes people time to change their minds and because it takes us time to learn about their shift.

What did the plan changes consist of?

What follows is more explanation of what these shifts typically involve.

Our largest focus area this year was AI technical safety, since we thought it was the most urgent area where we could make a contribution.

Over half of the technical AI safety plan changes come from people with a pretty similar story. Typically, they’re in their early-to-mid 20s and studied a quantitative subject at university. They’ve come across effective altruism before, but were not actively involved. They read our online content on AI safety, found it significantly more concrete in terms of next steps than other resources, decided they might be a good fit, and applied for coaching.

During coaching, we helped them think about their personal fit relative to other options, gave them further reading (such as our AI safety syllabus and CHAI’s bibliography) and introduced them to mentors and the AI safety community (e.g. David Kruger, a PhD candidate at MILA and FHI intern).

There are now over 15 people who, due to 80,000 Hours, are aiming to do technical AI safety research at graduate school, often at top labs (such as UC Berkeley’s CHAI). One has already published an AI safety paper and another three are working on papers of their own. We can give further details on request.

Turning to AI policy/strategy, two were already working in government, but after reading our content (such as our AI policy guide) and speaking with us, they’re planning to focus more on AI safety. Three are at top universities, but have switched their focus to AI policy/strategy, and are already working with some of the leaders in the field (e.g. Allan Dafoe). Of these, two decided to get PhDs (in law and political science) to get into a better position to make a difference long-term.

AI support roles usually mean operations and management roles at AI research organisations. These people come from a wide variety of backgrounds. They have a similar degree of interest in AI safety, but a more generalist rather than academic skill-set.

Outside of AI, the next largest group is people who took jobs at effective altruist organisations. Of the nine people working in effective altruist orgs, three were already fairly involved in the community – we helped by telling them about a job they might be a good fit for, which they landed. The other six attribute their plan changes to 80,000 Hours for a miscellaneous set of reasons: for instance, reading our website changed their minds about which problem to work on, or we got them more involved in the community in general.

The biorisk plan changes are typically from people who weren’t planning on working in this field, had completed an undergraduate degree in a related area (e.g. medicine) and were already interested in effective altruism but not sure how to contribute. We told them about how to contribute (e.g. in our podcasts on biorisk) and put them into contact with people working in the area.

The final major category is people who switched to earning to give. Three of them are in tech entrepreneurship and one in quant trading.

Note that because so many of the people who made plan changes are doing graduate study to enter research, many are not yet having an immediate impact. We roughly estimate:

  • 30% have changed their plans but not yet passed a major “milestone” in their shift. Most of these people have applied to a new graduate programme but not yet received an offer.
  • 30% have reached a milestone, but are still building career capital (e.g. entered graduate school, or taken a high-earning job but not yet donated much).
  • 40% have already started having an impact (e.g. have published research, taken a nonprofit job).

This is different from previous years, when fewer plan changes were focused on research, and so a greater fraction were able to have an impact right away. In 2016, we rated 18% as pre-milestone, 11% as post-milestone and 72% as already having an impact.

If you’re interested in making a donation over $100,000, we can provide detailed case studies of individual plan changes (with the permission of the people involved, of course). Please contact [email protected].

Growth rates

At the start of the year, we were only tracking about one rated-10 plan change per month. Over 2016, our average was 1.6. We’ve now held above 7 per month for the last three months, making for 4.4-fold growth compared to the start of 2017 on a month-on-month basis (year-on-year growth was 2.6-fold).

Note that the months in which we recorded 11 and 13 were partly inflated because we made extra effort to assess our impact in these months, and we count plan changes on the month we find out about them (as opposed to the month we cause them). Excluding these extra efforts, they would have been in line with the other months at around 7 per month.

Costs and cost-effectiveness

This year, we put the bulk of core team time into this upgrading process. Here’s a breakdown of how we spent the time:

  • Rob & Roman: producing and promoting online content on priority paths.
  • Peter M & Brenton: giving and improving the one-on-one advice.
  • Peter H: improving systems and metrics for the above.
  • Ben: 35% managing the team; 35% external relations; 30% online content (a mixture of intro and advanced).

In the funnel table above, we estimated that our work in September 2017 will cause 8 rated-10 plan changes within the next 18 months, compared to financial costs of $65,000, or about $8,100 per change.

We expect the number and value of plan changes will continue to increase after this point, since we’ve seen many cases where it took more than 2 years to record a plan change. We also expect the efficiency of the process to continue to increase as we improve the coaching and content. So, we expect the costs to be below $8,100 per change longer term.
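As a rough illustration of how this works out, here’s a minimal sketch of the calculation, using the September 2017 point estimates from the funnel table above (these are rough figures, not exact):

```python
# Sketch of the September 2017 cost-per-plan-change estimate.
# Inputs are the rough point estimates from the funnel table above.

monthly_costs = 65_000           # total financial costs for the month, in USD
expected_rated_10_changes = 8    # rated-10 plan changes expected within 18 months from this month's work

cost_per_change = monthly_costs / expected_rated_10_changes
print(f"${cost_per_change:,.0f} per rated-10 plan change")  # ~$8,100
```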

What’s the value of the plan changes? We’ll cover that later in this report. Additionally, bear in mind that we also produce plan changes rated 1, and other forms of value, such as growing the effective altruism community, which are being ignored here.

What were the main ways we improved the upgrading process during the year?

Once the upgrading process was solidified around June, we made a number of improvements, including:

Released more online content on priority paths:

Most of this content contained appeals to apply to coaching. These appeals, combined with traffic growth, helped us to grow the number of “tier 1” coaching applications (the cutoff for receiving coaching, and roughly the top 20% of applicants) from zero to about 50 per month. Other major sources included EAG conferences and word-of-mouth.

Source of tier 1 coaching leads | Number | Percentage
Online call-to-action | 132 | 48%
EAG | 62 | 23%
Online plan change report | 17 | 6%
Referral | 15 | 5%
Asking them one-on-one | 8 | 3%
Unknown | 40 | 15%
Grand Total | 274 | 100%

We also improved the one-on-one advice in several ways:

  • Brenton started as a full-time coach in February, joining Peter, and was trained to the point where he gets similar results.
  • Peter and Brenton were new to coaching at the start of the year, so have learned a lot about how to do it, and gained expertise on and connections within the priority paths.
  • We iterated the application process, to get better at selecting people (though it’s too early to see clear results).
  • We secured 3 extra paid specialist mentors (compared to 1 last year), who we refer people to after they get coaching.
  • We started tracking coaching in our CRM (customer relationship management system), which means everyone on the team can see who has been coached, and we save all the key data, including how many hours we spend with each person, so we can estimate plan changes per hour.
  • We made lots of tweaks to streamline the process, making it faster to assess applications, schedule coaching sessions and create prep documents.

We also ran several experiments of major new features:

  • Peter M spent 3 weeks focused on headhunting for specific top jobs. This led to two successful job placements and seven trials, which was better than we expected. So, we intend to put more effort into headhunting in the future, with an initial focus on collecting better data on the people we coach to make it easier to do an initial cut-down.

  • We launched a job board that highlights about 10 especially promising job opportunities. It’s received over 70,000 views, and now generates several applications for coaching each week. However, we’re not aware of any clear cases of people switching jobs due to it, so we don’t intend to increase investment in it.

  • We spent several weeks exploring the idea of turning the coaching into a “fellowship”, where we admit a smaller number of people who receive a greater level of ongoing support, as well as introductions to each other. We still think this is promising, and intend to do more experiments over the next year.

  • We hired Niel Bowerman to work full-time as an AI policy specialist, since it’s one of the highest-priority and most neglected areas. If this works well, it could be a route to scaling the process: we could have 2-3 “generalist” coaches, who send people on to “specialists” in several key areas.

We think there are many more avenues for making the process more efficient and scaling up its impact, as we cover in the plans section later.

Progress 2: introductory programmes

Upgrading was the main way we aimed to increase our impact this year, but as covered earlier, we also have additional impact through “introductory” programmes.

How do our introductory programmes work, and what impact do they have?

Our main introductory programme is our online career guide, which aims to introduce people to our key advice and effective altruism in general.

Here is an overview of the funnel, again for Sept 2017:

Stage | Sept 2017 | Conversion from previous main step
1a) Reach - new visitors | 145,506 |
1b) Reach - returning visitors | 41,255 | 28.4%
2a) Engagement - newsletter signups | 6,698 | 16.2%
2b) Engagement - unique visitors that spent more than 5 minutes reading the career guide | 6,251 | 15.2%
3a) Plan change reports (not via coaching or events) | 353 | 5.3%
3b) Confirmed plan changes rated 1 | 37 | 0.6%
3c) Confirmed plan changes rated 1 (not via coaching or events) | 20 | 0.3%
3d) Confirmed plan changes rated 0.1 (not via coaching or events) | 46 | 0.7%

The introductory programmes create value in several ways:

  • Through the direct impact of extra donations and improved career choices.
  • By finding people who later “upgrade”, as in the previous section. In 2017, 69% of plan changes rated 10 were previously rated 1, and on average people have read the site for two years before being confirmed as a rated-10 plan change.
  • By introducing people to the effective altruism community and growing it.

On the latter, several surveys suggest 80,000 Hours is probably responsible for involving about 5-15% of the members of the effective altruism movement:

  1. The 2017 Effective Altruism survey found that 7.2% of respondents (94) first heard about EA from 80,000 Hours, and the annual rate had grown 4-fold since 2014. 80,000 Hours was also the third most important way people “got involved” in EA, after GiveWell and “books or blogs”.

  2. As covered in the same post, in a poll on the EA Facebook group, 13% of respondents cited 80,000 Hours as the way they first found out about effective altruism.

  3. In a survey of participants of EAG London 2017, 9% said they found out about EAG through 80,000 Hours, and we’ve been told similar or larger figures at previous EAG conferences.

  4. If we look directly at the rated-1 and rated-0.1 plan changes, it seems likely that they include hundreds of people who became more involved in effective altruism due to 80,000 Hours (figures below). If there are several thousand “engaged” community members who have made significant changes to their lives, then this would again suggest 5-15% are due to 80,000 Hours.

The results of our survey of rated-1 plan changes:

| Number | Percentage of previous category
Rated-1 plan changes in 2017 | 612 |
Answered the question “do you consider yourself an active supporter of effective altruism?” | 453 | 74%
Said they’re “actively involved” in the community | 148 | 33%
Answered the follow up question “Did you get involved in the effective altruism community because of 80,000 Hours?” | 70 | 47%
1) Said “80,000 Hours made it more likely, but wasn’t the main reason I got involved” | 25 | 36%
2) Said “Yes, 80,000 Hours is the main reason I got involved in the community” | 13 | 19%
3) Said “No, 80,000 Hours did not play a role” | 32 | 46%

Over our entire history, we’ve recorded about 1,500 rated-1 plan changes and 1,600 rated-0.1 plan changes. This survey suggests that about 12.5% of the rated-1 plan changes are now actively involved in the community and got more involved due to 80,000 Hours.

What progress did we make on introductory content in 2017?

New content released

We released several career guide articles, and supplementary articles (brackets show views over the previous 12 months):

See metrics on all our key 2017 content.

We also looked into re-releasing the book, and spoke to several editors, an agent and a bestselling author. We decided to deprioritise it for now, but improved our plans for the next iteration, which we still plan to do within 3 years.

Analytics upgrades

We made major upgrades to user data tracking, to make it easier to measure our impact. For instance, we now record prior involvement with effective altruism when people first sign up to our newsletter, so we can better separate our impact from the rest of the community.

More generally, we now consolidate much more of our data, so we can analyse individual users from when they first give us their email address, to how they engage, all the way to recording plan changes. Here’s a sketch of the system:

Facebook advertising experiments

We performed an $11,000 experiment with Facebook adverts targeted at students and alumni of top universities in their 20s. We were able to gain about 3,500 newsletter subscribers (~$3/sub average, but more like $5 at the margin) after six months, leading to 13 coaching applications and 3 plan changes rated 1, as well as 8 flagged as “potential 10s”.

Our best guess is that the 2-year conversion rate of the subscribers to impact-adjusted significant plan changes will be 0.1 – 2%, making the cost of acquisition per IASPC $200-$3000.

If the costs of acquisition are 50% of total costs at the margin, the total cost per plan change would be $400-$6000. Given that our current cost is under $500, this method is probably not especially attractive.
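Here’s a minimal sketch of the reasoning behind these ranges; the conversion rates and the 50% acquisition-cost share are the assumptions stated above, and the exact endpoints depend on whether the average or marginal cost per subscriber is used:

```python
# Rough reconstruction of the Facebook advertising cost-per-IASPC estimate.
# All inputs are the rough figures quoted above.

spend = 11_000                           # total experiment spend, USD
subscribers = 3_500                      # newsletter subscribers gained
avg_cost_per_sub = spend / subscribers   # ~$3 per subscriber on average
marginal_cost_per_sub = 5                # ~$5 per subscriber at the margin

low_conversion, high_conversion = 0.001, 0.02   # assumed 2-year conversion of subscribers to IASPC

for cost_per_sub in (avg_cost_per_sub, marginal_cost_per_sub):
    best_case = cost_per_sub / high_conversion
    worst_case = cost_per_sub / low_conversion
    print(f"acquisition cost per IASPC: ${best_case:,.0f}-${worst_case:,.0f}")
    # If acquisition is only ~50% of total marginal costs, double the range:
    print(f"total cost per IASPC:       ${2 * best_case:,.0f}-${2 * worst_case:,.0f}")
```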

So, we don’t intend to scale up these campaigns next year, though we are interested in running more experiments, especially with more narrowly targeted audiences.

Video guide experiment

Last year, we considered turning the career guide into a MOOC in order to increase audience size and completion rate. To test this idea, we turned one section of the guide into a 6-minute video, in the style of the Vox videos, i.e. a mixture of talking to camera, archive footage and animation.

We created the video this year, but haven’t released it yet.

We learned that we can produce what we think is a high-quality video, but it took longer than we expected. In total, the cost was 2 weeks of Ben’s time, 4 months and $16,000 from the project lead (a freelancer with a film school background), and $6,000 of other costs.

Many of these costs were due to set-up and learning how to make videos, so could probably be reduced by a factor of two, but turning the whole guide into videos of this style would still be a major project. Another option would be to make the videos less scripted, or to remove animation, but we think this would reduce quality significantly.

Given that we’re focused on upgrading rather than introductory content, we decided to deprioritise this project, though we’re still interested in doing it eventually.

Progress 3: research

Besides upgrading and introductory programmes, we also carry out research. Its aim is to improve the accuracy of our views about which career options and strategies are highest-impact, helping us direct people into more effective paths.

Most of our research is carried out in tandem with creating online content, and it’s hard to separate the two. We also pursue a small amount of open-ended research that isn’t related to anything we intend to publish. Overall, we’d estimate that about 5% of team time goes into research.

Here’s some of the main research progress we made this year, broken down into several categories.

Creating the priority path list

Our upgrading process (as above) focuses on certain priority paths, so it’s vital to choose the right ones.

This year, we created an explicit list for the first time. It was based on a variety of inputs, including the views of community leaders (whom we surveyed), our own analysis of the key bottlenecks facing our top priority problems, and an assessment of where we can most easily help people enter.

As part of this, we added some roles we have not focused on as much in the past, including:

  • AI safety policy
  • AI safety strategy
  • China expert
  • Biorisk policy and research
  • Decision-making psychology research and policy roles.

We also learned about many other high-level priorities for the community in our talent survey.

Improving our understanding of the priority paths

The bulk of our research efforts went into understanding the details of the priority paths, answering questions such as how to enter the path, who’s best suited to it, and what are the best sub-options.

We think this research is relatively tractable, there’s a lot more we could learn, and it has a major impact on what we advise people in-person. It could also eventually change our views about which paths to prioritise.

Most of the podcasts were in this category, as well as the career reviews and problem profiles we listed earlier in the upgrading section (and many unpublished pieces), though note that about 80% of the effort that goes into producing these is communication rather than research.

Here are some examples of questions where we improved our understanding, with relevant published work in brackets:

Big picture research

We also do a little work on understanding major considerations that influence our advice. This year, we improved our understanding of the following issues (which haven’t already been covered above):

Published:

  • What rules of thumb the members of a community should use to coordinate, and how they differ from what’s best if everyone acts individually. Read more (more detailed write-up still in draft)

  • What fraction of social interventions “work”. We found that the claim in our guide and the effective altruism community that “most social programs don’t work” is somewhat oversimplified and depends on the definition. Read more.

  • How non-consequentialists might analyse whether to take a harmful job in order to have a greater positive impact, and what to do all things considered. Read more

  • The degree of inequality in the global income distribution, and how it depends on the definition and dataset used. Read more

  • How most people think it’s incredibly cheap to save lives in the developing world, and that charities don’t differ much in effectiveness. Read more

  • What economists think the typical magnitude of externalities is for different jobs, and how this means they’re small compared to donations. Read more

Unpublished:

  • How to quantitatively analyse replaceability using economics, and its overall decision-relevance.

  • When comparative advantage is important compared to personal fit, and how to evaluate it.

Progress 4: capacity-building

The capacity of the organisation is our ability to achieve our aims at scale. We break it into our team capacity and the strength of our financial situation. Recently, we’ve had a relatively strong financial situation, so team capacity is the more pressing bottleneck.

Progress on team capacity

The current team of 7 full-time core staff and Ben are satisfied and motivated. In a survey in December, on a 1-5 scale corresponding to very poor/poor/satisfactory/good/excellent, they answered 4.8/5 for satisfaction with their line manager, 4.8/5 for satisfaction with the organisation, and 4.5/5 for satisfaction with their role, all things considered.

We had several significant successes hiring full-time staff:

  • Peter Hartree, our lead engineer, who was full-time on the team in 2015 but left, decided to return to near full-time.
  • Brenton Mayer joined as co-head of in-person advice (along with Peter McIntyre) in February, was trained up to deliver similar results as Peter, and now plays a major role in running the service.
  • Niel Bowerman, who was on the team in 2013, and then worked as Assistant Director of the Future of Humanity Institute, came back to work on AI policy. The aim of this role is to fill the most pressing talent gaps in this path, which we think is one of the highest-impact and most neglected areas.

However, we also lost one full-time staff member (Jesse, Head of Growth) as they realised the role wasn’t a good fit for their skills and interests.

We also carried out three trials with very strong candidates.

We hired several new freelancers, including:

  • Richard Batty, who helped to found 80,000 Hours in 2012, and has already written several career reviews.
  • Josh Lowe, an excellent editor, who’s a professional journalist.
  • A research assistant & editor & writer.
  • An office assistant and an office cleaner.

We completed our move to the Bay Area, securing visas for everyone on the team by April 2017, setting up our office, and doing the administration needed (though we’re yet to have the pleasure of filing our first personal US tax returns…).

We raised salaries, which we think has made the current team more satisfied and better able to save time, and also helped us to attract several potential hires.

Looking forward, Ben is near the limit of how many people he can effectively manage, so the next stage is to train other staff members as managers. We started doing that this year: Roman and Rob have started managing one freelancer, and Peter McIntyre has started managing Niel. If this goes well, we should have the option to double the size of the team over the next 1-2 years.

Progress on our financial situation

Our financial situation remains strong. In March, we achieved our expansion fundraising target of $2.1m. This gave us enough to cover our existing team, increase salaries, hire up to 4 new junior staff, expand our marketing budget, and have 12 months’ reserves at the end of 2017. So, we didn’t have to fundraise over the rest of the year.

For the first time, we also raised several commitments to donate again one year later. We intend to make greater use of multi-year commitments now we’re more established.

Meeting this target represented a major increase in our funding level, as shown by the following table (rough estimates of expenses and income by calendar year).2

| 2012 | 2013 | 2014 | 2015 | 2016 | 2017 | Total
Expenditure ($) | 57,721 | 163,108 | 229,133 | 283,297 | 406,074 | 767,347 | 1,906,680
Income ($) | 70,285 | 237,074 | 325,686 | 355,689 | 388,020 | 1,989,462 | 3,366,216

Half of the funding came from Open Philanthropy, who made their first grant to us. Gaining them as a donor represents a major increase in our funding capacity. We also gained another new large donor ($70k+) and 15 new medium donors ($1k+).

Our funding base remains top heavy — the top 5 donors supplied 85% of the target, and almost everything came from the top 24 (who each gave over $1000). About 15% comes from former users, though we expect this to rise significantly as people get older.

Despite being top heavy, we think our funding base is strong. Many of the donors, especially Open Philanthropy, have the capacity to give much more, so could make up for the loss of one or two major donors, as well as grow our funding. Finally, we have tended to welcome one new large donor each year, and expect that to continue.

More importantly, we feel well-aligned with our donors — they’re members of our community, and if they stopped funding us, it would probably be for good reason.

Overall, we think our current donor community could, if appropriate, expand our funding several fold from today, and likely more. This means that while additional funding is helpful, we don’t see our funding model as a key bottleneck over the coming 2+ years. We discuss our fundraising targets in more detail later.

Historical cost-effectiveness

Has 80,000 Hours justified its costs over its history? We think it has, but it’s a difficult question, and we face a huge amount of uncertainty in our estimates.

In this section, we outline our total costs over history, and then outline several ways that we try to quantify our impact compared to costs. In the next section, we estimate future cost-effectiveness at the margin rather than historical. You can see our previous analysis here.

Years in vs. years out

One very rough way to estimate our effectiveness is to consider the ratio of years spent on the project to years of careers changed.

We’ve spent about 19 person-years of time on 80,000 Hours from the core team.

In that time, we’ve recorded over 3200 significant plan changes. Most of these people are in their twenties, so have over 30 more years in their careers, making for 96,000 years influenced, which is 5,000 times higher than our inputs.

You could convert this into a funding multiplier, by assigning a dollar value to each year of labour.

One response is that we mainly “speed-up” plan changes rather than causing them to happen at all, since these people might have been influenced by other groups in the community later, or come to the same conclusions on their own.3 If we suppose the average speed-up is two years (which we think is conservative), then 80,000 Hours has enabled 6,400 extra years of high-impact careers, which is 336 times inputs.

If we repeat the calculations only counting the one hundred rated-10 plan changes, then we’ve influenced 3,000 years counting 30 years each (158 times inputs) or 200 years with a two year speed-up (11 times inputs).
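For those who want to check the arithmetic, here’s the same back-of-the-envelope calculation as a short sketch; all inputs are the rough figures quoted above:

```python
# "Years in vs. years out" back-of-the-envelope calculation.

staff_years_in = 19            # core-team person-years spent on 80,000 Hours
all_plan_changes = 3_200       # significant plan changes recorded
rated_10_changes = 100         # rated-10 plan changes recorded
career_years_left = 30         # typical remaining career length of a plan changer
assumed_speed_up = 2           # conservative counterfactual speed-up, in years

def years_out_ratio(changes, years_each):
    return changes * years_each / staff_years_in

print(years_out_ratio(all_plan_changes, career_years_left))   # ~5,000x (all changes, full careers)
print(years_out_ratio(all_plan_changes, assumed_speed_up))    # ~336x (all changes, 2-year speed-up)
print(years_out_ratio(rated_10_changes, career_years_left))   # ~158x (rated-10 only, full careers)
print(years_out_ratio(rated_10_changes, assumed_speed_up))    # ~11x (rated-10 only, 2-year speed-up)
```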

Given the size of these ratios, if you think our advice is better than what people normally receive, then it seems likely that we’ve been cost-effective.

However, we normally try to estimate our impact in a more precise way. In short, we try to quantify all of our costs and the value of our plan changes in “donor dollars” — how much donors would have been willing to pay for them — and then compare the ratio. That’s what we’ll do in the following sections.

What (opportunity) costs has 80,000 Hours incurred?

We’ve spent about $1.9m over our history, of which over 60% was on staff salaries. See detail on historical costs.

Running 80,000 Hours also incurs an opportunity cost — our staff could have worked on other high-impact projects if they weren’t working here.

These costs are difficult to estimate, but it’s important to try if we’re to give a full picture of our cost-effectiveness.

Note that when most meta-charities report their cost-effectiveness ratio they don’t include the opportunity costs of their staff, making their cost-effectiveness ratio seem higher than it really is, especially if staff salaries are below market rates. If we correctly consider the full opportunity cost of our funding and staff, then any cost-effectiveness ratio above 1 means the project is worth doing.

How can we estimate staff opportunity costs? One method is to suppose that staff would have earned to give otherwise. In 2015, we asked our staff how much they would have donated if they were earning to give instead of working at 80,000 Hours. The average answer was $25,000 per year.

A likely better method is to suppose staff would have worked at other organisations in the community, and quantify the value they would have created there. In our 2016 community talent survey, respondents said they would have been willing to pay an average of $130,000 to have an extra year of time from their most recent hire. If 66% of our staff could have taken one of these jobs otherwise, the opportunity cost per year would be $86,000 per staff member. We assume 66% rather than 100% because 80,000 Hours creates some jobs that wouldn’t have existed otherwise.

In the 2017 survey, the organisations said they would have been willing to pay $3.6m for three extra years of senior staff time, or $1.2m per year. The equivalent for a junior staff member was $1.2m, or $400,000 per year. If we suppose 33% of our staff could have taken senior positions otherwise, 33% junior, and 33% something with low opportunity costs, then the average cost per year is $528,000 per staff member.

As a sense check, if we add $70,000 of salaries, this implies a total cost of $300 per staff hour, which seems high but not out of the question.
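Here’s a short sketch of those opportunity-cost estimates; the survey figures and the 33%/66% splits are the assumptions stated above, and small differences from the quoted numbers are due to rounding:

```python
# Rough reconstruction of the opportunity-cost-per-staff-year estimates.

# 2016 talent survey method
value_per_hire_year_2016 = 130_000
share_with_counterfactual_jobs = 0.66
print(share_with_counterfactual_jobs * value_per_hire_year_2016)   # ~$86,000 per staff-year

# 2017 talent survey method: 33% senior, 33% junior, 33% low opportunity cost
senior_value_per_year = 1_200_000
junior_value_per_year = 400_000
avg_2017 = (senior_value_per_year + junior_value_per_year + 0) / 3
print(avg_2017)                                                     # ~$530,000 (quoted as $528,000)

# Sense check: implied cost per staff hour, assuming ~2,000 working hours per year
salary = 70_000
print((avg_2017 + salary) / 2_000)                                  # ~$300 per hour
```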

We also used an additional 16 years from volunteers, interns and freelancers, but these typically have lower opportunity costs relative to financial costs, so we’re not estimating them.

Putting these figures together:

| 2015 and before | 2016 | 2017 | Grand total
Total full-time core staff (person-years) | 9 | 4.1 | 6.3 | 19.4
Opportunity cost per person-year ($) | 25,000 | 86,000 | 528,000 |
Total opportunity costs ($) | 225,000 | 352,600 | 3,326,400 | 3,904,000
Total financial costs ($) | 733,259 | 406,074 | 767,347 | 1,906,680
Total costs ($) | 958,259 | 758,674 | 4,093,747 | 5,810,680

Our estimates of opportunity costs have increased dramatically, and now account for far more than our financial costs. At least some of this is because the community has become much less funding constrained in recent years.

Note that technically we should calculate the present value of our costs, where costs incurred in earlier years are counted as higher, but because the majority of our costs were incurred in 2017, this won’t have much effect on the total.

Donations in kind

We also receive discounted services from companies (listed here), though for the most part, the value of these only accounts for a few percent of the budget.

One exception is that Google gives us $40,000 of free AdWords per month as part of their Grantspro program, which can be spent on keywords with a value up to $2. If valued at their nominal rate, this would be over half of our budget, though we’d never pay this much if they weren’t free.

We also receive free advice from many in the effective altruism community (many of whom are listed on our acknowledgements page), and in the past we’ve used the time of student group leaders and other volunteers. We haven’t tried to quantify these costs, and ignore donations in kind in the rest of the estimates.

Now, let’s compare these costs to our historical impact.

Value of top plan changes

The main way we estimate our impact is to try to identify the highest-impact plan changes we’ve caused, write up detailed case studies about them, quantify their value, and seek external estimates to cross-check against our own.

Since we think the majority of our impact comes from a small number of the highest-impact plan changes, this captures a good fraction of the total.

Examples of top plan changes

You can see some examples of top plan changes from previous years here:

Unfortunately, many of the details of the plan changes can’t be shared publicly due to confidentiality. If you’re considering donating over $100,000, we can share more details if you email [email protected].

How to quantify the value of top plan changes in donor dollars

One way we try to quantify the value of the plan changes is in “donor dollars” i.e. the value of a dollar to the next best place our donors could have donated otherwise at the current margin. To be more concrete, most of our funding would have gone to opportunities similar to those funded by the EA Community and Long-term funds.

This means that one way to quantify the value of the changes is to imagine the plan change never happened, but instead X additional dollars were given to the EA community or Long-term Fund. Then, we try to estimate the value of X at which this option is equally as good as the actual world in which the plan change did happen.

Our aim is to make an all-considered tradeoff, taking account of:

  • The counterfactual – the chance that the person changed their plans anyway.
  • Discounting – resources are worth less if they come in the future.
  • Opportunity costs – the value of what the person would have done if they hadn’t made the change.
  • Drop out – the chance that the person doesn’t follow through with their plan change.

Needless to say, this is extremely hard, but we need to try our best.
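To make this concrete, here’s a purely illustrative sketch of how those adjustments might combine for a single hypothetical plan change; every name and number below is made up for illustration and is not a figure from our actual estimates:

```python
# Illustrative only: one way the adjustments listed above could combine.
# All values here are hypothetical.

raw_value = 1_000_000        # gross value of the new path, in donor dollars, if fully attributable
opportunity_cost = 100_000   # value of what the person would have done otherwise
p_counterfactual = 0.5       # chance the person would have changed plans anyway
discount = 0.8               # discount for value that arrives in the future
p_follow_through = 0.7       # chance they follow through with the change

estimated_value = (raw_value - opportunity_cost) * (1 - p_counterfactual) * discount * p_follow_through
print(estimated_value)       # one possible all-considered estimate, in donor dollars
```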

Another problem with the donor dollar metric is that additional money has diminishing returns, and other non-linearities, which means that summing the value of different plan changes can be misleading. It will also change from year-to-year as funding constraints vary, so we need to update our estimates each year.

This definition made, the question is whether each dollar of resources we’ve used has created more than one additional donor dollar of value.

What’s the value of the top plan changes in donor dollars?

We created a new list of our top 10 plan changes of all time, and made our own dollar estimates of their value, using the method above. In particular, we drew heavily on the figures provided in the 2017 talent survey by about 30 leaders in the effective altruism community.

In most cases, we also asked at least two people outside of 80,000 Hours to make their own estimates (ideally people who are familiar with the relevant area). We combined the estimates, arriving at the following results.

These results are only useful if you’re relatively aligned with us in choice of problem area, epistemology and the degree of talent-constraint in these areas.4 If you’re interested in donating to us, you might want to make your own estimates, and we can provide more details if you email [email protected].

Having said that, we have examples of plan changes of many different types (e.g. across global poverty and animal welfare as well as extinction risks), so even if you’re not fully aligned with us, 80,000 Hours can still be impactful.

Here are our estimates of the value of the top ten plan changes in donor dollars:

Number | Best guess net present value in donor dollars | Bottom 10% confidence interval | Top 10% confidence interval | “Realised” value | Percentage realised | First recorded | Change this year
1 | 20,000,000 | 700,000 | 60,000,000 | 7,500,000 | 38% | 2016 | Major upgrade
2 | 3,000,000 | -300,000 | 20,000,000 | 300,000 | 10% | 2014 | None
3 | 2,500,000 | 0 | 10,000,000 | 43,000 | 2% | 2014 | None
4 | 2,000,000 | 100,000 | 20,000,000 | 666,667 | 33% | 2016 | Upgrade
5 | 2,000,000 | 200,000 | 10,000,000 | 500,000 | 25% | 2014 | Upgrade
6 | 1,600,000 | 500,000 | 4,000,000 | 30,000 | 2% | 2014 | None
7 | 1,100,000 | 200,000 | 8,000,000 | 100,000 | 9% | 2015 | None
8 | 1,000,000 | 0 | 10,000,000 | 0 | 0% | 2016 | None
9 | 1,000,000 | 0 | 10,000,000 | 100,000 | 10% | 2016 | None
10 | 1,000,000 | -200,000 | 5,000,000 | 1,000,000 | 100% | 2014 | None
TOTAL | 35,200,000 | 1,200,000 | 157,000,000 | 10,239,667 | 29% | |

Plan change value is “realised” when the person contributes labour or money to a top problem area.

If we take these figures at face value, and sum them, then their total value is about $35m, 18.5 times higher than our financial costs, and 6.1 times higher than total costs including opportunity costs of $5.8m.

Note again that since most meta-charities don’t include opportunity costs in their multiplier estimates, you should use the higher figure if making a side-by-side comparison.

Value of plan changes outside of the top 10

We’ve recorded over 3,200 significant plan changes in total, so this estimate only considers 0.3% of the total. What might the value of the rest be?

We’ve recorded a further 90 plan changes rated 10. We think these have an average value of about $100,000, making for a further $9m.

Another 1,500 were rated 1. We took a random sample of 10 and made estimates, giving an average of $7,000, adding up to another $10.5m.

Another 1,600 were rated 0.1, which we think are worth about 10% as much, adding up to another $1.12m.

We also need to consider the value of plan changes we haven’t tracked, those that will be caused in the future due to our past efforts, and the possibility that some people made worse decisions as a result of engaging with us, but we’ve left these out for now. We’ll discuss some ways we might have a negative impact in a later section.

Putting these together:

| Best guess net present value summed in donor dollars | Percentage of total
Top 1 | 20,000,000 | 36%
Next 9 | 15,200,000 | 27%
90 more plan changes rated 10 | 9,000,000 | 16%
1,500 plan changes rated 1 | 10,500,000 | 19%
1,600 plan changes rated 0.1 | 1,120,000 | 2%
Untracked plan changes | ? | ?
Future plan changes resulting from past activity | ? | ?
Negative value plan changes | ? | ?
Total | 55,820,000 | 100%

The sum of $55.8m is 29.3 times higher than financial costs, and 9.6 times higher if we also add opportunity costs.
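Here’s a sketch of how these multipliers are derived, using the totals from the tables in this section:

```python
# Rough reconstruction of the historical cost-effectiveness multipliers.

financial_costs = 1_906_680      # total historical financial costs, USD
total_costs = 5_810_680          # financial costs plus estimated staff opportunity costs, USD

top_10_value = 35_200_000        # estimated value of the top 10 plan changes, in donor dollars
all_tracked_value = 55_820_000   # adding estimates for all other tracked plan changes

print(top_10_value / financial_costs, top_10_value / total_costs)            # ~18.5x and ~6.1x
print(all_tracked_value / financial_costs, all_tracked_value / total_costs)  # ~29.3x and ~9.6x
```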

There is a great deal to dispute about these estimates, but they provide some evidence that 80,000 Hours has been a highly effective use of resources so far.

We can also try to increase the robustness of these estimates by comparing them to some different methods, which is what we’ll cover next.

Increasing the capacity of the effective altruism community

Another way 80,000 Hours has an impact is by introducing people to the ideas of effective altruism, and helping them get involved with the community. If you think building the effective altruism community is valuable, this could be a large source of impact, which is also somewhat independent from the direct value of the plan changes covered above.

We argued earlier that plausibly 5-15% of the current membership of the community can be attributed to 80,000 Hours. We can then multiply this by the total value of the community to make an estimate of our impact.

For the value of the community, we’d encourage you to make your own estimate. Our own (not at all robust) estimate is that the present value of the community is over $1bn donor dollars, with an 80% range of at least $100m – $10bn (even if we completely exclude Open Philanthropy). One reason we think it’s this high is because it seems likely that the community will raise at least a present value of $1bn in extra donations for effective altruism causes in the future by attracting a couple more HNW donors, or simply through the Giving What We Can pledge (which has already raised over $1bn of commitments), and we think the community will achieve much more than just raise money.

If the present value of the community is over $1bn, then 5-15% of that would be worth $50-$150m, or 8-26 times our total financial and opportunity costs.

To make a full estimate, however, we’d also need to consider negative effects 80,000 Hours might have had on the community. For instance, we might have made the community less welcoming by focusing on a narrow range of careers and causes, putting off people who would have otherwise become involved. Some have also argued that growing the number of people in the community can have negative effects since it makes coordination harder. We cover more ways we might have had a negative impact later.

Acting as a multiplier on the effective altruism community

Besides growing the capacity of the community, 80,000 Hours is also the main source of research on career choice that the community uses. If this research is useful, then it can make people in the community more effective.

If, as above, we think the community will have over $1bn of impact in donor dollars, and we’ve used $5.8m of resources, then we’d only need to increase the effectiveness of how these resources are used by 5.8/1000 = 0.58% to justify our costs.

In practice, we think 80,000 Hours has quite major impacts on the community, such as:

  • Encouraging the idea that effective altruism should be about how to spend your time rather than only your money, and presenting evidence that talent gaps are more pressing than funding gaps.
  • Helping to focus the community more on the long-term future and animal welfare rather than only global poverty, and helping to make AI technical / strategy / policy / support commonly considered career options in the community.
  • Spreading our organisational practices to other organisations. For instance, the plan change metric has been adopted by CFAR and considered by others, and we helped CEA enter Y Combinator and develop a more startup-style strategy.

Though, as in the previous estimate, we’d also need to subtract potential negative impacts.

Impact from Giving What We Can pledges

A minor way we have an impact is by encouraging people to take the Giving What We Can pledge and donate more to effective charities. We can use this to get a lower bound on our impact, especially if you’re more concerned by global poverty. We covered this method in more depth last year.

Over our entire history, we’ve encouraged 318 people to take the pledge, who say they likely wouldn’t have taken it without 80,000 Hours.

In Giving What We Can’s most recent published report, from 2015, they estimated that each person who takes the pledge donates an extra $60,000 to top charities (NPV, counterfactually and dropout adjusted). Note that we expect about 30% of this to go to long-term and meta charities, 10% animal welfare and 60% global poverty, so the units are not the same as the donor dollars earlier.

We think the $60,000 estimate might be a little high for new members, but even if we reduce it to $20,000, that’s $6.4m of extra donations to top charities.

This would be about 3 times our financial costs, and about equal to total costs including opportunity costs.
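
A minimal sketch of the arithmetic, again treating the roughly $1.9m financial cost figure as an assumption implied by the earlier ratios:

```python
# Lower-bound estimate from Giving What We Can pledges attributed to 80,000 Hours.
pledgers = 318                 # pledges attributed to 80,000 Hours
value_per_pledge = 20_000      # conservative haircut from GWWC's $60,000 estimate
financial_costs = 1_905_000    # assumption, as in the earlier sketch
total_costs = 5_810_680        # financial plus opportunity costs

extra_donations = pledgers * value_per_pledge
print(f"Extra donations: ${extra_donations:,}")                      # $6,360,000 (~$6.4m)
print(f"vs financial costs: {extra_donations / financial_costs:.1f}x")  # ~3.3x
print(f"vs total costs: {extra_donations / total_costs:.1f}x")          # ~1.1x
```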

Marginal cost-effectiveness

If you’re considering donating to or working at 80,000 Hours, then what matters is the cost-effectiveness of future investment (at the margin), rather than what happened in the past. Historical cost-effectiveness is some guide to this, but the two could easily diverge.

You’d then compare this cost-effectiveness to “the bar” for investment by the community, but we don’t do this here.

Unfortunately, estimating marginal cost-effectiveness is even harder than estimating historical cost-effectiveness, but below we cover several ways to go about it.

First, we’ll consider what our prior should be, then a “top down” method estimating our growth rate, then a “bottom up” method based on our recent activities, and then suggest a long-term growth oriented approach at the end.

Should we expect marginal cost-effectiveness to increase or decrease?

It’s often assumed that our cost-effectiveness ratio should decline over time, due to diminishing returns.

This is true in the long term. Over shorter time scales, however, opposing effects can dominate, such as economies of scale and learning: over time we become better and better at causing plan changes with the same amount of resources.

Our expectation is that learning effects will continue to dominate over the next 1-3 years, decreasing the cost per plan change. Although we have taken some low-hanging fruit (e.g. promoting some of the best paths we know about), we still see ways to become much more efficient. For instance, despite little increase in our number of staff, we’ve tripled our web traffic in three years, which roughly means that each new piece of content gets three times as many views as it would have in the past, making work on content about three times more effective than before. We’ve also become quicker and better at writing, and we expect we can continue to improve. We cover more ways to improve efficiency in the plans section later.

Another difficulty is that much of our resources go into “investment” that we expect will continue to pay off many years in the future (e.g. we improve our research, grow our baseline traffic, train the team). Since these future plan changes have not been included in our impact estimates, they will increase our cost-effectiveness when they arrive in the future.

One consequence of this is that if we ramp up investment spending (as we did this year), it should temporarily drive down our cost-effectiveness ratio, but this shouldn’t be confused for a decrease in actual cost-effectiveness.

Top down: increase in total plan change value

With these caveats in mind, one way to make a quantitative estimate of marginal cost-effectiveness is to compare our impact at the end of 2016 to our impact now, and then compare that to the increase in our total costs.

Unfortunately, we didn’t make estimates using the same methods at the end of 2016, so we can’t make a side-by-side comparison. However, there are a couple of ways we can make a rough estimate.

One method is to put a dollar value on the different types of plan change (0.1/1/10/100/1000), assume they’re constant, and then compare the summed totals then and now:

| | End of 2016 | End Nov 2017 | Growth |
|---|---|---|---|
| Total number of plan changes rated 1 | 866 | 1,498 | 73% |
| Average value of plan changes rated 1 ($) | 7,000 | 7,000 | |
| Total number of plan changes rated 10 | 42 | 89 | 112% |
| Average value of plan changes rated 10 ($) | 100,000 | 100,000 | |
| Total number of plan changes rated 100 | 7 | 9 | 29% |
| Average value of plan changes rated 100 ($) | 1,600,000 | 1,600,000 | |
| Total number of plan changes rated 1000 | 0 | 1 | NA |
| Average value of plan changes rated 1000 ($) | 20,000,000 | 20,000,000 | |
| Summed value in donor dollars | 21,462,000 | 53,786,000 | 151% |
| Total financial and opportunity costs ($) | 4,093,747 | 5,810,680 | 42% |
| Ratio of summed value to costs | 5.2 | 9.3 | |

This method suggests that our cost-effectiveness went up by almost a factor of 2 in 2017, though the increase is small compared to uncertainty in the estimate.
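
The summed values and ratios in the table can be reproduced directly from the plan change counts and the assumed per-change values; a minimal sketch:

```python
# Reconstructing the summed values in the table from counts and assumed
# per-change values (donor dollars).
per_change_value = {1: 7_000, 10: 100_000, 100: 1_600_000, 1000: 20_000_000}
counts = {
    "End of 2016":  {1: 866,   10: 42, 100: 7, 1000: 0},
    "End Nov 2017": {1: 1_498, 10: 89, 100: 9, 1000: 1},
}
costs = {"End of 2016": 4_093_747, "End Nov 2017": 5_810_680}

for period, c in counts.items():
    summed = sum(n * per_change_value[rating] for rating, n in c.items())
    print(f"{period}: ${summed:,} -> ratio {summed / costs[period]:.1f}")
# End of 2016: $21,462,000 -> ratio 5.2
# End Nov 2017: $53,786,000 -> ratio 9.3
```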

Moreover, much of the increase was due to adding one plan change rated 1000 in 2017. If we exclude this, the ratio was flat.

One complication is that many of the plan changes we recorded in 2017 were due to activities before 2017 (which would decrease the effectiveness in 2017). On the other hand, much of our spending in 2017 will produce plan changes in 2018 and beyond, and this would increase the estimate if included. It’s unclear which effect dominates.

Another way to make an estimate of our change in total impact is to look at changes in the top 10 list, since they account for about 60% of the value measured in donor dollars.

| Number | Best guess net present value in donor dollars | First recorded | Change this year |
|---|---|---|---|
| 1 | 20,000,000 | 2016 | Major upgrade |
| 2 | 3,000,000 | 2014 | None |
| 3 | 2,500,000 | 2014 | None |
| 4 | 2,000,000 | 2016 | Upgrade |
| 5 | 2,000,000 | 2014 | Upgrade |
| 6 | 1,600,000 | 2014 | None |
| 7 | 1,100,000 | 2015 | None |
| 8 | 1,000,000 | 2016 | None |
| 9 | 1,000,000 | 2016 | None |
| 10 | 1,000,000 | 2014 | None |
| SUM | 35,200,000 | | |

Our estimate is that the plan change rated 1000 would have been valued at more like $3m in 2016, so it saw a $17m increase, or about 50% of the total increase in summed value.

However, even if we put this to one side and focus on the other nine, two of the plan changes were new, and two more were upgraded. That means that in 2016 there were about 5 plan changes valued at over $1m, whereas now there are 9. That’s an increase of 1.8-fold, roughly matching our other estimate of a 2-fold increase.

“Bottom up” estimate of recent activities

One weakness of the top-down approach is that the nature of our activities and plan changes has changed over time. In particular, some of our most valuable plan changes in the past were caused by research and networking that we might not be able to repeat going forward, or that won’t be aided by marginal donations.

So, an approach that might be more relevant to donors is to make a “bottom up” estimate of the cost-effectiveness of our marginal activities.

In the earlier section on our upgrading process, we estimated that over the last few months it has produced rated-10 plan changes (counting those that materialise within 18 months) for about $8,100 each. We also think this process can be scaled at the margin, and likely made more efficient, and this ignores all our other forms of impact.

If we add opportunity costs at the 2017 ratio, that would be $42,000 per rated-10+ plan change.

The question then becomes whether recent plan changes rated 10 are worth this many donor dollars.

We tried to identify the top plan changes that were newly recorded in 2017. We excluded one that was due to networking, so doesn’t obviously reflect the impact of scalable activities. We made donor dollar estimates of the remainder, using the same process as above, finding a total value of just under $3m.

Given that there were 49 new rated-10 plan changes in total over 2017, that would imply their mean value is at least $60,000.

We also estimated that the lowest-value plan change was worth about $50,000. Roughly, we can add this $50,000 baseline to the roughly $60,000 per-change contribution from the top plan changes, and estimate that the mean is about $110,000.

A value of $110,000 per rated-10 plan change is about 13.6 times the financial cost per plan change, and about 2.6 times the cost once opportunity costs are also included.

This is a little lower than our previous estimates, but is likely an underestimate, since the value of the plan changes tends to increase over time as we gain more information.
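
As a rough check on this chain of figures, here is a minimal sketch of the arithmetic; all inputs are the estimates quoted above.

```python
# Bottom-up estimate of marginal cost-effectiveness for rated-10 plan changes.
top_changes_value = 3_000_000   # estimated value of the top newly recorded 2017 changes
new_rated_10 = 49               # new rated-10 plan changes over 2017
baseline_value = 50_000         # estimated value of the lowest-value rated-10 change

mean_estimate = top_changes_value / new_rated_10 + baseline_value   # ~$111,000
mean_value = 110_000            # rounded figure used in the text above

cost_per_change = 8_100         # recent financial cost per rated-10 plan change
cost_incl_opportunity = 42_000  # with opportunity costs added at the 2017 ratio

print(f"Mean value estimate: ~${mean_estimate:,.0f}, rounded to ${mean_value:,}")
print(f"{mean_value / cost_per_change:.1f}x financial cost")                      # ~13.6x
print(f"{mean_value / cost_incl_opportunity:.1f}x cost incl. opportunity costs")  # ~2.6x
```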

The growth approach to evaluating startup nonprofits

A final approach to estimating marginal cost-effectiveness would be to focus more on future benefits, rather than our 2017 impact.

We intend for the vast majority of the impact of 80,000 Hours to lie in the future. The methods above, however, only consider “banked” plan changes, leading to too much focus on the short-term. It’s like evaluating a startup company based on its current profits, when what actually matters is long-term return on investment.

Estimating cost-effectiveness based on future growth, however, is also very hard. We suggest some rules of thumb in this more in-depth article on the topic.

One way you could approach the estimate is to think about the total value 80,000 Hours will create if it succeeds in a big way, and then consider how marginal investment changes the chance of this happening, or brings forward this impact.

We think the potential upside of 80,000 Hours is high. For instance, if we can engage 5% of talented young people, then about 5% of political, business, and scientific leadership will have been readers, and this is a group that will influence hundreds of billions of dollars of resources per year in the US alone.

If you think there’s some non-tiny probability of this kind of scenario, then it’ll be highly effective to fund 80,000 Hours to a level that lets us grow at the maximum sustainable rate, bringing this impact as early as possible.

Mistakes and issues

The following are some mistakes we think we’ve made which became clear this year. We also list some problems we faced, even if we’re not sure they were mistakes at the time.

People misunderstand our views on career capital

In the main career guide, we promote the idea of gaining “career capital” early in your career. This has led some engaged users to focus on options like consulting, software engineering, and tech entrepreneurship, when actually we think these are rarely the best early career options if you’re focused on our top problem areas. Instead, it seems like most people should focus on entering a priority path directly, or perhaps go to graduate school.

We think there are several misunderstandings going on:

  1. There’s a difference between narrow and flexible career capital. Narrow career capital is useful for a small number of paths, while flexible career capital is useful in a large number. If you’re focused on our top problem areas, narrow career capital in those areas is usually more useful than flexible career capital. Consulting provides flexible career capital, which means it’s not top overall unless you’re very uncertain about what to aim for.

  2. You can get good career capital in positions with high immediate impact (especially problem-area specific career capital), including most of those we recommend.

  3. Discount rates on aligned talent are quite high in some of the priority paths, and seem to have increased, making career capital less valuable.

However, from our career guide article, some people get the impression that they should focus on consulting and similar options early in their careers. This is because we put too much emphasis on flexibility, and not enough on building the career capital that’s needed in the most pressing problem areas.

We also enhanced this impression by listing consulting and tech entrepreneurship at the top of our ranking of careers on this page (now changed), and they still come up highly in the quiz. People also seem to rate tech entrepreneurship more highly as an option for direct impact than we do.

To address this problem, we plan to write an article in January clarifying our position, and then rewrite the main guide article later in the year. We’d also like to update the quiz, but it’s lower priority.

We’ve had similar problems in the past with people misunderstanding our views on earning to give and replaceability. To some extent we think being misunderstood is an unavoidable negative consequence of trying to spread complex ideas in a mass format – we list it in our risks section below. This risk also makes us more keen on “high-fidelity” in-person engagement and long-form content, rather than shareable but simplified articles.

Not prioritising diversity highly enough

Diversity is important to 80,000 Hours because we want to be able to appeal to a wide range of people in our hiring, among users of our advice, and in our community. We want as many talented people as possible working on solving the world’s problems. A lack of diversity can easily become self-reinforcing, and if we get stuck in a narrow demographic, we’ll miss lots of great people.

Our community has a significant tilt towards white men. Our team started with only white men, and has remained even more imbalanced than our community.

We first flagged lack of team diversity as a problem in our 2014 annual review, and since then we’ve taken some steps to improve diversity, such as to:

  1. Make a greater effort to source candidates from underrepresented groups, and to use trial work to evaluate candidates, rather than interviews, which are more biased.
  2. Ask for advice from experts and community members.
  3. Add examples from underrepresented groups to our online advice.
  4. Get feedback on and reflect on ways to make our team culture more welcoming, and give each other feedback on the effect of our actions in this area.
  5. Put additional priority on writing about career areas which are over 45% female among our target age ranges, such as biomedical research, psychology research, nursing, allied health, executive search, marketing, nonprofits, and policy careers.
  6. Ask a highly qualified woman to join our board as part of our next round of board reform.
  7. Do standardised performance reviews, make salaries transparent within the team, and set them using a formula to reduce bias and barriers.
  8. Have “any time” work hours and make it easy to work remotely.
  9. Implement standard HR policies to protect against discrimination and harassment. We adopted CEA’s paid maternity/paternity leave policy, which is generous by US standards.

Our parent organisation, CEA, has two staff members who work on diversity and other community issues. We’ve asked for their advice, and supported their efforts to exclude bad actors, and signed up to their statement of community values.

However, in this time we’ve made little progress on results: in 2014 the full-time core team consisted of 3 white men, and now it consists of 7. The diversity of our freelancers, however, has improved. We now have about 9 freelancers, of whom about half are women and two are from minority backgrounds.

So, we intend to make diversity a greater priority over 2018.

In particular, we intend to make hiring at least one candidate from an underrepresented group to the core team a top priority for the next year. To do this, we’ll put more effort (up to about 5-10% of resources) into improving our culture and finding candidates. We hope to make progress with less investment, but we’re willing to make a serious commitment because it could enable us to hire a much better team over the long-term, and talent is one of our main constraints.

Set an unrealistic IASPC target

As explained earlier, we set ourselves the target of tripling impact-adjusted significant plan changes (IASPC) over the year while also focusing on rated-10 plan changes. However, the IASPC metric wasn’t set up to properly capture the value of these changes, the projects we listed were more suited to rated-1 plan changes, and we didn’t properly account for a 1-3 year lead time on generating rated-10 plan changes. This meant we had to drop the target half-way through the year.

We could have anticipated some of these problems if we had spent more time thinking about our plans and metrics earlier in the year, which would have made us more effective for several months. In particular, we could have focused earlier on specialist content, which is better suited to attracting people who might make rated-10 plan changes, rather than on improving the career guide and general interest articles.

Going forward, we’ll adjust the IASPC metric to contain a 100 and 1000 category, and we’ll think more carefully about how easy it is to get different types of plan change.

Not increasing salaries earlier

This year, we had in-depth discussions with five people about joining the team full-time. Four of them were initially concerned by our salaries, but were reassured after they heard about the raise we implemented in early 2017. This suggests we might have missed out on other staff in earlier years. Given that talent is a greater bottleneck than funding, this could have been a significant cost.

It’s not obvious this was a mistake, since we weren’t aware of as many specific cases in previous years, but we were encouraged by several advisors to raise salaries, so it’s possible we could have corrected this earlier.

Looking forward, we expect there are further gains from raising salaries. After living in the Bay Area for about a year, we have a better sense of the cost of living, and our current salaries don’t easily cover a high-productivity lifestyle in the area (e.g. living close to a downtown office). Rent costs have also increased at around 10% per year, which means that comparables we’ve used in earlier years (such as GiveWell in 2012) are out of date. Our salaries are also arguably in the bottom 30% compared to other US nonprofits of our scale, depending on how you make the comparison.

Rated-1 plan changes from online content not growing

Even though web traffic is up 80%, the number of people reporting rated-1 and rated-0.1 plan changes from the career guide didn’t increase over the year, and the same is true of newsletter subscribers. This is because traffic to key conversion pages (e.g. the decision tool and the article about the GWWC pledge) has not increased.

This is not surprising given that we haven’t focused on driving more traffic to these pages. Instead, we’ve recently focused on driving people into the coaching applications. However, we had hoped that new traffic would spill over to a greater extent, driving extra growth in rated-1 plan changes.

Accounting behind

The CEA ops team (which we share) was short of staff over the year. This meant that our financial figures were often delayed by 3-6 months.

One problem this caused is that we only had delayed information on what our reserves were through the year, though this didn’t cause any issues this year since we maintained plenty of reserves.

Another problem was that it was hard to track spending, which meant that we didn’t catch overspending on AWS until we had incurred $5,000 of unneeded expenses (though we received a partial refund), which was about 0.7% of our budget. We also mis-paid a staff member by about the same amount due to a confusion about their salary over several months, which we decided not to recoup (in part because we wanted to raise their salary anyway).

To address this, CEA has made several operations hires over the year, increasing capacity (though illness on the team has temporarily reduced capacity again). What’s more, all our accounts have been transferred to new software (Xero), are now up-to-date within 1-2 months, and are easier to check, and we can continue to make improvements to systems. We also intend to allocate more time to checking our spending.

Not being careful enough in communication with the community

Quick comments on the EA Forum or Facebook by staff members can be taken as representing the organisation, creating problems if they are mistaken or misconstrued. Some comments by Ben and Rob this year ended up causing controversy. Even though the criticism was largely based on a misunderstanding, and many people defended our comments, others didn’t see the defences, so our reputation was still likely harmed.

The most obvious solution is to raise our bar for commenting publicly, and to subject public comments to more checking, moving in the direction of Holden Karnofsky’s policies. The downside is reduced communication between us and our community, so we don’t intend to go as far as Karnofsky, but we’ve taken a step in that direction. As part of this, we updated our team communications policy and reviewed it with the team.

Poor forecasting of our coaching backlog

In November, our coaching backlog suddenly spiked to over 6 weeks, as we received a large number of applications and had reduced coaching capacity. This meant that in Sept-Oct we spent more time getting coaching applications than was needed, recent applicants had to wait a long time before starting, we’ve committed to coach people with different criteria from what we’d now use, and we’ve had to temporarily close applications.

We could have predicted this if we had made more thorough forecasts of how many hours of coaching time we’d have, and how much time it takes to coach each person.

Going forward, we’re making more conservative and detailed estimates of capacity.

Maybe not focusing enough on “rated-1000” plan changes

The top plan change rating in our IASPC metric was “10”. We now think this top category spans at least two orders of magnitude, so it should be broken into 10, 100, and 1000 categories. The coarse metric could have led us to put too little attention into the 100 and 1000 categories. For instance, focusing on rated-100+ plan changes rather than those rated 10+ would probably mean putting more effort into novel research and into one-on-one recruitment into coaching.

It’s not obvious this was a mistake because we were less confident about the degree of spread at the start of the year – we’re more confident now due to the talent survey and our recent evaluation. We’re also not sure how it changes our strategy. We also partially fixed the problem in June by deciding to focus only on rated-10 plan changes.

Going forward, we intend to add 100 and 1000 categories to the metric, and think more about how focusing more on these might change our strategy.

Maybe not focusing enough on maintaining credibility as opposed to getting traffic

Related to the above, some have suggested that we focus too much on getting traffic, such as by using “clickbaity” titles, funny images, a more opinionated tone, “tech startup” design, and topics of wider interest, and not enough focus on maintaining a credible brand. This runs the risk of getting more people interested, but putting off some of our core audience, who are more likely to make rated-100+ plan changes.

We’re unsure how much of a problem this is. For instance, more “clickbaity” headlines often result in over twice as much traffic on Facebook in testing, and there seems to be a correlation between traffic and the number of good coaching applications that result from new content. We also already make an effort to avoid highly clickbaity headlines.

However, maintaining a credible brand keeps more of our options open, so we’d like to take a step in this direction. To do this, we intend to:

  1. Stop using “memes” or other images associated with non-credible sources (though newspaper style cartoons are OK).

  2. Run titles past a checklist of pitfalls, and generally aim for more credible sounding options, accepting some cost in terms of views.

  3. Create more advanced content.

  4. Aim to push all our content in the direction of having a balanced, fact-focused, direct tone, and presenting our views with more nuance. We’ve already moved in this direction in the last 1-2 years, and we intend to keep doing so.

Risk-analysis: how might 80,000 Hours have a negative impact?

There are several ways we could cause people to have less impact with their careers, or other negative impacts. Here are some (non-exhaustive) examples that have seemed more concerning over recent years.

  1. Our advice might be wrong. This is the most obvious risk, though we also put a lot of effort into avoiding it through our research. Overall, we think it’s likely that some of our advice will turn out to be wrong, but it’s among the most well-researched advice available, so we’re confident it’s more likely to be true than common alternatives.

  2. We might put people off effective altruism more broadly. Although we bring many people into the community, it’s hard to avoid putting some people off, especially given our large and untargeted reach via online content. In particular, some have suggested that we focus too much on a narrow range of careers and causes, and that our brand might put off some of the most analytical people (as we mention above). We try to reduce these risks when writing and promoting our content. We could do more to scale back reach, but this would come at the cost of a smaller audience. We are also aware of some people being put off by being turned down for coaching. This is difficult to avoid, but we keep aiming to improve our framing, selection, and rejection process for the coaching service.

  3. We might encourage people to pursue overly competitive options (e.g. AI safety research), increasing the chance they burn out or get disillusioned, reducing their long-term impact. We try to avoid this by (i) being clear about the conditions for personal fit (ii) encouraging people to test options before committing and (iii) encouraging people to gain some flexible career capital as back-up. For instance, if you pursue a machine learning master’s degree as a test for working on AI safety research, and it doesn’t work out, you’re still in a good position.

  4. Relatedly, we might encourage people to underweight personal fit in favour of concrete “high impact” suggestions. This was more of a concern in the early years when we didn’t emphasise personal fit as much, but we still find that people sometimes focus too much on the concrete options we suggest rather than personal fit. We’ve also noticed that people overweight our concrete suggestions, compared to seeking out novel options. This seems hard to avoid, but we’ve tried by focusing more on “solving global problems” rather than promoting certain job types, as well as by promoting entrepreneurial approaches. In coaching, we encourage people to think of more options.

  5. In general, it’s easy to encourage people into worse options by giving an overly simple presentation of a complex consideration. We gave the example of career capital earlier, and last year we gave the example of replaceability. We’ve tried to address this by (i) being more cautious about promoting new ideas that we understand less well and (ii) leaning towards writing longer, more nuanced articles, at the likely expense of a smaller audience.

  6. By having online materials, we might encourage unqualified people to enter risky fields that are difficult to positively contribute to, such as AI policy. This makes coordination harder, and could result in people doing harmful projects. We’re trying to address this by putting more emphasis on gaining expertise, improving our filtering of coaching candidates, and being more up-front about ways to cause harm in a field (e.g. in our recent article on extinction risk, we listed ways to not contribute).

Plan for the next year

In brief, we want to push ahead with the “upgrading” process outlined above, in which we release content about priority paths, then provide one-on-one support entering these paths, with the aim of getting plan changes rated 10, 100, or 1000.

We think this process is already cost-effective, can be made several times more efficient again, and can then be scaled up at least several fold.

In the rest of this section, we outline some of the key strategic decisions we need to make, and then how this relates to concrete projects for the next year.

Key strategic decisions

There’s more justification for some of the key decisions below the table.

| Decision | Position | Quick justification |
|---|---|---|
| Should we change our vision & mission? | Not substantially, but shift focus towards “filling key roles in most pressing problems” rather than helping “as many people as possible find high impact careers”. See the current version at the start. | We think we should focus more on plan changes rated 100/1000. |
| What should our key impact metric be? | Stick with IASPC as our key week-to-week metric, but add 100/1000 categories. This will mean we also need to adjust the system to better handle changing values over time. At least once a year, convert into donor dollars, taking these figures from our own research and annual talent survey. | As explained earlier, we think the rated-10 plan changes span over 2 orders of magnitude in donor dollars, and it’s important to track these differences. Donor dollars are what matter for our donors, but are hard to track on a week-to-week basis. |
| Should we focus on getting plan changes rated 0.1/1/10/100/1000? | Rated-100. | Explained below. |
| Which target market? | Similar to last year, but focus more on those aged 25-32 rather than 20-25, and with qualifications especially relevant to priority paths. | It’s hard for younger people to make rated-100 plan changes. The EA community is also short of certain skills. |
| Do we have product-market fit? | Yes when it comes to getting rated-1/10 plan changes, but not when it comes to those rated 100+. | Since we only have 10 examples, we’re less certain we have a scalable process for getting these. Moreover, we face major uncertainties about the form of the product, especially in-person. |
| How fast should we hire? | Slowly. Only hire for essential roles, or where someone can add lots of value immediately with high autonomy. | We don’t yet have product-market fit if we focus on rated-100 plan changes. Moreover, we think we can increase our impact several-fold by increasing the efficiency of our existing programmes rather than adding new staff. Hiring staff also reduces flexibility, and in general we think there are huge benefits from maintaining a high bar to hiring. |
| What are the most powerful drivers of rated-10+ plan changes? | We usually find that all the main drivers (online content, coaching, community connections) are important, and their impact is difficult to disentangle, because they serve different purposes. That said, some kind of in-person interaction is basically necessary, and online alone isn’t sufficient. | See the earlier section on the upgrading process. |
| How much to focus on research vs. providing advice (i.e. improving our accuracy vs. telling people our findings)? | Currently spending under 5% of time on research; would like to increase towards 10%. | 10% seems like a reasonable long-term ratio, and there are lots of concrete topics that seem useful. |
| How much to focus on online content vs. in-person advice, such as coaching? | Roughly 50:50, allocating staff mainly based on fit. | Both seem important for plan changes, and have different, complementary benefits. |
| How much to focus on improving our programmes vs. outreach? | Mostly focus on improving programmes. (Possible exception: make more effort to recruit top coaching candidates.) | We already have lots of reach from our online content, the EA community, and word-of-mouth. Improving programmes also increases each of these. |
| In coaching, should we work more with existing users, or with new people? | Unsure. | We want to think more about how to prioritise between different groups of people we could coach. |

Why focus on getting more rated-100 plan changes right now?

On current estimates, plan changes rated 100 or higher account for over 50% of our historical impact in donor dollars, so they’re the best proxy for our impact.

What’s more, we suspect that at the current margin it’ll be more cost-effective to grow by getting more rated-100 plan changes than those rated 1 or 10. This is because it seems easier to get one extra rated-100 plan change than 100 extra rated-1 plan changes.

We also think it’ll be easier to grow through rated-100 plan changes than those rated-1000, since that shift would involve a major departure from current programmes.

We’re less certain whether it’s more effective to focus on those rated-10 or 100, but when we try to make more detailed bottom-up estimates of specific projects, a focus on rated-100s seems slightly better.

We also expect it to be more effective because we’ve put less effort into getting rated-100 plan changes in the past, so it’s more neglected.

Some additional strategic reasons in favour include:

  • Option value – it’s easier to go from narrow outreach to broad than vice versa.

  • Measurability – It’s easier to demonstrate a large impact on a small number of people, making it easier to tell if our plans are working and to use this evidence to fundraise. What’s more, our donors would prefer more rated-10/100 plan changes to more rated-1s.

  • Value of information – we’re more uncertain about how we can best find more rated-100 plan changes, so we’ll learn more from trying.

  • Community building – it’s easier to coordinate a small number of highly engaged people.

  • EA community needs – our survey found that community leaders think “upgrading” is the key bottleneck facing the community right now, and there was also significant demand for people with more specialist skills.

What do we think our key bottlenecks are?

By “key bottleneck” we mean an area where a small amount of effort will have a big effect on our total impact.

We think our key bottleneck is designing & improving our upgrading process – it seems possible to increase the impact of the process at least 2-3 fold within about a year, without hiring anyone, and this would increase our total impact about the same amount. We could then further scale it up by hiring.

Some promising areas within this focus include:

  1. Get better at spotting and attracting the right people to coach. We think this alone might increase coaching efficiency by 2-3 times, though we’re quite uncertain.

  2. Experiment with ways to improve the coaching offering, such as adding better job matchmaking to top roles and funding, or a fellowship to provide on-going support. We think this could increase the efficiency of the coaching another 2-3 times.

  3. Push ahead with our experiment with Niel as an AI policy specialist – a coach who works full-time on AI policy is tasked with finding and filling the most pressing talent gaps in the area. If this works, we could hire specialists for many of our top priority paths.

  4. Keep adding content aimed at tier 1 coaching leads, both to attract them and help them decide. This has the potential to increase our audience another 50-100%, and multiplies the effectiveness of the coaching.

  5. Also see the research priorities below.

Capacity bottlenecks

Ben has reached his management capacity, so additional hires need to be managed by other team members. This means we’re bottlenecked by having skilled, proven managers in the team. To address this, Peter McIntyre has started managing one full-time staff member; and the other staff are managing freelancers. If this goes well, we can hire more people. We also need to keep working on having clear plans and metrics.

With hiring, we also face the diversity problems we covered earlier.

Research priorities

Last year, we think developing and promoting the idea of “AI policy and strategy” careers in the community helped to cause a significant shift in where people focus. In the past, we saw similar shifts from our work on earning to give and “working at EA orgs” as options.

This suggests that further work to better frame priority paths could be valuable. In particular, we’d like to:

  • Improve our understanding of the best sub-options within each path, especially the newer ones – China expert and biorisk policy/research.

  • Improve our understanding of who is the best fit for each path, and how people can narrow down the options quickly. We could try to create a flow chart for quickly cutting down the options.

  • See if we can identify new priority paths.

We can make progress on the above by doing more interviews with experts in each top problem area.

We’d also like to finish off in-progress conceptual work on community coordination and replaceability.

How could the rest of the community best help us?

The community is our biggest source of top coaching leads, so we’re keen to see the community continue to grow and send people to coaching.

We also find that some in-person interaction is effectively a necessary condition for making a rated-10+ plan change, so we’re keen to see more high-quality, in-person ways of engaging with the community that we can funnel people into.

We also draw heavily on research produced by the rest of the community, especially concerning which problem areas to focus on and how to tackle them, and more of this would be a great help.

Examples of concrete projects we’re likely to pursue early in 2018

Content

  • Update our biorisk and nuclear security problem profiles.
  • Upgrade our AI technical safety research profile to include recent developments, and re-release.
  • Continue to release podcasts aimed at helping the most engaged users.
  • Release core articles justifying our priority paths and updating our views on career capital.

Research

  • Set aside some time to explicitly focus on the research priorities listed above (without worrying about publishing the write up), and see how much progress we make.
  • Tilt the podcasts towards topics that aid our research priorities.

Technical & web

  • Update our IASPC tracking systems to include the 100/1000 categories, and record changes to the ratings.
  • Review results from the last iteration of our coaching application process, and do another.
  • Website updates to make it more useful and appealing to the people we most want to coach.

In-person

  • Actively reach out to find more top coaching candidates from the community.
  • Continue with the AI policy specialist experiment.
  • Set up a database we can use to headhunt for key roles.

Metric targets

We expect that if we only make modest efforts, we should be able to find about:

  • 600 rated-1 plan changes
  • 25 rated-10
  • 3 rated-100

This makes for a total of about 1100 IASPC points – a little below what we recorded in 2017 ignoring the single rated-1000 plan change.

To push ourselves to grow, we’d like to aim to double this, reaching 2200 over the year.

This seems achievable, but it’s hard to be confident. We could easily do everything right but fail to get any rated-100+ plan changes, which would make it hard to hit the target. For instance, even if we got 100 extra rated-10 plan changes (more than 2-fold growth), that would only be 900-1000 extra IASPC points. Alternatively, we could quickly find one rated-1000 plan change, which would almost be enough by itself.
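
A minimal sketch of the points arithmetic behind these targets (as in the examples above, a rated-N plan change contributes N IASPC points):

```python
# IASPC points implied by the modest-effort targets above.
targets = {1: 600, 10: 25, 100: 3}   # rating -> number of plan changes
baseline_points = sum(rating * count for rating, count in targets.items())
print(baseline_points)               # 1,150 points, i.e. roughly the ~1,100 quoted above

stretch_target = 2_200               # the doubled target for the year
print(stretch_target - baseline_points)   # ~1,050 extra points needed

# Illustrative ways of closing the gap:
print(100 * 10)    # 100 extra rated-10 plan changes add about 1,000 points
print(1 * 1000)    # a single rated-1000 plan change adds 1,000 points
```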

Lead metric targets

We’ll also focus on the following metrics, which predict future plan changes.

  1. Unique site visitors to content that (i) delivers key messages for tier 1 coaching leads (ii) upholds our tone guidelines.

  2. Number of quality-adjusted coaching leads. Currently we focus on “tier 1” leads, but we’d like to make this more fine-grained.

  3. IASPC per hour produced by coaching (“coaching efficiency”)

  4. Total coaching hours.

  5. Hours spent on research, working on issues relevant to our top research priorities.

We’re interested in creating a more leading metric for coaching, which might be something like “number of tier 1s satisfied” per week.

What this might look like in the future

We’ll keep improving our online advice. Our aim is to make it clearly the best source of research-backed advice on social impact careers.

The in-person team will coach the most engaged online readers. We envision hiring specialists for each major priority path to allocate talent within each area, similar to a foundation grantmaker – we’re aiming to be the “Open Philanthropy of careers”.

In more depth, the process might look like this:

  1. Read online content (basic ideas and options)
  2. Apply to coaching and get initial “generalist” advice (where to focus)
  3. Once decided and if qualified, become an “80k fellow”, which gives you introductions to other fellows, specialist coaching in your area, and further “add-ons”, such as headhunting into specific jobs and scholarships.
  4. We keep going until you get placed into a top role in your area.

It seems feasible we could make this process 3+ times more efficient, and gain 3+ times as much coaching capacity, growing its impact about 10-fold.

Longer term, we aim to establish ourselves as a key source of advice for the world’s most talented young people. If we do this, then in the future we’ll have worked with many of the leaders in academia, politics and business, and we’ll help them work together to solve the world’s most pressing problems.

Financial report

2017 spending vs. projected budget

Below we compare our spending so far this year with the budget we projected in February. The spending is measured as of the end of November, so is one month short of the year.

| | Global total spending ($) | Baseline budget ($) | Aggressive budget ($) |
|---|---|---|---|
| Staff and contractor salaries, payroll and benefits | 453,836 | 436,197 | 684,945 |
| Non-salary staff expenses | 58,049 | 40,121 | 40,121 |
| Office rent, supplies and utilities | 63,869 | 72,902 | 72,902 |
| Contribution to CEA’s expenses for operations | 45,000 | 38,499 | 38,499 |
| Marketing | 19,572 | 19,511 | 183,775 |
| Online services | 31,257 | 19,388 | 19,388 |
| Workshop expenses and student groups (exc. travel) | 4,429 | 19,301 | 19,301 |
| Other | 4,526 | 6,930 | 6,930 |
| Uncategorized expenditure | 5,544 | 35,607 | 35,607 |
| Foreign currency gains and losses | -779 | | |
| 80,000 Hours total spending Jan-Nov 2017 | 685,305 | | |
| Projected total spending by the end of 2017 (including payment of any outstanding liabilities) | 767,347 | 688,457 | 1,101,469 |
| 80,000 Hours total income Jan-Nov 2017 | 1,989,462 | | |

Overall, spending was a little over our baseline budget, but we came under the aggressive budget in two main ways:

  • Staff salaries: We expect to have spent $500,000 by the end of the year, but had budgeted $680,000. This was mainly because we only hired one full-time staff member toward the end of the year and about 1 FTE worth of freelancers, when the budget allowed for hiring four FTEs in the summer. Some staff also gave up some of their pay.
  • Marketing: We ran a FB marketing experiment, but decided to deprioritise further spending. We also didn’t do a book giveaway.

Although we didn’t end up using them, it was useful to have these options to expand. The extra money we raised will simply reduce our fundraising target this year.

Some other minor differences included:

  • We spent more on online services due to switching plans on our analytics software (Segment and Mixpanel).
  • We stopped giving workshops in favour of coaching, reducing spending on workshops.

See more detail on historical spending.

2018 provisional baseline budget

Our “baseline” budget is what we’d need to spend if we don’t hire, but otherwise maintain our current operations and commitments, including standard salary increases and moving to a new office.

Prepared 14 Dec 2017, and still under review.

| | 2017 historical (Jan-Nov) | 2018 budget |
|---|---|---|
| Staff salaries, payroll and benefits (7 FTE) | $331,824 | $547,080 |
| Contractors (2.5 FTE) | $122,012 | $148,776 |
| Non-salary staff expenses (e.g. travel, conferences and food) | $35,946 | $71,860 |
| Office rent, supplies and utilities | $63,869 | $132,900 |
| Contribution to CEA’s expenses for operations | $45,000 | $50,000 |
| Legal and immigration | $14,205 | $37,200 |
| Marketing | $19,572 | $27,000 |
| Workshop expenses and student groups (exc. travel) | $4,429 | $0 |
| Computer software & hardware | $6,491 | $13,200 |
| Internet/web/hosting fees | $31,257 | $50,520 |
| Non-employee insurance | $1,211 | $600 |
| Books/subscriptions/reference | $1,407 | $1,440 |
| Other | $3,315 | $0 |
| Uncategorized expenditure | $5,544 | $2,400 |
| Foreign currency gains and losses | -$779 | $0 |
| Total | $685,305 | $1,082,976 |

Current financial situation

We expect to finish the year with around $1,350,000 net assets, or around 15 months’ reserves on our baseline budget. This compares to about $320,000 this time last year.

Outside of our fundraising rounds, we receive about $36,000 a year from small donors and book sales, which adds around half a month of reserves each year.

Expansion budget

Some ways we could expand faster than the baseline budget include:

  • Hiring additional staff. We’d like to have budget to hire up to 2.5 full-time staff at our average salary over the year, or the equivalent in terms of freelancers. The top role is an additional coach, either a generalist or a specialist depending on fit. We’d also consider hiring for research, web engineering / design, or outreach.

  • Increasing salaries 20-30%. This would put us closer to average research nonprofits in the Bay Area with $1m+ budget, and GiveWell in 2013 adjusted for the change in cost of living. This would make it easier to attract good staff (higher salaries helped with attracting candidates this year). It would also make it much easier for our staff to afford housing in central Berkeley when we open an office there in the summer, and enable other ways to save time.

  • An additional $25,000 per year for more marketing experiments.

This would approximately result in the following changes to the budget:

| | 2018 baseline | 2018 expansion | 2019 expansion |
|---|---|---|---|
| Staff and contractor salaries | $695,856 | $953,644 | $1,123,229 |
| Staff overheads, inc. office and non-salary expenses | $204,760 | $264,760 | $377,928 |
| Marketing | $27,000 | $52,000 | $52,810 |
| Total | $1,082,976 | $1,425,764 | $1,717,488 |

Fundraising targets

Last year, we raised enough to cover our expansion budget and end 2017 with 12 months’ reserves. This enabled us to not fundraise over 2017, without dropping below our reserves target of 12 months.

However, this approach meant that if we undershot our expansion budget, we’d end the year with over 12 months’ reserves, which is what happened. Given high discount rates in the community, it’s better to free up these funds for other projects.

So, this year we’d like to raise enough to cover our expansion budget for 2018 and end the year with 7 months’ reserves. To offset the slight decrease in financial security this entails, we intend to focus more on raising multi-year commitments to donate.

This leads to the following fundraising target of $1.02m.

| Item | Amount |
|---|---|
| Expansion budget 2018 | $1,425,764 |
| Expansion budget 2019 | $1,717,488 |
| Expected baseline income over 2018 | $36,000 |
| Approx. additional cash required to cover 2018 and 7/12 of 2019 | $2,391,632 |
| Expected cash on hand end Dec 2017 | $1,373,261 |
| Additional income needed | $1,018,371 |
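
A minimal sketch reproducing the arithmetic behind the table; the 7 months’ reserves are approximated as 7/12 of the 2019 expansion budget.

```python
# Fundraising target derivation from the figures above.
expansion_2018 = 1_425_764
expansion_2019 = 1_717_488
baseline_income_2018 = 36_000     # expected income outside fundraising rounds
cash_on_hand = 1_373_261          # expected cash at end of Dec 2017

# Cover the 2018 expansion budget and end 2018 with about 7 months' reserves,
# i.e. 7/12 of the 2019 expansion budget.
cash_required = expansion_2018 + (7 / 12) * expansion_2019 - baseline_income_2018
target = cash_required - cash_on_hand

print(f"Cash required: ${cash_required:,.0f}")    # ~$2,391,632
print(f"Fundraising target: ${target:,.0f}")      # ~$1,018,371
```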

If we make this target, it would enable us to fund all of our current operations while making significant efforts to expand.

Additional funding over this target gives us the option to expand even faster (e.g. hiring more staff, doing a trial with scholarships). The worst case scenario for additional funding is that we raise less next year, while maintaining more reserves, which increases our financial security and makes it easier to attract staff.

We expect to cover 33-50% of this target from Open Philanthropy, but we don’t want to overly depend on one source of funding, so are looking for other donors to match Open Philanthropy and cover the remainder.

Why donate to 80,000 Hours

We need more ambitious people working on the world’s most pressing problems – it’s among the biggest barriers to progress.

80,000 Hours tackles this bottleneck head-on by providing advice on how to enter these careers, and it’s the only organisation, in the effective altruism community or beyond, taking this approach.

So far, we’ve developed a flexible, scalable process (guide -> profiles -> coaching -> community) for getting more people into whichever problems are most pressing at the time, and the impact of this has likely already justified our costs many times over, even including substantial opportunity costs.

What’s more, we’ve grown this impact 23-fold in 4 years, averaging 120% growth per year (measured with IASPC p.a.). We’ve also grown our traffic and newsletter a similar amount, while our costs have only increased 5-fold.

Additional funding gives us the chance to continue this impact and growth. We’ve been funded by the world’s top startup accelerator, Y Combinator. We’ve also been funded by the leading foundation that takes an effective altruism approach, Open Philanthropy.

Longer-term, we have the chance to get far bigger, perhaps influencing a significant fraction of future leaders, and building the effective altruism community so that it can solve the world’s biggest problems.

Next steps

The easiest way to donate to 80,000 Hours is through the EA Funds:

Donate now

If you’re interested in making a large donation and have questions, please contact [email protected].

If you’d like to follow our progress during the year, subscribe to 80,000 Hours updates. See our previous evaluations.

Notes and references

  1. By this we mean that an additional person can usually contribute more by taking a job in the community than they could by earning to give. Read more

  2. The amount for 2017 is a little below our target of $2.1m, because about $100k was received in Dec 2016. These figures are also approximate because we have not yet finalised the 2017 accounts, and they depend on fluctuations between the pound and dollar.

  3. Future time should also be discounted.

  4. Though if you think the community is less talent constrained, it will also decrease our opportunity costs, reducing the overall impact on our cost-effectiveness.