
We can probably influence the world in the near future as well as in the far future, and both near and far seem to matter.

The point I want to convey here is that, unless you are living through a particularly unusual time in your personal history, the influence you can have on the world in the next two years is the most important influence you will have in any future pair of years. To make that case I will use personal stories, as I did in my last post.

Let us assume for the time being that you care a substantial amount about the far future. This is a position that many, though not all, EAs have converged on after a few years of involvement with the movement. We are assuming this to steel-man the case against the importance of the next two years: if what matters is over a hundred years into the future, what difference could 2015 to 2017 make?

Why what I do now, relatively speaking, doesn't matter anymore

I believe the difference is large. When I look back at the sequence of transformations that happened to me, and that I caused, after being a visiting fellow at the old version of MIRI - the birth of the EA me - one event stands out as more causally relevant than, quite likely, all the others put together: giving a short TED talk on Effective Altruism and Friendly AI. It mattered a great deal. It didn't matter because it was especially good, or because it conveyed complex ideas to the general public - it didn't - or for whatever other reason a more self-serving version of me might want to pretend.

It mattered because it happened soon, because it was the first, and because I nagged the TED organizers about getting someone to talk about EA at the TED Global event (the event was framed by TED as a sort of pre-casting for the big TED Global 2013). TED 2013 was the one to which Singer was invited; I don't know if there was a connection.
If I gave the same talk at the same event today, it would make only a minor dent, and it hasn't even been three years.

Fiction: Suppose now that I continue my life striving to be a good, or even a very good, EA. I do research on AGI, I talk to friends about charity, I even get, say, one very rich industrialist to give a lot of money to GiveWell-recommended charities at some point in late 2023. It is quite likely that none of this will ever surpass the importance of making the movement grow bigger (and better) through that initial acceleration - that single silly one-night kick-start.

The leveraging power you can still have

Now think about the world in which Toby Ord and Will MacAskill did not start Giving What We Can. 

The world in which Bostrom did not publish Superintelligence, and therefore Elon Musk, Bill Gates, and Steve Wozniak didn't turn to "our side" yet. 

The EA movement is growing in numbers, in resources, in power, and in access to people in positions of power. You can still change the derivative of the growth curve so that it grows faster, as they could have done in the joke below:

But not for long. Soon we will have exhausted all the incredibly efficient derivative-increasing opportunities. We will have only the merely efficient ones.

Self-Improvement versus Effectiveness

It is commonly held among EAs that self-improvement is the route to becoming a maximally powerful individual, which then means becoming a maximally causally powerful individual who can do the maximum altruistic good. A lot of people's time goes to self-improvement.

Outreach and getting others to do work are, in my opinion, neglected because of that. Working on self-improvement can be tremendously useful, but some people use it to enter a cocoon from which they believe they will emerge as butterflies much later. They may even be right, but much later may be too late for their altruism.

Doesn't this argument hold, at any point in time, for the next two years?

Yes. Unless you are, say, going to finish college in two years, or repaying a debt so you can join the FAI team three years from now, or something similarly unusual, most of your power lies in contacting the right people now, having ideas now, and getting people as good as you to join your cause now.

Recently I watched a GWWC pledge event where an expected one million dollars in future earnings was pledged. This was the result of a class on altruism given to undergrads at Berkeley (hooray Ajea and Oliver, by the way). You can probably get someone to teach a course on EA now, or teach one yourself, and then all the consequences of those people becoming EAs now, rather than in five years, or for some people never, are on you.

If it holds at any time, does it make sense?

Of course it does. What Singer did in 1972 has influenced us all more than anything he could do now. But he is alive now, he can't change the past, and he is doing his best with the next two years. 

The same is true of you now. In fact, if you have a budget of resources to allocate to effective altruism, I suggest you go all in during the next two years (or one year, if you like moving fast). Your time will never again be worth the same amount of altruistic utilons. For some of us, I believe there will be a crossover point, a point at which your actions matter less than the effort you want to dedicate to them: that is your EA-retirement day. When it comes depends on which other values you have besides EA and on how old you are.

But the next two years are going to be very important, and starting to act on them now, going all in, seems to me an extremely reasonable strategy even if you intend to allocate only a limited number of total hours or a limited amount of effort to EA. If you have pledged to donate 10% of your time, for instance, donate the first 10% you can get hold of.

Exceptions

Many occasions are candidate exceptions to this "biennial all-in" policy I'm suggesting. Before the end of death was fathomable, it would have been a bad idea to spend two years fighting it, as Bostrom suggests in his fable. If in two years you will finish some particularly important project that will enable you to do incredible things, and it won't matter much how well you do it as long as you do it (say you will get a law degree in two years), then maybe hold your breath. If you are an AI researcher who thinks the most important years for AI will be between 10 and 20 years from now, it may not yet be your time to go all in. There are many other cases.

But for everyone else, the next two years will be the most important future two years you will ever get to have as an active altruist. 

Make the best of them! 

Comments (17)



Working on self-improvement can be tremendously useful, but some people use it to enter a cocoon from which they believe they will emerge as butterflies much later.

In my opinion, it's best to intermix self-improvement with working on object-level goals in order to make sure you are solving the right problems. Instead of spending all your time on self-improvement, maybe take some self-improvement time at the end of each day.

You seem to think that resources now are several times better than resources in a few years, which are presumably several times better than resources in a few more years, and so on. Let's say you think that it's a factor of 2 improvement every 2 years (it sounds like this understates your view).

If you endorsed this logic between 1954 and now, you would conclude that resources in 1954 are about a billion times more valuable than resources now, i.e. that having a few million dollars in 1954 is roughly as valuable as controlling all of the world's resources today, or that a group of a dozen people in 1954 wield more influence than the whole world does today. This is conceivable, but would be pretty surprising.
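The comment's arithmetic can be sanity-checked directly. This is a rough sketch that assumes the post dates to 2015 (the exact year only shifts the exponent slightly) and takes the stipulated factor-of-2 improvement every 2 years at face value:

```python
# Rough check of the compounding claim: if resources are worth a factor
# of 2 more every 2 years, going back from 2015 (an assumed "now") to 1954,
# how much more valuable were 1954 resources?
years = 2015 - 1954          # 61 years between the two dates
doublings = years / 2        # one doubling per 2 years -> 30.5 doublings
factor = 2 ** doublings      # ~1.5 billion, i.e. "about a billion times"
print(f"{factor:.2e}")
```

So the "about a billion times" figure in the comment follows straightforwardly from the stated growth rate.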

My claim is a little narrower than the one you correctly criticize.

I believe that for movements like EA, and for some other types of crucial consideration events (atomic bombs, FAI, perhaps the end of aging) there are windows of opportunity where resources have the sort of exponential payoff decay you describe.

I have high confidence that the EA window of opportunity is currently in force, so EAs as such are currently in this situation. I think it is possible that AI's window is currently open as well; I'm far less confident in that. With Bostrom, I think that the "strategic considerations" or "crucial considerations" time window is currently open. I believe the atomic bomb time window was in full force in 1954, and I highly commend Bertrand Russell's action in convincing Einstein to sign the anti-bomb manifesto, just as today I commend the actions of those who brought about the anti-UFAI manifesto. This is one way in which the claim I intend to make is narrower.

The other way is that all of this rests on a conditional: assuming that EA as a movement is right. Not that it is metaphysically right, but right by some simpler standard, where in most of the ways history could unfold, people would look back and say that EA was a good idea, just as we say today that the Russell-Einstein manifesto was a good idea.

As for reasons to believe the EA window of opportunity is currently open, I offer the stories above (TED, Superintelligence, GWWC, and others), the small size of the movement at the moment, the unusual level of tractability that charities have acquired in the last few years thanks to technological ingenuity, the globalization of knowledge - which substantially increases the scope of what you can do - and the fact that we have some, but not all, financial tycoons yet.

As to the factor by which resource value decreases, I withhold judgement, but I will say that the factor could go down a lot from what it currently is and the claim would still hold (which I tried to convey with the Singer 1972 example).

We have financial tycoons?? Then why is there still room for funding with AMF, GiveDirectly, SCI, and DwTW?? Presumably they're just flirting with us.

It's been less than two years and all the gaps have either been closed or been kept open on purpose, which Ben Hoffman has been staunchly criticising.

But anyway, it has been less than 2 years and Open Phil has way more money than it knows what to do with.

QED.

It has been about 3 years, and only very specific talent still matters for EA now. Earning to give to institutions is gone; only giving to individuals still makes sense.

It is possible that there will be full-scale replaceability of non-researchers in EA-related fields by 2020.

But only if, until then, we keep doing things!

Many tycoon personality types favour other charities where they're the main patron. This is pure speculation, but others may want to leave room for typical individual donors, as these charities are particularly well suited to them.

A good test might be: is it easier for you to double your own effectiveness in expectation, or to create 1 more EA just as thoughtful and effective as yourself in expectation? In both cases, you are increasing the total capacity of the EA movement by the same amount. In both cases, this capacity can be reinvested in recruiting further EAs, self-improvement of individual EAs, object-level EA projects, etc.

By this test, your TED talk looks very attractive compared to almost any self-improvement effort you could make, since it created hundreds or thousands of EA-equivalents of you in expectation. The equivalent achievement of improving yourself by a factor of 100 or 1000 seems quite difficult: if you are spending 1/4 of your time and energy working towards your EA goals, for instance, you can only improve by a factor of 4 at most by getting yourself to work harder. (Working smarter is another story. In fact, this post you wrote just now could be considered advice on how to work smarter. In general I'm more optimistic about opportunities to work smarter than to work harder, because working harder is unlikely to get you more than a 4x multiplier in most cases.)

The world in which Bostrom did not publish Superintelligence, and therefore Elon Musk, Bill Gates, and Paul Allen didn't turn to "our side" yet.

Has Paul Allen come round to advocating caution and AI safety? The sources I can find right now suggest Allen is not especially worried.

http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near/

Thanks Diego!

Matt Wage's old post on this topic is relevant: https://80000hours.org/2012/04/the-haste-consideration/

Thumbs up. Love the use of stories.

This assumes exponential growth; that a movement can be built without major landmark successes; that the quality of the sociological institutions within the movement won't matter much for growth and resilience further down the line; that it won't be incredibly valuable to have people in different places across the economy who can only be reached through dedicated time; and that the kinds of projects you can do within 2 years have the same marginal return as the kinds of projects you can do within 10 years...

I agree with this about attracting people to the movement as a general principle, but I'm worried that a short-term focus blinkers us to some fantastic opportunities - which would in turn strengthen the movement as attractors / make us more interesting to outsiders.

Is this post missing part of it?

Thanks for noticing, fixed!
