
We can probably influence the world in the near future, as well as in the far future, and both near and far seem to matter.

The point I want to convey here is that, unless you are living through a particularly unusual time in your personal history, the influence you can have on the world in the next 2 years is the most important influence you will have in any future pair of years. To make that case I will use personal stories, as I did in my last post.

Let us assume for the time being that you care about the far future a substantial amount. This is a position that many, though not all, EAs have converged on after a few years of involvement with the movement. We are assuming this to steel-man the case against the importance of the next 2 years. If what matters is over 100 years into the future, what difference could 2015 to 2017 make?

Why what I do now matters relatively little anymore

I believe the difference to be substantial. When I look back at the sequence of transformations that happened to me, and that I caused, after being a visiting fellow at the old version of MIRI - the birth of the EA me - one event stands out as more causally relevant than, quite likely, all the others put together: giving a short TED talk on Effective Altruism and Friendly AI. It mattered a great deal. It didn't matter because it was especially good, or because it conveyed complex ideas to the general public - it didn't - or for whatever other reason a more self-serving version of me might want to pretend.

It mattered because it happened soon, because it was the first, and because I nagged the TED organizers about getting someone to talk about EA at the TED Global event (the event was framed by TED as a sort of pre-casting for the big TED Global 2013). TED 2013 was the one to which Singer was invited; I don't know whether there was a relation.
If I gave the same talk at the same event today, it would make only a minor dent - and it hasn't even been three years yet.

Fiction: suppose I now continue my life striving to be a good, or even a very good, EA: I do research on AGI, I talk to friends about charity, I even get, say, one very rich industrialist to give a lot of money to GiveWell-recommended charities at some point in late 2023. It is quite likely that none of this will ever surpass the importance of having made the movement grow bigger (and better) through that initial acceleration - that single, silly, one-night kick-start.

The leveraging power you can still have

Now think about the world in which Toby Ord and Will MacAskill did not start Giving What We Can. 

The world in which Bostrom did not publish Superintelligence, and in which Elon Musk, Bill Gates, and Steve Wozniak therefore had not yet turned to "our side".

The EA movement is growing in numbers, in resources, in power, and in access to people in positions of power. You can still change the derivative of the growth curve so it grows faster, like they could have done in the joke below.

But not for long. Soon we will have exhausted all the incredibly efficient derivative-increasing opportunities; only the merely efficient ones will remain.

Self-Improvement versus Effectiveness

It is commonly held among EAs that self-improvement is the route to becoming a maximally capable individual, and thus a maximally causally powerful one, so that you can do the maximum altruistic good. A lot of people's time goes into self-improvement.

Outreach and getting others to do work are neglected because of that, in my opinion. Working on self-improvement can be tremendously useful, but some people use it to enter a cocoon from which they believe they will emerge as butterflies much later. They may even be right, but much later may be too late for their altruism.

Doesn't this argument hold, at any point in time, for the next two years?

Yes. Unless you are, say, going to finish college in two years, or are repaying a debt so you can work on an FAI team three years from now, or something similarly unusual, most of your power lies in contacting the right people now, having ideas now, and getting people as good as you to join your cause now.

Recently I watched a GWWC pledge event where an expected one million dollars in future earnings was pledged. This was the result of a class on altruism given to undergrads at Berkeley (hooray Ajeya and Oliver, by the way). You can probably get someone to teach a course on EA, or do it yourself, now - and then all the consequences of those people becoming EAs now, rather than in five years or, for some people, never, are on you.

If it holds at any time, does it make sense?

Of course it does. What Singer did in 1972 has influenced us all more than anything he could do now. But he is alive now, he can't change the past, and he is doing his best with the next two years. 

The same is true of you now. In fact, if you have a budget of resources to allocate to effective altruism, I suggest you go all in during the next two years (or one year, if you like moving fast). Your time will never again be worth as many altruistic utilons. For some of us, I believe there will be a crossover point, a point at which your actions matter less than the effort you are willing to dedicate to them; that is your EA-retirement day. Where it falls depends on which other values you hold besides EA and on how old you are.

But the next two years are going to be very important, and starting to act on them now, going all in, seems to me like an extremely reasonable strategy even if there is a limited total amount of time or effort you intend to allocate to EA. If you have pledged to donate 10% of your time, for instance, donate the first 10% you can get hold of.

Exceptions

Many situations are candidate exceptions to this "biennial all-in" policy I'm suggesting. Before the end of death was fathomable, it would have been a bad idea to spend two years fighting it, as Bostrom suggests in his fable. If in two years you will finish some particularly important project that will enable you to do incredible things, and how well you do it won't matter much as long as you finish it (say you will get a law degree in two years), then maybe hold your breath. If you are an AI researcher who thinks the most important years for AI will be between 10 and 20 years from now, it may not be your time to go all in yet. There are many other cases.

But for everyone else, the next two years will be the most important future two years you will ever get to have as an active altruist. 

Make the best of them! 

Comments
Working on self-improvement can be tremendously useful, but some people use it to enter a cocoon from which they believe they will emerge as butterflies much later.

In my opinion, it's best to intermix self-improvement with working on object-level goals in order to make sure you are solving the right problems. Instead of spending all your time on self-improvement, maybe take some self-improvement time at the end of each day.

You seem to think that resources now are several times better than resources in a few years, which are presumably several times better than resources in a few more years, and so on. Let's say you think that it's a factor of 2 improvement every 2 years (it sounds like this understates your view).

If you endorsed this logic between 1954 and now, you would conclude that resources in 1954 are about a billion times more valuable than resources now, i.e. that having a few million dollars in 1954 is roughly as valuable as controlling all of the world's resources today, or that a group of a dozen people in 1954 wielded more influence than the whole world does today. This is conceivable, but would be pretty surprising.
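
As a rough back-of-the-envelope check on that figure (a sketch assuming the post is written in 2015 and a clean doubling every two years since 1954):

$$2^{(2015-1954)/2} = 2^{30.5} \approx 1.4 \times 10^{9}$$

i.e. roughly a billionfold difference, which is where the "billion times" above comes from.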

My claim is a little narrower than the one you correctly criticize.

I believe that for movements like EA, and for some other types of crucial consideration events (atomic bombs, FAI, perhaps the end of aging) there are windows of opportunity where resources have the sort of exponential payoff decay you describe.

I have high confidence that the EA window of opportunity is currently in force, so EAs as such are currently in this situation. I think it is possible that AI's window is currently open as well, though I'm far less confident in that. With Bostrom, I think that the "strategic considerations" or "crucial considerations" time window is currently open. I believe the atomic bomb time window was in full force in 1954, and I highly commend the actions of Bertrand Russell in convincing Einstein to sign the anti-bomb manifesto, just as today I commend the actions of those who brought about the anti-UFAI manifesto. This is one way in which what I intend to claim is narrower.

The other way is that all of this rests on a conditional: the assumption that EA as a movement is right. Not that it is metaphysically right, but right by some simpler definition: that in most ways history could unfold, people would look back and say EA was a good idea, just as we now say the Russell-Einstein manifesto was a good idea.

As for reasons to believe the EA window of opportunity is currently open, I offer the stories above (TED, Superintelligence, GWWC, and others...), the small size of the movement at the moment, the unusual level of tractability that charities have acquired in the last few years due to technological ingenuity, the globalization of knowledge - which substantially increases the scope of what you can do - and the fact that we have some, but not all, financial tycoons yet, etc.

As to the factor by which resource value decreases, I withhold judgement, but I will say the factor could go down a lot from what it currently is and the claim would still hold (which I tried to convey with the Singer 1972 example).

We have financial tycoons?? Then why is there still room for funding with AMF, GiveDirectly, SCI, and DwTW?? Presumably they're just flirting with us.

It's been less than two years and all the gaps have either been closed, or been kept open on purpose, which Ben Hoffman has been staunchly criticising.

But anyway, it has been less than 2 years and Open Phil has way more money than it knows what to do with.

QED.

It has been about 3 years, and only very specific talent still matters for EA now. Earning to Give to institutions is gone; only giving to individuals still makes sense.

It is possible that there will be full-scale replaceability of non-researchers in EA-related fields by 2020.

But only if, until then, we keep doing things!

Many tycoon personality types favour other charities where they're the main patron. This is pure speculation but others may want to leave room for typical individual donors, as these charities are particularly well suited to them.

A good test might be: is it easier for you to double your own effectiveness in expectation, or to create one more EA just as thoughtful and effective as yourself in expectation? In both cases, you are increasing the total capacity of the EA movement by the same amount. In both cases, this capacity can be reinvested in recruiting further EAs, self-improvement of individual EAs, object-level EA projects, etc.

By this test, your TED talk looks very attractive compared to almost any self-improvement effort you could do, since it created hundreds or thousands of EA equivalents to you on expectation. The equivalent achievement of improving yourself by a factor of 100 or 1000 seems quite difficult... if you are spending 1/4 of your time and energy working towards your EA goals, for instance, you can only improve by a factor of 4 at most by getting yourself to work harder. (Working smarter is another story. In fact, this post you wrote just now could be considered advice on how to work smarter. In general I'm more optimistic about opportunities to work smarter than work harder, because working harder is unlikely to get you more than a 4x multiplier in most cases.)
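
To make the accounting behind that test explicit (a simple sketch, treating total EA capacity as additive in "you-equivalents" $x$):

$$\underbrace{2x}_{\text{double your effectiveness}} = \underbrace{x + x}_{\text{recruit one EA as effective as you}}, \qquad \underbrace{x + Nx}_{\text{a talk creating } N \text{ equivalents}} \approx \underbrace{(N+1)\,x}_{\text{improving yourself } (N+1)\text{-fold}}$$

On this crude accounting, a talk that creates hundreds of you-equivalents is on par with a hundredfold-plus self-improvement, which is why the outreach option looks so much more attractive here.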

The world in which Bostrom did not publish Superintelligence, and therefore Elon Musk, Bill Gates, and Paul Allen didn't turn to "our side" yet.

Has Paul Allen come round to advocating caution and AI safety? The sources I can find right now suggest Allen is not especially worried.

http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near/

Thanks Diego!

Matt Wage's old post on this topic is relevant: https://80000hours.org/2012/04/the-haste-consideration/

Thumbs up. Love the use of stories.

Assumptions: exponential growth; the ability to build a movement without major landmark successes; that the quality of the sociological institutions within the movement won't matter much for growth and resilience further down the line; that it won't be incredibly valuable to have people in different places across the economy who can only be reached through dedicated time; that the kinds of projects you can do within 2 years have the same marginal return as the kinds of projects you can do within 10 years...

I agree with this about attracting people to the movement as a general principle, but I'm worried that a short-term focus blinkers us to some fantastic opportunities - which would in turn strengthen the movement as attractors / make us more interesting to outsiders.

Is this post missing part of it?

Thanks for noticing, fixed!
