
Tl;dr: I’m kicking off a push for public discussions about EA strategy that will be happening June 12-24. You’ll see new posts under this tag, and you can find details about people who’ve committed to participating and more below. 

Motivation and what this is(n’t)

I feel (and, from conversations in person and seeing discussions on the Forum, think that I am not alone in feeling) like there’s been a dearth of public discussion about EA strategy recently, particularly from people in leadership positions at EA organizations. 

To help address this, I’m setting up an “EA strategy fortnight” — two weeks where we’ll put in extra energy to make those discussions happen. A set of folks have already volunteered to post thoughts about major strategic EA questions, like how centralized EA should be or current priorities for GH&W EA.

This event and these posts are generally intended to start discussion, rather than give the final word on any given subject. I expect that people participating in this event will also often disagree with each other, and participation in this shouldn’t imply an endorsement of anything or anyone in particular.

I see this mostly as an experiment into whether having a simple “event” can cause people to publish more stuff. Please don't interpret any of these posts as something like an official consensus statement.

Some people have already agreed to participate

I reached out to people through a combination of a) thinking of people who had shared private strategy documents with me before that still had not been published b) contacting leaders of EA organizations, and c) soliciting suggestions from others. About half of the people I contacted agreed to participate. I think you should view this as a convenience sample, heavily skewed towards the people who find writing Forum posts to be low cost. Also note that I contacted some of these people specifically because I disagree with them; no endorsement of these ideas is implied. 

People who’ve already agreed to post stuff during this fortnight [in random order]:

  1. Habryka - How EAs and Rationalists turn crazy
  2. MaxDalton - In Praise of Praise
  3. MichaelA - Interim updates on the RP AI Governance & Strategy team
  4. William_MacAskill - Decision-making in EA
  5. Ardenlk - On reallocating resources from EA per se to specific fields
  6. Ozzie Gooen - Centralize Organizations, Decentralize Power
  7. Julia_Wise - EA reform project updates
  8. Shakeel Hashim - EA Communications Updates
  9. Jakub Stencel - EA’s success no one cares about
  10. lincolnq - Why Altruists Can't Have Nice Things
  11. Ben_West and 2ndRichter - FTX’s impacts on EA brand and engagement with CEA projects
  12. jeffsebo and Sofia_Fogel - EA and the nature and value of digital minds
  13. Anonymous – Diseconomies of scale in community building
  14. Luke Freeman and Sjir Hoeijmakers - Role of effective giving within EA
  15. kuhanj - Reflections on AI Safety vs. EA groups at universities
  16. Joey - The community wide advantages of having a transparent scope
  17. JamesSnowden - Current priorities for Open Philanthropy's Effective Altruism, Global Health and Wellbeing program
  18. Nicole_Ross - Crisis bootcamp: lessons learned and implications for EA
  19. Rob Gledhill - AIS vs EA groups for city and national groups
  20. Vaidehi Agarwalla - The influence of core actors on the trajectory and shape of the EA movement
  21. Renan Araujo - Thoughts about AI safety field-building in LMICs
  22. ChanaMessinger - Reducing the social miasma of trust
  23. particlemania - Being Part of Systems
  24. jwpieters - Thoughts on EA community building
  25. MichaelPlant - The Hub and Spoke Model of Effective Altruism
  26. Quadratic Reciprocity - Best guesses for how public discourse and interest in AI existential risk over the past few months should update EA's priorities
  27. OllieBase - Longtermism
  28. Peter Wildeford and Marcus_A_Davis - Past and future of Rethink Priorities

If you would like to participate

  1. If you are able to pre-commit to writing a post: comment below and I will add you to this list.
  2. If not: you can publish a post normally, and then tag your post with this tag.
  3. And include the following at the bottom of your post:[1]

This post is part of EA Strategy Fortnight. You can see other Strategy Fortnight posts here.

How to follow posts from this event

Posts will be tagged with this tag. As there is no formal posting schedule, you might want to subscribe to the tag to be notified when new posts get made.

If you want to start reading now, the Building Effective Altruism tag has a bunch of already-published posts on this subject.

  1. ^

    Thanks to @Vaidehi Agarwalla for suggesting people do this

Comments (26)



I had previously decided to work on EA community-building full-time, and have now mostly changed my mind. I want to write up my reasoning for this. I don't think this will be entirely relevant to general movement strategy, but I think it's worth making legible for others.

Great, thanks! I added you to the list

I think this is totally within scope and I'd personally find it interesting to read!

Just for the sake of feedback, I think this makes me personally less inclined to post the ideas and drafts I have been toying with because it makes me feel like they are going to be completely steamrolled by a flurry of posts by people with higher status than me and it wouldn't really matter what I said.

I don't know who your target demo here is, and it sounds like "flurry of posts by high status individuals" might have been your main intention anyway. However, please note that this doesn't necessarily help you very much if you are trying to cultivate more outsider perspectives.

In any case, you're probably right that this will lead to more discussion and I am interested to see how it shakes out. I hope you'll write up a review post or something to summarize how the event went, because it's going to be hard to follow that many posts about different topics and the corresponding discussion they each generate.

Thanks for the feedback!

It's not entirely clear to me how this shakes out. I agree it is the case that posts cannibalize attention from each other to some extent, so you posting at the same time as a popular post could detract attention from yours. However, when people are on the Forum to read one thing they often click around on other stuff when they are done/get bored, meaning that you get more attention when posting during a popular time.

For example, in this graph you can see that, at least for the past ~year, when there is a spike in attention on community posts (usually caused by an exogenous scandal), we see a much smaller but still positive spike in attention to noncommunity posts, implying that the people who came for the popular thing tend to spend a (smaller, but still net positive) amount of time reading less popular stuff.

My guess is that it's weakly beneficial for you to post when something else popular is going on, but I'm not sure.

(Also pragmatically I expect that people are going to procrastinate, so if you post in the next ~week you probably won't have much competition.)

Thanks for setting this up, Ben.

I wonder -- conditioned on several less well-known people expressing intent to post and preference for a special setup -- whether it would be worthwhile to announce a Fortnight Annex (June 26 to 29?) dedicated to less well-known voices, who could of course choose the main Fortnight if they preferred. Or you could identify ~2 specific days during the Fortnight on which you ask the more well-known people not to release their posts. People could get some of the intended benefit by posting early, but that strategy doesn't give them much lead time at all.

I definitely see how having, say, a Will MacAskill post drop an hour after a less well-known person's post could lead to the latter poster feeling (and maybe being) overshadowed.

Can I encourage you to organize this, if you think it would be useful? Seems like the kind of thing which should be grassroots organized anyway, and it sounds like you have a better vision for it than I do.

I'm not convinced it would be net positive this time in the absence of several less well-known people expressing intent to post and preference for a special setup. I think there would be some downsides to each way the idea could be implemented a few days prior to the start, so I'd want to see specific evidence that less well-known people would be more likely to post before endorsing a special setup this time.

Documenting the vision, my theory was that setting aside time for lesser-known voices (which basically means asking the well-known voices not to post at certain times) would mitigate concerns by less well-known voices that their contributions would "be completely steamrolled by a flurry of posts by people with higher status." (quoting Jacob, the original commenter above).

I agree that the effects here shake out in different directions -- though I hypothesize that the positive effect on engagement with a given post comes more from general awareness of something bringing people to the Forum (e.g., there's a new scandal, it's Strategy Fortnight, etc.). In contrast, I speculate that the negative "cannibalizing" effect comes more from specific posts (look, there are fresh posts by X, Y, and Z with active engagement). Thus, I speculate that -- by judicious management of post timing -- we could capture much of the positive effect of the special event bringing in readers while mitigating the effect of prominent voices crowding other voices out. Of course, I could be wrong!

After thinking about it some more, it would probably be best to set aside space for lesser-known voices either at the beginning of an event or in a multi-day interlude in the middle of the event. Setting aside time at the very end of the event risks people having already had their fill of strategy talk; setting random days aside offers relatively limited isolation. However, most people who just learned about Strategy Fortnight wouldn't be ready to publish in the first few days, and I think it's too late to ask people who have already agreed to write for the event not to publish their post for a multi-day period. 

So I think the best ways to test/implement the idea are off the table for this go-round.

Yeah, I would frame the event as "this is a topic we are going to be discussing, now is the time to pitch in"

This makes sense, but it's also likely that the comments (and other engagement) on any one post from a high status individual are many times those of the median forum post. So it's unclear how this nets out. OTOH, these posts being clumped might also mean they compete for attention.

One interesting aspect of this experiment is that it isn't a competition and there are no prizes

I hadn't thought about that before, I think this is a great point!

There were contests in the recent past. They haven't effected much practical change. My impression within effective altruism is that they were appreciated as an intellectual exercise, but that there isn't faith that another contest like that will provoke the desired reforms.

Some of the public criticism of EA I saw a few months ago was that the criticism contest was meant only to attract the kind of criticism the leadership of EA would want to hear. That criticisms of EA on a fundamental level were relegated to outside media was taken as a sign that EA-sponsored self-criticism was a form of controlled and paid opposition. I'm personally ambivalent about that perception, though, suffice to say, the criticism contest with prizes doesn't appear to have shifted the outside perception that there isn't much of a chance EA as a movement will reform in the face of criticism.

Hosting competitions like that was worth a try. Yet this event is worth a try as well. Many of the individuals participating in this event have a role at CEA or another organization affiliated with it. I've noticed there are 5-10 other leading figures with roles at EA-affiliated organizations who have agreed to participate in this event but haven't commented here. That means they were privately invited to participate before this event was publicly announced.

I imagine several leaders of various leading EA-affiliated organizations (e.g., Joey Savoie, Oliver Habryka, Peter Wildeford & Marcus Davis, etc.) whose organizations already had annual budgets in the hundreds of thousands of dollars wouldn't have had the time, a need for extra hard-to-come-by funds, or a need for an elevated platform to get attention from other leaders. They already had the means to have their criticisms taken seriously without entering a contest. That's why they wouldn't have bothered to participate in the criticism contest before.

Yet they've agreed to participate in this event after being personally and privately invited. I assume they wouldn't have agreed if they felt no significant change in EA strategy could be provoked.

Oh, excited to learn this is happening! I would write something. Most likely: a simplified and updated version of my post from a couple of weeks back (What is EA? How could it be reformed?). Working title: The Hub and Spoke Model of Effective Altruism.

That said, in the unlikely event someone wants to suggest something I should write, I'd be open to suggestions. You can comment below or send me a private message. 

Great, thanks! I added you to the list

I like the notion that we should have bursts of discussion on this. 

I will run 1-3 Polis polls where people can try to find consensus statements on these topics.

I will probably be publishing a post on my best guesses for how public discourse and interest in AI existential risk over the past few months should update EA's priorities: what things seem less useful now, what things seem more useful, what things were surprising to me about the recent public interest that I suspect are also surprising to others. I will be writing this post as an EA and AI safety random, with the expectation that others who are more knowledgeable will tell me where they think I'm wrong.  

Great, thanks! I added you to the list

Following this for sure!

In line with the EA Strategy Fortnight, we at EA Anywhere have decided to center this month's discussion around major strategic EA questions and host two virtual events this Sunday for different time zones:

Thanks for putting this together! I've written a post for this series about improving EA communications related to disability. 

I have a short post about longtermism that this post prompted me to finish and publish; happy to pre-commit.

I don't suppose there's a way to tag shortforms with this? 

Looking forward to reading some interesting thoughts. :)
