"EA-Adjacent" now I guess.
🔸 10% Pledger.
Likes pluralist conceptions of the good.
Dislikes Bay Culture being in control of the future.
I think this is downstream of a lot of confusion about what 'Effective Altruism' really means, and I realise I don't have a good definition any more. In fact, that all of the below can be criticised goes some way to explaining why EA gets seemingly infinite criticism from all directions.
Because in many ways I don't count as EA based off the above. I certainly feel less like one than I have in a long time.
For example:
I think a lot of EAs assume that OP shares a lot of the same beliefs they do.
I don't know if this refers to some gestalt 'belief' that OP might have, or Dustin's beliefs, or some kind of 'intentional stance' regarding OP's actions. While many EAs share some beliefs (I guess), there's also a whole range of variance within EA itself, and the fundamental issue is that I don't know if there's something that can bind it all together.
I guess I think the question should be less "public clarification on the relationship between effective altruism and Open Philanthropy" and more "what does 'Effective Altruism' mean in 2025?"
I mean, I just don't take Ben to be a reasonable actor regarding his opinions on EA? I doubt you'll see him open up and fully explain a) who the people he's arguing with are, or b) what the explicit change in EA to an "NGO patronage network" was, with names, details, and public evidence, or show a willingness to change his mind in response to counter-evidence.
He seems to have been involved with Leverage Research, maybe in the original days?[1] There was a big falling out there, and many people linked to original Leverage hate "EA" with the fire of a thousand burning suns. Then he linked up with Samo Burja at Bismarck Analysis and also with Palladium, which definitely links him to the emerging Thielian tech-right, kinda what I talk about here. (Ozzie also had a good LW comment about this here.)
In the original tweet Emmett Shear replies, and then it's spiralled into loads of fractal discussions, and I'm still not really clear what Ben means. Maybe you'd get more clarification in Twitter DMs than in a public argument where he'll want to dig in on his position?
For the record, a double Leverage & Vassar connection seems pretty disqualifying to me - especially as I'm very Bay-sceptical anyway
I think the theory of change here is that the Abundance Agenda taking off in the US would provide an ideological frame for the Democratic Party to both a) get competitive in the races it needs to win power in the Executive & Legislature, and b) have a framing that allows it to pursue good policies when in power, which then unlocks a lot of positive value elsewhere
It also answers the 'why just the US?' question, though that seemed kind of obvious to me
And as for there being no cost-effectiveness calculation: it seems that this is the kind of systemic change many people in EA want to see![1] And it's very hard to get accurate cost-effectiveness analyses for those. But again, I don't know if that's also being too harsh on OP, as many longtermist organisations don't seem to publicly publish their CEAs apart from general reasoning along the lines of "the future could be very large and very good"
Maybe it's not the exact flavour/ideology they want to see, but it does seem 'systemic' to me
I think one crux here is what to do in the face of uncertainty.
You say:
If you put a less unreasonable (from my perspective) number like 50% that we’ll have AGI in 30 years, and 50% we won’t, then again I think your vibes and mood are incongruent with that. Like, if I think it’s 50-50 whether there will be a full-blown alien invasion in my lifetime, then I would not describe myself as an “alien invasion risk skeptic”, right?
But I think sceptics like titotal aren't anywhere near 5% - in fact, they deliberately do not have a number. And when they have low credences in the likelihood of rapid, near-term, transformative AI progress, they aren't saying "I've looked at the evidence for AI Progress and am confident in putting it at less than 1%" or whatever; they're saying something more like "I've looked at the arguments for rapid, transformative AI Progress and it seems so unfounded/hype-based to me that I'm not even giving it table stakes"
I think this is a much more realistic form of bounded rationality. Sure, in some perfect Bayesian sense you'd want to assign every hypothesis a probability and make sure they all sum to 1, etc etc. But in practice that's not what people do. I think titotal's experience (though obviously this is my interpretation, get it from the source!) is that they see a bunch of wild claims X, they do a spot check on their field of materials science, and they come away so unimpressed that they relegate the "transformative near-term LLM-based AGI" hypothesis to 'not a reasonable hypothesis'
To them, I feel, it's less like someone asking "don't put the space heater next to the curtains because it might cause a fire" and more like "don't keep the space heater in the house because it might summon the fire demon Asmodeus, who will burn the house down". To titotal and other sceptics, the evidence presented is not commensurate with the claims made.
(For reference, while previously also sceptical, I actually have become a lot more concerned about transformative AI over the last year based on some of the results, but that is from a much lower baseline, and my worries are based more around politics/concentration of power than loss of control to autonomous systems)
I appreciate the concern that you (and clearly many other Forum users) have, and I do empathise. Still, I'd like to present a somewhat different perspective to others here.
EA seems far too friendly toward AGI labs and feels completely uncalibrated to the actual existential risk (from an EA perspective)
I think that this implicitly assumes that there is such a thing as "an EA perspective", but I don't think this is a useful abstraction. EA has many different strands, and in general seems a lot more fractured post-FTX.
e.g. You ask "Why aren't we publicly shaming AI researchers every day?", but if you're an AI-sceptical EA working in GH&D, that seems entirely useless to your goals! And if you take 'we' to mean all EAs already convinced of AI doom, then that's assuming the conclusion; whether there is an action-significant amount of doom is the question here.
Why are we friendly with Anthropic? Anthropic actively accelerates the frontier, currently holds the best coding model, and explicitly aims to build AGI—yet somehow, EAs rally behind them? I’m sure almost everyone agrees that Anthropic could contribute to existential risk, so why do they get a pass? Do we think their AGI is less likely to kill everyone than that of other companies?
Anthropic's alignment strategy, at least the publicly facing version, can be found here.[1] I think Chris Olah's tweets about it, found here, include one particularly useful chart:
The probable cruxes here are that 'Anthropic', or various employees there, are much more optimistic about the difficulty of AI safety than you are. They also likely believe that empirical feedback from actual Frontier models is crucial to a successful science of AI Safety. I think if you hold these two beliefs, then working at Anthropic makes a lot more sense from an AI Safety perspective.
For the record, the more technical work I've done, and the more understanding I have of AI systems as they exist today, the more 'alignment optimistic' I've become, and the more sceptical I get of OG-MIRI-style alignment work, or of AI Safety work done in the absence of actual models. We must have contact with reality to make progress,[2] and I think the AI Safety field cannot update on this point strongly enough. Beren Millidge has really influenced my thinking here, and I'd recommend reading Alignment Needs Empirical Evidence and other blog posts of his to get this perspective (which I suspect many people at Anthropic share).
Finally, pushing the frontier of model performance isn't a priori bad, especially if you don't accept MIRI-style arguments. Like, I don't see Sonnet 3.7 as increasing the risk of extinction from AI. In fact, it seems to be both a highly capable model and one that's very well aligned according to Anthropic's HHH criteria. All of my experience using Claude and engaging with the research literature about the model has pushed my distribution of AI Safety difficulty towards the 'Steam Engine' level in the chart above, rather than the P vs NP/Impossible level.
Spending time in the EA community does not calibrate me to the urgency of AI doomerism or the necessary actions that should follow
Finally, on the 'necessary actions' point: even if we had a clear empirical understanding of what the current p(doom) is, there are no clear necessary actions. There are still lots of arguments to be had here! See, for example, Matthew Barnett arguing in these comments that one can make utilitarian arguments for AI acceleration even in the presence of AI risk,[3] or Nora Belrose arguing that pause-style policies will likely be net-negative. You don't have to agree with either of these, but they do mean that there aren't clear 'necessary actions', at least from my PoV.
Of course, if one has completely lost trust in Anthropic as an actor, then this isn't useful information to you at all. But I think that's conceptually a separate problem, because I think I have given information that answers the questions you raise, even if perhaps not to your satisfaction.
Theory will only take you so far
Though this isn't what motivates Anthropic's thinking afaik
To the extent that word captures the classic 'single superintelligent model' form of risk
I have some initial data on the popularity and public/elite perception of EA that I wanted to write into a full post, something along the lines of What is EA's reputation, 2.5 years after FTX? I might combine my old idea of a Forum data analytics update into this one to save time.
My initial data/investigation into this question ended up being a lot more negative than other surveys of EA. The main takeaways are:
Doing this research did contribute to me being a lot more gloomy about the state of EA, but I think I do want to write this one up to make the knowledge more public, and to allow people to poke holes in it if possible.
To me this signals more values-based conflict, which makes it harder to find Pareto-improving ways to co-operate with other groups
I do want to write something along the lines of "Alignment is a Political Philosophy Problem"
My takes on AI, and the problem of x-risk, have been in flux over the last 1.5 years, but they do seem to be more and more focused on the idea of power and politics, as opposed to finding a mythical 'correct' utility function for a hypothesised superintelligence. Making TAI/AGI/ASI go well therefore falls into the reference class of 'principal-agent problem'/'public choice theory'/'social contract theory' rather than 'timeless decision theory'/'coherent extrapolated volition'. The latter two are poor answers to an incorrect framing of the question.
Writing that influenced me on this journey:
I also think this view helps explain the huge range of backlash that AI Safety received over SB1047 and after the awfully botched OpenAI board coup. They were both attempted exercises in political power, and the pushback often criticised that exercise of power rather than engaging with the 'object level' risk arguments. I increasingly think that this is not an 'irrational' response but a perfectly reasonable one, and that "AI Safety" needs to pursue more co-operative strategies that credibly signal legitimacy.
I think the downvotes these got are, in retrospect, a poor sign for epistemic health
My previous attempt at predicting what I was going to write got 1/4, which ain't great.
This is partly the planning fallacy, partly real life being a lot busier than expected (with Forum writing being one of the first things to drop), and partly an increasing feeling of gloom and disillusionment with EA, and so not having the same motivation to write or contribute to the Forum as I did previously.
For the things that I am still thinking of writing, I'll add comments to this post separately, so that votes and comments can be attributed to each idea individually.
I don't really get the framing of this question.
I suspect that, for any increment of time one could take through EA's existence, there would have been more 'harm' done in the rest of the world as a whole during that time. EA simply isn't big enough to counteract the moral actions of the rest of the world. Wild animals suffer horribly, people constantly die of preventable diseases, and formal wars and violent struggles affect the lives of millions. The sheer scale of the world outweighs EA many, many times over.
So I suspect you're making a more direct comparison to Musk/DOGE/PEPFAR? But again, I feel like anyone wielding the awesome executive power of the United States Government should expect to have larger impacts on the world than EA.