Bio

Pro-pluralist, pro-bednet, anti-Bay EA. 🔸 10% Pledger.

Sequences
3

Against the overwhelming importance of AI Safety
EA EDA
Criticism of EA Criticism

Comments
315

a) Re: Twitter, almost tautologically true, I'm sure. I do think it's a bit of signal, though, just a very noisy one. It's also one of the few ways for non-Bay people such as myself to try to get a sense of the pulse of the Bay, though it's obviously very prone to error, and perhaps not worth doing at all.

b) I haven't seen those comments,[1] could you point me to them or to where they happened? I know there was a bunch of discussion around their concerns about the Biorisk paper, but I'm particularly concerned with the "Many AI Safety Orgs Have Tried to Criminalize Currently-Existing Open-Source AI" article, which I haven't seen good pushback to. Again, I welcome being shown I'm wrong on this.

  1. ^

    Ok, I've seen Ladish and Kokotajlo offer to talk, which is good; I would have liked 1a3orn to take them up on that offer for sure.

It's an unfortunate naming clash, there are different ARC Challenges:

ARC-AGI (Chollet et al) - https://github.com/fchollet/ARC-AGI

ARC (AI2 Reasoning Challenge) - https://allenai.org/data/arc

The numbers being reported here are for the second of the two.
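To make the naming clash concrete, here's a minimal sketch (purely illustrative; the field names and example content reflect my understanding of the two public formats rather than anything quoted from either repo) of roughly what a single task looks like in each benchmark:

```python
# Illustrative only: the rough shape of one task from each benchmark.
# Field names are assumptions about the public formats, not copied from either repo.

# AI2 Reasoning Challenge (ARC): text-based multiple-choice science questions.
# A strong LLM only has to pick a letter, which plays to its strengths.
ai2_arc_example = {
    "question": "Which property of a mineral can be determined just by looking at it?",
    "choices": {"A": "luster", "B": "mass", "C": "weight", "D": "hardness"},
    "answerKey": "A",
}

# ARC-AGI (Chollet): infer a grid-to-grid transformation from a few demonstration
# pairs, then apply it to a fresh input grid. No text, no answer options.
arc_agi_example = {
    "train": [
        {"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]},
        {"input": [[2, 0], [0, 2]], "output": [[0, 2], [2, 0]]},
    ],
    "test": [
        {"input": [[3, 0], [0, 3]], "output": [[0, 3], [3, 0]]},  # the model must produce this grid
    ],
}
```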

LLMs (at least without scaffolding) still do badly on ARC-AGI, and I'd wager Llama 405B still doesn't do well on it either. It's telling that all the big labs release the 95%+ numbers they get on AI2-ARC, and not whatever default result they get on ARC-AGI...

(Or in general, reporting benchmarks where they can go "OMG SOTA!!!!" rather than helpfully advancing the general understanding of what models can do and how far they generalise. Basically, traditional benchmark cards should be seen as the AI equivalent of "IN MICE".)

Folding in Responses here

@thoth hermes (or https://x.com/thoth_iv - if anyone who's Twitter friends with them can pass this along, please go ahead).[1] I'm responding to this thread here. I am not saying "that EA is losing the memetic war because of its high epistemic standards" - in fact, quite the opposite re: AI Safety - and maybe it's losing because of a misunderstanding of how politics works and a lack of care about the social perception of the movement. My reply to Iyngkarran below fleshes this out a bit more, but if there's a way for you to get in touch directly, I'd love to clarify what I think and hear more of your thoughts. I think I was coming from a similar place to Richard Ngo, and many of his comments on the LessWrong thread here very much chime with my own point of view. What I am trying to push for is the AI Safety movement reflecting on losing ground memetically and then asking 'why is that? what are we getting wrong?' rather than doubling down into lowest-common-denominator communication. I think we actually agree here? Maybe I didn't make that clear enough in my OP though.

@Iyngkarran Kumar - Thanks for sharing your thoughts, but I must say that I disagree. I don't think that the epistemic standards are working against us by being too polite - quite the opposite. I think the epistemic standards in AI Safety have been too low relative to the attempts to wield power. If you are potentially going to criminalise existing Open-Source models,[2] you had better bring the epistemic goods. And for many people in the AI Safety field, the goods have not been brought (which is why I see people like Jeremy Howard, Sara Hooker, Rohit Krishnan etc. getting increasingly frustrated by the AI Safety field). This is on the field of AI Safety, imo, for not being more persuasive. If the AI Safety field was right, the arguments would have been more convincing. And while I think it's good for Eliezer to say what he thinks accurately, the 'bomb the datacenters'[3] piece has probably been harmful for AI Safety's cause, and things like it are very liable to turn people away from supporting AI Safety. I also don't think it's good to present it as a claim of 'what we believe', as I don't really agree with Eliezer on much.

(r.e. inside vs outside game, see this post from Holly Elmore)

@anormative/ @David Mathers - Yeah, it's difficult to pin down the exact hypothesis here, especially given preference falsification. I'm pretty sure SV is 'liberal' overall, but I wouldn't be surprised if the Trump % is greater than in 2016 and 2020, and the support definitely seems a lot more open this time - e.g. a16z and Musk openly endorsing Trump, or Sequoia Capital partners claiming that Biden dropping out was worse than the Jan 6th riot. Things seem very different this time around, different enough to be worth paying attention to.

-    -    -    -    -    -    -    -    -    -    -    -    

Once again, if you disagree, I'd love to actually hear why. Up/down voting is a crude feedback tool, and discussion of ideas leads to much quicker sharing of knowledge. If you want to respond but don't want to do so publicly, then by all means please send a DM :)

  1. ^

    I don't have Twitter and think it'd be harmful for my epistemic & mental health if I did get an account and become immersed in 'The Discourse'

  2. ^

    This piece from @1a3orn is excellent, and to me the absence of good counterarguments to it is evidence of the absence of said arguments. (tl;dr - AI Safety people, engage with 1a3orn more!)

  3. ^

    I know that's not what it literally says but it's what people know it as

Quick[1] thoughts on the Silicon Valley 'Vibe-Shift'

I wanted to get this idea out of my head and into a quick-take. I think there's something here, but there's a lot more to say, and I really haven't done the in-depth research for it. I had an idea for a longer post on this, but honestly, diving into it more deeply than I have here isn't a good use of my life, I think.

The political outlook in Silicon Valley has changed.

Since the assassination attempt on President Trump, the mood in Silicon Valley has changed. There have been open endorsements, e/acc has claimed political victory, and lots of people have noticed the 'vibe shift'.[2] I think that, rather than this being a change in opinions, it's more that the event allowed a preference cascade to begin - and at least in Silicon Valley (if not yet reflected in national polling), it has happened.

So it seems that a large section of Silicon Valley is now openly and confidently supporting Trump, and is to a greater or lesser extent aligned with the a16z/e-acc worldview;[3] we know it has already reached the ears of VP candidate JD Vance.

How did we get here?

You could probably write a book on this, so this is a highly opinionated take. But I think this is somewhat, though not exclusively, an own goal of the AI Safety movement.

  • As ChatGPT starts to bring AI, and AI Safety, into the mainstream discourse, the e/acc counter-movement begins. It positions itself in opposition to effective altruism, especially in the wake of SBF.
  • Guillaume Verdon, under the alias "Beff Jezos", realises the memetic weakness of the AI Safety movement and launches a full memetic war against it. Regardless of his rightness or wrongness, you do, to some extent, have to hand it to him. He's like a right-wing Émile Torres: ambitious, relentless, and driven by ideological zeal against a hated foe.
  • Memetic war is total war. This means nuance dies so the message can spread. I don't know whether, for example, Marc Andreessen actually thinks antimalarial bednets are a 'triple threat' of badness, but it's a war and you don't take prisoners. Does Beff think that people running a uni-group session on Animal Welfare are 'basically terrorists'? I don't know. But EA is the enemy, and the enemy must be defeated, and the war is total.
  • The OpenAI board fiasco is, I think, a critical moment here. It doesn't matter what the actual reasoning turned out to be; it was perceived as 'a doomer coup' and it did radicalise the Valley. In his recent post Richard Ngo called on the AI Safety movement to show more legitimacy and competence. The board fiasco torpedoed my trust in the legitimacy and competence of many senior AI Safety people, so god knows how strong the update was for Silicon Valley as a whole.
  • This new movement became increasingly right-wing coded. Partly this was a response to the culture wars in America and the increasing vitriol thrown by the left against 'tech bros', partly a response to the California Ideology being threatened by any sense of AI oversight or regulation, and partly because EA is the enemy and EA was increasingly seen by this group as left-wing, woke, or part of the Democratic Party due to the funding patterns of SBF and Moskovitz. I think this has led, fairly predictably, to the rightward shift in SV and direct political affiliation with a (prospective) second Trump presidency.
  • Across all of this, my impression is that, just as with Torres, there was little to no direct pushback. I can understand not wanting to be dragged into a memetic war, or to be involved in the darker parts of Twitter discourse. But the e-acc/techno-optimist/RW-Silicon-Valley movement was being driven by something, and I don't think AI Safety ever really argued against it convincingly, and definitely not convincingly enough to 'win' the memetic war. Like, the a16z cluster literally lied to Congress and to Parliament, but nothing much came of that fact.
    • I think this is very much linked to playing a strong 'inside game' to access the halls of power while playing no 'outside game' to gain legitimacy for that use of power. It's also, I think, due to EA not wanting to use social media to make its case, whereas the e-acc cluster was born on, and lives on, social media.

Where are we now?

I'm not a part of the Bay Area scene and culture,[4] but it seems to me that the AI Safety movement has lost the 'mandate of heaven', to whatever extent it had it. SB-1047 is a push to change policy that has resulted in backlash, and may result in further polarisation and counter-attempts to fight back in a zero-sum political game. I don't know if it would be constitutional for a Trump/Vance administration to use the Supremacy Clause to void SB-1047, but I don't doubt that they might try. Biden's executive order seems certain for the chopping block. I expect a Trump administration to be a lot less sympathetic to the Bay Area/DC AI Safety movements, and the right-wing part of Silicon Valley will at the very least be energised to fight back harder.

One concerning thing for both Silicon Valley and the AI Safety movement is what happens as a result of the ideological consequences of SV accepting this trend. One strong fault-line already visible is the extreme social conservatism and incipient nationalism this shift has brought with it. In the recent a16z podcast, Ben Horowitz literally accuses the Biden administration of breaking the rule of law, while saying nothing about Trump literally refusing to concede the 2020 election and declaring that there was electoral fraud. Mike Solana seems to think that all risks of democratic backsliding under a Trump administration were/are overblown (or at least that people in the Bay agreeing with those fears was preference falsification). On the Moment of Zen podcast (which has also hosted Curtis Yarvin twice), Balaji Srinivasan accused the 'Blue Tribe' of ethnically cleansing him out of SF[5] and called on the grey tribe to push all the blues out of SF. e/acc-sympathetic people are noting anti-trans ideas bubbling up in the new movement. You cannot seriously engage with ideas and shape them without those ideas changing you.[6] This right-wing shift will have further consequences, especially under a second Trump presidency.

What next for the AI Safety field?

I think this is a bad sign for the field of AI Safety. AI has escaped political polarisation for a while, but that seems to be changing. Current polls may lean in support, but polls and political support are fickle, especially in the age of hyper-polarisation.[7] I feel like my fears around the perception of Open Philanthropy are recurring here, but for the AI Safety movement at large.

I think the consistent defeats to the e-acc school, and the fact that the tech sector as a whole seems very much unconvinced by the arguments for AI Safety, should at some point lead to a reflection from the movement. Where you stand on this very much depends on your object-level beliefs. While there is a lot of e-acc discourse around transhumanism, replacing humanity, and the AI eschaton, I don't really buy it. I think they simply don't believe ASI is possible soon, and thus consider all arguments for AI Safety bunk. And while the tech sector as a whole might not be as hostile, it doesn't seem at all convinced of the 'ASI-soon' idea.

A key point I want to emphasise is that one cannot expect to wield power successfully without also having legitimacy.[8] To the extent that the AI Safety movement's strategy is to try to thread this needle - power without legitimacy - it will fail.

Anyway, long ramble over, and given this was basically a one-shot ramble it will have many inaccuracies and flaws. Nevertheless I hope that it can be directionally useful and lead to productive discussion.

  1. ^

    lol, lmao

  2. ^

    See here, here, and here. These examples are from Twitter because, for better or for worse, it seems much of SV/tech opinion is formed by Twitter discourse.

  3. ^

    Would be very interested to hear the thoughts of people in the Bay on this

  4. ^

    And if invited to be, I would almost certainly decline.

  5. ^

    He literally used the phrase 'ethnically cleanse'. This is extraordinarily dangerous language in a political context.

  6. ^

    A good example in fiction is in Warhammer 40K, where Horus originally accepts the power of Chaos to fight against Imperial tyranny, but ends up becoming its slave.

  7. ^

    Due to polarisation, views can dramatically shift on even major topics such as the economy and national security (I know these are messy examples!). Current poll leads for AI regulation should not, in any way, be considered secure.

  8. ^

    I guess you could also have overwhelming might and force, but even that requires legitimacy. Caesar needed to be seen as legitimate by Mark Antony; Alexander didn't have the legitimacy to get his army to cross the Hyphasis, etc.

No, really - I appreciated your perspective, both on SMA and what we mean when we talk about 'EA'. It has definitely given me some food for thought :)

Feels like you've slightly misunderstood my point of view here Lorenzo? Maybe that's on me for not communicating it clearly enough though.

For what it's worth, Rutger has been donating 10% to effective charities for a while and has advocated for the GWWC pledge many times... So I don't think he's against that, and lots of people have taken the 10% Pledge specifically because of his advocacy.

That's great! Sounds very 'EA' to me 🤷

I think this mixes effective altruism ideals/goals (which everyone agrees with) with EA's specific implementation, movement, culture and community.

I'm not sure everyone does agree, really - some people have foundational moral differences. But that aside, I think effective altruism is best understood as a set of ideas/ideals/goals. I've been arguing that on the Forum for a while and will continue to do so. So I don't think I'm mixing them; I think that the critics are.

This doesn't mean that they're not pointing out very real problems with the movement/community. I still strongly think that the movement has a lot of growing pains/reforms/reckonings to go through before we can heal the damage of FTX and onwards.

The 'win by ippon' was just a jokey reference to Michael Nielsen's 'EA judo' phrase, not me advocating for soldier over scout mindset.

If we want millions of people to e.g. give effectively, I think we need to have multiple "movements", "flavours" or "interpretations" of EA projects.

I completely agree! Like 100000% agree! But that's still 'EA'? I just don't understand trying to draw such a big distinction between SMA and EA when they reference a lot of the same underlying ideas.

So I don't know, it feels like we're violently agreeing here or something? I didn't mean to suggest anything otherwise in my original comment, and I even edited it to make it clearer that I was more frustrated at the interviewer than anything Rutger said or did (it's possible that a lot of the non-quoted phrasing was put in his mouth).

Just a general note: I think adding some framing of the piece, maybe key quotes, and perhaps your own thoughts as well would improve this from being a bare link-post. As for the post itself:

It seems Bregman views EA as:

a misguided movement that sought to weaponize the country’s capitalist engines to protect the planet and the human race

Not really sure how donating ~10% of my income to Global Health and Animal Welfare charities matches that framework, tbqh. But yeah, 'weaponize' is highly aggressive language here; if you take it out, there's not much wrong with the description. Maybe Rutger or the interviewer think capitalism is inherently bad or something?

effective altruism encourages talented, ambitious young people to embrace their inner capitalist, maximize profits, and then donate those profits to accomplish the maximum amount of good.

Are we really doing the earn-to-give thing again here? Snark aside, there isn't really an argument here, apart from again implicitly associating capitalism with badness. EA people have also warned about the dangers of maximisation before, so this isn't unknown to the movement.

Bregman saw EA’s demise long before the downfall of the movement’s poster child, Sam Bankman-Fried

Is this implying that EA is dead (news to me), or that it is in terminal decline (arguable, but knowledge of the future is difficult etc. etc.)?

he [Rutger] says the movement [EA] ultimately “always felt like moral blackmailing to me: you’re immoral if you don’t save the proverbial child. We’re trying to build a movement that’s grounded not in guilt but enthusiasm, compassion, and problem-solving.

I mean, this doesn't sound like an argument against EA or EA ideas? It's perhaps why Rutger felt put off by the movement, but if you want a movement based on 'enthusiasm, compassion, and problem-solving' (which are still very EA traits to me, btw), presumably that's because it would do more good than a movement wracked by guilt. This just falls victim to classic EA Judo; we win by ippon.

I don't know, maybe Rutger has written up his criticism more thoroughly somewhere. This article feels like such a weak summary of it, though, and just leaves me frustrated. And in a bunch of places, it's really EA! See:

  • Using Rob Mather founding AMF as a case study (and who has a better EA story than AMF?)
  • Pointing towards reducing consumption of animals via less meat-eating
  • Even explicitly admires EA's support for "non-profit charity entrepreneurship"

So where's the EA hate coming from? I think 'EA hate' is too strong, and it mostly seems to be coming from the interviewer rather than Rutger. Rutger seems very disillusioned with the state of EA, but many EAs feel that way too! Pinging @Rutger Bregman or anyone else from the EA Netherlands scene for thoughts, comments, and responses.

With existential risk from unaligned AI, I don't think anyone has ever told a very clear story about how AI will actually get misaligned, get loose, and kill everyone. 

This should be evidence against AI x-risk![1] Even in the atmospheric-ignition case at Trinity, they had more concrete models to work with. If we can't build a concrete model here, then it implies we don't have a concrete/convincing case for why it should be prioritised at all, imo. It's similar to the point in my footnotes that you need to argue for both p and p->q, not just the latter. This is what I would expect to see if the case for p was unconvincing/incorrect.

I don't think this is a problem: we shouldn't expect to know all the details of how things go wrong in advance

Yeah, I agree with this. But the uncertainty and cluelessness about the future should decrease one's confidence that they're working on the most important thing in the history of humanity, one would think.

and it is worthwhile to do a lot of preparatory research that might be helpful so that we're not fumbling through basic things during a critical period. I think the same applies to digital minds.

I'm all in favour of research, but how much should that research get funded? Can it be justified above other potential uses of money and resources in general? Should it be an EA priority as defined by the AWDW framing? These questions were (almost) entirely unargued for.

  1. ^

    Not dispositive evidence perhaps, but a consideration

It also seems like you're mostly critiquing the tractability of the claim and not the underlying scale nor neglectedness?

Yep, everyone agrees it's neglected. My strongest critique is of the tractability, which may be so low as to discount astronomical value. I do take a lot of issue with the scale as well, though; I think that needs to be argued for rather than assumed. I also think trade-offs against other causes need to be taken into account at some point too.

And again, I'm not saying there are no arguments on scale/tractability that could make AI Welfare look like a valuable cause - but those arguments clearly weren't made (imho) in AWDW.

I don't quite know what to respond here.[1] If the aim was to discuss something different, then I guess there should have been a different debate prompt? Or maybe it shouldn't have been framed as a debate at all - maybe it should have just prioritised AI Welfare as a topic and left it at that. I'd certainly have less of an issue with the posts that were written, and certainly wouldn't have been confused by the voting, if there wasn't a voting slider.[2]

  1. ^

    So I probably won't - we seem to have strongly differing intuitions and interpretations of the facts, which probably makes communication difficult.

  2. ^

    But I liked the voting slider, it was a cool feature!
