JWS

2227 karma · Joined Jan 2023

Bio

Kinda pro-pluralist, kinda anti-Bay EA.

I have come here to extend the principle of charity to bad criticisms of EA and kick ass. And I'm all out of charity.

(my opinions are fully my own, and do not represent the views of any close associates or the company I work for)

Posts (4) · Sorted by New

Sequences (1): Criticism of EA Criticism

Comments (202)

Hey Ryan, thanks for your engagement :) I'm going to respond to your replies in one go if that's ok

#1:

It's worth noting there is some asymmetry in the likely updates with a high probability of a mild negative update on near term AI and a low probability of a large positive update toward powerful near term AI.

This is a good point. I think my argument would point to larger updates for people who put substantial probability on near term AGI in 2024 (or even 2023)! Where do they shift that probability in their forecast? I think just dropping it uniformly over their current probability would be suspect to me. So maybe it wouldn't be a large update for somebody already unsure what to expect from AI development, but I think it should probably be a large update for the ~20% expecting 'weak AGI' in 2024 (more in response #3).
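
To make the 'dropping it uniformly' move concrete, here's a toy sketch - the numbers are entirely made up and aren't anyone's actual forecast - of what purely mechanical renormalisation of a cumulative forecast looks like once a date passes without weak AGI:

```python
# Toy cumulative forecast for "weak AGI by end of year Y" (illustrative numbers only)
cdf = {2024: 0.20, 2025: 0.36, 2026: 0.50, 2030: 0.75}

# Suppose 2024 ends with no weak AGI. Mechanically conditioning on that miss
# just rescales the remaining probability mass by 1 / (1 - 0.20):
missed = cdf[2024]
conditioned = {y: round((p - missed) / (1 - missed), 3) for y, p in cdf.items() if y > 2024}

print(conditioned)  # {2025: 0.2, 2026: 0.375, 2030: 0.688}
```

Under that reshuffle the relative shape of the forecast over the remaining years is unchanged - everything just gets rescaled by 1.25 - which is exactly the move that seems suspect to me: a miss of that size should arguably also count as evidence against the picture of AI progress that generated the near-term mass, not merely slide the dates along.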

#2:

Further, manifold doesn't seem that wrong here on GPT4 vs gemini? See for instance, this market: 

Yeah I suppose ~80%->~60% is a decent update, thanks for showing me the link! My issue here is that the resolution criterion really seems to be CoT on GSM8K, which is almost orthogonal to 'better' imho, especially given issues accounting for dataset contamination - though I suppose the market is technically about wider perception rather than technical accuracy. I think I was basing a lot of my take on the response on Tech Twitter, which is obviously unrepresentative and prone to hype. But there were a lot of people I generally regard as smart and switched-on who really over-reacted in my opinion. Perhaps the median community/AI-Safety researcher response was more measured.

#3:

As in, the operationalization seems like a very poor definition for "weakly general AGI" and the tasks being forecast don't seem very important or interesting.

I'm sympathetic to this, but Metaculus questions are generally meant to be resolved according to strict and unambiguous criteria afaik. So if someone thinks that weakly general AGI is near, but that it wouldn't do well at the criteria in the question, then they should have longer timelines than the current community response to that question imho. The fact that this isn't the case indicates to me that many people who made a forecast on this market aren't paying attention to the details of the resolution, or to how LLMs are trained and their strengths/limitations in practice. (Of course, if these predictors think that weak AGI will happen from a non-LLM paradigm then fine, but then I'd expect the forecasting community to react less to LLM releases.)

I think where I absolutely agree with you is that we need different criteria to actually track the capabilities and properties of general AI systems that we're concerned about! The current benchmarks available seem to have many flaws and don't really work to distinguish interesting capabilities in the trained-on-everything era of LLMs. I think funding, supporting, and popularising research into what 'good' benchmarks would be and creating a new test would be high impact work for the AI field - I'd love to see orgs look into this!

#3b:

Can't we just use an SAT test created after the data cutoff?...You can see the technical report for more discussion on data contamination (though account for bias accordingly etc.)

For the Metaculus question? I'd be very upset if I had a longer-timeline prediction that failed because this resolution got changed - it says 'less than 10 SAT exams' in the training data in black and white! The fact that these systems need such masses of data to do well is a sign against their generality to me.

I don't doubt that the Gemini team is aware of issues of data contamination (they even say so at the end of page 7 in the technical report), but I've become very sceptical about the state of public science on Frontier AI this year. I'm very much in a 'trust, but verify' mode and the technical report is to me more of a fancy press-release that accompanied the marketing than an honest technical report. (which is not to doubt the integrity of the Gemini research and dev team, just to say that I think they're losing the internal tug-of-war with Google marketing & strategy)

#4:

This doesn't seem to be by Melanie Mitchell FYI. At least she isn't an author.

Ah good spot. I think I saw Melanie share it on twitter, and assumed she was sharing some new research of hers (I pulled together the references fairly quickly). I still think the results stand but I appreciate the correction and have amended my post.

<>    <>    <>    <>    <>

I want to thank you again for the interesting and insightful questions and prompts. They definitely made me think about how to express my position slightly more clearly (at least, I hope I make more sense to you after this response, even if we don't agree on everything) :)

JWS · 3d

warning - mildly spicy take

In the wake of the release, I was a bit perplexed by how much of Tech Twitter (answered my own question there) really thought this was a major advance.

But in actuality a lot of the demo was, shall we say, not consistently candid about Gemini's capabilities (see here for discussion and here for the original).

At the moment, all Google have released is a model inferior to GPT-4 (though the multi-modality does look cool), and they've dropped an I.O.U for a totally-superior-model-trust-me-bro to come out some time next year.

Previously some AI risk people confidently thought that Gemini would be substantially superior to GPT-4. As of this year, it's clearly not. Some EAs were not sceptical enough of a for-profit company hosting a product announcement dressed up as a technical demo and report.

There have been a couple of other cases of this overhype recently, notably 'AGI has been achieved internally' and 'What did Ilya see?!!?!?', where people jumped to assuming a massive jump in capability on the back of very little evidence, when in actuality there hadn't been one. That should set off warning flags about 'epistemics' tbh.

On the 'Benchmarks' - I think most 'Benchmarks' that large LLMs use, while they contain some signal, are mostly noisy due to the significant issue of data contamination (papers like The Reversal Curse indicate this imo), and since LLMs don't think as humans do we shouldn't be testing them in similar ways. Here are two recent papers - one from Melanie Mitchell about LLMs failing to abstract and generalise, and another by Jones & Bergen[1] from UC San Diego actually empirically performing the Turing Test with LLMs (the results will shock you).

I think this announcement should make people think near term AGI, and thus AIXR, is less likely. To me this is what a relatively continuous takeoff world looks like, if there's a take off at all. If Google had announced and proved a massive leap forward, then people would have shrunk their timelines even further. So why, given this was a PR-fueled disappointment, should we not update in the opposite direction?

Finally, to get on my favourite soapbox, dunking on the Metaculus 'Weakly General AGI' forecast:

  • 13% of the community prediction is already in the past (x < Dec 2023). Lol, lmao.
  • Also judging by Cumulative Probability:
    • ~20% likely in 2024 (really??!?!?!? if only this was real that'd be free money for sceptics)
    • ~16% likely in 2025
    • Median Date March 2026
    • The operationalisation of points 1 and 3 to my mind makes this nearly ~0-1% in that time frame
      • Number 1 is an adversarial Turing Test. LLMs, especially with RLHF, are like the worst possible systems at this. I'm not even kidding - in the paper I linked above, sometimes ELIZA does better
      • Number 3 requires SAT tests (or, I guess, tests with overlapping questions and answers) not be in the training data. The current paradigm relies on scooping up everything, and I don't know how much fidelity the model makers have in filtering data out. Also, it's unlikely they'd ever show you the data they trained on, as these models are proprietary. So there's no way of knowing if a model can meet point 3!
      • 1 & 3 make me think a lot of AGI forecasts are based on vibes rather than on the question operationalisations and the technical performance of models

tl;dr - Gemini release is disappointing. Below many people's expectations of its performance. Should downgrade future expectations. Near term AGI takeoff v unlikely. Update downwards on AI risk (YMMV).

  1. ^

    I originally thought this was a paper by Mitchell; that was a quick system-1 take that was incorrect, and I apologise to Jones and Bergen.

I'm glad you found my comment useful. I think then, with respect, you should consider retracting some of your previous comments, or at least reframing them to be more circumspect and make clear that you're taking issue with a particular framing/subset of the AIXR community as opposed to EA as a whole.

As for the points in your comment, there's a lot of good stuff here. I think a post about the NRRC, or even an insider's view into how the US administration thinks about and handles Nuclear Risk, would be really useful content on the Forum, and also incredibly interesting! Similarly, I think a post on how a community handles making 'right-tail recommendations' when those recommendations may erode its collective and institutional legitimacy[1] would be really valuable. (Not saying that you should write these posts, they're just examples off the top of my head. In general I think you have a professional perspective a lot of EAs could benefit from.)

I think one thing where we agree is that there's a need to ask and answer a lot more questions, some of which you mention here (beyond 'is AIXR valid'):

  • What policy options do we have to counteract AIXR if true?
  • How do the effectiveness of these policy options change as we change our estimation of the risk?
  • What is the median view in the AIXR/broader EA/broader AI communities on risk?

And so on.

  1. ^

    Some people in EA might write this off as 'optics', but I think that's wrong

I'm sorry you encountered this, and I don't want to minimise your personal experience.

I think once any group becomes large enough there will be people who associate with it who harbour all sorts of sentiments, including the ones you mention.

On the whole though, I've found the EA community (both online and those I've met in person) to be incredibly pro-LGBT and pro-trans. Both the underlying moral views (e.g. non-traditionalism, impartiality, cosmopolitanism etc.) point that way, as do the underlying demographics (e.g. young, highly educated, socially liberal).

I think where there might be a split is in progressive (as in, politically leftist) framings of issues and the type of language used to talk about these topics. I think those framings often find it difficult to gain purchase in EA, especially on the rationalist/LW-adjacent side. But I don't think that means the community as a whole, or even that sub-section, is 'anti-LGBT' or 'anti-trans', and I think there are historical and multifaceted reasons why there's some enmity between the 'progressive' and 'EA' camps/perspectives.

Nevertheless, I'm sorry that you've experienced this sentiment, and I hope you're feeling ok.

JWS · 6d

Thanks for sharing the post Zed :) Like titotal says, I hope you consider staying around. I think AI-risk (AIXR) sceptic posts should be welcomed on the Forum. I'm someone who'd probably count as an AIXR sceptic within the EA community (though not by the standards of the wider world/public). It's clearly an area where you think EA as a whole is making a mistake, so I've read the post and recent comments and have some thoughts that I hope you might find useful:

I think there are some good points you made:

  • I really appreciate posts on the Forum that push against the 'EA Orthodoxy' and start off useful discussions. I think 'red-teaming' ideas is a great example of necessary error-correction, so regardless of how much I agree or not, I want to give you plaudits for that.
  • On humility in long-term forecasts - I completely agree here. I'm sure you've come across it, but Tetlock's recent forecasting tournament deals with this question and does indeed find that forecasters place lower probability on AIXR than subject-matter experts do.[1] But I'd still say that a roughly ~1% risk of extinction is worth treating as an important risk deserving more investigation, wouldn't you?
  • I think your scepticism on very short timelines is directionally very valid. I hope that those who have made very, very short timeline predictions on Metaculus are willing to update if those dates[2] come and go without AGI. I think one way out of the poor state of the AGI debate is for more people to make concrete falsifiable predictions.
  • While I disagree with your reasoning about what the EA position on AIXR is (see below), I think it's clear that many people think that is the position, so I'd really like to hear how you've come to this impression and what EA or the AIXR community could do to present a more accurate picture of itself. I think reducing this gap would be useful for all sides.

Some parts that I didn't find convincing:

  • You view Hanson's response as a knock-down argument. But he only addresses the 'foom' cases and only does so heuristically, not from any technical arguments. I think more credible counterarguments are being presented by experts such as Belrose & Pope, who you might find convincing (though I think they have non-trivial subjective estimates of AIXR too fwiw).
  • I really don't like the move to psychoanalyse people in terms of bias. Is bias at play? Of course, it's at play for all humans, but it's therefore just as likely for those who are super optimistic as for those who are pessimistic a priori. I think once something breaks through enough to be deemed 'worthy of consideration' then we ought to do most of our evaluation on the merits of the arguments given. You even say this at the end of the 'fooling oneself' section! I guess I think the questions of "are AIXR concerns valid?" and "if not, why are they so prominent?" are probably worth two separate posts imo. Similarly, I think you sometimes conflate the questions of "are AIXR concerns valid?" and "if they are, what would an appropriate policy response look like?" I think your latest comment to Hayven is where your strongest objections are (which makes sense to me, given your background and expertise), but again that's different from the pure question of whether AIXR concern is valid.
  • Framing those concerned with AIXR as 'alarmists' - I think you're perhaps overindexing on MIRI here as representative of AI Safety as a whole? From my vague sense, MIRI doesn't hold the dominant position in the AI Safety space that it perhaps did 10/20 years ago. I don't think that ~90%+ belief in doom is an accurate depiction of EA, and similarly I don't think that an indefinite global pause is the default EA view of the policies that ought to be adopted. Like, you mention Anthropic and CHAI as two good institutions, and they're both highly EA-coded and sincerely concerned about AIXR. I think a potential disambiguation here is between 'concern about AIXR' and 'certain doom about AIXR'?

But also some bad ones:

  • Saying that EA's focus on x-risk lacks "common sense" - I actually think x-risk is something the general public would think makes a lot of sense, though they'd think that EA gets the source of that risk wrong (that's an empirical question, though). I think a lot of people would say that trying to reduce the risk of human extinction from Nuclear War or Climate Change is an unambiguously good cause and potentially a good use of marginal resources.
  • Viewing EA, let alone AIXR, as motivated by 'nonsense utilitarianism' about 'trillions of theoretical future people'. Most EA spending goes to Global Health causes in the present. Many AIXR advocates don't identify as longtermists at all; they're often, if not mostly, concerned about risk to humans alive today - themselves and those they care about. Concern about AIXR could also be motivated through non-utilitarian frameworks, though I'd concede that this probably isn't the standard EA position.

I know this is a super long comment, so feel free to only respond to the bits you find useful or even not at all. Alternatively we could try out the new dialogue feature to talk through this a bit more? In any case, thanks again for the post, it got me thinking about where and why I disagree both with AI 'doomers' as well as your position in this post.

  1. ^

    roughly 0.4% for superforecasters vs 2.1% for AI experts by 2100

  2. ^

    March 14th 2026 at the time of writing

JWS · 8d

Hey Wei, I appreciate you responding to Mo, but I found myself still confused after reading this reply. This isn't purely down to you - a lot of LessWrong writing refers to 'status', but never clearly defines what it is or points to the evidence and literature for it.[1] To me, it seems to function as a magic word that can explain anything and everything. The whole concept of 'status' as I've seen it used on LW seems incredibly susceptible to being part of 'just-so' stories.

I'm highly sceptical of this though - like, I don't know what a 'status gradient' is, and I don't think it exists in the world? Maybe you mean an abstract description of behaviour? But then a 'status gradient' is just describing what happened in a social setting, rather than making scientific predictions. Maybe it's instead a kind of non-reductionist sense of existing and having impact, which I do buy, but then things like 'ideas', 'values', and 'beliefs' should also exist in this non-reductionist way and be as important for considering human action as 'status' is.

It also tends to lead to explanations like this:

One tricky consideration here is that people don't like to explicitly think about status, because it's generally better for one's status to appear to do everything for its own sake

Which to me is dangerously close to saying "if someone talks about status, it's evidence it's real; if they don't talk about it, then they're self-deceiving in a Hansonian sense, and this is also evidence for status", which sets off a lot of epistemological red flags for me.

  1. ^

    In fact, one of the most cited works about it isn't a piece of anthropology or sociology, but a book about Improv acting???

JWS · 11d

Just a quick point of order:

as far as I know, nobody who enabled or associated with SBF has yet stepped down from their leadership positions in EA organizations.

I think Will resigning from his position on the EV UK board and Nick resigning from both the UK and US boards would count for this.

I'm not making a claim here whether these were the 'right' outcomes or whether it's 'enough', but there have been consequences including at 'leading' EA organisations

Maybe you two might consider having this discussion using the new Dialogue feature? I've really appreciated both of your perspectives and insights on this discussion, and I think the collaborative back-and-forth you're having seems a very good fit for how Dialogues work.

A side consideration - assuming a UK-based EAGx is being planned for next year, perhaps it could be planned to coincide with holidays at UK universities, and perhaps be more favourable for applications from students who wanted to attend/apply to EAG London 2024 but didn't for the reason Oliver states?

[ aside: I know organising events isn't an easy thing, just want to make it clear this is more of a consideration rather than a demand :) ]

I find myself pretty confused by this reply, Tristan. I'm not trying to be rude, but in some cases I don't really see how it follows.

When you say "EAs are out" it seems like we want some of our own on the inside, as opposed to just sensible, saftey concerned people.

I disagree. I think it's a statement of fact. The EAs who were on the board will no longer be on the board. They're both senior EAs, so I don't think it's an irrelevant detail for the Forum to consider. I also think it's a pretty big stretch to go from 'EAs are out' to 'only EAs can be trusted with AI Safety', like I just don't see that link being strong at all, and I disagree with it anyway

What succinct way to put this is better? "Saftey is out" feels slightly better but like it's still making some sort of claim that we have unique providence here. So idk, maybe we just need slightly longer expressions here like "Helen Toner and Tasha McCauley have done really great work and without their concern for saftey we're worried about future directions of the board" or something like that.

Perhaps an alternative could have been "Sam Altman returning as OpenAI CEO, major changes to board structure agreed?" or something like that?

As for your expression, I guess I just disagree with it, or think it's lacking evidence? I definitely wouldn't want to co-sign it or state it as an EA-wide position.

To avoid uncertainty about what I mean here:

  • I'm familiar with Toner's work a little, and it looks good to me. I have basically ~0 knowledge of what 'great work' McCauley has done, or how she ended up on the board, or her positions on AI Safety or EA in general
  • I don't think not having these members of the board means I should be worried about the future of the OpenAI board or the future of AI Safety
  • In terms of their actions as board members: the drastic action they took on Friday without any notice to investors or other stakeholders, combined with losing Ilya, nearly losing the trust of their newly appointed CEO, complete radio silence, and losing the support of ~95% of the employees of the entity they were board members for,[1] leaves me with lots of doubts about their performance and suitability as board members of any significant organisation, and about their ability to handle crises of this magnitude.

But see below - I think these issues are best discussed somewhere else.

(The other two paragraphs of yours focus, somewhat confusingly, on the idea that labelling EAs is necessary for considering the impact of this on EA (and on their ability to govern in EA), which I think is best discussed as its own separate point?)

I agree that the implications of this for EA governance are best discussed in another place/post entirely, but it's an issue I think does need to be brought up, perhaps when the dust has settled a bit and tempers on all sides have cooled.

I don't know where I claim that labelling EAs is necessary for discussing the impacts of this at all. Like I really just don't get it - I don't think that's true about what I said and I don't think I said it or implied it 🤷‍♂

  1. ^

    including most of the AI-safety identifying people at OpenAI as far as I can tell
