This is a special post for quick takes by Tamay. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I think EA should be more critical of its advocacy contingents, and that those involved in such efforts should set a higher bar for offering thoughtful and considered takes.

Short slogans and emojis-in-profiles, such as those often used in 'Pause AI' advocacy, are IMO inadequate for the level of nuance a topic this complex requires. Falling short can burn the credibility and status of those involved in EA in the eyes of onlookers.

As someone who runs one of EA's advocacy contingents, I think more criticism is probably a good idea (though I suspect I'll find it personally unpleasant when applied to things I work on), but I'd suggest a few nuances here:

  1. EA is not unitary, and different EAs and EA factions will have different and at times opposing policy goals. For example, many of the people who work at OpenAI/Anthropic are EAs (or EA-adjacent), but many EAs think working at OpenAI/Anthropic leads to AI acceleration in a harmful way (EAs also have differing views of the relative merits of those two firms).
    1. Which views are considered EA can change the composition of who identifies as EA, EA-adjacent, unopposed, or EA-hostile -- e.g. I would perceive Sam Altman as EA-adjacent, but the perception that EAs have been critical of OpenAI, along with other events, likely pushed him further from EA than he'd otherwise be; Elon Musk and Peter Thiel may be related examples.
  2. Advocacy is inherently information-lossy, since it involves translating information from one context into a format that will be persuasive in some sort of politically useful way. Usually this involves simplification (because a popular or decision-maker audience has less bandwidth than an expert audience) and may also involve differentiation (since the message will probably tend to be optimized to fit something like the existing views of its audience). This is a hard challenge to manage. 
    1. One type of simplification I've noticed comes from an internal EA-organizing perspective: the experts/leaders at the center tend to have nuanced, reasonable views, but as those views are transmitted to organizers, and from organizers to less experienced people interested in EA, they can be translated into a dogma that is simplistic and rigid.
    2. Two case studies of EA (or EA-adjacent) advocacy -- monetary/macroeconomic policy and criminal justice reform -- have had interestingly different trajectories. With monetary policy in the U.S., EA-funded groups tended to foreground technical policy understanding and (in my opinion) did a good job adjusting their recommendations as macroeconomic conditions changed (I'm thinking mainly of Employ America). The criminal justice reform movement (where I founded a volunteer advocacy organization, the Rikers Debate Project) has in my opinion been mostly unable to reorient its recommendations and thinking in response to changing conditions. In my opinion, the macroeconomic policy work had more of a technocratic theory of change than the more identity-oriented criminal justice reform efforts funded by EA, though there were elements of technocracy and identitarianism in both fields. (Rikers Debate, which was not funded by EA groups, has historically been more identitarian in focus.)

Can you provide a historical example of advocacy that you think reaches a high level of thoughtfulness and consideration?

I think much of the advocacy within EA is reasonably thoughtful and truth-seeking. Reasoning and uncertainties are often transparently communicated. Here are two examples based on my personal impressions:

  • Advocacy around donating a fraction of one's income to effective charities generally focuses on providing accounts of key facts and statistics, and often acknowledges its demandingness and its potential for personal and social downsides
  • Wild animal suffering advocacy usually acknowledges the second-order effects of interventions on ecosystems, highlights the amount of uncertainty around the extent of suffering, and often calls for more research rather than immediate intervention


By contrast, EA veganism advocacy has done a much poorer job of remaining truth-seeking, as Elizabeth has pointed out.

Thanks for your thoughtful reply, I appreciate it :)

I am a bit confused still. I'm struggling to see how the work of GWWC is similar to the Pause Movement. Unless you're saying there is a vocal contingent of EAs (who don't work for GWWC) who publicly advocate (to non-EAs) for donating ≥ 10% of one's income? I haven't seen these people.

In short, I'm struggling to see how they're analogous situations.

You asked for examples of advocacy done well with respect to truth-seekingness/providing well-considered takes, and I provided examples.

You seem annoyed, so I will leave the conversation here.

I'm a bit skeptical that all identitarian tactics should be avoided, insofar as that is what this is. It's just too potent a tool - just about every social movement has promulgated itself by these means, by plan or otherwise. Part of this is a "growth of the movement" debate; I'm inclined to think that more money+idea proliferation is needed.

I do think there are some reasonable constraints:

  1. Identitarian tactics should be used self-consciously and cynically. It's when we forget that we are acting that the worst of in/out-groupiness presents itself. I do think we could do with some more reminding of this.

  2. I would agree that certain people should refrain from this. It's fine if early-career people do it, but I'll start being concerned if MacAskill loses his cool and starts posting "I AM AN EA💡" and roasting outgroups.

The term 'default' in discussions about AI risk (like 'doom is the default') strikes me as an unhelpful rhetorical move. It suggests an unlikely scenario in which few or no measures are taken to address AI safety. Given the active research on safety and the fact that alignment is likely to be crucial to unlocking the economic value from AI, this seems like a very unnatural baseline to frame discussions around.

It seems like you're not arguing about the rhetoric of the people you disagree with but rather about the substantive question of the likelihood of disastrous AGI.

The reasons you have given tend to disconfirm the claim that "doom is the default." But rhetorically, the phrase succinctly conveys their belief that AGI will be very bad for us unless a very difficult and costly solution is developed.

One issue I have with this is that when someone calls this the 'default', I interpret them as implicitly making a prediction about the likelihood of such countermeasures not being taken. The issue then is that this is a very vague way to communicate one's beliefs. How likely does an outcome need to be for it to count as the default? 90%? 70%? 50%? Something else?

My second concern is that it's improbable that minimal or no safety measures will be implemented, which makes it odd to set this as a key baseline scenario. Substantial evidence indicates that safety precautions are likely to be taken. For instance:

  • Most of the major AGI labs are investing quite substantially in safety (e.g. OpenAI committing a substantial fraction of its compute budget to safety, a large fraction of Anthropic's research staff seems dedicated to safety, etc.)
  • We have received quite a substantial amount of concrete empirical evidence that safety-enhancing innovations are important for unlocking the economic value from AI systems (e.g. RLHF, constitutional AI, etc.)
  • It seems a priori very likely that alignment is important for unlocking the economic value from AI, because this effectively increases the range of tasks that AI systems can do without substantial human oversight, which is necessary for deriving value from automation
  • Major governments are interested in AI safety (e.g. the UK's AI Safety Summit, the White House securing commitments on AI safety from AGI labs)

Maybe they think that the safety measures taken in a world where we observe this type of evidence will fall far short of what is needed. However, it's somewhat puzzling to be confident enough in this to label it the 'default' scenario at this point.

I take it as a kind of "what do known incentives do and neglect to do?" -- when I say "default" I mean "without philanthropic pressure" or "whatever is well-aligned with making someone rich". Of course, a lot of this depends on my background understanding of public-private partnerships through the history of innovation (something I'm liable to be wrong about).

The standard Venn diagram of focused research organizations (https://fas.org/publication/focused-research-organizations-a-new-model-for-scientific-research/) gives a more detailed, less clumsy view along the same lines, but the point is still that "there are blindspots that we don't know how to incentivize".

It's certainly true that many parts of almost every characterization/definition of "alignment" can simply be offloaded to capitalism, but I think there are a bajillion reasonable and defensible views about which parts those are, whether they're hard, whether they may be discovered in an inconvenient order, etc.

Well, sure, but if there is a way to avoid the doom, then why, after 20 years, has no one published a plan for how to do it that doesn't resemble either a speculative research project of the type you try when you clearly don't understand the problem, or the vague output of a politician writing about a sensitive issue?

There's no need for anyone to offer a precise plan for how we can avoid doom. That would be nice, but unnecessary. I interpreted Tamay as saying that the "default" outcomes that people contemplate are improbable to begin with. If the so-called default scenario will not happen, then it's unclear to me why we need to have a specific plan for how to avoid doom. Humanity will just avoid doom in the way we always have: trudging through until we get out alive. No one had a realistic, detailed plan for how we'd survive the 20th century unscathed by nuclear catastrophe, and yet we did anyway.

I interpreted Tamay as saying that the “default” outcomes that people contemplate are improbable to begin with.

I'm curious about your "to begin with": do you interpret Tamay as saying that doom is improbable even if little-to-no measures are taken to address AI safety?

While I agree that slogans like "doom is the default" should not take over the discussion at the expense of actual engagement, it doesn't appear that your problem is with the specific phrasing but rather with the content behind the statement.
