
In no particular order, here's a collection of Twitter screenshots of people attacking AI Safety. Many of them are poorly reasoned, and some are simply ad hominem. Still, these types of tweets are influential, and they are widely circulated among AI capabilities researchers.

1. [screenshot]

2. [screenshot]

3. [screenshot]

4. [screenshot]

5. [screenshot]

(That one wasn't actually a critique, but it did convey useful information about the state of AI Safety's optics.)

6. [screenshot]

7. [screenshot]

8. [screenshot]

9. [screenshot]

10. [screenshot]

11. [screenshot]

12. [screenshot]

13. [screenshot]

14. [screenshot]

15. [screenshot]

16. [screenshot]

17. [screenshot]

18. [screenshot]

19. [screenshot]

20. [screenshot]

Conclusions

I originally intended to end this post with a call to action, but we mustn't propose solutions immediately. So in lieu of a specific proposal, I ask you: can the optics of AI safety be improved?

Comments (13)



"Still, these types of tweets are influential, and are widely circulated among AI capabilities researchers." I'm kind of skeptical of this.

Outside of Giada Pistilli and Talia Ringer, I don't think these tweets would appear on the typical ML researcher's timeline; they seem closer to niche rationality/EA shitposting.

Whether the typical ML person would think alignment/AI x-risk is really dumb is a different question, and I don't really know the answer to that one!

Although to be clear, it's still nice to have a bunch of different critical perspectives! This post exposed me to some people I didn't know of.

As someone who is not really on Twitter, I found this an interesting thread to read through, thanks! :)

I'd enjoy reading periodic digests like this of "here are surveys of what people are saying about us, and some cultural context on who these people are". I do feel a bit lost as to who all of these people are; knowing that would help me parse what is going on here a bit better.

Strongly agree.

I think it's essential to ask some questions first:

  • Why do people hold these views? (Is it just their personality, or did somebody in this community do something wrong?)
  • Is there any truth to these views? (As can be seen here, anti-AI safety views are quite varied. For example, many are attacks on the communities that care about them rather than the object-level issues.)
  • Does it even matter what these particular people think? (If not, then leave them be.)

Only then should one even consider engaging in outreach or efforts to improve optics.

Could someone explain the “e/acc” in some of these? I haven’t seen it before.

Neither have I, but judging by one of the tweets it stands for "effective accelerationist"? Which I guess means trying to get as much tech as possible and trusting [society? markets? individual users?] to deal effectively with any problem that comes up?

It's something that was recently invented on Twitter, here is the manifesto they wrote: https://swarthy.substack.com/p/effective-accelerationism-eacc?s=w
It's only believed by a couple people afaict, and unironically maybe by no one (although this doesn't make it unimportant!)

We expect e/acc to compile as “scary” for many EAs, although that’s not the goal. We think EA has a lack of focus and is missing an element of willingness to accept the terms of the deal in front of humanity — i.e. to be good stewards of a consciousness-friendly technocapital singularity or die trying.

Unlike EA, e/acc:

  • Doesn’t advocate for modernist technocratic solutions to problems
  • Isn’t passively risk-averse in the same way as EAs that “wish everything would just slow down”
  • Isn’t human-centric — as long as it’s flourishing, consciousness is good
  • Isn’t in denial about how fast the future is coming
  • Rejects the desire for a panopticon implied by longtermist EA beliefs

Like EA, e/acc:

  • Is prescriptive
  • Values more positive valence consciousness as good
  • Values zero recognizable consciousness in the universe as the absolute worst outcome.

I agree with some of these allegedly not-EA ideas and disagree with some of the allegedly EA ones ("more positive valence consciousness = good"). But I'm not sure the actual manifesto has anything to do with any of these.

Abridged version of #10, as I understand it, after looking it up on Twitter: Pistilli was aware of the threat of superintelligence, but eventually chose to work on other important AI ethics problems unrelated to X-risk. She was repeatedly told that this was negligent and irresponsible, and felt very alienated by people in her field. Now she refuses to delve into sentience/AGI problems altogether, which seems like a loss.

The lesson is that the X-risk crowd needs to learn to play better with the other kids, who work on problems that will pop up in a world where we aren't all dead.

I like the format.

The interesting ones IMO (meaning "the ones that may convey some kind of important truth") are 1, 6, 8, 17, 19. And maybe 10 but I can't see the thread so I don't know what it's saying.

Okay, this is very off-topic, but I just really want more EAs to know about a browser extension that has massively improved my Twitter experience.

https://github.com/insin/tweak-new-twitter/

Tweak New Twitter is a browser extension which removes algorithmic content from Twitter, hides news and trends, lets you control which shared tweets appear on your timeline, and adds other UI improvements

I also find the screenshotted post in #7 problematic: "Once AGI is so close to being developed that there's no longer sufficient time for movement building or public education to help with AI safety, I guess I can go on holiday and just enjoy the final few months of my life."

I'd be doubtful that official AI safety organisations or their representatives would communicate similarly. But a good takeaway in general is to not promote content on global priorities that insinuates a sense of powerlessness.
