
RyanCarey

8928 karma · Joined Aug 2014

Bio

Researching Causality and Safe AI at Oxford

Previously, founder (with help from Trike Apps) of the EA Forum.

Discussing research etc at https://twitter.com/ryancareyai.

Comments (1236)

Topic contributions (6)

There is an "EA Hotel", which is decently sized, very intensely EA, and very cheap.

Occasionally it makes sense for people to accept very low cost-of-living situations. But a person's impact is usually a lot higher than their salary. Suppose that a person's salary is x, their impact in SF is 10x, and that living in SF makes them 1.1 times as impactful as living elsewhere, due to proximity to funders and AI companies. Leaving SF then sacrifices about 0.9x of impact per year, so you would have to cut your living costs by roughly 90% of your salary just to break even. Otherwise, you would essentially be stepping over dollars to pick up dimes.
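To make the break-even arithmetic explicit (a sketch using the illustrative numbers above; the 10x impact ratio and 1.1 multiplier are assumptions for the example, not measured figures):

```latex
% Illustrative break-even calculation, using the assumed numbers above
\begin{aligned}
\text{impact}_{\mathrm{SF}} &= 10x, \qquad
\text{impact}_{\mathrm{elsewhere}} = \frac{10x}{1.1} \approx 9.1x \\
\text{impact lost by leaving SF} &= 10x - \frac{10x}{1.1} \approx 0.91x \\
\text{break-even requires:}\quad \text{cost savings} &\geq 0.91x
\quad (\approx 90\% \text{ of the salary } x)
\end{aligned}
```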

Of course there are some theoretical reasons for growing fast. But theory only gets you so far on this issue. Rather, this question depends on whether growing EA is promising right now (I lean against) compared to other projects one could grow. Even if EA looks like the right thing to build, you need to talk to people who have watched EA grow and contract at various rates over the last 15 years, to understand which modes of growth have been healthier and have contributed to lasting capacity, rather than just an increase in raw numbers. In my experience, one of the least healthy phases of EA was when the emphasis on growth was heaviest, perhaps around 1.5-4 years ago, whereas it seemed to do better at pretty much all other times.

Yes, they were involved in the first, small iteration of EAG, but their contributions were small compared to the human capital that they consumed. More importantly, they were a high-demand group that caused a lot of people serious psychological damage; for many, it has taken years to recover a sense of normality. They staged a partial takeover of some major EA institutions. They also gaslit the EA community about what they were doing, which confused and distracted decent-sized subsections of the community for years.

I watched The Master a couple of months ago and would recommend it: a simultaneously compelling and moving depiction of the experience of cult membership.

Interesting point, but why do these people think that climate change is likely to cause extinction? Again, it's because their thinking is politics-first. Their side of politics is warning of a likely "climate catastrophe", so they have to make that catastrophe as bad as possible: existential.

I think that disagreement about the size of the risks is part of the equation. But that misses what is, for at least a few of the prominent critics, the main element: people like Timnit, Kate Crawford, and Meredith Whittaker are bought into leftist ideologies focused on things like "bias", "prejudice", and "disproportionate disadvantage". So they see AI primarily as an instrument of oppression. The idea of existential risk cuts against the oppression/justice narrative, in that it could kill everyone equally. So they have to oppose it.

Obviously this is not what is happening with everyone in the FATE AI or AI ethics community, but I do think it's what's driving some of the loudest voices, and we should be clear-eyed about that.

I guess you're right, but even so I'd ask:

  • Is it 11 new orgs, or will some of them stick together (perhaps with CEA) when they leave? 
  • What about other orgs not on the website, like GovAI and Owain's team? 
  • Separately, are any teams going to leave CEA?

Related to (1) is the question: which sponsored projects are definitely being spun out?

Hmm, OK. Back when I met Ilya, around 2018, he was radiating excitement that his next idea would create AGI, and he didn't seem sensitive to safety worries. I also thought it was "common knowledge" that his interest in safety increased substantially between 2018 and 2022, which is why I was unsurprised to see him in charge of superalignment.

Re Elon-Zillis, all I'm saying is that, at the time the seat was created, it looked to Sam like it would belong to someone loyal to him.

You may well be right about D'Angelo and the others.

  1. The main thing that I doubt is that Sam knew, at the time, that he was gifting the board to doomers. Ilya was a loyalist and non-doomer when appointed. Elon was, I guess, some mix of doomer and loyalist at the start. Given how AIS worries generally increased in SV circles over time, more likely than not some of D'Angelo, Hoffman, and Hurd moved toward the "doomer" pole as well.

Nitpicks:

  1. I think Dario and others would've also been involved in setting up the corporate structure.
  2. Sam never gave the "doomer" faction a near-majority. That only happened because 2-3 "non-doomers" left and Ilya flipped.