Chris Leong

Organiser @ AI Safety Australia and NZ
Sydney NSW, Australia

Currently doing local AI safety Movement Building in Australia and NZ.

Sequences: Wise AI Wednesdays

Very excited to read this post. I strongly agree with both the concrete direction and with the importance of making EA more intellectually vibrant.

Then again, I'm rather biased since I made a similar argument a few years back.

Here are the main differences between what I was suggesting back then and what Will is suggesting here:

  • I suggested that it might make sense for virtual programs to create a new course rather than just changing the intro fellowship content. My current intuition is that splitting the intro fellowship would likely be the best option for now. Some people will get really annoyed if the course focuses too much on AI, whilst others will get annoyed if the course focuses too much on questions that would likely become redundant in a world where we expect capability advances to continue. My intuition is that things aren't at the stage where it'd make sense for the intro fellowship to do a complete AGI pivot, so that's why I'm suggesting a split. Both courses should probably still give participants a taste of the other.
  • I put more emphasis on the possibility that AI might be useful for addressing global poverty and that it intersects with animal rights, whilst perhaps Will sees this as too incrementalist?
  • Whilst I also suggested that putting more emphasis on the implications of advanced AI might make EA less intellectually stagnant, I also noted that perhaps it'd be better for EA to adopt a yearly theme and simply make the rise of AI the first such theme. I still like the yearly theme idea, but the odds and legibility of AI being really important have increased enough that I'm now feeling a lot more confident in identifying AI as an area that deserves more than just a yearly theme.

I also agree with the "fuck PR" stance (my words, not Will's). Especially insofar as the AIS movement has greater pressure to focus on PR, since it's further towards the pointy end, I think it's important for the EA movement to use its freedom to provide a counter-balance to this.

I would like to suggest that folk not downvote this post below zero. I'm generally in favour of allowing people to defend themselves, unless their response is clearly in bad faith. I'm sure many folk strongly disagree with the OP's desired social norms, but this is different from bad faith.

Additionally, I suspect most of us have very little insight into how community health operates and this post provides some much needed visibility. Regardless of whether you think their response was just right, too harsh or too lenient, this post opens up a rare opportunity for the community to weigh in.

I suspect people are downvoting this post either because they think the author is a bad person or they don't want the author at EA events. I would suggest that neither of these are good reasons to downvote this specific post into the negative.

I thought some of their analysis was weak. I made comments at the time, but unfortunately, I don't have time at the moment to go back and find them.

I'm surprised that there hasn't been an attempt (as far as I know) to fund/create a competitor to Epoch.ai.

It wouldn't have to compete on all benchmarks, but it would be good to have a forecasting organisation that could be trusted with potentially dual use insights into capabilities trajectories. I don't believe this would require uniformity of views: it would just require people with a proper sense of responsibility.

I also think that the bad judgement displayed by some of their employees impinges on some of their research (emphasis on some, particularly the more subjective elements; Epoch is still my go-to source in many cases). Unfortunately, there's a difference between being intelligent and being wise, and one common way this distinction plays out is that some quite intelligent folks follow the incentive gradient towards being excessively and reflexively contrarian. Just to be clear, I'm not trying to attack their research, only noting that whilst a second opinion would always have been valuable, the fact that I trust them less on the margin makes the need for such a second opinion feel more pressing to me.

In terms of producing high-quality research, I'd note that Epoch has done many things well, but has also made a few mistakes that I would, perhaps controversially, call clear mistakes.

I'm also pretty sure that there's sufficient talent in the space now to create a second such effort. It could also start small and funders could help it scale if it proves itself.

Thanks for sharing. 

I assume you've read Tyler Alterman's excellent but long essay: https://forum.effectivealtruism.org/posts/AjxqsDmhGiW9g8ju6/effective-altruism-in-the-garden-of-ends

How do your views compare to his?

"However, AI timelines have led me to conclude that everything I had previously planned on doing over the course of the coming months or years must now be completed as soon as possible, ideally by the end of the weekend."

Really? That feels like excessive haste.

We seem to be seeing some kind of vibe shift when it comes to AI.

What is less clear is whether this is a major vibe shift or a minor one.

If it's a major one, then we don't want to waste this opportunity. (It wasn't clear immediately after the release of ChatGPT that it really was a limited window of opportunity; if we'd known, maybe we would have been able to leverage it better.)

In any case, we should try not to waste this opportunity if it does turn out to be a major vibe shift.

Sure, but these orgs found their own niche.

HIP and Successif focus more on mid-career professionals.

Probably Good focuses on a broader set of cause areas, and has taken on some of 80k's old responsibilities since 80k started focusing more on transformative AI.

Oh, I think AI safety is very important; short-term AI safety too though not quite 2027 😂.

Knock-off MATS could produce a good amount of value, I just want the EA hotel to be even more ambitious.

50% disagree

Should our EA residential program prioritize structured programming or open-ended residencies?


There's more information value in exploring structured programming.

That said, I'd be wary of duplicating existing programs; i.e. if the AI Safety Fellowship became a knock-off MATS.
