Currently doing local AI safety Movement Building in Australia and NZ.
I would like to suggest that folk not downvote this post below zero. I'm generally in favour of allowing people to defend themselves, unless their response is clearly in bad faith. I'm sure many folk strongly disagree with the OP's desired social norms, but this is different from bad faith.
Additionally, I suspect most of us have very little insight into how community health operates and this post provides some much needed visibility. Regardless of whether you think their response was just right, too harsh or too lenient, this post opens up a rare opportunity for the community to weigh in.
I suspect people are downvoting this post either because they think the author is a bad person or because they don't want the author at EA events. I would suggest that neither of these is a good reason to downvote this specific post into the negative.
I'm surprised that there hasn't been an attempt (as far as I know) to fund/create a competitor to Epoch.ai.
It wouldn't have to compete on all benchmarks, but it would be good to have a forecasting organisation that could be trusted with potentially dual use insights into capabilities trajectories. I don't believe this would require uniformity of views: it would just require people with a proper sense of responsibility.
I also think that the bad judgement displayed by some of their employees casts doubt on some of their research (emphasis on some, particularly the more subjective elements; Epoch is still my go-to source in many cases). Unfortunately, there's a difference between being intelligent and being wise, and one common way this distinction plays out is that some quite intelligent folks follow the incentive gradient towards being excessively and reflexively contrarian. Just to be clear, I'm not trying to attack their research. A second opinion would always have been valuable, but the fact that I trust them less on the margin makes the need for one feel more pressing to me.
In terms of producing high-quality research, I'd note that Epoch has done many things well, but it has also made a few mistakes that I would, perhaps controversially, call clear mistakes.
I'm also pretty sure that there's sufficient talent in the space now to create a second such effort. It could also start small and funders could help it scale if it proves itself.
Thanks for sharing.
I assume you've read Tyler Alterman's excellent but long essay: https://forum.effectivealtruism.org/posts/AjxqsDmhGiW9g8ju6/effective-altruism-in-the-garden-of-ends
How do your views compare to his?
"However, AI timelines have led me to conclude that everything I had previously planned on doing over the course of the coming months or years, must now be completed as soon as possible, ideally by the end of the weekend."
Really? That feels like excessive haste.
We seem to be seeing some kind of vibe shift when it comes to AI.
What is less clear is whether this is a major vibe shift or a minor one.
If it's a major one, then we don't want to waste this opportunity. (It wasn't clear immediately after the release of ChatGPT that it was a limited window of opportunity; if we'd known, maybe we could have leveraged it better.)
In any case, we should try not to waste this opportunity, if it happens to turn out to be a major vibe shift.
Should our EA residential program prioritize structured programming or open-ended residencies?
There's more information value in exploring structured programming.
That said, I'd be wary of duplicating existing programs; e.g. if the AI Safety Fellowship became a knock-off MATS.
Very excited to read this post. I strongly agree with both the concrete direction and with the importance of making EA more intellectually vibrant.
Then again, I'm rather biased since I made a similar argument a few years back.
Here are the main differences between what I was suggesting back then and what Will is suggesting here:
I also agree with the "fuck PR" stance (my words, not Will's). Since the AI safety movement faces greater pressure to focus on PR, being further towards the pointy end, I think it's important for the EA movement to use its freedom to provide a counterbalance.