Non-EA interests include chess and TikTok (@benthamite). We are probably hiring: https://metr.org/hiring
Feedback always appreciated; feel free to email/DM me or use this link if you prefer to be anonymous.
Maybe instead of "where people actually listen to us" it's more like "EA in a world where people filter the most memetically fit of our ideas through their preconceived notions into something that only vaguely resembles what the median EA cares about, but is importantly different from the world in which EA didn't exist."
I had considered calling the third wave of EA "EA in a World Where People Actually Listen to Us".
Leopold's situational awareness memo has become a salient example of this for me. I used to sometimes think that arguments about whether we should avoid discussing the power of AI in order to avoid triggering an arms race were a bit silly and self-important, because obviously defense leaders weren't going to listen to some random internet charity nerds and change policy as a result.
Well, they are and they are. Let's hope it's for the better.
Thanks for writing this, Alene!
> The reason I feel excited about dedicating my life to LIC, however, is because I believe we will win.
Is there something you could share about why you think this? E.g. have analogous projects succeeded before, have the previous cases had judgments indicating that the case would succeed on appeal, etc.?
Do you have a sense of how to interpret the differences between options? E.g. I could imagine that basically everyone always gives an answer between 5 and 6, in which case a difference between 5.1 and 5.9 is huge. I could also imagine that scores are uniformly distributed across the entire range of 1-7, in which case 5.1 vs. 5.9 isn't that big.
Relatedly, I like how you included "positive action" as a comparison point but I wonder if it's worth including something which is widely agreed to be mediocre (Effective Lawnmowing?) so that we can get a sense of how bad some of the lower scores are.
In my post, I suggested that one possible future is that we stay at the "forefront of weirdness." Calculating moral weights, to use your example.
I could imagine though that the fact that our opinions might be read by someone with access to the nuclear codes changes how we do things.
I wish there were more debate about which of these futures is more desirable.
(This is what I was trying to get at with my original post. I'm not trying to make any strong claims about whether any individual person counts as "EA".)