Raemon · 2526 karma


I was encouraged by the positive response to my posts: it turned out that many people found them helpful! But that also raised the question: why isn't anyone else doing this? In a community of people who care a ton about the most effective ways to donate money, why wasn't anyone else set up to make similarly detailed cost-effectiveness analyses?

A sort of central paradox of EA as a movement/community is: you'd think writing up cost-benefit analyses of donation targets would be a core community activity, but there are also big professional orgs evaluating all the charities, and a lot of charities feel very fuzzy / difficult to evaluate.

I think it'd be cool if "attempt a BOTEC (back-of-the-envelope calculation) evaluating donation targets" were the sort of thing people did at EA meetups on the regular. (Seems more grounding than "spend most of the time recruiting more people to EA".)
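For illustration, here's a minimal sketch of the kind of BOTEC such a meetup exercise might produce. All numbers are hypothetical placeholders, not real charity data:

```python
# Minimal back-of-the-envelope cost-effectiveness (BOTEC) sketch.
# All inputs below are hypothetical placeholders, not real charity data.

def cost_per_outcome(total_donation, cost_per_unit, units_per_outcome):
    """Dollars needed to achieve one unit of the target outcome."""
    units_delivered = total_donation / cost_per_unit
    outcomes = units_delivered / units_per_outcome
    return total_donation / outcomes

# Hypothetical: $10,000 donated, $5 per intervention unit,
# 500 units needed per outcome achieved.
estimate = cost_per_outcome(10_000, 5, 500)
print(f"~${estimate:,.0f} per outcome")
```

The point of the exercise is less the final number than making the assumptions (cost per unit, units per outcome) explicit enough to argue about.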

It feels fairly alarming to me that this post didn't get more pushback here and is so highly upvoted.

I think it makes a couple of interesting points, but then makes some extremely crazy-sounding claims, taking the Rethink Priorities 7-15% numbers at face value, when the arguments for those AFAICT don't even have particular models behind them. That is a pretty crazy-sounding number that needs much better argumentation than "a poll of people said so", but here it's just asserted without much commentary at all.

(In addition to things other people have mentioned here, like the 97% number being very sus, and "why are we assuming they have net negative lives?", there's describing "10% as bad as a chicken" as a "conservative assumption" when it's basically made up. Also, it has some random political potshots that aren't really affecting the core claim, but also seem bad for EA Forum culture.)

This feels like sort of the central example of why "EA Vegan Advocacy is not truthseeking, and it's everyone's problem" needed to get written. (Disclaimer: I am close with the author of that post.)

(By contrast, it has negative karma on LessWrong. I have a weak disagreement with Oliver there about whether it should be more like -9 karma or more like -2 to 10, but it was at 85 karma when I found it here, before me and a couple of other people strong-downvoted it, and that suggests the EA Forum basically has no filter for poorly argued claims.)

I recall previously hearing there might be a final round of potential amendments in response to things Gavin Newsom requests. Was/is that accurate?

(several years late, whoops!)

Yeah, my intent here was more "be careful deciding to scale your company to the point you need a lot of middle managers, if you have a nuanced goal", rather than "try to scale your company without middle managers."

In the context of an EA jobs list it seems like both are pretty bad. (there's the "job list" part, and the "EA" part)

Yeah, this does seem like an improvement. I appreciate you thinking about it and making some updates.

Can you say a bit more about:

and (2) worse in private than in public.

?

Mmm, nod. I will look into the actual history here more, but, sounds plausible. (edited the previous comment a bit for now)

Following up my other comment:

To try to be a bit more helpful rather than just complaining and arguing: when I model your current worldview, and try to imagine a disclaimer that helps a bit more with my concerns but seems like it might work for you given your current views, here's a stab. Changes bolded.

OpenAI is a frontier AI research and product company, with teams working on alignment, policy, and security. We recommend specific opportunities at OpenAI that we think may be high impact. We recommend applicants pay attention to the details of individual roles at OpenAI, and form their own judgment about whether the role is net positive. We do not necessarily recommend working at other positions at OpenAI.

You can read considerations around working at a frontier AI company in our career review on the topic.

(It's not my main crux, but "frontier" felt both like a more up-to-date term for what OpenAI does, and also like it's making a claim about the product more specifically, rather than generally awarding status to the company the way "leading" does.)
