Bio


How others can help me

A 19-year-old man deciding whether to double major in dentistry and CS (takes 8 years) or single major in CS (4 years). Currently struggling to figure out how feasible it is to work in the non-EA world to reduce AI s-risks.

How I can help others

I'd be willing to spend time discussing anything related to EA with you and to serve as a thinking partner (especially on s-risks and career planning).

Comments
76

What I mean here is that if you can give more than $50,000 a year, your contribution could plausibly be greater than doing direct work.

Of course, doing any small altruistic thing is valuable and admirable (even giving only $1).

That's partially true, I think. However, some EA orgs aren't funding-constrained at all, so they hire everyone above a certain bar rather than a limited number of people. In that case you get the whole credit, because if you decide not to work there, no one else will be hired in your place.

Thanks for your willingness to write down your critique. Your idea is basically: maybe I can't become a PR 90 researcher, but I probably could become PR 90 at something, collaborate with others, and make an impact.


But my critique is: suppose you're PR 90 at writing but average at everything else. Maybe you could apply for a writing position in the EA world, helping researchers publish better articles and papers. But it's hard to get into the EA world, and if you work in the non-EA world, it seems impossible to reduce AI s-risks if you're only good at writing. You could still get a writing job in the non-EA world, but what you'd write would probably not be related to AI s-risks at all. It seems that only people with research or policy skills can make an impact on AI s-risks in the non-EA world (for example by implementing safety designs). There are in fact not many skills that can make a difference in the non-EA world, especially for s-risks.

I think you make a good point: some donors are too conservative about donating. I wonder if you have thoughts on this question: https://forum.effectivealtruism.org/posts/n76Hpb8N53JBeeWD4/resolving-paradox-funding-isn-t-bottleneck-vs-80-high

Have you ever considered donating to the field of AI s-risks, like the Center for Reducing Suffering or independent s-risk researchers?

I think some of Brian Tomasik's essays are quite persuasive: https://briantomasik.com/

Also, I think we could put it this way: imagine you would be thrown into a volcano for 10 minutes in exchange for X years of happiness. How large would X have to be for you to accept the trade? I think most of us wouldn't accept even 1,000 years of happiness in exchange for being thrown into a volcano, and that's why reducing extreme suffering is important.

Some rough ideas: 1. For older people, it's harder to change their career path to something like AI safety. However, they could still do earning to give (though unfortunately, EtG has been discussed less in EA recently).

2. The cost of changing values: it'd be harder for people over 40 to switch from the value systems they've held for years to fully EA value systems. For me, since I encountered EA at 15, I don't have this problem.

Thanks very much for your answer; I'm very grateful for it. I think your idea is basically "the impact of AI risk research is fat-tailed". But there's still a question: if there is money left over and 80% of people aren't funded, why not fund them even though they'd have little impact? Maybe you'll say we should save the money, but will AI risk researchers in the future be much more capable than they are now? In other words, if the funding bar stays this high, is it likely that 10 years from now, 80% of people still won't reach the funding bar and be able to do direct work?

It seems many unfunded s-risk researchers are already senior (5+ years of experience), which means that even with 10 more years, they probably wouldn't become much more capable and pass the funding bar. But I'm uncertain, and criticism of this idea is welcome.

Could this news really be evidence that the funding gap will probably decrease significantly in the future? Over the next 3 years there may be many small donors coming from Anthropic, but what if Anthropic is surpassed by other frontier AI labs (like OpenAI or Google DeepMind)? There may be far fewer donors at those companies, so the increase in funding may not continue long-term. (Though I'm very uncertain; feel free to comment below and share your intuitions about this.)

It's interesting, and it's actually an under-discussed but important topic in the EA community.

However, I think you could compare direct work with donating to support AI safety research directly, rather than donating to GiveWell (which mainly focuses on improving global health), because for some people, donating to longtermist funds is much more effective than GiveWell.
