How I can help others

Keywords: software engineering, startups, web development, HTML, JavaScript, CSS, React, Ruby on Rails, Django, mobile apps, Android.


Hi calebp.

If you have time to read the papers, let me know if you think they are actually useful.

Thanks a lot for giving more context. I really appreciate it.

These were not “AI Safety” grants

These grants come from Open Philanthropy's focus area "Potential Risks from Advanced AI". I think it's fair to say they are "AI Safety" grants.

Importantly, the awarded grants were to be disbursed over several years for an academic institution, so much of the work which was funded may not have started or been published. Critiquing old or unrelated papers doesn't accurately reflect the grant's impact.

Fair point. I agree that old papers might not accurately reflect the grant's impact, but they are likely correlated with it.

Your criticisms of the papers lack depth ... Do you do research in this area, ...

I totally agree. That's why I shared this post as a question. I'm not an expert in the area and I wanted an expert to give me context.

Could you please update your post to address these issues and provide a more accurate representation of the grants and the lab's work?

I added an update linking to your answer.

Overall, I'm concerned about Open Philanthropy's granting. I have nothing against Thompson or his lab's work.

Sorry, I should have attached this in my previous message.

where does it say that he is a guest author?


This paper is from Epoch. Thompson is a "Guest author".

I think this paper and this article are interesting but I'd like to know why you think they are "pretty awesome from an x-risk perspective".

Epoch AI has received much less funding from Open Philanthropy ($9.1M), yet they are producing world-class work that is widely read, used, and shared.

Agree. OP's hits-based giving approach might justify the 2020 grant, but not the 2022 and 2023 grants.

Thanks for your thorough comment, Owen.

And do the amounts ($1M and $0.5M) seem reasonable to you?

As a point of reference, Epoch AI is hiring a "Project Lead, Mathematics Reasoning Benchmark". This person will receive ~$100k for a 6-month contract.

In the case of OpenDevin it seems like the grant is directly funding an open-source project that advances capabilities.

I'd like more transparency on this.

Very good point. Yeah, it seems like a 1/10 life has to be net negative. But I'm not sure a 4/10 life is net negative.

The difference in subjective well-being is not as high as we might intuitively think.

(anecdotally: my grandparents were born in poverty and they say they had happy childhoods)

The average resident of a low-income country rated their life satisfaction as 4.3 on a subjective 1-10 scale, while the average among residents of G8 countries was 6.7.

Doing a naive calculation: 6.7 / 4.3 ≈ 1.56 (+56%).

The difference in the cost of saving a life between a rich and a poor country is 10x-1000x.

It would probably be good to take this into account, but I don't think it would change the outcomes that much.
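For concreteness, here's the naive back-of-the-envelope comparison as a short sketch (the numbers are the ones quoted above; this is illustrative only, not a real cost-effectiveness model):

```python
# Naive comparison of well-being ratio vs. cost-of-saving-a-life ratio.
# Figures taken from the discussion above; purely illustrative.
g8_satisfaction = 6.7           # avg. life satisfaction, G8 residents (1-10 scale)
low_income_satisfaction = 4.3   # avg. life satisfaction, low-income-country residents

wellbeing_ratio = g8_satisfaction / low_income_satisfaction
print(f"Well-being ratio: {wellbeing_ratio:.2f} (+{(wellbeing_ratio - 1) * 100:.0f}%)")

# The cost of saving a life is roughly 10x-1000x higher in rich countries,
# which dwarfs the ~1.56x difference in reported well-being.
cost_ratio_low, cost_ratio_high = 10, 1000
print(f"Cost-of-saving-a-life ratio: {cost_ratio_low}x-{cost_ratio_high}x")
```

Even if each life saved in a poor country is weighted down by the lower average satisfaction, the 10x-1000x cost gap still dominates the comparison.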
