I currently lead EA Funds.
Before that, I worked on improving epistemics in the EA community at CEA (as a contractor), as a research assistant at the Global Priorities Institute, on community building, and on global health policy.
Unless explicitly stated otherwise, opinions are my own, not my employer's.
You can give me positive and negative feedback here.
I agree that a lot of EAs seem to make this mistake, but I don't think the issue is with the neglectedness measure. In my experience, people often incorrectly scope the area they are analysing and fail to notice that a specific sub-area can be highly neglected, as well as tractable and important, even if the wider area it's part of is not very neglected.
For example, working on information security in the US government is, in my opinion, not very neglected, but working on standards for datacentres that train frontier LMs is.
Fwiw, I think the "deepfakes will be a huge deal" stuff has been pretty overhyped, and that the main reason we haven't seen huge negative impacts is that society already has reasonable defences against fake images, which prevent many people from getting misled by them.
I don't think this applies to many of the other misuse-style risks that the AI x-risk community cares about.
For example, the main differences in my view between AI-enabled deepfakes and AI-enabled biorisks are:
* marginal people getting access to bioweapons is just a much bigger deal than marginal people being able to make deepfakes
* there is much less room for the price of making deepfakes to fall than there is for the cost of developing a bioweapon (Photoshop has existed for a long time, and the relevant expertise is relatively cheap).
People call these companies labs due to some combination of marketing and historical accident. To my knowledge, no one ever called Facebook, Amazon, Apple, or Netflix "labs", despite each of them employing many researchers and pushing a lot of genuine innovation in many fields of technology.
I agree overall, but fwiw I think that for the first few years of OpenAI's and DeepMind's existence, they were mostly pursuing blue-sky research with few obvious nearby commercial applications (e.g. training NNs to play video games). I think "lab" was a pretty reasonable term back then - or at least about as reasonable as calling, say, Bell Labs a lab.
Fwiw, I think donor lotteries are great and I'm glad they have a place in the effective giving ecosystem. I'm not sure I follow most of your analysis, but I'd push back on:
> Donor lotteries assume there’s demand for the model....
My understanding is that donor lotteries don't take up much time or attention from people who aren't participating in them. They are pretty low-cost to run relative to managed funds, and people entering them have a good sense of both their chance of winning and the size of the pool they'll be able to direct if they do win.
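To illustrate that last point with a toy calculation (the numbers here are hypothetical, not taken from any actual lottery): a donor's expected money moved is unchanged by entering; the lottery only changes the variance.

```python
# Toy donor-lottery arithmetic with hypothetical numbers (not from any real lottery).
contribution = 5_000   # one donor's contribution, in dollars
pool = 100_000         # total lottery pool, in dollars

# Chance of winning is proportional to your share of the pool.
win_probability = contribution / pool            # 0.05, i.e. 5%

# If you win, you direct the whole pool, so your expected money moved
# equals your contribution; the lottery changes variance, not expectation.
expected_money_moved = win_probability * pool    # 5,000.0

print(f"P(win) = {win_probability:.0%}, expected money directed = ${expected_money_moved:,.0f}")
```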
Interesting. I think I only endorse a weak version of this claim, and I expect replies to the post to be fairly nitpicky, which would make writing it annoying.
On the other hand, the weak version seems pretty obvious to me, which makes me excited to write a longer post making the case for it. Are there any particular points you'd like such a post to cover?
Hi Markus,
For context, I run EA Funds, which includes the EAIF (though the EAIF is chaired by Max Daniel, not me). We are still paying out grants to our grantees, though we have been slower than usual (particularly for large grants). We are also still evaluating applications and giving decisions to applicants (though this is also slower than usual).
We have communicated this to the majority of our grantees, but if you or anyone else reading this urgently needs a funding decision (in the next two weeks), please email caleb [at] effectivealtruismfunds [dot] org with URGENT in the subject line, and I will see what I can do. Please also include:
You can also apply to one of Open Phil’s programs; in particular, Open Philanthropy’s program for grantees affected by the collapse of the FTX Future Fund may be of note to people who are applying to EA Funds because of the FTX crash.