Bio

I currently lead EA Funds.

Before that, I worked on improving epistemics in the EA community at CEA (as a contractor), as a research assistant at the Global Priorities Institute, on community building, and on global health policy.

Unless explicitly stated otherwise, opinions are my own, not my employer's.

You can give me positive and negative feedback here.

Comments

the AI safety group was just way more exciting and serious and intellectually alive than the EA group — this is caricatured,


Was the AIS group led by people who had EA values or who were significantly involved with EA?

I’m sure it was a misunderstanding, but fwiw, in the first paragraph, I do say “positive contributors”, by which I meant people having a positive impact.

I agree with some parts of your comment, though it’s not particularly relevant to the thesis that most people with significant responsibility for most of the top-tier work (according to my view on top-tier areas for making AGI go well) have values that are much more EA-like than would naively be expected.

I don’t think the opposite of (i) is true.

Imagine a strong fruit loopist, who believes there’s an imperative to maximise total fruit loops.

If you are not a strong fruit loopist, there’s no need to minimise total fruit loops; you can just have preferences that don’t have much of an opinion on how many fruit loops should exist (i.e. everyone’s position).

Maybe this is working for them, but I can’t help feeling icked by it, and it makes me lose a bit of faith in the project.

Plausibly useful feedback, but I think this is ~0 evidence for how much faith you should have in Blue Dot relative to factors like reach, content, funding, materials, testimonials, reputation, public writing, past work of team members... If I were doing a grant evaluation of Blue Dot, it seems highly unlikely that this would make it into the eval.

There's definitely some selection bias (I know a lot of EAs), but anecdotally, I feel that almost all the people who, in my view, are "top-tier positive contributors" to shaping AGI seem to exemplify EA-type values (though it's not necessarily their primary affinity group).

Some "make AGI go well influencers" who have commented or posted on the EA Forum and, in my view, are at the very least EA-adjacent include Rohin Shah, Neel Nanda, Buck Shlegeris, Ryan Greenblatt, Evan Hubinger, Oliver Habryka, Beth Barnes, Jaime Sevilla, Adam Gleave, Eliezer Yudkowsky, Davidad, Ajeya Cotra, Holden Karnofsky ....  most of these people work on technical safety, but I think the same story is roughly true for AI governance and other "make AGI go well" areas.[1]

I personally wouldn't describe all of the above people's work as being in my absolute top tier according to my idiosyncratic worldview (note that many of them are working on at least somewhat conflicting agendas so they can't all be in my top tier), and it could be true that "early EA" was a strong attractor for such people, but EA has since lost its ability to attract "future AI thought leaders". [2]

I also want to make a stronger, but harder to justify, claim that the vast majority of people doing top-tier work in AI safety are ~EAs. For example, many people would consider Redwood Research's work top tier, and both Buck and Ryan (according to me) exemplify EA values (scope sensitivity, altruism, ambition, etc.). Imo, some combination of scope sensitivity, impartiality, altruism, and willingness to take weird ideas seriously seems extremely useful (and maybe even critical) for doing the most important "make AI go well" work.

  1. ^

    I know that some of these people wouldn't "identify as EA", but that's not particularly relevant. The thing I'm trying to point at is a set of values that are common in EA but rare in AI/ambitious people/elites/the general public.

  2. ^

    It also seems good to mention that there are some people who are EAs (according to my definition) having a large negative impact on AI risk.

On a related note, I happened to be thinking about this a little today as I took a quick look at what ~18 past LTFF grantees who were given early-career grants are doing now, and at least 14 of them are doing imo clearly relevant things for AIS/EA/GCR etc. I couldn't quickly work out what the other four were doing (though I could have just emailed them or spent more than 20 minutes total on this exercise).

For me, it was a moderate update against "bycatch" amongst LTFF grantees (an audience which, in principle, should be especially vulnerable to bycatch), though I don't think this should be much of an update for others, especially when thinking about the EA community more comprehensively.

Also, not all LTFF funding was used to make videos. Rob has started/supported a bunch of other field-building projects.

We do fund a small amount of non-AI/bio work, so it seems bad to rule those areas out.

It could be worth bringing more attention to the breakdown of our public grants if the application distribution is very different to the funded one; I’ll check internally next week to see if that’s the case.

Answer by calebp8

We evaluate grants in other longtermist areas, but you’re correct that it’s rare for us to fund things that aren’t AI or bio (and biosecurity grants have been relatively rare recently). We occasionally fund work in forecasting, macrostrategy, and field-building.

It’s possible that we’ll support a broader array of causes in the future, but until we make an announcement, I think the status quo of investigating a range of areas in longtermism and then funding the things that seem most promising to us (as represented by our public reporting) will persist.
