Bio

I currently lead EA funds.

Before that, I worked on improving epistemics in the EA community at CEA (as a contractor), as a research assistant at the Global Priorities Institute, on community building, and on global health policy.

Unless explicitly stated otherwise, opinions are my own, not my employer's.

You can give me positive and negative feedback here.


Answer by calebp

Hi Markus,

For context, I run EA Funds, which includes the EAIF (though the EAIF is chaired by Max Daniel, not me). We are still paying out grants to our grantees — though we have been slower than usual (particularly for large grants). We are also still evaluating applications and giving decisions to applicants (though this is also slower than usual).

We have communicated this to the majority of our grantees, but if you or anyone else reading this urgently needs a funding decision (in the next two weeks), please email caleb [at] effectivealtruismfunds [dot] org with URGENT in the subject line, and I will see what I can do. Please also include:

  • the name of the application (from previous funds email subject lines),
  • the reason the request is urgent,
  • the latest decision and payout dates that would work for you, such that if we can't make these dates there is little reason to make the grant.

You can also apply to one of Open Phil's programs; Open Philanthropy's program for grantees affected by the collapse of the FTX Future Fund may be of particular note to people applying to EA Funds.

> Pragmatically, I think many of the old folks around EA are either doing very well, or are kind of lost/exploring other avenues. Other areas allow people to have more reputable positions, but these are typically not very EA/effective areas. Often E2G isn't very high-status in these clusters, so I think a lot of these people just stop doing much effective work.


I haven't really noticed this happening very much empirically, but I do think the effect you are talking about is quite intuitive. Have you seen many cases of this that you're confident are correct (e.g. they aren't lost for other reasons like working on non-public projects or being burnt out)? No need to mention specific names.


> In theory, EAs are people who try to maximize their expected impact. In practice, EA is a light ideology that typically has a limited impact on people. I think that the EA scene has demonstrated success at getting people to adjust careers (in circumstances where it's fairly cheap and/or favorable to do so).

This seems incorrect to me, in absolute terms. By the standards of ~any social movement, EAs make large sacrifices and are focused on increasing their impact. I suspect you somewhat underrate how rare it is outside of EA to take ~any non-self-serving principles seriously enough to sacrifice significant income and change careers, particularly in new institutions/movements.
 

Yeah, all seems plausible. I suspect that the lack of a great "seed" for community projects is more predictive - it just happens to be the case that few people have done high-effort projects that found product-market fit. Maybe this is the rich-get-richer thing you mentioned.

I think this is a great initiative. SF is one of the most important (possibly the most important) places for EA/AIS work, but there aren't many high-effort community/field-building projects there. There are lots in Berkeley, but travelling from one place to the other happens less than you might naively expect.

Austen and his team are some of the best executors I have met in EA/AIS. I'm really excited to see where this goes!

Answer by calebp

I don't know if there is an official answer, but I would be very surprised if the 10% pledge required including your spouse's income as well. 

I think the GWWC team generally (IMO correctly) cares more about people fulfilling the "spirit" of the pledge than splitting hairs over who has and hasn't fulfilled it in some technical sense. Including your spouse's income may make sense in some cases, but it probably depends on specifics that you should just make a call on.

I'm really excited about this change in direction. My impression is that 80k staff increasingly have wanted to double down on making AI go well for a while, and I think it's important that the outward brand/image is aligned with what people in the organisation are most excited about.

My impression is that many commenters who haven't run or worked at cause-neutral organisations will underestimate the challenges of having an org vision and mission that doesn't feel coherent and consistent to its employees. One way I expect this to improve 80k as an organisation is that 80k may have an easier time hiring people who care a lot about AI and are deeply knowledgeable on the topic, even if (hypothetically) the case for working at 80k for people who mostly care about AI risk was about as strong as before the official switch.

I really appreciate that 80k leadership are bold enough to focus on what they think is most useful. Pivoting 80k seems much better to me than having most senior people leave 80k to work on a different project and then 80k being a very different org people-wise with the same brand. 

(I haven't been through the many comments on this post - apologies if this is wrong in meaningful ways that have been clarified in other comments)

This has been on the stack to look into for a few weeks. I think we might just take the graphs down until we're confident they are all accurate. They were broken (and then fixed) after we moved accounting systems, but it looks like they might have broken again.

I agree that a lot of EAs seem to make this mistake, but I don't think the issue is with the neglectedness measure. In my experience, people often incorrectly scope the area they are analysing and fail to notice that a specific sub-area can be highly neglected, tractable, and important even if the wider area it's part of is not very neglected.

For example, working on information security in the USG is imo not very neglected, but working on standards for data centres that train frontier LMs is.

Answer by calebp

Fwiw I think the "deepfakes will be a huge deal" stuff has been pretty overhyped and the main reason we haven't seen huge negative impacts is that society already has reasonable defences against fake images that prevent many people from getting misled by them.

I don't think this applies to many other misuse-style risks that the AI x-risk community cares about.

For example, the main differences in my view between AI-enabled deepfakes and AI-enabled biorisks are:

  • marginal people getting access to bioweapons is just a much bigger deal than marginal people being able to make deepfakes,
  • there is much less room for the price of deepfakes to decrease than for the cost of developing a bioweapon (Photoshop has existed for a long time and expertise is relatively cheap).

My point was that I don’t think it was marketing or a historical accident, and it’s actually quite different to the other companies that you named which were all just straightforward revenue generating companies from ~day 1.
