Bio

I currently lead EA Funds.

Before that, I worked on improving epistemics in the EA community at CEA (as a contractor), as a research assistant at the Global Priorities Institute, on community building, and on global health policy.

Unless explicitly stated otherwise, opinions are my own, not my employer's.

You can give me positive and negative feedback here.

Comments

Answer by calebp · Dec 13, 2022

Hi Markus,

For context, I run EA Funds, which includes the EAIF (though the EAIF is chaired by Max Daniel, not me). We are still paying out grants to our grantees — though we have been slower than usual (particularly for large grants). We are also still evaluating applications and giving decisions to applicants (though this is also slower than usual).

We have communicated this to the majority of our grantees, but if you or anyone else reading this urgently needs a funding decision (in the next two weeks), please email caleb [at] effectivealtruismfunds [dot] org with URGENT in the subject line, and I will see what I can do. Please also include:

  • the name of the application (from previous funds email subject lines),
  • the reason the request is urgent,
  • the latest decision and payout dates that would work for you, such that if we can’t make these dates there is little reason to make the grant.

You can also apply to one of Open Phil’s programs; Open Philanthropy’s program for grantees affected by the collapse of the FTX Future Fund may be of particular note to people applying to EA Funds due to the FTX crash.

Here are some very brief takes on the CCM web app now that RP has had a chance to iron out any initial bugs. I'm happy to elaborate more on any of these comments.

  • Some praise
    • This is an extremely ambitious project, and it's very surprising that this is the first unified model of this type I've seen (though I'm sure various people have their own private models).
      • I have a bunch of quantitative models on cause prio sub-questions, but I don't like to share these publicly because of the amount of context that's required to interpret them (and because the methodology is often pretty unrefined) - props to RP for sharing theirs!
    • I could see this product being pretty useful to new funders who have a lot of flexibility over where donations go.
    • The intra-worldview models (e.g. comparing animal welfare interventions) seem reasonable to me (though I only gave these a quick glance).
    • I see this as a solid contribution to cause prioritisation efforts and I admire the focus on trying to do work that people might actually use - rather than just producing a paper with no accompanying tool.
  • Some critiques
    • I think RP underrates the extent to which their default values will end up being the defaults for model users (particularly some of the users they most want to influence)
      • I think the default values are (in my personal view) pretty far from my values or the mean of values for people who have thought hard about these topics in the EA community.
    • The x-risk model in particular seems to bake in quite conservative assumptions (medium-high confidence)
      • I found it difficult to provide very large numbers for future population per star - I think with current rates of economic and compute growth, the number of digital people could be extremely high very quickly (see the rough growth sketch after this list).
      • I think some x-risk interventions could plausibly have very long-run effects on x-risk (e.g. by building an aligned superintelligence)
    • The x-risk model seems to confuse existential risk and extinction risk (medium confidence - maybe this was explained somewhere, and I missed it)
    • Using the model felt clunky to me: it didn't handle extremely large values well and made iterating on values difficult; it's not the kind of thing that you can "play with" imo.
  • Some improvements I'd like to see
    • I'd be interested in seeing RP commission some default values from researchers/EAs who can explain their suggested values well.
    • I would like the overall app to feel more polished/responsive/usable - idk how much this would cost, but I'd guess it's at least a month's work for a competent dev, maybe more.
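
As a rough illustration of the growth point above (these are my own toy numbers, not RP's defaults; the growth rate, horizon, and starting population are all hypothetical), compound growth in usable compute gets you to astronomically large populations surprisingly fast:

```python
# Toy compound-growth sketch (illustrative assumptions only, not RP's defaults):
# sustained ~30% annual growth in usable compute over a couple of centuries
# multiplies capacity by ~10^22, which is why I wanted the model to accept
# very large values for future population per star.

growth_rate = 0.3          # hypothetical annual growth in usable compute
years = 200                # a historically short horizon
initial_population = 1e10  # roughly today's biological population, for scale

multiplier = (1 + growth_rate) ** years
print(f"growth factor over {years} years: {multiplier:.2e}")                  # ~6.1e22
print(f"implied digital population: {initial_population * multiplier:.2e}")   # ~6.1e32
```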

I don't understand the point about the complexity of value being greater than the complexity of suffering (or disvalue). Can you possibly motivate the intuition here? It seems to me like I can reverse the complex valuable things that you name and get their "suffering equivalents" (e.g. friendship -> hostility, happiness -> sadness, love -> hate, etc.), and they don't feel significantly less complicated.

I don't know exactly what it means for these things to be less complex; I'm imagining something like writing a Python program that simulates the behaviour of two robots in a way that is recognisable to many people as "friends" or "enemies" and measuring the length of the program.
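
To make that concrete, here's a toy sketch of the kind of comparison I have in mind (the function names and the robots' "state" are made up purely for illustration; this obviously isn't a serious complexity measure):

```python
import inspect

# Two minimal "robot interaction" programs, written as symmetrically as possible:
# one reads as friendly, the other as hostile. The crude point is just that
# reversing the valence doesn't obviously make the program shorter or longer.

def friendly_step(a, b):
    # share a resource and improve the other robot's state
    a["resources"] -= 1
    b["resources"] += 1
    b["mood"] += 1

def hostile_step(a, b):
    # seize a resource and worsen the other robot's state
    a["resources"] += 1
    b["resources"] -= 1
    b["mood"] -= 1

# Program length as a (very) rough proxy for complexity
print(len(inspect.getsource(friendly_step)))  # roughly the same length...
print(len(inspect.getsource(hostile_step)))   # ...as its reversed counterpart
```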

Oli’s comment, so people don’t need to click through:

I thought some about the AI Safety Camp for the LTFF. I mostly evaluated the research leads they listed and the resulting teams directly, for the upcoming program (which was, I think, the virtual one in 2023).

I felt unexcited about almost all the research directions and research leads, and the camp seemed like it was aspiring to be more focused on the research-lead structure than past camps, which increased the weight I was assigning to my evaluation of those research directions. I considered for a while funding just the small fraction of research-lead teams I was excited about, but it was only quite a small fraction, and so I recommended against funding it.

It did seem to me that the quality of research leads was very markedly worse by my lights than in past years, so I didn't feel comfortable just doing an outside view on the impact of past camps (as the ARB report seems to do). I feel pretty good about the past LTFF grants to the past camps, but my expectations for post-2021 camps were substantially worse than for earlier camps, looking at the inputs and plans, so my expectation of the value of it substantially changed.

I agree that people in existing EA hubs are more likely to come across others doing high value work than people located outside of hubs.

That said, on the current margin, I still think many counterfactual connections happen at office spaces in existing EA hubs. In the context of non-residential spaces, I’m not really sure who would use an EA office space outside existing EA hubs, so I’m finding the comparison between an office in a hub vs an office outside a hub a little confusing (whereas with CEEALAR I understand who would use it).

I go back and forth on this. Sometimes, I feel like we are funding too many underperforming projects, but then some marginal project surprises me by doing quite well, and I feel better about the hits-based strategy. Over the last three months, we have moved towards funding things that we feel more confident in, mostly due to funding constraints.

I don't think that I have a great list of common communicable lessons, but some high-level thoughts/updates that jump to mind:

  • in general, people will be worse than they expect when working in areas they have little experience in.
  • making grants to people who are either inside the community or have legible credentials is often much cheaper in terms of evaluation time than making grants to random people who apply who aren't connected to the community, but being too insular in our grantmaking is probably unhelpful for the long-term health of the community - balancing these factors is hard
  • The social skills and professionalism of grantees are probably more important than I used to think they were - funding people who are extremely ambitious but are unreliable or unprofessional seems to have lots of hidden costs.
  • sometimes it's worth encouraging a grantee to pursue a role at an established organisation even if they are above the bar for a grant - there are lots of downsides of grants that the grantee might not be tracking, and overall, I think it's ok to be a bit more paternalistic than I used to think.

I think the performance/talent of grantees and context are extremely important.

That said, some programs that I am excited about and that I believe many EAs are a good fit for:

  • University EA groups, particularly ones at top universities
  • Field-specific retreats/workshops/boot camps etc.
  • Career advising calls and other career-focused content
  • Writing high-quality blog posts

Some projects I have seen work well in the past but that I think are a bad fit for most people:

  • YouTube channels
  • Mass media comms (like writing an op-ed in a popular newspaper)

Most of my views on this topic are informed by this survey.

(note that I'm not speaking about CEEALAR or any other specific EAIF applicants/grantees)

I understand that CEEALAR has created a low-cost hotel/coworking space in the UK for relatively junior people to stay in while they work on research projects relevant to GCRs. I think you had some strategic updates recently, so some of my impressions of your work may be out of date. Supporting people early on in their impact-focused careers seems really valuable; I've seen lots of people go through in-person retreats and quickly start doing valuable work.

At the same time, I think projects that take lots of junior people and put them in the same physical space for an extended period whilst asking them to work on important and thorny questions have various risks (e.g. negative effects on mental health, attracting negative press to EA, trapping people in suboptimal learning environments).

Some features of projects in this reference class that I'd be excited to see (though this is NOT a list of requirements):

  • located in an existing hub, so that program participants have plenty of people outside the program to interact with
  • generally taking people with good counterfactual options outside of EA cause areas, so that people don't feel "trapped" and because this is correlated with being able to do very useful stuff within EA cause areas quickly
  • trying to foster an excellent intellectual environment - ideally, there would be a critical mass of thoughtful people and truth-seeking epistemic norms
  • having a good track record of a high proportion of people leaving and entering high-impact roles
  • taking community health seriously: incidents should be handled in a professional manner, and projects should generally adhere to sensible best practices (e.g. amongst full-time staff, there shouldn't be romantic relationships between managers and their direct reports)

I recently spent some time in the Meridian Office, a co-working space in Cambridge, UK, for people working on pressing problems, which seems to be doing a good job on all of the points above (though I haven't evaluated them properly).

(Note that I don't mean to imply that CEEALAR is or isn't doing well on the above points, as I don't want to talk about specific EAIF grantees.)

Doctors in the UK (like the ones that set this up) earn way less than $350k a year in general. Junior doctors (who make up the majority of the UK doctor workforce) are very poorly paid; I think many of my friends made something like £14/hour for the first few years after qualifying.

I didn't say that AI was software by definition - I just linked to some (brief) definitions to show that your claim, afaict, is not the common understanding in technical circles (which contradicts your post). I don't think that the process of using Photoshop to edit a photo is itself a program or data (in the typical sense), so it seems fine to say that it's not software.

Definitions make claims about what is common between some set of objects. It's fine for a single member of some class to be different from every other class member. AI does have a LOT of basic stuff in common with other kinds of software (it runs on a computer, compiles to machine code, etc.).

It sounds like the statement "AI is different to other kinds of software in important ways" is more accurate than "AI is not software" and probably conveys the message that you care about - or is there some deeper point that you're making that I've missed?
