
A while ago there was a thread about ideas for new projects that large EA funders could back. We had a similar email thread here at the Centre for Effective Altruism (CEA) and the Future of Humanity Institute (FHI) at Oxford, about 'blue sky funding opportunities'. As the Christmas giving season approaches, I thought I would mention a few of my favourites. I am happy to field questions about the details in the comments, insofar as I know the answers.

1. Put money in an EA 'venture' fund that publicly announces it is seeking to fund new projects with particular characteristics (e.g. scaling a demonstrated anti-poverty intervention). Use this 'money on the table' to encourage more entrepreneurs to come forward with early-stage business plans, and otherwise help to connect donors with founders. More or less the same idea is mentioned here.

  • There is a >50% chance CEA will launch something like this in the coming year.

2. Offer prizes to people who have achieved awesome things, both to support them and to encourage them to continue achieving awesome things.

3. Hire a full- or part-time Personal Assistant for Prof Nick Bostrom, so he can spend as much time as possible doing follow-up research for his popular book, Superintelligence. This would also indirectly free up time for project managers and researchers in the rest of FHI.

  • CEA would try to recruit someone suitable for this role if it could be funded, but there may not be anyone who would meet all the requirements.

4. Fund a founding employee for 'CEA USA'. This person would:

  • Develop a strategy for how CEA USA can be most useful to the movement. The Board has met and considered a couple of options so far.
  • Simultaneously, start cultivating a donor base who might support that expansion. If donors aren't convinced, that would be a bad sign, so go back and revise the strategy.
  • Complete the remaining operational work needed for CEA USA to receive donations in a variety of ways, hire people, run payroll, meet its legal reporting requirements, set up a basic website, and so on.

The board would probably approve a recruitment process to try to find someone suitable for this position if funds were available.

5. Fund a professional fundraiser to raise money for CEA/FHI/CSER/FLI. We have not yet found a good candidate for this, but if we offered closer to market wages (£40–60,000 p.a.) we might find someone suitable who could bring us expertise in fundraising from major donors.

  • If you want to fund this for CEA, talk to me. If you want to fund this for FHI, talk to Cecilia Tilli. If you want to fund this for CSER talk to Sean O'Heigeartaigh.

 

Possible individual projects you could already fund by giving to CEA

You could fund any of the following by giving to CEA. Message me if you are interested, so I can send you more information on how much we would need to raise to press ahead with these projects.

1. We are raising money to help market both William MacAskill's book Doing Good Better, and Peter Singer's book, The Most Good You Can Do. If funds are available, we will hire a professional book promotion company to do this. Outputs could include, depending on the amount of money raised: websites for the books; launch events; more media appearances booked; more opportunities to write op-eds based on the books' content; articles placed in newspapers connecting the books to current events; potentially even a promotional video.

2. CEA could hire a contractor to work on a set of podcasts about effective altruism. If we liked them, we could then produce more.

3. Test out mainstream pamphleting about effective altruism, to see how many new supporters (e.g. Giving What We Can members) it can create.

4. Giving What We Can now has almost 700 members. The majority of our resources go towards attracting new members, as we see this as the most valuable use of our time. Some of the work of our Director of Community goes towards deepening engagement with existing members. However, we could now usefully dedicate the Director of Community entirely to talking with existing members, splitting the job of talking to potential new members into a separate role. What would this Director of Community do?

  • Speak to, or personally email, every member at least once a year. Ideally build a personal relationship with them in the process.
  • Enquire about their giving: are they fulfilling the pledge? What is holding them back from giving more? Where are they giving? Are they aware of the latest recommendations, and the reasons for them?
  • In some cases, give them advice on tax deductibility, our trust, other members who live near them, which charity most aligns with their values, and so on.
  • In some cases, let them know about other activities in the effective altruist community they might want to look into, like 80,000 Hours, the Open Philanthropy Project, and so on.
  • In the process of doing all this, drive up response rates for our annual giving survey. Each year we get data from (a different) 50% of members, after a lot of chasing. This makes it hard to know how much money we are moving, and what share of members drop out each year (a sketch of this estimation problem follows below).
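
To illustrate why a 50% response rate is such a problem, here is a minimal sketch in Python. All figures are hypothetical: the point is that a naive extrapolation from respondents, even with a bootstrap confidence interval, only captures sampling noise, not the bias from non-respondents giving (or dropping out) differently.

```python
import random

# A toy model of the survey extrapolation problem. All figures are
# hypothetical, for illustration only.
random.seed(0)

N_MEMBERS = 700                                   # total membership, roughly
respondent_donations = [random.lognormvariate(7, 1) for _ in range(350)]
n = len(respondent_donations)                     # the ~50% who responded

# Naive extrapolation: assume non-respondents give like respondents.
point_estimate = sum(respondent_donations) / n * N_MEMBERS

def bootstrap_interval(data, total_members, reps=10_000):
    """Rough 95% interval for total money moved, by resampling respondents."""
    totals = sorted(
        sum(random.choice(data) for _ in data) / len(data) * total_members
        for _ in range(reps)
    )
    return totals[int(0.025 * reps)], totals[int(0.975 * reps)]

low, high = bootstrap_interval(respondent_donations, N_MEMBERS)
print(f"Money moved: ~£{point_estimate:,.0f} (95% CI £{low:,.0f} to £{high:,.0f})")
# Caveat: this interval only reflects sampling noise. If non-respondents give
# systematically less, or have quietly dropped out, the true figure can fall
# well outside it; that is why higher response rates matter.
```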

I also think that merely speaking regularly with someone from Giving What We Can would make members more likely to stick with the pledge long term.

If it were me, I would rather fund new member recruitment than the above. However, if I were more skeptical about how much new members would give, and how long they would naturally stick with the pledge, I might prefer to fund the above. Either way, I think we will definitely want someone to take on this role full time, sooner or later.

Comments



Just wanted to mention that reading this has made me change my donation plans -- instead of donating directly, I'm going to try to use my money for donation matching and to seed prizes for EA activities / ventures that I'd like to see. I was already leaning this way, but this post made me make it official.

I'd be happy to collaborate with other people who are similarly interested in building up donation matching pools and/or seed prizes.

The idea of regularly talking to GWWC members makes me want to plug the EA Buddy System. The goals are much the same; it's just decentralized and volunteer-based. Is it worth coordinating with GWWC on this, e.g. coming up with a set of suggestions that EA buddies can talk about with GWWC members?

I like the EA Buddy system and would be happy to see it promoted to GWWC members in some form, but I feel it's slightly different from what we are going for here.

Many GWWC members don't identify as 'EAs' and want to be talked to about GWWC issues specifically, by someone highly knowledgeable.

I expect someone who works on this every day to become very skilled at having these conversations.

These are good points.

"Hire a full- or part-time Personal Assistant for Prof Nick Bostrom"

Is there a reason this couldn't be done with FHI funding? If FHI believed that this was the best use of an additional [however much it takes to hire an assistant], then an unrestricted donation of that amount would make it happen. If not, it's much less clear that this would be a good idea.

It could probably be done through FHI funding, but it would be considerably more expensive and might be blocked by the university.

Why is that?

The university charges major overheads on all salaries (50–100%). It has regulations about who can get PAs, how much time they get, what they can do, and how much they must be paid. We are talking about an 800-year-old institution here.

Also, the FHI mainly focuses on raising academic grants, and it's hard to use these to cover a PA.

Are there currently any posters/brochures for EA, GiveWell, GWWC, etc.?

Edit: thanks guys, glad to know these exist. Will probably print a few to dot around my university.

There are Giving What We Can ones here: https://drive.google.com/#folders/0B5tNAaAvGxc9Q0c4TGtMeGhsU1E

They are slightly out of date but mostly relevant.

There's a .impact project to design ones, with some entries.

If I read this recent blog post correctly, it sounds like GiveWell are concerned about bumping into the room-for-more-funding ceiling for some of their top charities. Would this be a point against trying to recruit more donors, and in favour of encouraging new projects to start up (or promoting causes that GiveWell doesn't really cover, such as nonhuman animals or x-risk)?

Has there been A/B testing of the messaging for the book launches? It would be a huge missed opportunity if, for example, the books hammer on the standard drowning-child, opportunity-cost style of argument when an excited-altruism framing turns out to be more effective at getting people to actually take action. (For example, the entire last chapter of Martin Seligman's book Learned Optimism is basically about how being altruistic makes people happier. He analogizes donating to charity and volunteering to "moral jogging": somewhat unpleasant in the short run but good for your happiness in the long run.)
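
For what it's worth, the analysis side of such a test is simple. Here is a minimal sketch in Python, using entirely invented framings and conversion counts, of how two message variants could be compared with a two-proportion z-test:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical A/B test results: visitors shown each framing, and how many
# went on to take some measurable action (e.g. signing a pledge).
# Both framing names and all counts are invented for illustration.
variant_a = {"visitors": 5000, "conversions": 60}   # opportunity-cost framing
variant_b = {"visitors": 5000, "conversions": 85}   # excited-altruism framing

def two_proportion_z_test(a, b):
    """z statistic and two-sided p-value for a difference in conversion rates."""
    p_a = a["conversions"] / a["visitors"]
    p_b = b["conversions"] / b["visitors"]
    pooled = (a["conversions"] + b["conversions"]) / (a["visitors"] + b["visitors"])
    se = sqrt(pooled * (1 - pooled) * (1 / a["visitors"] + 1 / b["visitors"]))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

z, p = two_proportion_z_test(variant_a, variant_b)
print(f"conversion A = {variant_a['conversions']/variant_a['visitors']:.2%}, "
      f"B = {variant_b['conversions']/variant_b['visitors']:.2%}, "
      f"z = {z:.2f}, p = {p:.3f}")
# With these made-up numbers: A = 1.20%, B = 1.70%, z ≈ 2.09, p ≈ 0.037,
# i.e. the difference would be statistically significant at the 5% level.
```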

Thanks for taking the time to put this together, Rob. I'm interested in the pamphleting angle, especially as someone who has studied pro-veg pamphleting at length and is researching it further. I'd be interested to hear whether you have any plans to study EA pamphleting in any depth.
