
Effective Ventures (EV) is a federation of organisations and projects working to have a large positive impact in the world. EV was previously known as the Centre for Effective Altruism but the board decided to change the name to avoid confusion with the organisation within EV that goes by the same name.

EV Operations (EV Ops) provides operational support and infrastructure that allows effective organisations to thrive.

The new EV Ops website.

Summary

EV Ops is a passionate and driven group of operations specialists who want to use our skills to do the most good in the world.

You can read more about us at https://ev.org/ops.

What does EV Ops look like?

EV Ops began as a two-person operations team at CEA. We soon began providing operational support for 80,000 Hours, EA Funds, the Forethought Foundation, and Giving What We Can. And eventually, we started supporting newer, smaller projects alongside these, too.

As the team expanded and the scope of these efforts increased, it made less sense to remain a part of CEA. So at the end of last year, we spun out as a relatively independent organisation, known variously as “Ops”, “the Operations Team”, and “the CEA Operations team”.

For the last nine months or so, we’ve been focused on expanding our capacity so that we can support even more high-impact organisations, including GovAI, Longview Philanthropy, Asterisk, and Non-trivial. We now think that we have a comparative advantage in supporting and growing high-impact projects — and are happy that this new name, “Effective Ventures Operations” or “EV Ops”, accords with this.

EV Ops is arranged into the following six teams:

High-level EV Ops organisational chart.

The organisations EV Ops supports

We now support and fiscally sponsor several organisations (learn more on our website). Alongside these we support a handful of Special Projects: smaller, 1-2 person, early-stage projects which may grow into independent organisations of their own.


We’re keen to support a wide range of projects looking to do good in the world, although we’re currently close to capacity. To see if we could help your project grow and develop, visit https://ev.org/ops/about or complete the expression of interest form.

Get involved

We’re currently hiring for the following positions:

If you’re interested in joining our team, visit https://ev.org/ops/careers.

If you have any questions about EV or EV Ops, just drop a comment below. Thanks for reading!


Can someone clarify whether I'm interpreting this paragraph correctly?

Effective Ventures (EV) is a federation of organisations and projects working to have a large positive impact in the world. EV was previously known as the Centre for Effective Altruism but the board decided to change the name to avoid confusion with the organisation within EV that goes by the same name.

I think what this means is that the CEA board is drawing a distinction between the CEA legal entity / umbrella organization (which is becoming EV) and the public-facing CEA brand (which is staying CEA). AFAIK this change wasn't announced anywhere separately, only in passing at the beginning of this post which sounds like it's mostly intended to be about something else?

(As a minor point of feedback on why I was confused: the first sentence of the paragraph makes it sound like EV is a new organization; then the first half of the second sentence makes it sound like EV is a full rebrand of CEA; and only at the end of the paragraph does it make clear that there is intended to be a sharp distinction between CEA-the-legal-entity and CEA-the-project, which I wasn't previously aware of.)

Yep, your interpretation is correct. We didn't want to make a big deal about this rebrand because for most people the associations they have with "CEA" are for the organization which is still called CEA. (But over the years, and especially as the legal entity has grown and taken on more projects, we've noticed a number of times where the ambiguity between the two has been somewhat frustrating.) Sorry for the confusion!

What are your criteria for deciding which organizations to support? 

I’m particularly interested in how you think about cause prioritization in this process. The list of currently supported organizations looks roughly evenly split between organizations that are explicitly longtermist (e.g. Forethought Foundation and Longview Philanthropy) and organizations that (like the EA community as a whole) support both longtermist and neartermist work (e.g. GWWC and EA Funds). I don’t see any that focus solely on neartermist work. Do you expect the future mix of supported organizations to look similar to the current one? Would an organization working on animal welfare be as likely to receive support as one working on biosecurity if other factors like strength and size of team were the same? 

Also, I’ve mentioned this elsewhere, but I really hope this change leads to a major reassessment of how governance is structured for these organizations.

Minor criticism, but having the same initials as expected value might cause some confusion, since people sometimes refer to expected value as EV.

Obviously Effective Ventures aren’t alone in this - CEA could mean both Centre for Effective Altruism and Cost Effectiveness Analysis.

And I’m not sure how feasible it is for new orgs to avoid confusion due to other abbreviations and acronyms used in EA.

I once met an EA (effective altruist) who worked at EA (Electronic Arts) and I asked to meet his EA (executive assistant) and it turned out they lived in EA (East Anglia) and were studying EA (enterprise architecture) but considering adding in EA (environmental assessment)  to make it a double major, the majors cost $40k ea (each) 😉

Were they wearing an Emporio Armani t-shirt, by any chance?

They're pretty different kinds of things - an abstract concept vs an organisation - so I don't think it will cause confusion.

MaxRa

I had the same thought only with Tyler Cowen's Emergent Ventures, which is an organisation that is even fairly closely associated with EA (e.g. I personally know two EAs who are among their fellows).

I know a few EAs amongst their fellows as well but I have never heard Emergent Ventures referred to as EV in practice, so it seems fine to me.

Ofer

I strongly agree with this comment, except that I don't think this issue is minor.

IMO, this issue is related to a very troubling phenomenon that EA has seemingly been undergoing in the past few years: people in EA sometimes tend not to think much about their EV, and instead strive to have as much impact as possible. "Impact" is a sign-neutral term ("COVID-19 had a large impact on international travel"). It's very concerning that many people in EA now use it interchangeably with "EV", as if EA interventions in anthropogenic x-risk domains cannot possibly be harmful. One can call this phenomenon "sign neglect".

Having a major EA organization named "EV" (as an acronym for something that is not "expected value") may exacerbate this problem by further decreasing the usage of the term "EV", and making people use sign-neutral language instead.

I think when people talk about impact, it's implicit that they mean positive impact. I haven't seen anything that makes me think that someone in EA doesn't care about the sign of their impact, although I'd certainly be interested in any evidence of that.

Ofer

I haven’t seen anything that makes me think that someone in EA doesn’t care about the sign of their impact

It's not about people not caring about the sign of their impact (~everyone in EA cares); it's about a tendency to behave in a way that is aligned with maximizing impact (rather than EV).

I’d certainly be interested in any evidence of that

Consider this interview with one of the largest funders in EA (the following is based on the transcript from the linked page):

Rob: "What might be distinctive about your approach that will allow you to find things that all the other groups haven’t already found or are going to find?"

[...]

SBF: But having gotten that out of the way, I think that being really willing to give significant amounts is a real piece of this. Being willing to give 100 million and not needing anything like certainty for that. We’re not in a position where we’re like, “If you want this level of funding, you better effectively have proof that what you’re going to do is great.” We’re happy to give a lot with not that much evidence and not that much conviction — if we think it’s, in expectation, great. Maybe it’s worth doing more research, but maybe it’s just worth going for. I think that is something where it’s a different style, it’s a different brand. And we, I think in general, are pretty comfortable going out on a limb for what seems like the right thing to do.

[...]

Rob Wiblin: OK, so with that out of the way, what’s a mistake you think at least some nontrivial fraction of people involved in effective altruism are making?

[...]

SBF: Then the last thing is thinking about grantmaking. This is definitely a philosophical difference that we have as a grantmaking organization. And I don’t know that we’re right on it, but I think it’s at least interesting how we think about it. Let’s say we evaluate a grant for 48 seconds. After 48 seconds, we have some probability distribution of how good it’s going to be, and it’s quite good in expected value terms. But we don’t understand it that well; there’s a lot of fundamental questions that we don’t know the answer to that would shift our view on this.

Then we think about it for 33 more seconds, and we’re like, “What might this probability distribution look like after 12 more hours of thinking?” And in 98% of those cases, we would still decide to fund it, but it might look materially different. We might have material concerns if we thought about it more, but we think they probably won’t be big enough that we would decide not to fund it.

Rob Wiblin: Save your time.

SBF: Right. You can spend that time, do that, or you could just say, “Great, you get the grant, because we already know where this is going to end up.” But you say that knowing that there are things you don’t know and could know that might give you reservations, that might turn out to make it a mistake. But from an expected value of impact perspective —

Rob Wiblin: It’s best just to go ahead.

SBF: Yeah, exactly. I think that’s another example of this, where being completely comfortable doing something that in retrospect is a little embarrassing. They’ll go, “Oh geez, you guys funded that. That was obviously dumb.” I’m like, “Yeah, you know, I don’t know.” That’s OK.

[...]

Rob Wiblin: Yeah. It’s so easy to get stuck in that case, where you are just unwilling to do anything that might turn out to be negative.

SBF: Exactly. And a lot of my response in those cases is like, “Look, I hear your concerns. I want you to tell me — in writing, right now — whether you think it is positive or negative expected value to take this action. And if you write down positive, then let’s do it. If you write down negative, then let’s talk about where that calculation’s coming from.” And maybe it will be right, but let’s at least remove the scenario where everyone agrees it’s a positive EV move, but people are concerned about some…

Notably, the FTX Foundation's regranting program "gave over 100 people access to discretionary budget" (and I'm not aware of them using a reasonable mechanism to resolve the obvious unilateralist's curse problem). One of the resulting grants was a $215,000 grant for creating an impact market. They wrote:

This regrant will support the creation of an “impact market.” The hope is to improve charity fundraising by allowing profit-motivated investors to earn returns by investing in charitable projects that are eventually deemed impactful.

A naive impact market is a mechanism that incentivizes people to carry out risky projects—that might turn out to be beneficial—while regarding potential harmful outcomes as if they were neutral. (The certificates of a project that ended up being harmful are worth as much as the certificates of a project that ended up being neutral, namely nothing.)
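To make that payoff asymmetry concrete, here is a minimal toy sketch in Python (all probabilities and values invented for the example, not taken from the grant or the post) of how paying out only on positive realized value can make a zero-EV gamble look more attractive to a profit-motivated investor than a modest but clearly positive project:

```python
# Toy illustration only: probabilities and values are invented for the example.
# In a naive impact market, a certificate pays out in proportion to the
# project's realized value when that value is positive, and is worth nothing
# otherwise -- so harmful outcomes are priced the same as neutral ones.

def expected_value(outcomes):
    """True expected value: probability-weighted sum over all signed outcomes."""
    return sum(p * v for p, v in outcomes)

def naive_certificate_value(outcomes):
    """What a profit-motivated investor would pay: losses are treated as zero."""
    return sum(p * max(v, 0) for p, v in outcomes)

# Each project is a list of (probability, realized value) pairs.
risky_project = [(0.5, +10), (0.5, -10)]  # big upside, equally big downside
safe_project = [(1.0, +3)]                # modest but certain benefit

for name, outcomes in [("risky", risky_project), ("safe", safe_project)]:
    print(name,
          "EV:", expected_value(outcomes),
          "certificate value:", naive_certificate_value(outcomes))

# The risky project has EV 0 but certificate value 5, while the safe project
# has EV 3 and certificate value 3 -- the naive market rewards the gamble more.
```

Under these made-up numbers, ranking projects by certificate value reverses the ranking by expected value, which is exactly the failure mode described above.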

Thanks for the reply!

If I understand correctly, you think that people in EA do care about the sign of their impact, but that in practice their actions don't align with this and they might end up having a large impact of unknown sign?

That's certainly a reasonable view to hold, but given that you seem to agree that people are trying to have a positive impact, I don't see how using phrases like "expected value" or "positive impact" instead of just "impact" would help.

In your example, it seems that SBF is talking about quickly making grants that have positive expected value, and uses the phrase "expected value" three times.

Reasonably determining whether an anthropogenic x-risk related intervention is net-positive or net-negative is often much more difficult[1] than identifying the intervention as potentially high-impact. With less than 2 minutes to think, one can usually do the latter but not the former. People in EA can easily be unconsciously optimizing for impact (which tends to be much easier and aligned with maximizing status & power) while believing they're optimizing for EV. Using the term "impact" to mean "EV" can exacerbate this problem.


  1. Due to an abundance of crucial considerations.
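For readers who find a worked example helpful, here is a small, purely illustrative Python sketch (all numbers invented) of the "sign neglect" distinction drawn in this thread: ranking options by a sign-neutral notion of "impact" can favour an intervention that a plain expected-value ranking would reject.

```python
# Purely illustrative: invented numbers, not a claim about any real intervention.
# Each intervention is a list of (probability, signed value) outcome pairs.

interventions = {
    "A (high variance)": [(0.5, +100), (0.5, -110)],  # large effect either way, slightly negative EV
    "B (modest, robust)": [(0.9, +5), (0.1, 0)],      # small but clearly positive EV
}

def expected_value(outcomes):
    """Sign-aware expected value."""
    return sum(p * v for p, v in outcomes)

def expected_magnitude(outcomes):
    """A sign-neutral notion of 'impact': how much the world changes, ignoring direction."""
    return sum(p * abs(v) for p, v in outcomes)

for name, outcomes in interventions.items():
    print(f"{name}: EV = {expected_value(outcomes):+.1f}, "
          f"expected |impact| = {expected_magnitude(outcomes):.1f}")

# A tops the sign-neutral ranking (expected |impact| 105 vs 4.5) even though its
# EV is negative (-5 vs +4.5) -- the failure mode the comment calls "sign neglect".
```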

Does EV have any policies around term-limits for board members? This is a fairly common practice for nonprofits and I’m curious about how EV thinks about the pros and cons, and more generally how EV thinks about board composition and responsibilities given the outsize role the board has in community governance.

Does EV have any current employees outside of EV Ops? 

Technically speaking all employees of the constituent organizations are "employees of EV" (for one of the legal entities that's part of EV).

Thanks, yea. I guess I'm asking if there are other people or functionalities of EV outside of EV Ops or the constituent orgs, and outside the board.

Ah, got you. There are a few people employed in small projects; things with a similar autonomous status to the orgs, but not yet at a scale where it makes sense for them to be regarded as "new orgs".
