COO Successif, Co-Director EA Germany, Trustee Effective Ventures UK
Entrepreneur (currently textiles, previously software) for 25+ years, interested in EA since 2015, when I joined my local group and started donating. I joined the EA Munich organiser team and took the GWWC pledge in 2020, developed the software for the donation management system Effektiv Spenden uses in Germany and Switzerland in 2021, and have been co-director of EA Germany since November 2022.
I run the donation drive Knitters Against Malaria, which has raised over $100,000 for the Against Malaria Foundation since 2018.
Let me know if you have ideas for EA Germany or Successif.
I can offer mentoring and a sounding board if you are an EA-aligned non-profit entrepreneur.
I found the overview in this post useful: https://forum.effectivealtruism.org/posts/xGqpQKf2FpjvwJe6q/ea-meta-funding-landscape-report
Re-reading Will MacAskill's Defining Effective Altruism from 2019, I saw that he used a similar approach that resulted in four claims:
The ideas that EA is about maximising and about being science-aligned (understood broadly) are uncontroversial. The two more controversial aspects of the definition are that it is non-normative, and that it is tentatively impartial and welfarist.
He didn't include integrity and collaborative spirit. However, he posted in 2017 that these two are among the guiding principles of CEA and other organisations and key people.
Secondly, I find the principles themselves quite handwavey, and more like applause lights than practical statements of intent. What does 'recognition of tradeoffs' involve doing? It sounds like something that will just happen rather than a principle one might apply. Isn't 'scope sensitivity' basically a subset of the concerns implied by 'impartiality'? Is something like 'do a counterfactually large amount of good' supposed to be implied by impartiality and scope sensitivity? If not, why is it not on the list? If so, why does 'scout mindset' need to be on the list, when 'thinking through stuff carefully and scrupulously' is a prerequisite to effective counterfactual actions?
This poses some interesting questions, and I've thought about them a bit, although I'm still a bit confused.
Let's start with the definition on effectivealtruism.org, which seems broadly reasonable:
Effective altruism is a research field and practical community that aims to find the best ways to help others, and put them into practice.
So what EA does is:
So, basically, we are a company with a department that builds solar panels and another that runs photovoltaic power stations using these panels. Both are related but distinct. If the solar panels are faulty, this will affect the power station, but if the power station is built by cutting down old-growth forest, the solar panel division is not at fault. Still, it will affect the reputation of the whole organisation, which will in turn affect the solar engineers.
But going back to the points, we could add some questions:
1.a seems pretty straightforward: If we have different groups working on this, then the less biased ones (using a scout mindset and being scope sensitive) and the ones using decision-making theories that recognize trade-offs and counterfactuals will fare better. Here, the principles logically follow from the requirements. If you want to make the best solar cells, you'll have to understand the science behind them.
1.b Here, we can see that EA is based on the value of impartiality, but impartiality is not a prerequisite for a group that wants to do good better. If I want to do the most good for my family, then I'm not impartial, but I could still use some of the methods EAs are using.
2.a could be done in many different ways. We could commit massive fraud to generate money that we then donate based on the principles described in 1.
In conclusion, I would see EA as:
Those two values seem to me to reflect the boundaries that the movement's founders, the most engaged actors, and the biggest funders want to see.
Some people are conducting local prioritisation research, which might sometimes be worthwhile from an impartial standpoint, but giving up on impartiality would radically change the premise of EA work.
Having worked in startups and finance, I can imagine that there might be cost-effective ways to implement EA ideas without honesty, integrity, and compassion. Aside from the risks of this approach, I would also see dropping this value as leading to a very different kind of movement. If we're willing to anger the neighbours of the power plant, this will affect the reputation of the solar researchers.
In describing the history of EA, we could include the different tools and frameworks we have used, such as ITN. But these don't need to be the ones we'll use in the future, so I see everything else as being downstream from the definition above.
All of these activities sound like services provided to the EA community. [...] the same way GiveDirectly is and should be judged by how effectively they serve their beneficiaries (e.g. Africans below the poverty line), CEA should be judged by how effectively it serves its effective beneficiaries by empowering them to do those things.
This doesn't sound right to me. If you want to focus on the customer analogy, the funders are paying CEA to provide impact according to their impact metrics. CEA engages with the subset of the EA community that it expects to produce impact according to its own theory of change and/or the ToC of its funder(s). Target groups can differ based on the ToC of a given project, which is why you see people engaging on the forum but being rejected from EAGs.
I think there is much room for criticism when looking more closely at the ToCs, which speaks more to your next point:
- The movement was founded on GiveWell/GWWC doing reviews of and ultimately promoting charities - reviews for which transparency is an absolute prerequisite for recommendation
- It seems importantly hypocritical as a movement to demand it of evaluees but not to practice it at a meta level
Both GiveWell and GWWC want to shift donation money to effective charities, which is why they have to make a compelling case to donors. Transparency seems to be a good tool for this. The analogy here would be CEA making the case for getting funded for its work. Zach has written a bit about how they engage with funders.
I personally think there is a good case for pursuing broader meta-funding diversification, which would necessitate more transparency around impact measurement. The EA Meta Funding Landscape Report asks some good questions. However, I can also see that the expected value of this might be lower than that of engaging with a smaller set of funders. Transparency and engaging with a broad audience can be quite time-consuming and thus lower the cost-effectiveness of your approach.
(All opinions are my own and don't reflect those of the organisations I'm affiliated with.)
Reflections on Two Years at EA Germany
I'm stepping down this week after two years as co-director of EA Germany. While I deeply valued the team and helped build successful structures, I stayed too long when my core values and personal fit no longer aligned.
When I joined EAD, I approached it like the other organisations I’ve worked with, planning to stay 5-10 years to create stability through growth and change. My co-director, Sarah, and I aimed to grow EAD quickly and sustainably. But the FTX collapse hit just as I started in November 2022, and the dream of expanding the team disappeared.
This wasn’t the only challenge. I treated EAD as a single organisation rather than part of a global ecosystem where impact shouldn’t be geographically contained. I slipped into a “soldier mindset,” focused on proving EAD’s local value instead of prioritising international scalability or considering where I could provide the most impact.
By the end of my first year, I could see that I’d reached the end of what I was best at and passionate about. The organisation was running well, and my full-time input was no longer needed. But I stayed—because I felt so at home with the team, because of my long-term commitment, and because I hoped we’d find a path to grow the organisation within Germany.
Meanwhile, I started consulting for Claire Boine at Successif. When she secured new funding to expand, I joined her team part-time. Instead of using this as a chance to leave EAD, I tried to balance both roles—while still running my company, serving as a trustee at EV UK, and mentoring on the side.
Looking back, this was my biggest mistake: I didn’t recognise that my counterfactual impact at EAD had become the lowest of all my commitments. Instead of staying true to my value of helping solve the most pressing problems as effectively as possible, I acted out of connection and obligation.
This experience has taught me to recognise when to step back and refocus on where my skills, passion, and impact align best.
I’m very grateful for the opportunity to work alongside Sarah Tegeler in building and leading the organisation with the invaluable support of Christiane Ranke and Milena Canzler. I also appreciate the backing of the EAD board, the CEA Groups team, my colleagues in the CBG program, and the many people—both within and outside the EA community—I had the privilege to work with.