I have received funding from the LTFF and the SFF and am also doing work for an EA-adjacent organization.
My EA journey started in 2007, when I considered switching from a Wall Street career to instead helping tackle climate change by making wind energy cheaper – unfortunately, the University of Pennsylvania did not have an EA chapter back then! A few years later, I started having doubts about whether helping to build one wind farm at a time was the best use of my time. After reading a few books on philosophy and psychology, I decided that moral circle expansion was neglected but important and donated a few thousand pounds sterling of my modest income to a somewhat evidence-based organisation. Serendipitously, my boss stumbled upon EA in a thread on Stack Exchange around 2014 and sent me a link. After reading up on EA, I pursued earning to give (E2G) on my modest income, donating ~USD 35k to AMF.

I have done some limited volunteering to help build the EA community here in Stockholm, Sweden. Additionally, I set up and was an admin of the ~1k-member EA system change Facebook group (apologies for not having time to make more of it!). Lastly (and I am leaving out a lot of smaller stuff like giving career guidance, etc.), I have coordinated with other people interested in doing EA community building at UWC high schools and have even run a couple of EA events at these schools.
Lately, and in consultation with 80,000 Hours and some “EA veterans”, I have concluded that I should instead consider working directly on EA priority causes. Thus, I am determined to keep seeking opportunities for entrepreneurship within EA, especially asking whether I could contribute to launching new projects. So, if you have a project where you think I could contribute, please do not hesitate to reach out (even if I am engaged in a current project - my time might be better spent getting another project up and running and handing over the reins of my current project to a successor)!
I can share my experience working at the intersection of people and technology while deploying infrastructure and a new technology (wind energy) globally. I can also share my experience of coming from "industry" into EA entrepreneurship/direct work. Or anything else you think I can help with.
I am also concerned about the "Diversity and Inclusion" aspects of EA and would be keen to contribute to making EA a place where even more people from all walks of life feel safe and at home. Please DM me if you think there is any way I can help. Currently, I expect to have ~5 hrs/month to contribute to this (a number that will grow as my kids become older and more independent).
This warms my heart, thanks for writing Julia! A note from a dad trying to be supportive: I also want to acknowledge the mothers who let dads take care of the kids their own way. It is hard to generalize, but having observed dads with children, at least here in Scandinavia, they often do things differently. Letting fathers parent their own way, and trusting them, makes it much easier for dads to care for children. Someone mentioned interest in taking care of kids - in my experience this interest can be increased, sometimes drastically, by letting fathers take care of the kids in their own particular way (somewhat anecdotally, I am reminded of this article about a society where dads take on more of the childcare role and bring the kids to the pub equivalent).
To be clear, I think there is absolutely no intention of doing this. EA existed before AI became hot, and many EAs have expressed concerns about the recent, hard pivot towards AI. It seems in part, maybe mostly (?), to be a result of funding priorities. In fact, a feature of EA that hopefully makes it more resistant than many impact-focused communities to donor influence (although far from totally immune!) is the value placed on epistemics - decisions and priorities, such as why AI should take priority over other cause areas, should be argued for clearly and transparently. Glad to have you engage skeptically on this!
Love this framing — in my own EA work I’ve found that leaning into boldness in marketing outperforms caution. Still, I’d be really curious if anyone has data on how coolness affects downstream outcomes — not just reach, but who we attract and any data that might indicate how it shapes culture over time.
I sometimes do informal background or reference checks on "semi-influential" people in and around EA. A couple of times I decided not to get too close — nothing dramatic, just enough small signals that stepping back felt wiser. (And to be fair, I had solid alternatives; with fewer options, one might reasonably accept more risk.)
I typically don’t ask for curated references, partly because it feels out of place outside formal hiring and partly because I’m lazy — it’s much quicker to ask a trusted friend what they think than to chase down a stranger who was pre-selected to say something nice.
Main takeaway: curated references tell you little. What actually helps is asking trusted mutuals or mutuals-of-mutuals who’ve worked with the person directly — ideally when things weren’t going perfectly. Ask what went wrong, how it was handled, and whether they’d recommend working closely again. Those short, candid conversations are gold.
People in EA are surprisingly open to acting as such informal references if you approach them with integrity and transparency — they’ll tell you what they know, what they don’t, and often volunteer what to watch for. But you would need to build trust in advance. Thus, building trust in EA might be underrated. That said, I think if you consistently act with honesty, you’ll soon have access to genuinely useful informal information.
If you want to build strong collaborations here, earn trust by being open and careful — and don’t hesitate to cross-check before partnering or taking funding. On funding: it’s often wise to ask for concrete commitments quickly. Some people genuinely mean well but keep others waiting for months because they’re themselves over-committed.
(For context: informal reference checks not feeling right has only happened twice for me, and others might have seen the same things and made different judgment calls. That’s fine — I just tend to stay on the cautious side. Also, I’m no expert; but I’ve seen things go wrong and might have some biasing battle scars. Happy to hear suggestions or additional thoughts on how others approach this.)
Very good point about coming to EA as a newcomer. Imagine hearing about different cause areas in an intro workshop, then landing here and wondering if this is the Alignment Forum. It might even feel a bit like a bait and switch? If this is a recurring theme for newcomers to EA, it is something that should be looked at. Not sure if anyone is tracking the funnel of onboarding into EA? If so, one might see people being interested initially, then dropping off when they hit a "wall of AI".
I’m skeptical that corporate AI safety commitments work like @Holden Karnofsky suggests. The “cage-free” analogy breaks: one temporary defector can erase ~all progress, unlike with chickens.
I'm less sure about corporate commitments to AI safety than Karnofsky is. In the latest 80k hrs podcast episode, Karnofsky uses the cage-free example to explain why it might be effective to push frontier AI companies on safety. I feel the analogy might fail in a potentially significant way: it breaks down in terms of how many companies need to be convinced:
- For cage-free chicken commitments, convincing one company, even just for a few months, is a big win
- For frontier AI companies, you might not win until every single company is convinced, forever. If even one company fails to commit, perhaps only for a few months, the risk reduction could evaporate, or at least take a significant hit
I do recognize that it might be more nuanced, but I felt the 80k interview overstated optimism on this front. For example, steel-manning his argument: maybe if one gets 60% "coverage" in a critical period, it still reduces risk significantly. But if it is to a large degree a "cat-out-of-the-bag" situation, the bag only needs to be open briefly.
Perhaps I am missing something obvious, so useful if people can correct me.
I like the idea of just accepting it as moral imperfection rather than rationalizing it as charity — thanks for challenging me! One benefit of framing it as imperfection is that it helps normalize moral imperfection, which might actually be net positive for the most dedicated altruists, since it could help prevent burnout or other mental strain.
Still, I’m not completely decided. I’m unclear about cases where someone needs to use their runway:
A. They might have chosen not to build runway and instead donated effectively, and then later, when needing runway, received career transition funding from an effective donor.
B. Alternatively, they could have built runway and, when needing it, avoided submitting a funding request for career transition and instead used their own funds — probably more cost-effective overall, since it reduces admin costs for both the person and the grantmakers.
Thanks for posting this — I came to similar conclusions during a recent strategy sprint for a small org transitioning off major-donor dependence.
One thing I tried to push further was how small orgs can actually operationalize this tradeoff. One concrete takeaway that might help others:
The time cost and frustration of diversification can quietly sink a sub-$1M org. But the reverse mistake — assuming it's too expensive without testing — is also risky. Fast, lightweight experiments + clear tracking feels like a powerful combo.
Happy to compare notes if others are working through this.
My alma mater! A completely irrational and sentimental upvote from me haha!