Bio

I have received funding from the LTFF and the SFF, and I am also doing work for an EA-adjacent organization.

My EA journey started in 2007, when I considered switching from a Wall Street career to helping tackle climate change by making wind energy cheaper – unfortunately, the University of Pennsylvania did not have an EA chapter back then! A few years later, I started having doubts about whether helping to build one wind farm at a time was the best use of my time. After reading a few books on philosophy and psychology, I decided that moral circle expansion was neglected but important and donated a few thousand pounds sterling of my modest income to a somewhat evidence-based organisation. Serendipitously, my boss stumbled upon EA in a thread on Stack Exchange around 2014 and sent me a link. After reading up on EA, I pursued earning to give (E2G) on my modest income, donating ~USD 35k to AMF. I have done some limited volunteering to build the EA community here in Stockholm, Sweden. Additionally, I set up and was an admin of the ~1k-member EA system change Facebook group (apologies for not having time to make more of it!). Lastly (and I am leaving out a lot of smaller stuff, like giving career guidance), I have coordinated with other people interested in doing EA community building in UWC high schools and have even run a couple of EA events at these schools.

How others can help me

Lately, and in consultation with 80,000 Hours and some “EA veterans”, I have concluded that I should consider working directly on EA priority causes instead. Thus, I am determined to keep seeking opportunities for entrepreneurship within EA, especially considering whether I could contribute to launching new projects. So if you have a project where you think I could contribute, please do not hesitate to reach out (even if I am engaged in a current project – my time might be better used getting another project up and running and handing over the reins of my current project to a successor)!

How I can help others

I can share my experience working at the intersection of people and technology, having deployed infrastructure and a new technology (wind energy) globally. I can also share my experience of coming from "industry" into EA entrepreneurship/direct work. Or anything else you think I can help with.

I am also concerned about the "Diversity and Inclusion" aspects of EA and would be keen to contribute to making EA a place where even more people from all walks of life feel safe and at home. Please DM me if you think there is any way I can help. Currently, I expect to have ~5 hrs/month to contribute to this (a number that will grow as my kids become older and more independent).

Comments

What I’ve learned from informal background checks in EA

I sometimes do informal background or reference checks on "semi-influential" people in and around EA. A couple of times I decided not to get too close — nothing dramatic, just enough small signals that stepping back felt wiser. (And to be fair, I had solid alternatives; with fewer options, one might reasonably accept more risk.)

I typically don’t ask for curated references, partly because it feels out of place outside formal hiring and partly because I’m lazy — it’s much quicker to ask a trusted friend what they think than to chase down a stranger who was pre-selected to say something nice.

Main takeaway: curated references tell you little. What actually helps is asking trusted mutuals or mutuals-of-mutuals who’ve worked with the person directly — ideally when things weren’t going perfectly. Ask what went wrong, how it was handled, and whether they’d recommend working closely again. Those short, candid conversations are gold.

People in EA are surprisingly open to acting as such informal references if you approach them with integrity and transparency — they’ll tell you what they know, what they don’t, and often volunteer what to watch for. But you need to have built trust in advance; thus, building trust in EA might be underrated. That said, I think if you consistently act with honesty, you’ll soon have access to genuinely useful informal information.

If you want to build strong collaborations here, earn trust by being open and careful — and don’t hesitate to cross-check before partnering or taking funding. On funding: it’s often wise to ask for concrete commitments quickly. Some people genuinely mean well but keep others waiting for months because they’re themselves over-committed.

(For context: informal reference checks not feeling right has only happened twice for me, and others might have seen the same things and made different judgment calls. That’s fine — I just tend to stay on the cautious side. Also, I’m no expert; but I’ve seen things go wrong and might have some biasing battle scars. Happy to hear suggestions or additional thoughts on how others approach this.)

This is super helpful - do you feel like your overview also points at potentially useful safety work that is currently not covered by anyone?

Very good point about coming to EA new. Imagine hearing about different cause areas in an intro workshop, then landing here and wondering whether you have stumbled onto the Alignment Forum. It might even feel a bit like a bait and switch? If this is a recurring theme for newcomers to EA, it is something that should be looked at. I am not sure if anyone is tracking the funnel of onboarding into EA? If so, one might see people being interested initially, then dropping off when they hit a "wall of AI".

I’m skeptical that corporate AI safety commitments work the way @Holden Karnofsky suggests. The “cage-free” analogy breaks down: one temporary defector can erase ~all progress, unlike with chickens.

I'm less sure about corporate commitments to AI safety than Karnofsky is. In the latest 80,000 Hours podcast episode, he uses the cage-free example to argue that it might be effective to push frontier AI companies on safety. I feel the analogy might fail in a significant way: it breaks down in terms of how many companies need to be convinced.
- For cage-free chicken campaigns, convincing one company, even just for a few months, is a big win.
- For frontier AI companies, you might not win until every single company is convinced, forever. If even one company fails to commit, perhaps only for a few months, the risk reduction could evaporate, or at least take a significant hit.

I do recognize that it might be more nuanced, but I felt the 80k interview overstated optimism on this front. For example, steel-manning his argument: maybe if one gets 60% "coverage" in a critical period, that still reduces risk significantly. But if it is to a large degree a "cat-out-of-the-bag" situation, the bag only needs to be open briefly.
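To make the intuition concrete, here is a toy model with entirely illustrative numbers of my own (the number of companies, the maximum achievable risk reduction, and the independence assumption are all assumptions, not anything from the podcast). It contrasts a "gradual" regime, where each committed company adds roughly linear benefit (cage-free-style), with a "fragile" regime, where the benefit only materialises if every company commits:

```python
# Toy model: how risk reduction scales with "coverage" under two regimes.
# All parameters are hypothetical and purely illustrative.

def risk_reduction_gradual(coverage, max_reduction=0.8):
    """Cage-free-style: each committed company adds roughly linear benefit."""
    return max_reduction * coverage

def risk_reduction_fragile(coverage, n_companies=10, max_reduction=0.8):
    """Cat-out-of-the-bag-style: benefit requires *every* company to commit.

    Treating each company's commitment as independent with probability
    equal to `coverage`, the chance that all n hold is coverage**n.
    """
    return max_reduction * coverage ** n_companies

for c in (0.6, 0.9, 1.0):
    print(f"coverage {c:.0%}: "
          f"gradual {risk_reduction_gradual(c):.2f}, "
          f"fragile {risk_reduction_fragile(c):.3f}")
```

Under these assumptions, 60% coverage is worth a lot in the gradual regime but almost nothing in the fragile one, which is the crux of my worry.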

Perhaps I am missing something obvious, so useful if people can correct me.

I like the idea of just accepting it as moral imperfection rather than rationalizing it as charity — thanks for challenging me! One benefit of framing it as imperfection is that it helps normalize moral imperfection, which might actually be net positive for the most dedicated altruists, since it could help prevent burnout or other mental strain.

Still, I’m not completely decided. I’m unclear about cases where someone needs to use their runway:

A. They might have chosen not to build runway and instead donated effectively, and then later, when needing runway, received career transition funding from an effective donor.

B. Alternatively, they could have built runway and, when needing it, avoided submitting a funding request for career transition and instead used their own funds — probably more cost-effective overall, since it reduces admin costs for both the person and the grantmakers.

Thanks for posting this — I came to similar conclusions during a recent strategy sprint for a small org transitioning off major-donor dependence.

One thing I tried to push further was: how can small orgs actually operationalize this tradeoff? A few concrete ideas that might help others:

  • Run small experiments early — not just to test donor conversion, but to triage which sources are worth pursuing at all. You might find several are cost-efficient, in which case diversification isn’t so costly. Quick tests: an EA Forum post, an alumni fundraising email to 100–300 people, etc. Adjust the resulting cost-per-dollar numbers (e.g. downward by ~2x) to account for later optimisation of those campaigns.
  • Track time and money per funding source to estimate the true cost per dollar raised. Even rough numbers help focus effort; there are likely large differences between sources, so even ±50% estimates are useful.
  • Use a CRM from the start — not just to keep future potential donors “warm,” but to track the full funnel: outreach, engagement, follow-ups, and conversion. This helps spot bottlenecks and build institutional memory.
  • Consider raising a buffer instead of diversifying to mitigate funding-concentration risk. If you’ve run experiments and know your cost per dollar raised, this buffer becomes a smart, quantifiable hedge — and potentially easy to justify to funders as a cost-effective funding strategy (it might make your multi-year, overall cost per dollar raised the lowest).
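As a minimal sketch of the tracking idea above: compute a "true" cost per dollar raised that includes staff time, then apply the optimisation adjustment. Every number here (hourly rate, the three example channels, the amounts) is hypothetical, just to show the arithmetic:

```python
# Illustrative sketch: comparing funding sources by true cost per dollar raised.
# All figures are hypothetical; plug in your own experiment results.

HOURLY_RATE = 50          # assumed value of staff time, USD/hour
OPTIMISATION_FACTOR = 2   # assume later campaigns get ~2x more efficient

# Per early experiment: (dollars raised, direct money spent, staff hours spent)
experiments = {
    "forum_post":   (1_500,    0, 10),
    "alumni_email": (4_000,  200, 25),
    "grant_app":    (30_000,   0, 80),
}

def cost_per_dollar(raised, spent, hours, hourly_rate=HOURLY_RATE):
    """Total cost (money plus valued staff time) per dollar raised."""
    return (spent + hours * hourly_rate) / raised

for name, (raised, spent, hours) in experiments.items():
    raw = cost_per_dollar(raised, spent, hours)
    # Adjust for expected efficiency gains from optimising the campaign later
    adjusted = raw / OPTIMISATION_FACTOR
    print(f"{name}: raw {raw:.2f}, adjusted {adjusted:.2f} per $ raised")
```

Even with rough inputs, ranking channels this way quickly shows where diversification is cheap and where it quietly eats staff time.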

The time cost and frustration of diversification can quietly sink a sub-$1M org. But the reverse mistake — assuming it's too expensive without testing — is also risky. Fast, lightweight experiments + clear tracking feels like a powerful combo.

Happy to compare notes if others are working through this.

Just to add my personal experience: if you are planning direct work, especially entrepreneurship, and/or might want to have children, a personal runway has served me well. I am not sure if this stretches "giving 10%" too far, but you could mentally consider the runway donated, and if you don't end up needing it, you can donate it later. I think at least 12 months of runway at your anticipated future expenses might be the right level (so not student-level expenses, but, if you might want children, a level accounting for all related expenses). Another situation that could make donations more challenging is if you move cities/countries for your job and thus incur extra expenses travelling to see loved ones. Especially with things getting weird geopolitically and with AI, I think now might be a good time to consider "donating to flexibility". That said, I think stricter donation pledges are highly commendable, but for me they have meant operating with a lower runway than might have been ideal.

Have you checked with a nearby local EA group whether they have younger people looking for mentors? I find that sometimes the youthful optimism energizes me too - like going to church!

Btw, for anyone this helps: my Norton antivirus did not like the download. I decided the source was trustworthy enough that I disabled it, and as far as I know nothing bad happened. I could turn it on again after installing the excellent software.

Yesssss!!!! I am trying it right away. I also think that for many here, timers are important for setting limits, like capping your work week at 50 or at most 60 hours (or less if you have caretaking responsibilities). That way you don't let guilt push you into unhealthy territory. That's how I use timers. They are also great for households where both parents are ambitious, to make sure neither gains a career advantage just by feeling more anxious.
