Bio

I have received funding from the LTFF and the SFF and am also doing work for an EA-adjacent organization.

My EA journey started in 2007, when I considered switching from a Wall Street career to helping tackle climate change by making wind energy cheaper (unfortunately, the University of Pennsylvania did not have an EA chapter back then!). A few years later, I started having doubts about whether helping to build one wind farm at a time was the best use of my time. After reading a few books on philosophy and psychology, I decided that moral circle expansion was neglected but important and donated a few thousand pounds sterling of my modest income to a somewhat evidence-based organisation. Serendipitously, my boss stumbled upon EA in a thread on Stack Exchange around 2014 and sent me a link. After reading up on EA, I pursued earning to give (E2G) with my modest income, donating ~USD 35k to AMF. I have done some limited volunteering to build the EA community here in Stockholm, Sweden. Additionally, I set up and was an admin of the ~1k-member EA system change Facebook group (apologies for not having had time to make more of it!). Lastly (and I am leaving out a lot of smaller stuff like giving career guidance), I have coordinated with other people interested in doing EA community building in UWC high schools and have even run a couple of EA events at these schools.

How others can help me

Lately, and in consultation with 80,000 Hours and some "EA veterans", I have concluded that I should instead consider working directly on EA priority causes. Thus, I am determined to keep seeking opportunities for entrepreneurship within EA, especially where I could contribute to launching new projects. So if you have a project where you think I could contribute, please do not hesitate to reach out (even if I am engaged in a current project - my time might be better used getting another project up and running and handing over the reins of my current project to a successor)!

How I can help others

I can share my experience working at the intersection of people and technology while deploying a new technology (wind energy) and its infrastructure globally. I can also share my experience of coming from "industry" into EA entrepreneurship/direct work. Or anything else you think I can help with.

I am also concerned about the "Diversity and Inclusion" aspects of EA and would be keen to contribute to making EA a place where even more people from all walks of life feel safe and at home. Please DM me if you think there is any way I can help. Currently, I expect to have ~5 hrs/month to contribute to this (a number that will grow as my kids become older and more independent).

Comments

One non-expert idea here is to assume that all the building blocks of mirror bacteria exist - what would it take then to create effective mirror phages? Is there any way we can make progress on this already, without those building blocks but knowing roughly what they are, and in a defense-favoring way? Again, I would really align with other biosecurity folks on this at OP, Blueprint and MBDF, as I feel very hesitant about unilateral actions. But something like this might have legs, especially if some plausible work can be outlined that can be done with current techniques.

Hi Nnaemeka, yeah, I totally agree about not doing anything that could potentially advance the creation of dangerous mirror organisms. I am commenting just to reiterate what I said about "defense-favoring" - I know little of microbiology, but I thought I would mention, just in case, that there might be some way to very lightly modify an existing non-mirror phage to "hunt and kill" mirror microbes (e.g. just altering its "tracking" and "ingestion" systems). This is probably an incredibly naive idea, but I put it out there because there is a whole chapter on phages in the mirror bio report. Also, my impression from the report is that there is scientific uncertainty about how bad mirror bio would be. It might be worth solidifying this by, e.g., taking single parts of plant or human immune systems and exposing them only to simple, single mirror molecules that would likely be present on mirror organisms. This might show more definitively that mirror bio could be catastrophic. But I would do any such work in really tight cooperation with the Mirror Biology Dialogues Fund and others, and definitely not act unilaterally. It might also be worth reading at least the most relevant parts of the mirror bio report if you have time.

I know little of microbiology, but I know there is some focus on mirror bacteria. One possible pivot that could attract funding would be to look at whether phages can be made to track and consume mirror bacteria. This is a super speculative idea, but I think there might be some funding for defenses against mirror life. Perhaps you have already looked at the detailed report on mirror life published at the end of last year (my non-expert read was that it was believed phages would not work - but maybe it is possible to make "mirror phages" in a defense-favoring way)?

One point I have raised earlier: if one is worried about neocolonialism, reducing the risk from powerful technology might look like a better option. It is clear that the global south is bearing a disproportionate burden from fossil fuel burning by rich nations. Similarly, misuse or accidents in nuclear technology, biotechnology and/or AI might also harm people who had little say in how these technologies were rolled out. Nuclear winter in particular would disproportionately affect poor people, so preventing it seems especially valuable, but I think AI safety and biosecurity are likely candidates for lowering the risk of perpetuating colonial dynamics as well.

Fixed! Thanks for pointing that out.

The book "Careless People" starts as a critique of Facebook — a key EA funding source — and unexpectedly lands on AI safety, x-risk, and global institutional failure.

I just finished Sarah Wynn-Williams' recently published book. I had planned to post earlier — mainly about EA’s funding sources — but after reading the surprising epilogue, I now think both the book and the author might deserve even broader attention within EA and longtermist circles.

1. The harms associated with the origins of our funding

The early chapters examine the psychology and incentives behind extreme tech wealth — especially at Facebook/Meta. That made me reflect on EA’s deep reliance (although unclear how much, as OllieBase helpfully pointed out after I first published this Quick Take) on money that ultimately came from:

  • harms to adolescent mental health,
  • cooperation with authoritarian regimes,
  • and the erosion of democracy, even in the US and Europe.

These issues are not new (they weren’t to me), but the book’s specifics and firsthand insights reveal a shocking level of disregard for social responsibility — more than I thought possible from such a valuable and influential company.

To be clear: I don’t think Dustin Moskovitz reflects the culture Wynn-Williams critiques. He left Facebook early and seems unusually serious about ethics.
But the systems that generated that wealth, and that shaped the broader tech landscape, could still matter.

Especially post-FTX, it feels important to stay aware of where our money comes from. Not out of guilt or purity — but because if you don't occasionally check your blind spots, you might cause damage.

2. Ongoing risk from the same culture

Meta is now a major player in the frontier AI race — aggressively releasing open-weight models with seemingly limited concern for cybersecurity, governance, or global risk.

Some of the same dynamics described in the book — greed, recklessness, detachment — could well still be at play. And it would not be completely surprising if that culture is, to some extent, being replicated across other labs and institutions involved in frontier AI.

3. Wynn-Williams is now focused on AI governance (e.g. risk of nuclear war)

In the final chapters, Wynn-Williams pivots toward global catastrophic risks: AI, great power conflict, and nuclear war.

Her framing is sober, high-context, and uncannily aligned with longtermist priorities. She seems to combine rare access (including relationships with heads of state), strategic clarity, and a grounded moral compass — the kind of person who can get in the room and speak truth to power. People recruiting for senior AI policy roles might want to reach out to her if they have not already.


I’m still not sure what the exact takeaway is. I just have a strong hunch this book matters more than I can currently articulate — and that Wynn-Williams herself may be an unusually valuable ally, mentor, or collaborator for those working on x-risk policy or institutional outreach.

If you’ve read it — or end up reading it — I’d be curious what it sparks for you. It works fantastically as an audiobook, a real page-turner with lots of wit and vivid descriptions.

I think this is super useful to share - thanks! One question: do you think you are striking the right balance between detail and speed of making an application? I am asking because, e.g., Lightspeed Grants tried to make applying for funding as quick and easy as possible, and after a quick skim your application seems to pull more in the direction of impressive detail. I am commenting mostly because I could see some first-time grant applicants taking away that funding applications require a lot of time to put together.

Actually, reading this again, I think maybe you have a point about the complexity of arguments/assumptions. Not sure if it is Occam's Razor, but if one has to contort reasoning into a weird, winding argument with unusual assumptions, maybe such a strained attempt at something like "rationalization" should be a warning flag. That said, the world is complex and unpredictable, so perhaps reasoning about it is complex too - I guess this is an age-old debate with no clear answer!

Animal welfare, on the other hand, seems extremely easy to argue is important. Global poverty a little less so, but still easier than x-risk (the debate there is more about whether handing out mosquito nets is better than economic growth, democracy, human rights, etc.).

https://forum.effectivealtruism.org/posts/jgspXC8GKA7RtxMRE/on-living-without-idols

That is true, and perhaps I could have chosen a better wording. This is also why I put "cause neutrality" in quotation marks. I would welcome any suggestions for wording that might be less confusing. Apologies if I have caused confusion - I have now changed it to "cause balanced", which is hopefully clearer.
