— I engineer ambitious ideas until they survive the battlefield of reality —
I have received funding from the LTFF and the SFF and am also doing work for an EA-adjacent organization.
My EA journey started in 2007, when I considered switching from a Wall Street career to instead helping tackle climate change by making wind energy cheaper – unfortunately, the University of Pennsylvania did not have an EA chapter back then! A few years later, I started having doubts about whether helping to build one wind farm at a time was the best use of my time. After reading a few books on philosophy and psychology, I decided that moral circle expansion was neglected but important and donated a few thousand pounds sterling of my modest income to a somewhat evidence-based organisation.

Serendipitously, my boss stumbled upon EA in a thread on Stack Exchange around 2014 and sent me a link. After reading up on EA, I pursued earning to give (E2G) on my modest income, donating ~USD 35k to AMF. I have done some limited volunteering to build the EA community here in Stockholm, Sweden. Additionally, I set up and was an admin of the ~1k-member EA system change Facebook group (apologies for not having time to make more of it!). Lastly (and I am leaving out a lot of smaller things, like giving career guidance), I have coordinated with others interested in EA community building at UWC high schools and have even run a couple of EA events at these schools.
Lately, and in consultation with 80,000 Hours and some “EA veterans”, I have concluded that I should instead consider working directly on EA priority causes. I am therefore determined to keep seeking entrepreneurial opportunities within EA, especially where I could help launch new projects. So if you have a project where you think I could contribute, please do not hesitate to reach out (even if I am engaged in a current project – my time might be better spent getting another project off the ground and handing the reins of my current one to a successor)!
I can share my experience working at the intersection of people and technology, deploying infrastructure and a new technology (wind energy) globally. I can also share my experience of coming from "industry" into EA entrepreneurship/direct work. Or anything else you think I can help with.
I am also concerned about the "Diversity and Inclusion" aspects of EA and would be keen to contribute to making EA a place where even more people from all walks of life feel safe and at home. Please DM me if you think there is any way I can help. Currently, I expect to have ~5 hrs/month to contribute to this (a number that will grow as my kids become older and more independent).
Perhaps I am missing something, but on the >=1000x criterion: if we target e.g. <1% of people succumbing to the disease over their lifetime (we might want to set the target even lower to make people comply with suggestions – precedent on similar risk reductions and uptake might be worth looking into, if that has not been done already), does this mean we expect unprotected people to inhale only ~10 particles over their lifetime, in expectation (assuming a minimum lethal dose of 1 particle)? I ask because that seems like a small degree of environmental spread. I realize the reasoning here might be infohazardous, but if not, I would be very interested to know more. Or perhaps additional reduction comes from one or more additional measures, such as far-UVC, glycol, etc.
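To make my arithmetic explicit, here is a minimal back-of-the-envelope sketch. All the numbers are assumptions from my question (1% target lifetime risk, a 1000x protection factor, a 1-particle minimum lethal dose), and I am additionally assuming Poisson-distributed exposure so that risk maps to an expected dose:

```python
import math

# Assumptions from the comment above (not established facts):
target_lifetime_risk = 0.01  # <1% chance of succumbing over a lifetime
reduction_factor = 1000      # assumed >=1000x protection factor

# With a 1-particle lethal dose and Poisson-distributed exposure,
# P(death) = P(inhale >= 1 particle) = 1 - exp(-mean_dose), so the
# mean dose consistent with a 1% lifetime risk is:
protected_mean_dose = -math.log(1 - target_lifetime_risk)  # ~0.01 particles

# The corresponding unprotected exposure is 1000x larger:
unprotected_mean_dose = reduction_factor * protected_mean_dose

print(unprotected_mean_dose)  # ~10 particles over a lifetime
```

For risks this small, the expected dose is approximately equal to the risk itself, which is why 1% risk and a 1000x factor land at roughly 10 particles in expectation.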
Another reason to have foreigners come in is skills. It can be argued that Bangladesh's economic development was partly due to skills transfer from Korean workers to Bangladeshis. So if someone can set up e.g. a 500-person company with 250 local employees, those employees will learn from the 250 foreigners. And as with start-ups in the global North, some of them will eventually quit and start their own companies, a few of which might go on to become successful.
I think quite a few dedicated EAs will remain whatever happens. Perhaps those voices actually have more leverage in another way: seeing whether there are ways to redirect money away from high-paying, less impactful jobs and into animal welfare and/or global health and poverty reduction. I have always been suspicious of the extreme assumptions needed for x-risk interventions to be prioritized over other causes: extreme increases in value (populations and life satisfaction) over long time horizons, as well as a simultaneous "forever" end to subsequent periods of high risk. In a way, it becomes EA all over again: as less clearly useful AI safety projects pop up left and right, the field might start to look more like philanthropy before EA, and our job is then to show that there are more effective interventions than AI safety. Renaissance Philanthropy is doing fantastic work looking at how new technology can be applied to solve current, painful problems, which makes me hopeful.
That's actually a fantastic challenge: what post above e.g. 200 karma has had the most AI usage? I mean, if AI-heavy workflows can generate excellent content, that's a win, no? It's more along the lines of the "AI for epistemics" work some EAs are doing, and it might even help if AIs can assist with sharing information on AI safety – at some point, AI safety might need to become mostly automated to keep up with AI progress. In addition to the limit, I would be super excited to see some competitions on using AI heavily to write the best content, do the best research, and share lessons on how to use it to do even more good.
Not sure it fits neatly in your categories, but the mirror bacteria prototype shelter I put up garnered both curatorial interest and excitement from serious artists: https://www.fonixfuture.com/
Framing these as ownable, inhabitable existential art objects people can put up in their backyard for contemplation/conversation starting has resulted in a few deposits for pre-orders. Very happy to share more information!
The electric grid is a powerful, currently available AI safety policy opportunity. ~25% of the cost of running data centres is electricity, and the electric grid is heavily regulated across government agencies. In fact, the very reason many people claim nuclear energy is over-regulated is exactly why we might be able to regulate AI strongly via electric grid regulation. This means "the grid" could be a strong contender as a space to make rapid and robust progress on AI safety policy. As an expert in the electric grid with a strong interest in AI safety and resilience, I see an electricity sector full of opportunities for quick wins and for actually moving the needle. While "high-level" policy interventions such as SB 53 are important, these types of interventions have two drawbacks:
1 - They attract enormous public scrutiny. Electric regulation, on the other hand, hardly makes it into local newspapers or even the industry press.
2 - They don't really move the needle. They are more aspirational and rely on outcomes in court cases, enforcement, etc. The electric grid already has "kill switches" installed and is integrated with national security.
On the other hand, in the electric grid, one can plausibly pass very strong regulation with physical, actual AI safety levers. Some examples:
A - Large consumers of electricity are critical to the grid. It is entirely foreseeable that government bodies could require a "kill switch" for data centres, not for AI safety reasons but for grid health.
B - The military is extremely focused on electrical grids. It is also a government body likely to act quickly and decisively on perceived threats to the grid. The military likely influences cyber-security requirements for the electrical grid, which can include monitoring of critical loads (yes, data centres) and access to the above "kill switch".
These are just two of many ideas I have for making rapid, robust progress on AI safety with much less public scrutiny, using existing pathways and government focus areas around electric grid management and national defense. I have many more ideas, many years of working in and following the industry, and a large professional network – if anyone wants to talk, please DM me!
I think we may be looking at this at the wrong level of analysis. Individual responsibility matters and people should be held accountable, but if the goal is to reduce incidents like this, focusing mainly on individual cases probably won’t move the needle much. I’d like to zoom out and consider what this might imply about the male part of the EA community more generally.
I previously used the Boeing analogy: a door falls off a plane and we find the missing bolt. But bolts are not the real problem; safety culture is. The real issue is the environment that allowed the missing bolt to go undetected until the plane was already in flight.
Previous EA harassment cases and EA community surveys suggest many women in EA report gender-related concerns. That points to broader cultural and structural dynamics we should examine.
I suspect many of us men in the community (myself included) should reflect more seriously on how we can improve. Some of this may relate to biases or blind spots that are widely documented in the literature. If so, addressing them would not only improve community culture but also help us think more clearly in general.
I’m not sure where this reflection will lead, but it seems like a necessary first step, and a few areas seem worth examining.
More broadly, it might be productive for EA to treat this as a structural issue and proactively implement lessons from the large body of research on reducing harassment risk in organizations.
As Brad points out, even now, and quite likely in the near future, EA will be begging for people to start new things. So please disregard the downvotes. If you think you can pull this off and have the credentials, take tips like mine, Brad's, and others', and read downvotes as "not ready yet" rather than "a project like this is not worthwhile". There are tons of people working on starting new things right now, and this will only accelerate as the need grows.
And it reflects poorly on the EA community if we discourage promising entrepreneurs. Here are programs you can apply to to speed up your progress:
-AIM incubation
-Catalyze Impact
-BlueDot
-And probably many more that will put you on the path to creating new orgs, even orgs that in turn create new orgs (this makes me think that an overview of EA opportunities for builders/entrepreneurs might be helpful)
Perhaps this is not all good news: we want AI to embody all of humanity's values, desires, and possibilities for flourishing. While it might be true that EAs have found a path and framework close to optimal for human flourishing, we should also be mindful that both AI and EA have emerged from a very narrow, tech-centered slice of humanity. There seems to be a lot to unpack at more meta levels.