If you put a substantial amount of time into something, I think it’s worth considering whether there’s an easy way to summarize what you learned or repurpose your work for the EA Forum.
I find that repurposing existing work is quick to write up because I already know what I want to say. I recently wrote a summary of what I learned applying to policy schools, and I linked to the essays I used to apply. The process of writing this up took me about three hours, and I think the post would have saved me about five hours had I read it before applying. And I...
This is a post with praise for Good Ventures.[1] I don’t expect anything I’ve written here to be novel, but I think it’s worth saying all the same.[2] (The draft of this was prompted by Dustin M leaving the Forum.)
Over time, I’ve done a lot of outreach to high-net-worth individuals. Almost none of those conversations have led anywhere, even when they say they’re very excited to give, and use words like “impact” and “maximising” a lot.
Instead, people almost always do some combination of:
- (I remember in the early days of 80,000 Hours, we spent a whole day hosting a UHNW individual. He ultimately gave £5000. The week afterwards, a one-hour call with Julia Wise - a social worker at the time - resulted in a larger donation.)
Every few months I learn about another way that Julia has had a significant impact on this community, and it never ceases to give me a sense of awe and appreciation for her selflessness. EA would not be what it is today without Julia.
The current US administration is attempting an authoritarian takeover. This takes years and might not be successful. My Manifold question puts an attempt to seize power if they lose legitimate elections at 30% (n=37). I put it much higher.[1]
Not only is this concerning in itself, it also incentivizes them to seek a decisive strategic advantage over pro-democracy factions via superintelligence. As a consequence, they may be willing to rush and cut corners on safety.
Crucially, this relies on them believing superintelligence can be achieved before ...
This is something we should think about more as a mod team - I'll discuss it with them.
Our current politics policy is still this. But it arguably wasn't designed with our current situation in mind. In my view, it'd be a bad thing if discussions on the Forum became too tied to the news cycle (it generally seems true that once something is in the news, you are at least several years too late to change it), and our impact has historically not come from working in the most politically salient areas (neglectedness isn't a perfect proxy, but it still matters). Howev...
Am I wrong that EAs working in AI (safety, policy, etc.) who are now earning really well (easily top 1%) are less likely to donate to charity?
At least in my circles, I get the strong impression that this is the case, which I find kind of baffling (and a bit upsetting, honestly). I have some just-so stories for why this might be the case, but I'd rather hear others' impressions, especially if they contradict mine (I might be falling prey to confirmation bias here since the prior should be that salary correlates positively with likelihood of donating among EAs regardless of sector).
I'd consider this a question that doesn't benefit from public speculation because every individual might have a different financial situation.
Truth be told, "earning really well" is a very ambiguous category. Obviously, if someone were financially stable, e.g. consistently earning a high five-figure or six-figure amount in dollars/euros/pounds/francs or more annually (and having non-trivial savings) and having a loan-free house, their spending would almost always reflect discretionary interests and personal opinions (like "do I donate to charity or not").
For everyone not fi...
Richard Ngo has a selection of open-questions in his recent post. One question that caught my eye:
How much censorship is the EA forum doing (e.g. of thought experiments) and why?
I originally created this account to share a thought experiment I suspected might be a little too 'out there' for the moderation team. Indeed, it was briefly removed and didn't appear in the comment section for a while (it does now). It was, admittedly, a slightly confrontational point and I don't begrudge the moderation team for censoring it. They were patient and transparent in ...
So long and thanks for all the fish.
I am deactivating my account.[1] My unfortunate best guess is that at this point there is little point in, and at least a bit of harm caused by, me commenting more on the EA Forum. I am sad to leave behind so much that I have helped build and create, and even sadder to see my own actions indirectly contribute to much harm.
I think many people on the forum are great, and at many points in time this forum was one of the best places for thinking and talking and learning about many of the world's most important top...
In light of recent discourse on EA adjacency, this seems like a good time to publicly note that I still identify as an effective altruist, not EA adjacent.
I am extremely against embezzling people out of billions of dollars, and FTX was a good reminder of the importance of "don't do evil things for galaxy-brained altruistic reasons". But this has nothing to do with whether or not I endorse the philosophy that "it is correct to try to think about the most effective and leveraged ways to do good and then actually act on them". And there are many peop...
As do I, brother, thanks for this declaration! I think now might not be the worst time for those who do identify directly as EAs to say so to encourage the movement, especially some of the higher-up thought and movement leaders. I don't think a massive sign-up form or anything drastic is necessary, just a few higher-status people standing up and saying "hey, I still identify with this thing".
That is if they think it isn't an outdated term...
One of the benefits of the EA community is as a social technology where altruistic actions are high status: earning-to-give, pledging and not eating animals are all venerated to varying degrees among the community.
Pledgers have coordinated to add the orange square emoji to their EA Forum profile names (and sometimes in their Twitter bio). I like this, as it both helps create an environment where one might sometimes be forced to think "wow, lots of pledgers here, should I be doing that too?" as well as singling out those deserving of our respect...
I'm glad to see that the EA Forum Team implemented clear and obviously noticeable tags for April Fools' Day posts. It shows they listen to feedback!
Thanks for giving feedback! I looked at this particular quick take again before April Fools' Day to make sure we'd fixed the issue. Thanks to @JP Addison🔸 for writing the code to make the tags visible.
Reflections on "Status Handcuffs" over one's career
(This was edited using Claude)
Having too much professional success early on can ironically restrict you later on. People are typically hesitant to go down in status when choosing their next job. This can easily mean that "staying in career limbo" is higher-status than actually working. At least when you're in career limbo, you have a potential excuse.
This makes it difficult to change careers. It's very awkward to go from "manager of a small team" to "intern," but that can be necessary if you want to le...
I agree with you. I think in EA this is especially the case because much of the community-building work is focused on universities/students, and because of the titling issue someone else mentioned. I don't think someone fresh out of uni should be head of anything, wah. But the EA movement is young and was started by young people, so it'll take a while for career-long progression funnels to develop organically.
~30 second ask: Please help @80000_Hours figure out who to partner with by sharing your list of Youtube subscriptions via this survey
Unfortunately this only works well on desktop, so if you're on a phone, consider sending this to yourself for later. Thanks!
I spent most of my early career as a data analyst in industry, which engendered in me a deep wariness of quantitative data sources and plumbing, and a never-ending discomfort at how often others tended to just take them as given as inputs to consequential decision-making, even if at an intellectual level I knew their constraints and other priorities justified it and they were doing the best they could. ...and then I moved to global health applied research and realised that the data trustworthiness situation was so much worse I had to recalibrate a lot of ...
This is fantastic to hear! The Global Burden of Disease (GBD) process (while the best and most reputable we have) is surprisingly opaque and hard to follow in many cases. I haven't been able to find the spreadsheets with their calculations.
Their numbers are usually reasonable but bewildering in some cases and obviously wrong in others. GiveWell moving towards combining GBD with other sensible models is a great way forward.
It's a bit unfortunate that the best burden-of-disease models we have aren't more understandable.
We just launched a free, self-paced course: Worldbuilding Hopeful Futures with AI, created by Foresight Institute as part of our Existential Hope program.
The course is probably not breaking new conceptual ground for folks here who are already “red-pilled” on AI risks — but it might still be of interest for a few reasons:
It’s designed to broaden the base of people engaging with long-term AI trajectories, including governance, institutional design, and alignment concerns.
It uses worldbuilding as an accessible gateway for newcomers — especially those wh
For those among us who want to get straight back to business - I've tagged (I think) all the April Fools' Day posts, so you can now filter them out of your frontpage if you prefer by adding the "April Fools' Day" tag under the "Customize feed" button at the top of the frontpage, and changing the filter to hidden.
I thought that today could be a good time to write up several ideas I think could be useful.
1. Evaluation Of How Well AI Can Convince Humans That AI is Broadly Incapable
One key measure of AI progress and risk is understanding how good AIs are at convincing humans of both true and false information. Among the most critical questions today is, "Are modern AI systems substantially important and powerful?"
I propose a novel benchmark to quantify an AI system's ability to convincingly argue that AI is weak—specifically, to persuade human evaluators that AI...
I just read Stephen Clare's excellent 80k article about the risks of stable totalitarianism.
I've been interested in this area for some time (though my focus is somewhat different) and I'm really glad more people are working on this.
In the article, Stephen puts the probability that a totalitarian regime will control the world indefinitely at about 1 in 30,000. My probability on a totalitarian regime controlling a non-trivial fraction of humanity's future is considerably higher (though I haven't thought much about this).
One point of disagreement ...
Another way to think about the risk is not just the current existing authoritarian regimes (e.g. China, Russia, DPRK) but also the alliance or transnational movement of right-wing populism, which is bleeding into authoritarianism, seeking power in many Western democracies. Despite being “nationalist”, each country’s movement and leaders often support each other on the world stage and are learning from each other e.g. Bannon pays a support visit to France’s National Front, many American right-wingers see Orban as a model and invite him to CPAC, Le Pen and O...
so im a fool because you betrayed my trust? im a fool for holding what you say with complete sincerity? i’m not the fool, you are
(credit: https://x.com/FilledwithUrine/status/1906905867296927896)