Do people enjoy using Slack? I hate Slack, and I think it has bad ergonomics. I'm in about 10 channels, and logging into them is horrible. There is no voice chat. I'm not getting notifications (and I dread the thought of setting them up correctly; I just assume that if someone really wanted to get in touch with me immediately, they would find a way). I'm pretty sure it would be hard to create a tool better than Slack (one could create a much better tool for a narrower use case, but covering all of Slack's features would be hard), but let's assume I could. Would it be worth it? Do you find Slack awful as well, or is it only me?
People often appeal to Intelligence Explosion/Recursive Self-Improvement as a win condition for current model developers; e.g., Dario argues Recursive Self-Improvement could enshrine the US's lead over China.
This seems non-obvious to me. For example, suppose OpenAI trains GPT 6, which trains GPT 7, which trains GPT 8. A fast follower could then take GPT 8 and use it to train GPT 9. In this case, the fast follower has the lead and has spent far less on R&D (since they didn't have to develop GPT 7 or 8 themselves).
I guess people are thinking that OpenAI will be able to ban GPT 8 from helping competitors? But has anyone argued for why they would be able to do that (either legally or technically)?
They could deploy their best models exclusively internally, or limit the volume of inference that external users can do, if running automated AI researchers to do R&D is compute-intensive.
There are already present-day versions of this dilemma. OpenAI claims that DeepSeek used OpenAI model outputs to train its own models, and they do not reveal their reasoning models' full chains of thought, to prevent competitors from using them as training data.
Meta: I’m seeing lots of blank comments in response to the DIY polls. Perhaps people are thinking that they need to click ‘Comment’ in order for their vote to count? If so, PSA: your vote counted as soon as you dropped your slider. You can simply close the pop-up box that follows if you don’t also mean to leave a comment.
Happy voting!
Which organizations can one donate to in order to help people in Sudan effectively? Cf. https://www.nytimes.com/2025/04/19/world/africa/sudan-usaid-famine.html?unlocked_article_code=1.BE8.fw2L.Dmtssc-UI93V&smid=url-share
I have not done personal research into their cost effectiveness, but I wanted to flag two NGOs recommended by Vox's Future Perfect at the end of last year. The commentary is my own though!
Alight - Alight initially received a humanitarian waiver, but their USAID-funded program was later cancelled and they're raising funds to continue operations. Alight is smaller/less well known than MSH (or other INGOs), and may face greater challenges in rapidly mobilizing emergency resources (medium confidence on this).
"We have anywhere between 15 to 30 infants in these ...
Should I Be Public About Effective Altruism?
TL;DR: I've kept my EA ties low-profile due to career and reputational concerns, especially in policy. But I'm now choosing to be more openly supportive of effective giving, despite some risks.
For most of my career, I’ve worked in policy roles—first as a civil servant, now in an EA-aligned organization. Early on, both EA and policy work seemed wary of each other. EA had a mixed reputation in government, and I chose to stay quiet about my involvement, sharing only in trusted settings.
This caution gave me flexibili...
Speaking as someone who does community building professionally: I think this is great to hear! You’re probably already aware of this post, but just in case, I wanted to reference Alix’s nice write-up on the subject.
I also think many professional community-building organisations aim to get much better at communications over the next few years. Hopefully, as this work progresses, the general public will have a much clearer view of what the EA community actually is - and that should make it easier for you too.
How would you rate current AI labs by their bad influence or good influence? E.g. Anthropic, OpenAI, Google DeepMind, DeepSeek, xAI, Meta AI.
Suppose that the worst lab has a -100 influence on the future for each $1 they spend. A lab half as bad has a -50 influence on the future for each $1 they spend. A lab that's actually good (by half as much) might have a +50 influence for each $1.
What numbers would you give to these labs?[1]
It's possible this rating is biased against smaller labs since spending a tiny bit increases "the number of labs" by 1 which is
Just Compute: an idea for a highly scalable AI nonprofit
Just Compute is a 501c3 organization whose mission is to buy cutting-edge chips and distribute them to academic researchers and nonprofits doing research for societal benefit. Researchers can apply to Just Compute to get access to the JC cluster, which supports research in AI safety, AI for good, AI for science, AI ethics, and the like, through a transparent and streamlined process. It's a lean nonprofit organization with a highly ambitious founder who seeks to raise billions of dollars for compute. ...
There's a famous quote, "It's easier to imagine the end of the world than the end of capitalism," attributed to both Fredric Jameson and Slavoj Žižek.
I continue to be impressed by how little the public is able to imagine the creation of great software.
LLMs seem to be bringing down the costs of software. The immediate conclusion that some people jump to is "software engineers will be fired."
I think the impacts on the labor market are very uncertain. But I'm quite confident that software will get better overall.
This means, "Imagine everything useful ab...
Sorry - my post is coming with the worldview/expectations that at some point, AI+software will be a major thing. I was flagging that in that view, software should become much better.
The question of "will AI+software" be important soon is a background assumption, but a distinct topic. If you are very skeptical, then my post wouldn't be relevant to you.
Some quick points on that topic, however:
1. I think there's a decent coalition of researchers and programmers who do believe that AI+software will be a major deal very soon (if not already). Companies ar...
This article gave me 5% more energy today. I love the no-fear, no-bull#!@$, passionate approach. I hope this kindly packaged "get off your ass, privileged people" can spur some action, and it's great to see these sentiments front and center in a newspaper like the Guardian!
https://www.theguardian.com/lifeandstyle/2025/apr/19/no-youre-not-fine-just-the-way-you-are-time-to-quit-your-pointless-job-become-morally-ambitious-and-change-the-world?CMP=Share_AndroidApp_Other
I've been thinking a lot about how mass layoffs in tech affect the EA community. I got laid off early last year, and after job searching for 7 months and pivoting to trying to start a tech startup, I'm on a career break trying to recover from burnout and depression.
Many EAs are tech professionals, and I imagine that a lot of us have been impacted by layoffs and/or the decreasing number of job openings that are actually attainable for our skill level. The EA movement depends on a broad base of high earners to sustain high-impact orgs through relatively smal...
At risk of violating @Linch's principle "Assume by default that if something is missing in EA, nobody else is going to step up.", I think it would be valuable to have a well-researched estimate of the counterfactual value of getting investment from different funders (whether for-profit investors or donors).
For example, in global health we could make GiveWell the baseline, as I doubt there is any funding source where switching has less counterfactual impact, since the money will only ever be shifted from something slightly less effective. For example if my organisation recei...
https://www.gap-map.org/?sort=rank&fields=biosecurity
I think this is cool! It shows gaps in capabilities so that people can see what needs to be worked on.
Reflections on "Status Handcuffs" over one's career
(This was edited using Claude)
Having too much professional success early on can ironically restrict you later. People are typically hesitant to go down in status when choosing their next job. This can easily mean that "staying in career limbo" feels higher-status than actually working. At least when you're in career limbo, you have a potential excuse.
This makes it difficult to change careers. It's very awkward to go from "manager of a small team" to "intern," but that can be necessary if you want to le...
I've just run into this, so excuse a bit of grave-digging. As someone who entered the EA community with prior career experience, I disagree with your premise:
"It's very awkward to go from "manager of a small team" to "intern," but that can be necessary if you want to learn a new domain, for instance."
To me this kind of situation just shouldn't happen. It's not a question of status, it's a question of inefficiency. If I have managerial experience and the organization I'd be joining can only offer me the exact same job they'd be offering to a fresh grad, t...
Announcing PauseCon, the PauseAI conference.
Three days of workshops, panels, and discussions, culminating in our biggest protest to date.
Twitter: https://x.com/PauseAI/status/1915773746725474581
Apply now: https://pausecon.org
The recent rise of AI Factory/Neocloud companies like CoreWeave, Lambda and Crusoe strikes me as feverish and financially unsound. These companies are highly overleveraged, offering GPU access as a commodity to a monopsony. Spending vast amounts of capex on a product that must be highly substitutive to compete with hyperscalers on cost strikes me as an unsustainable business model in the long term. The association of these companies with the 'AI Boom' could cause collateral reputation damage to more reputable firms if these Neoclouds go belly up.
And,...
I'd be excited to see 1-2 opportunistic EA-rationalist types looking into where marginal deregulation is a bottleneck to progress on x-risk/GHW, circulating 1-pagers among experts in these areas, and then pushing the ideas to DOGE/Mercatus/Executive Branch. I'm thinking things like clinical trials requirements for vaccines, UV light, anti-trust issues facing companies collaborating on safety and security, maybe housing (though I'm not sure which are bottlenecked by federal action). For most of these there's downside risk if the message is low fidelity, the...
There's this ACX post (that I only skimmed and don't have strong opinions about) which mostly seems to do this, minus the "pushing" part.
Thought these quotes from Holden's old (2011) GW blog posts were thought-provoking; unsure to what extent I agree. In "In defense of the streetlight effect" he argued that:
...If we focus evaluations on what can be evaluated well, is there a risk that we’ll also focus on executing programs that can be evaluated well? Yes and no.
- Some programs may be so obviously beneficial that they are good investments even without high-quality evaluations available; in these cases we should execute such programs and not evaluate them.
- But when it comes to programs that where eval
LLMs seem more like low-level tools to me than direct human interfaces.
Current models suffer from hallucinations, sycophancy, and numerous errors, but can be extremely useful when integrated into systems with redundancy and verification.
We're in a strange stage now where LLMs are powerful enough to be useful, but too expensive/slow to have rich scaffolding and redundancy. So we bring this error-prone low-level tool straight to the user, for the moment, while waiting for the technology to improve.
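To make the "redundancy and verification" idea concrete, here is a minimal sketch of the kind of scaffolding I have in mind. It assumes hypothetical placeholder functions (`call_llm`, `verify_answer`) rather than any real model API; the point is only the structure: sample several completions, discard the ones that fail a domain-specific check, and majority-vote over the survivors.

```python
import random

def call_llm(prompt: str) -> str:
    """Stand-in for an LLM call; real code would hit an actual model API."""
    return random.choice(["42", "forty-two", "I don't know"])

def verify_answer(answer: str) -> bool:
    """Domain-specific check, e.g. schema validation or a unit test."""
    return answer.isdigit()

def robust_llm(prompt: str, n_samples: int = 5) -> str | None:
    """Sample several completions, keep only those that pass verification,
    and return the most common survivor (simple majority-vote redundancy)."""
    answers = [call_llm(prompt) for _ in range(n_samples)]
    verified = [a for a in answers if verify_answer(a)]
    if not verified:
        return None  # escalate to a human or a stronger model
    return max(set(verified), key=verified.count)

print(robust_llm("What is 6 * 7?"))
```

This kind of wrapper multiplies inference cost by the number of samples, which is exactly why, while models stay expensive and slow, the raw error-prone tool gets handed straight to users instead.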
Using today's LLM interfaces feels like writing SQL commands ...