Welcome to the EA Forum bot site. If you are trying to access the Forum programmatically (either by scraping or via the API), please use this site rather than forum.effectivealtruism.org.

This site has the same content as the main site, but is run in a separate environment to avoid bots overloading the main site and affecting performance for human users.

Quick takes

In two days (March 21st, 12-4pm), about 140 of us (event link) will be marching on Anthropic, OpenAI and xAI in SF, asking the CEOs to state whether they would stop developing new frontier models if every other major lab in the world credibly did the same. This comes after Anthropic removed its commitment to pause development from its RSP. We'll be starting at 500 Howard St, San Francisco (Anthropic's office; full schedule and more info here). This is shaping up to be the biggest US AI safety protest to date, with a coalition including Nate Soares (MIRI), David Krueger (Evitable), Will Fithian (Berkeley professor) and folks representing PauseAI, QuitGPT, and Humans First.
AI Czar attacks EA. (Again.) Today, in this post on X, the U.S. 'AI Czar' David Sacks directly attacked Humans First, an AI safety advocacy organization, claiming that it is nothing more than a 'censorship power play': a shadowy campaign by Effective Altruists to turn the conservative right against the AI industry and block technological progress. He quote-posted this blog by Jordan Schachtel titled 'Built to Deceive: How the Effective Altruist Machine Infiltrated the Conservative Right on AI'. As an AI safety advocate, a member of Humans First, an Effective Altruist, and a political conservative, I'm angry about this misrepresentation of the AI safety campaign. And I think EAs should fight back harder against senior federal officials smearing our movement. Any suggestions on how to respond? I don't have time this week to write a detailed rebuttal, but I'd be happy to link and promote anything that others write.
UC Berkeley EA is hosting a west coast uni student EA retreat on April 10-12, with ~50 attendees from Berkeley, Stanford, UCLA, UCI, UCSD, & more, as well as special guests like Matt Reardon, Jake McKinnon, Jesse Gilbert, Julie Steele, Adam Khoja, Richard Ren, & more...

...but we only know to reach out to people who're involved with their uni's clubs. So: if you're interested in attending, book a 5-10 minute chat with Alex or Aiden :)

Some examples of gaps in our outreach:

* unis that don't have an EA club
* students who haven't joined their uni's EA club
* transfers to west-coast unis
* students who're on leave from their uni and presently living on the west coast
* high-schoolers who'll soon be starting at west coast unis

We won't be able to take everyone, but reading the EA Forum is a pretty positive indicator that you'd be a good fit!
TIL: In 1971, Mario Pierre Roymans stole a Vermeer painting and tried to ransom it for a donation to starving Bengali refugees. It's an interesting example of naive altruistic utilitarianism before EA — inspired by the same famine that led Peter Singer to write "Famine, Affluence, and Morality". (Roymans was apprehended and spent six months in prison; no ransom was paid.)
More EA in the news: https://x.com/DavidSacks/status/2034047505336295904 And the spicy CAIS take: https://x.com/cais/status/2034389842076025164?s=46