Welcome to the EA Forum bot site. If you are trying to access the Forum programmatically (either by scraping or via the API), please use this site rather than forum.effectivealtruism.org.

This site has the same content as the main site, but is run in a separate environment to avoid bots overloading the main site and affecting performance for human users.

Quick takes

I'm writing a newsletter on current events, long-term trends, and topical debates roughly every other day. Recent posts include:

* A summary of a debate on AI progress, featuring Ajeya Cotra, Peter Wildeford, Eli Lifland, Matthew Barnett, and others.
* The three types of problems with population decline.
* We underestimate the pace of progress because much of it isn't salient.
* The reason we don't see more automation is simply that AI isn't good enough.
* Economists are unusually good at taking human agency into account.
If you work in an office with other EAs / interesting and interested people, consider putting the debate slider from our upcoming debate on a big whiteboard. It can lead to some interesting conversations and, even better, some counterfactual forum posts.

PS: I'm aware this looks a bit 'people selling mirrors'.
Linch · 15h
On the off chance anybody is both interested in AI news and missed it, Anthropic sued the DoW and other government officials/agencies over the supply chain risk designation in the DC and Northern California circuits. The full text of the Northern California complaint here:

The primary complaints:

1. First Amendment retaliation. Anthropic alleges that Pentagon officials illegally retaliated against the company for its position on AI safety. They argue that Trump, Hegseth, and others wanted to punish Anthropic for protected speech, citing public social media and other dialogue as evidence that the punishment is ideological in nature.
2. Misuse of the supply chain risk designation. Anthropic was officially designated a supply chain risk, which requires defense contractors to certify that they don't use Claude in their Pentagon work. Anthropic argues that this is a misuse of the SCR designation, which Congress intended to stop foreign actors, and that Anthropic clearly does not pose a supply-chain risk under a plain reading of the law.
3. Lack of due process (Fifth Amendment violation). "The Challenged Actions arbitrarily deprive Anthropic of those interests without any process, much less due process."
4. Ultra vires. Anthropic alleges that the Presidential Directive requiring every federal agency to immediately cease all use of Anthropic's technology exceeds the limits of the President's authority as granted by Congress.
5. Administrative Procedure Act. Similar to the above, Anthropic argues that the administration's actions violate the Administrative Procedure Act, since the sanctions are not within the authority Congress granted to the relevant agencies.

IANAL etc., but in my personal opinion #2 seems very clear-cut on a common-language and precedent reading. #1 also seems strong. Sources I skimmed online thought #3–#5 had a good case too, but I don't have an independent view. The DC complaint looks less meaty (and I didn't read it).
quinn · 13h
I'm confused about tithing. I yearn for the diamond emoji from GWWC, but I'm not comfortable enough to do it since I took roughly a 50% pay cut to do AI safety nonprofit work. It seems weird to make such a financial commitment, which implicates my future wife, whom I have presumably not met yet, especially when I'm scraping by without much in savings per paycheck.

Is there a sense in which I'm already diamond-emoji eligible, because I'm "donating 50% of my income" in the sense of opportunity cost? 50 is, famously, greater than 10.
In my experience, orgs work much harder to get donations from a "grantmaker" than from an individual. I made my first big donation in 2015, when I donated $20K to REG. I talked to a bunch of orgs while trying to decide where to donate. Some of them didn't respond at all, and many of the responses were shallow. A few months later, I took a philanthropy class at Stanford where we split into groups, and each group was responsible for figuring out where to direct a $20K grant. The level of communication I got from nonprofits was dramatically different: orgs bent over backwards to be as communicative and helpful as possible. My experience was that orgs didn't put much priority on a $20K grant from me as an individual, but they jumped at the possibility of a $20K grant from a Stanford grantmaker.

For my future donations, I'm considering whether to rebrand my emails: I could tell nonprofits something like "I'm reaching out as a representative of the Greatest Happiness Fund, a grantmaker that focuses on supporting effective charities" (Greatest Happiness Fund is the name of my DAF). Maybe I would get better responses that way. It feels a little manipulative, though.