A while back I came across this slide from the Money for Good project, which I thought was a sobering quantification of how rarely donors make decisions based on nonprofit outperformance (cost-effectiveness etc). Hope Consulting got this data by surveying 4,000 US individuals with household incomes >$80k (top 30% of incomes back in 2009, comprising 75% of overall individual donations), of which 2,000 were in the >$300k bracket.
Opportunity size for US retail donors in 2009 was ~$45B, so this works out to ballpark $1-1.5B, which is still sizeable, e.g. it...
This is not really an argument to either side, but a while ago I created a rough little spreadsheet where you can put in:
- How much disvalue you see in the world going badly for animals vs. humans
- How likely you think it is that the world will go badly for animals vs. humans vs. both
- How much of the work that makes AI go well for humans you expect to also help make AI go well for animals
And it calculates for you what you should focus on (AIS vs. AIxAnimals) :)
It's very rough, very proxy, all the usual caveats apply. But I am hoping that it can wit...
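Illustratively, the kind of expected-value comparison such a spreadsheet might do can be sketched like this. All variable names and numbers here are hypothetical placeholders I've made up for the sketch, not the actual spreadsheet's inputs or formulas:

```python
# Hedged sketch of a focus-area comparison (hypothetical inputs, not the real sheet).

# Relative disvalue if AI goes badly for each group.
disvalue_animals = 3.0   # e.g. weighting a bad animal outcome 3x a bad human outcome
disvalue_humans = 1.0

# Your probabilities that AI goes badly for each group.
p_bad_animals = 0.4
p_bad_humans = 0.2

# Fraction of human-focused AI safety work you expect to also help animals.
spillover = 0.3

# Expected disvalue addressed per unit of effort on each focus area.
value_ais = p_bad_humans * disvalue_humans + spillover * p_bad_animals * disvalue_animals
value_ai_x_animals = p_bad_animals * disvalue_animals

focus = "AIxAnimals" if value_ai_x_animals > value_ais else "AIS"
print(f"AIS: {value_ais:.2f}, AIxAnimals: {value_ai_x_animals:.2f} -> focus on {focus}")
```

With these made-up numbers the animal-focused work dominates, but small changes to the spillover or probability inputs flip the answer, which is exactly why a tool like this is useful despite being a rough proxy.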
TIL: In 1971, Mario Pierre Roymans stole a Vermeer painting and tried to ransom it for a donation to starving Bengali refugees. It's an interesting example of naive altruistic utilitarianism before EA — inspired by the same famine that led Peter Singer to write "Famine, Affluence, and Morality".
(Roymans was apprehended and spent six months in prison; no ransom was paid.)
I would have predicted the positive press and basically think this would "work" today if these conditions were met:
I agree with you on the overall downsides, though. This sets a bad precedent that will be misused by many and burn a ton of social trust, which is ultimately more important.
We're sadly no longer accepting sign-ups for our founder's programme. We've had an influx of demand and we're now fully at capacity for the foreseeable future. The programme's funding situation is precarious and I've sadly got to focus on that now. Results are nuts, but mental health funders are focussed on LMICs, meta funders don't like mental health interventions, so it's a challenging category to even survive in.
For now, I've got to focus on doing a good job for our existing clients. I'm sorry!
Blimey. Did you check with CE about offering it as part of their incubation program (funded by them, maybe paid by results as you say)? And/or other incubators like Catalyze, or fellowship programs (not founders per se) like Constellation? (IIRC they have an affiliated executive coach already)
I'm surprised by "I don't really want a grant" though. E.g. the usual process is basically seed funding grant to check/demonstrate progress --> if you achieve that (or seem on track to), you get renewed funding. The mechanism isn't perfect (maybe you can BS your wa...
I run a curated Discord for high agency people with Long COVID/ME (myalgic encephalomyelitis). The group includes tech founders, researchers, rationalists/EAs, etc. The focus is on troubleshooting each other's conditions actively, as well as creating a body of knowledge to bring back to the wider community in the form of writing, education, projects, companies, etc.
Some know me as Liface, others as Liam Rosen - I have been in the EA/rationality community for over 10 years, previously in the Bay, now in New York City, and am the main moderator of r/slatesta...
Coal and nuclear electricity generation kill a significant number of fish through water intake systems. This matters for evaluating the impact of any new electricity load.
Most thermal power plants (coal, nuclear, and to a lesser extent gas) draw large volumes of water from rivers and lakes for cooling. This causes two underappreciated harms to fish:
Impingement — fish get trapped against water intake filters and die. Entrainment — eggs, larvae, and small fish are pulled through pumps and heat exchangers, killing them. A single coal plant in Ohio (Bay Shore ...
More EA in da news: https://x.com/DavidSacks/status/2034047505336295904
And the spicy CAIS take: https://x.com/cais/status/2034389842076025164?s=46
From the CAIS tweet:
We believe the effective altruism movement is, unfortunately, controlled opposition. The less influence it has on AI safety, the better.
I really don't like it when people paint the whole movement with one brush, but they're not wrong that there's a subset of AI safety/EA that behaves like controlled opposition. Obvious example is Anthropic/Dario (who was one of the first GWWC signatories); Good Ventures basically doesn't fund anything that might be bad for AI company stock prices; I can think of some other possible examples but I do...
Researchers simulate an entire fly brain on a laptop. Is a human brain next?
What is the implication of this for EA thinking? Does the fly that purely exists in the computer warrant moral consideration, and could we increase the overall welfare of the world by making millions of these simulations with ideal fruit-fly conditions?
They fully copied the fly's brain, so as I understand it, in theory it should also feel pleasure and pain. I think this poses a real conundrum for EA morality.
You're right that it isn't a WBE. Also, incentives:
To be fair, we’re not unsympathetic to why Eon used the language they did. Their careful blog post on ‘How the Eon Team Produced a Virtual Embodied Fly’ would likely have only been read by a few hundred neuroscientists, while “We’ve uploaded a fruit fly” reached millions. Startup survival requires investment, funding follows excitement, and excitement follows headlines - not careful caveats. This bold approach may even feel obligatory when an organisation’s stated mission is “solving brain emulation as an engineering sprint, not a decades-long research program.”
"There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy"
One thing I've been floating about for a while, and haven't really seen anybody else deeply explore[1], is what I call "further moral goods": further axes of moral value as yet inaccessible to us, that are qualitatively, not just quantitatively, different from anything we've observed to date.
For background, I think normal, secular, humans live in 3 conceptually distinct but overlapping worlds:
In two days (March 21st, 12-4pm), about 140 of us (event link) will be marching on Anthropic, OpenAI and xAI in SF asking the CEOs to make statements on whether they would stop developing new frontier models if every other major lab in the world credibly does the same. This comes after Anthropic removed its commitment to pause development from their RSP.
We'll be starting at 500 Howard St, San Francisco (Anthropic's Office, full schedule and more info here). This is shaping up to be the biggest US AI Safety protest to date, with a coalition including Nate Soar...
AI Czar attacks EA. (Again.)
Today, in this post on X, the U.S. 'AI Czar' David Sacks directly attacked Humans First, an AI safety advocacy organization, claiming that it's nothing more than a 'censorship power play': a shadowy campaign by Effective Altruists to turn the conservative right against the AI industry and block technological progress.
He quote-posted this blog by Jordan Schachtel titled 'Built to Deceive: How the Effective Altruist Machine Infiltrated the Conservative Right on AI'.
As an AI Safety advocate, a member of Humans First, an E...
This is a slow-burn solution, but the most effective support and rebuttals will come from people who aren't EAs, but are just fair/principled, and have had enough exposure to EA to know when attacks are unfair. E.g. see Dean Ball this week. So the more surface area EAs can create with those sorts of people, the better the position EA is in. For example, I think Andy Masley's datacenter water use posts created a lot of surface area with such people and have been better for the EA 'brand' than any specific rebuttal.
(A part of this strategy involves, as a...
UC Berkeley EA is hosting a west coast uni student EA retreat on april 10-12, with ~50 attendees from Berkeley, Stanford, UCLA, UCI, UCSD, & more, as well as special guests like Matt Reardon, Jake McKinnon, Jesse Gilbert, Julie Steele, Adam Khoja, Richard Ren, & more...
...but we only know to reach out to people who're involved with their uni's clubs. so: if you're interested in attending, book a 5-10 minute chat with alex or aiden :)
some examples of gaps in our outreach:
I went to jail yesterday in Wisconsin. I helped rescue 23 beagles in a large mass open rescue against a factory farm, Ridglan Farms, near Madison. We were trying to push the police to act on documented animal cruelty at Ridglan. Instead they arrested me and 26 other activists.
I wrote a blog post about why I did it. Excerpt:
...I think some altruists suffer from lack of moral courage. Especially those of us who work on tech: we often have lots of moral conviction, but are typically wealthy and aren’t usually risking much personally, and I think that’s a gap.
This is one of the most inspiring things I've read in months. It's such a good example to have someone with an illustrious tech background like you involved in a protest like this. It might jolt some into action, or at least make us think a bit harder about whether we are really morally courageous enough to do the best that we can.
Extended anecdote from My Willing Complicity In "Human Rights Abuse", by a former doctor (GP) working at a Qatari visa center in India to process "the enormous number of would-be Indian laborers who wished to take up jobs there":
...Another man comes to mind (it is not a coincidence that the majority of applicants were men). He was a would-be returnee - he had completed a several year tour of duty in Qatar itself, for as long as his visa allowed, and then returned because he was forced to, immediately seeking reassessment so he could head right back. He had wo
How regularly does everyone use this forum? I'm curious whether people tend to set aside time for browsing the forum, check it on-the-go, or just check the forum digest. I'm also wondering how I should approach the forum (examples: set aside one hour every week to stay up to date on the latest posts, check it when I'm on my phone instead of doomscrolling, just read the weekly digest and see if there are any interesting posts, etc.).
i'm confused about tithing. I yearn for the diamond emoji from GWWC, and I'm not comfortable enough to do it since I took like a 50% pay cut to do AI safety nonprofit stuff. Seems weird to make such a financial commitment, which implicates my future wife, who I have presumably not met yet, especially when I'm scraping by without too many savings per paycheck.
Is there a sense in which I already am diamond emoji eligible, because I'm "donating 50% of my income" in the sense of opportunity cost? 50 is, famously, greater than 10.
I disagree-voted, because I don't think it is a terrible policy / I think it is a hard problem and they've solved it in probably the most reasonable way.
I think that it probably isn't perfect and has a lot of issues, but pledged donations are counterfactual (no one would donate otherwise), while doing a direct work role is not as clearly counterfactual (the organization would usually probably hire someone else, but maybe they'd be less good than you, etc). I think that feels messy to litigate properly - in some cases doing direct work is way better than the ...
Experts currently treat being persuaded as reasonably good evidence that something is true — their judgment is calibrated enough that when they find an argument convincing, that's correlated with the argument actually being correct. This allows them to update readily in light of new evidence, and is a big part of how intellectual progress happens: lots of innovation and advances in basically every subject come down to experts taking sometimes weird new ideas seriously.
One worry I have about superpersuasive AI is that it could erode this. If a superpersuasi...
I think this is technically true but irrelevant: if we have superpersuasive AI, then there won't be human experts anymore, because the AI will have more expertise than any human. Unless somehow the AI is superpersuasive while still having sub-human performance in most ways, which seems unlikely to me.
I think a common mistake for researchers/analysts outside of academia[1] is that they don't focus enough on making their research popular. E.g. they don't actively promote their research enough, or write it in a way that could easily become popular. I talked to someone (a fairly senior researcher) about this, and he said he doesn't care about mass outreach given that he only cares about his research being built upon by ~5 people. I asked him if he knows who those 5 people are and could email them; he said no.
I think this is a systematic...
In my experience, orgs work much harder to get donations from a "grantmaker" than from an individual.
I made my first big donation in 2015, when I donated $20K to REG. I talked to a bunch of orgs in the process of trying to decide where to donate. Some of them didn't respond at all, and many of their responses were shallow.
A few months later, I took a philanthropy class at Stanford where we split up into groups and each group was responsible for figuring out where to donate a $20K grant. The level of communication I got from nonprofits was dramatically dif...