Quick takes


A while back I came across this slide from the Money for Good project, which I thought was a sobering quantification of how rarely donors choose nonprofits based on outperformance (cost-effectiveness, etc.). Hope Consulting got this data by surveying 4,000 US individuals with household incomes >$80k (the top 30% of incomes back in 2009, accounting for 75% of overall individual donations), of which 2,000 were in the >$300k bracket.

Opportunity size for US retail donors in 2009 was ~$45B, so this works out to a ballpark $1-1.5B, which is still sizeable, e.g. it... (read more)
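As a back-of-envelope check (my own arithmetic, not from the slide -- the implied share is just the ratio of the two numbers above):

```python
# Back-of-envelope: what share of retail giving the ballpark figures imply.
# Inputs come from the post above; nothing here is from the slide itself.
retail_donations_2009 = 45e9            # ~$45B US retail donor opportunity
performance_based_range = (1e9, 1.5e9)  # the post's ballpark for performance-driven giving

for amount in performance_based_range:
    share = amount / retail_donations_2009
    print(f"${amount / 1e9:.1f}B -> {share:.1%} of retail giving")
# ~2.2% and ~3.3% -- only a few percent of dollars follow performance.
```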

This is not really an argument to either side, but a while ago I created a rough little spreadsheet where you can put in:

- How much disvalue you see in the world going badly for animals vs. humans
- How likely you think it is that the world will go badly for animals vs. humans vs. both
- How much of the work that makes AI go well for humans you expect to also help make AI go well for animals

And it calculates for you what you should focus on (AIS vs. AIxAnimals) :) 
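If it helps, here's a minimal sketch of the kind of comparison the spreadsheet makes, as I'd reconstruct it -- placeholder numbers and proxy logic, not the actual formulas:

```python
# A minimal sketch of the expected-value comparison; my reconstruction,
# not the spreadsheet's actual formulas. All numbers are placeholders.

disvalue_bad_for_humans = 1.0    # how bad a world that goes badly for humans is
disvalue_bad_for_animals = 1.0   # how bad a world that goes badly for animals is

p_bad_for_humans = 0.3           # chance the world goes badly for humans
p_bad_for_animals = 0.6          # chance the world goes badly for animals

spillover = 0.5  # fraction of "AI goes well for humans" work that also helps animals

# Expected disvalue each focus addresses (a very rough proxy for marginal impact)
ev_ais = (p_bad_for_humans * disvalue_bad_for_humans
          + spillover * p_bad_for_animals * disvalue_bad_for_animals)
ev_aixanimals = p_bad_for_animals * disvalue_bad_for_animals

print("Focus on AIS" if ev_ais > ev_aixanimals else "Focus on AIxAnimals")
```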

It's very rough, very proxy, all the usual caveats apply. But I am hoping that it can wit... (read more)

I'm not sure how to interpret the "Cost" lines. Is it supposed to be the negation of utility? And therefore "Cost of World C (Good for Humans + Good for Animals)" should be a negative number, because it has positive utility?

TIL: In 1971, Mario Pierre Roymans stole a Vermeer painting and tried to ransom it for a donation to starving Bengali refugees. It's an interesting example of naive altruistic utilitarianism before EA — inspired by the same famine that led Peter Singer to write "Famine, Affluence, and Morality".

(Roymans was apprehended and spent six months in prison; no ransom was paid.)

Michael St Jules 🔸
Why do you think it was naive instead of a good bet that happened to not work out?
Aaron Gertler 🔸
Maybe "naive" isn't the right language -- I mean it mostly in the sense of "it's a bad idea to commit crimes in the service of charity" rather than "the expected value was negative". If Mario cared sufficiently little about being imprisoned, damaging a masterpiece, or generating opposition to famine relief writ large, I could see the theft as a positive-EV move from his perspective. But on the "benefit" side of the tradeoff, I'm skeptical that there was even a remote possibility of the Belgian government putting up ~$17 million to ransom the painting, especially on the deadline he set. (Claude notes that governments have a strong incentive not to set a precedent by making public ransom payments.) That said, when I did some more reading on the case, I saw this: So it may have been a surprisingly effective publicity stunt, if the public's reactions were really so positive! (That's not something I'd expect in the modern world.)  But I continue to think it's generally misguided to steal money so you can give it away,* for reasons including "I wouldn't want someone stealing my money to support their own favorite charity" and "if your cause draws attention because thieves support it, you should expect people to turn against it". *But if you can steal bread to feed your starving child, why not someone else's children? As the guy who played Javert in my high school's production of Les Mis, I can't help thinking about Jean Valjean here. But I'm not inclined to spend the time I'd need to work through the relevant arguments and counterarguments.

I would have predicted the positive press and basically think this would "work" today if these conditions were met:

  • charismatic criminal (art thieves! maybe hackers like anonymous)
  • ransom made to a powerful, disliked entity (governments, specific well-known billionaires)
  • for a well-known cause that's widely regarded as worthy (hurricane/typhoon relief, childhood cancer research, etc.)

I agree with you on the overall downsides, though. This sets a bad precedent that would be misused by many and would burn a ton of social trust, which is ultimately more important.

We're sadly no longer accepting sign-ups for our founders' programme. We've had an influx of demand and we're now fully at capacity for the foreseeable future. Its funding situation is precarious and I've got to focus on that now. Results are nuts, but mental health funders are focussed on LMICs and meta funders don't like mental health interventions, so it's a challenging category to even survive in.

For now, I've got to focus on doing a good job for our existing clients. I'm sorry!  

Jamie_Harris
I opened your profile and website and couldn't tell what this referred to. I'm intrigued, even if it's no longer accepting sign-ups!
John Salter
We don't have a public page for it; people sign up via word of mouth and invites via incubators. We handpick and train mental health coaches for EA founders from the people who got the best results for regular EAs. The thesis was that people who're founding or scaling an EA charity face a ton of mental health challenges, that these can be resolved quickly, and that doing so helps them and their charity succeed.

I figured getting the results would be the hard part, or convincing founders you could, but no. Within ~2 years, over half of AIM-incubated charities have had one or more founders successfully resolve a mental health problem with us. ~90% of people who do the first session complete the programme, and ~50% decide to keep going after it ends to work on their next most pressing problem. This is waaaaaaaay better than our stats for regular EAs and regular people. Founders underinvest in themselves so hard, and are so focussed on making their organisation succeed, that tons of low-hanging fruit remain.

The problem is getting someone to fund it long-term:

- Early-stage founders are broke, irrationally self-sacrificial, and time-poor
- Mental health funders, for good reason, care mostly just about LMICs
- Meta funders, for good reason, don't want to choose for others what service would work best for them / their incubatees

So, while finding seed funding to demonstrate proof of concept was really easy, getting something durable isn't. Donors think incubators should fund it; incubators think donors should (after all, it's an ecosystem-wide service). It only costs ~$80k a year to run, and I'll figure out a way to do it; the question is whether I can do that in time to avoid losing talent I can't replace. I have one coach with a ~90% success rate, who only costs ~$33k a year, considering quitting because they don't believe the job will exist in 2 years. The founders she supports collectively have a budget in the tens of millions, and several are widely used as examples of EA's most successful ever charities.

Blimey. Did you check with CE about offering it as part of their incubation program (funded by them, maybe paid by results as you say)? And/or other incubators like Catalyze, or fellowship programs (not founders per se) like Constellation? (IIRC they have an affiliated executive coach already)

I'm surprised by "I don't really want a grant" though. E.g. the usual process is basically seed funding grant to check/demonstrate progress --> if you achieve that (or seem on track to), you get renewed funding. The mechanism isn't perfect (maybe you can BS your wa... (read more)

I run a curated Discord for high agency people with Long COVID/ME (myalgic encephalomyelitis). The group includes tech founders, researchers, rationalists/EAs, etc. The focus is on troubleshooting each other's conditions actively, as well as creating a body of knowledge to bring back to the wider community in the form of writing, education, projects, companies, etc.

Some know me as Liface, others as Liam Rosen - I have been in the EA/rationality community for over 10 years, previously in the Bay, now in New York City, and am the main moderator of r/slatesta... (read more)

Coal and nuclear electricity generation kill a significant number of fish through water intake systems. This matters for evaluating the impact of any new electricity load.

Most thermal power plants (coal, nuclear, and to a lesser extent gas) draw large volumes of water from rivers and lakes for cooling. This causes two underappreciated harms to fish:

  • Impingement — fish get trapped against water intake filters and die.
  • Entrainment — eggs, larvae, and small fish are pulled through pumps and heat exchangers, killing them.

A single coal plant in Ohio (Bay Shore ... (read more)

(Can you point me to something about the moral weight of fish eggs? I have never heard of this before)

What happened to EA Birmingham (UK)? I remember some years ago it was a thing, so:

  1. What happened?
  2. Is there any reason it wouldn't be fertile ground for a group to be set up?

More EA in da news: https://x.com/DavidSacks/status/2034047505336295904

And the spicy CAIS take: https://x.com/cais/status/2034389842076025164?s=46

From the CAIS tweet:

We believe the effective altruism movement is, unfortunately, controlled opposition. The less influence it has on AI safety, the better.

I really don't like it when people paint the whole movement with one brush, but they're not wrong that there's a subset of AI safety/EA that behaves like controlled opposition. Obvious example is Anthropic/Dario (who was one of the first GWWC signatories); Good Ventures basically doesn't fund anything that might be bad for AI company stock prices; I can think of some other possible examples but I do... (read more)

Researchers simulate an entire fly brain on a laptop. Is a human brain next?

What is the implication of this for EA thinking? Does the fly that purely exists in the computer warrant moral consideration, and could we increase the overall welfare of the world by making millions of these simulations with ideal fruit-fly conditions? 
 

They fully copied the brain of the fly, so from my understanding it should also, in theory, feel pleasure and pain. I think this poses a real conundrum for EA morality.

akash 🔸
I lean towards a yes, but I am uncertain because I don't know how the stimuli are fed, and I would imagine that the simulated brain, unlike an embodied fruit fly, isn't perpetually processing information and taking actions. If the latter is true, and if it replaces the need for ... processing ... billions of live fruit flies in labs worldwide, that seems like a huge animal welfare win to me.

EDIT: Eon, the company behind this development, published a blog post explaining their research, and after reading it, I am much less confident in my lean. This doesn't seem to be a whole fly brain emulation / a full copy:

Source: How the Eon Team Produced a Virtual Embodied Fly

You're right that it isn't a WBE. Also, incentives:

To be fair, we’re not unsympathetic to why Eon used the language they did. Their careful blog post on ‘How the Eon Team Produced a Virtual Embodied Fly’ would likely have only been read by a few hundred neuroscientists, while “We’ve uploaded a fruit fly” reached millions. Startup survival requires investment, funding follows excitement, and excitement follows headlines - not careful caveats. This bold approach may even feel obligatory when an organisation’s stated mission is “solving brain emulation as an engineering sprint, not a decades-long research program.”

"There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy"

One thing I've been floating for a while, and haven't really seen anybody else deeply explore[1], is what I call "further moral goods": further axes of moral value as yet inaccessible to us, ones that are qualitatively, not just quantitatively, different from anything we've observed to date.

For background, I think normal, secular, humans live in 3 conceptually distinct but overlapping worlds:

  1. The physical world: matter, energy, atoms, stars, cells. A detached external
... (read more)

Wrote it up in more detail on Substack: https://linch.substack.com/p/further-moral-goods

David Mathers🔸
One reason to think we might not find anything morally valuable that distinct from what we already know about is that our concept of morality is made to fit with the stuff we already know about. 
Linch
Agreed. It's possible that we/our descendants won't see much value in extending past blissful experiences even when other axes of value are theoretically possible, in the same way that aliens without conscious experiences would not see any particular reason to privilege qualia (even if they could be convinced that it's real).

In two days (March 21st, 12-4pm), about 140 of us (event link) will be marching on Anthropic, OpenAI and xAI in SF asking the CEOs to make statements on whether they would stop developing new frontier models if every other major lab in the world credibly does the same. This comes after Anthropic removed its commitment to pause development from their RSP.

We'll be starting at 500 Howard St, San Francisco (Anthropic's office; full schedule and more info here). This is shaping up to be the biggest US AI Safety protest to date, with a coalition including Nate Soar... (read more)

AI Czar attacks EA. (Again.)

Today, in this post on X, the U.S. 'AI Czar' David Sacks directly attacked Humans First, an AI safety advocacy organization, claiming that it's nothing more than a 'censorship power play': a shadowy campaign by Effective Altruists to turn the conservative right against the AI industry and block technological progress.

He quote-posted this blog by Jordan Schachtel titled 'Built to Deceive: How the Effective Altruist Machine Infiltrated the Conservative Right on AI'.

As an AI Safety advocate, a member of Humans First, an E... (read more)

This is a slow-burn solution, but the most effective support and rebuttals will come from people who aren't EAs, but are just fair/principled, and have had enough exposure to EA to know when attacks are unfair. E.g. see Dean Ball this week. So the more surface area EAs can create with those sorts of people, the better the position EA is in. For example, I think Andy Masley's datacenter water use posts created a lot of surface area with such people and have been better for the EA 'brand' than any specific rebuttal.

(A part of this strategy involves, as a... (read more)

akash 🔸
(Not a solution, but a general observation about people who engage in bashing EA.) The "dot connectors" will always connect the dots, infer or invent nefarious motivations, and try to bucket you as they like. The problem is that you can't neatly map EAs onto the political spectrum -- yes, there are dominant trends, but the variance in views is sufficiently high that commentators have genuinely no clue where EAs belong. This makes sense because most major movements in history have been political ones, so when assessing EA, most people pull out their internal political philosophy detector and you end up with a mess like the chart below!

But EA is a moral philosophy movement, and the chain of thinking is genuinely different. Instead of thinking about how to organize society and labor, EAs unanimously agree on beneficentrism and deal with questions like, "What morally matters? To what degree? Which interventions are most effective? How do you even assess what is most effective?" When you organize a movement around this set of questions, you end up with:

  • Some people who want to automate software engineering, some who want to pause it entirely, and others who think we should defensively accelerate progress
  • At least two frontier AI labs: let's not forget OpenAI received $30 million in philanthropic money during its inception!
  • Some EAs who think that AI will be a big deal for {their cause area}, others who are skeptical of the whole AI bundle
  • Some EAs who passionately dislike AI writing, some who are fine with methodical use of AI in writing, and some who are even more liberal about it
  • One particular EA who is the loudest voice combatting the data center water usage myth
  • (At least) one person from the EA-sphere who has large holdings in AI infrastructure
  • And conservative AI Safetyists like you and liberal long-timeline accelerationists like me

I don't know what the best solution for combatting EA bashing is, but spreading the idea that EA is more politically and in
Charlie_Guthmann
I think we could use a documentary series where we just go follow around orgs or individual EAs for a couple days and see how they talk, live and act. It would be pretty cheap at the very least. 

UC Berkeley EA is hosting a west coast uni student EA retreat on April 10-12, with ~50 attendees from Berkeley, Stanford, UCLA, UCI, UCSD, & more, as well as special guests like Matt Reardon, Jake McKinnon, Jesse Gilbert, Julie Steele, Adam Khoja, Richard Ren, & more...

...but we only know to reach out to people who're involved with their uni's clubs. so: if you're interested in attending, book a 5-10 minute chat with alex or aiden :)

some examples of gaps in our outreach:

  • unis that don't have an EA club
  • students who haven't joined their uni's EA club
  • t
... (read more)

I went to jail yesterday in Wisconsin. I helped rescue 23 beagles in a large mass open rescue against a factory farm, Ridglan Farms, near Madison. We were trying to push the police to act on documented animal cruelty at Ridglan. Instead they arrested me and 26 other activists.

I wrote a blog post about why I did it. Excerpt:

I think some altruists suffer from lack of moral courage. Especially those of us who work on tech: we often have lots of moral conviction, but are typically wealthy and aren’t usually risking much personally, and I think that’s a gap.

... (read more)

This is one of the most inspiring things I've read in months. It's such a good example to have someone with an illustrious tech background like yours involved in a protest like this. It might jolt some into action, or at least make us think a bit harder about whether we are really morally courageous enough to do the best that we can.

Extended anecdote from My Willing Complicity In "Human Rights Abuse", by a former doctor (GP) working at a Qatari visa center in India to process "the enormous number of would-be Indian laborers who wished to take up jobs there":

Another man comes to mind (it is not a coincidence that the majority of applicants were men). He was a would-be returnee - he had completed a several year tour of duty in Qatar itself, for as long as his visa allowed, and then returned because he was forced to, immediately seeking reassessment so he could head right back. He had wo

... (read more)

How regularly does everyone use this forum? I'm curious whether people tend to set aside time for browsing the forum, check it on-the-go, or just check the forum digest. I'm also wondering how I should approach the forum (examples: set aside one hour every week to stay up to date on the latest posts, check it when I'm on my phone instead of doomscrolling, just read the weekly digest and see if there are any interesting posts, etc.).

I'm confused about tithing. I yearn for the diamond emoji from GWWC, but I'm not comfortable enough to do it, since I took like a 50% pay cut to do AI safety nonprofit stuff. It seems weird to make such a financial commitment, which implicates my future wife (whom I have presumably not met yet), especially when I'm scraping by without much in savings per paycheck.

Is there a sense in which I already am diamond emoji eligible, because I'm "donating 50% of my income" in the sense of opportunity cost? 50 is, famously, greater than 10. 
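(For concreteness, a toy version of the opportunity-cost framing -- numbers made up, and note the pledge itself is calculated on actual income, not forgone income:)

```python
# Toy numbers for the opportunity-cost framing; all hypothetical.
old_salary = 150_000          # assumed counterfactual salary before the pay cut
new_salary = old_salary // 2  # after the ~50% pay cut

forgone = old_salary - new_salary
print(f"Opportunity cost: {forgone / old_salary:.0%} of old income")  # 50%
print(f"10% pledge on actual income: ${0.10 * new_salary:,.0f}/yr")   # $7,500
```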


I disagree-voted, because I don't think it is a terrible policy / I think it is a hard problem and they've solved it in probably the most reasonable way.

I think that it probably isn't perfect and has a lot of issues, but pledged donations are counterfactual (no one would donate otherwise), while doing a direct work role is not as clearly counterfactual (the organization would usually probably hire someone else, but maybe they'd be less good than you, etc). I think that feels messy to litigate properly - in some cases doing direct work is way better than the ... (read more)

Clara Torres Latorre 🔸
Nice. I don't think it's perfect but it's mostly in the right ballpark.
Neel Nanda
I'm sympathetic to the argument that it would be hard to operationalise a salary-sacrifice pledge in ways that are hard to game but true to the spirit of it. But I feel annoyed that the tone of the FAQ and Luke's comment is not "this is a meaningful flaw in the pledge, we don't see a good way to fix it, but acknowledge it creates bad incentives". Eg it seems terrible that the FAQ frames this as "resigning from your pledge", which I consider to have strong connotations of giving up or failing.

For example, this part of Luke's comment rubbed me the wrong way, because it felt like it was saying that actually people are misunderstanding the pledge, and it's totally consistent with taking a massive pay cut to pursue direct altruistic work. But it is clearly, by design, not, and his comment felt like it was missing the point.

Eg someone who leaves a job in finance or tech to take a job at half the salary to do direct work, and intends to remain in that new role for the rest of their career, is making far more of a sacrifice than if they just donated 10%, and I consider them to have no obligation to donate further. But I don't see the conditions of Luke's comment applying, as the salary sacrifice comes from switching industries, not an arrangement with their employer. And they may never be able to donate later, if they just postpone their pledge. So they would need to resign. Which is a terrible incentive!

Experts currently treat being persuaded as reasonably good evidence that something is true — their judgment is calibrated enough that when they find an argument convincing, that's correlated with the argument actually being correct. This allows them to update readily in light of new evidence, and is a big part of how intellectual progress happens: lots of innovation and advances in basically every subject come down to experts taking sometimes weird new ideas seriously.
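To make the calibration point concrete, here's a minimal sketch (my numbers, purely illustrative) of how much evidence "I found this convincing" provides via Bayes' rule, and how that evidence collapses once persuasion stops tracking truth:

```python
def p_true_given_persuaded(prior, p_persuaded_if_true, p_persuaded_if_false):
    """Bayes' rule: posterior probability a claim is true, given that the
    expert found the argument for it convincing."""
    numerator = p_persuaded_if_true * prior
    return numerator / (numerator + p_persuaded_if_false * (1 - prior))

# Today (assumed): true arguments persuade experts far more often than false ones.
print(p_true_given_persuaded(0.2, 0.8, 0.1))   # ~0.67 -- persuasion is strong evidence

# With superpersuasive AI (assumed): nearly everything persuades, true or not.
print(p_true_given_persuaded(0.2, 0.95, 0.9))  # ~0.21 -- barely above the prior
```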

One worry I have about superpersuasive AI is that it could erode this. If a superpersuasi... (read more)

I think this is technically true but irrelevant: if we have superpersuasive AI, then there won't be human experts anymore, because the AI will have more expertise than any human. Unless somehow the AI is superpersuasive while still having sub-human performance in most ways, which seems unlikely to me.

I think a common mistake for researchers/analysts outside of academia[1] is that they don't focus enough on trying to make their research popular. Eg they don't do enough to actively promote their research, or to write it in a way that can easily become popular. I talked to someone (a fairly senior researcher) about this, and he said he doesn't care about mass outreach given that he only cares about his research being built upon by ~5 people. I asked him if he knows who those 5 people are and could email them; he said no.

I think this is a systematic... (read more)

In my experience, orgs work much harder to get donations from a "grantmaker" than from an individual.

I made my first big donation in 2015, where I donated $20K to REG. I talked to a bunch of orgs in the process of trying to decide where to donate. Some of them didn't respond at all, and many of their responses were shallow.

A few months later, I took a philanthropy class at Stanford where we split up into groups and each group was responsible for figuring out where to donate a $20K grant. The level of communication I got from nonprofits was dramatically dif... (read more)
