This is a special post for quick takes by defun 🔸. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

All of the headlines are trying to run with the narrative that this is due to Trump pressure, but I can't see a clear mechanism for this. Does anyone have a good read on why he's changed his mind? (Recent events feel like: Buffett moving his money to his kids' foundations and retiring from Berkshire Hathaway, divorce)

I have only speculation, but it's plausible to me that developments in AI could be playing a role. The original decision in 2000 was to sunset "several decades after [Bill and Melinda Gates'] deaths." Likely the idea was that handpicked successor leadership could carry out the founders' vision, and that the world would be similar enough to the world at the time of their death or disability for that plan to make sense for several decades after the founders' deaths. To the extent that Gates now thinks the world will change more rapidly than he believed in 2000, this plan may look less attractive than it once did.

Why not what seems to be the obvious mechanism: the cuts to USAID making this more urgent and imperative? Or am I missing something?

"A few years ago, I began to rethink that approach. More recently, with the input from our board, I now believe we can achieve the foundation’s goals on a shorter timeline, especially if we double down on key investments and provide more certainty to our partners."

It seems it was more of a question of whether they could grant larger amounts effectively, which he was considering for multiple years (I don't know how much of that may be possible due to aid cuts).

Holden Karnofsky has joined Anthropic (LinkedIn profile). I haven't been able to find more information.

"Member of Technical Staff" - That's surprising. I assumed he was more interested in the policy angle.

Member of Technical Staff is often a catch-all term for "we don't want to pigeonhole you into a specific role; you do useful stuff in whatever way seems to add the most value". I wouldn't read much into it.

From here it seems that indeed "he focuses on the design of the company's Responsible Scaling Policy and other aspects of preparing for the possibility of highly advanced AI systems in the future."

It seems that lots of people with all sorts of roles at AI companies have the formal role "member of technical staff"

Let's hope he understands the power of the dark side.

I'd love to see Joey Savoie on Dwarkesh’s podcast. Can someone make it happen?

Joey with Spencer Greenberg: https://podcast.clearerthinking.org/episode/154/joey-savoie-should-you-become-a-charity-entrepreneur/

The meat-eater problem is under-discussed.

I've spent more than 500 hours consuming EA content and I had never encountered the meat-eater problem until today.

https://forum.effectivealtruism.org/topics/meat-eater-problem

(I had sometimes thought about the problem, but I didn't even know it had a name)

saulius:

I think the reason is that it doesn't really have a target audience. Animal advocacy interventions are hundreds of times more cost-effective than global poverty interventions. It only makes sense to work on global poverty if you think that animal suffering doesn't matter nearly as much as human suffering. But if you think that, then you won't be convinced to stop working on global poverty because of its effects on animals. Maybe it's relevant for some risk-averse people. 

I wonder if Open Philanthropy thinks about it because they fund both animal advocacy and global poverty/health. Their animal advocacy funding probably easily offsets the negative effects on animals of their global poverty funding. It takes thousands of dollars to save a human life with global health interventions, and that human might consume thousands of animals in her lifetime. Chicken welfare reforms can halve the suffering of thousands of animals for tens of dollars. However, I don't like this sort of reasoning that much because we may not always have interventions as cost-effective as chicken welfare reforms.
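To make the offsetting arithmetic concrete, here's a back-of-the-envelope sketch. Every number is an illustrative placeholder within the rough ranges mentioned above (thousands of dollars, thousands of animals, tens of dollars), not a real cost-effectiveness estimate.

```python
# Back-of-the-envelope offset arithmetic. All numbers are illustrative
# placeholders, not real cost-effectiveness estimates.

COST_PER_LIFE_SAVED = 5_000        # $ per life saved (global health)
ANIMALS_EATEN_PER_LIFE = 2_000     # animals consumed over that lifetime

COST_PER_WELFARE_REFORM = 50       # $ per chicken welfare reform "unit"
ANIMALS_HELPED_PER_REFORM = 2_000  # animals whose suffering is roughly halved

harm_per_dollar = ANIMALS_EATEN_PER_LIFE / COST_PER_LIFE_SAVED         # 0.4
help_per_dollar = ANIMALS_HELPED_PER_REFORM / COST_PER_WELFARE_REFORM  # 40.0

# Dollars of welfare-reform funding needed to offset one global health dollar:
print(harm_per_dollar / help_per_dollar)  # 0.01 -> roughly a cent per dollar
```

Under these placeholder numbers, about a cent of welfare-reform funding offsets the animal impact of each global health dollar, though, as noted, interventions that cost-effective may not always exist.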

Yeah, perhaps if you care about animal welfare, the main problem with giving money to poverty causes is that you didn't give it to animal welfare instead, and the increased consumption of meat is a relative side issue.

One potential audience is people open to moral trade. Say Pat doesn't care much about animals and is on the fence between global poverty interventions with different animal impacts, and Alex cares a lot about animals and normally donates to animal welfare efforts. Alex could agree with Pat to donate some amount to the better-for-animals global poverty charity if Pat will agree to send all their donations there.

Except if you do the math on it, I think you'll find that it's really hard to come up with a set of charities, values, and impacts that make this work; Pat would have to be very close to indifferent between the two options (see the sketch below).

(And if you figure that out, there are also all the normal reasons why moral trade is challenging in practice.)
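For intuition on why the numbers are so unforgiving, here's a toy version of the trade. The per-dollar impact figures and the "animal welfare points" unit are made up for illustration.

```python
# Toy model of the moral trade. All per-dollar impacts are hypothetical,
# measured in made-up "animal welfare points" per dollar.

PAT_DONATION = 1_000   # $ Pat plans to give to a global poverty charity
AW_CHARITY_A = -0.5    # animal impact per $ at Pat's preferred charity A
AW_CHARITY_B = -0.1    # animal impact per $ at charity B (better for animals)
AW_DIRECT = 40.0       # animal impact per $ at Alex's usual animal charity

# Animal welfare gained if Pat's whole donation moves from A to B:
gain_from_switch = PAT_DONATION * (AW_CHARITY_B - AW_CHARITY_A)  # 400 points

# Alex shouldn't pay more than this to induce the switch; beyond it, the
# money does more for animals when donated directly:
max_side_payment = gain_from_switch / AW_DIRECT
print(max_side_payment)  # 10.0 -> only $10 of sweetener per $1,000 moved
```

If direct animal advocacy really is hundreds of times more cost-effective, the side payment Alex can rationally offer is tiny, so Pat must already be nearly indifferent for the trade to tip them.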

Also, you can argue against the poor meat-eater problem by pointing out that it's very unclear whether increased animal production is good or bad for animals. In short, the argument would be that there are way more wild animals than farmed animals, and animal product consumption might substantially decrease wild animal populations. Decreasing wild animal populations could be good because wild animals suffer a lot, mostly due to natural causes. See https://forum.effectivealtruism.org/topics/logic-of-the-larder. I think this issue is also very under-discussed.

I've been thinking about the meat-eater problem a lot lately, and while I think it's worth discussing, I've realized that poverty reduction isn't to blame for farmed animal suffering.

(Content note: dense math incoming)

Assume that humans' utility as a function of income $x$ is $u(x) = \ln x$ (i.e. isoelastic utility with $\eta = 1$), and the demand for meat is $m(x) = x^{\varepsilon}$, where $\varepsilon$ is the income elasticity of demand. Per Engel's law, $\varepsilon$ is typically between 0 and 1. As long as $0 < \varepsilon < 1$, the ratio of marginal utility to marginal meat demand, $u'(x)/m'(x) = x^{-\varepsilon}/\varepsilon$, is high at low incomes and low at high incomes.

For simplicity, I am assuming that the animal welfare impact of meat production is negative and proportional to the quantity demanded, $m(x)$. (As saulius points out, it's unclear whether meat production is net positive or net negative for animals as a whole. Also, animal welfare regulations and alternative protein technologies are more common in high-income regions like the EU and US, so this assumption may not apply at the high end.) If this is true, then increasing a person's or country's income is most valuable when that person or country is in extreme poverty, and least valuable at the high end of the income spectrum.
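A minimal numeric sketch of this model: log utility and constant-elasticity meat demand as above, with an illustrative elasticity value and illustrative income levels (not estimates from any source).

```python
# Sketch: ratio of marginal human utility to marginal meat demand under
# u(x) = ln(x) and m(x) = x**EPS. EPS and the incomes are illustrative.

EPS = 0.6  # hypothetical income elasticity of meat demand (Engel: 0 < EPS < 1)

def marginal_utility(x: float) -> float:
    return 1.0 / x  # u'(x) for u(x) = ln(x)

def marginal_meat_demand(x: float) -> float:
    return EPS * x ** (EPS - 1.0)  # m'(x) for m(x) = x**EPS

for income in (500, 5_000, 50_000):  # illustrative annual incomes in dollars
    ratio = marginal_utility(income) / marginal_meat_demand(income)
    print(f"income ${income:>6,}: well-being per unit of extra meat demand = {ratio:.4f}")
```

The ratio falls as income rises: under these assumptions, a marginal dollar buys the most human well-being per unit of added meat demand when it goes to someone in extreme poverty.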

The upshot: the framing of the meat-eater problem as being about poverty obscures the fact that the worst offenders in factory farming are rich countries like the United States, not poor ones, and that increasing the income of a rich person is worse for animal welfare than increasing that of a poor one (as long as both of them are non-vegan). I feel like it's hypocritical for animal advocates and EAs from rich countries to blame poor countries for the suffering caused by factory farming.

I feel like it's hypocritical for animal advocates and EAs from rich countries to blame poor countries for the suffering caused by factory farming.

I don't think this is what the meat-eater problem does. You could imagine a world in which the West is responsible for inventing the entire machinery of factory farming, or even running all the factory farms, and still believe that lifting additional people out of poverty would help the Western factory farmers sell more produce. It's not about blame, just about consequences.

I realise this isn't your main point, and I haven't processed your main argument yet. It would make a lot of sense to me if transferring money from a first-world meat eater to a third-world meat eater resulted in less meat being eaten, but I'd imagine that the people most concerned with this issue are thinking about their own money, and already don't consume meat themselves?

Anthropic's Twitter account was hacked. It's "just" a social media account, but it raises some concerns.

Update: the post has just been deleted. They keep the updates on their status page: https://status.anthropic.com/

John Schulman (OpenAI co-founder) has left OpenAI to work on AI alignment at Anthropic.

https://x.com/johnschulman2/status/1820610863499509855

Ilya's Safe Superintelligence Inc. has raised $1B.

I guess one thing worth noting here is that they raised from a16z, whose leaders are notoriously critical of AI safety. Not sure how they square that circle, but I doubt it involves their investors having changed their perspectives on that issue.

[anonymous]:

Just in case anyone is reading this, I too would like a billion dollars.

The way people downvote jokes on this forum... At least I appreciated it :)

[anonymous]:

Don’t worry Nick, I’ll never stop.

I'll try a bit more too. 23 votes and 6 karma now - looks like the forum is split on the low-effort humor front ;).

[anonymous]:

lol someone has to write a post "How to make an upvoted joke on the forum that isn't cringe"

simply become one of the most successful and influential ML researchers 🤷‍♂️

Maybe a silly question, but does "one shot" for safe AGI mean they aren't going to release models along the way and will only try to reach the superintelligence bar? Would have thought investors wouldn't have been into that...

Or are they basically just like other AI companies and will release commercial products along the way but with a compelling pitch?

I highly recommend the book "How to Launch A High-Impact Nonprofit" to everyone.

I've been EtG for many years and I thought this book wasn't relevant to me, but I'm learning a lot and I'm really enjoying it.

Cool! What kind of things are you learning from it?

After years of donating to established organizations (top GiveWell charities), I want to start directing a portion of my donations to new/small charities (e.g. Presenting nine new charities). I think this book is helping me better understand which new charities might have more potential.

I also really liked "Part II. Making good decisions", which covers many tools that can be useful for personal and professional decision-making (rationality, the scientific method, EA, Weighted Factor Modelling, etc.).

(The link isn't working for me.)

Fixed. Thanks!

[anonymous]:

I second this recommendation!

Meta has just released Llama 3.1 405B. It's open-source, and in many benchmarks it beats GPT-4o and Claude 3.5 Sonnet.

Zuck's letter "Open Source AI Is the Path Forward".

EJT:

Wait, all the LLMs get 90+ on ARC? I thought LLMs were supposed to do badly on ARC.

It's an unfortunate naming clash; there are two different ARC Challenges:

ARC-AGI (Chollet et al) - https://github.com/fchollet/ARC-AGI

ARC (AI2 Reasoning Challenge) - https://allenai.org/data/arc

These benchmarks are reporting the second of the two.

LLMs (at least without scaffolding) still do badly on ARC-AGI, and I'd wager Llama 405B still doesn't do well on the ARC-AGI challenge. It's telling that all the big labs release the 95%+ number they get on AI2-ARC, and not whatever default result they get with ARC-AGI...

(Or in general, reporting benchmarks where they can go OMG SOTA!!!! and not helpfully advance the general understanding of what models can do and how far they generalise. Basically, traditional benchmark cards should be seen as the AI equivalent of "IN MICE")

Thanks!

Anthropic has just launched "computer use": "developers can direct Claude to use computers the way people do".

https://www.anthropic.com/news/3-5-models-and-computer-use
