This is a special post for quick takes by MikhailSamin. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I do not believe Anthropic as a company has a coherent and defensible view on policy. It is known that they said words they didn't hold while hiring people (they claim to have had good internal reasons for changing their minds, but people went to work for them because of impressions Anthropic created and later abandoned). It is known in policy circles that Anthropic's lobbyists are similar to OpenAI's.

From Jack Clark, a billionaire co-founder of Anthropic and its chief of policy, today:

Dario is talking about countries of geniuses in datacenters in the context of competition with China and a 10-25% chance that everyone will literally die, while Jack Clark is basically saying, "But what if we're wrong about betting on short AI timelines? Security measures and pre-deployment testing will be very annoying, and we might regret them. We'll have slower technological progress!"

This is not invalid in isolation, but Anthropic is a company that was built on the idea of not fueling the race.

Do you know what would stop the race? Getting policymakers to clearly understand the threat models that many of Anthropic's employees share.

It's ridiculous and insane that, instead, Anthropic is arguing against regulation because it might slow down technological progress.

I think the context of the Jack Clark quote matters:

What if we’re right about AI timelines? What if we’re wrong?
Recently, I’ve been thinking a lot about AI timelines and I find myself wanting to be more forthright as an individual about my beliefs that powerful AI systems are going to arrive soon – likely during this Presidential Administration. But I’m struggling with something – I’m worried about making short-timeline-contingent policy bets.

So far, the things I’ve advocated for are things which are useful in both short and long timeline worlds. Examples here include:

  • Building out a third-party measurement and evaluation ecosystem.
  • Encouraging governments to invest in further monitoring of the economy so they have visibility on AI-driven changes.
  • Advocating for investments in chip manufacturing, electricity generation, and so on.
  • Pushing on the importance of making deeper investments in securing frontier AI developers.

All of these actions are minimal “no regret” actions that you can do regardless of timelines. Everything I’ve mentioned here is very useful to do if powerful AI arrives in 2030 or 2035 or 2040 – it’s all helpful stuff that either builds institutional capacity to see and deal with technology-driven societal changes, or equips companies with resources to help them build and secure better technology.

But I’m increasingly worried that the “short timeline” AI community might be right – perhaps powerful systems will arrive towards the end of 2026 or in 2027. If that happens we should ask: are the above actions sufficient to deal with the changes we expect to come? The answer is: almost certainly not!

[Section that Mikhail quotes.]

Loudly talking about and perhaps demonstrating specific misuses of AI technology: If you have short timelines you might want to ‘break through’ to policymakers by dramatizing the risks you’re worried about. If you do this you can convince people that certain misuses are imminent and worthy of policymaker attention – but if these risks subsequently don’t materialize, you could seem like you’ve been Chicken Little and claimed the sky is falling when it isn’t – now you’ve desensitized people to future risks. Additionally, there’s a short- and long-timeline risk here where by talking about a specific misuse you might inspire other people in the world to pursue this misuse – this is bound up in broader issues to do with ‘information hazards’.

These are incredibly challenging questions without obvious answers. At the same time, I think people are rightly looking to people like me and the frontier labs to come up with answers here. How we get there is going to be, I believe, by being more transparent and discursive about these issues and honestly acknowledging that this stuff is really hard and we’re aware of the tradeoffs involved. We will have to tackle these issues, but I think it’ll take a larger conversation to come up with sensible answers.

In context, Jack Clark seems to be arguing that he should be taking short-timeline 'regretful actions' more seriously.

Hi Mikhail, could you clarify what this means? “It is known that they said words they didn't hold while hiring people”

In its RSP, Anthropic committed to defining ASL-4 by the time they reach ASL-3.

With Claude 4 released today, they have reached ASL-3. They haven’t yet defined ASL-4.

Turns out, they quietly walked back that commitment. The change happened less than two months ago and, to my knowledge, was not announced on LW or in other visible places, unlike other important changes to the RSP. It’s also not in the changelog on their website: in the description of the relevant update, they say they added a new commitment but don’t mention removing this one.

Anthropic’s behavior is not at all the behavior of a responsible AI company. Trained a new model that reaches ASL-3 before you can define ASL-4? No problem, update the RSP so that you no longer have to, and basically don’t tell anyone. (Did anyone not working for Anthropic know the change happened?)

When their commitments go against their commercial interests, we can’t trust their commitments.

You should not work at Anthropic on AI capabilities.

[This comment is no longer endorsed by its author]
evhub

This is false. Our ASL-4 thresholds are clearly specified in the current RSP—see "CBRN-4" and "AI R&D-4". We evaluated Claude Opus 4 for both of these thresholds prior to release and found that the model was not ASL-4. All of these evaluations are detailed in the Claude 4 system card.

The thresholds are pretty meaningless without at least a high-level standard, no?

The RSP specifies that CBRN-4 and AI R&D-5 both require ASL-4 security. Where is ASL-4 itself defined?

The original commitment was (IIRC!) about defining the thresholds, not about mitigations. I didn’t notice ASL-4 when I briefly checked the RSP table of contents earlier today, and I trusted the reporting on this from Obsolete. I apologized and retracted the take on LessWrong, but forgot I had posted it here as well; I want to apologize to everyone here too. I was wrong.

(I haven’t really thought this through and might be very wrong, but it seems worth putting out there.) I feel like putting 🔸 at the end of social media names might be bad. I’m curious what the strategy was.

  • The willingness to do this might be anti-correlated with status: it might be a less important part of the identity of higher-status people. (E.g., would you expect Sam Harris, who is a GWWC pledger, to do this?)

  • I’d guess that ideally, we want people to associate the GWWC pledge with role models (+ know that people similar to them take the pledge, too).

  • Anti-correlation with status might mean that people will identify the pledge with average, though altruistic, Twitter users rather than with cool people they want to be more like.

  • You won’t see a lot of e/accs putting the 🔸 in their names. There might be downsides to the pledge marking out a clearly delineated group with an almost political identity; it seems bad to take on directionally political properties that might do mind-killing things both to people with the 🔸 and to people who might argue with them.

How do effectiveness estimates change if everyone saved dies in 10 years?

“Saving lives near the precipice”

Has anyone made comparisons of the effectiveness of charities conditional on the world ending in, e.g., 5-15 years?

[I’m highly uncertain about this, and I haven’t done much thinking or research]

For many orgs and interventions, the impact estimates could be very different from the default ones made by, e.g., GiveWell. I’d guess the ranking of the most effective non-longtermist charities might change a lot as a result.

It would be interesting to see how it changes as at least some estimates account for the world ending in n years.

Maybe one could start by updating GiveWell’s estimates. For DALYs, one would need to recalculate the values in GiveWell’s spreadsheets that are derived from distributions which get capped or changed when the world ends (e.g., life expectancy). For estimates of the relative value of averting deaths at certain ages, one would need to estimate and subtract something representing the fact that the averted deaths still come at (age + n). The second-order and long-term effects would also be different, but estimating the impact there is probably more time-consuming.
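As a toy illustration of the life-expectancy adjustment (this is not GiveWell’s actual methodology; all numbers below are invented for the example), capping the benefit stream at n years and weighting by the probability of doom might look like this:

```python
# Toy sketch of the adjustment described above. Not GiveWell's methodology;
# the life-expectancy and probability numbers are invented for illustration.

def adjusted_life_years(remaining_life_expectancy: float,
                        years_until_doom: float,
                        p_doom: float) -> float:
    """Expected life-years gained from averting one death, if the world ends
    in `years_until_doom` years with probability `p_doom`."""
    capped = min(remaining_life_expectancy, years_until_doom)
    return p_doom * capped + (1 - p_doom) * remaining_life_expectancy

# Averting an under-5 death (~60 remaining life-years) with a 50% chance
# the world ends in 10 years: the expected gain drops from 60 to 35 years.
print(adjusted_life_years(60, 10, 0.5))  # 35.0
```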

It seems like a potentially important question, since many people have short AGI timelines. So it might be worthwhile to research this area, to give people the ability to weigh different estimates of charities’ impacts by their own probability of an existential catastrophe.

Please let me know if someone already has worked this out or is working on this or if there’s some reason not to talk about this kind of thing, or if I’m wrong about something.

I think this could be an interesting avenue to explore. One very basic way to (very roughly) do this is to model p(doom) effectively as a discount rate. This could be an additional user input on GiveWell's spreadsheets.

So for example, if your p(doom) is 20% in 20 years, then you could increase the discount rate by roughly 1% per year.

[Technically this will be somewhat off, since (I'm guessing) most people's p(doom) doesn't increase at a constant rate, in the way a fixed discount rate does.]
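For what it's worth, here is a minimal sketch of that conversion under the simplifying assumption of a constant annual hazard of doom (which, as the caveat above notes, real p(doom) curves won't satisfy):

```python
# Convert a cumulative p(doom) over a horizon into the (approximately)
# equivalent constant annual discount-rate increment. Assumes a constant
# annual hazard, which is only a rough approximation.

def annual_doom_discount(p_doom: float, horizon_years: float) -> float:
    """Annual hazard h such that (1 - h) ** horizon_years == 1 - p_doom."""
    return 1 - (1 - p_doom) ** (1 / horizon_years)

print(annual_doom_discount(0.20, 20))  # ~0.011, i.e. roughly +1% per year
```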

I think discounting QALYs/DALYs due to the probability of doom makes sense if you want a better estimate of QALYs/DALYs; but it doesn’t help with estimating the relative effectiveness of charities and doesn’t help to allocate the funding better.

(It would be nice to input a distribution over the world ending in the next n years and get the discounted values. But it’s the relative cost of ways to save a life that matters: we can’t save everyone, so we want to save the most lives and reduce suffering the most, and answering the question of how to do that means understanding what our actions lead to so we can compare our options. Knowing how many people you’re saving is instrumental to saving the most people from the dragon. If it costs at least $15,000 to save a life, you don’t stop saving lives because that’s too much; human life is much more valuable. If we succeed, you can imagine spending stars on saving a single life. And if we don’t, we’d still like to reduce suffering the most and let as many people as we can live for as long as humanity lives; for that, we need estimates of the relative value of different interventions conditional on the world ending in n years with some probability.)
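To make the last point concrete, here is a rough sketch (all interventions, costs, and probabilities below are hypothetical) of comparing interventions under a distribution over when the world ends:

```python
# Rough sketch: compare hypothetical interventions' expected life-years per
# dollar under a distribution over "the world ends in year n". All numbers
# are invented; the point is only that truncating benefit streams changes
# the comparison.

doom_distribution = {5: 0.10, 10: 0.15, 15: 0.10}  # P(world ends at year n)

def life_years_per_dollar(benefit_years: float, cost: float) -> float:
    """Expected life-years per dollar, capping benefits at the doom year."""
    p_survive = 1 - sum(doom_distribution.values())
    expected = p_survive * benefit_years
    for year, p in doom_distribution.items():
        expected += p * min(benefit_years, year)
    return expected / cost

# Intervention A: long benefit streams (e.g., averting child deaths).
# Intervention B: short benefit streams (e.g., averting near-term illness).
print(life_years_per_dollar(benefit_years=60, cost=5000))  # A: ~0.0085
print(life_years_per_dollar(benefit_years=8, cost=1000))   # B: ~0.0077
```

In this made-up example the ranking doesn't flip, but the gap between A and B narrows considerably compared to the no-doom case (0.012 vs. 0.008 life-years per dollar); with different benefit shapes or a heavier near-term doom distribution, it could flip.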
