This is a special post for quick takes by D0TheMath. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I saw this comment on LessWrong:

This seems noncrazy on reflection.

10 million dollars will probably have a very small impact on Terry Tao's decision to work on the problem.

OTOH, setting up an open invitation for all world-class mathematicians/physicists/theoretical computer scientists to work on AGI safety through some sort of sabbatical system may be very impactful.

Many academics, especially in theoretical areas where funding for even the very best can be scarce, would jump at the opportunity of a no-strings-attached sabbatical. The no-strings-attached part is crucial, to my mind. Despite LW/Rationalist dogma equating IQ with weirdo-points, the vast majority of brilliant (mathematical) minds are fairly conventional - see Tao, Euler, Gauss.

EA cause area?

Thoughts? 

I don't know what the standard approach would be. I haven't read any books on evolutionary biology. I did listen to a bit of this online lecture series: https://www.youtube.com/watch?v=NNnIGh9g6fA&list=PL848F2368C90DDC3D and it seems fun & informative.

During this discussion I’ve been modeling evolution with the models I’ve been learning for understanding inner alignment problems, since evolution is a stochastic-gradient-descent-like process: many of the arguments about the properties trained models should have can also be applied to evolutionary processes.

So I guess you can start with Hubinger et al’s Risks from Learned Optimization? But this seems like a nonstandard approach to learning evolutionary biology.
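To spell out the analogy I’m leaning on, here is a toy sketch (my own, not from Hubinger et al.): a noisy gradient step and a mutate-and-select step both descend the same loss landscape, which is why arguments about the optimizers SGD tends to produce carry over, at least loosely, to evolutionary processes.

```python
import random

def loss(x: float) -> float:
    """Toy loss/fitness landscape; lower is better, minimum at x = 3."""
    return (x - 3.0) ** 2

def sgd_step(x: float, lr: float = 0.1) -> float:
    """One noisy gradient step; the gradient of the toy loss is 2*(x - 3)."""
    noise = random.gauss(0.0, 0.1)  # stand-in for minibatch noise
    return x - lr * (2.0 * (x - 3.0) + noise)

def evolution_step(population: list[float], sigma: float = 0.3) -> list[float]:
    """One round of mutation and selection: mutate everyone, keep the fitter half of the pool."""
    mutants = [x + random.gauss(0.0, sigma) for x in population]
    return sorted(population + mutants, key=loss)[: len(population)]

if __name__ == "__main__":
    x = 10.0
    pop = [random.uniform(-10.0, 10.0) for _ in range(20)]
    for _ in range(200):
        x = sgd_step(x)
        pop = evolution_step(pop)
    print("SGD endpoint:", round(x, 2), "| best evolved individual:", round(min(pop, key=loss), 2))
```

Neither process "sees" inside the loss function it is being optimized against, which is the shared feature that lets inner-alignment-style arguments transfer.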

Do you feel it is possible for evolution to select for beings who care about their copies in Everett branches over beings that don't? For the purposes of this question, let's say we ignore the "simplicity" complication of the previous point and assume both kinds of species have been created, if that is possible.

It likely depends on what it means for evolution to select for something, and for a species to care about its copies in other Everett branches. It's plausible to imagine a very low-amplitude Everett branch containing a species that uses quantum mechanical bits to make many of its decisions, which decreases its chances of reproducing in most Everett branches but increases its chances of reproducing in very, very few.

But in order for something to care about its copies in other Everett branches, the species would need to be able to model how quantum mechanics works, and also how acausal trade works if you want it to be selected for caring about how its decision-making process affects non-causally-reachable Everett branches. I can't think of any pathway by which a species could increase its inclusive genetic fitness by making acausal trades with its counterparts in non-causally-reachable Everett branches, but I also can't think of any proof that it's impossible. Thus, I only think it's unlikely.

For the case where we only care about selecting for caring about future Everett branches, note that if we find ourselves in the situation I described in the original post, and the proposal succeeds, then evolution has just made a minor update towards species which care about their future Everett selves.

Evolution doesn't select for that, but it's also important to note that such tendencies are not selected against either, and the value "care about yourself, and others" is simpler than the value "care about yourself, and others except those in other Everett branches", so we should expect people to generalize "others" as including those in other Everett branches, in the same way that they generalize "others" as including those in the far future.

Also, while you cannot meaningfully influence Everett branches which have split off in the past, you can influence Everett branches that will split off some time in the future.

I’m not certain. I’m tempted to say I care about them in proportion to their “probabilities” of occurring, but if I knew I was on a very low-“probability” branch & there was a way to influence a higher “probability” branch at some cost to this branch, then I’m pretty sure I’d weight the two equally.
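To make the two weightings concrete (illustrative numbers of my own): say my branch has Born measure 0.01, the branch I could influence has measure 0.99, the influence gives that branch a benefit b, and it costs my branch c. The trade is worth taking when:

$$
\underbrace{0.99\,b - 0.01\,c > 0 \iff b > c/99}_{\text{proportional weighting}} \qquad \text{vs.} \qquad \underbrace{\tfrac{1}{2}(b - c) > 0 \iff b > c}_{\text{equal weighting}}
$$

Equal weighting demands a roughly 99-times-larger benefit before taking the trade, which is what makes that intuition a real departure from weighting by "probability".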

Are there any obvious reasons why this line of argument is wrong:

Suppose the Everett interpretation of quantum mechanics is true, and an x-risk curtailing humanity's future is >99% certain, with no leads on a solution. Then, given a quantum bit generator which generates some large number of bits, for any particular combination of bits there exists a branch in which that combination was generated. In particular, the combination of bits encoding actions one could take to solve the x-risk is generated in some world. Thus, one should use such a quantum bit generator to generate a plan to stop the x-risk. Even though you will almost certainly see a bunch of random letters, there will exist a version of you with a good plan, and in that branch the world will not end.
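A minimal sketch of the generator (my own illustration; Python's `secrets` module and the OS entropy pool stand in for a genuine quantum bit source, which is an assumption):

```python
import secrets  # OS entropy as a stand-in; a real version would need a quantum RNG
import string

ALPHABET = string.ascii_lowercase + " .,"  # 29 readable symbols, chosen arbitrarily
PLAN_LENGTH = 80                           # characters of "plan" per draw, also arbitrary

def sample_branch_plan(length: int = PLAN_LENGTH) -> str:
    """Draw `length` uniformly random symbols.

    If the randomness were genuinely quantum, the Everett interpretation says every
    one of the len(ALPHABET)**length possible strings is realized in some branch,
    each with Born measure len(ALPHABET)**(-length).
    """
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

if __name__ == "__main__":
    print(sample_branch_plan())
    # Measure of the branches that see one particular fixed 80-character string:
    print("per-branch measure:", len(ALPHABET) ** -PLAN_LENGTH)
```

The last number is the Born measure of any one specific output, which is why the version of you actually running this should expect to see noise.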

One may argue that the chances of finding a plan which produces an s-risk are just as high as those of finding one which curtails the x-risk. This only seems plausible to me if the solution produced is some optimization process, or induces one. These scenarios should not be discounted.
