This is a special post for quick takes by eca. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I’m vulnerable to occasionally losing hours of my most productive time “spinning my wheels”: working on sub-projects I later realize don’t need to exist.

Elon Musk gives the most lucid naming of this problem in the clip below. He has a 5-step process which nails a lot of the best practices I’ve heard from others, and more. It sounds kind of dull and obvious to write down, but somehow I think staring at the steps will actually help. It’s also phrased somewhat specifically around building physical stuff, but I think there is a generic version of each step. I’m going to try implementing it on my next engineering project.

The explanation is meandering (though with some great examples I recommend listening to!), so here is my best attempt at quickly paraphrasing the steps:

The Elon Process:

  1. “Make your requirements less dumb. Your requirements are definitely dumb.” Be especially wary of requirements from smart people, because you will question them less.
  2. Delete a part, process step or feature. If you aren’t adding 10% of deleted things back in, you aren’t deleting enough.
  3. Optimize and simplify the remaining components.
  4. Accelerate your cycle time. You can definitely go faster.
  5. Automate.

https://youtu.be/t705r8ICkRw (13:30–28:00)

Quest: see the inside of an active bunker

Why, if you don't mind me asking?

Empirical differential tech development?

Many longtermist questions related to dangers from emerging tech can be reduced to “what interventions would cause technology X to be deployed before/ N years earlier than/ instead of technology Y”.

In biosecurity, my focus area, an example of this would be something like "how can we cause DNA synthesis screening to be deployed before desktop synthesizers are widespread?"

It seems a bit cheap to say that AI safety boils down to causing an aligned AGI to arrive before an unaligned one, but it kind of basically does, and I suspect that as more of the open questions in AI strategy/ policy/ deployment get worked out, there will end up being at least some well-defined subproblems like the above.

Bostrom calls this differential technology development. I personally prefer "deliberate technology development", but call it DTD or whatever. My point is, it seems really useful to have general principles for how to approach problems like this, and I've been unable to find much work, either theoretical or empirical, trying to establish such principles. I don't know exactly what these would look like; most realistically they would be a set of heuristics or strategies alongside a definition of when each is applicable.

For example, a shoddy principle I just made up but could vaguely imagine playing out is "when a field is new and has few players (e.g. a small number of startups or labs), causing a player to pursue something else on the margin has a much larger influence on delaying the technology's development than causing the same proportion of R&D capacity to leave the field at a later point".

While I expect some theoretical econ-type work to be useful here, I started thinking about the empirical side. It seems like you could in principle run experiments where, for some niche area of commercial technology, you try interventions that are cost-effective according to your model at directing the outcome toward a made-up goal.

Some more hallucinated examples:

  • make the majority of guitar picks purple
  • make the automatic sinks in all public restrooms in South Dakota stay on for twice as long as the current ones
  • stop CAPTCHAs from ever asking anyone to identify a boat
  • stop some specific niche supplement from being sold in gelatin capsules anywhere in California

The pattern: a specific change toward something which is either market-neutral or somewhat bad according to the market, in an area where few enough people care (the market is small and straightforward) that we should expect it to be possible to occasionally succeed.

I'm not sure there is any market niche enough to be cheap to intervene on while still being at all representative of the real thing. But maybe there is? And I kind of weirdly expect trying random stuff like this to actually yield some lessons, at least in implicit know-how for the person who does it.

Anyway, I'm interested in thoughts on the feasibility and utility of something like this, as well as any pointers to previous attempts at this kind of thing (it sort of seems like a certain type of economist might be interested in experimenting this way, but it's probably way too weird).
