This is a special post for quick takes by Caruso. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Fired from OpenAI's Superalignment team, Aschenbrenner now runs an investment firm dedicated to backing AGI-focused startups, according to The Information.

"Former OpenAI super-alignment researcher Leopold Aschenbrenner, who was fired from the company for allegedly leaking information, has started an investment firm to back startups with capital from former Github CEO Nat Friedman, investor Daniel Gross, Stripe CEO Patrick Collision and Stripe president John Collision, according to his personal website.

In a recent podcast interview, Aschenbrenner spoke about the new firm as a cross between a hedge fund and a think tank, focused largely on AGI, or artificial general intelligence. “There’s a lot of money to be made. If AGI were priced in tomorrow, you could maybe make 100x. Probably you can make even way more than that,” he said. “Capital matters.”

“We’re going to be betting on AGI and superintelligence before the decade is out, taking that seriously, making the bets you would make if you took that seriously. If that’s wrong, the firm is not going to do that well,” he said."

What happened to his concerns over safety, I wonder? 

Buck

He lays out the relevant part of his perspective in "The Free World Must Prevail" and "Superalignment" in his recent manifesto.

Buck, do you have any takes on how good this seems to you / how good the arguments in the manifesto for doing this work seem to you? (No worries if not or you don't want to discuss publicly)

I don’t think he says anything in the manifesto about why AI is going to go better if he starts a “hedge fund/think tank”.

I haven’t heard a strong case for him doing this project but it seems plausibly reasonable. My guess is I’d think it was a suboptimal choice if I heard his arguments and thought about it, but idk.

My current understanding is that he believes extinction or similar from AI is possible, at 5% probability, but that this is low enough that concerns about stable totalitarianism are slightly more important. Furthermore, he believes that AI alignment is a technical but solvable problem. More here.

I am far more pessimistic than him about extinction from misaligned AI systems, but I think it's quite sensible to try to make money from AI even in worlds with a high probability of extinction, since the counterfactual market signal you provide moves the market far less than the benefit of being richer at such a crucial time.

harfe

> I am far more pessimistic than him about extinction from misaligned AI systems, but I think it's quite sensible to try to make money from AI even in worlds with a high probability of extinction, since the counterfactual market signal you provide moves the market far less than the benefit of being richer at such a crucial time.

I am sympathetic to this position when it comes to your own money. Like, if regular AI safety people put a large fraction of their savings into NVIDIA stock, that is understandable to me.

But the situation with Aschenbrenner starting an AGI investment firm is different. He is not directing (just) his own money, but the much larger capital of his investors into AGI companies. So the majority of the wealth gain will not end up in Aschenbrenner's hands, but belong to the investors. This is different from a small-scale shareholder who gets all the gains (minus some tax) of his stock ownership.

But even if Aschenbrenner's plan is to invest in the world-destroying technology in order to become richer later when it matters, it would be nice for him to say so and to explain how he intends to use the money later. My guess, however, is that this is not what Aschenbrenner actually believes. He might just be in favour of accelerating these technologies.

If you are concerned about extinction and stable totalitarianism, 'we should continue to develop AI but the good guys will have it' sounds like a very unimaginative and naïve solution.

+1. 

(I feel slightly bad for pointing this out) It's also, perhaps not too coincidentally, the sort of general belief that's associated with giving Leopold more power, compared to many other possible beliefs one could have in this area. 

What would the imaginative solution be? 

Agreed. Getting a larger share of the pie (without breaking rules during peacetime) might be 'unimaginative' but it's hardly naïve. It's straightforward and has a good track record of allowing groups to shape the world disproportionately.

I'm a bit confused. I was just calling Aschenbrenner unimaginative, because I think trying to avoid stable totalitarianism while bringing about the conditions he identified for stable totalitarianism lacked imagination. I think the onus is on him to be imaginative if he is taking what he identifies as extremely significant risks, in order to reduce those risks. It is intellectually lazy to claim that your very risky project is inevitable (in many cases by literally extrapolating straight lines on charts and saying 'this will happen') and then work to bring it about as quickly and as urgently as possible.

Just to try and make this clear, by corollary, I would support an unimaginative solution that doesn't involve taking these risks, such as by not building AGI. I think the burden for imagination is higher if you are taking more risks, because you could use that imagination to come up with a win-win solution.

In today's Bulletin of the Atomic Scientists is this headline - "Trump has a strategic plan for the country: Gearing up for nuclear war" 

https://thebulletin.org/2024/07/trump-has-a-strategic-plan-for-the-country-gearing-up-for-nuclear-war/

Does EA have a plan to address this? If not, now would be a good time.  

I published a short piece on Yann LeCun posting about Jan Leike's exit from OpenAI over perceived safety issues, and wrote a bit about the difference between Low Probability - High Impact events and Zero Probability - High Impact events.

https://www.insideaiwarfare.com/yann-versus/

This is an interesting #OpenPhil grant. $230K for a cyber threat intelligence researcher to create a database that tracks instances of users attempting to misuse large language models.

https://www.openphilanthropy.org/grants/lee-foster-llm-misuse-database/

Will user data be shared with the user's permission? How will an LLM determine the intent of the user when it comes to differentiating purposeful harmful entries from user error, safety testing, independent red-teaming, playful entries, etc.? If a user is placed on the database, is she notified? How long do you stay in LLM prison?

I did send an email to OpenPhil asking about this grant, but so far I haven't heard anything back.
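
For concreteness, here is a purely hypothetical sketch (in Python) of the kind of record such a database might store. Nothing here is based on the actual grant or any real schema; the field names and defaults are my own invention, and they just make the questions above about consent, intent, notification, and retention concrete.

```python
# Hypothetical sketch only: not the grant's design, just the kinds of fields a
# misuse-tracking database would need to answer the questions raised above.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from enum import Enum


class IntentAssessment(Enum):
    DELIBERATE_MISUSE = "deliberate_misuse"
    SAFETY_TESTING = "safety_testing"
    RED_TEAMING = "red_teaming"
    USER_ERROR = "user_error"
    PLAYFUL = "playful"
    UNKNOWN = "unknown"


@dataclass
class MisuseIncident:
    model: str                       # which LLM the attempt targeted
    prompt_hash: str                 # hashed prompt, to avoid storing raw user data
    category: str                    # e.g. "malware generation", "phishing"
    intent: IntentAssessment         # who judges this, and how?
    user_consented_to_sharing: bool  # is sharing conditional on user permission?
    user_notified: bool              # is the user told they were logged?
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    retention: timedelta = timedelta(days=365)  # how long until the record expires?

    def expired(self, now: datetime) -> bool:
        """Whether the record has passed its retention window, i.e. release from 'LLM prison'."""
        return now >= self.recorded_at + self.retention
```

Even a toy schema like this shows how many policy decisions (consent, intent labelling, notification, retention period) have to be made explicitly before such a database is built.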
