This is a special post for quick takes by MichaelDickens. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Quick thoughts on investing for transformative AI (TAI)

Some EAs/AI safety folks invest in securities that they expect to go up if TAI happens. I rarely see discussion of the future scenarios where it makes sense to invest for TAI, so I want to do that.

My thoughts aren't very good, but I've been sitting on a draft for three years hoping I develop some better thoughts and that hasn't happened, so I'm just going to publish what I have. (If I wait another 3 years, we might have AGI already!)

When does investing for TAI work?

Scenarios where investing doesn't work:

  1. Takeoff happens faster than markets can react, or takeoff happens slowly but is never correctly priced in.
  2. Investment returns can't be spent fast enough to prevent extinction.
  3. TAI creates post-scarcity utopia where money is irrelevant.
  4. It turns out TAI was already correctly priced in.

Scenarios where investing works:

  1. Slow takeoff, market correctly anticipates TAI after we do but before it actually happens, and there's a long enough time gap that we can productively spend the earnings on AI safety.
  2. TAI is generally good, but money still has value and there are still a lot of problems in the world that can be fixed with money.

(Money seems much more valuable in the first of these two scenarios than the second.)

What is the probability that we end up in a world where investing for TAI turns out to work? I don't think it's all that high (maybe 25%, although I haven't thought seriously about this).

You also need to be correct about your investing thesis, which is hard. Markets are famously hard to beat.

Possible investment strategies

  1. Hardware makers (e.g. NVIDIA)? Anecdotally this seems to be the most popular thesis. This is the most straightforward idea, but I am suspicious that a lot of EA support for investing in AI looks basically indistinguishable from typical hype-chasing retail investor behavior. NVIDIA already has a P/E of 56. There is a 3x levered long NVIDIA ETP. That is not the sort of thing you see when an industry is overlooked. Not to say NVIDIA is definitely a bad investment; it could be even more valuable than the market already thinks. I'm just wary.
  2. AI companies? This doesn't seem to be a popular strategy; the argument against it is that it's a crowded space with a lot of competition, which will drive margins down. (Whereas NVIDIA has a ~monopoly on AI chips.) Plus I am concerned that giving more money to AI companies will accelerate AI development.
  3. Energy companies? It's looking like AI will consume quite a lot of energy. But it's not clear that AI will make a noticeable dent in global energy consumption. This is probably the sort of thing you could make reasonable projections for.
  4. Out-of-the-money call options on a broad index (e.g. S&P 500 or NASDAQ)? This strategy avoids making a bet about which particular companies will do well, just that something will do much better than the market anticipates. But I'd also expect that unusually high market returns won't start showing up until TAI is close (even in a slow-takeoff world), so you have less time to use the extra returns to prevent AI-driven extinction.
  5. Commodities? The idea is that anything complicated will become much easier to produce thanks to AI, but commodities won't be much easier to get, so their prices will go up a lot. This is an interesting idea that I heard recently; I have no idea if it's correct.
  6. Momentum funds (e.g. VFMO or QMOM)? The general theory of momentum investing is that the market under-reacts to slow news. The pro of this strategy is that it should work no matter which stocks/industries benefit from AI. The con is that it's slower—you don't buy into a stock until it's already started going up. (I own both VFMO and QMOM (mostly QMOM), a bit because of AI but mainly because I think momentum is a good idea in general.)

There is some discussion of strategy 4 on LW at the moment: https://www.lesswrong.com/posts/JotRZdWyAGnhjRAHt/tail-sp-500-call-options
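The payoff asymmetry that makes strategy 4 attractive, and its main failure mode, can be sketched in a few lines of code. All the numbers below are made-up illustrations, not market data:

```python
# Toy sketch of strategy 4: far out-of-the-money index call options.
# Strike, premium, and index levels are assumed for illustration only.

def call_payoff(index_level: float, strike: float, premium: float) -> float:
    """Profit per option at expiry, ignoring fees and taxes."""
    return max(index_level - strike, 0.0) - premium

strike = 6000.0   # assumed strike, well above the current index level
premium = 50.0    # assumed cost of the far-OTM call

# If TAI is never priced in before expiry, the option expires worthless
# and you lose the whole premium:
print(call_payoff(5500.0, strike, premium))   # -50.0

# In a slow takeoff that the market eventually prices in, the payoff is
# convex: a large index move dwarfs the premium paid:
print(call_payoff(9000.0, strike, premium))   # 2950.0
```

The caveat above still applies: the convex payoff only helps if the index move happens while there is still time to spend the proceeds on AI safety.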

Re: Possible investment strategies, there is a dialogue on LessWrong from November 2023 which I think still holds up. Quoting from the takeaways:


  • Invest like 50% of my portfolio into pretty broad index funds with really no particular specialization

  • Take like 20% of my portfolio and throw it into some more tech/AI focused index fund. Maybe look around for something that covers some of the companies listed here on the brokerage interface that is presented to me (probably do a bit more research here)
  • Invest like 3-5% of my portfolio into each of Nvidia, TSMC, Microsoft, Google, ASML and Amazon
  • Take like 2-5% of my portfolio and use it to buy some options (probably some long-term call options on some of the stocks above), making really sure I buy ones that have limited downside, and see whether I can successfully not blow up that part of my portfolio for like 2 years before I do any more here

And then I probably wouldn't bother much with rebalancing and basically forget about it unless I feel like paying much extra attention.

About energy companies: I think the investment idea is less about AI's effect on overall global energy consumption, and more about the companies that are helping to build out and power these large data centres.

Microsoft has been investing in nuclear energy, xAI's Colossus cluster was positioned right next to a natural gas plant, and Sam Altman invested in and is now chair of the board of nuclear startup Oklo. And my understanding is that power substation equipment is a bottleneck, with equipment like transformers now having a lead time of years.

I sold all my NVIDIA stock, since their moat looks weak to me:

https://forum.effectivealtruism.org/posts/rBx9RmJdBJgHkjL4j/will-openai-s-o3-reduce-nvidia-s-moat

I think your reasoning is generally correct. Another argument: If you believe things look sufficiently grim under short timelines, maybe you should invest under the assumption that a recession, or something else, will pop the AI bubble and give us longer timelines.

Why does distributing malaria nets work? Why hasn't everyone bought a bednet already?

  • If it's because they can't afford bednets, why don't more GiveDirectly recipients buy them?
  • Is it because nobody in the local area sells bednets? If so, why doesn't anyone sell them?
  • Is it because people don't think bednets are worth it? If so, why do they use the bednets when given them for free?

Merely subsidizing nets, as opposed to free distribution, used to be a much more popular idea. My understanding is that that model was nuked by this paper showing that demand for nets falls discontinuously at any positive price (a 60 percentage point reduction in demand when going from a 100% subsidy to a 90% subsidy). So unless people value their children's lives implausibly little, people are making mistakes in their choice of whether or not to purchase a bednet.

New Incentives, another GiveWell top charity, can move people to vaccinate their children with very small cash transfers (I think $10). The fact that $10 can mean the difference between whether people protect their children from life-threatening diseases or not is crazy if you think about it.

This is not a rare finding. This paper found very low household willingness to pay for cleaning up contaminated wells, which cause childhood diarrhea and thus death. Their estimates imply that households in rural Kenya are willing to pay at most $770 to prevent their child's death, which just doesn't seem plausible. Ergo, another setting where people are making mistakes. Another: demand for motorcycle helmets is stupidly low and implies that Nairobi residents value a statistical life at $220, less than 10% of annual income. Unless people would actually rather die than give up 10% of their income for a year, this is clearly another case where people's decisions do not reflect their true values.
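The implied-VSL arithmetic behind these claims is simple division: willingness to pay over the mortality risk reduction it purchases. A minimal sketch with hypothetical numbers (not the cited papers' actual figures):

```python
# Back-of-envelope implied value of a statistical life (VSL).
# The helmet price and risk reduction below are hypothetical, chosen
# only to show the arithmetic; they are not the cited papers' figures.

def implied_vsl(willingness_to_pay: float, risk_reduction: float) -> float:
    """WTP divided by the reduction in probability of death it buys."""
    return willingness_to_pay / risk_reduction

# If someone will pay at most $2.20 for a helmet that cuts their chance
# of dying by 1 in 100, they are implicitly valuing their life at $220:
print(round(implied_vsl(2.20, 0.01), 2))   # 220.0
```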

This is not that surprising if you think about it. People in rich countries and poor countries alike are really bad at investing in preventative health. Each year I dillydally on getting the flu vaccine, even though I know the benefits are way higher than the costs, because I don't want to make the trip to CVS (an hour out of my day, max). My friend doesn't wear a helmet when cycling, even at night or in the rain, because he finds it inconvenient. Most of our better health in the rich world doesn't come from us actively making better health decisions, but from our environment enabling us to not need to make health decisions at all.

I think this is the best explanation I've seen, it sounds likely to be correct.

I'm pretty sure the personal benefits of getting the flu vaccine for a male in his 20s or 30s are not much higher than the costs. Agree on the bike helmet thing though.

Alexander Berger answered pretty much this exact question on an old 80k episode.

Felt a little scared realizing that that episode is over 3 years old. It's such a great one and I return to it often!

I don’t know enough about AMF to answer your question directly, but I can shed some light on market failures by way of analogy to my employer, Kaya Guides, which provides free psychotherapy in India:

  1. Our beneficiaries usually can’t afford psychotherapy outright
  2. They sometimes live rurally, and can’t travel to places that do psychotherapy in person
  3. There are not enough psychotherapists in India for everyone to receive it
  4. The government, equally, don’t have the capacity or interest to develop the mental health sector enough (against competing health priorities) to make free treatment available
  5. Our beneficiaries usually don’t know what psychotherapy is, or that they have a problem at all, nor that it can be treated
  6. We are incentivised to make psychotherapy as cheap as possible to reach the worst-served portion of the market, while for-profits are incentivised to compete in more lucrative parts of the market

I can see how many, if not all, of these would be analogous to AMF. The market doesn’t and can’t solve every problem!

That sounds pretty reasonable as an explanation for why psychotherapy isn't as widespread as it should be. It looks to me like most of these reasons wouldn't apply to AMF. Training new psychotherapists takes years and tens of thousands of dollars (at developing-world wages). Getting more malaria nets requires buying more $5 malaria nets, and distributing malaria nets is much easier than distributing psychotherapists. So reasons 1–3 and #6 don't carry over (or at least not to nearly the same extent). #4 doesn't seem relevant to my original question, so I think #5 is the only one that carries over—recipients might not know that they should be concerned about malaria.

Effective bednets have a relatively short shelf life due to both loss of insecticide and physical damage.

People in target regions can and do buy bednets, though for much of the target market the cost might still represent a day's income, so they won't necessarily be inclined to replace them at optimal intervals. (On the other hand, it's a tiny fraction of a typical GiveDirectly handout, which is probably why "people buy bednets with it" isn't a major feature of their research even in regions with significant malaria.) Consumers see alternative products (not necessarily as effective) purporting to achieve mosquito control in the same shops, won't necessarily prioritise purchasing replacement nets when it represents a large spend for them and the existing bednet doesn't obviously seem to have stopped working, and people who are relatively informed about malaria prevention are also informed that governments and NGOs tend to dispense bednets for free. Programmes dispensing free nets tend to provide advice on using them properly too.

Bednets on sale in some local markets are often untreated, so buying replacements locally isn't necessarily even a good decision.

How strong is the evidence for bednets being effective?

A priori, there is a plausible mistake the researchers could have made in reaching this conclusion, and they would have an incentive to make such a mistake.

A priori, bednets being very effective is a bit surprising.

What is the strongest study that supports this conclusion?

The evidence is quite strong. You can most likely get more detail than you ever wanted from the GiveWell review.

Thanks.

It seems like there are 4 studies with extended follow up -- Binka et al https://doi.org/10.1016/S0035-9203(02)90321-4 , Diallo et al https://pmc.ncbi.nlm.nih.gov/articles/PMC2585912/ , Lindblade et al https://doi.org/10.1001/jama.291.21.2571 , Louis et al https://doi.org/10.1111/j.1365-3156.2012.02990.x -- but not of the type that would be directly informative.

As Binka et al say “The original trials ran for only 1-2 years each. At the end of these periods, the efficacy of the intervention was considered proven and the control groups were provided with nets or curtains, thus these trials could not be used to demonstrate the effects of long-term transmission control.”.

"Are Ideas Getting Harder to Find?" (Bloom et al.) seems to me to suggest that ideas are actually surprisingly easy to find.

The paper looks at the difficulty of finding new ideas in a variety of fields. It finds that in all cases, effort on finding new ideas is growing exponentially over time, while new ideas are growing exponentially but at a lower rate. (For a summary, see Table 7 on page 31.) This is framed as a surprising and bad thing.

But it actually seems surprisingly good to me. My intuition is that the number of ideas should grow logarithmically with effort, or possibly even sub-logarithmically. If effort is growing exponentially, we'd expect to see linear or sub-linear growth in ideas. But instead we see exponential growth in ideas.
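The intuition can be made concrete with a toy model (the growth rates are made up, not Bloom et al.'s estimates): if ideas were the log of exponentially growing effort, idea counts would grow only linearly in time, whereas the paper finds exponential growth at a lower rate.

```python
import math

# Toy model: research effort grows ~10%/yr; compare two idea-growth models.
# Both growth rates are illustrative assumptions, not the paper's estimates.
years = [0, 10, 20, 30, 40]
effort = [math.exp(0.10 * t) for t in years]

# Model A (my prior): ideas grow with the log of effort.
# Since effort is exponential in t, this is roughly linear in t.
log_ideas = [math.log(1 + e) for e in effort]

# Model B (roughly what Bloom et al. observe): ideas also grow
# exponentially, just at a lower rate (~3%/yr here).
exp_ideas = [math.exp(0.03 * t) for t in years]

for t, a, b in zip(years, log_ideas, exp_ideas):
    print(t, round(a, 2), round(b, 2))
```

On these assumptions, model A settles into adding a roughly constant ~1 unit of "ideas" per decade, while model B compounds; the paper's finding is closer to B, which is why it reads as surprisingly good.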

I don't have a great understanding of the math used in this paper, so I might be misinterpreting something.

Bloom et al. do report exponential growth of various metrics, but I don't think these metrics are well-characterized by 'ideas'. They are things like price-performance of transistors or crop yields per area.

If we instead attempt to measure progress by something like 'number of ideas', there is some evidence in favor of your guess that "ideas should grow logarithmically with effort". E.g., in a review of the 'science of science', Fortunato et al. (2018) say (emphases mine):

Early studies discovered an exponential growth in the volume of scientific literature, a trend that continues with an average doubling period of 15 years. Yet, it would be naïve to equate the growth of the scientific literature with the growth of scientific ideas. [...] Large-scale text analysis, using phrases extracted from titles and abstracts to measure the cognitive extent of the scientific literature, have found that the conceptual territory of science expands linearly with time. In other words, whereas the number of publications grows exponentially, the space of ideas expands only linearly.

Bloom et al. also report a linear increase in life expectancy in section 6. I vaguely remember that there are many more examples where exponential growth becomes linear once evaluated on some other 'natural' metric, but I don't remember where I saw them. Possibly in the literature on logarithmic returns to science. Let me know if it'd be useful if I try to dig up some references.

ETA: See e.g. here, number of known chemical elements. Possibly there are more examples in that SSC post.

Looking at the Decade in Review, I feel like voters systematically over-rate cool but ultimately unimportant posts, and systematically under-rate complicated technical posts that have a reasonable probability of changing people's actual prioritization decisions.

Example: "Effective Altruism is a Question (not an ideology)", the #2 voted post, is a very cool concept and I really like it, but ultimately I don't see how it would change anyone's important life decisions, so I think it's overrated in the decade review.

"Differences in the Intensity of Valenced Experience across Species", the #35 voted post (with 1/3 as many votes as #2), has a significant probability of changing how people prioritize helping different species, which is very important, so I think it's underrated.

(I do think the winning post, "Growth and the case against randomista development", is fairly rated because if true, it suggests that all global-poverty-focused EAs should be behaving very differently.)

This pattern of voting probably happens because people tend to upvote things they like, and a post that's mildly helpful for lots of people is easier to like than a post that's very helpful for a smaller number of people.

(For the record, I enjoy reading the cool conceptual posts much more than the complicated technical posts.)
