
Next week for the 80,000 Hours Podcast I'll be interviewing Carl Shulman, advisor to Open Philanthropy and a generally super-informed person on history, technology, possible futures, and a shocking number of other topics.

He has previously appeared on our show and the Dwarkesh Podcast:

He has also written a number of pieces on this forum.

What should I ask him?



Why aren't there more people like him, and what is he doing or planning to do about that?

Related question: How does one become someone like Carl Shulman (or Wei Dai, for that matter)?

I thought about this and wrote down some life events/decisions that probably contributed to becoming who I am today.

  • Immigrating to the US at age 10 knowing no English. Social skills deteriorated while learning language, which along with lack of cultural knowledge made it hard to make friends during teenage and college years, which gave me a lot of free time that I filled by reading fiction and non-fiction, programming, and developing intellectual interests.
  • Was heavily indoctrinated with Communist propaganda while in China, but leaving meant I then had no viable moral/philosophical/political foundations. Parents were too busy building careers as new immigrants and didn't try to teach me values/traditions. So I had a lot of questions that I didn't have ready answers to, which perhaps contributed to my intense interest in philosophy (ETA: and economics and game theory).
  • Had an initial career in cryptography, but found it a struggle to compete with other researchers on purely math/technical skills. Realized that my comparative advantage was in more conceptual work. Crypto also taught me to be skeptical of my own and other people's ideas.
  • Had a bad initial experience with academic re
... (read more)

Maybe if/how his thinking about AI governance has changed over the last year?

Relatedly, I'd be interested to know whether his thoughts on the public's support for AI pauses or other forms of strict regulation have updated since his last comment exchange with Katja, now that we have many reasonably high-quality polls on the American public's perception of AI (much more concerned than excited), as well as many more public conversations.

A bit, but more on the willingness of AI experts and some companies to sign the CAIS letter and lend their voices to the view 'we should go forward very fast with AI, but keep an eye out for better evidence of danger and have the ability to control things later.' My model has always been that the public is technophobic, but that 'this will be constrained like peaceful nuclear power or GMO crops' isn't enough to prevent a technology that enables DSA and OOMs (and nuclear power and GMO crops exist; if AGI exists somewhere, that place outgrows the rest of the world if the rest of the world sits on the sidelines).

If leaders' understanding of the situation is that public fears are erroneous, and going forward with AI means a hugely better economy (and thus popularity for incumbents) and avoiding a situation where abhorred international rivals can safely disarm their military, then I don't expect it to be stopped. So the expert views, as defined by who the governments view as experts, are central in my picture.

Visible AI progress like ChatGPT strengthens 'fear AI disaster' arguments but at the same time strengthens 'fear being behind in AI/others having AI' arguments. The kinds of actions that have been taken so far are mostly of the latter type (export controls, etc.), and measures to monitor the situation and perhaps do something later if the evidential situation changes. I.e. they reflect the spirit of the CAIS letter, which companies like OpenAI and such were willing to sign, and not the pause letter which many CAIS letter signatories oppose. The evals and monitoring agenda is an example of going for value of information rather than banning/stopping AI advances, like I discussed in the comment, and that's a reason it has had an easier time advancing.

Nice to know, Rob! I have really liked the podcasts Carl did. You may want to link to Carl's (great!) blog in your post too.

In general, I would be curious to know more about how Carl thinks about determining how much resources should go into each cause area, which I do not recall being discussed much in Carl's three podcasts. Some potential segues:

Carl has knowledge about lots of topics, very much like Anders Sandberg. So I think the questions I shared to ask Anders are also good questions for Carl:

  • Should more resources be directed towards patient philanthropy at the margin? How much more/less?
  • How binary is long-term value? This is relevant to the importance of the concept of existential risk.
  • Should the idea that more global warming might be good to mitigate the food shocks caused by abrupt sunlight reduction scenarios (ASRSs) be taken seriously? (Anders is on the board of ALLFED, and therefore has knowledge about ASRSs.)
  • Which fraction of the expected effects of neartermist interventions (e.g. global health and development, and animal welfare) flow through longtermist considerations (e.g. long-term effects of changing population size, or expansion of the moral circle)? [Is it unclear whether Against Malaria Foundation is better/worse than Make-A-Wish Foundation, as argued in section 4.1 of Maximal cluelessness?]
  • Under moral realism, are we confident that superintelligent artificial intelligence disempowering humans would be bad?
  • Should we be uncertain about whether saving lives is good/bad because of the meat eater problem [my related calculations; related post]?
  • What is the chance that the time of perils hypothesis is true (e.g. how does the existential risk this century compare to that over the next 1 billion years)? How can we get more evidence for/against it? Relevant because, if existential risk is spread out over a long time, reducing existential risk this century has a negligible effect on total existential risk, as discussed by David Thorstad. [See also Rethink's post on this question.]
  • How high is the chance of AGI lock-in this century?
  • What can we do to ensure a bright future if there are advanced aliens on or around Earth (Magnus Vinding's thoughts)? More broadly, should humanity do anything differently due to the possibility of advanced civilisations which did not originate on Earth? [Another speculative question, which you covered a little in the podcast with Joe, is what we should do differently to improve the world if we are in a simulation.]
  • How much weight should one give to the XPT's forecasts? The ones regarding nuclear extinction seem way too pessimistic to be accurate [the reasons I think this are in this thread]. Superforecasters and domain experts predicted a likelihood of nuclear extinction by 2100 of 0.074 % and 0.55 %, respectively. My guess would be something like 10^-6 (a 10 % chance of a global nuclear war involving tens of detonations, a 10 % chance of it escalating to thousands of detonations, and a 0.01 % chance of that leading to extinction), in which case superforecasters would be off by around 3 orders of magnitude (see the quick arithmetic check after this list).
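
For anyone who wants to verify the arithmetic in the last bullet, here is a minimal sketch; the input probabilities are the illustrative guesses from that bullet (and the XPT figures quoted there), not new data:

```python
# Back-of-the-envelope check of the nuclear-extinction estimate above.
p_war = 0.10          # 10 % chance of a global nuclear war with tens of detonations by 2100 (guess)
p_escalation = 0.10   # 10 % chance such a war escalates to thousands of detonations (guess)
p_extinction = 1e-4   # 0.01 % chance that escalation leads to human extinction (guess)

my_estimate = p_war * p_escalation * p_extinction  # = 1e-6

superforecasters = 7.4e-4  # 0.074 % (XPT superforecasters)
domain_experts = 5.5e-3    # 0.55 % (XPT domain experts)

print(f"implied estimate: {my_estimate:.1e}")                                   # 1.0e-06
print(f"superforecasters / estimate: {superforecasters / my_estimate:.0f}x")    # ~740x, ~3 orders of magnitude
print(f"domain experts / estimate: {domain_experts / my_estimate:.0f}x")        # ~5500x
```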

Importance of the digital minds stuff compared to regular AI safety; how many early-career EAs should be going into this niche? What needs to happen between now and the arrival of digital minds? In other words, what kind of plan does Carl have in mind for making the arrival go well? Also, since Carl clearly has well-developed takes on moral status, I'd like to hear what criteria he thinks could determine whether an AI system deserves moral status, and to what extent.

Additionally—and this one's fueled more by personal curiosity than by impact—Carl's beliefs on consciousness. Like Wei Dai, I find the case for anti-realism as the answer to the problem of consciousness weak, yet this is Carl's position (according to this old Brian Tomasik post, at least), and so I'd be very interested to hear Carl explain his view.

IIRC Carl had a $5M discretionary funding pot from OpenPhil. What has he funded with it?

Not much new on that front besides continuing to back the donor lottery in recent years, for the same sorts of reasons as in the link, and focusing on research and advising rather than sourcing grants.

My understanding is that he believes that full non-indexical conditioning has solved many (most? all?) problems in anthropics. It might be interesting to hear his views on what has been solved, and what is remaining.

I'd like to hear his advice for smart undergrads who want to build their own similarly deep models in important areas which haven't been thought about very much e.g. take-off speeds, the influence of pre-AGI systems on the economy, the moral value of insects, preparing for digital minds (ideally including specific exercises/topics/reading/etc.).

I'm particularly interested in how he formed good economic intuitions, as they seem to come up a lot in his thinking/writing.

Can you ask him whether it's rational to assume AGI comes with significant existential risk as a default position, or whether one has to make a technical case for it coming with x-risk?

How did he deal with two-envelope considerations in his calculation of moral weights for OpenPhil?

[This comment is no longer endorsed by its author]

I have never calculated moral weights for Open Philanthropy, and as far as I know no one has claimed that. The comment you are presumably responding to began by saying I couldn't speak for Open Philanthropy on that topic, and I wasn't.


[Meta] Forum bug: when there were no comments, it showed -1 comments.
