
Next week for the 80,000 Hours Podcast I'll be interviewing Carl Shulman, advisor to Open Philanthropy and a generally super-informed person on history, technology, possible futures, and a shocking number of other topics.

He has previously appeared on our show and the Dwarkesh Podcast:

He has also written a number of pieces on this forum.

What should I ask him?



10 Answers

Why aren't there more people like him, and what is he doing or planning to do about that?

Related question: How does one become someone like Carl Shulman (or Wei Dai, for that matter)?

Wei Dai

I thought about this and wrote down some life events/decisions that probably contributed to becoming who I am today.

  • Immigrating to the US at age 10 knowing no English. Social skills deteriorated while learning language, which along with lack of cultural knowledge made it hard to make friends during teenage and college years, which gave me a lot of free time that I filled by reading fiction and non-fiction, programming, and developing intellectual interests.
  • Was heavily indoctrinated with Communist propaganda while in China, but leaving meant I then had no viable moral/philosophical/political foundations. Parents were too busy building careers as new immigrants and didn't try to teach me values/traditions. So I had a lot of questions that I didn't have ready answers to, which perhaps contributed to my intense interest in philosophy (ETA: and economics and game theory).
  • Had an initial career in cryptography, but found it a struggle to compete with other researchers on purely math/technical skills. Realized that my comparative advantage was in more conceptual work. Crypto also taught me to be skeptical of my own and other people's ideas.
  • Had a bad initial experience with academic re
... (read more)

Maybe if/how his thinking about AI governance has changed over the last year?

Relatedly, I'd be interested to know whether his thoughts on the public's support for AI pauses or other forms of strict regulation have updated since his last comment exchange with Katja, now that we have many reasonably high-quality polls on the American public's perception of AI (much more concerned than excited), as well as many more public conversations.

CarlShulman
A bit, but more on the willingness of AI experts and some companies to sign the CAIS letter and lend their voices to the view 'we should go forward very fast with AI, but keep an eye out for better evidence of danger and have the ability to control things later.'

My model has always been that the public is technophobic, but that 'this will be constrained like peaceful nuclear power or GMO crops' isn't enough to prevent a technology that enables DSA and OOMs (and nuclear power and GMO crops exist; if AGI exists somewhere, that place outgrows the rest of the world if the rest of the world sits on the sidelines). If leaders' understanding of the situation is that public fears are erroneous, and going forward with AI means a hugely better economy (and thus popularity for incumbents) and avoiding a situation where abhorred international rivals can safely disarm their military, then I don't expect it to be stopped. So the expert views, as defined by who the governments view as experts, are central in my picture.

Visible AI progress like ChatGPT strengthens 'fear AI disaster' arguments but at the same time strengthens 'fear being behind in AI/others having AI' arguments. The kinds of actions that have been taken so far are mostly of the latter type (export controls, etc), and measures to monitor the situation and perhaps do something later if the evidential situation changes. I.e. they reflect the spirit of the CAIS letter, which companies like OpenAI and such were willing to sign, and not the pause letter, which many CAIS letter signatories oppose.

The evals and monitoring agenda is an example of going for value of information rather than banning/stopping AI advances, like I discussed in the comment, and that's a reason it has had an easier time advancing.

Nice to know, Rob! I have really liked the podcasts Carl did. You may want to link to Carl's (great!) blog in your post too.

In general, I would be curious to know more about how Carl thinks about determining how much resources should go into each cause area, which I do not recall being discussed much in Carl's 3 podcasts. Some potential segues:

Carl has knowledge about lots of topics, very much like Anders Sandberg. So I think the questions I shared to ask Anders are also good questions for Carl:

  • Should more resources be directed towards patient philanthropy at the margin? How much more/less?
  • How binary is longterm value? Relevant to the importance of the concept of existential risk.
  • Should the idea that more global warming might be good to mitigate the food shocks caused by abrupt sunlight reduction scenarios (ASRSs) be taken seriously? (Anders is on the board of ALLFED, and therefore has knowledge about ASRSs.)
  • Which fraction of the expected effects of neartermist interventions (e.g. global health and development, and animal welfare) flow through longtermist considerations (e.g. longterm effects of changing population size, or expansion of the moral circle)? [Is it unclear whether Against Malaria Foundation is better/worse than Make-A-Wish Foundation, as argued in section 4.1 of Maximal cluelessness?]
  • Under moral realism, are we confident that superintelligent artificial intelligence disempowering humans would be bad?
  • Should we be uncertain about whether saving lives is good/bad because of the meat eater problem [my related calculations; related post]?
  • What is the chance that the time of perils hypothesis is true (e.g. how does the existential risk this century compare to that over the next 1 billion years)? How can we get more evidence for/against it? Relevant because, if existential risk is spread out over a long time, reducing existential risk this century has a negligible effect on total existential risk, as discussed by David Thorstad. [See also Rethink's post on this question.]
  • How high is the chance of AGI lock-in this century?
  • What can we do to ensure a bright future if there are advanced aliens on or around Earth (Magnus Vinding's thoughts)? More broadly, should humanity do anything differently due to the possibility of advanced civilisations which did not originate on Earth? [Another speculative question, which you covered a little in the podcast with Joe, is what we should do differently to improve the world if we were in a simulation.]
  • How much weight should one give to the XPT's forecasts? The ones regarding nuclear extinction seem way too pessimistic to be accurate [the reasons I think this are in this thread]. Superforecasters and domain experts predicted likelihoods of nuclear extinction by 2100 of 0.074 % and 0.55 %, respectively. My guess would be something like 10^-6 (a 10 % chance of a global nuclear war involving tens of detonations, a 10 % chance of it escalating to thousands of detonations, and a 0.01 % chance of that leading to extinction), in which case superforecasters would be off by 3 orders of magnitude.
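The arithmetic behind that last guess can be written out as a quick sketch. Every factor below is the commenter's own stated assumption, not an established figure:

```python
# Back-of-the-envelope estimate from the bullet above.
# All three probabilities are the commenter's guesses.
p_nuclear_war = 0.10    # global nuclear war involving tens of detonations
p_escalation = 0.10     # given war: escalates to thousands of detonations
p_extinction = 0.0001   # given escalation: leads to human extinction

p_total = p_nuclear_war * p_escalation * p_extinction
print(f"P(nuclear extinction by 2100) ~ {p_total:.0e}")  # ~ 1e-06

# Comparison with the XPT superforecaster figure quoted above (0.074 %)
superforecasters = 0.00074
print(f"superforecasters / guess = {superforecasters / p_total:.0f}x")  # 740x
```

The ratio of roughly 740x is where the "off by 3 orders of magnitude" claim comes from.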

Importance of the digital minds stuff compared to regular AI safety; how many early-career EAs should be going into this niche? What needs to happen between now and the arrival of digital minds? In other words, what kind of a plan does Carl have in mind for making the arrival go well? Also, since Carl clearly has well-developed takes on moral status, what criteria he thinks could determine whether, and to what extent, an AI system deserves moral status.

Additionally—and this one's fueled more by personal curiosity than by impact—Carl's beliefs on consciousness. Like Wei Dai, I find the case for anti-realism as the answer to the problem of consciousness weak, yet this is Carl's position (according to this old Brian Tomasik post, at least), and so I'd be very interested to hear Carl explain his view.

IIRC Carl had a $5M discretionary funding pot from OpenPhil. What has he funded with it?

Not much new on that front besides continuing to back the donor lottery in recent years, for the same sorts of reasons as in the link, and focusing on research and advising rather than sourcing grants.

My understanding is that he believes that full non-indexical conditioning has solved many (most? all?) problems in anthropics. It might be interesting to hear his views on what has been solved, and what is remaining.

I'd like to hear his advice for smart undergrads who want to build their own similarly deep models in important areas which haven't been thought about very much e.g. take-off speeds, the influence of pre-AGI systems on the economy, the moral value of insects, preparing for digital minds (ideally including specific exercises/topics/reading/etc.).

I'm particularly interested in how he formed good economic intuitions, as they seem to come up a lot in his thinking/writing.

Can you ask him whether or not it's rational to assume AGI comes with significant existential risk as a default position, or if one has to make a technical case for it coming with x-risk? 

How did he deal with two-envelope considerations in his calculation of moral weights for OpenPhil?

[This comment is no longer endorsed by its author]

I have never calculated moral weights for Open Philanthropy, and as far as I know no one has claimed that. The comment you are presumably responding to began by saying I couldn't speak for Open Philanthropy on that topic, and I wasn't.

[Meta] Forum bug: when there were no comments it was showing as -1 comments
