Jacob Davis, a writer for the socialist political magazine Jacobin, raises an interesting concern: in his assessment, current longtermist initiatives in AI Safety are escalating tensions between the US and China. This highlights a conundrum for the Effective Altruism movement, which seeks both to advance AI Safety and to avoid a great power conflict between the US and China.

This is not the first time this conundrum has been raised; it has been explored on the Forum previously by Stephen Clare.

The key points Davis asserts are that:

  • Longtermists were key players in President Biden's decision last October to place heavy controls on semiconductor exports.
  • Key longtermist figures advancing export controls and hawkish policies against China include former Google CEO Eric Schmidt (through Schmidt Futures and the longtermist political fund Future Forward PAC), former congressional candidate and FHI researcher Carrick Flynn, as well as other longtermists in key positions at Georgetown's Center for Security and Emerging Technology (CSET) and the RAND Corporation.
  • Export controls have failed to limit China's AI research, but have wrought havoc on global supply chains and are seen as protectionist in some circles.

I hope this linkpost opens up a debate about the merits and weaknesses of current strategies and views in longtermist circles.

Comments



I think there's something to this, but:

  • My impression of Eric Schmidt is that he is not a longtermist, and if anything has done lots to accelerate AI progress.
  • The October 7 controls have not "devastated critical supply chains". The linked article gives no evidence for this claim. China has something like 10% or less of the chip market share, and the export controls don't affect other countries' abilities to produce chips (though they do prevent some chips from being sold to China). Most fabs right now have utilization rates well below 100%, meaning they produce fewer chips than they could due to weak demand.
  • The October 7 controls also have not "upset markets" globally, or at least the linked article gives no evidence for this claim. Memory chip-makers like Samsung have seen profits fall, but this seems to be a normal business cycle thing --- semiconductors, and especially memory chips, are a cyclical industry, sensitive to consumer demand, and the current downturn is almost certainly related to the global financial downturn and associated reduction in consumer demand.
    • I think the October 7 controls have affected and will affect markets, but mostly by reducing profits of companies selling chips and equipment to China, and reducing the supply of some chips and equipment within China (their intended purpose). There'll probably be other, indirect effects down the line, but it's hard to say what those will be now.
  • I also note a tension between those two points -- the first blames the October 7 controls for there being a chip supply shortage, and the second blames the controls for there being a chip oversupply. Neither is true.
  • I disagree with the claims that the October 7 controls have "failed spectacularly at achieving their stated ambitions" and that despite them "China’s AI research has managed to continue apace".
    • I basically disagree with the linked article.
      • It states that Nvidia is releasing export-control-adapted versions of its chips with lower interconnect bandwidth (to fall below the export control thresholds) for the Chinese market. This is true, but the gap between the state of the art and what can be sold to China will grow.
      • It seems to suggest that compute will be less important in future. I think that's unlikely, at least for developing frontier models.
      • Another purpose of the October 7 controls was to limit Chinese chip-makers' access to equipment, materials and software, and they seem tentatively pretty successful at that (though time will tell).
  • I think the "increased West-China tensions" point is right though and fairly concerning.
  • I also think the "CSET was a major contributor to the October 7 controls" point is right, but whether this was ex ante good or bad probably depends on one's views on AI x-risk.
Arepo

My impression of Eric Schmidt is that he is not a longtermist, and if anything has done lots to accelerate AI progress.


This seems no-true-Scotsmany. It seems to have become almost commonplace for organisations that started from a longtermist seed to have become competitors in the AI arms race, so if many people who are influenced by longtermist philosophy end up doing stuff that seems harmful, we should update towards 'longtermism tends to be harmful in practice' much more than towards 'those people are not longtermists'.

It seems to have become almost commonplace for organisations that started from a longtermist seed to have become competitors in the AI arms race, so if many people who are influenced by longtermist philosophy end up doing stuff that seems harmful, we should update towards 'longtermism tends to be harmful in practice' much more than towards 'those people are not longtermists'.

I agree with this, but "longtermists may do harmful stuff" doesn't mean "this person doing harmful stuff is a longtermist". My understanding is that Schmidt (1) has never espoused views along the lines of "positively influencing the long-term future is a key moral priority of our time", and (2) seems to see AI/AGI kind of like the nuclear bomb -- a strategically important and potentially dangerous technology that the US should develop before its competitors.

I think it's fair for Davis to characterise Schmidt as a longtermist.

He's recently been vocal about AI X-Risk. He funded Carrick Flynn's campaign, which was openly longtermist, via the Future Forward PAC alongside Moskovitz & SBF. His philanthropic organisation Schmidt Futures has a future-focused outlook and funds various EA orgs.

And there are longtermists who are pro-AI, like Sam Altman, who want to use AI to capture the lightcone of future value.

https://www.cnbc.com/amp/2023/05/24/ai-poses-existential-risk-former-google-ceo-eric-schmidt-says.html

He's recently been vocal about AI X-Risk.

Yeah, but so have lots of people; it doesn't mean they're all longtermists. Same thing with Sam Altman -- I haven't seen any indication that he's a longtermist, but would definitely be interested if you have any sources. This tweet seems to suggest that he does not consider himself a longtermist.

He funded Carrick Flynn's campaign which was openly longtermist, via the Future Forward PAC alongside Moskovitz & SBF.

Do you have a source on Schmidt funding Carrick Flynn's campaign? Jacobin links this Vox article which says he contributed to Future Forward, but it seems implied that it was to defeat Donald Trump. Though I actually don't think this is a strong signal, as Carrick Flynn was mostly campaigning on pandemic prevention and that seems to make sense on neartermist views too.

His philanthropic organisation Schmidt Futures has a future focused outlook and funds various EA orgs.

I know Schmidt Futures has "future" in its name, but as far as I can tell they're not especially focused on the long-term future. They seem to just want to boost innovation through scientific research and talent growth, but so does, like, nearly every government. For example, their Our Mission page does not mention the word "future".

His philanthropic organisation Schmidt Futures...funds various EA orgs

Can you give some examples? My impression was that the funding has been minimal at best; I would be surprised if EA orgs receive, say, >10% of their funding, and likely it's <1%.

Also I don't want to overstate this point, but I don't think I've yet met a longtermist researcher who claims to have had an extended (or any) conversation with Schmidt. Given that there aren't many longtermist researchers to begin with (<500 worldwide, defined rather broadly?), it'd be quite surprising for someone to claim to be a longtermist (or for others to claim that they are) if they've never even talked to someone doing research in the space.

To be fair, I think a few Schmidt Futures people were looking around EA Global for things to fund in 2022. I can imagine why someone would think they're a longtermist.

I agree there are probably a few longtermist and/or EA-affiliated people at Schmidt Futures, just as there are probably such people at Google, Meta, the World Bank, etc. This is a different claim from whether Schmidt Futures institutionally is longtermist, which is again a different claim from whether Eric Schmidt himself is.

My understanding is that Schmidt (1) has never espoused views along the lines of "positively influencing the long-term future is a key moral priority of our time"

I don't think that's so important a distinction. Prominent longtermists have declared the view that longtermism basically boils down to x-risk, which (again in their view) overwhelmingly boils down to AI risk. If, following their messaging, we get highly influential people doing harmful stuff in the name of AI risk, I think we should still update towards 'longtermism tends to be harmful in practice'. 

Not as much as if they were explicitly waving a longtermist banner, but the more we believe the longtermist movement has had any impact on society at all, the stronger this update should be.

The posts linked in support of "prominent longtermists have declared the view that longtermism basically boils down to x-risk" do not actually advocate this view. In fact, they argue that longtermism is unnecessary in order to justify worrying about x-risk, which is evidence for the proposition you're arguing against, i.e. you cannot conclude someone is a longtermist because they're worried about x-risk.

Are you claiming that if (they think and we agree that) longtermism is 80+% concerned with AI safety work and AI safety work turns out to be bad, we shouldn't update that longtermism is bad? The first claim seems to be exactly what they think. 

Scott:

Does Long-Termism Ever Come Up With Different Conclusions Than Thoughtful Short-Termism?

I think yes, but pretty rarely, in ways that rarely affect real practice... Most long-termists I see are trying to shape the progress and values landscape up until that singularity, in the hopes of affecting which way the singularity goes

You could argue that he means 'socially promote good norms on the assumption that the singularity will lock in much of society's then-standard morality', but 'shape them by trying to make AI human-compatible' seems a much more plausible reading of the last sentence to me, given the context of longtermism.

Neel:

If you believe the key claims of "there is a >=1% chance of AI causing x-risk and >=0.1% chance of bio causing x-risk in my lifetime" this is enough to justify the core action relevant points of EA

He identifies as a not-longtermist (mea culpa), but presumably considers longtermism the source of these 'core action relevant points of EA', since they certainly didn't come from the global poverty or animal welfare wings.

Also, at EAG London, Toby Ord estimated there were 'less than 10' people in the world working full time on general longtermism (as opposed to AI or biotech) - whereas the number of people who'd consider themselves longtermist is surely in the thousands.

I don't know how we got to whether we should update about longtermism being "bad." As far as I'm concerned, this is a conversation about whether Eric Schmidt counts as a longtermist by virtue of being focused on existential risk from AI.

It seems to me like you're saying: "the vast majority of longtermists are focused on existential risks from AI; therefore, people like Eric Schmidt who are focused on existential risks from AI are accurately described as longtermists."

When stated that simply, this is an obvious logical error (in the form of "most squares are rectangles, so this rectangle named Eric Schmidt must be a square"). I'm curious if I'm missing something about your argument.

This is a true claim in general, but it seems quite implausible for Schmidt specifically, who has been in tech and at Google for much longer than people in our parts have been around.

Mind if I re-frame this discussion? The relevant question here shouldn't be a matter of beliefs ("is he a longtermist?") but a matter of identity and identity strength. This isn't to say beliefs aren't important or that knowing his wouldn't be informative, but identity (at least to some considerable degree) precedes and predicts beliefs and behavior.

 

But I also don't want to overemphasize particular labels; there are enough discernible positions out there that this isn't very helpful, especially for individuals with some expertise or in positions of authority, who may be reluctant to carelessly endorse particular groups.

Accepting this, here's some of what we could look into:

  • Amount of positive socialization with EAs and affiliates (Jason Matheny's FLI history is notable; how long and involved was this position?)
  • Amount of out-group derogation: if he's positioned against our out-group, this may indicate or induce sympathy. Mentioning X-risk seriously once did this, and may still to a degree.
  • Effect of role identities (Matheny apparently did malaria work before EA. Not sure what the tech industry or being Google CEO entails; defensiveness or maybe self-importance(?): "yeah, me quoting the Bhagavad Gita would sound good!")
  • Identities are correlated; what are his political, religious and cultural identities?

I agree that identity and identity strength are important variables for collective guilt assignment.

That said, I think the case for JM is substantially stronger than the case for Schmidt, which we were previously talking about upthread.
