Davidmanheim

Head of Research and Policy @ ALTER - Association for Long Term Existence and Resilience

First, I was convinced, separately, that chip production location matters more than I presumed here, because chips are not commodities in an important way I neglected: the security of a chip isn't really verifiable post-hoc, and worse, the differential vulnerability of chips to US versus Chinese backdoors means that companies based in different locations will have different preferences for which risks to tolerate. (On the other hand, I think you're wrong in saying that "the chip supply chain has unique characteristics [compared to oil,] with extreme manufacturing concentration, decades-long development cycles, and tacit knowledge that make it different" - because the same is true for crude oil extraction! What matters is who refines it, who buys it, and what it's used for.)

Second, I agree that the dichotomy of short versus long timelines unfairly simplifies the question - I had intended to indicate that this was a spectrum in the diagram, but on rereading, didn't actually say this. So I'll clarify a few points. First, as others have noted, the relevant timeline is from now to takeoff, not from now to actual endgame. Second, if we're talking about takeoff after 2035, the investments in China are going to swamp western production. (This is the command economy advantage - though I could imagine it's vulnerable to the typical failure modes where they overinvest in the wrong thing, and can't change course quickly.)

On the other hand, for the highest-acceleration short timelines, we're past the point of any decisive moves on chip fabrication, and arguably past the point where decisions about hardware usage can change what occurs. The only route to controlling the technology is short-term policy, where only the relative leads of the specific frontier companies matter, and controlling the chips is about maintaining a very short-term lead that doesn't depend on technical expertise, just on hardware. (I'm skeptical of this - not because it's implausible, but because the cost of these fights is high. That is, I think it's more likely that in these worlds the critical risk mitigation is global cooperation to stop loss of control - which means that the fights being created over hardware are on net damaging!)

For moderately short, 2-6 year timelines, the lead times for chip fabs are long enough that we're mostly locked in - not just to overall western dominance via chips produced in Taiwan, but because fabrication plants being built today won't come online until closer to 2029, and the rush to build Chinese fabrication plants is already baked in. And that's just the fabs - for the top chips, the actual chip design usually takes as long as or longer than building the plant. So we're going to see shifts towards the end of the window either way.

And in those moderate-timeline worlds, I'll strongly grant your point that location - in terms of which companies have the technical lead for producing AGI - matters a lot. I just don't see it as impacted that much by chip embargoes, which will be circumvented by smuggling, by leasing the chips via proxies, etc. And as with the above scenario, I think that hobbling Chinese acquisition of chips turns this into a zero-sum game along the wrong dimension - because the actual force dictating which AI companies have access to the most compute is the capital markets, and expectations for profit. But this brings in a point I didn't discuss at all here, and wasn't thinking about: AI companies have commodified their own offerings. Capital market expectations seem not to be accounting for this - or else they are properly pricing both the upside of a single-company AI singleton and the downside of commodified offerings meaning there's no profit at all.

Either way, I'm unsure that western countries should see much marginal benefit in the coming years from controlling chip location. "Hobbling" Chinese AI efforts is still easier to do via current market dynamics where western companies have better market options, and will pay more for the chips - if that's even a benefit, which seems very unclear given the commodification of models and the benefit accruing to the users of AI models.

So my conclusion is that this is very much unclear, and I'd love to see a lot more explicit reasoning about the models for impact, and about how the policy angles relate to the timelines and the underlying risks - all of which is very much missing from the public discussions I've seen.

As a meta-comment, it's worth noting that a huge proportion of the disagreement in the comments here is about what "engage deeply" means.

If it means that these are cruxes that must be decided upon, the claim is clearly true that we must engage with them - because they certainly are cruxes.
If it means that people must individually spend time doing so, it is clearly false, because people can rationally choose not to engage and instead use some heuristic, or defer to experts, which is rational[1].

  1. ^

    In worlds where computation and consideration are not free. Under certain technical assumptions about what "rational" means in game theory, we could claim it's irrational, because rationality typically assumes zero cost of computation. But this is mostly a stupid nitpick.

Deference to authority is itself a philosophical contention which has been discussed and debated (in that case, in comparison to voting as a method).

80% disagree

It is possible to rationally prioritise between causes without engaging deeply on philosophical issues

As laid out at length in my 2023 post, no, it is not. To give a single quote: "all of axiology, which includes both aesthetics and ethics, and large parts of metaphysics... are central to any discussion of how to pursue cause-neutrality, but are often, in fact, nearly always, ignored by the community."

As to the idea that one can defer to others in place of engaging deeply: this is philosophically debated, and while deferring is rational in the decision-theoretic sense, it is far harder to justify as a philosophical claim.

That said, my personal views and decisions are informed on a different basis.

Definitely seems reasonable, but it would ideally need to be done somewhere high-prestige.

Convince funders to invest in building those plans - to sketch out futures and treaties that could work robustly to stop the likely nightmare of default AGI/ASI futures.

The key takeaway, which I and others have argued for, should be to promote investment in clear plans for what post-warning-shot AI governance looks like. Unfortunately, despite the huge contingent value, there is very little good work on the topic.

The description is about punishment for dissent from non-influential EAs, but the title is about influential members. (And I'd vote differently depending on which is intended.)

Fit is an important aspect of hiring! (As are diversity, etc.) Picking the person who gets the highest score on the trial, while ignoring how they fit with the team, is a huge problem.

The description seems fine, but the title seems to get this wrong by referencing fit instead of nepotism or similar.

I imagine that there would be willingness to do a matching-raised-funds program, where the company pledges to match funds that employees have pledged to charities. For example, someone chooses to do a 10k run for a charity and gets friends and family to pledge to the charity, or chooses to do a birthday fundraiser in lieu of presents. This seems like it would qualify for the bounty, and the framing seems less weird than what you proposed, even though it's essentially identical.
