I'm a globally ranked top 20 forecaster. I believe that AI is not a normal technology. I'm working to help shape AI for global prosperity and human freedom. Previously, I was a data scientist with five years of industry experience.
Congrats! I also thought it was great.
Sorry for the slightly off-topic question but I noticed EAG London 2025 talks are uploaded to YouTube but I didn't see any EAG Bay Area 2025 talks. Do you know when those will go up?
If you're considering a career in AI policy, now is an especially good time to start applying widely, as there's a lot of hiring going on right now. On my Substack I documented over a dozen different opportunities that I think are very promising.
Thank you for sharing your perspective and I'm sorry this has been frustrating for you and people you know. I deeply appreciate your commitment and perseverance.
I hope to share a bit of perspective from me as a hiring manager on the other side of things:
Why aren’t orgs leaning harder on shared talent pools (e.g. HIP’s database) to bypass public rounds? HIP is currently running an open search.
It's very difficult to run an open search that covers every conceivable job and identifies the best fit for each of them. And even if you do have a list of the top candidates for everything, it's still hard to sort and filter that list without more screening. This makes HIP a valuable supplement but not a replacement.
~
I also think it would be worth considering how to provide some sort of job security/benefit for proven commitment within the movement
'The movement' is just the mix of all the people and orgs doing their own thing. Individual orgs themselves should be responsible for job security and rewarding commitment - the movement itself unfortunately isn't an entity that is capable of doing that.
~
I know one lady who worked at a top EA org for eight years; she's now struggling to find her next position within the movement, competing with new applicants! That seems like a waste of career capital.
Hopefully her eight years gives her an edge over other applicants! That is, the career capital hasn't been 'wasted' at all. But it still makes sense to weigh her against other applicants who may have other skills needed for the role - being good at one role doesn't automatically make you a perfect fit for another role.
~
Moreover, I would avoid the expensive undertaking of a full hiring round until my professional networks had been exhausted. After all, if you're in my network to begin with, you probably did something meritorious to get there.
While personal networks are a great place to source talent, they're far from perfect - in particular, while personal networks are built partly on merit, they are also shaped by bias and a preference for 'people like us'. A 'full hiring round' is thus more meritocratic - anyone can apply, and you don't need to figure out how to get into the right person's network first.
~
You might like this article: Don't be bycatch.
At least in my own normative thought, I don't just wonder about what meets my standards. [...] I think the most important disagreement of all is over which standards are really warranted.
Really warranted by what? I think I'm an illusionist about this in particular, as I don't even know what we could reasonably be disagreeing over.
For a disagreement about facts (is this blue?), we can argue about actual blueness (measurable) or we can argue about epistemics (which strategies most reliably predict the world?) and meta-epistemics (which strategies most reliably figure out strategies that reliably predict the world?), etc.
For disagreements about morals (is this good?), we can argue about goodness, but what is goodness? Is it platonic? Is it grounded in God? I'm not even sure what there is to dispute. I'd argue the best we can do is appeal to our shared values (perhaps even universal human values, perhaps idealized by arguing about consistency, etc.) and then see what best satisfies those.
~
On your view, there may not be any normative disagreement, once we all agree about the logical and empirical facts.
Right - and this matches our experience! When moral disagreements persist after full empirical and logical agreement, we're left with clashing bedrock intuitions. You want to insist there's still a fact about who's ultimately correct, but can't explain what would make it true.
~
It's interesting to consider the meta question of whether one of us is really right about our present metaethical dispute, or whether all you can say is that your position follows from your epistemic standards and mine follows from mine, and there is no further objective question about which we even disagree.
I think we're successfully engaging in a dispute here and that does kind of prove my position. I'm trying to argue that you're appealing to something that just doesn't exist and that this is inconsistent with your epistemic values. Whether one can ground a judgement about what is "really warranted" is a factual question.
~
I want to add that your recent post on meta-metaethical realism also reinforces my point here. You worry that anti-realism about morality commits us to anti-realism about philosophy generally. But there's a crucial disanalogy: philosophical discourse (including this debate) works precisely because we share epistemic standards - logical consistency, explanatory power, and various other virtues. When we debate meta-ethics or meta-epistemology, we're not searching for stance-independent truths but rather working out what follows from our shared epistemic commitments.
The "companions in guilt" argument fails because epistemic norms are self-vindicating in a way moral norms aren't. To even engage in rational discourse about what's true (including about anti-realism), we must employ epistemic standards. But we can coherently describe worlds with radically different moral standards. There's no pragmatic incoherence in moral anti-realism the way there would be in global philosophical anti-realism.
You're right that I need to bite the bullet on epistemic norms too, and I do think that's a highly effective reply. But at the end of the day, yes, I think "reasonable" in epistemology is also implicitly goal-relative in a meta-ethical sense - it means "in order to have beliefs that accurately track reality." The difference is that this goal is so universally shared across so many different value systems, and so deeply embedded in the concept of belief itself, that it feels categorical.
You say I've "replaced all the important moral questions with trivial logical ones," but that's unfair. The questions remain very substantive - they just need proper framing:
Instead of "Which view is better justified?" we ask "Which view better satisfies [specific criteria like internal consistency, explanatory power, alignment with considered judgments, etc.]?"
Instead of "Would the experience machine be good for me?" we ask "Would it satisfy my actual values / promote my flourishing / give me what I reflectively endorse / give me what an idealized version of myself might want?"
These aren't trivial questions! They're complex empirical and philosophical questions. What I'm denying is that there's some further question -- "But which view is really justified?" -- floating free of any standard of justification.
Your challenge about moral uncertainty is interesting, but I'd say: yes, you can hedge across different moral theories if you have a higher-order standard for managing that uncertainty (like maximizing expected moral value across theories you find plausible). That's still goal-relative, just at a meta-level.
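To make that concrete with a toy example (numbers purely illustrative, not anything you've committed to): suppose I give 60% credence to theory A, on which option X scores +10, and 40% credence to theory B, on which X scores -5. Then the credence-weighted value of X is 0.6 × 10 + 0.4 × (-5) = 4, and I pick X over any alternative that scores lower on the same calculation. The procedure works, but it's still goal-relative - it presupposes the meta-level standard "maximize expected value across the theories I find plausible."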
The key insight remains: every "should" or "justified" implicitly references some standard. Making those standards explicit clarifies rather than trivializes our discussions. We're not eliminating important questions - we're revealing what we're actually asking.
You raise a fair challenge about epistemic norms! Yes, I do think there are facts about which beliefs are most reasonable given evidence. But I'd argue this actually supports my view rather than undermining it.
The key difference: epistemic norms have a built-in goal - accurate representation of reality. When we ask "should I expect emeralds to be green or grue?" we're implicitly asking "in order to have beliefs that accurately track reality, what should I expect?" The standard is baked into the enterprise of belief formation itself.
But moral norms lack this inherent goal. When you say some goals are "intrinsically more rationally warranted," I'd ask: warranted for what purpose? The hypothetical imperative lurks even in your formulation. Yes, promoting happiness over misery feels obviously correct to us - but that's because we're humans with particular values, not because we've discovered some goal-independent truth.
I'm not embracing radical skepticism or saying moral questions are nonsense. I'm making a more modest claim: moral questions make perfect sense once we specify the evaluative standard. "Is X wrong according to utilitarianism?" has a determinate, objective, mind-independent answer. "Is X wrong simpliciter?" does not.
The fact that we share deep moral intuitions (like preferring happiness to misery) is explained by our shared humanity, not by those intuitions tracking mind-independent moral facts. After all, we could imagine beings with very different value systems who would find our intuitions as arbitrary as we might find theirs.
So yes, I think we can know things about the future and have justified beliefs. But that's because "justified" in epistemology means "likely to be true" - there's an implicit standard. In ethics, we need to make our standards explicit.
Thanks!
I think all reasons are hypothetical, but some hypotheticals (like "if you want to avoid unnecessary suffering...") are so deeply embedded in human psychology that they feel categorical. This explains our moral intuitions without mysterious metaphysical facts.
The concentration camp guard example actually supports my view - we think the guard shouldn't follow professional norms precisely because we're applying a different value system (human welfare over rule-following). There's no view from nowhere; there's just the fact that (luckily) most of us share similar core values.
Hi David - I work a lot on semiconductor/chip export policy, so I think it's very important to think through the strategy here.
My biggest issue is that "short vs. long" timelines is not a binary. I agree that under longer timelines, say post-2035, China likely can catch up significantly on chip manufacturing. (This seems much less likely pre-2035.) But I think the logic of the controls matters a great deal for 2025-2035 timelines, and the controls might still create a larger strategic advantage post-2035.
Who has the chips still matters, since it determines whether a country has enough compute to train its own models, run any models, and provision cloud providers. You treat "differential adoption" and "who owns chips" as separate when they're deeply interconnected. If you control chip supply, you inherently influence adoption patterns. There would be diffusion of AI of course, but given chip controls it would be much more likely to come from the US, and the AI could well remain on US cloud infrastructure under US control.
Furthermore, if you grant that AI can accelerate AI development itself, a 2-3 year compute advantage could be decisive... and not just via "fast take-off recursive self-improvement" but even in mundane ways where better AI leads to better chip design tools, better compiler optimization, better datacenter cooling systems, and better materials science for next-gen chips.
You're right that it is impossible to control 100% of the chips, but that's not the goal. The goal is to control enough of the chips enough of the time to create a structural advantage. Maintaining a 10-to-1 US compute advantage over China would mean that even if we had AI parity, we could still run 10x more AI agents than China. And we'd likely have better AI per agent as well.
For example, consider the Russian oil sanctions you discuss - yes, there's significant leakage to India and China and these controls aren't perfect, but Russia's realized prices have stayed ~$15-20/barrel below Brent throughout 2024-2025, forcing Russia to accept steep discounts while burning cash on shadow fleet operations and longer shipping routes.
And chips are much easier to control than oil right now. Currently, OpenAI can buy one million NVIDIA GB300s to power Stargate, but China and Russia can't come close. China's chips are currently much weaker in both quantity and quality, and this will persist for a while, as China lacks the relevant chipmaking equipment and likely will for some time -- the EUV technology that prints chips at nanometer scale took decades to develop and is arguably the most advanced technology ever made. You seem to be engaging in all-or-nothing thinking here, or assuming that we can't possibly block enough chips to matter. But we have already significantly reduced China's compute stock, and you even have people like DeepSeek's CEO saying that chip controls are their biggest barrier. Chinese AI development would certainly look different if China could freely buy one million GB300s as well.
The key thing is that semiconductor manufacturing isn't a commodity market with fungible goods flowing to equilibrium. You're treating this as a standard economic problem where market forces inevitably equalize access, and you assume largely frictionless markets - but neither of these seems true. The chip supply chain is different: extreme manufacturing concentration, decades-long development cycles, and tacit knowledge that doesn't transfer easily. Additionally, network effects in AI development could create lock-in before economic pressure equalizes access. Moreover, American/Western AI and chip technology isn't going to flow freely to China, because the US government would continue to stop that from happening as a matter of national security. Capital does flow, but this technology cannot flow quickly, freely, or easily.
It's also not easy to simply make up for a chip disadvantage with an energy advantage. It's very difficult to train frontier AI models on ancient hardware. DeepSeek has been trying hard all year to train its models on Huawei chips and still hasn't succeeded. It doesn't matter how cheap you make energy if chips remain the limiting factor. Arguably, TSMC's lead over SMIC has grown, not shrunk, over the past decade despite massive Chinese investment.
All told, I think that China will be at a significant AI disadvantage over the next decade or more, and this is due to reasonably effective (albeit imperfect) chip controls. Ideally we would make the chip controls even better and stronger to press that advantage further (I have ideas on how), but that's a different conversation from the strategic wisdom of the controls in the first place.