Sergio Diaz 🔸

Comments
Even if you're skeptical about the direct impact of AI safety work on reducing existential risk (a much longer conversation, and one I'm not fully qualified to have), there's a strong indirect case that the EA and EA-adjacent prioritization of AI in the mid-2010s will end up being hugely important for "traditional", non-speculative EA causes like global health and animal welfare. Most of Anthropic's co-founders and many of its early employees were deeply involved in the EA and rationalist communities, and it's at least plausible that this engagement is what led them to take AI seriously enough to found Anthropic in 2021 or to join early with substantial equity.

As Sophie Kim's post documents, Anthropic's seven co-founders have pledged to donate 80% of their wealth, which at current valuations could amount to roughly $37.8B combined, nearly ten times what Coefficient Giving has disbursed in its entire history. Including employee equity already in DAFs, the total pool of EA-influenced philanthropic capital could reach well into eleven figures. It's not unreasonable to assume that a substantial fraction of this is likely to flow into non-AI causes. Many of these donors signed the GWWC pledge before AI was their focus and hold a worldview and values closely aligned with the broader effective altruism community (even outside EA, it is not that uncommon for people with large sums of money and a modest amount of altruism to donate significant amounts to global health).

Needless to say, this is a rough central estimate, not a guarantee. It's possible that Anthropic or the entire AI ecosystem collapses and these funds never materialize, but it's also possible that Anthropic's returns end up being even larger.
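As a back-of-envelope check of the figures above: the comment gives the $37.8B pledged amount and the 80% pledge fraction; the implied combined equity and the ~$4B lifetime-disbursement figure for Coefficient Giving are assumptions back-calculated from the "nearly ten times" claim, not numbers from the post.

```python
# Back-of-envelope sanity check of the figures in the comment above.
# ASSUMPTIONS (not from the post): Coefficient Giving's lifetime
# disbursements taken as ~$4B, inferred from the "nearly ten times"
# claim; combined co-founder equity back-calculated from the pledge.

PLEDGE_FRACTION = 0.80
pledged = 37.8e9                       # combined pledge at current valuations

implied_equity = pledged / PLEDGE_FRACTION
assumed_disbursed = 4.0e9              # assumed Coefficient Giving lifetime total

ratio = pledged / assumed_disbursed
print(f"Implied combined co-founder equity: ${implied_equity / 1e9:.2f}B")
print(f"Pledge vs. lifetime disbursements: {ratio:.1f}x")
```

Under these assumptions the implied equity is about $47B and the ratio lands just under 10x, consistent with the "nearly ten times" framing.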

Nice post!

The Simulacra View has (as I'm sure you're aware) a distinctly Repugnant Conclusion-ish flavor.

One thing that's not entirely clear to me is the claim that it wouldn't be possible in principle to measure simulacra welfare. The argument seems to be that measurement is pointless because the subject ceases to exist by the time we obtain it. But this (I think) conflates the epistemic validity of a measurement with the temporal persistence of the subject. A measurement of suffering at time t remains valid evidence that suffering occurred at t, regardless of whether the subject still exists at t+1.

Also, such measurements could be valuable for determining the welfare of future simulacra, if we have reason to think they'll correlate — for instance, if they're generated by the same process or systematically make similar welfare reports.

Thanks for the support and for your comments, Clara! You're quite right. I've spelled out each acronym the first time it appears and added a footnote briefly explaining the meat-eater problem. Hopefully this makes it more readable and accessible :)