Matrice Jacobine

Student in fundamental and applied mathematics
319 karma · Pursuing a graduate degree (e.g. Master's)

Bio

Technoprogressive, biocosmist, rationalist, defensive accelerationist, longtermist

Comments (44)

GiveWell specifically was started with a focus on smaller donors, but there was always a separation between them and EA.

... I'm confused by what you would mean by early EA then? As the history of the movement is generally told, it started from the merger of three strands: GiveWell (which attempted to make charity-effectiveness research available to well-to-do-but-not-Bill-Gates-rich Westerners), GWWC (which attempted to convince well-to-do-but-not-Bill-Gates-rich Westerners to give to charity too), and the rationalists and proto-longtermists (not relevant here).

Criticisms of ineffective charities (stereotypically, the Make-A-Wish Foundation) could be part of that, but those are specifically the charities well-to-do-but-not-Bill-Gates-rich Westerners tend to donate to when they do donate; I don't think people were going around claiming that the biggest billionaire philanthropic foundations (like, say, well, the Bill Gates Foundation) didn't know what to do with their money.

I have said this in other spaces since the FTX collapse: the original idea of EA, as I see it, was to make the kind of research work done at philanthropic foundations open and usable for well-to-do-but-not-Bill-Gates-rich Westerners. While it's inadvisable to outright condemn billionaires using EA work to orient their donations for... obvious reasons, I do think there is a moral hazard in billionaires funding meta EA. Now, the most extreme policy would be to have meta EA funded solely by membership dues (as plenty of organizations are!). I'm not sure that would really be workable for the amounts of money involved, but some kind of donation cap could plausibly be envisaged.

There is an actual field called institutional development economics, which has won a great chunk of Nobel Prizes and which already has a fairly good grasp of what it takes to get poor countries to develop. The idea that you could learn more about that without engaging with the field in the slightest, but instead by... trying to figure out how to make rich countries (with the institutional frameworks and problems of rich countries) richer, and assuming this will somehow be applied (by whom?) to poor countries (with the institutional frameworks and problems of poor countries) and work the same way, is just... straight-up, complete nonsense.

I contest that. OP (no pun intended) cites both the Abundance Institute and Progress Studies as inspiration, and a cursory look at the think-tank sponsors and affiliations of the people involved in those shows that they are mostly a libertarian-ish, right-of-center bunch.


Recent advances in LLMs have led me to update toward believing that we live in a world where alignment is easy (i.e. CEV naturally emerges from LLMs, and future AI agents will be based on understanding and following natural-language commands by default) but governance is hard (i.e. AI agents might be co-opted by governments or corporations to lock humanity into a dystopian future, and the current geopolitical environment, characterized by democratic backsliding, cold-war mongering, and an increase in military conflicts including wars of aggression, isn't conducive to robust multilateral governance).

Additionally, at the meta-advocacy level, EA will suffer insofar as the bureaucracy is drained of talent. This will be particularly acute for anything touching on areas with heavy federal involvement, like public health, biosecurity, or foreign aid/policy.[3] 

This may be the one silver lining, actually? There is potentially now going to be a growing pool of low-hanging-fruit hires for EA organizations: people who are simultaneously value-aligned and technocratically minded. The thing I'm most worried about on the meta-advocacy side is hostile takeover, as we were discussing with @Bob Jacobs here.

Yeah, IIRC, EY does consider himself to have been net-negative overall so far, hence the whole "death with dignity" spiral. But I don't think one can claim his role has been more negative than OPP/GV deciding to bankroll OpenAI and Anthropic (at least when setting aside the indirect consequences of him having influenced the development of EA in the first place).

I don't think you're alone at all. EY and other prominent rationalists (like LW webmaster Habryka) have also been saying for quite a while that they believe EA has been net-negative for human survival; EleutherAI's Connor Leahy recently released the strongly EA-critical Compendium, which has been praised by many leading longtermists, particularly FLI's Max Tegmark; and Anthropic's recent antics, like calling for recursive self-improvement to beat China, are definitely souring a lot of the people left unconvinced in those spaces on OP. From personal conversations, I can tell you PauseAI in particular is increasingly hostile to EA leadership.

While this is a good argument against it indicating governance-by-default (if people are saying that), securing longtermist funding to work with the free-software community on this (thus overcoming two of the three hurdles) still seems like a potentially very cost-effective way to reduce AI risk that's worth looking into, particularly combined with differential technological development of AI defensive vs. offensive capacities.
