See this explainer on why AGI could not be controlled well enough to stay safe:
https://www.lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable
Note: I am no longer part of EA because of the community’s/philosophy’s overreaches. I still post here about AI safety.
Good question!
I haven't written up a separate post on UCF and how it compares to other charity interventions. I'd consider it, but I'm already stretched thin with other work.
I spent time digging into Uganda Community Farm's plans last year and ended up becoming a regular donor. From reading the write-ups, and later asking Anthony about the sorghum training and grain-processing plant projects, I came to see Anthony as thoughtful and strategic about actually relieving poverty in the Kamuli & Buyende region.
Here are short explainers worth reading:
UCF focusses on training farmers and giving them the materials and tools needed to build up their own incomes, which is a much more targeted approach than just transferring money (though one also needs to account for differences in local income levels).
Personally, I think the EA community has often focussed on measuring and mapping out the consequences of global poverty interventions from afar, and not as much on enabling charity entrepreneurs on the ground who have first-hand contextual knowledge of what's holding their community back. My sense is that robust approaches will tend to consider both.
Is there an argument that it is impossible?
There is actually an impossibility argument. Even if you could robustly specify goals in AGI, there is another convergent phenomenon that would cause misaligned effects and eventually remove the goal structures.
You can find an intuitive summary here: https://www.lesswrong.com/posts/jFkEhqpsCRbKgLZrd/what-if-alignment-is-not-enough
Actually, it looks like there is a thirteenth lawsuit, filed outside the US.
A class-action privacy lawsuit was filed in Israel back in April 2023.
I'm wondering whether it's still ongoing: https://www.einpresswire.com/article/630376275/first-class-action-lawsuit-against-openai-the-district-court-in-israel-approved-suing-openai-in-a-class-action-lawsuit
I agree that this implies those people are more inclined to spend the time to consider their options. At the least, they like listening to other people give interesting opinions on the topic.
But we’re all just humans, interacting socially in a community. I think it’s good to stay humble about that.
If we're not, then we make ourselves unable to identify and deal with the information cascades, social proof, and peer group pressures that tend to form in communities.
Three reasons come to mind for why OpenPhil has not funded us.
Does that raise any new questions?
They're not quite doing a brand partnership.
But 80k has featured various safety researchers working at AGI labs over the years; see OpenAI, for example.
So it's more like 80k has created free promotional content and given its stamp of approval to working at AGI labs (of course, 'if you weigh up your options, and think it through rationally', like your friends).
Thank you for the incisive questions.
We received $57k through Manifund plus a $5k donation from a private donor.
Yes, this is correct. Even then, it is stretching it, because we haven't received any income for running the just-finished 150-participant edition (AISC 9). Backpay would be reasonable, to maintain our personal runways.