
Linch

@ EA Funds
26542 karma · Joined · Working (6-15 years) · openasteroidimpact.org

Comments (2799)

Also, at the risk of stating the obvious, people occupying the extreme ends of a spectrum of positions (within a specific context) will frequently feel that their perspectives are unfairly maligned or censored.

If the consensus position is that the minimum wage should be $15/hour, both people who believe it should be $0 and people who believe it should be $40/hour may feel social pressure to moderate their views; it takes active effort to reduce pressures in that direction. 

Hi Jason. Yeah, this makes a lot of sense. I think in general I don't have a very good sense of how much different people want to provide input into our grantmaking vs. defer to the LTFF; in practice, I think most people want to defer, including the big(ish) donors, and our objective is usually to try to be worthy of that trust. 

That said, I think we haven't really broken down the functions this cleanly before; maybe with increased concreteness/precision/clarity, donors do in fact have strong opinions about which things they care about more on the margin? I'm interested in hearing more feedback, anonymously and otherwise. 

Important caveat: A year or so ago, when I floated the idea of earmarking some donations for anonymous vs. non-anonymous purposes, someone (I think it was actually you? But I can't find the comment) rightly pointed out that this is difficult to do in practice because of fungibility concerns: basically, if 50% of the money is earmarked "no private donations," there's nothing stopping us from increasing the anonymous donations in the other 50%. I think a similar issue might arise here, as long as we have both a "general LTFF" fund and specific "ecosystem subfunction" funds. 

I don't think the issue is dispositive, especially if most money eventually goes to the subfunction funds, but it does make the splits more difficult in various ways, both practically and as a matter of communication.
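To make the fungibility point concrete, here's a minimal sketch in Python with entirely hypothetical numbers (the function name and figures are mine for illustration, not anything LTFF actually uses): an earmark on part of the pot only constrains total anonymous grantmaking if the anonymous grants would otherwise exceed the unrestricted portion.

```python
# Minimal sketch (hypothetical numbers) of the fungibility concern:
# earmarking part of the budget as "no anonymous grants" doesn't cap total
# anonymous grantmaking unless anonymous grants would otherwise exceed
# the unrestricted remainder.

def max_anonymous_grants(total_budget: float, earmarked_share: float) -> float:
    """Upper bound on anonymous grants when `earmarked_share` of the budget
    is restricted to non-anonymous grants and the rest is unrestricted."""
    unrestricted = total_budget * (1 - earmarked_share)
    return unrestricted

# With a $1M budget and 50% earmarked "no anonymous grants", up to $500k
# can still be granted anonymously out of the unrestricted half -- so the
# earmark only binds if anonymous grants would otherwise exceed $500k.
print(max_anonymous_grants(1_000_000, 0.5))  # 500000.0
```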

Oh wow, just read the whole pilot! It's really cool! Definitely an angle on doing the most good that I did not expect.

Less seriously, you might enjoy my April 1, 2022 post on Impact Island.

I think people take this into account, but not enough or something? I strongly suspect that, when evaluating research, many people have a vague and insufficiently precise sense of both the numerator and the denominator, and that their intuitions aren't sufficiently linear. I know I do this myself unless it's a grant I'm actively investigating. 

This is easiest to notice in research because it's both a) a large fraction of (non-global health and development) EA output and b) very gnarly. But I don't think research is unusually gnarly among EA outputs; grants, advocacy, comms, etc. have similar issues. 

It might be too hard to envision an entire grand future, but it's possible to envision specific wins in the short and medium term. A short-term win could be large cage-free egg campaigns succeeding; a medium-term win could be a global ban on caged layer hens. Similarly, a short-term win for AI safety could be a specific major technical advance or significant legislation being passed; a medium-term win could be AGIs coexisting with humans without the world descending into chaos, while still having massive positive benefits (e.g., a cure for Alzheimer's).

One possible way to get most of the benefits of talking to a real human being while getting around the costs that salius mentions is to have real humans serve as templates for an AI chatbot to train on. 

You might imagine a single person per "archetype" to start with. That way, if Danny is an unusually open-minded and agreeable Harris supporter, and Rupert is an unusually open-minded and agreeable Trump supporter, you can scale them up to have Dannybots and Rupertbots talk to millions of conflicted people, while preserving privacy, helping assure people they aren't being judged by a real human, etc.

I'm really sorry to hear that. This sounds really stressful. 
