Ian Turner

Comments
It’s odd to me that people say they “heard about EA” at EA Global. How’d they hear about EA Global, then? 🤔

Thanks for sharing this. It was interesting to read.

I wonder if you wouldn’t mind sharing the rubric for EA involvement. What constitutes a highly engaged EA?

If your idea is that in-country employees/contractors of organizations like GiveDirectly, Fistula Foundation, AMF, MC, Living Goods, etc., should be invited to EA Global, then I agree, and I think these folks often have useful information to add to the conversation. Not everyone in these orgs would be a good fit, but many are, and those voices are worth having. Some have an uncritical mindset and are basically just doing what they're told, while others are a bit too sharp-elbowed, chasing whatever will get funders' attention without caring how much good it actually does.

On the other hand, if your idea is to (for example) invite some folks from villages where GiveDirectly is operating, I pretty strongly feel that this would be a waste of resources. We can get a much better perspective from this group by surveying them (and indeed GiveWell and GiveDirectly have sponsored such surveys). If you were to choose randomly, most of those chosen wouldn't be in a good position to contribute to discussions; and if you were to choose village elites, you end up with a systematic bias toward elite interests, which has been a serious recurring problem in attempts to make bottom-up charitable interventions work.

Another one you missed is that the world is getting better over time, so we should expect donation opportunities in the future to be worse: as the cheapest opportunities to help get used up, a dollar donated later likely buys less impact.

Random thought: does the idea of an explosive takeoff of intelligence assume that the alignment problem is solvable?

If the alignment problem isn’t solvable, then an AGI, in creating an ASI, would face the same dilemma as humans: the ASI wouldn’t necessarily share its goals, might disempower it, instrumental convergence, all the usual stuff.

I suppose one counterargument is that the AGI rationally shouldn’t create ASI for these reasons but, similar to humans, might do so anyway due to competitive/racing dynamics. Whichever AGI doesn’t create ASI will be left behind, etc.

I think the amount of news that is helpful and healthy to consume depends a lot on what it is that you’re trying to do. So maybe a good place to start is thinking about how sensitive your work is to current developments, and go from there. Channel Duncan Sabien and ask, “What am I doing, and why am I doing it?”

And if you are going to spend a lot of time with the news, read Zvi’s piece on bounded distrust and maybe also the linked piece from Scott Alexander.

Personally, I view participation in the charitable projects in my community (including donating to church or to a colleague's pledge drive) as part of my consumption basket and totally unrelated to altruistic work. Relationships are incredibly important to one's life satisfaction and participating in the community is a part of that.

I did not click Disagree; but I will say that I'm not sure I agree that "The people we are aiming to help should be well within the conversation". I don't mean to say that we should ignore their perspectives, values, or opinions, but I don't think having them attend EA Global is a useful way to achieve that. I've had a lot of interesting conversations with GiveDirectly and AMF beneficiaries, but I also think that the median beneficiary would not have much to contribute at EA Global, and if you choose exceptional beneficiaries to represent the class of beneficiaries as a whole, that leads to a different set of problems.

It's not even clear to me that EA trying to change the election would be positive EV. Look at what's happened with AI.
