Eli Rose🔸

Program Officer, Global Catastrophic Risks Capacity-Building @ Open Philanthropy
2515 karma · Joined · Working (6-15 years)

Bio

GCR capacity-building grantmaking and projects at Open Phil.

Posts
29

Sorted by New

Sequences
1

Open Phil EA/LT Survey 2020

Comments
190

I like the main point you're making.

However, I think "the government's version of 80,000 Hours" is a very command-economy vision. Command economies have a terrible track record, and if there were such a thing as an "EA world government" (which I would have many questions about regardless) I would strongly think it shouldn't try to plan and direct everyone's individual careers, and should instead leverage market forces like ~all successful large economies.

+1 on wanting a more model-based version of this.

And +1 to you vibe coding it!

Upon seeing this, I had the same thought about vibe coding a more model-based version ... so, race you to whoever gets around to it?

I mostly donated to democracy preservation work and did some political giving. And a little to the shrimp.

Wow awesome thanks for letting me know!

Thanks for writing this!!

This risk seems equal to or greater than AI takeover risk to me. Historically, the EA & AIS communities have focused more on misalignment, but I'm not sure that choice has held up.

Come 2027, I'd love to see an order of magnitude more people usefully working on this risk. I think it will be rough going for the first 50 people in this area; I expect there's a bunch more clarificatory and scoping work to do; this is virgin territory. We need some pioneers.

People with plans in this area should feel free to apply for career transition funding from my team at Coefficient (fka Open Phil) if they think that would be helpful to them.

I'm quite excited about EAs making videos about EA principles and their applications, and I think this is an impactful thing for people to explore. It seems quite possible to do in a way that doesn't compromise on idea fidelity; I think sincerity counts for quite a lot. In many cases I think videos and other content can be lighthearted / fun / unserious and still transmit the ideas well.

I think the vast majority of people making decisions about public policy or who to vote for either aren't ethically impartial, or they're "spotlighting", as you put it. I expect the kind of bracketing I'd endorse upon reflection to look pretty different from such decision-making.

But suppose I want to know who of two candidates to vote for, and I'd like to incorporate impartial ethics into that decision. What do I do then?

That said, maybe you're thinking of this point I mentioned to you on a call.

Hmm, I don't recall this; another Eli perhaps? : )
