Mjreard

Advising Team @ 80,000 Hours
591 karma · Joined · Working (6-15 years) · London, UK
bit.ly/mattreardon

Bio

Doing calls, lead gen, application reviewing, and public speaking for the 80k advising team

How others can help me

Apply for a 1-1 call with 80k. Yes, now is a good time to do it – you can book in later, we can have a second call, come on now. 

Follow me on Twitter and listen to my podcast (search for "actually after hours" on YouTube or podcast apps)

Comments (43)

"The argument for Adequate temporal horizon is somewhat hazier"

I read you as suggesting we'd be explicit about the time horizons AIs would or should consider, but it seems to me we'd want them to think very flexibly about the value of what can be accomplished over different time horizons. I agree it'd be weird if we baked "over the whole lightcone" into all the goals we had, but I think we'd want smarter-than-us AIs to consider whether the coffee they could get us in five minutes and one second might be way better than the coffee they could get in five minutes, or whether they could make much more money in 13 months vs. a year.

Less constrained decision-making seems more desirable here, especially if we can just have the AIs report the projected trade-offs to us before they move to execution. We don't know our own utility functions that well, and that's something we'd want AIs to help with, right?

I'm surprised at the three disagree votes. Most of this seemed almost trivially true to me:

  • Popular political issues are non-neglected and likely to be more intractable (people have psychological commitments to one side)
  • The reputational cost you bear in terms of turning people off to high-marginal-impact issues by associating with their political enemies is greater than the low marginal benefit to these popular issues
  • Make the trade-off yourself, but be aware of the costs

Seems like good advice/a solid foundation for thinking about this.

A minor personal concern I have is foreclosing a maybe-harder-to-achieve but more valuable equilibrium: one where EAs are perceived as quite politically diverse and savvy about both sides of popular politics.

Crucially, this vision depends on EAs engaging with political issues in non-EA fora and not trying to debate which political views are or aren't "EA" (or tolerated "within EA"). The former is likely to get EA ideas taken more seriously by a wider range of people à la Scott Alexander and Ezra Klein; the latter is likely to push people who were already engaged with EA ideas further towards their personal politics.

Is that just from the tooltip? I'm not sure how anonymous posting works. It'd be interesting to learn who the author was if they didn't intend to be anonymous and if it was anyone readers would know.

I gave this post a strong downvote because it merely restates some commonly held conclusions without speaking directly to the evidence or experience that supports those conclusions. 

I think the value of posts principally derives from their saying something new and concrete and this post failed to do that. Anonymity contributed to this because at least knowing that person X with history Y held these views might have been new and useful.  

Sadly, I didn't really know how to give a reliable forecast given the endogenous effect of providing the forecast. I'll post a pessimistic (for 80k; optimistic for referrers) update to Twitter soon. Basically, I think your chances of winning the funding are ~50% if you get two successful referrals at this point. Five successful referrals probably get you >80%.

I suspect this will be easy for you in particular, Caleb. Take my money!

Good questions! Yes, they would need to speak and apply in English. There are no barred countries. 

To that last point, I'm particularly excited about fans of 80k being referrers for talented people with very little context. If you think a classmate/colleague is incredibly capable, but you don't back yourself to have a super productive conversation about impactful work with them, outsource that to us! 

I wanted to stay very far on the right side of having all our activities clearly relate to our charitable purpose. I know cash indirectly achieves this, but it leaves more room for interpretation, has some arguable optics problems, and potentially leads to unexpected reward hacking. The so-far lackluster reception to the program is solid evidence against the latter two concerns.

I think a general career grant would be better and will consider changing it to that. Thanks for raising this question and getting me there! 

Leopold's implicit response as I see it:

  1. Convincing all stakeholders of high p(doom) such that they take decisive, coordinated action is wildly improbable ("step 1: get everyone to agree with me" is the foundation of many terrible plans and almost no good ones)
  2. Still improbable, but less wildly, is the idea that we can steer institutions towards sensitivity to risk on the margin and that those institutions can position themselves to solve the technical and other challenges ahead

Maybe the key insight is that both strategies walk a knife's edge. So long as Moore's law, algorithmic improvement, and chip design hum along at some level, even a small breakdown in international willpower to enforce a pause/stop can rapidly convert to catastrophe. Spending a lot of effort to build that consensus also carries a high opportunity cost in terms of steering institutions in the world where the effort fails (and it is very likely to fail).

Leopold's view more straightforwardly makes a high risk bet on leaders learning things they don't know now and developing tools they can't foresee now by a critical moment that's fast approaching. 

I think it's accordingly unsurprising that confidence in background doom is the crux here. In Leopold's 5% world, the first plan seems like the bigger risk. In MIRI's 90% world, the second does. Unfortunately, the error bars are wide here and the arguments on both sides seem so inextricably priors-driven that I don't have much hope they'll narrow any time soon.   

Things downstream of OpenPhil are in the 90th+ percentile of charity pay, yes, but why do people work in the charity sector? Either because they believe in the specific thing (i.e. they are EAs) or because they want the warm glow of working for a charity. Non-EA charities offer more warm glow, but maybe there's a corner of "is a charity" and "pays well for a charity, even though people in my circles don't get it" that appeals to some. I claim it's not many, and that EA jobs are hard to discover for the even smaller population of people who have preferences like these and are highly competent.

Junior EA roles sometimes pay better than market alternatives in the short run, but I believe high-potential folks will disproportionately weigh lifetime earnings over short-run pay and do something that builds better career capital.
