
Mjreard

295 karma · Joined Dec 2017

Posts
1

31 · 5y ago · 15m read

Comments
16

I was confused by this:

There were lots of explicit statements that no, of course we did not mean that and of course we do not endorse any of that, no one should be doing any of that. And yes, I think everyone means it. But it’s based on, essentially, unprincipled hacks on top of the system, rather than fixing the root problem, and the smartest kids in the world are going to keep noticing this.

You seem to be saying that Sam was among the smartest and so saw that these were unprincipled hacks and ignored them, yet the rest of the post goes into great detail on how profoundly stupid Sam was for buying into such a naive, simplistic world model that very predictably failed on its own terms. I would expect the smartest people in the world to make better predictions and use better models. 

I interpret EA as common sense morality+. Abiding by common sense morality will often leave you with a lot of time and energy left over. If you want to do more good, you should use that leftover time and energy to increase welfare, and do so in a scientific and scope-sensitive way. Is that clearly not EA or an unprincipled hack?

I appreciate the care and detail here, but would guess that wild animal welfare dwarfs everything considered here and presents a much more difficult and important question.

How bad forests are per unit of land, compared to the corn/soy/wheat fields or cattle ranches that have been replacing them, seems like a key question.

I think a chatbot fails the cost-benefit analysis pretty badly at this point. Organizations can take big reputational hits for giving bad advice, and potential hallucinations create a lot of surface area there. Importantly, the upside is quite minimal too. If a user wants to, they can pull up ChatGPT and ask it to act as an 80k advisor. It might do okay (about as well as anything we tried to develop ourselves), only it’d be much clearer that we didn’t sanction its output.

People are often surprised that full-time advisors only do ~400 calls/year as opposed to something like 5 calls/day (i.e. ~1,300/yr). For one thing, my BOTEC on the average focus time for an individual advisee is 2.25 hours (between call prep, the call itself, post-call notes/research on new questions, introduction admin, and answering follow-up emails). Beyond that, we have to keep up with what’s going on in the world and the job markets we track, as well as skill up as generalist advisors. There are also more formal systems we need to contribute to, like marketing, impact assessment, and maintaining the systems that get us all the information we use to help advisees and keep that 2.25 hours at 2.25 hours.
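
To make the arithmetic concrete, here’s a rough sketch of that BOTEC in Python. The ~1,900 focused working hours per year is an assumption I’m adding for illustration, not a figure from the numbers above:

```python
# Illustrative BOTEC; the working-hours figure is an assumption, not an official number.
hours_per_advisee = 2.25     # prep + call + notes/research + intro admin + follow-up emails
calls_per_year = 400         # roughly what a full-time advisor does

advisee_hours = hours_per_advisee * calls_per_year      # 900 hours
working_hours_per_year = 1_900                          # assumed: ~48 weeks * ~40 focused hours
other_hours = working_hours_per_year - advisee_hours    # ~1,000 hours for everything else

# For contrast, 5 calls/day (~1,300/yr) would imply ~2,925 advisee hours,
# more than a full working year before any of the other work.
naive_hours = 1_300 * hours_per_advisee

print(f"Advisee-focused time: {advisee_hours:.0f} hrs/yr")
print(f"Left for keeping up, skilling up, and formal systems: {other_hours:.0f} hrs/yr")
print(f"Hours implied by 5 calls/day: {naive_hours:.0f} hrs/yr")
```

On those (assumed) numbers, advisee-focused work alone takes roughly half a working year, and the 5-calls-per-day pace wouldn’t fit at all.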

Perhaps surprisingly (and perhaps not as relevant to this audience): take cause prioritization seriously, or more generally, have clarity about your ultimate goals, i.e. what you’ll look to after the fact to know whether you’ve made good decisions.

It’s very common that someone wants to do X; I ask them why; they give an answer that doesn’t point to their ultimate priorities in life; I ask them “why [thing you pointed to]?”; and they more or less draw a blank or fumble around uncertainly. Granted, it’s a big question, but it’s your life: have a sense of what you’re trying to do at a fundamental level.

Don’t be too fixated on instant impact. Take good opportunities as they come, of course, but people are often drawn towards things that sound good/ambitious for the problems of the moment even though they might not be best positioned to tackle those things and might burn a lot of future opportunities by doing so. Details will vary by situation, of course.

Openness to working in existential risk mitigation is not a strict requirement for having a call with us, but it is our top priority and the broad area we know and think most about. EA identity is not at all a requirement outside the very broad bounds of wanting to do good and being scope sensitive with regard to that good. Accordingly, I think it’s worth the 10 minutes to apply if you 1) have read/listened to some 80k content and found it interesting, and 2) have some genuine uncertainty about your long-run career. I think 1) + 2) describe a broad enough range of people that I’m not worried about our potential user base being too small.

So, depending on how you define EA, I might be fine with our current messaging. If people think you need to be a multiple-EAG attendee who wears the heart-lightbulb shirt all the time to get a call, that would be a problem and I’d be interested to know what we’re doing to send that message. When I look at our web content and YouTube ads for example, I’m not worried about being too narrow.

Speaking just to the little slice of the world I know:

Using a legal research platform (e.g. Westlaw, LexisNexis, Casetext) could be really helpful with several of these. If you're good at thinking up search terms and analogous products/actors/circumstances (3D-printed firearms, banned substances, and squatting on patents are good examples here), there's basically always a case where someone wasn't happy that someone else was doing X, so they hired lawyers to figure out which laws were implicated by X and filed a suit or indicted someone, usually on multiple different theories/laws.

The useful part is that courts will then write opinions on those theories, which provide a clear, lay-person explanation of the law at issue, what it's for, how it works, etc. before applying it to the facts at hand. Basically, instead of you having to stare at some pretty abstract, high-level words in a statute/rule and imagine how they apply, a lot of that work has already been done for you, and in an authoritative, citable way. Because cases rely on real facts and circumstances, they also make things more concrete for further analogy-making to the thing you care about.

The downside is that these tools seem to cost at least ~$150/mo, but you may be able to get free access through a university or find other ways to reduce this. Google Scholar case law is free, but pretty bad.

1-3 seem good for generating more research questions like ASB's, but the narrower research questions are ultimately necessary to get to impact. 4-8 seem like things EA is over-invested in relative to what ASB lays out here, not that more couldn't be done there. 
