I'm currently facing a career choice between a role working on AI safety directly and a role at 80,000 Hours. I don't want to go into the details too much publicly, but one really key component is how to think about the basic leverage argument in favour of 80k. This is the claim that goes: well, in fact I heard about the AIS job from 80k. If I ensure even two (additional) people hear about AIS jobs by working at 80k, isn't it possible that working at 80k could be even better for AIS than taking the direct job?
In that form, the argument is naive and implausible. But I don't think I know what the "sophisticated" argument that replaces it is. Here are some thoughts:
* Working in AIS also promotes the growth of AIS. It would be a mistake to consider a job's second-order effects only when its lack of first-order effects forces you to.
* OK, but focusing on org growth full-time surely seems better for org growth than having it be a side effect of the main thing you're doing.
* One way to think about this is to compare two strategies for improving talent at a target org: "try to find people and move them into roles at the org, as part of cultivating a whole talent pipeline into the org and related orgs", versus "put all of your full-time effort into having a single person, i.e. you, do a job at the org". It seems pretty easy to imagine that the former would be the better strategy?
* I think this is the same intuition that makes pyramid schemes seem appealing (something like: "surely I can recruit at least 2 people into the scheme, and surely they can recruit more people, and surely the norm is actually that you recruit a tonne of people"). It's really only by looking at the mathematics of the population as a whole that you can see it can't possibly work, and that most people in the scheme will necessarily recruit exactly zero people ever (see the sketch below this list).
* Maybe a pyramid scheme is the extreme of "what if literally everyone in EA work
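Here's a minimal sketch of that population-level arithmetic, assuming a complete recruitment tree with a fixed branching factor (the numbers are purely illustrative): in any finite scheme the total number of recruitments is the number of members minus one, so the average member recruits fewer than one person, and with a branching factor of two or more the members who recruit nobody are always a majority.

```python
# Minimal illustrative sketch: count how many members of a full recruitment
# "pyramid" recruited nobody at all.

def pyramid_stats(branching: int, levels: int) -> tuple[int, float]:
    """Return (total members, fraction who recruited zero people) for a
    complete recruitment tree with the given branching factor and depth."""
    members = sum(branching ** i for i in range(levels))
    non_recruiters = branching ** (levels - 1)  # the bottom level recruits no one
    return members, non_recruiters / members

for levels in (3, 5, 10):
    total, frac_zero = pyramid_stats(branching=2, levels=levels)
    print(f"{levels} levels: {total} members, {frac_zero:.0%} recruited zero people")
```

However deep the tree goes, more than half the members sit at the bottom and recruit exactly zero people, which is why "surely everyone can recruit at least two people" can't hold across the whole population.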
How tractable is improving (moral) philosophy education in high schools?
tldr: Do high schools still neglect ethics / moral philosophy in their curricula? Mine did (in 2012). Are there tractable ways to improve the situation, through national/state education policy or by reaching out to schools and teachers? Has this been researched / tried before?
The public high school I went to in Rottweil (rural Southern Germany) was overall pretty good, probably top 2-10% globally, except for one thing: moral philosophy. 90 min/week of "Christian Religion" was the default for everyone, and we spent most of that time interpreting stories from the Bible, most of which felt pretty irrelevant to the present to me. This was in 2012 in Germany, a country with more atheists than Christians as of 2023, and even in 2012 my best guess is that <20% of my classmates were practicing a religion.
Only in grade 10 did we get the option to switch to secular Ethics classes instead, which fewer than 10% of students did (Religion was considered less work).
Ethics class quickly became one of my favorite classes. For the first time in my life I had a regular group of people equally interested in discussing vegetarianism and other such questions (almost everyone at my school ate meat, and vegetarians were sometimes made fun of). Still, the curriculum wasn't great: we spent too much time on ancient Greek philosophers and very little time discussing moral philosophy topics relevant to the present.
What were your experiences in high school like? I'm especially curious about more recent ones.
Are there tractable ways to improve the situation? Has anyone researched this?
1) Could we get ethics classes in the mandatory/default curriculum in more schools? Which countries or states seem best for that? In Germany, education is state-regulated - which German state might be most open to this? Hamburg? Berlin?
2) Is there a shortage in ethics teachers (compared to religion teachers)? Can we
I think that EA outreach can be net positive in a lot of circumstances, but there is one version of it that always makes me cringe: the targeting of really young people (for this quick take, anyone under 20). This basically includes any high school targeting and most early-stage college targeting. I think I do not like it for two reasons: 1) it feels a bit like targeting the young/naive in a way I wish we didn't have to, given the quality of our ideas, and 2) these folks are typically far from making a real impact, and there is lots of time for them to lose interest or get lost along the way.
Interestingly, this stands in contrast to my personal experience—I found EA when I was in my early 20s and would have benefited significantly from hearing about it in my teenage years.
I'm the co-founder and one of the main organizers of EA Purdue. Last fall, we got four signups for our intro seminar; this fall, we got around fifty. Here's what's changed over the last year:
* We got officially registered with our university. Last year, we were an unregistered student organization, and as a result lacked access to opportunities like the club fair and were not listed on the official Purdue extracurriculars website. After going through the registration process, we were able to take advantage of these opportunities.
* We tabled at club fairs. Last year, we did not attend club fairs, since we weren't yet eligible for them. This year, we were eligible and attended, and we added around 100 people to our mailing list and GroupMe. This is probably the most directly impactful change we made.
* We had a seminar sign-up QR code at the club fairs. This item actually changed between the club fairs, since we were a bit slow to get the seminar sign-up form created. A majority of our sign-ups came from the one club fair where we had the QR code, despite the other club fair being ~10-50x larger.
* We held our callout meeting earlier. Last year, I delayed the first intro talk meeting until the middle of the third week of school, long after most clubs finished their callouts. This led to around 10 people showing up, which was still more than I expected, but not as many as I had hoped. This year, we held the callout early in the second week of school, and ended up getting around 30-35 attendees. We also gave those attendees time to fill out the seminar sign-up form at the callout, and this accounted for most of the rest of our sign-ups.
* We brought food to the callout. People are more likely to attend meetings at universities if there is food, especially if they're busy and can skip a long dining court line by listening to your intro talk. I highly recommend bringing food to your regular meetings too - attendance at our general meetings doubled last year after I s
David Rubenstein recently interviewed Philippe Laffont, the founder of Coatue (probably worth $5-10b). When asked about his philanthropic activities, Laffont basically said he's been too busy to think about it, but wanted to do something someday. I admit I was shocked. Laffont is a savant technology investor and entrepreneur (including in AI companies), and it sounded like he literally hadn't put much thought into what to do with his fortune.
Are there concerted efforts in the EA community to get these people on board? Like, is there a Google Doc with a six-degrees-of-separation plan to get dinner with Laffont? The guy went to MIT and invests in AI companies. It just wouldn't be hard to get in touch. It seems like increasing the probability he aims some of his fortune at effective charities would justify a significant effort here. And I imagine there are dozens or hundreds of people like this. Am I missing some obvious reason this isn't worth pursuing or likely to fail? Have people tried? I'm a bit of an outsider here so I'd love to hear people's thoughts on what I'm sure seems like a pretty naive take!
https://youtu.be/_nuSOMooReY?si=6582NoLPtSYRwdMe
I quit. I'm going to stop calling myself an EA, and I'm going to stop organizing EA Ghent, which, since I'm the only organizer, means that in practice it will stop existing.
It's not just because of Manifest; that was merely the straw that broke the camel's back. In hindsight, I should have stopped after the Bostrom or FTX scandal. And it's not just because they're scandals; it's because they highlight a much broader issue within the EA community regarding whom it chooses to support with money and attention, and whom it excludes.
I'm not going to go to any EA conferences, at least not for a while, and I'm not going to give any money to the EA fund. I will continue working for my AI safety, animal rights, and effective giving orgs, but will no longer be doing so under an EA label. Consider this a data point on what choices repel which kinds of people, and whether that's worth it.
EDIT: This is not a solemn vow forswearing EA forever. If things change I would be more than happy to join again.
EDIT 2: For those wondering what this quick-take is reacting to, here's a good summary by David Thorstad.
The very well written Notes on Effective Altruism pulls together some thoughts I've had over the years, and makes me think we should potentially drop the "how to do good in the best way possible" framing when introducing EA for the "be more effective when trying to help others" framing. This honestly seems straightforwardly good to me from a number of different angles, and I think we should seriously consider changing our overall branding to use this as a tagline instead.
But am I missing something here? Is there a reason the latter is worse than I think? Or some hidden benefits to the former that I'm not weighing?
If someone isn't already doing so, someone should estimate what % of (self-identified?) EAs donate according to our own principles. This would be useful (1) as a heuristic for the extent to which the movement/community/whatever is living up to its own standards, and (2), assuming the answer is 'decently', as evidence for PR/publicity/responding to marginal-faith tweets during bouts of criticism.
Looking at the Rethink survey from 2020, they have some info about which causes EAs are giving to, but they seem to note that not many people responded to that question, and it's not quite the same question anyway. To do: check whether GWWC publishes anything like this.
Edit to add: maybe an imperfect but simple and quick instrument for this could be something like "For what fraction of your giving did you attempt a cost-effectiveness assessment (CEA), read a CEA, or rely on someone else who said they did a CEA?". I don't think it actually has to be about whether the respondent got the "right" result per se; the point is the principles. Deferring to GiveWell seems like living up to the principles because of how they make their recommendations, etc.