I would also guess that the overwhelming majority (>95%) of highly impactful jobs are not at explicitly EA-aligned organizations, just because only a minuscule fraction of all jobs are at EA orgs. It can be harder to identify highly impactful roles outside of these specific orgs, but it's worth trying to do this, especially if you've faced a lot of rejection from EA orgs.
Okay, so a simple gloss might be something like "better futures work is GHW for longtermists"?
In other words, I take it there's an assumption that people doing standard EA GHW work are not acting in accordance with longtermist principles. But fwiw, I get the sense that plenty of people who work on GHW are sympathetic to longtermism, and perhaps think—rightly or wrongly—that doing things like facilitating the development of meat alternatives will, in expectation, do more to promote the flourishing of sentient creatures far into the future than, say, working on space governance.
I apologize because I'm a bit late to the party, haven't read all the essays in the series yet, and haven't read all the comments here. But with those caveats, I have a basic question about the project:
Why does better futures work look so different from traditional, short-termist EA work (i.e., GHW work)?
I take it that one of the things we've been trying to do by investing in egg-sexing technology, strep A vaccines, and so on is make the future as good as possible; plenty of these projects have long time horizons, and presumably the goal of investing in them today is to ensure that—contingent on making it to 2050—chickens live better lives and people no longer die of rheumatic heart disease. But the interventions recommended in the essay on how to make the future better look quite different from the ongoing GHW work.
Is there some premise baked into better futures work that explains this discrepancy, or is this project in some way a disavowal of current GHW priorities as a mechanism for creating a better future? Thanks, and I look forward to reading the rest of the essays in the series.
I'm not saying something in this realm is what's happening here, but in terms of common causes of people identifying as EA adjacent, I think there are two potential kinds of brand confusion one may want to avoid:
I think EAs often want to be seen as relatively objective evaluators of the world, especially on the issues they care about. The second you identify as part of a team/movement/brand, people stop seeing you as an objective arbiter of issues associated with that team/movement/brand. In other words, they discount your view because they see you as more biased. If you tell someone you're a fan of the New York Yankees and then predict they're going to win the World Series, they'll discount your prediction more than if you'd just said you follow baseball but aren't on the Yankees bandwagon in particular. I suspect some people identify as politically independent for the same reason: they want to appraise issues objectively, and/or want to seem like they do. My guess is that this second kind of brand confusion concern is the primary thing leading many EAs to identify as EA adjacent; whether or not that's reasonable is a separate question, but I think you could definitely make the case that it is.
It's a tractability issue. In order for these interventions to be worth funding, they would need to reduce our chance of extinction not just now, but over the long term. And I just haven't seen many examples of projects that seem likely to do that.
Without being able to comment on your specific situation, I would strongly discourage almost anyone who wants to have a highly impactful career from dropping out of college (assuming you don’t have an excellent outside option).
There is sometimes a tendency within EA and adjacent communities to critique the value of formal education, or at least to suggest that most of the value of a college education comes via its signaling power. I think this is mistaken, but I also suspect the signaling power of a college degree may increase, rather than decrease, as AI becomes more capable and it becomes harder to use things like work tests to assess differences in applicants’ abilities (because the floor will be higher).
This isn’t to dismiss your concerns about how relevant the skills you will cultivate in college will be in a world dominated by AI; as someone who has spent the last several years doing a PhD that I suspect AI will soon be able to do, I sympathize. Rather, a few quick thoughts:
One q: why is viewer minutes a metric we should care about? QAVMs seem importantly different from QALYs/DALYs, in that the latter matter intrinsically (ie, they correspond to health and the suffering associated with disease). But viewer minutes only seem to matter insofar as they’re associated with some other, downstream outcome (Advocacy? Donating to AI safety causes? Pivoting to work on this?). By analogy, QAVMs seem more akin to “number of bednets distributed” than to something like “cases of malaria averted” or “QALYs gained.”
The fact that you adjust for quality of audience seems to suggest a theory of change (ToC) in the vein of advocacy or pivoting, but I think this is actually pretty important to specify, because I would guess the ToC for these different types of media (eg, TikToks vs long-form content) is quite different, and a unit of QAVM from one might accordingly translate into impact very differently than a unit from the other.
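For concreteness, here is roughly how I’m imagining QAVM gets computed; the notation ($q_a$ for an audience-quality weight and $m_a$ for minutes viewed by audience segment $a$) is my own guess at the construction, not something taken from the post:

$$\text{QAVM} \approx \sum_a q_a \, m_a$$

whereas the thing we ultimately care about looks more like

$$\text{Impact} \approx \sum_a q_a \, m_a \, p_a,$$

where $p_a$ is the (largely unknown) rate at which a viewer minute in segment $a$ converts into donations, career pivots, advocacy, and so on. My worry is that $p_a$ is doing most of the work, and it’s exactly the part QAVM leaves out.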