
lilly

3009 karma

Comments (148)

Okay, so a simple gloss might be something like "better futures work is GHW for longtermists"?

In other words, I take it there's an assumption that people doing standard EA GHW work are not acting in accordance with longtermist principles. But fwiw, I get the sense that plenty of people who work on GHW are sympathetic to longtermism, and perhaps think—rightly or wrongly—that doing things like facilitating the development of meat alternatives will, in expectation, do more to promote the flourishing of sentient creatures far into the future than, say, working on space governance.

I apologize because I'm a bit late to the party, haven't read all the essays in the series yet, and haven't read all the comments here. But with those caveats, I have a basic question about the project:

Why does better futures work look so different from traditional, short-termist EA work (i.e., GHW work)?  

I take it that one of the things we've been trying to do by investing in egg-sexing technology, strep A vaccines, and so on is make the future as good as possible; plenty of these projects have long time horizons, and presumably the goal of investing in them today is to ensure that—contingent on making it to 2050—chickens live better lives and people no longer die of rheumatic heart disease. But the interventions recommended in the essay on how to make the future better look quite different from the ongoing GHW work.

Is there some premise baked into better futures work that explains this discrepancy, or is this project in some way a disavowal of current GHW priorities as a mechanism for creating a better future? Thanks, and I look forward to reading the rest of the essays in the series.

I'm not saying something in this realm is what's happening here, but in terms of common reasons people identify as EA adjacent, I think there are two kinds of brand confusion one may want to avoid:

  1. Associations with a particular brand (what you describe)
  2. Associations with brands in general:

I think EAs often want to be seen as relatively objective evaluators of the world, and this is especially true about the issues they care about. The second you identify as being part of a team/movement/brand, people stop seeing you as an objective arbiter of issues associated with that team/movement/brand. In other words, they discount your view because they see you as more biased. If you tell someone you're a fan of the New York Yankees and then predict they're going to win the World Series, they'll discount your view relative to if you had just said you follow baseball but aren't on the Yankees bandwagon in particular. I suspect some people identify as politically independent for this same reason: they want to appraise issues objectively, and/or want to seem like they do. My guess is this second kind of brand confusion concern is the primary thing leading many EAs to identify as EA adjacent; whether or not that's reasonable is a separate question, but I think you could definitely make the case that it is.

79% disagree

It's a tractability issue. In order for these interventions to be worth funding, they should reduce our chance of extinction not just now, but over the long term. And I just haven't seen many examples of projects that seem likely to do that.

This is a cool idea! Will this be recorded for people who can't attend live? 

Edit: never mind, I think I'm confused; I take it this is all happening in writing/in the comments.

Answer by lilly

Without being able to comment on your specific situation, I would strongly discourage almost anyone who wants to have a highly impactful career from dropping out of college (assuming you don’t have an excellent outside option).

There is sometimes a tendency within EA and adjacent communities to critique the value of formal education, or at least to suggest that most of the value of a college education comes via its signaling power. I think this is mistaken, but I also suspect the signaling power of a college degree may increase—rather than decrease—as AI becomes more capable, and it may become harder to use things like work tests to assess differences in applicants' abilities (because the floor will be higher).

This isn’t to dismiss your concerns about the relevance of the skills you will cultivate in college to a world dominated by AI; as someone who has spent the last several years doing a PhD that I suspect will soon be able to be done by AI, I sympathize. Rather, a few quick thoughts:

  1. Reading the new 80k career guide, which touches on this to some extent (and seeking 80k advising, as I suspect they are fielding these concerns a lot).
  2. Identifying skills at the intersection of your interests, abilities, and things that seem harder for AI to replace. For instance, if you were considering medicine, it might make more sense to pursue surgery rather than radiology.
  3. Taking classes where professors are explicitly thinking about and engaging with these concerns, and thoughtfully designing syllabi accordingly.

In the past 30 years, HIV has gone from being a lethal disease to an increasingly treatable chronic illness.

Yeah, I think these are great ideas! I'd love to see the Forum prize come back; even if there were only a nominal amount of (or no) money attached, I think it would still be motivating; people like winning stuff.

Thanks for writing this! Re this:

Perhaps the most straightforward way you can help is by being more active on the Forum. I often see posts and comments that don’t receive enough upvotes (IMO), so even voting more is useful.

I've noticed that comments with more disagree than agree votes often have more karma votes than net karma, meaning people are downvoting them as well as disagreeing with them. Whether this is good or bad depends on the quality of the comment, but sometimes these comments are productive and helpful, and so the fact that people are downvoting them seems bad for a few reasons: first, it disincentivizes commenting; second, it incentivizes saying things you think people will agree with, even at the expense of saying what is true. (Of course, it's good to try to frame things more persuasively when this doesn't come at the cost of speaking honestly.) The edit here provides an example of how I think this threatens to undermine epistemic and discursive norms on the Forum.

I'm not sure what the solution is here—I've suggested this previously, but am not sure it'd be helpful or effective. And it may turn out that this issue—how does the Forum incentivize and promote helpful comments that people disagree with?—is relatively intractable, or hard to solve without making sacrifices in other domains. (Another thought that occurred to me is doing what websites like the NYT do: having "NYT recommended comments" and "reader recommended comments," but I assume the mods don't want to be in the business of weighing in on the merits of particular comments.)

In developing countries, infectious diseases like visceral gout (kidney failure leading to poor appetite and uric acid build up on organs), coccidiosis (parasitic disease causing diarrhoea and vomiting), and colibacillosis (E. coli infection) are common.

I don't think visceral gout is an infectious disease. I also don't think chickens can vomit. Two inaccuracies in this one sentence just made me wonder if there were other inaccuracies in the article as well (though I appreciate how deeply researched this is and how much work went into writing it). 
