
andiehansen

68 karma · Joined

Bio

I'm pursuing an economics degree at the University of Alberta, Canada. I've started a small EA club and facilitated EA Virtual Programs several times.

Posts
1


Comments
5

I liked this, for the most part! It seems useful to push for the ambition necessary to make the most of this time. But one thing seemed confusing: the mention of EAs defending America if AI and animal welfare weren't pressing concerns.

Or they’d roll up their sleeves to defend America.

If I'm trying to be charitable, maybe this is referring to safeguarding democracy in America? But when I read "roll up their sleeves to defend America," a military career also comes to mind.

My immediate impression, though, was to read it as channeling American patriotism. I'm guessing the charitable interpretation, safeguarding democracy, is closer to what you were going for?

Without trying to be too political: as a Canadian, the patriotic reading deflated a lot of the piece for me. If anything, EAs could roll up their sleeves to defend the rest of the world from America right now (if AI and animal welfare weren't priorities).

I really appreciated the vast majority of it, but I wanted to highlight that this sentence was ambiguous.

I am deeply grateful to the author for her courage and moral clarity in the face of this unspeakable horror.

I'm commenting to share how I feel in response to this as someone who ran an EA club in 2019, led some EA Virtual Programs, attended an EAGx in 2022, has donated ~$7,000 to EA charities, and is now close to finishing an undergraduate degree.

EA has long had a problem with patriarchy in general and sexual harassment in particular. Maybe this has been getting better (at least with the brand imagery), but hiring for top positions is still far more likely to favor men over equally qualified non-men. This has been more pronounced in certain corners of EA, like the more rationalist-oriented and AI-centric circles. The fact that these problems exist in other subcultures and organizations is no excuse. EAs need to be upstanding moral citizens in all domains of their lives, not just when working on top cause areas.

Patriarchal norms create power dynamics that make harm more likely, and remedying this through deep structural changes - not just policies - is essential for EA organizations to achieve their missions.

I have benefitted enormously from EA and I intend to engage a lot more with this community for the rest of my life. If anything, I'm coming back from an extended hiatus during which I focused on my undergraduate degree. I care deeply about making the world a better place and reducing suffering, and EA's principles bring out the best in me. I love that many EAs care so much about animals, that some are vegan, and that so many are willing to buck the status quo of the crazy consumerist culture we live in to either donate a lot of their income or work in a less prestigious job for an obscure cause. That combination of caring deeply with the heart while thinking carefully about how to do the most good is, to me, unparalleled.

At the same time, I have felt uncomfortable seeing the sexism and male privilege in EA, which is one of the reasons I distanced myself for several years, especially after the FTX collapse. (By the way, that was another instance where top leadership saw serious red flags and warning signs but didn't take sufficient steps to investigate and correct course.)

All of this is to say that I sincerely hope that Frances' courage here will prompt serious structural changes that make it impossible for anything like this to happen again.

As an EA group facilitator, I've taken part in many nuanced discussions about the tradeoffs between prioritizing long-term and short-term causes.

Even though I consider myself a longtermist, I now have a better understanding of, and respect for, the concerns that near-term-focused EAs raise. Allow me to share a few of them.

  1. The world has finite resources, so resources directed to long-term causes cannot also be put towards short-term causes. If the EA community were 100% focused on the very long term, for example, solvable near-term problems affecting millions or billions of people would likely get less attention and resources, even if they were easy to solve. This matters more as EA grows and has an increasingly outsized impact on where resources are directed. As this post says, marginal reasoning becomes less valid as EA gets larger.
  2. Some long-term EA cause areas may increase the risk of negative outcomes in the near term. For example, people working on AI safety often collaborate with, and even contribute to, capabilities research. AI is already a very disruptive technology and will likely become even more so as its capabilities grow.
  3. People who think "x-risk is all that matters" may be discounting other kinds of risks, such as s-risks (suffering risks) due to dystopian futures. If we prioritize x-risk while allowing global catastrophic risks (GCRs) to increase (that is, risks which don't wipe out humanity but greatly set back civilization), that increases s-risks because it's very hard to have well-functioning institutions and governments in a world crippled by war, famine, and other problems.

These and other concerns have updated me towards preferring a "balanced portfolio" of resources spread across EA causes from different worldviews, even if my inside view prefers certain causes over others.

See this similar question here for other ways to coordinate. As for me, I'm a Canadian in Alberta interested in helping out, whether financially or with figuring out the process. Please reach out to me and let me know what you have in mind.