Bio

I'm broadly interested in effective altruism and longtermism. The topics I focus on change over time; they include existential risks, climate change, wild animal welfare, alternative proteins, and longtermist global development.

A comment I've written about my EA origin story

Pronouns: she/her

Legal notice: I hereby release under the Creative Commons Attribution 4.0 International license all contributions to the EA Forum (text, images, etc.) to which I hold copyright and related rights, including contributions published before 1 December 2022.

"It is important to draw wisdom from many different places. If we take it from only one place, it becomes rigid and stale. Understanding others, the other elements, and the other nations will help you become whole." —Uncle Iroh

Sequences (8)

Philosophize This!: Consciousness
Mistakes in the moral mathematics of existential risk - Reflective altruism
EA Public Interest Tech - Career Reviews
Longtermist Theory
Democracy & EA
How we promoted EA at a large tech company
EA Survey 2018 Series
EA Survey 2019 Series

Comments (792)

Topic contributions (136)

I can speak for myself: I want AGI, if it is developed, to reflect the best values we currently have (i.e. liberal values[1]), and I believe it's likely that an AGI system developed by an organization based in the free world (the US, EU, Taiwan, etc.) would embody better values than one developed by an organization based in the People's Republic of China. There is a widely held belief in science and technology studies that all technologies have embedded values; the most obvious way values could be embedded in an AI system is through its objective function. That said, it's unclear to me how much these values would differ if the AGI were developed in a free country versus an unfree one, because a lot of the AI systems the US government uses could also serve oppressive purposes (and arguably already do).

Holden Karnofsky calls this the "competition frame" - in which what matters most is who develops AGI. He contrasts this with the "caution frame", which focuses more on whether AGI is developed in a rushed way than on whether it is misused. Both frames seem valuable to me, but Holden warns that most people will gravitate toward the competition frame by default and neglect the caution frame.

Hope this helps!

  1. ^

    Fwiw I do believe that liberal values can be improved on, especially in that they seldom include animals. But the foundation seems correct to me: centering every individual's right to life, liberty, and the pursuit of happiness.

Thank you for posting this! I've been frustrated with the EA movement's cautiousness around media outreach for a while. I think that the overwhelmingly negative press coverage in recent weeks can be attributed in part to us not doing enough media outreach prior to the FTX collapse. And it was pointed out back in July that the top Google Search result for "longtermism" was a Torres hit piece.

I understand and agree with the view that media outreach should be done by specialists - ideally, people who deeply understand EA and know how to talk to the media. But Will MacAskill and Toby Ord aren't the only people with those qualifications! There's no reason they need to be the public face of all of EA - they represent one faction out of at least three. EA is a general concept that's compatible with a range of moral and empirical worldviews - we should be showcasing that epistemic diversity, and one way to do that is by empowering an ideologically diverse group of public figures and media specialists to speak on the movement's behalf. It would be harder for people to criticize EA as a concept if they knew how broad it was.

Perhaps more EA orgs - like GiveWell, ACE, and FHI - should have their own publicity arms that operate independently of CEA and promote their views to the public, instead of expecting CEA or a handful of public figures like MacAskill to do the heavy lifting.

Answer by Eevee🔹

I've gotten more involved in EA since last summer. Some EA-related things I've done over the last year:

  • Attended the virtual EA Global (I didn't register, just watched it live on YouTube)
  • Read The Precipice
  • Participated in two EA mentorship programs
  • Joined Covid Watch, an organization developing an app to slow the spread of COVID-19. I'm especially involved in setting up a subteam trying to reduce global catastrophic biological risks.
  • Started posting on the EA Forum
  • Ran a birthday fundraiser for the Against Malaria Foundation. This year, I'm running another one for the Nuclear Threat Initiative.

Although I first heard of EA toward the end of high school (slightly over 4 years ago) and liked it, I had some negative interactions with the EA community early on that pushed me away from it. I spent the next 3 years exploring various social issues outside the EA community, but I had internalized EA's core principles, so I was constantly thinking about how much good I could be doing and which causes were the most important. I eventually became overwhelmed because "doing good" had become a big part of my identity but I cared about too many different issues. A friend recommended that I check out EA again, and despite some trepidation owing to my past experiences, I did. As I got involved in the EA community again, I had an overwhelmingly positive experience. The EAs I was interacting with were kind and open-minded, and they encouraged me to get involved, whereas before, I had encountered people who seemed more abrasive.

Now I'm worried about getting burned out. I check the EA Forum way too often for my own good, and I've been thinking obsessively about cause prioritization and longtermism. I talk about my current uncertainties in this post.

EA Forum posts published since December 2022 are licensed under CC BY, so they are free to reuse with attribution. If you want to translate any portions of the 80k page that aren't part of this forum post, you should ask them.

As the flag begins to burn, salute the flag, say the Pledge of Allegiance, and pause for a moment of silence.

I did not know this. That's wild.

OP/GV/Dustin do not like the rationalism brand because it attracts right-coded folks

Open Phil does not want to fund anything that is even slightly right of center in any policy work

I think this is specifically about the rat community attracting people with racist views like "human biodiversity" (which you alluded to re: "our speaker choices at Manifest") and not about being right-wing or right-leaning generally. As a counterexample, OP made three grants to the Niskanen Center to fund their immigration policy work. I would characterize Niskanen as centrist with a libertarian bent and not definitively right-leaning, but they were originally an offshoot of the right-libertarian movement. Niskanen was founded by people from the Cato Institute and has been funded by "donors who seek to counter libertarian-conservative hostility to measures against global warming."[1]

In 2021, as part of their criminal justice reform program, OP also supported the Nolan Center for Justice at the American Conservative Union, the organization that runs CPAC. At a quick glance, most of the groups they supported through their CJR program appear to be left-leaning, but it is untrue that OP has not funded anything right-leaning. Perhaps OP should be more willing to do the kind of cross-partisan grantmaking that their CJR program embodied.

  1. ^

    Niskanen Center on Wikipedia

Epistemic status: preliminary take, likely not considering many factors.

I'm starting to think that economic development and animal welfare go hand in hand. Since the end of the COVID pandemic, the plant-based meat industry has declined in large part because consumers' disposable incomes declined (at least in developed countries). It's good that GFI and others are trying to achieve price parity with conventional meat. However, finding ways to increase disposable incomes (or equivalently, reduce the cost of living) will likely accelerate the adoption of meat substitutes, even if price parity isn't reached.

Update: The 2024 Donation Election is using straight-up ranked-choice voting; details here.

In addition, I used to lead the EA Public Interest Tech Slack community, which was subsequently merged into the EA Software Engineers community (the Discord for which still exists btw). All of these communities eventually got merged into the #role-software-engineers channel of the EA Anywhere Slack.

I think there was too much fragmentation among slightly different EA affinity groups aimed at tech professionals - there was also EA Tech Network for folks working at tech companies, which I believe was merged into High Impact Professionals.

I'm not sure why the EA SWE community dissipated after all the consolidation that occurred. I think the lack of community leadership may have played a role. Also, it seems like EA SWEs are already well served by other communities, including AI safety (for which a lot of SWEs have the right skills) and effective giving communities like Giving What We Can (since many SWE roles are well-paid).

Lingering thoughts on the talk "How to Handle Worldview Uncertainty" by Hayley Clatterbuck (Rethink Priorities):

The talk proposed several ways that altruists with conflicting values can bargain in mutually beneficial ways, like loans, wagers, and trades, and suggested that the EA community should try to implement these more in practice and design institutions and mechanisms that incentivize them.

I think the EA Donation Election is an example of a community-wide mechanism for brokering trades between multiple anonymous donors. To illustrate this, consider a simple example of a trade, where Alice and Bob are donors with conflicting altruistic priorities. Alice's top charity is Direct Transfers Everywhere and her second favorite is Pandemics No More. Bob's top charity is Lawyers for Chickens, and his second favorite is Pandemics No More. Bob is concerned that Alice's donating to Direct Transfers Everywhere would cancel out the animal welfare benefits of his donating to Lawyers for Chickens, so he proposes that they both donate to their second choice, Pandemics No More.

The Donation Election does this in an automated, anonymous, community-wide way by using a mechanism like ranked-choice voting (RCV) to select winning charities. (The 2024 election uses RCV; the 2023 election used a points-based system similar to RCV.) Suppose that Alice and Bob are voting in the Donation Election—and for simplicity, we'll pretend that the election uses RCV. If their first-choice charities (Direct Transfers Everywhere and Lawyers for Chickens) are not that popular among the electorate, those candidates will be eliminated, and Alice and Bob's votes reallocated to Pandemics No More. This achieves the same outcome as the trade in the previous example automatically, even though Alice and Bob may not have ever personally met and agreed to that trade.
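To make the elimination-and-transfer mechanism concrete, here is a minimal single-winner instant-runoff sketch in Python. This is a simplification, not the Donation Election's actual rules (which elect multiple winners); the extra voters and the "Shrimp Welfare Now" charity are hypothetical, added only so there is something to eliminate. The point is just that once Alice's and Bob's first choices are knocked out, their ballots transfer to Pandemics No More without either of them having to coordinate.

```python
from collections import Counter

def instant_runoff(ballots):
    """Single-winner instant-runoff voting (a simplified stand-in for the
    Donation Election's rules): repeatedly eliminate the candidate with the
    fewest current first-choice votes and transfer those ballots to each
    voter's next surviving choice, until some candidate holds a majority."""
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot toward its highest-ranked surviving candidate.
        tallies = Counter(
            next(c for c in ballot if c in candidates)
            for ballot in ballots
            if any(c in candidates for c in ballot)
        )
        leader, leader_votes = tallies.most_common(1)[0]
        if leader_votes * 2 > sum(tallies.values()) or len(candidates) == 1:
            return leader
        # Eliminate the surviving candidate with the fewest current votes.
        candidates.remove(min(candidates, key=lambda c: tallies[c]))

# Hypothetical ballots, ranked from most to least preferred.
ballots = [
    ["Direct Transfers Everywhere", "Pandemics No More"],  # Alice
    ["Lawyers for Chickens", "Pandemics No More"],         # Bob
    ["Pandemics No More"],
    ["Pandemics No More"],
    ["Shrimp Welfare Now", "Pandemics No More"],
]

print(instant_runoff(ballots))  # Pandemics No More
```

In the first round no charity has a majority, so a one-vote charity is eliminated; within a couple of rounds Alice's and Bob's ballots have both rolled over to Pandemics No More, which then wins - the same outcome as their hand-negotiated trade.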
