I'm interested in effective altruism and longtermism broadly. The topics I'm interested in change over time; they include existential risks, climate change, wild animal welfare, alternative proteins, and longtermist global development.
A comment I've written about my EA origin story
Pronouns: she/her
Legal notice: I hereby release under the Creative Commons Attribution 4.0 International license all contributions to the EA Forum (text, images, etc.) to which I hold copyright and related rights, including contributions published before 1 December 2022.
"It is important to draw wisdom from many different places. If we take it from only one place, it becomes rigid and stale. Understanding others, the other elements, and the other nations will help you become whole." āUncle Iroh
Thank you for posting this! I've been frustrated with the EA movement's cautiousness around media outreach for a while. I think that the overwhelmingly negative press coverage in recent weeks can be attributed in part to us not doing enough media outreach prior to the FTX collapse. And it was pointed out back in July that the top Google Search result for "longtermism" was a Torres hit piece.
I understand and agree with the view that media outreach should be done by specialists - ideally, people who deeply understand EA and know how to talk to the media. But Will MacAskill and Toby Ord aren't the only people with those qualifications! There's no reason they need to be the public face of all of EA - they represent one faction out of at least three. EA is a general concept that's compatible with a range of moral and empirical worldviews - we should be showcasing that epistemic diversity, and one way to do that is by empowering an ideologically diverse group of public figures and media specialists to speak on the movement's behalf. It would be harder for people to criticize EA as a concept if they knew how broad it was.
Perhaps more EA orgs - like GiveWell, ACE, and FHI - should have their own publicity arms that operate independently of CEA and promote their views to the public, instead of expecting CEA or a handful of public figures like MacAskill to do the heavy lifting.
I've gotten more involved in EA since last summer. Some EA-related things I've done over the last year:
Although I first heard of EA toward the end of high school (slightly over 4 years ago) and liked it, I had some negative interactions with the EA community early on that pushed me away from it. I spent the next 3 years exploring various social issues outside the EA community, but I had internalized EA's core principles, so I was constantly thinking about how much good I could be doing and which causes were the most important. I eventually became overwhelmed because "doing good" had become a big part of my identity but I cared about too many different issues. A friend recommended that I check out EA again, and despite some trepidation owing to my past experiences, I did. As I got involved in the EA community again, I had an overwhelmingly positive experience. The EAs I was interacting with were kind and open-minded, and they encouraged me to get involved, whereas before, I had encountered people who seemed more abrasive.
Now I'm worried about getting burned out. I check the EA Forum way too often for my own good, and I've been thinking obsessively about cause prioritization and longtermism. I talk about my current uncertainties in this post.
You could create a charity fundraiser on GoFundMe or Every.org and match the donations yourself.
If you want to automate the donation matching, I'd suggest batching the donations and only charging the fundraiser creator for the total amount every 14 days or when the fundraiser ends, rather than creating a matching transaction for every donation. That would cut down on payment processing fees.
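To sketch what I mean (just an illustration - the MatchBatcher class and the charge_fn callback are made-up names, and a real implementation would need persistence and error handling):

```python
from datetime import datetime, timedelta

BATCH_WINDOW = timedelta(days=14)

class MatchBatcher:
    """Accumulates matched amounts and charges the matcher once per
    window (or when the fundraiser ends) instead of once per donation,
    so you only pay one set of processing fees per batch."""

    def __init__(self, charge_fn):
        self.charge_fn = charge_fn      # wrapper around your payment processor
        self.pending_cents = 0          # matched but not yet charged
        self.window_start = datetime.utcnow()

    def record_donation(self, amount_cents: int):
        self.pending_cents += amount_cents
        if datetime.utcnow() - self.window_start >= BATCH_WINDOW:
            self.flush()

    def flush(self):
        """Charge everything accumulated so far in a single transaction.
        Also call this when the fundraiser ends."""
        if self.pending_cents > 0:
            self.charge_fn(self.pending_cents)
            self.pending_cents = 0
        self.window_start = datetime.utcnow()
```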
You could also experiment with authorization holds on the user's credit card - they basically hold a certain amount of money hostage on the credit card to make sure that amount (or some lesser amount) can be charged to the card later. For example, you could authorize $200, and if the fundraiser only raises $150 by the time the hold expires, charge only $150 to the card.
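If you're using a processor like Stripe, the hold-then-partial-capture flow might look roughly like this (an untested sketch; the customer/payment-method setup is elided, and the amounts follow the $200/$150 example above):

```python
import stripe

stripe.api_key = "sk_test_..."   # your secret key

# Authorize (but don't capture) the full pledged match of $200.
intent = stripe.PaymentIntent.create(
    amount=20_000,               # in cents
    currency="usd",
    customer="cus_...",          # the matcher's saved customer
    payment_method="pm_...",     # their saved card
    capture_method="manual",     # place a hold now, capture later
    confirm=True,
)

# When the fundraiser ends having raised only $150, capture just that
# amount; the remaining $50 of the hold is released.
stripe.PaymentIntent.capture(intent.id, amount_to_capture=15_000)
```

One caveat: card authorizations typically expire after about a week, so for a longer fundraiser you'd probably need to re-authorize periodically rather than relying on a single hold.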
There are some risks to the user that you might want to design around:
Not necessarily. It's more that funding gaps of c3's can be more easily filled by big foundations (which are also c3's), whereas donations from c3's to c4's are restricted. That makes it more valuable, ceteris paribus, for individuals to fill c4 funding gaps.
Also, the "50% bonus" only applies if you itemize deductions; many people use the standard deduction, including some people who earn more than six figures.
Thank you for this candid and informative post. I agree that we need to allocate more resources to advocacy (but hopefully not at the expense of research!).
I also wanted to signal boost your advice to 501(c)(3)'s in your previous post on orphaned policies, in case it's relevant to anyone reading this thread:
Admittedly, most research work is funded by 501(c)(3) donations that cannot pay for more than a small amount of direct political advocacy. However, there are ways to word the conclusion of a research paper that provide clear guidance without crossing the line into inappropriate political work. True, you might not endorse a bill that's currently being debated by Congress, and you certainly shouldn't be endorsing political candidates, but you can still truthfully say that one type of policy has better consequences than another. The law clearly states that "nonpartisan analysis, study, or research may advocate a particular position or viewpoint so long as there is a sufficiently full and fair exposition of the pertinent facts to enable the public or an individual to form an independent opinion or conclusion."
It's therefore not "political" to point out that banning A100 chip exports while permitting A800 chip exports is ineffective; that's a technical conclusion that a neutral researcher can reasonably draw. It's not "advocacy" to express an opinion that the next generation of LLMs will most likely uplift the capabilities of bioweapon designers to a degree that poses risks that would be considered unacceptable in any other industry. You can have a firm opinion on an issue without being a politician; nothing about having 501(c)(3) tax status requires you to drown every opinion you offer in a sea of "maybe" and "could" and "warrants further research." The point of tax-exempt research is to come up with scientifically informed opinions that politicians can draw on to inform their work; if your organization is too timid to firmly express those opinions, then it's not upholding its part of the social contract.
The fact that c3's can and do engage in bold advocacy increases the total amount of advocacy that private foundations and tax-sensitive donors can fund, even though their ability to fund c4's is limited. So there's no reason foundations couldn't increase their effective allocation to advocacy if they wanted to.
If you would like to receive email updates about any future research endeavours that continue the mission of the Global Priorities Institute, you can also sign up here.
I signed up for the mailing list! Is there any other way to support research similar to GPI's work - for instance, would it make sense to donate to Forethought Foundation?
I can speak for myself: I want AGI, if it is developed, to reflect the best values we currently have (i.e. liberal values[1]), and I believe it's likely that an AGI system developed by an organization based in the free world (the US, EU, Taiwan, etc.) would embody better values than one developed by an organization based in the People's Republic of China. There is a widely held belief in science and technology studies that all technologies have embedded values; the most obvious way values could be embedded in an AI system is through its objective function. That said, it's unclear to me how much these values would actually differ between an AGI developed in a free country and one developed in an unfree one, because many of the AI systems the US government uses could also be put to oppressive purposes (and arguably already are).
Holden Karnofsky calls this the "competition frame" - in which it matters most who develops AGI. He contrasts this with the "caution frame", which focuses more on whether AGI is developed in a rushed way than whether it is misused. Both frames seem valuable to me, but Holden warns that most people will gravitate toward the competition frame by default and neglect the caution one.
Hope this helps!
Fwiw I do believe that liberal values can be improved on, especially in that they seldom include animals. But the foundation seems correct to me: centering every individual's right to life, liberty, and the pursuit of happiness.