I am a lawyer and policy researcher interested in improving the governance of artificial intelligence. I currently work as Director of Research at the Institute for Law & AI. I previously worked in various legal and policy roles at OpenAI.
I am also a Research Affiliate with the Centre for the Governance of AI and a VP at the O’Keefe Family Foundation.
My research focuses on the law, policy, and governance of advanced artificial intelligence.
You can share anonymous feedback with me here.
I think typical financial advice is that emergency funds should be kept in very low-risk assets, like cash, money market funds, or short-term bonds. This makes sense because the probability that you need to draw on emergency funds is negatively correlated with equity returns: market downturns make it more likely that you will lose your job, and some disasters can cause both market downturns and personal losses. You really don't want your emergency fund to lose value at the same time that you're most likely to need it.
One dynamic worth considering here is that a person with near-typical longtermist views about the future also likely believes that it holds a large number of salient risks, including sub-extinction AI catastrophes, pandemics, war with China, authoritarian takeover, a "white-collar bloodbath," and so on.
It can be very psychologically hard to spend all day thinking about these risks without also internalizing that they may very well affect oneself and one's family, which in turn implies that typical financial advice and financial lifecycle planning are not well-tailored to the futures that longtermists think we might face. For example, the typical suggestion to save around six months of expenses in an emergency fund makes sense for the economy of the last hundred years, but if there is widespread white-collar automation, what are the odds that job disruption will last longer than six months? If you think that your country may experience authoritarian takeover, might you want to save enough to buy residence elsewhere?
None of this excuses not making financial sacrifices. But I do think it's hard to simultaneously think "the future is really risky" and "there is a very achievable (e.g., <<$1M) amount of savings that would make me very secure."
There used to be a website to try to coordinate this; not sure what ever happened to it.
I'm not going to defend my whole view here, but I want to give a thought experiment as to why I don't think "shadow donations" (the delta between what you could earn if you were income-maximizing and what you're actually earning in your direct work job) are a great measure for the purposes of practical philosophy (though I agree they're both a relevant consideration and a genuine sacrifice).
Imagine two twins, Anna and Belinda. Both have just graduated with identical grades, skills, degrees, etc. Anna goes directly from college to work on AI safety at Safety Org, making $75,000 per year. Belinda goes to work for OpenMind doing safety-neutral work, making $1M per year in total compensation. Belinda learns more marketable skills; she could make at least $1M per year indefinitely. Anna, on the other hand, has studiously plugged away at AI safety work, but since her work is niche, she can't easily transfer her skills to something that pays better.
Then imagine that, after three years, Belinda joins Anna at Safety Org. Belinda was not fired; she could have stayed at OpenMind and made $1M per year indefinitely. At this point, Anna has gotten a few raises, is making $100,000, and donates 3% of her salary. Belinda gets the same job on the same pay scale and does equally good work, but donates nothing. Belinda reasons that, because she could still be making $1M per year, she has "really" donated $900,000 of labor to Safety Org, and so has sacrificed roughly 90% of her potential income.
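Spelling out Belinda's implicit arithmetic (the figures are from the example above):

$$\text{shadow donation} = \$1{,}000{,}000 - \$100{,}000 = \$900{,}000$$

That $900,000 is 90% of her $1M potential compensation, though it is 900% of her actual $100,000 salary, which is part of why the "percent of income sacrificed" framing gets slippery here.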
Anna, on the other hand, thinks it is an immense privilege to have a comfortable job where she can use her skills to do good while still earning more than 99% of all people in the world. She knows that, if she had made different choices in life, she probably could have had higher earning potential. But that has never been her goal. Anna knows that the average person in her income bracket donates around 3% regardless of their outside job options, so it seems reasonable for her to at least match that.
Is Belinda more altruistic than Anna? Which attitude should EAs aspire to?
To give some more color on my general view:
I don't really think there's a first-order fact of the matter as to which of these two (or anyone) is "more altruistic," or what one's "obligations" are. At bottom, there are just worlds with more or less value in them.
My view mostly comes from practical considerations about how the EA community and project can be most impactful, credible, and healthy. I think the best attitude is closer to Anna's than Belinda's.
Donating also has other virtues over salary reductions, since it is concrete, measurable, and helps create a more diversified funding ecosystem.
To be clear, I think it's great that people like Belinda exist, and they should be welcomed and celebrated in the community. But I don't think the particular mindset of "well, I have really sacrificed a lot because if I were purely selfish I could have made a lot more money" is one that we ought to recognize as particularly good or healthy.
It's not clear to me whether you're talking about (a) people who take a voluntary salary sacrifice while working at an EA org, or (b) people who could have earned much more in industry but moved to a nonprofit and now earn much less than their hypothetical maximum earning potential.
In case (a), yes, their salary sacrifice should count towards their real donations.
But a practical moral philosophy wherein donation expectations are based on your actual material resources (and constraints), not your theoretical maximum earning potential, seems more justifiable to me. So I don't think people who do (b) (which includes myself) should get to say that doing (b) liberates them from the same obligation to donate that would apply to a person in the same material circumstances with worse outside options.
Definitely agreed on that point!