I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I have occasionally read the forum and was looking for ideas for year-end giving when the whole FTX business exploded . . .
As someone who isn't deep in EA culture (at least at the time of writing), I may be able to offer a perspective on how the broader group of people with sympathies toward EA ideas might react to certain things. I'll probably make some errors that would be obvious to other people, but sometimes a fresh set of eyes can help bring a different perspective.
I agree that there are no plausible circumstances in which anyone's relatives will benefit in a way not shared with a larger class of people. However, I do think groups of people differ in ways that are relevant to how much their interests favor fast AI development versus more risk-averse AI development. Giving undue weight to the interests of a group because one's friends or family belong to it would still raise the concern I expressed above.
One group that -- if considering their own interests only -- might rationally be expected to accept somewhat more risk than the population as a whole is those who are ~50-55+. As Jaime wrote:
For some of my older relatives, it might make a big difference to their health and wellbeing whether AI-fueled explosive growth happens in 10 vs 20 years.
A similar outcome could also arise if (e.g.) the prior generation of my family had passed on and I had young children, and as a result of prioritizing their interests I didn't give enough weight to older individuals' desire to have powerful AI soon enough to improve and/or extend their lives.
Some of the reaction here may be based on Jaime acting in a professional, rather than a personal, capacity when working in AI.
There are a number of jobs and roles in which your actions in a professional capacity are expected to be impartial in the sense of not favoring your loved ones over others. For instance, a politician should not give any more weight to the effects of proposed legislation on their own mother than to its effects on any other constituent. Government service in general carries this expectation. One could argue that (like serving as a politician) working in AI involves imposing significant risks and harms on non-consenting others -- and that this should trigger a duty of impartiality.
Government workers and politicians are free to favor their own mother in their personal life, of course.
or whether we're just dealing with people getting close to the boundaries of unilateral action in a way that is still defensible because they've never claimed to be more aligned than they were, never accepted funding that came with specific explicit assumptions, etc.)
Caveats up front: I note the complexity of figuring out what Epoch's own views are, as opposed to Jaime's [corrected spelling] views or the views of the departing employees. I also do not know what representations were made. Therefore, I am not asserting that Epoch did anything wrong or needs to do anything, merely that the concern described below should be evaluated.
People and organizations change their opinions all the time. One thing I'm unclear on is whether there was a change in position here that created an obligation to offer to return and/or redistribute unused donor funds.
I note that, in February 2023, Epoch was fundraising through September 2025. I don't know its cash flows, but I cite that to show it is plausible they were operating on safety-focused money obtained before a material change to less safety-focused views. In other words, the representations to donors may have been appropriate when the money was raised but outdated by the time it was spent.
I think it's fair to ask whether a donor would have funded a longish runway if it had known the organization's views would change by the time the monies were spent. If the answer is "no," that raises the possibility that the organization may be ethically obliged to refund or regrant the unspent grant monies.
I can imagine circumstances in which the answer to the first question is no and to the second is yes: for instance, suppose the organization was a progressive political advocacy organization that decided to go moderate left instead. It generally will not be appropriate for that org to use progressives' money to further its new stance. On the other hand, any shift here was less pronounced, and there's a stronger argument that the donors got the forecasting/information outputs they paid for.
Anyway, for me all this ties into post-FTX discussions about giving organizations a healthy financial runway. People in those discussions did a good job flagging the downsides of short-term grants without confidence in renewal, as well as the high degree of power funders hold in the ecosystem. But AI is moving fast; it isn't a more stable field like anti-malarial work. So the chance of organizational drift seems considerably higher here.
How do we deal with the possibility that honest organizational changes will create an inconsistency with the implicit donor-recipient understanding at the time of the grant? I don't claim to know the answer, or how to apply it here.
On taxes, you can deduct charitable giving from the amount of income used to figure your federal taxes if you "itemize." The alternative to itemizing is claiming the standard deduction, which in 2025 is $15,000. That means that, as a practical matter, the first $15,000 in itemized expenses don't help you on your taxes, but anything over that does. A very general explanation of itemizing is here.
Major categories of itemized deductions include state/local taxes (capped at $10K), mortgage interest, and charitable giving. I'm not from California, but it looks like your state/local taxes may be ~$7K based on your income. That means that, as a practical matter, the first ~$8K you donate in a year may not help you on your taxes, but everything after that will. The usual (partial) workaround is to save up your donations in a separate account and donate them every few years for more favorable tax treatment. That's what my wife and I do.
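To make the arithmetic above concrete, here's a rough sketch in Python (hypothetical numbers for a single filer; it ignores mortgage interest and other itemized categories, assumes the ~$7K state/local tax figure, and is emphatically not tax advice):

```python
# Rough sketch of the itemizing math above (hypothetical numbers; not tax advice).
STANDARD_DEDUCTION = 15_000   # 2025 standard deduction, single filer
SALT_CAP = 10_000             # cap on the state/local tax deduction

def extra_deduction(state_local_taxes: float, donations: float) -> float:
    """Additional deduction from itemizing instead of taking the standard deduction."""
    itemized = min(state_local_taxes, SALT_CAP) + donations  # ignoring other categories
    return max(itemized - STANDARD_DEDUCTION, 0)

# With ~$7K in state/local taxes, the first ~$8K of donations don't move the needle:
print(extra_deduction(7_000, 8_000))    # 0
# ...but everything past that point reduces taxable income dollar for dollar:
print(extra_deduction(7_000, 12_000))   # 4000
# Bunching two years of $8K donations into one year beats donating $8K each year:
print(extra_deduction(7_000, 16_000))   # 8000 (vs. 0 + 0 if split across two years)
```

The last line is the logic behind the workaround: concentrating a few years of donations into a single tax year pushes more of them past the standard-deduction threshold.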
I believe most people calculate their income on a post-tax basis to the extent that their donations are not tax-deductible. For you, that would mean accounting for some taxes (e.g., Social Security/Medicare, maybe state) and part of your federal tax.
You're absolutely right that the pledge doesn't adjust for personal circumstances, cost of living, and other factors. In my opinion, it's overdemanding for some people and underdemanding for others. I would treat 10% as both a community norm and the specific ask of GWWC's flagship pledge. I think GWWC would primarily justify the flat percentage ask on grounds other than its being the fairest or most philosophically sound ask in an ideal world.
I'll talk here about the community norm, which is more flexible than the pledge. As to specific factors:
By definition, most people have fairly typical personal circumstances on net, with some factors enabling a higher donation percentage and others inhibiting it. Some have significantly more challenging circumstances than average and some have significantly more favorable circumstances than average. It sounds like you have some factors that are relatively favorable (e.g., no kids, some family support, developed country, relatively high income by US standards) and some factors that go in the opposite direction (e.g., very high COL area, just starting out). I think it's best to read the community norm as commending 10% to people in fairly typical personal circumstances, with the understanding that this may not be reasonable for people in less-than-average personal circumstances.
I agree that we need to be careful about who we are empowering.
"Value alignment" is one of those terms which has different meanings to different people. For example, the top hit I got on Google for "effective altruism value alignment" was a ConcernedEAs post which may not reflect what you mean by the term. Without knowing exactly what you mean, I'd hazard a guess that some facets of value alignment are pretty relevant to mitigating this kind of risk, and other facets are not so important. Moreover, I think some of the key factors are less cognitive or philosophical than emotional or motivational (e.g., a strong attraction toward money will increase the risk of defecting, a lack of self-awareness increases the risk of motivated reasoning toward goals one has in a sense repressed).
So, I think it would be helpful for orgs to consider what elements of "value alignment" are of particular importance here, as well as what other risk or protective factors might exist outside of value alignment, and focus on those specific things.
When I speak of a strong inoculant, I mean something that is very effective in preventing the harm in question -- such as the measles vaccine. Unless there were a measles case at my son's daycare, or a family member were extremely vulnerable to measles, the protection provided by a strong inoculant would be enough that I could carry on with life without thinking about measles.
In contrast, the influenza vaccine is a weak inoculant -- I definitely get vaccinated, because without it I'd get infected more often and face a higher risk of hospitalization. But I'm not surprised when I get the flu. If I were at great risk of serious complications from the flu, I'd treat vaccination as only one layer of my mitigation strategy (and not place undue reliance on it). And of course there are strengths in between those two.
I'd call myself moderately cynical. I think history teaches us that the corrupting influence of power is strong and that managing this risk has been a struggle. I don't think I need to take the position that no strong inoculant exists. It is enough to assert that -- based on centuries of human experience across cultures -- our starting point should be to treat inoculants as weak until proven otherwise by sufficient experience. And when one of the star pupils goes so badly off the rails, along with several others in his orbit, that adds to the quantum of evidence I think is necessary to overcome the general rule.
I'd add that one of the traditional ways to mitigate this risk is to observe the candidate over a long period of time in conjunction with lesser levels of power. Although it doesn't always work well in practice, you do get some ability to gauge a specific candidate's susceptibility in lower-stakes situations. It may not be popular to say, but we just won't have had the same opportunity to observe people in their 20s and 30s in intermediate-power situations that we often will have had for the 50+ crowd. Certainly people can and do fake being relatively unaffected by money and power for many years, but that's harder to pull off than faking it for a shorter period of time.
If anything can be an inoculant against those temptations, surely a strong adherence to a cause greater than oneself, packaged in lots of warnings against biases and other ways humans can go wrong (as is the common message in EA and rationalist circles), seems like the best hope for it?
Maybe. But on first principles, one might have also thought that belief in an all-powerful, all-knowing deity who will hammer you if you fall out of line would be a fairly strong inoculant. But experience teaches us that this is not so!
Also, if I had to design a practical philosophy that was maximally resistant to corruption, I'd probably ground it on virtue ethics or deontology rather than give so much weight to utilitarian considerations. The risk of the newly-powerful person deceiving themselves may be greater for a utilitarian.
--
As you imply, the follow-up question is where we go from here. I think there are three possible approaches to dealing with a weak or moderate-strength inoculant:
My point is that doing these steps well requires a reasonably accurate view of inoculant strength. And I got the sense that the community is more confident in EA-as-inoculant than the combination of general human experience and the limited available evidence on EA-as-inoculant warrants.
Arguably influencers are often a safer option - note that EA groups like GiveWell and 80k are already doing partnerships with influencers. As in, there's a decent variety of smart YouTube channels and podcasts that run advertisements for 80k/GiveWell. I feel pretty good about much of this.
This feels different to me. In most cases, there is a cultural understanding of the advertiser-ad seller relationship that limits the reputational risk. (I have not seen the "partnerships" in question, but I assume there is money flowing in one direction and promotional consideration in the other.) To be sure, activists will demand that companies pull their ads from a certain TV show when it does something offensive, stop sponsoring a certain sports team, and so on. However, I don't think consumers generally hold prior ad spend against a brand when it promptly cuts the relationship upon learning of the counterparty's new and problematic conduct.
In contrast, people will perceive something like FTX/EA or Anthropic/EA as a deeper relationship rather than a mostly transactional relationship involving the exchange of money for eyeballs. Deeper relationships can have a sense of authenticity that increases the value of the partnership -- the partners aren't just in it for business reasons -- but that depth probably increases the counterparty risks to each partner.
11. It would probably cost a good bit of political capital to get this through, which may have an opportunity cost. You may not even get public support from the AI companies because the proposal contains an implicit critique that they haven't been doing enough on safety.
12. By the time the legislation got out of committee and through both houses, the scope of incentivized activity would probably be significantly broader than what x-risk people have in mind (e.g., reducing racial bias). Whether companies would prefer to invest more in x-risk safety vs. other incentivized topics is unclear to me.
A new organization can often compete for dollars that weren't previously available to an EA org -- such as government or non-EA foundation grants that are only open to certain subject areas.