I think there could be compelling reasons to prioritise Earning To Give highly, depending on one's options. This is a "hot takes" explanation of this claim with a request for input from the community. This may not be a claim that I would stand by upon reflection.
I base the argument on a few key assumptions, listed below. Each could be debated in its own right, but I would prefer to keep any discussion of them outside this post and its comments. This is for brevity and because my reason for making them is largely a deferral to people better informed on the subject than I am. The Intelligence Curse by Luke Drago is a good backdrop for this.
- Whether or not we see AGI or superintelligence, AI will have significantly reduced the availability of white-collar jobs by 2030, and will continue to reduce it thereafter.
- AI will eventually drive an enormous increase in world GDP.
- The combination of these will produce wealth inequality that is both unprecedented in severity and near-totally locked in.
If AI advances make white-collar human workers redundant by outperforming them at lower cost, we are living in a dwindling window in which people can still determine their own financial destiny. Government action and philanthropy notwithstanding, one's assets may never grow appreciably again once one's labour has become replaceable. The window for starting a new profession may be even shorter, since entry-level jobs are likely the easiest to automate and companies will find it easier to stop hiring people than to start firing them.
That this may be the fate of much of humanity in the not-too-distant future seems bleak. While my ear is not the closest to the ground on all things AI, my intuition is that humanity will not have the collective wisdom to restructure society in time to prevent this from leading to a technocratic feudal hierarchy. Frankly, I'm alarmed that, having engaged with EA consistently for 7+ years, I've only heard discussion of this very recently. Furthermore, the Trump Administration has proven itself willing to use America's economic and military superiority to pressure other states into arguably exploitative deals (tariffs, offering Ukraine security guarantees in exchange for mineral resources) and to shed altruistic commitments (foreign aid). My assumption is that if this Administration, or a similar successor, oversaw the unveiling of workplace-changing AI, it would cast its moral circle no further than American citizens. Those in other countries may have very unclear routes to income.
Should this scenario come to pass, altruistic individuals who bought shares in the companies driving this economic explosion before it happened could do disproportionate good. The number of actors able to steer the course of the future at all will have shrunk by orders of magnitude, and I would predict that most of them will be more consumed by their rivalries than by any desire to help others. Others have pointed out that this was generally the case in medieval feudal systems. Depending on the scale of investment, even a single such person could save dozens, hundreds, or even thousands of other people from destitution. If that person possessed charisma or political aptitude, their influence over other asset owners could improve the lives of a great many. Given that being immensely wealthy leaves many doors open for conventional Earning To Give if this scenario doesn't come to pass (and I would advocate donating at least 10% of income along the way), it seems sensible to me for an EA to aggressively pursue their own wealth in the short term.
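To give a rough sense of the orders of magnitude behind "dozens, hundreds, or even thousands", here is a minimal back-of-envelope sketch. Every figure in it (starting capital, growth multiple, draw-down rate, cost of supporting one person) is a hypothetical assumption of mine, chosen purely for illustration:

```python
# Back-of-envelope sketch: how many people could one investor support per year?
# All inputs are hypothetical assumptions chosen only to illustrate orders of magnitude.

def people_supported(initial_capital, growth_multiple, annual_draw_rate, cost_per_person):
    """Estimate how many people a portfolio could keep out of destitution each year.

    initial_capital: capital invested before the hypothesised boom (GBP)
    growth_multiple: how many times the capital grows during the boom
    annual_draw_rate: fraction of the grown portfolio given away each year
    cost_per_person: assumed annual cost of supporting one person (GBP)
    """
    portfolio = initial_capital * growth_multiple
    annual_giving = portfolio * annual_draw_rate
    return annual_giving / cost_per_person

# Example: £200k invested, 50x growth, 4% drawn down per year, £4k per person per year.
print(people_supported(200_000, 50, 0.04, 4_000))  # -> 100.0 people per year
```

Under these (entirely assumed) figures, "dozens to hundreds" looks plausible; "thousands" would require either much larger starting capital, a bigger growth multiple, or a lower cost of support.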
If one has a clear career path for helping solve the alignment problem or achieve the governance policies required to bring transformative AI into the world for the benefit of all, I unequivocally endorse pursuing those careers as a priority. These considerations are for those without such a clear path. I will now give a vignette of my own circumstances, both to provide a concrete example and because I genuinely want advice!
I have spent 4 years serving as a military officer. My friend works at a top financial services firm, which has a demonstrable preference for hiring ex-military personnel. He can think of salient examples of people being hired for jobs paying £250k/year with CVs arguably weaker, in both military and academic terms, than mine. With my friend's help, it is plausible that I could secure such a position. I am confident that I would not suffer more than trivial value drift while earning this wage, or on becoming ludicrously wealthy thereafter, based on concrete examples in which I upheld my ethics despite significant temptation not to. I am also confident that I have demonstrated sufficient resilience in my current profession to handle life as a trader, at least for a while. With much less confidence, I feel that I would be at least average in my ability to influence other wealthy people to buy into altruistic ideals.
My main alternative is to seek mid-to-senior operations management roles at EA and adjacent organisations with a longtermist focus. I won't labour why I think these roles would be valuable, nor do I mean to diminish the contributions that can be made in them. This theory of impact does, of course, rely heavily on whichever organisation I join delivering impactful results; money can almost certainly buy results, but results of a fundamentally more limited nature.
So, should one such as I Earn To Invest And Then Give, or work on pressing problems directly?
I don't really follow why one set of entities getting AGI and not sharing it should necessarily lead to widespread destitution.
Suppose A, B and C are currently working and trading with each other. A develops AGI and leaves B and C to themselves. Would B and C now just starve? Why would that necessarily happen? If they are still able to work as before, they can do that and trade with each other. They would become a bit poorer, I suppose, because they would need to replace the goods that A had a comparative advantage in producing.
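As a toy numerical sketch of that "a bit poorer" claim (all numbers are invented purely for illustration, not an economic model): if B loses its trade with A, B has to cover A's goods itself at a worse opportunity cost, so its consumption falls somewhat but nowhere near to destitution.

```python
# Toy illustration: B's consumption before and after A stops trading.
# All numbers are made up purely to illustrate the point.

HOURS = 10                       # B's available labour per period
FOOD_PER_HOUR = 1.0              # B's productivity in food
SOFTWARE_PER_HOUR = 0.5          # B's (worse) productivity in software
SOFTWARE_NEEDED = 1.0            # software B wants to consume each period
PRICE_OF_SOFTWARE_IN_FOOD = 1.0  # terms of trade with A before A withdraws

# With trade: B specialises in food and buys software from A.
food_with_trade = HOURS * FOOD_PER_HOUR - SOFTWARE_NEEDED * PRICE_OF_SOFTWARE_IN_FOOD
# -> 9 food + 1 software

# Without trade: B must make its own software at a higher opportunity cost.
hours_on_software = SOFTWARE_NEEDED / SOFTWARE_PER_HOUR
food_without_trade = (HOURS - hours_on_software) * FOOD_PER_HOUR
# -> 8 food + 1 software: poorer, but not destitute

print(food_with_trade, food_without_trade)  # 9.0 8.0
```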
For B and C to be made destitute directly, it would seem to require that they are prevented from working at anything like their previous productivity: e.g. if A were providing something essential and irreplaceable for B and C (maybe software products if A is techy?), or if A's AGI went and pushed B and C off a large fraction of natural resources. It doesn't seem very likely to me that B and C couldn't mostly replace what A provided (e.g. with current open-source software). For A to push B and C off a large enough share of resources, when the AGI has presumably already made A very rich, would require A to be more selfish and cruel than I hope is likely, but it's unfortunately not unthinkable.
Of course, there would probably still be hugely more inequality, but that doesn't imply B and C are destitute.
I could imagine large indirect harms to B and C if their drop in productivity were large enough to create a depression, with financial-system feedbacks amplifying the effects.
In any case, the picture you paint seems to require an additional reason that B and C cannot produce the things they need for themselves.
I had a look; it seems to presume that the AI owners will control all the resources, but this doesn't seem like a given (though it may pan out that way).
I realise you said you didn't want to debate these assumptions; I just wanted to point out that the picture painted doesn't seem inevitable.