Mike Albrecht

Global macro investor -> Social enterprise founder
39 karma · Joined · Working (15+ years) · Brooklyn, NY, USA

Bio

I'm on a mission to help unite humanity — by using AI to leverage social psychology and advance the way people connect and reason with one another. My dream is a world with more solidarity, belonging, and respect, where we lift one another up, achieve our full potential together, and find happiness in our shared efforts. It's a dream worth fighting for.

“The peoples of this world must unite or they will perish.” 
  — J. Robert Oppenheimer

“Our ability to reach unity in diversity will be the beauty and the test of our civilization.” 
  — Mahatma Gandhi

∗ ∗ ∗

My journey up to now has been interdisciplinary. My early years skewed mathematical and technical: I started writing code at twelve, studied applied math at Carnegie Mellon, and entered the professional world as a Wall Street quant. I then spent over a decade as a global macro strategist, studying markets through the lenses of economics, (geo)politics, psychology, and sociology — including AI's implications for all of the above.

∗ ∗ ∗

Some additional interests: world history, political theory, metaphysics/epistemology, visual arts, mindfulness, plants, hiking, powerlifting, and cycling.

How others can help me

If the mission above resonates with you, please DM me: I'd love to hear your thoughts and potentially collaborate. We're just getting started and actively seeking experienced software developers, ML researchers, data scientists, and UI/UX designers.

How I can help others

You tell me :)

Comments

@alx, thank you for addressing this crucial topic. I totally agree that the macro risks[1] of aligned AI are often overlooked compared to the risks of misaligned AI (though both are crucial). My impression is that the EA community focuses significantly more on the risk of misaligned AI. Consider, however, that Metaculus estimates a misaligned-AI outcome ("Paperclipalypse") to be only about half as likely as an "AI-Dystopia."

The future for labor is indeed one of the bigger macro risks, but it's also one of the more tangible ones, making it arguably less neglected in terms of research. For example, your first call to action is to prioritize "Analysis and segmentation of all top job functions within each industry and their loss exposure." This work has largely been done already, insofar as government primary-source data exists in the US and EU. I personally led research quantifying labor exposures,[2] and I'll readily admit that work is largely duplicative of what many others have done. I'll also be frank that such point forecasts are inherently wild guesstimates, given huge unknowns around technological capabilities, complementary innovations, and downstream implementation.

A couple suggestions for macro discussions:

  1. When citing any analysis, the time horizon matters; different horizons can lead to very different conclusions (most studies of job displacement consider a 10-year horizon).
  2. As we consider impacts over time, we should also differentiate between transitional/frictional unemployment and more structural/permanent unemployment. It's one thing to lose a job and potentially have to re-train to do something else; it's another if there aren't even jobs to get re-trained for (a distinction that probably matters for something like deaths of despair[3]). I'm far more concerned about the latter type of unemployment, and that's where UBI would need to come in, but that's frankly the easy problem. The far bigger challenge is how we reorganize society and maintain "social engagement" once work is no longer central.

These critiques aside, this is a great post, and again, I totally agree with your core point.

There are other AI-related macro risks that extend beyond—and might exceed—employment. I'll share a post soon with my thoughts on those. For now, I'll just say: As we approach this brave new world, we should be preparing not only by trying to find all the answers, but also by building better systems to be able to find answers in the future, and to be more resilient, reactive, and coordinated as a global society. Perhaps this is what you meant by "innovate solutions and creative risk mitigations with novel technologies"? If so, then you and I are of the same mind.

  1. ^

    By "macro," I mean everything from macroeconomic (including labor) to political, geopolitical, and societal implications.

  2. ^

    Michael Albrecht and Stephanie Aliaga, “The Transformative Power of Generative AI,” J.P. Morgan Asset Management, September 30, 2023. I co-authored this high-level "primer" covering various macro implications of AI. It covers a lot of bases, so feel free to skip around if you're already very familiar with some of the content.

  3. ^

    I have reservations about your specific mortality rate analysis, but I'll save that discussion for another time. I do appreciate and agree with your broader perspective.

@Toby_Ord, thank you - I find this discussion interesting in theory, but I wonder whether it's actually tractable. There's no metaphorical lever for affecting overall progress, no way to advance everything - "science, technology, the economy, population, culture, societal norms, moral norms, and so forth" - at the same pace. Moreover, any culture or effort that in principle seeks to advance all of these things at the same time is likely to leave some of them behind in practice (and I fear that those left behind would be the wrong ones!).

Rather, I think that the "different kind of intervention, which we could call differential progress" is in fact the only kind of intervention there is. That is to say, there are a whole bunch of tiny levers we might be able to pull that affect all sorts of bits of progress. Moreover, some of these levers are surprisingly powerful, while other levers don't really seem to do much. I agree about "differentially boosting other kinds of progress, such as moral progress or institutional progress, and perhaps even for delaying technological, economic, and scientific progress." And I might venture to say that our levers are much more powerful when it comes to the former set than the latter.

I do think this loose alliance of authoritarian states[1] - Russia, Iran, North Korea, etc. - poses some meaningful challenge to democracies, especially insofar as the authoritarian states coordinate to undermine the democratic ones, e.g., through information warfare that increases polarization.

However, I'd emphasize "loose" here, given that they share no ideology. That makes them different from what binds together the free world[2] or what held together the Cold War's communist bloc. Such a loose coalition is merely opportunistic and transactional, and likely to dissolve if the opportunity dissipates, i.e., if the U.S. retreats from its role as the global police. Perhaps an apt historical example is how the victors in WWII splintered into NATO and the Warsaw Pact once Nazi Germany was defeated.

  1. ^

    Full disclosure: I've not (yet) read Applebaum's Autocracy Inc.

  2. ^

    What comes to mind is Kant et al.'s democratic peace theory.

Thanks, David. I mostly agree with @Stephen Clare's points, even though I also generally agree with your critique. (The notion of a future dominated by religious fanaticism always brings to mind Frank Herbert's Dune saga.)

The biggest issue I have is, to echo David, that odds of 1 in 30,000 strike me as far too low.

Looking at Stephen's math...

But let’s say there’s:

  • A 10% chance that, at some point, an AI system is invented which gives whoever controls it a decisive edge over their rivals
  • A 3% chance that a totalitarian state is the first to invent it, or that the first state to invent it becomes totalitarian
  • A 1% chance that the state is able to use advanced AI to entrench its rule perpetually

That leaves about a 0.3% chance we see a totalitarian regime with unprecedented power (10% x 3%) and a 0.003% (1 in 30,000) chance it’s able to persist in perpetuity.

... I agree with emphasizing the potential for AI value lock-in, but I have a few questions:

  1. Does the eventual emergence of an AI-empowered totalitarian state most likely require that someone have a "decisive edge" in AI at the outset?
  2. Could an AI-empowered global totalitarian state start as a democracy or even a corporation, rather than being totalitarian at the moment it acquires powerful AI capabilities?
  3. How do we justify estimating that the conditional odds of perpetual lock-in are only 1%? (See the arithmetic sketch below.)
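
For reference, here's the arithmetic chain implied by Stephen's three estimates (a quick back-of-the-envelope check; the rounding to "1 in 30,000" is his):

$$0.10 \times 0.03 = 0.003 = 0.3\%, \qquad 0.003 \times 0.01 = 3 \times 10^{-5} \approx \tfrac{1}{33{,}333}$$

The headline figure is linear in that final 1%: doubling it to 2% would shrink the odds to roughly 1 in 17,000, which is why question 3 strikes me as the crux.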

@Toby_Ord, I very much appreciated your speech (only wish I'd seen it before this week!) not to mention your original book, which has influenced much of my thinking.

I have a hot take[1] on your puzzlement that viral gain-of-function research (GoFR) hasn’t slowed, despite having possibly caused the COVID-19 pandemic, as well as on the general implication that it does more to increase risks to humans than to reduce them. Notwithstanding some additional funding bureaucracy, there’s been nothing like the U.S.'s 2014-17 GoFR moratorium.

From my (limited) perspective, the lack of a moratorium may stem from a simple conflict of interest in those we entrust with these decisions. Namely, we trust experts to judge the safety of their own work, but these experts have their egos and livelihoods at stake — not to mention potential castigation and exclusion from their communities if they speak out. Moreover, if the idea that their research is unsafe poses such psychological threats, then we might expect impartial reasoning to give way to motivated rationalization for continuing GoFR. This conflict might thus explain not only the persistence of GoFR, but also the initial dismissal of the lab-leak theory as a “conspiracy theory”.

 ∗ ∗ ∗ 

“When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success. That is the way it was with the atomic bomb.” 
  — J. Robert Oppenheimer

“What we are creating now is a monster whose influence is going to change history, provided there is any history left, yet it would be impossible not to see it through, not only for military reasons, but it would also be unethical from the point of view of the scientists not to do what they know is feasible, no matter what terrible consequences it may have.” 
  — John von Neumann

Those words from the men who developed the atomic bomb resonate today. I think they apply very much to GoFR — not to mention research in other fields like AI.

  1. ^

    I’d love any feedback, especially by any experts in the field. (I'm not one.)

You're welcome. The Economist, in my opinion, has some minor biases but is usually very reasonable. The nuance I would add is that the effect of any fiscal expansion - and I'd be more inclined to emphasize expansion rather than the deficit per se - depends on many factors, not least of which is the concurrent output gap. M.Y. summarizes that point well. 

To be fair, following a decade of persistently below-target inflation in the U.S., and to an even greater degree in other major developed economies, inflation wasn't the top concern on anybody's mind! Also, speaking of other countries, the other big consideration is that while such fiscal expansion likely drove U.S. prices higher than they would've gone otherwise, high inflation has in fact been a fairly global issue since it began in 2021.

Hey Ms. Albrecht, I believe this tendency to underestimate pocketbook issues is likely true of the general populace, though political operatives are usually more aware of their significance, e.g., election probability models invariably incorporate measures of economic activity. However, these models typically focus on current/incumbent economic performance, and that fits the present case: discontent is less about real incomes in 2020 (not least since a lot else was going on at the time...) and more about 2024 price levels.

Federal transfer payments notwithstanding, near-term macroeconomic outcomes usually have little to do with the White House's agenda,* which makes it particularly unfortunate that we don't do more to separate economic and social policymaking...

* That said, some of the inflation can be attributed to Biden's 2021 stimulus, and rents are high in part due to a surge in demand from new migrants amid extremely constrained housing supply (a straightforward economic reality, agnostic to the merits of said migration).

@Yelnats T.J., thanks for writing this. Also grateful to my friend John who pointed me to your post (I’m new on this forum). This topic is near and dear to me: I believe that unity is key to achieving humanity’s full potential and avoiding dystopian and annihilative outcomes. 

Your work here sparks a few thoughts.

America’s broader fatalism about polarization—the heart of this problem—is unfortunate. There are promising routes to bridging divides, especially through technology. If social media has (accidentally) contributed as much to polarization as it has, imagine the potential effectiveness of technology explicitly aimed at constructive unity. Social psychology offers a wealth of evidence on what creates an “us” identity (Van Bavel and Packer’s The Power of Us is a great overview as of ca. 2020). There is scope to implement these ideas at scale, especially with the help of LLMs/NLP and other AI. (A thorough discussion is beyond the scope of this comment, but to anyone with whom this idea resonates, please DM me; I’d love to discuss.)

Destabilization is not a U.S.-only problem. Although polarization seems most pernicious in the U.S., other liberal democracies have been growing increasingly divided (see, e.g., the 2023 Edelman Trust Barometer) due to similar factors, including echo chambers (largely due to social media), more time spent online, and surging migration (see, e.g., Mounk’s The Great Experiment, 2022).

The potential long-term implications of U.S. destabilization are hard to overstate. Emphasizing negative implications for EA funding may actually understate the problem. Beyond the U.S., the stakes are much higher. A failure of U.S. democracy could pave the way for Chinese global hegemony, leading to significant global retrenchment of philosophically liberal notions of equality and individual liberty. It might even usher in a global totalitarian surveillance state, one that is AI-empowered and potentially permanent/locked-in. Conversely, suppose the U.S., among other liberal democracies, actually gets its act together and functions effectively. Then the rest of the world might see America as a “shining city on a hill” and become more open to its liberal values. That includes China, which, following Xi’s “scientific socialism”, is fundamentally guided by empirical evidence.

International division is also underrated. You hypothesize that destabilization within the U.S. could increase risks from “great power conflict, AI, bio-risk, and climate disruption.” While I agree, this increase in risks strikes me as overshadowed by the risk of persistent disunity among nations. That is, pursuing unity within the U.S. is mostly about preserving liberty; pursuing international unity is mostly about preventing annihilation. In an AI arms race that throws caution to the wind to maintain the upper hand over foreign adversaries, for instance, it would seem to make little difference whether the U.S. is democratic or authoritarian (Suleyman’s The Coming Wave, 2023, argues this well). From a long-term and more abstract perspective, increasingly powerful and accessible technologies present both bigger upsides and more dangerous ways to harm or destroy one another. The proximate existential risks to humanity (e.g., as outlined in Ord’s The Precipice, 2020) may largely boil down to one ultimate meta-risk: disunity. Over a long enough time horizon, J. Robert Oppenheimer’s categorical imperative strikes me as a mathematical certainty: “The peoples of this world must unite or they will perish”.

Disunity within the U.S. and disunity internationally are driven by the same us-vs.-them tribalism, so their potential solutions are not fundamentally different. Moreover, I'm hopeful that AI could make communicating with those who speak other languages more seamless.

Back to the U.S., I echo the point about leftist/partisan tone made by @David_Althaus, @Jason, @Geoffrey Miller, and @Marcus_Ogren. Hard as it may be to avoid suggesting that the other side is the bigger problem, even if it were hypothetically true, I don’t see it as productive. On the contrary, if the ultimate problem is polarization/disunity, then the solution involves talking and compromising with people who have different values; it might be counterproductive if those people feel blamed. If an enemy must be made of specific views/values, I think “extremism” (including in one’s own party) fits the bill. Relatedly, I’d avoid discussing institutional reforms that are theoretically wonderful but impractical given their extreme partisan implications, e.g., abolishing the electoral college. Such ideas could be mentioned in the footnotes.

Lukas, thanks for pulling together all these notes. To me, "cooperative AI" stands out and might deserve its own page(s). The terminology covers remarkably broad and disparate pursuits. In the words of Dafoe et al. (mostly of the Cooperative AI Foundation):

  • "A first cluster consists of AI–AI cooperation, tackling ever more difficult, rich and realistic settings (see ‘Four elements of cooperative intelligence’)." - this is notably the focus of FOCAL@CMU, who are looking at "game theory appropriate for advanced, autonomous AI agents – with a focus on achieving cooperation".
  • "A second is AI–human cooperation, for which we will need to advance natural-language understanding, enable machines to learn about people’s preferences, and make machine reasoning more accessible to humans." - big problems but plenty happening here, of course, with RLHF and research on alignment (representation, etc.).
  • "A third cluster is work on tools for improving (and not harming) human–human cooperation, such as ways of making the algorithms that govern social media better at promoting healthy online communities." 

This last one seems neglected, in my view, probably because it is an inherently less straightforward and more interdisciplinary problem to tackle. But it's also arguably the one with the single greatest upside potential. Will MacAskill, in describing “the best possible future,” imagines “technological advances… in the ability to reflect and reason with one another”. Already today, there's a wealth of social psychology research on what creates connection and cooperation; these ideas might be implemented at scale, with the help of AI - to help us understand, connect, and achieve things together. In a narrow sense, that might help scientists collaborate. In a bigger sense, it might ultimately reverse societal polarization and help unite humankind, in a way that reduces existential risk and increases upside potential more than anything else we could do.
