Scriptwriter for RationalAnimations! Interested in lots of EA topics, but especially ideas for new institutions like prediction markets, charter cities, georgism, etc. Also a big fan of EA / rationalist fiction!
The Christians in this story who lived relatively normal lives ended up looking wiser than the ones who went all-in on the imminent-return-of-Christ idea. But of course, if Christianity had been true and Christ had in fact returned, maybe the crazy-seeming, all-in Christians would have had huge amounts of impact.
Here is my attempt at thinking up other historical examples of transformative change that went the other way:
Muhammad's early followers must have been a bit uncertain whether this guy was really the Final Prophet. Do you quit your day job in Mecca so that you can flee to Medina with a bunch of your fellow cultists? In this case, it probably would've been a good idea: eight years later, you'd be helping lead an army of 10,000 holy warriors to capture the city of Mecca. And over the next thirty years, you'd help convert/conquer all the civilizations of the Middle East and North Africa.
Less dramatic versions of the above story could probably be told about joining many fast-growing charismatic social movements (like joining a political movement or revolution). Or, more relevantly to AI, about joining a fast-growing bay-area startup whose technology might change the world (like early Microsoft, Google, Facebook, etc).
You're a physics professor in 1940s America. One day, a team of G-men knocks on your door and asks you to join a top-secret project to design an impossible superweapon capable of ending the Nazi regime and stopping the war. Do you quit your day job and move to New Mexico?...
You're a "cypherpunk" hanging out on online forums in the mid-2000s. Despite the demoralizing collapse of the dot-com boom and the failure of many of the most promising projects, some of your forum buddies are still excited about the possibilities of creating an "anonymous, distributed electronic cash system", such as the proposal called B-money. Do you quit your day job to work on weird libertarian math problems?...
People who bet everything on transformative change will always look silly in retrospect if the change never comes. But the thing about transformative change is that it does sometimes occur.
(Also, fortunately our world today is quite wealthy -- AI safety researchers are pretty smart folks and will probably be able to earn a living for themselves to pay for retirement, even if all their predictions come up empty.)
That's an interesting way to think about it! Unfortunately this is where the limits of my knowledge about the animal-welfare side of EA kick in, but you could probably find more info about these protest campaigns by searching some animal-welfare-related tags here on the Forum, or going to the sites of groups like Animal Ask or Hive that do ongoing work coordinating the field of animal activists, or by finding articles / podcast interviews with Lewis Bollard, who is the head grantmaker for this stuff at Open Philanthropy / Coefficient Giving, and has been thinking about the strategy of cage-free campaigns and related efforts for a very long time.
I'm not an expert about this, but my impression (from articles like this: https://coefficientgiving.org/research/why-are-the-us-corporate-cage-free-campaigns-succeeding/, and websites like Animal Ask) is that the standard EA-style corporate campaign involves:
My impression is that this works because the corporations decide that it's less costly for them to implement the specific, limited, welfare-enhancing "ask" than to endure the reputational damage caused by a big public protest campaign. The efficacy doesn't depend at all on a threat of boycott by the activists themselves. (After all, the activists are probably already 100% vegan, lol...)
You might reasonably say: "Okay, makes sense, but isn't this just a clever way for a small group of activists to LEVERAGE the power of boycotts? The only reason the corporation is afraid of the threatened protest campaign is that they're worried consumers will stop buying their products, right? So ultimately the activists' power derives from the power of the mass public to make individual personal-consumption decisions."
This might be sorta true, but I think there are some nuances:
I'll admit to a perhaps overly mean-spirited or exasperated tone in that section, but I think the content itself is good actually(tm)?
I agree with you that LLM tech might not scale to AGI, and thus AGI might not arrive as soon as many hope/fear. But this doesn't really change the underlying concern? It seems pretty plausible that, if not in five years, we might get something like AGI within our lifetimes via some improved, post-LLM paradigm. (Consider the literal trillions of dollars, and thousands of brilliant researchers, now devoting their utmost efforts towards this goal!) If this happens, it does not take some kind of galaxy-brained rube-goldberg argument to make an observation like "if we invent a technology that can replace a lot of human labor, that might lead to extreme concentration of power in whoever controls the technology, and disempowerment of many people who currently work for a living" -- either via "stable totalitarianism" style takeovers (people with power use powerful AI to maintain and grow that power very effectively) or via "gradual disempowerment" style concerns (once society no longer depends on a broad base of productive, laboring citizens, there is less incentive to respect those citizens' rights and interests).
Misalignment / AI takeover scenarios are indeed more complicated and rube-goldberg-y IMO. But the situation here is very different from what it was ten years ago -- instead of just doing Yudkowsky-style theorycrafting based on abstract philosophical principles, we can do experiments to study and demonstrate the types of misalignment we're worried about (see papers by Anthropic and others about sleeper agents, alignment faking, chain-of-thought unfaithfulness, emergent misalignment, and more). IMO the detailed science being done here is more grounded than the impression you'd get by just reading people slinging takes on twitter (or, indeed, by reading comments like mine here!). Of course if real AGI turns out to be in a totally new post-LLM paradigm, that might invalidate many of the most concrete safety techniques we've developed so far -- but IMO that makes the situation worse, not better!
In general, the whole concept of dealing with existential risks is that the stakes are so high that we should start thinking ahead and preparing to fight them, even if it's not yet certain they'll occur. I agree it's not certain that LLMs will scale to AGI, or that humanity will ever invent AGI. But it certainly seems plausible! (Many experts do believe this, even if they are in the minority on that survey. Plus, the entire US stock market these days is basically obsessed with figuring out whether AI will turn out to be a huge deal, a nothingburger, or something in between -- so the market doesn't consider it an obvious guaranteed nothingburger. And of course all the labs are racing to get as close to AGI as fast as possible, since the closer you get to AGI, the more money you can make by automating more and more types of labor!) So we should probably start worrying now, just like we worry about nuclear war even though it hopefully seems unlikely that Putin or Xi Jinping or the USA would really decide to launch a major nuclear attack, even in an extreme situation like an invasion of Taiwan. New technologies sometimes have risks; AI might (not certain, but definitely might) become an EXTREMELY powerful new technology, so the risks might be large!
EA is about more than just "a commitment to measurable impact" -- it's also about trying to find the most /effective/ ways to help others, which means investigating (and often doing back-of-the-envelope "importance, neglectedness, tractability" estimates) to prioritize the most important causes.
Take your Nestle example: although they make a convenient big corporate villain, and so they often get brought up in debates about drought in California and elsewhere, they aren't actually a big contributor to the problem of drought, since their water consumption is such a minuscule fraction of the state's total water use. Rather than getting everyone to pressure Nestle, it would be much more effective for individuals to spend their time lobbying to change the rules around how states like California regulate and price water, or lobbying for the federal government to remove wasteful farm subsidies that encourage water waste on a much larger scale.
See here for more on this issue: https://slatestarcodex.com/2015/05/11/california-water-you-doing/
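(To put rough numbers on "minuscule", here's a minimal back-of-the-envelope sketch in Python, in the spirit of the EA-style importance estimates mentioned above. The bottling volume and statewide total below are round, illustrative assumptions -- not verified figures; see the linked post for real data.)

```python
# Back-of-the-envelope: what share of California's water goes to bottling?
# Both inputs are rough, illustrative assumptions -- not verified figures.
GALLONS_PER_ACRE_FOOT = 325_851  # standard unit conversion

bottled_gallons_per_year = 1e9    # assume ~1 billion gallons/year bottled
ca_total_acre_feet = 40_000_000   # assume ~40M acre-feet/year total state use

bottled_acre_feet = bottled_gallons_per_year / GALLONS_PER_ACRE_FOOT
share = bottled_acre_feet / ca_total_acre_feet

print(f"Bottled: ~{bottled_acre_feet:,.0f} acre-feet/year")
print(f"Share of total state use: {share:.4%}")  # ~0.0077% on these assumptions
```

On assumptions anywhere near this ballpark, bottled water is a rounding error next to agricultural use, which is the point of the "importance" test: even a maximally successful boycott would barely move the total.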
Some EAs might also add that the overall problem of water scarcity in California, or the problem of misleading baby-formula ads (note that the science is actually not clear on whether breastmilk is any better than formula; the two seem about the same for babies' health! https://parentdata.org/what-the-data-actually-says-about-breastfeeding/), or the problem that Coca-Cola does business in Israel (doesn't it do business basically everywhere??), are simply less severe than the problem of animal suffering. Although of course this depends on one's values.
Some other considerations about boycotts:
Fun fact: it's actually this same focus on finding causes that are important (potentially large in scale), neglected (not many other people are focused on them) and tractable, that has also led EA to take some "sci-fi doomsday scenarios" like wars between nuclear powers, pandemics, and AI risk, seriously. Consider looking into it sometime -- you might be surprised how plausible and deeply-researched these wacky, laughable, uncool, cringe, "obviously sci-fi" worries really are! (Like that countries might sometimes go to war with each other, or that it might be dangerous to have university labs experimenting with creating deadlier versions of common viruses, or that powerful new technologies might sometimes have risks.)
I suspect that part of the theory of impact here might not run through any individual grant item (i.e., liberalized zoning laws leading to economic growth through increased housing construction in some particular city), but rather through a variety of bigger-picture considerations that look something like:
People have wondered for a long time if, in addition to direct work on x-risks, one should consider intermediate "existential risk-factors" like great power war. It seems plausible to me that "trying to make the United States more sane" is a pretty big factor in many valuable goals -- global health & development, existential risk mitigation, flourishing long-term futures, and so forth.
Hating a big corporation is so much goddamn fun.
England chopped up Africa and trapped it in a cycle of conflict.
...companies like Amazon have sparked a global wave of consumerism
EAs get a little obsessed with alignment when hiring.
Maybe we're so big on hiring value-aligned media people because we don't want our movement to get melded back into the morass of ordinary leftist activism!
I agree with some of your points on style / communication -- hedging can really mess up the process of trying to write compelling content, and EA should probably be more willing to be controversial and combative and identify villains of various sorts. But I think the subtext that we should specifically do this by leaning more in a standard-left-wing-activism direction risks worsening the crisis of lameness, rather than fixing it.
As other commenters have mentioned, I'd be worried about losing some of the things that make EA special (for example, suffering the same kind of epistemic decay that plagues a lot of activist movements).
But I'm also a little skeptical that, even if EA were fine with (or could somehow avoid) taking the epistemic hit of building a mass movement in this way, the aesthetics of billionaire-bashing, protest-attending, etc., are really as intrinsically "sexy" to smart, ambitious young people as you make them out to be. I'd worry we'd create a vibe that ends up artificially self-limiting the audience we can reach. (I'm thinking about how a lot of left-wing activism -- abolish the police, Extinction Rebellion climate stuff, Gaza protests, etc. -- often tends to create counterproductive levels of polarization, seemingly for polarization's own sake, in a way that seems to just keep re-activating the same left-leaning folks, while not accomplishing nearly as much broad societal persuasion as would seem possible.)
(re: "EA should probably be more willing to be controversial and combative and identify villains", my preferred take is that EA should be willing to be more weird in public, to talk seriously about things that seem sci-fi (like takeover by superintelligence) or morally bizarre (like shrimp welfare) or both (like possible utopian / transhumanist futures for humanity), thus attracting attention by further distinguishing itself from both left-wing and right-wing framings, thus offering something new and strange but also authentic and evidence-backed to people who have a truth-seeking mindset and who are tired of mainstream ideological culture-wars. Politically, I expect this would feel kind of like a "radical-centrist" vibe, or maybe like a kind of fresh alternate style of left-liberalism more like the historical Progressive Era, or something. Anyways, of course it takes plenty of media skill to talk about super-weird stuff well! And this vision of mine also has lots of drawbacks -- who knows, maybe I have rose-tinted glasses and it would actually crash and burn even harder than a more standard lefty-activism angle. But it's what I would try.)
Linking my own thoughts from a previous discussion, "How confident are you that it's preferable for America to develop AGI before China does?". I generally agree with your take.
This is a nice story, but it doesn't feel realistic to treat the city of the future as such an all-or-nothing affair. Wouldn't there be many individual components (like the merchant's initial medical tonic) that could be stand-alone technologies, diffusing throughout the world and smoothly raising standards of living in the usual way? In this sense, even your "optimistic" story seems too pessimistic about the wide-ranging, large-scale impact of the scholar's advice.
The world of the story would still develop quite differently than in real history, since they're:
This and other effects (like the obvious power-concentration aspect of whoever controls access to the oracle's insights) would probably produce a very lopsided-seeming world compared to actual modernity. But I don't think it would end up looking like either of the two endings to your story.
(Of course, your more poetic endings fit the form of a traditional fable much better. "And then the city kicked off an accelerating techno-industrial singularity" doesn't really fit the classic repertoire of tragedy, comedy, etc!)
To answer with a sequence of increasingly "systemic" ideas (naturally, the following will be tinged by my own political beliefs about what's tractable or desirable):
There are lots of object-level lobbying groups that have strong EA endorsement. These include organizations advocating for better pandemic preparedness (Guarding Against Pandemics), better climate policy (like CATF and others recommended by Giving Green), and beneficial policies in third-world countries, like salt iodization or lead-paint elimination.
Some EAs are also sympathetic to the "progress studies" movement and to the modern neoliberal movement connected to the Progressive Policy Institute and the Niskanen Center (both tax-deductible nonprofit think tanks). This often includes enthusiasm for denser ("YIMBY") housing construction, reforming how science funding and academia work in order to speed up scientific progress (as advocated by New Science), increasing high-skill immigration, and having good monetary policy. All of those cause areas appear on Open Philanthropy's list of "U.S. Policy Focus Areas".
Naturally, there are many ways to advocate for the above causes -- some are more object-level (like fighting to get an individual city to improve its zoning policy), while others are more systemic (like exploring the feasibility of "Georgism", a totally different way of valuing and taxing land which might do a lot to promote efficient land use and encourage fairer, faster economic development).
One big point of hesitancy is that, while some EAs have a general affinity for these cause areas, in many areas I've never heard any particular standout charities being recommended as super-effective in the EA sense... for example, some EAs might feel that we should do monetary policy via "nominal GDP targeting" rather than inflation-rate targeting, but I've never heard anyone recommend that I donate to some specific NGDP-targeting advocacy organization.
I wish there were more places like the Center for Election Science, living purely on the meta level and trying to experiment with different ways of organizing people and designing democratic institutions to produce better outcomes. Personally, I'm excited about the Charter Cities Institute and the potential for new cities to experiment with new policies and institutions, ideally putting competitive pressure on existing countries to better serve their citizens. As far as I know, there aren't any big organizations devoted to advocating for adopting prediction markets in more places, or adopting quadratic public-goods funding (sketched below), but I think those are some of the most promising areas for really big systemic change.
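(For readers who haven't seen it, here's a minimal sketch of the standard quadratic funding matching rule from Buterin, Hitzig & Weyl's "liberal radicalism" proposal: a project's total funding is the square of the sum of the square roots of individual contributions, so broad support from many small donors attracts a much bigger match than the same dollar amount from one whale. Real deployments scale the subsidy to fit a fixed matching pool; this toy version ignores that.)

```python
import math

def quadratic_funding(contributions: list[float]) -> tuple[float, float]:
    """Return (total_funding, matching_subsidy) under the quadratic funding rule:
    total = (sum of sqrt(each individual contribution))^2."""
    raw = sum(contributions)
    total = sum(math.sqrt(c) for c in contributions) ** 2
    return total, total - raw

# Many small donors beat one large donor of the same raw amount:
print(quadratic_funding([1.0] * 100))  # (10000.0, 9900.0): broad support, big match
print(quadratic_funding([100.0]))      # (100.0, 0.0): a lone donor gets no match
```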