Jackson Wagner

Scriptwriter for RationalAnimations @ https://youtube.com/@RationalAnimations
3725 karma · Working (6-15 years) · Fort Collins, CO, USA

Bio

Scriptwriter for RationalAnimations!  Interested in lots of EA topics, but especially ideas for new institutions like prediction markets, charter cities, georgism, etc.  Also a big fan of EA / rationalist fiction!

Comments: 360

To answer with a sequence of increasingly "systemic" ideas (naturally the following will be tinged by my own political beliefs about what's tractable or desirable):

There are lots of object-level lobbying groups that have strong EA endorsement. This includes organizations advocating for better pandemic preparedness (Guarding Against Pandemics), better climate policy (like CATF and others recommended by Giving Green), or beneficial policies in third-world countries like salt iodization or lead paint elimination.

Some EAs are also sympathetic to the "progress studies" movement and to the modern neoliberal movement connected to the Progressive Policy Institute and the Niskanen Center (which are both tax-deductible nonprofit think-tanks). This often includes enthusiasm for denser ("yimby") housing construction, reforming how science funding and academia work in order to speed up scientific progress (such as advocated by New Science), increasing high-skill immigration, and having good monetary policy. All of those cause areas appear on Open Philanthropy's list of "U.S. Policy Focus Areas".

Naturally, there are many ways to advocate for the above causes -- some are more object-level (like fighting to get an individual city to improve its zoning policy), while others are more systemic (like exploring the feasibility of "Georgism", a totally different way of valuing and taxing land which might do a lot to promote efficient land use and encourage fairer, faster economic development).

One big point of hesitancy is that, while some EAs have a general affinity for these cause areas, in many areas I've never heard any particular standout charities being recommended as super-effective in the EA sense... for example, some EAs might feel that we should do monetary policy via "nominal GDP targeting" rather than inflation-rate targeting, but I've never heard anyone recommend that I donate to some specific NGDP-targeting advocacy organization.

I wish there were more places like Center for Election Science, living purely on the meta level and trying to experiment with different ways of organizing people and designing democratic institutions to produce better outcomes. Personally, I'm excited about Charter Cities Institute and the potential for new cities to experiment with new policies and institutions, ideally putting competitive pressure on existing countries to better serve their citizens. As far as I know, there aren't any big organizations devoted to advocating for adopting prediction markets in more places, or adopting quadratic public goods funding, but I think those are some of the most promising areas for really big systemic change.
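For readers unfamiliar with quadratic funding: the core mechanism sets a project's total funding proportional to the square of the sum of the square roots of its individual contributions, so broad support from many small donors attracts far more matching money than the same dollar amount from one big donor. A minimal sketch (contribution amounts are made up for illustration):

```python
import math

def quadratic_funding_total(contributions):
    """Quadratic funding: total = (sum of sqrt(c_i))^2.
    A matching pool tops up the gap between this total and
    the raw sum of individual contributions."""
    total = sum(math.sqrt(c) for c in contributions) ** 2
    match = total - sum(contributions)
    return total, match

# 100 donors giving $1 each vs. 1 donor giving $100 (same raw amount):
broad, match_broad = quadratic_funding_total([1] * 100)
narrow, match_narrow = quadratic_funding_total([100])
# Broad support earns a large match; the single big donor earns none.
```

(Real implementations, like Gitcoin's grant rounds, additionally cap the match by the size of the matching pool and try to correct for collusion; the sketch above is just the textbook formula.)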

The Christians in this story who lived relatively normal lives ended up looking wiser than the ones who went all-in on the imminent-return-of-Christ idea. But of course, if Christianity had been true and Christ had in fact returned, maybe the crazy-seeming, all-in Christians would have had huge amounts of impact.

Here is my attempt at thinking up other historical examples of transformative change that went the other way:

  • Muhammad's early followers must have been a bit uncertain whether this guy was really the Final Prophet. Do you quit your day job in Mecca so that you can flee to Medina with a bunch of your fellow cultists? In this case, it probably would've been a good idea: seven years later you'd be helping lead an army of 100,000 holy warriors to capture the city of Mecca. And over the next thirty years, you'd help convert/conquer all the civilizations of the Middle East and North Africa.

  • Less dramatic versions of the above story could probably be told about joining many fast-growing charismatic social movements (like joining a political movement or revolution). Or, more relevantly to AI, about joining a fast-growing bay-area startup whose technology might change the world (like early Microsoft, Google, Facebook, etc).

  • You're a physics professor in 1940s America. One day, a team of G-men knock on your door and ask you to join a top-secret project to design an impossible superweapon capable of ending the Nazi regime and stopping the war. Do you quit your day job and move to New Mexico?...

  • You're a "cypherpunk" hanging out on online forums in the mid-2000s. Despite the demoralizing collapse of the dot-com boom and the failure of many of the most promising projects, some of your forum buddies are still excited about the possibilities of creating an "anonymous, distributed electronic cash system", such as the proposal called B-money. Do you quit your day job to work on weird libertarian math problems?...

People who bet everything on transformative change will always look silly in retrospect if the change never comes. But the thing about transformative change is that it does sometimes occur.

(Also, fortunately our world today is quite wealthy -- AI safety researchers are pretty smart folks and will probably be able to earn a living for themselves to pay for retirement, even if all their predictions come up empty.)

That's an interesting way to think about it!  Unfortunately this is where the limits of my knowledge about the animal-welfare side of EA kick in, but you could probably find more info about these pressure campaigns by searching some animal-welfare-related tags here on the Forum, or going to the sites of groups like Animal Ask or Hive that do ongoing work coordinating the field of animal activists, or by finding articles / podcast interviews with Lewis Bollard, who is the head grantmaker for this stuff at Open Philanthropy / Coefficient Giving and has been thinking about the strategy of cage-free campaigns and related efforts for a very long time.

I'm not an expert about this, but my impression (from articles like this: https://coefficientgiving.org/research/why-are-the-us-corporate-cage-free-campaigns-succeeding/ , and websites like Animal Ask) is that the standard EA-style corporate campaign involves:

  • a relatively small number of organized activists (maybe, like, 10 - 100, not tens of thousands)...
  • ...asking a corporation to commit to some relatively cheap, achievable set of reforms (like switching their chickens to larger cages or going cage-free, not like "you should all quit killing chickens and start a new company devoted to ecological restoration")
  • ...while also credibly threatening to launch a campaign of protests if the corporation refuses
  • Then rinse & repeat for additional corporations / additional incremental reforms (while also keeping an eye out to make sure that earlier promises actually get implemented).

My impression is that this works because the corporations decide that it's less costly for them to implement the specific, limited, welfare-enhancing "ask" than to endure the reputational damage caused by a big public protest campaign.  The efficacy doesn't depend at all on a threat of boycott by the activists themselves.  (After all, the activists are probably already 100% vegan, lol...)

You might reasonably say "okay, makes sense, but isn't this just a clever way for a small group of activists to LEVERAGE the power of boycotts?  the only reason the corporation is afraid of the threatened protest campaign is because they're worried consumers will stop buying their products, right?  so ultimately the activists' power is deriving from the power of the mass public to make individual personal-consumption decisions".

This might be sorta true, but I think there are some nuances:

  • I don't think the theory of change is that activists would protest and this would kick off a large formal boycott -- most people don't ever participate in boycotts.  Instead, I think the idea is that protests will create a vague haze of bad vibes and negative associations with a product (ie, the protests will essentially be "negative advertisements"), which might push people away from buying even if they're not self-consciously boycotting.  (Imagine you usually go to Chipotle, but yesterday you saw a news story about protestors holding pictures of gross, sad, caged farmed chickens used by Chipotle -- yuck!  This might tilt you towards going to a nearby McDonald's or Panda Express instead that day, even though ethically it might make no sense if those companies use equally low-welfare factory-farmed chicken.)
  • Corporations often seem much more afraid of negative PR than they rationally ought to be, judging by how much their sales would realistically decline (ie, not much) as a result of some small protests.  This suggests that much of the power of protests flows through additional channels beyond the immediate impact on product sales.
  • Even if in a certain sense the cage-free activists' strategy relies on something like a consumer boycott (less formal than a literal boycott, more like "negative advertising"), that still indicates that it's wiser to pursue the leveraged activist strategy than the weaker strategy of just trying to be a good individual consumer and doing a ton of personal boycotts.
  • In particular, a key part of the activists' power comes from their ability to single out one corporation and focus their energies on it for a limited period of time until the company agrees to the ask.  This is the opposite of the OP's diffuse strategy of boycotting everything a little bit (they're just one individual) all the time.
  • It's also powerful that the activists can threaten big action versus no action over one specific decision the corporation can make, thus creating maximum pressure on that decision.  Contrast OP -- if Nestle cleaned up their act in one or two areas, OP would probably still be boycotting them until they also cleaned up their act in some unspecified additional number of areas.
  • We've been talking about animal welfare, which, as some other commenters have noted, has a particularly direct connection to personal consumption, so the idea of something like a boycott at least kinda makes sense, and maybe activists' power is ultimately in part derived from boycott-like mechanisms.  But there are many political issues where the connection to consumer behavior is much more tenuous and indirect.  Suppose you wanted to reduce healthcare costs in the USA -- would it make sense to try and get people to boycott certain medical procedures (but people mostly get surgeries when they need them, not just on a whim) or insurers (but for most people this comes as a fixed part of their job's benefits package)??  Similarly, if you're a YIMBY trying to get more homes built, who do you boycott?  The problem is really a policy issue of overly-restrictive zoning rules and laws like NEPA, not something you could hope to target by changing your individual consumption patterns.  This YIMBY example might seem like a joke, but OP was seriously suggesting boycotting Nestle over the issue of California water shortages, which, like NIMBYism, is really mostly a policy failure caused by weird farm-bill subsidies and messed-up water-rights laws that incentivize water waste -- how is pressure on Nestle, a European company, supposed to fix California's busted agricultural laws??  Similarly, they mention boycotting Coca-Cola soda because Coca-Cola does business in Israel. How are reduced sales for the Coca-Cola company supposed to change the decisions of Bibi Netanyahu and his ministers?? One might as well refuse to buy Lenovo laptops or Huawei phones in an attempt to pressure Xi Jinping to stop China's ongoing nuclear-weapons buildup... surely there are more direct paths to impact here!

I'll admit to a perhaps overly mean-spirited or exasperated tone in that section, but I think the content itself is good actually(tm)?

I agree with you that LLM tech might not scale to AGI, and thus AGI might not arrive as soon as many hope/fear.  But this doesn't really change the underlying concern??  It seems pretty plausible that, if not in five years, we might get something like AGI within our lifetime via some improved, post-LLM paradigm. (Consider the literal trillions of dollars, and thousands of brilliant researchers, now devoting their utmost efforts towards this goal!)  If this happens, it does not take some kind of galaxy-brained rube-goldberg argument to make an observation like "if we invent a technology that can replace a lot of human labor, that might lead to extreme power concentration of whoever controls the technology / disempowerment of many people who currently work for a living", either via "stable-totalitarianism" style takeovers (people with power use powerful AI to maintain and grow this power very effectively) or via "gradual disempowerment" style concerns (once society no longer depends on a broad base of productive, laboring citizens, there is less incentive to respect those citizens' rights and interests).

Misalignment / AI takeover scenarios are indeed more complicated and rube-goldberg-y IMO.  But the situation here is very different from what it was ten years ago -- instead of just doing Yudkowsky-style theorycrafting based on abstract philosophical principles, we can do experiments to study and demonstrate the types of misalignment we're worried about (see papers by Anthropic and others about sleeper agents, alignment faking, chain-of-thought unfaithfulness, emergent misalignment, and more).  IMO the detailed science being done here is more grounded than the impression you'd get by just reading people slinging takes on twitter (or, indeed, by reading comments like mine here!).  Of course if real AGI turns out to be in a totally new post-LLM paradigm, that might invalidate many of the most concrete safety techniques we've developed so far -- but IMO that makes the situation worse, not better!

In general, the whole concept of dealing with existential risks is that the stakes are so high that we should start thinking ahead and preparing to fight them, even if it's not yet certain they'll occur.  I agree it's not certain that LLMs will scale to AGI, or that humanity will ever invent AGI. But it certainly seems plausible! (Many experts do believe this, even if they are in the minority on that survey.  Plus, basically the entire US stock market these days is obsessed with figuring out whether AI will turn out to be a huge deal or a nothingburger or something in-between, so the market doesn't consider it an obvious guaranteed-nothingburger.  And of course all the labs are racing to get as close to AGI as possible, since the closer you get to AGI, the more money you can make by automating more and more types of labor!)  So we should probably start worrying now, just like we worry about nuclear war even though I hope it's unlikely that Putin or Xi Jinping or the USA would really decide to launch a major nuclear attack even in an extreme situation like an invasion of Taiwan.  New technologies sometimes have risks; AI might (not certain, but definitely might) become an EXTREMELY powerful new technology, so the risks might be large!

EA is about more than just "a commitment to measurable impact" -- it's also about trying to find the most /effective/ ways to help others, which means investigating (and often doing back-of-the-envelope "importance, neglectedness, tractability" estimates) to prioritize the most important causes.
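Those back-of-the-envelope "importance, neglectedness, tractability" estimates are usually just rough multiplication: how big the problem is, what fraction of it extra effort could plausibly solve, and how crowded the cause already is. A toy sketch, with every number invented purely for illustration:

```python
def itn_score(importance, tractability, resources_already_spent):
    """Crude ITN heuristic: good done per extra dollar is roughly
    proportional to importance * tractability / (existing resources),
    since marginal returns shrink as a cause gets more crowded."""
    neglectedness = 1 / resources_already_spent
    return importance * tractability * neglectedness

# Hypothetical comparison: a huge-but-crowded cause vs. a smaller,
# highly neglected one (units and values are made up):
crowded_cause = itn_score(importance=1e6, tractability=0.1,
                          resources_already_spent=1e7)
neglected_cause = itn_score(importance=1e4, tractability=0.5,
                            resources_already_spent=1e5)
# The smaller-but-neglected cause can score higher at the margin.
```

The point of the exercise isn't precision -- all three inputs are guesses -- but that multiplying even rough factors can reveal order-of-magnitude differences between causes that intuition alone would miss.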

Take your Nestle example: although they make a convenient big corporate villain, so they often get brought up in debates about drought in California and elsewhere, they aren't actually a big contributor to the problem of drought, since their water consumption is such a minuscule fraction of the state's total water use.  Rather than getting everyone to pressure Nestle, it would be much more effective for individuals to spend their time lobbying to slightly change the rules around how states like California regulate and price water, or lobbying for the federal government to remove wasteful farm subsidies that encourage water waste on a much larger scale.

See here for more on this issue: https://slatestarcodex.com/2015/05/11/california-water-you-doing/

Some EAs might also add that the overall problem of water scarcity in California, or the problem of misleading baby-formula ads (note the science is actually not clear on whether breastmilk is any better than formula; the two seem about the same for babies' health! https://parentdata.org/what-the-data-actually-says-about-breastfeeding/ ), or the problem that Coca-Cola does business in Israel (doesn't it do business basically everywhere??), are simply less severe than the problem of animal suffering.  Although of course this depends on one's values.

Some other considerations about boycotts:

  • Many already consider veganism to be a pretty extreme constraint on one's diet that makes it harder to maintain a diet of tasty, affordable, easy-to-cook, nutritious food.  Add in "also no avocados, and nothing made by Nestle or Coca Cola, and nothing from this other long list of BDS companies, and also...", and this no longer sounds like an easy, costless way to make things a little better!  (Indeed, it starts to look more like a costly signal of ideological purity.)  https://www.lesswrong.com/posts/Wiz4eKi5fsomRsMbx/change-my-mind-veganism-entails-trade-offs-and-health-is-one
  • Within the field of animal welfare, EA has actually pioneered the strategy of DOWNPLAYING the importance of veganism and other personal choices, in favor of a stronger emphasis on corporate pressure campaigns to get companies to adopt incrementally better standards for their animals.  This has turned out to be an extremely successful tactic (billions of chickens' lives improved over just a few years, meanwhile after decades of vegan activism the percentage of vegans/vegetarians in the USA remains about the same low number it's always been).  This lesson would seem to indicate that pushing for mass personal change (eg, to reduce climate emissions by boycotting flights) is perhaps generally less effective than other approaches (like funding scientific research into greener jet fuel, or lobbying for greater public investment in high-speed rail infrastructure).  
  • TBH, the way a lot of advocates talk about consumer boycotts makes me think they believe in the (satirical) "Copenhagen Interpretation of Ethics", the theory that if you get involved in anything bad in any way, however tangential (like drinking a soda made by the same company who also sells sodas to Israelis, who live in a country that is oppressing the people of the West Bank and Gaza), that means you're "entangled" with the bad thing so it's kinda now your fault, so it's important to stay pure and unentangled so nobody can blame you.  https://forum.effectivealtruism.org/posts/QXpxioWSQcNuNnNTy/the-copenhagen-interpretation-of-ethics  I admit that following the Copenhagen Interpretation of Ethics is a great way to avoid anyone ever blaming you for being complicit in something bad.  But EA is about more than just avoiding blame -- it's about trying to help others the most with the resources we have available.  That often means taking big actions to create goodness in the world, rather than having the "life goals of dead people" and simply trying to minimize our entanglement with bad things: https://thingofthings.substack.com/p/the-life-goals-of-dead-people
  • The EA community is pretty small.  Even if all 10,000 of us stopped eating Nestle products, that wouldn't be a very large impact, and it would draw attention away from worthier pursuits, like trying to incubate charities directly serving people in the poorest nations, instead of worrying that maybe a few cents of the five dollars I paid for avocado toast at a restaurant might work its way into the hands of a Mexican cartel.

Fun fact: it's actually this same focus on finding causes that are important (potentially large in scale), neglected (not many other people are focused on them) and tractable, that has also led EA to take some "sci-fi doomsday scenarios" like wars between nuclear powers, pandemics, and AI risk, seriously.  Consider looking into it sometime -- you might be surprised how plausible and deeply-researched these wacky, laughable, uncool, cringe, "obviously sci-fi" worries really are! (Like that countries might sometimes go to war with each other, or that it might be dangerous to have university labs experimenting with creating deadlier versions of common viruses, or that powerful new technologies might sometimes have risks.)

I suspect that part of the theory of impact here might not run through any individual grant item (ie, liberalized zoning laws leading to economic growth through increased housing construction in some particular city), but rather through a variety of bigger-picture considerations that look something like:

  1. The overall state / quality of US politics is extremely important, because the US is the most powerful country in the world, etc.  Improving the state of US politics even a little (ie by making it more likely that smart, thoughtful people will be in power, make good decisions, implement successful reforms, etc) seems like an important point of leverage for many very important causes (consider USAID cuts, AI chip export controls to China, and foreign policy especially concerning great power relations, nuclear nonproliferation, preserving democracy and broad human influence over the future, continued global economic growth, etc).
  2. Of course "fighting for influence over US politics" is gonna seem less appealing once you take into account the fact that it is in a certain sense the least-neglected possible cause, has all sorts of deranging / polarizing / etc side-effects, and so forth.  But maybe, even considering all these things, influencing US politics still seems very worthwhile.  (This seems plausible to me.)
  3. Promoting the abundance movement seems like a decent idea for improving the US Democratic party (in terms of focusing it on smarter, more impactful ideas), perhaps making the Democrats more likely to win elections (which is great if you think Dems are better than the current Republican party), and maybe even improving the Republican party too (if the abundance agenda proves to be a political winner and the right is forced to compete by adopting similar policies).  And, as a plus, promoting this pro-growth, liberal/libertarian agenda seems a little less polarizing than most other conceivable ways of engaging with US politics.

People have wondered for a long time if, in addition to direct work on x-risks, one should consider intermediate "existential risk-factors" like great power war.  It seems plausible to me that "trying to make the United States more sane" is a pretty big factor in many valuable goals -- global health & development, existential risk mitigation, flourishing long-term futures, and so forth.

Hating a big corporation is so much goddamn fun.

England chopped up Africa and trapped it in a cycle of conflict.

...companies like Amazon have sparked a global wave of consumerism

 

EAs get a little obsessed with alignment when hiring.

Maybe we're so big on hiring value-aligned media people because we don't want our movement to get melded back into the morass of ordinary leftist activism!

I agree with some of your points on style / communication -- hedging can really mess up the process of trying to write compelling content, and EA should probably be more willing to be controversial and combative and identify villains of various sorts.  But I think the subtext that we should specifically do this by leaning more in a standard-left-wing-activism direction risks worsening the crisis of lameness, rather than fixing it.

As other commenters have mentioned, I'd be worried about losing some of the things that make EA special (for example, suffering the same kind of epistemic decay that plagues a lot of activist movements).

But I'm also a little skeptical that, even if EA was fine with (or could somehow avoid) taking the epistemic hit of building a mass movement in this way, the aesthetics of billionaire-bashing, protest-attending, etc. are really as intrinsically "sexy" to smart, ambitious young people as you make them out to be.  I'd worry we'd create a vibe that ends up artificially self-limiting the audience we can reach.  (I'm thinking about how a lot of left wing activism -- abolish the police, extinction rebellion climate stuff, gaza protests, etc -- often tends to create counterproductive levels of polarization, seemingly for polarization's own sake, in a way that seems to just keep re-activating the same left-leaning folks, but not accomplishing nearly as much broad societal persuasion as would seem to be possible.)

(re: "EA should probably be more willing to be controversial and combative and identify villains", my preferred take is that EA should be willing to be more weird in public, to talk seriously about things that seem sci-fi (like takeover by superintelligence) or morally bizarre (like shrimp welfare) or both (like possible utopian / transhumanist futures for humanity), thus attracting attention by further distinguishing itself from both left-wing and right-wing framings, thus offering something new and strange but also authentic and evidence-backed to people who have a truth-seeking mindset and who are tired of mainstream ideological culture-wars.  Politically, I expect this would feel kind of like a "radical-centrist" vibe, or maybe like a kind of fresh alternate style of left-liberalism more like the historical Progressive Era, or something. Anyways, of course it takes plenty of media skill to talk about super-weird stuff well!  And this vision of mine also has lots of drawbacks -- who knows, maybe I have rose-tinted glasses and it would actually crash and burn even harder than a more standard lefty-activism angle.  But it's what I would try.)

Linking my own thoughts as part of previous discussion "How confident are you that it's preferable for America to develop AGI before China does?".  I generally agree with your take.

This is a nice story, but it doesn't feel realistic to treat the city of the future as such an all-or-nothing affair.  Wouldn't there be many individual components (like the merchant's initial medical tonic) that could be stand-alone technologies, diffusing throughout the world and smoothly raising standards of living in the usual way?  In this sense, even your "optimistic" story seems too pessimistic about the wide-ranging, large-scale impact of the scholar's advice.

The world of the story would still develop quite differently than in real history, since they're:

  1. getting technologies much faster than in real history
  2. getting technologies without understanding as much of the theory behind them.  (although is this really true?  I feel like, if we had access to such a scholar, it might be easiest for the scholar to tell us about fundamental theories of nature, rather than laboriously transcribing the design for each and every inscrutable device.  so it's possible that an oracle would actually differentially advance our theoretical understanding -- consider how useful an oracle would be in the field of modern pharma development, where we have many effective drugs whose exact mechanisms of action are still unknown!)

This and other effects (like the obvious power-concentration aspect of whoever controls access to the oracle's insights) would probably produce a very lopsided-seeming world compared to actual modernity.  But I don't think it would end up looking like either of the two endings to your story.

(Of course, your more poetic endings fit the form of a traditional fable much better.  "And then the city kicked off an accelerating techno-industrial singularity" doesn't really fit the classic repertoire of tragedy, comedy, etc!)
