There is a strategy I’ve heard gestured at over the years but haven’t seen clearly articulated. The phrases used to express it go roughly like “outrunning one’s problems” and “walking is repeated catching ourselves as we fall”.

It is the strategy of entering unstable states, ones that would lead to disaster if not exited soon, either to gain an advantage (like the efficient locomotion of human walking) or as a way to reach other, better states (like crossing a desert to get to an oasis). Done properly, it opens up additional opportunities, greatly extends the flexibility of your plans, and lets you succeed in otherwise harsh conditions; done in error, it may leave you worse off. You must travel between such states like walking over hot coals: if you stay too long without moving, you get burnt (a related, though different, sense of instability).

Let’s move a bit towards a formal definition. Let the states of the world fall into these categories:

  • Loss unstable
  • Loss stable
  • Win
  • Lose

A loss unstable state is one where, the longer you stay in it, the higher your probability per unit time of entering a lose state becomes, for reasons such as (but not limited to) accumulating damage (say, rising CO2 levels) or resource depletion (say, the shrinking supply of phosphorus used in fertilizer). A loss stable state is one where your probability of entering a lose state does not increase with the time you spend in it (for instance, however long you stand 20 meters from a cliff, your probability of falling off it doesn’t increase).
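
To make the distinction a bit more concrete, here is a minimal sketch (a toy model; the hazard functions and numbers are invented purely for illustration) in which a loss stable state has a constant hazard rate of falling into a lose state, while a loss unstable state has a hazard rate that grows with the time spent in it:

```python
import math

# Toy hazard-rate model (illustrative numbers only).
# hazard(t) = probability per unit time of entering a lose state,
# as a function of how long we have been in the current state.

def hazard_loss_stable(t):
    # Standing 20 meters from a cliff: the per-unit-time risk doesn't grow with time.
    return 0.001

def hazard_loss_unstable(t):
    # Accumulating damage (say, rising CO2): the per-unit-time risk grows with time.
    return 0.001 + 0.002 * t

def survival_probability(hazard, duration, dt=0.01):
    """Probability of not yet having entered a lose state after `duration` time units."""
    integrated_hazard = sum(hazard(i * dt) * dt for i in range(int(duration / dt)))
    return math.exp(-integrated_hazard)

for duration in (1, 10, 50):
    print(duration,
          round(survival_probability(hazard_loss_stable, duration), 3),
          round(survival_probability(hazard_loss_unstable, duration), 3))
```

With these made-up numbers, after 50 time units the stable state is still safe with probability around 0.95, while the unstable state is down below 0.1. The point is not the numbers but the shape: time spent in the unstable state is itself what drives the risk up.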

Why would one choose to enter a loss unstable state, then? Well, firstly, you may have no choice and must simply do the best you can in the situation. If you do have a choice, though, there are several reasons why you may still choose to enter a loss unstable state (a toy simulation of the first reason is sketched after the list):

  • They may have higher transition probabilities to the win states
  • They may be on the path to better states
  • They may otherwise be the best states you can reach, as long as you don’t stay in them for long (say, for accumulating resources)
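
To illustrate the first of these, here is a toy simulation (all transition probabilities are made up for the example): an unstable route with a much higher per-step chance of winning, compared against a stable route, under both a fast and a slow exit.

```python
import random

# Toy comparison (all numbers invented for illustration).
# Each step we either win, lose, or stay where we are.
# - The stable route has a small constant win chance and a small constant loss hazard.
# - The unstable route has a bigger win chance per step, but its loss hazard grows
#   the longer we stay in it, so it only pays off if we can exit quickly.

def run_stable(p_win=0.002, p_lose=0.001, max_steps=10_000):
    for _ in range(max_steps):
        r = random.random()
        if r < p_win:
            return "win"
        if r < p_win + p_lose:
            return "lose"
    return "lose"

def run_unstable(p_win, base_hazard=0.001, hazard_growth=0.01, max_steps=10_000):
    for t in range(max_steps):
        p_lose = base_hazard + hazard_growth * t  # hazard rises with time in the state
        r = random.random()
        if r < p_win:
            return "win"
        if r < p_win + p_lose:
            return "lose"
    return "lose"

def win_rate(run, trials=20_000):
    return sum(run() == "win" for _ in range(trials)) / trials

random.seed(0)
print("stable route:             ", win_rate(run_stable))
print("unstable route, fast exit:", win_rate(lambda: run_unstable(p_win=0.2)))
print("unstable route, slow exit:", win_rate(lambda: run_unstable(p_win=0.02)))
```

With these made-up numbers the unstable route comfortably beats the stable one when the exit is fast, and does much worse when the exit is slow, which is exactly the “done in error” caveat above.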

In general, this idea of loss unstable states, in contrast with loss stable states, is a new lens for highlighting important features of the world. The ‘sprinting between oases’ strategies enabled by crossing through loss unstable states may, if executed without error, very well be better than those going solely through stable states.

Comments (6)



I'm not clear on what relevance this holds for EA or any of its cause areas, which is why I've tagged this as "Personal Blog". 

The material seems like it might be a better fit for LessWrong, unless you plan to add more detail on ways that this "strategy" might be applied to open questions or difficult trade-offs within a cause area (or something community-related, of course).

I would have been much more interested in this post if it had included explicit links to EA. That could mean including EA-relevant examples. It could also mean explicitly referencing existing EA 'literature', or positioning this strategy as a solution to a known problem in EA.

I don't think the level of abstraction was necessarily the problem; the problem was that it didn't seem especially relevant to EA.

It's true that this is pretty abstract (as abstract as fundamental epistemology posts), but because of that I'd expect it to be a relevant perspective for most strategies one might build, whether for AI safety, global governance, poverty reduction, or climate change. It's lacking, though, the examples and explicit connections that would make this salient. In a future post I've got queued on AI safety strategy I already link to this one, and in general abstract articles like this provide a nice base to build from toward specifics. I'll definitely think about, and possibly experiment with, putting the more abstract and conceptual posts on LessWrong.

If you plan on future posts which will apply elements of this writing, that's a handy thing to note in the initial post! 

You could also see what I'm advocating here as "write posts that bring the base and specifics together"; I think that will make material like this easier to understand for people who run across it when it first gets posted.

If you're working on posts that rely on a collection of concepts/definitions, you could also consider using Shortform posts to lay out the "pieces" before you assemble them in a post. None of this is mandatory, of course; I just want to lay out what possibilities exist given the Forum's current features.

I think I like the idea of more abstract posts being on the EA Forum, especially if the main intended eventual use is for straightforward EA causes. Arguably, a whole lot of the interesting work to be done is kind of abstract.

This specific post seems to be somewhat related to global stability, from what I can tell?

I'm not sure what the ideal split is between this and LessWrong. I imagine that as time goes on we could do a better cluster analysis.

This idea seems somewhat related to:

  • The idea of state risks vs transition risks, as discussed in Superintelligence and Chapter 7 of The Precipice
  • This passage from The Precipice:
It is even possible to have situations where we might be best off with actions that pose their own immediate risk if they make up for it in how much they lower longterm risk. Potential examples include developing advanced artificial intelligence or centralising control of global security.