
Summary

Complexity science is a field which aims to understand systems composed of many interacting parts. I believe it is relevant to a number of EA cause areas, and has several features that could help in realising the goals of effective altruism:

  1. A set of useful concepts and mental models for approaching broad and challenging systemic problems.
  2. Tools such as computer simulation for understanding and analysing complex systems.
  3. An example of building a successful interdisciplinary intellectual movement.

A note about me

I am not an academic expert in complexity science; however, I have several years of experience building computer simulations to model complex systems. As such, this post focuses disproportionately on computer simulation as a tool, but it is still intended as an introduction to complexity science more broadly. I strongly believe that complex systems simulation is a powerful tool for institutional decision making, and I hold a weaker belief that complexity science could benefit many other EA cause areas. My aim with this post is to introduce the field to more people in EA, in order to get feedback and ideas on where it could be applied.

What is complexity science? 

What do ant colonies, the immune system, the economy, the energy grid and the internet all have in common? According to the field of complexity science these are all examples of complex systems: systems composed of many interacting components, which communicate and coordinate with each other, adapt to changing circumstances and process information in a decentralised manner. These can be contrasted with systems that are merely complicated, but not complex. Take, for example, complicated human-engineered systems such as airplanes or microprocessors: these are impressively intricate, but they have been designed in a top-down, controlled way and their operation is fairly predictable. It is straightforward to connect the high-level and low-level phenomena observed in these systems. Conversely, you can have apparently simple complex systems, such as cellular automata, which are governed by very simple behavioural rules at the micro level, but whose components interact and combine in ways that are hard to understand and predict at the overall system level. Complexity science is an ambitious endeavour to understand and explain such phenomena.

I have not seen much prior discussion of complexity science within EA, but I think there are several parallels and areas of overlap. One similarity is that, much like effective altruism, it is highly interdisciplinary: it draws from physics, biology, computer science, artificial intelligence and social science. In this post I aim to give a quick introduction to the world of complexity science and why I believe it holds a lot of value for effective altruism.

The website Complexity Explained contains a short introduction to the core ideas of complexity science, with interactive examples. One key concept is the idea of interactions: a complex system cannot be understood just by aggregating its component parts, since you also have to understand the interactions between those parts. This leads to system characteristics such as nonlinearity, feedback loops and tipping points, meaning a small input to the system can result in wildly different behaviour.

To illustrate this concept, imagine something non-complex such as a bag of marbles. If you want to know the weight of the whole bag, you can simply sum the weights of all the individual marbles. In contrast, if you take a complex system such as a financial market composed of many individual traders, you can’t get the macro-level properties of the market simply by summing the actions of all the traders, since these traders react to and interact with each other. Another way of thinking about this is “more is different”: if you add more components or agents to a system, you get qualitatively different effects that can’t be predicted just from the sum of those parts.

A follow-on concept is the idea of emergence: some complex systems exhibit surprisingly complex behaviour at the macro level as a result of relatively simple behaviour at the individual level. Ant colonies are a classic example. Each individual ant follows very simple rules and has no understanding of the overall intent of the colony; nevertheless, ant colonies are able to coordinate to build intricate nests and to cooperate on a large scale to forage for food, among other sophisticated behaviours. This shows how complex information processing can emerge from very simple individual rules. Murmurations of starlings are another example of how beautiful and complex-seeming patterns can arise from simple local behaviour. Each bird can only perceive a few of its neighbours, and simply adjusts itself to match their direction and speed, yet the combined action of all these birds following simple rules results in complex patterns with no central coordination. Knowing that a system exhibits emergence does not necessarily mean we can use this to predict its behaviour; however, it is a useful way to classify systems in which it is hard to link the micro and macro behaviour.
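
To make this concrete, here is a minimal flocking sketch in Python, in the spirit of Craig Reynolds’ boids but stripped down to just the alignment rule described above. All parameter values are invented for illustration:

```python
import math
import random

random.seed(0)

N_BIRDS = 200
NEIGHBOURS = 7     # each bird only perceives a handful of nearby flockmates
ALIGN_RATE = 0.1   # how strongly a bird steers towards its neighbours' heading
WORLD = 100.0      # the birds live on a 100x100 torus (edges wrap around)

# Each bird is [x, y, heading]; all start at random positions and directions.
birds = [[random.uniform(0, WORLD), random.uniform(0, WORLD),
          random.uniform(0, 2 * math.pi)] for _ in range(N_BIRDS)]

def step(birds):
    new = []
    for x, y, h in birds:
        # The nearest few birds are the only information this bird has.
        nearest = sorted(birds, key=lambda b: (b[0] - x) ** 2 + (b[1] - y) ** 2)
        near = nearest[1:NEIGHBOURS + 1]
        # Steer towards the neighbours' average heading (a circular mean).
        target = math.atan2(sum(math.sin(b[2]) for b in near),
                            sum(math.cos(b[2]) for b in near))
        h += ALIGN_RATE * math.atan2(math.sin(target - h), math.cos(target - h))
        # Move forward one unit, wrapping around the world's edges.
        new.append([(x + math.cos(h)) % WORLD, (y + math.sin(h)) % WORLD, h])
    return new

for _ in range(200):
    birds = step(birds)

# A crude order parameter: 1.0 means perfectly aligned, ~0 means random headings.
order = math.hypot(sum(math.cos(b[2]) for b in birds),
                   sum(math.sin(b[2]) for b in birds)) / N_BIRDS
print(f"alignment after 200 steps: {order:.2f}")
```

Running this, the headings start out random and become strongly aligned, even though no bird ever sees more than a handful of neighbours: global order from purely local rules.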

Systems which exhibit complexity and emergence cannot be understood by breaking them down via analytical reductionism, but neither are they random systems, so we cannot rely solely on statistics to understand their macro-level properties. In general such systems prove intractable for analytical mathematical tools which rely on aggregation and simplification; for a more technical introduction to this idea see this textbook. As a partial solution to this problem, complexity science provides helpful concepts for thinking about these systems, such as emergence, hierarchy and self-organization. These concepts are not as precise as analytical reductionism, but where reductionism fails they may be the best tools we have. This means approaching complex systems in a holistic way; it can be helpful to think “from the bottom up”, modelling the individual components as well as how they combine together.

An example of this can be seen in economics, by contrasting classical economic methods, e.g. in macroeconomics, with a newer approach inspired by complexity science, known as complexity economics. Traditional economic models rely on strong assumptions such as homogeneity (all agents are the same) and rationality (agents act to maximise their own self-interest and have perfect knowledge). These models solve for a global optimum which represents an equilibrium state; however, complex systems such as the economy rarely reach equilibrium in reality, as they are examples of non-equilibrium dynamical systems. Complexity economics takes a different approach, focusing on bottom-up modelling of agents. This includes agent-based models (ABMs): computer simulations with heterogeneous agents that interact with each other and follow heuristic behavioural rules, with limited knowledge rather than global optimisation. One way of thinking about this is modelling “verbs” rather than “nouns”, as explained in a recent paper by W. Brian Arthur, meaning that we capture dynamic processes rather than static quantities. I think the idea of working with non-equilibrium systems is particularly relevant to EA, since many of the real-world systems EAs are trying to impact, such as global politics or national healthcare systems, are constantly changing and never settle into a static equilibrium.
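
As a toy illustration of the contrast with equilibrium models, here is a sketch of a market with two heterogeneous trader types, fundamentalists and trend-following chartists, where the price dynamics come from their interaction rather than from solving for an equilibrium. The behavioural rules and all coefficients are invented for illustration:

```python
import random

random.seed(1)

FUNDAMENTAL = 100.0            # the "true" value fundamentalists believe in
price_history = [100.0, 100.5]

# Heterogeneous agents: fundamentalists trade towards value, chartists chase trends.
agents = ["fundamentalist"] * 50 + ["chartist"] * 50

for t in range(500):
    p_now, p_prev = price_history[-1], price_history[-2]
    demand = 0.0
    for kind in agents:
        if kind == "fundamentalist":
            # Buy when the price is below fundamental value, sell when above.
            demand += 0.05 * (FUNDAMENTAL - p_now)
        else:
            # Extrapolate the recent trend, plus idiosyncratic noise.
            demand += 0.5 * (p_now - p_prev) + random.gauss(0, 0.5)
    # The price moves in proportion to aggregate excess demand.
    price_history.append(p_now + 0.01 * demand)

returns = [price_history[i + 1] - price_history[i] for i in range(len(price_history) - 1)]
print(f"final price: {price_history[-1]:.2f}")
print(f"largest single-step move: {max(abs(r) for r in returns):.2f}")
```

Even in this tiny model, the mix of stabilising and destabilising strategies produces persistent fluctuations around the fundamental value rather than convergence to a fixed point.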

Complexity economics is still far from the mainstream of economics; however, it has been gaining acceptance in recent years, particularly since the 2008 financial crisis, which drew a lot of criticism of mainstream macroeconomics for failing to foresee the crisis (the financial system wasn’t even included in most macroeconomic models before then). Complexity economics was recently featured in two Nature Reviews articles, here and here.

Complex systems tend to be characterised by “fat tail” distributions which include extreme events, caused by feedback loops and cascade effects arising from interactions between components and systems. This can take the form of “cascading failures”. A classic example is the major power blackout in the Northeastern US and Canada in 2003. It was initially triggered by short circuits in transmission lines in Ohio due to overgrown trees; the outage was then compounded by a software bug in the alarm system. Due to insufficient coordination and containment strategies, the blackout spread to overload the power grid across much of the Northeastern and Midwestern US, as well as the Canadian province of Ontario. This cascading failure was due to the interaction of multiple systems: a forest ecological network, a power grid made up of physical cables and software systems, and a human coordination network.
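
A stylised version of this dynamic is easy to simulate: fail one node in a random network, redistribute its load onto its neighbours, and watch whether the failure stays contained or cascades. This is a minimal sketch with made-up loads and capacities, not a model of any real grid:

```python
import random

random.seed(2)

N = 200
# A sparse random network: each node links to a few random others.
neighbours = {i: set() for i in range(N)}
for i in range(N):
    for j in random.sample(range(N), 3):
        if j != i:
            neighbours[i].add(j)
            neighbours[j].add(i)

load = {i: 1.0 for i in range(N)}
capacity = {i: 1.0 + random.uniform(0.2, 0.6) for i in range(N)}  # varying headroom

def cascade(first_failure):
    """Fail one node, dump its load onto live neighbours, repeat as needed."""
    failed = {first_failure}
    frontier = [first_failure]
    while frontier:
        node = frontier.pop()
        alive = [n for n in neighbours[node] if n not in failed]
        if not alive:
            continue
        share = load[node] / len(alive)
        for n in alive:
            load[n] += share           # the failed node's load shifts here...
            if load[n] > capacity[n]:  # ...possibly overloading this node too
                failed.add(n)
                frontier.append(n)
    return failed

print(f"nodes lost from a single initial failure: {len(cascade(0))} / {N}")
```

Depending on how much headroom the nodes happen to have, the same single failure can fizzle out or take down a large fraction of the network, which is exactly the fat-tailed behaviour described above.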

The ongoing Covid-19 pandemic is another example of interacting complex systems, for example disease propagation amongst interacting agents, international travel patterns and the economy. 

Of particular relevance to EA is how this may apply to analysing potential global catastrophic risks (GCRs) or existential risks. Most of the plausible scenarios for such risks involve multiple systemic problems interacting with each other, with hard to predict feedback loops and cascading failures.

Complexity science includes many more areas which I don’t have space to cover in depth here, for example information theory, evolution, genetic algorithms, fractals, chaotic systems and network theory. All of these exist as academic fields in their own right; however, complexity science weaves a thread through them all. It can come across as overly broad when viewed through the existing ontology used in academia to demarcate separate areas of study, such as physics, chemistry, biology and economics. However, it is focused on a particular set of ideas and phenomena that recur across many of these different areas, for example interactions and micro-macro scale relationships.

The term “complexity science”, as an umbrella for this interdisciplinary set of ideas, is only a few decades old; however, these ideas have a long lineage in other areas, particularly in parts of physics such as dynamical systems and chaos theory, as well as graph theory and network science. Many people use this lens to study such systems even if they don’t use the term complexity science. The process of connecting all these different domains and ideas is still underway; for example, the notion of complexity itself arguably remains to be formalised. However, I would propose that there is a set of strong underlying themes, ideas and tools which could be beneficial for effective altruism.

A particularly promising tool from complexity science is computer simulation, including agent-based models. Since this is the specific area I have the most experience in, I will spend much of the rest of this post exploring how computer simulation could be used within effective altruism. It’s important to note, though, that computer simulation is just one tool from complexity science, and not all complexity scientists would place the same emphasis on simulation that I do. There are other aspects of complexity science which could be valuable for EA; for example, simply being able to recognise that you are dealing with a complex, emergent system is useful, because it is likely to change what assumptions are appropriate to make, your priors on what behaviour you expect to observe, and the viability of any potential solutions.

Applications within Effective Altruism

When I think about the biggest challenges in effective altruism, many of them involve intervening in complex systems. Take health interventions in the developing world: these can often have second- and third-order consequences which are hard to reason about, and hard to test with RCTs, given the complex interactions between different systems. A lot of the time the straightforward “non-complex” interventions have already been tried, and all that is left are the thornier, more complex challenges.

I think there is an opportunity for effective-altruism-minded people to work on building and advocating for simulations of complex social systems, and on using these to explore the effects of policy interventions. This has a lot of overlap with the area of Improving Institutional Decision Making (IIDM). Our world is more interconnected than ever before, and many of the most pressing challenges of the 21st century involve complex systems: for example pandemics, misinformation in social networks, and the critical infrastructure which enables modern life, such as the transport network, the internet and the power grid.

There is also a clear link to longtermism. Parts of the complexity science community are exploring foundational questions such as the origin of life. The Interplanetary Project at the Santa Fe Institute (a well known complexity science organisation), touches on many themes which would be familiar to anyone interested in longtermism within EA.

Below I have sketched out a few examples of where I think complexity science and simulation are relevant to EA cause areas. As a warning: these are just initial ideas to prompt discussion, and are not necessarily fully thought through.

Cause Areas

Biosecurity

Complex systems simulation has already had a large impact on decision making during the Covid-19 pandemic (discussed in more detail later in this post). In my view there are many untapped opportunities for better simulations to help handle such events. These could explore the unanticipated second-order effects of pandemics and the interactions between systems such as the disease itself, the economy, and the fragility or resilience of global supply chains. Governments could use simulations to test interventions and prevention policies, such as the move to remote work. Simulations could also model hypothetical scenarios such as an engineered pathogen, across a range of different possible disease parameters.
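
As a minimal sketch of what such a parameter sweep might look like, the following branching-process model estimates how often a single introduction of a hypothetical pathogen dies out on its own, across a range of assumed R0 values (the Poisson offspring distribution and the cutoffs are simplifying assumptions for illustration):

```python
import math
import random

random.seed(3)

def poisson(lam):
    # Knuth's algorithm; fine for the small means used here.
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def fizzle_fraction(r0, trials=1000, cutoff=2000):
    """Fraction of single introductions that die out before reaching
    `cutoff` cumulative cases, under a simple branching process."""
    died_out = 0
    for _ in range(trials):
        cases, total = 1, 1
        while 0 < cases and total < cutoff:
            cases = sum(poisson(r0) for _ in range(cases))  # next generation
            total += cases
        died_out += total < cutoff
    return died_out / trials

# Sweep over hypothetical transmissibilities for an engineered pathogen.
for r0 in (0.8, 1.5, 2.5, 4.0):
    print(f"R0 = {r0}: ~{fizzle_fraction(r0):.0%} of introductions fizzle out")
```

A real biosecurity model would of course need far more structure, but even this toy sweep shows how simulation lets us reason about scenarios for which, by definition, we have no direct data.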

Governance and institutional decision making

IIDM is perhaps the most obvious cause area in which to apply simulations, as they can be effective tools for decision making, promoting collaboration and a shared understanding of a situation. Simulations allow decision makers to test the effects of novel policies. In a more meta way, ABMs can also simulate the mechanisms of decision making and voting themselves. There is also the closely related area of operations research, which focuses on improving decision making and planning, and has already made use of tools such as ABMs and systems thinking.

Economics

The rapidly growing field of complexity economics provides capabilities beyond the limitations of traditional economic modelling. Such techniques could allow a better understanding of the robustness of the economy and the financial system, and help to quantify the potential harm of black swan events such as the 2008 financial crisis. They could also help to distinguish short-term variation and noise in the economy and financial markets from longer-term structural trends. OpenPhil already mentions improving macroeconomic policy as a focus area.

A speculative application of complexity economics and ABMs is building simulations of hypothetical future worlds, which could help to understand what aspects of our current economic theory might still hold true in a vastly different context. 

Climate change

Climate change is a promising area of application for complex systems modelling, in particular complexity economics. Simulations can help us understand the complex interactions and feedback loops between the economy and natural systems. Existing climate models are already complex systems simulations, factoring in multiple geological and climate dynamics and feedback effects; however, at present there are not many serious efforts to combine these with economic models and simulations of social systems. This paper by Doyne Farmer, one of the pioneers of complexity economics, argues that complexity economics and ABMs can address the shortcomings of existing economic models of climate change. An economic microsimulation such as the EUROMOD tax-benefit model could be used to understand the impact of policies such as a carbon tax.

Nuclear weapons

Agent-based models could be built to simulate hypothetical nuclear weapons proliferation scenarios, modelling incentives and behaviours at the agent level, with agents representing different countries or other actors. This could highlight which policies may have counterintuitive or unexpectedly negative effects. It would be a form of dynamic game-theory simulation, and could help test the robustness of different equilibria and the sensitivity to different parameters and assumptions. Related work was conducted during the Cold War, and could be expanded upon with more data and computational power.
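
To give a flavour of this, here is a deliberately crude sketch of a security-dilemma dynamic: each country builds a weapon whenever it doesn’t lead its biggest rival by some margin of “ambition”. A tiny change in that margin flips the system from a stable equilibrium to a runaway arms race; all the rules and numbers here are invented for illustration:

```python
import random

random.seed(4)

COUNTRIES = 5
STEPS = 50

def simulate(ambition):
    """Each country builds a weapon whenever it doesn't lead its biggest
    rival's arsenal by at least `ambition` -- a toy security-dilemma rule."""
    arsenal = [random.randint(0, 3) for _ in range(COUNTRIES)]
    for _ in range(STEPS):
        build = []
        for i in range(COUNTRIES):
            biggest_rival = max(a for j, a in enumerate(arsenal) if j != i)
            build.append(1 if arsenal[i] < biggest_rival + ambition else 0)
        arsenal = [a + b for a, b in zip(arsenal, build)]  # simultaneous moves
    return arsenal

# ambition=0: parity satisfies everyone, so the system settles down.
# ambition=1: everyone wants an edge over its rivals, so the race never stops.
for ambition in (0, 1):
    print(f"ambition={ambition}: arsenals after {STEPS} rounds -> {simulate(ambition)}")
```

The point is not the specific numbers but the sensitivity: a small change in agent-level preferences produces a qualitatively different system-level outcome, which is exactly the kind of effect such models could surface.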

AI Safety

It is plausible that the arrival of powerful AI, whether gradual or sudden, could be modelled as a complex system of many interacting humans and AI agents, similar to the scenarios discussed in “What failure looks like” by Paul Christiano. This could look like an “AI ecosystem” of powerful but narrow AI agents. The ARCHES paper by Andrew Critch and David Krueger sets out many such scenarios, including the challenges of “multi-multi” delegation and control between multiple humans and AI agents. Viewing these multi-agent dynamics as a complex system seems like a natural way to think about this.

I am not completely sure that we understand these hypothetical scenarios well enough right now to build a useful simulation; however, even the process of trying to build such a simulation could promote new ways of thinking about the problem. As with simulating nuclear weapons scenarios, simulations may illuminate some of the power dynamics involved in developing military AI, for example race dynamics and potential destabilising effects.

Complexity science also overlaps directly with the fields of AI and cognitive science. For example, this article explores the links between deep learning and complexity/chaos. I am optimistic that there are interesting links to AI safety here, but this requires more investigation to flesh out.

Building more complex and realistic simulation environments for AI training is an active area of AI research, which clearly overlaps with complex systems modelling. Multi-agent simulations are particularly relevant here. The development of increasingly sophisticated training simulations has many implications for AI safety, and this could potentially increase the risk of poorly aligned AI unless approached in a careful way.

Additionally, it strikes me that in order to avoid a worst-case scenario when transformative AI does arise, we want to ensure that it does not destroy what we might describe as “complex life”, yet another example of a complex system. So perhaps there are deep links between measuring complexity and ensuring that AI agents preserve the existing complex life in our world.

Longtermism

A big opportunity I see for complexity science applied to longtermism is in understanding the dynamics of societies which are distant from us in time, either in the past or the future. Simulations can be used to encode assumptions and hypothetical scenarios even with little or no hard data. Simulations have already been used in archaeology to study past societies, for example the influential Artificial Anasazi agent-based model, which simulates an ancient Native American civilisation. Importantly, simulations could be used to understand long-term historical trends and previous societal collapses (e.g. this paper). This seems very relevant for understanding how to avoid such collapses in future.
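
As a taste of this style of modelling, here is a toy resource-population feedback model showing overshoot and collapse. The functional forms and all parameters are invented for illustration, not calibrated to any real society:

```python
REGEN = 0.05    # resource regrowth rate (logistic)
R_MAX = 1000.0  # maximum resource stock the environment supports
NEED = 1.0      # resource units one person needs per year

population, resource = 50.0, 1000.0
for year in range(301):
    # Harvesting gets harder as the resource is depleted.
    harvest = min(resource, 1.5 * NEED * population * (resource / R_MAX))
    food_ratio = harvest / (NEED * population)
    # The population grows on surplus and shrinks under famine.
    population *= 1 + 0.03 * (food_ratio - 1)
    # The resource regrows logistically, minus whatever was harvested.
    resource = max(1.0, resource + REGEN * resource * (1 - resource / R_MAX) - harvest)
    if year % 50 == 0:
        print(f"year {year:3d}: population {population:7.1f}, resource {resource:7.1f}")
```

Even in ten lines, the feedback between the two stocks produces a boom, an environmental drawdown, and then a long decline: the qualitative arc that collapse models try to capture with far more empirical detail.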

You could also imagine modelling hypothetical future worlds: for example, a simulation incorporating a high level of detail about a future society, similar to Robin Hanson’s Age of Em, but encoded in an agent-based model. This would impose a degree of rigour, and would test the internal consistency of any hypotheses about what such a world would look like. There is even a nascent project to create ABMs of worlds from science fiction, providing interactive and dynamic explorations of these worlds.

Tools - Simulation and computational experiments 

I have explored several possible applications for complex systems modelling and simulation within EA cause areas; it is worth digging into why these are appropriate tools. Complex systems are often unsuited to standard mathematical modelling tools which rely on aggregation and simplification, so we need a different way of modelling them. Computer simulations are an alternative approach: agents and components of a system are represented individually, and the interactions between them are simulated directly, capturing nonlinear effects such as feedback. A sub-field of this type of simulation is agent-based modelling, which has been employed in fields such as biology, epidemiology, social science and economics. Computer simulation can be viewed as a new way of doing science, alongside experiment and theory. It can help us study emergent effects, as we can recreate, in a computational laboratory, how complex macro-behaviours result from micro-level behaviour at the agent level.

There are many benefits to using computer simulations for decision making, and many of these benefits are shared with traditional mathematical models. In an article called Why Model?, Joshua Epstein, one of the pioneers of agent-based modelling for social science, sets out 16 reasons why computer models can be useful beyond simply predicting the future, which is often assumed to be the sole purpose of modelling. One major advantage is that modelling forces us to formalise our understanding of a system. As Epstein points out, without explicitly defined models we typically have to rely on informally specified mental models of complex systems, so there is value in attempting to formalise our mental models, to expose any contradictory assumptions and to explicitly combine those assumptions with observed data. These arguments apply to all mathematical modelling; however, I think they are particularly applicable to ABMs, where we may have some approximate understanding of the behaviour of individual agents, consisting of simple rules, and we want to understand the implied consequences of those rules at the macro level. Epstein is a major proponent of using bottom-up models to understand social systems, a process he calls generative social science, whose motto is “If you didn’t grow it, you didn’t explain its emergence”.
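
Schelling’s segregation model is perhaps the canonical example of “growing” a macro-level pattern from simple agent rules: even mildly intolerant agents self-sort into strongly segregated neighbourhoods. Here is a minimal sketch (grid size, tolerance and vacancy rate are arbitrary choices for illustration):

```python
import random

random.seed(5)

SIZE = 30         # 30x30 grid with wrap-around edges
EMPTY = 0.1       # fraction of vacant cells
TOLERANCE = 0.3   # an agent is content if >= 30% of its neighbours match it

# Populate the grid with two agent types (1 and 2) plus some vacancies (None).
cells = [1, 2] * int(SIZE * SIZE * (1 - EMPTY) / 2)
cells += [None] * (SIZE * SIZE - len(cells))
random.shuffle(cells)
grid = [cells[r * SIZE:(r + 1) * SIZE] for r in range(SIZE)]

def like_share(r, c):
    """Fraction of an agent's occupied neighbours that share its type."""
    me, same, total = grid[r][c], 0, 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            n = grid[(r + dr) % SIZE][(c + dc) % SIZE]
            if (dr, dc) == (0, 0) or n is None:
                continue
            total += 1
            same += n == me
    return same / total if total else 1.0

def step():
    # Every discontent agent moves to a randomly chosen vacant cell.
    vacant = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] is not None and like_share(r, c) < TOLERANCE]
    for r, c in movers:
        vr, vc = vacant.pop(random.randrange(len(vacant)))
        grid[vr][vc], grid[r][c] = grid[r][c], None
        vacant.append((r, c))

def mean_like_share():
    shares = [like_share(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] is not None]
    return sum(shares) / len(shares)

print(f"average like-neighbour share before: {mean_like_share():.2f}")
for _ in range(30):
    step()
print(f"average like-neighbour share after:  {mean_like_share():.2f}")
```

No agent in this model wants segregation; agents are content in mixed neighbourhoods. The strongly sorted end state is “grown” from the interaction of their mild preferences, which is Epstein’s point in a nutshell.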

Computer simulations allow us to test interventions in a system. In particular, we can test interventions at a more granular level, where they may affect different agents in different ways. This allows governments to test policies in a safe simulated environment before implementing them in the real world. For example, with Covid-19 a detailed simulation of a country’s population could be used to test interventions such as lockdowns, which would allow us to encode the assumption that a certain fraction of people won’t adhere to restrictions. We can also model heterogeneous populations with varying behaviours and different vulnerabilities to the virus. This is particularly important with pandemics such as Covid-19, since often the only data we have access to are macro variables such as total caseload, number of deaths and reproduction rate. However, these aggregate variables are driven by social networks and the behaviour of agents within them at the micro level, which is in turn affected by policy. Connecting aggregate data to the individual behaviour of populations is something that agent-based models are well suited to.
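
A heavily simplified sketch of this kind of model is below: a population of agents moves through susceptible, infectious and recovered states, and a configurable fraction of them adheres to a lockdown that cuts their daily contacts. Every parameter here is an assumption for illustration, not an estimate:

```python
import random

random.seed(6)

POP = 10_000
P_TRANSMIT = 0.04      # chance an infectious contact causes infection (assumed)
CONTACTS_NORMAL = 12   # daily contacts without restrictions (assumed)
CONTACTS_LOCKDOWN = 3  # daily contacts for people who comply with lockdown
ADHERENCE = 0.75       # assumed fraction of the population that complies
DAYS_INFECTIOUS = 7

def run(lockdown_start):
    # State per agent: "S" susceptible, an int counting infectious days left, "R" recovered.
    state = ["S"] * POP
    for i in random.sample(range(POP), 10):  # seed ten initial infections
        state[i] = DAYS_INFECTIOUS
    adheres = [random.random() < ADHERENCE for _ in range(POP)]
    for day in range(250):
        locked = day >= lockdown_start
        infectious = [i for i, s in enumerate(state) if isinstance(s, int)]
        for i in infectious:
            k = CONTACTS_LOCKDOWN if (locked and adheres[i]) else CONTACTS_NORMAL
            for j in random.choices(range(POP), k=k):  # random daily mixing
                if state[j] == "S" and random.random() < P_TRANSMIT:
                    state[j] = DAYS_INFECTIOUS
        for i in infectious:
            state[i] = state[i] - 1 if state[i] > 1 else "R"
    return sum(s == "R" for s in state)

for label, start in (("from day 20", 20), ("from day 40", 40), ("never", 10**9)):
    print(f"lockdown {label}: total ever infected = {run(start)}")
```

Because each agent is represented individually, assumptions like partial adherence drop in naturally as one line, where an aggregate equation-based model would need to be re-derived.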

A further advantage of this type of model is that it tends to be interpretable: the agent-level logic can correspond neatly to our intuitive understanding of the mechanisms of the system we want to model. This contrasts with other methods of predictive modelling, such as aggregated analytical mathematical models or black-box machine learning methods such as deep neural networks, where it can be difficult to understand the internal workings of the model or to map them onto our own mental models.

Computer simulations can also aid decision making through the process of wargaming, where hypothetical scenarios are tested collaboratively in a simulated environment. This can foster mutual understanding of a problem between multiple parties. When such simulations are combined with interactive visualisations and user interfaces, they allow decision makers to try out many different scenarios, and to build intuitions about the system being modelled and how it may respond to certain actions. Most models involve a lot of uncertainty and require assumptions to be made; the right user interfaces can expose these assumptions alongside unknown parameters, and allow decision makers to tweak them and put in their own assumptions and parameter estimates.

Nowadays increasingly large and complex simulations of the real world are being built for such planning and training activities, often referred to as “digital twins”, which combine large amounts of data on real world systems with simulation models.

Covid-19 provides an informative example of how such simulations can be employed to improve critical decision making. Early in the pandemic, agent-based models such as the one built by Neil Ferguson and his team at Imperial College heavily influenced the UK government’s response, and were credited as one of the factors that eventually pushed the government to go ahead with lockdown, due to the large number of fatalities the model predicted. Admittedly, several criticisms were levelled against this model in particular: it didn’t take into account how people would adjust their behaviour in the absence of government intervention, and the model code was over-complicated and not well tested. These criticisms are accurate to some extent, but they only apply to the specific model in question, not to ABMs more generally. Addressing them by building improved ABMs is a huge opportunity for tackling future pandemics.

There is a lot of potential value in combining models from different domains to understand the connections between real-world systems. ABMs such as the Imperial model only take into account disease propagation between abstract agents; future ABMs could incorporate detailed behavioural models of how people respond to infection levels. Many more dynamics could be added, such as realistic population movement based on data from the UK population. I recently worked on a Covid-19 ABM which combines census and population-movement data with a disease model, as part of the Royal Society’s RAMP initiative. If the models used by governments had taken into account factors such as population movement and behavioural response, this might have narrowed the uncertainty in the forecasts of disease spread.

I should sound a note of caution here: there will always be uncertainty in models, both in the data and in whether the model logic accurately reflects real-world dynamics. Some sources of uncertainty may dominate others; for example, in the early days of the pandemic, the lack of knowledge of disease parameters (e.g. R0) was perhaps the largest source of uncertainty, so there would have been little value in adding detail and refining other aspects of the model until this had been narrowed down. However, models themselves can help to investigate which sources of uncertainty matter most, using techniques such as sensitivity analysis.
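
As a minimal sketch of one-at-a-time sensitivity analysis, the following sweeps each uncertain parameter of a toy SIR model across an assumed plausible range, holding the others at baseline, and reports how much the headline output moves:

```python
def final_size(r0=2.5, days_infectious=7.0, pop=1_000_000, seed_cases=100):
    """Deterministic discrete-time SIR: a tiny stand-in for a bigger model."""
    s, i, r = pop - seed_cases, seed_cases, 0.0
    beta, gamma = r0 / days_infectious, 1.0 / days_infectious
    for _ in range(500):
        new_inf = beta * i * s / pop
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r

# One-at-a-time sensitivity: the parameter ranges are assumptions for illustration.
for name, lo, hi in [("r0", 1.5, 3.5), ("days_infectious", 4.0, 10.0)]:
    out_lo, out_hi = final_size(**{name: lo}), final_size(**{name: hi})
    print(f"{name} in [{lo}, {hi}]: final size ranges "
          f"{out_lo:,.0f} to {out_hi:,.0f}")
# Note: with r0 held fixed, the infectious duration barely moves the final
# size at all -- exactly the kind of insight a sensitivity scan surfaces.
```

Here the scan shows the output is dominated by R0, telling us where to direct data collection before adding any further model detail.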

Neil Ferguson and his team did commendable scientific work with the tools they had available; however, their Covid model was cobbled together in a short amount of time by repurposing code from a decade-old flu model. Investing in scalable and flexible agent-based models ahead of time would produce sophisticated models that are a major asset in future pandemics, or in other unforeseen emergencies requiring government intervention in complex systems.

An even larger opportunity is in combining disease dynamics with an agent-based model of the economy. The real challenge of Covid-19 policy was making difficult tradeoffs between public health and the economy over both the short and long term. Combining disease dynamics and economics in the same simulation could have helped to forecast and quantify the true consequences of government action or lack thereof.

Creating modern user interfaces could further increase the value of such simulations, allowing decision makers to interact with them directly, rather than relying on a slow feedback loop of asking scientists to re-run the model with different parameters. This would let people inject their own assumptions and priorities into a model, and build up an intuition for how a given system behaves in different scenarios.

In my view a valuable aspect of ABMs is not in perfectly predicting the future, but forcing people to face up to the stark reality of what their current data and assumptions are telling them. Computational models of complex systems are rarely able to produce accurate point predictions of future events, due to uncertainty and chaotic effects, however they can generate useful distributions of plausible outcomes.

Complex systems scientists led the way in advocating for a well-informed Covid-19 response from the very early days of the pandemic. Yaneer Bar-Yam, a complexity scientist at the New England Complex Systems Institute, was one of the earliest and most prominent advocates of a “zero covid” strategy, through his website endcoronavirus.org. He was strongly recommending controls on international movement as early as January 2020. 

Reflections on modelling and simulation

From a technical point of view, what I am recommending are large-scale simulations which incorporate multiple different domains. These techniques are still relatively new; it’s only recently that enough computational power and data have become available to build simulations of sufficient scale to be realistic. They have a lot of promise, but it is still early days.

Successfully building and deploying simulations to improve decision making involves overcoming some daunting scientific and engineering challenges. It’s not just a case of writing more code and running with more agents and more data. Most academic research with ABMs sticks to simplified “idea models” which are easier to reason about and validate; in many ways the art of modelling is about simplifying a problem down to only the most important details. Most complexity scientists appreciate the value of simple agent-based models, and some would caution against adding too much detail to these simulations, since that could detract from generalisable insights. There are advantages to the increased realism that comes from larger datasets and more detailed simulations, although it can be tricky to know when this additional detail is beneficial, as it introduces more degrees of freedom and more potential for error. I would argue that any model at all is an improvement over relying on implicit mental models, with the caveat that it is not always obvious how to integrate model output into your overall understanding of the problem, particularly if you have multiple models and data sources which conflict with each other. If this is handled properly, then larger, more detailed models can add to scientific insight rather than detract from it.

Another challenge is that simulations are less portable between problems than alternative techniques such as deep learning. For a deep learning model, you can develop the learning algorithm once, then apply it to problems in different areas by training on new data. Writing a simulation, however, requires encoding domain knowledge into the model, so this typically has to be done separately for each new domain. A related problem is that the software tools for building ABMs have not received nearly as much investment as those for deep learning, so it remains difficult to build ABMs which can scale to large numbers of agents, i.e. the millions of agents needed to represent the population of a whole country. The domain experts who have the knowledge to build an accurate model may not have the software engineering skills to implement that knowledge in a scalable simulation. This lack of tooling is one of the factors holding back wider adoption of simulations for decision making.

In my view simulation models are one of the best ways to formally combine insights and data sources from multiple domains, showing the implied consequences of our assumptions and our limited understanding of a real-world system. They can also help non-technical decision makers leverage knowledge from multiple domain experts. However, they are not a crystal ball, and there are clear limits to their predictive abilities. This is partly due to incomplete understanding of the system dynamics, but even well-defined models are subject to the phenomenon of chaos, popularly known as the butterfly effect. If we don’t have perfect knowledge of the starting conditions in a complex system, the simulated trajectory will quickly diverge from reality, which is another reason why distributions are favoured over point predictions. This is why weather forecasts are only accurate about one week out, even though the underlying physical equations are very well understood. Yet while weather is unpredictable, climate is broadly predictable many decades out (or at least the direction of change is), due to structural dynamics that are fairly well understood. So I believe there is a lot of valuable scientific work to be done in understanding what is analogous to weather (unpredictable fluctuations) versus climate (structured and predictable) in complex social systems such as the economy. For example, could the 2008 financial crisis have been predicted ahead of time? It seems to have been caused by deep structural features of the financial system in the run-up to the crisis, but maybe it only seems inevitable in hindsight.
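
The logistic map is the classic minimal demonstration of this sensitive dependence on initial conditions: two trajectories that start a millionth apart become completely decorrelated within a few dozen steps.

```python
r = 3.9                     # logistic map parameter, inside the chaotic regime
x, y = 0.400000, 0.400001   # two starting points differing by one millionth

for step in range(60):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 9:
        print(f"step {step + 1:2d}: x={x:.6f}  y={y:.6f}  gap={abs(x - y):.6f}")
```

The gap grows roughly exponentially until it is as large as the trajectories themselves, even though the rule is a one-line deterministic equation: this is why ensembles of runs, not single trajectories, are the right output for chaotic systems.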

While simulations may have some predictive power, it is often better to think of them as tools for exploring and generating intuition about a system.

A model of movement building 

In the Nature Reviews article on complexity economics, W. Brian Arthur explains that he considers complexity science to be more of a “movement within science” than a science itself. In many ways this reminds me of effective altruism, though complexity science has been around for longer: over three decades. I think there is a lot we can learn from complexity science as an example of building a pioneering interdisciplinary intellectual movement from scratch, one which has achieved a lot of success and influence.

A key part of the complexity science movement is the Santa Fe Institute (SFI), an independent research organisation founded in the 1980s by physicists from the Los Alamos National Laboratory; it is arguably the first and most prominent complexity science organisation. It can take a lot of credit for developing and popularising the ideas behind complexity science. SFI has effectively created a new interdisciplinary identity from scratch, similar to EA. It has also been very influential on key decision makers; many prominent academics, business people and politicians have visited SFI over the years. This flow of visitors, as well as of new postdocs, is a key component of its success, allowing a continuous transfer of ideas in and out.

Something I admire about SFI is that they are willing to tackle ambitious and broad scientific problems and make tangible progress on them. This has resulted in notable scientific achievements, such as scaling laws in biology, to name just one example. This is all the more impressive since they have built a successful scientific research organisation outside the confines of the traditional university system, with minimal bureaucracy and liberated from many academic incentives, allowing people to focus on their research. This includes not having tenure, with a more flexible evaluation of what success looks like. Researchers are empowered to organise their own programs and events, and there are no departmental boundaries or labelled disciplines, which promotes interdisciplinary work.

A particular strength of the Santa Fe Institute, in my opinion, is PR and outreach. They have a highly polished website with great media content, and they offer many learning resources and introductory courses. They have also fostered close connections to arts and culture, including music, which is perhaps unusual for a theoretical research institute. David Krakauer, the current SFI president, speaks in an almost poetic manner, yet he conveys a real sense of excitement and wonder, and I think this mode of communication can be inspirational to a lot of people. Clearly there is a balance to be struck in knowing when to be rigorous and cautious versus poetic and metaphorical. Whatever SFI is doing, it seems to be working: they appear to be well funded and have been running successful research programs for decades. Part of this success may be attributed to intentionally maintaining a small size, of around 30 resident researchers, which has allowed them to stay agile. SFI relies heavily on private donations, supplemented by government funding, which gives flexibility but requires a prestigious reputation; this is a similar situation to many EA organisations.

What can effective altruism learn from this success? In fact, Stefan Torges’ EA Forum post on “Ingredients for creating disruptive research teams” mentions SFI and some of the lessons that can be drawn for EA research teams. Perhaps replicating this level of success and prominence is simply a matter of time, of building up a reputation and a field over several decades. It certainly seems instructive to look at organisations like the Santa Fe Institute, and at the field of complexity science more broadly.

Conclusion

I have tried to give a taste of what complexity science is, how complex emergent phenomena can arise from simple interactions, how systemic problems can be approached in a holistic and bottom-up way, and how tools such as agent-based modelling might be useful and relevant to EA causes. Although I have focused more on complex systems modelling and simulation as a tool, I believe there are many aspects of complexity science which may be of value to EA. 

Beyond just a set of tools I think complexity science provides an interesting example of an intellectual movement which has grown successfully over the last few decades. It has created an interdisciplinary identity from scratch, and provides a valuable philosophy and approach to tackling large, foundational scientific questions. The Santa Fe Institute is a great example of this, however there are other well regarded independent research institutions such as the New England Complex Systems Institute and Complexity Science Hub Vienna. Many universities around the world now have their own complex systems research groups. Additionally and of particular relevance is the recently launched Simon Institute, an EA organisation applying many ideas from complexity science, among other approaches, to improving longtermist governance. 

I am very keen to receive any comments or feedback on the ideas in this post. If anyone has ideas of how complexity science could be applied to EA causes, or is interested in collaborating on related projects in this area then please get in touch. 

I believe working to improve complex systems modelling and simulation, either by building tools for modellers or applying simulations directly, could be a very high impact career path. If you are interested in exploring this, regardless of your background or existing skill set, then I am happy to discuss and advise.

Many thanks to all the people who reviewed drafts of this post: Max Stauffer, Konrad Seifert, Nora Ammann, Tamara Borine, Adam Bricknell, Michelle Hutchinson and Vicky Yang. 

In particular thank you to Max for prompting me to write this post in the first place.

I have included some links and resources for further reading below. 

Further Reading

Comments

As someone with a background in theoretical physics, I am very skeptical of the claims made by complexity science. At a meta-level I dislike being overly negative, and I don't want to discourage people posting things that they think might be interesting or relevant on the forum. But I have seen complexity science discussed now by quite a few EAs rather credulously, and I think it is important to set the record straight.

On to the issues with complexity science. Broadly speaking, the problem with "complexity science" is that it is trying to study "complex systems". But the only meaningful definition of "complex system" is a system that is not currently amenable to mathematical analysis. (Note that this is not always the definition that "complexity scientists" seem to actually use, since they like to talk about things like the Ising model, which is not only well understood and long studied by physicists, but was actually exactly solved in 1944!) Trying to study the set of all "complex systems" is a bit like trying to study the set of animals that aren't jellyfish, snails, lemurs or sting rays.

The concepts developed by "complexity scientists" are usually either well-known and understood concepts from physics and mathematics (such as "phase transition", "non-linear", "non-equilibrium", "non-ergodicity", "criticality", "self-similarity") or else so hopelessly vague as to be useless ("complexity", "emergence", "non-reducibility", "self-organization"). If you want to learn about the former topics I would just recommend reading actual textbooks written by and aimed at physicists and mathematicians. For instance, I particularly like Nonlinear Dynamics and Chaos by Strogatz, if you want to understand dynamical systems, and David Tong's lecture notes on Statistical Physics and Statistical Field Theory if you want to understand phase transitions and critical phenomena.

Note that none of these concepts are new. Even the idea of applying these concepts to the social sciences is hardly novel; see this review for example. Note the lack of hype, and the lack of buzzwords.

Unfortunately, the research that I've seen under the moniker of "complexity science" uses these (precise, limited in scope) concepts both liberally and in a facile way. As a single example, let's have a look at "scaling laws". Scaling laws are symptoms of critical behavior, and, as already mentioned, such critical phenomena have long been studied by physicists. If you look at empirical datasets (such as those of city sizes, or how various biological features scale with the size of an animal) sometimes you also find power-laws, and so naturally we might try to claim that these are also "critical systems". But this plausible idea doesn't seem to work in reality, for both theoretical and empirical reasons.

The theoretical problem is that pretty much all critical systems in physics require fine-tuning. For instance, you might have to dial the temperature and pressure of your gas to really specific values in order to see the behavior. There have been attempts to find models where we don't need to fine-tune, and this is known as "self-organized criticality", but these have basically all failed. Models which are often claimed to possess "self-organized criticality", such as the forest-fire model, do not actually have this behavior. On the empirical side, most purported "power-laws" are, in practice, not obviously power-laws. A long discussion of this can be found here, but essentially the difficulty is that it is hard in practice to distinguish power-laws from other plausible distributions, such as log-normals.

If we want to talk about the hopelessly vague topics, well, there is really nothing much to be said about them, either by complexity scientists or by anyone else. To pick on "emergence" for the moment, I think this post from The Sequences sums up nicely the emptiness of this word. There is a notion of "emergence" that does appear in physics, known as "effective field theory", which is very central to our current understanding of both particle and condensed matter physics. You can find this discussed in any quantum field theory textbook (I particularly like Peskin & Schroeder). For some reason I've never seen complexity scientists discuss it, which is strange, since this is the precise mathematical language physicists use to describe the emergence of large-scale behavior in physical systems.

TLDR There is no secret sauce to studying complicated systems, and "complexity science" has not made any progress on this front. To paraphrase a famous quote, "The part that is good is not original, and the part that is original is not correct (and also misapplied)."

Overall, this seems like a weak criticism worded strongly. It looks like the opposition here is more to the moniker of Complexity Science and its false claims of novelty than to the study of the phenomena that fall within the Complexity Science umbrella. This is analogous to a critique of Machine Learning that reads "ML is just a rebranding of Statistics". Although I agree that it is not novel and there is quite a bit of vagueness in the field, I disagree on the point that Complexity Science has not made progress.

I think the biggest utility of Complexity Science comes in breaking disciplinary silos. Rebranding things as Complexity Science brings all the ideas on systems from different disciplines together under one roof. If you are a student, you can learn all these phenomena in one course or degree. If you are a professor in a Complexity department, you can work on anything that relates to complex systems phenomena. The flip side is that you might end up living in a world of hammers without nails: you would just have a bunch of tools without strong domain knowledge in any of the systems that you are studying.

My take on Complexity Science is that it is a set of tools to be used in the right context. For your specific context, some or none of the tools of Complexity Science can be useful. Where Complexity Science falls apart for me is when it tries to lose all context and generalize to all systems. I think the OP here is trying to stay within context. The post is just saying we can build ABMs to approach some specific EA cause areas. So I am more or less onboard with this post.

On a final note, I am in agreement with your critique on abuse of Power Laws. There are too many people that just make a log-log plot, look at the line and exclaim "Power law!". The Clauset-Shalizi-Newman paper you linked to is the citation classic here. For those who do network theory, instead of trying to prove your degree distribution is a power law, I would recommend doing Graphlet Analysis.

If the OP wants to discuss agent-based modeling, then I think they should discuss agent-based modeling. I don't think there is anything to be gained by calling agent-based models "complex systems", or that taking a complexity science viewpoint adds any value.

Likewise, if you want to study networks, why not study networks? Again, adding the word "complex" doesn't buy you anything.

As I said in my original comment, part of complexity science is good: this is the idea we can use maths and physics to modeling other systems. But this is hardly a new insight. Economists, biophysicists, mathematical biologists, computer scientists, statisticians, and applied mathematicians have been doing this for centuries. While sometimes siloing can be a problem, for the most part ideas flow fairly freely between these disciplines and there is a lot of cross-pollination. When ideas don't flow it is usually because they aren't useful in the new field. (Maybe they rely on inappropriate assumptions, or are useful in the wrong regime, or answer the wrong questions, or are trivial and/or intractable in situations the new field cares about, or don't give empirically testable results, or are already used by the new field in a slightly different way.)  The "problem" of "siloing" that complexity science claims to want to solve is largely a mirage.

But of course, complexity science makes greater claims than just this. It claims to be developing some general insights into the workings of complex systems. As I've noted in my previous comment, these claims are at best just false and at worst completely vacuous. I think it is dangerous to support the kind of sophistry spouted by complexity scientists, for the same reason it is dangerous to support sophistry anywhere. At best it draws attention away from scientists who are making progress on real problems, and at worst it leads to piles of misleading and overblown hype.

My criticism is not analogous to the claim that "ML is just a rebranding of statistics". After all, ML largely studies different topics and different questions to statistics. No, it would be as if we lived in a world without computers, and ML consisted of people waxing lyrically about how "computation" would solve learning, but then when asked how would just say basic (and sometimes incorrect) things about statistics.

@djbinder Thanks for taking the time to write these comments. No need to worry about being negative, this is exactly the sort of healthy debate that I want to see around this subject.

I think you make a lot of fair points, and it’s great to have these insights from someone with a background in theoretical physics, however I would still disagree slightly on some of them, I will try to explain myself below.

I don’t think the only meaningful definition of complex systems is that they aren’t amenable to mathematical analysis; that is perhaps a feature of them, but not always true. I would say the main hallmark is that there is a surprising level of sophisticated behaviour arising from apparently simple rules at the level of the individual components that make up the system, and that it can be a challenge to manage and predict such systems.

It is true that the terms “complexity” and “emergence” are not formally defined, and this perhaps means they end up getting used in an overly broad way. The area of complexity science has also been somewhat prone to hype. I myself have felt uncomfortable with the term “emergence” at times; it is maybe still a bit vague for my tastes. However, I have landed on the opinion that it is a good way to recognise certain properties of a system and to categorise different systems. I agree with Eliezer Yudkowsky’s point that it isn’t a sufficient explanation of behaviour, but it is still a relevant aspect of a system to look for, and can shape expectations. The aspiration of complexity science is to provide more formal definitions of these terms, so I agree that there is more work to do to refine them. However, just because these terms can’t yet be formally or mathematically defined doesn’t mean they have no place in science. The same is true of words like “meaning” and “consciousness”, which are nonetheless important concepts.

I think the main point of disagreement is whether “complexity science” is a useful umbrella term. I agree that plenty of valuable interdisciplinary work applying ideas from physics to the social sciences is done without reference to “complexity” or “complex systems”; however, by highlighting common themes between these different areas, I think complexity science has promoted a lot more interdisciplinary work than would have happened otherwise. With the review paper you linked, I would be surprised if many of the authors of those papers didn’t have some connection to complexity science or SFI at some point; in fact one of the authors directs a lab called the “Center for complex networks and systems research”. Even Steven Strogatz, whose textbook you mentioned, was an external SFI professor for a while! It’s true that his affiliation doesn’t mean complexity science can take credit for all his prior work. Most complexity scientists do not typically mention complexity or emergence much in their published papers, which will just look like rigorous papers in a specific domain; the flip side, as you argued, is that this maybe casts doubt on the utility of these terms. But I would say that this framing of the problem (as “complex systems” in different domains having underlying features in common) has helped to motivate and initiate a lot of this work. The area of complexity economics is a great example: economics has always borrowed ideas from physics (all the way back to Walrasian equilibrium), but this process had stalled somewhat in the latter half of the 20th century. Complexity science has injected a lot of new and valuable ideas into economics, and I would say this comes from framing the economy as a complex system, not just from SFI getting the right people in the same room together (although that is a necessary part).

Perhaps I am just less optimistic than you about how easy it is to do good interdisciplinary work, and how much of it would happen organically in this area without a dedicated movement. I maintain that complexity science is a good way to encourage researchers to push into problem areas that are less amenable to reductionism or mathematical analysis, since this is often very difficult and risky.

Anyway, the main reason I wanted to write this post is not so that EA people go around waxing lyrical with words like “complexity” and “emergence” all the time, but to point to complexity science as an example of a successful interdisciplinary movement which EA can perhaps learn from (even just from a public relations point of view), and to look at some of the tools from complexity science (e.g. ABMs) and suggest that these might be useful. @Venkatesh makes a good point that my main recommendation here is that ABMs may be useful for EA cause areas, so perhaps I should have separated that bit out into its own forum post.

Thanks for the reply Rory! I think at this point it is fairly clear where we agree (quantitative methods and ideas from maths and physics can be helpful in other disciplines) and where we disagree (whether complexity science has new insights to offer, and whether there is a need for an interdisciplinary field doing this work separate from the ones that already exist), and don't have any more to offer here past my previous comments. And I appreciate your candidness noting that most complexity scientists don't mention complexity or emergence much in their published research; as is probably clear I think this suggests that, despite their rhetoric, they haven't managed to make these concepts useful.

I do not think the SFI, at least judging from their website, and from the book Scale which I read a few years ago, is a good model of public relations that EAs should try to emulate. They make grand claims about what they have achieved which seem to me to be out of proportion to their actual accomplishments. I'm curious to hear what you think the great success stories of SFI are. The one I know the most about, the scaling laws, I'm pretty skeptical of for the reasons outlined previously. I had a look at their "Evolution of Human Languages" program, and it seems to be fringe research by the standards of mainstream comparative linguistics. But there could well be success stories that I am unfamiliar with, particularly in economics.

  • During EAG London 2021 @Emerson Spartz and I initiated an informal but successful "Complexity and EA" meetup. 
  • @Michael Hinge wrote "Complexity Science, Economics and the Art of Zen" inspired by the meeting 
  • We now have a Telegram Chat and Discord Server for people interested in the topic, please reach out if you want to get added. (Not posting the link to keep entry selective) ("ComplexitEA")

Further resources we collected or found relevant:

Thanks for the great list of resources!

Coincidentally I just discovered the Jim Rutt Show podcast recently and I've been enjoying it.

Glad you enjoy it!

Thanks a lot for posting this! I also have the same feeling as finm, in that I wanted to write something like this; but even if I had written it, it wouldn't have been as extensive as this one. Wonderfully done!

To add to the pool of resources that the post has already linked to:

  1. You can meet other people interested in Complexity Science/Systems Thinking here: https://www.complexityweekend.com/ It is a wonderful community with a good mix of rookies and experts. So even if you are new to Complexity you should feel free to join in. I participated in their latest event and got a lot out of it (specifically on making ABMs with MESA)
  2. If you want a simple (also free!) intro to Complexity, I would recommend Introduction to the Modeling and Analysis of Complex Systems by Hiroki Sayama. If you go to the Wikipedia page on Complex System, the picture in it that portrays several subfields of Complexity was taken from this book. It should be easy to follow if you just know high school math. I have personally only read the Networks part of it (since that is the subfield of Complexity I have mostly worked on) and it was good enough to get my feet wet.

Haven't read this fully yet, but I'm really excited to see someone thinking about applications of complexity science / economics for EA, and I was vaguely intending to write something along these lines if nobody else did soon. So thanks for posting!

Hi Rory,

Thanks for the post!

In my view a valuable aspect of ABMs is not in perfectly predicting the future, but forcing people to face up to the stark reality of what their current data and assumptions are telling them.

I really like this point.

This was a really interesting read, thank you for putting together such an extensive resource! I've also been thinking about how to apply complexity economics to EA and great to hear others are into this.

Adding to the resources:

This sounds pretty exciting to me, thanks for writing it up. I was reminded of Shahar Avin's work on role-playing games to predict the interactions between major players in AI. There might be a possibility to build on the insights from his project as a basis for agent-based simulations from the field of complexity theory.

Exploring AI Futures Through Role Play https://www.cser.ac.uk/resources/ai-futures-role-play/

These links are excellent! I hadn't come across these before, but I am really excited about the idea of using roleplay and table top games as a way of generating insight and getting people to think through problems. It's great to see this being applied to AI scenarios.
