Since working with the New Approaches to Economic Challenges (NAEC) unit at the OECD in 2019, I have been confronted with a perspective on economics that is entirely absent from undergraduate teaching. Namely: complexity economics. It sees the economy as a complex adaptive system made up of a large number of heterogeneous interacting agents. I like W. Brian Arthur's description:
Complex systems are ones with multiple elements adapting or reacting to the pattern these elements create (W. Brian Arthur)
Complex adaptive systems are found across various disciplines, including ecology, biology, computer science, and physics. Each of these fields has made progress in modelling and understanding such complex systems. It is time economics learned from them, especially since economics is the least friendly discipline when it comes to outside ideas. Friedrich von Hayek once noted:
A poor economist would be someone who was only an economist (Friedrich von Hayek)
The Santa Fe Institute has pioneered these ideas and is still the location of much debate. Recently, I read the proceedings of the 2019 fall symposium on complexity economics. Within them I found some gems. In this post I will share the important ideas and questions that jumped out to me.
Why do we need Complexity Economics?
Beginning with W. Brian Arthur, the economy is a complex system where agents constantly adapt their behavior in response to the outcomes they create. As a subset of a larger social system, the economy and its institutions evolve over time. In this regard, economics addresses two issues:
- Allocation → the determination of quantities e.g. general equilibrium, international trade, and game theoretic outcomes
- Formation → the determination of processes e.g. economic development, novel technologies, structural change, the arrival of new institutions, and temporary bubbles and crashes
The first problem can be and has been addressed in a mathematizable manner (e.g. the huge literature founded on equilibrium notions). The second problem is cumbersome to capture mathematically: how does one capture the evolutionary adaptation of many different agents with complicated feedback loops? Instead, it requires a computational approach, one that has become increasingly feasible.
Early on, researchers were aware of the complex, dynamic nature of economies. One could read Adam Smith's invisible hand as one such example. However, tools for non-equilibrium analysis were not available. Hence the question of formation was rephrased into: "What behavior would be upheld by—consistent with—the aggregate patterns they create? What patterns would call for no changes in micro-behavior, i.e. an equilibrium?" This gave birth to the representative rational actor that has been so widely critiqued. To this end, genetic algorithms and evolutionary modeling have also been studied in economics, in the context of whether they can arrive at the rational equilibrium outcomes.
The issue of formation and adaptation has thus not been properly addressed within economics. In equilibrium, there can be no new formation by definition. It is a set of solutions that is adiabatic, in the sense that the underlying structures and parameters do not change. Today it is feasible to analyse large-scale models that have heterogeneity, interactions, evolving behavior, and feedback effects between the micro-, meso- and macro-level. The rapidly growing body of work on agent-based modelling (across disciplines) should be evidence enough. With that we can directly investigate some questions posed by Brian Arthur:
- How will the pattern of the system today shape individual decisions which will then collectively create the pattern of the system tomorrow?
- What would economics look like if we allowed nonequilibrium?
- How do individual actors in the economy react, make decisions, anticipate, and strategize, in response to the pattern they have created?
- And what kinds of networks, institutions, technologies, artifacts, and patterns emerge and evolve?
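Arthur's first question is exactly the dynamic of his own El Farol bar problem, which makes for a compact illustration. Below is a minimal sketch of my own (not from the proceedings, and the specific predictor rules are made up): 100 agents decide each week whether to go to a bar with capacity 60, each acting on whichever of a few simple attendance predictors has been most accurate so far. The attendance they jointly create is the very pattern each of them reacts to.

```python
import random

random.seed(0)
N_AGENTS, CAPACITY, ROUNDS = 100, 60, 200

def make_predictors():
    # A few toy forecasting rules over the attendance history; the random
    # window k and mirror point m make agents heterogeneous.
    k = random.randint(1, 8)
    m = random.randint(60, 140)
    return [
        lambda h: h[-1],                                # same as last week
        lambda h, k=k: sum(h[-k:]) / len(h[-k:]),       # recent average
        lambda h: 2 * h[-1] - h[-2],                    # trend extrapolation
        lambda h, m=m: m - h[-1],                       # mirror of last week
    ]

agents = [{"preds": make_predictors(), "errors": [0.0] * 4}
          for _ in range(N_AGENTS)]
history = [random.randint(0, 100) for _ in range(4)]  # seed history

for _ in range(ROUNDS):
    going = 0
    for a in agents:
        # each agent acts on its currently most accurate predictor
        best = min(range(4), key=lambda i: a["errors"][i])
        if a["preds"][best](history) < CAPACITY:
            going += 1
    for a in agents:
        # score every predictor against the realized attendance
        for i, p in enumerate(a["preds"]):
            a["errors"][i] += abs(p(history) - going)
    history.append(going)

avg = sum(history[-50:]) / 50
print(f"mean attendance over the last 50 weeks: {avg:.1f}")
```

There is no equilibrium computed anywhere: whatever regularity attendance shows is an emergent property of agents adapting to the pattern they collectively produce.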
Eric Beinhocker brought up the idea of imagined orders, which Yuval Noah Harari defined:
Humans believe in a particular order not because it is objectively true, but because believing in it enables us to cooperate effectively and forge a better society (Sapiens)
So what? The economy isn't an objective reality but a web of ideas. Beinhocker sees the current notion of the economy as being based on maximizing utilitarianism → homo economicus → neoclassical synthesis → neoliberalism → and ultimately market capitalism. But if economics is based on a set of ideas, these can change to suit our modern needs. How should they change? Beinhocker suggests prosocial behaviors → homo sapiens → complexity economics → market humanism → the eudaimonic economy (see his lecture, an interview, recent paper, his book The Origin of Wealth, and I think there is another book coming).
I agree in principle with this approach, though I am curious to see a more complete theory of market humanism. Economics, as it stands, has provided a lot of insight into allocation and rationality, but only as a subset of the much wider analysis that is necessary. It is time to adapt our own modeling and ideas to a complexity perspective and consider ideas from other disciplines. For instance, Doyne Farmer at INET Oxford (one of my heroes) is currently working on ecological economics.
Complexity: The Intersection of Economics, Biology, Computation, and Physics
In the spirit of considering ideas from other disciplines, in this section I highlight some of the neat ideas, important questions, and other thoughts I came across as I read.
Economics and the Study of Knots
John Miller made this amazing analogy. A knot is "any complication in a length of line". We can consider the line as the underlying micro-behavior. Early economics (Smith & the pin factory) cut out the knots and considered them individually; neoclassical economics eliminated the knots and studied the straight lines in between. Recently (behavioral economics) we have studied knots again (hundreds of behavioral biases) and find that there are many. I think a complexity approach allows us to understand the formation of the knots, their persistence, and their dynamics over time: the back and forth between knots and straight lines.
The main question of Economics: Distribution
Since the industrial revolution, technology has grown exponentially. So has our consumption of resources. Our current consumption outstrips the possible regeneration of natural resources, so we are faced with dire issues of sustainability. Simply increasing goods output (and thus GDP) isn't a feasible strategy. Furthermore, we see raging inequality within and across social groups. Therefore, the main issue for economics may not be long-run productivity growth but rather the distribution of existing resources, and the use of our policy tools and models to design an institutional system that promotes equal distribution.
This was brought up by Ole Peters. Non-ergodicity means that any individual trajectory of the economy may, in the long run, do something entirely different from the average over a large ensemble of trajectories. The distinction rests on two views of probability that diverged: (1) Maxwell's ensemble view, in which probability is the relative frequency of some occurrence across an ensemble, versus (2) Boltzmann's view, in which probability is how often something happens over time. Non-ergodicity describes the case where these two measures diverge, and it appears that the economy, path dependence and all, is not ergodic. Yet we typically model economic behavior according to the ensemble average, the expectation over all possible outcomes (a side note: the probability distribution is often not well-defined in the first place, so rationality isn't well-defined either--see below). In fact, there are experiments suggesting entirely the opposite: our behavior tracks Boltzmann's time-average view. An introduction to economics applications here + a recent paper by J-P Bouchaud and R. Farmer.
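To make the divergence concrete, here is a small simulation of my own (not from the proceedings) of the multiplicative coin-flip gamble Peters often uses: heads multiplies your wealth by 1.5, tails by 0.6. The ensemble average grows by 5% per round, yet the time-average growth factor of a single path is sqrt(1.5 × 0.6) ≈ 0.95, so the typical individual trajectory decays.

```python
import random

random.seed(1)
ROUNDS, N_PATHS = 50, 10_000

# Each round: heads -> wealth * 1.5, tails -> wealth * 0.6.
# Ensemble view: expected growth per round = 0.5*1.5 + 0.5*0.6 = 1.05 > 1.
# Time view: per-round growth of one path -> sqrt(1.5*0.6) ~ 0.95 < 1.
paths = []
for _ in range(N_PATHS):
    w = 1.0
    for _ in range(ROUNDS):
        w *= 1.5 if random.random() < 0.5 else 0.6
    paths.append(w)

ensemble_mean = sum(paths) / N_PATHS   # pulled up by a few lucky paths
median = sorted(paths)[N_PATHS // 2]   # the typical individual outcome
print(f"ensemble mean wealth: {ensemble_mean:.2f}")
print(f"median wealth:        {median:.4f}")
```

The sample mean sits well above the starting wealth of 1 while the median collapses toward zero: the ensemble average is no guide to what almost every individual experiences, which is exactly the non-ergodic gap Peters points to.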
Forecasting, Acting, and Evolutionary Adapting
We deal with fundamental uncertainty (Knightian or radical uncertainty) daily. The issue is that under such uncertainty, optimal behavior is undefined. The problem we face is not well-defined (there is no probability distribution over outcomes); therefore rationality, and hence optimal behavior (really, economists mean utility-maximizing behavior), aren't well-defined either. On the other hand, people (and automata) can and do act in these situations. John Holland suggested genetic algorithms are at the heart of this: we make sense of these situations through experimentation, exploration, and imitation. The key is the tradeoff between trying something entirely new and copying the behavior of successful agents (I would say mainstream economists are certainly skewed to imitation with marginal improvements... perhaps that's why we've been going backward).
Melanie Mitchell (she has some great books) expanded on this: one of the problems of a complex system might be a maladaptive balance between exploration and exploitation. It is possible that this is the underlying explanation for the behavioral biases. At this point I would like to say that the negative connotation of bias, with respect to rational optimal outcomes, should be taken with a grain of salt. For aggregate outcomes, or a collective computation (next point), these biases might actually be very beneficial to us as a social species. In any case, Mitchell suggests we should look to the reinforcement learning literature (machine learning) for some insight into: what samples to learn from? how to decorrelate the samples? how much to discount the future? and how much confidence to give to our current knowledge? A counter to this is the finding that often, at least in finance, zero-intelligence traders might actually be the best descriptor of system-wide outcomes.
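The exploration-exploitation tradeoff Mitchell points to has a standard minimal formalization in reinforcement learning: the multi-armed bandit with an epsilon-greedy policy. A toy sketch of my own (the two payoff probabilities are made up), showing that both extremes, never exploring and always exploring, underperform a moderate balance:

```python
import random

random.seed(2)

def run_bandit(epsilon, true_means=(0.3, 0.7), steps=5000):
    """Two-armed bandit: with prob. epsilon pick a random arm (explore),
    otherwise pick the arm with the best estimated payoff (exploit)."""
    counts = [0, 0]
    estimates = [0.0, 0.0]
    total = 0.0
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(2)              # explore
        else:
            arm = estimates.index(max(estimates))  # exploit
        reward = 1.0 if random.random() < true_means[arm] else 0.0
        counts[arm] += 1
        # incremental update of the running mean reward for this arm
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total / steps

for eps in (0.0, 0.1, 0.5, 1.0):
    print(f"epsilon={eps:.1f}: average reward {run_bandit(eps):.3f}")
```

Pure exploitation (epsilon=0) can lock onto the inferior arm forever, pure exploration (epsilon=1) never uses what it has learned, and a small epsilon does best, Mitchell's "maladaptive balance" in miniature.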
Does the economy compute?
Jessica Flack stated that we don't have a theory for how you get micro-to-macro maps in information-processing systems. We have ideas about how information is transferred, but not so much about how information is transformed in the economy. I would interject that we do have a theory of transformation, but it is limited to prices, partially because in a lot of models, what can't get priced won't be considered. Flack suggests collective computation to account for how macroscopic states arise from microscopic interactions in information-processing systems, and to allow for subjectivity (noisy data and bad info processing). I think agent-based modeling may actually allow us to really study this sort of mechanism. What I am particularly excited about is evolutionary learning: how is computation refined individually and collectively? Are our biases collectively helpful?
There are four cool concepts Jessica highlights:
- Ground truth (the height of a house) vs. effective ground truth (the height of the house we agree on, even if wrong)
- Information can be collectively encoded in a circuit or network without residing in the heads of individual actors (feedback loops)
- Our aggregation mechanisms reside in the interactions of agents
- Outputs are collectively computed/constructed by the system components
Formal Alternatives to the Rational Actor
Non-rational behavior is well-documented, yet economic models still start with the rational actor anyway. Why? (1) it is mathematically attractive, and (2) there is no formal complete alternative. As Samuelson quipped:
You can only beat a model with another model (Paul Samuelson)
This is a point raised by Joshua Epstein. The coolest idea here is inverse generative social science (a paper and a workshop). Rather than conceiving of an agent and her rules, we build a large library of rule components (e.g. inputs to rules) and mathematical concatenation operators, and use evolutionary programming to grow the agents. The thesis of Chathika Gunaratne offers some insights on this procedure. The benefit of such a procedure is that we are not limited to the modeler's conception of human behavior (a critique levelled at ABM, though it should be equally levelled at economists); rather, we let the actions of the agents speak for themselves. At the same time, current data and method limitations would still present us with only a quasi-static picture of this decision-making: a single rule setup without an account of how we got there.
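As a toy illustration of the inverse generative idea (my own sketch, not Epstein's or Gunaratne's actual setup; the rule library and hidden rule are invented): suppose we observe an agent's buy/no-buy decisions generated by a hidden rule, and we evolve candidate rules out of a small library of input components and comparison operators until one reproduces the observed behavior.

```python
import random

random.seed(3)

# Hidden "true" rule we pretend to observe: buy when the price is below
# its recent moving average (the data-generating rule is hypothetical).
prices = [random.uniform(50, 150) for _ in range(300)]

def moving_avg(t):
    window = prices[max(0, t - 5):t] or [prices[0]]
    return sum(window) / len(window)

observed = [prices[t] < moving_avg(t) for t in range(300)]

# The component library the evolutionary search combines into rules.
INPUTS = {"price": lambda t: prices[t],
          "mavg": lambda t: moving_avg(t),
          "const100": lambda t: 100.0}
OPS = {"lt": lambda a, b: a < b, "gt": lambda a, b: a > b}

def fitness(rule):
    # fraction of observed decisions the candidate rule reproduces
    op, left, right = rule
    hits = sum(OPS[op](INPUTS[left](t), INPUTS[right](t)) == observed[t]
               for t in range(300))
    return hits / 300

def mutate(rule):
    # swap out one slot: the operator, the left input, or the right input
    op, left, right = rule
    slot = random.randrange(3)
    if slot == 0: op = random.choice(list(OPS))
    elif slot == 1: left = random.choice(list(INPUTS))
    else: right = random.choice(list(INPUTS))
    return (op, left, right)

def random_rule():
    return (random.choice(list(OPS)), random.choice(list(INPUTS)),
            random.choice(list(INPUTS)))

# Simple truncation-selection evolutionary loop over rule structures.
pop = [random_rule() for _ in range(20)]
for _ in range(30):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]

best = max(pop, key=fitness)
print(best, f"accuracy {fitness(best):.2f}")
```

The search recovers a rule behaviorally equivalent to the hidden one without the modeler ever specifying the agent's decision logic, which is the point: the agents' observed actions, not our priors, select the rule.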
Another aspect to this is that we are missing a set of community models for these types of agents. It is easy to conceive of new agents, but hard to test behavioral robustness. I believe this is one of the current barriers to acceptance of ABM: such models are easy to make and simulate, but hard to show robustness for. The replication of output by a large complex model (a quasi-black-box) does not identify what the generating mechanism is. More work is needed here. For a great first example we can look to the Mark-0 paper, which actually reduces an ABM to identify the phenomenology of tipping points, and shows that one particular decision (the hiring/firing rate) drives the different phenomena. This is one of the areas of investigation I am pursuing in my PhD.
Does micro-behavior matter?
In collective behavior modelling one can either (1) start with micro-behavior and simulate up to outcomes or (2) use a behavior-agnostic fluid-dynamics approach. Both can yield useful insights about the macro behavior of the complex system. In the fluid case, we don't need to know the particular behavior of the agents. It is, in that sense, more functional but less explanatory of the underlying issues. So, do you need to go through micro to get dimension reduction? Can we just start at the mesoscopic scale? I think mesoscopic-scale modeling is more approachable at the moment given data restrictions, and it would also give us more insight for modelling micro behavior. However, modelling meso scales (e.g. industries) has a good chance of overlooking important micro properties (an example here). I think this often comes back to where we define the appropriate boundaries of the meso scale (sectors might not be a good choice).
Physics and Economics...
Does physics contribute to our understanding of economics? This was a long debate. My own position is roughly that the intuition and approach can be very helpful, in particular the willingness to try out different new tools. The tools of physics themselves might not be perfectly suitable compared to some tools from ecology or computer science. Since the debate was so long, I will simply highlight three fascinating questions that arose and warrant discussion:
- Under given institutional constraints, what are the limits/ranges of individual decision-making mechanisms that will give rise to the same macrophenomena?
- What does it mean to view a social organisation as an information transformation system? What bodies of mathematics should we use to grapple with that?
- To what extent is the theory we are developing contingent on the structure of the economy? What is the actual substrate of the economy on which it plays out? How will that influence the evolution of theory?
Tools for Economics
Another discussion considered what physics, computer science, biology, ecology, political science, and social science offer for understanding the economy. The question here is to ascertain in what ways the complex economy acts like and, even more importantly, unlike these systems. For instance, to what extent is the economy, since it is a complex adaptive system, also acting in a biological fashion? And can we come up with a common understanding of what we mean by the level of complexity in the economy? (C. Monica Capra)
The next important question, posed by Scott Page, is how this complex-systems approach helps us at a pragmatic level. Economics as it stands has simple quantitative models that give prescriptions; in complexity, we often say "it depends". So: How do we think about developing tools, procedures, and methods for practitioners? How do we think about making this science of practical value to them? What tools do we give them? What measures? What techniques? To this I would certainly add: how do we visualise agent-based models easily? And how do we give policy-makers a sandbox to easily test policies in these models?
Gaming the system
The final question is posed by W. Brian Arthur: can we anticipate how such complex adaptive systems can be gamed, played, or taken advantage of, and thereby collapse? Can we imagine an agent-based system, or strategic counter-system, that takes a policy and sees how to game it? This would require very detailed models to get all possible avenues right. In principle, though, I think this kind of approach would be very beneficial for policymakers and modelers to understand the weaknesses of their models and institutions. The question I have in response: if one finds a way to game the system, could one use such a position to change institutions so as to continue benefiting from the strategy (e.g. the Republican Party is a popular minority, but structurally it wields large government representation due to the way representatives are elected) and lock in the advantage? If so, this could motivate interesting mechanism design to prevent it.
Perhaps one result would be an ecology of gamers: agents that game the system with different strategies, each successful at different points in time. A recent paper by Doyne Farmer models this in the case of finance, developing an ecology of traders. The economy could have a similar dynamic, but it might play out at much larger timescales. In light of the gaming question and the ecology model, what does it mean to have a healthy ecology? And how do we get there?
Santa Fe Institute Complexity Proceedings - Final Thoughts
Overall impression: there is a lot of very cool content to catch up on, from thought-provoking questions to a long reading list. I really like these types of proceedings for this reason: we get to see how those who have spent a long time gathering experience ask questions and see issues that we can address with fresh perspectives. Not only that, but their references are like a pre-made filter for what literature to read. Highly recommend!