3.12 Three emergent patterns of complex adaptive systems in the economy

As Brian Arthur (1999:2) notes, once the complexity outlook is adopted, with its emphasis on the formation of socio-technological structures rather than their given existence, problems involving prediction in the economy look different. Rather than being based on rational expectations, forecasting models are built from the bottom up, taking as their basis the decisions made by individual agents, who are not always aware of others’ expectations or of the ‘rational’ response. Despite the potential heterogeneity of expectations across agents, complex adaptive systems tend to display signature emergent patterns that are common across many types of system and that can assist in this forecasting task. Here we review three such signature patterns highlighted in Beinhocker (2006:168-181): oscillations, punctuated equilibrium, and power laws.


3.12.1 Oscillations: no equilibrium in sight

Oscillations are a common feature of most complex adaptive systems. One of the most regular and pervasive of such patterns in the economy is that of the business cycle.

Schumpeter (1954) outlined that a typical cycle had four stages: expansion, crisis, recession and recovery. Following on from this work he proposed a typology of business cycles based on their periodicity:

  • The Kitchin inventory cycle of three to five years (Kitchin, 1923);
  • The Juglar fixed investment cycle of seven to eleven years, commonly identified as ‘the business cycle’ (Juglar, 1862);
  • The Kuznets infrastructural investment cycle of 15-25 years (Kuznets, 1930); and
  • The Kondratiev waves or long technological cycle of 45-60 years (Kondratiev, 1935).

The question is, are these cycles random perturbations around a long-term trend simply driven by exogenous factors, or are they driven endogenously from within the economic system itself? To answer this question let us first take a step back to consider the nature and driving force behind an oscillating system. This goes back to the pioneering work of Thomas Malthus and his Essay on the Principle of Population, which helped inspire the chemist Alfred Lotka and the Italian mathematician Vito Volterra to build a model describing the relationship between a predator and its prey in a biological system – the so-called Lotka-Volterra model. This model shows how, as a population of mice grows, foxes eat more mice, causing the fox population to rise and the mouse population to fall.

This decline in mice leads to less food per fox, and eventually the fox population falls, allowing the mouse population to rise again, thus creating oscillations in the fox and mouse populations. The dynamic system never settles down, but oscillates ad infinitum, with the ups and downs emerging endogenously from within the system itself rather than from any outside force. Indeed, this model is a good description of many real predator-prey relationships, which are in a continual state of dynamic flux. Richard Goodwin, writing in the 1950s, was one of the first social scientists to apply the logic of predator-prey relationships to the relationship between employment, corporate profits and investment.
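To make the endogenous nature of these oscillations concrete, the following is a minimal simulation sketch of the Lotka-Volterra dynamics described above, using a simple Euler integration; the parameter values and the fox/mouse labels are illustrative assumptions rather than estimates from any real system.

```python
# Minimal Lotka-Volterra sketch: prey ("mice") and predators ("foxes") interact
# through dPrey/dt = a*Prey - b*Prey*Pred and dPred/dt = d*Prey*Pred - c*Pred.
# All parameter values are illustrative assumptions.

def lotka_volterra(prey=40.0, predators=9.0,
                   a=0.6, b=0.025, c=0.8, d=0.02,
                   dt=0.005, steps=10000):
    prey_hist, pred_hist = [], []
    for _ in range(steps):
        prey_hist.append(prey)
        pred_hist.append(predators)
        prey_change = (a * prey - b * prey * predators) * dt
        pred_change = (d * prey * predators - c * predators) * dt
        prey += prey_change
        predators += pred_change
    return prey_hist, pred_hist

if __name__ == "__main__":
    mice, foxes = lotka_volterra()
    # The two populations rise and fall out of phase with no external shock:
    # the cycle is generated entirely from within the system.
    for t in range(0, 10000, 1250):
        print(f"t={t:5d}  mice={mice[t]:7.1f}  foxes={foxes[t]:6.1f}")
```

Plotting the two series against time should qualitatively reproduce the time-lagged oscillation of Figure 3.9, with the predator peak trailing the prey peak.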

Figure 3.9 Lotka-Volterra / Goodwin Model of a time-lagged oscillatory system

In an economy based on a network of supply and demand relationships, such oscillating behaviour has been demonstrated to emerge from what psychologists have termed the anchor-and-adjust effect.

Such models describe decision makers as “anchoring” on a decision and then adjusting it in the light of new information. While departing from the rational-agent assumption of the neoclassical school, such models are simple in structure and correspond intuitively to how decisions are actually made. For example, in inventory management, where there are time delays along the supply chain, the anchor-and-adjust rule causes individuals to both overshoot and undershoot in placing new orders, which in turn leads to the emergent behaviour of cyclical patterns (Beinhocker, 2006:171-172). Such models proliferated following seminal articles by Lichtenstein and Slovic (1971) and by Tversky and Kahneman (1974). They appear to be descriptive of a wide variety of judgment tasks: intuitive estimation of numerical expressions (Tversky and Kahneman, 1974), forecasting (Bromiley, 1987), stock management (Sterman, 1989), risk assessment (Lichtenstein et al., 1978), preference for gambles (Lichtenstein and Slovic, 1971), predictions of a spouse’s preferences (Davis et al., 1986), causal attribution (Einhorn and Hogarth, 1986), judgment under ambiguity (Einhorn and Hogarth, 1985), and the judgments of experienced real estate agents (Northcraft and Neale, 1987) and auditors (Butler, 1986). Given the volume and variety of studies, anchor-and-adjustment appears to be a well-established empirical phenomenon.

If such effects are in fact playing out in the macro-economy, the causes of the business cycle lie in the way people’s adaptive, inductive decision-making rules interact with the dynamic structure of the economy (as opposed to the deductive rules of the rational agent). If valid, one implication is that business cycle management through monetary and fiscal policy is only a symptomatic treatment, and may indeed exacerbate the boom and bust cycle once worked into agents’ adaptive expectations. Rather, if governments wished to smooth the business cycle they should look to the structure of decision-making in the economy itself: to reduce time delays between decision makers and to increase the system-wide transparency of decision-making.

In this respect, some have suggested that information and communication technologies may have contributed to the dampening of the business cycle (McCarthy and Zakrajsek, 2002). The hypothesis is that computers have enabled companies to speed order processing, adopt just-in-time inventory practices, and electronically link producers with their supply chains.

In this case, the authors suggest that new information and communication technologies have worked to alter the structure of decisions in the economy, dampening the oscillations of the business cycle. Such research, however, must also be evaluated in light of the recent financial and economic crisis – the worst since the Great Depression of the 1930s.
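Before moving on, here is a minimal sketch of the anchor-and-adjust ordering rule discussed above, loosely in the spirit of Sterman’s (1989) stock-management experiments rather than a reproduction of them: a single retailer anchors on observed demand and adjusts towards a target inventory, but ignores stock already ordered and in transit, so a one-off demand shock produces the characteristic overshoot, undershoot and cyclical ordering. All parameters are illustrative assumptions.

```python
from collections import deque

def anchor_and_adjust_orders(periods=40, delay=4, target_inventory=100.0,
                             adjust_rate=0.5, base_demand=10.0,
                             shock_at=5, shock_size=5.0):
    """Simulate ordering with an anchor-and-adjust rule and a fixed delivery delay.

    Each period the firm anchors on replacing observed demand and adjusts by a
    fraction of the gap between target and current inventory, ignoring orders
    already in the supply line -- the simplification that produces overshoot.
    """
    inventory = target_inventory
    pipeline = deque([base_demand] * delay, maxlen=delay)  # orders in transit
    orders = []
    for t in range(periods):
        demand = base_demand + (shock_size if t == shock_at else 0.0)
        inventory += pipeline[0] - demand              # receive oldest shipment, meet demand
        order = max(0.0, demand + adjust_rate * (target_inventory - inventory))
        pipeline.append(order)                         # new order enters the delay pipeline
        orders.append(order)
    return orders

if __name__ == "__main__":
    for t, order in enumerate(anchor_and_adjust_orders()):
        print(f"period {t:2d}: order placed = {order:5.1f}")
```

A single five-unit spike in demand is enough to set off waves of over- and under-ordering; lengthening the delay or raising the adjustment rate tends to make the oscillation more pronounced.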

3.12.2 Punctuated ‘equilibrium’: stability and flexibility in the evolving economy

In contrast to the idea of gradual, incremental change, the notion of punctuated ‘equilibrium’ is that a system can be characterised by relatively long periods of stability, ‘punctuated’ by short bursts of radical change. The concept was first articulated by Niles Eldredge and Stephen Jay Gould in the context of their observation that biological evolution seems not to have followed a smooth path, but instead is characterised by long periods of stasis interrupted by sudden mass extinctions, followed by vibrant bursts of new species origination (Eldredge and Gould, 1972).

In building their theory, the authors drew attention to the ‘five great extinction events’ of the last 500 million years (the five most extreme points in a continuous oscillating cycle of species extinction and origination), the largest of which saw the extinction of the great majority of all marine species on earth. However, these results should also be viewed in the light of recent studies which have shown a lack of any significant correlation of the extinction series with itself over time, casting doubt on such ‘macroevolutionary theories of periodicity or self-organized criticality’ (Alroy, 2008). Another problematic technical feature of Gould’s original notion of punctuated equilibrium is that biological evolution is never, in a mathematical sense, in equilibrium. Indeed, as we discussed earlier in the context of path dependency, the notion of equilibrium is inconsistent with that of an evolutionary process which incessantly changes with time. Despite these issues, the use of ‘punctuated equilibrium’ as an organising metaphorical concept (something akin to Schumpeter’s ‘gales of creative destruction’) has crept into the vernacular of the social sciences.

Three areas where it has had a significant influence are systems and network theory (e.g. Newman, Barabási and Watts, 2006), the study of the evolution of government policy (Baumgartner, 1998), and the evolution of conflicts (Cioffi-Revilla, 1998). In the work of Watts and Newman at the Santa Fe Institute, networks have been shown to self-organise into structures that contain both very dense and very sparse connections (Beinhocker, 2006:173-5). Some research has suggested that cascading effects can flow through a system which exhibits a few very densely connected nodes (Jain and Krishna, 2002a, 2002b).

This follows on from the study of keystone species in biology, which are species that are densely connected to others in a food web: the more densely connected they are, the greater their impact on the overall system. The idea is that the system is subject to random perturbations which affect different nodes (species) in the network. The impact of shocks to nodes with few connections will be minor, but the effect of small changes on nodes with a high number of connections will be much greater. This is put forward as an explanation of sudden large changes in systems which are generally characterised by stability. In nature, an example of such a keystone species might be a type of plankton or shrimp which forms the foundation of a food web and on which many other sea creatures feed, species which in turn are fed upon by larger fish up the food chain. A collapse or explosion in the population of shrimp would radiate widely through the marine ecosystem, eventually affecting even apex predators such as sharks.

Another example might be a species which is insignificant in terms of biomass but fulfils a vital ecosystem function, such as the predation of some sea stars on sea urchins, mussels and other shellfish which otherwise have no predators. If the population of sea stars is removed, the mussel population expands uncontrollably, driving out other species, while the urchins destroy coral reefs, causing massive ecosystem change. In Figure 3.10 a snapshot is taken from a marine food-web off the coast of Somerset during the summer of 1985, showing the relationships between the different species and the brown prawn, the ‘beating heart’ of the ecosystem.

In this model it is noteworthy that total species abundance is strongly related to exogenous seasonal effects such as water temperature and salinity in the Severn estuary (where the samples were taken). One interesting feature of this time series data is an explosion in total species abundance in late 1998 (three times its usual seasonal peak) following a slightly lower than normal drop in salinity (due to increased rainfall and flow in the river Severn). While more than one contributing factor is likely to be at play here, it is a good example of how a relatively small variation beyond a certain threshold in one variable can induce very large changes in others.
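A minimal, purely illustrative sketch of the keystone-node argument (not the Jain and Krishna model itself): the same unit shock is applied first to a sparsely connected species and then to a densely connected one in a toy food web, and is passed on to neighbours with damping. The web, the damping rule and the species names are invented for illustration.

```python
def total_disturbance(web, start, shock=1.0, damping=0.5, rounds=4):
    """Propagate a shock from `start`: each round, every newly disturbed node
    passes a damped copy of its disturbance to each of its neighbours, so
    densely connected nodes radiate far more disturbance into the system."""
    impact = {node: 0.0 for node in web}
    impact[start] = shock
    frontier = {start: shock}
    for _ in range(rounds):
        next_frontier = {}
        for node, size in frontier.items():
            for neighbour in web[node]:
                passed = damping * size
                impact[neighbour] += passed
                next_frontier[neighbour] = next_frontier.get(neighbour, 0.0) + passed
        frontier = next_frontier
    return sum(impact.values())

# A toy food web in which "shrimp" is the densely connected keystone node.
web = {
    "shrimp":   ["cod", "herring", "crab", "seabass", "flounder"],
    "cod":      ["shrimp", "shark"],
    "herring":  ["shrimp", "shark"],
    "crab":     ["shrimp"],
    "seabass":  ["shrimp", "shark"],
    "flounder": ["shrimp"],
    "shark":    ["cod", "herring", "seabass"],
}

print("shock to a peripheral node (crab):", round(total_disturbance(web, "crab"), 1))
print("shock to the keystone (shrimp):   ", round(total_disturbance(web, "shrimp"), 1))
```

Even in this tiny web the disturbance radiating from the keystone node is more than twice that from the peripheral one; in larger webs with hubs of much higher degree the asymmetry grows accordingly.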

Figure 3.10 Nodes in a biological complex adaptive system

This idea of system thresholds has been developed most notably in the social science literature by Mark Granovetter (1978), Granovetter and Soong (1983), Thomas Schelling (1971, 1978) and Jonathan Crane (1989), and popularised in Malcolm Gladwell’s The Tipping Point.
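The threshold logic can be illustrated with a minimal sketch of Granovetter’s (1978) model: each person joins a collective action once the share of people already participating reaches their personal threshold. With one distribution of thresholds a single instigator tips the whole population; nudging a single threshold upward stalls the cascade almost completely. The population size and threshold values are illustrative assumptions.

```python
def cascade_size(thresholds):
    """Return how many people end up participating when each person joins once
    the fraction already participating meets or exceeds their threshold."""
    n = len(thresholds)
    joined = [t <= 0.0 for t in thresholds]     # unconditional instigators start it off
    changed = True
    while changed:
        changed = False
        share = sum(joined) / n
        for i, t in enumerate(thresholds):
            if not joined[i] and share >= t:
                joined[i] = True
                changed = True
    return sum(joined)

n = 100
uniform = [i / n for i in range(n)]   # thresholds 0.00, 0.01, ..., 0.99: each joiner tips the next
nudged = list(uniform)
nudged[1] = 0.02                      # one person becomes marginally more cautious

print("uniform thresholds:  ", cascade_size(uniform), "of", n, "participate")
print("one threshold nudged:", cascade_size(nudged), "of", n, "participate")
```

Two nearly identical populations thus produce radically different aggregate outcomes, which is why threshold models are used to explain why riots, strikes or fads erupt in one setting and fizzle in an apparently similar one.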

Granovetter, Schelling and Crane use this approach to help explain observed patterns in riot behaviour, strikes, innovation diffusion, voting, migration, self-organising racial segregation in the suburbs and teenage pregnancy. Combining the notion of keystone technologies or keystone institutions with this understanding of threshold effects can provide the basis of a theory that explains non-marginal change, or how small changes can suddenly act to drive rapid emergent behaviour in a system.

The idea of punctuated equilibrium has also been applied in an interesting way to the study of the dynamics of public policy. Baumgartner and Jones (1993) first presented the notion that policy dynamics is characterised by long periods of stability, with only incremental change, punctuated by sudden radical shifts in policy paradigm. In this theory, periods of stability are reinforced by decision makers who follow an adaptive path due to the constraints of cost calculations and their own cognitive and informational limits (Simon, 1957) in predicting future events and their consequences. This is seen to create a ‘stickiness’ in institutional cultures, supported by sets of vested interests, which tends to persevere until overthrown by a new political paradigm, such as through a shift in party control of government and/or public opinion (Lindblom, 1977; Jones, 2002; Knott, Miller and Verkuilen, 2003).

Supporting this framework, Hall (1993) has outlined a classificatory system of orders to characterise policy change within this dynamic process. ‘First order’ changes occur when the calibrations of policy instruments, such as the passenger safety or automobile emission requirements manufacturers must follow, change within existing institutional and instrument confines.

‘Second order’ changes involve alterations to the dominant types of policy instruments utilised within an existing policy regime, such as switching from an administered emission standard to an emissions tax. ‘Third order’ changes involve shifts in overall abstract policy goals (such as the decision of many countries to sign the Kyoto Protocol). Hall linked first- and second-order changes to incremental processes, usually the result of activities endogenous to a policy subsystem, while third-order changes were ‘paradigmatic’ and driven by forces outside the control of subsystem actors.

3.12.3 Power laws: a theory of unexpected events

When the frequency of an event varies as a power of some attribute of that event, the relationship is said to follow a power law. The conceptual crux of the power law is the tendency, in certain probability distributions, for a small number of extremely rare but high impact events to overwhelm standard statistical analysis based on the normal distribution (see Figure 3.11). To recap, a normal distribution is what you get if you combine many observations characterised by small variations, each one independent of the last and each negligible when compared to the total. While individual observations matter and can differ from each other to a large degree, cumulatively across time and across the population the observations settle into a regular bell-shaped pattern.

Examples include the cumulative outcome of repeated coin tosses; in biology, physiological properties such as blood pressure, height, and the growth of hair, nails and teeth; and in physics, the velocities of molecules in an ideal gas. More controversially, economic models such as Black-Scholes assume approximate normality for changes in stock market price indices and in commodity prices. In contrast, phenomena which correspond to power laws do not fit a normal distribution but instead are characterised by highly leptokurtic distributions with long probability tails (so-called ‘fat tails’), reflecting observations outside the probabilistic logic of normally distributed events. Such “outlier” observations are often excluded from analysis, on the basis that they have been influenced by exogenous events or experimental error, since they do not correspond to what is usually a well-behaved system, or because they are by nature “too hard to predict” and therefore beyond the scope of scientific analysis.
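To make the contrast concrete, the following small simulation (not taken from any of the cited studies) counts how often observations fall more than four standard deviations from the mean when draws come from a normal distribution versus a fat-tailed Student-t distribution; both the distributions and the four-sigma cut-off are illustrative choices.

```python
import random
import statistics

random.seed(1)

def count_beyond(draws, k=4.0):
    """Count observations lying more than k sample standard deviations from the mean."""
    mean = statistics.fmean(draws)
    sd = statistics.pstdev(draws)
    return sum(1 for x in draws if abs(x - mean) > k * sd)

n = 100_000
normal_draws = [random.gauss(0.0, 1.0) for _ in range(n)]

# Crude fat-tailed draws: Student-t with 3 degrees of freedom, built as a normal
# draw scaled by an independent chi-square term.
def student_t3():
    z = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(3))
    return z / (chi2 / 3) ** 0.5

fat_draws = [student_t3() for _ in range(n)]

print("normal draws beyond 4 sd:    ", count_beyond(normal_draws))
print("fat-tailed draws beyond 4 sd:", count_beyond(fat_draws))
```

Under the bell curve roughly six such ‘four-sigma’ observations are expected in 100,000 draws, whereas the fat-tailed sample typically produces hundreds, which is the same qualitative gap Mandelbrot reports for daily Dow movements.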

Figure 3.11 Power laws: a graphical representation

Let us take an example from financial markets (Mandelbrot and Hudson, 2005:24):

From 1916 to 2003, the daily index movements of the Dow Jones Industrial Average do not spread out on graph paper like a simple bell curve.

The far edges flare too high: too many big changes. Theory suggests that over that time, there should be fifty-eight days when the Dow moved more than 3.4 percent; in fact, there were 1,001. Theory predicts six days of index swings beyond 4.5 percent; in fact, there were 366. And index swings of more than 7 percent should come once every 300,000 years; in fact, the twentieth century saw forty-eight such days. Truly, a calamitous era that insists on flaunting all predictions.

Or, perhaps, our assumptions are wrong.

Mandelbrot goes on to highlight how on October 19, 1987, one of the worst days of trading during the twentieth century, the index fell 29.2 per cent – with a probability, according to standard models based on a normal distribution, of less than one in $10^{50}$: odds so small, Mandelbrot writes, that they have no meaning: “It is a number outside the scale of nature. You could span the powers of ten from the smallest subatomic particle to the breadth of the measurable universe – and still never meet such a number”. The challenge for science in this area, therefore, is how to systematically understand and predict such unexpected events, which nevertheless regularly seem to occur.

In contrast to a theoretical understanding which uses the normal distribution as its foundation, the distributions which underpin power laws have been put forward as an alternative framework for understanding problems associated with unlikely but high impact events (e.g. Mandelbrot and Hudson, 2005; Beinhocker, 2006). The first use of distributions based on power laws took place in the social sciences, by the Italian economist Vilfredo Pareto, whose name is also honoured in the Pareto optimum – the theoretical point at which no member of society can be made better off without making someone else worse off, and part of the foundation of mainstream neoclassical analysis (Figure 2.1). Pareto’s investigations of the distribution of income across society led him to construct a highly skewed histogram with a long tail, reflecting his observation that the vast majority of wealth and income was concentrated in the hands of a small number of people, while the bulk of people lay in the bulging ‘low-income’ part of the diagram. This relationship he expressed as the following equation:

$y = A x^{-v}$ (12)

where y is the number of people having income x or greater, A is a constant and v is an exponent Pareto estimated to be around 1.5. This gave rise to the so-called 80:20 rule, where 80 per cent of wealth is said to be concentrated in the hands of 20 per cent of the population. Indeed, this builds on observations made as early as Aristotle in his Politics, where he comments that the balance of wealth in the richest part of society should not be allowed to exceed five times that of the middle class, which must remain strong in order to guard against corruption and oppression (Book VI). Pareto’s theory of income distribution was put forward in 1909: he argued that it reflected some sort of natural law describing how wealth was distributed through any human society, in any time, in any country. Pareto did not stop there, however, and went on to assert:

There is no progress in human history. Democracy is a fraud. Human nature is primitive, emotional, unyielding. The smarter, abler, stronger and shrewder take the lion’s share. The weak starve, lest society become degenerate: one can compare the social body to the human body, which will promptly perish if prevented from eliminating toxins (quoted in Mandelbrot and Hudson, 2005:153).

Pareto’s theories found fertile ground in the emerging fascist movements of Europe, and to quote his biographer Franz Borkenau (1936:18):

In the first years of his rule Mussolini literally executed the policy prescribed by Pareto, destroying political liberalism, but at the same time largely replacing state management of private enterprise, diminishing taxes on property, favouring industrial development, imposing a religious education in dogmas.

As Mandelbrot writes, it was inflammatory stuff. While this aspect of his work was passed over in silent distaste by the economic mainstream, which willingly adopted his seminal work in other areas such as equilibrium theory, the close relationship with fascism burnt his reputation in liberal democracies.

At his death in 1923, Italian fascists were beatifying him, while Karl Popper called him the “theoretician of totalitarianism”, leaving evolutionary social theory with some very heavy Darwinian ‘survival-of-the-fittest’ baggage to carry around with it into the twentieth century.

Let us now formalise these relationships a little more clearly. A power law describes a set of observations where the frequency of some attribute of those observations varies as a power of those observations:

$f(x) = a x^{k} + o(x^{k})$ (13)

where a and k are constants, $o(x^{k})$ is an asymptotically small function of $x^{k}$, and k is typically called the scaling exponent. Now, if something is shared between a sufficiently large set of participants, there must be a number k between 50 and 100 such that k% of the resource is taken by (100-k)% of the participants. The number k may vary between 50, in the case of an equal distribution among participants, and nearly 100, when a tiny number of the participants account for almost all of the resource. There is nothing particularly magical about k = 80 per cent, as in the so-called 80:20 rule, aside from the general observation that some systems share a k somewhere in this region of intermediate imbalance in the distribution.
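As a quick illustration that k is not pinned to 80, the sketch below samples incomes from a Pareto distribution and reports the share of total income going to the top 20 per cent for several values of the exponent, alongside the analytic value $q^{1-1/v}$ for a pure Pareto tail; the sampling scheme and sample size are illustrative choices.

```python
import random

random.seed(7)

def pareto_incomes(n, exponent, x_min=1.0):
    """Draw n incomes from a Pareto(x_min, exponent) distribution by
    inverse-transform sampling: x = x_min * u**(-1/exponent), u ~ Uniform(0, 1]."""
    return [x_min * (1.0 - random.random()) ** (-1.0 / exponent) for _ in range(n)]

def top_share(incomes, top_fraction=0.2):
    """Share of total income received by the richest `top_fraction` of people."""
    ordered = sorted(incomes, reverse=True)
    cutoff = max(1, int(len(ordered) * top_fraction))
    return sum(ordered[:cutoff]) / sum(ordered)

for v in (1.16, 1.5, 2.5):
    simulated = top_share(pareto_incomes(200_000, v))
    analytic = 0.2 ** (1.0 - 1.0 / v)     # exact top-20% share for a pure Pareto tail
    print(f"exponent v={v}: simulated {simulated:.0%}, analytic {analytic:.0%}")
```

With Pareto’s estimate of roughly v = 1.5 the top fifth takes around three-fifths of the total rather than four-fifths; an exponent nearer 1.16 is what actually reproduces the 80:20 split in a pure Pareto tail.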

In varying degrees the 80:20 rule is often used as an organising metaphor in a range of situations, from the distribution of profits across customers, to the rise of economic and political oligarchy, the magnitude of earthquakes, the size of meteorites, sand grains or pebbles on a beach, and the frequency of extreme weather events. A more contemporary analysis of income distribution highlights how certain kinds of economic activity can lead to a concentration of output among very few individuals, in professions such as comedians, performance musicians, writers of economics textbooks, and among the most popular PhD graduate schools (Rosen, 1981). For example, if a surgeon is 10 per cent more successful at saving lives than the average colleague, people are willing to pay more than 10 per cent extra for that surgeon’s services. This ‘convexity in returns’ means that business will tend to concentrate in the hands of the top performers and the income of surgeons will follow a power law.

In another influential paper highlighting the importance of power laws, Internal Revenue Service data for the period 1966 to 2001 were analysed to assess where the benefits from American productivity growth went (Dew-Becker and Gordon, 2005). The startling result was that only the top 10 per cent of the income distribution enjoyed a rise in real income (excluding capital gains) equal to or above the average rate of economy-wide productivity growth, while the bottom 90 per cent fell behind or were left out of sharing in the productivity growth entirely. More precisely, the wage and salary income of workers at the 90th percentile showed an increase of 34 per cent; at the 99th percentile incomes rose 87 per cent; at the 99.9th percentile incomes rose 181 per cent; while at the 99.99th percentile incomes rose a staggering 497 per cent. These observations prompted Paul Krugman to cast doubt on the standard assumption that productivity growth is the engine of rising living standards, and to point to the fallacy of the 80:20 rule, which presents an altogether too equitable view of society (Krugman, 2006). Indeed, Krugman argues that the battle over the share of economic prosperity is not so much between owners of labour and capital, or between highly educated graduates and unskilled labour, but between the highly educated workforce (at the 90th percentile) and a new class of business plutocrats: a contrast he frames as graduates versus oligarchs.

The notion is that while the majority of observations inhabit a certain range, there are ‘long tails’ in the probability distribution which mean that the once-in-a-while occurrence of an extreme observation happens with much greater frequency than the quickly tapering curves of the normal distribution imply: an extreme in income, a particularly violent earthquake or an intense storm outside the standard norms of prediction. Indeed, the impacts of global warming on climate change can be easily articulated within such a framework of an increasing probability of unexpected extreme events (a ‘thick tail’ getting thicker).

But if systems are prone to such large ‘unpredictable events’, what do power laws suggest is the value of prediction or forecasting? The random walk model built on the normal distribution, so useful for prediction, relies on three essential claims: first, the so-called martingale condition, that your best guess for tomorrow’s observation is today’s and past observations; second, that tomorrow’s price observations are independent of yesterday’s; and third, that all observations, taken together, vary in accordance with the regular bell-shaped curve. Mandelbrot’s work suggests that while the first condition may be useful, the second two are in many cases simply false. In terms of the principles we have discussed in this review, this is the equivalent of recognising the empirical reality of path dependence, and of dispensing with the rational actor model. What seems necessary, therefore, is a type of reasoned history – drawing on past observations within an analytical framework to make predictions based on a full suite of quantitative and qualitative tools. Fittingly for our context here, Mandelbrot and Hudson (2005:276) conclude with an example taken from the consequences of extreme weather:

On the night of February 1, 1953, a very bad storm lashed the Dutch coast. It broke the famous sea dikes, the country’s ancient and proud bulwark against disaster. More than 1,800 died.

Dutch hydrologists found the flooding had pushed the benchmark water-level indicators, in Amsterdam, to 3.85 meters over the average level. Seemingly impossible. The dikes had been thought to be safe enough from such a calamity; the conventional odds of so high a flood were thought to have been less than one in ten thousand. And yet, further research showed, an even greater inundation of four meters had been recorded only a few centuries earlier, in 1570. Naturally, the pragmatic Dutch did not waste time arguing about the math.

They cleaned up the damage and rebuilt the dikes higher and stronger.
