"Let the great world spin forever down the ringing grooves of change."Alfred, Lord Tennyson, Locksley Hall, 1842, l. 182
"Self-organized criticality is a new way of viewing nature...
perpetually out-of-balance, but organized in a poised state"Per Bak, How Nature Works, 1996, preface
Many of the things that seem familiar to us in the world appear static - a rock, for example, doesn't seem to change at all. Others change so rapidly that we fail to recognise that they change at all. Take the air: it comprises molecules in constant motion, yet we detect only their statistically average properties (like wind pressure). All things change continually at some speed; it is only their timescale or size relative to our limited perception that tends to mislead us into thinking them static.
Let us however concentrate here on situations of intermediate scale, where we can both follow the movement of the parts and simultaneously see the overall picture. It is in this realm that the most interesting features of complex systems are encountered.
Imagine lining up a row of dominoes, each a distance of half its height from its neighbour and facing it. Now nudge the first domino. What happens ? It falls over of course, and in doing so knocks over the next one; the disturbance continues down the line until all the dominoes are lying down.
That nudge we call a 'perturbation', the time during which something is happening we call the 'transient', and the final situation the 'steady state'. This first example isn't too interesting, but suppose instead that the dominoes are in a circle and each one is on a weak spring, so that it slowly comes upright again - can you imagine the result ? The first dominoes will be erect again by the time the last falls - which will knock down the first once more and restart the sequence.
Some energy will be lost of course, so that at some stage a domino will not move enough to knock over its neighbour, and the perturbation will die out. Our final state then will be the same as the initial one, all upright. The strength of the perturbation can be measured in terms of the effect it had - the length of time the disturbance lasted (or the 'transient length') plus the permanent change that resulted (none in this case).
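To make the idea concrete, here is a minimal sketch of that circular domino chain (the push strength, energy loss and toppling threshold are purely illustrative assumptions, not measured values). Each knock-on loses a fraction of the energy, and the transient ends when the push is too weak to topple the next domino:

```python
# A minimal sketch, with illustrative parameters: dominoes in a circle,
# each re-erected by its spring in time to be knocked down again, while
# every knock-on loses a fraction of the energy. The transient ends when
# the push is too weak to topple the next domino.

def transient_length(push=1.0, loss=0.05, threshold=0.5):
    """Count knock-ons before the disturbance dies out."""
    energy = push
    knocks = 0
    while energy >= threshold:      # still enough energy to topple a domino
        energy *= 1.0 - loss        # some energy lost in each collision
        knocks += 1
    return knocks

print(transient_length())           # 14 knock-ons, then all stand upright again
```

Because the circle re-erects the dominoes, the count can exceed the number of dominoes; with no energy loss the loop would never end - an infinite transient.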
This is true for any system and gives a measure of its stability. For constructions like buildings in an earthquake we require a short transient length and a return to the initial state (wobbling like a jelly, or collapsing, would be better avoided !). Situations do arise however where the opposite is unavoidable. Consider our air molecules: they collide with each other continually, never settling down and never returning to exactly the same state - they are chaotic. For this situation the transient length is infinite, whereas for our best building method it would be zero.
So we have two situations here, one with zero transient length (static systems) and one with infinite transient length (chaotic systems). What about the ones in the middle ? Well, let us take an example - a room full of people. Here we have an unstable situation - what happens depends on many things. If one person pulls a gun we may have panic - chaos. If no one moves we have a static situation. Typically, though, there will be conversations starting and dying out, people leaving the room, others entering, activities beginning and ending. Each of these options is accompanied by a transient. A sentence spoken, for example, may be ignored (zero transient), may start a chain of responses which die out and are forgotten by everyone (a short transient), or may be so interesting that the participants repeat it later to friends who pass it on to other people until it changes the world completely (an almost infinite transient - e.g. the 'Communist Manifesto' by Karl Marx is still reverberating around the world after over 120 years).
This 'instability with order' is what we call the 'Edge of Chaos', a system poised midway between the stable and chaotic domains (a state also called self-organized criticality). It is characterised by a potential to develop structure over many different scales (the three responses above could occur simultaneously, by affecting various group members differently), and is an often encountered feature of complex systems whose parts have some freedom to behave independently.
We haven't restricted the idea of transients in any way here, applying the concept to both inorganic and social systems, and that is significant. Most science is heavily biased towards one type of system or another, its findings applicable only to, say, particles (physics), metabolism (biology), minds (psychology) or society (politics). Yet here we seem to have a quantifiable concept that can apply to them all. This is the essence of the complex systems approach - ideas that are universally applicable. We are now able to investigate the idea further.
For 'edge of chaos' behaviour we require system parts that are neither totally fixed nor totally free. In other words we need some constraints - too many and any dynamics will die out, too few and order will not be sustained. Analogies with many other fields spring to mind here, from physical phases (solid, liquid, gas) to political systems (dictatorship, democracy, anarchy). Are all these ultimately describable by such a simple concept ?
Yes and no. The edge of chaos is both a simple concept and an infinitely difficult one. We do not understand what will happen in any particular situation - only that something interesting will ! To understand the patterns that will emerge from particular forms of interaction is the great challenge we face in applying Complexity Theory, and much work remains to be done. We can see however how the criterion of transient length is related to behaviour, and this gives us at least one valuable pointer and measure to use in predicting both human behaviours and those of the interacting artefacts we increasingly create...
Notice a subtle change of emphasis here. Traditional science usually concentrates on the steady state behaviour of systems, the equilibrium position. The initial conditions are assumed irrelevant, since the equilibrium state is independent of the starting point - all starting positions end up with the same behaviour (e.g. a chemical reaction always settles at the same balance of constituents; a damped pendulum comes to rest at the same point however it is set swinging). The transients are discarded in these studies, by allowing time for the system to settle down. In most cases the system is isolated from outside interference (either physically or conceptually) - actually preventing any perturbations.
Here it is the transients that are the actual behaviour - the steady state is now irrelevant. Complex systems of the sort that we investigate never settle to a fixed state. They are subject to constant perturbation, which drives bursts of transient behaviour, and this is what we are interested in understanding. Take a society: the only time it can be said to be in a steady state, perhaps, is when everyone is asleep ! New ideas normally perturb the population, feeding on each other and generating new behaviours - the transients. Perturbations and transients are closely coupled here in endless feedback loops.
These are non-equilibrium systems, systems driven away from a rest position and exhibiting dynamic behaviour. We need to find patterns in this behaviour, properties that remain unchanged (invariant) for ranges of starting positions. In general a complex system may have many separate dynamic modes of operation (think of a football crowd and all the things they can do). How are these modes related ? We would expect to have a combination of regularity and sudden shifts - a chant once started tends to continue, until a perturbation (a goal!) switches state (to cheering); the crowd remains in place, until a perturbation (the final whistle) switches state again (a rush to the exits).
Are these properties restricted to human-type systems ? No. Any system under strain can experience rapid changes of state. One of the most studied examples is the earthquake. It is found that earthquake activity follows a power law distribution: the frequency of a quake falls off as a power of its severity, so there are many minor quakes felt over any period but few large ones. This Zipf/Mandelbrot scaling relationship applies widely and brings into focus one important feature of the systems we are considering. We cannot in general say that a major perturbation will have a large effect and a minor one only a small effect. The knock-on effect of any perturbation of a system can vary from zero to infinite - there is an inherent fractal unpredictability.
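As a hedged illustration of such scaling (the exponent and sample size below are arbitrary assumptions, not seismic measurements), we can draw event severities from a power-law distribution and count how many fall in each doubling band of size:

```python
import random
import collections

# Draw event "severities" from a power-law (Pareto) distribution and
# count events per doubling band of size - many small events, few large.
random.seed(1)
sizes = [random.paretovariate(1.0) for _ in range(100_000)]  # P(size > s) ~ 1/s

bands = collections.Counter(int(s).bit_length() for s in sizes)
for b in sorted(bands):
    print(f"severity {2**(b-1):>8} to {2**b:>8}: {bands[b]:>6} events")
```

The count roughly halves with each doubling of severity - the straight line on a log-log plot that is the signature of Zipf/Mandelbrot scaling.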
Given such conditions, it is perhaps not surprising that we have difficulty dealing with complex systems. To understand a little more, let us expand the discussion to encompass a few other concepts. Firstly let us look at correlation distances. Correlation is a measure of how closely a certain state matches a neighbouring state; it can vary from 1 (identical) to -1 (opposite). For a solid we expect a high correlation between adjacent areas: the atoms are fixed in the same arrangement, so if we shift (translate) one patch we should be able to overlay almost exactly the adjacent patch. It does not matter how far we shift a patch, it should still match - the correlation is constant with distance. How about gases ? Correlation here should be zero, since there is no order within the gas - each molecule behaves independently. Again the distance isn't significant; zero should be found at all scales.
Note however that each patch of gas or solid is statistically the same as the next. For this reason an alternative definition of transient length is often used for chaotic situations - the number of cycles before statistical convergence returns (when we can no longer tell that anything unusual has happened, the system having returned to its steady state or equilibrium). Instant chaos would then be said to have a transient length of zero, the same as a static state, since no change is ever statistically detectable. This form of the definition will be used from now on.
For complex systems however, we should expect to find neither maximum correlation (nothing is happening) nor zero (too much is happening), but correlations that vary with time and average around the midway point. We would also expect to find strong short-range correlations (local order) and weak long-range ones (think perhaps of the behaviour of people - they act similarly to their neighbours but don't usually closely match the behaviour of those in distant countries).
This corresponds to long transient lengths under our new definition and now gives us two measures of effective complexity (correlations varying with distance and long non-statistical transients) - mathematical indications of the edge of chaos. Liquids would seem to be a good contender here for such a state, being between solid and gas - is this reasonable ? Liquids have loose associations between the molecules - a short range order, yet no overall structure - long range disorder. This sort of organisation allows the association of local building blocks within a free framework - reminiscent of computer logic designs. We could, it seems, have a liquid computer...
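A small sketch can make these regimes of correlation visible. Below, three hypothetical one-dimensional 'materials' are built from +1/-1 states: a rigidly repeating 'solid', an independent 'gas', and a 'liquid' whose sites copy their neighbour most of the time (the 90% figure is an arbitrary assumption). Only the liquid shows strong short-range and weak long-range correlation:

```python
import random

# Correlation at distance d is the average product of states d sites apart.
def correlation(seq, d):
    pairs = [seq[i] * seq[i + d] for i in range(len(seq) - d)]
    return sum(pairs) / len(pairs)

random.seed(0)
n = 20_000
solid = [(-1) ** i for i in range(n)]             # rigid repeating order
gas = [random.choice((-1, 1)) for _ in range(n)]  # independent sites

liquid = [1]                                      # local order, global disorder:
for _ in range(n - 1):                            # copy your neighbour 90% of
    keep = random.random() < 0.9                  # the time, flip otherwise
    liquid.append(liquid[-1] if keep else -liquid[-1])

print(" d   solid    gas  liquid")
for d in (1, 2, 5, 10, 50):
    print(f"{d:>2} {correlation(solid, d):>7.2f}"
          f" {correlation(gas, d):>6.2f} {correlation(liquid, d):>7.2f}")
```

The solid's correlation magnitude stays at 1 for every distance (alternating in sign with its repeating pattern), the gas hovers near zero everywhere, while the liquid decays from strong (about 0.8 at distance 1) to nothing at long range.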
Here it is useful to consider what happens when we heat and cool systems. At high temperatures systems are in a gaseous state - in other words chaotic. At low temperatures we have instead solid states - static behaviour. At some point in between the system changes state (makes a phase transition) from one to the other. This liquid state is where complex behaviour can arise (e.g. in the strange properties of liquid water). This feature allows us to control complexity by external forces: heating or perturbing the system more strongly leads to increasingly chaotic behaviour, while cooling or isolating it serves to lock it into the state it has currently reached. This is seen clearly in relation to brain temperature (low = static, hypothermia; medium = normal, organised behaviour; high = chaotic, fever).
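A toy model, not drawn from the text above, can mimic this heating and cooling: a ring of two-state cells that usually copy their local majority but, with some probability standing in for temperature, take a random value instead. At low 'temperature' the pattern freezes; at high it churns chaotically:

```python
import random

# A ring of +1/-1 cells: each usually copies the majority of its
# neighbourhood, but with probability "temperature" takes a random value.
def step(cells, temperature):
    n = len(cells)
    new = []
    for i in range(n):
        if random.random() < temperature:          # thermal noise
            new.append(random.choice((-1, 1)))
        else:                                      # local majority rule
            s = cells[i - 1] + cells[i] + cells[(i + 1) % n]
            new.append(1 if s > 0 else -1)
    return new

def activity(temperature, n=200, steps=200):
    random.seed(0)
    cells = [random.choice((-1, 1)) for _ in range(n)]
    changed = 0
    for _ in range(steps):
        nxt = step(cells, temperature)
        changed += sum(a != b for a, b in zip(cells, nxt))
        cells = nxt
    return changed / (n * steps)                   # fraction of cells changing

for t in (0.0, 0.1, 0.9):
    print(f"temperature {t}: {activity(t):.2f} of cells change per step")
```

Cold (0.0) freezes into static domains, hot (0.9) gives near-random churning, and the middle setting sustains ongoing but organised activity - the same progression claimed for heated and cooled systems above.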
An important aspect of most complex systems is that they are massively parallel in operation - all the parts operate simultaneously. This permits features to appear in many ways or in many places; in other words we have redundancy. A part of the system can be destroyed without necessarily affecting the emergent behaviour. We can see this in studies of the brain, where damage to an area is often bypassed, allowing function to be restored in time. Contrast this with, say, a car, where the failure of any part is usually enough to completely halt the intended function unless it is replaced. Because in complex systems we don't organise the parts, but let them find their own states, we have no obvious control over the connections and structure that develop.
We have here something analogous to a percolation problem, where we wish to establish a connection between two areas despite several barriers. How many ways can we do this ? This brings us to another difference between traditional and complex science. Usually a formula is treated as the 'rule' that a system follows, yet by changing any of the parameters (or constants) we have a different rule and a different solution. If we take all possible parameters and investigate them, then we can arrive at a family of solutions, rather than just one. We derive the potential behaviour of a whole class of solutions and can then better understand what possibilities are open to the system and under which combinations of parameters - we can also derive the relative frequencies of static, ordered and chaotic states.
Each solution can have many initial values for its variables. From every such starting position the system will follow a trajectory in phase space. Plotting all these possible trajectories on one graph gives us a phase portrait of that solution, a map showing the attractors present. Each solution will have a separate phase portrait - it is the variation between them with changing parameter values that interests us here, rather than the internal structure of an individual solution. This treatment is often displayed as a bifurcation diagram, a sort of two-dimensional slice through a three-dimensional phase portrait - looking end-on at such a diagram we would move through a succession of phase portraits instead.
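As an illustration of surveying a whole family of solutions, the sketch below sweeps the parameter of the logistic map (a standard textbook system, used here only as a stand-in) and classifies each setting's settled behaviour after discarding the transient - exactly the succession of phase portraits that a bifurcation diagram summarises:

```python
# Sweep the parameter r of the logistic map x -> r*x*(1-x) and report
# roughly how each setting behaves once the transient is discarded.
def long_run_states(r, x=0.31, settle=1000, sample=64):
    for _ in range(settle):               # discard the transient
        x = r * x * (1.0 - x)
    seen = set()
    for _ in range(sample):               # collect the settled behaviour
        x = r * x * (1.0 - x)
        seen.add(round(x, 6))
    return seen

for r in (2.8, 3.2, 3.5, 3.9):
    states = long_run_states(r)
    kind = ("static" if len(states) == 1
            else f"periodic ({len(states)} states)" if len(states) < 16
            else "chaotic")
    print(f"r = {r}: {kind}")
```

One rule, four parameter values, four qualitatively different solutions - static, period-2, period-4, chaotic - showing how a single family spans all the regimes discussed above.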
It is useful to regard such families of solutions as being alternative maps of our system. Imagine islands of activity in static or chaotic seas. Many alternative geographies are possible, depending on the arrangement of the parts. To obtain a useful overall structure we require that the islands are connected (by bridges) - information can then percolate across the system. This gives us an interesting analogy with social development, where pockets of innovation are passed on via communication channels to other groups, similarly we make connections between different ideas in our brain - interconnected autonomous modules perhaps, a society of mind.
What sort of connectivity is optimum for maximum emergence ? It is hard to say, and it seems to depend on the complexity of the parts (the interacting agents). If these have only two states (alleles) then two inputs each seem to drive the system to the edge of chaos (technically an NK system with N parts and K=2). Fewer connections and the system freezes; more and it behaves chaotically. It often seems to be the case that systems are self-regulating: changes act so as to increase or reduce the complexity until the maximum emergent order is possible. To achieve this we need to consider systems that can vary their number of connections, having some way also to decide what is optimum individually (their local fitness). This is possible by random evolutionary means (a mutation could add a new sensitivity or could inhibit one) and also by design (people or groups can decide how many other people or groups to interact with, and/or how many concurrent interests to pursue).
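A rough sketch of such an NK system (a Kauffman-style random Boolean network; the network size, random seed and wiring below are illustrative assumptions) iterates the network until a state repeats, giving both the transient length and the cycle length for different values of K:

```python
import random

# N two-state parts, each updated by a random Boolean function of K
# other parts. We measure how long the system wanders (the transient)
# and the length of the cycle it finally falls into.
def run_length(n=16, k=2, seed=0):
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]           # wiring
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    state = tuple(rng.randint(0, 1) for _ in range(n))
    seen = {}
    t = 0
    while state not in seen:                  # iterate until a state repeats
        seen[state] = t
        idx = [sum(state[j] << b for b, j in enumerate(inputs[i]))
               for i in range(n)]
        state = tuple(tables[i][idx[i]] for i in range(n))
        t += 1
    return seen[state], t - seen[state]       # (transient, cycle length)

for k in (1, 2, 5):
    transient, cycle = run_length(k=k)
    print(f"K = {k}: transient {transient} steps, cycle length {cycle}")
```

Averaged over many random networks (a single seed is only suggestive), low K tends to freeze into short cycles, high K wanders chaotically through enormous ones, and K=2 sits in the poised region between. How such a system might tune its own connectivity is the next question.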
How can this work ? Let us take a human example - a committee. Suppose all the members decide to stick to their own ideas (in other words to ignore everyone else); it is unlikely that any decisions will be made - we have a stalemate (a static situation, zero connectivity). Conversely, suppose they all wish to take account of all other views (reacting with everyone else); again it is unlikely that any decisions will get made - we have vacillation (a chaotic swinging from one view to another, maximum connectivity). What will happen eventually ? Well, people will start to form groups, creating larger organised blocks - they will adjust connectivity to maximise their own advantage, ignoring useless connections and augmenting advantageous ones - politics ! The system will self-organise to a state that gets the maximum amount done (assuming all members have equal power).
In the same way we can adjust our own behaviour to optimise what we do, finding new things to do if bored, neglecting some when overloaded. In nature also, if an advantage in evolutionary terms is possible then self-organisation will tend to move the system in a way that discovers it. Evolution evolves itself to the edge of chaos...