"All the mathematical sciences are founded on relations between physical laws and laws of numbers, so that the aim of exact science is to reduce the problems of nature to the determination of quantities by operations with numbers."James Clerk Maxwell, On Faraday's Lines of Force, 1856
"If we define a religion to be a system of thought that contains unprovable statements, so it contains an element of faith, then Gödel has taught us that not only is mathematics a religion but it is the only religion able to prove itself to be one."John Barrow, Pi in the Sky, 1992
One of the most obvious features of science, compared say to the arts and humanities, is its fixation with putting numbers to things, quantifying them using mathematical formulae. This is so pervasive that scientific work is often judged more by the quality of its mathematics than by any empirical content (e.g. in cosmology). The application of equations, rather than the discovery of new ways to look at things (innovation), forms much of the value system by which would-be scientists are examined, and qualifications are granted, in our academic systems.
Complexity Theory is decidedly a new way of looking at things, and as such seems to throw out many of those traditional mathematical techniques used in our scientific work. This has led many scientists to question the whole basis of the theory, regarding it as too vague, ungrounded and inapplicable to be called science. The need for new mathematical techniques to deal with new science is however nothing new, e.g. the calculus of Newton and Leibniz, the topology of Poincaré, the non-Euclidean geometry of Riemann, the statistical mechanics of Boltzmann, the set theory of Cantor and the renormalization of Wilson. All these treatments arose from the need to quantify the new scientific theories of their time. The same quantification issue now exists within complexity theory, so here we will look into the various problems we find, and the approaches we see, in the analysis and synthesis of complex systems.
Let us first put into perspective the essential components of a complex system, and the theory we shall wish to quantify. In essence a complex system is a functional whole, consisting of interdependent and variable parts. In other words, unlike a conventional system (e.g. an aircraft), the parts need not have fixed relationships, fixed behaviours or fixed quantities, thus their individual functions may also be undefined in traditional terms. Despite the apparent tenuousness of this concept, these systems form the majority of our world, and include living organisms and social systems, along with many inorganic natural systems (e.g. rivers).
Complexity Theory states that critically interacting components self-organize to form potentially evolving structures exhibiting a hierarchy of emergent system properties.
This theory takes the view that systems are best regarded as wholes, and studied as such, rejecting the traditional emphasis on simplification and reduction as inadequate techniques on which to base this sort of scientific work. Such techniques, whilst valuable in investigation and data collection, fail in their application at system level due to the inherent nonlinearity of strongly interconnected systems - the causes and effects are not separate and the whole is not the sum of the parts. The approaches used in complexity theory are based on a number of new mathematical techniques, originating from fields as diverse as physics, biology, artificial intelligence, politics and telecommunications, and this interdisciplinary viewpoint is the crucial aspect, reflecting the general applicability of the theory to systems in all areas.
The simplest form of complexity, and that generally studied both by mathematicians and scientists, is that related to fixed systems. Here we make the assumption that the structure we are interested in does not change with time, so that we can approach analysis of the system analogously to a photograph. For example, we can look at a computer chip and see that it is complex (in the popular sense), we can relate this to a circuit diagram of the electronics and compare alternative systems to determine relative or computational complexity (e.g. number of transistors). Similarly, we can do the same with lifeforms, making measurements in terms of the number of cells, genes and so on. All these quantitative aspects fail however to address one of the main problems of complexity thinking, that of defining just what complexity is, why one system of, say, 100 components differs from another of the same size.
To approach such questions we need to look for patterns as well as the statistics of quantity. It is clear that an arrangement of 50 white then 50 black balls is less complex than 5 black, 17 white, 3 black, 33 white, 42 black, yet the significance of such a pattern is unclear - is it random or meaningful? When we expand this sort of analysis to 3-dimensional solids, and include more than one property of each part (e.g. adding size, density, shape), we get a combinatorial explosion of possible complexity that strains the analytical (pattern recognition) ability of current mathematics, even for relatively trivial systems. We have concentrated so far on just visual modalities, and views at a single magnification, yet we should be aware also that in nature multiple levels of structure exist in all systems, and this added fractal complication (e.g. complexity of molecule, plus cell, plus organism, plus ecosystem, plus planet etc.) makes even this static simplification mathematically difficult to quantify.
Adding the fourth dimension, that of time, both improves and worsens the situation. On the positive side, we can perhaps recognise function in temporal patterns more easily than in spatial ones (e.g. seasons, heartbeat), but conversely by allowing components to change we can lose those spatial patterns we originally identified, since categories and classifications alter with time (e.g. leaves are green - except in autumn when they are yellow, and in winter when they don't exist!). Function is one of the main modes of analysis we utilise in science: we ask the question 'what does the system do?', followed by 'how does it do it?', and both of these presuppose actions in time (cyclic processes), an intrinsic meaning to the structures encountered.
Given our obsession with experimental repeatability in science, it is interesting to note that the property of being either static or cyclic is at the heart of our classification of phenomena as either being scientific or not. Science relies heavily on testing or confirmation, and this presupposes that we have multiple samples (either spatially or temporally). The forms of mathematical description that we employ will therefore have to be such that we obtain the same answers each time, and this has major implications for complexity theory. We are forced, currently, to artificially reduce the complexity of the phenomena we study to meet this constraint. A person has many aspects, but we describe them only by those that do not change with time (or do so predictably), e.g. name, skin colour, nationality (or address, job, age, height). Complexity theory however requires that we treat the system as a whole, and thus have a description that includes all aspects (as far as practical). In this we go far beyond conventional scientific and mathematical treatments, by including also one-off or variable aspects (e.g. actions, moods).
Going beyond repetitive thinking takes us to a class of phenomena usually described as organic. The best known examples of this relate to the neo-Darwinian theory of Natural Selection, where systems evolve through time into different systems (e.g. an aquatic form becomes land dwelling). This open-ended form of change proves to be far more extensive than previously thought, and the same concept of non-cyclic change can be applied to immune systems, learning, art and galaxies, as well as to species. Classification of complexity thus takes another step into the dark, since if we cannot count on there being more than one example of any form, how can we even apply the term science to it?
The answer to this question comes back to pattern. In any complex
system many combinations of the parts are possible, so many in fact
that we can show that most combinations have not yet occurred even once,
during the entire history of the universe. Yet not all systems are unique;
there are symmetries present in the arrangements that allow us to classify
many systems in the same way. By examining a large number of
different systems we can recognise these similarities (patterns) and
construct categories to define them (this is, in essence, what the
Linnean taxonomy scheme for living organisms is based upon). These
statistical techniques are fine, and give useful general guidelines,
but fail to provide one significant requirement for scientific work,
and that is predictability. In the application of science (in technology)
we need to be able to build or configure a system to give a
specific function, something not usually regarded as possible from
an evolutionary viewpoint.
Our final form of complex system is that believed to
comprise the most interesting type and the one most relevant
to complexity theory. Here we combine the
internal constraints of closed systems (like machines) with
the creative evolution of open systems (like people). In this
viewpoint we regard a system as co-evolving with its environment,
so much so that classifications of the system alone, out of context,
are no longer regarded as adequate for a valid description. We must describe
the system functions in terms of how they relate to the wider outside world.
From the previous categories of discrete and self-contained systems we
seem to have arrived at a complexity concept that cannot now even
qualify a separate system, let alone quantify it, yet this misses an important
point.
Co-evolutionary systems, like ecologies and language, are
extremely adept at providing functionality, and if this is a
requirement of science (the what question) we may be able
to side-step the how question and tackle the desired predictability
in another way. This methodology moves the design process
from inside the system under consideration to outside. We can design
the environment (constraints) rather than the system itself, and let the
system evolve a solution to our needs, without trying to impose one.
This is a very new form of organic technology, yet one already beginning to show results in such fields as genetic engineering, circuit design and multiobjective optimization. From the point of view of complexity theory we wish to be able to predict which emergent solutions will occur from differing configurations and constraints.
If we allow that traditional quantification in terms of static
parameters or formulae is (at best) inadequate to fully
deal with complex systems, then what other options do
we have? Specifically, how do we deal with variables and
constants that swap places over a system lifetime (the edge
of chaos interplay of barriers and innovation)? In
essence we need to allow that all the parameters in our system
are variables (operating at differing timescales perhaps),
and also allow for the number of parameters to increase or
reduce dynamically (simulating birth or death). This again is
a break from tradition in science, and requires what Kuhn
called a scientific revolution - a new paradigm or set of initial axioms.
This is what Complexity Theory provides.
Having set out the considerable problems we face in the analysis
of complex systems, we can now turn to more positive matters.
Much work has already been done as a preliminary to the
quantification of complexity theory, and we can build on some
50 years of work in general systems theory or cybernetics,
in linguistics, dynamics and ecology,
as well as in modern genetics, cognitive science and artificial
intelligence. The mistakes and successes of this inheritance can help
steer our path towards more productive assumptions, those relating
to the common features we find across the subject matter of
all these disciplines, and related areas.
In complexity thought we look for global measures that can
apply in all fields. This assumption, along with others related
to unpredictability, non-equilibria, causal loops, nonlinearity
and openness, means that our world view is in many ways the
opposite of traditional science. Yet all these assumptions are valid
for the organic style systems being considered here. A
new type of quantification may well be needed in consequence.
Many objectives can be proposed for Complexity Theory itself.
For all such objectives we must provide practical methods of quantification
(i.e. they must be computable). We require a mathematics that can
distinguish systems as easily as can humans, recognizing and classifying
the patterns found therein. Further, we hope to be able to predict at least
some future aspects of a system from its past behaviour or present state,
and exercise some control over its development.
But before we try to apply any quantitative techniques to existing systems or
organizations we need to decide whether they are, in fact, complex in
any of the senses used here, and to decide if self-organizing complexity (SOC) is involved.
We can use the general characteristics of SOC to suggest criteria to look for in
classifying systems as this type, thus extending traditional systems
analysis methods:
Parts have more than one input, and more than one output on average
(but not too many active connections, as this leads to chaos). This is the
fan-in and fan-out structure (see the sketch after this list of criteria).
The average number of outputs produced per input used
approximates 1. If much less the system tends to a static state (converges), if much
more to a chaotic one (diverges).
Parts have the ability to learn from experience, to change their rule
sets and optimise (canalize) transitions.
Most parts act autonomously, in parallel not serial mode,
to enhance response speed and adaptability.
Parts are able to change those with which they interact, both
on a permanent and temporary basis. This allows data prioritization.
Outputs have a way to feed back into the beginning of the process,
so the results of actions early in chains can be monitored.
All variables have control paths for stability (uncontrolled
variables could indicate chaos potential). But control does not prevent
change, just acts to limit runaway effects.
There are multiple ways available to approach the same goal, giving
flexibility of response and creative freedom.
System boundaries are neither closed nor totally open. The
first is stagnation, the second panic. Filtering of information is necessary.
Multiple objectives or functions exist, giving a multi-dimensional fitness
and resilience to single dimensional fluctuations.
Building blocks: unplanned functions have emerged during operation; in fact the
modules have self-organized from part interactions and have not been imposed.
Most internal or external perturbations leave overall function intact,
but some show unexpected global effects. A power law spread of fluctuation
size and duration is found.
Control is distributed throughout the system, local decisions are made
by parts or modules within overall constraints.
Increasing information flows can indicate a move from stability
to chaos. Introducing information technology tends to do this naturally in social systems,
unless checked.
Increasing swings in results (e.g. sales for a company) can indicate a move
towards chaos; in this case external couplings are part of the positive feedback loop.
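As an illustration of the first connectivity criterion above, here is a minimal Python sketch (the dependency structure is a made-up example, not drawn from any real system) that tabulates the fan-in and fan-out of each part from a directed 'who feeds whom' description:

```python
# Hypothetical dependency structure: each part lists the parts it sends outputs to.
outputs = {
    "A": ["B", "C"],
    "B": ["C", "D"],
    "C": ["A", "D", "E"],
    "D": ["E", "A"],
    "E": ["A", "B"],
}

def fan_in_fan_out(outputs):
    """Count, for every part, the outputs it sends (fan-out) and the inputs
    it receives (fan-in)."""
    parts = set(outputs) | {p for targets in outputs.values() for p in targets}
    fan_out = {p: len(outputs.get(p, [])) for p in parts}
    fan_in = {p: 0 for p in parts}
    for targets in outputs.values():
        for p in targets:
            fan_in[p] += 1
    return fan_in, fan_out

fan_in, fan_out = fan_in_fan_out(outputs)
for part in sorted(fan_out):
    meets = fan_in[part] > 1 and fan_out[part] > 1   # more than one input and output
    print(part, "fan-in:", fan_in[part], "fan-out:", fan_out[part], "criterion met:", meets)
```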
These are not definitive indicators, but a system that has many of these
is likely to be better analysable using complexity theory than by either
linear determinism or statistical methods. Criteria of this sort can also
be employed as restructuring goals, if it is desired to move a system of
simpler form towards self-organized complexity. This may allow the system
to achieve the benefits in innovation, survival and adaptability that we see
for natural complex systems.
Given that we have identified a potentially complex system, how
do we then quantify it?
Let us now look at some specific techniques being used by
complexity researchers in an attempt to add mathematical
precision to the subject. This is by no means a comprehensive
review, since almost every researcher in this new field
has their own emphasis, so we will just sample commonly
known areas of work:
Entropy, the golden oldie, is a good place to start, since it traditionally
measures the opposite of order (or information in Shannon's
formulation). Unfortunately there are so many types of
entropy that the concept proves less than useful in practice.
The main problem is that a single figure does not distinguish
symmetrical or otherwise equally complex systems, and it
says nothing about the actual structure present.
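To make the limitation concrete, here is a minimal Python sketch of Shannon entropy applied to the two ball arrangements discussed earlier (the code and example data are ours, added purely for illustration): both arrangements contain 50 black and 50 white balls, so a single entropy figure cannot tell them apart, whatever their structure.

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """Shannon entropy, in bits per symbol, of a sequence of discrete symbols."""
    counts = Counter(sequence)
    total = len(sequence)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Two arrangements of 50 white (W) and 50 black (B) balls:
ordered = "W" * 50 + "B" * 50
mixed   = "B" * 5 + "W" * 17 + "B" * 3 + "W" * 33 + "B" * 42

# Same symbol frequencies, so the same entropy (1.0 bit per symbol),
# even though the arrangements differ - the single figure ignores structure.
print(shannon_entropy(ordered), shannon_entropy(mixed))
```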
Algorithmic Information Theory, developed by both Kolmogorov and Chaitin,
looks to describe complex systems by the shortest
computer program which can generate the system. Thus
the length becomes a measure of the complexity. The
drawback is that this has a high value for random noise
(which we don't think complex). Such an approach also
takes little account of the time needed to execute the program.
Work is ongoing to address these issues, but again it reflects
a single parameter.
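Kolmogorov complexity itself is uncomputable, so practical work falls back on proxies; a common one (our choice for this sketch, not something prescribed above) is the length of a losslessly compressed description. The fragment below also shows the drawback just mentioned: random noise scores highest.

```python
import random
import zlib

def compressed_length(s):
    """Length in bytes of the zlib-compressed string - a crude upper bound
    on its algorithmic (Kolmogorov) complexity."""
    return len(zlib.compress(s.encode("utf-8"), 9))

random.seed(1)
regular = "WB" * 500                                        # highly ordered
noise = "".join(random.choice("WB") for _ in range(1000))   # random string

# The regular string compresses far more than the random one, so by this
# measure randomness counts as maximal 'complexity' - the drawback noted above.
print(compressed_length(regular), compressed_length(noise))
```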
Self-organizing systems are found to move from static or
chaotic states to a semi-stable balance between the two. This property
relates to the physics idea of phase transitions (e.g. the state change
from ice to water), pioneered by Wilson. Attempts to quantify
this point are seen in Langton's work on lambda and
similar measures. Chief disadvantage is that such analysis is
so far restricted to low dimensional systems (few variables).
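As a small illustration, the lambda parameter can be computed directly from a rule table: it is simply the fraction of entries that map to a non-quiescent state. The sketch below is our own, using the standard Wolfram numbering of elementary cellular automaton rules as the example system, and computes lambda for rule 110:

```python
from itertools import product

def langton_lambda(rule_table, quiescent=0):
    """Langton's lambda: fraction of rule-table entries whose output is not
    the quiescent state."""
    active = sum(1 for out in rule_table.values() if out != quiescent)
    return active / len(rule_table)

# Rule table for a 1-D, 2-state, radius-1 cellular automaton (rule 110):
# each (left, centre, right) neighbourhood maps to the cell's next state.
# The enumeration index i equals the binary value of the neighbourhood,
# so bit i of the rule number is the output for that neighbourhood.
rule_number = 110
rule_table = {nbhd: (rule_number >> i) & 1
              for i, nbhd in enumerate(product((0, 1), repeat=3))}

print(langton_lambda(rule_table))   # 0.625 for rule 110
```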
Self-Organized Criticality, due to Bak, has much in common with Phase
Transitions, but concentrates on the characteristic power law
distribution of events (seen around the phase boundary)
as an indication of self-organization. This allows the treatment
of higher dimensional systems, but still gives little information
about their inherent nature.
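A minimal sketch in the spirit of Bak's sandpile (our own toy model, not a reproduction of any published experiment) drops grains onto a small grid, relaxes it, and bins the resulting avalanche sizes; the counts fall off roughly as a power law rather than clustering around a typical size.

```python
import random
from collections import Counter

def drop_grain(grid, size):
    """Add one grain at a random site and relax the pile: any site holding
    4 or more grains topples, sending one grain to each neighbour (grains
    falling off the edge are lost). Returns the avalanche size."""
    x, y = random.randrange(size), random.randrange(size)
    grid[x][y] += 1
    topplings = 0
    unstable = [(x, y)]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue
        grid[i][j] -= 4
        topplings += 1
        if grid[i][j] >= 4:          # may still need to topple again
            unstable.append((i, j))
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < size and 0 <= nj < size:
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
    return topplings

random.seed(0)
SIZE = 20
grid = [[0] * SIZE for _ in range(SIZE)]
sizes = [drop_grain(grid, SIZE) for _ in range(20000)]

# Bin avalanche sizes by powers of two; a power law shows up as counts that
# decrease steadily across the bins instead of peaking at a typical size.
bins = Counter(s.bit_length() for s in sizes if s > 0)
for k in sorted(bins):
    print(f"avalanche size {2 ** (k - 1)}-{2 ** k - 1}: {bins[k]} events")
```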
Algorithmic Chemistry takes account of the fact that system parts
interact freely, and can thus be thought of as chemical elements;
their reactions form compounds and eventually an autocatalytic
set results (forming the system), which is self-maintaining. The mathematical analysis
of such systems is largely due to Fontana. This treatment of the
parts, whilst allowing innovation, doesn't quantify any emergent
structure that results, just concentrations of components.
Identifying the possible stable structures in connected systems requires
the concept of attractors, and this idea is employed in work on
neural networks by Hopfield, feature maps by Kohonen, and discrete
networks by Wuensche. This is the best current technique
for analysing internal network structure, but is difficult to do for realistic,
high dimensional, systems.
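For small discrete networks the attractors can be found exhaustively, in the spirit of Wuensche's basin-of-attraction analysis (the network below is a randomly generated toy and the code only a sketch; realistic sizes make this enumeration explode, which is exactly the difficulty noted above):

```python
import random
from itertools import product

random.seed(2)
N, K = 8, 2   # 8 Boolean nodes, each reading 2 inputs

# Random Boolean network: every node gets K input nodes and a random
# Boolean function over those inputs.
inputs = [random.sample(range(N), K) for _ in range(N)]
tables = [{bits: random.randint(0, 1) for bits in product((0, 1), repeat=K)}
          for _ in range(N)]

def step(state):
    """Synchronous update of every node from its inputs."""
    return tuple(tables[i][tuple(state[j] for j in inputs[i])] for i in range(N))

# Follow every one of the 2^N states until it repeats; each distinct cycle
# found is an attractor of the network.
attractors = set()
for start in product((0, 1), repeat=N):
    seen, state = {}, start
    while state not in seen:
        seen[state] = len(seen)
        state = step(state)
    trajectory = sorted(seen, key=seen.get)
    cycle = trajectory[seen[state]:]                   # the repeating part
    attractors.add(min(tuple(cycle[i:] + cycle[:i])    # canonical rotation
                       for i in range(len(cycle))))

print(len(attractors), "attractors, cycle lengths",
      sorted(len(a) for a in attractors))
```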
Using the biological concept of fitness allows us to model systems
as ecologies, where the parts coevolve with each other. This
can be extended to model multiple systems, as in Kauffman's
NKCS model, and we can derive system wide fitness measures.
A drawback is that there are so many possible models that
practical work can only sample them, generating purely statistical
indicators.
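Kauffman's NK family is easy to sketch: each of N genes draws a random fitness contribution that depends on its own state and those of K other genes, and overall fitness is the average contribution. The fragment below is an illustrative sketch with arbitrary parameter values, not a reproduction of Kauffman's own experiments; it builds such a landscape and performs an adaptive walk to a local optimum.

```python
import random

random.seed(3)
N, K = 10, 2   # N genes, each epistatically coupled to K others

neighbours = [random.sample([j for j in range(N) if j != i], K) for i in range(N)]
contrib = [{} for _ in range(N)]    # lazily drawn fitness contributions

def gene_contribution(i, genome):
    """Contribution of gene i, depending on itself and its K neighbours."""
    key = (genome[i],) + tuple(genome[j] for j in neighbours[i])
    if key not in contrib[i]:
        contrib[i][key] = random.random()
    return contrib[i][key]

def fitness(genome):
    """Standard NK fitness: mean of the N gene contributions."""
    return sum(gene_contribution(i, genome) for i in range(N)) / N

# Adaptive walk: keep flipping single genes while any flip raises fitness.
genome = tuple(random.randint(0, 1) for _ in range(N))
improved = True
while improved:
    improved = False
    for i in range(N):
        trial = genome[:i] + (1 - genome[i],) + genome[i + 1:]
        if fitness(trial) > fitness(genome):
            genome, improved = trial, True

print("local optimum reached, fitness", round(fitness(genome), 3))
```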
Symbolic Dynamics, derived from linguistics, treats systems as grammars and
investigates rules of combination and structure. It is possible to
include context in this formulation and thus this can be applied to
environmentally situated systems, as seen in the classifier work of Holland. This is promising, but identifying the rules of existing systems is
a major problem.
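The first step in a symbolic treatment is to coarse-grain a trajectory into a string of symbols and ask which 'words' the system actually produces; a grammar then specifies the permitted combinations. A minimal sketch follows (the logistic map is used purely as a convenient example source of behaviour, an assumption of ours):

```python
from collections import Counter

def logistic_series(r, x0, n):
    """Iterate the logistic map x -> r * x * (1 - x)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

def symbolize(xs, threshold=0.5):
    """Coarse-grain each value into a symbol: 'L' below the threshold, 'R' above."""
    return "".join("L" if x < threshold else "R" for x in xs)

def word_counts(symbols, length):
    """Frequencies of all words (substrings) of the given length that occur."""
    return Counter(symbols[i:i + length] for i in range(len(symbols) - length + 1))

series = symbolize(logistic_series(r=3.9, x0=0.4, n=5000))
print(word_counts(series, length=3).most_common())
# A grammar for the system would state which of the possible words are
# permitted and which never occur - the rules we would like to identify.
```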
The analysis of non-equilibrium systems is also at an early stage,
and is characterised by work on dissipative structures in physics (Prigogine)
and autopoiesis in biology (Maturana/Varela). These self-maintaining systems are self-organizing structures, but again little direct attention is paid to pattern.
Some other approaches, occasionally used by complexity
researchers, derive from more specialist areas. The
importance of these may increase as they become more
widely known, so let us look at a few of these also:
From political science we have Game Theory, the study of interactions
based on decisions and relative advantage, usually associated
in our field with Axelrod. This quantifies decision fitness at an
individual pair level, but is harder to apply to more diffuse systems.
The important aspect here is the distinguishing of positive from negative
evolutionary paths - goal directed behaviours.
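Axelrod's iterated prisoner's dilemma is the standard worked example of this quantification at the pair level. Here is a minimal sketch with two familiar strategies (the payoff values are the conventional 3/5/1/0 ones; nothing here reproduces Axelrod's actual tournaments):

```python
# Payoff to the first player for (my move, their move); C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return their_history[-1] if their_history else "C"

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    """Iterated prisoner's dilemma; returns the two cumulative payoffs."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))     # sustained cooperation: (600, 600)
print(play(tit_for_tat, always_defect))   # exploitation limited to the first round
```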
Spin Glass models, again from physics, use a lattice of interacting
points and are chiefly seen in complexity work under the guise
of cellular automata, which can be used to
model many physical phenomena (as in the work of Rucker). The technique,
whilst excellent for simulation, proves mathematically difficult, but is
important in relation to the demonstration of emergence,
higher level structure.
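A cellular automaton illustrates the point directly: a purely local rule, applied over a lattice, produces persistent higher-level structures that are nowhere stated in the rule itself. A minimal sketch of an elementary (1-D) automaton, again using rule 110 in the standard Wolfram numbering:

```python
def step(cells, rule=110):
    """One synchronous update of an elementary cellular automaton
    (1-D, 2 states, nearest neighbours, wrap-around boundaries)."""
    n = len(cells)
    return [(rule >> (4 * cells[i - 1] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

# Start from a single live cell and watch structures (gliders, persistent
# patterns) emerge from the local rule.
cells = [0] * 79 + [1]
for _ in range(40):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```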
Time Series Analysis, based on communications theory, looks to identify
regularities in the behaviour of a system over time, trying to
quantify cyclic or chaotic (strange) attractors. It is often applied
to financial systems, e.g. at Santa Fe. Chief drawback is that
the system must have a lot of data available for analysis, but
the advantage is that limits can be placed on the system behaviour.
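One widely used diagnostic is the largest Lyapunov exponent, which separates cyclic from chaotic attractors. The sketch below computes it for the logistic map, where the derivative is known analytically; real time-series work has to estimate the same quantity from data alone, which is where the large data requirement comes from (the example system and parameter values are our own choice):

```python
import math

def lyapunov_logistic(r, x0=0.31, n=100_000, discard=1_000):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the long-run average of log|f'(x)| along the orbit.
    Positive values indicate a chaotic (strange) attractor, negative
    values a periodic (cyclic) one."""
    x = x0
    for _ in range(discard):              # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n

print(lyapunov_logistic(3.2))   # negative: the orbit settles on a 2-cycle
print(lyapunov_logistic(4.0))   # positive (about ln 2): fully chaotic
```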
In the analysis of nonlinear systems we need a way of quantifying
many interacting variables and fuzzy logic provides
this, generating a result that maps all possible interactions of the inputs. This
technique, due to Zadeh, has yet to be applied widely to complexity
ideas, but is important for its potential to treat multiple conflicting
variables in decision systems.
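A minimal sketch of fuzzy membership and a two-rule min-max inference (all variable names, membership shapes and rule choices here are hypothetical, invented purely to show the mechanics of combining partially-true conditions):

```python
def triangular(a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# Hypothetical linguistic variables for a small decision system.
temperature_high = triangular(20, 35, 50)
load_heavy = triangular(40, 80, 120)

def fan_speed(temp, load):
    """Two fuzzy rules combined by min (AND) and max (OR), then defuzzified
    as a weighted average of each rule's recommended speed."""
    rule_fast = min(temperature_high(temp), load_heavy(load))        # high AND heavy
    rule_slow = 1.0 - max(temperature_high(temp), load_heavy(load))  # neither
    fast, slow = 100.0, 20.0
    total = rule_fast + rule_slow
    return (rule_fast * fast + rule_slow * slow) / total if total else (fast + slow) / 2

print(round(fan_speed(30, 70), 1))   # a blend of the conflicting recommendations
```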
Multiobjective Optimization, from operational research, recognises the inter-dependency
of multiple values in real world cases, and when combined with evolutionary
computation allows us to study the dynamics of epistatic systems and the
multiple global optima (Pareto fronts) common to such systems. There are
many techniques involved here, some involving synergic
considerations; for an overview see our introduction PMO.
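The core computation is the extraction of the non-dominated set, the Pareto front, from a collection of candidates scored on several conflicting objectives. A minimal sketch (the candidate scores are invented for illustration):

```python
def dominates(a, b):
    """True if a is at least as good as b on every objective and strictly
    better on at least one (all objectives maximised)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(solutions):
    """The non-dominated subset of the candidate solutions."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other != s)]

# Hypothetical designs scored on two conflicting objectives,
# e.g. (performance, robustness).
candidates = [(1, 9), (3, 8), (5, 5), (4, 7), (2, 6), (6, 2), (7, 3), (5, 6)]
print(sorted(pareto_front(candidates)))   # [(1, 9), (3, 8), (4, 7), (5, 6), (7, 3)]
```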
System Dynamics, largely due to Forrester, is a computer modelling technique that looks to quantify
how the dynamics of systems, based upon our assumptions of how the parts/variables are
interconnected (their dependency structure), differs from our beliefs about such dynamics. It highlights the difficulties of predicting actual complex systems behaviour when our views are
constrained by the results of over-simplified reductionist experiments.
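A small stock-and-flow sketch shows the flavour: two stocks coupled by feedback loops, integrated with simple Euler steps. The model and its parameter values are invented for illustration; the point is only that the coupled loops produce overshoot-and-decline behaviour that the growth-oriented assumptions, read separately, would not suggest.

```python
def simulate(steps=400, dt=0.25):
    """Hypothetical two-stock model: a population living off a slowly
    renewing resource, advanced by Euler integration."""
    population, resource = 10.0, 1000.0
    history = []
    for _ in range(steps):
        births = 0.08 * population * (resource / 1000.0)   # flow in to population
        deaths = 0.03 * population                         # flow out of population
        regrowth = 0.02 * (1000.0 - resource)              # flow in to resource
        consumption = 0.5 * population                     # flow out of resource
        population += dt * (births - deaths)
        resource = max(resource + dt * (regrowth - consumption), 0.0)
        history.append((round(population, 1), round(resource, 1)))
    return history

history = simulate()
for t in range(0, 400, 50):        # population overshoots, then declines
    print(t, history[t])
```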
By statistically measuring the diversity, cumulative activity and innovations of
evolving systems it becomes possible to classify these in terms of their
open-ended evolutionary potential. The technique, due to Bedau, Snyder & Packard,
allows the emergent behaviour of artificial and natural systems to be determined,
but does require an historical record of their component activity to be available.
Few, if any, artificial systems currently show any unbounded emergent potential, however.
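The bookkeeping involved can be sketched quite simply (the evolving population below is a trivial mutating bit-string model invented for illustration, not one of the authors' own test systems): every component present in a generation has its activity counter incremented, and diversity plus the activity of the components currently present are then read off the record.

```python
import random
from collections import defaultdict

random.seed(4)

def mutate(genotype, rate=0.02):
    return tuple(g if random.random() > rate else 1 - g for g in genotype)

def fitness(genotype):
    return sum(genotype) + 1          # toy fitness: count of 1s (plus 1)

# Toy evolving population, used only to show the activity bookkeeping.
population = [tuple(random.randint(0, 1) for _ in range(16)) for _ in range(100)]
activity = defaultdict(int)           # cumulative activity per genotype (component)

for generation in range(201):
    for g in population:               # each copy present adds one unit of activity
        activity[g] += 1
    if generation % 50 == 0:
        present = set(population)
        mean_activity = sum(activity[g] for g in present) / len(present)
        print(generation, "diversity:", len(present),
              "mean activity of current components:", round(mean_activity, 1))
    weights = [fitness(g) for g in population]
    population = [mutate(g) for g in random.choices(population, weights, k=100)]
```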
Multi-agent modelling, a technique based upon artificial life ideas, studies the dynamics of collections of interacting autonomous agents. The self-organization that results from different
initial assumptions and sets of agent values helps quantify how different features of real
systems can arise, and evaluates their stability to perturbations caused by changes to internal structure and goals. It can be applied to many levels of reality and was pioneered in the social sciences by Epstein and Axtell.
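A minimal agent-based sketch follows (a Schelling-style relocation model chosen for brevity; it is not Epstein and Axtell's Sugarscape, and all the parameter values are arbitrary): agents with only a mild preference for similar neighbours relocate when dissatisfied, and pronounced clustering emerges at the global level that no individual agent intended.

```python
import random

random.seed(5)
SIZE, THRESHOLD = 20, 0.4     # grid size; fraction of like neighbours wanted

# 0 = empty cell, 1 and 2 are the two agent types (roughly 10% of cells empty).
grid = [[random.choice((0, 1, 1, 1, 1, 2, 2, 2, 2, 2)) for _ in range(SIZE)]
        for _ in range(SIZE)]

def unhappy(x, y):
    """An agent is unhappy if too few of its occupied neighbours share its type."""
    me, same, other = grid[x][y], 0, 0
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == dy == 0:
                continue
            n = grid[(x + dx) % SIZE][(y + dy) % SIZE]
            if n == me:
                same += 1
            elif n != 0:
                other += 1
    return same + other > 0 and same / (same + other) < THRESHOLD

def step():
    """Move one randomly chosen unhappy agent to a random empty cell;
    returns how many agents were unhappy before the move."""
    movers = [(x, y) for x in range(SIZE) for y in range(SIZE)
              if grid[x][y] != 0 and unhappy(x, y)]
    empties = [(x, y) for x in range(SIZE) for y in range(SIZE) if grid[x][y] == 0]
    if movers and empties:
        (x, y), (ex, ey) = random.choice(movers), random.choice(empties)
        grid[ex][ey], grid[x][y] = grid[x][y], 0
    return len(movers)

for _ in range(2000):
    if step() == 0:               # stop early once every agent is satisfied
        break
print("\n".join("".join(".AB"[c] for c in row) for row in grid))
```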
It may have been noticed that most of the attempts at quantification seen
so far deal either with the parts (traditional reduction) or look to
simplify the system to a single or few parameters. This is rather
strange, especially as the main problems we wish to solve are
the prediction of complex overall structure (holistic view) and
the analysis of systems with many simultaneous aspects (e.g. man).
Techniques to do either of these are still pretty well non-existent,
and it is in this area that we hope to see progress over the next
few decades. Multidimensional patterns, and their equivalence
at different scales (e.g. growth), reflect a topological component and also aspects
of Fourier analysis (holographic transforms). We can speculate that
these disciplines may have useful contributions to make here, as perhaps
may category theory.
Complexity Theory is still a very new subject, so it is to be expected that progress
will be slow until a critical mass of interdisciplinary researchers is reached. So
far there is little specialised educational training available below
graduate level, and until the visibility of the science increases
it is likely that the research funding needed, for the analysis of
what are very complex systems, will be hard to obtain. The
interdisciplinary nature of this work also acts against easy
acceptance and comprehension, especially in an academic world
so strongly focused on specialist fields and biased against outsider
participation.
The current state-of-the-art in complex systems quantification
is very fluid, as befits a fast developing science. With regard to the
most interesting, self-organizing, type the reader is referred to the tentative
results. For
other types, and more specialist results, our links
and papers pages may assist in highlighting
current work. Application of such ideas however does not necessarily
require firm quantification; the general criteria listed earlier may suffice,
and our applications
page shows many of these attempts. It would be true to say, however, that
few of our suggested objectives have yet been realistically approached.
In any new scientific enterprise we need to have a focus with which to
concentrate the mind, and to encourage detailed work. The overall foundations
of complexity theory are now fairly well established and we look at some
philosophical aspects of this paradigm in our Concept, where
we integrate this view with traditional approaches. Quantification of these concepts
is not easy, but promises to provide techniques that can take complexity
theory beyond the traditional scientific boundaries and into the related,
but wider, areas of the arts and humanities.
Page Version 4.83 June 2006 (paper V1.2 October 2004, Original April 1999)