CALResCo Group (Complexity & Artificial Life Research), Manchester U.K.
(A talk presented at the 'Self-Organising Systems - Future Prospects for Computing' Workshop held at UMIST 28/29 October 1999 in conjunction with the EPSRC 'Emerging Computing' Networks Initiative. Published as the appendix to William Roetzheim's book "Why Things Are - How Complexity Theory Answers Life's Toughest Questions", 2007, ISBN 978-1-933769-26-4 ).
Self-organisation imposes a set of axioms that prove in many ways different from those usually adopted in scientific work. These assumptions are common to most of the complexity specialisms, and relate to system properties that are uncontrolled, nonlinear, coevolutionary, emergent and attractor rich, as well as being heterarchical, non-equilibrium, non-standard and non-uniform. Additionally, behaviours showing unpredictability, chaotic instability, mutability and phase changes, along with inherently undefined values, self-modification, self-reproduction and fuzzy functionality, add issues that seem inimical to traditional computing approaches. In this paper we attempt an overview of the philosophical implications of complex systems thought, and investigate how this alternative viewpoint affects our attempts to design and utilise adaptive computer systems. We classify the types of complex system that relate to self-organisation and contrast the old inorganic paradigm (control based) with the new organic (self-organising) perspective. Some important aspects are identified that need attention when attempting to apply this viewpoint to program design, and we also examine how these factors manifest in natural self-organising systems in order to obtain pointers for the artificial implementation of such ideas. The overall requirements for self-organising computing are considered and we explore some alternative ways of looking at some specific problems that may arise. We conclude by asking how these issues relate to a typical modern artificial life simulation and discuss various ways of moving forward in the area of practical contextual computer system design.
In recent years we have seen considerable activity within the complexity sciences. Much of this has been of a specialist nature, concerned with the investigation of specific problems and the development of experimental models and techniques for dealing with complex systems. Yet the ideas emerging from such studies also have major implications for our thought processes and challenge many of our traditional scientific axioms, especially in relation to the possibility of self-organization within local and not global contexts [Heylighen, Kauffman, Lucas1997a].
Before we apply these new philosophical ideas to computing it is well to understand both the concepts behind them and how they relate to conventional programming approaches. In many cases it would be fair to say that conventional computing follows the paradigms common to conventional science, ideas that also form a strong part of our social and educational predispositions. Those ideas are deeply ingrained in our belief systems and it is often difficult to see clearly which of them are inapplicable to new modes of thought. Our focus here will assume an ultimate goal of creating an hypothetical artificial lifeform that can exist autonomously in a human environment [Kanada], in other words a self-contained system operating in a wider context.
We will start by outlining the philosophical ideas generally accepted as being involved in complexity thinking, in other words the concepts that differ from our conventional technological approaches, before considering the various types of complexity that can exist and which are studied in the complexity research fields. The most complex of these relates to self-organisation itself in an organic mode of operation and we will then compare, from a computing viewpoint, the modes of operation of organic and inorganic systems. Moving on to consider the implications of applying self-organisation concepts to the world around us, we consider how these ideas are manifested in typical human scenarios. The computational requirements needed to apply these features to programs are then outlined, followed by the problems that remain and some suggested approaches. We look at how these issues affect Echo, a typical Artificial Life simulation system based upon Complex Adaptive System (CAS) thinking, before outlining ways forward for future research.
Complexity philosophy is an holistic mode of thought and relates to the following properties of systems. Not all these features need be present in all systems, but the most complex cases should include them.
Complex systems are generally composed of independent agents, all of which are regarded as equally valuable in the operation of the system (an anarchic power structure). Thus any control structure or leadership (power asymmetry) must emerge by self-organisation and not be imposed.
Taking the properties of each part and adding them does not give a valid solution to overall fitness - the whole is more than the sum of the parts. Epistatic interactions between parts require an overall non-reductionist analysis.
The properties of the overall system will be expected to contain functions that do not exist at part level. These functions or properties will not be predictable using the language applicable to the parts only and are what have been called 'Meta-System Transitions' [Turchin].
The parts are regarded as evolving in conjunction with each other in order to fit into a wider system environment, thus fitness must be measured in contextual terms as a dynamic fitness for the current niche, and not in relation to any imposed static function.
A system will be expected to contain multiple alternative attractors (areas of stable operations), thus several different behaviours are possible for the same system, depending upon the initial configuration and subsequent perturbations (a minimal illustrative sketch of this appears after this list of properties).
Energy flows drive the system away from equilibrium and establish semi-stable modes as dynamic attractors. This relates to metabolic self-sustaining activity which in living systems is usually called autopoiesis.
Part freedom allows varying associations or movement, permitting clumping and changes over time; initially homogeneous systems will develop self-organising structures dynamically.
Each part can evolve separately, giving diversity in rule or task space. The mix of rules (learning) that occurs depends upon overall contextual system emergence.
These are critical points in connectivity terms maintained at a phase boundary by the self-organising system dynamics. At this point a power law distribution of properties occurs in both space and time.
Feedback loops allow some divergence in state space from nominally similar inputs, as well as convergence for other values. This relates to operation in an edge of chaos state and is a feature of the mix of attractors typically present at that point.
Sudden swaps between attractors are possible as the system approaches the boundaries of the attractors. Evolution operates in steps rather than gradually, with the wild swings in coevolutionary balance often associated with such perturbations to ecosystems.
New configurations are possible due to part creation, destruction or modification. This relates to changes to the structure of state space, which must be regarded as dynamic, not static and does not conserve world lines which may bifurcate and merge over time.
Systems can replicate to create additional systems (e.g. organisms or franchises). Copying errors (including mutations, recombination or insertion) permit new system structures to become available, allowing open ended evolution and self-generation (autocatalysis).
Parts can change their associations freely, either randomly or by evolved learning procedures. Thus the system can be regarded as redesigning itself over time, as far as proves necessary to maintain or change function.
The meaning of the system's interface with the environment is not initially specified and must evolve. This requires that semantic values or communications are created dynamically (or constructed) by the system by environmental interaction and are not simply a direct reflection of the external world. This is a contextual rather than an absolute view of truth.
The overall system function is not initially known but is created by coevolutionary methods. This relates to combinations of the emergent values creating an inherent theory of operation in which dualist classifications are unlikely and probabilistic matching between system and environment must suffice.
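To make the attractor-related properties above more concrete, the following minimal sketch (in Python; the network size, wiring and random rule tables are invented here for illustration and are not from the original text) builds a small random Boolean network of the kind studied by Kauffman, and shows how different initial configurations can settle onto different attractors.

```python
import random

# Minimal random Boolean network sketch (illustrative only): N nodes, each
# reading K randomly chosen inputs through a random Boolean function.
# Different initial states may fall into different attractors, as described
# in the properties above.
random.seed(1)
N, K = 8, 2
inputs = [random.sample(range(N), K) for _ in range(N)]
tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

def step(state):
    new = []
    for i in range(N):
        idx = 0
        for src in inputs[i]:
            idx = (idx << 1) | state[src]
        new.append(tables[i][idx])
    return tuple(new)

def attractor(state):
    """Iterate until a state repeats; return the first state revisited (it lies on the cycle)."""
    seen = set()
    while state not in seen:
        seen.add(state)
        state = step(state)
    return state

# Two different initial conditions may end on different attractors.
s1 = tuple(random.randint(0, 1) for _ in range(N))
s2 = tuple(random.randint(0, 1) for _ in range(N))
print(attractor(s1))
print(attractor(s2))
```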
We can summarise the structure of complex systems in an overall heterarchical view (Figure 1) where successively higher levels show a many to many (N:M) structure, as does the overall metasystem.
Part interactions create emergent modules with new properties. These modules themselves interact as parts at an higher level and this process leads to the creation of an emergent Hierarchical system (the upward causation). The components at each level connect horizontally to form an Heterarchy - an evolving web like network of associations. This combination of hierarchy and heterarchy within a system is called a Dual Network by [Goertzel]. Additionally systems can have overlapping members at each level (e.g. individuals can belong to many social groups, molecules to many substances, a situation to many models and a model to many situations). These large scale interacting emergent systems are called Hyperstructures by [Baas], groups of interlaced dual networks constrained by downward causality (where emergent properties then govern the parts that formed them [Campbell]). Here we extend these ideas slightly by allowing explicit cross level interference between systems (e.g. an individual affecting another country overall, a cell affecting an external part). This extended design we call here an Heterarchical Hyperstructure (to reflect the flexible interrelationships between levels typical of human systems). We could also call this three dimensional structure a CAS Cube (intrasystem, interlevel, intersystem) or a Triple Network. Natural hyperstructures typically will have thousands of components and connections per system, rather than the few shown here for illustration, and generally therefore complex systems are very high dimensional. Given that a metasystem has such a set of structures, then the overall fitness will relate to the interdependent properties at all levels, in other words to the full contextual environment.
Previous work has identified four classes of complexity [Lucas1999], of which only the last is directly relevant to our focus here. In this more general treatment we will extend these concepts to cover high-dimensional complexity, where in the limit the system is assumed to possess infinitely many components. These four nested complexity types (the latter including the former) are:
For example the visual complexity of a computer chip or a picture. These relate to fixed point attractors, in other words to Class 1 CA systems [Wolfram], but in multidimensional systems these do not necessarily relate to homogeneity, although they are closed equilibrium systems. This form of complexity is studied by such techniques as Algorithmic Information Theory [Chaitin] and is also common in physics.
This includes such states as planetary orbits, heartbeats, seasons. They are cyclic attractors and relate to Wolfram Class 2 CAs. Multiple cycles may be superimposed in highly complex systems (decomposable by such techniques as Fourier analysis). These closed systems are those conventionally studied in the sciences, where the time regularity gives the repeatability necessary for prediction, and again are equilibrium systems where initial transients have been discarded.
This mainly relates to the process of evolution in nature where a single cell gave rise to an extraordinary diversity of forms and functions (Linnean taxonomy). Also related are diffusion-limited aggregation and similar branching tree structures. These are historically constrained and form ergodic or strange attractor systems, equivalent to Wolfram Class 3 CAs. They involve searches of state space, but more importantly the creation of new areas of state space, new possibilities by the production of new components (and conversely shrinkage of state space by the destruction of failed options). These are open, non-equilibrium systems and can be regarded as existing on a permanent non-repeatable transient. The high-dimensionality here is embodied in the large populations typically encountered which taken together ensure evolutionary uniqueness.
Operating at the edge of chaos, these systems loop back on themselves in nonlinear ways and generate the rich structure and complex mix of the above attractors associated with Wolfram Class 4 CA systems. This is the advent of autopoiesis, the creation of adaptive self-stabilising organic systems that can swap between the available attractors depending upon external influences and also modify and create the attractors coevolutionarily (by learning). They differ from the purely evolving category in that state space is canalized by the self-organising nature (downward causation) of their internal emergent processes, thus possible functions are self-limiting. These systems occupy dissipative, semi-stable, far-from-equilibrium positions exhibiting the typical power law distribution of events familiar from critical systems at the phase transition [Bak]; they are structurally and organisationally both open and closed, with semi-permeable material and informational membranes allowing the passage of operational triggers driving their attractor modes.
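As a concrete illustration of the mapping onto Wolfram's CA classes, the short Python sketch below runs an elementary one-dimensional cellular automaton; the rule numbers chosen (32, 108, 30, 110) are ones commonly cited as representative of Classes 1 to 4 respectively, an assumption of ours rather than something stated in the text above.

```python
# Minimal elementary cellular automaton runner (illustrative sketch).
def run_eca(rule, width=64, steps=32):
    table = [(rule >> i) & 1 for i in range(8)]    # rule number -> lookup table
    row = [0] * width
    row[width // 2] = 1                            # single seed cell
    history = [row]
    for _ in range(steps):
        row = [table[(row[(i - 1) % width] << 2) | (row[i] << 1) | row[(i + 1) % width]]
               for i in range(width)]
        history.append(row)
    return history

# Rules often cited as representative: 32 (Class 1), 108 (Class 2),
# 30 (Class 3), 110 (Class 4).
for rule in (32, 108, 30, 110):
    print(f"rule {rule}:")
    for r in run_eca(rule)[:8]:                    # show the first few rows
        print("".join("#" if c else "." for c in r))
```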
In this progression from relatively simple static recognition and classification, through predictable systems and innovative systems, to self-maintaining systems, we encounter increasing environmental awareness (in the sense of perception) by the system - there is a process taking progressively more information from the environment. It is the ability to evolve such awareness that we wish to capture in the use of the self-organisation paradigm for computing purposes.
The main additional characteristics of Type 4 aware systems are:
Let us now contrast some current inorganic technological approaches with those of the alternative organic paradigm. Note that these divisions are illustrative and not meant to imply strict boundaries between the approaches; many systems will cross the boundaries on some of these criteria.
Mode | Inorganic | Organic |
Construction | Designed | Evolved |
Control | Central | Distributed |
Interconnection | Hierarchical | Heterarchical |
Representation | Symbolic | Relational |
Memory | Localised | Distributed |
Information | Complete | Partial |
Structure | Top down | Bottom up |
Search space | Limited | Vast |
Values | Simple | Multivariable |
View | Isolated | Epistatic |
Expanding these somewhat:
Here we compare systems created to meet a human end or goal and those whose designs appear by trial and error, internally goal directed if at all.
There is a main procedure controlling the system in centralised global control but no such prioritising structure in distributed local control. Levels of control may thus be formal, with a single entry point, or informal with multiple entry points corresponding to each agent or to sets of them.
There is normally one linear path through tree like man-made systems (giving a single output) compared to the multi-route web-like nature of natural structures (with multiple outputs).
The system can intelligently represent the external world in its data structures (e.g. traditional AI) or may operate directly on the world itself without embodied intelligence in more natural mode (e.g. subsumption architectures [Brooks]).
Data may be stored in discrete locations (whether in one place or many, e.g. COBOL records) or may be held in an holistic form in the program structure and connections as emergent properties (e.g. Neural Network).
The program may have all the knowledge and operations it needs to fully solve the problem mathematically (within a restricted domain), or may need to rely on partial data and inadequate resources, giving approximate or probabilistic results (sampling or simplification of multidimensional space).
Top down relates to a design of a system starting from the overall function and gradually adding detail, whilst bottom up starts with the lowest level parts and by combining them creates an unplanned function.
Options can be limited by design constraints, a well defined function (with a global optimum or unique solution), compared to having all state space available for potential use (multiple local optima) where we must balance the conflicting rewards of accepting a current limited function against those of searching for a better one (especially important where better options become available with time).
Artificial systems are usually designed to deal with individual subjects or values (e.g. banking) whereas natural systems may have multiple simultaneously active values (e.g. mind). This is the difference between one-dimensional and multidimensional thinking.
Isolated systems assume all variables can be treated separately (a reductionist - genecentric view) whilst epistatic ones recognise that the individual solutions interact and need treating as a whole (an holistic - schema view).
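The contrast between the isolated and epistatic views can be made concrete with a deliberately tiny sketch; the fitness table below is invented purely for demonstration. Optimising each variable separately, with the other held fixed, misses the optimum that a joint, holistic search finds.

```python
import itertools

# Toy fitness function with epistasis: the two bits interact, so only the
# coordinated combination (1, 1) is best, while mixed states are penalised.
def fitness(x, y):
    return {(0, 0): 2, (0, 1): 0, (1, 0): 0, (1, 1): 3}[(x, y)]

# Reductionist pass: optimise x with y fixed at 0, then y with the chosen x.
best_x = max((0, 1), key=lambda x: fitness(x, 0))        # stays at 0
best_y = max((0, 1), key=lambda y: fitness(best_x, y))   # stays at 0
print("one-at-a-time:", (best_x, best_y), fitness(best_x, best_y))

# Holistic pass: evaluate the combinations together.
best = max(itertools.product((0, 1), repeat=2), key=lambda p: fitness(*p))
print("joint search:  ", best, fitness(*best))
```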
Mode | Inorganic | Organic |
Constraints | Static | Dynamic |
Change | Deterministic | Stochastic |
Language | Procedural | Production |
Operation | Taught | Learning |
Interaction | Defined | Coevolutionary |
Function | Specified | Fuzzy |
Update | Synchronous | Asynchronous |
Future | Predictable | Unpredictable |
State Space | Ergodic | Partitioned |
Causality | Linear | Circular |
Expanding these also:
Systems may be constrained internally, as in engines, where the static parts have fixed degrees of freedom, or they may be constrained only by external dynamic environmental factors (e.g. natural selection), giving behavioural freedom within certain limits.
The options that can be dealt with can be pre-specified (a fully bivalent transition table, i.e. standard IF..THEN) or can arise by chance (a probabilistic form of interaction, i.e. molecular encounters).
The program structures are laid down and inviolate in conventional computing but can themselves be changed in organic style systems (e.g. L-systems, Genetic Programming, Classifier Systems) which incorporate self-modifying abilities.
The system may be instructed to follow well known and fixed processes (or algorithms) or may be able to create these processes itself (as humans do by discovery). The system can be designed to cope with errors (unexpected data) by rejection or expected to change itself to adapt to any new data.
The agents may have specified connectivity or this may itself evolve contextually in a flexible manner, both in terms of spatial and temporal information transfer.
Mechanical systems are intended to perform a well defined function (e.g. process a cheque), organic ones perform a function that may be contextually free to change (e.g. our human goals).
System may operate in an ordered way, step by step, or all the parts may operate independently at their own speed.
The performance may allow the system to be taken for granted as consistent or it may throw the occasional surprise with a leap to a new attractor structure.
System is assumed to function for all possible inputs, i.e. is globally applicable, or may change itself to operate in only limited canalized contexts (i.e. specialising).
The program flow may be from input to output in a well specified manner or partial output may redefine the inputs along the way by cybernetic feedback paths.
These aspects relate mostly to connectivity, the idea that, unlike traditional reductionist approaches, we must consider the interactions between parts as being more crucial to their behaviour than is their composition. The ability to control connectivity allows systems to adopt positions between static and chaotic phases, the edge-of-chaos state that maximises adaptability or information processing ability [Langton].
Constraints - Not totally self contained, contextually situated
True self-organisation is impervious to environmental perturbations or selection [Burian]; being totally self-contained by definition, strict self-organisation has no external function and is over-controlled (e.g. a closed chemical reaction). Some freedom is therefore necessary, but too much (e.g. sufficient for universal computation) proves useless - the system is too free to provide any specific function (like an unprogrammed computer). This relates to a two-level (internal plus external) style of coevolutionary self-organisation, with external constraints shaping self-organisational behaviour.
Canalization - Attractor categorisations restrict available state space
Classification depends upon system dynamics and is not open ended. For effective action there is a need for diverse program options, dynamically selected by the environment, yet stability requires attractors giving some imperviousness to perturbation. The canalization due to the available attractors implies possible inefficiency in the best options available to our system. The global optimum may not be amongst the options available in practical state space at any one time; in other words, not all possible systems may be achievable from any starting point within any reasonable time.
Environment Matching - Semantic interactions are necessary
Complexity philosophy relates to situated self-organisation and we require eigenbehaviours - environmentally correlated attractors giving a meaningful functional emergence. Symbolic controls (both genetic and environmental) plus material constraints (available primitive parts) give semantic closure (internally defined, selected self-organisation) [Rocha]. Uncertainty is necessary for decisions in complex environments, thus we need forms of change whose dynamics can generate new attractors and thus new semantic meaning.
Sensitivity - Positive feedback effects can cross systems
Due to chaos a small action can have potentially major effects. This depends upon connectivity, which in living systems is generally vast (e.g. gravitation, electromagnetism, sight, touch, smell). Given such wide connectivity, the results of small actions will be unpredictable even in theory, leading to the idea that measurements of such systems may cause disturbances affecting quite different systems. Yet even if we neglect the wider issues of connectivity across systems, causal loops (effects feeding back onto causes) ensure that all agents are potentially equally important within any system.
Responsiveness - Internally driven mutation and history
Both change (mutation or choice) and selection are needed to change a system response. With selection, unfit options are automatically replaced, allowing the tracking of environmental changes, which implies that fitness landscapes are non-stationary due to coevolution. History (the current position on the fitness landscape) and internally mutated rules provide the basis for the path a system takes through state space and thus the grounding for what selection can achieve. If change takes the form of choice we will need to choose probabilistically, and in general this will only approximate a true Markov process since past data is not all compressed into the current state (uncertainty remains).
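A minimal sketch of this responsiveness, with every number chosen arbitrarily for illustration, is a population tracking a slowly drifting target: selection repeatedly replaces the least fit option with a mutated copy of a fitter one, so the system follows the non-stationary fitness landscape.

```python
import random

# Illustrative sketch: a drifting target value stands in for a non-stationary
# environment; selection keeps replacing the least fit option with a mutated
# copy of a fitter one, so the population tracks the change.
random.seed(0)
population = [random.uniform(0, 10) for _ in range(20)]
target = 2.0

for step in range(200):
    target += 0.05                                     # slow environmental drift
    fit = lambda x: -abs(x - target)                   # closer to target = fitter
    worst = min(range(len(population)), key=lambda i: fit(population[i]))
    parent = max(population, key=fit)
    population[worst] = parent + random.gauss(0, 0.3)  # mutated replacement
    if step % 50 == 0:
        print(f"step {step:3d}  target {target:5.2f}  best {max(population, key=fit):5.2f}")
```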
Modularity - Multi-level functionality present
Issues of combinatorics based upon the available parts, together with self-organisation, suggest that modularity at an higher level should occur [Spirov]. These building blocks provide a similar resource for the generation of even higher levels of structure and function, and this may be necessary for any system able to deal adequately with a multi-level world. Functionality can thus develop in complex ways and this suggests that we must approach the design in an integrated way, as a solution to not one but multiple concurrent problems.
Multivalue - Compromise optimisation or speciation
Multilevel interaction implies a multidimensional fitness and thus multioptimisation is required in some way. Due to epistasis this usually necessitates complex compromises, using such techniques as multiobjective Pareto optimisation [Coello]. Such optimisation may not be the same, however, as evolutionarily stable systems (ESSs), since selfish options are rejected in the Pareto process (which often does not apply in natural systems). Multiple solutions may be preferable in practice to a compromise one, and this implies speciation, with non-commensurable objectives leading to multi-peak (niche) construction by coevolutionary drift, active choice or innovation [Laland].
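A minimal illustration of the Pareto idea mentioned above (the candidate objective values are invented for demonstration) is to extract the non-dominated set for two conflicting objectives, both to be maximised.

```python
# Candidate solutions scored on two conflicting objectives (invented values).
candidates = [(1, 9), (3, 8), (5, 5), (2, 2), (8, 3), (9, 1), (4, 7)]

def dominates(a, b):
    """a dominates b if it is at least as good in both objectives and differs in at least one."""
    return a[0] >= b[0] and a[1] >= b[1] and a != b

# The Pareto front is the set of candidates no other candidate dominates.
pareto = [c for c in candidates if not any(dominates(other, c) for other in candidates)]
print(sorted(pareto))   # the non-dominated compromise set; only (2, 2) is excluded
```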
Compression - Coarse graining loses information
Complex systems are attractor dominated and this implies a convergent coarse graining of state space - their possible responses to the environment. On a wider level we can regard scientific laws also as being algorithms for the compression of environmental variety. Which aspects are summed over will depend upon emergent categorisation plus any restrictions that are imposed externally, but much context will be missed in any particular case, and this can lead to poor decision making for low probability situations since relevant information has been discarded. Categorisation must retain relevant dimensions yet discard unnecessary ones.
Agents - Sparse connectivity or signal density
Independent agents imply a distributed control of connection and rules. The scope of their allowed neighbourhoods affects their possible recursion behaviour - local variables give more isolated causality, whilst global variables allow both greater self-organisation possibilities and the possibility of chaos. For operation at the edge of chaos with highly connected agents (e.g. brain neurons), we need canalized connectivity, in other words a sparse occupancy of neurons by signals, with most pathways being unused at any one time - a mode similar to what has been called extremal dynamics.
Putting some of the main points together we can arrive at a definition of what sort of theory we are proposing by using complexity thinking. This scientific theory both helps to classify the nature of organic systems and predicts what we must do in general terms to create artificial equivalents:
Critically interacting components self-organize to form potentially evolving structures exhibiting a hierarchy of emergent system properties.
The elements of this definition relate to the following:
This theory, or something like it, lies behind much work in the complexity sciences and it is qualitatively well supported by experiments and discussions in both natural [Bak, Goodwin, Maturana, Nicolis] and artificial systems [Epstein, Fontana, Langton, Ray], with increasing quantitative support being developed [e.g. Kauffman]. In biological terms we can relate this mode of operation to a contextual living system (Figure 2).
In this causality loop the section from Genotype to 'emergent' Phenotype forms the metabolism of the organism - the 'critically interacting' autopoiesis stage, with Building blocks as the 'components', whilst the Variation and Selection sections include both the cross-generational 'potentially evolving' stage and the equivalent action of mind (where the genotype relates to a distributed neuronal connectivity specification). The symbolic, self-organizing semantic and pragmatic components together are in overall coevolution within an hyperstructure, maintaining the system at the phase boundary - neither static (dead) nor chaotic (disintegrating).
Let us look now at some examples of life based self-organising systems and see if we can extract any lessons for applications in the computing field.
Social laws generally do not apply within a family, people adapt to each other and compromise to achieve results. This adaptability includes the amount of contact made with others (self-organising to an optimum for each person) and this requires the ability to establish and break connections at will. Without this possibility we get on one another's nerves (due to mutual interference) or can't progress problems (due to the unavailability of family members).
Strangers fight for supremacy in the initial meetings, but given a need for decisions this eventually creates a semi-stable working arrangement that nevertheless can achieve poor results. The classification of the environment can only utilise the available attractors. Without learning by mutual information exchange we cannot alter our attractor structure so as to enable better optima to be reached.
Complex Adaptive Systems (CAS) are often proposed as models of how self-organisational ideas can be applied to improve business success and survival [Sherman]. Such systems give freedom to the parts to explore state space, to innovate, and this presupposes the absence of that centralised control and structural rigidity that we often assume to be essential in social organisations. In fact such innovation requires that the centrally imposed rules are broken.
Different values amongst different groups leads to conflicting proposed solutions, attractor structures that often seem incompatible. Standardisation would reduce fitness for many, so diversity in operation seems essential if we are to approach any overall optimum. This relates to many alternative ways of doing things, so that suitable compromises or niches [Horn] can be reached in different situations - many paths to one end.
Niche behaviour allows many different needs to coexist. This implies multiple values, and is a form of division of labour in which the creatures do not each try to maximise every value but individually optimise a limited number. This suggests we should have limited goals and look for temporary local answers in an incremental way rather than concentrating on future global utopian solutions (which will not persist due to coevolution).
Alternative scenarios are often generated and evaluated before we act. This emphasises the efficiency of offline generate and test techniques in evaluating the best option before presenting this to the environment for coevolution. But it also implies a tendency to adopt inconsistent self-supporting interpretations, an internal consistency or operational closure that can mismatch real adaptive needs [Goertzel] and lead to delusional errors - poor local optima rather than global ones. We need to ensure consistency in the global coevolutionary and social contexts also.
Let us look now at some goals for applying self-organisation to computing, looking here not at state-of-the-art achievements but at the idea of having a machine that behaves as well as an average higher organism. This replaces the idea of duplicating our human cognitive facilities by the notion that meaning is embodied in situated animal sign exchange [Brier].
Functionality - A purpose to the exercise
We are unlikely to want random organic machines or programs, but may desire ones that achieve some usefulness. We need to be clear what this is and what benefits we wish to achieve by using organic methods. This aspect sets the scope of our systems and should identify those features that conventional techniques cannot supply, without neglecting those currently available facilities that may so easily be lost in the change of methodology. We also need to be clear as to the nature of the tasks we wish to set and the fitness dynamics to be expected from their solution sets.
Evolution Ability - Phylogenetic, Genetic Algorithm style
This is the system's ability to cope with long term, slow environmental change, with novel but persistent situations. Organically it implies multiple instances (or populations) of program variants. These will need some problem specific knowledge for efficient search (as implied by the No Free Lunch theorem [Wolpert]), and for generality this must evolve and not be imposed. Modular crossover may be required to preserve and build up function (by parallel schema searches).
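A minimal genetic-algorithm sketch along these lines is given below. The bit-counting fitness, the population and mutation parameters, and the truncation selection are all illustrative assumptions; one-point crossover is used as the simplest stand-in for the modular, schema-preserving recombination just mentioned.

```python
import random

# Minimal genetic algorithm sketch (all parameters illustrative).
random.seed(0)
GENES, POP, GENERATIONS = 32, 40, 60

def fitness(g):                       # toy objective: maximise the number of 1s
    return sum(g)

def crossover(a, b):                  # one-point crossover keeps contiguous blocks together
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

def mutate(g, rate=0.02):
    return [1 - bit if random.random() < rate else bit for bit in g]

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for gen in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]          # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

print("best fitness:", fitness(max(pop, key=fitness)), "of a possible", GENES)
```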
Development - Ontogenetic, Cellular Automata style
Growth is an aspect often missing from machines, yet is essential to organic construction [Jacob]. This implies a self-organising phenotype, with internal self-generated attractors. Based upon an internal (cellular or ALife style) modular population, this will be coevolutionarily self-organising, with internal edge-of-chaos (EOC) maintenance (similar to the brain or an ecosystem), and this relates also to contextual requirements, the ability to select appropriate forms for the local needs by using local information.
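Growth by repeated local rewriting can be illustrated with the smallest of L-system examples; the axiom and rules below are the standard 'algae' system, used here purely as a toy stand-in for the developmental stage just described.

```python
# Minimal L-system sketch: each generation rewrites every symbol in place,
# so global structure 'grows' from purely local rules.
rules = {"A": "AB", "B": "A"}

def grow(axiom, generations):
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(c, c) for c in s)
    return s

for n in range(6):
    print(n, grow("A", n))
```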
Learning - Epigenetic, Neural Network style
Knowledge relates to dynamic memories. Operating in an environment implies a two way correlation or structural coupling, a tracking of non-stationary system perturbations in real time from both points of view. This requires evolving values or associations - an open ended, initially undirected, and self-modifiable categorisation technique. It has been suggested that, contrary to current NN thinking, better results are obtained by the inhibition of weights or of unwanted pathways than by the conventional strengthening of such pathways [Chialvo] - a selectionist technique also common to immune and genetic mechanisms. For complex environments we also have multiple conflicting values, thus multioptimisation techniques of some kind will be required to implement choice.
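The toy sketch below is in the spirit of the inhibition-based learning mentioned above, though it is our own illustration rather than the cited authors' algorithm: whenever the winning pathway gives a wrong answer its weight is depressed, and nothing is ever explicitly strengthened.

```python
import random

# Selectionist learning by inhibition only (illustrative toy, not the cited algorithm).
random.seed(0)
n_inputs, n_outputs = 4, 4
target = {i: (i + 1) % n_outputs for i in range(n_inputs)}   # arbitrary mapping to learn
w = [[random.random() for _ in range(n_outputs)] for _ in range(n_inputs)]

def answer(i):
    row = w[i]
    return row.index(max(row))        # winner-take-all pathway

for _ in range(500):
    i = random.randrange(n_inputs)
    out = answer(i)
    if out != target[i]:
        w[i][out] *= 0.5              # inhibit the mistaken pathway; correct ones are untouched

print("learned:", [answer(i) for i in range(n_inputs)])
print("target: ", [target[i] for i in range(n_inputs)])
```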
Usability - Technique exportability
If the techniques developed are to become widespread, we require ease of programming, the usability of the techniques by non-specialists. This seems to require libraries of standard modules (like neurons and nuclei) together with frameworks for supplying variation and evaluation - a sort of Object Oriented Evolution tool kit, rather wider in scope than the natural equivalent due to the short timescales available for solutions to become effective.
Realistic Expectations - Awareness of limitations
Let us compare the complexity of even a massive simulation to just a simple cell, which alone contains tens of thousands of varieties of molecules interacting at around a trillion reactions a second. There is probably more raw computational power in the organisms inhabiting a spoonful of soil than in all of the world's computers added together. Thus we should scale our expectations accordingly and not expect our often trivial simplifications to achieve major results. We can however take heart from the fact that much organic complexity relates to ongoing survival (self-production), and the structural coupling requirement may be of more manageable proportions since it provides only triggers and not complete information.
The achievement of the above goals in a computer program context generates many problems and we can suggest some approaches.
Robustness relates to avoiding the system disintegrating over time, and to the necessary compromises between its ability to correlate with the immediate environment yet maintain its structure as a system. If the environmental coupling is too tight the system will become unpredictable (trying to respond to too many perturbations), yet conversely if too loose the system will become unresponsive, settled into a single attractor. We thus need to either explicitly define the dimensionality of our interfaces or provide methods for this to evolve.
Humans need to be able to have confidence in a system and this concerns being able to understand and relate to the system behaviour. Systems that do unpredictable things can only be allowed in situations where that is acceptable to the users, and this excludes very many social situations where conformance to norms is expected - evolved computers will have none and thus we may need to change our own social expectations regarding machines instead.
Many real world tasks are fuzzy problems, ill-defined scenarios that relate badly to typical academic research simplifications. If we are to generate genuinely useful adaptive programs then these will need to perform in noisy and sub-optimal environments which abound in conflicting and emotional goals. We need to understand this human environment much better than we do presently (where emotions are generally ignored) [Fell].
The time taken to adjust to new situations will be crucial to the satisfactory performance of new organic technologies. We do not have aeons to evolve solutions and must better understand how brains learn (from single examples) with low performance parts if we are to succeed in real-time optimisation. To obtain good performance we may need to use transient (short lived) attractors due to coevolutionary time constraints [Lucas1997b].
Our ability to correct inappropriately evolved systems may be crucial if these are released into the real world, due to the dangers posed by free format adaptation (there are shades of Asimov's Laws of Robotics here). There may be a need for a form of psychological counselling for errant robots, ways of redefining their internal operation by external means.
We need a better evaluation technique for multidimensional systems, one more in tune with how we ourselves do this in, as yet, poorly understood intuitive ways - a fast holistic mode. This evaluation needs to include multiple levels and not just the single level optimisation (internal genetic or phenotypic) often seen, and should both take into account the contextual (associative search) nature of solutions and the multidimensional nature of rewards or needs.
The high significance of nonlinear interactions between variables makes evaluation difficult across system boundaries. The program needs to anticipate environmental reaction (e.g. provide look ahead as in chess programs) to avoid myopic counterproductive 'solutions' being proposed that neglect the user's likely response. This relates to implementing the predictive mode common to science and then monitoring the results, an unsupervised, reinforcement learning mode involving cycles of evaluation and improvement [Sutton].
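A minimal evaluate-and-improve loop of this reinforcement-learning kind might look like the sketch below, where the hidden 'environmental response' is an invented reward function and the update rule is the standard incremental average of observed outcomes.

```python
import random

# Predict, act, observe, improve: a toy action-value loop (illustrative only).
random.seed(0)
true_reward = [0.2, 0.5, 0.8]        # hidden environmental response (invented)
estimates = [0.0, 0.0, 0.0]
counts = [0, 0, 0]

for t in range(1000):
    if random.random() < 0.1:                          # occasional exploration
        a = random.randrange(3)
    else:                                              # otherwise act on current prediction
        a = estimates.index(max(estimates))
    reward = true_reward[a] + random.gauss(0, 0.1)     # noisy feedback from the environment
    counts[a] += 1
    estimates[a] += (reward - estimates[a]) / counts[a]   # incremental evaluation update

print([round(e, 2) for e in estimates])
```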
Parallel operation implies that more than one rule may be simultaneously active, thus a prioritisation scheme may be necessary (subsumption style). Whether we try to evolve this (difficult) or impose it implies a conflict between unconscious parallel operation and an explicit design, consciously imposed.
Being environmentally driven means that we cannot say what function the system will evolve to meet. It may well be a different one than was intended, since a general evolutionary system will be epistemically autonomous [Cariani]. The evolutionary stable states (ESSs) available to the system cannot be known in advance for unpredictable environments. We need techniques with which to constrain system functions to just those desired, to encourage appropriate emergence, and this implies that performance measures or rewards must still be specified by humans.
Many current systems are customised for particular clients. This facility may be hard to incorporate in adaptive systems and may require providing appropriate external constraints with which to evolve corporate identities. The idea of fixed ways of doing things is alien to adaptive techniques and thus the whole concept of group identity may need to be discarded in the long term.
Provision for self-repair or redundancy may be necessary and this seems to be better included early in the design [Thompson], yet this may need many generations to evolve and involve many failure costs along the way. Trade-offs between survival and cost may be necessary, perhaps leading to disposable and recyclable programs and robots.
Unlike natural systems, we need to compute self-organization (evaluating transition rules for example) and not just let it happen physically. This will have major performance implications unless we can find another way to add parallel processing power. We also need to define the operational envelope, our functional limits, to avoid trying to over engineer and this implies restricting ourselves to simplified systems (no androids).
Many of the approaches to investigating complex and self-organising systems adopt the agent based ideas common to artificial life and it is worthwhile to consider here how a typical approach of this type (we will use the Echo system [Hraber]) corresponds to the criteria we have outlined.
Echo is a modern Complex Adaptive System simulator, an auto-adaptive genetic system similar to Tierra but employing more sophisticated interactions and with a fitness measure that goes beyond just reproduction. In the standard configuration this fitness measure relates to the ability of the agents to perform environmental interactions and thus relates to our concern in this workshop.
These systems incorporate many of the features often proposed for complex adaptive systems and which are included in our list of the properties of complexity philosophy. Those features incorporated by Echo are:
Comparing the system with our full criteria for self-organising complexity, however, typically highlights a number of inherent limitations:
It is also local and probabilistic. This cannot evolve and precludes the adaptability and semantics that prove possible with dynamic connectivity and emergent attractors.
These are imposed directly, not by the evolutionary emergence which would be necessary if the system were to adapt to changes in requirements and innovation. This restriction is typical in ALife systems and relates to the genome coding adopted; it precludes phenotypic categorisation changes.
These are based upon specific uses of the inherent values and this restriction is again typical, relating to the explicit fitness evaluation functions that tend to be necessary. This restriction precludes emergent approaches to functional balance and operational priorities.
No hierarchical development seems possible, the most that emerges is at the species level and as the phenotype and genotype are linearly related this is really a single level. Emergence may need to be grounded in a stable higher level arrangement before this can in turn generate further hierarchical levels.
This is an isolated system, so selection also must be internal and does not relate to any human needs. In addition, typically the environment is simplistic and does not mimic the richness and structural plasticity of natural environments, severely restricting the possible classification types that could emerge under structural coupling [Quick].
Long optimisation times are needed due to the multigenerational computation required before the system settles to a functional balance. No agent memory is available to permit varying contextual attractors, nor to allow offline evaluation of options, so short term dynamic evolution is not enabled here.
Such short genomes and associated simplistic phenotypes seem inadequate to provide the complexity needed to allow the development of the autocatalysis processes, at agent level, necessary to implement full Type 4 Self-organising Complexity. Additionally, no mechanism that could implement dissipative self-production is apparent.
Whilst these sorts of systems have a major role in evaluating possibilities and studying coevolution, in general they seem to fall well short of the sort of situated and flexible systems necessary for an hybrid computer/human environment.
This idea (thought to be how the brain works [Calvin]) relates to having multiple competing solutions, cloning by recruitment and resonance. It uses an internal fitness measure which relates to probabilistic pattern matching in that the most numerous clone wins (strongest chorus). These solutions change dynamically and follow all Darwinian principles, also resembling magnetic domains or spin glasses. It is inherently multidimensional in that the strongest overall match entrains the most clones. Learning is akin to self-organising maps with the sculpture of patterns by forced cycles adjusting connectivity in multiple overlapping attractors.
We here suggest multiple modular programs, each optimised and competing in parallel for a different environmental sub-problem or niche. This idea mimics economic competition, and allows the user to directly choose which module combinations best meet their needs. Each program would make offers to the user of benefits and the corresponding costs and thus their relative successes would correlate automatically to user demand profiles. This relates to the work of [McFarland]. Unlike traditional take-it-or-leave-it packages, this approach maintains maximum openness and flexibility.
In this viewpoint we can regard the different biological processes as reflecting organic loops (Fig 2) operating over different timescales and at different structural levels. These correspond to increases in organisms (Phylogenetic), in cells (Ontogenetic), and in synapse connectivity (Epigenetic). This allows the same routines to be used for models incorporating all three levels and also allows combinations of modes to be used within each level - corresponding to the POE space envisioned by [Sipper].
Here we invoke a metabolic technique that catalyses operation at the transcription stage, depending upon local context. Our genome is multipurpose, as in real biological systems, not all genes are active in any one situation. The editing (syntax) procedures usually missing from artificial systems add another layer between genome and building blocks (Fig 2) - this relates to the Contextual GAs of [Rocha]. This pre-processing can allow much greater flexibility for any genome, and permits true contextual multimode operation to be implemented with the generation of tissue types. Adding tags mimicking cell adhesion molecules should allow symbiotic relationships to form higher level structures.
The N:M relation between DNA and Protein (many equivalent genes, many shapes for one protein), allows compression of the genome (removing redundancy and making use of 'order for free'). But this reduces the search space available to the system and may need new approaches making use of additional local contextual information to specify the actual self-organising result. This relates, in Boolean Network terms, to the genome specifying connectivity and to the environment specifying the starting states. This dynamic mode swapping may help to implement a genetic prioritisation or subsumption architecture.
In brains hormones provide global regulation of neuronal activity by affecting neurotransmitter levels. We can model this also at physical (temperature), cellular (enzyme), social (information), and ecosystem (resource density) levels to design systems that cybernetically stabilise edge-of-chaos. Since innovations require more chaos and stability more order, this can also be used to dynamically regulate adaptability - a form of threshold control.
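A toy sketch of such threshold control (all values are illustrative) is a single global 'hormone' variable that raises the firing threshold when network activity is too high and lowers it when activity is too low, holding the system near a chosen activity level.

```python
import random

# Cybernetic threshold regulation: negative feedback holds activity near a set point.
random.seed(0)
n_units, target_activity = 100, 0.3
threshold = 0.5

for step in range(50):
    drive = [random.random() for _ in range(n_units)]       # stand-in for local unit inputs
    activity = sum(d > threshold for d in drive) / n_units  # fraction of units firing
    threshold += 0.1 * (activity - target_activity)         # hormone-like global nudge
    if step % 10 == 0:
        print(f"step {step:2d}  activity {activity:.2f}  threshold {threshold:.2f}")
```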
This combines an analogue self-organizing development scheme based upon non-computational techniques (the physical forming of the phenotype in hardware from building blocks - or a simulation thereof), with a digital perturbation engine, giving alternative production rules which specify and write the initial conditions to hardware. This latter system is perturbed by mutation in GA fashion to achieve variation for later selection. This seems to be an extension of the GA plus Self-Organisation work of [Dellaert & Beer] but using hardware to implement the cellular development in order to reduce computational needs.
In this method we need to generate and test multiple models offline before use. This corresponds to traditional AI symbolic intelligence in taking into account look ahead in evaluating the options available to the user, but instead of looking for local (machine) advantage (as in chess) we require here a policy to maximise user advantage. The total representation of the world in traditional data structures is however rejected, in favour of an hybrid combination of distributed (parallel) associations along with serial label manipulation [Sperduti], mimicking the combination of mostly unconscious association plus conscious direction familiar from our human behaviour.
A competitive bias is evident in many simulations, putting combat first as in Echo, whereas (contrary to the claims) this is a last resort for both animals and humans if fitness is to be maximised in a multidimensional environment. Fitness is enhanced by creativity (positive-sum actions including trade) and reduced by conflict (negative-sum - hence the social laws prohibiting it). The zero-sum (resource swapping) mode often employed in models is unrealistic and almost never exists socially or biologically. Basing techniques instead on forms of mutual aid is expected to improve emergent features, since combined structures are then encouraged and not destroyed [Watson].
In this option we mutate and replace one option at a time in a population, so that older less used options are gradually replaced by newer variants. This relates to the schema ideas of [Holland] and by mutating options allows new categorisation to be created experimentally (in the same way that a child learns to generalise). In this proposal however a population of category attractors is maintained whose basin of attraction sizes depend upon their successes. Unlike schemas, these are recurrent attractors more in the neural network mode than classifiers.
This uses the latest techniques of using real world organic components instead of silicon to mimic natural building blocks. It often uses massive parallelism at molecular level to help solve difficult NP-complete problems [Adleman]. In our context it can be envisioned as a method of optimising the epistatic multivariable evolutionary interface problems that will occur in trying to relate adaptive programs to the complex environment in which they are intended to operate. Engineering organic computers of this sort reduces macrosystems to microsystems, with resultant step increases in speed and parallelism. Feasibility for interactive computing is however unknown.
As Darwinian evolution depends upon selection by death, and such deaths are not normally a feature of artificial systems, we can reject the need for genetic reproduction altogether and instead use systems that evolve in more Lamarckian ways [Ackley and Littman], passing environmental knowledge on directly, e.g. dividing the system asexually, cloning a total system from new parts, or generally duplicating knowledge structures in direct ways. These sorts of techniques can include externalised shared data (online style books), environmentally situated data (e.g. bridges) or retainable memory (e.g. EPROMs).
This overview has highlighted many differences between conventional computing approaches and those derived from an organic viewpoint. Most of these differences have been poorly addressed so far, especially in combinations where epistatic interactions (compromise solutions) are important. We are, however, here trying to duplicate four billion years of natural evolution and have not yet separated those aspects that are essential from those only contingent. Many current models are employed, but in general these each abstract only a few limited properties for evaluation and none come very close to incorporating the full Type 4 multilevel complexity common to natural self-organising and self-maintaining systems [Ziemke].
We need to understand and make use of self-organisational shortcuts and especially to consider the metabolic contextual implications of situated self-organisation, concentrating less on the genetic building blocks and more on their internal interactions. The connectivity approach used in Complexity Philosophy is appropriate to this view. Some work has started in trying to take these issues into account [e.g. Kennedy] but a great deal still needs to be done before we are able to grow adequate and resilient adaptive programs for real-world application in unrestricted domains.