In most studies of life processes we concentrate on steady state phenomena. We assume that the property of interest is fixed, whether a behaviour, a cell type, a belief or a creature, and then try to explain it. We may allow that some processes operate in time, for example the production of proteins, but this still addresses only one aspect of the overall complexity of the systems we study - the protein forms themselves are assumed constant.
For every system that remains constant in this way there will be one that continually changes. These systems are subject to perturbations in the same way as static systems are, by mutation, environmental influence or information. Where they differ is that the attractors they contain do not persist; they are themselves transient.
The best example of such a system is perhaps the human mind itself. New information is constantly accumulated, leading to changes in the synapses of the brain and modifications to its structure. We know from our study of networks that changes to the connectivity will result in corresponding changes to the attractors present, so we have here a case where attractors are coming into being, disappearing and changing their basins of attraction in real time.
Is it possible to study such systems? This seems as yet to be a new area, but one with applications to adaptive companies, social systems and ecologies, to name just a few. Dynamics of changing interactions at varying speeds occur here, but the main common aspect is that the time to reach an attractor (the transient) matches the perturbation timescale, unlike normal systems where the perturbations are assumed slow compared to the cycle time.
Firstly we must define the basic attractor concept which we shall discuss. In any complex system we can divide the structure into several levels. At the bottom level are the constructional units or parts (which from a biological point of view we can call Genes or Neurons). The interactions of these with the system environment (either external or internal) give rise to emergent properties at an intermediate level (for biology this could correspond to part of the Phenotype). The labels we give to these structures (their meaning) appear at a yet higher level, that of ideas or dynamical symbols [7], commonly referred to in their abstract form as Memes [1].
For our purposes here I will generalise all these intermediate level structures to be forms of attractor, which correspond to internal categories or memory within the system. I will refer to these structures as 'Emergent Attractor Memories' or 'Eames' for short (by analogy with Genes and Memes). These Eames can take many forms, depending upon the configuration of the system we are studying. They may be permanent features of the system (e.g. body shape) or temporary (e.g. mental recognition), long or short lived. What they all have in common is that they result from the interaction of parts at a lower level (syntax), have an emergent overall property, but have no intrinsic meaning (semantics) unless we choose to assign one (or the context supplies it).
To be more explicit, an Eame combines three aspects:

Emergence. A property of the whole which does not exist in terms of the parts or the vocabulary appropriate to them - a new, higher level concept.

Attractor. A feature that concentrates a number of possible options or states into a smaller number. The attractor is an equilibrium position of the system; the other states form its basin of attraction and will move towards the attractor over time (a minimal code sketch follows this list).

Memory. Storage of data - a representation in some code that corresponds to a previously existing state of affairs. In these attractors memory is stored in a distributed form, in the network connections and rule transitions.
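To make the attractor and basin notions above concrete, here is a minimal Python sketch (the three gate rules are arbitrary, hypothetical choices, not a model from this paper): every state of a small synchronous Boolean network is followed until it repeats, and the states are then grouped by the attractor they drain into.

```python
# A minimal sketch: attractors and basins of a tiny synchronous Boolean network.
from itertools import product

def step(state):
    a, b, c = state
    return (b or c, a and c, a)                  # hypothetical gate rules, for illustration only

def attractor_of(state):
    """Follow the trajectory until a state repeats; return the cycle reached."""
    seen = []
    while state not in seen:
        seen.append(state)
        state = step(state)
    return frozenset(seen[seen.index(state):])   # the attractor, as a set of states

basins = {}
for s in product([False, True], repeat=3):       # all 8 states of the network
    basins.setdefault(attractor_of(s), []).append(s)

for att, basin in basins.items():
    print(f"attractor of length {len(att)} drains a basin of {len(basin)} state(s)")
```

Run as it stands, this reports one point attractor reached only from itself, a second point attractor draining three states, and a 2-cycle draining the remaining four - the basins are the 'options concentrated' into each equilibrium.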
Depending upon the type of attractor involved, the structure may be hereditary (determined by the genotype), learnable (from parents or elsewhere) or stochastic (randomly created). The mechanism of creation is not, however, important. Once structures that support Eames come into being, their properties are expected to follow the same general dynamic rules.
There need be no correlation between the constructional details, the attractor form and the associated meaning. Many alternative constructions may give the same attractor, and many alternative meanings may be allocated to similar attractors in different circumstances. Eames are here used as abstract designations of the common features of the emergent attractors within any possible system. We expect a process of self-organization to take place during which the attractor coalesces. During this period selection processes may also occur, so the result may be a combination of both self-organization and selection, sometimes called 'Selected Self-Organization' [11]. We will assume here that a suitable combination of processes has occurred and an Eame is present.
In simple attractor theory a dynamical system is considered to have control parameters. When these are slowly varied the system can bifurcate, switching to a different attractor structure or phase portrait. For our complex systems this is also expected to be true: changes to the controlling connections or inputs will affect the attractors present. But for the 'real time' systems considered here we have a complication: the rate of change of the control parameters is as fast as the operational timescale of the system itself, i.e. as fast as the rate of change of the 'desired' parameters that we are trying to optimise or understand [7].
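The difference between slowly and rapidly varied control parameters can be shown with a deliberately simple toy (my own, not taken from [7]): a state x relaxes towards an equilibrium set by a control parameter r(t). When r drifts slowly the state effectively sits on its attractor; when r changes on the same timescale as the relaxation, x is always lagging and only a transient value exists.

```python
# A simple toy: x relaxes towards a moving equilibrium r(t); the worst lag
# measures how far the state is from its 'attractor' at any moment.
import math

def worst_lag(relax_rate, drift_rate, steps=2000, dt=0.01):
    x, lag = 0.0, 0.0
    for i in range(steps):
        r = math.sin(drift_rate * i * dt)      # the moving control parameter
        x += dt * relax_rate * (r - x)         # relaxation towards r
        lag = max(lag, abs(r - x))
    return round(lag, 3)

print("slow control:", worst_lag(relax_rate=5.0, drift_rate=0.1))   # x tracks its attractor closely
print("fast control:", worst_lag(relax_rate=5.0, drift_rate=5.0))   # x never settles onto it
```

The decision-making example that follows is the same lag in less artificial clothing.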
For example, take a company making a decision. The 'facts' on which that decision is based are dynamic. They include the activities of competitors, the market economics (exchange rates, share prices), customer 'whims', perhaps even the weather (affecting demand). This is true for most 'intelligent' processes and perhaps sub-conscious ones also - the decision is based on future data that we cannot know for certain. We will often assume that the facts are fixed, and then expend great effort in 'optimising' the decision - yet may well 'solve' a problem that no longer exists in the form assumed...
This process is essentially a high speed co-evolution problem, the unit and the environment change in tandem in unpredictable ways. These changes over the medium term, the lifetime of the organizational unit, reflect alterations in the attractor structure caused by its interactions with the environment. We can regard these as changes to the control parameters defining the attractors that are present.
The solutions to these problems of medium term evolution need to be determined in real time, but because of the speed of data variation firm solutions are often unobtainable, and the solutions must be regarded as probabilistic, approximate, incomplete and inconsistent. They will have validity for at most a short period, after which changes to the corresponding attractors due to new information may alter the appropriate solution.
The total set of states that these systems can take cannot be known in advance, since knowing it supposes that the connectivity remains unchanged for the duration of our study (the unit lifetime), which, as we have seen, is rarely the case.
We can contrast these transitional systems with two common approaches to learning [5]. The traditional approach of artificial intelligence and psychology assumes that fixed symbolic processes operate, so that we can study them at leisure and implement a 'top down', controlled model. But this locks the system into a static mode, largely forbidding adaptation to new situations (unless explicitly specified in the model). The second approach is that of 'bottom up' subsumption architecture, where small autonomous modules each deal with trivial tasks and the adaptability appears due to the parallel operation and prioritized interactions of the modules. For these systems we have no overall planning, no direction or higher level control (except in terms of initial design structure - the imposed symbolism), and the system can appear chaotic.
What is needed is a third way, allowing the components themselves to adjust their behaviour dynamically, changing in time to cater for new needs, new priorities and new situations - an approach allowing innovation. What I'll call 'Transient Attractors' may offer that option. Here we shall focus on Eames, the intermediate level of structure - that of attractor dynamics, higher than the level of the components but lower than that of meaning [11].
Let us consider an Eame that is temporarily in place in a mind. The current input (of whatever form) can be regarded as a starting vector positioning the system in the basin of this attractor, so eventually triggering its output (we will ignore the details of the possible form of this output for the moment). The attractor itself is formed by a set of control parameters from the same general state space as the input signals; this too we can regard as a vector, one specifying the attractor structure (its dynamic connectivity), so we have two cross-coupled systems. As the control vector changes, the shape of the attractor also changes, whilst at the same time the input vector in the basin of attraction may be moving (as it follows the transient). After a short time interval the input point may leave the basin of attraction, as the boundary (separatrix) moves out from underneath it (alternatively the two may track each other and a stable output may be produced). If the boundary is crossed then the input vector can be caught by a new attractor and a new output state will be possible.
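The boundary crossing just described can be caricatured in one dimension (my own construction, not the paper's model): a state x is drawn towards one of two point attractors at -1 and +1 while a control parameter c(t) moves the separatrix between them. If c sweeps past x faster than x can converge, the input point is handed over to the other attractor.

```python
# A one-dimensional caricature: point attractors at -1 and +1, separatrix at c(t).
def settle(c_speed, x0=0.2, dt=0.01, steps=3000):
    x, c = x0, -0.5                                  # x starts inside the basin of +1
    for _ in range(steps):
        x += dt * -(x + 1.0) * (x - c) * (x - 1.0)   # flow towards -1 or +1
        c = min(c + dt * c_speed, 0.9)               # the separatrix drifts upward
    return round(x, 2)

print("slow separatrix: x settles near", settle(c_speed=0.05))   # stays with the +1 attractor
print("fast separatrix: x settles near", settle(c_speed=2.0))    # handed over to the -1 attractor
```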
The net result of this dynamic co-evolution of input vector and control vector is to set up a form of hysteresis, a short term memory such that output states will persist for short periods - their validity lifetime. It is difficult to visualize this in such complex systems as the mind, so let us take a simpler example. Cellular automata of Wolfram class 4 are known to contain complex structures. These include such high level phenomena as gliders. We can regard the glider formation as the flow from the system (in a basin of attraction) to an attractor - the glider itself [14]. This 'object' is unstable: it moves through the system and can collide with other objects, being destroyed or transformed. It is a dynamical structure appearing during the transient behaviour of the CA - a 'transient attractor' [8]. It is also emergent and a type of memory (shape), and so is a form of Eame. The persistence times of such Eames can vary tremendously, from zero (immediate disintegration) to infinite (e.g. a glider on an empty background). These structures can also appear on multiple levels (nested Eames). It may well be the case that a definition of complexity should relate not to steady state phenomena but to the richness of transient phenomena of this type over multiple levels.
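As a concrete illustration I use Conway's Game of Life here (a familiar CA with class 4 style behaviour, standing in for whichever class 4 rule one prefers): a glider is not a fixed configuration, yet it persists as a recognisable structure, reappearing one cell further along the diagonal every four steps until something collides with it.

```python
# A glider in Conway's Game of Life as a 'transient attractor': the pattern is
# never static, but the same shape reappears, shifted, every four update steps.
from collections import Counter

def life_step(cells):
    """One synchronous update of the Life rules on a set of live (x, y) cells."""
    counts = Counter((x + dx, y + dy) for x, y in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0))
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = set(glider)
for _ in range(4):
    cells = life_step(cells)

# After four steps the same shape exists, translated by (1, 1): a moving,
# persistent structure rather than a fixed configuration.
print(cells == {(x + 1, y + 1) for x, y in glider})    # True
```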
Such attractors can come into being by simple state changes, bifurcation of existing attractors, merging of attractors (the reverse process), interaction of two or more existing attractors or (as we shall assume here) by connectivity changes in the system forcing step jumps in attractor structures. The resultant attractors can be of any type: point (fixed), cyclic (oscillatory) or strange (chaotic). Whether the attractor is detectable as a spatial structure or a temporal one seems to relate more to how we view it (our projection from multi-dimensional state space onto our space) than to any inherent relation to conventional space and time. We can choose to employ whatever 'decoding' method results in the clearest understanding of the phenomena of interest.
The presence of fast changing control inputs means that the configuration of the system is a dynamic one. To understand this concept we can use as an example a simple network consisting of a collection of gates, which we will assume to have two attractors. We will allow that we can switch the network between the two attractors by forcing one gate output either low or high. If we add an extra canalizing input to this gate then this allows us to control the attractor structure directly from outside the network.
Generalising, we can see that elaborate switching of attractors by other modules is an easy task and does not need fast structural changes, just logical ones within an existing, sufficiently versatile structure.
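A toy version of the gate network described above might look as follows (an illustrative sketch, not a construction taken from the paper): two cross-coupled NOT gates give a synchronous network with two point attractors, (a, b) = (1, 0) and (0, 1), and an extra canalizing input c OR-ed into gate a selects between them from outside the network.

```python
# Two cross-coupled NOT gates plus a canalizing control input c on gate a.
def run(a, b, c, steps=10):
    for _ in range(steps):
        a, b = int((not b) or c), int(not a)   # synchronous update of both gates
    return a, b

print(run(a=0, b=1, c=0))   # left alone: the network stays on its (0, 1) attractor
print(run(a=0, b=1, c=1))   # canalizing input held high: driven onto the (1, 0) attractor
```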
The short term structure of a system's Eames may be constant in terms of physical connectivity. We may, however, have only limited paths to adjacent Eames - but these are all still potentially there, and a simple logical change can make them accessible. This may be the reason that our past memories are often inaccessible, yet a simple trigger can easily release a considerable amount of old data - reactivation of old Eames at multiple levels.
Moving between Eames in this way needs a suitable structure - basins of the right width, so that the appropriate amount of perturbation can swap between them. The connectivity is able to 'tune' the system here. Chaotic networks have closely adjacent attractors and will allow fast movement across options, whilst static networks have the wider basins needed for stable categories [9]. There may be a self-organizing process at work in configuring the mind with the appropriate stability mix of Eames. The relative stability of our categories (day to day) suggests long lived attractors (permanent?), but the fleeting nature of ideas seems to need the transient variety.
Once a structure is available, what then constitutes the control vector? Stimuli, either external or internal, seem adequate here, at least in the case of the mind. We can envision a network with multiple canalizing inputs from sensors of some sort. The arrangement of active inputs will define and select the appropriate attractor configuration.
Another possibility is that the input forms create and sustain the attractor only for the duration of their presence. In other words the 'network' itself is a form of dynamic logic that comes into being from the merging of impulses interacting in a general purpose background 'cellular' matrix - rather like the gliders we considered earlier. This may be relevant to the shorter lived Eames.
Is it possible to work out how to create a particular attractor basin? Some recent work suggests that this can be done: a formal method is given for altering connectivity so as to tailor the basin to that desired [14]. In this work the subtree nodes are considered to be attractors also, on the basis that the system flows via such a node more often than via the starting positions. It is true that if these nodes have any output connections outside the basin then they will be driven at a rate commensurate with their activation, which is perhaps significant. To change the pre-images of nodes in this way (i.e. to tailor their transients) it is, however, necessary to know what they are. For more general cases like the mind, where these details are not known, we can imagine a trial and error process (akin to mutation) where various 'transient attractor' arrangements are tried out and the most successful (selected) are then made permanent by attractor cycle driven synaptic changes. Perhaps this is what memory is [2, 12].
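That trial and error alternative could look something like the following sketch (my own toy construction, not the formal method of [14]): random changes to a small Boolean network's rule tables are 'tried out', and a change is kept - made permanent - only if the attractor reached from a reference input gets no further from a desired target state.

```python
# Trial-and-error shaping of an attractor: mutate rule tables, keep what works.
import random

N = 4
rng = random.Random(0)
rules = [[rng.randint(0, 1) for _ in range(2 ** N)] for _ in range(N)]   # one lookup table per node
target, start = (1, 0, 1, 1), (0, 0, 0, 0)

def attractor(rules, state):
    """Iterate the synchronous update until a state repeats; return the cycle."""
    seen = []
    while state not in seen:
        seen.append(state)
        idx = int("".join(map(str, state)), 2)           # encode the whole state as an index
        state = tuple(rules[i][idx] for i in range(N))
    return seen[seen.index(state):]

def score(rules):
    """How many bits of the target appear in the best-matching attractor state."""
    return max(sum(a == b for a, b in zip(s, target)) for s in attractor(rules, start))

before = score(rules)
for _ in range(500):                                     # try out random rule changes...
    i, j = rng.randrange(N), rng.randrange(2 ** N)
    trial = [row[:] for row in rules]
    trial[i][j] ^= 1
    if score(trial) >= score(rules):                     # ...and keep those that do no worse (selection)
        rules = trial

print(f"target match: {before}/{N} bits before, {score(rules)}/{N} bits after")
```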
Can the mind do this? If we regard synapses as the control switches then yes, this seems possible. Simultaneously firing neurons seem to strengthen their synaptic links and weaken those not firing (Hebbian learning) - so a successful 'temporary' cycle may over time produce a permanent equivalent, in ways yet to be established in detail. We can regard this as changing connections from 'uncertain' (probability 0.5) towards either 'present' (p=1) or 'absent' (p=0).
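A minimal sketch of that intuition (my own toy rule, not an established synaptic model): a connection 'probability' p starts at the uncertain value 0.5 and is nudged towards 1 whenever the two units fire together and towards 0 whenever only one of them fires.

```python
# A toy Hebbian-style update on a connection 'probability' p.
import random

def learn(agreement, p=0.5, rate=0.05, trials=2000, seed=1):
    """agreement = how often the two units fire alike; p starts 'uncertain' at 0.5."""
    rng = random.Random(seed)
    for _ in range(trials):
        pre = rng.random() < 0.5                     # does the first unit fire?
        post = pre if rng.random() < agreement else (not pre)
        if pre and post:
            p += rate * (1.0 - p)                    # fired together: strengthen the link
        elif pre != post:
            p -= rate * p                            # fired apart: weaken the link
    return round(p, 2)

print("usually fire together: p ->", learn(agreement=0.95))   # drifts up towards 'present'
print("usually fire apart:    p ->", learn(agreement=0.05))   # drifts down towards 'absent'
```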
All our consideration so far has assumed that the network is updated synchronously. Yet when asynchronous updates take place the permanent cyclic attractors seem to disappear [6]. In these cases, however, additional attractors are found (called 'loose' attractors there); these are temporary cycles that persist only for some subset of gate update conditions. They seem to be in some ways the equivalent of the 'transient' attractors we discuss here. If we substitute random (asynchronous) updating of nodes in place of our control vector determined changes, then in both cases we have dynamic attractors that can change at any time. Stochastic transition rules of this type can be shown to be capable of generating any probabilistic sequence [7]. Yet the same is true for control inputs themselves driven by probabilistic switching rates. In both types of construction we are switching attractors rapidly, so output sequences are being dynamically reconfigured, in the one case without apparent control but in the second perhaps under some form of conscious or unconscious control (see illustrations [10]).
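The effect of the update scheme can be seen on the toy flip-flop used earlier (an illustrative sketch, not one of the networks studied in [6]): updated synchronously from (0, 0) the two gates sit on a permanent 2-cycle, (0, 0) <-> (1, 1); updated asynchronously that cycle vanishes, and the update order alone decides which fixed point is reached, much like the 'loose' attractors just mentioned.

```python
# The same cross-coupled NOT gates under synchronous and asynchronous updating.
import random

def synchronous(a, b, steps=4):
    trace = [(a, b)]
    for _ in range(steps):
        a, b = int(not b), int(not a)       # both gates updated together
        trace.append((a, b))
    return trace

def asynchronous(a, b, steps=8, seed=0):
    rng = random.Random(seed)
    for _ in range(steps):
        if rng.random() < 0.5:
            a = int(not b)                  # update gate a alone
        else:
            b = int(not a)                  # update gate b alone
    return (a, b)

print("synchronous :", synchronous(0, 0))                               # the permanent 2-cycle
print("asynchronous:", [asynchronous(0, 0, seed=s) for s in range(4)])  # order-dependent fixed points
```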
Can we reconcile the two approaches? In neural nets, as we have seen, two nodes that fire together (for any reason) become associated and this increases their connectivity. But it also increases their synchrony, and ultimately all such associated nodes may operate together. We have moved from asynchronous to synchronous mode by learning. This has been shown to happen in the real brain [3].
It is possible that feedback processes between modules act as a form of selection. A canalizing input, if driven by a feedback loop, may re-initialise an attractor that would otherwise have been replaced stochastically, thus maintaining that attractor output and increasing its probability. Ultimately, if enough feedback links are activated, the system may be forced to a particular state - a point attractor. This raises the possibility that transient cyclic attractors are a way of trying out a number of output options in turn, until one is found that correlates with a nearby module and 'locks in' to a common feature.
Considering complex systems with many Eames, for each such attractor we would expect a number of input parameters to be relevant (i.e. multi-dimensionality). Thus a dynamic change in one parameter may be compensated by a change in another, keeping the system within the same basin of attraction. The output thus takes on a probabilistic or fuzzy character. A single value of a parameter is insufficient to guarantee a consistent result; the output is a 'consensus' or emergent feature of the dynamic operation of the whole system (or a modular part of it). This allows contextual outputs to be produced, where an input (say 'red') is categorised as one thing ('tomato') on one occasion and another ('English post box') at a different time - dependent upon other associations.
For coupled systems of multiple modules (each with their own Eames) the actual Eames formed are a feature of the overall higher level arrangement and do not necessarily exist when the modules are viewed in isolation [4]. Some control input values may simultaneously participate in defining several discrete Eames, whose forms may be unrelated or appear at different levels of organization; similarly, data input values can be relevant to multiple Eames.
Our classification of the world is restricted to the categories available in the network, with their resultant output behaviours (called eigenvalues and eigenbehaviours by von Foerster [11]), so we discretise some (but not all) continuous environmental parameters. Here I regard such classifications specifically as attractors in the form of Eames. Whether these are permanent attractors (stable to perturbation - fixed control vector), transient attractors (temporary structures - dynamic control vector) or subtree attractors (nodes of attractor basin trees as in [14]) is still unclear. All modes may in fact be employed for different purposes.
It does seem clear, however, that our categories are not fixed and clearly defined. Concepts or names are indistinct [13]. When we classify something we seem to assign it to the category to which it fits most closely, the one for which it has a higher 'probability' score than for any other available category. We seem also to have the ability to match an object to multiple simultaneous categories, so if we adopt a view of categories as Eames then we must allow that multiple Eames are active simultaneously. From our earlier discussion we can see that this is a valid possibility, if rather hard to analyse.
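A crude sketch of such fuzzy matching (my own illustration; the prototypes and feature values are entirely hypothetical): an input is scored against several category prototypes, assigned to the best match, and every category whose score clears a threshold remains 'active' simultaneously.

```python
# Fuzzy assignment of one observation to several candidate categories.
prototypes = {
    "tomato":      (1.0, 0.1, 0.9),
    "post box":    (1.0, 0.0, 0.2),
    "fire engine": (0.9, 0.3, 0.3),
}
observation = (0.95, 0.05, 0.3)          # hypothetical features, e.g. redness / texture / shape

def score(a, b):
    """1 = identical feature vectors, 0 = maximally different."""
    return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

scores = {name: score(observation, proto) for name, proto in prototypes.items()}
best = max(scores, key=scores.get)
active = [name for name, s in scores.items() if s >= 0.8]   # all categories above threshold stay active
print("best match:", best, "| simultaneously active:", active)
```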
Moving between attractors can be a level specific step: parameter changes to a system at one level may allow variation at that level (e.g. step length, frequency) without exiting higher or lower level attractors. We can thus have dynamically nested attractors and multiple levels of independently changing detail. This seems necessary if we are to model a hierarchy of generalisations (e.g. John Doe, White, Homo Sapiens, Male, Animal, Alive etc.).
The data that we have available at any time is vast. For any single function only a small part of this data is employed. It may well be the case that this partial data is insufficient to make a firm decision, either because data is missing or has been ignored (for reasons of time, accessibility or otherwise), or because the total data is contradictory, so that decisions must be ambiguous. Indeed some work [9] has suggested that selectively ignoring data can help optimise a problem solution.
Either way, it seems as if there are many competing Eames for any categorisation problem, so we need a mechanism to choose between them. Could activation time be relevant? A point attractor would have 100% activation, a cycle of length four 25%, and so on - probability inversely proportional to attractor length, perhaps? This relates 'fitness' to 'probability' in such a way that the fittest attractor is the one whose output is present for the longest proportion of the time.
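As a worked version of that arithmetic (my own illustration): if a given output state occupies one slot of each competing attractor cycle, its activation fraction is simply 1/length, so the 'fittest' Eame on this measure is the one with the shortest cycle.

```python
# Activation fraction of one output state on attractors of different lengths.
attractor_lengths = {"point": 1, "pair": 2, "four-cycle": 4, "eight-cycle": 8}
activation = {name: 1.0 / length for name, length in attractor_lengths.items()}

print(activation)                                         # point: 1.0, pair: 0.5, four-cycle: 0.25, ...
print("'fittest':", max(activation, key=activation.get))  # the point attractor wins on this measure
```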
We can regard the output of Eames in several ways - as 'ideas', 'cycles of association', 'classification' or 'action'. The output manifestation may be the attractor itself (a memory cycle perhaps), or a specific state of an Eame component could trigger an external output module (e.g. muscle movement). It is unclear which method or methods chiefly affect our behaviour, but almost certainly a variety are employed.
Adaptation in the sense considered here effectively tracks the changes in the input data (generalises); it is adaptive only as far as the current situation (at a macro level) remains unchanged. We can regard the behaviour as a negative feedback system, maintaining the output as far as possible, before losing control as perturbations exceed the viable limits (the generalisation is no longer the most 'probable'). We must be careful not to assume that the adapted system is 'better' than a version that existed at an earlier date. This error is known in statistical situations, where A may be preferred to B, B preferred to C, but C preferred to A - no overall 'optimum' fitness exists. Adaptation here is therefore reversible: it can return to a previous state if the conditions that applied at that time recur. We can see this in psychological cases where addicts, say, can regain their original balanced state once the cause of an earlier maladaptation has been removed.
Dynamic attractors of the type we consider seem crucial to an understanding of learning in minds or social groups. In each case the variables structuring the complex system are under constant change. This may occur over timescales that are long relative to the actions needing to be taken, or over very short timescales, where it appears as interfering noise - a jitter around the attractor. In the range where the action and control timescales are similar we would expect to encounter these probabilistic results and short lived Transient Eames.
For systems with many possible actions we need a mechanism to prioritise them. Here we approach behavioural networks, where 'goals' are set and the one with the highest priority dominates the result [5]. This is analogous to switching attractor basins, the attractor equivalent of a computer 'stack' - storing a temporary position whilst working on another problem. In parallel complex systems like minds we can easily have multiple Eames in operation simultaneously, and thus a choice of outputs. Routing any of these to an action module would seem to be a simple switching task, based on other control variables.
In this paper we have looked at an intermediate way of viewing complex systems, with a view to relating these ideas to the operation of minds and social systems. By taking a co-evolutionary approach to the interaction of the data and control features of the system, we see the possibility of a new type of attractor structure that may have relevance to categorisation - the transient attractor. These attractors are associated with incomplete data, due to inadequate timescales, and are thus probabilistic features of the network. We have looked at ways in which they can be dynamically created and changed and how they relate to behaviour. Future work will attempt to build on this start and develop more detailed models of how attractors of this type may be formed and maintained in biological, psychological and social contexts.