Publications: Abstracts


2021

[1] Potts, M. W., Harvey, D., Johnson, A., & Bullock, S. (2021). The complexity register: A collaborative tool for system complexity evaluation. Engineering Management Journal. [ bib | doi | pdf ]
As modern engineered systems become ever more connected and interdependent, there is an increasing need to evaluate their complexity. However, evaluating system complexity is challenging due to the complicated conceptual landscape of competing definitions of the term complexity itself, and the range of perspectives that can be taken on what constitutes the System of Interest. This paper attempts to overcome these hurdles by introducing a Complexity Register with which to build and record a shared understanding of system complexity for key stakeholders. In order to overcome current challenges in evaluating the complexity of an engineering system, the Complexity Register encourages personnel to adopt a broad range of perspectives on the potential issues, impacts and mitigations to manage system complexity. The formulation of the Complexity Register is informed by design principles derived from a case study analysis of a system complexity evaluation tool. The Complexity Register should enable more effective shared understanding by encouraging collaboration that makes explicit the multiple viewpoints taken when evaluating system complexity and by promoting continued re-evaluation throughout a system or project lifecycle.

2020

[2] Alkan, B. & Bullock, S. (2020). Assessing operational complexity of manufacturing systems based on algorithmic complexity of key performance indicator time-series. Journal of the Operational Research Society. [ bib | doi | pdf ]
This article presents an approach to assessing the operational complexity of manufacturing systems based on the irregularities hidden in manufacturing key performance indicator time-series, employing three complementary algorithmic complexity measures: Kolmogorov complexity, the Kolmogorov complexity spectrum's highest value, and overall Kolmogorov complexity. A series of computer simulations derived from discrete manufacturing systems is used to investigate the measures' potential. The results showed that the presented measures can be used to quantify operational system complexity, thereby supporting operational shop-floor decision-making activities.
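
As an illustrative aside: the algorithmic complexity of a time-series is commonly approximated by binarising the signal and measuring how well the result compresses under a Lempel-Ziv-family compressor. The Python sketch below uses an assumed mean-value binarisation threshold and zlib as the compressor; it is not the paper's implementation, which also computes a complexity spectrum across thresholds.

    import zlib

    import numpy as np

    def kolmogorov_complexity(series):
        """Approximate the algorithmic complexity of a 1-D time-series by
        binarising it around its mean and compressing with zlib; the
        compressed size is normalised by the raw (packed) size."""
        series = np.asarray(series, dtype=float)
        bits = (series > series.mean()).astype(np.uint8)
        raw = np.packbits(bits).tobytes()
        return len(zlib.compress(raw, 9)) / max(len(raw), 1)

    # A regular (low-complexity) signal vs. uniform noise.
    t = np.linspace(0, 20 * np.pi, 4096)
    print(kolmogorov_complexity(np.sin(t)))             # low ratio
    print(kolmogorov_complexity(np.random.rand(4096)))  # ratio near (or above) 1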

[3] Bullock, S. (2020). Láska za časů robotů Roomba™ (Love in the time of Roomba™). In J. Čejková (ed.), Robot 100. VSCHT, Prague, CZ. [ bib | www ]
[4] Pitonakova, L. & Bullock, S. (2020). The robustness-fidelity trade-off in Grow When Required Neural Networks performing continuous novelty detection. Neural Networks, 122, 183-195. [ bib | doi | pdf ]
We assess the suitability of Grow When Required Neural Networks (GWRNNs) for detecting novel features in a robot's visual input in the context of randomised physics-based simulation environments. We compare, for the first time, several GWRNN architectures, including new Plastic architectures in which the number of activated input connections for individual neurons is adjusted dynamically as the robot perceives a varying number of salient environmental features. The networks are studied in both one-shot and continuous novelty reporting tasks and we demonstrate that there is a trade-off, not unique to this type of novelty detector, between robustness and fidelity. Robustness is achieved through generalisation over the input space which minimises the impact of network parameters on performance, whereas high fidelity results from learning detailed models of the input space and is especially important when a robot encounters multiple novelties consecutively or must detect that previously encountered objects have disappeared from the environment. We show that using location data as a part of the monitored input data stream improves fidelity of the GWRNN and propose a number of other improvements that could mitigate the robustness-fidelity trade-off.
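
A heavily simplified sketch of the grow-when-required idea (Python): a network of prototype nodes grows whenever no existing node matches the current input well enough, and node insertion doubles as the novelty signal. The full GWR algorithm also maintains habituation counters and edge topology, and the parameter values here are illustrative assumptions, not those used in the paper.

    import numpy as np

    class GWRNoveltyDetector:
        """Minimal grow-when-required network: add a node (and report
        novelty) whenever no existing prototype matches the input."""

        def __init__(self, dim, activity_threshold=0.8, eps=0.1):
            self.w = np.empty((0, dim))   # prototype weight vectors
            self.a_t = activity_threshold
            self.eps = eps                # winner learning rate

        def observe(self, x):
            x = np.asarray(x, dtype=float)
            if len(self.w) == 0:
                self.w = x[None, :].copy()
                return True               # first input is always novel
            best = np.argmin(np.linalg.norm(self.w - x, axis=1))
            activity = np.exp(-np.linalg.norm(self.w[best] - x))
            if activity < self.a_t:
                # no node matches well: grow the network, report novelty
                self.w = np.vstack([self.w, (self.w[best] + x) / 2.0])
                return True
            self.w[best] += self.eps * (x - self.w[best])  # adapt winner
            return False

    det = GWRNoveltyDetector(dim=2)
    for _ in range(20):                       # habituate to inputs near the origin
        det.observe(0.01 * np.random.randn(2))
    print(det.observe(np.array([5.0, 5.0])))  # True: far from anything learned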

[5] Potts, M. W., Johnson, A., & Bullock, S. (2020). Evaluating the complexity of engineered systems: A framework informed by a user case study in systems engineering. Systems Engineering, 23(6), 707-723. [ bib | doi | pdf ]
Evaluating the complexity of an engineered system is challenging for any organization, even more so when operating in a System-of-Systems (SoS) context. Here, we analyse one particular decision support tool as an illustrative case study. This tool has been used for several years by Thales Group to evaluate system complexity across a variety of industrial engineering projects. The case study is informed by analysis of semi-structured interviews with systems engineering experts within Thales Group. This analysis reveals a number of positive and negative aspects of (i) the tool itself, and (ii) the way in which the tool is embedded operationally within the wider organization. While the first set of issues may be solved by making improvements to the tool itself, informed by further comparative analysis and the growing literature on complexity evaluation, the second “embedding challenge” is distinct, seemingly receiving less attention in the literature.

In this paper we focus on addressing this embedding challenge, by introducing a complexity evaluation framework, designed according to a set of principles derived from the case study analysis; namely that any effective complexity evaluation activity should feature collaborative effort towards building an evaluation informed by a shared understanding of contextually relevant complexity factors, iterative (re-)evaluation over the course of a project, and progressive refinement of the complexity evaluation tools and processes themselves through linking project evaluations to project outcomes via a wider organizational learning cycle. The paper concludes by considering next steps including the challenge of assuring that such a framework is being implemented effectively.

[6] Potts, M. W., Sartor, P. A., Johnson, A., & Bullock, S. (2020). Assaying the importance of system complexity for the systems engineering community. Systems Engineering, 23(5), 579-596. [ bib | doi | pdf ]
How should organizations approach the evaluation of system complexity at the early stages of system design in order to inform decision making? Since system complexity can be understood and approached in several different ways, such evaluation is challenging. In this study, we define the term “system complexity factors” to refer to a range of different aspects of system complexity that may contribute differentially to systems engineering outcomes. Views on the absolute and relative importance of these factors for early-life-cycle system evaluation are collected and analyzed using a qualitative questionnaire of International Council on Systems Engineering (INCOSE) members (n = 55). We identified and described the following trends in the data: there is little between-participant agreement on the relative importance of system complexity factors, even for participants with a shared background and role; participants tend to be internally consistent in their ratings of the relative importance of system complexity factors. Given the lack of alignment on the relative importance of system complexity factors, we argue that successful evaluation of system complexity can be better ensured by explicit determination and discussion of the (possibly implicit) perspective(s) on system complexity that are being taken.

[7] Potts, M. W., Sartor, P. A., Johnson, A., & Bullock, S. (2020). A network perspective on assessing systems architectures: Robustness to cascading failure. Systems Engineering, 23(5), 597-616. [ bib | doi ]
Despite a wealth of available system architecture frameworks and methodologies, approaches to evaluate the robustness and resiliency of architectures for complex systems or systems of systems are few in number. As a result, system architects may turn to graph-theoretic methods to assess architecture robustness and vulnerability to cascading failure. Here, we explore the application of such methods to the analysis of two real-world system architectures (a military communications system and a search and rescue system). Both architectures are found to be relatively robust to random vertex removal but more vulnerable to targeted vertex removal. Hardening strategies for limiting the extent of cascading failure are demonstrated to have varying degrees of effectiveness. However, in taking a network perspective on architecture robustness and susceptibility to cascading failure, we find several significant challenges that impede the straightforward use of graph-theoretic methods. Most fundamentally, the conceptualization of failure dynamics across heterogeneous architectural entities requires considerable further investigation.
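
For readers wanting to experiment, a minimal sketch of the random-versus-targeted removal comparison is below (Python with networkx). It uses a scale-free surrogate graph rather than the paper's architecture models, omits cascading-failure dynamics, and the 20% removal fraction is an arbitrary choice.

    import random

    import networkx as nx

    def giant_fraction(g, n0):
        """Fraction of the original n0 vertices in the largest surviving
        connected component."""
        if g.number_of_nodes() == 0:
            return 0.0
        return max(len(c) for c in nx.connected_components(g)) / n0

    N = 500
    g0 = nx.barabasi_albert_graph(N, 2, seed=1)  # scale-free stand-in, not the paper's data
    random.seed(1)

    removal_orders = {
        "random":   random.sample(list(g0.nodes), N),
        "targeted": sorted(g0.nodes, key=g0.degree, reverse=True),  # hubs first
    }
    for name, order in removal_orders.items():
        g = g0.copy()
        g.remove_nodes_from(order[: N // 5])     # knock out 20% of vertices
        print(name, round(giant_fraction(g, N), 3))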

2019

[8] Eleftheriou, A., Bullock, S., Graham, C. A., Skakoon-Sparling, S., & Ingham, R. (2019). Does attractiveness influence condom use intentions in women who have sex with men? PLoS ONE, 14(5), e0217152. [ bib | doi | pdf ]
Objectives: Attractiveness judgements have been shown to affect interpersonal relationships. The present study explored the relationships between perceived attractiveness, perceived sexual health status, condom use intentions and condom use resistance in women. Setting: The study data were collected using an online questionnaire. Participants: 489 English-speaking women who have sex with men, aged between 18 and 32 years. Outcome measures: Women were asked to rate the attractiveness of 20 men on the basis of facial photographs, to estimate the likelihood that each man had a sexually transmitted infection (STI), and to indicate their willingness to have sex with each man without a condom. Condom resistance tactics were also measured and their influence was assessed. Results: The more attractive a man was judged to be, the more likely it was that participants were willing to have sex with him (r(487) = 0.987, p < .001). Further, the more attractive a man was judged to be, the less likely women were to intend to use a condom during sex (r(487) = -0.582, p = .007). The average perceived STI likelihood for a man had no significant association with his average perceived attractiveness or with participants' average willingness to have sex with him. The more attractive a participant judged herself to be, the more she believed that, overall, men are likely to have an STI (r(487) = 0.103, p < .05). Conclusions: Women's perceptions of men's attractiveness influence their condom use intentions; such risk biases should be incorporated into sexual health education programmes and condom use interventions.

[9] Potts, M. W., Johnson, A., Sartor, P. A., & Bullock, S. (2019). A network perspective on assessing system architectures: Foundations and challenges. Systems Engineering, 22(6), 485-501. [ bib | doi | pdf ]
Organisations are increasingly faced with the challenge of architecting complex systems that must operate within a System of Systems (SoS) context. While network science has offered usefully clear insights into product and system architectures, we seek to extend these approaches to evaluate enterprise system architectures. Here, we explore the application of graph-theoretic methods to the analysis of two real-world enterprise architectures (a military communications system and a search and rescue system) and to assess the relative importance of different architecture components. For both architectures, different topological measures of component significance identify differing network vertices as important. From this we identify several significant challenges that a system architect needs to be cognisant of when employing graph-theoretic approaches to evaluate architectures: finding suitable abstractions of heterogeneous architectural elements and distinguishing between network-structural properties and system-functional properties. These challenges are summarised as five guiding principles for utilizing network science concepts in enterprise architecture evaluation.
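
The finding that different topological measures nominate different vertices as important is easy to reproduce even on textbook graphs. The sketch below (Python with networkx, using Krackhardt's kite graph rather than the paper's architectures) shows degree, betweenness, and closeness centrality each singling out a different vertex.

    import networkx as nx

    g = nx.krackhardt_kite_graph()   # classic 10-node example, not the paper's data

    rankings = {
        "degree":      nx.degree_centrality(g),
        "betweenness": nx.betweenness_centrality(g),
        "closeness":   nx.closeness_centrality(g),
    }
    for name, scores in rankings.items():
        # each measure crowns a different vertex as most "important"
        print(name, max(g.nodes, key=scores.get))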

[10] Saffre, F., Gianini, G., Hildmann, H., Davies, J., Bullock, S., Damiani, E., & Deneubourg, J.-L. (2019). Long-term memory-induced synchronisation can impair collective performance in congested systems. Swarm Intelligence, 13(2), 95-114. [ bib | doi | pdf ]
We investigate the hypothesis that long-term memory in populations of agents can lead to counter-productive emergent properties at the system level. Our investigation is framed in the context of a discrete, one-dimensional road-traffic congestion model: we investigate the influence of simple cognition in a population of rational commuter agents that use memory to optimize their departure time, taking into account congestion delays on previous trips. Our results differ from the well-known minority game in that crowded slots do not carry any explicit penalty. We use Markov chain analysis to uncover fundamental properties of this model and then use the gained insight as a benchmark. Next, using Monte Carlo simulations, we study two scenarios: one in which “myopic” agents only remember the outcome (delay) of their latest commute, and one in which their memory is practically infinite. We show that there exists a trade-off, whereby myopic memory reduces congestion but increases uncertainty, whilst infinite memory does the opposite. We evaluate performance against the optimal distribution of departure times (i.e., where both delay and uncertainty are minimized simultaneously). This optimal but unstable distribution is identified using a Genetic Algorithm.
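
A toy version of the departure-time game can be written in a few lines; the Python sketch below contrasts agents who recall only their latest delay with agents who keep a long-run per-slot estimate. Slot counts, capacities, and update rules here are invented and far simpler than the paper's Markov chain model.

    import numpy as np

    rng = np.random.default_rng(0)
    N, SLOTS, CAPACITY, DAYS = 300, 10, 30, 400

    def simulate(memory):
        """memory='myopic': recall only the latest delay and switch slots
        whenever delayed; memory='long': keep a running per-slot delay
        estimate (an exponential average standing in for infinite memory)."""
        choice = rng.integers(SLOTS, size=N)
        est = np.zeros((N, SLOTS))
        daily_mean = []
        rows = np.arange(N)
        for _ in range(DAYS):
            counts = np.bincount(choice, minlength=SLOTS)
            delay = np.maximum(counts - CAPACITY, 0)   # congestion beyond capacity
            experienced = delay[choice].astype(float)
            daily_mean.append(experienced.mean())
            if memory == "myopic":
                est[:] = 0.01                          # mild penalty: stay put if undelayed
                est[rows, choice] = experienced
            else:
                est[rows, choice] += 0.1 * (experienced - est[rows, choice])
            # every agent re-chooses the slot it currently estimates as fastest
            choice = np.argmin(est + 1e-9 * rng.random((N, SLOTS)), axis=1)
        tail = np.array(daily_mean[DAYS // 2:])
        return tail.mean(), tail.std()                 # (mean delay, day-to-day spread)

    for mem in ("myopic", "long"):
        print(mem, simulate(mem))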

[11] Wisetjindawat, W., Wilson, R. E., Bullock, S., & De Villafranca, A. E. M. (2019). Modelling the impact of spatial correlations of road failures on travel times during adverse weather conditions. Transportation Research Record, 2673(7), 157-168. [ bib | doi | pdf ]
Traveling in extreme adverse weather involves a high risk of travel delay and traffic accidents. There is a need to assess the impact of extreme weather on transport infrastructure and to find suitable mitigation strategies to alleviate the associated undesirable outcomes. Previous work in vulnerability studies applied either a constant failure probability or an assumed probabilistic distribution. Such assumptions ignore many of the factors behind road failure, especially the tendency of infrastructure components to fail interdependently. Based on empirical data of road failures and rainfall intensity during a typhoon, this study develops a statistical model, incorporating spatial correlations among the segments of road infrastructure, and uses it to evaluate the impact of the typhoon on travel time reliability. Mixed effects logistic regression as well as rare events logistic regression are applied to understand the factors involved in road failures and the spatial correlations of the failed segments. The analysis suggested that, in addition to rainfall intensity, features of the road geometry, including elevation, land slope, and distance from the nearest river, were important factors in failure. In addition, there is a significant correlation of failures within watersheds. This model gives insight into the characteristics of road failures and their associated travel risks, which is useful for authorities seeking mitigations to reduce adverse effects in future disasters.
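
As a sketch of the regression machinery involved (Python with statsmodels, on synthetic data): plain logistic regression of segment failure on rainfall and geometry covariates is shown below. The paper itself fits mixed-effects and rare-events variants with watershed-level correlation, which this deliberately omits; all variable names and coefficients are invented.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n = 2000

    # invented per-segment covariates standing in for the paper's predictors
    rain  = rng.gamma(2.0, 20.0, n)       # rainfall intensity, mm/h
    slope = rng.uniform(0.0, 30.0, n)     # land slope, degrees
    river = rng.exponential(500.0, n)     # distance to nearest river, m

    # generate synthetic failures from an assumed true model
    logit_p = -4.0 + 0.04 * rain + 0.05 * slope - 0.002 * river
    failed = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

    X = sm.add_constant(np.column_stack([rain, slope, river]))
    fit = sm.Logit(failed, X).fit(disp=0)
    print(fit.summary(xname=["const", "rain", "slope", "river_dist"]))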

2018

[12] Pitonakova, L., Crowder, R., & Bullock, S. (2018). Information exchange design patterns for robot swarm foraging and their application in robot control algorithms. Frontiers in Robotics and AI, 5(47). [ bib | doi | pdf ]
In swarm robotics, a design pattern provides high-level guidelines for the implementation of a particular robot behaviour and describes its impact on swarm performance. In this paper, we explore information exchange design patterns for robot swarm foraging. First, a method for the specification of design patterns for robot swarms is proposed that builds on previous work in this field and emphasises modular behaviour design, as well as information-centric micro-macro link analysis. Next, design pattern application rules that can facilitate the pattern usage in robot control algorithms are given. A catalogue of six design patterns is then presented. The patterns are derived from an extensive list of experiments reported in the swarm robotics literature, demonstrating the capability of the proposed method to identify distinguishing features of robot behaviour and their impact on swarm performance in a wide range of swarm implementations and experimental scenarios. Each pattern features a detailed description of robot behaviour and its associated parameters, facilitated by the usage of a multi-agent modelling language, BDRML, and an account of feedback loops and forces that affect the pattern's applicability. Scenarios in which the pattern has been used are described. The consequences of each design pattern on overall swarm performance are characterised within the Information-Cost-Reward framework, which makes it possible to formally relate the way in which robots acquire, share and utilise information. Finally, the patterns are validated by demonstrating how they improved the performance of foraging e-puck swarms and how they could guide algorithm design in other scenarios.

[13] Pitonakova, L., Crowder, R., & Bullock, S. (2018). The importance of information flow regulation in preferentially foraging robot swarms. In M. Dorigo, M. Birattari, C. Blum, A. Christensen, A. Reina, & V. Trianni (eds.), Swarm Intelligence: ANTS 2018, (pp. 277-289). Springer. [ bib | doi | pdf ]
Instead of committing to the first source of reward that it discovers, an agent engaged in “preferential foraging” continues to choose between different reward sources in order to maximise its foraging efficiency. In this paper, the effect of preferential source selection on the performance of robot swarms with different recruitment strategies is studied. The swarms are tasked with foraging from multiple sources in dynamic environments where worksite locations change periodically and thus need to be re-discovered. It is demonstrated that preferential foraging leads to a more even exploitation of resources and a more efficient exploration of the environment when information flow among robots, that results from recruitment, is regulated. On the other hand, preferential selection acts as a strong positive feedback mechanism for favouring the most popular reward source when robots exchange information in a small designated area, preventing the swarm from foraging efficiently and from responding to changes.

[14] Potts, M., Sartor, P., Johnson, A., & Bullock, S. (2018). Through a glass, darkly? Taking a network perspective on system-of-systems architectures. In E. Bonjour, D. Krob, L. Palladino, & F. Stephan (eds.), Complex Systems Design and Management: Proceedings of the Ninth International Conference on Complex Systems Design & Management (CSD&M2018), (pp. 121-132). Springer. [ bib | doi | pdf ]
A system-of-systems architecture can be thought of as a complex network comprising a set of entities of different types, connected together by a set of relationships, also of different types. A systems architect might attempt to make use of the analytic tools associated with network science when evaluating such architectures, anticipating that taking a “network perspective” might offer insights into their structure. However, taking a network perspective on real-world system-of-systems architectures is fraught with challenges. The relationship between the architecture and a network representation can be overly simplistic, meaning that network-theoretic models can struggle to respect, inter alia, the heterogeneity of system entities and their relationships, the richness of their behavior, and the vital role of context in an architecture. A more mature conceptualization of the relationship between architectures and their network representations is required before the lens of network science can offer a usefully clear view of architecture properties.

[15] Pitonakova, L., Crowder, R., & Bullock, S. (2018). The Information-Cost-Reward framework for understanding robot swarm foraging. Swarm Intelligence, 12(1), 71-96. [ bib | doi | pdf ]
Demand for autonomous swarms, where robots can cooperate with each other without human intervention, is set to grow rapidly in the near future. Currently, one of the main challenges in swarm robotics is understanding how the behaviour of individual robots leads to an observed emergent collective performance. In this paper, a novel approach to understanding robot swarms that perform foraging is proposed in the form of the Information-Cost-Reward (ICR) framework. The framework relates the way in which robots obtain and share information (about where work needs to be done) to the swarm's ability to exploit that information in order to obtain reward efficiently in the context of a particular task and environment. The ICR framework can be applied to analyse underlying mechanisms that lead to observed swarm performance, as well as to inform hypotheses about the suitability of a particular robot control strategy for new swarm missions. Additionally, the information-centred understanding that the framework offers paves a way towards a new swarm design methodology where general principles of collective robot behaviour guide algorithm design.

2017

[16] Iotti, B., Antonioni, A., Bullock, S., Darabos, C., Tomassini, M., & Giacobini, M. (2017). Infection dynamics on spatial small-world network models. Physical Review E, 95(5-1), 052316. [ bib | doi | pdf | www ]
The study of complex networks, and in particular of social networks, has mostly concentrated on relational networks, abstracting the distance between nodes. Spatial networks are, however, extremely relevant in our daily lives, and a large body of research exists to show that the distances between nodes greatly influence the cost and probability of establishing and maintaining a link. Random Geometric Graphs (RGG) are the main type of synthetic network model used to mimic the statistical properties and behavior of many social networks. We propose a model, called REDS, that extends Energy-Constrained RGGs to account for the synergic effect of sharing the cost of a link with our neighbors, as is observed in real relational networks. We apply both the standard Watts-Strogatz rewiring procedure and another method that conserves the degree distribution of the network. The second technique was developed to eliminate unwanted forms of spatial correlation between the degree of nodes that are affected by rewiring, limiting the effect on other properties such as clustering and assortativity. We analyze both the statistical properties of these two network types and their epidemiological behavior when used as a substrate for a standard SIS compartmental model. We consider and discuss the differences in properties and behavior between RGGs and REDS as rewiring increases and as infection parameters are changed. We report considerable differences both between the network types and, in the case of REDS, between the two rewiring schemes. We conclude that REDS represent, with the application of these rewiring mechanisms, extremely useful and interesting tools in the study of social and epidemiological phenomena in synthetic complex networks.
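
A minimal sketch of the kind of experiment described (Python with networkx): a random geometric graph is given a small fraction of Watts-Strogatz-style rewired shortcuts and then used as the substrate for a discrete-time SIS process. The REDS construction, the degree-preserving rewiring variant, and all parameter values are omitted or assumed here.

    import random

    import networkx as nx

    random.seed(1)
    N, RADIUS, P_REWIRE = 400, 0.08, 0.05
    BETA, MU, STEPS = 0.3, 0.1, 200       # infection rate, recovery rate, horizon

    g = nx.random_geometric_graph(N, RADIUS, seed=1)

    # Watts-Strogatz-style rewiring: detach one end of a few edges and
    # reattach it to a uniformly random node, creating long-range shortcuts.
    for u, v in list(g.edges()):
        if random.random() < P_REWIRE:
            g.remove_edge(u, v)
            w = random.choice([x for x in g.nodes if x != u and not g.has_edge(u, x)])
            g.add_edge(u, w)

    # discrete-time SIS dynamics from a single random seed node
    infected = {random.choice(list(g.nodes))}
    for _ in range(STEPS):
        new_inf = {v for u in infected for v in g[u]
                   if v not in infected and random.random() < BETA}
        recovered = {u for u in infected if random.random() < MU}
        infected = (infected | new_inf) - recovered
    print("endemic fraction:", len(infected) / N)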

[17] Potts, M., Sartor, P., Johnson, A., & Bullock, S. (2017). Hidden structures: Using graph theory to explore complex system of systems architectures. Presented at Complex Systems Design & Management (CSD&M2017), Paris, December 12-13, 2017. [ bib | pdf | www ]
The increasing interconnectivity of complex engineered systems of systems (SoS) leads to difficulties in ensuring that systems architectures are of sufficient quality (availability, maintainability, reliability, etc.). Typically, reductionist approaches are used during systems architecting, and these may fail to provide the desired insights into key relationships and behaviors. New approaches are therefore needed, and this work shows how tools from complexity science can be applied. Data from a NATO Architecture Framework complex SoS architecture, based on a Search and Rescue Use Case, is modelled using graph theory. The analysis includes degree distribution, density, connected components and modularity. Such analysis supports architectural decision making such as dependency allocation, boundary identification, areas of focus and selection between architectures. It is shown how analysis from complexity science can be used to analyze complex SoS architectures, providing an alternative view that explores relationships and structure in a non-reductionist, general way when considering architecture decisions.

[18] Pitonakova, L., Crowder, R., & Bullock, S. (2017). Behaviour-Data Relations Modelling Language for multi-robot control algorithms. In R. Vaughan, T. Maciejewski, & H. Zhang (eds.), 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), (pp. 727-732). IEEE Press. [ bib | pdf | www ]
Designing and representing control algorithms is challenging in swarm robotics, where the collective swarm performance depends on interactions between robots and with their environment. The currently available modelling languages, such as UML, cannot fully express these interactions. We therefore propose the Behaviour-Data Relations Modelling Language (BDRML), in which robot behaviours and the data that robots utilise, as well as the relationships between them, are explicitly represented. This allows BDRML to express control algorithms where robots cooperate and share information with each other while interacting with the environment.

[19] Bullock, S. (2017). Afterword: The Chinese room. In R. Page & R. Appleby (eds.), Thought X: Fictions and Hypotheticals. Comma Press. [ bib | pdf | www ]
[20] Gonzalez, M., Watson, R. A., & Bullock, S. (2017). Minimally sufficient conditions for the evolution of social learning and the emergence of non-genetic evolutionary systems. Artificial Life, 23(4), 493-517. [ bib | doi | pdf ]
Social learning, defined as the imitation of behaviours performed by others, is recognised as a distinctive characteristic in humans and several other animal species. Previous work has claimed that the evolutionary fixation of social learning requires decision-making cognitive abilities that result in transmission bias (e.g., discriminatory imitation) and/or guided variation (e.g., adaptive modification of behaviours through individual learning). Here, we present and analyse a simple agent-based model which demonstrates that the transition from instinctive actuators (i.e., non-learning agents whose behaviour is hardcoded in their genes) to social learners (i.e., agents that imitate behaviours) can occur without invoking such decision-making abilities. The model shows that the social learning of a trait may evolve and fix in a population if there are many possible behavioural variants of the trait, if it is subject to strong selection pressure for survival (as distinct from reproduction), and if imitation errors occur at a higher rate than genetic mutation. These results demonstrate that the (sometimes implicit) assumption in prior work that decision-making abilities are required is incorrect, thus allowing a more parsimonious explanation for the evolution of social learning that applies to a wider range of organisms. Furthermore, we identify genotype-phenotype disengagement as a signal for the imminent fixation of social learners, and explain the way in which this disengagement leads to the emergence of a basic form of cultural evolution (i.e., a non-genetic evolutionary system).

[21] Eleftheriou, A., Bullock, S., Graham, C. A., & Ingham, R. (2017). Using computer simulations for investigating a sex education intervention: An exploratory study. JMIR Serious Games, 5(2), e9. [ bib | doi | pdf ]
Background: Sexually transmitted infections (STIs) are an ongoing concern. The best method for preventing the transmission of these infections is the correct and consistent use of condoms. Few studies have explored the use of games in interventions for increasing condom use by challenging the false sense of security associated with judging the presence of an STI based on attractiveness.

Objectives: The primary purpose of this study was to explore the potential use of computer simulation as a serious game for sex education. Specific aims were to study the influence of a newly designed serious game on self-rated confidence for assessing STI risk, and examine whether this varied by gender, age and scores on sexuality-related personality trait measures.

Methods: The study employed an online questionnaire with between- and within-subjects analyses. An online platform hosted in the UK was used to deliver male and female stimuli (facial photographs) and collect data. A convenience sample of sixty-six participants (64% male, mean age 22.5 years) completed Term on the Tides, a computer simulation developed for this study. Participants also completed questionnaires on demographics, sexual preferences, sexual risk evaluations, the Sexual Sensation Seeking Scale and the Sexual Inhibition Subscale 2 (SIS2) of the Sexual Inhibition/Sexual Excitation Scales-Short Form.

Results: The overall confidence of participants to evaluate sexual risks reduced after playing the game (p<.005). Age and personality trait measures did not predict the change in confidence of evaluating risk. Women demonstrated larger shifts in confidence than did men (p = .03).

Conclusions: This study extends the literature by investigating the potential of computer simulations as serious games for sex education. Engaging in the Term on the Tides game had an impact on participants' confidence in evaluating sexual risks.

[22] Roman, S., Bullock, S., & Brede, M. (2017). Coupled societies are more robust against collapse: A hypothetical look at Easter Island. Ecological Economics, 132, 264-278. [ bib | doi | pdf ]
Inspired by the challenges of environmental change and the resource limitations experienced by modern society, recent decades have seen an increased interest in understanding the collapse of past societies. Modelling efforts so far have focused on single, isolated societies, while multi-patch dynamical models representing networks of coupled socio-environmental systems have received limited attention. We propose a model of societal evolution that describes the dynamics of a population that harvests renewable resources and manufactures products that have positive effects on population growth. Collapse is driven by a critical transition that occurs when the rate of natural resource extraction passes beyond a certain point, for which we present numerical and analytical results. Applying the model to Easter Island gives a good fit to the archaeological record. Subsequently, we investigate what effects emerge from the movement of people, goods, and resources between two societies that share the characteristics of Easter Island. We analyse how diffusive coupling and wealth-driven coupling change the population levels and their distribution across the two societies compared to non-interacting societies. We find that the region of parameter space in which societies can stably survive in the long-term is significantly enlarged when coupling occurs in both social and environmental variables.
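
For intuition, a minimal single-society harvest model in the Brander-Taylor tradition is sketched below (Python). It is not the paper's model, which adds manufactured products and two-society coupling, and every parameter value is an invented illustration of the overshoot-and-collapse dynamic.

    # Minimal renewable-resource harvest model: resource S regrows
    # logistically while population P harvests it; population growth
    # tracks harvest per capita. All parameters are invented.
    dt, T = 0.05, 4000
    r, K = 0.04, 12000.0      # resource regrowth rate and carrying capacity
    alpha = 1e-5              # harvest efficiency
    phi, d = 4.0, 0.03        # birth response to harvest, death rate

    S, P = K, 40.0
    trace = []
    for _ in range(int(T / dt)):
        harvest = alpha * S * P
        S += dt * (r * S * (1 - S / K) - harvest)
        P += dt * P * (phi * harvest / P - d)
        trace.append(P)
    print("peak population:", round(max(trace)), "final:", round(trace[-1]))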

2016

[23] Bartlett, S. & Bullock, S. (2016). A precarious existence: Thermal homeostasis of simple dissipative structures. In C. Gershenson, T. Froese, J. M. Siqueiros, W. Aguilar, E. J. Izquierdo, & H. Sayama (eds.), Artificial Life XV: Proceedings of The Fifteenth International Conference on the Synthesis and Simulation of Living Systems, (pp. 608-615). MIT Press. [Won the Best Paper prize.] [ bib | pdf | www ]
We demonstrate the emergence of spontaneous temperature regulation by the combined action of two sets of dissipative structures. Our model system comprised an incompressible, non-isothermal fluid in which two sets of Gray-Scott reaction-diffusion systems were embedded. We show that, with a temperature-dependent rate constant, self-reproducing spot patterns are extremely sensitive to temperature variations. Furthermore, if only one reaction is exothermic or endothermic while the second reaction has zero enthalpy, the system shows either runaway positive feedback or self-inhibiting patterns. However, a symbiotic system, in which one of the two reactions is exothermic and the other is endothermic, shows striking resilience to imposed temperature variations. Not only does the system maintain its emergent patterns, but it is seen to effectively regulate its internal temperature, no matter whether the boundary temperature is warmer or cooler than optimal growth conditions. This thermal homeostasis is a completely emergent feature.
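
The temperature coupling at the heart of the model is easy to sketch: below is a plain Gray-Scott reaction-diffusion system (Python) whose autocatalytic rate constant follows the Arrhenius equation, evaluated here on a static temperature field. The full model instead embeds the chemistry in a convecting, non-isothermal Lattice Boltzmann fluid with reaction enthalpies feeding back on temperature; the Arrhenius constants below are invented.

    import numpy as np

    n, Du, Dv, F, KILL = 128, 0.16, 0.08, 0.035, 0.06
    R_GAS, EA, A = 8.314, 2.0e4, 3.0e3    # Arrhenius parameters (illustrative)

    U = np.ones((n, n))
    V = np.zeros((n, n))
    U[n//2-5:n//2+5, n//2-5:n//2+5] = 0.5   # seed a central spot
    V[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25
    T = np.full((n, n), 300.0)              # static temperature field, K

    def lap(Z):
        """Five-point Laplacian with periodic boundaries."""
        return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
                np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

    for _ in range(5000):
        k = A * np.exp(-EA / (R_GAS * T))   # temperature-dependent rate constant
        r = k * U * V * V                   # U + 2V -> 3V at rate k(T)
        U += Du * lap(U) - r + F * (1 - U)
        V += Dv * lap(V) + r - (F + KILL) * V
    print("spot coverage:", float((V > 0.1).mean()))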

[24] Brace, L. & Bullock, S. (2016). Understanding language evolution in overlapping generations of reinforcement learning agents. In C. Gershenson, T. Froese, J. M. Siqueiros, W. Aguilar, E. J. Izquierdo, & H. Sayama (eds.), Artificial Life XV: Proceedings of The Fifteenth International Conference on the Synthesis and Simulation of Living Systems, (pp. 492-499). MIT Press. [ bib | pdf | www ]
Understanding how the dynamics of language learning and language change are influenced by the population structure of language users is crucial to understanding how lexical items and grammatical rules become established within the context of the cultural evolution of human language. This paper extends the recent body of work on the development of term-based languages through signalling games by exploring signalling game dynamics in a social population with overlapping generations. Specifically, we present a model with a dynamic population of agents, consisting of both mature and immature language users, where the latter learn from the formers' interactions with one another before reaching maturity. It is shown that populations in which mature individuals converse with many partners are better able to solve more complex signalling games. While interacting with a larger number of individuals initially makes it more difficult for language users to establish a conventionalised language, doing so increases the diversity of the input available to language learners, preventing them from developing the more idiosyncratic languages that emerge when agents interact with only a small number of individuals. This, in turn, prevents the signalling conventions from having to be renegotiated with each new generation of language users, making the emerging language more stable over subsequent generations. Furthermore, it is shown that allowing the children of language users to interact with one another benefits the communicative success of the population when the number of partners that mature agents interact with is low.
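
The underlying mechanic is the classic Lewis signalling game trained by Roth-Erev reinforcement; a minimal two-agent Python sketch is below. The paper's contribution, overlapping generations and population structure, is not reproduced here, and the state/signal count and round budget are arbitrary.

    import random

    random.seed(0)
    N_STATES = 3          # also the number of signals and acts
    ROUNDS = 20000

    # Roth-Erev reinforcement: propensity tables for sender and receiver.
    send = [[1.0] * N_STATES for _ in range(N_STATES)]   # state  -> signal
    recv = [[1.0] * N_STATES for _ in range(N_STATES)]   # signal -> act

    def draw(weights):
        return random.choices(range(len(weights)), weights=weights)[0]

    wins = 0
    for t in range(ROUNDS):
        state = random.randrange(N_STATES)
        signal = draw(send[state])
        act = draw(recv[signal])
        if act == state:                  # communication succeeded
            send[state][signal] += 1.0    # reinforce both choices
            recv[signal][act] += 1.0
            if t >= ROUNDS - 1000:
                wins += 1
    print("success rate over final 1000 rounds:", wins / 1000)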

[25] Bullock, S. (2016). “Shit happens”: The spontaneous self-organisation of communal boundary latrines via stigmergy in a null model of the European badger, Meles meles. In C. Gershenson, T. Froese, J. M. Siqueiros, W. Aguilar, E. J. Izquierdo, & H. Sayama (eds.), Artificial Life XV: Proceedings of The Fifteenth International Conference on the Synthesis and Simulation of Living Systems, (pp. 518-525). MIT Press. [ bib | pdf | www ]
The ability of European badgers to establish communal latrines at their territory boundaries is a well-known but poorly understood example of group-level biological organisation. To what extent might we expect it to arise via self-organisation rather than as the result of specific adaptations? This paper replicates and extends a model of badger foraging and territoriality to include defecation, “fæcotaxis” and overmarking behaviours, and shows that communal boundary latrines arise spontaneously through stigmergy in both territorial and non-territorial badgers, with no need for specific cognitive or behavioural adaptations such as spatial memory or individual recognition. The model suggests that fæcotaxis and overmarking behaviours are necessary for boundary latrine formation, that culling has little effect on the prevalence of fæcal sites (implicated in the spread of bovine tuberculosis in the UK), and that the spatial micro-structure of the environment is significant to the self-organisation process.

[26] Bullock, S. (2016). Alife as a model discipline for policy-relevant simulation modelling: Might “worse” simulations fuel a better science-policy interface? (Extended abstract). In C. Gershenson, T. Froese, J. M. Siqueiros, W. Aguilar, E. J. Izquierdo, & H. Sayama (eds.), Artificial Life XV: Proceedings of The Fifteenth International Conference on the Synthesis and Simulation of Living Systems, (pp. 28-29). MIT Press. [ bib | pdf | www ]
Policy-relevant scientific models are typically expected to make empirically valid predictions about policy-relevant problems. What are the consequences of shaping our science-policy interface in this way? Here, it is argued that the theoretically insecure simulation modelling pioneered within artificial life is emblematic of an important alternative approach with significance for policy-relevant modelling.

[27] Eleftheriou, A., Bullock, S., Graham, C. A., Stone, N., & Ingham, R. (2016). Does attractiveness influence condom use intentions in heterosexual men: An experimental study. BMJ Open, 6(6), e010883. [ bib | doi | pdf ]
Objectives: Judgements of attractiveness have been shown to influence the character of social interactions. The present study sought to better understand the relationship between perceived attractiveness, perceived sexual health status and condom use intentions in a heterosexual male population.

Setting: The study employed an electronic questionnaire to collect all data, during face-to-face sessions.

Participants: Fifty-one heterosexual, English-speaking men, aged between 18 and 69 years.

Outcome measures: Men were asked to rate the attractiveness of 20 women on the basis of facial photographs, to estimate the likelihood that each woman had a sexually transmitted infection (STI), and to indicate their willingness to have sex with or without a condom with each woman.

Results: The more attractive a woman was judged to be on average, the more willing participants were to have sex with her (p<0.0001) and the less likely they were to intend to use a condom during sex (p<0.0001). Multivariate analysis revealed that higher condom use intentions towards a particular woman were associated with lower ratings of her attractiveness (p<0.0005), higher ratings of her STI likelihood (p<0.0001), the participant being in an exclusive relationship (p=0.002), having a less satisfactory sex life (p=0.016), lower age (p=0.001), higher number of sexual partners (p=0.001), higher age at first intercourse (p=0.003), higher rates of condomless sex in the last 12 months (p=0.041), and lower confidence in their ability to assess whether or not a woman had an STI (p=0.001). The more attractive a participant judged himself to be, the more he believed that other men like him would engage in condomless sex (p=0.001), and the less likely he was to intend to use a condom himself (p=0.02).

Conclusions: Male perceptions of attractiveness influence their condom use intentions; such risk biases could profitably be discussed during sex education sessions and in condom use promotion interventions.

[28] Gray, J., Bijak, J., & Bullock, S. (2016). Deciding to disclose: A decision theoretic agent model of pregnancy and alcohol misuse. In A. Grow & J. Van Bavel (eds.), Agent-Based Modelling in Population Studies: Concepts, Methods, and Applications, (pp. 301-340). Springer. [ bib | doi | pdf ]
We draw together methodologies from game theory, agent-based modelling, decision theory, and uncertainty analysis to explore the process of decision making in the context of pregnant women disclosing their drinking behaviour to their midwives. We employ a game theoretic framework to define a signalling game. The game represents a scenario where pregnant women decide the extent to which they disclose their drinking behaviours to their midwives, and midwives employ the information provided to decide whether to refer their patients for costly specialist treatment. This game is then recast as two games played against “nature”, to permit the use of a decision theoretic approach where both classes of agent use simple rules to decide their moves. Four decision rules are explored: a lexicographic heuristic which considers only the link between moves and payoffs; a Bayesian risk minimisation agent that uses the same information; a more complex Bayesian risk minimiser with full access to the structure of the decision problem; and a Cumulative Prospect Theory (CPT) rule.

In simulation, we recreate two key qualitative trends described in the midwifery literature for all the decision models, and investigate the impact of introducing a simple form of social learning within agent groups. Finally, a global sensitivity analysis using Gaussian Emulation Machines (GEMs) is conducted to compare the response surfaces of the different decision rules in the game.
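
Of the four rules, the Cumulative Prospect Theory agent is the most involved; its value function is standard and easily sketched (Python, using Tversky and Kahneman's 1992 parameter estimates, and omitting CPT's probability-weighting component and everything specific to the midwifery game).

    def cpt_value(x, alpha=0.88, beta=0.88, lam=2.25):
        """Prospect-theoretic value function (Tversky & Kahneman, 1992):
        risk-averse over gains, loss-averse and risk-seeking over losses."""
        return x ** alpha if x >= 0 else -lam * (-x) ** beta

    # A CPT agent weighs a loss roughly twice as heavily as an equal gain,
    # so a fair 50/50 gamble looks unattractive:
    print(cpt_value(10.0) + cpt_value(-10.0))   # negative: the gamble is declined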

[29] Haslett, G., Bullock, S., & Brede, M. (2016). Planar growth generates scale free networks. Journal of Complex Networks. [ bib | doi | pdf ]
In this paper we introduce a model of spatial network growth in which nodes are placed at randomly selected locations on a unit square in R^2, forming new connections to old nodes subject to the constraint that edges do not cross. The resulting network has a power law degree distribution, high clustering and the small world property. We argue that these characteristics are a consequence of the two defining features of the network formation procedure: growth and planarity conservation. We demonstrate that the model can be understood as a variant of random Apollonian growth and further propose a one-parameter family of models with the Random Apollonian Network and the Deterministic Apollonian Network as extreme cases and our model as a midpoint between them. We then relax the planarity constraint by allowing edge crossings with some probability and find a smooth crossover from power law to exponential degree distributions as this probability is increased.
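
One condensed reading of the growth rule can be sketched as follows (Python): nodes arrive at random positions and, nearest first, accept every connection to existing nodes that does not cross an edge already placed. This is an interpretation for illustration only; the paper's precise attachment rule and its rewired (crossing-permitted) variant are not reproduced.

    import random
    from collections import Counter

    random.seed(2)

    def crosses(p1, p2, q1, q2):
        """True if segments p1-p2 and q1-q2 properly cross (segments that
        merely share an endpoint do not count as crossing)."""
        def ccw(a, b, c):
            return (b[0]-a[0]) * (c[1]-a[1]) - (b[1]-a[1]) * (c[0]-a[0])
        if {p1, p2} & {q1, q2}:
            return False
        return (ccw(p1, p2, q1) * ccw(p1, p2, q2) < 0 and
                ccw(q1, q2, p1) * ccw(q1, q2, p2) < 0)

    nodes = [(random.random(), random.random())]
    edges = []
    for _ in range(120):
        p = (random.random(), random.random())
        for q in sorted(nodes, key=lambda q: (q[0]-p[0])**2 + (q[1]-p[1])**2):
            if not any(crosses(p, q, a, b) for a, b in edges):
                edges.append((p, q))     # accept every non-crossing edge
        nodes.append(p)

    deg = Counter()
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    print("nodes:", len(nodes), "edges:", len(edges), "max degree:", max(deg.values()))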

[30] Pitonakova, L., Crowder, R., & Bullock, S. (2016). Information flow principles for plasticity in foraging robot swarms. Swarm Intelligence, 10(1), 33-63. [ bib | doi | pdf ]
An important characteristic of a robot swarm that must operate in the real world is the ability to cope with changeable environments by exhibiting behavioural plasticity at the collective level. For example, a swarm of foraging robots should be able to repeatedly reorganise in order to exploit resource deposits that appear intermittently in different locations throughout their environment. In this paper, we report on simulation experiments with homogeneous foraging robot teams and show that analysing swarm behaviour in terms of information flow can help us to identify whether a particular behavioural strategy is likely to exhibit useful swarm plasticity in response to dynamic environments. While it is beneficial to maximise the rate at which robots share information when they make collective decisions in a static environment, plastic swarm behaviour in changeable environments requires regulated information transfer in order to achieve a balance between the exploitation of existing information and exploration leading to acquisition of new information. We give examples of how information flow analysis can help designers to decide on robot control strategies with relevance to a number of applications explored in the swarm robotics literature.

[31] Pitonakova, L., Crowder, R., & Bullock, S. (2016). Task allocation in foraging robot swarms: The role of information sharing. In C. Gershenson, T. Froese, J. M. Siqueiros, W. Aguilar, E. J. Izquierdo, & H. Sayama (eds.), Artificial Life XV: Proceedings of The Fifteenth International Conference on the Synthesis and Simulation of Living Systems, (pp. 306-313). MIT Press. [ bib | pdf | www ]
Autonomous task allocation is a desirable feature of robot swarms that collect and deliver items in scenarios where congestion, caused by accumulated items or robots, can temporarily interfere with swarm behaviour. In such settings, self-regulation of workforce can prevent unnecessary energy consumption. We explore two types of self-regulation: non-social, where robots become idle upon experiencing congestion, and social, where robots broadcast information about congestion to their team mates in order to socially inhibit foraging. We show that while both types of self-regulation can lead to improved energy efficiency and increase the amount of resource collected, the speed with which information about congestion flows through a swarm affects the scalability of these algorithms.

[32] Romanowska, I., Gamble, C., Bullock, S., & Sturt, F. (2016). Dispersal and the Movius Line: Testing the effect of dispersal on population density through simulation. Quaternary International. [ bib | doi | pdf ]
It has been proposed that a strong relationship exists between the population size and density of Pleistocene hominins and their competence in making stone tools. Here we focus on the first “Out of Africa” dispersal, 1.8 Ma ago, and the idea that it might have featured lower population density and the fragmentation of hominin groups in areas furthest away from the point of origin. As a result, these distant populations in Central and East Asia and Europe would not have been able to sustain sophisticated technological knowledge and would have reverted to a pattern of simpler stone-knapping techniques. This process could have led to the establishment of the “Movius Line” and other long-lasting continental-scale patterns in the spatial distribution of Lower Palaeolithic stone technology.

Here we report on a simulation developed to evaluate if, and under what conditions, the early “Out of Africa” dispersal could lead to such a demographic pattern. The model comprises a dynamic environmental reconstruction of Old World vegetation in the timeframe 2.5-0.25 Ma coupled with a standard biological model of population growth and dispersal. The spatial distribution of population density is recorded over the course of the simulation. We demonstrate that, under a wide sweep of both environmental and behavioural parameter values, and across a range of scenarios that vary the role of disease and the availability of alternative crossing points between Africa, Europe and Asia, the demographic consequence of dispersal is not a gradual attenuation of the population size away from the point of origin but a pattern of ecologically driven local variation in population density. The methodology presented opens a new route to understanding the phenomenon of the Movius Line and other large-scale spatio-temporal patterns in the archaeological record and provides new insight into the debate on the relationship between demographics and cultural complexity. This study also highlights the potential of simulation studies for testing complex conceptual models and the importance of building reference frameworks based on known proxies in order to achieve more rigorous model development in Palaeolithic archaeology and beyond.
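
The "standard biological model of population growth and dispersal" the simulation builds on can be caricatured in one dimension as a Fisher-KPP-style reaction-diffusion equation with spatially varying carrying capacity; the Python sketch below shows how, given enough time, density settles to track local habitat quality rather than decaying with distance from the origin. The environmental reconstruction, disease scenarios, and crossing points of the actual model are all absent, and the parameters are invented.

    import numpy as np

    n, steps = 200, 4000
    r, K, D = 0.02, 1.0, 0.2     # growth rate, base capacity, diffusivity
    rng = np.random.default_rng(5)
    suitability = rng.uniform(0.2, 1.0, n)   # stand-in for local habitat quality

    pop = np.zeros(n)
    pop[0] = 0.1                 # founding population at the "origin"

    for _ in range(steps):
        lap = np.roll(pop, 1) + np.roll(pop, -1) - 2 * pop
        lap[0] = pop[1] - pop[0]          # closed (no-flux) boundaries
        lap[-1] = pop[-2] - pop[-1]
        pop += D * lap + r * pop * (1 - pop / (K * suitability))

    # final density correlates with suitability, not distance from the origin
    print(np.corrcoef(pop, suitability)[0, 1])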

2015

[33] Antonioni, A., Bullock, S., Darabos, C., Giacobini, M., Iotti, B. N., Moore, J. H., & Tomassini, M. (2015). Contagion on networks with self-organised community structure. In P. Andrews, L. Caves, R. Doursat, S. Hickinbotham, F. Polack, S. Stepney, T. Taylor, & J. Timmis (eds.), Advances in Artificial Life: Proceedings of the Thirteenth European Conference on Artificial Life (ECAL 2015), (pp. 183-190). MIT Press. [ bib | doi | pdf ]
Living systems are organised in space. This imposes constraints on both their structural form and, consequently, their dynamics. While artificial life research has demonstrated that embedding an adaptive system in space tends to have a significant impact on its behaviour, we do not yet have a full account of the relevance of spatiality to living self-organisation.

Here, we extend the REDS model of spatial networks with self-organised community structure to include the 'small world' effect. We demonstrate that REDS networks can become small worlds with the introduction of a small amount of random rewiring. We then explore how this rewiring influences a simple dynamic process representing the contagious spread of infection or information.

We show that epidemic outbreaks arise more easily and spread faster on REDS networks compared to standard random geometric graphs (RGGs). Outbreaks spread even faster on randomly rewired small world REDS networks (due to their shorter path lengths) but initially find it more difficult to establish themselves (due to their reduced community structure). Overall, we find that small world REDS networks, with their combination of short characteristic path length, positive assortativity, strong community structure and high clustering, are more susceptible to a range of contagion dynamics than RGGs, and that they offer a useful abstract model for studying dynamics on spatially organised organic systems.

[34] Bartlett, S. & Bullock, S. (2015). Emergence of competition between different dissipative structures for the same free energy source. In P. Andrews, L. Caves, R. Doursat, S. Hickinbotham, F. Polack, S. Stepney, & T. Taylor (eds.), Advances in Artificial Life: Proceedings of the Thirteenth European Conference on Artificial Life (ECAL 2015), (pp. 415-422). MIT Press. [Won the Best Paper prize.] [ bib | doi | pdf ]
In this paper, we explore the emergence and direct interaction of two different types of dissipative structure in a single system: self-replicating chemical spot patterns and buoyancy-induced convection rolls. A new Lattice Boltzmann Model is developed, capable of simulating fluid flow, heat transport, and thermal chemical reactions, all within a simple, efficient framework. We report on a first set of simulations using this new model, wherein the Gray-Scott reaction-diffusion system is embedded within a non-isothermal fluid undergoing natural convection due to temperature gradients. The non-linear reaction which characterises the Gray-Scott system is given a temperature-dependent rate constant of the form of the Arrhenius equation. The enthalpy change (exothermic heat release or endothermic heat absorption) of the reaction can also be adjusted, allowing a direct coupling between the dynamics of the reaction and the thermal fluid flow.

The simulations show positive feedback effects when the reaction is exothermic, but an intriguing, competitive and unstable behaviour occurs when the reaction is sufficiently endothermic. In fact, when convection plumes emerge and grow, the reaction-diffusion spots immediately surround them, since they require a source of heat for the reaction to proceed. However, the proliferation of spot patterns then dampens the local temperature, eventually eliminating the initial convection plume and reducing the ability of the spots to persist. This behaviour appears almost ecological, similar as it is to competitive interactions between organisms competing for the same nutrient source.

[35] Bezerra, T. R., Moura, A., Bullock, S., & Pfahl, D. (2015). A system dynamics simulator for decision support in risk-based IT outsourcing capabilities management. In M. S. Obaidat, T. Ören, J. Kacprzyk, & J. Filipe (eds.), Simulation and Modeling Methodologies, Technologies and Applications: International Conference, SIMULTECH 2014 Vienna, Austria, August 28-30, 2014 Revised Selected Papers, (pp. 131-152). Springer. [ bib | doi | pdf ]
Organizations face important risks with IT Outsourcing (ITO), the practice of delegating organizational IT functions to third parties. Here, we employ a system dynamics simulator to support ITO decision-making under risk, taking a dynamic and integrated view of both capabilities management and benefits management. After briefly presenting its functionality, we use the simulator to assess how deficits in two IT capabilities - Contract Monitoring (on the customer's side) and Service Delivery (on the supplier's side) - affect the earned values of service orders, the ITO budget, service completion deadlines and damage to the customer-supplier relationship. Validation is ongoing at four institutions in Brazil, including a large, state tax collecting and finance agency. Initial results are encouraging and indicate the simulator is useful for planning and managing ITO activities.

[36] Brace, L., Bullock, S., & Noble, J. (2015). Achieving compositional language in a population of iterated learners. In P. Andrews, L. Caves, R. Doursat, S. Hickinbotham, F. Polack, S. Stepney, T. Taylor, & J. Timmis (eds.), Advances in Artificial Life: Proceedings of the Thirteenth European Conference on Artificial Life (ECAL 2015), (pp. 349-356). MIT Press. [ bib | doi | pdf ]
Iterated learning takes place when the input into a particular individual's learning process is itself the output of another individual's learning process. This is an important feature to capture when investigating human language change, or the dynamics of culturally learned behaviours in general. Over the last fifteen years, the Iterated Learning Model (ILM) has been used to shed light on how the population-level characteristics of learned communication arise. However, until now each iteration of the model has tended to feature a single immature language user learning from their interactions with a single mature language user. Here, the ILM is extended to include a population of immature and mature language users. We demonstrate that the structure and make-up of this population influences the dynamics of language change that occur over generational time. In particular, we show that, by increasing the number of trainers from which an agent learns, the agent in question learns a fully compositional language at a much faster rate, and with less training data. It is also shown that, so long as the number of mature agents is large enough, this finding holds even if a learner's trainers include other agents that do not yet possess full linguistic competence.

[37] Hill, N. & Bullock, S. (2015). Modelling the role of trail pheromone in the collective construction of termite royal chambers. In P. Andrews, L. Caves, R. Doursat, S. Hickinbotham, F. Polack, & S. Stepney (eds.), Advances in Artificial Life: Proceedings of the Thirteenth European Conference on Artificial Life (ECAL 2015), (pp. 43-50). MIT Press. [ bib | doi | pdf ]
Experiments with worker termites constructing a royal chamber around a termite queen of the species Macrotermes subhyalinus (Rambur) have shown that both trail and cement pheromones are involved and necessary for the successful formation of pillars during the building process. However, earlier models of the construction were able to demonstrate stigmergic pillar formation with cement pheromone alone. We present results from a new three-dimensional agent-based model, developed to investigate the role of trail pheromone in the construction process. The model is able to demonstrate how, if the properties of the cement pheromone are altered so that its attractive influence is more localised than in earlier models, termites are unable to produce significant pillar formation. The model shows how the addition of trail deposition and following effectively increases the range of the stigmergic effect so that pillar formation is restored. The presence of trail pheromone also results in pillars which are narrower than those produced by cement pheromone alone, and which show more pronounced lateral extensions. Additionally, the paths that the termites take from the termite queen to building sites become more directed with time. These features are in keeping with observation and have not been previously modelled.

[38] Khoury, M., Bullock, S., Fu, G., & Dawson, R. (2015). Improving measures of topological robustness in networks of networks and suggestion of a novel way to counter both failure propagation and isolation. Infrastructure Complexity, 2(1). [ bib | doi | pdf ]
The study of interdependent complex networks in the last decade has shown how cascading failure can result in the recursive and complete fragmentation of all connected systems following the destruction of a comparatively small number of nodes. Existing "network of networks" approaches are still in their infancy and have shown limits when trying to model the robustness of real-world systems, due to simplifying assumptions regarding network interdependencies and post-attack viability. In order to increase the realism of such models, we challenge such assumptions by validating the following four hypotheses through experimental results obtained from computer-based simulations. Firstly, we suggest that, in the case of network topologies vulnerable to fragmentation, replacing the standard measure of robustness, based on the size of the single largest remaining connected component, with a new measure that allows secondary components to remain viable when measuring post-attack viability can significantly improve the model. Secondly, we show that it is possible to influence the way failure propagation is balanced between coupled networks, while keeping the same overall robustness score, by allowing nodes in a given network to have multiple counterparts in another network. Thirdly, we challenge the generalised assumption that partitioning between networks is a good way to increase robustness, and show that isolation is a force equally as destructive as the iterative propagation of cascading failure. This result significantly alters where the optimum robustness lies in the balance between isolation and inter-network coupling in such interconnected systems. Finally, we propose a solution to the consequent problem of the seemingly ever-increasing vulnerability of interdependent networks to both cascading failure and isolation: the use of permutable nodes that would give such systems rewiring capabilities. This last concept could have wide implications when trying to improve the topological resilience of natural or engineered interdependent networks.
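
The first of these hypotheses is easy to operationalise; a sketch (Python with networkx) contrasting the standard largest-component robustness score with a measure that also credits viable secondary components is given below. The size threshold used to define viability is an invented placeholder for the paper's richer criterion.

    import networkx as nx

    def robustness(g, n0, viability_threshold=None):
        """Post-attack viability score.

        With viability_threshold=None this is the standard measure: the
        fraction of the original n0 nodes in the single largest component.
        Otherwise, every component of at least viability_threshold nodes
        counts as surviving."""
        comps = [len(c) for c in nx.connected_components(g)]
        if not comps:
            return 0.0
        if viability_threshold is None:
            return max(comps) / n0
        return sum(c for c in comps if c >= viability_threshold) / n0

    g = nx.erdos_renyi_graph(200, 0.012, seed=3)
    g.remove_nodes_from(list(g.nodes)[:40])          # a crude "attack"
    print(robustness(g, 200))                        # giant component only
    print(robustness(g, 200, viability_threshold=5)) # all viable fragments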

2014

[39] Antonioni, A., Bullock, S., & Tomassini, M. (2014). REDS: An energy-constrained spatial social network model. In H. Lipson, H. Sayama, J. Rieffel, S. Risi, & R. Doursat (eds.), Artificial Life XIV: Proceedings of The Fourteenth International Conference on the Synthesis and Simulation of Living Systems, (pp. 368-375). MIT Press. [ bib | doi | pdf ]
The organisation of living systems is neither random nor regular, but tends to exhibit complex structure in the form of clustering and modularity. Here, we present a very simple model that generates random networks with spontaneous community structure reminiscent of living systems, particularly those involving social interaction. We extend the well-known random geometric graph model, in which spatially embedded networks are constructed subject to a constraint on edge length, in order to capture two key additional features of organic social networks. First, relationships that span longer distances are more costly to maintain. Second, relationships between nodes that share neighbours may be less costly to maintain due to social synergy. The resulting networks have several properties in common with those of organic social networks. We demonstrate that the model generates non-trivial community structure and that, unlike for random geometric graphs, densely connected communities do not simply arise as a consequence of an initial locational advantage.
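
A rough sketch of this kind of energy-constrained spatial network growth is given below. The cost function (distance discounted by shared neighbours) and all parameter values are assumptions reconstructed from the description above, not the paper's exact specification:

    import math, random

    random.seed(0)
    N, REACH, ENERGY, SYNERGY = 200, 0.15, 0.1, 2.0   # illustrative parameters
    pos = [(random.random(), random.random()) for _ in range(N)]
    adj = [set() for _ in range(N)]

    def dist(i, j):
        return math.hypot(pos[i][0] - pos[j][0], pos[i][1] - pos[j][1])

    def cost(i, j):
        shared = len(adj[i] & adj[j])        # shared neighbours discount the cost
        return dist(i, j) / (1.0 + SYNERGY * shared)

    def spent(i):
        return sum(cost(i, j) for j in adj[i])

    pairs = [(i, j) for i in range(N) for j in range(i + 1, N) if dist(i, j) <= REACH]
    random.shuffle(pairs)
    for i, j in pairs:                       # add edges both endpoints can afford
        if spent(i) + cost(i, j) <= ENERGY and spent(j) + cost(i, j) <= ENERGY:
            adj[i].add(j); adj[j].add(i)
    print(sum(len(a) for a in adj) / 2, "edges")

Sweeping SYNERGY upwards from zero moves the resulting graphs away from plain random geometric graphs towards networks with cheaper, denser local clusters.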

[40] Bartlett, S. & Bullock, S. (2014). Natural convection of a two-dimensional Boussinesq fluid does not maximize entropy production. Physical Review E, 90(2), 1-8. [ bib | doi | pdf ]
Rayleigh-Bénard convection is a canonical example of spontaneous pattern formation in a nonequilibrium system. It has been the subject of considerable theoretical and experimental study, primarily for systems with constant (temperature or heat flux) boundary conditions. In this investigation, we have explored the behavior of a convecting fluid system with negative feedback boundary conditions. At the upper and lower system boundaries, the inward heat flux is defined such that it is a decreasing function of the boundary temperature. Thus the system's heat transport is not constrained in the same manner that it is in the constant temperature or constant flux cases. It has been suggested that the entropy production rate (which has a characteristic peak at intermediate heat flux values) might apply as a selection rule for such a system. In this work, we demonstrate with Lattice Boltzmann simulations that entropy production maximization does not dictate the steady state of this system, despite its success in other, somewhat similar scenarios. Instead, we will show that the same scaling law of dimensionless variables found for constant boundary conditions also applies to this system.
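
In outline, the boundary condition and the quantity at issue can be written as follows; the linear form of the feedback is an illustrative assumption (the paper defines the precise functional form), while the entropy production rate for steady heat transport is standard:

    q_{\mathrm{in}}(T_b) = q_0 - \gamma\,(T_b - T_{\mathrm{ref}}), \qquad
    \dot{S} = Q \left( \frac{1}{T_{\mathrm{cold}}} - \frac{1}{T_{\mathrm{hot}}} \right)

Here q_in is the inward heat flux at a boundary of temperature T_b (decreasing in T_b, hence negative feedback), and \dot{S} is the entropy production rate associated with a steady heat current Q passing between the hot and cold boundaries, which peaks at intermediate flux values.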

[41] Bezerra, T. R., Bullock, S., & Moura, A. (2014). A simulation model for risk management support in IT outsourcing. In M. S. Obaidat, J. Kacprzyk, & T. Ören (eds.), SIMULTECH 2014: Proceedings of the Fourth International Conference on Simulation and Modeling Methodologies, Technologies and Applications, (pp. 339-351). Scitepress. [ bib | doi | pdf ]
IT Outsourcing (ITO) is the practice of delegating organizational IT functions to a third party. However, this practice introduces important risks for customer organizations. We have developed a system dynamics simulation model to support ITO decision making that considers a dynamic and integrated view of capabilities management and benefits management. Two IT capabilities are modelled: Contract Monitoring (on the customer's side) and Service Delivery (on the supplier's side). In this paper the proposed model is used to assess the risks presented by a deficit in these capabilities. The results of our experiments indicate that a lack of contract monitoring capability in ITO contracting organizations directly impacts service conclusion time and influences the cost of contract management, which is an important risk factor related to exceeding the ITO budget. It was also found that low levels of service delivery capability in the supplier most significantly impact the cost of rework and the number of penalties. These factors influence the profitability of the supplier and may induce it to abandon the contract prematurely.

[42] Bullock, S. (2014). Afterward: A comma on the wall. In R. Page & M. Amos (eds.), Beta Life: Stories From an A-Life Future. Comma Press. [ bib | pdf | www ]
[43] Bullock, S. (2014). Levins and the lure of artificial worlds. The Monist, 97(3), 301-320. [ bib | doi | pdf ]
What is it about simulation models that has led some practitioners to treat them as potential sources of empirical data on the real-world systems being simulated; that is, to treat simulations as 'artificial worlds' within which to perform computational 'experiments'? Here we use the work of Richard Levins as a starting point in identifying the appeal of this model building strategy, and proceed to account for why this appeal is strongest for computational modellers. This analysis suggests a perspective on simulation modelling that makes room for 'artificial worlds' as legitimate science without having to accept that they should be treated as sources of empirical data.

[44] Fu, G., Dawson, R., Khoury, M., & Bullock, S. (2014). Interdependent networks: Vulnerability analysis and strategies to limit cascading failure. The European Physical Journal B, 87(7), 148. [ bib | doi | pdf ]
Network theory is increasingly employed to study the structure and behaviour of social, physical and technological systems - including civil infrastructure. Many of these systems are interconnected and the interdependencies between them allow disruptive events to propagate across networks, enabling damage to spread far beyond the immediate footprint of disturbance. In this research we experiment with a model to characterise the configuration of interdependencies in terms of direction, redundancy and extent, and we analyse the performance of interdependent systems with a wide range of possible coupling modes. We demonstrate that networks with directed dependencies are less robust than those with undirected dependencies, and that the degree of redundancy in inter-network dependencies can have a differential effect on robustness determined by their direction. As interdependencies between many real-world systems exhibit these characteristics, it is likely that many such systems operate near critical thresholds. The vulnerability of an interdependent network is shown to be reducible in a cost effective way, either by optimising inter-network connections, or by hardening high degree nodes. The results improve understanding of the influence of interdependencies on system performance and how to mitigate associated risks.
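
The basic cascade mechanism under study can be sketched in a few lines. This minimal version assumes one-to-one, undirected interdependency between two random networks and treats only a network's largest component as functional; it is one simple configuration in the space the paper explores, not its full model:

    import networkx as nx

    def cascade(G1, G2, attacked):
        """One-to-one, undirected interdependency: node i in G1 depends on node i
        in G2 and vice versa; nodes survive only while they sit inside their own
        network's largest connected component."""
        alive = set(G1.nodes) - set(attacked)
        while True:
            g1 = G1.subgraph(alive)
            giant1 = max(nx.connected_components(g1), key=len) if alive else set()
            g2 = G2.subgraph(alive)
            giant2 = max(nx.connected_components(g2), key=len) if alive else set()
            survivors = giant1 & giant2       # must be functional in both networks
            if survivors == alive:
                return alive
            alive = survivors                 # failures propagate; iterate to a fixed point

    G1 = nx.erdos_renyi_graph(300, 0.015, seed=1)
    G2 = nx.erdos_renyi_graph(300, 0.015, seed=2)
    print(len(cascade(G1, G2, attacked=range(30))), "nodes survive")

Directed or redundant couplings of the kind analysed in the paper can be explored by replacing the one-to-one survivorship rule.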

[45] Gilbert, N. & Bullock, S. (2014). Complexity at the social science interface. Complexity, 19(6), 1-4. [ bib | doi | pdf ]
This paper introduces a special issue of Complexity dedicated to the increasingly important element of complexity science that engages with social policy. We introduce and frame an emerging research agenda that seeks to enhance social policy by working at the interface between the social sciences and the physical sciences (including mathematics and computer science), and term this research area the 'social science interface' by analogy with research at the life sciences interface. We locate and exemplify the contribution of complexity science at this new interface before summarising the contributions collected in this special issue and identifying some common themes that run through them.

[46] Gonzalez, M., Watson, R. A., Noble, J., & Bullock, S. (2014). The origin of culture: Selective conditions for horizontal information transfer. In H. Lipson, H. Sayama, J. Rieffel, S. Risi, & R. Doursat (eds.), Artificial Life XIV: Proceedings of the Fourteenth International Conference on the Synthesis and Simulation of Living Systems, (pp. 408-414). MIT Press. [ bib | doi | pdf ]
Culture is a central component in the study of numerous disciplines in social science and biology. Nevertheless, a consensus on what it is and how we can represent it in a meaningful and useful way has been hard to reach, especially due to the multifaceted aspects of its nature. In this work we dissect culture into its most basic components and propose horizontal information transfer as its most crucial aspect. We discuss the two fundamental processes that are required for culture to emerge in an evolutionary context, namely: increased mutation rates and survival selection. To show how each of these components affects the emergence of culture, a genetic algorithm was explored for a range of conditions. Here, we formalize when and how a population is said to move from biological to cultural evolution and why such a transition radically changes its evolutionary dynamics. Our results suggest that horizontal transfer of information in cultural systems requires the evolution of survival-enhancing traits rather than reproduction-enhancing ones. We consider this requirement to be key for the evolution of rich cultural systems, like the one present in humans.
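
A toy contrast between purely vertical inheritance and the addition of horizontal copying from survivors can be sketched as follows (the bit-counting fitness function, rates and population size are illustrative assumptions, not the paper's experimental setup):

    import random

    L, N, GENS, MUT = 20, 100, 200, 0.02      # illustrative parameters

    def fitness(g):                           # toy count of survival-enhancing traits
        return sum(g) / L

    def evolve(horizontal):
        pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(N)]
        for _ in range(GENS):
            survivors = [g for g in pop if random.random() < fitness(g)]
            if not survivors:
                survivors = [random.choice(pop)]
            pop = [random.choice(survivors)[:] for _ in range(N)]   # vertical reproduction
            if horizontal:                    # copy one trait from a random survivor
                for g in pop:
                    donor = random.choice(survivors)
                    k = random.randrange(L)
                    g[k] = donor[k]
            for g in pop:                     # mutation
                for k in range(L):
                    if random.random() < MUT:
                        g[k] = 1 - g[k]
        return sum(fitness(g) for g in pop) / N

    print(evolve(False), evolve(True))

In this toy setting the horizontal variant tends to spread useful traits faster, since adoption from survivors does not have to wait for reproduction.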

[47] Khoury, M. & Bullock, S. (2014). Multi-level resilience: reconciling robustness, recovery and adaptability from a network science perspective. International Journal of Adaptive, Resilient and Autonomic Systems, 5(4), 34-45. [ bib | doi | pdf ]
From a multi-disciplinary point of view, research on resilience focuses on robustness, recovery, and adaptive capacity. Robustness quantifies how much damage a system can take before it breaks, whereas recovery refers to the ability of a system to recuperate within limits of time and resources, and adaptability requires a system to be able to structurally reorganize over time so as to improve its chances of survival when facing disturbances. In this paper, after discussing examples of models of robustness, recovery and adaptability from different scientific disciplines, we discuss the relationship between these three aspects of resilience, introducing a multi-level resilience hierarchy, termed the resilience pyramid, with which to relate them to each other. The paper then exemplifies this multi-level view of resilience by discussing the resilience of symbiotic networks to cascading failure in the context of modern infrastructures, and considers the introduction of infrastructure nodes with permutable roles as a possible solution.

[48] Pitonakova, L., Crowder, R., & Bullock, S. (2014). Understanding the role of recruitment in collective robot foraging. In H. Lipson, H. Sayama, J. Rieffel, S. Risi, & R. Doursat (eds.), Artificial Life XIV: Proceedings of the Fourteenth International Conference on the Synthesis and Simulation of Living Systems, (pp. 264-271). MIT Press. [ bib | doi | pdf ]
When is it profitable for robots to forage collectively? Here we compare the ability of swarms of simulated bio-inspired robots to forage either collectively or individually. The conditions under which recruitment (where one robot alerts another to the location of a resource) is profitable are characterised, and explained in terms of the impact of three types of interference between robots (physical, environmental, and informational). Key factors determining swarm performance include resource abundance, the reliability of shared information, time limits on foraging, and the ability of robots to cope with congestion around discovered resources and around the base location. Additional experiments introducing odometry noise indicate that collective foragers are more susceptible to odometry error.

2013

[49] Bartlett, S. & Bullock, S. (2013). Avenues for emergent ecologies. In Emergence in Chemical Systems 3.0. Event Dates: 17-20 June, 2013. [ bib | pdf | www ]
In this work, we present some fascinating behaviour emerging from a simple synthetic chemistry model. The results of Ono and Ikegami (2001) demonstrated the spontaneous formation of primitive, self-reproducing cells from a random homogeneous mixture of chemical components. Their model made use of a simple, artificial reaction network. Discrete particles were placed on a triangular lattice and the dynamics consisted of the following particle transitions: translation over one lattice spacing and chemical transformation. The primary particle types were membrane-forming particles, catalysts and water. The membrane particles formed structures akin to lipid bilayers. Their synthesis was stimulated by the catalyst particles, which were also capable of template self-replication using precursors. The system readily exhibits protocell formation from a random initial condition. These protocells form, grow, divide and eventually decay in a continuous cycle. Such emergent dynamics were an illuminating result given that the simulation itself only defines local interactions between particles and a set of physical transition rules. The protocell structures are not explicitly represented or built into the model. Hence it demonstrated a basic physical logic wherein the concepts of self-maintenance and self-reproduction could arise spontaneously from a set of simpler, lower level rules. In essence, it was an in silico realisation of the principle of autopoiesis.

We decided to extend this work by augmenting the particle species repertoire. An additional catalyst was added, which did not stimulate the synthesis of membrane particles, but rather stimulated their decay. It was expected that this would reduce the rate of protocell formation. However, a surprising dynamic was uncovered with this new system. As one might expect, the protocells did not arise in abundance as in the original model. Instead they formed in small, isolated colonies since this was the only means by which they could avoid the destructive effects of the new catalyst. However, because this toxic particle was also autocatalytic (like the other, constructive catalyst), its concentration rose sharply in regions confined by membrane particles since the membranes slowed its outward diffusion. Thus membranes actually created a niche for the toxic catalyst. This in turn produced a predator-prey dynamic, with clouds of the toxic particle growing near protocells and protocells being forced to grow in the opposite direction to avoid the destructive effects of the new particle. These results reveal that high-level, ecological phenomena can manifest themselves even in simple physico-chemical systems. They demonstrate that ideas of natural selection and fitness are intimately bound with the basic principle of free energy minimisation. We have also now enhanced the model further by adding a second reaction network. It is similar to, but independent of, the first and allows for two "species" of protocell. It is also possible for hybrids to form, comprised of mixtures of the membrane particles from the two reaction networks. Results from this new version are currently being gathered and analysed.

[50] zu Erbach-Schoenberg, E., Bullock, S., & Brailsford, S. (2013). A model of spatially constrained social network dynamics. Social Science Computer Review, 32(3), 373-392. [ bib | doi | pdf ]
Social networks characterise the set of relationships amongst a population of social agents. As such, their structure both constrains and is constrained by social processes such as partnership formation and the spread of information, opinions and behaviour. Models of these coevolutionary network dynamics exist, but they are generally limited to specific interaction types such as games on networks or opinion dynamics. Here we present a dynamic model of social network formation and maintenance that exhibits the characteristic features of real-world social networks such as community structure, high clustering, positive degree assortativity and short characteristic path length. While these macro-structural network properties are stable, the network micro-structure undergoes continuous change at the level of relationships between individuals. Notably, the edges are weighted, allowing for gradual change in relationship strength in contrast to more abrupt mechanisms, such as rewiring, used in other models. We show how the structural features that characterise social networks can arise as the result of constraints placed on the interactions between individuals. Here we explore the relationship between structural properties and four idealised constraints placed on social interactions: space, affinity, time, and history. We show that spatial embedding and the subsequent constraints on possible interactions are crucial in this model for the emergence of the structures characterising social networks.
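
The gradual-weight mechanism under a spatial constraint can be sketched minimally as follows (the interaction kernel, decay and gain rates are illustrative assumptions; the published model combines space with affinity, time and history constraints):

    import math, random
    from collections import defaultdict

    random.seed(0)
    N, STEPS, DECAY, GAIN, RANGE = 60, 5000, 0.002, 0.1, 0.25
    pos = [(random.random(), random.random()) for _ in range(N)]
    w = defaultdict(float)                       # symmetric tie strengths

    def near(i):                                 # spatial constraint on interaction
        return [j for j in range(N) if j != i and
                math.dist(pos[i], pos[j]) < RANGE]

    for _ in range(STEPS):
        for key in list(w):
            w[key] *= (1.0 - DECAY)              # all relationships decay slowly
        i = random.randrange(N)
        cands = near(i)
        if cands:
            j = random.choice(cands)
            w[(min(i, j), max(i, j))] += GAIN    # interaction strengthens the tie

    strong = [k for k, v in w.items() if v > 0.5]
    print(len(strong), "strong ties")

Because weights decay continuously, ties dissolve gradually rather than through abrupt rewiring, in the spirit of the model described above.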

[51] Jacyno, M., Bullock, S., Geard, N., Payne, T. R., & Luck, M. (2013). Self-organising agent communities for autonomic resource management. Adaptive Behavior, 21(1), 3-28. [ bib | doi | pdf ]
The autonomic computing paradigm addresses the operational challenges presented by increasingly complex software systems by proposing that they be composed of many autonomous components, each responsible for the run-time reconfiguration of its own dedicated hardware and software components. Consequently, regulation of the whole software system becomes an emergent property of local adaptation and learning carried out by these autonomous system elements. Designing appropriate local adaptation policies for the components of such systems remains a major challenge. This is particularly true where the system's scale and dynamism compromise the efficiency of a central executive and/or prevent components from pooling information to achieve a shared, accurate evidence base for their negotiations and decisions.

In this paper, we investigate how a self-regulatory system response may arise spontaneously from local interactions between autonomic system elements tasked with adaptively consuming/providing computational resources or services when the demand for such resources is continually changing. We demonstrate that system performance is not maximised when all system components are able to freely share information with one another. Rather, maximum efficiency is achieved when individual components have only limited knowledge of their peers. Under these conditions, the system self-organises into appropriate community structures. By maintaining information flow at the level of communities, the system is able to remain stable enough to efficiently satisfy service demand in resource-limited environments, and thus minimise any unnecessary reconfiguration whilst remaining sufficiently adaptive to be able to reconfigure when service demand changes.

[52] Pitonakova, L. & Bullock, S. (2013). Controlling ant-based construction. In P. Liò, O. Miglino, G. Nicosia, S. Nolfi, & M. Pavone (eds.), Advances in Artificial Life: Proceedings of the Twelfth European Conference on Artificial Life (ECAL 2013), (pp. 151-158). MIT Press. [ bib | doi | pdf ]
This paper investigates the dynamics of decentralised nest construction in the ant species Leptothorax tuberointerruptus, exploring the contribution of, and interaction between, a pheromone building template and a physical building template (the bodies of the ants themselves). We present a continuous-space model of ant behaviour capable of generating ant-like nest structures, the integrity and shapes of which are non-trivially determined by choice of parameters and the building template(s) employed. We go on to demonstrate that the same behavioural algorithm is capable of generating a somewhat wider range of architectural forms, and discuss its limitations and potential extensions.

[53] Zedan, C., Bullock, S., & Ianni, A. (2013). Stabilising merger waves: An agent-based networked model of market stability. In Nineteenth International Conference on Computing in Economics and Finance (CEF 2013). Event Dates: 8-12 July 2013. [ bib | pdf | www ]
The world's markets are increasingly interconnected, imposing additional challenges for both regulators and market participants. This paper considers the effect of inter-market dependencies on the spread of endogenously generated merger waves. Though merger activity can generate efficiency gains, it disrupts market competition and can lead to negative effects for consumers. The conditions under which disruptive merger activity can spread to otherwise stable markets are identified. It is also shown which inter-market dependency configurations are more likely to lead to situations in which the stability of some markets can be disrupted by merger activity in others.

[54] Zedan, C., Ianni, A., & Bullock, S. (2013). Competition and cascades in the financial markets: An agent-based model of endogenous mergers. Intelligent Systems in Accounting, Finance & Management, 20(1), 39-51. [ bib | doi | pdf ]
We present an agent-based model of endogenous merger formation in a market with turnover of market participants. We describe the dynamics of the model and identify the conditions under which market competition is sufficiently disrupted to prompt extended periods during which mergers are desirable. We also demonstrate how merger waves can be triggered by industry shocks and firm overconfidence.

2012

[55] Bullock, S., Ladley, D., & Kerby, M. (2012). Wasps, termites and waspmites: Distinguishing competence from performance in collective construction. Artificial Life, 18(3), 267-290. [ bib | doi | pdf ]
We introduce a distinction between algorithm performance and algorithm competence and argue that bio-inspired computing should characterise the former rather than the latter. To exemplify this, we explore and extend a bio-inspired algorithm for collective construction influenced by paper wasp behaviour. Despite being provably general in its competence, we demonstrate limitations on the algorithm's performance. We explain these limitations, and extend the algorithm to include pheromone-mediated behaviour typical of termites. The resulting hybrid "waspmite" algorithm shares the generality of the original wasp algorithm, but exhibits improved performance and scalability.

[56] Dearing, J. A., Bullock, S., Costanza, R., Dawson, T. P., Edwards, M. E., Poppy, G. M., & Smith, G. (2012). Navigating the perfect storm: Research strategies for social-ecological systems in a rapidly evolving world. Environmental Management, 49(4), 767-775. [ bib | doi | pdf ]
The "Perfect Storm" metaphor describes a combination of events that causes a surprising or dramatic impact. It lends an evolutionary perspective to how socialecological interactions change. Thus, we argue that an improved understanding of how social-ecological systems have evolved up to the present is necessary for the modelling, understanding and anticipation of current and future social-ecological systems. Here we consider the implications of an evolutionary perspective for designing research approaches. One desirable approach is the creation of multi-decadal records produced by integrating palaeoenvironmental, instrument and documentary sources at multiple spatial scales. We also consider the potential for improved analytical and modelling approaches by developing system dynamical, cellular and agent-based models, observing complex behaviour in social-ecological systems against which to test systems dynamical theory, and drawing better lessons from history. Alongside these is the need to find more appropriate ways to communicate complex systems, risk and uncertainty to the public and to policy-makers.

[57] Noble, J., Silverman, E., Bijak, J., Rossiter, S., Evandrou, M., Bullock, S., Vlachantoni, A., & Falkingham, J. (2012). Linked lives: The utility of an agent-based approach to modelling partnership and household formation in the context of social care. In C. Laroque, J. Himmelspach, R. Pasupathy, O. Rose, & A. M. Uhrmacher (eds.), Proceedings of the Winter Simulation Conference 2012 (WSC2012), (pp. 1-12). IEEE. Event dates: 9-12 December, 2012. [ bib | doi | pdf ]
The UK's population is aging, which presents a challenge as older people are the primary users of health and social care services. We present an agent-based model of the basic demographic processes that impinge on the supply of, and demand for, social care: namely mortality, fertility, health-status transitions, internal migration, and the formation and dissolution of partnerships and households. Agent-based modeling is used to capture the idea of 'linked lives' and thus to represent hypotheses that are impossible to express in alternative formalisms. Simulation runs suggest that the per-taxpayer cost of state-funded social care could double over the next forty years. A key benefit of the approach is that we can treat the average cost of state-funded care as an outcome variable, and examine the projected effect of different sets of assumptions about the relevant social processes.

[58] Bullock, S. (2012). An evolutionary advantage for extravagant honesty. Journal of Theoretical Biology, 292, 30-38. [ bib | doi | pdf ]
A game-theoretic model of handicap signalling over a pair of signalling channels is introduced in order to determine when one channel has an evolutionary advantage over the other. The stability conditions for honest handicap signalling are presented for a single channel and are shown to conform with the results of prior handicap signalling models. Evolutionary simulations are then used to show that, for a two-channel system in which honest signalling is possible on both channels, the channel featuring larger advertisements at equilibrium is favoured by evolution. This result helps to address a significant tension in the handicap principle literature. While the original theory was motivated by the prevalence of extravagant natural signalling, contemporary models have demonstrated that it is the cost associated with deception that stabilises honesty, and that the honest signals exhibited at equilibrium need not be extravagant at all. The current model suggests that while extravagant and wasteful signals are not required to ensure a signalling system's evolutionary stability, extravagant signalling systems may enjoy an advantage in terms of evolutionary attainability.

2011

[59] Barnett, L., Buckley, C. L., & Bullock, S. (2011). Neural complexity: A graph theoretic interpretation. Physical Review E, 83(4), 041906-[8pp]. [ bib | doi | pdf ]
One of the central challenges facing modern neuroscience is to explain the ability of the nervous system to coherently integrate information across distinct functional modules in the absence of a central executive. To this end Tononi et al. [Proc. Nat. Acad. Sci. USA 91, 5033 (1994)] proposed a measure of neural complexity that purports to capture this property based on mutual information between complementary subsets of a system. Neural complexity, so defined, is one of a family of information theoretic metrics developed to measure the balance between the segregation and integration of a system's dynamics. One key question arising for such measures involves understanding how they are influenced by network topology. Sporns et al. [Cereb. Cortex 10, 127 (2000)] employed numerical models in order to determine the dependence of neural complexity on the topological features of a network. However, a complete picture has yet to be established. While De Lucia et al. [Phys. Rev. E 71, 016114 (2005)] made the first attempts at an analytical account of this relationship, their work utilized a formulation of neural complexity that, we argue, did not reflect the intuitions of the original work. In this paper we start by describing weighted connection matrices formed by applying a random continuous weight distribution to binary adjacency matrices. This allows us to derive an approximation for neural complexity in terms of the moments of the weight distribution and elementary graph motifs. In particular we explicitly establish a dependency of neural complexity on cyclic graph motifs.
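
For reference, the neural complexity measure of Tononi et al. can be recalled in its mutual-information form (as commonly stated; the notation here follows the usual presentation rather than any one source):

    C_N(X) = \sum_{k=1}^{\lfloor n/2 \rfloor} \Big\langle \mathrm{MI}\big( X_j^k \,;\, X \setminus X_j^k \big) \Big\rangle_j

where X is a system of n units, X_j^k is the j-th subset of size k, MI denotes mutual information, and the angle brackets average over all subsets of size k; the measure is large when subsystems at every scale are informationally integrated with the rest of the system.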

[60] Bartlett, S. & Bullock, S. (2011). Coming phase to phase with surfactants. In T. Lenaerts, M. Giacobini, H. Bersini, P. Bourgine, M. Dorigo, & R. Doursat (eds.), Advances in Artificial Life: Proceedings of the Eleventh European Conference on the Synthesis and Simulation of Living Systems (ECAL 2011), (pp. 69-76). MIT Press. Event Dates: 8-12 August, 2011. [ bib | pdf | www ]
We introduce a fast cellular automata model for the simulation of surfactant dynamics based on a previous model by Ono and Ikegami (2001). Here, individual lipid-like particles undergo stochastic movement and rotation on a two-dimensional lattice in response to potential energy gradients. The particles are endowed with an internal structure that reflects their amphiphilic character. Their head groups are weakly repelled by water whereas their hydrophobic tails cannot be readily hydrated. This leads to the formation of a variety of structures when the particles are placed in solution. The model in its current form invites a myriad of potential self-organisation experiments. Heterogeneous boundary conditions, chemical interactions and an arbitrary diversity of particles can easily be modelled. Our main objective was to establish a computational platform for investigating how mechanisms of lipid homeostasis might evolve among populations of protocells.

[61] Bullock, S. (2011). Prospects for large-scale financial systems simulation. Technical Report DR 14, Government Office for Science. [This report was commissioned by the Foresight Programme of the UK's Government Office for Science. However, its findings are independent of government and do not constitute government policy.]. [ bib | pdf ]
As the 21st century unfolds, we find ourselves having to control, support, manage or otherwise cope with large-scale complex adaptive systems to an extent that is unprecedented in human history. Whether we are concerned with issues of food security, infrastructural resilience, climate change, health care, web science, security, or financial stability, we face problems that combine scale, connectivity, adaptive dynamics, and criticality. Complex systems simulation is emerging as the key scientific tool for dealing with such complex adaptive systems. Although a relatively new paradigm, it is one that has already established a track record in fields as varied as ecology (Grimm and Railsback, 2005), transport (Nagel et al., 1999), neuroscience (Markram, 2006), and ICT (Bullock and Cliff, 2004). In this report, we consider the application of simulation methodologies to financial systems, assessing the prospects for continued progress in this line of research.

[62] Erbach-Schoenberg, E. Z., McCabe, C., & Bullock, S. (2011). On the interaction of adaptive timescales on networks. In T. Lenaerts, M. Giacobini, H. Bersini, P. Bourgine, M. Dorigo, & R. Doursat (eds.), Advances in Artificial Life: Proceedings of the Eleventh European Conference on Artificial Life (ECAL 2011), (pp. 900-907). MIT Press. Event Dates: 8-12 August, 2011. [ bib | pdf | www ]
The dynamics of real-world systems often involve multiple processes that influence system state. The timescales that these processes operate on may be separated by orders of magnitude or may coincide closely. Where timescales are not separable, the way that they relate to each other will be important for understanding system dynamics. In this paper, we present a short overview of how modellers have dealt with multiple timescales and introduce a definition to formalise conditions under which timescales are separable. We investigate timescale separation in a simple model, consisting of a network of nodes on which two processes act. The first process updates the values taken by the network's nodes, tending to move a node's value towards that of its neighbours. The second process influences the topology of the network, by rewiring edges such that they tend more often to lie between similar individuals. We show that the behaviour of the system when timescales are separated is very different from the case where they are mixed. When the timescales of the two processes are mixed, the ratio of the rates of the two processes determines the system's equilibrium state. We go on to explore the impact of heterogeneity in the system's timescales, i.e., where some nodes may update their value and/or neighbourhood faster than others, demonstrating that it can have a significant impact on the equilibrium behaviour of the model.
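
The two interacting processes can be sketched as follows, with a single parameter mixing their rates (the update magnitude and the similarity-based rewiring rule are illustrative assumptions):

    import random
    import networkx as nx

    random.seed(1)
    N, STEPS, RATE_VALUE = 100, 20000, 0.7    # RATE_VALUE: prob. of a value update
    G = nx.erdos_renyi_graph(N, 0.05, seed=1)
    value = {i: random.random() for i in G}

    for _ in range(STEPS):
        i = random.choice(list(G))
        nbrs = list(G[i])
        if not nbrs:
            continue
        if random.random() < RATE_VALUE:      # process 1: move value toward a neighbour
            j = random.choice(nbrs)
            value[i] += 0.1 * (value[j] - value[i])
        else:                                 # process 2: rewire an edge to a more similar node
            j = random.choice(nbrs)
            k = random.choice(list(G))
            if k != i and not G.has_edge(i, k) and \
               abs(value[k] - value[i]) < abs(value[j] - value[i]):
                G.remove_edge(i, j)
                G.add_edge(i, k)

    spread = max(value.values()) - min(value.values())
    print("value spread after mixing timescales:", round(spread, 3))

Varying RATE_VALUE mixes the two processes' rates; pushing it towards 0 or 1 approximates the separated-timescale regimes.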

[63] Geard, N., Bullock, S., Lohaus, R., Azevedo, R. B., & Wiles, J. (2011). Developmental motifs reveal complex structure in cell lineages. Complexity, 16(4), 48-57. [ bib | doi | pdf ]
Many natural and technological systems are complex, with organisational structures that exhibit characteristic patterns, but defy concise description. One effective approach to analysing such systems is in terms of repeated topological motifs. Here, we extend the motif concept to characterise the dynamic behaviour of complex systems by introducing developmental motifs, which capture patterns of system growth. As a proof of concept, we use developmental motifs to analyse the developmental cell lineage of the nematode Caenorhabditis elegans, revealing a new perspective on its complex structure. We use a family of computational models to explore how biases arising from the dynamics of the developmental gene network, as well as spatial and temporal constraints acting on development, contribute to this complex organisation.

[64] Hebbron, T., Noble, J., & Bullock, S. (2011). All in the same boat: A "situated" model of emergent immune response. In G. Kampis, I. Karsai, & E. Szathmáry (eds.), Advances in Artificial Life: Proceedings of the Tenth European Conference on Artificial Life (ECAL 2009), (pp. 353-360). Springer. [ bib | doi | pdf ]
Immune systems provide a unique window on the evolution of individuality. Existing models of immune systems fail to consider them as situated within a biochemical context. We present a model that uses an NK landscape as an underlying metabolic substrate, represents organisms as having both internal and external structure, and provides a basis for studying the coevolution of pathogens and host immune responses. Early results from the model are discussed; we show that interaction between organisms drives a population to optima distinct from those found when adapting against an abiotic background.
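
The NK landscape that serves as the metabolic substrate can be stated compactly. Below is a standard NK construction (the specific N and K values and the random tables are illustrative; the abstract does not give the paper's settings):

    import itertools, random

    random.seed(0)
    N, K = 12, 3                              # N loci, each coupled to K others

    # each locus gets K random neighbours and a random fitness contribution table
    neighbours = [random.sample([j for j in range(N) if j != i], K) for i in range(N)]
    table = [{bits: random.random()
              for bits in itertools.product((0, 1), repeat=K + 1)}
             for _ in range(N)]

    def fitness(genome):
        """Mean of per-locus contributions, each depending on the locus and its K neighbours."""
        total = 0.0
        for i in range(N):
            bits = (genome[i],) + tuple(genome[j] for j in neighbours[i])
            total += table[i][bits]
        return total / N

    g = tuple(random.randint(0, 1) for _ in range(N))
    print(round(fitness(g), 3))

Larger K entangles more loci epistatically and makes the landscape more rugged.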

[65] Zedan, C., Bullock, S., & Ianni, A. (2011). Spatial mobility in the formation of agent-based economic networks. In Seventeenth International Conference on Computing in Economics and Finance (CEF 2011). Event dates: June 29 - July 1, 2011. [ bib | pdf | www ]
We extend the model of spatial social network formation of Johnson and Gilles (Review of Economic Design, 2000, 5, 273-299) by situating each economic agent within one of a set of discrete spatial locations and allowing agents to maximise the utility that they gain from their direct and indirect social contacts by relocating, in addition to forming or breaking social links. This enables the exploration of scenarios in which agents are able to alter the distance between themselves and other agents at some cost. Agents in this model might represent countries, firms or individuals, with the distance between a pair of agents representing geographical, social or individual differences. The network of social relationships characterises some form of self-organised persistent interaction such as trade agreements or friendship patterns. By varying the distance-dependent costs of relocation and maintaining social relationships we are able to identify conditions that promote the formation of spatial organisations and network configurations that are pairwise stable and efficient. We also examine the associated patterns in individual and aggregate agent behaviour. We find that both relative location and the order in which agents are allowed to act can drastically affect individual utility. These traits are found to be robust to random perturbations.

[66] Bryden, J., Funk, S., Geard, N., Bullock, S., & Jansen, V. (2011). Stability in flux: Community structure in dynamic networks. Journal of The Royal Society Interface, 8(60), 1031-1040. [ bib | doi | pdf ]
The structure of many biological, social and technological systems can usefully be described in terms of complex networks. Although often portrayed as fixed in time, such networks are inherently dynamic, as the edges that join nodes are cut and rewired, and nodes themselves update their states. Understanding the structure of these networks requires us to understand the dynamic processes that create, maintain and modify them. Here, we build upon existing models of coevolving networks to characterize how dynamic behaviour at the level of individual nodes generates stable aggregate behaviours. We focus particularly on the dynamics of groups of nodes formed endogenously by nodes that share similar properties (represented as node state) and demonstrate that, under certain conditions, network modularity based on state compares well with network modularity based on topology. We show that if nodes rewire their edges based on fixed node states, the network modularity reaches a stable equilibrium which we quantify analytically. Furthermore, if node state is not fixed, but can be adopted from neighbouring nodes, the distribution of group sizes reaches a dynamic equilibrium, which remains stable even as the composition and identity of the groups change. These results show that dynamic networks can maintain the stable community structure that has been observed in many social and biological systems.

2010

[67] Bartlett, S., Attard, G., & Bullock, S. (2010). Challenging the robustness of simulated protocells (abstract). In H. Fellerman, M. Dörr, M. M. Hanczyc, L. L. Laursen, S. Maurer, D. Merkle, P.-A. Monnard, K. Stoy, & S. Rasmussen (eds.), Artificial Life XII: Proceedings of the Twelfth International Conference on the Synthesis and Simulation of Living Systems. MIT Press. Event Dates: 19-23 August, 2010. [ bib | pdf | www ]
We have re-implemented and extended the 2D artificial chemistry model of Ono and Ikegami (2001) (see also Ono, 2005) to increase its behavioural diversity. In its original form, this cellular automata (CA) simulation of primitive chemical life produces self-organising, autopoietic protocells from a random initial configuration of membrane, catalyst, resource, waste, and water particles. These particles are free to diffuse across the CA lattice, rotate (in the case of membrane particles) and undergo chemical reactions. The system is updated stochastically, but the transitions are biased according to local potential energy gradients. These energies are specified by pre-defined repulsive interactions, which depend on the types of the interacting particles and their relative positions and orientations. All interactions are short-ranged (one lattice spacing or less). The chemistry of the system is autocatalytic, i.e., the catalyst particles stimulate their own replication (a catalyst resource particle and a catalyst particle react to form two catalyst particles). The catalyst particles also stimulate the production of membrane particles from their own resource particles. All particles spontaneously decay into wastes, which are recycled at a constant rate by an external energy source. In certain regions of the system's parameter space it will evolve towards a configuration possessing long-range order, expressed by the clustering of membrane particles, which eventually form closed boundaries. If a sufficient number of catalyst particles are confined within a membrane loop, membrane decay can be compensated by membrane synthesis due to the presence of the catalyst particles (figure 1(c)). These protocells are capable of maintaining their own boundaries, the defining property of an autopoietic entity. Our extension of the model consisted of introducing a new particle, B, with identical characteristics to the catalyst particle except that rather than stimulating the production of membranes, it stimulates their decay. At low initial concentrations, the new particle has little effect on the formation and proliferation of protocells (figure 1(f)). However, when initialised at the same concentration as the membrane-producing catalyst, the new particle inhibits protocell formation (figure 1(h)). Only a small number of cells are able to form and the characteristic time required for the emergence of the protocells is much longer than simulations run in the absence of the B-particles. However, once membranes form and aggregate, the new particle takes advantage of the fact that membranes are almost impermeable to catalyst particles. If a cloud of B-particles is bordered by one or more membranes, it is much less likely that they will diffuse away since they will be sheltered from the dispersive effects of diffusion. This protection enables an increase in the density of B-particles, which then leads to a stronger decay rate of local membrane particles (figure 1(i)). Thus, if a sufficiently dense cloud of B-particles arises near to a cluster of protocells, it is able to increase in concentration such that protocells and sometimes even whole clusters of protocells are destroyed or segregated. This behaviour is reminiscent of a primitive pursuit-evasion scenario with protocells being forced to grow in the direction of low B-particle density.

[68] Buckley, C. L., Bullock, S., & Barnett, L. (2010). Spatially embedded dynamics and complexity. Complexity, 16(2), 29-34. [ bib | doi | pdf ]
To gain a deeper understanding of the impact of spatial embedding on the dynamics of complex systems we employ a measure of interaction complexity developed within neuroscience using the tools of statistical information theory. We apply this measure to a set of simple network models embedded within Euclidean spaces of varying dimensionality in order to characterise the way in which the constraints imposed by low-dimensional spatial embedding contribute to the dynamics (rather than the structure) of complex systems. We demonstrate that strong spatial constraints encourage high intrinsic complexity, and discuss the implications for complex systems in general.

[69] Bullock, S. (2010). Living technology. In M. Bedau, P. G. Hansen, E. Parke, & S. Rasmussen (eds.), Living Technology: 5 Questions, (pp. 45-54). Automatic Press/VIP. [ bib | pdf | www ]
[70] Bullock, S., Barnett, L., & Di Paolo, E. A. (2010). Spatial embedding and the structure of complex networks. Complexity, 16(2), 20-28. [ bib | doi | pdf ]
We review and discuss the structural consequences of embedding a random network within a metric space such that nodes distributed in this space tend to be connected to those nearby. We find that where the spatial distribution of nodes is maximally symmetrical some of the structural properties of the resulting networks are similar to those of random non-spatial networks. However, where the distribution of nodes is inhomogeneous in some way, this ceases to be the case, with consequences for the distribution of neighbourhood sizes within the network, the correlation between the number of neighbours of connected nodes, and the way in which the largest connected component of the network grows as the density of edges is increased. We present an overview of these findings in an attempt to convey the ramifications of spatial embedding to those studying real-world complex systems.

[71] Bullock, S. & Geard, N. (2010). Spatial embedding as an enabling constraint: Introduction to a special issue of Complexity on the topic of 'spatial organisation'. Complexity, 16(2), 8-10. [ bib | doi | pdf ]
We introduce and discuss the role of spatial embedding as an enabling constraint on complex system structure and function.

[72] Bullock, S. & Kerby, M. (2010). Multi-modal swarm construction (abstract). In H. Fellerman, M. Dörr, M. M. Hanczyc, L. L. Laursen, S. Maurer, D. Merkle, P.-A. Monnard, K. Stoy, & S. Rasmussen (eds.), Artificial Life XII: Twelfth International Conference on the Synthesis and Simulation of Living Systems. MIT Press. Event Dates: 19-23 August, 2010. [ bib | pdf | www ]
Swarm construction involves a population of autonomous agents collaboratively organising material into useful persistent structures without recourse to central co-ordination or control. This approach to fabrication has significant potential within nanoscale domains, where explicit centralised control of building activity is prohibitive (e.g., Martel and Mohammadi, 2010). The ultimate value of swarm construction will be demonstrated in the real world with physical agents (or perhaps software agents working with real-world digital media). However, our interest is in exploring different possibilities for decentralised control of swarm construction in abstract simulated environments populated by idealised simplistic agents. The goal of such simulations is not to demonstrate solutions to specific realistic construction challenges, but to capture elements of the fundamental logic of decentralised control. Here, we explore a population of simple simulated agents that combine information from two sensory modalities (one proximal and one distal) in order to overcome some of the limitations of two previously explored uni-modal schemes. Like the artificial paper wasps of Bonabeau et al. (2000), the agents simulated here are able to sense the configuration of building material in their immediate environment and use this proximal sensory information to trigger specific building activity via a set of microrules. In addition, like the simulated termites of Ladley and Bullock (2004, 2005), they are also able to sense simulated diffusing artificial pheromones deposited during building and movement, and use this distal sensory information to influence movement and release or inhibit building activity. Since both the proximal configuration of building material and the distal distribution of pheromone intensities in an agent's vicinity are themselves the consequence of prior agent building activity, the scheme is stigmergic: the environmental trace of agent activity guides subsequent agent behaviour. In principle, this swarm construction scheme is 'universal' in that it is capable (given enough distinct types of building material) of generating any configuration of contiguous building material, a property inherited from the scheme of Bonabeau et al. (2000). However, proofs of universality tell us nothing about what a scheme will in fact be useful for in practice (Bullock, 2006). Consequently, we concentrate here on exploring and describing the scheme's generic behaviour: what classes of structure are readily built and why; conversely, what kinds of structure require a prohibitively complex set of building materials, pheromones, proximal microrules, etc. Here, we are able to show that, unlike Ladley and Bullock's (2004, 2005) termites, the addition of proximal microrules enables agents to construct both simple conic and rectilinear structures such as domes, arches, pillars, cubes and frames (see figure 1), and that they are able to combine these structures relatively easily (see figure 2). Moreover, we are also able to show that, unlike Bonabeau et al.'s (2000) wasps, the addition of distal pheromone-mediated behaviour enables agents to construct architectures with long-range structure without recourse to a prohibitive number of block types (e.g., Howsman et al., 2004), and that these structures can be easily scaled in size through manipulation of pheromone parameters.
However, complex structures still present challenges in terms of managing interactions between agents obeying different rule-sets, and timing issues related to the establishment of pheromone templates before the initiation of pheromone-mediated building activity.

[73] Geard, N., Bryden, J., Funk, S., Jansen, V., & Bullock, S. (2010). Stability in flux: group dynamics in evolving networks (abstract). In H. Fellerman, M. Dörr, M. M. Hanczyc, L. L. Laursen, S. Maurer, D. Merkle, P.-A. Monnard, K. Stoy, & S. Rasmussen (eds.), Artificial Life XII: Proceedings of the Twelfth International Conference on the Synthesis and Simulation of Living Systems. MIT Press. Event Dates: 19-23 August, 2010. [ bib | pdf | www ]
From Facebook groups and online gaming clans, to social movements and terrorist cells, groups of individuals aligned by interest, values or background are of increasing interest to social network researchers. In particular, understanding the structural and dynamic factors that influence the evolution of these groups remains an open challenge. Why do some groups persist and succeed, while others fail to do so? Three features characterise real social networks. They are inherently dynamic: explaining the structure of social networks requires us to understand how this structure is created, modified and maintained. They are co-evolutionary, exhibiting a reflexive relationship between topology and state. For example, individuals often interact preferentially with others who are similar to themselves, thus state affects topology; at the same time, neighbouring individuals tend to influence one another and hence become more similar, thus topology affects state. Finally, interactions between individuals are not distributed uniformly across a network: rather, we can detect community structure, in which subsets of individuals are more densely linked to each other than to the rest of the population. Analysis of telephone and collaboration data by Palla and colleagues (2007, Nature 446, 664) has demonstrated some of the ways in which social groups evolve over time, but there is more to be done in understanding the multi-level relationship between individual and group dynamics. Here, we address two questions: How do stable macro-level structures and behaviours emerge and persist as a consequence of simple micro-level processes? How can we characterise the dynamics of meso-level structures such as groups and communities? We introduce a simple model of a co-evolving network in which the state of an individual represents the group to which it is currently (and exclusively) affiliated. Four processes govern network evolution: individuals can create new groups, influence neighbours to switch affiliation to their group, replace an out-group edge with an in-group edge, or replace edges at random. Using this model, we explore the parameter space defined by the relative rates of each process, revealing a region in which networks exhibit connected community structure reminiscent of observed social networks. We demonstrate how macro-level properties of the network (e.g., state and degree distribution, modularity, clustering coefficient and path length) stabilise, while underlying micro- and meso-level properties remain dynamic; that is, individuals continue to update their neighbours and states, and groups are born, grow, shrink and die. Finally, we report findings on the behaviour of groups: at equilibrium, there is a stable rank-distribution of group sizes; however, the identities of the groups occupying each rank change over time. Furthermore, the distribution of group lifespans is bimodal, reflecting two possible group trajectories: after being introduced into a population, a group either thrives or struggles. Interestingly, the probability of these two events appears to be almost entirely stochastic, and is independent of factors that one might expect to play a role, such as the location of group foundation. While our model is undoubtedly simple, we believe it provides a useful baseline for further studies, and a helpful tool for understanding the multi-level dynamic interactions that underlie the complex behaviour of more complicated models.
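
The four micro-level processes lend themselves to a compact sketch (the process probabilities, network size and initial affiliation scheme are illustrative assumptions):

    import random
    import networkx as nx

    random.seed(2)
    N, STEPS = 100, 30000
    P_NEW, P_INFLUENCE, P_INGROUP = 0.01, 0.5, 0.3   # remaining prob.: random rewiring
    G = nx.erdos_renyi_graph(N, 0.06, seed=2)
    group = {i: i % 10 for i in G}                   # initial affiliations
    next_group = 10

    for _ in range(STEPS):
        i = random.choice(list(G))
        nbrs = list(G[i])
        r = random.random()
        if r < P_NEW:                                # found a new group
            group[i] = next_group; next_group += 1
        elif r < P_NEW + P_INFLUENCE and nbrs:       # recruit a neighbour
            group[random.choice(nbrs)] = group[i]
        elif r < P_NEW + P_INFLUENCE + P_INGROUP and nbrs:
            out = [j for j in nbrs if group[j] != group[i]]
            same = [j for j in G if group[j] == group[i]
                    and j != i and not G.has_edge(i, j)]
            if out and same:                         # swap an out-group tie for an in-group tie
                G.remove_edge(i, random.choice(out))
                G.add_edge(i, random.choice(same))
        elif nbrs:                                   # random rewiring
            k = random.choice(list(G))
            if k != i and not G.has_edge(i, k):
                G.remove_edge(i, random.choice(nbrs))
                G.add_edge(i, k)

    sizes = sorted(list(group.values()).count(g) for g in set(group.values()))
    print("surviving groups:", len(sizes), "largest:", sizes[-1])

Tracking group sizes over repeated runs gives a feel for the birth, growth, shrinkage and death dynamics described above.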

[74] Geard, N. & Bullock, S. (2010). Competition and the dynamics of group affiliation. Advances in Complex Systems, 13(4), 501-517. [ bib | doi | pdf ]
How can we understand the interaction between the social network topology of a population and the patterns of group affiliation in that population? Each aspect influences the other: social networks provide the conduits via which groups recruit new members, and groups provide the context in which new social ties are formed. From an organisational ecology perspective, groups can be considered to compete with one another for the time and energy of their members. Such competition is likely to have an impact on the way in which social structure and group affiliation co-evolve. While many social simulation models exhibit group formation as a part of their behaviour (e.g., opinion clusters or converged cultures), models that explicitly focus on group affiliation are rare. We describe and explore the behaviour of a model in which, distinct from most current models, individual nodes can belong to multiple groups simultaneously. By varying the capacity of individuals to belong to groups, and the costs associated with group membership, we explore the effect of different levels of competition on population structure and group dynamics.

[75] Watson, R., Mills, R., Buckley, C. L., Penn, A., Davies, A., Noble, J., & Bullock, S. (2010). Adaptation without natural selection. In H. Fellerman, M. Dörr, M. M. Hanczyc, L. L. Laursen, S. Maurer, D. Merkle, P.-A. Monnard, K. Stoy, & S. Rasmussen (eds.), Artificial Life XII: Proceedings of the Twelfth International Conference on the Synthesis and Simulation of Living Systems, (pp. 80-81). MIT Press. Event Dates: 19-23 August, 2010. [ bib | pdf | www ]
Document is itself an extended abstract.

2009

[76] Bullock, S. (2009). Can the map be the territory? Visualization and realisation in artificial life (abstract). Presented at the Tenth European Conference on Artificial Life (ECAL 2009). [ bib | pdf | www ]
In the continuing absence of a formal, consensual, definition of what it is to be a living system, artificial life has learned to make do with a mantra of "I can't define it, but I know it when I see it". An under-appreciated consequence of this position is the attendant epistemological load placed on seeing and therefore visualization. This paper considers the role of visualization within science and artificial life, specifically, reviewing its multiple distinct uses and exploring the possibility that it might sometimes play a role that is unique to the field: visualizations as realisations. Scientific visualizations are typically taken to re-present the target phenomena of interest: a graph or chart presents data on some target system; an image from a confocal microscope is an image of some target system. Like any representation, they are typically understood to stand in some rather impoverished relation to the target system, representing it just so, capturing only a fragment of its reality: attenuating, idealising, clipping, focusing, highlighting, or otherwise differing from the real thing in itself. By contrast, when we view Craig Reynolds' Boids or Karl Sims' Blockies, we are not expected to consider the images as partial representations of some prior thing (the code, the algorithm?). They are the thing. Indeed, where typically the image offers only a glimpse of the 'real' system, here the relationship is reversed. The underlying code is impenetrable, offering only a glimpse of the flocking that it gives rise to. However, is it true to claim that the flock of Boids is simply not present in the lines of code in the same way that it is present in the image sequence? Surely there may be creatures for which viewing the image sequence, like studying the code, fails to produce the perception of a flock. Where is the locus of the 'emergence' of flocking, or life, or some other complex organisational phenomenon? In the code? On the screen? Within an observer's mind? What has been termed the 'synthetic methodology' offers us the promise of a new route to understanding organisational phenomena and answering systems questions through construction rather than reduction. However, where we attempt to synthesize truly new phenomena (e.g., 'life-as-it-could-be') without the safety net of agreed formal category definitions, we must run the risk of relying on our (possibly raw, unanalysed) visuocognitive apparatus to guide us, and will consequently be subject to its biases and idiosyncrasies. From this perspective, Ikegami and Hanczyc's oil droplets, Grey Walter's Elsie and Elmer, Langton's loops, and Ray's Tierran replicators must be regarded as denizens of the realm of ideas as much as (or perhaps more than) the realm of physical reality. They are constructs in the psychological sense as much as the engineering sense, and just as models teach us about the world only indirectly by shedding light on our ideas about the world, these artificial life systems may only change what we know of life by changing the way that we see it.

[77] Barnett, L., Buckley, C. L., & Bullock, S. (2009). Neural complexity and structural connectivity. Physical Review E, 79(5), 051914-[12pp]. [ bib | pdf ]
Tononi et al. [Proc. Natl. Acad. Sci. U.S.A. 91, 5033 (1994)] proposed a measure of neural complexity based on mutual information between complementary subsystems of a given neural network, which has attracted much interest in the neuroscience community and beyond. We develop an approximation of the measure for a popular Gaussian model which, applied to a continuous-time process, elucidates the relationship between the complexity of a neural system and its structural connectivity. Moreover, the approximation is accurate for weakly coupled systems and computationally cheap, scaling polynomially with system size, in contrast to the full complexity measure, which scales exponentially. We also discuss connectivity normalization and resolve some issues stemming from an ambiguity in the original Gaussian model.
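For orientation, the measure in question averages the mutual information between subsystems of each size k and their complements; under the Gaussian model each entropy reduces to the log-determinant of a covariance submatrix. In a standard notation (ours, not necessarily the paper's):

    C_N(X) = \sum_{k=1}^{\lfloor n/2 \rfloor} \bigl\langle MI( X_j^k ; X \setminus X_j^k ) \bigr\rangle_j ,
    MI(A; B) = H(A) + H(B) - H(A \cup B) ,
    H(A) = \tfrac{1}{2} \ln\bigl[ (2 \pi e)^{|A|} \det \Sigma_A \bigr]   (Gaussian case),

where X_j^k denotes the j-th size-k subset of the n units, the angle brackets average over all such subsets, and \Sigma_A is the covariance matrix restricted to the units in A. The sum over all subset sizes is what makes the exact measure exponential in n, motivating the polynomial-time approximation developed here.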

[78] Bullock, S. (2009). In defence of the abstracted animat. Adaptive Behavior, 17(4), 303-305. [ bib | pdf ]
[79] Bullock, S. & Buckley, C. L. (2009). Embracing the tyranny of distance: Space as an enabling constraint. Technoetic Arts, 7(2), 141-152. [ bib | pdf ]
Architectural design is typically limited by the constraints imposed by physical space. If and when opportunities to attenuate or extinguish these limits arise, should they be seized? Here it is argued that the limiting influence of spatial embedding should not be regarded as a frustrating "tyranny" to be escaped wherever possible, but as a welcome enabling constraint to be leveraged. Examples from the natural world are presented, and an appeal is made to some recent results on complex systems and measures of interaction complexity.

[80] Geard, N. & Bullock, S. (2009). Homophily and competition: a model of group affiliation. In B. Edmonds & N. Gilbert (eds.), The Sixth Conference of the European Social Simulation Association, (p. 7). The European Social Simulation Association. [ bib | pdf ]
How can we understand the interaction between the social network topology of a population and the patterns of group affiliation in that population? Each aspect influences the other: social networks provide the conduits via which groups recruit new members, and groups provide the context in which new social ties are formed. While many social simulation models exhibit group formation as a part of their behaviour (e.g., opinion clusters or converged cultures), models that explicitly focus on group affiliation are rare. We introduce one such model, based upon the ecological theory of group affiliation, and use it to explore the effect of two system properties (bias toward the creation of homophilous ties and competition between groups) on the dynamics of social evolution and group formation.

[81] Jacyno, M., Bullock, S., Luck, M., & Payne, T. R. (2009). Emergent service provisioning and demand estimation through self-organizing agent communities. In C. Sierra, C. Castelfranchi, K. S. Decker, & J. S. Sichman (eds.), Proceedings of the Eighth International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2009), (pp. 481-488). ACM. Event dates: May 10-15, 2009. [ bib | pdf ]
A major challenge within open markets is the ability to satisfy service demand with an adequate supply of service providers, especially when such demand may be volatile due to changing requirements, or fluctuations in the availability of services. Ideally, the supply and demand of services should be balanced; however, when consumer demand changes over time, and providers can independently choose which services they provide, a coordination problem known as 'herding' can arise, bringing instability to the market. This behavior can emerge when consumers share similar preferences for the same providers, and thus compete for the same resources. Likewise, providers which share estimates of fluctuating service demand may respond in unison, withdrawing some services to introduce others, causing the available supply to oscillate around some ideal equilibrium. One approach to avoiding this unstable behavior is to limit the flow of information across the agent community, such that agents possess an incomplete and subjective view of the local service availability and demand. By drawing inspiration from information flow within biological systems, we propose a model of an adaptive service-offering mechanism, in which providers adapt their choice of services that they offer to consumers, based on perceived demand. By varying the volume of information shared by agents, we demonstrate that a co-adaptive equilibrium can be achieved, thus avoiding the herding problem. As the knowledge that agents can possess is limited, agents self-organise into community structures that support locally shared information. We demonstrate that such a model is capable of reducing instability in service demand and thus increasing utility (based on successful service provision) by up to 59%, when compared to the use of globally available information.

2008

[82] Buckley, C. L. & Bullock, S. (2008). Sensitivity and stability: A signal propagation sweet spot in a sheet of recurrent centre crossing neurons. In N. Crook & T. olde Scheper (eds.), Proceedings of the Seventh International Workshop on Information Processing in Cells and Tissues (IPCAT 2007). Tribun EU. Event Dates: 29th - 31st August 2007. [ bib | pdf ]
In this paper we demonstrate that signal propagation across a laminar sheet of recurrent neurons is maximised when two conditions are met. First, neurons must be in the so-called centre crossing configuration. Second, the network's topology and weights must be such that the network comprises strongly coupled nodes, yet lies within the weakly coupled regime. We develop tools from linear stability analysis with which to describe this regime, and use them to examine the apparent tension between the sensitivity and instability of centre crossing networks.

[83] Buckley, C. L. & Bullock, S. (2008). Sensitivity and stability: A signal propagation sweet spot in a sheet of recurrent centre crossing neurons. Biosystems, 94(1-2), 2-9. [ bib | pdf ]
In this paper we demonstrate that signal propagation across a laminar sheet of recurrent neurons is maximised when two conditions are met. First, neurons must be in the so-called centre crossing configuration. Second, the network's topology and weights must be such that the network comprises strongly coupled nodes, yet lies within the weakly coupled regime. We develop tools from linear stability analysis with which to describe this regime, and use them to examine the apparent tension between the sensitivity and instability of centre crossing networks.
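The 'centre crossing' condition invoked in both versions of this abstract has a compact statement for the usual continuous-time recurrent neural network (CTRNN) equations; the following is the standard formulation, given here for orientation rather than as the paper's exact setup:

    \tau_i \dot{y}_i = -y_i + \sum_j w_{ji} \, \sigma(y_j + \theta_j) + I_i , \qquad \sigma(x) = 1 / (1 + e^{-x}) ,

with the centre-crossing configuration setting each bias to the negated midpoint of the synaptic input its neuron can receive, \theta_i^* = -(1/2) \sum_j w_{ji}, so that every neuron's activation function is centred over its accessible input range, placing the network in a dynamically rich and sensitive region of parameter space.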

[84] Buckley, C. L., Fine, P., Bullock, S., & Di Paolo, E. (2008). Monostable controllers for adaptive behavior. In M. Asada, J. C. T. Hallam, J.-A. Meyer, & J. Tani (eds.), From Animals to Animats 10: Proceedings of the Tenth International Conference on Simulation of Adaptive Behavior (SAB 2008), (pp. 103-112). Springer. [ bib | pdf ]
Recent artificial neural networks for machine learning have exploited transient dynamics around globally stable attractors, inspired by the properties of cortical microcolumns. Here we explore whether similarly constrained neural network controllers can be exploited for embodied, situated adaptive behaviour. We demonstrate that it is possible to evolve globally stable neurocontrollers containing a single basin of attraction, which nevertheless sustain multiple modes of behaviour. This is achieved by exploiting interaction between environmental input and transient dynamics. We present results that suggest that this globally stable regime may constitute an evolvable and dynamically rich subset of recurrent neural network configurations, especially in larger networks. We discuss the issue of scalability and the possibility that there may be alternative adaptive behaviour tasks that are more 'attractor hungry'.

[85] Bullock, S. (2008). Charles Babbage and the emergence of automated reason. In P. Husbands, O. Holland, & M. Wheeler (eds.), The Mechanical Mind in History, (pp. 19-39). MIT Press, Cambridge, MA. [ bib | pdf ]
[86] Bullock, S. (2008). Do roboticists dream of intelligent sheep? A book review of David McFarland's "Guilty Robots, Happy Dogs: The Question of Alien Minds". Times Higher Education Supplement. [ bib | pdf ]
It's a decent bet that right now there are more guilty "robots" roaming the internet on the lookout for your unguarded e-mail address than there will ever be real robot canines patrolling our homes and gardens. But this book is not primarily driven by the actualities of current or future robots, being more closely aligned with modern science fiction's take on robots as philosophical devices. Just as a confused amnesiac in an art-house movie is a perfect vehicle for extended meditations on the nature of identity, imagined moody androids provide seductive raw material for a good muse on our origins, purpose and morality. Dutifully, David McFarland opens and closes his new book with the imagined moral panic surrounding a humanoid traffic cop. Could one ever really be capable of replacing a person? Could one ever really be culpable, in place of its human designer, if it were to make some fatal error? The scenario is a brief distraction, though, because the book's central concern is not people but animals and the robots that might resemble them: think mechanical sniffer dogs, robot pack mules, carrier cyber-pigeons and maybe K9. After refocusing on this menagerie, McFarland sets sail for the deep waters surrounding an old question: what would it take for such a machine or animal to have a mind, one that would presumably be alien to our own? By approaching the problem from a bio-robotic direction, his hope is to navigate a route that avoids some of the choppier confusions. McFarland built a career as an Oxbridge roboticist and biologist, interpreting animals as if they were machines and machines as if they were animals. At times, it seems that he is maintaining the distinction only as a courtesy to the reader, having long since convinced himself that you might as well lump them together and proceed accordingly. He's happier equipping a robot guard dog with skunk-inspired stink-squirters than Taser guns, but it is this readiness to reach for an example from the world of animals rather than people that keeps the book on course. Careful use of research on crafty Caledonian crows, doggy dreams, self-sufficient slug-bots and vomiting pigeons allows him to steer clear of questions of (human) conscious experience until the later chapters. McFarland is on home territory dishing up a patented blend of behaviourism (infamously discredited) and economics (infamously dismal). By salvaging a surprisingly defensible hybrid of the two, he is able to use cost-benefit thinking to explain the critical balance of decision-making that a successful autonomous robot or animal must be capable of in order to continually "do the right thing". But before squaring up to McFarland's main event, the book has first to take in a daunting litany of philosophical positions, and while he trawls through them diligently, you get the feeling there is little joy in clearing the ground. Rather, he's fishing around in the science and philosophy of rationality and subjectivity (and tossing most of his catch straight back) in order to demonstrate that what prevents us from readily acknowledging the potential for fully fledged robot minds is just an "alienist" chauvinism that will dissolve as we come to regard robots (and some animals) as "us" rather than "them", despite their "alien lifestyles". 
This abrupt sociological turn is delayed until the final sentences, leaving the reader to reflect unaccompanied on just how alien a "lifestyle" would need to be before we begin to feel that there might not actually be "something that it is like" to be that alien something or someone, and they begin to feel the same about us.

[87] Bullock, S. & Silverman, E. (2008). Levins and the legitimacy of artificial worlds. In N. David (ed.), Third Workshop on Epistemological Perspectives on Simulation. Event Dates: October 2-3. [ bib | pdf ]
For practitioners across a growing number of academic disciplines there is a strong sense that simulation models of complex real-world systems provide something that differs fundamentally from that which is offered by mathematical models of the same phenomena. The precise nature of this difference has been difficult to isolate and explain, but, occasionally, it is cashed out in terms of an ability to use simulations to perform 'experiments', e.g., [9]. The notion here is that empirical data derived from costly experiments in the real world might usefully be augmented with data harvested from the right kind of simulation models. We will reserve the term 'artificial worlds' for such simulations. In this paper, rather than tackle the problems inherent in this type of claim head on, we will approach them obliquely by asking: what is the root of the attraction of constructing and exploring artificial worlds? By combining insights drawn from the work of Levins, Braitenberg, and Clark, we arrive at an answer that at least partially legitimises artificial worlds by allocating them a useful scientific role, without having to assign the status of empirical enquiry to their exploration.

[88] Geard, N. & Bullock, S. (2008). Group formation and social evolution: A computational model. In S. Bullock, J. Noble, R. Watson, & M. Bedau (eds.), Artificial Life XI: Proceedings of the Eleventh International Conference on the Simulation and Synthesis of Living Systems, (pp. 197-203). MIT Press. [ bib | pdf ]
The tendency to organise into groups is a fundamental property of human nature. Despite this, many models of social network evolution consider the emergence of community structure as a side effect of other processes, rather than as a mechanism driving social evolution. We present a model of social network evolution in which the group formation process forms the basis of the rewiring mechanism. Exploring the behaviour of our model, we find that rewiring on the basis of group membership reorganises the network structure in a way that, while initially facilitating the growth of groups, ultimately inhibits it.

[89] Hebbron, T., Bullock, S., & Cliff, D. (2008). NKalpha: Non-uniform epistatic interactions in an extended NK model. In S. Bullock, J. Noble, R. Watson, & M. A. Bedau (eds.), Artificial Life XI: Proceedings of the Eleventh International Conference on the Simulation and Synthesis of Living Systems, (pp. 234-241). MIT Press, Cambridge, MA. [ bib | pdf ]
Kauffman's seminal NK model was introduced to relate the properties of fitness landscapes to the extent and nature of epistasis between genes. The original model considered genomes in which the fitness contribution of each of N genes was influenced by the value of K other genes, located either at random or at the immediately neighbouring loci on the genome. Both schemes ensure that (on average) every gene is as influential as any other. More recently, the epistatic connectivity between genes in natural genomes has begun to be mapped. The topologies of these genetic networks are neither random nor regular, but exhibit interesting structural properties. The model presented here extends the NK model to consider epistatic network topologies derived from a preferential attachment scheme which tends to ensure that some genes are more influential than others. We explore the consequences of this topology for the properties of the associated fitness landscapes.
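As a concrete illustration of the construction involved, the Python sketch below builds classic NK dependencies alongside a preferential-attachment variant in which some loci become disproportionately influential. This is our own toy rendering under stated assumptions (binary loci, uniform random lookup tables, and our choice of attachment scheme); it is not the paper's code.

    import random

    def epistasis_random(N, K, rng):
        """Classic NK: locus i depends on itself plus K others chosen uniformly."""
        return [[i] + rng.sample([j for j in range(N) if j != i], K)
                for i in range(N)]

    def epistasis_preferential(N, K, rng):
        """Preferential-attachment variant (illustrative): loci that already
        influence many others are more likely to be picked again, so a few
        genes end up far more influential than the rest."""
        influence = [1] * N
        deps = []
        for i in range(N):
            chosen = []
            while len(chosen) < K:
                j = rng.choices(range(N), weights=influence)[0]
                if j != i and j not in chosen:
                    chosen.append(j)
                    influence[j] += 1
            deps.append([i] + chosen)
        return deps

    def make_tables(deps, rng):
        """One table per locus: a uniform random contribution in [0, 1) for
        every configuration of its K+1 binary inputs."""
        return [{tuple((bits >> b) & 1 for b in range(len(dep))): rng.random()
                 for bits in range(2 ** len(dep))} for dep in deps]

    def nk_fitness(genome, deps, tables):
        # mean of the per-locus contributions, looked up by input configuration
        return sum(t[tuple(genome[j] for j in dep)]
                   for dep, t in zip(deps, tables)) / len(genome)

    rng = random.Random(0)
    deps = epistasis_preferential(16, 3, rng)
    tables = make_tables(deps, rng)
    genome = [rng.randint(0, 1) for _ in range(16)]
    print(nk_fitness(genome, deps, tables))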

[90] Jacyno, M. & Bullock, S. (2008). Energy, entropy and work in computational ecosystems: A thermodynamic account. In S. Bullock, J. Noble, R. Watson, & M. A. Bedau (eds.), Artificial Life XI: Proceedings of the Eleventh International Conference on the Simulation and Synthesis of Living Systems, (pp. 274-281). MIT Press. [ bib | pdf ]
Recently, computer scientists have begun to build computational ecosystems in which multiple autonomous agents interact locally to achieve globally efficient organised behaviour. Here we present a thermodynamic interpretation of these systems. We highlight the difference between the regular use of terms such as energy and work, and their use within a thermodynamic framework. We explore the way in which this perspective might influence the design and management of such systems.

[91] Jacyno, M., Bullock, S., Payne, T. R., Geard, N., & Luck, M. (2008). Autonomic resource management through self-organising agent communities. In S. Brueckner, P. Robertson, & U. Bellur (eds.), Proceedings of the Second IEEE International Conference on Self-Adaptive and Self-Organizing Systems (SASO 2008). IEEE. Event Dates: October 20-24, 2008. [ bib | pdf ]
In this paper, we analyse how autonomic resource management can be achieved within a system that lacks centralized information about current system demand and the state of system elements. Rather, regulation of service provision is achieved through local co-adaptation between two groups of system elements, one tasked to autonomously decide which services to offer and the other to consume them in a manner that minimises resource contention. We explore how varying the amount of information stored by agents influences system performance, and demonstrate that when the information capacity of individual agents is limited they self-organise into communities that facilitate the local exchange of relevant information. Such systems are stable enough to allocate resources efficiently and to minimise unnecessary reconfiguration, but also adaptive enough to reconfigure when resource demand changes.

[92] Ladley, D. & Bullock, S. (2008). The effects of local information and trading opportunities in a network constrained economy (abstract). In S. Bullock, J. Noble, R. Watson, & M. A. Bedau (eds.), Artificial Life XI: Proceedings of the Eleventh International Conference on the Simulation and Synthesis of Living Systems, (p. 781). MIT Press, Cambridge, MA. [ bib | pdf ]
Work within the field of artificial life has a history of exploring the ways in which locally constrained interactions between the elements of a system can give rise to organised behaviour at the level of the ensemble. Here we study the effect of constraining co-operative, competitive and communicative interactions within a market by embedding it within a network. We are particularly interested in how these different kinds of interaction are influenced by the structure of the market network. The paper aims to examine the effect of limited trading opportunities and information availability on the behaviour of individuals and of the market as a whole. It examines how a trader's ability to make profit is influenced by their location within a trade network and how trader strategy must be adapted to cope with this constraint. To this end we employ an agent-based model of trader interaction in which the actions of each trader are governed by individual behavioural rules. Traders are situated on the nodes of a network and interact with potential trading partners through the ties. The networks considered in this work are constructed via preferential attachment schemes resulting in networks both with and without positive assortedness. The behavioural rules of the traders are optimised for their respective locations within the networks through the use of a hill-climbing algorithm. It is demonstrated that a trader's ability to profit and to identify the equilibrium price is positively correlated with its degree of connectivity within the market. Better connected traders are able to exploit their market position at the expense of other market participants. When the effects of constraining trade and information are separated, it is demonstrated that when traders differ in their number of potential trading partners, well-connected traders are found to benefit from aggressive trading behaviour. A higher number of potential trading partners allows these traders to demand better terms, as there is a higher chance of another trader being willing to trade with them. Where information propagation is constrained by the topology of the trade network, connectedness affects the nature of the strategies employed. Better connected traders attempt to learn more quickly, taking in as much information as possible at the start of the market in order to exploit possible trading opportunities. Less well connected traders learn more slowly and average over time to avoid being exploited by better connected individuals. We also demonstrate that traders are unable to exploit second-order information and trade effects connected to the network. We show that it is not possible for traders to modulate their price, or the way in which they weight information, based on the connectedness of the potential trading partner/information source, in order to make higher profits. When this situation is permitted, all traders adopt strategies such that none benefit from the additional abilities.

[93] Ladley, D. & Bullock, S. (2008). The strategic exploitation of limited information and opportunity in networked markets. Computational Economics, 32(3), 295-315. [ bib | pdf ]
This paper studies the effect of constraining interactions within a market. A model is analysed in which boundedly rational agents trade with and gather information from their neighbours within a trade network. It is demonstrated that a trader's ability to profit and to identify the equilibrium price is positively correlated with its degree of connectivity within the market. Where traders differ in their number of potential trading partners, well-connected traders are found to benefit from aggressive trading behaviour. Where information propagation is constrained by the topology of the trade network, connectedness affects the nature of the strategies employed.

2007

[94] Barnett, L., Di Paolo, E., & Bullock, S. (2007). Spatially embedded random networks. Physical Review E, 76(5), 056115. [ bib | doi | pdf ]
Many real-world networks analyzed in modern network theory have a natural spatial element; e.g., the Internet, social networks, neural networks, etc. Yet, aside from a comparatively small number of somewhat specialized and domain-specific studies, the spatial element is mostly ignored and, in particular, its relation to network structure disregarded. In this paper we introduce a model framework to analyze the mediation of network structure by spatial embedding; specifically, we model connectivity as dependent on the distance between network nodes. Our spatially embedded random networks construction is not primarily intended as an accurate model of any specific class of real-world networks, but rather to gain intuition for the effects of spatial embedding on network structure; nevertheless we are able to demonstrate, in a quite general setting, some constraints of spatial embedding on connectivity such as the effects of spatial symmetry, conditions for scale-free degree distributions and the existence of small-world spatial networks. We also derive some standard structural statistics for spatially embedded networks and illustrate the application of our model framework with concrete examples.
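A minimal Python sketch of the framework's essential move, which is to make link probability a function of inter-node distance alone. The exponential kernel here is our illustrative assumption, not the paper's specific choice:

    import math
    import random

    def sern(n, kernel, rng):
        """Spatially embedded random network: scatter n nodes on the unit
        square, then link each pair independently with a probability that
        depends only on their separation."""
        pos = [(rng.random(), rng.random()) for _ in range(n)]
        edges = [(i, j) for i in range(n) for j in range(i + 1, n)
                 if rng.random() < kernel(math.dist(pos[i], pos[j]))]
        return pos, edges

    # connection probability decays exponentially with distance;
    # lam sets the characteristic length scale of the embedding
    rng = random.Random(1)
    lam = 0.1
    pos, edges = sern(200, lambda d: math.exp(-d / lam), rng)
    print(len(edges), "edges among 200 nodes")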

[95] Buckley, C. L. & Bullock, S. (2007). Spatial embedding and complexity: The small-world is not enough. In F. A. e Costa, L. M. Rocha, E. Costa, I. Harvey, & A. Coutinho (eds.), Advances in Artificial Life: Proceedings of the Ninth European Conference on Artificial Life (ECAL 2007), (pp. 986-995). Springer, Berlin. Event Dates: September 10-14, 2007. [ bib | pdf ]
The 'order for free' exhibited by some classes of system has been exploited by natural selection in order to build systems capable of exhibiting complex behaviour. Here we explore the impact of one ordering constraint, spatial embedding, on the dynamical complexity of networks. We apply a measure of functional complexity derived from information theory to a set of spatially embedded network models in order to make some preliminary characterisations of the contribution of space to the dynamics (rather than mere structure) of complex systems. Although our measure of dynamical complexity hinges on a balance between functional integration and segregation, which seems related to an understanding of the small-world property, we demonstrate that small-world structures alone are not enough to induce complexity. However, purely spatial constraints can produce systems of high intrinsic complexity by introducing multiple scales of organisation within a network.

[96] Clark, B. & Bullock, S. (2007). Shedding light on plant competition: Modelling the influence of plant morphology on light capture (and vice versa). Journal of Theoretical Biology, 244(2), 208-217. [ bib | pdf ]
A plant's morphology is both strongly influenced by local light availability and, simultaneously, strongly influences this local light availability. This reciprocal relationship is complex, but lies at the heart of understanding plant growth and competition. Here we develop a sub-individual-based simulation model, cast at the level of interacting plant components. The model explicitly simulates growth, development and competition for light at the level of leaves, branches, etc., located in 3-D space. In this way, we are able to explore the manner in which the low-level processes governing plant growth and development give rise to individual-, cohort-, and community-level phenomena. In particular, we show that individual-level tradeoffs between growing up and growing out arise naturally in the model, and robustly give rise to cohort-level phenomena such as self-thinning, and community processes such as the effect of ecological disturbance on the maintenance of biodiversity. We conclude with a note on our methodology and how to interpret the results of simulation models such as this one.

[97] Geard, N. & Bullock, S. (2007). Milieu and function: Toward a multilayer framework for understanding social networks. In M. Waibel, S. Mitri, J. Hubert, & D. Tarapore (eds.), Workshop Proceedings of the Ninth European Conference on Artificial Life (ECAL 2007): The Emergence of Social Behaviour, (pp. 1-11). Springer. Event date: 10th September, 2007. [ bib | pdf ]
Social interactions between individuals do not occur in a void. Nor do they take place on a pre-existing fixed social network. Real social behaviour can be understood both to take place on, and to bring about, a complex set of overlapping topologies best described by a multilayer network in which different layers indicate different modes of interaction. Here we distinguish between the milieu within which social organisation is embedded and the transactional relationships that constitute this social organisation. While both can be represented by network structures, their topologies will not necessarily be the same. Researchers in various domains have realised the importance of the context in which individuals are embedded in shaping properties of the functional transactions in which they choose to engage. We review several examples of the relationship between milieu and function and propose a conceptual framework that may help advance our understanding of how social organisation can occur as a result of self-organisation and adaptation.

[98] Jacyno, M., Bullock, S., Luck, M., & Payne, T. (2007). Understanding decentralised control of resource allocation in a minimal multi-agent system. In E. Durfee, M. Yokoo, M. Huhns, & O. Shehory (eds.), Proceedings of the Sixth International Joint Conference on Autonomous Agents and Multi-agent Systems (AAMAS 2007), (pp. 1251-1253). ACM. Event dates: May 14-18, 2007. [ bib | doi | pdf ]
Utility computing exemplifies a novel kind of solution to the increasing scale and complexity of modern IT systems. Here, the 'on-demand' provisioning of computing resources is managed via a population of independent software agents that query and negotiate with one another in an open system of resource providers and consumers that has no fixed organisation and is free to change and grow organically. Where centralised executive control of agent activity is relaxed or removed, such systems have the potential to deliver scalable, flexible computing. However, major design and control challenges must be overcome if multi-agent systems are to achieve efficient, decentralised resource allocation that delivers reliable and robust performance. In this paper we introduce a minimally complex multi-agent system, where individual agents rely on simple, local strategies to perform resource allocation. We explore the relationship between local and global behaviour as system size, load, heterogeneity and reliability are varied. We identify generic feedbacks underlying system behaviour that must be balanced if decentralised control is to become an effective technique for preserving stable functionality across utility computing infrastructures.

[99] Ladley, D. & Bullock, S. (2007). Integrating segregated markets. International Transactions on Systems Science and Applications, 3(1), 11-18. [ bib | pdf ]
In many cases real markets are segregated to some extent by constraints on who is readily able to trade and communicate with whom. Here we model this kind of segregation within a market constrained by an underlying network topology. We quantify the impact of segregation on market convergence, and explore the extent to which it is redressed by a broadcast mechanism intended to mimic the presence of information sources that are widely consulted, but imperfect, and slow to react to market change.

[100] Ladley, D. & Bullock, S. (2007). The effects of market structure on a heterogeneous evolving population of traders. In A. Namatame, S. Kurihara, & H. Nakashima (eds.), Emergent Intelligence of Networked Agents, (pp. 83-97). Springer, Berlin. Event Dates: July 30 - August 5, 2005. [ bib | pdf ]
The majority of market theory is only concerned with centralised markets. In this paper, we consider a market that is distributed over a network, allowing us to characterise spatially (or temporally) separated markets. The effect of this modification on the behaviour of a market with a heterogeneous population of traders, under selection through a genetic algorithm, is examined. It is demonstrated that better-connected traders are able to make more profit than less connected traders and that this is due to a difference in the number of possible trading opportunities and not due to informational inequalities. A learning rule that had previously been demonstrated to profitably exploit network structure for a homogeneous population is shown to confer no advantage when selection is applied to a heterogeneous population of traders. It is also shown that better-connected traders adopt more aggressive market strategies in order to extract more surplus from the market.

2006

[101] Bullock, S. (2006). Cross-fertilisation at the computing/biology interface: The invention of an algorithmic biology. In A. Grafen & M. Ridley (eds.), Richard Dawkins: How a Scientist Changed The Way We Think, (pp. 116-124). Oxford University Press, Oxford. [Invited contribution to collection celebrating Richard Dawkins' 65th birthday and 30th anniversary of the publication of "The Selfish Gene"]. [ bib | pdf ]
[102] Bullock, S. (2006). The fallacy of general purpose bio-inspired computing. In L. Rocha, L. Yaeger, M. Bedau, D. Floreano, R. Goldstone, & A. Vespignani (eds.), Artificial Life X: Proceedings of the Tenth International Conference on the Synthesis and Simulation of Living Systems, (pp. 540-545). MIT Press. [ bib | pdf ]
Bio-inspired computing comes in many flavours, inspired by biological systems from which salient features and/or organisational principles have been idealised and abstracted. These bio-inspired schemes have sometimes been demonstrated to be general purpose; able to approximate arbitrary dynamics, encode arbitrary structures, or even carry out universal computation. The generality of these abilities is typically (although often implicitly) reasoned to be an attractive and worthwhile trait. Here, it is argued that such reasoning is fallacious. Natural systems are nichiversal rather than universal, and we should expect the computational systems that they inspire to be similarly limited in their performance, even if they are ultimately capable of generality in their competence. Practical and methodological implications of this position for the use of bio-inspired computing within artificial life are outlined.

[103] Bullock, S. & Bedau, M. (2006). Exploring adaptation with evolutionary activity plots. Artificial Life, 12(2), 193-197. [Appeared in a special issue on visualization for complex adaptive systems]. [ bib | pdf ]
Evolutionary activity statistics and their visualization are introduced, and their motivation is explained. Examples of their use are described, and their strengths and limitations are discussed. References to more extensive or general accounts of these techniques are provided.

[104] Bullock, S., Smith, T., & Bird, J. (2006). Picture this: The state of the art in visualization for complex adaptive systems. Artificial Life, 12(2), 189-192. [Introduction to a special issue on visualization for complex adaptive systems edited by Seth Bullock, Tom Smith and Jon Bird]. [ bib | pdf ]
Visualization has an increasingly important role to play in scientific research. Moreover, visualization has a special role to play within artificial life as a result of the informal status of its key explananda: life and complexity. Both are poorly defined but apparently identifiable via raw inspection. Here we concentrate on how visualization techniques might allow us to move beyond this situation by facilitating increased understanding of the relationships between an ALife system's (low-level) composition and organization and its (high-level) behavior. We briefly review the use of visualization within artificial life, and point to some future developments represented by the articles collected within this special issue.

[105] Ladley, D. & Bullock, S. (2006). The effects of market structure on a heterogeneous evolving population of traders. In A. Namatame, S. Kurihara, & H. Nakashima (eds.), Emergent Intelligence of Networked Agents, (pp. 83-98). Springer, Berlin. Event Dates: May 8-12, 2006. [ bib | pdf ]
The majority of market theory is only concerned with centralised markets. In this paper, we consider a market that is distributed over a network, allowing us to characterise spatially (or temporally) separated markets. The effect of this modification on the behaviour of a market with a heterogeneous population of traders, under selection through a genetic algorithm, is examined. It is demonstrated that better-connected traders are able to make more profit than less connected traders and that this is due to a difference in the number of possible trading opportunities and not due to informational inequalities. A learning rule that had previously been demonstrated to profitably exploit network structure for a homogeneous population is shown to confer no advantage when selection is applied to a heterogeneous population of traders. It is also shown that better-connected traders adopt more aggressive market strategies in order to extract more surplus from the market.

[106] Quayle, A. & Bullock, S. (2006). Modelling the evolution of genetic regulatory networks. Journal of Theoretical Biology, 238(4), 737-753. [ bib | pdf ]
An evolutionary model of genetic regulatory networks is developed, based on a model of network encoding and dynamics called the Artificial Genome (AG). This model derives a number of specific genes and their interactions from a string of (initially random) bases in an idealized manner analogous to that employed by natural DNA. The gene expression dynamics are determined by updating the gene network as if it were a simple Boolean network.

The generic behaviour of the AG model is investigated in detail. In particular, we explore the characteristic network topologies generated by the model, their dynamical behaviours, and the typical variance of network connectivities and network structures. These properties are demonstrated to agree with a probabilistic analysis of the model, and the typical network structures generated by the model are shown to lie between those of random networks and scale-free networks in terms of their degree distribution. Evolutionary processes are simulated using a genetic algorithm, with selection acting on a range of properties from gene number and degree of connectivity through periodic behaviour to specific patterns of gene expression. The evolvability of increasingly complex patterns of gene expression is examined in detail. When a degree of redundancy is introduced, the average number of generations required to evolve given targets is reduced, but limits on evolution of complex gene expression patterns remain. In addition, cyclic gene expression patterns with periods that are multiples of shorter expression patterns are shown to be inherently easier to evolve than others. Constraints imposed by the template-matching nature of the AG model generate similar biases towards such expression patterns in networks in initial populations, in addition to the somewhat scale-free nature of these networks. The significance of these results on current understanding of biological evolution is discussed.
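A rough Python sketch of the flavour of this encoding (the promoter pattern, gene length, product rule, and regulatory window below are illustrative assumptions in the artificial-genome style, not the paper's parameters):

    import random

    BASES, PROM, GLEN = 4, (0, 1, 0, 1), 6

    def find_genes(g):
        """Each occurrence of the promoter marks a gene: the GLEN bases after it."""
        hits = [i for i in range(len(g) - len(PROM) - GLEN)
                if tuple(g[i:i + len(PROM)]) == PROM]
        return [(i, tuple(g[i + len(PROM):i + len(PROM) + GLEN])) for i in hits]

    def product(gene):
        # a gene's regulatory product: the gene with each base shifted mod 4
        return tuple((b + 1) % BASES for b in gene)

    def links(g, genes):
        """Gene a regulates gene b if a's product occurs within a fixed-width
        cis-regulatory window just upstream of b's promoter."""
        window, out = 24, set()
        for a, (_, ga) in enumerate(genes):
            pa = product(ga)
            for b, (pos, _) in enumerate(genes):
                region = tuple(g[max(0, pos - window):pos])
                if any(region[k:k + GLEN] == pa
                       for k in range(len(region) - GLEN + 1)):
                    out.add((a, b))
        return out

    rng = random.Random(2)
    genome = [rng.randrange(BASES) for _ in range(5000)]
    genes = find_genes(genome)
    print(len(genes), "genes,", len(links(genome, genes)), "regulatory links")

Treating each gene as a Boolean node whose inputs are given by these links then yields the network whose dynamics the model iterates.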

2005

[107] Buckley, C., Bullock, S., & Cohen, N. (2005). Timescale and stability in adaptive behaviour. In P. J. Bentley, M. Capcarrere, A. A. Freitas, C. G. Johnson, & J. Timmis (eds.), Advances in Artificial Life: Proceedings of the Eighth European Conference on Artificial Life (ECAL 2005), (pp. 292-301). Springer, Berlin. [ bib | pdf ]
Recently, in both the neuroscience and adaptive behaviour communities, there has been growing interest in the interplay of multiple timescales within neural systems. In particular, the phenomenon of neuromodulation has received a great deal of interest within neuroscience and a growing amount of attention within adaptive behaviour research. This interest has been driven by hypotheses and evidence that have linked neuromodulatory chemicals to a wide range of important adaptive processes such as regulation, reconfiguration, and plasticity. Here, we first demonstrate that manipulating timescales can qualitatively alter the dynamics of a simple system of coupled model neurons. We go on to explore this effect in larger systems within the framework employed by Gardner, Ashby and May in their seminal studies of stability in complex networks. On the basis of linear stability analysis, we conclude that, despite evidence that timescale is important for stability, the presence of multiple timescales within a single system has, in general, no appreciable effect on the May-Wigner stability/connectance relationship. Finally we address some of the shortcomings of linear stability analysis and conclude that more sophisticated analytical approaches are required in order to explore the impact of multiple timescales on the temporally extended dynamics of adaptive systems.
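The May-Wigner relationship at issue can be stated compactly: a large, randomly connected system of N elements with connectance C, interaction strengths of standard deviation \alpha, and unit self-regulation is almost certainly stable when

    \alpha \sqrt{N C} < 1

and almost certainly unstable once this product exceeds one. The finding above is that distributing multiple timescales across such a system leaves this threshold essentially unchanged.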

[108] Cole, J., College, M., Megaw, T., Powis, M., Bullock, S., & Keen, J. (2005). The implementation of electronic services: planned or organic growth? Informatics in Primary Care, 13(3), 187-194. [ bib | pdf ]
The literature on innovation suggests that projects are successful when rigorous project management is mixed judiciously with 'organic' development. This paper argues that organic growth can play a substantial role in the implementation of electronic services in healthcare settings. Evidence for organic growth is presented, based on a study of email use. Methods are presented for investigating email use in health service settings in the National Health Service (NHS) in Bradford, England. Geographical information systems (GIS) outputs and social network analyses are presented. The results demonstrate a five-fold increase in the use of email over a 13-month period, which is shown to be largely independent of the growth in the number of organisations using the network. They also demonstrate a marked increase in the complexity of the patterns of email use over the period.

[109] Ladley, D. & Bullock, S. (2005). The role of logistic constraints on termite construction of chambers and tunnels. Journal of Theoretical Biology, 234, 551-564. [ bib | pdf ]
In previous models of the building behaviour of termites, physical and logistic constraints that limit the movement of termites and pheromones have been neglected. Here, we present an individual-based model of termite construction that includes idealized constraints on the diffusion of pheromones, the movement of termites, and the integrity of the architecture that they construct. The model allows us to explore the extent to which the results of previous idealized models (typically realised in one or two dimensions via a set of coupled partial differential equations) generalize to a physical, 3-D environment. Moreover we are able to investigate new processes and architectures that rely upon these features. We explore the role of stigmergic recruitment in pillar formation, wall building, and the construction of royal chambers, tunnels and intersections. In addition, for the first time, we demonstrate the way in which the physicality of partially built structures can help termites to achieve efficient tunnel structures and to establish and maintain entrances in royal chambers. As such we show that, in at least some cases, logistic constraints can be important or even necessary in order for termites to achieve efficient, effective constructions.
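To give a flavour of the kind of idealised constraint involved, here is a toy Python diffusion step for a pheromone field on a 3-D lattice, together with a saturating deposition rule of the sort often used in stigmergy models. Both are our illustrative sketches, not the paper's implementation.

    def diffuse(field, rate):
        """One explicit diffusion step on a 3-D lattice: each site relaxes
        toward the mean of its six face neighbours; lattice edges act as
        zero-flux boundaries. Numerically stable for rate < 1/6."""
        nx, ny, nz = len(field), len(field[0]), len(field[0][0])
        new = [[[0.0] * nz for _ in range(ny)] for _ in range(nx)]
        for x in range(nx):
            for y in range(ny):
                for z in range(nz):
                    c = field[x][y][z]
                    nbrs = [field[i][j][k]
                            for i, j, k in ((x - 1, y, z), (x + 1, y, z),
                                            (x, y - 1, z), (x, y + 1, z),
                                            (x, y, z - 1), (x, y, z + 1))
                            if 0 <= i < nx and 0 <= j < ny and 0 <= k < nz]
                    new[x][y][z] = c + rate * (sum(nbrs) - len(nbrs) * c)
        return new

    def deposit_probability(pheromone, k=0.5):
        # saturating building response: the more cement pheromone locally,
        # the more likely a laden termite is to deposit material there
        return pheromone / (k + pheromone)

    # e.g., a 20 x 20 x 10 field with a point source of pheromone at ground level
    field = [[[0.0] * 10 for _ in range(20)] for _ in range(20)]
    field[10][10][0] = 1.0
    for _ in range(50):
        field = diffuse(field, rate=0.15)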

[110] Ladley, D. & Bullock, S. (2005). Who to listen to: Exploiting information quality in a zip-agent market. In H. L. Poutré, N. M. Sadeh, & S. Janson (eds.), Agent-Mediated Electronic Commerce: Designing Trading Agents and Mechanisms, (pp. 200-211). Springer, Berlin. Event Dates: July 30 - August 5, 2005. [ bib | pdf ]
Market theory is often concerned only with centralised markets. In this paper, we consider a market that is distributed over a network, allowing us to characterise spatially (or temporally) segregated markets. The effect of this modification on the behaviour of a market populated by simple trading agents was examined. It was demonstrated that an agent's ability to identify the optimum market price is positively correlated with its network connectivity. A better connected agent receives more information and, as a result, is better able to judge the market state. The ZIP trading agent algorithm is modified in light of this result. Simulations reveal that trading agents which take account of the quality of the information that they receive are better able to identify the optimum price within a market.
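The underlying trader logic is a ZIP-style adaptive profit margin; the Python sketch below (a seller only) shows one way an information-quality weighting could enter the standard Widrow-Hoff update. The quality term and the specific perturbation ranges are our assumptions, not the paper's rule.

    import random

    class QualityWeightedZipSeller:
        """A stripped-down ZIP-style seller: quotes price = limit * (1 + margin)
        and adapts its margin toward a randomly perturbed target price."""

        def __init__(self, limit, rng, beta=0.3):
            self.limit, self.rng, self.beta = limit, rng, beta
            self.margin = rng.uniform(0.05, 0.35)

        def price(self):
            return self.limit * (1.0 + self.margin)

        def observe(self, trade_price, quality):
            # quality in [0, 1]: e.g., how directly news of this trade reached
            # us through the network; poorer information moves the margin less
            target = (self.rng.uniform(1.0, 1.05) * trade_price
                      + self.rng.uniform(0.0, 0.05))
            delta = self.beta * quality * (target - self.price())
            self.margin = max(0.0, (self.price() + delta) / self.limit - 1.0)

    rng = random.Random(4)
    seller = QualityWeightedZipSeller(limit=100.0, rng=rng)
    for q in (1.0, 0.2):
        seller.observe(trade_price=120.0, quality=q)
        print(round(seller.price(), 2))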

2004

[111] Buckley, C., Bullock, S., & Cohen, N. (2004). Toward a dynamical systems analysis of neuromodulation. In S. Schaal, A. J. Ijspeert, A. Billard, S. Vijayakumar, J. Hallam, & J.-A. Meyer (eds.), Proceedings of the Eighth International Conference on Simulation of Adaptive Behavior (SAB 2004), (pp. 334-343). MIT Press, Cambridge, MA. [Won Best Paper prize.]. [ bib | pdf ]
This work presents some first steps toward a more thorough understanding of the control systems employed in evolutionary robotics. In order to choose an appropriate architecture or to construct an effective novel control system we need insights into what makes control systems successful, robust, evolvable, etc. Here we present analysis intended to shed light on this type of question as it applies to a novel class of artificial neural networks that include a neuromodulatory mechanism: GasNets.

We begin by instantiating a particular GasNet subcircuit responsible for tuneable pattern generation and thought to underpin the attractive property of 'temporal adaptivity'. Rather than work within the GasNet formalism, we develop an extension of the well-known FitzHugh-Nagumo equations. The continuous nature of our model allows us to conduct a thorough dynamical systems analysis and to draw parallels between this subcircuit and beating/bursting phenomena reported in the neuroscience literature.

We then proceed to explore the effects of different types of parameter modulation on the system dynamics. We conclude that while there are key differences between the gain modulation used in the GasNet and alternative schemes (including threshold modulation of more traditional synaptic input), both approaches are able to produce tuneable pattern generation. While it appears, at least in this study, that the GasNet's gain modulation may not be crucial to pattern generation, we go on to suggest some possible advantages it could confer.
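For reference, the base system extended here is the standard FitzHugh-Nagumo model; schematically (with the modulation terms as our shorthand for the two schemes compared, not the paper's exact equations):

    \dot{v} = v - v^3/3 - w + g(t) \, I , \qquad \tau \dot{w} = v + a(t) - b w ,

where gain modulation varies the multiplicative factor g(t) on the input (the GasNet-style mechanism), while threshold modulation instead varies the additive offset a(t), analogous to shifting a bias on more traditional synaptic input.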

[112] Bullock, S. (2004). Making room for representation. Adaptive Behavior, 11(4), 279-280. Commentary On: Beer, R. D. (2003). The dynamics of active categorical perception in an evolved model agent. Adaptive Behavior, 11(4). [ bib | pdf ]
[113] Bullock, S. & Cliff, D. (2004). Complexity and emergent behaviour in ICT systems. Technical report, Hewlett-Packard Labs. [This report was commissioned by the Foresight Programme of the UK's Office of Science and Technology (DTi). However, its findings are independent of government and do not constitute government policy]. [ bib | pdf ]
Information and Communication Technology (ICT) practitioners are now readily able to create systems of such interconnected complexity that predicting the effects that small changes (such as minor component failures) will have on overall system performance may become very difficult or perhaps impossible. The notion that system-level behaviour “emerges” from parallel nonlinear interaction of multiple components in ways that are difficult or impossible to predict is explored in this document with reference to the UK's ICT investments and assets. We conclude that while it is true that there are currently limits to our ability to understand the ICT systems that we are capable of creating, nevertheless there are ways forward, including new ways of structuring and approaching software engineering, and teaching IT. This 25,000-word report is a briefing document commissioned by the Foresight Programme within the Office of Science and Technology of the UK Government's Department of Trade and Industry. Its findings are independent of government and do not constitute UK Government policy.

[114] Cartlidge, J. & Bullock, S. (2004). Combating coevolutionary disengagement by reducing parasite virulence. Evolutionary Computation, 12(2), 193-222. [ bib | pdf ]
While standard evolutionary algorithms employ a static, absolute fitness metric, coevolutionary algorithms assess individuals by their performance relative to populations of opponents that are themselves evolving. Although this arrangement offers the possibility of avoiding long-standing difficulties such as premature convergence, it suffers from its own unique problems: cycling, over-focusing and disengagement.

Here, we introduce a novel technique for dealing with the third and least explored of these problems. Inspired by studies of natural host-parasite systems, we show that disengagement can be avoided by selecting for individuals that exhibit reduced levels of 'virulence', rather than maximum ability to defeat coevolutionary adversaries. Experiments in both simple and complex domains are used to explain how this counterintuitive approach may be used to improve the success of coevolutionary algorithms.
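Assuming the standard formulation from this line of work (our transcription, for orientation), a parasite achieving normalised score x in [0, 1] against its hosts receives fitness

    f(x, v) = \frac{2x}{v} - \frac{x^2}{v^2} ,

which peaks at x = v. Setting virulence v = 1 recovers the conventional scheme in which maximally dominant parasites are fittest, while v < 1 favours parasites that beat their hosts only moderately, helping to keep the two populations engaged.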

[115] Cartlidge, J. & Bullock, S. (2004). Unpicking tartan CIAO plots: Understanding non-periodic coevolutionary cycling. Adaptive Behavior, 12(2), 69-92. [ bib | pdf ]
We report results from a series of studies coevolving players for simple Rock-Paper-Scissors games. These results demonstrate that 'Current Individual versus Ancestral Opponent' (CIAO) plots, which have been proposed as a visualization technique for detecting both coevolutionary progress and coevolutionary cycling, suffer from ambiguity with respect to an important but rarely discussed class of cyclic behavior. While regular cycling manifests itself as a characteristic banded plot, irregular cycling produces an irregular tartan pattern which is also consistent with random drift through strategy space. Although this tartan pattern is often reported in the literature on coevolutionary algorithms, it has received little attention or analysis. Here we argue that irregular cycling will tend to be more prevalent than regular cycling, and that it corresponds to a class of coevolutionary scenario that is important both theoretically and in practice. As such, it is desirable that we improve our ability to distinguish its occurrence from that of random drift, and other forms of coevolutionary dynamic.
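For readers unfamiliar with the technique, the underlying matrix is simple to compute. A minimal Python sketch (our own; 'play' stands in for any function scoring one player against another):

    def ciao_matrix(champs_a, champs_b, play):
        """Current Individual versus Ancestral Opponents: score population A's
        champion at each generation g against population B's champions from
        every generation h <= g. Rendered as a greyscale image, this
        lower-triangular matrix is the CIAO plot: banding suggests regular
        cycling, while a 'tartan' texture is the ambiguous case analysed here."""
        return [[play(champs_a[g], champs_b[h]) for h in range(g + 1)]
                for g in range(len(champs_a))]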

[116] Ladley, D. & Bullock, S. (2004). Logistic constraints on 3d termite construction. In M. Dorigo, M. Birattari, L. M. Blum, F. Mondada, & T. Stutzle (eds.), Proceedings of the Fourth International Workshop on Ant Colony Optimization and Swarm Intelligence (ANTS 2004), (pp. 178-189). Springer, Berlin. [ bib | pdf ]
The building behaviour of termites has previously been modelled mathematically in two dimensions. However, physical and logistic constraints were not taken into account in these models. Here, we develop and test a three-dimensional agent-based model of this process that places realistic constraints on the diffusion of pheromones, the movement of termites, and the integrity of the architecture that they construct. The following scenarios are modelled: the use of a pheromone template in the construction of a simple royal chamber, the effect of wind on this process, and the construction of covered pathways. We consider the role of the third dimension and the effect of logistic constraints on termite behaviour and, reciprocally, the structures that they create. For instance, when agents find it difficult to reach some elevated or exterior areas of the growing structure, building proceeds at a reduced rate in these areas, ultimately influencing the range of termite-buildable architectures.

[117] Silverman, E. & Bullock, S. (2004). Empiricism in artificial life. In J. Pollack, M. Bedau, P. H. T. Ikegami, & R. A. Watson (eds.), Artificial Life IX: Proceedings of the Ninth International Conference on Artificial Life, (pp. 534-539). MIT Press, Cambridge, MA. [ bib | pdf ]
Strong artificial life research is often thought to rely on Alife systems as sources of novel empirical data. It is hoped that by augmenting our observations of natural life, this novel data can help settle empirical questions, and thereby separate fundamental properties of living systems from those aspects that are merely contingent on the idiosyncrasies of terrestrial evolution. Some authors have questioned whether this approach can be pursued soundly in the absence of a prior, agreed-upon definition of life. Here we compare Alife's position to that of more orthodox empirical tools that nevertheless suffer from strong theory-dependence. Drawing on these examples, we consider what kind of justification might be needed to underwrite artificial life as empirical enquiry.

2003

[118] Cartlidge, J. & Bullock, S. (2003). Caring versus sharing: How to maintain engagement and diversity in coevolving populations. In W. Banzhaf, T. Christaller, J. T. Dittrich, & J. Ziegler (eds.), Advances in Artificial Life: Proceedings of the Seventh European Conference on Artificial Life (ECAL 2003), (pp. 299-308). Springer, Berlin. [ bib | pdf ]
Coevolutionary optimisation suffers from a series of problems that interfere with the progressive escalating arms races that are hoped might solve difficult classes of optimisation problem. Here we explore the extent to which encouraging moderation in one coevolving population (termed parasites) can alleviate the problem of coevolutionary disengagement. Results suggest that, under these conditions, disengagement is avoided through maintaining variation in relative fitness scores. In order to explore whether standard diversity maintenance techniques such as resource sharing could achieve the same effects, we compare moderating virulence with resource sharing in a simple matching game. We demonstrate that moderating parasite virulence differs significantly from resource sharing, and that its tendency to prevent disengagement can also reduce the likelihood of coevolutionary optimisation halting at mediocre stable states.

2002

[119] Bullock, S. (2002). Will selection for mutational robustness significantly retard evolutionary innovation on neutral networks? In R. K. Standish, M. Bedau, & H. Abbass (eds.), Artificial Life VIII: Proceedings of the Eighth International Conference on the Synthesis and Simulation of Living Systems, (pp. 192-201). MIT Press, Cambridge, MA. [ bib | pdf ]
As a population evolves, its members are under selection both for rate of reproduction (fitness) and mutational robustness. For those using evolutionary algorithms as optimisation techniques, this second selection pressure can sometimes be beneficial, but it can also bias evolution in unwelcome and unexpected ways. Here, the role of selection for mutational robustness in driving adaptation on neutral networks is explored. The behaviour of a standard genetic algorithm is compared with that of a search algorithm designed to be immune to selection for mutational robustness. Performance on an RNA folding landscape suggests that selection for mutational robustness, at least sometimes, will not unduly retard the rate of evolutionary innovation enjoyed by a genetic algorithm. Two classes of random landscape are used to explore the reasons for this result.

[120] Bullock, S., Cartlidge, J., & Thompson, M. (2002). Prospects for computational steering of evolutionary computation. In T. Smith, S. Bullock, & J. Bird (eds.), Workshop Proceedings of the Eighth International Conference on Artificial Life: Beyond Fitness-Visualizing Evolution, (pp. 131-137). UNSW. [ bib | pdf ]
Currently, evolutionary computation (EC) typically takes place in batch mode: algorithms are run autonomously, with the user providing little or no intervention or guidance. Although it is rarely possible to specify in advance, on the basis of EC theory, the optimal evolutionary algorithm for a particular problem, it seems likely that experienced EC practitioners possess considerable tacit knowledge of how evolutionary algorithms work. In situations such as this, computational steering (ongoing, informed user intervention in the execution of an otherwise autonomous computational process) has been profitably exploited to improve performance and generate insights into computational processes. In this short paper, prospects for the computational steering of evolutionary computation are assessed, and a prototype example of computational steering applied to a coevolutionary algorithm is presented.

[121] Cartlidge, J. & Bullock, S. (2002). Learning lessons from the common cold: How reducing parasite virulence improves coevolutionary optimization. In D. Fogel (ed.), Proceedings of the 2002 Congress on Evolutionary Computation (CEC2002), (pp. 1420-1425). IEEE Press. [ bib | pdf ]
Inspired by the virulence of natural parasites, a novel approach is developed to tackle disengagement, a detrimental phenomenon coevolutionary systems sometimes experience [1]. After demonstrating beneficial results in a simple model, minimum-comparison sorting networks are coevolved, with results suggesting that moderating parasite virulence can help in practical problem domains.

[122] Clark, B. & Bullock, S. (2002). Ecological disturbance maintains and promotes biodiversity in an artificial plant ecology. In B. Hallam, D. Floreano, J. Hallam, G. Hayes, & J.-A. Meyer (eds.), Proceedings of the Seventh International Conference on the Simulation of Adaptive Behavior (SAB 2002), (pp. 355-356). MIT Press, Cambridge, MA. [ bib | pdf ]
A model of plant growth, competition and reproduction in three dimensions was constructed using L-systems to simulate plant growth, ray tracing to simulate sunlight and shading, and a steady-state genetic algorithm to simulate evolution by natural selection. Simulated plant growth conformed to expected trade-offs between, for instance, growing up and growing out. Simulated cohorts exhibited conventional population-level phenomena such as obeying the self-thinning law. Competition between species was simulated under various disturbance regimes. Undisturbed, a K-selected type of plant species dominated at equilibrium. However, under certain disturbance regimes, diverse life-history strategies were able to coexist at equilibrium, and even speciate.
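A minimal Python sketch of the L-system machinery at the heart of such a model (the bracketed rule shown is a textbook branching-plant system, not the grammar evolved in the paper):

    def lsystem(axiom, rules, steps):
        """Deterministic, context-free L-system: rewrite every symbol in
        parallel at each step; symbols without a rule are copied unchanged."""
        s = axiom
        for _ in range(steps):
            s = "".join(rules.get(c, c) for c in s)
        return s

    # A classic bracketed system for a branching plant: 'F' draws a segment,
    # '+'/'-' turn the drawing turtle, '[' / ']' push and pop its state.
    rules = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}
    print(lsystem("X", rules, 3)[:60], "...")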

[123] Smith, T., Bullock, S., & Bird, J. (2002). Workshop overview. In T. Smith, S. Bullock, & J. Bird (eds.), Workshop Proceedings of the Eighth International Conference on Artificial Life: Beyond Fitness-Visualising Evolution, (pp. 99-102). UNSW. [ bib | pdf ]
[124] Wheeler, M., Bullock, S., Di Paolo, E., Noble, J., Bedau, M., Husbands, P., Kirby, S., & Seth, A. (2002). The view from elsewhere: Perspectives on alife modelling. Artificial Life, 8(1), 87-100. [ bib | pdf ]

2001

[125] Bullock, S. (2001). Smooth operator? Understanding and visualising mutation bias. In J. Kelemen & P. Sosik (eds.), Advances in Artificial Life: Proceedings of the Sixth European Conference on Artificial Life (ECAL 2001), (pp. 602-612). Springer, Heidelberg. [ bib | pdf ]
The potential for mutation operators to adversely affect the behaviour of evolutionary algorithms is demonstrated for both real-valued and discrete-valued genotypes. Attention is drawn to the utility of effective visualisation techniques and explanatory concepts in identifying and understanding these biases. The skewness of a mutation distribution is identified as a crucial determinant of its bias. For redundant discrete genotype-phenotype mappings intended to exploit neutrality in genotype space, it is demonstrated that, in addition to the mere extent of phenotypic connectivity achieved by these schemes, the distribution of phenotypic connectivity may be critical in determining whether neutral networks improve the overall performance of an evolutionary algorithm.
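
A minimal way to expose such a bias, in the spirit of (though much cruder than) the visualisations discussed, is to Monte Carlo estimate the expected displacement a mutation operator induces at different points of the legal gene range; all values below are assumptions for the sketch.

    import random

    def clipped_gaussian_mutant(x, sigma=0.1, lo=0.0, hi=1.0):
        # Gaussian mutation with offspring clipped back into the legal range.
        return min(hi, max(lo, x + random.gauss(0.0, sigma)))

    def mean_displacement(x, trials=100_000):
        # E[x' - x]; anything reliably non-zero means the operator is biased
        # at this point of the gene range, even with no selection acting.
        return sum(clipped_gaussian_mutant(x) - x
                   for _ in range(trials)) / trials

    for x in (0.05, 0.5, 0.95):
        print(x, round(mean_displacement(x), 4))
    # Near either boundary the clipped mutant distribution is skewed towards
    # the interior, so genes drift away from extreme values artefactually.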

[126] Noble, J., Di Paolo, E. A., & Bullock, S. (2001). Adaptive factors in the evolution of signalling systems. In A. Cangelosi & D. Parisi (eds.), Simulating the Evolution of Language, (pp. 53-78). Springer, Heidelberg. [ bib | pdf ]

2000

[127] Bullock, S. (2000). Something to talk about: Conflict and coincidence of interest in the evolution of shared meaning. In Proceedings of the Third International Conference on the Evolution of Language (EvoLang 2000). Event Dates: April 3rd-6th, 2000. [ bib | pdf ]
This paper investigates the possibility of evolving communication in a multi-agent system (MAS) and a software simulation has been developed for this purpose. Agents in this system are controlled by an artificial neural network of the standard feed-forward type. The agent behaviour is evolved by modifying the connection weights using a steady-state genetic algorithm. After a discussion of the theory behind the evolution of communication, some preliminary results are presented that demonstrate that, even in relatively simple environments, coordinated group behaviour can evolve consistently. Possible extensions to the simulation system are outlined that should promote the evolution of communication between agents.
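
The steady-state update at the heart of such a system is compact enough to sketch: rather than replacing the whole population each generation, one individual is overwritten at a time. The genotype encoding and toy objective below are assumptions; in the paper the genotype would be the feed-forward network's connection weights and evaluation would run the agent in its simulated environment.

    import random

    def steady_state_step(pop, evaluate, sigma=0.1):
        # Pick two genotypes at random; the better one produces a mutated
        # offspring that overwrites the worse one.
        i, j = random.sample(range(len(pop)), 2)
        if evaluate(pop[i]) < evaluate(pop[j]):
            i, j = j, i
        pop[j] = [w + random.gauss(0.0, sigma) for w in pop[i]]

    # Toy stand-in: a genotype is a flat weight vector, scored by a dummy
    # objective (closeness of all weights to zero).
    pop = [[random.uniform(-1, 1) for _ in range(10)] for _ in range(30)]
    for _ in range(1000):
        steady_state_step(pop, evaluate=lambda g: -sum(w * w for w in g))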

[128] Bullock, S. (2000). What can we learn from the first evolutionary simulation model? In M. A. Bedau, J. S. McCaskill, N. Packard, & S. Rasmussen (eds.), Artificial Life VII: Proceedings of the Seventh International Conference on the Synthesis and Simulation of Living Systems, (pp. 477-486). MIT Press, Cambridge, MA. [ bib | pdf ]
A simple computer program dating from the first half of the nineteenth century is presented as the earliest known example of an evolutionary simulation model. The model is described in detail and its status as an evolutionary simulation model is discussed. Three broad issues raised by the model are presented and their significance for modern evolutionary simulation modelling is explored: first, the utility of attending to the character of a system's entire dynamics rather than focusing on the equilibrium states that it admits of; second, the worth of adopting an evolutionary perspective on adaptive systems beyond those addressed by evolutionary biological research; third, the potential for the non-linear character of complex dynamical systems to be explored through an individual-based simulation modelling approach.

[129] Bullock, S. & Noble, J. (2000). Evolutionary simulation modelling clarifies interactions between parallel adaptive processes (commentary). Behavioral and Brain Sciences, 23(1), 150-151. Commentary On: Laland, K. N., Odling-Smee, F. J., Feldman, M. W. (1999). Niche construction, biological evolution and cultural change. Behavioral and Brain Sciences, 23: 131-175. [ bib | pdf ]
The teleological language in the target article is ill-advised, as it obscures the question of whether ecological and cultural inheritances are directed or random. The authors present a very broad palette of explanatory possibilities; evolutionary simulation models could help narrow down the processes important in a particular case. Examples of such models are offered in the areas of language change and the Baldwin effect.

[130] Di Paolo, E. A., Bullock, S., & Noble, J. (2000). Artificial life: Discipline or method? Report on a debate held at ECAL99. Artificial Life, 6(2), 145-148. [ bib | pdf ]
[131] Di Paolo, E. A., Noble, J., & Bullock, S. (2000). Simulation models as opaque thought experiments. In M. A. Bedau, J. S. McCaskill, N. Packard, & S. Rasmussen (eds.), Artificial Life VII: Proceedings of the Seventh International Conference on the Synthesis and Simulation of Living Systems, (pp. 497-506). MIT Press, Cambridge, MA. [ bib | pdf ]
We review and critique a range of perspectives on the scientific role of individual-based evolutionary simulation models as they are used within artificial life. We find that such models have the potential to enrich existing modelling enterprises through their strength in modelling systems of interacting entities. Furthermore, simulation techniques promise to provide theoreticians in various fields with entirely new conceptual, as well as methodological, approaches. However, the precise manner in which simulations can be used as models is not clear. We present two apparently opposed perspectives on this issue: simulation models as “emergent computational thought experiments” and simulation models as realistic simulacra. Through analysing the role that armchair thought experiments play in science, we develop a role for simulation models as opaque thought experiments, that is, thought experiments in which the consequences follow from the premises, but in a non-obvious manner which must be revealed through systematic enquiry. Like their better-known transparent cousins, opaque thought experiments, when understood, result in new insights and conceptual reorganisations. These may stress the current theoretical position of the thought experimenter and engender empirical predictions which must be tested in reality. As such, simulation models, like all thought experiments, are tools with which to explore the consequences of a theoretical position.

1999

[132] Bullock, S. (1999). Are artificial mutation biases unnatural? In D. Floreano, J.-D. Nicoud, & F. Mondada (eds.), Advances in Artificial Life: Proceedings of the Fifth European Conference on Artificial Life (ECAL'99), (pp. 64-73). Springer, Berlin. [ bib | pdf ]
Whilst the rate at which mutations occur in artificial evolutionary systems has received considerable attention, there has been little analysis of the mutation operators themselves. Here attention is drawn to the possibility that inherent biases within such operators might artefactually affect the direction of evolutionary change. Biases associated with several mutation operators are detailed and attempts to alleviate them are discussed. Natural evolution is then shown to be subject to analogous mutation “biases”. These tendencies are explicable in terms of (i) selection pressure for low mutation rates, and (ii) selection pressure to avoid parenting non-viable offspring. It is concluded that attempts to eradicate mutation biases from artificial evolutionary systems may lead to evolutionary dynamics that are more unnatural, rather than less. Only through increased awareness of the character of mutation biases, and analyses of our models' sensitivity to them, can we guard against artefactual results.

[133] Bullock, S. (1999). Jumping to bold conclusions. Adaptive Behavior, 7(1), 129-134. Book review of Amotz and Avishag Zahavi's “The Handicap Principle: A Missing Piece of Darwin's Puzzle”. [ bib | pdf ]
[134] Bullock, S. (1999). The child in time. Trends in Cognitive Science, 3(9), 361. Book review of Denise Cummins and Colin Allen's “The Evolution of Mind”. [ bib | pdf ]
[135] Bullock, S., Davis, J. N., & Todd, P. (1999). Simplicity rules the roost: Exploring birdbrain parental investment heuristics. In D. Floreano, J.-D. Nicoud, & F. Mondada (eds.), Advances in Artificial Life: Proceedings of the Fifth European Conference on Artificial Life (ECAL'99), (pp. 13-17). Springer, Berlin. [ bib | pdf ]
Parents raising multiple offspring must decide how to divide resources between them. Much empirical data on the parenting behaviour of particular species has been collected. Birds, in particular, have been shown to follow a number of provisioning rules. However, the adaptive significance of this variation in decision-making strategies has been largely unexplored. Here we present a simulation model of the western bluebird, Sialia mexicana, with which we explore the utility of various simple feeding heuristics. The simulated parents face the task of simultaneously raising several offspring who are of differing ages and thus have differing resource needs. We show that the success of simple rules of thumb varies with environmental parameters in a manner which (i) predicts experimental results in the biology literature, and (ii) can be explained using a notion of parental egalitarianism.
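
The flavour of the heuristics compared can be conveyed with a deliberately crude brood simulation: each feeding bout the parent picks one chick by a simple rule, and fledging success is tallied at the end. Every number and rule below is an invented placeholder, not the paper's calibrated bluebird model.

    import random

    def raise_brood(choose, chicks=4, days=20, feeds_per_day=3):
        # Caricature: chicks hold a resource reserve, pay a daily metabolic
        # cost, and fledge if the reserve is positive at the end.
        reserves = [1.0] * chicks
        for _ in range(days):
            for _ in range(feeds_per_day):
                reserves[choose(reserves)] += 1.0
            reserves = [r - 0.7 for r in reserves]
        return sum(r > 0 for r in reserves)

    rules = {
        "feed hungriest": lambda rs: rs.index(min(rs)),
        "feed strongest": lambda rs: rs.index(max(rs)),
        "feed at random": lambda rs: random.randrange(len(rs)),
    }
    for name, rule in rules.items():
        print(name, raise_brood(rule))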

[136] Bullock, S. & Todd, P. M. (1999). Made to measure: Ecological rationality in structured environments. Minds and Machines, 9(4), 497-541. [ bib | pdf ]
A working assumption that processes of natural and cultural evolution have tailored the mind to fit the demands and structure of its environment raises the question: how are we to characterize the structure of cognitive environments? Decision problems faced by real organisms are not like simple multiple-choice examination papers. For example, some individual problems may occur much more frequently than others, whilst some may carry much more weight than others. Such considerations are not taken into account when (i) the performance of candidate cognitive mechanisms is assessed by employing a simple accuracy metric that is insensitive to the structure of the decision-maker's environment, and (ii) reason is defined as the adherence to internalist prescriptions of classical rationality. Here we explore the impact of frequency and significance structure on the performance of a range of candidate decision-making mechanisms. We show that the character of this impact is complex, since structured environments demand that decision-makers trade off general performance against performance on important subsets of test items. As a result, environment structure obviates internalist criteria of rationality. Failing to appreciate the role of environment structure in shaping cognition can lead to mischaracterising adaptive behavior as irrational.
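
The contrast between a flat accuracy metric and one sensitive to environment structure is easy to make concrete; the sketch below weights each decision problem by its frequency of occurrence and its significance. The environment tuples and the always-answer-A mechanism are invented for illustration.

    def weighted_accuracy(decide, environment):
        # environment: iterable of (problem, correct_answer, frequency,
        # significance) tuples. A mechanism that shines only on rare or
        # low-stakes items scores worse here than under flat accuracy.
        num = den = 0.0
        for problem, answer, freq, significance in environment:
            w = freq * significance
            num += w * (decide(problem) == answer)
            den += w
        return num / den

    env = [("q1", "A", 0.7, 1.0),   # common, routine
           ("q2", "B", 0.2, 5.0),   # rarer, but high stakes
           ("q3", "A", 0.1, 1.0)]
    always_a = lambda p: "A"
    print(weighted_accuracy(always_a, env))   # ~0.44, vs flat accuracy 2/3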

[137] Davis, J. N., Todd, P. M., & Bullock, S. (1999). Environmental quality predicts parental provisioning decisions. Proceedings of the Royal Society of London, Series B, 266, 1791-1797. [ bib | pdf ]
Although avian parents appear to exhibit a variety of feeding strategies in nature, there currently exist no models or theories that account for this range of diversity. Here we present the results of a computer simulation designed to model interdependent parental decisions, where investment is meted out in small doses, and must be distributed over time to maximize return on investment at the end of the parental care period. With this technique we show that the success of various simple observed parental rules of thumb varies with environmental resource level, and that increasing the complexity of parental decision rules does not necessarily result in increased fitness.

[138] Goodie, A. S., Ortmann, A., Davis, J., Bullock, S., & Werner, G. M. (1999). Demons versus heuristics in artificial intelligence, behavioral ecology, and economics. In G. Gigerenzer & P. M. Todd (eds.), Simple Heuristics That Make Us Smart, (pp. 327-355). Oxford University Press, Oxford. [ bib | pdf ]

1998

[139] Bullock, S. (1998). A continuous evolutionary simulation model of the attainability of honest signalling equilibria. In C. Adami, R. Belew, H. Kitano, & C. Taylor (eds.), Artificial Life VI: Proceedings of the Sixth International Conference on the Synthesis and Simulation of Living Systems, (pp. 339-348). MIT Press, Cambridge, MA. [ bib | pdf ]
A particular game-theoretic model (Grafen, 1990) of the evolutionary stability of honest signalling, which attempts a formal proof of the validity of Zahavi's (1975, 1977) handicap principle, is generalised and rendered as an evolutionary simulation model. In addition to supporting new theoretical results, this allows the effects of differing initial conditions on the attainability of signalling equilibria to be explored. Furthermore, it allows an examination of the manner in which the character of equilibrium signalling behaviour varies with the model's parameters. It is demonstrated that (i) non-handicap signalling equilibria exist, (ii) honest signalling equilibria need not involve extravagant signals, and (iii) the basins of attraction for such equilibria are, however, relatively small. General conditions for the existence of honest signalling equilibria (which replace those offered by Zahavi) are provided, and it is demonstrated that previous theoretical results are easily accommodated by these general conditions. It is concluded that the supposed generality of the handicap principle, and the coherence of its terminology, are both suspect.

[140] Bullock, S. (1998). The emptiness of the self-contained coder (Commentary). Connexions, 3, 7-9. [ bib | pdf ]

1997

[141] Bullock, S. (1997). An exploration of signalling behaviour by both analytic and simulation means for both discrete and continuous models. In P. Husbands & I. Harvey (eds.), Advances in Artificial Life: Proceedings of the Fourth European Conference on Artificial Life (ECAL'97), (pp. 454-463). MIT Press, Cambridge, MA. [ bib | pdf ]
Hurd's (1995) model of a discrete action-response game, in which the interests of signallers and receivers conflict, is extended to address games in which, as well as signal cost varying with signaller quality, the value of an observer's response to a signal is also dependent on signaller quality. It is shown analytically that non-handicap signalling equilibria exist for such a model.

Using a distributed Genetic Algorithm (GA) to simulate the evolution of the model over time, the model's sensitivity to initial conditions is explored, and an investigation into the attainability of the analytically derived Evolutionarily Stable Strategies (ESSs) is undertaken. It is discovered that the system is capable of attaining signalling equilibria in addition to those derived via analytic techniques, and that these additional equilibria are consistent with the definition of conventional signalling.

Grafen's (1990) proof of Zahavi's handicap principle is generalised in an analogous manner, and it is demonstrated analytically that non-handicap signalling equilibria also exist for this continuous model of honest signalling.
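
A toy discrete action-response game of the kind at issue can be written out directly; the payoff numbers below are invented, but they are chosen so that honest signalling is an equilibrium only because the low-quality bluff costs more than the response it would elicit is worth. Receiver payoffs are omitted for brevity.

    # Signal costs depend on signaller quality (and, in the extension
    # discussed above, response values may too).
    COST = {("high", "big"): 1.0, ("high", "small"): 0.0,
            ("low", "big"): 5.0, ("low", "small"): 0.0}
    VALUE = {("high", "accept"): 4.0, ("low", "accept"): 4.0,
             ("high", "reject"): 0.0, ("low", "reject"): 0.0}

    def signaller_payoff(quality, signal, receiver):
        return VALUE[(quality, receiver(signal))] - COST[(quality, signal)]

    # Candidate honest pair: signal "big" iff high quality; accept iff "big".
    receiver = lambda s: "accept" if s == "big" else "reject"
    for quality, signal in (("high", "big"), ("high", "small"),
                            ("low", "small"), ("low", "big")):
        print(quality, signal, signaller_payoff(quality, signal, receiver))
    # high/big pays 3 > high/small's 0, and low/small pays 0 > low/big's -1,
    # so neither quality gains by deviating from the honest strategy.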

[142] Bullock, S. (1997). Evolutionary simulation models: On their character, and application to problems concerning the evolution of natural signalling systems. Ph.D. thesis, University of Sussex, UK. [ bib | pdf ]
Evolutionary simulation modelling is presented as a methodology involving the application of modelling techniques developed within the artificial sciences to evolutionary problems. Although modelling work employing this methodology has a long and interesting history, it has remained, until recently, a relatively underdeveloped practice, lacking a unifying theoretical framework.

Within this thesis, evolutionary simulation modelling will be defined as the use of simulations, constructed under constraints imposed by evolutionary theories, to explore the adequacy of these theories, through the modelling of an adaptive system's ongoing evolution.

Evolutionary simulation models may be considered to lie within the field of artificial life, since its concerns include theories of life, evolution, dynamical systems, and the relationship between artificial and natural adaptive systems. Simultaneously, evolutionary simulation modelling should be regarded as distinct from, yet complementing, existing evolutionary modelling techniques within the biological sciences.

The ambit of evolutionary simulation modelling includes those systems towards which one is able to take the evolutionary perspective, i.e., systems comprising agents which change over time through the action of some adaptive process. This perspective is broad, allowing evolutionary simulation models to address linguistic models of glossogenetic change, anthropological models of cultural development, and models of economic learning, as well as models of biological evolution.

Once this methodology has been defined, it is applied to a group of problems current within theoretical biology, concerning the evolution of natural signalling systems.

The ubiquity of natural communication is a well-attested phenomenon. However, the utility of such communication within a world populated by neo-Darwinian selfish individuals has recently been questioned. Theoretical models proposed to account for the existence of signalling within the animal kingdom are reviewed, and evolutionary simulation models are constructed in an attempt to assess these theories. Specifically, models of the evolution of complex symmetry, and models of the evolution of honesty, are addressed.

[143] Bullock, S. & Cliff, D. (1997). The role of 'hidden preferences' in the artificial co-evolution of symmetrical signals. Proceedings of the Royal Society of London, Series B, 264, 505-511. [ bib | pdf ]
Recently, within the biology literature, there has been some interest in exploring the evolutionary function of animal displays through computer simulations of evolutionary processes. Here we provide a critique of an exploration of the evolutionary function of complex symmetrical displays. We investigate the hypothesis that complex symmetrical signal form is the product of a `hidden preference' inherent in all sensory systems (i.e. a universal sensory bias). Through extending previous work and relaxing its assumptions we reveal that the posited `hidden preference' for complex symmetry is in reality a preference for homogeneity. The resulting implications for further accounts of the evolutionary function of complex symmetrical patterning are considered.

1996

[144] Bullock, S. & Cliff, D. (1996). Modelling biases and biasing models: The role of 'hidden preferences' in the artificial co-evolution of symmetrical signals. Cognitive Science Research Paper CSRP 414, University of Sussex, UK. [ bib | pdf ]
Recently, within the biology literature, there has been considerable interest in exploring the evolutionary function of animal displays through computer simulations of evolutionary processes (Arak & Enquist, 1993, 1995a; Enquist & Arak, 1993, 1994; Johnstone, 1994; Hurd, Wachtmeister, & Enquist, 1995; Krakauer & Johnstone, 1995). Whilst we applaud biologists' adoption of the simulation techniques pioneered within the artificial sciences (see, for example, Meyer & Wilson, 1991; Meyer, Roitblat, & Wilson, 1993; Cliff, Husbands, Meyer, & Wilson, 1994, for collections of such research), and feel that bi-directional cross-fertilisation between natural and artificial sciences has a bright future, we suggest that the application of such techniques to evolutionary modelling may prove to be problematic. Some debate has accompanied the work (Cook, 1995; Johnstone, 1995; Arak & Enquist, 1995b; Stamp Dawkins & Guilford, 1995) but attention to the methodology employed within this embryonic research paradigm has been cursory. Here we provide a critique of this methodology, concentrating on Enquist and Arak's (1994) exploration of the evolutionary function of complex symmetrical displays. We investigate their hypothesis that complex signal form, rather than being the product of evolutionary pressure for information exchange, is the product of `hidden preferences' inherent in sensory systems (i.e. sensory biases). Through extending their work and relaxing their assumptions we reveal that the `hidden preference' for symmetry proffered by Enquist and Arak (1994) is in reality a preference for homogeneity. We show that the flaws present in Enquist and Arak's (1994) study are immanent in any such evolutionary simulation model, and must be challenged if research within this paradigm is to prove worthwhile.

1995

[145] Bullock, S. (1995). Co-evolutionary design: Implications for evolutionary robotics. Cognitive Science Research Paper CSRP384, University of Sussex. [ bib | pdf ]
Genetic Algorithms (GAs) typically work on static fitness landscapes. In contrast, natural evolution works on fitness landscapes that change over evolutionary time as a result of (amongst other things) co-evolution. The attractions of co-evolutionary design techniques are discussed, and attempts to utilise co-evolution in the use of GAs as design tools are reviewed, before the implications of natural predator-prey co-evolution are considered. Utilising strict definitions of true and diffuse co-evolution provided by Janzen (1980), a distinction is drawn between two styles of evolutionary niche, Predator and Parasite. The former niche is robust with respect to environmental change and features systems that have had to solve evolutionary problems in ways that reveal general purpose design principles, whilst the nature of the latter is such that, despite being fragile and unsatisfactory in these respects, it is nevertheless evolutionarily successful. It is contended that if co-evolutionary design is to provide systems that solve problems in ways that reveal general purpose design principles, i.e. to provide robust styles of solution, true co-evolution must be abandoned in favour of diffuse co-evolutionary design regimes.

[146] Bullock, S. (1995). Dynamic fitness landscapes. In P. de Bourcier, R. Lemmen, & A. Thompson (eds.), The Seventh White House Papers: Graduate Research in the Cognitive & Computing Sciences at Sussex. University of Sussex, UK. [ bib | pdf ]
Genetic Algorithms (GAs) are typically thought to work on static fitness landscapes. In contrast, natural evolution works on fitness landscapes that change over evolutionary time as a result of co-evolution. Sexual selection and predator-prey evolution are examined as clear examples of phenomena that transform fitness landscapes. The concept of co-evolution is subsequently defined, before attempts to utilise co-evolution in the use of GAs as design tools are reviewed and speculations concerning future applications of automatic co-evolutionary techniques for design are considered.

1994

[147] Bullock, S. (1994). In 1994... there was the incident with the pigeon. I could go on. [ bib | www ]
In 1992, there was the paper about Wilson's animat. In 1993, there was a paper about Wilson's animat. In 1993, there was another paper about Wilson's animat. In 1994, there was the incident with the pigeon. In 1995... I could go on.

1993

[148] Cliff, D. & Bullock, S. (1993). Adding “foveal vision” to Wilson's animat. Adaptive Behavior, 2(1), 49-72. [ bib | doi | pdf ]
Different animals employ different strategies for sampling sensory data. In animals that can see, differences in sampling strategy manifest themselves as differences in field of view and in spatially variant sampling (so-called foveal vision). In analyzing adaptive behavior in animals, or attempting to design autonomous robots, mechanisms for exploring variations in sensory sampling strategy will be required. This article describes our work exploring a minimal system for investigating the effects of variations in patterns of sensory sampling. We have reimplemented Wilson's animat (Wilson, 1985b) and then experimented with altering its sensory sampling pattern (i.e., its sensory field). Empirical results are presented which demonstrate that alterations in the sensory field pattern can have a significant effect on the animat's observable behavior. Analysis of our results involves characterizing the interaction between the animat's sensory field and the environment within which the animat resides. We found that the animat's observed behavior can, at least in part, be explained by the animat cautiously moving in a manner that attempts to maximize the generation of new information from the environment over time. We demonstrate that similar explanations can be offered for behavioral patterns in real animals. The article concludes with a discussion of the generality of the results and reflections on the prospects for further work.
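
Spatially variant sampling of this kind reduces, in a grid world, to reading a dense central patch plus a sparse periphery; the offsets below are invented for illustration rather than taken from the reimplementation.

    # Dense 3x3 "fovea" around the animat plus four sparse peripheral probes.
    FOVEA = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
    PERIPHERY = [(-3, 0), (3, 0), (0, -3), (0, 3)]

    def sense(grid, row, col):
        # The animat's sensory vector: one cell reading per sampling offset,
        # with out-of-bounds cells read as wall ("#").
        def cell(r, c):
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]):
                return grid[r][c]
            return "#"
        return [cell(row + dr, col + dc) for dr, dc in FOVEA + PERIPHERY]

    world = [list("......"), list(".T...."), list("......"),
             list("....F."), list("......")]
    print(sense(world, 2, 2))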

[149] Cliff, D. & Bullock, S. (1993). Adding “foveal vision” to Wilson's animat. In J. L. Deneubourg, S. Gross, G. Nicolis, H. Bersini, & R. Dagonnier (eds.), Advances in Artificial Life: Proceedings of the Second European Conference on Artificial Life (ECAL'93) (Preprint), (pp. 205-214). [ bib | pdf ]
Different animals employ different strategies for sampling sensory data. The strategies are often closely constrained by environmental considerations, such as the animals' ecological niche. In animals that can see, differences in sampling strategy manifest themselves as differences in field of view and in spatially variant sampling (so-called “foveal” vision). In analysing adaptive behaviour in animals, or attempting to design autonomous robots, mechanisms for exploring variations in sensory sampling strategy will be required. This paper describes our work exploring a minimal system for investigating the effects of variations in patterns of sensory sampling. We have reimplemented Wilson's (1986) animat, and then experimented with altering its sensory sampling pattern (i.e. its sensory field). Empirical results are presented which demonstrate that alterations in the sensory field pattern can have a significant effect on the animat's observable behaviour.

Analysis of our results involves characterising the interactions between the animat's sensory field and the environment within which the animat resides. We found that observed behaviour can, at least in part, be explained as a result of the animat cautiously moving in a manner which maximises the uptake of new information from the environment over time.

1992

[150] Cliff, D. & Bullock, S. (1992). Adding “foveal vision” to Wilson's animat. Cognitive Science Research Paper CSRP263, University of Sussex, UK. [ bib | pdf ]
Different animals employ different strategies for sampling sensory data. The strategies are often closely constrained by environmental considerations, such as the animal's ecological niche. In animals that can see, differences in sampling strategy manifest themselves as differences in field of view and in spatially variant sampling (so-called “foveal” vision). In analysing adaptive behaviour in animals, or attempting to design autonomous robots, mechanisms for exploring variations in sensory sampling strategy will be required. This paper describes our work exploring a minimal system for investigating the effects of variations in patterns of sensory sampling. We have re-implemented Wilson's (1986) animat, and then experimented with altering its sensory sampling pattern (i.e. its sensory field). Empirical results are presented which demonstrate that alterations in the sensory field pattern can have a significant effect on the animat's observable behaviour (and hence also on the internal mechanisms which generate the behaviours). Analysis of our results involves characterising the interaction between the animat's sensory field and the environment within which the animat resides. We found that the animat's observed behaviour can, at least in part, be explained as a result of the animat cautiously moving in a manner which maximises the generation of new information from the environment over time. The paper concludes with a discussion of the generality of the results, and reflections on the prospects for further work.


This file was generated by bibtex2html 1.95.