

TechnoSphere: An A-Life Ecology on the Internet

Dr Gordon Selley

Senior Research Fellow

London College of Printing & Distributive Trades

Abstract

TechnoSphere is an evolution simulator that allows people to create their own creatures and communicate with them as they grow, evolve and die in a 3D virtual environment. Users of the system have access to a World Wide Web interface that allows them to select the component parts and behavioural characteristics which together make up an artificial life-form.

This paper discusses the relationships between the artificial life-forms and their synthetic environment, and explains how TechnoSphere is a global experiment in Gibsonian affordance, with particular reference to direct perception, affordances in virtual worlds, and virtual ecological models. It also presents a technical overview of the system and discusses some of the current interface restrictions involved in developing Internet-based projects.

Introduction

TechnoSphere is an evolution simulator that allows people to create their own digital creatures and communicate with them as they grow, evolve and die in a 3D virtual environment. Users of the system have access to a World Wide Web interface that allows them to select the component parts and behavioural characteristics of an artificial life-form. During the construction phase the user chooses, via the Web interface, how many legs, heads, eyes and body parts a creature has. They also choose how effective each item is, bearing in mind that certain abilities will require scarce natural resources (TechnoSphere 1995). The virtual bodies consume energy, age, deteriorate and eventually die; however, when and where these events occur cannot be directly controlled by the user or the artificial life-form. The evolution process controls these events.

The life-forms have the ability to send text, sound, images, and movies of key virtual moments to the user by email. In addition, the user can use the Web interface to request specific information from their life-forms on demand. Users gain experience of the virtual world through the information that their creatures send back, and on the basis of this information users can build new creatures and add to the cycle of events.

With this in mind I shall discuss the work of the psychologist J.J. Gibson and the building of a computational model of an evolutionary ecology. Gibson's ideas on affordance-based perception, direct perception, and the inseparable relationship between creatures and their environments are explored in relation to the TechnoSphere project. I shall also consider where knowledge resides, and how it is acquired, in the evolutionary virtual ecology.

At the time of writing the development of TechnoSphere has not been completed. The constituent parts all exist (each between 50% and 100% complete) and have been tested individually but not as a whole. Animated views into TechnoSphere have been generated and we have seen the digital life-forms moving around the terrain.

Brief overview of the TechnoSphere system

TechnoSphere has five main components; a sketch of how they fit together follows the list:


* World Wide Web Server: TechnoSphere uses a standard World Wide Web interface model to support user interaction. The user establishes an Internet connection using an Internet service provider. The TechnoSphere Web server allows users to construct a new A-life form, register it with the system, and request images, animations, and information about their creatures.


* The Artificial Life Environment: This is a separate program that stores the artificial life forms and calculates their interactions with each other and the 3D model world. It supports the reproduction and evolution of life forms. User requests made from the Web Server are processed here and the relevant information passed onto the rendering and email engines.


* Rendering Engine: The renderer creates images and animations of the life forms and their world from data taken from the Artificial Life Environment. The renderer is an extension of research work carried out at Coventry School of Art & Design that was sponsored by Rediffusion Simulation (now Thomson Training & Simulation) with special mention to Prof. John Vince and Dr Clive Richards.


* Email Engine: The email engine accepts information from the Artificial Life Environment and Rendering Engine, in the form of images, text, and animations, and posts them on to the creator of the life form.


* The Internet: We are currently developing the first TechnoSphere system using a standard JANET 64Kbps Internet provision at LCPDT.
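
To make the division of labour concrete, here is a minimal sketch of how a single user request might flow through these components. All class and method names are hypothetical stand-ins for illustration, not the actual TechnoSphere implementation.

    class ALifeEnvironment:
        """Hypothetical stand-in for the Artificial Life Environment."""
        def query(self, creature_id):
            # Look up the creature's current state in the simulated world.
            return {"creature_id": creature_id,
                    "creator_email": "user@example.com"}

    class RenderingEngine:
        """Hypothetical stand-in for the rendering engine."""
        def render(self, state):
            # Produce an image of the creature in its 3D world.
            return b"...image bytes..."

    class EmailEngine:
        """Hypothetical stand-in for the email engine."""
        def send(self, to, attachment):
            print(f"emailing {len(attachment)} bytes to {to}")

    def handle_request(request, environment, renderer, mailer):
        """Flow of one user request from Web server to email reply."""
        state = environment.query(request["creature_id"])  # A-life environment
        image = renderer.render(state)                     # rendering engine
        mailer.send(state["creator_email"], image)         # email engine

    # The Web server would invoke this for each incoming request:
    handle_request({"creature_id": 42},
                   ALifeEnvironment(), RenderingEngine(), EmailEngine())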

Diagram 1: Client-server relationship between user and TechnoSphere

Interactions with TechnoSphere

The interactions with TechnoSphere are made possible, and constrained, by three main components: the TechnoSphere system itself, the functionality of HTML and the various client browsers, and the available Internet bandwidth.

Internet bandwidth is certainly one of the most restrictive aspects of any email-based transaction; suffice it to say that the Internet is by no means the real-time system that anybody working on a contemporary computer interface project would regard as the desirable norm. Email can take several hours, or even days, to arrive, and many file transfers using domestic equipment can slow to less than 100 bytes per second. For this reason the user/TechnoSphere interaction model adopts existing methodologies for Internet communication in order to provide access to domestic users operating at 9.6Kbps modem speeds.

Artificial life-forms can use email and File Transfer Protocol just as well as their creators (if not better, because they can do it all night), and email offers universal interaction with existing Internet users. We could have developed a new client browser for communication with TechnoSphere; however, receiving email from an artificial world alongside email from your friends may give you an interesting twist with regard to the Turing Test.

These speed restrictions of the current Internet force TechnoSphere to be a latent system. The latency first arises when users create a new life-form using the Web interface. Once the form is completed, the information is passed to the Artificial Life Environment, where the creature responds to its environment (described later). At key moments (births, deaths, and environmental events) the creature emails its creator. These events are not pre-determined and do not happen at predictable times, so there is latency between the user and the information returned by the A-life. This type of interaction requires the user to be a passive receiver of information over long periods of time (hours, days, weeks); however, the user's total active connect time might only be a matter of minutes.

HTML (Hypertext Markup Language), the language used to build Web pages, and the relatively new VRML are offering greater interactive potential for Web systems, but they are subject to the same bandwidth limitations and therefore offer low-detail environments which lead to low-detail images, something we are trying to avoid with TechnoSphere. The emphasis within the TechnoSphere project is to produce high quality renderings of artificial worlds, and this is currently only possible using industrial computer rendering systems. The TechnoSphere environment with its creatures consists of up to half a million polygons, a very large database that cannot easily be rendered on a PC. However, we are using the TWIGS (Selley 1991) rendering system, which is designed to handle large quantities of scene detail and natural phenomena. At resolutions suitable for QuickTime, the format that we will be using to transfer video and sound, the renderer requires about five seconds to compute a frame. The email system, which can operate at up to 64Kbps, would take about 10 seconds to send the image (uncompressed), resulting in a bottleneck at the email end. A domestic user with a 9.6Kbps modem would take up to a minute to receive the same image.
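
As a rough check on these figures, here is a minimal sketch of the transfer-time arithmetic. The ~80KB uncompressed frame size is an assumption inferred from the timings quoted above, not a measured figure.

    # Transfer time for one uncompressed frame; the ~80KB frame size
    # is an assumption inferred from the timings quoted in the text.
    FRAME_BYTES = 80 * 1024

    def transfer_seconds(link_kbps):
        """Seconds to send FRAME_BYTES over a link of the given speed."""
        bytes_per_second = link_kbps * 1000 / 8
        return FRAME_BYTES / bytes_per_second

    print(f"64 Kbps email link: {transfer_seconds(64):.0f}s per frame")   # ~10s
    print(f"9.6 Kbps modem:     {transfer_seconds(9.6):.0f}s per frame")  # ~68s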

Although the Internet is not a real-time system, the Artificial Life Environment is, running at speeds of up to 50 time slices per second. Each time slice allows each creature to have access to the CPU to process what it might do next. This process runs on one CPU dedicated to keeping the Artificial Life Environment going all day (see diagram 1). The second CPU is used by the rendering system, firstly to render requested snapshots and animations of creatures, and secondly to email the results back to the users. In the near future, applications will be developed to run over emerging ATM (Asynchronous Transfer Mode) networks, one of which is the UK's SuperJANET network (SuperJANET 1993). ATM supports high speed transfer of data, images, video and sound and can operate at Gigabit speeds. Although available to a select few at the moment, these services will not be publicly available for about five years. Such services will allow real-time and reactive applications to be run over the Internet, but until this is a reality real-time interaction will remain severely limited.

Knowledge models in TechnoSphere

Each creature is described by a sequence of variables that state how the creature is built, what it looks like and how it behaves. These variables relate directly to the selections made by the users through the Web interface. This model is analogous to the genotype and phenotype relationship in our own DNA system. When reproduction occurs between two artificial life-forms the values that describe the parents are mixed to form the description for the offspring. This process is subject to random variation and mutation; it allows successive generations to inherit the characteristics of previous generations, and also provides a mechanism for evolutionary change over time. Once a user has created a life-form from the Web interface it is placed in the Artificial Life Environment, the program that stores all the life-forms and their 3D model environment. Once in the environment, a life-form interprets the world, makes survival-based decisions, and responds according to its genetic and perceived knowledge.
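
In outline, the reproduction step might look something like the following sketch; the per-gene inheritance scheme and the mutation rate are illustrative assumptions, not the actual TechnoSphere encoding.

    import random

    MUTATION_RATE = 0.02   # assumed probability of mutation per variable

    def reproduce(parent_a, parent_b):
        """Mix two parent descriptions to form an offspring description."""
        child = []
        for gene_a, gene_b in zip(parent_a, parent_b):
            gene = random.choice((gene_a, gene_b))   # inherit from either parent
            if random.random() < MUTATION_RATE:
                gene += random.gauss(0.0, 0.1)       # small random mutation
            child.append(gene)
        return child

    # Example: two parents each described by four variables.
    offspring = reproduce([0.2, 0.9, 0.5, 0.1], [0.8, 0.3, 0.5, 0.7])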

Direct Perception

The Artificial Life Environment hosts all of the creatures and the 3D world, and allocates computer processor time to each creature in turn. These time slices correspond to our sense of now, and during these moments a creature employs its senses to make a simple model of its current environment. At any particular time a creature will be seeking food, fleeing predators, or trying to reproduce, and its behaviour, which is encoded in its genotype, will be modulated by its interpretation of the current environment. In this way, creatures with different encodings will respond differently to the same events in the virtual world.
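
A minimal sketch of such a time-sliced scheduler, assuming hypothetical Creature and step names (the actual TechnoSphere scheduling code is not shown here):

    import time

    TICKS_PER_SECOND = 50   # up to 50 time slices per second

    class Creature:
        """Hypothetical life-form; step() is its turn at the CPU."""
        def step(self, world):
            # Sense the locality, choose an action (feed, flee,
            # reproduce), and update internal state.
            pass

    def run_environment(world, creatures):
        """Give every creature one turn per time slice, in real time."""
        slice_length = 1.0 / TICKS_PER_SECOND
        while True:
            start = time.monotonic()
            for creature in creatures:
                creature.step(world)
            # Sleep off whatever remains of this slice.
            elapsed = time.monotonic() - start
            time.sleep(max(0.0, slice_length - elapsed))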

To support perceptive functions in TechnoSphere we have based the sensory models on real world functions such as sight, sound and touch. Perhaps the most important of these functions is sight. Sight can be described as a type of functionality, whereas seeing is perception drawn from the sight process, whatever that might be. For example, our version of sight, so the Empiricists might postulate, requires retinal projections and high-level cognitive and constructive functions in order to perceive the world. There are many theories concerning the processes of visual perception, but the most seductive for our purposes is the Direct Perception model of J.J. Gibson. Although his work in this area is somewhat criticised for its chauvinism towards artistic images, Direct Perception is a model that might be effective for use in virtual environments, where the computational modelling of seeing is still elusive.

Gibson's Direct Perception theories, which are expressed mainly in terms of visual perception, are based on the coherence of light in our physical world. He argues that light reaching the eye is information-rich in its structure and that in some way, which he does not make explicit, the content of the light is perceived by an integral resonance facility of the perceiver. Arguments about the validity of his ideas do not concern me greatly; however, I am interested in applying a model of this system to TechnoSphere in order to support sight-like functions. The key difference is that the carrier of the information in the real world is light, whereas the carrier of 'visual' information in TechnoSphere is the message passing facility of the programming language we are using. With this language it is possible for creatures to ask direct questions about each other. This process does not model knowledge (a creature does not know and retain the information); rather, it allows creatures to constantly interpret the world based on a constant stream of information.

The artificial life-forms really do see the environment, but not by using the retinal model that we do: they see objects in terms of their affordances by asking the objects to describe themselves. For example, in the real-life scenario of prey meets predator, some process of visual perception makes the prey realise that this is a danger situation and that it should employ flight or fight actions. In the simulation of this event in TechnoSphere the function of visual perception is replaced by an ability for life-forms to directly access the nature of the predator. This process informs the life-form of the type of food that the predator eats and triggers an immediate flight or fight option.
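
A minimal sketch of this query-based seeing, assuming a hypothetical describe() message; the actual TechnoSphere message protocol is not documented here.

    class Creature:
        """Hypothetical life-form that 'sees' by direct interrogation."""
        def __init__(self, diet):
            self.diet = diet     # e.g. "herbivore" or "carnivore"

        def describe(self):
            # Objects answer direct questions about themselves; this
            # message replaces the retinal model of visual perception.
            return {"diet": self.diet}

        def perceive(self, other):
            """Interpret another object by asking it to describe itself."""
            seen = other.describe()
            if seen["diet"] == "carnivore" and self.diet == "herbivore":
                return "flee"    # the predator affords danger
            return "ignore"

    prey = Creature("herbivore")
    predator = Creature("carnivore")
    print(prey.perceive(predator))   # "flee"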

'...perceiving is an act, not a response, an act of attention, not a triggered impression, an achievement not a reflex.' (Gibson 1979)

Affordances in virtual worlds

In the Gibsonian approach the affordances of an environment are what it offers a creature in terms of interaction. The affordance of any particular interaction varies from creature to creature. In the predator-meets-prey scenario the prey will certainly fight, hide or run, depending on the affordances of the local terrain; however, a different life-form may not be concerned about this particular encounter. For example, an artificial herbivore, on encountering an artificial wolf, would run, whereas an artificial flea might have just found a new home.

TechnoSphere is faithful to the Gibsonian model by allowing each creature to perceive its world in terms of the affordance of a particular interaction. The affordance of any interaction will be altered by the condition of the individual; for example, a creature that is not hungry will pass food by in pursuit of its current motivation. As soon as energy in the creature starts to run low it will seek food and ignore other affordances on the way. Affordance is not the same as the classification of an object, where we might state that all predators are dangerous; instead, the perception of an affordance is left to the individual.
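
One way of sketching this state-dependent perception of affordances; the energy threshold and the scoring values are assumptions made for illustration.

    LOW_ENERGY = 20.0   # assumed hunger threshold

    def food_affordance(energy):
        """The same food object affords more to a hungry creature."""
        return 1.0 if energy < LOW_ENERGY else 0.1

    def choose_goal(energy, other_affordances):
        """Pick the interaction with the highest perceived affordance."""
        options = dict(other_affordances)
        options["eat"] = food_affordance(energy)
        return max(options, key=options.get)

    # A sated creature pursues its current motivation; a hungry one eats.
    print(choose_goal(80.0, {"reproduce": 0.6}))   # "reproduce"
    print(choose_goal(10.0, {"reproduce": 0.6}))   # "eat"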

The Ecology of TechnoSphere

Gibson's ecological ideas of visual perception are also of relevance to our work. Firstly, in the realm of perception within evolution, Gibson states that animal and environment are inseparable. The environment implies the creature and the creature implies the environment, and they should be regarded as two interacting systems. For example, it is no coincidence that in an oxygen-rich atmosphere we find oxygen-breathing creatures. Initially, new creatures added to TechnoSphere by the users will not necessarily fit this model. The construction of creatures from the components interface is not limited by any natural selection process and can result in mutant creatures that bear no relationship to the environment. First generation creatures are therefore likely to have a dysfunctional relationship with the virtual 3D ecology. Through competitive evolution, a particular type of creature may die out or evolve into a form more resonant with the environment.

Memory is not a priority function because a creature continually interprets its locality each time it has a turn at processing. However, we are introducing a simple memory model that stores the last places where the creature found food, encountered predators, and reproduced. If these primitive memories are assigned to other creatures rather than to geographic locations, this may result in creatures following each other around in groups.
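
A minimal sketch of such a memory record, assuming (x, y) terrain coordinates for locations; the field names are illustrative, not taken from the TechnoSphere source.

    from dataclasses import dataclass
    from typing import Optional, Tuple

    Location = Tuple[float, float]   # assumed (x, y) terrain coordinates

    @dataclass
    class Memory:
        """The last place each significant event happened."""
        last_food: Optional[Location] = None
        last_predator: Optional[Location] = None
        last_mate: Optional[Location] = None

    memory = Memory()
    memory.last_food = (120.0, 340.5)   # remember where food was last found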

Summary

In the TechnoSphere model, knowledge can exist in four places: the genetic description of a creature, the non-living environment, the encounter, and user-TechnoSphere interaction.

For the creatures in the virtual world the process of perception is an act that is mediated by genetic experience and current motivational state. The genotype and phenotype control the responses of the creatures, but these change over time due to evolutionary processes; evolution is a process of encoding knowledge into an organism. The non-living environment, such as the terrain, directly affects the creatures' movement and offers its own affordances in valleys and other places, useful components in the whole knowledge system. The encounter acts as the driving process for all knowledge: without encounters there is no threat and no reproduction, and hence no change to the genetic encoding. The encounter is the mechanism for creating knowledge. Finally, the user-TechnoSphere encounter of creating the artificial life-forms can be seen as a mechanism for encoding the user's knowledge of TechnoSphere.

Diagram 2: 3D models used in the development of TechnoSphere

Top left: TechnoSphere landscape rendered with TWIGS on Pentium P90 PC.

Top right: First life form in TechnoSphere.

Bottom left: Detail of component models constructed and rendered with SoftImage.

Bottom right: Test selection of wheel components modelled and rendered with SoftImage.

Bibliography

Gibson, J.J. (1979). The Ecological Approach to Visual Perception. Boston: Houghton Mifflin.

Selley, G.L.C. (1991). Trees and Woods Image Generation System. PhD Thesis, Coventry School of Art & Design.

Partridge, C. (1993). Gigabit Networking. Addison-Wesley Publishing Company, Inc.

SuperJANET (1993). SuperJANET brochure and http://www.ukerna.ac.uk/SuperJANET/SuperJANET.html

TechnoSphere (1995). http://www.lond-inst.ac.uk

TechnoSphere is a collaboration between Dr Jane Prophet (Head of Video, University of Westminster), Dr Gordon Selley (Senior Research Fellow, London College of Printing & Distributive Trades), Andrew Kind (Animator at Excess), Julian Saunderson (Lecturer at CEA Middlesex University), Jonathan Spencer (Graphics Designer, BBC), and Tony Taylor-Moran (London College of Printing & Distributive Trades).

http://www.lond-inst.ac.uk/

email TechnoSphere@cairn.demon.co.uk

TechnoSphere is being researched and developed with support from

Film and Video Umbrella, Cambridge Darkroom Gallery, The Arts Council of England, and Digital Workshops Ltd.



