From: Arantza Etxeberria <arantza(a)cogs.susx.ac.uk>
Subject: Artificial Life Workshop Announcement
Date: Mon, 18 Oct 93 10:48:42 BST
ARTIFICIAL LIFE: A BRIDGE TOWARDS A NEW ARTIFICIAL INTELLIGENCE
Palacio de Miramar (San Sebastian, Spain)
December 10th and 11th, 1993
Workshop organised by the Department of Logic and Philosophy of Science,
Faculty of Computer Science & Institute of Logic, Cognition, Language
and Information (ILCLI) of the University of the Basque Country (UPV/EHU)
Directors: Alvaro Moreno (University of the Basque Country)
Francisco Varela (CREA, Paris)
This Workshop will be devoted to a discussion of the impact of work
on Artificial Life on Artificial Intelligence. Artificial Intelligence
(AI) has traditionally attempted to study cognition as an abstract
phenomenon using formal tools, that is, as a disembodied process that
can be grasped through formal operations, independent of the nature of
the system that displays it. Cognition is treated as an abstract
representation of reality. After several decades of research in this
direction the field has encountered several problems that have taken it
to what many consider a "dead end": difficulties in understanding
autonomous and situated agents, in relating to behaviour in a real
environment, in studying the nature and evolution of perception, and
in finding a practical explanation for the operation of most cognitive
capacities, such as natural language, context-dependent action, etc.
Artificial Life (AL) has recently emerged as a confluence of very
different fields trying to study different kinds of features of living
systems using computers as a modelling tool, and, at last, trying to
artificially (re)produce a living system (or a population of them) in
real or computational media. Examples of such phenomena are prebiotic
systems and their evolution, growth and development, self-reproduction,
adaptation to an environment, evolution of ecosystems and natural
selection, formation of sensory-motor loops, and autonomous robots.
Thus, AL is having an impact not only on the classic life sciences but
also on the conceptual foundations of AI and on new methodological
ideas in Cognitive Science.
The aim of this Workshop is to focus on these last two points and to
evaluate the influence of the methodology and concepts emerging in AL
on the development of new ideas about cognition that could
eventually give birth to a new Artificial Intelligence. Some of the
sessions consist of presentations and replies on a specific subject by
invited speakers, while others will be debates open to all
participants in the workshop.
MAIN TOPICS:
* A review of the problems of FUNCTIONALISM in Cognitive Science
and Artificial Life.
* Modelling Neural Networks through Genetic Algorithms.
* Autonomy and Robotics.
* Consequences of the crisis of the representational models of cognition.
* Minimal Living System and Minimal Cognitive System.
* Artificial Life systems as problem solvers.
* Emergence and evolution in artificial systems.
SPEAKERS S. Harnad P. Husbands
G. Kampis B. McMullin
D. Parisi T. Smithers
E. Thompson F. Varela
Further Information: Alvaro Moreno
Apartado 1249
20080 DONOSTIA
SPAIN
E. Mail: biziart(a)si.ehu.es
Fax: 34 43 311056
Phone: 34 43 310600 (extension 221)
34 43 218000 (extension 209)
-----------------------------------------------------------------------
LEVELS OF FUNCTIONAL EQUIVALENCE IN REVERSE BIOENGINEERING:
THE DARWINIAN TURING TEST FOR ARTIFICIAL LIFE
Stevan Harnad
Laboratoire Cognition et Mouvement
URA CNRS 1166 I.B.H.O.P.
Universite d'Aix Marseille II
13388 Marseille cedex 13, France
harnad(a)princeton.edu
ABSTRACT: Both Artificial Life and Artificial Mind are branches of what
Dennett has called "reverse engineering": Ordinary engineering attempts
to build systems to meet certain functional specifications; reverse
bioengineering attempts to understand how systems that have already
been built by the Blind Watchmaker work. Computational modelling
(virtual life) can capture the formal principles of life, perhaps
predict and explain it completely, but it can no more BE alive than a
virtual forest fire can be hot. In itself, a computational model is
just an ungrounded symbol system; no matter how closely it matches the
properties of what is being modelled, it matches them only formally,
with the mediation of an interpretation. Synthetic life is not open to
this objection, but it is still an open question how close a functional
equivalence is needed in order to capture life. Close enough to fool
the Blind Watchmaker is probably close enough, but would that require
molecular indistinguishability, and if so, do we really need to go that far?
-----------------------------------------------------------------------
Phil Husbands
School of Cognitive and Computing Sciences
University of Sussex, BRIGHTON BN1 9QH, U.K.
philh(a)cogs.susx.ac.uk
ABSTRACT: We discuss the methodological foundations for our work on the
development of cognitive architectures, or control systems, for
situated autonomous agents. We focus on the problems of developing
sensory-motor systems for mobile robots, but we also discuss the
applicability of our approach to the study of biological systems. We
argue that, for agents required to exhibit sophisticated interactions
with their environments, complex sensory-motor processing is necessary,
and the design by hand of control systems capable of this is likely
to become prohibitively difficult as complexity increases. We
propose an automatic design process involving artificial
evolution, where the basic building blocks used for evolving cognitive
architectures are noise-tolerant dynamical networks. These networks may
be recurrent, and should operate in real time. The evolution
should be incremental, using an extended and modified version of a
genetic algorithm. Practical constraints suggest that initial
architecture evaluations should be done largely in simulation. To
support our claims and proposals, we summarize results from some
preliminary simulation experiments where visually guided robots are
evolved to operate in simple environments. Significantly, our results
demonstrate that robust visually-guided control systems evolve from
evaluation functions which do not explicitly require monitoring visual
input. We outline the difficulties involved in continuing with
simulations, and conclude by describing specialized visuo-robotic
equipment, designed to eliminate the need to simulate sensors and
actuators.
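The ingredients named in the abstract (noise-tolerant, possibly recurrent dynamical networks updated in real time) can be sketched generically. This is an illustrative guess at such a network, not the authors' actual architecture; all names and parameter values here are invented:

```python
import math
import random

def step(state, weights, inputs, noise=0.05, rng=random.Random(1)):
    """One update of a small recurrent network: each unit sums weighted
    activations of all units (including itself), plus external input,
    plus internal noise, then squashes through tanh.  The saturating
    nonlinearity damps small perturbations, giving noise tolerance."""
    new = []
    for i, _ in enumerate(state):
        total = sum(w * s for w, s in zip(weights[i], state))
        total += inputs[i] + rng.gauss(0.0, noise)
        new.append(math.tanh(total))
    return new

rng = random.Random(0)
n = 4
weights = [[rng.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
state = [0.0] * n
for _ in range(20):  # iterate the dynamics for 20 time steps
    state = step(state, weights, inputs=[0.5, 0.0, 0.0, 0.0])
print(state)  # activations remain bounded in (-1, 1) despite the noise
```

In the work described, the weight matrix would be the object shaped by artificial evolution rather than drawn at random.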
-----------------------------------------------------------------------
Barry McMullin
School of Electronic Engineering
Dublin City University
McMullinB(a)DCU.IE
ABSTRACT: I reconsider the status of computationalism (or, in a weak
sense, functionalism): the claim that being a realisation of some (as
yet unspecified) class of abstract machine is both necessary and
sufficient for having genuine, full-blooded, mentality. This doctrine
is now quite widely (though by no means universally) seen as
discredited. My position is that, though it is undoubtedly an
unsatisfactory (perhaps even repugnant) thesis, the arguments against
it are still rather weak. In particular, I critically reassess John
Searle's infamous Chinese Room Argument, and also some relevant
aspects of Karl Popper's theory of the Open Universe. I conclude that
the status of computationalism must still be regarded as undecided,
and that it may still provide a satisfactory framework for research.
-----------------------------------------------------------------------
Domenico Parisi
Institute of Psychology
National Research Council, Rome
e-mail: domenico(a)irmkant.bitnet
ABSTRACT: Genetic algorithms are methods of parallel search for optimal
solutions to tasks which are inspired by biological evolution and are
based on selective reproduction and the addition of variation through
mutations or crossover. As models of real biological and behavioral
phenomena, however, genetic algorithms suffer from many limitations.
Some of these limitations are discussed under the rubrics of (a)
environment, (b) variation, and (c) fitness, and ways are suggested to
overcome them. Various simulations using genetic algorithms and neural
networks are briefly described which incorporate a more biologically
realistic notion of evolution.
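The selection-reproduction-variation loop the abstract refers to can be shown in a few lines. This is a generic textbook sketch on a toy one-max task (maximize the number of 1 bits), not a reconstruction of Parisi's models; every name and parameter here is an arbitrary choice:

```python
import random

def evolve(pop_size=20, genome_len=16, generations=40, p_mut=0.02, seed=0):
    """Minimal genetic algorithm: selective reproduction plus variation
    through one-point crossover and bit-flip mutation."""
    rng = random.Random(seed)
    fitness = lambda g: sum(g)  # toy fitness: count of 1 bits
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # selective reproduction: only the fitter half become parents
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)        # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < p_mut)     # occasional bit flip
                     for bit in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(sum(best))  # fitness of the best evolved genome
```

The limitations Parisi discusses arise exactly because, in sketches like this, the "environment" is collapsed into a fixed fitness function evaluated outside the agent.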
-----------------------------------------------------------------------
Tim Smithers
Facultad de Informatica
Apartado 649
20080 San Sebastian
smithers(a)si.ehu.es
ABSTRACT: Traditionally, autonomous systems research has been a domain
of Artificial Intelligence. We argue that, as a consequence, it has
been heavily influenced, often tacitly, by folk psychological notions.
We believe that much of the widely acknowledged failure of this
research to produce reliable and robust artificial autonomous systems
can be attributed to its use of and dependence upon folk psychological
constructs. As an alternative we propose taking seriously the
Eliminative Materialism of Paul Churchland. In this paper we present
our reasons for adopting this radical alternative approach and briefly
describe the bottom-up methodology that goes with it. We illustrate
the discussion with examples from our work on autonomous systems.
[Rest of abstracts not yet available]
I feel uneasy reading Zoli's lines, because his sentence structures
(since Kafka and Esterhazy we have known that these matter more than
the words) blur what is generally established together with
hypotheses, idiosyncrasies and ad hoc, after-dinner musings. The
result sounds terribly scientific, but is completely misleading. My
remarks about keeping things separate, in a gentler form, were
pointing at this.
What had to be said I have already said; I thought I was talking all
along about whether life can be simulated on a computer, and if that
was no good, so be it. In any case, in his new letter Zoli states or
understands a few things wrongly, and it is really only those that I
would now like to straighten out.
- What that fellow named Darwin did was not to invent species or
evolution, but to propose an evolutionary mechanism, one that is quite
possibly not even true. At issue is the speciational role of
cumulative selective changes, and ALife shows little in that
direction. If anyone, it was Lamarck, a genius who erred only in that
one famous detail, who worked at elaborating a theory grounding the
nature of life, a theory which, once successfully formulated (he saw
this himself), would apply to every possible form of life,
carbon-based or not. With the naive momentum of positivism, Alfred
Lotka's physical biology continued this; then, in reaction to it,
came Bertalanffy, N. Rashevsky and relational theory, and the rest is
history. In short, the subject grows out of abstractions of a
different kind.
- Conflation: the concept of the "atom" is just as much an abstraction
as that of the species, yet nobody believes that an atomic bomb can be
detonated inside a computer. (Certain aspects of the explosion can of
course be simulated; I simply dislike dwelling on this triviality,
though admittedly I do not know what anyone is assuming, so the caveat
does no harm. As a reminder, however: the strong AI - strong AL
question is not about whether selected phenomena of life can be
modelled by computer, for its colours, say, can be: Kodacolor Gold.
It is about whether life itself can be.)
- Neumann does not confuse the questions of realization and simulation
(for that is what this is about); so little, in fact, that he aimed to
formulate not life itself but its mathematical foundations (he was
perfectly clear about the difference, after his earlier attempt at
constructing the more general kinematic model of life had failed; see
the Princeton lecture series published as "Physical and Logical Theory
of Automata"; I refuse to retell it, or my own book either; whoever is
interested can read them, that is how it goes). As for ANN systems: no
sane person ever believed that they map the brain (Zoli, I presume,
knows this better than I do, being a modeller of neurodynamics; the
two relate to each other the way Lempel's bucket relates to the robe
of the rabbi of Lvov; this joke may be banned next year, so I am
telling it now). In short, these examples do not support Zoli's
"convergence thesis" that every problem is the same.
- It is not the case that coupling algorithms or functions together is
in any sense non-algorithmic. There are famous theorems to the effect
that the class of algorithms is closed under every "conceivable"
operation (or at least every operation conceived so far, or precisely
formulable in the future; in other words, every "effective"
operation). This has nothing to do with chaos or stochasticity;
dragging those in here is mere verbalism. A chaotic system, it is
true, cannot be modelled on a computer (only a limit cycle bounded by
the word length; ergo an SGI Crimson [64 bit] is better than an XT
[8 bit]), but the equation producing the chaos can be. The complexity
of chaos lies in the initial condition, not in the equation. So chaos
fails to fit into the computer not because it is non-algorithmic, but
because exact chaos presupposes an initial value with infinitely many
digits. Even in the chaotic regime, an exactly specified initial
value with finitely many digits, such as zero, yields a finite limit
cycle. "Chaos" is simply a complexity-preserving transformation, and
that is that. The nonlinear patter has been running from the tap for
nearly ten years now; enough of it. (O.E. Roessler, after whom
perhaps the most famous chaotic attractor is named, and with whom I
have worked for years, can put this far more elegantly than I can,
because at first it sounds like praise: he likes to say that
nonlinear models have brought great breakthroughs everywhere, pause,
then, apologetically: except perhaps for a few insignificant
questions, such as life, the mind, the problem of the present (now),
and goodness, entschuldigung.) That something is unpredictable and
that it is non-algorithmic are two totally different things,
compounded by the error that the "unpredictability" of chaos holds
only in a quite special sense: an erroneous prediction degrades
arbitrarily with time. An exact prediction stays exact (see above);
besides, what is there about a pendulum that cannot be predicted? It
swings. Of course, whatever is non-algorithmic, say a thing of which
I do not know whether it swings, or whether it is still a pendulum at
all or has already turned into an immortal (or inaudible) soul
corresponding over the network, is not predictable either; but not
every insect is a beetle.
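The claim that finite word length forces a chaotic computation onto a finite limit cycle can be checked directly: iterate the logistic map at a fixed, deliberately low precision and watch the orbit repeat. A throwaway sketch (the function name, precisions and starting value are arbitrary choices for illustration):

```python
from decimal import Decimal, getcontext

def cycle_length(precision, x0="0.1", r="4", max_steps=100000):
    """Iterate the logistic map x -> r*x*(1-x) with `precision`
    significant digits.  Since only finitely many values are
    representable, the orbit must eventually revisit a state;
    return the length of the limit cycle it then enters."""
    getcontext().prec = precision
    r, x = Decimal(r), Decimal(x0)
    seen = {}
    for step in range(max_steps):
        if x in seen:
            return step - seen[x]  # period of the cycle entered
        seen[x] = step
        x = r * x * (1 - x)
    return None  # no repeat observed within max_steps

print(cycle_length(3))  # very low precision: the orbit repeats quickly
print(cycle_length(5))  # more digits: typically a longer run before repeating
```

This is exactly the word-length point: the equation is perfectly algorithmic, but exact chaos would need infinitely many digits in the state, so any finite machine produces a (possibly long) cycle instead.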
- It is plucked from thin air that the essence of evolution or of life
is that it "does not get stuck", and doubly so: what if life does not
head for infinity (as Teilhard de Chardin thinks) but, say, evolution
is already over? That is not what it hinges on. Nor is it true that
an algorithm cannot be guaranteed never to enter a cycle. The
question known as the Halting Problem shows that the situation is
different: the problem is algorithmically undecidable, which means
there exist algorithms that neither halt nor enter a cycle (in either
case, after all, whether it halts would be decidable; for the sake of
purist mathematicians, more precisely we know there are programs that
either never repeat themselves or whose cycle length can exceed any
bound). It is no great feat, incidentally, to write an "infinitely
long lived transient" in the language of differential equations
either, that is, to find a trajectory that leads to no attractor. The
simplest such thing is the exponentially exploding function, i.e. the
solution of the equation x'(t) = a * x(t). From its existence nothing
metaphysical follows, and it has little to do with life, at most with
Malthusian demography, and that can be had in other ways too.
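That the exponential equation yields a trajectory that never settles onto any attractor is easy to check numerically: Euler-integrate x'(t) = a * x(t) and observe strictly monotone, unbounded growth. A minimal sketch, with arbitrary choices of coefficient and step size:

```python
def euler_exponential(a=1.0, x0=1.0, dt=0.01, steps=1000):
    """Euler integration of x'(t) = a * x(t).  Each step multiplies the
    state by (1 + a*dt) > 1, so the trajectory grows without bound and
    approaches no fixed point, cycle, or other attractor."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * a * xs[-1])
    return xs

xs = euler_exponential()
print(xs[-1])  # about 2.1e4; the exact solution e**10 is ~22026,
               # and Euler slightly underestimates it
```

The integration covers t in [0, 10], so the final value sits near e**10, and no finite integration time ever brings the trajectory back toward a bounded set.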
The point of a self-modifying system is not that it does not halt
(although, as it happens, it really does not halt), but that it
produces information. Without information production I would like to
see how evolution can be modelled, and producing information with an
algorithm has been impossible ever since the Kolmogorov-Chaitin
theory. No magic incantation helps here. What lies beyond the
algorithm? Reality itself (cf. system vs natural system), and the
story is that modelling life means modelling reality in its full
richness, because life exploits that full richness (up and down
across levels, indeed crosswise and stretching them), not the
restricted modes of operation, the rails, that algorithms and other
formal constructions provide for it.
regards, kgy