ARTIFICIAL LIFE

by barkkathulla 2012-09-17 09:57:12

Artificial Life

The first part of this paper explores the general issues in using Artificial Life techniques to program actual mobile robots. In particular, it explores the difficulties inherent in transferring programs evolved in a simulated environment to run on an actual robot. It examines the dual evolution of organism morphology and nervous systems in biology. It proposes techniques to capture some of the search space pruning that dual evolution offers in the domain of robot programming. It explores the relationship between robot morphology and program structure, and techniques for capturing regularities across this mapping.
The second part of the paper is much more specific. It proposes techniques that could allow realistic explorations of the evolution of programs to control physically embodied mobile robots. In particular, we introduce a new abstraction for behaviour-based robot programming that is specially tailored for use with genetic programming techniques. To compete with hand-coding techniques, it will be necessary to automatically evolve programs that are one to two orders of magnitude more complex than those previously reported in any domain. Considerable extensions to previously reported approaches to genetic programming are necessary in order to achieve this goal.
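As a rough illustration of the kind of evolutionary loop involved (a minimal sketch under our own assumptions, not the system proposed in the paper), genetic programming maintains a population of candidate control programs, scores each one, and breeds the next generation from the fitter individuals by crossover and mutation:

    import random

    def evolve(population, fitness, crossover, mutate, generations=100):
        # fitness, crossover, and mutate are hypothetical callables supplied
        # by the user; fitness would typically score a program by running it
        # in a simulated robot environment.
        for _ in range(generations):
            ranked = sorted(population, key=fitness, reverse=True)
            survivors = ranked[: len(ranked) // 2]         # keep the fitter half
            offspring = []
            while len(survivors) + len(offspring) < len(population):
                a, b = random.sample(survivors, 2)
                offspring.append(mutate(crossover(a, b)))  # breed new programs
            population = survivors + offspring
        return max(population, key=fitness)

The hard part, as noted above, is making the program representation and the simulated evaluation faithful enough that the evolved controllers transfer to a physical robot.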

In recent years, a new approach to Artificial Intelligence has developed which is based on building behaviour-based programs to control situated and embodied robots in unstructured, dynamically changing environments. Rather than modularizing perception, world modelling, planning, and execution, the new approach builds intelligent control systems in which many individual modules each directly generate some part of the behaviour of the robot. In the purest form of this model, each module incorporates its own perceptual, modelling, and planning requirements. An arbitration or mediation scheme, built within the framework of the modules, controls which behaviour-producing module has control of which part of the robot at any given time. The programs are layered in their construction but non-hierarchical in their control flow, with lower levels taking care of more primitive activities and higher levels taking care of more sophisticated ones.
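A minimal sketch of such an arbitration scheme, assuming a simple fixed-priority rule and hypothetical module names (the paper does not specify this particular mechanism), might look like the following: each module maps its own percepts to a proposed action, and the first applicable module in priority order gets control of the actuators on each cycle.

    class Behaviour:
        def __init__(self, name, applicable, act):
            self.name = name                # e.g. "avoid", "wander"
            self.applicable = applicable    # percepts -> bool: is this module active?
            self.act = act                  # percepts -> actuator command

    def arbitrate(behaviours, percepts):
        # Modules earlier in the list take precedence when active, so a
        # primitive activity such as obstacle avoidance can pre-empt a more
        # sophisticated one such as wandering or goal seeking.
        for module in behaviours:
            if module.applicable(percepts):
                return module.act(percepts)
        return None                         # no module claims control this cycle

    # Hypothetical modules and threshold values, for illustration only.
    avoid  = Behaviour("avoid",  lambda p: p["sonar"] < 0.3, lambda p: "turn_away")
    wander = Behaviour("wander", lambda p: True,             lambda p: "drive_forward")
    command = arbitrate([avoid, wander], {"sonar": 0.2})     # -> "turn_away"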
This work draws its inspiration from neurobiology, ethology, psychophysics, and sociology. The approach grew out of dissatisfaction with traditional robotics and Artificial Intelligence, which seemed unable to deliver real-time performance in a dynamic world. The key idea of the new approach is to advance both robotics and AI by considering the problems of building an autonomous agent that is physically an autonomous mobile robot and that carries out some useful tasks in an environment which has not been specially structured or engineered for it.
While robots built on these principles have been demonstrated learning calibration information, behaviour coordination, and representations of the world, progress in learning new behaviours has proven more difficult. Today, we are constrained to programming each new behaviour by hand.
Work in Artificial Life has developed techniques for evolving programs to control situated but unembodied agents. At some level, one of the goals of Artificial Life is to move out of the digital medium into that of embodied systems. Is there a match between AL and AI? This paper explores the prospects for using Artificial Life techniques to evolve programs to control physically embodied mobile robots, so that we no longer have to do it all by hand. There have been no reports to date of programs evolved for embodied robots.
There has been work on learning new behaviours using reinforcement learning, e.g., approaches based on Q-learning. The major drawback is the large number of runtime trials required, many more than are needed by real animals, and the need to carefully "shape" the learning by splitting the task into small pieces that the robot learns sequentially. It seems that real animals have innate, built-in structures that facilitate learning particular constrained classes of behaviours. In an evolutionary approach, the vast number of trials is spread over the generations, and runtime learning has a more constrained space in which it must search.
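For concreteness, the tabular Q-learning update referred to above looks roughly like the following sketch; the states, actions, and parameter values are placeholders, not those of any cited experiment:

    import random
    from collections import defaultdict

    alpha, gamma, epsilon = 0.1, 0.9, 0.1    # learning rate, discount, exploration
    Q = defaultdict(float)                    # Q[(state, action)] -> value estimate

    def choose_action(state, actions):
        if random.random() < epsilon:                        # explore occasionally
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(state, a)])     # otherwise exploit

    def update(state, action, reward, next_state, actions):
        # Standard Q-learning backup: nudge the estimate toward the observed
        # reward plus the discounted value of the best action in the next state.
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

Every such update requires an actual trial on the robot, which is exactly why the number of runtime trials becomes the bottleneck.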
It has recently been suggested that genetic programming be applied to behaviour-based embodied robots in order to overcome these limitations.


