Artificial Intelligence – Agents and Environments

Published by shahzaibahmad, 2015-09-02 08:26:44


Artificial Intelligence – Agents and Environments: Movement

The NetLogo model used to produce this FSM (see the URL reference to the actual code at the bottom of this chapter) animates the model by creating a number of agents each tick (the slider shown at the top left of Figure 4.3 controls the births-each-tick variable; in the figure it has been set to 2). Each agent then moves from one state to another – the lime-coloured arrowheads drawn at some of the states show their current locations. An agent makes a random choice when more than one outgoing transition is available. The plot on the left shows how the overall population (labelled “All”) varies with ticks. Also shown on the plot are the current numbers of agents located at the “Young, parents” and “Solitary, retired” stages.

Download free eBooks at bookboon.com 101
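The state-to-state movement just described does not depend on NetLogo. The following Python fragment is a minimal sketch of agents following an FSM, making a random choice whenever more than one outgoing transition is available; the state names and transition table here are simplified placeholders, not the Life Cycle Stages model's actual data:

```python
import random

# Illustrative transition table: state -> list of possible next states.
# (Placeholder names, not the actual stages of the Life Cycle Stages model.)
TRANSITIONS = {
    "single": ["young-couple"],
    "young-couple": ["young-parents"],
    "young-parents": ["middle-aged-parents"],
    "middle-aged-parents": ["empty-nesters", "solitary-retired"],
    "empty-nesters": ["solitary-retired"],
    "solitary-retired": [],   # terminal state: no outgoing transitions
}

def step(state):
    """Move an agent along one transition; choose randomly if several exist."""
    choices = TRANSITIONS[state]
    if not choices:
        return state          # stay put at a terminal state
    return random.choice(choices)

def run(state, ticks):
    """Advance an agent for the given number of ticks."""
    for _ in range(ticks):
        state = step(state)
    return state
```

Running many such agents per tick and counting how many occupy each state reproduces the kind of population plot the model displays.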

Figure 4.3 A screenshot of the Life Cycle Stages model showing an animated FSM that models life cycle stages (based on Jobber, 1998).

A decision tree is often used for representing decision-making behaviour. A tree is a directed graph with no loops, with only one path between any two nodes, and with each node having one or more child nodes and a single parent node (or, for the root node, no parent at all). The leaves of the tree are the nodes with no outgoing paths. In a decision tree, the agent making the decisions starts from the root node, chooses an outgoing path based on a decision associated with the current node, and continues until a leaf of the tree is reached. The leaf that is reached usually represents in some manner the outcome of the decisions made along the way – for example, it can be used for classification, where the decision tree operates in a similar way to the popular parlour game Twenty Questions (such as played by characters in Charles Dickens’ novel A Christmas Carol). In this game, the first player has to guess an object – usually an animal, vegetable or mineral – that the second player is thinking of. The first player is allowed to ask up to twenty questions about the object, while the second player has to answer each question truthfully with only ‘Yes’ or ‘No’. The first player has to devise suitable questions that narrow the search space enough to guess the object before the twenty questions are used up, otherwise the game is lost. An example of a decision tree is shown in Figure 4.4. This decision tree can be used for guessing New Zealand birds.
In the Twenty Questions game, this may have been preceded by a set of eight questions and answers as follows: ‘Is it an animal?’ [Yes], ‘Is it a mammal?’ [No] ‘Is it a bird?’ [Yes], ‘Is it a Northern Hemisphere bird?’ [No], ‘Is it an African or South American bird?’ [No], ‘Is it an Australasian bird?’ [Yes], ‘Is it an Australian bird?’ [No], ‘Is it a New Zealand bird?’ [Yes]. The decision tree asks a further four questions to guess between the eight New Zealand birds listed at the bottom of the tree.
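The structure of such a tree can also be expressed compactly as data plus a traversal loop. The Python sketch below encodes the same questions, answers and birds as the tree of Figure 4.4 as nested tuples, and walks from the root to a leaf; the encoding itself is ours, not part of the NetLogo model:

```python
# A decision tree as nested tuples: (question, yes-subtree, no-subtree);
# a bare string is a leaf (the guess). Questions and birds follow the
# tree drawn by the NZ Birds model of Figure 4.4.
TREE = ("Does the bird fly?",
        ("Is it a parrot?",
         ("Alpine bird?", "Kea", "Kaka"),
         ("White throat?", "Tui", "Pukeko")),
        ("Is the bird extinct?",
         ("Is it large?", "Moa", "Huia"),
         ("Long beak?", "Kiwi", "Weka")))

def classify(tree, answer):
    """Walk from root to leaf, calling answer(question) -> bool at each node."""
    while isinstance(tree, tuple):
        question, yes_branch, no_branch = tree
        tree = yes_branch if answer(question) else no_branch
    return tree
```

For example, answering ‘Yes’ to “Does the bird fly?”, ‘Yes’ to “Is it a parrot?” and ‘Yes’ to “Alpine bird?” reaches the leaf “Kea”, exactly the path the agent traces in the animated model.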

Figure 4.4 A screenshot of the decision tree for classifying New Zealand birds created by the NZ Birds model listed in NetLogo Code 4.3.

We can use NetLogo code similar to that devised for drawing FSMs to also draw decision trees. Each node in the tree can be represented in a similar way to a state in a FSM, with the paths between nodes represented by link transitions. In the code shown in NetLogo Code 4.3 below, there are fifteen states represented by the turtle agents point 0 to point 14 that define nodes in the tree, and fourteen link transitions that define paths between the nodes.

```netlogo
breed [agents agent]
breed [points point]
directed-link-breed [straight-paths straight-path]

agents-own [location]         ;; holds a point
points-own [question]         ;; this is the question associated with the node
straight-paths-own [answer]   ;; the transition paths define answers to the question

to setup
  clear-all ;; clear everything
  set-default-shape points "circle 2"

  create-points 1 [setxy   5 35 set question "Does the bird fly?"]   ;; point 0, level 0 (root)
  create-points 1 [setxy -25 25 set question "Is it a parrot?"]      ;; point 1, level 1
  create-points 1 [setxy  35 25 set question "Is the bird extinct?"] ;; point 2, level 1
  create-points 1 [setxy -40 15 set question "Alpine bird?"]         ;; point 3, level 2
  create-points 1 [setxy -10 15 set question "White throat?"]        ;; point 4, level 2
  create-points 1 [setxy  20 15 set question "Is it large?"]         ;; point 5, level 2
  create-points 1 [setxy  50 15 set question "Long beak?"]           ;; point 6, level 2
  create-points 1 [setxy -50  5 set question "Kea?"]                 ;; point 7, level 3
  create-points 1 [setxy -30  5 set question "Kaka?"]                ;; point 8, level 3
  create-points 1 [setxy -20  5 set question "Tui?"]                 ;; point 9, level 3
  create-points 1 [setxy   0  5 set question "Pukeko?"]              ;; point 10, level 3
  create-points 1 [setxy  10  5 set question "Moa?"]                 ;; point 11, level 3
  create-points 1 [setxy  30  5 set question "Huia?"]                ;; point 12, level 3
  create-points 1 [setxy  40  5 set question "Kiwi?"]                ;; point 13, level 3
  create-points 1 [setxy  60  5 set question "Weka?"]                ;; point 14, level 3

  ask patches [
    ;; don't allow the viewer to see these patches; they are only for
    ;; displaying labels on separate lines
    if (pxcor =   1 and pycor = 35) [set plabel [question] of point 0]
    if (pxcor = -29 and pycor = 25) [set plabel [question] of point 1]
    if (pxcor =  30 and pycor = 25) [set plabel [question] of point 2]
    if (pxcor = -44 and pycor = 15) [set plabel [question] of point 3]
    if (pxcor = -14 and pycor = 15) [set plabel [question] of point 4]
    if (pxcor =  16 and pycor = 15) [set plabel [question] of point 5]
    if (pxcor =  46 and pycor = 15) [set plabel [question] of point 6]
    if (pxcor = -48 and pycor =  0) [set plabel [question] of point 7]
    if (pxcor = -27 and pycor =  0) [set plabel [question] of point 8]
    if (pxcor = -18 and pycor =  0) [set plabel [question] of point 9]
    if (pxcor =   4 and pycor =  0) [set plabel [question] of point 10]
    if (pxcor =  12 and pycor =  0) [set plabel [question] of point 11]
    if (pxcor =  32 and pycor =  0) [set plabel [question] of point 12]
    if (pxcor =  42 and pycor =  0) [set plabel [question] of point 13]
    if (pxcor =  63 and pycor =  0) [set plabel [question] of point 14]
  ]

  ask points [set size 5 set color blue]

  ask point 0 [create-straight-path-to point 1]
  ask point 0 [create-straight-path-to point 2]
  ask point 1 [create-straight-path-to point 3]
  ask point 1 [create-straight-path-to point 4]
  ask point 2 [create-straight-path-to point 5]
  ask point 2 [create-straight-path-to point 6]
  ask point 3 [create-straight-path-to point 7]
  ask point 3 [create-straight-path-to point 8]
  ask point 4 [create-straight-path-to point 9]
  ask point 4 [create-straight-path-to point 10]
  ask point 5 [create-straight-path-to point 11]
  ask point 5 [create-straight-path-to point 12]
  ask point 6 [create-straight-path-to point 13]
  ask point 6 [create-straight-path-to point 14]

  ask straight-paths [set label-color lime]
  ask straight-path 0 1  [set answer "Yes" set label answer]
  ask straight-path 0 2  [set answer "No"  set label answer]
  ask straight-path 1 3  [set answer "Yes" set label answer]
  ask straight-path 1 4  [set answer "No"  set label answer]
  ask straight-path 2 5  [set answer "Yes" set label answer]
  ask straight-path 2 6  [set answer "No"  set label answer]
  ask straight-path 3 7  [set answer "Yes" set label answer]
  ask straight-path 3 8  [set answer "No"  set label answer]
  ask straight-path 4 9  [set answer "Yes" set label answer]
  ask straight-path 4 10 [set answer "No"  set label answer]
  ask straight-path 5 11 [set answer "Yes" set label answer]
  ask straight-path 5 12 [set answer "No"  set label answer]
  ask straight-path 6 13 [set answer "Yes" set label answer]
  ask straight-path 6 14 [set answer "No"  set label answer]

  ;; ask points [ create-path-with one-of other points ]
  ;; lay it out so links are not overlapping
  ask straight-paths [ set thickness 0.5 ]

  create-agents 1 [
    set color lime
    set size 5
    set location point 0 ;; start at point 0
    move-to location
  ]
  ask links [ set thickness 0.5 ]
end
```

NetLogo Code 4.3: Code to define and visualise the decision tree in Figure 4.4.

As before, the code defines a turtle agent breed called agent, and we can also use an agent of this breed to show how the decision tree is executed depending on the decisions taken, using the code listed in NetLogo Code 4.4. The code works by selecting the out-link that matches the user’s answer to the question associated with the current state the turtle agent is visiting. If a leaf node is reached and the guess is correct, the program responds with the user message “Excellent! Let’s try another bird.” Otherwise it responds with the user messages “Sorry, I cannot guess what the bird is.” and “Let’s try another bird.”

```netlogo
to go
  ask agents [
    let neighbours [out-link-neighbors] of location
    let user-question [question] of location
    let answers []        ;; all the answers for current node
    let answers-points [] ;; (the points where the agent should move to depending on the answer)
    let parent-point location
    ask [out-link-neighbors] of location [
      let this-link in-link-from parent-point
      let this-answer [answer] of this-link
      set answers fput this-answer answers
      set answers-points fput self answers-points
    ]
    ifelse empty? answers
    [ ;; we are at a leaf node
      let user-answer user-one-of user-question ["No" "Yes"]
      ifelse user-answer = "Yes"
        [ user-message "Excellent! Let's try another bird." ]
        [ user-message "Sorry, I cannot guess what the bird is."
          user-message "Let's try another bird." ]
      set location point 0 ;; Go back to the beginning
      ask links [ set thickness 0.5 ] ;; (reset transitions so it doesn't show path taken)
    ]
    ;; else we are at an internal node
    [ let user-answer user-one-of user-question answers
      let pos position user-answer answers
      let new-location item pos answers-points
      ;; change the thickness of the link I will cross over to show
      ;; the path taken
      ask [link-with new-location] of location [ set thickness 1.1 ]
      face new-location
      move-to new-location
      set location new-location
    ]
  ]
  tick
end
```

NetLogo Code 4.4: Code to animate the execution of the decision tree of Figure 4.4.

Decision trees and FSMs are two ways of representing the behaviour of autonomous agents. If we view behaviour as movement carried out in an abstract environment, then the link between movement and behaviour is clear, as has been demonstrated by the common use of basic animation techniques in the NetLogo code examples above. The frame of reference or viewpoint of the observer is also an important aspect of behaviour to consider. For example, if the agent has the ability to make decisions for itself (it has some degree of autonomy) and also has the ability to move, then a lack of movement is usually seen to be significant. There are different types of behaviour, such as communication, searching and reasoning, that we will explore in more detail in later chapters. We will see that movement of agents in environments is a common component in all of these behaviours.

4.5 Computer Animation

Computer animation is the process of manipulating images being displayed in order to create an illusion of movement. The optical illusion is due to an eye phenomenon called ‘persistence of vision’, where the eye retains an impression of an image for a short period of time after it has disappeared. Computer animation uses a variety of techniques. A traditional approach is to show a succession of images, with each image varying slightly from the next.
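The frame-succession technique can be illustrated in a few lines of Python that ‘animate’ a character across a line of text; this is only a toy sketch (the rendering and timing are ours), but the principle – many slightly different images shown in rapid succession – is the same one the models below rely on:

```python
import time

def render(position, width = 10):
    """One frame: an asterisk at `position` on a line of dots."""
    return "." * position + "*" + "." * (width - position - 1)

def animate(width = 10, delay = 0.05):
    """Print a succession of frames, each varying slightly from the next,
    relying on persistence of vision to suggest movement."""
    for position in range(width):
        print("\r" + render(position, width), end = "", flush = True)
        time.sleep(delay)
    print()
```

Calling animate() redraws the line roughly twenty times a second, which is fast enough for the asterisk to appear to glide rather than jump.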

Computer animation is important for Artificial Intelligence research as it can be used not only to develop simulations of computer agents to test the believability of their behaviour in virtual environments, but also to help build models to predict the behaviour of real-life agents such as humans and robots. Various examples of the use of computer animation for Artificial Life, virtual creatures, and game and movie agents will be described in Volume 2 of this book series. We have already used basic computer animation techniques in the NetLogo examples above (see Figures 4.2 to 4.4) to emphasize the link between movement and behaviour.

Figure 4.5 A screenshot of the Shape Animation Example model listed in NetLogo Code 4.5.

The NetLogo Models Library provides a Shape Animation Example model to illustrate how to use shapes to create animations (see Figure 4.5). The Turtle Shapes Editor has been used to create nine different person shapes and sixteen different flower shapes, as shown in Figure 4.6. The model works by successively showing the different shapes from one frame to the next. This provides a primitive form of animation similar to what is produced by a flick book (also called a flip book), often used in illustrated books for children, which contains a series of pictures that vary gradually from one page to the next and which become animated when flicked through rapidly.

Figure 4.6 The nine different person shapes and seven of the sixteen different flower shapes used by the Shape Animation Example model in NetLogo Code 4.5.

The code for the model is listed below in NetLogo Code 4.5. The model defines two breeds – flowers and people. When the go button is pressed, the program calls the animate procedure repeatedly at the speed determined by the global variable seconds-per-frame. The animate procedure first calls the procedure move-person, which simply selects the next person shape to be displayed and moves forward a small amount; it then calls the procedure age-flower, which sets the shape of each flower turtle to the next shape in the sequence; and finally it sprouts a new flower 20% of the time the procedure is called.

```netlogo
breed [ flowers flower ]
breed [ people person ] ;; putting people last ensures people appear
                        ;; in front of flowers in the view

people-own [frame]  ;; ranges from 1 to 9
flowers-own [age]   ;; ranges from 1 to 16

to setup ;; executed when we press the SETUP button
  clear-all ;; clear all patches and turtles
  create-people 1 [
    set heading 90 ;; i.e. to the right
    set frame 1
    set shape "person-1"
  ]
end

to go
  every seconds-per-frame [
    animate
    tick
  ]
end

to animate
  move-person
  ask flowers [ age-flower ]
  if random 100 < 20 [ make-flower ]
end

to move-person
  ask people [
    set shape (word "person-" frame)
    forward 1 / 20
    ;; The shapes editor has a grid divided into 20 squares, which when
    ;; drawing made for useful points to reference the leg and shoe, to
    ;; make it look like the foot would be moving one square backward
    ;; when in contact with the ground (relative to the person), but
    ;; with a relative velocity of 0, when moving forward 1/20th of
    ;; a patch each frame
    set frame frame + 1
    if frame > 9 [ set frame 1 ] ;; go back to beginning of cycle of animation frames
  ]
end

to age-flower ;; flower procedure
  set age (age + 1) ;; age is used to keep track of how old the flower is
  ;; each older plant is a little bit taller with a little bit
  ;; larger leaves and flower.
  if (age >= 16) [ set age 16 ] ;; we only have 16 frames of animation, so stop age at 16
  set shape (word "flower-" age)
end

to make-flower
  ;; if every patch has a flower on it, then kill all of them off
  ;; and start over
  if all? patches [any? flowers-here]
    [ ask flowers [ die ] ]
  ;; now that we're sure we have room, actually make a new flower
  ask one-of patches with [not any? flowers-here] [
    sprout 1 [
      set breed flowers
      set shape "flower-1"
      set age 0
      set color one-of [magenta sky yellow]
    ]
  ]
end
```

NetLogo Code 4.5: Code to animate a figure walking through a field of growing flowers.

As a further example, NetLogo Code 4.6 below illustrates how to provide an illusion of a human stick figure walking across the screen, as shown in Figure 4.7. The program works by repeatedly drawing six slightly different stick figures in succession, with each figure varying only in the positions of the arms and legs.

Figure 4.7 A screenshot of a stick figure moving progressively from left to right across the screen created by the Stick Figure Walking model listed in NetLogo Code 4.6.

```netlogo
breed [paths path]               ;; turtles that draw the limbs and body of the stick
                                 ;; figure as a jointed path
breed [circle-paths circle-path] ;; for drawing the head

to setup
  clear-all
  create-paths 5 [initialize-paths] ;; agents to draw paths for four limbs
                                    ;; (two legs, two arms), plus body
  set-default-shape paths "circle"  ;; this is the shape of the turtle for
                                    ;; drawing the path
  ifelse forward-movement-increment > 0
  [draw-stick-figure -56 0 [160 7 170 7] [190 7 200 7] [165 7 130 5] [195 7 130 5] [0 0 0 12]]
  ;; else
  [draw-stick-figure  56 0 [160 7 170 7] [190 7 200 7] [165 7 130 5] [195 7 130 5] [0 0 0 12]]
end

to initialize-paths
  pen-up
  set size 0
  set pen-size 4
end

to initialize-circle-paths
  pen-up
  set size 0
  set pen-size 8
end

to go
  let frame 1
  let xpos forward-movement-increment
  ifelse forward-movement-increment > 0
  [set xpos xpos - 56]
  ;; else
  [set xpos xpos + 56]
  while [xpos > -57 and xpos < 57] [
    if clear-display-in-between [clear-drawing]
    let front-leg item (frame mod 6)
      [[160 7 170 7] [170 7 180 7] [180 7 190 7] [190 7 200 7] [180 7 190 7] [170 7 180 7]]
    let back-leg item (frame mod 6)
      [[190 7 200 7] [180 7 190 7] [170 7 180 7] [160 7 170 7] [170 7 180 7] [180 7 190 7]]
    let front-arm item (frame mod 6)
      [[165 7 130 5] [173 7 125 5] [187 7 125 5] [195 7 130 5] [187 7 125 5] [173 7 125 5]]
    let back-arm item (frame mod 6)
      [[195 7 130 5] [187 7 125 5] [173 7 125 5] [165 7 130 5] [173 7 125 5] [187 7 125 5]]
    let body [0 0 0 12] ;; body does not change orientation
    draw-stick-figure xpos 0 front-leg back-leg front-arm back-arm body
    set frame frame + 1
    set xpos xpos + forward-movement-increment
    tick
  ]
end

to draw-limb [limb-color top-x top-y specs]
  ;; draws a limb for the stick figure according to the specifications
  ;; top of limb is at co-ordinates top-x, top-y
  ;; middle joint is drawn at heading [item 0 specs] with length
  ;; [item 1 specs]
  ;; bottom of limb is drawn at heading [item 2 specs] from the joint with
  ;; length [item 3 specs]
  let dir-to-joint item 0 specs
  let joint-length item 1 specs
  let dir-to-bot item 2 specs
  let bot-length item 3 specs
  pen-up
  set color limb-color
  set size 1.5
  setxy top-x top-y
  pen-down
  stamp
  if joint-length > 0 [
    set heading dir-to-joint
    forward joint-length
    stamp
  ]
  set heading dir-to-bot
  forward bot-length
  stamp
  pen-up
end

to draw-stick-figure [xpos ypos front-leg back-leg front-arm back-arm body]
  ;; draw the stick figure in the following order so that the front leg
  ;; and arms appear in front of the back leg and arms
  ask path 0 [draw-limb gray  xpos ypos      back-leg]  ;; back leg
  ask path 3 [draw-limb gray  xpos ypos + 12 back-arm]  ;; back arm
  ask path 1 [draw-limb white xpos ypos      front-leg] ;; front leg
  ask path 2 [draw-limb white xpos ypos      body]      ;; body
  ask path 4 [draw-limb white xpos ypos + 12 front-arm] ;; front arm
  create-circle-paths 1 [initialize-circle-paths]
  set-default-shape circle-paths "circle 2"
  ask circle-path 5 [setxy xpos 15 set size 6 set color white stamp]
end
```

NetLogo Code 4.6: Code to make a stick figure ‘walk’ across the screen.
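Each limb in the listing above is positioned by moving at a heading for a length, twice: once to the joint, once to the end point. Since NetLogo headings are measured in degrees clockwise from north, a point advanced by distance d at heading h moves by (d·sin h, d·cos h). The Python sketch below reproduces that geometry (the helper names are ours, for illustration only):

```python
import math

def advance(point, heading, distance):
    """Move from `point` by `distance` at a NetLogo-style heading
    (degrees clockwise from north): x grows with sin, y with cos."""
    rad = math.radians(heading)
    return (point[0] + distance * math.sin(rad),
            point[1] + distance * math.cos(rad))

def limb_points(top, specs):
    """Return the three points of a limb from a draw-limb spec list:
    [dir-to-joint, joint-length, dir-to-bot, bot-length]."""
    joint = advance(top, specs[0], specs[1])
    bottom = advance(joint, specs[2], specs[3])
    return top, joint, bottom
```

For example, a spec of [90, 2, 180, 3] from the origin places the joint two units to the east (heading 90) and the end point three units below it (heading 180).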

The program uses two turtle breeds for drawing paths: one that uses straight lines for drawing the limbs and the body, and another that uses a circular path for drawing the head. The code relies on two core procedures – draw-limb and draw-stick-figure. The draw-limb procedure draws a limb, either an arm or a leg, as a path consisting of three points – start, middle and end points – with two straight lines connecting the middle point to the other two. The parameter limb-color specifies the colour in which the points and lines are drawn. It can be used to provide a semblance of depth to the stick figure by using a brighter colour (for example, white) for the front limbs and a duller colour (for example, gray) for the back limbs. For convenience, the procedure is also used for drawing the body as a single line. The draw-stick-figure procedure draws the stick figure by first drawing the back leg and arm, followed by the front leg, then the body and front arm, and finally the head. The main go procedure repeatedly calls the draw-stick-figure procedure with six different specifications that define slightly different positions of the limbs.

The Stick Figure Animation model goes further and allows the user to create multiple stick figures and move them around, as shown in Figure 4.8. The program allows the user to record a snapshot of the screen as a ‘frame’, and then play all the frames back at once in order to create an animation sequence.

Figure 4.8 A screenshot of the Stick Figure Animation model showing multiple stick figures, with the right one being edited in order to change its body position for the next frame.

The model works by recording the frames in a novel way. Rather than storing the turtle and link information of each frame separately, new copies of the turtle and link agents are made, and the old copies are then hidden from the viewer. In other words, all the old frames still exist in the environment – the user just can’t see them. Associated with each agent is extra information storing the frame number when it was recorded. This is then used to determine when each agent should be visualised during the animation sequence. How this is done is shown in NetLogo Code 4.7.
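Stripped of turtles and links, the recording idea reduces to: keep a snapshot of the scene for every frame, and only ever display one of them. A Python sketch of that scheme (class and method names are ours, not the model's):

```python
import copy

class FrameRecorder:
    """Keep every recorded frame; old frames persist but are simply
    never shown, mirroring how the model hides old agent copies."""

    def __init__(self):
        self.frames = []   # frames[i] is the scene snapshot at frame i

    def record(self, scene):
        # deep-copy so later edits to the live scene don't alter history
        self.frames.append(copy.deepcopy(scene))

    def play(self):
        # yield one frame at a time: only the current frame is 'visible'
        for scene in self.frames:
            yield scene
```

The deep copy plays the role of hatching new agent copies: once a frame is recorded, further editing of the live scene cannot disturb it.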

```netlogo
breed [points point] ; points associated with the stick figure
breed [heads head]   ; for drawing the head
breed [sides side]   ; the four sides of the selection square

turtles-own [
  frame   ; number of frame the turtle is associated with
  child ] ; copy of turtle for next frame

links-own [link-frame] ; number of frame the link is associated with

globals [
  frames     ; how many frames so far
  selected ] ; agentset of currently selected objects

to start-recording
  clear-all
  set frames 0 ; no frames so far
  set-default-shape points "circle"  ; this is the shape of the points
  set-default-shape heads "circle 2" ; this is the shape of the stick figure's heads
  set-default-shape sides "line"     ; this is the shape of the selected region sides
  ;; initially, no turtles are selected
  set selected no-turtles
end

to record-this-frame
  ; records current "frame" by copying all the agents and links,
  ; making them all invisible, then incrementing their frame number
  let this-child nobody
  let that-child nobody
  let this-colour white
  let this-thickness 1
  let this-frame 0
  ask turtles with [frame = frames] [
    ; record all turtles in this frame for posterity
    hatch 1 ; make a new copy of this turtle for next frame
      [ set frame frames + 1
        set this-child self ]
    set child this-child
    hide-turtle ; make old copy invisible
  ]
  ask turtles with [frame = frames] [
    ; record all their links for posterity as well
    set this-child child
    ask my-links [
      ; copy this link
      set that-child [child] of other-end
      set this-colour color
      set this-thickness thickness
      ask this-child ; make a new copy of this link for next frame
        [ setup-link that-child this-colour this-thickness (frames + 1) ]
      hide-link ; make old copy invisible
    ]
  ]
  set frames frames + 1
end

to setup-link [ this-turtle colour this-thickness this-frame ]
  ; for setting up the link
  create-link-with this-turtle [
    set link-frame this-frame
    set thickness this-thickness
    set color colour
  ]
end

to play-frames
  ; creates an animation by playing all the frames that have been
  ; recorded as invisible turtles and links
  let this-frame 0
  while [this-frame <= frames] [
    ; display all the frames in sequence
    ask turtles [ hide-turtle ] ; hide all turtles first
    ask links [ hide-link ]     ; hide the links as well
    ask turtles with [frame = this-frame] [ show-turtle ]
    ask links with [link-frame = this-frame] [ show-link ]
    display
    wait frame-delay
    set this-frame this-frame + 1
  ]
end
```

NetLogo Code 4.7: Selected code for animating stick figures for the Stick Figure Animation model.

The code defines three agent breeds: points, for defining where the hands, elbows, feet, knees, neck and bottom are for the stick figures; heads, for defining where the head is; and sides, used for selecting the turtle agents when editing. It then defines the three procedures that are associated with the start-recording, record-this-frame and play-frames buttons at the top left of the model’s Interface.

All turtle agents (i.e. heads and points) own two variables: frame, which is the frame number when the object was created; and child, which is used by the record-this-frame procedure to help copy the links between turtle agents. In effect, this is a forward pointer from the parent agent to the child agent when the agents and links are copied during this procedure. It is used to ensure the links between child agents parallel the links between the parent agents once they have been copied. The parent agents are then made invisible. The play-frames procedure then performs the animation by incrementing the frame number, and making visible only those agents and links that have the current frame number, while making everything else invisible. Although the code for doing this is relatively straightforward, it is not the most efficient way of performing the animation – it cycles through all the agents every frame, which may cause the animation to slow down if the number of agents and frames is large. A delay is also added to slow the animation down – this can be controlled by the frame-delay slider in the model’s Interface.
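One way to avoid scanning every agent on every frame is to index the agents by frame number once, before playback begins, so each frame touches only its own agents. The following Python sketch illustrates the idea; it is our own illustration of the optimisation, not the model's code:

```python
from collections import defaultdict

def index_by_frame(agents):
    """Group agents by their frame number in a single pass."""
    by_frame = defaultdict(list)
    for agent in agents:
        by_frame[agent["frame"]].append(agent)
    return by_frame

def play(by_frame, n_frames):
    """Yield, per frame, only the agents recorded for that frame."""
    for f in range(n_frames):
        yield by_frame.get(f, [])
```

With this indexing, playback cost per frame depends only on the number of agents in that frame, rather than on the total number of agents across all frames.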

The start-recording procedure simply clears the environment, sets the default shapes for each of the agents, and sets the global variable selected to an empty turtle agentset. This variable is used during editing, when the mouse is clicked or dragged, to show which turtles have been selected for moving around or for deleting. Code for doing this can be found in the model by following the URL link below. Note that this model uses the same code as the Map Drawing model that will be described in Section 9.7. In this respect, we can consider each frame in an animation sequence as analogous to a map, with the process of animation being equivalent to animated mapping. This latter topic is discussed in the next section.

4.6 Animated Mapping and Simulation

The purpose of maps is to provide a method of representing what exists in an environment. Many maps, such as topographical maps, are static representations of the environment – they represent only those objects whose positions do not change. However, environments are usually dynamic from the frame of reference of the observer, with the positions of agents and objects changing constantly over time. Clearly, we require some method of representing agents and objects in motion if we wish to represent the environment adequately when we map it. One solution, developed since the 1950s, is ‘animated mapping’, which combines computer animation with mapping. Animation can be defined as the rapid display of 2D or 3D images in order to create an illusion of motion. In animated mapping, a map is used to represent the static relationships that exist in the environment, and this is then overlaid with animation to represent the changes that are occurring in the environment over time.

The visualisation provides an opportunity to view historic events, and to alter the timescale – either by speeding up the animation so that it is faster than real time, or by slowing it down. By altering the sequence of events and trying out different scenarios (that is, asking What-If questions such as “What would happen if this were to happen…?”), the visualisation moves more into simulation than mapping. The role of simulation is to imitate or model natural or human systems, to determine the possible consequences of a set of actions performed by agents, or to determine the possible future positions of objects in motion. The main role of mapping is to represent pre-existing environmental relationships as closely as possible, but a further role includes using it for predicting what might occur in the future via simulation.

Artificial Intelligence – Agents and Environments Movement Some examples of animated mapping are the visualisation of the decrease in the Arctic Ice Sheet and the Greenland Ice Sheet in recent years, the spread of civilisation and population growth over the last two centuries, and the predicted paths of hurricanes as often shown on weather channels. Animation provides an opportunity to add the further dimension of time that is difficult to represent in static maps, and include variables such as duration, rate of change and its location, the order events occur, the time at which changes occur, frequency and synchronisation (Slocum, 2005). The use of animated maps on the Internet is increasing where the user has the ability to witness how changes occur over time and where the user can also manipulate various parameters such as the viewpoint, the rate of change, and the type of data that is visualised. The Models Library in NetLogo also provides some examples of animated mapping. The Grand Canyon model simulates the rainfall on the eastern end of the Grand Canyon (where Crazy Jug Canyon and Saddle Canyon meet to form Tapeats Canyon). Each patch in the simulated environment corresponds to area approximately 32m on each side, and is based on data from the National Elevation Dataset available at http://seamless.usgs.gov. Figure 4.9 provides two screenshots after the model in separate simulations has been executing for 50 ticks and 100 ticks respectively, with the rainfall rate set at 10 drops per tick. The simulation represents higher elevations by lighter colours and lower elevations in darker colours, and shows how the raindrops flow from the lighter down to the darker patches, and start forming pools of water at locations where lower land is surrounded by higher land. Figure 4.9: NetLogo simulation of rainfall in the Grand Canyon (using the Grand Canyon model). Another NetLogo model that uses animated mapping is the Continental Divide model. 
The purpose of the model is to demonstrate how the continental divide is located: the line that separates a continent according to which of two bodies of water the rain falling on each region flows into. The example used in the simulation is the North American continent, and the two bodies of water are the Pacific and Atlantic oceans.

Figure 4.10 shows four screenshots at different ticks – at 0 ticks (upper left), 407 ticks (upper right), 791 ticks (bottom left) and 1535 ticks (bottom right). In the simulation, the model is initialised with an elevation map, and then both oceans are incrementally raised, with the two flood plains gradually converging towards each other over the continent until they eventually crash into each other. Where they crash into each other is where the continental divide is defined to be.

Animated mapping and simulation also provide an opportunity for representing knowledge in a way that is more akin to the situatedness and embodiment approach to Artificial Intelligence that is adopted in these books. Further details of this are provided in Chapter 9. In the next few chapters, we will also explore how we can get basic agents in NetLogo to perform animated movement and motion in simple 2D environments, and to create animated mazes. The maze examples will be used in later chapters to help explain important types of agent behaviour such as searching. Also, all the NetLogo models described in these books and found in the Models Library can be considered to be examples of animated mapping and simulation.
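The incremental flooding process used by the Continental Divide model can be sketched in NetLogo roughly as follows. This is a simplified sketch rather than the model's actual code: the patch variables elevation and ocean, and the global water-level, are assumptions made purely for illustration.

```netlogo
globals [ water-level ]
patches-own [ elevation ocean ]  ;; ocean is "pacific", "atlantic" or "none"

to flood
  ;; raise the water level by one step, then let each ocean spread:
  ;; a dry patch floods when it lies at or below the water level and
  ;; touches a neighbour already claimed by one of the two oceans
  set water-level water-level + 1
  ask patches with [ ocean = "none" and elevation <= water-level ] [
    let flooded neighbors with [ ocean != "none" ]
    if any? flooded [
      set ocean [ ocean ] of one-of flooded
      set pcolor ifelse-value (ocean = "pacific") [ blue ] [ sky ]
    ]
  ]
  ;; the continental divide emerges along the line where patches
  ;; claimed by the two different oceans finally meet
end
```

Calling flood repeatedly from a go procedure would reproduce the qualitative behaviour described above, with the two flood plains creeping towards each other as the water level rises.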

Figure 4.10: Animation showing how the North American continental divide is located (using the Continental Divide model).

4.7 Summary

The importance of movement for situated agents has been emphasised. From a design perspective, we can characterise an agent’s behaviour and decision-making from an observer’s frame of reference in terms of the movements it exhibits. If we are to map environments and behaviour in order to design agent-oriented systems for artificial intelligence, then movement must also be included. Animation of maps presents one way of achieving this.

A summary of important concepts to be learned from this chapter is shown below:

• Movement for an agent concerns how the agent moves its body, and how it chooses to move around the environment.
• Motion refers to a constant change in the location of a body as a result of forces applied to it. Motion is observed from a particular frame of reference. Everything is in constant motion relative to an infinite number of frames of reference.
• The behaviour of a situated agent can be characterised in terms of movement and motion in an environment.
• Computer animation relies on an eye phenomenon called ‘persistence of vision’ to provide an illusion of movement.
• Animated maps and simulation are useful tools for representing and visualising dynamic environments.

The code for the NetLogo models described in this chapter can be found as follows:

Model                    URL
Two States               http://files.bookboon.com/ai/Two-States.nlogo
Life Cycle Stages        http://files.bookboon.com/ai/Life-Cycle-Stages.nlogo
NZ Birds                 http://files.bookboon.com/ai/NZ-Birds.nlogo
Stick Figure Animation   http://files.bookboon.com/ai/Stick-Figure-Animation.nlogo
Stick Figure Walking     http://files.bookboon.com/ai/Stick-Figure-Walking.nlogo

Model                    NetLogo Models Library (Wilensky, 1999) and URL
Shape Animation Example  Code Examples > Shape Animation Example
                         http://ccl.northwestern.edu/netlogo/models/ShapeAnimationExample
Continental Divide       Earth Science > Continental Divide
                         http://ccl.northwestern.edu/netlogo/models/ContinentalDivide
Grand Canyon             Earth Science > Grand Canyon
                         http://ccl.northwestern.edu/netlogo/models/GrandCanyon

5 Embodiment

You never really understand a person until you consider things from his point of view – until you climb into his skin and walk around in it.
Atticus Finch in ‘To Kill a Mockingbird’ by Harper Lee. 1960.

The principle of sensory-motor co-ordination states that all intelligent behavior (e.g. perception, categorization, memory) is to be conceived as a sensory-motor co-ordination… The principle has two main aspects. The first relates to embodiment. Perception, categorization, and memory, processes that up to this point have been viewed from an information processing perspective only, must now be interpreted from a perspective that includes sensory and motor processes… Embodiment plays an important role in this co-ordination… The second, and more specific point of this principle is that through sensory-motor coordination embodied agents can structure their input and thereby induce regularities that significantly simplify learning. “Structuring the input” means that through the interaction with the environment, sensory data are generated, they are not simply given.
Rolf Pfeifer and Christian Scheier. 1999. Understanding Intelligence. Pages 307–308. The MIT Press.

The purpose of this chapter is to highlight the important role that the body has in determining intelligent behaviour through the body’s senses and its interaction with the environment. The chapter is organised as follows. Section 5.1 shows how our body, mind and senses form the basis of many common metaphors used in human language. It also points out that the human body has much more than the five traditional senses of sight, hearing, smell, taste and touch. Section 5.2 describes several important features of autonomous agents – embodiment, situatedness, and reactive versus cognitive behaviour – and shows how agents can be classified by these properties. Section 5.3 describes several ways that sensing can be added to turtle agents in NetLogo and defines a common modus operandi for embodied, situated agents. Section 5.4 provides a definition of cognition and perception. It also considers whether it is possible for purely reactive agents to perform non-trivial tasks without cognition, and provides several examples where this is possible. Section 5.5 provides a definition of embodied, situated cognition, an alternative to the traditional information processing view of cognition.

5.1 Our body and our senses

Just as for movement in the previous chapter, our body, mind and senses form the basis of many common metaphors used in human language. Our understanding as humans is derived from fundamental concepts related to our physical embodiment in the real world, and the senses we use to perceive it. We often express and understand abstract concepts that we cannot sense directly based on these fundamental concepts. Table 5.1 provides a list of common phrases that illustrates this.

Body or mind:
  The MIND is a BODY: His mind is strong and supple.
  MENTAL FITNESS is PHYSICAL FITNESS: His mind is decaying. In the summer, the mind tends to go flabby.
  MENTAL CONTROL is PHYSICAL CONTROL: It’s getting out of hand. Get a grip. The idea just slipped through my fingers.
  COURAGE is a BODY PART: I did not have the heart or stomach for it.
  MENTAL PRESSURE is PHYSICAL PRESSURE: The pressure was unbearable. I’m pressed for time.
  EXPERIENCES are related to PHYSICAL BODY FORM: He’s light on his feet. It is getting quite hairy out there.
  MENTAL EXPERIENCES are related to PHYSICAL TEMPERATURE: They were in hot pursuit, but the trail had gone cold. She gave him an icy stare.
  MENTAL DISCOMFORT is PAIN: She’s a real pain. It was a dull ache of a day.

Sight:
  HOPE is LIGHT: He has bright hopes.
  BELIEVING is SEEING: I can’t see how this can be true.
  SEEING is TOUCHING: I felt his glance.
  INTELLIGENCE is a LIGHT SOURCE: He is very bright.
  WITHIN SIGHT is WITHIN CONTAINER: It was well within my field of vision.
  EXISTENCE is VISIBILITY: New problems keep appearing. The controversy eventually faded away.

Hearing:
  COLOUR is SOUND: He was wearing very loud clothes.
  BELIEVING is HEARING: That sounds like the truth to me.
  THINKING is a CLOCK: I could hear his mind ticking.
  UNDERSTANDING is HEARING: Are you deaf? Didn’t you hear me?

Smell:
  INVESTIGATING is SMELLING: The situation didn’t smell right.
  FEELING is SMELLING: That smelt very fishy.

Taste and eating:
  DESIRE is HUNGER: Sexual appetite. He hungers for her touch.
  EXPERIENCES is TASTE: That left a sour taste in my mouth.
  GETTING is EATING: Cough up the money you owe me.

Touch and feeling:
  AFFECTION is WARMTH: She’s a warm person.
  EMOTION is PHYSICAL CONTACT: I was touched by her speech. That feels good.
  DARKNESS is a SOLID: We felt our way through the darkness.
Table 5.1: A selection of phrases to illustrate the importance of the body, mind and senses in human language; some examples from Metaphors and Touch (2009) and Lakoff (2009).

Traditionally, using a classification first attributed to Aristotle, the five main senses are considered to be sight, hearing, smell, taste and touch (as included in Table 5.1). Sometimes people also refer to a ‘sixth sense’ (also sometimes called extra sensory perception or ESP) – we can consider the term a metaphor that people use in natural language for describing unconscious thought processes in terms of another sense. It is a mistake to consider a ‘sixth sense’ as being real, as there is very little scientific evidence to support it. However, our bodies do in fact have much more than just the five basic senses, as shown by the selection in Table 5.2.

Sight (vision): The ability of the eye to detect electromagnetic waves. This may in fact be two or three senses, as different receptors are used to detect colour and brightness, and depth perception (stereopsis) may be a cognitive function of the brain (i.e. post-sensory).

Hearing (audition): The ability of the ear to detect vibrations caused by sound.

Smell (olfaction): The ability of the nose to detect different odour molecules.

Taste: The ability of the tongue to detect chemical properties of substances such as food. This may be five or more different senses, as different receptors are used to detect sweet, sour, salty and bitter tastes.

Touch (mechanoreception): The ability of nerve endings, usually in the skin, to respond to variations in pressure (such as how firm it is, whether it is sustained, or brushing).

Pain (nociception): The ability to sense damage or near-damage to tissue. There are three types of pain receptors: in the skin (cutaneous), in the joints and bones (somatic) and in the body organs (visceral).

Balance (equilibrioception): The ability of the vestibular labyrinthine system in the inner ears to sense body movement, direction and acceleration and maintain balance. Two separate senses are involved, to detect angular momentum and linear acceleration (and also gravity).

Proprioception (kinesthetic sense): The ability to be aware of the relative positions of parts of the body (such as where the hand is in relation to the nose, even with your eyes closed).

Sense of time: The ability of part of the brain to keep track of time, such as circadian (daily) rhythms, or shorter-range (ultradian) timekeeping.

Sense of temperature (thermoception): The ability of the skin and internal skin passages to detect heat and the absence of heat (cold).
Internal senses (interoception): Internal senses that are stimulated from within the body by numerous receptors in internal organs (such as pulmonary stretch receptors found in the lungs that control respiratory rate, and other internal receptors that relate to blushing, swallowing, vomiting, blood vessel dilation, gas distension in the gastrointestinal tract and the gagging reflex while eating, for example).

Table 5.2: Human senses.

5.2 Several Features of Autonomous Agents

From a design perspective, we can consider every agent to be embodied in some manner, using senses to gain information about its environment. We can define embodiment in the following manner.

An autonomous agent is embodied through some manifestation that allows it to sense its environment. Its embodiment concerns its physical body that it uses to move around its environment, its sensors that it uses to gain information about itself in relation to the environment, and its brain that it uses to process sensory information. The agent exists as part of the environment, and its perception and actions are determined by the way it interacts with the environment and other agents through its embodiment in a dynamic process.

The agent’s embodiment may consist of a physical manifestation in the real world, such as a human, industrial robot or an autonomous vehicle, or it can have a simulated or artificial manifestation in a virtual environment. If it deals exclusively with abstract information, for example a web crawling agent such as Googlebot, then its environment can be considered from a design perspective to be represented by an n-dimensional space, and its sensing capabilities relate to its movement around that space. Likewise, a real environment for a human or robotic agent can be considered to be an n-dimensional space, and the agent’s embodiment relates to its ability to gain information about the real environment as it moves around.

We can consider an agent’s body as a sensory input-capturing device. For example, the sensing a human body can perform is dictated by its embodiment – that is, the entire body itself has numerous sensors that occur throughout the body, enabling our mind to get a ‘picture’ of the whole body as it interacts with the environment (see the image at the beginning of this chapter). Pfeifer and Scheier (1999) stress the important role that embodiment and sensory-motor coordination have to play in determining intelligent behaviour (see the quote at the beginning of this chapter).

Artificial Intelligence – Agents and Environments Embodiment Craig Reynolds (1999) describes further features of autonomous agents that can be used to distinguish various classes of autonomous agents such as situatedness and reactive behaviour. We can define situatedness in the following manner. An autonomous agent is situated within its environment, sharing it with other agents and objects. Therefore its behaviour is determined from a first-person perspective by its agent-to-agent interactions and agent-to-environment interactions. Autonomous agents exhibit a range of behaviours, from purely reactive to more deliberative or cognitive. An autonomous agent is said to be situated in an environment shared by other agents and/or other objects. An agent can be isolated existing by itself, for example a data mining agent searching for patterns in a database, situated in an abstract n-dimensional space that represents the database environment. A situated agent can have a range of behaviours, from purely reactive where the agent is governed by the stimulus it receives from its senses, to cognitive, where the agent deliberates on the stimulus it receives and decides on appropriate courses of actions. With a purely reactive approach, there is less need for the agent to maintain a representation of the environment – the environment is its own database that it can simply ‘look up’ by interacting directly with it. A cognitive agent, on the other hand, uses representations of what is happening in the environment in order to make decisions. (We can think of these as maps as discussed in the previous chapter). Reynolds notes that combinations of the attributes embodied, situated and reactive define distinct classes of autonomous agents, some of which are shown in Table 5.3. 
However, unlike Reynolds, we will consider that all agents are both situated in some environment (be it a real, virtual or abstract environment, represented by some n-dimensional space) and embodied with the ability to sense and interact with that environment in some manner, whether explicitly or implicitly.

Human and other real life agents
  Description: Organic entities that exist in the real world.
  Examples: Ourselves; biological creatures.

Real autonomous robots
  Description: Mechanical devices that are situated in the real world.
  Examples: Robots in a car manufacturing plant; autonomous vehicles (AVs).

Real semi-autonomous robots
  Description: Mechanical devices that are partly autonomous, and partly operated by humans, situated in the real world.
  Examples: Mars rover robots, Spirit and Opportunity; bomb disposal robots.

Virtual agents
  Description: Real agents that are situated in a virtual world.
  Examples: Non-playing characters (NPCs) in computer games.

Virtual semi-autonomous agents
  Description: Agents that are partly autonomous, and partly operated by humans, situated in a virtual world.
  Examples: Avatars; first-person shooters in computer games; agents in sports simulation games.

Simulated agents
  Description: Agents that are studied by computational simulation situated in a virtual world.
  Examples: Artificial life simulations; simulations of robots.

Table 5.3: Classes of embodied, situated agents (partly based on Reynolds, 1999).

5.3 Adding Sensing Capabilities to Turtle Agents in NetLogo

We can design the NetLogo turtle agents introduced in the previous chapter to include sensing capabilities using the embodied, situated design perspective. One reason for doing this is that we would like to use turtle agents to search mazes (and not just to draw them) in order to demonstrate searching behaviour using animated mapping techniques (see the next section, 5.4, and also Chapter 8). Searching is an important process that an agent must perform in order to adequately carry out many tasks. However, before the agent is able to search, it must first have the ability to sense objects and other agents that exist in the environment in order to find the object or agent it is searching for.

We could easily adopt a disembodied solution to maze searching. In this approach, the turtle agent doing the searching simply follows paths whose movement has been pre-defined within the program (as with the maze drawing models described in the previous chapter). An alternative, embodied approach is that the turtle agent is programmed with the ability to ‘look’ at the maze environment, interpret what it ‘sees’, and then choose which path it wants to take, not being restricted to the pre-defined paths that were required for the disembodied solution.

One easily programmed sensing capability often found in computer animation and computer game engines relies on proximity detection. Here the agent ‘senses’ whether there is an object nearby and then decides on an appropriate behaviour in response. The Look Ahead Example model that comes with the NetLogo Models Library implements turtle agents with a basic form of proximity detection, effectively simulating a rudimentary form of sight by getting the agents to ‘look’ ahead a short distance in the direction of travel before they move. The process of looking ahead allows each turtle agent to determine what is in front of it, and then change its direction if needed.

A screenshot of the model is shown in Figure 5.1. The image shows five turtle agents following different paths as represented by the red zigzag lines, with an arrowhead showing each turtle’s current position and pointing in its current direction of travel. When a turtle detects an obstacle in front of it, it will change its direction to a new random heading.

Figure 5.1: A screenshot of the Look Ahead Example 2 model produced by the program listed in NetLogo Code 5.1. The number-of-turtles variable has been set to 5.

The code for the model is shown in NetLogo Code 5.1. The code is slightly modified from that provided in the Models Library, and a go once button and a slider to specify the value of the number-of-turtles variable have been added to the interface. The setup procedure first sets up a checkerboard of blue patches in the environment (these are the objects the turtles are trying to avoid) and the turtles are then sprouted at random non-blue locations. The go procedure asks each turtle to check if there is a blue patch ahead, and gets the turtle to change direction to a random heading if there is; otherwise the turtle moves forward 1 step.

Artificial Intelligence – Agents and Environments Embodiment ;; This procedure sets up the patches and turtles to setup ;; Clear everything. clear-all ;; This will create a 'checkerboard' of blue patches. Every third patch ;; will be blue (remember modulo gives you the remainder of a division). ask patches with [pxcor mod 3 = 0 and pycor mod 3 = 0] [ set pcolor blue ] ;; This will make the outermost patches blue. This is to prevent the ;; turtles from wrapping around the world. Notice it uses the number of ;; neighbor patches rather than a location. This is better because it ;; will allow you to change the behavior of the turtles by changing the ;; shape of the world (and it is less mistake-prone) ask patches with [count neighbors != 8] [ set pcolor blue ] ;; This will create turtles on number-of-turtles randomly chosen ;; black patches. ask n-of number-of-turtles (patches with [pcolor = black]) [ sprout 1 [ set color red ] ] end ;; This procedure makes the turtles move to go ask turtles [ ;; This important conditional determines if they are about to walk into ;; a blue patch. It lets us make a decision about what to do BEFORE the ;; turtle walks into a blue patch. This is a good way to simulate a ;; wall or barrier that turtles cannot move onto. Notice that we don’t ;; use any information on the turtle’s heading or position. Remember, ;; patch-ahead 1 is the patch the turtle would be on if it moved ;; forward 1 in its current heading. ifelse [pcolor] of patch-ahead 1 = blue [ lt random-float 360 ] ;; We see a blue patch in front of us. Turn a ;; random amount. [ fd 1 ] ;; Otherwise, it is safe to move forward. ] tick end NetLogo Code 5.1 Code for the Look Ahead Example 2 model. If we run the model a while longer (400 ticks or more) with the variable number-of-turtles set to 50 instead of 5, then the red paths rapidly cover most of the empty space in the environment as shown in Figure 5.2. Download free eBooks at bookboon.com 130

Figure 5.2: A screenshot of the Look Ahead Example 2 model when number-of-turtles has been set to 50, after approximately 400 ticks of the simulation.

This example shows how a relatively modest number of agents with very limited sensing capabilities still have the ability to completely cover an entire environment very rapidly. Such an ability is often a very useful trait for a group of real-life agents, for example insects such as ants that need to find food sources. From an observer’s frame of reference when running the model, one can easily come to the wrong conclusion that the agents are deliberately searching their environment, but an analysis of the NetLogo code reveals that the agents in this example are simply reactive rather than cognitive – they do not deliberately search the environment, but as a result of their simple combined behaviour, achieve the same effect that agents with more deliberate searching behaviour achieve through their ability to co-ordinate and combine results.

With proximity detection, we can also simulate sensors for ‘touching’ as well as ‘looking’. The Wall Following Example model in the NetLogo Models Library approximates the action of touching for an agent by having it closely follow an object in the environment. A screenshot of the model is shown in Figure 5.3. The paths of the turtles are shown in the figure since the pen-down button has been selected in the model’s interface. The behaviour the turtle agents exhibit is called ‘wall following’ as it is analogous to a real-life agent keeping close contact with the outside of a building, similar to the hand on the wall behaviour for solving mazes described in Chapter 3. The 2D NetLogo environment shown in the image can be thought of as a map that provides a bird’s-eye view of an imaginary 3D environment containing buildings, with the edges of the objects in the image representing where the outside walls of the buildings are.

Figure 5.3: The Wall Following Example model demonstrates how turtle agents can follow walls using a virtual ‘sense’ of touch.

In the image, the blue turtles follow the right hand wall whereas the green turtles follow the left hand wall. The turtles prefer to keep the wall they are following on a particular side of their body – blue turtles prefer the right side; green turtles prefer the left side.

The code for the model is shown in NetLogo Code 5.2. The procedure walk defines the walking behaviour of the turtle agents. Each agent has a direction variable that defines the side it prefers to keep the wall on. If there is no wall immediately on the preferred side, but there is a wall behind it on the preferred side, then the turtle agent must turn to the preferred side so that it does not lose the wall. If there is a wall directly in front, then the turtle agent keeps turning to the side opposite to the preferred side until it can find some open space in front of it, where it can then move forward.

turtles-own [direction]  ;; 1 follows right-hand wall,
                         ;; -1 follows left-hand wall

to setup
  clear-all
  ;; make some random walls for the turtles to follow.
  ;; the details aren't important.
  ask patches [ if random-float 1.0 < 0.04 [ set pcolor brown ] ]
  ask patches with [pcolor = brown]
    [ ask patches in-radius random-float 3 [ set pcolor brown ] ]
  ask patches with [count neighbors4 with [pcolor = brown] = 4]
    [ set pcolor brown ]
  ;; now make some turtles. SPROUT puts the turtles on patch centers
  ask n-of 40 patches with [pcolor = black] [
    sprout 1 [
      if count neighbors4 with [pcolor = brown] = 4 [ die ] ;; trapped!
      set size 2     ;; bigger turtles are easier to see
      set pen-size 2 ;; thicker lines are easier to see
      face one-of neighbors4 ;; face north, south, east, or west
      ifelse random 2 = 0
        [ set direction 1  ;; follow right hand wall
          set color blue ]
        [ set direction -1 ;; follow left hand wall
          set color green ]
    ]
  ]
end

to go
  ask turtles [ walk ]
  tick
end

to walk ;; turtle procedure
  ;; turn right if necessary
  if not wall? (90 * direction) and wall? (135 * direction)
    [ rt 90 * direction ]
  ;; turn left if necessary (sometimes more than once)
  while [wall? 0] [ lt 90 * direction ]
  ;; move forward
  fd 1
end

to-report wall? [angle] ;; turtle procedure
  ;; note that angle may be positive or negative. if angle is
  ;; positive, the turtle looks right. if angle is negative,
  ;; the turtle looks left.
  report brown = [pcolor] of patch-right-and-ahead angle 1
end

NetLogo Code 5.2: Code for the Wall Following Example model shown in Figure 5.3.

Both the Look Ahead Example model and the Wall Following Example model demonstrate a common modus operandi for embodied, situated agents that consists of first sensing, followed by behaviour selection and execution. This modus operandi is one possible method of operation for a reactive agent: the agent first senses what is happening in the environment, and then chooses to execute certain behaviours based on what it senses. For the Look Ahead Example model, the turtle agent first senses if there is a blue patch ahead, and then chooses between turning and moving forward behaviours based on what it finds out. For the Wall Following Example model, the agent first senses whether it has lost the wall, and responds by turning if necessary. Then the agent repeatedly senses if there is a wall in front of it and turns, until it finally senses there is no longer a wall in front of it, after which it moves forward.

The sensory apparatus of the agent is an important consideration in determining the resultant possible behaviours of the agent. If the agent has no sensing capabilities, it has no ability to react to its environment. Similarly, if the sensing capabilities are restricted in some respect, then the agent will not be able to respond to events that occur in the environment outside the range or ranges it can sense. For example, for vision, cats can see in low light because of special eye muscles, some snakes have special organs and some bats have nose sensors that allow them to detect infrared light, and some birds can see in the ultraviolet down to 300 nanometres. Humans, in contrast, have a narrower vision range than these examples, which restricts and defines our ability to respond to what we see.

Agent-environment interaction also has an important role to play in determining what the agent senses. This is illustrated by the Line of Sight Example model in the NetLogo Models Library.
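This sense-then-act modus operandi can be distilled into a generic NetLogo skeleton along the following lines. This is an illustrative sketch rather than code taken from either model, although the obstacle test mirrors the one used in NetLogo Code 5.1:

```netlogo
to go
  ask turtles [
    ;; 1. sense: gather information about the local environment
    let obstacle-ahead? ([pcolor] of patch-ahead 1 = blue)
    ;; 2. select and 3. execute: choose a behaviour based on
    ;; what was sensed, then carry it out
    ifelse obstacle-ahead?
      [ lt random-float 360 ] ;; react to the obstacle by turning away
      [ fd 1 ]                ;; otherwise keep moving forward
  ]
  tick
end
```

The same skeleton accommodates richer agents simply by widening step 1 (more sensing reporters) and step 2 (more behaviours to select among), without changing the overall sense-select-execute structure.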
A screenshot of the model is shown in Figure 5.4.

Figure 5.4: A screenshot of the Line of Sight Example 2 model.

The image shows some turtle agents moving on random headings, with their current line of sight drawn as dots in the same colour as the agent. The environment is a virtual landscape, represented using a technique common in computer graphics called a heightmap. The green shading of each patch reflects its elevation – the lighter the shading, the higher the elevation of the patch. For example, a prominent peak occurs in the lower left corner of the 2D environment; the dark green region immediately to the peak's right indicates a deep depression.

The line of sight is computed from the first-person perspective of each agent. Given the agent's current heading and location, a straight line is projected forward by determining whether the patches ahead of the agent are obscured by anything at higher elevations along the way. On the left-hand side of the figure, a plot is shown of what the orange agent can 'see' in front of it. The leftmost vertical column in the plot shows the elevation of the patch the orange agent is standing on. The vertical columns to the right show the elevations of the patches in front of the agent according to its current heading. As the next two patches are lower than the agent's current elevation, these have been coloured orange to indicate that the agent will be able to 'see' that part of the terrain. The view of the third patch, however, is obscured by the higher elevation of the second patch, so this has been coloured black to indicate that the agent would not be able to 'look' directly at that part of the landscape. Similarly, the fourth and sixth patches can be viewed as they are not obscured by higher elevations of any intervening patches, but the other patches are all obscured from the line of sight. As an analogy to a real-life situation, consider a human trying to locate a control point (e.g.
a red flag) in an orienteering event in sand-dune terrain. Often the red flag will be obscured from the person's line of sight until the person is quite close to the flag, especially if it is placed in a depression. However, as the person moves along in the terrain getting closer to the flag, it will become visible when the person reaches a vantage point that has no intervening hills or higher sloping terrain blocking the view. This example provides an illustration of the importance of using the situated first-person perspective for the design of embodied agents. The resultant behaviour of the agents is more believable from an observer's frame of reference, and the agent-environment interaction adds to the realism. The code for the model is shown in NetLogo Code 5.3.

breed [walkers walker]
breed [markers marker]
patches-own [elevation]

to setup
  clear-all
  set-default-shape markers "dot"
  ;; setup the terrain
  ask patches [ set elevation random 10000 ]
  repeat 2 [ diffuse elevation 1 ]
  ask patches [ set pcolor scale-color green elevation 1000 9000 ]
  create-walkers 6 [
    set size 1.5
    setxy random-xcor random-ycor
    set color item who [orange blue magenta violet brown yellow]
    mark-line-of-sight
  ]
end

to go
  ;; get rid of all the old markers
  ask markers [ die ]
  ;; move the walkers
  ask walkers [
    rt random 10
    lt random 10
    fd 1
    mark-line-of-sight
  ]
  ;; plot the orange walker only
  ask walker 0 [ plot-line-of-sight ]
  tick
end

to mark-line-of-sight ;; walker procedure
  let dist 1
  let a1 0
  let c color
  let last-patch patch-here
  ;; iterate through all the patches
  ;; starting at the patch directly ahead
  ;; going through MAXIMUM-VISIBILITY
  while [dist <= maximum-visibility] [
    let p patch-ahead dist
    ;; if we are looking diagonally across
    ;; a patch it is possible we'll get the
    ;; same patch for distance x and x + 1
    ;; but we don't need to check again.
    if p != last-patch [

      ;; find the angle between the turtle's position
      ;; and the top of the patch.
      let a2 atan dist (elevation - [elevation] of p)
      ;; if that angle is less than the angle toward the
      ;; last visible patch there is no direct line from the turtle
      ;; to the patch in question that is not obstructed by another
      ;; patch.
      if a1 < a2 [
        ask p [ sprout-markers 1 [ set color c ] ]
        set a1 a2
      ]
      set last-patch p
    ]
    set dist dist + 1
  ]
end

NetLogo Code 5.3: Code for the Line of Sight Example 2 model shown in Figure 5.4.
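The geometric test at the heart of mark-line-of-sight is independent of NetLogo. The following Python sketch (an invented illustration – the one-dimensional elevation profile and the name visible_cells are not from the model) expresses the same idea: a cell ahead is visible only if the sight-line angle to it is steeper than the angle to every nearer cell.

```python
import math

def visible_cells(profile, max_dist):
    """profile[0] is the observer's elevation; profile[d] is the terrain
    elevation d cells ahead along the current heading.  Returns the set
    of distances d (1..max_dist) whose cells are not occluded."""
    visible = set()
    best = -math.inf                    # steepest sight-line angle so far
    for d in range(1, min(max_dist, len(profile) - 1) + 1):
        angle = math.atan2(profile[d] - profile[0], d)
        if angle > best:                # clears every nearer cell
            visible.add(d)
            best = angle
    return visible
```

This mirrors the plot described for Figure 5.4: lower ground just ahead is visible, while a cell tucked behind higher intervening terrain is not.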

In the main go method, the turtle agents perform a small random right then left turn (this will change their heading slightly), they move forward 1 step and then they mark out their line of sight by performing the mark-line-of-sight procedure. This procedure iterates through all the patches in front of the agent out to the maximum distance as defined by the interface variable maximum-visibility. For each distinct patch along the line of the agent's current heading, it first finds the angle between the turtle's position and the top of the patch. If the angle to the last visible patch is less than this angle, then its view of the patch is not obstructed and it draws a marker in the environment to show that it can see the patch in question.

The range of vision can be expanded to include a cone rather than a single line by using the built-in NetLogo reporter in-cone. This reporter reports the set of agents that fall within the cone of vision defined by the distance and viewing angle parameters. The cone is centred around the turtle's current heading. If the angle is 360°, then the result is equivalent to the reporter in-radius. The Vision Cone Example model provided by the NetLogo Models Library demonstrates the use of this reporter, as shown by the screenshots in Figure 5.5. The left screenshot in the figure was obtained by setting the Interface variables vision-radius to 50 and vision-angle to 40; the right screenshot was obtained by setting those variables to 20 and 360 respectively.

Figure 5.5: Two screenshots from the Vision Cone Example model showing the effect of the in-cone reporter when different parameters are used to define the distance and angle of the cone.

The code for this model is shown in NetLogo Code 5.4.
The model uses two types of agents – a wanderer breed for the big red agent that explores the environment with its vision cone sense as shown in the screenshots; and a stander breed that is used to draw 6000 randomly spread gray agents in the environment in the setup procedure.

breed [ wanderers wanderer ] ;; big red turtle that moves around
breed [ standers stander ]   ;; little gray turtles that just stand there

to setup
  clear-all
  ;; make a background of lots of randomly scattered
  ;; stationary gray turtles
  create-standers 6000 [
    setxy random-xcor random-ycor
    set color gray
  ]
  ;; make one big red turtle that is going to move around
  create-wanderers 1 [
    set color red
    set size 15
  ]
  ;; make the vision cone initially visible
  go
end

to go
  ask standers [ set color gray ]
  ask wanderers [
    rt random 20
    lt random 20
    fd 1
    ;; could use IN-CONE-NOWRAP here instead of IN-CONE
    ask standers in-cone vision-radius vision-angle [ set color white ]
  ]
  tick
end

NetLogo Code 5.4: Code for the Vision Cone Example 2 model shown in Figure 5.5.

In the main go method, as for the Line of Sight example, the turtle agents perform a small random right turn, then a left turn, followed by moving forward 1 step. The current field of vision is then shown by setting to white the colour of all standers that fall within the vision cone defined by the vision-radius and vision-angle variables.
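The test that in-cone performs can be approximated with plain geometry. The Python sketch below is an invented analogue, not NetLogo's implementation – it ignores world wrapping and patch-centre subtleties – but it captures the two conditions: the target must lie within the given radius, and within half the cone angle on either side of the heading.

```python
import math

def in_cone(ox, oy, heading_deg, radius, cone_deg, px, py):
    """Rough analogue of NetLogo's in-cone for a point target.
    heading_deg uses the compass convention (0 = north, clockwise)."""
    dx, dy = px - ox, py - oy
    if math.hypot(dx, dy) > radius:
        return False                                  # beyond vision radius
    bearing = math.degrees(math.atan2(dx, dy)) % 360  # compass bearing to target
    diff = (bearing - heading_deg + 180) % 360 - 180  # signed angular difference
    return abs(diff) <= cone_deg / 2
```

With cone_deg set to 360, every target within the radius passes the test, matching the observation above that in-cone then behaves like in-radius.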

Sensing in agents need not be restricted to the traditional senses such as vision or touch. For example, equilibrioception (see Table 5.2), or the sense of balance, combined with other senses such as the visual system and proprioception, allows humans and animals to gain information about their body position in relation to their immediate surroundings. Sensing can involve the detection of any aspect of the environment, such as the presence, absence or change in a particular attribute. This then provides the agent with an ability to follow a gradient in the environment relating to the attribute. For example, a useful sense for real-life agents is the ability to detect changes in elevation, in order to be able to head in an upwards or downwards direction when required.

The Hill Climbing Example model in the NetLogo Models Library shows how such a sense can be implemented. A screenshot of the model is shown in Figure 5.6 and the code is shown in NetLogo Code 5.5. The screenshot shows turtle agents that start out at random locations then move steadily upwards to the highest local point in their immediate surrounding landscape (i.e. towards the patches with the lightest shading). Since a single high point serves as the highest point for many of its surrounding patches, this results in the agents' paths coalescing into linear patterns of lines as shown in the figure. The vertical line to the left of the figure, for example, represents the top of a ridgeline in the 2D environment. As an analogy with real-life human agents, we can imagine a number of hill walkers climbing from different start positions up a ridge, and once they reach the ridge line, they all end up following a single path to the top.

Figure 5.6: Screenshot of the Hill Climbing Example model.
The model uses the uphill command in NetLogo that directs an agent to move to a neighbouring patch that has the highest value for a particular patch variable. The variable in this case is the patch's color.

The go method asks each turtle to head in an upward direction, until that is no longer possible, in which case the turtle variable peak? is set to true to indicate the turtle has reached the top. When all turtles have reached the top, then no more processing is done.

turtles-own [
  peak?  ;; indicates whether a turtle has reached a "peak",
         ;; that is, it can no longer go "uphill" from where it stands
]

to setup
  clear-all
  ;; make a landscape with hills and valleys
  ask n-of 100 patches [ set pcolor 120 ]
  ;; slightly smooth out the landscape
  repeat 20 [ diffuse pcolor 1 ]
  ;; put some turtles on patch centers in the landscape
  ask n-of 800 patches [
    sprout 1 [
      set peak? false
      set color red
      pen-down
    ]
  ]
end

to go
  ;; stop when all turtles are on peak
  if all? turtles [peak?] [ stop ]
  ask turtles [
    ;; remember where we started
    let old-patch patch-here
    ;; to use UPHILL, the turtles specify a patch variable
    uphill pcolor
    ;; are we still where we started? if so, we didn't
    ;; move, so we must be on a peak
    if old-patch = patch-here [ set peak? true ]
  ]
  tick
end

NetLogo Code 5.5: Code for the Hill Climbing Example 2 model shown in Figure 5.6.
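The gradient-following step that uphill provides can be sketched in ordinary Python. The fragment below is an invented illustration (the dictionary grid and the function names are ours, not NetLogo's): each move goes to the best of the eight neighbours, and the climb stops at the first local peak, just as the turtles above set peak? when they can no longer move up.

```python
def uphill_step(grid, x, y):
    """Move to the 8-neighbour with the highest value, in the spirit of
    NetLogo's uphill; returns the same cell when it is a local peak.
    grid maps (x, y) -> elevation."""
    here = (x, y)
    neighbours = [(x + dx, y + dy)
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                  if (dx, dy) != (0, 0) and (x + dx, y + dy) in grid]
    best = max(neighbours, key=grid.get, default=here)
    return best if grid[best] > grid[here] else here

def climb(grid, x, y):
    """Repeat uphill_step until a peak is reached; returns the path."""
    path = [(x, y)]
    while True:
        nxt = uphill_step(grid, *path[-1])
        if nxt == path[-1]:
            return path
        path.append(nxt)
```

Started from different cells of the same landscape, such climbers converge on the same local summits, which is what produces the coalescing line patterns seen in Figure 5.6.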

5.4 Performing tasks reactively without cognition

Is it possible for an agent to perform a non-trivial task without knowing it is performing it? In other words, can an agent 'solve' a problem without cognitively being aware that it is solving a problem? To answer this question, first we must ask what we mean by 'cognitively being aware'. Cognition refers to the mental processes of an intelligent agent, either natural or artificial, such as comprehension, reasoning, decision-making, planning and learning. It can also be defined in a broader sense by linking it to the mental act of knowing or recognition of an agent in relation to a thought it is thinking or action it is performing. Thus, cognitive behaviour occurs when an agent knowingly processes its thoughts or actions. Under this definition, an autonomous agent exhibits cognitive behaviour if it knowingly processes sensory information, makes decisions, changes its preferences and applies any existing knowledge it may have while performing a task or mental thought process. Cognition is also related to perception. Perception for an agent is the mental process of attaining understanding or awareness of sensory information.

Let us now return to the question posed at the beginning of this section. Consider an insect's abilities to forage for food. Ants, for example, have the ability to quickly find food sources, but do this by sensing chemical scent laid down by other ants, and then reacting accordingly using a small set of rules. Despite not being cognitively aware of what they are doing, they achieve the goal of fully exploring the environment, efficiently locating nearby food sources and returning back to the nest.

An Ants model provided in the NetLogo Models Library simulates one way this might happen. A screenshot of the model is shown in Figure 5.7. It shows a colony of ants spread out throughout the environment, with its nest shown by the purple region at the centre of the image. Originally there were three food sources, but at this stage in the simulation the first two have already been devoured, and the remaining food source has been found and is in the process of being collected for return to the nest. The white and green shaded region represents how much chemical scent has been laid down in the environment, with white representing the greatest amount.

Figure 5.7: Screenshot of the Ants model.

The code for the part of the model that defines the behaviour of the ants is shown in NetLogo Code 5.6. A full listing of the code can be found by selecting the Ants model from the Models Library, and then by clicking on the Procedures button at the top of the NetLogo interface.

patches-own [
  chemical            ;; amount of chemical on this patch
  food                ;; amount of food on this patch (0, 1, or 2)
  nest?               ;; true on nest patches, false elsewhere
  nest-scent          ;; number that is higher closer to the nest
  food-source-number  ;; number (1, 2, or 3) to identify the food sources
]

to recolor-patch ;; patch procedure
  ;; give color to nest and food sources
  ifelse nest?
  [ set pcolor violet ]
  [ ifelse food > 0
    [ if food-source-number = 1 [ set pcolor cyan ]
      if food-source-number = 2 [ set pcolor sky ]
      if food-source-number = 3 [ set pcolor blue ] ]
    ;; scale color to show chemical concentration
    [ set pcolor scale-color green chemical 0.1 5 ] ]
end

to go ;; forever button
  ask turtles [
    if who >= ticks [ stop ]  ;; delay initial departure
    ifelse color = red
    [ look-for-food ]         ;; not carrying food? look for it
    [ return-to-nest ]        ;; carrying food? take it back to nest
    wiggle
    fd 1
  ]
  diffuse chemical (diffusion-rate / 100)
  ask patches [
    set chemical chemical * (100 - evaporation-rate) / 100  ;; slowly evaporate chemical
    recolor-patch
  ]
  tick
  do-plotting
end

to return-to-nest ;; turtle procedure
  ifelse nest?
  [ ;; drop food and head out again
    set color red
    rt 180 ]
  [ set chemical chemical + 60  ;; drop some chemical
    uphill-nest-scent ]         ;; head toward the greatest value of nest-scent
end

to look-for-food ;; turtle procedure

  if food > 0 [
    set color orange + 1  ;; pick up food
    set food food - 1     ;; and reduce the food source
    rt 180                ;; and turn around
    stop ]
  ;; go in the direction where the chemical smell is strongest
  if (chemical >= 0.05) and (chemical < 2)
  [ uphill-chemical ]
end

;; sniff left and right, and go where the strongest smell is
to uphill-chemical ;; turtle procedure
  let scent-ahead chemical-scent-at-angle 0
  let scent-right chemical-scent-at-angle 45
  let scent-left chemical-scent-at-angle -45
  if (scent-right > scent-ahead) or (scent-left > scent-ahead)
  [ ifelse scent-right > scent-left
    [ rt 45 ]
    [ lt 45 ] ]
end

;; sniff left and right, and go where the strongest smell is
to uphill-nest-scent ;; turtle procedure
  let scent-ahead nest-scent-at-angle 0
  let scent-right nest-scent-at-angle 45
  let scent-left nest-scent-at-angle -45
  if (scent-right > scent-ahead) or (scent-left > scent-ahead)
  [ ifelse scent-right > scent-left
    [ rt 45 ]
    [ lt 45 ] ]
end

to wiggle ;; turtle procedure
  rt random 40
  lt random 40
  if not can-move? 1 [ rt 180 ]
end

to-report nest-scent-at-angle [angle]
  let p patch-right-and-ahead angle 1
  if p = nobody [ report 0 ]
  report [nest-scent] of p
end

to-report chemical-scent-at-angle [angle]
  let p patch-right-and-ahead angle 1
  if p = nobody [ report 0 ]
  report [chemical] of p
end

NetLogo Code 5.6: Code defining the reactive behaviour of the ant agents in the Ants model shown in Figure 5.7.

Each patch agent has a number of variables associated with it – for example, chemical stores the amount of chemical that ants have laid down on top of it, and nest-scent reflects how close the patch is to the nest. The recolor-patch procedure shows how the patch's colour is reshaded according to how much chemical has been laid down on top of it. The go procedure defines the behaviour of the ants. If an ant is not carrying food, it will look for it (by performing the look-for-food procedure); otherwise it will take the food back to the nest (by performing the return-to-nest procedure). As an ant is returning to the nest, it drops a chemical as it moves. The ants use a sense of smell to virtually 'sniff' this chemical to guide them towards the food source. As more and more ants carry food back to the nest, they reinforce the strength of the chemical, but this will also become diffused over time as the strength of the chemical is reduced each tick in the go procedure.
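The balance between reinforcement and decay in the chemical field can be sketched numerically. The Python fragment below is an invented simplification of the model's diffuse and evaporation step – it shares each cell's outflow equally among its in-bounds neighbours and simply loses flow at the edges, whereas NetLogo's world may wrap – but it shows how scent both spreads and fades each tick.

```python
def update_chemical(chem, diffusion_rate, evaporation_rate):
    """One tick of a pheromone field update (a simplified sketch of the
    Ants model's diffuse/evaporate step).  chem is a list of rows; each
    cell gives diffusion_rate% of its chemical to its 8 neighbours
    (outflow across the border is lost here), then every cell keeps
    (100 - evaporation_rate)% of what it holds."""
    rows, cols = len(chem), len(chem[0])
    new = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            share = chem[r][c] * diffusion_rate / 100
            new[r][c] += chem[r][c] - share   # the cell keeps the rest
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if (dr, dc) == (0, 0):
                        continue
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        new[rr][cc] += share / 8
    return [[v * (100 - evaporation_rate) / 100 for v in row]
            for row in new]
```

Unless foraging ants keep dropping fresh chemical, repeated application of this update drives a trail back towards zero, which is why abandoned trails disappear in the model.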

The two procedures uphill-chemical and uphill-nest-scent define the ants' reactive behaviour in following the chemical scent to the food source and in returning to the nest. The ants are essentially applying similar behaviour to the hill climbing turtle agents in the Hill Climbing Example model above (which follow the gradient defined by the patch variable pcolor in relation to the 2D environment's terrain). The ant agents in the Ants model, on the other hand, follow the gradient defined by the patch variable chemical if they are looking for food, and return along the gradient defined by the patch variable nest-scent if they are returning to the nest. They do this by 'sniffing' first left, then right, then following the path with the strongest smell. The wiggle procedure provides a random element to the movement of the ants that ensures that they will be very effective at exploring the entire environment, similar to the turtle agents in the Look Ahead Example 2 model depicted in Figure 5.2.

In this example, many agents employing simple reactive behaviour are able to get a non-trivial task done simply as a side effect of their interaction with each other and with the environment. If a single agent were to exhibit the same ability – that of fully exploring its environment for food, and finding its way back home once it has found some – then we as observers might attribute some degree of intelligence to the agent. The ants use a form of collective intelligence – the intelligence comes from the collective result of many agents interacting with each other. Is it also possible for a single reactive agent (rather than a collection of agents) to successfully complete a non-trivial task without knowing it is doing so? For example, can a single agent get through a maze by employing simple reactive behaviour when it has no ability to recognize that there are alternative paths to explore?
A model called Mazes has been developed to demonstrate how this is possible. Screenshots of the model are shown in Figures 5.8 to 5.10. They show turtle agents using simple reactive behaviours to move around the three mazes that were defined in Chapter 3 – the empty maze, the Hampton Court Palace Maze and the Chevening House maze. Note that we have to be careful how we discuss the behaviour and actions of the turtle agent in the model. We cannot say that the agent is 'exploring' the maze – exploration assumes some degree of volition on the part of the agent doing the exploring. Similarly, we cannot say that the agent has 'solved' the maze when it has reached the centre or reached the exit. Solving requires cognition – that is, recognition of the task at hand. All we can say is that the agent moves around the maze, and in so doing, it manages to reach some state (such as the centre or exit) as a side effect of its behaviour. The model provides an interface chooser called turtle-behaviour that allows us to define the reactive behaviour of the turtle agent as it moves around the maze. There are four behaviours defined: Hand On The Wall, Random Forward 0, Random Forward 1, and Random Forward 2. The NetLogo code defining these behaviours is shown in NetLogo Code 5.7.

to walk ;; turtle procedure
  ifelse set-pen-down [ pen-down ] [ pen-up ]
  if count neighbors4 with [pcolor = blue] = 4
  [ user-message "Trapped!"
    stop ]
  let xpos [goal-x] of maze 0
  if xcor >= xpos and xcor <= xpos + [goal-width] of maze 0 and ycor = [goal-y] of maze 0
  [ ifelse [goal-type] of maze 0 = "Exit"
    [ user-message "Found the exit!" ]
    [ user-message "Made it to the centre of the maze!" ]
    stop ]
  if (turtle-behaviour = "Hand On The Wall") [ behaviour-wall-following ]
  if (turtle-behaviour = "Random Forward 0") [ behaviour-random-forward-0 ]
  if (turtle-behaviour = "Random Forward 1") [ behaviour-random-forward-1 ]
  if (turtle-behaviour = "Random Forward 2") [ behaviour-random-forward-2 ]
end

to-report wall? [angle dist]
  ;; note that angle may be positive or negative. if angle is
  ;; positive, the turtle looks right. if angle is negative,
  ;; the turtle looks left.
  let patch-color [pcolor] of patch-right-and-ahead angle dist
  report patch-color = blue or patch-color = sky  ;; blue if it is a wall, sky if it is the closed entrance
end

to behaviour-wall-following ;; classic "hand-on-the-wall" behaviour
  ;; turn right if necessary
  if not wall? (90 * direction) 1 and wall? (135 * direction) (sqrt 2)
  [ rt 90 * direction ]
  ;; wall straight ahead: turn left if necessary (sometimes more than once)
  while [wall? 0 1] [ lt 90 * direction ]
  ;; move forward
  fd 1
end

to behaviour-random-forward-0
  ;; moves forward unless there is a wall, then tries to turn left, then right,
  ;; then randomly turns as a last resort
  if wall? 0 1
  [ ifelse wall? 90 1
    [ lt 90 ]
    [ ifelse wall? 270 1
      [ rt 90 ]
      [ ifelse random 2 = 0 [ lt 90 ] [ rt 90 ] ] ] ]
  ;; move forward
  fd 1
end

