A Distributed Cognitive Map for Spatial Navigation
Based on Graphically Organized Place Agents

Jörg Conradt, Rodney J Douglas
{conradt, rjd}@ini.phys.ethz.ch
Institute of Neuroinformatics, UZH/ETH-Zürich

Abstract

Animals quickly acquire spatial knowledge and are thereby able to navigate over very large regions. These natural methods dramatically outperform current algorithms for robotic navigation, which are either limited to small regions [1] or require huge computational resources to maintain a globally consistent spatial map [2]. We have now developed a novel system for mobile robotic navigation that, like its biological counterpart, decomposes explored space into a distributed graphical network of behaviorally significant places, each represented by an independent “place agent” (PA) that actively maintains the spatial and behavioral knowledge relevant for navigation in that place. Each PA operates only on its limited local information and communicates only with its directly connected graphical neighbors. Thus, there is no global supervisor, and spatial consistency needs to be maintained only locally within the graph. This simple strategy significantly reduces computational complexity; scales well with the size of the navigable region; and permits a robot to autonomously explore, learn, and navigate large unknown office environments in real time.

 

Introduction

Navigation is easy - we do it every day. We can effortlessly explore areas we have never visited before, memorize important places, and return to them again and again. But how does this work? How do we build up, represent, and use information about space? This thesis presents a biologically plausible principle for acquiring, storing, maintaining, and ultimately using spatial knowledge for short- and long-range navigation between behaviorally significant places: a cognitive map [3].


Navigation in Engineering and Neuroscience

In today’s engineered navigation systems [2], an active agent such as a robot typically stores all acquired information about its spatial environment in a global map relative to an absolute origin [4, 5], shown in Figure 1, left. Adding or updating information and using stored knowledge for navigation typically require considerable computational resources and involve a single active agent - such as a computer program - having access to all accumulated information. Alternatively, several topological approaches to navigation exist [6, 7], shown in Figure 1, middle, as well as hybrid topological-metric approaches [8-10]. But all these approaches rely on a single active agent that reasons over all previously acquired data. Such a mechanism is unlikely to be implemented in animal or human brains: there does not appear to be any one active region of our brain “inspecting” other passive regions to use stored information for navigation.

 

In the last few decades place cells and - much more recently - grid cells have been at the center of neuroscientific research on navigation [11, 12] and its robotic implementation [1, 13, 14]. These families of cells in the Hippocampus [11] and the Entorhinal Cortex [15] show significantly increased activity whenever an animal happens to be within a well-defined region of an experimental setup. It is widely agreed that the activity of a collection of such place cells represents the animal’s current spatial position within a local frame of reference. These cells show a stable firing pattern over a long time within an environment, perform a complete remapping of their firing pattern when the observed animal enters a distinct new environment [16], and revert to the previous firing pattern upon returning to a familiar environment. However, experiments so far are constrained to relatively small areas of about 2 m in diameter, whereas wild animals typically live in areas of several thousands of square meters. It remains to be investigated how navigation in larger areas impacts firing patterns in place cells.


A Distributed Cognitive Map

In this thesis we explore a new principle of perceiving, maintaining, and operating on spatial knowledge: every element of the system operates exclusively on locally available information and is constrained to decide actions based only on knowledge about its vicinity. We do not represent space in a globally consistent data structure as traditional topological or metric-based approaches do. Instead, starting without prior spatial knowledge, our system autonomously creates a graphical network of independent “patches of knowledge” at behaviorally relevant places. Such patches contain spatial and behavioral information only about their local environment. They actively control the physical agent - a robot - in their vicinity, and maintain and update their limited knowledge about their environment, instead of serving as passive data storage containers. As these active patches know about and communicate with their local nearest neighbors, they implicitly establish the topology of a network that represents behaviorally significant space. Each node of the network is unaware of its position within the network and of the position it represents in global space. No process in our system maintains or operates globally on the network; in fact, the whole network exists only because of message passing between autonomously acting neighboring nodes. This principle is illustrated in Figure 1, right.
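To make this organization concrete, the following sketch outlines what a single place agent could look like as a data structure. It is a minimal Python illustration; the class and method names are ours and do not correspond to the actual implementation. The essential point is that each agent holds only its own local snapshot and references to its direct neighbors, and exchanges messages only over those links.

class PlaceAgent:
    """One node of the distributed map: it knows only its own local
    snapshot and its directly connected neighbors."""

    def __init__(self, agent_id, local_snapshot):
        self.agent_id = agent_id              # identifier without global meaning
        self.local_snapshot = local_snapshot  # fused sensor view of this place only
        self.neighbors = {}                   # agent_id -> PlaceAgent, direct links only

    def connect(self, other):
        """Record a traversable link to a directly adjacent place agent."""
        self.neighbors[other.agent_id] = other
        other.neighbors[self.agent_id] = self

    def send(self, neighbor_id, message):
        """Communicate exclusively with a direct graphical neighbor."""
        self.neighbors[neighbor_id].receive(self.agent_id, message)

    def receive(self, sender_id, message):
        """React to a neighbor's message using local knowledge only."""
        pass  # e.g. update local knowledge, forward to other neighbors, guide the robot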

Note that the distributed system does not explicitly represent cycles in the environment. Figure 1, left and middle, shows a cycle in the environment between places D-E-G-D, which is computationally expensive to handle [2]. In the distributed representation, no actor has access to D, E, and G simultaneously. The system is unaware of the cycle, but it still knows all the individual traversable paths that together constitute the cycle.


Figure 1: Left: a global Cartesian map showing a top-down view of an environment, maintained by a single computer or a mobile robot. Middle: a topological representation of the same environment recorded as a consistent data structure. A computer program/robot maintains this structure and operates based on the stored information. Right: a collection of individual programs acting as active place representations, each knowing only its respective local space and direct neighbors. The true graphical structure exists only implicitly in the collection of all places, to which no actor has access. At any time only a single one of these individual programs interacts with the robot; i.e., it directs the robot and obtains sensed information about the current local environment.


Implementation and Results

The system we developed starts without spatial knowledge in an unfamiliar environment, which it automatically explores using a mobile robot. While exploring, the robot constantly maintains a representation of its current local environment [*1], which consists of fused information from various on-board sensors. When a behaviorally relevant stimulus is detected [*2], a new place agent captures a snapshot of the current local environment, records the previously active place agent as a neighbor, and directs the robot to further explore unknown regions. Repeating this process incrementally builds up a collection of independent place agents. All these agents only know their local spatial environment and communicate only with their direct neighbors, as shown in Figure 1, right. Despite these severe constraints on knowledge and communication, the collection of all individual place agents shows emergent, globally consistent behavior without any single globally acting entity in the system. An example of such global behavior is the guidance of the robot to a previously recorded target represented elsewhere in the network.
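The incremental map-building procedure described above can be summarized in the following hedged sketch, which reuses the PlaceAgent class from the earlier illustration. The names for the robot's sensing and low-level control (is_exploring, explore_step, detect_relevant_stimulus, local_snapshot) are hypothetical placeholders, not the system's actual interface.

def build_map(robot):
    """Incrementally create place agents while the robot explores;
    a sketch under the assumptions stated above."""
    active_pa = None
    agent_count = 0
    while robot.is_exploring():
        robot.explore_step()                      # drive toward unknown regions
        if robot.detect_relevant_stimulus():      # e.g. an intersection of paths
            new_pa = PlaceAgent(agent_count, robot.local_snapshot())
            agent_count += 1
            if active_pa is not None:
                new_pa.connect(active_pa)         # previously active PA becomes a neighbor
            active_pa = new_pa                    # behavioral control passes to the new PA
    return active_pa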

Our system has successfully explored not only spaces as small as a single room, but also our whole institute of about 60x23 m (Figure 2). Only a few operations require time that scales linearly with the number of nodes in the network; most operations are performed on local data only. Currently, all place agents required to represent our institute run in real time on a single computer; but as they are independent of each other, they could be distributed among multiple, possibly less powerful, computing units. We are convinced that we can map significantly larger environments without suffering degraded performance.
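One of the few operations whose cost does grow with the size of the network is finding a route to a distant target place. The sketch below illustrates how such a route can be obtained from purely local neighbor-to-neighbor communication; it is written as a single function for brevity, whereas in the system itself each place agent processes and forwards such requests independently, and the flooding scheme shown is our illustration rather than the implemented protocol.

from collections import deque

def request_route(start_pa, target_id):
    """Flood a route request over direct neighbor links only. Each visited PA
    remembers which neighbor the request arrived from, so a hop-by-hop route
    can be reconstructed without any global map."""
    came_from = {start_pa.agent_id: None}         # per-agent bookkeeping
    frontier = deque([start_pa])
    while frontier:
        pa = frontier.popleft()
        if pa.agent_id == target_id:
            hops = []                             # walk the recorded hints backwards
            while pa is not None:
                hops.append(pa.agent_id)
                pa = came_from[pa.agent_id]
            return list(reversed(hops))           # PA identifiers from start to target
        for neighbor in pa.neighbors.values():    # local links only
            if neighbor.agent_id not in came_from:
                came_from[neighbor.agent_id] = pa
                frontier.append(neighbor)
    return None                                   # no recorded path to the target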

We believe that neural hardware is well suited to implement such an “active distributed cognitive map”, as brains are intrinsically distributed processing systems without the global access to all memorized information that traditional algorithms for navigation require. We therefore believe that the proposed system resembles biological information processing much more closely than current methods for long-range spatial navigation do.

Relating our system to place-field neurons found in the Hippocampus [11, 12], we interpret the observed stable activity patterns of place cells within a small experimental setup as an indicator of a short-range local map in animal brains. Such a local map is implemented by a single place agent in our system. When an observed animal moves to a distinct environment, a complete remapping of firing patterns across place-field neurons occurs [16], which corresponds to transferring behavioral control to a new place agent in our model.

Our system explains results from typical behavioral experiments with rats [3, 17] without requiring a global actor - such as a computer program or an omniscient homunculus - that has access to all acquired information.


Figure 2: Top: a distributed graphical representation of our institute, established by 177 independent place agents (PAs, green circles). The spatial arrangement of the PAs is produced only for this display; no actor in the system is aware of the overall structure. Red arrows show 45% of the recorded distance to a neighboring PA. Places of behavioral relevance are covered densely with nodes, whereas places such as long aisles show sparse coverage. Bottom: a plan view of the institute (60x23 meters).

 

 

More information: 12-page summary or full Ph.D. thesis

 

 

[*1] “Current local environment” refers to the space within a few times the robot’s body length, typically within sensor reach. The overall operating area of the robot, in contrast, covers several hundred times the robot’s body length.
[*2] Such as an intersection of paths, a battery charger or fresh coffee, depending on one’s preference.

 

Bibliography

1. Arleo, A., Spatial Learning and Navigation in Neuro-Mimetic Systems, Modeling the Rat Hippocampus. 2000, Ecole Polytechnique Federale Lausanne: Lausanne.

2. Thrun, S., Robotic mapping: A survey, in Exploring Artificial Intelligence in the New Millennium, G. Lakemeyer and B. Nebel, Editors. 2002, Morgan Kaufmann.

3. Tolman, E.C., Cognitive Maps in Rats and Men. The Psychological Review, 1948. 55(4): p. 189-208.

4. Montemerlo, M. and S. Thrun, The FastSLAM Algorithm for Simultaneous Localization and Mapping. Springer Tracts in Advanced Robotics, ed. B. Siciliano, O. Khatib, and F. Groen. Vol. 27. 2007, Berlin/Heidelberg: Springer.

5. Bosse, M., et al., SLAM in Large-scale Cyclic Environments using the Atlas Framework. International Journal of Robotics Research, 2003.

6. Shatkay, H. and L.P. Kaelbling, Learning Topological Maps with Weak Local Odometric Information, in International Joint Conference on Artificial Intelligence. 1997.

7. Tapus, A., Topological SLAM - Simultaneous Localization and Mapping with Fingerprints of Places, in Computer Science and Systems Engineering Department. 2005, Swiss Federal Institute of Technology Lausanne (EPFL), Switzerland: Lausanne.

8. Kuipers, B.J., The Spatial Semantic Hierarchy. Artificial Intelligence, 2000. 119: p. 191-233.

9. Jefferies, M.E., J. Baker, and W. Weng, Robot Cognitive Mapping - A Role for a Global Metric Map in a Cognitive Mapping Process, in Robot and Cognitive Approaches to Spatial Mapping, M.E. Jefferies and W.-K. Yeap, Editors. 2008, Springer: Heidelberg/Berlin. p. 265-280.

10. Tomatis, N., I.R. Nourbakhsh, and R. Siegwart, Simultaneous Localization and Map Building: A Global Topological Model with Local Metric Maps, in IEEE/RSJ International Conference on Intelligent Robots and Systems. 2001. Maui, USA.

11. O'Keefe, J. and L. Nadel, The Hippocampus as a Cognitive Map. 1978, Oxford, UK: Oxford University Press.

12. Redish, A.D., Beyond the Cognitive Map - From Place Cells to Episodic Memory. 1999: MIT Press.

13. Hafner, V.V., Robots as Tools for Modelling Navigation Skills - A Neural Cognitive Map Approach, in Robot and Cognitive Approaches to Spatial Mapping, M.E. Jefferies and W.-K. Yeap, Editors. 2008, Springer: Heidelberg/Berlin. p. 315-324.

14. Milford, M.J., Robot Navigation from Nature. Springer Tracts in Advanced Robotics, ed. B. Siciliano, O. Khatib, and F. Groen. Vol. 41. 2008, Berlin/Heidelberg: Springer.

15. Sargolini, F., et al., Conjunctive representation of position, direction, and velocity in entorhinal cortex. Science, 2006. 312(5774): p. 758-62.

16. Markus, E.J., et al., Interactions between location and task affect the spatial and directional firing of hippocampal neurons. Journal of Neuroscience, 1995. 15(11): p. 7079-7094.

17. Blancheteau, M. and A.L. Lorec, Raccourci et détour chez le rat: durée, vitesse et longueur des parcours [Short-cut and detour in rats: duration, speed and length of course]. L'année psychologique, 1972. 72(1): p. 7-16.

 

 
