The project aims to combine a new aVLSI sound localization chip, built around two
silicon cochleae, with a mobile Koala robot carrying a variety of additional sensors.
Adding audition as a further stimulus will allow the robot to detect landmarks
that are not currently visible or salient. The sound localization chip estimates the
interaural time difference (ITD) between two microphones on the robot within each
cochlear frequency band.
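The chip's analog circuitry is not detailed here, but the computation it performs can be sketched in software: split each microphone signal into cochlear-like frequency bands, estimate the ITD in each band from the cross-correlation peak, and convert each ITD to a bearing via the far-field relation sin(θ) = c · ITD / d. The following Python sketch illustrates this; the sample rate, microphone spacing, and Butterworth filterbank are illustrative assumptions, not properties of the chip.

```python
# Software analogue of per-band ITD-based localization (not the chip's
# actual circuitry): band-split both ear signals, cross-correlate each
# band pair to find the ITD, then map the ITD to a bearing.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 48_000             # sample rate in Hz (assumed)
MIC_SPACING = 0.15      # distance between the two microphones in m (assumed)
SPEED_OF_SOUND = 343.0  # m/s

def bandpass(x, lo, hi):
    """Fourth-order Butterworth bandpass as a stand-in for one cochlear channel."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
    return sosfiltfilt(sos, x)

def itd_by_crosscorrelation(left, right):
    """ITD in seconds from the cross-correlation peak of the two band signals."""
    corr = np.correlate(left, right, mode="full")
    # Index of the peak, re-centred: positive lag means the left signal
    # is delayed relative to the right.
    lag = np.argmax(corr) - (len(right) - 1)
    return lag / FS

def itd_to_bearing(itd):
    """Bearing in radians from straight ahead, assuming a far-field source."""
    # sin(theta) = c * ITD / d, clipped to the physically valid range.
    s = np.clip(SPEED_OF_SOUND * itd / MIC_SPACING, -1.0, 1.0)
    return np.arcsin(s)

def per_band_bearings(left, right, band_edges):
    """One bearing estimate per frequency band, mirroring the chip's output."""
    return [
        itd_to_bearing(itd_by_crosscorrelation(bandpass(left, lo, hi),
                                               bandpass(right, lo, hi)))
        for lo, hi in band_edges
    ]
```

For example, `per_band_bearings(left, right, [(100, 400), (400, 1600), (1600, 6400)])` would return one bearing per band, analogous to the chip's per-channel estimates.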
Currently, the robot can detect landmarks only in the direction it is looking,
and computational constraints further restrict it to processing a limited amount
of visual input. Adding auditory input, as a bearing estimate for each cochlear
frequency channel, will allow the robot to detect landmarks anywhere in auditory
space. The pan-tilt cameras may then be driven by salient auditory inputs to
investigate the visual properties of those landmarks. As the robot will be able
to attend to landmarks outside its current focus of visual attention, it will
gain better knowledge of its environment.
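How a salient auditory bearing might drive the cameras can be illustrated with a small, hypothetical glue layer: take the bearing of the most energetic frequency band as a crude saliency proxy and pan the cameras toward it. The `pan_tilt` interface (`current_pan`, `set_pan`) and the energy-based saliency measure below are assumptions for illustration, not the project's actual control scheme.

```python
import numpy as np

def most_salient_bearing(bearings, band_energies):
    """Bearing of the band with the highest energy (crude saliency proxy)."""
    return bearings[int(np.argmax(band_energies))]

def orient_cameras(pan_tilt, bearings, band_energies, gaze_tolerance=0.05):
    """Pan toward the salient auditory bearing if it is off the current gaze."""
    target = most_salient_bearing(bearings, band_energies)
    if abs(target - pan_tilt.current_pan()) > gaze_tolerance:
        # Vision then inspects the landmark at the new gaze direction.
        pan_tilt.set_pan(target)
```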