Friday, February 24, 2012 16h00 Posted by Shender

Software Architecture

Crucial to the design of a human-interactive mobile robot is the ability to rapidly and easily modify the robot’s behavior. This requires the robot to have a modular software architecture, with a planning module that coordinates the different software modules to achieve the goal.

Our software is built on a behavior-based architecture. A behavior is an independent software module that solves a particular problem, such as navigation or face detection.

This is achieved with a plugin-pattern approach that simplifies the creation of new modules. Modules communicate through a shared memory, implemented with a singleton pattern, which holds global information that any module can access.
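As a rough illustration of this plugin-and-shared-memory idea, the sketch below shows behaviors registering themselves as plugins and exchanging data only through a singleton blackboard. All class and method names here (SharedMemory, Behavior, step) are illustrative assumptions, not the project’s actual API.

    # Minimal sketch of the plugin + shared-memory idea; names are illustrative.
    class SharedMemory:
        """Singleton blackboard that any behavior can read from or write to."""
        _instance = None

        def __new__(cls):
            if cls._instance is None:
                cls._instance = super().__new__(cls)
                cls._instance._data = {}
            return cls._instance

        def write(self, key, value):
            self._data[key] = value

        def read(self, key, default=None):
            return self._data.get(key, default)

    class Behavior:
        """Base class every plugin behavior derives from."""
        registry = []

        def __init_subclass__(cls, **kwargs):
            super().__init_subclass__(**kwargs)
            Behavior.registry.append(cls)   # new modules register themselves

        def step(self, memory):
            raise NotImplementedError

    class FaceDetection(Behavior):
        def step(self, memory):
            # A real module would process camera frames here.
            memory.write("face_detected", True)

    class Navigation(Behavior):
        def step(self, memory):
            if memory.read("face_detected"):
                memory.write("goal", "approach_person")

    if __name__ == "__main__":
        memory = SharedMemory()
        for behavior in (cls() for cls in Behavior.registry):
            behavior.step(memory)
        print(memory.read("goal"))          # -> approach_person

The point of this arrangement is that modules never call each other directly: they only read and write the shared memory, so new behaviors can be added without touching the existing ones.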

In this paper behaviors are also referred to as modules. Behaviors exist at three different levels:

  • Functional level: The lowest level behaviors interface with the robot’s sensors and actuators, relaying commands to the motors or retrieving information from the sensors.
  • Execution level: Middle level modules perform the main functions, such as navigation, localization, speech recognition, etc. These interface with the lowest level through a shared memory mechanism. Each middle level module computes some aspect of the state of the environment. The outputs of these modules are typically reported to the highest level modules.
  • Decision level: The highest level coordinates the middle level modules based on a global planner. The planner is based on a Markov decision process (MDP). The MDP is solved to obtain an optimal policy according to the tasks’ objectives, which is used to command the other modules; a toy sketch follows this list.
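To make the decision level concrete, here is a toy value-iteration sketch. The states, actions, transition probabilities and rewards below are invented for illustration and are not the robot’s actual task model; only the general procedure of solving an MDP for a policy is meant to carry over.

    # Toy MDP solved by value iteration; all numbers are made up for illustration.
    states = ["idle", "person_found", "task_done"]
    actions = ["search", "interact"]

    # P[s][a] -> list of (probability, next_state); R[s][a] -> immediate reward
    P = {
        "idle":         {"search":   [(0.8, "person_found"), (0.2, "idle")],
                         "interact": [(1.0, "idle")]},
        "person_found": {"search":   [(1.0, "person_found")],
                         "interact": [(0.9, "task_done"), (0.1, "idle")]},
        "task_done":    {"search":   [(1.0, "task_done")],
                         "interact": [(1.0, "task_done")]},
    }
    R = {
        "idle":         {"search": -1.0, "interact": -2.0},
        "person_found": {"search": -1.0, "interact": 10.0},
        "task_done":    {"search":  0.0, "interact":  0.0},
    }

    gamma = 0.95                      # discount factor
    V = {s: 0.0 for s in states}      # initial value estimates

    for _ in range(200):              # value iteration until (approximately) converged
        V = {s: max(R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
                    for a in actions)
             for s in states}

    # Greedy policy: the action each state would be commanded to execute.
    policy = {s: max(actions, key=lambda a: R[s][a] +
                     gamma * sum(p * V[s2] for p, s2 in P[s][a]))
              for s in states}
    print(policy)   # best action per state, e.g. person_found -> interact

At the decision level, a policy like this would then be translated into commands for the execution-level modules, for example triggering the navigation or interaction behaviors.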

Friday, February 24, 2012 17h00 Posted by Shender

Hardware Architecture

The base platform is a commercial MobileRobots PatrolBot, augmented with a series of new sensors to improve the robot’s perception of its environment. The sensors the robot currently uses are:

  • Canon PTZ: This camera is used to scan the robot’s environment, searching for people or objects.
  • Microsoft Kinect: Used for person tracking and obstacle avoidance.
  • Touchscreen: Mainly used for non-verbal interaction, allowing speech- or hearing-impaired people to interact with the robot.
  • Directional microphone: Main input device used by the speech recognition module.
  • SICK LMS: Laser scanner used for obstacle detection and environment mapping.
  • Katana 450M Arm: Used for manipulation of objects.
  • Rear sonar ring: Used for object detection.
  • Main laptop: The central processing device for the robot.
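To connect this hardware back to the functional level of the software architecture, the hypothetical sketch below shows how a device such as the SICK LMS could be wrapped as a functional-level module that publishes its readings into the shared memory. The LaserScanner and SickLmsModule names and the fake range readings are illustrative assumptions, not the actual driver interface.

    # Hypothetical functional-level wrapper for a laser scanner; not the real driver API.
    import random

    class LaserScanner:
        """Stand-in for the real SICK LMS driver; returns fake range readings."""
        def read_ranges(self, beams=181):
            return [round(random.uniform(0.1, 8.0), 2) for _ in range(beams)]

    class SickLmsModule:
        """Functional-level module: reads the sensor and publishes to shared memory."""
        def __init__(self, blackboard):
            self.sensor = LaserScanner()
            self.blackboard = blackboard

        def step(self):
            ranges = self.sensor.read_ranges()
            self.blackboard["laser_ranges"] = ranges
            self.blackboard["min_obstacle_distance"] = min(ranges)

    if __name__ == "__main__":
        blackboard = {}                 # stands in for the singleton shared memory
        SickLmsModule(blackboard).step()
        print(blackboard["min_obstacle_distance"])

Execution-level modules such as obstacle avoidance or mapping would then read the laser data from the shared memory rather than talking to the hardware directly.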
