Navigating with TurtleBot

Launch files for TurtleBot will create ROS nodes either remotely on the TurtleBot netbook (via SSH to TurtleBot) or locally on the remote computer. As a general rule, the launch files (and nodes) that handle the GUI and visualization processing should run on the remote computer, while the minimal launch and camera drivers should run on the TurtleBot netbook. We will specify when to SSH to TurtleBot for a ROS command; when the SSH step is omitted, run the ROS command on the remote computer.

Mapping a room with TurtleBot

TurtleBot can drive around its environment autonomously once a map of that environment has been made. The 3D sensor is used to create a 2D map of the room as TurtleBot is driven around by a joystick, keyboard, or any other method of teleoperation.

Since we are using the Kobuki base, calibration of the gyro inside the base is not necessary. If you are using the Create base, make sure that you perform the gyro calibration procedure described in the TurtleBot ROS wiki at http://wiki.ros.org/turtlebot_calibration/Tutorials/Calibrate%20Odometry%20and%20Gyro before you begin the mapping operation.

Defining terms

The core terms that are used in TurtleBot navigation are as follows:

Odometry: Data gathered from motion sensors (such as the wheel encoders and gyro) is used to estimate the change in a robot's position over time. This estimate is used to determine the robot's current position relative to its starting location.

Map: For TurtleBot, a map is a 2D representation of an environment encoded with occupancy data.

Occupancy Grid Map (OGM): An OGM is a map generated from the 3D sensor measurement data and the known pose of the robot. The environment is divided into an evenly-spaced grid in which the presence of obstacles is identified as a probabilistic value in each cell on the grid.

Localization: Localization determines the robot's present position with respect to a known map. The robot matches features detected by its sensors against features in the map to determine where it currently is on the map.

Building a map

The following steps are fairly complex and will require the use of four or five terminal windows. Be conscious of which commands are on TurtleBot (requiring ssh from the remote computer) and those that are on the remote computer (not requiring ssh). In each terminal window, enter the commands following the $ prompt:

  1. Terminal window 1: Minimal launch of TurtleBot
    $ ssh <username>@<TurtleBot's IP Address>
    $ roslaunch turtlebot_bringup minimal.launch
    

    These commands perform the now familiar process of setting the many arguments and parameters and launching the nodes for TurtleBot's mobile base functionality.

  2. Terminal window 2: Launch the gmapping operation as follows:
    $ ssh <username>@<TurtleBot's IP Address>
    $ roslaunch turtlebot_navigation gmapping_demo.launch
    

    Look for the following text on your window:

    odom received!

    The gmapping_demo launch file launches the 3dsensor.launch file with the rgb_processing, depth_registration, and depth_processing modules turned off. This leaves the modules for ir_processing, disparity_processing, disparity_registered_processing, and scan_processing. The gmapping.launch.xml and move_base.launch.xml files are also invoked. gmapping.launch.xml launches the slam_gmapping node and sets multiple parameters for it. move_base.launch.xml launches the move_base node and also starts the nodes for velocity_smoother and safety_controller. A more complete description of this processing is provided in the following How does TurtleBot accomplish this mapping task? section.

  3. Terminal window 3: View navigation on rviz by running the following command:
    $ roslaunch turtlebot_rviz_launchers view_navigation.launch
    

    rviz should come up in the TopDownOrtho view, which is identified in the Views panel on the right side of the screen. The display shows the initial OGM, which distinguishes occupied space, free space, and unknown space.

    If a map is not displayed, make sure that the following display checkboxes have been selected on the Displays panel (on the left side):

    • Grid
    • RobotModel
    • LaserScan
    • Bumper Hit
    • Map
    • Global Map
    • Local Map
    • Amcl Particle Swarm
    • Full Plan

    Your rviz screen should display results similar to the following screenshot:

    An initial gmapping screen in rviz

  4. Terminal window 4: Keyboard control of TurtleBot:
    $ roslaunch turtlebot_teleop keyboard_teleop.launch
    

    Note

    Here, the keyboard navigation command is used, but the joystick teleop or interactive marker navigation can be used instead.

At this point, the operator should use keyboard commands to navigate TurtleBot completely around the environment. A representation of the map is built and can be viewed in rviz as TurtleBot's 3D sensor detects objects within its range.
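
While driving, you can confirm from another terminal on the remote computer that the simulated laser scan and the map are being published. These are standard ROS introspection commands; /scan and /map are the topic names used by the gmapping_demo launch:

$ rostopic hz /scan
$ rostopic info /map

The first command reports the rate of the fake laser scan, and the second should list the slam_gmapping node as the publisher of the map. If /scan shows no messages, check that the gmapping launch in terminal window 2 started the 3D sensor correctly.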

The following screenshot shows a map of our lab that TurtleBot produced on rviz:

TurtleBot mapping a room

Notice that light gray areas are clear, unoccupied space; dark gray areas are unexplored; black indicates a solid border, such as a wall; and colored spots are obstacles in the room. The brightest area is TurtleBot's local map (the area the sensor is currently detecting).

When a complete map of the environment appears on rviz, the map should be saved. Without killing any of the prior processes, open another terminal window and type the following commands:

$ ssh <username>@<TurtleBot's IP Address>
$ rosrun map_server map_saver -f /home/<TurtleBot's username>/my_map

If you do not know the TurtleBot's username, after ssh'ing to TurtleBot, use the pwd command to find it.

The process creates two files, my_map.yaml and my_map.pgm, and places them in the TurtleBot netbook's home directory. The path and filename can be changed as you desire, but the files should be saved on the TurtleBot.
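
To confirm that both files were written on the netbook, list them from the ssh session (the path matches the map_saver command shown previously):

$ ls -l /home/<TurtleBot's username>/my_map.*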

The .yaml file contains configuration information of the map and the path and name of the .pgm image file. The .pgm file is in portable gray map format and contains the image of the OGM.

The map configuration information includes the following (a sample file is shown after this list):

  • The absolute pathname to the .pgm image file
  • The map resolution in meters per pixel
  • Coordinates (x, y, and yaw) of the lower-left corner of the grid (the map origin)
  • A flag to reverse the white pixel=free and black pixel=occupied semantics of the map color space
  • The lowest threshold value at which pixels will be considered completely occupied
  • The highest threshold value at which pixels will be considered completely free
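
For reference, a saved map .yaml file typically looks like the following sketch. The keys are the ones map_server expects; the resolution, origin, and threshold values shown here are common map_saver defaults, and the values in your file will reflect your own map:

image: /home/<TurtleBot's username>/my_map.pgm
resolution: 0.050000
origin: [-10.000000, -10.000000, 0.000000]
negate: 0
occupied_thresh: 0.65
free_thresh: 0.196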

In the next section, we will examine TurtleBot's mapping process from a more in-depth ROS perspective.

How does TurtleBot accomplish this mapping task?

TurtleBot builds maps using the ROS gmapping package. The gmapping package is based on OpenSlam's GMapping (http://openslam.org/gmapping.html), a highly efficient Rao-Blackwellized particle filter implementation of laser scan-based SLAM. Although a laser scanner would work best for SLAM, the Kinect provides a simulated laser scan for the TurtleBot. The ROS gmapping package contains the slam_gmapping node, which takes the incoming laser scan stream and transforms it to the odometry tf reference frame.

The gmapping process is implemented by a set of parameters within the gmapping_demo.launch file in the turtlebot_navigation package. This launch file initiates the 3dsensor.launch file from the turtlebot_bringup package to handle the processing of the 3D sensor. Some of the sensor processing modules are turned off to minimize processing for this task.

The slam_gmapping node subscribes to the sensor_msgs/LaserScan messages from the camera_nodelet_manager node and the tf/tfMessage messages containing the odometry frames. The following diagram from rqt_graph shows the /tf and /tf_static topics (with tf/tfMessage messages) and the /scan topic (with sensor_msgs/LaserScan messages) being subscribed to by the slam_gmapping node. The slam_gmapping node combines this data to create an OGM of the environment. As the robot is driven around the room, the slam_gmapping node publishes the /map topic to update the OGM with an estimate of TurtleBot's location and the surrounding environment based on data from the laser scan.

slam_gmapping node
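
To produce a similar node and topic graph for your own system, run rqt_graph on the remote computer while the mapping session is active:

$ rqt_graph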

When the operator issues the command to save the map, the map_saver node of the map_server package is activated. The map_saver node provides a ROS service that takes the OGM data and saves it to a pair of files (the .pgm and .yaml files described in the previous section). Each cell of the OGM records its occupancy state as a color for the corresponding pixel. Free space is identified as white with a value of 0, and occupied space is identified as black with a value of 100. A special value of -1 is used for unknown (unmapped) space. The threshold values within the .yaml file determine whether pixel values between 0 and 100 are categorized as occupied, free, or in between.

Autonomous navigation with TurtleBot

ROS has implemented the concept of a Navigation Stack. ROS stacks are collections of packages that provide a useful functionality, in this case navigation. Packages in the Navigation Stack handle the processing of odometry data and sensor streams into velocity commands for the robot base. Because TurtleBot has a differential drive base, it can take full advantage of the ROS Navigation Stack to perform tasks such as autonomous navigation and obstacle avoidance. Therefore, understanding TurtleBot's navigation processes will provide the knowledge base for many other ROS mobile robots, as well as a basic understanding of navigation for aerial and underwater robots.

In this section, we will use the map that we created in the Mapping a room with TurtleBot section. As an alternative, you can use a bitmap image of a map of the environment, but you will need to build the .yaml file by hand. Values for map resolution, coordinates of the origin, and the threshold values will need to be selected. With the environment map loaded, we will command TurtleBot to move from its present location to a given location on the map defined as its goal.

At this point, understand the following (quick checks for these items are shown after the list):

  • TurtleBot is publishing odometry data and accepting velocity commands
  • Kinect is publishing 3D sensor data (fake laser scan data)
  • The tf library is maintaining the transformations between the base_link frame, the odom frame, and the Kinect depth sensor frame
  • Our map (my_map) will identify the environment locations that have obstacles
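
The following commands give a quick check of the first three items from the remote computer (no ssh required), assuming the minimal launch and a 3D sensor launch are still running; the topic and frame names are the standard TurtleBot ones used in this chapter:

$ rostopic echo -n 1 /odom
$ rostopic hz /scan
$ rosrun tf tf_echo odom base_link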

Defining terms

The following are the core terms used for autonomous navigation with TurtleBot:

Amcl: The amcl (adaptive Monte Carlo localization) algorithm works to figure out where the robot would need to be on the map in order for its laser scans to make sense. Each possible location is represented by a particle. Particles whose laser scans do not match the map well are removed, resulting in a group of particles representing the robot's location in the map. The amcl node uses the particle positions to compute the robot's pose in the map and publishes the corresponding correction as the transform from the map frame to the odom frame.

Global navigation: These processes perform path planning for a robot to reach a goal on the map.

Local navigation: These processes perform path planning for a robot to create paths to nearby locations on a map and avoid obstacles.

Global costmap: This costmap keeps information for global navigation. Global costmap parameters control the global navigation behavior. These parameters are stored in global_costmap_params.yaml. Parameters common to global and local costmaps are stored in costmap_common_params.yaml.

Local costmap: This costmap keeps information for local navigation. Local costmap parameters control the local navigation behavior and are stored in local_costmap_params.yaml.
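
Once the amcl_demo launch file described in the next section is running, the values loaded from these YAML files can be inspected on the parameter server. This sketch assumes the standard /move_base namespace used by turtlebot_navigation:

$ rosparam get /move_base/global_costmap
$ rosparam get /move_base/local_costmap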

Driving without steering TurtleBot

To navigate the environment, TurtleBot needs a map, a localization module, and a path planning module. TurtleBot can safely and autonomously navigate the environment if the map completely and accurately defines the environment.

Before we begin with the steps for autonomous navigation, check the location of your .yaml and .pgm map files created in the previous section.

As in the previous section, be conscious of which commands are on TurtleBot (requiring ssh from the remote computer) and those that are on the remote computer (not requiring ssh). In each terminal window, enter the commands following the $ prompt:

  1. Terminal Window 1: Minimal launch of TurtleBot:
    $ ssh <username>@<TurtleBot's IP Address>
    $ roslaunch turtlebot_bringup minimal.launch
    
  2. Terminal Window 2: Launch amcl operation:
    $ ssh <username>@<TurtleBot's IP Address>
    $ roslaunch turtlebot_navigation amcl_demo.launch map_file:=/home/<TurtleBot's username>/my_map.yaml
    

    Look for the following text on your window:

    odom received!

    The amcl_demo launch file launches the 3dsensor.launch file with the rgb_processing, depth_registration, and depth_processing modules turned off. This leaves the modules for ir_processing, disparity_processing, disparity_registered_processing, and scan_processing. The map_server node is launched to read the map data from the file. The amcl.launch.xml and move_base.launch.xml files are also invoked. amcl.launch.xml launches the amcl node and sets multiple parameters for it. move_base.launch.xml launches the move_base node and also starts the nodes for velocity_smoother and safety_controller. A more complete description of this processing is provided in the following How does TurtleBot accomplish this navigation task? section.

  3. Terminal Window 3: View navigation on rviz:
    $ roslaunch turtlebot_rviz_launchers view_navigation.launch
    

    This command launches the rviz node and rviz will come up in the TopDownOrtho view. Your rviz screen should display results similar to the following screenshot:

    An initial amcl screen in rviz

rviz control

When amcl_demo loads the map of the environment, TurtleBot does not know its current location on the map. It needs a little help. Locate TurtleBot's position in the rviz environment and let TurtleBot know this location by performing the following steps:

  1. Click on the 2D Pose Estimate button on the tool toolbar at the top of the main screen.
  2. Click on the location on the map where TurtleBot is located and drag the mouse in the direction TurtleBot is facing.

A giant green arrow will appear to help you align the direction of TurtleBot's orientation, as shown in the following screenshot:

TurtleBot 2D Pose Estimate

When the mouse button is released, a collection of small arrows (the amcl particle cloud) will appear around TurtleBot, showing its possible poses. If the location and/or orientation is not correct, these steps can be repeated.

The previous steps seed TurtleBot's localization, so it has some idea where it is on the environment map. To improve the accuracy of the localization, it is best to drive TurtleBot around a bit so that the estimate of its current position converges when comparing data from the map with TurtleBot's current sensor streams. Use one of the teleoperation methods previously discussed. Be careful driving around the environment because there is no obstacle avoidance software running at this point. TurtleBot can be driven into obstacles even though they appear on its map.
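
To watch the localization estimate improve as you drive, you can echo amcl's estimated pose from another terminal on the remote computer. The /amcl_pose topic is amcl's standard output; the covariance values in the message should shrink as the particle cloud converges:

$ rostopic echo /amcl_pose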

Next, we can command TurtleBot to a new location and orientation in the room by identifying a goal:

  1. Click on the 2D Nav Goal button on the tool toolbar at the top of the main screen.
  2. Click on the location on the map where you want TurtleBot to go and drag the mouse in the direction TurtleBot should be facing when it is finished.

    Note

    Warning: Try to avoid navigating near obstacles that have low protrusions that will not be detected by the 3D sensor. In our lab, the extensions at the base of the Baxter robot cannot be seen by the TurtleBot.

The following screenshot shows setting the navigation goal for our TurtleBot:

TurtleBot 2D Nav Goal

The following screenshot shows our TurtleBot accomplishing the goal:

TurtleBot reaches its goal

TurtleBot can also perform obstacle avoidance during autonomous navigation. While TurtleBot is on its way to a goal, step in front of it (at least 0.5 meters (1.6 feet) in front of the Kinect) and see that TurtleBot will move around you. Objects can be moved around or doors can be opened or closed to alter the environment. TurtleBot can also respond to the teleoperation control during this autonomous navigation.

In the next section, we will examine TurtleBot's autonomous navigation process from a more in-depth ROS perspective.

How does TurtleBot accomplish this navigation task?

At the highest level of processing, ROS navigation acquires odometry data from the robot base, 3D sensor data, and a goal robot pose. To accomplish the autonomous navigation task, safe velocity commands are sent to the robot to move it to the goal location.

TurtleBot's navigation package, turtlebot_navigation, contains a collection of launch and YAML configuration files to launch nodes with the flexibility of modifying process parameters on-the-fly. The following diagram shows an overview of the navigation process:

The ROS navigation process

When the amcl node is launched, it begins providing localization information on the robot based on the current 3D sensor scans (sensor_msgs/LaserScan), tf transforms (tf/tfMessage), and the OGM (nav_msgs/OccupancyGrid). When a 2D Pose Estimate is input by the operator, an initialpose message (geometry_msgs/PoseWithCovarianceStamped) resets the localization and reinitializes the amcl particle filter. As laser scans are read, amcl resolves the data to the odometry frame. The amcl node provides TurtleBot's estimated position in the map (geometry_msgs/PoseWithCovarianceStamped), a particle cloud (geometry_msgs/PoseArray), and the tf transform for odom (tf/tfMessage).
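
One way to observe amcl's output directly is to ask tf for the correction it publishes between the map and odom frames (run this on the remote computer while amcl is active); the particle cloud itself is published on the /particlecloud topic and is what rviz draws as the Amcl Particle Swarm:

$ rosrun tf tf_echo map odom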

The main component of TurtleBot navigation is the move_base node. This node performs the task of commanding TurtleBot to attempt to reach the goal location. Because the task is implemented as a ROS action, it can be preempted, and TurtleBot's progress toward the goal is provided as feedback. The move_base node uses a global and a local planner to accomplish the task. Two costmaps, global_costmap and local_costmap, are also maintained for the planners by the move_base node.

The behavior of the move_base node relies on the following YAML files:

  • costmap_common_params.yaml
  • local_costmap_params.yaml
  • global_costmap_params.yaml
  • dwa_local_planner_params.yaml
  • move_base_params.yaml
  • global_planner_params.yaml
  • navfn_global_planner_params.yaml

The global planner and costmap are used to create long-term plans over the entire environment, such as path planning for the robot to get to its goal. The local planner and costmap are primarily used for interim goals and obstacle avoidance.

The move_base node receives the goal information as a pose with the position and orientation of the robot in relation to its reference frame. A move_base_msgs/MoveBaseActionGoal message is used to specify the goal. The global planner will calculate a route from the robot's starting location to the goal, taking into account the data from the map. The 3D sensor will publish sensor_msgs/LaserScan messages with information on obstacles in the world to be avoided. The local planner will send navigation commands for TurtleBot to steer around objects, even if they are not on the map. Navigation velocity commands are generated by the move_base node as geometry_msgs/Twist messages. TurtleBot's base will use the cmd_vel.linear.x, cmd_vel.linear.y, and cmd_vel.angular.z velocities for the base motors.
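
In addition to the rviz 2D Nav Goal tool, a goal can be published from the command line. This is a minimal sketch that assumes the standard /move_base_simple/goal topic, which move_base offers as a simple, non-action way to accept a geometry_msgs/PoseStamped goal; the coordinates are placeholders expressed in the map frame:

$ rostopic pub -1 /move_base_simple/goal geometry_msgs/PoseStamped \
  '{header: {frame_id: "map"}, pose: {position: {x: 1.0, y: 0.5, z: 0.0}, orientation: {w: 1.0}}}'

Progress toward a goal sent this way (or through rviz) can be monitored on the /move_base/status and /move_base/feedback topics.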

Goal tolerance is a parameter set by the user to specify the acceptable limits for achieving the goal pose. The move_base node will attempt certain recovery behaviors if TurtleBot is stuck and cannot proceed. These recovery behaviors include clearing obstacles from the costmaps outside of the robot's immediate area and rotating in place to acquire fresh sensor data.

rqt_reconfigure

The many parameters involved in TurtleBot navigation can be tweaked on-the-fly by using the rqt_reconfigure tool. This tool was previously named Dynamic Reconfigure and this name still appears on the screen. To activate this rqt plugin, use the following command:

$ rosrun rqt_reconfigure rqt_reconfigure

Nodes that have been programmed using the rqt_reconfigure API will be visible on the rqt_reconfigure GUI. On the GUI, a node can be selected and a window with that node's parameters will appear, showing the current values and range limits. Sliders and input boxes allow the user to enter new values that will dynamically overwrite the current values. The following screenshot shows configuration parameters that can be changed for the /camera/depth, /camera/depth_registered, and /camera/driver nodes:

rqt_reconfigure camera parameters

The parameters that control the move_base node can also be accessed through rqt_reconfigure. These parameters are set by the move_base_params.yaml file mentioned in the previous section. This screen identifies the base_global_planner and the base_local_planner, as well as how often to update the planning process (planner_frequency), and so on. These parameters allow the operator to tweak the performance of the software during an operation.

rqt_reconfigure move_base parameters
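
The same parameters can also be read and changed from the command line with the dynamic_reconfigure tools, which is convenient for scripting. For example, to list the reconfigurable nodes and then change the planner_frequency parameter mentioned above (the value 1.0 is only an example), assuming move_base is running:

$ rosrun dynamic_reconfigure dynparam list
$ rosrun dynamic_reconfigure dynparam set /move_base planner_frequency 1.0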

Exploring ROS navigation further

The ROS wiki provides extensive information on all aspects of setting up and configuring the navigation parameters; the Navigation Stack documentation at http://wiki.ros.org/navigation is a good starting point for further exploration.

The following book is worth reading to gain an understanding of amcl and robotic navigation:

Probabilistic Robotics by Thrun, Burgard, and Fox, published by MIT Press
