In the previous chapter, we saw how to publish directed navigation goals to the differential drive robot. The takeaways from this chapter will get us closer to the objective of enabling the robot to drive autonomously to a goal in a given map. We will see how to:
- Use Gazebo Laser Scanner plugin
- Create and save a random map in Gazebo
- Use the laser scanner model and the SLAM gmapping package to create a map in rviz
- Load the newly created map into rviz
Chapter-04 Source Code
Download the full source code of this chapter from here.
Source Code Prep
Copy the `kbot_description` package from the previous chapter. All source code changes for this chapter are to be made in this new directory.
Using Laser Scanner Plugin
Lidar is a scanning method that uses lasers to measure distances to objects in the device's surroundings. Laser scanners (Lidar devices) are extremely useful for creating a map of the robot's environment. Such a map, in turn, helps the robot navigate smoothly by choosing the best path and avoiding obstacles. Laser scan data is also one of the main sources of odometry in modern robots.
The plugin used in this section is a Hokuyo Lidar model that can be customized based on your Lidar device specification.
To add the laser scanner model to the robot, add a link and a joint to the robot URDF `kbot_description/urdf/kbot.xacro`.
```xml
<xacro:property name="laser_size_x" value="0.03"/>
<xacro:property name="laser_size_y" value="0.03"/>
<xacro:property name="laser_size_z" value="0.04"/>
<xacro:property name="laser_origin_x" value="0.065"/>
<xacro:property name="laser_origin_y" value="0"/>
<xacro:property name="laser_origin_z" value="0.035"/>

<link name="sensor_laser">
  <visual>
    <geometry>
      <box size="${laser_size_x} ${laser_size_y} ${laser_size_z}"/>
    </geometry>
    <material name="blue"/>
  </visual>
  <collision>
    <geometry>
      <box size="${laser_size_x} ${laser_size_y} ${laser_size_z}"/>
    </geometry>
  </collision>
  <xacro:box_inertia m="0.2" w="${laser_size_x}" h="${laser_size_y}" d="${laser_size_z}"/>
</link>

<joint name="sensor_laser_joint" type="fixed">
  <origin xyz="${laser_origin_x} ${laser_origin_y} ${laser_origin_z}" rpy="0 0 0" />
  <parent link="base_link"/>
  <child link="sensor_laser" />
</joint>
```
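Next, add the `<gazebo>` sensor plugin block to the same file. The sketch below is a representative version consistent with the sensor type, library, topic and frame names discussed next; the update rate, sample count and range values are illustrative assumptions, so adjust them to your device.

```xml
<gazebo reference="sensor_laser">
  <sensor type="gpu_ray" name="sensor_laser">
    <pose>0 0 0 0 0 0</pose>
    <visualize>true</visualize>
    <update_rate>40</update_rate>  <!-- illustrative scan rate in Hz -->
    <ray>
      <scan>
        <horizontal>
          <samples>720</samples>            <!-- illustrative sample count -->
          <resolution>1</resolution>
          <min_angle>-3.14159</min_angle>   <!-- full 360-degree sweep -->
          <max_angle>3.14159</max_angle>
        </horizontal>
      </scan>
      <range>
        <min>0.10</min>     <!-- illustrative min/max range in metres -->
        <max>10.0</max>
        <resolution>0.01</resolution>
      </range>
    </ray>
    <!-- ROS plugin that publishes the scan data -->
    <plugin name="gazebo_ros_sensor_laser" filename="libgazebo_ros_gpu_laser.so">
      <topicName>/kbot/sensor_laser/scan</topicName>
      <frameName>sensor_laser</frameName>
    </plugin>
  </sensor>
</gazebo>
```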
All Gazebo plugins are defined with a `<gazebo>` tag, followed by a tag for the specific plugin type; the `<sensor>` tag is for sensor plugins and, in this case, defines the various properties of the laser scanner. The laser scan plugin above refers and applies to the `sensor_laser` link of the robot. The sensor type is `gpu_ray`, and the plugin implementation library is `libgazebo_ros_gpu_laser.so`.
TIP
If your host system does not have a GPU, replace the type `gpu_ray` with `ray` and the filename `libgazebo_ros_gpu_laser.so` with `libgazebo_ros_laser.so`.
Laser scan data will be published to the topic `/kbot/sensor_laser/scan`. The TF frame name is the same as that of the referring link, `sensor_laser`.
Adjust the other parameters like `range`, `samples`, `min_angle` & `max_angle` based on your laser scanner device specification. For simulation, you may leave them unchanged and launch Gazebo & rviz.
```
$ roslaunch kbot_description kbot_base_rviz_gazebo.launch
```
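To confirm the scanner is publishing, you can check the topic rate in another terminal while the simulation is running; the measured rate should roughly match the plugin's update rate.

```
$ rostopic hz /kbot/sensor_laser/scan
```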
Using a Mesh File
Optionally, for better visualization, a mesh file can be used (instead of a box) for the visual geometry of the laser scanner. For that, download the mesh file hokuyo.dae from the Gazebo models repository to the `kbot_description/meshes/` directory. Then replace the visual geometry description of the laser scanner in `kbot.xacro` with the lines below.
```xml
<visual>
  <geometry>
    <mesh filename="package://kbot_description/meshes/hokuyo.dae"/>
  </geometry>
</visual>
```
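If you prefer the command line, the download step above could be scripted as below; the repository URL and path are assumptions based on the osrf/gazebo_models layout on GitHub, so verify them before use.

```
$ wget -P kbot_description/meshes \
    https://raw.githubusercontent.com/osrf/gazebo_models/master/hokuyo/meshes/hokuyo.dae
```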
Laser Scan Visualization
In Gazebo, you should see laser scan rays all around the robot (it is a 360-degree scanner). From the Gazebo model library, place any random object (e.g. Cube 20k) near the robot.

Laser rays blocked by the object can be visualized in rviz.

In rviz, add the topic /kbot/sensor_laser/scan
to the displays panel. Increase the size to 0.03 to see the laser scan data clearly. The robot sees only the front side of the object, hence the visualization is sometimes just a line.

Creating a Map in Gazebo
In this section we will use the Gazebo building editor to create a simple closed room model, insert that model into the empty world and save it as a new world (map). Eventually, KBot will use this map for navigation.
In the Gazebo GUI, click File -> Open Building Editor. Use structures like walls to build a sample building floor plan like the one shown below, and save it as `kbot_description/models/closed_room`.

Close the building editor to return to the Gazebo main GUI. Select and insert the newly created `closed_room` model into the existing empty world. Move the `closed_room` model so that our robot is inside the room as shown below. Now, save this new world as `kbot_description/worlds/closed_room.world`.

Note that the robot's laser rays are now blocked by the walls of the room.
Using SLAM to Create a Map in rviz
To enable autonomous navigation of the robot within the map, the same map (as the one in Gazebo) needs to be recreated in rviz. The robot uses laser sensor data to create a map of its surroundings using a technique called SLAM – Simultaneous Localization and Mapping. SLAM is an algorithm to create a map (mapping) as well as to calculate the robot's own position within that map (localization).
ROS has a package called `slam_gmapping` that provides laser-based SLAM. `slam_gmapping` subscribes to `tf` and uses the laser, base & odom frames for localization. The package also subscribes to the laser scan topic to create the map. Needless to say, it publishes the created map to a topic called `map`, which can be visualized in rviz.
Create a new launch file `kbot_description/launch/slam_gmapping.launch` with the contents below.
```xml
<?xml version="1.0"?>
<launch>
  <node name="slam_gmapping" pkg="gmapping" type="slam_gmapping">
    <remap from="/scan" to="/kbot/sensor_laser/scan"/>
    <param name="base_frame" value="base_footprint"/>
  </node>
</launch>
```
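gmapping exposes many more tuning parameters than the single `base_frame` set here. A few commonly adjusted ones can be added inside the `<node>` element; the values below are illustrative, not taken from this chapter's code.

```xml
<!-- Optional gmapping tuning; values are illustrative -->
<param name="map_update_interval" value="2.0"/> <!-- seconds between map updates -->
<param name="maxUrange" value="9.0"/>           <!-- usable laser range (m), below the sensor max -->
<param name="delta" value="0.05"/>              <!-- map resolution (m/cell) -->
<param name="particles" value="50"/>            <!-- particle filter size -->
```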
While Gazebo & rviz are running, launch the SLAM node in a new terminal.
```
$ roslaunch kbot_description slam_gmapping.launch
```
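Once the node is up, you can verify that the map is being built and published:

```
$ rostopic info /map
$ rostopic echo -n 1 /map/info
```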
Create another launch file `kbot_description/launch/teleop_keyboard.launch` to launch the `teleop_twist_keyboard` node, and launch it in another terminal.
```xml
<?xml version="1.0"?>
<launch>
  <node name="teleop" pkg="teleop_twist_keyboard" type="teleop_twist_keyboard.py" output="screen">
    <remap from="/cmd_vel" to="/kbot/base_controller/cmd_vel"/>
  </node>
</launch>
```
```
$ roslaunch kbot_description teleop_keyboard.launch
```

Add `Map` and `LaserScan` visualizations to rviz. Now, looking at the Gazebo map and using the teleop keyboard controls, drive the robot around to map all the walls of the closed room. The map created in rviz should look identical to the one in Gazebo. When you are satisfied, save the map to `kbot_description/maps/` by typing the command below in a new terminal (run it from that directory, since the map files are written to the current working directory).
```
$ rosrun map_server map_saver -f closed_room_rviz
```
The above command dumps the rviz map information to two files – `closed_room_rviz.yaml` contains the metadata and `closed_room_rviz.pgm` has the occupancy data.
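The metadata file is plain YAML; the exact numbers depend on your run, but a typical `closed_room_rviz.yaml` looks something like this (values are illustrative):

```yaml
image: closed_room_rviz.pgm          # occupancy grid image
resolution: 0.050000                 # metres per cell
origin: [-12.200000, -12.200000, 0.000000]  # map origin (x, y, yaw)
negate: 0
occupied_thresh: 0.65                # cells above this are obstacles
free_thresh: 0.196                   # cells below this are free space
```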
Loading the Saved Maps
In this section, let’s launch Gazebo with the newly created world file and rviz with the saved map.
In `kbot_base_rviz_gazebo.launch`, use the newly created world when launching the empty world, and start the map server to visualize the map in rviz.
```xml
<!--Gazebo empty world launch file-->
<include file="$(find gazebo_ros)/launch/empty_world.launch">
  <arg name="debug" value="false" />
  <arg name="gui" value="true" />
  <arg name="paused" value="false"/>
  <arg name="use_sim_time" value="false"/>
  <arg name="headless" value="false"/>
  <arg name="verbose" value="true"/>
  <arg name="world_name" value="$(find kbot_description)/worlds/closed_room.world"/>
</include>

<!--Map Server-->
<node pkg="map_server" type="map_server" name="map_server"
      args="$(find kbot_description)/maps/closed_room_rviz.yaml"/>
```
In a terminal, launch the robot simulation –
```
$ roslaunch kbot_description kbot_base_rviz_gazebo.launch
```
to see the loaded maps. Your custom map appears automatically in the Gazebo simulator. However, in rviz you will have to add the `/map` topic to visualize it.

At this point, you will notice that the map tf transform has an error and the map itself is not aligned with the robot's position. These issues will be fixed in the next chapter.