Examining Basic Requirements For Mapping in ROS
When I first looked at the Gazebo robot simulation environment, I didn't understand where the interface layer between a simulated robot and a physical robot actually sat. Now I've learned enough ROS to know: look at the ROS node graph for a node named /gazebo. Anything provided by or consumed by that node is a virtual substitute supplied by Gazebo. When the same ROS software stack runs on a physical robot, nodes that interface with physical hardware replace everything /gazebo provided in simulation.
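For example, here is a minimal sketch (assuming ROS 1 with a master already running, and the rosgraph Python package that ships with ROS) of asking the master for everything /gazebo publishes and subscribes to. The command rosnode info /gazebo reports the same information from the terminal.

```python
#!/usr/bin/env python
"""List the topics published and subscribed by the /gazebo node.

Minimal sketch, assuming ROS 1 with a running master. Everything in these
two lists is what hardware driver nodes must provide on a physical robot.
"""
import rosgraph

master = rosgraph.Master('/gazebo_interface_probe')
publishers, subscribers, _services = master.getSystemState()

gazebo_pubs = [topic for topic, nodes in publishers if '/gazebo' in nodes]
gazebo_subs = [topic for topic, nodes in subscribers if '/gazebo' in nodes]

print('Topics published by /gazebo:  %s' % sorted(gazebo_pubs))
print('Topics subscribed by /gazebo: %s' % sorted(gazebo_subs))
```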
I put this knowledge to use when I tested a cheap little laptop's performance in ROS. Gazebo ran on a high-end computer and every other node ran on the laptop. This simulates the workload of running the Gmapping algorithm as if the low-end laptop were mounted on a physical robot. But what, specifically, is required? Let's look at the ROS node graph once again with rqt_graph. Here's a graph generated while TurtleBot 3's mapping demo is running in Gazebo:
Here's a slightly different graph, generated by running the same mapping task but with Gazebo's GUI and the RViz visualization tool turned off. Both are useful for the human developer but are not strictly necessary for the robot to run. We see the /gazebo_gui node has dropped out as expected, and the /map topic was also dropped because it was no longer being consumed by RViz for presentation.
We can see the Gazebo-specific parts are quite modest in this particular exercise. A physical robot running Gmapping in ROS will need to subscribe to /cmd_vel so it can be told where to go, and provide laser distance scanning data via /scan so Gmapping can tell where it is. Gazebo also publishes the simulated robot's state via /tf and /joint_states.
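To make that concrete, below is a rough rospy sketch of the hardware-facing node a physical robot would run in place of /gazebo: it subscribes to /cmd_vel (geometry_msgs/Twist) and publishes /scan (sensor_msgs/LaserScan). The functions set_motor_speeds() and read_lidar_ranges() are hypothetical placeholders for real driver code, and the frame name and range limits are just nominal example values.

```python
#!/usr/bin/env python
"""Skeleton of the hardware interface a physical robot substitutes for /gazebo.

A minimal sketch, not real robot code: set_motor_speeds() and
read_lidar_ranges() are placeholders standing in for actual drivers.
"""
import math
import rospy
from geometry_msgs.msg import Twist
from sensor_msgs.msg import LaserScan


def set_motor_speeds(linear_x, angular_z):
    # Placeholder: command the drive motors from the requested velocities.
    pass


def read_lidar_ranges():
    # Placeholder: return one revolution of distance readings, in meters.
    return [1.0] * 360


def cmd_vel_callback(twist):
    # The navigation or teleop stack tells the robot where to go here.
    set_motor_speeds(twist.linear.x, twist.angular.z)


def main():
    rospy.init_node('robot_hardware_interface')
    rospy.Subscriber('cmd_vel', Twist, cmd_vel_callback)
    scan_pub = rospy.Publisher('scan', LaserScan, queue_size=10)

    rate = rospy.Rate(5)  # publish a scan five times per second
    while not rospy.is_shutdown():
        scan = LaserScan()
        scan.header.stamp = rospy.Time.now()
        scan.header.frame_id = 'base_scan'  # example frame name
        scan.ranges = read_lidar_ranges()
        scan.angle_min = 0.0
        scan.angle_increment = 2 * math.pi / len(scan.ranges)
        scan.angle_max = scan.angle_increment * (len(scan.ranges) - 1)
        scan.range_min = 0.12  # nominal example limits, in meters
        scan.range_max = 3.5
        scan_pub.publish(scan)
        rate.sleep()


if __name__ == '__main__':
    main()
```

On real hardware those placeholders would talk to motor controllers and the LIDAR, but the topic names and message types stay exactly the same as in simulation, which is why the rest of the stack doesn't care which one it is talking to.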
And now that I have a spinning LIDAR working in ROS to provide /scan, the next project is to build a robot chassis that can be driven via /cmd_vel. After that is complete, we can use it to learn about /tf and /joint_states.
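Once that chassis exists, a quick bench test will be to publish velocity commands by hand. Here is a minimal sketch (assuming ROS 1 and rospy; the speed and duration are arbitrary example values) that drives slowly forward for a few seconds and then stops:

```python
#!/usr/bin/env python
"""Drive a /cmd_vel robot slowly forward for a few seconds, then stop.

A minimal bench-test sketch; speed and duration are example values only.
"""
import rospy
from geometry_msgs.msg import Twist


def main():
    rospy.init_node('cmd_vel_test_drive')
    pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)

    forward = Twist()
    forward.linear.x = 0.1   # meters per second, straight ahead
    forward.angular.z = 0.0  # no turning

    rate = rospy.Rate(10)  # most bases expect a steady stream of commands
    end_time = rospy.Time.now() + rospy.Duration(3.0)
    while not rospy.is_shutdown() and rospy.Time.now() < end_time:
        pub.publish(forward)
        rate.sleep()

    pub.publish(Twist())  # all zeros: stop the robot before exiting


if __name__ == '__main__':
    main()
```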