A24
An open-source autonomous SLAM robot for teaching and research, based on ROS2, Nav2, and slam_toolbox
Building A24: A Low-Cost ROS2 Robot for Learning Navigation
For the better part of the last three years, I have wanted to create a robot that moves around my house and serves water to guests. Just the thought of saying, “Robot, bring water to the living room,” felt like a distant dream. However, I realized that the tools to achieve this are already available—I just had to learn how to use them.
The first step was to learn ROS (Robot Operating System), as it provides access to SLAM and pathfinding libraries without requiring me to code them from scratch. I began by creating a smaller version of the robot that mimics the core functionality of the full-sized version. This approach allowed me to avoid the cost and complexity of more powerful motors, as well as the challenge of carrying water effectively.
Thus began A24, a low-cost robot designed to teach myself the Nav2 stack. While I could have done everything in simulation, I’m glad I didn’t because there is a significant disparity between building a SLAM robot in simulation versus real life. The lessons learned from working on a physical robot were invaluable.
The Hardware Design
Three Stages
A24 is composed of three laser-cut plates made from a single sheet of strong 6mm transparent acrylic:
1. Lower Stage
The lower stage houses:
- Two sensored DC motors.
- A LiPo battery.
- A low-cost caster wheel.
- 3D-printed mounts to ensure the robot’s LiDAR remains parallel to the ground.
2. Middle Stage
The middle stage was designed for modular component organization. It features a grid of 3.5mm holes, which hold the following components:
- A Raspberry Pi 4.
- A step-down converter.
- An L298N motor driver.
- A hardware driver board built using a Seeeduino Xiao, outfitted with male header pins and PCB screw terminals to interface with the encoders and motor driver.
- A power distribution board, essentially a set of screw terminals wired together to separate voltage channels.
Each component is secured with custom 3D-printed mounts.
3. Upper Stage
The upper stage supports:
- An RP-LiDAR A1.
- A custom connector for the LiDAR’s small PCB, allowing connection via a microUSB cable for communication.
The Display and Switch Board
Between stages 1 and 2, I added a 3D-printed piece that houses:
- A display to monitor the battery levels and voltage supplied by the LiPo.
- An SPST rocker switch to turn the robot on and off.
Connecting the Plates: Custom Legs
Initially, I planned to use brass standoffs to connect the plates, but the ones I had were male-female instead of female-female. The laser cutter also made the holes too large to screw into directly, rendering this approach unfeasible.
Instead, I designed custom hexagonal standoffs with:
- Holes for threaded brass inserts at the top and bottom.
- A central hole allowing the use of any length of M3 screw.
To secure the brass inserts, I used a soldering iron to press them into place. This solution is highly adaptable—the legs’ height and diameter can be adjusted based on user preferences.
The Software Design
fusion2urdf for ROS2
The first step was to use the fusion2urdf ROS2 plugin. This plugin converts my Fusion 360 model into a URDF (Unified Robot Description Format) file for use in simulation. The URDF is essential because:
- It allows the pathfinding algorithm to understand the robot’s collision bounding box.
- It provides SLAM software with the necessary transformations between the sensor’s origin and the robot’s origin.
The package also includes ready-to-use ROS2 description files, such as display.launch.py for RViz and gazebo.launch.py for Gazebo. While typically the bringup and description packages should be separate, combining them was acceptable for initial testing.
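As a rough illustration of what such a description looks like, here is a minimal hand-written URDF sketch (link names and dimensions are hypothetical, not the actual fusion2urdf output), validated with Python's standard-library XML parser. The fixed joint's origin is exactly the sensor-to-base transform that SLAM needs.

```python
import xml.etree.ElementTree as ET

# Minimal two-link URDF sketch (hypothetical names and dimensions):
# a base link with a collision body, plus a LiDAR link attached by a
# fixed joint whose origin encodes the sensor-to-base transform.
URDF = """<?xml version="1.0"?>
<robot name="a24">
  <link name="base_link">
    <collision>
      <geometry><cylinder radius="0.09" length="0.15"/></geometry>
    </collision>
  </link>
  <link name="lidar_link"/>
  <joint name="lidar_joint" type="fixed">
    <parent link="base_link"/>
    <child link="lidar_link"/>
    <!-- sensor origin relative to the robot origin: 14 cm up -->
    <origin xyz="0 0 0.14" rpy="0 0 0"/>
  </joint>
</robot>
"""

def link_names(urdf: str) -> list:
    """Parse a URDF string and return its link names, in document order."""
    root = ET.fromstring(urdf)
    return [link.attrib["name"] for link in root.findall("link")]

print(link_names(URDF))  # ['base_link', 'lidar_link']
```

The real file generated from the Fusion 360 model is much longer, but follows the same link/joint structure.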
SLAM with ROS2
Mobile robots in ROS2 estimate their odometry (where the robot thinks it is) from their kinematics and encoder feedback. Fusing additional sensors, such as an IMU, can improve the estimate, but it will still drift over time unless corrected.
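To make the drift problem concrete, here is a minimal dead-reckoning sketch in plain Python (wheel radius and track width are made-up values, not A24's measured geometry). Each encoder update nudges the pose estimate, so small per-step errors accumulate without bound.

```python
import math

# Hypothetical geometry, not A24's calibrated values.
WHEEL_RADIUS = 0.035  # m
TRACK_WIDTH = 0.20    # m, distance between wheel centers

def integrate_odometry(x, y, theta, d_left, d_right):
    """Advance the pose (x, y, theta) given wheel angular displacements [rad]."""
    s_left = d_left * WHEEL_RADIUS            # arc length per wheel [m]
    s_right = d_right * WHEEL_RADIUS
    ds = (s_left + s_right) / 2.0             # forward displacement of the base
    dtheta = (s_right - s_left) / TRACK_WIDTH # heading change
    x += ds * math.cos(theta + dtheta / 2.0)  # midpoint integration
    y += ds * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta
```

Driving straight, both wheels rotating 1 rad advances the pose by one wheel-radius of arc length; turning in place changes only the heading. Any error in the assumed radius or track width biases every single step, which is the drift SLAM has to correct.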
This is where SLAM (Simultaneous Localization and Mapping) comes into play. Tools like slam_toolbox update the transform between the global frame (the map’s origin) and the robot’s odometry frame. Notably:
- The same code can be used for simulation and real-world applications by simply toggling the use_sim_time parameter.
- ROS’s architecture enables seamless Sim2Real transitions, since nodes are environment-agnostic.
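The map-to-odom correction is, at its core, just transform composition. Here is a plain-Python sketch of the idea in 2D (the poses are hypothetical numbers, and real SLAM works with full 3D transforms via tf2): the scan matcher decides where the robot really is in the map, and publishes whatever map→odom transform makes the chain map→odom→base land there.

```python
import math

def compose(a, b):
    """Compose two 2D transforms (x, y, theta): apply b in a's frame."""
    ax, ay, ath = a
    bx, by, bth = b
    return (ax + bx * math.cos(ath) - by * math.sin(ath),
            ay + bx * math.sin(ath) + by * math.cos(ath),
            ath + bth)

def invert(t):
    """Inverse of a 2D transform, so compose(t, invert(t)) is identity."""
    x, y, th = t
    return (-x * math.cos(th) - y * math.sin(th),
             x * math.sin(th) - y * math.cos(th),
            -th)

# Hypothetical poses: the scan matcher says the robot is at map_base,
# while wheel odometry has drifted to odom_base.
map_base = (2.0, 1.0, math.pi / 2)
odom_base = (1.8, 1.1, math.pi / 2)

# Publish map->odom = map_base ∘ inverse(odom_base), so that composing
# it with odom->base reproduces the corrected pose.
map_odom = compose(map_base, invert(odom_base))
corrected = compose(map_odom, odom_base)
```

This is why odometry is never overwritten directly: the local odom frame stays smooth and continuous, and SLAM only adjusts the outer map→odom link.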
microROS with ROS2_control
To update the robot’s odometry and execute motion commands, I used ROS2_control. Initially, the learning curve was steep, but the process became clear once I understood the basics:
- JointStateBroadcaster: Updates the transforms based on encoder feedback.
- DiffDriveController: Computes velocity commands for each wheel based on the desired trajectory (via the cmd_vel topic) and updates the robot’s odometry.
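The wheel-speed computation inside DiffDriveController boils down to standard differential-drive inverse kinematics. A plain-Python sketch with made-up geometry (not A24's calibrated values):

```python
import math

# Hypothetical geometry for illustration.
WHEEL_RADIUS = 0.035  # m
TRACK_WIDTH = 0.20    # m

def cmd_vel_to_wheel_speeds(linear_x, angular_z):
    """Map a body twist (m/s, rad/s), as carried by cmd_vel,
    to left/right wheel angular speeds in rad/s."""
    v_left = linear_x - angular_z * TRACK_WIDTH / 2.0   # m/s at each wheel
    v_right = linear_x + angular_z * TRACK_WIDTH / 2.0
    return v_left / WHEEL_RADIUS, v_right / WHEEL_RADIUS
```

Driving straight gives both wheels the same speed; a pure rotation spins them in opposite directions. The controller also runs the same math in reverse on encoder feedback to update odometry.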
microROS Integration
Typically, robots require custom hardware interface components and communication protocols to connect real hardware to ROS2_control. However, I opted for microROS, which simplifies communication by letting microcontrollers act as ROS nodes.
Using a Seeeduino XIAO running microROS:
- The microcontroller can publish and subscribe to ROS topics like any internal node on the main computer.
- An additional node acts as a translator, converting JointState messages to Float64MultiArray messages, which contain only the essential data needed by the Seeeduino Xiao.
This topic-based approach simplifies the code, reduces system complexity, and improves efficiency, since a Float64MultiArray is smaller than a JointState message.
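A sketch of the translator's core logic as a plain function (the joint names and field layout are assumptions; the real node would use rclpy with sensor_msgs/JointState and std_msgs/Float64MultiArray). Only the two wheel values survive the conversion, in a fixed order the microcontroller can rely on:

```python
# Assumed joint names; a real robot's URDF defines the actual ones.
WHEEL_JOINTS = ("left_wheel_joint", "right_wheel_joint")

def joint_state_to_array(names, velocities):
    """Pick the wheel velocities out of a JointState-like message
    (parallel name/velocity lists) and return them in a fixed order,
    as the flat payload of a Float64MultiArray."""
    by_name = dict(zip(names, velocities))
    return [by_name[joint] for joint in WHEEL_JOINTS]
```

Because the array's order is fixed by convention, the microcontroller firmware can index it directly instead of parsing joint names, which keeps the embedded side trivial.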
Building A24 has been an incredible learning journey, from hardware assembly to mastering the ROS2 ecosystem. By starting with a small, functional prototype, I’ve not only gained valuable hands-on experience but also laid the groundwork for more ambitious robotics projects in the future.