With the growing adoption of ROS 2 in modern robotic systems, we transitioned away from ROS 1 while retaining key components from BBASV 3.0. We ensured that every package retains its original functionality, and the transition has improved the performance of our mission planner, perception pipeline, and control systems.
Behavioural Building Blocks
In our move to ROS 2, we upgraded to BehaviorTree.CPP V4.6, improving mission design through its streamlined tooling. Behavior Trees remain our primary framework for mission development, simplifying the creation and reuse of key nodes across different missions. We design both low-level navigation tasks and high-level autonomy strategies as behavior trees, and a task selection algorithm manages mission execution, sequencing tasks to maximize points within the operational window. A sketch of a reusable node follows below.
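As an illustration of what a reusable leaf node looks like in BehaviorTree.CPP V4, the following minimal sketch registers a condition node with an input port so the same node can be wired into multiple mission trees; the node name, port, and perception lookup are hypothetical, not our actual mission code.

    #include <behaviortree_cpp/bt_factory.h>

    // Condition node that reports whether a given target is currently tracked.
    // The target label arrives through an input port, so the same node can be
    // reused across missions by wiring a different label in the tree XML.
    class TargetAcquired : public BT::ConditionNode
    {
    public:
      TargetAcquired(const std::string & name, const BT::NodeConfig & config)
      : BT::ConditionNode(name, config) {}

      static BT::PortsList providedPorts()
      {
        return { BT::InputPort<std::string>("target_label") };
      }

      BT::NodeStatus tick() override
      {
        auto label = getInput<std::string>("target_label");
        if (!label) {
          throw BT::RuntimeError("missing port [target_label]: ", label.error());
        }
        // Hypothetical perception query; replace with a real track lookup.
        const bool found = false;
        return found ? BT::NodeStatus::SUCCESS : BT::NodeStatus::FAILURE;
      }
    };

    int main()
    {
      BT::BehaviorTreeFactory factory;
      factory.registerNodeType<TargetAcquired>("TargetAcquired");
      auto tree = factory.createTreeFromFile("mission.xml");  // illustrative path
      tree.tickWhileRunning();  // run the mission tree to completion
      return 0;
    }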
Fusion of the senses
BBASV 4.0 features an enhanced perception pipeline for better integration and fusion of data from cameras, LiDAR, and hydrophones. The vehicle’s three monocular cameras provide a combined 260° field of view (FOV), significantly reducing blind spots and allowing the system to detect and track more competition targets. We employ detection models such as YOLOv10, GroundingDINO, and YOLOv8, and our configurable image processing module jointly considers the outputs of these models, providing unified tracks of competition targets for downstream processing.
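One simple way to realize such joint consideration is cross-model non-maximum suppression; the sketch below keeps the highest-confidence box from each group of overlapping same-label detections. The types and threshold are illustrative assumptions, and the actual module is configurable and more involved.

    #include <algorithm>
    #include <string>
    #include <vector>

    struct Detection {
      float x, y, w, h;   // axis-aligned box in pixels
      float score;        // model confidence
      std::string label;  // class name
    };

    // Intersection-over-union of two boxes.
    float iou(const Detection & a, const Detection & b)
    {
      float x1 = std::max(a.x, b.x), y1 = std::max(a.y, b.y);
      float x2 = std::min(a.x + a.w, b.x + b.w);
      float y2 = std::min(a.y + a.h, b.y + b.h);
      float inter = std::max(0.f, x2 - x1) * std::max(0.f, y2 - y1);
      float uni = a.w * a.h + b.w * b.h - inter;
      return uni > 0.f ? inter / uni : 0.f;
    }

    // Merge detections from several models: sort by confidence, then keep a
    // box only if no already-kept same-label box overlaps it strongly.
    std::vector<Detection> mergeModels(std::vector<Detection> all,
                                       float iou_thresh = 0.5f)
    {
      std::sort(all.begin(), all.end(),
                [](const Detection & a, const Detection & b) {
                  return a.score > b.score;
                });
      std::vector<Detection> merged;
      for (const auto & d : all) {
        bool duplicate = std::any_of(merged.begin(), merged.end(),
          [&](const Detection & m) {
            return m.label == d.label && iou(m, d) > iou_thresh;
          });
        if (!duplicate) { merged.push_back(d); }
      }
      return merged;
    }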
The downstream sensor fusion modules associate detections and tracks from the cameras, LiDAR, and hydrophones to precisely locate the various competition obstacles. This includes LiDAR-camera fusion for locating the shooting targets, and additional fusion with range-elevation estimates from the hydrophone array to accurately determine the position of the entrance/exit gates.
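As a concrete illustration of LiDAR-camera fusion, the sketch below estimates the range to a detected object as the median depth of the LiDAR points that project into its bounding box. The pinhole intrinsics K, the camera-from-LiDAR extrinsic, and the function names are assumptions for illustration, not our actual pipeline.

    #include <Eigen/Dense>
    #include <algorithm>
    #include <optional>
    #include <vector>

    struct Box { float x, y, w, h; };  // 2D detection in pixel coordinates

    // Estimate the range to a detected object as the median depth of the
    // LiDAR points whose projection falls inside its bounding box.
    std::optional<float> rangeFromLidar(
      const std::vector<Eigen::Vector3f> & lidar_points,
      const Eigen::Isometry3f & cam_from_lidar,  // extrinsic calibration
      const Eigen::Matrix3f & K,                 // pinhole intrinsics
      const Box & box)
    {
      std::vector<float> depths;
      for (const auto & p : lidar_points) {
        Eigen::Vector3f pc = cam_from_lidar * p;   // point in camera frame
        if (pc.z() <= 0.f) continue;               // behind the camera
        Eigen::Vector3f uv = K * pc;               // pinhole projection
        float u = uv.x() / uv.z(), v = uv.y() / uv.z();
        if (u >= box.x && u <= box.x + box.w &&
            v >= box.y && v <= box.y + box.h) {
          depths.push_back(pc.z());
        }
      }
      if (depths.empty()) return std::nullopt;
      auto mid = depths.begin() + depths.size() / 2;
      std::nth_element(depths.begin(), mid, depths.end());
      return *mid;  // median is robust to stray background points
    }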
Control Systems at the ready
The navigation and control systems in BBASV 4.0 are adapted from our proven BBAUV 4.1 control system used in RoboSub 2023. One major improvement is the decoupling of the thrust allocator from the controller, allowing us to support azimuth thrusters. Additionally, the controller now supports both velocity-based and trajectory-following control, offering flexibility in experimenting with different planners and motion modes.
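To illustrate the decoupling, here is a minimal sketch of an allocator interface sitting between the controller's commanded wrench and the actuator commands; the type and class names are illustrative, not our actual interfaces.

    #include <vector>

    // Desired body wrench from the controller (surge and sway force, yaw moment).
    struct Wrench { double fx, fy, mz; };

    // One command per actuator; azimuth thrusters take a thrust and an angle.
    struct ThrusterCommand { double thrust; double azimuth; };

    // Abstract allocator: the controller never needs to know the thruster
    // layout, so fixed and azimuth configurations become interchangeable.
    class ThrustAllocator
    {
    public:
      virtual ~ThrustAllocator() = default;
      virtual std::vector<ThrusterCommand> allocate(const Wrench & tau) = 0;
    };

    class AzimuthAllocator : public ThrustAllocator
    {
    public:
      std::vector<ThrusterCommand> allocate(const Wrench & tau) override
      {
        // Placeholder: solve the nonlinear allocation problem for thrust
        // magnitudes and azimuth angles that realize the commanded wrench.
        return {};
      }
    };

With this split, changing the thruster layout only means swapping the allocator implementation; the controller itself stays untouched.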
We have also integrated our custom controller as a controller plugin in the Nav2 framework, giving us full access to the rest of the Nav2 plugin ecosystem. This integration lets us leverage features such as the Spatio-Temporal Voxel Layer for real-time obstacle tracking, and our navigation stack supports global planners such as SMAC State Lattice and Visibility Voronoi, enabling effective collision avoidance and precise navigation in complex competition environments.
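For reference, a minimal skeleton of such a Nav2 controller plugin is sketched below, using the nav2_core::Controller interface (signatures follow the Humble-era API and vary slightly across ROS 2 distributions). The class name and internals are illustrative placeholders, not our actual controller.

    #include <memory>
    #include <string>

    #include <nav2_core/controller.hpp>
    #include <pluginlib/class_list_macros.hpp>

    namespace bbasv_control
    {

    class CustomController : public nav2_core::Controller
    {
    public:
      void configure(
        const rclcpp_lifecycle::LifecycleNode::WeakPtr & /*parent*/,
        std::string /*name*/,
        std::shared_ptr<tf2_ros::Buffer> /*tf*/,
        std::shared_ptr<nav2_costmap_2d::Costmap2DROS> /*costmap_ros*/) override {}

      void cleanup() override {}
      void activate() override {}
      void deactivate() override {}

      // Nav2 hands us the global plan; we track it with our own control law.
      void setPlan(const nav_msgs::msg::Path & path) override { plan_ = path; }

      geometry_msgs::msg::TwistStamped computeVelocityCommands(
        const geometry_msgs::msg::PoseStamped & pose,
        const geometry_msgs::msg::Twist & /*velocity*/,
        nav2_core::GoalChecker * /*goal_checker*/) override
      {
        geometry_msgs::msg::TwistStamped cmd;
        cmd.header = pose.header;
        // Placeholder: compute the command that tracks plan_ from pose.
        return cmd;
      }

      void setSpeedLimit(const double & /*speed_limit*/,
                         const bool & /*percentage*/) override {}

    private:
      nav_msgs::msg::Path plan_;
    };

    }  // namespace bbasv_control

    // Export the class so Nav2's controller server can load it at runtime.
    PLUGINLIB_EXPORT_CLASS(bbasv_control::CustomController, nav2_core::Controller)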