Bumblebee AUV 4.5 Software System

Foxglove

This year marks the completion of our transition to ROS 2: for the first time, the entire software stack runs on it. While preserving the core capabilities of BBAUV 4.1, we have improved the system's reliability and optimized its performance.

We also remain open to exploring new approaches: this iteration incorporates new techniques in the perception pipeline and a refined control system, both coordinated by a new mission planner.

Gazebo Simulation

To validate our task strategy and enable testing during hardware downtime, we set up a simulated environment in Gazebo, complete with competition elements. This lets us run preliminary safety and viability checks on much of our software logic, such as the perception and behavior layers, before deploying the actual vehicle. Testing in simulation helps us catch bugs earlier, iterate faster, and free up more development time for the electrical and mechanical teams.

Simulation

Telemetry and Telecommand

Telemetry

The telemetry coverage of BBAUV 4.5 has been expanded to include all vital indicators of vehicle health. Updates and warnings are relayed automatically to the relevant team members via Telegram, enabling smoother operations and proactive problem-solving.
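As an illustration, a health watchdog of this kind can be sketched as a simple range check over telemetry fields, producing warning strings for relay (e.g. through a Telegram bot). The field names and thresholds below are hypothetical, not BBAUV 4.5's actual values:

```python
# Hypothetical limits per telemetry field: (minimum, maximum).
LIMITS = {
    "battery_voltage": (14.0, 16.8),    # V, assuming a 4S lithium pack
    "hull_temperature": (0.0, 55.0),    # deg C
    "internal_pressure": (90.0, 110.0), # kPa, near-atmospheric sealed hull
}

def check_health(readings):
    """Return warning messages for every out-of-range reading.

    `readings` maps telemetry field names to their latest values;
    fields without a configured limit are ignored.
    """
    warnings = []
    for field, value in readings.items():
        lo, hi = LIMITS.get(field, (float("-inf"), float("inf")))
        if not lo <= value <= hi:
            warnings.append(f"WARNING: {field} = {value} outside [{lo}, {hi}]")
    return warnings
```

A periodic task would call `check_health` on the latest readings and forward any non-empty result to the messaging channel.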

In step with the electrical system's upgrades, the software system is now also capable of power cycling individual components in extreme cases of component failure, enabling robust on-the-fly recovery from unexpected situations. The software team also uses custom-built command-line applications for telecommand of BBAUV 4.5 to aid operations and testing.
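A minimal sketch of such a command-line telecommand tool, using Python's `argparse`. The tool name, subcommands, and component names are all hypothetical; a real implementation would trigger the action over ROS 2 rather than return a message:

```python
import argparse

# Hypothetical set of individually power-cyclable components.
COMPONENTS = {"dvl", "sonar", "camera-front", "camera-bottom"}

def build_parser():
    parser = argparse.ArgumentParser(
        prog="bbctl", description="Telecommand sketch for BBAUV 4.5")
    sub = parser.add_subparsers(dest="command", required=True)
    cycle = sub.add_parser("power-cycle", help="power cycle one component")
    cycle.add_argument("component", choices=sorted(COMPONENTS))
    cycle.add_argument("--off-seconds", type=float, default=2.0,
                       help="how long to keep the component unpowered")
    sub.add_parser("status", help="print a vehicle health summary")
    return parser

def main(argv=None):
    args = build_parser().parse_args(argv)
    if args.command == "power-cycle":
        # A real tool would call the power-board service here.
        return f"power-cycling {args.component} ({args.off_seconds:.1f}s off)"
    return "status: nominal"
```

For example, `main(["power-cycle", "dvl"])` returns a confirmation message, while an unknown component name is rejected by `argparse` before any action is taken.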

Mission Planner 2

We migrated our mission planner to one built on py_trees, prioritizing developer-friendliness and mission safety. The new mission planner retains a composable, modular approach while enabling faster development and greater reliability. With growing community adoption, we believe this framework has a strong future, making it a sustainable and forward-looking choice for our system.
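The core idea behind the behaviour-tree pattern that py_trees provides can be illustrated with a minimal stand-in: leaf behaviours report a status each tick, and a Sequence composite runs its children in order, failing fast if any child fails. This is a simplified sketch of the pattern, not py_trees itself, and the task names are illustrative:

```python
from enum import Enum

class Status(Enum):
    SUCCESS = "success"
    FAILURE = "failure"
    RUNNING = "running"

class Behaviour:
    """Base class: every node reports a Status when ticked."""
    def __init__(self, name):
        self.name = name
    def tick(self):
        raise NotImplementedError

class Sequence(Behaviour):
    """Ticks children in order (with memory); succeeds only if all succeed."""
    def __init__(self, name, children):
        super().__init__(name)
        self.children = children
        self.index = 0
    def tick(self):
        while self.index < len(self.children):
            status = self.children[self.index].tick()
            if status is not Status.SUCCESS:
                return status  # propagate FAILURE or RUNNING upward
            self.index += 1
        return Status.SUCCESS

class Task(Behaviour):
    """Toy leaf behaviour that succeeds or fails immediately."""
    def __init__(self, name, succeeds=True):
        super().__init__(name)
        self.succeeds = succeeds
    def tick(self):
        return Status.SUCCESS if self.succeeds else Status.FAILURE

# Illustrative mission: submerge, pass the gate, then surface.
mission = Sequence("mission", [Task("submerge"), Task("pass_gate"), Task("surface")])
```

Because composites and leaves share one interface, subtrees can be swapped or reordered without touching the rest of the tree, which is what makes the approach composable and modular.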

Mission Planner

Underwater Perception and Tracking

Deep Learning

The new perception pipeline combines deep-learning models such as YOLO for object detection and segmentation with traditional computer-vision techniques such as Perspective-n-Point (PnP). Pose estimation is then augmented by feature matching and monocular depth estimation before being processed by clustering algorithms. This combined approach provides accurate perception of elements in the water along with several layers of fallbacks.
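The layered-fallback idea can be sketched as trying pose estimators in order of preference and accepting the first estimate that agrees with the cluster of recently accepted poses. The estimator names, interface, and tolerance below are hypothetical stand-ins for the pipeline described above:

```python
import math

def fuse_pose(estimators, history, tolerance=0.5):
    """Return (name, pose) from the first plausible estimator, else None.

    estimators: ordered list of (name, fn), where fn() -> (x, y, z) or None
                if that layer has nothing to offer this frame.
    history:    recently accepted (x, y, z) poses; their centroid acts as a
                crude cluster centre for outlier rejection.
    """
    centroid = None
    if history:
        n = len(history)
        centroid = tuple(sum(p[i] for p in history) / n for i in range(3))
    for name, fn in estimators:
        pose = fn()
        if pose is None:
            continue  # this layer failed; fall back to the next one
        if centroid is not None and math.dist(pose, centroid) > tolerance:
            continue  # outlier relative to the recent cluster; reject it
        return name, pose
    return None
```

In practice the preferred layer might be PnP on detected keypoints, with feature matching and monocular depth estimation further down the chain, so a single failed detection never leaves the vehicle without a pose.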
