Bumblebee AUV 2.0 Software Architecture

Bumblebee’s software system consists of the mission planner and the vision subsystems, which together complete the competition courses. The software system is built on the Debian GNU/Linux x64 operating system, which provides multicore parallel processing for the vision, control and mission systems.

Building on the architecture put in place last year, the software is based on the ROS (Robot Operating System) framework by Willow Garage. The ROS distribution has been upgraded from ROS Fuerte to ROS Hydro. Each software unit is a ROS node, and all communication, data acquisition and publishing are done through the ROS architecture. The software stack has been further modularised to allow experimentation and quick reconfiguration on-site. Highly customisable interfaces have been developed in Python to facilitate vision tuning. Locomotion of the vehicle is achieved using action servers and clients based on the ROS actionlib API.
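
A minimal sketch of how a task node might command locomotion through actionlib is shown below; the server name, the bbauv_msgs action type and the goal fields are hypothetical placeholders rather than the vehicle’s actual interface.

```python
#!/usr/bin/env python
# Minimal sketch of a locomotion request through the ROS actionlib API.
# The server name "LocomotionServer", the bbauv_msgs action type and the
# goal fields are hypothetical placeholders, not the vehicle's actual API.
import rospy
import actionlib
from bbauv_msgs.msg import ControllerAction, ControllerGoal  # hypothetical action definition

def move_to(forward, sidemove, heading, depth):
    client = actionlib.SimpleActionClient('LocomotionServer', ControllerAction)
    client.wait_for_server()

    goal = ControllerGoal(forward_setpoint=forward,
                          sidemove_setpoint=sidemove,
                          heading_setpoint=heading,
                          depth_setpoint=depth)
    client.send_goal(goal)
    # Block until the controller reports the setpoints reached or the timeout expires.
    client.wait_for_result(rospy.Duration(30))
    return client.get_result()

if __name__ == '__main__':
    rospy.init_node('locomotion_client_example')
    move_to(forward=1.0, sidemove=0.0, heading=90.0, depth=0.5)
```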

Vision Software

Bumblebee’s vision processing system consists of modular communication, movement and vision filtering packages that can be combined and tuned to complete each mission task. Each vision processing unit runs as a separate ROS node and is responsible for a single task, providing both movement, through ROS SMACH state machines, and vision output, while cooperating with the other units running in parallel through the mission planner. Bumblebee’s front- and bottom-facing Microsoft LifeCam Cinema cameras provide sufficient visual feedback for the vision processing system. Improved vision algorithms are applied for better identification of the required objects.
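
The sketch below illustrates how such a task unit might be structured, assuming placeholder topic names and a trivial detection flag; the real nodes run the vision filters described in the next subsection.

```python
#!/usr/bin/env python
# Sketch of a single vision task unit: a ROS node responsible for one task,
# subscribing to a camera topic, republishing an annotated image and exposed
# to the mission planner as a SMACH state.  Topic names and the trivial
# detection flag are illustrative placeholders for the real filter chain.
import rospy
import smach
from sensor_msgs.msg import Image

class LookForGate(smach.State):
    def __init__(self):
        smach.State.__init__(self, outcomes=['detected', 'timeout'])
        self.found = False
        rospy.Subscriber('/front_camera/image_raw', Image, self.image_callback)
        self.annotated_pub = rospy.Publisher('/vision/gate/annotated', Image)

    def image_callback(self, msg):
        # The real node runs its vision filters here and republishes the
        # annotated frame; this placeholder only flags that a frame arrived.
        self.found = True
        self.annotated_pub.publish(msg)

    def execute(self, userdata):
        # Movement commands (via actionlib, as above) would be issued here
        # while detection runs; this sketch simply waits for a detection.
        timeout = rospy.Time.now() + rospy.Duration(60)
        while not rospy.is_shutdown() and rospy.Time.now() < timeout:
            if self.found:
                return 'detected'
            rospy.sleep(0.1)
        return 'timeout'

if __name__ == '__main__':
    rospy.init_node('gate_task_example')
    print(LookForGate().execute(None))
```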

Vision Processing

The vision nodes receive Bayer-encoded image input from the cameras over ROS, and the frames are converted to bgr8 OpenCV images via the ROS cv_bridge. To deal with changing water and lighting conditions, image enhancement techniques such as image sharpening, white balancing with the grey-world assumption, and adaptive thresholding are applied to obtain better image contrast. A combination of vision filters is then used to detect, classify and track objects, including HSV colour thresholding, contour detection and Hough transforms provided by the OpenCV computer vision library. The vision processing code is written in Python, and an annotated processed image is published as a ROS image; an example is presented in Figure 15. Centroid calculation is performed using image moment analysis so that the camera’s centre can be aligned with the identified centroid. The centroid is tracked in each frame while the vehicle manoeuvres into position, before further object identification and manipulation are performed to complete the task at hand.
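
The following sketch shows one possible per-frame pipeline along these lines, using cv_bridge and the OpenCV Python API; the threshold values and return conventions are illustrative rather than the team’s actual filter chain.

```python
#!/usr/bin/env python
# Sketch of a per-frame pipeline along the lines described above: cv_bridge
# conversion, HSV colour thresholding, contour detection and moment-based
# centroid extraction.  The threshold values are illustrative only.
import cv2
import numpy as np
from cv_bridge import CvBridge

bridge = CvBridge()

def process_frame(ros_image, hsv_low=(20, 80, 80), hsv_high=(40, 255, 255)):
    # Convert the ROS image message into an OpenCV BGR image.
    frame = bridge.imgmsg_to_cv2(ros_image, desired_encoding='bgr8')

    # Colour segmentation in HSV space followed by contour extraction.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return bridge.cv2_to_imgmsg(frame, encoding='bgr8'), None

    # Take the largest contour and locate its centroid from the image moments.
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m['m00'] == 0:
        return bridge.cv2_to_imgmsg(frame, encoding='bgr8'), None
    centroid = (int(m['m10'] / m['m00']), int(m['m01'] / m['m00']))

    # Annotate the frame before republishing it as a ROS image.
    cv2.drawContours(frame, [largest], -1, (0, 255, 0), 2)
    cv2.circle(frame, centroid, 5, (0, 0, 255), -1)
    return bridge.cv2_to_imgmsg(frame, encoding='bgr8'), centroid
```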

Vision Tuning

Interactive vision tuning systems have been developed using PyQt to experiment with vision processing parameters in real time. These systems receive live updates from the cameras and the vision processing units, and provide analysis of image statistics such as colour histograms. The ROS dynamic_reconfigure tool is also used to quickly adjust parameters. These configuration parameters persist until the next system reboot and can therefore be reused across subsequent runs.
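
A minimal dynamic_reconfigure server along these lines is sketched below; VisionConfig stands in for a configuration generated from a hypothetical .cfg file, and the parameter names are illustrative.

```python
#!/usr/bin/env python
# Sketch of on-the-fly parameter tuning with dynamic_reconfigure.  VisionConfig
# stands in for a config generated from a hypothetical .cfg file that lists the
# HSV threshold parameters; the parameter names are illustrative.
import rospy
from dynamic_reconfigure.server import Server
from bbauv_vision.cfg import VisionConfig  # hypothetical generated config

current_params = {}

def reconfigure_callback(config, level):
    # Called whenever a value is changed from rqt_reconfigure or a client;
    # the vision node caches the latest thresholds for use on the next frame.
    current_params.update(hue_low=config['hue_low'], hue_high=config['hue_high'],
                          sat_low=config['sat_low'], val_low=config['val_low'])
    return config

if __name__ == '__main__':
    rospy.init_node('vision_tuning_example')
    Server(VisionConfig, reconfigure_callback)
    rospy.spin()
```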

Telemetry

The Bumblebee Control Panel displays telemetry and camera information, enabling monitoring of sensor and actuator data for system analysis during practice runs. The control panel has been enhanced to interface with the information published by the new electrical system. The ROS logging system is used to capture telemetry, video and log messages during both tethered and autonomous runs; the data is recorded as ROS messages in .bag files. The rosbag playback utility then allows post-processing to improve the algorithms and the overall system.
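
As a short sketch of such post-processing, recorded data can be read back with the rosbag Python API; the bag file name and topic below are illustrative.

```python
#!/usr/bin/env python
# Sketch of post-run analysis with the rosbag Python API: reading recorded
# telemetry messages back out of a .bag file.  The file and topic names are
# illustrative.
import rosbag

bag = rosbag.Bag('mission_run.bag')
for topic, msg, stamp in bag.read_messages(topics=['/telemetry/depth']):
    # Each entry carries the original ROS message and the time it was recorded,
    # which makes it easy to replot sensor data or re-run vision code offline.
    print("%.3f %s" % (stamp.to_sec(), msg))
bag.close()
```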

Mission Planner

During an autonomous mission, the vehicle is fully controlled by the mission planner, which directs the task nodes, controls the trajectory between tasks and manages mission time. The mission planner is written in Python and utilises a finite state machine structure.

The highly modular software architecture complements the functionality of the mission planner. The mission planner’s multi-threaded structure allows mission tasks to execute simultaneously with watch states that keep track of mission and task statuses, and it is coupled with extensions for executing and cleaning up arbitrary scripts on the fly. The mission planner also manages contingency states, allowing recovery via waypoints saved during the mission run.
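
The sketch below illustrates this concurrent structure with ROS SMACH: a placeholder task state runs alongside a watch state, and whichever finishes first pre-empts the other. The state names, outcomes and timings are illustrative only.

```python
#!/usr/bin/env python
# Sketch of the mission planner's concurrent structure using ROS SMACH: a
# placeholder task state runs alongside a watch state, and whichever finishes
# first pre-empts the other.  Names, outcomes and timings are illustrative.
import rospy
import smach

class RunTask(smach.State):
    """Stand-in for a mission task; a real task would drive a vision node."""
    def __init__(self, duration):
        smach.State.__init__(self, outcomes=['succeeded', 'preempted'])
        self.duration = duration

    def execute(self, userdata):
        end = rospy.Time.now() + rospy.Duration(self.duration)
        while rospy.Time.now() < end:
            if self.preempt_requested():
                self.service_preempt()
                return 'preempted'
            rospy.sleep(0.1)
        return 'succeeded'

class MissionWatch(smach.State):
    """Watch state that tracks the overall mission time budget."""
    def __init__(self, time_limit):
        smach.State.__init__(self, outcomes=['timed_out', 'preempted'])
        self.time_limit = time_limit

    def execute(self, userdata):
        end = rospy.Time.now() + rospy.Duration(self.time_limit)
        while rospy.Time.now() < end:
            if self.preempt_requested():
                self.service_preempt()
                return 'preempted'
            rospy.sleep(0.1)
        return 'timed_out'

def stop_all_children(outcome_map):
    # Terminate the concurrence as soon as any child state finishes.
    return True

mission = smach.Concurrence(outcomes=['task_done', 'mission_timeout'],
                            default_outcome='mission_timeout',
                            outcome_map={'task_done': {'TASK': 'succeeded'}},
                            child_termination_cb=stop_all_children)
with mission:
    smach.Concurrence.add('TASK', RunTask(duration=20))
    smach.Concurrence.add('WATCH', MissionWatch(time_limit=60))

if __name__ == '__main__':
    rospy.init_node('mission_planner_example')
    print(mission.execute())
```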

Mission runs can be built dynamically from user input, providing the option to test task nodes independently in addition to running a full mission. The vehicle's status is checked continuously, and an alert is sounded in the event of an irrecoverable component failure.
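
As an illustration of such dynamic construction, task names supplied on the command line could be chained into a SMACH state machine as sketched below; the Pause state and task names are stand-ins for the real task states.

```python
#!/usr/bin/env python
# Sketch of assembling a mission run from operator input: task names given on
# the command line are chained into a SMACH state machine, so a single task
# can be tested on its own or a full run executed.  The Pause state and the
# task names are illustrative placeholders for the real task states.
import sys
import rospy
import smach

class Pause(smach.State):
    """Placeholder task state that simply waits a few seconds."""
    def __init__(self):
        smach.State.__init__(self, outcomes=['succeeded'])

    def execute(self, userdata):
        rospy.sleep(3)
        return 'succeeded'

def build_mission(task_names):
    sm = smach.StateMachine(outcomes=['mission_complete'])
    with sm:
        for i, name in enumerate(task_names):
            next_label = (task_names[i + 1].upper() if i + 1 < len(task_names)
                          else 'mission_complete')
            smach.StateMachine.add(name.upper(), Pause(),
                                   transitions={'succeeded': next_label})
    return sm

if __name__ == '__main__':
    rospy.init_node('mission_builder_example')
    build_mission(sys.argv[1:] or ['gate', 'buoy']).execute()
```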