Vision Processing

ROS provides driver nodes for both the Bumblebee2 and Firefly cameras. Since both camera systems return their images in a Bayer-encoded format, image processing nodes must be run to convert the images into RGB format. In addition, a stereo image processing node computes depth images from the stereo pairs provided by the Bumblebee2 stereo vision camera system, from which distances to obstacles are obtained. The image processing nodes also rectify the images when provided with camera calibration data.
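The depth computation rests on the standard stereo relation Z = f·B/d: for a rectified pair, the depth of a point is the focal length times the camera baseline divided by the point's disparity. A minimal sketch of this relation (the focal length and baseline values in the usage note are illustrative, not the Bumblebee2's actual calibration):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth Z = f * B / d for a rectified stereo pair.

    disparity_px -- horizontal pixel offset of a feature between the two images
    focal_px     -- focal length in pixels (from camera calibration)
    baseline_m   -- distance between the two camera centres, in metres
    """
    if disparity_px <= 0:
        # Zero disparity corresponds to a point at infinity.
        return float('inf')
    return focal_px * baseline_m / disparity_px
```

For example, with an assumed 500 px focal length and a 12 cm baseline, a 50 px disparity would place an obstacle 1.2 m away; smaller disparities map to larger distances, which is why depth resolution degrades for far objects.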


Each task of the competition is handled by a node dedicated to that task, with each node divided into a vision processing component and a motion control component. Using the ROS publisher-subscriber model, the task nodes obtain their images by subscribing to the image topics published by the image processing nodes. Common computer vision techniques used by the task nodes include color thresholding in HSV space, contour detection, and Hough transforms for finding lines and circles.
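HSV thresholding works by converting each pixel from RGB to hue, saturation, and value, then keeping only pixels whose hue falls inside a target band with sufficient saturation and brightness; hue is far more stable than raw RGB under the underwater lighting changes the tasks face. A minimal stdlib sketch of the idea (the hue ranges and thresholds here are illustrative assumptions, not our tuned values; in practice this is done per-frame with OpenCV):

```python
import colorsys

def in_hsv_range(rgb, h_range, s_min=0.5, v_min=0.5):
    """True if an RGB pixel (0-255 per channel) lies inside the given
    hue range (degrees), with minimum saturation/value thresholds."""
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    h_deg = h * 360.0
    lo, hi = h_range
    if lo <= hi:
        hue_ok = lo <= h_deg <= hi
    else:
        # Range wraps past 360 degrees (e.g. red spans ~350-10).
        hue_ok = h_deg >= lo or h_deg <= hi
    return hue_ok and s >= s_min and v >= v_min

def threshold_mask(image, h_range):
    """Binary mask (rows of 0/1) for a small RGB image given as
    nested lists of (r, g, b) tuples."""
    return [[1 if in_hsv_range(px, h_range) else 0 for px in row]
            for row in image]
```

Contours are then extracted from the resulting binary mask, so a tight hue band directly reduces the false detections the later stages must reject.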


To optimise network bandwidth when viewing the camera streams, whether in image_view from ROS or in the Telemetry software we developed for the AUV, lossy JPEG compression is applied to the image stream transfers. Similarly, for rosbags, which record mission data for future processing, the camera image streams are compressed with lossy JPEG compression to minimise the space requirement of the bag data.
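As a sketch of how JPEG-compressed transport is typically wired up in ROS (the topic names and quality value below are illustrative assumptions, not our actual configuration), image_transport's republish node and the compressed transport's jpeg_quality parameter can be combined with rosbag:

```shell
# Lower the JPEG quality of the compressed transport (0-100)
rosparam set /camera/image_raw/compressed/jpeg_quality 80

# Republish the raw stream as a JPEG-compressed one for remote viewers
rosrun image_transport republish raw in:=/camera/image_raw \
    compressed out:=/camera/image_remote

# Record only the compressed topic so the bag stays small
rosbag record -O mission.bag /camera/image_raw/compressed
```

Recording the compressed topic trades some decode work at playback time for a large reduction in bag size, which matters when every run of the vehicle is logged.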