Visual Servoing

Robots have recently begun taking their first steps toward real-world applications in agriculture, and more specifically in vineyards. Among other challenges, recognizing grape clusters and performing visual servoing towards them is an important task. Although deep-learning approaches have emerged that appear to simplify the problem, and databases of training data are publicly available, results are severely affected by weather conditions. In this paper, the robustness of grape-cluster detection under rain is investigated using two state-of-the-art object-detection models, Mask R-CNN and YOLOv3. It is shown that rain in an image markedly reduces classifier accuracy, indicating that a de-raining method is vital and that simply training detection methods on rainy images is not enough. Cycle-GANs are exploited to generate de-rained images from rainy samples. The method is validated in a lab experiment using a wheeled robotic platform and a low-cost onboard computer. Mask R-CNN proves too computationally intensive to run onboard compared to YOLOv3. In this scope, we demonstrate a complete, rain-robust, low-cost, and expandable application for precision agriculture in which a robot identifies a grape cluster at high frequency by running YOLOv3-tiny onboard and approaches it to a predefined distance.
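The "detect, then approach to a predefined distance" loop can be sketched as a simple image-based proportional controller: steer so the detected box is centered in the image, and drive forward until the box reaches a target size. This is only an illustrative sketch; the image width, target box width, and gains below are assumptions, not values from the paper, and the real system runs YOLOv3-tiny for detection.

```python
# Sketch of one visual-servoing step, assuming a detector (e.g. YOLOv3-tiny)
# returns a bounding box (x, y, w, h) in pixels for the grape cluster.
# IMG_WIDTH, TARGET_BOX_W, and the gains are illustrative assumptions.

IMG_WIDTH = 640       # assumed camera image width (pixels)
TARGET_BOX_W = 200    # box width (pixels) at the desired stand-off distance
K_LIN = 0.004         # proportional gain: pixel error -> m/s
K_ANG = 0.002         # proportional gain: pixel error -> rad/s

def servo_command(box):
    """Map a detection box to (linear, angular) velocity commands.

    box: (x, y, w, h) with (x, y) the top-left corner in pixels.
    Returns (v, wz): forward velocity in m/s, yaw rate in rad/s.
    """
    x, _, w, _ = box
    cx = x + w / 2.0                   # horizontal centre of the box
    err_ang = (IMG_WIDTH / 2.0) - cx   # steer box centre toward image centre
    err_lin = TARGET_BOX_W - w         # drive until box reaches target width
    return K_LIN * err_lin, K_ANG * err_ang

# A box that is centred and at the target width yields zero commands;
# a small box left of centre yields positive forward and yaw commands.
v0, w0 = servo_command((220, 100, 200, 150))
v1, w1 = servo_command((100, 50, 100, 80))
```

On a ROS-based wheeled platform, the two outputs would typically be published as the `linear.x` and `angular.z` fields of a velocity command, with the loop rate bounded by the detector's inference frequency.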


Visual Odometry in Gazebo

ORB-SLAM3 is a real-time Visual SLAM library that tracks features across consecutive images to estimate the changes in position and orientation of the camera sensor. In this video, we demonstrate the process of Visual SLAM in a simulated vineyard in Gazebo using ORB-SLAM3. The rover is equipped with a camera whose stream ORB-SLAM3 uses as input. The objective is to navigate between the rows in the simulated vineyard using only the camera sensor.
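The two-view geometry at the core of this kind of feature tracking can be illustrated with the classic eight-point algorithm: given matched image points from two camera views, it recovers the essential matrix that encodes the relative rotation and (up-to-scale) translation. ORB-SLAM3 itself uses far more machinery (ORB features, RANSAC, local mapping, bundle adjustment); the synthetic points and poses below are assumptions for demonstration only.

```python
import numpy as np

def eight_point_essential(x1, x2):
    """Estimate E with x2^T E x1 = 0 from N >= 8 correspondences.

    x1, x2: (N, 2) arrays of normalized image coordinates in views 1 and 2.
    """
    n = x1.shape[0]
    A = np.zeros((n, 9))
    for i in range(n):
        u1, v1 = x1[i]
        u2, v2 = x2[i]
        # Row of the linear system for the flattened (row-major) E.
        A[i] = [u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1.0]
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Enforce the essential-matrix constraint: two equal singular values, one zero.
    U, S, Vt = np.linalg.svd(E)
    s = (S[0] + S[1]) / 2.0
    return U @ np.diag([s, s, 0.0]) @ Vt

# Synthetic scene (assumed, for illustration): 3D points in front of camera 1,
# camera 2 rotated slightly about the y-axis and translated along x.
rng = np.random.default_rng(0)
P = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(12, 3))
th = 0.05
R = np.array([[np.cos(th), 0.0, np.sin(th)],
              [0.0, 1.0, 0.0],
              [-np.sin(th), 0.0, np.cos(th)]])
t = np.array([0.5, 0.0, 0.0])
X2 = (R @ P.T).T + t                  # points expressed in camera-2 frame
x1 = P[:, :2] / P[:, 2:3]             # pinhole projection, view 1
x2 = X2[:, :2] / X2[:, 2:3]           # pinhole projection, view 2

E = eight_point_essential(x1, x2)
# Epipolar residuals |x2^T E x1| should be ~0 for noise-free data.
h1 = np.hstack([x1, np.ones((12, 1))])
h2 = np.hstack([x2, np.ones((12, 1))])
residuals = np.abs(np.einsum('ni,ij,nj->n', h2, E, h1))
```

In a full pipeline, E would then be decomposed into the relative rotation and translation direction (e.g. with `cv2.recoverPose`), which is the per-frame motion estimate that a SLAM back end refines.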

Live Experiments using ORB SLAM3 & VINS Fusion

References

ORB_SLAM3

VINS Fusion
