Multicamera Visual SLAM For Vineyard Inspection

Automating the process of collecting samples (e.g., images) from a vineyard can help monitor the condition of the grapes with precision and prevent diseases from spreading. A critical part of this task is the development of a robust localization algorithm, so that (a) a robot is able to carry out the inspection process and (b) the vine-grower knows exactly which part of the vineyard has been covered during inspection. In this paper, we propose a novel approach for enhancing the robustness of vSLAM by utilizing multiple stereo cameras, along with a novel method for detecting loops in homogeneous environments based on AprilTags, where state-of-the-art approaches may struggle to detect them. We test the accuracy of our method with a wheeled Robotic Platform (RP) both in simulation and in a synthetic vineyard developed at NTUA. The developed method achieves high localization accuracy for the RP in the vineyard and remains robust even when a featureless object covers a large part of the field of view of one camera.
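To illustrate the idea behind AprilTag-based loop detection, the sketch below (not the paper's implementation) records the keyframe at which each tag ID is first observed and flags a loop-closure candidate when a tag is re-observed. The pupil_apriltags library, the tag36h11 family, and all file and variable names are assumptions for illustration.

```python
# Minimal sketch of AprilTag-based loop detection, assuming the
# pupil_apriltags library and the tag36h11 family; the image path and
# keyframe index are illustrative placeholders.
import cv2
from pupil_apriltags import Detector

detector = Detector(families="tag36h11")
seen_tags = {}  # tag_id -> index of the keyframe that first observed the tag

def check_for_loop(gray_frame, keyframe_idx):
    """Return (loop_found, matched_keyframe) for the current keyframe."""
    for det in detector.detect(gray_frame):
        if det.tag_id in seen_tags:
            # Tag re-observed: candidate loop closure between the current
            # keyframe and the keyframe that first saw this tag.
            return True, seen_tags[det.tag_id]
        seen_tags[det.tag_id] = keyframe_idx
    return False, None

frame = cv2.imread("vineyard_row.png", cv2.IMREAD_GRAYSCALE)  # placeholder
loop_found, matched_kf = check_for_loop(frame, keyframe_idx=42)
```

A full system would geometrically verify such a candidate (e.g., against the tag's estimated relative pose) before adding a loop-closure constraint to the pose graph.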

Visual Servoing


Recently, robots have started taking their first steps towards real-world applications in agriculture and, more specifically, in vineyards. Among other challenges, recognizing clusters of grapes and performing visual servoing towards them is an important task. Although deep-learning approaches that seem to simplify the problem have emerged, and databases of training data are publicly available, the results are severely affected by weather conditions. In this paper, the robustness of grape-cluster detection is investigated under rainy conditions, using two state-of-the-art object-detection models, Mask R-CNN and YOLOv3. It is shown that rain in an image markedly reduces the accuracy of the classifiers, indicating that a de-raining method is vital and that merely training detection methods on rainy images is not enough. CycleGANs are exploited to generate de-rained images from rainy samples. The method is validated in a lab experiment using a wheeled robotic platform and a low-cost onboard computer. Mask R-CNN proves computationally intensive to run onboard compared to YOLOv3. In this scope, we demonstrate a complete, low-cost, and expandable precision-agriculture application, robust under rainy weather, in which a robot identifies a cluster of grapes at high frequency by running YOLOv3-tiny onboard and approaches it to a predefined distance.
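As a rough illustration of such an onboard detect-and-approach loop, the following sketch runs YOLOv3-tiny through OpenCV's DNN module and derives a simple proportional servoing command from the best detection. The model files, the single grape-cluster class, the gains, and the area-based distance proxy are all assumptions, not the paper's actual controller.

```python
# Hedged sketch of grape-cluster detection with YOLOv3-tiny via OpenCV's
# DNN module, plus a naive proportional approach law. The cfg/weights
# paths, gains, and thresholds are placeholders.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3-tiny-grapes.cfg",
                                 "yolov3-tiny-grapes.weights")
out_layers = net.getUnconnectedOutLayersNames()

def detect_cluster(frame, conf_thresh=0.5):
    """Return the highest-confidence box (x, y, w, h) in pixels, or None."""
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    h, w = frame.shape[:2]
    best, best_conf = None, conf_thresh
    for out in net.forward(out_layers):
        for det in out:
            # Row layout: [cx, cy, bw, bh, objectness, class scores...]
            conf = float(det[4] * det[5:].max())
            if conf > best_conf:
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                best, best_conf = (cx - bw / 2, cy - bh / 2, bw, bh), conf
    return best

def servo_command(box, frame_shape, k_yaw=0.002, target_frac=0.25):
    """Steer toward the box center; advance until the box fills a target
    fraction of the image, used here as a crude proxy for distance."""
    x, y, w, h = box
    img_h, img_w = frame_shape[:2]
    yaw_rate = -k_yaw * ((x + w / 2) - img_w / 2)
    forward = 0.2 if (w * h) / (img_w * img_h) < target_frac else 0.0
    return forward, yaw_rate
```

In practice the stopping criterion would be calibrated against a known cluster size or a depth sensor rather than the raw bounding-box area used here.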


Visual Odometry in Gazebo

ORB-SLAM3 is a real-time Visual SLAM library that tracks features across consecutive images to determine the changes in position and orientation of the camera sensor. In this video, we demonstrate the Visual SLAM process in a simulated vineyard in Gazebo using ORB-SLAM3. The rover is equipped with a camera whose images ORB-SLAM3 uses as input. The objective is to navigate between the rows of the simulated vineyard using only the camera sensor.
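The following is not ORB-SLAM3 itself, but a minimal two-frame sketch of the principle it builds on: match ORB features between consecutive frames and recover the relative camera rotation and up-to-scale translation from the essential matrix. The intrinsic matrix K and the image filenames are placeholders.

```python
# Two-frame visual-odometry sketch with OpenCV ORB features. Assumed
# pinhole intrinsics and frame filenames; not the ORB-SLAM3 pipeline.
import cv2
import numpy as np

K = np.array([[525.0,   0.0, 320.0],
              [  0.0, 525.0, 240.0],
              [  0.0,   0.0,   1.0]])  # assumed camera intrinsics

orb = cv2.ORB_create(2000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # placeholder
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)  # placeholder

kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = matcher.match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# RANSAC on the essential matrix rejects outlier matches; recoverPose
# decomposes E into the relative rotation R and unit-norm translation t.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
```

In a full SLAM pipeline the translation scale would be resolved by a stereo baseline or map triangulation; a monocular two-frame estimate like the one above is defined only up to scale.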

Live Experiments Using ORB-SLAM3 & VINS-Fusion

References

ORB_SLAM3: https://github.com/UZ-SLAMLab/ORB_SLAM3

VINS Fusion: https://github.com/HKUST-Aerial-Robotics/VINS-Fusion
