Article

A hybrid representation of the environment to improve autonomous navigation of mobile robots in agriculture

Journal

PRECISION AGRICULTURE
Volume 22, Issue 2, Pages 524-549

Publisher

SPRINGER
DOI: 10.1007/s11119-020-09773-9

Keywords

Hybrid topological map; Crop classification; Semantic identification; Autonomous navigation; Agricultural robotics

Funding

  1. Investment for the Future program of the French government

Abstract

This paper considers the problem of autonomous navigation in agricultural fields. It proposes a localization and mapping framework based on semantic place classification and key location estimation, which together build a hybrid topological map. This map benefits from a generic partitioning of the field into a finite set of well-differentiated workspaces; through semantic analysis, the position (state) of a mobile system in the field can be estimated probabilistically. Moreover, the map integrates both metric features (key locations) and semantic features (working areas). One of its advantages is that a full and precise map is not necessary prior to navigation. The identification of the key locations and working areas is carried out by a perception system based on 2D LIDAR and RGB cameras. Fusing these data with odometry allows the robot to be located in the topological map. The approach is assessed on off-line data recorded in real conditions in diverse fields during different seasons. It exploits a real-time object detector based on a convolutional neural network, You Only Look Once, version 3 (YOLOv3), which has been trained to classify a considerable number of crops, including market-garden crops such as broccoli and cabbage, and to identify grapevine trunks. The results demonstrate the interest of the approach, which allows (i) obtaining a simple and easy-to-update map, (ii) avoiding the use of artificial landmarks, and thus (iii) improving the autonomy of agricultural robots.
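As an illustrative sketch only (not the authors' implementation), the probabilistic localization over a topologically partitioned field described in the abstract can be framed as a discrete Bayes filter: odometry drives a transition model between workspace states, and the perception system's semantic output provides observation likelihoods. All state names, transition probabilities, and observation likelihoods below are hypothetical values chosen for the example.

```python
# Hypothetical sketch of a discrete Bayes filter over topological field states.
# States, transition model, and observation model are assumptions for
# illustration, not values from the paper.

STATES = ["crop_row", "row_end", "headland"]

# P(next_state | state): odometry-driven transition model (assumed values)
TRANSITION = {
    "crop_row": {"crop_row": 0.90, "row_end": 0.10, "headland": 0.00},
    "row_end":  {"crop_row": 0.05, "row_end": 0.45, "headland": 0.50},
    "headland": {"crop_row": 0.30, "row_end": 0.00, "headland": 0.70},
}

# P(observation | state): likelihood of the semantic label produced by the
# perception system (e.g., a CNN crop detector), given the true state (assumed)
OBSERVATION = {
    "crops_detected":    {"crop_row": 0.80, "row_end": 0.30, "headland": 0.05},
    "no_crops_detected": {"crop_row": 0.20, "row_end": 0.70, "headland": 0.95},
}

def bayes_filter_step(belief, observation):
    """One predict-update cycle of the discrete Bayes filter."""
    # Predict: propagate the belief through the transition model
    predicted = {
        s: sum(belief[p] * TRANSITION[p][s] for p in STATES) for s in STATES
    }
    # Update: weight each state by the observation likelihood, then normalize
    unnormalized = {s: predicted[s] * OBSERVATION[observation][s] for s in STATES}
    total = sum(unnormalized.values())
    return {s: v / total for s, v in unnormalized.items()}

# Usage: start from a uniform belief, then fuse semantic observations
belief = {s: 1.0 / len(STATES) for s in STATES}
for obs in ["crops_detected", "crops_detected", "no_crops_detected"]:
    belief = bayes_filter_step(belief, obs)
print(max(belief, key=belief.get))
```

With these assumed numbers, repeated crop detections concentrate the belief on the crop-row state, mirroring how the paper's framework fuses odometry with semantic classification to track the robot's state in the topological map.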

