Abstract: To extract the centerlines of rice seedling rows, a new method based on the YOLOv3 object detection algorithm is presented, which can extract centerlines at different growth stages of rice seedlings in complex paddy fields so as to provide guide lines for autonomous robot navigation. First, an industrial camera mounted 1 m above the ground at a pitch angle of 45° to 60° was used to capture images of rice seedlings, and the region of interest (ROI) of each crop image was determined in order to locate the guide lines. Because of perspective projection, the rice seedling rows were labeled in segments. An ROI image dataset was then built to train the YOLOv3 model, and the best-performing model was used to detect rice seedlings in the ROI and output bounding boxes. Second, the bounding boxes belonging to the same seedling row were clustered. Third, image segmentation was applied, and smallest univalue segment assimilating nucleus (SUSAN) feature points were extracted within the bounding boxes of each cluster. Finally, the least-squares method was applied to fit the centerline of each seedling row. Under complex paddy field conditions such as windy weather, low light, seedling shadows, and light reflection from the water surface, as well as interference from duckweed and cyanobacteria, the proposed algorithm accurately extracted the centerlines of rice seedling rows. On 200 test images, the mean average precision of the trained network reached 91.47%, the mean angle error of the extracted centerlines was 0.97°, and the average runtime per image (resolution: 640 pixels × 480 pixels) was 82.6 ms. Compared with another centerline extraction method, this algorithm achieved higher robustness, higher accuracy, and faster runtime. The results show that the method runs in real time and has practical application value.
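As a minimal sketch of the final step described above, the snippet below fits a centerline through extracted feature points by least squares. The point coordinates and the helper name `fit_centerline` are illustrative assumptions, not the paper's implementation; the line is fitted as x = a·y + b (x as a function of the row coordinate y), which is better conditioned for near-vertical crop rows than fitting y(x).

```python
import numpy as np

def fit_centerline(points):
    """Least-squares fit of a line x = a*y + b through feature points.

    `points` is an (N, 2) array-like of (x, y) pixel coordinates,
    e.g. SUSAN feature points from one clustered seedling row.
    Returns the slope a and intercept b.
    """
    pts = np.asarray(points, dtype=float)
    # Regress x on y so nearly vertical rows remain well-conditioned.
    a, b = np.polyfit(pts[:, 1], pts[:, 0], deg=1)
    return a, b

# Hypothetical feature points along one nearly vertical seedling row.
row_points = [(100, 0), (102, 50), (104, 100), (106, 150)]
a, b = fit_centerline(row_points)
print(round(a, 3), round(b, 3))  # → 0.04 100.0
```

In a full pipeline, one such fit would be run per clustered row, yielding one guide line per visible seedling row in the ROI.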