Abstract: To improve the accuracy of passion fruit detection and enable rapid real-time inference on mobile platforms, a lightweight passion fruit detection model based on an improved YOLO v8s was proposed. The model replaced the neck feature-fusion network with a gather-and-distribute (GD) mechanism to enhance cross-layer feature fusion and improve generalization on passion fruit images. The model was then pruned using layer-adaptive magnitude-based pruning (LAMP), trading a small amount of accuracy for reduced model size and parameter count so as to enable rapid detection on embedded devices, and knowledge distillation was employed to compensate for the accuracy loss caused by pruning and further enhance detection performance. Experimental results on a passion fruit dataset collected in natural environments showed that the improved model reduced parameter count and memory usage by 63.88% and 62.10%, respectively, compared with the original YOLO v8s baseline, while precision and average precision (AP) increased by 0.9 and 2.3 percentage points, respectively, outperforming the other models compared. Real-time detection frame rates on the Jetson Nano and Jetson TX2 embedded devices were 5.78 frames/s and 19.38 frames/s, respectively, 1.93 times and 1.24 times those of the original model. The proposed model therefore effectively detects passion fruit in complex environments, providing theoretical and technical support for the deployment and application of mobile detection devices in scenarios such as automatic passion fruit harvesting.
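The LAMP criterion mentioned above can be illustrated with a minimal sketch. This is an illustrative re-implementation of the commonly described LAMP score (squared weight magnitude normalized by the sum of squared magnitudes of all weights in the same layer that are at least as large), not the paper's actual pruning code; the function names and toy weight values are assumptions for demonstration. Because scores are normalized within each layer, a single global threshold yields a different (layer-adaptive) sparsity per layer.

```python
# Illustrative sketch of LAMP (layer-adaptive magnitude-based pruning)
# scoring; toy values only, not the paper's implementation.

def lamp_scores(weights):
    """Return the LAMP score of each weight in one layer.

    A weight's score is its squared magnitude divided by the sum of
    squared magnitudes of all weights in the layer whose magnitude is
    greater than or equal to its own. The largest-magnitude weight in
    every layer thus scores 1.0, making scores comparable across layers.
    """
    # Indices sorted by ascending magnitude.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    sq = [weights[i] ** 2 for i in order]
    # Suffix sums of squared magnitudes over the sorted order.
    suffix = [0.0] * (len(sq) + 1)
    for i in range(len(sq) - 1, -1, -1):
        suffix[i] = suffix[i + 1] + sq[i]
    scores = [0.0] * len(weights)
    for rank, idx in enumerate(order):
        scores[idx] = sq[rank] / suffix[rank]
    return scores


def global_prune_mask(layer_weights, sparsity):
    """Per-layer keep-masks: prune the globally lowest-scoring weights."""
    flat = []  # (score, layer index, weight index)
    for li, w in enumerate(layer_weights):
        for wi, s in enumerate(lamp_scores(w)):
            flat.append((s, li, wi))
    flat.sort()
    n_prune = int(len(flat) * sparsity)
    masks = [[True] * len(w) for w in layer_weights]
    for _, li, wi in flat[:n_prune]:
        masks[li][wi] = False
    return masks
```

For example, pruning 50% of the toy layers `[[0.1, -0.5, 0.3], [0.05, 0.02, -0.9, 0.4]]` removes three weights chosen globally by score rather than a fixed fraction per layer, which is the layer-adaptive behavior that lets LAMP shrink parameter count while limiting accuracy loss.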