Abstract: In cage-rearing systems, the culling and death of laying hens change the number of hens and the egg yield in each cage, so the count of laying hens per cage must be updated in a timely manner. Traditional machine vision methods recognize poultry by morphology or color, but their detection accuracy is low in complex scenarios such as uneven lighting inside the cages, hens occluded by cage wire, and adhering eggs. Therefore, a lightweight network, YOLO v7-tiny-DO, was proposed for hen and egg detection on the basis of YOLO v7-tiny, combining deep learning and image processing, and an automated counting method was designed. Firstly, an automated data acquisition platform was built with a JRWT1412 distortion-free camera and inspection equipment, and a total of 2146 images of caged hens and eggs were acquired as the data source. Then the exponential linear unit (ELU) was applied to the YOLO v7-tiny network to reduce the training time of the model; the regular convolution in the efficient layer aggregation network (ELAN) was replaced with depthwise convolution to reduce the number of model parameters, and on this basis a depthwise over-parameterized depthwise convolutional layer (DO-DConv) was constructed by adding a depthwise over-parameterization component (itself a depthwise convolution) to extract the deep features of hens and eggs. At the same time, the coordinate attention (CoordAtt) mechanism was embedded into the feature fusion module to improve the model's perception of the spatial location of hens and eggs. The results showed that the average precision (AP) of YOLO v7-tiny-DO was 96.9% for hens and 99.3% for eggs; compared with YOLO v7-tiny, the AP was increased by 3.2 and 1.4 percentage points, respectively. The model size of YOLO v7-tiny-DO was 5.6 MB, 6.1 MB smaller than that of the original model, making it suitable for deployment on inspection robots with limited computing power. YOLO v7-tiny-DO achieved high-precision detection and localization under partial occlusion, motion blur and egg adhesion, and outperformed other models in dim environments, showing strong robustness. The F1 scores of YOLO v7-tiny-DO were 97.0% for hens and 98.4% for eggs. Compared with mainstream object detection networks such as Faster R-CNN, SSD, YOLO v4-tiny and YOLO v5n, the F1 score for hens was higher by 21.0, 4.0, 8.0 and 1.5 percentage points, respectively; the F1 score for eggs was higher by 31.4, 25.4, 6.4 and 4.4 percentage points, respectively; and the frame rate was higher by 95.2 f/s, 34.8 f/s, 18.4 f/s and 8.4 f/s, respectively. Finally, the algorithm was deployed on an NVIDIA Jetson AGX Xavier edge computing device, and 30 cages were selected for counting tests in a real-world scenario over 3 days. The average counting accuracies for the three test batches were 96.7% for hens and 96.3% for eggs, and the mean absolute errors were 0.13 hens and 0.09 eggs per cage, which can provide a reference for the digital management of large-scale farms.
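To make the DO-DConv layer described above concrete, the following is a minimal PyTorch sketch, not the authors' implementation: it factors a depthwise kernel into a trainable base component and a depthwise over-parameterization component, and folds the two linear factors into a single depthwise kernel on each forward pass, so the inference cost matches that of a plain depthwise convolution. The class name, initialization scheme, and the placement of the ELU activation are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DODConv(nn.Module):
    """Sketch of a depthwise over-parameterized depthwise convolution.

    The effective k*k depthwise kernel is the product of two trainable
    factors; because the composition is linear, they collapse into one
    kernel, so the layer runs as a plain depthwise conv at inference.
    """

    def __init__(self, channels: int, kernel_size: int = 3, stride: int = 1):
        super().__init__()
        k2 = kernel_size * kernel_size
        self.channels = channels
        self.kernel_size = kernel_size
        self.stride = stride
        self.padding = kernel_size // 2
        # Base depthwise component: one k*k-dimensional vector per channel.
        self.W = nn.Parameter(torch.randn(channels, k2) * (k2 ** -0.5))
        # Over-parameterization component, initialized to the identity so
        # the effective kernel starts out equal to W.
        self.D = nn.Parameter(torch.eye(k2).expand(channels, k2, k2).clone())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Fold both factors into one depthwise kernel: (C, k*k) -> (C, 1, k, k).
        w_eff = torch.einsum('cd,cdk->ck', self.W, self.D)
        w_eff = w_eff.view(self.channels, 1, self.kernel_size, self.kernel_size)
        y = F.conv2d(x, w_eff, stride=self.stride,
                     padding=self.padding, groups=self.channels)
        return F.elu(y)  # ELU, the activation the abstract swaps into YOLO v7-tiny

x = torch.randn(1, 32, 64, 64)
print(DODConv(32)(x).shape)  # torch.Size([1, 32, 64, 64])
```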
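Similarly, a compact sketch of the coordinate attention module embedded in the feature fusion stage, following the published CoordAtt formulation (Hou et al., CVPR 2021) rather than this paper's exact code; the reduction ratio and activation choice are assumptions:

```python
import torch
import torch.nn as nn

class CoordAtt(nn.Module):
    """Coordinate attention: encodes position along H and W separately,
    then reweights features with two direction-aware attention maps."""

    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # pool over width  -> (B,C,H,1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # pool over height -> (B,C,1,W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        xh = self.pool_h(x)                      # (B, C, H, 1)
        xw = self.pool_w(x).permute(0, 1, 3, 2)  # (B, C, W, 1)
        # Joint 1x1 transform over both directional descriptors.
        y = self.act(self.bn1(self.conv1(torch.cat([xh, xw], dim=2))))
        yh, yw = torch.split(y, [h, w], dim=2)
        ah = torch.sigmoid(self.conv_h(yh))                      # (B, C, H, 1)
        aw = torch.sigmoid(self.conv_w(yw.permute(0, 1, 3, 2)))  # (B, C, 1, W)
        return x * ah * aw

print(CoordAtt(128)(torch.randn(1, 128, 40, 40)).shape)  # torch.Size([1, 128, 40, 40])
```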
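For the counting evaluation, one plausible way to compute the reported per-cage metrics is sketched below; the abstract does not give the formulas, so both definitions (a relative-error-based accuracy and a mean absolute error per cage) are assumptions for illustration only.

```python
def counting_metrics(predicted: list[int], actual: list[int]) -> tuple[float, float]:
    """Per-cage counting metrics for one test batch (illustrative definitions)."""
    n = len(actual)
    # Accuracy: mean over cages of 1 - |error| / true count (assumed definition).
    accuracy = sum(1 - abs(p - a) / a for p, a in zip(predicted, actual)) / n
    # Mean absolute error, in hens (or eggs) per cage.
    mae = sum(abs(p - a) for p, a in zip(predicted, actual)) / n
    return accuracy, mae

acc, mae = counting_metrics(predicted=[5, 4, 5, 5], actual=[5, 5, 5, 5])
print(f"accuracy={acc:.1%}, MAE={mae:.2f} per cage")  # accuracy=95.0%, MAE=0.25 per cage
```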