
Automatic Identification and Counting Method of Caged Hens and Eggs Based on Improved YOLO v7

Authors: ZHAO Chunjiang, LIANG Xuewen, YU Helong, WANG Haifeng, FAN Shijie, LI Bin

Fund Projects: Beijing Pinggu District Doctoral Farm Project; National Science and Technology Innovation 2030 "New Generation Artificial Intelligence" Major Project (2021ZD0113804); Beijing Academy of Agriculture and Forestry Sciences Reform and Development Special Project; Beijing Academy of Agriculture and Forestry Sciences Research Innovation Platform Construction Project (PT2022-34); Beijing Postdoctoral Research Foundation Project (2022-ZZ-18)


Abstract:

Automatic identification and counting of hens and eggs in caged layer housing supports the detection of low-yield hens and the intelligent management of the henhouse: culling and mortality change the number of hens, and hence the egg output, of each cage, so cage-level counts must be kept up to date. Traditional machine vision methods recognize poultry by morphology or color, but their detection accuracy is low in complex scenes with uneven lighting, hens occluded by cage wire, and adhering eggs. Therefore, a lightweight network, YOLO v7-tiny-DO, was proposed on the basis of YOLO v7-tiny for hen and egg detection, together with an automated per-cage counting method. Firstly, a JRWT1412 distortion-free camera mounted on inspection equipment was used to build an automated data acquisition platform, and 2146 images of caged hens and eggs were collected to construct the dataset. Then the exponential linear unit (ELU) activation function was applied to the YOLO v7-tiny network to reduce training time; the regular convolutions in the efficient layer aggregation network (ELAN) were replaced with depthwise convolutions to reduce the number of model parameters, and a depthwise over-parameterized component (an additional depthwise convolution) was added on this basis to construct the depthwise over-parameterized depthwise convolutional layer (DO-DConv), which extracts deep features of hens and eggs. At the same time, a coordinate attention mechanism (CoordAtt) was embedded into the feature fusion module to improve the model's perception of the spatial location of hens and eggs. The results showed that the average precision (AP) of YOLO v7-tiny-DO was 96.9% for hens and 99.3% for eggs, 3.2 and 1.4 percentage points higher, respectively, than YOLO v7-tiny. The model size of YOLO v7-tiny-DO was 5.6 MB, 6.1 MB smaller than the original model, making it suitable for deployment on inspection robots with limited computing power. YOLO v7-tiny-DO achieved high-precision detection and localization under partial occlusion, motion blur, and egg adhesion, outperformed the other models in dim lighting, and showed strong robustness. Its F1 scores were 97.0% for hens and 98.4% for eggs; compared with the mainstream object detection networks Faster R-CNN, SSD, YOLO v4-tiny, and YOLO v5n, the hen F1 score was higher by 21.0, 4.0, 8.0, and 1.5 percentage points, the egg F1 score was higher by 31.4, 25.4, 6.4, and 4.4 percentage points, and the frame rate was higher by 95.2, 34.8, 18.4, and 8.4 f/s, respectively. Finally, the algorithm was deployed on an NVIDIA Jetson AGX Xavier edge computing device, and 30 cages were selected for a three-day counting test in a real-world scenario. Across the three test batches, the mean counting accuracy was 96.7% for hens and 96.3% for eggs, and the mean absolute error per cage was 0.13 hens and 0.09 eggs, which can provide a reference for the digital management of large-scale farms.
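The abstract names the DO-DConv layer but does not spell out its internals. Below is a minimal PyTorch sketch of one plausible reading: a second depthwise convolution over-parameterizes the base depthwise convolution during training, and because no nonlinearity separates the two linear operators, the pair can be algebraically folded into a single depthwise convolution (with a larger effective kernel) for inference. The class name `DODConv`, the kernel size, and the BatchNorm/ELU placement are illustrative assumptions, not the authors' exact layer.

```python
import torch
import torch.nn as nn

class DODConv(nn.Module):
    """Hypothetical sketch of a depthwise over-parameterized depthwise conv.

    Two depthwise convolutions are stacked with no nonlinearity between
    them; their composition is still a depthwise linear operator, so it
    can be folded into one depthwise convolution after training.
    """

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        padding = kernel_size // 2
        # Over-parameterizing depthwise component (extra capacity during training).
        self.dw_over = nn.Conv2d(channels, channels, kernel_size,
                                 padding=padding, groups=channels, bias=False)
        # Base depthwise convolution.
        self.dw_base = nn.Conv2d(channels, channels, kernel_size,
                                 padding=padding, groups=channels, bias=False)
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.ELU(inplace=True)  # ELU activation, as the abstract specifies

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.dw_base(self.dw_over(x))))
```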

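CoordAtt is a published attention module (Hou et al., CVPR 2021), so its structure can be sketched with more confidence, though where exactly the authors place it inside the feature fusion module is not stated in the abstract. The idea: factorize global pooling into two 1-D poolings along height and width, so the attention weights retain positional information along each spatial axis. The reduction ratio and the use of ReLU instead of the paper's h-swish are simplifying assumptions.

```python
import torch
import torch.nn as nn

class CoordAtt(nn.Module):
    """Minimal coordinate attention (CoordAtt) sketch."""

    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        # Factorized global pooling: one 1-D average pool per spatial axis.
        x_h = x.mean(dim=3, keepdim=True)                       # (n, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (n, c, w, 1)
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        # Direction-aware attention maps, applied multiplicatively.
        a_h = torch.sigmoid(self.conv_h(y_h))                       # (n, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))   # (n, c, 1, w)
        return x * a_h * a_w
```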
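The abstract reports per-cage counting accuracy and mean absolute error but not their formulas. The helper below is a hypothetical illustration that assumes per-cage accuracy is 1 − |predicted − true| / true, averaged over cages; the authors' exact definition may differ.

```python
def counting_metrics(pred_counts, true_counts):
    """Mean counting accuracy and mean absolute error over cages (assumed formulas)."""
    assert len(pred_counts) == len(true_counts) > 0
    accs, abs_errs = [], []
    for p, t in zip(pred_counts, true_counts):
        abs_errs.append(abs(p - t))
        # Guard against empty cages (true count of zero).
        accs.append(1.0 - abs(p - t) / t if t else float(p == t))
    return sum(accs) / len(accs), sum(abs_errs) / len(abs_errs)

# Example: predicted vs. true hen counts for three cages.
acc, mae = counting_metrics([4, 5, 3], [4, 5, 4])
print(f"mean accuracy = {acc:.1%}, MAE = {mae:.2f} per cage")
```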
Cite this article:

ZHAO Chunjiang, LIANG Xuewen, YU Helong, WANG Haifeng, FAN Shijie, LI Bin. Automatic Identification and Counting Method of Caged Hens and Eggs Based on Improved YOLO v7[J]. Transactions of the Chinese Society for Agricultural Machinery, 2023, 54(7): 300-312.

History:
  • Received: 2022-12-01
  • Published online: 2023-07-10