
Vision Detection Method for Picking Robots Based on Improved Faster R-CNN

Authors: LI Cuiming, YANG Ke, SHEN Tao, SHANG Zhengyu

Fund project: National Natural Science Foundation of China (52265065, 51765031)




    Abstract:

    To address the poor detection and positioning performance of fruit-picking robots in scenes with densely distributed targets and mutually occluding fruits, an improved Faster R-CNN fruit detection and positioning method was proposed, introducing an efficient channel attention (ECA) mechanism and a multiscale feature fusion pyramid (FPN). First, the original VGG16 backbone was replaced with a ResNet50 residual network fused with an FPN, which has stronger representational capability and eliminates the network degradation problem, so that more abstract and richer semantic information can be extracted and the model's ability to detect multiscale and small targets is enhanced. Second, the ECA module was introduced so that the feature extraction network focuses on locally informative regions of the feature map, reducing interference from invalid targets and improving detection accuracy. Finally, a branch-and-leaf grafting data augmentation method was used to extend the apple dataset and alleviate the shortage of image data. Based on the constructed dataset, a genetic algorithm was used to optimize K-means++ clustering and generate adaptive anchor boxes, improving the model's positioning accuracy. Experimental results showed that the improved model achieved average precisions of 96.16% for directly graspable apples and 86.95% for not-directly-graspable apples, with a mean average precision of 92.79%, which was 15.68 percentage points higher than that of the traditional Faster R-CNN. The positioning accuracies for directly graspable and not-directly-graspable apples were 97.14% and 88.93%, respectively, 12.53 and 40.49 percentage points higher than those of the traditional Faster R-CNN. Memory footprint was reduced by 38.20%, and the average computation time per frame was shortened by 40.7%. With its small parameter count and good real-time performance, the improved model is well suited to the vision systems of fruit-picking robots.
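The paper itself does not include implementation code. Assuming the ECA module follows the standard ECA-Net formulation (global average pooling, a shared 1-D convolution across neighbouring channels with an adaptively sized kernel, and a sigmoid gate), the idea can be sketched in NumPy; the uniform convolution kernel below is a stand-in for the learned weights:

```python
import numpy as np

def eca_kernel_size(channels, gamma=2, b=1):
    # Adaptive kernel size k = |log2(C)/gamma + b/gamma|, forced to be odd
    t = int(abs((np.log2(channels) + b) / gamma))
    return t if t % 2 else t + 1

def eca_attention(feat, kernel_size=None):
    """Apply Efficient Channel Attention to a (C, H, W) feature map."""
    c = feat.shape[0]
    if kernel_size is None:
        kernel_size = eca_kernel_size(c)
    # 1. Global average pooling: one descriptor per channel
    desc = feat.mean(axis=(1, 2))                        # shape (C,)
    # 2. Shared 1-D convolution across neighbouring channels
    #    (weights would be learned during training; uniform kernel stands in)
    kernel = np.full(kernel_size, 1.0 / kernel_size)
    padded = np.pad(desc, kernel_size // 2, mode="edge")
    conv = np.convolve(padded, kernel, mode="valid")     # back to shape (C,)
    # 3. Sigmoid gate rescales each channel of the input
    gate = 1.0 / (1.0 + np.exp(-conv))
    return feat * gate[:, None, None]
```

In the detector, the gated feature map would replace the raw backbone output before region proposal, so that informative channels dominate — consistent with the abstract's description of focusing on locally efficient information.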
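The branch-and-leaf grafting augmentation is only named in the abstract; one plausible reading is pasting branch or leaf patches over fruit regions to synthesize occluded training examples. A minimal sketch, in which the function name, alpha-blending scheme, and random placement policy are all assumptions:

```python
import numpy as np

def paste_occluder(image, patch, rng=None):
    """Paste a branch/leaf patch at a random position to simulate occlusion.

    `image` is (H, W, 3); `patch` is (h, w, 3) or (h, w, 4) with an alpha
    channel. Returns a new image; the input is left untouched.
    """
    rng = rng or np.random.default_rng()
    ih, iw = image.shape[:2]
    ph, pw = patch.shape[:2]
    # Random top-left corner such that the patch fits inside the image
    y = rng.integers(0, ih - ph + 1)
    x = rng.integers(0, iw - pw + 1)
    out = image.copy()
    # Alpha-blend if the patch has an alpha channel, else overwrite
    mask = patch[..., 3:] / 255.0 if patch.shape[-1] == 4 else 1.0
    region = out[y:y + ph, x:x + pw]
    out[y:y + ph, x:x + pw] = (
        region * (1 - mask) + patch[..., :3] * mask
    ).astype(image.dtype)
    return out
```

A real pipeline would also adjust the bounding-box labels (e.g. relabeling heavily occluded apples as not directly graspable), which this sketch omits.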
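For anchor generation, the abstract reports K-means++ clustering optimized with a genetic algorithm. The GA step is unspecified, so the sketch below covers only the K-means++ part, using 1 − IoU between (w, h) pairs as the distance metric (a common choice for anchor clustering); a GA refinement would typically mutate the resulting centres and keep mutations that raise the mean best-anchor IoU:

```python
import numpy as np

def wh_iou(boxes, anchors):
    """IoU between (w, h) pairs, treating all boxes as sharing one corner."""
    boxes, anchors = np.asarray(boxes, float), np.asarray(anchors, float)
    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0])
             * np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
    union = ((boxes[:, 0] * boxes[:, 1])[:, None]
             + (anchors[:, 0] * anchors[:, 1])[None, :] - inter)
    return inter / union

def kmeanspp_anchors(boxes, k=9, iters=50, seed=0):
    """Cluster ground-truth (w, h) boxes into k anchors: k-means++ seeding
    plus Lloyd iterations, with 1 - IoU as the distance."""
    boxes = np.asarray(boxes, float)
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.integers(len(boxes))].reshape(1, 2).copy()
    while len(anchors) < k:
        # Distance of each box to its nearest current centre
        d = 1.0 - wh_iou(boxes, anchors).max(axis=1)
        probs = d ** 2 / (d ** 2).sum()            # k-means++ weighting
        anchors = np.vstack([anchors, boxes[rng.choice(len(boxes), p=probs)]])
    for _ in range(iters):
        assign = wh_iou(boxes, anchors).argmax(axis=1)   # nearest by IoU
        for j in range(k):
            if np.any(assign == j):
                anchors[j] = boxes[assign == j].mean(axis=0)
    return anchors[np.argsort(anchors.prod(axis=1))]     # sorted by area
```

The returned anchors would replace Faster R-CNN's fixed scale/aspect-ratio grid in the region proposal network, which is how adaptive anchors improve localization on a specific dataset.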

Cite this article:

LI Cuiming, YANG Ke, SHEN Tao, SHANG Zhengyu. Vision Detection Method for Picking Robots Based on Improved Faster R-CNN[J]. Transactions of the Chinese Society for Agricultural Machinery, 2024, 55(1): 47-54.

History
  • Received: 2023-06-26
  • Published online: 2023-07-13