
Visual Positioning Technology of Assembly Robot Workpiece Based on Prediction of Key Points
Author:
Affiliation:

Author Biography:

Corresponding Author:

CLC Number:

Fund Project: Key R&D Program of Jilin Province (20200101130GX)


Abstract:

Hand-crafted feature detection for assembly robots is susceptible to interference such as illumination changes, background clutter and occlusion, while point-cloud-based feature detection depends on the accuracy of model construction. To address these problems, a deep learning approach to workpiece visual positioning based on key point prediction was studied. Firstly, the ArUco pose detection marker and ICP point cloud registration were used to construct a data set for training the pose estimation network: depth images of the workpiece were collected from various angles, the pose of the workpiece was computed, and key points on the workpiece surface were selected to form the data set. Then, vector fields of the key points on the workpiece surface were constructed and used together with the data set for training, so that the network learned to predict, for each foreground pixel, the direction vector pointing to each key point. Next, the direction vectors of the pixels pointing to the same key point were grouped in pairs, the intersection of each pair was taken to generate a key point hypothesis, and all hypotheses were evaluated by RANSAC-based voting. The EPnP solver was used to calculate the pose of the workpiece, and an oriented bounding box of the workpiece was generated to display the pose estimation result. Finally, the accuracy and robustness of the estimation results were verified by experiments.
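The pairwise-intersection voting step described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names, the toy pixel set, and all parameters (number of hypotheses, inlier angle threshold) are assumptions. In a real pipeline the per-pixel directions would come from the trained network's predicted vector field, and the winning 2-D key points, paired with their 3-D model coordinates, would be passed to an EPnP solver (e.g. OpenCV's `cv2.solvePnP` with the `SOLVEPNP_EPNP` flag) to recover the workpiece pose.

```python
import numpy as np

def ray_intersection(p1, d1, p2, d2):
    """Intersect two 2-D rays p + t*d; return None if nearly parallel."""
    # Solve t1*d1 - t2*d2 = p2 - p1 for (t1, t2).
    A = np.stack([d1, -d2], axis=1)
    if abs(np.linalg.det(A)) < 1e-8:
        return None
    t = np.linalg.solve(A, p2 - p1)
    return p1 + t[0] * d1

def vote_keypoint(pixels, dirs, n_hyp=128, inlier_thresh_deg=5.0, rng=None):
    """Generate key point hypotheses from random pixel pairs and score
    each by RANSAC-style voting over all foreground pixels."""
    rng = np.random.default_rng(rng)
    cos_thresh = np.cos(np.deg2rad(inlier_thresh_deg))
    best_pt, best_votes = None, -1
    for _ in range(n_hyp):
        i, j = rng.choice(len(pixels), size=2, replace=False)
        hyp = ray_intersection(pixels[i], dirs[i], pixels[j], dirs[j])
        if hyp is None:
            continue
        # A pixel votes for the hypothesis when its predicted direction
        # agrees (within the angle threshold) with the direction from the
        # pixel to the hypothesised key point.
        to_hyp = hyp - pixels
        norm = np.linalg.norm(to_hyp, axis=1)
        valid = norm > 1e-6
        cos = np.einsum('ij,ij->i', to_hyp[valid] / norm[valid, None], dirs[valid])
        votes = int(np.sum(cos > cos_thresh))
        if votes > best_votes:
            best_pt, best_votes = hyp, votes
    return best_pt, best_votes

# Toy check: five foreground pixels whose vectors all point at (50, 40).
true_kp = np.array([50.0, 40.0])
pixels = np.array([[10.0, 10.0], [90.0, 10.0], [10.0, 70.0],
                   [90.0, 70.0], [50.0, 5.0]])
dirs = true_kp - pixels
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
kp, votes = vote_keypoint(pixels, dirs, rng=0)
print(np.round(kp, 2), votes)
```

With noise-free directions every valid pair intersects at the true key point, so the winning hypothesis collects a vote from all five pixels; with a noisy network output the voting suppresses hypotheses generated from outlier directions.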

Cite this article:

NI Tao, ZHANG Panhong, LI Wenhang, ZHAO Yahui, ZHANG Hongyan, ZHAI Haiyang. Visual Positioning Technology of Assembly Robot Workpiece Based on Prediction of Key Points[J]. Transactions of the Chinese Society for Agricultural Machinery, 2022, 53(6): 443-450.

History
  • Received: 2021-06-23
  • Revised:
  • Accepted:
  • Published online: 2021-08-13
  • Published: