
Instance-level Segmentation Method for Group Pig Images Based on Deep Learning

Fund: National Key R&D Program of China (2016YFD0500506) and the Fundamental Research Funds for the Central Universities (2662018JC003, 2662018JC010, 2662017JC028)



Abstract:

Under group housing and feeding, pigs tend to huddle together, especially when lying down. When machine vision is used to track and monitor pigs, the pig bodies in the images adhere to one another, which makes segmentation difficult and has become a bottleneck for visual tracking and monitoring of group-housed pigs. Following the principle of instance segmentation, each pig in the group was treated as one instance, and a PigNet network was built on a deep convolutional neural network to perform instance segmentation of group-pig images, particularly of adhesive pig bodies, so that individual pigs could be distinguished and located. PigNet uses 44 convolutional layers as its backbone. Regions of interest (ROI) are extracted by a region proposal network (RPN) and, together with the feature maps from the backbone's forward pass, are shared with the region-of-interest align (ROIAlign) layer; this branch computes the target space by bilinear interpolation, and three parallel branches output the class, bounding box, and mask of each ROI target. The mask branch uses an average binary cross-entropy loss function to compute the mask loss for each individual pig. Images of six Large White piglets weighing about 9.6 kg were collected continuously for 28 days; 2 500 images of group-housed pigs at different times and in different behavior modes during the first 7 days were extracted as training and validation sets, split 4:1. The results showed that the PigNet model achieved an overall segmentation accuracy of 86.15% on the training set and 85.40% on the validation set. The proposed algorithm accurately segmented individual pigs from group-pig images with varied postures and severe adhesion. Compared with the Mask R-CNN model and its improved variant, its accuracy was 11.40 percentage points higher than that of Mask R-CNN, and its processing time per image was 2.12 s, 30 ms shorter than that of Mask R-CNN.
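The average binary cross-entropy loss used by the mask branch can be sketched as follows. This is a minimal NumPy illustration of the loss formulation, not the authors' implementation; the function and variable names are hypothetical:

```python
import numpy as np

def average_binary_cross_entropy(pred_logits, target_mask):
    """Average binary cross-entropy over one ROI mask.

    pred_logits: raw per-pixel scores for the predicted mask, shape (H, W).
    target_mask: ground-truth binary mask, shape (H, W), values in {0, 1}.
    A per-pixel sigmoid (rather than a softmax across instances) scores each
    pig mask independently, avoiding competition among masks.
    """
    p = 1.0 / (1.0 + np.exp(-pred_logits))           # per-pixel sigmoid
    eps = 1e-12                                      # numerical safety
    bce = -(target_mask * np.log(p + eps)
            + (1 - target_mask) * np.log(1 - p + eps))
    return bce.mean()                                # average over all pixels

# Toy 2x2 example: confident, correct predictions yield a small loss.
logits = np.array([[8.0, -8.0], [8.0, -8.0]])
target = np.array([[1.0, 0.0], [1.0, 0.0]])
loss = average_binary_cross_entropy(logits, target)
```

Averaging over pixels of a single mask, rather than normalizing across all instances jointly, is what decouples the per-pig mask losses.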

    Abstract:

With the development of intelligent and automated technology, more attention is being paid to using it to monitor pig welfare and health in the modern pig industry. Since the behaviors of group-housed pigs reflect their health status, it is necessary to detect and monitor them. At present, machine vision, with its advantages of low cost, easy installation, non-invasiveness, and mature algorithms, is preferentially used to monitor pig behaviors, such as drinking, eating, and sow farrowing, and to estimate physiological indices such as lean meat percentage. Feeding pigs at the group level is the most common practice on intensive pig farms. Because huddled pigs commonly appear in group-pig images, it is challenging for traditional machine vision techniques to monitor group-pig behaviors by separating adhesive pig areas. Thus, a new segmentation method based on a deep convolutional neural network was introduced to separate adhesive pig areas in group-pig images. A PigNet network was built to solve this problem. The main part of PigNet was established on the structure of the Mask R-CNN network, a deep convolutional neural network whose backbone carries an FCN branch, alongside the classification and regression layers, to mask each region of interest. PigNet used the 44 convolutional layers of the Mask R-CNN backbone as its main network. After the main network, the output feature map was fed to four further convolutional layers with different convolution kernels, which formed the remaining part of the network and produced a binary mask for each pig area. The feature map was also fed into two branches: a region proposal network (RPN) and a region-of-interest align (ROIAlign) step.
The first branch output the regions of interest; the second aligned each pig area and produced the class of each pig area versus the background, along with a bounding box for each pig region. A binary cross-entropy loss function was used to compute the loss of each mask and to correct the class layer and the ROI locations. ROIAlign aligned the candidate regions with the convolution features through bilinear interpolation, which loses no information to quantization and makes segmentation more accurate; the FCN of the mask branch used average binary cross-entropy as its loss function for each mask, which avoided competition among pig masks. Finally, each ROI was labeled with a different color. In total, 2 000 images captured during the first five days of a 28-day experiment were taken as the training set, and 500 images from the following 6th and 7th days formed the validation set. The results showed that the accuracy of PigNet was 86.15% on the training set and 85.40% on the validation set. The accuracies on the two sets were very close, indicating effective generalization and high precision. A comparison among PigNet, Mask R-CNN (ResNet101-FPN), and an improved Mask R-CNN showed that PigNet surpassed the other two algorithms in accuracy. PigNet also ran faster than Mask R-CNN, although the times the three algorithms spent on the 500 validation samples were similar. The algorithm can separate individual pigs from group-pig images with different behaviors and severe adhesion. PigNet adopted GPU computation and processed the class, regression-box, and mask branches in parallel, so a single image took only 2.12 s to process. To a certain degree, PigNet reduced the number of convolution parameters and simplified the network structure.
The research provided a new segmentation method for adhesive group-pig images, which increases the feasibility of group-pig tracking and monitoring.
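The bilinear interpolation that ROIAlign uses to sample convolution features at fractional coordinates, avoiding the quantization loss mentioned above, can be sketched as follows. This is a simplified single-point sampler for illustration, not the original implementation:

```python
import numpy as np

def bilinear_sample(feature_map, y, x):
    """Sample a 2-D feature map at a fractional (y, x) location.

    ROIAlign evaluates each sampling point with this kind of bilinear
    interpolation instead of rounding to the nearest cell, so no spatial
    information is lost to coordinate quantization.
    """
    h, w = feature_map.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    dy, dx = y - y0, x - x0
    top = (1 - dx) * feature_map[y0, x0] + dx * feature_map[y0, x1]
    bottom = (1 - dx) * feature_map[y1, x0] + dx * feature_map[y1, x1]
    return (1 - dy) * top + dy * bottom

# Sampling midway between four cells returns their average: (0+2+4+6)/4 = 3.0
fmap = np.array([[0.0, 2.0], [4.0, 6.0]])
value = bilinear_sample(fmap, 0.5, 0.5)
```

In the full ROIAlign operation, several such samples are taken inside each output bin and then pooled; the sketch above shows only the interpolation of one sampling point.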

Cite this article:

GAO Yun, GUO Jiliang, LI Xuan, LEI Minggang, LU Jun, TONG Yu. Instance-level Segmentation Method for Group Pig Images Based on Deep Learning[J]. Transactions of the Chinese Society for Agricultural Machinery, 2019, 50(4): 179-187.

History
  • Received: 2018-10-17
  • Published online: 2019-04-10