Instance-level Segmentation Method for Group Pig Images Based on Deep Learning

Cited by: 0
Authors
Gao Y. [1 ,2 ]
Guo J. [1 ]
Li X. [1 ,2 ]
Lei M. [2 ,3 ]
Lu J. [4 ]
Tong Y. [1 ,2 ]
Institutions
[1] College of Engineering, Huazhong Agricultural University, Wuhan
[2] The Cooperative Innovation Center for Sustainable Pig Production, Wuhan
[3] College of Animal Science and Technology, College of Animal Medicine, Huazhong Agricultural University, Wuhan
[4] College of Science, Huazhong Agricultural University, Wuhan
Keywords
Adherent pig body; Convolution neural network; Deep learning; Group pig; Image segmentation; Instance segmentation;
DOI
10.6041/j.issn.1000-1298.2019.04.020
Abstract
With the development of intelligent and automated technology, increasing attention is being paid to using it to monitor pig welfare and health in the modern pig industry. Since the behaviors of group-housed pigs reflect their health status, it is necessary to detect and monitor those behaviors. At present, machine vision technology, with its advantages of low cost, easy installation, non-invasiveness, and mature algorithms, has been preferentially used to monitor pig behaviors such as drinking, eating, and sow farrowing, and to estimate physiological indices such as lean meat percentage. Group-level feeding is the most common practice on intensive pig farms. Because pigs in group-pig images frequently huddle together, it is challenging for traditional machine vision techniques to monitor group-pig behaviors by separating adherent pig areas. Therefore, a new segmentation method based on a deep convolutional neural network was introduced to separate adherent pig areas in group-pig images. A network named PigNet was built to solve this problem. The main part of PigNet was based on the structure of the Mask R-CNN network, a deep convolutional neural network whose backbone carries an FCN branch, parallel to the classification and regression layers, that masks each region of interest. PigNet used the 44 convolutional layers of the Mask R-CNN backbone as its main network. The feature map output by the main network was fed to four further convolutional layers with different convolution kernels, which formed the remaining part of the network and produced a binary mask for each pig area. The feature map was also fed into two branches: a region proposal network (RPN) and a region of interest align (ROIAlign) operation.
The first branch output the regions of interest; the second aligned each pig area and produced the class of each pig area versus the background, together with a bounding box for each pig region. A binary cross-entropy loss function was used to compute the loss of each mask and to correct the class layer and the locations of the ROIs. Here, ROIAlign aligned each candidate region with the convolutional features through bilinear interpolation, which avoided the information loss caused by quantization and made the segmentation more accurate; the FCN of the mask branch used average binary cross-entropy as its loss function for each mask, which avoided competition among pig masks. Finally, each ROI was labeled with a different color. In total, 2 000 images captured during the first five days of a 28-day experiment were taken as the training set, and 500 images from the following sixth and seventh days formed the validation set. The results showed that the accuracy of PigNet was 86.15% on the training set and 85.40% on the validation set. The accuracies on the two data sets were very close, which showed that the model had effective generalization performance and high precision. A comparison between PigNet, Mask R-CNN (ResNet101-FPN), and an improved Mask R-CNN showed that PigNet surpassed the other two algorithms in accuracy. Meanwhile, PigNet ran faster than Mask R-CNN, although the times the three algorithms spent on the 500 validation samples were similar. The algorithm can be used to separate individual pigs from group-pig images with different behaviors and severe adhesion. PigNet adopted GPU computation and processed its three branches (class, bounding-box regression, and mask) in parallel, so the processing time for a single image was only 2.12 s. To a certain degree, PigNet reduced the number of convolution parameters and simplified the network structure.
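The abstract notes that ROIAlign samples convolutional features at exact fractional coordinates via bilinear interpolation instead of quantizing them. The following is a minimal NumPy sketch of that single sampling step, not the authors' implementation; the function name `bilinear_sample` is illustrative.

```python
import numpy as np

def bilinear_sample(feature, y, x):
    """Sample a 2-D feature map at a fractional (y, x) location.

    ROIAlign avoids the quantization of ROIPool by evaluating each
    sampling point with bilinear interpolation over its four nearest
    grid neighbors, so no spatial information is rounded away.
    """
    h, w = feature.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    dy, dx = y - y0, x - x0
    # Weighted sum of the four surrounding grid values
    return (feature[y0, x0] * (1 - dy) * (1 - dx)
            + feature[y0, x1] * (1 - dy) * dx
            + feature[y1, x0] * dy * (1 - dx)
            + feature[y1, x1] * dy * dx)
```

In a full ROIAlign, each output bin averages several such samples taken at regularly spaced fractional positions inside the bin.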
The research provides a new segmentation method for adherent group-pig images, which increases the feasibility of group-pig tracking and monitoring. © 2019, Chinese Society of Agricultural Machinery. All rights reserved.
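The mask branch described above scores each pig mask with an average binary cross-entropy, so masks are evaluated independently and do not compete the way classes under a softmax would. A minimal NumPy sketch of that loss, assuming sigmoid probabilities as input (the function name is illustrative):

```python
import numpy as np

def average_binary_cross_entropy(pred, target, eps=1e-7):
    """Per-pixel binary cross-entropy averaged over a mask.

    `pred` holds sigmoid probabilities in (0, 1); `target` holds the
    binary ground-truth mask. Each mask is scored on its own, so masks
    for different pigs do not compete with one another.
    """
    pred = np.clip(pred, eps, 1 - eps)  # avoid log(0)
    return float(np.mean(-(target * np.log(pred)
                           + (1 - target) * np.log(1 - pred))))
```

At a uniform prediction of 0.5 the loss equals ln 2 per pixel, the usual chance-level baseline for a binary mask.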
Pages: 179-187 (8 pages)