
CN113706512B - Live pig weight measurement method based on deep learning and depth camera - Google Patents


Info

Publication number: CN113706512B
Application number: CN202111014612.2A
Authority: CN (China)
Prior art keywords: pig, point cloud, cloud data, live, data
Legal status: Active (the listed status is an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN113706512A
Inventors: 王晓辰, 郝云涛, 武岩松, 常虹飞, 田茂
Current assignee: Inner Mongolia University
Original assignee: Inner Mongolia University
Application filed by Inner Mongolia University
Publication of application: CN113706512A
Application granted; publication of grant: CN113706512B


Classifications

    • G06T 7/0002 — Image analysis: inspection of images, e.g. flaw detection
    • G01G 17/08 — Apparatus for or methods of weighing livestock
    • G06T 5/70 — Image enhancement or restoration: denoising; smoothing
    • G06T 7/10 — Image analysis: segmentation; edge detection
    • G06T 7/62 — Image analysis: analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 2207/10028 — Image acquisition modality: range image; depth image; 3D point clouds
    • G06T 2207/20081 — Special algorithmic details: training; learning
    • G06T 2207/20084 — Special algorithmic details: artificial neural networks [ANN]
    • Y02P 60/87 — Re-use of by-products of food processing for fodder production

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a pig weight measurement method based on deep learning and a depth camera. A depth image of a live pig is acquired and converted into three-dimensional point cloud data; the point cloud data are preprocessed to remove noise; the denoised point cloud data are fed into the PointNet++ point cloud convolutional neural network model for deep learning, which removes the background point cloud data; the background-free point cloud data are projected into a three-dimensional coordinate system, the positions of the relevant extreme points are computed to obtain the feature point coordinates, and from these the body size data are derived; finally, taking the body size data of the pig as the independent variables and its weight as the dependent variable, the pig's weight is obtained. The invention removes the need for substantial manpower and material resources to weigh pigs one by one: the weight is estimated accurately from captured images of the pigs through image analysis and computation, achieving batch weight measurement.

Description

A pig weight measurement method based on deep learning and a depth camera

Technical Field

The invention relates to the field of agriculture, and in particular to a pig weight measurement method based on deep learning and a depth camera.

Background Art

China is the world's largest producer of live pigs, ranking first in both the scale of pig farming and pork consumption. A pig's weight is an important index for evaluating its growth and development, and a key criterion in the selection and evaluation of replacement gilts. Body weight is likewise an important basis for evaluating a sow's reproductive and nursing capacity: a sow of suitable weight produces larger litters with a higher rate of healthy piglets. Feed rations are also regulated according to weight data, so that in the course of raising and managing pigs, feeding can be adjusted appropriately and the animals' nutritional status monitored in time, avoiding a decline in sow health.

Weighing live pigs, however, has always been troublesome; many small pig farms and family farms rely on visual inspection alone or skip weighing entirely. Under the prevailing domestic practice, keepers estimate a pig's weight by eye and score it based on experience. Statistics show that such scores fluctuate widely, indicating that the existing visual scoring method carries large errors.

There are two more accurate approaches: body-size estimation and direct measurement. Body-size estimation methods mainly include tape measurement, replacement-gilt body-size tables, and the PIC quick weight-measuring tape. Manual measurement, however, has large errors, consumes considerable labor, and is very inefficient; moreover, since most pigs are large and hard to keep still, measurement can endanger the personnel involved. The most widely used direct-measurement method is the platform scale (weighbridge). It is more accurate than the former, but platform scales are expensive in equipment and money, and in the face of modern large-scale farming with huge herd sizes, weighing animals one by one is clearly inadequate and inefficient: it raises labor costs, is time- and labor-intensive, and stretches the measurement cycle.

Summary of the Invention

To solve the above problems, the present invention provides a pig weight measurement method based on deep learning and a depth camera.

To achieve the above object, the technical scheme adopted by the present invention is as follows:

A pig weight measurement method based on deep learning and a depth camera, comprising the following steps:

obtaining a depth image of a live pig;

converting the depth image into three-dimensional point cloud data;

preprocessing the three-dimensional point cloud data to remove noise;

running the PointNet++ point cloud convolutional neural network model under the PyTorch deep learning framework, feeding the denoised three-dimensional point cloud data into the PointNet++ model for deep learning, and removing the background point cloud data;

projecting the background-free three-dimensional point cloud data into a three-dimensional coordinate system, finding the minimum-z point of each slice of the point cloud, merging these minimum-z points into a new point sequence, removing discrete points from the sequence to obtain the corresponding two-dimensional fitted line, and computing the positions of the relevant extreme points, i.e. the feature point coordinates, thereby obtaining the body size data;

taking the body size data of the pig as independent variables and its weight as the dependent variable, obtaining the weight according to the following formula:

Y = a0 + a1X1 + a2X2 + … + anXn + ε

where Y is the pig's weight, a0, a1, a2, …, an and ε are the coefficients and error term corresponding to the given pig breed, and X1, X2, …, Xn are the pig's body size data.

Optionally, obtaining the depth image of the live pig comprises: mounting a Kinect depth camera beside the feeding machine and capturing depth images of pigs standing still while eating, at a distance D satisfying

D ≥ L / (2·tan(a/2))

where a is the field-of-view angle of the Kinect depth camera, L is the length of the pigs' feeding area, and D is the distance from the camera to the pigs.

Optionally, converting the depth image into three-dimensional point cloud data comprises: using the built-in point cloud acquisition function of the Kinect for Windows SDK 2.0 development environment to convert the captured depth image into three-dimensional point cloud data.

Optionally, preprocessing the three-dimensional point cloud data to remove noise comprises: filtering and denoising the point cloud data set with a bilateral filter, expressed as:

J0 = I,  Jt+1 = f(Jt)

where J0 is the initial image, Jt is the result after t iterations, and f(Jt) is the filter.

Optionally, the feature points include the ground plane, the top of the pig's head, the pig's first tail vertebra, the pig's withers, and the pig's phalanx (toe) bone, where the equation of the ground plane is

ax + by + cz + d = 0

The corresponding body size data include body length X1, body height X2, body oblique length X3, and body width X4;

the body length X1 is the distance from the top of the head N(x1, y1, z1) to the first tail vertebra M(x2, y2, z2), expressed as X1 = |x1 − x2|;

the body height X2 is the straight-line distance from the first tail vertebra M(x2, y2, z2) to the ground plane ax + by + cz + d = 0, expressed as X2 = |a·x2 + b·y2 + c·z2 + d| / √(a² + b² + c²);

the body oblique length X3 is the straight-line distance from the withers A(x3, y3, z3) to the phalanx bone B(x4, y4, z4), expressed as X3 = √((x3 − x4)² + (y3 − y4)² + (z3 − z4)²);

the body width X4 is measured by the Kinect depth camera, expressed as X4 = |zu − zd|.

Compared with the prior art, the technical progress achieved by the present invention is as follows:

With the present invention, pigs only need to pass through in an orderly manner; once the depth camera's capture area has been set up in advance, the extracted parameters yield each pig's weight. This measurement mode resembles an assembly line, greatly improves weighing efficiency, and suits the current trend toward large-scale farming.

Compared with traditional methods of weighing pigs, measuring weight with deep learning is more convenient and safer. It enables non-contact weight measurement, reduces human intervention, largely preserves the conditions for the pigs' natural growth, and serves the goal of welfare-oriented farming.

The present invention focuses on a non-contact pig weight estimation system: whole-body images of pigs are collected and their weight is estimated with deep learning algorithms, which is significant for promoting precise, modern, intelligent farming. The invention aims to provide an accurate and efficient non-contact weight detection system that solves the weight measurement problem through deep learning, reduces unnecessary expenditure of manpower and material resources, and brings information-based, standardized management to pig farming, helping farms expand their scale and increase returns.

Description of Drawings

The accompanying drawings provide a further understanding of the invention and form part of the description; together with the embodiments, they serve to explain the invention and do not limit it.

In the drawings:

Fig. 1 is a schematic flow chart of the method of the present invention.

Fig. 2 is a schematic structural diagram of the depth camera installation of the present invention.

Detailed Description

The specific embodiments below may be combined with one another; identical or similar concepts or processes may not be repeated in every embodiment. Embodiments of the present invention are described below with reference to the drawings.

As shown in Fig. 1, the present invention discloses a pig weight measurement method based on deep learning and a depth camera, comprising: S01: obtaining a depth image of a live pig.

As shown in Fig. 2, obtaining the depth image comprises: mounting a Kinect depth camera beside the feeding machine and capturing depth images of pigs standing still while eating, at a distance D satisfying

D ≥ L / (2·tan(a/2))

where a is the field-of-view angle of the Kinect depth camera, L is the length of the pigs' feeding area, and D is the distance from the camera to the pigs. From the above formula, the camera-to-pig distance is adjustable between 2.5 m and 2.7 m.
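The capture distance follows from simple field-of-view geometry: an area of length L fits inside view angle a at distance D ≥ L / (2·tan(a/2)). A minimal sketch — the 70° field of view and 3.6 m feeding-area length below are assumed example values, not figures from the source:

```python
import math

def min_camera_distance(fov_deg: float, area_length_m: float) -> float:
    """Minimum distance D at which an area of length L fits in view angle a:
    D = L / (2 * tan(a / 2))."""
    half_angle = math.radians(fov_deg) / 2.0
    return area_length_m / (2.0 * math.tan(half_angle))

# Example: a 70-degree field of view and a 3.6 m feeding area (assumed values)
D = min_camera_distance(70.0, 3.6)
print(round(D, 2))  # roughly 2.57 m, inside the 2.5-2.7 m range quoted above
```

With these assumed numbers the minimum distance lands inside the 2.5–2.7 m adjustment range stated above, which is consistent with the embodiment.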

The Kinect depth camera acquires depth information with time-of-flight (ToF) ranging: an infrared emitter actively sends out continuous infrared light pulses, a depth sensor receives the pulses returned from the object, and the object's depth relative to the sensor is derived from the pulses' round-trip flight time. Feature recognition of the measured object is therefore not constrained by illumination, lowering the requirements on ambient light; and with higher depth fidelity and a greatly improved noise floor, the camera can capture data from smaller objects.

Calibrating the Kinect depth camera is a necessary step for measuring the target object accurately: calibration corrects lens distortion and yields the target's coordinates in metric units in the world coordinate system. During image acquisition, the camera is controlled by an infrared trigger and installed next to the pig feeder. The system switches on whenever the pigs feed collectively: when a pig blocks the infrared device, the system starts the Kinect depth camera and photographs the animal. The infrared trigger uses an E18-8MNK photoelectric sensor with a detection range of 0–8 m, a rated current of 100 mA, and a rated voltage of 5 V.

The key difficulty of the pig weight detection system lies in acquiring and processing the pig images, so the collection of the pigs' morphological information is especially important, above all the placement and layout of the Kinect depth camera. To reduce computation and speed up weighing, and considering that body size data are roughly symmetrical, the dual-camera scheme was abandoned in favor of a single Kinect depth camera mounted at the side, with minimal loss of accuracy.

The success of front-end image acquisition and preprocessing is critical: accurate, clear images must be obtained in a variable and rather chaotic farm environment. In this embodiment, a Kinect depth camera is set up beside the feeding machine. Images are captured while the pigs stand still eating, so that head movement does not disturb acquisition, and the data are converted from depth images into three-dimensional point cloud data.

In deep learning applications, the early image collection work is critical: the quantity and quality of the produced data set directly affect the results of later model training, and hence the accuracy of the pig body size measurements. A large number of depth images of pigs of different body shapes is acquired from the same orientation (side view); after screening, the selected depth images are all converted into three-dimensional point clouds, stored, and compiled into the pig point cloud data set.

Image quality is the primary condition for accurate weight computation. Because the acquisition environment is complex, it is hard to guarantee that the pigs stay absolutely still during capture, and the transmission of the images and their conversion into point cloud data sets can introduce unnecessary noise that degrades the quality of the depth data. The collected data are therefore preprocessed to ensure the accuracy and improve the quality of the point cloud data.

S02: converting the depth image into three-dimensional point cloud data.

By composition, point clouds fall into two kinds: ordered and unordered. Point cloud data restored from a depth image can be arranged according to the three-dimensional layout, which makes it easy to find each point's neighbors, exclude invalid point sequences, and retain the valid ones. Point cloud data are easier and faster to process than depth images, so the depth images are converted into point cloud data and compiled into the corresponding data set. In the Kinect for Windows SDK 2.0 development environment, the SDK's own point cloud acquisition function converts the captured depth images into point cloud data.
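The patent uses the point cloud acquisition function bundled with the Kinect for Windows SDK 2.0. As a library-independent illustration, the same back-projection can be sketched with a standard pinhole camera model — the intrinsics fx, fy, cx, cy below are placeholder values, not a real Kinect calibration:

```python
import numpy as np

def depth_to_point_cloud(depth_m: np.ndarray,
                         fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    """Back-project an HxW depth map (metres) to an Nx3 point cloud.
    Pixels with zero depth are treated as invalid and dropped."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]          # keep valid (non-zero) depths only

# Tiny 2x2 depth map with one invalid pixel; toy intrinsics for illustration
depth = np.array([[0.0, 2.0],
                  [2.0, 2.5]])
cloud = depth_to_point_cloud(depth, fx=365.0, fy=365.0, cx=0.5, cy=0.5)
print(cloud.shape)  # (3, 3): three valid pixels, each with x, y, z
```

Dropping zero-depth pixels here plays the role of excluding the invalid point sequences mentioned above.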

S03: preprocessing the three-dimensional point cloud data to remove noise.

Because scanning devices have unavoidable precision limits, the equipment may shake slightly during capture, and the environment changes in complex ways, the extracted point cloud data may contain many scattered and isolated points, which must be filtered out when building the data set. The present invention mainly adopts bilateral filtering for denoising: compared with other filtering methods, the bilateral filter better preserves the edge data of the point cloud. Bilateral filtering is a compromise that combines the spatial proximity and the similarity of the point cloud data, considering spatial information and similarity simultaneously to achieve edge-preserving denoising. When a bilateral filter (or another average-based boundary-preserving filter) is applied repeatedly, the general expression is J0 = I, Jt+1 = f(Jt), where J0 is the initial image, Jt is the result after t iterations, and f(Jt) is the filter. Repeated application of the bilateral filter does not remove small structures; on the contrary, it better preserves the details of the data, removing only noise such as rough hair edges on the pig's body.
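The iteration J0 = I, Jt+1 = f(Jt) can be illustrated on a small depth image with a minimal bilateral filter — a naive, unoptimized sketch for clarity, with illustrative sigma values:

```python
import numpy as np

def bilateral_once(img: np.ndarray, radius: int = 1,
                   sigma_s: float = 1.0, sigma_r: float = 0.1) -> np.ndarray:
    """One bilateral pass f(J): each weight combines spatial proximity
    (sigma_s) and range similarity (sigma_r), so sharp edges are preserved."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            patch = img[i0:i1, j0:j1]
            yy, xx = np.mgrid[i0:i1, j0:j1]
            w_s = np.exp(-((yy - i) ** 2 + (xx - j) ** 2) / (2 * sigma_s ** 2))
            w_r = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r ** 2))
            wgt = w_s * w_r
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

def bilateral_iterated(img: np.ndarray, t: int) -> np.ndarray:
    """J_0 = I, J_{t+1} = f(J_t): apply the filter t times."""
    J = img.astype(float)
    for _ in range(t):
        J = bilateral_once(J)
    return J

# A flat 0.0 region next to a flat 1.0 region: the step edge survives filtering
step = np.concatenate([np.zeros((4, 4)), np.ones((4, 4))], axis=1)
smoothed = bilateral_iterated(step, t=2)
print(float(smoothed[0, 0]), float(smoothed[0, -1]))  # stays near 0.0 and 1.0
```

Because the range weight w_r collapses across the 0→1 step, repeated passes smooth each flat region without blurring the boundary — the edge-preserving behavior described above.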

S04: running the PointNet++ point cloud convolutional neural network model under the PyTorch deep learning framework, feeding the denoised three-dimensional point cloud data into the PointNet++ model for deep learning, and removing the background point cloud data.

Even after denoising, it is difficult in a farm environment to capture the point cloud of an individual pig without also capturing its complex background. To remove unnecessary background points from the large volume of point cloud data and reduce the amount that must be processed, the point cloud is segmented so that the pig's points are obtained separately.

The present invention runs the PointNet++ model under the PyTorch deep learning framework and feeds the denoised three-dimensional point cloud into it for deep learning. Converting the three-dimensional data into depth images for a conventional convolutional neural network, or voxelizing them for a 3D convolutional neural network, usually loses part of the data or incurs excessive computation; processing the point cloud directly exploits the characteristics of the three-dimensional data to the fullest and extracts more feature information. PointNet++ extends the traditional two-dimensional convolutional neural network to three dimensions: it processes point cloud data directly, extending the traditional image input format to point cloud input. Segmenting the three-dimensional point cloud directly with PointNet++ and then processing the result is more direct and faster, requires a smaller data set than the image-based approach, and achieves higher accuracy. It also costs less effort than depth image preprocessing, saving cost and labor.

The PyTorch deep learning framework combines Caffe2 and Torch, refactoring and unifying their code bases, removing duplicate components, and sharing upper-level abstractions to obtain a single framework that supports efficient graph-mode execution, mobile deployment, extensive vendor integration, and more. Development with it is comparatively flexible, and code is easier to write. The PointNet++ model offers higher accuracy, processes point cloud data directly with simpler steps, and is more effective for weight acquisition and detection, making subsequent measurements more precise.
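The trained PointNet++ network itself is not reproduced here; the sketch below shows only the surrounding step — using per-point segmentation labels to strip background points — with a stand-in `segment` function (a height threshold) in place of the real model:

```python
import numpy as np

def segment(points: np.ndarray) -> np.ndarray:
    """Stand-in for the trained PointNet++ segmentation network: returns one
    label per point (1 = pig, 0 = background). This stub labels points by a
    simple height threshold purely for illustration."""
    return (points[:, 2] > 0.2).astype(int)

def remove_background(points: np.ndarray) -> np.ndarray:
    """Keep only the points the segmentation model assigns to the pig class."""
    labels = segment(points)
    return points[labels == 1]

cloud = np.array([[0.0, 0.0, 0.05],   # floor point (background)
                  [0.1, 0.2, 0.60],   # pig body point
                  [0.2, 0.1, 0.75]])  # pig body point
pig_only = remove_background(cloud)
print(len(pig_only))  # 2
```

In the actual pipeline, `segment` would be a PointNet++ forward pass producing per-point class logits; the filtering step around it is unchanged.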

S05: the computation of the body size data can be reduced to distances between the corresponding feature points. The processed point cloud is projected into a three-dimensional coordinate system. First, the minimum-z point of each slice of the point cloud is found; these minimum-z points are merged into a new point sequence, and discrete (outlier) points are removed from it to obtain the corresponding two-dimensional fitted line. Extracting the feature points only requires programmatic identification of the x and z values of the point sequence; the points are then projected in each direction, two-dimensional curves are fitted in the corresponding coordinate systems, and the positions of the relevant extreme points are computed with mathematical functions to obtain the feature point coordinates and hence the body size data. Since the captured depth image shows the pig's side, the measured length must be doubled when computing the body width.
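The slicing procedure of S05 can be sketched on synthetic data: slice along x, keep the minimum-z point per slice, drop discrete outliers, fit a curve, and read off the extreme point. The slice count, outlier threshold, and synthetic parabolic shape below are illustrative choices, not values from the source:

```python
import numpy as np

def profile_from_slices(points: np.ndarray, n_slices: int = 20):
    """Slice along x; in each slice keep the point with the minimum z.
    Returns the (x, z) profile as two arrays."""
    x, z = points[:, 0], points[:, 2]
    edges = np.linspace(x.min(), x.max(), n_slices + 1)
    xs, zs = [], []
    for k in range(n_slices):
        mask = (x >= edges[k]) & (x <= edges[k + 1])
        if mask.any():
            i = np.argmin(z[mask])
            xs.append(x[mask][i])
            zs.append(z[mask][i])
    return np.array(xs), np.array(zs)

def drop_outliers(xs, zs, thresh=2.5):
    """Discrete-point removal: discard profile points whose z deviates by
    more than `thresh` standard deviations from the mean."""
    dev = np.abs(zs - zs.mean())
    keep = dev <= thresh * zs.std() if zs.std() > 0 else np.ones_like(zs, bool)
    return xs[keep], zs[keep]

# Synthetic cloud shaped like z = (x - 1)^2 plus noise: minimum near x = 1
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0, 2000)
pts = np.stack([x, rng.uniform(0.0, 1.0, 2000),
                (x - 1.0) ** 2 + rng.normal(0.0, 0.01, 2000)], axis=1)
xs, zs = drop_outliers(*profile_from_slices(pts))
coeffs = np.polyfit(xs, zs, 2)               # fit the 2-D profile curve
x_extremum = -coeffs[1] / (2 * coeffs[0])    # vertex of the fitted parabola
print(x_extremum)  # close to 1.0, the extreme point of the profile
```

The recovered extremum corresponds to a feature point coordinate; in the patent, such extrema locate anatomical landmarks rather than a synthetic vertex.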

Extracting the body measurement points from the curve yields the feature points at the top of the pig's head, the pig's first tail vertebra, the pig's withers, and the pig's phalanx bone. Using the feature points and the ground plane equation ax + by + cz + d = 0, the pixel lengths of the body size data are computed:

相应的体尺数据,包括体长X1、体高X2、体斜长X3和体宽X4Corresponding body size data, including body length X 1 , body height X 2 , body oblique length X 3 and body width X 4 ;

其中,体长X1为生猪的头顶N(x1,y1,z1)到M(x2,y2,z2)的直线距离,其表达式为:X1=|x1-x2|;Among them, the body length X 1 is the linear distance from N(x 1 , y 1 , z 1 ) to M(x 2 , y 2 , z 2 ) of the pig’s head, and its expression is: X 1 =|x 1 -x 2 |;

体高X2为生猪的第一尾椎骨M(x2,y2,z2)到地面平面ax+by+cz+d=0的直线距离,其表达式为: Body height X 2 is the straight-line distance from the pig's first tailbone M (x 2 , y 2 , z 2 ) to the ground plane ax+by+cz+d=0, and its expression is:

体斜长X3为生猪的髻甲骨A(x3,y3,z3)到生猪的趾节骨B(x4,y4,z4)的直线距离,其表达式为: Body oblique length X 3 is the straight-line distance from pig’s bun bone A (x 3 , y 3 , z 3 ) to pig’s phalanx B (x 4 , y 4 , z 4 ), and its expression is:

body width X4 is measured by the Kinect depth camera, expressed as X4 = |zu − zd|.
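The four distance formulas above can be collected into one small helper. The function name `body_measurements` and its argument order are illustrative, not from the patent; the point-to-plane distance is the standard formula implied by the text:

```python
import math

def body_measurements(N, M, A, B, plane, z_u, z_d):
    """Body size values from feature points and the ground plane
    ax + by + cz + d = 0, given as plane = (a, b, c, d).

    N: top of head, M: first tail vertebra, A: withers, B: phalanx.
    """
    a, b, c, d = plane
    # Body length: x-axis distance between head top and first tail vertebra.
    X1 = abs(N[0] - M[0])
    # Body height: point-to-plane distance from M to the ground plane.
    X2 = abs(a * M[0] + b * M[1] + c * M[2] + d) / math.sqrt(a * a + b * b + c * c)
    # Body oblique length: Euclidean distance from withers to phalanx.
    X3 = math.dist(A, B)
    # Body width: depth difference measured by the camera (the caller applies
    # the factor of 2 from S05 for the side-view capture if needed).
    X4 = abs(z_u - z_d)
    return X1, X2, X3, X4
```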

S06: A correlation analysis is performed between the weight of the live pig and its body size data, and a multiple linear regression is fitted by the least-squares method, taking the pig's body length X1, body height X2, body oblique length X3, and body width X4 as the independent variables and the pig's weight as the dependent variable. The following model is obtained:

Y = α0 + α1X1 + α2X2 + … + αnXn + ε

where Y is the weight of the live pig, α0, α1, α2, …, αn and ε are the coefficients corresponding to different breeds of pig, and X1, X2, …, Xn are the body size data of the pig.
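The least-squares fit of this model can be sketched with NumPy. The body-size rows and the coefficients below are synthetic placeholders, not measured values from the patent:

```python
import numpy as np

# Synthetic body-size data (length, height, oblique length, width in metres).
X = np.array([
    [1.20, 0.65, 1.30, 0.35],
    [1.05, 0.58, 1.12, 0.30],
    [1.35, 0.70, 1.45, 0.40],
    [0.95, 0.52, 1.00, 0.27],
    [1.28, 0.68, 1.38, 0.37],
])
true = np.array([10.0, 50.0, 30.0, 20.0, 40.0])  # hypothetical α0..α4
A = np.column_stack([np.ones(len(X)), X])        # prepend intercept column
y = A @ true                                     # synthetic weights (kg)

# Least-squares fit of Y = α0 + α1·X1 + α2·X2 + α3·X3 + α4·X4
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
```

With real data, `y` would hold the scale-measured weights and `coef` would give the breed-specific coefficients of the model above.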

Finally, it should be noted that the above are merely preferred embodiments of the present invention and are not intended to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions recorded in those embodiments or replace some of their technical features with equivalents. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the claims of the present invention.

Claims (4)

1. A live pig weight measurement method based on deep learning and a depth camera, characterized by comprising the following steps:

obtaining a depth image of a live pig;

converting the depth image into three-dimensional point cloud data;

preprocessing the three-dimensional point cloud data to remove noise;

running the point cloud convolutional neural network PointNet++ model in the PyTorch deep learning framework, feeding the denoised three-dimensional point cloud data into the PointNet++ model for deep learning, and removing the background three-dimensional point cloud data;

projecting the background-removed three-dimensional point cloud data into a three-dimensional coordinate system, finding the point with the minimum z value on each slice of the point cloud, merging the minimum-z points into a new point sequence, removing discrete points from the sequence to obtain the corresponding two-dimensional fitted line, and solving for the extreme-point positions to obtain the feature-point coordinates and hence the corresponding body size data;

taking the body size data of the live pig as the independent variables and the weight of the live pig as the dependent variable, obtaining the weight of the live pig according to the following formula:

Y = α0 + α1X1 + α2X2 + … + αnXn + ε

where Y is the weight of the live pig, α0, α1, α2, …, αn and ε are the coefficients corresponding to different breeds of pig, and X1, X2, …, Xn are the body size data of the live pig;

the feature points comprise the ground plane, the top of the pig's head, the first tail vertebra of the pig, the withers of the pig, and the phalanx of the pig, wherein the equation of the ground plane is

ax + by + cz + d = 0;

the corresponding body size data comprise body length X1, body height X2, body oblique length X3, and body width X4;

wherein body length X1 is the straight-line distance from the top of the pig's head N(x1, y1, z1) to the first tail vertebra M(x2, y2, z2), expressed as X1 = |x1 − x2|;

body height X2 is the straight-line distance from the pig's first tail vertebra M(x2, y2, z2) to the ground plane ax + by + cz + d = 0, expressed as X2 = |ax2 + by2 + cz2 + d| / √(a² + b² + c²);

body oblique length X3 is the straight-line distance from the pig's withers A(x3, y3, z3) to the pig's phalanx B(x4, y4, z4), expressed as X3 = √((x3 − x4)² + (y3 − y4)² + (z3 − z4)²);

body width X4 is measured by the Kinect depth camera, expressed as X4 = |zu − zd|.

2. The live pig weight measurement method based on deep learning and a depth camera according to claim 1, characterized in that obtaining the depth image of the live pig comprises: mounting the Kinect depth camera at the side of the feeding machine and capturing depth images of pigs standing still while eating, at a distance in the range D ≥ L / (2·tan(a/2)), where a is the viewing angle of the Kinect depth camera, L is the length of the pig feeding area, and D is the distance between the Kinect depth camera and the pig.
3. The live pig weight measurement method based on deep learning and a depth camera according to claim 2, characterized in that converting the depth image into three-dimensional point cloud data comprises: using the built-in point cloud acquisition function of the Kinect for Windows SDK 2.0 software development environment to convert the captured depth image into three-dimensional point cloud data.

4. The live pig weight measurement method based on deep learning and a depth camera according to claim 3, characterized in that preprocessing the three-dimensional point cloud data to remove noise comprises: applying bilateral filtering to the three-dimensional point cloud data set for noise reduction, expressed as:

J0 = I,  Jt+1 = f(Jt)

where J0 is the initial image, Jt is the result after t iterations, and f(Jt) is the filter.
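As a rough illustration of the iterative filtering J_{t+1} = f(J_t) in claim 4, the sketch below applies a per-pixel bilateral pass to a 2-D depth image. The function names and kernel parameters are assumptions for the sketch; the patent applies the filter to the point cloud data set, which would generalize this per-pixel scheme:

```python
import numpy as np

def bilateral_once(img, sigma_s=1.0, sigma_r=0.1, radius=2):
    """One pass f(J) of a bilateral filter over a 2-D depth image: each pixel
    becomes a weighted mean of its neighbours, with weights falling off in
    both spatial distance and depth difference (edge-preserving)."""
    h, w = img.shape
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            patch = img[i0:i1, j0:j1]
            yy, xx = np.mgrid[i0:i1, j0:j1]
            w_s = np.exp(-((yy - i) ** 2 + (xx - j) ** 2) / (2 * sigma_s ** 2))
            w_r = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r ** 2))
            wgt = w_s * w_r
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

def denoise(img, iters=3):
    """Iterate J_{t+1} = f(J_t) starting from J_0 = I."""
    J = img.astype(float)
    for _ in range(iters):
        J = bilateral_once(J)
    return J
```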
CN202111014612.2A 2021-08-31 2021-08-31 Live pig weight measurement method based on deep learning and depth camera Active CN113706512B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111014612.2A CN113706512B (en) 2021-08-31 2021-08-31 Live pig weight measurement method based on deep learning and depth camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111014612.2A CN113706512B (en) 2021-08-31 2021-08-31 Live pig weight measurement method based on deep learning and depth camera

Publications (2)

Publication Number Publication Date
CN113706512A CN113706512A (en) 2021-11-26
CN113706512B true CN113706512B (en) 2023-08-11

Family

ID=78658182

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111014612.2A Active CN113706512B (en) 2021-08-31 2021-08-31 Live pig weight measurement method based on deep learning and depth camera

Country Status (1)

Country Link
CN (1) CN113706512B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114972165B (en) * 2022-03-24 2024-03-15 中山大学孙逸仙纪念医院 A method and device for measuring time-averaged shear force
CN114972472B (en) * 2022-04-21 2024-08-30 北京福通互联科技集团有限公司 Beef cattle stereoscopic depth image acquisition method based on laser array and monocular camera
CN115984554B (en) * 2022-12-07 2025-12-02 西北农林科技大学 A Deep Learning-Based Weight Estimation Method
CN118661684B (en) * 2024-06-04 2024-12-10 山东海能生物工程有限公司 Intelligent control method for influence experiment of 25-hydroxy vitamin D3 on poultry

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110415282A (en) * 2019-07-31 2019-11-05 宁夏金宇智慧科技有限公司 A kind of milk cow weight forecasting system
CN110986788A (en) * 2019-11-15 2020-04-10 华南农业大学 Automatic measurement method based on three-dimensional point cloud livestock phenotype body size data
CN111612850A (en) * 2020-05-13 2020-09-01 河北工业大学 A method for measuring parameters of pig body size based on point cloud
CN112712590A (en) * 2021-01-15 2021-04-27 中国农业大学 Animal point cloud generation method and system
KR20210096448A (en) * 2020-01-28 2021-08-05 전북대학교산학협력단 A contactless mobile weighting system for livestock using asymmetric stereo cameras
CN113313833A (en) * 2021-06-29 2021-08-27 西藏新好科技有限公司 Pig body weight estimation method based on 3D vision technology


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Three-dimensional point cloud reconstruction and body size measurement of pigs using multi-view depth cameras; Yin Ling et al.; Transactions of the Chinese Society of Agricultural Engineering; Vol. 35, No. 23; 201-208 *

Also Published As

Publication number Publication date
CN113706512A (en) 2021-11-26

Similar Documents

Publication Publication Date Title
CN113706512B (en) Live pig weight measurement method based on deep learning and depth camera
CN111243005B (en) Livestock weight estimation method, apparatus, device and computer readable storage medium
CN109636779B (en) Method, device and storage medium for identifying poultry volume size
CN107180438B (en) Method for estimating size and weight of yak and corresponding portable computer device
AU2010219406B2 (en) Image analysis for making animal measurements
CN109146948B (en) Vision-based analysis method of crop growth phenotype parameter quantification and yield correlation
Liu et al. Automatic estimation of dairy cattle body condition score from depth image using ensemble model
Zhang et al. Fully automatic system for fish biomass estimation based on deep neural network
CN109141248A (en) Pig weight measuring method and system based on image
EP3353744B1 (en) Image analysis for making animal measurements including 3-d image analysis
CN106651900A (en) Three-dimensional modeling method of elevated in-situ strawberry based on contour segmentation
CN116763295B (en) Livestock scale measuring method, electronic equipment and storage medium
CN106529006A (en) Depth image-based broiler growth model fitting method and apparatus
CN111696150A (en) Method for measuring phenotypic data of channel catfish
CN113920106B (en) Corn growth vigor three-dimensional reconstruction and stem thickness measurement method based on RGB-D camera
CN110569735A (en) An analysis method and device based on the back body condition of dairy cows
Shi et al. Underwater fish mass estimation using pattern matching based on binocular system
CN112825791A (en) A cow body condition scoring method based on deep learning and point cloud convex hull features
CN109238264A (en) A kind of domestic animal posture method for normalizing and device
CN118015062A (en) A livestock body size measurement method based on depth camera and instance segmentation algorithm
CN118735984A (en) A non-contact sheep weight estimation method based on point cloud technology after wool length correction
CN112907546A (en) Beef body ruler non-contact measuring device and method
Liu et al. The Development of a Sorting System Based on Point Cloud Weight Estimation for Fattening Pigs
CN119147085A System for in vivo body size estimation of Muscovy ducks
CN107507192A (en) A kind of Flag Leaves in Rice angle is in bulk measurement portable unit and in bulk measurement method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant