
CN109784345A - Agricultural pest detection method based on scale-free deep network - Google Patents


Info

Publication number
CN109784345A
Authority
CN
China
Prior art keywords
pest, target, scale, free, image
Prior art date
Legal status
Granted
Application number
CN201811587707.1A
Other languages
Chinese (zh)
Other versions
CN109784345B (en)
Inventor
王红强
王儒敬
焦林
张绳昱
王琦进
时明
Current Assignee
Hefei Institutes of Physical Science of CAS
Original Assignee
Hefei Institutes of Physical Science of CAS
Priority date
Filing date
Publication date
Application filed by Hefei Institutes of Physical Science of CAS
Priority to CN201811587707.1A
Publication of CN109784345A
Application granted
Publication of CN109784345B
Legal status: Expired - Fee Related


Landscapes

  • Image Analysis (AREA)
  • Catching Or Destruction (AREA)

Abstract

The invention relates to an agricultural pest detection method based on a scale-free deep network. The method comprises the following steps: (1) preprocess the given pest image training data; (2) construct a scale-free pest target detector; (3) extract scale-free features from the pest image to be detected, and predict the confidence and position of each pest target; (4) post-optimize the obtained pest target confidences and positions; (5) determine the positions and number of pest targets in the pest image to be detected. By extracting and encoding scale-free features of pest images, the invention avoids the defects of manually sized target reference boxes, adapts to pests of different scales, improves the recognition and detection of small pest targets, and increases the accuracy and robustness of agricultural pest detection.

Description

Agricultural pest detection method based on a scale-free deep network
Technical Field
The invention relates to the technical field of precision agricultural pest target detection, and in particular to an agricultural pest detection method based on a scale-free deep network.
Background
China is a major agricultural country, and agriculture accounts for a large share of its national economy; however, pest attacks reduce agricultural output and severely damage the quality of agricultural products. Monitoring the species and numbers of agricultural pests is a precondition and key to forecasting and preventing pest outbreaks, and has important application value and significance.
Traditional agricultural pest detection relies mainly on plant protection experts manually identifying pests from their features. Its accuracy depends on the experts' knowledge, experience, and subjective judgment, so it is inherently subjective and limited. Moreover, agricultural pests are diverse and numerous, and detecting them manually consumes a great deal of manpower, material, and financial resources.
With the continuous development of computer vision, deep-learning networks are increasingly applied to agricultural pest detection. In the prior art, pests of different scales are detected by setting recommendation (anchor) boxes of different sizes. However, the scales of manually set recommendation boxes are limited and cannot cover targets of all scales well, and the scale-free features of pest targets are neither obtained nor exploited, so robustness is poor. Meanwhile, the large number of recommended target boxes causes severe duplicate recommendations and lengthens detection time; detection accuracy is particularly low for small targets.
Disclosure of Invention
The invention aims to provide an agricultural pest detection method based on a scale-free deep network, which overcomes the defects of the prior art and improves the accuracy and efficiency of agricultural pest detection.
In order to achieve the purpose, the invention adopts the following technical scheme:
An agricultural pest detection method based on a scale-free deep network comprises the following steps:
(1) Preprocess the given pest image training data.
(2) Construct a scale-free pest target detector.
(3) Extract scale-free features from the pest image to be detected, and predict the confidence and position of each pest target.
(4) Post-optimize the obtained pest target confidences and positions.
(5) Determine the positions and number of pest targets in the pest image to be detected.
Further, "preprocessing the given pest image training data" in step (1) specifically comprises the following steps:
(11) Convert all pest image training data to a fixed size via a resize operation.
(12) Annotate the smallest rectangular box containing each pest target in every image, and obtain the coordinates of the true pest target positions.
Further, "constructing a scale-free pest target detector" in step (2) specifically comprises the following steps:
(21) Randomly initialize the weights and biases $(W, B)$ of the deep neural network of the scale-free pest target detector. The weights $W$ comprise the weights of the deep feature encoder and of the target detector head; the deep feature encoder comprises $N_1$ convolution layers, $N_2$ deconvolution layers, and one scale-free feature module; the target detector head is a single-layer fully connected network whose output layer comprises 6 neurons.
(22) Obtain scale-free positive and negative samples by the center-drop method: slide an $n \times n$ window over the output feature map of the deconvolution module; at each sliding position, map the window center onto the original image and draw a circle with the mapped center point as origin and $R$ as radius. If the center point of a ground-truth target box falls inside the circle, the sliding window is a positive sample, labeled 1; otherwise it is a negative sample, labeled 0.
Here the original image is the input pest image after the resize operation of step (11), and the ground-truth target box is the smallest rectangular pest box annotated in step (12).
(23) From the positive and negative sample sets obtained in step (22), obtain the scale-free deep network learning loss function
$$L = \frac{1}{N_c}\sum_k L_{cls}(p_k, p_k^*) + \alpha\,\frac{1}{N_r}\sum_k p_k^*\,L_{reg}(t_k, t_k^*)$$
where $k$ is the sample index, $N_c$ the batch size, $N_r$ the number of samples, $\alpha$ the balance factor, $p_k$ the predicted pest-target probability, and $p_k^*$ the corresponding sample label (1 for positive samples, 0 for negative).
$t_k^* = \{t_x^*, t_y^*, t_w^*, t_h^*\}$ is the parameterized ground-truth bounding box and $t_k = \{t_x, t_y, t_w, t_h\}$ the parameterized predicted bounding box, with $t_x = x/W_{input}$, $t_y = y/H_{input}$, $t_w = \log(w/W_{input})$, $t_h = \log(h/H_{input})$. Correspondingly, $x, y, w, h$ are the abscissa, ordinate, width, and height of the upper-left corner of the predicted bounding box; $x^*, y^*, w^*, h^*$ those of the ground-truth bounding box; $W_{input}$ and $H_{input}$ are the width and height of the input pest image.
(24) Learn the weights and biases of the deep neural network of the scale-free pest target detector with the BP algorithm, iterating $N$ times until the detector parameters are optimal:
$$W_l \leftarrow W_l - \eta\,\frac{\partial L}{\partial W_l}, \qquad B_l \leftarrow B_l - \eta\,\frac{\partial L}{\partial B_l}$$
where $l = 1, 2, \ldots, N_1 + N_2 + 2$ indexes the network layers, $W_l$ is the weight matrix of layer $l$, $B_l$ its bias parameter, and $\eta$ the learning rate.
(25) Obtain the optimal scale-free target detector, namely the parameters of the optimal feature encoder, $W_l, B_l$ for $l = 1, 2, \ldots, N_1 + N_2 + 1$, and the weights and biases of the optimal target detector head: $W_{c1}, W_{c2}, W_{b1}, W_{b2}, W_{b3}, W_{b4}, b_{c1}, b_{c2}, b_{b1}, b_{b2}, b_{b3}, b_{b4}$, where $W_{c1}, W_{c2}$ are the target-confidence regression weights with biases $b_{c1}, b_{c2}$, and $W_{b1}, \ldots, W_{b4}$ the position regression weights with biases $b_{b1}, \ldots, b_{b4}$.
Further, "extracting scale-free features from the pest image to be detected and predicting the confidence and position of each pest target" in step (3) specifically comprises the following steps:
(31) Using the scale-free target detector trained in step (25), obtain the scale-free feature $Y$ of the pest image to be detected:
$$Y = \sigma(W_s * X_{N_1+N_2} + B_s)$$
where $W_s$ is the weight of the scale-free feature module, $B_s$ its bias parameter, and $X_{N_1+N_2}$ the output of the deconvolution module, obtained recursively from $X_l = \sigma(W_l * X_{l-1} + B_l)$, with $X_l$ the output feature of layer $l = 1, 2, \ldots, N_1 + N_2$, $*$ the convolution operation, $\sigma(\cdot)$ the ReLU function, and $X_0$ the input matrix of the pest image to be detected.
(32) Using the optimal target detector head trained in step (25), compute the confidence $c$ of each pest candidate box in the pest image to be detected as
$$c = \frac{e^{Q}}{e^{Q} + e^{P}}, \qquad Q = W_{c1} * Y + b_{c1}, \quad P = W_{c2} * Y + b_{c2}$$
where $e \approx 2.72$ is the mathematical constant.
(33) Using the optimal target detector head trained in step (25), the pest target position is obtained as: upper-left abscissa $x = w_x \times W_{input}$, upper-left ordinate $y = w_y \times H_{input}$, width $w = e^{w_w} \times W_{input}$, and height $h = e^{w_h} \times H_{input}$ (inverting the log parameterization of step (23)),
where $w_x = \sigma(W_{b1} * Y + b_{b1})$, $w_y = \sigma(W_{b2} * Y + b_{b2})$, $w_w = \sigma(W_{b3} * Y + b_{b3})$, $w_h = \sigma(W_{b4} * Y + b_{b4})$.
Further, "post-optimizing the obtained pest target confidences and positions" in step (4) specifically comprises the following steps:
(41) Correct the $Q$ and $P$ values with an area compensation strategy, where $s$ is the area of the pest target candidate box, $s_1$ a preset area threshold, and $\lambda$ an adjustment factor with range $(0, 1]$; then recompute the target confidence $\tilde{c}$.
(42) Post-optimize the pest target positions:
(421) Sort the target candidate boxes by pest target confidence $c$ and mark the box with the highest confidence.
(422) Compute the intersection-over-union between the highest-confidence pest candidate box and each remaining candidate box.
(423) Remove candidate boxes whose intersection-over-union exceeds the preset threshold $N_t$.
(424) Repeat steps (421), (422), and (423) on the retained pest candidate boxes until the last candidate box is processed; the iteration ends and all marked candidate boxes are output.
Further, "determining the positions and number of pest targets in the pest image to be detected" in step (5) specifically comprises: for all marked candidate boxes obtained in step (4), given a threshold $t_s$, select the candidate boxes whose target confidence exceeds $t_s$ as the final pest target detection results, and take their count as the total number of pests in the pest image to be detected.
Compared with the prior art, the invention has the following beneficial effects:
(1) The invention obtains scale-free positive and negative samples by the center-drop method and encodes scale-free features of pest images with a deep neural network, avoiding manually sized target reference boxes; it adapts to the recognition of pests of different scales and improves the accuracy and flexibility of agricultural pest recognition.
(2) The invention balances the confidence weights of small and large targets with an area compensation strategy, which particularly benefits the recognition and detection of small pest targets.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a structural diagram of the scale-free pest target detector of the present invention;
FIG. 3 is a schematic illustration of the center-drop method;
FIG. 4 is a flow chart of the post-optimization of pest target confidence and position in the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings:
An agricultural pest detection method based on a scale-free deep network, as shown in FIG. 1, comprises the following steps:
S1: Preprocess the given pest image training data, specifically comprising the following steps:
S11: Convert all pest image training data to a fixed size via a resize operation. The resize function, available in MATLAB and OpenCV, compresses an image to a specified size.
S12: Annotate the smallest rectangular box containing each pest target in every image, and obtain the coordinates of the true pest target positions.
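As a minimal illustrative sketch of steps S11 and S12, assuming OpenCV, a hypothetical fixed input size of 512×512, and annotations stored as (x, y, w, h) upper-left-corner tuples (the patent fixes none of these choices):

```python
import cv2

W_INPUT, H_INPUT = 512, 512  # assumed fixed size; the patent does not specify one

def preprocess(image_path, boxes):
    """Resize a pest image to a fixed size and rescale its ground-truth boxes.

    boxes: list of (x, y, w, h) upper-left-corner annotations in original pixels.
    Returns the resized image and the rescaled boxes.
    """
    img = cv2.imread(image_path)
    h0, w0 = img.shape[:2]
    img = cv2.resize(img, (W_INPUT, H_INPUT))
    sx, sy = W_INPUT / w0, H_INPUT / h0
    scaled = [(x * sx, y * sy, w * sx, h * sy) for (x, y, w, h) in boxes]
    return img, scaled
```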
S2: Construct the scale-free pest target detector, specifically comprising the following steps:
S21: Randomly initialize the weights and biases $(W, B)$ of the deep neural network of the scale-free pest target detector. The weights $W$ comprise the weights of the deep feature encoder and of the target detector head; the deep feature encoder comprises $N_1$ convolution layers, $N_2$ deconvolution layers, and one scale-free feature module. The scale-free feature module outputs $k$ $P$-dimensional scale-free feature vectors; the target detector head is a single-layer fully connected network whose output layer comprises 6 neurons, 2 of which predict the target confidence and the other 4 the pest target position.
S22: As shown in FIG. 3, obtain scale-free positive and negative samples by the center-drop method: slide an $n \times n$ window over the output feature map of the deconvolution module; at each sliding position, map the window center onto the original image and draw a circle with the mapped center point as origin and $R$ as radius. If the center point of a ground-truth target box falls inside the circle, the sliding window is a positive sample, labeled 1; otherwise it is a negative sample, labeled 0.
Here the original image is the input pest image after the resize operation of step S11, and the ground-truth target box is the smallest rectangular pest box annotated in step S12.
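The center-drop labeling can be sketched as follows (illustrative only; `stride` is an assumed factor mapping feature-map positions back to resized-image pixels, and ground-truth centers are given as (cx, cy) pairs):

```python
import numpy as np

def center_drop_labels(feat_h, feat_w, stride, gt_centers, radius):
    """Label each sliding-window position on the feature map.

    A position is positive (1) if any ground-truth box center lies within
    `radius` pixels of the window center mapped back onto the original image,
    otherwise negative (0).
    """
    labels = np.zeros((feat_h, feat_w), dtype=np.int64)
    for i in range(feat_h):
        for j in range(feat_w):
            # map the window center to original-image coordinates
            cx, cy = (j + 0.5) * stride, (i + 0.5) * stride
            for gx, gy in gt_centers:
                if (gx - cx) ** 2 + (gy - cy) ** 2 <= radius ** 2:
                    labels[i, j] = 1
                    break
    return labels
```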
S23: From the positive and negative sample sets obtained in step S22, obtain the scale-free deep network learning loss function
$$L = \frac{1}{N_c}\sum_k L_{cls}(p_k, p_k^*) + \alpha\,\frac{1}{N_r}\sum_k p_k^*\,L_{reg}(t_k, t_k^*)$$
where $k$ is the sample index, $N_c$ the batch size, $N_r$ the number of samples, $\alpha$ the balance factor, $p_k$ the predicted pest-target probability, and $p_k^*$ the corresponding sample label (1 for positive samples, 0 for negative).
$t_k^* = \{t_x^*, t_y^*, t_w^*, t_h^*\}$ is the parameterized ground-truth bounding box and $t_k = \{t_x, t_y, t_w, t_h\}$ the parameterized predicted bounding box, with $t_x = x/W_{input}$, $t_y = y/H_{input}$, $t_w = \log(w/W_{input})$, $t_h = \log(h/H_{input})$. Correspondingly, $x, y, w, h$ are the abscissa, ordinate, width, and height of the upper-left corner of the predicted bounding box; $x^*, y^*, w^*, h^*$ those of the ground-truth bounding box; $W_{input}$ and $H_{input}$ are the width and height of the input pest image.
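A NumPy sketch of this two-term loss follows; the patent does not name the exact forms of $L_{cls}$ and $L_{reg}$, so cross-entropy and smooth-L1 are assumed here as common choices:

```python
import numpy as np

def scale_free_loss(p, p_star, t, t_star, alpha, n_c, n_r):
    """Two-term loss: classification over all samples, regression over positives.

    p:      (K,) predicted pest probabilities
    p_star: (K,) labels, 1 for positive samples and 0 for negative
    t, t_star: (K, 4) parameterized predicted / ground-truth boxes
    """
    eps = 1e-7
    # assumed cross-entropy classification term
    l_cls = -(p_star * np.log(p + eps) + (1 - p_star) * np.log(1 - p + eps))
    # assumed smooth-L1 regression term, applied only to positive samples
    d = np.abs(t - t_star)
    l_reg = np.where(d < 1.0, 0.5 * d ** 2, d - 0.5).sum(axis=1)
    return l_cls.sum() / n_c + alpha * (p_star * l_reg).sum() / n_r
```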
S24: Learn the weights and biases of the deep neural network of the scale-free pest target detector with the BP algorithm, iterating $N$ times until the detector parameters are optimal:
$$W_l \leftarrow W_l - \eta\,\frac{\partial L}{\partial W_l}, \qquad B_l \leftarrow B_l - \eta\,\frac{\partial L}{\partial B_l}$$
where $l = 1, 2, \ldots, N_1 + N_2 + 2$ indexes the network layers, $W_l$ is the weight matrix of layer $l$, $B_l$ its bias parameter, and $\eta$ the learning rate.
S25: Obtain the optimal scale-free target detector, namely the parameters of the optimal feature encoder, $W_l, B_l$ for $l = 1, 2, \ldots, N_1 + N_2 + 1$, and the weights and biases of the optimal target detector head: $W_{c1}, W_{c2}, W_{b1}, W_{b2}, W_{b3}, W_{b4}, b_{c1}, b_{c2}, b_{b1}, b_{b2}, b_{b3}, b_{b4}$, where $W_{c1}, W_{c2}$ are the target-confidence regression weights with biases $b_{c1}, b_{c2}$, and $W_{b1}, \ldots, W_{b4}$ the position regression weights with biases $b_{b1}, \ldots, b_{b4}$.
S3: Extract scale-free features from the pest image to be detected and predict the confidence and position of each pest target, specifically comprising the following steps:
S31: Using the scale-free feature encoder trained in step S25, obtain the scale-free feature $Y$ of the pest image to be detected:
$$Y = \sigma(W_s * X_{N_1+N_2} + B_s)$$
where $W_s$ is the weight of the scale-free feature module, $B_s$ its bias parameter, and $X_{N_1+N_2}$ the output of the deconvolution module, obtained recursively from $X_l = \sigma(W_l * X_{l-1} + B_l)$, with $X_l$ the output feature of layer $l = 1, 2, \ldots, N_1 + N_2$, $*$ the convolution operation, $\sigma(\cdot)$ the ReLU function, and $X_0$ the input matrix of the pest image to be detected.
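An illustrative single-channel sketch of this recursion, treating every layer, including the deconvolution layers, as an ordinary same-size convolution for brevity (the patent's deconvolution layers would upsample):

```python
import numpy as np
from scipy.signal import convolve2d

def relu(x):
    return np.maximum(x, 0.0)

def scale_free_features(x0, weights, biases, w_s, b_s):
    """Run X_l = ReLU(W_l * X_{l-1} + B_l) through the N1+N2 encoder layers,
    then apply the scale-free feature module to the deconvolution output."""
    x = x0
    for w, b in zip(weights, biases):  # layers l = 1 .. N1+N2
        x = relu(convolve2d(x, w, mode="same") + b)
    return relu(convolve2d(x, w_s, mode="same") + b_s)  # scale-free feature Y
```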
S32: Using the optimal target detector head trained in step S25, compute the confidence $c$ of each pest candidate box in the pest image to be detected as
$$c = \frac{e^{Q}}{e^{Q} + e^{P}}, \qquad Q = W_{c1} * Y + b_{c1}, \quad P = W_{c2} * Y + b_{c2}$$
where $e \approx 2.72$ is the mathematical constant.
S33: Using the optimal target detector head trained in step S25, the pest target position is obtained as: upper-left abscissa $x = w_x \times W_{input}$, upper-left ordinate $y = w_y \times H_{input}$, width $w = e^{w_w} \times W_{input}$, and height $h = e^{w_h} \times H_{input}$ (inverting the log parameterization of step S23),
where $w_x = \sigma(W_{b1} * Y + b_{b1})$, $w_y = \sigma(W_{b2} * Y + b_{b2})$, $w_w = \sigma(W_{b3} * Y + b_{b3})$, $w_h = \sigma(W_{b4} * Y + b_{b4})$.
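Steps S32 and S33 can be sketched together as a small decoding routine (illustrative; `q`, `p`, and `wx`..`wh` stand for the six head outputs at one candidate position):

```python
import numpy as np

def decode_detection(q, p, wx, wy, ww, wh, w_input, h_input):
    """Turn raw head outputs into a confidence and a pixel-space box.

    q, p: the two confidence neurons; wx..wh: the four position neurons.
    The box inverts the parameterization of step S23.
    """
    c = np.exp(q) / (np.exp(q) + np.exp(p))  # softmax confidence
    x = wx * w_input                          # upper-left corner
    y = wy * h_input
    w = np.exp(ww) * w_input                  # invert t_w = log(w / W_input)
    h = np.exp(wh) * h_input
    return c, (x, y, w, h)
```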
S4: Post-optimize the obtained pest target confidences and positions.
As shown in FIG. 4, this specifically comprises the following steps:
S41: Correct the $Q$ and $P$ values with an area compensation strategy, where $s$ is the area of the pest target candidate box, $s_1$ a preset area threshold, and $\lambda$ an adjustment factor with range $(0, 1]$; then recompute the target confidence $\tilde{c}$.
S42: Post-optimize the pest target positions:
S421: Sort the target candidate boxes by pest target confidence $c$ and mark the box with the highest confidence.
S422: Compute the intersection-over-union between the highest-confidence pest candidate box and each remaining candidate box.
S423: Remove candidate boxes whose intersection-over-union exceeds the preset threshold $N_t$.
S424: Repeat steps S421, S422, and S423 on the retained pest candidate boxes until the last candidate box is processed; the iteration ends and all marked candidate boxes are output.
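Steps S421 through S424 amount to a greedy non-maximum-suppression loop, sketched below (illustrative; boxes are (x, y, w, h) tuples):

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes, upper-left corner (x, y)."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    return inter / (a[2] * a[3] + b[2] * b[3] - inter + 1e-9)

def post_optimize(boxes, scores, n_t):
    """Greedy suppression per S421-S424: keep the highest-confidence box,
    drop boxes overlapping it with IoU > n_t, and repeat on the remainder."""
    order = np.argsort(scores)[::-1].tolist()
    kept = []
    while order:
        best = order.pop(0)
        kept.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= n_t]
    return kept  # indices of the marked candidate boxes
```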
S5: Determine the positions and number of pest targets in the pest image to be detected, specifically: for all marked candidate boxes obtained in step S4, given a threshold $t_s$, select the candidate boxes whose target confidence exceeds $t_s$ as the final pest target detection results, and take their count as the total number of pests in the pest image to be detected.
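A matching sketch of the final selection and counting of S5, reusing `post_optimize` from the previous sketch (the threshold value is illustrative):

```python
def count_pests(boxes, scores, kept, t_s):
    """Keep marked boxes whose confidence exceeds t_s; their count is the
    total number of pests in the image."""
    final = [boxes[i] for i in kept if scores[i] > t_s]
    return final, len(final)

# e.g.: kept = post_optimize(boxes, scores, n_t=0.5)
#       detections, n_pests = count_pests(boxes, scores, kept, t_s=0.8)
```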
The above-mentioned embodiments are merely illustrative of the preferred embodiments of the present invention, and do not limit the scope of the present invention, and various modifications and improvements of the technical solution of the present invention by those skilled in the art should fall within the protection scope defined by the claims of the present invention without departing from the spirit of the present invention.

Claims (6)

1. An agricultural pest detection method based on a scale-free deep network, characterized in that the method comprises the following steps:
(1) preprocessing the given pest image training data;
(2) constructing a scale-free pest target detector;
(3) extracting scale-free features from the pest image to be detected, and predicting the confidence and position of each pest target;
(4) post-optimizing the obtained pest target confidences and positions;
(5) determining the positions and number of pest targets in the pest image to be detected.

2. The agricultural pest detection method based on a scale-free deep network according to claim 1, characterized in that "preprocessing the given pest image training data" in step (1) specifically comprises the following steps:
(11) converting all pest image training data to a fixed size via a resize operation;
(12) annotating the smallest rectangular box containing each pest target in every image, and obtaining the coordinates of the true pest target positions.

3. The agricultural pest detection method based on a scale-free deep network according to claim 2, characterized in that "constructing a scale-free pest target detector" in step (2) specifically comprises the following steps:
(21) randomly initializing the weights and biases $(W, B)$ of the deep neural network of the scale-free pest target detector, the weights $W$ comprising the weights of the deep feature encoder and of the target detector head, the deep feature encoder comprising $N_1$ convolution layers, $N_2$ deconvolution layers, and one scale-free feature module, the target detector head being a single-layer fully connected network whose output layer comprises 6 neurons;
(22) obtaining scale-free positive and negative samples by the center-drop method: sliding a window of size $n \times n$ over the output feature map of the deconvolution module; at each sliding position, mapping the window center onto the original image and drawing a circle with the mapped center point as origin and $R$ as radius; if the center point of a ground-truth target box falls within the circle, regarding the sliding window as a positive sample labeled 1, otherwise as a negative sample labeled 0;
(23) obtaining, from the positive and negative sample sets of step (22), the scale-free deep network learning loss function
$$L = \frac{1}{N_c}\sum_k L_{cls}(p_k, p_k^*) + \alpha\,\frac{1}{N_r}\sum_k p_k^*\,L_{reg}(t_k, t_k^*)$$
where $k$ is the sample index, $N_c$ the batch size, $N_r$ the number of samples, $\alpha$ the balance factor, $p_k$ the predicted pest-target probability, and $p_k^*$ the corresponding sample label (1 for positive, 0 for negative); $t_k^*$ is the parameterized ground-truth bounding box and $t_k = \{t_x, t_y, t_w, t_h\}$ the parameterized predicted bounding box, with $t_x = x/W_{input}$, $t_y = y/H_{input}$, $t_w = \log(w/W_{input})$, $t_h = \log(h/H_{input})$; correspondingly, $x, y, w, h$ are the abscissa, ordinate, width, and height of the upper-left corner of the predicted bounding box, $x^*, y^*, w^*, h^*$ those of the ground-truth bounding box, and $W_{input}, H_{input}$ the width and height of the input pest image;
(24) learning the weights and biases of the deep neural network of the scale-free pest target detector with the BP algorithm, iterating $N$ times until the detector parameters are optimal:
$$W_l \leftarrow W_l - \eta\,\frac{\partial L}{\partial W_l}, \qquad B_l \leftarrow B_l - \eta\,\frac{\partial L}{\partial B_l}$$
where $l = 1, 2, \ldots, N_1 + N_2 + 2$ indexes the network layers, $W_l$ is the weight matrix of layer $l$, $B_l$ its bias parameter, and $\eta$ the learning rate;
(25) obtaining the optimal scale-free target detector, namely the parameters of the optimal feature encoder, $W_l, B_l$ for $l = 1, 2, \ldots, N_1 + N_2 + 1$, and the weights and biases of the optimal target detector head: $W_{c1}, W_{c2}, W_{b1}, W_{b2}, W_{b3}, W_{b4}, b_{c1}, b_{c2}, b_{b1}, b_{b2}, b_{b3}, b_{b4}$, where $W_{c1}, W_{c2}$ are the target-confidence regression weights with biases $b_{c1}, b_{c2}$, and $W_{b1}, \ldots, W_{b4}$ the position regression weights with biases $b_{b1}, \ldots, b_{b4}$.

4. The agricultural pest detection method based on a scale-free deep network according to claim 3, characterized in that "extracting scale-free features from the pest image to be detected and predicting the confidence and position of each pest target" in step (3) specifically comprises the following steps:
(31) obtaining the scale-free feature $Y$ of the pest image to be detected with the scale-free target detector trained in step (25): $Y = \sigma(W_s * X_{N_1+N_2} + B_s)$, where $W_s$ is the scale-free feature module weight, $B_s$ its bias parameter, and $X_{N_1+N_2}$ the output of the deconvolution module, obtained recursively from $X_l = \sigma(W_l * X_{l-1} + B_l)$, with $X_l$ the output feature of layer $l = 1, 2, \ldots, N_1 + N_2$, $*$ the convolution operation, $\sigma(\cdot)$ the ReLU function, and $X_0$ the input matrix of the pest image to be detected;
(32) computing, with the optimal target detector head trained in step (25), the confidence $c$ of each pest candidate box in the pest image to be detected as $c = e^{Q}/(e^{Q} + e^{P})$, where $Q = W_{c1} * Y + b_{c1}$, $P = W_{c2} * Y + b_{c2}$, and $e \approx 2.72$ is the mathematical constant;
(33) obtaining, with the optimal target detector head trained in step (25), the pest target position: upper-left abscissa $x = w_x \times W_{input}$, upper-left ordinate $y = w_y \times H_{input}$, width $w = e^{w_w} \times W_{input}$, and height $h = e^{w_h} \times H_{input}$, where $w_x = \sigma(W_{b1} * Y + b_{b1})$, $w_y = \sigma(W_{b2} * Y + b_{b2})$, $w_w = \sigma(W_{b3} * Y + b_{b3})$, $w_h = \sigma(W_{b4} * Y + b_{b4})$.

5. The agricultural pest detection method based on a scale-free deep network according to claim 4, characterized in that "post-optimizing the obtained pest target confidences and positions" in step (4) specifically comprises the following steps:
(41) correcting the $Q$ and $P$ values with an area compensation strategy, where $s$ is the area of the pest target candidate box, $s_1$ a preset area threshold, and $\lambda$ an adjustment factor with range $(0, 1]$, and recomputing the target confidence $\tilde{c}$;
(42) post-optimizing the pest target positions:
(421) sorting the target candidate boxes by pest target confidence $c$ and marking the box with the highest confidence;
(422) computing the intersection-over-union between the highest-confidence pest candidate box and each remaining candidate box;
(423) removing candidate boxes whose intersection-over-union exceeds the preset threshold $N_t$;
(424) repeating steps (421), (422), and (423) on the retained pest candidate boxes until the last candidate box; the iteration ends and all marked candidate boxes are output.

6. The agricultural pest detection method based on a scale-free deep network according to claim 5, characterized in that "determining the positions and number of pest targets in the pest image to be detected" in step (5) specifically comprises: for all marked candidate boxes obtained in step (4), given a threshold $t_s$, selecting the candidate boxes whose target confidence exceeds $t_s$ as the final pest target detection results, and counting them to obtain the total number of pests in the pest image to be detected.
CN201811587707.1A 2018-12-25 2018-12-25 Agricultural pest detection method based on scale-free deep network Expired - Fee Related CN109784345B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811587707.1A CN109784345B (en) 2018-12-25 2018-12-25 Agricultural pest detection method based on scale-free deep network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811587707.1A CN109784345B (en) 2018-12-25 2018-12-25 Agricultural pest detection method based on scale-free deep network

Publications (2)

Publication Number Publication Date
CN109784345A true CN109784345A (en) 2019-05-21
CN109784345B CN109784345B (en) 2022-10-28

Family

ID=66497571

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811587707.1A Expired - Fee Related CN109784345B (en) 2018-12-25 2018-12-25 Agricultural pest detection method based on non-scale depth network

Country Status (1)

Country Link
CN (1) CN109784345B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018137357A1 (en) * 2017-01-24 2018-08-02 北京大学 Target detection performance optimization method
CN107665355A (en) * 2017-09-27 2018-02-06 重庆邮电大学 A kind of agricultural pests detection method based on region convolutional neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wei Yang et al.: "Agricultural pest detection method based on region convolutional neural network", Computer Science *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110287993A (en) * 2019-05-22 2019-09-27 广东精点数据科技股份有限公司 A kind of data preprocessing method and system based on characteristics of image refinement
CN110245604A (en) * 2019-06-12 2019-09-17 西安电子科技大学 Mosquito recognition method based on convolutional neural network
CN113191229A (en) * 2021-04-20 2021-07-30 华南农业大学 Intelligent visual pest detection method
CN113191229B (en) * 2021-04-20 2023-09-05 华南农业大学 A method for intelligent visual detection of pests

Also Published As

Publication number Publication date
CN109784345B (en) 2022-10-28

Similar Documents

Publication Publication Date Title
CN111639748B (en) Watershed pollutant flux prediction method based on LSTM-BP space-time combination model
CN110333554B (en) NRIET rainstorm intelligent similarity analysis method
CN109784345B (en) Agricultural pest detection method based on scale-free deep network
CN105913450A (en) Tire rubber carbon black dispersity evaluation method and system based on neural network image processing
CN118736550B (en) A water conservancy water level data acquisition and transmission system and method based on the Internet of Things
CN114462718A (en) CNN-GRU wind power prediction method based on time sliding window
CN112069955B (en) Typhoon intensity remote sensing inversion method based on deep learning
CN114943831A (en) Knowledge distillation-based mobile terminal pest target detection method and mobile terminal equipment
CN116297239B (en) Soil organic matter content hyperspectral inversion method
CN111340771B (en) Fine particulate matter real-time monitoring method integrating visual information richness and wide-depth joint learning
CN117456306B (en) Label self-correction method based on meta-learning
CN117633502A (en) Multi-factor quantitative inversion method for soil moisture supply
CN114662790A (en) Sea cucumber culture water temperature prediction method based on multi-dimensional data
CN117274359A (en) A method and system for measuring plant height of crop groups
CN119760640A (en) Important area ecology comprehensive evaluation method based on natural resource investigation monitoring data
CN116842358A (en) Soft measurement modeling method based on multi-scale convolution and self-adaptive feature fusion
CN112465821A (en) Multi-scale pest image detection method based on boundary key point perception
CN118820690A (en) A multi-sensor data correction method based on correlation analysis and integrated model
CN118135398A (en) A remote sensing image classification method and system suitable for large-scale and long-term series
CN117392077A (en) A method for identifying bridge underwater structural cracks based on the yoloV7 improved algorithm
CN117934963B (en) Gas sensor drift compensation method
CN111222576B (en) High-resolution remote sensing image classification method
CN117909927B (en) Precipitation quantitative estimation method and device based on multisource data fusion model
CN112288744A (en) A SAR Image Change Detection Method Based on Integer Inference and Quantized CNN
CN119027807A (en) A satellite image vegetation extraction method combining deep learning and vegetation index

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20221028