CN115984335A - A Method of Obtaining the Feature Parameters of Fog Droplets Based on Image Processing - Google Patents
- Publication number: CN115984335A (application CN202310265163.1A)
- Authority: CN (China)
- Prior art keywords: image, droplet, droplets, matching, pixel
- Prior art date: 2023-03-20
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Landscapes
- Image Analysis (AREA)
Abstract
Description
Technical Field

The present invention relates to the technical field of precision agricultural aerial pesticide application, and in particular to a method for obtaining droplet characteristic parameters based on image processing.

Background Art

Applying chemical pesticides for pest and disease control is currently the most widely used and most effective control method in China.

With the rapid development of China's agricultural aviation industry, spraying with plant-protection UAVs offers high operating efficiency, strong adaptability, and low environmental pollution. During UAV spraying, however, the rotor wind field and the external wind field cause the pesticide droplets to drift and deposit unevenly, which degrades the spraying performance of the plant-protection UAV. The prior art has the following problems:

1. During pesticide spraying, the deposition of the liquid droplets is an important indicator of concern to users, and droplet drift is the main cause of low effective pesticide utilization and environmental pollution. Existing techniques focus mainly on the influence of droplet size on droplet deposition and drift; most studies reduce drift by adjusting operating parameters to change the droplet size, while ignoring that the droplets' own velocity also has a considerable effect on their deposition behavior.

2. When analyzing droplet deposition and drift, the drift rate is usually measured with water-sensitive paper, line-layout sampling, and similar means, by measuring the droplet parameters on the water-sensitive paper. However, droplets spread after landing on the paper, so the measured drift rate or deposition amount contains a certain error.
Summary of the Invention

The technical problem to be solved by the present invention is to provide a method for obtaining droplet characteristic parameters based on image processing; the calculated characteristic parameters include droplet size, droplet position coordinates, and droplet velocity. The present invention does not disturb the rotor wind field or the droplet motion and is suitable for small-scale, short-duration wind-field spray characteristic tests, thereby solving the problems raised in the background art above.

The present invention proposes a method for obtaining droplet characteristic parameters based on image processing, comprising:

acquiring a droplet image of a target area;

performing image preprocessing on the droplet image;

screening out feature regions from the preprocessing result, computing the matchable droplets within the feature regions, and obtaining, from the preprocessing result, the number of droplets in the image, the pixel size occupied by each droplet, and the droplet coordinate positions;

computing the velocity of the matchable droplets from the matching results for the matchable droplets in the feature regions and the coordinate positions of the matched droplets.
Preferably, the image preprocessing of the droplet image comprises:

acquiring the original color droplet image;

converting the original color image into a binary image by fixed-threshold binarization;

applying a morphological dilation to the binary image;

applying Laplacian image enhancement to the dilated image.
Preferably, the dilation operation comprises: first defining a 3×3 cross-shaped structuring element;

traversing each pixel of the binary image and comparing the pixel's 3×3 neighborhood against the structuring element, aligned at its origin;

if the pixel's value is 0, checking its 3×3 neighborhood against the corresponding positions of the structuring element;

if a neighborhood pixel is foreground at a position where the structuring element has the value 1, resetting the pixel's value to 1.
Preferably, the matching calculation for the matchable droplets in the feature regions comprises:

inputting two adjacent images that have undergone image preprocessing, and performing Harris corner detection on the first image to obtain candidate corner points;

computing the relevant information of the candidate corners and building a feature description for each candidate corner;

based on the feature descriptions, using normalized cross-correlation to find, for each corner in the first image, the corner in the second image with the highest matching degree;

displaying the droplet matching result and returning the relevant information of the corresponding matched corners.
Preferably, performing Harris corner detection on the first image to obtain candidate corners comprises: sliding a convolution window over each point of the first image and shifting it by one pixel upward, downward, leftward, or rightward;

for each pixel (x, y) in the image, computing the covariance matrix M of the gradient map within the 9×9 window using a Sobel operator of size 9;

setting the parameter of the corner response function R to 0.06 and regarding points where R > 0.04 as corner points.
Preferably, computing the droplet characteristic parameters and identifying and extracting the droplet information specifically comprises: after identifying each droplet contour with the findContours function, labeling the droplets with a labeling algorithm in left-to-right, top-to-bottom order;

using the count_nonzero function to count the number of pixels inside each droplet contour;

using the boundingRect function to extract the bounding-rectangle coordinates of each droplet contour and computing the droplet's center position;

computing the velocity of the matchable droplets;

the velocity of the matched droplets is computed from the matchable-droplet results of the neighborhood droplet matching calculation and the coordinate positions of the matched droplets.
Preferably, the droplet size among the droplet characteristic parameters can be obtained by the following formulas:

M = L / N
D = (P × M) / a

where M is the actual length of one pixel, in cm;

L is the grid size used in grid calibration, in cm;

N is the number of pixels the grid occupies in the image, in pixels;

D is the original droplet size, in cm;

P is the number of pixels the droplet occupies in the image, in pixels;

a is the relationship coefficient.
Preferably, the velocity of a matched droplet pair can be computed by the following formulas:

S = M × √[(X₁ − X₂)² + (Y₁ − Y₂)²]

where S is the actual distance traveled by the droplet; X₁, X₂, Y₁, Y₂ are obtained from the droplet center coordinate positions computed among the droplet characteristic parameters;

M is the actual length of one pixel, in cm, computed in the same way as in the original droplet size calculation;

V = S / T

where V is the droplet velocity, in m/s (after converting S from cm to m); S is the actual distance traveled by the droplet, in cm; and T is the time interval between the two images, in s.
By acquiring droplet images of a target area; preprocessing the droplet images; screening out feature regions from the preprocessing results, computing the matchable droplets within the feature regions, and obtaining the droplet count, the pixel size occupied by each droplet, and the droplet coordinate positions from the preprocessing results; and computing the velocity of the matchable droplets from the matching results in the feature regions and the coordinate positions of the matched droplets, the present invention captures the spray with a high-speed camera and analyzes it with image-processing and computer-vision techniques to obtain the characteristic parameters of the spray during application. This provides a new method for studying the actual motion of droplets during spraying and for computing droplet deposition and drift.
Description of Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present invention and, together with the description, serve to explain the principles of the invention.

To illustrate the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; obviously, a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.

Fig. 1 is a schematic flowchart of the method for obtaining droplet characteristic parameters based on image processing according to the present invention;

Fig. 2 shows, for the neighborhood droplet matching step, an image with lines drawn between the matched corner points;

Fig. 3 is a schematic diagram of outputting the minimum rotated rectangle and computing the coordinates of its center point in the droplet characteristic parameter calculation step.
Detailed Description

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

It should be noted that all directional indications (such as up, down, left, right, front, back, ...) in the embodiments of the present invention are only used to explain the relative positional relationships and motions of the components in a particular posture (as shown in the drawings); if that posture changes, the directional indications change accordingly.

In addition, descriptions involving "first", "second", and the like in the present invention are for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to. Thus, a feature defined with "first" or "second" explicitly or implicitly includes at least one such feature. The technical solutions of the various embodiments may be combined with one another, but only insofar as a person of ordinary skill in the art can realize the combination; when a combination of technical solutions is contradictory or cannot be realized, the combination shall be deemed not to exist and falls outside the protection scope claimed by the present invention.
Embodiment 1

The present invention relates to a method for obtaining droplet characteristic parameters based on image processing; referring to Fig. 1, it comprises the following steps:

1. Acquire a droplet image of the target area.

Specifically, in the droplet image acquisition step, a high-speed camera captures images of the droplet field to obtain droplet images of the target area.

2. Perform image preprocessing on the droplet image.

Specifically, in the step in which the computer acquires the droplet image information and performs image preprocessing, the high-speed camera transmits the droplet image information to the computer for image preprocessing.

3. From the results of the image preprocessing, screen out feature regions, compute the matchable droplets within them, and obtain the number of droplets, the pixel size occupied by each droplet, and the droplet coordinate positions.

Specifically, in the neighborhood droplet matching calculation step, the feature regions are screened out from the image preprocessing results and the matchable droplets within them are computed.

4. Compute the velocity of the matchable droplets from the matching results in the feature regions and the coordinate positions of the matched droplets.

Specifically, in the droplet characteristic parameter calculation step, the droplet count, the occupied pixel size, and the droplet coordinate positions are obtained from the image preprocessing results; the velocity of the matchable droplets is then computed by combining the neighborhood droplet matching results with the coordinate positions of the matched droplets.
For step 1, a specific implementation is: illuminate the shooting area with a flicker-free light source, perform grid calibration with the high-speed camera, and then set the capture frame rate and acquire droplet images.

In this embodiment, the high-speed camera is placed in the lower-left region of the spray plane, 40 cm away from it; the actual captured spray region is 5 cm × 8 cm; grid paper with 1 cm × 1 cm cells is used for calibration; the capture frame rate F is set to 2000 fps, and the continuous spray duration is 3 s.

For step 2, in the step in which the computer acquires the droplet image information and performs image preprocessing:

the droplet images may be transferred to the computer via standard interfaces such as USB or twisted-pair connections;

preprocessing the original droplet images mainly comprises converting the original color images acquired by the computer into binary images by fixed-threshold binarization, applying a morphological dilation to the binary images, and applying Laplacian image enhancement to the dilated images; the further preprocessing of the original droplet images comprises the following steps:

For each acquired frame, image-processing techniques are used for preprocessing analysis according to the number and distribution of the droplets in the image. First, for each acquired 24-bit RGB original color image, a fixed threshold of 127 is set and the color image is converted into a binary image.

Then a 3×3 dilation is applied to the binary image: a 3×3 cross-shaped structuring element is first defined; each pixel of the image is then traversed and its 3×3 neighborhood is compared against the structuring element, aligned at its origin; if the pixel's value is 0 and a neighborhood pixel is foreground at a position where the structuring element has the value 1, the pixel's value is reset to 1.

The dilated image is then enhanced. The chosen enhancement method is the Laplacian operator, with the kernel size ksize set to 5. Laplacian image enhancement is a commonly used algorithm that produces distinct gray-level boundaries, which facilitates the subsequent extraction of droplet contours.
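A minimal sketch of this preprocessing pipeline, assuming Python with OpenCV and NumPy; the threshold (127), the 3×3 cross-shaped structuring element, and ksize = 5 follow the values stated above, while the particular sharpening formulation (subtracting the Laplacian from the dilated image) is one common choice and is an assumption, not something specified by the text:

```python
import cv2
import numpy as np

def preprocess(frame_bgr):
    """Fixed-threshold binarization, 3x3 cross dilation, Laplacian enhancement."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Fixed threshold of 127, as described in the embodiment
    _, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    # 3x3 cross-shaped structuring element for the dilation step
    kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
    dilated = cv2.dilate(binary, kernel, iterations=1)
    # Laplacian with ksize = 5; subtracting it sharpens the droplet boundaries
    lap = cv2.Laplacian(dilated, cv2.CV_16S, ksize=5)
    enhanced = cv2.convertScaleAbs(dilated.astype(np.int16) - lap)
    return enhanced
```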
Based on the above embodiment, the further operation of the step-3 neighborhood droplet matching calculation is as follows:

First step: input two adjacent images that have undergone the image preprocessing and look for the most easily recognizable corner points (pixels) as detectors; this step uses the Harris corner detection method to extract them. The principle is to take a convolution window at each point of the image and shift it by one pixel upward, downward, leftward, or rightward; if the gray values inside the window change significantly, the region covered by the window is considered to contain a corner. In this implementation, for each pixel (x, y) in the image, the covariance matrix M of the gradient map is computed within the 9×9 window using a Sobel operator of size 9; the parameter of the corner response function R is set to 0.06, and points where R > 0.04 are regarded as corner points.
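A sketch of this detection step, assuming OpenCV's cornerHarris implements the windowed gradient covariance described above; reading the 0.06 as the Harris constant k and the 0.04 threshold as a fraction of the maximum response are interpretations of the text, not statements from it:

```python
import cv2
import numpy as np

def harris_candidates(preprocessed):
    """Return the Harris response map and candidate corner coordinates."""
    gray = np.float32(preprocessed)
    # 9x9 neighborhood, Sobel aperture 9, Harris parameter k = 0.06
    response = cv2.cornerHarris(gray, blockSize=9, ksize=9, k=0.06)
    # Treat responses above 0.04 of the maximum as candidate corners
    candidates = np.argwhere(response > 0.04 * response.max())
    return response, candidates
```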
Second step: compute the relevant information of the candidate corners and build the corner feature descriptions. The further steps are: set the corner threshold to 0.1 and look in the Harris response image from the first step for candidate corners above the threshold; after the candidate corners are found, obtain their coordinates and save them to an array, obtain their Harris response values, and sort them by response value.

Find the best Harris points among the candidate corners, as follows: set min_dist, the minimum number of pixels separating corners from one another and from the image boundary, to 10; only corners whose separation exceeds min_dist are regarded as best Harris points.

Once the best Harris points are found, note that the Harris corner detection method itself does not provide a way to match corners from their information, so a descriptor must be extracted as the feature of each corner. The descriptor of a Harris corner is usually the information of the image pixel patch around it.

Therefore, in this embodiment, a width wid of 9 is set, and the pixel values in a window of width 2·wid + 1 around each best Harris point are returned as the corner descriptor.

Third step: compute the matching degree between pairs of corner descriptors. This step uses normalized cross-correlation (NCC), which expresses the degree of correlation between the normalized targets to be matched.

The specific matching procedure is as follows: using the feature descriptors from the above steps, for each corner descriptor in the first image, use the NCC calculation to select the corner in the second image with the highest matching degree, obtaining the matched corner pairs.
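A minimal NumPy sketch of the descriptor extraction and NCC matching under the assumptions above (wid = 9, the descriptor taken as a square patch of width 2·wid + 1, and a 0.5 acceptance threshold chosen here only for illustration; the helper names are not from the original):

```python
import numpy as np

def get_descriptors(image, points, wid=9):
    """Flattened (2*wid+1)-wide patches around each corner point."""
    return [image[r - wid:r + wid + 1, c - wid:c + wid + 1].astype(np.float64).flatten()
            for r, c in points]

def ncc(d1, d2):
    """Normalized cross-correlation between two descriptors."""
    n1 = (d1 - d1.mean()) / (d1.std() + 1e-9)
    n2 = (d2 - d2.mean()) / (d2.std() + 1e-9)
    return float(np.mean(n1 * n2))

def match_descriptors(desc1, desc2, threshold=0.5):
    """For each descriptor of image 1, pick the best-scoring descriptor of image 2."""
    matches = []
    for i, d1 in enumerate(desc1):
        scores = [ncc(d1, d2) for d2 in desc2]
        j = int(np.argmax(scores))
        if scores[j] > threshold:
            matches.append((i, j))
    return matches
```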
Fourth step: display the droplet matching results and return the relevant information of the corresponding matched corners. The two adjacent droplet images are stitched into a new image, the positions of the successfully matched descriptor pairs are obtained, and an image with lines connecting the matched corners is displayed, as shown in Fig. 2.

On the basis of the above embodiment, the droplet characteristic parameters can be computed. In this embodiment they mainly include: the number of droplets in the droplet image, the occupied pixel size, the droplet coordinate positions, and the velocity of the matchable droplets.

In the above processing method for computing droplet characteristic parameters, the step-4 droplet characteristic parameter calculation further comprises:

identifying the droplets and extracting their relevant information (droplet count, occupied pixel size, droplet coordinate positions, and velocity of the matchable droplets); the main implementation consists of two parts: identifying and sorting the droplets, and computing the droplet characteristic parameters.

The further operation of identifying and sorting the droplets is: to identify and extract the contour of each droplet in the droplet image, the present invention uses the findContours function to identify each droplet contour; for each extracted droplet contour, the sort_contours function is used to label the droplets in left-to-right, top-to-bottom order, as sketched below.
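A sketch of the identification and sorting, assuming OpenCV 4.x (findContours returning two values); the left-to-right, top-to-bottom ordering attributed to sort_contours is re-implemented here as a simple row-then-column sort of the bounding-box origins, which is one reasonable reading of that rule:

```python
import cv2

def find_and_sort_droplets(enhanced):
    """Find droplet contours and order them top-to-bottom, then left-to-right."""
    contours, _ = cv2.findContours(enhanced, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    def origin(cnt):
        x, y, w, h = cv2.boundingRect(cnt)
        return (y, x)  # sort by row first, then by column

    return sorted(contours, key=origin)
```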
In the step of computing the droplet characteristic parameters, the parameters computed in the present invention include the droplet count, the pixel size occupied by each droplet, the droplet coordinate positions, and the velocity of the matchable droplets.

As for the droplet count, the total number of droplets in a droplet image is already obtained in the identification and sorting step, so it is not described further here.

As for computing the pixel size occupied by each droplet, this embodiment uses the count_nonzero function to count the pixels inside each droplet contour; count_nonzero returns the number of pixels whose gray value is not 0 and therefore gives a more accurate pixel count for each droplet. It is worth noting that this step is computed on the result of the step-2 image preprocessing, and the dilation and Laplacian enhancement applied there enlarge and thicken the droplets; the pixel size computed in this step is therefore not the true pixel size of the original droplets.

Subsequent calculations showed that, in the present invention, the droplet size after image preprocessing is roughly three times the droplet size in the original color image; the precise coefficient relationship remains to be studied. The pixel sizes computed in this step can therefore be used to compare droplet sizes, but they are not especially accurate as original droplet sizes. Since many researchers remain interested in the original droplet size, the present invention provides a formula for calculating it based on an estimated relationship coefficient:

M = L / N
D = (P × M) / a

where M is the actual length of one pixel; L is the grid size used in grid calibration; N is the number of pixels the grid occupies in the image; D is the original droplet size; P is the number of pixels the droplet occupies in the image; and a is the relationship coefficient.

As for computing the droplet coordinate positions, this embodiment uses the minAreaRect function to extract the bounding-rectangle coordinates of each droplet contour and to compute the droplet's center coordinates; minAreaRect computes the minimum rotated rectangle enclosing the contour, which locates the droplet center more accurately, and it outputs the coordinates (x, y) of the rectangle's center point as well as the rectangle's width and height, as shown in Fig. 3. A per-droplet sketch of these calculations follows.
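A sketch of the per-droplet parameter extraction, assuming the OpenCV and NumPy functions named above; the enlargement coefficient a ≈ 3 is only the rough estimate mentioned earlier, and M is assumed to come from the grid calibration (M = L / N, e.g. L = 1 cm for the 1 cm grid):

```python
import cv2
import numpy as np

def droplet_parameters(enhanced, contour, M, a=3.0):
    """Pixel count P, estimated original size D (cm), and center of one droplet.

    enhanced: preprocessed image; contour: one droplet contour;
    M: actual length of one pixel in cm (M = L / N from grid calibration);
    a: estimated enlargement coefficient introduced by the preprocessing.
    """
    # Pixel count: rasterize the contour into a mask, then count nonzero pixels
    mask = np.zeros(enhanced.shape, dtype=np.uint8)
    cv2.drawContours(mask, [contour], -1, 255, thickness=cv2.FILLED)
    P = int(np.count_nonzero(cv2.bitwise_and(enhanced, mask)))
    # Estimated original droplet size, D = (P * M) / a
    D = (P * M) / a
    # Center of the minimum rotated rectangle enclosing the contour
    (cx, cy), (w, h), angle = cv2.minAreaRect(contour)
    return P, D, (cx, cy)
```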
The velocity of the matchable droplets is computed as follows: from the matchable-droplet results of the step-3 neighborhood droplet matching calculation, the matched droplet pairs are found; combining the droplet characteristic parameter information above with the following formulas then yields the velocity of each matched droplet.

S = M × √[(X₁ − X₂)² + (Y₁ − Y₂)²]

where S is the actual distance traveled by the droplet; X₁, X₂, Y₁, Y₂ are obtained from the droplet center coordinate positions computed among the droplet characteristic parameters; and M is the actual length of one pixel, computed in the same way as in the original droplet size calculation.

V = S / T

where V is the droplet velocity, in m/s (after converting S from cm to m); S is the actual distance traveled by the droplet, in cm; and T is the time interval between the two images, in s.
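A sketch of this velocity calculation, assuming the two centers come from a matched droplet pair in adjacent frames, that T is taken as 1/F with the frame rate F = 2000 fps from this embodiment, and that the cm-to-m conversion is applied explicitly for unit consistency:

```python
import math

def droplet_velocity(center1, center2, M, frame_rate=2000.0):
    """Velocity (m/s) of a matched droplet pair between two adjacent frames.

    center1, center2: (x, y) pixel coordinates of the matched droplet centers;
    M: actual length of one pixel in cm; frame_rate: capture rate in frames/s.
    """
    x1, y1 = center1
    x2, y2 = center2
    # Real distance traveled, in cm: S = M * sqrt((x1 - x2)^2 + (y1 - y2)^2)
    S = M * math.hypot(x1 - x2, y1 - y2)
    # Time interval between two adjacent frames, in s
    T = 1.0 / frame_rate
    # Convert cm to m before dividing by the interval
    return (S / 100.0) / T
```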
The above is only a specific embodiment of the present invention, enabling those skilled in the art to understand or implement the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features claimed herein.
Claims (8)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310265163.1A CN115984335B (en) | 2023-03-20 | 2023-03-20 | A Method of Obtaining the Feature Parameters of Fog Droplets Based on Image Processing |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN115984335A (en) | 2023-04-18 |
| CN115984335B CN115984335B (en) | 2023-06-23 |
Patent Citations (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20130243250A1 (en) * | 2009-09-14 | 2013-09-19 | Trimble Navigation Limited | Location of image capture device and object features in a captured image |
| US20130208997A1 (en) * | 2010-11-02 | 2013-08-15 | Zte Corporation | Method and Apparatus for Combining Panoramic Image |
| US20170278258A1 (en) * | 2011-08-31 | 2017-09-28 | Apple Inc. | Method Of Detecting And Describing Features From An Intensity Image |
| US20160196654A1 (en) * | 2015-01-07 | 2016-07-07 | Ricoh Company, Ltd. | Map creation apparatus, map creation method, and computer-readable recording medium |
| CN107657626A (en) * | 2016-07-25 | 2018-02-02 | 浙江宇视科技有限公司 | The detection method and device of a kind of moving target |
| CN106981073A (en) * | 2017-03-31 | 2017-07-25 | 中南大学 | A kind of ground moving object method for real time tracking and system based on unmanned plane |
| CN112164037A (en) * | 2020-09-16 | 2021-01-01 | 天津大学 | MEMS device in-plane motion measurement method based on optical flow tracking |
| CN113706566A (en) * | 2021-09-01 | 2021-11-26 | 四川中烟工业有限责任公司 | Perfuming spray performance detection method based on edge detection |
| CN115272403A (en) * | 2022-06-10 | 2022-11-01 | 南京理工大学 | Fragment scattering characteristic testing method based on image processing technology |
| CN115760893A (en) * | 2022-11-29 | 2023-03-07 | 江苏大学 | Single droplet particle size and speed measuring method based on nuclear correlation filtering algorithm |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |