
CN109657577B - An animal detection method based on entropy and motion offset - Google Patents


Info

Publication number
CN109657577B
Authority
CN
China
Prior art keywords
image
entropy
animal
current frame
detection
Prior art date
Legal status
Expired - Fee Related
Application number
CN201811496717.4A
Other languages
Chinese (zh)
Other versions
CN109657577A
Inventor
朱小飞
陈建促
王越
李章宇
林志航
Current Assignee
Chongqing University of Technology
Original Assignee
Chongqing University of Technology
Priority date
Filing date
Publication date
Application filed by Chongqing University of Technology
Priority to CN201811496717.4A
Publication of CN109657577A
Application granted
Publication of CN109657577B
Expired - Fee Related
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an animal detection method based on entropy and motion offset. The current frame image is first detected with the existing YOLOv3 model. If occlusion is present in the image and detection of the current frame fails, the animal category information of the current frame is taken from the preceding-queue image with the lowest entropy, and the animal position information of the current frame is computed from multiple preceding-queue images. Animal detection in occluded images is thereby achieved, improving the stability and accuracy of real-time animal detection.

Description

An animal detection method based on entropy and motion offset

Technical Field

The invention relates to the field of image recognition, and in particular to an animal detection method based on entropy and motion offset.

Background Art

Real-time animal detection is an important research direction in machine vision, with applications spanning security, industry, and driver assistance. Applying real-time animal detection to wildlife makes it possible to record animals' daily behavior and living patterns in support of scientific research, so as to better protect rare or endangered species and complex ecosystems, and to avoid the harmful chain reactions that the loss of a single species can trigger. However, in real-time animal detection, motion blur, shape changes, illumination, complex backgrounds, and occlusion by other objects all reduce detection stability and accuracy. Among these, solving the occlusion problem is the key to improving the stability and accuracy of animal detection.

Animal detection under occlusion falls into three groups: traditional occlusion-handling methods, classification methods based on hand-crafted feature extraction, and deep-learning methods. Traditional occlusion-handling methods detect animals by combining center weighting, sub-block matching, trajectory prediction, Bayesian theory, and so on; their drawback is that they do not cope well with changes in target scale and suffer high false-detection rates. Hand-crafted-feature methods derive a feature description of the target from prior knowledge and feed it to a classifier that learns classification rules; they suffer from limited expressive power for the target and poor separability. Deep-learning methods use convolution to let the computer automatically extract robust, general target features from images. In recent years, the clear advantages of deep learning in animal detection have made it a research hotspot.
Some researchers combined the selective-search algorithm with an SVM classifier to propose the region-based convolutional neural network R-CNN, which raised the average recognition rate by nearly 20% over hand-crafted-feature detection methods, but at a high cost in time and memory. The YOLO model designed by other researchers trains a single network end to end, taking the whole image as input and regressing bounding-box positions and classes at the output layer; detection speed improves greatly, but accuracy drops. The SSD (Single Shot MultiBox Detector) model combines the VGG-16 network, sliding windows, and anchor boxes, regressing multi-scale regional features at every position of the image; it improves detection accuracy but is slower than YOLO. The YOLOv3 model uses the Darknet-53 network with a pyramid network to detect images at multiple scales, matching SSD's accuracy while keeping YOLO's speed. Although these deep-learning animal detection methods have produced good results, when applied to video they do not consider the temporal relationship peculiar to video, so detection accuracy remains low.

Therefore, how to improve the accuracy of animal detection in the presence of occlusion has become an urgent problem for those skilled in the art.

Summary of the Invention

In view of the above deficiencies in the prior art, the problem addressed by the present invention is how to improve the accuracy of animal detection in the presence of occlusion.

To solve this technical problem, the present invention adopts the following technical solution:

An animal detection method based on entropy and motion offset, comprising the following steps:

S1. Acquire video sequence images, comprising the current frame image and the preceding-queue images, the preceding-queue images being a number of consecutive images preceding the current frame; go to S2.

S2. Detect the current frame image and the preceding-queue images with the YOLOv3 model to obtain their detection information, comprising a detection score, animal category information and animal position information; go to S3.

S3. If the detection score of the current frame image is greater than or equal to the score threshold, go to S6; otherwise go to S4.

S4. Compute the entropy of each preceding-queue image and replace the current frame's original animal category information with that of the preceding-queue image with the lowest entropy; go to S5.

S5. Compute the animal position information of the current frame image from the animal position information of all preceding-queue images; go to S6.

S6. Output the detection information of the current frame image.

Preferably, in S4 the entropy of any preceding-queue image is computed as follows:

S401. Compute the category score sum of a single region of the preceding-queue image at each scale:

S = Σ_{i1 ∈ N1} c_{i1}

where S is the category score sum, c_{i1} is the recognition rate of category i1 for the region, C is the set of all categories, and N1 is the set of animal categories, N1 ⊆ C;

S402. Compute the ratio of the recognition rate of category i1 for the region to the category score sum:

p(c_{i1}) = c_{i1} / S;

S403. Compute the entropy of a single region:

E_{j1} = −Σ_{i1 ∈ N1} p(c_{i1}) · log p(c_{i1})

where E_{j1} is the entropy of the j1-th region;

S404. Compute the entropy of the preceding-queue image at a single scale:

E_K = (1/m) · Σ_{j1=1}^{m} E_{j1}, with m = N2 × N2

where E_K is the entropy at scale K, m is the total number of regions of the preceding-queue image at scale K, and N2 is the region-grid size parameter corresponding to scale K in the YOLOv3 model;

S405. Compute the entropy E of the preceding-queue image by combining the three scales:

E = Σ_{K=1}^{3} E_K.

Preferably, S5 comprises the following steps:

S501. Obtain the animal position information of the preceding-queue image with the lowest entropy;

S502. Compute the animal position information of the current frame image:

x_{i2} = x_{j2} + offset_x,  y_{i2} = y_{j2} + offset_y,  w_{i2} = w_{j2} + offset_w,  h_{i2} = h_{j2} + offset_h

where x_{i2}, y_{i2}, w_{i2} and h_{i2} are the x coordinate, y coordinate, width and height of the animal image in the current frame; x_{j2}, y_{j2}, w_{j2} and h_{j2} are the x coordinate, y coordinate, width and height of the animal image in the lowest-entropy preceding-queue image; and offset_x, offset_y, offset_w and offset_h are the changes in x coordinate, y coordinate, width and height of the animal image in the current frame relative to the lowest-entropy preceding-queue image, estimated from the frame-to-frame position changes across the preceding-queue images under an assumption of uniform linear motion.

To sum up, the present invention discloses an animal detection method based on entropy and motion offset. The current frame image is first detected with the existing YOLOv3 model. If occlusion is present in the image and detection of the current frame fails, the animal category information of the current frame is determined from the preceding-queue image with the smallest entropy, and the animal position information of the current frame is computed from multiple preceding-queue images. Animal detection in occluded images is thereby achieved, improving the stability and accuracy of real-time animal detection.

Brief Description of the Drawings

Fig. 1 is a flowchart of the animal detection method based on entropy and motion offset disclosed by the present invention;

Fig. 2 is a schematic diagram of the entropy of an unoccluded image at the third scale;

Fig. 3 is a schematic diagram of the entropy of an occluded image at the third scale;

Fig. 4 is a schematic comparison of the entropies of the unoccluded and occluded images at the third scale;

Fig. 5 is a one-dimensional comparison of the entropies of the unoccluded and occluded images at the third scale;

Fig. 6 shows the curves of image entropy and detection score over the video sequence in the experiment.

Detailed Description

The present invention is described in further detail below with reference to the accompanying drawings.

As shown in Fig. 1, the present invention discloses an animal detection method based on entropy and motion offset, comprising the following steps:

S1. Acquire video sequence images, comprising the current frame image and the preceding-queue images, the preceding-queue images being a number of consecutive images preceding the current frame; go to S2.

S2. Detect the current frame image and the preceding-queue images with the YOLOv3 model to obtain their detection information, comprising a detection score, animal category information and animal position information; go to S3.

S3. If the detection score of the current frame image is greater than or equal to the score threshold, go to S6; otherwise go to S4.

S4. Compute the entropy of each preceding-queue image and replace the current frame's original animal category information with that of the preceding-queue image with the lowest entropy; go to S5.

S5. Compute the animal position information of the current frame image from the animal position information of all preceding-queue images; go to S6.

S6. Output the detection information of the current frame image.

The YOLOv3 (You Only Look Once) algorithm, proposed by Joseph Redmon and Ali Farhadi in 2018, is a regression-based real-time object detection algorithm: a convolutional neural network that predicts the positions and classes of multiple target boxes in one pass. It uses Darknet-53 as the backbone network for target feature extraction; on top of Darknet-53, additional convolutional layers are added to predict the image at three different scales, yielding richer semantic information.

When predicting on each scale's feature map, three class predictions and three bounding-box regression predictions are made for each region of the feature map, so each prediction task yields a feature of size T:

T = N3 × N3 × [3 × (4 + 1 + C)] (1)

In equation (1), N3 is the grid size, taking the values 13, 26 and 52; 3 is the number of anchors; 4 is the number of bounding-box offsets; 1 is the objectness prediction; and C is the total number of classes.
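As a quick sanity check of equation (1), the per-scale feature sizes can be computed for hypothetical values, e.g. C = 12 classes as in the WVDDS dataset described later:

```python
def yolo_v3_feature_size(n3: int, num_classes: int) -> int:
    """Feature size per scale: N3 x N3 grid cells, 3 anchors each,
    and per anchor 4 box offsets + 1 objectness score + C class scores."""
    return n3 * n3 * 3 * (4 + 1 + num_classes)

# The three YOLOv3 scales with, for illustration, C = 12 classes:
sizes = {n3: yolo_v3_feature_size(n3, 12) for n3 in (13, 26, 52)}
```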

After obtaining the corresponding bounding boxes, objectness predictions and class predictions, YOLOv3 applies non-maximum suppression (NMS) to obtain the final prediction result.
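Non-maximum suppression is a standard greedy post-processing step; a minimal sketch (not the authors' implementation) keeps the highest-scoring box, discards boxes that overlap it too much, and repeats:

```python
def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS. boxes: list of (x1, y1, x2, y2) corner boxes;
    scores: parallel list of floats. Returns indices of kept boxes."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter) if inter else 0.0

    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)                 # highest remaining score
        keep.append(best)
        order = [i for i in order           # drop heavily overlapping boxes
                 if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep
```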

When YOLOv3 performs object detection on video, it splits the video into a sequence of images; in essence it is still detecting individual pictures and does not consider the temporal relationship peculiar to video. When a video image is affected by occlusion beyond a certain degree, detection fails and the stability of real-time object detection suffers. To solve this problem, the present invention extends the YOLOv3 real-time detection model, introducing the image entropy, the motion offset and the video's temporal relationship into the model. The YOLOv3 detection score is compared against a score threshold; if the score falls below the threshold, the temporal relationship is used to obtain the animal category information of the preceding-queue image with the minimum entropy, and the positional offset between the current frame and that minimum-entropy image is computed, producing the final detection output.

In the present invention, the preceding-queue images are stored in a first-in, first-out queue; a queue holding 8 images is taken as an example. If the current frame image is occluded, the entropy of the lowest-entropy preceding-queue image is taken as the entropy of the current frame. After detection of the current frame is complete, the current frame is itself pushed into the queue and detection proceeds to the next frame, giving continuous real-time detection of the video.
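The per-frame control flow of S1–S6 together with the eight-image FIFO queue can be sketched as follows; the dictionary fields and the `process_frame` helper are illustrative assumptions, not the patent's implementation:

```python
from collections import deque

QUEUE_LEN = 8          # the example queue length used in this description
SCORE_THRESHOLD = 0.3  # the score threshold used in the experiments below

def process_frame(frame_info, queue):
    """frame_info: dict with 'score', 'category', 'box', 'entropy' as produced
    by the detector for one frame. queue: deque of past frame_info dicts.
    Returns the final detection for the frame (S3/S4 of the method)."""
    if frame_info["score"] < SCORE_THRESHOLD and queue:
        # Detection failed (occlusion assumed): borrow the category of the
        # lowest-entropy preceding image, and inherit its entropy (S4).
        best = min(queue, key=lambda f: f["entropy"])
        frame_info["category"] = best["category"]
        frame_info["entropy"] = best["entropy"]
        # S5 (position from motion offset) would be applied here.
    queue.append(frame_info)  # FIFO: the oldest image falls out automatically
    return frame_info

queue = deque(maxlen=QUEUE_LEN)
```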

The invention discloses an animal detection method based on entropy and motion offset. The current frame image is first detected with the existing YOLOv3 model. If occlusion is present in the image and detection of the current frame fails, the animal category information of the current frame is determined from the preceding-queue image with the smallest entropy, and the animal position information of the current frame is computed from multiple preceding-queue images. Animal detection in occluded images is thereby achieved, improving the stability and accuracy of real-time animal detection.

In a specific implementation, in S4 the entropy of any preceding-queue image is computed as follows:

S401. Compute the category score sum of a single region of the preceding-queue image at each scale:

S = Σ_{i1 ∈ N1} c_{i1}

where S is the category score sum, c_{i1} is the recognition rate of category i1 for the region, C is the set of all categories, and N1 is the set of animal categories, N1 ⊆ C;

S402. Compute the ratio of the recognition rate of category i1 for the region to the category score sum:

p(c_{i1}) = c_{i1} / S;

S403. Compute the entropy of a single region:

E_{j1} = −Σ_{i1 ∈ N1} p(c_{i1}) · log p(c_{i1})

where E_{j1} is the entropy of the j1-th region;

S404. Compute the entropy of the preceding-queue image at a single scale:

E_K = (1/m) · Σ_{j1=1}^{m} E_{j1}, with m = N2 × N2

where E_K is the entropy at scale K, m is the total number of regions of the preceding-queue image at scale K, and N2 is the region-grid size parameter corresponding to scale K in the YOLOv3 model;

S405. Compute the entropy E of the preceding-queue image by combining the three scales:

E = Σ_{K=1}^{3} E_K.

To judge whether occlusion is present between video sequence images, the maximum Shannon entropy theory of information theory can be introduced. In information theory, entropy measures the uncertainty of a random variable; it quantifies the disorder of the objects within a plane or a region and reflects the uncertainty of a piece of information.

As a target moves into occlusion, target information is gradually lost and the number of target feature points decreases. Fewer feature points make the target information unstable or even lost, producing multiple competing recognition results; the more chaotic the results, the higher the uncertainty. By the definition of information entropy, greater disorder means greater uncertainty, i.e. greater entropy.

Information entropy is thus inversely related to the detection recognition rate: the higher the recognition rate, the lower the entropy, and the lower the recognition rate, the higher the entropy.

Therefore, the present invention takes the animal category information of the preceding-queue image with the smallest entropy as the animal category information of the current frame image.

In a specific implementation, S5 comprises the following steps:

S501. Obtain the animal position information of the preceding-queue image with the lowest entropy;

S502. Compute the animal position information of the current frame image:

x_{i2} = x_{j2} + offset_x,  y_{i2} = y_{j2} + offset_y,  w_{i2} = w_{j2} + offset_w,  h_{i2} = h_{j2} + offset_h

where x_{i2}, y_{i2}, w_{i2} and h_{i2} are the x coordinate, y coordinate, width and height of the animal image in the current frame; x_{j2}, y_{j2}, w_{j2} and h_{j2} are the x coordinate, y coordinate, width and height of the animal image in the lowest-entropy preceding-queue image; and offset_x, offset_y, offset_w and offset_h are the changes in x coordinate, y coordinate, width and height of the animal image in the current frame relative to the lowest-entropy preceding-queue image, estimated from the frame-to-frame position changes across the preceding-queue images under an assumption of uniform linear motion.

For an occluded target, its position can be predicted from the motion information before the occlusion. Using the motion information of the pre-occlusion video images avoids the positioning error that arises when the predicted position deviates from the target's actual position because the moving target's state has changed. Owing to the inertia of the target's motion, its velocity and acceleration do not change much over a short interval (8 frames in this example). We may therefore assume that the position change between the current frame image and the preceding-queue image with the minimum information entropy follows uniform linear motion.
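Under the uniform linear motion assumption, one plausible reading of S502 estimates a mean per-frame displacement from consecutive queue positions and extrapolates from the lowest-entropy image. The offset formula in the original is image-rendered and not recoverable, so this sketch is an assumption rather than the patent's exact computation:

```python
def predict_box(queue_boxes, best_index, frames_ahead):
    """queue_boxes: list of (x, y, w, h) boxes for consecutive preceding-queue
    images, oldest first. best_index: index of the lowest-entropy image.
    frames_ahead: number of frames between that image and the current frame.
    Assumes uniform linear motion over the short queue (8 frames here)."""
    n = len(queue_boxes)
    if n < 2:
        return queue_boxes[best_index]
    # Mean per-frame change for each of x, y, w, h.
    vel = [
        sum(queue_boxes[k + 1][d] - queue_boxes[k][d] for k in range(n - 1)) / (n - 1)
        for d in range(4)
    ]
    xj, yj, wj, hj = queue_boxes[best_index]
    # offset_x, offset_y, offset_w, offset_h extrapolated to the current frame.
    off = [v * frames_ahead for v in vel]
    return (xj + off[0], yj + off[1], wj + off[2], hj + off[3])
```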

The detection judgment for the current frame compares the intersection over union (IoU) of the detection box and the ground-truth box of the target against a threshold. If the ground-truth box of the image is sr and the predicted box is sp, then

IoU = area(sr ∩ sp) / area(sr ∪ sp).

If the detected target classes are denoted '0, 1, ..., c' and an undetected target is denoted '−1', the class C of the detected target is judged as:

C = the predicted class, if IoU ≥ threshold;  C = −1, if IoU < threshold.
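The IoU test and the class decision above can be written out directly (the corner box format (x1, y1, x2, y2) is an assumption for illustration):

```python
def iou(sr, sp):
    """Intersection over union of ground-truth box sr and predicted box sp,
    each given as (x1, y1, x2, y2) corners."""
    ix1, iy1 = max(sr[0], sp[0]), max(sr[1], sp[1])
    ix2, iy2 = min(sr[2], sp[2]), min(sr[3], sp[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((sr[2] - sr[0]) * (sr[3] - sr[1])
             + (sp[2] - sp[0]) * (sp[3] - sp[1]) - inter)
    return inter / union if union else 0.0

def detected_class(sr, sp, predicted_class, threshold=0.5):
    """Return the predicted class if the boxes overlap enough, else -1
    (no target detected)."""
    return predicted_class if iou(sr, sp) >= threshold else -1
```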

The following describes experiments conducted with the method disclosed by the present invention:

The experimental environment and configuration were: the Ubuntu 14.04 operating system, an Intel Xeon E5-2623 v3 processor, 64 GB of memory, an NVIDIA Tesla K80 GPU, and the Keras deep learning framework.

Since the public wildlife dataset AWA2 (Animals with Attributes) is an image classification dataset, it lacks the temporal relationships peculiar to video datasets. For video occlusion detection of wild animals, we therefore built WVDDS (Wildlife Video Detection Datasets), a wildlife video occlusion detection dataset containing 12 classes. The video data were annotated by hand at a rate of one annotation every 5 frames, in PASCAL VOC format. The classes contained in WVDDS and their counts are shown in Table 1.

Table 1

(The table itself appears only as an image in the original publication.)

During model training we used the EarlyStopping callback of Keras, monitoring val_loss; training stops once val_loss has remained stable to a certain degree.
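The behaviour described corresponds to `keras.callbacks.EarlyStopping(monitor='val_loss', patience=...)`; its logic can be mimicked in a few lines (the patience value below is an illustrative assumption, not taken from the experiments):

```python
class EarlyStopper:
    """Minimal stand-in for Keras EarlyStopping monitoring val_loss:
    stop when it has not improved for `patience` consecutive epochs."""
    def __init__(self, patience=5, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.wait = 0

    def should_stop(self, val_loss):
        if val_loss < self.best - self.min_delta:
            self.best = val_loss   # improvement: reset the counter
            self.wait = 0
        else:
            self.wait += 1         # no improvement this epoch
        return self.wait >= self.patience
```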

Experiments show that the target detection recognition rate decreases as the area occluded by external objects increases, and likewise as the self-occluded area increases; when occlusion reaches a certain extent, detection fails outright or produces wrong results. Conversely, the recognition rate rises again as the externally occluded area shrinks.

We took one occluded video image and one unoccluded video image from a video sequence and, via Darknet-53 and the additional convolutional layers, obtained feature maps of the image at three different scales. We then computed the information entropy of the image, obtaining the entropy corresponding to each scale's feature map, and selected the information entropy of the third-scale feature map for analysis:

Suppose the general form of entropy in the three-dimensional coordinate space is written

E = T(Φ(N, N, e)), (N, N) ∈ Ω

where Φ(N, N, e) describes the variation over a region of the image space, (N, N) ∈ Ω are the horizontal and vertical coordinates of a pixel in the image space Ω, W is the value space of Φ, and T is a transform function related to Φ. Comparing the changes of E in this expression then gives the entropy relationship between images of different sequences.

For the image spaces (N1, N1) ∈ Ω1 and (N2, N2) ∈ Ω2,

Figure GDA0003573046710000092

Figures 2 to 5 compare the entropy of an occluded image and an unoccluded image at the third scale. Figure 2 shows the entropy of the unoccluded image. Figure 3 shows the entropy of the occluded image; the raised regions mark sudden increases in entropy, indicating that the content of the corresponding areas changes strongly. Figure 4 compares the two entropy surfaces, showing that the occluded image's entropy is noticeably larger than the unoccluded image's in the regions of sudden change. Figure 5 gives a one-dimensional comparison, showing that the entropy of the occluded image is higher than that of the unoccluded image.

First, we remove regions with a detection score of 0.00 from the feature maps of the video-sequence images and apply the image-entropy calculation steps to obtain the entropy data; using the trained model, we then run detection on the video-sequence images (score threshold = 0.3, IoU threshold = 0.5) to obtain the detection scores, so that each image's entropy is paired one-to-one with its detection score. Next, the entropy data are sorted in descending order, with the corresponding detection scores arranged accordingly. Finally, the sorted data are visualized; the result is shown in Figure 6: as entropy decreases, the corresponding scores fluctuate somewhat because factors other than occlusion (lighting, animal deformation, motion blur, etc.) also affect detection, but overall the curve still shows a clear upward trend.
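The sort-and-pair analysis described above can be sketched as follows; the linear-trend statistic and the toy data are illustrative assumptions, not the patent's own procedure:

```python
import numpy as np

def score_trend_by_entropy(entropies, scores):
    """Sort per-image entropies in descending order, carry the paired
    detection scores along, and fit a line to score versus sort rank;
    a positive slope means scores rise as entropy falls."""
    order = np.argsort(entropies)[::-1]          # highest entropy first
    sorted_scores = np.asarray(scores, dtype=float)[order]
    ranks = np.arange(len(sorted_scores))
    slope = np.polyfit(ranks, sorted_scores, 1)[0]
    return sorted_scores, slope

# Toy data with a noisy inverse relation between entropy and score,
# mimicking the fluctuation caused by lighting, deformation and blur.
rng = np.random.default_rng(0)
ent = rng.uniform(0.0, 1.0, 200)
sc = np.clip(1.0 - ent + rng.normal(0.0, 0.1, 200), 0.0, 1.0)
_, slope = score_trend_by_entropy(ent, sc)
print(slope > 0)  # True: the curve trends upward, as in Figure 6
```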

Figure 6 shows that as the entropy of the video-sequence images decreases, the recognition rate of target detection increases; the entropy of a video-sequence image and the recognition rate of target detection are roughly inversely related.

When occlusion reaches a certain level, YOLOv3 can no longer detect the target; because the present invention combines information entropy, the time-series relationship, and the position offset, it can still accurately detect targets in occluded video images. In addition, the model is designed to produce a detection output for every frame of the video, which greatly improves the stability of video-based target detection.

To verify the validity and accuracy of the proposed model (ET-YOLO), we compare it experimentally against Faster R-CNN, RetinaNet, SSD, and YOLOv3, using mAP (mean Average Precision) and FPS (Frames Per Second), the metrics commonly used in real-time target detection, as the performance indicators.

Table 4

Figure GDA0003573046710000101

Table 5

Figure GDA0003573046710000102

Table 4 reports the detection accuracy and detection speed of the different models on the WVDDS dataset; Table 5 reports each model's per-category accuracy on WVDDS. The results show that ET-YOLO's detection accuracy exceeds that of Faster R-CNN, RetinaNet, SSD, and YOLOv3; although its detection speed is slightly lower than YOLOv3's, its accuracy improves on YOLOv3 by 5.5% without compromising real-time detection.

In summary, the present invention applies deep-learning-based real-time target detection to the field of wildlife protection and constructs WVDDS, a wildlife video occlusion-detection dataset containing time-series information, providing a new data resource for research on wildlife target detection;

It demonstrates that the information entropy of a video image is inversely related to the recognition rate of target detection;

By combining time-series information with the YOLOv3 model, it largely solves the occlusion-detection problem through the relationship between entropy changes and the time series, improving the stability and recognition rate of real-time target detection;

By calculating the position offset of the detected target over time, it improves the overlap (IoU) between the predicted box of an occluded target and the ground-truth box.
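The IoU referred to here is the standard intersection-over-union of two boxes; a sketch in the (x, y, w, h) center-size encoding used by the method (this is the standard definition, not code from the patent):

```python
def iou(box_a, box_b):
    """IoU of two boxes given as (x_center, y_center, width, height)."""
    ax1, ay1 = box_a[0] - box_a[2] / 2, box_a[1] - box_a[3] / 2
    ax2, ay2 = box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2
    bx1, by1 = box_b[0] - box_b[2] / 2, box_b[1] - box_b[3] / 2
    bx2, by2 = box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # intersection height
    inter = iw * ih
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

print(iou((5, 5, 4, 4), (5, 5, 4, 4)))    # 1.0 for identical boxes
print(iou((0, 0, 2, 2), (10, 10, 2, 2)))  # 0.0 for disjoint boxes
```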

The above is merely a preferred embodiment of the present invention. It should be noted that those skilled in the art may make various modifications and improvements without departing from this technical solution, and such modified and improved technical solutions shall likewise be deemed to fall within the scope of protection claimed by the present invention.

Claims (1)

1. An animal detection method based on entropy and motion offset, characterized by comprising the following steps:
S1, acquiring video sequence images, wherein the video sequence images comprise a current frame image and a previous queue image, the previous queue image comprises a plurality of images before the current frame image, and S2 is executed;
S2, detecting the current frame image and the previous queue image based on a YOLOv3 model to obtain detection information of the current frame image and the previous queue image, wherein the detection information comprises detection scores, animal type information and animal position information, and S3 is executed;
S3, if the detection score of the current frame image is larger than or equal to the score threshold value, executing S6; otherwise, executing S4;
S4, calculating the entropy of each previous queue image, replacing the original animal type information of the current frame image with the animal type information of the previous queue image having the lowest entropy, and executing S5; in S4, the entropy of any previous queue image is calculated as follows:
S401, based on the formula
Figure FDA0003596587100000011
wherein i1 ∈ N1, the sum of the category scores corresponding to a single region at the different scales of the previous queue image is calculated, S being the sum of the category scores, c_i1 being the recognition rate of category i1 corresponding to a single region, C being the category set, N1 being the animal category set, and N1 ⊆ C;
S402, based on the formula
Figure FDA0003596587100000012
wherein i1 ∈ N1, the ratio p(c_i1) of the recognition rate of category i1 corresponding to a single region to the sum of the category scores is calculated;
S403, based on the formula
Figure FDA0003596587100000013
wherein i1 ∈ N1, the entropy of a single region is calculated, E_j1 being the entropy of the j1-th single region;
S404, based on the formula
Figure FDA0003596587100000014
wherein m ∈ [0, N2 × N2 × 3), the entropy of the previous queue image at a single scale is calculated, E_K being the entropy at scale K, m being the total number of single regions at scale K of the previous queue image, and N2 being the single-region size parameter corresponding to scale K in the YOLOv3 model;
S405, based on the formula
Figure FDA0003596587100000015
Calculating the entropy E of the previous queue image;
S5, calculating the animal position information of the current frame image based on the animal position information of all the previous queue images, and executing S6; S5 comprises the following steps:
S501, obtaining the animal position information of the previous queue image with the lowest entropy;
S502, based on the formula
Figure FDA0003596587100000021
calculating the animal position information of the current frame image, wherein x_i2, y_i2, w_i2 and h_i2 are respectively the x-axis coordinate, y-axis coordinate, width and height of the animal image in the current frame image; x_j2, y_j2, w_j2 and h_j2 are respectively the x-axis coordinate, y-axis coordinate, width and height of the animal image in the previous queue image with the lowest entropy; and offset_x, offset_y, offset_w and offset_h are the changes in x-axis coordinate, y-axis coordinate, width and height of the animal image in the current frame image relative to the animal image in the previous queue image with the lowest entropy,
Figure FDA0003596587100000022
and S6, outputting the detection information of the current frame image.
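The entropy of steps S401-S405 and the offset shift of step S502 can be sketched as follows. Because the formulas themselves appear only as image placeholders (Figure FDA…), the per-scale aggregation shown here (a plain average over regions and scales) is an assumption:

```python
import math

def region_entropy(class_scores):
    """Entropy of one grid region from its per-class scores c_i1
    (steps S401-S403); assumes at least one positive score."""
    s = sum(class_scores)                            # S401: S = sum of c_i1
    probs = [c / s for c in class_scores if c > 0]   # S402: p(c_i1) = c_i1 / S
    return -sum(p * math.log(p) for p in probs)      # S403: E_j1

def image_entropy(scale_regions):
    """Entropy E of a previous queue image (S404-S405): average the
    region entropies within each scale, then average over the scales."""
    per_scale = [sum(region_entropy(r) for r in regions) / len(regions)
                 for regions in scale_regions]       # S404: E_K per scale
    return sum(per_scale) / len(per_scale)           # S405: E

def shifted_box(prev_box, offsets):
    """S502: shift the lowest-entropy previous box (x, y, w, h) by the
    per-component motion offsets to estimate the current-frame box."""
    return tuple(v + d for v, d in zip(prev_box, offsets))
```

A region whose scores concentrate on one class has entropy near zero, while evenly spread scores give the maximal value log|N1|, which is why the lowest-entropy queue image is taken as the most reliable reference.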
CN201811496717.4A 2018-12-07 2018-12-07 An animal detection method based on entropy and motion offset Expired - Fee Related CN109657577B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811496717.4A CN109657577B (en) 2018-12-07 2018-12-07 An animal detection method based on entropy and motion offset

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811496717.4A CN109657577B (en) 2018-12-07 2018-12-07 An animal detection method based on entropy and motion offset

Publications (2)

Publication Number Publication Date
CN109657577A CN109657577A (en) 2019-04-19
CN109657577B true CN109657577B (en) 2022-06-28

Family

ID=66113566

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811496717.4A Expired - Fee Related CN109657577B (en) 2018-12-07 2018-12-07 An animal detection method based on entropy and motion offset

Country Status (1)

Country Link
CN (1) CN109657577B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11094070B2 (en) * 2019-04-23 2021-08-17 Jiangnan University Visual multi-object tracking based on multi-Bernoulli filter with YOLOv3 detection
CN112906452A (en) * 2020-12-10 2021-06-04 叶平 Automatic identification, tracking and statistics method and system for antelope buffalo deer

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108304787A (en) * 2018-01-17 2018-07-20 河南工业大学 Road target detection method based on convolutional neural networks
CN108805064A (en) * 2018-05-31 2018-11-13 中国农业大学 A kind of fish detection and localization and recognition methods and system based on deep learning
CN108830192A (en) * 2018-05-31 2018-11-16 珠海亿智电子科技有限公司 Vehicle and detection method of license plate under vehicle environment based on deep learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9881234B2 (en) * 2015-11-25 2018-01-30 Baidu Usa Llc. Systems and methods for end-to-end object detection

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108304787A (en) * 2018-01-17 2018-07-20 河南工业大学 Road target detection method based on convolutional neural networks
CN108805064A (en) * 2018-05-31 2018-11-13 中国农业大学 A kind of fish detection and localization and recognition methods and system based on deep learning
CN108830192A (en) * 2018-05-31 2018-11-16 珠海亿智电子科技有限公司 Vehicle and detection method of license plate under vehicle environment based on deep learning

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Detecting plains and Grevy's Zebras in the real world;Jason Parham et al.;《2016 IEEE Winter Applications of Computer Vision Workshops (WACVW)》;20160519;1-9 *
基于似物性采样和核化相关滤波器的目标跟踪算法研究;王鹏飞;《中国优秀博硕士学位论文全文数据库(硕士)信息科技辑》;20180215(第02期);I138-1994 *
基于快速区域建议网络的图像多目标分割算法;黄劲潮;《山东大学学报(工学版)》;20180525(第04期);24-30+40 *
基于深度学习的水面无人船前方船只图像识别方法;王贵槐等;《船舶工程》;20180425(第04期);28-31+108 *
基于视频的野生动物目标检测算法研究;陈建促;《中国优秀硕士学位论文全文数据库 农业科技辑》;20190815(第08期);D051-4 *

Also Published As

Publication number Publication date
CN109657577A (en) 2019-04-19

Similar Documents

Publication Publication Date Title
Jana et al. YOLO based Detection and Classification of Objects in video records
Nguyen et al. Yolo based real-time human detection for smart video surveillance at the edge
CN112669275B (en) YOLOv3 algorithm-based PCB surface defect detection method and device
CN112861635B (en) Fire disaster and smoke real-time detection method based on deep learning
CN111640089B (en) Defect detection method and device based on feature map center point
CN110796141B (en) Target detection method and related equipment
CN112200131B (en) A vehicle collision detection method based on vision, intelligent terminal and storage medium
CN112836639A (en) Pedestrian multi-target tracking video recognition method based on improved YOLOv3 model
CN111428733B (en) Zero-shot object detection method and system based on semantic feature space conversion
CN110991311A (en) A target detection method based on densely connected deep network
CN110647816B (en) Target detection method for real-time monitoring of goods shelf medicines
CN110610165A (en) A Ship Behavior Analysis Method Based on YOLO Model
CN107909027A (en) It is a kind of that there is the quick human body target detection method for blocking processing
CN112149664A (en) Target detection method for optimizing classification and positioning tasks
CN113496260B (en) Detection method for irregular operations of grain depot personnel based on improved YOLOv3 algorithm
CN113011322A (en) Detection model training method and detection method for specific abnormal behaviors of monitoring video
CN115512387A (en) Construction site safety helmet wearing detection method based on improved YOLOV5 model
CN113780145A (en) Sperm morphology detection method, sperm morphology detection device, computer equipment and storage medium
CN111259736B (en) Real-time pedestrian detection method based on deep learning in complex environment
CN112149665A (en) High-performance multi-scale target detection method based on deep learning
CN113312968B (en) Real abnormality detection method in monitoring video
Jia et al. Forest fire detection and recognition using YOLOv8 algorithms from UAVs images
CN113989726A (en) Building site safety helmet identification method and system
CN114882428A (en) Target detection method based on attention mechanism and multi-scale fusion
CN103020580A (en) Rapid human face detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20220628