WO2018120038A1 - Method and device for target detection - Google Patents
Method and device for target detection
- Publication number
- WO2018120038A1 (PCT/CN2016/113548)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- detected
- target
- feature
- depth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
Definitions
- the present application relates to the field of artificial intelligence technologies, and in particular, to a method and apparatus for target detection.
- the existing method of target detection is generally divided into three stages: first, candidate regions are selected in a given image; then feature extraction is performed on these candidate regions; and finally the candidate regions are classified using a trained classifier, so that all possible target objects are detected.
- when the candidate regions are selected in the first stage, current mainstream target detection methods generally use information such as texture, edge, and color in the image to find possible locations of the target in advance, and extract hundreds to thousands of candidate regions.
- the above candidate region-based target detection method has the following disadvantages:
- the embodiment of the present application provides a target detection method and device, which mainly solves the problem that the target detection method in the prior art has limited target detection precision and large calculation amount.
- the present application provides a target detection method, including: acquiring an image to be detected and depth values of a plurality of pixel points in the image to be detected; and performing target detection on the image to be detected in combination with the depth value of each pixel.
- the present application provides a target detecting apparatus, including: an acquiring unit configured to acquire an image to be detected and depth values of a plurality of pixels in the image to be detected; and a target detecting unit configured to perform target detection on the image to be detected in combination with the depth value of each pixel acquired by the acquiring unit.
- the present application provides a computer storage medium for storing computer software instructions, comprising program code designed to perform the object detection method of the first aspect.
- the present application provides a computer program product that can be directly loaded into an internal memory of a computer and contains software code; when loaded and executed by the computer, the computer program implements the target detection method according to the first aspect.
- the present application provides an electronic device, including: a memory, a communication interface, and a processor, where the memory is configured to store computer executable code, the processor is configured to execute the computer executable code to control performance of the target detection method of the first aspect, and the communication interface is used for data transmission between the electronic device and an external device.
- the present application provides a robot, including the electronic device of the fifth aspect.
- in the solution provided by the present application, when performing target detection, an image to be detected and depth values of a plurality of pixels in the image to be detected are first acquired, and target detection is performed on the image to be detected in combination with the depth value of each pixel.
- compared with prior-art target detection based on image texture, edge, and color information, the present application takes the depth value into account, so target objects that are similar in color and texture but differ in depth can be distinguished, which in turn improves the accuracy of target detection.
- FIG. 1 is a schematic structural diagram of a target detection system according to an embodiment of the present application.
- FIG. 2 is a schematic flowchart of a target detection method according to an embodiment of the present application.
- FIG. 3 is a schematic diagram of an image to be detected, a depth image, and initial candidate regions obtained by dividing with the method shown in FIG. 2 according to an embodiment of the present application;
- FIG. 4 is a schematic flowchart diagram of another object detection method according to an embodiment of the present application.
- FIG. 5 is a schematic diagram of splitting an initial candidate region shown in FIG. 3 to obtain a target candidate region by using the method shown in FIG. 4 according to an embodiment of the present application;
- FIG. 6 is a schematic flowchart diagram of still another object detection method according to an embodiment of the present application.
- FIG. 7 is a schematic structural diagram of a target detecting device according to an embodiment of the present application.
- FIG. 8 is a schematic structural diagram of another object detecting apparatus according to an embodiment of the present application.
- FIG. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
- the embodiment of the present application provides a target detection system.
- the system includes: an image collection device 11 and a target detection device 12 .
- the image capturing device 11 is configured to perform image collection on the area to be detected and send the collected image to the target detecting device 12 .
- Illustratively, the image capturing device 11 may be one or more cameras capable of acquiring two-dimensional image information, a binocular camera capable of acquiring three-dimensional information, or the like.
- the target detecting device 12 is configured to perform an analysis process on the image received from the image capturing device 11 to perform target detection using the received image, and the target detecting device 12 may be a device having a processing function, such as a server or the like.
- the specific implementation of the image capturing device 11 and the target detecting device 12 can refer to the prior art, and details are not described herein again.
- the embodiment of the present application provides a target detection method, which can be applied to the system shown in FIG. 1.
- the execution body of the method may be the target detection device 12 shown in FIG. 1.
- the following description is made with the execution subject as the target detecting device 12, as shown in FIG. 2, the method includes:
- Step 101 Acquire an image to be detected and a depth value of a plurality of pixel points in the image to be detected.
- the image to be detected is an image obtained by photographing the area to be detected, and may be a directly obtained image, or an image obtained by performing grayscale conversion, denoising, or similar processing on the directly obtained color image.
- an image obtained by photographing a detection area using an ordinary camera or a mobile phone having an imaging function can be used as an image to be detected referred to in the present application.
- the depth value may be obtained by acquiring a depth image corresponding to the image to be detected; and determining a depth value of each pixel of the image to be detected from the depth image.
- the depth image is also referred to as a range image, and refers to an image in which the distance (or depth) from an image collector, such as a binocular camera, to each point in the area to be detected is used as the pixel value.
- the depth image can directly reflect the geometry of the visible surface of each object, that is, the contour of each object can be determined directly from it.
- each pixel represents the distance from the object to the camera plane at that particular (x, y) coordinate in the field of view of the image collector; therefore, each pixel point in the depth image corresponds to a depth value and represents the depth of the corresponding object in the area to be detected.
- common depth image acquisition methods include lidar depth imaging, computer stereo vision imaging, the coordinate measuring machine method, the moiré fringe method, and the structured light method; for their specific implementation, reference may be made to the prior art, and details are not described here.
- Step 102 Perform target detection on the image to be detected in combination with the depth value of each pixel.
- pixels in the image to be detected whose depth values are in the same range may be divided into the same candidate region.
- the present application respectively shows an image to be detected, a depth image, and a candidate region 1 and a candidate region 2 which are divided after performing step 102.
- after the candidate regions are determined according to the depth values of the pixels, feature extraction may be performed directly on the candidate regions, and finally the candidate regions are classified using a trained classifier, thereby achieving target detection.
- for the specific implementation process of this step, reference may be made to the prior art, and details are not described here.
- alternatively, the candidate region determined according to the depth value may be used as an initial candidate region, and the initial candidate region is further split to obtain target candidate regions; as shown in FIG. 4, step 102 may be implemented as:
- Step 201 Divide pixel points whose depth values are in the same range into the same initial candidate area.
- Step 202 Split the initial candidate region into target candidate regions according to image features.
- Step 203 Determine a target in the target candidate area.
- the image feature includes any one or more of the following: a color feature, a texture feature, a structural feature, a face feature, or a contour feature.
- when the image feature is a contour feature, step 202 may be specifically implemented as: respectively detecting contour lines in each initial candidate region; and, when at least one initial candidate region includes at least two mutually independent closed contour lines, splitting each such initial candidate region into at least two target candidate regions such that each target candidate region includes at most one closed contour line.
- after splitting, the initial candidate region 1 can be split into two target candidate regions, and the target candidate regions shown in FIG. 5 are obtained.
- when the image feature is a color feature, step 202 may be specifically implemented as: respectively detecting color features in each initial candidate region; when at least one initial candidate region includes at least two colors, splitting each such initial candidate region into at least two target candidate regions such that each target candidate region contains one color feature.
- step 102 includes:
- Step 301 Divide the image to be detected into initial candidate regions according to image features.
- the image feature includes any one or more of the following: a color feature, a texture feature, a structural feature, a face feature, or a contour feature.
- the specific implementation of the method for dividing the image to be detected according to the image feature to obtain the initial candidate region may refer to the implementation process of region proposal for one region in the prior art.
- Step 302 Remove the initial candidate region where the depth value of the pixel is not in the same range.
- Step 303 Determine a target in the remaining initial candidate regions.
- in the solution provided by the present application, when performing target detection, an image to be detected and depth values of a plurality of pixels in the image to be detected are first acquired, and target detection is performed on the image to be detected in combination with the depth value of each pixel.
- compared with prior-art target detection based on image texture, edge, and color information, the present application takes the depth value into account, so target objects that are similar in color and texture but differ in depth can be distinguished, which in turn improves the accuracy of target detection.
- in addition, the depth information contained in the area to be detected is in most cases less than the color information it contains; for example, an object may include multiple colors while the entire object may correspond to only one depth value, so the amount of calculation for obtaining candidate regions by depth-value division is also greatly reduced.
- the target detecting device 12 may also output information such as a category, an outline, and the like of the target.
- the method provided by the embodiment of the present application can be used in scenarios in which target recognition is required, for example in a mobile robot, which can use the method of the present application to automatically identify objects in its environment in order to make decisions; it can also be applied when assisting a user in finding a specific target object.
- the method provided by this application can be applied to any process that requires detection and identification of an object.
- the embodiment of the present application may perform the division of the function module on the target detection device or the like according to the above method example.
- each function module may be divided according to each function, or two or more functions may be integrated into one processing module.
- the above integrated modules can be implemented in the form of hardware or in the form of software functional modules. It should be noted that the division of the module in the embodiment of the present application is schematic, and is only a logical function division, and the actual implementation may have another division manner.
- FIG. 7 is a schematic diagram showing a possible structure of the target detecting device involved in the foregoing embodiment.
- the target detecting device includes: an obtaining unit 401 and a target detecting unit 402.
- the obtaining unit 401 is configured to support the target detecting device to perform the process 101 in FIG. 2;
- the target detecting unit 402 is configured to support the target detecting device to perform the process 102 in FIG. 2, the processes 201, 202, 203 in FIG. 4, and the processes 301, 302, 303 in FIG. 6. All the related content of the steps involved in the foregoing method embodiments may be referred to the functional descriptions of the corresponding functional modules, and details are not described here.
- FIG. 8 shows a possible structural diagram of the target detecting device involved in the above embodiment when an integrated unit is used.
- the target detecting device includes a processing module 501 and a communication module 502.
- the processing module 501 is configured to perform control management on the action of the target detecting device.
- the processing module 501 is configured to support the target detecting device to perform the processes 101, 102, and 103 in FIG. 2, the processes 201, 202, and 203 in FIG. 4, the processes 301, 302, and 303 in FIG. 6, and/or other processes for the techniques described herein.
- the communication module 502 is configured to support communication of the target detection device with other network entities, for example with the functional modules or network entities shown in FIG. 1.
- the target detecting device may further include a storage module 503 for storing program codes and data of the target detecting device.
- the processing module 501 can be a processor or a controller, for example, a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure.
- the processor may also be a combination that implements computing functions, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
- the communication module 502 can be a transceiver, a transceiver circuit, a communication interface, or the like.
- the storage module 503 can be a memory.
- when the processing module 501 is a processor, the communication module 502 is a communication interface, and the storage module 503 is a memory, the target detection device according to the embodiment of the present application may be the electronic device shown in FIG. 9.
- the electronic device includes a processor 601, a communication interface 602, a memory 603, and a bus 604.
- the communication interface 602, the processor 601, and the memory 603 are connected to each other through a bus 604.
- the bus 604 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like.
- the bus can be divided into an address bus, a data bus, a control bus, and the like; for ease of representation, only one thick line is shown in FIG. 9, but this does not mean that there is only one bus or only one type of bus.
- the steps of a method or algorithm described in connection with the present disclosure may be implemented in a hardware or may be implemented by a processor executing software instructions.
- the software instructions may be composed of corresponding software modules, which may be stored in a random access memory (RAM), a flash memory, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art.
- An exemplary storage medium is coupled to the processor to enable the processor to read information from, and write information to, the storage medium.
- the storage medium can also be an integral part of the processor.
- the processor and the storage medium can be located in an ASIC.
- the functions described herein can be implemented in hardware, software, firmware, or any combination thereof.
- the functions may be stored in a computer readable medium or transmitted as one or more instructions or code on a computer readable medium.
- Computer readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one location to another.
- a storage medium may be any available media that can be accessed by a general purpose or special purpose computer.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
Description
The present application relates to the field of artificial intelligence technologies, and in particular, to a method and apparatus for target detection.
The existing method of target detection is generally divided into three stages: first, candidate regions are selected in a given image; then feature extraction is performed on these candidate regions; and finally the candidate regions are classified using a trained classifier, so that all possible target objects are detected. When the candidate regions (region proposals) are selected in the first stage, current mainstream target detection methods generally use information such as texture, edge, and color in the image to find possible locations of the target in advance, and extract hundreds to thousands of candidate regions. The above candidate region-based target detection method has the following disadvantages:
1. Subsequent feature extraction for hundreds to thousands of candidate regions is not only computationally intensive, but the number of candidate regions generally far exceeds the number of objects contained in the image, making many of the calculations redundant.
2. Some objects in the image are adjacent in two-dimensional space but have different depth values. Because their colors and textures are similar, they are likely to be extracted as a whole into the same candidate region, which affects the accuracy of target detection.
Summary of the invention
The embodiments of the present application provide a target detection method and device, which mainly solve the problems of limited detection precision and a large amount of calculation in prior-art target detection methods.
To achieve the above objective, the embodiments of the present application adopt the following technical solutions:
In a first aspect, the present application provides a target detection method, including: acquiring an image to be detected and depth values of a plurality of pixel points in the image to be detected; and performing target detection on the image to be detected in combination with the depth value of each pixel.
In a second aspect, the present application provides a target detecting apparatus, including: an acquiring unit configured to acquire an image to be detected and depth values of a plurality of pixels in the image to be detected; and a target detecting unit configured to perform target detection on the image to be detected in combination with the depth value of each pixel acquired by the acquiring unit.
In a third aspect, the present application provides a computer storage medium for storing computer software instructions, comprising program code designed to perform the target detection method of the first aspect.
In a fourth aspect, the present application provides a computer program product that can be directly loaded into an internal memory of a computer and contains software code; when loaded and executed by the computer, the computer program implements the target detection method according to the first aspect.
In a fifth aspect, the present application provides an electronic device, including a memory, a communication interface, and a processor, where the memory is configured to store computer executable code, the processor is configured to execute the computer executable code to control performance of the target detection method of the first aspect, and the communication interface is used for data transmission between the electronic device and an external device.
In a sixth aspect, the present application provides a robot, including the electronic device of the fifth aspect.
In the solution provided by the present application, when performing target detection, an image to be detected and depth values of a plurality of pixels in the image to be detected are first acquired, and target detection is performed on the image to be detected in combination with the depth value of each pixel. Compared with prior-art target detection based on image texture, edge, and color information, the present application takes the depth value into account, so target objects that are similar in color and texture but differ in depth can be distinguished, which in turn improves the accuracy of target detection.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained from them by those of ordinary skill in the art without creative effort.
FIG. 1 is a schematic structural diagram of a target detection system according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of a target detection method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an image to be detected, a depth image, and initial candidate regions obtained by dividing with the method shown in FIG. 2 according to an embodiment of the present application;
FIG. 4 is a schematic flowchart of another target detection method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of splitting the initial candidate regions shown in FIG. 3 into target candidate regions by using the method shown in FIG. 4 according to an embodiment of the present application;
FIG. 6 is a schematic flowchart of still another target detection method according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a target detecting device according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of another target detecting device according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
The system architecture and service scenarios described in the embodiments of the present application are intended to explain the technical solutions of the embodiments more clearly and do not constitute a limitation on them; those of ordinary skill in the art will know that, with the evolution of system architectures and the emergence of new service scenarios, the technical solutions provided by the embodiments of the present application are equally applicable to similar technical problems.
It should be noted that, in the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate an example, illustration, or description. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present application should not be construed as preferred or advantageous over other embodiments or designs; rather, the use of such words is intended to present the relevant concepts in a concrete manner.
It should also be noted that, in the embodiments of the present application, "of", "corresponding", and "relevant" may sometimes be used interchangeably; when the distinction is not emphasized, the meanings to be expressed are the same.
The embodiment of the present application provides a target detection system. As shown in FIG. 1, the system includes an image collection device 11 and a target detection device 12. The image collection device 11 is configured to collect images of the area to be detected and send the collected images to the target detection device 12. Illustratively, the image collection device 11 may be one or more cameras capable of acquiring two-dimensional image information, a binocular camera capable of acquiring three-dimensional information, or the like. The target detection device 12 is configured to analyze and process the images received from the image collection device 11 so as to perform target detection using the received images; the target detection device 12 may be a device having a processing function, such as a server. For the specific implementation of the image collection device 11 and the target detection device 12, reference may be made to the prior art, and details are not described herein again.
The embodiment of the present application provides a target detection method, which can be applied to the system shown in FIG. 1. When applied to that system, the execution body of the method may be the target detection device 12 shown in FIG. 1. The following description takes the target detection device 12 as the execution body. As shown in FIG. 2, the method includes:
Step 101: Acquire an image to be detected and depth values of a plurality of pixel points in the image to be detected.
The image to be detected is an image obtained by photographing the area to be detected; it may be a directly obtained image, or an image obtained by performing grayscale conversion, denoising, or similar processing on the directly obtained color image.
Exemplarily, an image obtained by photographing the area to be detected with an ordinary camera, or with a device such as a mobile phone having an imaging function, can serve as the image to be detected referred to in the present application.
The depth values may be obtained by acquiring a depth image corresponding to the image to be detected and determining the depth value of each pixel of the image to be detected from the depth image.
The depth image, also referred to as a range image, is an image in which the distance (or depth) from an image collector, such as a binocular camera, to each point in the area to be detected is used as the pixel value. It can directly reflect the geometry of the visible surface of each object, that is, the contour of each object can be determined directly from it. In the depth image, each pixel represents the distance from the object to the camera plane at that particular (x, y) coordinate in the field of view of the image collector. Therefore, each pixel point in the depth image corresponds to a depth value and represents the depth of the corresponding object in the area to be detected. Common depth image acquisition methods include lidar depth imaging, computer stereo vision imaging, the coordinate measuring machine method, the moiré fringe method, and the structured light method; for the specific implementation of the depth image, reference may be made to the prior art, and details are not described herein again.
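As an illustration of the stereo vision case mentioned above (not part of the original application), the following minimal sketch converts a disparity map from a calibrated binocular camera into per-pixel depth values using the standard relation depth = focal_length × baseline / disparity. The variable names, focal length, and baseline are assumptions chosen only for the example.

```python
import numpy as np

def disparity_to_depth(disparity, focal_length_px, baseline_m):
    """Convert a disparity map (pixels) into a depth map (metres).

    Assumes a rectified stereo pair; pixels with zero disparity are left at
    depth 0 so that later processing can ignore them.
    """
    depth = np.zeros_like(disparity, dtype=np.float32)
    valid = disparity > 0
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth

# Hypothetical usage: a 480x640 disparity map, 700 px focal length, 12 cm baseline.
disparity = np.random.uniform(1.0, 64.0, size=(480, 640)).astype(np.float32)
depth_map = disparity_to_depth(disparity, focal_length_px=700.0, baseline_m=0.12)
```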
Step 102: Perform target detection on the image to be detected in combination with the depth value of each pixel.
In one implementation of step 102, pixels in the image to be detected whose depth values fall within the same range may be divided into the same candidate region.
Exemplarily, FIG. 3 shows an image to be detected, a depth image, and the candidate region 1 and candidate region 2 obtained after performing step 102.
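A minimal sketch of this depth-based candidate-region generation is given below, assuming the depth map is already aligned with the image to be detected. The bin width and minimum region size are illustrative assumptions rather than values specified by the application; connected-component labelling stands in for "pixels whose depth values fall within the same range".

```python
import numpy as np
from scipy import ndimage

def depth_candidate_regions(depth_map, bin_width_m=0.25, min_area=500):
    """Group pixels whose depth falls in the same range into candidate regions.

    Returns a list of (x, y, w, h) bounding boxes, one per connected
    component found inside each depth bin.
    """
    boxes = []
    bins = np.floor(depth_map / bin_width_m).astype(np.int32)
    for b in np.unique(bins):
        if b <= 0:                               # skip invalid / zero-depth pixels
            continue
        mask = bins == b
        labels, n = ndimage.label(mask)          # connected components in this depth range
        for region in ndimage.find_objects(labels):
            ys, xs = region
            h, w = ys.stop - ys.start, xs.stop - xs.start
            if h * w >= min_area:                # drop tiny bounding boxes
                boxes.append((xs.start, ys.start, w, h))
    return boxes
```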
After the candidate regions are determined according to the depth values of the pixels, feature extraction may be performed directly on the candidate regions, and finally the candidate regions are classified using a trained classifier, thereby achieving target detection. For the specific implementation of this step, reference may be made to the prior art, and details are not described herein again.
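For orientation only, the sketch below shows one conventional way to finish this stage: crop each candidate box, compute a simple feature vector, and pass it to a previously trained classifier. The feature choice (a colour histogram) and the classifier object are placeholders standing in for whatever trained classifier the prior art provides, not elements of the application itself.

```python
import numpy as np

def region_histogram(image, box, bins=8):
    """Flattened per-channel colour histogram of one candidate region."""
    x, y, w, h = box
    crop = image[y:y + h, x:x + w]
    hist = [np.histogram(crop[..., c], bins=bins, range=(0, 255))[0] for c in range(3)]
    feat = np.concatenate(hist).astype(np.float32)
    return feat / max(feat.sum(), 1.0)

def classify_regions(image, boxes, classifier):
    """Run a trained (e.g. scikit-learn style) classifier over candidate regions."""
    detections = []
    for box in boxes:
        label = classifier.predict(region_histogram(image, box)[None, :])[0]
        if label != 0:                  # assume label 0 means "background"
            detections.append((box, label))
    return detections
```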
In another implementation of step 102, the candidate regions determined according to the depth values may be used as initial candidate regions, and the initial candidate regions are further split to obtain target candidate regions. As shown in FIG. 4, step 102 may be specifically implemented as:
Step 201: Divide pixel points whose depth values fall within the same range into the same initial candidate region.
Step 202: Split the initial candidate regions into target candidate regions according to image features.
Step 203: Determine the target in the target candidate regions.
The image feature includes any one or more of the following: a color feature, a texture feature, a structural feature, a face feature, or a contour feature. For the specific implementation of splitting an initial candidate region according to image features to obtain target candidate regions, reference may be made to the region proposal process applied to a single region in the prior art.
For example, when the image feature is a contour feature, step 202 may be specifically implemented as: respectively detecting contour lines in each initial candidate region; and, when at least one initial candidate region includes at least two mutually independent closed contour lines, splitting each such initial candidate region into at least two target candidate regions such that each target candidate region includes at most one closed contour line.
For the initial candidate regions obtained in FIG. 3, after each initial candidate region is split according to the contour feature, initial candidate region 1 can be split into two target candidate regions, yielding the target candidate regions shown in FIG. 5.
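A minimal sketch of this contour-based split is shown below, using OpenCV's external-contour detection as a stand-in for the closed-contour-line detection described above. The edge-detection thresholds and the minimum contour area are illustrative assumptions, not parameters fixed by the application.

```python
import cv2
import numpy as np

def split_region_by_contours(gray_image, box, min_area=200):
    """Split one initial candidate region into target regions, one per closed contour.

    Returns bounding boxes in full-image coordinates. If fewer than two
    contours are found, the original box is returned unchanged.
    (Assumes OpenCV >= 4, where findContours returns two values.)
    """
    x, y, w, h = box
    crop = gray_image[y:y + h, x:x + w]
    edges = cv2.Canny(crop, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours if cv2.contourArea(c) >= min_area]
    if len(contours) < 2:
        return [box]
    out = []
    for c in contours:
        cx, cy, cw, ch = cv2.boundingRect(c)
        out.append((x + cx, y + cy, cw, ch))   # shift back to image coordinates
    return out
```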
As another example, when the image feature is a color feature, step 202 may be specifically implemented as: respectively detecting color features in each initial candidate region; when at least one initial candidate region includes at least two colors, splitting each such initial candidate region into at least two target candidate regions such that each target candidate region contains one color feature.
It should be noted that the above description only takes the contour feature and the color feature as examples; in practical applications, multiple features such as color and texture may be combined when dividing the target candidate regions.
In still another implementation of step 102, as shown in FIG. 6, the method includes:
Step 301: Divide the image to be detected into initial candidate regions according to image features.
The image feature includes any one or more of the following: a color feature, a texture feature, a structural feature, a face feature, or a contour feature. For the specific implementation of dividing the image to be detected into initial candidate regions according to image features, reference may be made to the region proposal process applied to a single region in the prior art.
Step 302: Remove the initial candidate regions in which the depth values of the pixels are not within the same range.
Step 303: Determine the target in the remaining initial candidate regions.
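The depth-consistency filter of step 302 can be sketched as follows: a feature-based proposal is discarded when the spread of depth values inside it exceeds a tolerance. The use of a 10th–90th percentile spread and the tolerance value are assumptions made for illustration; the application itself only requires that the pixels of a retained region lie within the same depth range.

```python
import numpy as np

def filter_proposals_by_depth(boxes, depth_map, max_spread_m=0.3):
    """Keep only proposals whose interior depth values lie in one narrow range."""
    kept = []
    for (x, y, w, h) in boxes:
        region = depth_map[y:y + h, x:x + w]
        region = region[region > 0]                 # ignore pixels with no depth reading
        if region.size == 0:
            continue
        lo, hi = np.percentile(region, [10, 90])    # robust depth spread of the region
        if hi - lo <= max_spread_m:
            kept.append((x, y, w, h))
    return kept
```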
In the solution provided by the present application, when performing target detection, an image to be detected and depth values of a plurality of pixels in the image to be detected are first acquired, and target detection is performed on the image to be detected in combination with the depth value of each pixel. Compared with prior-art target detection based on image texture, edge, and color information, the present application takes the depth value into account, so target objects that are similar in color and texture but differ in depth can be distinguished, which in turn improves the accuracy of target detection.
In addition, generally speaking, the depth information contained in the area to be detected is in most cases less than the color information it contains; for example, an object may include multiple colors while the entire object may correspond to only one depth value. Therefore, the amount of calculation for obtaining candidate regions by depth-value division is also greatly reduced.
Optionally, after the target is identified, the target detection device 12 may also output information such as the category and contour of the target.
The method provided by the embodiments of the present application can be used in scenarios in which target recognition is required, for example in a mobile robot, which can use the method of the present application to automatically identify objects in its environment in order to make decisions. It can also be applied when assisting a user in finding a specific target object. The method provided by this application can be applied to any process that requires detection and identification of objects.
Those skilled in the art will readily appreciate that, in combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, the present application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.
The embodiments of the present application may divide the target detection device and the like into functional modules according to the above method examples; for example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The above integrated modules can be implemented in the form of hardware or in the form of software functional modules. It should be noted that the division of modules in the embodiments of the present application is schematic and is only a logical functional division; other division manners are possible in actual implementation.
In the case where each functional module is divided according to its function, FIG. 7 shows a possible structural diagram of the target detection device involved in the foregoing embodiments. The target detection device includes an obtaining unit 401 and a target detecting unit 402. The obtaining unit 401 is configured to support the target detection device in performing the process 101 in FIG. 2; the target detecting unit 402 is configured to support the target detection device in performing the process 102 in FIG. 2, the processes 201, 202, and 203 in FIG. 4, and the processes 301, 302, and 303 in FIG. 6. All the related content of the steps involved in the foregoing method embodiments can be referred to the functional descriptions of the corresponding functional modules, and details are not described herein again.
In the case where an integrated unit is used, FIG. 8 shows a possible structural diagram of the target detection device involved in the foregoing embodiments. The target detection device includes a processing module 501 and a communication module 502. The processing module 501 is configured to control and manage the actions of the target detection device; for example, the processing module 501 is configured to support the target detection device in performing the processes 101, 102, and 103 in FIG. 2, the processes 201, 202, and 203 in FIG. 4, the processes 301, 302, and 303 in FIG. 6, and/or other processes for the techniques described herein. The communication module 502 is configured to support communication between the target detection device and other network entities, for example with the functional modules or network entities shown in FIG. 1. The target detection device may further include a storage module 503 for storing program code and data of the target detection device.
The processing module 501 may be a processor or a controller, for example a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and it may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure. The processor may also be a combination that implements computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The communication module 502 may be a transceiver, a transceiver circuit, a communication interface, or the like. The storage module 503 may be a memory.
When the processing module 501 is a processor, the communication module 502 is a communication interface, and the storage module 503 is a memory, the target detection device according to the embodiments of the present application may be the electronic device shown in FIG. 9.
Referring to FIG. 9, the electronic device includes a processor 601, a communication interface 602, a memory 603, and a bus 604. The communication interface 602, the processor 601, and the memory 603 are connected to each other through the bus 604. The bus 604 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is shown in FIG. 9, but this does not mean that there is only one bus or only one type of bus.
The steps of a method or algorithm described in connection with the present disclosure may be implemented in hardware or by a processor executing software instructions. The software instructions may consist of corresponding software modules, which may be stored in a random access memory (RAM), a flash memory, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a register, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor so that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be an integral part of the processor. The processor and the storage medium may be located in an ASIC.
Those skilled in the art should be aware that, in one or more of the above examples, the functions described in this application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored in a computer-readable medium or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include computer storage media and communication media, where communication media include any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
The specific embodiments described above further explain the purpose, technical solutions, and beneficial effects of the present application in detail. It should be understood that the above are only specific embodiments of the present application and are not intended to limit its scope of protection; any modification, equivalent replacement, improvement, or the like made on the basis of the technical solutions of the present application shall be included within the scope of protection of the present application.
Claims (14)
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2016/113548 WO2018120038A1 (en) | 2016-12-30 | 2016-12-30 | Method and device for target detection |
| CN201680016608.0A CN107636727A (en) | 2016-12-30 | 2016-12-30 | Target detection method and device |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2016/113548 WO2018120038A1 (en) | 2016-12-30 | 2016-12-30 | Method and device for target detection |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018120038A1 true WO2018120038A1 (en) | 2018-07-05 |
Family
ID=61113496
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2016/113548 Ceased WO2018120038A1 (en) | 2016-12-30 | 2016-12-30 | Method and device for target detection |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN107636727A (en) |
| WO (1) | WO2018120038A1 (en) |
Cited By (23)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110276742A (en) * | 2019-05-07 | 2019-09-24 | 平安科技(深圳)有限公司 | Tail light for train monitoring method, device, terminal and storage medium |
| CN110796659A (en) * | 2019-06-24 | 2020-02-14 | 科大讯飞股份有限公司 | Method, device, equipment and storage medium for identifying target detection result |
| CN110853127A (en) * | 2018-08-20 | 2020-02-28 | 浙江宇视科技有限公司 | Image processing method, device and equipment |
| CN111199198A (en) * | 2019-12-27 | 2020-05-26 | 深圳市优必选科技股份有限公司 | Image target positioning method, image target positioning device and mobile robot |
| CN111223111A (en) * | 2020-01-03 | 2020-06-02 | 歌尔股份有限公司 | Depth image contour generation method, device, equipment and storage medium |
| CN111324139A (en) * | 2018-12-13 | 2020-06-23 | 顺丰科技有限公司 | Unmanned aerial vehicle landing method, device, equipment and storage medium |
| CN111353115A (en) * | 2018-12-24 | 2020-06-30 | 中移(杭州)信息技术有限公司 | Method and device for generating Spanish chart |
| CN111507958A (en) * | 2020-04-15 | 2020-08-07 | 全球能源互联网研究院有限公司 | Target detection method, training method of detection model, and electronic device |
| CN111783584A (en) * | 2020-06-22 | 2020-10-16 | 杭州飞步科技有限公司 | Image target detection method, device, electronic device and readable storage medium |
| CN111898641A (en) * | 2020-07-01 | 2020-11-06 | 中国建设银行股份有限公司 | A target model detection, apparatus, electronic device and computer-readable storage medium |
| CN112149656A (en) * | 2020-09-29 | 2020-12-29 | 腾讯科技(深圳)有限公司 | Method, device, computer equipment and storage medium for determining the area of a ventilating cover of a cabinet |
| CN112258482A (en) * | 2020-10-23 | 2021-01-22 | 广东博智林机器人有限公司 | Building exterior wall mortar flow drop detection method and device |
| CN112446918A (en) * | 2019-09-04 | 2021-03-05 | 三赢科技(深圳)有限公司 | Method and device for positioning target object in image, computer device and storage medium |
| CN113420735A (en) * | 2021-08-23 | 2021-09-21 | 深圳市信润富联数字科技有限公司 | Contour extraction method, contour extraction device, contour extraction equipment, program product and storage medium |
| CN113538449A (en) * | 2020-04-20 | 2021-10-22 | 顺丰科技有限公司 | An image correction method, device, server and storage medium |
| CN114004788A (en) * | 2021-09-23 | 2022-02-01 | 中大(海南)智能科技有限公司 | Defect detection method, device, equipment and storage medium |
| CN114445347A (en) * | 2021-12-31 | 2022-05-06 | 深圳云天励飞技术股份有限公司 | Quality detection method of capsule medicine and related equipment |
| CN114529611A (en) * | 2022-01-19 | 2022-05-24 | 湖南视比特机器人有限公司 | Three-dimensional machine workpiece positioning method, system and storage medium |
| CN114693586A (en) * | 2020-12-25 | 2022-07-01 | 富泰华工业(深圳)有限公司 | Object detection method, device, electronic device and storage medium |
| CN114979617A (en) * | 2021-02-26 | 2022-08-30 | 浙江宇视科技有限公司 | Motor detection method, device and system for motorized lens |
| CN115019157A (en) * | 2022-07-06 | 2022-09-06 | 武汉市聚芯微电子有限责任公司 | Target detection method, device, equipment and computer readable storage medium |
| CN115601306A (en) * | 2022-09-21 | 2023-01-13 | 深圳市深汕特别合作区万泽精密科技有限公司(Cn) | Method and device for detecting porosity, electronic equipment and storage medium |
| CN116342737A (en) * | 2022-12-02 | 2023-06-27 | 浙江大华技术股份有限公司 | Ruled line drawing method, device, electronic device and storage medium |
Families Citing this family (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111179332B (en) * | 2018-11-09 | 2023-12-19 | 北京市商汤科技开发有限公司 | Image processing method and device, electronic equipment and storage medium |
| CN111210471B (en) * | 2018-11-22 | 2023-08-25 | 浙江欣奕华智能科技有限公司 | Positioning method, device and system |
| CN111383238B (en) * | 2018-12-28 | 2023-07-14 | Tcl科技集团股份有限公司 | Target detection method, target detection device and intelligent terminal |
| CN111950543B (en) * | 2019-05-14 | 2024-08-16 | 北京京东乾石科技有限公司 | A target detection method and device |
| CN110348333A (en) * | 2019-06-21 | 2019-10-18 | 深圳前海达闼云端智能科技有限公司 | Object detecting method, device, storage medium and electronic equipment |
| CN110502978A (en) * | 2019-07-11 | 2019-11-26 | 哈尔滨工业大学 | A Lidar Waveform Signal Classification Method Based on BP Neural Network Model |
| CN111366916B (en) * | 2020-02-17 | 2021-04-06 | 山东睿思奥图智能科技有限公司 | Method and device for determining distance between interaction target and robot and electronic equipment |
| CN114241222A (en) * | 2021-12-13 | 2022-03-25 | 深圳前海微众银行股份有限公司 | Image retrieval method and device |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7158656B2 (en) * | 1999-03-08 | 2007-01-02 | Vulcan Patents Llc | Three dimensional object pose estimation which employs dense depth information |
| CN101237522A (en) * | 2007-02-02 | 2008-08-06 | 华为技术有限公司 | A motion detection method and device |
| CN101657825A (en) * | 2006-05-11 | 2010-02-24 | 普莱姆传感有限公司 | Modeling Human Figures from Depth Maps |
| CN102402687A (en) * | 2010-09-13 | 2012-04-04 | 三星电子株式会社 | Method and device for detecting direction of rigid body parts based on depth information |
| CN102855459A (en) * | 2011-06-30 | 2013-01-02 | 株式会社理光 | Method and system for detecting and verifying specific foreground objects |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102243759B (en) * | 2010-05-10 | 2014-05-07 | 东北大学 | Three-dimensional lung vessel image segmentation method based on geometric deformation model |
| CN101840577B (en) * | 2010-06-11 | 2012-07-25 | 西安电子科技大学 | Image automatic segmentation method based on graph cut |
| CN103093473A (en) * | 2013-01-25 | 2013-05-08 | 北京理工大学 | Multi-target picture segmentation based on level set |
| CN104217225B (en) * | 2014-09-02 | 2018-04-24 | 中国科学院自动化研究所 | A kind of sensation target detection and mask method |
| CN105354838B (en) * | 2015-10-20 | 2018-04-10 | 努比亚技术有限公司 | The depth information acquisition method and terminal of weak texture region in image |
| CN105872477B (en) * | 2016-05-27 | 2018-11-23 | 北京旷视科技有限公司 | video monitoring method and video monitoring system |
| CN106250812B (en) * | 2016-07-15 | 2019-08-20 | 汤一平 | A kind of model recognizing method based on quick R-CNN deep neural network |
- 2016
- 2016-12-30 WO PCT/CN2016/113548 patent/WO2018120038A1/en not_active Ceased
- 2016-12-30 CN CN201680016608.0A patent/CN107636727A/en active Pending
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7158656B2 (en) * | 1999-03-08 | 2007-01-02 | Vulcan Patents Llc | Three dimensional object pose estimation which employs dense depth information |
| CN101657825A (en) * | 2006-05-11 | 2010-02-24 | 普莱姆传感有限公司 | Modeling Human Figures from Depth Maps |
| CN101237522A (en) * | 2007-02-02 | 2008-08-06 | 华为技术有限公司 | A motion detection method and device |
| CN102402687A (en) * | 2010-09-13 | 2012-04-04 | 三星电子株式会社 | Method and device for detecting direction of rigid body parts based on depth information |
| CN102855459A (en) * | 2011-06-30 | 2013-01-02 | 株式会社理光 | Method and system for detecting and verifying specific foreground objects |
Cited By (31)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110853127A (en) * | 2018-08-20 | 2020-02-28 | 浙江宇视科技有限公司 | Image processing method, device and equipment |
| CN111324139A (en) * | 2018-12-13 | 2020-06-23 | 顺丰科技有限公司 | Unmanned aerial vehicle landing method, device, equipment and storage medium |
| CN111353115B (en) * | 2018-12-24 | 2023-10-27 | 中移(杭州)信息技术有限公司 | Method and device for generating snowplow map |
| CN111353115A (en) * | 2018-12-24 | 2020-06-30 | 中移(杭州)信息技术有限公司 | Method and device for generating Spanish chart |
| CN110276742B (en) * | 2019-05-07 | 2023-10-10 | 平安科技(深圳)有限公司 | Train tail lamp monitoring method, device, terminal and storage medium |
| CN110276742A (en) * | 2019-05-07 | 2019-09-24 | 平安科技(深圳)有限公司 | Tail light for train monitoring method, device, terminal and storage medium |
| CN110796659A (en) * | 2019-06-24 | 2020-02-14 | 科大讯飞股份有限公司 | Method, device, equipment and storage medium for identifying target detection result |
| CN110796659B (en) * | 2019-06-24 | 2023-12-01 | 科大讯飞股份有限公司 | Target detection result identification method, device, equipment and storage medium |
| CN112446918A (en) * | 2019-09-04 | 2021-03-05 | 三赢科技(深圳)有限公司 | Method and device for positioning target object in image, computer device and storage medium |
| CN111199198A (en) * | 2019-12-27 | 2020-05-26 | 深圳市优必选科技股份有限公司 | Image target positioning method, image target positioning device and mobile robot |
| CN111199198B (en) * | 2019-12-27 | 2023-08-04 | 深圳市优必选科技股份有限公司 | Image target positioning method, image target positioning device and mobile robot |
| CN111223111A (en) * | 2020-01-03 | 2020-06-02 | 歌尔股份有限公司 | Depth image contour generation method, device, equipment and storage medium |
| CN111223111B (en) * | 2020-01-03 | 2023-04-25 | 歌尔光学科技有限公司 | Depth image contour generation method, device, equipment and storage medium |
| CN111507958A (en) * | 2020-04-15 | 2020-08-07 | 全球能源互联网研究院有限公司 | Target detection method, training method of detection model, and electronic device |
| CN111507958B (en) * | 2020-04-15 | 2023-05-26 | 全球能源互联网研究院有限公司 | Target detection method, training method of detection model and electronic equipment |
| CN113538449A (en) * | 2020-04-20 | 2021-10-22 | 顺丰科技有限公司 | An image correction method, device, server and storage medium |
| CN111783584A (en) * | 2020-06-22 | 2020-10-16 | 杭州飞步科技有限公司 | Image target detection method, device, electronic device and readable storage medium |
| CN111783584B (en) * | 2020-06-22 | 2023-08-08 | 杭州飞步科技有限公司 | Image target detection method, device, electronic equipment and readable storage medium |
| CN111898641A (en) * | 2020-07-01 | 2020-11-06 | 中国建设银行股份有限公司 | A target model detection, apparatus, electronic device and computer-readable storage medium |
| CN112149656A (en) * | 2020-09-29 | 2020-12-29 | 腾讯科技(深圳)有限公司 | Method, device, computer equipment and storage medium for determining the area of a ventilating cover of a cabinet |
| CN112258482A (en) * | 2020-10-23 | 2021-01-22 | 广东博智林机器人有限公司 | Building exterior wall mortar flow drop detection method and device |
| CN114693586A (en) * | 2020-12-25 | 2022-07-01 | 富泰华工业(深圳)有限公司 | Object detection method, device, electronic device and storage medium |
| CN114979617A (en) * | 2021-02-26 | 2022-08-30 | 浙江宇视科技有限公司 | Motor detection method, device and system for motorized lens |
| CN113420735A (en) * | 2021-08-23 | 2021-09-21 | 深圳市信润富联数字科技有限公司 | Contour extraction method, contour extraction device, contour extraction equipment, program product and storage medium |
| CN114004788A (en) * | 2021-09-23 | 2022-02-01 | 中大(海南)智能科技有限公司 | Defect detection method, device, equipment and storage medium |
| CN114445347A (en) * | 2021-12-31 | 2022-05-06 | 深圳云天励飞技术股份有限公司 | Quality detection method of capsule medicine and related equipment |
| CN114529611A (en) * | 2022-01-19 | 2022-05-24 | 湖南视比特机器人有限公司 | Three-dimensional machine workpiece positioning method, system and storage medium |
| CN115019157A (en) * | 2022-07-06 | 2022-09-06 | 武汉市聚芯微电子有限责任公司 | Target detection method, device, equipment and computer readable storage medium |
| CN115019157B (en) * | 2022-07-06 | 2024-03-22 | 武汉市聚芯微电子有限责任公司 | Object detection method, device, equipment and computer readable storage medium |
| CN115601306A (en) * | 2022-09-21 | 2023-01-13 | 深圳市深汕特别合作区万泽精密科技有限公司(Cn) | Method and device for detecting porosity, electronic equipment and storage medium |
| CN116342737A (en) * | 2022-12-02 | 2023-06-27 | 浙江大华技术股份有限公司 | Ruled line drawing method, device, electronic device and storage medium |
Also Published As
| Publication number | Publication date |
|---|---|
| CN107636727A (en) | 2018-01-26 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2018120038A1 (en) | Method and device for target detection | |
| CN112528831B (en) | Multi-target attitude estimation method, multi-target attitude estimation device and terminal equipment | |
| CN106909873B (en) | The method and apparatus of recognition of face | |
| CN111222395A (en) | Target detection method and device and electronic equipment | |
| WO2018120027A1 (en) | Method and apparatus for detecting obstacles | |
| JP5538868B2 (en) | Image processing apparatus, image processing method and program | |
| CN110458772B (en) | Point cloud filtering method and device based on image processing and storage medium | |
| CN106709500B (en) | Image feature matching method | |
| CN110335216A (en) | Image processing method, image processing apparatus, terminal device and readable storage medium | |
| CN107221005B (en) | Object detection method and device | |
| WO2018027527A1 (en) | Optical system imaging quality detection method and apparatus | |
| CN114638891A (en) | Target detection positioning method and system based on image and point cloud fusion | |
| CN109345484A (en) | A depth map repair method and device | |
| CN113762253A (en) | Speckle extraction method and device, electronic device and storage medium | |
| CN111739071A (en) | Rapid iterative registration method, medium, terminal and device based on initial value | |
| WO2024239630A1 (en) | Defect detection method and apparatus, device, and storage medium | |
| CN110673607A (en) | Feature point extraction method and device in dynamic scene and terminal equipment | |
| CN108229583B (en) | Method and device for fast template matching based on main direction difference characteristics | |
| CN116051736A (en) | Three-dimensional reconstruction method, device, edge equipment and storage medium | |
| CN111126296A (en) | Fruit positioning method and device | |
| CN118967813A (en) | Crack location and analysis method of concrete curved surface structure combined with binocular camera and LiDAR | |
| WO2022174603A1 (en) | Pose prediction method, pose prediction apparatus, and robot | |
| CN106991379B (en) | Human skin recognition method and device combined with depth information and electronic device | |
| CN111383185A (en) | Hole filling method based on dense disparity map and vehicle-mounted equipment | |
| CN112966658B (en) | Robot navigation method, device, terminal equipment and computer readable storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16925538; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 29.10.2019) |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 16925538; Country of ref document: EP; Kind code of ref document: A1 |