CN114611635A - Object identification method and device, storage medium and electronic device - Google Patents
- Publication number
- CN114611635A (application CN202210506905.0A)
- Authority
- CN
- China
- Prior art keywords
- target
- data
- feature
- point cloud
- point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Software Systems (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Image Analysis (AREA)
Abstract
Embodiments of the present invention provide an object recognition method and apparatus, a storage medium, and an electronic apparatus. The method includes: determining a target point cloud obtained by a first device photographing a target area at a target moment; determining target sub-data included in target data collected by a second device at the target moment, and determining data features of the target sub-data; mapping each pixel included in the target sub-data into the point-cloud coordinate system of the target point cloud to obtain multiple target pixels; determining point-cloud features based on the multiple target pixels and the target point cloud; fusing the data features and the point-cloud features based on the dimension parameters of the data features to obtain fused features; and recognizing the target object based on the fused features. The invention solves the problem of inaccurate object recognition in the related art and improves the accuracy of object recognition.
Description
Technical Field
Embodiments of the present invention relate to the field of computers, and in particular to an object recognition method and apparatus, a storage medium, and an electronic apparatus.
Background
In the related art, object recognition is usually implemented with image recognition under monocular vision. However, monocular vision has limitations: it is weak at measuring distance and object size and performs poorly in low light, which leads to inaccurate recognition results.
It can be seen that the related art suffers from inaccurate object recognition.
No effective solution to the above problems in the related art has yet been proposed.
Summary of the Invention
Embodiments of the present invention provide an object recognition method and apparatus, a storage medium, and an electronic apparatus, so as to at least solve the problem of inaccurate object recognition in the related art.
According to one embodiment of the present invention, an object recognition method is provided, including: determining a target point cloud obtained by a first device photographing a target area at a target moment; determining target sub-data included in target data collected by a second device at the target moment, and determining data features of the target sub-data, where the target data is data obtained by the second device photographing the target area, the angle at which the second device photographs the target area is the same as the angle at which the first device photographs it, and the target sub-data is the data of the target object included in the target data; mapping each pixel included in the target sub-data into the point-cloud coordinate system of the target point cloud to obtain multiple target pixels; determining point-cloud features based on the multiple target pixels and the target point cloud; fusing the data features and the point-cloud features based on the dimension parameters of the data features to obtain fused features; and recognizing the target object based on the fused features.
According to another embodiment of the present invention, an object recognition apparatus is provided, including: a first determination module configured to determine a target point cloud obtained by a first device photographing a target area at a target moment; a second determination module configured to determine target sub-data included in target data collected by a second device at the target moment and to determine data features of the target sub-data, where the target data is data obtained by the second device photographing the target area, the angle at which the second device photographs the target area is the same as the angle at which the first device photographs it, and the target sub-data is the data of the target object included in the target data; a mapping module configured to map each pixel included in the target sub-data into the point-cloud coordinate system of the target point cloud to obtain multiple target pixels; a third determination module configured to determine point-cloud features based on the multiple target pixels and the target point cloud; a fusion module configured to fuse the data features and the point-cloud features based on the dimension parameters of the data features to obtain fused features; and a recognition module configured to recognize the target object based on the fused features.
According to yet another embodiment of the present invention, a computer-readable storage medium is provided, in which a computer program is stored; when the computer program is executed by a processor, the steps of any of the above methods are implemented.
According to yet another embodiment of the present invention, an electronic apparatus is provided, including a memory and a processor; the memory stores a computer program, and the processor is configured to run the computer program to perform the steps of any of the above method embodiments.
Through the present invention, the target point cloud obtained by the first device photographing the target area at the target moment is determined; the target sub-data included in the target data collected by the second device at the target moment is determined, together with the data features of the target sub-data, where the target data is data obtained by the second device photographing the target area at the same angle as the first device, and the target sub-data is the data of the target object included in the target data; each pixel included in the target sub-data is mapped into the point-cloud coordinate system of the target point cloud to obtain multiple target pixels; point-cloud features are determined from the multiple target pixels and the target point cloud; the data features and the point-cloud features are fused according to the dimension parameters of the data features to obtain fused features; and the target object is recognized from the fused features. Because the target object is recognized from features that fuse point-cloud features with data features, i.e., data collected by multiple devices is fused during recognition, the problem of inaccurate object recognition in the related art is solved and the accuracy of object recognition is improved.
Brief Description of the Drawings
FIG. 1 is a block diagram of the hardware structure of a mobile terminal running an object recognition method according to an embodiment of the present invention;
FIG. 2 is a flowchart of an object recognition method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an auxiliary network module according to an exemplary embodiment of the present invention;
FIG. 4 is a flowchart of an object recognition method according to a specific embodiment of the present invention;
FIG. 5 is a structural block diagram of an object recognition apparatus according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
It should be noted that the terms "first", "second", and the like in the description, claims, and drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence.
The method embodiments provided in this application may be executed on a mobile terminal, a computer terminal, or a similar computing device. Taking a mobile terminal as an example, FIG. 1 is a block diagram of the hardware structure of a mobile terminal running an object recognition method according to an embodiment of the present invention. As shown in FIG. 1, the mobile terminal may include one or more processors 102 (only one is shown in FIG. 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)) and a memory 104 for storing data; the mobile terminal may further include a transmission device 106 for communication functions and an input/output device 108. A person of ordinary skill in the art will understand that the structure shown in FIG. 1 is merely illustrative and does not limit the structure of the mobile terminal. For example, the mobile terminal may include more or fewer components than shown in FIG. 1, or have a different configuration.
The memory 104 may be used to store computer programs, for example, software programs and modules of application software, such as the computer program corresponding to the object recognition method in the embodiments of the present invention. By running the computer programs stored in the memory 104, the processor 102 executes various functional applications and data processing, i.e., implements the method described above. The memory 104 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102 and connected to the mobile terminal through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or send data via a network. Specific examples of the network may include a wireless network provided by the communication provider of the mobile terminal. In one example, the transmission device 106 includes a network interface controller (NIC), which can be connected to other network devices through a base station so as to communicate with the Internet. In another example, the transmission device 106 may be a radio frequency (RF) module, which communicates with the Internet wirelessly.
This embodiment provides an object recognition method. FIG. 2 is a flowchart of the object recognition method according to an embodiment of the present invention. As shown in FIG. 2, the process includes the following steps:
Step S202: determine a target point cloud obtained by a first device photographing a target area at a target moment;
Step S204: determine target sub-data included in target data collected by a second device at the target moment, and determine data features of the target sub-data, where the target data is data obtained by the second device photographing the target area, the angle at which the second device photographs the target area is the same as the angle at which the first device photographs it, and the target sub-data is the data of the target object included in the target data;
Step S206: map each pixel included in the target sub-data into the point-cloud coordinate system of the target point cloud to obtain multiple target pixels;
Step S208: determine point-cloud features based on the multiple target pixels and the target point cloud;
Step S210: fuse the data features and the point-cloud features based on the dimension parameters of the data features to obtain fused features;
Step S212: recognize the target object based on the fused features.
In the above embodiment, the first device may be a radar, such as a lidar, a microwave radar, or a millimeter-wave radar. The second device may be a camera device, such as a camera (monocular or multi-camera) or a video recorder. The target data may be images, video, and so on collected by the camera device. The first device and the second device may be installed at the same height, with the same orientation, in adjacent positions, so that they photograph the target area at the same angle. Alternatively, the two devices may have the same orientation but different positions, in which case their shooting angles can be adjusted so that they photograph the target area at the same angle.
In the above embodiment, the target sub-data may be the data of a target object. To determine it, an object detection algorithm may run 2D box detection on the target data collected by the second device, frame the target object with a rectangular box, and take the data inside the box as the target sub-data. The target sub-data may include the position of the rectangular box, its pixel size, and so on. That is, the target sub-data can be expressed as (x_i, y_i, w_i, h_i), where (x_i, y_i) is the pixel coordinate of the target point of the i-th object's rectangular box, and (w_i, h_i) is the pixel width and height of the i-th object's (e.g., the i-th vehicle's) box. The target point may be the center point or a vertex of the rectangular box.
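As an illustration, the cropping of each object's sub-data from the camera frame might be sketched as follows; the function name is hypothetical, and boxes are assumed to be given as center-point pixel coordinates plus pixel width/height, matching the (x_i, y_i, w_i, h_i) form above:

```python
import numpy as np

def crop_target_subdata(image, boxes):
    """Crop each detected rectangle out of the camera frame.

    Each box is (x, y, w, h): the pixel coordinates of the box's
    center point and its pixel width/height.
    """
    img_h, img_w = image.shape[:2]
    crops = []
    for x, y, w, h in boxes:
        # clamp the box to the image bounds before slicing
        x0, y0 = max(int(x - w / 2), 0), max(int(y - h / 2), 0)
        x1, y1 = min(int(x + w / 2), img_w), min(int(y + h / 2), img_h)
        crops.append(image[y0:y1, x0:x1])
    return crops

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # dummy camera image
subs = crop_target_subdata(frame, [(320, 240, 100, 60)])
print(subs[0].shape)  # (60, 100, 3)
```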
In the above embodiment, the first device and the second device may be jointly calibrated in advance: the camera extrinsics that map the lidar coordinate system to the monocular-camera coordinate system can be pre-computed so that the target point cloud scanned by the lidar is correctly projected onto the camera image and correctly mapped. The target point cloud collected by the first device may include point clouds of multiple objects, i.e., multiple sub-point-clouds; likewise, the target data may include data of multiple objects, i.e., multiple target sub-data. Each pixel included in the target sub-data can be mapped into the point-cloud coordinate system of the target point cloud to obtain multiple target pixels.
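The lidar-to-camera projection step can be sketched as below. The 4x4 extrinsic `T_cam_lidar` and the intrinsic matrix `K` are assumed to come from the joint calibration mentioned above; the identity extrinsic and the focal length in the usage example are purely illustrative values:

```python
import numpy as np

def project_points(points_xyz, T_cam_lidar, K):
    """Project lidar points (N, 3) onto the camera image plane.

    T_cam_lidar: 4x4 lidar-to-camera extrinsic from joint calibration;
    K: 3x3 camera intrinsic matrix. Returns (u, v) pixel coordinates
    and camera-frame depth z for points in front of the camera.
    """
    homo = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])
    cam = (T_cam_lidar @ homo.T).T[:, :3]
    cam = cam[cam[:, 2] > 0]              # keep points in front of the camera
    uvw = (K @ cam.T).T
    return uvw[:, :2] / uvw[:, 2:3], cam[:, 2]

# illustrative calibration: identity extrinsic, f = 500, principal point (320, 240)
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
uv, depth = project_points(np.array([[0.0, 0.0, 10.0]]), np.eye(4), K)
print(uv[0], depth[0])  # [320. 240.] 10.0
```

A point on the optical axis lands on the principal point, which is a quick sanity check on the matrices.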
In the above embodiment, point-cloud features can be determined from the target pixels and the target point cloud, and the data features and point-cloud features can be fused according to the dimension parameters of the data features of the target sub-data to obtain fused features. The fused features are input into a recognition network to recognize the target object; recognizing the target object includes recognizing its attribute information. The target object may include a vehicle, a person, and so on. When the target object is a vehicle, its attribute information may include the vehicle model, license plate, and so on; when the target object is a person, the attribute information may include face information, ID number, and other information.
In the above embodiment, the fused features may be determined by a target network model and passed to an auxiliary network module and a classifier included in the target network model to recognize the target object. That is, the fused feature Fusion_Fea_i may be followed by the auxiliary network module and the classifier, and the whole network may be trained and optimized with an SGD (stochastic gradient descent) optimizer. The auxiliary network module may consist of three consecutive 3x3 convolutional layers, followed by one 1x1 convolutional layer and one fully connected (fc) layer; a schematic diagram of the auxiliary network module is shown in FIG. 3. The classifier may use a cross-entropy loss function.
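A rough NumPy sketch of the auxiliary head's shape behavior: three valid (no-padding) 3x3 convolutions reduce a 7x7 fused feature to 1x1, followed by a 1x1 convolution and a fully connected layer, ending in a cross-entropy loss. The channel counts, class count, and random weights are assumptions for illustration only; a real implementation would use a deep-learning framework and train with SGD as described:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w):
    """Valid (no padding, stride 1) convolution with ReLU.
    x: (H, W, Cin), w: (k, k, Cin, Cout) -> (H-k+1, W-k+1, Cout)."""
    k = w.shape[0]
    H, W = x.shape[0] - k + 1, x.shape[1] - k + 1
    out = np.empty((H, W, w.shape[3]))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.tensordot(x[i:i + k, j:j + k, :], w,
                                     axes=([0, 1, 2], [0, 1, 2]))
    return np.maximum(out, 0.0)

d = 16           # channel count of Camera_Fea_i (assumed)
n_classes = 5    # number of vehicle classes (assumed)

x = rng.standard_normal((7, 7, d + 2))          # Fusion_Fea_i, 7*7*(d+2)
for cout in (32, 32, 32):                       # three consecutive 3x3 convs: 7 -> 5 -> 3 -> 1
    x = conv2d(x, 0.1 * rng.standard_normal((3, 3, x.shape[2], cout)))
x = conv2d(x, 0.1 * rng.standard_normal((1, 1, x.shape[2], 64)))  # 1x1 conv
logits = x.reshape(-1) @ rng.standard_normal((64, n_classes))     # fc layer

# cross-entropy loss against a dummy ground-truth class 0
p = np.exp(logits - logits.max())
p /= p.sum()
loss = -np.log(p[0])
print(logits.shape)  # (5,)
```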
Optionally, the above steps may be executed by a background processor or another device with similar processing capability, or by a machine integrating at least a data processing device, where the data processing device may include a terminal such as a computer or a mobile phone, but is not limited thereto.
Through the present invention, the target point cloud obtained by the first device photographing the target area at the target moment is determined; the target sub-data included in the target data collected by the second device at the target moment is determined, together with the data features of the target sub-data, where the target data is data obtained by the second device photographing the target area at the same angle as the first device, and the target sub-data is the data of the target object included in the target data; each pixel included in the target sub-data is mapped into the point-cloud coordinate system of the target point cloud to obtain multiple target pixels; point-cloud features are determined from the multiple target pixels and the target point cloud; the data features and the point-cloud features are fused according to the dimension parameters of the data features to obtain fused features; and the target object is recognized from the fused features. Because the target object is recognized from features that fuse point-cloud features with data features, i.e., data collected by multiple devices is fused during recognition, the problem of inaccurate object recognition in the related art is solved and the accuracy of object recognition is improved.
In an exemplary embodiment, determining the point-cloud features based on the multiple target pixels and the target point cloud includes: performing the following for each target pixel to obtain its feature value: determining whether the target point cloud includes a first point with the same coordinates as the target pixel; if such a first point exists, taking the vertical coordinate (z value) of the first point and its response intensity as the feature value of the target pixel; if no first point exists, determining the second point in the target point cloud closest to the target pixel and taking the vertical coordinate and response intensity of the second point as the feature value of the target pixel; and determining the matrix formed by the multiple feature values as the point-cloud features. In this embodiment, the target point cloud may be a three-dimensional point cloud, and each point in it can be expressed as (x_j, y_j, z_j, a_j), where x_j, y_j, z_j are the three-dimensional coordinates of the j-th point in the point-cloud coordinate system and a_j is the laser response intensity value of the j-th point. For each target object, a three-dimensional matrix of dimensions w_i * h_i * 2 can be constructed, storing the z value and a value of the corresponding point-cloud point at each mapped pixel coordinate; pixels to which no point maps are filled with the z and a values of their nearest neighbor. This yields the point-cloud feature matrix Lidar_Fea_i of each object, which is determined as the point-cloud features.
In an exemplary embodiment, when a first point exists, taking the vertical coordinate of the first point and its response intensity as the feature value of the target pixel includes: when there is exactly one first point, taking the vertical coordinate and response intensity of that point as the feature value; when there are multiple first points, determining the third point with the smallest vertical coordinate among them and taking the vertical coordinate and response intensity of the third point as the feature value. In this embodiment, when each pixel is mapped into the point-cloud coordinate system, each target sub-data is mapped to a number of discrete point-cloud points. If multiple point-cloud points map to the same image pixel, the point with the smallest z value is kept and the others are discarded.
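The two rules above (keep the smallest-z point per pixel, and fill unmapped pixels from their nearest mapped neighbor) can be sketched together. The function name and the box-local coordinate convention are assumptions for illustration:

```python
import numpy as np

def pointcloud_feature(uv, z, a, w, h):
    """Build the (h, w, 2) point-cloud feature matrix for one object.

    uv: integer pixel coordinates of projected points, relative to the
    object's box; z: depths; a: laser response intensities. When several
    points land on one pixel, the smallest-z point is kept; pixels with
    no point are filled from the nearest mapped pixel.
    """
    zmap = np.full((h, w), np.inf)
    amap = np.zeros((h, w))
    for (u, v), zj, aj in zip(np.asarray(uv, dtype=int), z, a):
        if 0 <= v < h and 0 <= u < w and zj < zmap[v, u]:
            zmap[v, u], amap[v, u] = zj, aj
    mapped = np.argwhere(np.isfinite(zmap))    # (row, col) of filled pixels
    for v in range(h):
        for u in range(w):
            if not np.isfinite(zmap[v, u]):    # nearest-neighbour fill
                r, c = mapped[np.argmin(((mapped - [v, u]) ** 2).sum(1))]
                zmap[v, u], amap[v, u] = zmap[r, c], amap[r, c]
    return np.stack([zmap, amap], axis=-1)     # Lidar_Fea_i

feat = pointcloud_feature([(1, 1), (1, 1), (0, 0)],
                          [5.0, 3.0, 2.0], [0.5, 0.9, 0.1], w=3, h=2)
print(feat.shape)   # (2, 3, 2)
print(feat[1, 1])   # [3.  0.9]  -- the smaller-z point wins
```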
In an exemplary embodiment, determining the data features of the target sub-data includes: inputting the target sub-data into a target network model and determining the features extracted by the target network model; and determining the features extracted by any one network layer as the data features, where that network layer is a layer included in the backbone of the target network model. In this embodiment, the target sub-data may be input into the target network model, using a convolutional neural network such as a CNN, VGG, ResNet, or MobileNet as the backbone of the target network model. The backbone extracts the deep network feature Camera_Fea_i of each target sub-data.
In the above embodiment, when the backbone requires a fixed input size, e.g., an input size of 224*224*3, the target sub-data may be uniformly resized to 224*224*3.
In the above embodiment, the deep network feature Camera_Fea_i extracted by any layer of the target network model may be selected as the data feature of the target sub-data. For example, the output of the backbone block at 32x down-sampling may be used as Camera_Fea_i, i.e., the data feature, with dimensions 7*7*d, where d is the number of channels of that layer's features and 7 = 224/32.
In an exemplary embodiment, fusing the data features and the point-cloud features based on the dimension parameters of the data features to obtain the fused features includes: adjusting the point-cloud features according to the dimension parameters to obtain target point-cloud features; and fusing the data features and the target point-cloud features to obtain the fused features. In this embodiment, when determining the fused features, the point-cloud features and the data features can be unified to the same size; for example, the point-cloud features may be adjusted according to the dimension parameters of the data features to obtain the target point-cloud features, which are then fused with the data features to obtain the fused features.
In the above embodiment, the data features may instead be adjusted according to the dimension parameters of the point-cloud features to obtain target data features, and the target data features fused with the point-cloud features to obtain the fused features.
It should be noted that the relative size of the dimension parameters of the point-cloud features and the data features can be determined, and the feature with the larger dimension parameter adjusted to match the smaller one.
In an exemplary embodiment, fusing the data features and the target point-cloud features to obtain the fused features includes: concatenating the data features and the target point-cloud features along the channel dimension included in the dimension parameters to obtain the fused features. In this embodiment, when fusing the data features and the target point-cloud features, features of the same dimensions can be connected directly to obtain the fused features. For example, the target point-cloud feature Lidar_Fea_i and the data feature Camera_Fea_i can be concatenated (concat) along the channel dimension to obtain the fused feature Fusion_Fea_i, whose dimensions are 7*7*(d+2).
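The channel-dimension concatenation is a one-liner; `d = 16` below is an arbitrary illustrative channel count:

```python
import numpy as np

d = 16                               # illustrative backbone channel count
camera_fea = np.zeros((7, 7, d))     # Camera_Fea_i
lidar_fea = np.zeros((7, 7, 2))      # resized Lidar_Fea_i
# concat along the channel (last) dimension -> 7*7*(d+2)
fusion_fea = np.concatenate([camera_fea, lidar_fea], axis=-1)
print(fusion_fea.shape)  # (7, 7, 18)
```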
In an exemplary embodiment, adjusting the point-cloud features according to the dimension parameters to obtain the target point-cloud features includes: adjusting the point-cloud features with an interpolation algorithm so that the dimensions of the adjusted point-cloud features equal the dimension parameters, and determining the adjusted point-cloud features as the target point-cloud features. In this embodiment, the point-cloud feature Lidar_Fea_i can be resized by nearest-neighbor interpolation to the spatial dimensions of the data feature, e.g., 7*7*2, yielding a feature whose first two dimensions match those of the data feature Camera_Fea_i.
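A minimal nearest-neighbor resize over the two spatial dimensions might look like this; the index arithmetic below is one common convention, and real frameworks provide equivalent resize utilities:

```python
import numpy as np

def nn_resize(feat, out_h, out_w):
    """Nearest-neighbour resize of an (H, W, C) feature map."""
    h, w = feat.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return feat[rows][:, cols]

lidar = np.arange(5 * 4 * 2, dtype=float).reshape(5, 4, 2)  # raw Lidar_Fea_i
resized = nn_resize(lidar, 7, 7)
print(resized.shape)  # (7, 7, 2)
```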
The object recognition method is described below with reference to a specific implementation:
FIG. 4 is a flowchart of an object recognition method according to a specific embodiment of the present invention. As shown in FIG. 4, the method includes:
Step S402: the monocular camera captures a color image and the lidar captures a laser point cloud. The monocular camera and the lidar have already been jointly calibrated.
Step S404: vehicle detection is performed to obtain vehicle sub-images.
Step S406: the vehicle radar features are obtained by mapping.
Step S408: the vehicle sub-image (corresponding to the above target sub-data) is input to the main network module to obtain image features (corresponding to the above data features).
Step S410: linear interpolation is performed on the vehicle radar features (corresponding to the above point cloud features) to obtain the radar features (corresponding to the above target point cloud features).
Step S412: the image features and the radar features are concatenated to obtain the fused features.
Step S414: the fused features are input to the auxiliary network module and the classifier to obtain the recognition result.
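Steps S402-S414 can be outlined as a single pipeline. Every callable below (`detect`, `project`, `main_net`, `aux_net`, `classifier`) is an assumed placeholder standing in for the patent's unspecified modules, not an actual implementation:

```python
import numpy as np

def recognize_vehicles(color_image, laser_points,
                       detect, project, main_net, aux_net, classifier):
    """Sketch of steps S402-S414; each callable argument is a hypothetical stand-in."""
    results = []
    for x0, y0, x1, y1 in detect(color_image):                 # S404: 2D vehicle boxes
        sub_img = color_image[y0:y1, x0:x1]                    # crop the vehicle sub-image
        img_fea = main_net(sub_img)                            # S408: H*W*d image feature
        radar_fea = project(laser_points, (x0, y0, x1, y1))    # S406/S410: per-box point
                                                               # cloud feature, assumed
                                                               # already resized to H*W*2
        fused = np.concatenate([img_fea, radar_fea], axis=-1)  # S412: channel concat
        results.append(classifier(aux_net(fused)))             # S414: recognition result
    return results
```

With toy stand-ins for each stage, the function returns one recognition result per detected vehicle box, mirroring the per-vehicle processing in the flowchart.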
In the foregoing embodiment, data collected at the same timestamp by two different modalities, lidar and monocular camera, are processed separately; the three-dimensional features of the former are fused with an intermediate feature layer of the latter's deep-learning main network module, and the result is trained through a subsequent auxiliary network module and classifier to obtain the final vehicle-type recognition result. A vehicle detection model performs 2D vehicle-box detection on the camera's color image; the lidar is registered with the monocular camera so that the 3D laser point cloud is correctly mapped onto the 2D camera plane, yielding the three-dimensional point cloud features within each vehicle box; each vehicle's color sub-image is cropped, the image features of the main network module are extracted and concatenated with that vehicle's point cloud features; and an auxiliary network module and classifier then perform classification training. Because of the technical limitations of monocular vision, some vehicle-type recognition problems inevitably cannot be solved well by vision alone. Lidar, however, has an innate advantage in distance measurement, so the 3D point cloud features it scans serve as a powerful supplement to the monocular camera. Fusing the two modalities yields a more accurate recognition result, compensating for the shortcomings of purely visual techniques and greatly improving the accuracy of vehicle-type recognition.
From the description of the above embodiments, those skilled in the art will clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware, though in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disc) and includes several instructions for causing a terminal device (which may be a mobile phone, computer, server, network device, or the like) to execute the methods described in the various embodiments of the present invention.
This embodiment also provides an object identification device, which is used to implement the above embodiments and preferred implementations; what has already been described will not be repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the devices described in the following embodiments are preferably implemented in software, implementations in hardware, or in a combination of software and hardware, are also possible and contemplated.
FIG. 5 is a structural block diagram of an object identification device according to an embodiment of the present invention. As shown in FIG. 5, the device includes:
a first determining module 502, configured to determine the target point cloud obtained by a first device photographing a target area at a target moment;
a second determining module 504, configured to determine target sub-data included in target data collected by a second device at the target moment, and to determine a data feature of the target sub-data, where the target data is data obtained by the second device photographing the target area, the angle at which the second device photographs the target area is the same as the angle at which the first device photographs the target area, and the target sub-data is data of a target object included in the target data;
a mapping module 506, configured to map each pixel included in the target sub-data into the point cloud coordinate system of the target point cloud to obtain a plurality of target pixels;
a third determining module 508, configured to determine a point cloud feature based on the plurality of target pixels and the target point cloud;
a fusion module 510, configured to fuse the data feature and the point cloud feature based on the dimension parameter of the data feature to obtain a fused feature;
a recognition module 512, configured to recognize the target object based on the fused feature.
In an exemplary embodiment, the third determining module 508 may determine the point cloud feature based on the plurality of target pixels and the target point cloud as follows. For each target pixel, perform the following operations to obtain the feature value corresponding to that pixel: determine whether the target point cloud includes a first point with the same coordinates as the target pixel; if a first point exists, determine the vertical coordinate of the first point and the response intensity corresponding to the first point as the feature value of the target pixel; if no first point exists, determine the second point in the target point cloud that is closest to the target pixel, and determine the vertical coordinate of the second point and the response intensity corresponding to the second point as the feature value of the target pixel. The matrix formed by the plurality of feature values is determined as the point cloud feature.
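The per-pixel feature-value rule (exact coordinate match first; if several points match, the smallest vertical coordinate wins; otherwise fall back to the nearest point) can be sketched as follows, assuming points are (x, y, z, intensity) tuples already projected into the pixel grid:

```python
import math

def pixel_feature(px, py, points):
    """points: iterable of (x, y, z, intensity) lidar points projected onto the image.
    Returns (vertical coordinate, response intensity) for pixel (px, py)."""
    # "First points": points whose projected coordinates equal the pixel's exactly
    matches = [p for p in points if p[0] == px and p[1] == py]
    if matches:
        # With several first points, take the one with the smallest vertical coordinate
        _, _, z, intensity = min(matches, key=lambda p: p[2])
        return z, intensity
    # No first point: use the "second point", i.e. the point nearest to the pixel
    _, _, z, intensity = min(points, key=lambda p: math.hypot(p[0] - px, p[1] - py))
    return z, intensity
```

Applying this to every pixel of the sub-image and stacking the (z, intensity) pairs yields the H*W*2 matrix that serves as the point cloud feature.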
In an exemplary embodiment, when a first point exists, the third determining module 508 may determine the vertical coordinate of the first point and its corresponding response intensity as the feature value of the target pixel as follows: when there is a single first point, determine the vertical coordinate of that first point and its corresponding response intensity as the feature value of the target pixel; when there are multiple first points, determine the third point with the smallest vertical coordinate among the first points, and determine the vertical coordinate of the third point and its corresponding response intensity as the feature value of the target pixel.
In an exemplary embodiment, the second determining module 504 may determine the data feature of the target sub-data as follows: input the target sub-data into a target network model and determine the features extracted by the target network model; determine the features extracted by any one network layer as the data feature, where the network layer is a layer included in the backbone network of the target network model.
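One common way to expose "any network layer's" output as the data feature is to record intermediate activations during the forward pass. The toy backbone below is an assumed stand-in for the unspecified target network model, and its tanh/ReLU layers are purely illustrative:

```python
import numpy as np

class ToyBackbone:
    """Hypothetical stand-in for the target network model's backbone; each layer
    stores its output so any intermediate feature can be chosen as the data feature."""
    def __init__(self):
        self.features = {}

    def forward(self, x):
        f1 = np.tanh(x)                 # layer1 (illustrative operation)
        self.features["layer1"] = f1
        f2 = np.maximum(f1, 0.0)        # layer2 (illustrative operation)
        self.features["layer2"] = f2
        return f2

net = ToyBackbone()
sub_data = np.array([[-1.0, 0.5]])      # hypothetical target sub-data
net.forward(sub_data)
data_feature = net.features["layer1"]   # pick any backbone layer's output
```

In a deep-learning framework the same effect is typically achieved with forward hooks on the chosen layer rather than a hand-maintained dictionary.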
In an exemplary embodiment, the fusion module 510 may fuse the data feature and the point cloud feature based on the dimension parameter of the data feature to obtain the fused feature as follows: adjust the point cloud feature according to the dimension parameter to obtain the target point cloud feature, then fuse the data feature and the target point cloud feature to obtain the fused feature.
In an exemplary embodiment, the fusion module 510 may fuse the data feature and the target point cloud feature to obtain the fused feature as follows: concatenate the data feature and the target point cloud feature along the channel dimension included in the dimension parameter to obtain the fused feature.
In an exemplary embodiment, the fusion module 510 may adjust the point cloud feature according to the dimension parameter to obtain the target point cloud feature as follows: adjust the point cloud feature with a linear interpolation algorithm so that the dimensions of the adjusted point cloud feature match the dimension parameter, and determine the adjusted point cloud feature as the target point cloud feature.
It should be noted that the above modules can be implemented by software or hardware. In the latter case this may be achieved in, but is not limited to, the following ways: the above modules are all located in the same processor, or the above modules are located in different processors in any combination.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of any one of the methods described above.
In an exemplary embodiment, the above computer-readable storage medium may include, but is not limited to, various media that can store a computer program, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
An embodiment of the present invention further provides an electronic device, including a memory and a processor, where a computer program is stored in the memory and the processor is configured to run the computer program to execute the steps in any one of the above method embodiments.
In an exemplary embodiment, the above electronic device may further include a transmission device and an input/output device, both connected to the above processor.
For specific examples in this embodiment, reference may be made to the examples described in the above embodiments and exemplary implementations, and details are not repeated here.
Obviously, those skilled in the art should understand that the above modules or steps of the present invention can be implemented by a general-purpose computing device. They may be concentrated on a single computing device or distributed over a network composed of multiple computing devices, and may be implemented as program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device. In some cases, the steps shown or described may be performed in an order different from that given here, or the modules or steps may each be made into individual integrated circuit modules, or several of them may be made into a single integrated circuit module. Thus, the present invention is not limited to any particular combination of hardware and software.
The above are merely preferred embodiments of the present invention and are not intended to limit it; for those skilled in the art, the present invention may have various modifications and changes. Any modification, equivalent replacement, improvement, or the like made within the principles of the present invention shall be included within the protection scope of the present invention.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210506905.0A CN114611635B (en) | 2022-05-11 | 2022-05-11 | Object recognition method, device, storage medium and electronic device |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN114611635A true CN114611635A (en) | 2022-06-10 |
| CN114611635B CN114611635B (en) | 2022-08-30 |
Family
ID=81870651
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210506905.0A Active CN114611635B (en) | 2022-05-11 | 2022-05-11 | Object recognition method, device, storage medium and electronic device |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN114611635B (en) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114782496A (en) * | 2022-06-20 | 2022-07-22 | 杭州闪马智擎科技有限公司 | Object tracking method and device, storage medium and electronic device |
| CN116246267A (en) * | 2023-03-06 | 2023-06-09 | 武汉极动智能科技有限公司 | Tray identification method and device, computer equipment and storage medium |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20190122067A1 (en) * | 2017-01-24 | 2019-04-25 | Ford Global Technologies, Llc. | Object Detection Using Recurrent Neural Network And Concatenated Feature Map |
| CN111027401A (en) * | 2019-11-15 | 2020-04-17 | 电子科技大学 | An end-to-end object detection method for camera and lidar fusion |
| CN112927233A (en) * | 2021-01-27 | 2021-06-08 | 湖州市港航管理中心 | Marine laser radar and video combined target capturing method |
| CN114092850A (en) * | 2020-08-05 | 2022-02-25 | 北京万集科技股份有限公司 | Re-recognition method and device, computer equipment and storage medium |
| CN114155497A (en) * | 2021-09-24 | 2022-03-08 | 智道网联科技(北京)有限公司 | Object identification method and device and storage medium |
2022-05-11: application CN202210506905.0A filed in China; granted as CN114611635B (status: Active)
Also Published As
| Publication number | Publication date |
|---|---|
| CN114611635B (en) | 2022-08-30 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111340864B (en) | Three-dimensional scene fusion method and device based on monocular estimation | |
| CN109614889B (en) | Object detection method, related equipment and computer storage medium | |
| EP2731075B1 (en) | Backfilling points in a point cloud | |
| CN113240734B (en) | Vehicle cross-position judging method, device, equipment and medium based on aerial view | |
| CN115147333A (en) | Target detection method and device | |
| CN107274483A (en) | A kind of object dimensional model building method | |
| CN111950428A (en) | Target obstacle identification method, device and vehicle | |
| WO2023279584A1 (en) | Target detection method, target detection apparatus, and robot | |
| CN116029996A (en) | Stereo matching method and device and electronic equipment | |
| CN114611635B (en) | Object recognition method, device, storage medium and electronic device | |
| CN105825543A (en) | Multi-view dense point cloud generation method and system based on low-altitude remote sensing images | |
| CN115035235A (en) | Three-dimensional reconstruction method and device | |
| WO2025086907A1 (en) | Three-dimensional information determination method and apparatus, device, storage medium and program product | |
| CN115836322B (en) | Image cropping method and device, electronic device and storage medium | |
| CN111626241B (en) | A face detection method and device | |
| CN112598736A (en) | Map construction based visual positioning method and device | |
| CN116343143A (en) | Object detection method, storage medium, roadside equipment and automatic driving system | |
| CN116051736A (en) | Three-dimensional reconstruction method, device, edge equipment and storage medium | |
| CN114842466A (en) | Object detection method, computer program product and electronic device | |
| CN119919582A (en) | Three-dimensional modeling method, device, terminal and storage medium based on visual fusion | |
| CN114764822A (en) | Image processing method and device and electronic equipment | |
| CN118155189A (en) | Parking space recognition model training method, parking recognition method and device | |
| CN115100535B (en) | Satellite remote sensing image rapid reconstruction method and device based on affine camera model | |
| CN111656404A (en) | Image processing method and system and movable platform | |
| CN117237609A (en) | A multi-modal fusion three-dimensional target detection method and system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| CB03 | Change of inventor or designer information | ||
Inventor after: Ni Huajian; Peng Yao; Lin Yining; Zhao Zhijian
Inventor before: Peng Yao; Ni Huajian; Lin Yining; Zhao Zhijian |
|
| GR01 | Patent grant | ||
| TR01 | Transfer of patent right | ||
Effective date of registration: 2024-12-04
Patentee after: Beijing ShanMa Zhijian Technology Co.,Ltd. (No. 132, 2nd Floor, Building 15, Maker Town Community Supporting Commercial Building, Wenquan Town, Haidian District, Beijing 100095, China) and Shanghai Shanma Data Technology Co.,Ltd.
Patentee before: Beijing ShanMa Zhijian Technology Co.,Ltd. (same address, China) |