
WO2020024079A1 - Image Recognition System - Google Patents

Image Recognition System

Info

Publication number
WO2020024079A1
WO2020024079A1 · PCT/CN2018/097687 · CN2018097687W
Authority
WO
WIPO (PCT)
Prior art keywords
imaging
different
microlenses
image recognition
image information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2018/097687
Other languages
English (en)
French (fr)
Inventor
王星泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Heren Keji Shenzhen LLC
Original Assignee
Heren Keji Shenzhen LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Heren Keji Shenzhen LLC filed Critical Heren Keji Shenzhen LLC
Priority to PCT/CN2018/097687 priority Critical patent/WO2020024079A1/zh
Priority to CN201880002314.1A priority patent/CN109496316B/zh
Publication of WO2020024079A1 publication Critical patent/WO2020024079A1/zh
Anticipated expiration: legal status Critical
Current legal status: Ceased (Critical)

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/10: Image acquisition
    • G06V 10/12: Details of acquisition arrangements; Constructional details thereof
    • G06V 10/14: Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V 10/147: Details of sensors, e.g. sensor lenses
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks

Definitions

  • the present application relates to the technical field of optical image processing, and in particular, to an image recognition system.
  • the object recognition process can be divided into: image acquisition, feature extraction, classification by a classifier, and output of the classification result.
  • image acquisition mainly uses ordinary imaging systems to project three-dimensional scene information onto two-dimensional color pictures, so the third dimension of the actual object is lost; that is, there is no longitudinal depth information, and the computer ultimately obtains only feature differences on a planar image.
  • the probe method, which uses a probe to locate points directly on the surface of an object, is inefficient and damages the object itself;
  • the binocular vision method, which computes object distance by triangulation, requires two cameras, is expensive, and is unsuitable for objects with smooth, textureless surfaces;
  • the structured-light method, which projects a specific light pattern onto the object's surface and computes the object's position and depth from the changes the object induces in the pattern, is unsuitable for long-range use and cannot be used in strong light, because the projected coded light is swamped;
  • the time-of-flight method emits light pulses from an emitter toward the object and determines its distance from the pulses' flight time; its depth accuracy is low, its recognition range is limited by light-source intensity, and it consumes a great deal of energy;
  • the main purpose of this application is to provide an image recognition system with a multi-view imaging module capable of acquiring three-dimensional image data and with a high recognition rate.
  • the present invention provides an image recognition system including a multi-view imaging module, a memory, a processor, and a computer program stored in the memory and executable on the processor.
  • the multi-view imaging module includes: a lens; a photosensitive element; and
  • a microlens array disposed between the lens and the photosensitive element, on a focal plane of the imaging side of the lens, the microlens array including a plurality of microlenses arranged in an array;
  • light from the imaged object is projected through the lens from different directions onto the plurality of microlenses of the microlens array, is refracted by the respective microlenses, and is then incident on different photosensitive regions of the photosensitive element.
  • image information of the object to be recognized at multiple different angles, obtained by imaging it with the multi-view imaging module, is fed into the target model to perform image recognition on the object to be recognized.
  • the structure of the plurality of microlenses in the microlens array is one selected from the following structures:
  • the lens shapes and sizes of the plurality of microlenses are the same, and their focal lengths are the same and fixed;
  • the lens shapes and sizes of the plurality of microlenses are different, and their focal lengths are different and fixed;
  • the lens shapes and sizes of the plurality of microlenses are the same, and their focal lengths are adjustable;
  • the lens shapes and sizes of the plurality of microlenses are different, and their focal lengths are adjustable.
  • the plurality of microlenses in the microlens array are distributed on a transparent structure in a uniform or non-uniform manner; the transparent structure is a convex transparent structure, a concave transparent structure, or a planar transparent structure.
  • the microlens array is a single-imaging microlens array or a multi-imaging microlens array.
  • the multi-imaging microlens array includes at least two microlens arrays arranged in parallel.
  • the photosensitive element is a complementary metal oxide semiconductor image sensor or a charge-coupled device image sensor.
  • the photosensitive element is provided with a multi-pixel photosensitive array;
  • the multi-pixel photosensitive array includes a plurality of photosensitive regions arranged in one-to-one correspondence with the microlenses of the microlens array.
  • the method further includes: obtaining, through a light-ray reconstruction algorithm, images and depth information of the recognized object at multiple different angles from the image information obtained by the multiple imagings;
  • the step of obtaining the target model by training the convolutional neural network model then includes:
  • using each collected set of the multi-angle images and depth information of the recognized object as sample training data and training a convolutional neural network model to obtain the target model.
  • the method further includes the step of repeatedly collecting image information of the recognized object at multiple different angles, obtained by the multi-view imaging module imaging multiple different perspectives of the recognized object.
  • when the microlens focal lengths are adjustable, the step of repeatedly collecting the multi-angle image information obtained by imaging a single perspective of the recognized object includes: adjusting the focal lengths of the plurality of microlenses on the microlens array; and,
  • at different focal lengths, repeatedly collecting image information of the recognized object at multiple different angles obtained by the multi-view imaging module imaging multiple different perspectives of the recognized object.
  • a multi-view imaging module introduces a microlens array into a conventional imaging system and can obtain, in a single exposure, images and depth information of an object from multiple slightly different perspectives; it has a simple structure and wide applicability.
  • FIG. 1 is a schematic structural diagram of a multi-view imaging module according to an embodiment of the present invention.
  • FIG. 2 is a schematic structural diagram of a photosensitive element of the multi-view imaging module in FIG. 1.
  • FIG. 3 is a schematic structural diagram of an image recognition system according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of an image recognition result according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of another image recognition result in an embodiment of the present invention.
  • FIG. 6 is a flowchart of an image recognition method according to an embodiment of the present invention.
  • "fixed" may be a fixed connection, a detachable connection, or an integral whole; a mechanical or an electrical connection; a direct connection or an indirect connection through an intermediate medium; and an internal communication between two elements or an interaction between two elements, unless explicitly defined otherwise.
  • FIG. 1 is a schematic structural diagram of a multi-view imaging module 100 in a first embodiment of the present invention.
  • the multi-view imaging module 100 includes a lens 10, a microlens array 20, and a photosensitive element 30.
  • the microlens array 20 is disposed between the lens 10 and the photosensitive element 30 and is located on a focal plane of the imaging side of the lens 10.
  • the microlens array 20 includes a plurality of microlenses 21 arranged in an array.
  • light from the imaged object 101 enters the interior of the multi-view imaging module 100 through the lens 10; that is, light from the imaged object 101 is projected through the lens 10 from different directions onto the plurality of microlenses 21 of the microlens array 20.
  • the light is refracted by the respective microlenses 21 and is incident on different photosensitive regions 31 of the photosensitive element 30, forming image information of the imaged object at multiple different angles. Since the light collected by each photosensitive region 31 arrives from a different angle, images and depth information of the imaged object 101 from multiple slightly different perspectives can be recorded.
  • the multi-view imaging module 100 introduces a microlens array 20 into a conventional imaging system and can obtain, in a single exposure, images and depth information of an object from multiple slightly different perspectives, with a simple structure and a wide application range.
  • the photosensitive element 30 is provided with a multi-pixel photosensitive array 33.
  • the multi-pixel photosensitive array 33 includes a plurality of photosensitive regions 31 arranged in one-to-one correspondence with the microlenses of the microlens array 20.
  • the photosensitive element 30 may be a complementary metal oxide semiconductor (CMOS) image sensor or a charge-coupled device (CCD) image sensor.
  • the microlens array 20 is set on the focal plane of the imaging side of the lens 10, and the photosensitive element 30 (CCD or CMOS) is placed behind the microlens array 20; the light refracted by each microlens 21 covers a corresponding group of pixels on the photosensitive element 30, forming a large pixel unit.
  • on the multi-pixel photosensitive array 33, each large pixel unit corresponds to one of the photosensitive regions 31 arranged in one-to-one correspondence with the microlenses of the microlens array 20.
  • light emitted by the imaged object 101 enters through the lens 10 of the multi-view imaging module 100 and is then projected from different directions onto the microlens array 20.
  • each microlens 21 of the microlens array 20 refracts the light and projects it onto the photosensitive element 30 behind it.
  • the photosensitive region 31 corresponding to each large pixel unit thus captures a complete image; and since the angle of the light collected by each large pixel unit's photosensitive region 31 differs, images and depth information of the imaged object 101 from multiple slightly different perspectives can be recorded, as in the sketch below.
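  • As an illustration of this pixel grouping, the following Python sketch splits a raw frame into per-microlens "large pixel unit" images; the square lens pitch, the grid layout, and the array sizes are assumptions made for the example, not values taken from the patent.

```python
import numpy as np

def split_large_pixel_units(raw, pitch):
    """Split a raw light-field frame into per-microlens images.

    Hypothetical layout: each microlens covers a pitch x pitch block of
    pixels (one "large pixel unit"), and each block records a complete,
    slightly offset view of the imaged object.
    """
    h, w = raw.shape
    rows, cols = h // pitch, w // pitch
    trimmed = raw[: rows * pitch, : cols * pitch]
    # views[i, j] is the pitch x pitch image recorded under microlens (i, j)
    return trimmed.reshape(rows, pitch, cols, pitch).swapaxes(1, 2)

# Example: a 12 x 12 sensor behind a 4 x 4 microlens grid, 3 x 3 pixels each
raw = np.arange(144, dtype=np.float32).reshape(12, 12)
views = split_large_pixel_units(raw, pitch=3)
print(views.shape)  # (4, 4, 3, 3): 16 slightly different 3 x 3 views
```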
  • the plurality of microlenses 21 in the microlens array 20 may be microlenses with the same lens shape and size and the same, fixed focal length.
  • each microlens 21 may be a biconvex lens (convex on both sides) or a plano-convex lens (convex on one side);
  • the shape of the microlenses 21 may be circular, square, hexagonal, octagonal, and so on.
  • the plurality of microlenses 21 in the microlens array 20 may differ in lens shape and size, with focal lengths that differ and are fixed.
  • for example, the microlenses 21 may be biconvex lenses of different sizes and focal lengths, or plano-convex lenses of different sizes and focal lengths.
  • the plurality of microlenses 21 in the microlens array 20 may be microlenses with the same lens shape and size and adjustable focal lengths.
  • the microlenses 21 may consist of multiple microlenses of identical shape and size but adjustable focal length; the focal length may be adjusted electrically, optically, thermally, and so on, and those skilled in the art can configure this as needed, so it is not repeated here.
  • the plurality of microlenses 21 in the microlens array 20 may be microlenses with different lens shapes and sizes and adjustable focal lengths.
  • the microlenses 21 may consist of multiple microlenses of different shapes and sizes but adjustable focal length; the focal length may likewise be adjusted electrically, optically, thermally, and so on.
  • the plurality of microlenses 21 in the microlens array 20 may be distributed uniformly or non-uniformly on a transparent structure; the transparent structure is a convex, concave, or planar transparent structure.
  • the microlens array 20 may consist of a plurality of convex lenses 211 uniformly distributed on a planar transparent structure, the planar transparent structure connecting the convex lenses 211 into a single microlens array 20;
  • the microlens array 20 may consist of a plurality of convex lenses 211 non-uniformly distributed on a planar transparent structure, the planar transparent structure connecting the convex lenses 211 into a single microlens array 20;
  • the microlens array 20 may also consist of a plurality of convex lenses 211 uniformly distributed on a convex transparent structure,
  • the convex transparent structure connecting the plurality of convex lenses 211 into a single microlens array 20.
  • the microlens array 20 may be a single-imaging microlens array or a multi-imaging microlens array.
  • a multi-imaging microlens array 20 may include multiple layers of microlenses, for example at least two microlens arrays arranged in parallel; light reaching the microlens array 20 is refracted through the multiple layers of microlenses and then exits onto the photosensitive element 30 for imaging.
  • the present invention further provides an image recognition system 200 including the multi-view imaging module 100, a memory 201, a processor 202, and a computer program stored in the memory 201 and executable on the processor 202.
  • the image recognition system 200 uses the multi-view imaging module 100 to repeatedly collect image information of the recognized object at multiple different angles, obtained by repeatedly imaging a single perspective of the recognized object;
  • each collected set of the multi-angle image information of the recognized object is then used as sample training data, and a convolutional neural network model is trained to obtain a target model;
  • image information of the object to be recognized at multiple different angles, obtained by imaging it with the multi-view imaging module, is fed into the target model to perform image recognition on the object to be recognized.
  • image information of the recognized object at multiple different angles, obtained by the multi-view imaging module 100 imaging multiple different perspectives of the recognized object, may also be collected repeatedly; understandably, once multi-angle image information obtained by imaging many different perspectives has been collected and used as training samples, every angle of the object to be recognized can be identified accurately.
  • a microlens array 20 is introduced into a conventional imaging system, so images and depth information of an object from multiple slightly different perspectives can be obtained in a single exposure; combined with neural-network image recognition, using multiple slightly different views of the same object during later recognition greatly improves the accuracy of object recognition.
  • multiple different-angle images and depth information of the recognized object can be obtained through a light-ray reconstruction algorithm;
  • each collected set of the multi-angle images and depth information of the recognized object is used as sample training data to train the convolutional neural network model and obtain the target model, as in the training sketch below.
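  • A minimal training sketch under assumed data shapes: the views and the depth map are stacked as input channels of a small PyTorch CNN; the channel layout, layer sizes, and class count are illustrative assumptions, not the patent's model.

```python
import torch
import torch.nn as nn

class MultiViewNet(nn.Module):
    """Toy CNN over V stacked views plus one depth channel (assumed layout)."""
    def __init__(self, num_views=9, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(num_views + 1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = MultiViewNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One synthetic training step: batch of 4 samples, 9 views + 1 depth channel
x = torch.randn(4, 10, 64, 64)
y = torch.randint(0, 10, (4,))
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
```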
  • with the image recognition system 200 of this embodiment, images and depth information of the recognized object at multiple different angles can be obtained; a traditional imaging system, by contrast, captures only two-dimensional image information and loses the third dimension.
  • the image recognition system 200 in this embodiment greatly improves the accuracy of object recognition.
  • when the focal lengths of the microlenses are adjustable, the focal lengths of the multiple microlenses on the microlens array can also be adjusted; at different focal lengths, the image information of the recognized object at multiple different angles, obtained by the multi-view imaging module imaging different perspectives of the recognized object, is collected repeatedly. Multi-point refocusing over a large longitudinal (depth) range can thus be achieved, so that each object within that range can be identified accurately; a capture-loop sketch follows.
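  • A short sketch of this focal-length sweep; set_focal_length and capture_raw are hypothetical driver calls standing in for whatever hardware interface the tunable microlenses actually expose.

```python
def sweep_capture(camera, focal_lengths_mm, repeats=5):
    """Repeatedly capture raw frames at each microlens focal length."""
    samples = []
    for f in focal_lengths_mm:
        camera.set_focal_length(f)   # e.g. electrically tuned microlenses
        for _ in range(repeats):     # repeated acquisition per focal length
            samples.append({"focal_mm": f, "raw": camera.capture_raw()})
    return samples
```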
  • the present invention further provides an image recognition method 300 including the steps of:
  • Step S10: repeatedly collecting image information of the recognized object at multiple different angles obtained by the multi-view imaging module imaging a single perspective of the recognized object;
  • Step S20: using each collected set of the multi-angle image information of the recognized object as sample training data and training a convolutional neural network model to obtain a target model;
  • Step S30: feeding image information of the object to be recognized at multiple different angles, obtained by imaging it with the multi-view imaging module, into the target model to perform image recognition on the object to be recognized.
  • images and depth information of an object from multiple slightly different perspectives can be obtained in a single exposure; combined with neural-network image recognition, using multiple slightly different views of the same object during later recognition greatly improves the accuracy of object recognition.
  • the method further includes: obtaining, through a light-ray reconstruction algorithm, images and depth information of the recognized object at multiple different angles, based on the image information obtained from the multiple imagings;
  • step S20 may then specifically include: using each collected set of the multi-angle images and depth information of the recognized object as sample training data and training a convolutional neural network model to obtain the target model.
  • the image recognition method 300 may further include: repeatedly collecting image information of the recognized object at multiple different angles obtained by the multi-view imaging module imaging multiple different perspectives of the recognized object.
  • correspondingly, during the training of step S20 based on the convolutional neural network model, the input is the feature data of the recognized object at every angle.
  • when the microlens focal lengths are adjustable, step S10 may include: adjusting the focal lengths of the multiple microlenses on the microlens array; and,
  • at different focal lengths, repeatedly collecting image information of the recognized object at multiple different angles obtained by the multi-view imaging module imaging multiple different perspectives of the recognized object. A minimal recognition sketch closes this section below.
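  • Putting the recognition step together, a hedged inference sketch (reusing the toy MultiViewNet above; the view-stack layout and the threshold are assumptions): a genuine match yields one clearly dominant class score, as in FIG. 4, while a non-match yields low, irregular scores, as in FIG. 5.

```python
import torch

@torch.no_grad()
def recognize(model, view_stack, class_names, threshold=0.5):
    """Classify one view stack of an unknown object with the target model."""
    model.eval()
    probs = torch.softmax(model(view_stack), dim=1).squeeze(0)
    p, c = probs.max(dim=0)
    # Below-threshold peaks are treated as "no match" (the FIG. 5 case)
    return (class_names[c], float(p)) if p >= threshold else (None, float(p))

# Usage with the toy model from the training sketch:
# label, score = recognize(model, torch.randn(1, 10, 64, 64),
#                          [f"class_{i}" for i in range(10)])
```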

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Vascular Medicine (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

An image recognition system includes a multi-view imaging module with a microlens array. Light from the imaged object is refracted by the individual microlenses and is incident on different photosensitive regions of the photosensitive element, so that images and depth information of the object from multiple slightly different perspectives can be obtained in a single exposure. Image information of the recognized object at multiple different angles, obtained by the multi-view imaging module repeatedly imaging a single perspective of the object, is collected; each collected set of this multi-angle image information is used as sample training data to train a convolutional neural network model, yielding a target model with which image recognition is performed on the object to be recognized. Using multiple slightly different views of the same object for recognition greatly improves the accuracy of object recognition.

Description

Image Recognition System

Technical Field

This application relates to the technical field of optical image processing, and in particular to an image recognition system.

Background

Existing research shows that 80% to 85% of human learning and cognition occurs through vision. In artificial intelligence, computer vision is likewise a very important research direction. In 1963, Larry Roberts, then a doctoral student at MIT, proposed in his thesis "Machine Perception of Three-Dimensional Solids" that edges are the key information for describing object shape. Computer vision has developed rapidly ever since; its main application areas today include identity authentication, security, autonomous driving, and industrial inspection.

All of these applications require computer vision with highly accurate object recognition. The object recognition process can be divided into image acquisition, feature extraction, classification by a classifier, and output of the classification result. At present, image acquisition mainly uses ordinary imaging systems to project three-dimensional scene information onto two-dimensional color pictures, so the third dimension of the actual object is lost; that is, there is no longitudinal depth information, and the computer ultimately obtains only feature differences on a planar image. A few methods instead reconstruct a three-dimensional image of the object before recognition, for example:
The probe method locates points directly on the object's surface with a probe; it is inefficient and damages the object itself.

The binocular vision method computes object distance by triangulation; it requires two cameras, is costly, and is unsuitable for objects with smooth, textureless surfaces.

The structured-light method projects a specific light pattern onto the object's surface and computes the object's position and depth from the changes the object induces in the pattern; it is unsuitable for long-range use and essentially unusable in strong light, because the projected coded light is swamped.

The time-of-flight method emits light pulses from an emitter toward the object and determines its distance from the pulses' flight time; its depth accuracy is low, its recognition range is limited by light-source intensity, and it consumes a great deal of energy.

As is well known, conventional imaging combined with neural networks recognizes objects by feeding pictures captured with an ordinary camera into a trained model. One drawback of this approach is that such pictures contain only two-dimensional information. Achieving high accuracy therefore demands many convolutional layers, complex algorithms, and large amounts of training data. The probe, binocular vision, structured-light, and time-of-flight methods, meanwhile, have limited applicability and suit only specific environments.

A new image recognition system is therefore needed to solve the above technical problems.
Summary of the Application

The main purpose of this application is to provide an image recognition system with a multi-view imaging module capable of acquiring three-dimensional image data and with a high recognition rate.

To achieve the above purpose, the present invention provides an image recognition system comprising a multi-view imaging module, a memory, a processor, and a computer program stored in the memory and executable on the processor,

the multi-view imaging module comprising:

a lens;

a photosensitive element;

a microlens array disposed between the lens and the photosensitive element, on the focal plane of the imaging side of the lens, the microlens array comprising a plurality of microlenses arranged in an array;

wherein light from the imaged object is projected through the lens from different directions onto the plurality of microlenses of the microlens array, is refracted by the respective microlenses, and is incident on different photosensitive regions of the photosensitive element, forming image information of the imaged object at multiple different angles;

when the processor executes the computer program, the steps of the following image recognition method are implemented:

repeatedly collecting image information of the recognized object at multiple different angles, obtained by the multi-view imaging module imaging a single perspective of the recognized object;

using each collected set of the multi-angle image information of the recognized object as sample training data and training a convolutional neural network model to obtain a target model;

feeding image information of the object to be recognized at multiple different angles, obtained by imaging it with the multi-view imaging module, into the target model to perform image recognition on the object to be recognized.
Further, the structure of the plurality of microlenses in the microlens array is one of the following:

the plurality of microlenses have the same lens shape and size and the same, fixed focal length;

the plurality of microlenses differ in lens shape and size, and their focal lengths differ and are fixed;

the plurality of microlenses have the same lens shape and size, with adjustable focal lengths;

the plurality of microlenses differ in lens shape and size, with adjustable focal lengths.

Further, the plurality of microlenses in the microlens array are distributed uniformly or non-uniformly on a transparent structure; the transparent structure is a convex, concave, or planar transparent structure.

Further, the microlens array is a single-imaging microlens array or a multi-imaging microlens array.

Further, the multi-imaging microlens array comprises at least two microlens arrays arranged in parallel.

Further, the photosensitive element is a complementary metal-oxide-semiconductor (CMOS) image sensor or a charge-coupled device (CCD) image sensor.

Further, the photosensitive element is provided with a multi-pixel photosensitive array comprising a plurality of photosensitive regions in one-to-one correspondence with the microlenses of the microlens array.
Further, after the step of repeatedly collecting image information of the recognized object at multiple different angles obtained by the multi-view imaging module imaging a single perspective of the recognized object, the method further comprises:

obtaining images and depth information of the recognized object at multiple different angles through a light-ray reconstruction algorithm, based on the multi-angle image information obtained from the multiple imagings;

and the step of using each collected set of the multi-angle image information of the recognized object as sample training data and training a convolutional neural network model to obtain a target model comprises:

using each collected set of the multi-angle images and depth information of the recognized object as sample training data and training a convolutional neural network model to obtain the target model.

Further, the method comprises the step of repeatedly collecting image information of the recognized object at multiple different angles, obtained by the multi-view imaging module imaging multiple different perspectives of the recognized object.

Further, when the focal lengths of the microlenses are adjustable, the step of repeatedly collecting image information of the recognized object at multiple different angles obtained by the multi-view imaging module imaging a single perspective of the recognized object comprises:

adjusting the focal lengths of the plurality of microlenses on the microlens array;

at different focal lengths, repeatedly collecting image information of the recognized object at multiple different angles obtained by the multi-view imaging module imaging multiple different perspectives of the recognized object.

In the image recognition system provided by the present invention, the multi-view imaging module introduces a microlens array into a conventional imaging system and can obtain, in a single exposure, images and depth information of an object from multiple slightly different perspectives, with a simple structure and wide applicability. Combined with neural-network image recognition, using multiple slightly different views of the same object during later recognition greatly improves the accuracy of object recognition.
Brief Description of the Drawings

To explain the technical solutions in the embodiments of this application more clearly, the drawings needed for the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of this application; those of ordinary skill in the art can derive other drawings from them without creative effort.

FIG. 1 is a schematic structural diagram of a multi-view imaging module in an embodiment of the present invention;

FIG. 2 is a schematic structural diagram of the photosensitive element of the multi-view imaging module in FIG. 1;

FIG. 3 is a schematic structural diagram of an image recognition system in an embodiment of the present invention;

FIG. 4 is a schematic diagram of an image recognition result in an embodiment of the present invention;

FIG. 5 is a schematic diagram of another image recognition result in an embodiment of the present invention;

FIG. 6 is a flowchart of an image recognition method in an embodiment of the present invention.
Detailed Description

The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on these embodiments without creative effort fall within the protection scope of the present invention.

Note that all directional indications in the embodiments of the present invention (such as up, down, left, right, front, rear, lateral, radial, horizontal, vertical, and so on) serve only to explain the relative positions and motions of the components in a particular attitude (as shown in the drawings); if that attitude changes, the directional indication changes accordingly.

In addition, descriptions such as "first" and "second" in the present invention serve descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly specifying the number of the indicated technical features. A feature qualified by "first" or "second" may thus explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality of" means at least two, for example two or three, unless explicitly and specifically limited otherwise.

In the present invention, unless otherwise explicitly specified and limited, terms such as "connected" and "fixed" shall be understood broadly: "fixed" may be a fixed connection, a detachable connection, or an integral whole; a mechanical or an electrical connection; a direct connection or an indirect connection through an intermediate medium; and an internal communication between two elements or an interaction between two elements, unless explicitly limited otherwise. Those of ordinary skill in the art can understand the specific meaning of these terms in the present invention according to the specific situation.

Moreover, the technical solutions of the various embodiments of the present invention may be combined with one another, provided the combination can be realized by those of ordinary skill in the art; where a combination of technical solutions is contradictory or unrealizable, it shall be deemed not to exist and to fall outside the protection scope claimed by the present invention.
FIG. 1 is a schematic structural diagram of the multi-view imaging module 100 in a first embodiment of the present invention.

The multi-view imaging module 100 includes a lens 10, a microlens array 20, and a photosensitive element 30.

The microlens array 20 is disposed between the lens 10 and the photosensitive element 30, on the focal plane of the imaging side of the lens 10. The microlens array 20 includes a plurality of microlenses 21 arranged in an array.

Light from the imaged object 101 enters the interior of the multi-view imaging module 100 through the lens 10; that is, light from the imaged object 101 is projected through the lens 10 from different directions onto the plurality of microlenses 21 of the microlens array 20, is refracted by the respective microlenses 21, and is incident on different photosensitive regions 31 of the photosensitive element 30, forming image information of the imaged object at multiple different angles. Because the light collected by each photosensitive region 31 comes from a different angle, images and depth information of the imaged object 101 from multiple slightly different perspectives can be recorded.

The multi-view imaging module 100 provided in this embodiment introduces a microlens array 20 into a conventional imaging system and can obtain, in a single exposure, images and depth information of an object from multiple slightly different perspectives, with a simple structure and wide applicability.

Further, referring also to FIG. 2, the photosensitive element 30 is provided with a multi-pixel photosensitive array 33, which includes a plurality of photosensitive regions 31 arranged in one-to-one correspondence with the microlenses of the microlens array 20.

The photosensitive element 30 may be a complementary metal-oxide-semiconductor (CMOS) image sensor or a charge-coupled device (CCD) image sensor.

Specifically, the microlens array 20 is set on the focal plane of the imaging side of the lens 10, and the photosensitive element 30 (CCD or CMOS) is placed behind the microlens array 20. The light refracted by each microlens 21 covers a corresponding group of pixels on the photosensitive element 30, forming a large pixel unit; on the multi-pixel photosensitive array 33, each large pixel unit corresponds to one of the photosensitive regions 31 arranged in one-to-one correspondence with the microlenses of the microlens array 20.

Light emitted by the imaged object 101 enters through the lens 10 of the multi-view imaging module 100 and is then projected from different directions onto the microlens array 20. Each microlens 21 of the microlens array 20 refracts the light and projects it onto the photosensitive element 30 behind it, so that the photosensitive region 31 corresponding to each large pixel unit captures a complete image. Because the angle of the light collected by each large pixel unit's photosensitive region 31 differs, images and depth information of the imaged object 101 from multiple slightly different perspectives can be recorded.
Optionally, in a preferred embodiment, the plurality of microlenses 21 in the microlens array 20 may consist of microlenses with identical lens shapes and sizes and identical, fixed focal lengths. For example, each microlens 21 may be a biconvex lens (convex on both sides) or a plano-convex lens (convex on one side); its shape may be circular, square, hexagonal, octagonal, and so on.

Optionally, in a preferred embodiment, the plurality of microlenses 21 in the microlens array 20 may consist of microlenses that differ in lens shape and size and whose focal lengths differ and are fixed. For example, the microlenses 21 may be biconvex lenses of different sizes and focal lengths, or plano-convex lenses of different sizes and focal lengths.

Optionally, in a preferred embodiment, the plurality of microlenses 21 in the microlens array 20 may consist of microlenses with identical lens shapes and sizes and adjustable focal lengths. The focal lengths may be adjusted electrically, optically, thermally, and so on; those skilled in the art can configure this as needed, and it is not repeated here.

Optionally, in a preferred embodiment, the plurality of microlenses 21 in the microlens array 20 may consist of microlenses that differ in lens shape and size and have adjustable focal lengths; the focal lengths may likewise be adjusted electrically, optically, thermally, and so on, as those skilled in the art see fit.

Further, in one embodiment, the plurality of microlenses 21 in the microlens array 20 may be distributed uniformly or non-uniformly on a transparent structure; the transparent structure is a convex, concave, or planar transparent structure.

For example, the microlens array 20 may consist of a plurality of convex lenses 211 uniformly distributed on a planar transparent structure, the planar transparent structure connecting them into a single microlens array 20;

as another example, the microlens array 20 may consist of a plurality of convex lenses 211 non-uniformly distributed on a planar transparent structure, the planar transparent structure connecting them into a single microlens array 20;

the microlens array 20 may also consist of a plurality of convex lenses 211 uniformly distributed on a convex transparent structure, the convex transparent structure connecting them into a single microlens array 20.

Further, the microlens array 20 may be a single-imaging microlens array or a multi-imaging microlens array. A multi-imaging microlens array 20 may include multiple layers of microlenses, for example at least two microlens arrays arranged in parallel; light reaching the microlens array 20 is refracted through the multiple layers of microlenses and then exits onto the photosensitive element 30 for imaging.
Referring also to FIG. 3, the present invention further provides an image recognition system 200 comprising the multi-view imaging module 100, a memory 201, a processor 202, and a computer program stored in the memory 201 and executable on the processor 202.

The image recognition system 200 repeatedly collects, through the multi-view imaging module 100, image information of the recognized object at multiple different angles obtained by repeatedly imaging a single perspective of the recognized object; it then uses each collected set of the multi-angle image information as sample training data and trains a convolutional neural network model to obtain a target model; image information of the object to be recognized at multiple different angles, obtained by imaging it with the multi-view imaging module, is fed into the target model to perform image recognition on the object to be recognized.

Referring also to FIG. 4: usually, when recognizing an object, a target match produces a result like FIG. 4, in which the picture from one perspective achieves the highest match rate and the pictures from other perspectives match at successively lower rates; the perspective of the best-matching picture is the single perspective at which the multi-view imaging module 100 collected training samples of the recognized object.

If the target does not match, the result resembles FIG. 5: the match rates are very low and show no regular pattern.

In an optional embodiment, image information of the recognized object at multiple different angles, obtained by the multi-view imaging module 100 imaging multiple different perspectives of the recognized object, may also be collected repeatedly; understandably, after multi-angle image information obtained by imaging many different perspectives has been used as training samples, every angle of the object to be recognized can be identified accurately.

In the image recognition system 200 provided in this embodiment, a microlens array 20 is introduced into a conventional imaging system, so images and depth information of an object from multiple slightly different perspectives can be obtained in a single exposure; combined with neural-network image recognition, using multiple slightly different views of the same object during later recognition greatly improves the accuracy of object recognition.

Further, images and depth information of the recognized object at multiple different angles may also be obtained through a light-ray reconstruction algorithm from the multi-angle image information acquired over the multiple imagings; each collected set of these multi-angle images and depth information is then used as sample training data to train a convolutional neural network model and obtain the target model.

With the image recognition system 200 of this embodiment, images and depth information of the recognized object at multiple different angles can be obtained; compared with recognition based on conventional imaging systems, which capture only two-dimensional image information and lose the third dimension, the image recognition system 200 of this embodiment greatly improves the accuracy of object recognition.

Optionally, when the focal lengths of the microlenses are adjustable, the focal lengths of the plurality of microlenses on the microlens array may be adjusted, and at different focal lengths, image information of the recognized object at multiple different angles, obtained by the multi-view imaging module imaging multiple different perspectives of the recognized object, is collected repeatedly. Multi-point refocusing over a large longitudinal range can thus be achieved, allowing each object within that depth range to be identified accurately.
Referring also to FIG. 6, the present invention further provides an image recognition method 300, comprising the steps of:

Step S10: repeatedly collecting image information of the recognized object at multiple different angles obtained by the multi-view imaging module imaging a single perspective of the recognized object;

Step S20: using each collected set of the multi-angle image information of the recognized object as sample training data and training a convolutional neural network model to obtain a target model;

Step S30: feeding image information of the object to be recognized at multiple different angles, obtained by imaging it with the multi-view imaging module, into the target model to perform image recognition on the object to be recognized.

With the image recognition method 300 provided in this embodiment, images and depth information of an object from multiple slightly different perspectives can be obtained in a single exposure; combined with neural-network image recognition, using multiple slightly different views of the same object during later recognition greatly improves the accuracy of object recognition.

Optionally, after step S10, the method further comprises: obtaining images and depth information of the recognized object at multiple different angles through a light-ray reconstruction algorithm, based on the multi-angle image information obtained from the multiple imagings;

step S20 may then specifically comprise: using each collected set of the multi-angle images and depth information of the recognized object as sample training data and training a convolutional neural network model to obtain the target model.

Images and depth information of the recognized object at multiple different angles can thus be obtained; compared with recognition based on conventional imaging, which captures only two-dimensional information and loses the third dimension, this greatly improves the accuracy of object recognition.

Optionally, the image recognition method 300 may further comprise: repeatedly collecting image information of the recognized object at multiple different angles obtained by the multi-view imaging module imaging multiple different perspectives of the recognized object.

Understandably, once multi-angle image information has been obtained by imaging multiple different perspectives of the recognized object, the training in step S20 based on the convolutional neural network model takes as input the feature data of the recognized object at every angle. After training on such multi-perspective samples, every angle of the object to be recognized can be identified accurately.

Optionally, when the focal lengths of the microlenses are adjustable, step S10 may comprise:

adjusting the focal lengths of the plurality of microlenses on the microlens array;

at different focal lengths, repeatedly collecting image information of the recognized object at multiple different angles obtained by the multi-view imaging module imaging multiple different perspectives of the recognized object.
Referring again to FIG. 3, in one embodiment of the present invention, when the computer program stored in the memory 201 and executable on the processor 202 is executed, the steps of any of the image recognition methods described above can be implemented.

It should be understood that in this specification, references to "an embodiment", "another embodiment", "other embodiments", or "the first through Nth embodiments" mean that a specific feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present invention. Such schematic expressions do not necessarily refer to the same embodiment or example, and the described features, structures, materials, or characteristics may be combined in a suitable manner in any one or more embodiments or examples.

It should be noted that the terms "comprise", "include", and any variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or system comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to that process, method, article, or system. Absent further limitations, an element qualified by "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or system that comprises it.

The above are preferred embodiments of the present invention and do not thereby limit its patent scope; all equivalent structural transformations made using the contents of this specification and the drawings under the concept of the present invention, whether applied directly or indirectly in other related technical fields, are included within the patent protection scope of the present invention.

Claims (10)

  1. An image recognition system, comprising a multi-view imaging module, a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that
    the multi-view imaging module comprises:
    a lens;
    a photosensitive element;
    a microlens array disposed between the lens and the photosensitive element, on the focal plane of the imaging side of the lens, the microlens array comprising a plurality of microlenses arranged in an array;
    wherein light from the imaged object is projected through the lens from different directions onto the plurality of microlenses of the microlens array, is refracted by the respective microlenses, and is incident on different photosensitive regions of the photosensitive element, forming image information of the imaged object at multiple different angles;
    when the processor executes the computer program, the steps of the following image recognition method are implemented:
    repeatedly collecting image information of the recognized object at multiple different angles obtained by the multi-view imaging module imaging a single perspective of the recognized object;
    using each collected set of the multi-angle image information of the recognized object as sample training data and training a convolutional neural network model to obtain a target model;
    feeding image information of the object to be recognized at multiple different angles, obtained by imaging it with the multi-view imaging module, into the target model to perform image recognition on the object to be recognized.
  2. The image recognition system according to claim 1, characterized in that the structure of the plurality of microlenses in the microlens array is one of the following:
    the plurality of microlenses have the same lens shape and size and the same, fixed focal length;
    the plurality of microlenses differ in lens shape and size, and their focal lengths differ and are fixed;
    the plurality of microlenses have the same lens shape and size, with adjustable focal lengths;
    the plurality of microlenses differ in lens shape and size, with adjustable focal lengths.
  3. The image recognition system according to claim 1, characterized in that the plurality of microlenses in the microlens array are distributed uniformly or non-uniformly on a transparent structure; the transparent structure is a convex, concave, or planar transparent structure.
  4. The image recognition system according to claim 1, characterized in that the microlens array is a single-imaging microlens array or a multi-imaging microlens array.
  5. The image recognition system according to claim 4, characterized in that the multi-imaging microlens array comprises at least two microlens arrays arranged in parallel.
  6. The image recognition system according to any one of claims 1 to 5, characterized in that the photosensitive element is a complementary metal-oxide-semiconductor image sensor or a charge-coupled device image sensor.
  7. The image recognition system according to claim 6, characterized in that the photosensitive element is provided with a multi-pixel photosensitive array comprising a plurality of photosensitive regions in one-to-one correspondence with the microlenses of the microlens array.
  8. The image recognition system according to claim 1, characterized in that after the step of repeatedly collecting image information of the recognized object at multiple different angles obtained by the multi-view imaging module imaging a single perspective of the recognized object, the method further comprises:
    obtaining images and depth information of the recognized object at multiple different angles through a light-ray reconstruction algorithm, based on the multi-angle image information obtained from the multiple imagings;
    and the step of using each collected set of the multi-angle image information of the recognized object as sample training data and training a convolutional neural network model to obtain a target model comprises:
    using each collected set of the multi-angle images and depth information of the recognized object as sample training data and training a convolutional neural network model to obtain the target model.
  9. The image recognition system according to claim 1, characterized by further comprising the step of: repeatedly collecting image information of the recognized object at multiple different angles obtained by the multi-view imaging module imaging multiple different perspectives of the recognized object.
  10. The image recognition system according to claim 9, characterized in that when the focal lengths of the microlenses are adjustable, the step of repeatedly collecting image information of the recognized object at multiple different angles obtained by the multi-view imaging module imaging a single perspective of the recognized object comprises:
    adjusting the focal lengths of the plurality of microlenses on the microlens array;
    at different focal lengths, repeatedly collecting image information of the recognized object at multiple different angles obtained by the multi-view imaging module imaging multiple different perspectives of the recognized object.
PCT/CN2018/097687 2018-07-28 2018-07-28 Image recognition system Ceased WO2020024079A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/097687 WO2020024079A1 (zh) 2018-07-28 2018-07-28 Image recognition system
CN201880002314.1A CN109496316B (zh) 2018-07-28 2018-07-28 Image recognition system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/097687 WO2020024079A1 (zh) 2018-07-28 2018-07-28 Image recognition system

Publications (1)

Publication Number Publication Date
WO2020024079A1 true WO2020024079A1 (zh) 2020-02-06

Family

ID=65713867

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/097687 Ceased WO2020024079A1 (zh) 2018-07-28 2018-07-28 图像识别系统

Country Status (2)

Country Link
CN (1) CN109496316B (zh)
WO (1) WO2020024079A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112600994A (zh) * 2020-12-02 2021-04-02 达闼机器人有限公司 Object detection apparatus and method, storage medium, and electronic device
CN114200498A (zh) * 2022-02-16 2022-03-18 湖南天巡北斗产业安全技术研究院有限公司 Combined satellite-navigation/optical target detection method and system

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111260299B (zh) * 2020-02-18 2023-07-18 中国联合网络通信集团有限公司 Goods inventory and management method and apparatus, electronic device, and storage medium
US11543654B2 (en) * 2020-09-16 2023-01-03 Aac Optics Solutions Pte. Ltd. Lens module and system for producing image having lens module
CN112329567A (zh) * 2020-10-27 2021-02-05 武汉光庭信息技术股份有限公司 Method and system for target detection in autonomous driving scenarios, server, and medium
CN114049317A (zh) * 2021-11-03 2022-02-15 安徽灿宇光电科技有限公司 Artificial-intelligence-based automated inspection system and method for CCD equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101610353A (zh) * 2008-01-23 2009-12-23 奥多比公司 Method and apparatus for full-resolution light-field capture and rendering
CN106846463A (zh) * 2017-01-13 2017-06-13 清华大学 Microscopic image three-dimensional reconstruction method and system based on a deep-learning neural network
CN107993260A (zh) * 2017-12-14 2018-05-04 浙江工商大学 Light-field image depth estimation method based on a hybrid convolutional neural network
CN108154066A (zh) * 2016-12-02 2018-06-12 中国科学院沈阳自动化研究所 Three-dimensional target recognition method based on a curvature-feature recurrent neural network
CN108175535A (zh) * 2017-12-21 2018-06-19 北京理工大学 Dental three-dimensional scanner based on a microlens array

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105118044B (zh) * 2015-06-16 2017-11-07 华南理工大学 Automatic defect detection method for wheel-shaped cast products
WO2017046651A2 (en) * 2015-09-17 2017-03-23 Valdhorn Dan Method and apparatus for privacy preserving optical monitoring
US10115032B2 (en) * 2015-11-04 2018-10-30 Nec Corporation Universal correspondence network
CN106840398B (zh) * 2017-01-12 2018-02-02 南京大学 Multispectral light-field imaging method
CN107302695A (zh) * 2017-05-31 2017-10-27 天津大学 Electronic compound-eye system based on a bionic vision mechanism

Also Published As

Publication number Publication date
CN109496316A (zh) 2019-03-19
CN109496316B (zh) 2022-04-01

Similar Documents

Publication Publication Date Title
WO2020024079A1 (zh) Image recognition system
CN103472592B (zh) Snapshot high-throughput polarization imaging method and polarization imager
US9048153B2 (en) Three-dimensional image sensor
CN103323113B (zh) Multispectral imager based on light-field imaging technology
CN104050662B (zh) Method for directly obtaining a depth map with a single exposure of a light-field camera
CN105282443B (zh) Panoramic image imaging method with full depth of field
US10021340B2 (en) Method and an apparatus for generating data representative of a light field
JP2019532451A (ja) Apparatus and method for obtaining distance information from viewpoints
WO2018049949A1 (zh) Distance estimation method based on a handheld light-field camera
US10715711B2 (en) Adaptive three-dimensional imaging system and methods and uses thereof
CN112866512B (zh) Compound-eye imaging apparatus and compound-eye system
CN102081296B (zh) Apparatus and method for fast moving-target localization and synchronous panorama acquisition with compound-eye-like vision
CN105721854B (zh) Imaging apparatus, portable terminal thereof, and imaging method using the imaging apparatus
CN107454377B (zh) Algorithm and system for three-dimensional imaging with a camera
CN105654484B (zh) Light-field camera extrinsic parameter calibration apparatus and method
CN109883391B (zh) Monocular distance measurement method based on microlens-array digital imaging
CN111650759A (zh) Multi-focal-length microlens-array remote-sensing light-field imaging system with near-infrared spot projection
Martel et al. Real-time depth from focus on a programmable focal plane processor
WO2021121037A1 (zh) Method and system for light-field reconstruction using depth sampling
US10872442B2 (en) Apparatus and a method for encoding an image captured by an optical acquisition system
US11092820B2 (en) Apparatus and a method for generating data representative of a pixel beam
CN107610170B (zh) Depth acquisition method and system for refocusing of multi-view images
CN113936063A (zh) Dual-optical-path imaging apparatus, image depth detection method, and storage medium
CN208063317U (zh) Stereoscopic vision restoration system based on a single-lens prism
CN111164970B (zh) Light-field acquisition method and acquisition apparatus

Legal Events

Date Code Title Description
121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 18928958; Country of ref document: EP; Kind code of ref document: A1
NENP | Non-entry into the national phase | Ref country code: DE
32PN | EP: public notification in the EP bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 14.06.2021)
122 | EP: PCT application non-entry in European phase | Ref document number: 18928958; Country of ref document: EP; Kind code of ref document: A1