
WO2020143090A1 - Image acquisition method, apparatus and device based on multiple cameras - Google Patents


Info

Publication number
WO2020143090A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
image
function
overlapping area
boundary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2019/073764
Other languages
French (fr)
Chinese (zh)
Inventor
蒋壮
郑勇
段瑾
王辉
黄磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Water World Co Ltd
Original Assignee
Shenzhen Water World Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Water World Co Ltd filed Critical Shenzhen Water World Co Ltd
Publication of WO2020143090A1 publication Critical patent/WO2020143090A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects

Definitions

  • the invention relates to the field of imaging, in particular to an image acquisition method, device and equipment based on multiple cameras.
  • the main object of the present invention is to provide an image acquisition method, device and equipment based on multiple cameras, which can remove overlapping regions only by the geometric optics method, thereby increasing the speed of the stitching process.
  • the invention provides an image acquisition method based on multiple cameras, including:
  • This application provides an image acquisition device based on multiple cameras, including:
  • the simultaneous acquisition unit is configured to acquire the first image captured by the first camera and the second image captured by the second camera simultaneously with the first camera;
  • An overlapping area acquisition unit configured to obtain an overlapping area of the first image and the second image via geometric optics
  • a backup image generating unit configured to remove the overlapping area in the first image to obtain a backup image after removing the overlapping area
  • a stitching unit is used to stitch the backup image and the second image.
  • the present application provides an apparatus, which includes a processor, a memory, and a computer program stored on the memory and executable on the processor; when the processor executes the computer program, it implements the multi-camera-based image acquisition method described in any one of the foregoing.
  • the present invention has the beneficial effect that, according to the multi-camera-based image acquisition method, device and equipment provided by the present invention, the overlapping area in the images captured by different cameras is calculated by geometric optics, and before the stitching process the overlapping area is deleted from the image formed by the first camera, thereby eliminating the stitching operations on each pixel of the overlapping area when stitching multiple images. That is to say, a pixel-level comparison algorithm over each image frame is not needed, which has the technical effect of saving stitching computing power and accelerating stitching speed.
  • FIG. 1 is a schematic flowchart of an image acquisition method based on multiple cameras according to an embodiment of the present application
  • FIG. 2 is a schematic block diagram of a structure of an image acquisition device based on multiple cameras according to an embodiment of the present application
  • FIG. 3 is a structural block diagram of a storage medium according to an embodiment of the application.
  • FIG. 4 is a structural block diagram of a device according to an embodiment of the application.
  • FIG. 5-7 are schematic diagrams of the principle of the image acquisition method based on multiple cameras of this application.
  • Figures a and b are schematic diagrams of the auxiliary lines drawn when calculating the coordinate values of point A1 and point B1 in Figure 6;
  • FIG. c is a schematic diagram of the auxiliary line drawn when calculating the coordinate value of point B2 in FIG. 7 of the present application.
  • A2 is the first camera
  • A1 is the second camera
  • f is the focus
  • A is the point that can be imaged in the first camera and the second camera at the same time
  • line l1 is the line connecting the lower end of the first camera to the focal point
  • line l2 is the line connecting the upper end of the second camera to the focal point.
  • an embodiment of an image acquisition method based on multiple cameras includes:
  • multi-camera means that the number of cameras is two or more.
  • the midpoint between the first camera (lens A2, the lower lens in the figure) and the second camera (lens A1, the upper lens in the figure) is taken as the origin; the line connecting the center of the first camera and the center of the second camera is the y-axis; the line through the origin parallel to the line from the center of the first camera to the focal point of the first camera is the x-axis; and the z-axis is perpendicular to both the y-axis and the x-axis (perpendicular to the paper). A three-dimensional rectangular coordinate system is thereby established.
  • the cone-like region on the left, bounded by the straight line l1 (the line connecting the lower end of the first camera to the focal point, i.e. its imaging boundary) and the straight line l2 (the line connecting the upper end of the second camera to the focal point, i.e. its imaging boundary), is the region that can be imaged by both the first camera and the second camera (that is, point A can be imaged by the first camera and the second camera at the same time)
  • AB is the target to be imaged
  • AB corresponds to the solid region of the overlapping imaging area
  • A(-u1, h1) is the intersection point with line l1
  • B(-u2,-h2) is the intersection point with the l2 line
  • geometric optics is used to calculate the coordinates of A1 and B1 (the imaging point coordinates of the first camera)
  • and of A2 and B2 (the imaging point coordinates of the second camera)
  • A1 and B1 are boundary points (critical points)
  • once the coordinates of A1 and B1 are obtained, it can be seen that the images A1B1 and A2B2 are identical, and therefore either A1B1 or A2B2 can be removed.
  • the method of calculating the coordinates of A1, B1, A2 and B2 combines the imaging principle with similar triangles. Assume the diameter of the camera (that is, of the convex lens) is r, the distance between the two cameras is d, and the focal length of the lens is f. The following describes the method for finding the coordinates of point A1.
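The imaging-principle-plus-similar-triangles calculation above can be illustrated with a minimal thin-lens sketch. This is an assumed formulation for illustration only (the function name, parameter names and sign conventions are not from the patent): given object distance u, object height h and focal length f, the thin-lens equation gives the image distance, and similar triangles give the image height.

```python
# Hedged sketch of the imaging principle combined with similar triangles.
# Assumptions (not from the patent text): thin-lens model, object at
# distance u > f from the lens, distances taken positive on both sides.

def image_point(u: float, h: float, f: float) -> tuple[float, float]:
    """Return (v, h_img): image distance and image height.

    Thin-lens equation: 1/v + 1/u = 1/f
    Magnification (similar triangles): h_img / h = -v / u
    """
    if u <= f:
        raise ValueError("object must lie outside the focal length for a real image")
    v = 1.0 / (1.0 / f - 1.0 / u)  # image distance from 1/v = 1/f - 1/u
    h_img = -h * v / u             # inverted image, by similar triangles
    return v, h_img
```

For example, with f = 50 and u = 100 the image forms at v = 100 with magnification -1; a boundary point such as A would be mapped in the same way to obtain its image coordinates.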
  • in step S1, the first image captured by the first camera and the second image captured by the second camera simultaneously with the first camera are acquired. The simultaneously captured first and second images thus serve as the basis for subsequent stitching.
  • the overlapping area of the first image and the second image is obtained via geometric optics.
  • Any feasible method can be used to obtain the overlapping area of the first image and the second image through geometric optics, for example: establish a first framing boundary function of the first camera and a second framing boundary function of the second camera, and calculate the framing boundary intersection curve function of the first framing boundary function and the second framing boundary function; obtain the object distance, and substitute the object distance into the framing boundary intersection curve function to obtain the first intersection curve function; according to the imaging principle, calculate the first intersection curve imaging function, i.e. the first intersection curve function as imaged by the first camera, the area enclosed by the first intersection curve imaging function being the overlapping area of the first image and the second image; finally, acquire the overlapping area.
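As a concrete illustration of these steps, the following sketch assumes two identical cameras with conical viewing ranges of half-angle theta, separated by a baseline d along the y-axis and both looking along the x-axis; at object distance u each cone cuts the object plane in a circle, and the overlapping area is the intersection of the two circles. The circular cross-section and all names are assumptions for illustration, not the patent's exact functions.

```python
import math

# Hedged sketch of steps S201-S204 under assumed geometry: two identical
# cameras with conical framing ranges (half-angle theta), centres d apart
# along the y-axis, both looking along +x.

def overlap_at_distance(d: float, theta: float, u: float):
    """At object distance u, each framing cone meets the object plane in a
    circle of radius u*tan(theta); the 'first intersection curve' is the
    boundary of the intersection of the two circles. Returns the circle
    radius, the two centres, and whether the circles overlap at all."""
    r = u * math.tan(theta)
    c1 = (0.0, +d / 2.0)  # first camera's circle centre in the object plane
    c2 = (0.0, -d / 2.0)  # second camera's circle centre
    return r, c1, c2, (2.0 * r > d)
```

Close to the cameras (small u) the circles may not meet, so there is no overlapping area; beyond u = d / (2 tan theta) the overlap appears and keeps growing with distance, matching the tapered extension of the overlap described later in the text.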
  • the overlapping area is removed from the first image to obtain a backup image after removing the overlapping area.
  • the overlapping area is the area overlapping the second image in the image acquired by the first camera. Therefore, in order to obtain a suitable image, the overlapping area should be removed in the first image.
  • the stitching method may include: based on the backup image, merging the second image into the backup image from a first predetermined direction, where the first predetermined direction is the direction in which the second camera points to the first camera; or, based on the second image, merging the backup image into the second image from a second predetermined direction, where the second predetermined direction is the direction in which the first camera points to the second camera. The original images are thus stitched and synthesized into the final image, which avoids copying the image data and searching for a stitching path, and is convenient and fast.
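As a toy illustration of this direction-based merge (a sketch only; the rows-of-pixels representation and the function name are assumptions, not the patent's implementation): once the overlap has been removed from the first image, stitching reduces to placing the two images side by side, with no copy-path search.

```python
# Hedged sketch: after overlap removal, stitching is plain concatenation
# along the direction from one camera to the other. Images are modelled
# as lists of pixel rows for illustration only.

def stitch(backup_rows: list, second_rows: list) -> list:
    """Concatenate each row of the backup image (overlap already removed)
    with the corresponding row of the second image."""
    if len(backup_rows) != len(second_rows):
        raise ValueError("the sketch assumes images of equal height")
    return [b + s for b, s in zip(backup_rows, second_rows)]
```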
  • the step S2 of obtaining the overlapping area of the first image and the second image via geometric optics includes:
  • S203: Calculate, according to the imaging principle, the first intersection curve imaging function of the first intersection curve function as imaged by the first camera;
  • the area enclosed by the first intersection curve imaging function is the overlapping area of the first image and the second image;
  • the parameters of the first camera and the second camera are the same, and the viewing range may be a cone range.
  • the "cone" shape is only an implementation example and does not constitute a limitation of this solution in other possible embodiments.
  • the camera's framing has a limited range and cannot cover 360 degrees without blind angles. Therefore, the range that the camera can shoot can be expressed by the region enclosed by a three-dimensional function, and the boundary of the range that the camera can shoot is called the framing boundary function.
  • the parameters of the first camera and the second camera may also be different, and the viewfinder range may be any feasible range.
  • the intersection area of the two cones is the overlapping viewfinder area
  • the boundary curve of the overlapping viewfinder area is the viewfinder intersection curve
  • the function of this curve is the viewfinder intersection curve function.
  • specifically, the framing boundary intersection curve function can be calculated by the method of analytic geometry. The object distance is then obtained and substituted into the framing boundary intersection curve function to obtain the first intersection curve function.
  • the object distance refers to the distance between the vertical plane of the object to be photographed and the vertical plane of the multi-camera setup. It can be set by the user, for example by manually entering the object-distance parameter, or by selecting the object so that the object distance is calculated via the multi-camera distance measurement principle.
  • the principle of multi-camera distance measurement is an existing technology and will not be repeated here.
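The ranging principle referred to here is not spelled out in the text; a common form of it is the parallel-stereo pinhole relation, sketched below as an assumption (not necessarily the patent's method): depth equals focal length times baseline divided by disparity.

```python
# Hedged sketch of a multi-camera ranging rule. Assumption: standard
# parallel-stereo pinhole model; names are illustrative.

def object_distance(f: float, baseline: float, disparity: float) -> float:
    """Depth of a point seen by two parallel cameras:
    Z = f * baseline / disparity, where disparity is the difference of
    the point's image x-coordinates in the two cameras."""
    if disparity <= 0:
        raise ValueError("disparity must be positive")
    return f * baseline / disparity
```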
  • the first intersection curve function is a function of the intersection curve of the view boundary intersection curve function and the vertical plane where the object corresponding to the object distance is located (that is, the vertical plane where the subject is located), which is a two-dimensional closed curve
  • the aforementioned intersection boundary curve function of the viewfinder is a function of three-dimensional space.
  • according to the imaging principle, the first intersection curve imaging function of the first intersection curve function imaged by the first camera is calculated, and the area enclosed by the first intersection curve imaging function is the overlapping area on the imaging side. Based on the imaging principle, when the camera parameters are known, the image formed by the photographed object is determined. Accordingly, the first intersection curve imaging function of the first intersection curve function imaged by the first camera can be obtained.
  • the calculation method adopts the method of analytic geometry, which will not be described in detail.
  • the framing ranges of the first camera and the second camera are both conical, and the first framing boundary function of the first camera and the second framing boundary function of the second camera are calculated.
  • the step S201 of calculating the framing boundary intersection curve function of the first framing boundary function and the second framing boundary function includes:
  • the framing boundary intersection curve function of the first camera and the second camera is calculated. It is worth mentioning that this three-dimensional coordinate system differs in axis directions from the coordinate system described and illustrated in FIG. 5; the different coordinate systems serve only their respective descriptions and do not conflict.
  • the framing range of the present application may not be conical. At this time, the parameter r is set to different values according to the framing range, so that it can be applied to various lenses.
  • the obtaining object distance includes:
  • the first intersection curve function is obtained.
  • the first camera is turned on first, and the user can select the subject of interest in the temporary image acquired by the first camera and set the object distance accordingly; alternatively, the target subject can be obtained through specific settings, such as extracting the subject located in a particular area within the first camera's range, to meet different needs.
  • the method of obtaining the object distance is to use the multi-camera distance measuring principle.
  • the step S4 of stitching the backup image and the second image includes:
  • the overlapping content is further deleted. Since the previous steps have already deleted the overlapping area determined by the object distance value before stitching, little overlapping content remains. On this basis, the remaining duplicate pixels are deleted, further improving the quality of the image. Moreover, because little overlapping content remains, few pixels need to be deleted in this embodiment, which greatly reduces the demand for computing power.
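A minimal sketch of this residual cleanup (the representation and matching rule are assumptions: images as rows of pixel values, with any remaining overlap sitting at the seam where the two images meet): for each row, the longest suffix of the backup row that equals a prefix of the second-image row is dropped.

```python
# Hedged sketch of residual duplicate-pixel removal at the seam.
# Assumption: any overlap left after the geometric removal shows up as a
# suffix of the backup row equal to a prefix of the second-image row.

def remove_duplicate_pixels(backup_rows: list, second_rows: list) -> list:
    """For each row pair, drop the longest suffix of the backup row that
    equals a prefix of the second-image row."""
    cleaned = []
    for b, s in zip(backup_rows, second_rows):
        k = 0
        for n in range(1, min(len(b), len(s)) + 1):
            if b[len(b) - n:] == s[:n]:
                k = n  # remember the longest match seen so far
        cleaned.append(b[:len(b) - k] if k else b)
    return cleaned
```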
  • the reason for the occurrence of duplicate pixels is introduced here:
  • the viewfinder range of the camera is continuously extended toward the object side in a tapered shape, so the overlapping part of the viewfinder range of the two cameras is also continuously extended
  • the overlapping area obtained earlier in this embodiment refers to the part of the object-side overlapping region determined by one object distance value. Beyond that object distance the overlapping space of the two cameras continues to extend, and some objects located there still appear in the viewing ranges of both cameras at the same time. Therefore, after the overlapping area within one object distance has been deleted, objects farther away than that object distance may still overlap in the captured images, which is why duplicate pixels appear.
  • the image acquisition method based on multiple cameras includes:
  • image acquisition based on multiple cameras is achieved. Since the foregoing method has acquired image acquisition of multiple cameras, according to this, image acquisition based on multiple cameras can be achieved.
  • Amo (where o is greater than m and less than or equal to n) is the overlapping area of the image captured by the m-th camera with the image captured by the o-th camera, and this overlapping area should be removed so that there is no overlapping area between the image captured by the m-th camera and the image captured by the o-th camera. Accordingly, all overlapping regions can be removed and the images then stitched to obtain the final non-overlapping image.
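The bookkeeping of the pairwise overlap regions Amo can be sketched as a simple enumeration (the helper name is illustrative, not from the patent):

```python
# Hedged sketch: enumerate the camera pairs (m, o) with o > m whose
# overlap regions A_mo must be removed before stitching n preliminary
# images.

def overlap_pairs(n: int) -> list:
    """Return [(1, 2), (1, 3), ..., (1, n), (2, 3), ..., (n - 1, n)],
    one entry per overlap region A_mo with o > m."""
    return [(m, o) for m in range(1, n + 1) for o in range(m + 1, n + 1)]
```

For n = 3 this yields the three regions A12, A13 and A23; in general there are n(n-1)/2 of them.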
  • the first camera and the second camera in the foregoing embodiments are any two of the multiple cameras in this embodiment, and the first image and the second image are any two of the preliminary images in this embodiment.
  • an embodiment of an image acquisition device based on multiple cameras includes:
  • the simultaneous obtaining unit 1 is configured to obtain the first image captured by the first camera and the second image captured by the second camera simultaneously with the first camera;
  • the overlapping area acquisition unit 2 is used to obtain the overlapping area of the first image and the second image via geometric optics;
  • the backup image generating unit 3 is configured to remove the overlapping area in the first image to obtain a backup image after removing the overlapping area;
  • the stitching unit 4 is used for stitching the backup image and the second image.
  • the first image captured by the first camera and the second image captured by the second camera simultaneously with the first camera are acquired. According to this, the first image and the second image captured simultaneously are obtained as a basis for subsequent stitching.
  • the overlapping area of the first image and the second image is obtained via geometric optics.
  • Any feasible method can be used to obtain the overlapping area of the first image and the second image through geometric optics, for example: establish a first framing boundary function of the first camera and a second framing boundary function of the second camera, and calculate the framing boundary intersection curve function of the first framing boundary function and the second framing boundary function; obtain the object distance, and substitute the object distance into the framing boundary intersection curve function to obtain the first intersection curve function; according to the imaging principle, calculate the first intersection curve imaging function, i.e. the first intersection curve function as imaged by the first camera, the area enclosed by the first intersection curve imaging function being the overlapping area of the first image and the second image; finally, acquire the overlapping area.
  • the overlapping area is removed from the first image to obtain a backup image after removing the overlapping area.
  • the overlapping area is the area overlapping the second image in the image acquired by the first camera. Therefore, in order to obtain a suitable image, the overlapping area should be removed in the first image.
  • the stitching unit 4 includes: a first stitching subunit for merging the second image into the backup image from the first predetermined direction, based on the backup image, where the first predetermined direction is the direction in which the second camera points to the first camera; or a second stitching subunit for merging the backup image into the second image from the second predetermined direction, based on the second image, where the second predetermined direction is the direction in which the first camera points to the second camera. The original images are thus stitched and synthesized into the final image, which avoids copying the image data and searching for a stitching path, and is convenient and fast.
  • the overlapping area acquisition unit 2 includes:
  • the framing boundary intersection curve function calculation subunit is used to establish the first framing boundary function of the first camera and the second framing boundary function of the second camera, and calculate the relationship between the first framing boundary function and the second framing boundary function Intersecting curve function of viewfinder boundary;
  • the first intersection curve function calculation subunit is used to obtain the object distance, and substitute the object distance into the view boundary intersection curve function to obtain the first intersection curve function;
  • the overlapping area calculation subunit is used for calculating the first intersection curve imaging function of the first intersection curve function imaged by the first camera according to the imaging principle, and the area enclosed by the first intersection curve imaging function is the first image The overlapping area with the second image;
  • the overlapping area acquisition subunit is used to acquire the overlapping area.
  • the overlapping area of the first image and the second image is obtained via geometric optics.
  • the parameters of the first camera and the second camera are the same, and the viewing range may be a cone range.
  • the "cone" is only an implementation example and does not constitute a limitation of this solution in other possible embodiments.
  • the parameters of the first camera and the second camera may also differ, and the viewing range may be any feasible range. In any case, the intersection of the two viewing ranges is the overlapping viewfinder area, the boundary curve of the overlapping viewfinder area is the viewfinder intersection curve, and the function of this curve is the viewfinder intersection curve function.
  • specifically, the framing boundary intersection curve function can be calculated by the method of analytic geometry.
  • the object distance refers to the distance between the vertical plane of the object to be photographed and the vertical plane of the multi-camera setup. It can be set by the user, for example by manually entering the object-distance parameter, or by selecting the object so that the object distance is calculated via the multi-camera distance measurement principle.
  • the principle of multi-camera distance measurement is an existing technology and will not be repeated here.
  • the first intersection curve function is the function of the intersection curve of the framing boundary intersection curve function with the vertical plane in which the object corresponding to the object distance is located (that is, the vertical plane in which the subject is located), and it is a two-dimensional closed curve
  • the aforementioned intersection boundary curve function of the viewfinder is a function of three-dimensional space.
  • according to the imaging principle, the first intersection curve imaging function of the first intersection curve function imaged by the first camera is calculated, and the area enclosed by the first intersection curve imaging function is the overlapping area on the imaging side.
  • the image formed by the photographed object is certain. According to this, the first intersection curve imaging function that is imaged by the first camera with the first intersection curve function can be obtained.
  • the calculation method adopts the method of analytic geometry, which will not be described in detail.
  • the framing ranges of the first camera and the second camera are both cone-shaped, and the sub-unit for calculating the intersection curve function of the framing boundary includes:
  • the three-dimensional Cartesian coordinate system building module is used to take the midpoint between the first camera and the second camera as the origin, take the line connecting the center of the first camera and the center of the second camera as the y-axis, make the z-axis parallel to the line connecting the center of the first camera to the focal point of the first camera, set the x-axis perpendicular to the y-axis and the z-axis, and establish a three-dimensional rectangular coordinate system;
  • the curve function of the intersection of the framing boundaries of the first camera and the second camera is calculated.
  • the first intersection curve function calculation subunit includes:
  • a subject receiving module configured to acquire a temporary image through the first camera, and receive a subject selected by the user in the temporary image
  • the object distance setting module is used to turn on the second camera, use the principle of multi-camera distance measurement to obtain the distance between the shooting object and the plane where the first camera and the second camera are located, and set the distance to the object distance.
  • the first intersection curve function is obtained.
  • the first camera is turned on first, and the user can select the subject of interest in the temporary image acquired by the first camera and set the object distance accordingly; alternatively, the target subject can be obtained through specific settings, such as extracting the subject located in a particular area within the first camera's range, to meet different needs.
  • the method of obtaining the object distance is to use the multi-camera distance measuring principle.
  • the device includes:
  • a comparison unit for comparing the first pixel of the backup image with the second pixel of the second image
  • the pixel deletion unit is configured to delete the first pixel that is the same as the second pixel in the backup image, so as to obtain a backup image for stitching.
  • the device includes:
  • a plurality of preliminary image acquisition units configured to acquire a plurality of preliminary images simultaneously shot by a plurality of cameras, wherein the cameras have n in total, and n is equal to or greater than 3;
  • overlapping area acquisition units for acquiring, via geometric optics, the overlapping areas A12, A13, ..., A1n, A23, A24, ..., A2n, Am(m+1), Am(m+2), ..., Amn, where n is greater than m and Amn refers to the overlapping area of the m-th camera and the n-th camera;
  • the preliminary image splicing unit is used for splicing the plurality of preliminary images to remove overlapping regions, as described above, to achieve image acquisition based on multiple cameras. Since the foregoing method has acquired image acquisition of multiple cameras, according to this, image acquisition based on multiple cameras can be achieved.
  • the first camera and the second camera in the foregoing embodiments are any two of the multiple cameras in this embodiment, and the first image and the second image are any two of the multiple preliminary images in this embodiment.
  • the overlapping area in the images captured by different cameras is calculated by geometric optics, and before the stitching process the overlapping area is deleted from the image formed by the first camera, thereby achieving the technical effect of saving stitching computing power and accelerating stitching speed.
  • the present application also provides a storage medium 10 that stores a computer program 20 which, when run on a computer, causes the computer to execute the multi-camera-based image acquisition method described in the above embodiments; that is, when the processor executes the computer program, an image acquisition method based on multiple cameras as described in the above embodiments is implemented.
  • the present application also provides a device 30 containing instructions.
  • through the processor 40 provided therein, the device 30 executes the multi-camera-based image acquisition method described in the above embodiments; that is, when the processor executes the computer program, an image acquisition method based on multiple cameras as described in the above embodiments is implemented.
  • the device 30 in this embodiment is a computer device 30.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, all or part of the processes or functions according to the embodiments of the present application are generated.
  • the computer may be a general-purpose computer, a dedicated computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a storage medium, or transmitted from one storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center via wired means (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, microwave).
  • the storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, Solid State Disk (SSD)), or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed are an image acquisition method, apparatus and device based on multiple cameras. The image acquisition method based on multiple cameras comprises: acquiring a first image photographed by a first camera and a second image photographed by a second camera simultaneously with the first camera (S1); obtaining an overlapping region of the first image and the second image via geometrical optics (S2); removing the overlapping region from the first image to obtain a standby image after removal of the overlapping region (S3); and splicing the standby image and the second image (S4). Therefore, the technical effects of saving on calculation power and accelerating the splicing speed are achieved.

Description

Image acquisition method, apparatus and device based on multiple cameras

Technical Field

The invention relates to the field of imaging, in particular to an image acquisition method, device and equipment based on multiple cameras.

Background Art

It is a trend for mobile phones to integrate multiple cameras, which can expand the shooting area. Multi-camera imaging first acquires the image shot by each camera, and then stitches those images together to obtain a final image with a wider shooting area. Prior-art multi-camera stitching and synthesis technology generally uses a pixel-level comparison algorithm over each image frame of the different cameras, which consumes computing power and is difficult to use. The prior art therefore lacks a solution for quickly calculating the overlapping regions and quickly stitching the images to obtain the final multi-camera image.

Technical Problem

The main object of the present invention is to provide a multi-camera-based image acquisition method, apparatus and device that can remove the overlapping area by geometric optics alone, thereby increasing the speed of the stitching process.

Technical Solution

The present invention provides a multi-camera-based image acquisition method, comprising:

acquiring a first image captured by a first camera and a second image captured by a second camera simultaneously with the first camera;

obtaining the overlapping area of the first image and the second image via geometric optics;

removing the overlapping area from the first image to obtain a standby image;

stitching the standby image and the second image.

The present application provides a multi-camera-based image acquisition apparatus, comprising:

a simultaneous acquisition unit, configured to acquire a first image captured by a first camera and a second image captured by a second camera simultaneously with the first camera;

an overlapping area acquisition unit, configured to obtain the overlapping area of the first image and the second image via geometric optics;

a standby image generation unit, configured to remove the overlapping area from the first image to obtain a standby image;

a stitching unit, configured to stitch the standby image and the second image.

The present application provides a device comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the multi-camera-based image acquisition method according to any one of the foregoing.

Beneficial Effects

Compared with the prior art, the present invention is beneficial in that: according to the multi-camera-based image acquisition method, apparatus and device provided by the present invention, the overlapping area of the images captured by different cameras is calculated by geometric optics, and the overlapping area is deleted from the image formed by the first camera before the stitching process. When multiple images are stitched, the stitching computation for every pixel in the overlapping area is thus avoided; that is, no per-frame pixel-level comparison algorithm is needed, which saves computing power and speeds up stitching.

Brief Description of the Drawings

FIG. 1 is a schematic flowchart of a multi-camera-based image acquisition method according to an embodiment of the present application;

FIG. 2 is a schematic structural block diagram of a multi-camera-based image acquisition apparatus according to an embodiment of the present application;

FIG. 3 is a structural block diagram of a storage medium according to an embodiment of the present application;

FIG. 4 is a structural block diagram of a device according to an embodiment of the present application;

FIGS. 5-7 are schematic diagrams of the principle of the multi-camera-based image acquisition method of the present application;

FIGS. a and b are schematic diagrams of the auxiliary lines used to obtain the coordinate values of points A1 and B1 in FIG. 6;

FIG. c is a schematic diagram of the auxiliary line used to obtain the coordinate value of point B2 in FIG. 7 of the present application.

The reference signs are as follows:

A2 denotes the first camera, A1 the second camera, and f the focal point; A is a point that can be imaged by the first camera and the second camera simultaneously; line l1 connects the lower edge of the first camera to the focal point, and line l2 connects the upper edge of the second camera to the focal point.

The objects, functional features and advantages of the present invention will be further described with reference to the embodiments and the accompanying drawings.

Best Mode of the Invention

It should be understood that the specific embodiments described herein are intended only to explain the present invention and not to limit it.

As shown in FIG. 1, an embodiment of a multi-camera-based image acquisition method comprises:

S1. Acquiring a first image captured by a first camera and a second image captured by a second camera simultaneously with the first camera;

S2. Obtaining the overlapping area of the first image and the second image via geometric optics;

S3. Removing the overlapping area from the first image to obtain a standby image;

S4. Stitching the standby image and the second image.

Here, "multiple cameras" means that the number of cameras is two or more.

The principle of the present application is introduced here. Referring to FIG. 5, a three-dimensional rectangular coordinate system is established as follows: the origin is the midpoint between the first camera (the lower lens A2 in the figure) and the second camera (the upper lens A1); the y-axis is the line connecting the centers of the two cameras; the x-axis is the line through the origin parallel to the line connecting the center of the first camera and its focal point; and the z-axis is perpendicular to both the y-axis and the x-axis (i.e., perpendicular to the page). The cone-shaped region on the left bounded by line l1 (connecting the lower edge of the first camera to the focal point, i.e., an imaging boundary) and line l2 (connecting the upper edge of the second camera to the focal point, i.e., an imaging boundary) is the region that both the first and second cameras can image (i.e., point A can be imaged by the first and second cameras simultaneously). Once the image of this region through the first camera is removed from the image formed by the first camera, the subsequent stitching steps can be performed.

Referring to FIGS. 6 and 7, the method of obtaining the image-point coordinates by geometric optics is as follows. AB is the target to be imaged, and AB is the physical region corresponding to the overlapping area of the two images; A(-u1, h1) is the intersection with line l1, and B(-u2, -h2) is the intersection with line l2. Geometric optics is used to calculate the image coordinates A1 and B1 of points A and B (the image-point coordinates through the first camera), or A2 and B2 (the image-point coordinates through the second camera). Since A1 and B1 are boundary (critical) points, finding their coordinates determines the image segment A1B1, and likewise for A2B2; the segment A1B1 or A2B2 can then be removed.

The coordinates of the four points A1, B1, A2 and B2 are calculated by combining the imaging principle with similar triangles. Assume that the diameter of each camera (i.e., each convex lens) is r, the spacing between the two cameras is d, and the focal length of the lenses is f. The derivation of the coordinates of point A1 is given below.

For point A1, draw a line through point A parallel to the x-axis; once s1 is found (as shown in FIG. a), the coordinates of point A1 are known. From similar triangles:

Figure PCTCN2019073764-appb-000001

so that

Figure PCTCN2019073764-appb-000002

The abscissa Xa1 of A1 then follows:

Figure PCTCN2019073764-appb-000003

and therefore:

Figure PCTCN2019073764-appb-000004

That is, the coordinates of A1 are:

Figure PCTCN2019073764-appb-000005

A2, B1 and B2 can be derived in the same way (see FIGS. b and c) and are not described in detail here.
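The patent's exact closed-form expressions for A1, B1, A2 and B2 are given only in the figure images above. As a hedged illustration of the underlying optical step, the sketch below applies the standard thin-lens equation to a single object point; the function name and the simplification to one on-axis lens are assumptions of this sketch, not the patent's own formulas.

```python
def thin_lens_image(u, h, f):
    """Image-side coordinates of an object point via the thin-lens equation.

    u: object distance (> f), h: object height off the optical axis,
    f: focal length. Returns (v, h_img): the image distance behind the
    lens and the (inverted) image height.
    """
    v = u * f / (u - f)      # from 1/u + 1/v = 1/f
    h_img = -h * v / u       # magnification m = -v/u (image is inverted)
    return v, h_img
```

For example, an object at u = 2f images at v = 2f with unit magnification, the familiar symmetric case; boundary points such as A and B would each be mapped this way before the lens-offset terms in d and r are applied.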

Finally, the coordinates of the four points A1, B1, A2 and B2 are obtained as follows:

Figure PCTCN2019073764-appb-000006

Figure PCTCN2019073764-appb-000007

Figure PCTCN2019073764-appb-000008

Figure PCTCN2019073764-appb-000009

After the coordinates of the four points A1, B1, A2 and B2 are obtained, the corresponding overlapping area is removed from the image acquired by the corresponding camera, and the stitching operation is then performed.

As described in step S1 above, the first image captured by the first camera and the second image captured by the second camera simultaneously with the first camera are acquired. The simultaneously captured first and second images thus serve as the basis for the subsequent stitching.

As described in step S2 above, the overlapping area of the first image and the second image is obtained via geometric optics. Any feasible method may be used, for example: establishing a first framing boundary function of the first camera and a second framing boundary function of the second camera, and calculating the framing-boundary intersection curve function of the two; obtaining the object distance and substituting it into the framing-boundary intersection curve function to obtain a first intersection curve function; calculating, according to the imaging principle, the first intersection-curve imaging function formed by imaging the first intersection curve function through the first camera, the area enclosed by which is the overlapping area of the first and second images; and acquiring the overlapping area.

As described in step S3 above, the overlapping area is removed from the first image to obtain the standby image. As described above, the overlapping area is the part of the image acquired by the first camera that overlaps the second image; to obtain a suitable image, it should therefore be removed from the first image.

As described in step S4 above, the standby image and the second image are stitched to obtain the final image. The stitching may comprise: taking the standby image as the base and merging the second image into it from a first predetermined direction, the first predetermined direction being the direction from the second camera toward the first camera; or taking the second image as the base and merging the standby image into it from a second predetermined direction, the second predetermined direction being the direction from the first camera toward the second camera. The final image is thus composed directly on the original images, which avoids copying image data and searching for a stitching path, and is convenient and fast.
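Because the overlap has already been cropped away, the merge from a predetermined direction reduces to a plain concatenation. The sketch below assumes NumPy images already cropped to a common width, with the camera-to-camera axis taken as the row axis (cameras stacked vertically as in FIG. 5); both choices are assumptions of this sketch.

```python
import numpy as np

def stitch(standby, second):
    # Merge the second image into the standby image along the
    # camera-to-camera axis (assumed here to be the row axis); the two
    # images are assumed to be pre-cropped to the same width, so no
    # per-pixel seam search is needed.
    return np.concatenate([second, standby], axis=0)
```

Usage: stitching a 2-row second image onto a 3-row standby image of equal width yields a 5-row final image with no overlap handling at merge time.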

In one embodiment, step S2 of obtaining the overlapping area of the first image and the second image via geometric optics comprises:

S201. Establishing a first framing boundary function of the first camera and a second framing boundary function of the second camera, and calculating the framing-boundary intersection curve function of the two;

S202. Obtaining the object distance and substituting it into the framing-boundary intersection curve function to obtain a first intersection curve function;

S203. Calculating, according to the imaging principle, the first intersection-curve imaging function formed by imaging the first intersection curve function through the first camera, the area enclosed by which is the overlapping area of the first and second images;

S204. Acquiring the overlapping area.

As described above, the overlapping area of the first image and the second image is obtained via geometric optics. In this embodiment, preferably, the first and second cameras have the same parameters, and both framing ranges may be cones; it will be understood that the cone shape is only one example and does not limit other possible embodiments. A camera's framing has a finite range (it cannot capture 360 degrees without blind spots), so the range a camera can capture can be represented by the region enclosed by a three-dimensional function, and the boundary of that range is called the framing boundary function. Further, the parameters of the first and second cameras may also differ, and the framing range may be any feasible range. The intersection of the two cones is the overlapping framing region; its boundary curve is the framing-boundary intersection curve, and the function of that curve is the framing-boundary intersection curve function, which can be calculated by analytic geometry. The object distance is then obtained and substituted into the framing-boundary intersection curve function to obtain the first intersection curve function. Here, the object distance is the distance between the vertical plane of the photographed object and the vertical plane of the cameras; it may be set by user operation, for example by entering the object-distance parameter manually, or by the user selecting the photographed object so that the object distance is calculated by the multi-camera ranging principle, which is existing technology and is not repeated here. The first intersection curve function is the function of the intersection curve between the framing-boundary intersection curve function and the vertical plane of the object at the object distance (i.e., the vertical plane of the photographed object); it is a two-dimensional closed curve, whereas the framing-boundary intersection curve function is a function in three-dimensional space. According to the imaging principle, the first intersection-curve imaging function formed by imaging the first intersection curve function through the first camera is calculated; the area it encloses is the overlapping area on the imaging side. By the imaging principle, with the camera parameters known, the image formed of a photographed object is determinate, so the first intersection-curve imaging function can be obtained; the calculation uses analytic geometry and is not repeated here.

In one embodiment, the framing ranges of the first and second cameras are both cones, and step S201 of establishing the first and second framing boundary functions and calculating their framing-boundary intersection curve function comprises:

S2011. Establishing a three-dimensional rectangular coordinate system with the midpoint between the first and second cameras as the origin, the line connecting the centers of the two cameras as the y-axis, the z-axis parallel to the line connecting the center of the first camera and its focal point, and the x-axis perpendicular to both the y-axis and the z-axis;

S2012. Establishing the first framing boundary function of the first camera: F1 = k^2(x^2 + (y + d/2 + r/2)^2) - (z + f)^2 = 0, k != 0, z <= -f; and the second framing boundary function of the second camera: F2 = k^2(x^2 + (y - d/2 - r/2)^2) - (z + f)^2 = 0, k != 0, z <= -f, where d is the spacing between the first and second cameras, r is the diameter of the cameras, f is their focal length, and k^2 = f^2/(r/2)^2;

S2013. Calculating the framing-boundary intersection curve function F3 of the first and second framing boundary functions: when y > 0, F3 = k^2(x^2 + (y + d/2 + r/2)^2) - (z + f)^2 = 0, k != 0, z <= -f; when y <= 0, F3 = k^2(x^2 + (y - d/2 - r/2)^2) - (z + f)^2 = 0, k != 0, z <= -f.

As described above, the framing-boundary intersection curve function of the first and second cameras is calculated. It is worth noting that this three-dimensional coordinate system differs in its axis definitions from the coordinate system described and illustrated in FIG. 5; each coordinate system applies only to the content it describes, so there is no contradiction.

Specifically, the boundary curve equation of a standard cone is z^2 = k^2(x^2 + y^2). In the three-dimensional coordinate system established in this embodiment, the first framing boundary function is obtained as F1 = k^2(x^2 + (y + d/2 + r/2)^2) - (z + f)^2 = 0 (k != 0), z <= -f, and the second framing boundary function of the second camera as F2 = k^2(x^2 + (y - d/2 - r/2)^2) - (z + f)^2 = 0 (k != 0), z <= -f, where d is the spacing between the first and second cameras, r is their diameter, f is their focal length, and k^2 = f^2/(r/2)^2. The intersection locus of the first framing boundary function F1 and the second framing boundary function F2, i.e., the framing-boundary intersection curve function F3, is thus obtained: when y > 0, F3 = k^2(x^2 + (y + d/2 + r/2)^2) - (z + f)^2 = 0 (k != 0), z <= -f; when y <= 0, F3 = k^2(x^2 + (y - d/2 - r/2)^2) - (z + f)^2 = 0 (k != 0), z <= -f. Further, the framing range of the present application need not be a cone; in that case the parameter r is set to different values according to the framing range, so the method can be applied to various lenses.
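The boundary functions F1 and F2 above can be read as point-in-cone tests, and the overlapping framing region is then simply the intersection of the two cones. The sketch below is a minimal numerical illustration of that reading; the function names and the membership-test interface are assumptions of this sketch, not part of the patent.

```python
def in_framing_range(x, y, z, d, r, f, first=True):
    # Point-in-cone test derived from F1/F2: the first camera's cone axis
    # is offset by +(d/2 + r/2) in y, the second camera's by -(d/2 + r/2).
    # A point lies inside a camera's framing range when
    # k^2(x^2 + yc^2) <= (z + f)^2 with z <= -f, where k^2 = f^2/(r/2)^2.
    k2 = f**2 / (r / 2)**2
    yc = y + (d / 2 + r / 2) if first else y - (d / 2 + r / 2)
    return z <= -f and k2 * (x**2 + yc**2) <= (z + f)**2

def in_overlap(x, y, z, d, r, f):
    # The overlapping framing region is the intersection of the two cones.
    return (in_framing_range(x, y, z, d, r, f, True)
            and in_framing_range(x, y, z, d, r, f, False))
```

For d = 1, r = 0.5 and f = 1, a point on the mid-plane y = 0 enters the overlap only once it is deep enough (here at z = -4 but not at z = -2), matching the intuition that the two cones first intersect at some distance in front of the lenses.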

In one embodiment, in step S202 of obtaining the object distance and substituting it into the framing-boundary intersection curve function to obtain the first intersection curve function, obtaining the object distance comprises:

S2021. Acquiring a temporary image through the first camera and receiving the photographed object selected by the user in the temporary image;

S2022. Turning on the second camera, obtaining the distance between the photographed object and the plane of the first and second cameras by the multi-camera ranging principle, and setting this distance as the object distance.

As described above, the first intersection curve function is obtained. The first camera is turned on first, so that the user can select a photographed object of interest in the temporary image acquired by the first camera, and the object distance is set accordingly. Alternatively, the target object may be obtained through a specific setting, for example by extracting an object located in a particular region within the first camera's field of view, to meet different needs. The object distance itself is obtained by the multi-camera ranging principle.
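The patent only names "the multi-camera ranging principle" and defers to existing technology. As a hedged reminder of the standard two-camera form of that principle, the sketch below triangulates the object distance from disparity; the formula Z = f * b / disparity and the function name are assumptions of this sketch.

```python
def object_distance(f_pixels, baseline, disparity):
    # Two-camera triangulation: object distance Z = f * b / disparity,
    # where f_pixels is the focal length in pixels, baseline the camera
    # spacing d, and disparity the shift of the matched object between
    # the two images (in pixels).
    if disparity <= 0:
        raise ValueError("no valid disparity: object at infinity or mismatch")
    return f_pixels * baseline / disparity
```

Usage: with an 800-pixel focal length, a 0.02 m baseline and a 4-pixel disparity, the object distance comes out to 4 m; this value would then be substituted into the framing-boundary intersection curve function.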

In one embodiment, before step S4 of stitching the standby image and the second image, the method comprises:

S31. Comparing the first pixels of the standby image with the second pixels of the second image;

S32. Deleting from the standby image the first pixels that are identical to the second pixels, thereby obtaining the standby image used for stitching.

As described in the above steps, further removal of overlapping content is achieved. Since the preceding steps have already deleted, before stitching, the overlapping area determined by one object-distance value, little overlapping content remains; on this basis the overlapping pixels are further deleted, which further improves image quality. Moreover, because little overlap remains, few pixels need to be deleted in this embodiment, which greatly reduces the computing power required.

The reason duplicate pixels appear is as follows: in this embodiment, a camera's framing range extends and widens toward the object side as a cone, so the overlap of the two framing ranges also keeps extending and widening. The overlapping area obtained earlier is the part of that extending overlap determined by one object-distance value; that is, for objects in the space beyond that object distance, some objects still fall within the framing ranges of both cameras at the same time. Therefore, after the overlapping area at one object distance is deleted, objects beyond that object distance may still partially overlap in the captured images, and duplicate pixels appear.
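Steps S31/S32 can be sketched as a per-pixel comparison over the region the images still share. The patent does not fix how pixels are put into correspondence, so comparing pixels at the same index, as below, is an assumption of this sketch, as is zeroing a pixel to represent deletion.

```python
import numpy as np

def remove_duplicate_pixels(standby, second):
    # S31: compare the standby image's pixels with the second image's
    # pixels; S32: delete from the standby image those that are identical.
    # Same-index correspondence and zeroing-as-deletion are assumptions.
    h = min(standby.shape[0], second.shape[0])
    w = min(standby.shape[1], second.shape[1])
    out = standby.copy()
    dup = np.all(standby[:h, :w] == second[:h, :w], axis=-1)
    out[:h, :w][dup] = 0
    return out
```

Because the geometric-optics crop already removed the bulk of the overlap, this comparison only touches the small residual region, which is why the extra pass stays cheap.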

In one embodiment, the multi-camera-based image acquisition method comprises:

ST1. Acquiring multiple preliminary images captured simultaneously by multiple cameras, where the total number of cameras is n, with n >= 3;

ST2. Obtaining, via geometric optics, the overlapping areas A12, A13, ..., A1n, A23, A24, ..., A2n, Am(m+1), Am(m+2), ..., Amn, where n > m and Amn denotes the overlapping area of the m-th camera and the n-th camera;

ST3. Removing the overlapping areas A12, A13, ..., A1n, A23, A24, ..., A2n, Am(m+1), Am(m+2), ..., Amn from the multiple preliminary images;

ST4. Stitching the multiple preliminary images from which the overlapping areas have been removed.

As described in steps ST1-ST4 above, image acquisition based on multiple cameras is achieved. Since the foregoing method already performs two-camera image acquisition, image acquisition based on more cameras can be achieved accordingly.

Here, Amo (where o is greater than m and not greater than n) is the region of the image captured by the m-th camera that overlaps the image captured by the o-th camera, and this region is removed, so that the images captured by the m-th and o-th cameras no longer overlap. Accordingly, all overlapping regions can be removed and the images then stitched to obtain a final image with no overlap. The first and second cameras of the foregoing embodiments are any two of the multiple cameras of this embodiment, and the first and second images are any two of the multiple preliminary images of this embodiment.
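The pairwise rule above (remove Amo from image m only, for every o > m) can be sketched as a double loop. The mask-returning interface `overlap_mask(m, o)` stands in for the geometric-optics computation and is an assumption of this sketch, as is zeroing pixels to represent removal.

```python
import numpy as np

def remove_all_overlaps(images, overlap_mask):
    # images: list of n preliminary images. overlap_mask(m, o) returns a
    # boolean mask over image m marking region Amo, the part of image m
    # also covered by camera o. Each Amo is removed from image m only, so
    # every shared region survives in exactly one image before stitching.
    result = []
    for m, img in enumerate(images):
        img = img.copy()
        for o in range(m + 1, len(images)):
            img[overlap_mask(m, o)] = 0  # delete region Amo from image m
        result.append(img)
    return result
```

With n cameras this touches each of the n(n-1)/2 camera pairs once, and the last image is never cropped, which mirrors the ordering o > m in the text.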

Referring to FIG. 2, an embodiment of a multi-camera-based image acquisition apparatus comprises:

a simultaneous acquisition unit 1, configured to acquire a first image captured by a first camera and a second image captured by a second camera simultaneously with the first camera;

an overlapping area acquisition unit 2, configured to obtain the overlapping area of the first image and the second image via geometric optics;

a standby image generation unit 3, configured to remove the overlapping area from the first image to obtain a standby image;

a stitching unit 4, configured to stitch the standby image and the second image.

As described for unit 1 above, the first image captured by the first camera and the second image captured by the second camera simultaneously with the first camera are acquired, so that the simultaneously captured first and second images serve as the basis for subsequent stitching.

As described for unit 2 above, the overlapping area of the first image and the second image is obtained via geometric optics. Any feasible method may be used, for example: establishing a first framing boundary function of the first camera and a second framing boundary function of the second camera, and calculating the framing-boundary intersection curve function of the two; obtaining the object distance and substituting it into the framing-boundary intersection curve function to obtain a first intersection curve function; calculating, according to the imaging principle, the first intersection-curve imaging function formed by imaging the first intersection curve function through the first camera, the area enclosed by which is the overlapping area of the first and second images; and acquiring the overlapping area.

As described for unit 3 above, the overlapping area is removed from the first image to obtain the standby image. As described above, the overlapping area is the part of the image acquired by the first camera that overlaps the second image; to obtain a suitable image, it should therefore be removed from the first image.

As described above for unit 4, the backup image and the second image are stitched to obtain the final image. Preferably, the stitching unit 4 includes: a first stitching subunit, configured to take the backup image as the base and merge the second image into it from a first predetermined direction, the first predetermined direction being the direction in which the second camera points toward the first camera; or a second stitching subunit, configured to take the second image as the base and merge the backup image into it from a second predetermined direction, the second predetermined direction being the direction in which the first camera points toward the second camera. Compositing directly on the original images in this way avoids copying image data and searching for a stitching path, which is convenient and fast.
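For horizontally adjacent cameras, the "predetermined direction" above reduces to the order of concatenation: the incoming image enters from the side facing the other camera. A minimal sketch (assuming a horizontal camera arrangement; the in-place buffer merge described in the text is abstracted away here):

```python
import numpy as np

def merge(backup: np.ndarray, second: np.ndarray,
          first_camera_on_left: bool = True) -> np.ndarray:
    """Concatenate the two crops along the axis joining the cameras.

    With the first camera on the left, the second image is merged in
    from the second->first camera direction (i.e. appended on the
    right); otherwise the order is reversed."""
    parts = (backup, second) if first_camera_on_left else (second, backup)
    return np.hstack(parts)
```

Either ordering produces the same set of pixels; what differs in the text's two subunits is which image serves as the merge base.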

In an embodiment, the overlapping area acquisition unit 2 includes:

a framing boundary intersection curve function calculation subunit, configured to establish the first framing boundary function of the first camera and the second framing boundary function of the second camera, and to calculate the framing boundary intersection curve function of the first framing boundary function and the second framing boundary function;

a first intersection curve function calculation subunit, configured to acquire the object distance and substitute it into the framing boundary intersection curve function to obtain the first intersection curve function;

an overlapping area calculation subunit, configured to calculate, according to the imaging principle, the first intersection curve imaging function formed by imaging the first intersection curve function through the first camera, the area enclosed by the first intersection curve imaging function being the overlapping area of the first image and the second image;

an overlapping area acquisition subunit, configured to acquire the overlapping area.

As described above, the overlapping area of the first image and the second image is obtained via geometric optics. In this embodiment, preferably, the first camera and the second camera have identical parameters and both framing ranges may be cones; it should be understood that the cone is only one example and does not limit other possible embodiments. Alternatively, the parameters of the two cameras may differ, and the framing range may be any feasible shape. The intersection of the two cones is the overlapping framing region; its boundary curve is the framing boundary intersection curve, and the function of that curve is the framing boundary intersection curve function, which can be derived by analytic geometry. The object distance is then acquired and substituted into the framing boundary intersection curve function to obtain the first intersection curve function. Here, the object distance is the distance between the vertical plane of the photographed subject and the vertical plane of the cameras. It may be set by user operation, for example by entering the object distance parameter manually, or by having the user select a subject so that the distance is computed by the multi-camera ranging principle; that ranging principle is existing technology and is not described further. The first intersection curve function is the intersection of the framing boundary intersection curve with the vertical plane at the object distance (the plane of the subject); it is a two-dimensional closed curve, whereas the framing boundary intersection curve function is a function in three-dimensional space. According to the imaging principle, the first intersection curve imaging function formed by imaging the first intersection curve function through the first camera is calculated; the area it encloses is the overlapping area on the imaging side. By the imaging principle, with known camera parameters the image formed by the photographed subject is determined, so the first intersection curve imaging function can be obtained; the calculation again uses analytic geometry and is not repeated here.
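The cone geometry above admits a simple membership test: a point on the object plane belongs to the overlapping framing region exactly when it lies inside both cones' circular cross-sections at that plane. The following sketch follows the document's model (apexes at z = -f, axes offset by d/2 + r/2 on the y-axis, opening set by k = f/(r/2)); the function name and parameter order are illustrative, not from the source.

```python
import math

def in_overlap(x, y, L, d, r, f):
    """True if the point (x, y) on the object plane z = -L lies inside
    the framing cones of BOTH cameras, i.e. inside the overlap.

    d: camera spacing, r: camera diameter, f: focal length; assumes L > f.
    """
    k = f / (r / 2)
    radius = (L - f) / k                        # cone cross-section radius at z = -L
    in_first = math.hypot(x, y + d/2 + r/2) <= radius   # first camera's cone
    in_second = math.hypot(x, y - d/2 - r/2) <= radius  # second camera's cone
    return in_first and in_second
```

Points near the midline between the two camera axes fall in both cones; points far to one side fall in at most one.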

In an embodiment, the framing ranges of the first camera and the second camera are both cone-shaped, and the framing boundary intersection curve function calculation subunit includes:

a three-dimensional rectangular coordinate system establishing module, configured to take the midpoint between the first camera and the second camera as the origin, take the line connecting the centers of the two cameras as the y-axis, make the z-axis parallel to the line connecting the center of the first camera and its focal point, and set the x-axis perpendicular to the y-axis and the z-axis, thereby establishing a three-dimensional rectangular coordinate system;

a framing boundary function establishing module, configured to establish the first framing boundary function of the first camera, F1 = k²(x² + (y + d/2 + r/2)²) - (z + f)² = 0, k ≠ 0, z <= -f, and the second framing boundary function of the second camera, F2 = k²(x² + (y - d/2 - r/2)²) - (z + f)² = 0, k ≠ 0, z <= -f, where d is the spacing between the first camera and the second camera, r is the diameter of the cameras, f is their focal length, and k² = f²/(r/2)²;

a framing boundary intersection curve function calculation module, configured to calculate the framing boundary intersection curve function F3 of the first framing boundary function and the second framing boundary function: when y > 0, F3 = k²(x² + (y + d/2 + r/2)²) - (z + f)² = 0, k ≠ 0, z <= -f; when y <= 0, F3 = k²(x² + (y - d/2 - r/2)²) - (z + f)² = 0, k ≠ 0, z <= -f.

As described above, the framing boundary intersection curve function of the first camera and the second camera is calculated. Specifically, the boundary of a standard cone satisfies z² = k²(x² + y²). In the three-dimensional coordinate system established in this embodiment, this yields the first framing boundary function F1 = k²(x² + (y + d/2 + r/2)²) - (z + f)² = 0, k ≠ 0, z <= -f, and the second framing boundary function of the second camera, F2 = k²(x² + (y - d/2 - r/2)²) - (z + f)² = 0, k ≠ 0, z <= -f, where d is the spacing between the first camera and the second camera, r is the diameter of the cameras, f is their focal length, and k² = f²/(r/2)². The intersection locus of F1 and F2 is then the framing boundary intersection curve function F3: when y > 0, F3 = k²(x² + (y + d/2 + r/2)²) - (z + f)² = 0, k ≠ 0, z <= -f; when y <= 0, F3 = k²(x² + (y - d/2 - r/2)²) - (z + f)² = 0, k ≠ 0, z <= -f.
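A quick sanity check on the piecewise definition of F3: because the y-offset enters only through a square, the F1-based piece and the F2-based piece take the same value on the plane y = 0, so the piecewise boundary is continuous where the two pieces meet. A small evaluation sketch (function names are illustrative):

```python
def framing_boundary(x, y, z, d, r, f, first_camera=True):
    """Evaluate F1 (first_camera=True) or F2 (first_camera=False)
    at a point, with k^2 = f^2 / (r/2)^2 as in the text."""
    k2 = f**2 / (r / 2)**2
    offset = (d/2 + r/2) if first_camera else -(d/2 + r/2)
    return k2 * (x**2 + (y + offset)**2) - (z + f)**2
```

At y = 0 the offset terms are (d/2 + r/2)² in both branches, so the two evaluations agree exactly.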

In an embodiment, the first intersection curve function calculation subunit includes:

a subject receiving module, configured to acquire a temporary image through the first camera and receive a subject selected by the user in the temporary image;

an object distance setting module, configured to turn on the second camera, obtain, by the multi-camera ranging principle, the distance between the subject and the plane in which the first camera and the second camera lie, and set that distance as the object distance.

As described above, the first intersection curve function is obtained. The first camera is turned on first so that the user can select a subject of interest in the temporary image it captures, and the object distance is set accordingly. Alternatively, the target subject may be obtained through a specific setting, for example by extracting the subject located in a particular region of the first camera's field of view, to meet different needs. The object distance itself is obtained by the multi-camera ranging principle.
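The multi-camera ranging principle referred to here is, in its textbook two-camera form, triangulation from disparity: depth = focal length × baseline / disparity. A minimal sketch (the function and its parameter names are illustrative, not the patent's interface):

```python
def object_distance(focal_px: float, baseline: float, disparity_px: float) -> float:
    """Two-camera triangulation of depth.

    focal_px      focal length expressed in pixels,
    baseline      distance between the two camera centers (same unit as the result),
    disparity_px  horizontal shift of the subject between the two images.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline / disparity_px
```

For example, an 800 px focal length, a 0.1 m baseline, and an 8 px disparity give a 10 m object distance; a larger disparity means a nearer subject.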

In an embodiment, the device includes:

a comparison unit, configured to compare first pixels of the backup image with second pixels of the second image;

a pixel deletion unit, configured to delete from the backup image any first pixel identical to a second pixel, thereby obtaining the backup image used for stitching.

As described above, further removal of overlapping content is achieved. Since the preceding steps already removed the overlapping area before stitching, little overlapping content remains; deleting the remaining duplicate pixels on this basis further improves image quality. Moreover, because little overlap remains, few pixels need to be deleted in this embodiment, which greatly reduces the demand on computing power. As for why duplicate pixels appear at all: this embodiment removes the overlapping area at the object distance, so parts of the scene farther away than the object distance may still partially overlap in the captured images, producing duplicate pixels.
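One way to realize the comparison and deletion step is a per-pixel equality mask, assuming the two buffers have already been cropped and aligned to the same shape. This is a sketch under that assumption; "deleting" a pixel is read here as blanking it, since the patent does not specify the representation.

```python
import numpy as np

def drop_duplicate_pixels(backup: np.ndarray, second: np.ndarray) -> np.ndarray:
    """Blank out every pixel of `backup` whose value equals the pixel at
    the same coordinates in `second`; only the small residue beyond the
    object distance is expected to match."""
    out = backup.copy()
    duplicate = np.all(out == second, axis=-1)   # per-pixel equality across channels
    out[duplicate] = 0
    return out
```

Because the geometric overlap was already removed, the mask is expected to be sparse, keeping this pass cheap.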

In an embodiment, the device includes:

a plurality of preliminary image acquisition units, configured to acquire a plurality of preliminary images captured simultaneously by a plurality of cameras, there being n cameras in total, with n greater than or equal to 3;

a plurality of overlapping area acquisition units, configured to acquire, via geometric optics, the overlapping areas A12, A13, …, A1n, A23, A24, …, A2n, Am(m+1), Am(m+2), …, Amn, with n greater than m, where Amn denotes the overlapping area of the m-th camera and the n-th camera;

a plurality of overlapping area removal units, configured to remove the overlapping areas A12, A13, …, A1n, A23, A24, …, A2n, Am(m+1), Am(m+2), …, Amn from the plurality of preliminary images;

a preliminary image stitching unit, configured to stitch the plurality of preliminary images from which the overlapping areas have been removed. As described above, image acquisition based on multiple cameras is thereby achieved, since the foregoing method already provides image acquisition between any pair of cameras.

Here, since Amo (with o greater than m and less than or equal to n) is the area of the image captured by the m-th camera that overlaps the image captured by the o-th camera, and that overlapping area is to be removed, no overlapping area remains between the images of the m-th and o-th cameras. Accordingly, all overlapping areas can be removed and the images then stitched to obtain a final image without overlap. The first camera and second camera of the foregoing embodiments are any two of the multiple cameras of this embodiment, and the first image and second image are any two of its multiple preliminary images.
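The set of overlap regions to remove is exactly one per unordered camera pair, which can be enumerated directly. A small sketch (the function name is illustrative):

```python
from itertools import combinations

def overlap_pairs(n: int):
    """All index pairs (m, o) with m < o whose overlap regions Amo must
    be computed and removed when stitching n cameras (n >= 3)."""
    return list(combinations(range(1, n + 1), 2))
```

For n cameras this yields n(n-1)/2 pairs, e.g. three pairs for three cameras and six for four.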

According to the multi-camera-based image acquisition device provided by the present invention, the overlapping areas of images captured by different cameras are calculated by geometric optics and deleted from the image formed by the first camera before the stitching process, which yields the technical effect of saving stitching computation and speeding up stitching.

Referring to FIG. 3, the present application further provides a storage medium 10 in which a computer program 20 is stored; when run on a computer, the program causes the computer to execute the multi-camera-based image acquisition method described in the above embodiments, or, when a processor executes the computer program, the multi-camera-based image acquisition method described in the above embodiments is implemented.

Referring to FIG. 4, the present application further provides a device 30 containing instructions; when the storage medium 10 runs on the device 30, the device 30 executes, through its internal processor 40, the multi-camera-based image acquisition method described in the above embodiments, or, when the processor executes the computer program, the multi-camera-based image acquisition method described in the above embodiments is implemented. In this embodiment the device 30 is a computer device 30.

The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product.

The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a storage medium or transmitted from one storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), or the like.

Claims (15)

1. A multi-camera-based image acquisition method, characterized by comprising:
acquiring a first image captured by a first camera and a second image captured by a second camera simultaneously with the first camera;
obtaining an overlapping area of the first image and the second image via geometric optics;
removing the overlapping area from the first image to obtain a backup image with the overlapping area removed; and
stitching the backup image and the second image.

2. The multi-camera-based image acquisition method according to claim 1, characterized in that the step of obtaining the overlapping area of the first image and the second image via geometric optics comprises:
establishing a first framing boundary function of the first camera and a second framing boundary function of the second camera, and calculating a framing boundary intersection curve function of the first framing boundary function and the second framing boundary function;
acquiring an object distance, and substituting the object distance into the framing boundary intersection curve function to obtain a first intersection curve function;
calculating, according to the imaging principle, a first intersection curve imaging function formed by imaging the first intersection curve function through the first camera, the area enclosed by the first intersection curve imaging function being the overlapping area of the first image and the second image; and
acquiring the overlapping area.
3. The multi-camera-based image acquisition method according to claim 2, characterized in that the framing ranges of the first camera and the second camera are both cone-shaped, and the step of establishing the first framing boundary function of the first camera and the second framing boundary function of the second camera and calculating the framing boundary intersection curve function of the first framing boundary function and the second framing boundary function comprises:
taking the midpoint between the first camera and the second camera as the origin, taking the line connecting the centers of the two cameras as the y-axis, making the z-axis parallel to the line connecting the center of the first camera and its focal point, and setting the x-axis perpendicular to the y-axis and the z-axis, thereby establishing a three-dimensional rectangular coordinate system;
establishing the first framing boundary function of the first camera, F1 = k²(x² + (y + d/2 + r/2)²) - (z + f)² = 0, k ≠ 0, z <= -f, and the second framing boundary function of the second camera, F2 = k²(x² + (y - d/2 - r/2)²) - (z + f)² = 0, k ≠ 0, z <= -f, where d is the spacing between the first camera and the second camera, r is the diameter of the cameras, f is their focal length, and k² = f²/(r/2)²; and
calculating the framing boundary intersection curve function F3 of the first framing boundary function and the second framing boundary function: when y > 0, F3 = k²(x² + (y + d/2 + r/2)²) - (z + f)² = 0, k ≠ 0, z <= -f; when y <= 0, F3 = k²(x² + (y - d/2 - r/2)²) - (z + f)² = 0, k ≠ 0, z <= -f.

4. The multi-camera-based image acquisition method according to claim 2, characterized in that, in the step of acquiring the object distance and substituting it into the framing boundary intersection curve function to obtain the first intersection curve function, acquiring the object distance comprises:
acquiring a temporary image through the first camera and receiving a subject selected by the user in the temporary image; and
turning on the second camera, obtaining, by the multi-camera ranging principle, the distance between the subject and the plane in which the first camera and the second camera lie, and setting that distance as the object distance.

5. The multi-camera-based image acquisition method according to claim 2, characterized in that, before the step of stitching the backup image and the second image, the method comprises:
comparing first pixels of the backup image with second pixels of the second image; and
deleting from the backup image any first pixel identical to a second pixel, thereby obtaining the backup image used for stitching.
6. The multi-camera-based image acquisition method according to claim 1, characterized in that the step of stitching the backup image and the second image comprises:
taking the backup image as the base and merging the second image into the backup image from a first predetermined direction, the first predetermined direction being the direction in which the second camera points toward the first camera;
or taking the second image as the base and merging the backup image into the second image from a second predetermined direction, the second predetermined direction being the direction in which the first camera points toward the second camera.

7. The multi-camera-based image acquisition method according to any one of claims 1 to 6, characterized by:
acquiring a plurality of preliminary images captured simultaneously by a plurality of cameras, there being n cameras in total, with n greater than or equal to 3;
acquiring, via geometric optics, the overlapping areas A12, A13, …, A1n, A23, A24, …, A2n, Am(m+1), Am(m+2), …, Amn, with n greater than m, where Amn denotes the overlapping area of the m-th camera and the n-th camera;
removing the overlapping areas A12, A13, …, A1n, A23, A24, …, A2n, Am(m+1), Am(m+2), …, Amn from the plurality of preliminary images; and
stitching the plurality of preliminary images from which the overlapping areas have been removed.
8. A multi-camera-based image acquisition device, characterized by comprising:
a simultaneous acquisition unit, configured to acquire a first image captured by a first camera and a second image captured by a second camera simultaneously with the first camera;
an overlapping area acquisition unit, configured to obtain an overlapping area of the first image and the second image via geometric optics;
a backup image generating unit, configured to remove the overlapping area from the first image to obtain a backup image with the overlapping area removed; and
a stitching unit, configured to stitch the backup image and the second image.

9. The multi-camera-based image acquisition device according to claim 8, characterized in that the overlapping area acquisition unit comprises:
a framing boundary intersection curve function calculation subunit, configured to establish a first framing boundary function of the first camera and a second framing boundary function of the second camera, and to calculate a framing boundary intersection curve function of the first framing boundary function and the second framing boundary function;
a first intersection curve function calculation subunit, configured to acquire an object distance and substitute the object distance into the framing boundary intersection curve function to obtain a first intersection curve function;
an overlapping area calculation subunit, configured to calculate, according to the imaging principle, a first intersection curve imaging function formed by imaging the first intersection curve function through the first camera, the area enclosed by the first intersection curve imaging function being the overlapping area of the first image and the second image; and
an overlapping area acquisition subunit, configured to acquire the overlapping area.

10. The multi-camera-based image acquisition device according to claim 9, characterized in that the framing boundary intersection curve function calculation subunit comprises:
a three-dimensional rectangular coordinate system establishing module, configured to take the midpoint between the first camera and the second camera as the origin, take the line connecting the centers of the two cameras as the y-axis, make the z-axis parallel to the line connecting the center of the first camera and its focal point, and set the x-axis perpendicular to the y-axis and the z-axis, thereby establishing a three-dimensional rectangular coordinate system;
a framing boundary function establishing module, configured to establish the first framing boundary function of the first camera, F1 = k²(x² + (y + d/2 + r/2)²) - (z + f)² = 0, k ≠ 0, z <= -f, and the second framing boundary function of the second camera, F2 = k²(x² + (y - d/2 - r/2)²) - (z + f)² = 0, k ≠ 0, z <= -f, where d is the spacing between the first camera and the second camera, r is the diameter of the cameras, f is their focal length, and k² = f²/(r/2)²; and
a framing boundary intersection curve function calculation module, configured to calculate the framing boundary intersection curve function F3 of the first framing boundary function and the second framing boundary function: when y > 0, F3 = k²(x² + (y + d/2 + r/2)²) - (z + f)² = 0, k ≠ 0, z <= -f; when y <= 0, F3 = k²(x² + (y - d/2 - r/2)²) - (z + f)² = 0, k ≠ 0, z <= -f.

11. The multi-camera-based image acquisition device according to claim 9, characterized in that the first intersection curve function calculation subunit comprises:
a subject receiving module, configured to acquire a temporary image through the first camera and receive a subject selected by the user in the temporary image; and
an object distance setting module, configured to turn on the second camera, obtain, by the multi-camera ranging principle, the distance between the subject and the plane in which the first camera and the second camera lie, and set that distance as the object distance.

12. The multi-camera-based image acquisition device according to claim 9, characterized in that the device comprises:
a comparison unit, configured to compare first pixels of the backup image with second pixels of the second image; and
a pixel deletion unit, configured to delete from the backup image any first pixel identical to a second pixel, thereby obtaining the backup image used for stitching.
The multi-camera-based image acquisition device according to claim 8, wherein the stitching unit includes: a first stitching subunit, configured to merge, on the basis of the backup image, the second image into the backup image from a first predetermined direction, where the first predetermined direction is the direction from the second camera toward the first camera; a second stitching subunit, configured to merge, on the basis of the second image, the backup image into the second image from a second predetermined direction, where the second predetermined direction is the direction from the first camera toward the second camera. The multi-camera-based image acquisition device according to any one of claims 8 to 13, characterized in that it includes: a plurality of preliminary image acquisition units, configured to acquire a plurality of preliminary images shot simultaneously by a plurality of cameras, where there are n cameras in total and n is greater than or equal to 3; a plurality of overlapping area acquisition units, configured to obtain, via geometric optics, the overlapping areas A12, A13, …, A1n, A23, A24, …, A2n, Am(m+1), Am(m+2), …, Amn, where n is greater than m and Amn denotes the overlapping area of the m-th camera and the n-th camera; a plurality of overlapping area removal units, configured to remove the overlapping areas A12, A13, …, A1n, A23, A24, …, A2n, Am(m+1), Am(m+2), …, Amn from the plurality of preliminary images; a preliminary image stitching unit, configured to stitch the plurality of preliminary images from which the overlapping areas have been removed. A device, characterized in that it includes a processor, a memory, and a computer program stored on the memory and executable on the processor, where the processor, when executing the computer program, implements the multi-camera-based image acquisition method according to any one of claims 1 to 7.
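The comparison and pixel-deletion units recited in the claims amount to removing, from the backup image, every pixel the second image already covers, so that only the backup image's unique content is carried into the stitch. A rough plain-Python sketch follows; the nested-list image representation and the zero fill value are illustrative assumptions, not the patent's implementation:

```python
def remove_shared_pixels(backup, second, fill=(0, 0, 0)):
    """Return a copy of `backup` in which every pixel identical to the
    corresponding pixel of `second` is replaced by `fill`, leaving only
    the content unique to the backup image for stitching."""
    return [
        [fill if b == s else b for b, s in zip(brow, srow)]
        for brow, srow in zip(backup, second)
    ]
```

For a 1×2 image where the first pixel is shared and the second differs, only the differing pixel survives in the result, which is then merged with the second image along the predetermined direction.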
PCT/CN2019/073764 2019-01-10 2019-01-29 Image acquisition method, apparatus and device based on multiple cameras Ceased WO2020143090A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910024692.6A CN109889736B (en) 2019-01-10 2019-01-10 Image acquisition method, device and equipment based on double cameras and multiple cameras
CN201910024692.6 2019-01-10

Publications (1)

Publication Number Publication Date
WO2020143090A1 true WO2020143090A1 (en) 2020-07-16

Family

ID=66925878

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/073764 Ceased WO2020143090A1 (en) 2019-01-10 2019-01-29 Image acquisition method, apparatus and device based on multiple cameras

Country Status (2)

Country Link
CN (1) CN109889736B (en)
WO (1) WO2020143090A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112884767B (en) * 2021-03-26 2022-04-26 长鑫存储技术有限公司 Image fitting method
CN115868933B (en) * 2022-12-12 2024-01-05 四川互慧软件有限公司 Method and system for collecting waveform of monitor
CN117315815A (en) * 2023-10-20 2023-12-29 深圳紫杉视讯有限公司 Image processing method, device and electronic equipment for driving recorder

Citations (5)

Publication number Priority date Publication date Assignee Title
US20080012877A1 (en) * 1999-01-28 2008-01-17 Lewis Michael C Method and system for providing edge antialiasing
US20080298718A1 (en) * 2007-05-31 2008-12-04 Che-Bin Liu Image Stitching
CN105279735A * 2015-11-20 2016-01-27 沈阳东软医疗系统有限公司 Image stitching fusion method, device and equipment
CN105869113A (en) * 2016-03-25 2016-08-17 华为技术有限公司 Panoramic image generation method and device
CN106296577A * 2015-05-19 2017-01-04 富士通株式会社 Image stitching method and image stitching device

Family Cites Families (11)

Publication number Priority date Publication date Assignee Title
CN101931772B (en) * 2010-08-19 2012-02-29 深圳大学 A panoramic video fusion method, system and video processing equipment
CN102620713A (en) * 2012-03-26 2012-08-01 梁寿昌 Method for measuring distance and positioning by utilizing dual camera
CN105138194A (en) * 2013-01-11 2015-12-09 海信集团有限公司 Positioning method and electronic device
CN104933755B * 2014-03-18 2017-11-28 华为技术有限公司 Stationary object reconstruction method and system
CN104754228A (en) * 2015-03-27 2015-07-01 广东欧珀移动通信有限公司 A method for taking pictures by using a camera of a mobile terminal and a mobile terminal
CN106683071B (en) * 2015-11-06 2020-10-30 杭州海康威视数字技术股份有限公司 Image stitching method and device
CN105654502B * 2016-03-30 2019-06-28 广州市盛光微电子有限公司 Panoramic camera calibration device and method based on multiple lenses and multiple sensors
CN106683045A (en) * 2016-09-28 2017-05-17 深圳市优象计算技术有限公司 Binocular camera-based panoramic image splicing method
CN106331527B * 2016-10-12 2019-05-17 腾讯科技(北京)有限公司 Image stitching method and device
CN106791422A (en) * 2016-12-30 2017-05-31 维沃移动通信有限公司 Image processing method and mobile terminal
CN108769578B (en) * 2018-05-17 2020-07-31 南京理工大学 Real-time panoramic imaging system and method based on multiple cameras


Also Published As

Publication number Publication date
CN109889736B (en) 2020-06-19
CN109889736A (en) 2019-06-14

Similar Documents

Publication Publication Date Title
US8345961B2 (en) Image stitching method and apparatus
CN110730296B (en) Image processing apparatus, image processing method, and computer-readable medium
WO2021227360A1 (en) Interactive video projection method and apparatus, device, and storage medium
CN106934777B (en) Scanning image acquisition method and device
WO2021098544A1 (en) Image processing method and apparatus, storage medium and electronic device
CN110799921A (en) Filming method, device and drone
CN110738599B (en) Image stitching method and device, electronic equipment and storage medium
CN107833179A (en) The quick joining method and system of a kind of infrared image
CN105554367A (en) Movement photographing method and mobile terminal
CN110611767B (en) Image processing method and device and electronic equipment
US20200160560A1 (en) Method, system and apparatus for stabilising frames of a captured video sequence
WO2020143090A1 (en) Image acquisition method, apparatus and device based on multiple cameras
CN107343165A (en) A kind of monitoring method, equipment and system
CN116416701A (en) Inspection method, inspection device, electronic equipment and storage medium
WO2020135394A1 (en) Video splicing method and device
WO2017128750A1 (en) Image collection method and image collection device
JP2011238048A (en) Position attitude measurement device and position attitude measurement program
CN108513057B (en) Image processing method and device
CN113286064B (en) Method and device for collecting looking-around image, mobile terminal and storage medium
WO2022179554A1 (en) Video splicing method and apparatus, and computer device and storage medium
CN114881901A (en) Video synthesis method, device, equipment, medium and product
CN105827932A (en) Image synthesis method and mobile terminal
JP2025510536A (en) Image processing method, apparatus, and device
WO2018006669A1 (en) Parallax fusion method and apparatus
CN114390206A (en) Shooting method and device and electronic equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19909516

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19909516

Country of ref document: EP

Kind code of ref document: A1