WO2018233373A1 - Image processing method and apparatus, and device
- Publication number: WO2018233373A1 (PCT/CN2018/084518)
- Authority: WIPO (PCT)
- Prior art keywords: image, camera, sub, region, preset
- Legal status: Ceased (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
Definitions
- the present invention relates to the field of terminal technologies, and in particular, to an image processing method, apparatus, and device.
- Aperture value: FNO = focal length / entrance pupil diameter
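- For example, with the parameters used later in this description (focal length 3.95 mm, FNO 1.4), the entrance pupil diameter is 3.95 mm / 1.4 ≈ 2.82 mm.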
- a conventional photographing lens has a structure of six lenses, which ensures ideal imaging in the entire field of view (FOV) of the lens.
- the lens design system determines the number of lenses from the user's required aperture value according to empirical values, and then calculates the relative positions, focal lengths, shapes, and other parameters of the lenses according to physical and mathematical principles, after which the lens is manufactured and assembled into a finished product.
- the embodiment of the invention provides a camera module and a terminal, which realize a large aperture FNO < 1.6 with a dual camera or multi-camera while taking the level of the production process into consideration, making this a simple and effective way of realizing a large aperture.
- an embodiment of the present invention provides an image processing method, which is applied to a photographing apparatus including a first camera and a second camera.
- the optical axes of the first camera and the second camera are parallel to each other, and the distance between the first camera and the second camera is less than a preset distance; the aperture values of the first camera and the second camera are both less than 1.6;
- the method includes: acquiring a first image of the object to be photographed captured by the first camera; acquiring a second image of the object to be photographed captured by the second camera; acquiring a first sub-image of the first image according to a first preset rule, where the first sub-image corresponds to a field of view range [0, θ1] of the first camera; and acquiring a second sub-image of the second image according to a second preset rule, where the second sub-image corresponds to a field of view range [θ2, θ3] of the second camera; wherein 0 < θ2 < θ1 < θ3, so that the first sub-image and the second sub-image have an overlapping region; and splicing the first sub-image and the second sub-image according to a preset splicing algorithm to obtain a target image.
- an embodiment of the present invention provides an image processing apparatus, where the apparatus is applied to a photographing apparatus including a first camera and a second camera, wherein the optical axes of the first camera and the second camera are parallel to each other, the spacing between the first camera and the second camera is less than a preset distance, and the aperture values of the first camera and the second camera are both less than 1.6;
- the device includes: a first acquiring module, configured to acquire a first image of the object to be photographed captured by the first camera; and a second acquiring module, configured to acquire a second image of the object to be photographed captured by the second camera;
- the third acquiring module is configured to acquire the first sub-image of the first image according to the first preset rule, where the first sub-image corresponds to the field of view range [0, θ1] of the first camera;
- the fourth acquiring module is configured to acquire the second sub-image of the second image according to the second preset rule, where the second sub-image corresponds to the field of view range [θ2, θ3] of the second camera; wherein 0 < θ2 < θ1 < θ3, and the first sub-image and the second sub-image have an overlapping region; and an image splicing module is configured to splice the first sub-image and the second sub-image according to a preset splicing algorithm to obtain a target image.
- the task of imaging with an oversized aperture can thus be shared by two cameras: a high-definition area under the super-large-aperture condition, that is, the first sub-image, is obtained by the first camera according to a certain algorithm; another high-definition area under the super-large-aperture condition, that is, the second sub-image, is obtained by the second camera according to a certain algorithm; and the first sub-image and the second sub-image are spliced and merged to obtain a target image that satisfies high definition over the entire field of view.
- the difficulty in designing and manufacturing the camera is reduced, and the design cost and the manufacturing cost are saved.
- acquiring the first sub-image of the first image according to the first preset rule comprises: acquiring first parameter information of the first camera;
- the first parameter information expresses that, for the image captured by the first camera in the field of view range [0, θ1], the modulation transfer function MTF value corresponding to the preset spatial frequency is greater than the first preset threshold, where θ1 is smaller than 1/2 of the angle of view of the first camera; acquiring an image receiving area P of the image sensor in the first camera; and determining the image of the intersection area S1 of the region of the first image whose field of view range is [0, θ1] and P as the first sub-image.
- this technical implementation can be performed by a processor calling a program and instructions in the memory to perform the corresponding operation.
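- As an illustration of this rule, the following sketch selects the first sub-image as the intersection of the disc covering the field of view range [0, θ1] with the sensor's receiving area P. The mapping r = f·tan(θ) from field angle to image-plane radius, the sensor dimensions, and the pixel pitch are illustrative assumptions rather than values from this description.

```python
# Minimal sketch (not the patent's algorithm): build a boolean mask for the
# pixels of the first image whose field angle lies in [0, theta1]; every
# sensor pixel lies in P, so the disc alone defines the intersection S1 here.
import numpy as np

def first_sub_image_mask(width_px, height_px, pixel_mm, focal_mm, theta1_deg):
    """Boolean mask of S1: the [0, theta1] disc intersected with P."""
    ys, xs = np.mgrid[0:height_px, 0:width_px]
    cx, cy = (width_px - 1) / 2.0, (height_px - 1) / 2.0
    r_mm = np.hypot(xs - cx, ys - cy) * pixel_mm          # radius on the sensor
    r_theta1 = focal_mm * np.tan(np.radians(theta1_deg))  # radius of theta1
    return r_mm <= r_theta1

# Example: focal length 3.95 mm and theta1 = 32 deg (figures used later in
# this description) on a hypothetical 4000 x 3000 sensor with 1 um pixels.
mask_s1 = first_sub_image_mask(4000, 3000, 0.001, 3.95, 32.0)
first_image = np.zeros((3000, 4000, 3), dtype=np.uint8)   # stand-in capture
first_sub_image = np.where(mask_s1[..., None], first_image, 0)
```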
- acquiring the second sub-image of the second image according to the second preset rule comprises: acquiring second parameter information of the second camera;
- the second parameter information expresses that, for the image captured by the second camera in the field of view range [θ2, θ3], the modulation transfer function MTF value corresponding to the preset spatial frequency is greater than the second preset threshold;
- wherein 0 < θ2 < θ1, and θ3 is less than or equal to 1/2 of the angle of view of the second camera; acquiring an image receiving area Q of the image sensor in the second camera; and determining the image of the intersection area S2 of the region of the second image whose field of view range is [θ2, θ3] and Q as the second sub-image.
- this technical implementation can be performed by a processor calling a program and instructions in the memory to perform the corresponding operation.
- the obtaining of the target image from the first sub-image and the second sub-image according to a preset splicing algorithm comprises: determining an image of the intersection region S3 of S1 and S2; determining an image of the complement region S32 of S3 in S2; and splicing the image of S1 and the image of S32 according to a first preset stitching algorithm to obtain the target image.
- this technical implementation can be performed by a processor calling a program and instructions in the memory to perform the corresponding operation.
- the obtaining of the target image from the first sub-image and the second sub-image according to a preset splicing algorithm comprises: determining an image of the intersection region S3 of S1 and S2; determining an image of the complement region S31 of S3 in S1; and splicing the image of S31 and the image of S2 according to a second preset stitching algorithm to obtain the target image.
- this technical implementation can be performed by a processor calling a program and instructions in the memory to perform the corresponding operation.
- the obtaining of the target image from the first sub-image and the second sub-image according to a preset splicing algorithm comprises: determining an image of the intersection region S3 of S1 and S2; determining an image of the complement region S31 of S3 in S1; determining an image of the complement region S32 of S3 in S2; performing enhancement processing on the image of S3 according to a preset enhancement algorithm to obtain an image of S4; and splicing the image of S31, the image of S32, and the image of S4 according to a third preset stitching algorithm to obtain the target image.
- this technical implementation can be performed by a processor calling a program and instructions in the memory to perform the corresponding operation.
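- As an illustration, the following sketch carries out the first of the three splicing forms above: keep S1 from the first image and take S32 (the part of S2 outside the intersection S3) from the second image. The hard-mask composition is an assumption; an actual preset stitching algorithm would also blend the seam.

```python
# Minimal sketch of the first stitching form: target = S1 union S32,
# where S3 = S1 & S2 and S32 = S2 \ S3. Masks are 2-D booleans, images
# are H x W x 3 arrays of identical size (assumed pre-aligned).
import numpy as np

def stitch_form1(img1, img2, mask_s1, mask_s2):
    mask_s3 = mask_s1 & mask_s2        # intersection region S3
    mask_s32 = mask_s2 & ~mask_s3      # complement of S3 within S2
    target = np.zeros_like(img1)
    target[mask_s1] = img1[mask_s1]    # high-definition central disc
    target[mask_s32] = img2[mask_s32]  # high-definition outer ring remainder
    return target
```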
- the first camera comprises a first imaging lens, and the second camera comprises a second imaging lens; the first imaging lens is designed according to a first preset requirement, and the second imaging lens is designed according to a second preset requirement; the first preset requirement corresponds to the first parameter information, and the second preset requirement corresponds to the second parameter information.
- these design requirements are inherent properties of the imaging lenses; they are stored in advance in the photographing device or in a server, and can subsequently be called by the processor for image processing, so that the first sub-image can be determined from the first image and the second sub-image can be determined from the second image.
- the imaging lenses of the first camera and the second camera each comprise 4, 5, or 6 lenses.
- the aperture values of the first camera and the second camera are equal.
- the image sensors of the first camera and the second camera are the same. Therefore, the above P and Q are also the same.
- the focal length and the (maximum) field of view of the first camera and the second camera are the same.
- the first preset threshold and the second preset threshold are greater than or equal to 0.25.
- the preset spatial frequency is greater than 400 line pairs/mm.
- the larger the spatial frequency, the finer the detail of the corresponding image; the higher the spatial frequency at which the MTF remains greater than the preset threshold, the better the resolution of the image.
- S1 is a circular area.
- for the above-mentioned field of view ranges [0, θ1], [θ2, θ3], and so on, the corresponding image area is not necessarily a regular circle or ring; it may be an approximate circle or ring, or some irregular pattern. As long as all the high-definition sub-images finally obtained by the cameras, referenced to the same image sensor, can cover the area where the image sensor is located, seamless stitching is possible, and a target image satisfying the high definition of the super-large aperture can be formed.
- in the process of determining the sub-images, the image processing program may also, within the above field of view ranges [0, θ1] and [θ2, θ3], take non-circular images of the two areas, such as a square or an ellipse, and perform the corresponding stitching, as long as the union of these sub-images can cover the area where the image sensor is located.
- the photographing apparatus further includes an adjusting device, and the method further comprises: controlling the adjusting device to adjust the spacing between the first camera and the second camera. If the object to be photographed is closer to the lens, the spacing of the two cameras needs to be smaller to ensure that the acquired sub-image areas can overlap. If the object to be photographed is farther from the lens, the spacing of the two cameras needs to be slightly larger, so that the acquired sub-images overlap without an excessive overlapping area.
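- As a rough illustration of why the spacing matters, for two parallel cameras the image of a point at distance Z is displaced between the two captures by the standard stereo disparity d = f·B/Z on the image plane, so a nearer subject tolerates only a smaller baseline B. The disparity tolerance used below is a hypothetical design input, not a value from this description.

```python
# Minimal sketch: largest camera spacing B that keeps the parallel-camera
# disparity d = f * B / Z within a chosen overlap tolerance on the image plane.
def max_baseline_mm(focal_mm, object_dist_mm, max_disparity_mm):
    return max_disparity_mm * object_dist_mm / focal_mm

# Example: f = 3.95 mm, subject at 0.5 m, and an assumed 0.05 mm tolerated
# image-plane disparity -> baseline of up to about 6.3 mm.
print(max_baseline_mm(3.95, 500.0, 0.05))
```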
- the photographing apparatus further includes a third camera, the optical axis of the third camera and the optical axis of the first camera are parallel to each other;
- the distance between the third camera and the first camera is less than a preset distance;
- the distance between the third camera and the second camera is less than a preset distance;
- the method further includes: acquiring a third image of the object to be photographed captured by the third camera; and acquiring a third sub-image of the third image according to a third preset rule, the third sub-image corresponding to a field of view range [θ4, θ5] of the third camera; wherein θ2 < θ4 < θ3 < θ5, the second sub-image and the third sub-image have overlapping images, and θ5 is smaller than 1/2 of the angle of view of the third camera; and the obtaining of the target image according to the preset splicing algorithm includes splicing the first sub-image, the second sub-image, and the third sub-image according to the preset splicing algorithm to obtain the target image.
- acquiring the third sub-image of the third image according to the third preset rule comprises: acquiring third parameter information of the third camera;
- the third parameter information expresses that, for the image captured by the third camera in the field of view range [θ4, θ5], the modulation transfer function MTF value corresponding to the preset spatial frequency is greater than the third preset threshold, where θ5 is less than 1/2 of the field of view of the third camera;
- acquiring the image receiving area R of the image sensor in the third camera;
- the image of the intersection of the region of the third image whose field of view range is [θ4, θ5] and R is determined as the third sub-image.
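- As an illustration of the three-camera case, the sketch below composes the inner disc, the middle ring, and the outer ring region by region; the fill priority and the requirement that the union of the masks cover the sensor are assumptions consistent with, but not prescribed by, the text above.

```python
# Minimal sketch: splice N sub-images whose mask union must cover the sensor.
import numpy as np

def stitch_n(imgs, masks):
    target = np.zeros_like(imgs[0])
    covered = np.zeros(masks[0].shape, dtype=bool)
    for img, mask in zip(imgs, masks):  # e.g. inner disc, middle ring, outer ring
        take = mask & ~covered          # fill only not-yet-covered pixels
        target[take] = img[take]
        covered |= mask
    assert covered.all(), "sub-images must cover the whole sensor area"
    return target
```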
- an embodiment of the present invention provides a terminal device, where the terminal device includes a first camera, a second camera, a memory, a processor, and a bus; the first camera, the second camera, the memory, and the processor are connected by the bus; the optical axes of the first camera and the second camera are parallel to each other, and the distance between the first camera and the second camera is less than a preset distance; the aperture values of the first camera and the second camera are each less than 1.6; the cameras are configured to acquire image signals under the control of the processor; the memory is configured to store computer programs and instructions; and the processor is configured to call the computer programs and instructions stored in the memory to perform any of the possible implementation methods described above.
- the terminal device further includes an antenna system, and the antenna system transmits and receives wireless communication signals under the control of the processor to implement wireless communication with the mobile communication network;
- the mobile communication network includes one or more of the following: GSM, CDMA, 3G, FDMA, TDMA, PDC, TACS, AMPS, WCDMA, TD-SCDMA, WiFi, and LTE networks.
- an embodiment of the present invention provides an image processing method, where the method is applied to a photographing device including a first camera and a second camera, and optical axes of the first camera and the second camera are parallel to each other.
- the spacing between the first camera and the second camera is less than a preset distance; the aperture values of the first camera and the second camera are both less than 1.6, and the numbers of lenses of the first camera and the second camera are each not more than 6;
- the method includes: acquiring a first image of the object to be photographed captured by the first camera; acquiring a second image of the object to be photographed captured by the second camera; acquiring a first sub-image of the first image, wherein the resolution of the first sub-image satisfies a preset definition standard; acquiring a second sub-image of the second image, wherein the resolution of the second sub-image satisfies the preset definition standard, and the first sub-image and the second sub-image have an image intersection; and fusing the first sub-image and the second sub-image to obtain a target image.
- an embodiment of the present invention provides an image processing apparatus, where the apparatus is applied to a photographing apparatus including a first camera and a second camera, wherein the optical axes of the first camera and the second camera are parallel to each other, the distance between the first camera and the second camera is less than a preset distance, the aperture values of the first camera and the second camera are both less than 1.6, and the numbers of lenses of the first camera and the second camera are each not more than 6;
- the device includes: a first acquiring module, configured to acquire a first image of the object to be photographed captured by the first camera; a second acquiring module, configured to acquire a second image of the object to be photographed captured by the second camera; a third acquiring module, configured to acquire a first sub-image of the first image, wherein the resolution of the first sub-image satisfies a preset definition standard; a fourth acquiring module, configured to acquire a second sub-image of the second image, wherein the resolution of the second sub-image satisfies the preset definition standard, and the first sub-image and the second sub-image have an image intersection; and an image stitching module, configured to fuse the first sub-image and the second sub-image to obtain a target image.
- acquiring the first sub-image of the first image comprises: acquiring a first physical design parameter of the first camera, where the first physical design parameter expresses that, in any image captured by the first camera, the sharpness of the image of the first region is higher than the sharpness of the image of the second region and satisfies the preset sharpness criterion, the second region being the complement of the first region in any image captured by the first camera; acquiring the first region of the first image according to the first physical design parameter; acquiring the image receiving area P of the image sensor in the first camera; and determining the image of the intersection area S1 of the first region of the first image and P as the first sub-image.
- this technical feature can be implemented by the third acquiring module.
- the first area and the second area may be of any shape, which is not limited in this embodiment of the present invention.
- this technical implementation can be performed by the processor calling a program in the memory or in the cloud and performing the corresponding operation.
- acquiring the second sub-image of the second image comprises: acquiring a second physical design parameter of the second camera, where the second physical design parameter expresses that, in any image captured by the second camera, the sharpness of the image of the third region is higher than the sharpness of the image of the fourth region and satisfies the preset sharpness criterion, the fourth region being the complement of the third region in any image captured by the second camera; acquiring the third region of the second image according to the second physical design parameter; acquiring the image receiving area Q of the image sensor in the second camera; and determining the image of the intersection area S2 of the third region of the second image and Q as the second sub-image.
- this technical feature can be implemented by the fourth acquiring module.
- the third area and the fourth area may be of any shape, which is not limited in this embodiment of the present invention.
- this technical implementation can be performed by the processor calling a program in the memory or in the cloud and performing the corresponding operation.
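- As an illustration, the second sub-image can be selected with an annular mask, the counterpart of the disc mask sketched earlier; the r = f·tan(θ) mapping and the sensor geometry remain illustrative assumptions.

```python
# Minimal sketch: boolean mask for pixels of the second image whose field
# angle lies in [theta2, theta3], i.e. the ring intersected with Q.
import numpy as np

def second_sub_image_mask(width_px, height_px, pixel_mm, focal_mm,
                          theta2_deg, theta3_deg):
    ys, xs = np.mgrid[0:height_px, 0:width_px]
    cx, cy = (width_px - 1) / 2.0, (height_px - 1) / 2.0
    r_mm = np.hypot(xs - cx, ys - cy) * pixel_mm
    r_inner = focal_mm * np.tan(np.radians(theta2_deg))
    r_outer = focal_mm * np.tan(np.radians(theta3_deg))
    return (r_mm >= r_inner) & (r_mm <= r_outer)

# Example: theta2 = 28 deg, theta3 = 37.5 deg (figures used later in this
# description) on the same hypothetical 4000 x 3000 sensor.
mask_s2 = second_sub_image_mask(4000, 3000, 0.001, 3.95, 28.0, 37.5)
```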
- the first physical design parameter comprises: for the image captured by the first camera in the field of view range [0, θ1], the modulation transfer function MTF value corresponding to the preset spatial frequency is greater than the first preset threshold, where θ1 is smaller than 1/2 of the field of view of the first camera; and for the image captured by the first camera in other field of view ranges, the MTF value corresponding to the preset spatial frequency is not greater than the first preset threshold.
- the second physical design parameter comprises: for the image captured by the second camera in the field of view range [θ2, θ3], the modulation transfer function MTF value corresponding to the preset spatial frequency is greater than the second preset threshold, where θ3 is less than 1/2 of the field of view of the second camera and 0 < θ2 < θ1 < θ3; and for the image captured by the second camera in other field of view ranges, the MTF value corresponding to the preset spatial frequency is not greater than the second preset threshold.
- This information can be stored in memory or in the network cloud.
- the performing of the fusion processing on the first sub-image and the second sub-image to obtain the target image includes any one of the following three manners, and can be implemented by the image stitching module:
- Manner 1: determining an image of the intersection region S3 of S1 and S2; determining an image of the complement region S32 of S3 in S2; and performing fusion processing on the image of S1 and the image of S32 to obtain the target image; or,
- Manner 2: determining an image of the intersection region S3 of S1 and S2; determining an image of the complement region S31 of S3 in S1; and performing fusion processing on the image of S31 and the image of S2 to obtain the target image; or,
- Manner 3: determining an image of the intersection region S3 of S1 and S2; determining an image of the complement region S31 of S3 in S1; determining an image of the complement region S32 of S3 in S2; performing enhancement processing on S3 according to a preset enhancement algorithm to obtain an image of S4; and performing fusion processing on the image of S31, the image of S32, and the image of S4 to obtain the target image.
- this technical implementation can be performed by a processor calling a program and instructions in the memory to perform the corresponding operation.
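- As an illustration of Manner 3, the sketch below enhances the overlap S3 and composes it with S31 and S32. The patent does not specify the preset enhancement algorithm; a plain average of the two captures stands in for it here.

```python
# Minimal sketch of Manner 3: S4 = enhance(S3), target = S31 + S32 + S4.
import numpy as np

def fuse_manner3(img1, img2, mask_s1, mask_s2):
    mask_s3 = mask_s1 & mask_s2         # intersection region S3
    mask_s31 = mask_s1 & ~mask_s3       # complement of S3 within S1
    mask_s32 = mask_s2 & ~mask_s3       # complement of S3 within S2
    # Stand-in "enhancement": average the two captures over the overlap.
    s4 = ((img1.astype(np.uint16) + img2.astype(np.uint16)) // 2).astype(np.uint8)
    target = np.zeros_like(img1)
    target[mask_s31] = img1[mask_s31]
    target[mask_s32] = img2[mask_s32]
    target[mask_s3] = s4[mask_s3]
    return target
```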
- an adjusting module is further included, for adjusting the spacing between the first camera and the second camera.
- the photographing apparatus further includes a third camera, an optical axis of the third camera and an optical axis of the first camera are parallel to each other;
- the distance between the third camera and the first camera is less than a preset distance;
- the distance between the third camera and the second camera is less than a preset distance;
- the method further includes: acquiring a third image of the object to be photographed captured by the third camera; and acquiring third parameter information of the third camera, wherein the third camera is designed according to the third parameter information;
- the third parameter information expresses that, for the image captured by the third camera in the field of view range [θ4, θ5], the modulation transfer function MTF value corresponding to the preset spatial frequency is greater than the third preset threshold, where θ2 < θ4 < θ3 < θ5 and θ5 is smaller than 1/2 of the angle of view of the third camera; and acquiring a third sub-image of the third image according to the third parameter information, wherein the definition of the third sub-image is higher than that of its complement in the third image.
- the device further includes: a fifth acquiring module, configured to acquire a third image of the object to be photographed captured by the third camera; and a sixth acquiring module, configured to acquire third parameter information of the third camera, wherein the third camera is designed according to the third parameter information; the third parameter information expresses that, for the image captured by the third camera in the field of view range [θ4, θ5], the modulation transfer function MTF value corresponding to the preset spatial frequency is greater than the third preset threshold, where θ2 < θ4 < θ3 < θ5 and θ5 is less than 1/2 of the angle of view of the third camera;
- the sixth acquiring module is further configured to acquire a third sub-image of the third image according to the third parameter information; wherein the definition of the third sub-image is higher than that of the third complement image, the third complement image being the complement of the third sub-image in the third image; the second sub-image and the third sub-image have an image intersection; and the image stitching module fuses the images of the first sub-image, the second sub-image, and the third sub-image to obtain the target image.
- the imaging lenses of the first camera and the second camera each comprise 4, 5, or 6 lenses.
- the aperture values of the first camera and the second camera are equal.
- the image sensors of the first camera and the second camera are the same. Therefore, the above P and Q are also the same.
- the focal length and the (maximum) field of view of the first camera and the second camera are the same.
- the first preset threshold and the second preset threshold are greater than or equal to 0.25.
- the preset spatial frequency is greater than 400 line pairs/mm.
- the larger the spatial frequency, the finer the detail of the corresponding image; the higher the spatial frequency at which the MTF remains greater than the preset threshold, the better the resolution of the image.
- S1 is a circular area.
- for the above-mentioned field of view ranges [0, θ1], [θ2, θ3], and so on, the corresponding image area is not necessarily a regular circle or ring; it may be an approximate circle or ring, or some irregular pattern. As long as all the high-definition sub-images finally obtained by the cameras, referenced to the same image sensor, can cover the area where the image sensor is located, seamless stitching is possible, and a target image satisfying the high definition of the super-large aperture can be formed.
- in the process of determining the sub-images, the image processing program may also, within the above field of view ranges [0, θ1] and [θ2, θ3], take non-circular images of the two areas, such as a square or an ellipse, and perform the corresponding stitching; as long as the union of these sub-images can cover the area where the image sensor is located, a high-definition subject can be expressed.
- the shapes of the first region, the second region, the third region, and the fourth region are not limited.
- an embodiment of the present invention provides a terminal device, where the terminal device includes a first camera, a second camera, a memory, a processor, and a bus; the first camera, the second camera, the memory, and the processor are connected by the bus; the optical axes of the first camera and the second camera are parallel to each other, and the distance between the first camera and the second camera is less than a preset distance; the aperture values of the first camera and the second camera are both less than 1.6, and the numbers of lenses of the first camera and the second camera are each not greater than 6; the cameras are configured to acquire image signals under the control of the processor; the memory is used to store computer programs and instructions; and the processor is configured to call the computer programs and instructions stored in the memory to perform any of the possible implementation methods described above.
- the terminal device further includes an antenna system, and the antenna system transmits and receives wireless communication signals under the control of the processor to implement wireless communication with the mobile communication network;
- the mobile communication network includes one or more of the following: GSM, CDMA, 3G, FDMA, TDMA, PDC, TACS, AMPS, WCDMA, TD-SCDMA, WiFi, and LTE networks.
- the above method, apparatus, and device can be applied to a scene in which the camera software provided with the terminal is used for shooting, or to a scene in which third-party camera software running in the terminal is used for shooting; the shooting includes normal shooting, self-portrait, video telephony, video conferencing, VR shooting, aerial photography, and other shooting modes.
- FIG. 1 is a schematic view showing the structure of a lens;
- FIG. 2 is a schematic structural view of a terminal
- FIG. 3 is a flowchart of an image processing method according to an embodiment of the present invention.
- FIG. 4 is a schematic structural diagram of hardware of a camera according to an embodiment of the present invention.
- FIG. 5 is a schematic diagram of a first camera in an embodiment of the present invention.
- FIG. 6 is a schematic diagram of image quality evaluation of a first camera in an embodiment of the present invention.
- FIG. 7 is a schematic diagram of a second camera according to an embodiment of the present invention.
- FIG. 8 is a schematic diagram of image quality evaluation of a second camera according to an embodiment of the present invention.
- FIG. 9 is a schematic diagram of acquiring an image by a dual lens module according to an embodiment of the present invention.
- FIG. 10 is another schematic diagram of acquiring an image according to an embodiment of the present invention.
- FIG. 11 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
- the terminal may be a device that provides photographing and/or data connectivity to the user, a handheld device with a wireless connection function, or another processing device connected to a wireless modem, such as a digital camera, an SLR camera, a mobile phone (or "cellular" phone), which may be portable, pocket-sized, handheld, or wearable (such as a smart watch), a tablet, a personal computer (PC), a PDA (Personal Digital Assistant), a POS (Point of Sale) terminal, an on-board computer, a drone, an aerial camera, and the like.
- FIG. 2 shows an alternative hardware structure diagram of the terminal 100.
- the terminal 100 may include a radio frequency unit 110, a memory 120, an input unit 130, a display unit 140, a camera 150, an audio circuit 160, a speaker 161, a microphone 162, a processor 170, an external interface 180, a power supply 190, and the like.
- there are at least two cameras 150.
- the camera 150 is used for capturing images or videos, and can be triggered by an application instruction to realize a photographing or video-recording function.
- the camera includes an imaging lens, a filter, an image sensor, a focus anti-shake motor and the like.
- the light emitted or reflected by the object enters the imaging lens, passes through the filter, and finally converges on the image sensor.
- the imaging lens is mainly used for collecting the light emitted or reflected by all the objects within the photographing angle of view;
- the filter is mainly used for filtering out unneeded light waves (for example, light waves other than visible light, such as infrared); the image sensor is mainly used for performing photoelectric conversion on the received optical signal, converting it into an electrical signal, and inputting it to the processor 170 for subsequent processing.
- FIG. 2 is merely an example of a portable multi-function device and does not constitute a limitation on the portable multi-function device; the device may include more or fewer components than those illustrated, combine some components, or use different components.
- the input unit 130 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the portable multifunction device.
- input unit 130 can include touch screen 131 as well as other input devices 132.
- the touch screen 131 can collect touch operations by the user on or near it (such as operations on or near the touch screen using a finger, a knuckle, a stylus, or any other suitable object) and drive the corresponding connection device according to a preset program.
- the touch screen can detect a user's touch action on it, convert the touch action into a touch signal, send the signal to the processor 170, and receive and execute commands sent by the processor 170; the touch signal includes at least touch-point coordinate information.
- the touch screen 131 can provide an input interface and an output interface between the terminal 100 and a user.
- touch screens can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
- the input unit 130 may also include other input devices.
- other input devices 132 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control button 132, switch button 133, etc.), trackball, mouse, joystick, and the like.
- the display unit 140 can be used to display information input by a user or information provided to a user and various menus of the terminal 100.
- the display unit is further configured to display an image acquired by the device using the camera 150, including a preview image, an initial image captured, and a target image processed by a certain algorithm after the shooting.
- the touch screen 131 may cover the display panel 141.
- when the touch screen 131 detects a touch operation on or near it, it transmits the operation to the processor 170 to determine the type of the touch event, and the processor 170 then provides a corresponding visual output on the display panel 141 according to the type of the touch event.
- the touch screen and the display unit can be integrated into one component to implement the input, output, and display functions of the terminal 100.
- here, the touch display screen represents the combined functions of the touch screen and the display unit; in some embodiments, the touch screen and the display unit can also serve as two separate components.
- the memory 120 can be used to store instructions and data; the memory 120 can mainly include a storage instruction area and a storage data area; the storage data area can store the association relationship between joint touch gestures and application functions, and the storage instruction area can store software units such as an operating system, applications, and the instructions required for at least one function, or their subsets and extension sets.
- the memory may also include a non-volatile random access memory, and provides the processor 170 with the hardware, software, and data resources for managing the computing device, supporting control software and applications; it is also used for storing multimedia files and for storing running programs and applications.
- the processor 170 is the control center of the terminal 100; it connects the various parts of the entire mobile phone through various interfaces and lines, and executes the various functions of the terminal 100 and processes data by running or executing instructions stored in the memory 120 and calling data stored in the memory 120, so as to monitor the mobile phone as a whole.
- the processor 170 may include one or more processing units; preferably, the processor 170 may integrate an application processor and a modem processor, where the application processor mainly processes an operating system, a user interface, an application, and the like.
- the modem processor primarily handles wireless communications. It can be understood that the above modem processor may not be integrated into the processor 170.
- the processors, memories can be implemented on a single chip, and in some embodiments, they can also be implemented separately on separate chips.
- the processor 170 can also be configured to generate corresponding operational control signals, send to corresponding components of the computing processing device, read and process data in the software, and in particular read and process the data and programs in the memory 120 to enable Each function module performs the corresponding function, thereby controlling the corresponding component to act according to the requirements of the instruction.
- the radio frequency unit 110 can be used for receiving and transmitting signals during the transmission and reception of information or during a call; specifically, after receiving downlink information from the base station, it delivers the information to the processor 170 for processing, and it sends uplink data to the base station.
- RF circuits include, but are not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like.
- the radio unit 110 can also communicate with network devices and other devices through wireless communication.
- the wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
- the audio circuit 160, the speaker 161, and the microphone 162 can provide an audio interface between the user and the terminal 100.
- on one hand, the audio circuit 160 can transmit the electrical signal converted from received audio data to the speaker 161, which converts it into a sound signal for output; on the other hand, the microphone 162 collects sound signals and converts them into electrical signals, which are received by the audio circuit 160 and converted into audio data; after the audio data is processed by the processor 170, it is transmitted, for example, via the radio frequency unit 110 to another terminal, or output to the memory 120 for further processing.
- the audio circuit can also include a headphone jack 163 for providing a connection interface between the audio circuit and the earphone.
- the terminal 100 also includes a power source 190 (such as a battery) for powering various components.
- the power source can be logically coupled to the processor 170 through a power management system to manage functions such as charging, discharging, and power management through the power management system.
- the terminal 100 further includes an external interface 180, which may be a standard Micro USB interface or a multi-pin connector, and which may be used to connect the terminal 100 to other devices for communication, or to connect a charger to charge the terminal 100.
- the terminal 100 may further include a flash, a wireless fidelity (WiFi) module, a Bluetooth module, various sensors, and the like, and details are not described herein.
- an embodiment of the present invention provides an image processing method, which may be applied to a terminal having at least two cameras.
- the two cameras are respectively referred to as a first camera and a second camera;
- the terms "first", "second", and the like in the present application are used only for distinction, and imply no order or performance limitation;
- the first camera and the second camera are positioned such that the optical axes of the two are parallel to each other;
- the aperture values of both cameras are less than 1.6 (the super-large aperture referred to in this application means an aperture value less than 1.6); the minimum limit of the aperture value can be infinitely close to 0;
- the terminal can be the terminal 100 shown in FIG. 2, or a simple camera device or the like as shown in FIG. 4.
- the specific processing method process includes the following steps:
- Step 31: Acquire a first image of an object to be photographed captured by the first camera;
- Step 32: Acquire a second image of the object to be photographed captured by the second camera;
- the object to be photographed can be understood as the object that the user expects to photograph, or as the imaged object displayed on the screen when the user adjusts the shooting position of the terminal, for example, the common image portion framed by the two cameras; it should be understood that the first camera and the second camera are not in the same position, so the image content obtained by the first camera and the second camera when shooting the object to be photographed is not completely the same: most of the image areas are the same, with some differences at the edges.
- the theoretical approximation is that the imaging of the two cameras is the same.
- existing correction techniques may be used to correct the images captured by the two cameras, such as correcting for the positional offset between them, to obtain the first image and the second image, making the images approximately the same; alternatively, the common image area of the two images may be taken as the first image and the second image, making them approximately the same.
- the geometric center of the first image and the geometric center of the second image can be corrected to coincide; that is, when the two images are compared for content, if their geometric centers coincide, the identical parts of the contents of the two images overlap.
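- As an illustration of this correction, the sketch below shifts the second image by a known pixel offset so that the geometric centers of the contents coincide, then crops both captures to the common region. In practice the offset would come from calibration or registration; here it is a hypothetical input.

```python
# Minimal sketch: translate img2 by (dx, dy) pixels and crop both images to
# their common region so identical content overlaps.
import numpy as np

def align_by_center(img1, img2, offset_xy):
    dx, dy = offset_xy
    img2_shifted = np.roll(np.roll(img2, dy, axis=0), dx, axis=1)
    h, w = img1.shape[:2]
    # Crop away the border invalidated by the wrap-around of np.roll.
    y0, y1 = max(dy, 0), h + min(dy, 0)
    x0, x1 = max(dx, 0), w + min(dx, 0)
    return img1[y0:y1, x0:x1], img2_shifted[y0:y1, x0:x1]
```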
- Step 33: Acquire a first sub-image of the first image according to a first preset rule, where the first sub-image corresponds to a field of view range [0, θ1] of the first camera;
- Step 34: Acquire a second sub-image of the second image according to a second preset rule, where the second sub-image corresponds to a field of view range [θ2, θ3] of the second camera; wherein 0 < θ2 < θ1 < θ3, and the first sub-image and the second sub-image have overlapping images;
- this field of view range can be specified in advance, or it can be determined by reading the camera parameters.
- the intersection of images refers to the area in which the content of the two images is the same.
- the first sub-image and the second sub-image are aligned so that their identical content overlaps; if the geometric centers of the first sub-image and the second sub-image coincide, the intersection of the first sub-image and the second sub-image is an annular area whose outer ring is completely within the second sub-image and whose inner ring is completely within the first sub-image, so that the first sub-image and the second sub-image can constitute a complete image of the subject.
- the first sub-image and the second sub-image are aligned so that their identical content overlaps; if the geometric centers of the first sub-image and the second sub-image do not coincide, the intersection of the first sub-image and the second sub-image is no longer an annulus and may be a closed area bounded by an inner closed curve and an outer closed curve; the outer curve of the closed area is completely within the second sub-image, and the inner curve is completely within the first sub-image.
- Step 35: Splice the first sub-image and the second sub-image according to a preset splicing algorithm to obtain a target image.
- when the first sub-image and the second sub-image are aligned so that their identical content overlaps and their geometric centers do not coincide, the intersection of the first sub-image and the second sub-image is no longer an annulus and may be a non-closed area bounded by an inner closed curve and an outer closed curve; if the image content of the non-closed area does not affect the expression of the object to be photographed, or the image quality of the content corresponding to the non-closed area in the first image or the second image meets a certain image quality standard, then the first sub-image, the second sub-image, and the image corresponding to the non-closed area in the first image or the second image may be subjected to fusion processing to obtain the target image.
- the imaging lenses of the first camera and the second camera are specially manufactured according to certain special requirements in advance, that is, according to certain physical design parameters.
- the lens manufacturing system can determine the number of lenses from the user's target parameter requirements according to empirical values, and formulate the corresponding specific hardware configuration parameters from that number, such as the focal length of each lens and the relative position between lenses. Because of the design difficulty of a large aperture, a large aperture cannot achieve a clear image within the entire field of view without increasing the number of lenses. Therefore, in a specific design there is a trade-off between the aperture value and the size of the field of view over which high-quality images are obtained: the smaller the aperture value, the smaller the range of angles of view that satisfies the required image clarity.
- the imaging lens 201 in the first camera is composed of five lenses, but the number of lenses is not limited to five, and may be four to six.
- in the present invention, the aperture value FNO1 is designed to be small (FNO1 is less than 1.6), and the imaging-quality design weight of the field of view range [0, θ1] is increased;
- so that the imaging quality within the field of view range [0, θ1] meets expectations; that is, the quality of the imaging range corresponding to the [0, θ1] portion of the field of view meets the requirements of FNO1, while the image quality in the range of angles of view greater than θ1 is not of concern, even if that quality is poor.
- in this way, the existing process can produce the corresponding imaging lens without added difficulty. It can be seen that the first camera achieves a smaller FNO1 at the expense of the image quality corresponding to the range of angles of view outside [0, θ1].
- due to the above special design requirements, the image quality obtained by the imaging lens of the first camera in the field of view range [0, θ1] of the object to be photographed can meet the requirements of FNO1.
- this image quality under FNO1 can be measured by the MTF, whose value within [0, θ1] can still reach the preset standard.
- ⁇ 1 is not more than 1/2 of the angle of view of the first camera.
- the field of view is an inherent property of the camera once the camera is fixed in the terminal, and is the maximum field of view over which the camera can image when the terminal is at a fixed position. As is well known in the industry, the angle subtended at the lens by the two edges of the largest range through which the object image of the object to be measured can pass is called the angle of view.
- an evaluation standard can refer to FIG. 6.
- the specific parameters of the first camera are: FNO is 1.4; focal length is 3.95 mm; FOV is 75°, which can also be expressed as [0, 37.5°]. (Note: in the present application, a field of view range [0, θ] denotes the cone formed, taking the lens as the starting point and centered on the optical axis, by all rays at an angle of at most θ to the optical axis.) The MTF performance is shown in FIG. 6, which shows the MTF of the imaging lens in the sagittal direction. (The contrast of the image output through an optical system is always worse than the contrast of the input image, and this change in contrast is closely related to the spatial frequency characteristics; the ratio of the contrast of the output image to that of the input image can be defined as the modulation transfer function MTF.) The angles annotated in the figure are the corresponding half angles of view.
- the different lines represent the MTF curves for different half FOVs, and the abscissa represents the spatial frequency: the larger its value, the finer the image detail that can be resolved.
- the ordinate represents the MTF, which characterizes the contrast of the image: the greater the contrast, the clearer the image.
- the dashed line in the figure represents the limit of the contrast of the system, and the closer a curve is to it, the better the image quality.
- the MTF of the imaging lens in the meridional direction behaves like the sagittal direction.
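- As an illustration of the MTF definition above, the sketch below measures the contrast (modulation) of a sinusoidal test pattern before and after an optical system and takes their ratio; the test pattern and the amount of contrast loss are illustrative, not measurements of these lenses.

```python
# Minimal sketch: MTF at one spatial frequency as the ratio of output to
# input Michelson contrast (Imax - Imin) / (Imax + Imin).
import numpy as np

def modulation(signal):
    return (signal.max() - signal.min()) / (signal.max() + signal.min())

x_mm = np.linspace(0.0, 0.01, 1000)          # 10 um of image plane
freq = 500                                   # 500 line pairs/mm, as cited here
input_img = 0.5 + 0.5 * np.sin(2 * np.pi * freq * x_mm)
output_img = 0.5 + 0.15 * np.sin(2 * np.pi * freq * x_mm)  # contrast reduced
mtf = modulation(output_img) / modulation(input_img)       # about 0.3
```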
- in the central FOV range of about [0°, 32°], the MTF of the obtained image is still maintained at a high level at 500 line pairs/mm.
- the image quality of this region can reach a very good level under the condition that the FNO is 1.4.
- the image acquired by the first camera in the FOV range of about [32°, 37.5°] has a relatively poor corresponding MTF value, and high-quality imaging of this field of view area will be borne by the second camera. It should be understood that, due to manufacturing process factors, the boundary between the high-quality image area and the low-quality image area is not necessarily strictly circular, and the actual boundary may be an irregular pattern.
- the imaging lens 204 in the second camera is composed of five lenses, but the number of lenses is not limited to five, and may be four to six.
- similarly, the aperture value FNO2 of the second camera is designed to be small (FNO2 is less than 1.6), and the imaging-quality design weight of the field of view range [θ2, θ3] is increased;
- so that the imaging quality within the field of view range [θ2, θ3] meets expectations; that is, the quality of the imaging range corresponding to the [θ2, θ3] portion of the field of view meets the requirements of FNO2, while image quality is not of concern for the range of angles of view less than θ2, even if that quality is poor.
- in this way, the existing process can produce the corresponding imaging lens without added difficulty. It can be seen that the second camera achieves a smaller FNO2 at the expense of the image quality corresponding to the field of view range [0, θ2].
- due to the above special design requirements, the image quality obtained by the imaging lens of the second camera in the field of view range [θ2, θ3] of the object to be photographed can meet the requirements of FNO2.
- this image quality under FNO2 can likewise be measured by the MTF.
- ⁇ 3 is not more than 1/2 of the angle of view of the second camera.
- an evaluation standard can refer to FIG. 8.
- the specific parameters of the second camera are: FNO is 1.4; focal length is 3.95 mm; FOV is 75°, which can also be expressed as [0, 37.5°]. The MTF performance is shown in FIG. 8, which shows the MTF of the imaging lens in the sagittal direction. (The contrast of the image output through an optical system is always worse than that of the input image; the change in contrast is closely related to the spatial frequency characteristics; the ratio of the contrast of the output image to that of the input image can be defined as the modulation transfer function MTF.) The angles annotated in the figure are the corresponding half angles of view.
- the different lines represent the MTF curves for different half FOVs, and the abscissa represents the spatial frequency: the larger its value, the finer the image detail that can be resolved.
- the ordinate represents the MTF, which characterizes the contrast of the image: the greater the contrast, the clearer the image.
- the dashed line in the figure represents the limit of the contrast of the system, and the closer a curve is to it, the better the image quality.
- the MTF of the imaging lens in the meridional direction behaves like the sagittal direction.
- for the image in the FOV range [28°, 37.5°]: taking the lens as the starting point and centered on the optical axis, the cone region C1 is formed by all rays at an angle of 28° to the optical axis, and the cone region C2 is formed by all rays at an angle of 37.5° to the optical axis; the MTF value in the field of view range between C1 and C2 is high.
- at 500 line pairs/mm, the MTF of the image in this field of view range can still be maintained at 0.25.
- the image quality of the region can reach a good level under the condition that the FNO is 1.4.
- the image acquired at the second camera is in the range of FOV about [0°, 28°], and the corresponding MTF value is relatively poor.
- the parameter information of the lenses is pre-stored locally in the photographing device or in a cloud server; therefore, in subsequent image processing, the processor may, according to this parameter information, obtain from the image acquired by each camera the partial area whose sharpness satisfies the super-large aperture, for subsequent splicing and fusion processing (in the present invention, splicing and fusion both refer to image stitching, differing only in name, that is, the prior art of processing a plurality of partial pictures into one complete picture).
- the first camera includes an imaging lens 201 , a filter 202 , and an image sensor 203 ; and the second camera includes an imaging lens 204 , a filter 205 , and an image sensor 206 .
- the optical axes of the first camera and the second camera are parallel to each other, and the optical axis spacing is a preset distance. Due to the special design of the first camera and the second camera, the first camera is capable of obtaining a sharp image in the field of view range [0, θ1], and the second camera can obtain a sharp image in the field of view range [θ2, θ3].
- the image that can be acquired by the imaging lens should theoretically be a circular area if projected onto the plane of the image sensor.
- the size of the circle depends on the complete field of view of the imaging lens.
- the image sensor is designed to be square, so that the image finally acquired by the first camera and the second camera, that is, the image finally received by the image sensor, is square.
- if the difference between the images obtained by the two sensors is within the tolerance range of the subsequent processing algorithm, they may be used directly as the first image and the second image; if the difference exceeds the tolerance range of the subsequent processing algorithm, the first image and the second image need to be obtained by using a prior-art correction technique or by intercepting the same content region. How subsequent processing is performed based on the square images acquired by the two cameras, that is, steps 33, 34, and 35, is therefore particularly important.
- Step 33: Acquire first parameter information of the first camera; the first parameter information includes manufacturing parameters, performance parameters, and the like of the first camera, such as within what angle of view the first camera can obtain the required sharpness under a large aperture. For example, the image captured by the first camera in the field of view range [0, θ1] may be obtained, for which the modulation transfer function MTF value corresponding to the preset spatial frequency is greater than the first preset threshold, where θ1 is smaller than 1/2 of the angle of view of the first camera.
- Acquire the image receiving area P of the image sensor in the first camera; that is, the image received by the image sensor in the first camera may be acquired.
- the image of the intersection S1 of the region of the first image whose field of view range is [0, θ1] and P is determined as the first sub-image.
- the area where the square 203 is located represents the image receiving area of the image sensor in the first camera; the different circles represent different angles of view; for example, the area where the circle 301 is located corresponds to the image area of the field of view range [0, θ1].
- the area where the circle 302 is located corresponds to the image area of the full angle of view of the first camera; therefore, in this example, the intersection area of 301 and 203 is the first sub-image.
- the first camera is specifically designed such that the geometric centers of 301 and 203 coincide, and the diameter of 301 is smaller than the diameter of the circumscribed circle of the square 203. Due to the special design of the first camera and the above-described method of obtaining the first sub-image, the sharpness of the obtained first sub-image satisfies the super-large aperture FNO1.
- Step 34: Acquire second parameter information of the second camera; the second parameter information includes manufacturing parameters, performance parameters, and the like of the second camera, such as within what angle of view the second camera can obtain the required sharpness under a large aperture. For example, the image captured by the second camera in the field of view range [θ2, θ3] may be obtained, for which the modulation transfer function MTF value corresponding to the preset spatial frequency is greater than the second preset threshold, where 0 < θ2 < θ1 and θ3 is less than or equal to 1/2 of the angle of view of the second camera.
- Acquire the image receiving area Q of the image sensor in the second camera; that is, the image actually received by the image sensor in the second camera may be acquired.
- the area where the square 206 is located represents the image area received by the image sensor in the second camera; the different circles represent different angles of view. For example, the area where the circle 303 is located corresponds to the image area of the field-of-view range [0, θ2], and the area where the circle 304 is located corresponds to the image area of the field-of-view range [0, θ3]; the ring 306 sandwiched between the circle 303 and the circle 304 corresponds to the image area of the field-of-view range [θ2, θ3]. Therefore, in this example, the intersection of the ring 306 and the square 206 is the second sub-image.
- the geometric centers of the circle 303, the circle 304, and the square 206 coincide; the diameter of the circle 303 is smaller than the diameter of the circumscribed circle of the square 206 and smaller than the diameter of the circle 301, while the diameter of the circle 304
- is larger than the diameter of the circle 301 described above, to achieve seamless stitching of the images. In general, the diameter of 304 can also be larger than the diameter of the circumscribed circle of the square 206 to ensure that a complete image can subsequently be formed. Owing to this special design of the second camera and the above method of obtaining the second sub-image, the sharpness of the obtained second sub-image satisfies the super-large aperture FNO2.
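The second sub-image follows the same pattern with an annular mask for ring 306; again a sketch, with r2_px and r3_px the calibrated pixel radii corresponding to θ2 and θ3 (an assumption, as above).

```python
import numpy as np

def annulus_mask(h, w, r_inner_px, r_outer_px):
    """Boolean mask of ring 306, bounded by circles 303 and 304."""
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    d2 = (yy - cy) ** 2 + (xx - cx) ** 2
    return (d2 >= r_inner_px ** 2) & (d2 <= r_outer_px ** 2)

def second_sub_image(second_image, r2_px, r3_px):
    """S2: the part of the second image inside ring 306 (field of view
    [theta2, theta3]), intersected with the square receiving area 206."""
    mask = annulus_mask(second_image.shape[0], second_image.shape[1],
                        r2_px, r3_px)
    s2 = second_image.copy()
    s2[~mask] = 0  # keep only the sharp annular region
    return s2, mask
```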
- this forms the overlapping area 305 shown in FIG. 10: the area where 301 is located and the area where 306 is located thus satisfy exactly the condition for being spliced into one complete image.
- the overlapping portion is the area where the ring 305 is located.
- the first sub-image is S1
- the second sub-image is S2
- step 35 may take the following forms, where S31 denotes the portion of S1 outside the overlapping area and S32 the portion of S2 outside the overlapping area:
- the target image is obtained from the image of S1 and the image of S32 according to a first preset stitching algorithm; or
- the target image is obtained from the image of S31 and the image of S2 according to a second preset stitching algorithm; or
- the overlapping portions of the image of S1 and the image of S2 are fused by a preset enhancement algorithm to obtain an image S4,
- and the target image is obtained from the image of S31, the image of S32, and the image of S4 according to a third preset stitching algorithm (a sketch of this form follows below).
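A minimal sketch of the third form, reusing the masks from the sketches above. The plain average used inside the overlap is only a stand-in for the unspecified preset enhancement algorithm, and the S31/S32/S4 names follow the text.

```python
import numpy as np

def compose_target(s1, s2, s1_mask, s2_mask):
    """Stitch S31, S32, and the fused overlap S4 into the target image."""
    overlap = s1_mask & s2_mask   # ring 305
    s31 = s1_mask & ~overlap      # S1 without the overlap
    s32 = s2_mask & ~overlap      # S2 without the overlap

    target = np.zeros_like(s1)
    target[s31] = s1[s31]
    target[s32] = s2[s32]
    # Stand-in for the preset enhancement algorithm producing S4:
    # average the two exposures inside the overlap.
    target[overlap] = ((s1[overlap].astype(np.float64)
                        + s2[overlap].astype(np.float64)) / 2.0
                       ).astype(s1.dtype)
    return target
```

In practice the preset stitching algorithms would also register the two images and feather the seam; the hard cut at the mask boundaries here is only to show the geometry.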
- the image quality of the area inside the circle 301 satisfies high definition under the large aperture, and the image quality of the area within the ring 306 does as well; therefore, the image quality of the target image formed by splicing them also satisfies high definition under the large aperture.
- the main parameters of the second camera should approximate those of the first camera, including but not limited to the aperture value, the overall field-of-view range, the number of lenses, the imaging focal length, the overall size of the imaging lens, and the performance and size of the sensor. It should be understood that exactly identical results are difficult to obtain with any manufacturing method, and some error in the actual parameters is allowed; as long as the error is insufficient to change the essence of the technical realization, it falls within the scope of protection of the present invention.
- acquiring the first image of the object to be photographed by the first camera and acquiring the second image of the object to be photographed by the second camera may be triggered by the same trigger signal, or by two different trigger signals.
- the spacing between the first camera and the second camera is less than a preset distance to ensure that the pictures taken by the two cameras when shooting the same object are as identical as possible. It should be understood that, in a dual-camera scene, the distance between the two cameras is set in relation to the image area to be obtained, which is in turn determined by the subsequent
- processing algorithm. In the present invention, the images obtained by the two cameras are to be spliced subsequently, so the larger the overlapping area of the two images, the better; alternatively, the spacing of the two cameras is less than 1.5 cm,
- and in some designs it can also be less than or equal to 1 cm.
- the distance between the cameras and the object to be photographed also has a certain influence on the field of view captured in the images: for a given camera spacing, the closer the object to be photographed, the larger the field-of-view deviation between the two images, and the farther the object, the smaller the deviation.
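This dependence on object distance can be made concrete with the usual parallax relation; the baseline and field-of-view figures below are illustrative only and do not come from the patent.

```python
import math

def frame_shift_fraction(baseline_m, distance_m, fov_deg):
    """Fraction of the frame width by which two parallel-axis cameras'
    views of a plane at distance_m are offset."""
    frame_width_m = 2 * distance_m * math.tan(math.radians(fov_deg) / 2)
    return baseline_m / frame_width_m

print(frame_shift_fraction(0.01, 0.3, 75))  # ~0.022: about 2% of the frame at 0.3 m
print(frame_shift_fraction(0.01, 3.0, 75))  # ~0.002: negligible deviation at 3 m
```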
- the photographing device may further comprise an adjusting device for adjusting the spacing between the first camera and the second camera, flexibly adapting that spacing to different distances of the object to be photographed. This ensures that, for objects at different distances,
- the two cameras can obtain images that are as similar as possible (for example, a content similarity greater than 90%, or a ratio of the shared region of the two images to a single image greater than 90%), and that the first sub-image of the first camera and the second sub-image of the second camera have an overlapping area.
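One way to read that parenthetical criterion is as a toy model in which the two views differ mainly by a horizontal parallax shift; the function below is a hypothetical check, since the patent does not define similarity precisely.

```python
def shared_region_ratio(frame_width_px, shift_px):
    """Ratio of the horizontally shared region to a single frame when
    the two views are offset by shift_px pixels (toy model of the >90%
    criterion)."""
    return max(0.0, (frame_width_px - abs(shift_px)) / frame_width_px)

print(shared_region_ratio(4000, 80))   # 0.98: comfortably above the 90% figure
print(shared_region_ratio(4000, 500))  # 0.875: the spacing would need adjusting
```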
- in some cases, two cameras cannot obtain a sharp super-large-aperture image over the entire viewing angle; in the above embodiment, if the diameter of 304 is not larger than the diameter of the circumscribed circle of the square 206, some partial image areas fail to meet the sharpness of the large aperture.
- the photographing device may therefore further comprise a third camera, the optical axis of the third camera being parallel to the optical axis of the first camera; the distance between the third camera and the first camera is less than a preset distance, and the distance between the third camera and the second camera is less than a preset distance. A third image of the object to be photographed is acquired by the third camera, and a third sub-image of the third image is acquired according to a third preset rule, where the third sub-image corresponds to
- a field-of-view range [θ4, θ5] of the third camera, in which θ2 < θ4 < θ3 < θ5, so that the second sub-image and the third sub-image have an overlapping image; θ5 is smaller than 1/2 of the angle of view of the third camera.
- the target image is obtained from the first sub-image, the second sub-image, and the third sub-image according to a fourth preset splicing algorithm.
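The angular constraints can be summarised as a simple validity check on the three field-of-view intervals; a sketch, with angles in any consistent unit.

```python
def seamless_coverage(theta1, theta2, theta3, theta4, theta5):
    """True when [0, theta1], [theta2, theta3], and [theta4, theta5]
    chain with pairwise overlaps, as required for seamless splicing."""
    return (0 < theta2 < theta1 < theta3) and (theta2 < theta4 < theta3 < theta5)

print(seamless_coverage(30, 25, 50, 45, 70))  # True: adjacent ranges overlap
print(seamless_coverage(30, 25, 50, 55, 70))  # False: gap between theta3 and theta4
```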
- since the acquired first sub-image and second sub-image inherently satisfy the splicing conditions, the first preset splicing algorithm, the second preset splicing algorithm, the third preset
- splicing algorithm, and the fourth preset splicing algorithm mentioned in the above embodiments can all be implemented using existing technologies, and details are not described herein.
- in summary, the present invention provides an image processing method applied to a photographing apparatus comprising a first camera and a second camera, the optical axes of the first camera and the second camera being parallel to each other with a spacing smaller than a preset distance, and the aperture values of both being smaller than 1.6. The method comprises: acquiring a first image of the object to be photographed by the first camera; acquiring a second image of the object to be photographed by the second camera; acquiring a first sub-image of the first image according to a first preset rule, the first sub-image
- corresponding to a field-of-view range [0, θ1] of the first camera;
- and acquiring a second sub-image of the second image according to a second preset rule, the second sub-image corresponding to a field-of-view range [θ2, θ3] of the second camera, where θ2 < θ1 so that the first sub-image and the second sub-image have overlapping images; the target image is then obtained from the first sub-image and the second sub-image according to a preset splicing algorithm.
- an embodiment of the present invention provides an image processing apparatus 700, applied to a photographing apparatus including a first camera and a second camera, where the optical axes of the first camera and the second camera are parallel to each other, the distance between the first camera and the second camera is less than a preset distance, and the aperture values of the first camera and the second camera are both less than 1.6;
- the apparatus 700 includes a first obtaining module 701, a second obtaining module 702, a third obtaining module 703, a fourth obtaining module 704, and an image stitching module 705, where:
- the first obtaining module 701 is configured to acquire the first image captured by the first camera of the object to be photographed.
- the first obtaining module 701 can be implemented by the processor by calling the first camera to acquire an image.
- the second obtaining module 702 is configured to acquire the second image captured by the second camera of the object to be photographed.
- the second obtaining module 702 can be implemented by the processor by calling the second camera to acquire an image.
- the third obtaining module 703 is configured to acquire a first sub-image of the first image according to the first preset rule, where the first sub-image corresponds to a field-of-view range [0, θ1] of the first camera.
- the third obtaining module 703 can be implemented by a processor, which performs the corresponding calculation by calling data and an algorithm in a local storage or a cloud server to obtain the first sub-image from the first image.
- the fourth obtaining module 704 is configured to acquire a second sub-image of the second image according to the second preset rule, where the second sub-image corresponds to a field-of-view range [θ2, θ3] of the second camera, with θ2 < θ1. The fourth obtaining module 704 can be implemented by a processor, which performs the corresponding calculation by calling data and an algorithm in the local storage or the cloud server to obtain the second sub-image from the second image.
- the image splicing module 705 is configured to obtain the target image according to a preset splicing algorithm by using the first sub-image and the second sub-image.
- the image splicing module 705 can be implemented by a processor, which performs the corresponding calculation by calling data and a splicing-fusion algorithm in a local memory or a cloud server, splicing the first sub-image and the second sub-image into a complete target image.
- the target image still has high definition under a large aperture.
- the first obtaining module 701 is specifically configured to perform the method mentioned in step 31 or an equivalent alternative; the second obtaining module 702 is specifically configured to perform the method mentioned in step 32 or an equivalent alternative;
- the third obtaining module 703 is specifically configured to perform the method mentioned in step 33 or an equivalent alternative; the fourth obtaining module 704 is specifically configured to perform the method mentioned in step 34 or an equivalent alternative;
- the image splicing module 705 is specifically configured to perform the method mentioned in step 35 or an equivalent alternative.
- the photographing device may further include a third camera, the optical axis of the third camera being parallel to the optical axis of the first camera, the spacing between the third camera and the first camera being less than a preset distance,
- and the distance between the third camera and the second camera being less than the preset distance. The device then further includes: a fifth acquisition module 706 (not shown), configured to acquire a third image in which the third camera captures the object to be photographed; and a sixth acquisition module 707 (not shown), configured to acquire a third sub-image of the third image according to a third preset rule, where the third sub-image corresponds to a field-of-view range [θ4, θ5] of the third camera, in which θ2 < θ4 < θ3 < θ5 so that the second sub-image and the third sub-image have overlapping images, and θ5 is smaller than 1/2 of the angle of view of the third camera.
- the image splicing module 705 is then specifically configured to obtain the target image from the first sub-image, the second sub-image, and the third sub-image according to the fourth preset splicing algorithm.
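Tying the modules together, the sketch below shows how apparatus 700 might chain them in the dual-camera case; the camera objects with a `capture()` method are hypothetical, and the helper functions come from the sketches earlier in this description.

```python
class ImageProcessingApparatus:
    """Sketch of apparatus 700 for the dual-camera case."""

    def __init__(self, first_camera, second_camera, r1_px, r2_px, r3_px):
        self.first_camera = first_camera    # backs modules 701/703
        self.second_camera = second_camera  # backs modules 702/704
        self.r1_px, self.r2_px, self.r3_px = r1_px, r2_px, r3_px

    def capture_target_image(self):
        # Modules 701/702: acquire the two raw images.
        first_image = self.first_camera.capture()
        second_image = self.second_camera.capture()
        # Modules 703/704: cut out the sharp sub-images.
        s1, s1_mask = first_sub_image(first_image, self.r1_px)
        s2, s2_mask = second_sub_image(second_image, self.r2_px, self.r3_px)
        # Module 705: splice them into the full-field target image.
        return compose_target(s1, s2, s1_mask, s2_mask)
```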
- in summary, the present invention provides an image processing apparatus applied to a photographing apparatus comprising a first camera and a second camera, the optical axes of the first camera and the second camera being parallel to each other with a spacing smaller than a preset distance, and the aperture values of both being smaller than 1.6. The apparatus acquires, by the first camera, a first image of the object to be photographed; acquires, by the second camera, a second image of the object to be photographed; acquires a first sub-image of the first image according to the first preset rule, the first sub-image
- corresponding to a field-of-view range [0, θ1] of the first camera; and acquires a second sub-image of the second image according to the second preset rule, the second sub-image corresponding to a field-of-view range [θ2, θ3] of the second camera.
- the division of the above device 700 into modules is only a division of logical functions; in actual implementation, the modules may be wholly or partly integrated into one physical entity, or may be physically separate.
- each of the above modules may be a separately configured processing element, may be integrated in one chip of the terminal, or may be stored in a storage element of the controller in the form of program code, with a processing element of the processor
- calling and executing the functions of each of the above modules.
- the individual modules can be integrated or implemented independently.
- the processing elements described herein can be an integrated circuit chip with signal processing capabilities.
- each step of the above method or each of the above modules may be completed by an integrated logic circuit of hardware in the processor element or an instruction in a form of software.
- the processing element may be a general-purpose processor, such as a central processing unit (CPU), or may be one or more integrated circuits configured to implement the above method, for example: one or more application-specific integrated circuits (ASIC), one or more digital signal processors (DSP), or one or more field-programmable gate arrays (FPGA).
- embodiments of the present invention can be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
- the computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising an instruction apparatus,
- the apparatus implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
- these computer program instructions can also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing,
- the instructions executed on the computer or other programmable device thereby providing steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
Abstract
The present invention relates to an image processing method and apparatus, and a device. The method is applied to a terminal provided with two specially manufactured cameras, the first camera and the second camera being specifically customised so as to achieve a super-large aperture with the existing process, at the cost of the definition of the image acquired within a certain viewing-angle range, such that certain photographed area images can satisfy the quality requirements of the super-large aperture. The method comprises: acquiring a first sub-image of an object to be photographed by a first camera, with a corresponding viewing-angle range of [0, θ1]; acquiring a second sub-image of the object to be photographed by a second camera, with a corresponding viewing-angle range of [θ2, θ3], the quality of the first sub-image and the quality of the second sub-image satisfying the definition requirements of a super-large aperture; and stitching and fusing the first sub-image and the second sub-image so as to obtain a target image that satisfies the requirements of the super-large aperture over a wider viewing-angle range.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP18820572.8A EP3629569B1 (fr) | 2017-06-23 | 2018-04-25 | Procédé et appareil de traitement d'image, et dispositif |
| US16/723,554 US11095812B2 (en) | 2017-06-23 | 2019-12-20 | Image processing method, apparatus, and device |
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201710488848.7A CN107295256A (zh) | 2017-06-23 | 2017-06-23 | 一种图像处理方法、装置与设备 |
| CN201710488848.7 | 2017-06-23 | ||
| CN201711243255.0A CN109120818B (zh) | 2017-06-23 | 2017-11-30 | 一种图像处理方法、装置与设备 |
| CN201711243255.0 | 2017-11-30 |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/723,554 Continuation US11095812B2 (en) | 2017-06-23 | 2019-12-20 | Image processing method, apparatus, and device |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018233373A1 (fr) | 2018-12-27 |
Family
ID=64736177
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2018/084518 Ceased WO2018233373A1 (fr) | 2017-06-23 | 2018-04-25 | Procédé et appareil de traitement d'image, et dispositif |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2018233373A1 (fr) |
Patent Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101183175A (zh) * | 2006-11-13 | 2008-05-21 | 华晶科技股份有限公司 | 数码相机的像差补正系统与方法 |
| US8922625B2 (en) * | 2009-11-19 | 2014-12-30 | Lg Electronics Inc. | Mobile terminal and controlling method thereof |
| CN105120145A (zh) * | 2015-07-31 | 2015-12-02 | 努比亚技术有限公司 | 电子设备及图像处理方法 |
| CN105262951A (zh) * | 2015-10-22 | 2016-01-20 | 努比亚技术有限公司 | 具有双目摄像头的移动终端及其拍照方法 |
| CN105338244A (zh) * | 2015-10-30 | 2016-02-17 | 努比亚技术有限公司 | 一种信息处理方法及移动终端 |
| CN106161980A (zh) * | 2016-07-29 | 2016-11-23 | 宇龙计算机通信科技(深圳)有限公司 | 基于双摄像头的拍照方法及系统 |
| CN106570110A (zh) * | 2016-10-25 | 2017-04-19 | 北京小米移动软件有限公司 | 图像去重方法及装置 |
| CN107295256A (zh) * | 2017-06-23 | 2017-10-24 | 华为技术有限公司 | 一种图像处理方法、装置与设备 |
Non-Patent Citations (1)
| Title |
|---|
| See also references of EP3629569A4 * |
Cited By (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112132879A (zh) * | 2019-06-25 | 2020-12-25 | 北京沃东天骏信息技术有限公司 | 一种图像处理的方法、装置和存储介质 |
| CN112132879B (zh) * | 2019-06-25 | 2024-03-08 | 北京沃东天骏信息技术有限公司 | 一种图像处理的方法、装置和存储介质 |
| CN112053403A (zh) * | 2020-04-27 | 2020-12-08 | 北京迈格威科技有限公司 | 双摄像头的光轴夹角确定方法及装置 |
| CN112233185A (zh) * | 2020-09-24 | 2021-01-15 | 浙江大华技术股份有限公司 | 相机标定方法、图像配准方法及摄像器件、存储装置 |
| CN112233185B (zh) * | 2020-09-24 | 2024-06-11 | 浙江大华技术股份有限公司 | 相机标定方法、图像配准方法及摄像器件、存储装置 |
| CN114169369A (zh) * | 2021-12-01 | 2022-03-11 | 普联技术有限公司 | 移动目标的监测方法、装置、终端设备及可读存储介质 |
| CN114845052A (zh) * | 2022-04-22 | 2022-08-02 | 杭州海康威视数字技术股份有限公司 | 一种拍摄参数调整方法、装置、摄像机及目标设备 |
| CN114845052B (zh) * | 2022-04-22 | 2024-03-12 | 杭州海康威视数字技术股份有限公司 | 一种拍摄参数调整方法、装置、摄像机及目标设备 |
| CN114820314A (zh) * | 2022-04-27 | 2022-07-29 | Oppo广东移动通信有限公司 | 图像处理方法及装置、计算机可读存储介质和电子设备 |
| CN115022510A (zh) * | 2022-05-30 | 2022-09-06 | 艾酷软件技术(上海)有限公司 | 摄像头组件、电子设备及其拍摄方法和拍摄装置 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11095812B2 (en) | Image processing method, apparatus, and device | |
| KR102310430B1 (ko) | 촬영 방법, 장치 및 디바이스 | |
| WO2018233373A1 (fr) | Procédé et appareil de traitement d'image, et dispositif | |
| CN113472976B (zh) | 微距成像的方法及终端 | |
| US11108953B2 (en) | Panoramic photo shooting method and apparatus | |
| CN109671106B (zh) | 一种图像处理方法、装置与设备 | |
| CN106226976B (zh) | 一种双摄像头拍摄方法、系统及终端 | |
| CN104363379A (zh) | 使用不同焦距摄像头拍照的方法和终端 | |
| WO2023016127A1 (fr) | Plaque de phase, module de caméra et terminal mobile | |
| CN107637063A (zh) | 用于基于用户的手势控制功能的方法和拍摄装置 | |
| CN110072045B (zh) | 镜头、摄像头及电子设备 | |
| WO2019072222A1 (fr) | Procédé, dispositif et appareil de traitement d'image | |
| HK40003312B (en) | Panoramic shooting method and device | |
| HK40003312A (en) | Panoramic shooting method and device |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18820572; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | ENP | Entry into the national phase | Ref document number: 2018820572; Country of ref document: EP; Effective date: 20191227 |
| | WWG | Wipo information: grant in national office | Ref document number: 2018820572; Country of ref document: EP |