
WO2023015981A1 - Image processing method and related device therefor - Google Patents

Image processing method and related device therefor Download PDF

Info

Publication number
WO2023015981A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
block
camera
fused
processing method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2022/091225
Other languages
French (fr)
Chinese (zh)
Inventor
肖斌
乔晓磊
朱聪超
王宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Publication of WO2023015981A1 publication Critical patent/WO2023015981A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Definitions

  • the present application relates to the field of image processing, in particular to an image processing method and related equipment.
  • the present application provides an image processing method and related equipment, which can perform image restoration processing on low-resolution areas in an image to restore details, thereby improving user experience.
  • an image processing method which is applied to an electronic device including a first camera and a second camera, and the method includes:
  • the electronic device starts the camera; displays a preview interface, where the preview interface includes a first control; detects a first operation on the first control; in response to the first operation, the first camera captures a first image and the second camera captures a second image, where the definition of the first image is lower than that of the second image; the first image includes a first region, and the first region is a region in the first image whose definition is less than a preset threshold; a mask block is obtained according to the first image, and the mask block corresponds to the first region; the first image and the second image are fused to obtain a first fused image; according to the mask block, a first image block in the first image and a second image block in the second image are determined, where both the first image block and the second image block correspond to the mask block; the first image block and the second image block are fused to obtain a fused image block; and the first fused image is fused with the fused image block to obtain a third image.
  • the first control may be the shooting key 11.
  • the embodiment of the present application provides an image processing method: a mask block corresponding to the first region with missing details is determined from the low-definition first image; a first image block corresponding to the mask block is obtained from the first image, and a second image block corresponding to the mask block is obtained from the second image, which has high definition and rich details; the first image block and the second image block are fused to obtain a clear fused image block; then the first fused image, obtained by fusing the first image and the second image, is further fused with the fused image block to restore the missing details and obtain a high-definition third image.
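The overall flow described above can be sketched as follows. This is a minimal numpy illustration, not the patented implementation: the segmentation model is stood in for by a simple brightness threshold, all three fusion models are stood in for by weighted averaging, and names such as `restore_details` are invented for illustration.

```python
import numpy as np

def find_mask(first, thresh=0.9):
    # Stand-in for the segmentation model: mark over-bright
    # (detail-missing) pixels as the first region.
    return first.mean(axis=-1) > thresh

def fuse(a, b, w=0.5):
    # Stand-in for the fusion models: a simple weighted average.
    return w * a + (1.0 - w) * b

def restore_details(first, second):
    mask = find_mask(first)                    # mask block
    fused = fuse(first, second)                # first fused image
    if not mask.any():
        return fused                           # no mask block: done
    ys, xs = np.where(mask)                    # bounding box of region
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    blk1 = first[y0:y1, x0:x1]                 # first image block
    blk2 = second[y0:y1, x0:x1]                # second image block
    fused_blk = fuse(blk1, blk2, w=0.2)        # favour the sharp block
    out = fused.copy()
    out[y0:y1, x0:x1] = fuse(out[y0:y1, x0:x1], fused_blk)
    return out                                 # third image
```

In the actual method the mask comes from a trained segmentation network, and each fusion step uses its own trained fusion model rather than fixed weights.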
  • obtaining the mask block according to the first image includes: inputting the first image into a segmentation model for segmentation, where the segmentation model segments the first region and generates a mask block corresponding to the first region.
  • the segmentation model may be: a fully convolutional neural network.
  • the first image can be finely segmented by the segmentation model to obtain multiple segmented image regions, which facilitates the subsequent independent repair of regions in the first image with severe local missing details, without affecting the image content of the surrounding regions.
  • fusing the first image and the second image to obtain the first fused image includes: fusing the first image and the second image using the first fusion model to obtain the first fused image.
  • since the second image has a higher definition than the first image, fusing the first image with the second image improves the definition of the overall image, yielding a higher-definition first fused image.
  • fusing the first image block and the second image block to obtain the fused image block includes: fusing the first image block and the second image block using a second fusion model to obtain the fused image block.
  • since the definition of the first image is lower than that of the second image, the definition of the first image block is also lower than that of the second image block, and the first image block may even contain no details at all. Therefore, by fusing the unclear, detail-poor first image block with the clear, detail-rich second image block, a higher-definition fused image block can be obtained.
  • fusing the first fused image and the fused image block to obtain the third image includes: fusing the first fused image and the fused image block using a third fusion model to obtain the third image.
  • the first fused image has improved overall definition compared with the first image, and the fused image block has locally improved definition compared with the first image block in the first image; by fusing the first fused image with the fused image block, the corresponding part of the first fused image can be further repaired, so as to obtain a third image with higher definition.
  • the method further includes: when no mask block is obtained according to the first image, fusing the first image and the second image using the first fusion model to obtain the first fused image.
  • the method further includes: registering the first image and the second image.
  • registration can improve the accuracy of fusing the first image and the second image.
  • the method further includes: registering the first image block and the second image block.
  • registration can improve the accuracy of fusing the first image block and the second image block.
  • the registration includes global registration and/or local registration, where global registration is used to register all the content in multiple images, and local registration is used to register local content in multiple images.
  • the alignment accuracy of all content in multiple images can be improved through global registration
  • the alignment accuracy of local content in multiple images can be improved through local registration.
  • the method further includes: training the first fusion model using a training image set with random highlight noise added, to obtain the second fusion model, where the training image set includes original images annotated with mask blocks.
  • the third fusion model is a Laplace fusion model.
  • when the Laplacian fusion model is used for fusion, the first fused image and the fused image block can first be decomposed into different spatial frequency bands, and fusion is then performed separately on each spatial frequency band layer; through this frequency-division processing, the fusion of the first fused image and the fused image block is more natural, the seams are more delicate, and the obtained third image is of higher quality.
  • In a second aspect, an image processing apparatus is provided, which includes units for performing each step in the above first aspect or any possible implementation manner of the first aspect.
  • an electronic device, including a camera module, a processor, and a memory; the camera module is used to collect a first image and a second image, where the definition of the first image is lower than that of the second image, the first image includes a first region, and the first region is a region in the first image whose definition is less than a preset threshold; the memory is used to store a computer program that can run on the processor; and the processor is used to execute the processing steps in the image processing method provided in the first aspect or any possible implementation manner of the first aspect.
  • the camera module includes a wide-angle camera, a main camera, and a telephoto camera; the wide-angle camera is used to obtain the first image after the processor obtains a photographing instruction, and the main camera is used to obtain the second image after the processor obtains the photographing instruction; or, the main camera is used to obtain the first image after the processor obtains the photographing instruction, and the telephoto camera is used to obtain the second image after the processor obtains the photographing instruction.
  • a chip, including: a processor, configured to call and run a computer program from a memory, so that a device installed with the chip executes the processing steps in the image processing method provided in the first aspect or any possible implementation manner of the first aspect.
  • a computer-readable storage medium, storing a computer program, where the computer program includes program instructions which, when executed, perform the processing steps in the image processing method provided in the first aspect or any possible implementation manner of the first aspect.
  • In a sixth aspect, a computer program product is provided, including a computer-readable storage medium storing a computer program, where the computer program enables a computer to execute the processing steps in the image processing method provided in the first aspect or any possible implementation manner of the first aspect.
  • Fig. 1 is a schematic diagram of an image obtained by using related technologies
  • FIG. 2 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of an image processing method provided in an embodiment of the present application.
  • Fig. 4 is a schematic flow chart of another image processing method provided by the embodiment of the present application.
  • FIG. 5 is a schematic diagram of image processing by a segmentation model provided in an embodiment of the present application.
  • FIG. 6 is a schematic diagram of image processing when obtaining a mask block provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a display interface for zooming when taking pictures and previewing provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of the process of multi-camera zooming during photo preview provided by the embodiment of the present application.
  • Fig. 9 is a schematic diagram of a hardware system applicable to the device of the present application.
  • Fig. 10 is a schematic diagram of a software system applicable to the device of the present application.
  • FIG. 11 is a schematic structural diagram of an image processing device provided by an embodiment of the present application.
  • FIG. 12 is a schematic structural diagram of a chip provided in the embodiment of the application.
  • "and/or" describes an association relationship and indicates that three relationships may exist; for example, "A and/or B" means: A exists alone, A and B exist simultaneously, or B exists alone.
  • "plural" refers to two or more.
  • "first" and "second" are used for descriptive purposes only, and cannot be understood as indicating or implying relative importance or implicitly specifying the quantity of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of these features. In the description of this embodiment, unless otherwise specified, "plurality" means two or more.
  • RGB (red, green, blue) color space refers to a color model related to the structure of the human visual system. According to the structure of the human eye, all colors are seen as different combinations of red, green and blue.
  • a pixel value refers to a set of color components corresponding to each pixel in a color image located in the RGB color space.
  • each pixel corresponds to a group of three primary color components, wherein the three primary color components are red component R, green component G and blue component B respectively.
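As a concrete illustration of the three-component convention above (the pixel values here are arbitrary examples):

```python
import numpy as np

# A 1x2 RGB image: each pixel is one (R, G, B) triple of components.
img = np.array([[[255, 0, 0],      # pure red pixel
                 [0, 128, 255]]],  # blue-ish pixel
               dtype=np.uint8)
r, g, b = img[0, 1]                # components of the second pixel
```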
  • Image registration refers to the matching of different images of the same area obtained by different imaging means so that they are geometrically aligned, and includes three aspects of processing: geometric correction, projection transformation, and scale unification.
  • FOV: field of view.
  • according to their different fields of view, cameras can be divided into a main camera, a wide-angle camera, and a telephoto camera.
  • the field of view of the wide-angle camera is larger than that of the main camera and its focal length is shorter, making it suitable for close-range shooting; the field of view of the telephoto camera is smaller than that of the main camera and its focal length is longer, making it suitable for shooting distant scenes.
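The inverse relation between focal length and field of view follows from the pinhole-camera model, FOV = 2·atan(sensor / 2f). The sketch below assumes a 36 mm (full-frame) sensor width purely for illustration; phone camera sensors are much smaller, but the relation is the same.

```python
import math

def fov_deg(sensor_width_mm, focal_length_mm):
    # Pinhole model: FOV = 2 * atan(sensor / (2 f)).
    # A longer focal length gives a narrower field of view.
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))
```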
  • Backlighting is a situation where the subject is just between the light source and the camera. In this state, it is easy to cause insufficient exposure of the subject. Therefore, in general, users should try to avoid shooting objects under backlight conditions.
  • Fig. 1 is an image captured by using related technologies.
  • As shown in FIG. 1, there are three people in the scene to be shot, waiting for the user to take a photo in the sun. Because strong sunlight shines on their faces, the face areas produce strong specular reflection and become high-brightness areas.
  • As a result, the captured image loses the details of the face areas; the image quality is poor and the content of the face areas cannot be seen clearly, which degrades the user experience.
  • In view of this, the embodiment of the present application provides an image processing method: a first image and a second image with different definitions are collected, and the content corresponding to the high-brightness area in the clearer second image is fused with the high-brightness area in the lower-definition first image, so that the missing details in the high-brightness area of the first image can be recovered; a higher-quality captured image is then obtained through multiple fusions, improving the user experience.
  • Fig. 2 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • the image processing method provided in this application can be applied to restore the details of the high-brightness area in the image.
  • GUI: graphical user interface.
  • the preview interface may include a viewfinder window 21 .
  • the preview image can be displayed in the viewfinder window 21 in real time.
  • the preview interface may also include a variety of shooting mode options and a first control, that is, the shooting key 11 .
  • the multiple shooting mode options include, for example, a photo mode and a video recording mode, and the shooting key 11 is used to indicate whether the current shooting mode is the photo mode, the video recording mode, or another mode. When the camera application is opened, the photo mode is generally selected by default.
  • the electronic device when the electronic device starts the camera application, the electronic device runs the program corresponding to the image processing method, and acquires and stores the captured image in response to the user's click operation on the shooting key 11 .
  • the image processing method of the present application can detect the highlighted face area, and then restore the details of the face area to obtain a high-quality captured image.
  • FIG. 3 is a schematic flowchart of an image processing method provided by an embodiment of the present application. As shown in FIG. 3 , the image processing method includes the following S10-S60.
  • the electronic device starts the camera, and displays a preview interface as shown in (b) in FIG.
  • the first image and the second image are images captured for the same scene to be captured.
  • the definition of the first image is lower than that of the second image, and the first image includes a first area, where the first area is an area in the first image whose resolution is less than a preset threshold.
  • the preset threshold can be set and modified according to needs, which is not limited in this embodiment of the present application.
  • both the first image and the second image are Bayer format images, and may also be referred to as images in the RAW domain.
  • the first area is used to represent an area in the first image that is unclear and lacks details.
  • the first area may refer to a high-brightness area where details are missing due to strong illumination when the first image is acquired, or may refer to a key area where details are missing when the first image is acquired, for example, a human face, a human body, or facial features.
  • the mask block refers to a mask image corresponding to the first region in the first image.
  • the mask block is used to control the processing of the first region, where details need to be restored, by replacing or fusing the detail-missing first region in the first image.
  • since the definition of the second image is higher than that of the first image, the definition of the overall image can be improved after the first image and the second image are fused, and a first fused image with higher definition can be obtained.
  • the first area may refer to the face areas of the three people that are illuminated by strong light so that the facial features cannot be seen clearly; the generated mask block corresponds to the first area and is used to represent the face area.
  • the first image block is the human face area determined from the first image
  • the second image block is the human face area determined from the second image.
  • since the definition of the first image is lower than that of the second image, the definition of the first image block is also lower than that of the second image block, and the first image block may even contain no details; by fusing the unclear, detail-poor first image block with the clear, detail-rich second image block, a fused image block with higher definition can be obtained.
  • the overall definition of the first fused image is improved relative to the first image, and the definition of the fused image block is locally improved relative to the first image block in the first image; by fusing the first fused image with the fused image block, the corresponding part of the first fused image can be further repaired, so as to obtain a third image with higher definition.
  • An embodiment of the present application provides an image processing method: the mask block corresponding to the first region with missing details is determined from the low-definition first image; a first image block corresponding to the mask block is obtained from the first image, and a second image block corresponding to the mask block is obtained from the second image; the first image block and the second image block are fused to obtain a clear fused image block; then the first fused image, obtained by fusing the first image and the second image, is further fused with the fused image block to restore the missing details and obtain a high-definition third image.
  • FIG. 4 is a schematic flowchart of another image processing method provided by an embodiment of the present application.
  • the image processing method 10 includes: S110 to S190.
  • the first image and the second image are images captured for the same scene to be captured.
  • the resolution of the first image is lower than that of the second image.
  • the first image and the second image are images captured by the electronic device through cameras; alternatively, the first image and the second image may be images obtained from inside the electronic device, for example, images stored in the electronic device or images obtained by the electronic device from the cloud. Both the first image and the second image are Bayer-format images.
  • of the two images, the one with lower definition is called the first image, and the one with higher definition is called the second image. Since definition is relative, "first image" and "second image" are relative designations as well.
  • for example, when the image processing method provided by the embodiment of the present application is used to process image a and image b, image a is the first image and image b is the second image; when the method is used to process image b and image c, image b is the first image and image c is the second image.
  • for example, the first image is an image collected by a wide-angle camera and the second image is an image collected by a telephoto camera, where the wide-angle camera and the telephoto camera collect their images at the same time; or, the first image is an image collected by a wide-angle camera and the second image is an image collected by an ultra-wide-angle camera, where the wide-angle camera and the ultra-wide-angle camera collect their images at the same time.
  • the first image may be an image with a region of missing details, and the missing details in the first image may be restored through the image processing method of the embodiment of the present application.
  • the segmentation model is used to segment the first region in the first image, and generate a mask block corresponding to the first region.
  • the first area is used to represent an area in the first image whose sharpness is less than a preset threshold, that is, an area lacking certain details.
  • the segmentation model may be: a fully convolutional neural network (fully convolutional networks, FCN) and the like.
  • the segmentation model may segment the first image to obtain a plurality of segmented image regions, where the plurality of image regions include some regions containing details, and may also include some regions lacking details.
  • when the first image includes one or more regions with missing details, the segmentation model can segment the one or more regions with missing details and generate the corresponding one or more mask blocks.
  • when the first image does not include any region with missing details, the segmentation model will not segment out such a region, and no corresponding mask block is generated.
  • the first area may refer to a high-brightness area where details are missing due to strong illumination when the first image is acquired, for example, in an HDR scene, or may refer to a key area where details are missing when the first image is acquired, for example, a human face, a human body, or facial features.
  • the number of first regions is the same as the number of mask blocks, and the range of viewing angles of the mask blocks corresponding to each first region is the same as the range of viewing angles corresponding to the first region.
  • FIG. 5 shows a schematic diagram of processing an image by a segmentation model provided in an embodiment of the present application.
  • the first image is input into the segmentation model; since the first image contains three face areas that are illuminated by strong light and lack details, the segmentation model can segment three first areas and generate the corresponding three mask blocks, for example, the three mask blocks corresponding to the face areas in FIG. 1.
  • the pixel value of each pixel corresponding to the mask block is 0.
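A mask image following this convention (value 0 inside the mask block, a non-zero value elsewhere) can be sketched as below. The brightness threshold is only a stand-in for the segmentation model's output, and the value 255 for non-mask pixels is an illustrative choice.

```python
import numpy as np

def make_mask(luma, thresh=240):
    # Pixels in the over-bright (detail-missing) first region get 0,
    # matching the convention that mask-block pixels have value 0;
    # all other pixels get 255.
    return np.where(luma >= thresh, 0, 255).astype(np.uint8)
```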
  • the registration may be global registration.
  • Global registration is used to register all the content in multiple images; that is, all the content of the first image and all the content of the second image can be registered here, so that the first image and the second image correspond to each other more accurately in the subsequent fusion.
  • the registration may include global registration and local registration.
  • Local registration is used to refer to the registration of local content in multiple images.
  • when the first region is not segmented from the first image by the segmentation model, some other regions can still be segmented, for example, a human body region and a background region other than the human body region; thus, the body region in the first image can be locally registered with the body region in the second image, without registering the background region of the first image with the background region of the second image.
  • global registration may be performed first and then local registration, or local registration may be performed first and then global registration; the registration order can be set and adjusted as required, and is not limited in this embodiment of the present application.
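Global registration in its simplest, translation-only form can be sketched with phase correlation. This is a simplified stand-in for illustration: practical registration also handles rotation, scale, and parallax, and is typically followed by local registration.

```python
import numpy as np

def global_shift(ref, moving):
    # Phase correlation: estimate the global translation between two
    # single-channel images. Returns the (dy, dx) that np.roll should
    # apply to `moving` to align it with `ref`.
    R = np.fft.fft2(ref) * np.conj(np.fft.fft2(moving))
    corr = np.fft.ifft2(R / (np.abs(R) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # Map peak coordinates to signed shifts in [-h/2, h/2).
    return ((dy + h // 2) % h - h // 2, (dx + w // 2) % w - w // 2)
```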
  • the first fusion model can fuse images with different resolutions.
  • the first fusion model may be a VGG net model.
  • after fusion, the definition of the content within the field-of-view range of the first image that corresponds to the second image can be improved, so as to obtain a first fused image with higher definition.
  • the field angle range of the first fused image is the same as the field angle range of the first image.
  • optionally, the first image and the second image may also not be registered, in which case the acquired first image and second image are fused directly using the first fusion model to obtain the first fused image.
  • one first image block corresponding to the mask block can be determined from the first image according to the mask block.
  • the field angle ranges of the first image block and the mask block are the same.
  • a corresponding second image block can be determined from the second image.
  • the field angle range of the second image block is the same as that of the mask block.
  • when there are multiple mask blocks, each mask block in the plurality of mask blocks corresponds to one first image block; that is, the same number of first image blocks can be determined from the first image, with the first image blocks corresponding to the mask blocks one by one. Similarly, a second image block corresponding to each mask block in the plurality of mask blocks can be determined from the second image; that is, the same number of second image blocks can be determined from the second image, with the second image blocks in one-to-one correspondence with the mask blocks and the first image blocks.
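Determining the per-mask image-block pairs can be sketched as cropping each mask block's bounding box from both images. This is a simplification for illustration; the description only requires that each block share its field-of-view range with its mask block.

```python
import numpy as np

def crop_blocks(first, second, masks):
    # For each mask block, take the same field-of-view window from
    # both images, giving one-to-one (first block, second block) pairs.
    pairs = []
    for m in masks:
        ys, xs = np.where(m)
        y0, y1 = ys.min(), ys.max() + 1
        x0, x1 = xs.min(), xs.max() + 1
        pairs.append((first[y0:y1, x0:x1], second[y0:y1, x0:x1]))
    return pairs
```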
  • registering the first image block and the second image block refers to registering the first image block and the second image block in each group of image blocks.
  • the registration may be global registration.
  • the first image block and the second image block in each group of image blocks are globally registered. Here, it refers to registering all contents of the first image block and all contents of the second image block in each group of image blocks.
  • the registration may include global registration and local registration.
  • the first image block and the second image block in each group of image blocks are firstly registered globally, and then locally registered.
  • local registration refers to registering the local content of the first image block in each group of image blocks with the local content of the second image block.
  • for example, when both the first image block and the second image block include a human face, the regions corresponding to the eyes of the face in the first image block and the second image block are registered with each other, and the regions corresponding to the mouth of the face in the first image block and the second image block are registered with each other.
  • in the embodiment of the present application, the first image block is extracted from the first image and the second image block is extracted from the second image, and global registration is then performed on the first image block and the second image block, so that the registration is isolated from, and does not affect, the surrounding background area.
  • local registration may continue to be performed on the first image block and the second image block, so as to improve the registration accuracy, and obtain the first image block and the second image block with higher registration accuracy.
  • the registered first image block and second image block still differ in definition, and the second fusion model can fuse image blocks with different definitions.
  • since the definition of the second image is higher than that of the first image, the definition of the second image block is higher than that of the first image block; thus, after the registered first image block and second image block are fused, a fused image block with higher definition can be obtained.
  • the field angle range of the fused image block is the same as the field angle ranges of the first image block and the second image block.
  • the second fusion model is a pre-trained fusion model.
  • the training image set may include an original image and a manually marked mask block, where the mask block is used to identify a first region of the original image in which details are missing.
  • original images refer to images in various HDR scenarios.
  • on each original image, one or more mask blocks indicating high-brightness regions (i.e., the first regions where details are missing) are manually marked.
  • the second fusion model is trained from the first fusion model.
  • the second fusion model is trained by adding random highlight noise to the first fusion model.
  • when fusing, the weight assigned to the second image block with higher definition is larger than the weight assigned to the first image block, so that the fused image block obtained by fusion can obtain more details from the second image block.
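The weighting idea can be illustrated with a toy per-pixel fusion in which the locally sharper block receives the larger weight. This is a hand-written stand-in, not the learned second fusion model; the gradient-based sharpness cue and all names here are illustrative assumptions.

```python
def grad_mag(img):
    """Per-pixel horizontal gradient magnitude as a crude sharpness cue."""
    return [[abs(row[min(x + 1, len(row) - 1)] - row[x])
             for x in range(len(row))] for row in img]

def fuse_weighted(blurry, sharp):
    """Blend two registered blocks; sharper pixels get the larger weight."""
    ga, gb = grad_mag(blurry), grad_mag(sharp)
    out = []
    for ra, rb, wa, wb in zip(blurry, sharp, ga, gb):
        row = []
        for a, b, x, y in zip(ra, rb, wa, wb):
            wa_, wb_ = (0.5, 0.5) if x + y == 0 else (x / (x + y), y / (x + y))
            row.append(wa_ * a + wb_ * b)
        out.append(row)
    return out

first_block  = [[4, 4, 4, 4]]    # low-contrast (blurry) version
second_block = [[0, 8, 0, 8]]    # high-contrast (sharper) version
fused_block  = fuse_weighted(first_block, second_block)
```

Where the second block has texture, its weight dominates and the detail is carried into the fused block; where neither block has gradient, the two are averaged.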
  • the third fusion model may be a Laplacian fusion model.
  • the Laplacian fusion model can first decompose the first fused image and the fused image block into different spatial frequency bands, and then perform fusion on each spatial frequency band layer. Through this frequency division processing, the fusion of the first fused image and the fused image block is more natural, the seams are finer, and the resulting third image has higher quality.
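For illustration, Laplacian-pyramid fusion can be sketched in one dimension: both signals are decomposed into frequency bands, each band is blended with a progressively smoothed mask, and the result is collapsed back. A real implementation works on 2-D images (for example with cv2.pyrDown/cv2.pyrUp); the 1-D signals and power-of-two lengths here are simplifying assumptions.

```python
def down(sig):                       # low-pass + decimate (2-tap average)
    return [(sig[i] + sig[i + 1]) / 2 for i in range(0, len(sig), 2)]

def up(sig):                         # upsample by duplication
    return [v for v in sig for _ in range(2)]

def laplacian_pyramid(sig, levels):
    pyr = []
    for _ in range(levels):
        low = down(sig)
        pyr.append([a - b for a, b in zip(sig, up(low))])  # band-pass detail
        sig = low
    pyr.append(sig)                  # coarsest residual
    return pyr

def blend_pyramids(pa, pb, mask, levels):
    blended, m = [], mask
    for level in range(levels + 1):
        blended.append([w * a + (1 - w) * b
                        for a, b, w in zip(pa[level], pb[level], m)])
        if level < levels:
            m = down(m)              # the mask is smoothed as we go coarser
    return blended

def collapse(pyr):
    sig = pyr[-1]
    for detail in reversed(pyr[:-1]):
        sig = [a + b for a, b in zip(up(sig), detail)]
    return sig

a, b = [10] * 8, [2] * 8             # "first fused image" vs "fused block"
mask = [1, 1, 1, 1, 0, 0, 0, 0]      # take `a` on the left, `b` on the right
fused = collapse(blend_pyramids(laplacian_pyramid(a, 2),
                                laplacian_pyramid(b, 2), mask, 2))
```

With these constant test signals the result coincides with a hard cut; with textured signals the per-band blending softens the seam, which is the point of the frequency-division processing.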
  • FIG. 6 shows a schematic diagram of processing an image when obtaining a mask block according to an embodiment of the present application.
  • the first image is input into the segmentation model. Since the first image contains three face regions that are illuminated by strong light and lack details, the segmentation model can segment three first regions and generate the three corresponding mask blocks.
  • the first image and the second image are registered, and the registered first image and the second image are fused using the first fusion model to obtain the first fusion image.
  • the three corresponding first image blocks are obtained from the first image, and the three corresponding second image blocks are obtained from the second image; then, the first image block and the second image block corresponding to the same mask block are registered and fused using the second fusion model to obtain a fused image block. Thus, three fused image blocks can be obtained.
  • the first fused image and the three fused image blocks are fused using the third fusion model to obtain the third image.
  • when no mask block is obtained from the first image by using the segmentation model, only the first fusion model is used to register and fuse the first image and the second image, and the obtained first fused image is taken as the captured image.
  • when the segmentation model obtains a mask block from the first image, it means that there are regions with missing details in the first image. In this case, the first image and the second image are first fused to obtain a first fused image with a large-scale improvement in definition.
  • then, the first image block is obtained from the first image, the second image block is obtained from the second image, and the first image block and the second image block are registered and fused, thereby obtaining a fused image block that effectively restores clarity and details; finally, the first fused image is further fused with the fused image block to repair the missing details and obtain a high-definition, high-quality captured image.
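The overall branch described above can be sketched as follows. The helper names (detect_missing_detail_regions, fuse_whole, fuse_block) are hypothetical stand-ins for the segmentation model and the fusion models, replaced by toy one-line implementations so the control flow is runnable; images are flattened to pixel lists for brevity.

```python
def detect_missing_detail_regions(img, thresh=250):
    """Stand-in for the segmentation model: one 'mask block' per
    overexposed pixel (a real model returns region masks)."""
    return [i for i, v in enumerate(img) if v >= thresh]

def fuse_whole(first, second):       # stand-in for the first fusion model
    return [(a + b) / 2 for a, b in zip(first, second)]

def fuse_block(a, b):                # stand-in for the second fusion model
    return min(a, b)                 # pretend the detailed pixel wins

def capture(first, second):
    regions = detect_missing_detail_regions(first)
    fused = fuse_whole(first, second)
    if not regions:                  # no mask block -> first fused image
        return fused
    for i in regions:                # per-region block fusion + merge
        fused[i] = fuse_block(first[i], second[i])
    return fused

no_clip = capture([10, 20, 30], [12, 22, 32])   # no missing-detail region
clipped = capture([10, 255, 30], [12, 90, 32])  # one overexposed region
```

When no region is detected the whole-image fusion result is returned directly; otherwise each detected region is repaired with the block-level fusion before the final merge, mirroring the two branches described above.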
  • the image processing method of the embodiment of the present application has been described in detail above with reference to FIGS. 2 to 6.
  • the first image and the second image are captured by two cameras.
  • current electronic devices usually include three or more cameras; therefore, two different cameras need to be triggered at different focal lengths to acquire the first image and the second image.
  • the zoom factor range corresponding to the electronic device is set to [0.4, 100].
  • the zoom factor range is divided into three zoom factor ranges, namely a first zoom factor range, a second zoom factor range, and a third zoom factor range, and the zoom factors included in the three ranges increase sequentially.
  • the first zoom factor range F1 is [0.4, 0.9)
  • the second zoom factor range F2 is [0.9, 3.5)
  • the third zoom factor range F3 is [3.5, 100]. It should be understood that each number here is only illustrative and can be set and changed as required, and is not limited in this embodiment of the present application.
  • the applicable zoom factor range of the wide-angle camera itself is [0.4, 1]
  • the applicable zoom factor range of the main camera itself is [0.6, 3.5]
  • the applicable zoom factor range of the telephoto camera itself is [2.0, 100].
  • the target camera corresponding to the first zoom range is set as the wide-angle camera
  • the target camera corresponding to the second zoom range is the main camera
  • the target camera corresponding to the third zoom range is set as the telephoto camera.
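With the example ranges above, mapping a requested zoom factor to the target camera reduces to a range lookup. The boundary values mirror the illustrative F1 = [0.4, 0.9), F2 = [0.9, 3.5), F3 = [3.5, 100] ranges and are not fixed by the method:

```python
def target_camera(zoom):
    """Map a zoom factor to the target camera for the example ranges."""
    if not 0.4 <= zoom <= 100:
        raise ValueError("zoom factor out of device range [0.4, 100]")
    if zoom < 0.9:                   # first zoom factor range F1
        return "wide"
    if zoom < 3.5:                   # second zoom factor range F2
        return "main"
    return "tele"                    # third zoom factor range F3
```

Half-open intervals keep the three ranges non-overlapping, so every zoom factor in [0.4, 100] selects exactly one target camera.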
  • FIG. 7 is a schematic diagram of an interface for zooming during photo preview provided by an embodiment of the present application.
  • FIG. 8 shows a schematic diagram of a process of multi-camera zooming during photo preview provided by an embodiment of the present application.
  • the electronic device 100 displays a preview interface as shown in (a) in FIG. 7 .
  • the shooting key 11 indicates that the current shooting mode is the photo mode.
  • the preview interface also includes a viewfinder window 21, and the viewfinder window 21 can be used to display a preview image before taking pictures in real time.
  • a zoom option 22 is also displayed on the preview screen. The user can select the zoom factor for the current shot in the zoom option 22, for example, 0.4X, 2X, or 50X.
  • in response to the user's zoom operation, the preview image can be enlarged or reduced according to the currently selected zoom factor, and as the zoom factor increases or decreases, the preview image in the viewfinder window 21 is correspondingly zoomed in or out.
  • two different cameras are invoked to acquire captured images by using the image processing method provided by the embodiment of the present application.
  • the wide-angle camera corresponding to the first zoom factor range is in the foreground sending display state, and sends the acquired image to the display screen for display.
  • when zooming to the first zoom switching point (for example, 0.6X), the wide-angle camera continues to be in the foreground display state, and the main camera corresponding to the second zoom factor range F2 starts to enter the background operation state.
  • since the wide-angle camera has a larger field of view but lower definition than the main camera, within the zoom factor range F11 of [0.6, 0.9], in response to the user's operation on the shooting key 11, the image acquired by the wide-angle camera is used as the first image and the image acquired by the main camera is used as the second image. Then, based on the first image acquired by the wide-angle camera and the second image acquired by the main camera, the image processing method provided in the embodiment of the present application is used to obtain a captured image with high definition and rich details.
  • when zooming to 0.9X, the wide-angle camera is turned off, and the main camera switches to the foreground sending display state; that is, the main camera sends the acquired image to the display screen for display.
  • when zooming to the second zoom switching point (for example, 2.0X), the main camera continues to be in the foreground display state, and the telephoto camera corresponding to the third zoom factor range F3 starts to enter the background operation state.
  • since the image acquired by the main camera has lower definition but a larger field of view than that of the telephoto camera, within the zoom factor range F21 of [2.0, 3.5], in response to the user's operation on the shooting key 11, the image acquired by the main camera is used as the first image and the image acquired by the telephoto camera is used as the second image. Then, based on the first image acquired by the main camera and the second image acquired by the telephoto camera, the image processing method provided by the embodiment of the present application is used to obtain a captured image with high definition and rich details.
  • when zooming to 3.5X, the main camera is turned off, and the telephoto camera switches to the foreground sending display state; that is, the telephoto camera sends the acquired image to the display screen for display.
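The foreground/background collaboration described above can be summarized as a state function of the zoom factor: within each overlap range one camera sends frames to the display while the other runs in the background so that both frames are available when the user shoots. The switch points (0.6X, 0.9X, 2.0X, 3.5X) are the example values from this embodiment; a real implementation would also add hysteresis around the switch points:

```python
def camera_states(zoom):
    """Return (foreground_camera, background_camera_or_None) for the
    example switch points of this embodiment."""
    if zoom < 0.6:
        return ("wide", None)
    if zoom < 0.9:
        return ("wide", "main")   # wide frame = first image, main = second
    if zoom < 2.0:
        return ("main", None)
    if zoom < 3.5:
        return ("main", "tele")   # main frame = first image, tele = second
    return ("tele", None)
```

Only in the two overlap ranges does a background camera run; elsewhere a single camera both previews and captures.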
  • the image processing method provided in the embodiment of the present application may be applicable to various electronic devices, and correspondingly, the image processing apparatus provided in the embodiment of the present application may be electronic devices in various forms.
  • the electronic device may be various camera devices such as single-lens reflex cameras and compact cameras, mobile phones, tablet computers, wearable devices, vehicle-mounted devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPC), netbooks, personal digital assistants (PDA), etc., or other equipment or devices capable of image processing.
  • the embodiment of the present application does not set any limitation on the specific type of the electronic device.
  • FIG. 9 shows a schematic structural diagram of an electronic device 100 provided in an embodiment of the present application.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, bone conduction sensor 180M, etc.
  • the structure shown in FIG. 9 does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer components than those shown in FIG. 9, or a combination of some of the components shown in FIG. 9, or sub-components of some of the components shown in FIG. 9.
  • the components shown in FIG. 9 can be realized in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units, for example: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processing unit (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), controller, video codec, digital signal processor (digital signal processor, DSP), baseband processor, and/or neural network processor (neural-network processing unit, NPU), etc. Wherein, different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100 .
  • the controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is a cache memory.
  • the memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated access is avoided, and the waiting time of the processor 110 is reduced, thereby improving the efficiency of the system.
  • the processor 110 may run the software code of the image processing method provided in the embodiment of the present application to capture an image with higher definition.
  • processor 110 may include one or more interfaces.
  • the interface may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous transmitter (universal asynchronous receiver/transmitter, UART) interface, mobile industry processor interface (mobile industry processor interface, MIPI), general-purpose input and output (general-purpose input/output, GPIO) interface, subscriber identity module (subscriber identity module, SIM) interface, and /or universal serial bus (universal serial bus, USB) interface, etc.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 .
  • MIPI interface includes camera serial interface (camera serial interface, CSI), display serial interface (display serial interface, DSI), etc.
  • the processor 110 communicates with the camera 193 through the CSI interface to realize the shooting function of the electronic device 100 .
  • the processor 110 communicates with the display screen 194 through the DSI interface to realize the display function of the electronic device 100 .
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 110 with the camera 193 , the display screen 194 , the wireless communication module 160 , the audio module 170 , the sensor module 180 and so on.
  • the GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface conforming to the USB standard specification, specifically, it can be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100 , and can also be used to transmit data between the electronic device 100 and peripheral devices. It can also be used to connect headphones and play audio through them. This interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules shown in the embodiment of the present application is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 140 is configured to receive a charging input from a charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 can receive the current of the wired charger through the USB interface 130 .
  • the charging management module 140 can receive electromagnetic waves through the wireless charging coil of the electronic device 100 (the current path is shown as a dotted line). While the charging management module 140 is charging the battery 142 , it can also supply power to the electronic device 100 through the power management module 141 .
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives the input from the battery 142 and/or the charging management module 140 to provide power for the processor 110 , the internal memory 121 , the display screen 194 , the camera 193 , and the wireless communication module 160 .
  • the wireless communication function of the electronic device 100 can be realized by the antenna 1 , the antenna 2 , the mobile communication module 150 , the wireless communication module 160 , a modem processor, a baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 may provide a wireless communication solution applied to the electronic device 100, such as at least one of the following solutions: a second generation (2G) mobile communication solution, a third generation (3G) mobile communication solution, a fourth generation (4G) mobile communication solution, a fifth generation (5G) mobile communication solution, and a sixth generation (6G) mobile communication solution.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signals modulated by the modem processor, and convert them into electromagnetic waves through the antenna 1 for radiation.
  • at least part of the functional modules of the mobile communication module 150 may be set in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be set in the same device.
  • the wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, such as wireless local area networks (WLAN) (for example, a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technology.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110 , frequency-modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
  • the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
  • the electronic device 100 realizes the display function through the GPU, the display screen 194 , and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos and the like.
  • the display screen 194 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), etc.
  • the electronic device 100 may include 1 or N display screens 194 , where N is a positive integer greater than 1.
  • Camera 193 is used to capture images or videos. It can be triggered by an application command to realize the camera function, such as capturing images of any scene.
  • a camera may include components such as an imaging lens, an optical filter, and an image sensor. The light emitted or reflected by the object enters the imaging lens, passes through the filter, and finally converges on the image sensor.
  • the imaging lens is mainly used to converge and image the light emitted or reflected by all objects in the camera's field of view (also called the scene to be shot, the target scene, or the scene image that the user expects to shoot); the optical filter is mainly used to filter out redundant light waves (such as light waves other than visible light, for example infrared) from the incident light; the image sensor is mainly used to perform photoelectric conversion on the received optical signal, convert it into an electrical signal, and input it into the processor 130 for subsequent processing.
  • the camera 193 may be located at the front of the electronic device 100, or at the back of the electronic device 100, and the specific number and arrangement of the cameras may be set according to requirements, which are not limited in this application.
  • the electronic device 100 includes a front camera and a rear camera.
  • a front camera or a rear camera may include one or more cameras.
  • the camera is arranged on an external accessory of the electronic device 100; the external accessory is rotatably connected to the frame of the mobile phone, and the angle formed between the external accessory and the display screen 194 of the electronic device 100 is any angle between 0 and 360 degrees.
  • the external accessory drives the camera to rotate to a position facing the user.
  • when the mobile phone has multiple cameras, only some of the cameras may be set on the external accessory, and the rest may be set on the body of the electronic device 100, which is not limited in this embodiment of the present application.
  • the internal memory 121 may be used to store computer-executable program codes including instructions.
  • the internal memory 121 may include an area for storing programs and an area for storing data.
  • the stored program area can store an operating system, at least one application program required by a function (such as a sound playing function, an image playing function, etc.) and the like.
  • the storage data area can store data created during the use of the electronic device 100 (such as audio data, phonebook, etc.) and the like.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (universal flash storage, UFS) and the like.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
  • the internal memory 121 can also store the software code of the image processing method provided by the embodiment of the present application.
  • the processor 110 runs the software code, it executes the process steps of the image processing method to obtain an image with higher definition.
  • the internal memory 121 can also store captured images.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, so as to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function, for example, saving files such as music in the external memory card.
  • the software code of the image processing method provided in the embodiment of the present application can also be stored in an external memory, and the processor 110 can run the software code through the external memory interface 120 to execute the process steps of the image processing method to obtain a high-definition image.
  • images captured by the electronic device 100 may also be stored in the external memory.
  • the user can designate whether to store the image in the internal memory 121 or the external memory.
  • when the electronic device 100 is currently connected to the external memory, if the electronic device 100 captures a frame of image, a prompt message may pop up to ask the user whether to store the image in the external memory or the internal memory; of course, there may be other ways of specifying this, which is not limited in this embodiment of the present application; alternatively, when the electronic device 100 detects that the remaining capacity of the internal memory 121 is less than a preset amount, it may automatically store the image in the external memory.
  • the electronic device 100 can implement audio functions through the audio module 170 , the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playback, recording, etc.
  • the pressure sensor 180A is used to sense the pressure signal and convert the pressure signal into an electrical signal.
  • pressure sensor 180A may be disposed on display screen 194 .
  • the gyro sensor 180B can be used to determine the motion posture of the electronic device 100 .
  • the angular velocity of the electronic device 100 around three axes (i.e., the x, y, and z axes) can be determined by the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude based on the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 may use the magnetic sensor 180D to detect the opening and closing of the flip leather case.
  • when the electronic device 100 is a clamshell phone, the electronic device 100 can detect the opening and closing of the clamshell according to the magnetic sensor 180D, and then features such as automatic unlocking of the flip cover can be set according to the detected opening and closing state of the leather case.
  • the acceleration sensor 180E can detect the acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of electronic devices, and can be used in applications such as horizontal and vertical screen switching, pedometers, etc.
  • the distance sensor 180F is used to measure the distance.
  • the electronic device 100 may measure the distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 may use the distance sensor 180F for distance measurement to achieve fast focusing.
  • Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
  • the light emitting diodes may be infrared light emitting diodes.
  • the electronic device 100 emits infrared light through the light emitting diode.
  • Electronic device 100 uses photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it may be determined that there is an object near the electronic device 100 . When insufficient reflected light is detected, the electronic device 100 may determine that there is no object near the electronic device 100 .
  • the electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear to make a call, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in leather case mode, automatic unlock and lock screen in pocket mode.
  • the ambient light sensor 180L is used for sensing ambient light brightness.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in the pocket, so as to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access to application locks, take pictures with fingerprints, answer incoming calls with fingerprints, and the like.
  • the temperature sensor 180J is used to detect temperature.
  • the electronic device 100 uses the temperature detected by the temperature sensor 180J to execute a temperature handling strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 may reduce the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection.
  • in some other embodiments, when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to avoid an abnormal shutdown of the electronic device 100 caused by low temperature.
  • in some other embodiments, when the temperature is lower than still another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
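The temperature handling strategy above can be summarized as a small decision rule. The sketch below is illustrative only: the threshold values and action names are assumptions, not taken from the embodiment.

```python
# Hypothetical thresholds; the embodiment does not specify concrete values.
HIGH_TEMP_C = 45.0      # above this, throttle the nearby processor
LOW_TEMP_C = 0.0        # below this, heat the battery
CRITICAL_LOW_C = -10.0  # below this, also boost the battery output voltage

def thermal_policy(temp_c: float) -> list[str]:
    """Return the protective actions for a reported temperature."""
    actions = []
    if temp_c > HIGH_TEMP_C:
        actions.append("throttle_processor")     # reduce performance near sensor 180J
    if temp_c < LOW_TEMP_C:
        actions.append("heat_battery")           # avoid cold shutdown of battery 142
    if temp_c < CRITICAL_LOW_C:
        actions.append("boost_battery_voltage")  # avoid shutdown from voltage sag
    return actions

print(thermal_policy(50.0))   # ['throttle_processor']
print(thermal_policy(-15.0))  # ['heat_battery', 'boost_battery_voltage']
```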
  • the touch sensor 180K is also called “touch device”.
  • the touch sensor 180K may be disposed on the display screen 194; the touch sensor 180K and the display screen 194 form a touchscreen, also called a "touch-control screen".
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to the touch operation can be provided through the display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100, at a position different from that of the display screen 194.
  • the bone conduction sensor 180M can acquire vibration signals. In some embodiments, the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone mass of the human vocal part. The bone conduction sensor 180M can also contact the human pulse to receive a blood-pressure pulse signal. In some embodiments, the bone conduction sensor 180M may also be disposed in an earphone to form a bone conduction earphone.
  • the audio module 170 can analyze the voice signal based on the vibration signal of the vibrating bone mass of the voice part acquired by the bone conduction sensor 180M, so as to realize the voice function.
  • the application processor can parse heart rate information based on the blood-pressure pulse signal acquired by the bone conduction sensor 180M, so as to realize a heart rate detection function.
  • the keys 190 include a power key, a volume key and the like.
  • the keys 190 may be mechanical keys or touch keys.
  • the electronic device 100 can receive key input and generate key signal input related to user settings and function control of the electronic device 100 .
  • the motor 191 can generate a vibrating reminder.
  • the motor 191 can be used for incoming call vibration prompts, and can also be used for touch vibration feedback.
  • touch operations applied to different applications may correspond to different vibration feedback effects.
  • the indicator 192 may be an indicator light, and can be used to indicate the charging status and battery level changes, as well as messages, missed calls, notifications, and the like.
  • the SIM card interface 195 is used for connecting a SIM card.
  • a SIM card can be inserted into or removed from the SIM card interface 195 to connect the card to or disconnect it from the electronic device 100.
  • the hardware system of the electronic device 100 is described in detail above, and the software system of the electronic device 100 is introduced below.
  • the software system may adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture.
  • the embodiment of the present application uses a layered architecture as an example to exemplarily describe the software system of the electronic device 100 .
  • a software system adopting a layered architecture is divided into several layers, and each layer has a clear role and division of labor. Layers communicate through software interfaces.
  • the software system can be divided into five layers, which are application layer 210 , application framework layer 220 , hardware abstraction layer 230 , driver layer 240 and hardware layer 250 from top to bottom.
  • the application layer 210 may include application programs such as camera and gallery, and may also include application programs such as calendar, call, map, navigation, WLAN, Bluetooth, music, video, and short message.
  • the application framework layer 220 provides application program access interfaces and programming frameworks for the applications of the application layer 210 .
  • the application framework layer includes a camera access interface, and the camera access interface is used to provide camera shooting services through camera management and camera equipment.
  • Camera management in the application framework layer is used to manage cameras. Camera management can obtain camera parameters, such as judging the working status of the camera.
  • the camera device in the application framework layer is used to provide a data access interface between different camera devices and camera management.
  • the hardware abstraction layer 230 is used to abstract hardware.
  • the hardware abstraction layer can include a camera hardware abstraction layer and abstraction layers for other hardware devices; the camera hardware abstraction layer can include camera device 1, camera device 2, and the like; the camera hardware abstraction layer can be connected with the camera algorithm library and can call the algorithms in the camera algorithm library.
  • the driver layer 240 is used to provide drivers for different hardware devices.
  • the driver layer may include camera drivers, digital signal processor drivers, and graphics processor drivers.
  • the hardware layer 250 may include sensors, image signal processors, digital signal processors, graphics processors, and other hardware devices.
  • the sensor may include sensor 1, sensor 2, etc., and may also include a depth sensor (time of flight, TOF) and a multispectral sensor.
  • the camera hardware abstraction layer determines that the current zoom factor falls within the zoom range [0.6, 0.9], so it can send an instruction to the camera device driver to invoke the wide-angle camera and the main camera, and the camera algorithm library starts to load the algorithms of the network models used in the embodiments of this application.
  • sensor 1 in the wide-angle camera is invoked to obtain the first image
  • sensor 2 in the main camera captures the second image
  • the first image and the second image are sent to the image signal processor for preliminary processing such as registration
  • the processed images are returned via the camera device driver to the hardware abstraction layer, which then applies the algorithms in the loaded camera algorithm library; for example, the segmentation model, the first fusion model, the second fusion model and the third fusion model are used according to the relevant processing steps provided in the embodiments of the present application, so as to obtain the captured image.
  • when running the segmentation model, the first fusion model, the second fusion model and the third fusion model, the digital signal processor can be called through the digital signal processor driver, and the graphics processor can be called through the graphics processor driver.
  • the captured images are sent back to the camera application via the camera hardware abstraction layer and the camera access interface for display and storage.
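The capture flow above can be sketched as a short sequence of calls. Every function and string below is an illustrative stand-in for the HAL, driver, and camera-algorithm-library components; none of these are real APIs.

```python
def run_models(first_image: str, second_image: str) -> str:
    # Apply the loaded algorithms in order:
    # segmentation -> first fusion -> second fusion -> third fusion.
    steps = ["segmentation", "first_fusion", "second_fusion", "third_fusion"]
    return f"captured({first_image}+{second_image}:{'->'.join(steps)})"

def capture(zoom_factor: float) -> str:
    # The camera HAL checks the zoom range before choosing the cameras.
    if not (0.6 <= zoom_factor <= 0.9):
        return "other_pipeline"          # a different camera combination
    first_image = "wide_angle_frame"     # sensor 1 in the wide-angle camera
    second_image = "main_camera_frame"   # sensor 2 in the main camera
    # The ISP performs preliminary processing such as registration, then the
    # images are handed to the loaded models.
    return run_models(first_image, second_image)

print(capture(0.8))
```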
  • FIG. 11 is a schematic structural diagram of an image processing device provided by an embodiment of the present application. As shown in FIG. 11 , the image processing device 300 includes an acquisition module 310 and a processing module 320 .
  • the image processing device 300 can perform the following schemes:
  • the acquisition module 310 is configured to acquire a first image and a second image, where the definition of the first image is lower than that of the second image, the first image includes a first area, and the first area is an area of the first image whose definition is less than a preset threshold.
  • a processing module 320 configured to input the first image into a segmentation model to determine whether a mask block is obtained, where the segmentation model is used to segment the first region in the first image and generate a mask block corresponding to the first region, and the first region represents the region of missing detail in the first image.
  • the processing module 320 is further configured to fuse the first image and the second image using the first fusion model to obtain a first fusion image.
  • the processing module 320 is further configured to determine the first image block in the first image according to the mask block, determine the second image block in the second image, and combine the first image block and the second image block The blocks are fused using the second fusion model to obtain fused image blocks.
  • the processing module 320 is further configured to fuse the first fusion image and the fusion image block by using the third fusion model to obtain a captured image.
  • the processing module 320 is further configured to, when no mask block is obtained, fuse the first image and the second image using the first fusion model to obtain the first fused image.
  • the processing module 320 is further configured to register the first image and the second image.
  • the processing module 320 is further configured to register the first image block and the second image block.
  • Registration includes: global registration and/or local registration. Global registration is used to register all content in multiple images, and local registration is used to register local content in multiple images.
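One simple form of global registration is estimating a translation between the two frames by phase correlation. The minimal NumPy sketch below is a hedged stand-in, not the embodiment's registration algorithm, and handles integer translations only.

```python
import numpy as np

def estimate_shift(ref: np.ndarray, moving: np.ndarray):
    """Estimate the integer (dy, dx) translation aligning `moving` to `ref`
    via phase correlation (one simple form of global registration)."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moving)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12        # normalized cross-power spectrum
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map shifts larger than half the image size to negative offsets.
    h, w = ref.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

# Synthetic check: shift an image by (3, 5) and recover the offset.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(3, 5), axis=(0, 1))
print(estimate_shift(shifted, img))  # (3, 5)
```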
  • the processing module 320 is further configured to use the training image set and add random highlight noise to train the first fusion model to obtain the second fusion model, wherein the training image set includes the original image, the original Images are annotated with mask blocks.
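The "random highlight noise" augmentation described above can be illustrated by pasting a random bright blob into a training image, so the model learns to take detail from the sharper input. The blob shape and intensity parameters below are assumptions for illustration only.

```python
import numpy as np

def add_random_highlight(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Add one soft-edged random highlight to an image in [0, 1]."""
    h, w = img.shape[:2]
    cy, cx = rng.integers(0, h), rng.integers(0, w)
    radius = int(rng.integers(5, max(6, min(h, w) // 4)))
    yy, xx = np.ogrid[:h, :w]
    dist2 = (yy - cy) ** 2 + (xx - cx) ** 2
    # Full strength at the centre, fading to zero at the radius.
    blob = np.clip(1.0 - dist2 / (radius ** 2), 0.0, 1.0)
    out = img + blob * rng.uniform(0.5, 1.0)
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(1)
clean = np.zeros((32, 32))
noisy = add_random_highlight(clean, rng)
print(noisy.max() > 0)  # True: a highlight was added
```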
  • the third fusion model is a Laplace fusion model.
  • the above modules may be implemented in the form of software and/or hardware, which is not specifically limited.
  • a “module” may be a software program, a hardware circuit or a combination of both to realize the above functions.
  • the hardware circuitry may include application-specific integrated circuits (ASICs), electronic circuits, processors (such as shared processors, dedicated processors, or group processors) and memory for executing one or more software or firmware programs, merged logic circuits, and/or other suitable components that support the described functionality.
  • modules of each example described in the embodiments of the present application can be realized by electronic hardware, or a combination of computer software and electronic hardware. Whether these functions are executed by hardware or software depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be regarded as exceeding the scope of the present application.
  • the embodiment of the present application also provides another electronic device, including a camera module, a processor, and a memory.
  • the camera module is used to acquire a first image and a second image, the first image and the second image are images taken for the same scene to be photographed, and the definition of the first image is lower than that of the second image.
  • a memory, which stores a computer program that can run on the processor.
  • a processor configured to execute the processing steps in the above-mentioned image processing method.
  • the camera module includes a wide-angle camera, a main camera and a telephoto camera; the wide-angle camera is used to acquire the first image after the processor obtains the photographing instruction, and the main camera is used to acquire the second image after the processor obtains the photographing instruction; or, the main camera is used to acquire the first image after the processor obtains the photographing instruction, and the telephoto camera is used to acquire the second image after the processor obtains the photographing instruction.
  • the images are obtained by the image sensors in the color camera and the black-and-white camera.
  • the image sensor may be, for example, a charge-coupled device (charge-coupled device, CCD), a complementary metal oxide semiconductor (complementary metal oxide semiconductor, CMOS) and the like.
  • the embodiment of the present application also provides a computer-readable storage medium, where computer instructions are stored in the computer-readable storage medium; when the computer instructions are run on an image processing device, the image processing device is caused to execute the method shown in FIG. 3 and/or FIG. 4.
  • the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server or data center by wired (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (such as infrared, radio, or microwave) means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, integrating one or more available media.
  • the available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium, or a semiconductor medium (for example, a solid state disk (solid state disk, SSD)) and the like.
  • the embodiment of the present application also provides a computer program product including computer instructions, which, when run on an image processing device, enables the image processing device to execute the method shown in FIG. 3 and/or FIG. 4 .
  • FIG. 12 is a schematic structural diagram of a chip provided by an embodiment of the present application.
  • the chip shown in FIG. 12 may be a general-purpose processor or a special-purpose processor.
  • the chip includes a processor 401 .
  • the processor 401 is configured to support the image processing apparatus to execute the technical solutions shown in FIG. 3 and/or FIG. 4 .
  • the chip further includes a transceiver 402, and the transceiver 402 is configured to be controlled by the processor 401, and configured to support the communication device to execute the technical solution shown in FIG. 3 and/or FIG. 4 .
  • the chip shown in FIG. 12 may further include: a storage medium 403 .
  • the chip shown in Figure 12 can be implemented using the following circuits or devices: one or more field programmable gate arrays (FPGAs), programmable logic devices (PLDs), controllers, state machines, gate logic, discrete hardware components, any other suitable circuitry, or any combination of circuits capable of performing the various functions described throughout this application.
  • the electronic device, image processing apparatus, computer storage medium, computer program product, and chip provided by the above embodiments of the present application are all used to execute the methods provided above; therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects corresponding to the methods provided above, and details are not repeated here.
  • the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
  • presetting and predefining may be implemented by pre-storing, in a device (for example, an electronic device), corresponding code, a table, or another means that can be used to indicate the related information; the present application does not limit the specific implementation.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

An image processing method and a related device therefor, which relate to the field of image processing. The method comprises: a first camera collecting a first image and a second camera collecting a second image (S10); obtaining a mask block according to the first image; fusing the first image and the second image, so as to obtain a first fused image (S30); according to the mask block, determining a first image block in the first image and determining a second image block in the second image (S40); fusing the first image block and the second image block, so as to obtain a fused image block (S50); and fusing the first fused image and the fused image block, so as to obtain a third image (S60). By means of fusing the content of the clearer second image and the same area in the low-definition first image, missing details are recovered, and fusion is performed a plurality of times, so as to obtain a high-definition image.

Description

Image processing method and related device therefor

This application claims priority to Chinese Patent Application No. 202110923642.9, filed with the China National Intellectual Property Administration on August 12, 2021 and entitled "Image processing method and related device therefor", which is incorporated herein by reference in its entirety.

Technical Field

The present application relates to the field of image processing, and in particular, to an image processing method and a related device therefor.

Background

With the widespread use of electronic devices, taking photos with an electronic device has become a daily behavior in people's lives. Taking a mobile phone as an example of the electronic device, various techniques using multi-frame image synthesis algorithms to improve image quality have emerged, such as multi-frame noise reduction and multi-frame super-resolution.

However, in some high dynamic range (HDR) scenes or backlit scenes, specular reflections on parts of the object surfaces in the scene to be photographed cause the details of these high-brightness regions to be lost, and the related technology cannot handle this situation effectively. Therefore, how to restore the details in the high-brightness regions of an image has become an urgent problem to be solved.

Summary

The present application provides an image processing method and a related device therefor, which can perform image restoration on low-definition regions in an image to recover details, thereby improving user experience.

To achieve the above objective, the present application adopts the following technical solutions:

According to a first aspect, an image processing method is provided, applied to an electronic device including a first camera and a second camera. The method includes:

The electronic device starts a camera and displays a preview interface, where the preview interface includes a first control; a first operation on the first control is detected; in response to the first operation, the first camera captures a first image and the second camera captures a second image, where the definition of the first image is lower than that of the second image, the first image includes a first region, and the first region is a region of the first image whose definition is less than a preset threshold; a mask block is obtained from the first image, where the mask block corresponds to the first region; the first image and the second image are fused to obtain a first fused image; according to the mask block, a first image block is determined in the first image and a second image block is determined in the second image, where the first image block corresponds to the mask block and the second image block corresponds to the mask block; the first image block and the second image block are fused to obtain a fused image block; and the first fused image and the fused image block are fused to obtain a third image.

Exemplarily, the first control may be the shooting key 11.

An embodiment of the present application provides an image processing method: a mask block corresponding to a first region with missing details is determined from a low-definition first image; a first image block corresponding to the mask block is obtained from the first image, and a second image block corresponding to the mask block is obtained from a high-definition, detail-rich second image; the first image block and the second image block are fused to obtain a clear fused image block; and the first fused image, obtained by fusing the first image and the second image, is further fused with the fused image block to repair the missing details and obtain a high-definition third image.

In a possible implementation of the first aspect, obtaining the mask block from the first image includes: inputting the first image into a segmentation model for segmentation and generating the mask block, where the segmentation model is used to segment the first region in the first image and generate the mask block corresponding to the first region.

Exemplarily, the segmentation model may be a fully convolutional neural network.

In this implementation, the segmentation model can finely segment the first image into multiple image regions, which facilitates subsequently repairing, independently, regions of the first image with severe local loss of detail without affecting the image surrounding the first region.
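The embodiment uses a fully convolutional segmentation network to produce the mask block; as a much simpler stand-in, the sketch below marks a blown-out (detail-missing) region by thresholding pixel brightness and returns its bounding box as the "mask block". The threshold and the bounding-box representation are illustrative assumptions.

```python
import numpy as np

def mask_block(img: np.ndarray, thresh: float = 0.95):
    """Return the bounding box (y0, y1, x0, x1) of the blown-out region,
    or None when no mask block is produced."""
    mask = img >= thresh
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return int(ys.min()), int(ys.max()) + 1, int(xs.min()), int(xs.max()) + 1

frame = np.full((8, 8), 0.3)
frame[2:5, 3:6] = 1.0  # simulated specular highlight with lost detail
print(mask_block(frame))  # (2, 5, 3, 6)
```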

In a possible implementation of the first aspect, fusing the first image and the second image to obtain the first fused image includes: fusing the first image and the second image using a first fusion model to obtain the first fused image.

In this implementation, because the second image has a higher definition than the first image, fusing the first image and the second image can improve the overall definition of the image, yielding a first fused image of higher definition.

In a possible implementation of the first aspect, fusing the first image block and the second image block to obtain the fused image block includes: fusing the first image block and the second image block using a second fusion model to obtain the fused image block.

In this implementation, because the definition of the first image is lower than that of the second image, the definition of the first image block is also lower than that of the second image block; the first image block may even contain no detail at all. Therefore, fusing the unclear, detail-missing first image block with the clear, detail-rich second image block yields a fused image block of higher definition.
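The block-fusion idea above can be illustrated with a fixed weighted blend that favors the sharper source. This is a hedged sketch only: the embodiment's second fusion model learns its weighting, whereas the `sharp_weight` value here is an arbitrary assumption.

```python
import numpy as np

def fuse_blocks(first_block, second_block, sharp_weight=0.8):
    """Blend the low-detail block with the sharp block, weighting the
    sharper source more heavily (illustrative fixed weight)."""
    return sharp_weight * second_block + (1.0 - sharp_weight) * first_block

blurry = np.full((4, 4), 1.0)                    # blown-out block, detail lost
sharp = np.linspace(0.0, 1.0, 16).reshape(4, 4)  # detail-rich block
fused = fuse_blocks(blurry, sharp)
print(fused.std() > blurry.std())  # True: detail recovered from the sharp block
```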

In a possible implementation of the first aspect, fusing the first fused image and the fused image block to obtain the third image includes: fusing the first fused image and the fused image block using a third fusion model to obtain the third image.

In this implementation, the first fused image improves the overall definition relative to the first image, and the fused image block improves the local definition relative to the first image block in the first image; fusing the first fused image with the fused image block can further repair local parts of the first fused image, yielding a third image of even higher definition.

In a possible implementation of the first aspect, the method further includes: when no mask block is obtained from the first image, fusing the first image and the second image using the first fusion model to obtain the first fused image.

In this implementation, the absence of a mask block indicates that the first image has no region with particularly severe local loss of detail; however, the overall definition of the first image is still low, so the first image and the second image can be fused to improve the definition of the image.

In a possible implementation of the first aspect, the method further includes: registering the first image and the second image. In this implementation, registration can improve the accuracy of fusing the first image and the second image.

In a possible implementation of the first aspect, the method further includes: registering the first image block and the second image block. In this implementation, registration can improve the accuracy of fusing the first image block and the second image block.

In a possible implementation of the first aspect, registration includes global registration and/or local registration, where global registration means registering all the content of multiple images, and local registration means registering local content of multiple images. In this implementation, global registration can improve the alignment accuracy of all the content of multiple images, and local registration can improve the alignment accuracy of local content of multiple images.

In a possible implementation of the first aspect, the method further includes: training the first fusion model using a training image set with random highlight noise added, to obtain the second fusion model, where the training image set includes original images annotated with mask blocks. In this implementation, because random highlight noise is added during training, when the trained second fusion model later fuses the first image block and the second image block, the higher-definition second image block carries a larger weight than the first image block, so that the resulting fused image block obtains more details from the second image block.

In a possible implementation of the first aspect, the third fusion model is a Laplacian fusion model. In this implementation, when fusing with the Laplacian fusion model, the model can first decompose the first fused image and the fused image block into different spatial frequency bands, and then fuse them separately on each spatial-frequency-band layer; through this frequency-division processing, the first fused image and the fused image block are fused more naturally, with finer transitions at the seams, and the resulting third image is of higher quality.

According to a second aspect, an image processing apparatus is provided, and the apparatus includes units for performing the steps of the first aspect or any possible implementation of the first aspect.

第三方面，提供了一种电子设备，包括摄像头模组、处理器和存储器；摄像头模组，用于采集第一图像和第二图像，第一图像的清晰度低于第二图像的清晰度，第一图像包括第一区域，第一区域为第一图像中清晰度小于预设阈值的区域；存储器，用于存储可在处理器上运行的计算机程序；处理器，用于执行如第一方面或第一方面的任意可能的实现方式中提供的图像处理方法中进行处理的步骤。In a third aspect, an electronic device is provided, including a camera module, a processor, and a memory; the camera module is configured to capture a first image and a second image, where the definition of the first image is lower than that of the second image, the first image includes a first area, and the first area is an area of the first image whose definition is less than a preset threshold; the memory is configured to store a computer program that can run on the processor; and the processor is configured to perform the processing steps of the image processing method provided in the first aspect or any possible implementation of the first aspect.

在第三方面一种可能的实现方式中，摄像头模组包括广角摄像头、主摄摄像头和长焦摄像头；广角摄像头，用于在处理器获取拍照指令后，获取第一图像；主摄摄像头，用于在处理器获取拍照指令后，获取第二图像，或者；主摄摄像头，用于在处理器获取拍照指令后，获取第一图像；长焦摄像头，用于在处理器获取拍照指令后，获取第二图像。In a possible implementation of the third aspect, the camera module includes a wide-angle camera, a main camera, and a telephoto camera; the wide-angle camera is configured to capture the first image after the processor obtains a photographing instruction, and the main camera is configured to capture the second image after the processor obtains the photographing instruction; or, the main camera is configured to capture the first image after the processor obtains the photographing instruction, and the telephoto camera is configured to capture the second image after the processor obtains the photographing instruction.

第四方面，提供了一种芯片，包括：处理器，用于从存储器中调用并运行计算机程序，使得安装有芯片的设备执行如第一方面或第一方面的任意可能的实现方式中提供的图像处理方法中进行处理的步骤。In a fourth aspect, a chip is provided, including a processor configured to call and run a computer program from a memory, so that a device on which the chip is installed performs the processing steps of the image processing method provided in the first aspect or any possible implementation of the first aspect.

第五方面，提供了一种计算机可读存储介质，计算机可读存储介质存储有计算机程序，计算机程序包括程序指令，程序指令当被处理器执行时，使处理器执行如第一方面或第一方面的任意可能的实现方式中提供的图像处理方法中进行处理的步骤。In a fifth aspect, a computer-readable storage medium is provided. The computer-readable storage medium stores a computer program, the computer program includes program instructions, and the program instructions, when executed by a processor, cause the processor to perform the processing steps of the image processing method provided in the first aspect or any possible implementation of the first aspect.

第六方面，提供了一种计算机程序产品，计算机程序产品包括存储了计算机程序的计算机可读存储介质，计算机程序使得计算机执行如第一方面或第一方面的任意可能的实现方式中提供的图像处理方法中进行处理的步骤。In a sixth aspect, a computer program product is provided. The computer program product includes a computer-readable storage medium storing a computer program, and the computer program causes a computer to perform the processing steps of the image processing method provided in the first aspect or any possible implementation of the first aspect.

第二方面至第六方面的有益效果,可以参考上述第一方面的有益效果,在此不再赘述。For the beneficial effects of the second aspect to the sixth aspect, reference may be made to the beneficial effects of the first aspect above, which will not be repeated here.

附图说明Description of drawings

图1是利用相关技术拍摄得到的一张图像的示意图;Fig. 1 is a schematic diagram of an image obtained by using related technologies;

图2是本申请实施例提供的应用场景的示意图;FIG. 2 is a schematic diagram of an application scenario provided by an embodiment of the present application;

图3是本申请实施例提供的一种图像处理方法的示意图;FIG. 3 is a schematic diagram of an image processing method provided in an embodiment of the present application;

图4是本申请实施例提供的另一种图像处理方法的流程示意图;Fig. 4 is a schematic flow chart of another image processing method provided by the embodiment of the present application;

图5是本申请实施例提供的分割模型处理图像的示意图;FIG. 5 is a schematic diagram of image processing by a segmentation model provided in an embodiment of the present application;

图6是本申请实施例提供的得到掩膜块时处理图像的示意图;FIG. 6 is a schematic diagram of image processing when obtaining a mask block provided by an embodiment of the present application;

图7是本申请实施例提供的拍照预览时变焦的显示界面示意图;FIG. 7 is a schematic diagram of a display interface for zooming when taking pictures and previewing provided by an embodiment of the present application;

图8是本申请实施例提供的拍照预览时多摄变焦的进程示意图;FIG. 8 is a schematic diagram of the process of multi-camera zooming during photo preview provided by the embodiment of the present application;

图9是一种适用于本申请的装置的硬件系统的示意图;Fig. 9 is a schematic diagram of a hardware system applicable to the device of the present application;

图10是一种适用于本申请的装置的软件系统的示意图;Fig. 10 is a schematic diagram of a software system applicable to the device of the present application;

图11为本申请实施例提供的一种图像处理装置的结构示意图;FIG. 11 is a schematic structural diagram of an image processing device provided by an embodiment of the present application;

图12为本申请实施例提供的一种芯片的结构示意图。FIG. 12 is a schematic structural diagram of a chip provided by an embodiment of this application.

具体实施方式Detailed ways

下面将结合附图,对本申请中的技术方案进行描述。The technical solution in this application will be described below with reference to the accompanying drawings.

在本申请实施例的描述中，除非另有说明，“/”表示或的意思，例如，A/B可以表示A或B；本文中的“和/或”仅仅是一种描述关联对象的关联关系，表示可以存在三种关系，例如，A和/或B，可以表示：单独存在A，同时存在A和B，单独存在B这三种情况。另外，在本申请实施例的描述中，“多个”是指两个或多于两个。In the description of the embodiments of this application, unless otherwise specified, "/" means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association relationship between associated objects, and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. In addition, in the description of the embodiments of this application, "multiple" means two or more.

以下,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。在本实施例的描述中,除非另有说明,“多个”的含义是两个或两个以上。Hereinafter, the terms "first" and "second" are used for descriptive purposes only, and cannot be understood as indicating or implying relative importance or implicitly specifying the quantity of indicated technical features. Thus, a feature defined as "first" and "second" may explicitly or implicitly include one or more of these features. In the description of this embodiment, unless otherwise specified, "plurality" means two or more.

首先,对本申请实施例中的部分用语进行解释说明,以便于本领域技术人员理解。First of all, some terms used in the embodiments of the present application are explained to facilitate the understanding of those skilled in the art.

1、RGB(red,green,blue)颜色空间,指的是一种与人的视觉系统结构相关的颜色模型。根据人眼睛的结构,将所有颜色都当作是红色、绿色和蓝色的不同组合。1. RGB (red, green, blue) color space refers to a color model related to the structure of the human visual system. According to the structure of the human eye, all colors are seen as different combinations of red, green and blue.

2、像素值,指的是位于RGB颜色空间的彩色图像中每个像素对应的一组颜色分量。例如,每个像素对应一组三基色分量,其中,三基色分量分别为红色分量R、绿色分量G和蓝色分量B。2. A pixel value refers to a set of color components corresponding to each pixel in a color image located in the RGB color space. For example, each pixel corresponds to a group of three primary color components, wherein the three primary color components are red component R, green component G and blue component B respectively.

3、配准(image registration),指的是在同一区域内以不同成像手段所获得的不同图像的地理坐标的匹配。其中,包括几何纠正、投影变换与统一比例尺三方面的处理。3. Image registration refers to the matching of geographic coordinates of different images obtained by different imaging methods in the same area. Among them, it includes the processing of three aspects: geometric correction, projection transformation and unified scale.

4、视场角(field of view,FOV),用于指示摄像头所能拍摄到的最大的角度范围。若待拍摄物体处于这个角度范围内,该待拍摄物体便会被摄像头捕捉到。若待拍摄物体处于这个角度范围之外,该待拍摄物体便不会被摄像头捕捉到。4. Field of view (FOV), which is used to indicate the maximum angle range that the camera can capture. If the object to be photographed is within the angle range, the object to be photographed will be captured by the camera. If the object to be photographed is outside the angle range, the object to be photographed will not be captured by the camera.

通常,摄像头的视场角越大,则拍摄范围就越大,焦距就越短;而摄像头的视场角越小,则拍摄范围就越小,焦距就越长。因此,摄像头因视场角的不同可以被划分主摄像头、广角摄像头和长焦摄像头。其中,广角摄像头的视场角相对于主摄像头的视场角较大,焦距较小,适合近景拍摄;而长焦摄像头的视场角相对于主摄像头的视场角较小,焦距较长,适合远景拍摄。Generally, the larger the field of view of the camera, the larger the shooting range and the shorter the focal length; while the smaller the field of view of the camera, the smaller the shooting range and the longer the focal length. Therefore, the camera can be divided into a main camera, a wide-angle camera, and a telephoto camera due to different field of view angles. Among them, the field of view of the wide-angle camera is larger than that of the main camera, and the focal length is smaller, which is suitable for close-up shooting; while the field of view of the telephoto camera is smaller than that of the main camera, and the focal length is longer. Suitable for remote shooting.
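The inverse relationship between focal length and field of view stated above follows from the thin-lens approximation fov = 2·arctan(d / (2f)), where d is the sensor dimension. The sketch below uses illustrative 35 mm-equivalent focal lengths; the specific values are assumptions for illustration, not parameters from this application:

```python
import math

def field_of_view_deg(sensor_size_mm, focal_length_mm):
    # thin-lens approximation: fov = 2 * arctan(d / (2 * f))
    return math.degrees(2.0 * math.atan(sensor_size_mm / (2.0 * focal_length_mm)))

# longer focal length -> narrower field of view (same 36 mm sensor width)
wide = field_of_view_deg(36.0, 16.0)   # wide-angle, roughly 97 degrees
main = field_of_view_deg(36.0, 27.0)   # main camera, roughly 67 degrees
tele = field_of_view_deg(36.0, 85.0)   # telephoto, roughly 24 degrees
```

This is why the wide-angle camera covers a larger shooting range at a shorter focal length, while the telephoto camera covers a smaller range at a longer focal length.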

5、逆光，逆光是一种由于被摄主体恰好处于光源和相机之间的状况。在该状态下，容易造成被摄主体曝光不充分的问题，因此，在一般情况下用户应尽量避免在逆光条件下拍摄物体。5. Backlighting. Backlighting is a situation in which the subject is located exactly between the light source and the camera. In this situation, the subject is likely to be underexposed; therefore, in general, users should try to avoid shooting subjects under backlight conditions.

以上是对本申请实施例所涉及的名词的简单介绍,以下不再赘述。The above is a brief introduction to the nouns involved in the embodiments of the present application, and details will not be repeated below.

随着电子设备的广泛使用,使用电子设备进行拍照已经成为人们生活中的一种日常行为方式。以电子设备为手机为例,随之出现了各种多帧图像合成算法以提升图像质量的技术,例如:多帧降噪、多帧超分辨率等。With the widespread use of electronic devices, taking pictures with electronic devices has become a daily behavior in people's lives. Taking electronic devices as mobile phones as an example, various multi-frame image synthesis algorithms have emerged to improve image quality, such as: multi-frame noise reduction, multi-frame super-resolution, etc.

但是，在一些高动态范围(high dynamic range,HDR)场景中或者逆光场景中，由于待拍摄场景中的物体表面的部分区域产生了高光反射，导致这些高亮度区域细节丢失，相关技术却无法有效处理这种情况。However, in some high dynamic range (HDR) scenes or backlit scenes, specular highlights on parts of the surfaces of objects in the scene to be captured cause details in these high-brightness areas to be lost, and the related technologies cannot handle this situation effectively.

例如，图1是利用相关技术拍摄得到的一张图像。如图1所示，待拍摄场景中有3个人在阳光下等待用户进行拍照，由于阳光照到人脸区域，并且，阳光非常强烈，导致人脸区域产生了高光反射，人脸区域即为高亮度区域。此时，用户利用相关技术对该3个人进行拍摄时，拍摄出的图像丢失了人脸区域的细节，从而导致拍摄出的图像质量较差，看不清人脸区域的内容，影响用户体验。For example, Fig. 1 is an image captured using related technologies. As shown in Fig. 1, three people in the scene to be captured are waiting in the sun for the user to take a photo. Because the sunlight falls on their faces and is very strong, the face areas produce specular highlights and become high-brightness areas. When the user photographs the three people using the related technologies, the captured image loses the details of the face areas; as a result, the captured image is of poor quality, the content of the face areas cannot be seen clearly, and the user experience is affected.

有鉴于此，本申请实施例提供了一种图像处理方法，通过采集清晰度不同的第一图像和第二图像，利用较清晰的第二图像中对应高亮度区域的内容，与低清晰的第一图像中的高亮度区域进行融合，从而可以恢复出第一图像中高亮度区域中缺失的细节，再通过多次融合得到质量较高的拍摄图像，提升用户体验。In view of this, an embodiment of this application provides an image processing method: a first image and a second image of different definition are captured, and the content corresponding to the high-brightness area in the clearer second image is fused with the high-brightness area in the lower-definition first image, so that the missing details in the high-brightness area of the first image can be recovered; a high-quality captured image is then obtained through multiple fusions, improving the user experience.

首先对本申请实施例的应用场景进行简要说明。Firstly, the application scenarios of the embodiments of the present application are briefly described.

图2是本申请实施例提供的一种应用场景的示意图。本申请提供的图像处理方法可以应用于复原图像中的高亮度区域的细节。Fig. 2 is a schematic diagram of an application scenario provided by an embodiment of the present application. The image processing method provided in this application can be applied to restore the details of the high-brightness area in the image.

示例性的,如图2中的(a)所示,为电子设备的图形用户界面(graphical user interface,GUI)。当电子设备检测到用户点击界面上的相机应用的图标的操作后,可以启动相机应用,显示如图2中的(b)所示的另一GUI,该GUI可以称为预览界面。Exemplarily, as shown in (a) in FIG. 2, it is a graphical user interface (graphical user interface, GUI) of the electronic device. After the electronic device detects that the user clicks the icon of the camera application on the interface, the camera application can be started, and another GUI as shown in (b) in FIG. 2 is displayed, which can be called a preview interface.

该预览界面上可以包括取景窗口21。在预览状态下，该取景窗口21内可以实时显示预览图像。该预览界面还可以包括多种拍摄模式选项以及第一控件，即，拍摄键11。该多种拍摄模式选项例如包括：拍照模式、录像模式等，拍摄键11用于指示当前拍摄模式为拍照模式、录像模式或者为其他模式。其中，相机应用打开时一般默认处于拍照模式。The preview interface may include a viewfinder window 21. In the preview state, a preview image can be displayed in the viewfinder window 21 in real time. The preview interface may also include multiple shooting mode options and a first control, that is, the shooting key 11. The multiple shooting mode options include, for example, a photo mode and a video mode, and the shooting key 11 indicates whether the current shooting mode is the photo mode, the video mode, or another mode. When the camera application is opened, it is generally in the photo mode by default.

示例性的,如图2中的(b)所示,当电子设备启动相机应用后,电子设备运行图像处理方法对应的程序,响应于用户对拍摄键11的点击操作,获取并存储拍摄图像。Exemplarily, as shown in (b) of FIG. 2 , when the electronic device starts the camera application, the electronic device runs the program corresponding to the image processing method, and acquires and stores the captured image in response to the user's click operation on the shooting key 11 .

应理解，待拍摄场景中有3个人，由于阳光照在3个人的脸部区域，并且阳光非常强烈，导致3个人的脸部区域产生高光反射，从而在对3个人进行拍照时，利用相关技术通常无法获取到3个人的脸部特征。但是，通过本申请的图像处理方法可以检测出高亮的脸部区域，进而对脸部区域的细节进行复原，得到高质量的拍摄图像。It should be understood that there are three people in the scene to be captured. Because strong sunlight shines on their faces, the face areas of the three people produce specular highlights, so when the three people are photographed using the related technologies, their facial features usually cannot be captured. However, the image processing method of this application can detect the highlighted face areas and then restore the details of the face areas to obtain a high-quality captured image.

应理解,上述为对应用场景的举例说明,并不对本申请的应用场景进行任何限制。It should be understood that the foregoing is an illustration of an application scenario, and does not impose any limitation on the application scenario of the present application.

下面结合说明书附图,对本申请实施例所提供的图像处理方法进行详细介绍。The image processing method provided by the embodiment of the present application will be described in detail below with reference to the drawings in the description.

图3为本申请实施例提供的图像处理方法的流程示意图。如图3所示,该图像处理方法包括以下S10~S60。FIG. 3 is a schematic flowchart of an image processing method provided by an embodiment of the present application. As shown in FIG. 3 , the image processing method includes the following S10-S60.

电子设备启动相机，显示如图2中的(b)所示的预览界面，预览界面包括第一控件，该第一控件可以为拍摄键11。The electronic device starts the camera and displays the preview interface shown in (b) of FIG. 2; the preview interface includes a first control, and the first control may be the shooting key 11.

S10、当电子设备检测到用户对拍摄键11的第一操作后,响应于第一操作,第一摄像头采集第一图像,第二摄像头采集第二图像。S10. After the electronic device detects the first operation of the camera key 11 by the user, in response to the first operation, the first camera captures the first image, and the second camera captures the second image.

其中,第一图像和第二图像为对相同的待拍摄场景拍摄的图像。第一图像的清晰度低于第二图像的清晰度,第一图像包括第一区域,第一区域为第一图像中清晰度小于预设阈值的区域。Wherein, the first image and the second image are images captured for the same scene to be captured. The definition of the first image is lower than that of the second image, and the first image includes a first area, where the first area is an area in the first image whose resolution is less than a preset threshold.

其中,预设阈值可以根据需要进行设置和修改,本申请实施例对此不进行任何限制。Wherein, the preset threshold can be set and modified according to needs, which is not limited in this embodiment of the present application.

应理解,第一图像和第二图像均为拜耳格式图像,也可以称为位于RAW域的图像。It should be understood that both the first image and the second image are Bayer format images, and may also be referred to as images in the RAW domain.

应理解，第一区域用于表示第一图像中不清晰、缺失细节的区域。示例性的，第一区域可以是指获取第一图像时，由于光照强烈导致缺失细节的高亮度区域，或者，也可以是指获取第一图像时，缺失细节的关键区域，例如，人脸、人体、五官等。It should be understood that the first area represents an area of the first image that is unclear and missing details. Exemplarily, the first area may be a high-brightness area missing details due to strong illumination when the first image is captured, or a key area missing details when the first image is captured, for example, a human face, a human body, or facial features.

S20、根据第一图像,确定是否得到掩膜(mask)块,掩膜块与第一区域对应。S20. According to the first image, determine whether a mask (mask) block is obtained, and the mask block corresponds to the first area.

应理解,掩膜块指的是对第一图像中的第一区域对应的掩膜图像。通过对第一图像中的缺失细节的第一区域进行替换或融合,来控制对第一图像中需要恢复细节的第一区域的处理。It should be understood that the mask block refers to a mask image corresponding to the first region in the first image. The processing of the first region in the first image where details need to be restored is controlled by replacing or merging the first region in the first image where details are missing.

S30、无论是否得到掩膜块,都将第一图像和第二图像进行融合,得到第一融合图像。S30. Regardless of whether the mask block is obtained, fuse the first image and the second image to obtain a first fused image.

应理解,由于第二图像相对于第一图像清晰度高,由此,将第一图像和第二图像融合后,可以提高图像整体的清晰度,得到较高清晰度的第一融合图像。It should be understood that since the definition of the second image is higher than that of the first image, the definition of the overall image can be improved after the fusion of the first image and the second image, and a first fused image with higher definition can be obtained.

S40、若是,则根据掩膜块,确定第一图像中的第一图像块,确定第二图像中的第二图像块,第一图像块与掩膜块对应,第二图像块与掩膜块对应。S40. If so, then according to the mask block, determine the first image block in the first image, determine the second image block in the second image, the first image block corresponds to the mask block, and the second image block corresponds to the mask block correspond.

例如，用户使用电子设备对三个同事进行拍照时，第一区域可以是指该三个同事分别被强光照射导致看不清面部特征的人脸区域，生成的掩膜块与第一区域对应，用于表示人脸区域。第一图像块即从第一图像中确定出的人脸区域，第二图像块即从第二图像中确定出的人脸区域。For example, when a user uses the electronic device to photograph three colleagues, the first areas may be the face areas where strong light on the three colleagues makes their facial features unclear; the generated mask blocks correspond to the first areas and represent the face areas. The first image block is the face area determined from the first image, and the second image block is the face area determined from the second image.

S50、将第一图像块和第二图像块进行融合,得到融合图像块。S50. Fusion the first image block and the second image block to obtain a fused image block.

应理解，由于第一图像的清晰度低于第二图像的清晰度，所以第一图像块的清晰度也低于第二图像块的清晰度，甚至于，第一图像块中没有任何细节，由此，将不清晰、缺失细节的第一图像块与清晰的、细节丰富的第二图像块进行融合，可以得到较高清晰度的融合图像块。It should be understood that because the definition of the first image is lower than that of the second image, the definition of the first image block is also lower than that of the second image block; the first image block may even contain no detail at all. Therefore, fusing the unclear, detail-missing first image block with the clear, detail-rich second image block yields a fused image block of higher definition.

S60、将第一融合图像和融合图像块进行融合,得到第三图像。S60. Fusion the first fused image and the fused image block to obtain a third image.

应理解，第一融合图像相对于第一图像进行了整体上的清晰度的提高，融合图像块相对于第一图像中的第一图像块进行了局部的清晰度的提高，将第一融合图像和融合图像块进行融合，可以对第一融合图像中的局部进一步进行修复，得到更高清晰度的第三图像。It should be understood that the first fused image has improved overall definition relative to the first image, and the fused image block has improved local definition relative to the first image block in the first image. Fusing the first fused image with the fused image block can further repair the local area of the first fused image, yielding a third image of higher definition.

本申请实施例提供一种图像处理方法，通过从低清晰度的第一图像中确定出缺失细节的第一区域所对应的掩膜块，然后从第一图像中获取掩膜块对应的第一图像块，以及从高清晰度、包含丰富细节的第二图像中获取掩膜块对应的第二图像块，并将第一图像块和第二图像块进行融合，得到清晰的融合图像块；再将第一图像、第二图像融合成的第一融合图像与融合图像块进一步进行融合，以修复缺失的细节，得到高清晰度的第三图像。An embodiment of this application provides an image processing method: a mask block corresponding to the detail-missing first area is determined from the low-definition first image; a first image block corresponding to the mask block is obtained from the first image, and a second image block corresponding to the mask block is obtained from the high-definition, detail-rich second image; the first image block and the second image block are fused to obtain a clear fused image block; and the first fused image, obtained by fusing the first image and the second image, is further fused with the fused image block to repair the missing details and obtain a high-definition third image.
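The S10–S60 flow can be sketched end to end in numpy. This is an illustrative sketch under stated assumptions: the weighted-average `fuse` is a placeholder for the fusion models described later (not the disclosed models), and taking the bounding box of a boolean mask is one assumed way of obtaining the image blocks corresponding to a mask block:

```python
import numpy as np

def fuse(a, b, w=0.5):
    # placeholder for the learned fusion models: plain weighted average
    return w * a + (1 - w) * b

def process(first, second, mask):
    # Sketch of S10-S60. `first`/`second` are aligned float images; `mask`
    # is a boolean map of the first region (True = detail missing), or None.
    fused_global = fuse(first, second)              # S30: first fused image
    if mask is None or not mask.any():              # no mask block obtained
        return fused_global
    ys, xs = np.nonzero(mask)                       # S40: bounding box of
    y0, y1 = ys.min(), ys.max() + 1                 #      the mask block
    x0, x1 = xs.min(), xs.max() + 1
    block_a = first[y0:y1, x0:x1]                   # first image block
    block_b = second[y0:y1, x0:x1]                  # second image block
    fused_block = fuse(block_a, block_b, w=0.2)     # S50: favour sharper image
    out = fused_global.copy()                       # S60: merge block back in
    out[y0:y1, x0:x1] = fused_block
    return out
```

In the actual method the final merge of S60 is done with a Laplacian fusion model rather than a hard paste, so the block boundary blends smoothly.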

图4是本申请实施例提供的另一种图像处理方法的流程示意图。FIG. 4 is a schematic flowchart of another image processing method provided by an embodiment of the present application.

如图4所示,该图像处理方法10包括:S110至S190。As shown in FIG. 4 , the image processing method 10 includes: S110 to S190.

S110、获取第一图像和第二图像。第一图像和第二图像为对相同的待拍摄场景拍摄的图像。第一图像的清晰度低于第二图像的清晰度。S110. Acquire a first image and a second image. The first image and the second image are images captured for the same scene to be captured. The resolution of the first image is lower than that of the second image.

应理解，第一图像和第二图像是电子设备通过摄像头拍摄得到的图像，或者，第一图像和第二图像还可以是从电子设备内部获取的图像，例如，电子设备中存储的图像，或者，电子设备从云端获取的图像。其中，第一图像和第二图像均为拜耳格式图像。It should be understood that the first image and the second image are images captured by a camera of the electronic device, or they may be images obtained from inside the electronic device, for example, images stored in the electronic device or images obtained by the electronic device from the cloud. Both the first image and the second image are Bayer-format images.

应理解,当利用摄像头获取第一图像和第二图像时,通常两个图像中对应的清晰度低的图像称为第一图像;而清晰度高的图像称为第二图像。由于清晰度高低是相对的,所以,第一图像和第二图像也是相对的。It should be understood that when the camera is used to acquire the first image and the second image, generally, the corresponding low-resolution image of the two images is called the first image; and the corresponding high-resolution image is called the second image. Since the definition is relative, the first image and the second image are also relative.

例如，当图像a对应的清晰度比图像b对应的清晰度低时，利用本申请实施例提供的图像处理方法对图像a和图像b进行图像处理时，图像a即为第一图像，图像b即为第二图像。For example, when the definition of image a is lower than that of image b, and the image processing method provided in the embodiments of this application is used to process image a and image b, image a is the first image and image b is the second image.

当图像b对应的清晰度比图像c对应的清晰度低时，利用本申请实施例提供的图像处理方法对图像b和图像c进行图像处理时，图像b即为第一图像，图像c即为第二图像。When the definition of image b is lower than that of image c, and the image processing method provided in the embodiments of this application is used to process image b and image c, image b is the first image and image c is the second image.

例如,第一图像为广角摄像头所采集的图像,第二图像为长焦摄像头所采集的图像,广角摄像头和长焦摄像头在同一时刻采集图像;同理,第一图像为广角摄像头所采集的图像,第二图像为超广角摄像头所采集的图像,广角摄像头和超广角摄像头在同一时刻采集图像。For example, the first image is an image collected by a wide-angle camera, the second image is an image collected by a telephoto camera, and the wide-angle camera and the telephoto camera collect images at the same time; similarly, the first image is an image collected by a wide-angle camera , the second image is an image collected by the ultra-wide-angle camera, and the wide-angle camera and the ultra-wide-angle camera collect images at the same time.

应理解,第一图像可以是具有细节缺失区域的图像,通过本申请实施例的图像处理方法可以将第一图像中缺失的细节进行恢复。It should be understood that the first image may be an image with a region of missing details, and the missing details in the first image may be restored through the image processing method of the embodiment of the present application.

S120、将第一图像输入分割模型进行分割，若第一图像可分割，则根据分割的第一区域生成对应的掩膜块，若不可分割，则说明第一图像中没有包括第一区域，由此，得不到掩膜块。S120. Input the first image into a segmentation model for segmentation. If the first image can be segmented, a corresponding mask block is generated according to the segmented first region; if it cannot be segmented, the first image does not include the first region, and therefore no mask block is obtained.

其中,分割模型用于对第一图像中的第一区域进行分割,并生成与第一区域对应的掩膜块。第一区域用于表示第一图像中清晰度小于预设阈值的区域,也即,缺失一定细节的区域。Wherein, the segmentation model is used to segment the first region in the first image, and generate a mask block corresponding to the first region. The first area is used to represent an area in the first image whose sharpness is less than a preset threshold, that is, an area lacking certain details.

可选地,分割模型可以为:全卷积神经网络(fully convolutional networks,FCN)等。Optionally, the segmentation model may be: a fully convolutional neural network (fully convolutional networks, FCN) and the like.

应理解，分割模型可以对第一图像进行分割，得到分割后的多个图像区域，该多个图像区域包括一些包含细节的区域，也可能包括一些缺失细节的区域。当第一图像包含1个或多个缺失细节的区域时，分割模型可以分割出该1个或多个缺失细节的区域，并生成对应的1个或多个掩膜块。而当第一图像没有包含缺失细节的区域时，分割模型则不会分割出缺失细节的区域，更不会生成对应的掩膜块。It should be understood that the segmentation model may segment the first image into multiple image regions; these regions include some regions containing details and possibly some regions missing details. When the first image contains one or more regions missing details, the segmentation model can segment out the one or more regions missing details and generate the corresponding one or more mask blocks. When the first image contains no region missing details, the segmentation model does not segment out any detail-missing region, and no corresponding mask block is generated.

应理解,第一区域可以是指获取第一图像时,例如在HDR场景中,由于光照强烈导致缺失细节的高亮度区域,或者,也可以是指获取第一图像时,缺失细节的关键区域,例如,人脸、人体、五官等。It should be understood that the first area may refer to a high-brightness area where details are missing due to strong illumination when the first image is acquired, for example, in an HDR scene, or may also refer to a key area where details are missing when the first image is acquired, For example, human face, human body, facial features, etc.
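As a rough classical stand-in for the segmentation model (which the text describes as a neural network such as an FCN), a high-brightness first region could be flagged by thresholding luminance. This sketch is an assumption for illustration only, not the segmentation method disclosed here:

```python
import numpy as np

def highlight_mask(img, threshold=0.9):
    # flag near-saturated pixels as the detail-missing first region;
    # `img` is float in [0, 1], either HxW luminance or HxWx3 RGB
    if img.ndim == 3:
        # Rec. 601 luma weights for an RGB input
        lum = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
    else:
        lum = img
    return lum > threshold
```

A learned model additionally captures semantic regions (faces, bodies, facial features), which a plain brightness threshold cannot.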

还应理解,第一区域的数量与掩膜块的数量相同,每个第一区域对应的掩膜块的视场角范围与该第一区域对应的视场角范围相同。It should also be understood that the number of first regions is the same as the number of mask blocks, and the range of viewing angles of the mask blocks corresponding to each first region is the same as the range of viewing angles corresponding to the first region.

示例性的，图5示出了本申请实施例提供的分割模型处理图像的示意图。如图5所示，将第一图像输入分割模型，由于第一图像中包含有3个被强光照射导致缺失细节的人脸区域，因此，分割模型可以分割出3个第一区域并生成对应的3个掩膜块。例如图1中的人脸区域对应的3个掩膜块。Exemplarily, FIG. 5 shows a schematic diagram of the segmentation model processing an image according to an embodiment of this application. As shown in FIG. 5, the first image is input into the segmentation model. Because the first image contains three face areas missing details due to strong illumination, the segmentation model can segment out three first areas and generate three corresponding mask blocks, for example, the three mask blocks corresponding to the face areas in FIG. 1.

示例性的,掩膜块对应的每个像素的像素值均为0。Exemplarily, the pixel value of each pixel corresponding to the mask block is 0.

S130、当得到掩膜块或者没得到掩膜块时,都将第一图像和第二图像进行配准。S130. When the mask block is obtained or not obtained, register the first image and the second image.

应理解,当没得到掩膜块时,说明第一图像虽然清晰度低,但没有包括第一区域,即没有低于预设阈值、严重缺失细节的区域。It should be understood that when no mask blocks are obtained, it means that although the first image has low resolution, it does not include the first region, that is, there is no region below the preset threshold and seriously missing details.

可选地,作为一种示例,该配准可以为全局配准。Optionally, as an example, the registration may be global registration.

全局配准用于表示将多个图像中的全部内容进行配准，也就是说，此处可以将第一图像和第二图像中的全部内容进行配准，使得第一图像和第二图像在后续融合时，能更精准对应。Global registration refers to registering the entire content of multiple images; that is, the entire content of the first image and the second image can be registered here, so that the first image and the second image correspond more accurately in the subsequent fusion.

可选地,作为一种示例,该配准可以包括全局配准和局部配准。Optionally, as an example, the registration may include global registration and local registration.

局部配准用于表示将多个图像中的局部内容进行配准。示例性的，利用分割模型虽然没有从第一图像中分割出第一区域，但是，可以分割出一些其他区域，例如，人体区域和除人体区域之外的背景区域，由此，可以将第一图像中的人体区域与第二图像中的人体区域进行局部配准，而不对第一图像中的背景区域和第二图像中的背景区域进行配准。Local registration refers to registering local content of multiple images. Exemplarily, although the segmentation model does not segment the first region from the first image, it can segment some other regions, for example, a human body region and a background region other than the human body region. Thus, the human body region in the first image can be locally registered with the human body region in the second image, without registering the background region in the first image with the background region in the second image.

示例性的，可以先进行全局配准，再进行局部配准，或者，也可以先进行局部配准，再进行全局配准，配准顺序可以根据需要进行设置和调整，本申请实施例对此不进行限制。Exemplarily, global registration may be performed first and then local registration, or local registration may be performed first and then global registration; the registration order can be set and adjusted as required, and is not limited in the embodiments of this application.

应理解,将第一图像和第二图像进行配准后,可以提高后续融合时的准确度,融合后的效果更好。It should be understood that after the first image and the second image are registered, the accuracy of the subsequent fusion can be improved, and the fusion effect is better.
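One minimal way to perform global registration, restricted to pure translation, is phase correlation on the Fourier transforms of the two images. The actual method may use richer motion models (for example feature-based homography estimation), so this is an illustrative sketch only:

```python
import numpy as np

def estimate_shift(ref, moving):
    # phase correlation: the peak of the normalized cross-power spectrum
    # gives the (dy, dx) translation that maps `moving` onto `ref`
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(moving))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2:       # wrap into a signed shift
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

def apply_shift(img, dy, dx):
    # circular shift is adequate for this illustration
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)
```

After the estimated shift is applied, corresponding pixels of the two images line up, which is what makes the subsequent fusion more accurate.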

S140、将配准后的第一图像和第二图像,利用第一融合模型进行融合,得到第一融合图像。S140. Fusion the registered first image and the second image by using the first fusion model to obtain a first fusion image.

应理解,配准后的第一图像和第二图像的清晰度依然不同,第一融合模型可以对不同清晰度的图像进行融合。其中,第一融合模型可以为VGG net模型。It should be understood that, after registration, the definition of the first image and the second image are still different, and the first fusion model can fuse images with different resolutions. Wherein, the first fusion model may be a VGG net model.

应理解，由于第二图像相对于第一图像清晰度较高，由此，将配准后的第一图像和第二图像进行融合后，可以提升第一图像中对应第二图像视场角范围中的内容的清晰度，从而得到清晰度较高的第一融合图像。It should be understood that because the second image has higher definition than the first image, after the registered first image and second image are fused, the definition of the content of the first image within the field-of-view range corresponding to the second image can be improved, yielding a first fused image of higher definition.

其中,第一融合图像的视场角范围与第一图像的视场角范围相同。Wherein, the field angle range of the first fused image is the same as the field angle range of the first image.

当然,也可以对第一图像和第二图像不配准,将获取的第一图像和第二图像,利用第一融合模型进行融合,得到第一融合图像。Of course, the first image and the second image may not be registered, and the acquired first image and the second image may be fused using the first fusion model to obtain the first fused image.

S150、当得到掩膜块时,根据掩膜块,确定第一图像中的第一图像块和确定第二图像中的第二图像块。S150. When the mask block is obtained, determine a first image block in the first image and determine a second image block in the second image according to the mask block.

应理解,当利用分割模型从第一图像中确定出1个掩膜块时,根据该掩膜块,可以从第一图像中确定出与该掩膜块对应的1个第一图像块。其中,第一图像块和该掩膜块的视场角范围相同。It should be understood that when one mask block is determined from the first image by using the segmentation model, one first image block corresponding to the mask block can be determined from the first image according to the mask block. Wherein, the field angle ranges of the first image block and the mask block are the same.

同理,根据该掩膜块,可以从第二图像中确定出对应的1个第二图像块。其中,第二图像块和该掩膜块的视场角范围相同。Similarly, according to the mask block, a corresponding second image block can be determined from the second image. Wherein, the field angle range of the second image block is the same as that of the mask block.

基于此，当利用分割模型从第一图像中确定出多个掩膜块时，根据该多个掩膜块，可以从第一图像中确定出与多个掩膜块中的每一个掩膜块对应的一个第一图像块，也就是说，可以从第一图像中确定出相同数量的多个第一图像块，且第一图像块与掩膜块一一对应。Based on this, when a plurality of mask blocks are determined from the first image by using the segmentation model, one first image block corresponding to each of the plurality of mask blocks can be determined from the first image according to the plurality of mask blocks; that is to say, the same number of first image blocks can be determined from the first image, and the first image blocks are in one-to-one correspondence with the mask blocks.

同理，根据该多个掩膜块，也可以从第二图像中确定出与多个掩膜块中的每一个掩膜块对应的一个第二图像块，也就是说，可以从第二图像中确定出相同数量的多个第二图像块，且第二图像块与掩膜块、第一图像块一一对应。Similarly, according to the plurality of mask blocks, one second image block corresponding to each of the plurality of mask blocks can also be determined from the second image; that is to say, the same number of second image blocks can be determined from the second image, and the second image blocks are in one-to-one correspondence with the mask blocks and the first image blocks.
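The step of determining matching image blocks from one mask block (S150) can be sketched as follows. The patent does not fix a cropping method; bounding-box extraction over a binary mask is an assumed, illustrative realisation, and the function and variable names are hypothetical.

```python
import numpy as np

def crop_by_mask(first, second, mask):
    """Cut matching image blocks out of the first and second images.

    `mask` is one binary mask block, in the same coordinates as both
    images (as after registration). The two crops share the mask's
    field-of-view range and form one group of image blocks.
    """
    ys, xs = np.nonzero(mask)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    first_block = first[y0:y1, x0:x1]
    second_block = second[y0:y1, x0:x1]
    return first_block, second_block

first = np.arange(64).reshape(8, 8)   # stand-in for the first image
second = first * 2                    # stand-in for the second image
mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 3:6] = True                 # one mask block (first region)
fb, sb = crop_by_mask(first, second, mask)
```

With several mask blocks, calling this once per mask yields the one-to-one correspondence between mask blocks, first image blocks, and second image blocks described above.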

S160、将第一图像块和第二图像块进行配准。S160. Register the first image block and the second image block.

应理解,根据掩膜块,获取到的第一图像块和第二图像块一一对应,对应的第一图像块和第二图像块可以组成一组图像块。由此,将第一图像块和第二图像块进行配准,指的是将每组图像块中的第一图像块和第二图像块进行配准。It should be understood that, according to the mask block, there is a one-to-one correspondence between the acquired first image block and the second image block, and the corresponding first image block and the second image block may form a group of image blocks. Therefore, registering the first image block and the second image block refers to registering the first image block and the second image block in each group of image blocks.

可选地,作为一种示例,该配准可以为全局配准。Optionally, as an example, the registration may be global registration.

将每组图像块中的第一图像块和第二图像块进行全局配准。此处,指的是将每组图像块中的第一图像块的全部内容与第二图像块的全部内容进行配准。The first image block and the second image block in each group of image blocks are globally registered. Here, it refers to registering all contents of the first image block and all contents of the second image block in each group of image blocks.

可选地,作为另一种示例,该配准可以包括全局配准和局部配准。Optionally, as another example, the registration may include global registration and local registration.

示例性的，将每组图像块中的第一图像块和第二图像块先进行全局配准，再进行局部配准。此处，局部配准指的是将每组图像块中的第一图像块的局部内容与第二图像块中的局部内容进行配准。例如，第一图像块和第二图像块均包括一个人脸，则将该人脸中的眼睛分别在第一图像块和第二图像块中所对应的区域进行配准，将人脸中的嘴巴分别在第一图像块和第二图像块中所对应的区域进行配准。Exemplarily, the first image block and the second image block in each group of image blocks are first globally registered, and then locally registered. Here, local registration refers to registering the local content of the first image block with the local content of the second image block in each group of image blocks. For example, if both the first image block and the second image block include a human face, the regions corresponding to the eyes of the face in the first image block and in the second image block are registered, and the regions corresponding to the mouth of the face in the first image block and in the second image block are registered.

应理解,全局配准和局部配准是相对的,当图像面积减小时,进行全局配准和局部配准的精度更高、效果更好。It should be understood that global registration and local registration are relative, and when the image area is reduced, the accuracy and effect of global registration and local registration are higher.

此外，还需要说明的是，由于对第一图像和第二图像进行局部配准时，例如对第一图像和第二图像中的人体区域进行局部配准时，相应的，旁边的背景区域会因此受到影响，反而变得不准，产生误差。由此，为了避免对背景区域产生不必要的影响，本申请将第一图像块从第一图像中提取出，将第二图像块从第二图像中提取出，然后，再将第一图像块和第二图像块进行全局配准，这样就将背景区域隔离开，不会影响周边的背景区域。而且，可以继续对第一图像块和第二图像块进行局部配准，以提高配准精度，得到配准准确度更高的第一图像块和第二图像块。In addition, it should be noted that when local registration is performed on the first image and the second image, for example on the human-body regions in the two images, the adjacent background regions are affected and become inaccurate, which introduces errors. Therefore, to avoid unnecessary influence on the background region, the present application extracts the first image block from the first image and the second image block from the second image, and then performs global registration on the first image block and the second image block; this isolates the background region so that the surrounding background is not affected. Moreover, local registration can then be performed on the first image block and the second image block to improve the registration precision and obtain a first image block and a second image block with higher registration accuracy.
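A minimal sketch of global registration between two image blocks is given below, using FFT-based phase correlation to estimate a pure translation. This is an illustrative simplification only: the patent's global/local registration is not limited to translations (it could equally use homographies plus local refinement), and all names here are assumptions.

```python
import numpy as np

def estimate_shift(ref_block, mov_block):
    """Estimate a global integer translation between two image blocks
    by phase correlation: the normalized cross-power spectrum of the
    two blocks has an inverse FFT that peaks at the relative shift."""
    f1 = np.fft.fft2(ref_block)
    f2 = np.fft.fft2(mov_block)
    cross = f1 * np.conj(f2)
    cross /= np.maximum(np.abs(cross), 1e-12)  # keep only phase
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped indices to signed shifts.
    h, w = ref_block.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

ref = np.zeros((32, 32))
ref[10:14, 10:14] = 1.0                     # a small bright patch
mov = np.roll(np.roll(ref, -2, axis=0), -3, axis=1)  # shifted copy
shift = estimate_shift(ref, mov)            # shift to re-align mov
```

A subsequent local registration step would then refine the alignment within the block, e.g. per region, as described above.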

S170、将配准后的每组图像块中的第一图像块和第二图像块，利用第二融合模型进行融合，得到融合图像块。S170. Use the second fusion model to fuse the first image block and the second image block in each group of registered image blocks to obtain a fused image block.

应理解,配准后的第一图像块和第二图像块依然清晰度不同,第二融合模型可以对不同清晰度的图像块进行融合。It should be understood that the first image block and the second image block after registration still have different resolutions, and the second fusion model can fuse image blocks with different resolutions.

应理解，由于第二图像相对于第一图像的清晰度较高，第二图像块相对于第一图像块的清晰度较高，由此，将配准后的第一图像块和第二图像块进行融合后，可以得到清晰度较高的融合图像块。It should be understood that since the second image has higher definition than the first image, the second image block has higher definition than the first image block; therefore, after the registered first image block and second image block are fused, a fused image block with higher definition can be obtained.

其中,融合图像块的视场角范围和第一图像块、第二图像块的视场角范围相同。Wherein, the field angle range of the fused image block is the same as the field angle ranges of the first image block and the second image block.

在本申请的实施例中，第二融合模型是一个预先训练的融合模型。训练图像集可以包括原始图像以及人工标注出的掩膜块，掩膜块用于标识原始图像中缺失细节的第一区域。例如，原始图像是指各种HDR场景下的图像。在每个原始图像上，有人工标注出的1个或多个指示高亮度区域(即缺失细节的第一区域)的掩膜块。In the embodiment of the present application, the second fusion model is a pre-trained fusion model. The training image set may include original images and manually annotated mask blocks, where a mask block identifies a first region with missing details in an original image. For example, the original images are images in various HDR scenes. On each original image, one or more mask blocks indicating high-brightness regions (that is, first regions with missing details) are manually annotated.

在一个示例中,第二融合模型是由第一融合模型训练得到的。In one example, the second fusion model is trained from the first fusion model.

在另一个示例中,第二融合模型是由第一融合模型,并且加入随机高光噪声训练得到的。In another example, the second fusion model is trained by adding random highlight noise to the first fusion model.

由于在训练时加入了随机高光噪声，使得后续在利用训练好的第二融合模型将第一图像块和第二图像块进行融合时，清晰度较高的第二图像块比第一图像块的权重占比要大，从而使得融合得到的融合图像块从第二图像块中获取更多细节。Because random highlight noise is added during training, when the trained second fusion model subsequently fuses the first image block and the second image block, the sharper second image block is given a larger weight than the first image block, so that the resulting fused image block obtains more details from the second image block.
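The "random highlight noise" augmentation could be realised as below: over-brightening random spots of the lower-definition training input encourages the model to rely on the sharper input in those regions. The patent does not disclose the exact noise model; the Gaussian-blob form, the strength and sigma parameters, and all names here are assumptions for illustration.

```python
import numpy as np

def add_highlight_noise(patch, rng, strength=120.0, sigma=3.0):
    """Add one random bright Gaussian blob to a training patch,
    simulating an over-exposed highlight region (assumed noise model).
    """
    h, w = patch.shape
    cy = rng.integers(0, h)          # random blob centre
    cx = rng.integers(0, w)
    yy, xx = np.mgrid[0:h, 0:w]
    blob = strength * np.exp(
        -((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2)
    )
    # Clip to the valid 8-bit intensity range.
    return np.clip(patch.astype(np.float32) + blob, 0.0, 255.0)

rng = np.random.default_rng(0)
patch = np.full((16, 16), 50.0, dtype=np.float32)
noisy = add_highlight_noise(patch, rng)
```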

S180、将第一融合图像与融合图像块,利用第三融合模型进行融合,得到第三图像。S180. Use the third fusion model to fuse the first fusion image and the fusion image block to obtain a third image.

可选地,第三融合模型可以为拉普拉斯融合(laplacian blending)模型。Optionally, the third blending model may be a Laplacian blending model.

在利用拉普拉斯融合模型进行融合时，拉普拉斯融合模型可以先将第一融合图像与融合图像块分解到不同的空间频带上，然后在各个空间频带层上分别进行融合，由此，通过分频处理，可以使得第一融合图像和融合图像块融合的更加自然，衔接处更细腻，得到的第三图像质量更高。When the Laplacian fusion model is used for fusion, it can first decompose the first fused image and the fused image block into different spatial frequency bands, and then perform fusion on each spatial frequency band separately. Through this frequency-division processing, the first fused image and the fused image block are fused more naturally, the transitions are smoother, and the obtained third image has higher quality.

应理解,由于融合图像块相对于第一融合图像中对应区域的清晰度更高,由此,利用第三融合模型进行融合之后,可以得到清晰度更高的第三图像。It should be understood that, since the fused image block has higher definition than the corresponding region in the first fused image, a third image with higher definition can be obtained after fusion by using the third fusion model.

S190、当没得到掩膜块时,将第一融合图像作为拍摄图像输出。当得到掩膜块时,将第三图像作为拍摄图像输出。S190. When no mask block is obtained, output the first fused image as a captured image. When the mask block is obtained, the third image is output as a captured image.

示例性的，图6示出了本申请实施例提供的得到掩膜块时处理图像的示意图。如图6所示，将第一图像输入分割模型，由于第一图像中包含有3个被强光照射导致缺失细节的人脸区域，因此，分割模型可以分割出3个第一区域并生成对应的3个掩膜块。Exemplarily, FIG. 6 shows a schematic diagram of processing an image when mask blocks are obtained according to an embodiment of the present application. As shown in FIG. 6, the first image is input into the segmentation model. Since the first image contains three face regions that lack details due to strong illumination, the segmentation model can segment three first regions and generate the three corresponding mask blocks.

此时,将第一图像和第二图像进行配准,并将配准的第一图像和第二图像利用第一融合模型进行融合,得到第一融合图像。At this time, the first image and the second image are registered, and the registered first image and the second image are fused using the first fusion model to obtain the first fusion image.

同时，根据该3个掩膜块，获取第一图像中对应的3个第一图像块，获取第二图像中对应的3个第二图像块，接着，将对应同一掩膜块的第一图像块和第二图像块进行配准并利用第二融合模型进行融合得到融合图像块，由此，可以得到3个融合图像块。At the same time, according to the three mask blocks, the corresponding three first image blocks in the first image and the corresponding three second image blocks in the second image are obtained; then, the first image block and the second image block corresponding to the same mask block are registered and fused by using the second fusion model to obtain a fused image block, so that three fused image blocks can be obtained.

再将第一融合图像和该3个融合图像块利用第三融合模型进行融合，得到第三图像。Then the first fused image and the three fused image blocks are fused by using the third fusion model to obtain the third image.

在本申请的实施例中，当利用分割模型从第一图像中未得到掩膜块时，只利用第一融合模型将第一图像和第二图像进行配准和融合，并将得到的第一融合图像作为拍摄图像。而当利用分割模型从第一图像中得到掩膜块时，说明第一图像中对应有缺失细节的区域，此时，先将第一图像和第二图像进行融合，得到一个大范围清晰度提高的第一融合图像，然后，根据掩膜块，从第一图像中获取第一图像块，从第二图像中获取第二图像块，并将第一图像块和第二图像块进行配准和融合，由此可以得到清晰度、细节有效恢复的融合图像块，接着，再将第一融合图像与融合图像块进一步进行融合，以修复缺失的细节，得到高清晰度、高质量的拍摄图像。In the embodiment of the present application, when no mask block is obtained from the first image by using the segmentation model, only the first fusion model is used to register and fuse the first image and the second image, and the obtained first fused image is output as the captured image. When mask blocks are obtained from the first image by using the segmentation model, there are regions with missing details in the first image. In this case, the first image and the second image are first fused to obtain a first fused image whose definition is improved over a large range; then, according to the mask blocks, the first image blocks are obtained from the first image and the second image blocks are obtained from the second image, and the first image blocks and the second image blocks are registered and fused, so that fused image blocks with effectively restored definition and details can be obtained; finally, the first fused image and the fused image blocks are further fused to repair the missing details and obtain a high-definition, high-quality captured image.

应理解,上述举例说明是为了帮助本领域技术人员理解本申请实施例,而非要将本申请实施例限于所例示的具体数值或具体场景。本领域技术人员根据所给出的上述举例说明,显然可以进行各种等价的修改或变化,这样的修改或变化也落入本申请实施例的范围内。It should be understood that the above illustrations are intended to help those skilled in the art understand the embodiments of the present application, rather than to limit the embodiments of the present application to the illustrated specific values or specific scenarios. Those skilled in the art can obviously make various equivalent modifications or changes based on the above illustrations given, and such modifications or changes also fall within the scope of the embodiments of the present application.

上文结合图2至图6，详细描述了本申请实施例的图像处理方法，结合上述内容，第一图像和第二图像由两个摄像头获取，但是，目前电子设备通常包括有3个及以上摄像头，由此，需要在不同焦段触发不同的两个摄像头来获取第一图像和第二图像。The image processing method of the embodiment of the present application has been described in detail above with reference to FIG. 2 to FIG. 6. As described above, the first image and the second image are captured by two cameras; however, current electronic devices usually include three or more cameras, so two different cameras need to be triggered at different focal lengths to acquire the first image and the second image.

以电子设备包括广角摄像头、主摄摄像头和长焦摄像头为例,下面对本申请提供的触发方法进行详细说明。Taking an electronic device including a wide-angle camera, a main camera, and a telephoto camera as an example, the trigger method provided by this application will be described in detail below.

示例性的，设定电子设备对应的变焦倍数范围为[0.4,100]。该变焦倍数范围划分为3个变焦倍数范围，该3个变焦倍数范围分别为第一变焦倍数范围、第二变焦倍数范围、第三变焦倍数范围，且该3个变焦倍数范围包含的变焦倍数依次增大。Exemplarily, the zoom factor range corresponding to the electronic device is set to [0.4, 100]. This range is divided into three zoom factor ranges, namely a first zoom factor range, a second zoom factor range, and a third zoom factor range, and the zoom factors included in the three zoom factor ranges increase in sequence.

示例性的,假设第一变焦倍数范围F1为[0.4,0.9),第二变焦倍数范围F2为[0.9,3.5),第三变焦倍数范围F3为[3.5,100]。应理解,此处,各个数字仅为示意,具体可以根据需要进行设置和更改,本申请实施例对此不进行任何限制。Exemplarily, it is assumed that the first zoom factor range F1 is [0.4, 0.9), the second zoom factor range F2 is [0.9, 3.5), and the third zoom factor range F3 is [3.5, 100]. It should be understood that, here, each number is only for illustration, which can be set and changed as required, and is not limited in this embodiment of the present application.

示例性的，广角摄像头自身适用的变焦倍数范围为[0.4,1]，主摄摄像头自身适用的变焦倍数范围为[0.6,3.5]，而长焦摄像头自身适用的变焦倍数范围为[2.0,100]。Exemplarily, the applicable zoom factor range of the wide-angle camera itself is [0.4, 1], the applicable zoom factor range of the main camera itself is [0.6, 3.5], and the applicable zoom factor range of the telephoto camera itself is [2.0, 100].

基于此,设定第一变焦倍数范围对应的目标摄像头为广角摄像头,第二变焦倍数范围对应的目标摄像头为主摄摄像头,第三变焦倍数范围对应的目标摄像头为长焦摄像头。Based on this, the target camera corresponding to the first zoom range is set as the wide-angle camera, the target camera corresponding to the second zoom range is the main camera, and the target camera corresponding to the third zoom range is set as the telephoto camera.

图7示出了本申请实施例提供的一种拍照预览时变焦的界面示意图。图8示出了本申请实施例提供的一种拍照预览时多摄变焦的进程示意图。FIG. 7 is a schematic diagram of an interface for zooming during photo preview provided by an embodiment of the present application. FIG. 8 shows a schematic diagram of a process of multi-camera zooming during photo preview provided by an embodiment of the present application.

示例性的，响应于用户的触摸操作，当电子设备100运行相机应用时，电子设备100显示如图7中的(a)所示的预览界面。在该预览界面上，拍摄键11指示当前拍摄模式为拍照模式。该预览界面中还包括取景窗口21，取景窗口21可用于实时显示拍照前的预览画面。另外，预览画面中还显示有变焦选项22。用户可以在变焦选项22中选择当前拍照的变焦倍数，例如，0.4倍、2倍或50倍等。如图7中的(b)所示，响应于用户的变焦操作，预览画面可以根据当前选择的变焦倍数放大或缩小，随着变焦倍数放大或缩小，取景窗口21中的预览画面也变大或缩小。当变焦至某一切换点时，调用不同的两个摄像头，利用本申请实施例提供的图像处理方法获取拍摄图像。Exemplarily, in response to a user's touch operation, when the electronic device 100 runs the camera application, the electronic device 100 displays a preview interface as shown in (a) in FIG. 7. On the preview interface, the shooting key 11 indicates that the current shooting mode is the photographing mode. The preview interface also includes a viewfinder window 21, which can be used to display the preview picture in real time before a photo is taken. In addition, a zoom option 22 is also displayed on the preview screen. The user can select the zoom factor for the current photo in the zoom option 22, for example, 0.4x, 2x, or 50x. As shown in (b) in FIG. 7, in response to the user's zoom operation, the preview picture can be enlarged or reduced according to the currently selected zoom factor; as the zoom factor increases or decreases, the preview picture in the viewfinder window 21 also becomes larger or smaller. When zooming to a certain switching point, two different cameras are invoked, and the captured image is acquired by using the image processing method provided in the embodiment of the present application.

如图8所示，在进行拍照预览时，当在第一变焦倍数范围F1内从小到大变焦时，第一变焦倍数范围对应的广角摄像头为前台送显状态，将获取的图像发送至显示屏显示。As shown in FIG. 8, during photo preview, when zooming from small to large within the first zoom factor range F1, the wide-angle camera corresponding to the first zoom factor range is in the foreground display state and sends the acquired images to the display screen for display.

当变焦至第一变焦切换点(例如0.6X),广角摄像头继续为前台送显状态,而第二变焦倍数范围F2对应的主摄摄像头开始进入后台运行状态。When zooming to the first zoom switching point (for example, 0.6X), the wide-angle camera continues to be in the foreground display state, and the main camera corresponding to the second zoom range F2 starts to enter the background operation state.

由于广角摄像头相对于主摄摄像头获取的图像视场角大、清晰度低，由此，在[0.6,0.9]的变焦倍数范围F11内，响应于用户对拍摄键11的操作，广角摄像头获取的图像作为第一图像，主摄摄像头获取的图像作为第二图像，然后，基于广角摄像头获取的第一图像和主摄摄像头获取的第二图像，利用本申请实施例提供的图像处理方法，得到清晰度高、细节丰富的拍摄图像。Since the image captured by the wide-angle camera has a larger field of view and lower definition than that of the main camera, within the zoom factor range F11 of [0.6, 0.9], in response to the user's operation on the shooting key 11, the image captured by the wide-angle camera is used as the first image and the image captured by the main camera is used as the second image; then, based on the first image captured by the wide-angle camera and the second image captured by the main camera, a captured image with high definition and rich details is obtained by using the image processing method provided in the embodiment of the present application.

当变焦至0.9X时,广角摄像头关闭,主摄摄像头转换为前台送显状态,即,主摄摄像头将获取的图像发送至显示屏显示。When zooming to 0.9X, the wide-angle camera is turned off, and the main camera switches to the foreground sending display state, that is, the main camera sends the acquired image to the display screen for display.

当变焦至第二变焦切换点时(例如2.0X)，主摄摄像头继续为前台送显状态，而第三变焦倍数范围F3对应的长焦摄像头开始进入后台运行状态。When zooming to the second zoom switching point (for example, 2.0X), the main camera continues to be in the foreground display state, while the telephoto camera corresponding to the third zoom factor range F3 starts to enter the background running state.

由于主摄摄像头相对于长焦摄像头获取的图像清晰度低、视场角大，由此，在[2.0,3.5]的变焦倍数范围F21内，响应于用户对拍摄键11的操作，主摄摄像头获取的图像作为第一图像，长焦摄像头获取的图像作为第二图像，然后，基于主摄摄像头获取的第一图像和长焦摄像头获取的第二图像，利用本申请实施例提供的图像处理方法，得到清晰度高、细节丰富的拍摄图像。Since the image captured by the main camera has lower definition and a larger field of view than that of the telephoto camera, within the zoom factor range F21 of [2.0, 3.5], in response to the user's operation on the shooting key 11, the image captured by the main camera is used as the first image and the image captured by the telephoto camera is used as the second image; then, based on the first image captured by the main camera and the second image captured by the telephoto camera, a captured image with high definition and rich details is obtained by using the image processing method provided in the embodiment of the present application.

当变焦至3.5X时,主摄摄像头关闭,长焦摄像头转换为前台送显状态,即,长焦摄像头将获取的图像发送至显示屏显示。When zooming to 3.5X, the main camera is turned off, and the telephoto camera switches to the foreground sending display state, that is, the telephoto camera sends the acquired image to the display screen for display.
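The camera-selection logic walked through above can be sketched as a simple mapping from zoom factor to (foreground camera, background camera). The thresholds are the example values from this description (F1 = [0.4, 0.9), F2 = [0.9, 3.5), F3 = [3.5, 100], switch points 0.6X and 2.0X), not values fixed by the patent, and the function and camera names are illustrative.

```python
def cameras_for_zoom(zoom):
    """Map a zoom factor to (foreground camera, background camera).

    When a background camera is running, its output serves as the
    higher-definition second image and the foreground camera's output
    as the first image for the fusion pipeline described above.
    """
    if not 0.4 <= zoom <= 100:
        raise ValueError("zoom factor out of range")
    if zoom < 0.6:
        return ("wide", None)
    if zoom < 0.9:
        return ("wide", "main")   # F11: wide = first, main = second
    if zoom < 2.0:
        return ("main", None)
    if zoom < 3.5:
        return ("main", "tele")   # F21: main = first, tele = second
    return ("tele", None)
```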

上文结合图2至图8，对本申请实施例的图像处理方法以及不同摄像头的触发条件进行了详细描述，下面将结合图9至图12，详细描述本申请适用的电子设备的软件系统、硬件系统、装置以及芯片。应理解，本申请实施例中的软件系统、硬件系统、装置以及芯片可以执行前述本申请实施例的各种图像处理方法，即以下各种产品的具体工作过程，可以参考前述方法实施例中的对应过程。The image processing method of the embodiment of the present application and the trigger conditions of different cameras have been described in detail above with reference to FIG. 2 to FIG. 8. The software system, hardware system, apparatus, and chip of the electronic device to which the present application is applicable will be described in detail below with reference to FIG. 9 to FIG. 12. It should be understood that the software systems, hardware systems, apparatuses, and chips in the embodiments of the present application can execute the various image processing methods of the foregoing embodiments; that is, for the specific working processes of the following products, reference may be made to the corresponding processes in the foregoing method embodiments.

本申请实施例提供的图像处理方法可以适用于各种电子设备,对应的,本申请实施例提供的图像处理装置可以为多种形态的电子设备。The image processing method provided in the embodiment of the present application may be applicable to various electronic devices, and correspondingly, the image processing apparatus provided in the embodiment of the present application may be electronic devices in various forms.

在本申请的一些实施例中，该电子设备可以为单反相机、卡片机等各种摄像装置、手机、平板电脑、可穿戴设备、车载设备、增强现实(augmented reality,AR)/虚拟现实(virtual reality,VR)设备、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本、个人数字助理(personal digital assistant,PDA)等，或者可以为其他能够进行图像处理的设备或装置，对于电子设备的具体类型，本申请实施例不作任何限制。In some embodiments of the present application, the electronic device may be various camera apparatuses such as single-lens reflex cameras and compact cameras, mobile phones, tablet computers, wearable devices, vehicle-mounted devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPC), netbooks, personal digital assistants (PDA), etc., or may be other devices or apparatuses capable of image processing; the embodiment of the present application does not impose any limitation on the specific type of the electronic device.

下文以电子设备为手机为例,图9示出了本申请实施例提供的一种电子设备100的结构示意图。Hereinafter, the electronic device is taken as an example of a mobile phone, and FIG. 9 shows a schematic structural diagram of an electronic device 100 provided in an embodiment of the present application.

电子设备100可以包括处理器110，外部存储器接口120，内部存储器121，通用串行总线(universal serial bus,USB)接口130，充电管理模块140，电源管理模块141，电池142，天线1，天线2，移动通信模块150，无线通信模块160，音频模块170，扬声器170A，受话器170B，麦克风170C，耳机接口170D，传感器模块180，按键190，马达191，指示器192，摄像头193，显示屏194，以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A，陀螺仪传感器180B，气压传感器180C，磁传感器180D，加速度传感器180E，距离传感器180F，接近光传感器180G，指纹传感器180H，温度传感器180J，触摸传感器180K，环境光传感器180L，骨传导传感器180M等。The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, and a subscriber identification module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.

需要说明的是，图9所示的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中，电子设备100可以包括比图9所示的部件更多或更少的部件，或者，电子设备100可以包括图9所示的部件中某些部件的组合，或者，电子设备100可以包括图9所示的部件中某些部件的子部件。图9所示的部件可以以硬件、软件、或软件和硬件的组合实现。It should be noted that the structure shown in FIG. 9 does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, the electronic device 100 may include more or fewer components than those shown in FIG. 9, or the electronic device 100 may include a combination of some of the components shown in FIG. 9, or the electronic device 100 may include sub-components of some of the components shown in FIG. 9. The components shown in FIG. 9 can be implemented in hardware, software, or a combination of software and hardware.

处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。The processor 110 may include one or more processing units, for example: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processing unit (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), controller, video codec, digital signal processor (digital signal processor, DSP), baseband processor, and/or neural network processor (neural-network processing unit, NPU), etc. Wherein, different processing units may be independent devices, or may be integrated in one or more processors.

其中,控制器可以是电子设备100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。Wherein, the controller may be the nerve center and command center of the electronic device 100 . The controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.

处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated access is avoided, and the waiting time of the processor 110 is reduced, thereby improving the efficiency of the system.

处理器110可以运行本申请实施例提供的图像处理方法的软件代码,拍摄得到清晰度较高的图像。The processor 110 may run the software code of the image processing method provided in the embodiment of the present application to capture an image with higher definition.

在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous transmitter (universal asynchronous receiver/transmitter, UART) interface, mobile industry processor interface (mobile industry processor interface, MIPI), general-purpose input and output (general-purpose input/output, GPIO) interface, subscriber identity module (subscriber identity module, SIM) interface, and /or universal serial bus (universal serial bus, USB) interface, etc.

MIPI接口可以被用于连接处理器110与显示屏194,摄像头193等外围器件。MIPI接口包括摄像头串行接口(camera serial interface,CSI),显示屏串行接口(display serial interface,DSI)等。在一些实施例中,处理器110和摄像头193通过CSI接口通信,实现电子设备100的拍摄功能。处理器110和显示屏194通过DSI接口通信,实现电子设备100的显示功能。The MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 . MIPI interface includes camera serial interface (camera serial interface, CSI), display serial interface (display serial interface, DSI), etc. In some embodiments, the processor 110 communicates with the camera 193 through the CSI interface to realize the shooting function of the electronic device 100 . The processor 110 communicates with the display screen 194 through the DSI interface to realize the display function of the electronic device 100 .

GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号,也可被配置为数据信号。在一些实施例中,GPIO接口可以用于连接处理器110与摄像头193,显示屏194,无线通信模块160,音频模块170,传感器模块180等。GPIO接口还可以被配置为I2C接口,I2S接口,UART接口,MIPI接口等。The GPIO interface can be configured by software. The GPIO interface can be configured as a control signal or as a data signal. In some embodiments, the GPIO interface can be used to connect the processor 110 with the camera 193 , the display screen 194 , the wireless communication module 160 , the audio module 170 , the sensor module 180 and so on. The GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.

USB接口130是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口130可以用于连接充电器为电子设备100充电,也可以用于电子设备100与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他电子设备,例如AR设备等。The USB interface 130 is an interface conforming to the USB standard specification, specifically, it can be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like. The USB interface 130 can be used to connect a charger to charge the electronic device 100 , and can also be used to transmit data between the electronic device 100 and peripheral devices. It can also be used to connect headphones and play audio through them. This interface can also be used to connect other electronic devices, such as AR devices.

可以理解的是,本申请实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备100的结构限定。在本申请另一些实施例中,电子设备100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。It can be understood that the interface connection relationship between the modules shown in the embodiment of the present application is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 . In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.

充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的电流。在一些无线充电的实施例中,充电管理模块140可以通过电子设备100的无线充电线圈接收电磁波(电流路径如虚线所示)。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为电子设备100供电。The charging management module 140 is configured to receive a charging input from a charger. Wherein, the charger may be a wireless charger or a wired charger. In some embodiments of wired charging, the charging management module 140 can receive the current of the wired charger through the USB interface 130 . In some wireless charging embodiments, the charging management module 140 can receive electromagnetic waves through the wireless charging coil of the electronic device 100 (the current path is shown as a dotted line). While the charging management module 140 is charging the battery 142 , it can also supply power to the electronic device 100 through the power management module 141 .

电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,显示屏194,摄像头193,和无线通信模块160等供电。The power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 . The power management module 141 receives the input from the battery 142 and/or the charging management module 140 to provide power for the processor 110 , the internal memory 121 , the display screen 194 , the camera 193 , and the wireless communication module 160 .

电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。The wireless communication function of the electronic device 100 can be realized by the antenna 1 , the antenna 2 , the mobile communication module 150 , the wireless communication module 160 , a modem processor, a baseband processor, and the like.

天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in electronic device 100 may be used to cover single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.

移动通信模块150可以提供应用在电子设备100上的无线通信的解决方案，例如下列方案中的至少一个：第二代(2nd generation,2G)移动通信解决方案、第三代(3rd generation,3G)移动通信解决方案、第四代(4th generation,4G)移动通信解决方案、第五代(5th generation,5G)移动通信解决方案、第六代(6th generation,6G)移动通信解决方案。移动通信模块150可以包括至少一个滤波器，开关，功率放大器，低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波，并对接收的电磁波进行滤波，放大等处理，传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大，经天线1转为电磁波辐射出去。在一些实施例中，移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中，移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。The mobile communication module 150 may provide wireless communication solutions applied to the electronic device 100, such as at least one of the following: a second generation (2nd generation, 2G) mobile communication solution, a third generation (3rd generation, 3G) mobile communication solution, a fourth generation (4th generation, 4G) mobile communication solution, a fifth generation (5th generation, 5G) mobile communication solution, and a sixth generation (6th generation, 6G) mobile communication solution. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and the like. The mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation. The mobile communication module 150 can also amplify the signals modulated by the modem processor, and convert them into electromagnetic waves through the antenna 1 for radiation. In some embodiments, at least part of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be disposed in the same device.

无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络)，蓝牙(bluetooth,BT)，全球导航卫星系统(global navigation satellite system,GNSS)，调频(frequency modulation,FM)，近距离无线通信技术(near field communication,NFC)，红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波，将电磁波信号调频以及滤波处理，将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号，对其进行调频，放大，经天线2转为电磁波辐射出去。The wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 can also receive the signal to be sent from the processor 110, frequency-modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.

在一些实施例中,电子设备100的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。In some embodiments, the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology. The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), broadband Code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC , FM, and/or IR techniques, etc. The GNSS may include a global positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a Beidou navigation satellite system (beidou navigation satellite system, BDS), a quasi-zenith satellite system (quasi -zenith satellite system (QZSS) and/or satellite based augmentation systems (SBAS).

电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。The electronic device 100 realizes the display function through the GPU, the display screen 194 , and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.

显示屏194用于显示图像，视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD)，有机发光二极管(organic light-emitting diode,OLED)，有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode,AMOLED)，柔性发光二极管(flex light-emitting diode,FLED)，Miniled，MicroLed，Micro-oLed，量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中，电子设备100可以包括1个或N个显示屏194，N为大于1的正整数。The display screen 194 is used to display images, videos and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light emitting diodes (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.

摄像头193用于捕获图像或视频。可以通过应用程序指令触发开启，实现拍照功能，如拍摄获取任意场景的图像。摄像头可以包括成像镜头、滤光片、图像传感器等部件。物体发出或反射的光线进入成像镜头，通过滤光片，最终汇聚在图像传感器上。图像传感器主要用于对拍照视角中的所有物体(也可称为待拍摄场景、目标场景，也可以理解为用户期待拍摄的场景图像)发出或反射的光汇聚成像；滤光片主要用于将光线中的多余光波(例如除可见光外的光波，如红外)滤去；图像传感器主要用于对接收到的光信号进行光电转换，转换成电信号，并输入处理器110进行后续处理。其中，摄像头193可以位于电子设备100的前面，也可以位于电子设备100的背面，摄像头的具体个数以及排布方式可以根据需求设置，本申请不做任何限制。Camera 193 is used to capture images or videos. It can be triggered by an application instruction to realize the photographing function, such as capturing images of any scene. A camera may include components such as an imaging lens, an optical filter, and an image sensor. The light emitted or reflected by an object enters the imaging lens, passes through the optical filter, and finally converges on the image sensor. The image sensor is mainly used to converge and image the light emitted or reflected by all objects in the camera's field of view (also called the scene to be shot or the target scene, which can also be understood as the scene image the user expects to shoot); the optical filter is mainly used to filter out redundant light waves (for example, light waves other than visible light, such as infrared) from the light; the image sensor is mainly used to perform photoelectric conversion on the received light signal, convert it into an electrical signal, and input it into the processor 110 for subsequent processing. The camera 193 may be located at the front of the electronic device 100, or at the back of the electronic device 100; the specific number and arrangement of the cameras may be set according to requirements, which are not limited in this application.

示例性的,电子设备100包括前置摄像头和后置摄像头。例如,前置摄像头或者后置摄像头,均可以包括1个或多个摄像头。以电子设备100具有3个后置摄像头为例,这样,电子设备100启动3个后置摄像头中的2个摄像头进行拍摄时,可以使用 本申请实施例提供的图像处理方法。或者,摄像头设置于电子设备100的外置配件上,该外置配件可旋转的连接于手机的边框,该外置配件与电子设备100的显示屏194之间所形成的角度为0-360度之间的任意角度。比如,当电子设备100自拍时,外置配件带动摄像头旋转到朝向用户的位置。当然,手机具有多个摄像头时,也可以只有部分摄像头设置在外置配件上,剩余的摄像头设置在电子设备100本体上,本申请实施例对此不进行任何限制。Exemplarily, the electronic device 100 includes a front camera and a rear camera. For example, a front camera or a rear camera may include one or more cameras. Taking the electronic device 100 with three rear cameras as an example, in this way, when the electronic device 100 starts shooting with two of the three rear cameras, the image processing method provided in the embodiment of the present application can be used. Alternatively, the camera is arranged on an external accessory of the electronic device 100, the external accessory is rotatably connected to the frame of the mobile phone, and the angle formed between the external accessory and the display screen 194 of the electronic device 100 is 0-360 degrees any angle between. For example, when the electronic device 100 takes a selfie, the external accessory drives the camera to rotate to a position facing the user. Certainly, when the mobile phone has multiple cameras, only some of the cameras may be set on the external accessories, and the rest of the cameras may be set on the electronic device 100 body, which is not limited in this embodiment of the present application.

内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备100使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。处理器110通过运行存储在内部存储器121的指令,和/或存储在设置于处理器中的存储器的指令,执行电子设备100的各种功能应用以及数据处理。The internal memory 121 may be used to store computer-executable program codes including instructions. The internal memory 121 may include an area for storing programs and an area for storing data. Wherein, the stored program area can store an operating system, at least one application program required by a function (such as a sound playing function, an image playing function, etc.) and the like. The storage data area can store data created during the use of the electronic device 100 (such as audio data, phonebook, etc.) and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (universal flash storage, UFS) and the like. The processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.

内部存储器121还可以存储本申请实施例提供的图像处理方法的软件代码,当处理器110运行所述软件代码时,执行图像处理方法的流程步骤,得到清晰度较高的图像。The internal memory 121 can also store the software code of the image processing method provided by the embodiment of the present application. When the processor 110 runs the software code, it executes the process steps of the image processing method to obtain an image with higher definition.

内部存储器121还可以存储拍摄得到的图像。The internal memory 121 can also store captured images.

外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备100的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐等文件保存在外部存储卡中。The external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, so as to expand the storage capacity of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. Such as saving files such as music in an external memory card.

当然，本申请实施例提供的图像处理方法的软件代码也可以存储在外部存储器中，处理器110可以通过外部存储器接口120运行所述软件代码，执行图像处理方法的流程步骤，得到清晰度较高的图像。电子设备100拍摄得到的图像也可以存储在外部存储器中。Certainly, the software code of the image processing method provided in the embodiments of the present application may also be stored in an external memory, and the processor 110 may run the software code through the external memory interface 120 to execute the process steps of the image processing method and obtain an image with higher definition. Images captured by the electronic device 100 may also be stored in the external memory.

应理解,用户可以指定将图像存储在内部存储器121还是外部存储器中。比如,电子设备100当前与外部存储器相连接时,若电子设备100拍摄得到1帧图像时,可以弹出提示信息,以提示用户将图像存储在外部存储器还是内部存储器;当然,还可以有其他指定方式,本申请实施例对此不进行任何限制;或者,电子设备100检测到内部存储器121的内存量小于预设量时,可以自动将图像存储在外部存储器中。It should be understood that the user can designate whether to store the image in the internal memory 121 or the external memory. For example, when the electronic device 100 is currently connected to the external memory, if the electronic device 100 captures one frame of image, a prompt message may pop up to remind the user whether to store the image in the external memory or the internal memory; of course, there may be other specified ways , the embodiment of the present application does not impose any limitation on this; alternatively, when the electronic device 100 detects that the memory capacity of the internal memory 121 is less than a preset amount, it may automatically store the image in the external memory.

电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。The electronic device 100 can implement audio functions through the audio module 170 , the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playback, recording, etc.

压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。The pressure sensor 180A is used to sense the pressure signal and convert the pressure signal into an electrical signal. In some embodiments, pressure sensor 180A may be disposed on display screen 194 .

陀螺仪传感器180B可以用于确定电子设备100的运动姿态。在一些实施例中,可以通过陀螺仪传感器180B确定电子设备100围绕三个轴(即,x,y和z轴)的角速度。陀螺仪传感器180B可以用于拍摄防抖。The gyro sensor 180B can be used to determine the motion posture of the electronic device 100 . In some embodiments, the angular velocity of the electronic device 100 around three axes (ie, x, y and z axes) may be determined by the gyro sensor 180B. The gyro sensor 180B can be used for image stabilization.

气压传感器180C用于测量气压。在一些实施例中,电子设备100通过气压传感器180C测得的气压值计算海拔高度,辅助定位和导航。The air pressure sensor 180C is used to measure air pressure. In some embodiments, the electronic device 100 calculates the altitude based on the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.

磁传感器180D包括霍尔传感器。电子设备100可以利用磁传感器180D检测翻盖皮套的开合。在一些实施例中,当电子设备100是翻盖机时,电子设备100可以根据磁传感器180D检测翻盖的开合。进而根据检测到的皮套的开合状态或翻盖的开合状态,设置翻盖自动解锁等特性。The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may use the magnetic sensor 180D to detect the opening and closing of the flip leather case. In some embodiments, when the electronic device 100 is a clamshell machine, the electronic device 100 can detect opening and closing of the clamshell according to the magnetic sensor 180D. Furthermore, according to the detected opening and closing state of the leather case or the opening and closing state of the flip cover, features such as automatic unlocking of the flip cover are set.

加速度传感器180E可检测电子设备100在各个方向上(一般为三轴)加速度的大小。当电子设备100静止时可检测出重力的大小及方向。还可以用于识别电子设备姿态,应用于横竖屏切换,计步器等应用。The acceleration sensor 180E can detect the acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of electronic devices, and can be used in applications such as horizontal and vertical screen switching, pedometers, etc.

距离传感器180F,用于测量距离。电子设备100可以通过红外或激光测量距离。在一些实施例中,拍摄场景,电子设备100可以利用距离传感器180F测距以实现快速对焦。The distance sensor 180F is used to measure the distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 may use the distance sensor 180F for distance measurement to achieve fast focusing.

接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。电子设备100通过发光二极管向外发射红外光。电子设备100使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定电子设备100附近有物体。当检测到不充分的反射光时,电子设备100可以确定电子设备100附近没有物体。电子设备100可以利用接近光传感器180G检测用户手持电子设备100贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器180G也可用于皮套模式,口袋模式自动解锁与锁屏。Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes. The light emitting diodes may be infrared light emitting diodes. The electronic device 100 emits infrared light through the light emitting diode. Electronic device 100 uses photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it may be determined that there is an object near the electronic device 100 . When insufficient reflected light is detected, the electronic device 100 may determine that there is no object near the electronic device 100 . The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear to make a call, so as to automatically turn off the screen to save power. The proximity light sensor 180G can also be used in leather case mode, automatic unlock and lock screen in pocket mode.

环境光传感器180L用于感知环境光亮度。电子设备100可以根据感知的环境光亮度自适应调节显示屏194亮度。环境光传感器180L也可用于拍照时自动调节白平衡。环境光传感器180L还可以与接近光传感器180G配合,检测电子设备100是否在口袋里,以防误触。The ambient light sensor 180L is used for sensing ambient light brightness. The electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness. The ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures. The ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in the pocket, so as to prevent accidental touch.

指纹传感器180H用于采集指纹。电子设备100可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。The fingerprint sensor 180H is used to collect fingerprints. The electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access to application locks, take pictures with fingerprints, answer incoming calls with fingerprints, and the like.

温度传感器180J用于检测温度。在一些实施例中,电子设备100利用温度传感器180J检测的温度,执行温度处理策略。例如,当温度传感器180J上报的温度超过阈值,电子设备100执行降低位于温度传感器180J附近的处理器的性能,以便降低功耗实施热保护。在另一些实施例中,当温度低于另一阈值时,电子设备100对电池142加热,以避免低温导致电子设备100异常关机。在其他一些实施例中,当温度低于又一阈值时,电子设备100对电池142的输出电压执行升压,以避免低温导致的异常关机。The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 uses the temperature detected by the temperature sensor 180J to implement a temperature treatment strategy. For example, when the temperature reported by the temperature sensor 180J exceeds the threshold, the electronic device 100 may reduce the performance of the processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In some other embodiments, when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to avoid abnormal shutdown of the electronic device 100 caused by the low temperature. In some other embodiments, when the temperature is lower than another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
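The three temperature thresholds described above form a simple protection policy. A minimal sketch follows; the concrete threshold values are placeholders, since the embodiment only states that distinct thresholds exist, not their values:

```python
def thermal_policy(temp_c, high=45.0, low=0.0, very_low=-10.0):
    """Return protective actions for a temperature reading (hypothetical thresholds)."""
    actions = []
    if temp_c > high:
        actions.append("throttle_nearby_processor")  # reduce power, thermal protection
    if temp_c < low:
        actions.append("heat_battery")               # avoid cold-induced abnormal shutdown
    if temp_c < very_low:
        actions.append("boost_battery_voltage")      # avoid low-temperature shutdown
    return actions
```

Each rule is independent, so below the lowest threshold both battery actions apply at once.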

触摸传感器180K,也称“触控器件”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于电子设备100的表面,与显示屏194所处的位置不同。The touch sensor 180K is also called "touch device". The touch sensor 180K can be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”. The touch sensor 180K is used to detect a touch operation on or near it. The touch sensor can pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation can be provided through the display screen 194 . In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 , which is different from the position of the display screen 194 .

骨传导传感器180M可以获取振动信号。在一些实施例中,骨传导传感器180M可以获取人体声部振动骨块的振动信号。骨传导传感器180M也可以接触人体脉搏,接收血压跳动信号。在一些实施例中,骨传导传感器180M也可以设置于耳机中,结合成骨传导耳机。音频模块170可以基于所述骨传导传感器180M获取的声部振动骨 块的振动信号,解析出语音信号,实现语音功能。应用处理器可以基于所述骨传导传感器180M获取的血压跳动信号解析心率信息,实现心率检测功能。The bone conduction sensor 180M can acquire vibration signals. In some embodiments, the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone mass of the human voice. The bone conduction sensor 180M can also contact the human pulse and receive the blood pressure beating signal. In some embodiments, the bone conduction sensor 180M can also be disposed in the earphone, combined into a bone conduction earphone. The audio module 170 can analyze the voice signal based on the vibration signal of the vibrating bone mass of the voice part acquired by the bone conduction sensor 180M, so as to realize the voice function. The application processor can analyze the heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M, so as to realize the heart rate detection function.

按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。电子设备100可以接收按键输入,产生与电子设备100的用户设置以及功能控制有关的键信号输入。The keys 190 include a power key, a volume key and the like. The key 190 may be a mechanical key. It can also be a touch button. The electronic device 100 can receive key input and generate key signal input related to user settings and function control of the electronic device 100 .

马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。The motor 191 can generate a vibrating reminder. The motor 191 can be used for incoming call vibration prompts, and can also be used for touch vibration feedback. For example, touch operations applied to different applications (such as taking pictures, playing audio, etc.) may correspond to different vibration feedback effects.

指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。The indicator 192 can be an indicator light, and can be used to indicate charging status, power change, and can also be used to indicate messages, missed calls, notifications, and the like.

SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195,或从SIM卡接口195拔出,实现和电子设备100的接触和分离。The SIM card interface 195 is used for connecting a SIM card. The SIM card can be connected and separated from the electronic device 100 by inserting it into the SIM card interface 195 or pulling it out from the SIM card interface 195 .

上文详细描述了电子设备100的硬件系统,下面介绍电子设备100的软件系统。软件系统可以采用分层架构、事件驱动架构、微核架构、微服务架构或云架构,本申请实施例以分层架构为例,示例性地描述电子设备100的软件系统。The hardware system of the electronic device 100 is described in detail above, and the software system of the electronic device 100 is introduced below. The software system may adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present application uses a layered architecture as an example to exemplarily describe the software system of the electronic device 100 .

如图10所示,采用分层架构的软件系统分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,软件系统可以分为五层,从上至下分别为应用层210、应用框架层220、硬件抽象层230、驱动层240以及硬件层250。As shown in Figure 10, a software system adopting a layered architecture is divided into several layers, and each layer has a clear role and division of labor. Layers communicate through software interfaces. In some embodiments, the software system can be divided into five layers, which are application layer 210 , application framework layer 220 , hardware abstraction layer 230 , driver layer 240 and hardware layer 250 from top to bottom.

应用层210可以包括相机、图库应用程序,还可以包括日历、通话、地图、导航、WLAN、蓝牙、音乐、视频、短信息等应用程序。The application layer 210 may include application programs such as camera and gallery, and may also include application programs such as calendar, call, map, navigation, WLAN, Bluetooth, music, video, and short message.

应用框架层220为应用层210的应用程序提供应用程序访问接口和编程框架。The application framework layer 220 provides application program access interfaces and programming frameworks for the applications of the application layer 210 .

例如,应用框架层包括相机访问接口,该相机访问接口用于通过相机管理和相机设备来提供相机的拍摄服务。For example, the application framework layer includes a camera access interface, and the camera access interface is used to provide camera shooting services through camera management and camera equipment.

应用框架层中的相机管理用于管理相机。相机管理可以获取相机的参数,例如判断相机的工作状态等。Camera management in the application framework layer is used to manage cameras. Camera management can obtain camera parameters, such as judging the working status of the camera.

应用框架层中的相机设备用于提供不同相机设备以及相机管理之间的数据访问接口。The camera device in the application framework layer is used to provide a data access interface between different camera devices and camera management.

硬件抽象层230用于将硬件抽象化。比如，硬件抽象层可以包括相机硬件抽象层以及其他硬件设备抽象层；相机硬件抽象层中可以包括相机设备1、相机设备2等；相机硬件抽象层可以与相机算法库相连接，并可以调用相机算法库中的算法。The hardware abstraction layer 230 is used to abstract hardware. For example, the hardware abstraction layer may include a camera hardware abstraction layer and abstraction layers for other hardware devices; the camera hardware abstraction layer may include camera device 1, camera device 2, and the like; the camera hardware abstraction layer can be connected with the camera algorithm library and can call the algorithms in the camera algorithm library.

驱动层240用于为不同的硬件设备提供驱动。比如,驱动层可以包括相机驱动;数字信号处理器驱动以及图形处理器驱动。The driver layer 240 is used to provide drivers for different hardware devices. For example, the driver layer may include camera drivers; digital signal processor drivers and graphics processor drivers.

硬件层250可以包括传感器、图像信号处理器、数字信号处理器、图形处理器以及其他硬件设备。其中,传感器可以包括传感器1、传感器2等,还可以包括深度传感器(time of flight,TOF)和多光谱传感器。The hardware layer 250 may include sensors, image signal processors, digital signal processors, graphics processors, and other hardware devices. Wherein, the sensor may include sensor 1, sensor 2, etc., and may also include a depth sensor (time of flight, TOF) and a multispectral sensor.

下面结合显示拍照场景,示例性说明电子设备100的软件系统的工作流程。In the following, the workflow of the software system of the electronic device 100 is exemplarily described in conjunction with displaying a photographing scene.

当用户在触摸传感器180K上进行单击操作时，相机APP被单击操作唤醒后，通过相机访问接口调用相机硬件抽象层的各个相机设备。示例性的，相机硬件抽象层判断出当前变焦倍数处于[0.6,0.9]变焦倍数范围内，由此，可以向相机设备驱动下发调用广角摄像头和主摄摄像头的指令，同时相机算法库开始加载本申请实施例所利用的网络模型中的算法。When the user performs a click operation on the touch sensor 180K, the camera APP is awakened by the click operation and then calls each camera device of the camera hardware abstraction layer through the camera access interface. Exemplarily, the camera hardware abstraction layer determines that the current zoom factor is within the zoom factor range of [0.6, 0.9], and therefore sends instructions to the camera device driver to invoke the wide-angle camera and the main camera; at the same time, the camera algorithm library starts to load the algorithms in the network models used in the embodiments of the present application.
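The zoom-based dispatch above can be sketched as follows. Only the [0.6, 0.9] branch comes from the text; the fallback to the main camera alone is an assumption added for illustration:

```python
def select_cameras(zoom_factor):
    """Pick which cameras the HAL invokes for a given zoom factor.

    The [0.6, 0.9] interval and the wide-angle + main pairing follow the
    example in the text; the single-camera fallback is hypothetical.
    """
    if 0.6 <= zoom_factor <= 0.9:
        return ["wide_angle", "main"]  # dual-camera path: fusion algorithms are loaded
    return ["main"]
```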

当硬件层的传感器被调用后，例如，调用广角摄像头中的传感器1获取第一图像，主摄摄像头中的传感器2获取第二图像后，将第一图像和第二图像发送给图像信号处理器进行配准等初步处理，处理后经相机设备驱动返回硬件抽象层，再利用加载的相机算法库中的算法进行处理，例如利用分割模型、第一融合模型、第二融合模型和第三融合模型按照本申请实施例提供的相关处理步骤进行处理，得到拍摄图像。其中，分割模型、第一融合模型、第二融合模型和第三融合模型可以通过数字信号处理器驱动调用数字信号处理器、图形处理器驱动调用图形处理器进行处理。After the sensors at the hardware layer are invoked, for example, after sensor 1 in the wide-angle camera acquires the first image and sensor 2 in the main camera acquires the second image, the first image and the second image are sent to the image signal processor for preliminary processing such as registration. After this processing, the results are returned to the hardware abstraction layer via the camera device driver, and are then processed with the algorithms in the loaded camera algorithm library, for example, with the segmentation model, the first fusion model, the second fusion model and the third fusion model according to the relevant processing steps provided in the embodiments of the present application, to obtain the captured image. The segmentation model, the first fusion model, the second fusion model and the third fusion model can be run by invoking the digital signal processor through the digital signal processor driver and invoking the graphics processor through the graphics processor driver.

将得到的拍摄图像经相机硬件抽象层、相机访问接口发送回相机应用进行显示和存储。The captured images are sent back to the camera application via the camera hardware abstraction layer and the camera access interface for display and storage.

图11为本申请实施例提供的一种图像处理装置的结构示意图。如图11所示,该图像处理装置300包括获取模块310和处理模块320。FIG. 11 is a schematic structural diagram of an image processing device provided by an embodiment of the present application. As shown in FIG. 11 , the image processing device 300 includes an acquisition module 310 and a processing module 320 .

该图像处理装置300可以执行以下方案:The image processing device 300 can perform the following schemes:

获取模块310，用于采集第一图像和第二图像，第一图像的清晰度低于第二图像的清晰度，第一图像包括第一区域，第一区域为第一图像中清晰度小于预设阈值的区域。The acquisition module 310 is configured to acquire a first image and a second image, where the definition of the first image is lower than that of the second image, the first image includes a first region, and the first region is a region in the first image whose definition is less than a preset threshold.

处理模块320，用于将第一图像输入分割模型，确定是否得到掩膜块，其中，分割模型用于对第一图像中的第一区域进行分割，并生成与第一区域对应的掩膜块，第一区域用于表示第一图像中缺失细节的区域。The processing module 320 is configured to input the first image into a segmentation model and determine whether a mask block is obtained, where the segmentation model is used to segment the first region in the first image and generate a mask block corresponding to the first region, and the first region is used to represent a region of missing detail in the first image.

处理模块320还用于将第一图像和第二图像利用第一融合模型进行融合,得到第一融合图像。The processing module 320 is further configured to fuse the first image and the second image using the first fusion model to obtain a first fusion image.

当得到掩膜块时,处理模块320还用于根据掩膜块,确定第一图像中的第一图像块,确定第二图像中的第二图像块,并将第一图像块和第二图像块利用第二融合模型进行融合,得到融合图像块。When the mask block is obtained, the processing module 320 is further configured to determine the first image block in the first image according to the mask block, determine the second image block in the second image, and combine the first image block and the second image block The blocks are fused using the second fusion model to obtain fused image blocks.

处理模块320还用于将第一融合图像与融合图像块,利用第三融合模型进行融合,得到拍摄图像。The processing module 320 is further configured to fuse the first fusion image and the fusion image block by using the third fusion model to obtain a captured image.
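The flow across the acquisition module 310 and processing module 320 described above can be sketched as follows. The four models (`seg`, `fuse1`, `fuse2`, `fuse3`) are stand-in callables and `crop` is an illustrative helper, since this section does not define their internals:

```python
import numpy as np

def crop(image, mask):
    # Illustrative helper: take the bounding box of the mask's nonzero region.
    ys, xs = np.nonzero(mask)
    return image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

def process(first_image, second_image, seg, fuse1, fuse2, fuse3):
    mask = seg(first_image)                   # None when no low-definition region is found
    fused = fuse1(first_image, second_image)  # first fused image (global fusion)
    if mask is None:
        return fused                          # no mask block: the first fused image is the result
    block1 = crop(first_image, mask)          # first image block
    block2 = crop(second_image, mask)         # second image block
    fused_block = fuse2(block1, block2)       # fused image block (local fusion)
    return fuse3(fused, fused_block, mask)    # captured image (third fusion)
```

The branch on `mask is None` mirrors the optional embodiment where no mask block is obtained and the first fused image alone is returned.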

可选地，作为一个实施例，当未得到掩膜块时，处理模块320将第一图像和第二图像利用第一融合模型进行融合，得到第一融合图像。Optionally, as an embodiment, when no mask block is obtained, the processing module 320 fuses the first image and the second image using the first fusion model to obtain the first fused image.

可选地,作为一个实施例,处理模块320还用于对第一图像和第二图像进行配准。Optionally, as an embodiment, the processing module 320 is further configured to register the first image and the second image.

可选地,作为一个实施例,处理模块320还用于对第一图像块和第二图像块进行配准。Optionally, as an embodiment, the processing module 320 is further configured to register the first image block and the second image block.

配准包括:全局配准和/或局部配准,全局配准用于表示将多个图像中的全部内容进行配准,局部配准用于表示将多个图像中的局部内容进行配准。Registration includes: global registration and/or local registration. Global registration is used to register all content in multiple images, and local registration is used to register local content in multiple images.

Optionally, as an embodiment, the processing module 320 is further configured to train the first fusion model using a training image set with random highlight noise added, to obtain the second fusion model, where the training image set includes original images annotated with mask blocks.
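The highlight-noise augmentation could look like the following sketch, where the spot count, radius, and intensity ranges are illustrative assumptions rather than values from the patent:

```python
import numpy as np

def add_highlight_noise(img, rng, max_spots=3):
    # Paint a few random bright Gaussian spots onto a [0, 1] grayscale
    # image so that the fusion model being trained learns to cope with
    # highlights. All parameter ranges below are illustrative.
    h, w = img.shape
    out = img.copy()
    yy, xx = np.mgrid[0:h, 0:w]
    for _ in range(rng.integers(1, max_spots + 1)):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        sigma = rng.uniform(1.0, 4.0)
        spot = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
        out = np.clip(out + rng.uniform(0.3, 0.8) * spot, 0.0, 1.0)
    return out
```

During training, the augmented image would stand in for the degraded input while the clean original (with its annotated mask block) supervises the output.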

Optionally, as an embodiment, the third fusion model is a Laplacian fusion model.
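Laplacian fusion blends two inputs per frequency band, which hides the seam when the fused image block is merged back into the first fused image. A compact numpy sketch (a box-filter pyramid stands in for a true Gaussian pyramid, and dimensions are assumed divisible by 2 to the number of levels):

```python
import numpy as np

def _down(img):
    # 2x decimation via 2x2 block averaging (stand-in for Gaussian blur).
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def _up(img, shape):
    # Nearest-neighbour upsampling back to `shape`.
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[: shape[0], : shape[1]]

def laplacian_blend(a, b, mask, levels=3):
    # Build Laplacian pyramids of both inputs and a matching mask pyramid,
    # blend each band, then collapse the pyramid.
    la, lb, lm = [], [], []
    for _ in range(levels):
        da, db, dm = _down(a), _down(b), _down(mask)
        la.append(a - _up(da, a.shape))
        lb.append(b - _up(db, b.shape))
        lm.append(mask)
        a, b, mask = da, db, dm
    out = mask * a + (1 - mask) * b  # blend the coarsest level
    for lap_a, lap_b, m in zip(reversed(la), reversed(lb), reversed(lm)):
        out = _up(out, lap_a.shape) + m * lap_a + (1 - m) * lap_b
    return out
```

Because low frequencies are mixed with a progressively smoothed mask while high frequencies switch sharply, the transition region stays free of ghosting and visible borders.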

It should be noted that the image processing apparatus 300 described above is embodied in the form of functional modules. The term "module" here may be implemented in the form of software and/or hardware, which is not specifically limited.

For example, a "module" may be a software program, a hardware circuit, or a combination of both that realizes the above functions. The hardware circuit may include an application-specific integrated circuit (ASIC), an electronic circuit, a processor for executing one or more software or firmware programs (such as a shared processor, a dedicated processor, or a group processor), memory, merged logic circuits, and/or other suitable components supporting the described functions.

Therefore, the example modules described in the embodiments of the present application can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functions for each specific application, but such implementations should not be regarded as exceeding the scope of the present application.

An embodiment of the present application further provides another electronic device, including a camera module, a processor, and a memory.

The camera module is configured to capture a first image and a second image, where the first image and the second image are images of the same scene to be photographed, and the definition of the first image is lower than the definition of the second image.

The memory is configured to store a computer program that can run on the processor.

The processor is configured to execute the processing steps in the image processing method described above.

Optionally, the camera module includes a wide-angle camera, a main camera, and a telephoto camera. The wide-angle camera is configured to capture the first image after the processor obtains a photographing instruction, and the main camera is configured to capture the second image after the processor obtains the photographing instruction; or, the main camera is configured to capture the first image after the processor obtains the photographing instruction, and the telephoto camera is configured to capture the second image after the processor obtains the photographing instruction.
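The pairing of cameras can be thought of as a zoom-dependent dispatch; the threshold below is hypothetical and only illustrates the two pairings described above:

```python
def pick_camera_pair(zoom):
    # Hypothetical dispatch: the 1x threshold is illustrative, not from
    # the patent. In the wide range, the wide-angle camera supplies the
    # (lower-definition) first image and the main camera the second; in
    # the tele range, the main camera supplies the first image and the
    # telephoto camera the second.
    if zoom < 1.0:
        return ("wide", "main")
    return ("main", "telephoto")
```

Either way, the two captured frames feed the same fusion pipeline, with the lower-definition frame as the first image.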

Strictly speaking, the images are acquired by the image sensors in the color camera and the black-and-white camera. The image sensor may be, for example, a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) sensor.

An embodiment of the present application further provides a computer-readable storage medium storing computer instructions; when the computer instructions are run on an image processing apparatus, the image processing apparatus is caused to execute the method shown in FIG. 3 and/or FIG. 4. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (such as infrared, radio, or microwave) means. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium, or a semiconductor medium (for example, a solid state disk (SSD)).

An embodiment of the present application further provides a computer program product containing computer instructions which, when run on an image processing apparatus, enable the image processing apparatus to execute the method shown in FIG. 3 and/or FIG. 4.

FIG. 12 is a schematic structural diagram of a chip provided by an embodiment of the present application. The chip shown in FIG. 12 may be a general-purpose processor or a special-purpose processor. The chip includes a processor 401, where the processor 401 is configured to support the image processing apparatus in executing the technical solutions shown in FIG. 3 and/or FIG. 4.

Optionally, the chip further includes a transceiver 402, where the transceiver 402 is controlled by the processor 401 and configured to support the communication apparatus in executing the technical solutions shown in FIG. 3 and/or FIG. 4.

Optionally, the chip shown in FIG. 12 may further include a storage medium 403.

It should be noted that the chip shown in FIG. 12 may be implemented using the following circuits or devices: one or more field-programmable gate arrays (FPGAs), programmable logic devices (PLDs), controllers, state machines, gate logic, discrete hardware components, any other suitable circuit, or any combination of circuits capable of performing the various functions described throughout this application.

The electronic device, image processing apparatus, computer storage medium, computer program product, and chip provided by the above embodiments of the present application are all used to execute the method provided above. Therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects corresponding to the method provided above, which are not repeated here.

It should be understood that the foregoing is intended only to help those skilled in the art better understand the embodiments of the present application, not to limit the scope of those embodiments. Based on the examples given above, those skilled in the art can obviously make various equivalent modifications or changes; for example, some steps in the various embodiments of the above detection method may be unnecessary, some steps may be newly added, or any two or more of the above embodiments may be combined. Such modified, changed, or combined solutions also fall within the scope of the embodiments of the present application.

It should also be understood that the above description of the embodiments of the present application focuses on the differences between the embodiments; for the same or similar points that are not mentioned, the embodiments may be referred to one another, and for brevity, details are not repeated here.

It should also be understood that the sequence numbers of the above processes do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.

It should also be understood that, in the embodiments of the present application, "preset" and "predefined" may be implemented by pre-storing, in a device (for example, an electronic device), corresponding code, tables, or other means that can be used to indicate related information; the present application does not limit the specific implementation.

It should also be understood that the division into manners, cases, categories, and embodiments in the embodiments of the present application is only for convenience of description and should not constitute a particular limitation; features of the various manners, categories, cases, and embodiments may be combined where they do not contradict one another.

It should also be understood that, in the embodiments of the present application, unless otherwise specified or logically conflicting, the terms and/or descriptions in different embodiments are consistent and may be referenced by one another, and the technical features of different embodiments may be combined according to their inherent logical relationships to form new embodiments.

Finally, it should be noted that the foregoing is only a specific implementation of the present application, but the protection scope of the present application is not limited thereto; any change or replacement within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. An image processing method, applied to an electronic device including a first camera and a second camera, the method comprising: the electronic device starting a camera; displaying a preview interface, the preview interface including a first control; detecting a first operation on the first control; in response to the first operation, capturing, by the first camera, a first image and capturing, by the second camera, a second image, wherein the definition of the first image is lower than the definition of the second image, the first image includes a first region, and the first region is a region of the first image whose definition is less than a preset threshold; obtaining a mask block according to the first image, the mask block corresponding to the first region; fusing the first image and the second image to obtain a first fused image; determining, according to the mask block, a first image block in the first image and a second image block in the second image, the first image block corresponding to the mask block and the second image block corresponding to the mask block; fusing the first image block and the second image block to obtain a fused image block; and fusing the first fused image and the fused image block to obtain a third image.
2. The image processing method according to claim 1, wherein obtaining a mask block according to the first image comprises: inputting the first image into a segmentation model for segmentation and generating the mask block, wherein the segmentation model is used to segment the first region in the first image and generate the mask block corresponding to the first region.

3. The image processing method according to claim 1 or 2, wherein fusing the first image and the second image to obtain a first fused image comprises: fusing the first image and the second image using a first fusion model to obtain the first fused image.

4. The image processing method according to any one of claims 1 to 3, wherein fusing the first image block and the second image block to obtain a fused image block comprises: fusing the first image block and the second image block using a second fusion model to obtain the fused image block.

5. The image processing method according to any one of claims 1 to 4, wherein fusing the first fused image and the fused image block to obtain a third image comprises: fusing the first fused image and the fused image block using a third fusion model to obtain the third image.

6. The image processing method according to any one of claims 1 to 5, further comprising: when no mask block is obtained according to the first image, fusing the first image and the second image using the first fusion model to obtain the first fused image.
7. The image processing method according to claim 3 or 6, further comprising: registering the first image and the second image.

8. The image processing method according to claim 4, further comprising: registering the first image block and the second image block.

9. The image processing method according to claim 7 or 8, wherein the registration comprises global registration and/or local registration, the global registration referring to registering the entire content of multiple images, and the local registration referring to registering local content of multiple images.

10. The image processing method according to claim 4, further comprising: training the first fusion model using a training image set with random highlight noise added, to obtain the second fusion model, wherein the training image set comprises original images annotated with mask blocks.

11. The image processing method according to claim 5, wherein the third fusion model is a Laplacian fusion model.
12. An electronic device, comprising a camera module, a processor, and a memory; the camera module being configured to capture a first image and a second image, wherein the definition of the first image is lower than the definition of the second image, the first image comprises a first region, and the first region is a region of the first image whose definition is less than a preset threshold; the memory being configured to store a computer program executable on the processor; and the processor being configured to execute the processing steps in the image processing method according to any one of claims 1 to 11.

13. The electronic device according to claim 12, wherein the camera module comprises a wide-angle camera, a main camera, and a telephoto camera; the wide-angle camera is configured to capture the first image after the processor obtains a photographing instruction, and the main camera is configured to capture the second image after the processor obtains the photographing instruction; or the main camera is configured to capture the first image after the processor obtains the photographing instruction, and the telephoto camera is configured to capture the second image after the processor obtains the photographing instruction.

14. A chip, comprising a processor configured to call and run a computer program from a memory, so that a device on which the chip is installed executes the image processing method according to any one of claims 1 to 11.
15. A computer-readable storage medium storing a computer program, the computer program comprising program instructions that, when executed by a processor, cause the processor to execute the image processing method according to any one of claims 1 to 11.
PCT/CN2022/091225 2021-08-12 2022-05-06 Image processing method and related device therefor Ceased WO2023015981A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110923642.9 2021-08-12
CN202110923642.9A CN114092364B (en) 2021-08-12 2021-08-12 Image processing method and related device

Publications (1)

Publication Number Publication Date
WO2023015981A1 true WO2023015981A1 (en) 2023-02-16

Family

ID=80296087

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/091225 Ceased WO2023015981A1 (en) 2021-08-12 2022-05-06 Image processing method and related device therefor

Country Status (2)

Country Link
CN (1) CN114092364B (en)
WO (1) WO2023015981A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117132629A (en) * 2023-02-17 2023-11-28 荣耀终端有限公司 Image processing method and electronic device
CN118450269A (en) * 2023-10-25 2024-08-06 荣耀终端有限公司 Image processing method and electronic device

Families Citing this family (11)

Publication number Priority date Publication date Assignee Title
CN114092364B (en) * 2021-08-12 2023-10-03 荣耀终端有限公司 Image processing method and related device
CN118474535A (en) * 2022-02-28 2024-08-09 荣耀终端有限公司 Multi-shot strategy scheduling method and related equipment thereof
CN114782296B (en) * 2022-04-08 2023-06-09 荣耀终端有限公司 Image fusion method, device and storage medium
CN116051386B (en) * 2022-05-30 2023-10-20 荣耀终端有限公司 Image processing methods and related equipment
CN115631098B (en) * 2022-06-16 2023-10-03 荣耀终端有限公司 De-reflection methods and devices
CN116245741B (en) * 2022-06-28 2023-11-17 荣耀终端有限公司 Image processing method and related device
CN116051368B (en) * 2022-06-29 2023-10-20 荣耀终端有限公司 Image processing methods and related equipment
CN116110035A (en) * 2022-12-26 2023-05-12 杭州海康威视数字技术股份有限公司 Image processing method, device, electronic equipment and storage medium
CN116188311A (en) * 2023-02-28 2023-05-30 爱芯元智半导体(上海)有限公司 Image noise reduction method, device and electronic equipment
CN116801093B (en) * 2023-08-25 2023-11-28 荣耀终端有限公司 Image processing method, device and storage medium
CN117729445B (en) * 2024-02-07 2024-12-24 荣耀终端有限公司 Image processing method, electronic device and computer readable storage medium

Citations (7)

Publication number Priority date Publication date Assignee Title
CN107197169A (en) * 2017-06-22 2017-09-22 维沃移动通信有限公司 A kind of high dynamic range images image pickup method and mobile terminal
CN107277387A (en) * 2017-07-26 2017-10-20 维沃移动通信有限公司 High dynamic range images image pickup method, terminal and computer-readable recording medium
US10165194B1 (en) * 2016-12-16 2018-12-25 Amazon Technologies, Inc. Multi-sensor camera system
CN112184609A (en) * 2020-10-10 2021-01-05 展讯通信(上海)有限公司 Image fusion method and device, storage medium and terminal
CN112995544A (en) * 2019-12-02 2021-06-18 三星电子株式会社 System and method for generating multiple exposure frames from a single input
CN113099123A (en) * 2021-04-07 2021-07-09 中煤科工集团重庆研究院有限公司 High dynamic range video image acquisition method
CN114092364A (en) * 2021-08-12 2022-02-25 荣耀终端有限公司 Image processing method and related equipment

Family Cites Families (30)

Publication number Priority date Publication date Assignee Title
US9779491B2 (en) * 2014-08-15 2017-10-03 Nikon Corporation Algorithm and device for image processing
US9779227B1 (en) * 2014-10-24 2017-10-03 Amazon Technologies, Inc. Security system using keys encoded in holograms
CN108291867B (en) * 2015-07-01 2021-07-16 堀场仪器株式会社 Specialized test tube assembly and method for microscopic observation of nanoparticles in liquids
US10012580B2 (en) * 2015-10-14 2018-07-03 MANTA Instruments, Inc. Apparatus and method for measurements of growth or dissolution kinetics of colloidal particles
CN107730528A (en) * 2017-10-28 2018-02-23 天津大学 A kind of interactive image segmentation and fusion method based on grabcut algorithms
AU2018369977A1 (en) * 2017-11-17 2020-05-28 C 3 Limited Object measurement system
CN108682015B (en) * 2018-05-28 2021-10-19 安徽科大讯飞医疗信息技术有限公司 Focus segmentation method, device, equipment and storage medium in biological image
KR102192899B1 (en) * 2018-08-16 2020-12-18 주식회사 날비컴퍼니 Method and storage medium for applying bokeh effect to one or more images
CN111340044A (en) * 2018-12-19 2020-06-26 北京嘀嘀无限科技发展有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN109951633B (en) * 2019-02-18 2022-01-11 华为技术有限公司 Method for shooting moon and electronic equipment
CN112954219A (en) * 2019-03-18 2021-06-11 荣耀终端有限公司 Multi-channel video recording method and equipment
CN110430357B (en) * 2019-03-26 2021-01-29 华为技术有限公司 Image shooting method and electronic equipment
CN110163875A (en) * 2019-05-23 2019-08-23 南京信息工程大学 One kind paying attention to pyramidal semi-supervised video object dividing method based on modulating network and feature
CN110246141B (en) * 2019-06-13 2022-10-21 大连海事大学 A Vehicle Image Segmentation Method in Complex Traffic Scenes Based on Joint Corner Pooling
CN112116624B (en) * 2019-06-21 2025-05-06 华为技术有限公司 Image processing method and electronic device
CN113132620B (en) * 2019-12-31 2022-10-11 华为技术有限公司 Image shooting method and related device
CN111341419A (en) * 2020-02-19 2020-06-26 京东方科技集团股份有限公司 Medical image processing method, device, system, control system and storage medium
CN111582093A (en) * 2020-04-27 2020-08-25 北京工业大学 An automatic detection method for small objects in high-resolution images based on computer vision and deep learning
CN112001391A (en) * 2020-05-11 2020-11-27 江苏鲲博智行科技有限公司 A method of image feature fusion for image semantic segmentation
CN111612807B (en) * 2020-05-15 2023-07-25 北京工业大学 Small target image segmentation method based on scale and edge information
CN111709878B (en) * 2020-06-17 2023-06-23 北京百度网讯科技有限公司 Face super-resolution implementation method and device, electronic equipment and storage medium
CN112116620B (en) * 2020-09-16 2023-09-22 北京交通大学 Indoor image semantic segmentation and coating display method
CN112507777A (en) * 2020-10-10 2021-03-16 厦门大学 Optical remote sensing image ship detection and segmentation method based on deep learning
CN112465843A (en) * 2020-12-22 2021-03-09 深圳市慧鲤科技有限公司 Image segmentation method and device, electronic equipment and storage medium
CN112598580B (en) * 2020-12-29 2023-07-25 广州光锥元信息科技有限公司 Method and device for improving definition of portrait photo
CN112950606B (en) * 2021-03-15 2023-04-07 重庆邮电大学 Mobile phone screen defect segmentation method based on small samples
CN112926556B (en) * 2021-04-28 2023-05-02 上海大学 Semantic segmentation-based aerial photography transmission line broken strand identification method and system
CN113111857A (en) * 2021-05-10 2021-07-13 金华高等研究院 Human body posture estimation method based on multi-mode information fusion
CN113239784B (en) * 2021-05-11 2022-09-30 广西科学院 Pedestrian re-identification system and method based on space sequence feature learning
CN113240679A (en) * 2021-05-17 2021-08-10 广州华多网络科技有限公司 Image processing method, image processing device, computer equipment and storage medium



Also Published As

Publication number Publication date
CN114092364A (en) 2022-02-25
CN114092364B (en) 2023-10-03

Similar Documents

Publication Publication Date Title
CN114092364B (en) Image processing method and related device
WO2022262260A1 (en) Photographing method and electronic device
WO2021052232A1 (en) Time-lapse photography method and device
WO2020073959A1 (en) Image capturing method, and electronic device
US12301993B2 (en) Photographing method and apparatus
WO2020168956A1 (en) Method for photographing the moon and electronic device
US20240119566A1 (en) Image processing method and apparatus, and electronic device
CN113810601B (en) Terminal image processing method, device and terminal equipment
WO2024045670A1 (en) Method for generating high-dynamic-range video, and electronic device
WO2023015991A1 (en) Photography method, electronic device, and computer readable storage medium
CN113630558B (en) A camera exposure method and electronic device
US20240236504A9 (en) Point light source image detection method and electronic device
CN113660408B (en) Anti-shake method and device for video shooting
CN114466134A (en) Method and electronic device for generating HDR image
CN117496391B (en) Image processing method and electronic equipment
WO2023226612A1 (en) Exposure parameter determining method and apparatus
WO2022001258A1 (en) Multi-screen display method and apparatus, terminal device, and storage medium
CN115567630A (en) Method for managing electronic equipment, electronic equipment, and readable storage medium
WO2024078275A1 (en) Image processing method and apparatus, electronic device and storage medium
CN113592751B (en) Image processing method and device and electronic equipment
CN115460343B (en) Image processing method, device and storage medium
CN119631419A (en) Image processing method and related equipment
WO2022267608A1 (en) Exposure intensity adjusting method and related apparatus
WO2023015985A1 (en) Image processing method and electronic device
CN119520974B (en) Shooting method and electronic equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22854977

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22854977

Country of ref document: EP

Kind code of ref document: A1