
CN111126568B - Image processing method and device, electronic equipment and computer readable storage medium - Google Patents

Image processing method and device, electronic equipment and computer readable storage medium

Info

Publication number
CN111126568B
CN111126568B (application CN201911252330.9A)
Authority
CN
China
Prior art keywords
image
repaired
upsampling
feature map
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911252330.9A
Other languages
Chinese (zh)
Other versions
CN111126568A (en)
Inventor
阎法典
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911252330.9A
Publication of CN111126568A
Application granted
Publication of CN111126568B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing method, an image processing device, an electronic device, and a non-volatile computer-readable storage medium. The image processing method comprises the following steps: acquiring the exposure duration used when an image to be repaired was captured; determining the exposure duration interval in which the exposure duration falls; determining a repair model according to the exposure duration interval, wherein different exposure duration intervals correspond to different repair models; and performing repair processing on the portrait region of the image to be repaired according to the determined repair model. According to the image processing method, the image processing device, the electronic device, and the non-volatile computer-readable storage medium, different repair models are selected according to the exposure duration of the image to be repaired, so that a repair model suited to the current exposure duration can be chosen to repair the image, which improves the repair effect of the repaired image.

Description

Image processing method and device, electronic device, and computer-readable storage medium

Technical Field

The present application relates to the field of image processing, and in particular to an image processing method, an image processing device, an electronic device, and a non-volatile computer-readable storage medium.

Background

Ultra-clear portrait technology uses image processing algorithms to process the portrait in an image so that the portrait has richer detail and higher definition. However, when an ultra-clear portrait algorithm is used to repair a portrait, a single pre-selected algorithm is usually applied to every image, which may lead to a poor repair effect.

Summary of the Invention

Embodiments of the present application provide an image processing method, an image processing device, an electronic device, and a non-volatile computer-readable storage medium.

The image processing method according to the embodiments of the present application is used in an electronic device. The image processing method includes: acquiring the exposure duration used when an image to be repaired was captured; determining the exposure duration interval in which the exposure duration falls; determining a repair model according to the exposure duration interval, where different exposure duration intervals correspond to different repair models; and performing repair processing on the portrait region of the image to be repaired according to the determined repair model.

The image processing device according to the embodiments of the present application is used in an electronic device. The image processing device includes an acquisition module, a first determination module, a second determination module, and a repair module. The acquisition module is configured to acquire the exposure duration used when the image to be repaired was captured. The first determination module is configured to determine the exposure duration interval in which the exposure duration falls. The second determination module is configured to determine a repair model according to the exposure duration interval, where different exposure duration intervals correspond to different repair models. The repair module is configured to perform repair processing on the portrait region of the image to be repaired according to the determined repair model.

The electronic device according to the embodiments of the present application includes a housing and a processor. The processor is mounted on the housing. The processor is configured to: acquire the exposure duration used when the image to be repaired was captured; determine the exposure duration interval in which the exposure duration falls; determine a repair model according to the exposure duration interval, where different exposure duration intervals correspond to different repair models; and perform repair processing on the portrait region of the image to be repaired according to the determined repair model.

Embodiments of the present application also provide a non-volatile computer-readable storage medium containing computer-readable instructions. When the computer-readable instructions are executed by a processor, the processor performs the following steps: acquiring the exposure duration used when the image to be repaired was captured; determining the exposure duration interval in which the exposure duration falls; determining a repair model according to the exposure duration interval, where different exposure duration intervals correspond to different repair models; and performing repair processing on the portrait region of the image to be repaired according to the determined repair model.

The image processing method, image processing device, electronic device, and non-volatile computer-readable storage medium of the embodiments of the present application select different repair models according to the exposure duration of the image to be repaired, so that a repair model suited to the current exposure duration can be chosen to repair the image to be repaired, which improves the repair effect of the repaired image.

Additional aspects and advantages of the embodiments of the present application will be set forth in part in the following description, and in part will become apparent from the following description or may be learned by practice of the present application.

Brief Description of the Drawings

The above and/or additional aspects and advantages of the present application will become apparent and easy to understand from the following description of embodiments taken in conjunction with the accompanying drawings, in which:

FIG. 1 is a schematic flowchart of an image processing method according to some embodiments of the present application;

FIG. 2 is a schematic diagram of an image processing device according to some embodiments of the present application;

FIG. 3 is a schematic diagram of an electronic device according to some embodiments of the present application;

FIG. 4 is a schematic flowchart of an image processing method according to some embodiments of the present application;

FIG. 5 is a schematic diagram of an image processing device according to some embodiments of the present application;

FIG. 6 is a schematic flowchart of an image processing method according to some embodiments of the present application;

FIG. 7 is a schematic flowchart of an image processing method according to some embodiments of the present application;

FIG. 8 is a schematic diagram of a repair module in an image processing device according to some embodiments of the present application;

FIG. 9 is a schematic diagram of a second processing unit in an image processing device according to some embodiments of the present application;

FIG. 10 is a schematic diagram of a face detection model according to some embodiments of the present application;

FIG. 11 is a schematic diagram of a face repair model according to some embodiments of the present application;

FIG. 12 is a schematic flowchart of an image processing method according to some embodiments of the present application;

FIG. 13 is a schematic diagram of an image processing device according to some embodiments of the present application;

FIG. 14 is a schematic flowchart of an image processing method according to some embodiments of the present application;

FIG. 15 is a schematic diagram of a third acquisition module in an image processing device according to some embodiments of the present application;

FIG. 16 is a schematic flowchart of an image processing method according to some embodiments of the present application;

FIG. 17 is a schematic diagram of a processing module in an image processing device according to some embodiments of the present application;

FIG. 18 is a schematic flowchart of an image processing method according to some embodiments of the present application;

FIG. 19 is a schematic diagram of the interaction between a non-volatile computer-readable storage medium and a processor according to some embodiments of the present application.

Detailed Description of the Embodiments

Embodiments of the present application are described in detail below. Examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals denote the same or similar elements, or elements with the same or similar functions, throughout. The embodiments described below with reference to the drawings are exemplary, are only intended to explain the embodiments of the present application, and should not be construed as limiting them.

Referring to FIG. 1 and FIG. 3, the present application provides an image processing method. The image processing method includes:

01: acquiring the exposure duration used when the image to be repaired was captured;

02: determining the exposure duration interval in which the exposure duration falls;

03: determining a repair model according to the exposure duration interval, where different exposure duration intervals correspond to different repair models; and

04: performing repair processing on the portrait region of the image to be repaired according to the determined repair model.

Referring to FIG. 2 and FIG. 3, the present application also provides an image processing device 10. The image processing method of the embodiments of the present application can be implemented by the image processing device 10. The image processing device 10 includes a first acquisition module 11, a first determination module 12, a second determination module 13, and a repair module 14. Step 01 can be implemented by the first acquisition module 11. Step 02 can be implemented by the first determination module 12. Step 03 can be implemented by the second determination module 13. Step 04 can be implemented by the repair module 14. That is, the first acquisition module 11 may be configured to acquire the exposure duration used when the image to be repaired was captured. The first determination module 12 may be configured to determine the exposure duration interval in which the exposure duration falls. The second determination module 13 may be configured to determine a repair model according to the exposure duration interval, where different exposure duration intervals correspond to different repair models. The repair module 14 may be configured to perform repair processing on the portrait region of the image to be repaired according to the determined repair model.

Referring to FIG. 3, the present application also provides an electronic device 20. The image processing method of the embodiments of the present application can also be implemented by the electronic device 20. The electronic device 20 may be a mobile phone, a tablet computer, a notebook computer, a smart wearable device (such as a smart watch, smart band, smart helmet, or smart glasses), a smart mirror, a drone, an unmanned vehicle, an unmanned ship, or the like, which is not limited here. The electronic device 20 includes a housing 22, a processor 21, and a camera 23. Both the processor 21 and the camera 23 are mounted on the housing 22. Step 01, step 02, step 03, and step 04 can all be implemented by the processor 21. That is, the processor 21 may be configured to acquire the exposure duration used when the image to be repaired was captured and to determine the exposure duration interval in which the exposure duration falls. The processor 21 may also be configured to determine a repair model according to the exposure duration interval and to perform repair processing on the portrait region of the image to be repaired according to the determined repair model, where different exposure duration intervals correspond to different repair models.

The repair model is a pre-trained model. Each repair model can be regarded as one ultra-clear portrait algorithm, and different repair models correspond to different ultra-clear portrait algorithms. FIG. 11 shows an example of a repair model. As shown in FIG. 11, the portrait region of the image to be repaired is used as the input of the repair model, and the repair model performs multiple convolution, pooling, upsampling, and deconvolution operations on the portrait region. The repair models corresponding to different exposure durations may have different numbers of convolutional layers, pooling layers, upsampling layers, and deconvolutional layers, as well as different weights. In one embodiment of the present application, the different repair models have the same number of convolutional layers, pooling layers, upsampling layers, and deconvolutional layers, but different weights. Because the weights differ, the repair models produce different repaired outputs for the same input image to be repaired; in other words, repaired images with different repair effects can be obtained. When the exposure duration of the input image to be repaired falls within the exposure duration interval corresponding to a repair model, using that repair model to repair the image yields the repaired image with the best repair effect.

Specifically, assume there are three exposure duration intervals: [T1, T2), [T2, T3), and [T3, T4]. The repair model corresponding to the interval [T1, T2) is P1, the repair model corresponding to [T2, T3) is P2, and the repair model corresponding to [T3, T4] is P3. If the exposure duration of the image to be repaired is t and T2 < t < T3, the processor 21 should select repair model P2 to repair the image. It should be noted that three exposure duration intervals is only an example; in other examples the number of intervals may be two, four, five, eight, ten, fifteen, twenty, thirty, and so on, which is not limited here. The number of repair models equals the number of exposure duration intervals.
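
As an illustration only (not part of the patent text), the interval-to-model dispatch described above could be sketched as follows. The boundary values, model placeholders, and helper function are assumptions introduced for the example.

```python
from bisect import bisect_right

# Hypothetical exposure-duration boundaries (seconds) defining [T1,T2), [T2,T3), [T3,T4].
BOUNDARIES = [1/125, 1/60, 1/30, 1/8]          # T1, T2, T3, T4 (assumed values)
REPAIR_MODELS = ["P1", "P2", "P3"]             # stand-ins for three pre-trained repair models

def select_repair_model(exposure_duration: float):
    """Return the repair model whose exposure-duration interval contains the given duration."""
    t1, t4 = BOUNDARIES[0], BOUNDARIES[-1]
    if not (t1 <= exposure_duration <= t4):
        return None  # outside the predetermined range; handled separately (see steps 06-09)
    # bisect_right finds the interval [T_i, T_{i+1}) that contains the duration.
    idx = min(bisect_right(BOUNDARIES, exposure_duration) - 1, len(REPAIR_MODELS) - 1)
    return REPAIR_MODELS[idx]

print(select_repair_model(1/45))   # falls within [T2, T3) -> "P2"
```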

It can be understood that the sharpness of the image to be repaired acquired by the camera 23 usually varies with the exposure duration. In general, the longer the exposure duration, the more likely the electronic device 20 is to shake, and the lower the sharpness of the image acquired by the camera 23. Images with different sharpness contain different amounts of detail in the portrait region. Therefore, selecting, for an image captured under a given exposure duration, the repair model corresponding to that exposure duration allows the repaired image output by that model to achieve the best repair effect.

In the related art, whenever an electronic device acquires an image containing a portrait, it usually applies the same ultra-clear portrait algorithm to repair the portrait region of that image. In practice, however, different images usually have different sharpness; always using the same ultra-clear portrait algorithm on the portrait region may therefore result in a poor repair effect.

The image processing method, the image processing device 10, and the electronic device 20 of the embodiments of the present application select the repair model corresponding to the exposure duration of the image to be repaired, so that the most suitable repair model is used to repair the image, which helps to improve the repair effect of the repaired image.

Referring to FIG. 4, in some embodiments, the image processing method further includes:

051: establishing an initial model;

052: acquiring multiple frames of training images whose exposure durations fall within different exposure duration intervals; and

053: training the initial model with the multiple frames of training images whose exposure durations fall within the same exposure duration interval, to obtain the repair model corresponding to that exposure duration interval.

Referring to FIG. 5, in some embodiments, the image processing device 10 further includes an establishment module 151, a second acquisition module 152, and a training module 153. Step 051 can be implemented by the establishment module 151. Step 052 can be implemented by the second acquisition module 152. Step 053 can be implemented by the training module 153. That is, the establishment module 151 may be configured to establish an initial model. The second acquisition module 152 may be configured to acquire multiple frames of training images whose exposure durations fall within different exposure duration intervals. The training module 153 may be configured to train the initial model with the multiple frames of training images whose exposure durations fall within the same exposure duration interval, to obtain the repair model corresponding to that exposure duration interval.

Referring again to FIG. 3, in some embodiments, step 051, step 052, and step 053 can all be implemented by the processor 21. That is, the processor 21 may be configured to establish an initial model and to acquire multiple frames of training images whose exposure durations fall within different exposure duration intervals. The processor 21 may also be configured to train the initial model with the multiple frames of training images whose exposure durations fall within the same exposure duration interval, to obtain the repair model corresponding to that exposure duration interval.

Specifically, assume that three exposure duration intervals are set: [T1, T2), [T2, T3), and [T3, T4]. The processor 21 first establishes an initial model. For example, the initial model has the same architecture as the repair model shown in FIG. 11, but its weights have not yet been trained. The processor 21 inputs a large number of training images into the initial model to train the weights. Specifically, the processor 21 acquires a large number of training images of a first type whose exposure durations fall within [T1, T2), a large number of training images of a second type whose exposure durations fall within [T2, T3), and a large number of training images of a third type whose exposure durations fall within [T3, T4]. The processor 21 then inputs the first-type training images into the initial model for training to obtain the repair model P1 corresponding to [T1, T2); model P1 has weights of a first type. The processor 21 inputs the second-type training images into the initial model for training to obtain the repair model P2 corresponding to [T2, T3); model P2 has weights of a second type, which differ from the first type. The processor 21 inputs the third-type training images into the initial model for training to obtain the repair model P3 corresponding to [T3, T4]; model P3 has weights of a third type, which differ from both the first and the second type. In this way the processor 21 can train multiple repair models. The processor 21 may store the repair models in a memory so that they can be called directly during subsequent portrait repair.
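
A rough sketch of this per-interval training loop is given below for illustration; it is not taken from the patent. The dataset layout, reconstruction loss, optimizer settings, and the `build_initial_model` factory are all assumptions standing in for the architecture of FIG. 11.

```python
import torch
import torch.nn as nn

def train_repair_models(build_initial_model, datasets_by_interval, epochs=10, lr=1e-4):
    """Train one copy of the initial model per exposure-duration interval.

    datasets_by_interval maps an interval key, e.g. "[T1,T2)", to an iterable of
    (blurry_portrait, sharp_portrait) tensor pairs captured within that interval.
    """
    repair_models = {}
    for interval, dataset in datasets_by_interval.items():
        model = build_initial_model()                      # same architecture, fresh weights
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.L1Loss()                              # assumed reconstruction loss
        for _ in range(epochs):
            for blurry, sharp in dataset:
                optimizer.zero_grad()
                loss = loss_fn(model(blurry), sharp)
                loss.backward()
                optimizer.step()
        repair_models[interval] = model                    # weights now specific to this interval
    return repair_models
```

The trained models can then be kept in memory or on disk and looked up by interval at repair time, as described above.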

Referring to FIG. 6 and FIG. 7, in some embodiments, step 04, performing repair processing on the portrait region of the image to be repaired according to the determined repair model, includes:

041: detecting the portrait region in the image to be repaired;

042: performing multiple convolutions on the portrait region to obtain multiple feature images;

043: performing multiple upsamplings and at least one deconvolution on the feature image output by the last convolution to obtain a residual image; and

044: fusing the residual image with the portrait region to obtain the repaired image.

Step 043, performing multiple upsamplings and at least one deconvolution on the feature image output by the last convolution to obtain a residual image, includes:

0431: in the first upsampling, performing upsampling and deconvolution on the feature image output by the last convolution;

0432: in the second and subsequent upsamplings, fusing the image obtained by the previous upsampling, the feature image whose size corresponds to the image obtained by the previous upsampling, and the images obtained by the previous N deconvolutions, and performing upsampling, or upsampling and deconvolution, on the fused image; and

0433: fusing the image obtained by the last upsampling with the images obtained by the previous N deconvolutions to obtain the residual image, where N ≥ 1 and N ∈ N+.

Referring to FIG. 8 and FIG. 9, in some embodiments, the repair module 14 includes a first detection unit 141, a first processing unit 142, a second processing unit 143, and a fusion unit 144. The second processing unit 143 includes a first processing subunit 1431, a second processing subunit 1432, and a fusion subunit 1433. Step 041 can be implemented by the first detection unit 141. Step 042 can be implemented by the first processing unit 142. Step 043 can be implemented by the second processing unit 143. Step 044 can be implemented by the fusion unit 144. Step 0431 can be implemented by the first processing subunit 1431. Step 0432 can be implemented by the second processing subunit 1432. Step 0433 can be implemented by the fusion subunit 1433. That is, the first detection unit 141 may be configured to detect the portrait region in the image to be repaired. The first processing unit 142 may be configured to perform multiple convolutions on the portrait region to obtain multiple feature images. The second processing unit 143 may be configured to perform multiple upsamplings and at least one deconvolution on the feature image output by the last convolution to obtain a residual image. The fusion unit 144 may be configured to fuse the residual image with the portrait region to obtain the repaired image. The first processing subunit 1431 may be configured to, in the first upsampling, perform upsampling and deconvolution on the feature image output by the last convolution. The second processing subunit 1432 may be configured to, in the second and subsequent upsamplings, fuse the image obtained by the previous upsampling, the feature image whose size corresponds to the image obtained by the previous upsampling, and the images obtained by the previous N deconvolutions, and to perform upsampling, or upsampling and deconvolution, on the fused image. The fusion subunit 1433 may be configured to fuse the image obtained by the last upsampling with the images obtained by the previous N deconvolutions to obtain the residual image, where N ≥ 1 and N ∈ N+.

Referring again to FIG. 3, in some embodiments, step 041, step 042, step 043, step 0431, step 0432, step 0433, and step 044 can all be implemented by the processor 21. That is, the processor 21 may also be configured to detect the portrait region in the image to be repaired and to perform multiple convolutions on the portrait region to obtain multiple feature images. The processor 21 may also be configured to perform multiple upsamplings and at least one deconvolution on the feature image output by the last convolution to obtain a residual image, and to fuse the residual image with the portrait region to obtain the repaired image. When performing multiple upsamplings and at least one deconvolution on the feature image output by the last convolution to obtain the residual image, the processor 21 is specifically configured to: in the first upsampling, perform upsampling and deconvolution on the feature image output by the last convolution; in the second and subsequent upsamplings, fuse the image obtained by the previous upsampling, the feature image whose size corresponds to the image obtained by the previous upsampling, and the images obtained by the previous N deconvolutions, and perform upsampling, or upsampling and deconvolution, on the fused image; and fuse the image obtained by the last upsampling with the images obtained by the previous N deconvolutions to obtain the residual image, where N ≥ 1 and N ∈ N+.

Specifically, with reference to FIG. 3, FIG. 10, and FIG. 11, the processor 21 first detects the portrait region in the image to be repaired. For example, the processor can detect the portrait region according to the face detection model shown in FIG. 10. The detection process of that model is as follows: the convolution and pooling layers (Convolution and Pooling) extract features from the image to be repaired to obtain multiple feature images; the last convolutional layer (Final Conv Feature Map) performs a final convolution on the feature images output by the convolution and pooling layers and outputs the result to the fully-connected layers (Fully-connected Layers); the fully-connected layers classify the feature image output by the last convolutional layer and pass the classification result to the coordinate output branch (Coordinate), which outputs the position coordinates of the face in the image to be repaired. Detection of the portrait region in the image to be repaired is then complete. The processor 21 then inputs the portrait region into the repair model corresponding to the exposure duration of the image to be repaired. In one example, the repair model may be the face repair model shown in FIG. 11; different repair models may all adopt the architecture shown in FIG. 11 but have different weights (not shown in FIG. 11). As shown in FIG. 11, after the portrait region is input into the face repair model, the processor 21 performs a first convolution on the portrait region and then a first pooling on the resulting feature image. The processor 21 then performs a second convolution on the feature image after the first pooling and a second pooling on the feature image after the second convolution. The processor 21 then performs a third convolution on the feature image after the second pooling and a third pooling on the feature image after the third convolution. The processor 21 then performs a fourth convolution on the feature image after the third pooling and a fourth pooling on the feature image after the fourth convolution. The processor 21 then performs a fifth convolution on the feature image after the fourth pooling. Next, the processor 21 performs the first upsampling and the first deconvolution on the feature image after the fifth convolution; the first deconvolution actually performs two deconvolution operations, one outputting an image of a first size and the other outputting an image of a second size, the second size being larger than the first size. The processor 21 then fuses the image obtained by the first upsampling with the feature image whose size corresponds to it (the feature image after the fourth convolution; this is the link in FIG. 11 between the convolutional layer of the fourth convolution and the upsampling layer of the second upsampling) and performs the second upsampling and the second deconvolution on the fused image; the second deconvolution likewise performs two deconvolution operations, one outputting an image of the second size and the other outputting an image of a third size, the third size being larger than the second size. The processor 21 then fuses the image obtained by the second upsampling, the feature image whose size corresponds to it (the feature image after the third convolution; in FIG. 11 the convolutional layer of the third convolution is linked to the upsampling layer of the third upsampling), and the image of the first size obtained by the first deconvolution, and performs the third upsampling on the fused image. The processor 21 then fuses the image obtained by the third upsampling, the feature image whose size corresponds to it (the feature image after the second convolution; in FIG. 11 the convolutional layer of the second convolution is linked to the upsampling layer of the fourth upsampling), the image of the second size obtained by the first deconvolution, and the image of the second size obtained by the second deconvolution, and performs the fourth upsampling on the fused image. The processor 21 then fuses the image obtained by the fourth upsampling with the image of the third size obtained by the second deconvolution to obtain the residual image. Finally, the processor 21 fuses the residual image with the portrait region that was input into the face repair model to obtain the repaired image. The face in the repaired image has richer detail and higher definition.
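
For readers who prefer code, the following is a deliberately simplified sketch of this kind of repair network, not the exact network of FIG. 11: it keeps the general pattern (repeated convolution and pooling, repeated upsampling fused with same-sized encoder features, and a residual added back to the input portrait) but omits the extra deconvolution branches and their cross-level connections. Channel counts and layer choices are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    # One "convolution" stage of the encoder/decoder (channel counts are assumptions).
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class SimplifiedRepairModel(nn.Module):
    """Encoder-decoder with skip connections and a residual output."""

    def __init__(self, channels=(16, 32, 64, 128, 256)):
        super().__init__()
        self.encoders = nn.ModuleList()
        in_ch = 3
        for ch in channels:
            self.encoders.append(conv_block(in_ch, ch))
            in_ch = ch
        # Each decoder stage consumes the upsampled features concatenated with a skip feature.
        self.decoders = nn.ModuleList([
            conv_block(channels[i] + channels[i - 1], channels[i - 1])
            for i in range(len(channels) - 1, 0, -1)
        ])
        self.to_residual = nn.Conv2d(channels[0], 3, 3, padding=1)

    def forward(self, portrait):
        feats = []
        x = portrait
        for enc in self.encoders[:-1]:
            x = enc(x)
            feats.append(x)                 # keep features for later fusion (skip connections)
            x = F.max_pool2d(x, 2)          # pooling after each convolution stage
        x = self.encoders[-1](x)            # deepest convolution, no pooling afterwards
        for dec, skip in zip(self.decoders, reversed(feats)):
            x = F.interpolate(x, size=skip.shape[-2:], mode="bilinear", align_corners=False)
            x = dec(torch.cat([x, skip], dim=1))   # fuse upsampled image with same-size feature
        residual = self.to_residual(x)
        return portrait + residual          # fuse the residual image with the portrait region

# Usage: repaired = SimplifiedRepairModel()(torch.randn(1, 3, 256, 256))
```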

The image processing method, the image processing device 10, and the electronic device 20 of the embodiments of the present application extract the features of the portrait region through multiple convolutions and multiple poolings, and then enlarge the extracted features through multiple upsamplings; in some of the upsampling steps, the image fused with the deconvolved images is processed, which realizes the transfer of features and the expansion of the image size. In addition, some upsampling steps process an image fused with the feature image whose size corresponds to the image obtained by the previous upsampling, that is, the feature extraction layer of the corresponding level is connected, so that high-level semantic features are transferred more fully during upsampling; the recovered portrait details are therefore more pronounced and more finely restored. Furthermore, in the face repair model shown in FIG. 11, the image obtained by the first deconvolution is not passed directly into the second upsampling to be fused with the image to be upsampled, but is instead passed into the third and fourth upsampling steps; similarly, the image obtained by the second deconvolution is not passed directly into the third upsampling but into the fourth. In this way high-level features are combined with low-level features, making the features richer and the details restored more accurately. Moreover, in the face repair model shown in FIG. 11, no deconvolution is performed on the images obtained by the second and subsequent upsamplings, which avoids the blocking artifacts that can occur when deconvolution is applied to low-level features.

Referring to FIG. 12, in some embodiments, the image processing method further includes:

06: determining whether the exposure duration is within a predetermined duration range;

when the exposure duration is within the predetermined duration range, performing the step of determining the exposure duration interval in which the exposure duration falls;

07: when the exposure duration is outside the predetermined duration range, determining whether the exposure duration is greater than the maximum value of the predetermined duration range;

08: when the exposure duration is greater than the maximum value of the predetermined duration range, acquiring a reference image whose definition is higher than a predetermined definition; and

09: performing portrait super-resolution processing on the image to be repaired according to the reference image to obtain the repaired image.

Referring to FIG. 13, in some embodiments, the image processing device 10 further includes a first judgment module 16, a second judgment module 17, a third acquisition module 18, and a processing module 19. Step 06 can be implemented by the first judgment module 16. Step 07 can be implemented by the second judgment module 17. Step 08 can be implemented by the third acquisition module 18. Step 09 can be implemented by the processing module 19. That is, the first judgment module 16 may be configured to determine whether the exposure duration is within the predetermined duration range. When the exposure duration is within the predetermined duration range, the first determination module 12 may be configured to determine the exposure duration interval in which the exposure duration falls. When the exposure duration is outside the predetermined duration range, the second judgment module 17 may be configured to determine whether the exposure duration is greater than the maximum value of the predetermined duration range. When the exposure duration is greater than the maximum value of the predetermined duration range, the third acquisition module 18 may be configured to acquire a reference image whose definition is higher than the predetermined definition. The processing module 19 may be configured to perform portrait super-resolution processing on the image to be repaired according to the reference image to obtain the repaired image.

Referring again to FIG. 3, in some embodiments, step 06, step 07, step 08, and step 09 can all be implemented by the processor 21. That is, the processor 21 may be configured to determine whether the exposure duration is within the predetermined duration range. When the exposure duration is within the predetermined duration range, the processor 21 may be configured to determine the exposure duration interval in which the exposure duration falls. When the exposure duration is outside the predetermined duration range, the processor 21 may be configured to determine whether the exposure duration is greater than the maximum value of the predetermined duration range. When the exposure duration is greater than the maximum value of the predetermined duration range, the processor 21 may be configured to acquire a reference image and to perform portrait super-resolution processing on the image to be repaired according to the reference image to obtain the repaired image, where the definition of the reference image is higher than the predetermined definition.

Specifically, the exposure duration intervals all lie within the predetermined duration range. For example, assuming there are three exposure duration intervals, [T1, T2), [T2, T3), and [T3, T4], the predetermined duration range may be [T1, T4]. When the exposure duration of the image to be repaired is within the predetermined duration range [T1, T4], the processor 21 can repair the image with the repair model corresponding to that exposure duration. When the exposure duration of the image to be repaired is outside the predetermined duration range [T1, T4], the image may be heavily blurred and the portrait details already indistinct; in that case a repair model may no longer achieve a good repair effect. Therefore, when the exposure duration of the image to be repaired is outside the predetermined duration range [T1, T4] and is greater than the maximum value of that range (i.e., T4), the processor 21 can repair the image using a reference image, so as to obtain a repaired image with a better repair effect. The reference image may be a preset user portrait or a preset standard portrait. Taking the electronic device 20 being a mobile phone as an example, the preset user portrait may be a portrait of the user captured in advance by the electronic device 20; it may be an ID photo in the user's album or another image of the user with higher definition. When no preset user portrait exists in the electronic device 20, a preset standard portrait may be obtained, for example any high-definition portrait of a person from the same region as the user downloaded from the network, such as a high-definition poster. Both the preset user portrait and the preset standard portrait must have a definition higher than the predetermined definition, which can be set in advance; only an image whose definition exceeds the predetermined definition can serve as a reference image (preset user portrait or preset standard portrait), so as to achieve a better image processing effect.
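
The branch logic of steps 06 and 07 can be summarized in a small illustrative helper; this is only a sketch of the decision flow, and the boundary values are the same assumed ones as in the earlier example.

```python
def choose_repair_path(exposure_duration, boundaries):
    """Return which branch of steps 06-07 applies to the given exposure duration.

    `boundaries` = [T1, T2, T3, T4]; the predetermined duration range is [T1, T4].
    """
    t_min, t_max = boundaries[0], boundaries[-1]
    if t_min <= exposure_duration <= t_max:
        return "repair_model"                  # steps 02-04: pick the interval-specific model
    if exposure_duration > t_max:
        return "reference_super_resolution"    # steps 08-09: repair with a reference portrait
    return "unhandled"                         # shorter than T1: not covered by the described flow

print(choose_repair_path(1/4, [1/125, 1/60, 1/30, 1/8]))  # -> "reference_super_resolution"
```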

Referring to FIG. 14, in some embodiments, step 08, acquiring the reference image, includes:

081: performing face detection on the portrait region of the image to be repaired and on the preset user portrait;

082: determining whether the similarity between the face in the image to be repaired and the face of the preset user is greater than or equal to a first preset similarity;

083: when the similarity between the face in the image to be repaired and the face of the preset user is greater than or equal to the first preset similarity, using the preset user portrait as the reference image; and

084: when the similarity between the face in the image to be repaired and the face of the preset user is less than the first preset similarity, acquiring the preset standard portrait as the reference image.

Referring to FIG. 15, in some embodiments, the third acquisition module 18 includes a second detection unit 181, a judgment unit 182, a determination unit 183, and a first acquisition unit 184. Step 081 can be implemented by the second detection unit 181. Step 082 can be implemented by the judgment unit 182. Step 083 can be implemented by the determination unit 183. Step 084 can be implemented by the first acquisition unit 184. That is, the second detection unit 181 may be configured to perform face detection on the portrait region of the image to be repaired and on the preset user portrait. The judgment unit 182 may be configured to determine whether the similarity between the face in the image to be repaired and the face of the preset user is greater than or equal to the first preset similarity. The determination unit 183 may be configured to use the preset user portrait as the reference image when that similarity is greater than or equal to the first preset similarity. The first acquisition unit 184 may be configured to acquire the preset standard portrait as the reference image when that similarity is less than the first preset similarity.

Referring again to FIG. 3, in some embodiments, step 081, step 082, step 083, and step 084 can all be implemented by the processor 21. That is, the processor 21 may be configured to perform face detection on the portrait region of the image to be repaired and on the preset user portrait, and to determine whether the similarity between the face in the image to be repaired and the face of the preset user is greater than or equal to the first preset similarity. The processor 21 may also be configured to use the preset user portrait as the reference image when that similarity is greater than or equal to the first preset similarity, and to acquire the preset standard portrait as the reference image when that similarity is less than the first preset similarity.

Specifically, the processor 21 may perform face detection on the image to be repaired and on the preset user portrait, for example with the face detection model shown in FIG. 10. The processor 21 may then further detect the facial feature points in the image to be repaired and the facial feature points in the preset user portrait, and compare the facial feature points of the two images. If the similarity of the facial feature points of the two images is greater than or equal to the first preset similarity, the portrait region of the image to be repaired and the preset user portrait show the same person (that is, the actual user corresponding to the portrait in the image to be repaired is the preset user). In this case the processor 21 can apply the portrait super-resolution algorithm (i.e., the ultra-clear portrait algorithm) to the portrait region of the image to be repaired according to the preset user portrait to obtain the repaired image. When two images of the same person are processed together, the portrait in the repaired image is more similar to the actual user and looks more natural, giving a better user experience. If the similarity of the facial feature points of the two images is lower than the first preset similarity, the portrait region of the image to be repaired and the preset user portrait do not show the same person (that is, the actual user is not the preset user); in this case using the standard portrait as the reference image for super-resolution processing gives a better result. The processor 21 therefore performs portrait super-resolution processing on the portrait region of the image to be repaired according to the preset standard portrait to obtain the repaired image.
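
Steps 081 to 084 amount to a similarity-gated choice of reference image. A minimal sketch is shown below, assuming the facial feature points have already been turned into L2-normalised embedding vectors by some face recognition model; the threshold value and the embedding representation are assumptions.

```python
import numpy as np

def choose_reference(repair_face_embedding, user_face_embedding,
                     user_portrait, standard_portrait, first_preset_similarity=0.6):
    """Steps 081-084: pick the reference image by comparing face similarity.

    Embeddings are assumed to be L2-normalised feature vectors, so their dot
    product is the cosine similarity; the 0.6 threshold is an assumed value.
    """
    similarity = float(np.dot(repair_face_embedding, user_face_embedding))
    if similarity >= first_preset_similarity:
        return user_portrait       # same person: use the preset user portrait
    return standard_portrait       # different person: fall back to the preset standard portrait
```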

请参阅图16,在某些实施方式中,步骤09根据参照图像对待修复图像进行人像超分算法处理,以得到修复图像,包括:Please refer to FIG. 16. In some embodiments, step 09 performs portrait super-resolution algorithm processing on the image to be repaired according to the reference image to obtain the repaired image, including:

091: acquiring a first feature map of the image to be repaired after upsampling;

092: acquiring a second feature map of the reference image after upsampling and downsampling;

093: acquiring a third feature map of the reference image that has not been upsampled or downsampled;

094: acquiring features in the second feature map whose similarity to the first feature map exceeds a second preset similarity, as reference features;

095: acquiring features in the second feature map whose similarity to the reference features exceeds a third preset similarity, to obtain a swapped feature map;

096: merging the swapped feature map with the first feature map to obtain a fourth feature map;

097: enlarging the fourth feature map by a predetermined factor to obtain a fifth feature map; and

098: taking the fifth feature map as the image to be repaired and repeating the above steps until the fifth feature map reaches the target magnification; the fifth feature map with the target magnification is the repaired image. (A control-flow sketch of these steps is given below.)
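The control flow of steps 091 through 098 can be sketched as follows. The per-iteration refinement is passed in as a callable because steps 091 through 097 are detailed later in the description; the step_scale of 2 and the bicubic stand-in used in the usage example are assumptions, not values taken from the text.

```python
import numpy as np
import cv2

def iterative_super_resolve(image, reference, refine_once, target_scale, step_scale=2.0):
    """Refine and enlarge the working image until the accumulated
    magnification reaches target_scale (steps 091-098).

    refine_once(image, reference, step_scale) stands in for one pass of
    steps 091-097 and must return an image step_scale times larger.
    """
    current, scale = image, 1.0
    while scale < target_scale:
        current = refine_once(current, reference, step_scale)  # steps 091-097
        scale *= step_scale                                    # step 098: check the magnification
    return current

# Toy usage: the refine step here is plain bicubic enlargement, ignoring the reference.
low_res = np.random.rand(64, 64, 3).astype(np.float32)
reference = np.random.rand(256, 256, 3).astype(np.float32)
bicubic = lambda img, ref, s: cv2.resize(img, None, fx=s, fy=s, interpolation=cv2.INTER_CUBIC)
result = iterative_super_resolve(low_res, reference, bicubic, target_scale=4)  # 256x256 output
```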

Referring to FIG. 17, in some embodiments, the processing module 19 includes a second acquiring unit 191, a third acquiring unit 192, a fourth acquiring unit 193, a fifth acquiring unit 194, a sixth acquiring unit 195, a merging unit 196, an enlarging unit 197, and a third processing unit 198. Step 091 may be implemented by the second acquiring unit 191, step 092 by the third acquiring unit 192, step 093 by the fourth acquiring unit 193, step 094 by the fifth acquiring unit 194, step 095 by the sixth acquiring unit 195, step 096 by the merging unit 196, step 097 by the enlarging unit 197, and step 098 by the third processing unit 198. That is, the second acquiring unit 191 may be configured to acquire the first feature map of the image to be repaired after upsampling. The third acquiring unit 192 may be configured to acquire the second feature map of the reference image after upsampling and downsampling. The fourth acquiring unit 193 may be configured to acquire the third feature map of the reference image that has not been upsampled or downsampled. The fifth acquiring unit 194 may be configured to acquire features in the second feature map whose similarity to the first feature map exceeds the second preset similarity, as reference features. The sixth acquiring unit 195 may be configured to acquire features in the second feature map whose similarity to the reference features exceeds the third preset similarity, to obtain the swapped feature map. The merging unit 196 may be configured to merge the swapped feature map with the first feature map to obtain the fourth feature map. The enlarging unit 197 may be configured to enlarge the fourth feature map by a predetermined factor to obtain the fifth feature map. The third processing unit 198 may be configured to take the fifth feature map as the image to be repaired and repeat the above steps until the fifth feature map reaches the target magnification; the fifth feature map with the target magnification is the repaired image.

Referring again to FIG. 3, in some embodiments, steps 091 through 098 may all be implemented by the processor 21. That is, the processor 21 may be configured to acquire the first feature map of the image to be repaired after upsampling, the second feature map of the reference image after upsampling and downsampling, and the third feature map of the reference image that has not been upsampled or downsampled. The processor 21 may also be configured to acquire features in the second feature map whose similarity to the first feature map exceeds the second preset similarity as reference features, and to acquire features in the second feature map whose similarity to the reference features exceeds the third preset similarity to obtain the swapped feature map. The processor 21 may also be configured to merge the swapped feature map with the first feature map to obtain the fourth feature map, and to enlarge the fourth feature map by a predetermined factor to obtain the fifth feature map. The processor 21 may further be configured to take the fifth feature map as the image to be repaired and repeat the above steps until the fifth feature map reaches the target magnification; the fifth feature map with the target magnification is the repaired image.

Specifically, upsampling can be understood as enlarging the image to be repaired or the reference image, and downsampling can be understood as reducing the reference image.
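For concreteness, these two operations typically look like the following with OpenCV; the interpolation choices (bicubic for enlarging, area averaging for reducing) are common defaults, not requirements of the method described here.

```python
import cv2

def upsample(img, factor=2.0):
    # Enlarging: more pixels, with new values interpolated from the originals.
    return cv2.resize(img, None, fx=factor, fy=factor, interpolation=cv2.INTER_CUBIC)

def downsample(img, factor=2.0):
    # Reducing: fewer pixels; INTER_AREA averages neighbourhoods to limit aliasing.
    return cv2.resize(img, None, fx=1.0 / factor, fy=1.0 / factor, interpolation=cv2.INTER_AREA)
```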

More specifically, referring to FIG. 19, in some embodiments, step 091 of acquiring the first feature map of the image to be repaired after upsampling includes:

0911: upsampling the image to be repaired;

0912: inputting the upsampled image to be repaired into a convolutional neural network for feature extraction to obtain the first feature map.

Step 092 of acquiring the second feature map of the reference image after upsampling and downsampling includes:

0921: downsampling the reference image;

0922: upsampling the downsampled reference image;

0923: inputting the upsampled reference image into the convolutional neural network for feature extraction to obtain the second feature map.

Step 093 of acquiring the third feature map of the reference image that has not been upsampled or downsampled includes:

0931: inputting the reference image into the convolutional neural network for feature extraction to obtain the third feature map.

The processor 21 may upsample (enlarge) the image to be repaired and then input the upsampled image into a convolutional neural network for feature extraction to obtain the first feature map. The first feature map can be understood as an enlarged representation of the portrait area in the image to be repaired, and it contains the various features of the portrait, such as the facial features, skin texture, hair, and contour. Because the first feature map is obtained from the enlarged image to be repaired, its definition is relatively low, while the definition of the reference image is relatively high. The processor 21 therefore first downsamples (reduces) the reference image and then upsamples the downsampled image, which blurs the reference image and increases the similarity between the second feature map and the first feature map. The second feature map may likewise contain features such as the facial features, skin texture, hair, and contour. In addition, the processor 21 inputs the reference image into the convolutional neural network for feature extraction to obtain the third feature map. It should be noted that the convolutional neural network is a trained deep-learning network that can extract features from the input image with high accuracy.
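A sketch of this feature-extraction stage is given below, assuming PyTorch tensors of shape (B, 3, H, W) and using a truncated pretrained VGG-19 as a stand-in for the unspecified convolutional neural network; the backbone choice and the scale factor are assumptions.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Truncated pretrained VGG-19 as a stand-in feature extractor; the description
# does not say which convolutional neural network is used.
backbone = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features[:12].eval()

def extract(x):
    with torch.no_grad():
        return backbone(x)

def three_feature_maps(to_repair, reference, factor=2.0):
    # to_repair, reference: float tensors of shape (B, 3, H, W).
    # First feature map: upsample the image to be repaired, then extract features (0911-0912).
    up_repair = F.interpolate(to_repair, scale_factor=factor,
                              mode="bicubic", align_corners=False)
    f1 = extract(up_repair)

    # Second feature map: blur the reference by down- then up-sampling so its
    # sharpness roughly matches the enlarged image, then extract features (0921-0923).
    blurred = F.interpolate(reference, scale_factor=1.0 / factor,
                            mode="bicubic", align_corners=False)
    blurred = F.interpolate(blurred, scale_factor=factor,
                            mode="bicubic", align_corners=False)
    f2 = extract(blurred)

    # Third feature map: features of the untouched, full-sharpness reference (0931).
    f3 = extract(reference)
    return f1, f2, f3
```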

The processor 21 then compares the features in the second feature map with those in the first feature map, determines their similarity, and compares that similarity with the second preset similarity. If the similarity is greater than or equal to the second preset similarity, the feature in the second feature map closely resembles the corresponding feature in the first feature map, and the processor 21 can use that feature of the second feature map as a reference feature. The processor 21 then compares the third feature map with the reference features, determines their similarity, and compares it with the third preset similarity; if the similarity is greater than or equal to the third preset similarity, the corresponding swapped feature map is obtained. Subsequently, the processor 21 merges the swapped feature map with the first feature map to obtain the fourth feature map, and enlarges the fourth feature map by a predetermined factor to obtain the fifth feature map. The processor 21 then checks the magnification of the fifth feature map; if it equals the target magnification, the processor 21 takes the fifth feature map as the repaired image. It should be noted that the second preset similarity and the third preset similarity may be the same as the first preset similarity described above.
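A toy version of this matching-and-merging stage is sketched below. It treats each spatial position's channel vector as one feature, uses cosine similarity as the comparison, follows the paragraph above in matching the reference features against the third feature map, and merges by simple addition. All of these choices are illustrative assumptions rather than exact definitions from the text.

```python
import torch
import torch.nn.functional as F

def swap_and_merge(f1, f2, f3, second_sim=0.8, third_sim=0.8, step_scale=2.0):
    """Toy version of steps 094-097 on per-pixel feature vectors.

    f1: features of the enlarged image to be repaired  (B, C, H, W)
    f2: features of the blurred reference              (B, C, H, W)
    f3: features of the sharp reference                (B, C, H, W)
    """
    # Step 094: positions where the blurred-reference features resemble the
    # image features become reference features.
    sim_12 = F.cosine_similarity(f1, f2, dim=1, eps=1e-8)        # (B, H, W)
    ref_mask = (sim_12 >= second_sim).unsqueeze(1).float()
    ref_feats = f2 * ref_mask

    # Step 095: keep the sharp-reference features that agree with those
    # reference features, giving the swapped feature map.
    sim_r3 = F.cosine_similarity(ref_feats, f3, dim=1, eps=1e-8)
    swap_mask = (sim_r3 >= third_sim).unsqueeze(1).float()
    swapped = f3 * swap_mask * ref_mask

    # Step 096: merge the swapped features into the image features
    # (simple addition here; the merge rule is not spelled out in the text).
    f4 = f1 + swapped

    # Step 097: enlarge by the predetermined factor.
    return F.interpolate(f4, scale_factor=step_scale, mode="bilinear", align_corners=False)
```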

In some embodiments, when the exposure duration is greater than the maximum value of the predetermined duration range and less than or equal to a predetermined value, the processor 21 may apply the portrait super-resolution algorithm to the image to be repaired according to the reference image to obtain the repaired image. When the exposure duration is greater than the predetermined value, the image to be repaired is already severely blurred, and neither the repair model shown in FIG. 11 nor the reference image can repair it with a satisfactory result, so the processor 21 does not perform repair processing on it. When the exposure duration is less than the minimum value of the predetermined duration range, the image to be repaired is only slightly blurred and relatively sharp, so the processor 21 likewise does not perform repair processing. In this way, the processor 21 processes the image to be repaired only when the exposure duration is greater than or equal to the minimum value of the predetermined duration range and less than or equal to the predetermined value, which avoids the power consumption of unnecessary repair processing and helps extend the battery life of the electronic device 20.
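The gating described in this paragraph reduces to a small decision function; the function name, the argument names, and the string labels below are placeholders chosen for illustration.

```python
def choose_repair_path(exposure, range_min, range_max, upper_limit):
    """Decide how to treat a frame from its exposure time (all in seconds).

    range_min..range_max : the predetermined duration range (model-based repair)
    upper_limit          : the predetermined value above which repair is skipped
    """
    if exposure < range_min:
        return "skip"           # image is sharp enough; avoid wasted power
    if exposure <= range_max:
        return "repair_model"   # pick a repair model by exposure interval
    if exposure <= upper_limit:
        return "reference_sr"   # reference-guided portrait super-resolution
    return "skip"               # too blurred for either approach
```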

Referring to FIG. 19, the present application also provides a non-volatile computer-readable storage medium 30 containing computer-readable instructions. When the computer-readable instructions are executed by the processor 21, the processor 21 performs the image processing method of any of the embodiments described above.

For example, referring to FIG. 1 and FIG. 19, when the computer-readable instructions are executed by the processor 21, the processor 21 performs the following steps of the image processing method:

01: acquiring the exposure duration of the image to be repaired at the time of shooting;

02: determining the exposure duration interval in which the exposure duration falls;

03: determining a repair model according to the exposure duration interval, different exposure duration intervals corresponding to different repair models; and

04: performing repair processing on the portrait area of the image to be repaired according to the determined repair model. (A sketch of the interval-based model selection follows.)
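Steps 02 and 03 amount to an interval lookup. In the sketch below, the interval boundaries and the model registry are made-up placeholders; the description does not fix how many intervals exist or where their boundaries lie.

```python
import bisect

# Hypothetical interval boundaries (seconds) and one trained repair model per interval.
BOUNDARIES = [1 / 100, 1 / 50, 1 / 25]
MODELS = ["model_interval_0", "model_interval_1", "model_interval_2", "model_interval_3"]

def pick_repair_model(exposure_seconds):
    # Step 02: locate the exposure duration interval.
    interval = bisect.bisect_right(BOUNDARIES, exposure_seconds)
    # Step 03: each interval maps to its own repair model.
    return MODELS[interval]

print(pick_repair_model(1 / 60))   # falls in the second interval -> "model_interval_1"
```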

As another example, referring to FIG. 6 and FIG. 19, when the computer-readable instructions are executed by the processor 21, the processor 21 performs the following steps of the image processing method:

041: detecting the portrait area in the image to be repaired;

042: performing multiple convolutions on the portrait area to obtain multiple feature images;

043: performing multiple upsampling operations and at least one deconvolution on the feature image output by the last convolution to obtain a residual image; and

044: fusing the residual image and the portrait area to obtain the repaired image. (A network sketch of this residual pipeline is given below.)
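A minimal PyTorch sketch of this convolution, upsampling-and-deconvolution, and residual-fusion pipeline follows. The layer count, channel width, and the single skip connection are arbitrary simplifications of what FIG. 6 and claim 3 describe, not the exact network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualRepair(nn.Module):
    """Toy 041-044 pipeline: convolutions produce feature images, upsampling
    plus a deconvolution produce a residual image, and the residual is fused
    (added) back onto the portrait region."""

    def __init__(self, ch=32):
        super().__init__()
        self.conv1 = nn.Conv2d(3, ch, 3, stride=2, padding=1)               # step 042
        self.conv2 = nn.Conv2d(ch, ch, 3, stride=2, padding=1)
        self.deconv = nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1)    # step 043
        self.to_rgb = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, portrait):                     # (B, 3, H, W), H and W divisible by 4
        f1 = F.relu(self.conv1(portrait))            # (B, ch, H/2, W/2)
        f2 = F.relu(self.conv2(f1))                  # (B, ch, H/4, W/4)
        up = F.interpolate(f2, scale_factor=2, mode="bilinear", align_corners=False)
        up = F.relu(self.deconv(up + f1))            # fuse with the same-size feature image
        residual = self.to_rgb(up)                   # (B, 3, H, W)
        return portrait + residual                   # step 044: fuse residual and portrait

model = ResidualRepair()
repaired = model(torch.rand(1, 3, 128, 128))   # -> torch.Size([1, 3, 128, 128])
```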

The non-volatile computer-readable storage medium 30 may be provided in the image processing apparatus 10 (shown in FIG. 2) or in the electronic device 20 (shown in FIG. 3), or it may be provided in a cloud server. When the non-volatile computer-readable storage medium 30 is provided in a cloud server, the image processing apparatus 10 or the electronic device 20 can communicate with the cloud server to obtain the corresponding computer-readable instructions.

It can be understood that the computer-readable instructions include computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The non-volatile computer-readable storage medium may include any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), a software distribution medium, and the like.

The processor 21 may be a driver board. The driver board may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.

In the description of this specification, reference to terms such as "one embodiment", "some embodiments", "an illustrative embodiment", "an example", "a specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present application. In this specification, such references do not necessarily refer to the same embodiment or example. Moreover, the described specific features, structures, materials, or characteristics may be combined in any suitable manner in any one or more embodiments or examples. In addition, where no contradiction arises, those skilled in the art may combine the different embodiments or examples described in this specification and the features of those embodiments or examples.

Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in the reverse order, depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.

Although embodiments of the present application have been shown and described above, it can be understood that the above embodiments are exemplary and are not to be construed as limiting the present application; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present application.

Claims (9)

1. An image processing method, characterized in that the image processing method comprises:
acquiring the exposure duration of an image to be repaired at the time of shooting;
determining the exposure duration interval in which the exposure duration falls;
determining a repair model according to the exposure duration interval, different exposure duration intervals corresponding to different repair models; and
performing repair processing on the portrait area of the image to be repaired according to the determined repair model;
the image processing method further comprising:
establishing an initial model;
acquiring multiple frames of training images whose exposure durations fall in different exposure duration intervals; and
training the initial model with multiple frames of the training images whose exposure durations fall in the same exposure duration interval, to obtain the repair model corresponding to that exposure duration interval.

2. The image processing method according to claim 1, characterized in that performing repair processing on the portrait area of the image to be repaired according to the determined repair model comprises:
detecting the portrait area in the image to be repaired;
performing multiple convolutions on the portrait area to obtain multiple feature images;
performing multiple upsampling operations and at least one deconvolution on the feature image output by the last convolution to obtain a residual image; and
fusing the residual image and the portrait area to obtain a repaired image.

3. The image processing method according to claim 2, characterized in that performing multiple upsampling operations and at least one deconvolution on the feature image output after the last convolution to obtain the residual image comprises:
in the first upsampling, performing upsampling and deconvolution on the feature image output by the last convolution;
in the second and subsequent upsamplings, fusing the image obtained by the previous upsampling, the feature image whose size corresponds to the image obtained by the previous upsampling, and the images obtained by the previous N deconvolutions, and performing upsampling, or upsampling and deconvolution, on the fused image; and
fusing the image obtained by the last upsampling and the images obtained by the previous N deconvolutions to obtain the residual image, where N≥1 and N∈N+.

4. The image processing method according to claim 1, characterized in that the image processing method further comprises:
determining whether the exposure duration falls within a predetermined duration range;
when the exposure duration falls within the predetermined duration range, performing the step of determining the exposure duration interval in which the exposure duration falls;
when the exposure duration is greater than the maximum value of the predetermined duration range, acquiring a reference image, the definition of the reference image being higher than a predetermined definition; and
applying a portrait super-resolution algorithm to the image to be repaired according to the reference image to obtain a repaired image.

5. The image processing method according to claim 4, characterized in that acquiring the reference image comprises:
performing face detection on the portrait area of the image to be repaired and on a preset user portrait;
when the similarity between the face in the image to be repaired and the face of the preset user is greater than or equal to a first preset similarity, using the preset user portrait as the reference image; and
when the similarity between the face in the image to be repaired and the face of the preset user is less than the first preset similarity, acquiring a preset standard portrait as the reference image.

6. The image processing method according to claim 4, characterized in that applying the portrait super-resolution algorithm to the image to be repaired according to the reference image to obtain the repaired image comprises:
acquiring a first feature map of the image to be repaired after upsampling;
acquiring a second feature map of the reference image after upsampling and downsampling;
acquiring a third feature map of the reference image;
acquiring features in the second feature map whose similarity to the first feature map exceeds a second preset similarity, as reference features;
acquiring features in the second feature map whose similarity to the reference features exceeds a third preset similarity, to obtain a swapped feature map;
merging the swapped feature map with the first feature map to obtain a fourth feature map;
enlarging the fourth feature map by a predetermined factor to obtain a fifth feature map; and
taking the fifth feature map as the image to be repaired and repeating the above steps until the fifth feature map reaches a target magnification, the fifth feature map with the target magnification being the repaired image.

7. An image processing apparatus for an electronic device, characterized by comprising:
an acquiring module configured to acquire the exposure duration of an image to be repaired at the time of shooting;
a first determining module configured to determine the exposure duration interval in which the exposure duration falls;
a second determining module configured to determine a repair model according to the exposure duration interval, different exposure duration intervals corresponding to different repair models;
a repair module configured to perform repair processing on the portrait area of the image to be repaired according to the determined repair model;
an establishing module configured to establish an initial model;
a second acquiring module configured to acquire multiple frames of training images whose exposure durations fall in different exposure duration intervals; and
a training module configured to train the initial model with multiple frames of the training images whose exposure durations fall in the same exposure duration interval, to obtain the repair model corresponding to that exposure duration interval.

8. An electronic device, characterized by comprising:
a housing; and
a processor mounted on the housing, the processor being configured to implement the image processing method according to any one of claims 1 to 6.

9. A non-volatile computer-readable storage medium containing computer-readable instructions, characterized in that, when the computer-readable instructions are executed by a processor, the processor performs the image processing method according to any one of claims 1 to 6.
CN201911252330.9A 2019-12-09 2019-12-09 Image processing method and device, electronic equipment and computer readable storage medium Active CN111126568B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911252330.9A CN111126568B (en) 2019-12-09 2019-12-09 Image processing method and device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111126568A (en) 2020-05-08
CN111126568B (en) 2023-08-08

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant