
CN113610884B - Image processing method, device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN113610884B
CN113610884B
Authority
CN
China
Prior art keywords
image
area
foreground
blurring
hair
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110771363.5A
Other languages
Chinese (zh)
Other versions
CN113610884A (en)
Inventor
王顺飞
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110771363.5A
Publication of CN113610884A
Application granted
Publication of CN113610884B

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/194: Segmentation; Edge detection involving foreground-background segmentation
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/90: Dynamic range modification of images or parts thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present application disclose an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium. The method comprises the following steps: identifying a foreground region in a first image to obtain a first foreground recognition result; blurring the first image based on the first foreground recognition result to obtain a first blurred image; in response to a selection operation on the first blurred image, determining one or more image regions selected by the selection operation in the first blurred image; identifying the foreground region of each image region to obtain a second foreground recognition result corresponding to each image region; and blurring the first image or the first blurred image based on the second foreground recognition result of each image region to obtain a second blurred image. The image processing method and apparatus, electronic device, and computer-readable storage medium can improve the accuracy of image foreground recognition and the blurring effect of the image.

Description

Image processing method, apparatus, electronic device, and computer-readable storage medium

Technical Field

The present application relates to the field of imaging technology, and in particular to an image processing method, apparatus, electronic device, and computer-readable storage medium.

Background

After an electronic device captures an image through an imaging apparatus (such as a camera), in order to highlight the object of interest in the image, it divides the image into a foreground region and a background region and blurs the background region, producing an image effect that makes the object of interest in the foreground stand out. For images in which the foreground and background regions are easily confused, part of the foreground region may be blurred by mistake, or part of the background region may be missed and left unblurred, resulting in a poor blurring effect.

Summary of the Invention

Embodiments of the present application disclose an image processing method, apparatus, electronic device, and computer-readable storage medium that can improve the accuracy of image foreground recognition and the blurring effect of the image.

An embodiment of the present application discloses an image processing method, comprising:

identifying a foreground region in a first image to obtain a first foreground recognition result;

blurring the first image based on the first foreground recognition result to obtain a first blurred image;

in response to a selection operation on the first blurred image, determining one or more image regions selected by the selection operation in the first blurred image;

identifying the foreground region of each of the image regions to obtain a second foreground recognition result corresponding to each image region; and

blurring the first image or the first blurred image based on the second foreground recognition result of each image region to obtain a second blurred image.
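The two-pass flow above can be sketched in Python. This is a hypothetical illustration, not the patent's implementation: the box blur, the [0, 1] mask convention, and the `segment_full`/`segment_patch` callbacks are all assumptions introduced here.

```python
import numpy as np

def box_blur(img, k=5):
    """Simple box blur standing in for the unspecified blurring filter."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def two_pass_blur(image, segment_full, segment_patch, regions):
    """First pass: segment the whole image and composite a blurred background.
    Second pass: re-segment only the user-selected regions and re-composite."""
    fg = segment_full(image)                      # first foreground mask in [0, 1]
    blurred = box_blur(image)
    first = fg * image + (1.0 - fg) * blurred     # first blurred image
    fg2 = fg.copy()
    for y0, y1, x0, x1 in regions:                # user-selected image regions
        fg2[y0:y1, x0:x1] = segment_patch(image[y0:y1, x0:x1])
    second = fg2 * image + (1.0 - fg2) * blurred  # second blurred image
    return first, second
```

The second composite reuses the same blurred base, so only the masks (and hence the misclassified pixels) change between the two passes.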

An embodiment of the present application discloses an image processing apparatus, comprising:

a first recognition module, configured to identify a foreground region in a first image to obtain a first foreground recognition result;

a blurring module, configured to blur the first image based on the first foreground recognition result to obtain a first blurred image;

a region selection module, configured to respond to a selection operation on the first blurred image and determine one or more image regions selected by the selection operation in the first blurred image; and

a second recognition module, configured to identify the foreground region of each of the image regions to obtain a second foreground recognition result corresponding to each image region;

wherein the blurring module is further configured to blur the first blurred image based on the second foreground recognition result of each image region to obtain a second blurred image.

An embodiment of the present application discloses an electronic device comprising a memory and a processor, wherein the memory stores a computer program which, when executed by the processor, causes the processor to implement the method described above.

An embodiment of the present application discloses a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the method described above.

In the image processing method and apparatus, electronic device, and computer-readable storage medium disclosed in the embodiments of the present application, after the first image is blurred based on the first foreground recognition result to obtain a first blurred image, one or more image regions selected by a selection operation on the first blurred image are determined in response to that operation; the foreground region of each selected image region is identified to obtain a second foreground recognition result for each region; and the first image or the first blurred image is then blurred based on the second foreground recognition results of the image regions to obtain a second blurred image. After the initial blurring of the first image, the user can select the image regions that need further refinement, and foreground recognition is performed again on the selected regions, which improves the accuracy of foreground recognition. Performing a second blurring pass on the first image or the first blurred image based on the more accurate second foreground recognition results reduces the cases where part of the foreground region is blurred by mistake or part of the background region is missed and left unblurred, thereby improving the blurring effect of the image. In addition, the user can choose which regions of the first blurred image to refine, which accommodates different user needs and improves interactivity.

Brief Description of the Drawings

To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings used in the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those of ordinary skill in the art may derive other drawings from them without creative effort.

FIG. 1 is a block diagram of an image processing circuit in one embodiment;

FIG. 2 is a flowchart of an image processing method in one embodiment;

FIG. 3A is a schematic diagram of a selection operation performed on a first blurred image in one embodiment;

FIG. 3B is a schematic diagram of displaying a selection box in one embodiment;

FIG. 3C is a schematic diagram of a selection operation performed on a first blurred image in another embodiment;

FIG. 3D is a schematic diagram of displaying a selection box in another embodiment;

FIG. 3E is a schematic diagram of adjusting the size of a selection box in one embodiment;

FIG. 4 is a flowchart of an image processing method in another embodiment;

FIG. 5 is a schematic diagram of correcting a first foreground recognition result using the second foreground recognition result of a selected image region in one embodiment;

FIG. 6 is a schematic diagram of correcting a first depth map using the second foreground recognition result of a selected image region in one embodiment;

FIG. 7 is a flowchart of an image processing method in another embodiment;

FIG. 8 is a schematic diagram of obtaining a local hair matting result corresponding to an image region in one embodiment;

FIG. 9 is a block diagram of an image processing apparatus in one embodiment;

FIG. 10 is a structural block diagram of an electronic device in one embodiment.

Detailed Description

The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.

It should be noted that the terms "comprising" and "having", and any variants thereof, in the embodiments and drawings of the present application are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device that comprises a series of steps or units is not limited to the listed steps or units, but may optionally include steps or units that are not listed, or other steps or units inherent to the process, method, product, or device.

It should be understood that the terms "first", "second", and the like used in this application may describe various elements herein, but these elements are not limited by the terms; the terms only serve to distinguish one element from another. For example, without departing from the scope of this application, a first foreground recognition result could be called a second foreground recognition result, and similarly a second foreground recognition result could be called a first foreground recognition result. Both are foreground recognition results, but they are not the same foreground recognition result.

An embodiment of the present application provides an electronic device. The electronic device includes an image processing circuit, which may be implemented using hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. FIG. 1 is a block diagram of an image processing circuit in one embodiment. For ease of explanation, FIG. 1 shows only the aspects of the image processing technology related to the embodiments of the present application.

As shown in FIG. 1, the image processing circuit includes an ISP processor 140 and control logic 150. Image data captured by an imaging device 110 is first processed by the ISP processor 140, which analyzes the image data to capture image statistics that can be used to determine one or more control parameters of the imaging device 110. The imaging device 110 may include one or more lenses 112 and an image sensor 114. The image sensor 114 may include a color filter array (such as a Bayer filter); it can obtain the light intensity and wavelength information captured by each imaging pixel and provide a set of raw image data that can be processed by the ISP processor 140. An attitude sensor 120 (such as a three-axis gyroscope, a Hall sensor, or an accelerometer) may provide acquired image processing parameters (such as anti-shake parameters) to the ISP processor 140 based on the attitude sensor 120 interface type. The attitude sensor 120 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of these interfaces.

It should be noted that although only one imaging device 110 is shown in FIG. 1, an embodiment of the present application may include at least two imaging devices 110; each imaging device 110 may correspond to its own image sensor 114, or multiple imaging devices 110 may share one image sensor 114, which is not limited here. The operation of each imaging device 110 is as described above.

In addition, the image sensor 114 may also send raw image data to the attitude sensor 120; the attitude sensor 120 may provide the raw image data to the ISP processor 140 based on the attitude sensor 120 interface type, or store it in an image memory 130.

The ISP processor 140 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits; the ISP processor 140 may perform one or more image processing operations on the raw image data and collect statistics about the image data. The image processing operations may be performed at the same or different bit-depth precisions.

The ISP processor 140 may also receive image data from the image memory 130. For example, the attitude sensor 120 interface sends raw image data to the image memory 130, and the raw image data in the image memory 130 is then provided to the ISP processor 140 for processing. The image memory 130 may be part of a memory device, a storage device, or an independent dedicated memory within the electronic device, and may include a DMA (Direct Memory Access) feature.

When receiving raw image data from the image sensor 114 interface, the attitude sensor 120 interface, or the image memory 130, the ISP processor 140 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 130 for additional processing before being displayed. The ISP processor 140 receives processed data from the image memory 130 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The image data processed by the ISP processor 140 may be output to a display 160 for viewing by the user and/or further processed by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 140 may be sent to the image memory 130, and the display 160 may read image data from the image memory 130. In one embodiment, the image memory 130 may be configured to implement one or more frame buffers.

Statistics determined by the ISP processor 140 may be sent to the control logic 150. For example, the statistics may include image sensor 114 information such as gyroscope vibration frequency, auto exposure, auto white balance, auto focus, flicker detection, black level compensation, and lens 112 shading correction. The control logic 150 may include a processor and/or microcontroller executing one or more routines (such as firmware) that determine, based on the received statistics, the control parameters of the imaging device 110 and of the ISP processor 140. For example, the control parameters of the imaging device 110 may include attitude sensor 120 control parameters (such as gain, integration time for exposure control, and anti-shake parameters), camera flash control parameters, camera anti-shake displacement parameters, lens 112 control parameters (such as focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (for example, during RGB processing), as well as lens 112 shading correction parameters.

By way of example, the image processing method provided by the embodiments of the present application is described with reference to the image processing circuit of FIG. 1. The ISP processor 140 may obtain a first image from the imaging device 110 or the image memory 130, identify the foreground region in the first image to obtain a first foreground recognition result, and blur the first image based on that result to obtain a first blurred image. The ISP processor 140 may output the first blurred image to the display 160. The user may select the image regions to be refined from the first blurred image shown on the display 160. In response to the selection operation on the first blurred image, the ISP processor 140 may determine the one or more image regions selected in the first blurred image, identify the foreground region of each image region to obtain a second foreground recognition result for each region, and then blur the first blurred image based on these second foreground recognition results to obtain a second blurred image. Optionally, the ISP processor 140 may output the second blurred image to the display 160 for display, or store it in the image memory 130.

As shown in FIG. 2, in one embodiment, an image processing method is provided that can be applied to the electronic device described above, which may include but is not limited to a mobile phone, a smart wearable device, a tablet computer, a PC (Personal Computer), a vehicle-mounted terminal, or a digital camera; the embodiments of the present application are not limited in this regard. The image processing method may include the following steps:

Step 210: identify a foreground region in the first image to obtain a first foreground recognition result.

The first image may include a foreground region and a background region, where the foreground region refers to the image region occupied by the target object in the first image and the background region refers to the rest of the first image. The target object is the object of interest in the first image: for example, if the first image is a picture of a person, the target object may be the person; if it is a picture of an animal, the target object may be the animal; if it is a picture of a building, the target object may be the building; and so on, without limitation.

The first image may be a color image, for example an image in RGB (Red Green Blue) format or in YUV format (where Y represents luminance and U and V represent chrominance). The first image may be an image pre-stored in the memory of the electronic device, or an image captured in real time by the camera of the electronic device.

The electronic device may perform foreground recognition on the first image to obtain a first foreground recognition result, which can be used to mark the foreground region in the first image. As one implementation, the electronic device may obtain depth information for each pixel in the first image; the depth information characterizes the distance between the photographed object and the camera, with larger values indicating greater distance. Because the depth information of the foreground and background regions differs considerably, the per-pixel depth information can be used to divide the first image into foreground and background regions: for example, the background region may consist of the pixels whose depth exceeds a first threshold, and the foreground region may consist of the pixels whose depth is below a second threshold.
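A minimal sketch of this depth-threshold split, assuming a normalized per-pixel depth map in which larger values mean farther from the camera (the threshold values are illustrative, not taken from the patent):

```python
import numpy as np

def split_by_depth(depth, near_thresh=0.3, far_thresh=0.7):
    """Boolean foreground/background masks from per-pixel depth.
    Pixels between the two thresholds are left unassigned as ambiguous."""
    foreground = depth < near_thresh   # small depth: close to the camera
    background = depth > far_thresh    # large depth: far from the camera
    return foreground, background
```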

In some embodiments, the electronic device may extract image features of the first image and analyze them to determine the foreground region of the first image. Optionally, the image features may include but are not limited to edge features, color features, and position features.

In some embodiments, the electronic device may instead use a neural network to determine the foreground region in the first image: the first image can be fed to a pre-trained object segmentation model, which identifies the target object contained in the first image and yields the foreground region corresponding to that object. The object segmentation model may be trained on multiple groups of sample training images, where each group includes sample images and each sample image is annotated with its foreground region. The segmentation model may include, but is not limited to, a network based on the DeepLab semantic segmentation algorithm, the U-Net architecture, or an FCN (Fully Convolutional Network), which is not limited here.

It should be noted that the electronic device may also identify the foreground region in the first image in other ways; the manner of identifying the foreground region is not restricted in the embodiments of the present application.

Step 220: blur the first image based on the first foreground recognition result to obtain a first blurred image.

The electronic device may determine the foreground and background regions of the first image from the first foreground recognition result and blur the background region of the first image to obtain the first blurred image. The blurring may be implemented with a Gaussian filter, mean blurring, median blurring, or similar techniques, which are not limited here.
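As an illustration of the Gaussian option, a separable blur on a single-channel image might look like the following sketch (truncating the kernel at three sigma is a common convention, not a value taken from the patent):

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian kernel truncated at three sigma."""
    radius = max(1, int(round(3 * sigma)))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(gray, sigma):
    """Blur columns and then rows with the same 1-D kernel (separability)."""
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(np.convolve, 0, gray, k, mode="same")
    return np.apply_along_axis(np.convolve, 1, tmp, k, mode="same")
```

Because the 2-D Gaussian separates into two 1-D passes, this costs O(k) per pixel instead of O(k²) for a direct 2-D convolution.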

In some embodiments, the electronic device may instead blur the entire first image first and then, based on the first foreground recognition result, fuse the blurred first image with the first image before blurring to obtain the first blurred image. The fusion may include, but is not limited to, averaging, weighted combination with different weight coefficients, or alpha blending. Taking alpha blending as an example: each pixel of the first image before blurring and of the blurred first image is assigned an alpha value, giving the two images different transparencies; the first foreground recognition result can serve as the alpha map, and the blurred first image is then fused with the first image before blurring accordingly.
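The alpha-fusion variant reduces to a per-pixel weighted average. A sketch, under the assumption (introduced here, since the patent does not fix the convention) that the foreground confidence map lies in [0, 1] and weights the sharp image:

```python
import numpy as np

def alpha_blend(sharp, blurred, alpha):
    """Per-pixel fusion: alpha = 1 keeps the sharp pixel (foreground),
    alpha = 0 takes the blurred pixel (background)."""
    if sharp.ndim == 3:                # broadcast a 2-D mask over color channels
        alpha = alpha[..., None]
    return alpha * sharp + (1.0 - alpha) * blurred
```

A soft (fractional) alpha map gives a gradual transition at the foreground/background boundary instead of a hard cut.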

In some embodiments, the electronic device may perform depth estimation on the first image to obtain a depth estimation result containing the depth information of each pixel. The first image can be partitioned according to this result, grouping pixels with the same or similar depth into the same image region. A blurring parameter can then be determined for each region from the depth information of its pixels, and each region is blurred according to its own parameter. The blurring parameter describes the degree of blur, and may include parameters such as blur strength or a blur coefficient: regions with larger depth values receive a higher degree of blur and regions with smaller depth values receive a lower degree, so that different image regions can be blurred to different degrees.
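One way to realize "larger depth, stronger blur" is a simple normalization from a region's representative depth to a blur strength. The linear ramp and the `max_sigma` cap below are assumptions for illustration, not values from the patent:

```python
import numpy as np

def blur_strengths(region_depths, max_sigma=8.0):
    """Map each region's representative depth to a blur sigma:
    the nearest region gets 0 (kept sharp), the farthest gets max_sigma."""
    depths = np.asarray(region_depths, dtype=float)
    span = depths.max() - depths.min()
    if span == 0.0:                     # all regions at the same depth
        return np.zeros_like(depths)
    return max_sigma * (depths - depths.min()) / span
```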

Step 230: in response to a selection operation on the first blurred image, determine one or more image regions selected by the selection operation in the first blurred image.

电子设备在对第一图像进行虚化处理后,可通过显示装置显示得到的第一虚化图像,用户可查看该第一虚化图像,并对需要优化的图像区域进行选择。可选地,该需要优化的图像区域可以是前景区域与背景区域的交界部分等容易出现虚化错误或漏虚化的区域,例如人像图像中头发与背景的交界区域,或是头发中的漏洞区域等。After the electronic device performs blurring processing on the first image, the first blurred image can be displayed on the display device, and the user can view the first blurred image and select the image area to be optimized. Optionally, the image area to be optimized can be a boundary between the foreground area and the background area, which is prone to blurring errors or missed blurring, such as the boundary between the hair and the background in a portrait image, or a gap in the hair.

电子设备可响应针对该第一虚化图像的选择操作,确定选择操作在第一虚化图像中选择的一个或多个图像区域。其中,选择操作可包括但不限于触控操作、语音操作、视线交互操作、手势操作等多种交互操作方式。The electronic device may respond to the selection operation on the first blurred image and determine one or more image areas selected in the first blurred image by the selection operation. The selection operation may include but is not limited to a variety of interactive operation modes such as touch operation, voice operation, sight interaction operation, and gesture operation.

作为一种实施方式，该选择操作可为用户在触控屏上进行的触控操作。电子设备可获取选择操作在屏幕上的一个或多个触控位置，针对各个触控位置，可按照区域尺寸形成与各个触控位置对应的选择框，并确定第一虚化图像中与各个选择框对应的图像区域。选择操作在屏幕上的触控位置可包括触控坐标，用户可在屏幕上进行多次触控，以选择多个同时需要进行优化的图像区域。电子设备可同时获取多个触控位置，以确定多个对应的图像区域，再对用户选择的多个图像区域进行优化的虚化处理（即多次触控一次优化的方式）。电子设备也可以在每次检测到用户进行选择操作时，获取当前检测到的触控位置，以得到相应的图像区域，并对该图像区域进行优化的虚化处理（即多次触控多次优化的方式）。As an implementation mode, the selection operation may be a touch operation performed by a user on a touch screen. The electronic device may obtain one or more touch positions of the selection operation on the screen, and for each touch position, a selection box corresponding to each touch position may be formed according to the area size, and the image area corresponding to each selection box in the first blurred image may be determined. The touch position of the selection operation on the screen may include touch coordinates, and the user may perform multiple touches on the screen to select multiple image areas that need to be optimized at the same time. The electronic device may obtain multiple touch positions at the same time to determine multiple corresponding image areas, and then perform optimized blurring on the multiple image areas selected by the user (i.e., a multiple-touch, single-optimization mode). The electronic device may also obtain the currently detected touch position each time it detects that the user performs a selection operation, so as to obtain the corresponding image area, and perform optimized blurring on that image area (i.e., a multiple-touch, multiple-optimization mode).

上述的区域尺寸可以是用户根据实际需求进行预先设置的固定尺寸,也可以是电子设备在出厂前由研发人员统一设置的固定尺寸,区域尺寸也可根据第一虚化图像的图像分辨率、图像尺寸等进行动态调整,例如,图像尺寸较大,对应的区域尺寸也可较大等,但不限于此。电子设备在确定各个触控位置后,可基于区域尺寸形成与每个触控位置对应的选择框,该选择框可以是矩形、正方形、多边形、圆形等任意形状,在此不作限制。触控位置可处于选择框的特定位置,例如,可以处于选择框的中心位置,也可以处于选择框的角点位置(如左上角点、右上角点等)。The above-mentioned area size can be a fixed size pre-set by the user according to actual needs, or it can be a fixed size uniformly set by the R&D personnel of the electronic device before leaving the factory. The area size can also be dynamically adjusted according to the image resolution, image size, etc. of the first blurred image. For example, if the image size is larger, the corresponding area size can also be larger, etc., but it is not limited to this. After determining each touch position, the electronic device can form a selection box corresponding to each touch position based on the area size. The selection box can be any shape such as a rectangle, square, polygon, circle, etc., which is not limited here. The touch position can be at a specific position of the selection box, for example, it can be at the center of the selection box, or it can be at a corner point of the selection box (such as the upper left corner point, the upper right corner point, etc.).
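The clamped selection-box construction described above might be sketched as follows; the centred placement and the fixed box size are illustrative assumptions, since the embodiment also allows corner-anchored boxes, dynamic sizes, and other shapes:

```python
def selection_box(touch_x, touch_y, box_size, img_w, img_h):
    # Form a square selection box centred on the touch position and clamp
    # it so the whole box stays inside the image. The centred placement
    # and fixed box_size are illustrative; other anchors/shapes are allowed.
    half = box_size // 2
    x0 = min(max(touch_x - half, 0), img_w - box_size)
    y0 = min(max(touch_y - half, 0), img_h - box_size)
    return x0, y0, x0 + box_size, y0 + box_size

# A touch near the top-left corner: the box is shifted to stay in-bounds.
corner_box = selection_box(5, 5, 100, 1080, 1920)
centre_box = selection_box(540, 960, 100, 1080, 1920)
```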

电子设备可在屏幕上按照预设的显示方式(如预设的颜色、线条等)显示选择框,第一虚化图像中处于选择框内的图像内容即为选择的图像区域,用户通过显示的选择框可以直观地获知所选择的图像区域。在一些实施例中,用户可根据实际需求对选择框的尺寸进行调整,若电子设备检测到针对目标选择框触发的调整操作时,则可根据调整操作调整该目标选择框的尺寸,该目标选择框指的是用户需要调整尺寸的选择框。调整操作的操作方式可区别于选择操作的操作方式,例如,选择操作可为单击操作,调整操作可为滑动操作,或选择操作可为单击操作,调整操作可为双击操作等,但不限于此。The electronic device may display a selection box on the screen in a preset display mode (such as a preset color, line, etc.). The image content in the selection box in the first blurred image is the selected image area. The user can intuitively know the selected image area through the displayed selection box. In some embodiments, the user can adjust the size of the selection box according to actual needs. If the electronic device detects an adjustment operation triggered for a target selection box, the size of the target selection box may be adjusted according to the adjustment operation. The target selection box refers to the selection box whose size the user needs to adjust. The operation mode of the adjustment operation may be different from the operation mode of the selection operation. For example, the selection operation may be a single-click operation, and the adjustment operation may be a sliding operation, or the selection operation may be a single-click operation, and the adjustment operation may be a double-click operation, etc., but is not limited thereto.

示例性地，请参考图3A及图3B，图3A为一个实施例中对第一虚化图像进行选择操作的示意图，图3B为一个实施例中显示选择框的示意图。如图3A及图3B所示，电子设备10可在屏幕中显示第一虚化图像310，用户可根据实际需求对需要进行优化虚化的区域进行选择，可通过触控方式进行选择，电子设备10可根据用户触控的触控位置，在屏幕中显示选择框320，该选择框320内的图像内容即为选择的图像区域。For example, please refer to FIG. 3A and FIG. 3B. FIG. 3A is a schematic diagram of selecting a first blurred image in an embodiment, and FIG. 3B is a schematic diagram of displaying a selection box in an embodiment. As shown in FIG. 3A and FIG. 3B, the electronic device 10 can display a first blurred image 310 on the screen, and the user can select the area to be optimized and blurred according to actual needs, and can select by touch. The electronic device 10 can display a selection box 320 on the screen according to the touch position of the user, and the image content in the selection box 320 is the selected image area.

又示例性地，请参考图3C及图3D，图3C为另一个实施例中对第一虚化图像进行选择操作的示意图，图3D为另一个实施例中显示选择框的示意图。如图3C及图3D所示，电子设备10可在屏幕中显示第一虚化图像330，用户可根据实际需求在第一虚化图像330中进行多次触控操作，电子设备10可根据用户触控的多个触控位置，在屏幕中显示每个触控位置对应的选择框340，每个选择框340内的图像内容即为选择的图像区域。图3E为一个实施例中调整选择框尺寸的示意图。如图3E所示，用户可根据实际需求调整选择框340的尺寸大小，从而调整选择的图像区域。As another example, please refer to Figures 3C and 3D. Figure 3C is a schematic diagram of performing a selection operation on the first blurred image in another embodiment, and Figure 3D is a schematic diagram of displaying a selection box in another embodiment. As shown in Figures 3C and 3D, the electronic device 10 can display a first blurred image 330 on the screen, and the user can perform multiple touch operations in the first blurred image 330 according to actual needs. The electronic device 10 can display a selection box 340 corresponding to each touch position on the screen according to the multiple touch positions touched by the user, and the image content in each selection box 340 is the selected image area. Figure 3E is a schematic diagram of adjusting the size of the selection box in one embodiment. As shown in Figure 3E, the user can adjust the size of the selection box 340 according to actual needs, thereby adjusting the selected image area.

步骤240,识别各个图像区域的前景区域,得到各个图像区域对应的第二前景识别结果。Step 240 , identifying the foreground area of each image area, and obtaining a second foreground identification result corresponding to each image area.

电子设备在确定用户选择的一个或多个图像区域后,可分别重新识别各个图像区域的前景区域,得到各个图像区域对应的第二前景识别结果。可选地,由于第一虚化图像为经过虚化处理后的图像,为了保证前景识别的准确性,电子设备在确定各个图像区域后,可根据各个图像区域在第一虚化图像中的图像位置,从第一图像中裁剪相同图像位置的区域图像,并对从第一图像中裁剪的区域图像进行前景识别,以得到各个图像区域对应的第二前景识别结果。由于图像区域仅为第一虚化图像中的局部区域,因此裁剪的区域图像也为第一图像中的局部图像,通过重新对局部图像进行前景识别,可以得到更加精细、准确的第二前景识别结果,以对第一前景识别结果进行修正,提高前景识别的准确性。After determining one or more image areas selected by the user, the electronic device may re-identify the foreground areas of each image area respectively, and obtain a second foreground recognition result corresponding to each image area. Optionally, since the first blurred image is an image that has been blurred, in order to ensure the accuracy of foreground recognition, after determining each image area, the electronic device may crop the regional image at the same image position from the first image according to the image position of each image area in the first blurred image, and perform foreground recognition on the regional image cropped from the first image, so as to obtain a second foreground recognition result corresponding to each image area. Since the image area is only a local area in the first blurred image, the cropped regional image is also a local image in the first image. By re-performing foreground recognition on the local image, a more refined and accurate second foreground recognition result can be obtained to correct the first foreground recognition result and improve the accuracy of foreground recognition.
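A minimal sketch of the cropping step described above — taking the patch at the same image position from the un-blurred first image so that foreground recognition is re-run on clean pixels. The NumPy array indexing and the (x0, y0, x1, y1) box convention are implementation assumptions:

```python
import numpy as np

def crop_region_for_refinement(first_image, box):
    # Crop, from the un-blurred first image, the patch at the same image
    # position as the region selected in the first blurred image, so that
    # foreground recognition is re-run on clean (non-blurred) pixels.
    x0, y0, x1, y1 = box
    return first_image[y0:y1, x0:x1]

img = np.arange(36).reshape(6, 6)  # stand-in for the first image
patch = crop_region_for_refinement(img, (1, 2, 4, 5))  # (x0, y0, x1, y1)
```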

步骤250,基于各个图像区域的第二前景识别结果,对第一图像或第一虚化图像进行虚化处理,得到第二虚化图像。Step 250: Based on the second foreground recognition results of each image area, blurring is performed on the first image or the first blurred image to obtain a second blurred image.

在一些实施例中,电子设备在得到各个图像区域对应的第二前景识别结果后,可根据各个图像区域对应的第二前景识别结果对第一前景识别结果进行修正,以得到修正后的目标前景识别结果,并基于该目标前景识别结果对第一图像进行虚化处理,得到第二虚化图像。由于目标前景识别结果的前景、背景的识别准确度更高,因此可得到虚化效果更好的第二虚化图像。In some embodiments, after obtaining the second foreground recognition results corresponding to each image area, the electronic device may correct the first foreground recognition results according to the second foreground recognition results corresponding to each image area to obtain a corrected target foreground recognition result, and blur the first image based on the target foreground recognition result to obtain a second blurred image. Since the recognition accuracy of the foreground and background of the target foreground recognition result is higher, a second blurred image with a better blurring effect can be obtained.

在一些实施例中，电子设备在得到各个图像区域对应的第二前景识别结果后，也可直接根据各个图像区域对应的第二前景识别结果，对第一虚化图像中的各个图像区域进行虚化处理，得到第二虚化图像。仅对第一虚化图像中选择的各个图像区域进行二次的虚化处理，在提高虚化效果的同时，减少计算量，提高处理效率。In some embodiments, after obtaining the second foreground recognition results corresponding to each image area, the electronic device may also directly perform blur processing on each image area in the first blurred image according to the second foreground recognition results corresponding to each image area to obtain the second blurred image. Performing a secondary blur processing only on each image area selected in the first blurred image can improve the blurring effect while reducing the amount of calculation and improving processing efficiency.

在本申请实施例中,在对第一图像初次虚化处理后,用户可选择需要进一步优化的图像区域,并再次对选择的图像区域进行前景识别,提高了前景识别的准确性,且基于更加精确的第二前景识别结果对第一图像或第一虚化图像进行二次虚化处理,能够改善将部分前景区域错误地虚化或遗漏部分背景区域未虚化的情况,提高了图像的虚化效果。此外,用户可在第一虚化图像中选择需要进一步优化的图像区域,贴合用户的不同需求,提高了与用户之间的互动性。In the embodiment of the present application, after the first blurring process is performed on the first image, the user can select the image area that needs to be further optimized, and perform foreground recognition on the selected image area again, thereby improving the accuracy of foreground recognition, and performing secondary blurring on the first image or the first blurred image based on the more accurate second foreground recognition result, thereby improving the situation where part of the foreground area is mistakenly blurred or part of the background area is omitted and not blurred, thereby improving the blurring effect of the image. In addition, the user can select the image area that needs to be further optimized in the first blurred image, thereby meeting the different needs of the user and improving the interactivity with the user.

如图4所示,在另一个实施例中,提供一种图像处理方法,该方法可应用于上述的电子设备,该方法可包括以下步骤:As shown in FIG. 4 , in another embodiment, an image processing method is provided. The method can be applied to the above-mentioned electronic device. The method may include the following steps:

步骤402,识别第一图像中的前景区域,得到第一前景识别结果。Step 402: Identify a foreground area in a first image to obtain a first foreground identification result.

步骤402的描述可参考上述实施例中步骤210的描述,在此不再赘述。The description of step 402 may refer to the description of step 210 in the above embodiment, which will not be repeated here.

步骤404,对第一图像进行深度估计,得到深度估计结果。Step 404: perform depth estimation on the first image to obtain a depth estimation result.

电子设备可对第一图像进行深度估计,确定第一图像中各个像素点的深度信息,得到深度估计结果。电子设备对第一图像进行深度估计的方式可以是软件的深度估计方式,也可以是结合硬件设备计算深度信息的方式等。软件的深度估计方式可包括但不限于使用深度估计模型等神经网络进行深度估计的方式,该深度估计模型可通过深度训练集训练得到,深度训练集可包括多张样本图像及每张样本图像对应的深度图等。结合硬件设备的深度估计方式可包括但不限于利用多摄像头(例如双摄像头)进行深度估计、利用结构光进行深度估计、利用TOF(Time of flight,飞行时间)进行深度估计等。本申请实施例对深度估计的方式不作限定。The electronic device may perform depth estimation on the first image, determine the depth information of each pixel in the first image, and obtain a depth estimation result. The method in which the electronic device performs depth estimation on the first image may be a software depth estimation method, or a method in which the depth information is calculated in combination with a hardware device, etc. The software depth estimation method may include but is not limited to a method of performing depth estimation using a neural network such as a depth estimation model, which may be obtained by training a depth training set, and the depth training set may include multiple sample images and a depth map corresponding to each sample image, etc. The depth estimation method combined with the hardware device may include but is not limited to depth estimation using multiple cameras (such as dual cameras), depth estimation using structured light, depth estimation using TOF (Time of flight, time of flight), etc. The embodiment of the present application does not limit the method of depth estimation.

需要说明的是,步骤402与步骤404之间的执行顺序在此不作限定,也可先执行步骤404再执行步骤402,或是同时执行步骤402及步骤404。It should be noted that the execution order of step 402 and step 404 is not limited here, and step 404 may be executed first and then step 402, or step 402 and step 404 may be executed simultaneously.

步骤406,根据第一前景识别结果及深度估计结果对第一图像进行虚化处理,得到第一虚化图像。Step 406 , blurring the first image according to the first foreground recognition result and the depth estimation result to obtain a first blurred image.

第一图像的深度估计结果中可包含第一图像中各个像素点的深度信息,可根据各个像素点的深度信息将第一图像划分为多个图像块,从而可确定每个图像块对应的虚化参数。例如,被划分为同一图像块的各个像素点的深度信息可属于同一深度值区间,或被划分为同一图像块的各个像素点之间的深度信息之间的差值小于深度阈值等。The depth estimation result of the first image may include the depth information of each pixel in the first image, and the first image may be divided into a plurality of image blocks according to the depth information of each pixel, so as to determine the blur parameter corresponding to each image block. For example, the depth information of each pixel divided into the same image block may belong to the same depth value interval, or the difference between the depth information of each pixel divided into the same image block is less than the depth threshold, etc.

电子设备可根据各个像素点的深度信息将第一图像划分为前景区域及背景区域，由于深度估计结果中划分的前景区域及背景区域是基于深度信息划分的，会导致前景区域的边缘不够准确，而第一前景识别结果识别的前景区域更加准确，因此，在一些实施例中，可根据第一前景识别结果对深度估计结果进行修正，得到第一深度图。可选地，可基于第一前景识别结果调整深度估计结果中划分的前景区域的边缘信息，得到第一深度图，再利用第一深度图对第一图像进行虚化处理，得到第一虚化图像。该边缘信息可包括被标注为边缘像素点的像素点坐标。The electronic device may divide the first image into a foreground area and a background area according to the depth information of each pixel. Since the foreground area and the background area divided in the depth estimation result are divided based on the depth information, the edge of the foreground area may not be accurate enough, while the foreground area identified by the first foreground recognition result is more accurate. Therefore, in some embodiments, the depth estimation result may be corrected according to the first foreground recognition result to obtain a first depth map. Optionally, the edge information of the foreground area divided in the depth estimation result may be adjusted based on the first foreground recognition result to obtain a first depth map, and then the first depth map may be used to blur the first image to obtain a first blurred image. The edge information may include the coordinates of the pixel points marked as edge pixels.

可选地,电子设备可将第一前景识别结果与深度估计结果中划分的前景区域进行比对,判断第一前景识别结果中前景区域的边缘信息与深度估计结果中前景区域的边缘信息是否一致,若不一致,可直接将深度估计结果中前景区域的边缘信息修改为第一前景识别结果中前景区域的边缘信息,也可以将深度估计结果中前景区域的边缘信息与第一前景识别结果中前景区域的边缘信息进行融合。可选地,该融合的方式可包括但不限于像素点的均值融合、按照不同的权重系数进行融合等方式,由于第一前景识别结果对前景区域的识别准确度大于深度估计结果,因此,第一前景识别结果中前景区域的边缘信息对应的权重系数可大于深度估计结果中前景区域的边缘信息的权重系数。需要说明的是,也可采用其它方式对深度估计结果进行修正及调整,在此不作限定。Optionally, the electronic device may compare the first foreground recognition result with the foreground area divided in the depth estimation result to determine whether the edge information of the foreground area in the first foreground recognition result is consistent with the edge information of the foreground area in the depth estimation result. If not, the edge information of the foreground area in the depth estimation result may be directly modified to the edge information of the foreground area in the first foreground recognition result, or the edge information of the foreground area in the depth estimation result may be fused with the edge information of the foreground area in the first foreground recognition result. Optionally, the fusion method may include but is not limited to pixel mean fusion, fusion according to different weight coefficients, etc. Since the recognition accuracy of the foreground area by the first foreground recognition result is greater than that of the depth estimation result, the weight coefficient corresponding to the edge information of the foreground area in the first foreground recognition result may be greater than the weight coefficient of the edge information of the foreground area in the depth estimation result. It should be noted that other methods may also be used to correct and adjust the depth estimation result, which are not limited here.
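The weighted fusion of edge information described above might look like the following sketch, where the segmentation-derived edge receives the larger weight because the first foreground recognition result is more accurate; the 0.7/0.3 split and the per-row edge-coordinate representation are illustrative assumptions:

```python
import numpy as np

def fuse_foreground_edges(seg_edge, depth_edge, w_seg=0.7):
    # The segmentation-based edge estimate is trusted more than the
    # depth-based one, so it receives the larger weight (w_seg > 0.5).
    # The 0.7 value is an illustrative choice, not fixed by the text.
    return w_seg * seg_edge + (1.0 - w_seg) * depth_edge

seg = np.array([120.0, 121.0, 122.0])  # edge x-coordinate per image row
dep = np.array([110.0, 125.0, 118.0])
fused = fuse_foreground_edges(seg, dep)
```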

在得到第一深度图后,电子设备可根据第一深度图中划分的前景区域及背景区域,以及背景区域中各个像素点的深度信息对背景区域进行划分,得到多个背景子区域。电子设备可再基于各个背景子区域中的像素点的深度信息确定各个背景子区域对应的虚化参数,以根据各个背景子区域对应的虚化参数对各个背景子区域进行不同虚化力度的虚化处理。After obtaining the first depth map, the electronic device can divide the background area according to the foreground area and the background area divided in the first depth map, and the depth information of each pixel in the background area, to obtain multiple background sub-areas. The electronic device can then determine the blur parameters corresponding to each background sub-area based on the depth information of the pixels in each background sub-area, so as to perform blur processing with different blur strengths on each background sub-area according to the blur parameters corresponding to each background sub-area.

步骤408,响应针对第一虚化图像的选择操作,确定选择操作在第一虚化图像中选择的一个或多个图像区域。Step 408 , in response to a selection operation on the first blurred image, determining one or more image regions selected by the selection operation in the first blurred image.

步骤410,识别各个图像区域的前景区域,得到各个图像区域对应的第二前景识别结果。Step 410: Identify the foreground area of each image area to obtain a second foreground identification result corresponding to each image area.

步骤408~410的描述可参考上述实施例中步骤230~240的描述,在此不再赘述。The description of steps 408 to 410 may refer to the description of steps 230 to 240 in the above embodiment, which will not be repeated here.

步骤412,将各个图像区域对应的第二前景识别结果与第一前景识别结果进行融合,得到目标前景识别结果。Step 412: The second foreground recognition result corresponding to each image region is merged with the first foreground recognition result to obtain a target foreground recognition result.

在一些实施例中,电子设备可根据选择的各个图像区域对应的第二前景识别结果对第一前景识别结果进行修正,将各个图像区域对应的第二前景识别结果与第一前景识别结果进行融合,以得到更加准确的目标前景识别结果。In some embodiments, the electronic device may correct the first foreground recognition result according to the second foreground recognition result corresponding to each selected image area, and merge the second foreground recognition result corresponding to each image area with the first foreground recognition result to obtain a more accurate target foreground recognition result.

作为一种具体实施方式,电子设备可将第一前景识别结果中与各个图像区域对应的前景识别结果,分别替换为各个图像区域对应的第二前景识别结果,得到目标前景识别结果。第一前景识别结果可包括第一图像的前景掩膜,该前景掩膜可用于对第一图像的前景区域的位置进行标注。电子设备在确定用户选择的各个图像区域后,可根据各个图像区域在第一虚化图像中的图像位置,确定前景掩膜中与各个图像区域具有相同图像位置的掩膜区域,即为各个图像区域对应的掩膜区域。各个图像区域的第二前景识别结果可包括各个图像区域对应的局部前景掩膜,可将第一图像的前景掩膜中,与各个图像区域对应的掩膜区域替换为对应的局部前景掩膜,以得到更加准确的目标前景掩膜,该目标前景掩膜可作为目标前景识别结果。As a specific implementation, the electronic device may replace the foreground recognition results corresponding to each image area in the first foreground recognition result with the second foreground recognition results corresponding to each image area, respectively, to obtain a target foreground recognition result. The first foreground recognition result may include a foreground mask of the first image, and the foreground mask may be used to mark the position of the foreground area of the first image. After determining each image area selected by the user, the electronic device may determine, based on the image position of each image area in the first blurred image, a mask area in the foreground mask having the same image position as each image area, that is, the mask area corresponding to each image area. The second foreground recognition result of each image area may include a local foreground mask corresponding to each image area, and the mask area corresponding to each image area in the foreground mask of the first image may be replaced with the corresponding local foreground mask to obtain a more accurate target foreground mask, which may be used as the target foreground recognition result.
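The mask-region replacement described above can be sketched as a simple slice assignment, assuming the masks are NumPy arrays and the selected area is an axis-aligned box — both illustrative assumptions:

```python
import numpy as np

def patch_foreground_mask(full_mask, local_mask, box):
    # Replace the part of the full-image foreground mask that matches the
    # selected area with the more accurate local foreground mask.
    out = full_mask.copy()
    x0, y0, x1, y1 = box
    out[y0:y1, x0:x1] = local_mask
    return out

full = np.zeros((4, 4))   # coarse first foreground mask
local = np.ones((2, 2))   # refined local mask for the selected box
target = patch_foreground_mask(full, local, (1, 1, 3, 3))
```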

图5为一个实施例中利用选择的图像区域的第二前景识别结果对第一前景识别结果进行修正的示意图。如图5所示，电子设备对第一图像510进行前景识别，得到第一前景识别结果520，可基于第一前景识别结果520及第一图像510的深度估计结果对第一图像510进行虚化处理，得到第一虚化图像530。用户可在电子设备的屏幕显示的第一虚化图像530中选择需要进行优化的图像区域532，可对该图像区域532进行局部的前景识别，得到图像区域532对应的第二前景识别结果540。可将第一前景识别结果520中与图像区域532对应的前景识别结果522，替换为第二前景识别结果540，得到目标前景识别结果550。若存在多个选择的图像区域，则可依次将第一前景识别结果中与每个图像区域对应的前景识别结果替换为对应的第二前景识别结果。FIG. 5 is a schematic diagram of correcting a first foreground recognition result using a second foreground recognition result of a selected image area in one embodiment. As shown in FIG. 5, the electronic device performs foreground recognition on a first image 510 to obtain a first foreground recognition result 520, and can perform blur processing on the first image 510 based on the first foreground recognition result 520 and the depth estimation result of the first image 510 to obtain a first blurred image 530. The user can select an image area 532 to be optimized in the first blurred image 530 displayed on the screen of the electronic device, and can perform local foreground recognition on the image area 532 to obtain a second foreground recognition result 540 corresponding to the image area 532. The foreground recognition result 522 corresponding to the image area 532 in the first foreground recognition result 520 can be replaced with the second foreground recognition result 540 to obtain a target foreground recognition result 550. If there are multiple selected image areas, the foreground recognition result corresponding to each image area in the first foreground recognition result can be replaced with the corresponding second foreground recognition result in sequence.

需要说明的是,也可采用其它方式将各个图像区域对应的第二前景识别结果与第一前景识别结果进行融合,例如,可将第一前景识别结果中各个图像区域对应的前景识别结果,与各个图像区域对应的第二前景识别结果进行加权平均融合等方式,在此不作限定。It should be noted that other methods may also be used to fuse the second foreground recognition results corresponding to each image area with the first foreground recognition results. For example, the foreground recognition results corresponding to each image area in the first foreground recognition result may be fused with the second foreground recognition results corresponding to each image area by weighted average, etc., which are not limited here.

步骤414,根据目标前景识别结果对深度估计结果进行修正,得到目标深度图,并基于目标深度图对第一图像进行虚化处理,得到第二虚化图像。Step 414: correct the depth estimation result according to the target foreground recognition result to obtain a target depth map, and blur the first image based on the target depth map to obtain a second blurred image.

可根据更加精准的目标前景识别结果对深度估计结果进行修正,得到前景区域更加精确的目标深度图,电子设备可根据目标深度图重新对第一图像进行虚化处理,以得到虚化效果更好的第二虚化图像。可以理解地,根据目标前景识别结果对深度估计结果进行修正的方式可与上述实施例中描述的根据第一前景识别结果对深度估计结果进行修正的方式类似,根据目标深度图对第一图像进行虚化处理的方式可与上述实施例中描述的根据第一深度图对第一图像进行虚化的方式类似,在此不再重复赘述。The depth estimation result can be corrected according to the more accurate target foreground recognition result to obtain a more accurate target depth map of the foreground area, and the electronic device can re-blur the first image according to the target depth map to obtain a second blurred image with a better blur effect. It can be understood that the method of correcting the depth estimation result according to the target foreground recognition result can be similar to the method of correcting the depth estimation result according to the first foreground recognition result described in the above embodiment, and the method of blurring the first image according to the target depth map can be similar to the method of blurring the first image according to the first depth map described in the above embodiment, and will not be repeated here.

在本申请实施例中,在对第一图像初次虚化处理后,用户可选择需要进一步优化的图像区域,电子设备可对选择的各个图像区域进行前景识别,并将得到的各个图像区域对应的第二前景识别结果与第一前景识别结果进行融合,以对第一前景识别结果进行修正,得到更加准确的目标前景识别结果,从而基于该目标前景识别结果对第一图像进行虚化处理可得到虚化效果更好的第二虚化图像,提高了前景识别的准确性及图像虚化效果。In an embodiment of the present application, after the first image is initially blurred, the user can select image areas that need to be further optimized, and the electronic device can perform foreground recognition on each selected image area, and fuse the second foreground recognition results corresponding to each image area with the first foreground recognition result to correct the first foreground recognition result and obtain a more accurate target foreground recognition result, so that the first image can be blurred based on the target foreground recognition result to obtain a second blurred image with a better blurring effect, thereby improving the accuracy of foreground recognition and the image blurring effect.

在一些实施例中,除了上述实施例中利用各个选择的图像区域的第二前景识别结果对第一图像再次进行整图的虚化处理以外,还可直接利用各个选择的图像区域的第二前景识别结果对第一虚化图像中进行局部的虚化处理,以减少计算量,提高图像处理效率。电子设备在得到各个图像区域对应的第二前景识别结果后,可基于各个图像区域的第二前景识别结果对第一深度图进行修正,可基于各个图像区域的第二前景识别结果,对第一深度图中与各个图像区域对应的边缘信息进行调整,得到第二深度图。In some embodiments, in addition to using the second foreground recognition results of each selected image area to blur the entire first image again in the above embodiments, the second foreground recognition results of each selected image area can also be used directly to perform local blurring on the first blurred image to reduce the amount of calculation and improve image processing efficiency. After obtaining the second foreground recognition results corresponding to each image area, the electronic device can correct the first depth map based on the second foreground recognition results of each image area, and can adjust the edge information corresponding to each image area in the first depth map based on the second foreground recognition results of each image area to obtain the second depth map.

作为一种具体实施方式,可根据各个选择的图像区域在第一虚化图像中的图像位置,确定第一深度图中与各个选择的图像区域具有相同图像位置的深度图区域,可根据各个选择的图像区域的第二前景识别结果,对相应的深度图区域中包含的前景区域的边缘信息进行修正,从而调整第一深度图中前景区域的边缘信息。As a specific implementation, a depth map area having the same image position as each selected image area in the first depth map can be determined based on the image position of each selected image area in the first blurred image, and edge information of the foreground area contained in the corresponding depth map area can be corrected based on the second foreground recognition result of each selected image area, thereby adjusting the edge information of the foreground area in the first depth map.

以选择的各个图像区域中的第一图像区域为例,该第一图像区域可为选择的任一图像区域。可获取第一图像区域的第二前景识别结果中前景区域的边缘信息,可将第一图像区域的第二前景识别结果中前景区域的边缘信息,与第一深度图中与第一图像区域对应的第一深度图区域中包含的前景区域的边缘信息进行比对,判断二者是否一致,若不一致,则可将第一深度图区域中包含的前景区域的边缘信息修改为第一图像区域的第二前景识别结果中前景区域的边缘信息。可选地,若二者不一致,也可以将第一深度图区域中包含的前景区域的边缘信息与第一图像区域的第二前景识别结果中前景区域的边缘信息进行融合,该融合的方式可包括但不限于像素点的均值融合、按照不同的权重系数进行融合等方式,在此不作限定。Taking the first image area among the selected image areas as an example, the first image area can be any selected image area. The edge information of the foreground area in the second foreground recognition result of the first image area can be obtained, and the edge information of the foreground area in the second foreground recognition result of the first image area can be compared with the edge information of the foreground area contained in the first depth map area corresponding to the first image area in the first depth map to determine whether the two are consistent. If they are inconsistent, the edge information of the foreground area contained in the first depth map area can be modified to the edge information of the foreground area in the second foreground recognition result of the first image area. Optionally, if the two are inconsistent, the edge information of the foreground area contained in the first depth map area can also be fused with the edge information of the foreground area in the second foreground recognition result of the first image area. The fusion method can include but is not limited to the mean fusion of pixels, fusion according to different weight coefficients, etc., which are not limited here.

由于各个图像区域的第二前景识别结果更加准确，因此利用各个图像区域的第二前景识别结果对第一深度图中相应深度图区域的前景区域的边缘信息进行修正，可以得到更加准确划分前景区域与背景区域的第二深度图。Since the second foreground recognition results of each image area are more accurate, the second foreground recognition results of each image area are used to correct the edge information of the foreground area of the corresponding depth map area in the first depth map, so as to obtain a second depth map that more accurately divides the foreground area and the background area.

作为一种实施方式,电子设备可利用第二深度图对第一图像进行虚化处理,以得到更加准确的第二虚化图像。作为另一种实施方式,电子设备也可根据各个图像区域在第二深度图中的深度信息,分别对第一虚化图像中的各个图像区域进行虚化处理,得到第二虚化图像。As an implementation, the electronic device may use the second depth map to blur the first image to obtain a more accurate second blurred image. As another implementation, the electronic device may also blur each image area in the first blurred image according to the depth information of each image area in the second depth map to obtain a second blurred image.

进一步地,各个图像区域在第二深度图中对应的深度图区域准确地划分前景区域及背景区域,可根据各个图像区域在第二深度图中对应的深度图区域中包含的背景区域的深度信息,重新确定各个图像区域对应的虚化参数,并根据各个图像区域对应的虚化参数对各个图像区域中包含的背景区域进行虚化处理,以得到第二虚化图像。Furthermore, the depth map area corresponding to each image area in the second depth map accurately divides the foreground area and the background area. The blur parameters corresponding to each image area can be re-determined according to the depth information of the background area contained in the depth map area corresponding to each image area in the second depth map, and the background area contained in each image area can be blurred according to the blur parameters corresponding to each image area to obtain a second blurred image.
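A hedged sketch of re-deriving one selected region's blur parameter from the depth of its background pixels only, as described above; the linear depth-to-strength mapping, the max_strength value, and the binary mask convention (1 = foreground) are all assumptions for illustration:

```python
import numpy as np

def region_background_blur_param(depth_patch, fg_mask_patch, max_strength=15):
    # Use only background pixels (fg_mask_patch == 0) of the selected
    # region to re-derive its blur strength; the linear mapping and the
    # max_strength value are illustrative assumptions.
    bg_depths = depth_patch[fg_mask_patch == 0]
    if bg_depths.size == 0:
        return 0  # region is entirely foreground: nothing to blur
    return int(round(float(bg_depths.mean()) * max_strength))

depth = np.array([[0.2, 0.8],
                  [0.2, 0.8]])
mask = np.array([[1, 0],
                 [1, 0]])  # left column is foreground, right is background
strength = region_background_blur_param(depth, mask)
```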

图6为一个实施例中利用选择的图像区域的第二前景识别结果对第一深度图进行修正的示意图。如图6所示,电子设备可对第一图像610进行深度估计,得到深度估计结果,并利用第一图像的第一前景识别结果对该深度估计结果进行修正,得到第一深度图620。可根据第一深度图620对第一图像610进行虚化处理,得到第一虚化图像630。用户可在第一虚化图像630中选择需要进行优化的图像区域,电子设备可根据用户的选择操作确定图像区域632,并对图像区域632进行前景识别,得到图像区域632对应的第二前景识别结果640。可根据图像区域632对应的第二前景识别结果640,对第一深度图620中与该图像区域632具有相同图像位置的深度图区域622中的前景区域的边缘信息进行调整,得到第二深度图650。FIG6 is a schematic diagram of correcting a first depth map using a second foreground recognition result of a selected image area in one embodiment. As shown in FIG6 , the electronic device may perform depth estimation on the first image 610 to obtain a depth estimation result, and correct the depth estimation result using the first foreground recognition result of the first image to obtain a first depth map 620. The first image 610 may be defocused according to the first depth map 620 to obtain a first defocused image 630. The user may select an image area to be optimized in the first defocused image 630, and the electronic device may determine an image area 632 according to the user's selection operation, and perform foreground recognition on the image area 632 to obtain a second foreground recognition result 640 corresponding to the image area 632. According to the second foreground recognition result 640 corresponding to the image area 632, the edge information of the foreground area in the depth map area 622 having the same image position as the image area 632 in the first depth map 620 may be adjusted to obtain a second depth map 650.

Optionally, after obtaining the second depth map 650, the electronic device may blur the image region 632 in the first blurred image 630 according to the depth information of the depth map region that occupies the same image position as the image region 632 in the second depth map 650, to obtain the second blurred image.
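The region-limited re-blurring step can be illustrated with a minimal numpy sketch. It is an assumption-laden illustration, not the patented renderer: the box blur stands in for the real bokeh kernel, the depth threshold `fg_thresh` is hypothetical, and regions are simple `(y0, y1, x0, x1)` boxes.

```python
import numpy as np

def box_blur(img, k=5):
    """Simple box blur via shifted sums (stand-in for the real bokeh kernel)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def reblur_region(first_blurred, original, depth2, region, fg_thresh=0.5, k=5):
    """Re-render only the user-selected region: pixels the corrected depth map
    marks as background are re-blurred from the original image, while pixels
    re-classified as foreground are restored sharp."""
    y0, y1, x0, x1 = region
    out = first_blurred.copy()
    sub_orig = original[y0:y1, x0:x1]
    sub_depth = depth2[y0:y1, x0:x1]
    blurred_sub = box_blur(sub_orig, k)
    # convention here: larger depth = farther = background
    out[y0:y1, x0:x1] = np.where(sub_depth > fg_thresh, blurred_sub, sub_orig)
    return out
```

Everything outside the selected region keeps the pixels of the first blurred image, which is what makes the second pass cheap.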

In the embodiments of the present application, the first depth map can be corrected according to the second foreground recognition result of each image region selected by the user, and the image regions in the first blurred image can be blurred based on the corrected second depth map, yielding a second blurred image with a better blurring effect. Using the second foreground recognition result of each selected image region to perform local blurring directly on the first blurred image improves the accuracy of foreground recognition and the image blurring effect, while also reducing the amount of computation and improving image processing efficiency.

As shown in FIG. 7, in another embodiment, an image processing method is provided, which can be applied to the above electronic device. The method may include the following steps:

Step 702, identify the portrait region and the hair region of the first image to obtain a portrait segmentation result that meets an accuracy condition.

In the embodiments of the present application, the first image may be a portrait image, i.e., an image containing a portrait; the portrait region in the portrait image is the foreground region, and everything outside the portrait region is the background region. When the foreground portrait region of a portrait image is recognized, the person's hair region is particularly prone to misrecognition: individual hair strands may be wrongly classified as background, or background pixels may be wrongly classified as hair, making the resulting foreground portrait region inaccurate. Therefore, in the embodiments of the present application, the portrait region and the hair region of the first image can be recognized separately to obtain a portrait segmentation result that meets an accuracy condition; the portrait segmentation result can be used to mark the position of the portrait region in the first image. A portrait segmentation result that meets the accuracy condition can accurately locate the hair region in the first image, making the segmentation result more accurate. Optionally, the accuracy condition may be defined by one or more accuracy metrics. For example, the metrics may include the Sum of Absolute Differences (SAD), the mean squared error (MSE), and the gradient error between the obtained portrait segmentation result and the ground-truth portrait segmentation result; the accuracy condition may require one or more of these errors to be below a SAD threshold, an MSE threshold, a gradient-error threshold, and so on.
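The three accuracy metrics named above are standard matting-quality measures and can be written down directly. The function names and the combined threshold check are illustrative; the metrics themselves follow their usual definitions.

```python
import numpy as np

def sad(pred, gt):
    # Sum of Absolute Differences between predicted and ground-truth mattes
    return float(np.abs(pred - gt).sum())

def mse(pred, gt):
    # Mean squared error over all pixels
    return float(((pred - gt) ** 2).mean())

def gradient_error(pred, gt):
    # Compare spatial gradients, so errors on hair-strand edges are emphasised
    gy_p, gx_p = np.gradient(pred)
    gy_g, gx_g = np.gradient(gt)
    return float(((gx_p - gx_g) ** 2 + (gy_p - gy_g) ** 2).sum())

def meets_accuracy(pred, gt, sad_t, mse_t, grad_t):
    """Accuracy condition: all three errors below their thresholds."""
    return (sad(pred, gt) < sad_t
            and mse(pred, gt) < mse_t
            and gradient_error(pred, gt) < grad_t)
```

In practice the condition could also require only a subset of the metrics, as the text notes ("one or more").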

In some embodiments, the electronic device may first recognize the portrait region in the first image to obtain a portrait segmentation map corresponding to the first image, then recognize the hair region in the first image based on the portrait segmentation map to obtain a hair-matting result for the first image, and correct the portrait segmentation map according to the hair-matting result to obtain a portrait segmentation result that meets the accuracy condition.

Specifically, the manner in which the electronic device recognizes the portrait region in the first image may include, but is not limited to, graph-theory-based portrait segmentation, clustering-based portrait segmentation, semantic portrait segmentation, instance-based portrait segmentation, segmentation based on a DeepLab-series network model, segmentation based on a U-shaped network (U-Net), or segmentation based on a Fully Convolutional Network (FCN).

Taking as an example the electronic device recognizing the portrait region of the first image through a portrait segmentation model to obtain the portrait segmentation map: the portrait segmentation model may be a U-Net-structured model including an encoder and a decoder, where the encoder may include multiple downsampling layers and the decoder may include multiple upsampling layers. The portrait segmentation model may first apply multiple downsampling convolutions to the first image through the encoder's downsampling layers, and then apply multiple upsampling operations through the decoder's upsampling layers to obtain the portrait segmentation map. In the model, skip connections can be established between downsampling and upsampling layers of the same resolution, fusing the features of those layers so that the upsampling process is more accurate.
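The same-resolution skip connections can be illustrated at the shape level. This numpy sketch deliberately replaces learned convolutions with average pooling and nearest-neighbour upsampling, purely to show how encoder features are concatenated with decoder features of the same resolution; it is not the model in the embodiment.

```python
import numpy as np

def downsample(x):
    # 2x average pooling (stand-in for a stride-2 conv in the encoder)
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def upsample(x):
    # 2x nearest-neighbour upsampling (stand-in for a transposed conv)
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_like_forward(x, depth=2):
    """Shape-level sketch of a U-Net pass: the encoder feature saved at each
    resolution is concatenated (skip connection) with the decoder feature of
    the same spatial size."""
    skips = []
    for _ in range(depth):
        skips.append(x)        # remember the feature before downsampling
        x = downsample(x)
    for skip in reversed(skips):
        x = upsample(x)
        # skip connection: fuse same-resolution encoder and decoder features
        x = np.concatenate([x, skip], axis=-1)
    return x
```

The output keeps the input's spatial size while its channel count grows with each fused skip, which is the structural property the text relies on.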

Optionally, the portrait segmentation model may be trained on a portrait sample set that includes multiple portrait sample images carrying portrait labels, which mark the portrait region in each sample image. For example, a portrait label may be a portrait mask in which pixels belonging to the portrait region take a first pixel value and pixels belonging to the background region take a second pixel value; such a binarized portrait mask accurately marks the portrait region of the sample image.

In some embodiments, before the first image is input into the portrait segmentation model, it may be scaled and/or rotated according to the model's input size so that it matches that size, and then input into the model for portrait recognition. For example, if the model's input size is portrait-oriented (the side of the image parallel to the horizontal is shorter than the side perpendicular to it) while the first image is landscape-oriented (the side parallel to the horizontal is longer than the side perpendicular to it), the first image may first be rotated 90 degrees clockwise or counterclockwise; or, if the model's input size is smaller than the image size of the first image, the first image may first be shrunk to match the input size. This ensures that the input first image is compatible with the portrait segmentation model and that the output portrait segmentation map is accurate.
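The orientation-and-size adaptation above can be sketched as follows. This is a hypothetical helper (nearest-neighbour resampling stands in for proper interpolation); only the rotate-if-orientation-mismatches, then-resize logic mirrors the text.

```python
import numpy as np

def fit_to_model_input(img, in_h, in_w):
    """Rotate the image 90 degrees if its orientation (portrait/landscape)
    differs from the model's input size, then resize to (in_h, in_w) with
    nearest-neighbour sampling. Returns the adapted image and whether it was
    rotated, so the output can later be mapped back."""
    h, w = img.shape[:2]
    rotated = False
    if (in_h >= in_w) != (h >= w):   # portrait/landscape mismatch
        img = np.rot90(img)          # 90-degree rotation
        rotated = True
        h, w = img.shape[:2]
    ys = np.arange(in_h) * h // in_h
    xs = np.arange(in_w) * w // in_w
    resized = img[ys][:, xs]
    return resized, rotated
```

Keeping the `rotated` flag matters: the segmentation map produced by the model must be rotated back before it is compared with, or stitched onto, the original image.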

After obtaining the portrait segmentation map, the electronic device may channel-concatenate the portrait segmentation map with the first image to obtain a concatenated image, and recognize the hair region in the first image based on the concatenated image to obtain a portrait segmentation result that meets the accuracy condition.

In some embodiments, the portrait segmentation map obtained by the electronic device may be a single-channel image; further, it may be a single-channel three-value image in which pixels recognized as belonging to the portrait region take a first pixel value, pixels recognized as belonging to the background region take a second pixel value, and pixels recognized as belonging to the transition region between portrait and background take a third pixel value. For example, taking a grayscale portrait segmentation map: pixels recognized as portrait may have a gray value of 0, pixels recognized as background may have a gray value of 255, and pixels recognized as belonging to the portrait-background transition region may have a gray value of 127.5, though the values are not limited to these.

The single-channel portrait segmentation map can be channel-concatenated with the first image. The first image is a three-channel image (such as an RGB or HSV image), and the portrait segmentation map can be appended as the fourth channel of the first image, yielding a four-channel concatenated image. Optionally, before channel concatenation, the first image and the portrait segmentation map may each be normalized, and the normalized images then concatenated. The normalization may, for example, subtract the mean from each pixel value and divide by the variance, or directly divide each pixel value by the grayscale range (e.g., 255), though it is not limited to these. Normalizing the first image and the portrait segmentation map before channel concatenation can improve the accuracy and efficiency of the subsequent hair-region recognition.
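The four-channel stacking can be shown concretely. A minimal numpy sketch follows; note it normalizes the RGB channels by mean and standard deviation (a common variant of the mean/variance scheme the text mentions) and scales the trimap by the 255 grayscale range, and all function names are illustrative.

```python
import numpy as np

def normalize(img):
    # Per-channel standardisation: subtract mean, divide by std
    mean = img.mean(axis=(0, 1), keepdims=True)
    std = img.std(axis=(0, 1), keepdims=True) + 1e-8
    return (img - mean) / std

def make_four_channel_input(rgb, seg):
    """Append the single-channel segmentation map as a 4th channel after
    normalising both inputs, producing an H x W x 4 array."""
    rgb_n = normalize(rgb.astype(float))
    seg_n = (seg.astype(float) / 255.0)[..., None]  # scale trimap to [0, 1]
    return np.concatenate([rgb_n, seg_n], axis=-1)
```

The extra channel is how the hair-matting model is told where the coarse portrait boundary already lies, so it can focus on the uncertain strand region.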

The electronic device can perform hair matting on the concatenated image to recognize the hair-strand region and obtain a hair-matting result for the first image, and then obtain a portrait segmentation result that meets the accuracy condition based on the hair-matting result and the portrait segmentation map. The hair matting may use, but is not limited to, traditional (non-deep-learning) matting methods such as Poisson matting, Bayes matting based on Bayesian theory, data-driven machine-learning matting, or closed-form matting, or deep-learning-based matting methods using artificial neural networks such as Convolutional Neural Networks (CNNs).

As a specific implementation, the electronic device may input the concatenated image into a first hair-matting model, extract features of the concatenated image through this model, and determine the hair region in the first image according to the features, so as to obtain a portrait segmentation result that meets the accuracy condition. The first hair-matting model may be trained on a first training set comprising multiple portrait sample images annotated with hair regions; these sample images may carry hair labels marking the hair regions. Optionally, to guarantee that the resulting portrait segmentation result meets the accuracy condition, the first hair-matting model may be trained against that condition so that the hair region it predicts satisfies it; for example, the error between the hair region predicted by the first hair-matting model and the ground-truth hair region of a portrait sample image may be required to be below a set SAD threshold, MSE threshold, gradient-error threshold, and so on, though it is not limited to these.

The first hair-matting model may also adopt a network architecture such as U-Net and include an encoder and a decoder. Based on the input concatenated image, the first hair-matting model outputs a hair-matting result for the first image; this result may include a hair mask corresponding to the first image, which contains the position information of the hair region in the first image and can be used to mark that region.

The portrait segmentation map can be corrected according to the hair-matting result output by the first hair-matting model to obtain a portrait segmentation result that meets the accuracy condition. Further, the pixel values of the pixels in the portrait segmentation map that belong to the portrait-background transition region can be adjusted according to the hair-matting result, determining for each such pixel whether it belongs to the hair region or the background region. Each pixel recognized in the portrait segmentation map as belonging to the portrait-background transition region can be looked up in the hair-matting result to see whether it corresponds to hair or background, so that the hair-matting result accurately classifies these transition-region pixels.
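The correction step above amounts to resolving the "unknown" band of the three-value map with the matte. A minimal sketch, assuming the gray-value convention from the earlier paragraph (0 portrait, 255 background, ~127 unknown) and a hypothetical 0.5 alpha cut-off:

```python
import numpy as np

def refine_with_matting(seg, matte, unknown=127, fg=0, bg=255):
    """Resolve the 'unknown' transition band of the three-value segmentation
    map using the hair-matting alpha: each unknown pixel is reassigned to
    foreground (hair) or background according to the matte."""
    out = seg.copy()
    unknown_px = seg == unknown
    out[unknown_px & (matte >= 0.5)] = fg   # matte says hair -> foreground
    out[unknown_px & (matte < 0.5)] = bg    # matte says no hair -> background
    return out
```

Pixels already decided by the segmentation map are left untouched; only the contested band is rewritten, which is why the combined result is both stable and sharper around strands.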

As another implementation, the hair-matting result may be a single-channel hair mask, and the region of the portrait segmentation map that matches the hair-matting result may be directly replaced with it to obtain a portrait segmentation result that meets the accuracy condition. Using the portrait segmentation model and the first hair-matting model to recognize the portrait region and the hair region of the first image respectively can improve the stability and accuracy of the resulting portrait segmentation result.

In other implementations, the electronic device may also directly input the first image into an image processing model and recognize the portrait region and hair region of the first image through that model, obtaining a portrait segmentation result that meets the accuracy condition. Optionally, the image processing model may be a neural network with a dual encoder-decoder structure, trained on portrait sample images that carry both portrait labels and hair labels. This approach reduces the amount of computation needed to obtain the portrait segmentation result and improves image processing efficiency.

Step 704, blur the first image based on the portrait segmentation result that meets the accuracy condition to obtain a first blurred image.

In some embodiments, the electronic device may also perform depth estimation on the first image to obtain a depth estimation result, correct the depth estimation result according to the portrait segmentation result that meets the accuracy condition to obtain a first depth map of the first image, and then blur the first image using the first depth map to obtain the first blurred image.

Step 706, in response to a selection operation on the first blurred image, determine one or more image regions selected by the selection operation in the first blurred image.

For a description of step 706, refer to the relevant descriptions in the above embodiments, which will not be repeated here.

Step 708, recognize the hair region of each image region to obtain a local hair-matting result corresponding to each image region.

For portrait images, the areas where the blurring effect tends to be poor are usually the boundary areas between hair strands and the background. To improve the blurring effect, after the one or more image regions selected by the user are determined, the embodiments of the present application can recognize the hair region within each image region again, obtaining a local hair-matting result for each image region and thereby refining the local hair-strand recognition in the first image.

The hair region within each image region may be recognized using, but not limited to, traditional (non-deep-learning) matting methods such as Poisson matting, Bayes matting based on Bayesian theory, data-driven machine-learning matting, or closed-form matting, or deep-learning-based matting methods using artificial neural networks such as convolutional neural networks.

As a specific implementation, the electronic device may recognize the hair region in each image region through a second hair-matting model to obtain the local hair-matting result for each image region. The network architecture of the second hair-matting model may be the same as or similar to that of the first hair-matting model. The second hair-matting model may be trained on a second training set comprising multiple sample images randomly cropped from the portrait sample images of the first training set.

Optionally, the sample images in the second training set may be randomly cropped from the portrait sample images of the first training set according to the region size. Since the portrait sample images of the first training set carry hair labels, directly cropping image regions of the same size as the selection box from them as sample images for training the second hair-matting model can improve the model's local hair-strand recognition ability and improve training efficiency.
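Building the second training set by random cropping can be sketched directly, since image and hair label are cropped at the same position. A hypothetical numpy helper (function name, seeding, and tuple format are illustrative):

```python
import numpy as np

def random_crops(image, label, crop_h, crop_w, n, seed=0):
    """Produce n selection-box-sized (patch, hair-label-patch) pairs by
    cropping an annotated first-training-set image at random positions."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    crops = []
    for _ in range(n):
        y = int(rng.integers(0, h - crop_h + 1))
        x = int(rng.integers(0, w - crop_w + 1))
        crops.append((image[y:y + crop_h, x:x + crop_w],
                      label[y:y + crop_h, x:x + crop_w]))
    return crops
```

Because the crop window is applied identically to the image and its label, each patch keeps a pixel-accurate hair annotation at the selection-box scale the second model will see at inference time.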

In some embodiments, before the electronic device recognizes the hair region in each image region through the second hair-matting model, a first region image corresponding to each image region may be cropped from the first image, and a second region image corresponding to each image region may be cropped from the portrait segmentation map. The first region image corresponding to an image region refers to the image formed by the content of the first image that occupies the same image position as that image region; the second region image refers to the image formed by the content of the portrait segmentation map that occupies the same image position.

The first region image and the second region image corresponding to each image region can be channel-concatenated to obtain an input image for each image region, and each input image can then be fed into the second hair-matting model. The second hair-matting model can recognize the hair region contained in each input image, producing the local hair-matting result corresponding to each image region.

Illustratively, obtaining the local hair-matting result for each image region in the above embodiments is now described with reference to FIG. 8. As shown in FIG. 8, the screen of the electronic device can display a first blurred image 810, and the user can select an image region 812 to be optimized. The electronic device can crop, from the first image 820, a first region image 822 occupying the same image position as the image region 812, and crop, from the portrait segmentation map 830, a second region image 832 occupying the same image position. The first region image 822 and the second region image 832 can be channel-concatenated, with the single-channel second region image 832 appended as the fourth channel of the three-channel first region image 822, yielding a four-channel input image 840. The input image 840 can be fed into the second hair-matting model, which recognizes the hair region of the input image 840 and produces the local hair-matting result 850 corresponding to the image region 812.

In some embodiments, before the input image of each image region is fed into the second hair-matting model, the input image may be preprocessed so that it is compatible with the model. The input image may be scaled and/or rotated according to the size requirement of the second hair-matting model, i.e., the model's requirement on the size of its input. For example, if the cropped first and second region images both have the size of the selection box, which is smaller than the model's required size, the input image may first be enlarged; or, if the concatenated input image is portrait-oriented (the side parallel to the horizontal is shorter than the side perpendicular to it) while the model requires a landscape orientation (the side parallel to the horizontal is longer than the side perpendicular to it), the input image may first be rotated 90 degrees clockwise or counterclockwise. Scaling and/or rotating the input image to meet the size requirement before feeding it into the second hair-matting model for hair-region recognition ensures the accuracy of the recognition result.

Further, after the input image is scaled and/or rotated, it may also be normalized, in a manner similar to the normalization of the first image and the portrait segmentation map described in the above embodiments, before being fed into the second hair-matting model for hair-region recognition. Optionally, after the first and second region images are cropped, they may each be normalized first and then channel-concatenated.

After the local hair-matting result output by the second hair-matting model is obtained, it may be scaled and/or rotated back according to the original size of the input image, so as to obtain a local hair-matting result of the same size as the image region.

Step 710, based on the local hair-matting results of the image regions, blur the first image or the first blurred image to obtain a second blurred image.

As one implementation, the electronic device may fuse the local hair-matting result of each image region with the portrait segmentation result that meets the accuracy condition to obtain a target portrait segmentation result; that is, the portrait segmentation result may be corrected according to the local hair-matting results to obtain a more accurate target portrait segmentation result. Specifically, in the portrait segmentation result that meets the accuracy condition, the image content occupying the same position as each image region may be replaced with that region's local hair-matting result. After the target portrait segmentation result is obtained, the depth estimation result of the first image may be corrected according to it to obtain a target depth map, and the first image may be blurred based on the target depth map to obtain the second blurred image.
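The fusion step, replacing each selected region of the global result with its local matting result at the same image position, can be sketched in a few lines. Region coordinates are represented as hypothetical `(y0, y1, x0, x1)` boxes:

```python
import numpy as np

def fuse_local_results(global_seg, local_results):
    """Paste each local hair-matting result over the same-position region of
    the global portrait segmentation result, leaving everything else as-is."""
    fused = global_seg.copy()
    for (y0, y1, x0, x1), local in local_results:
        assert local.shape == (y1 - y0, x1 - x0)  # sizes must match the region
        fused[y0:y1, x0:x1] = local
    return fused
```

Because only the selected regions are rewritten, the fused target result inherits the stability of the global segmentation everywhere the user did not ask for refinement.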

As another implementation, the electronic device may correct the first depth map using the local hair-matting results of the image regions: the hair edges corresponding to each image region in the first depth map may be adjusted based on that region's local hair-matting result to obtain a second depth map. The depth information of each image region in the second depth map can then be used to blur the corresponding image regions in the first blurred image, yielding the second blurred image. Optionally, the second depth map may also be used to blur the first image to obtain a more accurate second blurred image.

It should be noted that, besides the blurring-optimization scenario, the solution of the present application can also be applied to other image processing scenarios. For example, after local hair matting is performed on each selected image region to obtain a more accurate portrait region, the boundary area between hair and background can undergo color adjustment, bokeh-spot rendering, or other processing based on that more accurate portrait region; no limitation is imposed here.

In the embodiments of the present application, an interactive hair-matting rendering optimization approach is adopted: the user can select the image regions to be optimized according to actual needs, and local hair matting is performed on the selected regions. This improves the precision of hair matting and the accuracy of portrait recognition, thereby enhancing the prominence of hair strands after blurring, avoiding cases where the background near the hair region is left unblurred or the hair region itself is mistakenly blurred, and improving interactivity with the user.

As shown in FIG. 9, in one embodiment, an image processing apparatus 900 is provided, which can be applied to the above electronic device. The image processing apparatus 900 may include a first recognition module 910, a blurring module 920, a region selection module 930, and a second recognition module 940.

The first recognition module 910 is configured to recognize the foreground region in a first image to obtain a first foreground recognition result.

The blurring module 920 is configured to blur the first image based on the first foreground recognition result to obtain a first blurred image.

The region selection module 930 is configured to, in response to a selection operation on the first blurred image, determine one or more image regions selected by the selection operation in the first blurred image.

在一个实施例中,区域选择模块930,还用于获取选择操作在屏幕上的一个或多个触控位置;针对各个触控位置,按照区域尺寸形成与各个触控位置对应的选择框;以及确定第一虚化图像中与各个选择框对应的图像区域。In one embodiment, the area selection module 930 is also used to obtain one or more touch positions of the selection operation on the screen; for each touch position, form a selection box corresponding to each touch position according to the area size; and determine the image area corresponding to each selection box in the first blurred image.
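The touch-to-box mapping described for the area selection module can be sketched as follows. This is a minimal Python illustration (the function and parameter names are hypothetical, not part of the disclosure), assuming a fixed square box size and that each box is clamped to stay within the image bounds:

```python
def selection_boxes(touch_points, box_size, img_w, img_h):
    """Form a fixed-size square selection box around each touch position,
    clamped so the full box lies inside the image."""
    boxes = []
    for (x, y) in touch_points:
        half = box_size // 2
        # Clamp the top-left corner so the box never extends past an edge.
        left = min(max(x - half, 0), img_w - box_size)
        top = min(max(y - half, 0), img_h - box_size)
        boxes.append((left, top, left + box_size, top + box_size))
    return boxes
```

An adjustment operation on a target selection box, as described next, would then amount to re-forming that box with a different `box_size`.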

在一个实施例中,区域选择模块930,还用于若检测到针对目标选择框触发的调整操作,则根据调整操作调整目标选择框的尺寸。In one embodiment, the region selection module 930 is further configured to adjust the size of the target selection box according to the adjustment operation if an adjustment operation triggered on the target selection box is detected.

第二识别模块940,用于识别各个图像区域的前景区域,得到各个图像区域对应的第二前景识别结果。The second recognition module 940 is used to recognize the foreground area of each image area and obtain a second foreground recognition result corresponding to each image area.

虚化模块920,还用于基于各个图像区域的第二前景识别结果,对第一虚化图像进行虚化处理,得到第二虚化图像。The blur module 920 is further configured to blur the first blurred image based on the second foreground recognition result of each image region to obtain a second blurred image.

在本申请实施例中,在对第一图像初次虚化处理后,用户可选择需要进一步优化的图像区域,并再次对选择的图像区域进行前景识别,提高了前景识别的准确性,且基于更加精确的第二前景识别结果对第一图像或第一虚化图像进行二次虚化处理,能够改善将部分前景区域错误地虚化或遗漏部分背景区域未虚化的情况,提高了图像的虚化效果。此外,用户可在第一虚化图像中选择需要进一步优化的图像区域,贴合用户的不同需求,提高了与用户之间的互动性。In the embodiment of the present application, after the first image is initially blurred, the user can select the image areas that need further optimization, and foreground recognition is performed again on the selected areas, which improves the accuracy of foreground recognition. Secondary blurring of the first image or the first blurred image based on the more accurate second foreground recognition results can remedy cases where part of the foreground area was mistakenly blurred or part of the background area was left un-blurred, thereby improving the blurring effect of the image. In addition, the user can select the image areas to be further optimized directly in the first blurred image, which fits different user needs and improves interactivity with the user.

在一个实施例中,虚化模块920,包括融合单元及虚化单元。In one embodiment, the blur module 920 includes a fusion unit and a blur unit.

融合单元,用于将各个图像区域对应的第二前景识别结果与第一前景识别结果进行融合,得到目标前景识别结果。The fusion unit is used to fuse the second foreground recognition results corresponding to each image area with the first foreground recognition results to obtain a target foreground recognition result.

在一个实施例中,融合单元,还用于将第一前景识别结果中与各个图像区域对应的前景识别结果,分别替换为各个图像区域对应的第二前景识别结果,得到目标前景识别结果。In one embodiment, the fusion unit is further used to replace the foreground recognition results corresponding to each image region in the first foreground recognition results with the second foreground recognition results corresponding to each image region, respectively, to obtain a target foreground recognition result.
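The replacement-style fusion performed by the fusion unit can be sketched as follows — a minimal illustration assuming the recognition results are 2-D masks aligned with the first image, with boxes given as `(left, top, right, bottom)` pixel coordinates (all names hypothetical):

```python
import numpy as np

def fuse_foreground_masks(global_mask, region_results):
    """Replace the slice of the global (first) foreground mask covered by
    each selected region with that region's second recognition result."""
    fused = global_mask.copy()
    for (left, top, right, bottom), region_mask in region_results:
        # Paste the more accurate local result over the coarse global one.
        fused[top:bottom, left:right] = region_mask
    return fused
```

Only the selected regions change; the first foreground recognition result is kept everywhere else, matching the fusion described above.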

虚化单元,用于基于目标前景识别结果对第一图像进行虚化处理,得到第二虚化图像。The blurring unit is used to blur the first image based on the target foreground recognition result to obtain a second blurred image.

在一个实施例中,上述图像处理装置900,除了包括第一识别模块910、虚化模块920、区域选择模块930及第二识别模块940,还包括深度估计模块。In one embodiment, the image processing device 900 , in addition to the first recognition module 910 , the blurring module 920 , the region selection module 930 , and the second recognition module 940 , further includes a depth estimation module.

深度估计模块,用于对第一图像进行深度估计,得到深度估计结果,深度估计结果包括第一图像中各个像素点的深度信息。The depth estimation module is used to perform depth estimation on the first image to obtain a depth estimation result, wherein the depth estimation result includes depth information of each pixel in the first image.

虚化单元,还用于根据目标前景识别结果对深度估计结果进行修正,得到目标深度图,并基于目标深度图对第一图像进行虚化处理,得到第二虚化图像。The blurring unit is further used to correct the depth estimation result according to the target foreground recognition result to obtain a target depth map, and blur the first image based on the target depth map to obtain a second blurred image.
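The depth-guided blurring performed by the blurring unit can be illustrated in a simplified way. The sketch below is a stand-in under stated assumptions — a single-channel or RGB image, a per-pixel depth map where larger values mean farther — and not the disclosure's actual rendering; a real implementation would use a lens-blur kernel rather than the box mean used here for brevity:

```python
import numpy as np

def depth_guided_blur(image, depth, radius=5):
    """Blend each pixel between the sharp image and a uniformly blurred
    copy, weighted by normalized depth (far pixels receive more blur)."""
    h, w = image.shape[:2]
    blurred = np.empty_like(image, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            # Mean over a clamped window: a crude but clear blur.
            y0, y1 = max(y - radius, 0), min(y + radius + 1, h)
            x0, x1 = max(x - radius, 0), min(x + radius + 1, w)
            blurred[y, x] = image[y0:y1, x0:x1].mean(axis=(0, 1))
    # Normalize depth to [0, 1]; 0 = near (kept sharp), 1 = far (fully blurred).
    weight = (depth - depth.min()) / max(np.ptp(depth), 1e-6)
    if image.ndim == 3:
        weight = weight[..., None]
    return (1 - weight) * image + weight * blurred
```

Correcting the depth map from the target foreground recognition result, as described above, amounts to forcing foreground pixels toward near depth so they stay sharp under this weighting.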

在本申请实施例中,在对第一图像初次虚化处理后,用户可选择需要进一步优化的图像区域,电子设备可对选择的各个图像区域进行前景识别,并将得到的各个图像区域对应的第二前景识别结果与第一前景识别结果进行融合,以对第一前景识别结果进行修正,得到更加准确的目标前景识别结果,从而基于该目标前景识别结果对第一图像进行虚化处理可得到虚化效果更好的第二虚化图像,提高了前景识别的准确性及图像虚化效果。In the embodiment of the present application, after the first image is initially blurred, the user can select the image areas that need further optimization. The electronic device performs foreground recognition on each selected area and fuses the resulting second foreground recognition results with the first foreground recognition result, correcting the latter to obtain a more accurate target foreground recognition result. Blurring the first image based on this target result then yields a second blurred image with a better blurring effect, improving both the accuracy of foreground recognition and the blurring effect of the image.

在一个实施例中,虚化模块920,还用于根据第一前景识别结果对深度估计结果进行修正,得到第一图像的第一深度图,并根据第一深度图对第一图像进行虚化处理,得到第一虚化图像。In one embodiment, the blur module 920 is further configured to correct the depth estimation result according to the first foreground recognition result to obtain a first depth map of the first image, and blur the first image according to the first depth map to obtain a first blurred image.

在一个实施例中,虚化模块920,还用于基于各个图像区域的第二前景识别结果,对第一深度图中与各个图像区域对应的边缘信息进行调整,得到第二深度图;以及根据各个图像区域在第二深度图中的深度信息,分别对第一虚化图像中的各个图像区域进行虚化处理,得到第二虚化图像。In one embodiment, the blur module 920 is also used to adjust the edge information corresponding to each image area in the first depth map based on the second foreground recognition result of each image area to obtain a second depth map; and blur each image area in the first blurred image according to the depth information of each image area in the second depth map to obtain a second blurred image.

在本申请实施例中,可根据用户选择的各个图像区域的第二前景识别结果对第一深度图进行修正,并基于修正得到的第二深度图对第一虚化图像中的图像区域进行虚化处理,得到虚化效果更好的第二虚化图像,直接利用各个选择的图像区域的第二前景识别结果对第一虚化图像进行局部的虚化处理,提高了前景识别的准确性及图像虚化效果,且减少了计算量,提高图像处理效率。In the embodiment of the present application, the first depth map can be corrected according to the second foreground recognition results of the image areas selected by the user, and the image areas in the first blurred image can be blurred based on the corrected second depth map to obtain a second blurred image with a better blurring effect. Because the second foreground recognition results of the selected areas are used directly for local blurring of the first blurred image, the accuracy of foreground recognition and the blurring effect are improved while the amount of computation is reduced, increasing image processing efficiency.

在一个实施例中,第一图像包括人像图像。第一识别模块910,还用于识别第一图像的人像区域及头发区域,得到满足精度条件的人像分割结果。In one embodiment, the first image includes a portrait image. The first recognition module 910 is further configured to recognize a portrait region and a hair region of the first image to obtain a portrait segmentation result that meets the accuracy condition.

在一个实施例中,第一识别模块910,包括人像分割单元、拼接单元及发丝抠图单元。In one embodiment, the first recognition module 910 includes a portrait segmentation unit, a splicing unit and a hair cutout unit.

人像分割单元,用于识别第一图像的人像区域,得到人像分割图。The portrait segmentation unit is used to identify the portrait area of the first image and obtain a portrait segmentation map.

拼接单元,用于将人像分割图与第一图像进行通道拼接,得到拼接图像。The stitching unit is used to perform channel stitching on the portrait segmentation image and the first image to obtain a stitched image.
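The channel stitching performed by the stitching unit is, in effect, a concatenation of the portrait segmentation map with the image along the channel axis. A minimal sketch (assuming an H×W×3 image and an H×W single-channel mask; names hypothetical):

```python
import numpy as np

def channel_stitch(image_rgb, portrait_mask):
    """Concatenate an H×W×3 image with an H×W segmentation map along
    the channel axis, yielding an H×W×4 input tensor for the model."""
    return np.concatenate([image_rgb, portrait_mask[..., None]], axis=-1)
```

The extra channel gives the hair-matting model a coarse prior on where the portrait lies, as the following unit exploits.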

发丝抠图单元,用于根据拼接图像识别第一图像中的头发区域,以得到满足精度条件的人像分割结果。The hair cutout unit is used to identify the hair area in the first image according to the spliced image to obtain a portrait segmentation result that meets the accuracy condition.

在一个实施例中,发丝抠图单元,还用于将拼接图像输入第一发丝抠图模型,通过第一发丝抠图模型提取拼接图像的特征,并根据该特征确定第一图像中的头发区域,以得到满足精度条件的人像分割结果,其中,第一发丝抠图模型是基于第一训练集训练得到的,第一训练集包括多张标注有头发区域的人像样本图像。In one embodiment, the hair cutout unit is further used to input the stitched image into a first hair cutout model, extract features of the stitched image through the first hair cutout model, and determine the hair area in the first image according to the features to obtain a portrait segmentation result that meets the accuracy conditions, wherein the first hair cutout model is trained based on a first training set, and the first training set includes multiple portrait sample images with hair areas marked.

在一个实施例中,第二识别模块940,还用于通过第二发丝抠图模型识别各个图像区域中的头发区域,得到各个图像区域对应的局部发丝抠图结果,其中,第二发丝抠图模型是基于第二训练集训练得到的,第二训练集中包括多张从第一训练集的人像样本图像随机裁剪得到的样本图像。In one embodiment, the second recognition module 940 is further used to identify the hair area in each image area through a second hair cutout model to obtain a local hair cutout result corresponding to each image area, wherein the second hair cutout model is trained based on a second training set, and the second training set includes multiple sample images randomly cropped from portrait sample images of the first training set.

在一个实施例中,上述图像处理装置900,还包括裁剪模块及拼接模块。In one embodiment, the image processing device 900 further includes a cropping module and a splicing module.

裁剪模块,用于从第一图像中裁剪与各个图像区域对应的第一区域图像;以及从人像分割图中裁剪与各个图像区域对应的第二区域图像。The cropping module is used to crop first region images corresponding to each image region from the first image; and to crop second region images corresponding to each image region from the portrait segmentation image.

拼接模块,用于将各个图像区域对应的第一区域图像与第二区域图像进行通道拼接,得到各个图像区域对应的输入图像。The stitching module is used to perform channel stitching on the first region image and the second region image corresponding to each image region to obtain the input image corresponding to each image region.

第二识别模块940,还用于将各个图像区域对应的输入图像输入第二发丝抠图模型,并通过第二发丝抠图模型识别各个图像区域对应的输入图像中的头发区域,得到各个图像区域对应的局部发丝抠图结果。The second recognition module 940 is further used to input the input image corresponding to each image area into the second hair cutout model, and identify the hair area in the input image corresponding to each image area through the second hair cutout model to obtain the local hair cutout result corresponding to each image area.

在一个实施例中,上述图像处理装置900,还包括预处理模块。In one embodiment, the above-mentioned image processing device 900 further includes a pre-processing module.

预处理模块,用于在拼接模块得到各个图像区域对应的输入图像后,按照第二发丝抠图模型对应的尺寸要求,对输入图像进行缩放处理和/或旋转处理,得到满足尺寸要求的输入图像。The preprocessing module is used to scale and/or rotate the input image according to the size requirement corresponding to the second hair cutout model after the stitching module obtains the input image corresponding to each image area, so as to obtain an input image that meets the size requirement.
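The scaling step of the preprocessing module can be sketched with a nearest-neighbour resize; this is an illustrative stand-in (names hypothetical) for whatever resampling satisfies the size requirement of the second hair-matting model:

```python
import numpy as np

def resize_nearest(image, out_h, out_w):
    """Nearest-neighbour resize so a cropped region image matches the
    fixed input size expected by the matting model."""
    h, w = image.shape[:2]
    # Map each output row/column index back to a source index.
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return image[rows][:, cols]
```

In practice a bilinear or bicubic resampler would normally be preferred for image channels; nearest-neighbour is shown because it also preserves hard mask values.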

在本申请实施例中,采用交互式的发丝抠图渲染优化方式,用户可根据实际需求选择需要优化的图像区域,并对选择的图像区域进行局部的发丝抠图,提升了发丝抠图的精度,提高了人像识别的准确度,从而提高了虚化处理结果的发丝显著性效果,避免发丝区域背景漏虚或发丝区域误虚化的情况,且提高了与用户之间的交互性。In the embodiment of the present application, an interactive hair-matting rendering optimization scheme is adopted: the user can select the image areas to be optimized according to actual needs, and local hair matting is performed on the selected areas. This improves the precision of hair matting and the accuracy of portrait recognition, and thus the hair-saliency of the blurred result, avoiding cases where background pixels in the hair region are missed by the blurring or where hair pixels are mistakenly blurred, while also improving interactivity with the user.

图10为一个实施例中电子设备的结构框图。如图10所示,电子设备1000可以包括一个或多个如下部件:处理器1010、与处理器1010耦合的存储器1020,其中存储器1020可存储有一个或多个计算机程序,一个或多个计算机程序可以被配置为由一个或多个处理器1010执行时实现如上述各实施例描述的方法。Fig. 10 is a block diagram of the structure of an electronic device in an embodiment. As shown in Fig. 10, the electronic device 1000 may include one or more of the following components: a processor 1010, a memory 1020 coupled to the processor 1010, wherein the memory 1020 may store one or more computer programs, and the one or more computer programs may be configured to implement the methods described in the above embodiments when executed by one or more processors 1010.

处理器1010可以包括一个或者多个处理核。处理器1010利用各种接口和线路连接整个电子设备1000内的各个部分,通过运行或执行存储在存储器1020内的指令、程序、代码集或指令集,以及调用存储在存储器1020内的数据,执行电子设备1000的各种功能和处理数据。可选地,处理器1010可以采用数字信号处理(Digital Signal Processing,DSP)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)、可编程逻辑阵列(Programmable Logic Array,PLA)中的至少一种硬件形式来实现。处理器1010可集成中央处理器(Central Processing Unit,CPU)、图像处理器(Graphics Processing Unit,GPU)和调制解调器等中的一种或几种的组合。其中,CPU主要处理操作系统、用户界面和应用程序等;GPU用于负责显示内容的渲染和绘制;调制解调器用于处理无线通信。可以理解的是,上述调制解调器也可以不集成到处理器1010中,单独通过一块通信芯片进行实现。The processor 1010 may include one or more processing cores. The processor 1010 uses various interfaces and lines to connect various parts of the entire electronic device 1000, and executes various functions and processes data of the electronic device 1000 by running or executing instructions, programs, code sets or instruction sets stored in the memory 1020, and calling data stored in the memory 1020. Optionally, the processor 1010 can be implemented in at least one hardware form of digital signal processing (Digital Signal Processing, DSP), field programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA). The processor 1010 can integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU) and a modem. Among them, the CPU mainly processes the operating system, user interface and application programs; the GPU is responsible for rendering and drawing display content; and the modem is used to process wireless communications. It can be understood that the above-mentioned modem may not be integrated into the processor 1010, but may be implemented separately through a communication chip.

存储器1020可以包括随机存储器(Random Access Memory,RAM),也可以包括只读存储器(Read-Only Memory,ROM)。存储器1020可用于存储指令、程序、代码、代码集或指令集。存储器1020可包括存储程序区和存储数据区,其中,存储程序区可存储用于实现操作系统的指令、用于实现至少一个功能的指令(比如触控功能、声音播放功能、图像播放功能等)、用于实现上述各个方法实施例的指令等。存储数据区还可以存储电子设备1000在使用中所创建的数据等。The memory 1020 may include a random access memory (RAM) or a read-only memory (ROM). The memory 1020 may be used to store instructions, programs, codes, code sets or instruction sets. The memory 1020 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playback function, an image playback function, etc.), instructions for implementing the above-mentioned various method embodiments, etc. The data storage area may also store data created by the electronic device 1000 during use, etc.

可以理解地,电子设备1000可包括比上述结构框图中更多或更少的结构元件,例如,包括电源模块、物理按键、WiFi(Wireless Fidelity,无线保真)模块、扬声器、蓝牙模块、传感器等,在此不进行限定。It can be understood that the electronic device 1000 may include more or fewer structural elements than shown in the above structural block diagram, for example, a power module, physical buttons, a WiFi (Wireless Fidelity) module, a speaker, a Bluetooth module, sensors, etc., which is not limited here.

本申请实施例公开一种计算机可读存储介质,其存储计算机程序,其中,该计算机程序被处理器执行时实现如上述实施例描述的方法。An embodiment of the present application discloses a computer-readable storage medium storing a computer program, wherein the computer program implements the method described in the above embodiment when executed by a processor.

本申请实施例公开一种计算机程序产品,该计算机程序产品包括存储了计算机程序的非瞬时性计算机可读存储介质,且该计算机程序可被处理器执行时实现如上述各实施例描述的方法。An embodiment of the present application discloses a computer program product, which includes a non-transitory computer-readable storage medium storing a computer program, and the computer program can be executed by a processor to implement the methods described in the above embodiments.

本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,所述的程序可存储于一非易失性计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。其中,所述的存储介质可为磁碟、光盘、ROM等。Those skilled in the art can understand that all or part of the processes in the above-mentioned embodiments can be implemented by instructing related hardware through a computer program, and the program can be stored in a non-volatile computer-readable storage medium, and when the program is executed, it can include the processes of the embodiments of the above-mentioned methods. The storage medium can be a disk, an optical disk, a ROM, etc.

如此处所使用的对存储器、存储、数据库或其它介质的任何引用可包括非易失性和/或易失性存储器。合适的非易失性存储器可包括ROM、可编程ROM(Programmable ROM,PROM)、可擦除PROM(Erasable PROM,EPROM)、电可擦除PROM(Electrically Erasable PROM,EEPROM)或闪存。易失性存储器可包括随机存取存储器(Random Access Memory,RAM),它用作外部高速缓冲存储器。作为说明而非局限,RAM可为多种形式,诸如静态RAM(Static RAM,SRAM)、动态RAM(Dynamic Random Access Memory,DRAM)、同步DRAM(Synchronous DRAM,SDRAM)、双倍数据率SDRAM(Double Data Rate SDRAM,DDR SDRAM)、增强型SDRAM(Enhanced Synchronous DRAM,ESDRAM)、同步链路DRAM(Synchlink DRAM,SLDRAM)、存储器总线直接RAM(Rambus DRAM,RDRAM)及直接存储器总线动态RAM(Direct Rambus DRAM,DRDRAM)。As used herein, any reference to memory, storage, a database, or another medium may include non-volatile and/or volatile memory. Suitable non-volatile memory may include ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM), which is used as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), and direct Rambus dynamic RAM (DRDRAM).

应理解,说明书通篇中提到的“一个实施例”或“一实施例”意味着与实施例有关的特定特征、结构或特性包括在本申请的至少一个实施例中。因此,在整个说明书各处出现的“在一个实施例中”或“在一实施例中”未必一定指相同的实施例。此外,这些特定特征、结构或特性可以以任意适合的方式结合在一个或多个实施例中。本领域技术人员也应该知悉,说明书中所描述的实施例均属于可选实施例,所涉及的动作和模块并不一定是本申请所必须的。It should be understood that "one embodiment" or "an embodiment" mentioned throughout the specification means that specific features, structures or characteristics related to the embodiment are included in at least one embodiment of the present application. Therefore, "in one embodiment" or "in an embodiment" appearing throughout the specification does not necessarily refer to the same embodiment. In addition, these specific features, structures or characteristics can be combined in one or more embodiments in any suitable manner. Those skilled in the art should also be aware that the embodiments described in the specification are all optional embodiments, and the actions and modules involved are not necessarily required by the present application.

在本申请的各种实施例中,应理解,上述各过程的序号的大小并不意味着执行顺序的必然先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。In the various embodiments of the present application, it should be understood that the size of the serial numbers of the above-mentioned processes does not necessarily mean the order of execution. The execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.

上述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可位于一个地方,或者也可以分布到多个网络单元上。可根据实际的需要选择其中的部分或全部单元来实现本实施例方案的目的。The units described above as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

另外,在本申请各实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。In addition, each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit. The above-mentioned integrated unit may be implemented in the form of hardware or in the form of software functional units.

以上对本申请实施例公开的一种图像处理方法、装置、电子设备及计算机可读存储介质进行了详细介绍,本文中应用了具体个例对本申请的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本申请的方法及其核心思想。同时,对于本领域的一般技术人员,依据本申请的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本申请的限制。The above is a detailed introduction to an image processing method, device, electronic device, and computer-readable storage medium disclosed in the embodiments of the present application. Specific examples are used herein to illustrate the principles and implementation methods of the present application. The description of the above embodiments is only used to help understand the method and core idea of the present application. At the same time, for those of ordinary skill in the art, according to the idea of the present application, there will be changes in the specific implementation methods and application scopes. In summary, the content of this specification should not be understood as limiting the present application.

Claims (17)

1. An image processing method, comprising:
identifying a foreground region in a first image to obtain a first foreground identification result;
performing blurring processing on the first image based on the first foreground identification result to obtain a first blurring image;
in response to a selection operation for the first blurring image, determining one or more image areas selected in the first blurring image by the selection operation; the image area is an area where mistaken blurring or missed blurring occurs;
identifying foreground areas of the image areas to obtain second foreground identification results corresponding to the image areas;
based on the second foreground identification result of each image area, carrying out blurring processing on the first image or the first blurring image to obtain a second blurring image;
the step of identifying the foreground region of each image area to obtain a second foreground identification result corresponding to each image area comprises the following steps:
cutting out an area image with the same image position from the first image according to the image position of each image area in the first blurring image, and carrying out foreground recognition on the area image cut out from the first image so as to obtain the second foreground identification result corresponding to each image area.
2. The method according to claim 1, wherein blurring the first image or the first blurring image based on the second foreground identification result of each image area, to obtain a second blurring image, includes:
Fusing the second foreground recognition results corresponding to the image areas with the first foreground recognition results to obtain target foreground recognition results;
and blurring the first image based on the target foreground identification result to obtain a second blurring image.
3. The method according to claim 2, wherein the fusing the second foreground recognition result corresponding to each image area with the first foreground recognition result to obtain a target foreground recognition result includes:
And respectively replacing the foreground recognition results corresponding to the image areas in the first foreground recognition results with the second foreground recognition results corresponding to the image areas to obtain target foreground recognition results.
4. The method of claim 2, wherein prior to blurring the first image based on the first foreground recognition result, the method further comprises:
Performing depth estimation on the first image to obtain a depth estimation result, wherein the depth estimation result comprises depth information of each pixel point in the first image;
the blurring processing is performed on the first image based on the target foreground recognition result to obtain a second blurring image, including:
correcting the depth estimation result according to the target foreground identification result to obtain a target depth map;
and blurring the first image based on the target depth map to obtain a second blurring image.
5. The method of claim 1, wherein prior to blurring the first image based on the first foreground recognition result, the method further comprises:
Performing depth estimation on the first image to obtain a depth estimation result, wherein the depth estimation result comprises depth information of each pixel point in the first image;
The blurring processing is performed on the first image based on the first foreground recognition result to obtain a first blurring image, including:
Correcting the depth estimation result according to the first foreground identification result to obtain a first depth map of the first image;
and carrying out blurring processing on the first image according to the first depth map to obtain a first blurring image.
6. The method of claim 5, wherein blurring the first image or the first blurred image based on the second foreground identification result of each of the image areas to obtain a second blurred image, comprising:
Based on a second foreground identification result of each image area, adjusting edge information corresponding to each image area in the first depth map to obtain a second depth map;
And according to the depth information of each image region in the second depth map, blurring processing is carried out on each image region in the first blurring image, so as to obtain a second blurring image.
7. The method of any one of claims 1-6, wherein the first image comprises a portrait image; the step of identifying the foreground region in the first image to obtain a first foreground identification result comprises the following steps:
and identifying the portrait area and the hair area of the first image to obtain a portrait segmentation result meeting the precision condition.
8. The method of claim 7, wherein the identifying the portrait area and the hair area of the first image to obtain the portrait segmentation result satisfying the accuracy condition includes:
Identifying a human image area of the first image to obtain a human image segmentation map;
channel stitching is carried out on the portrait segmentation drawing and the first image, so that a stitched image is obtained;
and identifying the hair region in the first image according to the spliced image so as to obtain a portrait segmentation result meeting the precision condition.
9. The method of claim 8, wherein the identifying the hair region in the first image from the stitched image to obtain the portrait segmentation result satisfying the accuracy condition comprises:
Inputting the spliced image into a first hairline matting model, extracting the characteristics of the spliced image through the first hairline matting model, and determining the hair region in the first image according to the characteristics so as to obtain a portrait segmentation result meeting the precision condition, wherein the first hairline matting model is obtained based on a first training set, and the first training set comprises a plurality of portrait sample images marked with the hair region.
10. The method according to claim 9, wherein the identifying the foreground region of each image region to obtain the second foreground identification result corresponding to each image region includes:
and identifying hair areas in the image areas through a second hair matting model to obtain local hair matting results corresponding to the image areas, wherein the second hair matting model is obtained based on training of a second training set, and the second training set comprises a plurality of sample images obtained by randomly cutting human image sample images of the first training set.
11. The method of claim 10 further including, prior to said identifying hair regions in each of said image regions by a second hair cut model:
cropping a first region image corresponding to each image region from the first image;
Clipping a second region image corresponding to each image region from the portrait segmentation drawing;
Channel stitching is carried out on the first area image and the second area image corresponding to each image area, and input images corresponding to each image area are obtained;
and inputting the input images corresponding to the image areas into a second hairline matting model.
12. The method of claim 11, wherein after said obtaining the input image corresponding to each of the image areas, the method further comprises:
And performing scaling and/or rotation processing on the input image according to the size requirement corresponding to the second hairline matting model to obtain the input image meeting the size requirement.
13. The method of any of claims 1-6, 8-12, wherein the determining the one or more image regions selected by the selecting operation in the first blurred image comprises:
Acquiring one or more touch positions of the selection operation on a screen;
forming a selection frame corresponding to each touch position according to the area size aiming at each touch position;
and determining an image area corresponding to each selection frame in the first blurring image.
14. The method of claim 13, wherein after the forming of the selection boxes corresponding to the respective touch positions according to the area size, the method further comprises:
and if the adjustment operation triggered by the target selection frame is detected, adjusting the size of the target selection frame according to the adjustment operation.
15. An image processing apparatus, comprising:
a first identification module, configured to identify a foreground region in a first image to obtain a first foreground identification result;
a blurring module, configured to perform blurring processing on the first image based on the first foreground identification result to obtain a first blurring image;
a region selection module, configured to determine, in response to a selection operation for the first blurring image, one or more image regions selected in the first blurring image by the selection operation; the image region is a region where mistaken blurring or missed blurring occurs;
a second identification module, configured to identify the foreground region of each image region to obtain a second foreground identification result corresponding to each image region;
wherein the blurring module is further configured to perform blurring processing on the first blurring image based on the second foreground identification result of each image region to obtain a second blurring image;
and the second identification module is further configured to cut out an area image with the same image position from the first image according to the image position of each image area in the first blurring image, and perform foreground recognition on the area image cut out from the first image, so as to obtain the second foreground identification result corresponding to each image area.
16. An electronic device comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to implement the method of any of claims 1 to 14.
17. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the method according to any of claims 1 to 14.
CN202110771363.5A 2021-07-08 2021-07-08 Image processing method, device, electronic equipment and computer readable storage medium Active CN113610884B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110771363.5A CN113610884B (en) 2021-07-08 2021-07-08 Image processing method, device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110771363.5A CN113610884B (en) 2021-07-08 2021-07-08 Image processing method, device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN113610884A CN113610884A (en) 2021-11-05
CN113610884B true CN113610884B (en) 2024-08-02

Family

ID=78304190

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110771363.5A Active CN113610884B (en) 2021-07-08 2021-07-08 Image processing method, device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113610884B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116152320A (en) * 2021-11-23 2023-05-23 Oppo广东移动通信有限公司 Image processing method, device, electronic device, and computer-readable storage medium
CN114359307B (en) * 2022-01-04 2024-06-25 浙江大学 A fully automatic high-resolution image matting method
CN114758391B (en) * 2022-04-08 2023-09-12 北京百度网讯科技有限公司 Hair style image determination method, device, electronic equipment, storage medium and product

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109146767A (en) * 2017-09-04 2019-01-04 成都通甲优博科技有限责任公司 Image weakening method and device based on depth map
CN110009556A (en) * 2018-01-05 2019-07-12 广东欧珀移动通信有限公司 Image background blurring method and device, storage medium and electronic equipment
CN112487974A (en) * 2020-11-30 2021-03-12 叠境数字科技(上海)有限公司 Video stream multi-person segmentation method, system, chip and medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104219445B (en) * 2014-08-26 2017-07-25 小米科技有限责任公司 Screening-mode method of adjustment and device
CN108076286B (en) * 2017-11-30 2019-12-27 Oppo广东移动通信有限公司 Image blurring method and device, mobile terminal and storage medium
CN110009555B (en) * 2018-01-05 2020-08-14 Oppo广东移动通信有限公司 Image blurring method, device, storage medium and electronic device
CN108848367B (en) * 2018-07-26 2020-08-07 宁波视睿迪光电有限公司 Image processing method and device and mobile terminal
CN111311482B (en) * 2018-12-12 2023-04-07 Tcl科技集团股份有限公司 Background blurring method and device, terminal equipment and storage medium
CN111741283A (en) * 2019-03-25 2020-10-02 华为技术有限公司 Apparatus and method for image processing
CN112614057B (en) * 2019-09-18 2025-08-15 华为技术有限公司 Image blurring processing method and electronic equipment
CN111754528B (en) * 2020-06-24 2024-07-12 Oppo广东移动通信有限公司 Portrait segmentation method, device, electronic equipment and computer readable storage medium
CN112950641B (en) * 2021-02-24 2024-06-25 Oppo广东移动通信有限公司 Image processing method and device, computer readable storage medium and electronic equipment


Also Published As

Publication number Publication date
CN113610884A (en) 2021-11-05

Similar Documents

Publication Publication Date Title
CN111402135B (en) Image processing method, device, electronic device, and computer-readable storage medium
CN113888437B (en) Image processing method, device, electronic device and computer readable storage medium
CN111507994B (en) Portrait extraction method, portrait extraction device and mobile terminal
CN113610884B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN110473185B (en) Image processing method and device, electronic equipment and computer readable storage medium
US8253819B2 (en) Electronic camera and image processing method
CN110796041B (en) Subject identification method and device, electronic device, computer-readable storage medium
EP4013033A1 (en) Method and apparatus for focusing on subject, and electronic device, and storage medium
CN110149482A (en) Focusing method, focusing device, electronic equipment and computer readable storage medium
CN107742274A (en) Image processing method, device, computer-readable storage medium, and electronic device
CN107798652A (en) Image processing method and device, readable storage medium and electronic equipment
CN110536068A (en) Focusing method and device, electronic equipment and computer readable storage medium
CN107368806B (en) Image rectification method, image rectification device, computer-readable storage medium and computer equipment
CN110191287B (en) Focusing method and apparatus, electronic device, computer-readable storage medium
CN110650288B (en) Focus control method and apparatus, electronic device, computer-readable storage medium
CN112017137B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN110121031B (en) Image acquisition method and apparatus, electronic device, computer-readable storage medium
CN113298829B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN107622497B (en) Image cropping method and device, computer readable storage medium and computer equipment
CN113313626A (en) Image processing method, image processing device, electronic equipment and storage medium
CN113674303B (en) Image processing method, device, electronic device and storage medium
CN113673474B (en) Image processing method, device, electronic equipment and computer-readable storage medium
WO2022261828A1 (en) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN107862658A (en) Image processing method, device, computer-readable storage medium, and electronic device
CN113610865A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant