
WO2020082830A1 - Method and apparatus for processing images - Google Patents

Method and apparatus for processing images

Info

Publication number
WO2020082830A1
WO2020082830A1, PCT/CN2019/097444, CN2019097444W
Authority
WO
WIPO (PCT)
Prior art keywords
processed
pixel
region
sub
pixels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2019/097444
Other languages
French (fr)
Chinese (zh)
Inventor
卢艺帆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Publication of WO2020082830A1
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/507 Depth or shape recovery from shading
    • G06T7/90 Determination of colour characteristics

Definitions

  • The embodiments of the present application relate to the field of computer technology, and in particular to a method and an apparatus for processing images.
  • Out-of-focus imaging is usually one of the image effects that users want.
  • Generally, out-of-focus imaging depends on parameters of the capture device such as the focal length, object distance, and aperture. Therefore, when the capture device is fixed, how to better achieve out-of-focus imaging remains a problem to be studied.
  • The embodiments of the present application propose a method and an apparatus for processing images.
  • In a first aspect, an embodiment of the present application provides a method for processing an image. The method includes: acquiring an image to be processed; determining a region to be processed of the image to be processed, where the region to be processed includes sub-regions to be processed; for a pixel in the region to be processed, determining the weight, relative to that pixel, of each region pixel in the sub-region to be processed where the pixel is located; determining, based on these weights, the weighted sum of the color values of the region pixels in the sub-region to be processed; determining a new color value for the pixel according to the weighted sum; and adjusting the color value of the pixel to the new color value.
  • In a second aspect, an embodiment of the present application provides an apparatus for processing an image.
  • The apparatus includes: an acquiring unit configured to acquire an image to be processed; a determining unit configured to determine a region to be processed of the image to be processed, where the region to be processed includes sub-regions to be processed; and an adjusting unit configured to, for a pixel in the region to be processed, determine the weight, relative to that pixel, of each region pixel in the sub-region to be processed where the pixel is located, determine the weighted sum of the color values of the region pixels based on those weights, determine a new color value for the pixel according to the weighted sum, and adjust the color value of the pixel to the new color value.
  • In a third aspect, an embodiment of the present application provides an electronic device including: at least one processor; and a storage device configured to store at least one program. When the at least one program is executed by the at least one processor, the at least one processor implements the method described in any implementation of the first aspect.
  • In a fourth aspect, an embodiment of the present application provides a computer-readable medium on which a computer program is stored. When the computer program is executed by a processor, the method described in any implementation of the first aspect is implemented.
  • FIG. 1 is an exemplary system architecture diagram to which an embodiment of the present application can be applied;
  • FIG. 2 is a flowchart of an embodiment of a method for processing an image according to the present application;
  • FIG. 3 is a schematic diagram of an application scenario of the image processing method according to an embodiment of the present application;
  • FIG. 4 is a flowchart of yet another embodiment of the image processing method according to the present application;
  • FIG. 5 is a schematic structural diagram of an embodiment of an image processing apparatus according to the present application;
  • FIG. 6 is a schematic structural diagram of a computer system suitable for implementing an electronic device according to an embodiment of the present application.
  • FIG. 1 shows an exemplary architecture 100 to which an embodiment of the image processing method or image processing apparatus of the present application can be applied.
  • As shown in FIG. 1, the system architecture 100 may include terminal devices 101, 102, and 103, a network 104, and a server 105.
  • The network 104 is the medium used to provide communication links between the terminal devices 101, 102, 103 and the server 105.
  • The network 104 may include various connection types, such as wired links, wireless communication links, or fiber optic cables.
  • The terminal devices 101, 102, 103 interact with the server 105 through the network 104 to receive or send messages and the like.
  • Various client applications may be installed on the terminal devices 101, 102, and 103, for example, browser applications, camera applications, and image processing applications.
  • The terminal devices 101, 102, and 103 may be hardware or software.
  • When the terminal devices 101, 102, and 103 are hardware, they may be various electronic devices that support image storage and image transmission, including but not limited to smart phones, tablet computers, e-book readers, laptop computers, and desktop computers.
  • When the terminal devices 101, 102, and 103 are software, they can be installed on the electronic devices listed above. They can be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No limitation is imposed here.
  • The server 105 may be a server that provides various services, for example, an image processing server that processes the region to be processed in the images to be processed sent by the terminal devices 101, 102, and 103.
  • The image processing server can also feed the processing results back to the terminal devices 101, 102, and 103.
  • The above-mentioned images to be processed can also be stored locally on the server 105, and the server 105 can directly extract and process the locally stored images to be processed.
  • In this case, the terminal devices 101, 102, 103 and the network 104 may not exist.
  • The image processing method provided in the embodiments of the present application is generally executed by the server 105; accordingly, the image processing apparatus is generally provided in the server 105.
  • Image processing applications may also be installed on the terminal devices 101, 102, and 103.
  • In that case, the terminal devices 101, 102, and 103 may also process the image to be processed based on the installed image processing application.
  • The method for processing images may then also be executed by the terminal devices 101, 102, and 103, and accordingly, the apparatus for processing images may also be provided in the terminal devices 101, 102, and 103.
  • In this case, the exemplary system architecture 100 may not include the server 105 and the network 104.
  • The server may be hardware or software.
  • When the server is hardware, it can be implemented as a distributed server cluster composed of multiple servers or as a single server.
  • When the server is software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services), or as a single piece of software or software module. No limitation is imposed here.
  • The numbers of terminal devices, networks, and servers in FIG. 1 are only schematic. There can be any number of terminal devices, networks, and servers according to implementation needs.
  • FIG. 2 shows a flow 200 of an embodiment of a method for processing an image according to the present application.
  • The image processing method includes steps 201 to 203.
  • In step 201, an image to be processed is acquired.
  • The execution subject of the image processing method (for example, the server 105 shown in FIG. 1) can acquire the image to be processed from local storage or from other storage devices (for example, the terminal devices 101, 102, and 103 shown in FIG. 1) through a wired or wireless connection.
  • The image to be processed may be any of various types of images.
  • In step 202, a region to be processed of the image to be processed is determined, and the region to be processed includes a sub-region to be processed.
  • The region to be processed may be an image area of the image to be processed. Depending on the actual application requirements, a variety of ways can be used to determine the region to be processed of the image to be processed.
  • In an embodiment, the region to be processed may be an image area of the image to be processed that meets a preset condition.
  • The preset condition can be set in advance by relevant personnel.
  • For example, the region to be processed may be the image area covered by a circle centered at the geometric center of the image to be processed with a radius of one third of the width of the image to be processed.
  • As another example, the region to be processed may be an image area of the image to be processed whose displayed content is not a person.
  • In an embodiment, the region to be processed may also be an image area of the image to be processed designated by a user or a technician.
  • For example, the above-mentioned execution subject may receive an image area of the image to be processed selected by the user through frame selection.
  • As another example, the above-mentioned execution subject may first divide the image to be processed into multiple image areas according to a certain classification method (for example, according to the content displayed in the image or according to its colors), and then receive the image area selected by the user as the region to be processed.
  • In some implementations of this embodiment, the depth values of the pixels in the image to be processed may also be determined first, and the image area composed of the pixels whose depth values fall within a target depth value interval may then be determined as the region to be processed.
  • A variety of monocular image depth estimation algorithms (such as monocular depth estimation based on a fully convolutional neural network, or monocular depth estimation based on a Markov random field) can be used to determine the depth values of the pixels in the image to be processed.
  • The target depth value interval may be a depth value interval preset by relevant personnel.
  • The target depth value interval may also be determined from the depth values of the pixels in the image to be processed. For example, one third of the minimum depth value of the pixels in the image to be processed can be used as the lower bound of the interval, one third of the maximum depth value can be used as the upper bound, and the resulting interval can be used as the target depth value interval.
  • The target depth value interval may also be a depth value interval determined by an upper bound and a lower bound specified by the user.
  • It should be understood that, in different application scenarios, the region to be processed may be a single image area of the image to be processed or multiple image areas of the image to be processed.
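  • As an illustration of the depth-based selection of the region to be processed described above, the following Python sketch builds a boolean region mask from a per-pixel depth map. It is only a minimal sketch under assumed names; the depth estimator is not specified here, and the interval bounds follow one of the options mentioned above (one third of the minimum and of the maximum depth value).

```python
import numpy as np

def region_mask_from_depth(depth: np.ndarray) -> np.ndarray:
    """Return a boolean mask of pixels whose depth falls within the target depth value interval.

    Assumed bounds (one option mentioned in the text): one third of the minimum depth value
    as the lower bound and one third of the maximum depth value as the upper bound.
    """
    lower = depth.min() / 3.0
    upper = depth.max() / 3.0
    return (depth >= lower) & (depth <= upper)

# Usage (hypothetical depth estimator): depth = estimate_depth(image); mask = region_mask_from_depth(depth)
```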
  • In step 203, for each pixel in the region to be processed, steps 2031 to 2034 are performed.
  • In step 2031, the weight, relative to the pixel, of each region pixel in the sub-region to be processed where the pixel is located is determined.
  • The sub-region to be processed may be a surrounding area of the pixel within the region to be processed that contains the pixel.
  • The size of the sub-region to be processed can be specified by the user, or determined by a preset calculation method based on information related to the pixel (such as the position of the pixel, the depth value corresponding to the pixel, or the content displayed by the pixel).
  • Region pixels refer to the pixels in the sub-region to be processed.
  • In some implementations of this embodiment, at least one of the size and the shape of the sub-region to be processed may be specified in advance.
  • The shape can be any shape, such as a rectangle, square, circle, pentagon, or heart.
  • In an embodiment, sub-regions to be processed with different sizes and/or shapes may be specified in advance according to actual application requirements, so as to achieve different processing effects on the image to be processed.
  • In this embodiment, the weight of each region pixel in the sub-region to be processed relative to the pixel can be determined separately. Depending on the desired processing effect, different weight determination methods may be adopted.
  • In an embodiment, the weight of each region pixel in the sub-region to be processed relative to the pixel can be determined according to the difference between the color value of that region pixel and the color value of the pixel. For example, the weight of a region pixel relative to the pixel may be inversely proportional to the difference between the color value of the region pixel and the color value of the pixel; that is, the larger the difference between a region pixel's color value and the pixel's color value, the smaller that region pixel's weight relative to the pixel.
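  • A minimal sketch of such a color-difference-based weighting follows. The embodiments do not prescribe an exact functional form, so the reciprocal form used here (with a small constant to avoid division by zero) is only one possible, assumed choice; the color arguments are assumed to be numeric arrays.

```python
import numpy as np

def color_weight(center_color, region_color, eps: float = 1e-6) -> float:
    """Weight that decreases as the color difference between region pixel and pixel grows."""
    diff = float(np.linalg.norm(np.asarray(region_color, float) - np.asarray(center_color, float)))
    return 1.0 / (diff + eps)  # assumed reciprocal form; larger difference -> smaller weight
```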
  • In an embodiment, the weight of a region pixel in the sub-region to be processed where the pixel is located, relative to the pixel, can be determined according to the distance between the region pixel and the pixel.
  • For example, the weight of a region pixel relative to the pixel may be negatively related to the distance between the region pixel and the pixel; that is, the farther a region pixel is from the pixel, the smaller its weight relative to the pixel.
  • The above distance can be defined so as to represent the positional relationship between two pixels.
  • The way the distance is expressed can be determined according to actual application requirements.
  • For example, the distance may be the Euclidean distance, the city-block distance, the chessboard distance, and so on. It should be understood that the method for calculating the distance may also be specified by a technician.
  • In an embodiment, the weight of a region pixel in the sub-region to be processed where the pixel is located, relative to the pixel, can be determined according to the difference between the depth value of the region pixel and the depth value of the pixel.
  • For example, the weight may be negatively related to the difference between the depth value of the region pixel and the depth value of the pixel; that is, the larger a region pixel's depth difference from the pixel, the smaller its weight relative to the pixel.
  • In an embodiment, the weight of a region pixel in the sub-region to be processed relative to the pixel can be determined comprehensively from several aspects, for example from any combination of the difference between the color value of the region pixel and the color value of the pixel, the distance between the region pixel and the pixel, and the difference between the depth value of the region pixel and the depth value of the pixel.
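  • The sketch below illustrates one way of combining the three factors just mentioned into a single weight, using simple monotonically decreasing exponential factors. The functional form and the sigma_* parameters are assumptions made for illustration; the embodiments only require that the weight decrease as the color difference, the distance, or the depth difference grows.

```python
import numpy as np

def combined_weight(center_color, region_color, center_xy, region_xy, center_depth, region_depth,
                    sigma_color=10.0, sigma_dist=5.0, sigma_depth=0.5):
    """Multiplicatively combine color-difference, distance, and depth-difference factors.

    Each factor decreases as the corresponding difference grows; the exponential form and
    the sigma_* parameters are illustrative assumptions, not requirements of the embodiments.
    """
    color_diff = np.linalg.norm(np.asarray(region_color, float) - np.asarray(center_color, float))
    dist = np.linalg.norm(np.asarray(region_xy, float) - np.asarray(center_xy, float))
    depth_diff = abs(float(region_depth) - float(center_depth))
    return float(np.exp(-color_diff / sigma_color)
                 * np.exp(-dist / sigma_dist)
                 * np.exp(-depth_diff / sigma_depth))
```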
  • In step 2032, the weighted sum of the color values of the region pixels in the sub-region to be processed is determined based on the weights of the region pixels relative to the pixel.
  • After the weight of each region pixel relative to the pixel has been determined, the weighted sum of the color values of the region pixels in the sub-region to be processed can be calculated. It should be understood that the sub-region to be processed where the pixel is located naturally also includes the pixel itself; that is, the pixel is itself a region pixel of that sub-region.
  • As an example, for any pixel A in the region to be processed, suppose that the sub-region to be processed where pixel A is located contains, in addition to region pixel A (that is, pixel A itself), three further region pixels B, C, and D.
  • Suppose the weights of region pixels A, B, C, and D relative to pixel A are WAA, WAB, WAC, and WAD, and the color values of region pixels A, B, C, and D are CA, CB, CC, and CD, respectively.
  • Then the weighted sum of the color values of the region pixels in the sub-region to be processed where pixel A is located can be determined as: WAA x CA + WAB x CB + WAC x CC + WAD x CD.
  • In step 2033, a new color value for the pixel is determined according to the weighted sum of the color values of the region pixels in the sub-region to be processed.
  • In an embodiment, the weighted sum of the color values of the region pixels in the sub-region to be processed can be directly used as the new color value of the pixel.
  • In an embodiment, the weighted sum of the color values of the region pixels in the sub-region to be processed can also be adjusted, so that the obtained result is convenient for subsequent calculations.
  • For example, the product of the weighted sum of the color values of the region pixels and a preset adjustment parameter may be determined as the new color value of the pixel.
  • As another example, the quotient of the weighted sum of the color values of the region pixels and the number of region pixels in the sub-region to be processed may be determined as the new color value of the pixel.
  • In step 2034, the color value of the pixel is adjusted to the new color value.
  • The color value of the pixel can be adjusted to the new color value, thereby completing the processing of that pixel.
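  • To make steps 2031 to 2034 concrete, the following Python sketch applies the per-pixel procedure to every pixel of a masked region, using a square sub-region and a simple distance-based weight. The window size, the weighting function, and the normalization by the total weight are illustrative choices only, not requirements of the embodiments.

```python
import numpy as np

def process_region(image: np.ndarray, mask: np.ndarray, half: int = 3) -> np.ndarray:
    """Blur the masked region of `image` (H x W x 3, float) as in steps 2031-2034.

    For each pixel in the region to be processed: take the (2*half+1)^2 window around it as
    the sub-region to be processed, weight each region pixel by a decreasing function of its
    distance to the pixel, form the weighted sum of color values, and use a normalized
    version of that sum as the new color value.
    """
    h, w, _ = image.shape
    out = image.copy()
    for y, x in zip(*np.nonzero(mask)):
        y0, y1 = max(0, y - half), min(h, y + half + 1)
        x0, x1 = max(0, x - half), min(w, x + half + 1)
        window = image[y0:y1, x0:x1]                      # region pixels (includes the pixel itself)
        yy, xx = np.mgrid[y0:y1, x0:x1]
        dist2 = (yy - y) ** 2 + (xx - x) ** 2
        weights = np.exp(-dist2 / 8.0)                    # illustrative distance-based weights
        weighted_sum = (window * weights[..., None]).sum(axis=(0, 1))
        out[y, x] = weighted_sum / weights.sum()          # one of the normalization options
    return out
```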
  • FIG. 3 is a schematic diagram of an application scenario of the image processing method according to this embodiment.
  • In the application scenario of FIG. 3, the above-mentioned execution subject may first obtain the image to be processed 301. Then, the image area in which a person is displayed is detected, and the image area other than the area displaying the person is determined as the region to be processed 302 (shown hatched in the figure).
  • Then, the new color value of each pixel in the region to be processed 302 is determined separately.
  • For ease of description, the N pixels to be processed in the region to be processed are numbered sequentially from left to right and from top to bottom.
  • The color values of the N pixels included in the region to be processed 302 are C1, C2, ..., CN, respectively, and the corresponding new color values are C1', C2', ..., CN'.
  • For a given pixel, its new color value can be determined by the following steps (as indicated by reference numeral 304 in the figure): first, it is determined that the sub-region to be processed where the pixel is located contains the pixel and the next pixel (in numbering order); then it is determined that the weight of the pixel relative to itself is W1 and the weight of the next pixel relative to the pixel is W2; the new color value then follows from the weighted sum of their color values as described in steps 2032 and 2033.
  • Finally, the color value of each pixel in the region to be processed 302 can be adjusted to its corresponding new color value, so that the image 305 obtained by processing the region to be processed 302 of the image to be processed is obtained.
  • The method provided by the above embodiments of the present application determines the weight, relative to each pixel in the region to be processed, of the region pixels in the sub-region to be processed where that pixel is located, determines the weighted sum of the color values of those region pixels, and then, according to the determined weighted sum, determines and adjusts the new color value of each pixel in the region to be processed. This realizes effective processing of the color values of the pixels in the region to be processed of the image to be processed, so that the processed region appears blurred.
  • Moreover, the weights of the region pixels in each pixel's sub-region to be processed are determined separately for each pixel in the region to be processed, which helps to improve the smoothness of the blur effect presented by the processed region.
  • FIG. 4 shows a flow 400 of yet another embodiment of the method for processing images.
  • The flow 400 of the image processing method includes steps 401 to 403.
  • In step 401, an image to be processed is acquired.
  • In step 402, a region to be processed of the image to be processed is determined, and the region to be processed includes a sub-region to be processed.
  • For the specific execution of steps 401 and 402, reference may be made to the descriptions of steps 201 and 202 in the embodiment corresponding to FIG. 2, which will not be repeated here.
  • In step 403, for each pixel in the region to be processed, steps 4031 to 4033 are performed.
  • In step 4031, the weight, relative to the pixel, of each region pixel in the sub-region to be processed where the pixel is located is determined through steps 40311 to 40317.
  • In step 40311, the square of the Euclidean distance between the region pixel in the sub-region to be processed and the pixel is taken as the distance value.
  • Denoting the coordinates of the region pixel by (xr, yr) and the coordinates of the pixel by (xp, yp), the square of the Euclidean distance between the region pixel and the pixel is: dist = (xr - xp)^2 + (yr - yp)^2.
  • In step 40312, the negative of the quotient of the distance value and a preset distance adjustment value is used as the first exponent.
  • The distance adjustment value may be a value greater than zero preset by a technician.
  • In step 40313, the first exponential power of the natural constant e is determined as the distance weight of the region pixel relative to the pixel, that is, the distance weight is e^(-dist / S), where S is the distance adjustment value.
  • The value of the distance adjustment value S may be any real number greater than zero.
  • The distance adjustment value S may be specified by a technician in advance.
  • In step 40314, the square of the difference between the depth value of the region pixel in the sub-region to be processed and the depth value of the pixel is taken as the depth difference.
  • A variety of monocular image depth estimation algorithms can be used to determine the depth value of the region pixel in the sub-region to be processed and the depth value of the pixel.
  • In step 40315, the negative of the quotient of the depth difference and a preset depth adjustment value is used as the second exponent.
  • The depth adjustment value may be a value greater than zero preset by a technician.
  • In step 40316, the second exponential power of the natural constant e is determined as the depth weight of the region pixel relative to the pixel.
  • Denoting the depth values of the region pixel and the pixel by dr and dp, the depth weight of the region pixel relative to the pixel is: e^(-(dr - dp)^2 / D), where D is the depth adjustment value.
  • The value of the depth adjustment value D may be any real number greater than zero.
  • The depth adjustment value D may be specified by a technician in advance.
  • In step 40317, the product of the distance weight and the depth weight is determined as the weight of the region pixel in the sub-region to be processed relative to the pixel.
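  • A minimal Python sketch of the weight defined by steps 40311 to 40317 follows. The parameter names S and D follow the text; the coordinate and depth arguments and the default parameter values are assumptions of this illustration.

```python
import math

def flow400_weight(xp, yp, dp, xr, yr, dr, S=25.0, D=1.0):
    """Weight of a region pixel (xr, yr, depth dr) relative to the pixel (xp, yp, depth dp).

    distance weight = e^(-((xr - xp)^2 + (yr - yp)^2) / S)
    depth weight    = e^(-((dr - dp)^2) / D)
    weight          = distance weight * depth weight
    """
    dist = (xr - xp) ** 2 + (yr - yp) ** 2       # square of the Euclidean distance (step 40311)
    depth_diff = (dr - dp) ** 2                  # square of the depth difference (step 40314)
    return math.exp(-dist / S) * math.exp(-depth_diff / D)   # product of the two weights (step 40317)
```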
  • In step 4032, the new color value of the pixel is determined from the weighted sum of the color values of the region pixels in the sub-region to be processed through steps 40321 and 40322.
  • In step 40321, the sum of the weights of the region pixels in the sub-region to be processed is taken as the total weight.
  • In step 40322, the quotient of the weighted sum of the color values of the region pixels in the sub-region to be processed and the total weight is determined as the new color value of the pixel.
  • As an example, suppose the sub-region to be processed contains N region pixels, the weight of the i-th region pixel relative to the pixel is Wi, and the color value of the i-th region pixel is Ci, where i ranges over the integers from 1 to N (inclusive).
  • Then the new color value of the pixel is: (W1 x C1 + W2 x C2 + ... + WN x CN) / (W1 + W2 + ... + WN).
  • In step 4033, the color value of the pixel is adjusted to the new color value.
  • In this embodiment, the flow 400 of the image processing method determines the weight of each region pixel relative to a pixel based on both the distance and the depth difference between the region pixel in the sub-region to be processed where the pixel is located and the pixel, and determines the new color value of the pixel according to the determined weights, so that the blur effect presented by the processed region can vary in level according to the depth values corresponding to the pixels in the region to be processed.
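  • Combining the weight of steps 40311 to 40317 with the normalization of steps 40321 and 40322, the new color value of a single pixel could be computed as in the sketch below. The square sub-region and the default values of S and D are again only illustrative assumptions.

```python
import numpy as np

def new_color_flow400(image, depth, y, x, half=3, S=25.0, D=1.0):
    """New color value of pixel (y, x) per steps 4031-4032 (square sub-region assumed).

    `image` is an H x W x 3 float array and `depth` an H x W float depth map.
    """
    h, w, _ = image.shape
    y0, y1 = max(0, y - half), min(h, y + half + 1)
    x0, x1 = max(0, x - half), min(w, x + half + 1)
    yy, xx = np.mgrid[y0:y1, x0:x1]
    dist = (yy - y) ** 2 + (xx - x) ** 2                       # squared Euclidean distance
    depth_diff = (depth[y0:y1, x0:x1] - depth[y, x]) ** 2      # squared depth difference
    weights = np.exp(-dist / S) * np.exp(-depth_diff / D)      # step 40317
    weighted_sum = (image[y0:y1, x0:x1] * weights[..., None]).sum(axis=(0, 1))  # step 4032
    return weighted_sum / weights.sum()                        # steps 40321-40322
```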
  • The present application further provides an embodiment of an apparatus for processing images, which corresponds to the method embodiment shown in FIG. 2, and the apparatus can be applied to various kinds of electronic devices.
  • As shown in FIG. 5, the image processing apparatus 500 includes an acquisition unit 501, a determination unit 502, and an adjustment unit 503.
  • The acquisition unit 501 is configured to acquire an image to be processed.
  • The determination unit 502 is configured to determine a region to be processed of the image to be processed, where the region to be processed includes a sub-region to be processed.
  • The adjustment unit 503 is configured to, for a pixel in the region to be processed, determine the weight, relative to the pixel, of each region pixel in the sub-region to be processed where the pixel is located; determine, based on the determined weights, the weighted sum of the color values of the region pixels in the sub-region to be processed; determine the new color value of the pixel according to the weighted sum; and adjust the color value of the pixel to the new color value.
  • For the processing procedures of the acquisition unit 501, the determination unit 502, and the adjustment unit 503, reference may be made to the descriptions of step 201, step 202, and step 203 in the embodiment corresponding to FIG. 2, which will not be repeated here.
  • In an embodiment, the adjustment unit 503 is configured to determine the weight according to at least one of the following: the distance between the region pixel in the sub-region to be processed and the pixel, and the difference between the depth value of the region pixel and the depth value of the pixel; the weight is negatively related to both of these quantities.
  • In an embodiment, at least one of the size and the shape of the sub-region to be processed is specified in advance.
  • In an embodiment, the determination unit 502 is configured to: determine the depth values of the pixels in the image to be processed; and determine the image area composed of the pixels whose depth values fall within the target depth value interval as the region to be processed.
  • In an embodiment, the adjustment unit 503 is configured to: take the square of the Euclidean distance between the region pixel in the sub-region to be processed and the pixel as the distance value; take the negative of the quotient of the distance value and a preset distance adjustment value as the first exponent; determine the first exponential power of the natural constant e as the distance weight of the region pixel relative to the pixel; take the square of the difference between the depth value of the region pixel and the depth value of the pixel as the depth difference; take the negative of the quotient of the depth difference and a preset depth adjustment value as the second exponent; determine the second exponential power of the natural constant e as the depth weight of the region pixel relative to the pixel; and determine the product of the distance weight and the depth weight as the weight of the region pixel in the sub-region to be processed relative to the pixel.
  • In an embodiment, the adjustment unit 503 is configured to: take the sum of the weights of the region pixels in the sub-region to be processed as the total weight; and determine the quotient of the weighted sum of the color values of the region pixels and the total weight as the new color value of the pixel.
  • The apparatus provided in the above embodiment of the present application acquires the image to be processed through the acquisition unit; the determination unit determines the region to be processed of the image to be processed; and, for each pixel in the region to be processed, the adjustment unit determines the weight of the region pixels in the sub-region to be processed where the pixel is located relative to the pixel, determines the weighted sum of the color values of those region pixels based on the weights, determines the new color value of the pixel according to the weighted sum, and adjusts the color value of the pixel to the new color value.
  • This realizes effective processing of the color values of the pixels in the region to be processed of the image to be processed, so that the processed region presents a blur effect.
  • FIG. 6 shows a schematic structural diagram of a computer system 600 suitable for implementing an electronic device according to an embodiment of the present application.
  • The electronic device shown in FIG. 6 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
  • As shown in FIG. 6, the computer system 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from the storage section 608 into a random access memory (RAM) 603.
  • In the RAM 603, various programs and data necessary for the operation of the system 600 are also stored.
  • The CPU 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604.
  • An input/output (I/O) interface 605 is also connected to the bus 604.
  • The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a cathode ray tube (CRT), a liquid crystal display (LCD), and the like;
  • a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a local area network (LAN) card, a modem, and the like.
  • The communication section 609 performs communication processing via a network such as the Internet.
  • A drive 610 is also connected to the I/O interface 605 as needed.
  • A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is installed on the drive 610 as needed, so that a computer program read therefrom is installed into the storage section 608 as needed.
  • In particular, the process described above with reference to the flowchart may be implemented as a computer software program.
  • For example, embodiments of the present disclosure include a computer program product that includes a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart.
  • In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611.
  • When the computer program is executed by the central processing unit (CPU) 601, the above-mentioned functions defined in the method of the present application are performed.
  • The computer-readable medium of the present application may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above.
  • Computer-readable storage media may include, but are not limited to: an electrical connection with at least one wire, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • In the present application, the computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • The computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried.
  • Such a propagated data signal can take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above.
  • The computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and the computer-readable medium may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device.
  • The program code contained on the computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless links, wires, optical cables, radio frequency (RF), and the like, or any suitable combination of the above.
  • Each block in the flowchart or block diagrams may represent a module, a program segment, or a part of code.
  • The module, program segment, or part of code contains at least one executable instruction.
  • It should also be noted that the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession can actually be executed substantially in parallel, and they can sometimes be executed in the reverse order, depending on the functions involved.
  • Each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented with a dedicated hardware-based system that performs the specified functions or operations, or can be implemented with a combination of dedicated hardware and computer instructions.
  • The units described in the embodiments of the present application may be implemented in software or in hardware.
  • The described units may also be provided in a processor.
  • For example, it may be described as: a processor including an acquisition unit, a determination unit, and an adjustment unit.
  • The names of these units do not constitute a limitation on the units themselves.
  • For example, the acquisition unit may also be described as "a unit for acquiring an image to be processed".
  • As another aspect, the present application also provides a computer-readable medium, which may be included in the electronic device described in the foregoing embodiments, or may exist alone without being assembled into the electronic device.
  • The above computer-readable medium carries at least one program, and when the at least one program is executed by the electronic device, the electronic device is caused to: acquire an image to be processed; determine a region to be processed of the image to be processed, where the region to be processed includes a sub-region to be processed; for a pixel in the region to be processed, determine the weight, relative to the pixel, of each region pixel in the sub-region to be processed where the pixel is located; determine, based on these weights, the weighted sum of the color values of the region pixels in the sub-region to be processed; determine the new color value of the pixel according to the weighted sum of the color values of the region pixels; and adjust the color value of the pixel to the new color value.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed in embodiments of the present application are a method and apparatus for processing images. An embodiment of the method comprises: acquiring an image to be processed; determining a region to be processed of the image to be processed, the region to be processed comprising a sub-region to be processed; for a pixel point in the region to be processed, determining the weight of a regional pixel point in the sub-region to be processed in which the pixel point is located relative to the pixel point; on the basis of the determined weight of the regional pixel point in the sub-region to be processed relative to the pixel point, determining the weighted sum of a color value of the regional pixel point in the sub-region to be processed; according to the weighted sum of the color values of the regional pixel point in the sub-region to be processed, determining new color values of the pixel point; and adjusting the color values of the pixel point to be the new color values.

Description

Method and apparatus for processing images

This application claims priority to Chinese patent application No. 201811236252.9, filed with the Chinese Patent Office on October 23, 2018, the entire contents of which are incorporated herein by reference.

Technical field

The embodiments of the present application relate to the field of computer technology, and in particular to a method and an apparatus for processing images.

Background

Out-of-focus imaging is usually one of the image effects that users want. Generally, out-of-focus imaging is related to parameters of the capture device such as the focal length, object distance, and aperture. Therefore, when the capture device is fixed, how to better achieve out-of-focus imaging remains a problem to be studied.

Summary of the invention

The embodiments of the present application propose a method and an apparatus for processing images.

In a first aspect, an embodiment of the present application provides a method for processing an image. The method includes: acquiring an image to be processed; determining a region to be processed of the image to be processed, where the region to be processed includes sub-regions to be processed; for a pixel in the region to be processed, determining the weight, relative to that pixel, of each region pixel in the sub-region to be processed where the pixel is located; determining, based on these weights, the weighted sum of the color values of the region pixels in the sub-region to be processed; determining a new color value for the pixel according to the weighted sum; and adjusting the color value of the pixel to the new color value.

In a second aspect, an embodiment of the present application provides an apparatus for processing an image. The apparatus includes: an acquiring unit configured to acquire an image to be processed; a determining unit configured to determine a region to be processed of the image to be processed, where the region to be processed includes sub-regions to be processed; and an adjusting unit configured to, for a pixel in the region to be processed, determine the weight, relative to that pixel, of each region pixel in the sub-region to be processed where the pixel is located, determine the weighted sum of the color values of the region pixels based on those weights, determine a new color value for the pixel according to the weighted sum, and adjust the color value of the pixel to the new color value.

In a third aspect, an embodiment of the present application provides an electronic device including: at least one processor; and a storage device configured to store at least one program. When the at least one program is executed by the at least one processor, the at least one processor implements the method described in any implementation of the first aspect.

In a fourth aspect, an embodiment of the present application provides a computer-readable medium on which a computer program is stored. When the computer program is executed by a processor, the method described in any implementation of the first aspect is implemented.

Brief description of the drawings

FIG. 1 is an exemplary system architecture diagram to which an embodiment of the present application can be applied;

FIG. 2 is a flowchart of an embodiment of a method for processing an image according to the present application;

FIG. 3 is a schematic diagram of an application scenario of the image processing method according to an embodiment of the present application;

FIG. 4 is a flowchart of yet another embodiment of the image processing method according to the present application;

FIG. 5 is a schematic structural diagram of an embodiment of an image processing apparatus according to the present application;

FIG. 6 is a schematic structural diagram of a computer system suitable for implementing an electronic device according to an embodiment of the present application.

Detailed description

The present application will be described in detail below with reference to the drawings and embodiments. It can be understood that the example embodiments described here are only used to explain the related application, not to limit the present application. It should also be noted that, for ease of description, only the parts related to the relevant application are shown in the drawings.

It should be noted that, in the case of no conflict, the embodiments in the present application and the features in the embodiments can be combined with each other. The present application will be described in detail below with reference to the drawings and in conjunction with the embodiments.

FIG. 1 shows an exemplary architecture 100 to which an embodiment of the image processing method or image processing apparatus of the present application can be applied.

As shown in FIG. 1, the system architecture 100 may include terminal devices 101, 102, and 103, a network 104, and a server 105. The network 104 is the medium used to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired links, wireless communication links, or fiber optic cables.

The terminal devices 101, 102, 103 interact with the server 105 through the network 104 to receive or send messages and the like. Various client applications may be installed on the terminal devices 101, 102, and 103, for example, browser applications, camera applications, and image processing applications.

The terminal devices 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, and 103 are hardware, they may be various electronic devices that support image storage and image transmission, including but not limited to smart phones, tablet computers, e-book readers, laptop computers, and desktop computers. When the terminal devices 101, 102, and 103 are software, they can be installed on the electronic devices listed above. They can be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No limitation is imposed here.

The server 105 may be a server that provides various services, for example, an image processing server that processes the region to be processed in the images to be processed sent by the terminal devices 101, 102, and 103. The image processing server can also feed the processing results back to the terminal devices 101, 102, and 103.

It should be noted that the above-mentioned images to be processed can also be stored locally on the server 105, and the server 105 can directly extract and process the locally stored images to be processed. In this case, the terminal devices 101, 102, 103 and the network 104 may not exist.

It should be noted that the image processing method provided in the embodiments of the present application is generally executed by the server 105; accordingly, the image processing apparatus is generally provided in the server 105.

It should also be pointed out that image processing applications may also be installed on the terminal devices 101, 102, and 103. In that case, the terminal devices 101, 102, and 103 may also process the image to be processed based on the installed image processing application. The method for processing images may then also be executed by the terminal devices 101, 102, and 103, and accordingly, the apparatus for processing images may also be provided in the terminal devices 101, 102, and 103. In this case, the exemplary system architecture 100 may not include the server 105 and the network 104.

It should be noted that the server may be hardware or software. When the server is hardware, it can be implemented as a distributed server cluster composed of multiple servers or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services), or as a single piece of software or software module. No limitation is imposed here.

It should be understood that the numbers of terminal devices, networks, and servers in FIG. 1 are only schematic. There can be any number of terminal devices, networks, and servers according to implementation needs.

Referring to FIG. 2, it shows a flow 200 of an embodiment of a method for processing an image according to the present application. The image processing method includes steps 201 to 203.

In step 201, an image to be processed is acquired.

In this embodiment, the execution subject of the image processing method (for example, the server 105 shown in FIG. 1) can acquire the image to be processed from local storage or from other storage devices (for example, the terminal devices 101, 102, and 103 shown in FIG. 1) through a wired or wireless connection. The image to be processed may be any of various types of images.

In step 202, a region to be processed of the image to be processed is determined, and the region to be processed includes a sub-region to be processed.

In this embodiment, the region to be processed may be an image area of the image to be processed. Depending on the actual application requirements, a variety of ways can be used to determine the region to be processed of the image to be processed.

In an embodiment, the region to be processed may be an image area of the image to be processed that meets a preset condition. The preset condition can be set in advance by relevant personnel. For example, the region to be processed may be the image area covered by a circle centered at the geometric center of the image to be processed with a radius of one third of the width of the image to be processed. As another example, the region to be processed may be an image area of the image to be processed whose displayed content is not a person.

In an embodiment, the region to be processed may also be an image area of the image to be processed designated by a user or a technician. For example, the above-mentioned execution subject may receive an image area of the image to be processed selected by the user through frame selection. As another example, the above-mentioned execution subject may first divide the image to be processed into multiple image areas according to a certain classification method (for example, according to the content displayed in the image or according to its colors), and then receive the image area selected by the user as the region to be processed.

In some implementations of this embodiment, the depth values of the pixels in the image to be processed may also be determined first, and the image area composed of the pixels whose depth values fall within a target depth value interval may then be determined as the region to be processed.

A variety of monocular image depth estimation algorithms (such as monocular depth estimation based on a fully convolutional neural network, or monocular depth estimation based on a Markov random field) can be used to determine the depth values of the pixels in the image to be processed.

The target depth value interval may be a depth value interval preset by relevant personnel. The target depth value interval may also be determined from the depth values of the pixels in the image to be processed. For example, one third of the minimum depth value of the pixels in the image to be processed can be used as the lower bound of the interval, one third of the maximum depth value can be used as the upper bound, and the resulting interval can be used as the target depth value interval. The target depth value interval may also be a depth value interval determined by an upper bound and a lower bound specified by the user.

It should be understood that, in different application scenarios, the region to be processed may be a single image area of the image to be processed or multiple image areas of the image to be processed.

In step 203, for each pixel in the region to be processed, steps 2031 to 2034 are performed.

在步骤2031中,确定该像素点所在的待处理子区域中的区域像素点相对于该像素点的权重。In step 2031, the weight of the area pixel in the sub-area to be processed where the pixel is located relative to the pixel is determined.

在本实施例中,待处理子区域可以是待处理区域中该像素点的、包含该像素点的周围区域。待处理子区域的尺寸可以是有用户所指定的,也可以是根据该像素点的相关信息(如该像素点的位置、该像素点对应的深度值、该像素点显示的内容等)而根据预设的计算方式所确定的。区域像素点可以指待处理子区域中的像素点。In this embodiment, the sub-region to be processed may be the surrounding area of the pixel in the region to be processed, which contains the pixel. The size of the sub-region to be processed can be specified by the user, or based on the relevant information of the pixel (such as the position of the pixel, the depth value corresponding to the pixel, the content displayed by the pixel, etc.) Determined by the preset calculation method. The area pixels may refer to pixels in the sub-area to be processed.

在本实施例的一些实现方式中,待处理子区域的尺寸和形状中的至少之一可以是预先指定的。其中,形状可以是任意的形状,如长方形、正方形、圆形、五角形、爱心形等等。在一实施例中,可以根据实际应用需求,预先指定不同尺寸和形状中的至少之一的待处理子区域,从而实现对待处理图像的不同处理效果。In some implementations of this embodiment, at least one of the size and shape of the sub-region to be processed may be specified in advance. Among them, the shape can be any shape, such as rectangle, square, circle, pentagon, love heart and so on. In an embodiment, the sub-regions to be processed in at least one of different sizes and shapes may be pre-specified according to actual application requirements, so as to achieve different processing effects of the image to be processed.

在本实施例中,对于该像素点所在的待处理子区域中的区域像素点,可以分别确定待处理子区域中的每个区域像素点相对于该像素点的权重。在一实施例中,可以根据实际的处理效果需求,采取不同的权重确定方法来确定待处理子区域中的每个区域像素点相对于该像素点的权重。In this embodiment, for the area pixels in the sub-area to be processed where the pixel is located, the weight of each area pixel in the sub-area to be processed relative to the pixel can be determined separately. In an embodiment, different weight determination methods may be adopted to determine the weight of each pixel in the sub-region to be processed relative to the pixel according to actual processing effect requirements.

在一实施例中，可以根据待处理子区域中的每个区域像素点的颜色值分别与该像素点的颜色值的差值来分别确定待处理子区域中的每个区域像素点相对于该像素点的权重。例如，待处理子区域中的区域像素点相对于该像素点的权重可以与待处理子区域中的区域像素点的颜色值与该像素点的颜色值的差值成反比。即对应的颜色值与该像素点的颜色值的差值越大的区域像素点，该区域像素点相对于该像素点的权重就越小。In an embodiment, the weight of each region pixel in the sub-region to be processed relative to the pixel may be determined according to the difference between the color value of that region pixel and the color value of the pixel. For example, the weight of a region pixel in the sub-region to be processed relative to the pixel may be inversely proportional to the difference between the color value of the region pixel and the color value of the pixel. That is, the larger the difference between a region pixel's color value and the color value of the pixel, the smaller the weight of that region pixel relative to the pixel.
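A minimal sketch of such a color-difference weight, assuming scalar color values; the small constant eps is an assumption added here only to avoid division by zero.

```python
def color_weight(region_color: float, center_color: float, eps: float = 1.0) -> float:
    """Weight that shrinks as the colour difference grows
    (inversely proportional to the difference; eps avoids division by zero)."""
    return 1.0 / (abs(region_color - center_color) + eps)
```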

在一实施例中，可以根据待处理子区域中的区域像素点与该像素点的距离，确定该像素点所在的待处理子区域中的区域像素点相对于该像素点的权重。例如，待处理子区域中的区域像素点相对于该像素点的权重可以与待处理子区域中的区域像素点与该像素点的距离负相关。即与该像素点的距离越远的区域像素点，该区域像素点相对于该像素点的权重就越小。In an embodiment, the weight of a region pixel in the sub-region to be processed where the pixel is located relative to the pixel may be determined according to the distance between the region pixel and the pixel. For example, the weight of a region pixel in the sub-region to be processed relative to the pixel may be negatively correlated with the distance between the region pixel and the pixel. That is, the farther a region pixel is from the pixel, the smaller the weight of that region pixel relative to the pixel.

其中,上述的距离可以设置为表示两个像素点之间的位置关系。在一实施例中可以根据实际的应用需求确定距离的表达方式。例如,上述的距离可以是欧氏距离,也可以是街区距离、棋盘距离等等。应当可以理解,也可以由技术人员指定距离的计算方法。Wherein, the above distance can be set to represent the positional relationship between two pixels. In an embodiment, the expression method of the distance can be determined according to actual application requirements. For example, the above distance may be Euclidean distance, block distance, chessboard distance, etc. It should be understood that the calculation method of the distance may also be specified by a technician.

在一实施例中，可以根据待处理子区域中的区域像素点的深度值与该像素点的深度值的差值，确定该像素点所在的待处理子区域中的区域像素点相对于该像素点的权重。例如，待处理子区域中的区域像素点相对于该像素点的权重可以与待处理子区域中的区域像素点的深度值与该像素点的深度值的差值负相关。即与该像素点的深度差越大的区域像素点，该区域像素点相对于该像素点的权重越小。In an embodiment, the weight of a region pixel in the sub-region to be processed where the pixel is located relative to the pixel may be determined according to the difference between the depth value of the region pixel and the depth value of the pixel. For example, the weight of a region pixel in the sub-region to be processed relative to the pixel may be negatively correlated with the difference between the depth value of the region pixel and the depth value of the pixel. That is, the larger the depth difference between a region pixel and the pixel, the smaller the weight of that region pixel relative to the pixel.

在一实施例中，可以从多个方面综合确定待处理子区域中的区域像素点相对于该像素点的权重。例如，可以根据上述待处理子区域中的区域像素点的颜色值与该像素点的颜色值的差值、待处理子区域中的区域像素点与该像素点的距离、待处理子区域中的区域像素点的深度值与该像素点的深度值的差值中的任意组合来综合确定待处理子区域中的区域像素点相对于该像素点的权重。In an embodiment, the weight of a region pixel in the sub-region to be processed relative to the pixel may be determined comprehensively from multiple aspects. For example, any combination of the difference between the color value of the region pixel and the color value of the pixel, the distance between the region pixel and the pixel, and the difference between the depth value of the region pixel and the depth value of the pixel may be used to comprehensively determine the weight of the region pixel in the sub-region to be processed relative to the pixel.
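The following sketch shows one possible way to combine the three factors; the specific inverse-proportional form and the constant 1.0 added to each term are illustrative assumptions, not the combination prescribed by the disclosure.

```python
import math

def combined_weight(region_color, center_color,
                    x1, y1, x0, y0,
                    region_depth, center_depth):
    """Multiply three factors so that a larger colour difference, a larger
    spatial distance and a larger depth difference each reduce the weight."""
    w_color = 1.0 / (abs(region_color - center_color) + 1.0)
    w_dist = 1.0 / (math.hypot(x1 - x0, y1 - y0) + 1.0)
    w_depth = 1.0 / (abs(region_depth - center_depth) + 1.0)
    return w_color * w_dist * w_depth
```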

在步骤2032中,基于待处理子区域中的区域像素点相对于该像素点的权重,确定待处理子区域中的区域像素点的颜色值的加权和。In step 2032, the weighted sum of the color values of the regional pixels in the sub-region to be processed is determined based on the weight of the regional pixels in the sub-region to be processed with respect to the pixels.

在本实施例中,在确定出待处理子区域中的每个区域像素点分别相对于该像素点的权重之后,可以计算得到待处理子区域中的区域像素点的颜色值的加权和。应当可以理解,该像素点所在的待处理子区域中自然也包括该像素点,即该像素点也是待处理子区域中的一区域像素点。In this embodiment, after determining the weight of each pixel in the sub-region to be processed with respect to the pixel, the weighted sum of the color values of the pixels in the sub-region to be processed can be calculated. It should be understood that the sub-region to be processed where the pixel is located also naturally includes the pixel, that is, the pixel is also a region pixel in the sub-region to be processed.

作为示例,对于待处理区域中的任一像素点A,该像素点A所在的待处理子区域中除了区域像素点A(即像素点A)之外,还有三个区域像素点B, C,D。区域像素点A、B、C、D相对于像素点A的权重分别为WAA、WAB、WAC、WAD,以及区域像素点A、B、C、D的颜色值分别为CA、CB、CC、CD。那么,像素点A所在的上述待处理子区域中的区域像素点的颜色值的加权和可以确定为:As an example, for any pixel A in the region to be processed, in the sub-region to be processed where the pixel A is located, in addition to the regional pixel A (ie, pixel A), there are three regional pixels B, C, D. The weights of regional pixels A, B, C, and D relative to pixel A are WAA, WAB, WAC, and WAD, and the color values of regional pixels A, B, C, and D are CA, CB, CC, and CD, respectively. . Then, the weighted sum of the color values of the pixels in the area in the sub-area to be processed where the pixel A is located can be determined as:

CA*WAA + CB*WAB + CC*WAC + CD*WAD.
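A small Python sketch of this four-pixel example; the numeric color values and weights are placeholders chosen only to make the computation runnable.

```python
# Colour values and weights of the four region pixels A, B, C, D relative to pixel A.
colors = {"A": 0.80, "B": 0.60, "C": 0.40, "D": 0.20}    # CA, CB, CC, CD
weights = {"A": 1.00, "B": 0.50, "C": 0.25, "D": 0.10}   # WAA, WAB, WAC, WAD

weighted_sum = sum(colors[p] * weights[p] for p in colors)
# weighted_sum corresponds to CA*WAA + CB*WAB + CC*WAC + CD*WAD
```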

在步骤2033中,根据待处理子区域中的区域像素点的颜色值的加权和,确定该像素点的新的颜色值。In step 2033, according to the weighted sum of the color values of the pixels of the area in the sub-region to be processed, a new color value of the pixel is determined.

在本实施例中,可以根据上述步骤2032得到的加权和的结果,采用多种方法来确定该像素点的新的颜色值。In this embodiment, according to the result of the weighted sum obtained in the above step 2032, various methods may be used to determine the new color value of the pixel.

在一实施例中,可以直接将待处理子区域中的区域像素点的颜色值的加权和确定为该像素点的新的颜色值。In an embodiment, the weighted sum of the color values of the pixels of the area in the sub-region to be processed can be directly determined as the new color values of the pixels.

在一实施例中,可以将待处理子区域中的区域像素点的颜色值的加权和进行调整,以使得得到的处理结果便于后续计算。例如,可以将待处理子区域中的区域像素点的颜色值的加权和与预设的调节参数的乘积确定为该像素点的新的颜色值。又例如,可以将待处理子区域中的区域像素点的颜色值的加权和与待处理子区域中的区域像素点的数目的商确定为该像素点的新的颜色值。In an embodiment, the weighted sum of the color values of the pixels of the region in the sub-region to be processed can be adjusted, so that the obtained processing result is convenient for subsequent calculations. For example, the product of the weighted sum of the color values of the pixels in the sub-region to be processed and the preset adjustment parameters may be determined as the new color value of the pixel. For another example, the quotient of the weighted sum of the color values of the regional pixels in the sub-region to be processed and the number of regional pixels in the sub-region to be processed may be determined as the new color value of the pixel.
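A sketch of the two adjustment variants mentioned above; the function names and the idea of passing the adjustment parameter or the pixel count explicitly are assumptions for illustration.

```python
def new_color_scaled(weighted_sum: float, adjustment: float) -> float:
    """Variant 1: multiply the weighted sum by a preset adjustment parameter."""
    return weighted_sum * adjustment

def new_color_averaged(weighted_sum: float, num_region_pixels: int) -> float:
    """Variant 2: divide the weighted sum by the number of region pixels."""
    return weighted_sum / num_region_pixels
```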

在步骤2034中,将该像素点的颜色值调整为新的颜色值。In step 2034, the color value of the pixel is adjusted to a new color value.

在本实施例中,可以调整该像素点的颜色值为新的颜色值,以实现对该像素点的处理。In this embodiment, the color value of the pixel can be adjusted to a new color value, so as to realize the processing of the pixel.

参见图3,图3是根据本实施例的处理图像的方法的应用场景的一个示意图。在图3的应用场景中,上述执行主体可以首先获取待处理图像301。然后,检测出显示人物的图像区域,并将除显示人物的图像区域之外的图像区域确定为待处理区域302(如图中打斜线部分所示)。Referring to FIG. 3, FIG. 3 is a schematic diagram of an application scenario of the image processing method according to this embodiment. In the application scenario of FIG. 3, the above-mentioned execution subject may first obtain the image to be processed 301. Then, the image area where the person is displayed is detected, and the image area other than the image area where the person is displayed is determined as the to-be-processed area 302 (as shown by hatched portions in the figure).

之后，分别确定待处理区域302中的每个像素点的新的像素值。在一实施例中，如图中标号303所示，以待处理区域302中包含N个像素点为示例，且待处理区域中包含的N个像素点按照从左到右且从上到下的顺序依次编号。待处理区域302中包含的N个像素点的颜色值分别为C1、C2、……、CN，新的颜色值对应分别为：C1`、C2`、……、CN`。After that, the new pixel value of each pixel in the region to be processed 302 is determined separately. In an embodiment, as indicated by reference numeral 303 in the figure, take the case where the region to be processed 302 contains N pixels as an example, and the N pixels contained in the region to be processed are numbered sequentially from left to right and from top to bottom. The color values of the N pixels contained in the region to be processed 302 are C1, C2, ..., CN, and the corresponding new color values are C1`, C2`, ..., CN`, respectively.

在一实施例中，以待处理区域302中的、颜色值为C1的第一个像素点为示例，该像素点的新的颜色值可以通过如下步骤确定(如图中标号304所示)：先确定该像素点所在的待处理子区域中包括该像素点和该像素点的下一像素点(按照编号顺序)。然后分别确定该像素点相对于该像素点的权重为W1，该像素点的下一像素点相对于该像素点的权重为W2。之后，可以先计算该像素点和该像素点的下一像素点的颜色值的加权和，然后将计算得到的加权和的结果与二(待处理子区域中的像素点的个数为二)的商作为该像素点的新的像素值。类似地，可以分别得到待处理区域302中的每个像素点的新的像素值。In an embodiment, take the first pixel in the region to be processed 302, whose color value is C1, as an example. The new color value of this pixel may be determined through the following steps (as indicated by reference numeral 304 in the figure): first, it is determined that the sub-region to be processed where the pixel is located includes the pixel and the next pixel after it (in numbering order). Then it is determined that the weight of the pixel relative to itself is W1, and the weight of the next pixel relative to the pixel is W2. After that, the weighted sum of the color values of the pixel and its next pixel may be calculated, and the quotient of the calculated weighted sum and two (the number of pixels in the sub-region to be processed being two) is taken as the new pixel value of the pixel. Similarly, the new pixel value of each pixel in the region to be processed 302 may be obtained.
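A tiny numeric sketch of this two-pixel scenario; the values of C1, C2, W1 and W2 below are placeholders, not values from the disclosure.

```python
# Two-pixel sub-region: the pixel itself and the next pixel (by numbering order).
C1, C2 = 0.9, 0.5   # colour values (placeholders)
W1, W2 = 1.0, 0.6   # weights relative to the first pixel (placeholders)

new_C1 = (C1 * W1 + C2 * W2) / 2   # weighted sum divided by the pixel count (two)
```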

之后,可以将待处理区域302中的每个像素点的颜色值调整为各自对应的新的颜色值,从而可以得到对待处理区域302处理之后的待处理图像305。After that, the color value of each pixel in the region to be processed 302 can be adjusted to the corresponding new color value, so that the image to be processed 305 after the region to be processed 302 can be obtained.

本申请的上述实施例提供的方法通过确定待处理区域中的像素点所在的待处理子区域中的区域像素点的权重，并确定待处理区域中的像素点所在的待处理子区域中的区域像素点的颜色值的加权和，然后根据所确定的加权和，确定待处理区域中的像素点的新的颜色值并调整，从而实现了对待处理图像的待处理区域中的像素点的颜色值的有效处理，使得处理后的待处理区域呈现虚化的效果。另外，根据待处理区域中的每个像素点所在的待处理子区域中的每个区域像素点相对于每个像素点的权重来分别确定每个像素点的权重，有助于提升处理后的待处理区域所呈现的虚化效果的平滑性。The method provided by the above embodiments of the present application determines the weights of the region pixels in the sub-region to be processed where a pixel in the region to be processed is located, determines the weighted sum of the color values of those region pixels, and then determines and adjusts a new color value for the pixel according to the determined weighted sum, thereby effectively processing the color values of the pixels in the region to be processed of the image to be processed so that the processed region presents a blurred effect. In addition, determining the weights separately for each pixel, according to the region pixels in the sub-region to be processed where that pixel is located, helps to improve the smoothness of the blur effect presented by the processed region.

参考图4,其示出了处理图像的方法的又一个实施例的流程400。该处理图像的方法的流程400,包括步骤401至步骤403。Referring to FIG. 4, it shows a flow 400 of yet another embodiment of a method of processing images. The flow 400 of the image processing method includes steps 401 to 403.

在步骤401中,获取待处理图像。In step 401, an image to be processed is acquired.

在步骤402中,确定待处理图像的待处理区域,待处理区域包括待处理子区域。In step 402, a region to be processed of the image to be processed is determined, and the region to be processed includes a sub-region to be processed.

上述步骤401和402的具体的执行过程可参考图2对应实施例中的步骤201和202的相关说明,在此不再赘述。For the specific execution process of the above steps 401 and 402, reference may be made to the relevant descriptions of steps 201 and 202 in the embodiment corresponding to FIG. 2, which will not be repeated here.

在步骤403中,针对待处理区域中的像素点,执行步骤4031-步骤4033。In step 403, for the pixels in the region to be processed, steps 4031 to 4033 are performed.

在步骤4031中,通过步骤40311-步骤40317确定该像素点所在的待处理子区域中的区域像素点相对于该像素点的权重。In step 4031, the weight of the area pixel in the to-be-processed sub-area where the pixel is located relative to the pixel is determined through steps 40311-40317.

在步骤40311中,将待处理子区域中的区域像素点与该像素点的欧氏距离的平方作为距离取值。In step 40311, the square of the Euclidean distance between the pixel in the sub-region to be processed and the pixel is taken as the distance.

在本步骤中,以待处理子区域中的任一区域像素点为示例,假设该区域像素点的横坐标为X1,纵坐标为Y1,该像素点的横坐标为X0,纵坐标为Y0,那么,该区域像素点与该像素点的欧式距离的平方可以为:In this step, taking any area pixel in the sub-region to be processed as an example, suppose the horizontal coordinate of the area pixel is X1, the vertical coordinate is Y1, the horizontal coordinate of the pixel is X0, and the vertical coordinate is Y0, Then, the square of the Euclidean distance between the pixel in the area and the pixel can be:

(X1-X0)² + (Y1-Y0)²

在步骤40312中,将距离取值与预设的距离调节值的商的相反数作为第一指数。In step 40312, the inverse of the quotient of the distance value and the preset distance adjustment value is used as the first index.

在本步骤中,距离调节值可以是由技术人员预先设置的大于零的数值。In this step, the distance adjustment value may be a value greater than zero preset by a technician.

在步骤40313中,将自然常数的第一指数次幂确定为待处理子区域中的区域像素点相对于该像素点的距离权重。In step 40313, the first exponential power of the natural constant is determined as the distance weight of the pixel in the sub-region to be processed relative to the pixel.

在本步骤中，以待处理子区域中的横坐标为X1，纵坐标为Y1的一区域像素点为示例，假设距离调节值为S，该像素点的横坐标为X0，纵坐标为Y0，那么该区域像素点相对于该像素点的距离权重可以为：In this step, take a region pixel in the sub-region to be processed whose abscissa is X1 and ordinate is Y1 as an example. Assuming that the distance adjustment value is S, and that the abscissa of the pixel is X0 and its ordinate is Y0, the distance weight of this region pixel relative to the pixel may be:

e^(-((X1-X0)² + (Y1-Y0)²)/S)

其中,距离调节值S的取值范围可以是大于零的任意实数。在一实施例中,距离调节值S可以由技术人员预先指定。The value range of the distance adjustment value S may be any real number greater than zero. In an embodiment, the distance adjustment value S may be specified by a technician in advance.

在步骤40314中,将待处理子区域中的区域像素点的深度值与该像素点的深度值的差值的平方作为深度差值。In step 40314, the square of the difference between the depth value of the pixel in the sub-region to be processed and the depth value of the pixel is taken as the depth difference.

在本步骤中，可以利用多种单目图像深度估计算法(如基于全卷积神经网络的单目图像深度估计、基于马尔科夫随机场的单目图像深度估计等)来确定待处理子区域中的区域像素点的深度值和该像素点的深度值。In this step, a variety of monocular image depth estimation algorithms (such as monocular image depth estimation based on fully convolutional neural networks, or on Markov random fields) can be used to determine the depth values of the region pixels in the sub-region to be processed and the depth value of the pixel.

在步骤40315中,将深度差值与预设的深度调节值的商的相反数作为第二指数。In step 40315, the inverse of the quotient of the depth difference value and the preset depth adjustment value is used as the second index.

在本步骤中,深度调节值可以是由技术人员预先设置的、大于零的数值。In this step, the depth adjustment value may be a value greater than zero preset by a technician.

在步骤40316中,将自然常数的第二指数次幂确定为待处理子区域中的区域像素点相对于该像素点的深度权重。In step 40316, the second exponential power of the natural constant is determined as the depth weight of the pixel in the sub-region to be processed relative to the pixel.

在本步骤中，以待处理子区域中的任一区域像素点为示例，假设该区域像素点的深度值为D1，该像素点的深度值为D0，深度调节值为D，那么该区域像素点相对于该像素点的深度权重可以为：In this step, take any region pixel in the sub-region to be processed as an example. Assuming that the depth value of the region pixel is D1, the depth value of the pixel is D0, and the depth adjustment value is D, the depth weight of this region pixel relative to the pixel may be:

e^(-(D1-D0)²/D)

其中,深度调节值D的取值范围可以是大于零的任意实数。在一实施例中,深度调节值D可以由技术人员预先指定。The range of the depth adjustment value D can be any real number greater than zero. In an embodiment, the depth adjustment value D may be specified by a technician in advance.

在步骤40317中,将距离权重和深度权重的乘积确定为待处理子区域中的区域像素点相对于该像素点的权重。In step 40317, the product of the distance weight and the depth weight is determined as the weight of the regional pixel in the sub-region to be processed relative to the pixel.
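As an illustrative sketch of steps 40311-40317 rather than a definitive implementation, the function below computes the weight of one region pixel relative to the center pixel; the particular default values chosen here for the distance adjustment value S and the depth adjustment value D are assumptions.

```python
import math

def pixel_weight(x1, y1, d1, x0, y0, d0, s=25.0, d=0.5):
    """Weight of a region pixel (x1, y1, depth d1) relative to the centre pixel
    (x0, y0, depth d0): exp(-squared distance / S) * exp(-squared depth difference / D)."""
    dist_sq = (x1 - x0) ** 2 + (y1 - y0) ** 2      # step 40311
    distance_weight = math.exp(-dist_sq / s)       # steps 40312-40313
    depth_sq = (d1 - d0) ** 2                      # step 40314
    depth_weight = math.exp(-depth_sq / d)         # steps 40315-40316
    return distance_weight * depth_weight          # step 40317
```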

在步骤4032中,通过步骤40321-步骤40322根据待处理子区域中的区域像素点的颜色值的加权和,确定该像素点的新的颜色值。In step 4032, the new color value of the pixel is determined according to the weighted sum of the color values of the pixels in the sub-region to be processed through steps 40321-40322.

在步骤40321中,将待处理子区域中的区域像素点的权重的总和作为总权重。In step 40321, the sum of the weights of the pixels in the sub-region to be processed is taken as the total weight.

在步骤40322中,将待处理子区域中的区域像素点的颜色值的加权和与总权重的商确定为该像素点的新的颜色值。In step 40322, the quotient of the weighted sum of the color values of the pixels in the sub-region to be processed and the total weight is determined as the new color value of the pixel.

在本步骤中，以该像素点所在的待处理子区域中共包含N个区域像素点为示例，N个区域像素点中第i个区域像素点相对于该像素点的权重为Wi，N个区域像素点中第i个区域像素点的颜色值为Ci，其中，i的取值范围可以是一到N之间的整数(包括1和N)。那么，该像素点的新的颜色值可以为：In this step, take the case where the sub-region to be processed in which the pixel is located contains a total of N region pixels as an example. The weight of the i-th of the N region pixels relative to the pixel is Wi, and the color value of the i-th of the N region pixels is Ci, where i ranges over the integers from 1 to N (inclusive). Then, the new color value of the pixel may be:

(W1*C1 + W2*C2 + ... + WN*CN) / (W1 + W2 + ... + WN)
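A sketch of steps 40321-40322, assuming the weights of the region pixels have already been computed (for example with a function like the one sketched above); the 3x3 sub-region and all numeric values are placeholders.

```python
import numpy as np

def new_pixel_color(colors: np.ndarray, weights: np.ndarray) -> float:
    """Weighted sum of the region pixels' colour values divided by the total weight."""
    return float((colors * weights).sum() / weights.sum())

colors = np.array([[0.2, 0.3, 0.4],
                   [0.3, 0.5, 0.6],
                   [0.4, 0.6, 0.7]])
weights = np.array([[0.4, 0.6, 0.4],
                    [0.6, 1.0, 0.6],
                    [0.4, 0.6, 0.4]])
print(new_pixel_color(colors, weights))
```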

在步骤4033中,将该像素点的颜色值调整为新的颜色值。In step 4033, the color value of the pixel is adjusted to a new color value.

从图4中可以看出，与图2对应的实施例相比，本实施例中的处理图像的方法的流程400可以根据一像素点所在的待处理子区域中的区域像素点与该像素点的距离和深度差，确定每个区域像素点相对于该像素点的权重，并根据所确定的权重确定该像素点的新的颜色取值，从而使得处理后的待处理区域所呈现的虚化效果可以根据待处理区域中的像素点对应的深度值具有相应的层次变化。As can be seen from FIG. 4, compared with the embodiment corresponding to FIG. 2, the flow 400 of the method for processing an image in this embodiment determines the weight of each region pixel relative to a pixel according to the distance and the depth difference between the region pixels in the sub-region to be processed where the pixel is located and that pixel, and determines the new color value of the pixel according to the determined weights, so that the blur effect presented by the processed region can vary in level according to the depth values corresponding to the pixels in the region to be processed.

参考图5，作为对上述各图所示方法的实现，本申请提供了处理图像的装置的一个实施例，该装置实施例与图2所示的方法实施例相对应，该装置可以应用于各种电子设备中。Referring to FIG. 5, as an implementation of the method shown in the above figures, the present application provides an embodiment of an apparatus for processing images, which corresponds to the method embodiment shown in FIG. 2, and the apparatus can be applied to various kinds of electronic devices.

如图5所示,本实施例提供的处理图像的装置500包括获取单元501、确定单元502和调整单元503。其中,获取单元501被配置成获取待处理图像;确定单元502被配置成确定待处理图像的待处理区域,待处理区域包括待处理子区域;调整单元503被配置成针对待处理区域中的像素点,确定该像素点所在的待处理子区域中的区域像素点相对于该像素点的权重;基于所确定的、待处理子区域中的区域像素点相对于该像素点的权重,确定待处理子区域中的区域像素点的颜色值的加权和;根据待处理子区域中的区域像素点的颜色值的加权和,确定该像素点的新的颜色值;将该像素点的颜色值调整为新的颜色值。As shown in FIG. 5, the image processing apparatus 500 provided in this embodiment includes an acquisition unit 501, a determination unit 502 and an adjustment unit 503. Wherein, the obtaining unit 501 is configured to obtain an image to be processed; the determining unit 502 is configured to determine a to-be-processed area of the to-be-processed image, the to-be-processed area includes a to-be-processed sub-area; and the adjusting unit 503 is configured to target pixels in the to-be-processed area Point, determine the weight of the area pixel in the sub-area to be processed relative to the pixel; based on the determined weight of the area pixel in the sub-area to be processed relative to the pixel, determine the to-be-processed The weighted sum of the color values of the pixels in the sub-region; according to the weighted sum of the color values of the pixels in the sub-region to be processed, determine the new color value of the pixel; adjust the color value of the pixel to New color value.

在本实施例中，处理图像的装置500中：获取单元501、确定单元502和调整单元503的处理过程可分别参考图2对应实施例中的步骤201、步骤202和步骤203的相关说明，在此不再赘述。In this embodiment, in the apparatus 500 for processing images, the processing procedures of the acquiring unit 501, the determining unit 502, and the adjusting unit 503 can refer to the relevant descriptions of step 201, step 202, and step 203 in the embodiment corresponding to FIG. 2, which will not be repeated here.

在本实施例的一些实现方式中,上述调整单元503被配置成:根据以下至少之一确定权重:待处理子区域中的区域像素点与该像素点的距离,待处理子区域中的区域像素点的深度值与该像素点的深度值的差值;其中,权重分别与以下两者负相关:待处理子区域中的区域像素点与该像素点的距离,待处理子区域中的区域像素点的深度值与该像素点的深度值的差值。In some implementations of this embodiment, the adjustment unit 503 is configured to determine the weight according to at least one of the following: the distance between the area pixel in the sub-region to be processed and the pixel, the area pixel in the sub-region to be processed The difference between the depth value of the point and the depth value of the pixel; where the weights are inversely related to the following two: the distance between the area pixel in the sub-region to be processed and the pixel, the area pixel in the sub-region to be processed The difference between the depth value of a point and the depth value of the pixel.

在本实施例的一些实现方式中,待处理子区域的尺寸和形状中的至少之一是预先指定的。In some implementations of this embodiment, at least one of the size and shape of the sub-region to be processed is specified in advance.

在本实施例的一些实现方式中，上述确定单元502被配置成：确定待处理图像中的像素点的深度值；将深度值在目标深度值区间内的像素点所组成的图像区域，确定为待处理区域。In some implementations of this embodiment, the above determination unit 502 is configured to: determine the depth values of the pixels in the image to be processed; and determine the image region composed of the pixels whose depth values fall within the target depth value interval as the region to be processed.

在本实施例的一些实现方式中,上述调整单元503被配置成:将待处理子区域中的区域像素点与该像素点的欧氏距离的平方作为距离取值;将距离取值与预设的距离调节值的商的相反数作为第一指数;将自然常数的第一指数次幂确定为待处理子区域中的区域像素点相对于该像素点的距离权重;将待处理子区域中的区域像素点的深度值与该像素点的深度值的差值的平方作为深度差值;将深度差值与预设的深度调节值的商的相反数作为第二指数;将自然常数的第二指数次幂确定为待处理子区域中的区域像素点相对于该像素点的深度权重;将距离权重和深度权重的乘积确定为待处理子区域中的区域像素点相对于该像素点的权重。In some implementations of this embodiment, the above-mentioned adjustment unit 503 is configured to: take the square of the Euclidean distance between the pixel in the sub-region to be processed and the pixel as the distance; take the distance and the preset value The inverse number of the quotient of the distance adjustment value is taken as the first index; the first exponent of the natural constant is determined as the distance weight of the pixel in the sub-region to be processed relative to the pixel; the The square of the difference between the depth value of the regional pixel and the depth value of the pixel is taken as the depth difference; the inverse of the quotient of the depth difference and the preset depth adjustment value is taken as the second index; the second of the natural constant The exponential power is determined as the depth weight of the regional pixel in the sub-region to be processed relative to the pixel; the product of the distance weight and the depth weight is determined as the weight of the regional pixel in the sub-region to be processed relative to the pixel.

在本实施例的一些实现方式中，上述调整单元503被配置成：将待处理子区域中的区域像素点的权重的总和作为总权重；将待处理子区域中的区域像素点的颜色值的加权和与总权重的商确定为该像素点的新的颜色值。In some implementations of this embodiment, the adjustment unit 503 is configured to: take the sum of the weights of the region pixels in the sub-region to be processed as the total weight; and determine the quotient of the weighted sum of the color values of the region pixels in the sub-region to be processed and the total weight as the new color value of the pixel.

本申请的上述实施例提供的装置，通过获取单元获取待处理图像；确定单元确定待处理图像的待处理区域；针对待处理区域中的像素点，调整单元确定该像素点所在的待处理子区域中的区域像素点相对于该像素点的权重；基于待处理子区域中的区域像素点相对于该像素点的权重，确定待处理子区域中的区域像素点的颜色值的加权和；根据待处理子区域中的区域像素点的颜色值的加权和，确定该像素点的新的颜色值；将该像素点的颜色值调整为新的颜色值，从而实现了对待处理图像的待处理区域中的像素点的颜色值的有效处理，使得处理后的待处理区域呈现虚化的效果。In the apparatus provided by the above embodiments of the present application, the acquisition unit acquires an image to be processed; the determination unit determines a region to be processed of the image to be processed; for a pixel in the region to be processed, the adjustment unit determines the weights of the region pixels in the sub-region to be processed where the pixel is located relative to the pixel, determines the weighted sum of the color values of those region pixels based on the determined weights, determines a new color value for the pixel according to the weighted sum, and adjusts the color value of the pixel to the new color value, thereby effectively processing the color values of the pixels in the region to be processed of the image to be processed so that the processed region presents a blurred effect.

下面参考图6，其示出了适于用来实现本申请实施例的电子设备的计算机系统600的结构示意图。图6示出的电子设备仅仅是一个示例，不应对本申请实施例的功能和使用范围带来任何限制。Referring now to FIG. 6, which shows a schematic structural diagram of a computer system 600 suitable for implementing an electronic device according to an embodiment of the present application. The electronic device shown in FIG. 6 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present application.

如图6所示,计算机系统600包括中央处理单元(Central Processing Unit,CPU)601,其可以根据存储在只读存储器(Read-Only Memory,ROM)602中的程序或者从存储部分608加载到随机访问存储器(Random Access Memory,RAM)603中的程序而执行各种适当的动作和处理。在RAM 603中,还存储有系统600操作所需的各种程序和数据。CPU 601、ROM 602以及RAM 603通过总线604彼此相连。输入/输出(Input/Output,I/O)接口605也连接至总线604。As shown in FIG. 6, the computer system 600 includes a central processing unit (Central Processing Unit, CPU) 601, which can be loaded into a random portion according to a program stored in a read-only memory (Read-Only Memory, ROM) 602 or from the storage section 608 The program in the memory (Random Access Memory) 603 is accessed to perform various appropriate actions and processes. In the RAM 603, various programs and data necessary for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other through a bus 604. An input / output (I / O) interface 605 is also connected to the bus 604.

以下部件连接至I/O接口605:包括键盘、鼠标等的输入部分606;包括诸如阴极射线管(Cathode Ray Tube,CRT)、液晶显示器(Liquid Crystal  Display,LCD)等以及扬声器等的输出部分607;包括硬盘等的存储部分608;以及包括诸如局域网(Local Area Network,LAN)卡、调制解调器等的网络接口卡的通信部分609。通信部分609经由诸如因特网的网络执行通信处理。驱动器610也根据需要连接至I/O接口605。可拆卸介质611,诸如磁盘、光盘、磁光盘、半导体存储器等等,根据需要安装在驱动器610上,以便于从其上读出的计算机程序根据需要被安装入存储部分608。The following components are connected to the I / O interface 605: an input section 606 including a keyboard, a mouse, etc .; an output section 607 including a cathode ray tube (Cathode Ray Tube, CRT), liquid crystal display (Liquid Crystal Display, LCD) etc. A storage section 608 including a hard disk, etc .; and a communication section 609 including a network interface card such as a local area network (Local Area Network, LAN) card, modem, etc. The communication section 609 performs communication processing via a network such as the Internet. The driver 610 is also connected to the I / O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like, is installed on the drive 610 as needed, so that the computer program read out therefrom is installed into the storage section 608 as needed.

特别地,根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括承载在计算机可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信部分609从网络上被下载和安装,和/或从可拆卸介质611被安装。在该计算机程序被中央处理单元(CPU)601执行时,执行本申请的方法中限定的上述功能。In particular, according to an embodiment of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product that includes a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from the network through the communication section 609, and / or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above-mentioned functions defined in the method of the present application are executed.

需要说明的是,本申请的计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于:具有至少一个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(Erasable Programmable Read-Only Memory,EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(Compact Disc Read-Only Memory,CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本申请中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。而在本申请中,计算机可读的信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读的信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:无线、电线、光缆、射频(Radio Frequency,RF)等等,或者上述的任意合适的组合。It should be noted that the computer-readable medium of the present application may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: electrical connections with at least one wire, portable computer disk, hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable only Read memory (Erasable Programmable Read-Only Memory, EPROM or flash memory), optical fiber, portable compact disk read-only memory (Compact Disc Read-Only Memory, CD-ROM), optical storage device, magnetic storage device, or any suitable combination. In this application, the computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device. In this application, the computer-readable signal medium may include a data signal propagated in the baseband or as part of the carrier wave, in which the computer-readable program code is carried. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and the computer-readable medium may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. . The program code contained on the computer-readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, optical cable, radio frequency (Radio Frequency, RF), etc., or any suitable combination of the foregoing.

附图中的流程图和框图,图示了按照本申请各种实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,该模块、 程序段、或代码的一部分包含至少一个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。The flowcharts and block diagrams in the drawings illustrate the architecture, functions, and operations of possible implementations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagram may represent a module, a program segment, or a part of code, the module, a program segment, or a part of the code contains at least one Execute instructions. It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession can actually be executed in parallel, and sometimes they can also be executed in reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and / or flowcharts, and combinations of blocks in the block diagrams and / or flowcharts, can be implemented with dedicated hardware-based systems that perform specified functions or operations Or, it can be realized by a combination of dedicated hardware and computer instructions.

描述于本申请实施例中所涉及到的单元可以通过软件的方式实现,也可以通过硬件的方式来实现。所描述的单元也可以设置在处理器中,例如,可以描述为:一种处理器,包括获取单元、确定单元和调整单元。其中,这些单元的名称在某种情况下并不构成对该单元本身的限定,例如,获取单元还可以被描述为“获取待处理图像的单元”。The units described in the embodiments of the present application may be implemented in software or hardware. The described unit may also be provided in the processor. For example, it may be described as: a processor including an acquisition unit, a determination unit, and an adjustment unit. In some cases, the names of these units do not constitute a limitation on the unit itself. For example, the acquisition unit may also be described as a “unit for acquiring an image to be processed”.

作为另一方面,本申请还提供了一种计算机可读介质,该计算机可读介质可以是上述实施例中描述的电子设备中所包含的;也可以是单独存在,而未装配入该电子设备中。上述计算机可读介质承载有至少一个程序,当上述至少一个程序被该电子设备执行时,使得该电子设备:获取待处理图像;确定待处理图像的待处理区域,待处理区域包括待处理子区域;针对待处理区域中的像素点,确定该像素点所在的待处理子区域中的区域像素点相对于该像素点的权重;基于待处理子区域中的区域像素点相对于该像素点的权重,确定待处理子区域中的区域像素点的颜色值的加权和;根据待处理子区域中的区域像素点的颜色值的加权和,确定该像素点的新的颜色值;将该像素点的颜色值调整为新的颜色值。As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the foregoing embodiments; or may exist alone without being assembled into the electronic device in. The above computer readable medium carries at least one program, and when the above at least one program is executed by the electronic device, the electronic device is caused to: acquire an image to be processed; determine a region to be processed of the image to be processed, the region to be processed includes a sub-region to be processed ; For the pixels in the area to be processed, determine the weight of the pixel in the sub-region to be processed relative to the pixel; based on the weight of the pixel in the sub-region to be processed relative to the pixel , Determine the weighted sum of the color values of the pixels in the sub-region to be processed; determine the new color value of the pixel according to the weighted sum of the color values of the pixels in the sub-region to be processed; The color value is adjusted to the new color value.

Claims (14)

一种处理图像的方法,包括:A method of processing images, including: 获取待处理图像;Get the image to be processed; 确定所述待处理图像的待处理区域,所述待处理区域包括待处理子区域;Determining a region to be processed of the image to be processed, the region to be processed includes a sub-region to be processed; 针对所述待处理区域中的像素点,确定所述像素点所在的待处理子区域中的区域像素点相对于所述像素点的权重;基于所述待处理子区域中的区域像素点相对于所述像素点的权重,确定所述待处理子区域中的区域像素点的颜色值的加权和;根据所述待处理子区域中的区域像素点的颜色值的加权和,确定所述像素点的新的颜色值;将所述像素点的颜色值调整为所述新的颜色值。For the pixels in the to-be-processed area, determine the weight of the pixels in the sub-area to be processed relative to the pixels in the sub-area to be processed; based on the pixels in the sub-area to be processed The weight of the pixels determines the weighted sum of the color values of the pixels in the sub-region to be processed; the pixel is determined according to the weighted sum of the color values of the pixels in the sub-region to be processed The new color value of the pixel; adjust the color value of the pixel to the new color value. 根据权利要求1所述的方法,其中,所述确定所述像素点所在的待处理子区域中的区域像素点相对于所述像素点的权重,包括:The method according to claim 1, wherein the determining the weight of the regional pixel in the sub-region to be processed where the pixel is located relative to the pixel includes: 根据以下至少之一确定所述权重:所述待处理子区域中的区域像素点与所述像素点的距离,所述待处理子区域中的区域像素点的深度值与所述像素点的深度值的差值;其中,所述权重分别与以下两者负相关:所述待处理子区域中的区域像素点与所述像素点的距离,所述待处理子区域中的区域像素点的深度值与所述像素点的深度值的差值。The weight is determined according to at least one of the following: the distance between the area pixel in the sub-region to be processed and the pixel, the depth value of the area pixel in the sub-region to be processed and the depth of the pixel The difference between the values; wherein the weights are inversely related to the following two: the distance between the area pixel in the sub-region to be processed and the pixel, the depth of the area pixel in the sub-region to be processed The difference between the value and the depth value of the pixel. 根据权利要求1所述的方法,其中,所述待处理子区域的尺寸和形状中的至少之一是预先指定的。The method according to claim 1, wherein at least one of the size and shape of the sub-region to be processed is specified in advance. 根据权利要求1-3任一项所述的方法,其中,所述确定所述待处理图像的待处理区域,包括:The method according to any one of claims 1 to 3, wherein the determining the area to be processed of the image to be processed includes: 确定所述待处理图像中的像素点的深度值;Determine the depth value of the pixel in the image to be processed; 将深度值在目标深度值区间内的像素点所组成的图像区域,确定为所述待处理区域。The image area composed of pixels within the target depth value interval is determined as the area to be processed. 
根据权利要求1所述的方法,其中,所述确定所述像素点所在的待处理子区域中的区域像素点相对于所述像素点的权重,包括:The method according to claim 1, wherein the determining the weight of the regional pixel in the sub-region to be processed where the pixel is located relative to the pixel includes: 将所述待处理子区域中的区域像素点与所述像素点的欧氏距离的平方作为距离取值;Taking the square of the Euclidean distance between the pixel in the sub-region to be processed and the pixel as the distance; 将所述距离取值与预设的距离调节值的商的相反数作为第一指数;Use the inverse of the quotient of the distance value and the preset distance adjustment value as the first index; 将自然常数的所述第一指数次幂确定为所述待处理子区域中的区域像素点相对于所述像素点的距离权重;Determining the first exponential power of the natural constant as the distance weight of the regional pixel in the sub-region to be processed relative to the pixel; 将所述待处理子区域中的区域像素点的深度值与所述像素点的深度值的差值的平方作为深度差值;Taking the square of the difference between the depth value of the pixel point of the region in the sub-region to be processed and the depth value of the pixel point as the depth difference value; 将所述深度差值与预设的深度调节值的商的相反数作为第二指数;Use the inverse number of the quotient of the depth difference value and the preset depth adjustment value as the second index; 将自然常数的所述第二指数次幂确定为所述待处理子区域中的区域像素点相对于所述像素点的深度权重;Determining the second exponential power of the natural constant as the depth weight of the regional pixel in the sub-region to be processed relative to the pixel; 将所述距离权重和所述深度权重的乘积确定为所述待处理子区域中的区域 像素点相对于所述像素点的权重。The product of the distance weight and the depth weight is determined as the weight of the pixels of the region in the sub-region to be processed relative to the pixels. 根据权利要求5所述的方法,其中,所述根据所述待处理子区域中的区域像素点的颜色值的加权和,确定所述像素点的新的颜色值,包括:The method according to claim 5, wherein the determining a new color value of the pixel according to the weighted sum of the color values of the pixels in the sub-region to be processed includes: 将所述待处理子区域中的区域像素点的权重的总和作为总权重;Taking the sum of the weights of the pixels in the sub-regions to be processed as the total weights; 将所述待处理子区域中的区域像素点的颜色值的加权和与所述总权重的商确定为所述像素点的新的颜色值。The quotient of the weighted sum of the color values of the pixels in the region in the sub-region to be processed and the total weight is determined as the new color value of the pixel. 一种处理图像的装置,包括:An image processing device, including: 获取单元,被配置成获取待处理图像;An acquisition unit configured to acquire an image to be processed; 确定单元,被配置成确定所述待处理图像的待处理区域,所述待处理区域包括待处理子区域;A determining unit configured to determine a region to be processed of the image to be processed, the region to be processed includes a sub-region to be processed; 调整单元,被配置成针对所述待处理区域中的像素点,确定所述像素点所在的待处理子区域中的区域像素点相对于所述像素点的权重;基于所述待处理子区域中的区域像素点相对于所述像素点的权重,确定所述待处理子区域中的区域像素点的颜色值的加权和;根据所述待处理子区域中的区域像素点的颜色值的加权和,确定所述像素点的新的颜色值;将所述像素点的颜色值调整为所述新的颜色值。The adjusting unit is configured to determine the weight of the pixel in the sub-region to be processed where the pixel is located relative to the pixel for the pixel in the region to be processed; based on the The weight of the pixels in the area relative to the pixels, to determine the weighted sum of the color values of the pixels in the sub-region to be processed; To determine the new color value of the pixel; adjust the color value of the pixel to the new color value. 
根据权利要求7所述的装置,其中,所述调整单元被配置成:The apparatus according to claim 7, wherein the adjustment unit is configured to: 根据以下至少之一确定所述权重:所述待处理子区域中的区域像素点与所述像素点的距离,所述待处理子区域中的区域像素点的深度值与所述像素点的深度值的差值;其中,所述权重分别与以下两者负相关:所述待处理子区域中的区域像素点与所述像素点的距离,所述待处理子区域中的区域像素点的深度值与所述像素点的深度值的差值。The weight is determined according to at least one of the following: the distance between the area pixel in the sub-region to be processed and the pixel, the depth value of the area pixel in the sub-region to be processed and the depth of the pixel The difference between the values; wherein the weights are inversely related to the following two: the distance between the area pixel in the sub-region to be processed and the pixel, the depth of the area pixel in the sub-region to be processed The difference between the value and the depth value of the pixel. 根据权利要求7所述的装置,其中,所述待处理子区域的尺寸和形状中的至少之一是预先指定的。The apparatus according to claim 7, wherein at least one of the size and shape of the sub-region to be processed is specified in advance. 根据权利要求7-9任一项所述的装置,其中,所述确定单元被配置成:The apparatus according to any one of claims 7-9, wherein the determination unit is configured to: 确定所述待处理图像中的像素点的深度值;Determine the depth value of the pixel in the image to be processed; 将深度值在目标深度值区间内的像素点所组成的图像区域,确定为所述待处理区域。The image area composed of pixels within the target depth value interval is determined as the area to be processed. 根据权利要求7所述的装置,其中,所述调整单元被配置成:The apparatus according to claim 7, wherein the adjustment unit is configured to: 将所述待处理子区域中的区域像素点与所述像素点的欧氏距离的平方作为距离取值;Taking the square of the Euclidean distance between the pixel in the sub-region to be processed and the pixel as the distance; 将所述距离取值与预设的距离调节值的商的相反数作为第一指数;Use the inverse of the quotient of the distance value and the preset distance adjustment value as the first index; 将自然常数的所述第一指数次幂确定为所述待处理子区域中的区域像素点相对于所述像素点的距离权重;Determining the first exponential power of the natural constant as the distance weight of the regional pixel in the sub-region to be processed relative to the pixel; 将所述待处理子区域中的区域像素点的深度值与所述像素点的深度值的差 值的平方作为深度差值;Taking the square of the difference between the depth value of the pixel in the sub-region to be processed and the depth value of the pixel as the depth difference; 将所述深度差值与预设的深度调节值的商的相反数作为第二指数;Use the inverse number of the quotient of the depth difference value and the preset depth adjustment value as the second index; 将自然常数的所述第二指数次幂确定为所述待处理子区域中的区域像素点相对于所述像素点的深度权重;Determining the second exponential power of the natural constant as the depth weight of the regional pixel in the sub-region to be processed relative to the pixel; 将所述距离权重和所述深度权重的乘积确定为所述待处理子区域中的区域像素点相对于所述像素点的权重。The product of the distance weight and the depth weight is determined as the weight of the regional pixel in the sub-region to be processed relative to the pixel. 根据权利要求11所述的装置,其中,所述调整单元被配置成:The apparatus according to claim 11, wherein the adjustment unit is configured to: 将所述待处理子区域中的区域像素点的权重的总和作为总权重;Taking the sum of the weights of the pixels in the sub-regions to be processed as the total weights; 将所述待处理子区域中的区域像素点的颜色值的加权和与所述总权重的商确定为所述像素点的新的颜色值。The quotient of the weighted sum of the color values of the pixels in the region in the sub-region to be processed and the total weight is determined as the new color value of the pixel. 
一种电子设备,包括:An electronic device, including: 至少一个处理器;At least one processor; 存储装置,其上存储有至少一个程序;A storage device on which at least one program is stored; 所述至少一个程序被所述至少一个处理器执行,使得所述至少一个处理器实现如权利要求1-6中任一项所述的方法。The at least one program is executed by the at least one processor, so that the at least one processor implements the method according to any one of claims 1-6. 一种计算机可读介质,其上存储有计算机程序,其中,所述程序被处理器执行时实现如权利要求1-6中任一项所述的方法。A computer-readable medium on which a computer program is stored, wherein the program is executed by a processor to implement the method according to any one of claims 1-6.
PCT/CN2019/097444 2018-10-23 2019-07-24 Method and apparatus for processing images Ceased WO2020082830A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811236252.9 2018-10-23
CN201811236252.9A CN109345580B (en) 2018-10-23 2018-10-23 Method and apparatus for processing image

Publications (1)

Publication Number Publication Date
WO2020082830A1 true WO2020082830A1 (en) 2020-04-30

Family

ID=65311270

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/097444 Ceased WO2020082830A1 (en) 2018-10-23 2019-07-24 Method and apparatus for processing images

Country Status (2)

Country Link
CN (1) CN109345580B (en)
WO (1) WO2020082830A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109345580B (en) * 2018-10-23 2020-03-24 北京字节跳动网络技术有限公司 Method and apparatus for processing image
CN110110778B (en) * 2019-04-29 2023-04-25 腾讯科技(深圳)有限公司 Image processing method, apparatus, electronic device, and computer-readable storage medium
CN110288543B (en) * 2019-06-21 2021-11-30 北京迈格威科技有限公司 Depth image edge-preserving processing method and device
CN110288625B (en) * 2019-07-04 2021-09-03 北京字节跳动网络技术有限公司 Method and apparatus for processing image
CN110400353B (en) * 2019-07-18 2021-11-02 北京蜜莱坞网络科技有限公司 Image generation method, apparatus, device and medium
CN113222844B (en) * 2021-05-14 2023-11-24 上海绚显科技有限公司 Image beautifying method and device, electronic equipment and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104008534A (en) * 2014-06-18 2014-08-27 福建天晴数码有限公司 Intelligent human face beautifying method and device
CN106839975A (en) * 2015-12-03 2017-06-13 杭州海康威视数字技术股份有限公司 Volume measuring method and its system based on depth camera
CN107067536A (en) * 2017-04-27 2017-08-18 深圳怡化电脑股份有限公司 A kind of image boundary determines method, device, equipment and storage medium
US20180174271A1 (en) * 2016-12-16 2018-06-21 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium
CN109345580A (en) * 2018-10-23 2019-02-15 北京字节跳动网络技术有限公司 Method and apparatus for handling image

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9047681B2 (en) * 2011-07-07 2015-06-02 Samsung Electronics Co., Ltd. Depth image conversion apparatus and method
CN102316307B (en) * 2011-08-22 2013-09-25 安科智慧城市技术(中国)有限公司 Road traffic video detection method and apparatus thereof
CN106096559A (en) * 2016-06-16 2016-11-09 深圳零度智能机器人科技有限公司 Obstacle detection method and system and moving object
US10102682B1 (en) * 2017-04-17 2018-10-16 Raytheon Company System and method for combining 3D images in color
CN108399632B (en) * 2018-03-02 2021-06-15 重庆邮电大学 An RGB-D camera depth image inpainting method for joint color images

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104008534A (en) * 2014-06-18 2014-08-27 福建天晴数码有限公司 Intelligent human face beautifying method and device
CN106839975A (en) * 2015-12-03 2017-06-13 杭州海康威视数字技术股份有限公司 Volume measuring method and its system based on depth camera
US20180174271A1 (en) * 2016-12-16 2018-06-21 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium
CN107067536A (en) * 2017-04-27 2017-08-18 深圳怡化电脑股份有限公司 A kind of image boundary determines method, device, equipment and storage medium
CN109345580A (en) * 2018-10-23 2019-02-15 北京字节跳动网络技术有限公司 Method and apparatus for handling image

Also Published As

Publication number Publication date
CN109345580B (en) 2020-03-24
CN109345580A (en) 2019-02-15

Similar Documents

Publication Publication Date Title
WO2020082830A1 (en) Method and apparatus for processing images
CN109191514B (en) Method and apparatus for generating a depth detection model
US11416781B2 (en) Image processing method and apparatus, and computer-readable medium, and electronic device
US11282182B2 (en) Image processing method and apparatus, computer-readable medium, and electronic device
US10403001B2 (en) Producing three-dimensional representation based on images of an object
CN109472264B (en) Method and apparatus for generating an object detection model
CN108197618B (en) Method and device for generating human face detection model
CN110288705B (en) Method and device for generating three-dimensional model
CN109118456B (en) Image processing method and device
CN107507153B (en) Image denoising method and device
WO2020143728A1 (en) Picture rendering method and device, terminal, and corresponding storage medium
CN109272543B (en) Method and apparatus for generating a model
CN109377508B (en) Image processing method and device
WO2020056901A1 (en) Method and device for processing image
CN113724391A (en) Three-dimensional model construction method and device, electronic equipment and computer readable medium
CN110189252B (en) Method and device for generating average face image
CN110288625B (en) Method and apparatus for processing image
CN108921792B (en) Method and device for processing pictures
JP2023518865A (en) Systems and methods for machine-learned image conversion
CN109034085B (en) Method and apparatus for generating information
CN110895699B (en) Method and apparatus for processing feature points of image
CN114170068B (en) Method, device, electronic device and storage medium for generating image special effects
CN109510943A (en) Method and apparatus for shooting image
CN109523564B (en) Method and apparatus for processing image
CN108256451B (en) Method and device for detecting human face

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19876260

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 06.08.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19876260

Country of ref document: EP

Kind code of ref document: A1