
WO2019091196A1 - Method and apparatus for image processing - Google Patents

Method and apparatus for image processing

Info

Publication number
WO2019091196A1
WO2019091196A1 PCT/CN2018/103351 CN2018103351W
Authority
WO
WIPO (PCT)
Prior art keywords
function
image
detail layer
detail
nonlinear
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2018/103351
Other languages
English (en)
Chinese (zh)
Inventor
李蒙
陈海
郑建铧
余全合
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of WO2019091196A1 publication Critical patent/WO2019091196A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening

Definitions

  • the present application relates to the field of image processing and, more particularly, to a method and apparatus for image processing.
  • the image is usually filtered by a spatial filter function to obtain the base layer information of the image, and the image information and the base layer information are then processed together to obtain the detail layer (texture) information of the image.
  • specifically, the spatial filter function processes the image to obtain the base layer of the image (its low- and mid-frequency information); the image and the base layer are then split by a subtraction or division operation to obtain the detail layer of the image (its mid- and high-frequency information); the detail layer is adjusted to improve the contrast and sharpness of the image; finally, the adjusted detail layer and the base layer are recombined by the corresponding addition or multiplication operation and the processed image is output.
  • for a high dynamic range (HDR) image, however, the photoelectric transfer function differs from the gamma function traditionally used as the photoelectric transfer function of standard dynamic range (SDR) images, so adjusting an HDR image by the above method causes problems.
  • specifically, if a large adjustment value is used, the Weber score of pixels in some brightness ranges of the image exceeds the Schreiber threshold; if a small adjustment value is used, the goal of improving image contrast and sharpness cannot be achieved.
  • the Schreiber threshold is related to the visual characteristics of the human eye: when the Weber score of a pixel exceeds the Schreiber threshold, image quality problems that the human eye can perceive appear in the image, which degrades the visual experience.
  • the present application provides a method and apparatus for image processing.
  • by establishing a detail layer adjustment function for the nonlinear signal of an image and adjusting the detail layer of the image with that function, improper selection of the detail layer adjustment coefficient can be avoided, and with it the image quality problems that the human eye can perceive in the adjusted image (for example, insufficient contrast or sharpness due to inadequate adjustment) and the resulting degradation of the visual experience.
  • in a first aspect, a method of image processing is provided, comprising: acquiring a first image; processing the first image according to a spatial filter function to generate a first base layer; performing a subtraction operation or a division operation on the first image and the first base layer to generate a first detail layer; determining, according to the first image, a detail layer adjustment function, wherein an independent variable of the detail layer adjustment function is a nonlinear signal of the first image; adjusting the first detail layer according to the detail layer adjustment function to obtain a second detail layer; and performing an addition operation or a multiplication operation on the first base layer and the second detail layer to generate a second image.
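  • expressed compactly, and under the assumption (not stated explicitly above) that the adjustment step multiplies the first detail layer by the value of the detail layer adjustment function at each pixel, the steps of the first aspect can be summarized as follows, where I1 is the first image (stored as its nonlinear signal V) and f_spatial is the spatial filter function; this is a sketch of one consistent reading, not the patent's exact formulas:

        B1 = f_spatial(I1)                        first base layer (spatial filtering)
        D1 = I1 - B1      or   D1 = I1 / B1       first detail layer
        D2 = F(V) * D1                            second detail layer (assumed per-pixel form)
        I2 = B1 + D2      or   I2 = B1 * D2       second image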
  • the spatial filtering function comprises at least one of the following filtering functions: a Gaussian filtering function, a bilateral filtering function or a guiding filtering function.
  • in this way, the detail layer adjustment coefficient acting on each pixel of the image is associated with the nonlinear signal of that pixel, and the detail layer is adjusted through the detail layer adjustment function, so that each pixel of the detail layer can be adjusted flexibly according to its nonlinear signal. This avoids the image quality problems, perceptible to the human eye, that improper selection of the detail layer adjustment coefficient would cause in the adjusted image and that would degrade the visual experience.
  • determining the detail layer adjustment function includes: determining, according to the photoelectric transfer function of the first image, a Weber score function corresponding to that photoelectric transfer function; determining a ratio function between the Weber score function and the Schreiber threshold function; and determining the detail layer adjustment function based on the ratio function.
  • the detail layer adjustment function is such that, when a first nonlinear signal is the independent variable, the corresponding function value is less than or equal to the function value of the ratio function when the first nonlinear signal is the independent variable, where the first nonlinear signal is any nonlinear signal of the first image.
  • in this way, the detail layer adjustment function is determined by the ratio function between the Weber score function and the Schreiber threshold function, and the function value of the detail layer adjustment function for any nonlinear signal is less than or equal to the function value of the ratio function at the corresponding nonlinear signal, so that when the detail layer of the image is adjusted by the detail layer adjustment function determined by the embodiment of the present application, the Weber score of the adjusted image does not exceed the Schreiber threshold. This avoids the image quality problems, perceptible to the human eye, that improper selection of the detail layer adjustment coefficient would cause in the adjusted image and that would degrade the visual experience.
  • the monotonicity of the detail layer adjustment function is consistent with the monotonicity of the ratio function.
  • the detail layer adjustment function is a piecewise function
  • the piecewise function includes at least one demarcation point, where each demarcation point is the nonlinear signal of the first image corresponding to an extreme point of the ratio function, or the nonlinear signal of the first image corresponding to an intersection of the Weber score function and the Schreiber threshold function.
  • the functional form in the piecewise function includes at least one of the following functional forms: an exponential function, a logarithmic function, a power function, or a linear function.
  • the method further includes: acquiring statistical data of the first image; determining, according to the statistical data, a correction coefficient a, where 0 ≤ a ≤ 1; and correcting the detail layer adjustment function F(V) according to the correction coefficient a:
  • F'(V) is the modified detail layer adjustment function
  • V is the nonlinear signal of the first image
  • determining the correction coefficient a includes: determining the correction coefficient a according to the following functional relationship:
  • g(M) is a correction coefficient function
  • M is statistical data of the first image
  • r is a parameter of the correction coefficient function g(M), r>0.
  • in this way, the correction coefficient a is determined according to the statistical data of the image, and the detail layer adjustment function of the image is corrected according to the correction coefficient a; that is, the detail layer adjustment function of the image is adjusted dynamically in different scenes, so that the adjusted image better matches the visual characteristics of the human eye, thereby improving the visual experience.
  • the statistical data includes at least one of the following: a maximum pixel brightness of the first image, an average pixel brightness of the first image, a minimum value of the nonlinear Y component of the pixels of the first image, a maximum value of the nonlinear Y component of the pixels of the first image, or an average value of the nonlinear Y component of the pixels of the first image.
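  • as a minimal illustration of how such statistics could be gathered, the following sketch computes the values listed above from two assumed per-pixel input arrays (a linear brightness map in nits and a nonlinear Y component map); the array and key names are illustrative, not taken from the patent:

        import numpy as np

        def image_statistics(Y_nonlinear, brightness_nits):
            # Y_nonlinear: per-pixel nonlinear Y component; brightness_nits: per-pixel linear brightness.
            return {
                "max_pixel_brightness": float(np.max(brightness_nits)),
                "avg_pixel_brightness": float(np.mean(brightness_nits)),
                "min_nonlinear_Y": float(np.min(Y_nonlinear)),
                "max_nonlinear_Y": float(np.max(Y_nonlinear)),
                "avg_nonlinear_Y": float(np.mean(Y_nonlinear)),
            }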
  • the photoelectric transfer function comprises at least one of the following photoelectric transfer functions: a perceptually quantized PQ photoelectric transfer function, a scene brightness fidelity SLF photoelectric transfer function, or Mixed log gamma HLG photoelectric transfer function.
  • the detail layer adjustment function comprises at least one of the following function types: an exponential function, a logarithmic function, a power function, or a linear function.
  • the detail layer adjustment function is a continuous function.
  • an apparatus for image processing is provided, configured to perform the method of the first aspect or any possible implementation of the first aspect.
  • the apparatus may comprise means for performing the method of the first aspect or any of the possible implementations of the first aspect.
  • an apparatus for image processing is provided, comprising a memory and a processor, where the memory is configured to store instructions, the processor is configured to execute the instructions stored in the memory, and execution of the instructions stored in the memory causes the processor to perform the method of the first aspect or any possible implementation of the first aspect.
  • in a fourth aspect, a computer readable storage medium is provided, storing instructions that, when run on a computer, cause the computer to perform the method of the first aspect or any possible implementation of the first aspect.
  • a computer program product comprising instructions for causing a computer to perform the method of the first aspect or any of the possible implementations of the first aspect, when the computer program product is run on a computer.
  • FIG. 1 is a schematic block diagram of a display device according to the present application.
  • FIG. 2 is a schematic flowchart of a method for image processing according to an embodiment of the present application.
  • FIG. 3 is another schematic flowchart of a method for image processing according to an embodiment of the present application.
  • FIG. 4 is a schematic block diagram of an apparatus for image processing according to an embodiment of the present application.
  • FIG. 5 is another schematic block diagram of an apparatus for image processing according to an embodiment of the present application.
  • Dynamic Range is used in many fields to represent the ratio of the maximum value to the minimum value of a variable.
  • in nature, the dynamic range of light can reach 10^-3 to 10^6 nits, but cameras and other capture equipment have only a limited ability to record linear signal values (for example, optical signal values).
  • when the dynamic range of the linear signal values of an image exceeds 0.01 to 1000 nits, the linear signal values are called high dynamic range (HDR) linear signal values, and the corresponding image is called an HDR image.
  • when the dynamic range of the linear signal values of an image is within 0.1 to 400 nits, the linear signal values are called standard dynamic range (SDR) linear signal values, and the corresponding image is referred to as an SDR image.
  • an optical-electro transfer function (OETF) is used to convert a linear signal of an image (for example, an optical signal value) into a nonlinear signal (for example, an electrical signal value).
  • L is a linear signal of an image pixel
  • V is the nonlinear signal of the corresponding pixel.
  • for an HDR image (for example, the first image in this application), the corresponding HDR photoelectric transfer function includes at least any one of the Perceptual Quantization (PQ) photoelectric transfer function, the Scene Luminance Fidelity (SLF) photoelectric transfer function, or the Hybrid Log-Gamma (HLG) photoelectric transfer function.
  • correspondingly, when the HDR image is converted by an HDR photoelectric transfer function, the photoelectric transfer curve corresponding to that function includes at least a Perceptual Quantization (PQ) photoelectric transfer curve, a Scene Luminance Fidelity (SLF) photoelectric transfer curve, or a Hybrid Log-Gamma (HLG) photoelectric transfer curve.
  • the PQ photoelectric transfer function differs from the traditional gamma transfer function; it is a perceptual quantization transfer function proposed on the basis of the brightness perception model of the human eye.
  • the PQ photoelectric transfer function represents the conversion relationship between the linear signal of an image pixel and the nonlinear signal of the PQ domain.
  • L is the linear signal; after normalization its value range is [0, 1], where 1 corresponds to 10000 nits. V is the nonlinear signal; its normalized value range is [0, 1].
  • m1, m2, c1, c2 and c3 are the PQ photoelectric transfer coefficients, and their values are as follows:
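  • the explicit PQ formula and the coefficient values are not reproduced in this text; the following sketch assumes the standard PQ form and coefficients (SMPTE ST 2084), which may differ in notation from the patent's figures. It maps a normalized linear signal L in [0, 1] (1 corresponding to 10000 nits) to a normalized nonlinear signal V in [0, 1]:

        import numpy as np

        # standard PQ (ST 2084) coefficients, assumed here for illustration
        M1 = 2610.0 / 16384.0          # ~0.1593
        M2 = 2523.0 / 4096.0 * 128.0   # ~78.84
        C1 = 3424.0 / 4096.0           # ~0.8359
        C2 = 2413.0 / 4096.0 * 32.0    # ~18.85
        C3 = 2392.0 / 4096.0 * 32.0    # ~18.69

        def pq_oetf(L):
            # PQ photoelectric transfer: normalized linear L in [0, 1] -> nonlinear V in [0, 1]
            L = np.clip(np.asarray(L, dtype=np.float64), 0.0, 1.0)
            Lm = np.power(L, M1)
            return np.power((C1 + C2 * Lm) / (1.0 + C3 * Lm), M2)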
  • the SLF photoelectric transfer function represents a conversion relationship between a linear signal of an image pixel and a nonlinear signal of the SLF domain.
  • the SLF photoelectric transfer function V_SLF = F_SLF(L) is of the form:
  • L is the linear signal; after normalization its value range is [0, 1], where 1 corresponds to 10000 nits. V is the nonlinear signal; its normalized value range is [0, 1].
  • p, m3, a and b are the SLF photoelectric transfer coefficients, and their values are:
  • the HLG photoelectric transfer function is improved based on the traditional Gamma curve.
  • the HLG photoelectric transfer function applies the traditional gamma curve in the low-luminance segment and a logarithmic curve in the high-luminance segment.
  • the HLG photoelectric transfer function represents the conversion relationship between the linear signal of the image pixel and the nonlinear signal of the HLG domain.
  • L is the linear signal; after normalization its value range is [0, 12], where 1 corresponds to 10000 nits. V is the nonlinear signal; its normalized value range is [0, 1].
  • a, b, c are HLG photoelectric transfer coefficients, and their values are:
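  • the explicit HLG formula and the values of a, b and c are likewise not reproduced in this text; the sketch below assumes the standard HLG OETF (ARIB STD-B67 / BT.2100), rewritten for a linear signal normalized to [0, 12] as stated above, which may differ in notation from the patent's figures:

        import numpy as np

        # standard HLG coefficients, assumed here for illustration
        A_HLG = 0.17883277
        B_HLG = 0.28466892
        C_HLG = 0.55991073

        def hlg_oetf(L):
            # HLG photoelectric transfer: normalized linear L in [0, 12] -> nonlinear V in [0, 1]
            L = np.clip(np.asarray(L, dtype=np.float64), 0.0, 12.0)
            low = np.sqrt(L) / 2.0                                        # gamma-like segment, L <= 1
            high = A_HLG * np.log(np.maximum(L - B_HLG, 1e-12)) + C_HLG   # logarithmic segment, L > 1
            return np.where(L <= 1.0, low, high)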
  • FIG. 1 is a schematic block diagram of a display device 100 in accordance with an embodiment of the present application.
  • display device 100 includes an input interface 101, a video decoder 102, a processor 103, and a display 104.
  • the input interface 101 can include a receiver and/or a modem for receiving encoded video data.
  • the video decoder 102 can decode the video data from the input interface 101 and send the decoded video data to the processor 103 for processing, for example, the processor 103 performs detail layer adjustment on the image data corresponding to the decoded video data, and The video data obtained after the adjustment is sent to the display 104 for display.
  • the display 104 can be integrated with the display device 100 or can be external to the display device 100.
  • display 104 is at least one of the following: a liquid crystal display (LCD), a plasma display, an organic light-emitting diode (OLED) display, or another type of display.
  • display device 100 is at least one of the following: a desktop computer, a mobile computing device, a notebook (for example, laptop) computer, a tablet computer, a set-top box, a smartphone, a television, a camera, a display device, a digital media player, a video game console, an onboard computer, or the like.
  • FIG. 2 is a schematic flowchart of a method 200 for image processing according to an embodiment of the present application. As shown in FIG. 2, the method 200 includes at least the following steps.
  • the first image is processed according to a spatial filter function to generate a first base layer.
  • specifically, the acquired first image is filtered by the spatial filter function to generate the base layer of the first image (for example, the first base layer), and the detail layer of the first image (for example, the first detail layer) is then generated from the generated first base layer and the first image.
  • the following two methods may be included:
  • the first detail layer is generated after performing a subtraction operation on the first image and the first base layer;
  • the first detail layer is generated after the first image and the first base layer are divided.
  • the spatial filtering function comprises at least one of the following filtering functions: a Gaussian filtering function, a bilateral filtering function or a guiding filtering function.
  • in addition to the filter functions listed above, the spatial filter function may also be any other filter function capable of generating the base layer of the first image.
  • a detail layer adjustment function of the first image needs to be determined, where the detail layer adjustment function is a function of the nonlinear signal of the first image (for example, a function of the nonlinear signal of the first image in the PQ domain). For the determined detail layer adjustment function, the nonlinear signal of each pixel of the first image corresponds to a detail layer adjustment function value.
  • the detail layer of the first image (for example, the first detail layer) is determined in steps 220 and 230, and the detail layer adjustment function is determined in step 240. Through the detail layer adjustment function, the nonlinear signal of each pixel of the first detail layer is adjusted by the value of the detail layer adjustment function at the corresponding pixel, and the adjusted detail layer (for example, the second detail layer) is obtained.
  • for example, if the first image is split into a first detail layer (Detail) and a first base layer (Base), the detail layer adjustment function of the first image is F(V), and V represents the nonlinear signal of a pixel of the first image, then when the detail layer (Detail) of the first image is adjusted by the detail layer adjustment function F(V), the expression is:
  • in step 250, the detail layer of the first image is adjusted by the detail layer adjustment function, and a second image is then generated from the adjusted detail layer (for example, the second detail layer) and the first base layer of the first image; the second image is output to the display device for viewing by the human eye. A sketch of the full pipeline is given after the following two options.
  • the following two methods may be included:
  • the second image is generated by adding the first base layer and the second detail layer, or
  • the second image is generated after multiplying the first base layer and the second detail layer.
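  • the following sketch implements one possible reading of steps 210 to 260: a Gaussian filter (one of the listed spatial filter options) produces the first base layer, subtraction produces the first detail layer, the detail layer is scaled per pixel by an assumed adjustment function F(V) evaluated on the nonlinear signal, and addition recombines the layers; the function names and the placeholder F(V) are illustrative assumptions, not the patent's formulas:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def adjust_image(V, detail_adjust, sigma=2.0):
            # V: first image as a nonlinear (e.g. PQ-domain) signal with values in [0, 1]
            # detail_adjust: callable F(V) returning a per-pixel gain
            base = gaussian_filter(V, sigma=sigma)    # first base layer (spatial filtering)
            detail = V - base                         # first detail layer (subtraction variant)
            detail2 = detail_adjust(V) * detail       # assumed per-pixel adjustment by F(V)
            return np.clip(base + detail2, 0.0, 1.0)  # second image (addition variant)

        # example usage with a hypothetical, bounded adjustment function
        rng = np.random.default_rng(0)
        first_image = rng.random((64, 64))            # stand-in for a PQ-domain image
        second_image = adjust_image(first_image, detail_adjust=lambda v: np.minimum(1.5, 1.0 + v))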
  • in this way, the detail layer adjustment coefficient applied to each pixel of the image is associated with the nonlinear signal of that pixel, and the detail layer of the image is adjusted through the detail layer adjustment function, so that each pixel of the detail layer can be adjusted flexibly according to its nonlinear signal, thereby avoiding the image quality problems, perceptible to the human eye, that improper selection of the detail layer adjustment coefficient would cause in the adjusted image and that would degrade the visual experience.
  • the signal stored in the image is a non-linear signal, and the non-linear signal of the image needs to be quantized by using an integer N.
  • the value of the quantization integer N can generally be 255, 1023, 65535, and so on. The relative error between two adjacent quantization levels is called the Weber score; the Weber score is used to evaluate the quality of a photoelectric transfer function. The Weber score function is as follows:
  • N is the quantized value
  • V is the nonlinear signal
  • L is the linear signal
  • F(L) is any one of the above three photoelectric transfer functions, and F'(L) is the derivative of the photoelectric transfer function F(L).
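  • the explicit Weber score formula is not reproduced in this text. Independently of its exact form, the quantity it measures, namely the relative linear-signal error caused by one quantization step of a photoelectric transfer function, can be estimated numerically, as in the following sketch for the PQ function using finite differences over the N quantization levels; this is an illustrative construction, not the patent's formula:

        import numpy as np

        def weber_fraction(oetf_inverse, N=1023, num=512):
            # estimate the relative linear-signal step caused by one code step of size 1/N
            V = np.linspace(1.0 / N, 1.0 - 1.0 / N, num)      # nonlinear signal samples
            L = oetf_inverse(V)                               # linear signal at each level
            L_next = oetf_inverse(V + 1.0 / N)                # linear signal one code step higher
            return (L_next - L) / np.maximum(L, 1e-12)        # per-level relative error

        def pq_eotf(V):
            # inverse of the standard PQ OETF (ST 2084 values, assumed for illustration)
            M1, M2 = 2610.0 / 16384.0, 2523.0 / 4096.0 * 128.0
            C1, C2, C3 = 3424.0 / 4096.0, 2413.0 / 4096.0 * 32.0, 2392.0 / 4096.0 * 32.0
            Vm = np.power(np.asarray(V, dtype=np.float64), 1.0 / M2)
            return np.power(np.maximum(Vm - C1, 0.0) / (C2 - C3 * Vm), 1.0 / M1)

        w = weber_fraction(pq_eotf, N=1023)                   # Weber-like fraction per code level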
  • the Schreiber threshold function is a limit function for the Weber score function obtained by experimental measurement (for example, experimental calibration); that is, when the Weber score is smaller than the Schreiber threshold value, the human eye does not perceive the visual problems caused by image quantization banding. Because the Schreiber threshold function is obtained by experimental calibration, it can be approximated by the following functional form:
  • determining the detail layer adjustment function includes: determining, according to the photoelectric transfer function of the first image, the Weber score function corresponding to that photoelectric transfer function; determining the ratio function between the Weber score function and the Schreiber threshold function; and determining the detail layer adjustment function according to the ratio function.
  • specifically, when determining the detail layer adjustment function of the first image, the photoelectric transfer function of the first image is determined first, the Weber score function corresponding to that photoelectric transfer function is determined from it, the ratio function between the Weber score function and the Schreiber threshold function is calculated, and finally the detail layer adjustment function is determined according to the ratio function.
  • for example, if the photoelectric transfer function of the first image is the PQ photoelectric transfer function, the Weber score function corresponding to the PQ photoelectric transfer function is determined according to it, and the ratio function between that Weber score function and the Schreiber threshold function is calculated. For example, the Weber score function corresponding to the PQ photoelectric transfer function has the form:
  • the Schreiber threshold function takes the form:
  • the form of the ratio function is determined as:
  • the detail layer adjustment function takes the first nonlinear signal as its independent variable, and the corresponding function value is less than or equal to the function value of the ratio function when the first nonlinear signal is the independent variable, where the first nonlinear signal is any nonlinear signal of the first image.
  • the function value of the detail layer adjustment function is less than or equal to the function value of the ratio function.
  • that is, provided that, for the same independent variable at the same pixel, the function value of the detail layer adjustment function is less than or equal to the function value of the ratio function, the detail layer adjustment function may include at least one of the following function types: an exponential function, a logarithmic function, a power function, or a linear function.
  • the functional form of the detail layer adjustment function is:
  • the functional form of the detail layer adjustment function may also be:
  • the monotonicity of the detail layer adjustment function is consistent with the monotonicity of the ratio function.
  • that is, in addition to requiring that, for the same independent variable at the same pixel (for example, the first nonlinear signal), the function value of the detail layer adjustment function is less than or equal to the function value of the ratio function, the monotonicity of the detail layer adjustment function can also be made consistent with that of the ratio function, that is, the increasing and decreasing intervals of the two functions coincide.
  • the detail layer adjustment function may be the ratio function itself.
  • alternatively, provided that, for the same independent variable at the same pixel, the function value of the detail layer adjustment function is less than or equal to the function value of the ratio function, the detail layer adjustment function may also be a piecewise function comprising at least one demarcation point, where each demarcation point is the nonlinear signal of the first image corresponding to an extreme point of the ratio function, or the nonlinear signal of the first image corresponding to an intersection of the Weber score function and the Schreiber threshold function.
  • the detail layer adjustment function is a piecewise function
  • the demarcation point of the piecewise function may be the nonlinear signal of the first image at an extreme point of the ratio function, or it may be the nonlinear signal of the first image at an intersection of the Weber score function of the image and the Schreiber threshold function.
  • the function form of the detail layer adjustment function is:
  • the function form of the detail layer adjustment function is:
  • for example, when the extreme point of the ratio function is the turning point of the function curve of the Schreiber threshold function, the value of the nonlinear signal of the first image corresponding to that extreme point is 0.22; that is, the demarcation point of the detail layer adjustment function is 0.22, recorded as x3, and the detail layer adjustment function then has the form:
  • as another example, the extreme points of the ratio function are the turning points of the function curves of the Schreiber threshold function and of the Weber score function corresponding to the SLF photoelectric transfer function; the values of the nonlinear signal of the first image corresponding to the extreme points of the ratio function are 0.22 and 0.77 respectively, that is, the demarcation points of the detail layer adjustment function are 0.22 and 0.77, recorded as x4 and x5, and the detail layer adjustment function then has the form:
  • as a further example, the extreme points of the ratio function are the turning points of the function curves of the Schreiber threshold function and of the Weber score function corresponding to the HLG photoelectric transfer function; the values of the nonlinear signal of the first image corresponding to the extreme points of the ratio function are 0.026, 0.05 and 0.5 respectively, that is, the demarcation points of the detail layer adjustment function are 0.026, 0.05 and 0.5, recorded as x6, x7 and x8, and the detail layer adjustment function then has the form:
  • the above description only takes a linear function as an example of the segments of the piecewise function, but the embodiment of the present application is not limited thereto; the segments of the piecewise function may also be exponential functions, power functions, logarithmic functions, and so on.
  • in the embodiment of the present application, the detail layer adjustment function is determined from the ratio function between the Weber score function and the Schreiber threshold function, and its function value at the nonlinear signal of each pixel is less than or equal to the function value of the ratio function at that pixel, so that when the detail layer of the image is adjusted by the detail layer adjustment function determined by the embodiment of the present application, the Weber score of the adjusted image does not exceed the Schreiber threshold. This avoids the image quality problems, perceptible to the human eye, that improper selection of the adjustment coefficient would cause in the adjusted image and that would degrade the visual experience.
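  • the explicit Weber score, Schreiber threshold and ratio functions are not reproduced in this text, so the following sketch only illustrates the construction principle: given any ratio function ratio(V) (a hypothetical placeholder below) and any candidate adjustment curve, a detail layer adjustment function satisfying F(V) ≤ ratio(V) can be obtained by clamping a piecewise-linear candidate, built through chosen demarcation points, to the ratio function; the functions and demarcation values used here are illustrative assumptions, not the patent's:

        import numpy as np

        def make_detail_adjust(ratio_fn, demarcation_v, demarcation_gain):
            # build F(V) as a piecewise-linear curve through (demarcation_v, demarcation_gain),
            # clamped so that F(V) <= ratio_fn(V) for every nonlinear signal V
            def F(V):
                V = np.asarray(V, dtype=np.float64)
                candidate = np.interp(V, demarcation_v, demarcation_gain)  # piecewise-linear candidate
                return np.minimum(candidate, ratio_fn(V))                  # enforce F(V) <= ratio(V)
            return F

        # hypothetical ratio function and demarcation points, purely for illustration
        ratio_fn = lambda V: 1.0 + 2.0 * np.exp(-8.0 * V)
        F = make_detail_adjust(ratio_fn,
                               demarcation_v=[0.0, 0.22, 1.0],
                               demarcation_gain=[2.0, 1.6, 1.2])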
  • the method 200 further includes:
  • the correction coefficient a is determined, where 0 ≤ a ≤ 1, and the detail layer adjustment function of the first image is corrected according to the correction coefficient a. In this way, the correction coefficient a is determined according to the statistical data of the image and the detail layer adjustment function is corrected accordingly; that is, the detail layer adjustment function of the image is adjusted dynamically in different scenes, so that the adjusted image better matches the visual characteristics of the human eye, thereby improving the visual experience.
  • the statistical data includes at least one of the following: a maximum pixel brightness of the first image, an average pixel brightness of the first image, a minimum value of a nonlinear Y component of a pixel of the first image, The maximum value of the nonlinear Y component of the pixel of the first image or the average of the nonlinear Y component of the pixel of the first image.
  • for example, if the statistical data of the first image is the average value of the nonlinear Y component of the pixels of the first image, the detail layer adjustment function of the first image is corrected by the corresponding correction coefficient a1.
  • the correction factor a can be determined by the following two methods:
  • g(M) is the correction coefficient function, and M is the statistical data of the first image.
  • g(M) is a correction coefficient function
  • M is the statistical data of the first image
  • modified detail layer adjustment function F'(V) is of the form:
  • the statistical data includes the foregoing information, and the statistical data may further include other statistical data that can determine the correction coefficient, and the embodiment of the present application is not limited thereto.
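  • the explicit forms of g(M) and F'(V) are not reproduced in this text; the sketch below illustrates the general idea under the assumption of a simple multiplicative correction F'(V) = a · F(V), with a = g(M) clamped to [0, 1]. The particular g(M) used here is a hypothetical placeholder, not the patent's function:

        import numpy as np

        def corrected_adjust(F, M, r=1000.0):
            # return F'(V) = a * F(V), where a = g(M) is derived from image statistics M
            # (for example, the average nonlinear Y component or average brightness) and a parameter r > 0
            a = np.clip(M / r, 0.0, 1.0)        # hypothetical g(M); the patent's g(M) is not shown
            return lambda V: a * F(V)

        # example: attenuate the adjustment for an image whose statistic M is 300
        F_prime = corrected_adjust(F=lambda V: np.minimum(1.5, 1.0 + V), M=300.0)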
  • the detail layer adjustment function is a continuous function.
  • it should be understood that the sequence numbers of the foregoing processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
  • FIG. 4 is a schematic block diagram of an apparatus 300 for image processing according to an embodiment of the present disclosure.
  • the apparatus 300 includes:
  • the obtaining module 310 is configured to acquire the first image.
  • the processing module 320 is configured to process the first image according to a spatial filter function to generate a first base layer.
  • the processing module is further configured to perform a subtraction operation or a division operation on the first image and the first base layer to generate a first detail layer.
  • the processing module 320 is further configured to determine, according to the first image, a detail layer adjustment function, where the independent variable of the detail layer adjustment function is a nonlinear signal of the first image.
  • the processing module 320 is further configured to adjust the first detail layer according to the detail layer adjustment function to obtain the second detail layer.
  • the processing module 320 is further configured to perform an adding operation or a multiplication operation on the first base layer and the second detail layer to generate a second image.
  • in this way, the detail layer adjustment coefficient acting on each pixel of the image is associated with the nonlinear signal of that pixel, and the detail layer of the image is adjusted through the detail layer adjustment function, so that each pixel of the detail layer can be adjusted flexibly according to its nonlinear signal, avoiding the image quality problems, perceptible to the human eye, that improper selection of the detail layer adjustment coefficient would cause in the adjusted image.
  • the processing module 320 is configured to: determine, according to the photoelectric transfer function of the first image, the Weber score function corresponding to that photoelectric transfer function; determine the ratio function between the Weber score function and the Schreiber threshold function; and determine the detail layer adjustment function based on the ratio function.
  • the detail layer adjustment function takes the first nonlinear signal as its independent variable, and the corresponding function value is less than or equal to the function value of the ratio function when the first nonlinear signal is the independent variable, where the first nonlinear signal is any nonlinear signal of the first image.
  • the monotonicity of the detail layer adjustment function is consistent with the monotonicity of the ratio function.
  • the detail layer adjustment function is a piecewise function
  • the piecewise function includes at least one demarcation point, where each demarcation point is the nonlinear signal of the first image corresponding to an extreme point of the ratio function, or the nonlinear signal of the first image corresponding to an intersection of the Weber score function and the Schreiber threshold function.
  • the obtaining module 310 is further configured to obtain statistical data of the first image, and the processing module 320 is further configured to: determine, according to the statistical data, a correction coefficient a, where 0 ≤ a ≤ 1; and correct the detail layer adjustment function according to the correction coefficient a:
  • F'(V) is the modified detail layer adjustment function
  • V is the nonlinear signal of the first image
  • the processing module 320 is specifically configured to determine the correction coefficient a according to the following functional relationship:
  • g(M) is a correction coefficient function
  • M is the statistical data of the first image
  • r is a parameter of the correction coefficient function g(M), r>0.
  • the statistical data includes at least one of the following: a maximum pixel brightness of the first image, an average pixel brightness of the first image, a minimum value of a nonlinear Y component of a pixel of the first image, The maximum value of the nonlinear Y component of the pixel of the first image or the average of the nonlinear Y component of the pixel of the first image.
  • the phototransfer function comprises at least one of the following photo transfer functions: a perceptually quantized PQ phototransfer function, a scene luminance fidelity SLF phototransfer function, or a mixed log gamma HLG phototransfer function.
  • the detail layer adjustment function comprises at least one of the following function types: an exponential function, a logarithmic function, a power function, or a linear function.
  • the detail layer adjustment function is a continuous function.
  • the spatial filtering function comprises at least one of the following filtering functions: a Gaussian filtering function, a bilateral filtering function or a guiding filtering function.
  • each module in the apparatus 300 for image processing may be implemented by a processor or a processor-related circuit component.
  • the apparatus 300 can also include a memory in which instructions are stored, the processor executing the instructions stored in the memory to perform the actions of the various modules in the apparatus 300.
  • the embodiment of the present application further provides an apparatus 400 for image processing.
  • the apparatus 400 includes a processor 410, a memory 420, and a communication interface 430.
  • the memory 420 stores instructions, and the processor 410 is configured to execute the instructions stored in the memory 420.
  • the processor 410 is configured to execute the method provided by the foregoing method embodiment, and the processor 410 is further configured to control the communication interface 430 to communicate with the outside world.
  • the apparatus 300 shown in FIG. 4 and the apparatus 400 shown in FIG. 5 can be used to perform the operations or processes in the foregoing method embodiments, and the operations and/or functions of the respective modules in the apparatus 300 or the apparatus 400 implement the corresponding processes in the foregoing method embodiments; for brevity, details are not described here again.
  • the embodiment of the present application further provides a computer readable storage medium, comprising a computer program, when executed on a computer, causing the computer to execute the method provided by the foregoing method embodiment.
  • the embodiment of the present application further provides a computer program product comprising instructions, when the computer program product is run on a computer, causing the computer to execute the method provided by the foregoing method embodiment.
  • the processor mentioned in the embodiments of the present application may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • the general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the memory referred to in the embodiments of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory may be a read-only memory (ROM), a programmable read only memory (PROM), an erasable programmable read only memory (Erasable PROM, EPROM), or an electric Erase programmable read only memory (EEPROM) or flash memory.
  • the volatile memory can be a Random Access Memory (RAM) that acts as an external cache.
  • many forms of RAM are available, for example, static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchlink dynamic random access memory (SLDRAM), and direct rambus random access memory (DR RAM).
  • when the processor is a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, the memory (storage module) is integrated in the processor.
  • the memories described herein are intended to include, but are not limited to, these and any other suitable types of memory.
  • modules and algorithm steps of the various examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware or a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the solution. A person skilled in the art can use different methods to implement the described functions for each particular application, but such implementation should not be considered to be beyond the scope of the present application.
  • the disclosed systems, devices, and methods may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the modules is only a logical function division; in actual implementation there may be other division manners, for example, multiple modules or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or module, and may be electrical, mechanical or otherwise.
  • the modules described as separate components may or may not be physically separated.
  • the components displayed as modules may or may not be physical modules, that is, may be located in one place, or may be distributed to multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist physically separately, or two or more modules may be integrated into one module.
  • the functions, if implemented in the form of software functional modules and sold or used as separate products, may be stored in a computer readable storage medium.
  • the part of the technical solution of the present application that is essential or that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present application.
  • the foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a method and apparatus for image processing. The method comprises: acquiring a first image; processing the first image according to a spatial filter function to generate a first base layer; performing a subtraction operation or a division operation on the first image and the first base layer to generate a first detail layer; determining, according to the first image, a detail layer adjustment function, an independent variable of the detail layer adjustment function being a nonlinear signal of the first image; adjusting the first detail layer according to the detail layer adjustment function to obtain a second detail layer; and performing an addition operation or a multiplication operation on the first base layer and the second detail layer to generate a second image. By establishing a detail layer adjustment function with respect to a nonlinear signal of an image and adjusting a detail layer of the image by means of the detail layer adjustment function, it is possible to prevent image quality problems perceptible to the human eye from arising in the adjusted image.
PCT/CN2018/103351 2017-11-13 2018-08-30 Method and apparatus for image processing Ceased WO2019091196A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711112510.8A CN109785239B (zh) 2017-11-13 2017-11-13 Image processing method and apparatus
CN201711112510.8 2017-11-13

Publications (1)

Publication Number Publication Date
WO2019091196A1 true WO2019091196A1 (fr) 2019-05-16

Family

ID=66438627

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/103351 Ceased WO2019091196A1 (fr) 2017-11-13 2018-08-30 Method and apparatus for image processing

Country Status (2)

Country Link
CN (1) CN109785239B (fr)
WO (1) WO2019091196A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116016809A (zh) * 2022-11-24 2023-04-25 扬州联图大数据有限公司 A UAV image acquisition and generation system

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111383178A (zh) * 2018-12-29 2020-07-07 Tcl集团股份有限公司 An image enhancement method, apparatus and terminal device
CN113628106B (zh) * 2020-05-08 2025-01-28 华为技术有限公司 Image dynamic range processing method and apparatus
CN112200719B (zh) * 2020-09-27 2023-12-12 咪咕视讯科技有限公司 Image processing method, electronic device and readable storage medium
CN112991209B (zh) * 2021-03-12 2024-01-12 北京百度网讯科技有限公司 Image processing method and apparatus, electronic device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7840066B1 (en) * 2005-11-15 2010-11-23 University Of Tennessee Research Foundation Method of enhancing a digital image by gray-level grouping
CN103700067A (zh) * 2013-12-06 2014-04-02 浙江宇视科技有限公司 A method and apparatus for enhancing image detail
CN105427255A (zh) * 2015-11-16 2016-03-23 中国航天时代电子公司 A GRHP-based infrared image detail enhancement method for unmanned aerial vehicles
WO2017101137A1 (fr) * 2015-12-15 2017-06-22 华为技术有限公司 High dynamic range image processing method and apparatus, and terminal device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102289792A (zh) * 2011-05-03 2011-12-21 北京云加速信息技术有限公司 A low-illumination video image enhancement method and system
US8576445B2 (en) * 2011-06-28 2013-11-05 Konica Minolta Laboratory U.S.A., Inc. Method for processing high dynamic range images using tone mapping to extended RGB space
CN108370405B (zh) * 2015-12-23 2019-11-26 华为技术有限公司 An image signal conversion processing method, apparatus and terminal device

Also Published As

Publication number Publication date
CN109785239A (zh) 2019-05-21
CN109785239B (zh) 2021-05-04

Similar Documents

Publication Publication Date Title
JP6362793B2 (ja) Display management for high dynamic range video
US8290295B2 (en) Multi-modal tone-mapping of images
US10074162B2 (en) Brightness control for spatially adaptive tone mapping of high dynamic range (HDR) images
US9621767B1 (en) Spatially adaptive tone mapping for display of high dynamic range (HDR) images
WO2019091196A1 (fr) Method and apparatus for image processing
US10839495B2 (en) Computing devices and methods of image processing with input image data and reference tone mapping strength data
CN112449169B (zh) Tone mapping method and apparatus
US10491924B2 (en) Encoding and decoding of image data
US9324137B2 (en) Low-frequency compression of high dynamic range images
US20140140615A1 (en) Global Approximation to Spatially Varying Tone Mapping Operators
US12333694B2 (en) Image processing method and apparatus
US10298896B2 (en) Method and apparatus for generating HDRI
CN109817170B (zh) Pixel compensation method, apparatus and terminal device
US8538145B2 (en) Gamma adjustment for maximizing information in images
CN108694030B (zh) Method and apparatus for processing high dynamic range images
CN111754412B (zh) Method and apparatus for constructing data pairs, and terminal device
US10915996B2 (en) Enhancement of edges in images using depth information
US10019645B2 (en) Image processing apparatus and method, and electronic equipment
Liu et al. HDRC: A subjective quality assessment database for compressed high dynamic range image
CN114138218B (zh) A content display method and content display device
US20100053381A1 (en) Image processing apparatus and method thereof
CN109308690B (zh) An image brightness equalization method and terminal
CN117094908A (zh) Image processing method and apparatus, electronic device and storage medium
JP6484244B2 (ja) Image processing method for preserving small color/gray differences
CN112991209A (zh) Image processing method and apparatus, electronic device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18876189

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18876189

Country of ref document: EP

Kind code of ref document: A1