WO2025228523A1 - Method, computer program and electronic device for creating high dynamic range images - Google Patents
Method, computer program and electronic device for creating high dynamic range images
- Publication number
- WO2025228523A1 (PCT/EP2024/062043)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- intensity
- input image
- pixel
- pixels
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20208—High dynamic range [HDR] image processing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/741—Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/743—Bracketing, i.e. taking a series of images with varying exposure conditions
Definitions
- the invention is related to a method for creating a high dynamic range (HDR) output image from a plurality of low dynamic range (LDR) input images.
- the invention is related to a computer program having program code means adapted to perform such a method.
- the invention is related to an electronic device adapted to perform such a method.
- the invention concerns the field of high dynamic range (HDR) imaging.
- the human eye can perceive a very large range of brightness from very dark areas to very bright areas in the same scene.
- the detected amount of light is represented by the intensity of each pixel.
- dynamic range refers to the ratio between the maximum intensity (brightest area) and the minimum intensity (darkest area) of an image.
- the terms “dynamic range”, “dynamic intensity range” and “intensity range” are used equivalently in the context of this application.
- HDR images typically offer a dynamic range of 14-16 bits to 20-24 bits, while traditional LDR images (or standard dynamic range images) typically only offer a dynamic range of 8-10 bits. Therefore, HDR images preserve the details of real-world scenes that contain very bright as well as very dark areas much better than conventional LDR images. This means that HDR images allow capturing dynamic ranges similar to the range perceived by the human eye.
- Bracketed input images can be captured and merged to create an HDR output image.
- the bracketed input images of the series represent the same scene, but differ in their exposure levels, i.e. each bracketed input image covers a part of the full dynamic range of the scene.
- In the prior art, a method for enhancing the dynamic range of an image is known.
- a plurality of sub-images with different exposure times is created by means of an image sensor with a multi-exposure pixel pattern.
- Exposure values are interpolated for pixels with missing exposure values in each sub-image.
- an image with an enhanced dynamic range is created.
- Known methods for creating HDR images from bracketed LDR input images are associated with several disadvantages.
- these methods require that the exact amount of light (irradiance) represented by each bracketed input image is known.
- Moreover, the camera response functions (also referred to as sensor response functions or characteristic functions in this application), which are usually non-linear, must be known and their inverses must be calculated.
- This requires, amongst others, exact knowledge of the characteristics of the deployed optics and of the individual exposure adjustments, such as sensor gains, exposure times and apertures.
- the object of the invention is achieved by a method for creating a high dynamic range (HDR) output image from a plurality of low dynamic range (LDR) input images with the features of claim 1.
- The method's steps do not have to be executed strictly in the given order; for example, step e) could be executed before steps b) to d), or some of the method's steps could be executed in parallel.
- the term “high dynamic range output image” can generally refer to any image with a dynamic range that is greater than the dynamic range of the low dynamic range input images.
- the low dynamic range input images can be conventional low dynamic range images, e.g. images with a dynamic range of 8 to 10 bits and/or standard dynamic range (SDR) images.
- the input images and the output image can be part of an input video sequence and an output video sequence, respectively.
- Each respective input image’s exposure level can be controlled by the exposure settings that are used for generating the input image.
- Such exposure settings may include, for example, one or more of the following: exposure time, aperture, ISO speed (sensor gain).
- exposure time is a particularly important means of varying exposure, the terms “high exposure” (or “high exposure level”) and “long exposure” are used equivalently and the terms “low exposure” (or “low exposure level”) and “short exposure” are used equivalently, respectively, in the context of this application.
- a short exposure can generally also be achieved, for example, using a smaller aperture and/or a lower ISO speed.
- In step a), a bracketed series of at least two low dynamic range (LDR) input images is obtained.
- The bracketed input images, i.e. the input images that belong to the bracketed series of LDR input images, represent the same scene, but have different exposure levels.
- the series of input images can include, for example, three input images: one image with a long exposure (high exposure level), one image with a short exposure (medium exposure level) and one image with a very short exposure (low exposure level).
- the long exposure input image should capture dark areas of the scene satisfactorily, while bright areas of the scene may be overexposed.
- the short exposure input image should capture areas of medium brightness satisfactorily, while dark areas of the scene may be underexposed and/or bright areas of the scene may be overexposed.
- the very short exposure input image should capture highlights, i.e. bright areas of the scene satisfactorily, while dark areas may be underexposed.
- the exposure settings and exposure levels of the input images can be freely chosen, as long as there is a partial overlap of the intensities of the pixels of at least two different bracketed input images within the intensity range of interest. This constraint ensures that the intensity ratios in step g) and hence the mapping factors in step h) can be properly determined.
- the bracketed series of LDR input images includes a reference image, whose exposure level serves as a reference exposure level, and at least one non-reference image.
- a non-reference image is any input image that belongs to the series of bracketed LDR input images and is not the reference image.
- In step b), an intensity histogram is obtained for each input image, wherein the intensity histogram represents the intensity distribution of the respective input image.
- Each intensity histogram can have a plurality of bins. For example, each intensity histogram can have 32 bins, 64 bins, 128 bins or 256 bins.
- the intensity histogram for the respective input image can be obtained directly from the intensities of individual pixels of the respective input image. In other words, the intensity of each individual pixel can be counted for obtaining the intensity histogram.
- the respective image can be divided into a plurality of pixel blocks, wherein each pixel block includes a plurality of pixels, and the intensity histogram for the respective input image can be obtained from the mean intensities of the pixel blocks. This will be explained in detail later.
- In step c), an intensity range of interest is determined for each input image from the corresponding intensity histogram of the respective input image.
- the intensity range of interest is limited by a lower intensity threshold and an upper intensity threshold.
- In step d), a mapping mask is obtained for each input image, wherein the mapping mask indicates the pixel positions of all pixels of the respective input image whose intensities are within the respective image's intensity range of interest.
- the mapping mask excludes the pixel positions of pixels whose intensities lie outside of the intensity range of interest, i.e. pixels whose intensities are smaller than the lower intensity threshold or greater than the upper intensity threshold of the intensity range of interest.
- Such excluded pixel positions may be, for example, the positions of underexposed and/or overexposed pixels and/or defective pixels (such as cold pixels and/or hot pixels).
- In step e), a comparison image for each non-reference image is determined (or selected) from the series of input images, wherein the reference image is determined (or selected) as a comparison image for at least one of the non-reference images.
- the reference image can be selected as the comparison image for every non-reference image.
- the comparison image for each non-reference image can be determined depending on the exposure level of the respective non-reference image. This will be explained in detail later.
- In step f), for each non-reference image, common pixel positions of the respective non-reference image and its respective comparison image are determined.
- Such common pixel positions are pixel positions that are included both in the mapping mask of the respective non-reference image and in the mapping mask of the respective non-reference image's comparison image.
- the common pixel positions can be determined, for example, based on an intersection of the mapping mask of the respective non-reference image and the mapping mask of the corresponding comparison image.
- In step g), for each non-reference image, an intensity ratio between the intensities of the pixels of the respective non-reference image's comparison image and the intensities of the pixels of the respective non-reference image is calculated, wherein only the pixels at the common pixel positions are considered for calculating the intensity ratio. In other words, pixels whose pixel positions are not included in the common pixel positions determined in step f) are excluded from the calculation of the intensity ratio. This means that only those pixel positions that are included both in the mapping mask of the respective non-reference image and in the mapping mask of the respective non-reference image's comparison image are considered for calculating the intensity ratio of the respective non-reference image.
- In step h), for each non-reference image, a mapping factor m between the respective non-reference image's exposure level and the reference exposure level is determined based on at least one intensity ratio calculated in step g).
- If the respective non-reference image's comparison image is the reference image, the mapping factor m for this respective non-reference image can be equal to the intensity ratio determined for this respective non-reference image in step g).
- In this case, the mapping factor m for this respective non-reference image can be equal to the intensity ratio between the intensities of the pixels of the reference image and the intensities of the pixels of this respective non-reference image.
- If the reference image is selected as the comparison image for every non-reference image, the mapping factor m for each non-reference image can be equal to the intensity ratio determined in step g) for the respective non-reference image. This means that in this case the mapping factor m for each non-reference image can be equal to the intensity ratio between the intensities of the pixels of the reference image and the intensities of the pixels of the respective non-reference image.
- If the respective non-reference image's comparison image is not the reference image, the mapping factor m for this respective non-reference image can be determined based on a plurality of intensity ratios calculated in step g).
- the mapping factor m for the respective non-reference image whose comparison image is not the reference image can be recursively determined by multiplying the intensity ratio that has been calculated for the respective non-reference image in step g) (i.e. the intensity ratio between the intensities of the pixels of the respective non-reference image’s comparison image and the intensities of the pixels of the respective non-reference image) by the mapping factor m that is determined for the respective non-reference image’s comparison image in step h).
- This recursive calculation ends when it reaches a non-reference image whose comparison image is the reference image.
- In step i), for each non-reference image, the intensities of all pixels of the respective non-reference image are mapped to the reference exposure level, thereby obtaining a mapped intensity for each pixel of the respective non-reference image.
- This is achieved by applying the mapping factor m determined in step h) to all pixels of the respective non-reference image.
- In order to map the intensities of all pixels of the respective non-reference image to the reference exposure level, the intensity of each pixel can simply be multiplied by the mapping factor m that has been determined for the respective non-reference image in step h).
- In step j), the input images are merged into a high dynamic range output image based on the intensities of the pixels of the reference image and the mapped intensities of the pixels of each non-reference image, wherein only the pixels whose pixel positions are included in the mapping mask of the respective input image are considered for the merging.
- Pixels whose pixel positions are not included in the mapping mask of the respective input image obtained in step d), i.e. pixels whose intensities lie outside of the intensity range of interest, are excluded from the merging of the input images into a high dynamic range output image.
- such excluded pixels may be, for example, underexposed and/or overexposed pixels and/or defective pixels (such as cold pixels and/or hot pixels).
- the invention advantageously allows creating high quality HDR images from LDR input images in a particularly flexible, efficient and computationally inexpensive manner.
- the invention improves the accuracy of the created HDR output image by considering only pixels that are included in the respective mapping mask of each input image for the creation of the HDR output image, i.e. by considering only pixels whose intensities are within the respective input image’s intensity range of interest.
- the adverse effects that certain pixels of the input images have on the output image quality for example underexposed and overexposed pixels as well as defective and noisy pixels, can be eliminated, since these unwanted pixels can be effectively excluded from the process of creating the HDR output image.
- the proposed method is robust against noise and bad pixels and is able to create high quality HDR images even under most difficult lighting conditions.
- the invention significantly increases the flexibility of creating HDR images from LDR input images. This is achieved by means of dynamically calculating the mapping factor for each series of bracketed input images, which provides the advantage that the ratio of the exposure levels of the input images does not have to be constant and, even more beneficial, the exposure levels of the input images do not even have to be known in advance. Accordingly, the characteristics of the deployed optics, the exposure settings used for generating the input images and the sensor response function, which is usually non-linear, can remain unknown as well. This enormously simplifies the HDR image creation process and provides great flexibility.
- the proposed method can be applied in conjunction with all kinds of optics and all kinds of image sensors (cameras), even if the image sensor’s exposure settings and/or the sensor response function are configured dynamically and hence cannot be known in advance.
- the invention advantageously allows the use of low-cost image sensors.
- the proposed method can flexibly be used with any number of bracketed input images that is equal to or greater than two, i.e. the bracketed series of input images may comprise two, three or more images.
- the proposed method advantageously works in a self-adjusting manner, i.e. no time-consuming calibration processes of the deployed image capturing chain are needed.
- the invention provides the advantage that there is no need for a time-consuming and computationally expensive calculation of the inverses of the camera response functions.
- HDR images can be created very efficiently, which results in low requirements with regard to computational resources and hence allows the use of low-cost hardware.
- the term “Bayer domain” is used in a generalized fashion. Despite the use of the term “Bayer domain”, the employed Color Filter Array (CFA) does not necessarily have to be a Bayer filter, but can generally be any type of CFA. Therefore, the term “CFA domain” (“Color Filter Array domain”) can be used equivalently to the term “Bayer domain” within the context of this application.
- In an embodiment, step j) comprises j1) for each pixel position and each input image, determining a weighting factor w for the pixel of the respective input image at the respective pixel position, wherein the weighting factor depends on whether the pixel position is included in the mapping mask of the respective input image and in the mapping masks of the other input images.
- According to this embodiment of the invention, if there are at least two input images whose pixels at the respective pixel position are within the intensity range of interest, these pixels are blended according to their weighting factors (0 < w < 1) in order to determine the HDR output image's intensity at the respective pixel position.
- If only one input image's pixel at the respective pixel position is within the intensity range of interest, that pixel's intensity or mapped intensity, respectively, determines the HDR output image's intensity at the respective pixel position.
- This embodiment of the invention provides the advantage that it allows merging the LDR input images into a high quality HDR output image in a relatively simple, efficient and hence computationally inexpensive manner. While underexposed, overexposed and defective pixels can be effectively excluded, the entirety of beneficial image information from all input images can still be utilized to improve the quality of the output image.
- the weighting factor w can also be determined from the interval 0 ≤ w ≤ 1 (instead of the aforementioned interval 0 < w < 1) if the pixel position is included in the mapping mask of the respective input image and the pixel position is included in the mapping mask of at least one other input image.
- all steps of the method are executed in the Bayer domain (CFA domain).
- the input images are directly obtained from the Color Filter Array (CFA).
- the CFA does not necessarily have to be a Bayer filter, but can generally be any type of CFA.
- the terms “Bayer domain” and “CFA domain” are used equivalently in this application. Without loss of generality, all CFA filtered pixels are addressed also as RGB pixels in the context of this application.
- This embodiment of the invention provides the advantage that demosaicing, which is a computationally expensive process, does not have to be conducted for each input image, but only once per output image. For example, if three bracketed input images are used to create the output image, this allows reducing the required computational effort for demosaicing by approximately two thirds. As a result, this embodiment of the invention allows for a particularly simple and hence inexpensive implementation.
- the intensity of each pixel of each input image is the intensity of that respective pixel itself in the Bayer domain, regardless of the color of the respective pixel.
- In such embodiments, the intensity of each pixel that is used for obtaining the intensity histogram in step b) and/or for obtaining the mapping mask in step d) and/or for calculating the intensity ratio in step g) and/or for determining the weighting factor in step j1) is the intensity of that respective pixel itself in the Bayer domain, regardless of the color of the respective pixel.
- In other embodiments, the intensity of each pixel of each input image is determined only from the intensity of the green component in the Bayer domain, wherein the intensities of non-green pixels in the Bayer domain are determined based on the intensities of neighboring green pixels:
- for each green pixel in the Bayer domain, the intensity of that respective pixel is the intensity of the green pixel itself in the Bayer domain, and
- for each non-green pixel in the Bayer domain, the intensity of that respective non-green pixel is determined as max(GI,N), wherein GI is an intensity determined based on the intensities of neighboring green pixels, N is the intensity of the respective non-green pixel itself in the Bayer domain, and max is the maximum function.
- the CFA is a Bayer filter, i.e. a CFA that consists of green, red and blue pixels.
- the intensity of the green component in the Bayer domain is a good measure of the scene’s brightness level. Therefore, in these cases, a high quality of the output images can be ensured if the intensity of the pixels of the input image is determined from the intensity of the green component.
- the green channel typically includes the largest amount of light for typical light sources like sunlight and light bulbs.
- If the CFA is a Bayer filter, it usually includes twice as many green pixels as red or blue pixels. As a result, the green channel usually has a better signal-to-noise ratio than the red channel and the blue channel.
- In further embodiments, the intensity of each pixel of each input image is determined only from the intensity of a selected color component in the Bayer domain, wherein the intensities of non-selected color pixels in the Bayer domain are determined based on the intensities of neighboring selected color pixels:
- for each selected color pixel in the Bayer domain, the intensity of that respective pixel is the intensity of the selected color pixel itself in the Bayer domain, and
- for each non-selected color pixel in the Bayer domain, the intensity of that respective non-selected color pixel is determined as max(SI,N), wherein SI is an intensity determined based on the intensities of neighboring selected color pixels, N is the intensity of the respective non-selected color pixel itself in the Bayer domain, and max is the maximum function.
- the selected color component in the Bayer domain can be, for example, the green component.
- a corner case detection can be conducted for each input image, wherein a corner case is detected if the amount of the selected color (for example green) light of at least one input image is particularly low, e.g. lower than a specified threshold.
- the amount of selected color (for example green) light can be quantified, for example, by the mean intensity of the selected color (for example green) component of the pixels of the respective input image in the Bayer domain.
- the intensity of each pixel of each input image can be determined from the intensity of one or more components other than the selected color (for example green) component in the Bayer domain.
- the proposed method may additionally include, after obtaining the bracketed series of input images, for one of the input images or for a number of input images or for each input image, determining a dominant color component of the respective input image in the Bayer domain.
- For this purpose, a mean intensity (e.g., arithmetic mean) can be determined for each color component of the respective input image in the Bayer domain.
- the dominant color component of the input image can be determined based on a comparison of the mean intensities of all color components, wherein the color component with the largest mean intensity is determined as the dominant color component of the input image.
- the dominant color component may serve as the selected color component referred to above.
- In such embodiments, the intensity of each pixel of each input image is determined only from the intensity of the dominant color component in the Bayer domain, wherein the intensities of non-dominant color pixels in the Bayer domain are determined based on the intensities of neighboring dominant color pixels:
- for each dominant color pixel in the Bayer domain, the intensity of that respective pixel is the intensity of the dominant color pixel itself in the Bayer domain, and
- for each non-dominant color pixel in the Bayer domain, the intensity of that respective non-dominant color pixel is determined as max(DI,N), wherein DI is an intensity determined based on the intensities of neighboring dominant color pixels, N is the intensity of the respective non-dominant color pixel itself in the Bayer domain, and max is the maximum function.
- the selected color component in the Bayer domain referred to above can be the white component.
- the intensity of each pixel of each input image is generally determined only from the intensity of the white component in the Bayer domain, wherein the intensities of non-white pixels in the Bayer domain are determined based on the intensities of neighboring white pixels.
- CFAs that include a white component provide significant advantages for capturing dark scenes, i.e. scenes that only include a small amount of light, as the light sensitivity of the white pixels typically is significantly higher (approximately 2.5 times higher) than the light sensitivity of RGB pixels and hence underexposed pixels can be avoided.
- using the white component as the selected component referred to above, i.e. as the selected color component in the Bayer domain for determining the intensity of the pixels of the input image can be particularly advantageous for dark scenes.
- the selected color component in the Bayer domain for determining the intensity of the pixels of the input image can be determined based on the brightness level of the captured scene, wherein the white component is determined as the selected color component for dark scenes, e.g. if the brightness level does not exceed a specified threshold, and a non-white color component (e.g. the green component) is determined as the selected color component for bright scenes, e.g. if the brightness level exceeds a specified threshold.
- the corner case detection described below may be applied for finally determining the selected color component.
- a corner case detection can be conducted for each input image, wherein a corner case is detected if, based on using the white component for determining the intensity of the pixels of the input image, the number or fraction of common pixel positions determined in step f) is too small, e.g. is lower than a specified threshold.
- the intensity of each pixel of each input image can be determined from the intensity of one or more components other than the white component (e.g. the green component) in the Bayer domain.
- some or all steps of the method can be executed in the RGB domain.
- all steps of the method are executed in the RGB domain. This means that demosaicing is executed before executing any of the steps of the method according to the invention. Consequently, in such embodiments, the input images are RGB images. In such embodiments, the method can be applied to each component of the RGB input images separately.
- step b) comprises b1) for each input image, dividing the respective image into a plurality of pixel blocks, wherein each pixel block includes a plurality of pixels, and determining, for each pixel block, a mean intensity for the respective pixel block from the intensities of the pixels included in the respective pixel block, and b2) for each input image, obtaining an intensity histogram that represents the intensity distribution of the respective input image based on the mean intensities of the pixel blocks of the respective input image determined in step b1).
- the pixel blocks determined in step b1) are used instead of individual pixels, wherein the mean intensity of the respective pixel block is used as the intensity of all pixels included in the respective pixel block.
- Such embodiments of the invention provide the advantage that by using pixel blocks instead of single pixels, the computational effort associated with executing the proposed method can be significantly reduced, which allows the use of less powerful hardware. On the other hand, the accuracy that can be achieved by using pixel blocks is completely adequate for the purposes of the proposed method.
- step e) comprises determining the reference image as the comparison image for each non-reference image.
- This embodiment of the invention provides the advantage that it allows a particularly simple implementation of the proposed method. The reason for this is as follows: As explained before, if the respective non-reference image’s comparison image is the reference image, the mapping factor m for this respective non-reference image can be equal to the intensity ratio determined for this respective non-reference image in step g). This means that the mapping factor m can be equal to the intensity ratio between the intensities of the pixels of the reference image and the intensities of the pixels of this respective non-reference image.
- the mapping factor m for each non-reference image can be equal to the intensity ratio determined in step g) for the respective non-reference image. This means that the mapping factor m for each non-reference image can be equal to the intensity ratio between the intensities of the pixels of the reference image and the intensities of the pixels of the respective non-reference image.
- step e) comprises determining the comparison image for the respective non-reference image depending on the exposure level of the respective non-reference image, wherein
- the input image whose exposure level is next higher than the exposure level of the respective non-reference image is determined as the comparison image for the respective non-reference image, and/or
- the input image whose exposure level is next lower than the exposure level of the respective non-reference image is determined as the comparison image for the respective non-reference image.
- the mapping factor m for this respective non-reference image can be determined based on a plurality of intensity ratios calculated in step g).
- the mapping factor m for the respective non-reference image whose comparison image is not the reference image can be recursively determined by multiplying the intensity ratio that has been calculated for the respective non-reference image in step g) by the mapping factor m that is determined for the respective non-reference image’s comparison image in step h). This recursive calculation ends when it reaches a non-reference image whose comparison image is the reference image.
- the lower intensity threshold and the upper intensity threshold are determined based on an empirical cumulative intensity distribution of the respective input image, wherein the empirical cumulative intensity distribution can be calculated from the corresponding intensity histogram of the respective input image, and wherein
- the lower intensity threshold is determined as the intensity where the empirical cumulative intensity distribution of the respective input image exceeds a first value p1 and
- the upper intensity threshold is determined as the intensity where the empirical cumulative intensity distribution of the respective input image exceeds a second value p2 that is greater than the first value p1.
- the lower intensity threshold is determined as the empirical p1-quantile of the respective input image’s intensity distribution as it is represented by the respective input image’s intensity histogram.
- the upper intensity threshold is determined as the empirical p2-quantile of the respective input image's intensity distribution as it is represented by the respective input image's intensity histogram, wherein 0 < p1 < 1 and 0 < p2 < 1 and p1 < p2.
- This embodiment of the invention provides the advantage that the intensity range of interest can be determined in a very simple and efficient manner, which results in low computational complexity and supports the applicability of low-cost hardware.
- the bracketed series of input images comprises three input images, namely a first input image with a high exposure level (e.g. long exposure) and a second input image with a medium exposure level (e.g. short exposure) and a third input image with a low exposure level (e.g. very short exposure).
- This embodiment provides the advantage that it allows capturing dark areas as well as areas of medium brightness and bright areas of the scene reliably and efficiently.
- this embodiment allows creating high quality HDR output images based on a relatively small number of input images.
- the lower intensity threshold and the upper intensity threshold used in step c) are configured so that the intensity range of interest excludes underexposed pixels and/or overexposed pixels of the respective input image, and/or
- the lower intensity threshold and the upper intensity threshold used in step c) are configured so that the intensity range of interest excludes defective pixels of the respective input image, in particular hot pixels and/or cold pixels of the respective input image.
- the intensity range of interest can be configured to exclude underexposed and/or overexposed pixels of the respective input image.
- the intensity range of interest can be configured to exclude defective pixels, in particular cold pixels (pixels with false low intensity values at the lower end of the captured intensity range) and/or hot pixels (pixels with false high intensity values at the upper end of the captured intensity range).
- the proposed method considers only pixels whose intensities are within the respective input image’s intensity range of interest, which applies i) to determining the mapping factors that are used for mapping the input images’ intensities to the reference exposure level and ii) to merging the input images into the HDR output image. Therefore, this embodiment provides the advantage that the adverse effects that underexposed, overexposed pixels and/or defective pixels of the input images have on the output image quality can be eliminated, since these unwanted pixels can be effectively excluded from the process of creating the HDR output image.
- The exposure level of each input image, in particular the exposure time and/or the sensor gain of each input image, is automatically configured by an Auto Exposure Module.
- the exposure settings (e.g. the exposure time and/or the sensor gain and/or the aperture) of each input image are automatically configured by an Auto Exposure Module.
- the Auto Exposure Module can be adapted to ensure that, by automatically configuring the exposure levels of the input images accordingly, the bracketed input images in total appropriately capture the full dynamic range of the scene.
- the Auto Exposure Module can be adapted to ensure that, by automatically configuring the exposure levels of the input images accordingly, there is an at least partial overlap between the intensity ranges of interest of the bracketed input images. More specifically, the Auto Exposure Module can be adapted to ensure that, by automatically configuring the exposure settings of the input images accordingly, for each non-reference image, there is an at least partial overlap between the intensity ranges of interest of the respective non-reference image and its respective comparison image, wherein the latter can be, for example, the reference image.
- the series of input images comprises an input image with the lowest exposure level and an input image with the highest exposure level, wherein
- the lower intensity threshold of the input image with the lowest exposure level is configured so that underexposed pixels are excluded, and/or
- the upper intensity threshold of the input image with the highest exposure level is configured so that overexposed pixels are excluded, and/or
- the lower intensity threshold and the upper intensity threshold of every other input image are configured so that underexposed pixels and overexposed pixels are excluded.
- This embodiment provides the advantage that underexposed and/or overexposed pixels are excluded from the respective intensity range of interest and hence these pixels can be effectively excluded from the process of creating the HDR output image. Moreover, this embodiment advantageously provides a simple solution for configuring the intensity ranges of interest in a way that i) the intensity ranges of interest of the bracketed input images collectively cover the full dynamic range of the scene and ii) there is an at least partial overlap between the intensity ranges of interest of the input images, which is important for the reasons mentioned above. As a result, a high quality of the HDR output image can be achieved.
- the weighting factor w for each pixel is determined as a function of the intensity of the respective pixel, wherein the function is defined in a way that in an overlapping area of the intensity range of interest of a first input image and the intensity range of interest of a second input image, the weighting factor w for the intensity of the pixels of the first input image steadily increases, for example linearly, from 0 to 1 and the weighting factor w for the intensity of the pixels of the second input image correspondingly decreases from 1 to 0 with increasing intensity.
- the embodiment described above provides the advantage that it provides a simple and efficient solution for blending the pixels of two input images if both pixels’ intensities are in the intensity range of interest of the respective image.
- a median filter with a specified median filter kernel size is applied to each weighting factor w for each pixel of the respective input image.
- a median filter is applied to the initially determined weighting factor w of the respective pixel, wherein the median is calculated based on the initially determined weighting factor w of the respective pixel and the initially determined weighting factors of neighboring pixels of the same input image within the kernel size.
- a kernel size of 3x3 can be configured for this median filter, which means that direct neighbors of each pixel are considered for the median calculation.
- a mean filter with a specified mean filter kernel size is applied to each weighting factor w for each pixel of the respective input image.
- the mean is calculated based on the weighting factor w (resulting from applying the median filter as described before) of the respective pixel and the weighting factors (resulting from applying the median filter as described before) of neighboring pixels of the same input image within the kernel size.
- a kernel size of 3x3 can be configured for this mean filter, which means that direct neighbors of each pixel are considered for the mean calculation.
- Such an embodiment provides the advantage that it allows smooth blending in the spatial domain.
- the mean filter takes account of spatial information, which advantageously prevents artefacts caused by strong edges.
- the input images are part of an input video sequence and the output image is part of an output video sequence.
- Such an embodiment of the invention allows utilizing the proposed method’s advantages that were described above for creating high dynamic range output image video sequences, i.e. it allows creating high dynamic range output image video sequences in a particularly flexible, efficient and computationally inexpensive manner.
- the object of the invention is further achieved by a computer program having program code means adapted to perform a method as described above when the computer program is executed on a computer.
- the object of the invention is further achieved by an electronic device that is adapted to perform a method as described above.
- the electronic device can be, for example, a stand-alone integrated circuit (IC) or a part thereof.
- the electronic device can also be a system on a chip (SoC) or a part thereof.
- the electronic device can also be a part of an image processing pipeline and/or an image processing chain.
- the electronic device can also be a camera or an image signal processor (ISP) or a part of a camera or an ISP.
- the electronic device can also be a system that comprises a camera and/or an ISP.
- the electronic device can also be a part of such a system.
- Figure 1 - a schematic representation of an embodiment of the method for creating a high dynamic range output image according to the invention
- Figure 2 - a schematic representation of an exemplary intensity histogram and a corresponding intensity range of interest limited by a lower intensity threshold and an upper intensity threshold according to the invention
- Figure 3 - a schematic representation of exemplary mapping masks and common pixel positions according to the invention for three input images
- Figure 4 - a schematic representation of an exemplary image processing system comprising an electronic device according to the invention.
- Figure 1 shows a schematic representation of an exemplary method for creating a high dynamic range output image from a plurality of low dynamic range input images according to the invention.
- the input images are part of an input video sequence and the output image is part of an output video sequence.
- all steps of the method are executed in the Bayer domain, i.e. no demosaicing is conducted before executing any of the steps of the proposed method.
- the input images are directly obtained from the Color Filter Array (CFA), which is a Bayer filter in this example, but can generally be any other type of CFA in other embodiments of the invention.
- some or all steps of the method can be executed in the RGB domain instead of the Bayer domain.
- In step 101, which corresponds to step a) explained above, a bracketed series of low dynamic range (LDR) input images is obtained. All input images represent the same scene, but differ in their exposure levels.
- The exposure settings (exposure time and sensor gain) of each input image, and thereby the exposure level of each input image, are automatically configured by an Auto Exposure Module.
- the bracketed series of input images comprises three input images, namely a first input image with a high exposure level (long exposure) and a second input image with a medium exposure level (short exposure) and a third input image with a low exposure level (very short exposure).
- the series of input images includes a reference image, which is the second input image (short exposure, i.e. medium exposure level) in this exemplary embodiment, and two non-reference images, which are the first input image (long exposure, i.e. high exposure level) and the third input image (very short exposure, i.e. low exposure level) in this exemplary embodiment.
- The exposure level of the reference image (the second input image in this exemplary embodiment), i.e. the medium exposure level, serves as the reference exposure level.
- In steps 102 and 103 of the embodiment shown in Figure 1, which collectively correspond to step b) of the method according to the invention explained above, for each of the three input images, an intensity histogram is obtained that represents the intensity distribution of the respective input image.
- the intensity histogram has 64 bins.
- the intensities of the pixels of each of the three input images are determined as follows: for each green pixel in the Bayer domain, the intensity of that respective pixel is the intensity of the green pixel itself in the Bayer domain, and for each non-green pixel in the Bayer domain, the intensity of that respective non-green pixel is determined as max(GI,N), wherein GI is an intensity determined based on the intensities of neighboring green pixels and N is the intensity of the respective non-green pixel itself in the Bayer domain and max is the maximum function.
- GI is calculated based on a convolution with the kernel k as follows: GI(x,y) = (I ∗ k)(x,y), with k = (1/4) · [[0, 1, 0], [1, 0, 1], [0, 1, 0]], wherein I is the Bayer-domain intensity image.
- In other words, GI is determined as the arithmetic mean of the intensities of the green pixels that are direct neighbors of the respective non-green pixel at pixel position (x,y).
- In step 102 of the embodiment shown in Figure 1, which corresponds to step b1) explained above, for each of the three input images, the respective image is divided into a plurality of pixel blocks, wherein each pixel block includes a plurality of pixels. Furthermore, in step 102, for each pixel block, a mean intensity for the respective pixel block is determined from the intensities of the pixels included in the respective pixel block, wherein the intensities of green pixels and non-green pixels in the Bayer domain are determined as explained before. Afterwards, in step 103, for each of the three input images, an intensity histogram is obtained that represents the intensity distribution of the respective input image based on the mean intensities of the pixel blocks of the respective input image determined in step 102.
- the intensity histogram for each input image can be obtained directly from the intensities of individual pixels of the respective input image, i.e. instead of using pixel blocks, the intensity of each individual pixel can be counted for obtaining the intensity histogram in such embodiments.
- In step 104 of the embodiment shown in Figure 1, which corresponds to step c) explained above, for each input image, an intensity range of interest is determined from the corresponding intensity histogram of the respective input image.
- the intensity range of interest is limited by a lower intensity threshold and an upper intensity threshold.
- the series of input images comprises an input image with the highest exposure level, namely the first input image (long exposure), and an input image with the lowest exposure level, namely the third input image (very short exposure), and the intensity thresholds used in step 104 are configured as follows:
- the upper intensity threshold of the first input image (long exposure, highest exposure level) is configured so that overexposed pixels are excluded and
- the lower intensity threshold of the third input image (very short exposure, lowest exposure level) is configured so that underexposed pixels are excluded and
- the lower intensity threshold and the upper intensity threshold of the second input image are configured so that underexposed pixels and overexposed pixels are excluded.
- the lower intensity threshold and the upper intensity threshold used in step 104 are configured so that the intensity range of interest excludes defective pixels of the respective input image, in particular hot pixels and/or cold pixels of the respective input image.
- Figure 2 schematically shows an exemplary intensity histogram 5 that represents the intensity distribution of the second input image (short exposure, medium exposure level) of this exemplary embodiment.
- the intensity histogram 5 has 64 bins that correspond to intensity intervals. While the intensity values are displayed on the horizontal axis 51, the numbers of pixels that fall into each of the 64 intervals are displayed on the vertical axis 52.
- pixel blocks are used for obtaining the intensity histogram in this exemplary embodiment. Therefore, in Figure 2, numbers of blocks could be displayed on the vertical axis 52 instead of numbers of pixels.
- Since the number of pixels per pixel block is known, the number of blocks can easily be converted to the number of pixels and vice versa, which means that both ways of representing the intensity histogram 5 are equivalent to each other. Therefore, the number of pixels is shown in Figure 2 for illustrative purposes.
- Figure 2 depicts an intensity range of interest IRI that has been determined from the second input image's intensity histogram 5, wherein the intensity range of interest IRI is limited by a lower intensity threshold th_low and an upper intensity threshold th_up.
- Figure 2 illustrates the histogram 5, the intensity range of interest IRI and the intensity thresholds th_low, th_up only schematically.
- The intensity thresholds th_low, th_up can be configured so that the proportion of intensity values being excluded from the input image's intensity range of interest is significantly smaller than it is shown in Figure 2 for illustrative purposes.
- In step 105 of the embodiment shown in Figure 1, which corresponds to step d) explained above, for each input image, a mapping mask is obtained that indicates the pixel positions of all pixels of the respective input image whose intensities are within the respective image's intensity range of interest as it has been determined in step 104.
- Each mapping mask excludes the pixel positions of pixels whose intensities lie outside of the corresponding intensity range of interest.
- the excluded pixel positions particularly comprise the positions of underexposed pixels, overexposed pixels and defective pixels (such as cold pixels and hot pixels).
- the intensities of green pixels and non-green pixels in the Bayer domain are determined as explained before with reference to steps 102 and 103 (obtaining the intensity histogram), i.e. for each green pixel in the Bayer domain, the intensity of that respective pixel is the intensity of the green pixel itself in the Bayer domain, and for each non-green pixel in the Bayer domain, the intensity of that respective non-green pixel is determined as max(GI,N).
- For additional details, reference is made to the above explanation of steps 102 and 103 of the embodiment shown in Figure 1.
- the pixel blocks determined in step 102 are used instead of individual pixels, wherein the mean intensity of the respective pixel block is used as the intensity of all pixels included in the respective pixel block.
- In step 106 of the embodiment shown in Figure 1, which corresponds to step e) explained above, a comparison image for each non-reference image is determined from the series of input images.
- the reference image is determined as the comparison image for every non-reference image.
- the second input image (medium exposure level, short exposure), which is the reference image, is determined as the comparison image for both non-reference images, i.e. for the first input image (high exposure level, long exposure) as well as for the third input image (low exposure level, very short exposure).
- In step 107 of the embodiment shown in Figure 1, which corresponds to step f) explained above, for each non-reference image, common pixel positions of the respective non-reference image and its respective comparison image are determined.
- Such common pixel positions are pixel positions that are included both in the mapping mask of the respective non-reference image and in the mapping mask of the respective non-reference image’s comparison image, wherein the mapping masks have been obtained for each of the three input images in step 105.
- the first input image (high exposure level, long exposure) and the third input image (low exposure level, very short exposure) are the non-reference images and the second input image (medium exposure level, short exposure) is the comparison image for both non-reference images.
- common pixel positions of the first input image (high exposure level, long exposure) and the second input image (medium exposure level, short exposure) are determined, wherein common pixel positions are pixel positions that are included both in the mapping mask of the first input image and in the mapping mask of the second input image, and
- common pixel positions of the third input image (low exposure level, very short exposure) and the second input image (medium exposure level, short exposure) are determined, wherein common pixel positions are pixel positions that are included both in the mapping mask of the third input image and in the mapping mask of the second input image.
- the common pixel positions are determined based on an intersection of the mapping mask of the respective non-reference image and the mapping mask of the corresponding comparison image.
- the pixel blocks determined in step 102 are used instead of individual pixels, wherein the mean intensity of the respective pixel block is used as the intensity of all pixels included in the respective pixel block.
- Figure 3 schematically shows an example of three mapping masks 11, 12, 13 for three exemplary input images 1, 2, 3 and corresponding common pixel positions 16, 17. It schematically illustrates the first input image 1 (long exposure, i.e. high exposure level), which is a non-reference image, the second input image 2 (short exposure, i.e. medium exposure level), which is the reference image, and the third input image 3 (very short exposure, i.e. low exposure level), which is also a non-reference image.
- the reference image 2 is the comparison image for both non-reference images 1, 3.
- each input image comprises a plurality of pixel blocks 15, wherein each pixel block comprises a plurality of pixels.
- the three mapping masks 11, 12, 13 are marked with different hatchings and each mapping mask 11, 12, 13 indicates the pixel block positions (and hence the pixel positions) of all pixel blocks (and hence pixels) of the respective input image whose intensities are within the respective image's intensity range of interest IRI.
- the first mapping mask 11 belongs to the first input image 1 (i.e. the first mapping mask 11 indicates the pixel positions of all pixels of the first input image whose intensities are within the first input image’s intensity range of interest IRI).
- the second mapping mask 12 belongs to the second input image 2 and the mapping mask 13 belongs to the third input image 3.
- mapping masks cover different parts of the scene:
- the first mapping mask 11 of the first input image 1 covers dark areas
- the second mapping mask 12 of the second input image 2 covers areas of medium brightness
- the third mapping mask 13 of the third input image 3 covers bright areas of the scene.
- Figure 3 also shows common pixel positions 16 of the first input image 1 (non-reference image) and the second input image 2, which is the comparison image of the first input image 1. It can be seen in Figure 3 that the common pixel positions 16 have been determined based on an intersection of the mapping mask 11 of the non-reference image, which is the first input image 1 in this example, and the mapping mask 12 of the corresponding comparison image, which is the second input image 2 in this example. It can also be seen in Figure 3 that the pixel blocks 15 and their respective pixel block positions have been used instead of individual pixels for determining the common pixel positions 16, wherein the (common) pixel positions can be directly inferred from the corresponding pixel block positions of the pixel blocks 15.
- Figure 3 also shows common pixel positions 17 of the third input image 3 (non-reference image) and the second input image 2, which is the comparison image of the third input image 3, wherein the common pixel positions 17 have been determined based on an intersection of the mapping mask 13 of the non-reference image, which is the third input image 3 in this example, and the mapping mask 12 of the corresponding comparison image, which is the second input image 2 in this example.
- In step 108 of this exemplary embodiment, which corresponds to step g) explained above, an intensity ratio between the intensities of the pixels of the respective non-reference image’s comparison image and the intensities of the pixels of the respective non-reference image is calculated, wherein only the pixels at the common pixel positions are considered for calculating the intensity ratio.
- In this example, the non-reference images are the first input image (high exposure level, long exposure) and the third input image (low exposure level, very short exposure), and the second input image (medium exposure level, short exposure) is the comparison image for both. Accordingly,
- an intensity ratio is calculated between the intensities of the pixels of the first input image (non-reference image) and the intensities of the pixels of the second input image (first input image’s comparison image), wherein only the common pixel positions 16 (see Figure 3) of the first input image and the second input image are considered for calculating the intensity ratio, and
- an intensity ratio is calculated between the intensities of the pixels of the third input image (non-reference image) and the intensities of the pixels of the second input image (third input image’s comparison image), wherein only the common pixel positions 17 (see Figure 3) of the third input image and the second input image are considered for calculating the intensity ratio.
- the intensities of green pixels and non-green pixels in the Bayer domain are determined as explained before with reference to steps 102, 103 (obtaining the intensity histogram) and 105 (obtaining the mapping masks), i.e. for each green pixel in the Bayer domain, the intensity of that respective pixel is the intensity of the green pixel itself in the Bayer domain, and for each non-green pixel in the Bayer domain, the intensity of that respective non-green pixel is determined as max(GI,N).
- For additional details, reference is made to the above explanation of steps 102 and 103 of the embodiment shown in Figure 1.
- the intensity ratio can be calculated as follows: i) calculate a first mean intensity (e.g., arithmetic mean), which is the mean intensity of all pixels of the respective non-reference image’s comparison image that are located at the common pixel positions, ii) calculate a second mean intensity (e.g., arithmetic mean), which is the mean intensity of all pixels of the respective non-reference image itself that are located at the common pixel positions, and iii) divide the first mean intensity by the second mean intensity, wherein the resulting quotient is the intensity ratio.
- Expressed as a formula, the intensity ratio R of the respective non-reference image is

R = ( (1/N) · Σ_{(x,y) ∈ C} I_comp(x,y) ) / ( (1/N) · Σ_{(x,y) ∈ C} I_nonref(x,y) ),

i.e. the first mean intensity (e.g., arithmetic mean) divided by the second mean intensity (e.g., arithmetic mean), wherein
- C is the set of common pixel positions and N is the number of common pixel positions, i.e. the size of set C,
- I_comp(x,y) is the intensity of a pixel of the comparison image at pixel position (x,y),
- I_nonref(x,y) is the intensity of a pixel of the non-reference image at pixel position (x,y).
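- As a minimal sketch of this calculation (not part of the patent disclosure; the array-based bookkeeping is an assumption), e.g. with NumPy:

```python
import numpy as np

def intensity_ratio(comp_blocks, nonref_blocks, common):
    """Intensity ratio R over the common (block) positions.

    comp_blocks / nonref_blocks: 2D arrays of (block) intensities;
    common: boolean mask of the common positions (set C).
    Dividing the two sums equals dividing the two means,
    as the 1/N factors cancel out.
    """
    return comp_blocks[common].sum() / nonref_blocks[common].sum()
```

Applied to the placeholder arrays of the earlier sketch, this would be called as `intensity_ratio(blocks_img2, blocks_img3, common)`.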
- the pixel blocks determined in step 102 are used instead of individual pixels, wherein the mean intensity of the respective pixel block is used as the intensity of all pixels included in the respective pixel block.
- In step 109 of the embodiment shown in Figure 1, which corresponds to step h) explained above, a mapping factor m between the respective non-reference image’s exposure level and the reference exposure level is determined based on at least one intensity ratio calculated in step 108.
- Since the reference image (the second input image) is the comparison image for each non-reference image in this example, the mapping factor m for each non-reference image (i.e. for the first input image and for the third input image in this example) is equal to the intensity ratio R determined for the respective non-reference image in step 108, i.e. m = R in this example.
- In step 110 of the embodiment shown in Figure 1, which corresponds to step i) explained above, for each non-reference image (i.e. for the first input image and the third input image in this example), the intensities of all pixels of the respective non-reference image are mapped to the reference exposure level, thereby obtaining a mapped intensity for each pixel of the respective non-reference image.
- This is achieved by applying the mapping factor m determined in step 109 to all pixels of the respective non-reference image. In this exemplary embodiment, this is achieved by simply multiplying the intensity of each pixel by the mapping factor m that has been determined for the respective non-reference image in step 109.
- In this exemplary embodiment, the respective image’s individual pixels and the intensities of these individual pixels are used for mapping the intensities of the pixels to the reference exposure level in step 110.
- In step 111 of the embodiment shown in Figure 1, which corresponds to step j1) explained before, the weighting factor w for each pixel is determined as a function of the intensity of the respective pixel, wherein the function is defined in a way that, in an overlapping area of the intensity range of interest of an input image and the intensity range of interest of another input image, the weighting factor w for the intensity of the pixels of the former input image linearly increases from 0 to 1 and the weighting factor w for the intensity of the pixels of the latter input image linearly decreases from 1 to 0 with increasing intensity.
- In step 111 of this exemplary embodiment, after the weighting factors w have been initially determined for all pixels of any of the three input images as described above, a median filter with a specified median filter kernel size of 3x3 is applied to the weighting factor w of each pixel of the respective input image, which means that the direct neighbors of each pixel are considered for the median calculation.
- Subsequently, a mean filter with a specified mean filter kernel size of 3x3 is applied to the weighting factor w of each pixel of the respective input image. This means that the mean is calculated based on the weighting factor w of the respective pixel (resulting from applying the median filter as described before) and the weighting factors of its direct neighbors within the same input image (also resulting from applying the median filter).
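- The following sketch illustrates one possible implementation of this weighting and smoothing (illustrative only; the ramp boundaries `ramp_lo`/`ramp_hi` and the use of SciPy filters are assumptions, not the patent’s prescription):

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def ramp_weight(intensity, ramp_lo, ramp_hi, rising=True):
    """Weight that rises linearly from 0 to 1 (or falls from 1 to 0) over the
    overlapping area [ramp_lo, ramp_hi] of two intensity ranges of interest."""
    t = np.clip((intensity - ramp_lo) / (ramp_hi - ramp_lo), 0.0, 1.0)
    return t if rising else 1.0 - t

def smooth_weights(w):
    """3x3 median filter followed by a 3x3 mean filter over the weight map,
    so that the direct neighbors of each pixel are taken into account."""
    return uniform_filter(median_filter(w, size=3), size=3)
```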
- In step 112 of the embodiment shown in Figure 1, which corresponds to step j2) explained before, for each pixel of the reference image (i.e. for each pixel of the second input image in this example), a weighted intensity of the respective pixel of the reference image is obtained by applying the weighting factor w for the respective pixel of the reference image (as determined before in step 111) to the intensity of the respective pixel of the reference image.
- Likewise, for each pixel of each non-reference image, a weighted intensity of the respective pixel of the respective non-reference image is obtained by applying the weighting factor w for the respective pixel of the respective non-reference image (as determined before in step 111) to the mapped intensity of the respective pixel of the respective non-reference image (as determined before in step 110).
- the weighting factor w is applied to the intensity (or mapped intensity, respectively) of a pixel by simply multiplying the weighting factor w by the intensity (or mapped intensity, respectively) of the respective pixel.
- In step 113 shown in Figure 1, which corresponds to step j3) explained before, the three input images of this exemplary embodiment are merged into a high dynamic range output image by adding up, for each pixel position, the weighted intensities of the pixels of all three input images at the respective pixel position (as determined before in step 112) to determine the intensity of the pixel of the output image at the respective pixel position.
- In this exemplary embodiment, the respective image’s individual pixels and the intensities of these individual pixels are used for merging the input images into a high dynamic range output image in steps 111, 112 and 113 of the embodiment shown in Figure 1.
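- A corresponding merging sketch (illustrative; assumes the weights per pixel position already sum to 1 and the non-reference intensities have been mapped in step 110):

```python
import numpy as np

def merge_hdr(mapped_intensities, weights):
    """Sum of the weighted (mapped) intensities of all input images
    at each pixel position, cf. steps 112 and 113.

    mapped_intensities, weights: lists of 2D arrays, one per input image;
    pixels outside an image's mapping mask are assumed to carry w = 0.
    """
    out = np.zeros_like(mapped_intensities[0], dtype=np.float64)
    for img, w in zip(mapped_intensities, weights):
        out += w * img  # step 112: weighting; step 113: summation
    return out
```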
- Figure 4 shows an exemplary image processing system 209.
- the image processing system 209 comprises an image sensor 201.
- the image processing system 209 of Figure 4 comprises an electronic device 203, which is an image processing module in this exemplary embodiment.
- the image processing module 203 has a data processing unit 205 and a memory 207 to store image data.
- the data processing unit 205 can be, for example, an appropriately programmed microprocessor, an image signal processor (ISP), a digital signal processor (DSP), a graphics processing unit (GPU), a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), or similar.
- the data processing unit 205 reads from and writes to the memory 207.
- the image sensor 201 is configured to generate a bracketed series of at least two low dynamic range input images, wherein the series of input images is part of an input video sequence in this exemplary embodiment.
- the image sensor 201 is directly or indirectly connected to the image processing module 203, which allows the image processing module 203 to read the input images of the input video sequence generated by the image sensor 201.
- Each input image read by the image processing module 203 can be stored in memory 207.
- the image processing module 203 is adapted to perform the method as described above for creating a high dynamic range (HDR) output image from a plurality of low dynamic range (LDR) input images.
- the input images are part of an input video sequence and the output image is part of an output video sequence.
- the generated HDR output image can be stored in memory 207 for further processing or for any further application-specific purposes.
- This procedure can be repeated for each input image of the input video sequence generated by the image sensor 201. This results in the generation of an HDR output video sequence, which is a sequence of HDR output images.
- the HDR output video sequence can be stored in memory 207.
- the output image and/or the output video sequence can be stored in another memory and/or stored on a data storage unit and/or can be transmitted via a data transmission link.
Abstract
A method, a computer program and an electronic device for creating a high dynamic range (HDR) output image are proposed. The method comprises obtaining a bracketed series of at least two low dynamic range input images, including a reference image, whose exposure level serves as a reference exposure level, and at least one non-reference image. Intensity histograms are obtained and intensity ranges of interest are determined from the histograms. Based on mapping masks derived from the intensity ranges of interest, common pixel positions are determined and intensity ratios are calculated only considering the common pixel positions. Using mapping factors determined based on the intensity ratios, the intensities of the non-reference images are mapped to the reference exposure level. The input images are merged into an HDR output image using the mapped intensities, wherein only pixels whose pixel positions are included in the corresponding mapping mask are considered for the merging.
Description
Method, Computer Program and Electronic Device for Creating High Dynamic Range Images
The invention is related to a method for creating a high dynamic range (HDR) output image from a plurality of low dynamic range (LDR) input images.
Moreover, the invention is related to a computer program having program code means adapted to perform such a method.
Furthermore, the invention is related to an electronic device adapted to perform such a method.
In general, the invention concerns the field of high dynamic range (HDR) imaging. The human eye can perceive a very large range of brightness from very dark areas to very bright areas in the same scene. In digital imaging, the detected amount of light (irradiance) is represented by the intensity of each pixel. The term dynamic range (or dynamic intensity range) refers to the ratio between the maximum intensity (brightest area) and the minimum intensity (darkest area) of an image. The terms “dynamic intensity range”, “dynamic range” and “intensity range” are used equivalently in the context of this application. HDR images typically offer a dynamic range of 14-16 bits to 20-24 bits, while traditional LDR images (or standard dynamic range images) typically only offer a dynamic range of 8-10 bits. Therefore, HDR images allow preserving the details of real-world scenes that contain very bright, but also very dark areas much better than conventional LDR images. This means that HDR images allow capturing dynamic ranges similar to the human eye.
Common cameras and image sensors, however, are generally not capable of capturing HDR images with a single exposure. Instead, a bracketed series of LDR input images (“bracketed input images”) can be captured and merged to create an HDR output image. The bracketed input images of the series represent the same scene, but differ in their exposure levels, i.e. each bracketed input image covers a part of the full dynamic range of the scene.
From WO 2009/044246 A1, for example, a method for enhancing a dynamic range of an image is known. For this purpose, a plurality of sub-images with different exposure times is created by means of an image sensor with a multi-exposure pixel pattern. Exposure values are interpolated for pixels with missing exposure values in each sub-image. By combining the resulting interpolated images of the sub-images, an image with an enhanced dynamic range is created.
Conventional methods for merging bracketed LDR input images into an HDR output image, however, are associated with several disadvantages. In particular, these methods require that the exact amount of light (irradiance) represented by each bracketed input image must be known. For this purpose, the camera response functions (also referred to as sensor response functions or characteristic functions in this application) of the utilized image capturing chain have to be known for each bracketed input image. This requires, amongst others, exact knowledge of the characteristics of the deployed optics and of the individual exposure adjustments, such as sensor gains, exposure times and apertures. Moreover, such conventional methods necessitate calculating the inverses of the camera response functions (either on the fly or pre-calculated and stored in lookup tables), which is a time-consuming and computationally expensive process, as very large linear equation systems have to be solved. For every utilized image capturing chain, effortful calibration processes have to be carried out.
Against this background, it is an object of the present invention to provide a method for efficiently creating accurate HDR images that involves less effort and can be flexibly used in conjunction with varying image capturing chains and in conjunction with conventional image sensors, in particular low-cost image sensors.
The object of the invention is achieved by a method for creating a high dynamic range (HDR) output image from a plurality of low dynamic range (LDR) input images with the features of claim 1.
According to the invention, the method comprises at least the following steps:
a) obtaining a bracketed series of at least two low dynamic range input images, wherein all of the input images represent the same scene and the input images differ in their exposure levels and the series of input images includes a reference image, whose exposure level serves as a reference exposure level, and at least one non-reference image,
b) for each input image, obtaining an intensity histogram that represents the intensity distribution of the respective input image,
c) for each input image, determining an intensity range of interest from the corresponding intensity histogram of the respective input image, wherein the intensity range of interest is limited by a lower intensity threshold and an upper intensity threshold,
d) for each input image, obtaining a mapping mask that indicates the pixel positions of all pixels of the respective input image whose intensities are within the respective image’s intensity range of interest,
e) for each non-reference image, determining a comparison image for the respective non-reference image from the series of input images, wherein the reference image is determined as a comparison image for at least one of the non-reference images,
f) for each non-reference image, determining common pixel positions of the respective non-reference image and its respective comparison image, wherein common pixel positions are pixel positions that are included both in the mapping mask of the respective non-reference image and in the mapping mask of the respective non-reference image’s comparison image,
g) for each non-reference image, calculating an intensity ratio between the intensities of the pixels of the respective non-reference image’s comparison image and the intensities of the pixels of the respective non-reference image, wherein only the pixels at the common pixel positions are considered for calculating the intensity ratio,
h) for each non-reference image, determining a mapping factor m between the respective non-reference image’s exposure level and the reference exposure level based on at least one intensity ratio calculated in step g),
i) for each non-reference image, mapping the intensities of all pixels of the respective non-reference image to the reference exposure level by applying the mapping factor m determined in step h) to all pixels of the respective non-reference image, thereby obtaining a mapped intensity for each pixel of the respective non-reference image,
j) merging the input images into a high dynamic range output image based on the intensities of the pixels of the reference image and the mapped intensities of the pixels of each non-reference image, wherein only the pixels whose pixel positions are included in the mapping mask of the respective input image are considered for the merging.
The steps of the method do not have to be executed in the specified order and the invention is not limited accordingly, i.e. the alphabetic order of the letters does not imply a specific sequence of steps a) to j). For example, as a matter of course, step e) could be executed before steps b) to d), or some of the method’s steps could be executed in parallel.
In the context of this application, the term “high dynamic range output image” can generally refer to any image with a dynamic range that is greater than the dynamic range of the low dynamic range input images. In particular, in certain embodiments, the low dynamic range input images can be conventional low dynamic range images, e.g. images with a dynamic range of 8 to 10 bits and/or standard dynamic range (SDR) images.
The input images and the output image can be part of an input video sequence and an output video sequence, respectively.
Each respective input image’s exposure level can be controlled by the exposure settings that are used for generating the input image. Such exposure settings may include, for example, one or more of the following: exposure time, aperture, ISO speed (sensor gain). As exposure time is a particularly important means of varying exposure, the terms “high exposure” (or “high exposure level”) and “long exposure” are used equivalently and the terms “low exposure” (or “low exposure level”) and “short exposure” are used equivalently, respectively, in the context of this application. However, a short exposure can generally also be achieved, for example, using a smaller aperture and/or a lower ISO speed.
In step a), a bracketed series of at least two low dynamic range (LDR) input images is obtained. The bracketed input images, i.e. the input images that belong to the bracketed series of LDR input images, represent the same scene, but have different exposure levels.
The series of input images can include, for example, three input images: one image with a long exposure (high exposure level), one image with a short exposure (medium exposure level) and one image with a very short exposure (low exposure level). In such an embodiment, the long exposure input image should capture dark areas of the scene satisfactorily, while bright areas of the scene may be overexposed. The short exposure input image should capture areas of medium brightness satisfactorily, while dark areas of the scene may be underexposed and/or bright areas of the scene may be overexposed. The very short exposure input image should capture highlights, i.e. bright areas of the scene satisfactorily, while dark areas may be underexposed.
In general, according to the invention, the exposure settings and exposure levels of the input images can be freely chosen, as long as there is a partial overlap of the intensities of the pixels of at least two different bracketed input images within the intensity range of interest. This constraint ensures that the intensity ratios in step g) and hence the mapping factors in step h) can be properly determined.
The bracketed series of LDR input images includes a reference image, whose exposure level serves as a reference exposure level, and at least one non-reference image. A non-reference image is any input image that belongs to the series of bracketed LDR input images and is not the reference image.
In step b), an intensity histogram is obtained for each input image, wherein the intensity histogram represents the intensity distribution of the respective input image. Each intensity histogram can have a plurality of bins. For example, each intensity histogram can have 32 bins, 64 bins, 128 bins or 256 bins. The intensity histogram for the respective input image can be obtained directly from the intensities of individual pixels of the respective input image. In other words, the intensity of each individual pixel can be counted for obtaining the intensity histogram. Alternatively or additionally, the respective image can be divided into a plurality of pixel blocks, wherein each pixel block includes a plurality of pixels, and the intensity histogram for the respective input image can be obtained from the mean intensities of the pixel blocks. This will be explained in detail later.
In step c), an intensity range of interest is determined for each input image from the corresponding intensity histogram of the respective input image. The intensity range of interest is limited by a lower intensity threshold and an upper intensity threshold.
In step d), a mapping mask is obtained for each input image, wherein the mapping mask indicates the pixel positions of all pixels of the respective input image whose intensities are within the respective image’s intensity range of interest. In other words, the mapping mask excludes the pixel positions of pixels whose intensities lie outside of the intensity range of interest, i.e. pixels whose intensities are smaller than the lower intensity threshold or greater than the upper intensity threshold of the intensity range of interest. Such excluded pixel positions may be, for example, the positions of underexposed and/or overexposed pixels and/or defective pixels (such as cold pixels and/or hot pixels).
In step e), a comparison image for each non-reference image is determined (or selected) from the series of input images, wherein the reference image is determined (or selected) as a comparison image for at least one of the non-reference images. In a particularly simple embodiment, for example, the reference image can be selected as the comparison image for every non-reference image. In other embodiments, the comparison image for each non-reference image can be determined depending on the exposure level of the respective non-reference image. This will be explained in detail later.
In step f), for each non-reference image, common pixel positions of the respective non-reference image and its respective comparison image are determined. Such common pixel positions are pixel positions that are included both in the mapping mask of the respective non-reference image and in the mapping mask of the respective non-reference image’s comparison image. The common pixel positions can be determined, for example, based on an intersection of the mapping mask of the respective non-reference image and the mapping mask of the corresponding comparison image.
In step g), for each non-reference image, an intensity ratio between the intensities of the pixels of the respective non-reference image’s comparison image and the intensities of the pixels of the respective non-reference image is calculated, wherein only the pixels at the common pixel positions are considered for calculating the intensity ratio. In other words, pixels whose pixel positions are not included in the common pixel positions determined in step f) are excluded from the calculation of the intensity ratio. This means that only those pixel positions that are included both in the mapping mask of the respective non-reference image and in the mapping mask of the respective non-reference image’s comparison image are considered for calculating the intensity ratio of the respective non-reference image.
In step h), for each non-reference image, a mapping factor m between the respective non-reference image’s exposure level and the reference exposure level is determined based on at least one intensity ratio calculated in step g).
If the respective non-reference image’s comparison image is the reference image, for example, the mapping factor m for this respective non-reference image can be equal to the intensity ratio determined for this respective non-reference image in step g). This means that in this case the mapping factor m for this respective non-reference image can be equal to the intensity ratio between the intensities of the pixels of the reference image and the intensities of the pixels of this respective non-reference image. Moreover, in a particularly simple embodiment, for example, if the reference image is used as the comparison image for each non-reference image, the mapping factor m for each non-reference image can be equal to the intensity ratio determined in step g) for the respective non-reference image. This means that in this case the mapping factor m for each non-reference image can be equal to the intensity ratio between the intensities of the pixels of the reference image and the intensities of the pixels of the respective non-reference image.
In other embodiments, if the respective non-reference image’s comparison image is not the reference image, the mapping factor m for this respective non-reference image can be determined based on a plurality of intensity ratios calculated in step g). For example, the mapping factor m for the respective non-reference image whose comparison image is not the reference image can be recursively determined by multiplying the intensity ratio that has been calculated for the respective non-reference image in step g) (i.e. the intensity ratio between the intensities of the pixels of the respective non-reference image’s comparison image and the intensities of the pixels of the respective non-reference image) by the mapping factor m that is determined for the respective non-reference image’s comparison image in step h). This recursive calculation ends when it reaches a non-reference image whose comparison image is the reference image.
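One possible reading of this recursive determination, as a short illustrative sketch (the identifiers `comparison_of`, `ratio_of` and the index-based bookkeeping are assumptions, not the patent’s wording):

```python
def mapping_factor(image_id, comparison_of, ratio_of, reference_id):
    """Recursively chain intensity ratios until the reference image is reached.

    comparison_of: maps each image index to the index of its comparison image;
    ratio_of: holds the step-g) intensity ratio per non-reference image.
    The mapping factor of the reference image itself is 1, so for an image
    whose comparison image is the reference image, m equals its own ratio.
    """
    if image_id == reference_id:
        return 1.0
    return ratio_of[image_id] * mapping_factor(
        comparison_of[image_id], comparison_of, ratio_of, reference_id)
```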
In step i), for each non-reference image, the intensities of all pixels of the respective non-reference image are mapped to the reference exposure level, thereby obtaining a mapped intensity for each pixel of the respective non-reference image. This is achieved by applying the mapping factor m determined in step h) to all pixels of the respective non-reference image. In an exemplary embodiment, in order to map the intensities of all pixels of the respective non-reference image to the reference exposure level, the intensity of each pixel can simply be multiplied by the mapping factor m that has been determined for the respective non-reference image in step h).
In step j), the input images are merged into a high dynamic range output image based on the intensities of the pixels of the reference image and the mapped intensities of the pixels of each non-reference image, wherein only the pixels whose pixel positions are included in the mapping mask of the respective input image are considered for the merging. In other words, pixels whose pixel positions are not included in the mapping mask of the respective input image obtained in step d), i.e. pixels whose intensities lie outside of the intensity range of interest, are excluded from the merging of the input images into a high dynamic range output image. As mentioned before, such excluded pixels may be, for example, underexposed and/or overexposed pixels and/or defective pixels (such as cold pixels and/or hot pixels).
The invention advantageously allows creating high quality HDR images from LDR input images in a particularly flexible, efficient and computationally inexpensive manner.
In particular, the invention improves the accuracy of the created HDR output image by considering only pixels that are included in the respective mapping mask of each input image for the creation of the HDR output image, i.e. by considering only pixels whose intensities are within the respective input image’s intensity range of interest. In this way, the adverse effects that certain pixels of the input images have on the output image quality, for example underexposed and overexposed pixels as well as defective and noisy pixels, can be eliminated, since these unwanted pixels can be effectively excluded from the process of creating the HDR output image. This applies i) for determining the mapping factors that are used for mapping the input images’ intensities to the reference exposure level and ii) for merging the input images into the HDR output image. As a result, the proposed method is robust against noise and bad pixels and is able to create high quality HDR images even under most difficult lighting conditions.
Furthermore, the invention significantly increases the flexibility of creating HDR images from LDR input images. This is achieved by means of dynamically calculating the mapping factor for each series of bracketed input images, which provides the advantage that the ratio of the exposure levels of the input images does not have to be constant and, even more beneficial, the exposure levels of the input images do not even have to be known in advance. Accordingly, the characteristics of the deployed optics, the exposure settings used for generating the input images and the sensor response function, which is usually non-linear, can remain unknown as well. This immensely simplifies the HDR image creation process and provides great flexibility. As a result, the proposed method can be applied in conjunction with all kinds of optics and all kinds of image sensors (cameras), even if the image sensor’s exposure settings and/or the sensor response function are configured dynamically and hence cannot be known in advance. In particular, the invention advantageously allows the use of low-cost image sensors.
The proposed method can flexibly be used with any number of bracketed input images that is equal to or greater than two, i.e. the bracketed series of input images may comprise two, three or more images.
Moreover, the proposed method advantageously works in a self-adjusting manner, i.e. no time-consuming calibration processes of the deployed image capturing chain are needed. In addition, the invention provides the advantage that there is no need for a time-consuming and computationally expensive calculation of the inverses of the camera response functions. As a result, HDR images can be created very efficiently, which results in low requirements with regard to computational resources and hence allows the use of low-cost hardware.
As an additional advantage of the invention, the proposed method can be executed in the Bayer domain, which allows for a particularly simple and hence inexpensive implementation, as well as in the RGB domain, which is computationally more expensive, but in return allows creating highest-quality HDR images. This will be explained in more detail later.
In the context of this application, the term “Bayer domain” is used in a generalized fashion. Despite the use of the term “Bayer domain”, the employed Color Filter Array (CFA) does not necessarily have to be a Bayer filter, but can generally be any type of CFA. Therefore, the term “CFA domain” (“Color Filter Array domain”) can be used equivalently to the term “Bayer domain” within the context of this application.
According to an advantageous embodiment of the invention, it is proposed that step j) comprises
j1) for each pixel position and each input image, determining a weighting factor w for the pixel of the respective input image at the respective pixel position, wherein
- w = 0 if the pixel position is not included in the mapping mask of the respective input image and
- 0 < w < 1 if the pixel position is included in the mapping mask of the respective input image and the pixel position is included in the mapping mask of at least one other input image and
- w = 1 if the pixel position is included in the mapping mask of the respective input image and the pixel position is not included in the mapping mask of any other input image and
- the sum of the weighting factors w of all input images for each pixel position is 1, and
j2) for each pixel of each input image, obtaining a weighted intensity of the respective pixel of the respective input image by applying the weighting factor w for the respective pixel of the respective input image
- to the intensity of the respective pixel of the reference image if the respective input image is the reference image and
- to the mapped intensity of the respective pixel of the respective input image if the respective input image is a non-reference image, and
j3) merging the input images into a high dynamic range output image by adding up, for each pixel position, the weighted intensities of the pixels of all input images at the respective pixel position to determine the intensity of the pixel of the output image at the respective pixel position.
In other words, according to this embodiment of the invention, if there are at least two input images whose pixels at the respective pixel position are within the intensity range of interest, these pixels are blended according to their weighting factors (0 < w < 1) in order to determine the HDR output image’s intensity at the respective pixel position. On the other hand, if there is only one input image whose pixel at the respective pixel position is within the intensity range of interest, that pixel’s intensity (or mapped intensity, respectively) is directly adopted (w = 1) for the HDR output image.
This embodiment of the invention provides the advantage that it allows merging the LDR input images into a high quality HDR output image in a relatively simple, efficient and hence computationally inexpensive manner. While underexposed, overexposed and defective pixels can be effectively excluded, the entirety of beneficial image information from all input images can still be utilized to improve the quality of the output image.
In advantageous embodiments of the invention, the weighting factor w can also be determined from the interval 0 ≤ w ≤ 1 (instead of the aforementioned interval 0 < w < 1) if the pixel position is included in the mapping mask of the respective input image and the pixel position is included in the mapping mask of at least one other input image.
In another advantageous embodiment of the invention, all steps of the method are executed in the Bayer domain (CFA domain).
In other words, according to this embodiment of the invention, no demosaicing is conducted before executing any of the steps of the proposed method. Instead, the input images are directly obtained from the Color Filter Array (CFA). Despite the term “Bayer domain”, the CFA does not necessarily have to be a Bayer filter, but can generally be any type of CFA. As explained before, the terms “Bayer domain” and “CFA domain” are used equivalently in this application. Without loss of generality, all CFA filtered pixels are addressed also as RGB pixels in the context of this application.
This embodiment of the invention provides the advantage that demosaicing, which is a computationally expensive process, does not have to be conducted for each input image, but only once per output image. For example, if three bracketed input images are used to create the output image, this allows reducing the required computational effort for demosaicing by approximately two thirds. As a result, this embodiment of the invention allows for a particularly simple and hence inexpensive implementation.
In another advantageous embodiment of the invention, for obtaining the intensity histogram in step b) and/or for obtaining the mapping mask in step d) and/or for calculating the intensity ratio in step g) and/or for determining the weighting factor in step j1), the intensity of each pixel of each input image is the intensity of that respective pixel itself in the Bayer domain, regardless of the color of the respective pixel.
In other advantageous embodiments of the invention, for obtaining the intensity histogram in step b) and/or for obtaining the mapping mask in step d) and/or for calculating the intensity ratio in step g) and/or for determining the weighting factor in step j1),
- the intensity of each pixel of each input image is determined only from the intensity of the green component in the Bayer domain, wherein the intensities of non-green pixels in the Bayer domain are determined based on the intensities of neighboring green pixels, or
- for each green pixel of each input image in the Bayer domain, the intensity of that respective pixel is the intensity of the green pixel itself in the Bayer domain, and for each non-green pixel of each input image in the Bayer domain, the intensity of that respective non-green pixel is determined as max(GI,N), wherein GI is an intensity determined based on the intensities of neighboring green pixels and N is the intensity of the respective non-green pixel itself in the Bayer domain and max is the maximum function.
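As an illustration of the max(GI,N) variant, the following sketch determines per-pixel intensities in the Bayer domain (illustrative only; the patent leaves open how GI is derived from neighboring green pixels, so the mean over the four direct green neighbors used here is an assumption):

```python
import numpy as np

def bayer_intensity(raw, green_mask):
    """Per-pixel intensity in the Bayer domain following the max(GI, N) rule.

    raw: 2D CFA mosaic; green_mask: boolean array marking green pixel sites.
    For non-green pixels, GI is approximated as the mean of the directly
    neighboring green pixels (in a Bayer pattern, all four direct
    neighbors of a non-green pixel are green).
    """
    h, w = raw.shape
    intensity = raw.astype(np.float64)  # copy; green pixels keep their own value
    for y in range(h):
        for x in range(w):
            if green_mask[y, x]:
                continue
            neighbors = [raw[ny, nx]
                         for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                         if 0 <= ny < h and 0 <= nx < w and green_mask[ny, nx]]
            gi = float(np.mean(neighbors)) if neighbors else 0.0
            intensity[y, x] = max(gi, float(raw[y, x]))
    return intensity
```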
Such embodiments of the invention provide particular advantages if the CFA is a Bayer filter, i.e. a CFA that consists of green, red and blue pixels. This is because it can be expected that in nearly all common cases, the intensity of the green component in the Bayer domain is a good measure of the scene’s brightness level. Therefore, in these cases, a high quality of the output images can be ensured if the intensity of the pixels of the input image is determined from the intensity of the green component. Moreover, the green channel typically includes the largest amount of light for typical light sources like sunlight and light bulbs. In addition, if the CFA is a Bayer filter, the CFA usually includes twice as many green pixels compared to red or blue pixels. As a result, the green channel usually has a better signal-to-noise ratio compared to the red channel and the blue channel.
For similar reasons as explained before, however, such embodiments can be beneficial not only if the CFA is a Bayer filter, but generally for all CFAs that contain green pixels, but no white pixels.
To generalize the embodiments explained above, in other advantageous embodiments of the invention, for obtaining the intensity histogram in step b) and/or for obtaining the mapping mask in step d) and/or for calculating the intensity ratio in step g) and/or for determining the weighting factor in step j1),
- the intensity of each pixel of each input image is determined only from the intensity of a selected color component in the Bayer domain, wherein the intensities of non-selected color pixels in the Bayer domain are determined based on the intensities of neighboring selected color pixels, or
- for each selected color pixel of each input image in the Bayer domain, the intensity of that respective pixel is the intensity of the selected color pixel itself in the Bayer domain, and for each non-selected color pixel of each input image in the Bayer domain, the intensity of that respective non-selected color pixel is determined as max(SI,N), wherein SI is an intensity determined based on the intensities of neighboring selected color pixels and N is the intensity of the respective non-selected color pixel itself in the Bayer domain and max is the maximum function.
As explained before, the selected color component in the Bayer domain can be, for example, the green component.
In such embodiments that generally rely on a selected color component (for example the green component) in the Bayer domain for determining the intensity of the pixels of the input image, as an additional optional improvement of these embodiments, a corner case detection can be conducted for each input image, wherein a corner case is detected if the amount of the selected color (for example green) light of at least one input image is particularly low, e.g. lower than a specified threshold. The amount of selected color (for example green) light can be quantified, for example, by the mean intensity of the selected color (for example green) component of the pixels of the respective input image in the Bayer domain. In such embodiments, if a corner case is detected, the intensity of each pixel of each input image can be determined from the intensity of one or more components other than the selected color (for example green) component in the Bayer domain.
In other advantageous embodiments of the invention, the proposed method may additionally include, after obtaining the bracketed series of input images, for one of the input images or for a number of input images or for each input image, determining a dominant color component of the respective input image in the Bayer domain. For determining the dominant color component of the input image, for example, for each color component in the Bayer domain, a mean intensity (e.g., arithmetic mean) can be determined from the intensities of all pixels of the respective color, and the dominant color component of the input image can be determined based on a comparison of the mean intensities of all color components, wherein the color component with the largest mean intensity is determined as the dominant color component of the input image.
In such embodiments of the invention, the dominant color component may serve as the selected color component referred to above.
In other words, in such embodiments of the invention, for obtaining the intensity histogram in step b) and/or for obtaining the mapping mask in step d) and/or for calculating the intensity ratio in step g) and/or for determining the weighting factor in step j1),
- the intensity of each pixel of each input image is determined only from the intensity of the dominant color component in the Bayer domain, wherein the intensities of non-dominant color pixels in the Bayer domain are determined based on the intensities of neighboring dominant color pixels, or
- for each dominant color pixel of each input image in the Bayer domain, the intensity of that respective pixel is the intensity of the dominant color pixel itself in the Bayer domain, and for each non-dominant color pixel of each input image in the Bayer domain, the intensity of that respective non-dominant color pixel is determined as max(DI,N), wherein DI is an intensity determined based on the intensities of neighboring dominant color pixels and N is the intensity of the respective non-dominant color pixel itself in the Bayer domain and max is the maximum function.
In other advantageous embodiments of the invention, if the CFA includes a white component, i.e. if the CFA includes white (or transparent) filter elements, the selected color component in the Bayer domain referred to above can be the white component.
In other words, in such embodiments of the invention, for obtaining the intensity histogram in step b) and/or for obtaining the mapping mask in step d) and/or for calculating the intensity ratio in step g) and/or for determining the weighting factor in step j1), the intensity of each pixel of each input image is generally determined only from the intensity of the white component in the Bayer domain, wherein the intensities of non-white pixels in the Bayer domain are determined based on the intensities of neighboring white pixels.
CFAs that include a white component provide significant advantages for capturing dark scenes, i.e. scenes that only include a small amount of light, as the light sensitivity of the white pixels typically is significantly higher (approximately 2.5 times higher) than the light sensitivity of RGB pixels and hence underexposed pixels can be avoided. For the same reasons, using the white component as the selected component referred to above, i.e. as the selected color component in the Bayer domain for determining the intensity of the pixels of the input image, can be particularly advantageous for dark scenes.
In bright scenes, however, using the white component for determining the intensity of the pixels of the input image can be disadvantageous. The reason for this is that due to their higher light sensitivity, white pixels become overexposed much faster than RGB pixels. Therefore, in bright scenes, a significant number of white pixels can be overexposed. As a result, in bright scenes, there may be situations where only a few common pixel positions can be determined in step f) of the proposed method, as the intensity range of interest determined in step c) of the proposed method will typically be configured to exclude overexposed pixels of the respective input image.
Hence, in other advantageous embodiments of the invention, the selected color component in the Bayer domain for determining the intensity of the pixels of the input image can be determined based on the brightness level of the captured scene, wherein the white component is determined as the selected color component for dark scenes, e.g. if the brightness level does not exceed a specified threshold, and a non-white color component (e.g. the green component) is determined as the selected color component for bright scenes, e.g. if the brightness level exceeds a specified threshold. Alternatively, for bright scenes, the corner case detection described below may be applied for finally determining the selected color component.
In such embodiments that generally rely on the white component in the Bayer domain for determining the intensity of the pixels of the input image, as an additional optional improvement of these embodiments, a corner case detection can be conducted for each input image, wherein a corner case is detected if, based on using the white component for determining the intensity of the pixels of the input image, the number or fraction of common pixel positions determined in step f) is too small, e.g. is lower than a specified threshold. In such embodiments, if a corner case is detected, the intensity of each pixel of each input image can be determined from the intensity of one or more components other than the white component (e.g. the green component) in the Bayer domain.
As an alternative to executing all steps of the method in the Bayer domain as explained above, some or all steps of the method can be executed in the RGB domain.
In particular, in other advantageous embodiments of the invention, all steps of the method are executed in the RGB domain. This means that demosaicing is executed before executing any of the steps of the method according to the invention. Consequently, in such embodiments, the input images are RGB images. In such embodiments, the method can be applied to each component of the RGB input images separately.
Executing all steps of the method in the RGB domain is computationally more expensive, but provides the advantage that it allows creating highest quality HDR images.
In another advantageous embodiment of the invention, step b) comprises b1) for each input image, dividing the respective image into a plurality of pixel blocks, wherein each pixel block includes a plurality of pixels, and determining, for each pixel block, a mean intensity for the respective pixel block from the intensities of the pixels included in the respective pixel block, and b2) for each input image, obtaining an intensity histogram that represents the intensity distribution of the respective input image based on the mean intensities of the pixel blocks of the respective input image determined in step b1).
In another advantageous embodiment of the invention, in steps d), f) and g), the pixel blocks determined in step b1) are used instead of individual pixels, wherein the mean intensity of the respective pixel block is used as the intensity of all pixels included in the respective pixel block.
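A minimal sketch of steps b1) and b2) (illustrative; the block size, the bin count and the value range are placeholder assumptions):

```python
import numpy as np

def block_histogram(image, block_size=16, bins=64, value_range=(0, 4096)):
    """Step b1: mean intensity per pixel block; step b2: histogram of the means.

    Image dimensions are assumed to be divisible by block_size.
    """
    h, w = image.shape
    means = image.reshape(h // block_size, block_size,
                          w // block_size, block_size).mean(axis=(1, 3))
    hist, bin_edges = np.histogram(means, bins=bins, range=value_range)
    return means, hist, bin_edges
```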
Such embodiments of the invention provide the advantage that by using pixel blocks instead of single pixels, the computational effort associated with executing the proposed method can be significantly reduced, which allows the use of less powerful hardware. On the other hand, the accuracy that can be achieved by using pixel blocks is completely adequate for the purposes of the proposed method.
According to another advantageous embodiment of the invention, step e) comprises determining the reference image as the comparison image for each non-reference image.
This embodiment of the invention provides the advantage that it allows a particularly simple implementation of the proposed method. The reason for this is as follows: As explained before, if the respective non-reference image’s comparison image is the reference image, the mapping factor m for this respective non-reference image can be equal to the intensity ratio determined for this respective non-reference image in step g). This means that the mapping factor m can be equal to the intensity ratio between the intensities of the pixels of the reference image and the intensities of the pixels of this respective non-reference image. If the reference image is used as the comparison image for each non-reference image, the mapping factor m for each non-reference image can be equal to the intensity ratio determined in step g) for the respective non-reference image. This means that the mapping factor m for each non-reference image can be equal to the intensity ratio between the intensities of the pixels of the reference image and the intensities of the pixels of the respective non-reference image.
In another advantageous embodiment of the invention, step e) comprises determining the comparison image for the respective non-reference image depending on the exposure level of the respective non-reference image, wherein
- if the exposure level of the respective non-reference image is lower than the reference exposure level, the input image whose exposure level is next higher than the exposure level of the respective non-reference image is determined as the comparison image for the respective non-reference image, and/or
- if the exposure level of the respective non-reference image is higher than the reference exposure level, the input image whose exposure level is next lower than the exposure level of the respective non-reference image is determined as the comparison image for the respective non-reference image.
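A short illustrative sketch of this selection rule (not the patent’s wording; `levels` is a hypothetical list of exposure levels indexed by input image, `ref` the index of the reference image):

```python
def comparison_image(levels, i, ref):
    """Pick the comparison image for non-reference image i by exposure level:
    the image with the next higher level if image i lies below the reference
    exposure level, the image with the next lower level if it lies above."""
    if levels[i] < levels[ref]:
        candidates = [j for j in range(len(levels)) if levels[j] > levels[i]]
        return min(candidates, key=lambda j: levels[j])
    else:
        candidates = [j for j in range(len(levels)) if levels[j] < levels[i]]
        return max(candidates, key=lambda j: levels[j])
```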
This embodiment of the invention advantageously provides an increased flexibility in determining the comparison image for the respective non-reference image. As explained before, if the respective non-reference image’s comparison image is not the reference image, the mapping factor m for this respective non-reference image can be determined based on a plurality of intensity ratios calculated in step g). For example, the mapping factor m for the respective non-reference image whose comparison image is not the reference image can be recursively determined by multiplying the intensity ratio that has been calculated for the respective non-reference image in step g) by the mapping factor m that is determined for the respective non-reference image’s comparison image in step h). This recursive calculation ends when it reaches a non-reference image whose comparison image is the reference image.
In another advantageous embodiment of the invention, in step c), the lower intensity threshold and the upper intensity threshold are determined based on an empirical cumulative intensity distribution of the respective input image, wherein the empirical cumulative intensity distribution can be calculated from the corresponding intensity histogram of the respective input image, and wherein
- the lower intensity threshold is determined as the intensity where the empirical cumulative intensity distribution of the respective input image exceeds a first value p1 and
- the upper intensity threshold is determined as the intensity where the empirical cumulative intensity distribution of the respective input image exceeds a second value p2 that is greater than the first value p1.
In other words, in this embodiment of the invention, the lower intensity threshold is determined as the empirical p1-quantile of the respective input image’s intensity distribution as it is represented by the respective input image’s intensity histogram. The upper intensity threshold is determined as the empirical p2-quantile of the respective input image’s intensity distribution as it is represented by the respective input image’s intensity histogram, wherein 0 < p1 < 1 and 0 < p2 < 1 and p1 < p2.
In an exemplary configuration, p1 and p2 can be chosen as p1 = 0.05 % and p2 = 99.95 %.
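A minimal sketch of this quantile-based determination of the thresholds (illustrative; the histogram format follows the earlier sketch, and p1/p2 are the exemplary percentages expressed as fractions):

```python
import numpy as np

def iri_thresholds(hist, bin_edges, p1=0.0005, p2=0.9995):
    """Lower/upper IRI thresholds as the empirical p1-/p2-quantiles of the
    intensity distribution represented by the histogram."""
    cdf = np.cumsum(hist) / hist.sum()
    lower = bin_edges[np.searchsorted(cdf, p1)]      # first bin where the CDF exceeds p1
    upper = bin_edges[np.searchsorted(cdf, p2) + 1]  # upper edge of the p2 bin
    return lower, upper
```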
This embodiment of the invention provides the advantage that the intensity range of interest can be determined in a very simple and efficient manner, which results in low computational complexity and supports the applicability of low-cost hardware.
In another advantageous embodiment of the invention, the bracketed series of input images comprises three input images, namely a first input image with a high exposure level (e.g. long exposure) and a second input image with a medium exposure level (e.g. short exposure) and a third input image with a low exposure level (e.g. very short exposure).
This embodiment provides the advantage that it allows capturing dark areas as well as areas of medium brightness and bright areas of the scene reliably and efficiently. As explained before, in such an embodiment, the long exposure input image (high exposure level) should capture dark areas of the scene satisfactorily, while bright areas of the scene may be overexposed. The short exposure input image (medium exposure level) should capture areas of medium brightness satisfactorily, while dark areas of the scene may be underexposed and/or bright areas of the scene may be overexposed. The very short exposure input image (low exposure level) should capture highlights, i.e. bright areas of the scene satisfactorily, while dark areas may be underexposed. As a result, this embodiment allows creating high quality HDR output images based on a relatively small number of input images.
In another advantageous embodiment of the invention, for each input image or for a subset of the input images,
- the lower intensity threshold and the upper intensity threshold used in step c) are configured so that the intensity range of interest excludes underexposed pixels and/or overexposed pixels of the respective input image, and/or
- the lower intensity threshold and the upper intensity threshold used in step c) are configured so that the intensity range of interest excludes defective pixels of the respective input image, in particular hot pixels and/or cold pixels of the respective input image.
In other words, the intensity range of interest can be configured to exclude underexposed and/or overexposed pixels of the respective input image. Moreover, the intensity range of interest can be configured to exclude defective pixels, in particular cold pixels (pixels with false low intensity values at the lower end of the captured intensity range) and/or hot pixels (pixels with false high intensity values at the upper end of the captured intensity range).
As explained before, the proposed method considers only pixels whose intensities are within the respective input image’s intensity range of interest, which applies i) to determining the mapping factors that are used for mapping the input images’ intensities to the reference exposure level and ii) to merging the input images into the HDR output image. Therefore, this embodiment provides the advantage that the adverse effects that underexposed, overexposed pixels and/or defective pixels of the input images have on the output image quality can be eliminated, since these unwanted pixels can be effectively excluded from the process of creating the HDR output image.
In other advantageous embodiments of the invention, the exposure level of each input image, in particular the exposure time and/or the sensor gain of each input image, is automatically configured by an Auto Exposure Module.
In other words, in such embodiments, the exposure settings (e.g. the exposure time and/or the sensor gain and/or the aperture) of each input image are automatically configured by an Auto Exposure Module.
In particular, the Auto Exposure Module can be adapted to ensure that, by automatically configuring the exposure levels of the input images accordingly, the bracketed input images in total appropriately capture the full dynamic range of the scene.
Moreover, the Auto Exposure Module can be adapted to ensure that, by automatically configuring the exposure levels of the input images accordingly, there is an at least partial overlap between the intensity ranges of interest of the bracketed input images. More specifically, the Auto Exposure Module can be adapted to ensure that, by automatically configuring the exposure settings of the input images accordingly, for each non-reference image, there is an at least partial overlap between the intensity ranges of interest of the respective non-reference image and its respective comparison image, wherein the latter can be, for example, the reference image.
This provides the advantage that by automatically configuring the exposure settings (and thereby controlling the exposure levels) of the input images as explained before, the availability of common pixel positions and hence a reliable calculation of the intensity ratio in step g) can be ensured. The reason for this is the following: If the intensity of a pixel lies within an overlapping area of the intensity range of interest of a first input image and the intensity range of interest of a second input image, this implies that the pixel position of that pixel is included both in the mapping mask of the first input image and in the mapping mask of the second input image, which means that the pixel position is a common pixel position for the two input images.
In another advantageous embodiment of the invention, the series of input images comprises an input image with the lowest exposure level and an input image with the highest exposure level, wherein
- the lower intensity threshold of the input image with the lowest exposure level is configured so that underexposed pixels are excluded, and/or
- the upper intensity threshold of the input image with the highest exposure level is configured so that overexposed pixels are excluded, and/or
- the lower intensity threshold and the upper intensity threshold of every other input image are configured so that underexposed pixels and overexposed pixels are excluded.
This embodiment provides the advantage that underexposed and/or overexposed pixels are excluded from the respective intensity range of interest and hence these pixels can be effectively excluded from the process of creating the HDR output image. Moreover, this embodiment advantageously provides a simple solution for configuring the intensity ranges of interest in a way that i) the intensity ranges of interest of the bracketed input images collectively cover the full dynamic range of the scene and ii) there is an at least partial overlap between the intensity ranges of interest of the input images, which is important for the reasons mentioned above. As a result, a high quality of the HDR output image can be achieved.
In another advantageous embodiment of the invention, in step j1), the weighting factor w for each pixel is determined as a function of the intensity of the respective pixel, wherein the function is defined in a way that in an overlapping area of the intensity range of interest of a first input image and the intensity range of interest of a second input image, the weighting factor w for the intensity of the pixels of the first input image steadily increases, for example linearly, from 0 to 1 and the weighting factor w for the intensity of the pixels of the second input image correspondingly decreases from 1 to 0 with increasing intensity.
If the intensity of a pixel lies within an overlapping area of the intensity range of interest of a first input image and the intensity range of interest of a second input image, this implies that the pixel position of that pixel is included both in the mapping mask of the first input image and in the mapping mask of the second input image. Therefore, the embodiment described above provides the advantage that it provides a simple and efficient solution for blending the pixels of two input images if both pixels’ intensities are in the intensity range of interest of the respective image.
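As a hedged illustration, such a linear cross-fade could look as follows in Python; the direction of the ramp follows the description above, while the overlap boundaries and all names are assumptions of the sketch:

```python
import numpy as np

def overlap_weights(intensity, ov_low, ov_high):
    """Linear blend inside the overlap [ov_low, ov_high] of two intensity
    ranges of interest: the first image's weight rises from 0 to 1 with
    increasing intensity, the second image's weight falls from 1 to 0."""
    t = np.clip((intensity - ov_low) / (ov_high - ov_low), 0.0, 1.0)
    return t, 1.0 - t  # (w_first, w_second); they sum to 1 in the overlap
```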
In another advantageous embodiment of the invention, after the weighting factors w have been initially determined for all pixels of an input image, a median filter with a specified median filter kernel size is applied to each weighting factor w for each pixel of the respective input image.
In other words, this means that in this embodiment, for each pixel, a median filter is applied to the initially determined weighting factor w of the respective pixel, wherein the median is calculated based on the initially determined weighting factor w of the respective pixel and the initially determined weighting factors of neighboring pixels of the same input image within the kernel size. For example, a kernel size of 3x3 can be configured for this median filter, which means that direct neighbors of each pixel are considered for the median calculation.
Such an embodiment provides the advantage that by applying the median filter, impulse noise can be effectively eliminated and hence excluded from the process of creating the HDR output image.
In another advantageous embodiment of the invention, after the median filter has been applied to each weighting factor w for each pixel of an input image, a mean filter with a specified mean filter kernel size is applied to each weighting factor w for each pixel of the respective input image.
In other words, this means that in this embodiment, for each pixel, after having applied the median filter as described before, a mean filter is applied to the weighting factor w of the respective pixel. The mean is calculated based on the weighting factor w (resulting from applying the median filter as described before) of the respective pixel and the weighting factors (resulting from applying the median filter as described before) of neighboring pixels of the same input image within the kernel size. For example, a kernel size of 3x3 can be configured for this mean filter, which means that direct neighbors of each pixel are considered for the mean calculation.
Such an embodiment provides the advantage that it allows smooth blending in the spatial domain. The mean filter takes account of spatial information, which advantageously prevents artefacts caused by strong edges.
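For illustration only, both filtering stages can be expressed with standard image-filtering routines; the 3x3 kernel sizes follow the example above, while the use of scipy.ndimage is an assumption of this sketch:

```python
import numpy as np
from scipy import ndimage

w = np.random.rand(64, 64)                       # initially determined weights of one image
w_med = ndimage.median_filter(w, size=3)         # 3x3 median filter: removes impulse noise
w_final = ndimage.uniform_filter(w_med, size=3)  # 3x3 mean filter: smooth spatial blending
```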
In another advantageous embodiment of the invention, the input images are part of an input video sequence and the output image is part of an output video sequence.
Such an embodiment of the invention allows utilizing the proposed method’s advantages that were described above for creating high dynamic range output image video sequences, i.e. it allows creating high dynamic range output image video sequences in a particularly flexible, efficient and computationally inexpensive manner.
The object of the invention is further achieved by a computer program having program code means adapted to perform a method as described above when the computer program is executed on a computer.
The object of the invention is further achieved by an electronic device that is adapted to perform a method as described above.
The electronic device can be, for example, a stand-alone integrated circuit (IC) or a part thereof. The electronic device can also be a system on a chip (SoC) or a part thereof. The electronic device can also be a part of an image processing pipeline and/or an image processing chain. The electronic device can also be a camera or an image signal processor (ISP) or a part of a camera or an ISP. The electronic device can also be a system that comprises a camera and/or an ISP. The electronic device can also be a part of such a system.
In the following, the invention will be explained in more detail using the exemplary embodiments schematically shown in the attached drawings. The drawings show the following:
Figure 1 - a schematic representation of an embodiment of the method for creating a high dynamic range output image according to the invention;
Figure 2 - a schematic representation of an exemplary intensity histogram and a corresponding intensity range of interest limited by a lower intensity threshold and an upper intensity threshold according to the invention;
Figure 3 - a schematic representation of exemplary mapping masks and common pixel positions according to the invention for three input images;
Figure 4 - a schematic representation of an exemplary image processing system comprising an electronic device according to the invention.
Figure 1 shows a schematic representation of an exemplary method for creating a high dynamic range output image from a plurality of low dynamic range input images according to the invention. In this exemplary embodiment, the input images are part of an input video sequence and the output image is part of an output video sequence.
In the embodiment shown in Figure 1, all steps of the method are executed in the Bayer domain, i.e. no demosaicing is conducted before executing any of the steps of the proposed method. Instead, the input images are directly obtained from the Color Filter Array (CFA), which is a Bayer filter in this example, but can generally be any other type of CFA in other embodiments of the invention. In other embodiments of the invention, some or all steps of the method can be executed in the RGB domain instead of the Bayer domain.
In step 101, which corresponds to step a) explained above, a bracketed series of low dynamic range (LDR) input images is obtained. All input images represent the same scene, but differ in their exposure levels. In this exemplary embodiment, the exposure settings (exposure time and sensor gain) of each input image, and thereby the exposure level of each input image, are automatically configured by an Auto Exposure Module.
In this exemplary embodiment, the bracketed series of input images comprises three input images, namely a first input image with a high exposure level (long exposure) and a second input image with a medium exposure level (short exposure) and a third input image with a low exposure level (very short exposure). The series of input images includes a reference image, which is the second input image (short exposure, i.e. medium exposure level) in this exemplary embodiment, and two non-reference images, which are the first input image (long exposure, i.e. high exposure level) and the third input image (very short exposure, i.e. low exposure level) in this exemplary embodiment. The exposure level of the reference image (second input image in this exemplary embodiment), i.e. the medium exposure level, serves as a reference exposure level.
In steps 102 and 103 of the embodiment shown in Figure 1, which collectively correspond to step b) of the method according to the invention explained above, for each of the three input images, an intensity histogram is obtained that represents the intensity distribution of the respective input image. In this exemplary embodiment, the intensity histogram has 64 bins. For obtaining the intensity histogram in step b) (as well as for obtaining the mapping mask in step d) and for calculating the intensity ratio in step g) and for determining the weighting factor in step j1), which will be explained in more detail later), the intensities of the pixels of each of the three input images are determined as follows: for each green pixel in the Bayer domain, the intensity of that respective pixel is the intensity of the green pixel itself in the Bayer domain, and for each non-green pixel in the Bayer domain, the intensity of that respective non-green pixel is determined as max(GI,N), wherein GI is an intensity determined based on the intensities of neighboring green pixels and N is the intensity of the respective non-green pixel itself in the Bayer domain and max is the maximum function. In this exemplary embodiment, for each non-green pixel at pixel position (x,y), GI is calculated based on a convolution with the kernel k as follows:
GI(x,y) = (N * k)(x,y), wherein

        [ 0  1  0 ]
k = 1/4 [ 1  0  1 ]
        [ 0  1  0 ]
This basically means that in this embodiment, GI is determined as the arithmetic mean of the intensities of green pixels that are direct neighbors of the respective non-green pixel at pixel position (x,y).
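As a non-authoritative sketch, the GI computation and the max(GI,N) rule could be implemented as follows; the boundary handling and the green_mask argument are assumptions of the example:

```python
import numpy as np
from scipy.signal import convolve2d

K = 0.25 * np.array([[0, 1, 0],
                     [1, 0, 1],
                     [0, 1, 0]])

def bayer_intensity(raw, green_mask):
    """Per-pixel intensity in the Bayer domain: green pixels keep their own
    value; each non-green pixel gets max(GI, N), where GI is the mean of its
    four direct neighbours (all green in a Bayer pattern)."""
    gi = convolve2d(raw, K, mode="same", boundary="symm")
    return np.where(green_mask, raw, np.maximum(gi, raw))
```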
Moreover, in step 102 of the embodiment shown in Figure 1, which corresponds to step b1) explained above, for each of the three input images, the respective image is divided into a plurality of pixel blocks, wherein each pixel block includes a plurality of pixels. Furthermore, in step 102, for each pixel block, a mean intensity for the respective pixel block is determined from the intensities of the pixels included in the respective pixel block, wherein the intensities of green pixels and non-green pixels in the Bayer domain are determined as explained before. Afterwards, in step 103, for each of the three input images, an intensity histogram is obtained that represents the intensity distribution of the respective input image based on the mean intensities of the pixel blocks of the respective input image determined in step 102.
In other embodiments, the intensity histogram for each input image can be obtained directly from the intensities of individual pixels of the respective input image, i.e. instead of using pixel blocks, the intensity of each individual pixel can be counted for obtaining the intensity histogram in such embodiments.
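A minimal sketch of the block-based variant of steps 102 and 103 might look like this; the 16x16 block size and the divisibility of the image dimensions are assumptions made for brevity:

```python
import numpy as np

def block_means(img, bs=16):
    """Mean intensity per bs x bs pixel block (dimensions assumed divisible by bs)."""
    h, w = img.shape
    return img.reshape(h // bs, bs, w // bs, bs).mean(axis=(1, 3))

img = np.random.randint(0, 4096, size=(256, 256)).astype(np.float64)
means = block_means(img)                                     # step 102
hist, edges = np.histogram(means, bins=64, range=(0, 4096))  # step 103
```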
In step 104 of the embodiment shown in Figure 1, which corresponds to step c) explained above, for each input image, an intensity range of interest is determined from the corresponding intensity histogram of the respective input image. The intensity range of interest is limited by a lower intensity threshold and an upper intensity threshold.
In this exemplary embodiment, the series of input images comprises an input image with the highest exposure level, namely the first input image (long exposure), and an input image with the lowest exposure level, namely the third input image (very short exposure), and the intensity thresholds used in step 104 are configured as follows:
- the upper intensity threshold of the first input image (long exposure, highest exposure level) is configured so that overexposed pixels are excluded and
- the lower intensity threshold of the third input image (very short exposure, lowest exposure level) is configured so that underexposed pixels are excluded and
- the lower intensity threshold and the upper intensity threshold of the second input image (short exposure, medium exposure level) are configured so that underexposed pixels and overexposed pixels are excluded.
Moreover, in this exemplary embodiment, for each of the three input images, the lower intensity threshold and the upper intensity threshold used in step 104 are configured so that the intensity range of interest excludes defective pixels of the respective input image, in particular hot pixels and/or cold pixels of the respective input image.
Figure 2 schematically shows an exemplary intensity histogram 5 that represents the intensity distribution of the second input image (short exposure, medium exposure level) of this exemplary embodiment. The intensity histogram 5 has 64 bins that correspond to intensity intervals. While the intensity values are displayed on the horizontal axis 51, the numbers of pixels that fall into each of the 64 intervals are displayed on the vertical axis 52. As explained above, pixel blocks are used for obtaining the intensity histogram in this exemplary embodiment. Therefore, in Figure 2, numbers of blocks could be displayed on the vertical axis
52 instead of numbers of pixels. However, as the number of pixels per pixel block is known, the number of blocks can easily be converted to the number of pixels and vice versa, which means that both ways of representing the intensity histogram 5 are equivalent to each other. Therefore, the number of pixels is shown in Figure 2 for illustrative purposes.
Furthermore, Figure 2 depicts an intensity range of interest IRI that has been determined from the second input image’s intensity histogram 5, wherein the intensity range of interest IRI is limited by a lower intensity threshold th_low and an upper intensity threshold th_up. It should be noted that Figure 2 illustrates the histogram 5, the intensity range of interest IRI and the intensity thresholds th_low, th_up only schematically. In particular, in real implementations of the method according to the invention, the intensity thresholds th_low, th_up can be configured so that the proportion of intensity values being excluded from the input image’s intensity range of interest is significantly smaller than it is shown in Figure 2 for illustrative purposes.
Referring to Figure 1 again, in step 105 of this exemplary embodiment corresponding to step d) explained above, for each of the three input images, a mapping mask is obtained that indicates the pixel positions of all pixels of the respective input image whose intensities are within the respective image’s intensity range of interest as it has been determined in step 104. Each mapping mask excludes the pixel positions of pixels whose intensities lie outside of the corresponding intensity range of interest. In this exemplary embodiment, the excluded pixel positions particularly comprise the positions of underexposed pixels, overexposed pixels and defective pixels (such as cold pixels and hot pixels).
In this exemplary embodiment, for obtaining the mapping masks in step 105, the intensities of green pixels and non-green pixels in the Bayer domain are determined as explained before with reference to steps 102 and 103 (obtaining the intensity histogram), i.e. for each green pixel in the Bayer domain, the intensity of that respective pixel is the intensity of the green pixel itself in the Bayer domain, and for each non-green pixel in the Bayer domain, the intensity of that respective non-green pixel is determined as max(GI,N). For additional details, reference is made to the above explanation of steps 102 and 103 of the embodiment shown in Figure 1 .
Moreover, in this exemplary embodiment, for obtaining the mapping masks in step 105, the pixel blocks determined in step 102 are used instead of individual pixels, wherein the mean
intensity of the respective pixel block is used as the intensity of all pixels included in the respective pixel block.
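Expressed as a sketch under the same assumptions as above (block means already computed, inclusive thresholds — the inclusiveness being an assumption of the example), obtaining a mapping mask reduces to a simple comparison:

```python
import numpy as np

def mapping_mask(block_means, th_low, th_up):
    """Boolean mask over block positions whose mean intensity lies inside the
    intensity range of interest [th_low, th_up]; blocks outside are excluded."""
    return (block_means >= th_low) & (block_means <= th_up)
```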
In step 106 of the embodiment shown in Figure 1, which corresponds to step e) explained above, a comparison image for each non-reference image is determined from the series of input images. In this exemplary embodiment, the reference image is determined as the comparison image for every non-reference image. In other words, the second input image (medium exposure level, short exposure), which is the reference image, is determined as the comparison image for both non-reference images, i.e. for the first input image (high exposure level, long exposure) as well as for the third input image (low exposure level, very short exposure).
In step 107 of the embodiment shown in Figure 1, which corresponds to step f) explained above, for each non-reference image, common pixel positions of the respective non-reference image and its respective comparison image are determined. Such common pixel positions are pixel positions that are included both in the mapping mask of the respective non-reference image and in the mapping mask of the respective non-reference image’s comparison image, wherein the mapping masks have been obtained for each of the three input images in step 105. In this exemplary embodiment, as explained before, the first input image (high exposure level, long exposure) and the third input image (low exposure level, very short exposure) are the non-reference images and the second input image (medium exposure level, short exposure) is the comparison image for both non-reference images. This means that in step 107,
- common pixel positions of the first input image (high exposure level, long exposure) and the second input image (medium exposure level, short exposure) are determined, wherein common pixel positions are pixel positions that are included both in the mapping mask of the first input image and in the mapping mask of the second input image, and
- common pixel positions of the third input image (low exposure level, very short exposure) and the second input image (medium exposure level, short exposure) are determined, wherein common pixel positions are pixel positions that are included both in the mapping mask of the third input image and in the mapping mask of the second input image.
The common pixel positions are determined based on an intersection of the mapping mask of the respective non-reference image and the mapping mask of the corresponding comparison
image. In this exemplary embodiment, for determining the common pixel positions in step 107, the pixel blocks determined in step 102 are used instead of individual pixels, wherein the mean intensity of the respective pixel block is used as the intensity of all pixels included in the respective pixel block.
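Continuing the sketch, with boolean mapping masks the common positions of step 107 are simply the element-wise intersection; the variable names mirror the reference signs of Figure 3 and the random stand-in masks exist only to keep the example runnable:

```python
import numpy as np

# Stand-ins for the boolean mapping masks of the three input images:
rng = np.random.default_rng(0)
mask_1, mask_2, mask_3 = (rng.random((16, 16)) > 0.5 for _ in range(3))

common_16 = mask_1 & mask_2   # first (long exposure) with second (reference) image
common_17 = mask_3 & mask_2   # third (very short exposure) with second (reference) image
```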
Figure 3 schematically shows an example of three mapping masks 11, 12, 13 for three exemplary input images 1, 2, 3 and corresponding common pixel positions 16, 17. It schematically illustrates the first input image 1 (long exposure, i.e. high exposure level), which is a non-reference image, the second input image 2 (short exposure, i.e. medium exposure level), which is the reference image, and the third input image 3 (very short exposure, i.e. low exposure level), which is also a non-reference image. As explained before, in this exemplary embodiment, the reference image 2 is the comparison image for both non-reference images 1, 3. It can be seen from Figure 3 that each input image comprises a plurality of pixel blocks 15, wherein each pixel block comprises a plurality of pixels.
In the schematic illustration of Figure 3, the three mapping masks 11, 12, 13 are marked with different hatchings and each mapping mask 11, 12, 13 indicates the pixel block positions (and hence the pixel positions) of all pixel blocks (and hence pixels) of the respective input image whose intensities are within the respective image’s intensity range of interest IRI. In this example, the first mapping mask 11 belongs to the first input image 1 (i.e. the first mapping mask 11 indicates the pixel positions of all pixels of the first input image whose intensities are within the first input image’s intensity range of interest IRI). Analogously, the second mapping mask 12 belongs to the second input image 2 and the third mapping mask 13 belongs to the third input image 3. It can be seen from Figure 3 that the three input images’ mapping masks cover different parts of the scene: The first mapping mask 11 of the first input image 1 covers dark areas, the second mapping mask 12 of the second input image 2 covers areas of medium brightness, and the third mapping mask 13 of the third input image 3 covers bright areas of the scene.
Figure 3 also shows common pixel positions 16 of the first input image 1 (non-reference image) and the second input image 2, which is the comparison image of the first input image 1. It can be seen in Figure 3 that the common pixel positions 16 have been determined based on an intersection of the mapping mask 11 of the non-reference image, which is the first input image 1 in this example, and the mapping mask 12 of the corresponding comparison image, which is
the second input image 2 in this example. It can also be seen in Figure 3 that the pixel blocks 15 and their respective pixel block positions have been used instead of individual pixels for determining the common pixel positions 16, wherein the (common) pixel positions can be directly inferred from the corresponding pixel block positions of the pixel blocks 15.
In the same manner, Figure 3 also shows common pixel positions 17 of the third input image 3 (non-reference image) and the second input image 2, which is the comparison image of the third input image 3, wherein the common pixel positions 17 have been determined based on an intersection of the mapping mask 13 of the non-reference image, which is the third input image 3 in this example, and the mapping mask 12 of the corresponding comparison image, which is the second input image 2 in this example.
Referring to Figure 1 again, in step 108 of this exemplary embodiment corresponding to step g) explained above, for each non-reference image, an intensity ratio between the intensities of the pixels of the respective non-reference image’s comparison image and the intensities of the pixels of the respective non-reference image is calculated, wherein only the pixels at the common pixel positions are considered for calculating the intensity ratio. In the exemplary embodiment presented here, where the first input image (high exposure level, long exposure) and the third input image (low exposure level, very short exposure) are the non-reference images and the second input image (medium exposure level, short exposure) is the comparison image for both non-reference images, this means that in step 108
- for the first input image, an intensity ratio is calculated between the intensities of the pixels of the second input image (the first input image’s comparison image) and the intensities of the pixels of the first input image (non-reference image), wherein only the common pixel positions 16 (see Figure 3) of the first input image and the second input image are considered for calculating the intensity ratio, and
- for the third input image, an intensity ratio is calculated between the intensities of the pixels of the second input image (the third input image’s comparison image) and the intensities of the pixels of the third input image (non-reference image), wherein only the common pixel positions 17 (see Figure 3) of the third input image and the second input image are considered for calculating the intensity ratio.
In this exemplary embodiment, for calculating the intensity ratios in step 108, the intensities of green pixels and non-green pixels in the Bayer domain are determined as explained before
with reference to steps 102, 103 (obtaining the intensity histogram) and 105 (obtaining the mapping masks), i.e. for each green pixel in the Bayer domain, the intensity of that respective pixel is the intensity of the green pixel itself in the Bayer domain, and for each non-green pixel in the Bayer domain, the intensity of that respective non-green pixel is determined as max(GI,N). For additional details, reference is made to the above explanation of steps 102 and 103 of the embodiment shown in Figure 1.
For any non-reference image, for example, in step 108, the intensity ratio can be calculated as follows: i) calculate a first mean intensity (e.g., arithmetic mean), which is the mean intensity of all pixels of the respective non-reference image’s comparison image that are located at the common pixel positions, ii) calculate a second mean intensity (e.g., arithmetic mean), which is the mean intensity of all pixels of the respective non-reference image itself that are located at the common pixel positions, and iii) divide the first mean intensity by the second mean intensity, wherein the resulting quotient is the intensity ratio.
This means that the intensity ratio can be calculated as

Ri = [ (1/N) · Σ_{(x,y) ∈ C} Icomp(x,y) ] / [ (1/N) · Σ_{(x,y) ∈ C} Inonref(x,y) ]

wherein
- Ri is the intensity ratio,
- C is the set of common pixel positions,
- N is the number of common pixel positions, i.e. the size of set C,
- Icomp(x,y) is the intensity of a pixel of the comparison image at pixel position (x,y), and
- Inonref(x,y) is the intensity of a pixel of the non-reference image at pixel position (x,y).
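Since the factor 1/N appears in both numerator and denominator, it cancels; a hedged one-function sketch of this calculation (array names assumed) is:

```python
import numpy as np

def intensity_ratio(comp, nonref, common_mask):
    """Ri: mean intensity of the comparison image divided by the mean intensity
    of the non-reference image, both restricted to the common positions."""
    return comp[common_mask].mean() / nonref[common_mask].mean()
```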
Again, as for steps 105 and 107 explained above, in this exemplary embodiment, for calculating the intensity ratios in step 108, the pixel blocks determined in step 102 are used instead of individual pixels, wherein the mean intensity of the respective pixel block is used as the intensity of all pixels included in the respective pixel block.
In step 109 of the embodiment shown in Figure 1, which corresponds to step h) explained above, for each non-reference image, a mapping factor m between the respective non-reference image’s exposure level and the reference exposure level is determined based on at least one intensity ratio calculated in step 108. In this exemplary embodiment, as the reference image is used as the comparison image for each non-reference image, the mapping factor m for each non-reference image (i.e. for the first input image and for the third input image in this example) is simply equal to the intensity ratio that has been determined for the respective non-reference image in step 108. This means that m = Ri in this example.
In step 110 of the embodiment shown in Figure 1, which corresponds to step i) explained above, for each non-reference image (i.e. for the first input image and the third input image in this example), the intensities of all pixels of the respective non-reference image are mapped to the reference exposure level, thereby obtaining a mapped intensity for each pixel of the respective non-reference image. This is achieved by applying the mapping factor m determined in step 109 to all pixels of the respective non-reference image. In this exemplary embodiment, this is achieved by simply multiplying the intensity of each pixel by the mapping factor m that has been determined for the respective non-reference image in step 109.
For the sake of clarity, in contrast to steps 105, 107 and 108 of this exemplary embodiment that make use of pixel blocks as explained before, the respective image’s individual pixels and the intensities of these individual pixels are used for mapping the intensities of the pixels to the reference exposure level in step 110.
In steps 111, 112 and 113 of the embodiment shown in Figure 1, which collectively correspond to step j) of the method according to the invention explained above, the input images are merged into a high dynamic range output image based on the intensities of the pixels of the reference image, i.e. the intensities of the pixels of the second input image in this example, and the mapped intensities of the pixels of each non-reference image, i.e. the mapped intensities of the first input image and the third input image in this example, wherein only the pixels whose pixel positions are included in the mapping mask of the respective input image are considered for the merging.
As can be seen from Figure 3, some pixel positions may only be included in a single mapping mask 11, 12, 13, while other pixel positions may be included in multiple mapping masks 11, 12, 13. Therefore, in step 111 of the embodiment shown in Figure 1, which corresponds to step j1) of the method according to the invention as explained before, for each pixel position and each of the three input images, a weighting factor w for the pixel of the respective input image at the respective pixel position is determined as follows (a code sketch illustrating these rules follows the list):
- w = 0 if the pixel position is not included in the mapping mask of the respective input image and
- 0 < w < 1 if the pixel position is included in the mapping mask of the respective input image and the pixel position is included in the mapping mask of at least one other input image and
- w = 1 if the pixel position is included in the mapping mask of the respective input image and the pixel position is not included in the mapping mask of any other input image and
- the sum of the weighting factors w of all input images for each pixel position is 1.
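The following Python sketch illustrates these four rules; it is an assumption-laden illustration rather than the claimed implementation: raw weights are taken as positive inside each mask, every position is assumed covered by at least one mask (as the exposure configuration discussed above is meant to ensure), and the epsilon guard is merely an implementation convenience:

```python
import numpy as np

def normalized_weights(raw_weights, masks):
    """Zero weight outside an image's mapping mask; remaining weights are
    normalized so that they sum to 1 at every covered pixel position (a
    position covered by exactly one mask thus gets w = 1 for that image)."""
    w = [np.where(m, rw, 0.0) for rw, m in zip(raw_weights, masks)]
    total = np.maximum(sum(w), 1e-12)  # guard against uncovered positions
    return [wi / total for wi in w]
```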
In the embodiment shown in Figure 1, in step 111, the weighting factor w for each pixel is determined as a function of the intensity of the respective pixel, wherein the function is defined in a way that in an overlapping area of the intensity range of interest of an input image and the intensity range of interest of another input image, the weighting factor w for the intensity of the pixels of the former input image linearly increases from 0 to 1 and the weighting factor w for the intensity of the pixels of the latter input image linearly decreases from 1 to 0 with increasing intensity.
Moreover, in step 111 of this exemplary embodiment, after the weighting factors w have been initially determined for all pixels of any of the three input images as described above, a median filter with a specified median filter kernel size of 3x3 is applied to each weighting factor w for each pixel of that respective input image, which means that direct neighbors of each pixel are considered for the median calculation.
Furthermore, in step 111 of this exemplary embodiment, after the median filter has been applied to each weighting factor w for each pixel of any of the three input images, a mean filter with a specified mean filter kernel size of 3x3 is applied to each weighting factor w for each pixel of that respective input image. This means that the mean is calculated based on the weighting factor w (resulting from applying the median filter as described before) of the
respective pixel and the weighting factors (resulting from applying the median filter as described before) of direct neighbors of each pixel of the same input image.
In step 112 of the embodiment shown in Figure 1, which corresponds to step j2) explained before, for each pixel of the reference image (i.e. for each pixel of the second input image in this example), a weighted intensity of the respective pixel of the reference image is obtained by applying the weighting factor w for the respective pixel of the reference image (as determined before in step 111) to the intensity of the respective pixel of the reference image.
Moreover, in step 112, for each pixel of each non-reference image (i.e. for each pixel of the first input image and the third input image in this example), a weighted intensity of the respective pixel of the respective non-reference image is obtained by applying the weighting factor w for the respective pixel of the respective non-reference image (as determined before in step 111) to the mapped intensity of the respective pixel of the respective non-reference image (as determined before in step 110).
In this exemplary embodiment, in step 112, the weighting factor w is applied to the intensity (or mapped intensity, respectively) of a pixel by simply multiplying the weighting factor w by the intensity (or mapped intensity, respectively) of the respective pixel.
Finally, in step 113 shown in Figure 1, which corresponds to step j3) explained before, the three input images of this exemplary embodiment are merged into a high dynamic range output image by adding up, for each pixel position, the weighted intensities of the pixels of all three input images at the respective pixel position (as determined before in step 112) to determine the intensity of the pixel of the output image at the respective pixel position.
For the sake of clarity, in contrast to steps 105, 107 and 108 of this exemplary embodiment that make use of pixel blocks as explained before, the respective image’s individual pixels and the intensities of these individual pixels are used for merging the input images into a high dynamic range output image in steps 111, 112 and 113 of the embodiment shown in Figure 1.
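Putting steps 110 to 113 together, a condensed and purely illustrative merge could read as follows; all names are assumptions, and the per-pixel weight maps are taken as already filtered and normalized:

```python
import numpy as np

def merge_hdr(ref, mapped_nonrefs, weights):
    """Step j3): per pixel position, sum the weighted intensities of the
    reference image and the mapped non-reference images."""
    images = [ref] + list(mapped_nonrefs)
    out = np.zeros_like(ref, dtype=np.float64)
    for img, w in zip(images, weights):
        out += w * img               # step j2): apply weight, then accumulate
    return out
```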
Figure 4 shows an exemplary image processing system 209. The image processing system 209 comprises an image sensor 201. Moreover, the image processing system 209 of Figure 4 comprises an electronic device 203, which is an image processing module in this exemplary
embodiment. The image processing module 203 has a data processing unit 205 and a memory 207 to store image data. The data processing unit 205 can be, for example, an appropriately programmed microprocessor, an image signal processor (ISP), a digital signal processor (DSP), a graphics processing unit (GPU), a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), or similar. The data processing unit 205 reads from and writes to the memory 207.
The image sensor 201 is configured to generate a bracketed series of at least two low dynamic range input images, wherein the series of input images is part of an input video sequence in this exemplary embodiment. The image sensor 201 is directly or indirectly connected with the image processing module 203, which allows the image processing module 203 to read the input images of the input video sequence generated by image sensor 201. Each input image read by the image processing module 203 can be stored in memory 207.
The image processing module 203 is adapted to perform the method as described above for creating a high dynamic range (HDR) output image from a plurality of low dynamic range (LDR) input images. In this exemplary embodiment, the input images are part of an input video sequence and the output image is part of an output video sequence.
After creating an HDR output image from the plurality of LDR input images as previously explained, the generated HDR output image can be stored in memory 207 for further processing or for any further application-specific purposes.
This procedure can be repeated for each input image of the input video sequence generated by the image sensor 201. This results in the generation of an HDR output video sequence, which is a sequence of HDR output images. The HDR output video sequence can be stored in memory 207.
Additionally or alternatively, the output image and/or the output video sequence can be stored in another memory and/or stored on a data storage unit and/or can be transmitted via a data transmission link.
List of reference signs
1, 2, 3 input image
1 input image with high exposure level (long exposure), non-reference image
2 input image with medium exposure level (short exposure), reference image, comparison image
3 input image with low exposure level (very short exposure), non-reference image
5 intensity histogram
11, 12, 13 mapping mask
15 pixel block
16, 17 common pixel positions
51 horizontal axis of intensity histogram
52 vertical axis of intensity histogram
101-113 step
201 image sensor
203 image processing module (electronic device)
205 data processing unit
207 memory
209 image processing system
IRI intensity range of interest
th_low lower intensity threshold
th_up upper intensity threshold
Claims
1. Method for creating a high dynamic range output image from a plurality of low dynamic range input images, wherein the method comprises at least the following steps: a) obtaining a bracketed series of at least two low dynamic range input images, wherein all of the input images represent the same scene and the input images differ in their exposure levels and the series of input images includes a reference image, whose exposure level serves as a reference exposure level, and at least one non-reference image, b) for each input image, obtaining an intensity histogram that represents the intensity distribution of the respective input image, c) for each input image, determining an intensity range of interest from the corresponding intensity histogram of the respective input image, wherein the intensity range of interest is limited by a lower intensity threshold and an upper intensity threshold, d) for each input image, obtaining a mapping mask that indicates the pixel positions of all pixels of the respective input image whose intensities are within the respective image’s intensity range of interest, e) for each non-reference image, determining a comparison image for the respective non-reference image from the series of input images, wherein the reference image is determined as a comparison image for at least one of the non-reference images, f) for each non-reference image, determining common pixel positions of the respective non-reference image and its respective comparison image, wherein common pixel positions are pixel positions that are included both in the mapping mask of the respective non-reference image and in the mapping mask of the respective non- reference image’s comparison image, g) for each non-reference image, calculating an intensity ratio between the intensities of the pixels of the respective non-reference image’s comparison image and the intensities of the pixels of the respective non-reference image, wherein only the pixels at the common pixel positions are considered for calculating the intensity ratio, h) for each non-reference image, determining a mapping factor m between the respective non-reference image’s exposure level and the reference exposure level based on at least one intensity ratio calculated in step g),
i) for each non-reference image, mapping the intensities of all pixels of the respective non-reference image to the reference exposure level by applying the mapping factor m determined in step h) to all pixels of the respective non-reference image, thereby obtaining a mapped intensity for each pixel of the respective non-reference image, j) merging the input images into a high dynamic range output image based on the intensities of the pixels of the reference image and the mapped intensities of the pixels of each non-reference image, wherein only the pixels whose pixel positions are included in the mapping mask of the respective input image are considered for the merging.
2. The method according to claim 1, characterized in that step j) comprises j1) for each pixel position and each input image, determining a weighting factor w for the pixel of the respective input image at the respective pixel position, wherein
- w = 0 if the pixel position is not included in the mapping mask of the respective input image and
- 0 < w < 1 if the pixel position is included in the mapping mask of the respective input image and the pixel position is included in the mapping mask of at least one other input image and
- w = 1 if the pixel position is included in the mapping mask of the respective input image and the pixel position is not included in the mapping mask of any other input image and
- the sum of the weighting factors w of all input images for each pixel position is 1, and j2) for each pixel of each input image, obtaining a weighted intensity of the respective pixel of the respective input image by applying the weighting factor w for the respective pixel of the respective input image
- to the intensity of the respective pixel of the reference image if the respective input image is the reference image and
- to the mapped intensity of the respective pixel of the respective input image if the respective input image is a non-reference image, and j3) merging the input images into a high dynamic range output image by adding up, for each pixel position, the weighted intensities of the pixels of all input images at the respective pixel position to determine the intensity of the pixel of the output image at the respective pixel position.
3. The method according to any of the preceding claims, characterized in that all steps of the method are executed in the Bayer domain.
4. The method according to claim 3, characterized in that for obtaining the intensity histogram in step b) and/or for obtaining the mapping mask in step d) and/or for calculating the intensity ratio in step g) and/or for determining the weighting factor in step j1)
- the intensity of each pixel of each input image is the intensity of that respective pixel itself in the Bayer domain, regardless of the color of the respective pixel, or
- the intensity of each pixel of each input image is determined only from the intensity of the green component in the Bayer domain, wherein the intensities of non-green pixels in the Bayer domain are determined based on the intensities of neighboring green pixels, or
- for each green pixel of each input image in the Bayer domain, the intensity of that respective pixel is the intensity of the green pixel itself in the Bayer domain, and for each non-green pixel of each input image in the Bayer domain, the intensity of that respective non-green pixel is determined as max(GI,N), wherein Gl is an intensity determined based on the intensities of neighboring green pixels and N is the intensity of the respective non-green pixel itself in the Bayer domain and max is the maximum function.
5. The method according to claim 1 or claim 2, characterized in that all steps of the method are executed in the RGB domain.
6. The method according to any of the preceding claims, characterized in that step b) comprises b1) for each input image, dividing the respective image into a plurality of pixel blocks, wherein each pixel block includes a plurality of pixels, and determining, for each pixel block, a mean intensity for the respective pixel block from the intensities of the pixels included in the respective pixel block, and b2) for each input image, obtaining an intensity histogram that represents the intensity distribution of the respective input image based on the mean intensities of the pixel blocks of the respective input image determined in step b1).
7. The method according to claim 6, characterized in that in steps d), f) and g), the pixel blocks determined in step b1) are used instead of individual pixels, wherein the mean intensity of the respective pixel block is used as the intensity of all pixels included in the respective pixel block.
8. The method according to any of the preceding claims, characterized in that step e) comprises determining the reference image as the comparison image for each nonreference image.
9. The method according to any one of claims 1 to 7, characterized in that step e) comprises determining the comparison image for the respective non-reference image depending on the exposure level of the respective non-reference image, wherein
- if the exposure level of the respective non-reference image is lower than the reference exposure level, the input image whose exposure level is next higher than the exposure level of the respective non-reference image is determined as the comparison image for the respective non-reference image, and/or
- if the exposure level of the respective non-reference image is higher than the reference exposure level, the input image whose exposure level is next lower than the exposure level of the respective non-reference image is determined as the comparison image for the respective non-reference image.
10. The method according to any of the preceding claims, characterized in that in step c), the lower intensity threshold and the upper intensity threshold are determined based on an empirical cumulative intensity distribution of the respective input image, wherein the empirical cumulative intensity distribution can be calculated from the corresponding intensity histogram of the respective input image, and wherein
- the lower intensity threshold is determined as the intensity where the empirical cumulative intensity distribution of the respective input image exceeds a first value p1 and
- the upper intensity threshold is determined as the intensity where the empirical cumulative intensity distribution of the respective input image exceeds a second value p2 that is greater than the first value p1.
11. The method according to any of the preceding claims, characterized in that the bracketed series of input images comprises three input images, namely a first input image with a high exposure level and a second input image with a medium exposure level and a third input image with a low exposure level.
12. The method according to any of the preceding claims, characterized in that for each input image,
- the lower intensity threshold and the upper intensity threshold used in step c) are configured so that the intensity range of interest excludes underexposed pixels and/or overexposed pixels of the respective input image, and/or
- the lower intensity threshold and the upper intensity threshold used in step c) are configured so that the intensity range of interest excludes defective pixels of the respective input image, in particular hot pixels and/or cold pixels of the respective input image.
13. The method according to any of the preceding claims, characterized in that the exposure level of each input image, in particular the exposure time and/or the sensor gain of each input image, is automatically configured by an Auto Exposure Module.
14. The method according to any of the preceding claims, characterized in that the series of input images comprises an input image with the lowest exposure level and an input image with the highest exposure level, wherein
- the lower intensity threshold of the input image with the lowest exposure level is configured so that underexposed pixels are excluded, and/or
- the upper intensity threshold of the input image with the highest exposure level is configured so that overexposed pixels are excluded, and/or
- the lower intensity threshold and the upper intensity threshold of every other input image are configured so that underexposed pixels and overexposed pixels are excluded.
15. The method according to any of claims 2 to 14, characterized in that in step j1), the weighting factor w for each pixel is determined as a function of the intensity of the respective pixel, wherein the function is defined in a way that in an overlapping area of the intensity range of interest of a first input image and the intensity range of interest of
a second input image, the weighting factor w for the intensity of the pixels of the first input image linearly increases from 0 to 1 and the weighting factor w for the intensity of the pixels of the second input image linearly decreases from 1 to 0 with increasing intensity.
16. The method according to any of claims 2 to 15, characterized in that after the weighting factors w have been initially determined for all pixels of an input image, a median filter with a specified median filter kernel size is applied to each weighting factor w for each pixel of the respective input image.
17. The method according to claim 16, characterized in that after the median filter has been applied to each weighting factor w for each pixel of an input image, a mean filter with a specified mean filter kernel size is applied to each weighting factor w for each pixel of the respective input image.
18. The method according to any of the preceding claims, characterized in that the input images are part of an input video sequence and the output image is part of an output video sequence.
19. Computer program having program code means adapted to perform a method according to any of the preceding claims when the computer program is executed on a computer.
20. Electronic device adapted to perform a method according to any of claims 1 to 18.