Method and equipment for compensating relative illumination through multiple exposure
Technical Field
Embodiments of the invention relate to a method and equipment for compensating Relative Illumination (RI) through multiple exposures.
Background
Almost all cameras have a lens section and an image capturing section. The lens section includes a plurality of lenses, and the configuration of these lenses (structure, shape, material, etc.) determines most of the performance of the camera. In recent cameras the image capturing section is usually an image sensor, mainly a complementary metal-oxide-semiconductor (CMOS) image sensor rather than old-fashioned silver halide film. In the present application, an image capturing section refers to any type of image sensor that can capture time-series images, for example, a CMOS image sensor, a Charge Coupled Device (CCD) image sensor, an organic photoconductive film image sensor, or a quantum thin film image sensor.
Relative Illuminance (RI) is the ratio of the off-axis illuminance of the image plane to the on-axis illuminance of the image plane. Due to the non-uniformity of the relative illuminance of the lens and of the sensitivity of the image sensor, the output image of the image sensor may show shading. For a camera module of a mobile device, the severely limited module height may result in an RI much lower than 50%. A low RI makes the off-axis regions of the image darker than the central region of the image (see fig. 1 for a low-RI lens image). This phenomenon is called shading or vignetting. Patent citation 1 discloses lens shading caused by the low height of an integrated camera module, in which incident light is received at larger maximum angles than in a less compact system. In addition, there are other reasons why the signal level in the off-axis region of the image sensor may be lowered (see non-patent citation 1). Non-patent citation 1 discloses optical vignetting caused by the physical dimensions of a multi-element lens: the rear elements are shaded by the elements in front of them, which reduces the effective lens aperture for off-axis incident light. Non-patent citation 2 also discloses optical vignetting, in which less light reaches the sensor edges because of the physical obstruction of the lens.
A related technique is used to compensate for this shading. This technique is called Lens Shading Compensation (LSC) or shading compensation. The function of the LSC is to compensate a low-RI image and restore it to a generally flat signal level. The LSC increases the signal level in the off-axis regions of an image by multiplying it by a gain that depends on the image position, so that the signal level is flat over the whole image. Fig. 2 shows an exemplary schematic block diagram of the Lens Shading Compensation (LSC) function described above. The blocks may be located in an image sensor. Alternatively, they may be located in an Image Signal Processor (ISP). ISP stands for image signal processor or image signal processing and refers to the processing hardware, processing software, or processing algorithms that process the images from an image sensor so as to render them in some image format. With the above LSC function, an input image signal ("image signal" in fig. 2) is multiplied by a certain gain ("gain" in fig. 2) in a multiplier ("multiplier" in fig. 2). As long as the object is the same as in the central region, the gain is calculated such that the signal level of the off-axis region of the image is compensated to be the same as the signal level of the central (on-axis) region of the image. Therefore, the LSC function computes, for each position on the image sensor, a gain from a certain pre-calculated table ("gain table" in fig. 2) and amplifies the signal level at that position.
Fig. 3 is an exemplary diagram showing relative illuminance versus image height. The horizontal axis represents the image height, expressed in percent (%) of the highest image height or half of the image circle, wherein half of the image circle is the same length as the diagonal of the image sensor. The vertical axis represents the relative illuminance to be corrected. The graph assumes that the scene brightness is completely flat (uniform). For simplicity in explaining the LSC behavior, it is assumed that the LSC only compensates for shading caused by the relative illuminance. Under this assumption, the gain to be applied depends only on the image height (distance from the image center) of the region to be compensated. Fig. 4 is an exemplary graph illustrating the gain of a normal LSC function that solves the problem of fig. 3. The horizontal axis represents the image height, expressed in percent (%) of the highest image height or half of the image circle, wherein half of the image circle is the same length as the diagonal of the image sensor. The vertical axis represents the gain of the normal LSC function for the problem of fig. 3. The graph assumes that the scene brightness is completely flat (uniform). As shown in fig. 4, the gain of the LSC function is the reciprocal (inverse) of the relative illuminance (fig. 4). Thereafter, as long as the object is flat, the output signal can be properly compensated to maintain the same signal level (see, for example, patent citation 2, which discloses a magnification factor that is changed according to the position on the sensor to compensate for lens shading; patent citation 3, which discloses spherical intensity correction as an example of an LSC that corrects the data of each image pixel by a function of its radius from the optical-center pixel; patent citation 4, which discloses the lens shading phenomenon observed after capturing an image of a white area; non-patent citation 2, which discloses an LSC, or vignetting compensation, that corrects the above-mentioned vignetting; and non-patent citation 3, which discloses an LSC).
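As a rough illustration of how such a gain table can be applied, the sketch below multiplies a grayscale frame by the reciprocal of a radially symmetric RI model. It is a minimal, non-authoritative sketch: the quadratic RI model, the NumPy-based implementation, and all names are illustrative assumptions and not part of the disclosed LSC function.

```python
import numpy as np

def lsc_gain_map(shape, relative_illuminance):
    """Per-pixel LSC gain: the reciprocal of the RI at each pixel's image height."""
    h, w = shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    # Image height normalized so that 1.0 corresponds to the frame corner.
    height = np.hypot(yy - cy, xx - cx) / np.hypot(cy, cx)
    return 1.0 / relative_illuminance(height)

# Illustrative RI model: 100% on axis, about 18% at the corner (cf. fig. 3).
ri_model = lambda r: 1.0 - 0.82 * r ** 2

def apply_lsc(gray_frame):
    """Multiply a grayscale frame by the position-dependent LSC gain."""
    return gray_frame.astype(np.float32) * lsc_gain_map(gray_frame.shape, ri_model)
```

As noted above, this multiplication amplifies the noise by the same factor as the signal, which is the limitation the embodiments aim to overcome.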
Patent citations
Patent citation 1: US20144184813A1
Patent citation 2: US2011304752A1
Patent citation 3: US7,408,576B2
Patent citation 4: US8,049,795B2
Non-patent citations
Non-patent citation 1: https://en.wikipedia.org/wiki/Vignetting#Optical_vignetting
Non-patent citation 2: Kayvon Fatahalian, CMU 15-869: Graphics and Imaging Architectures (Fall 2011)
Non-patent citation 3: Computational Photography: Understanding and Expanding the Capabilities of Standard Cameras, presentation from NVIDIA
To show the technical problem of the related art, the signal-to-noise ratio (SNR) of an image is considered. The SNR of an image can be calculated according to the following equation.
In one interpretation, the linear interpretation, the equation is:
SNR = Signal / Noise.
In another interpretation, the dB interpretation, the equation is:
SNR = 20 * log10(Signal / Noise).
The latter interpretation is more common. In both interpretations, Signal is the output signal level of the image sensor, and Noise is the noise level of Signal.
Noise can be estimated according to the following equation:
Noise = SQRT(k * Signal + base noise), in which:
SQRT is the square root function, k is a coefficient related to the image sensor design, and "base noise" is noise unrelated to Signal. However, the base noise typically varies with image sensor settings, such as the analog gain inside the image sensor. With the LSC technique, only a gain is applied to the image signal. The gain amplifies the noise as well, so the SNR of the signal after LSC is the same as the SNR of the signal before LSC. The reason why the SNR calculated as above is higher at the center of the image than off-axis is the lens design of a low-height camera. One of the most likely causes is the "cosine fourth law", which explains the illumination fall-off in the camera image that makes the edges relatively dark. Fig. 5 shows a rough estimate of the SNR versus image height, considering only optical shot noise, with 900 e- in the central region, based on the relative illumination of fig. 3 (900 e- is 18% of 5000 e-, and 5000 e- is a typical full-well capacity of a 1.12 micron CMOS image sensor pixel). Fig. 5 thus shows the SNR of an image captured without employing the present invention. The horizontal axis represents the image height, expressed in percent (%) of the highest image height or half of the image circle, wherein half of the image circle is the same length as the diagonal of the image sensor. The vertical axis represents the SNR of an image captured without employing the present invention. As described above, the related art has the following technical problem: if the relative illumination is not very high, the SNR of the off-axis regions is low, and the image becomes noisy and degraded there. Embodiments of the present invention aim to solve this problem.
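The rough estimate of fig. 5 can be reproduced with the formulas above. The sketch below is illustrative only: the values of k and the base noise are placeholder assumptions, and only optical shot noise plus a small constant floor is modeled.

```python
import numpy as np

def snr_db(signal_e, k=1.0, base_noise=3.0):
    """SNR in dB using Noise = SQRT(k * Signal + base noise)."""
    noise = np.sqrt(k * signal_e + base_noise)
    return 20.0 * np.log10(signal_e / noise)

center_snr = snr_db(900.0)           # ~29.5 dB at the image center (900 e-)
corner_snr = snr_db(0.18 * 900.0)    # ~22 dB at 18% relative illumination
```

The drop of several dB toward the corner corresponds to the degraded off-axis image quality described above.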
Disclosure of Invention
The invention discloses a method and a device for compensating relative illumination through multiple exposures.
According to a first aspect, a method of compensating relative illumination by multiple exposures is provided. The method comprises the following steps: receiving two or more time-sequential frames of an image from the outside, and transmitting one primary frame determined from the received two or more time-sequential frames; storing at least one of the two or more time-sequential frames of the image; applying a blending gain to one or more of the two or more time-sequential frames; and blending the application results, wherein the blending gain varies as the image position varies.
According to a second aspect, there is provided an apparatus for compensating relative illuminance by multiple exposures. The apparatus comprises: a memory controller for receiving two or more time-sequential frames of an image from the outside and transmitting a main frame determined from the received two or more time-sequential frames to the multiplier; a memory for storing at least one of the two or more time-sequential frames of the image; a multiplier for applying a mixing gain to one or more of the two or more time-sequential frames; and a mixer for blending the application results, wherein the mixing gain varies with the image position.
According to the method and apparatus for compensating relative illuminance through multiple exposures of the embodiments of the present invention, the signal level can be raised enough to maintain a high SNR in high-image-height areas even in cases where the contrast is small.
Drawings
Fig. 1 shows a low-RI lens image;
Fig. 2 is an exemplary schematic block diagram of the Lens Shading Compensation (LSC) function described above;
Fig. 3 is an exemplary graph showing relative illuminance;
Fig. 4 is an exemplary diagram illustrating a gain table;
Fig. 5 is a graph showing a roughly estimated SNR versus image height;
Fig. 6 is a diagram illustrating the basic concept provided by an embodiment of the present invention;
Fig. 7 is an exemplary block diagram provided by an embodiment of the present invention;
Fig. 8 is an exemplary graph showing a mixing gain depending on the image height;
Fig. 9 is a graph showing the image height and the signal level after mixing with the mixing gain of fig. 8;
Fig. 10 is another exemplary block diagram provided by another embodiment of the present invention;
Fig. 11 is yet another exemplary block diagram provided by yet another embodiment of the present invention.
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings required for describing the embodiments or the prior art are briefly introduced below. It is to be understood that the drawings in the following description are merely illustrative of some embodiments of the invention and that still other drawings may be derived therefrom by those skilled in the art without the exercise of inventive faculty.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without making creative efforts based on the embodiments of the present invention, belong to the protection scope of the present invention.
Fig. 6 is a diagram showing the basic concept provided by an embodiment of the present invention, which includes the following steps: "capture sequential frames" (1 in fig. 6); "determine a primary frame from the captured frames" (2 in fig. 6, where the captured frames may include a "previous frame", the "primary frame", and a "next frame"); and "apply a gain to the next frame (or previous frame) and add it to the main frame" (3 in fig. 6, where a "mixing gain" is applied to the "next frame" (the "mixed frame"), and the "next frame" to which the "mixing gain" has been applied is added to the "main frame").
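As an illustration of the flow of fig. 6, the sketch below blends a main frame with one neighboring frame using a precomputed position-dependent mixing gain. It is a minimal sketch, not the apparatus of fig. 7: the choice of the frame right after the main frame, the NumPy array representation, and all names are assumptions.

```python
import numpy as np

def blend_with_neighbour(frames, main_index, mixing_gain):
    """Fig. 6, steps 1-3: take the main frame from the captured sequential
    frames, scale a neighboring (mixed) frame by the position-dependent
    mixing gain, and add it to the main frame."""
    main = frames[main_index].astype(np.float32)
    mixed = frames[main_index + 1].astype(np.float32)  # simplification: use the next frame
    return main + mixing_gain * mixed
```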
According to the method for compensating relative illumination through multiple exposures provided by the embodiment of the invention, two or more time-sequential frames of an image are captured; a primary frame is determined from the captured two or more time-sequential frames; a blending gain is applied to one or more mixed frames, where the one or more mixed frames are frames that are time-sequential with the main frame, and a frame that is time-sequential with the main frame may be a frame before the main frame, a frame after the main frame, or both a frame before and a frame after the main frame; and the application results are blended, wherein the blending gain varies as the image position varies. According to the method for compensating relative illuminance through multiple exposures provided by the embodiment of the invention, the blending gain varies with the image position and may be limited by an upper limit of 1.0. Fig. 7 is an exemplary block diagram provided by an embodiment of the present invention. A number of image frames received from the image sensor are stored in a memory ("memory" in fig. 7). In this example, at least two frames are stored in the memory. In the present embodiment, the method of controlling the memory and the memory size are not defined in advance. One frame is selected from the stored frames by some method and is determined to be the primary frame ("image 1" in fig. 7). In the present invention, the "main frame" is determined by some method decided by the camera system using the image sensor, and this method depends on the architecture of the camera system. The frame with the lowest time delay from the shutter trigger is typically determined to be the "primary frame". In most camera systems, the camera system detects the shutter trigger through a mechanical button called the "shutter", or detects a multi-touch or single touch on a dedicated "shutter trigger" area of the display panel, or detects a situation such as a "smiling face shutter". Another frame is selected by some different method and is referred to as the mixed frame ("image 2" in fig. 7). There are only three possible schemes for selecting this other frame:
(1) always using the first frame after the "primary frame";
(2) always using the first frame before the "primary frame";
(3) adaptively selecting either the first frame after the "main frame" or the first frame before the "main frame". Some possible methods for selecting the relevant frame in (3) include the following (see the sketch after this list):
(a) selecting a frame with smaller difference in the image;
(b) selecting a frame with a smaller difference in object position;
(c) selecting a frame with a smaller difference in camera position;
(d) selecting the frame with the best score according to some metric calculated from the image difference, the object motion, and the camera position shift.
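The sketch below illustrates one possible adaptive selection for scheme (3), using criterion (a) only: the neighboring frame whose mean absolute pixel difference from the main frame is smaller is chosen. The metric, the function names, and the restriction to criterion (a) are assumptions; criteria (b)-(d) would add object-motion or camera-motion terms.

```python
import numpy as np

def select_mixed_frame(prev_frame, main_frame, next_frame):
    """Scheme (3): adaptively pick the neighboring frame that differs least
    from the main frame (criterion (a): mean absolute pixel difference)."""
    main = main_frame.astype(np.float32)
    diff_prev = np.mean(np.abs(prev_frame.astype(np.float32) - main))
    diff_next = np.mean(np.abs(next_frame.astype(np.float32) - main))
    return prev_frame if diff_prev <= diff_next else next_frame
```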
The mixed frame is an image that may be captured/observed before the "other functions including LSC" block in fig. 7. The mixed frame is multiplied by the mixing gain obtained from the gain table (the mixing gain is applied in the "multiplier" in fig. 7). As illustrated in fig. 7, the mixing gain varies with the image height of the position concerned. If the relative illumination of the lens is high, the "mixing gain" is low. Since the relative illumination of most lenses is higher at the center (image center) than at the edges, the mixing gain should be small at the center and high at the edges. The gain table may be defined in a lookup table (LUT) or a memory (the "gain table" in fig. 7). The gain table may also be calculated on the fly, while the process is being performed.
According to one of the simplest implementations of this embodiment, the mixing gain may depend only on the image height of the position to be calculated. The image height is the distance between the image center and the position to be calculated. Fig. 8 shows an example of the mixing gain versus image height for one of the simplest implementations described above. The horizontal axis represents the image height, expressed in percent (%) of the highest image height or half of the image circle, wherein half of the image circle is the same length as the diagonal of the image sensor. The vertical axis is the mixing gain. The mixing gain may be calculated such that the same signal level is obtained over the entire image height range. The mixing gain may be limited to a constant, e.g., 1.0.
After the multiplication, the mixed frame and the main frame are combined into one frame. Fig. 9 shows the signal level after the mixing operation provided by the present invention. The horizontal axis represents the image height, expressed in percent (%) of the highest image height or half of the image circle, wherein half of the image circle is the same length as the diagonal of the image sensor. The vertical axis is the signal level after the mixing operation provided by the present invention. If the relative illumination of the lens is as shown in fig. 3 and the mixing gain is as shown in fig. 8, the signal level of the output frame of the embodiment of the present invention is as shown in fig. 9.
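The relationship between figs. 3, 8, and 9 can be checked with a small numerical sketch. It assumes an illustrative quadratic RI model, that the mixed frame carries the same per-position signal level as the main frame, and that the mixing gain is the gain needed to flatten the level, clipped at 1.0; these assumptions are for illustration and are not taken from the disclosure.

```python
import numpy as np

def relative_illuminance(h):
    """Illustrative RI model: 100% on axis, about 18% at 100% image height (fig. 3)."""
    return 1.0 - 0.82 * h ** 2

def mixing_gain(h, cap=1.0):
    """Gain that would flatten the signal level, clipped at the cap (cf. fig. 8)."""
    return np.minimum(cap, 1.0 / relative_illuminance(h) - 1.0)

def blended_level(h):
    """Signal level after blending, relative to the on-axis level (cf. fig. 9)."""
    return relative_illuminance(h) * (1.0 + mixing_gain(h))  # main + gain * mixed

for h in (0.0, 0.5, 1.0):
    print(h, round(float(mixing_gain(h)), 2), round(float(blended_level(h)), 2))
# image height 0%: gain 0.0, level 1.0; 50%: gain 0.26, level 1.0; 100%: gain 1.0, level 0.36
```

Up to the image height where the gain reaches its cap, the level stays flat; beyond that point, the level is raised by at most a factor of two, as in fig. 9.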
The mixing gain may be determined as follows (a sketch is given after these steps):
(1) Capture an image of a gray chart under uniform light. The light should be sufficiently uniform, and is preferably a 3200 K halogen light, a 2858 K light source, or a D65 light source.
(2) Calculate the shading of the green channel (if the image sensor is of the monochrome type, the single monochrome channel is sufficient for the calculation).
(3) The mixing gain at position (x, y) may be obtained as: mixing gain at position (x, y) = min(1.0, (center signal) / (signal at position (x, y))).
In addition, the LSC parameters should be determined so as to compensate for color shading, including its variation with the light source.
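The sketch below illustrates how such a calibration could be implemented from the green channel of a flat gray-chart capture. It is a non-authoritative sketch: it interprets the gain as the extra fraction needed to lift each position to the center level (center signal / local signal - 1), capped at 1.0, which matches the near-zero on-axis gain of fig. 8; the array names and the already-demosaiced input are assumptions.

```python
import numpy as np

def calibrate_mixing_gain(gray_chart_green, cap=1.0):
    """Derive a per-position mixing gain table from the green-channel shading
    of a flat gray chart captured under uniform light (steps (1)-(3))."""
    g = gray_chart_green.astype(np.float32)
    h, w = g.shape
    center_signal = g[h // 2, w // 2]
    # Extra gain needed to reach the center level, clipped at the cap.
    return np.minimum(cap, center_signal / g - 1.0)
```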
Fig. 10 is an exemplary block diagram provided by another embodiment of the present invention, in which only one preceding frame or one succeeding frame is stored in the memory ("memory" in fig. 10), so that the memory used for storage is reduced to a single frame. In fig. 10, "image 2", which is the frame before or after "image 1", is the mixed frame.
Fig. 11 is yet another exemplary block diagram provided by yet another embodiment of the present invention, in which three time-sequential frames are combined. In fig. 11, "image 1", the frame before "image 2", and "image 3", the frame after "image 2", are the mixed frames.
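A minimal sketch of the three-frame variant of fig. 11 is given below. It assumes that both neighboring frames share one mixing-gain table and that the gain is split equally between them; this split, like the names, is an assumption for illustration only.

```python
import numpy as np

def blend_three_frames(prev_frame, main_frame, next_frame, mixing_gain):
    """Fig. 11 variant: the frames before and after the main frame are both
    mixed frames; the position-dependent gain is split equally between them."""
    prev_f = prev_frame.astype(np.float32)
    next_f = next_frame.astype(np.float32)
    return main_frame.astype(np.float32) + 0.5 * mixing_gain * (prev_f + next_f)
```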
The method for compensating relative illuminance through multiple exposures according to the embodiment of the invention comprises the following steps:
(a) two or more time-sequential frames are combined into one frame; one of them is the main frame and the other frame or frames are mixed frames;
(b) each mixed frame is multiplied by a mixing gain;
(c) the mixing gain varies with the image position.
According to one of the most feasible implementations of this embodiment, the blending gain varies as the image height varies.
According to the method for compensating relative illuminance through multiple exposures of the embodiment of the invention, the signal level can be raised enough to maintain a higher SNR in higher image height areas even in cases where the contrast is small. The method for compensating relative illuminance through multiple exposures according to an embodiment of the present invention also allows only one previous frame or one subsequent frame to be stored in the memory, so that the size of the memory used for storage can be reduced.
In other embodiments, the mixing gain varies with both the image height and the azimuth angle, so that not only the relative illumination but also shading caused by image sensor non-uniformity, mechanical tolerances of optical components, and other causes can be compensated.
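A sketch of such a two-dimensional gain is shown below. The separable model (a radial flattening term modulated by a small azimuth-dependent factor) and its coefficients are placeholder assumptions used only to illustrate a gain that depends on both image height and azimuth.

```python
import numpy as np

def mixing_gain_2d(shape, cap=1.0, azimuth_amplitude=0.1):
    """Mixing gain over the whole frame: a radial term (image height) modulated
    by a small azimuth-dependent term to cover asymmetric shading causes."""
    h, w = shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    height = np.hypot(yy - cy, xx - cx) / np.hypot(cy, cx)      # 0 at center, 1 at corner
    azimuth = np.arctan2(yy - cy, xx - cx)
    radial = 1.0 / (1.0 - 0.82 * height ** 2) - 1.0             # flattening gain
    modulation = 1.0 + azimuth_amplitude * np.cos(azimuth)      # mild asymmetry
    return np.minimum(cap, radial * modulation)
```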
The above description is only a specific embodiment of the present invention and is not intended to limit the scope of the present invention. Any changes or substitutions that may be easily found by those skilled in the art within the technical scope of the present disclosure should fall within the protective scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.