
WO2019036522A1 - Bit-depth efficient image processing - Google Patents


Info

Publication number
WO2019036522A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
bit depth
nonlinear
transformation
sensor
Prior art date
Legal status
Ceased
Application number
PCT/US2018/046783
Other languages
English (en)
Inventor
Jon S. Mcelvain
Current Assignee
Dolby Laboratories Licensing Corp
Original Assignee
Dolby Laboratories Licensing Corp
Priority date
Filing date
Publication date
Application filed by Dolby Laboratories Licensing Corp filed Critical Dolby Laboratories Licensing Corp
Priority to EP18759011.2A priority Critical patent/EP3669542B1/fr
Priority to CN201880053158.1A priority patent/CN110999301B/zh
Priority to US16/637,197 priority patent/US10798321B2/en
Publication of WO2019036522A1 publication Critical patent/WO2019036522A1/fr


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability

Definitions

  • the present application relates to wide dynamic range image generation.
  • a typical electronic camera as for example incorporated in consumer electronic devices, includes an image sensor and an image signal processor (ISP).
  • the image sensor has a plurality of photosensitive pixels that generate respective electronic signals in response to incident light.
  • Readout circuitry integrated in the image sensor reads out these electronic signals, which are then processed by the ISP to generate a display-ready image.
  • the ISP may perform a variety of functions such as background subtraction, noise reduction, correction of brightness non-uniformity, and final encoding of the image according to an industry standard, such as those described in ITU-R BT.2100, which is incorporated herein by reference in its entirety.
  • the image sensor For generation of color images with a single-chip image sensor, the image sensor includes a color filter array such that each photosensitive pixel is sensitive to only a portion of the full color spectrum.
  • the ISP applies a demosaicing algorithm to the incomplete color samples provided by the image sensor to produce a full color image at the full pixel resolution of the image sensor.
  • the ISP may further perform color correction.
  • Display developers are working towards displays capable of displaying more and more natural looking images.
  • displays may have a high bit depth, such as 10 bits or 12 bits, to enable the displaying of images at a wide dynamic range, that is, a range from very dark to very bright.
  • one troubling artifact is the appearance of discrete contours in the image, especially when the displayed scene is of relatively uniform brightness. For example, a grey wall with a slight gradient in brightness may appear to have discrete steps in brightness as opposed to the real-life gradual brightness change.
  • One way to mitigate this problem is, instead of distributing the display bit depth linearly across the full brightness range, to assign bit depth resolution according to the human visual system's capability to perceive brightness differences.
  • WO2016164235 describes systems and methods for in-loop, region- based, reshaping for the coding of high-dynamic range video.
  • Using a high bit-depth buffer to store input data and previously decoded reference data, forward and backward in-loop reshaping functions allow video coding and decoding to be performed at a target bit depth lower than the input bit depth.
  • Methods for the clustering of the reshaping functions to reduce data overhead are also presented.
  • A document of ISO/IEC JTC1/SC29/WG11, dated 16 October 2015, presents a candidate test model for HDR/WCG video compression.
  • the two major tools proposed in this test model are adaptive reshaping and color enhancement filters. Both tools can work in various color spaces to improve coding efficiency of HDR/WCG video.
  • A paper presented at the IBC 2015 conference, 11-15 September 2015, Amsterdam, presents an overview of the BBC's "Hybrid Log-Gamma" solution, designed to meet the requirements of high dynamic range television.
  • the signal is "display independent” and requires no complex “mastering metadata.”
  • In addition to delivering high quality high dynamic range (HDR) pictures, it also delivers a high quality "compatible" image to legacy standard dynamic range (SDR) screens and can be mixed, re-sized and compressed using standard tools and equipment.
  • Quantisation effects or "banding" are analysed theoretically and confirmed experimentally. It is shown that quantisation effects are comparable to or below those of competing HDR solutions.
  • WO2016184532 provides a mechanism for managing a picture.
  • the picture comprises pixels, wherein pixel values of the pixels are represented with a first bitdepth.
  • the method comprises converting the pixel values of the pixels represented with the first bitdepth into the pixel values represented with a second bitdepth, wherein the first bitdepth is smaller than the second bitdepth.
  • the method comprises identifying a group of pixels among the pixels of the picture.
  • the group of pixels comprises two pixels, wherein the two pixels are adjacent to each other along a direction, wherein pixel values of the group of pixels are equal to each other.
  • the method comprises, for at least one of the two pixels, estimating a respective estimated pixel value based on a first pixel value and a second pixel value.
  • the first and second pixel values are derived from two edge pixel values of two edge pixels, wherein each one of the two edge pixels is located along the direction and excluded from the group of pixels, and wherein each one of the two edge pixels is adjacent to a respective end of the group of pixels with respect to the direction.
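A minimal sketch of that estimation, assuming simple linear interpolation between the two edge pixels (the cited publication's exact estimator is not specified here, and the function name is illustrative):

```python
def estimate_flat_run(row, start, end):
    """Estimate pixel values inside the flat run row[start:end] (all equal)
    from the two edge pixels adjacent to the run along the row direction.
    Linear interpolation is an assumption; the cited method may differ."""
    left, right = row[start - 1], row[end]   # edge pixels, outside the run
    n = end - start
    return [left + (right - left) * (i + 1) / (n + 1) for i in range(n)]

# A banded row after up-conversion to a higher bit depth: a group of equal
# pixel values bounded by two edge pixels with different values.
row = [10, 12, 12, 12, 12, 14]
estimates = estimate_flat_run(row, 1, 5)   # ≈ [10.8, 11.6, 12.4, 13.2]
```

Replacing the flat run of 12s with a gentle ramp between the edge values removes the visible step while staying consistent with the original lower-bit-depth quantization.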
  • a computer-implemented method for bit-depth efficient image processing includes a step of communicating at least one non-linear transformation to an image signal processor.
  • Each non-linear transformation is configured to, when applied by the image signal processor to a captured image having sensor signals encoded at a first bit depth, produce a nonlinear image that re-encodes the captured image at a second bit depth that may be less than the first bit depth, while optimizing allocation of bit depth resolution in the nonlinear image for low contour visibility.
  • the method further includes receiving the nonlinear image from the image signal processor, and applying an inverse transformation to transform the nonlinear image to a re-linearized image at a third bit depth that is greater than the second bit depth.
  • the inverse transformation is inverse to the nonlinear transformation used to produce the nonlinear image.
  • the non-linear transformation may be determined based on noise characteristics of the sensor signals.
  • Said noise characteristics of the sensor signals may comprise a mapping of code value levels of the sensor signals to corresponding values of a noise standard deviation for said code value levels.
  • the non-linear transformation may comprise a concave function for mapping initial code values of the captured image to optimized code values of the nonlinear image.
  • the non-linear transformation may be configured to produce the nonlinear image such that an average noise level of the nonlinear image is increased compared to an average noise level of the captured image.
  • the non-linear transformation may allocate a relatively greater portion of the second bit depth to less noisy ranges of the sensor signals of the captured image, and may allocate a relatively smaller portion of the second bit depth to more noisy ranges of the sensor signals of the captured image.
  • the nonlinear transformation may allocate a relatively greater portion of the second bit depth to a lower range of the sensor signals, and may allocate a relatively smaller portion of the second bit depth to a higher range of the sensor signals.
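For illustration, a square-root curve, a hypothetical stand-in for such a concave transformation, shows how a 10-bit-to-8-bit mapping can grant the lower signal range a disproportionate share of the output codes:

```python
import numpy as np

FIRST_BIT_DEPTH = 10    # bit depth of the captured image
SECOND_BIT_DEPTH = 8    # bit depth of the nonlinear image

def nonlinear_transform(codes):
    """Concave re-encoding of initial code values to optimized code values.
    The square-root shape is illustrative only, not the patent's curve."""
    max_in = 2**FIRST_BIT_DEPTH - 1
    max_out = 2**SECOND_BIT_DEPTH - 1
    return np.round(max_out * np.sqrt(np.asarray(codes) / max_in)).astype(int)

codes = np.arange(1024)
optimized = nonlinear_transform(codes)
# Share of the 8-bit output range spent on the lowest 10% of input codes:
low_share = optimized[102] / 255          # ≈ 0.32, versus 0.10 for a linear map
```

A linear requantization would give every decile of the input range the same 10% of output codes; the concave curve instead concentrates resolution where contours are most visible.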
  • a product for bit-depth efficient image processing includes machine-readable instructions encoded in non-transitory memory.
  • the instructions include at least one non-linear transformation.
  • Each nonlinear transformation is configured to transform a captured image, encoding sensor signals at a first bit depth, to produce a nonlinear image that re-encodes the sensor signals at a second bit depth that may be less than the first bit depth, while optimizing allocation of bit depth resolution in the nonlinear image for low contour visibility.
  • the instructions further include, for each non-linear transformation, a corresponding inverse transformation.
  • the instructions include (a) hardware instructions that, when executed by a processor, communicate the at least one non-linear transformation to an image signal processor, to enable the image signal processor to produce the nonlinear image from a captured image, and (b) application domain instructions including inverting instructions that, when executed by the processor, receive the nonlinear image from the image signal processor and apply the inverse transformation corresponding to the nonlinear transformation used to produce the nonlinear image, to produce a re-linearized image at a third bit depth that is greater than the second bit depth.
  • a method for bit-depth efficient analog-to-digital conversion of an image includes (a) receiving a plurality of analog signals representing light detected by a respective plurality of photosensitive pixels of an image sensor, and (b) converting the analog signals to digital signals at a first bit depth.
  • the method further includes, prior to the step of converting, a step of applying a nonlinear transformation to the analog signals to optimize allocation of bit depth resolution, to the digital signals, for low contour visibility.
  • the method may include inverting the nonlinear transformation by applying a corresponding inverse transformation to the digital signals, the inverse transformation encoding the digital signals at a second bit depth that is greater than the first bit depth.
  • an image sensor with bit-depth efficient analog-to-digital image conversion includes a plurality of photosensitive pixels for generating a respective plurality of analog signals representing light detected by the photosensitive pixels.
  • the image sensor further includes at least one analog-to-digital converter, having a first bit depth, for converting the analog signals to digital signals.
  • the image sensor also includes at least one analog preshaping circuit, communicatively coupled between the photosensitive pixels and the at least one analog-to-digital converter, for applying a nonlinear transformation to the analog signals to optimize allocation of bit depth resolution to the digital signals by the analog-to-digital converter, for low contour visibility in the presence of noise of the analog signals.
  • the image sensor may further include at least one digital inverting circuit for inverting the nonlinear transformation by applying a corresponding inverse transformation to the digital signals. The inverse transformation encodes the digital signals at a second bit depth that is greater than the first bit depth.
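A numerical sketch of this idea, using an assumed square-root preshaping curve and an idealized uniform ADC (both are illustrative stand-ins, not the claimed circuits):

```python
import numpy as np

def adc(x, bits):
    """Idealized uniform quantizer on [0, 1] at the given bit depth."""
    levels = 2**bits - 1
    return np.round(x * levels) / levels

signal = np.linspace(0.0, 0.05, 1000)        # a dark, slowly varying ramp

plain = adc(signal, 8)                       # direct 8-bit conversion
preshaped = adc(np.sqrt(signal), 8) ** 2     # preshape, quantize, invert digitally

# Preshaping yields many more distinct levels in the dark range, where
# contouring would otherwise be most visible.
levels_plain = len(np.unique(plain))
levels_preshaped = len(np.unique(preshaped))
```

The digital inversion restores linearity while retaining the finer effective step size that the analog preshaping bought in the dark range.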
  • FIG. 1 illustrates a system for bit-depth efficient image processing, according to an embodiment.
  • FIG. 2 shows a prior art image signal processor.
  • FIG. 3 illustrates a system for bit-depth efficient processing of a captured image, according to an embodiment.
  • FIG. 4 illustrates a method for bit-depth efficient image processing, according to an embodiment.
  • FIGS. 5A and 5B show example nonlinear and inverse transformations that may be used in the systems of FIGS. 1 and 3 and in the method of FIG. 4.
  • FIG. 6 illustrates a system for bit-depth efficient processing of a captured image, which is segmented in a hardware domain and an application domain, according to an embodiment.
  • FIG. 7 illustrates a method for bit-depth efficient image processing effectuated from an application domain, such as the application domain of a cellular phone, according to an embodiment.
  • FIG. 8 illustrates a method for bit-depth efficient image processing effectuated from an application domain, such as the application domain of a cellular phone, which utilizes capture-mode-specific nonlinear transformations, according to an embodiment.
  • FIG. 9 illustrates a method for bit-depth efficient image processing effectuated from an application domain, such as the application domain of a cellular phone, which controls the capture mode and utilizes capture-mode-specific nonlinear transformations, according to an embodiment.
  • FIG. 10 illustrates a method for bit-depth efficient image processing effectuated from an application domain, such as the application domain of a cellular phone, which generates a high-dynamic-range image, according to an embodiment.
  • FIG. 11 illustrates a computer for bit-depth efficient image processing, according to an embodiment.
  • FIG. 12 shows a prior-art image sensor.
  • FIG. 13 illustrates an image sensor with bit-depth efficient analog-to-digital image conversion, according to an embodiment.
  • FIGS. 14A and 14B illustrate an image sensor with bit-depth efficient analog-to-digital conversion in column-specific readout circuitry, according to an embodiment.
  • FIG. 15 illustrates a method for bit-depth efficient analog-to-digital conversion in an image sensor, according to an embodiment.
  • FIG. 16 shows an example of required bit depth as a function of noise level.
  • FIG. 17 shows a noise characterization of the Google Pixel camera.
  • FIG. 18 illustrates a method for determining the nonlinear transformation of the system of FIG. 1 for an image sensor 120, according to an embodiment.
  • FIG. 19 shows minimum contrast curves derived from an example of method 1800 based upon the Google Pixel camera noise.
  • FIG. 20 shows an example result of mapping the minimum contrast curves of FIG. 19 back to sensor signal code values.
  • FIG. 21 shows an example of typical noise reduction from an image signal processor.
  • FIG. 22 illustrates, by example, the effect of denoising on minimum contrast curves.
  • FIG. 23 illustrates an example of deriving the parameters of a nonlinear transformation from a minimum contrast curve.
  • FIGS. 24A and 24B illustrate an example nonlinear transformation and a corresponding example inverse transformation.
  • FIG. 25 shows an example of required bit depth as a function of noise level based upon an expanded study.
  • FIG. 26 shows an example minimum relative contrast curve associated with the required bit depth of FIG. 25.
  • FIG. 27 is an alternative representation of the data of FIG. 26.
  • FIG. 28 illustrates an example of contour visibility performance provided by the image sensor of FIG. 13.
  • FIG. 1 illustrates one example system 100 for bit-depth efficient image processing.
  • System 100 includes a camera 110 and a processing unit 140.
  • System 100 applies bit-depth efficient image processing to enable generation of wide dynamic range images even when the bit depth of the output of camera 110 presents a bottleneck.
  • System 100 is compatible with wide dynamic range displays.
  • Camera 110 includes an image sensor 120 and an image signal processor (ISP) 130.
  • Image sensor 120 generates a captured image 170 of a scene 160. Captured image 170 encodes electrical sensor signals of image sensor 120 at a first bit depth, such as 10 bits or 12 bits. Image sensor 120 generates these sensor signals in response to light 168 from scene 160.
  • the output of camera 110 is limited to a second bit depth.
  • the second bit depth may be less than the first bit depth and therefore present a bottleneck in terms of bit depth resolution.
  • the second bit depth is 8 bits, which is a common bit depth of the output of off-the-shelf ISP integrated circuits.
  • ISP 130 processes captured image 170 for efficient use of the second bit depth available at the output of camera 110. ISP 130 optimizes the bit depth allocation according to the contour visibility for different ranges of sensor signals to minimize contour visibility in images generated by system 100.
  • ISP 130 applies a nonlinear transformation 132 to captured image 170 to re- encode captured image 170 in a nonlinear image 172 at the second bit depth while nonlinearly distributing the sensor signals of captured image 170 across the second bit depth to minimize contour visibility.
  • nonlinear transformation 132 is configured to take into account the effect of noise on contour visibility.
  • nonlinear transformation 132 is defined based upon a consideration of the native noise of image sensor 120.
  • nonlinear transformation 132 is defined based upon a consideration of the native noise of image sensor 120 as well as the noise of other processing performed by camera 110 prior to application of nonlinear transformation 132. Regardless of the origin of the noise, noise tends to reduce contour visibility. Consequently, more noisy ranges of sensor signals are less susceptible to contour visibility, whereas less noisy ranges of sensor signals are more susceptible to contour visibility.
  • nonlinear transformation 132 distributes less noisy ranges of the sensor signals of captured image 170 over a relatively greater portion of the second bit depth than more noisy ranges of the sensor signals.
  • this embodiment of nonlinear transformation 132 (a) allocates a relatively greater portion of the second bit depth to less noisy ranges of the sensor signals of captured image 170 and (b) allocates a relatively smaller portion of the second bit depth to more noisy ranges of the sensor signals. Since greater sensor signals generally are noisier than smaller sensor signals, nonlinear transformation 132 may allocate a relatively greater portion (e.g., 20%) of the second bit depth to the lower range of the sensor signals (e.g., the lowest 10% of the sensor signals).
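The allocation principle can be sketched by deriving a lookup table from a hypothetical noise model; the shot-plus-read-noise model and its parameters below are assumptions, not a characterization of any actual sensor:

```python
import numpy as np

def transform_from_noise(max_in=1023, max_out=255, gain=0.5, read_noise=2.0):
    """Build a forward LUT that allocates output code density in proportion
    to 1/sigma(v). The noise model sigma(v) = sqrt(gain*v + read_noise**2)
    (shot noise plus read noise) is a hypothetical stand-in for a measured
    sensor noise characterization."""
    v = np.arange(max_in + 1, dtype=float)
    sigma = np.sqrt(gain * v + read_noise**2)
    density = 1.0 / sigma                  # more output codes where noise is low
    lut = np.cumsum(density)
    lut = (lut - lut[0]) / (lut[-1] - lut[0]) * max_out
    return np.round(lut).astype(int)

lut = transform_from_noise()
# With these parameters the lowest 10% of sensor codes, where sigma is
# smallest, receive roughly a quarter of the 8-bit output range.
```

Because sigma grows with signal level under this model, the cumulative-density construction automatically yields a concave curve of the kind described above.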
  • Processing unit 140 includes an inverse transformation 142 that is the inverse transformation of nonlinear transformation 132. Processing unit 140 applies inverse transformation 142 to nonlinear image 172 to generate a re-linearized image 180 that re-encodes the sensor signals at a third bit depth.
  • the third bit depth is greater than the second bit depth, for example 10 bits or 12 bits.
  • Nonlinear transformation 132 enables wide dynamic range image processing, optimized for low contour visibility, even though the bit depth of the output of camera 110 (the second bit depth) may be less than the bit depth of image sensor 120 (the first bit depth). In the absence of nonlinear transformation 132, the bit depth would likely be insufficient to avoid visible contours in the final images, at least for scenes with one or more areas of relatively uniform luminance.
  • scene 160 features a runner 162 in front of a uniformly grey wall 164 backlit by sun 166.
  • the lighting situation causes an apparent gradient in brightness of wall 164.
  • Scene 160 has both very bright areas, e.g., sun 166, and very dark areas, e.g., the least lit portion of wall 164, and a wide dynamic range is therefore needed to produce a natural-looking image of scene 160.
  • a wide dynamic range image 190, generated by an embodiment of system 100 lacking nonlinear transformation 132, would likely show visible contours on wall 164, since the limited bit depth resolution of the output of camera 110 would be insufficient in sensor signal ranges subject to greater contour visibility.
  • nonlinear transformation 132 enables efficient use of the limited bit depth of the output of camera 110 to avoid, or at least minimize, contour visibility in re-linearized image 180.
  • In certain embodiments, processing unit 140 further includes a quantizer 144 that, after inversion of nonlinear transformation 132 by inverse transformation 142, encodes re-linearized image 180 according to a wide dynamic range encoding standard, such as "gamma" or "PQ" and the like, for example as described in ITU-R BT.2100.
  • quantizer 144 encodes re-linearized image 180 for subsequent decoding by a wide dynamic range display configured for low contour visibility.
  • Quantizer 144 may be configured to code a 10,000 nits display luminance range at a bit depth in the range from 10 to 12 bits while nonlinearly allocating bit depth resolution to reduce contour visibility when this "quantized" version of re-linearized image 180 is subsequently decoded and converted to display luminance by a wide dynamic range display (not shown in FIG. 1).
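The "PQ" encoding referenced above follows the SMPTE ST 2084 reference formula; the sketch below uses full-range 10-bit codes as a simplification (broadcast signals typically use narrow-range code values):

```python
# SMPTE ST 2084 (PQ) constants
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(luminance_nits):
    """Map absolute display luminance (0..10000 nits) to a PQ signal in [0, 1]."""
    y = luminance_nits / 10000.0
    num = C1 + C2 * y**M1
    den = 1.0 + C3 * y**M1
    return (num / den) ** M2

def pq_code_10bit(luminance_nits):
    """Full-range 10-bit code value (a simplification of narrow-range coding)."""
    return round(1023 * pq_encode(luminance_nits))
```

Like the sensor-side transformation, the PQ curve spends its code budget nonuniformly: roughly half of the signal range covers luminances below about 100 nits.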
  • In certain embodiments, system 100 is implemented onboard a capture device.
  • FIG. 2 shows a prior art ISP 200.
  • ISP 200 processes a captured image 270 to produce an output image 280.
  • ISP 200 may receive captured image 270 from an image sensor similar to image sensor 120, and output image 280 may be encoded for display on a display or for output as electronic data in a standard format.
  • ISP 200 propagates captured image 270 through a processing pipeline that includes several different functional blocks:
  • a demosaicing block 210 demosaics the incomplete color samples of captured image 270 to produce a full color image at the pixel resolution of captured image 270;
  • a black-level subtraction/white balancing block 220 performs background subtraction and, optionally, white balancing;
  • a denoiser 230 reduces noise;
  • a lens shading corrector 240 corrects for lens shading that causes nonuniform illumination of the image sensor generating captured image 270;
  • a color corrector 250 corrects color; and
  • a conventional encoder 260 encodes the resulting image data in output image 280.
  • As is typical for many off-the-shelf ISPs, conventional encoder 260 is limited to a bit depth of 8 bits. Hence, conventional encoder 260 is not capable of encoding images at the bit depth resolution generally required for decoding and display on a wide dynamic range display without visible contours, at least for scenes or scene portions of relatively uniform luminance. For example, the limited bit depth of conventional encoder 260 prevents conventional encoder 260 from incorporating the functionality of quantizer 144.
  • FIG. 3 illustrates one example system 300 for bit-depth efficient processing of captured image 170.
  • System 300 includes an ISP 330 and a processing unit 340, which are embodiments of ISP 130 and processing unit 140, respectively.
  • System 300 may be coupled with image sensor 120 to form an embodiment of system 100.
  • ISP 330 includes an encoder 332 that applies nonlinear transformation 132 to captured image 170, as discussed above in reference to FIG. 1, to produce nonlinear image 172.
  • ISP 330 further includes a preprocessing unit 336 that processes captured image 170 prior to application of nonlinear transformation 132 by encoder 332.
  • Preprocessing unit 336 may include one or more of demosaicing block 210, black-level subtraction/white balancing block 220, denoiser 230, lens shading corrector 240, and color corrector 250.
  • ISP 330 is a modified version of ISP 200, wherein conventional encoder 260 is replaced by encoder 332.
  • The output of ISP 330, for example implemented as an output interface 334, is limited to the second bit depth.
  • Processing unit 340 includes an inverter 342 that stores inverse transformation 142 and applies inverse transformation 142 to nonlinear image 172 to produce re-linearized image 180.
  • processing unit 340 further includes a postprocessor 344 that processes re-linearized image 180 before processing unit 340 outputs the post-processed re-linearized image 180 as output image 382.
  • Post-processor 344 may include quantizer 144 such that output image 382 is encoded for subsequent decoding by a wide dynamic range display configured for low contour visibility.
  • Embodiments of processing unit 340 that do not include post-processor 344 may output re-linearized image 180 as output image 382.
  • processing unit 340 stores nonlinear transformation 132 and communicates nonlinear transformation 132 to encoder 332. This embodiment of processing unit 340 may communicate nonlinear transformation 132 to encoder 332 once for subsequent use of encoder 332 on several captured images 170. In one example, processing unit 340 communicates nonlinear transformation 132 to encoder 332 during an initial setup procedure. Alternatively, this embodiment of processing unit 340 may communicate nonlinear transformation 132 to encoder 332 each time a captured image 170 is processed by encoder 332. In another embodiment, encoder 332 stores nonlinear transformation 132.
  • An embodiment of system 300 stores a plurality of nonlinear transformations 132.
  • Each nonlinear transformation 132 is configured for use on captured images 170 captured under a specific respective capture mode. Examples of capture modes include outdoor mode, indoor mode, portrait mode, sport mode, landscape mode, night portrait mode, and macro mode.
  • ISP 330 is configured to receive captured image 170 at a bit depth of more than 8 bits, such as 10 or 12 bits, and output nonlinear image 172 at a bit depth of 8 bits, while processing unit 340 is configured to process nonlinear image 172 and generate re-linearized image 180 and output image 382 at a bit depth of more than 8 bits, such as 10 or 12 bits.
  • ISP 330 may be a standalone system configured to cooperate with a processing unit 340 provided by a third party.
  • processing unit 340 may be a standalone system configured to cooperate with an ISP 330 provided by a third party.
  • inverter 342, inverse transformation(s) 142, and nonlinear transformation(s) 132 may be provided as a software product, such as machine-readable instructions encoded in non-transitory memory, configured for implementation with a third-party processor to form an embodiment of processing unit 340.
  • FIG. 4 illustrates one example method 400 for bit-depth efficient image processing.
  • Method 400 is performed by system 300, for example.
  • In a step 410, method 400 applies a non-linear transformation to a captured image to produce a non-linear image.
  • the captured image has sensor signals encoded at a first bit depth
  • step 410 uses the nonlinear transformation to re-encode the image signals of the captured image in the nonlinear image at a second bit depth that may be less than the first bit depth, while optimizing allocation of bit depth resolution in the nonlinear image for low contour visibility.
  • encoder 332 applies nonlinear transformation 132 to captured image 170, optionally preprocessed by preprocessing unit 336, to produce nonlinear image 172.
  • Step 410 may include a step 412 of applying a nonlinear transformation that corresponds to one of a plurality of capture modes, as discussed above in reference to FIG. 3.
  • In a step 420, method 400 communicates the nonlinear image to a processing unit.
  • ISP 330 communicates nonlinear image 172 to processing unit 340 via output interface 334.
  • Step 430 inverts the nonlinear transformation applied in step 410.
  • Step 430 applies, to the nonlinear image, a transformation that is inverse to the nonlinear transformation used to produce the nonlinear image, so as to transform the nonlinear image to a re-linearized image.
  • inverter 342 applies, to nonlinear image 172, an inverse transformation 142 that is inverse to a nonlinear transformation 132 applied in step 410, to produce re-linearized image 180.
  • Step 430 includes a step 432 of applying the inverse transformation at a third bit depth that is greater than the second bit depth.
  • method 400 further includes a step 440 that post-processes the re-linearized image.
  • post-processor 344 processes re-linearized image 180 to produce output image 382.
  • Step 440 may include a step 442 of encoding the re-linearized image for decoding by a display or for output as a digital file, for example according to an industry standard.
  • quantizer 144 encodes the data of re-linearized image 180 as output image 382.
  • Step 442 may include a step 444 of transferring the representation of the re-linearized image from scene-referred sensor-signal values to display-referred luminance values.
  • quantizer 144 translates the re-linearized sensor signal values of re-linearized image 180 from a scale of code values characterizing sensor signals to a scale of code values characterizing luminance of a display on which output image 382 may be displayed. Without departing from the scope hereof, step 444 may be performed prior to the encoding of step 442.
  • FIGS. 5A and 5B show example nonlinear and inverse transformations that may be used in systems 100 and 300 and in method 400.
  • FIG. 5A is a plot 502 of a nonlinear transformation 500 that is an example of nonlinear transformation 132.
  • FIG. 5B is a plot 552 of an inverse transformation 550 that is an example of inverse transformation 142.
  • Inverse transformation 550 is the inverse transformation of nonlinear transformation 500.
  • FIGS. 5A and 5B are best viewed together in the following description.
  • Nonlinear transformation 500 transforms initial code values 510 to optimized code values 520.
  • Initial code values 510 are the integer code values that encode sensor signals, such as those of image sensor 120, in captured image 170, optionally preprocessed by preprocessing unit 336.
  • Initial code values 510 are integer code values that range from zero to a maximum code value 515 defined by the first bit depth.
  • Optimized code values 520 range from zero to a maximum code value 525 defined by the second bit depth.
  • Maximum code value 525 may be less than maximum code value 515.
  • Optimized code values 520 re-encode initial code values 510, according to nonlinear transformation 500, to nonlinearly redistribute the sensor signals of captured image 170 across the second bit depth to minimize contour visibility, so as to produce nonlinear image 172.
  • initial code values 510 are encoded at a bit depth of 10 bits with maximum code value 515 being 1023
  • optimized code values 520 are encoded at a bit depth of 8 bits with maximum code value 525 being 255.
  • nonlinear transformation 500 (a) allocates a relatively greater portion of the second bit depth (characterized by maximum code value 525) to less noisy ranges of the sensor signals of captured image 170 (characterized by maximum code value 515) and (b) allocates a relatively smaller portion of the second bit depth to more noisy ranges of the sensor signals.
  • Inverse transformation 550 transforms optimized code values 520 of nonlinear image 172 to re-linearized code values 570 that characterize re-linearized sensor signals at the third bit depth, so as to invert nonlinear transformation 500 and produce re- linearized image 180.
  • Re-linearized code values 570 are integer code values that range from zero to a maximum code value 575 defined by the third bit depth. Maximum code value 575 is greater than maximum code value 525.
• In one example, optimized code values 520 are encoded at a bit depth of 8 bits with maximum code value 525 being 255, and re-linearized code values 570 are encoded at a bit depth of 10 bits (or 12 bits) with maximum code value 575 being 1023 (or 4095).
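As an illustration of the forward and inverse mappings of FIGS. 5A and 5B, a hypothetical gamma-style transformation pair can be sketched as follows. The exponent 1/2.2 and the 10-bit/8-bit/10-bit depths are assumptions chosen for illustration only; the actual nonlinear transformation 500 is derived from noise-dependent contour visibility, as discussed later.

```python
def encode(c10, gamma=1/2.2, max_in=1023, max_out=255):
    """Gamma-style stand-in for nonlinear transformation 500: maps a 10-bit
    initial code value to an 8-bit optimized code value, allocating more
    output codes to the darker signal range."""
    return round(max_out * (c10 / max_in) ** gamma)

def decode(c8, gamma=1/2.2, max_in=255, max_out=1023):
    """Stand-in for inverse transformation 550: maps the 8-bit code back to a
    re-linearized code value at a higher (here, 10-bit) bit depth."""
    return round(max_out * (c8 / max_in) ** (1 / gamma))

# Endpoints map exactly; mid-range values round-trip to within a few codes.
assert encode(0) == 0 and encode(1023) == 255
assert abs(decode(encode(512)) - 512) <= 2
```

A practical implementation would tabulate both mappings as look-up tables, as suggested for digital inverting circuit 1350 below.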
  • FIG. 6 illustrates one example system 600 for bit-depth efficient processing of captured image 170, which is segmented in a hardware domain 692 and an application domain 690.
  • System 600 is an embodiment of system 300.
  • System 600 includes an ISP 630 and a processing unit 640, which are embodiments of ISP 330 and processing unit 340, respectively.
  • ISP 630 is implemented in hardware domain 692
  • processing unit 640 is implemented in application domain 690.
  • System 600 is configured to receive captured image 170 from an image sensor 120 located in hardware domain 692. Without departing from the scope hereof, system 600 may include image sensor 120.
  • system 600 is implemented onboard a cellular phone having a camera.
  • the camera is in the hardware domain of the cellular phone and includes image sensor 120 and ISP 630.
  • Processing unit 640 is implemented in the application domain of the cellular phone.
  • the "application domain" of a cellular phone refers to a portion of the cellular phone capable of accommodating cellular phone applications ("apps").
  • the application domain of the cellular phone may be open to installation of cellular phone applications provided by other parties than the manufacturer of the cellular phone.
  • ISP 630 includes an encoder 632 configured to receive one or more nonlinear transformations 132 from processing unit 640.
  • Encoder 632 is an embodiment of encoder 332.
  • ISP 630 may further include one or both of preprocessing unit 336 and output interface 334.
  • Processing unit 640 includes inverter 342, with one or more inverse transformations 142. Processing unit 640 may further include post-processor 344, for example including quantizer 144. Processing unit 640 stores one or more nonlinear transformations 132, and is configured to communicate nonlinear transformations 132 to encoder 632. Encoder 632 may be similar to encoder 260 except for being configured to receive one or more nonlinear transformations 132 from processing unit 640.
  • processing unit 640 communicates one or more nonlinear transformations 132 to encoder 632. This communication may be performed (a) once during initial configuration of encoder 632 after or during installation of inverter 342 and nonlinear transformation(s) 132 in application domain 690, or (b) in association with processing of each captured image 170 in hardware domain 692.
  • processing unit 640 includes a plurality of capture-mode-specific nonlinear transformations 132, each associated with a different capture mode of captured image 170.
• In one embodiment, ISP 630 acts as the master.
• In another embodiment, processing unit 640 acts as the master.
• processing unit 640 communicates all capture-mode-specific nonlinear transformations 132 to encoder 632, and encoder 632 applies a particular capture-mode-specific nonlinear transformation 132 to captured image 170 according to the capture mode under which captured image 170 is captured.
• Nonlinear image 172 generated by encoder 632 is accompanied by metadata 672 indicating (a) the capture mode of the captured image 170 associated with nonlinear image 172, (b) which nonlinear transformation 132 was applied by encoder 632 to produce nonlinear image 172, or (c) both the capture mode of the associated captured image 170 and which nonlinear transformation 132 was used to produce nonlinear image 172. Further, in this embodiment, inverter 342 applies the appropriate inverse transformation 142 according to metadata 672. In the embodiment with processing unit 640 acting as master, processing unit 640 includes a capture mode controller 660.
  • Capture mode controller 660 controls the capture mode of image sensor 120 (either directly, as shown in FIG. 6, or via ISP 630) and capture mode controller 660 communicates an associated capture-mode-specific nonlinear transformation 132 to encoder 632.
  • capture mode controller 660 may further be communicatively coupled with inverter 342 to control which inverse transformation 142 inverter 342 applies to nonlinear image 172.
  • nonlinear image 172 may be accompanied by metadata 672, and inverter 342 applies the appropriate inverse transformation 142 according to metadata 672.
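The metadata-driven selection of an inverse transformation by inverter 342 can be sketched as follows. The capture-mode names and the toy look-up-table values are purely illustrative assumptions, not values from this disclosure.

```python
# Toy 5-entry LUT fragments standing in for capture-mode-specific inverse
# transformations 142 (real LUTs would cover the full input code range).
INVERSE_LUTS = {
    "daylight": [0, 4, 16, 36, 64],
    "low_light": [0, 2, 8, 18, 32],
}

def invert(nonlinear_codes, metadata):
    # Select the inverse transformation indicated by metadata (cf. metadata 672).
    lut = INVERSE_LUTS[metadata["capture_mode"]]
    return [lut[c] for c in nonlinear_codes]

assert invert([0, 2, 4], {"capture_mode": "daylight"}) == [0, 16, 64]
```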
  • ISP 630 may be a standalone system configured to cooperate with a processing unit 640 provided by a third party.
  • processing unit 640 may be a standalone system configured to cooperate with an ISP 630 provided by a third party.
• inverter 342, inverse transformation(s) 142, and nonlinear transformation(s) 132, optionally together with capture mode controller 660, may be provided as a software product, such as machine-readable instructions encoded in non-transitory memory, configured for implementation with a third-party processor to form an embodiment of processing unit 640.
• In one example, system 600 is implemented onboard an Android phone. In another example, system 600 is implemented on an iPhone.
  • FIG. 7 illustrates one example method 700 for bit-depth efficient image processing effectuated from an application domain, such as the application domain of a cellular phone.
  • Method 700 is, for example, performed by processing unit 640.
• In a step 710, method 700 communicates, to an ISP, at least one non-linear transformation.
  • Each nonlinear transformation is configured to, when applied by the ISP to a captured image having sensor signals encoded at a first bit depth, produce a nonlinear image.
  • This nonlinear image re-encodes the captured image at a second bit depth that may be less than the first bit depth, while optimizing allocation of bit depth resolution in the nonlinear image for low contour visibility.
• In one example of step 710, processing unit 640 communicates nonlinear transformation(s) 132 to encoder 632 of ISP 630.
• Step 710 may include a step 712 of communicating one or more non-linear transformations configured to distribute bit depth resolution in the nonlinear image according to a sensor-signal-dependent contour visibility threshold.
  • each nonlinear transformation 132 is configured to distribute bit depth resolution in the nonlinear image according to a sensor-signal-dependent contour visibility threshold.
• the sensor-signal-dependent contour visibility threshold defines, for each sensor signal level, the lowest sensor signal contrast that is visible in the presence of noise.
• the noise may be the native noise of the image sensor used to capture the image to which the nonlinear transformation is applied (e.g., image sensor 120), or the noise may include both the native noise of the image sensor and noise introduced by other processing of the captured image prior to application of the nonlinear transformation.
  • noise tends to reduce contour visibility such that more noisy ranges of sensor signals have a higher contour visibility threshold than less noisy ranges of sensor signals.
• the nonlinear transformation(s) of step 712 may therefore distribute less noisy ranges of the sensor signals of a captured image (e.g., captured image 170) over a relatively greater portion of the second bit depth than more noisy ranges of the sensor signals.
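One way to realize such a distribution, offered here only as an illustrative sketch and not as the patented construction, is to make the local slope of the transformation inversely proportional to the contour visibility threshold and normalize the cumulative sum to the second bit depth:

```python
def build_transform(threshold, max_in=1023, max_out=255):
    """Build a monotonic code mapping whose local slope is ~1/threshold(c),
    so less noisy ranges (low threshold) receive more output codes."""
    weights = [1.0 / threshold(c) for c in range(max_in + 1)]
    cum, total = [], 0.0
    for w in weights:
        total += w
        cum.append(total)
    return [round(max_out * c / total) for c in cum]

# Hypothetical threshold rising with signal, mimicking shot-noise masking.
t = build_transform(lambda c: (c + 16) ** 0.5)
assert t[-1] == 255                                 # full output range used
assert all(t[i] <= t[i + 1] for i in range(1023))   # monotonic mapping
```

With this assumed threshold, the low (less noisy) end of the input range receives far more output codes per input code than the high (shot-noise-dominated) end.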
• In a step 720, method 700 receives a nonlinear image from the ISP, generated by the ISP by applying a nonlinear transformation, communicated to the ISP in step 710, to an image captured by an image sensor.
  • processing unit 640 receives nonlinear image 172 from ISP 630, wherein nonlinear image 172 has been generated by ISP 630 at least through application of nonlinear transformation 132 by encoder 632.
• After step 720, method 700 proceeds to perform step 430 of method 400.
  • processing unit 640 performs step 430 using inverter 342 of processing unit 640.
  • Method 700 may further include step 440 of method 400, for example performed by post-processor 344 of processing unit 640.
  • FIG. 8 illustrates one example method 800 for bit-depth efficient image processing effectuated from an application domain, such as the application domain of a cellular phone, which utilizes capture-mode-specific nonlinear transformations.
  • Method 800 is, for example, performed by processing unit 640.
• In a step 810, method 800 communicates a plurality of capture-mode-specific non-linear transformations to an ISP.
• Each capture-mode-specific non-linear transformation is configured to, when applied by the ISP to a captured image having sensor signals encoded at a first bit depth, produce a nonlinear image that re-encodes the captured image at a second bit depth that may be less than the first bit depth, while optimizing allocation of bit depth resolution in the nonlinear image for low contour visibility.
  • processing unit 640 communicates a plurality of capture-mode-specific nonlinear transformations 132 to encoder 632 of ISP 630.
• Step 810 may include a step 812 of communicating capture-mode-specific non-linear transformations configured to distribute bit depth resolution, in the nonlinear image, according to a sensor-signal-dependent contour visibility threshold.
  • Step 812 is similar to step 712 but pertaining to the capture-mode-specific non-linear transformations.
• In a step 820, method 800 receives a nonlinear image and a corresponding capture-mode specification from the ISP.
• processing unit 640 receives nonlinear image 172 and associated metadata 672 indicating the capture mode associated with nonlinear image 172 (or, alternatively, indicating which capture-mode-specific nonlinear transformation 132 was used to produce nonlinear image 172).
• In a step 830, method 800 inverts the nonlinear transformation used to produce the nonlinear image received in step 820.
• Step 830 applies, to the nonlinear image, a capture-mode-specific inverse transformation that is inverse to the capture-mode-specific nonlinear transformation used to produce the nonlinear image, so as to transform the nonlinear image to a re-linearized image.
  • Step 830 selects the appropriate capture-mode-specific inverse transformation based upon the capture-mode specification received along with the nonlinear image in step 820.
• In one example of step 830, inverter 342 of processing unit 640 selects a capture-mode-specific inverse transformation 142 based upon metadata 672 and applies this capture-mode-specific inverse transformation 142 to nonlinear image 172, so as to produce re-linearized image 180.
  • Step 830 includes step 432 of applying the inverse transformation at a third bit depth that is greater than the second bit depth.
  • Method 800 may further include step 440 of method 400, for example performed by post-processor 344 of processing unit 640.
  • FIG. 9 illustrates one example method 900 for bit-depth efficient image processing effectuated from an application domain, such as the application domain of a cellular phone, which controls the capture mode and utilizes capture-mode-specific nonlinear transformations.
  • Method 900 is, for example, performed by an embodiment of processing unit 640 that includes capture mode controller 660.
• In a step 910, method 900 communicates, to a camera, (a) a specification of a capture mode to be used by an image sensor of the camera to capture an image having sensor signals encoded at a first bit depth and (b) an associated capture-mode-specific non-linear transformation.
• the capture-mode-specific non-linear transformation is configured to, when applied by an image signal processor of the camera to the captured image, produce a nonlinear image that re-encodes the captured image at a second bit depth that may be less than the first bit depth, while optimizing allocation of bit depth resolution in the nonlinear image for low contour visibility.
  • capture mode controller 660 communicates specification of a capture mode to image sensor 120, either directly or via ISP 630.
  • capture mode controller 660 also communicates a corresponding capture-mode-specific nonlinear transformation 132 to encoder 632 of ISP 630.
• Step 910 may include a step 912 of communicating a capture-mode-specific non-linear transformation that is configured to distribute bit depth resolution, in the nonlinear image, according to a sensor-signal-dependent contour visibility threshold.
  • Step 912 is similar to step 812 except for communicating only a single capture-mode-specific non-linear transformation.
  • method 900 performs step 720 of method 700 and step 830 of method 800.
  • inverter 342 receives the capture mode specification from capture mode controller 660.
  • inverter 342 receives the capture mode specification from ISP 630 via metadata 672.
  • Method 900 may further include step 440 of method 400, for example performed by post-processor 344 of processing unit 640.
  • FIG. 10 illustrates one example method 1000 for bit-depth efficient image processing effectuated from an application domain, such as the application domain of a cellular phone, which generates a high-dynamic-range (HDR) image.
  • Method 1000 is, for example, performed by processing unit 640.
  • Method 1000 initially performs step 710, as discussed above in reference to FIG. 7.
• Next, method 1000 performs a step 1020 of receiving a plurality of nonlinear images from the ISP, wherein each nonlinear image has been captured at a different brightness setting.
  • processing unit 640 receives a plurality of nonlinear images 172 from ISP 630, wherein each nonlinear image 172 has been captured under a different brightness setting.
• In a step 1030, method 1000 applies, to each nonlinear image received in step 1020, an inverse transformation that is inverse to the nonlinear transformation used to produce the nonlinear image, so as to transform each nonlinear image to a respective re-linearized image.
  • inverter 342 applies an inverse transformation 142 to each nonlinear image 172 to produce a respective re-linearized image 180.
  • step 1030 includes step 432 of applying the inverse transformation at a third bit depth that is greater than the second bit depth.
• In a step 1040, method 1000 post-processes the re-linearized images.
  • post-processor 344 of processing unit 640 processes re-linearized images 180 generated by inverter 342 in step 1030.
  • Step 1040 includes a step 1041 of combining the re-linearized images to form a single HDR image that has greater dynamic range than any one of the individual captured images.
• Step 1041 may utilize HDR image combination algorithms known in the art.
  • Step 1040 may further include a step 1042 of encoding the HDR image for decoding by a display or for output as a digital file.
  • Step 1042 is similar to step 442, apart from being applied to the HDR image.
  • Step 1042 may include or be preceded by step 444.
  • method 1000 may further utilize capture-mode-specific nonlinear transformations, as discussed above in reference to FIGS. 8 and 9.
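Step 1041 can be sketched under the simplifying assumption of exposure-weighted averaging; practical HDR combination algorithms additionally handle saturated pixels and noise weighting, which this toy sketch omits.

```python
def combine_hdr(images, exposures):
    """images: lists of re-linearized pixel values (one list per capture);
    exposures: relative exposure of each capture. Each image is normalized
    by its exposure so all captures share a common radiometric scale, then
    the normalized values are averaged into one HDR image."""
    n = len(images[0])
    out = []
    for i in range(n):
        out.append(sum(img[i] / e for img, e in zip(images, exposures)) / len(images))
    return out

short = [10, 100, 250]   # toy pixel values captured at a short exposure
long_ = [40, 400, 1000]  # same scene at 4x the exposure
hdr = combine_hdr([short, long_], [1.0, 4.0])
assert hdr == [10.0, 100.0, 250.0]
```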
  • FIG. 11 illustrates one example computer 1100 for bit-depth efficient image processing.
  • Computer 1100 is an embodiment of processing unit 340 or processing unit 640.
  • Computer 1100 may perform any one of methods 700, 800, 900, and 1000.
  • computer 1100 is implemented in the application domain of a cellular phone.
  • Computer 1100 includes a processor 1110, a non-transitory memory 1120, and an interface 1190.
  • Processor 1110 is communicatively coupled with each of memory 1120 and interface 1190.
  • Memory 1120 includes machine-readable instructions 1130, data 1170, and dynamic data storage 1180.
  • Machine-readable instructions 1130 include application domain instructions 1140 which, in turn, include inverting instructions 1142.
  • Application domain instructions 1140 may further include quantization instructions 1144 and/or HDR instructions 1146.
  • application domain instructions 1140 include hardware instructions 1150.
  • Data 1170 includes one or more nonlinear transformations 132 and corresponding inverse transformations 142.
  • Data 1170 may further include a quantization specification 1172 and/or a plurality of capture mode specifications 1174.
• Upon execution by processor 1110, hardware instructions 1150 retrieve one or more nonlinear transformations 132 from data 1170 and output nonlinear transformation(s) 132 to an ISP via interface 1190.
  • Hardware instructions 1150 may be further configured to, upon execution by processor 1110, retrieve one of capture mode specifications 1174 from data 1170 and output this capture mode specification 1174 to an image sensor or ISP via interface 1190.
• Upon execution by processor 1110, inverting instructions 1142 receive a nonlinear image 172 via interface 1190, retrieve an inverse transformation 142 from data 1170, and apply inverse transformation 142 to nonlinear image 172 to produce re-linearized image 180. Inverting instructions 1142 may store nonlinear image 172 and/or re-linearized image 180 to dynamic data storage 1180. Processor 1110 and inverting instructions 1142 cooperate to form an embodiment of inverter 342.
• Upon execution by processor 1110, quantization instructions 1144 retrieve re-linearized image 180 from dynamic data storage 1180, retrieve quantization specification 1172 from data 1170, re-encode re-linearized image 180 in output image 382 according to quantization specification 1172, and output output image 382 via interface 1190 (or, alternatively, store output image 382 to dynamic data storage 1180).
  • Processor 1110 and quantization instructions 1144 cooperate to form an embodiment of quantizer 144 as implemented in post-processor 344.
• Upon execution by processor 1110, HDR instructions 1146 retrieve a plurality of re-linearized images 180 from dynamic data storage 1180 and combine these re-linearized images 180 to produce an HDR image. HDR instructions 1146 may be configured to output the HDR image via interface 1190, or store the HDR image to dynamic data storage 1180 for further processing by processor 1110, for example according to quantization instructions 1144. Processor 1110 and HDR instructions 1146 cooperate to form an embodiment of post-processor 344 configured to perform step 1041 of method 1000.
  • machine-readable instructions 1130 and data 1170 may be provided as a stand-alone software product configured for implementation on a third-party computer that has (a) a non-transitory memory for storage of machine-readable instructions 1130 and data 1170 and (b) a processor for execution of machine-readable instructions 1130.
  • FIG. 12 shows a prior-art image sensor 1200.
• Image sensor 1200 includes a pixel array 1210 having a plurality of photosensitive pixels (for clarity of illustration, individual pixels are not shown in FIG. 12). Each pixel generates an analog electrical signal 1290 in response to incident light.
  • Image sensor 1200 further includes readout circuitry 1220 for reading out analog signals 1290 from pixel array 1210.
  • Readout circuitry 1220 includes at least one analog-to-digital converter (ADC) 1224 that converts analog signals 1290 to respective digital signals 1292.
• the bit depth of digital signals 1292 is the same as the bit depth of each ADC 1224. For example, if image sensor 1200 is intended to output digital signals at a bit depth of 10 bits, the bit depth of each ADC 1224 must be 10 bits.
• the power consumed by each ADC 1224 is a function of the bit depth of ADC 1224, wherein greater bit depth requires more power.
• In many scenarios, image sensor 1200 is integrated in a cellular phone and generally relies on a battery for power. In such scenarios, a 10-bit ADC 1224 would drain the battery faster than an 8-bit ADC 1224.
• Thus, there is a tradeoff between bit depth and battery life.
• Furthermore, the cost of ADC 1224 increases with bit depth.
  • Image sensor 1200 may further include an analog denoiser 1222 that reduces noise of analog signals 1290 prior to analog-to-digital conversion by ADC(s) 1224.
  • FIG. 13 illustrates one example image sensor 1300 with bit-depth efficient analog-to-digital image conversion.
  • Image sensor 1300 optimizes allocation of bit depth resolution of one or more ADCs to output digital image signals at a greater bit depth than the bit depth of the ADC(s).
  • Image sensor 1300 may, but need not, be implemented in system 100 as image sensor 120.
  • Image sensor 1300 includes pixel array 1210 and readout circuitry 1320.
  • Readout circuitry 1320 includes a pre-shaping circuit 1330, at least one reduced-bit-depth ADC 1340 having a first bit depth, and a digital inverting circuit 1350.
• Pre-shaping circuit 1330 is an analog circuit that includes a nonlinear transformation 1332, and digital inverting circuit 1350 includes an inverse transformation 1352 that is inverse to nonlinear transformation 1332.
  • readout circuitry 1320 further includes analog denoiser 1222.
  • Pre-shaping circuit 1330 may implement nonlinear transformation 1332 as one or more gamma and/or logarithmic functions, or other analog function blocks known in the art.
  • image sensor 1300 further includes a digital inverting circuit 1350 containing an inverse transformation 1352 that is inverse to nonlinear transformation 1332.
  • Digital inverting circuit 1350 may implement inverse transformation 1352 as a look-up table.
  • digital inverting circuit 1350 is implemented onboard an ISP communicatively coupled with image sensor 1300, such as ISP 330 or ISP 200.
  • pixel array 1210 generates analog signals 1290.
  • Pre-shaping circuit 1330 applies nonlinear transformation 1332 to analog signals 1290 (optionally after noise reduction by analog denoiser 1222) to produce pre-shaped analog signals 1390.
  • Reduced-bit-depth ADC 1340 converts pre-shaped analog signals 1390 to digital signals 1392 at the bit depth of reduced-bit-depth ADC 1340.
  • Digital inverting circuit 1350 applies inverse transformation 1352 to digital signals 1392 to invert the nonlinear transformation applied by pre-shaping circuit 1330 to generate increased-bit-depth digital signals 1394 having a second bit depth.
  • the second bit depth is greater than the first bit depth. In one example, the first bit depth is 8 bits and the second bit depth is 10 bits.
  • Nonlinear transformation 1332 facilitates efficient use of the limited bit depth resolution of reduced-bit-depth ADC 1340.
  • Nonlinear transformation 1332 redistributes the levels of analog signals 1290 to optimize the allocation of bit depth resolution by reduced-bit-depth ADC 1340 for low contour visibility in captured images encoded in increased-bit-depth digital signals 1394.
  • the functional form of nonlinear transformation 1332 may be similar to the functional form of nonlinear transformation 132. For example, since noise tends to reduce contour visibility, one embodiment of nonlinear transformation 1332 distributes less noisy ranges of analog signals 1290 (optionally accounting for noise reduction by analog denoiser 1222) over a relatively greater portion of the first bit depth than more noisy ranges of analog signals 1290.
  • this embodiment of nonlinear transformation 1332 (a) allocates a relatively greater portion of the first bit depth to less noisy ranges of analog signals 1290 and (b) allocates a relatively smaller portion of the first bit depth to more noisy ranges of analog signals 1290. Since greater analog signals 1290 generally are noisier than smaller analog signals 1290, nonlinear transformation 1332 may allocate a relatively greater portion (e.g., 20%) of the first bit depth to the lower range of analog signals 1290 (e.g., the lowest 10% of analog signals 1290).
  • image sensor 1300 may achieve the same bit depth with minimal or no adverse effect on contour visibility while operating with an ADC of lower bit depth than ADC 1224. Consequently, by virtue of nonlinear transformation 1332, image sensor 1300 may achieve image quality comparable to that of prior art image sensor 1200 at lower cost and with lower power consumption than prior art image sensor 1200. Image sensor 1300 may reduce power consumption by as much as a factor of four compared to the existing state of the art linear ADC implementation.
  • Embodiments of image sensor 1300 may further facilitate an increased frame rate since the image data output by image sensor 1300 in the form of digital signals 1392 are at a lower bit depth than digital signals 1292.
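The benefit of pre-shaping circuit 1330 can be illustrated with a hedged numerical sketch, assuming a square-root shaping law (chosen here only because shot noise grows as the square root of the signal; the actual nonlinear transformation 1332 may differ):

```python
# Simulation of the FIG. 13 signal chain with an assumed sqrt pre-shaping law.
def adc(v, bits):
    """Ideal ADC on a normalized [0, 1] analog signal."""
    return min(round(v * (2 ** bits - 1)), 2 ** bits - 1)

def pre_shaped_readout(v):
    """v: normalized analog signal in [0, 1]. Pre-shape, digitize at 8 bits,
    then digitally invert to a 10-bit code (cf. circuits 1330, 1340, 1350)."""
    d8 = adc(v ** 0.5, 8)            # pre-shaping circuit + reduced-bit-depth ADC
    x = d8 / 255.0
    return round((x * x) * 1023)     # digital inverting circuit -> 10-bit code

# Dark signals keep fine granularity despite the 8-bit ADC, and the full
# 10-bit output range is reached at full scale.
assert pre_shaped_readout(0.0) == 0
assert 0 < pre_shaped_readout(0.001) < pre_shaped_readout(0.002)
assert pre_shaped_readout(1.0) == 1023
```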
• In one embodiment, readout circuitry 1320 includes a reduced-bit-depth ADC 1340 for each pixel of pixel array 1210.
  • This embodiment of readout circuitry 1320 may also be configured with a pre-shaping circuit 1330 and, optionally, a digital inverting circuit 1350 for each pixel of pixel array 1210.
• In another embodiment, readout circuitry 1320 includes fewer reduced-bit-depth ADCs 1340 than there are pixels in pixel array 1210.
• This embodiment of readout circuitry 1320 uses multiplexing such that each reduced-bit-depth ADC 1340 sequentially reads analog signals 1290 from different pixels of pixel array 1210.
  • This embodiment of readout circuitry 1320 may similarly multiplex the processing of analog signals 1290 by pre-shaping circuit 1330 and, optionally, digital inverting circuit 1350.
  • FIGS. 14A and 14B illustrate one example image sensor 1400 with bit- depth efficient analog-to-digital conversion in column-specific readout circuitry.
  • Each pixel column of image sensor 1400 has associated column- specific readout circuitry that optimizes allocation of bit depth resolution of an ADC to output digital image signals at a greater bit depth than the bit depth of the ADC.
  • Image sensor 1400 is an embodiment of image sensor 1300.
  • FIG. 14A shows a schematic top plan view of image sensor 1400.
  • FIG. 14B is a block diagram of one instance of column-specific readout circuitry 1420.
  • FIGS. 14A and 14B are best viewed together in the following description.
  • Image sensor 1400 includes a pixel array 1410 having a plurality of pixels 1412 arranged in columns 1414. For clarity of illustration, not all pixels 1412 and not all columns 1414 are labeled in FIG. 14A.
  • Each column 1414 is communicatively coupled with respective column-specific readout circuitry 1420 that sequentially reads out analog signals 1290 of each pixel 1412 in column 1414.
  • Each instance of column-specific readout circuitry 1420 includes pre-shaping circuit 1330 and a single reduced-bit-depth ADC 1340, communicatively coupled as discussed above in reference to FIG. 13.
  • Each instance of column- specific readout circuitry 1420 may further include digital inverting circuit 1350.
• Instances of column-specific readout circuitry 1420 may have some column-to-column differences; for example, nonlinear transformation 1332 may be calibrated on a column-specific basis.
  • FIG. 15 illustrates one example method 1500 for bit-depth efficient analog-to-digital conversion in an image sensor.
  • Method 1500 may be performed by image sensor 1300 or image sensor 1400.
• In a step 1510, method 1500 receives a plurality of analog signals representing light detected by a respective plurality of photosensitive pixels of an image sensor.
  • readout circuitry 1320 receives analog signals 1290 from pixel array 1210.
• In another example, each instance of column-specific readout circuitry 1420 of image sensor 1400 receives analog signals 1290 from a corresponding column 1414 of pixel array 1410.
• In a step 1520, method 1500 applies a nonlinear transformation to the analog signals to optimize the allocation of bit depth resolution to digital signals, generated through subsequent analog-to-digital conversion, for low contour visibility.
  • pre-shaping circuit 1330 of readout circuitry 1320 applies nonlinear transformation 1332 to each analog signal 1290 received from pixel array 1210.
  • pre-shaping circuit 1330 of each instance of column-specific readout circuitry 1420 of image sensor 1400 applies nonlinear transformation 1332 to each analog signal 1290 received from a corresponding column 1414 of pixel array 1410.
  • Step 1520 may include a step 1522 of applying one or more gamma and/or logarithmic functions.
  • pre-shaping circuit 1330 propagates each analog signal 1290 through one or more analog circuits that each applies a gamma or a logarithmic function.
• In a step 1530, method 1500 converts the pre-shaped analog signals, generated in step 1520, to digital signals at a first bit depth.
  • each reduced-bit-depth ADC 1340 of either readout circuitry 1320 or of column-specific readout circuitry 1420 converts a pre-shaped analog signal 1390 to a corresponding digital signal 1392.
  • method 1500 further includes a step 1540 of inverting the nonlinear transformation of step 1520 by applying a corresponding inverse transformation to the digital signals at a second bit depth that is generally greater than the first bit depth.
• digital inverting circuit 1350 of readout circuitry 1320, or of an ISP communicatively coupled with readout circuitry 1320, applies inverse transformation 1352 to each digital signal 1392 received from reduced-bit-depth ADC 1340.
  • digital inverting circuit 1350 of each instance of column-specific readout circuitry 1420 applies inverse transformation 1352 to each digital signal 1392 received from reduced-bit-depth ADC 1340.
  • FIGS. 16-24B illustrate an example of determination of nonlinear transformation 132.
  • FIG. 16 shows the results of a study where a cohort of observers were presented with a set of images each having a shallow brightness gradient.
• the images sampled three parameters: the bit depth at which the image is encoded, the average image luminance, and the level of Gaussian noise added to the shallow brightness gradient.
• For each average luminance and each noise level, the observers were asked to select the minimum bit depth required to avoid visible contours in the images.
  • FIG. 16 shows a group of curves 1600.
  • Each curve 1600 is associated with a specific respective average luminance and indicates the required bit depth 1620 as a function of the Gaussian noise level 1610, wherein the Gaussian noise level is indicated as standard deviation in 12 bit code values. It is evident from FIG. 16 that the required bit depth is approximately inversely proportional to the Gaussian noise level and mostly independent of the average luminance.
  • Captured image 170 may include noise from a variety of sources. At low sensor signal levels, the noise may be dominated by signal-independent contributions such as dark current, readout noise, and Johnson noise. At greater sensor signal levels, the noise is generally dominated by photon shot noise. Photon shot noise originates from the discrete nature of light that translates to electronic sensor signals through photoelectric generation in photodiode wells of image sensor 120. Shot noise is signal-dependent, such that the shot noise standard deviation is proportional to the square root of the sensor signal.
  • FIG. 17 shows a curve 1700 plotting noise standard deviation 1720 as a function of digital number 1710 (code value level of the sensor signal), measured for the image sensor of the Google Pixel camera at 1 msec exposure time.
  • the square root dependence at the higher code levels shows that the noise at greater sensor signals is dominated by shot noise.
  • Most image sensors exhibit the same general behavior as indicated in FIG. 17.
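The qualitative behavior of curve 1700 can be captured by a simple two-term noise model; the constants below are assumptions for illustration, not measured values for any particular sensor.

```python
def noise_sigma(code, read_sigma=2.0, shot_gain=0.25):
    """Total noise standard deviation at a given sensor code value:
    signal-independent noise (dark current, readout, Johnson) dominates at
    low codes; shot noise (variance proportional to signal) dominates at
    high codes. Terms add in quadrature."""
    return (read_sigma ** 2 + shot_gain * code) ** 0.5

assert abs(noise_sigma(0) - 2.0) < 1e-9        # floor set by readout noise
assert noise_sigma(4000) > 5 * noise_sigma(0)  # shot noise dominates at top
```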
  • FIG. 18 illustrates one example method 1800 for determining nonlinear transformation 132 for an image sensor 120.
  • Method 1800 is a calibration procedure that may be performed for any given embodiment of image sensor 120. In one scenario, method 1800 is performed for a single image sensor 120, or a few copies of the same type of image sensor 120, and the resulting nonlinear transformation 132 is then universally applicable to all instances of this same type of image sensor 120. Method 1800 may be performed by a computer.
  • Nonlinear transformation 132 determined by method 1800 may be capture-mode specific. Thus, for image sensors 120 capable of image capture under a plurality of capture modes, method 1800 may be performed for each of the capture modes.
  • In a step 1810, method 1800 receives noise characteristics for sensor signals of image sensor 120 as encoded in captured image 170.
  • The noise characteristics characterize the standard deviation σC (or other equivalent statistical measure) of the sensor signal as a function of code value C.
  • In one embodiment, σC pertains only to the native noise of image sensor 120.
  • In another embodiment, σC further accounts for noise introduced by preprocessing of captured image 170.
  • Curve 1700 of FIG. 17 is an example of (C,σC) pertaining to native noise only.
  • In a step 1820, method 1800 applies an optical-to-optical transfer function (OOTF) to the noise characteristics (C,σC) received in step 1810.
  • The OOTF transfers the noise characteristics from scene-referred code values (C,σC) to display-luminance-referred code values (L,σL).
  • The OOTF applied in step 1820 is the same function applied in step 444 of method 400.
  • In a step 1830, method 1800 converts the noise characteristics from display-luminance-referred code values (L,σL) to corresponding quantizer code values (Q,σQ) according to a desired encoding performed by quantizer 144.
  • Steps 1820 and 1830 cooperate to propagate the noise characteristics of captured image 170 to resulting noise characteristics of output image 382.
  • In a step 1840, method 1800 computes the quantizer bit depth BQ required at each display luminance level, as represented by the quantizer code value Q in the presence of noise σQ.
  • Step 1840 may utilize the functional form of the curves shown in FIG. 16 to compute BQ from (Q,σQ).
  • In a step 1850, method 1800 determines, based upon BQ, the minimum step ΔQ of the quantizer code value Q for each Q value.
  • Minimum step ΔQ indicates the minimum step in quantizer code value Q associated with a visible (step-like) contrast. For steps smaller than minimum step ΔQ, the contrast is not visible.
  • In a step 1860, method 1800 transfers the quantizer code value representation (Q,ΔQ) of the minimum step function to a display-luminance-referred minimum step function (L,ΔL).
  • Step 1860 determines (L,ΔL) from (Q,ΔQ) by applying a function that is inverse to the function applied in step 1830.
  • In a step 1870, method 1800 applies the inverse OOTF to minimum step function (L,ΔL) to generate a minimum step function (C,ΔC) for sensor signals, so as to determine a minimum contrast curve ΔC/C(C).
  • The minimum contrast curve ΔC/C(C) characterizes a sensor-signal-dependent contour visibility threshold, such as the sensor-signal-dependent contour visibility threshold of step 712.
  • In a step 1880, method 1800 derives nonlinear transformation 132 from minimum contrast curve ΔC/C(C).
  • In an embodiment, step 1880 parametrizes nonlinear transformation 132 and determines the parameters of nonlinear transformation 132 by fitting the relative quantization induced by nonlinear transformation 132, at the bit depth of nonlinear transformation 132 (such as 8 bits), to minimum contrast curve ΔC/C(C).
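Steps 1810–1880 can be summarized in code. The sketch below is an illustration, not the patented implementation: it uses a simple gamma OOTF as a stand-in and the heuristic that the just-masked luminance step equals the propagated noise standard deviation. Under that linearized model the OOTF largely cancels when mapping back to sensor space, so the resulting ΔC/C curve is driven mainly by the sensor noise model:

```python
import numpy as np

GAMMA = 2.4          # hypothetical OOTF exponent
PEAK_NITS = 100.0    # hypothetical display peak luminance

def ootf(c):
    """Stand-in for step 1820: scene code value -> display luminance."""
    return PEAK_NITS * np.power(c, GAMMA)

def min_contrast_curve(code_values, sigma_of_code):
    """Propagate sensor noise through the OOTF (steps 1820-1830),
    take the just-masked step as ~1 sigma (steps 1840-1850), and map
    it back to sensor space (steps 1860-1870) to obtain dC/C(C)."""
    c = np.asarray(code_values, dtype=float)
    lum = ootf(c)
    dldc = np.gradient(lum, c)              # local slope dL/dC
    sigma_lum = sigma_of_code(c) * dldc     # noise in luminance units
    delta_lum = sigma_lum                   # masked step ~ sigma (heuristic)
    delta_c = delta_lum / dldc              # back to sensor-signal units
    return delta_c / c                      # minimum contrast dC/C

codes = np.linspace(0.01, 1.0, 50)
curve = min_contrast_curve(codes, lambda c: 0.002 + 0.01 * np.sqrt(c))
print(curve[0] > curve[-1])   # threshold is highest at low signal
```

Step 1880 would then fit a parametrized transformation so that its per-code-step relative quantization tracks this curve.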
  • FIG. 19 shows several minimum contrast curves 1900, 1902, and 1904 derived from an example of method 1800 based upon the Google Pixel camera noise characteristics of FIG. 17.
  • Each of curves 1900, 1902, and 1904 is associated with application of a different OOTF in step 1820, in each case using the Dolby Perceptual Quantizer (ITU-R).
  • Each minimum contrast curve 1900, 1902, and 1904 is derived from the minimum step function (L,ΔL) determined in an example of step 1860 and indicates ΔL/L (1920) as a function of L (1910). These ΔL/L(L) curves depend somewhat on the OOTF chosen.
  • FIG. 19 also shows the 12 bit Dolby Perceptual Quantizer curve 1906, which resides just below the visibility threshold in the absence of noise.
  • FIG. 20 shows the result of mapping minimum contrast curves 1900, 1902, and 1904 back to sensor signal code values through the inverse OOTF, as done in an example of step 1870.
  • FIG. 20 shows curves 2000, 2002, and 2004 each indicating ΔC/C (2020) as a function of C (2010). Curves 2000, 2002, and 2004 are derived from curves 1900, 1902, and 1904, respectively.
  • The minimum contrast curves ΔC/C(C) represented in sensor-signal space show very similar behavior despite the use of very different OOTFs in each case.
  • In an embodiment, method 1800 further takes into account noise reduction performed by preprocessing unit 336, for example by denoiser 230.
  • FIG. 21 shows an example of typical noise reduction from an ISP.
  • FIG. 21 shows three curves 2100, 2102, and 2104 indicating noise 2120 as a function of digital code value 2110 at a bit depth of 8 bits.
  • Curve 2100 indicates the noise without noise reduction, curve 2102 indicates the noise level in the presence of temporal denoising, and curve 2104 indicates the noise level in the presence of temporal and wavelet denoising. It is clear, in this case, that the ISP can reduce the native sensor noise levels by a factor of 2-3x. This noise reduction will then have the effect of reducing the ΔL/L minimum contrasts and their ΔC/C counterparts.
  • FIG. 22 illustrates the effect of denoising on minimum contrast curves ΔC/C(C).
  • FIG. 22 shows minimum contrast curves 2200, 2202, and 2204 indicating ΔC/C (2220) as a function of C (2210).
  • Curve 2200 corresponds to no denoising, that is, assuming the native noise of the image sensor.
  • Curve 2202 corresponds to a 2x noise reduction, and curve 2204 corresponds to a 4x noise reduction.
  • Denoising has the effect of pushing the minimum contrast curves downward, which indicates a greater sensitivity to contours caused by limited bit depth resolution.
  • FIG. 23 illustrates an example of step 1880, wherein the parameters of a parametrized 8 bit nonlinear transformation 132 are derived from minimum contrast curve 2204 of FIG. 22.
  • FIG. 23 shows several curves indicating ΔC/C (2220) as a function of C (2210): curve 2204 of FIG. 22 and curves 2300, 2304, and 2306.
  • Curve 2300 is a fit of ΔC/C(C) associated with the parametrized 8 bit nonlinear transformation 132 to curve 2204.
  • Curve 2304 is a minimum contrast curve ΔC/C(C) associated with a conventional pure gamma 1/2.4 encoding.
  • Curve 2306 is a minimum contrast curve ΔC/C(C) associated with a conventional Rec. 709 encoding.
  • Curve 2300 is based on parametrization of nonlinear transformation 132 using the derivative of a modified Naka-Rushton model. This model allows for a close fit to curve 2204. In contrast, the 8 bit Rec. 709 (curve 2306) and pure gamma 1/2.4 (curve 2304) encodings deviate significantly from curve 2204, illustrating sub-optimal use of the 8 bit code space by these conventional encodings.
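The exact modified Naka-Rushton model is not reproduced in this text. The sketch below uses a generic Naka-Rushton form V(C) = C^n / (C^n + s^n) with hypothetical parameters, purely to show how the per-code-step relative quantization ΔC/C of an 8 bit encoding would be computed for such a fit:

```python
def naka_rushton(c, n=0.9, semi=0.18):
    """Illustrative Naka-Rushton-style OETF, rescaled to map [0, 1]
    onto [0, 1]; n and semi are hypothetical parameters, not the
    patent's fitted values."""
    v = c**n / (c**n + semi**n)
    vmax = 1.0 / (1.0 + semi**n)   # value of the raw form at c = 1
    return v / vmax

def relative_quantization(c, bits=8, eps=1e-4):
    """dC/C induced by one code step of a `bits`-deep encoding:
    dC = (1 / (2^bits - 1)) / V'(C), with V' from a finite difference."""
    dv = (naka_rushton(c + eps) - naka_rushton(c - eps)) / (2 * eps)
    dc = (1.0 / (2**bits - 1)) / dv
    return dc / c

# The compressive OETF spends more codes on shadows, yet dC/C still
# rises toward low signals, mirroring the measured threshold curves.
print(relative_quantization(0.5) < relative_quantization(0.01))
```

Fitting n and semi (and any additional modification terms) so that this ΔC/C curve tracks the measured minimum contrast curve would yield the transformation parameters, as in step 1880.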
  • FIGS. 24A and 24B illustrate an example nonlinear transformation 132 and corresponding example inverse transformation 142 associated with curve 2300 of FIG. 23.
  • FIG. 24A plots a nonlinear transformation 2400 (the example of nonlinear transformation 132 associated with curve 2300) as output relative code value 2420 as a function of relative camera signal 2410.
  • FIG. 24A also shows a conventional gamma 2.4 encoding 2402.
  • FIG. 24B plots an inverse transformation 2440 (the example of inverse transformation 142 associated with curve 2300) as relative camera signal 2410 as a function of output relative code value 2420.
  • FIG. 24B also shows a conventional gamma 1/2.4 decoding 2442.
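As an illustration of such a forward/inverse pair, the conventional gamma 2.4 encoding and gamma 1/2.4 decoding shown for comparison can be sketched as follows; bit depths are chosen for illustration, with floating point standing in for the higher third bit depth:

```python
import numpy as np

def encode_gamma(signal, bits=8, gamma=2.4):
    """Forward nonlinear transformation: gamma-encode a relative
    camera signal in [0, 1] to integer code values at `bits` depth."""
    code = np.round((signal ** (1.0 / gamma)) * (2**bits - 1))
    return code.astype(np.uint16)

def decode_gamma(code, bits=8, gamma=2.4):
    """Inverse transformation: re-linearize code values to floats."""
    return (code / (2**bits - 1)) ** gamma

signal = np.linspace(0.001, 1.0, 1000)
roundtrip = decode_gamma(encode_gamma(signal))
rel_err = np.abs(roundtrip - signal) / signal
# The compressive allocation keeps the relative round-trip error small
# even in deep shadows, unlike a linear 8 bit encoding whose fixed
# absolute step dominates small signals.
print(rel_err.max() < 0.2)
```

The same encode/decode structure applies to nonlinear transformation 2400 and inverse transformation 2440, with the gamma curve replaced by the fitted transformation.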
  • FIGS. 25-28 illustrate, by examples, the effect of applying nonlinear transformation 1332 to pre-shape analog signals 1290 prior to analog-to-digital conversion by reduced-bit-depth ADC 1340.
  • FIG. 25 shows the results of a study where a cohort of observers were presented with a set of images each having a shallow brightness gradient.
  • The study underlying FIG. 25 is an expanded version of the study underlying FIG. 16.
  • FIG. 25 plots a group of curves 2500, each indicating ΔC/C (2520) as a function of C (2510) determined in step 1870 of method 1800.
  • FIG. 26 shows a curve 2600 indicating the minimum relative contrast ΔC/C for the group of curves 2500 of FIG. 25, as a function of C.
  • FIG. 27 replots the data of FIG. 26 as a curve 2700 indicating min(ΔC) (2720) as a function of C (2710).
  • FIG. 27 also shows lines 2702 and 2704 indicating relative code changes for ADCs having bit depths of 10 bits and 12 bits, respectively.
  • For 10 bit ADC line 2702, contour artifacts caused by limited bit depth resolution of the ADC would be observed below a relative camera signal of approximately 0.01.
  • At greater relative camera signals, the contrast would fall below the visibility threshold.
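Lines 2702 and 2704 follow directly from the constant code step of a linear ADC: one least significant bit is a fixed absolute step, so the relative step grows as 1/C toward low signals. A small illustrative sketch:

```python
import numpy as np

def linear_adc_relative_step(c, bits):
    """Relative code change min(dC)/C of a linear ADC, whose LSB is a
    constant absolute step of 1 / (2^bits - 1)."""
    return (1.0 / (2**bits - 1)) / np.asarray(c, dtype=float)

c = np.logspace(-3, 0, 100)          # relative camera signal
step10 = linear_adc_relative_step(c, 10)
step12 = linear_adc_relative_step(c, 12)
# The relative step of either ADC grows by 1000x over three decades
# of signal, eventually crossing any fixed contrast threshold.
print(step10[0] / step10[-1])
```

This is why, on the log-log axes of FIG. 27, the 10 bit and 12 bit ADC lines are straight lines of slope -1 that cross the visibility threshold curve 2700 in the shadows.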
  • FIG. 28 illustrates an example of contour visibility performance provided by image sensor 1300.
  • FIG. 28 plots several curves indicating min(ΔC) (2720) as a function of C (2710): curves 2800 and 2802, as well as curve 2700 and lines 2702 and 2704 of FIG. 27.
  • Curve 2800 is similar to curve 2700 of FIG. 27, except that curve 2800 is based upon use of an embodiment of image sensor 1300 that (a) implements nonlinear transformation 2400 of FIG. 24 as nonlinear transformation 1332 and (b) uses a reduced-bit-depth ADC 1340 having a bit depth of 8 bits.
  • Curve 2800 demonstrates that, by virtue of nonlinear transformation 1332, it is possible to replace a 10 bit or 12 bit ADC with an 8 bit ADC while staying below the contour visibility threshold indicated by curve 2700.
  • Curve 2802 is similar to curve 2800 except for being based upon an embodiment of image sensor 1300 that implements a gamma 1/2.4 function as nonlinear transformation 1332.
  • The gamma 1/2.4 based embodiment also allows for staying below the contour visibility threshold indicated by curve 2700 while using an 8 bit ADC.
  • However, nonlinear transformation 2400 may perform better than the gamma 1/2.4 function.
  • A computer-implemented method for bit-depth efficient image processing may include (a) communicating at least one non-linear transformation to an image signal processor, wherein each non-linear transformation is configured to, when applied by the image signal processor to a captured image having sensor signals encoded at a first bit depth, produce a nonlinear image that re-encodes the captured image at a second bit depth, while optimizing allocation of bit depth resolution in the nonlinear image for low contour visibility, (b) receiving the nonlinear image from the image signal processor, and (c) applying an inverse transformation, inverse to the nonlinear transformation used to produce the nonlinear image, to transform the nonlinear image to a re-linearized image at a third bit depth that is greater than the second bit depth.
  • the second bit depth may be less than the first bit depth.
  • each non-linear transformation may be configured to non-linearly distribute bit depth resolution, in the nonlinear image, according to a sensor-signal-dependent contour visibility threshold.
  • the sensor-signal-dependent contour visibility threshold may define, for each digital value of the sensor signals, as encoded at the first bit depth, a minimum contour detectable in the presence of noise, wherein the noise includes native noise of an image sensor generating the captured image.
  • the noise may further include noise introduced by pre-processing of the captured image after capture and prior to the step of receiving.
  • Any of the methods denoted as (A1) through (A5) may further include, after the step of applying and at the third bit depth, steps of transferring representation of the re-linearized image from scene-referred sensor-signal values to display-referred luminance values and encoding the re-linearized image, as represented by the display-referred luminance values, for subsequent decoding by a display or for output as a digital file.
  • the step of encoding may include applying, to the re-linearized image, a quantizer configured to code a 10,000 nits display luminance range at a bit depth in range from 10 to 12 bits while non-linearly allocating bit depth resolution to reduce contour visibility.
  • each nonlinear transformation may be configured to (i) be applied to the captured image with the sensor signals being encoded in initial code values and (ii) transform the initial code values to optimized code values that allocate greater bit depth resolution to a first range of the initial code values than a second range of the initial code values, wherein the first range is characterized by a lower contour visibility threshold than the second range and the step of receiving includes receiving the nonlinear image as encoded in the optimized code values.
  • Any of the methods denoted as (A1) through (A8) may include, in the step of receiving, receiving the nonlinear image from the image signal processor via an output of the image signal processor limited to the second bit depth.
  • the second bit depth may be 8 bits
  • the third bit depth may be at least 10 bits.
  • the first bit depth may be greater than 8 bits.
  • Any of the methods denoted as (A1) through (A11) may further include (1) in the step of communicating, communicating to the image signal processor a plurality of non-linear transformations respectively associated with a plurality of image capture modes, so as to enable the image signal processor to select and apply a specific one of the non-linear transformations according to image capture mode under which the captured image is captured, (2) receiving, from the image signal processor, metadata indicating the mode under which the captured image is captured, and (3) in the step of applying, transforming the image according to an inverse of the specific one of the non-linear transformations.
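A minimal end-to-end sketch of the method denoted (A1) follows. It uses a hypothetical square-root curve as the communicated nonlinear transformation (the patent's actual transformation is derived from measured contour visibility thresholds), with first, second, and third bit depths of 10, 8, and 16 bits:

```python
import numpy as np

FIRST_BITS, SECOND_BITS, THIRD_BITS = 10, 8, 16  # illustrative depths

def nonlinear_transform(captured):
    """Applied by the ISP: re-encode 10 bit sensor codes at 8 bits,
    allocating more codes to low signals via a square-root curve."""
    rel = captured / (2**FIRST_BITS - 1)
    return np.round(np.sqrt(rel) * (2**SECOND_BITS - 1)).astype(np.uint8)

def inverse_transform(nonlinear_img):
    """Applied on receipt: re-linearize to the greater third bit depth."""
    rel = (nonlinear_img / (2**SECOND_BITS - 1)) ** 2
    return np.round(rel * (2**THIRD_BITS - 1)).astype(np.uint16)

captured = np.arange(0, 1024, dtype=np.uint16)    # all 10 bit sensor codes
relinearized = inverse_transform(nonlinear_transform(captured))
# The third-bit-depth headroom avoids adding quantization loss on inversion.
print(relinearized.dtype, relinearized.max())
```

The 8 bit intermediate models the ISP output limited to the second bit depth; inverting at 16 bits preserves the shadow resolution the square-root curve allocated.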
  • A product for bit-depth efficient image processing may include machine-readable instructions encoded in non-transitory memory, wherein the instructions include (a) at least one non-linear transformation, each configured to transform a captured image, encoding sensor signals at a first bit depth, to produce a nonlinear image that re-encodes the sensor signals at a second bit depth, while optimizing allocation of bit depth resolution in the nonlinear image for low contour visibility, (b) for each non-linear transformation, a corresponding inverse transformation, (c) hardware instructions that, when executed by a processor, communicate the at least one non-linear transformation to an image signal processor, to enable the image signal processor to produce the nonlinear image from a captured image, and (d) application domain instructions including inverting instructions that, when executed by the processor, receive the nonlinear image from the image signal processor and apply the inverse transformation corresponding to the nonlinear transformation used to produce the nonlinear image, to produce a re-linearized image at a third bit depth that is greater than the second bit depth.
  • the second bit depth may be less than the first bit depth.
  • each non-linear transformation may be configured to non-linearly distribute bit depth resolution, in the nonlinear image, according to a sensor-signal-dependent contour visibility threshold.
  • the sensor-signal-dependent contour visibility threshold may define, for each value of the sensor signals, as encoded at the first bit depth, a minimum sensor-signal contour detectable in presence of noise, wherein the noise includes native noise of an image sensor generating the captured image.
  • the noise may further include noise introduced in pre-processing of the captured image prior to application of the nonlinear transformation.
  • the application domain instructions may further include quantization instructions that, when executed by the processor, (i) transfer representation of the re-linearized image from scene-referred sensor-signal values to display-referred luminance values, and (ii) encode the re-linearized image, as represented by the display-referred luminance values, for subsequent decoding by a display or for output as a digital file.
  • the quantization instructions may include, to encode the re-linearized image represented by the display-referred luminance values, a quantizer configured to code a 10,000 nits display luminance range at a bit depth in range from 10 to 12 bits while non-linearly allocating bit depth resolution to reduce contour visibility.
  • the at least one non-linear transformation may include a plurality of non-linear transformations, and a corresponding plurality of inverse transformations, respectively associated with a plurality of image capture modes, so as to enable the image signal processor to select and apply a specific one of the non-linear transformations according to image capture mode under which the captured image is captured, and the inverting instructions may be configured to, when executed by the processor, receive metadata indicating the capture mode under which the captured image is captured, and apply a corresponding one of the inverse transformations to produce a re-linearized-luminance image.
  • a method for bit-depth efficient analog-to-digital conversion of an image may include (a) receiving a plurality of analog signals representing light detected by a respective plurality of photosensitive pixels of an image sensor, (b) converting the analog signals to digital signals at a first bit depth, and (c) prior to the step of converting, applying a nonlinear transformation to the analog signals to optimize allocation of bit depth resolution, to the digital signals, for low contour visibility.
  • the method denoted as (CI) may further include inverting the nonlinear transformation by applying a corresponding inverse transformation to the digital signals, wherein the inverse transformation encodes the digital signals at a second bit depth that is greater than the first bit depth.
  • the step of applying may include allocating greater bit depth resolution to a first range of the analog signals than a second range of the analog signals, wherein the first range is characterized by a lower contour visibility threshold than the second range.
  • the first bit depth may be 8 bits.
  • the step of applying may include optimizing the allocation of bit depth resolution while accounting for effect of noise on the contour visibility.
  • the noise may include native noise of the image sensor.
  • Any of the methods denoted as (C5) through (C7) may include performing the steps of receiving, converting, applying, and inverting within each column readout circuit of the image sensor.
  • the step of applying may include applying, to the analog signals, one or more non-linear functions selected from the group consisting of a gamma function and a logarithmic function.
  • An image sensor with bit-depth efficient analog-to-digital image conversion may include (a) a plurality of photosensitive pixels for generating a respective plurality of analog signals representing light detected by the photosensitive pixels, (b) at least one analog-to-digital converter for converting the analog signals to digital and having a first bit depth, and (c) at least one analog preshaping circuit, communicatively coupled between the photosensitive pixels and the at least one analog-to-digital converter, for applying a nonlinear transformation to the analog signals to optimize allocation of bit depth resolution, to the digital signals by the analog-to-digital converter, for low contour visibility in presence of noise of the analog signals.
  • the image sensor denoted as (D1) may further include at least one digital inverting circuit for inverting the nonlinear transformation by applying a corresponding inverse transformation to the digital signals, wherein the inverse transformation encodes the digital signals at a second bit depth that is greater than the first bit depth.
  • the analog preshaping circuit may implement the nonlinear transformation, at least in part, as one or more functions selected from the group consisting of a gamma function and a logarithmic function, wherein the digital inverting circuit stores the inverse transformation as a look-up table.
  • the first bit depth may be 8 bits.
  • the photosensitive pixels may be organized in an array having a plurality of columns, wherein each column is configured with column readout circuitry that includes a respective analog-to-digital converter, a respective analog preshaping circuit, and a respective digital inverting circuit.
  • the column readout circuitry may implement a column-specific instance of the nonlinear transformation that is calibrated to minimize contour visibility in presence of noise of the analog signals in the respective column.
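The image sensor denoted (D1)–(D3) can be sketched digitally as follows. The gamma pre-shape, the bit depths, and the look-up-table inversion are illustrative stand-ins for the analog preshaping circuit and digital inverting circuit:

```python
import numpy as np

ADC_BITS, OUTPUT_BITS = 8, 12  # illustrative first and second bit depths

def analog_preshape(voltage):
    """Stand-in for the analog preshaping circuit: a gamma-like
    compressive response applied before conversion."""
    return np.power(voltage, 1.0 / 2.4)

# Digital inverting circuit: the inverse transformation stored as a
# look-up table indexed by the ADC code, as in (D3).
codes = np.arange(2**ADC_BITS)
inverse_lut = np.round(
    ((codes / (2**ADC_BITS - 1)) ** 2.4) * (2**OUTPUT_BITS - 1)
).astype(np.uint16)

def column_readout(voltages):
    """Pre-shape, convert at the first bit depth, invert via LUT to
    the greater second bit depth."""
    shaped = analog_preshape(voltages)
    adc_codes = np.round(shaped * (2**ADC_BITS - 1)).astype(np.uint8)
    return inverse_lut[adc_codes]

pixels = np.linspace(0.0, 1.0, 16)   # normalized pixel voltages
print(column_readout(pixels).max())
```

Per (D5) and (D6), each column could carry its own copy of this chain, with a column-specific pre-shape calibrated against that column's noise.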
  • a computer-implemented method for bit-depth efficient image processing comprising:
  • each non-linear transformation being configured to, when applied by the image signal processor to a captured image having sensor signals encoded at a first bit depth, produce a nonlinear image that re-encodes the captured image at a second bit depth, while optimizing allocation of bit depth resolution in the nonlinear image for low contour visibility,
  • each non-linear transformation being configured to non-linearly distribute bit depth resolution, in the nonlinear image, according to a sensor-signal-dependent contour visibility threshold.
  • the sensor-signal-dependent contour visibility threshold defining, for each digital value of the sensor signals, as encoded at the first bit depth, a minimum contour detectable in the presence of noise, the noise including native noise of an image sensor generating the captured image.
  • the noise further including noise introduced by preprocessing of the captured image after capture and prior to the step of receiving.
  • the method of EEE 6, the step of encoding comprising applying, to the re-linearized image, a quantizer configured to code a 10,000 nits display luminance range at a bit depth in range from 10 to 12 bits while non-linearly allocating bit depth resolution to reduce contour visibility.
  • each nonlinear transformation being configured to (a) be applied to the captured image with the sensor signals being encoded in initial code values and (b) transform the initial code values to optimized code values that allocate greater bit depth resolution to a first range of the initial code values than a second range of the initial code values, the first range being characterized by a lower contour visibility threshold than the second range, the step of receiving comprising receiving the nonlinear image as encoded in the optimized code values.
  • the step of receiving comprising receiving the nonlinear image from the image signal processor via an output of the image signal processor limited to the second bit depth.
  • the second bit depth being 8 bits
  • the third bit depth being at least 10 bits.
  • a product for bit-depth efficient image processing comprising machine- readable instructions encoded in non-transitory memory, the instructions including:
  • At least one non-linear transformation each configured to transform a captured image, encoding sensor signals at a first bit depth, to produce a nonlinear image that re-encodes the sensor signals at a second bit depth, while optimizing allocation of bit depth resolution in the nonlinear image for low contour visibility;
  • each non-linear transformation being configured to non-linearly distribute bit depth resolution, in the nonlinear image, according to a sensor-signal-dependent contour visibility threshold.
  • the sensor-signal-dependent contour visibility threshold defining, for each value of the sensor signals, as encoded at the first bit depth, a minimum sensor-signal contour detectable in presence of noise, the noise including native noise of an image sensor generating the captured image.
  • the application domain instructions further comprising quantization instructions that, when executed by the processor, (a) transfer representation of the re-linearized image from scene-referred sensor-signal values to display-referred luminance values, and (b) encode the re-linearized image, as represented by the display-referred luminance values, for subsequent decoding by a display or for output as a digital file.
  • the quantization instructions including, to encode the re-linearized image represented by the display-referred luminance values, a quantizer configured to code a 10,000 nits display luminance range at a bit depth in range from 10 to 12 bits while non-linearly allocating bit depth resolution to reduce contour visibility.
  • the at least one non-linear transformation comprising a plurality of non-linear transformations, and a corresponding plurality of inverse transformations, respectively associated with a plurality of image capture modes, so as to enable the image signal processor to select and apply a specific one of the non-linear transformations according to image capture mode under which the captured image is captured;
  • the inverting instructions being configured to, when executed by the processor, receive metadata indicating the capture mode under which the captured image is captured, and apply a corresponding one of the inverse transformations.
  • a method for bit-depth efficient analog-to-digital conversion of an image comprising:
  • the step of applying comprising allocating greater bit depth resolution to a first range of the analog signals than a second range of the analog signals, the first range being characterized by a lower contour visibility threshold than the second range.
  • the method of EEE 27, the noise including native noise of the image sensor.
  • the method of EEE 26, comprising performing the steps of receiving, converting, applying, and inverting within each column readout circuit of the image sensor.
  • the method of any one of EEEs 22 to 29, the step of applying comprising applying, to the analog signals, one or more non-linear functions selected from the group consisting of a gamma function and a logarithmic function.
  • An image sensor with bit-depth efficient analog-to-digital image conversion comprising:
  • a plurality of photosensitive pixels for generating a respective plurality of analog signals representing light detected by the photosensitive pixels; at least one analog-to-digital converter for converting the analog signals to digital and having a first bit depth;
  • at least one analog preshaping circuit communicatively coupled between the photosensitive pixels and the at least one analog-to-digital converter for applying a nonlinear transformation to the analog signals to optimize allocation of bit depth resolution, to the digital signals by the analog-to-digital converter, for low contour visibility in presence of noise of the analog signals.
  • the image sensor of EEE 31 further comprising at least one digital inverting circuit for inverting the nonlinear transformation by applying a corresponding inverse transformation to the digital signals, the inverse transformation encoding the digital signals at a second bit depth that is greater than the first bit depth.
  • the digital inverting circuit storing the inverse transformation as a look-up table.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A computer-implemented method for bit-depth efficient image processing includes a step of communicating at least one nonlinear transformation to an image signal processor. Each nonlinear transformation is configured to, when applied by the image signal processor to a captured image having sensor signals encoded at a first bit depth, produce a nonlinear image that re-encodes the captured image at a second bit depth, which may be less than the first bit depth, while optimizing allocation of bit depth resolution in the nonlinear image for low contour visibility. The method further includes receiving the nonlinear image from the image signal processor, and applying an inverse transformation to transform the nonlinear image into a re-linearized image at a third bit depth that is greater than the second bit depth. The inverse transformation is inverse to the nonlinear transformation used to produce the nonlinear image.
PCT/US2018/046783 2017-08-15 2018-08-14 Traitement d'image efficace en profondeur de bits Ceased WO2019036522A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP18759011.2A EP3669542B1 (fr) 2017-08-15 2018-08-14 Traitement efficace d'une image de profondeur de bit
CN201880053158.1A CN110999301B (zh) 2017-08-15 2018-08-14 位深度高效图像处理
US16/637,197 US10798321B2 (en) 2017-08-15 2018-08-14 Bit-depth efficient image processing

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201762545557P 2017-08-15 2017-08-15
EP17186275.8 2017-08-15
US62/545,557 2017-08-15
EP17186275 2017-08-15

Publications (1)

Publication Number Publication Date
WO2019036522A1 true WO2019036522A1 (fr) 2019-02-21

Family

ID=59677049

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/046783 Ceased WO2019036522A1 (fr) 2017-08-15 2018-08-14 Traitement d'image efficace en profondeur de bits

Country Status (1)

Country Link
WO (1) WO2019036522A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111510698A (zh) * 2020-04-23 2020-08-07 惠州Tcl移动通信有限公司 图像处理方法、装置、存储介质及移动终端
CN114827487A (zh) * 2020-04-28 2022-07-29 荣耀终端有限公司 一种高动态范围图像合成方法和电子设备
CN115734080A (zh) * 2021-08-30 2023-03-03 辉达公司 用于高动态范围传感器的图像信号处理管线
JP2023540447A (ja) * 2020-08-06 2023-09-25 ドルビー ラボラトリーズ ライセンシング コーポレイション 擬似輪郭低減による適応ストリーミング

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016164235A1 (fr) 2015-04-06 2016-10-13 Dolby Laboratories Licensing Corporation Remise en forme d'images en boucle reposant sur des blocs lors d'un codage vidéo à grande gamme dynamique
WO2016184532A1 (fr) 2015-05-18 2016-11-24 Telefonaktiebolaget Lm Ericsson (Publ) Procédés, dispositif de réception et dispositif d'envoi permettant de gérer une image
WO2017003525A1 (fr) * 2015-06-30 2017-01-05 Dolby Laboratories Licensing Corporation Quantificateur perceptif s'adaptant au contenu en temps réel pour des images à plage dynamique élevée

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016164235A1 (fr) 2015-04-06 2016-10-13 Dolby Laboratories Licensing Corporation Remise en forme d'images en boucle reposant sur des blocs lors d'un codage vidéo à grande gamme dynamique
WO2016184532A1 (fr) 2015-05-18 2016-11-24 Telefonaktiebolaget Lm Ericsson (Publ) Procédés, dispositif de réception et dispositif d'envoi permettant de gérer une image
WO2017003525A1 (fr) * 2015-06-30 2017-01-05 Dolby Laboratories Licensing Corporation Quantificateur perceptif s'adaptant au contenu en temps réel pour des images à plage dynamique élevée

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
BORER T AND COTTON A: "A "DISPLAY INDEPENDENT" HIGH DYNAMIC RANGE TELEVISION SYSTEM", IBC 2015 CONFERENCE, 11-15 SEPTEMBER 2015, AMSTERDAM, 11 September 2015 (2015-09-11), XP030082540 *
FRANÇOIS E (TECHNICOLOR) ET AL: "AHG14: suggested draft text for HDR/WCG technology for SDR backward compatibility, display adaptation, and quality enhancement processing", 25. JCT-VC MEETING; 14-10-2016 - 21-10-2016; CHENGDU; (JOINT COLLABORATIVE TEAM ON VIDEO CODING OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16); URL: HTTP://WFTP3.ITU.INT/AV-ARCH/JCTVC-SITE/, no. JCTVC-Y0029, 5 October 2016 (2016-10-05), XP030118070 *
FRANCOIS ET AL.: "AHG14: suggested draft text for HDR/WCG technology for SDR backward compatibility, display adaptation, and quality enhancement processing", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG 16 WP 3, 14 October 2016 (2016-10-14)
TIM BORER; ANDREW COTTON: "A "Display Independent" High Dynamic Range Television System", IBC 2015 CONFERENCE, 11 September 2015 (2015-09-11)
YIN P ET AL: "Candidate Test Model for HDR extension of HEVC", 113. MPEG MEETING; 19-10-2015 - 23-10-2015; GENEVA; (MOTION PICTURE EXPERT GROUP OR ISO/IEC JTC1/SC29/WG11), no. m37269, 16 October 2015 (2015-10-16), XP030065637 *
YIN PENG ET AL.: "Candidate test model for HDR extension of HEVC", MPEG MEETING, vol. 113, 19 October 2015 (2015-10-19)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111510698A (zh) * 2020-04-23 2020-08-07 惠州Tcl移动通信有限公司 Image processing method and apparatus, storage medium, and mobile terminal
CN114827487A (zh) * 2020-04-28 2022-07-29 荣耀终端有限公司 High dynamic range image synthesis method and electronic device
CN114827487B (zh) * 2020-04-28 2024-04-12 荣耀终端有限公司 High dynamic range image synthesis method and electronic device
JP2023540447A (ja) * 2020-08-06 2023-09-25 ドルビー ラボラトリーズ ライセンシング コーポレイション Adaptive streaming with false contouring alleviation
JP7434664B2 (ja) 2020-08-06 2024-02-20 ドルビー ラボラトリーズ ライセンシング コーポレイション Adaptive streaming with false contouring alleviation
US12301848B2 (en) 2020-08-06 2025-05-13 Dolby Laboratories Licensing Corporation Adaptive streaming with false contouring alleviation
CN115734080A (zh) * 2021-08-30 2023-03-03 辉达公司 Image signal processing pipelines for high dynamic range sensors
US11917307B2 (en) 2021-08-30 2024-02-27 Nvidia Corporation Image signal processing pipelines for high dynamic range sensors
US12273632B2 (en) 2021-08-30 2025-04-08 Nvidia Corporation Image signal processing pipelines for high dynamic range sensors

Similar Documents

Publication Publication Date Title
JP7647005B2 (ja) Apparatus and method for improving perceptual luminance nonlinearity-based image data exchange across different display capabilities
US10798321B2 (en) Bit-depth efficient image processing
CN108370405B (zh) Image signal conversion processing method and apparatus, and terminal device
JP6563915B2 (ja) Method and apparatus for generating an EOTF function for universal code mapping for HDR images, and method and process using such images
EP2144444A1 (fr) Devices and methods for HDR video data compression
JP2014517556A (ja) Video encoding and decoding
WO2019036522A1 (fr) Bit-depth efficient image processing
JP2018112936A (ja) HDR image processing apparatus and method
JP2018524875A (ja) Video encoding method, video decoding method, video encoder, and video decoder
CN115803802B (zh) Systems and methods for ambient light compensation using PQ shift
JP2014211914A (ja) Gradation correction device and method
JP2019053402A (ja) Image processing apparatus, image processing method, and program
Hatchett Efficient and adaptable high dynamic range compression
JP2025175086A (ja) Apparatus and method for improving perceptual luminance nonlinearity-based image data exchange across different display capabilities

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 18759011
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
ENP Entry into the national phase
    Ref document number: 2018759011
    Country of ref document: EP
    Effective date: 20200316