WO2025050333A1 - Determining a white balance for an image - Google Patents
- Publication number
- WO2025050333A1 (PCT/CN2023/117392)
- Authority
- WO
- WIPO (PCT)
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
- H04N23/88—Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
Definitions
- the present disclosure generally relates to determining a white balance for an image.
- aspects of the present disclosure include systems and techniques for determining a white-balance setting for an image.
- a camera is a device that receives light and captures image frames, such as still images or video frames, using an image sensor.
- Cameras can be configured with a variety of image-capture settings and/or image-processing settings to alter the appearance of images captured thereby.
- Image-capture settings may be determined and applied before and/or while an image is captured, such as ISO, exposure time (also referred to as exposure, exposure duration, or shutter speed), aperture size (also referred to as f/stop), focus, and gain (including analog and/or digital gain), among others.
- image-processing settings can be configured for post-processing of an image, such as alterations to contrast, brightness, saturation, sharpness, levels, curves, and colors, among others.
- White balancing may include changes to image-capture settings and/or image-processing settings.
- the term “white balance,” when used as a verb, may refer to one or more operations to adjust colors and/or brightness of colors of pixels of image data.
- White balancing may refer to adjusting intensities of colors of pixels of an image responsive to a white-balance decision (e.g., to cause pixels associated with white objects to appear as white in the image).
- the term “white balance” when used as a noun, “white-balance decision,” and like terms may refer to an indication of settings for an image to adjust pixels of the image to implement white balancing.
- a white-balance decision may indicate adjustments to intensities of red, green, and/or blue channels of pixels of an image such that white objects in a scene represented by the image appear white in the image.
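As a concrete illustration of the definition above, a white-balance decision expressed as per-channel gains can be applied to an RGB image as a simple element-wise multiplication. This is a minimal sketch; the function name and the example gain values are illustrative assumptions, not taken from the publication:

```python
import numpy as np

def apply_white_balance(image, r_gain, b_gain, g_gain=1.0):
    """Apply a white-balance decision, expressed as per-channel gains,
    to an RGB image (H x W x 3 float array with values in [0, 1])."""
    gains = np.array([r_gain, g_gain, b_gain])
    return np.clip(image * gains, 0.0, 1.0)

# A grey card lit by warm light appears reddish; boosting blue and
# attenuating red restores a neutral grey.
warm_grey = np.full((2, 2, 3), [0.6, 0.5, 0.4])
balanced = apply_white_balance(warm_grey, r_gain=0.5 / 0.6, b_gain=0.5 / 0.4)
```

After the gains are applied, all three channels of the grey card are equal, which is the behavior the bullet above describes for white (or neutral) objects.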
- a method for determining a white balance for an image. The method includes: obtaining statistics based on image data, the statistics being associated with at least one of color or brightness of the image data; determining, based on the statistics, a first white-balance decision using a white-balance algorithm; determining, based on the statistics, a second white-balance decision using a machine-learning model trained to determine white-balance decisions; and determining a third white-balance decision based on the first white-balance decision and the second white-balance decision.
- an apparatus for determining a white balance for an image includes at least one memory and at least one processor (e.g., configured in circuitry) coupled to the at least one memory.
- the at least one processor configured to: obtain statistics based on image data, the statistics being associated with at least one of color or brightness of the image data; determine, based on the statistics, a first white-balance decision using a white-balance algorithm; determine, based on the statistics, a second white-balance decision using a machine-learning model trained to determine white-balance decisions; and determine a third white-balance decision based on the first white-balance decision and the second white-balance decision.
- a non-transitory computer-readable medium has stored thereon instructions that, when executed by one or more processors, cause the one or more processors to: obtain statistics based on image data, the statistics being associated with at least one of color or brightness of the image data; determine, based on the statistics, a first white-balance decision using a white-balance algorithm; determine, based on the statistics, a second white-balance decision using a machine-learning model trained to determine white-balance decisions; and determine a third white-balance decision based on the first white-balance decision and the second white-balance decision.
- an apparatus for determining a white balance for an image includes: means for obtaining statistics based on image data, the statistics being associated with at least one of color or brightness of the image data; means for determining, based on the statistics, a first white-balance decision using a white-balance algorithm; means for determining, based on the statistics, a second white-balance decision using a machine-learning model trained to determine white-balance decisions; and means for determining a third white-balance decision based on the first white-balance decision and the second white-balance decision.
- one or more of the apparatuses described herein is, can be part of, or can include a mobile device (e.g., a mobile telephone or so-called “smart phone” , a tablet computer, or other type of mobile device) , an extended reality device (e.g., a virtual reality (VR) device, an augmented reality (AR) device, or a mixed reality (MR) device) , a vehicle (or a computing device or system of a vehicle) , a smart or connected device (e.g., an Internet-of-Things (IoT) device) , a wearable device, a personal computer, a laptop computer, a video server, a television (e.g., a network-connected television) , a robotics device or system, or other device.
- each apparatus can include an image sensor (e.g., a camera) or multiple image sensors (e.g., multiple cameras) for capturing one or more images.
- each apparatus can include one or more displays for displaying one or more images, notifications, and/or other displayable data.
- each apparatus can include one or more speakers, one or more light-emitting devices, and/or one or more microphones.
- each apparatus can include one or more sensors. In some cases, the one or more sensors can be used for determining a location of the apparatuses, a state of the apparatuses (e.g., a tracking state, an operating state, a temperature, a humidity level, and/or other state) , and/or for other purposes.
- FIG. 1 is a block diagram illustrating an example architecture of an image processing system, according to various aspects of the present disclosure
- FIG. 3 includes example images and corresponding example AWB stats
- FIG. 5 is a block diagram illustrating an example system for making a white-balance decision, according to various aspects of the present disclosure
- FIG. 6 is a block diagram illustrating an example system for making a white-balance decision, according to various aspects of the present disclosure
- FIG. 7 is a flowchart illustrating an example process for determining a white-balance decision for an image, according to various aspects of the present disclosure
- FIG. 8 is a flow diagram illustrating another example process for determining a white balance for an image, in accordance with aspects of the present disclosure
- FIG. 9 is a block diagram illustrating an example of a deep learning neural network that can be used to implement a perception module and/or one or more validation modules, according to some aspects of the disclosed technology
- FIG. 10 is a block diagram illustrating an example of a convolutional neural network (CNN) , according to various aspects of the present disclosure.
- FIG. 11 is a block diagram illustrating an example computing-device architecture of an example computing device which can implement the various techniques described herein.
- the image may be white balanced, which may include adjusting intensities of color channels of pixels of the image (e.g., such that white objects in a scene represented by the image appear white in the image) .
- Many cameras (or devices that include cameras) allow a user to set a white balance of an image (e.g., either before or after the image is captured).
- a white-balance decision may indicate an adjustment to intensities of red, green, and/or blue channels of pixels of an image.
- An AWB engine may white balance images based on statistics (which may be referred to as “AWB statistics,” “AWB stats,” “Bayer-grid statistics,” “Bayer-grid stats,” “BG statistics,” and/or “BG stats”).
- AWB stats may include a relationship between (e.g., a ratio of) intensities of red light to intensities of green light and a relationship between (e.g., a ratio of) intensities of blue light to intensities of green light, among other things.
- Algorithm-based white-balance engines may be unable to accurately white balance images which do not include enough white pixels (e.g., enough pixels representing white objects) .
- an algorithm-based white-balance engine may identify a subset of pixels in an image (less than all pixels in the image) that should be white, determine how the subset of pixels needs to be adjusted to become white, and apply the same adjustment to all pixels of the image.
- an AWB algorithm may obtain a weight for each of a number of data points of AWB stats. Each weight may represent the likelihood that a respective data point of the AWB stats represents a white point in the scene. As such, in cases in which an image does not include enough white pixels, an algorithm-based white-balance engine may be unable to accurately white balance the image.
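The weighted-data-point approach described above can be sketched as follows. This is an illustrative grey/white-point estimator under the assumption that the decision is expressed as R and B gains relative to green; it is not the publication's exact algorithm, and the function name is an assumption:

```python
import numpy as np

def awb_decision(rg, bg, weights):
    """Estimate a white point as the weight-averaged (R/G, B/G) of the
    AWB-stat data points, then convert it to per-channel gains."""
    w = np.asarray(weights, dtype=float)
    if w.sum() == 0:           # no plausibly white data points in the image
        return None            # the algorithm cannot make a decision
    rg_white = np.dot(w, rg) / w.sum()
    bg_white = np.dot(w, bg) / w.sum()
    # Gains that map the estimated white point to R/G = B/G = 1 (neutral).
    return 1.0 / rg_white, 1.0 / bg_white
```

Note the degenerate case: when every weight is zero (no candidate white points), the estimator returns no decision, which mirrors the failure mode the bullet above describes.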
- Machine-learning-based white-balance engines have been developed. Machine-learning-based white-balance engines may be trained to receive an image and generate a white-balance decision based on the image. However, machine-learning-based white-balance engines may have stability issues. For example, a machine-learning-based white-balance engine may be provided with subsequent image frames (e.g., of video data) that are substantially the same, yet produce different white-balance decisions for each of the subsequent image frames.
- Systems, apparatuses, methods (also referred to as processes) , and computer-readable media are described herein for determining a white balance for an image.
- the systems and techniques described herein may use both a white-balance algorithm and a white-balance machine-learning model to determine separate white-balance decisions, then make a final white-balance decision based on the two separate white-balance decisions.
- the systems and techniques may obtain statistics based on image data.
- the statistics may be associated with color and/or brightness of the image data.
- the statistics may be AWB stats.
- the systems and techniques may determine, based on the statistics, a first white- balance decision using a white-balance algorithm. Further, the systems and techniques may determine, based on the statistics, a second white-balance decision using a machine-learning model trained to determine white-balance decisions. Also, the systems and techniques may determine a third white-balance decision based on the first white-balance decision and the second white-balance decision. In some aspects, the systems and techniques may white balance the image data based on the third white-balance decision.
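The publication does not fix how the first (algorithm) and second (machine-learning) white-balance decisions are combined into the third decision. A confidence-weighted blend of per-channel gains is one plausible sketch; the `ml_confidence` parameter and the function name are assumptions introduced for illustration:

```python
def fuse_decisions(algo_gains, ml_gains, ml_confidence):
    """Combine two white-balance decisions (each a tuple of per-channel
    gains) into a third decision via a confidence-weighted average.

    ml_confidence in [0, 1]: 0 trusts the algorithm entirely,
    1 trusts the machine-learning model entirely.
    """
    a = ml_confidence
    return tuple((1 - a) * g_algo + a * g_ml
                 for g_algo, g_ml in zip(algo_gains, ml_gains))
```

With a mid-range confidence the fused decision stays close to the stable algorithmic decision while still being pulled toward the model's prediction in scenes where the algorithm struggles.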
- the systems and techniques may exhibit the advantages of both approaches.
- the systems and techniques may be stable and produce correct white-balance decisions in most situations (e.g., a trait that the systems and techniques may share with algorithm-based white-balancing techniques).
- the systems and techniques may be capable of predicting a good white-balance decision in misleading scenes and/or low-light scenes (e.g., a trait that the systems and techniques may share with machine-learning-based white-balancing techniques) .
- the systems and techniques may overcome the challenges to both algorithm-based white balancing and machine-learning-based white balancing.
- algorithm-based white-balancing techniques may generate incorrect white-balance decisions for images representing misleading and/or low-light scenes.
- the systems and techniques, using a white-balance decision based on a machine-learning-based white-balance technique, may overcome this issue.
- machine-learning-based white-balancing techniques may perform inconsistently.
- the systems and techniques, using a white-balance decision based on an algorithm-based white-balancing technique, may overcome this issue.
- FIG. 1 is a block diagram illustrating an example architecture of an image-processing system 100, according to various aspects of the present disclosure.
- the image-processing system 100 includes various components that are used to capture and process images, such as an image of a scene 106.
- the image-processing system 100 can capture image frames (e.g., still images or video frames) .
- the lens 108 and image sensor 118 (which may include an analog-to-digital converter (ADC) ) can be associated with an optical axis.
- the photosensitive area of the image sensor 118 (e.g., the photodiodes) and the lens 108 can both be centered on the optical axis.
- the lens 108 of the image-processing system 100 faces a scene 106 and receives light from the scene 106.
- the lens 108 bends incoming light from the scene toward the image sensor 118.
- the light received by the lens 108 then passes through an aperture of the image-processing system 100.
- the aperture can have a fixed size.
- the one or more control mechanisms 110 can control exposure, focus, and/or zoom based on information from the image sensor 118 and/or information from the image processor 124.
- the one or more control mechanisms 110 can include multiple mechanisms and components.
- the control mechanisms 110 can include one or more exposure-control mechanisms 112, one or more focus-control mechanisms 114, and/or one or more zoom-control mechanisms 116.
- the one or more control mechanisms 110 may also include additional control mechanisms besides those illustrated in FIG. 1.
- the one or more control mechanisms 110 can include control mechanisms for controlling analog gain, flash, HDR, depth of field, and/or other image capture properties.
- the focus-control mechanism 114 of the control mechanisms 110 can obtain a focus setting.
- focus-control mechanism 114 stores the focus setting in a memory register.
- the focus-control mechanism 114 can adjust the position of the lens 108 relative to the position of the image sensor 118. For example, based on the focus setting, the focus-control mechanism 114 can move the lens 108 closer to the image sensor 118 or farther from the image sensor 118 by actuating a motor or servo (or other lens mechanism) , thereby adjusting the focus.
- additional lenses may be included in the image-processing system 100.
- the image-processing system 100 can include one or more microlenses over each photodiode of the image sensor 118. The microlenses can each bend the light received from the lens 108 toward the corresponding photodiode before the light reaches the photodiode.
- the focus setting may be determined via contrast detection autofocus (CDAF) , phase detection autofocus (PDAF) , hybrid autofocus (HAF) , or some combination thereof.
- the focus setting may be determined using the control mechanism 110, the image sensor 118, and/or the image processor 124.
- the focus setting may be referred to as an image capture setting and/or an image processing setting.
- the lens 108 can be fixed relative to the image sensor and the focus-control mechanism 114.
- the exposure-control mechanism 112 of the control mechanisms 110 can obtain an exposure setting.
- the exposure-control mechanism 112 stores the exposure setting in a memory register.
- the exposure-control mechanism 112 can control a size of the aperture (e.g., aperture size or f/stop) , a duration of time for which the aperture is open (e.g., exposure time or shutter speed) , a duration of time for which the sensor collects light (e.g., exposure time or electronic shutter speed) , a sensitivity of the image sensor 118 (e.g., ISO speed or film speed) , analog gain applied by the image sensor 118, or any combination thereof.
- the exposure setting may be referred to as an image capture setting and/or an image processing setting.
- the zoom-control mechanism 116 of the control mechanisms 110 can obtain a zoom setting.
- the zoom-control mechanism 116 stores the zoom setting in a memory register.
- the zoom-control mechanism 116 can control a focal length of an assembly of lens elements (lens assembly) that includes the lens 108 and one or more additional lenses.
- the zoom-control mechanism 116 can control the focal length of the lens assembly by actuating one or more motors or servos (or other lens mechanism) to move one or more of the lenses relative to one another.
- the zoom setting may be referred to as an image capture setting and/or an image processing setting.
- the lens assembly may include a parfocal zoom lens or a varifocal zoom lens.
- the lens assembly may include a focusing lens (which can be lens 108 in some cases) that receives the light from the scene 106 first, with the light then passing through a focal zoom system between the focusing lens (e.g., lens 108) and the image sensor 118 before the light reaches the image sensor 118.
- the focal zoom system may, in some cases, include two positive (e.g., converging, convex) lenses of equal or similar focal length (e.g., within a threshold difference of one another) with a negative (e.g., diverging, concave) lens between them.
- the zoom-control mechanism 116 moves one or more of the lenses in the focal zoom system, such as the negative lens and one or both of the positive lenses.
- zoom-control mechanism 116 can control the zoom by capturing an image from an image sensor of a plurality of image sensors (e.g., including image sensor 118) with a zoom corresponding to the zoom setting.
- the image-processing system 100 can include a wide-angle image sensor with a relatively low zoom and a telephoto image sensor with a greater zoom.
- the zoom-control mechanism 116 can capture images from a corresponding sensor.
- the image sensor 118 includes one or more arrays of photodiodes or other photosensitive elements. Each photodiode measures an amount of light that eventually corresponds to a particular pixel in the image produced by the image sensor 118. In some cases, different photodiodes may be covered by different filters. In some cases, different photodiodes can be covered in color filters, and may thus measure light matching the color of the filter covering the photodiode.
- Various color filter arrays can be used such as, for example and without limitation, a Bayer color filter array, a quad color filter array (QCFA) , and/or any other color filter array.
- the image sensor 118 may alternately or additionally include opaque and/or reflective masks that block light from reaching certain photodiodes, or portions of certain photodiodes, at certain times and/or from certain angles.
- opaque and/or reflective masks may be used for phase detection autofocus (PDAF) .
- the opaque and/or reflective masks may be used to block portions of the electromagnetic spectrum from reaching the photodiodes of the image sensor (e.g., an IR cut filter, a UV cut filter, a band-pass filter, low-pass filter, high-pass filter, or the like) .
- the image sensor 118 may also include an analog gain amplifier to amplify the analog signals output by the photodiodes and/or an analog-to-digital converter (ADC) to convert the analog signals output by the photodiodes (and/or amplified by the analog gain amplifier) into digital signals.
- certain components or functions discussed with respect to one or more of the control mechanisms 110 may be included instead or additionally in the image sensor 118.
- the image sensor 118 may be a charge-coupled device (CCD) sensor, an electron-multiplying CCD (EMCCD) sensor, an active-pixel sensor (APS), a complementary metal-oxide semiconductor (CMOS), an N-type metal-oxide semiconductor (NMOS), a hybrid CCD/CMOS sensor (e.g., sCMOS), or some other combination thereof.
- the image processor 124 may include one or more processors, such as one or more image signal processors (ISPs) (including ISP 128) , one or more host processors (including host processor 126) , and/or one or more of any other type of processor discussed with respect to the computing-device architecture 1100 of FIG. 11.
- the host processor 126 can be a digital signal processor (DSP) and/or other type of processor.
- the image processor 124 is a single integrated circuit or chip (e.g., referred to as a system-on-chip or SoC) that includes the host processor 126 and the ISP 128.
- the chip can also include one or more input/output ports (e.g., input/output (I/O) ports 130) , central processing units (CPUs) , graphics processing units (GPUs) , broadband modems (e.g., 3G, 4G or LTE, 5G, etc. ) , memory, connectivity components (e.g., Bluetooth TM , Global Positioning System (GPS) , etc. ) , any combination thereof, and/or other components.
- the I/O ports 130 can include any suitable input/output ports or interfaces according to one or more protocols or specifications, such as an Inter-Integrated Circuit 2 (I2C) interface, an Inter-Integrated Circuit 3 (I3C) interface, a Serial Peripheral Interface (SPI) interface, a serial General-Purpose Input/Output (GPIO) interface, a Mobile Industry Processor Interface (MIPI) (such as a MIPI CSI-2 physical (PHY) layer port or interface), an Advanced High-performance Bus (AHB) bus, any combination thereof, and/or other input/output ports.
- the host processor 126 can communicate with the image sensor 118 using an I2C port.
- the ISP 128 can communicate with the image sensor 118 using an MIPI port.
- the image processor 124 may perform a number of tasks, such as de-mosaicing, color space conversion, image frame downsampling, pixel interpolation, automatic exposure (AE) control, automatic gain control (AGC) , CDAF, PDAF, automatic white balance, merging of image frames to form an HDR image, image recognition, object recognition, feature recognition, receipt of inputs, managing outputs, managing memory, or some combination thereof.
- the image processor 124 may store image frames and/or processed images in random-access memory (RAM) 120, read-only memory (ROM) 122, a cache, a memory unit, another storage device, or some combination thereof.
- I/O devices 132 may be connected to the image processor 124.
- the I/O devices 132 can include a display screen, a keyboard, a keypad, a touchscreen, a trackpad, a touch-sensitive surface, a printer, any other output devices, any other input devices, or any combination thereof.
- a caption may be input into the image-processing device 104 through a physical keyboard or keypad of the I/O devices 132, or through a virtual keyboard or keypad of a touchscreen of the I/O devices 132.
- the I/O devices 132 may include one or more ports, jacks, or other connectors that enable a wired connection between the image-processing system 100 and one or more peripheral devices, over which the image-processing system 100 may receive data from the one or more peripheral device and/or transmit data to the one or more peripheral devices.
- the I/O devices 132 may include one or more wireless transceivers that enable a wireless connection between the image-processing system 100 and one or more peripheral devices, over which the image-processing system 100 may receive data from the one or more peripheral device and/or transmit data to the one or more peripheral devices.
- the peripheral devices may include any of the previously-discussed types of the I/O devices 132 and may themselves be considered I/O devices 132 once they are coupled to the ports, jacks, wireless transceivers, or other wired and/or wireless connectors.
- the image-processing system 100 may be a single device. In some cases, the image-processing system 100 may be two or more separate devices, including an image-capture device 102 (e.g., a camera) and an image-processing device 104 (e.g., a computing device coupled to the camera) . In some implementations, the image-capture device 102 and the image-processing device 104 may be coupled together, for example via one or more wires, cables, or other electrical connectors, and/or wirelessly via one or more wireless transceivers. In some implementations, the image-capture device 102 and the image-processing device 104 may be disconnected from one another.
- a vertical dashed line divides the image-processing system 100 of FIG. 1 into two portions that represent the image-capture device 102 and the image-processing device 104, respectively.
- the image-capture device 102 includes the lens 108, control mechanisms 110, and the image sensor 118.
- the image-processing device 104 includes the image processor 124 (including the ISP 128 and the host processor 126) , the RAM 120, the ROM 122, and the I/O device 132.
- certain components illustrated in the image-processing device 104, such as the ISP 128 and/or the host processor 126, may be included in the image-capture device 102.
- the image-processing system 100 can include one or more wireless transceivers for wireless communications, such as cellular network communications, 802.11 Wi-Fi communications, wireless local area network (WLAN) communications, or some combination thereof.
- the image-processing system 100 can be part of, or implemented by, a single computing device or multiple computing devices.
- the image-processing system 100 can be part of an electronic device (or devices) such as a camera system (e.g., a digital camera, an IP camera, a video camera, a security camera, etc. ) , a telephone system (e.g., a smartphone, a cellular telephone, a conferencing system, etc. ) , a laptop or notebook computer, a tablet computer, a set-top box, a smart television, a display device, a game console, an XR device (e.g., an HMD, smart glasses, etc. ) , an IoT (Internet-of-Things) device, a smart wearable device, a video streaming device, an Internet Protocol (IP) camera, or any other suitable electronic device (s) .
- the components of the image-processing system 100 can include software, hardware, or one or more combinations of software and hardware.
- the components of the image-processing system 100 can include and/or can be implemented using electronic circuits or other electronic hardware, which can include one or more programmable electronic circuits (e.g., microprocessors, GPUs, DSPs, CPUs, and/or other suitable electronic circuits) , and/or can include and/or be implemented using computer software, firmware, or any combination thereof, to perform the various operations described herein.
- the software and/or firmware can include one or more instructions stored on a computer-readable storage medium and executable by one or more processors of the electronic device implementing the image-processing system 100.
- the computing-device architecture 1100 shown in FIG. 11 and further described below can include the image-processing system 100, the image-capture device 102, the image-processing device 104, or a combination thereof.
- FIG. 2 includes a graph 202 illustrating automatic white balance (AWB) stats, according to various aspects of the present disclosure.
- the x-axis of graph 202 represents a ratio of intensities of red light to intensities of green light.
- the x-axis represents an average intensity of red channels of a group of pixels divided by an average intensity of green channels of the group of pixels.
- the y-axis of graph 202 represents a ratio of intensities of blue light to intensities of green light.
- the y-axis represents an average intensity of blue channels of a group of pixels divided by an average intensity of green channels of the group of pixels.
- a point on graph 202 may represent intensities of colors of a group of pixels of an image.
- an x-coordinate of the point may represent a ratio of the intensities of red channels of the group of pixels to the intensities of green channels of the group of pixels.
- a y-coordinate of the point may represent a ratio of the intensities of blue channels of the group of pixels to the intensities of green channels of the group of pixels.
- Graph 202 includes a white box 204.
- Data points inside white box 204 may be representative of objects in the scene that may be white.
- White box 204 may be determined based on a calibration process. For example, a white object may be illuminated by a number of different light sources having different color temperatures. Multiple images of the white object may be captured as the white object is illuminated by the different light sources.
- AWB stats from the multiple images may be generated and white box 204 may be defined by the AWB stats for the multiple images. Because white box 204 is defined based on AWB stats based on images of a white object, any data point within white box 204 may correspond to a group of pixels that may represent a white object.
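The calibration procedure above can be sketched as building an axis-aligned bounding box from the (R/G, B/G) stats measured on a white object under several illuminants, and then testing whether a new data point falls inside it. The function names and the `margin` parameter are illustrative assumptions; the publication does not specify the box's exact construction:

```python
def white_box(calib_rg, calib_bg, margin=0.02):
    """Build an axis-aligned white box from (R/G, B/G) calibration stats
    measured on a white object under different light sources."""
    return (min(calib_rg) - margin, max(calib_rg) + margin,
            min(calib_bg) - margin, max(calib_bg) + margin)

def in_white_box(box, rg, bg):
    """Return True if an AWB-stat data point may represent a white object."""
    rg_lo, rg_hi, bg_lo, bg_hi = box
    return rg_lo <= rg <= rg_hi and bg_lo <= bg <= bg_hi
```

For example, calibrating under a warm and a cool illuminant yields a box spanning both measured white points, and any data point inside the box is a candidate white point for the algorithm.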
- graph 202 may include reference points 206.
- Reference points 206 may similarly define white points on graph 202.
- reference points 206 may represent data points measured during the calibration process described above.
- a reference line may be defined connecting reference points 206.
- a distance between a data point on graph 202 and white box 204, reference points 206, and/or the reference line may be a measure of a difference between the color of the group of pixels represented by the data point and white. Additionally or alternatively, the distance may be indicative of how white (or not) the group of pixels represented by the data point is.
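The distance-to-reference-line measure described above can be sketched as the perpendicular distance from a data point to the line through two reference points in (R/G, B/G) space. This is a standard point-to-line computation; the function name is an assumption:

```python
import numpy as np

def distance_to_reference(point, ref_a, ref_b):
    """Perpendicular distance from an AWB-stat data point to the line
    through two reference points, in (R/G, B/G) space.

    A small distance suggests the group of pixels behind the data point
    is close to white; a large distance suggests it is not."""
    p, a, b = (np.asarray(v, dtype=float) for v in (point, ref_a, ref_b))
    d = b - a                                   # direction of the reference line
    # 2-D cross-product magnitude divided by the line direction's length.
    num = abs(d[0] * (p[1] - a[1]) - d[1] * (p[0] - a[0]))
    return num / np.hypot(d[0], d[1])
```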
- FIG. 3 includes example images and corresponding example AWB stats.
- FIG. 3 includes an image 302 and corresponding AWB stats 304.
- Data points of AWB stats 304 may represent groups of pixels of image 302.
- image 302 may be divided into a 64 × 48 grid of groups of pixels.
- Each group of pixels of the grid may be represented by a data point of AWB stats 304.
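As a rough sketch of how per-group data points might be computed: the function name, image layout, and tiny grid below are assumptions for clarity, not the patent's implementation (a real pipeline would use something like the 64 × 48 grid on sensor data).

```python
# Hypothetical sketch: divide an image into a grid of groups of pixels and
# compute one (r/g, b/g) AWB-stats data point per group.

def grid_awb_stats(pixels, grid_w, grid_h):
    """pixels: 2-D list of (r, g, b) tuples; returns one (r/g, b/g) per cell."""
    h, w = len(pixels), len(pixels[0])
    stats = []
    for gy in range(grid_h):
        for gx in range(grid_w):
            r_sum = g_sum = b_sum = 0
            for y in range(gy * h // grid_h, (gy + 1) * h // grid_h):
                for x in range(gx * w // grid_w, (gx + 1) * w // grid_w):
                    r, g, b = pixels[y][x]
                    r_sum += r
                    g_sum += g
                    b_sum += b
            # average red / average green and average blue / average green
            stats.append((r_sum / g_sum, b_sum / g_sum))
    return stats

# A uniform 2x2 greenish image reduces to a single data point.
image = [[(100, 200, 50), (100, 200, 50)],
         [(100, 200, 50), (100, 200, 50)]]
print(grid_awb_stats(image, 1, 1))  # [(0.5, 0.25)]
```

Each returned tuple corresponds to one data point of the AWB stats, such as the data points of AWB stats 304.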
- data points of AWB stats 314 may represent image 312.
- Image 302 may be an image of a scene.
- the scene may include a white object.
- the image may include a group of pixels (e.g., in one of the 64 × 48 groups of pixels) corresponding to the white object.
- AWB stats 304 may include a data point corresponding to the group of pixels.
- the data point may be within white box 306.
- An algorithm-based white-balancing approach may identify the group of pixels, for example, based on the data point being within white box 306.
- the algorithm-based white-balance approach may determine a white-balance decision to apply to the group of pixels to cause the group of pixels to be white in a white-balanced image.
- the algorithm-based white-balance approach may apply the white-balance decision to other pixels (e.g., all of the other pixels of the image) .
- Image 312 may be an image of another scene, for example, wood grain.
- the other scene may not include a white object. Because the scene does not include a white object, image 312 may not include any groups of white pixels and AWB stats 314 may not include any data points within white box 316. Because image 312 does not include any data points within white box 316, the algorithm-based white-balance approach may not be able to make an accurate white-balance decision. In some cases, the algorithm-based white-balance approach may select a data point closest to white box 316 (or to reference points or a reference line, neither of which is illustrated in FIG. 3) .
- the white-balance decision may be incorrect (e.g., unable to cause pixels representing white objects in the scene to be white in an image) .
- Image 312 may be an illustration of a misleading scene for which it may be difficult for an algorithm-based white-balance approach to accurately determine a white-balance decision.
- FIG. 4 includes example images white-balanced by a machine-learning-based white-balance approach.
- FIG. 4 includes an image 402 and an image 412.
- Image 402 and image 412 may represent the same scene.
- Image 402 and image 412 may have been captured within a short period of time (e.g., 1/30th of a second) .
- image 402 and image 412 may be frames of video data (e.g., with image 412 being subsequent to image 402) .
- image 402 and image 412 may be the same image, white balanced differently.
- Image 402 and image 412 may be white balanced by a machine-learning-based white-balance approach.
- image 402 and image 412 may have been independently white-balanced by the same machine-learning model (e.g., one after the other) .
- a white-balance decision of image 402 (e.g., a decision based on which image 402 is white-balanced) may be different than a white-balance decision of image 412 (e.g., a decision based on which image 412 is white-balanced) .
- image 402 may be different than image 412.
- intensities of colors of image 402 may be different than intensities of colors of image 412.
- Image 402 and image 412 may illustrate an inconsistency of a machine-learning-based white-balance approach to white balancing.
- Respective white-balance decisions for Image 402 and image 412 may be determined according to a machine-learning-based white-balance approach.
- a machine-learning model may be trained to generate white-balance decisions based on AWB stats through a backpropagation training process.
- the machine-learning model may be provided with a number of sets of AWB stats and corresponding white-balance decisions.
- the machine-learning model may generate white-balance decisions based on the AWB stats.
- the generated white-balance decisions may be compared with the provided white-balance decisions.
- system 500 may obtain AWB stats 506.
- AWB stats 506 may be based on image data 502.
- AWB stats 506 may be, or may include, statistics associated with color and/or brightness of image data 502.
- Graph 202 of FIG. 2 may be an example of an illustration of AWB stats 506.
- AWB stats 506 may be, or may include, a relationship between (e.g., a ratio of) intensities of red light to intensities of green light (e.g., as illustrated by the x-axis of graph 202) , a relationship between (e.g., a ratio of) intensities of blue light to intensities of green light, (e.g., as illustrated by the y-axis of graph 202) , a comparison between a white reference point and the ratio of intensities of red light to intensities of green light and the ratio of intensities of blue light to intensities of green light (e.g., a distance between a data point and white box 204, reference points 206, and/or a reference line) , a correlated color temperature, a semantic label and/or sensor-gain values. Additionally or alternatively, AWB stats 506 may include a lux index, IR-sensor data from an IR-sensor, and/or spectrometer data from a spectrometer.
- system 500 may obtain image data 502 and generate AWB stats 506 using a stats engine 504.
- Stats engine 504 may be configured to generate AWB stats 506 based on image data 502.
- Stats engine 504 is optional in system 500. The optional nature of stats engine 504 and image data 502 in system 500 is illustrated by image data 502 and stats engine 504 being illustrated using dashed lines.
- system 500 may include stats engine 504 and system 500 may generate AWB stats 506 based on image data 502 using stats engine 504.
- system 500 may not include stats engine 504 and system 500 may receive AWB stats 506.
- some of AWB stats 506 may be based on data from other sensors (e.g., an IR-sensor or a spectrometer) .
- White-balance algorithm 508 may be, or may include, an algorithm for determining white-balance decision 510 based on AWB stats 506.
- White-balance algorithm 508 may determine white-balance decision 510 in a manner substantially similar to the algorithm-based white-balance approach described above with regard to image 302 of FIG. 3.
- White-balance decision 510 may be, or may include, a red gain, a green gain, and/or a blue gain. The red gain, the green gain, and the blue gain may be applied to all pixels of an image to white-balance the image.
- a white-balance decision may be represented as a red gain divided by a green gain and a blue gain divided by the green gain (e.g., in the format “ (r/g, b/g) ” ) .
- the red gain, blue gain, and green gain may be applied to pixels of an image such that pixels of the image representing white points in the scene are white in the image.
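A minimal sketch of applying such a decision follows, assuming per-channel multiplicative gains with the green gain normalized to 1 (the function name and sample values are hypothetical):

```python
# Hypothetical sketch: apply a white-balance decision, expressed as red and
# blue gains relative to a unity green gain, to a single (r, g, b) pixel.

def apply_white_balance(pixel, red_gain, blue_gain, green_gain=1.0):
    r, g, b = pixel
    return (r * red_gain, g * green_gain, b * blue_gain)

# A gray patch captured under warm light shows elevated red and depressed
# blue; gains of (0.5, 2.0) bring the channels back to equality.
print(apply_white_balance((200, 100, 50), red_gain=0.5, blue_gain=2.0))
```

Applying the same gains to every pixel of the image white-balances the image as a whole.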
- Machine-learning model 512 may be, or may include, a machine-learning model trained to determine a white-balance decision based on AWB stats.
- Machine-learning model 512 may determine white-balance decision 514 in a manner substantially similar to the machine-learning-based white-balance approach described above with regard to image 402 of FIG. 4.
- White-balance decision 514 may be, or may include, an intensity of red light, an intensity of green light, and an intensity of blue light that together make up white light.
- machine-learning model 512 may additionally generate a confidence value 516 and/or a scene flag 518.
- Confidence value 516 may be related to white-balance decision 514.
- confidence value 516 may indicate a level of confidence of machine-learning model 512 with regard to white-balance decision 514 and/or a level of confidence with which other systems and techniques may use white-balance decision 514.
- confidence value 516 may be a value between 0 and 1, with 0 indicating that other systems and techniques should not rely on white-balance decision 514 at all and 1 indicating that other systems and techniques may completely rely on white-balance decision 514.
- Scene flag 518 may be related to a scene represented by image data 502. Further, scene flag 518 may indicate whether the scene is a misleading scene or not.
- a misleading scene may be a scene that does not include enough white points or objects (e.g., enough white objects for machine-learning model 512 to accurately determine white-balance decision 514) . Scenes that do not include anything white, scenes that are poorly lit, scenes with mixed lighting, and/or close-up images of objects that are not white are some examples of misleading scenes.
- Scene flag 518 may thus be indicative of whether the scene includes enough white points or objects (or whether image data 502 includes pixels that are recognizable as white by machine-learning model 512) .
- machine-learning model 512 may determine whether the scene is misleading or not based on data points on an AWB stats diagram, similar to graph 202 of FIG. 2, a white box, reference points, and/or reference line.
- scene flag 518 may be binary (e.g., indicative of a misleading scene or not) .
- scene flag 518 may be a value indicative of a certainty of machine-learning model 512 regarding scene flag 518.
- scene flag 518 may be a value between 0 and 1, 0 indicating that machine-learning model 512 is certain that the scene is not misleading, 1 indicating that machine-learning model 512 is certain that the scene is misleading, and 0.5 indicating that machine-learning model 512 is completely uncertain regarding whether the scene is misleading or not.
- Confidence value 516 and scene flag 518 are optional in system 500.
- the optional nature of confidence value 516 and scene flag 518 in system 500 is illustrated by confidence value 516 and scene flag 518 being illustrated using dashed lines.
- machine-learning model 512 may generate confidence value 516 and scene flag 518 and provide confidence value 516 and scene flag 518 to AWB decision aggregator 522.
- machine-learning model 512 may not generate confidence value 516 and scene flag 518.
- AWB decision aggregator 522 may generate a confidence value and/or scene flag internal to AWB decision aggregator 522.
- system 500 may receive information 520 and may provide information 520 to AWB decision aggregator 522.
- Information 520 may be, or may include, spectrum data from a spectrum sensor, scene-detection results, a number of faces detected, and/or semantic segmentation information.
- AWB decision aggregator 522 may generate white-balance decision 524 based on information 520.
- AWB decision aggregator 522 may determine white-balance decision 524 additionally based on information 520.
- Information 520 is optional in system 500. The optional nature of information 520 in system 500 is illustrated by information 520 being illustrated using dashed lines.
- system 500 may generate a white-balanced image (not illustrated in FIG. 5) based on image data 502 and white-balance decision 524.
- system 500 may white balance image data 502 based on white-balance decision 524 to generate a white-balanced image.
- FIG. 6 is a block diagram illustrating an example system 600 for making a white-balance decision 524, according to various aspects of the present disclosure.
- system 600 may obtain white-balance decision 510 (which may have been generated by an algorithm-based white-balance approach, such as white-balance algorithm 508 of FIG. 5) and white-balance decision 514 (which may have been generated by a machine-learning-based white-balance approach, such as machine-learning model 512 of FIG. 5) .
- AWB decision aggregator 522 may determine white-balance decision 524 based on white-balance decision 510, white-balance decision 514 and/or information 520.
- AWB decision aggregator 522 may receive a confidence value and/or a scene flag (e.g., from the machine-learning-based white-balance approach, such as machine-learning model 512, which generated white-balance decision 514) .
- AWB decision aggregator 522 may include a machine-learning model 602 which may generate a confidence value 516 and/or a scene flag 518.
- machine-learning model 602 may receive white-balance decision 514 and AWB stats 506 and may generate confidence value 516 and/or scene flag 518 based on white-balance decision 514 and AWB stats 506.
- Machine-learning model 602 may be a machine-learning model trained to generate a confidence value and/or a scene flag based on a white-balance decision and AWB stats.
- machine-learning model 602 may be trained to generate a confidence value and/or a scene flag based on a white-balance decision through a backpropagation training process.
- machine-learning model 602 may be provided with a number of sets of AWB stats and white-balance decisions and corresponding scene flags and/or confidence values.
- Machine-learning model 602 may generate scene flags and confidence values based on the AWB stats and the white-balance decisions. The generated scene flags and confidence values may be compared with the provided scene flags and confidence values.
- Machine-learning model 602 may be adjusted, for example, parameters (e.g., weights of machine-learning model 602) may be adjusted based on a difference (e.g., an error) between the generated scene flags and the confidence values and the provided scene flags and confidence values. Once trained, machine-learning model 602 may be used to determine the confidence value 516 and scene flag 518 based on white-balance decision 514 and AWB stats 506.
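The training loop described above can be illustrated with a deliberately tiny stand-in model. A one-parameter linear predictor replaces the actual network and a single scalar stat replaces the AWB stats; only the shape of the update step (generate, compare, adjust parameters by the error gradient) is meant to carry over. All names and values are hypothetical.

```python
# Toy sketch of the backpropagation-style training described above, with a
# one-parameter linear model standing in for machine-learning model 602.

def train_step(weight, stat, target, learning_rate=0.1):
    """One gradient-descent step on squared error for prediction = weight * stat."""
    prediction = weight * stat          # generate a value from the stats
    error = prediction - target         # compare with the provided value
    gradient = 2 * error * stat         # d(error^2) / d(weight)
    return weight - learning_rate * gradient

weight = 0.0
for _ in range(100):                    # repeat over the training pairs
    weight = train_step(weight, stat=1.0, target=0.6)
print(round(weight, 3))  # converges toward the provided target, 0.6
```

In the patent's setting the parameters are the weights of the machine-learning model and the targets are the provided scene flags and confidence values, but the adjust-by-error structure is the same.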
- FIG. 7 is a flowchart illustrating an example process 700 for determining a white-balance decision for an image, according to various aspects of the present disclosure.
- Process 700 may be implemented by AWB decision aggregator 522 of FIG. 5 or FIG. 6.
- process 700 may be a process used by AWB decision aggregator 522 to determine white-balance decision 524 based on white-balance decision 510, white-balance decision 514, confidence value 516, scene flag 518 and/or information 520.
- process 700 may obtain an algorithm-based white-balance decision ( “AB AWB” ) (e.g., white-balance decision 510) , a machine-learning-based white-balance decision ( "ML AWB” ) (e.g., white-balance decision 514) , a confidence value related to the ML AWB (e.g., confidence value 516) , a scene flag related to a scene of the images (e.g., scene flag 518) , and/or other information (e.g., information 520) .
- Process 700 may begin at decision block 702. At decision block 702, it may be determined whether the scene represented by the image is misleading or not. The determination regarding the scene may be based on the scene flag (e.g., scene flag 518) . If the scene is not misleading, process 700 may continue to block 704, else process 700 may continue to block 706.
- Block 704 may be reached based on a scene represented by the image being not misleading.
- a weight for the ML AWB (an "ML AWB weight" ) may be determined to be 0.
- process 700 may determine to generate the ML AWB weight to be 0.
- An ML AWB weight of 0 may, at block 710, cause process 700 to not use the ML AWB (e.g., white-balance decision 514) in determining a final white-balance decision (e.g., white-balance decision 524) but rather to use another white-balance decision.
- Block 706 may be reached based on a scene represented by the image being misleading.
- the ML AWB weight may be set to a value based on a confidence value related to the ML AWB.
- the ML AWB weight may be set to a confidence value (e.g., confidence value 516) of the ML AWB.
- Block 706 may be followed by block 708.
- the ML AWB weight may be adjusted based on one or more conditions.
- several conditions of the image (e.g., image data 502) , the AWB stats (e.g., AWB stats 506) , the AB AWB (e.g., white-balance decision 510) , the ML AWB (e.g., white-balance decision 514) , and/or other information (e.g., information 520) may be checked and the ML AWB weight may be adjusted based on the conditions.
- Table 1 illustrates example conditions that may be used to adjust a confidence value of an ML AWB.
- Table 1 includes two conditions, a “distance to reference line” condition and a “distance to spectrum-sensor decision” condition.
- the distance to reference line condition may relate to a distance between a data point representative of the ML AWB on an AWB stats graph, such as graph 202 of FIG. 2, and a reference line.
- the distance to spectrum-sensor decision may relate to a distance between the data point representative of the ML AWB on the AWB stats graph and a white-balance decision based on spectrum data generated by a spectrum sensor and mapped to the AWB stats graph.
- the distance to the reference line and/or the distance to the spectrum-sensor decision may be part of the information (e.g., information 520) obtained by process 700.
- Table 1 further includes a confidence factor that may result from satisfaction of the two conditions.
- a confidence value may be adjusted by one or more confidence factors. Adjusting the confidence value by one or more confidence factors may include multiplying the confidence value by the one or more confidence factors, performing a weighted average based on the confidence value and the one or more confidence factors, and/or selecting a max/min confidence value based on the one or more confidence factors.
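One plausible reading of a two-condition table like table 1 can be sketched as follows. The function names, the count-based mapping, and the multiplicative combination are assumptions for illustration, not the patent's exact rules.

```python
# Hypothetical sketch: map how many of two table-1 style conditions are
# satisfied to a confidence factor (both -> 1, one -> 0.5, neither -> 0),
# then fold factors into a confidence value using the multiplicative option
# described above.

def two_condition_factor(near_reference_line, near_spectrum_decision):
    satisfied = int(near_reference_line) + int(near_spectrum_decision)
    return {2: 1.0, 1: 0.5, 0: 0.0}[satisfied]

def adjust_confidence(confidence, factors):
    for factor in factors:
        confidence *= factor
    return confidence

factor = two_condition_factor(near_reference_line=True,
                              near_spectrum_decision=False)
print(adjust_confidence(0.8, [factor]))  # 0.8 * 0.5 = 0.4
```

A factor of 1 leaves the confidence value unchanged, while a factor of 0 discards the ML AWB entirely.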
- the confidence value of the ML AWB may be adjusted by a confidence factor of 1.
- the confidence value may be multiplied by a confidence factor of 1.
- the confidence value of the ML AWB may be adjusted by a confidence factor of 0.5.
- the confidence value may be multiplied by a confidence factor of 0.5.
- the confidence value of the ML AWB may be adjusted by a confidence factor of 0.5.
- the confidence value of the ML AWB may be adjusted by a confidence factor of 0.
- the confidence value may be multiplied by a confidence factor of 0.
- Table 2 illustrates other example conditions that may be used to adjust a confidence value of an ML AWB.
- Table 2 includes two conditions, a correlated color temperature ( “CCT” ) condition and a “ML AWB CCT decision” condition.
- the CCT for given image data may be calculated based on a data point representative of the ML AWB on an AWB graph, such as graph 202 of FIG. 2.
- the ML AWB CCT decision for given image data may be based on a red gain divided by a green gain and a blue gain divided by the green gain (e.g., in the format “ (r/g, b/g) ” ) of a data point representative of the ML AWB.
- image data may have an ML AWB that may be represented by an (r/g, b/g) point. From the (r/g, b/g) point, a CCT may be calculated.
- the ML AWB CCT decision for the ML AWB may be the CCT.
- the CCT and/or the ML AWB CCT decision may be part of the information (e.g., information 520) obtained by process 700.
- Table 2 further includes a confidence factor that may result from satisfaction of the two conditions.
- the confidence value of the ML AWB may be adjusted by a confidence factor of 1. Further, if the CCT measured by the CCT sensor is between 2500 K and 4000 K and the ML AWB CCT decision is between 4500 K and 6500 K, the confidence value of the ML AWB (or the ML AWB weight) may be adjusted by a confidence factor of 0.5.
- the confidence value of the ML AWB may be adjusted by a confidence factor of 0.5. Further, if the CCT measured by the CCT sensor is between 4500 K and 6500 K and the ML AWB CCT decision is between 4500 K and 6500 K, the confidence value of the ML AWB (or the ML AWB weight) may be adjusted by a confidence factor of 1.
- a confidence factor may be determined for situations that fall between the defined conditions. For example, if the CCT measured by the CCT sensor is between 4000 K and 4500 K the systems and techniques may determine a confidence factor. For example, the systems and techniques may determine a confidence factor that is a value between 0.5 and 1 (e.g., between the confidence factor for CCT less than 4000 K and the confidence factor for CCT greater than 4500 K) . In some cases, the systems and techniques may interpolate between 0.5 and 1 based on the CCT. For example, if the CCT is 4250 K (e.g., midway between 4000 K and 4500 K) , the confidence factor may be 0.75 (e.g., midway between 0.5 and 1) .
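The interpolation between breakpoints can be sketched directly; the helper below is hypothetical, with its breakpoints and factors following the 4000 K / 4500 K example above.

```python
# Sketch of linearly interpolating a confidence factor between two CCT
# breakpoints, following the 4000 K / 4500 K example above.

def interpolated_factor(cct, low_cct=4000.0, high_cct=4500.0,
                        low_factor=0.5, high_factor=1.0):
    if cct <= low_cct:
        return low_factor
    if cct >= high_cct:
        return high_factor
    t = (cct - low_cct) / (high_cct - low_cct)
    return low_factor + t * (high_factor - low_factor)

print(interpolated_factor(4250.0))  # midway between breakpoints -> 0.75
```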
- Table 1 and Table 2 are illustrative examples. Other conditions, ranges, numbers of conditions and/or numbers of ranges may be used to adjust a confidence value of an ML AWB (or an ML AWB weight) . Further, the adjustment values are provided as illustrative examples; other adjustment values may be used.
- a final white-balance decision may be determined based on the algorithm-based white-balance decision, the machine-learning-based white-balance decision, and the ML AWB weight.
- the final white-balance decision may be a weighted sum of the AB AWB and the ML AWB.
- process 700 may determine an algorithm-based white-balance decision weight ( "AB AWB weight" ) .
- the AB AWB weight may be 1 - the ML AWB weight. Having determined the AB AWB weight and the ML AWB weight, the final white-balance decision may be determined, at block 710, as the ML AWB * the ML AWB weight + the AB AWB * the AB AWB weight.
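The flow of process 700 can be condensed into a short sketch. The function name and the multiplicative handling of condition factors are assumptions; the branch structure follows blocks 702 through 710 as described above, with each decision treated as a tuple of gains.

```python
# Sketch of process 700: pick the ML AWB weight from the scene flag and
# confidence value, then blend the two decisions component by component.

def aggregate_awb(ab_awb, ml_awb, scene_misleading, confidence, factors=()):
    if not scene_misleading:
        ml_weight = 0.0               # block 704: do not use the ML AWB
    else:
        ml_weight = confidence        # block 706: start from the confidence value
        for factor in factors:        # block 708: adjust by condition factors
            ml_weight *= factor
    ab_weight = 1.0 - ml_weight       # block 710: complementary AB AWB weight
    return tuple(ml * ml_weight + ab * ab_weight
                 for ml, ab in zip(ml_awb, ab_awb))

# A non-misleading scene falls back entirely on the algorithm-based decision.
print(aggregate_awb((0.5, 0.6), (0.9, 0.2),
                    scene_misleading=False, confidence=0.7))
```

For a misleading scene, the result lands between the two decisions in proportion to the (possibly adjusted) confidence value.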
- system 500 may perform process 700 on portions of an image independently.
- another algorithm may partition image data 502 into separate portions (e.g., based on lighting of the portions or semantic labels) .
- System 500 may perform process 700 as described above independently on each of the separate portions.
- system 500 may determine a white-balance decision 524 based on white-balance decision 510 and a white-balance decision 514 for each of the portions.
- system 500 may apply the white-balance decisions 524 to the portions and combine the portions.
- system 500 may blend the white-balance decisions 524 between the portions and/or at edges of the portions. In some cases, system 500 may blend the weights between the portions and/or at edges of the portions.
- an image may include a portion lit by natural light and a portion lit by a fluorescent bulb.
- An algorithm may identify the portions.
- System 500 may operate on each of the portions of the image independently by, for example, determining a white-balance decision 510 and a white-balance decision 514 for each portion separately, then determining a white-balance decision 524 for each of the portions separately.
- the white-balance decisions 524 may include weights for the respective white-balance decisions 510 and the respective white-balance decisions 514.
- System 500 may combine the portions and blend the white-balance decisions 524 or the weights between the white-balance decisions 510 and the white-balance decisions 514 between and/or at edges of the portions.
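Blending the per-portion decisions at a seam might look like the following sketch. Linear blending is one simple choice among the options mentioned above; the function name and sample gains are hypothetical.

```python
# Hypothetical sketch: linearly blend two per-portion white-balance decisions,
# each expressed as (red gain, blue gain), across a seam between portions.
# t = 0.0 uses the first portion's decision, t = 1.0 the second's.

def blend_decisions(decision_a, decision_b, t):
    return tuple((1.0 - t) * a + t * b for a, b in zip(decision_a, decision_b))

# Halfway across the seam between a daylight portion and a fluorescent portion.
print(blend_decisions((0.5, 2.0), (1.5, 1.0), 0.5))  # (1.0, 1.5)
```

Sweeping t from 0 to 1 across the pixels near the seam avoids a visible color step between the portions.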
- FIG. 8 is a flow diagram illustrating a process 800 for determining a white balance for an image, in accordance with aspects of the present disclosure.
- One or more operations of process 800 may be performed by a computing device (or apparatus) or a component (e.g., a chipset, codec, etc. ) of the computing device.
- the computing device may be a mobile device (e.g., a mobile phone) , a network-connected wearable such as a watch, an extended reality (XR) device such as a virtual reality (VR) device or augmented reality (AR) device, a vehicle or component or system of a vehicle, a desktop computing device, a tablet computing device, a server computer, a robotic device, and/or any other computing device with the resource capabilities to perform the process 800.
- a computing device may obtain statistics based on image data, the statistics being associated with at least one of color or brightness of the image data. For example, system 500 of FIG. 5 may obtain AWB stats 506.
- the statistics may be, or may include, a lux index; a relationship between intensities of red light to intensities of green light; a relationship between intensities of blue light to intensities of green light; a comparison between a white reference point and the relationship between intensities of red light to intensities of green light and the relationship between intensities of blue light to intensities of green light; a correlated color temperature; and/or a semantic label.
- the computing device may determine, based on the statistics, a first white-balance decision using a white-balance algorithm.
- white-balance algorithm 508 of system 500 may generate white-balance decision 510 based on AWB stats 506.
- the white-balance algorithm may determine the first white-balance decision based on a relationship between intensities of red light to intensities of green light and a relationship between intensities of blue light to intensities of green light.
- white-balance algorithm 508 may determine white-balance decision 510 based on AWB stats 506.
- the computing device may determine, based on the statistics, a second white-balance decision using a machine-learning model trained to determine white-balance decisions.
- machine-learning model 512 of system 500 may determine white-balance decision 514 based on AWB stats 506.
- the computing device may determine a third white-balance decision based on the first white-balance decision and the second white-balance decision.
- AWB decision aggregator 522 of system 500 may determine white-balance decision 524 based on white-balance decision 510 and white-balance decision 514.
- the computing device may determine a first weight for the first white-balance decision; determine a second weight for the second white-balance decision; and determine the third white-balance decision based on the first weight, the first white-balance decision, the second weight, and the second white-balance decision.
- AWB decision aggregator 522 may determine a first weight for white-balance decision 510 and a second weight for white-balance decision 514. Further, AWB decision aggregator 522 may determine white-balance decision 524 based on the first weight, white-balance decision 510, the second weight, and white-balance decision 514.
- the computing device may determine at least one of a scene flag or a confidence value.
- the scene flag may be related to a scene of the image data.
- the confidence value may be related to the second white-balance decision.
- the third white-balance decision may be based on at least one of the scene flag or the confidence value.
- at least one of the scene flag or the confidence value may be determined using the machine-learning model (e.g., machine-learning model 512) .
- machine-learning model 512 may determine confidence value 516 and/or scene flag 518.
- the machine-learning model may be a first machine-learning model and at least one of the scene flag or the confidence value may be determined by a second machine-learning model.
- machine-learning model 602 may determine confidence value 516 and/or scene flag 518.
- Scene flag 518 may be related to a scene represented by image data 502.
- the confidence value may be indicative of a level of confidence with which downstream operations should use the second white-balance decision.
- confidence value 516 may be related to a confidence related to white-balance decision 514.
- AWB decision aggregator 522 may determine white-balance decision 524 based, at least in part, on confidence value 516 and/or scene flag 518.
- the scene flag may be, or may include, an indication of a determination that the scene includes white points or an indication of a determination that the scene does not include white points.
- the computing device may, responsive to the scene flag including the indication of the determination that the scene includes the white points, determine the third white-balance decision is the first white-balance decision. For example, at decision block 702, if the scene is not misleading, process 700 may proceed to block 704 then to block 710, at which white-balance decision 524 may be determined to be white-balance decision 510.
- the computing device may, responsive to the scene flag comprising the indication of the determination that the scene does not include white points, determine a weight for the second white-balance decision based on the confidence value, and determine the third white-balance decision based on the first white-balance decision, the weight, and the second white-balance decision. For example, at decision block 702, if the scene is misleading, process 700 may proceed to block 706, then to block 708, and block 710. At block 710, white-balance decision 524 may be determined based on white-balance decision 510, white-balance decision 514, and a weight associated with white-balance decision 514.
- the computing device may determine the third white-balance decision further based on at least one of: a comparison between a white reference point and a relationship between intensities of red light to intensities of green light and a relationship between intensities of blue light to intensities of green light; a comparison between a fourth white-balance decision and at least one of the first white-balance decision, the second white-balance decision, and the third white-balance decision, wherein the fourth white-balance decision is based on spectrum data from a spectrum sensor; a correlated color temperature; or a color temperature decision.
- AWB decision aggregator 522 may determine white-balance decision 524 based, at least in part, on AWB stats 506 and/or information 520, which may include data regarding intensities of red, green, and blue light, CCT measurements, and/or CCT decisions.
- the computing device may determine the third white-balance decision further based on at least one of: spectrum data from a spectrum sensor; scene-detection results; a count of faces detected; or semantic segmentation information.
- AWB decision aggregator 522 may determine white-balance decision 524 based, at least in part, on information 520, which may include spectrum data from a spectrum sensor, scene-detection results, a count of faces detected, and/or semantic segmentation information.
- the third white-balance decision may be, or may include, a gain for red pixel data, a gain for green pixel data, and a gain for blue pixel data. Additionally or alternatively, the first and/or second white-balance decisions may likewise be, or include respective gains for red pixel data, gains for green pixel data, and gains for blue pixel data.
- the computing device may white-balance the image data based on the third white-balance decision.
- system 500 may white-balance image data 502 based on white-balance decision 524.
- the image data (e.g., image data 502, on which the obtained statistics are based) may be, or may include, a first portion of an image. Additionally, the statistics may be, or may include, first statistics.
- the computing device (or one or more components thereof) may further obtain second statistics based on second image data, the second image data may be, or may include, a second portion of the image. The second statistics may be associated with at least one of color or brightness of the second image data.
- the computing device may further determine, based on the second statistics, a fourth white-balance decision using the white-balance algorithm; determine, based on the second statistics, a fifth white-balance decision using the machine-learning model; and determine a sixth white-balance decision based on the fourth white-balance decision and the fifth white-balance decision.
- image data 502 may be a first portion of an image.
- System 500 may determine a first instance of each of white-balance decision 510, white-balance decision 514, and white-balance decision 524 based on the portion of the image.
- System 500 may receive a second portion of the image and determine a second instance of each of white-balance decision 510, white-balance decision 514, and white-balance decision 524 based on the second portion of the image.
- the computing device may white-balance the first image data based on the third white-balance decision to generate third image data; white-balance the second image data based on the sixth white-balance decision to generate fourth image data; and combine the third image data with the fourth image data.
- the computing device (or one or more components thereof) may white balance the first portion of the image based on the first instance of white-balance decision 524 and white balance the second portion of the image based on the second instance of white-balance decision 524.
- System 500 may combine the white-balanced image portions to generate a final image.
- the computing device may blend a white-balance of an edge of the third image data based on the third white-balance decision and the sixth white-balance decision; and blend a white-balance of an edge of the fourth image data based on the third white-balance decision and the sixth white-balance decision.
- system 500 may blend the white balance (e.g., the red gain, the green gain, and/or the blue gain) .
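One way such edge blending could work, sketched under the assumption that each decision is an (R, G, B) gain triple and that blending is a linear ramp across the seam (the function name and widths are hypothetical):

```python
import numpy as np

# Hypothetical sketch of blending white balance across the seam between two
# image portions that received different white-balance decisions: columns
# near the boundary get gains linearly interpolated between the decisions.

def blend_edge_gains(gains_a, gains_b, width):
    """Return `width` gain triples ramping from decision A to decision B."""
    t = np.linspace(0.0, 1.0, width)[:, None]  # blend factor per column
    a = np.asarray(gains_a, dtype=np.float64)
    b = np.asarray(gains_b, dtype=np.float64)
    return (1.0 - t) * a + t * b               # shape: (width, 3)

ramp = blend_edge_gains((2.0, 1.0, 1.4), (1.6, 1.0, 1.8), width=5)
```

A ramp like this avoids a visible color step where the two white-balanced portions are combined.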
- the methods described herein can be performed, in whole or in part, by a computing device or apparatus.
- one or more of the methods can be performed by image-processing system 100 of FIG. 1, image-processing device 104 of FIG. 1, image processor 124 of FIG. 1, system 500 of FIG. 5, stats engine 504 of FIG. 5, white-balance algorithm 508 of FIG. 5, machine-learning model 512 of FIG. 5, AWB decision aggregator 522 of FIG. 5, system 600 of FIG. 6, AWB decision aggregator 522 of FIG. 6, and/or machine-learning model 602 of FIG. 6, or by another system or device.
- one or more of the methods can be performed, in whole or in part, by the computing-device architecture 1100 shown in FIG. 11.
- a computing device with the computing-device architecture 1100 shown in FIG. 11 can include, or be included in, the components of the image-processing system 100 of FIG. 1, image-processing device 104 of FIG. 1, image processor 124 of FIG. 1, system 500 of FIG. 5, stats engine 504 of FIG. 5, white-balance algorithm 508 of FIG. 5, machine-learning model 512 of FIG. 5, AWB decision aggregator 522 of FIG. 5, system 600 of FIG. 6, AWB decision aggregator 522 of FIG. 6, and/or machine-learning model 602 of FIG. 6.
- the computing device or apparatus can include various components, such as one or more input devices, one or more output devices, one or more processors, one or more microprocessors, one or more microcomputers, one or more cameras, one or more sensors, and/or other component (s) that are configured to carry out the steps of processes described herein.
- the computing device can include a display, a network interface configured to communicate and/or receive the data, any combination thereof, and/or other component (s) .
- the network interface can be configured to communicate and/or receive Internet Protocol (IP) based data or other type of data.
- the components of the computing device can be implemented in circuitry.
- the components can include and/or can be implemented using electronic circuits or other electronic hardware, which can include one or more programmable electronic circuits (e.g., microprocessors, graphics processing units (GPUs) , digital signal processors (DSPs) , central processing units (CPUs) , and/or other suitable electronic circuits) , and/or can include and/or be implemented using computer software, firmware, or any combination thereof, to perform the various operations described herein.
- Process 700, process 800, and/or other process described herein are illustrated as logical flow diagrams, the operation of which represents a sequence of operations that can be implemented in hardware, computer instructions, or a combination thereof.
- the operations represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations.
- computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular data types.
- the order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the processes.
- process 700, process 800, and/or other process described herein can be performed under the control of one or more computer systems configured with executable instructions and can be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware, or combinations thereof.
- the code can be stored on a computer-readable or machine-readable storage medium, for example, in the form of a computer program comprising a plurality of instructions executable by one or more processors.
- the computer-readable or machine-readable storage medium can be non-transitory.
- FIG. 9 is an illustrative example of a neural network 900 (e.g., a deep-learning neural network) that can be used to implement machine-learning based white-balance-decision determination, confidence-value determination, misleading-scene determination, feature segmentation, implicit-neural-representation generation, rendering, classification, object detection, image recognition (e.g., face recognition, object recognition, scene recognition, etc. ) , feature extraction, authentication, gaze detection, gaze prediction, and/or automation.
- neural network 900 may be an example of, or can implement, machine-learning model 512 and/or machine-learning model 602.
- An input layer 902 includes input data.
- input layer 902 can include data representing AWB stats 506 or white-balance decision 514.
- Neural network 900 includes multiple hidden layers 906a, 906b, through 906n.
- the hidden layers 906a, 906b, through hidden layer 906n include “n” number of hidden layers, where “n” is an integer greater than or equal to one.
- the number of hidden layers can be made to include as many layers as needed for the given application.
- Neural network 900 further includes an output layer 904 that provides an output resulting from the processing performed by the hidden layers 906a, 906b, through 906n.
- output layer 904 can provide white-balance decision 514, confidence value 516, and/or scene flag 518.
- Neural network 900 may be, or may include, a multi-layer neural network of interconnected nodes. Each node can represent a piece of information. Information associated with the nodes is shared among the different layers and each layer retains information as information is processed.
- neural network 900 can include a feed-forward network, in which case there are no feedback connections where outputs of the network are fed back into itself.
- neural network 900 can include a recurrent neural network, which can have loops that allow information to be carried across nodes while reading in input.
- Nodes of input layer 902 can activate a set of nodes in the first hidden layer 906a.
- each of the input nodes of input layer 902 is connected to each of the nodes of the first hidden layer 906a.
- the nodes of first hidden layer 906a can transform the information of each input node by applying activation functions to the input node information.
- the information derived from the transformation can then be passed to and can activate the nodes of the next hidden layer 906b, which can perform their own designated functions.
- Example functions include convolutional, up-sampling, data transformation, and/or any other suitable functions.
- the output of the hidden layer 906b can then activate nodes of the next hidden layer, and so on.
- the output of the last hidden layer 906n can activate one or more nodes of the output layer 904, at which an output is provided.
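The forward flow described above, from input layer through hidden layers to output layer, can be sketched as follows; the layer sizes, random weights, and activation function are illustrative assumptions rather than details of neural network 900.

```python
import numpy as np

# Minimal feed-forward sketch: input values activate hidden layers through
# weighted connections and activation functions, and the last hidden layer
# activates the output layer. Sizes and weights are illustrative.

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

layer_sizes = [4, 8, 8, 3]  # input, two hidden layers, output
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    for w in weights[:-1]:
        x = relu(x @ w)      # hidden layers transform and activate
    return x @ weights[-1]   # output layer provides the result

out = forward(np.ones(4))    # a 3-element output vector
```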
- While nodes (e.g., node 908) in neural network 900 are shown as having multiple output lines, a node has a single output, and all lines shown as being output from a node represent the same output value.
- each node or interconnection between nodes can have a weight that is a set of parameters derived from the training of neural network 900.
- Once neural network 900 is trained, it can be referred to as a trained neural network, which can be used to perform one or more operations.
- an interconnection between nodes can represent a piece of information learned about the interconnected nodes.
- the interconnection can have a tunable numeric weight that can be tuned (e.g., based on a training dataset) , allowing neural network 900 to be adaptive to inputs and able to learn as more and more data is processed.
- Neural network 900 may be pre-trained to process the features from the data in the input layer 902 using the different hidden layers 906a, 906b, through 906n in order to provide the output through the output layer 904.
- neural network 900 can be trained using training data that includes both images and labels, as described above. For instance, training images can be input into the network, with each training image having a label indicating the features in the images (for the feature-segmentation machine-learning system) or a label indicating classes of an activity in each image.
- a training image can include an image of a number 2, in which case the label for the image can be [0 0 1 0 0 0 0 0 0 0] .
- neural network 900 can adjust the weights of the nodes using a training process called backpropagation.
- a backpropagation process can include a forward pass, a loss function, a backward pass, and a weight update.
- the forward pass, loss function, backward pass, and parameter update are performed for one training iteration.
- the process can be repeated for a certain number of iterations for each set of training images until neural network 900 is trained well enough so that the weights of the layers are accurately tuned.
- the forward pass can include passing a training image through neural network 900.
- the weights are initially randomized before neural network 900 is trained.
- an image can include an array of numbers representing the pixels of the image. Each number in the array can include a value from 0 to 255 describing the pixel intensity at that position in the array.
- the array can include a 28 x 28 x 3 array of numbers with 28 rows and 28 columns of pixels and 3 color components (such as red, green, and blue, or luma and two chroma components, or the like) .
- the output will likely include values that do not give preference to any particular class due to the weights being randomly selected at initialization. For example, if the output is a vector with probabilities that the object includes different classes, the probability value for each of the different classes can be equal or at least very similar (e.g., for ten possible classes, each class can have a probability value of 0.1) . With the initial weights, neural network 900 is unable to determine low-level features and thus cannot make an accurate determination of what the classification of the object might be.
- a loss function can be used to analyze error in the output. Any suitable loss function definition can be used, such as a cross-entropy loss. Another example of a loss function includes the mean squared error (MSE) , defined as E_total = Σ ½ (target − output) ². The loss can be set to be equal to the value of E_total.
- Neural network 900 can perform a backward pass by determining which inputs (weights) most contributed to the loss of the network and can adjust the weights so that the loss decreases and is eventually minimized.
- a derivative of the loss with respect to the weights (denoted as dL/dW, where W are the weights at a particular layer) can be computed to determine the weights that contributed most to the loss of the network. After the derivative is computed, a weight update can be performed by updating all the weights of the filters.
- the weights can be updated so that they change in the opposite direction of the gradient.
- the weight update can be denoted as w = w_i − η (dL/dW) , where w denotes a weight, w_i denotes the initial weight, and η denotes a learning rate.
- the learning rate can be set to any suitable value, with a high value indicating larger weight updates and a lower value indicating smaller weight updates.
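The pieces above, a forward pass, an MSE-style loss, a backward pass, and a gradient-descent weight update, can be sketched for a single linear node; the function and variable names are illustrative.

```python
import numpy as np

# Sketch of one training iteration for a single linear node: forward pass,
# loss E_total = sum(0.5 * (target - output)^2), backward pass computing
# dL/dW, and the weight update w = w_i - eta * dL/dW.

def train_step(w, x, target, learning_rate):
    output = x @ w                               # forward pass
    loss = 0.5 * np.sum((target - output) ** 2)  # MSE-style loss
    grad = -(target - output) * x                # dL/dW for a linear node
    return w - learning_rate * grad, loss        # weight update

w = np.array([0.0, 0.0])
x = np.array([1.0, 2.0])
w, loss0 = train_step(w, x, target=1.0, learning_rate=0.1)
_, loss1 = train_step(w, x, target=1.0, learning_rate=0.1)
# across iterations the loss decreases as the weights are tuned
```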
- Neural network 900 can include any suitable deep network.
- One example includes a convolutional neural network (CNN) , which includes an input layer and an output layer, with multiple hidden layers between the input and output layers.
- the hidden layers of a CNN include a series of convolutional, nonlinear, pooling (for downsampling) , and fully connected layers.
- Neural network 900 can include any other deep network other than a CNN, such as an autoencoder, deep belief networks (DBNs) , or recurrent neural networks (RNNs) , among others.
- FIG. 10 is an illustrative example of a convolutional neural network (CNN) 1000.
- the input layer 1002 of the CNN 1000 includes data representing an image or frame.
- the data can include an array of numbers representing the pixels of the image, with each number in the array including a value from 0 to 255 describing the pixel intensity at that position in the array.
- the array can include a 28 x 28 x 3 array of numbers with 28 rows and 28 columns of pixels and 3 color components (e.g., red, green, and blue, or luma and two chroma components, or the like) .
- the image can be passed through a convolutional hidden layer 1004, an optional non-linear activation layer, a pooling hidden layer 1006, and fully connected layer 1008 (which fully connected layer 1008 can be hidden) to get an output at the output layer 1010. While only one of each hidden layer is shown in FIG. 10, one of ordinary skill will appreciate that multiple convolutional hidden layers, non-linear layers, pooling hidden layers, and/or fully connected layers can be included in the CNN 1000.
- the output can indicate a single class of an object or can include a probability of classes that best describe the object in the image.
- the first layer of the CNN 1000 can be the convolutional hidden layer 1004.
- the convolutional hidden layer 1004 can analyze image data of the input layer 1002. Each node of the convolutional hidden layer 1004 is connected to a region of nodes (pixels) of the input image called a receptive field.
- the convolutional hidden layer 1004 can be considered as one or more filters (each filter corresponding to a different activation or feature map) , with each convolutional iteration of a filter being a node or neuron of the convolutional hidden layer 1004. For example, the region of the input image that a filter covers at each convolutional iteration would be the receptive field for the filter.
- in an illustrative example, each filter (and corresponding receptive field) can be a 5 x 5 array.
- Each connection between a node and a receptive field for that node learns a weight and, in some cases, an overall bias such that each node learns to analyze its particular local receptive field in the input image.
- Each node of the convolutional hidden layer 1004 will have the same weights and bias (called a shared weight and a shared bias) .
- the filter has an array of weights (numbers) and the same depth as the input.
- a filter will have a depth of 3 for an image frame example (according to three color components of the input image) .
- An illustrative example size of the filter array is 5 x 5 x 3, corresponding to a size of the receptive field of a node.
- the convolutional nature of the convolutional hidden layer 1004 is due to each node of the convolutional layer being applied to its corresponding receptive field.
- a filter of the convolutional hidden layer 1004 can begin in the top-left corner of the input image array and can convolve around the input image.
- each convolutional iteration of the filter can be considered a node or neuron of the convolutional hidden layer 1004.
- the values of the filter are multiplied with a corresponding number of the original pixel values of the image (e.g., the 5x5 filter array is multiplied by a 5x5 array of input pixel values at the top-left corner of the input image array) .
- the multiplications from each convolutional iteration can be summed together to obtain a total sum for that iteration or node.
- the process is next continued at a next location in the input image according to the receptive field of a next node in the convolutional hidden layer 1004.
- a filter can be moved by a step amount (referred to as a stride) to the next receptive field.
- the stride can be set to 1 or any other suitable amount. For example, if the stride is set to 1, the filter will be moved to the right by 1 pixel at each convolutional iteration. Processing the filter at each unique location of the input volume produces a number representing the filter results for that location, resulting in a total sum value being determined for each node of the convolutional hidden layer 1004.
- the mapping from the input layer to the convolutional hidden layer 1004 is referred to as an activation map (or feature map) .
- the activation map includes a value for each node representing the filter results at each location of the input volume.
- the activation map can include an array that includes the various total sum values resulting from each iteration of the filter on the input volume. For example, the activation map will include a 24 x 24 array if a 5 x 5 filter is applied to each pixel (a stride of 1) of a 28 x 28 input image.
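The convolution just described, sliding a filter across the input, multiplying its weights by each receptive field, and summing, can be sketched as follows (a naive loop for clarity, with illustrative names):

```python
import numpy as np

# Sketch of the convolution described above: sliding a 5x5 filter with a
# stride of 1 over a 28x28 input yields a 24x24 activation map, where each
# value is the sum of filter weights times the receptive-field pixels.

def convolve2d(image, kernel, stride=1):
    kh, kw = kernel.shape
    out_h = (image.shape[0] - kh) // stride + 1
    out_w = (image.shape[1] - kw) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            region = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(region * kernel)  # one node's total sum
    return out

activation_map = convolve2d(np.ones((28, 28)), np.ones((5, 5)))
# activation_map.shape is (24, 24); every value here is 25.0
```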
- the convolutional hidden layer 1004 can include several activation maps in order to identify multiple features in an image. The example shown in FIG. 10 includes three activation maps. Using three activation maps, the convolutional hidden layer 1004 can detect three different kinds of features, with each feature being detectable across the entire image.
- a non-linear hidden layer can be applied after the convolutional hidden layer 1004.
- the non-linear layer can be used to introduce non-linearity to a system that has been computing linear operations.
- One illustrative example of a non-linear layer is a rectified linear unit (ReLU) layer.
- the pooling hidden layer 1006 can be applied after the convolutional hidden layer 1004 (and after the non-linear hidden layer when used) .
- the pooling hidden layer 1006 is used to simplify the information in the output from the convolutional hidden layer 1004.
- the pooling hidden layer 1006 can take each activation map output from the convolutional hidden layer 1004 and generate a condensed activation map (or feature map) using a pooling function. Max-pooling is one example of a function performed by a pooling hidden layer.
- Other forms of pooling functions can be used by the pooling hidden layer 1006, such as average pooling, L2-norm pooling, or other suitable pooling functions.
- a pooling function (e.g., a max-pooling filter, an L2-norm filter, or other suitable pooling filter) is applied to each activation map included in the convolutional hidden layer 1004.
- three pooling filters are used for the three activation maps in the convolutional hidden layer 1004.
- max-pooling can be used by applying a max-pooling filter (e.g., having a size of 2x2) with a stride (e.g., equal to a dimension of the filter, such as a stride of 2) to an activation map output from the convolutional hidden layer 1004.
- the output from a max-pooling filter includes the maximum number in every sub-region that the filter convolves around.
- each unit in the pooling layer can summarize a region of 2 ⁇ 2 nodes in the previous layer (with each node being a value in the activation map) .
- For example, four values (nodes) in an activation map will be analyzed by a 2x2 max-pooling filter at each iteration of the filter, with the maximum value from the four values being output as the “max” value. If such a max-pooling filter is applied to an activation map from the convolutional hidden layer 1004 having a dimension of 24x24 nodes, the output from the pooling hidden layer 1006 will be an array of 12x12 nodes.
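The 2x2 max-pooling step above can be sketched as follows; the function name is illustrative, and a 24x24 map condenses to 12x12 as described.

```python
import numpy as np

# Sketch of 2x2 max-pooling with a stride of 2: each output value is the
# maximum of a 2x2 sub-region of the activation map.

def max_pool(activation_map, size=2, stride=2):
    h = (activation_map.shape[0] - size) // stride + 1
    w = (activation_map.shape[1] - size) // stride + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            region = activation_map[i*stride:i*stride+size,
                                    j*stride:j*stride+size]
            out[i, j] = region.max()  # the "max" value for this sub-region
    return out

pooled = max_pool(np.arange(24 * 24, dtype=float).reshape(24, 24))
# pooled.shape is (12, 12)
```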
- an L2-norm pooling filter could also be used.
- the L2-norm pooling filter includes computing the square root of the sum of the squares of the values in the 2 ⁇ 2 region (or other suitable region) of an activation map (instead of computing the maximum values as is done in max-pooling) and using the computed values as an output.
- the pooling function determines whether a given feature is found anywhere in a region of the image and discards the exact positional information. This can be done without affecting results of the feature detection because, once a feature has been found, the exact location of the feature is not as important as its approximate location relative to other features. Max-pooling (as well as other pooling methods) offer the benefit that there are many fewer pooled features, thus reducing the number of parameters needed in later layers of the CNN 1000.
- the final layer of connections in the network is a fully-connected layer that connects every node from the pooling hidden layer 1006 to every one of the output nodes in the output layer 1010.
- the input layer includes 28 x 28 nodes encoding the pixel intensities of the input image
- the convolutional hidden layer 1004 includes 3 x 24 x 24 hidden feature nodes based on application of a 5 x 5 local receptive field (for the filters) to three activation maps
- the pooling hidden layer 1006 includes a layer of 3 x 12 x 12 hidden feature nodes based on application of a max-pooling filter to 2 x 2 regions across each of the three feature maps.
- the output layer 1010 can include ten output nodes. In such an example, every node of the 3x12x12 pooling hidden layer 1006 is connected to every node of the output layer 1010.
- the fully connected layer 1008 can obtain the output of the previous pooling hidden layer 1006 (which should represent the activation maps of high-level features) and determines the features that most correlate to a particular class. For example, the fully connected layer 1008 can determine the high-level features that most strongly correlate to a particular class and can include weights (nodes) for the high-level features. A product can be computed between the weights of the fully connected layer 1008 and the pooling hidden layer 1006 to obtain probabilities for the different classes.
- If the CNN 1000 is being used to predict that an object in an image is a person, high values will be present in the activation maps that represent high-level features of people (e.g., two legs are present, a face is present at the top of the object, two eyes are present at the top left and top right of the face, a nose is present in the middle of the face, a mouth is present at the bottom of the face, and/or other features common for a person) .
- In one example, a 10-dimensional output vector representing ten different classes of objects is [0 0 0.05 0.8 0 0.15 0 0 0 0] .
- the vector indicates that there is a 5% probability that the image is the third class of object (e.g., a dog) , an 80% probability that the image is the fourth class of object (e.g., a human) , and a 15% probability that the image is the sixth class of object (e.g., a kangaroo) .
- the probability for a class can be considered a confidence level that the object is part of that class.
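Interpreting such an output vector can be sketched as picking the highest-probability class and treating its probability as the confidence level; the class names here are illustrative stand-ins.

```python
# Sketch of interpreting an output probability vector like the one above:
# the highest-probability class is the prediction, and its probability
# serves as the confidence level. Class names are hypothetical.

def classify(probabilities, class_names):
    best = max(range(len(probabilities)), key=probabilities.__getitem__)
    return class_names[best], probabilities[best]

probs = [0, 0, 0.05, 0.8, 0, 0.15, 0, 0, 0, 0]
names = ["c0", "c1", "dog", "human", "c4", "kangaroo", "c6", "c7", "c8", "c9"]
label, confidence = classify(probs, names)  # ("human", 0.8)
```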
- FIG. 11 illustrates an example computing-device architecture 1100 of an example computing device which can implement the various techniques described herein.
- the computing device can include a mobile device, a wearable device, an extended reality device (e.g., a virtual reality (VR) device, an augmented reality (AR) device, or a mixed reality (MR) device) , a personal computer, a laptop computer, a video server, a vehicle (or computing device of a vehicle) , or other device.
- the computing-device architecture 1100 may include, implement, or be included in any or all of image-processing system 100 of FIG. 1, image-processing device 104 of FIG. 1, image processor 124 of FIG. 1, system 500 of FIG. 5, stats engine 504 of FIG.
- computing-device architecture 1100 may be configured to perform process 700, process 800, and/or other process described herein.
- the components of computing-device architecture 1100 are shown in electrical communication with each other using connection 1112, such as a bus.
- the example computing-device architecture 1100 includes a processing unit (CPU or processor) 1102 and computing device connection 1112 that couples various computing device components including computing device memory 1110, such as read only memory (ROM) 1108 and random-access memory (RAM) 1106, to processor 1102.
- Computing-device architecture 1100 can include a cache of high-speed memory connected directly with, in close proximity to, or integrated as part of processor 1102.
- Computing-device architecture 1100 can copy data from memory 1110 and/or the storage device 1114 to cache 1104 for quick access by processor 1102. In this way, the cache can provide a performance boost that avoids processor 1102 delays while waiting for data.
- These and other modules can control or be configured to control processor 1102 to perform various actions.
- Other computing device memory 1110 may be available for use as well. Memory 1110 can include multiple different types of memory with different performance characteristics.
- Processor 1102 can include any general-purpose processor and a hardware or software service, such as service 1 1116, service 2 1118, and service 3 1120 stored in storage device 1114, configured to control processor 1102 as well as a special-purpose processor where software instructions are incorporated into the processor design.
- Processor 1102 may be a self-contained system, containing multiple cores or processors, a bus, memory controller, cache, etc.
- a multi-core processor may be symmetric or asymmetric.
- input device 1122 can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech and so forth.
- Output device 1124 can also be one or more of a number of output mechanisms known to those of skill in the art, such as a display, projector, television, speaker device, etc.
- multimodal computing devices can enable a user to provide multiple types of input to communicate with computing-device architecture 1100.
- Communication interface 1126 can generally govern and manage the user input and computing device output. There is no restriction on operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.
- Storage device 1114 is a non-volatile memory and can be a hard disk or other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random-access memories (RAMs) 1106, read only memory (ROM) 1108, and hybrids thereof.
- Storage device 1114 can include services 1116, 1118, and 1120 for controlling processor 1102. Other hardware or software modules are contemplated.
- Storage device 1114 can be connected to the computing device connection 1112.
- a hardware module that performs a particular function can include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as processor 1102, connection 1112, output device 1124, and so forth, to carry out the function.
- the term “substantially, ” in reference to a given parameter, property, or condition may refer to a degree that one of ordinary skill in the art would understand that the given parameter, property, or condition is met with a small degree of variance, such as, for example, within acceptable manufacturing tolerances.
- the parameter, property, or condition may be at least 90% met, at least 95% met, or even at least 99% met.
- aspects of the present disclosure are applicable to any suitable electronic device (such as security systems, smartphones, tablets, laptop computers, vehicles, drones, or other devices) including or coupled to one or more active depth sensing systems. While described below with respect to a device having or coupled to one light projector, aspects of the present disclosure are applicable to devices having any number of light projectors and are therefore not limited to specific devices.
- a device is not limited to one or a specific number of physical objects (such as one smartphone, one controller, one processing system and so on) .
- a device may be any electronic device with one or more parts that may implement at least some portions of this disclosure. While the below description and examples use the term “device” to describe various aspects of this disclosure, the term “device” is not limited to a specific configuration, type, or number of objects.
- the term “system” is not limited to multiple components or specific aspects. For example, a system may be implemented on one or more printed circuit boards or other substrates and may have movable or static components. While the below description and examples use the term “system” to describe various aspects of this disclosure, the term “system” is not limited to a specific configuration, type, or number of objects.
- a process is terminated when its operations are completed but could have additional steps not included in a figure.
- a process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc.
- when a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
- Processes and methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer-readable media.
- Such instructions can include, for example, instructions and data which cause or otherwise configure a general-purpose computer, special purpose computer, or a processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network.
- the computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, source code, etc.
- computer-readable medium includes, but is not limited to, portable or non-portable storage devices, optical storage devices, and various other mediums capable of storing, containing, or carrying instruction (s) and/or data.
- a computer-readable medium may include a non-transitory medium in which data can be stored and that does not include carrier waves and/or transitory electronic signals propagating wirelessly or over wired connections. Examples of a non-transitory medium may include, but are not limited to, a magnetic disk or tape, optical storage media such as compact disk (CD) or digital versatile disk (DVD) , flash memory, magnetic or optical disks, USB devices provided with non-volatile memory, networked storage devices, any suitable combination thereof, among others.
- a computer-readable medium may have stored thereon code and/or machine-executable instructions that may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements.
- a code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents.
- Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, or the like.
- the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like.
- non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.
- Devices implementing processes and methods according to these disclosures can include hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof, and can take any of a variety of form factors.
- the program code or code segments to perform the necessary tasks may be stored in a computer-readable or machine-readable medium.
- a processor may perform the necessary tasks.
- form factors include laptops, smart phones, mobile phones, tablet devices or other small form factor personal computers, personal digital assistants, rackmount devices, standalone devices, and so on.
- Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example.
- the instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are example means for providing the functions described in the disclosure.
- Such configuration can be accomplished, for example, by designing electronic circuits or other hardware to perform the operation, by programming programmable electronic circuits (e.g., microprocessors, or other suitable electronic circuits) to perform the operation, or any combination thereof.
- the term “coupled to” refers to any component that is physically connected to another component either directly or indirectly, and/or any component that is in communication with another component (e.g., connected to the other component over a wired or wireless connection, and/or other suitable communication interface) either directly or indirectly.
- Claim language or other language reciting “at least one of” a set and/or “one or more” of a set indicates that one member of the set or multiple members of the set (in any combination) satisfy the claim.
- claim language reciting “at least one of A and B” or “at least one of A or B” means A, B, or A and B.
- claim language reciting “at least one of A, B, and C” or “at least one of A, B, or C” means A, B, C, or A and B, or A and C, or B and C, A and B and C, or any duplicate information or data (e.g., A and A, B and B, C and C, A and A and B, and so on) , or any other ordering, duplication, or combination of A, B, and C.
- the language “at least one of” a set and/or “one or more” of a set does not limit the set to the items listed in the set.
- claim language reciting “at least one of A and B” or “at least one of A or B” may mean A, B, or A and B, and may additionally include items not listed in the set of A and B.
- the phrases “at least one” and “one or more” are used interchangeably herein.
- Claim language or other language reciting “at least one processor configured to, ” “at least one processor being configured to, ” “one or more processors configured to, ” “one or more processors being configured to, ” or the like indicates that one processor or multiple processors (in any combination) can perform the associated operation (s) .
- claim language reciting “at least one processor configured to: X, Y, and Z” means a single processor can be used to perform operations X, Y, and Z; or that multiple processors are each tasked with a certain subset of operations X, Y, and Z such that together the multiple processors perform X, Y, and Z; or that a group of multiple processors work together to perform operations X, Y, and Z.
- claim language reciting “at least one processor configured to: X, Y, and Z” can mean that any single processor may perform only a subset of operations X, Y, and Z.
- one element may perform all functions, or more than one element may collectively perform the functions.
- each function need not be performed by each of those elements (e.g., different functions may be performed by different elements) and/or each function need not be performed in whole by only one element (e.g., different elements may perform different sub-functions of a function) .
- one element may be configured to cause the other element to perform all functions, or more than one element may collectively be configured to cause the other element to perform the functions.
- when an entity (e.g., any entity or device described herein) is described as being configured to perform functions, the entity may be configured to cause one or more elements (individually or collectively) to perform the functions.
- the one or more components of the entity may include at least one memory, at least one processor, at least one communication interface, another component configured to perform one or more (or all) of the functions, and/or any combination thereof.
- the entity may be configured to cause one component to perform all functions, or to cause more than one component to collectively perform the functions.
- each function need not be performed by each of those components (e.g., different functions may be performed by different components) and/or each function need not be performed in whole by only one component (e.g., different components may perform different sub-functions of a function) .
- the techniques described herein may also be implemented in electronic hardware, computer software, firmware, or any combination thereof. Such techniques may be implemented in any of a variety of devices such as general-purpose computers, wireless communication device handsets, or integrated circuit devices having multiple uses including application in wireless communication device handsets and other devices. Any features described as modules or components may be implemented together in an integrated logic device or separately as discrete but interoperable logic devices. If implemented in software, the techniques may be realized at least in part by a computer-readable data storage medium including program code including instructions that, when executed, perform one or more of the methods described above. The computer-readable data storage medium may form part of a computer program product, which may include packaging materials.
- the computer-readable medium may include memory or data storage media, such as random-access memory (RAM) such as synchronous dynamic random-access memory (SDRAM) , read-only memory (ROM) , non-volatile random- access memory (NVRAM) , electrically erasable programmable read-only memory (EEPROM) , FLASH memory, magnetic or optical data storage media, and the like.
- the techniques additionally, or alternatively, may be realized at least in part by a computer-readable communication medium that carries or communicates program code in the form of instructions or data structures and that can be accessed, read, and/or executed by a computer, such as propagated signals or waves.
- the program code may be executed by a processor, which may include one or more processors, such as one or more digital signal processors (DSPs), general-purpose microprocessors, application-specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry.
- a general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.
- a processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Accordingly, the term “processor, ” as used herein may refer to any of the foregoing structure, any combination of the foregoing structure, or any other structure or apparatus suitable for implementation of the techniques described herein.
- Illustrative aspects of the disclosure include:
- Aspect 1 An apparatus for determining a white balance for an image, the apparatus comprising: at least one memory; and at least one processor coupled to the at least one memory and configured to: obtain statistics based on image data, the statistics being associated with at least one of color or brightness of the image data; determine, based on the statistics, a first white-balance decision using a white-balance algorithm; determine, based on the statistics, a second white-balance decision using a machine-learning model trained to determine white-balance decisions; and determine a third white-balance decision based on the first white-balance decision and the second white-balance decision.
- Aspect 2 The apparatus of aspect 1, wherein, to determine the third white-balance decision, the at least one processor is configured to: determine a first weight for the first white-balance decision; determine a second weight for the second white-balance decision; and determine the third white-balance decision based on the first weight, the first white-balance decision, the second weight, and the second white-balance decision.
- Aspect 3 The apparatus of any one of aspects 1 or 2, wherein the at least one processor is further configured to determine at least one of a scene flag or a confidence value, wherein the scene flag is related to a scene of the image data, wherein the confidence value is related to the second white-balance decision, and wherein the third white-balance decision is further based on at least one of the scene flag or the confidence value.
- Aspect 4 The apparatus of any one of aspects 1 to 3, wherein the scene flag comprises an indication of a determination that the scene includes white points or an indication of a determination that the scene does not include white points.
- Aspect 5 The apparatus of aspect 4, wherein, to determine the third white-balance decision, the at least one processor is configured to, responsive to the scene flag comprising the indication of the determination that the scene includes the white points, determine the third white-balance decision is the first white-balance decision.
- Aspect 6 The apparatus of any one of aspects 4 or 5, wherein, to determine the third white-balance decision, the at least one processor is configured to, responsive to the scene flag comprising the indication of the determination that the scene does not include white points, determine a weight for the second white-balance decision based on the confidence value, and determine the third white-balance decision based on the first white-balance decision, the weight, and the second white-balance decision.
- Aspect 7 The apparatus of any one of aspects 3 to 6, wherein the confidence value is indicative of a level of confidence with which downstream operations should use the second white-balance decision.
- Aspect 8 The apparatus of any one of aspects 3 to 7, wherein at least one of the scene flag or the confidence value is determined using the machine-learning model.
- Aspect 9 The apparatus of any one of aspects 3 to 8, wherein the machine-learning model is a first machine-learning model and wherein at least one of the scene flag or the confidence value are determined by a second machine-learning model.
- Aspect 10 The apparatus of any one of aspects 1 to 9, wherein the third white-balance decision is further based on at least one of: a comparison between a white reference point and a relationship between intensities of red light to intensities of green light and a relationship between intensities of blue light to intensities of green light; a comparison between a fourth white-balance decision and at least one of the first white-balance decision, the second white-balance decision, and the third white-balance decision, wherein the fourth white-balance decision is based on spectrum data from a spectrum sensor; a correlated color temperature; or a color temperature decision.
- Aspect 11 The apparatus of any one of aspects 1 to 10, wherein the third white-balance decision is determined based on at least one of: spectrum data from a spectrum sensor; scene-detection results; a count of faces detected; or semantic segmentation information.
- Aspect 12 The apparatus of any one of aspects 1 to 11, wherein the third white-balance decision comprises a gain for red pixel data, a gain for green pixel data, and a gain for blue pixel data.
- Aspect 13 The apparatus of any one of aspects 1 to 12, wherein the statistics comprise at least one of: a lux index; a relationship between intensities of red light to intensities of green light; a relationship between intensities of blue light to intensities of green light; a comparison between a white reference point and the relationship between intensities of red light to intensities of green light and the relationship between intensities of blue light to intensities of green light; a correlated color temperature; or a semantic label.
- Aspect 14 The apparatus of any one of aspects 1 to 13, wherein the white-balance algorithm is configured to determine the first white-balance decision based on a relationship between intensities of red light to intensities of green light and a relationship between intensities of blue light to intensities of green light.
- Aspect 15 The apparatus of any one of aspects 1 to 14, wherein the at least one processor is further configured to white-balance the image data based on the third white-balance decision.
- Aspect 16 The apparatus of any one of aspects 1 to 15, wherein the image data comprises a first portion of an image, wherein the statistics comprise first statistics, and wherein the at least one processor is further configured to: obtain second statistics based on second image data, wherein the second image data comprises a second portion of the image and wherein the second statistics are associated with at least one of color or brightness of the second image data; determine, based on the second statistics, a fourth white-balance decision using the white-balance algorithm; determine, based on the second statistics, a fifth white-balance decision using the machine-learning model; and determine a sixth white-balance decision based on the fourth white-balance decision and the fifth white-balance decision.
- Aspect 17 The apparatus of aspect 16, wherein the at least one processor is further configured to: white-balance the first image data based on the third white-balance decision to generate third image data; white-balance the second image data based on the sixth white-balance decision to generate fourth image data; and combine the third image data with the fourth image data.
- Aspect 18 The apparatus of aspect 17, wherein the at least one processor is further configured to: blend a white-balance of an edge of the third image data based on the third white-balance decision and the sixth white-balance decision; and blend a white-balance of an edge of the fourth image data based on the third white-balance decision and the sixth white-balance decision.
- Aspect 19 A method for determining a white balance for an image, the method comprising: obtaining statistics based on image data, the statistics being associated with at least one of color or brightness of the image data; determining, based on the statistics, a first white-balance decision using a white-balance algorithm; determining, based on the statistics, a second white-balance decision using a machine-learning model trained to determine white-balance decisions; and determining a third white-balance decision based on the first white-balance decision and the second white-balance decision.
- Aspect 20 The method of aspect 19, wherein determining the third white-balance decision comprises: determining a first weight for the first white-balance decision; determining a second weight for the second white-balance decision; and determining the third white-balance decision based on the first weight, the first white-balance decision, the second weight, and the second white-balance decision.
- Aspect 21 The method of any one of aspects 19 or 20, further comprising determining at least one of a scene flag or a confidence value, wherein the scene flag is related to a scene of the image data, wherein the confidence value is related to the second white-balance decision, and wherein the third white-balance decision is further based on at least one of the scene flag or the confidence value.
- Aspect 22 The method of aspect 21, wherein the scene flag comprises an indication of a determination that the scene includes white points or an indication of a determination that the scene does not include white points.
- Aspect 23 The method of aspect 22, wherein determining the third white-balance decision comprises, responsive to the scene flag comprising the indication of the determination that the scene includes the white points, determining the third white-balance decision is the first white-balance decision.
- Aspect 24 The method of any one of aspects 22 or 23, wherein determining the third white-balance decision further comprises, responsive to the scene flag comprising the indication of the determination that the scene does not include white points, determining a weight for the second white-balance decision based on the confidence value, and determining the third white-balance decision based on the first white-balance decision, the weight, and the second white-balance decision.
- Aspect 25 The method of any one of aspects 21 to 24, wherein the confidence value is indicative of a level of confidence with which downstream operations should use the second white-balance decision.
- Aspect 26 The method of any one of aspects 21 to 25, wherein at least one of the scene flag or the confidence value is determined using the machine-learning model.
- Aspect 27 The method of any one of aspects 21 to 26, wherein the machine-learning model is a first machine-learning model and wherein at least one of the scene flag or the confidence value are determined by a second machine-learning model.
- Aspect 28 The method of any one of aspects 19 to 27, wherein the third white-balance decision is further based on at least one of: a comparison between a white reference point and a relationship between intensities of red light to intensities of green light and a relationship between intensities of blue light to intensities of green light; a comparison between a fourth white-balance decision and at least one of the first white-balance decision, the second white-balance decision, and the third white-balance decision, wherein the fourth white-balance decision is based on spectrum data from a spectrum sensor; a correlated color temperature; or a color temperature decision.
- Aspect 29 The method of any one of aspects 19 to 28, wherein the third white-balance decision is determined based on at least one of: spectrum data from a spectrum sensor; scene-detection results; a count of faces detected; or semantic segmentation information.
- Aspect 30 The method of any one of aspects 19 to 29, wherein the third white-balance decision comprises a gain for red pixel data, a gain for green pixel data, and a gain for blue pixel data.
- Aspect 31 The method of any one of aspects 19 to 30, wherein the statistics comprise at least one of: a lux index; a relationship between intensities of red light to intensities of green light; a relationship between intensities of blue light to intensities of green light; a comparison between a white reference point and the relationship between intensities of red light to intensities of green light and the relationship between intensities of blue light to intensities of green light; a correlated color temperature; or a semantic label.
- Aspect 32 The method of any one of aspects 19 to 31, wherein the white-balance algorithm is configured to determine the first white-balance decision based on a relationship between intensities of red light to intensities of green light and a relationship between intensities of blue light to intensities of green light.
- Aspect 33 The method of any one of aspects 19 to 32, further comprising white-balancing the image data based on the third white-balance decision.
- Aspect 34 The method of any one of aspects 19 to 33, wherein the image data comprises a first portion of an image and wherein the statistics comprise first statistics; and further comprising: obtaining second statistics based on second image data, wherein the second image data comprises a second portion of the image and wherein the second statistics are associated with at least one of color or brightness of the second image data; determining, based on the second statistics, a fourth white-balance decision using the white-balance algorithm; determining, based on the second statistics, a fifth white-balance decision using the machine-learning model; and determining a sixth white-balance decision based on the fourth white-balance decision and the fifth white-balance decision.
- Aspect 35 The method of aspect 34, further comprising: white-balancing the first image data based on the third white-balance decision to generate third image data; white-balancing the second image data based on the sixth white-balance decision to generate fourth image data; and combining the third image data with the fourth image data.
- Aspect 36 The method of aspect 35, further comprising: blending a white-balance of an edge of the third image data based on the third white-balance decision and the sixth white-balance decision; and blending a white-balance of an edge of the fourth image data based on the third white-balance decision and the sixth white-balance decision.
- Aspect 37 A non-transitory computer-readable storage medium having stored thereon instructions that, when executed by at least one processor, cause the at least one processor to perform operations according to any of aspects 19 to 36.
- Aspect 38 An apparatus for determining a white balance for an image, the apparatus comprising means for performing operations according to any of aspects 19 to 36.
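The weighted fusion recited in aspects 2, 5, 6, 23, and 24 can be sketched as follows. This is an illustrative sketch only: the function name, the representation of a white-balance decision as a tuple of (R, G, B) gains, and the linear confidence-weighted blend are assumptions for exposition, not the claimed implementation.

```python
def fuse_white_balance(awb_decision, ml_decision, scene_has_white_points, confidence):
    """Fuse an algorithmic and a machine-learned white-balance decision.

    Each decision is assumed to be a tuple of (R, G, B) gains. If the scene
    flag indicates white points are present, the algorithmic (first) decision
    is used as-is (aspect 5 / aspect 23); otherwise the machine-learned
    (second) decision is weighted by its confidence value and blended with
    the algorithmic decision (aspect 6 / aspect 24).
    """
    if scene_has_white_points:
        # Scene contains white points: the third decision is the first decision.
        return awb_decision
    # No white points: weight the ML decision by its confidence and the
    # algorithmic decision by the remainder, blending per color channel.
    w_ml = confidence
    w_awb = 1.0 - confidence
    return tuple(w_awb * a + w_ml * m for a, m in zip(awb_decision, ml_decision))
```

For example, with a confidence of 0.5 and no white points detected, the fused gains fall halfway between the two candidate decisions.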
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Color Television Image Signal Generators (AREA)
Abstract
Systems and techniques are described herein for determining a white balance for an image. For instance, a method for determining a white balance for an image is provided. The method may include obtaining statistics based on image data, the statistics being associated with at least one of color or brightness of the image data; determining, based on the statistics, a first white-balance decision using a white-balance algorithm; determining, based on the statistics, a second white-balance decision using a machine-learning model trained to determine white-balance decisions; and determining a third white-balance decision based on the first white-balance decision and the second white-balance decision.
Description
The present disclosure generally relates to determining a white balance for an image. For example, aspects of the present disclosure include systems and techniques for determining a white-balance setting for an image.
- A camera is a device that receives light and captures image frames, such as still images or video frames, using an image sensor. Cameras can be configured with a variety of image-capture settings and/or image-processing settings to alter the appearance of images captured thereby. Image-capture settings may be determined and applied before and/or while an image is captured, such as ISO, exposure time (also referred to as exposure, exposure duration, or shutter speed), aperture size (also referred to as f/stop), focus, and gain (including analog and/or digital gain), among others. Moreover, image-processing settings can be configured for post-processing of an image, such as alterations to contrast, brightness, saturation, sharpness, levels, curves, and colors, among others.
- One way that images can be altered is through white balancing. White balancing may include changes to image-capture settings and/or image-processing settings. In the present disclosure, the term “white balance,” when used as a verb, may refer to one or more operations to adjust colors and/or brightness of colors of pixels of image data. White balancing may refer to adjusting intensities of colors of pixels of an image responsive to a white-balance decision (e.g., to cause pixels associated with white objects to appear as white in the image). In the present disclosure, the term “white balance” when used as a noun, the term “white-balance decision,” and like terms may refer to an indication of settings for an image to adjust pixels of the image to implement white balancing. In other words, a white-balance decision may indicate adjustments to intensities of red, green, and/or blue channels of pixels of an image such that white objects in a scene represented by the image appear white in the image.
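As a concrete illustration of applying a white-balance decision expressed as per-channel gains, consider the following sketch. The list-of-tuples pixel representation and the 8-bit clipping are simplifying assumptions for exposition; an actual camera pipeline typically applies such gains in hardware to raw sensor data.

```python
def apply_white_balance(pixels, decision):
    """Apply a white-balance decision, expressed as (R, G, B) gains,
    to a list of (R, G, B) pixel tuples.

    Each channel is scaled by its gain and clipped to the 8-bit range,
    so that, e.g., a bluish cast can be corrected by boosting the red
    gain relative to the blue gain.
    """
    r_gain, g_gain, b_gain = decision
    return [
        (min(255, round(r * r_gain)),
         min(255, round(g * g_gain)),
         min(255, round(b * b_gain)))
        for r, g, b in pixels
    ]
```

For instance, a gray-card pixel captured under warm light as (200, 128, 64) could be driven back toward neutral by a decision that leaves red alone and boosts blue.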
- The following presents a simplified summary relating to one or more aspects disclosed herein. Thus, the following summary should not be considered an extensive overview relating to all contemplated aspects, nor should the following summary be considered to identify key or critical elements relating to all contemplated aspects or to delineate the scope associated with any particular aspect. Accordingly, the following summary presents certain concepts relating to one or more aspects relating to the mechanisms disclosed herein in a simplified form to precede the detailed description presented below.
Systems and techniques are described for determining a white balance for an image. According to at least one example, a method is provided for determining a white balance for an image. The method includes: obtaining statistics based on image data, the statistics being associated with at least one of color or brightness of the image data; determining, based on the statistics, a first white-balance decision using a white-balance algorithm; determining, based on the statistics, a second white-balance decision using a machine-learning model trained to determine white-balance decisions; and determining a third white-balance decision based on the first white-balance decision and the second white-balance decision.
- In another example, an apparatus for determining a white balance for an image is provided that includes at least one memory and at least one processor (e.g., configured in circuitry) coupled to the at least one memory. The at least one processor is configured to: obtain statistics based on image data, the statistics being associated with at least one of color or brightness of the image data; determine, based on the statistics, a first white-balance decision using a white-balance algorithm; determine, based on the statistics, a second white-balance decision using a machine-learning model trained to determine white-balance decisions; and determine a third white-balance decision based on the first white-balance decision and the second white-balance decision.
In another example, a non-transitory computer-readable medium is provided that has stored thereon instructions that, when executed by one or more processors, cause the one or more processors to: obtain statistics based on image data, the statistics being associated with at least one of color or brightness of the image data; determine, based on the statistics, a first white-balance decision using a white-balance algorithm; determine, based on the statistics, a second white-balance decision using a machine-learning model trained to determine white-balance decisions; and determine a third white-balance decision based on the first white-balance decision and the second white-balance decision.
In another example, an apparatus for determining a white balance for an image is provided. The apparatus includes: means for obtaining statistics based on image data, the statistics being associated with at least one of color or brightness of the image data; means for determining, based on the statistics, a first white-balance decision using a white-balance algorithm; means for determining, based on the statistics, a second white-balance decision using a machine-learning model trained to determine white-balance decisions; and means for determining a third white-balance decision based on the first white-balance decision and the second white-balance decision.
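The method common to the four statutory classes above can be sketched as a simple pipeline. All four callables below are hypothetical stand-ins for the statistics computation, the classical white-balance algorithm, the trained machine-learning model, and the fusion step; none of their names or signatures is dictated by the disclosure.

```python
def determine_white_balance(image_data, compute_statistics, awb_algorithm, ml_model, fuse):
    """Sketch of the summarized method: derive color/brightness statistics
    from image data, obtain two candidate white-balance decisions from a
    classical algorithm and a trained model, then fuse them into a third,
    final decision."""
    stats = compute_statistics(image_data)  # statistics from the image data
    first = awb_algorithm(stats)            # first white-balance decision
    second = ml_model(stats)                # second (learned) decision
    return fuse(first, second)              # third (fused) decision
```

With toy stand-ins (e.g., summing values as the “statistics” and averaging the two gain tuples as the fusion), the pipeline composes exactly as the summary describes.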
In some aspects, one or more of the apparatuses described herein is, can be part of, or can include a mobile device (e.g., a mobile telephone or so-called “smart phone” , a tablet computer, or other type of mobile device) , an extended reality device (e.g., a virtual reality (VR) device, an augmented reality (AR) device, or a mixed reality (MR) device) , a vehicle (or a computing device or system of a vehicle) , a smart or connected device (e.g., an Internet-of-Things (IoT) device) , a wearable device, a personal computer, a laptop computer, a video server, a television (e.g., a network-connected television) , a robotics device or system, or other device. In some aspects, each apparatus can include an image sensor (e.g., a camera) or multiple image sensors (e.g., multiple cameras) for capturing one or more images. In some aspects, each apparatus can include one or more displays for displaying one or more images, notifications, and/or other displayable data. In some aspects, each apparatus can include one or more speakers, one or more light-emitting devices, and/or one or more microphones. In some aspects, each apparatus can include one or more sensors. In some cases, the one or more sensors can be used for determining a location of the apparatuses, a state of the apparatuses (e.g., a tracking state, an operating state, a temperature, a humidity level, and/or other state) , and/or for other purposes.
This summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used in isolation to determine the scope of the claimed subject matter. The subject matter should be understood by reference to appropriate portions of the entire specification of this patent, any or all drawings, and each claim.
The foregoing, together with other features and aspects, will become more apparent upon referring to the following specification, claims, and accompanying drawings.
Illustrative examples of the present application are described in detail below with reference to the following figures:
FIG. 1 is a block diagram illustrating an example architecture of an image processing system, according to various aspects of the present disclosure;
FIG. 2 includes a graph illustrating example automatic white balance (AWB) stats, according to various aspects of the present disclosure;
FIG. 3 includes example images and corresponding example AWB stats;
FIG. 4 includes example images white-balanced by a machine-learning-based white-balance approach;
FIG. 5 is a block diagram illustrating an example system for making a white-balance decision, according to various aspects of the present disclosure;
FIG. 6 is a block diagram illustrating an example system for making a white-balance decision, according to various aspects of the present disclosure;
FIG. 7 is a flowchart illustrating an example process for determining a white-balance decision for an image, according to various aspects of the present disclosure;
FIG. 8 is a flow diagram illustrating another example process for determining a white balance for an image, in accordance with aspects of the present disclosure;
FIG. 9 is a block diagram illustrating an example of a deep learning neural network that can be used to implement a perception module and/or one or more validation modules, according to some aspects of the disclosed technology;
FIG. 10 is a block diagram illustrating an example of a convolutional neural network (CNN) , according to various aspects of the present disclosure; and
FIG. 11 is a block diagram illustrating an example computing-device architecture of an example computing device which can implement the various techniques described herein.
Certain aspects of this disclosure are provided below. Some of these aspects may be applied independently and some of them may be applied in combination as would be apparent to those of skill in the art. In the following description, for the purposes of explanation, specific details are set forth in order to provide a thorough understanding of aspects of the application. However, it will be apparent that various aspects may be practiced without these specific details. The figures and description are not intended to be restrictive.
The ensuing description provides example aspects only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary aspects will provide those skilled in the art with an enabling description for implementing an exemplary aspect. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the application as set forth in the appended claims.
The terms “exemplary” and/or “example” are used herein to mean “serving as an example, instance, or illustration. ” Any aspect described herein as “exemplary” and/or “example” is not necessarily to be construed as preferred or advantageous over other aspects. Likewise, the term “aspects of the disclosure” does not require that all aspects of the disclosure include the discussed feature, advantage, or mode of operation.
As mentioned above, while an image is being captured, or after the image is captured, the image may be white balanced, which may include adjusting intensities of color channels of pixels of the image (e.g., such that white objects in a scene represented by the image appear white in the image) . Many cameras (or devices that include cameras) allow a user to set a white balance of an image (e.g., either before or after the image is captured) .
Many cameras (or devices that include cameras) include an automatic white balance (AWB) engine that may automatically generate white-balance decisions. A white-balance decision may indicate an adjustment to intensities of red, green, and/or blue channels of pixels of an image. An AWB engine may white balance images based on statistics (which may be referred to as “AWB statistics, ” “AWB stats, ” “Bayer-grid statistics, ” “Bayer-grid stats, ” “BG statistics, ” and/or “BG stats” ) . AWB stats may include a relationship between (e.g., a ratio of) intensities of red light to intensities of green light and a relationship between (e.g., a ratio of) intensities of blue light to intensities of green light, among other things.
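The per-group ratios described above can be sketched as follows. The grid size, RGB channel ordering, and NumPy representation are illustrative assumptions, not details taken from this disclosure:

```python
import numpy as np

def awb_stats(image, grid=(48, 64)):
    """Compute per-cell (R/G, B/G) AWB stats for an H x W x 3 RGB image.

    The image is divided into grid[0] x grid[1] cells; for each cell, the
    mean red and mean blue intensities are divided by the mean green
    intensity (assumes the mean green intensity of each cell is nonzero).
    """
    h, w, _ = image.shape
    gh, gw = grid
    stats = np.zeros((gh, gw, 2), dtype=np.float64)
    for i in range(gh):
        for j in range(gw):
            cell = image[i * h // gh:(i + 1) * h // gh,
                         j * w // gw:(j + 1) * w // gw]
            r = cell[..., 0].mean()
            g = cell[..., 1].mean()
            b = cell[..., 2].mean()
            stats[i, j] = (r / g, b / g)  # one (R/G, B/G) data point per cell
    return stats
```

A neutral gray image would yield stats of (1.0, 1.0) in every cell, since all three channel means are equal.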
Algorithm-based white-balance engines may be unable to accurately white balance images that do not include enough white pixels (e.g., enough pixels representing white objects) . For example, an algorithm-based white-balance engine may identify a subset of pixels in an image (less than all pixels in the image) that should be white, determine how the subset of pixels needs to be adjusted to become white, and apply the same adjustment to all pixels of the image. As an example, an AWB algorithm may obtain a weight for a number of data points of AWB stats. Each weight may represent the likelihood that a respective data point of the AWB stats represents a white point in the scene. As such, in cases in which an image does not include enough white pixels, an algorithm-based white-balance engine may be unable to accurately white balance the image.
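The weighting idea above can be sketched as follows. The Gaussian weighting, the reference white point, and the sigma value are hypothetical choices for illustration; the disclosure does not specify a particular weighting function:

```python
import numpy as np

def algorithm_awb_decision(stats, white_ref=(0.6, 0.5), sigma=0.1):
    """Hypothetical algorithm-based AWB decision: weight each (R/G, B/G)
    data point by its closeness to an assumed reference white point, then
    take the weighted average of the points as the estimated white point
    of the scene. Points far from the reference receive near-zero weight,
    mimicking the 'likelihood the point is white' described above."""
    pts = stats.reshape(-1, 2)
    d2 = ((pts - np.asarray(white_ref)) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))  # weight ~ likelihood point is white
    if w.sum() == 0.0:
        return tuple(white_ref)  # no plausible white points: fall back
    wp = (pts * w[:, None]).sum(axis=0) / w.sum()
    return tuple(wp)  # estimated (R/G, B/G) of white in the scene
```

When no data point lies near the reference, all weights underflow and the fallback branch illustrates why such an algorithm struggles on scenes without white pixels.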
Machine-learning-based white-balance engines have also been developed. Machine-learning-based white-balance engines may be trained to receive an image and generate a white-balance decision based on the image. However, machine-learning-based white-balance engines may have stability issues. For example, a machine-learning-based white-balance engine that is provided with substantially the same image twice may produce different results. For instance, a machine-learning-based white-balance engine may be provided with subsequent image frames (e.g., of video data) . The machine-learning-based white-balance engine may produce different white-balance decisions responsive to the subsequent image frames even though the subsequent frames may be substantially the same.
Systems, apparatuses, methods (also referred to as processes) , and computer-readable media (collectively referred to herein as “systems and techniques” ) are described herein for determining a white balance for an image. The systems and techniques described herein may use both a white-balance algorithm and a white-balance machine-learning model to determine separate white-balance decisions, then make a final white-balance decision based on the two separate white-balance decisions.
For example, the systems and techniques may obtain statistics based on image data. The statistics may be associated with color and/or brightness of the image data. The statistics may be AWB stats. The systems and techniques may determine, based on the statistics, a first white-
balance decision using a white-balance algorithm. Further, the systems and techniques may determine, based on the statistics, a second white-balance decision using a machine-learning model trained to determine white-balance decisions. Also, the systems and techniques may determine a third white-balance decision based on the first white-balance decision and the second white-balance decision. In some aspects, the systems and techniques may white balance the image data based on the third white-balance decision.
By determining the final white-balance decision based on both a decision from an algorithm-based approach and a decision from a machine-learning-model-based approach, the systems and techniques may exhibit the advantages of both approaches. For example, the systems and techniques may be stable and produce correct white-balance decisions in most situations (e.g., a trait that the systems and techniques may share with algorithm-based white-balancing techniques) . Further, the systems and techniques may be capable of predicting a good white-balance decision in misleading scenes and/or low-light scenes (e.g., a trait that the systems and techniques may share with machine-learning-based white-balancing techniques) .
Further, the systems and techniques may overcome challenges faced by both algorithm-based white balancing and machine-learning-based white balancing. For example, algorithm-based white-balancing techniques may generate incorrect white-balance decisions for images representing misleading and/or low-light scenes. The systems and techniques, using a white-balance decision based on a machine-learning-based white-balance technique, may overcome this issue. Additionally or alternatively, machine-learning-based white-balancing techniques may perform inconsistently. The systems and techniques, using a white-balance decision based on an algorithm-based white-balancing technique, may overcome this issue.
Various aspects of the application will be described with respect to the figures below.
FIG. 1 is a block diagram illustrating an example architecture of an image-processing system 100, according to various aspects of the present disclosure. The image-processing system 100 includes various components that are used to capture and process images, such as an image of a scene 106. The image-processing system 100 can capture image frames (e.g., still images or video frames) . In some cases, the lens 108 and image sensor 118 (which may include an analog-to-digital converter (ADC) ) can be associated with an optical axis. In one illustrative example,
the photosensitive area of the image sensor 118 (e.g., the photodiodes) and the lens 108 can both be centered on the optical axis.
In some examples, the lens 108 of the image-processing system 100 faces a scene 106 and receives light from the scene 106. The lens 108 bends incoming light from the scene toward the image sensor 118. The light received by the lens 108 then passes through an aperture of the image-processing system 100. In some cases, the aperture (e.g., the aperture size) is controlled by one or more control mechanisms 110. In other cases, the aperture can have a fixed size.
The one or more control mechanisms 110 can control exposure, focus, and/or zoom based on information from the image sensor 118 and/or information from the image processor 124. In some cases, the one or more control mechanisms 110 can include multiple mechanisms and components. For example, the control mechanisms 110 can include one or more exposure-control mechanisms 112, one or more focus-control mechanisms 114, and/or one or more zoom-control mechanisms 116. The one or more control mechanisms 110 may also include additional control mechanisms besides those illustrated in FIG. 1. For example, in some cases, the one or more control mechanisms 110 can include control mechanisms for controlling analog gain, flash, HDR, depth of field, and/or other image capture properties.
The focus-control mechanism 114 of the control mechanisms 110 can obtain a focus setting. In some examples, focus-control mechanism 114 stores the focus setting in a memory register. Based on the focus setting, the focus-control mechanism 114 can adjust the position of the lens 108 relative to the position of the image sensor 118. For example, based on the focus setting, the focus-control mechanism 114 can move the lens 108 closer to the image sensor 118 or farther from the image sensor 118 by actuating a motor or servo (or other lens mechanism) , thereby adjusting the focus. In some cases, additional lenses may be included in the image-processing system 100. For example, the image-processing system 100 can include one or more microlenses over each photodiode of the image sensor 118. The microlenses can each bend the light received from the lens 108 toward the corresponding photodiode before the light reaches the photodiode.
In some examples, the focus setting may be determined via contrast detection autofocus (CDAF) , phase detection autofocus (PDAF) , hybrid autofocus (HAF) , or some combination thereof. The focus setting may be determined using the control mechanism 110, the image sensor
118, and/or the image processor 124. The focus setting may be referred to as an image capture setting and/or an image processing setting. In some cases, the lens 108 can be fixed relative to the image sensor and the focus-control mechanism 114.
The exposure-control mechanism 112 of the control mechanisms 110 can obtain an exposure setting. In some cases, the exposure-control mechanism 112 stores the exposure setting in a memory register. Based on the exposure setting, the exposure-control mechanism 112 can control a size of the aperture (e.g., aperture size or f/stop) , a duration of time for which the aperture is open (e.g., exposure time or shutter speed) , a duration of time for which the sensor collects light (e.g., exposure time or electronic shutter speed) , a sensitivity of the image sensor 118 (e.g., ISO speed or film speed) , analog gain applied by the image sensor 118, or any combination thereof. The exposure setting may be referred to as an image capture setting and/or an image processing setting.
The zoom-control mechanism 116 of the control mechanisms 110 can obtain a zoom setting. In some examples, the zoom-control mechanism 116 stores the zoom setting in a memory register. Based on the zoom setting, the zoom-control mechanism 116 can control a focal length of an assembly of lens elements (lens assembly) that includes the lens 108 and one or more additional lenses. For example, the zoom-control mechanism 116 can control the focal length of the lens assembly by actuating one or more motors or servos (or other lens mechanism) to move one or more of the lenses relative to one another. The zoom setting may be referred to as an image capture setting and/or an image processing setting. In some examples, the lens assembly may include a parfocal zoom lens or a varifocal zoom lens. In some examples, the lens assembly may include a focusing lens (which can be lens 108 in some cases) that receives the light from the scene 106 first, with the light then passing through a focal zoom system between the focusing lens (e.g., lens 108) and the image sensor 118 before the light reaches the image sensor 118. The focal zoom system may, in some cases, include two positive (e.g., converging, convex) lenses of equal or similar focal length (e.g., within a threshold difference of one another) with a negative (e.g., diverging, concave) lens between them. In some cases, the zoom-control mechanism 116 moves one or more of the lenses in the focal zoom system, such as the negative lens and one or both of the positive lenses. In some cases, zoom-control mechanism 116 can control the zoom by capturing an image from an image sensor of a plurality of image sensors (e.g., including image sensor 118) with a zoom corresponding to the zoom setting. For example, the image-processing
system 100 can include a wide-angle image sensor with a relatively low zoom and a telephoto image sensor with a greater zoom. In some cases, based on the selected zoom setting, the zoom-control mechanism 116 can capture images from a corresponding sensor.
The image sensor 118 includes one or more arrays of photodiodes or other photosensitive elements. Each photodiode measures an amount of light that eventually corresponds to a particular pixel in the image produced by the image sensor 118. In some cases, different photodiodes may be covered by different filters. In some cases, different photodiodes can be covered in color filters, and may thus measure light matching the color of the filter covering the photodiode. Various color filter arrays can be used such as, for example and without limitation, a Bayer color filter array, a quad color filter array (QCFA) , and/or any other color filter array.
In some cases, the image sensor 118 may alternately or additionally include opaque and/or reflective masks that block light from reaching certain photodiodes, or portions of certain photodiodes, at certain times and/or from certain angles. In some cases, opaque and/or reflective masks may be used for phase detection autofocus (PDAF) . In some cases, the opaque and/or reflective masks may be used to block portions of the electromagnetic spectrum from reaching the photodiodes of the image sensor (e.g., an IR cut filter, a UV cut filter, a band-pass filter, low-pass filter, high-pass filter, or the like) . The image sensor 118 may also include an analog gain amplifier to amplify the analog signals output by the photodiodes and/or an analog-to-digital converter (ADC) to convert the analog signals output by the photodiodes (and/or amplified by the analog gain amplifier) into digital signals. In some cases, certain components or functions discussed with respect to one or more of the control mechanisms 110 may be included instead or additionally in the image sensor 118. The image sensor 118 may be a charge-coupled device (CCD) sensor, an electron-multiplying CCD (EMCCD) sensor, an active-pixel sensor (APS) , a complementary metal-oxide semiconductor (CMOS) , an N-type metal-oxide semiconductor (NMOS) , a hybrid CCD/CMOS sensor (e.g., sCMOS) , or some other combination thereof.
The image processor 124 may include one or more processors, such as one or more image signal processors (ISPs) (including ISP 128) , one or more host processors (including host processor 126) , and/or one or more of any other type of processor discussed with respect to the computing-device architecture 1100 of FIG. 11. The host processor 126 can be a digital signal
processor (DSP) and/or other type of processor. In some implementations, the image processor 124 is a single integrated circuit or chip (e.g., referred to as a system-on-chip or SoC) that includes the host processor 126 and the ISP 128. In some cases, the chip can also include one or more input/output ports (e.g., input/output (I/O) ports 130) , central processing units (CPUs) , graphics processing units (GPUs) , broadband modems (e.g., 3G, 4G or LTE, 5G, etc. ) , memory, connectivity components (e.g., BluetoothTM, Global Positioning System (GPS) , etc. ) , any combination thereof, and/or other components. The I/O ports 130 can include any suitable input/output ports or interface according to one or more protocol or specification, such as an Inter-Integrated Circuit 2 (I2C) interface, an Inter-Integrated Circuit 3 (I3C) interface, a Serial Peripheral Interface (SPI) interface, a serial General-Purpose Input/Output (GPIO) interface, a Mobile Industry Processor Interface (MIPI) (such as a MIPI CSI-2 physical (PHY) layer port or interface) , an Advanced High-performance Bus (AHB) bus, any combination thereof, and/or other input/output ports. In one illustrative example, the host processor 126 can communicate with the image sensor 118 using an I2C port, and the ISP 128 can communicate with the image sensor 118 using an MIPI port.
The image processor 124 may perform a number of tasks, such as de-mosaicing, color space conversion, image frame downsampling, pixel interpolation, automatic exposure (AE) control, automatic gain control (AGC) , CDAF, PDAF, automatic white balance, merging of image frames to form an HDR image, image recognition, object recognition, feature recognition, receipt of inputs, managing outputs, managing memory, or some combination thereof. The image processor 124 may store image frames and/or processed images in random-access memory (RAM) 120, read-only memory (ROM) 122, a cache, a memory unit, another storage device, or some combination thereof.
Various input/output (I/O) devices 132 may be connected to the image processor 124. The I/O devices 132 can include a display screen, a keyboard, a keypad, a touchscreen, a trackpad, a touch-sensitive surface, a printer, any other output devices, any other input devices, or any combination thereof. In some cases, a caption may be input into the image-processing device 104 through a physical keyboard or keypad of the I/O devices 132, or through a virtual keyboard or keypad of a touchscreen of the I/O devices 132. The I/O devices 132 may include one or more ports, jacks, or other connectors that enable a wired connection between the image-processing system 100 and one or more peripheral devices, over which the image-processing
system 100 may receive data from the one or more peripheral devices and/or transmit data to the one or more peripheral devices. The I/O devices 132 may include one or more wireless transceivers that enable a wireless connection between the image-processing system 100 and one or more peripheral devices, over which the image-processing system 100 may receive data from the one or more peripheral devices and/or transmit data to the one or more peripheral devices. The peripheral devices may include any of the previously-discussed types of the I/O devices 132 and may themselves be considered I/O devices 132 once they are coupled to the ports, jacks, wireless transceivers, or other wired and/or wireless connectors.
In some cases, the image-processing system 100 may be a single device. In some cases, the image-processing system 100 may be two or more separate devices, including an image-capture device 102 (e.g., a camera) and an image-processing device 104 (e.g., a computing device coupled to the camera) . In some implementations, the image-capture device 102 and the image-processing device 104 may be coupled together, for example via one or more wires, cables, or other electrical connectors, and/or wirelessly via one or more wireless transceivers. In some implementations, the image-capture device 102 and the image-processing device 104 may be disconnected from one another.
As shown in FIG. 1, a vertical dashed line divides the image-processing system 100 of FIG. 1 into two portions that represent the image-capture device 102 and the image-processing device 104, respectively. The image-capture device 102 includes the lens 108, control mechanisms 110, and the image sensor 118. The image-processing device 104 includes the image processor 124 (including the ISP 128 and the host processor 126) , the RAM 120, the ROM 122, and the I/O devices 132. In some cases, certain components illustrated in the image-processing device 104, such as the ISP 128 and/or the host processor 126, may be included in the image-capture device 102. In some examples, the image-processing system 100 can include one or more wireless transceivers for wireless communications, such as cellular network communications, 802.11 Wi-Fi communications, wireless local area network (WLAN) communications, or some combination thereof.
The image-processing system 100 can be part of, or implemented by, a single computing device or multiple computing devices. In some examples, the image-processing system 100 can be part of an electronic device (or devices) such as a camera system (e.g., a digital camera, an IP
camera, a video camera, a security camera, etc. ) , a telephone system (e.g., a smartphone, a cellular telephone, a conferencing system, etc. ) , a laptop or notebook computer, a tablet computer, a set-top box, a smart television, a display device, a game console, an XR device (e.g., an HMD, smart glasses, etc. ) , an IoT (Internet-of-Things) device, a smart wearable device, a video streaming device, an Internet Protocol (IP) camera, or any other suitable electronic device (s) .
While the image-processing system 100 is shown to include certain components, one of ordinary skill will appreciate that the image-processing system 100 can include more components than those shown in FIG. 1. The components of the image-processing system 100 can include software, hardware, or one or more combinations of software and hardware. For example, in some implementations, the components of the image-processing system 100 can include and/or can be implemented using electronic circuits or other electronic hardware, which can include one or more programmable electronic circuits (e.g., microprocessors, GPUs, DSPs, CPUs, and/or other suitable electronic circuits) , and/or can include and/or be implemented using computer software, firmware, or any combination thereof, to perform the various operations described herein. The software and/or firmware can include one or more instructions stored on a computer-readable storage medium and executable by one or more processors of the electronic device implementing the image-processing system 100.
In some examples, the computing-device architecture 1100 shown in FIG. 11 and further described below can include the image-processing system 100, the image-capture device 102, the image-processing device 104, or a combination thereof.
FIG. 2 includes a graph 202 illustrating automatic white balance (AWB) stats, according to various aspects of the present disclosure. The x-axis of graph 202 represents a ratio of intensities of red light to intensities of green light. For example, the x-axis represents an average intensity of red channels of a group of pixels divided by an average intensity of green channels of the group of pixels. The y-axis of graph 202 represents a ratio of intensities of blue light to intensities of green light. For example, the y-axis represents an average intensity of blue channels of a group of pixels divided by an average intensity of green channels of the group of pixels. Thus, a point on graph 202 may represent intensities of colors of a group of pixels of an image. For example, an x-coordinate of the point may represent a ratio of the intensities of red channels
of the group of pixels to the intensities of green channels of the group of pixels. Also, a y-coordinate of the point may represent a ratio of the intensities of blue channels of the group of pixels to the intensities of green channels of the group of pixels.
Graph 202 includes a white box 204. Data points inside white box 204 may be representative of objects in the scene that may be white. White box 204 may be determined based on a calibration process. For example, a white object may be illuminated by a number of different light sources having different color temperatures. Multiple images of the white object may be captured as the white object is illuminated by the different light sources. AWB stats from the multiple images may be generated and white box 204 may be defined by the AWB stats for the multiple images. Because white box 204 is defined based on AWB stats based on images of a white object, any data point within white box 204 may correspond to a group of pixels that may represent a white object.
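A white box of this kind might be derived from the calibration measurements as a padded bounding box in (R/G, B/G) space, as in the following sketch; the margin value and the axis-aligned box representation are assumptions for illustration:

```python
import numpy as np

def calibrate_white_box(calibration_stats, margin=0.02):
    """Derive a white box from AWB stats of a white object captured under
    several illuminants. Each row of calibration_stats is the measured
    (R/G, B/G) of the white object under one light source; the box is the
    bounding rectangle of those points, padded by a small margin."""
    pts = np.asarray(calibration_stats, dtype=np.float64)
    lo = pts.min(axis=0) - margin
    hi = pts.max(axis=0) + margin
    return lo, hi  # box corners in (R/G, B/G) space

def in_white_box(point, box):
    """True if a stats data point falls inside the calibrated white box."""
    lo, hi = box
    return bool(np.all(point >= lo) and np.all(point <= hi))
```

A data point inside the box would then be treated as possibly corresponding to a white object, as described above.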
Additionally or alternatively, graph 202 may include reference points 206. Reference points 206 may similarly define white on graph 202. For example, reference points 206 may represent data points measured during the calibration process described above. In some cases, a reference line may be defined connecting reference points 206.
A distance between a data point on graph 202 and white box 204, reference points 206, and/or the reference line may be a measure of a difference between the color of a group of pixels represented by the data point and white. Additionally or alternatively, the distance may be indicative of how white (or not) the group of pixels represented by the data point is.
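Such a distance may be computed, for example, as the distance from a data point to the reference line connecting reference points 206. The following sketch assumes a piecewise-linear reference line through the reference points; this formulation is illustrative, not taken from the disclosure:

```python
import numpy as np

def dist_to_segment(p, a, b):
    """Euclidean distance from point p to the line segment a-b."""
    p, a, b = map(np.asarray, (p, a, b))
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def whiteness_distance(point, reference_points):
    """Distance from a stats data point to a reference line formed by
    connecting consecutive reference points; smaller values suggest the
    underlying group of pixels is closer to white."""
    refs = list(reference_points)
    return min(dist_to_segment(point, refs[i], refs[i + 1])
               for i in range(len(refs) - 1))
```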
FIG. 3 includes example images and corresponding example AWB stats. In particular, FIG. 3 includes an image 302 and corresponding AWB stats 304. Data points of AWB stats 304 may represent groups of pixels of image 302. For example, image 302 may be divided into a 64 × 48 grid of groups of pixels. Each group of pixels of the grid may be represented by a data point of AWB stats 304. Similarly, data points of AWB stats 314 may represent image 312.
Image 302 may be an image of a scene. The scene may include a white object. The image may include a group of pixels (e.g., one of the 64 × 48 groups of pixels) corresponding to the white object. AWB stats 304 may include a data point corresponding to the group of pixels. The data point may be within white box 306. An algorithm-based white-balancing approach may
identify the group of pixels, for example, based on the data point being within white box 306. The algorithm-based white-balance approach may determine a white-balance decision to apply to the group of pixels to cause the group of pixels to be white in a white-balanced image. The algorithm-based white-balance approach may apply the white-balance decision to other pixels (e.g., all of the other pixels of the image) .
Image 312 may be an image of another scene, for example, wood grain. The other scene may not include a white object. Because the scene does not include a white object, image 312 may not include any groups of white pixels and AWB stats 314 may not include any data points within white box 316. Because AWB stats 314 do not include any data points within white box 316, the algorithm-based white-balance approach may not be able to make an accurate white-balance decision. In some cases, the algorithm-based white-balance approach may select a data point closest to white box 316 (or to reference points or a reference line, neither of which is illustrated in FIG. 3) . However, because the data point is not within white box 316, the white-balance decision may be incorrect (e.g., unable to cause pixels representing white objects in the scene to be white in an image) . Image 312 may be an illustration of a misleading scene for which it may be difficult for an algorithm-based white-balance approach to accurately determine a white-balance decision.
FIG. 4 includes example images white-balanced by a machine-learning-based white-balance approach. For example, FIG. 4 includes an image 402 and an image 412. Image 402 and image 412 may represent the same scene. Image 402 and image 412 may have been captured within a short period of time (e.g., 1/30th of a second) . For example, image 402 and image 412 may be frames of video data (e.g., with image 412 being subsequent to image 402) . Alternatively, image 402 and image 412 may be the same image, white balanced differently. Image 402 and image 412 may be white balanced by a machine-learning-based white-balance approach. In some cases, image 402 and image 412 may have been independently white balanced by the same machine-learning model (e.g., one after the other) . A white-balance decision of image 402 (e.g., on which image 402 is white-balanced) may be different than a white-balance decision of image 412 (e.g., on which image 412 is white-balanced) . Accordingly, image 402 may be different than image 412. In particular, intensities of colors of image 402 may be different than intensities of colors of image 412. Image 402 and image 412 may illustrate an inconsistency of a machine-learning-based white-balance approach to white balancing.
Respective white-balance decisions for image 402 and image 412 may be determined according to a machine-learning-based white-balance approach. For example, a machine-learning model may be trained, through a backpropagation training process, to generate white-balance decisions based on AWB stats. For instance, the machine-learning model may be provided with a number of sets of AWB stats and corresponding white-balance decisions. The machine-learning model may generate white-balance decisions based on the AWB stats. The generated white-balance decisions may be compared with the provided white-balance decisions. The machine-learning model may then be adjusted; for example, parameters (e.g., weights) of the machine-learning model may be adjusted based on a difference (e.g., an error) between the generated white-balance decisions and the provided white-balance decisions. The trained machine-learning model may be used to determine the respective white-balance decisions for image 402 and image 412.
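The training loop described above may be sketched in highly simplified form as follows. A linear model on flattened AWB stats stands in for the actual machine-learning model, and the data shapes, learning rate, and epoch count are illustrative assumptions; for a linear model, backpropagation reduces to the single gradient step shown:

```python
import numpy as np

def train_wb_model(stats_batches, target_decisions, lr=0.1, epochs=5000):
    """Minimal sketch of the training process: generate decisions from
    stats, compare them with the provided decisions, and adjust the
    parameters (weights) based on the mean-squared error between the
    generated and provided white-balance decisions."""
    X = np.asarray(stats_batches, dtype=np.float64)     # (N, D) flattened stats
    Y = np.asarray(target_decisions, dtype=np.float64)  # (N, 2) target (R/G, B/G)
    W = np.zeros((X.shape[1], 2))
    b = np.zeros(2)
    for _ in range(epochs):
        pred = X @ W + b                # generate decisions from stats
        err = pred - Y                  # compare with provided decisions
        W -= lr * X.T @ err / len(X)    # adjust weights from the error gradient
        b -= lr * err.mean(axis=0)
    return W, b
```

After training, the model maps new stats to a white-balance decision via the learned weights.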
FIG. 5 is a block diagram illustrating an example system 500 for making a white-balance decision 524, according to various aspects of the present disclosure. In general, system 500 may obtain AWB stats 506 and provide AWB stats 506 to white-balance algorithm 508 and machine-learning model 512. White-balance algorithm 508 may generate a white-balance decision 510 based on AWB stats 506 and machine-learning model 512 may generate white-balance decision 514 based on AWB stats 506. White-balance algorithm 508 may provide white-balance decision 510 to AWB decision aggregator 522 and machine-learning model 512 may provide white-balance decision 514 to AWB decision aggregator 522. AWB decision aggregator 522 may determine white-balance decision 524 based on white-balance decision 510 and white-balance decision 514.
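One possible aggregation strategy (not necessarily the one used by AWB decision aggregator 522, which this passage does not specify) is to blend the algorithm's decision toward the machine-learning decision while clamping how far the final decision may move from the algorithm's stable decision; the blending weight and clamp are hypothetical:

```python
import numpy as np

def aggregate_decisions(algo_decision, ml_decision,
                        ml_weight=0.5, max_shift=0.15):
    """Blend two (R/G, B/G) white-balance decisions. The result starts
    from the (stable) algorithm decision and moves toward the machine-
    learning decision, with the total shift clamped so an unstable
    machine-learning output cannot move the final decision too far."""
    a = np.asarray(algo_decision, dtype=np.float64)
    m = np.asarray(ml_decision, dtype=np.float64)
    blended = (1 - ml_weight) * a + ml_weight * m
    shift = np.clip(blended - a, -max_shift, max_shift)  # limit instability
    return tuple(a + shift)
```

The clamp illustrates how the final decision can inherit the stability of the algorithm-based decision while still benefiting from the machine-learning decision in misleading scenes.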
In further detail, system 500 may obtain AWB stats 506. AWB stats 506 may be based on image data 502. AWB stats 506 may be, or may include, statistics associated with color and/or brightness of image data 502. Graph 202 of FIG. 2 may be an example of an illustration of AWB stats 506. AWB stats 506 may be, or may include, a relationship between (e.g., a ratio of) intensities of red light to intensities of green light (e.g., as illustrated by the x-axis of graph 202) , a relationship between (e.g., a ratio of) intensities of blue light to intensities of green light (e.g., as illustrated by the y-axis of graph 202) , a comparison between a white reference point and the ratio of intensities of red light to intensities of green light and the ratio of intensities of blue light to intensities of green light (e.g., a distance between a data point and white box 204, reference
points 206, and/or a reference line) , a correlated color temperature, a semantic label and/or sensor-gain values. Additionally or alternatively, AWB stats 506 may include a lux index, IR-sensor data from an IR-sensor, and/or spectrometer data from a spectrometer.
In some cases, system 500 may obtain image data 502 and generate AWB stats 506 using a stats engine 504. Stats engine 504 may be configured to generate AWB stats 506 based on image data 502. Stats engine 504 is optional in system 500. The optional nature of stats engine 504 and image data 502 in system 500 is illustrated by image data 502 and stats engine 504 being illustrated using dashed lines. For example, in some cases, system 500 may include stats engine 504 and system 500 may generate AWB stats 506 based on image data 502 using stats engine 504. In other cases, system 500 may not include stats engine 504 and system 500 may receive AWB stats 506. Additionally or alternatively, some of AWB stats 506 may be based on data from other sensors (e.g., an IR-sensor or a spectrometer) .
White-balance algorithm 508 may be, or may include, an algorithm for determining white-balance decision 510 based on AWB stats 506. White-balance algorithm 508 may determine white-balance decision 510 in a manner substantially similar to the algorithm-based white-balance approach described above with regard to image 302 of FIG. 3. White-balance decision 510 may be, or may include, a red gain, a green gain, and/or a blue gain. The red gain, the green gain, and the blue gain may be applied to all pixels of an image to white-balance the image. For example, a white-balance decision may be represented as a red-to-green ratio and a blue-to-green ratio (e.g., in the format “ (r/g, b/g) ” ) , with the red and blue gains being the reciprocals of the respective ratios. For instance, a white-balance decision may be (0.5, 0.4) , which may denote red gain = 2, blue gain = 2.5, and green gain = 1. The red gain, blue gain, and green gain may be applied to pixels of an image such that pixels of the image representing white points in the scene are white in the image.
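As a sketch of how such a decision might be applied, the following assumes the convention suggested by the example above: the decision is stored as (r/g, b/g) , the green gain is fixed at 1, and the red and blue gains are the reciprocals of the two ratios. The function name and array layout are illustrative, not taken from the disclosure.

```python
import numpy as np

def apply_white_balance(pixels: np.ndarray, decision: tuple[float, float]) -> np.ndarray:
    """Apply an (r/g, b/g) white-balance decision to an H x W x 3 RGB image.

    Per the convention assumed here, the red and blue gains are the
    reciprocals of the ratios and the green gain is 1, so a decision of
    (0.5, 0.4) yields gains of (2, 1, 2.5).
    """
    r_over_g, b_over_g = decision
    gains = np.array([1.0 / r_over_g, 1.0, 1.0 / b_over_g])
    return pixels * gains  # the same gains are applied to every pixel

# A pixel of a white object captured as (0.5, 1.0, 0.4) maps back to
# neutral (1.0, 1.0, 1.0) under the decision (0.5, 0.4).
balanced = apply_white_balance(np.array([[[0.5, 1.0, 0.4]]]), (0.5, 0.4))
```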
Machine-learning model 512 may be, or may include, a machine-learning model trained to determine a white-balance decision based on AWB stats. Machine-learning model 512 may determine white-balance decision 514 in a manner substantially similar to the machine-learning-based white-balance approach described above with regard to image 402 of FIG. 4. White-balance decision 514 may be, or may include, an intensity of red light, an intensity of green light, and an intensity of blue light that together make up white light.
In some cases, machine-learning model 512 may additionally generate a confidence value 516 and/or a scene flag 518. Confidence value 516 may be related to white-balance decision 514. For example, confidence value 516 may indicate a level of confidence of machine-learning model 512 with regard to white-balance decision 514 and/or a level of confidence with which other systems and techniques may use white-balance decision 514. For instance, confidence value 516 may be a value between 0 and 1, with 0 indicating that other systems and techniques should not rely on white-balance decision 514 at all and 1 indicating that other systems and techniques may completely rely on white-balance decision 514.
Scene flag 518 may be related to a scene represented by image data 502. Further, scene flag 518 may indicate whether the scene is a misleading scene or not. A misleading scene may be a scene that does not include enough white points or objects (e.g., enough white objects for machine-learning model 512 to accurately determine white-balance decision 514) . Scenes that do not include anything white, scenes that are poorly lit, scenes with mixed lighting, and/or close-up images of objects that are not white are some examples of misleading scenes. Scene flag 518 may thus be indicative of whether the scene includes enough white points or objects (or whether image data 502 includes pixels that are recognizable as white by machine-learning model 512) . In some aspects, machine-learning model 512 may determine whether the scene is misleading or not based on data points on an AWB stats diagram, similar to graph 202 of FIG. 2, a white box, reference points, and/or a reference line. In some cases, scene flag 518 may be binary (e.g., indicative of a misleading scene or not) . In other cases, scene flag 518 may be a value indicative of a certainty of machine-learning model 512 regarding scene flag 518. For example, scene flag 518 may be a value between 0 and 1, 0 indicating that machine-learning model 512 is certain that the scene is not misleading, 1 indicating that machine-learning model 512 is certain that the scene is misleading, and 0.5 indicating that machine-learning model 512 is completely uncertain regarding whether the scene is misleading or not.
Confidence value 516 and scene flag 518 are optional in system 500. The optional nature of confidence value 516 and scene flag 518 in system 500 is illustrated by confidence value 516 and scene flag 518 being illustrated using dashed lines. In some aspects, machine-learning model 512 may generate confidence value 516 and scene flag 518 and provide confidence value 516 and scene flag 518 to AWB decision aggregator 522. In other aspects, machine-learning model 512 may not generate confidence value 516 and scene flag 518. In such cases, AWB decision
aggregator 522 may generate a confidence value and/or scene flag internal to AWB decision aggregator 522.
Additionally, in some aspects, system 500 may receive information 520 and may provide information 520 to AWB decision aggregator 522. Information 520 may be, or may include, spectrum data from a spectrum sensor, scene-detection results, a number of faces detected, and/or semantic segmentation information. In such aspects, AWB decision aggregator 522 may determine white-balance decision 524 additionally based on information 520. Information 520 is optional in system 500. The optional nature of information 520 in system 500 is illustrated by information 520 being illustrated using dashed lines.
In some aspects, system 500 may generate a white-balanced image (not illustrated in FIG. 5) based on image data 502 and white-balance decision 524. For example, system 500 may white balance image data 502 based on white-balance decision 524 to generate a white-balanced image.
FIG. 6 is a block diagram illustrating an example system 600 for making a white-balance decision 524, according to various aspects of the present disclosure. In general, system 600 may obtain white-balance decision 510 (which may have been generated by an algorithm-based white-balance approach, such as white-balance algorithm 508 of FIG. 5) and white-balance decision 514 (which may have been generated by a machine-learning-based white-balance approach, such as machine-learning model 512 of FIG. 5) . AWB decision aggregator 522 may determine white-balance decision 524 based on white-balance decision 510, white-balance decision 514, and/or information 520.
In some aspects, AWB decision aggregator 522 may receive a confidence value and/or a scene flag (e.g., from the machine-learning model which generated the machine-learning-based white-balance decision, such as machine-learning model 512 of FIG. 5) . In other cases, AWB decision aggregator 522 may include a machine-learning model 602 which may generate a confidence value 516 and/or a scene flag 518. For example, machine-learning model 602 may receive white-balance decision 514 and AWB stats 506 and may generate confidence value 516 and/or scene flag 518 based on white-balance decision 514 and AWB stats 506. Machine-learning model 602 may be a machine-learning model trained to generate a confidence value and/or a scene flag 518 based on a white-balance decision and AWB stats.
For example, machine-learning model 602 may be trained to generate a confidence value and/or a scene flag based on a white-balance decision through a backpropagation training process. For example, machine-learning model 602 may be provided with a number of sets of AWB stats and white-balance decisions and corresponding scene flags and/or confidence values. Machine-learning model 602 may generate scene flags and confidence values based on the AWB stats and the white-balance decisions. The generated scene flags and confidence values may be compared with the provided scene flags and confidence values. Machine-learning model 602 may be adjusted; for example, parameters (e.g., weights of machine-learning model 602) may be adjusted based on a difference (e.g., an error) between the generated scene flags and confidence values and the provided scene flags and confidence values. Once trained, machine-learning model 602 may be used to determine confidence value 516 and scene flag 518 based on white-balance decision 514 and AWB stats 506.
FIG. 7 is a flowchart illustrating an example process 700 for determining a white-balance decision for an image, according to various aspects of the present disclosure. Process 700 may be implemented by AWB decision aggregator 522 of FIG. 5 or FIG. 6. For example, process 700 may be a process used by AWB decision aggregator 522 to determine white-balance decision 524 based on white-balance decision 510, white-balance decision 514, confidence value 516, scene flag 518, and/or information 520. For example, process 700 may obtain an algorithm-based white-balance decision ( “AB AWB” ) (e.g., white-balance decision 510) , a machine-learning-based white-balance decision ( "ML AWB" ) (e.g., white-balance decision 514) , a confidence value related to the ML AWB (e.g., confidence value 516) , a scene flag related to a scene of the image (e.g., scene flag 518) , and/or other information (e.g., information 520) .
Process 700 may begin at decision block 702. At decision block 702, it may be determined whether the scene represented by the image is misleading or not. The determination regarding the scene may be based on the scene flag (e.g., scene flag 518) . If the scene is not misleading, process 700 may continue to block 704, else process 700 may continue to block 706.
Block 704 may be reached based on a scene represented by the image being not misleading. At block 704, a weight for the ML AWB (an "ML AWB weight" ) may be determined to be 0. For example, based on a scene not being misleading (e.g., as indicated by the scene flag) , process 700 may set the ML AWB weight to 0. An ML AWB weight of 0
may, at block 710, cause process 700 to not use the ML AWB (e.g., white-balance decision 514) in determining a final white-balance decision (e.g., white-balance decision 524) but rather to use another white-balance decision.
Block 706 may be reached based on a scene represented by the image being misleading. At block 706, the ML AWB weight may be set to a value based on a confidence value related to the ML AWB. For example, the ML AWB weight may be set to a confidence value (e.g., confidence value 516) of the ML AWB.
Block 706 may be followed by block 708. At block 708, the ML AWB weight may be adjusted based on one or more conditions. At block 708, several conditions of the image (e.g., image data 502) , the AWB stats (e.g., AWB stats 506) , the AB AWB (e.g., white-balance decision 510) , the ML AWB (e.g., white-balance decision 514) , and/or other information (e.g., information 520) may be checked and the ML AWB weight may be adjusted based on the conditions.
For example, table 1 illustrates example conditions that may be used to adjust a confidence value of an ML AWB. Table 1 includes two conditions, a “distance to reference line” condition and a “distance to spectrum-sensor decision” condition. The distance to reference line condition may relate to a distance between a data point representative of the ML AWB on an AWB stats graph, such as graph 202 of FIG. 2, and a reference line. The distance to spectrum-sensor decision may relate to a distance between the data point representative of the ML AWB on the AWB stats graph and a white-balance decision based on spectrum data generated by a spectrum sensor and mapped to the AWB stats graph. The distance to the reference line and/or the distance to the spectrum-sensor decision may be part of the information (e.g., information 520) obtained by process 700. Table 1 further includes a confidence factor that may result from satisfaction of the two conditions. A confidence value may be adjusted by one or more confidence factors. Adjusting the confidence value by one or more confidence factors may include multiplying the confidence value by the one or more confidence factors, performing a weighted average based on the confidence value and the one or more confidence factors, and/or selecting a max/min confidence value based on the one or more confidence factors.
Table 1

Distance to reference line | Distance to spectrum-sensor decision | Confidence factor
< 0.05 | < 0.08 | 1
< 0.05 | ≥ 0.08 and < 0.2 | 0.5
≥ 0.05 and < 0.2 | < 0.08 | 0.5
≥ 0.05 and < 0.2 | ≥ 0.08 and < 0.2 | 0
According to the example of table 1, if a distance between a data point representative of the ML AWB and a reference line is less than 0.05, and the distance between the data point representative of the ML AWB and a spectrum-sensor decision is less than 0.08, the confidence value of the ML AWB (or the ML AWB weight) may be adjusted by a confidence factor of 1. As an example of adjusting the confidence value (or the ML AWB weight) , the confidence value (or the ML AWB weight) may be multiplied by a confidence factor of 1. Further, if a distance between the data point representative of the ML AWB and the reference line is less than 0.05, and the distance between the data point representative of the ML AWB and a spectrum-sensor decision is greater than or equal to 0.08 and less than 0.2, the confidence value of the ML AWB (or the ML AWB weight) may be adjusted by a confidence factor of 0.5. As an example of adjusting the confidence value (or the ML AWB weight) , the confidence value (or the ML AWB weight) may be multiplied by a confidence factor of 0.5. Further, if a distance between the data point representative of the ML AWB and the reference line is greater than or equal to 0.05 and less than 0.2, and the distance between the data point representative of the ML AWB and a spectrum-sensor decision is less than 0.08, the confidence value of the ML AWB (or the ML AWB weight) may be adjusted by a confidence factor of 0.5. Further, if a distance between the data point representative of the ML AWB and the reference line is greater than or equal to 0.05 and less than 0.2, and the distance between the data point representative of the ML AWB and a spectrum-sensor decision is greater than or equal to 0.08 and less than 0.2, the confidence value of the ML AWB (or the ML AWB weight) may be adjusted by a confidence factor of 0. As an example of adjusting the confidence value (or the ML AWB weight) , the confidence value (or the ML AWB weight) may be multiplied by a confidence factor of 0.
As another example, table 2 illustrates other example conditions that may be used to adjust a confidence value of an ML AWB. Table 2 includes two conditions, a correlated color temperature ( “CCT” ) condition and an “ML AWB CCT decision” condition. The CCT for given image data may be measured by a CCT sensor. The ML AWB CCT decision for given image data may be based on a red gain divided by a green gain and a blue gain divided by the green gain (e.g., in the format “ (r/g, b/g) ” ) of a data point representative of the ML AWB. For example, image data may have an ML AWB that may be represented by an (r/g, b/g) point. From the (r/g, b/g) point, a CCT may be calculated. The ML AWB CCT decision for the ML AWB may be the CCT. The CCT and/or the ML AWB CCT decision may be part of the information (e.g., information 520) obtained by process 700. Table 2 further includes a confidence factor that may result from satisfaction of the two conditions.
Table 2

CCT measured by the CCT sensor | ML AWB CCT decision | Confidence factor
2500 K to 4000 K | 2500 K to 4000 K | 1
2500 K to 4000 K | 4500 K to 6500 K | 0.5
4500 K to 6500 K | 2500 K to 4000 K | 0.5
4500 K to 6500 K | 4500 K to 6500 K | 1
According to the example of table 2, if the CCT measured by the CCT sensor is between 2500 kelvin (K) and 4000 K and the ML AWB CCT decision is between 2500 K and 4000 K, the confidence value of the ML AWB (or the ML AWB weight) may be adjusted by a confidence factor of 1. Further, if the CCT measured by the CCT sensor is between 2500 K and 4000 K and the ML AWB CCT decision is between 4500 K and 6500 K, the confidence value of the ML AWB (or the ML AWB weight) may be adjusted by a confidence factor of 0.5. Further, if the CCT measured by the CCT sensor is between 4500 K and 6500 K and the ML AWB CCT decision is between 2500 K and 4000 K, the confidence value of the ML AWB (or the ML AWB weight) may be adjusted by a confidence factor of 0.5. Further, if the CCT measured by the CCT sensor is between 4500 K and 6500 K and the ML AWB CCT decision is between 4500 K and 6500 K, the confidence value of the ML AWB (or the ML AWB weight) may be adjusted by a confidence factor of 1.
Further, according to the example of table 2, a confidence factor may be determined for situations that fall between the defined conditions. For example, if the CCT measured by the CCT sensor is between 4000 K and 4500 K, the systems and techniques may determine a confidence factor. For example, the systems and techniques may determine a confidence factor that is a value between 0.5 and 1 (e.g., between the confidence factor for CCT less than 4000 K and the confidence factor for CCT greater than 4500 K) . In some cases, the systems and techniques may interpolate between 0.5 and 1 based on the CCT. For example, if the CCT is 4250 K (e.g., midway between 4000 K and 4500 K) , the confidence factor may be 0.75 (e.g., midway between 0.5 and 1) .
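The interpolation described above can be sketched as follows, assuming (for illustration only) the row of table 2 in which the ML AWB CCT decision lies between 4500 K and 6500 K, so that the factor is 0.5 below 4000 K and 1 above 4500 K:

```python
def interpolated_confidence_factor(measured_cct: float) -> float:
    """Confidence factor as a function of the measured CCT, with linear
    interpolation across the 4000 K to 4500 K gap between table-2 rows.

    Assumes the row in which the factor is 0.5 below 4000 K and 1.0 above
    4500 K; matches the 4250 K -> 0.75 example in the text.
    """
    low_cct, high_cct = 4000.0, 4500.0
    low_factor, high_factor = 0.5, 1.0
    if measured_cct <= low_cct:
        return low_factor
    if measured_cct >= high_cct:
        return high_factor
    t = (measured_cct - low_cct) / (high_cct - low_cct)  # 0..1 across the gap
    return low_factor + t * (high_factor - low_factor)
```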
The examples given with regard to table 1 and table 2 are illustrative examples. Other conditions, ranges, numbers of conditions and/or numbers of ranges may be used to adjust a confidence value of an ML AWB (or an ML AWB weight) . Further, the adjustment values are provided as illustrative examples; other adjustment values may be used.
After the ML AWB weight is adjusted based on the conditions at block 708, process 700 may continue at block 710. At block 710, a final white-balance decision may be determined based on the algorithm-based white-balance decision, the machine-learning-based white-balance decision, and the ML AWB weight. For example, the final white-balance decision may be a weighted sum of the AB AWB and the ML AWB. For instance, process 700 may determine an algorithm-based white-balance decision weight ( "AB AWB weight" ) . The AB AWB weight may be 1 minus the ML AWB weight. Having determined the AB AWB weight and the ML AWB weight, the final white-balance decision may be determined, at block 710, as the ML AWB * the ML AWB weight + the AB AWB * the AB AWB weight.
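The final aggregation at block 710 can be sketched as a per-component convex combination of the two decisions in (r/g, b/g) form; the function name is illustrative:

```python
def aggregate_decisions(ab_awb, ml_awb, ml_weight):
    """Blend an algorithm-based and an ML-based (r/g, b/g) decision.

    The AB AWB weight is 1 minus the ML AWB weight, so the result is
    ML AWB * ML weight + AB AWB * AB weight, component by component.
    """
    ab_weight = 1.0 - ml_weight
    return tuple(ml * ml_weight + ab * ab_weight for ab, ml in zip(ab_awb, ml_awb))

# An ML AWB weight of 0 (the non-misleading-scene branch via block 704)
# falls back entirely to the algorithm-based decision.
fallback = aggregate_decisions((0.5, 0.4), (0.6, 0.5), 0.0)
blended = aggregate_decisions((0.5, 0.4), (0.6, 0.5), 0.25)
```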
In some aspects, system 500 may perform process 700 on portions of an image independently. For example, in some aspects, another algorithm may partition image data 502 into separate portions (e.g., based on lighting of the portions or semantic labels) . System 500 may perform process 700 as described above independently on each of the separate portions. For example, system 500 may determine a white-balance decision 524 based on a white-balance decision 510 and a white-balance decision 514 for each of the portions. In such aspects, system 500 may apply the white-balance decisions 524 to the portions and combine the portions. Further, in such aspects, system 500 may blend the white-balance decisions 524 between the portions and/or at edges of the portions. In some cases, system 500 may blend the weights between the portions and/or at edges of the portions.
For example, an image may include a portion lit by natural light and a portion lit by a fluorescent bulb. An algorithm may identify the portions. System 500 may operate on each of the portions of the image independently by, for example, determining a white-balance decision 510 and a white-balance decision 514 for each portion separately, then determining a white-balance decision 524 for each of the portions separately. The white-balance decisions 524 may include weights for the respective white-balance decisions 510 and the respective white-balance decisions 514. System 500 may combine the portions and blend the white-balance decisions 524 or the weights between the white-balance decisions 510 and the white-balance decisions 514 between and/or at edges of the portions.
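One way to blend between portions is to interpolate the per-channel gains across a transition band around the seam. The sketch below assumes a simple vertical seam splitting the image into left and right halves; real portions may follow lighting or semantic boundaries, and the function and parameter names are illustrative.

```python
import numpy as np

def blend_gains_across_seam(width, left_gains, right_gains, band):
    """Build a per-column (r, g, b) gain map for a two-portion image.

    Columns more than band/2 away from the central seam use the left or
    right portion's gains unchanged; columns inside the band linearly
    interpolate between the two, avoiding a visible hard edge.
    """
    left_gains = np.asarray(left_gains, dtype=float)
    right_gains = np.asarray(right_gains, dtype=float)
    seam = width // 2
    gain_map = np.empty((width, 3))
    for x in range(width):
        # alpha ramps from 0 (fully left gains) to 1 (fully right gains)
        alpha = float(np.clip((x - (seam - band / 2)) / band, 0.0, 1.0))
        gain_map[x] = (1.0 - alpha) * left_gains + alpha * right_gains
    return gain_map

# Natural-light gains on the left, fluorescent gains on the right, blended
# over a 20-column band around the seam of a 100-column image.
gain_map = blend_gains_across_seam(100, (2.0, 1.0, 2.5), (1.5, 1.0, 1.8), band=20)
```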
FIG. 8 is a flow diagram illustrating a process 800 for determining a white balance for an image, in accordance with aspects of the present disclosure. One or more operations of process 800 may be performed by a computing device (or apparatus) or a component (e.g., a chipset, codec, etc. ) of the computing device. The computing device may be a mobile device (e.g., a mobile phone) , a network-connected wearable such as a watch, an extended reality (XR) device such as a virtual reality (VR) device or augmented reality (AR) device, a vehicle or component or system of a vehicle, a desktop computing device, a tablet computing device, a server computer,
a robotic device, and/or any other computing device with the resource capabilities to perform the process 800. The one or more operations of process 800 may be implemented as software components that are executed and run on one or more processors.
At a block 802, a computing device (or one or more components thereof) may obtain statistics based on image data, the statistics being associated with at least one of color or brightness of the image data. For example, system 500 of FIG. 5 may obtain AWB stats 506.
In some aspects, the statistics may be, or may include, a lux index; a relationship between intensities of red light to intensities of green light; a relationship between intensities of blue light to intensities of green light; a comparison between a white reference point and the relationship between intensities of red light to intensities of green light and the relationship between intensities of blue light to intensities of green light; a correlated color temperature; and/or a semantic label.
At a block 804, the computing device (or one or more components thereof) may determine, based on the statistics, a first white-balance decision using a white-balance algorithm. For example, white-balance algorithm 508 of system 500 may generate white-balance decision 510 based on AWB stats 506.
In some aspects, the white-balance algorithm may determine the first white-balance decision based on a relationship between intensities of red light to intensities of green light and a relationship between intensities of blue light to intensities of green light. For example, white-balance algorithm 508 may determine white-balance decision 510 based on AWB stats 506.
At a block 806, the computing device (or one or more components thereof) may determine, based on the statistics, a second white-balance decision using a machine-learning model trained to determine white-balance decisions. For example, machine-learning model 512 of system 500 may determine white-balance decision 514 based on AWB stats 506.
At a block 808, the computing device (or one or more components thereof) may determine a third white-balance decision based on the first white-balance decision and the second white-balance decision. For example, AWB decision aggregator 522 of system 500 may determine white-balance decision 524 based on white-balance decision 510 and white-balance decision 514.
In some aspects, to determine the third white-balance decision, the computing device (or one or more components thereof) may determine a first weight for the first white-balance decision; determine a second weight for the second white-balance decision; and determine the third white-balance decision based on the first weight, the first white-balance decision, the second weight, and the second white-balance decision. For example, AWB decision aggregator 522 may determine a first weight for white-balance decision 510 and a second weight for white-balance decision 514. Further, AWB decision aggregator 522 may determine white-balance decision 524 based on the first weight, white-balance decision 510, the second weight, and white-balance decision 514.
In some aspects, the computing device (or one or more components thereof) may determine at least one of a scene flag or a confidence value. The scene flag may be related to a scene of the image data. The confidence value may be related to the second white-balance decision. The third white-balance decision may be based on at least one of the scene flag or the confidence value. In some aspects, at least one of the scene flag or the confidence value may be determined using the machine-learning model (e.g., machine-learning model 512) . For example, machine-learning model 512 may determine confidence value 516 and/or scene flag 518. In some aspects, the machine-learning model may be a first machine-learning model and at least one of the scene flag or the confidence value may be determined by a second machine-learning model. For example, machine-learning model 602 may determine confidence value 516 and/or scene flag 518. Scene flag 518 may be related to a scene represented by image data 502. In some aspects, the confidence value may be indicative of a level of confidence with which downstream operations should use the second white-balance decision. For example, confidence value 516 may be related to a confidence related to white-balance decision 514. AWB decision aggregator 522 may determine white-balance decision 524 based, at least in part, on confidence value 516 and/or scene flag 518.
In some aspects, the scene flag may be, or may include, an indication of a determination that the scene includes white points or an indication of a determination that the scene does not include white points. In some aspects, to determine the third white-balance decision, the computing device (or one or more components thereof) may, responsive to the scene flag including the indication of the determination that the scene includes the white points, determine the third white-balance decision is the first white-balance decision. For example, at decision
block 702, if the scene is not misleading, process 700 may proceed to block 704 and then to block 710, at which white-balance decision 524 may be determined to be white-balance decision 510.
In some aspects, to determine the third white-balance decision, the computing device (or one or more components thereof) may, responsive to the scene flag comprising the indication of the determination that the scene does not include white points, determine a weight for the second white-balance decision based on the confidence value, and determine the third white-balance decision based on the first white-balance decision, the weight, and the second white-balance decision. For example, at decision block 702, if the scene is misleading, process 700 may proceed to block 706, then to block 708, and then to block 710. At block 710, white-balance decision 524 may be determined based on white-balance decision 510, white-balance decision 514, and a weight associated with white-balance decision 514.
In some aspects, the computing device (or one or more components thereof) may determine the third white-balance decision further based on at least one of: a comparison between a white reference point and a relationship between intensities of red light to intensities of green light and a relationship between intensities of blue light to intensities of green light; a comparison between a fourth white-balance decision and at least one of the first white-balance decision, the second white-balance decision, and the third white-balance decision, wherein the fourth white-balance decision is based on spectrum data from a spectrum sensor; a correlated color temperature; or a color temperature decision. For example, AWB decision aggregator 522 may determine white-balance decision 524 based, at least in part, on AWB stats 506 and/or information 520, which may include data regarding intensities of red, green, and blue light, CCT measurements, and/or CCT decisions.
Additionally or alternatively, in some aspects, the computing device (or one or more components thereof) may determine the third white-balance decision further based on at least one of: spectrum data from a spectrum sensor; scene-detection results; a count of faces detected; or semantic segmentation information. For example, AWB decision aggregator 522 may determine white-balance decision 524 based, at least in part, on information 520, which may include spectrum data from a spectrum sensor, scene-detection results, a count of faces detected, and/or semantic segmentation information.
In some aspects, the third white-balance decision may be, or may include, a gain for red pixel data, a gain for green pixel data, and a gain for blue pixel data. Additionally or alternatively, the first and/or second white-balance decisions may likewise be, or may include, respective gains for red pixel data, green pixel data, and blue pixel data.
In some aspects, the computing device (or one or more components thereof) may white-balance the image data based on the third white-balance decision. For example, in some aspects, system 500 may white-balance image data 502 based on white-balance decision 524.
In some aspects, the image data (e.g., image data 502, on which the obtained statistics are based) may be, or may include, a first portion of an image. Additionally, the statistics may be, or may include, first statistics. The computing device (or one or more components thereof) may further obtain second statistics based on second image data. The second image data may be, or may include, a second portion of the image. The second statistics may be associated with at least one of color or brightness of the second image data. The computing device (or one or more components thereof) may further determine, based on the second statistics, a fourth white-balance decision using the white-balance algorithm; determine, based on the second statistics, a fifth white-balance decision using the machine-learning model; and determine a sixth white-balance decision based on the fourth white-balance decision and the fifth white-balance decision. For example, image data 502 may be a first portion of an image. System 500 may determine a first instance of each of white-balance decision 510, white-balance decision 514, and white-balance decision 524 based on the first portion of the image. System 500 may receive a second portion of the image and determine a second instance of each of white-balance decision 510, white-balance decision 514, and white-balance decision 524 based on the second portion of the image.
In some aspects, the computing device (or one or more components thereof) may white-balance the first image data based on the third white-balance decision to generate third image data; white-balance the second image data based on the sixth white-balance decision to generate fourth image data; and combine the third image data with the fourth image data. For example, the computing device (or one or more components thereof) may white-balance the first portion of the image based on the first instance of white-balance decision 524 and white-balance the second portion of the image based on the second instance of white-balance decision 524. System 500 may combine the white-balanced image portions to generate a final image.
In some aspects, the computing device (or one or more components thereof) may blend a white balance of an edge of the third image data based on the third white-balance decision and the sixth white-balance decision; and blend a white balance of an edge of the fourth image data based on the third white-balance decision and the sixth white-balance decision. For example, at edges between the white-balanced portions of the image, system 500 may blend the white balance (e.g., the red gain, the green gain, and/or the blue gain) .
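One way such blending could be implemented (a hypothetical sketch; the function name and gain values are not taken from the disclosure) is to interpolate the per-channel gains linearly across the edge region between the two portions:

```python
import numpy as np

def blend_edge_gains(gains_a, gains_b, width):
    """Linearly interpolate between two per-portion white-balance gain
    triples across an edge region `width` pixels wide.

    `gains_a`/`gains_b` are hypothetical stand-ins for the white-balance
    decisions applied to two adjacent image portions.
    """
    a = np.asarray(gains_a, dtype=np.float32)
    b = np.asarray(gains_b, dtype=np.float32)
    t = np.linspace(0.0, 1.0, width)[:, None]  # 0 at portion A, 1 at portion B
    return (1.0 - t) * a + t * b

ramp = blend_edge_gains((1.2, 1.0, 0.8), (1.0, 1.0, 1.0), width=5)
# ramp[0] equals the first portion's gains, ramp[-1] the second portion's,
# with a smooth transition in between.
```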
In some examples, as noted previously, the methods described herein (e.g., process 700 of FIG. 7, process 800 of FIG. 8, and/or other methods described herein) can be performed, in whole or in part, by a computing device or apparatus. In one example, one or more of the methods can be performed by image-processing system 100 of FIG. 1, image-processing device 104 of FIG. 1, image processor 124 of FIG. 1, system 500 of FIG. 5, stats engine 504 of FIG. 5, white-balance algorithm 508 of FIG. 5, machine-learning model 512 of FIG. 5, AWB decision aggregator 522 of FIG. 5, system 600 of FIG. 6, AWB decision aggregator 522 of FIG. 6, and/or machine-learning model 602 of FIG. 6, or by another system or device. In another example, one or more of the methods (e.g., process 700 of FIG. 7, process 800 of FIG. 8, and/or other methods described herein) can be performed, in whole or in part, by the computing-device architecture 1100 shown in FIG. 11. For instance, a computing device with the computing-device architecture 1100 shown in FIG. 11 can include, or be included in, the components of the image-processing system 100 of FIG. 1, image-processing device 104 of FIG. 1, image processor 124 of FIG. 1, system 500 of FIG. 5, stats engine 504 of FIG. 5, white-balance algorithm 508 of FIG. 5, machine-learning model 512 of FIG. 5, AWB decision aggregator 522 of FIG. 5, system 600 of FIG. 6, AWB decision aggregator 522 of FIG. 6, and/or machine-learning model 602 of FIG. 6 and can implement the operations of process 700, process 800, and/or other processes described herein. In some cases, the computing device or apparatus can include various components, such as one or more input devices, one or more output devices, one or more processors, one or more microprocessors, one or more microcomputers, one or more cameras, one or more sensors, and/or other component (s) that are configured to carry out the steps of processes described herein.
In some examples, the computing device can include a display, a network interface configured to communicate and/or receive the data, any combination thereof, and/or other component (s) . The
network interface can be configured to communicate and/or receive Internet Protocol (IP) based data or other type of data.
The components of the computing device can be implemented in circuitry. For example, the components can include and/or can be implemented using electronic circuits or other electronic hardware, which can include one or more programmable electronic circuits (e.g., microprocessors, graphics processing units (GPUs) , digital signal processors (DSPs) , central processing units (CPUs) , and/or other suitable electronic circuits) , and/or can include and/or be implemented using computer software, firmware, or any combination thereof, to perform the various operations described herein.
Process 700, process 800, and/or other processes described herein are illustrated as logical flow diagrams, the operation of which represents a sequence of operations that can be implemented in hardware, computer instructions, or a combination thereof. In the context of computer instructions, the operations represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the processes.
Additionally, process 700, process 800, and/or other processes described herein can be performed under the control of one or more computer systems configured with executable instructions and can be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware, or combinations thereof. As noted above, the code can be stored on a computer-readable or machine-readable storage medium, for example, in the form of a computer program comprising a plurality of instructions executable by one or more processors. The computer-readable or machine-readable storage medium can be non-transitory.
As noted above, various aspects of the present disclosure can use machine-learning models or systems.
FIG. 9 is an illustrative example of a neural network 900 (e.g., a deep-learning neural network) that can be used to implement machine-learning based white-balance-decision determination, confidence-value determination, misleading-scene determination, feature segmentation, implicit-neural-representation generation, rendering, classification, object detection, image recognition (e.g., face recognition, object recognition, scene recognition, etc. ) , feature extraction, authentication, gaze detection, gaze prediction, and/or automation. For example, neural network 900 may be an example of, or can implement, machine-learning model 512 and/or machine-learning model 602.
An input layer 902 includes input data. In one illustrative example, input layer 902 can include data representing AWB stats 506 or white-balance decision 514. Neural network 900 includes multiple hidden layers 906a, 906b, through 906n. The hidden layers 906a, 906b, through hidden layer 906n include “n” number of hidden layers, where “n” is an integer greater than or equal to one. The number of hidden layers can be made to include as many layers as needed for the given application. Neural network 900 further includes an output layer 904 that provides an output resulting from the processing performed by the hidden layers 906a, 906b, through 906n. In one illustrative example, output layer 904 can provide white-balance decision 514, confidence value 516, and/or scene flag 518.
Neural network 900 may be, or may include, a multi-layer neural network of interconnected nodes. Each node can represent a piece of information. Information associated with the nodes is shared among the different layers and each layer retains information as information is processed. In some cases, neural network 900 can include a feed-forward network, in which case there are no feedback connections where outputs of the network are fed back into itself. In some cases, neural network 900 can include a recurrent neural network, which can have loops that allow information to be carried across nodes while reading in input.
Information can be exchanged between nodes through node-to-node interconnections between the various layers. Nodes of input layer 902 can activate a set of nodes in the first hidden layer 906a. For example, as shown, each of the input nodes of input layer 902 is connected to each of the nodes of the first hidden layer 906a. The nodes of first hidden layer 906a can transform the information of each input node by applying activation functions to the input node information. The information derived from the transformation can then be passed to and can
activate the nodes of the next hidden layer 906b, which can perform their own designated functions. Example functions include convolutional, up-sampling, data transformation, and/or any other suitable functions. The output of the hidden layer 906b can then activate nodes of the next hidden layer, and so on. The output of the last hidden layer 906n can activate one or more nodes of the output layer 904, at which an output is provided. In some cases, while nodes (e.g., node 908) in neural network 900 are shown as having multiple output lines, a node has a single output and all lines shown as being output from a node represent the same output value.
In some cases, each node or interconnection between nodes can have a weight that is a set of parameters derived from the training of neural network 900. Once neural network 900 is trained, it can be referred to as a trained neural network, which can be used to perform one or more operations. For example, an interconnection between nodes can represent a piece of information learned about the interconnected nodes. The interconnection can have a tunable numeric weight that can be tuned (e.g., based on a training dataset) , allowing neural network 900 to be adaptive to inputs and able to learn as more and more data is processed.
Neural network 900 may be pre-trained to process the features from the data in the input layer 902 using the different hidden layers 906a, 906b, through 906n in order to provide the output through the output layer 904. In an example in which neural network 900 is used to identify features in images, neural network 900 can be trained using training data that includes both images and labels, as described above. For instance, training images can be input into the network, with each training image having a label indicating the features in the images (for the feature-segmentation machine-learning system) or a label indicating classes of an activity in each image. In one example using object classification for illustrative purposes, a training image can include an image of a number 2, in which case the label for the image can be [0 0 1 0 0 0 0 0 0 0] .
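For instance, the label vector for the digit-2 example above is a one-hot encoding, which could be produced as follows (the helper name is illustrative):

```python
def one_hot(label, num_classes=10):
    """Encode a class index as a one-hot training label: a vector of zeros
    with a single 1 at the position of the true class."""
    vec = [0] * num_classes
    vec[label] = 1
    return vec

# The label for a training image of the number 2, as in the text above.
label_for_two = one_hot(2)  # [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
```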
In some cases, neural network 900 can adjust the weights of the nodes using a training process called backpropagation. As noted above, a backpropagation process can include a forward pass, a loss function, a backward pass, and a weight update. The forward pass, loss function, backward pass, and parameter update is performed for one training iteration. The process can be repeated for a certain number of iterations for each set of training images until neural network 900 is trained well enough so that the weights of the layers are accurately tuned.
For the example of identifying objects in images, the forward pass can include passing a training image through neural network 900. The weights are initially randomized before neural network 900 is trained. As an illustrative example, an image can include an array of numbers representing the pixels of the image. Each number in the array can include a value from 0 to 255 describing the pixel intensity at that position in the array. In one example, the array can include a 28 x 28 x 3 array of numbers with 28 rows and 28 columns of pixels and 3 color components (such as red, green, and blue, or luma and two chroma components, or the like) .
As noted above, for a first training iteration for neural network 900, the output will likely include values that do not give preference to any particular class due to the weights being randomly selected at initialization. For example, if the output is a vector with probabilities that the object includes different classes, the probability value for each of the different classes can be equal or at least very similar (e.g., for ten possible classes, each class can have a probability value of 0.1) . With the initial weights, neural network 900 is unable to determine low-level features and thus cannot make an accurate determination of what the classification of the object might be. A loss function can be used to analyze error in the output. Any suitable loss function definition can be used, such as a cross-entropy loss. Another example of a loss function includes the mean squared error (MSE) , defined as E_total = Σ ½ (target − output) ², which sums one-half times the square of the difference between the actual answer and the predicted output over all outputs. The loss can be set to be equal to the value of E_total.
The loss (or error) will be high for the first training images since the actual values will be much different than the predicted output. The goal of training is to minimize the amount of loss so that the predicted output is the same as the training label. Neural network 900 can perform a backward pass by determining which inputs (weights) most contributed to the loss of the network and can adjust the weights so that the loss decreases and is eventually minimized. A derivative of the loss with respect to the weights (denoted as dL/dW, where W are the weights at a particular layer) can be computed to determine the weights that contributed most to the loss of the network. After the derivative is computed, a weight update can be performed by updating all the weights of the filters. For example, the weights can be updated so that they change in the opposite direction of the gradient. The weight update can be denoted as w = w_i − η (dL/dW) , where w denotes a weight, w_i denotes the initial weight, and η denotes a learning rate. The learning rate can be set to any suitable value, with a high learning rate implying larger weight updates and a lower value indicating smaller weight updates.
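As a toy, non-limiting illustration of the training step described above, the following sketch computes an MSE-style loss and applies the gradient-descent weight update to a single weight (the one-weight model, values, and function names are hypothetical, not taken from the disclosure):

```python
import numpy as np

def mse_loss(target, output):
    """E_total: the sum of 1/2 * (target - output)^2 over all outputs."""
    target = np.asarray(target, dtype=np.float64)
    output = np.asarray(output, dtype=np.float64)
    return 0.5 * np.sum((target - output) ** 2)

def update_weight(w, dL_dW, lr):
    """Move the weight opposite the gradient: w = w_i - lr * dL/dW."""
    return w - lr * dL_dW

# Toy single-weight model: output = w * x, so dL/dW = -(target - output) * x.
x, target = 2.0, 1.0
w = 0.1  # "randomly" initialized weight
for _ in range(20):
    output = w * x
    grad = -(target - output) * x
    w = update_weight(w, grad, lr=0.1)
# After repeated iterations, the loss shrinks toward zero and the
# predicted output approaches the target.
```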
Neural network 900 can include any suitable deep network. One example includes a convolutional neural network (CNN) , which includes an input layer and an output layer, with multiple hidden layers between the input and output layers. The hidden layers of a CNN include a series of convolutional, nonlinear, pooling (for downsampling) , and fully connected layers. Neural network 900 can include any other deep network other than a CNN, such as an autoencoder, a deep belief network (DBN) , or a recurrent neural network (RNN) , among others.
FIG. 10 is an illustrative example of a convolutional neural network (CNN) 1000. The input layer 1002 of the CNN 1000 includes data representing an image or frame. For example, the data can include an array of numbers representing the pixels of the image, with each number in the array including a value from 0 to 255 describing the pixel intensity at that position in the array. Using the previous example from above, the array can include a 28 x 28 x 3 array of numbers with 28 rows and 28 columns of pixels and 3 color components (e.g., red, green, and blue, or luma and two chroma components, or the like) . The image can be passed through a convolutional hidden layer 1004, an optional non-linear activation layer, a pooling hidden layer 1006, and fully connected layer 1008 (which fully connected layer 1008 can be hidden) to get an output at the output layer 1010. While only one of each hidden layer is shown in FIG. 10, one of ordinary skill will appreciate that multiple convolutional hidden layers, non-linear layers, pooling hidden layers, and/or fully connected layers can be included in the CNN 1000. As previously described, the output can indicate a single class of an object or can include a probability of classes that best describe the object in the image.
The first layer of the CNN 1000 can be the convolutional hidden layer 1004. The convolutional hidden layer 1004 can analyze image data of the input layer 1002. Each node of the convolutional hidden layer 1004 is connected to a region of nodes (pixels) of the input image called a receptive field. The convolutional hidden layer 1004 can be considered as one or more filters (each filter corresponding to a different activation or feature map) , with each convolutional iteration of a filter being a node or neuron of the convolutional hidden layer 1004. For example, the region of the input image that a filter covers at each convolutional iteration would be the receptive field for the filter. In one illustrative example, if the input image includes a 28×28 array, and each filter (and corresponding receptive field) is a 5×5 array, then there will be 24×24 nodes in the convolutional hidden layer 1004. Each connection between a node and a receptive field for that node learns a weight and, in some cases, an overall bias such that each node learns
to analyze its particular local receptive field in the input image. Each node of the convolutional hidden layer 1004 will have the same weights and bias (called a shared weight and a shared bias) . For example, the filter has an array of weights (numbers) and the same depth as the input. A filter will have a depth of 3 for an image frame example (according to three color components of the input image) . An illustrative example size of the filter array is 5 x 5 x 3, corresponding to a size of the receptive field of a node.
The convolutional nature of the convolutional hidden layer 1004 is due to each node of the convolutional layer being applied to its corresponding receptive field. For example, a filter of the convolutional hidden layer 1004 can begin in the top-left corner of the input image array and can convolve around the input image. As noted above, each convolutional iteration of the filter can be considered a node or neuron of the convolutional hidden layer 1004. At each convolutional iteration, the values of the filter are multiplied with a corresponding number of the original pixel values of the image (e.g., the 5x5 filter array is multiplied by a 5x5 array of input pixel values at the top-left corner of the input image array) . The multiplications from each convolutional iteration can be summed together to obtain a total sum for that iteration or node. The process is next continued at a next location in the input image according to the receptive field of a next node in the convolutional hidden layer 1004. For example, a filter can be moved by a step amount (referred to as a stride) to the next receptive field. The stride can be set to 1 or any other suitable amount. For example, if the stride is set to 1, the filter will be moved to the right by 1 pixel at each convolutional iteration. Processing the filter at each unique location of the input volume produces a number representing the filter results for that location, resulting in a total sum value being determined for each node of the convolutional hidden layer 1004.
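The sliding-filter computation described above can be sketched as follows (a simplified, single-channel illustration; the function name is hypothetical):

```python
import numpy as np

def convolve2d(image, kernel, stride=1):
    """Slide `kernel` over `image` (valid positions only), multiplying the
    filter values with the underlying pixel values and summing the products
    at each location, as described above."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    oh = (ih - kh) // stride + 1  # number of vertical filter positions
    ow = (iw - kw) // stride + 1  # number of horizontal filter positions
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i * stride:i * stride + kh,
                          j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)  # total sum for this node
    return out

# A 5x5 filter over a 28x28 single-channel image with a stride of 1
# yields the 24x24 set of nodes from the example in the text.
activation = convolve2d(np.random.rand(28, 28), np.random.rand(5, 5))
# activation.shape == (24, 24)
```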
The mapping from the input layer to the convolutional hidden layer 1004 is referred to as an activation map (or feature map) . The activation map includes a value for each node representing the filter results at each location of the input volume. The activation map can include an array that includes the various total sum values resulting from each iteration of the filter on the input volume. For example, the activation map will include a 24 x 24 array if a 5 x 5 filter is applied to each pixel (a stride of 1) of a 28 x 28 input image. The convolutional hidden layer 1004 can include several activation maps in order to identify multiple features in an image. The example shown in FIG. 10 includes three activation maps. Using three activation maps, the
convolutional hidden layer 1004 can detect three different kinds of features, with each feature being detectable across the entire image.
In some examples, a non-linear hidden layer can be applied after the convolutional hidden layer 1004. The non-linear layer can be used to introduce non-linearity to a system that has been computing linear operations. One illustrative example of a non-linear layer is a rectified linear unit (ReLU) layer. A ReLU layer can apply the function f (x) = max (0, x) to all of the values in the input volume, which changes all the negative activations to 0. The ReLU can thus increase the non-linear properties of the CNN 1000 without affecting the receptive fields of the convolutional hidden layer 1004.
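A minimal sketch of the ReLU activation described above:

```python
import numpy as np

def relu(x):
    """f(x) = max(0, x): change all negative activations to 0 and pass
    non-negative values through unchanged."""
    return np.maximum(0, x)

out = relu(np.array([-2.0, -0.5, 0.0, 1.5]))
# The negative activations become 0; 0.0 and 1.5 pass through unchanged.
```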
The pooling hidden layer 1006 can be applied after the convolutional hidden layer 1004 (and after the non-linear hidden layer when used) . The pooling hidden layer 1006 is used to simplify the information in the output from the convolutional hidden layer 1004. For example, the pooling hidden layer 1006 can take each activation map output from the convolutional hidden layer 1004 and generate a condensed activation map (or feature map) using a pooling function. Max-pooling is one example of a function performed by a pooling hidden layer. Other forms of pooling functions can be used by the pooling hidden layer 1006, such as average pooling, L2-norm pooling, or other suitable pooling functions. A pooling function (e.g., a max-pooling filter, an L2-norm filter, or other suitable pooling filter) is applied to each activation map included in the convolutional hidden layer 1004. In the example shown in FIG. 10, three pooling filters are used for the three activation maps in the convolutional hidden layer 1004.
In some examples, max-pooling can be used by applying a max-pooling filter (e.g., having a size of 2x2) with a stride (e.g., equal to a dimension of the filter, such as a stride of 2) to an activation map output from the convolutional hidden layer 1004. The output from a max-pooling filter includes the maximum number in every sub-region that the filter convolves around. Using a 2x2 filter as an example, each unit in the pooling layer can summarize a region of 2×2 nodes in the previous layer (with each node being a value in the activation map) . For example, four values (nodes) in an activation map will be analyzed by a 2x2 max-pooling filter at each iteration of the filter, with the maximum value from the four values being output as the “max” value. If such a max-pooling filter is applied to an activation filter from the convolutional hidden
layer 1004 having a dimension of 24x24 nodes, the output from the pooling hidden layer 1006 will be an array of 12x12 nodes.
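The 2x2, stride-2 max-pooling described above can be sketched as follows (single activation map; the function name is illustrative):

```python
import numpy as np

def max_pool(activation, size=2, stride=2):
    """Replace each size x size sub-region of the activation map with its
    maximum value, stepping the filter by `stride` at each iteration."""
    h, w = activation.shape
    oh = (h - size) // stride + 1
    ow = (w - size) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = activation[i * stride:i * stride + size,
                                   j * stride:j * stride + size].max()
    return out

pooled = max_pool(np.random.rand(24, 24))
# pooled.shape == (12, 12), matching the 24x24 -> 12x12 example above.
```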
In some examples, an L2-norm pooling filter could also be used. The L2-norm pooling filter includes computing the square root of the sum of the squares of the values in the 2×2 region (or other suitable region) of an activation map (instead of computing the maximum values as is done in max-pooling) and using the computed values as an output.
The pooling function (e.g., max-pooling, L2-norm pooling, or other pooling function) determines whether a given feature is found anywhere in a region of the image and discards the exact positional information. This can be done without affecting results of the feature detection because, once a feature has been found, the exact location of the feature is not as important as its approximate location relative to other features. Max-pooling (as well as other pooling methods) offer the benefit that there are many fewer pooled features, thus reducing the number of parameters needed in later layers of the CNN 1000.
The final layer of connections in the network is a fully-connected layer that connects every node from the pooling hidden layer 1006 to every one of the output nodes in the output layer 1010. Using the example above, the input layer includes 28 x 28 nodes encoding the pixel intensities of the input image, the convolutional hidden layer 1004 includes 3×24×24 hidden feature nodes based on application of a 5×5 local receptive field (for the filters) to three activation maps, and the pooling hidden layer 1006 includes a layer of 3×12×12 hidden feature nodes based on application of a max-pooling filter to 2×2 regions across each of the three feature maps. Extending this example, the output layer 1010 can include ten output nodes. In such an example, every node of the 3x12x12 pooling hidden layer 1006 is connected to every node of the output layer 1010.
The fully connected layer 1008 can obtain the output of the previous pooling hidden layer 1006 (which should represent the activation maps of high-level features) and determines the features that most correlate to a particular class. For example, the fully connected layer 1008 can determine the high-level features that most strongly correlate to a particular class and can include weights (nodes) for the high-level features. A product can be computed between the weights of the fully connected layer 1008 and the pooling hidden layer 1006 to obtain probabilities for the different classes. For example, if the CNN 1000 is being used to predict that
an object in an image is a person, high values will be present in the activation maps that represent high-level features of people (e.g., two legs are present, a face is present at the top of the object, two eyes are present at the top left and top right of the face, a nose is present in the middle of the face, a mouth is present at the bottom of the face, and/or other features common for a person) .
In some examples, the output from the output layer 1010 can include an M-dimensional vector (in the prior example, M=10) . M indicates the number of classes that the CNN 1000 has to choose from when classifying the object in the image. Other example outputs can also be provided. Each number in the M-dimensional vector can represent the probability the object is of a certain class. In one illustrative example, if a 10-dimensional output vector representing ten different classes of objects is [0 0 0.05 0.8 0 0.15 0 0 0 0] , the vector indicates that there is a 5% probability that the image is the third class of object (e.g., a dog) , an 80% probability that the image is the fourth class of object (e.g., a human) , and a 15% probability that the image is the sixth class of object (e.g., a kangaroo) . The probability for a class can be considered a confidence level that the object is part of that class.
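Reading off the predicted class and its confidence from the example output vector above can be sketched as:

```python
# The 10-dimensional output vector from the example in the text.
probs = [0, 0, 0.05, 0.8, 0, 0.15, 0, 0, 0, 0]

# The predicted class is the one with the highest probability, and that
# probability can be read as the network's confidence in the class.
predicted_class = probs.index(max(probs))  # index 3: the fourth class
confidence = max(probs)                    # 0.8
```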
FIG. 11 illustrates an example computing-device architecture 1100 of an example computing device which can implement the various techniques described herein. In some examples, the computing device can include a mobile device, a wearable device, an extended reality device (e.g., a virtual reality (VR) device, an augmented reality (AR) device, or a mixed reality (MR) device) , a personal computer, a laptop computer, a video server, a vehicle (or computing device of a vehicle) , or other device. For example, the computing-device architecture 1100 may include, implement, or be included in any or all of image-processing system 100 of FIG. 1, image-processing device 104 of FIG. 1, image processor 124 of FIG. 1, system 500 of FIG. 5, stats engine 504 of FIG. 5, white-balance algorithm 508 of FIG. 5, machine-learning model 512 of FIG. 5, AWB decision aggregator 522 of FIG. 5, system 600 of FIG. 6, AWB decision aggregator 522 of FIG. 6, and/or machine-learning model 602 of FIG. 6. Additionally or alternatively, computing-device architecture 1100 may be configured to perform process 700, process 800, and/or other processes described herein.
The components of computing-device architecture 1100 are shown in electrical communication with each other using connection 1112, such as a bus. The example computing-device architecture 1100 includes a processing unit (CPU or processor) 1102 and computing
device connection 1112 that couples various computing device components including computing device memory 1110, such as read only memory (ROM) 1108 and random-access memory (RAM) 1106, to processor 1102.
Computing-device architecture 1100 can include a cache of high-speed memory connected directly with, in close proximity to, or integrated as part of processor 1102. Computing-device architecture 1100 can copy data from memory 1110 and/or the storage device 1114 to cache 1104 for quick access by processor 1102. In this way, the cache can provide a performance boost that avoids processor 1102 delays while waiting for data. These and other modules can control or be configured to control processor 1102 to perform various actions. Other computing device memory 1110 may be available for use as well. Memory 1110 can include multiple different types of memory with different performance characteristics. Processor 1102 can include any general-purpose processor and a hardware or software service, such as service 1 1116, service 2 1118, and service 3 1120 stored in storage device 1114, configured to control processor 1102 as well as a special-purpose processor where software instructions are incorporated into the processor design. Processor 1102 may be a self-contained system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric.
To enable user interaction with the computing-device architecture 1100, input device 1122 can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech and so forth. Output device 1124 can also be one or more of a number of output mechanisms known to those of skill in the art, such as a display, projector, television, speaker device, etc. In some instances, multimodal computing devices can enable a user to provide multiple types of input to communicate with computing-device architecture 1100. Communication interface 1126 can generally govern and manage the user input and computing device output. There is no restriction on operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.
Storage device 1114 is a non-volatile memory and can be a hard disk or other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges,
random-access memories (RAMs) 1106, read only memory (ROM) 1108, and hybrids thereof. Storage device 1114 can include services 1116, 1118, and 1120 for controlling processor 1102. Other hardware or software modules are contemplated. Storage device 1114 can be connected to the computing device connection 1112. In one aspect, a hardware module that performs a particular function can include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as processor 1102, connection 1112, output device 1124, and so forth, to carry out the function.
The term “substantially, ” in reference to a given parameter, property, or condition, may refer to a degree that one of ordinary skill in the art would understand that the given parameter, property, or condition is met with a small degree of variance, such as, for example, within acceptable manufacturing tolerances. By way of example, depending on the particular parameter, property, or condition that is substantially met, the parameter, property, or condition may be at least 90% met, at least 95% met, or even at least 99% met.
Aspects of the present disclosure are applicable to any suitable electronic device (such as security systems, smartphones, tablets, laptop computers, vehicles, drones, or other devices) including or coupled to one or more active depth sensing systems. While described below with respect to a device having or coupled to one light projector, aspects of the present disclosure are applicable to devices having any number of light projectors and are therefore not limited to specific devices.
The term “device” is not limited to one or a specific number of physical objects (such as one smartphone, one controller, one processing system and so on) . As used herein, a device may be any electronic device with one or more parts that may implement at least some portions of this disclosure. While the below description and examples use the term “device” to describe various aspects of this disclosure, the term “device” is not limited to a specific configuration, type, or number of objects. Additionally, the term “system” is not limited to multiple components or specific aspects. For example, a system may be implemented on one or more printed circuit boards or other substrates and may have movable or static components. While the below description and examples use the term “system” to describe various aspects of this disclosure, the term “system” is not limited to a specific configuration, type, or number of objects.
Specific details are provided in the description above to provide a thorough understanding of the aspects and examples provided herein. However, it will be understood by one of ordinary skill in the art that the aspects may be practiced without these specific details. For clarity of explanation, in some instances the present technology may be presented as including individual functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software. Additional components may be used other than those shown in the figures and/or described herein. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the aspects in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the aspects.
Individual aspects may be described above as a process or method which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
Processes and methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer-readable media. Such instructions can include, for example, instructions and data which cause or otherwise configure a general-purpose computer, special purpose computer, or a processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, source code, etc.
The term “computer-readable medium” includes, but is not limited to, portable or non-portable storage devices, optical storage devices, and various other mediums capable of storing, containing, or carrying instruction (s) and/or data. A computer-readable medium may include a non-transitory medium in which data can be stored and that does not include carrier waves and/or
transitory electronic signals propagating wirelessly or over wired connections. Examples of a non-transitory medium may include, but are not limited to, a magnetic disk or tape, optical storage media such as compact disk (CD) or digital versatile disk (DVD) , flash memory, magnetic or optical disks, USB devices provided with non-volatile memory, networked storage devices, any suitable combination thereof, among others. A computer-readable medium may have stored thereon code and/or machine-executable instructions that may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, or the like.
In some aspects the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like. However, when mentioned, non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.
Devices implementing processes and methods according to these disclosures can include hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof, and can take any of a variety of form factors. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the necessary tasks (e.g., a computer-program product) may be stored in a computer-readable or machine-readable medium. A processor (s) may perform the necessary tasks. Typical examples of form factors include laptops, smart phones, mobile phones, tablet devices or other small form factor personal computers, personal digital assistants, rackmount devices, standalone devices, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example.
The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are example means for providing the functions described in the disclosure.
In the foregoing description, aspects of the application are described with reference to specific aspects thereof, but those skilled in the art will recognize that the application is not limited thereto. Thus, while illustrative aspects of the application have been described in detail herein, it is to be understood that the inventive concepts may be otherwise variously embodied and employed, and that the appended claims are intended to be construed to include such variations, except as limited by the prior art. Various features and aspects of the above-described application may be used individually or jointly. Further, aspects can be utilized in any number of environments and applications beyond those described herein without departing from the broader spirit and scope of the specification. The specification and drawings are, accordingly, to be regarded as illustrative rather than restrictive. For the purposes of illustration, methods were described in a particular order. It should be appreciated that in alternate aspects, the methods may be performed in a different order than that described.
One of ordinary skill will appreciate that the less than ( “<” ) and greater than ( “>” ) symbols or terminology used herein can be replaced with less than or equal to ( “≤” ) and greater than or equal to ( “≥” ) symbols, respectively, without departing from the scope of this description.
Where components are described as being “configured to” perform certain operations, such configuration can be accomplished, for example, by designing electronic circuits or other hardware to perform the operation, by programming programmable electronic circuits (e.g., microprocessors, or other suitable electronic circuits) to perform the operation, or any combination thereof.
The phrase “coupled to” refers to any component that is physically connected to another component either directly or indirectly, and/or any component that is in communication with another component (e.g., connected to the other component over a wired or wireless connection, and/or other suitable communication interface) either directly or indirectly.
Claim language or other language reciting “at least one of” a set and/or “one or more” of a set indicates that one member of the set or multiple members of the set (in any combination) satisfy the claim. For example, claim language reciting “at least one of A and B” or “at least one of A or B” means A, B, or A and B. In another example, claim language reciting “at least one of A, B, and C” or “at least one of A, B, or C” means A, B, C, or A and B, or A and C, or B and C, or A and B and C, or any duplicate information or data (e.g., A and A, B and B, C and C, A and A
and B, and so on) , or any other ordering, duplication, or combination of A, B, and C. The language “at least one of” a set and/or “one or more” of a set does not limit the set to the items listed in the set. For example, claim language reciting “at least one of A and B” or “at least one of A or B” may mean A, B, or A and B, and may additionally include items not listed in the set of A and B. The phrases “at least one” and “one or more” are used interchangeably herein.
Claim language or other language reciting “at least one processor configured to, ” “at least one processor being configured to, ” “one or more processors configured to, ” “one or more processors being configured to, ” or the like indicates that one processor or multiple processors (in any combination) can perform the associated operation (s) . For example, claim language reciting “at least one processor configured to: X, Y, and Z” means a single processor can be used to perform operations X, Y, and Z; or that multiple processors are each tasked with a certain subset of operations X, Y, and Z such that together the multiple processors perform X, Y, and Z; or that a group of multiple processors work together to perform operations X, Y, and Z. In another example, claim language reciting “at least one processor configured to: X, Y, and Z” can mean that any single processor may only perform at least a subset of operations X, Y, and Z.
Where reference is made to one or more elements performing functions (e.g., steps of a method) , one element may perform all functions, or more than one element may collectively perform the functions. When more than one element collectively performs the functions, each function need not be performed by each of those elements (e.g., different functions may be performed by different elements) and/or each function need not be performed in whole by only one element (e.g., different elements may perform different sub-functions of a function) . Similarly, where reference is made to one or more elements configured to cause another element (e.g., an apparatus) to perform functions, one element may be configured to cause the other element to perform all functions, or more than one element may collectively be configured to cause the other element to perform the functions.
Where reference is made to an entity (e.g., any entity or device described herein) performing functions or being configured to perform functions (e.g., steps of a method) , the entity may be configured to cause one or more elements (individually or collectively) to perform the functions. The one or more components of the entity may include at least one memory, at least one processor, at least one communication interface, another component configured to perform
one or more (or all) of the functions, and/or any combination thereof. Where reference is made to the entity performing functions, the entity may be configured to cause one component to perform all functions, or to cause more than one component to collectively perform the functions. When the entity is configured to cause more than one component to collectively perform the functions, each function need not be performed by each of those components (e.g., different functions may be performed by different components) and/or each function need not be performed in whole by only one component (e.g., different components may perform different sub-functions of a function) .
The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware, computer software, firmware, or combinations thereof. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The techniques described herein may also be implemented in electronic hardware, computer software, firmware, or any combination thereof. Such techniques may be implemented in any of a variety of devices such as general-purpose computers, wireless communication device handsets, or integrated circuit devices having multiple uses including application in wireless communication device handsets and other devices. Any features described as modules or components may be implemented together in an integrated logic device or separately as discrete but interoperable logic devices. If implemented in software, the techniques may be realized at least in part by a computer-readable data storage medium including program code including instructions that, when executed, perform one or more of the methods described above. The computer-readable data storage medium may form part of a computer program product, which may include packaging materials. The computer-readable medium may include memory or data storage media, such as random-access memory (RAM) such as synchronous dynamic random-access memory (SDRAM), read-only memory (ROM), non-volatile random-access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), FLASH memory, magnetic or optical data storage media, and the like. The techniques additionally, or alternatively, may be realized at least in part by a computer-readable communication medium that carries or communicates program code in the form of instructions or data structures and that can be accessed, read, and/or executed by a computer, such as propagated signals or waves.
The program code may be executed by a processor, which may include one or more processors, such as one or more digital signal processors (DSPs), general-purpose microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Such a processor may be configured to perform any of the techniques described in this disclosure. A general-purpose processor may be a microprocessor; but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Accordingly, the term “processor,” as used herein may refer to any of the foregoing structure, any combination of the foregoing structure, or any other structure or apparatus suitable for implementation of the techniques described herein.
Illustrative aspects of the disclosure include:
Aspect 1. An apparatus for determining a white balance for an image, the apparatus comprising: at least one memory; and at least one processor coupled to the at least one memory and configured to: obtain statistics based on image data, the statistics being associated with at least one of color or brightness of the image data; determine, based on the statistics, a first white-balance decision using a white-balance algorithm; determine, based on the statistics, a second white-balance decision using a machine-learning model trained to determine white-balance decisions; and determine a third white-balance decision based on the first white-balance decision and the second white-balance decision.
Aspect 2. The apparatus of aspect 1, wherein, to determine the third white-balance decision, the at least one processor is configured to: determine a first weight for the first white-balance decision; determine a second weight for the second white-balance decision; and
determine the third white-balance decision based on the first weight, the first white-balance decision, the second weight, and the second white-balance decision.
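The weighted combination recited in Aspects 2 and 20 can be illustrated with a short sketch. The function name and the representation of a white-balance decision as a triple of per-channel (R, G, B) gains are illustrative assumptions for this sketch, not the claimed implementation:

```python
def combine_decisions(first, second, w1, w2):
    """Blend two white-balance decisions, each a (r_gain, g_gain, b_gain)
    triple, using per-decision weights normalized to sum to 1."""
    total = w1 + w2
    return tuple((w1 * a + w2 * b) / total for a, b in zip(first, second))

# Hypothetical values: algorithm decision vs. machine-learning decision.
algo_decision = (1.8, 1.0, 2.2)  # first white-balance decision
ml_decision = (2.0, 1.0, 2.0)    # second white-balance decision
third_decision = combine_decisions(algo_decision, ml_decision, w1=0.25, w2=0.75)
```

In this sketch the third decision lands between the two inputs channel by channel, weighted toward whichever source is trusted more.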
Aspect 3. The apparatus of any one of aspects 1 or 2, wherein the at least one processor is further configured to determine at least one of a scene flag or a confidence value, wherein the scene flag is related to a scene of the image data, wherein the confidence value is related to the second white-balance decision, and wherein the third white-balance decision is further based on at least one of the scene flag or the confidence value.
Aspect 4. The apparatus of aspect 3, wherein the scene flag comprises an indication of a determination that the scene includes white points or an indication of a determination that the scene does not include white points.
Aspect 5. The apparatus of aspect 4, wherein, to determine the third white-balance decision, the at least one processor is configured to, responsive to the scene flag comprising the indication of the determination that the scene includes the white points, determine the third white-balance decision is the first white-balance decision.
Aspect 6. The apparatus of any one of aspects 4 or 5, wherein, to determine the third white-balance decision, the at least one processor is configured to, responsive to the scene flag comprising the indication of the determination that the scene does not include white points, determine a weight for the second white-balance decision based on the confidence value, and determine the third white-balance decision based on the first white-balance decision, the weight, and the second white-balance decision.
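The gating described in Aspects 5 and 6 — returning the first decision when the scene contains white points, and otherwise blending toward the second decision by a confidence-derived weight — might be sketched as follows. The names, and the use of the clamped confidence value directly as the blend weight, are assumptions for illustration:

```python
def select_decision(first, second, scene_has_white_points, confidence):
    """Return the third white-balance decision per the scene flag.

    If the scene contains white points, trust the statistics-driven
    algorithm (first decision). Otherwise blend toward the model's
    decision (second) in proportion to its confidence in [0, 1].
    """
    if scene_has_white_points:
        return first
    w = max(0.0, min(1.0, confidence))  # clamp confidence to [0, 1]
    return tuple((1.0 - w) * a + w * b for a, b in zip(first, second))
```

With a confidence of 0 this degenerates to the first decision, and with a confidence of 1 it adopts the second decision outright.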
Aspect 7. The apparatus of any one of aspects 3 to 6, wherein the confidence value is indicative of a level of confidence with which downstream operations should use the second white-balance decision.
Aspect 8. The apparatus of any one of aspects 3 to 7, wherein at least one of the scene flag or the confidence value is determined using the machine-learning model.
Aspect 9. The apparatus of any one of aspects 3 to 8, wherein the machine-learning model is a first machine-learning model and wherein at least one of the scene flag or the confidence value are determined by a second machine-learning model.
Aspect 10. The apparatus of any one of aspects 1 to 9, wherein the third white-balance decision is further based on at least one of: a comparison between a white reference point and a relationship between intensities of red light to intensities of green light and a relationship between intensities of blue light to intensities of green light; a comparison between a fourth white-balance decision and at least one of the first white-balance decision, the second white-balance decision, and the third white-balance decision, wherein the fourth white-balance decision is based on spectrum data from a spectrum sensor; a correlated color temperature; or a color temperature decision.
Aspect 11. The apparatus of any one of aspects 1 to 10, wherein the third white-balance decision is determined based on at least one of: spectrum data from a spectrum sensor; scene-detection results; a count of faces detected; or semantic segmentation information.
Aspect 12. The apparatus of any one of aspects 1 to 11, wherein the third white-balance decision comprises a gain for red pixel data, a gain for green pixel data, and a gain for blue pixel data.
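As a concrete illustration of Aspects 12 and 15, applying a decision comprising per-channel gains to image data could look like the sketch below. The array layout (H x W x 3 RGB) and the clipping to a white level are assumptions for illustration, not part of the aspects:

```python
import numpy as np

def apply_white_balance(image, decision, white_level=255.0):
    """Apply per-channel gains (r_gain, g_gain, b_gain) to an H x W x 3
    RGB image and clip the result to the sensor's white level."""
    gains = np.asarray(decision, dtype=np.float64)
    return np.clip(image.astype(np.float64) * gains, 0.0, white_level)
```

A neutral surface rendered with a color cast (say, too little red) is pushed back toward gray by a red gain above 1.0, while the clip prevents amplified channels from exceeding the representable range.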
Aspect 13. The apparatus of any one of aspects 1 to 12, wherein the statistics comprise at least one of: a lux index; a relationship between intensities of red light to intensities of green light; a relationship between intensities of blue light to intensities of green light; a comparison between a white reference point and the relationship between intensities of red light to intensities of green light and the relationship between intensities of blue light to intensities of green light; a correlated color temperature; or a semantic label.
Aspect 14. The apparatus of any one of aspects 1 to 13, wherein the white-balance algorithm is configured to determine the first white-balance decision based on a relationship between intensities of red light to intensities of green light and a relationship between intensities of blue light to intensities of green light.
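Aspect 14 describes an algorithm driven by R/G and B/G intensity relationships. A classic instance of such an algorithm — offered purely as an illustration, not as the claimed algorithm — is the gray-world estimate, which derives the gains directly from those ratios:

```python
import numpy as np

def gray_world_decision(image):
    """Estimate (r_gain, g_gain, b_gain) from mean R/G and B/G ratios,
    assuming the scene averages to gray (the gray-world assumption)."""
    means = image.reshape(-1, 3).mean(axis=0)  # mean R, G, B intensities
    r_over_g = means[0] / means[1]
    b_over_g = means[2] / means[1]
    # Gains that equalize the red and blue channel means to green.
    return (1.0 / r_over_g, 1.0, 1.0 / b_over_g)
```

The design choice here — normalizing to the green channel — mirrors the aspect's framing of both statistics as relationships to green-light intensity.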
Aspect 15. The apparatus of any one of aspects 1 to 14, wherein the at least one processor is further configured to white-balance the image data based on the third white-balance decision.
Aspect 16. The apparatus of any one of aspects 1 to 15, wherein the image data comprises a first portion of an image, wherein the statistics comprise first statistics, and wherein
the at least one processor is further configured to: obtain second statistics based on second image data, wherein the second image data comprises a second portion of the image and wherein the second statistics are associated with at least one of color or brightness of the second image data; determine, based on the second statistics, a fourth white-balance decision using the white-balance algorithm; determine, based on the second statistics, a fifth white-balance decision using the machine-learning model; and determine a sixth white-balance decision based on the fourth white-balance decision and the fifth white-balance decision.
Aspect 17. The apparatus of aspect 16, wherein the at least one processor is further configured to: white-balance the first image data based on the third white-balance decision to generate third image data; white-balance the second image data based on the sixth white-balance decision to generate fourth image data; and combine the third image data with the fourth image data.
Aspect 18. The apparatus of aspect 17, wherein the at least one processor is further configured to: blend a white-balance of an edge of the third image data based on the third white-balance decision and the sixth white-balance decision; and blend a white-balance of an edge of the fourth image data based on the third white-balance decision and the sixth white-balance decision.
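The edge blending of Aspects 17 and 18 — avoiding a visible seam where two differently white-balanced portions of an image meet — might be sketched with a linear ramp of gains near the shared edge. The function, the assumption that the regions abut left-to-right, and the linear ramp are all illustrative choices, not the claimed implementation:

```python
import numpy as np

def blend_edge(region, own_decision, neighbor_decision, ramp_width):
    """White-balance one region (H x W x 3) so that its right edge
    transitions smoothly toward the neighboring region's decision.

    Interior columns use the region's own per-channel gains; within
    `ramp_width` columns of the right edge, the gains are linearly
    interpolated toward the neighboring region's gains.
    """
    h, w, _ = region.shape
    own = np.asarray(own_decision, dtype=np.float64)
    other = np.asarray(neighbor_decision, dtype=np.float64)
    # Per-column blend factor: 0 in the interior, rising to 1 at the edge.
    t = np.zeros(w)
    t[w - ramp_width:] = np.linspace(0.0, 1.0, ramp_width)
    gains = (1.0 - t)[:, None] * own + t[:, None] * other  # shape (w, 3)
    return region.astype(np.float64) * gains[None, :, :]
```

Applying the mirror of this ramp to the neighboring portion's left edge, then combining the two portions, yields gains that vary continuously across the seam.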
Aspect 19. A method for determining a white balance for an image, the method comprising: obtaining statistics based on image data, the statistics being associated with at least one of color or brightness of the image data; determining, based on the statistics, a first white-balance decision using a white-balance algorithm; determining, based on the statistics, a second white-balance decision using a machine-learning model trained to determine white-balance decisions; and determining a third white-balance decision based on the first white-balance decision and the second white-balance decision.
Aspect 20. The method of aspect 19, wherein determining the third white-balance decision comprises: determining a first weight for the first white-balance decision; determining a second weight for the second white-balance decision; and determining the third white-balance decision based on the first weight, the first white-balance decision, the second weight, and the second white-balance decision.
Aspect 21. The method of any one of aspects 19 or 20, further comprising determining at least one of a scene flag or a confidence value, wherein the scene flag is related to a scene of the image data, wherein the confidence value is related to the second white-balance decision, and wherein the third white-balance decision is further based on at least one of the scene flag or the confidence value.
Aspect 22. The method of aspect 21, wherein the scene flag comprises an indication of a determination that the scene includes white points or an indication of a determination that the scene does not include white points.
Aspect 23. The method of aspect 22, wherein determining the third white-balance decision comprises responsive to the scene flag comprising the indication of the determination that the scene includes the white points, determining the third white-balance decision is the first white-balance decision.
Aspect 24. The method of any one of aspects 22 or 23, wherein determining the third white-balance decision further comprises responsive to the scene flag comprising the indication of the determination that the scene does not include white points, determining a weight for the second white-balance decision based on the confidence value, and determining the third white-balance decision based on the first white-balance decision, the weight, and the second white-balance decision.
Aspect 25. The method of any one of aspects 21 to 24, wherein the confidence value is indicative of a level of confidence with which downstream operations should use the second white-balance decision.
Aspect 26. The method of any one of aspects 21 to 25, wherein at least one of the scene flag or the confidence value is determined using the machine-learning model.
Aspect 27. The method of any one of aspects 21 to 26, wherein the machine-learning model is a first machine-learning model and wherein at least one of the scene flag or the confidence value are determined by a second machine-learning model.
Aspect 28. The method of any one of aspects 19 to 27, wherein the third white-balance decision is further based on at least one of: a comparison between a white reference point and a
relationship between intensities of red light to intensities of green light and a relationship between intensities of blue light to intensities of green light; a comparison between a fourth white-balance decision and at least one of the first white-balance decision, the second white-balance decision, and the third white-balance decision, wherein the fourth white-balance decision is based on spectrum data from a spectrum sensor; a correlated color temperature; or a color temperature decision.
Aspect 29. The method of any one of aspects 19 to 28, wherein the third white-balance decision is determined based on at least one of: spectrum data from a spectrum sensor; scene-detection results; a count of faces detected; or semantic segmentation information.
Aspect 30. The method of any one of aspects 19 to 29, wherein the third white-balance decision comprises a gain for red pixel data, a gain for green pixel data, and a gain for blue pixel data.
Aspect 31. The method of any one of aspects 19 to 30, wherein the statistics comprise at least one of: a lux index; a relationship between intensities of red light to intensities of green light; a relationship between intensities of blue light to intensities of green light; a comparison between a white reference point and the relationship between intensities of red light to intensities of green light and the relationship between intensities of blue light to intensities of green light; a correlated color temperature; or a semantic label.
Aspect 32. The method of any one of aspects 19 to 31, wherein the white-balance algorithm is configured to determine the first white-balance decision based on a relationship between intensities of red light to intensities of green light and a relationship between intensities of blue light to intensities of green light.
Aspect 33. The method of any one of aspects 19 to 32, further comprising white-balancing the image data based on the third white-balance decision.
Aspect 34. The method of any one of aspects 19 to 33, wherein the image data comprises a first portion of an image and wherein the statistics comprise first statistics; and further comprising: obtaining second statistics based on second image data, wherein the second image data comprises a second portion of the image and wherein the second statistics are associated with at least one of color or brightness of the second image data; determining, based on the
second statistics, a fourth white-balance decision using the white-balance algorithm; determining, based on the second statistics, a fifth white-balance decision using the machine-learning model; and determining a sixth white-balance decision based on the fourth white-balance decision and the fifth white-balance decision.
Aspect 35. The method of aspect 34, further comprising: white-balancing the first image data based on the third white-balance decision to generate third image data; white-balancing the second image data based on the sixth white-balance decision to generate fourth image data; and combining the third image data with the fourth image data.
Aspect 36. The method of aspect 35, further comprising: blending a white-balance of an edge of the third image data based on the third white-balance decision and the sixth white-balance decision; and blending a white-balance of an edge of the fourth image data based on the third white-balance decision and the sixth white-balance decision.
Aspect 37. A non-transitory computer-readable storage medium having stored thereon instructions that, when executed by at least one processor, cause the at least one processor to perform operations according to any of aspects 19 to 36.
Aspect 38. An apparatus for determining a white balance for an image, the apparatus comprising one or more means for performing operations according to any of aspects 19 to 36.
Claims (30)
- An apparatus for determining a white balance for an image, the apparatus comprising:at least one memory; andat least one processor coupled to the at least one memory and configured to:obtain statistics based on image data, the statistics being associated with at least one of color or brightness of the image data;determine, based on the statistics, a first white-balance decision using a white-balance algorithm;determine, based on the statistics, a second white-balance decision using a machine-learning model trained to determine white-balance decisions; anddetermine a third white-balance decision based on the first white-balance decision and the second white-balance decision.
- The apparatus of claim 1, wherein, to determine the third white-balance decision, the at least one processor is configured to:determine a first weight for the first white-balance decision;determine a second weight for the second white-balance decision; anddetermine the third white-balance decision based on the first weight, the first white-balance decision, the second weight, and the second white-balance decision.
- The apparatus of claim 1, wherein the at least one processor is further configured to determine at least one of a scene flag or a confidence value, wherein the scene flag is related to a scene of the image data, wherein the confidence value is related to the second white-balance decision, and wherein the third white-balance decision is further based on at least one of the scene flag or the confidence value.
- The apparatus of claim 3, wherein the scene flag comprises an indication of a determination that the scene includes white points or an indication of a determination that the scene does not include white points.
- The apparatus of claim 4, wherein, to determine the third white-balance decision, the at least one processor is configured to, responsive to the scene flag comprising the indication of the determination that the scene includes the white points, determine the third white-balance decision is the first white-balance decision.
- The apparatus of claim 4, wherein, to determine the third white-balance decision, the at least one processor is configured to, responsive to the scene flag comprising the indication of the determination that the scene does not include white points, determine a weight for the second white-balance decision based on the confidence value, and determine the third white-balance decision based on the first white-balance decision, the weight, and the second white-balance decision.
- The apparatus of claim 3, wherein the confidence value is indicative of a level of confidence with which downstream operations should use the second white-balance decision.
- The apparatus of claim 3, wherein at least one of the scene flag or the confidence value is determined using the machine-learning model.
- The apparatus of claim 3, wherein the machine-learning model is a first machine-learning model and wherein at least one of the scene flag or the confidence value is determined by a second machine-learning model.
- The apparatus of claim 1, wherein the third white-balance decision is further based on at least one of: a comparison between a white reference point and a relationship between intensities of red light to intensities of green light and a relationship between intensities of blue light to intensities of green light; a comparison between a fourth white-balance decision and at least one of the first white-balance decision, the second white-balance decision, and the third white-balance decision, wherein the fourth white-balance decision is based on spectrum data from a spectrum sensor; a correlated color temperature; or a color temperature decision.
- The apparatus of claim 1, wherein the third white-balance decision is determined based on at least one of: spectrum data from a spectrum sensor; scene-detection results; a count of faces detected; or semantic segmentation information.
- The apparatus of claim 1, wherein the third white-balance decision comprises a gain for red pixel data, a gain for green pixel data, and a gain for blue pixel data.
- The apparatus of claim 1, wherein the statistics comprise at least one of: a lux index; a relationship between intensities of red light to intensities of green light; a relationship between intensities of blue light to intensities of green light; a comparison between a white reference point and the relationship between intensities of red light to intensities of green light and the relationship between intensities of blue light to intensities of green light; a correlated color temperature; or a semantic label.
- The apparatus of claim 1, wherein the white-balance algorithm is configured to determine the first white-balance decision based on a relationship between intensities of red light to intensities of green light and a relationship between intensities of blue light to intensities of green light.
- The apparatus of claim 1, wherein the at least one processor is further configured to white-balance the image data based on the third white-balance decision.
- The apparatus of claim 1, wherein the image data comprises a first portion of an image, wherein the statistics comprise first statistics, and wherein the at least one processor is further configured to: obtain second statistics based on second image data, wherein the second image data comprises a second portion of the image and wherein the second statistics are associated with at least one of color or brightness of the second image data; determine, based on the second statistics, a fourth white-balance decision using the white-balance algorithm; determine, based on the second statistics, a fifth white-balance decision using the machine-learning model; and determine a sixth white-balance decision based on the fourth white-balance decision and the fifth white-balance decision.
- The apparatus of claim 16, wherein the at least one processor is further configured to: white-balance the first image data based on the third white-balance decision to generate third image data; white-balance the second image data based on the sixth white-balance decision to generate fourth image data; and combine the third image data with the fourth image data.
- The apparatus of claim 17, wherein the at least one processor is further configured to: blend a white-balance of an edge of the third image data based on the third white-balance decision and the sixth white-balance decision; and blend a white-balance of an edge of the fourth image data based on the third white-balance decision and the sixth white-balance decision.
- A method for determining a white balance for an image, the method comprising: obtaining statistics based on image data, the statistics being associated with at least one of color or brightness of the image data; determining, based on the statistics, a first white-balance decision using a white-balance algorithm; determining, based on the statistics, a second white-balance decision using a machine-learning model trained to determine white-balance decisions; and determining a third white-balance decision based on the first white-balance decision and the second white-balance decision.
- The method of claim 19, wherein determining the third white-balance decision comprises: determining a first weight for the first white-balance decision; determining a second weight for the second white-balance decision; and determining the third white-balance decision based on the first weight, the first white-balance decision, the second weight, and the second white-balance decision.
- The method of claim 19, further comprising determining at least one of a scene flag or a confidence value, wherein the scene flag is related to a scene of the image data, wherein the confidence value is related to the second white-balance decision, and wherein the third white-balance decision is further based on at least one of the scene flag or the confidence value.
- The method of claim 21, wherein the scene flag comprises an indication of a determination that the scene includes white points or an indication of a determination that the scene does not include white points.
- The method of claim 22, wherein determining the third white-balance decision comprises, responsive to the scene flag comprising the indication of the determination that the scene includes the white points, determining the third white-balance decision is the first white-balance decision.
- The method of claim 22, wherein determining the third white-balance decision further comprises, responsive to the scene flag comprising the indication of the determination that the scene does not include white points, determining a weight for the second white-balance decision based on the confidence value, and determining the third white-balance decision based on the first white-balance decision, the weight, and the second white-balance decision.
- The method of claim 21, wherein the confidence value is indicative of a level of confidence with which downstream operations should use the second white-balance decision.
- The method of claim 21, wherein at least one of the scene flag or the confidence value is determined using the machine-learning model.
- The method of claim 21, wherein the machine-learning model is a first machine-learning model and wherein at least one of the scene flag or the confidence value is determined by a second machine-learning model.
- The method of claim 19, wherein the third white-balance decision is further based on at least one of: a comparison between a white reference point and a relationship between intensities of red light to intensities of green light and a relationship between intensities of blue light to intensities of green light; a comparison between a fourth white-balance decision and at least one of the first white-balance decision, the second white-balance decision, and the third white-balance decision, wherein the fourth white-balance decision is based on spectrum data from a spectrum sensor; a correlated color temperature; or a color temperature decision.
- The method of claim 19, wherein the third white-balance decision is determined based on at least one of: spectrum data from a spectrum sensor; scene-detection results; a count of faces detected; or semantic segmentation information.
- The method of claim 19, wherein the third white-balance decision comprises a gain for red pixel data, a gain for green pixel data, and a gain for blue pixel data.
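The weighted fusion recited in claims 2 and 4-6 above can be sketched as follows. The `WBDecision` structure, the linear blend, and the clamping of the confidence value into a weight are illustrative assumptions for this sketch, not language from the claims; the claims only require that a decision expressed as per-channel gains be derived from the two input decisions, the scene flag, and the confidence value.

```python
from dataclasses import dataclass


@dataclass
class WBDecision:
    """A white-balance decision expressed as per-channel gains (claim 14)."""
    r_gain: float
    g_gain: float
    b_gain: float


def fuse_decisions(algo: WBDecision, ml: WBDecision,
                   scene_has_white_points: bool,
                   ml_confidence: float) -> WBDecision:
    """Combine the white-balance-algorithm decision with the ML decision.

    If the scene flag indicates white points are present, the traditional
    algorithm's decision is used directly (claim 5); otherwise the ML
    decision is blended in with a weight derived from its confidence
    value (claim 6).
    """
    if scene_has_white_points:
        return algo
    # Illustrative choice: use the confidence, clamped to [0, 1], as the weight.
    w = max(0.0, min(1.0, ml_confidence))
    return WBDecision(
        r_gain=(1.0 - w) * algo.r_gain + w * ml.r_gain,
        g_gain=(1.0 - w) * algo.g_gain + w * ml.g_gain,
        b_gain=(1.0 - w) * algo.b_gain + w * ml.b_gain,
    )
```

With a confidence of 0.5 and no white points detected, the fused gains land midway between the two decisions; with white points present, the ML decision is ignored entirely.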
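Claims 16-18 describe white-balancing two portions of an image with separate decisions and blending the white balance at the shared edge. A minimal sketch of such edge blending follows; the vertical split, the fixed-width blend band, and the linear ramp between the two gain triples are illustrative assumptions, since the claims do not fix how the blending is performed.

```python
def apply_region_gains(width, height, split_x,
                       gains_left, gains_right, band=8):
    """Build a per-pixel map of (R, G, B) white-balance gains for an image
    split into left and right portions at column split_x. Each portion uses
    its own gain triple, and the gains ramp linearly across a band of
    `band` pixels on either side of the split so the edge does not show a
    hard color transition."""
    gain_map = []
    for y in range(height):
        row = []
        for x in range(width):
            d = (x - split_x) / band          # signed distance in band units
            t = min(1.0, max(0.0, 0.5 + 0.5 * d))  # 0 = left gains, 1 = right
            row.append(tuple((1.0 - t) * gl + t * gr
                             for gl, gr in zip(gains_left, gains_right)))
        gain_map.append(row)
    return gain_map
```

Pixels well inside either portion receive that portion's gains unchanged; only pixels within the band around the split receive interpolated gains.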
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2023/117392 WO2025050333A1 (en) | 2023-09-07 | 2023-09-07 | Determining a white balance for an image |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025050333A1 (en) | 2025-03-13 |
Family
ID=94922761
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2023/117392 (WO2025050333A1, pending) | Determining a white balance for an image | 2023-09-07 | 2023-09-07 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025050333A1 (en) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20190045163A1 (en) * | 2018-10-02 | 2019-02-07 | Intel Corporation | Method and system of deep learning-based automatic white balancing |
| US20210266507A1 (en) * | 2020-02-26 | 2021-08-26 | Canon Kabushiki Kaisha | Image processing apparatus, image capture apparatus, and image processing method |
| CN113518210A (en) * | 2020-04-10 | 2021-10-19 | 华为技术有限公司 | Image automatic white balance method and device |
| CN113596427A (en) * | 2021-09-13 | 2021-11-02 | 厦门亿联网络技术股份有限公司 | Image white balance improving method and device, electronic equipment and storage medium |
| WO2023122860A1 (en) * | 2021-12-27 | 2023-07-06 | 深圳市大疆创新科技有限公司 | Image processing method and apparatus, image acquisition device, and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23951158; Country of ref document: EP; Kind code of ref document: A1 |