
WO2025053835A1 - Methods and systems for dynamically calibrating the torch strength for image capture - Google Patents

Methods and systems for dynamically calibrating the torch strength for image capture

Info

Publication number
WO2025053835A1
Authority
WO
WIPO (PCT)
Prior art keywords
torch
image
intensity
scene
torch intensity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2023/032023
Other languages
French (fr)
Inventor
Navinprashath RAMESWARI RAJAGOPAL
Yun David TANG
Ruben Manuel VELARDE
Hsin Yi CHIANG
Salma DOGHRAJI
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google LLC filed Critical Google LLC
Priority to PCT/US2023/032023
Publication of WO2025053835A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G06T 7/194: Segmentation; Edge detection involving foreground-background segmentation
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/70: Circuitry for compensating brightness variation in the scene
    • H04N 23/71: Circuitry for evaluating the brightness variation
    • H04N 23/74: Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
    • H04N 23/741: Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors

Definitions

  • image capture devices such as still and/or video cameras.
  • the image capture devices can capture images, such as images that include people, animals, landscapes, and/or objects.
  • Image capture devices may be equipped with a scene illumination component (e.g., a flash) that may be operated to add illumination to the scene.
  • a computing device may be configured to apply a smart torch metering method that determines the torch current dynamically based on the scene to be captured, so as to maintain a balance between the foreground and background illuminations.
  • a scene may be captured using a flash.
  • “flash,” as used herein, generally refers both to a scene illumination hardware component and to a mode of image capture.
  • torch as used herein may refer to a particular mode of flash.
  • a torch may be utilized for long-duration multi-frame capture exposures, making the power somewhat limited as compared to a flash.
  • the torch can produce a diffused and uniform illumination on the target object being captured.
  • a camera may dynamically adjust the torch strength prior to image capture.
  • a variance in the foreground and background luminance may be reduced. This may substantially reduce the need for multiple exposures (e.g., short and long exposure captures) and thereby lower merge artifacts. This can also reduce and/or eliminate additional storage and post-processing activity, thereby saving compute resources, especially on a mobile device.
  • a computer-implemented method includes receiving, by an image capturing device, a first image of a scene at a first torch intensity, and a second image of the scene at a second torch intensity, wherein the first image and the second image comprise respective foreground and background illuminance for the scene.
  • the method includes generating at least one candidate image of the scene for at least one intermediate torch intensity, wherein the at least one intermediate torch intensity comprises at least one value between the first torch intensity and the second torch intensity.
  • the method includes selecting a particular intermediate torch intensity associated with the at least one candidate image, wherein the particular intermediate torch intensity reduces a difference between a foreground illuminance and a background illuminance for the scene relative to the first torch intensity and the second torch intensity.
  • the method includes, responsive to the selecting, receiving, by the image capturing device, an additional image of the scene based on the selected intermediate torch intensity.
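  • For illustration, the four-step method above can be pictured as a small search loop. The following Python sketch is not the patented implementation; the frame-blending model, the foreground-mask handling, and all function names are assumptions made for this example.

```python
import numpy as np

def illuminance_gap(image, fg_mask):
    """Absolute difference between mean foreground and mean background
    luminance (a hypothetical stand-in for the AE statistics)."""
    return abs(float(image[fg_mask].mean()) - float(image[~fg_mask].mean()))

def select_torch_intensity(img_low, img_high, fg_mask, i_low, i_high, n=8):
    """Grid-search candidate torch intensities for the frame that best
    balances foreground and background illuminance (illustrative only)."""
    i_floor = max(i_low, 1e-3)  # keep the log domain finite when i_low == 0
    best_gap, best_i = float("inf"), i_floor
    for i in np.geomspace(i_floor, i_high, n):
        # Blend the two captured frames; the log-domain weight reflects that
        # illumination acts as a multiplicative factor (modeling assumption).
        w = np.log(i / i_floor) / np.log(i_high / i_floor)
        candidate = (1.0 - w) * img_low + w * img_high
        gap = illuminance_gap(candidate, fg_mask)
        if gap < best_gap:
            best_gap, best_i = gap, float(i)
    return best_i
```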
  • a system may include one or more processors.
  • the system may also include data storage, where the data storage has stored thereon computer-executable instructions that, when executed by the one or more processors, cause the system to carry out operations.
  • the operations may include receiving, by an image capturing device, a first image of a scene at a first torch intensity, and a second image of the scene at a second torch intensity, wherein the first image and the second image comprise respective foreground and background illuminance for the scene.
  • the operations may further include generating at least one candidate image of the scene for at least one intermediate torch intensity, wherein the at least one intermediate torch intensity comprises at least one value between the first torch intensity and the second torch intensity.
  • the operations may also include selecting a particular intermediate torch intensity associated with the at least one candidate image, wherein the particular intermediate torch intensity reduces a difference between a foreground illuminance and a background illuminance for the scene relative to the first torch intensity and the second torch intensity.
  • the operations may additionally include, responsive to the selecting, receiving, by the image capturing device, an additional image of the scene based on the selected intermediate torch intensity.
  • a computing device includes one or more processors and data storage that has stored thereon computer-executable instructions that, when executed by the one or more processors, cause the computing device to carry out operations.
  • the operations may include receiving, by an image capturing device, a first image of a scene at a first torch intensity, and a second image of the scene at a second torch intensity, wherein the first image and the second image comprise respective foreground and background illuminance for the scene.
  • the operations may further include generating at least one candidate image of the scene for at least one intermediate torch intensity, wherein the at least one intermediate torch intensity comprises at least one value between the first torch intensity and the second torch intensity.
  • the operations may also include selecting a particular intermediate torch intensity associated with the at least one candidate image, wherein the particular intermediate torch intensity reduces a difference between a foreground illuminance and a background illuminance for the scene relative to the first torch intensity and the second torch intensity.
  • the operations may additionally include, responsive to the selecting, receiving, by the image capturing device, an additional image of the scene based on the selected intermediate torch intensity.
  • an article of manufacture may include a non-transitory computer-readable medium having stored thereon program instructions that, upon execution by one or more processors of a computing device, cause the computing device to carry out operations.
  • the operations may include receiving, by an image capturing device, a first image of a scene at a first torch intensity, and a second image of the scene at a second torch intensity, wherein the first image and the second image comprise respective foreground and background illuminance for the scene.
  • the operations may further include generating at least one candidate image of the scene for at least one intermediate torch intensity, wherein the at least one intermediate torch intensity comprises at least one value between the first torch intensity and the second torch intensity.
  • the operations may also include selecting a particular intermediate torch intensity associated with the at least one candidate image, wherein the particular intermediate torch intensity reduces a difference between a foreground illuminance and a background illuminance for the scene relative to the first torch intensity and the second torch intensity.
  • the operations may additionally include, responsive to the selecting, receiving, by the image capturing device, an additional image of the scene based on the selected intermediate torch intensity.
  • Figure 1 is an example overview of torch strength calibration, in accordance with example embodiments.
  • Figure 2 is another example overview of torch strength calibration, in accordance with example embodiments.
  • Figure 3 is another example overview of torch strength calibration, in accordance with example embodiments.
  • Figure 4 illustrates example images for various torch strength calibrations, in accordance with example embodiments.
  • Figure 5 illustrates an example image captured with adaptive torch strength calibration, in accordance with example embodiments.
  • Figure 6 illustrates additional images for various torch strength calibrations, in accordance with example embodiments.
  • Figure 7 is a block diagram of an example computing device, in accordance with example embodiments.
  • Figure 8 is a flowchart of a method, in accordance with example embodiments.
  • Example methods, devices, and systems are described herein. It should be understood that the words “example” and “exemplary” are used herein to mean “serving as an example, instance, or illustration.” Any embodiment or feature described herein as being an “example” or “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments or features. Other embodiments can be utilized, and other changes can be made, without departing from the scope of the subject matter presented herein.
  • This application generally relates to adjusting a torch intensity for a camera.
  • the torch intensity is proportional to the luminance of the image.
  • the foreground and background luminance could be very different. This may cause the exposure to either adjust for the foreground or the background and may result in a loss of dynamic range.
  • the torch intensity is generally a fixed setting of the camera.
  • a fixed torch intensity may not be desirable due to varying depths of different portions of the scene.
  • a fixed torch intensity usually uses higher torch strength.
  • the fixed torch intensity may illuminate the subjects in the foreground much more strongly than the background, due to the decrease in illumination over distance (see the illustrative calculation below). This can cause the exposure to adjust for either the foreground or the background, and may result in a loss of dynamic range. A longer exposure without flash could instead introduce motion blur artifacts.
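  • As an illustrative calculation (assuming an idealized point source, which is not a detail from the source): illuminance falls off with distance d as E = I/d^2, so a subject at 1 m receives about sixteen times the illuminance of a background at 4 m, roughly a four-stop difference. This is the imbalance a fixed torch intensity cannot correct.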
  • a high torch current may be used when a background is very brightly illuminated.
  • a flash may be used, and this may cause the objects in the foreground (e.g., faces) to be overexposed, while some background detail may be lost.
  • a desirable goal may be to ensure that the foreground target exposure is accurately captured, while preserving as much of the background detail as possible (e.g., maintaining a bright background and a higher dynamic range). In a camera, the exposure determines how much light is collected on the sensor.
  • when a scene is captured using the flash, the background may be less illuminated (e.g., become dark), while a foreground object (e.g., a person) may be brightly illuminated (e.g., due to the flash light on the person’s face).
  • the background may still remain dimly illuminated.
  • a desirable solution involves reducing the torch strength to capture the foreground, without losing details in the background. In traditional cameras, this may be achieved by using multiple exposures and varying torch strengths and then merging these multiple images. However, this can result in an overuse of compute resources, and also introduce artifacts during the merge process.
  • the torch intensity may be dynamically adjusted to achieve a desirable balance between the illumination of the foreground and background portions of the image.
  • a desirable balance is achieved based on a first preview that corresponds to a low torch intensity (e.g., no torch) and a second preview that corresponds to a high torch intensity (e.g., a maximum allowable torch intensity for the camera).
  • a plurality of intermediate images are generated at torch intensities between the low and high torch intensities.
  • an intermediate image is selected (e.g., based on various image characteristics) to minimize a difference between a foreground illuminance and a background illuminance for the scene.
  • the automatic exposure settings for the camera can be adjusted to provide a torch intensity that corresponds to the selected intermediate image.
  • a torch current may be dynamically determined based on a scene so that an exposure for a foreground may be balanced with an exposure for a background of the scene.
  • a smart torch metering system may estimate the torch current based on factors such as a depth, a skin tone, an ambient illumination, and a dynamic range of the scene.
  • auto exposure may dynamically change the torch strength. For example, auto exposure can analyze two frames: a first frame for which no torch has been applied, and a second frame with an initial torch current. Accordingly, from the statistics without the torch and with the initial torch, an automatic exposure (AE) component may determine an impact of torch strength on different segments of the scene, and determine a new torch strength that can result in an optimal dynamic range within the exposure limitations supported by the device.
  • FIG. 1 is an example overview 100 of torch strength calibration, in accordance with example embodiments.
  • the operations may involve displaying, by a graphical user interface of a computing device, a preview image comprising a scene.
  • device 105 may include graphical user interface (GUI) 110 displaying a preview image 115.
  • device 105 may be a mobile device.
  • graphical user interface 110 may be an interface that displays a captured image.
  • graphical user interface 110 may be a live-view interface that displays a live-view preview of an image. As illustrated, the displaying of the image may involve providing a live-view preview of the image prior to a capture of the image.
  • a "live-view preview" of an image should be understood to be an image or sequence of images (e.g., video) that is generated and displayed based on an image data stream from an image sensor of an image-capture device.
  • image data may be generated by a camera's image sensor (or a portion, subset, or sampling of the pixels on the image sensor).
  • This image data is representative of the field-of-view (FOV) of the camera, and thus indicative of the image that will be captured if the user taps the camera's shutter button or initiates image capture in some other manner.
  • a camera device or other computing device may generate and display a live-view preview image based on the image data stream from the image sensor.
  • the live-view preview image can be a real-time image feed (e.g., video), such that the user is informed of the camera's FOV in real-time.
  • the image may be a frame of a plurality of frames of a video.
  • the user may open a camera system of device 105 (e.g., with a touch screen or other mechanism), and may direct the camera toward a scene, with an intent to capture an image.
  • Graphical user interface 110 may display a live-view preview of the image.
  • Device 105 may utilize one or more algorithms (e.g., an object detection algorithm, a face detection algorithm, a segmentation algorithm, and so forth) to identify one or more regions of interest in the image.
  • a user-approved facial recognition algorithm may be applied to identify one or more individuals in the image as likely objects of interest.
  • device 105 may have a history of user preferences, and may identify certain objects and/or individuals as being of high interest to the user.
  • an auto-exposure (AE) process may analyze the scene with a first torch intensity, which controls the exposure settings for image capture.
  • the first torch intensity may be a low or zero torch intensity, absent further input from the user.
  • analyzing the scene may involve determining factors such as depth information for the scene, a balance of illuminance between the foreground and the background, types of objects, reflectance properties of the objects, skin tones, shadow characteristics, and so forth.
  • the auto-exposure (AE) process may analyze the scene with a second torch intensity, which controls the exposure settings for image capture.
  • the second torch intensity may be a high torch intensity based upon input from the user.
  • device 105 may generate one or more intermediate images of the scene for intermediate torch intensities.
  • the one or more intermediate images may be computer-generated images based on varying torch intensities.
  • the one or more intermediate images of the scene may be downsampled.
  • the one or more intermediate images may be generated based on an image with no torch, an image with an initial torch applied, and the initial torch current.
  • the scene brightness may be interpolated between no torch and torch to determine an intermediate scene brightness corresponding to the intermediate torch intensity.
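  • A minimal sketch of such an interpolation, assuming the torch contribution scales roughly linearly with the LED drive current (a modeling assumption, not a detail from the source):

```python
import numpy as np

def predict_brightness(img_no_torch, img_torch_init, i_init, i_target):
    """Interpolate (or extrapolate) scene brightness at a candidate torch
    current i_target from a no-torch frame and a frame at current i_init."""
    # Torch-only contribution observed at the initial current.
    torch_only = np.clip(img_torch_init - img_no_torch, 0.0, None)
    # Scale that contribution to the candidate current and add ambient light.
    return img_no_torch + (i_target / i_init) * torch_only
```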
  • the intermediate torch intensities can include values between the first torch intensity and the second torch intensity.
  • the intermediate torch intensities can include values outside the range between the first torch intensity and the second torch intensity (e.g., lower than the first torch intensity and/or higher than the second torch intensity) based on an analysis of the image.
  • the range between the first torch intensity and the second torch intensity may be divided into subintervals to determine the intermediate torch intensities.
  • the partition may be uniform with subintervals of equal length.
  • the partition may be non-uniform with subintervals of varying length.
  • a number of intermediate torch intensities, the respective values, and so forth may be dynamically determined based on the scene, regions and/or objects of interest, ambient lighting, depth information, reflectance properties, and so forth.
  • a number of intermediate torch intensities may be utilized to evaluate the scene. Such intermediate torch intensities may be equally spaced between the minimum torch intensity and the maximum torch intensity in the log domain. A log domain is used since the illumination operates as a multiplicative factor.
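  • For example, a log-domain spacing can be produced with a geometric sequence; the intensity values below are illustrative, not from the source:

```python
import numpy as np

# Candidate torch intensities equally spaced in the log domain, since
# illumination acts as a multiplicative factor.
i_min, i_max, n = 0.05, 1.0, 6   # assumed normalized torch intensities
candidates = np.geomspace(i_min, i_max, n)
print(np.round(candidates, 3))   # [0.05  0.091 0.166 0.302 0.549 1.   ]
```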
  • the auto-exposure (AE) process may determine an adaptive torch strength. For example, the AE process may select a particular intermediate torch intensity associated with an intermediate image. The particular intermediate torch intensity may be selected to reduce a difference between a foreground illuminance and a background illuminance for the scene relative to the first torch intensity and the second torch intensity. For example, smart torch metering handles night scenes, where the foreground and background have significantly different depths, by using AE statistical data from a raw image.
  • highlight regions of the scene may be captured with a short exposure, while shadow regions of the scene may be captured with a long exposure. These images may then be fused together. Generally, this may involve additional post-capture processing, and merging frames taken with different exposures may introduce merge artifacts.
  • the adaptive torch strength (e.g., particular intermediate torch intensity) may be selected based on various image characteristics such as a depth of the scene, a skin tone of a foreground object, an ambient illumination of the scene, a dynamic range of the scene, and so forth.
  • segmentation data for an image may be utilized by example embodiments, such as segmentation masks that outline, isolate, or separate a person or other object(s) of interest within an image; e.g., by indicating an area or areas of the image occupied by a foreground object or objects in a scene, and an area or areas of the image corresponding to the scene’s background.
  • Depth information may take various forms.
  • the depth information could be a depth map, which is a coordinate mapping or another data structure that stores information relating to the distance of the surfaces of objects in a scene from a certain viewpoint (e.g., from a camera or mobile device).
  • a depth map for an image captured by a camera can specify information relating to the distance from the camera to surfaces of objects captured in the image; e.g., on a pixel-by-pixel (or other) basis or a subset or sampling of pixels in the image.
  • Various techniques may be used to generate depth information for an image. In some cases, depth information may be generated for the entire image (e.g., for the entire image frame). In other cases, depth information may only be generated for a certain area or areas in an image. For instance, depth information may only be generated when image segmentation is used to identify one or more objects in an image. Depth information may be determined specifically for the identified object or objects.
  • stereo imaging may be utilized to generate a depth map.
  • a depth map may be obtained by correlating left and right stereoscopic images to match pixels between the stereoscopic images. The pixels may be matched by determining which pixels are the most similar between the left and right images. Pixels correlated between the left and right stereoscopic images may then be used to determine depth information. For example, a disparity between the location of the pixel in the left image and the location of the corresponding pixel in the right image may be used to calculate the depth information using binocular disparity techniques.
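  • A worked example of the binocular disparity relation, assuming a rectified stereo pair and a pinhole camera model (the numbers are illustrative):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Binocular disparity relation: depth = focal_length * baseline / disparity.
    focal_px: focal length in pixels; baseline_m: camera separation in meters;
    disparity_px: horizontal shift of the matched pixel between left and right."""
    if disparity_px <= 0:
        return float("inf")  # zero disparity corresponds to a point at infinity
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: f = 1400 px, 12 mm baseline, 8 px disparity -> 2.1 m.
print(depth_from_disparity(1400, 0.012, 8))
```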
  • An image may be produced that contains depth information for a scene, such as information related to how deep or how far away objects in the scene are in relation to a camera's viewpoint. Such images are useful in perceptual computing for applications such as gesture tracking and object recognition, for example.
  • depth maps can be estimated from images taken by one camera that uses dual pixels on light-detecting sensors; e.g., a camera that provides autofocus functionality.
  • a dual pixel of an image may include a pixel that has been split into two parts, such as a left pixel and a right pixel. Then, a dual pixel image is an image that includes dual pixels.
  • neural networks may be utilized to predict depth maps for an image.
  • depth information may be used, for example, when the image capturing device is in a zoom-in mode.
  • a detected foreground object may generally appear closer to the image capturing device but actually is farther, and the torch strength may be adjusted based on the depth information (e.g., actual depth or distance) as determined by the image capturing device.
  • the torch strength may be increased from a level used for a non-zoom mode to avoid underexposure.
  • depth information may be used, for example, when the image capturing device is in a zoom-out mode.
  • the detected foreground object may appear farther from the image capturing device but is actually closer, and the torch may be adjusted based on the depth information (e.g., actual depth or distance) as determined by the image capturing device.
  • the torch strength may be decreased from a level used for a non-zoom mode to avoid target overexposure.
  • skin tone information may be used.
  • the torch strength may be adjusted to achieve a balanced illuminance for multiple human subjects with different skin tones (e.g., maintaining face luminance values within a range of designed thresholds), so that, for example, a person with a darker skin tone is not underexposed and a person with a lighter skin tone is not overexposed.
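  • One way such a constraint could be checked, with placeholder thresholds (the source does not specify the luminance window):

```python
def faces_within_luminance_window(face_luminances, lo=0.25, hi=0.75):
    """Return True if every detected face's mean luminance (normalized 0..1)
    falls inside a designed window, so that darker skin tones are not
    underexposed and lighter skin tones are not overexposed."""
    return all(lo <= y <= hi for y in face_luminances)

# A candidate torch intensity could be rejected if this check fails:
print(faces_within_luminance_window([0.31, 0.62]))  # True
print(faces_within_luminance_window([0.12, 0.62]))  # False: a face is too dark
```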
  • an object may have different light reflection characteristics that may depend, for example, on a surface geometry, color, and/or a material of the object.
  • a surface of an object may be composed of a plurality of materials, thereby creating complex light reflection characteristics.
  • a diffuse map (e.g., an image of an object that is representative of its diffuse reflection) may be used.
  • Diffuse reflection is a type of surface reflectance where incident light is reflected and scattered into a plurality of directions (e.g., reflection by a rough surface).
  • the diffuse map may be indexed by a set of color values that are indicative of a texture (e.g., color and pattern) of the object.
  • a specular map (e.g., an image of an object that is representative of its specular reflection) may be used.
  • Specular reflection is a type of surface reflectance where incident light is reflected in a single direction (e.g., reflection by a smooth and/or shiny surface).
  • the specular map represents a shininess characteristic of a surface and its highlight color.
  • the adaptive torch strength (e.g., the particular intermediate torch intensity) may be selected based on a dynamic range of the scene.
  • a high-dynamic-range (HDR) display for an image can extend a range of user experience when viewing the image.
  • an image of a person in the dark may have a high composition of dark colors with a low luminance value.
  • a ratio of respective luminance values may have a high dynamic range, for example, 1:1,000,000.
  • the bit depth of the image sensor may vary (e.g., 10-12 bit depth).
  • a higher bit depth may not be conducive to efficient image processing (e.g., may not add value) due to readout noise and shot noise.
  • the particular intermediate torch intensity may be selected to achieve a signal-to-noise ratio (SNR) greater than a threshold SNR.
  • the threshold SNR may be determined dynamically based on an analysis of the scene, including an environmental illumination, objects of interest, depth characteristics, and so forth.
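  • A rough sketch of such an SNR gate; the mean-over-standard-deviation estimate and the fixed 20 dB threshold are stand-ins for the richer statistics an AE pipeline would use:

```python
import numpy as np

def estimate_snr_db(region):
    """Crude SNR estimate for an image region, in decibels (real AE
    pipelines use sensor noise models rather than this simple ratio)."""
    mu, sigma = float(region.mean()), float(region.std())
    return 20.0 * np.log10(mu / max(sigma, 1e-6))

def candidate_acceptable(candidate, bg_mask, threshold_db=20.0):
    """Keep only candidate frames whose background SNR clears a threshold;
    the source notes the threshold may itself be chosen dynamically."""
    return estimate_snr_db(candidate[bg_mask]) >= threshold_db
```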
  • the particular intermediate torch intensity may be selected to reduce a lux value for the scene.
  • an LED current may be configured based on the particular intermediate torch intensity to minimize the lux value.
  • the LED current for the torch intensity is determined for illumination of foreground objects. Accordingly, lowering the torch intensity may achieve better SNR on the background with a long exposure (e.g., when using the Night Sight mode) and by merging multiple frames.
  • the scene brightness threshold may be 35-40 lux. Upon a determination that the scene brightness is below the scene brightness threshold, a different torch current may be applied. In some embodiments, the scene brightness threshold may be used for triggering the adaptive torch mode to achieve both an optimal scene and foreground exposure. In some embodiments, the selection of the adaptive torch current may be based on achieving a balance between power saving objectives, and capturing an image with optimal image characteristics.
  • the particular intermediate torch intensity may be selected to maintain the particular intermediate torch intensity to be greater than a threshold torch intensity.
  • device 105 may be configured to apply a minimum torch current and the particular intermediate torch intensity may be selected to be greater than the minimum torch current.
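  • Combining the brightness-threshold trigger with the minimum-torch clamp might look like the following; every constant besides the roughly 35-40 lux threshold is a placeholder, not a value from the source:

```python
def choose_torch_current(scene_lux, candidate_ma, min_ma=20.0, max_ma=500.0,
                         lux_threshold=40.0):
    """Gate the adaptive torch on a scene-brightness threshold and clamp
    the result to the device's supported current range (illustrative)."""
    if scene_lux >= lux_threshold:
        return 0.0                        # scene bright enough: torch off
    return min(max(candidate_ma, min_ma), max_ma)

print(choose_torch_current(scene_lux=12.0, candidate_ma=8.0))    # 20.0 (clamped)
print(choose_torch_current(scene_lux=80.0, candidate_ma=150.0))  # 0.0
```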
  • the adaptive torch strength may be applied to the flash illumination control module.
  • one or more frames may be captured using a torch with the adaptive torch strength.
  • an image 150 of a scene with the adaptive torch strength may be captured by pressing a shutter button for a camera.
  • Figure 2 is another example overview 200 of torch strength calibration, in accordance with example embodiments. Some components of Figure 2 may share one or more aspects with similar components illustrated in Figure 1. For example, device 205 may share one or more aspects in common with device 105, GUI 210 may share one or more aspects in common with GUI 110, preview image 215 may share one or more aspects in common with preview image 115, and image with adaptive torch 265 may share one or more aspects in common with image with adaptive torch 150.
  • auto exposure (AE) of device 205 analyzes a scene depicted in preview image 215 without torch/flash, and stores the analysis.
  • a user may manually enable the torch.
  • the user may press a shutter to capture the image.
  • device 205 may apply an initial torch strength to a flash illumination control module.
  • the AE may analyze the scene (e.g., a region of interest (ROI)) for capture with the torch in an “ON” state.
  • auto focus may generate a depth map for a target distance on the ROI.
  • AE may determine an adaptive torch strength after the AE convergence. For example, AE may use the analysis of the scene and the depth map to generate intermediate images at intermediate torch strengths (e.g., between the image without the torch and the image with the torch on). Further analysis of the intermediate images may be performed to identify an intermediate image with an optimal balance between the foreground and background illumination. Accordingly, AE may determine an adaptive torch strength to be the torch intensity corresponding to the identified intermediate image.
  • device 205 may apply the adaptive torch strength (e.g., a smart torch strength) to the flash illumination control module.
  • device 205 may capture frames with the adaptive torch strength.
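  • The Figure 2 walkthrough, condensed into pseudo-driver calls; every method name on `camera` below is hypothetical, since real camera HALs expose different interfaces:

```python
def adaptive_torch_capture(camera, initial_ma=150.0):
    """End-to-end flow paraphrasing the Figure 2 walkthrough (illustrative)."""
    stats_off = camera.analyze_scene(torch_ma=0.0)        # AE analyzes, torch off
    camera.set_torch(initial_ma)                          # initial torch strength
    stats_on = camera.analyze_scene(torch_ma=initial_ma)  # AE analyzes ROI, torch on
    depth_map = camera.autofocus_depth_map()              # AF depth for target ROI
    adaptive_ma = camera.select_adaptive_torch(stats_off, stats_on, depth_map)
    camera.set_torch(adaptive_ma)                         # apply smart torch strength
    return camera.capture_frames()                        # capture final frames
```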
  • Figure 3 is another example overview 300 of torch strength calibration, in accordance with example embodiments.
  • a brightness control component may output an initial torch current to the flash illumination control module.
  • the flash illumination control module applies the initial torch value.
  • the torch value may be a strength of the torch current.
  • the brightness control component determines AE convergence subsequent to the initial torch current being applied.
  • the camera may be configured with an auto white balance (AWB) function, which adjusts white balance automatically according to recognized scenes.
  • the white balance function of the camera may be set to AWB by default. Accordingly, the camera may automatically adjust the color of photographs to look natural in various scenes.
  • the automatic white balance component and the brightness control component obtain the scene statistics and initiate a metering process.
  • the brightness control component determines an optimal torch current.
  • the optimal torch current is output to the flash illumination control module.
  • the flash illumination control module applies the optimal torch value.
  • FIG. 4 illustrates example images 400 for various torch strength calibrations, in accordance with example embodiments.
  • Image 405 depicts an object of interest in a foreground illumination 405B against a background illumination 405A.
  • Image 405 is captured with a default torch strength.
  • the object of interest in the foreground (e.g., a person) is brightly illuminated, while the background is less brightly illuminated.
  • Image 410 depicts an object of interest in a foreground illumination 410B against a background illumination 410A.
  • Image 410 is captured with zero torch strength.
  • the object of interest in the foreground illumination 410B (e.g., a person) is dimly illuminated (e.g., dark).
  • depending on the ambient light, the background illumination 410A may appear brightly illuminated relative to the foreground, or may be dimly illuminated as well.
  • Image 415 depicts an object of interest in a foreground illumination 415B against a background illumination 415A.
  • Image 415 is captured with an adaptive torch strength.
  • the object of interest in the foreground (e.g., a person) is well illuminated, and the background is also well illuminated, with a balance between the foreground illumination 415B and the background illumination 415A.
  • Figure 5 illustrates an example image 500 captured with torch strength calibration, in accordance with example embodiments.
  • the object of interest in the foreground (e.g., a person) is well illuminated, and the background is also well illuminated, with a balance between the foreground illumination and the background illumination.
  • FIG. 6 illustrates additional images 600 for various torch strength calibrations, in accordance with example embodiments.
  • Image 605 depicts an object of interest in a foreground illumination 605B against a background illumination 605A.
  • Image 605 is captured with a night sight setting and a default camera mode. For example, some device cameras are configured to capture images in low light or darker settings (e.g., at night). The night sight mode enables the camera to capture an image with a longer exposure time. Detailed low-light images may be captured without a need for a flash and/or a tripod.
  • the front of a building in the background is brightly illuminated relative to the object of interest in the foreground (e.g., a person).
  • Image 610 depicts an object of interest in a foreground illumination 610B against a background illumination 610A.
  • Image 610 is captured with an adaptive torch strength and in night sight mode.
  • the object of interest in the foreground (e.g., a person) is well illuminated, and the background is also well illuminated, with a balance between the foreground illumination 610B and the background illumination 610A.
  • Image 620 depicts an object of interest in a foreground illumination 620B against a background illumination 620A.
  • Image 620 is captured at a torch strength for HDR mode and a default camera mode.
  • in HDR mode, the camera is configured to widen the exposure range to capture more detail in a scene (e.g., capturing details in both bright and dark areas). For example, for an image captured in daylight, HDR mode enables details to be captured for areas that are illuminated by the sun as well as the shaded portions of the scene.
  • the object of interest in the foreground (e.g., a person) is well illuminated, and the background is also well illuminated. However, the face of the person has a reflective luminance and some of the clothing (e.g., a neck scarf) has more illumination, making the image appear less color balanced.
  • FIG. 7 is a block diagram of an example computing device 700, in accordance with example embodiments.
  • computing device 700 shown in Figure 7 can be configured to perform at least one function described herein, including method 800.
  • Computing device 700 may include a user interface module 701, a network communications module 702, one or more processors 703, data storage 704, one or more cameras 718, one or more sensors 720, and power system 722, all of which may be linked together via a system bus, network, or other connection mechanism 705.
  • User interface module 701 can be operable to send data to and/or receive data from external user input/output devices.
  • user interface module 701 can be configured to send and/or receive data to and/or from user input devices such as a touch screen, a computer mouse, a keyboard, a keypad, a touch pad, a trackball, a joystick, a voice recognition module, and/or other similar devices.
  • User interface module 701 can also be configured to provide output to user display devices, such as one or more cathode ray tubes (CRT), liquid crystal displays, light emitting diodes (LEDs), displays using digital light processing (DLP) technology, printers, light bulbs, and/or other similar devices, either now known or later developed.
  • User interface module 701 can also be configured to generate audible outputs, with devices such as a speaker, speaker jack, audio output port, audio output device, earphones, and/or other similar devices. User interface module 701 can further be configured with one or more haptic devices that can generate haptic outputs, such as vibrations and/or other outputs detectable by touch and/or physical contact with computing device 700. In some examples, user interface module 701 can be used to provide a graphical user interface (GUI) for utilizing computing device 700.
  • Network communications module 702 can include one or more devices that provide one or more wireless interfaces 707 and/or one or more wireline interfaces 708 that are configurable to communicate via a network.
  • Wireless interface(s) 707 can include one or more wireless transmitters, receivers, and/or transceivers, such as a Bluetooth™ transceiver, a Zigbee® transceiver, a Wi-Fi™ transceiver, a WiMAX™ transceiver, an LTE™ transceiver, and/or other type of wireless transceiver configurable to communicate via a wireless network.
  • Wireline interface(s) 708 can include one or more wireline transmitters, receivers, and/or transceivers, such as an Ethernet transceiver, a Universal Serial Bus (USB) transceiver, or similar transceiver configurable to communicate via a twisted pair wire, a coaxial cable, a fiberoptic link, or a similar physical connection to a wireline network.
  • network communications module 702 can be configured to provide reliable, secured, and/or authenticated communications.
  • in some examples, information for facilitating reliable communications (e.g., guaranteed message delivery) can be provided, perhaps as part of a message header and/or footer (e.g., packet/message sequencing information, encapsulation headers and/or footers, size/time information, and transmission verification information such as cyclic redundancy check (CRC) and/or parity check values).
  • Communications can be made secure (e.g., be encoded or encrypted) and/or decrypted/decoded using one or more cryptographic protocols and/or algorithms, such as, but not limited to, Data Encryption Standard (DES), Advanced Encryption Standard (AES), a Rivest-Shamir-Adleman (RSA) algorithm, a Diffie-Hellman algorithm, a secure sockets protocol such as Secure Sockets Layer (SSL) or Transport Layer Security (TLS), and/or Digital Signature Algorithm (DSA).
  • Other cryptographic protocols and/or algorithms can be used as well, or in addition to those listed herein, to secure (and then decrypt/decode) communications.
  • One or more processors 703 can include one or more general purpose processors (e.g., central processing unit (CPU), etc.), and/or one or more special purpose processors (e.g., digital signal processors, tensor processing units (TPUs), graphics processing units (GPUs), application specific integrated circuits, etc.).
  • processors 703 can be configured to execute computer-readable instructions 706 that are contained in data storage 704 and/or other instructions as described herein.
  • Data storage 704 can include one or more non-transitory computer-readable storage media that can be read and/or accessed by at least one of one or more processors 703.
  • the one or more computer-readable storage media can include volatile and/or non-volatile storage components, such as optical, magnetic, organic or other memory or disc storage, which can be integrated in whole or in part with at least one of one or more processors 703.
  • data storage 704 can be implemented using a single physical device (e.g., one optical, magnetic, organic or other memory or disc storage unit), while in other examples, data storage 704 can be implemented using two or more physical devices.
  • Data storage 704 can include computer-readable instructions 706 and perhaps additional data.
  • data storage 704 can include storage required to perform at least part of the herein-described methods, scenarios, and techniques and/or at least part of the functionality of the herein-described devices and networks.
  • computer-readable instructions 706 can include instructions that, when executed by processor(s) 703, enable computing device 700 to provide for some or all of the functionality described herein.
  • data storage 704 may store a first image of a scene at a first torch intensity, and a second image of the scene at a second torch intensity.
  • Data storage 704 may also store one or more candidate images of the scene at intermediate torch intensities.
  • computer-readable instructions 706 can include instructions that, when executed by processor(s) 703, enable computing device 700 to carry out functions comprising: receiving, by an image capturing device, a first image of a scene at a first torch intensity, and a second image of the scene at a second torch intensity, wherein the first image and the second image comprise respective foreground and background illuminance for the scene; generating at least one candidate image of the scene for at least one intermediate torch intensity, wherein the at least one intermediate torch intensity comprises at least one value between the first torch intensity and the second torch intensity; selecting a particular intermediate torch intensity associated with the at least one candidate image, wherein the particular intermediate torch intensity reduces a difference between a foreground illuminance and a background illuminance for the scene relative to the first torch intensity and the second torch intensity; and responsive to the selecting, receiving, by the image capturing device, an additional image of the scene based on the selected intermediate torch intensity.
  • the instructions may further involve instructions for capturing the image subsequent to the adjusting of the automatic exposure setting.
  • the instructions for the selecting of the particular intermediate torch intensity may be configured to achieve a signal-to-noise ratio (SNR) greater than a threshold SNR.
  • the instructions for the selecting of the particular intermediate torch intensity may be configured to reduce a lux value for the scene.
  • the instructions for the selecting of the particular intermediate torch intensity may be configured to maintain the particular intermediate torch intensity to be greater than a threshold torch intensity.
  • computing device 700 can include torch strength module 712.
  • Torch strength module 712 can be configured to control a torch strength to be applied for image capture by one or more cameras 718.
  • torch strength module 712 may receive an intermediate torch intensity from data storage 704, and configure the one or more cameras 718 to capture an image using the intermediate torch intensity.
  • computing device 700 can include one or more cameras 718.
  • Camera(s) 718 can include one or more image capture devices, such as still and/or video cameras, equipped to capture light and record the captured light in one or more images; that is, camera(s) 718 can generate image(s) of captured light.
  • the one or more images can be one or more still images and/or one or more images utilized in video imagery.
  • Camera(s) 718 can capture light and/or electromagnetic radiation emitted as visible light, infrared radiation, ultraviolet light, and/or as one or more other frequencies of light.
  • computing device 700 can include one or more sensors 720.
  • Sensors 720 can be configured to measure conditions within computing device 700 and/or conditions in an environment of computing device 700 and provide data about these conditions.
  • sensors 720 can include one or more of: (i) sensors for obtaining data about computing device 700, such as, but not limited to, a thermometer for measuring a temperature of computing device 700, a battery sensor for measuring power of one or more batteries of power system 722, and/or other sensors measuring conditions of computing device 700; (ii) an identification sensor to identify other objects and/or devices, such as, but not limited to, a Radio Frequency Identification (RFID) reader, proximity sensor, one-dimensional barcode reader, two-dimensional barcode (e.g., Quick Response (QR) code) reader, and a laser tracker, where the identification sensors can be configured to read identifiers, such as RFID tags, barcodes, QR codes, and/or other devices and/or objects configured to be read and provide at least identifying information; and (iii) sensors to measure locations and/or movements of computing device 700.
  • Power system 722 can include one or more batteries 724 and/or one or more external power interfaces 726 for providing electrical power to computing device 700.
  • Each battery of the one or more batteries 724 can, when electrically coupled to the computing device 700, act as a source of stored electrical power for computing device 700.
  • One or more batteries 724 of power system 722 can be configured to be portable. Some or all of one or more batteries 724 can be readily removable from computing device 700. In other examples, some or all of one or more batteries 724 can be internal to computing device 700, and so may not be readily removable from computing device 700. Some or all of one or more batteries 724 can be rechargeable.
  • a rechargeable battery can be recharged via a wired connection between the battery and another power supply, such as by one or more power supplies that are external to computing device 700 and connected to computing device 700 via the one or more external power interfaces.
  • one or more batteries 724 can be non-rechargeable batteries.
  • One or more external power interfaces 726 of power system 722 can include one or more wired-power interfaces, such as a USB cable and/or a power cord, that enable wired electrical power connections to one or more power supplies that are external to computing device 700.
  • One or more external power interfaces 726 can include one or more wireless power interfaces, such as a Qi wireless charger, that enable wireless electrical power connections, such as via a Qi wireless charger, to one or more external power supplies.
  • computing device 700 can draw electrical power from the external power source via the established electrical power connection.
  • power system 722 can include related sensors, such as battery sensors associated with the one or more batteries or other types of electrical power sensors.
  • Figure 8 is a flowchart of a method, in accordance with example embodiments.
  • Method 800 may include various blocks or steps. The blocks or steps may be carried out individually or in combination. The blocks or steps may be carried out in any order and/or in series or in parallel. Further, blocks or steps may be omitted or added to method 800.
  • the blocks of method 800 may be carried out by various elements of computing device 700 as illustrated and described in reference to Figure 7.
  • Block 810 involves receiving, by an image capturing device, a first image of a scene at a first torch intensity, and a second image of the scene at a second torch intensity, wherein the first image and the second image comprise respective foreground and background illuminance for the scene.
  • Block 820 involves generating at least one candidate image of the scene for at least one intermediate torch intensity, wherein the at least one intermediate torch intensity comprises at least one value between the first torch intensity and the second torch intensity.
  • Block 830 involves selecting a particular intermediate torch intensity associated with the at least one candidate image, wherein the particular intermediate torch intensity reduces a difference between a foreground illuminance and a background illuminance for the scene relative to the first torch intensity and the second torch intensity.
  • Block 840 involves, responsive to the selecting, receiving, by the image capturing device, an additional image of the scene based on the selected intermediate torch intensity.
  • the selecting of the particular intermediate torch intensity may be based on one or more of a depth of the scene, a skin tone of a foreground object, an ambient illumination of the scene, or a dynamic range of the scene.
  • Some embodiments involve, responsive to the selecting, adjusting an exposure setting of the image capturing device based on the selected intermediate torch intensity. Such embodiments involve capturing the image at the selected intermediate torch intensity.
  • the first torch intensity may correspond to a low torch intensity and the second torch intensity may correspond to a high torch intensity.
  • the at least one candidate image may be downsampled.
  • the selecting of the particular intermediate torch intensity may be performed to achieve a signal-to-noise ratio (SNR) greater than a threshold SNR.
  • the selecting of the particular intermediate torch intensity may be performed to reduce a lux value for the scene.
  • the selecting of the particular intermediate torch intensity may be performed to maintain the particular intermediate torch intensity to be greater than a threshold torch intensity.
  • the generating of the at least one candidate image involves receiving the at least one candidate image by the image capturing device operating at the at least one intermediate torch intensity.
  • the image capturing device may be a component of a computing device.
  • the computing device may be a mobile device.
  • the particular arrangements shown in the Figures should not be viewed as limiting. It should be understood that other embodiments may include more or less of each element shown in a given Figure. Further, some of the illustrated elements may be combined or omitted. Yet further, an illustrative embodiment may include elements that are not illustrated in the Figures.
  • a step or block that represents a processing of information can correspond to circuitry that can be configured to perform the specific logical functions of a herein-described method or technique.
  • a step or block that represents a processing of information can correspond to a module, a segment, or a portion of program code (including related data).
  • the program code can include one or more instructions executable by a processor for implementing specific logical functions or actions in the method or technique.
  • the program code and/or related data can be stored on any type of computer readable medium such as a storage device including a disk, hard drive, or other storage medium.
  • the computer readable medium can also include non-transitory computer readable media such as computer-readable media that store data for short periods of time like register memory, processor cache, and random access memory (RAM).
  • the computer readable media can also include non-transitory computer readable media that store program code and/or data for longer periods.
  • the computer readable media may include secondary or persistent long-term storage, like read only memory (ROM), optical or magnetic disks, compact disc read only memory (CD-ROM), for example.
  • the computer readable media can also be any other volatile or non-volatile storage systems.
  • a computer readable medium can be considered a computer readable storage medium, for example, or a tangible storage device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

An example method includes receiving, by an image capturing device, first and second images of a scene at respective first and second torch intensities. The first and the second images comprise respective foreground and background illuminance for the scene. The method includes generating at least one candidate image of the scene for at least one intermediate torch intensity. The at least one intermediate torch intensity comprises at least one value between the first torch intensity and the second torch intensity. The method includes selecting a particular intermediate torch intensity associated with the at least one candidate image. The particular intermediate torch intensity reduces a difference between a foreground illuminance and a background illuminance for the scene relative to the first and second torch intensities. The method includes, responsive to the selecting, receiving, by the image capturing device, an additional image of the scene based on the selected intermediate torch intensity.

Description

METHODS AND SYSTEMS FOR DYNAMICALLY CALIBRATING THE TORCH STRENGTH FOR IMAGE CAPTURE
BACKGROUND
[0001] Many modern computing devices, including mobile phones, personal computers, and tablets, include image capture devices, such as still and/or video cameras. The image capture devices can capture images, such as images that include people, animals, landscapes, and/or objects. Image capture devices may be equipped with a scene illumination component (e.g., a flash) that may be operated to add illumination to the scene.
SUMMARY
[0002] In one aspect, a computing device may be configured to apply a smart torch metering method that determines the torch current dynamically based on the scene to be captured, so as to maintain a balance between the foreground and background illuminations. Generally, for low-light photography, a scene may be captured using a flash. The term “flash,” as used herein, generally refers both to a hardware scene illumination component and to a mode of image capture. The term “torch” as used herein may refer to a particular mode of flash. For example, a torch may be utilized for long-duration multi-frame capture exposures, making the power somewhat limited as compared to a flash. Also, for example, the torch can produce a diffused and uniform illumination on the target object being captured.
[0003] Generally speaking, during smart flash or torch capture based on the foreground and background illumination, a camera’s auto exposure component may dynamically adjust the torch strength prior to image capture. As described herein, the variance between the foreground and background luminance may be reduced. This may substantially reduce the need for multiple exposures (e.g., short and long exposure captures) and thereby lower merge artifacts. This can also reduce and/or eliminate additional storage and post-processing activity, thereby saving compute resources, especially on a mobile device.
[0004] In a first aspect, a computer-implemented method is provided. The method includes receiving, by an image capturing device, a first image of a scene at a first torch intensity, and a second image of the scene at a second torch intensity, wherein the first image and the second image comprise respective foreground and background illuminance for the scene. The method includes generating at least one candidate image of the scene for at least one intermediate torch intensity, wherein the at least one intermediate torch intensity comprises at least one value between the first torch intensity and the second torch intensity. The method includes selecting a particular intermediate torch intensity associated with the at least one candidate image, wherein the particular intermediate torch intensity reduces a difference between a foreground illuminance and a background illuminance for the scene relative to the first torch intensity and the second torch intensity. The method includes, responsive to the selecting, receiving, by the image capturing device, an additional image of the scene based on the selected intermediate torch intensity.
[0005] In a second aspect, a system is provided. The system may include one or more processors. The system may also include data storage, where the data storage has stored thereon computer-executable instructions that, when executed by the one or more processors, cause the system to carry out operations. The operations may include receiving, by an image capturing device, a first image of a scene at a first torch intensity, and a second image of the scene at a second torch intensity, wherein the first image and the second image comprise respective foreground and background illuminance for the scene. The operations may further include generating at least one candidate image of the scene for at least one intermediate torch intensity, wherein the at least one intermediate torch intensity comprises at least one value between the first torch intensity and the second torch intensity. The operations may also include selecting a particular intermediate torch intensity associated with the at least one candidate image, wherein the particular intermediate torch intensity reduces a difference between a foreground illuminance and a background illuminance for the scene relative to the first torch intensity and the second torch intensity. The operations may additionally include, responsive to the selecting, receiving, by the image capturing device, an additional image of the scene based on the selected intermediate torch intensity.
[0006] In a third aspect, a computing device is provided. The device includes one or more processors and data storage that has stored thereon computer-executable instructions that, when executed by the one or more processors, cause the computing device to carry out operations. The operations may include receiving, by an image capturing device, a first image of a scene at a first torch intensity, and a second image of the scene at a second torch intensity, wherein the first image and the second image comprise respective foreground and background illuminance for the scene. The operations may further include generating at least one candidate image of the scene for at least one intermediate torch intensity, wherein the at least one intermediate torch intensity comprises at least one value between the first torch intensity and the second torch intensity. The operations may also include selecting a particular intermediate torch intensity associated with the at least one candidate image, wherein the particular intermediate torch intensity reduces a difference between a foreground illuminance and a background illuminance for the scene relative to the first torch intensity and the second torch intensity. The operations may additionally include, responsive to the selecting, receiving, by the image capturing device, an additional image of the scene based on the selected intermediate torch intensity.
[0007] In a fourth aspect, an article of manufacture is provided. The article of manufacture may include a non-transitory computer-readable medium having stored thereon program instructions that, upon execution by one or more processors of a computing device, cause the computing device to carry out operations. The operations may include receiving, by an image capturing device, a first image of a scene at a first torch intensity, and a second image of the scene at a second torch intensity, wherein the first image and the second image comprise respective foreground and background illuminance for the scene. The operations may further include generating at least one candidate image of the scene for at least one intermediate torch intensity, wherein the at least one intermediate torch intensity comprises at least one value between the first torch intensity and the second torch intensity. The operations may also include selecting a particular intermediate torch intensity associated with the at least one candidate image, wherein the particular intermediate torch intensity reduces a difference between a foreground illuminance and a background illuminance for the scene relative to the first torch intensity and the second torch intensity. The operations may additionally include, responsive to the selecting, receiving, by the image capturing device, an additional image of the scene based on the selected intermediate torch intensity.
[0008] The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the figures and the following detailed description and the accompanying drawings.
BRIEF DESCRIPTION OF THE FIGURES
[0009] Figure 1 is an example overview of torch strength calibration, in accordance with example embodiments.
[0010] Figure 2 is another example overview of torch strength calibration, in accordance with example embodiments.
[0011] Figure 3 is another example overview of torch strength calibration, in accordance with example embodiments.
[0012] Figure 4 illustrates example images for various torch strength calibrations, in accordance with example embodiments.
[0013] Figure 5 illustrates an example image captured with adaptive torch strength calibration, in accordance with example embodiments.
[0014] Figure 6 illustrates additional images for various torch strength calibrations, in accordance with example embodiments.
[0015] Figure 7 is a block diagram of an example computing device, in accordance with example embodiments.
[0016] Figure 8 is a flowchart of a method, in accordance with example embodiments.
DETAILED DESCRIPTION
[0017] Example methods, devices, and systems are described herein. It should be understood that the words “example” and “exemplary” are used herein to mean “serving as an example, instance, or illustration.” Any embodiment or feature described herein as being an “example” or “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments or features. Other embodiments can be utilized, and other changes can be made, without departing from the scope of the subject matter presented herein.
[0018] Thus, the example embodiments described herein are not meant to be limiting. Aspects of the present disclosure, as generally described herein, and illustrated in the figures, can be arranged, substituted, combined, separated, and designed in a wide variety of different configurations, all of which are contemplated herein.
[0019] Further, unless context suggests otherwise, the features illustrated in each of the figures may be used in combination with one another. Thus, the figures should be generally viewed as component aspects of one or more overall embodiments, with the understanding that not all illustrated features are necessary for each embodiment.
Overview
[0020] This application generally relates to adjusting a torch intensity for a camera. The torch intensity is proportional to the luminance of the image. For a particular scene, there may be a large difference between the illumination of the foreground and background portions of an image. For example, depending on the ambient illumination (illumination other than the triggered torch strength), the foreground and background luminance could be very different. This may cause the exposure to be adjusted for either the foreground or the background, and may result in a loss of dynamic range. For example, when capturing a scene at a low ambient illumination, either a flash setting or a long exposure setting is triggered to obtain an acceptable illumination for the foreground and background portions of the scene. For the flash setting, the torch intensity is generally a fixed setting of the camera. However, a fixed torch intensity may not be desirable due to the varying depths of different portions of the scene. For example, a fixed torch intensity typically corresponds to a relatively high torch strength. Even when the ambient illumination is uniform, the fixed torch intensity may illuminate subjects in the foreground much more strongly than the background, due to the decrease in illumination over distance. This can cause the exposure to be adjusted for either the foreground or the background, and may result in a loss of dynamic range. In the case of a longer exposure without flash, the longer exposure time could result in motion blur artifacts.
[0021] For example, a high torch current may be used when a background is very brightly illuminated. However, in traditional photography, a flash may be used, and this may cause the objects in the foreground (e.g., faces) to be overexposed, while some background detail may be lost. A desirable goal may be to ensure that the foreground target exposure is accurately captured, while preserving as much of the background detail as possible, for example, maintaining a bright background and a higher dynamic range. In a camera, the exposure determines how much light is collected on the sensor. Accordingly, in traditional photography, when a scene is captured using the flash, the background may be less illuminated (e.g., become dark), while a foreground object (e.g., a person) may be brightly illuminated (e.g., due to the flash light on the person’s face). In such situations, it may be desirable to lower the exposure so the person is illuminated appropriately. However, the background may still remain dimly illuminated. A desirable solution involves reducing the torch strength to capture the foreground, without losing details in the background. In traditional cameras, this may be achieved by using multiple exposures and varying torch strengths and then merging these multiple images. However, this can result in an overuse of compute resources, and can also introduce artifacts during the merge process.
[0022] Accordingly, there is a need to achieve a balance between the foreground and background illumination with a single exposure. For example, the torch intensity may be dynamically adjusted to achieve a desirable balance between the illumination of the foreground and background portions of the image. As described in this disclosure, such a desirable balance is achieved based on a first preview that corresponds to a low torch intensity (e.g., no torch) and a second preview that corresponds to a high torch intensity (e.g., a maximum allowable torch intensity for the camera). A plurality of intermediate images are generated at torch intensities between the low and high torch intensities. Subsequently, an intermediate image is selected (e.g., based on various image characteristics) to minimize a difference between a foreground illuminance and a background illuminance for the scene. The automatic exposure settings for the camera can be adjusted to provide a torch intensity that corresponds to the selected intermediate image.
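To make the balancing objective concrete, the flow above can be summarized with a simplified model (an editorial sketch, not a formula from this disclosure): each region's luminance is treated as an ambient term plus a torch response that scales linearly with torch intensity, and the selected intensity minimizes the foreground/background gap.

```latex
% Editorial sketch. L_r(I): luminance of region r (foreground f or
% background b) at torch intensity I; A_r: ambient term estimated from
% the low/no-torch preview; k_r: torch response estimated from the
% high-torch preview at intensity I_high.
L_r(I) \approx A_r + k_r I,
\qquad
k_r = \frac{L_r(I_{\text{high}}) - A_r}{I_{\text{high}}},
\qquad
I^{*} = \arg\min_{I_{\text{low}} \le I \le I_{\text{high}}}
\left| L_f(I) - L_b(I) \right|
```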
Example Smart Torch Metering Adjustments
[0023] As described herein, a torch current may be dynamically determined based on a scene so that an exposure for a foreground may be balanced with an exposure for a background of the scene. A smart torch metering system may estimate the torch current based on factors such as a depth, a skin tone, an ambient illumination, and a dynamic range of the scene. In some implementations, during smart flash or torch capture, based on the foreground and background illumination, auto exposure may dynamically change the torch strength. For example, auto exposure can analyze two frames: a first frame for which no torch has been applied and a second frame captured with an initial torch current. Accordingly, using the statistics from the frame without torch and the statistics from the frame with the initial torch, an automatic exposure (AE) component may determine an impact of torch strength on different segments of the scene, and determine a new torch strength that can result in an optimal dynamic range within the exposure limitations supported by the device.
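For illustration, the two-frame analysis might be sketched as follows in Python. This is a minimal sketch under a linear-LED assumption; the function name, the mask dictionary, and the synthetic usage values are hypothetical, not taken from this disclosure.

```python
import numpy as np

def torch_response(no_torch_luma, torch_luma, initial_current, masks):
    """Estimate per-segment torch response (luma gain per unit torch current)
    from a no-torch frame and a frame captured at an initial torch current."""
    responses = {}
    for name, mask in masks.items():
        ambient = float(no_torch_luma[mask].mean())
        lit = float(torch_luma[mask].mean())
        # Torch-induced brightness per unit current, assuming the LED
        # output is approximately linear over the metering range.
        responses[name] = max(lit - ambient, 0.0) / initial_current
    return responses

# Illustrative usage with synthetic 4x4 AE luma planes.
fg = np.zeros((4, 4), dtype=bool)
fg[1:3, 1:3] = True
resp = torch_response(np.full((4, 4), 0.10), np.full((4, 4), 0.55),
                      initial_current=0.8, masks={"fg": fg, "bg": ~fg})
```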
[0024] Figure 1 is an example overview 100 of torch strength calibration, in accordance with example embodiments. In some embodiments, the operations may involve displaying, by a graphical user interface of a computing device, a preview image comprising a scene. For example, device 105 may include graphical user interface (GUI) 110 displaying a preview image 115. In some embodiments, device 105 may be a mobile device. In some embodiments, graphical user interface 110 may be an interface that displays a captured image. In some embodiments, graphical user interface 110 may be a live-view interface that displays a live-view preview of an image. As illustrated, the displaying of the image may involve providing a live-view preview of the image prior to a capture of the image.
[0025] Herein a "live-view preview" of an image should be understood to be an image or sequence of images (e.g., video) that is generated and displayed based on an image data stream from an image sensor of an image-capture device. For instance, image data may be generated by a camera's image sensor (or a portion, subset, or sampling of the pixels on the image sensor). This image data is representative of the field-of-view (FOV) of the camera, and thus indicative of the image that will be captured if the user taps the camera's shutter button or initiates image capture in some other manner. To help a user to decide how to position a camera for image capture, a camera device or other computing device may generate and display a live-view preview image based on the image data stream from the image sensor. The live-view preview image can be a real-time image feed (e.g., video), such that the user is informed of the camera's FOV in real-time. In some embodiments, the image may be a frame of a plurality of frames of a video.
[0026] In some embodiments, the user may open a camera system of device 105 (e.g., with a touch screen or other mechanism), and may direct the camera toward a scene, with an intent to capture an image. Graphical user interface 110 may display a live-view preview of the image. Device 105 may utilize one or more algorithms (e.g., an object detection algorithm, a face detection algorithm, a segmentation algorithm, and so forth) to identify one or more regions of interest in the image. In some implementations, a user-approved facial recognition algorithm may be applied to identify one or more individuals in the image as likely objects of interest. For example, device 105 may have a history of user preferences, and may identify certain objects and/or individuals as being of high interest to the user.
[0027] At block 120, an auto-exposure (AE) process may analyze the scene with a first torch intensity, which controls the exposure settings for image capture. For example, the first torch intensity may be a low or zero torch intensity, absent further input from the user. For example, analyzing the scene may involve factors such as depth information for the scene, a balance of illuminance between the foreground and the background, types of objects, reflectance properties of the objects, skin tones, shadow characteristics, and so forth, may be determined. [0028] At block 125, the auto-exposure (AE) process may analyze the scene with a second torch intensity, which controls the exposure settings for image capture. For example, the second torch intensity may be a high torch intensity based upon input from the user.
[0029] At block 130, device 105 may generate one or more intermediate images of the scene for intermediate torch intensities. For example, the one or more intermediate images may be computer-generated images based on varying torch intensities. In some embodiments, the one or more intermediate images of the scene may be downsampled. For example, the one or more intermediate images may be generated based on an image with no torch, an image with an initial torch applied, and the initial torch current. For a given intermediate torch intensity, the scene brightness may be interpolated between the no-torch and torch measurements to determine an intermediate scene brightness corresponding to the intermediate torch intensity.
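One way to realize that interpolation is sketched below; assuming the two AE luma planes are NumPy arrays normalized to [0, 1], the torch contribution is scaled linearly with current (the function and variable names are illustrative).

```python
import numpy as np

def candidate_luma(no_torch_luma, torch_luma, initial_current, current):
    """Synthesize a (typically downsampled) candidate luma plane for an
    intermediate torch current by interpolating, per pixel, between the
    no-torch frame and the frame captured at the initial torch current."""
    per_unit_current = (torch_luma - no_torch_luma) / initial_current
    return np.clip(no_torch_luma + per_unit_current * current, 0.0, 1.0)
```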
[0030] The intermediate torch intensities can include values between the first torch intensity and the second torch intensity. In some embodiments, the intermediate torch intensities can include values outside the range between the first torch intensity and the second torch intensity (e.g., lower than the first torch intensity and/or higher than the second torch intensity) based on an analysis of the image. Also, for example, the range between the first torch intensity and the second torch intensity may be divided into subintervals to determine the intermediate torch intensities. In some embodiments, the partition may be uniform with subintervals of equal length. In some embodiments, the partition may be non-uniform with subintervals of varying length. Generally, a number of intermediate torch intensities, the respective values, and so forth, may be dynamically determined based on the scene, regions and/or objects of interest, ambient lighting, depth information, reflectance properties, and so forth.
[0031] In some embodiments, based on a minimum torch intensity and a maximum torch intensity, a number of intermediate torch intensities (e.g., 10, 20, etc.) may be utilized to evaluate the scene. Such intermediate torch intensities may be equally spaced between the minimum torch intensity and the maximum torch intensity in the log domain. A log domain is used since the illumination operates as a multiplicative factor.
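A minimal sketch of that spacing, assuming hypothetical endpoint values; `numpy.geomspace` returns values equally spaced in the log domain.

```python
import numpy as np

# Ten candidate torch intensities equally spaced in the log domain,
# reflecting the multiplicative behavior of illumination. The endpoints
# (0.05 and 1.0, as fractions of maximum torch) are illustrative.
candidates = np.geomspace(0.05, 1.0, num=10)
```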
[0032] At block 135, the auto-exposure (AE) process may determine an adaptive torch strength. For example, the AE process may select a particular intermediate torch intensity associated with an intermediate image. The particular intermediate torch intensity may be selected to reduce a difference between a foreground illuminance and a background illuminance for the scene relative to the first torch intensity and the second torch intensity. For example, smart torch metering handles night scenes where the foreground and background have significantly different depths by using AE statistical data from a raw image.
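A sketch of such a selection over synthesized candidates is shown below, assuming a foreground segmentation mask is available; it scores each candidate by the gap between mean foreground and mean background luma (a production implementation would also weigh the factors described below, such as depth and skin tone).

```python
import numpy as np

def select_torch_current(no_torch_luma, torch_luma, initial_current,
                         candidates, fg_mask):
    """Return the candidate torch current whose synthesized frame minimizes
    the difference between mean foreground and mean background luma."""
    best_current, best_gap = None, np.inf
    for current in candidates:
        # Same linear-in-current interpolation used to synthesize candidates.
        luma = no_torch_luma + (torch_luma - no_torch_luma) * (
            current / initial_current)
        gap = abs(float(luma[fg_mask].mean()) - float(luma[~fg_mask].mean()))
        if gap < best_gap:
            best_current, best_gap = current, gap
    return best_current
```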
[0033] In some embodiments, to obtain a proper exposure for the foreground and the background at a high torch strength, highlight regions of the scene may be captured with a short exposure, and shadow regions of the scene may be captured with a long exposure. These images may then be fused together. Generally, this may involve additional post-capture processing, and merging frames taken with different exposures may introduce merge artifacts.
[0034] The adaptive torch strength (e.g., particular intermediate torch intensity) may be selected based on various image characteristics such as a depth of the scene, a skin tone of a foreground object, an ambient illumination of the scene, a dynamic range of the scene, and so forth.
[0035] In some embodiments, segmentation data for an image may be utilized to perform various types of image processing on the image. In particular, example embodiments may utilize object segmentation data, such as segmentation masks that outline, isolate, or separate a person or other object(s) of interest within an image; e.g., by indicating an area or areas of the image occupied by a foreground object or objects in a scene, and an area or areas of the image corresponding to the scene’s background.
[0036] Depth information may take various forms. For example, the depth information could be a depth map, which is a coordinate mapping or another data structure that stores information relating to the distance of the surfaces of objects in a scene from a certain viewpoint (e.g., from a camera or mobile device). For instance, a depth map for an image captured by a camera can specify information relating to the distance from the camera to surfaces of objects captured in the image; e.g., on a pixel-by-pixel (or other) basis or a subset or sampling of pixels in the image. Various techniques may be used to generate depth information for an image. In some cases, depth information may be generated for the entire image (e.g., for the entire image frame). In other cases, depth information may only be generated for a certain area or areas in an image. For instance, depth information may only be generated when image segmentation is used to identify one or more objects in an image. Depth information may be determined specifically for the identified object or objects.
[0037] In embodiments, stereo imaging may be utilized to generate a depth map. In such embodiments, a depth map may be obtained by correlating left and right stereoscopic images to match pixels between the stereoscopic images. The pixels may be matched by determining which pixels are the most similar between the left and right images. Pixels correlated between the left and right stereoscopic images may then be used to determine depth information. For example, a disparity between the location of the pixel in the left image and the location of the corresponding pixel in the right image may be used to calculate the depth information using binocular disparity techniques. An image may be produced that contains depth information for a scene, such as information related to how deep or how far away objects in the scene are in relation to a camera's viewpoint. Such images are useful in perceptual computing for applications such as gesture tracking and object recognition, for example.
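The disparity-to-depth step uses the standard pinhole-stereo relation (a textbook formula, not specific to this disclosure); a minimal sketch:

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Classic binocular relation: depth = focal_length * baseline / disparity.
    disparity_px: horizontal offset between matched left/right pixels;
    focal_length_px: focal length expressed in pixels; baseline_m: distance
    between the two camera centers, in meters. Returns depth in meters."""
    if disparity_px <= 0:
        return float("inf")  # zero disparity corresponds to a point at infinity
    return focal_length_px * baseline_m / disparity_px
```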
[0038] In a single-camera approach, depth maps can be estimated from images taken by one camera that uses dual pixels on light-detecting sensors; e.g., a camera that provides autofocus functionality. A dual pixel of an image may include a pixel that has been split into two parts, such as a left pixel and a right pixel. Then, a dual pixel image is an image that includes dual pixels. In some embodiments, neural networks may be utilized to predict depth maps for an image.
[0039] In some embodiments, depth information may be used, for example, when the image capturing device is in a zoom-in mode. In such a situation, a detected foreground object may generally appear closer to the image capturing device but actually is farther, and the torch strength may be adjusted based on the depth information (e.g., actual depth or distance) as determined by the image capturing device. For example, the torch strength may be increased from a level used for a non-zoom mode to avoid underexposure.
[0040] In some embodiments, depth information may be used, for example, when the image capturing device is in a zoom-out mode. In such a situation, the detected foreground object may appear farther from the image capturing device but is actually closer, and the torch may be adjusted based on the depth information (e.g., actual depth or distance) as determined by the image capturing device. For example, the torch strength may be decreased from a level used for a non-zoom mode to avoid target overexposure.
[0041] In some embodiments, skin tone information may be used. For example, when the foreground includes multiple human objects with different skin tones (e.g., darker and lighter skin tones), the torch strength may be adjusted to achieve a balanced illuminance for the multiple human objects with different skin tones (e.g., maintain face luminance values within a range or designated thresholds), such that, for example, a person with a darker skin tone is not underexposed and a person with a lighter skin tone is not overexposed.
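As a sketch, such a balance check could compare detected face luminance values against a target band; the threshold values and function name here are illustrative placeholders, not values from this disclosure.

```python
def faces_within_band(face_lumas, lo=0.35, hi=0.65):
    """Return True if every detected face luma (normalized to [0, 1]) lies
    inside the target band, so that no face is underexposed (below lo)
    or overexposed (above hi); otherwise the torch strength is re-metered."""
    return all(lo <= luma <= hi for luma in face_lumas)
```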
[0042] Generally, an object may have different light reflection characteristics that may depend, for example, on a surface geometry, color, and/or a material of the object. Also, for example, a surface of an object may be composed of a plurality of materials, thereby creating complex light reflection characteristics. In some embodiments, a diffuse map (e.g., an image of an object that is representative of its diffuse reflection) may be utilized. Diffuse reflection is a type of surface reflectance where incident light is reflected and scattered into a plurality of directions (e.g., reflection by a rough surface). The diffuse map may be indexed by a set of color values that are indicative of a texture (e.g., color and pattern) of the object. In some embodiments, a specular map (e.g., an image of an object that is representative of its specular reflection) may be used. Specular reflection is a type of surface reflectance where incident light is reflected into a unidirectional reflected light (e.g., reflection by a smooth and/or shiny surface). The specular map represents a shininess characteristic of a surface and its highlight color. Accordingly, the adaptive torch strength (e.g., particular intermediate torch intensity) may be determined to maintain a balance between the foreground and background illumination, based at least in part on the diffuse map and/or the specular map.
[0043] A high-dynamic-range (HDR) display for an image can extend a range of user experience when viewing the image. For example, an image of a person in the dark may have a high composition of dark colors with a low luminance value. In some aspects, a ratio of respective luminance values may have a high dynamic range, for example, 1:1,000,000. In order to capture images of such high dynamic range, the bit depth of the image sensor may vary (e.g., 10-12 bit depth). Generally, a higher bit depth may not be conducive to efficient image processing (e.g., may not add value) due to readout noise and shot noise. Accordingly, the adaptive torch strength (e.g., particular intermediate torch intensity) may be determined based on at least an application of a tone-mapping technique to maintain a balance between the foreground and background illumination.
[0044] In some embodiments, the particular intermediate torch intensity may be selected to achieve a signal-to-noise ratio (SNR) greater than a threshold SNR. In some embodiments, the threshold SNR may be determined dynamically based on an analysis of the scene, including an environmental illumination, objects of interest, depth characteristics, and so forth. Also, for example, the particular intermediate torch intensity may be selected to reduce a lux value for the scene. For example, an LED current may be configured based on the particular intermediate torch intensity to minimize the lux value. Generally speaking, the LED current for the torch intensity is determined for illumination of foreground objects. Accordingly, lowering the torch intensity may achieve better SNR on the background by using a long exposure (e.g., when using the Night Sight mode) and merging multiple frames.
[0045] In some embodiments, the scene brightness threshold may be 35-40 lux. Upon a determination that the scene brightness is below the scene brightness threshold, a different torch current may be applied. In some embodiments, the scene brightness threshold may be used for triggering the adaptive torch mode to achieve both an optimal scene and foreground exposure. In some embodiments, the selection of the adaptive torch current may be based on achieving a balance between power saving objectives, and capturing an image with optimal image characteristics.
[0046] As another example, the particular intermediate torch intensity may be selected to maintain the particular intermediate torch intensity to be greater than a threshold torch intensity. For example, device 105 may be configured to apply a minimum torch current and the particular intermediate torch intensity may be selected to be greater than the minimum torch current.
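The triggering and clamping behavior of the preceding two paragraphs could be sketched as follows; the ~40 lux trigger echoes the 35-40 lux range mentioned above, while the function and parameter names are assumptions.

```python
def gate_and_clamp(metered_current, min_current, scene_lux, trigger_lux=40.0):
    """Trigger the adaptive torch path only below a scene-brightness
    threshold, and clamp the metered current to the device's minimum
    supported torch current."""
    if scene_lux >= trigger_lux:
        return None  # scene bright enough; adaptive torch not triggered
    return max(metered_current, min_current)
```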
[0047] At block 140, the adaptive torch strength may be applied to the flash illumination control module.
[0048] At block 145, one or more frames may be captured using a torch with the adaptive torch strength.
[0049] For example, an image 150 of a scene with the adaptive torch strength may be captured by pressing a shutter button for a camera.
[0050] Figure 2 is another example overview 200 of torch strength calibration, in accordance with example embodiments. Some components of Figure 2 may share one or more aspects with similar components illustrated in Figure 1. For example, device 205 may share one or more aspects in common with device 105, GUI 210 may share one or more aspects in common with GUI 110, preview image 215 may share one or more aspects in common with preview image 115, and image with adaptive torch 265 may share one or more aspects in common with image with adaptive torch 150.
[0051] At block 220, auto exposure (AE) of device 205 analyzes a scene depicted in preview image 215 without torch/flash, and stores the analysis.
[0052] At block 225, a user may manually enable the torch.
[0053] At block 230, the user may press a shutter to capture the image.
[0054] At block 235, device 205 may apply an initial torch strength to a flash illumination control module.
[0055] At block 240, the AE may analyze the scene (e.g., a region of interest (ROI)) for capture with the torch in an “ON” state.
[0056] At block 245, auto focus (AF) may generate a depth map for a target distance on the ROI.
[0057] At block 250, AE may determine an adaptive torch strength after the AE convergence. For example, AE may use the analysis of the scene and the depth map to generate intermediate images at intermediate torch strengths (e.g., between the image without the torch and the image with the torch on). Further analysis of the intermediate images may be performed to identify an intermediate image with an optimal balance between the foreground and background illumination. Accordingly, AE may determine an adaptive torch strength to be the torch intensity corresponding to the identified intermediate image.
[0058] At block 255, device 205 may apply the adaptive torch strength (e.g., a smart torch strength) to the flash illumination control module.
[0059] At block 260, device 205 may capture frames with the adaptive torch strength.
[0060] Figure 3 is another example overview 300 of torch strength calibration, in accordance with example embodiments.
[0061] At block 305, when a camera is activated, a brightness control component may output an initial torch current to the flash illumination control module.
[0062] At block 310, the flash illumination control module applies the initial torch value. For example, the torch value may be a strength of the torch current.
[0063] At block 315, the brightness control component determines AE convergence subsequent to the initial torch current being applied.
[0064] Generally, the camera may be configured with an auto white balance (AWB) function, which adjusts white balance automatically according to recognized scenes. The white balance function of the camera may be set to AWB by default. Accordingly, the camera may automatically adjust the color of photographs to look natural in various scenes.
[0065] At block 320, the automatic white balance component and the brightness control component obtain the scene statistics and initiate a metering process.
[0066] At block 325, the brightness control component determines an optimal torch current.
[0067] At block 330, the optimal torch current is output to the flash illumination control module.
[0068] At block 335, the flash illumination control module applies the optimal torch value.
[0069] At block 340, the automatic white balance component and the brightness control component complete the convergence process.
[0070] Figure 4 illustrates example images 400 for various torch strength calibrations, in accordance with example embodiments. Image 405 depicts an object of interest in a foreground illumination 405B against a background illumination 405A. Image 405 is captured with a default torch strength. As shown, the object of interest in the foreground (e.g., a person) is brightly illuminated, and the background is less brightly illuminated. However, there is a lack of balance between the foreground illumination 405B and the background illumination 405A.
[0071] Image 410 depicts an object of interest in a foreground illumination 410B against a background illumination 410A. Image 410 is captured with zero torch strength. As shown, the object of interest in the foreground (e.g., a person) is dimly illuminated (e.g., dark) and the background is brightly illuminated. However, there is a lack of balance between the foreground illumination 410B and the background illumination 410A.
[0072] Image 415 depicts an object of interest in a foreground illumination 415B against a background illumination 415A. Image 415 is captured with an adaptive torch strength. As shown, the object of interest in the foreground (e.g., a person) is well illuminated and the background is also well illuminated, with a balance between the foreground illumination 415B and the background illumination 415A.
[0073] Figure 5 illustrates an example image 500 captured with adaptive torch strength calibration, in accordance with example embodiments. As shown, the object of interest in the foreground (e.g., a person) is well illuminated and the background is also well illuminated, with a balance between the foreground illumination and the background illumination.
[0074] Figure 6 illustrates additional images 600 for various torch strength calibrations, in accordance with example embodiments. Image 605 depicts an object of interest in a foreground illumination 605B against a background illumination 605A. Image 605 is captured with a night sight setting and a default camera mode. For example, some device cameras are configured to capture images in low light or darker settings (e.g., at night). The night sight mode enables the camera to capture an image with a longer exposure time. Detailed low-light images may be captured without a need for a flash and/or a tripod. As shown, the object of interest in the foreground (e.g., a person) is dimly illuminated, and the front of a building in the background is brightly illuminated. However, there is a lack of balance between the foreground illumination 605B and the background illumination 605A.
[0075] Image 610 depicts an object of interest in a foreground illumination 610B against a background illumination 610A. Image 610 is captured with an adaptive torch strength and in night sight mode. As shown, the object of interest in the foreground (e.g., a person) is well illuminated and the background is also well illuminated, with a balance between the foreground illumination 610B and the background illumination 610A.
[0076] Image 615 depicts an object of interest in a foreground illumination 615B against a background illumination 615A. Image 615 is captured at full torch strength and in night sight mode. As shown, the object of interest in the foreground (e.g., a person) is well illuminated and the background is also well illuminated. The face of the person has a brighter illumination with respect to the illumination of the faces in images 605 and 610. Additionally, some of the clothing (e.g., neck scarf) has more illumination, making the image appear to be less color balanced.
[0077] Image 620 depicts an object of interest in a foreground illumination 620B against a background illumination 620A. Image 620 is captured at a torch strength for HDR mode and a default camera mode. In HDR mode, the camera is configured to widen the exposure range to capture more detail in a scene (e.g., capturing details in both bright and dark areas). For example, for an image captured in daylight, HDR mode enables details to be captured for areas that are illuminated by the sun as well as the shaded portions of the scene. As shown, the object of interest in the foreground (e.g., a person) is well illuminated and the background is also well illuminated. The face of the person has a reflective luminance and some of the clothing (e.g., neck scarf) has more illumination, making the image appear to be less color balanced.
Computing Device Architecture
[0078] Figure 7 is a block diagram of an example computing device 700, in accordance with example embodiments. In particular, computing device 700 shown in Figure 7 can be configured to perform at least one function described herein, including method 800.
[0079] Computing device 700 may include a user interface module 701, a network communications module 702, one or more processors 703, data storage 704, one or more cameras 718, one or more sensors 720, and power system 722, all of which may be linked together via a system bus, network, or other connection mechanism 705.
[0080] User interface module 701 can be operable to send data to and/or receive data from external user input/output devices. For example, user interface module 701 can be configured to send and/or receive data to and/or from user input devices such as a touch screen, a computer mouse, a keyboard, a keypad, a touch pad, a trackball, a joystick, a voice recognition module, and/or other similar devices. User interface module 701 can also be configured to provide output to user display devices, such as one or more cathode ray tubes (CRT), liquid crystal displays, light emitting diodes (LEDs), displays using digital light processing (DLP) technology, printers, light bulbs, and/or other similar devices, either now known or later developed. User interface module 701 can also be configured to generate audible outputs, with devices such as a speaker, speaker jack, audio output port, audio output device, earphones, and/or other similar devices. User interface module 701 can further be configured with one or more haptic devices that can generate haptic outputs, such as vibrations and/or other outputs detectable by touch and/or physical contact with computing device 700. In some examples, user interface module 701 can be used to provide a graphical user interface (GUI) for utilizing computing device 700.
[0081] Network communications module 702 can include one or more devices that provide one or more wireless interfaces 707 and/or one or more wireline interfaces 708 that are configurable to communicate via a network. Wireless interface(s) 707 can include one or more wireless transmitters, receivers, and/or transceivers, such as a Bluetooth™ transceiver, a Zigbee® transceiver, a Wi-Fi™ transceiver, a WiMAX™ transceiver, an LTE™ transceiver, and/or other type of wireless transceiver configurable to communicate via a wireless network. Wireline interface(s) 708 can include one or more wireline transmitters, receivers, and/or transceivers, such as an Ethernet transceiver, a Universal Serial Bus (USB) transceiver, or similar transceiver configurable to communicate via a twisted pair wire, a coaxial cable, a fiberoptic link, or a similar physical connection to a wireline network.
[0082] In some examples, network communications module 702 can be configured to provide reliable, secured, and/or authenticated communications. For each communication described herein, information for facilitating reliable communications (e.g., guaranteed message delivery) can be provided, perhaps as part of a message header and/or footer (e.g., packet/message sequencing information, encapsulation headers and/or footers, size/time information, and transmission verification information such as cyclic redundancy check (CRC) and/or parity check values). Communications can be made secure (e.g., be encoded or encrypted) and/or decrypted/decoded using one or more cryptographic protocols and/or algorithms, such as, but not limited to, Data Encryption Standard (DES), Advanced Encryption Standard (AES), a Rivest-Shamir-Adleman (RSA) algorithm, a Diffie-Hellman algorithm, a secure sockets protocol such as Secure Sockets Layer (SSL) or Transport Layer Security (TLS), and/or Digital Signature Algorithm (DSA). Other cryptographic protocols and/or algorithms can be used as well or in addition to those listed herein to secure (and then decrypt/decode) communications.
[0083] One or more processors 703 can include one or more general purpose processors (e.g., central processing unit (CPU), etc.), and/or one or more special purpose processors (e.g., digital signal processors, tensor processing units (TPUs), graphics processing units (GPUs), application specific integrated circuits, etc.). One or more processors 703 can be configured to execute computer-readable instructions 706 that are contained in data storage 704 and/or other instructions as described herein.
[0084] Data storage 704 can include one or more non-transitory computer-readable storage media that can be read and/or accessed by at least one of one or more processors 703. The one or more computer-readable storage media can include volatile and/or non-volatile storage components, such as optical, magnetic, organic or other memory or disc storage, which can be integrated in whole or in part with at least one of one or more processors 703. In some examples, data storage 704 can be implemented using a single physical device (e.g., one optical, magnetic, organic or other memory or disc storage unit), while in other examples, data storage 704 can be implemented using two or more physical devices.
[0085] Data storage 704 can include computer-readable instructions 706 and perhaps additional data. In some examples, data storage 704 can include storage required to perform at least part of the herein-described methods, scenarios, and techniques and/or at least part of the functionality of the herein-described devices and networks. In particular, computer-readable instructions 706 can include instructions that, when executed by processor(s) 703, enable computing device 700 to provide for some or all of the functionality described herein. For example, data storage 704 may store a first image of a scene at a first torch intensity, and a second image of the scene at a second torch intensity. Data storage 704 may also store one or more candidate images of the scene at intermediate torch intensities.
[0086] In some embodiments, computer-readable instructions 706 can include instructions that, when executed by processor(s) 703, enable computing device 700 to carry out functions comprising: receiving, by an image capturing device, a first image of a scene at a first torch intensity, and a second image of the scene at a second torch intensity, wherein the first image and the second image comprise respective foreground and background illuminance for the scene; generating at least one candidate image of the scene for at least one intermediate torch intensity, wherein the at least one intermediate torch intensity comprises at least one value between the first torch intensity and the second torch intensity; selecting a particular intermediate torch intensity associated with the at least one candidate image, wherein the particular intermediate torch intensity reduces a difference between a foreground illuminance and a background illuminance for the scene relative to the first torch intensity and the second torch intensity; and responsive to the selecting, receiving, by the image capturing device, an additional image of the scene based on the selected intermediate torch intensity.
[0087] In some embodiments, the instructions for the selecting of the particular intermediate torch intensity may be based on one or more of a depth of the scene, a skin tone of a foreground object, an ambient illumination of the scene, or a dynamic range of the scene.
[0088] In some embodiments, the instructions may further involve instructions for capturing the image subsequent to the adjusting of the automatic exposure setting.
[0089] In some embodiments, the instructions for the selecting of the particular intermediate torch intensity may be configured to achieve a signal-to-noise ratio (SNR) greater than a threshold SNR.
[0090] In some embodiments, the instructions for the selecting of the particular intermediate torch intensity may be configured to reduce a lux value for the scene.
[0091] In some embodiments, the instructions for the selecting of the particular intermediate torch intensity may be configured to maintain the particular intermediate torch intensity to be greater than a threshold torch intensity.
[0092] In some examples, computing device 700 can include torch strength module 712. Torch strength module 712 can be configured to control a torch strength to be applied for image capture by one or more cameras 718. In some embodiments, torch strength module 712 may receive an intermediate torch intensity from data storage 704, and configure the one or more cameras 718 to capture an image using the intermediate torch intensity.
[0093] In some examples, computing device 700 can include one or more cameras 718. Camera(s) 718 can include one or more image capture devices, such as still and/or video cameras, equipped to capture light and record the captured light in one or more images; that is, camera(s) 718 can generate image(s) of captured light. The one or more images can be one or more still images and/or one or more images utilized in video imagery. Camera(s) 718 can capture light and/or electromagnetic radiation emitted as visible light, infrared radiation, ultraviolet light, and/or as one or more other frequencies of light.
[0094] In some examples, computing device 700 can include one or more sensors 720. Sensors 720 can be configured to measure conditions within computing device 700 and/or conditions in an environment of computing device 700 and provide data about these conditions. For example, sensors 720 can include one or more of (i) sensors for obtaining data about computing device 700, such as, but not limited to, a thermometer for measuring a temperature of computing device 700, a battery sensor for measuring power of one or more batteries of power system 722, and/or other sensors measuring conditions of computing device 700; (ii) an identification sensor to identify other objects and/or devices, such as, but not limited to, a Radio Frequency Identification (RFID) reader, proximity sensor, one-dimensional barcode reader, two-dimensional barcode (e.g., Quick Response (QR) code) reader, and a laser tracker, where the identification sensors can be configured to read identifiers, such as RFID tags, barcodes, QR codes, and/or other devices and/or objects configured to be read and provide at least identifying information; (iii) sensors to measure locations and/or movements of computing device 700, such as, but not limited to, a tilt sensor, a gyroscope, an accelerometer, a Doppler sensor, a GPS device, a sonar sensor, a radar device, a laser-displacement sensor, and a compass; (iv) an environmental sensor to obtain data indicative of an environment of computing device 700, such as, but not limited to, an infrared sensor, an optical sensor, a light sensor, a biosensor, a capacitive sensor, a touch sensor, a temperature sensor, a wireless sensor, a radio sensor, a movement sensor, a microphone, a sound sensor, an ultrasound sensor and/or a smoke sensor; and/or (v) a force sensor to measure one or more forces (e.g., inertial forces and/or G-forces) acting about computing device 700, such as, but not limited to one or more sensors that measure: forces in one or more dimensions, torque, ground force, friction, and/or a zero moment point (ZMP) sensor that identifies ZMPs and/or locations of the ZMPs. Many other examples of sensors 720 are possible as well.
[0095] Power system 722 can include one or more batteries 724 and/or one or more external power interfaces 726 for providing electrical power to computing device 700. Each battery of the one or more batteries 724 can, when electrically coupled to the computing device 700, act as a source of stored electrical power for computing device 700. One or more batteries 724 of power system 722 can be configured to be portable. Some or all of one or more batteries 724 can be readily removable from computing device 700. In other examples, some or all of one or more batteries 724 can be internal to computing device 700, and so may not be readily removable from computing device 700. Some or all of one or more batteries 724 can be rechargeable. For example, a rechargeable battery can be recharged via a wired connection between the battery and another power supply, such as by one or more power supplies that are external to computing device 700 and connected to computing device 700 via the one or more external power interfaces. In other examples, some or all of one or more batteries 724 can be non-rechargeable batteries.
[0096] One or more external power interfaces 726 of power system 722 can include one or more wired-power interfaces, such as a USB cable and/or a power cord, that enable wired electrical power connections to one or more power supplies that are external to computing device 700. One or more external power interfaces 726 can include one or more wireless power interfaces, such as a Qi wireless charger, that enable wireless electrical power connections, such as via a Qi wireless charger, to one or more external power supplies. Once an electrical power connection is established to an external power source using one or more external power interfaces 726, computing device 700 can draw electrical power from the external power source via the established electrical power connection. In some examples, power system 722 can include related sensors, such as battery sensors associated with the one or more batteries or other types of electrical power sensors.
Example Methods of Operation
[0098] Figure 8 is a flowchart of a method, in accordance with example embodiments. Method 800 may include various blocks or steps. The blocks or steps may be carried out individually or in combination. The blocks or steps may be carried out in any order and/or in series or in parallel. Further, blocks or steps may be omitted or added to method 800.
[0099] The blocks of method 800 may be carried out by various elements of computing device 700 as illustrated and described in reference to Figure 7.
[00100] Block 810 involves receiving, by an image capturing device, a first image of a scene at a first torch intensity, and a second image of the scene at a second torch intensity, wherein the first image and the second image comprise respective foreground and background illuminance for the scene.
[00101] Block 820 involves generating at least one candidate image of the scene for at least one intermediate torch intensity, wherein the at least one intermediate torch intensity comprises at least one value between the first torch intensity and the second torch intensity.
[00102] Block 830 involves selecting a particular intermediate torch intensity associated with the at least one candidate image, wherein the particular intermediate torch intensity reduces a difference between a foreground illuminance and a background illuminance for the scene relative to the first torch intensity and the second torch intensity.
[00103] Block 840 involves, responsive to the selecting, receiving, by the image capturing device, an additional image of the scene based on the selected intermediate torch intensity.
[00104] In some embodiments, the selecting of the particular intermediate torch intensity may be based on one or more of a depth of the scene, a skin tone of a foreground object, an ambient illumination of the scene, or a dynamic range of the scene.
[00105] Some embodiments involve, responsive to the selecting, adjusting an exposure setting of the image capturing device based on the selected intermediate torch intensity. Such embodiments involve capturing the image at the selected intermediate torch intensity.
[00106] In some embodiments, the first torch intensity may correspond to a low torch intensity and the second torch intensity may correspond to a high torch intensity.
[00107] In some embodiments, the at least one candidate image may be downsampled.
[00108] In some embodiments, the selecting of the particular intermediate torch intensity may be performed to achieve a signal-to-noise ratio (SNR) greater than a threshold SNR.
[00109] In some embodiments, the selecting of the particular intermediate torch intensity may be performed to reduce a lux value for the scene.
[00110] In some embodiments, the selecting of the particular intermediate torch intensity may be performed to maintain the particular intermediate torch intensity to be greater than a threshold torch intensity.
[00111] In some embodiments, the generating of the at least one candidate image involves receiving the at least one candidate image by the image capturing device operating at the at least one intermediate torch intensity.
[00112] In some embodiments, the image capturing device may be a component of a computing device. In some embodiments, the computing device may be a mobile device.
[00113] The particular arrangements shown in the Figures should not be viewed as limiting. It should be understood that other embodiments may include more or less of each element shown in a given Figure. Further, some of the illustrated elements may be combined or omitted. Yet further, an illustrative embodiment may include elements that are not illustrated in the Figures.
[00114] A step or block that represents a processing of information can correspond to circuitry that can be configured to perform the specific logical functions of a herein-described method or technique. Alternatively or additionally, a step or block that represents a processing of information can correspond to a module, a segment, or a portion of program code (including related data). The program code can include one or more instructions executable by a processor for implementing specific logical functions or actions in the method or technique. The program code and/or related data can be stored on any type of computer readable medium such as a storage device including a disk, hard drive, or other storage medium.
[00115] The computer readable medium can also include non-transitory computer readable media, such as media that store data for short periods of time like register memory, processor cache, and random access memory (RAM). The computer readable media can also include non-transitory computer readable media that store program code and/or data for longer periods. Thus, the computer readable media may include secondary or persistent long-term storage, such as read only memory (ROM), optical or magnetic disks, or compact disc read only memory (CD-ROM). The computer readable media can also be any other volatile or non-volatile storage systems. A computer readable medium can be considered a computer readable storage medium, for example, or a tangible storage device.
[00116] While various examples and embodiments have been disclosed, other examples and embodiments will be apparent to those skilled in the art. The various disclosed examples and embodiments are for purposes of illustration and are not intended to be limiting, with the true scope being indicated by the following claims.

Claims

What is claimed is:
1. A computer-implemented method, comprising:
receiving, by an image capturing device, a first image of a scene at a first torch intensity, and a second image of the scene at a second torch intensity, wherein the first image and the second image comprise respective foreground and background illuminance for the scene;
generating at least one candidate image of the scene for at least one intermediate torch intensity, wherein the at least one intermediate torch intensity comprises at least one value between the first torch intensity and the second torch intensity;
selecting a particular intermediate torch intensity associated with the at least one candidate image, wherein the particular intermediate torch intensity reduces a difference between a foreground illuminance and a background illuminance for the scene relative to the first torch intensity and the second torch intensity; and
responsive to the selecting, receiving, by the image capturing device, an additional image of the scene based on the selected intermediate torch intensity.
2. The computer-implemented method of claim 1, wherein the selecting of the particular intermediate torch intensity is based on one or more of a depth of the scene, a skin tone of a foreground object, an ambient illumination of the scene, or a dynamic range of the scene.
3. The computer-implemented method of claim 1, further comprising:
responsive to the selecting, adjusting an exposure setting of the image capturing device based on the selected intermediate torch intensity; and
capturing an image at the selected intermediate torch intensity.
4. The computer-implemented method of claim 1, wherein the first torch intensity corresponds to a low torch intensity and the second torch intensity corresponds to a high torch intensity.
5. The computer-implemented method of claim 1, wherein the at least one candidate image is downsampled.
6. The computer-implemented method of claim 1, wherein the selecting of the particular intermediate torch intensity is performed to achieve a signal-to-noise ratio (SNR) greater than a threshold SNR.
7. The computer-implemented method of claim 1, wherein the selecting of the particular intermediate torch intensity is performed to reduce a lux value for the scene.
8. The computer-implemented method of claim 1, wherein the selecting of the particular intermediate torch intensity is performed to maintain the particular intermediate torch intensity to be greater than a threshold torch intensity.
9. The computer-implemented method of claim 1, wherein the generating of the at least one candidate image comprises:
receiving the at least one candidate image by the image capturing device operating at the at least one intermediate torch intensity.
10. The computer-implemented method of claim 1, wherein the image capturing device is a component of a computing device.
11. The computer-implemented method of claim 10, wherein the computing device is a mobile device.
12. A computing device, comprising:
one or more processors; and
data storage, wherein the data storage has stored thereon computer-executable instructions that, when executed by the one or more processors, cause the computing device to carry out functions comprising:
receiving, by an image capturing device, a first image of a scene at a first torch intensity, and a second image of the scene at a second torch intensity, wherein the first image and the second image comprise respective foreground and background illuminance for the scene;
generating at least one candidate image of the scene for at least one intermediate torch intensity, wherein the at least one intermediate torch intensity comprises at least one value between the first torch intensity and the second torch intensity;
selecting a particular intermediate torch intensity associated with the at least one candidate image, wherein the particular intermediate torch intensity reduces a difference between a foreground illuminance and a background illuminance for the scene relative to the first torch intensity and the second torch intensity; and
responsive to the selecting, receiving, by the image capturing device, an additional image of the scene based on the selected intermediate torch intensity.
13. The computing device of claim 12, wherein the instructions for the selecting of the particular intermediate torch intensity are based on one or more of a depth of the scene, a skin tone of a foreground object, an ambient illumination of the scene, or a dynamic range of the scene.
14. The computing device of claim 12, wherein the functions further comprise:
adjusting an automatic exposure setting of the image capturing device based on the selected intermediate torch intensity; and
capturing an image subsequent to the adjusting of the automatic exposure setting.
15. The computing device of claim 12, wherein the first torch intensity corresponds to a low torch intensity and the second torch intensity corresponds to a high torch intensity.
16. The computing device of claim 12, wherein the at least one candidate image is downsampled.
17. The computing device of claim 12, wherein the instructions for the selecting of the particular intermediate torch intensity are configured to achieve a signal-to-noise ratio (SNR) greater than a threshold SNR.
18. The computing device of claim 12, wherein the instructions for the selecting of the particular intermediate torch intensity are configured to reduce a lux value for the scene.
19. The computing device of claim 12, wherein the instructions for the selecting of the particular intermediate torch intensity are configured to maintain the particular intermediate torch intensity to be greater than a threshold torch intensity.
20. An article of manufacture comprising one or more computer readable media having computer-readable instructions stored thereon that, when executed by one or more processors of a computing device, cause the computing device to carry out functions comprising:
receiving, by an image capturing device, a first image of a scene at a first torch intensity, and a second image of the scene at a second torch intensity, wherein the first image and the second image comprise respective foreground and background illuminance for the scene;
generating at least one candidate image of the scene for at least one intermediate torch intensity, wherein the at least one intermediate torch intensity comprises at least one value between the first torch intensity and the second torch intensity;
selecting a particular intermediate torch intensity associated with the at least one candidate image, wherein the particular intermediate torch intensity reduces a difference between a foreground illuminance and a background illuminance for the scene relative to the first torch intensity and the second torch intensity; and
responsive to the selecting, receiving, by the image capturing device, an additional image of the scene based on the selected intermediate torch intensity.


