
WO2025090075A1 - Depth camera using display encoded dot pattern - Google Patents

Depth camera using display encoded dot pattern

Info

Publication number
WO2025090075A1
Authority
WO
WIPO (PCT)
Prior art keywords
light
display
computing device
dots
periodic structure
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2023/077662
Other languages
French (fr)
Inventor
Changgeng Liu
Hart Levy
Kuntal Sengupta
Marek MIENKO
Ion Bita
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google LLC
Priority to PCT/US2023/077662
Publication of WO2025090075A1
Legal status: Pending

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/2513Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object with several lines being projected in more than one direction, e.g. grids, patterns
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1626Constructional details or arrangements for portable computers with a single-body enclosure integrating a flat display, e.g. Personal Digital Assistants [PDAs]
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1633Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1637Details related to the display arrangement, including those related to the mounting of the display in the housing
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1633Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1684Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1633Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1684Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
    • G06F1/1686Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being an integrated camera
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/145Illumination specially adapted for pattern recognition, e.g. using gratings
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/02Constructional features of telephone sets
    • H04M1/0202Portable telephone sets, e.g. cordless phones, mobile phones or bar type handsets
    • H04M1/026Details of the structure or mounting of specific components
    • H04M1/0266Details of the structure or mounting of specific components for a display module assembly
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/02Constructional features of telephone sets
    • H04M1/0202Portable telephone sets, e.g. cordless phones, mobile phones or bar type handsets
    • H04M1/026Details of the structure or mounting of specific components
    • H04M1/0264Details of the structure or mounting of specific components for a camera module assembly
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72463User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions to restrict the functionality of the device
    • H04M1/724631User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions to restrict the functionality of the device by limiting the access to the user interface, e.g. locking a touch-screen or a keypad
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2250/00Details of telephonic subscriber devices
    • H04M2250/52Details of telephonic subscriber devices including functional features of a camera

Definitions

  • a computing device may include depth mapping hardware that is used to perform depth mapping of three-dimensional scenes.
  • a computing device may use such hardware to perform depth mapping of faces for the purposes of performing facial recognition.
  • aspects of this disclosure are directed to techniques for a computing device to generate patterns of dots that are used to perform depth mapping of a scene without use of a separate diffractive optical element that diffracts light into one or more patterns of light dots.
  • the display of the computing device may act as a diffractive optical element that diffracts light into patterns of dots that can be used to perform depth mapping of a three-dimensional scene.
  • the computing device may include a light source that may emit light towards the display of the computing device.
  • the display may include a periodic structure that forms a two-dimensional arrangement of opaque regions and transparent regions. The arrangement of opaque regions and transparent regions of the periodic structure may cause the light to constructively and destructively interfere with each other, thereby diffracting the light into one or more specific patterns of light dots that are projected onto a three-dimensional scene.
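  • For context (an editorial illustration, not language from the application), the far-field directions of dots diffracted by a two-dimensional periodic structure can be approximated with the standard grating equation, where \(p_x\) and \(p_y\) are the assumed periods of the structure along its two axes, \(\lambda\) is the wavelength of the emitted light, and the integers \(m, n\) index the diffraction orders:

    \[ \sin\theta_x = \frac{m\,\lambda}{p_x}, \qquad \sin\theta_y = \frac{n\,\lambda}{p_y} \]

    For example, at \(\lambda = 940\,\text{nm}\) and a \(50\,\mu\text{m}\) period, the first-order dots would sit roughly \(1.1^{\circ}\) off the optical axis; finer periods spread the dot pattern over wider angles.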
  • a camera of the computing device may capture the pattern of light dots in the three-dimensional scene and may determine depth values associated with the pattern of light dots to generate a depth map of the three-dimensional scene.
  • the techniques described herein relate to a computing device including: a display having a periodic structure; a light source configured to emit light towards at least a portion of the display, the periodic structure of the display diffracting the light into a plurality of light dots that are projected onto a scene; a camera configured to capture the plurality of light dots in the scene; and one or more processors configured to determine depth values associated with the plurality of light dots.
  • the techniques described herein relate to a method including: emitting, by a light source of a computing device, light towards at least a portion of a display of the computing device, wherein a periodic structure of the display diffracts the light into a plurality of light dots that are projected onto a scene; capturing, by a camera of the computing device, the plurality of light dots in the scene; and determining, by one or more processors of the computing device, depth values associated with the plurality of light dots.
  • FIG. 1A is a conceptual diagram illustrating an example environment in which a computing device is configured to perform depth mapping of a three-dimensional scene, in accordance with one or more aspects of the present disclosure.
  • FIG. 1B shows a profile view of a portion of a display of a computing device in further detail.
  • FIG. 1C illustrates a plan view of a periodic structure of a display of a computing device.
  • FIG. 1D illustrates an example of dot patterns that may be projected by a computing device onto a scene.
  • FIG. 2 is a block diagram illustrating an example computing device, in accordance with one or more aspects of the present disclosure.
  • FIG. 3 is a flowchart illustrating example operations of an example computing device configured to generate depth maps of three-dimensional scenes, in accordance with one or more aspects of the present disclosure.
  • FIG. 1A is a conceptual diagram illustrating an example environment 100 in which a computing device 102 is configured to perform depth mapping of a three-dimensional scene 120, in accordance with one or more aspects of the present disclosure.
  • computing device 102 is a mobile computing device (e.g., a mobile phone).
  • computing device 102 may be a tablet computer, a laptop computer, a desktop computer, a gaming system, a media player, an e-book reader, a television platform, an automobile navigation system, a wearable computing device (e.g., a computerized watch, computerized eyewear, a computerized glove), or any other type of mobile or non-mobile computing device.
  • Computing device 102 includes display 112.
  • Display 112 may function as an output (e.g., display) device using any one or more display devices, such as a liquid crystal display (LCD), dot matrix display, light emitting diode (LED) display, microLED, organic light-emitting diode (OLED) display, e-ink, or similar monochrome or color display capable of outputting visible information to a user of computing device 102.
  • Display 112 may be a presence-sensitive display that may receive tactile input from a user of computing device 102.
  • Display 112 may receive indications of the tactile input by detecting one or more gestures from a user of computing device 102 (e.g., the user touching or pointing to one or more locations of display 112 with a finger or a stylus pen).
  • Display 112 may present output to a user, for instance at a presence-sensitive display.
  • Display 112 may present the output as a graphical user interface, which may be associated with functionality provided by computing device 102.
  • display 112 may present various user interfaces of components of a computing platform, operating system, applications, or services executing at or accessible by computing device 102 (e.g., an electronic message application, an Internet browser application, a mobile operating system, etc.). A user may interact with a respective user interface to cause computing device 102 to perform operations relating to a function.
  • Computing device 102 may include depth mapping module 110. Depth mapping module 110 may perform operations described herein using software, hardware, firmware, or a mixture of hardware, software, and firmware residing in and executing on computing device 102 or at one or more other remote computing devices. In some examples, depth mapping module 110 may be implemented as hardware, software, and/or a combination of hardware and software. Computing device 102 may execute depth mapping module 110 with one or more processors. Computing device 102 may execute depth mapping module 110 as or within a virtual machine executing on underlying hardware.
  • Depth mapping module 110 may be implemented in various ways. For example, depth mapping module 110 may be implemented as a downloadable or pre-installed application or “app.” In another example, depth mapping module 110 may be implemented as part of an operating system of computing device 102. Other examples of computing device 102 that implement techniques of this disclosure may include additional components not shown in FIG. 1A.
  • Computing device 102 includes camera 106 and light source 108 that computing device 102 may use to perform depth mapping of three-dimensional scenes.
  • Light source 108 may be any device capable of emitting light that is used to perform depth mapping of three-dimensional scenes.
  • light source 108 may be capable of generating light having a center wavelength that is between about 400 nanometers (nm) and about 850 nm, or longer.
  • light source 108 may be capable of generating light that is in the near infrared range to the infrared range, such that human eyes are insensitive to the light emitted by light source 108.
  • Light source 108 may be any suitable light source having a certain degree of spatial and temporal coherence to generate high contrast light patterns.
  • light source 108 may be a laser, such as a super luminescent diode laser.
  • light source 108 may be one or more filtered light-emitting diodes.
  • light source 108 may be one or more vertical-cavity surface-emitting lasers (VCSELs).
  • light source 108 may be one or more high emissivity diodes, which may be light emitting diodes that are capable of generating light having a center wavelength that is between about 400 nanometers (nm) and about 850 nm, or longer.
  • Camera 106 may be any device capable of capturing images of the patterns of light emitted from light source 108 that are projected onto a three-dimensional scene by computing device 102 for the purposes of performing depth mapping. That is, camera 106 may capture the patterns of light that are reflected by the three-dimensional scene. As the light emitted by light source 108 may have a relatively long wavelength, which may be in the near infrared or infrared range, camera 106 may be a device that is able to capture the light patterns at such relatively long wavelengths.
  • Computing device 102 may include one or more components that diffract the light emitted by light source 108 into one or more patterns of light dots (also referred to as “dots” or “dot patterns”) that are projected onto a three-dimensional scene.
  • computing device 102 may include a separate diffractive optical element that diffracts the light emitted by light source 108 into one or more patterns of light dots.
  • a diffractive optical element is integrated with light source 108 into a single module.
  • Such a diffractive optical element may be designed to cause light waves emitted by light source 108 to constructively and destructively interfere with each other to result in one or more specific patterns of light dots that are projected from computing device 102 onto a three-dimensional scene.
  • a diffractive optical element may function as a beam splitter that splits the light emitted from light source 108 into multiple patterns of light that are angularly offset relative to each other and that overlap each other.
  • the diffractive optical element may not be disposed behind any other components that may block the light source and the diffractive optical element from projecting patterns of light dots onto a three-dimensional scene.
  • the light source and the diffractive optical element may not be positioned below the components of the display.
  • the computing device may use the light source and the diffractive optical element to perform depth mapping of human faces for the purposes of performing facial recognition.
  • the light source and the diffractive optical element of the smartphone may be positioned to project patterns of light from the same side of the smartphone as the display.
  • accommodating the light source and the diffractive optical element on the same side of a computing device as the display, without positioning the light source and the diffractive optical element below the display, may require increasing the size of the computing device or reducing the screen size of the display.
  • while a computing device may include a notch or a cutout to accommodate the light source and the diffractive optical element, such notches or cutouts may reduce the screen space of the display and may increase the cost and complexity of designing and manufacturing displays that accommodate such notches or cutouts.
  • displays that accommodate notches or cutouts may negatively impact the user experience of interacting with such displays and may cause issues with content placement in user interfaces displayed by such displays.
  • At least a portion of display 112 of computing device 102 may act as a diffractive optical element for light source 108 to diffract light 116 emitted by light source 108 to project patterns of light dots 118 onto scene 120, which may be a three-dimensional scene. That is, instead of using a separate diffractive optical element that is a separate component from display 112, light source 108 may use at least a portion of display 112 as a diffractive optical element that is operable to diffract light 116 emitted by light source 108 to project patterns of light dots 118 onto a scene.
  • light source 108 may be disposed in computing device 102 behind display 112, such that light source 108 emits light 116 towards display 112 that diffracts the emitted light 116 out from computing device 102 into patterns of light dots 118 onto scene 120. That is, light source 108 may be disposed in computing device 102 behind display 112, such that light 116 emitted by light source 108 illuminates at least a portion of the side of display 112 that faces inwards towards light source 108.
  • As light 116 hits and travels within display 112, at least a portion of display 112 causes light 116 emitted by light source 108 to constructively and destructively interfere with each other to result in one or more specific patterns of light dots 118 that are projected from the other side of display 112, away from computing device 102, and onto scene 120.
  • Display 112 includes periodic structure 111 that may act as a diffractive optical element for light source 108.
  • periodic structure 111 may be a structure of display 112 that includes pixels or subpixels of the display that emit light to display content, such as a user interface, at display 112.
  • Light source 108 may emit light 116 towards periodic structure 111, and periodic structure 111 may act as a diffractive optical element to cause light 116 to constructively and destructively interfere with each other to result in one or more specific patterns of light dots 118 that display 112 projects onto scene 120.
  • Periodic structure 111 of display 112 may be formed to have a two-dimensional arrangement of transparent or semi-transparent regions that are spaced between opaque regions of periodic structure 111.
  • the arrangement of transparent or semi-transparent regions of periodic structure 111 may be a two-dimensional regular repeating arrangement (e.g., a lattice) of transparent or semi-transparent regions or a two-dimensional interlaced repeating arrangement of transparent or semi-transparent regions.
  • the arrangement of transparent or semi-transparent regions spaced between opaque regions may cause light waves (e.g., light 116) to interfere with each other, which may cause constructive interference and destructive interference of the light waves.
  • the constructive and destructive interference of the light waves as the light waves hit periodic structure 111 may cause the light waves to diffract, resulting in patterns of light dots 118 that pass through the transparent or semi-transparent regions of periodic structure 111.
  • Such patterns of light dots 118 may therefore travel out of display 112 and be projected onto scene 120.
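  • As a rough numerical illustration of the diffraction described above (an editorial sketch; the grid size, period, and aperture width are assumptions, not values from the application), the far-field dot pattern of a binary periodic aperture can be approximated in the Fraunhofer regime as the squared magnitude of its two-dimensional Fourier transform:

```python
# Sketch: far-field dot pattern of a periodic binary aperture (Fraunhofer
# approximation, intensity proportional to |FFT(mask)|^2). Dimensions are illustrative.
import numpy as np

N = 1024          # simulation grid size (pixels)
period = 32       # assumed period of the structure, in grid pixels
aperture = 6      # assumed width of each transparent gap, in grid pixels

# Binary transmission mask: 1 = transparent gap between opaque regions, 0 = opaque.
mask = np.zeros((N, N))
for y in range(0, N, period):
    for x in range(0, N, period):
        mask[y:y + aperture, x:x + aperture] = 1.0

# Far-field intensity: a regular grid of bright "dots" at the diffraction orders.
far_field = np.fft.fftshift(np.fft.fft2(mask))
intensity = np.abs(far_field) ** 2
intensity /= intensity.max()

# A simple threshold confirms that a discrete dot pattern emerges.
print("bright diffraction orders:", int((intensity > 1e-3).sum()))
```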
  • Camera 106 may capture the pattern of light dots projected onto scene 120, such as by capturing one or more images of scene 120. Such a pattern of light dots captured by camera 106 may be a result of the three-dimensionality of scene 120 reflecting the light dots projected by computing device 102. Camera 106 may be positioned in any suitable position in computing device 102, such as under display 112.
  • Computing device 102 may execute depth mapping module 110 to determine depth values associated with the plurality of light dots captured by camera 106.
  • Depth mapping module 110 may use any suitable technique to determine a corresponding depth value for each light dot captured by camera 106 and to generate a depth map of scene 120 based on the depth values of the plurality of light dots.
  • Computing device 102 may perform one or more functions using the depth map generated by depth mapping module 110. For example, computing device 102 may perform facial recognition to authenticate authorized users of computing device 102. Computing device 102 may also use the depth map to aid in performing functions such as portrait mode photography, active autofocusing of cameras, and the like.
  • the techniques of this disclosure may provide certain technical advantages. For example, by using display 112 as a diffracting optical element for light source 108 to project patterns of light dots onto a scene, the techniques of this disclosure may enable light source 108 to be placed under display 112.
  • Enabling light source 108 to be placed under display 112 may reduce the size of the form factor of computing device 102 compared with computing devices having a dedicated diffracting optical element.
  • Enabling light source 108 to be placed under display 112 may also enable computing device 102 to include a relatively larger display 112 compared with computing devices having a dedicated diffracting optical element.
  • the techniques of this disclosure may reduce the number of components of a computing device. Reducing the number of components in a computing device may decrease the cost and complexity in manufacturing the computing device and may increase the reliability of the computing devices.
  • FIG. 1B shows a profile view of a portion of display 112 of computing device 102 in further detail. While display 112 shown in FIG. 1B is an OLED display, the techniques illustrated in FIG. 1B may also be applicable to other types of display technologies.
  • a portion of display 112 may include hard coat 122, glass layer 124, polarizer layer 126, encapsulation layer 128, OLED layer 138, metal layer 132, and support layer 130.
  • Display 112 may include additional layers and/or components not shown in FIG. 1B.
  • Support layer 130 may provide structural integrity of the assembly of display 112.
  • Metal layer 132 may also be referred to as a metal mask layer or the backplane or the thin-film transistor (TFT) layer, and may include metal structure 134 (e.g., a TFT) on which light emitters 136A-136C (“light emitters 136”) of OLED layer 138 are disposed.
  • light emitters may be OLEDs.
  • light emitters 136 may include light emitter 136A that emits red light, light emitter 136B, disposed on corresponding metal structure 134, that emits green light, and light emitter 136C that emits blue light.
  • light emitters 136 shown in FIG. 1B may be an example of red, green, and blue subpixels making up a single pixel of display 112.
  • OLED layer 138 and metal layer 132 of display 112 may, together, form periodic structure 111 of display 112. As can be seen, there are small transparent apertures, such as gaps or empty space, between each metal structure 134 of metal layer 132 and between light emitters 136 of OLED layer 138 within periodic structure 111. As such, periodic structure 111 forms a regular repeating pattern of transparent apertures between metal structures 134 and light emitters 136.
  • Hard coat 122, glass layer 124, polarizer layer 126, encapsulation layer 128, and support layer 130 may be transparent (e.g., clear) or semi-transparent so that light may travel through hard coat 122, glass layer 124, polarizer layer 126, encapsulation layer 128, and support layer 130.
  • light source 108 emits light 116 towards display 112
  • light 116 may travel through support layer 130 towards periodic structure 111.
  • light source 108 emits light 116 towards periodic structure 111 that acts as a diffractive optical element to diffract light 116
  • light source 108 may emit light 116 in the form of a collimated light projection, such as a collimated beam
  • display 112 may receive light 116 emitted from light source 108 in the form of the collimated light projection, which display 112 may diffract to produce patterns of light dots 118.
  • the arrangement of metal structures 134 and light emitters 136 and the arrangement of transparent apertures, such as gaps, between metal structures 134 and between light emitters 136 may cause light 116 to bounce off of metal structures 134 and to travel through the transparent apertures, which may cause constructive interference and destructive interference of light 116.
  • Such constructive interference and destructive interference may diffract light 116 into patterns of light dots 118 that travel through encapsulation layer 128, polarizer layer 126, glass layer 124, and hard coat 122 and are therefore projected onto a three-dimensional scene.
  • FIG. 1C illustrates a plan view of periodic structure 111 of display 112 of computing device 102. While periodic structure 111 is shown in FIG. 1C in the context of an OLED display, the techniques illustrated in FIG. 1C may also be applicable to other types of display technologies. In some examples, the techniques may also be applicable to an LCD or a microLED display, where a plurality of pixels of the LCD or the microLED may be semi-transparent to form a regular or repeating arrangement of semi-transparent regions of periodic structure 111.
  • an LCD or a microLED display may use Red Green Blue White (RGBW) filters to form a regular or repeating arrangement of semi-transparent regions of periodic structure 111, where the White filters may be transparent filters that enable light 116 emitted by light source 108, as well as light produced as a result of constructive and destructive interference with light 116, to pass through.
  • metal structures 134 of periodic structure 111 of display 112, on which red, green, and blue light emitters 136 are disposed, may be opaque regions of periodic structure 111 arranged in a two-dimensional regular repeating pattern.
  • the arrangement of metal structures 134 may form a regular or interlaced repeating arrangement of space 135 between the opaque regions of periodic structure 111, which may be metal structures 134.
  • Such space 135 between metal structures 134 may be transparent or semi-transparent regions of periodic structure 111, such as transparent apertures or gaps, thereby enabling light, which may be light 116 emitted by light source 108 as well as light produced as a result of constructive and destructive interference with light 116, to pass through such space 135 between metal structures 134 in the form of patterns of light dots that are projected onto a three-dimensional scene, such as scene 120.
  • FIG. 1D illustrates an example of dot patterns that may be projected by computing device 102 onto a scene.
  • periodic structure 111 of display 112 may diffract light 116 emitted by light source 108 into a pattern of light dots 118 that is projected onto a scene, such as scene 120.
  • the pattern of light dots 118 may result from light 116 being emitted by a laser at about 940 nm.
  • Camera 106 may capture such a pattern of dots 118, and depth mapping module 110 may determine a depth map of the scene based on the pattern of dots 118 projected onto the scene.
  • a computing device and/or a computing system analyzes information (e.g., context, locations, speeds, search queries, etc.) associated with a computing device and a user of a computing device, only if the computing device receives permission from the user of the computing device to analyze the information.
  • a computing device or computing system can collect or may make use of information associated with a user
  • the user may be provided with an opportunity to provide input to control whether programs or features of the computing device and/or computing system can collect and make use of user information (e.g., information about a user’s current location, current speed, etc.), or to dictate whether and/or how the device and/or system may receive content that may be relevant to the user.
  • certain data may be treated in one or more ways before it is stored or used by the computing device and/or computing system, so that personally identifiable information is removed.
  • a user’s identity may be treated so that no personally identifiable information can be determined about the user, or a user’s geographic location may be generalized where location information is obtained (such as to a city, ZIP code, or state level), so that a particular location of a user cannot be determined.
  • the user may have control over how information is collected about the user and used by the computing device and computing system.
  • FIG. 2 is a block diagram illustrating an example computing device 202, in accordance with one or more aspects of the present disclosure.
  • Computing device 202 of FIG. 2 is an example of computing device 102 of FIGS. 1A-1D.
  • Computing device 202 is only one particular example of computing device 102 of FIGS. 1A-1D, and many other examples of computing device 102 may be used in other instances.
  • computing device 202 may be a mobile computing device (e.g., a smartphone), or any other computing device.
  • Other examples of computing device 202 may include a subset of the components included in example computing device 202 of FIG. 2 or may include additional components not shown in FIG. 2.
  • computing device 202 includes display 212, one or more processors 240, one or more input devices 242, one or more communication units 244, one or more output devices 246, and one or more storage devices 248.
  • Storage devices 248 of computing device 202 also include depth sensing module 252, depth mapping module 210, facial recognition module 258, operating system 254, depth mapping model 256, and facial recognition data 270.
  • Communication channels 250 may interconnect each of the components 240, 242, 244, 246, 248, and 212 for inter-component communications (physically, communicatively, and/or operatively).
  • communication channels 250 may include a system bus, a network connection, an inter-process communication data structure, or any other method for communicating data.
  • One or more input devices 242 of computing device 202 may be configured to receive input. Examples of input are tactile, audio, and video input.
  • Input devices 242 of computing device 202 include a presence-sensitive display, touch-sensitive screen, mouse, keyboard, voice responsive system, video camera, microphone, or any other type of device for detecting input from a human or machine.
  • one or more input devices 242 may include camera 206, which is an example of camera 106 of FIGS. 1A-1D.
  • Camera 206 may be any device capable of capturing images of the patterns of light emitted from light source 208 that are projected onto a three-dimensional scene by computing device 202 for the purposes of performing depth mapping. That is, camera 206 may capture the patterns of light that are reflected by the three-dimensional scene.
  • As the light emitted by light source 208 may have a relatively long wavelength, which may be in the near infrared or infrared range, camera 206 may be a device that is able to capture the light patterns at such relatively long wavelengths.
  • One or more output devices 246 of computing device 202 may be configured to generate output. Examples of output are tactile, audio, and video output.
  • Output devices 246 of computing device 202 include a presence-sensitive organic light emitting diode (OLED) display, sound card, video graphics adapter card, speaker, monitor, a presence-sensitive liquid crystal display (LCD), or any other type of device for generating output to a human or machine.
  • one or more output devices 246 may include light source 208, which is an example of light source 108 of FIGS. 1A-1D.
  • Light source 208 may be any device capable of emitting light, such as in the form of a collimated beam, that is used to perform depth mapping of three-dimensional scenes.
  • light source 208 may be capable of generating light having a center wavelength that is between about 400 nanometers (nm) and about 850 nm, or longer.
  • light source 208 may be capable of generating light that is in the near infrared range to the infrared range, such that human eyes are insensitive to the light emitted by light source 208.
  • Light source 208 may be any suitable light source having a certain degree of spatial and temporal coherence to generate high contrast light patterns.
  • light source 208 may be a laser, such as a super luminescent diode laser.
  • light source 208 may be one or more filtered light-emitting diodes.
  • light source 208 may be one or more vertical-cavity surface-emitting lasers (VCSELs).
  • One or more communication units 244 of computing device 202 may be configured to communicate with external devices via one or more wired and/or wireless networks by transmitting and/or receiving network signals on the one or more networks. Examples of communication units 244 include a network interface card.
  • communication units 244 may include short wave radios, cellular data radios, wireless network radios, as well as universal serial bus (USB) controllers.
  • Display 212 may be an example of display 112 of FIGS. 1A-1D.
  • Display 212 may function as an output (e.g., display) device using any one or more display devices, such as a liquid crystal display (LCD), dot matrix display, light emitting diode (LED) display, microLED, organic light-emitting diode (OLED) display, e-ink, or similar monochrome or color display capable of outputting visible information to a user of computing device 202.
  • display 212 of computing device 202 may include functionality of input devices 242 and/or output devices 246.
  • display 212 may be or may include a presence-sensitive input device.
  • a presence-sensitive input device may detect an object at and/or near a screen.
  • a presence-sensitive input device may detect an object, such as a finger or stylus, that is within 2 inches or less of the screen.
  • the presence-sensitive input device may determine a location (e.g., an (x,y) coordinate) of a screen at which the object was detected.
  • a presence-sensitive input device may detect an object six inches or less from the screen; other ranges are also possible.
  • the presence-sensitive input device may determine the location of the screen selected by a user’s finger using capacitive, inductive, and/or optical recognition techniques.
  • a presence-sensitive input device also provides output to a user using tactile, audio, or video stimuli as described with respect to output device 246, e.g., at a display.
  • display 212 may present a user interface.
  • display 212 may, in some examples, represent an external component that shares a data path with computing device 202 for transmitting and/or receiving input and output.
  • display 212 represents a built-in component of computing device 202 located within and physically connected to the external packaging of computing device 202 (e.g., a screen on a mobile phone).
  • display 212 represents an external component of computing device 202 located outside and physically separated from the packaging of computing device 202 (e.g., a monitor, a projector, etc. that shares a wired and/or wireless data path with a tablet computer).
  • Display 212 includes periodic structure 211, which is an example of periodic structure 111 of FIGS. 1A-1D. Similar to periodic structure 111, periodic structure 211 may be formed of one or more display components, such as structures (e.g., a metal layer) on which pixels or subpixels of display 212 are disposed.
  • One or more storage devices 248 within computing device 202 may store information for processing during operation of computing device 202.
  • storage device 248 is a temporary memory, meaning that a primary purpose of storage device 248 is not long-term storage.
  • Storage devices 248 on computing device 202 may be configured for short-term storage of information as volatile memory and therefore not retain stored contents if powered off. Examples of volatile memories include random access memories (RAM), dynamic random access memories (DRAM), static random access memories (SRAM), and other forms of volatile memories known in the art.
  • Storage devices 248 may be configured to store larger amounts of information than volatile memory.
  • Storage devices 248 may further be configured for long-term storage of information as non-volatile memory space and retain information after power on/off cycles. Examples of non-volatile memories include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories.
  • Storage devices 248 may store program instructions and/or information (e.g., data) associated with depth sensing module 252, depth mapping module 210, facial recognition module 258, operating system 254, depth mapping model 256, and facial recognition data 270.
  • processors 240 may implement functionality and/or execute instructions within computing device 202.
  • processors 240 on computing device 202 may receive and execute instructions stored by storage devices 248 that execute the functionality of depth sensing module 252, depth mapping module 210, facial recognition module 258, operating system 254, and depth mapping model 256.
  • These instructions executed by processors 240 may, for example, cause one or more processors 240 to perform depth mapping of a three-dimensional scene.
  • One or more processors 240 are configured to execute depth sensing module 252 to control the operations of light source 208 and camera 206.
  • depth sensing module 252 may communicate with light source 208 to cause light source 208 to emit light that display 212 may diffract into a plurality of light dots that are projected onto a scene.
  • depth sensing module 252 may communicate with camera 206 to capture the plurality of light dots projected onto the scene.
  • One or more processors 240 are configured to execute depth mapping module 210 to determine depth values associated with the plurality of light dots captured by camera 206.
  • the depth value of a light dot may correspond to a relative distance of the light dot from camera 206.
  • Depth mapping module 210 may determine the depth value of each of a plurality of light dots captured by camera 206 and may also determine interpolated depth values between light dots captured by camera 206.
  • Depth mapping module 210 may therefore analyze the one or more dot patterns captured by camera 206 to determine how the one or more dot patterns projected by computing device 202 have been distorted by the scene to determine the depth values of the one or more dot patterns captured by camera 206.
  • depth mapping module 210 may use a triangulation technique to determine the depth values of dots captured by camera 206, which uses the known positions of the projected dots, the dots captured by camera 206, and the angle of camera 206 to the projected dots to determine the distance between camera 206 and each dot captured by camera 206.
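  • A minimal sketch of such a triangulation, assuming a rectified pinhole geometry with a known light-source-to-camera baseline and focal length (the constants and the disparity convention below are illustrative assumptions, not the application's algorithm):

```python
# Sketch: structured-light triangulation under a rectified-geometry assumption.
# Depth follows Z = f * b / disparity, where disparity is the pixel offset between
# where a dot is expected (from calibration) and where the camera observed it.
import numpy as np

FOCAL_PX = 800.0     # assumed camera focal length, in pixels
BASELINE_M = 0.025   # assumed light-source-to-camera baseline, in meters

def depth_from_disparity(x_expected: np.ndarray, x_observed: np.ndarray) -> np.ndarray:
    disparity = x_expected - x_observed                       # pixels
    disparity = np.where(np.abs(disparity) < 1e-6, np.nan, disparity)
    return FOCAL_PX * BASELINE_M / disparity                  # meters

# Example: three dot centroids; a larger disparity implies a closer surface.
print(depth_from_disparity(np.array([410.0, 300.0, 520.0]),
                           np.array([400.0, 296.0, 505.0])))  # ~[2.0, 5.0, 1.33] m
```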
  • Depth mapping module 210 may, during a calibration phase, determine and assign angular coordinates to the locations of centroids of dots that are projected by computing device 202 and captured by camera 206, and may associate depth values to the angular coordinates, so that each angular coordinate is associated with a particular depth value. For example, given a dot projected into a scene, depth mapping module 210 may determine the angular coordinate of the dot from light source 208.
  • depth mapping module 210 may determine the distance of the dot from camera 206, which corresponds with a depth value associated with the dot, by determining the distance from camera 206 at which a ray of light projected by light source 208 via display 212 towards the centroid of the dot may intersect a ray projected from camera 206 towards the centroid of the dot.
  • Depth mapping module 210 may therefore analyze the one or more dot patterns captured by camera 206 to determine angular coordinates of the dots in the one or more dot patterns, and may determine the depth values associated with the angular coordinates. Depth mapping module 210 may also apply an interpolation technique to determine depth values of points that are between dots in the one or more dot patterns. In this way, depth mapping module 210 may create a depth map based on the one or more dot patterns captured by camera 206.
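  • A minimal sketch of the interpolation step, assuming dot centroids have already been located and assigned depth values; scipy's griddata is used here only as one plausible choice, not as the application's method:

```python
# Sketch: interpolate a dense depth map from sparse per-dot depth values.
import numpy as np
from scipy.interpolate import griddata

# Hypothetical dot centroids (x, y) in pixels and their estimated depths in meters.
dot_xy = np.array([[10, 12], [40, 15], [22, 48], [55, 60], [70, 30]], dtype=float)
dot_depth = np.array([0.42, 0.45, 0.50, 0.61, 0.55])

# Dense pixel grid over the region covered by the dots.
xs, ys = np.meshgrid(np.arange(80), np.arange(80))

# Linear interpolation between dots; nearest-neighbour fill outside their convex hull.
depth_map = griddata(dot_xy, dot_depth, (xs, ys), method="linear")
fill = griddata(dot_xy, dot_depth, (xs, ys), method="nearest")
depth_map = np.where(np.isnan(depth_map), fill, depth_map)

print(depth_map.shape, float(depth_map.min()), float(depth_map.max()))
```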
  • depth mapping module 210 may use an indirect time of flight technique to determine the depth values of one or more dot patterns captured by camera 206.
  • computing device 202 may modulate dot patterns, also referred to as spots, that are projected onto a scene.
  • Camera 206 may capture and demodulate the dot patterns to measure the phase delay of each dot, thereby extracting depth information from each dot.
  • Depth mapping module 210 may therefore be able to determine depth values associated with dots in the one or more dot patterns captured by camera 206.
  • Depth mapping module 210 may also apply an interpolation technique to determine depth values of points that are between dots in the one or more dot patterns. In this way, depth mapping module 210 may create a depth map based on the one or more dot patterns captured by camera 206.
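  • For context, the standard phase-delay relationship underlying indirect time of flight, where \(c\) is the speed of light, \(f_{\text{mod}}\) is the assumed modulation frequency, and \(\varphi\) is the measured phase delay of a dot:

    \[ d = \frac{c\,\varphi}{4\pi f_{\text{mod}}}, \qquad d_{\max} = \frac{c}{2 f_{\text{mod}}} \]

    For example, a 100 MHz modulation frequency gives an unambiguous range \(d_{\max}\) of about 1.5 m; the application does not specify a modulation frequency, so this value is purely illustrative.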
  • depth mapping module 210 may determine the depth value of one or more dot patterns captured by camera 206 by using depth mapping model 256 that is a machine-trained model trained via machine learning to generate a depth map for one or more dot patterns.
  • Depth mapping model 256 may be trained via supervised machine learning. For example, depth mapping model 256 may be trained using training data that pairs one or more dot patterns of a three-dimensional scene with a depth map of the scene, which may have been captured using a calibrated depth camera, to generate hyperparameters for depth mapping model 256.
  • depth mapping module 210 may use depth mapping model 256 to generate a depth map for one or more depth patterns. For example, depth mapping module 210 may input, into depth mapping model 256, one or more dot patterns captured by camera 206, and depth mapping model 256 may generate and output a depth map of the inputted one or more dot patterns.
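  • A hedged sketch of what such a machine-trained depth mapping model could look like: a small fully convolutional network that maps a one-channel dot-pattern image to a dense depth map. The architecture, sizes, and loss below are illustrative assumptions, not the application's model:

```python
# Sketch: a minimal learned depth mapper for dot-pattern images (PyTorch).
import torch
import torch.nn as nn

class DotPatternDepthNet(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),   # per-pixel depth estimate
        )

    def forward(self, dot_image: torch.Tensor) -> torch.Tensor:
        return self.net(dot_image)

# Supervised training pairs a captured dot pattern with a reference depth map
# (e.g., from a calibrated depth camera, as described above).
model = DotPatternDepthNet()
dot_image = torch.rand(1, 1, 128, 128)        # normalized dot-pattern image
reference_depth = torch.rand(1, 1, 128, 128)  # reference depth map (placeholder)
loss = nn.functional.l1_loss(model(dot_image), reference_depth)
loss.backward()
```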
  • the depth mapping module 210 may use the techniques described in this disclosure of determining depth values of dot patterns to augment additional depth mapping techniques. Using the techniques described in this disclosure to augment additional depth mapping techniques may increase the accuracy of depth maps of scenes that are generated by computing device 202.
  • depth mapping module 210 may determine depth values of dot patterns to augment depth maps of scenes generated using stereoscopic imaging.
  • Stereoscopic imaging is a technique that uses two or more cameras to capture images of a scene from slightly different viewpoints. By analyzing the differences between the images, the techniques of stereoscopic imaging may enable computing device 202 to infer depth information and create a three-dimensional representation of the scene.
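  • A minimal sketch of the stereoscopic baseline that the dot-pattern depth values could augment, using OpenCV's block matcher on an already-rectified image pair; file names and calibration constants are placeholders:

```python
# Sketch: disparity-based depth from a rectified stereo pair (OpenCV).
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder rectified images
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

FOCAL_PX, BASELINE_M = 800.0, 0.04          # assumed calibration values
with np.errstate(divide="ignore", invalid="ignore"):
    stereo_depth = FOCAL_PX * BASELINE_M / disparity    # meters; invalid where disparity <= 0
```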
  • Computing device 202 may use the depth map of a scene that is generated by depth mapping module 210 to perform various functions. For example, computing device 202 may use depth mapping to perform facial recognition of authorized users of computing device 202. An authorized user may enroll their facial features at computing device 202. For example, computing device 202 may use the techniques described herein to scan the facial features of an authorized user and may store the enrolled facial features of the authorized user as facial recognition data 270.
  • one or more processors 240 may execute facial recognition module 258 that communicates with light source 208 and camera 206 to project one or more dot patterns onto a user’s face and to capture the dot patterns projected onto the user’s face.
  • Depth mapping module 210 may generate a depth map of the user’s face based on the dot patterns captured by camera 206, and facial recognition module 258 may compare the depth map of the user’s face with the enrolled facial features of the authorized user of computing device 202 as stored in facial recognition data 270 to determine whether the user is an authorized user of computing device 202.
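  • A deliberately simplified sketch of the comparison step: score a captured face depth map against the enrolled one and accept when the error is below a threshold. The metric and threshold are assumptions for illustration; a production pipeline would add alignment, liveness checks, and learned embeddings:

```python
# Sketch: compare a captured face depth map against enrolled facial depth data.
import numpy as np

ACCEPT_THRESHOLD_M = 0.004   # assumed mean-absolute-error threshold, in meters

def is_authorized(captured_depth: np.ndarray, enrolled_depth: np.ndarray) -> bool:
    valid = np.isfinite(captured_depth) & np.isfinite(enrolled_depth)
    error = float(np.abs(captured_depth[valid] - enrolled_depth[valid]).mean())
    return error < ACCEPT_THRESHOLD_M
```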
  • facial recognition data 270 and facial recognition module 258 may be stored in secure hardware that is physically isolated from other components of computing device 202. Storing facial recognition data 270 and facial recognition module 258 in such secure hardware may prevent unauthorized access and tampering. Further, facial recognition data 270 and facial recognition module 258 may also be encrypted, such that facial recognition data 270 and facial recognition module 258 may remain secure and inaccessible even if the secure hardware is compromised.
  • FIG. 3 is a flowchart illustrating example operations of an example computing device configured to generate depth maps of three-dimensional scenes, in accordance with one or more aspects of the present disclosure. For purposes of illustration only, the example operations are described below within the context of computing device 202 of FIG. 2.
  • a light source 208 of computing device 202 may emit light 116 towards at least a portion of a display 212 having a periodic structure 211, wherein the periodic structure 211 of the display 212 diffracts the light 116 into a plurality of light dots 118 that are projected onto a scene 120 (302).
  • the periodic structure 211 includes a two-dimensional arrangement of opaque regions that forms a two-dimensional arrangement of transparent regions between the opaque regions.
  • the opaque regions of the periodic structure 211 include metal structures 134, and light emitters of the display 212 are disposed on the metal structures 134.
  • the display 212 includes an organic light-emitting diode (OLED) display, and the light emitters include OLEDs.
  • the periodic structure 211 is operable to cause the light 116 emitted by the light source 208 to constructively and destructively interfere with each other to produce the plurality of light dots 118 that are projected onto the scene 120.
  • the light waves created via constructive and destructive interference of the light 116 emitted by the light source 208 pass through the transparent regions of the periodic structure 211 to form the plurality of light dots 118.
  • the light 116 emitted by the light source 208 towards at least the portion of the display 212 is a collimated beam.
  • the display 212 is a presence-sensitive display.
  • the light source 208 includes one or more high emissivity diodes behind the display 212.
  • the display 212 includes a metal mask layer as the periodic structure 211, the metal mask layer including transparent apertures positioned over the one or more high emissivity diodes to produce, from collimated light emissions of the one or more high emissivity diodes (e.g., the high emissivity diodes may be a collimated light source), the plurality of light dots 118 in a determined pattern.
  • a camera 206 of computing device 202 may capture the plurality of light dots 118 in the scene 120 (304).
  • One or more processors 240 of computing device 202 may determine depth values associated with the plurality of light dots 118 (306). For example, to determine the depth values of the plurality of light dots 118, the one or more processors 240 may input the plurality of light dots 118 into a machine-trained model, such as depth mapping model 256, that is trained via machine learning to generate the depth values for the plurality of light dots 118. In some examples, one or more processors 240 may perform facial recognition based on the depth values associated with the plurality of light dots 118.
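  • Putting the three steps together, a skeleton of the flow of FIG. 3 (302, 304, 306) with the device-facing calls reduced to placeholder callables; this is an editorial sketch of how the pieces described above might be wired, not an actual device API:

```python
# Sketch: emit light toward the display, capture the diffracted dots, compute depth.
from typing import Callable
import numpy as np

def depth_map_from_display_dots(
    emit_light: Callable[[], None],                     # (302) drive the under-display light source
    capture_image: Callable[[], np.ndarray],            # (304) capture the projected dot pattern
    dots_to_depth: Callable[[np.ndarray], np.ndarray],  # (306) triangulation, iToF, or a trained model
) -> np.ndarray:
    emit_light()
    dot_image = capture_image()
    return dots_to_depth(dot_image)

# Dummy stand-ins so the skeleton runs end to end.
depth = depth_map_from_display_dots(
    emit_light=lambda: None,
    capture_image=lambda: np.random.rand(128, 128),
    dots_to_depth=lambda img: np.full_like(img, 0.5),
)
print(depth.shape)
```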
  • Example 1 A computing device comprising: a display having a periodic structure; a light source configured to emit light towards at least a portion of the display, the periodic structure of the display diffracting the light into a plurality of light dots that are projected onto a scene; a camera configured to capture the plurality of light dots in the scene; and one or more processors configured to determine depth values associated with the plurality of light dots.
  • Example 2 The computing device of example 1, wherein the periodic structure comprises a two-dimensional arrangement of opaque regions that forms a two- dimensional arrangement of transparent regions between the opaque regions.
  • Example 3 The computing device of example 2, wherein the opaque regions of the periodic structure include metal structures, and wherein light emitters of the display are disposed on the metal structures.
  • Example 4 The computing device of any of examples 2 and 3, wherein the display comprises one of: an organic light-emitting diode (OLED) display, a liquid crystal display (LCD), or a microLED display.
  • Example 5 The computing device of any of examples 1-4, wherein the light source includes one or more high emissivity diodes behind the display.
  • Example 6 The computing device of example 5, wherein the display includes a metal mask layer as the periodic structure, the metal mask layer including transparent apertures positioned over the one or more high emissivity diodes to produce, from diode emissions of the one or more high emissivity diodes, the plurality of light dots in a determined pattern.
  • Example 7 The computing device of any of examples 2-6, wherein the periodic structure is operable to cause the light emitted by the light source to constructively and destructively interfere with each other to produce the plurality of light dots that are projected onto the scene.
  • Example 8 The computing device of example 7, wherein light waves created via constructive and destructive interference of the light emitted by the light source pass through the transparent regions of the periodic structure to form the plurality of light dots.
  • Example 10 The computing device of any of examples 1-9, wherein the display comprises a presence-sensitive display.
  • Example 11 The computing device of any of examples 1-10, wherein to determine the depth values of the plurality of light dots, the one or more processors are further configured to: determine the depth values of the plurality of dots using one of: a triangulation technique, a time of flight technique, or a machine-trained model trained via machine learning to generate the depth values for the plurality of light dots.
  • Example 12 The computing device of any of examples 1-11, wherein the one or more processors are further configured to: perform facial recognition based on the depth values associated with the plurality of light dots.
  • Example 13 A method comprising: emitting, by a light source of a computing device, light towards at least a portion of a display of the computing device, wherein a periodic structure of the display diffracts the light into a plurality of light dots that are projected onto a scene; capturing, by a camera of the computing device, the plurality of light dots in the scene; and determining, by one or more processors of the computing device, depth values associated with the plurality of light dots.
  • Example 14 The method of example 13, wherein the periodic structure comprises a two-dimensional arrangement of opaque regions that forms a two- dimensional arrangement of transparent regions between the opaque regions.
  • Example 15 The method of example 14, wherein the opaque regions of the periodic structure include metal structures, and wherein light emitters of the display are disposed on the metal structures.
  • Example 16 The method of any of examples 14 and 15, wherein the display comprises one of: an organic light-emitting diode (OLED) display, a liquid crystal display (LCD), or a microLED display.
  • OLED organic light-emitting diode
  • LCD liquid crystal display
  • microLED microLED
  • Example 17 The method of any of examples 14-16, wherein the periodic structure is operable to cause the light emitted by the light source to constructively and destructively interfere with each other to produce the plurality of light dots that are projected onto the scene.
  • Example 19 The method of any of examples 13-18, wherein the light emitted by the light source towards at least the portion of the display comprises a collimated beam.
  • Example 20 The method of any of examples 13-19, wherein the display comprises a presence-sensitive display.
  • Example 21 The method of any of examples 13-20, wherein determining the depth values of the plurality of light dots further comprises: determining, by the one or more processors, the depth values of the plurality of dots using one of: a triangulation technique, a time of flight technique, or a machine-trained model trained via machine learning to generate the depth values for the plurality of light dots.
  • Example 22 The method of any of examples 13-21, further comprising: performing, by the one or more processors, facial recognition based on the depth values associated with the plurality of light dots.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Environmental & Geological Engineering (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

A computing device may include a display having a periodic structure. The computing device includes a light source that emits light towards at least a portion of the display, and the periodic structure of the display may diffract the light into a plurality of light dots that are projected onto a scene. The computing device includes a camera that may capture the plurality of light dots in the scene. The computing device may determine depth values associated with the plurality of light dots.

Description

DEPTH CAMERA USING DISPLAY ENCODED DOT PATTERN
BACKGROUND
[0001] A computing device may include depth mapping hardware that is used to perform depth mapping of three dimensional scenes. For example, a computing device may use such hardware to perform depth mapping of faces for the purposes of performing facial recognition.
SUMMARY
[0002] In general, aspects of this disclosure are directed to techniques for a computing device to generate patterns of dots that are used to perform depth mapping of a scene without use of a separate diffractive optical element that diffracts light into one or more patterns of light dots. Instead, the display of the computing device may act as a diffractive optical element that diffracts light into patterns of dots that can be used to perform depth mapping of a three-dimensional scene.
[0003] The computing device may include a light source that may emit light towards the display of the computing device. The display may include a periodic structure that forms a two-dimensional arrangement of opaque regions and transparent regions. The arrangement of opaque regions and transparent regions of the periodic structure may cause the light to constructively and destructively interfere with each other, thereby diffracting the light into one or more specific patterns of light dots that are projected onto a three-dimensional scene. A camera of the computing device may capture the pattern of light dots in the three-dimensional scene and may determine depth values associated with the pattern of light dots to generate a depth map of the three-dimensional scene.
[0004] In some aspects, the techniques described herein relate to a computing device including: a display having a periodic structure; a light source configured to emit light towards at least a portion of the display, the periodic structure of the display diffracting the light into a plurality of light dots that are projected onto a scene; a camera configured to capture the plurality of light dots in the scene; and one or more processors configured to determine depth values associated with the plurality of light dots.
[0005] In some aspects, the techniques described herein relate to a method including: emitting, by a light source of a computing device, light towards at least a portion of a display of the computing device, wherein a periodic structure of the display diffracts the light into a plurality of light dots that are projected onto a scene; capturing, by a camera of the computing device, the plurality of light dots in the scene; and determining, by one or more processors of the computing device, depth values associated with the plurality of light dots.
[0006] The details of one or more examples of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the disclosure will be apparent from the description and drawings, and from the claims.
BRIEF DESCRIPTION OF DRAWINGS
[0007] FIG. 1A is a conceptual diagram illustrating an example environment in which a computing device is configured to perform depth mapping of a three dimensional scene, in accordance with one or more aspects of the present disclosure.
[0008] FIG. 1B shows a profile view of a portion of a display of a computing device in further detail.
[0009] FIG. 1C illustrates a plan view of a periodic structure of a display of a computing device.
[0010] FIG. 1D illustrates an example of dot patterns that may be projected by a computing device onto a scene.
[0011] FIG. 2 is a block diagram illustrating an example computing device, in accordance with one or more aspects of the present disclosure.
[0012] FIG. 3 is a flowchart illustrating example operations of an example computing device configured to generate depth maps of three-dimensional scenes, in accordance with one or more aspects of the present disclosure.
DETAILED DESCRIPTION
[0013] FIG. 1A is a conceptual diagram illustrating an example environment 100 in which a computing device 102 is configured to perform depth mapping of a three dimensional scene 120, in accordance with one or more aspects of the present disclosure. As shown in FIG. 1A, computing device 102 is a mobile computing device (e.g., a mobile phone). However, in other examples, computing device 102 may be a tablet computer, a laptop computer, a desktop computer, a gaming system, a media player, an e-book reader, a television platform, an automobile navigation system, a wearable computing device (e.g., a computerized watch, computerized eyewear, a computerized glove), or any other type of mobile or non-mobile computing device. [0014] Computing device 102 includes display 112. Display 112 may function as an input device for computing device 102 and as an output device for computing device 102. Display 112 may be implemented using various technologies. For instance, display 112 may function as an input device using a presence-sensitive input screen, such as a resistive touchscreen, a surface acoustic wave touchscreen, a capacitive touchscreen, a projective capacitive touchscreen, a pressure sensitive screen, an acoustic pulse recognition touchscreen, or another presence-sensitive display technology. Display 112 may function as an output (e.g., display) device using any one or more display devices, such as a liquid crystal display (LCD), dot matrix display, light emitting diode (LED) display, microLED, organic light-emitting diode (OLED) display, e-ink, or similar monochrome or color display capable of outputting visible information to a user of computing device 102.
[0015] Display 112 may be a presence-sensitive display that may receive tactile input from a user of computing device 102. Display 112 may receive indications of the tactile input by detecting one or more gestures from a user of computing device 102 (e.g., the user touching or pointing to one or more locations of display 112 with a finger or a stylus pen). Display 112 may present output to a user, for instance at a presence-sensitive display. Display 112 may present the output as a graphical user interface, which may be associated with functionality provided by computing device 102. For example, display 112 may present various user interfaces of components of a computing platform, operating system, applications, or services executing at or accessible by computing device 102 (e.g., an electronic message application, an Internet browser application, a mobile operating system, etc.). A user may interact with a respective user interface to cause computing device 102 to perform operations relating to a function.
[0016] Computing device 102 may include depth mapping module 110. Depth mapping module 110 may perform operations described herein using software, hardware, firmware, or a mixture of hardware, software, and firmware residing in and executing on computing device 102 or at one or more other remote computing devices. In some examples, depth mapping module 110 may be implemented as hardware, software, and/or a combination of hardware and software. Computing device 102 may execute depth mapping module 110 with one or more processors. Computing device 102 may execute depth mapping module 110 as or within a virtual machine executing on underlying hardware.
[0017] Depth mapping module 110 may be implemented in various ways. For example, depth mapping module 110 may be implemented as a downloadable or pre-installed application or “app.” In another example, depth mapping module 110 may be implemented as part of an operating system of computing device 102. Other examples of computing device 102 that implement techniques of this disclosure may include additional components not shown in FIG. 1A.
[0018] Computing device 102 includes camera 106 and light source 108 that computing device 102 may use to perform depth mapping of three-dimensional scenes. Light source 108 may be any device capable of emitting light that is used to perform depth mapping of three-dimensional scenes. In some examples, light source 108 may be capable of generating light having a center wavelength that is between about 400 nanometers (nm) and about 850 nm or longer. In some examples, light source 108 may be capable of generating light that is in the near infrared range to the infrared range, such that human eyes are insensitive to the light emitted by light source 108.
[0019] Light source 108 may be any suitable light source having a certain degree of spatial and temporal coherence to generate high contrast light patterns. In some examples, light source 108 may be a laser, such as a super luminescent diode laser. In some examples, light source 108 may be filtered light emitting diodes. In some examples, light source 108 may be Vertical Cavity Surface Emitting Lasers (VCSELs). In some examples, light source 108 may be one or more high emissivity diodes, which may be light emitting diodes that are capable of generating light having a center wavelength that is between about 400 nanometers (nm) to about 850nm or longer.
[0020] Camera 106 may be any device capable of capturing images of the patterns of light emitted from light source 108 that are projected onto a three-dimensional scene by computing device 102 for the purposes of performing depth mapping. That is, camera 106 may capture the patterns of light that are reflected by the three-dimensional scene. As the light emitted by light source 108 may have a relatively high wavelength, which may be in the near infrared or infrared range, camera 106 may be a device that is able to capture the light patterns at such relatively high wavelengths.
[0021] Computing device 102 may include one or more components that diffract the light emitted by light source 108 into one or more patterns of light dots (also referred to as "dots" or "dot patterns") that are projected onto a three-dimensional scene. In some examples, computing device 102 may include a separate diffractive optical element that diffracts the light emitted by light source 108 into one or more patterns of light dots. In some examples, such a diffractive optical element is integrated with light source 108 into a single module. Such a diffractive optical element may be designed to cause light waves emitted by light source 108 to constructively and destructively interfere with each other to result in one or more specific patterns of light dots that are projected from computing device 102 onto a three-dimensional scene. For example, a diffractive optical element may function as a beam splitter that splits the light emitted from light source 108 into multiple patterns of light that are angularly offset relative to each other and that overlap each other.
[0022] In computing devices having a light source, such as light source 108, and a diffractive optical element, the diffractive optical element may not be disposed behind any other components that may block the light source and the diffractive optical element from projecting patterns of light dots onto a three-dimensional scene. As such, in computing devices having an integrated display, such as a smartphone, the light source and the diffractive optical element may not be positioned below the components of the display.
[0023] In the example where a computing device is a smartphone, the computing device may use the light source and the diffractive optical element to perform depth mapping of human faces for the purposes of performing facial recognition. To enable a user to face the display of the smartphone while the smartphone performs facial recognition of the user, the light source and the diffractive optical element of the smartphone may be positioned to project patterns of light from the same side of the smartphone as the display.
[0024] However, accommodating the light source and the diffractive optical element on the same side of a computing device as the display of the computing device, without positioning the light source and the diffractive optical element below the display, may require increasing the size of the computing device or reducing the screen size of the display. While a computing device may include a notch or a cutout to accommodate the light source and the diffractive optical element, such notches or cutouts may reduce the screen space of the display and may increase the cost and complexity of designing and manufacturing displays that accommodate such notches or cutouts. In addition, displays that accommodate notches or cutouts may negatively impact the user experience of interacting with such displays and may cause issues with content placement in user interfaces displayed by such displays.
[0025] In accordance with aspects of this disclosure, at least a portion of display 112 of computing device 102 may act as a diffractive optical element for light source 108 to diffract light 116 emitted by light source 108 to project patterns of light dots 118 onto scene 120, which may be a three-dimensional scene. That is, instead of using a separate diffractive optical element that is a separate component from display 112, light source 108 may use at least a portion of display 112 as a diffractive optical element that is operable to diffract light 116 emitted by light source 108 to project patterns of light dots 118 onto a scene.
[0026] To enable light source 108 to use display 112 as a diffractive optical element, light source 108 may be disposed in computing device 102 behind display 112, such that light source 108 emits light 116 towards display 112 that diffracts the emitted light 116 out from computing device 102 into patterns of light dots 118 onto scene 120. That is, light source 108 may be disposed in computing device 102 behind display 112, such that light 116 emitted by light source 108 illuminates at least a portion of the side of display 112 that faces inwards towards light source 108. As light 116 hits and travels within display 112, at least a portion of display 112 causes light 116 emitted by light source 108 to constructively and destructively interfere with each other to result in one or more specific patterns of light dots 118 that are projected from the other side of display 112 away from computing device 102 and onto scene 120.
[0027] Display 112 includes periodic structure 111 that may act as a diffractive optical element for light source 108. In some examples, periodic structure 111 may be a structure of display 112 that includes pixels or subpixels of the display that emit light to display content, such as a user interface, at display 112. Light source 108 may emit light 116 towards periodic structure 111, and periodic structure 111 may act as a diffractive optical element to cause light 116 to constructively and destructively interfere with each other to result in one or more specific patterns of light dots 118 that display 112 projects onto scene 120. [0028] Periodic structure 111 of display 112 may be formed to have a two-dimensional arrangement of transparent or semi-transparent regions that are spaced between opaque regions of periodic structure 111. In some examples, the arrangement of transparent or semi-transparent regions of periodic structure 111 may be a two-dimensional regular repeating arrangement (e.g., a lattice) of transparent or semi-transparent regions or a two-dimensional interlaced repeating arrangement of transparent or semi-transparent regions.
[0029] In some examples, the transparent or semi-transparent regions of periodic structure 111 may be gaps, spaces, and/or voids between a two-dimensional arrangement, such as a regular repeating arrangement (e.g., a lattice) of non-transparent regions of periodic structure 111. Such non-transparent regions of periodic structure 111 may be made of metal or other non-transparent materials. For example, the non-transparent regions of periodic structure 111 may be regions of periodic structure 111 that contain pixels or subpixels of display 112, such as regions of periodic structure 111 that include light emitting diodes (e.g., organic light emitting diodes) of display 112.
[0030] The arrangement of transparent or semi-transparent regions spaced between opaque regions may cause light waves (e.g., light 116) to interfere with each other, which may cause constructive interference and destructive interference of the light waves. The constructive and destructive interference of the light waves as the light waves hit periodic structure 111 may cause the light waves to diffract, resulting in patterns of light dots 118 that pass through the transparent or semi-transparent regions of periodic structure 111. Such patterns of light dots 118 may therefore travel out of display 112 and are projected onto scene 120.
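By way of illustration only, the directions of the diffracted light dots can be approximated with the grating equation, sin(θ) = mλ/p, where p is the period of the periodic structure and m is the diffraction order. The sketch below is a minimal example of that calculation; the 940 nm wavelength matches the example laser described later with reference to FIG. 1D, while the 50 µm pitch is an assumed placeholder rather than a value specified by this disclosure.

```python
import math

# Example values only: a near-infrared wavelength and an assumed subpixel
# pitch for the periodic structure; an actual display will differ.
wavelength_m = 940e-9   # light source wavelength (meters)
pitch_m = 50e-6         # period of the opaque/transparent structure (meters)

# Grating equation for a periodic structure: sin(theta_m) = m * lambda / p.
# Each integer order m corresponds to one diffracted dot direction along an axis.
for m in range(0, 6):
    s = m * wavelength_m / pitch_m
    if s > 1.0:
        break  # orders with sin(theta) > 1 do not propagate
    theta_deg = math.degrees(math.asin(s))
    print(f"order {m}: {theta_deg:.3f} degrees from the optical axis")
```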
[0031] Camera 106 may capture the pattern of light dots projected onto scene 120, such as by capturing one or more images of scene 120. Such a pattern of light dots captured by camera 106 may be a result of the three-dimensionality of scene 120 reflecting the light dots projected by computing device 102. Camera 106 may be positioned in any suitable position in computing device 102, such as under display 112.
[0032] Computing device 102 may execute depth mapping module 110 to determine depth values associated with the plurality of light dots captured by camera 106. Depth mapping module 110 may use any suitable technique to determine a corresponding depth value for each light dot captured by camera 106 and to generate a depth map of scene 120 based on the depth values of the plurality of light dots.
[0033] Computing device 102 may perform one or more functions using the depth map generated by depth mapping module 110. For example, computing device 102 may perform facial recognition to authenticate authorized users of computing device 102. Computing device 102 may also use the depth map to aid in performing functions such as portrait mode photography, active autofocusing of cameras, and the like.
[0034] The techniques of this disclosure may provide certain technical advantages. By using display 112 as a diffractive optical element for light source 108 to project patterns of light dots onto a scene, the techniques of this disclosure may enable light source 108 to be placed under display 112. Enabling light source 108 to be placed under display 112 may reduce the size of the form factor of computing device 102 compared with computing devices having a dedicated diffractive optical element. Enabling light source 108 to be placed under display 112 may also enable computing device 102 to include a relatively larger display 112 compared with computing devices having a dedicated diffractive optical element.
[0035] Furthermore, by using display 112 as a diffractive optical element for light source 108 to project patterns of light dots onto a scene, the techniques of this disclosure may reduce the number of components of a computing device. Reducing the number of components in a computing device may decrease the cost and complexity in manufacturing the computing device and may increase the reliability of the computing device.
[0036] FIG. 1B shows a profile view of a portion of display 112 of computing device 102 in further detail. While display 112 shown in FIG. 1B is an OLED display, the techniques illustrated in FIG. 1B may also be applicable to other types of display technologies.
[0037] As shown in FIG. 1B, a portion of display 112 may include hard coat 122, glass layer 124, polarizer layer 126, encapsulation layer 128, OLED layer 138, metal layer 132, and support layer 130. Display 112 may include additional layers and/or components not shown in FIG. 1B.
[0038] Support layer 130 may provide structural integrity for the assembly of display 112. Metal layer 132 may also be referred to as a metal mask layer or the backplane or the thin-film transistor (TFT) layer, and may include metal structure 134 (e.g., a TFT) on which light emitters 136A-136C ("light emitters 136") of OLED layer 138 are disposed. In some examples, light emitters 136 may be OLEDs.
[0039] In the example of FIG. 1B, light emitters 136 may include light emitter 136A that emits red light, light emitter 136B, disposed on corresponding metal structure 134, that emits green light, and light emitter 136C that emits blue light. As such, light emitters 136 shown in FIG. 1B may be an example of red, green, and blue subpixels making up a single pixel of display 112.
[0040] Polarizer layer 126 may manage polarization of light, such as ambient light and/or light emitted by light emitters 136, to ensure optimal visibility of display 112 and to reduce glare. Polarizer layer 126 may also control display 112's brightness and contrast. Glass layer 124 may serve as the main substrate upon which touch interactions occur. Hard coat 122 is a durable and scratch-resistant coating applied to the outermost surface of display 112. The purpose of hard coat 122 may include protecting the underlying layers of display 112 from physical damage.
[0041] OLED layer 138 and metal layer 132 of display 112 may, together, form periodic structure 111 of display 112. As can be seen, there are small transparent apertures, such as gaps or empty space, between metal structures 134 of metal layer 132 and between light emitters 136 of OLED layer 138 within periodic structure 111. As such, periodic structure 111 forms a regular repeating pattern of transparent apertures between metal structures 134 and light emitters 136.
[0042] Hard coat 122, glass layer 124, polarizer layer 126, encapsulation layer 128, and support layer 130 may be transparent (e.g., clear) or semi-transparent so that light may travel through hard coat 122, glass layer 124, polarizer layer 126, encapsulation layer 128, and support layer 130. When light source 108 emits light 116 towards display 112, light 116 may travel through support layer 130 towards periodic structure 111. Because light source 108 emits light 116 towards periodic structure 111 that acts as a diffractive optical element to diffract light 116, light source 108 may emit light 116 in the form of collimated light projections, such as a collimated beam, and display 112 may receive light 116 emitted from light source 108 in the form of the collimated light projections that display 112 may diffract to produce patterns of light dots 118.
[0043] The arrangement of metal structures 134 and light emitters 136 and the arrangement of transparent apertures, such as gaps, between metal structures 134 and between light emitters 136 may cause light 116 to bounce off of metal structures 134 and to travel through the transparent apertures, which may cause constructive interference and destructive interference of light 116. Such constructive interference and destructive interference may diffract light 116 into patterns of light dots 118 that travel through encapsulation layer 128, polarizer layer 126, glass layer 124, and hard coat 122 and are therefore projected onto a three-dimensional scene.
[0044] FIG. 1C illustrates a plan view of periodic structure 111 of display 112 of computing device 102. While periodic structure 111 shown in FIG. 1C is in the context of an OLED display, the techniques illustrated in FIG. 1C may also be applicable to other types of display technologies. In some examples, the techniques may also be applicable to an LCD or a microLED display, where a plurality of pixels of the LCD or the microLED may be semi-transparent to form a regular or repeating arrangement of semi-transparent regions of periodic structure 111. For example, an LCD or a microLED display may use Red Green Blue White (RGBW) filters to form a regular or repeating arrangement of semi-transparent regions of periodic structure 111, where White filters may be transparent filters that enable light 116 emitted by light source 108 as well as light produced as a result of constructive and destructive interference with light 116 to pass through.
[0045] As shown in FIG. 1C, metal structures 134 of periodic structure 111 of display 112, on which red, green, and blue light emitters 136 are disposed, may be opaque regions of periodic structure 111 arranged in a two-dimensional regular repeating pattern. The arrangement of metal structures 134 may form a regular or interlaced repeating arrangement of space 135 between the opaque regions of periodic structure 111, which may be metal structures 134. Such space 135 between metal structures 134 may be transparent or semi-transparent regions of periodic structure 111, such as transparent apertures or gaps, thereby enabling light, which may be light 116 emitted by light source 108 as well as light produced as a result of constructive and destructive interference with light 116, to pass through such space 135 between metal structures 134 in the form of patterns of light dots that are projected onto a three-dimensional scene, such as scene 120.
[0046] FIG. 1D illustrates an example of dot patterns that may be projected by computing device 102 onto a scene. As shown in FIG. 1D, periodic structure 111 of display 112 may diffract light 116 emitted by light source 108 into a pattern of light dots 118 that is projected onto a scene, such as scene 120. In the example of FIG. 1D, the pattern of light dots 118 may result from light 116 being emitted by a laser at about 940nm. Camera 106 may capture such a pattern of dots 118, and depth mapping module 110 may determine a depth map of the scene based on the pattern of dots 118 projected onto the scene.
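As a rough illustration, the kind of dot pattern shown in FIG. 1D can be approximated numerically: under the Fraunhofer (far-field) approximation, the intensity pattern produced by a coherently illuminated aperture mask is proportional to the squared magnitude of the mask's two-dimensional Fourier transform. The sketch below uses an idealized square lattice of apertures as a stand-in for the transparent apertures of periodic structure 111; the grid size, pitch, and aperture width are assumptions, not the actual display layout.

```python
import numpy as np

# Simplified Fraunhofer (far-field) sketch: the far-field intensity of a
# coherently illuminated aperture mask is proportional to |FFT(mask)|^2.
# The mask below is an idealized square lattice of small square apertures
# standing in for the transparent gaps of a periodic display structure;
# the aperture shapes and pitch are assumptions, not the actual layout.

n = 512                      # simulation grid size (samples per side)
pitch = 16                   # lattice period in samples (assumed)
aperture = 3                 # aperture width in samples (assumed)

mask = np.zeros((n, n))
for y in range(0, n, pitch):
    for x in range(0, n, pitch):
        mask[y:y + aperture, x:x + aperture] = 1.0

# Uniform (collimated) illumination; far field via 2-D FFT.
far_field = np.fft.fftshift(np.fft.fft2(mask))
intensity = np.abs(far_field) ** 2
intensity /= intensity.max()

# Bright, well-separated peaks in 'intensity' correspond to the projected dots.
peaks = np.argwhere(intensity > 0.01)
print(f"number of strong diffraction peaks: {len(peaks)}")
```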
[0047] Throughout the disclosure, examples are described where a computing device and/or a computing system analyzes information (e.g., context, locations, speeds, search queries, etc.) associated with a computing device and a user of a computing device, only if the computing device receives permission from the user of the computing device to analyze the information. For example, in situations discussed below, before a computing device or computing system can collect or make use of information associated with a user, the user may be provided with an opportunity to provide input to control whether programs or features of the computing device and/or computing system can collect and make use of user information (e.g., information about a user's current location, current speed, etc.), or to dictate whether and/or how the device and/or system may receive content that may be relevant to the user. In addition, certain data may be treated in one or more ways before it is stored or used by the computing device and/or computing system, so that personally identifiable information is removed. For example, a user's identity may be treated so that no personally identifiable information can be determined about the user, or a user's geographic location may be generalized where location information is obtained (such as to a city, ZIP code, or state level), so that a particular location of a user cannot be determined. Thus, the user may have control over how information is collected about the user and used by the computing device and computing system.
[0048] FIG. 2 is a block diagram illustrating an example computing device 202, in accordance with one or more aspects of the present disclosure. Computing device 202 of FIG. 2 is an example of computing device 102 of FIGS. 1A-1D. Computing device 202 is only one particular example of computing device 102 of FIGS. 1A-1D, and many other examples of computing device 102 may be used in other instances. In the example of FIG. 2, computing device 202 may be a mobile computing device (e.g., a smartphone), or any other computing device. Computing device 202 of FIG. 2 may include a subset of the components included in example computing device 202 or may include additional components not shown in FIG. 2.
[0049] As shown in the example of FIG. 2, computing device 202 includes display 212, one or more processors 240, one or more input devices 242, one or more communication units 244, one or more output devices 246, and one or more storage devices 248. Storage devices 248 of computing device 202 also include depth sensing module 252, depth mapping module 210, facial recognition module 258, operating system 254, depth mapping model 256, and facial recognition data 270.
[0050] Communication channels 250 may interconnect each of the components 240, 242, 244, 246, 248, and 212 for inter-component communications (physically, communicatively, and/or operatively). In some examples, communication channels 250 may include a system bus, a network connection, an inter-process communication data structure, or any other method for communicating data.
[0051] One or more input devices 242 of computing device 202 may be configured to receive input. Examples of input are tactile, audio, and video input. Input devices 242 of computing device 202, in one example, includes a presence-sensitive display, touch-sensitive screen, mouse, keyboard, voice responsive system, video camera, microphone or any other type of device for detecting input from a human or machine.
[0052] For example, one or more input devices 242 may include camera 206, which is an example of camera 106 of FIGS. 1A-1D. Camera 206 may be any device capable of capturing images of the patterns of light emitted from light source 208 that are projected onto a three-dimensional scene by computing device 202 for the purposes of performing depth mapping. That is, camera 206 may capture the patterns of light that are reflected by the three-dimensional scene. As the light emitted by light source 208 may have a relatively high wavelength, which may be in the near infrared or infrared range, camera 206 may be a device that is able to capture the light patterns at such relatively high wavelengths.
[0053] One or more output devices 246 of computing device 202 may be configured to generate output. Examples of output are tactile, audio, and video output. Output devices 246 of computing device 202, in one example, includes a presence-sensitive organic light emitting diode (OLED) display, sound card, video graphics adapter card, speaker, monitor, a presence-sensitive liquid crystal display (LCD), or any other type of device for generating output to a human or machine.
[0054] For example, one or more output devices 246 may include light source 208, which is an example of light source 108 of FIGS. 1A-1D. Light source 208 may be any device capable of emitting light, such as in the form of a collimated beam, that is used to perform depth mapping of three-dimensional scenes. In some examples, light source 208 may be capable of generating light having a center wavelength that is between about 400 nanometers (nm) and about 850 nm or longer. In some examples, light source 208 may be capable of generating light that is in the near infrared range to the infrared range, such that human eyes are insensitive to the light emitted by light source 208.
[0055] Light source 208 may be any suitable light source having a certain degree of spatial and temporal coherence to generate high contrast light patterns. In some examples, light source 208 may be a laser, such as a super luminescent diode laser. In some examples, light source 208 may be filtered light emitting diodes. In some examples, light source 208 may be Vertical Cavity Surface Emitting Lasers (VCSELs).
[0056] One or more communication units 244 of computing device 202 may be configured to communicate with external devices via one or more wired and/or wireless networks by transmitting and/or receiving network signals on the one or more networks. Examples of communication unit 244 include a network interface card (e.g., an Ethernet card), an optical transceiver, a radio frequency transceiver, a GPS receiver, or any other type of device that can send and/or receive information. Other examples of communication units 244 may include short wave radios, cellular data radios, wireless network radios, as well as universal serial bus (USB) controllers.
[0057] Display 212 may be an example of display 112 of FIGS. 1A-1D. Display 212 may function as an output (e.g., display) device using any one or more display devices, such as a liquid crystal display (LCD), dot matrix display, light emitting diode (LED) display, microLED, organic light-emitting diode (OLED) display, e-ink, or similar monochrome or color display capable of outputting visible information to a user of computing device 202.
[0058] In some examples, display 212 of computing device 202 may include functionality of input devices 242 and/or output devices 246. In the example of FIG. 2, display 212 may be or may include a presence-sensitive input device. In some examples, a presence sensitive input device may detect an object at and/or near a screen. As one example range, a presence-sensitive input device may detect an object, such as a finger or stylus that is within 2 inches or less of the screen. The presence-sensitive input device may determine a location (e.g., an (x,y) coordinate) of a screen at which the object was detected. In another example range, a presence-sensitive input device may detect an object six inches or less from the screen and other ranges are also possible. The presence-sensitive input device may determine the location of the screen selected by a user’s finger using capacitive, inductive, and/or optical recognition techniques. In some examples, a presence sensitive input device also provides output to a user using tactile, audio, or video stimuli as described with respect to output device 246, e.g., at a display. In the example of FIG. 2, display 212 may present a user interface.
[0059] While illustrated as an internal component of computing device 202, display 212 may, in some examples, represent an external component that shares a data path with computing device 202 for transmitting and/or receiving input and output. For instance, in one example, display 212 represents a built-in component of computing device 202 located within and physically connected to the external packaging of computing device 202 (e.g., a screen on a mobile phone). In another example, display 212 represents an external component of computing device 202 located outside and physically separated from the packaging of computing device 202 (e.g., a monitor, a projector, etc. that shares a wired and/or wireless data path with a tablet computer).
[0060] Display 212 includes periodic structure 211, which is an example of periodic structure 111 of FIGS. 1A-1D. Similar to periodic structure 111, periodic structure 211 may be formed of one or more display components, such as structures (e.g., a metal layer) on which pixels or subpixels of display 212 are disposed.
[0061] One or more storage devices 248 within computing device 202 may store information for processing during operation of computing device 202. In some examples, storage device 248 is a temporary memory, meaning that a primary purpose of storage device 248 is not long-term storage. Storage devices 248 on computing device 202 may be configured for short-term storage of information as volatile memory and therefore not retain stored contents if powered off. Examples of volatile memories include random access memories (RAM), dynamic random access memories (DRAM), static random access memories (SRAM), and other forms of volatile memories known in the art.
[0062] Storage devices 248, in some examples, also include one or more computer- readable storage media. Storage devices 248 may be configured to store larger amounts of information than volatile memory. Storage devices 248 may further be configured for long-term storage of information as non-volatile memory space and retain information after power on/off cycles. Examples of non-volatile memories include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories. Storage devices 248 may store program instructions and/or information (e.g., data) associated with depth sensing module 252, depth mapping module 210, facial recognition module 258, operating system 254, depth mapping model 256, and facial recognition data 270.
[0063] One or more processors 240 may implement functionality and/or execute instructions within computing device 202. For example, processors 240 on computing device 202 may receive and execute instructions stored by storage devices 248 that execute the functionality of depth sensing module 252, depth mapping module 210, facial recognition module 258, operating system 254, and depth mapping model 256. These instructions executed by processors 240 may, for example, cause one or more processors 240 to perform depth mapping of a three-dimensional scene.
[0064] One or more processors 240 are configured to execute depth sensing module 252 to control the operations of light source 208 and camera 206. For example, depth sensing module 252 may communicate with light source 208 to cause light source 208 to emit light that display 212 may diffract into a plurality of light dots that are projected onto a scene. Similarly, depth sensing module 252 may communicate with camera 206 to capture the plurality of light dots projected onto the scene.
[0065] One or more processors 240 are configured to execute depth mapping module 210 to determine depth values associated with the plurality of light dots captured by camera 206. The depth value of a light dot may correspond to a relative distance of the light dot from camera 206. Depth mapping module 210 may determine the depth value of each of a plurality of light dots captured by camera 206 and may also determine interpolated depth values between light dots captured by camera 206.
[0066] When light source 208 emits light that display 212 diffracts into one or more dot patterns that are projected onto a scene, objects in the scene, such as a human face and its curvature and depth variations, may distort the one or more dot patterns projected by computing device 202. Depth mapping module 210 may therefore analyze the one or more dot patterns captured by camera 206 to determine how the one or more dot patterns projected by computing device 202 have been distorted by the scene to determine the depth values of the one or more dot patterns captured by camera 206.
[0067] In some examples, depth mapping module 210 may use a triangulation technique to determine the depth values of dots captured by camera 206, which uses the known positions of the projected dots, the dots captured by camera 206, and the angle of camera 206 to the projected dots to determine the distance between camera 206 and each dot captured by camera 206.
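For illustration only, one common rectified form of such a triangulation reduces to depth = focal length × baseline / disparity, where disparity is the shift of a dot's observed position relative to its expected position. The sketch below assumes placeholder values for the focal length, the baseline between light source 208 and camera 206, and the disparities; none of these values come from this disclosure.

```python
# Minimal triangulation sketch for a rectified projector/camera pair:
# depth = focal_length_px * baseline_m / disparity_px.
# The baseline, focal length, and disparities below are placeholder values.

def depth_from_disparity(disparity_px: float,
                         focal_length_px: float,
                         baseline_m: float) -> float:
    """Return depth (meters) of one dot from its observed disparity (pixels)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Example: 600-pixel focal length, 25 mm projector-to-camera baseline (assumed).
for d in (5.0, 10.0, 20.0):
    z = depth_from_disparity(d, focal_length_px=600.0, baseline_m=0.025)
    print(f"disparity {d:5.1f} px -> depth {z:.3f} m")
```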
[0068] Depth mapping module 210 may, during a calibration phase, determine and assign angular coordinates to the locations of centroids of dots that are projected by computing device 202 and captured by camera 206, and may associate depth values to the angular coordinates, so that each angular coordinate is associated with a particular depth value. For example, given a dot projected into a scene, depth mapping module 210 may determine the angular coordinate of the dot from light source 208. To determine a depth value associated with the angular coordinate, depth mapping module 210 may determine the distance of the dot from camera 206, which corresponds with a depth value associated with the dot, by determining the distance from camera 206 at which a ray of light projected by light source 208 via display 212 towards the centroid of the dot may intersect a ray of light projected from camera 206 towards the centroid of the dot.
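For illustration only, because two rays in three dimensions rarely intersect exactly, the intersection described above is often computed as the midpoint of the closest approach between the ray leaving the light source (through the display) and the ray leaving the camera. The sketch below shows that calculation with assumed ray origins and directions; the 25 mm offset between light source and camera is a placeholder, not a value from this disclosure.

```python
import numpy as np

# Sketch of the calibration-style intersection: the depth of a dot can be
# taken from where the ray leaving the light source (through the display)
# toward the dot centroid meets the ray leaving the camera toward the same
# centroid. Real rays rarely intersect exactly, so the midpoint of their
# closest approach is used. Origins and directions below are placeholders.

def closest_point_between_rays(o1, d1, o2, d2):
    """Midpoint of the closest approach between rays o1 + t*d1 and o2 + s*d2."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    cross = np.cross(d1, d2)
    denom = np.linalg.norm(cross) ** 2
    if denom < 1e-12:           # parallel rays: no unique intersection
        return None
    diff = o2 - o1
    t = np.dot(np.cross(diff, d2), cross) / denom
    s = np.dot(np.cross(diff, d1), cross) / denom
    return 0.5 * ((o1 + t * d1) + (o2 + s * d2))

# Assumed geometry: light source at the origin, camera offset 25 mm along x.
source_origin = np.array([0.0, 0.0, 0.0])
camera_origin = np.array([0.025, 0.0, 0.0])
dot_direction_from_source = np.array([0.05, 0.02, 1.0])
dot_direction_from_camera = np.array([0.03, 0.02, 1.0])

point = closest_point_between_rays(source_origin, dot_direction_from_source,
                                   camera_origin, dot_direction_from_camera)
print(f"estimated dot position (m): {point}, depth {point[2]:.3f} m")
```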
[0069] Depth mapping module 210 may therefore analyze the one or more dot patterns captured by camera 206 to determine angular coordinates of the dots in the one or more dot patterns, and may determine the depth values associated with the angular coordinates. Depth mapping module 210 may also apply an interpolation technique to determine depth values of points that are between dots in the one or more dot patterns. In this way, depth mapping module 210 may create a depth map based on the one or more dot patterns captured by camera 206.
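For illustration only, the interpolation step described above can be performed by treating the per-dot depth values as scattered samples and resampling them onto a dense pixel grid. The sketch below uses SciPy's griddata as one possible interpolation technique; the dot positions, depth values, and image size are synthetic stand-ins rather than values from this disclosure.

```python
import numpy as np
from scipy.interpolate import griddata

# Sketch of filling in depth values between dots: the per-dot depths are
# scattered samples, and a dense depth map is produced by interpolation.
# The dot positions and depths below are synthetic stand-ins.

rng = np.random.default_rng(0)
num_dots = 200
dot_xy = rng.uniform(0, 640, size=(num_dots, 2))            # centroid pixel coords
dot_depth = 0.5 + 0.1 * dot_xy[:, 0] / 640.0 + rng.normal(0, 0.001, num_dots)

# Dense pixel grid for the interpolated depth map (640 x 480 assumed).
xs, ys = np.meshgrid(np.arange(640), np.arange(480))

depth_map = griddata(dot_xy, dot_depth, (xs, ys), method="linear")

valid = np.isfinite(depth_map)  # pixels outside the dot hull remain NaN
print(f"interpolated {valid.sum()} of {depth_map.size} pixels")
```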
[0070] In some examples, depth mapping module 210 may use an indirect time of flight technique to determine the depth values of one or more dot patterns captured by camera 206. In this technique, computing device 202 may modulate dot patterns, also referred to as spots, that are projected onto a scene. Camera 206 may capture and demodulate the dot patterns to measure the phase delay of each dot, thereby extracting depth information from each dot. Depth mapping module 210 may therefore be able to determine depth values associated with dots in the one or more dot patterns captured by camera 206. Depth mapping module 210 may also apply an interpolation technique to determine depth values of points that are between dots in the one or more dot patterns. In this way, depth mapping module 210 may create a depth map based on the one or more dot patterns captured by camera 206.
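For illustration only, in an indirect time-of-flight measurement the demodulated phase delay of a dot modulated at frequency f_mod corresponds to a depth of c·φ/(4π·f_mod). The sketch below shows that conversion; the 100 MHz modulation frequency and the phase values are assumed placeholders, not parameters from this disclosure.

```python
import math

# Indirect time-of-flight sketch: a dot modulated at frequency f_mod and
# observed with phase delay phi has travelled a round trip of
# (c * phi) / (2 * pi * f_mod), so its depth is half of that:
#   depth = c * phi / (4 * pi * f_mod).
# The modulation frequency and phase delays below are illustrative values.

C = 299_792_458.0  # speed of light, m/s

def depth_from_phase(phase_rad: float, f_mod_hz: float) -> float:
    """Depth in meters for one dot, given its demodulated phase delay."""
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

f_mod = 100e6  # 100 MHz modulation (assumed)
for phi in (0.1, 0.5, 1.0):
    print(f"phase {phi:.2f} rad -> depth {depth_from_phase(phi, f_mod):.3f} m")
```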
[0071] In some examples, depth mapping module 210 may determine the depth value of one or more dot patterns captured by camera 206 by using depth mapping model 256, which is a machine-trained model trained via machine learning to generate a depth map for one or more dot patterns. Depth mapping model 256 may be trained via supervised machine learning. For example, depth mapping model 256 may be trained using training data that pairs one or more dot patterns of a three-dimensional scene with a depth map of the scene, which may have been captured using a calibrated depth camera, to generate hyperparameters for depth mapping model 256.
[0072] Once depth mapping model 256 is trained, depth mapping module 210 may use depth mapping model 256 to generate a depth map for one or more dot patterns. For example, depth mapping module 210 may input, into depth mapping model 256, one or more dot patterns captured by camera 206, and depth mapping model 256 may generate and output a depth map of the inputted one or more dot patterns.
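For illustration only, the sketch below shows the general shape of such a supervised training setup: a small fully convolutional network maps a single-channel dot-pattern image to a per-pixel depth map and is fitted to reference depth maps. The architecture, the use of PyTorch, and the synthetic training pair are assumptions chosen for illustration and do not describe the actual depth mapping model 256.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: a small fully convolutional network that maps a
# single-channel dot-pattern image to a per-pixel depth map, trained with
# pairs of (dot pattern, reference depth map). The architecture, data, and
# framework choice are assumptions, not the actual depth mapping model.

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),  # predicted depth per pixel
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic training pair standing in for a captured dot pattern and a
# reference depth map from a calibrated depth camera.
dot_pattern = torch.rand(1, 1, 120, 160)
reference_depth = torch.rand(1, 1, 120, 160)

for step in range(5):
    optimizer.zero_grad()
    predicted_depth = model(dot_pattern)
    loss = loss_fn(predicted_depth, reference_depth)
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.4f}")
```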
[0073] In some examples, the depth mapping module 210 may use the techniques described in this disclosure of determining depth values of dot patterns to augment additional depth mapping techniques. Using the techniques described in this disclosure to augment additional depth mapping techniques may increase the accuracy of depth maps of scenes that are generated by computing device 202.
[0074] For example, depth mapping module 210 may determine depth values of dot patterns to augment depth maps of scenes generated using stereoscopic imaging. Stereoscopic imaging is a technique that uses two or more cameras to capture images of a scene from slightly different viewpoints. By analyzing the differences between the images, the techniques of stereoscopic imaging may enable computing device 202 to infer depth information and create a three-dimensional representation of the scene. [0075] Computing device 202 may use the depth map of a scene that is generated by depth mapping module 210 to perform various functions. For example, computing device 202 may use depth mapping to perform facial recognition of authorized users of computing device 202. An authorized user may enroll their facial features at computing device 202. For example, computing device 202 may use the techniques described herein to scan the facial features of an authorized user and may store the enrolled facial features of the authorized user as facial recognition data 270.
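For illustration only, one simple way to augment a stereo-derived depth map with dot-pattern depth values is a confidence-weighted blend that also fills pixels where the stereo method produced no estimate. The sketch below uses synthetic depth maps and assumed weights; it is not a description of how computing device 202 combines the two techniques.

```python
import numpy as np

# Sketch of augmenting a stereo depth map with dot-pattern depth values:
# where both estimates exist they are blended by confidence weights, and the
# dot-pattern estimate fills pixels the stereo method could not resolve.
# The maps and weights below are synthetic stand-ins.

rng = np.random.default_rng(1)
dot_depth = rng.uniform(0.4, 0.6, size=(480, 640))            # from dot patterns
stereo_depth = dot_depth + rng.normal(0, 0.01, (480, 640))    # from stereo imaging
stereo_valid = rng.random((480, 640)) > 0.2                   # stereo holes

w_dot, w_stereo = 0.6, 0.4  # assumed confidence weights
fused = np.where(stereo_valid,
                 w_dot * dot_depth + w_stereo * stereo_depth,
                 dot_depth)

print(f"fused depth map: mean {fused.mean():.3f} m, "
      f"stereo coverage {stereo_valid.mean():.1%}")
```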
[0076] To perform facial recognition of authorized users of computing device 202, one or more processors 240 may execute facial recognition module 258 that communicates with light source 208 and camera 206 to project one or more dot patterns onto a user’s face and to capture the dot patterns projected onto the user’s face. Depth mapping module 210 may generate a depth map of the user’s face based on the dot patterns captured by camera 206, and facial recognition module 258 may compare the depth map of the user’s face with the enrolled facial features of the authorized user of computing device 202 as stored in facial recognition data 270 to determine whether the user is an authorized user of computing device 202.
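For illustration only, the comparison step can be pictured as measuring how far a captured face depth map deviates from the enrolled template. The sketch below reduces that comparison to a per-pixel root-mean-square error against a threshold; practical facial recognition systems compare learned feature representations rather than raw depth, so the data, threshold, and method here are assumptions meant purely to show the data flow.

```python
import numpy as np

# Highly simplified sketch of comparing a captured face depth map against an
# enrolled template: real facial recognition compares learned features, not
# raw depth, so this per-pixel RMSE check only illustrates the data flow.
# The maps and threshold are synthetic stand-ins.

def matches_enrolled(captured: np.ndarray,
                     enrolled: np.ndarray,
                     threshold_m: float = 0.01) -> bool:
    """Return True if the captured depth map is close to the enrolled one."""
    rmse = float(np.sqrt(np.mean((captured - enrolled) ** 2)))
    return rmse <= threshold_m

enrolled_face = np.random.default_rng(2).uniform(0.3, 0.5, size=(120, 160))
captured_face = enrolled_face + np.random.default_rng(3).normal(0, 0.002, (120, 160))

print("authorized user" if matches_enrolled(captured_face, enrolled_face)
      else "not recognized")
```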
[0077] To ensure the security of sensitive data, such as facial recognition data 270 for performing facial recognition of authorized users, facial recognition data 270 and facial recognition module 258 may be stored in secure hardware that is physically isolated from other components of computing device 202. Storing facial recognition data 270 and facial recognition module 258 in such secure hardware may prevent unauthorized access and tampering. Further, facial recognition data 270 and facial recognition module 258 may also be encrypted, such that facial recognition data 270 and facial recognition module 258 may remain secure and inaccessible even if the secure hardware is compromised.
[0078] FIG. 3 is a flowchart illustrating example operations of an example computing device configured to generate depth maps of three-dimensional scenes, in accordance with one or more aspects of the present disclosure. For purposes of illustration only, the example operations are described below within the context of computing device 202 of FIG. 2.
[0079] As shown in FIG. 3, a light source 208 of computing device 202 may emit light 116 towards at least a portion of a display 212 having a periodic structure 211, wherein the periodic structure 211 of the display 212 diffracts the light 116 into a plurality of light dots 118 that are projected onto a scene 120 (302). In some examples, the periodic structure 211 includes a two-dimensional arrangement of opaque regions that forms a two-dimensional arrangement of transparent regions between the opaque regions. In some examples, the opaque regions of the periodic structure 211 include metal structures 134, and light emitters of the display 212 are disposed on the metal structures 134. In some examples, the display 212 includes an organic light-emitting diode (OLED) display, and the light emitters include OLEDs.
[0080] In some examples, the periodic structure 211 is operable to cause the light 116 emitted by the light source 208 to constructively and destructively interfere with each other to produce the plurality of light dots 118 that are projected onto the scene 120. In some examples, the light waves created via constructive and destructive interference of the light 116 emitted by the light source 208 pass through the transparent regions of the periodic structure 211 to form the plurality of light dots 118.
[0081] In some examples, the light 116 emitted by the light source 208 towards at least the portion of the display 212 is a collimated beam. In some examples, the display 212 is a presence-sensitive display.
[0082] In some examples, the light source 208 includes one or more high emissivity diodes behind the display 212. In some examples, the display 212 includes a metal mask layer as the periodic structure 211, the metal mask layer including transparent apertures positioned over the one or more high emissivity diodes to produce, from collimated light emissions of the one or more high emissivity diodes (e.g., the high emissivity diodes may be a collimated light source), the plurality of light dots 118 in a determined pattern.
[0083] A camera 206 of computing device 202 may capture the plurality of light dots 118 in the scene 120 (304). One or more processors 240 of computing device 202 may determine depth values associated with the plurality of light dots 118 (306). For example, to determine the depth values of the plurality of light dots 118, the one or more processors 240 may input the plurality of light dots 118 into a machine-trained model, such as depth mapping model 256, that is trained via machine learning to generate the depth values for the plurality of light dots 118. In some examples, one or more processors 240 may perform facial recognition based on the depth values associated with the plurality of light dots 118.
[0084] This disclosure includes the following examples:
[0085] Example 1. A computing device comprising: a display having a periodic structure; a light source configured to emit light towards at least a portion of the display, the periodic structure of the display diffracting the light into a plurality of light dots that are projected onto a scene; a camera configured to capture the plurality of light dots in the scene; and one or more processors configured to determine depth values associated with the plurality of light dots.
[0086] Example 2. The computing device of example 1, wherein the periodic structure comprises a two-dimensional arrangement of opaque regions that forms a two-dimensional arrangement of transparent regions between the opaque regions.
[0087] Example 3. The computing device of example 2, wherein the opaque regions of the periodic structure include metal structures, and wherein light emitters of the display are disposed on the metal structures.
[0088] Example 4. The computing device of any of examples 2 and 3, wherein the display comprises one of: an organic light-emitting diode (OLED) display, a liquid crystal display (LCD), or a microLED display.
[0089] Example 5. The computing device of any of examples 1-4, wherein the light source includes one or more high emissivity diodes behind the display.
[0090] Example 6. The computing device of example 5, wherein the display includes a metal mask layer as the periodic structure, the metal mask layer including transparent apertures positioned over the one or more high emissivity diodes to produce, from diode emissions of the one or more high emissivity diodes, the plurality of light dots in a determined pattern.
[0091] Example 7. The computing device of any of examples 2-6, wherein the periodic structure is operable to cause the light emitted by the light source to constructively and destructively interfere with each other to produce the plurality of light dots that are projected onto the scene.
[0092] Example 8. The computing device of example 7, wherein light waves created via constructive and destructive interference of the light emitted by the light source pass through the transparent regions of the periodic structure to form the plurality of light dots.
[0093] Example 9. The computing device of any of examples 1-8, wherein the light emitted by the light source towards at least the portion of the display comprises a collimated beam.
[0094] Example 10. The computing device of any of examples 1-9, wherein the display comprises a presence-sensitive display.
[0095] Example 11. The computing device of any of examples 1-10, wherein to determine the depth values of the plurality of light dots, the one or more processors are further configured to: determine the depth values of the plurality of dots using one of: a triangulation technique, a time of flight technique, or a machine-trained model trained via machine learning to generate the depth values for the plurality of light dots.
[0096] Example 12. The computing device of any of examples 1-11, wherein the one or more processors are further configured to: perform facial recognition based on the depth values associated with the plurality of light dots.
[0097] Example 13. A method comprising: emitting, by a light source of a computing device, light towards at least a portion of a display of the computing device, wherein a periodic structure of the display diffracts the light into a plurality of light dots that are projected onto a scene; capturing, by a camera of the computing device, the plurality of light dots in the scene; and determining, by one or more processors of the computing device, depth values associated with the plurality of light dots.
[0098] Example 14. The method of example 13, wherein the periodic structure comprises a two-dimensional arrangement of opaque regions that forms a two-dimensional arrangement of transparent regions between the opaque regions.
[0099] Example 15. The method of example 14, wherein the opaque regions of the periodic structure include metal structures, and wherein light emitters of the display are disposed on the metal structures.
[0100] Example 16. The method of any of examples 14 and 15, wherein the display comprises one of: an organic light-emitting diode (OLED) display, a liquid crystal display (LCD), or a microLED display.
[0101] Example 17. The method of any of examples 14-16, wherein the periodic structure is operable to cause the light emitted by the light source to constructively and destructively interfere with each other to produce the plurality of light dots that are projected onto the scene.
[0102] Example 18. The method of example 17, wherein light waves created via constructive and destructive interference of the light emitted by the light source pass through the transparent regions of the periodic structure to form the plurality of light dots.
[0103] Example 19. The method of any of examples 13-18, wherein the light emitted by the light source towards at least the portion of the display comprises a collimated beam.
[0104] Example 20. The method of any of examples 13-19, wherein the display comprises a presence-sensitive display.
[0105] Example 21. The method of any of examples 13-20, wherein determining the depth values of the plurality of light dots further comprises: determining, by the one or more processors, the depth values of the plurality of dots using one of: a triangulation technique, a time of flight technique, or a machine-trained model trained via machine learning to generate the depth values for the plurality of light dots.
[0106] Example 22. The method of any of examples 13-21, further comprising: performing, by the one or more processors, facial recognition based on the depth values associated with the plurality of light dots.
[0107] In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over, as one or more instructions or code, a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media, which is non-transitory or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium.
[0108] By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transient media, but are instead directed to non-transient, tangible storage media. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
[0109] Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term “processor,” as used herein may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules. Also, the techniques could be fully implemented in one or more circuits or logic elements.
[0110] The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.
[0111] Various examples of the disclosure have been described. Any combination of the described systems, operations, or functions is contemplated. These and other examples are within the scope of the following claims.

CLAIMS:
1. A computing device comprising: a display having a periodic structure; a light source configured to emit light towards at least a portion of the display, the periodic structure of the display diffracting the light into a plurality of light dots that are projected onto a scene; a camera configured to capture the plurality of light dots in the scene; and one or more processors configured to determine depth values associated with the plurality of light dots.
2. The computing device of claim 1, wherein the periodic structure comprises a two-dimensional arrangement of opaque regions that forms a two-dimensional arrangement of transparent regions between the opaque regions.
3. The computing device of claim 2, wherein the opaque regions of the periodic structure include metal structures, and wherein light emitters of the display are disposed on the metal structures.
4. The computing device of any of claims 2 and 3, wherein the display comprises one of: an organic light-emitting diode (OLED) display, a liquid crystal display (LCD), or a microLED display.
5. The computing device of any of claims 1-4, wherein the light source includes one or more high emissivity diodes behind the display.
6. The computing device of claim 5, wherein the display includes a metal mask layer as the periodic structure, the metal mask layer including transparent apertures positioned over the one or more high emissivity diodes to produce, from collimated light emissions of the one or more high emissivity diodes, the plurality of light dots in a determined pattern.
7. The computing device of any of claims 2-6, wherein the periodic structure is operable to cause the light emitted by the light source to constructively and destructively interfere with each other to produce the plurality of light dots that are projected onto the scene.
8. The computing device of claim 7, wherein light waves created via constructive and destructive interference of the light emitted by the light source pass through the transparent regions of the periodic structure to form the plurality of light dots.
9. The computing device of any of claims 1-8, wherein the light emitted by the light source towards at least the portion of the display comprises a collimated beam.
10. The computing device of any of claims 1-9, wherein the display comprises a presence-sensitive display.
11. The computing device of any of claims 1-10, wherein to determine the depth values of the plurality of light dots, the one or more processors are further configured to: determine the depth values of the plurality of dots using one of: a triangulation technique, a time of flight technique, or a machine-trained model trained via machine learning to generate the depth values for the plurality of light dots.
12. The computing device of any of claims 1-11, wherein the one or more processors are further configured to: perform facial recognition based on the depth values associated with the plurality of light dots.
13. A method comprising: emitting, by a light source of a computing device, light towards at least a portion of a display of the computing device, wherein a periodic structure of the display diffracts the light into a plurality of light dots that are projected onto a scene; capturing, by a camera of the computing device, the plurality of light dots in the scene; and determining, by one or more processors of the computing device, depth values associated with the plurality of light dots.
14. The method of claim 13, wherein the periodic structure comprises a two-dimensional arrangement of opaque regions that forms a two-dimensional arrangement of transparent regions between the opaque regions.
15. The method of claim 14, wherein the opaque regions of the periodic structure include metal structures, and wherein light emitters of the display are disposed on the metal structures.
16. The method of any of claims 14 and 15, wherein the display comprises one of: an organic light-emitting diode (OLED) display, a liquid crystal display (LCD), or a microLED display.
17. The method of any of claims 14-16, wherein the periodic structure is operable to cause the light emitted by the light source to constructively and destructively interfere with each other to produce the plurality of light dots that are projected onto the scene.
18. The method of claim 17, wherein light waves created via constructive and destructive interference of the light emitted by the light source pass through the transparent regions of the periodic structure to form the plurality of light dots.
19. The method of any of claims 13-18, wherein the light emitted by the light source towards at least the portion of the display comprises a collimated beam.
20. The method of any of claims 13-19, wherein determining the depth values of the plurality of light dots further comprises: determining, by the one or more processors, the depth values of the plurality of dots using one of: a triangulation technique, a time of flight technique, or a machine-trained model trained via machine learning to generate the depth values for the plurality of light dots.

