WO2025202122A1 - Electronic device, method and computer program - Google Patents
Electronic device, method and computer program
- Publication number
- WO2025202122A1 (PCT/EP2025/057987)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- electronic device
- dot
- time
- receiver
- emitters
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/4802—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/08—Systems determining position data of a target for measuring distance only
- G01S17/10—Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/42—Simultaneous measurement of distance and other co-ordinates
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
- G01S17/894—3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/481—Constructional features, e.g. arrangements of optical elements
- G01S7/4814—Constructional features, e.g. arrangements of optical elements of transmitters alone
- G01S7/4815—Constructional features, e.g. arrangements of optical elements of transmitters alone using multiple transmitters
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/483—Details of pulse systems
- G01S7/484—Transmitters
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/483—Details of pulse systems
- G01S7/486—Receivers
- G01S7/4861—Circuits for detection, sampling, integration or read-out
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/483—Details of pulse systems
- G01S7/486—Receivers
- G01S7/4861—Circuits for detection, sampling, integration or read-out
- G01S7/4863—Detector arrays, e.g. charge-transfer gates
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/483—Details of pulse systems
- G01S7/486—Receivers
- G01S7/4865—Time delay measurement, e.g. time-of-flight measurement, time of arrival measurement or determining the exact position of a peak
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/483—Details of pulse systems
- G01S7/486—Receivers
- G01S7/4868—Controlling received signal intensity or exposure of sensor
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/483—Details of pulse systems
- G01S7/486—Receivers
- G01S7/487—Extracting wanted echo signals, e.g. pulse detection
- G01S7/4873—Extracting wanted echo signals, e.g. pulse detection by deriving and controlling a threshold value
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/08—Systems determining position data of a target for measuring distance only
- G01S17/32—Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated
- G01S17/36—Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated with phase comparison between the received signal and the contemporaneously transmitted signal
Definitions
- the present disclosure generally pertains to an electronic device, a method and a computer program.
- Time Of Flight (ToF) systems in general are resource constrained systems. Energy (and, correspondingly, power) and time that can be spent on each dot are limited.
- the disclosure provides an electronic device according to independent claim 1.
- the present disclosure provides a managing method according to independent claim 46.
- the present disclosure provides a computer program according to independent claim 47. Further aspects of the present disclosure are set forth in the dependent claims, the figures and the following description.
- Fig. 1 is a schematic illustration of a ToF device according to the present disclosure.
- Fig. 2 is a symbolic illustration of an example of a time-of-flight histogram
- Fig. 3 is a schematic illustration of a device according to a first embodiment
- Fig. 4 is a schematic illustration of a device according to a modification of the first embodiment.
- Fig. 5 is a schematic illustration of a receiver
- Fig. 6 is a symbolic illustration of a method to determine an operation of components of a ToF system according to a signal-to-noise ratio
- Fig. 7 is a symbolic illustration of a smart illumination time principle
- Fig. 8 is a symbolic illustration of the smart illumination time principle with regrouping
- Fig. 9 is a symbolic illustration of the smart illumination time principle with regrouping and smart stop
- Fig. 10a-c are symbolic illustrations of a scene with single and multiple reflections per dot.
- Fig. 11a and Fig. 11b are symbolic illustrations of histograms with single and multiple reflections.
- Fig. 13 is a first schematic example of how measurement errors may be induced by rotation of the device.
- Fig. 14 is a first symbolic representation of histogram processing being affected by the rotation shown in Fig. 13;
- Fig. 15 is a second schematic example of how measurement errors may be induced by rotation of the device.
- Fig. 16 is a second symbolic representation of histogram processing being affected by the rotation shown in Fig. 15;
- Fig. 19 is a schematic illustration of a host device according to the third embodiment.
- Fig. 20 is a symbolic illustration of a processing method according to the third embodiment.
- Performance may for example relate to distance and/or returned points.
- the management of the resources may, for example, be a dynamic management.
- the other sensors may for example comprise an IMU or an RGB sensor.
- the circuitry may for example be configured to perform grouping of the emitters and corresponding receivers to groups, based on conditions.
- the circuitry may be configured to set parameters for each group of emitters/receivers, including activation profiles, emitter parameters (pulse length, peak power, ... ), receiver parameters (number of SPADs per dot, SPAD parameters), histogram parameters such as a threshold, or the like.
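As an illustrative sketch only (not part of the claims), the per-group parameters listed above could be collected in a single configuration record. All field names and default values below are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical per-group parameter record; the fields mirror the parameters
# named in the description (pulse length, peak power, pulses per train,
# SPADs per dot, histogram threshold), but names and defaults are illustrative.
@dataclass
class GroupConfig:
    emitter_ids: list          # emitters (e.g. VCSELs) assigned to this group
    receiver_ids: list         # corresponding receivers (e.g. SPAD clusters)
    pulse_length_ns: float     # emitter parameter: pulse length
    peak_power_mw: float       # emitter parameter: peak power
    num_pulses: int            # activation profile: number of pulses in a train
    spads_per_dot: int         # receiver parameter: SPADs aggregated per dot
    histogram_threshold: int   # detection threshold applied to the histogram

def make_group(emitters, receivers, **params):
    """Pair emitters with their corresponding receivers and attach the
    shared per-group parameters, overriding illustrative defaults."""
    defaults = dict(pulse_length_ns=1.0, peak_power_mw=50.0,
                    num_pulses=100, spads_per_dot=4, histogram_threshold=20)
    defaults.update(params)
    return GroupConfig(list(emitters), list(receivers), **defaults)

group = make_group([0, 1, 2], [10, 11, 12], num_pulses=200)
```

A scheduler could then hand one such record per time slot to the control circuitry.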
- Activation may for example comprise power saving.
- Continuing activation may for example continue beyond a predefined value - e.g. to allow achieving longer distance.
- the circuitry is configured to perform dynamic modifications based on system conditions.
- System conditions may for example refer to checking whether there is enough time left to schedule activation of an emitter, or the like.
- Grouping may be based on estimated/projected parameters of the dots and/or hardware limitations.
- Estimated/projected parameters of the dots may for example relate to the expected distances or illumination time associated with the dots, e.g. if they share similar parameters, such as single/multiple reflections.
- Hardware limitations may for example relate to the issue that the system cannot group any dot with any other dot due to routing limitations.
- the estimated/projected parameters may for example be based on previous dToF measurement reports, upper layer a priori information, or other sensor information.
- Previous dToF measurement reports may for example comprise information about distance, albedo, or the like.
- Upper layer a priori information may for example be obtained from a SLAM pipeline.
- Other sensor information may for example be obtained from image data, IMU data, or the like.
- Optimizing a dToF key performance indicator may for example comprise not scheduling dots that do not have a chance to be returned. Optimizing a dToF key performance indicator may also comprise scheduling dots according to prioritization from upper layers. Still further, optimizing a dToF key performance indicator may for example comprise scheduling dots according to chances for them to be returned, e.g. starting from shorter exposure times.
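The scheduling heuristics above (skip dots with no chance of return, respect upper-layer prioritization, prefer shorter exposure times) can be sketched as a greedy scheduler. The dot attributes and the time-budget model below are assumptions for illustration, not taken from the claims:

```python
def schedule_dots(dots, time_budget_us):
    """Greedy scheduler sketch: drop dots that have no chance of being
    returned, then order the remainder by upper-layer priority and, within
    equal priority, by shorter expected exposure time, packing them into
    the available time budget."""
    candidates = [d for d in dots if d["return_probability"] > 0.0]
    # Higher priority first; ties broken by shorter exposure so that more
    # dots fit into the budget.
    candidates.sort(key=lambda d: (-d["priority"], d["exposure_us"]))
    scheduled, used = [], 0.0
    for d in candidates:
        if used + d["exposure_us"] <= time_budget_us:
            scheduled.append(d["id"])
            used += d["exposure_us"]
    return scheduled

dots = [
    {"id": "A", "priority": 1, "exposure_us": 30.0, "return_probability": 0.9},
    {"id": "B", "priority": 2, "exposure_us": 50.0, "return_probability": 0.7},
    {"id": "C", "priority": 1, "exposure_us": 40.0, "return_probability": 0.0},
]
print(schedule_dots(dots, time_budget_us=80.0))  # → ['B', 'A'] (C never scheduled)
```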
- Managing resources of a time-of-flight system may for example comprise optimizing distance measurements and/or a number of returned dots.
- the electronic device may be the Time-of-Flight (ToF) system.
- depth information is obtained by determining a phase angle φ of incident light in pixels of the ToF sensor as compared to a modulation signal.
- Each pixel of a ToF sensor may typically include two distinct taps (e.g. tap A and tap B), each tap providing gain information (e.g. G_A for tap A) in a π/2 phase offset as well as phase-dependent intensity information (e.g. S(0) for a phase of zero and S(π/2) for a phase of π/2).
- Depth information is calculated from the phase information by correlating the phase information with phase information of emitted laser beams, such as infrared (IR) laser beams generating, for example, a dot pattern.
- the correlation of the phase information detected by the pixel with phase information of emitted IR laser beams yields a phase offset Δφ between the emitted laser beams and the incident radiation sensed by the pixel.
- a distance D of the pixel to the surface reflecting the emitted laser beams can then be calculated using D = c·Δφ / (4π·f), where c is the speed of light in an atmosphere and f is a frequency of the emitted light.
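The standard iToF phase-to-distance relation above can be evaluated directly; the helper name and the 20 MHz example modulation frequency below are illustrative:

```python
import math

C = 299_792_458.0  # speed of light in m/s (vacuum value as an approximation)

def itof_distance(delta_phi_rad, mod_freq_hz):
    """Distance from the measured phase offset: D = c * Δφ / (4π·f)."""
    return C * delta_phi_rad / (4.0 * math.pi * mod_freq_hz)

# A phase offset of π at 20 MHz modulation is half the unambiguous
# range c / (2f), i.e. about 3.75 m.
print(round(itof_distance(math.pi, 20e6), 2))  # → 3.75
```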
- Photon counting ToF systems like the dToF system record a photon histogram.
- DToF systems may, for example, use single photon avalanche diodes (SPADs) as detectors.
- depth information is obtained, for example, by measuring the time a signal pulse of a defined duration, emitted by the device using dToF, takes to reach an object, be reflected on a reflection surface, and return to the device to be sensed by the dToF sensor. The signals are then correlated to obtain the distance.
- a dToF system may emit very short pulses (ca. 1 nanosecond) of laser light.
- the emitter emits pulses at a predetermined periodicity, and the receiver (using a timing that is aligned with the timing of the pulse emission by the emitter), collects received photons.
- the receiver determines the time elapsed from emission to reception of each signal, records the determined timing per photons and accumulates the timing of each received photon in a histogram.
- a peak in the histogram accumulated based on a plurality of photons over time corresponds to the distance of the object.
- the peak may, for example, be located at a position T_peak in the histogram.
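The accumulation of per-photon timings into a histogram and the conversion of the peak position T_peak into a distance (D = c·T_peak/2, accounting for the round trip) can be sketched as follows; the bin width and sample timings are illustrative:

```python
from collections import Counter

C = 299_792_458.0  # speed of light in m/s

def build_histogram(arrival_times_ns, bin_width_ns=0.5):
    """Bin per-photon arrival times (measured from pulse emission)."""
    return Counter(int(t / bin_width_ns) for t in arrival_times_ns)

def distance_from_peak(hist, bin_width_ns=0.5):
    """Take the fullest bin as T_peak and convert the round-trip time
    to a one-way distance: D = c * T_peak / 2."""
    peak_bin = max(hist, key=hist.get)
    t_peak_s = (peak_bin + 0.5) * bin_width_ns * 1e-9  # use the bin centre
    return C * t_peak_s / 2.0

# Photons clustered near 6.7 ns (a target at roughly 1 m) plus a few
# ambient-light stragglers in other bins.
times = [6.6, 6.7, 6.7, 6.8, 2.1, 9.3, 6.7]
hist = build_histogram(times)
print(round(distance_from_peak(hist), 2))  # → 1.01
```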
- Resources of the ToF system may be any of the subunits of the ToF system, for example, the emission unit, the reception unit, or components thereof, or any circuitry, computing resources or other electronic components.
- the managing comprises selective cessation of an operation of emitters of a group of emitters for time-of-flight measurements, wherein each emitter is configured to commence emitting a respective dot simultaneously with the other emitters of the group of emitters.
- the emitter is any electronic component capable of being caused, by a control signal, to emit a timed laser signal.
- Each emitter may in particular be a vertical cavity surface emitting laser (VCSEL).
- By configuring the device to set an illumination time per group of dots and to configure groups of dots, each with its own activation profile (e.g. a number of pulses in a train), a greater degree of freedom in the utilization of the resources of the ToF system is achieved. Moreover, intelligent scheduling may be used. In some embodiments, the illumination time may be chosen dynamically, such that performance can be further improved.
- the predefined number of the emitters is equal to a total number of emitters of the predefined group of emitters.
- circuitry is further configured to cause the emitter to cease emitting after a predefined maximum emission time.
- sensing the incident dot comprises monitoring whether a signal-to-noise ratio of the incident dot measured by a receiver of the receiver array exceeds a predefined threshold.
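The SNR monitoring described here, combined with the predefined maximum emission time mentioned above, might be sketched as follows; the sampling interval and function name are assumptions for illustration:

```python
def smart_stop(snr_samples, snr_threshold, max_time_us, dt_us=1.0):
    """Return the elapsed illumination time at which emission may cease:
    either when the measured SNR first reaches the threshold (enough
    signal collected) or at the predefined maximum emission time,
    whichever comes first. `snr_samples` is one SNR reading per dt_us."""
    elapsed = 0.0
    for snr in snr_samples:
        elapsed += dt_us
        if snr >= snr_threshold:
            return elapsed  # sufficient SNR; stop early and save energy
        if elapsed >= max_time_us:
            break
    return max_time_us  # hard cap reached without sufficient SNR

# SNR grows as photons accumulate; stop as soon as it crosses 5.
print(smart_stop([1.0, 2.0, 6.0, 9.0], snr_threshold=5.0, max_time_us=10.0))  # → 3.0
```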
- circuitry is further configured to cause the emitters to emit a plurality of dots in a dot pattern.
- dynamic managing comprises a selective managing of emitters and/or receivers based on a number of expected reflections of a dot.
- circuitry is further configured to determine, in image data, a number of edges within the subsection of the image sensor that correspond to the dot, and to determine the number of expected reflections of the dot based on the number of edges and/or plane detection.
- circuitry is further configured to determine, in image data, a number of edges which correspond to one or more dots and which define an object, and wherein, in absence of detection of motion of the scene or of the device, the circuitry causes the receivers in the receiver array to cease operating and/or a group of emitters to cease emitting where the receivers and/or emitters correspond spatially to the interior of the object with respect to the edges in the image data.
- the edges may hint at an object, covered by one dot, which has two different distances, so whenever the emitter starts emitting, two reflections may be expected to be observed. If the dot does not cover an edge, this may hint that the whole surface the dot illuminates is at one distance, so one reflection is expected to be observed.
- circuitry is further configured to control an operating duration of the emitter based on the number of expected reflections.
- circuitry is further configured, if the number of expected reflections is greater than one, to cause a particular emitter related to an incident dot to cease emitting only after more than one reflection has been sensed.
- the expected number of reflections is one more than the number of edges in the field of view of the image sensor coinciding with the incident dot.
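The rule that the expected number of reflections is one more than the number of edges coinciding with the incident dot can be illustrated as follows; representing detected edges as pixel coordinates and the dot footprint as an axis-aligned rectangle is a simplification for the sketch:

```python
def expected_reflections(edge_pixels, dot_region):
    """Count detected image edges that fall inside the dot's footprint;
    each edge splits the illuminated surface into one more distance,
    so the expected number of reflections is the edge count plus one."""
    x0, y0, x1, y1 = dot_region
    edges_in_dot = sum(1 for (x, y) in edge_pixels
                       if x0 <= x < x1 and y0 <= y < y1)
    return edges_in_dot + 1

# One edge crosses the dot footprint: two distinct surfaces, so two
# reflections are expected; the second edge lies outside the dot.
print(expected_reflections([(5, 5), (20, 20)], (0, 0, 10, 10)))  # → 2
```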
- circuitry is further configured to calculate, from the detected movement of the electronic device, an expected shift of a position of an incident dot on a receiver array comprising the receivers during an exposure duration, and wherein the emitters or the receivers are caused to operate based on the expected shift.
- the position of the receiver related to the emitter emitting the incident dot is known a priori due to the optical characteristics of the ToF system, and, if the value of the shift exceeds a predetermined threshold, the particular receiver and/or the emitter related to the particular receiver is not caused to operate.
- circuitry is further configured to estimate a position of the incident dot on the receiver array and determine a particular receiver located at the position of the receiver array, and wherein, if the value of the shift exceeds a predetermined threshold, the particular receiver is not caused to operate.
- the circuitry is further configured to estimate a position of the incident dot on the receiver array and determine a particular receiver located at the position of the receiver array, and wherein, if the value of the shift exceeds a predetermined threshold, operation of the particular receiver is delayed until the value of the shift decreases below the predetermined threshold.
- the circuitry is further configured to provide a time-of-flight measurement indicating a distance to an object based on the detected movement.
- circuitry is further configured to provide the time-of-flight measurement indicating a distance using multi-receiver processing.
- calculating the expected shift comprises multiplying a rotation of the device per unit time with the exposure duration.
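A minimal sketch of the gating decision described above: the expected shift is the rotation per unit time multiplied by the exposure duration, and a receiver whose dot drifts beyond a threshold is not operated. The degrees-per-second input and the pixels-per-degree conversion factor are assumptions:

```python
def expected_shift_px(rotation_rate_deg_s, exposure_s, px_per_deg):
    """Expected dot shift on the receiver array during the exposure:
    rotation rate multiplied by exposure duration, in pixel units."""
    return rotation_rate_deg_s * exposure_s * px_per_deg

def receiver_active(rotation_rate_deg_s, exposure_s, px_per_deg, max_shift_px):
    """Operate the receiver only if the expected shift stays within the
    predetermined threshold; otherwise skip (or delay) its activation."""
    shift = expected_shift_px(rotation_rate_deg_s, exposure_s, px_per_deg)
    return shift <= max_shift_px

# 30 deg/s rotation, 10 ms exposure, 12 px per degree → ~3.6 px of drift,
# well above a 1 px threshold, so the receiver is not operated.
print(receiver_active(30.0, 0.010, 12.0, max_shift_px=1.0))  # → False
```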
- the exposure duration is estimated based on a structure of a scene
- the structure of the scene is estimated based on depth data of the scene captured in a previous frame and/or based on image data obtained from an image sensor and/or a prior information from a host.
- the structure of the scene is estimated based on the depth data or based on the image data using a neural network or computer vision.
- the embodiments also disclose a managing method for a time-of-flight device comprising managing resources of a time-of-flight system to optimize time, energy and/or performance usage of the time-of-flight system based on sensor information, wherein the sensor information comprises time-of-flight measurements and/or measurements from one or more other sensors.
- the embodiments also disclose a computer program that, if executed by a computer, causes the computer to manage resources of a time-of-flight system to optimize time, energy and/or performance usage of the time-of-flight system based on sensor information, wherein the sensor information comprises time-of-flight measurements and/or measurements from one or more other sensors.
- the methods as described herein are also implemented in some embodiments as a computer program causing a computer and/or a processor to perform the method, when being carried out on the computer and/or processor.
- a non-transitory computer- readable recording medium is provided that stores therein a computer program product, which, when executed by a processor, such as the processor described above, causes the methods described herein to be performed.
- Fig. 1 is a schematic illustration of a time-of-flight imaging device 100 (in the following: device 100).
- the device 100 comprises an emission unit 200 and a reception unit 300.
- the emission unit 200 and the reception unit 300 are, in conjunction, configured to measure a distance (as indicated in Fig. 1) to an object O located at the distance from the device 100 according to the principles of time-of-flight measurements as described hereinabove.
- the emission unit 200 is configured to emit, for a short duration, an outgoing signal L1-1.
- the outgoing signal is a timed light signal, for example a laser signal in the infrared part of the electromagnetic spectrum.
- the outgoing signal L1-1 is reflected, for example, by the object O to form an incident signal L1-2.
- the incident signal L1-2 is received by the reception unit with a time delay with respect to the timing of the emission of the outgoing signal.
- the time delay corresponds to the distance of the device 100 to the object O.
- the device 100 calculates the distance of the object O to the device 100.
- dToF: direct time-of-flight
- iToF: indirect time-of-flight
- the emission unit 200 and the reception unit 300 are mounted codirectionally, i.e. such that the emission unit 200 emits a signal into the same direction from which the reception unit 300 receives a signal.
- the emission unit 200 may be an emitter array that comprises a plurality of individual emitters.
- the emission unit 200 may in particular be an emitter array wherein individual emitters are arrayed in a two-dimensional pattern such that an emitting side of each individual emitter faces toward the emission direction as described hereinabove.
- the reception unit 300 may be a receiver array that comprises a plurality of individual receivers.
- the reception unit 300 may in particular be a receiver array wherein individual receivers are arrayed in a two-dimensional pattern such that a light-sensitive side of each individual receiver faces toward the reception direction as described hereinabove.
- Each receiver may in particular be a single-photon avalanche diode (SPAD).
- each receiver or group of receivers output timed digital data that indicates a number of received photons associated with timing information, the timing information indicating a time between emission of the signal L1-1 and reception of the reflection L1-2 caused by the signal L1-1.
- Each SPAD outputs a single event per photon according to the known functioning of SPADs.
- the histogram as shown in Fig. 2 may be constructed based on an output signal of a single receiver in the reception unit, for example a SPAD, or based on an output signal of one of a plurality of receivers in the reception unit superposed with other receivers of the plurality of receivers.
- the receiver outputs, as described above, a sensor signal indicating a number of photons associated with timing information as data. According to the timing information, the received data is binned to construct the histogram as shown.
- the histogram may show as a feature, for example, a uniform background (as indicated). This uniform background may be interpreted as being caused by ambient light present in the surrounding of the device 100.
- the histogram may show a peak (as indicated). This peak may be assumed to be associated with reception of the reflected signal LI -2. The peak may be detected by checking whether the photon count of the histogram at any position exceeds a predetermined detection threshold (as indicated). The position of the peak is then the position of a bin on the x-axis with a photon count exceeding the predetermined threshold. The position of the peak may, in particular, be a peak with a maximum photon count that also exceeds the predetermined threshold. As a result, a valid reflection is defined only in a case where the identified peak passes the predetermined threshold.
- the distance to the object O may then be indicated by the position of the peak on the x-axis.
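The threshold-based peak detection over the histogram bins, as described above, can be sketched as follows; the counts and threshold values are illustrative:

```python
def detect_reflection(hist_counts, detection_threshold):
    """Locate the bin with the maximum photon count and report it as the
    peak position only if its count exceeds the detection threshold;
    otherwise no valid reflection is detected."""
    peak_bin = max(range(len(hist_counts)), key=lambda i: hist_counts[i])
    if hist_counts[peak_bin] > detection_threshold:
        return peak_bin
    return None  # peak indistinguishable from the ambient-light background

# Flat ambient background of a few counts per bin, with a clear peak in bin 6.
counts = [4, 3, 5, 4, 4, 3, 21, 5, 4, 3]
print(detect_reflection(counts, detection_threshold=10))  # → 6
print(detect_reflection(counts, detection_threshold=30))  # → None
```

Only a peak that passes the threshold is treated as a valid reflection; the returned bin index maps to a distance via the time-of-flight of that bin.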
- Necessary subcomponents are, for example, the emission unit 200, the reception unit 300 and control and analysis circuitry used to analyze the output of the reception unit 300.
- a power of the interface between a host device and the time-of-flight device 100 may be optimized. Furthermore, other parameters, such as distance, number of returned points and/or latency may be optimized.
- Fig. 3 shows a schematic illustration of the device 100 according to the first embodiment.
- the device comprises a receiver (Rx) 301, an emitter (Tx) 201, a control unit 110 and a scheduling unit 120.
- the device is configured to communicate, through a suitable data transmission means, with an external estimation unit 400.
- the receiver 301 may be comprised in or be identical with the reception unit 300 of Fig. 1. Moreover, a plurality of receivers 301 may be comprised in a receiver array. The receiver array comprising the plurality of receivers 301 may be identical to the reception unit 300.
- the emitter 201 may be comprised in or be identical with the emission unit 200 of Fig. 1. Moreover, a plurality of emitters 201 may be comprised in an emitter array. The emitter array comprising the plurality of emitters 201 may be identical to the emission unit 200.
- the control unit 110 controls an operation of the emitter 201.
- the control unit 110 may control an operation of the emitters of the emitter 201 as one or more groups. There are, however, embodiments wherein the control unit 110 selectively controls operation of individual emitters. “Controlling an operation” may comprise causing an activation and/or a deactivation of emitters or groups of emitters, or may comprise controlling a strength of an emitted signal Ll-1, or may comprise controlling a timing of activation and/or deactivation of the emitter or groups of emitters and/or the power of the pulse. Additionally or alternatively, a number of pulses to be emitted, a length of the pulse, or a duty cycle (periodicity) of the pulses may be controlled.
- Dots are arranged into groups as discussed hereinabove, while in each time slot one (or more) groups are illuminated by associated VCSELs and processed by associated SPADs.
- control unit 110 selectively controls operation of individual receivers 301 comprised in a receiver array.
- Receivers 301 comprised in a receiver array may be said to be addressable.
- control unit 110 selectively controls operation of individual emitters 201 comprised in an emitter array.
- Emitters 201 comprised in an emitter array may be said to be addressable.
- Controlling an operation may comprise causing an activation and/or a deactivation of receivers or groups of receivers, or may comprise controlling a timing of activation and/or deactivation of the receiver or groups of receivers. Additionally or alternatively, a number of SPADs per receiver, a gating (i.e. a time window defining a length of activity of a SPAD or SPADs) and SPAD parameters such as deadtime may be controlled. Furthermore, parameters relating to histogram processing, such as noise estimation, threshold selection and others, may be controlled.
- the scheduling unit 120 schedules the control performed by the control unit 110. For example, the scheduling unit 120 schedules the activation of the emitter 201 and the receiver 301, or the activation of individual receivers. The scheduling provided by the scheduling unit 120 may affect all control performed by the control unit 110.
- the external estimation unit 400 estimates a required operation for either the receiver 301 or the emitter 201 or both the receiver 301 and the emitter 201.
- the external estimation unit 400 provides said estimates to the scheduling unit 120.
- the scheduling unit 120 provides scheduling to the control unit such that the operation of either the receiver 301 or the emitter 201, or both the receiver 301 and the emitter 201, is performed according to the estimate provided by the estimation unit.
- the external estimation unit 400 may comprise subunits as required for it to function.
- the external estimation unit 400 may comprise a memory unit wherein sensor signals received from the receiver 301 are temporarily, until further processing by the estimation unit 400, or permanently stored.
- the control unit 110 causes the emitter 201, or emitters comprised in the emitter 201, to operate, emitting an outgoing signal L1-1. Further, the control unit 110, as scheduled by the scheduling unit 120, causes the receiver 301, or receivers comprised in the receiver 301, to operate.
- the receiver 301, upon reception of the incident signal L1-2 caused by the outgoing signal L1-1, outputs a sensor signal or a plurality of sensor signals to the estimation unit 400.
- the estimation unit 400 analyses the received sensor signals, for example by means of the histogram, to derive the timing of the peak in the histogram described with reference to Fig. 2.
- the timing of the peak in the histogram is provided to the scheduling unit 120.
- the histogram as described with reference to Fig. 2 may be generated by the receiver 301 or, if a plurality of the receivers 301 are comprised by the reception unit 300, by the reception unit 300 for each comprised receiver 301.
- the scheduling unit 120 generates scheduling information, based on the estimate provided by the estimation unit 400.
- Fig. 4 shows a schematic illustration of the device 101 according to a modification of the first embodiment.
- the device 101 of Fig. 4, unlike the device 100 of Fig. 3, comprises, instead of the external estimation unit 400, an internal estimation unit 401 and, in addition, a memory unit 111.
- the internal estimation unit 401, in conjunction with the memory unit 111, performs the same functionality as the external estimation unit 400 described with respect to Fig. 3 hereinabove.
- Fig. 5 shows a schematic illustration of the receiver 301a, as comprised in the device 100 shown in Fig. 3 and in the device 101 shown in Fig. 4.
- the receiver 301a comprises analog circuitry 310, a time-to-digital conversion unit 311, a histogram building unit 312, a processing unit 313 and a reflection peak detection unit 314.
- the receiver 301a may be identical to the receiver 301 of Fig. 4.
- the receiver 301a as shown in Fig. 5 corresponds to a known ToF receiver, such as a SPAD-based receiver.
- the analog circuitry 310 is a physical light reception arrangement of the receiver.
- the analog circuitry 310 receives incident radiation, such as the incident signal L1-2, and outputs a detected photon as an electrical signal.
- the time-to-digital conversion unit 311 generates a timestamp for each photon detected by the analog circuitry 310.
- the histogram building unit 312 increases the relevant timing bin in the histogram by +n (n being the number of events received by the SPADs), thus building a histogram as described with respect to Fig. 2. Specifically, the histogram building unit 312 identifies and counts photons (in a process called histogram building) and places them in the correct timing bin.
- the receiver may comprise multiple SPADs.
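By way of a hedged illustration, the accumulation performed by the time-to-digital conversion unit 311 and the histogram building unit 312 may be sketched as follows; the bin width and bin count are assumptions chosen for the example, not values prescribed by the present disclosure:

```python
def build_histogram(timestamps_ns, bin_width_ns=1.0, num_bins=256):
    """Accumulate photon-detection timestamps (as output by a
    time-to-digital converter) into timing bins: each detected event
    increments the bin covering its arrival time; events outside the
    histogram range are discarded."""
    histogram = [0] * num_bins
    for t in timestamps_ns:
        b = int(t // bin_width_ns)
        if 0 <= b < num_bins:
            histogram[b] += 1
    return histogram

# Three photons arriving near 10 ns (a reflection) plus one stray count.
hist = build_histogram([10.2, 10.4, 10.7, 42.0])
```

With multiple SPADs per receiver, each detection event would contribute one such increment, so a bin may grow by +n per cycle as described above.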
- Fig. 2 is only a visual representation of a histogram intended to aid in the understanding of the present technology.
- the histogram as generated by the histogram building unit 312 is not required to be visually represented at any stage of the functioning of the device 100. Instead, the histogram may only be generated as a digital data structure.
- the histogram is then output to the processing unit 313.
- the processing unit 313 comprises circuitry that is configured to process the histogram received from the histogram building unit.
- the processing unit 313 may perform post-processing of the histogram data, such as smoothing operations, background reduction, background subtraction and the like.
- the processed histogram is output to the reflection peak detection unit 314.
- the reflection peak detection unit 314 performs detection of a peak in the histogram, as described with respect to Fig. 2, by applying, for example, the predetermined threshold. Based on the detected peak, the distance to the object O that reflected the signal can be calculated, as described hereinabove.
- Fig. 6 illustrates, by means of a signal-to-noise ratio (SNR)-exposure time graph, an application of a cutoff condition to determine an optimal illumination duration according to the “just enough power” (JEP) concept as known from the state of the art.
- the SNR is defined as SNR = N_signal/√N_noise, where N_signal and N_noise refer to the number of counts of signal and noise respectively. As N of both signal and noise increases, the SNR is improved.
- a first dashed line, labeled “THR”, illustrates a preset value of the SNR that acts as a threshold to determine the optimal illumination duration of the receiver in order to distinguish, in the measured incident radiation, a signal as opposed to noise.
- THR indicates a desired SNR for classifying radiation incident on a receiver 301a as a signal, as opposed to noise (i.e. ambient light, detector self radiation, device internal noise etc.).
- THR may be set, for example, to a constant value.
- THR may be a variable set by an algorithm.
- the term “SNR of the incident dot” specifically refers to a SNR of the peak value of the histogram.
- the histogram processing unit 313 continuously looks for the peak and calculates the noise of the instantaneous histogram that is being accumulated, and from the peak and the noise calculates an instantaneous SNR. If the instantaneous SNR is high enough, the emitter and receiver operations corresponding to the dot may be terminated and the peak timing may be output.
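A minimal sketch of this cutoff condition follows; the shot-noise-style SNR estimate (histogram peak over the square root of the mean bin count) and the threshold value are illustrative assumptions, not the prescribed implementation:

```python
import math

def instantaneous_snr(histogram):
    """Estimate an instantaneous SNR as the histogram peak divided by
    the square root of the mean bin count (a crude shot-noise-style
    background estimate, assumed here for illustration)."""
    peak = max(histogram)
    background = sum(histogram) / len(histogram)
    return peak / math.sqrt(background) if background > 0 else 0.0

def should_stop(histogram, thr=5.0):
    """JEP-style cutoff: emitter and receiver operation for the dot may
    be terminated once the instantaneous SNR exceeds THR."""
    return instantaneous_snr(histogram) >= thr

weak_hist = [1, 2, 1, 3, 1, 2, 1, 2]     # no clear reflection peak yet
strong_hist = [1, 2, 1, 30, 1, 2, 1, 2]  # distinct peak in bin 3
```

In operation, the check would run on the instantaneous histogram as it accumulates, so exposure ends as soon as the peak is sufficiently distinct.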
- the time elapsed since a start of the measurement, indicated, for example, by an origin of the coordinate system shown in Fig. 6, to the time indicated by the intersection of the line SNR(t) and THR is the optimal illumination duration.
- the optimal illumination duration may be a minimal illumination duration.
- a sensed signal: an incident signal that has been distinguished from noise due to the SNR of the receiver receiving the signal having exceeded THR, i.e. the peak of the signal having exceeded the noise level.
- Fig. 6 and the graph shown therein are merely a visual aid.
- the intersection of the line SNR(t) and THR is, for example, determined numerically by comparing the current value of the SNR with THR.
- the present technology does not require an actual numerical value for the optimal illumination duration to be determined.
- fulfillment of the condition that the current value of the SNR exceeds THR may lead to further control steps being performed.
- the condition that the current value of the SNR exceeds THR for a particular receiver may lead to that receiver being deactivated.
- the condition that the current value of the SNR exceeds THR for a particular receiver may lead to a particular emitter emitting the signal L1-1 sensed by the receiver being deactivated.
- Fig. 7 symbolically illustrates the method according to the first embodiment of the present disclosure.
- an x-axis of the graph corresponds to an exposure time, measured from the start of an image frame of the ToF device.
- a y-axis of the graph corresponds to a number of active VCSELs (also called “emitters” in the following) with respect to the exposure time.
- Each active VCSEL emits a corresponding emitted signal L1-1.
- Each emitted signal L1-1 generates, if intersecting a surface of the object O as described hereinabove, a dot.
- the dot may be measured by a receiver 301a in a receiver array due to being reflected back to the device 100.
- the device 100 may, in a non-limiting example, be configured to operate at 60 frames per second (FPS), corresponding to a frame processing time of ~17 msec (milliseconds). If configured as a dToF system, the device 100 may be set to illuminate and receive up to 600 dots per frame and process 50 dots in parallel. That means that to process 600 dots the system needs to work with at least 12 processing steps, denoted here as groups.
- Operation of the VCSELs is sectioned into dot groups G1-GN that are emitted by the active VCSELs sequentially in time.
- a first dot group G1, a second dot group G2, a third dot group G3 and so on.
- GN indicates a last dot group of the image frame.
- Each VCSEL emits for a timespan corresponding to an operating duration 20 of the dot group G.
- a first operating duration 20-1 corresponds to the first dot group G1
- a second operating duration 20-2 corresponds to the second dot group G2 and so on.
- a second dot group G2 is emitted. Emission of the second dot group G2 may commence after a delay (as illustrated) or may commence immediately following completion of the first dot group G1. Sequential emission of the dot groups G1-GN proceeds until the end of the image frame.
- a number N of the dot groups may be determined by dividing a desired total number of dots in an image frame by a number of VCSELs per dot group G1 intended to be committed to the task at the point of implementation. For example, if the desired number of dots in an image frame is 600 and 50 VCSELs per dot group G1-GN are intended to be committed to the task, then the number of dot groups N is 12.
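The group-count determination described above reduces to a ceiling division, sketched here for the 600-dot, 50-VCSEL example (the specific numbers are taken from the example above, not fixed by the disclosure):

```python
import math

def num_dot_groups(total_dots, dots_per_group):
    """Number N of sequential dot groups needed so that every dot of an
    image frame is covered when dots_per_group dots are illuminated and
    processed in parallel."""
    return math.ceil(total_dots / dots_per_group)

# 600 dots per frame with 50 VCSELs per group -> 12 groups
n_groups = num_dot_groups(600, 50)
```

The ceiling ensures a final, partially filled group when the total is not an exact multiple of the group size.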
- each VCSEL has a specific emission duration 30.
- a first VCSEL may emit for a first emission duration 30-1.
- a second VCSEL may emit for a second emission duration 30-2.
- the designation first, second, third, etc. is merely a convenience for the sake of explanation. It is noted that each VCSEL is uniquely identifiable by an address or the like.
- a first emitter in the first dot group may not necessarily be identical to a first emitter in the second dot group and so on.
- Each receiver sensing a dot is related, by the circuitry, to the emitter emitting the dot.
- each emitter may be coupled to a specific receiver (or a plurality of receivers). This coupling is provided by the physical implementation of the dToF system.
- Each emitter or group of emitters is linked to a specific histogram (i.e. a receiver or group of receivers).
- the controlling of the emitter may be provided by an inter-integrated circuit (I 2 C) interface, for instance.
- the operating duration 20 of the dot group G is equal to the longest optimal illumination duration among the dots of the dot group.
- the longest optimal illumination duration among all dots in the dot group G1 corresponds to the emission duration 30-M of the mth still emitting emitter.
- the emission duration 30-M of the mth emitter in the first dot group is equal to the operating duration 20-1 of the first dot group G1.
- the emission duration 30-M of the mth emitter is again equal to the operating duration 20-2 of the second dot group G2, but not equal to the operating duration 20-1 of the first dot group G1.
- the emission durations of emitters in different groups are only shown to be the same by way of example. However, this illustration is to be understood as symbolic and non-limiting.
- the emission duration of individual emitters in different groups may be different in accordance with the present disclosure. In each group different dots may be activated, corresponding to different areas in the scene and, hence, experiencing different illumination times.
- the operating duration 20 of a dot group G is not preset, but determined during operation based on the emission duration of the longest emitting emitter among the emitters of the dot group G, while, at the same time, the emission duration of the longest emitting emitter among the emitters of the dot group G is likewise not preset, but determined during operation based on whether the dot emitted is sensed according to the method of Fig. 6.
- the device 100 may be configured to cause all emitters to cease emitting and all receivers to cease operating once a predetermined number of dot groups have been emitted or once a predetermined time, for example a frame, has elapsed, irrespective of whether all dots have been sensed.
- the emission duration may be limited to a worst-case estimated illumination time, in case this information is available from further processing at an upper layer in a host device, for example.
- Good performance may further be achieved if dots are sorted according to estimated illumination time prior to a group partition. Furthermore, the emission duration may be set to a maximum time according to remaining total illumination time divided by a number of remaining groups.
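The sorting and budget-capping heuristics above may be sketched as follows; the partitioning and the cap formula (remaining total illumination time divided by the number of remaining groups) follow the text, while the per-dot estimated times and frame budget are invented inputs:

```python
def schedule_groups(est_times_ms, dots_per_group, frame_budget_ms):
    """Sort dots by estimated illumination time, partition them into
    groups, and cap each group's operating duration by the remaining
    frame budget divided by the number of remaining groups."""
    order = sorted(range(len(est_times_ms)), key=lambda i: est_times_ms[i])
    groups = [order[i:i + dots_per_group]
              for i in range(0, len(order), dots_per_group)]
    remaining = frame_budget_ms
    caps = []
    for g_idx, group in enumerate(groups):
        cap = remaining / (len(groups) - g_idx)
        caps.append(cap)
        # the group actually runs for its longest dot, clipped to the cap
        remaining -= min(max(est_times_ms[i] for i in group), cap)
    return groups, caps

# Four dots with estimated times 3, 1, 2 and 4 ms; two dots per group,
# 10 ms of total illumination time in the frame.
groups, caps = schedule_groups([3.0, 1.0, 2.0, 4.0], 2, 10.0)
```

Sorting by estimated time groups dots that finish nearly simultaneously, so a fast-finishing group frees its unused budget for later, slower groups.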
- Fig. 8 symbolically illustrates a first modification of the method according to the first embodiment.
- the method in Fig. 8 is similar to the method in Fig. 7.
- in the method of Fig. 7, the operating duration 20 of the dot group G is equal to the emission duration of the longest emitting emitter (e.g. the mth emitter).
- in the first modification, the operating duration 20 of the dot group G is equal to the emission duration not of the said mth emitter, but of a shorter emitting emitter, e.g. the (m-1)th emitter (i.e. the second-to-longest emitting emitter) or the (m-2)th emitter (i.e. the third-to-longest emitting emitter).
- in the present example, the operating duration 20 of the dot group G is equal to the emission duration of the (m-1)th emitter.
- An mth emitter of the first dot group G1, in the present example, may be called a remaining emitter of the first dot group (R1).
- An mth emitter of the second dot group G2, in the present example, may be called a remaining emitter of the second dot group (R2).
- the operation of the modification of the first embodiment is as follows: In the first dot group G1, before the dot of the (m-1)th emitter is sensed, the method proceeds as described in Fig. 7. Once the dot of the (m-1)th emitter is sensed, operation of the first dot group G1 ceases and the histograms are generated based on the sensed dots of the first dot group, but the mth emitter of the first dot group (i.e. the remaining emitter of the first dot group R1) keeps emitting and the histogram based on the dot emitted by the remaining emitter of the first dot group G1 is not reset. Then operation of the second dot group G2 commences. The remaining emitter of the first dot group R1 is attached to the second dot group G2.
- the process of attaching an emitter that emitted a dot which has not been sensed in a first dot group G and a histogram based on said dot to a second, subsequent, dot group may be called regrouping.
- the corresponding receiver may likewise be attached to the second, subsequent, dot group.
- individual emitters may be regrouped multiple times, such that, for example, the dot emitted by the remaining emitter of the first dot group R1 may be sensed once the remaining emitter of the first dot group R1 is attached, for example, to the third dot group G3 or dot groups subsequent to G3.
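The regrouping behavior described above can be sketched as a small simulation; dot identifiers, required sensing times and group cutoffs are invented for illustration, and the preserved elapsed emission time stands in for the non-reset histogram of a remaining emitter:

```python
def run_groups(groups, required_ms, group_cutoff_ms):
    """Illustrative regrouping: a dot not sensed within its group's
    operating duration is carried over ("regrouped") into the next
    group; its elapsed emission time is preserved rather than reset,
    standing in for the non-reset histogram of the remaining emitter."""
    carried = []   # (dot id, elapsed emission time in ms)
    sensed = {}    # dot id -> total illumination needed to sense it
    for group, cutoff in zip(groups, group_cutoff_ms):
        active = carried + [(dot, 0.0) for dot in group]
        carried = []
        for dot, elapsed in active:
            if elapsed + cutoff >= required_ms[dot]:
                sensed[dot] = required_ms[dot]           # dot is sensed
            else:
                carried.append((dot, elapsed + cutoff))  # regroup dot
    return sensed, [dot for dot, _ in carried]

# Dot 1 needs 5 ms but each group operates for only 2 ms, so it is
# regrouped once and remains unsensed after two groups.
required = {0: 1.0, 1: 5.0, 2: 1.5}
sensed, leftover = run_groups([[0, 1], [2]], required, [2.0, 2.0])
```

Because the elapsed time is carried over, a regrouped dot accumulates exposure across groups instead of starting from zero, mirroring the non-reset histogram.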
- Fig. 9 symbolically illustrates a second modification, based on the first modification, of the method according to the first embodiment.
- the method in Fig. 9 is similar, including regrouping of emitters, to the method in Fig. 8.
- in addition, the individual emitters cease emitting once a preset maximum emission time 50 is reached.
- an emitter that is regrouped for example the remaining emitter of the second dot group R2 (when to be regrouped from the second dot group G2 to the third dot group G3), ceases emitting once it has emitted for the maximum emission duration 50.
- the maximum emission duration is a preset value that may be chosen by an operator or at the time of implementation of the method.
- if an emitter ceases emitting due to reaching the maximum emission time, instead of its emitted dot being sensed, its associated histogram may either be used in analysis or discarded, as chosen at the point of implementation.
- a priori information may comprise perception properties (e.g. cars, persons etc.) and a classification of objects or scenery elements intersected by the dots, such as:
  - Sky / not sky
  - Double reflection dots
  - Surface characteristics (Lambertian/specular)
  - Estimated illumination time per dot as described hereinabove
  - Usage of information from previous frames
  - Usage of information from other modalities (e.g. RGB image)
  - Avoiding illumination of dots with a low percentile chance to be returned (far away and/or with low reflectivity of an intersected surface)
- a conventional algorithm would segment the dots into 12 groups, with each group containing 50 dots.
- a fixed illumination time is allocated across the frame. For instance, with a frame duration of 24 msec, this results in a fixed illumination time of 2 msec per dot.
- the first 50 dots are assigned to the initial group and the exposure process commences. Dots that require additional exposure time automatically transition to the subsequent group, while those that have completed exposure are substituted with new, previously unallocated dots, in a continuous cycle.
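The continuous substitution cycle described above may be sketched as follows; the slot count and per-dot exposure needs are invented inputs, with one loop iteration standing in for one exposure step:

```python
from collections import deque

def rolling_exposure(needed_steps, slots=2, max_steps=100):
    """Keep a fixed number of dots exposing concurrently: a dot that has
    completed its exposure is immediately substituted with the next
    previously unallocated dot, while an unfinished dot simply carries
    over into the next step."""
    pending = deque(range(len(needed_steps)))  # dots not yet started
    active = {}                                # dot id -> steps so far
    finished = []                              # completion order
    for _ in range(max_steps):
        while len(active) < slots and pending:
            active[pending.popleft()] = 0      # substitute in a new dot
        if not active:
            break
        for dot in list(active):
            active[dot] += 1
            if active[dot] >= needed_steps[dot]:
                finished.append(dot)
                del active[dot]
    return finished

# Four dots needing 1, 3, 1 and 2 exposure steps, two exposure slots.
completion_order = rolling_exposure([1, 3, 1, 2], slots=2)
```

Short-exposure dots complete early and are replaced at once, so no slot idles while a long-exposure dot continues accumulating.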
- One algorithm aims to group dots with similar characteristics together. For instance, dots expected to have a comparable exposure time, typically due to a similar distance, are allocated to the same group to ensure they complete nearly simultaneously. This strategy reduces system idle time, as it prevents the scenario where dots that have completed their exposure must wait for a single dot taking an extended time to finish.
- a different algorithm groups dots based on specific attributes, such as the number of reflections. For example, dots are organized into groups that share the same reflection count — a group for single reflections, another for double reflections, and so on. This is particularly crucial when histogram processing must be configured on a per-group basis rather than for individual dots.
- Gating configurations which, when the expected distance of a dot is known, can adjust the histogram processing to focus on relevant time bins.
- a critical aspect to consider is which group to prioritize: the one with dots that are easier to measure or those that are more challenging.
- the choice depends on the objective, whether it is to acquire measurements from as many dots as possible or to prioritize data from distant dots.
- the scheduling algorithm can be complex and has significant implications. For instance, prioritizing distant dots could consume so much exposure time that there is insufficient time remaining for other dots. Conversely, focusing on nearer dots first might leave inadequate time for capturing distant dots. These decisions also influence the maximum exposure time set per group, with too long an exposure potentially limiting the time available for subsequent groups and too short an exposure risking premature termination of dot measurement before obtaining sufficient data.
- a more tailored algorithm could align the scheduling of dots with specific application requirements. For instance, a simultaneous localization and mapping (SLAM) application on the host may determine certain dots as more critical based on their relevance to features within the scene. Consequently, the SLAM application’s prioritization of dots would be factored into the scheduling process.
- SLAM simultaneous localization and mapping
- Adaptive exposure time in SIT: The invention details adaptive exposure time control, with further considerations outlined as follows:
- a smart stop algorithm may cease dot emission in response to histogram analysis, halting dots when a peak exceeds a certain threshold. Additionally, an algorithm might preemptively stop emissions if it predicts no reflection or a reflection at a distance too great to resolve within the maximum exposure time. This decision could also take into account the SNR of the histogram, ceasing exposure when the SNR falls below a predefined threshold, thereby saving resources by not continuing exposure.
- the management of the ToF system may use a priori information, which can be sourced from various inputs:
- Data from previous dToF measurements can help estimate parameter distributions to set maximum exposure times or to allocate dots based on segmented fields of view.
- RGB sensors combined with computer vision or neural network algorithms, can predict scene structure and estimate distances or expected exposure times.
- Photon count sensors potentially integrated with the dToF sensor itself, offer another method for obtaining a priori information.
- the sensor may miss additional reflections generated by the signal emitted by an emitter. There may be situations wherein detection of more than a single reflection is advantageous, in which case emission and/or reception of the dot should not cease once a single peak with sufficient SNR is detected.
- Figs. 10a-c show a situation in which multiple reflections per dot may be expected.
- Fig. 10b shows an enlarged view of the part of the scene illustrated in Fig. 10a that is subtended by the first dot Dot1.
- the first dot Dot1 encounters a wall, corresponding, in Fig. 10b, to a first area A1. Therefore, the first dot Dot1 is expected to be reflected, in the first area A1, once, producing a single reflection.
- Fig. 10c shows an enlarged view of the part of the scene illustrated in Fig. 10a that is subtended by the second dot Dot2.
- the second dot Dot2 encounters, in order of time since emission of the dot, a lampshade (cf. Fig. 10a), corresponding to a first area A1, producing a first reflection, then a wall (cf. Fig. 10a), corresponding to a second area A2, producing a second reflection. Therefore, the second dot Dot2 projected into the scene of Fig. 10a at this position is expected to produce two distinct reflections.
- the first area A1 and the second area A2 are visually separated by an edge E1.
- dots may produce more than two reflections.
- a third dot (not illustrated), projected to an upper left corner of the sofa shown in Fig. 10a, may encounter, in order of time since emission of the dot, a cushion (cf. Fig. 10a) on the sofa, producing a first reflection, then a structure of the sofa (cf. Fig. 10a), producing a second reflection, then a wall (cf. Fig. 10a), producing a third reflection.
- Figs. 11a and 11b illustrate a time sequence of a received signal caused, for example, by an outgoing signal L1-1, corresponding to dots as shown in Fig. 10a.
- Each of Fig. 11a and Fig. 11b may correspond, for example, to a histogram as shown in Fig. 2.
- a signal may thus be represented by a count of received photons over time.
- Fig. 11a shows a signal with a single reflection, as may correspond to the first dot Dot1 shown in Fig. 10a and Fig. 10b.
- a single reflection, corresponding to the peak (“1 st peak”) is comprised in the signal.
- the present embodiment provides a function to estimate the number of expected reflections in a signal corresponding to a dot. Then, once the number of expected reflections is known, the emitter array 200 and the receiver array 300 are operated until a required number of reflections, based on the number of expected reflections, has been sensed. For example, if the number of expected reflections for a particular dot is determined to be greater than one, an emitter emitting the dot and a receiver receiving the dot are operated until at least two reflections of the dot have been sensed.
- the device 100 may be configured to not cause an emitter and/or receiver to cease operating according to the method of Fig. 6 before the required number of reflections has been sensed. It is to be noted that the number of expected reflections is calculated before measuring the actual number of reflections. Thus, the number of expected reflections is not a measured number but a predicted number.
- Figs. 12a-c are schematic illustrations of an apparatus configured to estimate the number of expected reflections.
- the apparatus comprises the device 100 as shown in Fig. 12a and a host device 800, as shown in Fig. 12b.
- the device 100 shown in Fig. 12A comprises the control unit 110, the emitter 201 and the receiver 301.
- the function of the control unit 110, the emitter 201 and the receiver 301 respectively has been described, for example, with reference to Fig. 3 hereinabove.
- the output of the receiver 301 in addition to being provided to the control unit 110 in order to control the operation of the emitter 201 and the receiver 301, for example according to the method of Fig. 6, is provided to a reflection algorithm 810.
- the reflection algorithm 810 is a computing unit that estimates the number of expected reflections for an individual dot.
- the reflection algorithm 810 is comprised in the parent unit 800 or, in other words, executed by the parent unit 800.
- the parent unit 800 may be any electronic computing device, for example, a computer, a mobile terminal, a microprocessor or the like.
- the device 100 may be comprised in the parent unit 800.
- the parent unit 800 may be a computing block run by a host device or any other computing infrastructure connected to the device 100.
- the reflection algorithm outputs a number, for example an integer, that indicates a number of expected reflections corresponding to a dot.
- the number of expected reflections is provided to the device 100 (which, in the present example, may be a dToF system 100)
- the reflection algorithm 810 may optionally use image data provided by an RGB image sensor 600. If the reflection algorithm uses image data provided by the RGB image sensor, then the number of expected reflections may be obtained as follows:
- As shown in Fig. 10c, one reflection each is expected from the first area A1 and the second area A2, i.e. two reflections. I.e. the second dot Dot2 generates two reflections.
- the first area A1 is separated from the second area A2 by the edge E1.
- By detecting a presence of the edge E1, the number of expected reflections corresponding to the second dot Dot2 can be calculated by incrementing the number of detected edges in the area subtending the dot by one.
- the edge E1 may be detected using an edge detection algorithm. Edge detection algorithms are widely known to the skilled person and any algorithm capable of detecting an edge in image data may in principle be used. Only as an example, representing a wider range of edge detection algorithms, the known Canny edge detector is mentioned.
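As a hedged sketch of the edges-plus-one rule, the example below uses a simple 1-D gradient threshold in place of a full edge detector such as Canny; the patch contents and threshold are invented for illustration:

```python
import numpy as np

def expected_reflections(patch, grad_thresh=0.5):
    """Estimate the number of expected reflections for a dot as the
    number of edges detected in the image patch subtended by the dot,
    plus one. A 1-D gradient threshold on the horizontal intensity
    profile stands in for a full edge detector such as Canny."""
    profile = np.asarray(patch, dtype=float).mean(axis=0)
    grad = np.abs(np.diff(profile))
    strong = grad > grad_thresh
    # count contiguous runs of strong gradient as individual edges
    rises = int(np.sum(strong[1:] & ~strong[:-1]))
    n_edges = rises + (1 if strong.size and strong[0] else 0)
    return n_edges + 1

# One vertical edge (lampshade vs. wall), as for Dot2 in Fig. 10c:
patch_two_areas = np.array([[0.1] * 4 + [0.9] * 4] * 6)
# A uniform wall, as for Dot1 in Fig. 10b:
patch_uniform = np.full((6, 8), 0.5)
```

A production system would use a 2-D detector and connected-component counting; the plus-one rule itself is unchanged.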
- feature detection, computer vision or a neural network-based approach may likewise be used.
- the number of expected reflections may be estimated using previous ToF measurements.
- a photon count image may be provided by the ToF sensor.
- the photon count image may be analyzed like a monochrome image.
- a scene composition obtained through ToF measurements may be analyzed using, for example, a plane estimation algorithm.
- Plane estimation algorithms are known to the skilled person. For example, among a wider range of algorithms that may in principle be used, the known Random Sample Consensus (RANSAC) algorithm is mentioned.
- the number of expected reflections corresponding to the dot may then be calculated by incrementing a number of detected planes in the area subtending the dot by one.
- the receiver 301c comprises analog circuitry 310, a time-to-digital conversion unit 311, a histogram building unit 312, a processing unit 313, a noise statistics echo peak detection unit 314b and a JEP-Algorithm unit 315. Note that the receiver 301c may be replaced with the receiver 301a of Fig. 5.
- the function of the analog circuitry 310, the time-to-digital conversion unit 311, the histogram building unit 312 and the processing unit 313 is as described, for example, with reference to Fig. 5 hereinabove.
- the noise statistics echo peak detection unit 314b comprises the same functionality as the reflection peak detection unit 314 of Fig. 5, but is further configured to provide noise statistics according to known methods of the ToF principle.
- the JEP-Algorithm unit 315 provides the information required by the control unit 110 to control the emitter 201 and the receiver 301c according to the method described with respect to Fig. 6 hereinabove (i.e. causing a ceasing of emission by the emitter 201 and/or reception by the receiver 301c once an SNR of the received signal exceeds a preset threshold).
- the JEP-Algorithm unit 315 is provided by the parent unit 800 with the number of expected reflections separately for each dot.
- the JEP-Algorithm unit 315 may specifically override the method according to Fig.
- Blurred images are a well-known issue that can occur due to the movement of objects within a scene or due to the movement of the camera itself.
- In a direct Time-of-Flight (dToF) system, one pixel consists of a Vertical Cavity Surface Emitting Laser (VCSEL) on the transmitter (TX) side and a Single-Photon Avalanche Diode (SPAD) on the receiver (RX) side.
- the VCSEL emits a light pulse, which travels to an object and is then reflected back to the SPAD.
- the VCSEL emits a train of light pulses, which travel towards an object in the scene and are then reflected back to the sensor.
- the light pulse can be reflected back at different times, causing the image to appear blurred. This is because the movement causes the distance between the camera and the object to change, resulting in the light pulse taking a different amount of time to travel to and from the object. This can cause the image to appear distorted and make it difficult to accurately determine the distance to the object.
- post-processing is performed to detect the peak of the histogram and verify if its signal-to-noise ratio (SNR) is above a predefined threshold.
- the distance change can be significant enough to cause the reflected photons to be received at different distances, potentially moving them to the next histogram bin. This can result in a “smearing” effect on the histogram, where the peak of the histogram becomes less distinct and the overall shape becomes more spread out. This smearing can impact the accuracy of the distance measurement and make it more difficult to detect the peak of the histogram and verify if its signal-to-noise ratio (SNR) is above a predefined threshold.
- the embodiments described below provide a system that involves movement and distance calculations, using a SPAD sensor and an illuminator.
- the embodiments provide a mechanism that can identify such movements and react accordingly. This mechanism detects and responds to movements in order to improve the accuracy of distance calculations and reduce power consumption. It may involve adjusting the exposure time or illumination level, or implementing other strategies to better handle movement in the system.
- a mechanism to identify and react to movements in the context of a Time-of-Flight (ToF) system should take into account the following:
- the mechanism is able to detect and track the movement and speed of the camera in order to adjust the ToF system’s operation accordingly. This may involve adjusting the exposure time, illumination level, or other parameters to compensate for the camera’s movement.
- the mechanism is able to detect and track moving objects within the scene. This information can be used to adjust the ToF system’s operation to better handle moving objects and reduce the impact of blur on power consumption and distance calculations.
- the mechanism can take into account information about the scene: The mechanism may have access to information about the scene, such as object distances and flatness. This information can be used to optimize the ToF system’s operation and improve the accuracy of distance calculations.
- the mechanism can take into account exposure time: The mechanism may for example be able to adjust the exposure time of the ToF system in response to movements and changes in the scene. This can help to reduce power consumption and improve the accuracy of distance calculations.
- the new mechanism can better identify and react to movements in the context of a ToF system, improving the overall performance and efficiency of the system.
- more information about the algorithm that can be used to react to movements in a Time-of-Flight (ToF) system is provided in the following.
- the algorithm can perform the following actions for each group of pixels:
- the algorithm may stop or shorten the exposure time of the pixels in order to reduce power consumption and improve the accuracy of distance calculations.
- Delay activation of selected dots: The algorithm may delay activation of selected dots.
- the algorithm can delay the activation of selected dots until the movement in the scene has slowed down.
- an IMU is an electronic device that measures and reports a body’s specific force, angular rate, and sometimes the magnetic field surrounding the body, using a combination of accelerometers, gyroscopes, and magnetometers.
- the system can detect and track movement and orientation changes, allowing it to adjust its operation accordingly and reduce power consumption.
- an IMU can be used to detect and track movement in the scene, allowing the system to adjust exposure time, illumination level, and other parameters to reduce power consumption and improve distance calculations.
- movement may be tracked by other means as well. For example, movement may be detected using RGB information from an RGB sensor using a computer vision algorithm. Furthermore, ToF information may be used to detect movement.
- the system can estimate how many pixels will be shifted for every dot, resulting in a value called Spixel.
- TH a predefined threshold
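The estimation of the pixel shift Spixel and its comparison against the predefined threshold TH can be sketched as follows. This is an illustrative sketch only: the function names, the decision labels, and the pixels-per-degree figure are assumptions, not taken from the disclosure.

```python
# Illustrative sketch (names and the pixels-per-degree figure are
# assumptions): estimate the per-dot pixel shift Spixel from the IMU angular
# rate and the exposure time, then compare it against the predefined
# threshold TH to decide whether a dot should be activated or delayed/skipped.

def estimate_spixel(angular_rate_deg_s, exposure_time_s, pixels_per_degree):
    """Pixels a dot is expected to shift on the receiver during one exposure."""
    rotation_deg = angular_rate_deg_s * exposure_time_s
    return rotation_deg * pixels_per_degree

def decide(spixel, th):
    """TH is the predefined threshold on the tolerable pixel shift."""
    return "delay_or_skip_dot" if spixel > th else "activate_dot"

spixel = estimate_spixel(200.0, 0.003, 10.0)  # 6.0 pixels during exposure
action = decide(spixel, th=2.0)               # "delay_or_skip_dot"
```

With a fast rotation (200 deg/s) and a 3 ms exposure, the shift exceeds the assumed threshold, so the dot would be delayed or skipped rather than activated.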
- Fig. 17 illustrates the process of exposure time determination and decision in a system that includes a control unit 110, a transmitter (Tx) 201, a receiver (Rx) 301b, an RGB sensor 600, and an Inertial Measurement Unit (IMU) 700.
- the control unit 110 receives information from the exposure time determination and decision module 501, which is used to control the operation of the transmitter (Tx) 201 and receiver (Rx) 301b.
- the exposure time determination and decision module 501 obtains information from the RGB sensor 600 and the IMU 700 to determine the optimal exposure time for the system.
- the RGB sensor 600 provides information about the scene, such as object distances and flatness, while the IMU 700 provides acceleration information, from which information about the movement and speed of the camera is derived.
- Fig. 18 provides a schematic example of receiver (RX) 301b of Fig. 17.
- RX comprises a SPAD analog circuitry 310, a Time-to-digital conversion unit 311, a histogram builder 312, and a Noise Statistics Echo Peak Detection 314b.
- the RX is responsible for processing the incoming signal from the SPAD (Single Photon Avalanche Diode) analog circuitry 310.
- the SPAD analog circuitry 310 may also be responsible for amplifying and filtering the incoming signal from the SPAD.
- the Time-to-digital conversion unit 311 converts the analog signal into a digital format that can be processed by the histogram builder 312.
- the histogram builder 312 creates a histogram of the signal data, which is used to extract information about the distance to the object being measured.
- the Noise Statistics Echo Peak Detection 314b is responsible for analyzing the noise statistics of the signal and identifying the peak of the echo. The timing of the peak corresponds to the distance to the object.
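The Rx chain of Fig. 18 can be sketched in simplified form as follows. This is an illustrative sketch, not the disclosed implementation: the bin width and the mean + k·std noise rule are assumptions chosen for demonstration.

```python
# Illustrative sketch of the Rx chain of Fig. 18: time-to-digital conversion
# output -> histogram builder 312 -> Noise Statistics Echo Peak Detection
# 314b. Bin width and the mean + k*std noise rule are assumptions.
import math

C = 3.0e8  # speed of light in m/s

def build_histogram(timestamps_s, bin_width_s, n_bins):
    """Histogram builder: accumulate photon arrival times into time bins."""
    hist = [0] * n_bins
    for t in timestamps_s:
        b = int(t / bin_width_s)
        if 0 <= b < n_bins:
            hist[b] += 1
    return hist

def detect_echo_peak(hist, bin_width_s, k=3.0):
    """Accept the tallest bin only if it rises above the noise statistics
    (mean + k * std over all bins); return the corresponding distance."""
    n = len(hist)
    mean = sum(hist) / n
    std = math.sqrt(sum((h - mean) ** 2 for h in hist) / n)
    peak_bin = max(range(n), key=hist.__getitem__)
    if hist[peak_bin] <= mean + k * std:
        return None  # no echo detected above the noise floor
    t_peak = peak_bin * bin_width_s  # round-trip time of flight
    return t_peak * C / 2.0

# Example: 50 photons clustered near 13.3 ns (about a 2 m target) plus noise.
timestamps = [13.3e-9] * 50 + [5.0e-9, 40.0e-9, 60.0e-9]
hist = build_histogram(timestamps, bin_width_s=0.1e-9, n_bins=1000)
distance = detect_echo_peak(hist, bin_width_s=0.1e-9)
```

The timing of the detected peak, converted via half the round-trip time, yields the distance to the object, here roughly 2 m.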
- Figure 19 provides a more detailed view of the exposure time determination and decision module 501.
- the module is responsible for determining the exposure time limits for each pixel in the scene, based on information about the scene and the movement of the camera.
- the module consists of several sub-modules, including camera movement estimation 720, scene estimation 620, rotational shift calculation 730, exposure time determination & decision 740, and configuration and decision 750.
- a priori information may be a distribution of distances in the scene, or a maximal difference between adjacent pixels that will affect the threshold selection algorithms.
- Fig. 20 is a symbolic illustration of a processing method according to the third embodiment.
- the processing method describes estimating the impact for the simple scenario of a flat wall.
- the camera 20 rotates from left to right.
- the light beam travels a distance d1 before being reflected by the wall.
- the light beam travels a distance d2 before being reflected by the wall.
- the wall is at an angle of -45 degrees with respect to the camera orientation (ray direction) at the third time instance.
- the process determines the difference between d1 and d2:
- a threshold, here for example 15
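For the flat-wall scenario of Fig. 20, the impact of rotation on the measured distance can be estimated with a simple geometric sketch. The geometry assumed here (planar wall, pure rotation, law of sines in the triangle spanned by the two rays and the wall) and the 5 degree rotation are illustrative assumptions, not values from the disclosure.

```python
# Illustrative geometry sketch: estimate the along-ray distance d2 after the
# camera rotates by dtheta during the exposure, for a flat wall seen at
# wall_angle_deg to the first ray, and compare |d2 - d1| against a threshold
# (here 15 cm, following the text).
import math

def rotated_distance(d1_m, wall_angle_deg, dtheta_deg):
    """Distance along the rotated ray to a wall seen at wall_angle_deg
    (law of sines in the triangle formed by the two rays and the wall)."""
    a = math.radians(wall_angle_deg)
    t = math.radians(dtheta_deg)
    return d1_m * math.sin(a) / math.sin(a - t)

d1 = 2.0                               # meters along the first ray
d2 = rotated_distance(d1, 45.0, 5.0)   # after a 5 degree rotation
delta_cm = abs(d2 - d1) * 100.0        # about 20 cm
exceeds_threshold = delta_cm > 15.0    # movement impact is significant
```

Under these assumptions a 5 degree rotation against a 45 degree wall at 2 m already shifts the distance by roughly 20 cm, above the example threshold of 15 cm.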
- Fig. 21 is an explanatory diagram of the rotational pixel shift of a device.
- the diagram shows the impact of exposure time and camera movement on non-returned pixels due to high movement for the situation described in Fig. 20 (wall angle: 45 deg, wall distance: 2 meters).
- the exposure time in the example was 0.003 seconds.
- the rotational speed of the ToF camera was 200 deg/sec.
- the diagram shows the distance difference (delta)
- the area indicates that all SPADs looking at degrees 30-40 will suffer from a peak shifting of more than, for example, 20 cm. In one dToF system, such a distance shift means that peaks will not be detected and the exposure time will be infinite, if not interrupted.
- Fig. 22 may be used to implement the system described in Fig. 17, which illustrates the process of exposure time determination and decision, as well as configuration and decision options based on the exposure time limits.
- Such algorithms may for example include the iterative closest point (ICP) method between point cloud information and the current 3D model, or a SLAM (Simultaneous Localization and Mapping) pipeline.
- ICP iterative closest point
- SLAM Simultaneous localization and mapping
- the registered point cloud obtained by the pose estimation 1004-1 is forwarded to a 3D model reconstruction 1004-2.
- the 3D model reconstruction 1004-2 updates a 3D model of the scene based on the registered point cloud obtained from the pose estimation 1004-1 and based on auxiliary input obtained from the auxiliary sensors 103.
- the pose estimation 1004-1 and the 3D model reconstruction 1004-2 obtain auxiliary input from auxiliary sensors 1003.
- the auxiliary sensors 1003 comprise a colour camera 1003-1 which provides e.g. an RGB/LAB/YUV image of the scene, from which sparse or dense visual features can be extracted to perform conventional visual odometry, that is determining the position and orientation of the current camera pose.
- the auxiliary sensors 1003 further comprise an event-based camera 1003-2 providing e.g. high frame rate cues for visual odometry from events.
- the auxiliary sensors 1003 further comprise an inertial measurement unit (IMU) 1003-3 which provides e.g. acceleration and orientation information, that can be suitably integrated to provide pose estimates.
- IMU inertial measurement unit
- Fig. 23 shows a general configuration of circuitry 1200 according to the present disclosure.
- the circuitry may implement, for example, the control unit 110 and the exposure time determination & decision module 501 in Fig. 19.
- the circuitry 1200 can include a CPU 1201, interacting with storage 1202.
- the storage 1202 can, for example, be a solid state disk (SSD).
- the device 1200 can further include a random access memory (RAM) 1203 interacting with the CPU 1201.
- the device can include a Bluetooth transceiver and decoder 1204 and an antenna and circuitry configured to interface with a wireless local area network (WLAN) 1205.
- WLAN wireless local area network
- the intelligent scheduling and management block may be provided as part of the ToF system 1213 or be executed by the CPU 1201. Integrating the intelligent scheduling and management block as part of the sensor would enable fast reaction and low latency decisions for the receiver and emitter (the emitter could also be controlled from the receiver, that is, the receiver is the master of the dToF system). External sensors, as well as the host application, could provide additional meta-data/configurations allowing the intelligent block to make the decisions.
- circuitry is configured to set parameters for each group of emitters/receivers including activation profiles, emitter parameters, and/or receiver parameters.
- circuitry configured to dynamically control the activation of an emitter and corresponding receiver in a group to cease activation, continue activation, and/or modify the group's contents.
- circuitry is further configured to provide a time-of-flight measurement indicating a distance to an object based on the group of dots emitted by the predefined set of emitters and sensed by the receiver array.
- circuitry is further configured to cause emitters of an emitter array to sequentially emit groups of dots and, for each group of emitted dots, to cause receivers in the receiver array (300) to cease operating when a number of dots of the group have been sensed by the receiver array.
- circuitry is further configured to estimate a position of the incident dot on the receiver array and determine a particular receiver located at the position of the receiver array, and wherein, if the value of the shift exceeds a predetermined threshold, operation of the particular receiver is delayed until the value of the shift decreases below the predetermined threshold.
Abstract
An electronic device comprising circuitry configured to manage resources of a time-of-flight system to optimize time, and/or energy usage, and/or performance of the time-of-flight system based on sensor information, wherein the sensor information comprises time-of-flight measurements and/or measurements from one or more other sensors.
Description
ELECTRONIC DEVICE, METHOD AND COMPUTER PROGRAM
TECHNICAL FIELD
The present disclosure generally pertains to an electronic device, a method and a computer program.
TECHNICAL BACKGROUND
Time Of Flight (ToF) systems in general are resource constrained systems. Energy (and, correspondingly, power) and time that can be spent on each dot are limited.
Although there exist techniques for reducing time and energy usage of ToF systems, there exists a need to improve on currently available methods.
SUMMARY
According to a first aspect the disclosure provides an electronic device according to independent claim 1. According to a second aspect, the present disclosure provides a managing method according to independent claim 46. According to a third aspect, the present disclosure provides a computer program according to independent claim 47. Further aspects of the present disclosure are set forth in the dependent claims, the figures and the following description.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments are explained by way of example with respect to the accompanying drawings, in which:
Fig. 1 is a schematic illustration of a ToF device according to the present disclosure; and
Fig. 2 is a symbolic illustration of an example of a time-of-flight histogram; and
Fig. 3 is a schematic illustration of a device according to a first embodiment; and
Fig. 4 is a schematic illustration of a device according to a modification of the first embodiment; and
Fig. 5 is a schematic illustration of a receiver; and
Fig. 6 is a symbolic illustration of a method to determine an operation of components of a ToF system according to a signal-to-noise ratio; and
Fig. 7 is a symbolic illustration of a smart illumination time principle; and
Fig. 8 is a symbolic illustration of the smart illumination time principle with regrouping; and
Fig. 9 is a symbolic illustration of the smart illumination time principle with regrouping and smart stop; and
- Fig. 10a-c are symbolic illustrations of a scene with single and multiple reflections per dot; and
- Fig. 11a and Fig. 11b are symbolic illustrations of histograms with single and multiple reflections; and
Fig. 12a-c are schematic illustrations of a ToF apparatus according to a second embodiment; and
Fig. 13 is a first schematic example of how measurement errors may be induced by rotation of the device; and
Fig. 14 is a first symbolic representation of histogram processing being affected by the rotation shown in Fig. 13; and
Fig. 15 is a second schematic example of how measurement errors may be induced by rotation of the device; and
Fig. 16 is a second symbolic representation of histogram processing being affected by the rotation shown in Fig. 15; and
Fig. 17 is a schematic illustration of a ToF device according to a third embodiment; and
Fig. 18 is a schematic illustration of a receiver according to the third embodiment; and
Fig. 19 is a schematic illustration of a host device according to the third embodiment; and
Fig. 20 is a symbolic illustration of a processing method according to the third embodiment; and
Fig. 21 is an explanatory diagram of the rotational pixel shift of a device; and
Fig. 22 is a schematic illustration of the usage of the present disclosure; and
Fig. 23 is a schematic illustration of circuitry according to the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
Before a detailed description of the embodiments under reference of Fig. 1 is given, general explanations are made.
The embodiments disclose an electronic device comprising circuitry configured to manage resources of a time-of-flight system to optimize time, energy usage, and/or performance of the time-of-flight system based on sensor information, wherein the sensor information comprises time-of-flight measurements and/or measurements from one or more other sensors.
The time-of-flight measurements may for example be current or previous measurements.
Performance may for example relate to distance and/or returned points.
The management of the resources may, for example, be a dynamic management.
The other sensors may for example comprise an IMU or an RGB sensor.
The circuitry may for example be configured to perform grouping of the emitters and corresponding receivers to groups, based on conditions.
A group may for example refer to emitters and receivers which are active in parallel at least part of the time. The groups can for example be pre-defined, or dynamically generated.
The circuitry may be configured to set parameters for each group of emitters/receivers including activation profiles, emitter parameters (pulse length, peak power, ...), receiver parameters (number of SPADs per dot, SPAD parameters, histogram parameters such as a threshold), or the like.
The circuitry may for example be configured to dynamically control the activation of an emitter and corresponding receiver in a group to cease activation, continue activation, and/or modify the group's contents.
Activation may for example comprise power saving.
Continuing activation may for example continue beyond a predefined value - e.g. to allow achieving longer distance.
Modifying the groups contents may for example comprise moving dot resources between groups.
The circuitry may for example be configured to perform dynamic modifications based on histogram contents. Histogram contents may for example refer to peak and noise level or SNR. An SNR-based modification may for example check whether the SNR is too low; in this case, operation of emitters and/or receivers may be stopped earlier.
The circuitry is configured to perform dynamic modifications based on system conditions. System conditions may for example refer to checking whether there is enough time to schedule activation of an emitter, or the like.
The circuitry may for example be configured to perform dynamic modifications based on host requirements.
Grouping may be based on estimated/projected parameters of the dots and/or hardware limitations. Estimated/projected parameters of the dots may for example relate to the expected
distances or illumination time associated with the dots, e.g. if they share similar parameters, such as single/multiple reflections. Hardware limitations may for example relate to the issue that the system cannot configure any dot with any dot due to routing limitations.
The estimated/projected parameters may for example be based on previous dToF measurement reports, upper layer a priori information, or other sensor information. Previous dToF measurement reports may for example comprise information about distance, albedo, or the like. Upper layer a priori information may for example be obtained from a SLAM pipeline.
Other sensor information may for example be obtained from image data, IMU data, or the like.
According to an embodiment, the scheduling of groups is targeted to optimize a dToF key performance indicator and take into account system considerations.
Optimizing a dToF key performance indicator may for example comprise not scheduling dots that do not have a chance to be returned. Optimizing a dToF key performance indicator may also comprise scheduling dots according to prioritization from upper layers. Still further, optimizing a dToF key performance indicator may for example comprise scheduling dots according to chances for them to be returned, e.g. starting from shorter exposure times.
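The scheduling considerations listed above can be sketched as a small priority scheduler. This is an illustrative sketch under stated assumptions: the dot fields (`expected_exposure_s`, `return_prob`) and the time-budget model are made up for demonstration and are not part of the disclosure.

```python
# Hypothetical scheduler sketch: skip dots that have no chance to be
# returned, then schedule the remaining dots shortest expected exposure
# first, within an overall time budget.

def schedule_dots(dots, time_budget_s):
    """Each dot is a dict with 'id', 'expected_exposure_s', 'return_prob'."""
    # Do not schedule dots that have no chance to be returned.
    viable = [d for d in dots if d["return_prob"] > 0.0]
    # Start from shorter expected exposure times.
    viable.sort(key=lambda d: d["expected_exposure_s"])
    scheduled, used = [], 0.0
    for d in viable:
        if used + d["expected_exposure_s"] > time_budget_s:
            break  # system consideration: not enough time left
        scheduled.append(d["id"])
        used += d["expected_exposure_s"]
    return scheduled

dots = [
    {"id": 0, "expected_exposure_s": 0.002, "return_prob": 0.9},
    {"id": 1, "expected_exposure_s": 0.001, "return_prob": 0.8},
    {"id": 2, "expected_exposure_s": 0.004, "return_prob": 0.0},  # no chance
    {"id": 3, "expected_exposure_s": 0.005, "return_prob": 0.5},
]
order = schedule_dots(dots, time_budget_s=0.004)  # [1, 0]
```

Prioritization hints from upper layers could be folded into the sort key in the same way.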
Managing resources of a time-of-flight system may for example comprise optimizing distance measurements and/or a number of returned dots. The electronic device may be the Time-of-Flight (ToF) system.
The device may be a control unit of a ToF system. The device may be comprised in the ToF system or may be configured to control the ToF system without being comprised in the ToF system. The device may be a computer. The circuitry may be any electronic device that may be configured to execute a method in the sense of the present disclosure. “Managing” in the sense of the present disclosure may comprise controlling the ToF system or controlling subunits of the ToF system. “Managing” in the sense of the present disclosure may comprise controlling an operation of the ToF system or controlling an operation of subunits of the ToF system. “Managing” in the sense of the present disclosure may further comprise scheduling of the operation of the ToF system or scheduling the operation of the subunits of the ToF system.
The ToF system may be any imaging system that is configured to measure a distance (or “depth”) measurement based on emission and reception of a light signal according to the principles of time-of-flight measurement as described, for example, hereinbelow. For example, a ToF system (or “sensor”) may work according to an iToF principle or according to a dToF principle. Generally, ToF systems are configured to emit one or more laser beams (or “dots”)
with a precise timing. If these laser beams are reflected off an object, the reflected light is received by a reception unit of the ToF system. For example, though not necessarily, the frequency of the laser beams emitted by ToF systems is in an infrared part of the spectrum.
In sensors working according to the iToF principle, depth information is obtained by determining a phase angle φ of incident light in pixels of the ToF sensor as compared to a modulation signal. Each pixel of a ToF sensor may typically include two distinct taps (e.g. tap A and tap B), each tap providing gain information (e.g. GA for tap A) in a π/2 phase offset as well as phase-dependent intensity information (e.g. S(0) for a phase of zero and S(π/2) for a phase of π/2). Using the gain information and the phase-dependent intensity, variables I and Q can be calculated using the relations
I = 2(GA − GB)S(0) (1)
and
Q = 2(GA − GB)S(π/2). (2)
The phase angle for each pixel comprising the ToF sensor can then be calculated using φ = arctan(Q/I). (3)
Executing this calculation for each pixel (receiver) comprising the ToF sensor yields a phase image associating each pixel with phase information.
Depth information is calculated from the phase information by correlating the phase information with phase information of emitted laser beams, such as infrared (IR) laser beams generating, for example, a dot pattern. The correlation of the phase information detected by the pixel with phase information of emitted IR laser beams yields a phase offset Δφ between the emitted laser beams and the incident radiation sensed by the pixel. A distance D of the pixel to the surface reflecting the emitted laser beams can then be calculated using
D = c·Δφ / (4πf), (4) where c is the speed of light in an atmosphere and f is a frequency of the emitted light.
For example, if the emitted light has a frequency of f = 20 MHz and the calculated phase offset Δφ is 22°, the distance to the surface is about 0.45 m. The distance is depth information.
Thus, by correlating the phase angle of the incident radiation with the phase angle of the dot pattern, the distance of the pixel to the dot can be calculated.
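The worked example above can be checked numerically. The sketch below assumes the standard iToF relations φ = arctan(Q/I) and D = cΔφ/(4πf); the tap values fed into relations (1) and (2) are made-up illustrative numbers.

```python
# Illustrative numeric check of the iToF relations: f = 20 MHz and a phase
# offset of 22 degrees give a distance of roughly 0.45 m. The tap values
# (ga, gb, s0, s_pi2) below are assumptions for demonstration only.
import math

C = 3.0e8  # speed of light in m/s

def iq_from_taps(ga, gb, s0, s_pi2):
    i = 2.0 * (ga - gb) * s0      # relation (1)
    q = 2.0 * (ga - gb) * s_pi2   # relation (2)
    return i, q

def phase_from_iq(i, q):
    return math.atan2(q, i)       # phase angle of the incident light

def distance_from_phase_offset(dphi_rad, f_hz):
    return C * dphi_rad / (4.0 * math.pi * f_hz)

i, q = iq_from_taps(ga=1.0, gb=0.2, s0=0.5, s_pi2=0.2)
phi = phase_from_iq(i, q)                                  # about 0.38 rad
d = distance_from_phase_offset(math.radians(22.0), 20e6)   # about 0.458 m
```

The computed distance of about 0.458 m matches the approximately 0.45 m stated in the example.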
In sensors working according to the dToF principle, a photon counting (PC) pulse technique is used.
Photon counting ToF systems like the dToF system record a photon histogram. DToF systems may, for example, use single photon avalanche diodes (SPADs) as detectors. In dToF, depth information is obtained, for example, by measuring the time a signal pulse of a defined duration, emitted by the device using dToF, takes to reach an object, be reflected on a reflection surface, and return to the device to be sensed by the dToF sensor. The signals are then correlated to obtain the distance.
A dToF system may emit very short pulses (ca. 1 nanosecond) of laser light. The emitter emits pulses at a predetermined periodicity, and the receiver (using a timing that is aligned with the timing of the pulse emission by the emitter) collects received photons. The receiver determines the time elapsed from emission to reception of each signal, records the determined timing per photon and accumulates the timing of each received photon in a histogram. A peak in the histogram accumulated based on a plurality of photons over time corresponds to the distance of the object. The peak may, for example, be located at a position Tpeak in the histogram. The distance of the reflection surface can then be calculated:
D = Tpeak × c / 2 (5) where c is the speed of light in an atmosphere.
In a Time-of-Flight (ToF) system, a histogram is typically used on the sensor side to accumulate the detected photons at the correct timing. The histogram is integrated over the duration of the pulse train to gather data on the photons that are reflected back to the sensor.
Resources of the ToF system may be any of the subunits of the ToF system, for example, the emission unit, the reception unit, or components thereof, or any circuitry, computing resources or other electronic components.
“Dynamic” managing comprises managing that is adapted to changing conditions of the operation of the ToF system. The changing conditions may occur during acquisition of data, during emission or reception of signals (i.e. dots). Conditions may comprise parameters of the emitted signal or signals, parameters of the received signal or signals or a condition of the ToF system or of components of the ToF system.
“Sensor information” may, in addition to ToF measurements, be any electronic or digital information that may be provided to an electronic device by a sensor. A sensor may, in
particular, be a receiving unit, or components thereof, of the ToF device, a camera, or light receiving pixels comprised by a camera, or any other sensor providing information on a disposition of the device in relation to an environment of the device. Current ToF measurements may be information obtained during an ongoing measurement cycle. Note that current ToF measurements may comprise signals received by the ToF system based on which a distance measurement has not, or not yet, been calculated. Past ToF measurements are measurements obtained during a measurement cycle that precedes the current measurement cycle.
According to the present disclosure, energy may be saved on the emission side, the reception side, or both, and the approach is applicable to a variety of architectures (e.g. dot-projection ToF or flood ToF).
There are embodiments wherein the managing comprises selective cessation of an operation of emitters of a group of emitters for time-of-flight measurements, wherein each emitter is configured to commence emitting a respective dot simultaneously with the other emitters of the group of emitters.
The emitter is any electronic component capable of being caused, by a control signal, to emit a timed laser signal. Each emitter may in particular be a vertical cavity surface emitting laser (VCSEL).
The group of emitters may be a subgroup of emitters of an emitter array. However, the group of emitters may comprise all emitters of the emitter array or all emitters comprised in the ToF system. Note that a “dot” may refer to a laser signal that is emitted both spatially limited (i.e. as a spot) as well as temporally limited (i.e. for a certain duration). This means that a single emitter that emits a laser beam for a certain duration emits a single dot.
By configuring the device to set an illumination time per group of dots and to configure groups of dots, each one with its own activation profile (e.g. number of pulses in a train), a greater degree of freedom in the utilization of the resources of the ToF system is achieved. Moreover, intelligent scheduling may be used. In some embodiments, the illumination time may be chosen dynamically, such that performance can be further improved.
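A per-group parameter set of this kind could be represented as follows. This is a hypothetical configuration sketch: the field names and default values are assumptions; the disclosure only lists examples such as pulse count, pulse length, peak power and SPADs per dot.

```python
# Hypothetical per-group configuration sketch (field names and defaults are
# illustrative assumptions, not taken from the disclosure).
from dataclasses import dataclass

@dataclass
class GroupConfig:
    group_id: int
    dot_ids: list                     # dots activated in parallel
    n_pulses: int = 100               # activation profile: pulses in a train
    pulse_length_ns: float = 1.0      # emitter parameter
    peak_power_mw: float = 50.0       # emitter parameter
    spads_per_dot: int = 4            # receiver parameter
    histogram_threshold: float = 3.0  # receiver/histogram parameter

# Example: a short-range group with a short train, and a long-range group
# with a longer train and higher peak power.
near_group = GroupConfig(group_id=0, dot_ids=[0, 1, 2], n_pulses=50)
far_group = GroupConfig(group_id=1, dot_ids=[3, 4], n_pulses=400,
                        peak_power_mw=120.0)
```

Keeping the parameters per group rather than global is what gives the scheduler its degree of freedom: each group can trade illumination time against range independently.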
There are embodiments wherein the circuitry is further configured to cause an emitter of the predefined group of emitters to cease emitting when an incident dot related to the emitter has been sensed by a receiver array, and cause the receivers in the receiver array to cease operating when a predefined number of the emitters of the predefined group of emitters have ceased emitting.
There are embodiments wherein the circuitry is further configured to cause a particular receiver sensing the incident dot to cease operating when the incident dot has been sensed.
There are embodiments wherein the circuitry is further configured to provide a time-of-flight measurement indicating a distance to an object based on the group of dots emitted by the predefined set of emitters and sensed by the receiver array.
There are embodiments wherein the predefined number of the emitters is equal to a total number of emitters of the predefined group of emitters.
There are embodiments wherein the circuitry is further configured to cause emitters of an emitter array to sequentially emit groups of dots and, for each group of emitted dots, to cause receivers in the receiver array to cease operating when a number of dots of the group have been sensed by the receiver array.
Every receiver (wherein each pixel may comprise a plurality of SPADs) is coupled to a specific emitter. Ceasing operation of the emitting emitter, according to an embodiment, means ceasing operation of the corresponding receiver.
There are embodiments wherein when the number of the emitters of a first group of emitters have ceased emitting and the receivers in the receiver array have been caused to cease operating, the remaining emitters keep emitting as part of a subsequent second group of emitters.
There are embodiments wherein the circuitry is further configured to provide at least one first time-of-flight measurement based on the group of dots emitted by the first set of emitters and sensed by the receiver array and provide at least one second time-of-flight measurement based on the group of dots emitted by the second group of emitters and sensed by the receiver array.
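The regrouping behavior described above can be sketched in simplified form. The control flow below is an assumption for illustration: it only models the bookkeeping of which dots were sensed in a group and which are carried over to the next group.

```python
# Illustrative sketch of the smart-stop regrouping: once a predefined number
# of dots of the first group have been sensed, the receivers for that group
# cease operating, and dots not yet sensed are carried over so that their
# emitters keep emitting as part of the subsequent second group.

def partition_group(group, sensed_dots, stop_count):
    """Return (ceased, carried_over) for one group of dot ids."""
    finished = [d for d in group if d in sensed_dots]
    carried = [d for d in group if d not in sensed_dots]
    ceased = len(finished) >= stop_count  # enough dots sensed -> stop group
    return ceased, carried

group1 = [0, 1, 2, 3]
ceased, carry = partition_group(group1, sensed_dots={0, 2, 3}, stop_count=3)
group2 = carry + [4, 5]   # dot 1 keeps emitting as part of the second group
```

Dot 1, which was not sensed before the group was ceased, is simply merged into the next group, so its measurement can still be completed later.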
There are embodiments wherein the circuitry is further configured to cause the emitter to cease emitting after a predefined maximum emission time.
The circuitry is further configured to cause the receiver related to the emitter to cease operating after the predefined maximum emission time.
There are embodiments wherein sensing the incident dot comprises monitoring whether a signal-to-noise ratio of the incident dot measured by a receiver of the receiver array exceeds a predefined threshold.
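This SNR-based stop criterion can be sketched as follows. The SNR estimator used here (peak count over the mean of the remaining bins) and the threshold value are assumptions for illustration.

```python
# Sketch of the SNR-based stop criterion (the estimator is an assumption):
# a dot counts as sensed once the histogram peak exceeds the average of the
# remaining bins by a predefined factor, after which the related emitter and
# receiver can cease operating.
import math

def snr_of_histogram(hist):
    """Peak count divided by the mean count of all non-peak bins."""
    peak_idx = max(range(len(hist)), key=hist.__getitem__)
    rest = hist[:peak_idx] + hist[peak_idx + 1:]
    noise = sum(rest) / len(rest) if rest else 0.0
    return hist[peak_idx] / noise if noise > 0 else math.inf

def dot_sensed(hist, snr_threshold=10.0):
    return snr_of_histogram(hist) > snr_threshold

echo = [2, 3, 1, 80, 2, 3, 2, 1]        # clear peak: dot sensed, stop early
noise_only = [2, 3, 2, 3, 2, 3, 2, 3]   # no peak yet: keep integrating
```

In a real system this check would run incrementally as the histogram fills, so integration stops as soon as the peak is reliable.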
There are embodiments wherein the circuitry is further configured to cause the emitters to emit a plurality of dots in a dot pattern.
There are embodiments wherein the dynamic managing comprises a selective managing of emitters and/or receivers based on a number of expected reflections of a dot.
There are embodiments wherein the circuitry is further configured to determine, in image data, a number of edges within the subsection of the image sensor that correspond to the dot, and to determine the number of expected reflections of the dot based on the number of edges and/or plane detection.
There are embodiments wherein the circuitry is further configured to determine, in image data, a number of edges which correspond to one or more dots and in which the edges define an object, and wherein, in the absence of detected motion of the scene or of the circuitry, the receivers in the receiver array are caused to cease operating and/or a group of emitters is caused to cease emitting where the receivers and/or emitters correspond spatially to the interior of the object with respect to the edges in the image data.
The edges may hint at an object, covered by one dot, which has two different distances. So whenever the emitter starts emitting, this may hint that two reflections are expected to be observed. If the dot does not cover an edge, it may hint that the whole surface onto which the dot is emitted is at one distance, so one reflection is expected to be observed.
There are embodiments wherein the circuitry is further configured to control an operating duration of the emitter based on the number of expected reflections.
There are embodiments wherein the circuitry is further configured, if the number of expected reflections is greater than one, to cause a particular emitter related to an incident dot to cease emitting only after more than one reflection has been sensed.
There are embodiments wherein the circuitry is further configured, if the number of expected reflections is greater than one, to cause a particular receiver sensing an incident dot to cease sensing only after more than one reflection has been sensed and/or after a maximum illumination time has elapsed.
There are embodiments wherein the expected number of reflections is one more than the number of edges in the field of view of the image sensor coinciding with the incident dot.
There are embodiments wherein the number of surfaces is estimated using a structure detection algorithm, an edge-detection algorithm, and/or a plane detection algorithm.
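The "one more reflection than edges" rule can be sketched as follows. The 1-D depth-profile edge counter below is a trivial stand-in for a real edge-detection or plane-detection algorithm; the jump threshold is an assumption.

```python
# Sketch of the rule that the expected number of reflections is one more
# than the number of edges coinciding with the dot. The 1-D edge counter
# is an illustrative stand-in for a real edge-detection algorithm.

def count_edges(depth_profile, jump_threshold):
    """Count depth discontinuities between adjacent samples under the dot."""
    return sum(1 for a, b in zip(depth_profile, depth_profile[1:])
               if abs(b - a) > jump_threshold)

def expected_reflections(depth_profile, jump_threshold=0.2):
    return count_edges(depth_profile, jump_threshold) + 1

flat = [2.00, 2.01, 2.00, 2.02]   # single surface -> 1 expected reflection
step = [2.00, 2.01, 3.50, 3.51]   # one edge -> 2 expected reflections
```

The receiver for the `step` dot would then only cease sensing after more than one reflection has been observed (or a maximum illumination time has elapsed).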
There are embodiments wherein the dynamic managing comprises a selective managing of emitters and/or receivers based on a detected movement of the electronic device. For example,
the management of emitters and/or receivers in an electronic device may be dynamically adjusted based on the detected movement of the device. This selective management could involve activating or deactivating certain emitters or receivers, adjusting their power levels, or modifying their transmission patterns to optimize the performance of the device in response to its movement. This could be useful in applications such as wireless communication, sensor systems, or location-based services, where the movement of the device can affect the performance of the emitters and receivers.
There are embodiments wherein the circuitry is further configured to calculate, from the detected movement of the electronic device, an expected shift of a position of an incident dot on a receiver array comprising the receivers during an exposure duration, and wherein the emitters or the receivers are caused to operate based on the expected shift.
There are embodiments wherein the detected movement of the electronic device is a rotation and/or a translation of the electronic device.
There are embodiments wherein, if the value of the shift exceeds a predetermined threshold, the emitter related to the incident dot is not caused to emit the dot.
According to an embodiment, the position of the receiver related to the emitter emitting the incident dot is known a priori due to the optical characteristics of the ToF system, and, if the value of the shift exceeds a predetermined threshold, the particular receiver and/or the emitter related to the particular receiver is not caused to operate.
There are embodiments wherein the circuitry is further configured to estimate a position of the incident dot on the receiver array and determine a particular receiver located at the position of the receiver array, and wherein, if the value of the shift exceeds a predetermined threshold, the particular receiver is not caused to operate.
There are embodiments wherein the position of the receiver related to the emitter emitting the incident dot is known a priori due to the optical characteristics of the ToF system, and if the value of the shift exceeds a predetermined threshold, operation of the emitter is delayed until the value of the shift decreases below the predetermined threshold.
There are embodiments wherein the circuitry is further configured to estimate a position of the incident dot on the receiver array and determine a particular receiver located at the position of the receiver array, and wherein, if the value of the shift exceeds a predetermined threshold, operation of the particular receiver is delayed until the value of the shift decreases below the predetermined threshold.
There are embodiments wherein the circuitry is further configured to provide a time-of-flight measurement indicating a distance to an object based on the detected movement.
There are embodiments wherein the circuitry is further configured to provide the time-of-flight measurement indicating a distance using multi-receiver processing.
There are embodiments wherein calculating the expected shift comprises multiplying a rotation of the device per unit time with the exposure duration.
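As a minimal sketch of this calculation and of the threshold-based scheduling decision (the pixel-per-degree mapping and the one-pixel threshold are illustrative assumptions, not values from the disclosure):

```python
def expected_dot_shift_px(rotation_deg_per_s, exposure_s, px_per_deg):
    """Expected shift of the incident dot on the receiver array during
    one exposure: rotation of the device per unit time multiplied with
    the exposure duration, mapped through the optics' angular
    resolution."""
    return rotation_deg_per_s * exposure_s * px_per_deg

def schedule_dot(shift_px, threshold_px=1.0):
    """Emit normally while the dot stays on its receiver; otherwise
    delay or skip the emitter/receiver pair for this dot."""
    return "emit" if shift_px <= threshold_px else "delay_or_skip"

shift = expected_dot_shift_px(30.0, 0.001, 20.0)  # 30 deg/s, 1 ms exposure
print(round(shift, 3))      # 0.6 (pixels)
print(schedule_dot(shift))  # emit
```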
There are embodiments wherein the exposure duration is estimated based on a structure of a scene, and the structure of the scene is estimated based on depth data of the scene captured in a previous frame and/or based on image data obtained from an image sensor and/or a prior information from a host.
There are embodiments wherein the structure of the scene is estimated based on the depth data or based on the image data using a neural network or computer vision.
The embodiments also disclose a managing method for a time-of-flight device comprising managing resources of a time-of-flight system to optimize time, energy and/or performance usage of the time-of-flight system based on sensor information, wherein the sensor information comprises time-of-flight measurements and/or measurements from one or more other sensors.
The embodiments also disclose a computer program that, if executed by a computer, causes the computer to manage resources of a time-of-flight system to optimize time, energy and/or performance usage of the time-of-flight system based on sensor information, wherein the sensor information comprises time-of-flight measurements and/or measurements from one or more other sensors.
The methods as described herein are also implemented in some embodiments as a computer program causing a computer and/or a processor to perform the method, when being carried out on the computer and/or processor. In some embodiments, also a non-transitory computer-readable recording medium is provided that stores therein a computer program product, which, when executed by a processor, such as the processor described above, causes the methods described herein to be performed.
Fig. 1 is a schematic illustration of a time-of-flight imaging device 100 (in the following: device 100). The device 100 comprises an emission unit 200 and a reception unit 300.
The emission unit 200 and the reception unit 300 are, in conjunction, configured to measure a distance (as indicated in Fig. 1) to an object O located at the distance from the device 100 according to the principles of time-of-flight measurements as described hereinabove.
The emission unit 200 is configured to emit, for a short duration, an outgoing signal L1-1. The outgoing signal is a timed light signal, for example a laser signal in the infrared part of the electromagnetic spectrum.
The outgoing signal L1-1 is reflected, for example, by the object O to form an incident signal L1-2. The incident signal L1-2 is received by the reception unit with a time delay, with respect to the timing of the emission of the outgoing signal. The time delay corresponds to the distance of the device 100 to the object O.
Using either a direct time-of-flight (dToF) technique or an indirect time-of-flight (iToF) technique as described hereinabove, the device 100 calculates the distance of the object O to the device 100.
The emission unit 200 and the reception unit 300 are mounted codirectionally, i.e. such that the emission unit 200 emits a signal into the same direction as the reception unit 300 receives a signal from.
The emission unit 200 may be an emitter array that comprises a plurality of individual emitters. The emission unit 200 may in particular be an emitter array wherein individual emitters are arrayed in a two-dimensional pattern such that an emitting side of each individual emitter faces toward the emission direction as described hereinabove.
The reception unit 300 may be a receiver array that comprises a plurality of individual receivers. The reception unit 300 may in particular be a receiver array wherein individual receivers are arrayed in a two-dimensional pattern such that a light sensitive side of each individual receiver faces toward the reception direction as described hereinabove. Each receiver may in particular be a single-photon avalanche diode (SPAD).
In dToF, each receiver or group of receivers outputs timed digital data that indicates a number of received photons associated with timing information, the timing information indicating the time between emission of the signal L1-1 and reception of the reflection L1-2 caused by the signal L1-1. Each SPAD outputs a single event per photon according to the known functioning of SPADs.
From the timed digital data, a distance of the object O that the reflected photons were reflected from can be calculated.
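This calculation can be illustrated with a minimal sketch (function names are illustrative, not from the disclosure): the measured round-trip time directly yields the distance, since the photon travels to the object and back.

```python
C = 299_792_458.0  # speed of light in m/s

def distance_from_round_trip(dt_seconds):
    """dToF distance: half of the round-trip time multiplied by the
    speed of light, because the photon covers the distance twice."""
    return C * dt_seconds / 2.0

# A round-trip time of 10 ns corresponds to roughly 1.5 m:
print(round(distance_from_round_trip(10e-9), 3))  # 1.499
```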
Fig. 2 symbolically illustrates an example evaluation of a signal according to the principles of dToF measurement by means of a histogram. The histogram has, as an x-axis, a time. The time indicates, as described hereinabove, the time elapsed between emission of the signal L1-1 and reception of the reflection L1-2 caused by the signal L1-1. This time can be seen as equivalent, by application of equation (5) hereinabove, to the distance of the object O. The histogram has, as a y-axis, a number of photons detected by the receiver or the plurality of receivers.
The histogram as shown in Fig. 2 may be constructed based on an output signal of a single receiver in the reception unit, for example a SPAD, or based on an output signal of one of a plurality of receivers in the reception unit superposed with other receivers of the plurality of receivers.
The receiver outputs, as described above, a sensor signal indicating a number of photons associated with timing information as data. According to the timing information, the received data is binned to construct the histogram as shown.
The histogram may show as a feature, for example, a uniform background (as indicated). This uniform background may be interpreted as being caused by ambient light present in the surrounding of the device 100. As another feature, the histogram may show a peak (as indicated). This peak may be assumed to be associated with reception of the reflected signal L1-2. The peak may be detected by checking whether the photon count of the histogram at any position exceeds a predetermined detection threshold (as indicated). The position of the peak is then the position of a bin on the x-axis with a photon count exceeding the predetermined threshold. The position of the peak may, in particular, be the position of the peak with a maximum photon count that also exceeds the predetermined threshold. As a result, a valid reflection is defined only in a case where the identified peak passes the predetermined threshold.
The distance to the object O may then be indicated by the position of the peak on the x-axis.
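A simplified sketch of the binning and the threshold-based peak detection described above (bin width, photon counts and the threshold value are made-up illustration data, not values from the disclosure):

```python
def build_histogram(timestamps_ns, bin_width_ns, n_bins):
    """Bin photon timestamps into a timing histogram (cf. Fig. 2)."""
    hist = [0] * n_bins
    for t in timestamps_ns:
        b = int(t // bin_width_ns)
        if 0 <= b < n_bins:
            hist[b] += 1
    return hist

def detect_peak(hist, threshold):
    """Return the index of the maximum bin if its count exceeds the
    predetermined detection threshold, else None (no valid reflection)."""
    peak_bin = max(range(len(hist)), key=lambda i: hist[i])
    return peak_bin if hist[peak_bin] > threshold else None

# Ambient light produces a roughly uniform background; the reflection
# adds a peak on top of it:
stamps = [1, 3, 5, 7, 9, 11, 13, 15] + [6, 6, 6, 6, 6]  # ns
hist = build_histogram(stamps, bin_width_ns=2, n_bins=8)
print(hist)                            # [1, 1, 1, 6, 1, 1, 1, 1]
print(detect_peak(hist, threshold=3))  # 3 (bin covering 6-8 ns)
```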
Smart illumination time
In order to reduce a power consumption of the time-of-flight device 100, power used to operate the necessary subcomponents of the device should be used as efficiently as possible. Necessary subcomponents are, for example, the emission unit 200, the reception unit 300 and control and analysis circuitry used to analyze the output of the reception unit 300.
Alternatively, according to the present disclosure, a power of the interface between a host device and the time-of-flight device 100 may be optimized. Furthermore, other parameters, such as distance, number of returned points and/or latency may be optimized.
Fig. 3 shows a schematic illustration of the device 100 according to the first embodiment. The device comprises a receiver (Rx) 301, an emitter (Tx) 201a, a control unit 110 and a scheduling unit 120. The device is configured to communicate, through a suitable data transmission means, with an external estimation unit 400.
The receiver 301 may be comprised in or be identical with the reception unit 300 of Fig. 1. Moreover, a plurality of receivers 301a may be comprised in a receiver array. The receiver array comprising the plurality of receivers 301a may be identical to the reception unit 300. The emitter 201 may be comprised in or be identical with the emission unit 200 of Fig. 1. Moreover, a plurality of emitters 201 may be comprised in an emitter array. The emitter array comprising the plurality of emitters 201 may be identical to the emission unit 200.
The control unit 110 controls an operation of the emitter 201. The control unit 110 may control an operation of the emitters comprised in the emitter 201 as one or more groups. There are, however, embodiments wherein the control unit 110 selectively controls operation of individual emitters. “Controlling an operation” may comprise causing an activation and/or a deactivation of emitters or groups of emitters, or may comprise controlling a strength of an emitted signal L1-1, or may comprise controlling a timing of activation and/or deactivation of the emitter or groups of emitters and/or the power of the pulse. Additionally or alternatively, a number of pulses to be emitted, a length of the pulse, or a duty cycle (periodicity) of the pulses may be controlled.
Typically, not all scene dots are illuminated and received at the same time. Dots are arranged into groups as discussed hereinabove, while in each time slot one (or more) groups are illuminated by associated VCSELs and processed by associated SPADs.
There are embodiments wherein the control unit 110 selectively controls operation of individual receivers 301 comprised in a receiver array. Receivers 301 comprised in a receiver array may be said to be addressable.
There are further embodiments wherein the control unit 110 selectively controls operation of individual emitters 201 comprised in an emitter array. Emitters 201 comprised in an emitter array may be said to be addressable.
“Controlling an operation” may comprise causing an activation and/or a deactivation of receivers or groups of receivers, or may comprise controlling a timing of activation and/or deactivation of the receiver or groups of receivers. Additionally or alternatively, a number of SPADs per receiver, a gating (i.e. a time window defining a length of activity of a SPAD or SPADs) and SPAD parameters such as deadtime may be controlled. Furthermore, parameters relating to histogram processing, such as noise estimation, threshold selection and others, may be controlled.
The scheduling unit 120 schedules the control performed by the control unit 110. For example, the scheduling unit 120 schedules the activation of the emitter 201 and the receiver 301, or the activation of individual receivers. The scheduling provided by the scheduling unit 120 may affect all control performed by the control unit 110.
The external estimation unit 400 estimates a required operation for either the receiver 301 or the emitter 201 or both the receiver 301 and the emitter 201. The external estimation unit 400 provides said estimates to the scheduling unit 120. The scheduling unit 120, in turn, provides scheduling to the control unit such that the operation of either the receiver 301 or the emitter 201 or both the receiver 301 and the emitter 201 is performed according to the estimate provided by the estimation unit.
Note that the external estimation unit 400 may comprise subunits as required for it to function. For example, the external estimation unit 400 may comprise a memory unit wherein sensor signals received from the receiver 301 are temporarily, until further processing by the estimation unit 400, or permanently stored.
In other words, the control unit 110, as scheduled by the scheduling unit 120, causes the emitter 201, or emitters comprised in the emitter 201, to operate, emitting an outgoing signal L1-1. Further, the control unit 110, as scheduled by the scheduling unit 120, causes the receiver 301, or receivers comprised in the receiver 301, to operate.
The receiver 301, upon reception of the incident signal L1-2 caused by the outgoing signal L1-1, outputs a sensor signal or a plurality of sensor signals to the estimation unit 400. The estimation unit 400 analyses the received sensor signals, for example by means of the histogram, to derive the timing of the peak in the histogram described with reference to Fig. 2. The timing of the peak in the histogram is provided to the scheduling unit 120. Note that the histogram as described with reference to Fig. 2 may be generated by the receiver 301 or, if a plurality of the receivers 301 are comprised by the reception unit 300, by the reception unit 300 for the comprised receivers 301. The scheduling unit 120 generates scheduling information based on the estimate provided by the estimation unit 400.
Fig. 4 shows a schematic illustration of the device 101 according to a modification of the first embodiment. The device 101 of Fig. 4, unlike the device 100 of Fig. 3, comprises, instead of the external estimation unit 400, an internal estimation unit 401 and, in addition, a memory unit 111.
The internal estimation unit 401, in conjunction with the memory unit 111, performs the same functionality as the external estimation unit 400 described with respect to Fig. 3 hereinabove.
Fig. 5 shows a schematic illustration of the receiver 301a, as comprised in the device 100 shown in Fig. 3 and in the device 101 shown in Fig. 4. The receiver 301a comprises analog circuitry 310, a time-to-digital conversion unit 311, a histogram building unit 312, a processing unit 313 and a reflection peak detection unit 314. The receiver 301a may be identical to the receiver 301 of Fig. 4.
In principle, the receiver 301a as shown in Fig. 5 corresponds to a known ToF receiver, such as a SPAD.
The analog circuitry 310 is a physical light reception arrangement of the receiver. The analog circuitry 310 receives incident radiation, such as the incident signal L1-2, and outputs a detected photon as an electrical signal.
The time-to-digital conversion unit 311 generates a timestamp for each photon detected by the analog circuitry 310. The histogram building unit 312 increases the relevant timing bin in the histogram by +n (n being the number of events received by the SPADs), thus building a histogram as described with respect to Fig. 2. Specifically, the histogram building unit 312 identifies and counts photons (in a process called histogram building) and places them in the correct timing bin.
Note that, according to an embodiment, the receiver may comprise multiple SPADs.
Note that Fig. 2 is only a visual representation of a histogram intended to aid in the understanding of the present technology. The histogram as generated by the histogram building unit 312 is not required to be visually represented at any stage of the functioning of the device 100. Instead, the histogram may only be generated as a digital data structure. The histogram is then output to the processing unit 313.
The processing unit 313 comprises circuitry that is configured to process the histogram received from the histogram building unit. The processing unit 313 may perform post-processing of the histogram data, such as smoothing operations, background reduction, background subtraction and the like. The processed histogram is output to the reflection peak detection unit 314.
The reflection peak detection unit 314 performs detection of a peak in the histogram, as described with respect to Fig. 2, by applying, for example, the predetermined threshold. Based
on the detected peak, the distance to the object O that reflected the signal can be calculated, as described hereinabove.
Fig. 6 illustrates, by means of a signal-to-noise ratio (SNR)-exposure time graph, an application of a cutoff condition to determine an optimal illumination duration according to the “just enough power” (JEP) concept as known from the state of the art.
In the graph shown in Fig. 6, an x-axis of the graph corresponds to a time-coordinate (exposure time), measured from the start of operation of a receiver in the receiver array being illuminated by incident radiation. A y-axis of the graph corresponds to signal-to-noise ratio (SNR) of the receiver sensing the incident dot. Note that, as signals are emitted in a sequence of short pulses, the exposure time may be seen as an aggregate of durations of all emitted pulses.
The SNR is defined, for example, as
SNR = N_signal / √(N_noise),
where N_signal and N_noise refer to the number of counts of signal and noise, respectively. As N of both signal and noise increases over the exposure, the SNR is improved.
A solid line, labeled “SNR(T)”, symbolically illustrates a possible evolution of the SNR of the incident radiation received by the receiver with respect to the exposure time. A first dashed line, labeled “THR”, illustrates a preset value of the SNR that acts as a threshold to determine the optimal illumination duration of the receiver in order to distinguish, in the measured incident radiation, a signal as opposed to noise. Thus, THR indicates a desired SNR for classifying radiation incident on a receiver 301a as a signal, as opposed to noise (i.e. ambient light, detector self-radiation, device-internal noise etc.). THR is set, for example, to a constant value. Alternatively, THR may be a variable set by an algorithm.
Once the SNR of the incident dot exceeds the SNR threshold, the incident radiation is classified as a signal; the point in time at which this occurs is indicated by the value of the x-coordinate at the intersection of the line SNR(t) and THR. Note that the term “SNR of the incident dot” specifically refers to a SNR of the peak value of the histogram. The histogram processing unit 313 continuously looks for the peak, calculates the noise of the instantaneous histogram that is being accumulated, and from the peak and the noise calculates an instantaneous SNR. If the instantaneous SNR is high enough, the emitter and receiver operations corresponding to the dot may be terminated and the peak timing may be output. The time elapsed from the start of the measurement, indicated, for example, by the origin of the coordinate system shown in Fig. 6, to the time indicated by the intersection of the line SNR(t) and THR is the optimal illumination duration. Note that the optimal illumination duration may be a minimal illumination duration.
In the following, an incident signal that has been distinguished from noise due to the SNR of the receiver receiving the signal having exceeded THR, i.e. the peak of the signal having exceeded the noise level, may be called a “sensed” signal.
Note that Fig. 6 and the graph shown therein are merely a visual aid. According to the present technology, the intersection of the line SNR(t) and THR is, for example, determined numerically by comparing the current value of the SNR with THR. Note further that the present technology does not require an actual numerical value for the optimal illumination duration to be determined. Instead, fulfillment of the condition that the current value of the SNR exceeds THR may lead to further control steps being performed. For example, as in some embodiments described hereinbelow, the condition that the current value of the SNR exceeds THR for a particular receiver may lead to that receiver being deactivated. As a further example, as in some embodiments described hereinbelow, the condition that the current value of the SNR exceeds THR for a particular receiver may lead to a particular emitter emitting the signal L1-1 sensed by the receiver being deactivated.
The lower the choice of THR, the more likely noise may be classified as a signal.
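The per-dot termination logic can be sketched as follows; the SNR estimator below is a simplified shot-noise model chosen for illustration and not necessarily the estimator used by the processing unit 313:

```python
import math

def peak_snr(hist):
    """Instantaneous SNR of the histogram peak: peak height above the
    mean background, normalised by the shot noise of the background
    (simplified model)."""
    peak = max(hist)
    background = (sum(hist) - peak) / (len(hist) - 1)
    noise = math.sqrt(background) if background > 0 else 1.0
    return (peak - background) / noise

def illuminate_until_sensed(pulse_batches, thr):
    """Accumulate pulse batches into the histogram until the peak SNR
    exceeds THR; returns the number of batches used (the optimal
    illumination duration, here in units of batches)."""
    hist = [0] * len(pulse_batches[0])
    for n, batch in enumerate(pulse_batches, start=1):
        hist = [h + b for h, b in zip(hist, batch)]
        if peak_snr(hist) >= thr:
            return n  # dot is "sensed": emitter/receiver may stop
    return len(pulse_batches)  # maximum illumination time reached

# Each batch adds one background count per bin and five signal counts:
print(illuminate_until_sensed([[1, 1, 1, 6, 1]] * 10, thr=8.0))  # 3
```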
An amount of power received by the receiver may, for example, be calculated according to
P(R) = P0 · ρ · A0 · ρ0 · e^(−2γR) / (π R²),
wherein P(R) denotes a power received, R denotes a target distance, i.e. a distance to the object O, P0 denotes a peak power transmitted, ρ denotes a reflectivity of the target, A0 denotes an aperture area of the receiver, ρ0 denotes a transmission of optics of the receiver and γ denotes an atmospheric extinction coefficient.
As the received power scales with the inverse square of the distance, power saving may be achieved.
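A numeric illustration of the inverse-square behaviour, using a range-equation form consistent with the parameters named above (the exact form and all parameter values are assumptions for illustration):

```python
import math

def received_power(p0, rho, a0, rho0, gamma, r):
    """Sketch of a lidar range equation with the parameters named in
    the text: P(R) = P0*rho*A0*rho0*exp(-2*gamma*R)/(pi*R^2).
    This exact form is an assumption for illustration."""
    return p0 * rho * a0 * rho0 * math.exp(-2.0 * gamma * r) / (math.pi * r ** 2)

# With negligible atmospheric extinction, doubling the target distance
# reduces the received power by a factor of four:
near = received_power(p0=1.0, rho=0.5, a0=1e-4, rho0=0.9, gamma=0.0, r=1.0)
far = received_power(p0=1.0, rho=0.5, a0=1e-4, rho0=0.9, gamma=0.0, r=2.0)
print(round(near / far, 1))  # 4.0
```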
Fig. 7 symbolically illustrates the method according to the first embodiment of the present disclosure. In the graph shown in Fig. 7, an x-axis of the graph corresponds to an exposure time, measured from the start of an image frame of the ToF device. A y-axis of the graph corresponds to a number of active VCSELs (also called “emitters” in the following) with respect to the exposure time. Each active VCSEL emits a corresponding emitted signal L1-1. Each emitted signal L1-1 generates, if intersecting a surface of the object O as described hereinabove, a dot.
The dot may be measured by a receiver 301a in a receiver array due to being reflected back to the device 100.
A group of outgoing signals L1-1 emitted by a corresponding group of VCSELs may be called an (outgoing) dot group. Note, however, that not every outgoing signal L1-1 is necessarily reflected back to the device to be measured as an incident signal L1-2. The group of dots measured as incident signals L1-2 may likewise be called an (incident) dot group.
The device 100 may, in a non-limiting example, be configured to operate at 60 frames per second (FPS), corresponding to a frame processing time of ~17 msec (milliseconds). If configured as a dToF system, the device 100 may be set to illuminate and receive up to 600 dots per frame and process 50 dots in parallel. That means that, to process 600 dots, the system needs to work with at least 12 processing steps, denoted here as groups.
Operation of the VCSELs is sectioned into dot groups G1-GN that are emitted by the active VCSELs sequentially in time. Fig. 7 shows a first dot group G1, a second dot group G2, a third dot group G3 and so on. GN indicates a last dot group of the image frame. Each VCSEL emits for a timespan corresponding to an operating duration 20 of the dot group G. For example, a first operating duration 20-1 corresponds to the first dot group G1, a second operating duration 20-2 corresponds to the second dot group G2 and so on.
Division of the dots into groups, as described, may be chosen due to physical limitations of the system. There may, for example, be a limitation on a maximum power that can be consumed at a time, necessitating operation of less than a maximum number of VCSELs. There may, alternatively, be a limitation on a maximum number of processing units available for processing the signal emitted and received by the emitter 201 or the receiver 301. Again alternatively, memory considerations, eye safety or other considerations may necessitate division of dots into groups. There may be, if none of the above-mentioned limitations apply, no incentive to divide dots into groups, such that all available dots are active until the end of the frame.
Once emission of a first dot group G1 has been completed, a second dot group G2 is emitted. Emission of the second dot group G2 may commence after a delay (as illustrated) or may commence immediately following completion of the first dot group G1. Sequential emission of the dot groups G1-GN proceeds until the end of the image frame. A number N of dot groups may be determined by dividing a desired total number of dots in an image frame by a number of VCSELs per dot group G1-GN intended to be committed to the task at the point of implementation.
For example, if the desired number of dots in an image frame is 600 and 50 VCSELs per dot group G1-GN are intended to be committed to the task, then the number of dot groups N is 12.
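The partition of a frame's dots into groups can be sketched as a plain chunking (illustrative only; the actual grouping may additionally take scene structure or estimated illumination times into account):

```python
def partition_dots(dot_ids, dots_per_group):
    """Split the dots of one frame into sequentially emitted groups
    G1..GN; the last group may be smaller than dots_per_group."""
    return [dot_ids[i:i + dots_per_group]
            for i in range(0, len(dot_ids), dots_per_group)]

groups = partition_dots(list(range(600)), 50)
print(len(groups))     # 12 groups, matching the example above
print(len(groups[0]))  # 50 dots in the first group
```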
As illustrated in Fig. 7, each VCSEL has a specific emission duration 30. For example, in the first dot group a first VCSEL may emit for a first emission duration 30-1. A second VCSEL may emit for a second emission duration 30-2.
The designation of VCSELs in the following as first, second, third, etc. is merely a convenience for the sake of explanation. It is noted that each VCSEL is uniquely identifiable by an address or the like. A first emitter in the first dot group may not necessarily be identical to a first emitter in the second dot group and so on.
While, in the present example, both the first and the second VCSEL commence emitting synchronously, their respective emission durations 30 are unequal. Specifically, the point in time at which each of the first and second VCSEL of the present example ceases emitting is determined based on the determination of the optimal illumination duration as described with respect to Fig. 6 hereinabove, i.e.: The first VCSEL ceases emitting after the first emission duration 30-1 because a first receiver receiving the reflected signal L1-2 of the outgoing signal L1-1 emitted by the first VCSEL is determined to have sensed the incident signal. Thus, during emission of a dot group, the number of active VCSELs decreases. Cessation of emitter and/or receiver operation corresponding to the dot may also be due to reaching a maximum illumination time.
In other words: Each receiver sensing a dot is related, by the circuitry, to the emitter emitting the dot. As known to the skilled person, each emitter may be coupled to a specific receiver (or a plurality of receivers). This coupling is provided by the physical implementation of the dToF system. Each emitter or group of emitters is linked to a specific histogram (i.e. a receiver or group of receivers). The controlling of the emitter may be provided by an inter-integrated circuit (I2C) interface, for instance.
Each emitter is linked to a specific receiver, where each receiver may comprise one or more SPADs. Furthermore, each emitter may comprise one or more VCSELs. Each receiver is coupled to a specific histogram. There should be an interface between the control unit and the transmitter and the receiver, such as I2C, that allows controlling both circuits.
There may also be a mechanism provided to align the clock and/or timing of said two circuits, such as a common clock that is fed by a master in the system.
The master may be provided external to the receiver (RX) and the emitter (TX) circuits, or could be integrated in one of them, e.g. in the RX sensor.
Once, using the method according to Fig. 6, the SNR of an incident dot reaches the SNR threshold, thereby defining the optimal illumination time for that dot and, identically, the emission duration 30 for the emitter emitting the dot, the emitter emitting the dot is deactivated.
Similarly, the operating duration 20 of the dot group G is equal to the longest optimal illumination duration among the dots of the dot group. For example, as shown in the first dot group G1 in Fig. 7, if a total number of emitters in the first dot group is M, then the longest optimal illumination duration among all dots in the dot group G1 corresponds to the emission duration 30-M of the mth still emitting emitter. The emission duration 30-M of the mth emitter in the first dot group is equal to the operating duration 20-1 of the first dot group G1. In the second dot group G2, the emission duration 30-M of the mth emitter is again equal to the operating duration 20-2 of the second dot group G2, but not equal to the operating duration 20-1 of the first dot group G1.
Note that, in Fig. 7, the emission durations of emitters in different groups are only shown to be the same as an example. However, this illustration is to be understood as symbolic and non-limiting. The emission duration of individual emitters in different groups may be different in accordance with the present disclosure. In each group, different dots may be activated, corresponding to different areas in the scene, and may hence experience different illumination times.
Once all emitters corresponding to a dot group G have ceased emitting, all receivers receiving the dot group likewise cease operating.
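The relation between the per-dot emission durations 30 and the group operating duration 20 can be sketched as follows (the duration values are illustrative):

```python
def group_operating_duration(per_dot_durations_ms):
    """A dot group stays active until its slowest dot is sensed, so the
    operating duration 20 equals the longest emission duration 30 among
    the dots of the group."""
    return max(per_dot_durations_ms)

# Dots of group G1 reach their SNR threshold after different times:
g1_durations = [0.10, 0.25, 0.30, 0.05]  # ms, illustrative
print(group_operating_duration(g1_durations))  # 0.3
```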
Note, however, that there are embodiments wherein a particular receiver ceases operating once the dot that the receiver measures is sensed (as described with respect to Fig. 6 hereinabove).
A distance at a time corresponding to the emission of a dot group G to objects (for example one or more of the object O of Fig. 1) in a field of view of the device 100 may be calculated, for example, based on the histograms generated by the receivers 301a of a dot group G. In other words: the receivers of the first dot group G1 may each generate a histogram as described with reference to Fig. 2. The plurality of histograms generated by the receivers is then analyzed and, based on the histograms, the distance to the object that reflected the dot corresponding to the particular histogram is calculated.
It is stressed that, in the present embodiment, the operating duration 20 of a dot group G is not preset, but determined during operation based on the emission duration of the longest emitting emitter among the emitters of the dot group G, while, at the same time, the emission duration of the longest emitting emitter among the emitters of the dot group G is likewise not preset, but
determined during operation based on whether the dot emitted is sensed according to the method of Fig. 6.
It should be noted that the device 100 may be configured to cause all emitters to cease emitting and all receivers to cease operating once a predetermined number of dot groups have been emitted or once a predetermined time, for example a frame, has elapsed, irrespective of whether all dots have been sensed.
Additionally, the emission duration may be limited to a worst-case estimated illumination time, in case this information is available from further processing at an upper layer in a host device, for example.
Good performance may further be achieved if dots are sorted according to estimated illumination time prior to a group partition. Furthermore, the emission duration may be set to a maximum time according to remaining total illumination time divided by a number of remaining groups.
For example, the emission duration of the first group G1 may be limited to, for example, 1.4 msec. Assuming the first group G1 is terminated after 0.3 msec, as in the example, the emission duration of the second group G2 may be limited to (17 − 0.3)/11 msec ≈ 1.52 msec.
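The adaptive per-group time budget of this example can be sketched as follows (the 17 ms frame, the 0.3 ms first group and the group count of 12 follow the example above):

```python
def next_group_budget_ms(frame_time_ms, elapsed_ms, remaining_groups):
    """Maximum emission duration for the next dot group: the remaining
    frame time divided evenly over the remaining groups."""
    return (frame_time_ms - elapsed_ms) / remaining_groups

# G1 terminated after 0.3 ms; 11 of 12 groups remain in a ~17 ms frame:
print(round(next_group_budget_ms(17.0, 0.3, 11), 2))  # 1.52
```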
By operating each emitter emitting a dot group only until the dot emitted by the emitter has been sensed by the corresponding receiver, the power used to generate the emitted signals is used more efficiently.
Fig. 8 symbolically illustrates a first modification of the method according to the first embodiment. The method in Fig. 8 is similar to the method in Fig. 7. However, where, in the embodiment of Fig. 7, the operating duration 20 of the dot group G is equal to the emission duration of the longest emitting emitter (e.g. the mth emitter), in the embodiment of Fig. 8, the operating duration 20 of the dot group G is equal to the emission duration not of the of the said mth emitter, but of a shorter emitting emitter, e.g. the m-lst emitter (i.e. the second-to-longest emitting emitter) or of the m-2nd emitter (i.e. the third-to-longest emitting emitter).
In the example shown in Fig. 8, the operating duration 20 of the dot group G is equal to the emission duration of the m-1st emitter. An mth emitter of the first dot group G1, in the present example, may be called a remaining emitter of the first dot group (R1). An mth emitter of the second dot group G2, in the present example, may be called a remaining emitter of the second dot group (R2).
The operation of the modification of the first embodiment is as follows:
In the first dot group G1, before the dot of the m-1st emitter is sensed, the method proceeds as described in Fig. 7. Once the dot of the m-1st emitter is sensed, operation of the first dot group G1 ceases and the histograms are generated based on the sensed dots of the first dot group, but the mth emitter of the first dot group (i.e. the remaining emitter of the first dot group R1) keeps emitting and the histogram based on the dot emitted by the remaining emitter of the first dot group G1 is not reset. Then operation of the second dot group G2 commences. The remaining emitter of the first dot group R1 is attached to the second dot group G2.
Once the dot emitted by the remaining emitter of the first dot group R1 is sensed, the histogram generated based on the dot emitted by the remaining emitter of the first dot group R1 is assigned to the second dot group G2.
There may be one or more remaining emitters of the second dot group R2 and remaining emitters of further dot groups that are processed like the remaining emitter of the first dot group R1.
In the example shown in Fig. 8, the number of remaining emitters per dot group is preset to one. There are, however, embodiments wherein the number of remaining emitters per dot group is preset to two or three or any number larger than one but smaller than a total number of emitters assigned to the dot group.
The process of attaching an emitter that emitted a dot which has not been sensed in a first dot group G and a histogram based on said dot to a second, subsequent, dot group, may be called regrouping. In addition to attaching the emitter, the corresponding receiver may likewise be attached to the second, subsequent, dot group.
There are embodiments wherein individual emitters may be regrouped multiple times, such that, for example, the dot emitted by the remaining emitter of the first dot group R1 may be sensed once the remaining emitter of the first dot group R1 is attached, for example, to the third dot group G3 or to dot groups subsequent to G3.
By regrouping emitters, the power used to operate the receivers is used more efficiently.
Fig. 9 symbolically illustrates a second modification, based on the first modification, of the method according to the first embodiment. The method in Fig. 9 is similar, including regrouping of emitters, to the method in Fig. 8. However, where, in the embodiment of Fig. 8, individual emitters only cease emitting once the dot emitted by the individual emitter is sensed, in the embodiment of Fig. 9, the individual emitters, in addition, cease emitting once a preset maximum emission time 50 is reached. This means that an emitter that is regrouped, for example the remaining emitter of the second dot group R2 (when regrouped from the second dot group G2 to the third dot group G3), ceases emitting once it has emitted for the maximum emission time 50. The maximum emission time 50 is a preset value that may be chosen by an operator or at the time of implementation of the method.
If an emitter ceases emitting due to reaching the maximum emission time, instead of its emitted dot being sensed, its associated histogram may either be used in analysis or discarded as chosen at the point of implementation.
By assigning the maximum emission time 50 in addition to the method shown with respect to Fig. 6, individual emitters are prevented from emitting for an excessively long duration while being regrouped multiple times.
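The interplay of regrouping and the maximum emission time 50 can be modeled with a toy Python sketch; the timing values, dot names and the simplified "one fixed duration per group" assumption are all illustrative and not part of the disclosure:

```python
import math

def regrouped_emission(required_ms, group_ms=1.0, max_ms=3.0):
    """Toy model of regrouping with a maximum emission time: each dot is
    carried into subsequent groups (each operating for `group_ms`) until
    its required emission time is reached and it is sensed, or until the
    preset maximum emission time `max_ms` stops it unsensed.
    Returns dot -> (sensed, number of groups used)."""
    result = {}
    for dot, need_ms in required_ms.items():
        emitted_ms = min(need_ms, max_ms)       # emission is capped
        sensed = need_ms <= max_ms              # timed-out dots stay unsensed
        groups_used = math.ceil(emitted_ms / group_ms)
        result[dot] = (sensed, groups_used)
    return result

# "a" is sensed within one group, "b" needs two regroupings,
# "c" would need 9 msec and is stopped by the 3 msec maximum.
outcome = regrouped_emission({"a": 0.3, "b": 2.5, "c": 9.0})
```

The cap prevents a single hard-to-sense dot from monopolizing emitter power across many groups, which is exactly the purpose of the maximum emission time 50 described above.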
The device may further be improved by incorporating feedbacks from upper layer processing, for example by a host device (e.g. AP) or additional modalities (e.g. RGB sensor).
For example:
Prioritization of dot illumination order
- Based on estimated emission duration
- Based on perception properties (e.g. cars, persons, etc.)
Classification of objects or scenery elements intersected by the dots, such as:
o Sky / not sky
o Double reflection dots
o Surface characteristics (Lambertian/specular)
o Estimated illumination time per dot as described hereinabove
o Usage of information from previous frames
o Usage of information from other modalities (e.g. RGB image)
o Avoiding illumination of dots with a low percentile probability of being returned (far away and/or with low reflectivity of an intersected surface)
Additional considerations for smart illumination time (SIT)
Assuming a scenario with 600 dots, where only 50 dots can be illuminated in parallel, a conventional algorithm would segment the dots into 12 groups, with each group containing 50 dots. In such an algorithm, a fixed illumination time is allocated across the frame. For instance, with a frame duration of 24 msec, this results in a fixed illumination time of 2 msec per dot.
By incorporating the capability to adjust the illumination time for each group of dots, a more efficient resource utilization is achieved through intelligent scheduling. Enhancing this system with dynamic adjustment of illumination time could lead to further performance improvements.
Algorithm 1 :
A basic algorithm may distribute illumination unevenly among the 12 groups, rather than allocating a uniform 2 msec per group. For example, group 1 could be assigned 4 msec of illumination, groups 2-5 could receive 1 msec each, groups 6-10 could have 2 msec each, and groups 11-12 could be allocated 3 msec.
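As an illustrative check (Python, not part of the disclosure), the uneven allocation above still fills exactly the 24 msec frame from the conventional example:

```python
# Uneven per-group illumination budget summing to the 24 msec frame.
allocation_ms = {1: 4.0}
allocation_ms.update({g: 1.0 for g in range(2, 6)})    # groups 2-5: 1 msec
allocation_ms.update({g: 2.0 for g in range(6, 11)})   # groups 6-10: 2 msec
allocation_ms.update({g: 3.0 for g in (11, 12)})       # groups 11-12: 3 msec
total_ms = sum(allocation_ms.values())                  # 24.0 msec
```

The frame budget is thus unchanged; only its distribution across groups differs from the uniform 2 msec baseline.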
Differentiating exposure times is crucial when accounting for the fact that dots may vary in distance and albedo. Consequently, dots at greater distances should be given longer exposure times, whereas those closer may require shorter exposure periods.
The introduction of intelligence into scheduling and management involves key considerations:
• The method for grouping dots.
• The determination of exposure time, or the maximum exposure time, particularly in the context of JEP (to be elaborated on subsequently).
• The sequencing of the groups, which may be significant for latency or other metrics, as will be discussed later.
The JEP (Just-Enough-Power) feature enhances operational dynamics by varying the illumination time for each dot within a group, thereby ensuring optimal power usage. Furthermore, the flexibility to extend dot exposure beyond the constraints of a single group, by regrouping it to a subsequent group, introduces an ‘elastic’ concept to grouping. The proposed algorithm operates as follows:
1. An extensive series of groups is established, hypothetically totaling 1000, with each group initially assigned a 24 microsecond (μsec) exposure time derived from dividing 24 msec by 1000.
2. The first 50 dots are assigned to the initial group and the exposure process commences. Dots that require additional exposure time automatically transition to the subsequent group, while those that have completed exposure are substituted with new, previously unallocated dots, in a continuous cycle.
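The two steps above can be sketched as a small simulation; the seat-filling loop, the parameter names and the example exposure times are illustrative assumptions, not the disclosed implementation:

```python
from collections import deque

def elastic_schedule(required_us, parallel=50, slot_us=24.0):
    """Simulate the 'elastic' grouping sketched above: up to `parallel`
    dots are exposed during each short slot (24 usec = 24 msec / 1000
    groups); dots needing more time roll over into the next slot, and
    finished dots are replaced by previously unallocated ones.
    Returns the number of slots consumed."""
    pending = deque(required_us)          # dot ids not yet allocated
    active = {}                           # dot id -> remaining exposure (usec)
    slots = 0
    while pending or active:
        # fill free seats with new, previously unallocated dots
        while pending and len(active) < parallel:
            dot = pending.popleft()
            active[dot] = required_us[dot]
        # expose every active dot for one slot
        for dot in list(active):
            active[dot] -= slot_us
            if active[dot] <= 0:
                del active[dot]           # done; seat is reused next slot
        slots += 1
    return slots

# 600 dots each needing 48 usec, 50 in parallel: 12 batches of 2 slots.
slots_needed = elastic_schedule({i: 48.0 for i in range(600)})
```

Because seats are refilled as soon as a dot finishes, fast dots do not idle while slow dots in the same nominal group complete, which is the point of the elastic scheme.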
Groups are essential for managing the configuration of receivers (RX, i.e. SPADs, histograms) and emitters (TX, i.e. VCSEL) blocks for varying dots. Certain dots necessitate specific settings,
including parameters influencing histogram processing, such as the number of reflections to detect, noise statistics thresholds, and the allocation of SPADs per dot, among others. Individual configuration for each dot and SPAD is not practical due to hardware (HW) and software (SW) limitations, such as processing and latency constraints that restrict the ability to alter per-dot parameters dynamically. Therefore, it is more efficient to assign a uniform set of parameters or configurations to a group of dots. Additionally, hardware routing limitations may preclude the inclusion of certain dots within the same group, necessitating their distribution across different groups.
The embodiments thus far have detailed the concept of grouping, which allows for varying parameter settings per group, such as exposure time. It has also addressed the implementation of Just-Enough Processing (JEP) and the process of regrouping. The focus now shifts to the critical role of the scheduling and management block, an integral component of the invention. This block is tasked with several functions, including: 1) determining the number of groups and allocating dots to these groups; 2) configuring group-specific parameters; 3) organizing the sequence of the groups; and 4) adaptive exposure time.
Dividing dots into Groups in SIT:
One algorithm aims to group dots with similar characteristics together. For instance, dots expected to have a comparable exposure time, typically due to a similar distance, are allocated to the same group to ensure they complete nearly simultaneously. This strategy reduces system idle time, as it prevents the scenario where dots that have completed their exposure must wait for a single dot taking an extended time to finish.
A different algorithm groups dots based on specific attributes, such as the number of reflections. For example, dots are organized into groups that share the same reflection count — a group for single reflections, another for double reflections, and so on. This is particularly crucial when histogram processing must be configured on a per-group basis rather than for individual dots.
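Both grouping algorithms above reduce to partitioning dots by a shared key; a minimal Python sketch follows, in which the dict-based dot representation and field names are illustrative assumptions:

```python
from collections import defaultdict

def group_dots(dots, key):
    """Partition dots by a shared attribute, so that per-group parameters
    (exposure time, histogram settings, ...) can be configured once per
    group rather than per dot."""
    groups = defaultdict(list)
    for dot in dots:
        groups[key(dot)].append(dot)
    return dict(groups)

# Grouping by expected reflection count, as in the second algorithm above.
dots = [{"id": 0, "reflections": 1},
        {"id": 1, "reflections": 2},
        {"id": 2, "reflections": 1}]
by_reflections = group_dots(dots, key=lambda d: d["reflections"])
```

The same helper serves the first algorithm by keying on a quantized expected exposure time instead of the reflection count.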
Setting parameters per group in SIT:
Parameters for each group can be configured according to a plurality of parameters. Some of these parameters include:
1. Exposure parameters, as previously mentioned.
2. Histogram processing, such as the number of reflections, noise statistics (for example, averaging parameters), threshold selection, and the number of peaks.
3. SPAD (Single Photon Avalanche Diode) parameters, including parameters for dead time.
4. The quantity of SPADs assigned to observe a single dot.
5. Gating configurations, which, when the expected distance of a dot is known, can adjust the histogram processing to focus on relevant time bins.
6. The ordering of groups.
7. Additional parameters as applicable.
A critical aspect to consider is which group to prioritize: the one with dots that are easier to measure or those that are more challenging. The choice depends on the objective, whether it is to acquire measurements from as many dots as possible or to prioritize data from distant dots. The scheduling algorithm can be complex and has significant implications. For instance, prioritizing distant dots could consume so much exposure time that there is insufficient time remaining for other dots. Conversely, focusing on nearer dots first might leave inadequate time for capturing distant dots. These decisions also influence the maximum exposure time set per group, with too long an exposure potentially limiting the time available for subsequent groups and too short an exposure risking premature termination of dot measurement before obtaining sufficient data.
Example algorithms for Time of Flight (ToF) system scheduling may include:
• An initial approach might involve utilizing any available knowledge regarding the anticipated exposure time for the dots. Dots could be arranged based on their expected exposure times and scheduled from the shortest to the longest. This simple, heuristic strategy aims to capture as many dots as possible by addressing the less complex measurements first, though it may not be the most efficient.
• A more tailored algorithm could align the scheduling of dots with specific application requirements. For instance, a simultaneous localization and mapping (SLAM) application on the host may determine certain dots as more critical based on their relevance to features within the scene. Consequently, the SLAM application’s prioritization of dots would be factored into the scheduling process.
• The most sophisticated algorithm would integrate both the application’s demands and the intrinsic limitations of the ToF system. This comprehensive method would produce an enhanced strategy for scheduling and grouping, potentially leading to more effective resource utilization and data acquisition.
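The first two scheduling strategies above can be combined in one ordering function; the tuple-sort approach, the field names and the priority map (standing in for e.g. SLAM feedback from the host) are illustrative assumptions:

```python
def schedule(dots, priority=None):
    """Order dots for illumination: host-assigned priorities (e.g. from a
    SLAM application) come first, then shortest expected exposure time,
    matching the heuristics listed above."""
    prio = priority or {}
    return sorted(dots, key=lambda d: (-prio.get(d["id"], 0), d["expected_us"]))

dots = [{"id": "far", "expected_us": 300},
        {"id": "near", "expected_us": 40},
        {"id": "mid", "expected_us": 120}]
order = [d["id"] for d in schedule(dots)]                   # shortest first
slam_order = [d["id"] for d in schedule(dots, {"far": 1})]  # SLAM priority
```

Without priorities this degenerates to the simple shortest-first heuristic; with them, application-critical dots are scheduled ahead of easier ones.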
Adaptive exposure time in SIT
The invention details adaptive exposure time control, with further considerations outlined as follows:
1. An algorithm may determine whether to halt a current group of dots based on the number of returned points versus those pending, allowing the subsequent group an opportunity to be illuminated and processed.
2. A smart stop algorithm may cease dot emission in response to histogram analysis, halting dots when a peak exceeds a certain threshold. Additionally, an algorithm might preemptively stop emissions if it predicts no reflection or a reflection at a distance too great to resolve within the maximum exposure time. This decision could also take into account the SNR of the histogram, ceasing exposure when the SNR falls below a predefined threshold, thereby saving resources by not continuing exposure.
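The SNR-based smart-stop rule in point 2 can be sketched as follows; the noise model (mean bin count) and threshold value are illustrative assumptions and not the source's definitions:

```python
import math

def smart_stop(histogram, snr_stop=10.0):
    """Cease dot emission once a histogram peak exceeds an SNR threshold,
    per the smart-stop rule above. Noise is estimated as the mean bin
    count; SNR as (peak - noise) / sqrt(noise)."""
    noise = sum(histogram) / len(histogram)
    if noise <= 0:
        return False                      # nothing accumulated yet
    peak = max(histogram)
    snr = (peak - noise) / math.sqrt(noise)
    return snr >= snr_stop

quiet = [5] * 100                         # flat noise: keep exposing
hit = [5] * 99 + [200]                    # clear echo peak: stop
```

A complementary "give up" branch (stopping when the SNR is predicted to stay below a lower threshold) would follow the same pattern with an inverted comparison.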
Concerning a priori information:
The management of the ToF system may use a priori information, which can be sourced from various inputs:
• Data from previous dToF measurements, such as exposure times, distances, and albedo, can help estimate parameter distributions to set maximum exposure times or to allocate dots based on segmented fields of view.
• Additional sensors like RGB sensors, combined with computer vision or neural network algorithms, can predict scene structure and estimate distances or expected exposure times.
• Photon count sensors, potentially integrated with the dToF sensor itself, offer another method for obtaining a priori information.
• Information from applications, such as SLAM, may also be comprised in a priori knowledge.
It should be emphasized that the algorithms in question are dynamic and adaptive. They respond in real time to measurement data and the system’s current state, such as elapsed time and the number of processed dots. This constant feedback loop is designed to enhance the ToF system’s resource utilization efficiency.
Multi reflection scenario
In a sensor configured to stop illuminating dots as soon as a peak with a sufficient SNR is detected (i.e. as described with respect to Fig. 6), the sensor may miss additional reflections generated by the signal emitted by an emitter. There are situations wherein detection of more than a single reflection is advantageous, in which case emission and/or reception of the dot should not cease once a single peak with sufficient SNR is detected.
However, by not using the method according to Fig. 6, excess power may be used, for example, if the device is used against a flat wall. All dots illuminating the wall are expected to generate only one reflection. However, if the device is not configured to estimate the number of expected reflections (i.e. one in this situation), the dots will continue to be emitted but no additional reflection will rise above the noise level. Hence, a device is provided that is configured to estimate dynamically and adaptively over the scene, using computer vision/DSP/ML based on RGB/dToF/PC (photon count), which receivers should expect additional reflections, thus allowing a reduced power usage. The functionality of the device may be provided as an algorithm executed by circuitry in accordance with the present disclosure.
In order to further increase the efficiency of the power usage of the device 100, if a receiver 301a is expected to receive more than a single reflection of a dot, that receiver is left to operate until a sufficient number of reflections are received. A single reflection is expected if a dot is expected to be reflected by a single surface having a mostly uniform distance from the device 100. If a dot is expected to encounter more than one surface, each encountered surface located at a different distance, each of the encountered surfaces reflects the dot once, thus leading to a plurality of reflections corresponding to the number of encountered surfaces.
Figs. 10a-c show a situation in which multiple reflections per dot may be expected.
Specifically, Fig. 10a illustrates a scene that may be captured by a visual image capturing system, for example a camera, that images a same field of view as the device 100. In the scene illustrated in Fig. 10a, for example, two dots, namely a first dot Dot1 and a second dot Dot2, illustrated by circles, may be projected by the emitter array 200. It is recognized that the circles merely symbolically represent an extent of the dots in the scene. For example, the first dot Dot1 is projected onto a wall. The second dot Dot2 is projected onto a corner of a piece of furniture, for example an upper right corner of a lampshade, as illustrated.
Fig. 10b shows an enlarged view of the part of the scene illustrated in Fig. 10a that is subtended by the first dot Dot1. As shown (cf. Fig. 10a), the first dot Dot1 encounters a wall, corresponding, in Fig. 10b, to a first area A1. Therefore, the first dot Dot1 is expected to be reflected, in the first area A1, once, producing a single reflection.
Fig. 10c shows an enlarged view of the part of the scene illustrated in Fig. 10a that is subtended by the second dot Dot2. As shown (cf. Fig. 10a), the second dot Dot2 encounters, in order of time since emission of the dot, a lampshade (cf. Fig. 10a) corresponding to a first area A1, producing a first reflection, then a wall (cf. Fig. 10a), corresponding to a second area A2, producing a second reflection. Therefore, the second dot Dot2 projected into the scene of Fig. 10a at this position is expected to produce two distinct reflections. As illustrated, the first area A1 and the second area A2 are visually separated by an edge E1.
It is understood that other dots may produce more than two reflections. For example, a third dot (not illustrated) projected to an upper left corner of the sofa shown in Fig. 10a may encounter, in order of time since emission of the dot, a cushion (cf. Fig. 10a) on the sofa, producing a first reflection, then a structure of the sofa (cf. Fig. 10a), producing a second reflection, then a wall (cf. Fig. 10a), producing a third reflection.
Figs. 11a and 11b illustrate a time sequence of a received signal corresponding, for example, to an outgoing signal L1-1, for dots as shown in Fig. 10a. Each of Fig. 11a and Fig. 11b may correspond, for example, to a histogram as shown in Fig. 2. A signal may thus be represented by a count of received photons over time.
Fig. 11a shows a signal with a single reflection, as may correspond to the first dot Dot1 shown in Fig. 10a and Fig. 10b. Here, a single reflection, corresponding to the peak (“1st peak”), is comprised in the signal.
Fig. 11b shows a signal with two reflections, as may correspond to the second dot Dot2 shown in Fig. 10a and Fig. 10c. Here, two reflections, corresponding respectively to the first peak (“1st peak”) and the second peak (“2nd peak”), are comprised in the signal.
In order to use the power emitted by the emitter array 200 most efficiently, the emitter array 200 and/or the receiver array 300 are, according to the present embodiment, operated until the number of peaks detected in the signal corresponds to a number of expected peaks in the signal.
Thus, the present embodiment provides a function to estimate the number of expected reflections in a signal corresponding to a dot. Then, once the number of expected reflections is known, the emitter array 200 and the receiver array 300 are operated until a required number of reflections, based on the number of expected reflections, are sensed. For example, if the number of expected reflections for a particular dot is determined to be greater than one, an emitter emitting the dot and a receiver receiving the dot are operated until at least two reflections of the dot have been sensed. Specifically, the device 100 may be configured to not cause an emitter and/or receiver to cease operating according to the method of Fig. 6 before the required number of reflections has been sensed.
It is to be noted that the number of expected reflections is calculated before measuring the actual number of reflections. Thus, the number of expected reflections is not a measured number but a predicted number.
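The stop criterion just described can be sketched as a small predicate; the function name and the explicit SNR coupling to the Fig. 6 method are illustrative assumptions:

```python
def stop_allowed(snr, snr_threshold, sensed_reflections, expected_reflections):
    """The SNR-based stop of Fig. 6 is only honoured once at least the
    predicted (not measured) number of reflections has been sensed."""
    required = max(1, expected_reflections)   # at least one reflection
    return snr > snr_threshold and sensed_reflections >= required
```

With one expected reflection this reduces to the plain Fig. 6 behaviour; with more, the emitter and receiver keep operating past the first detected peak.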
Figs. 12a-c are schematic illustrations of an apparatus configured to estimate the number of expected reflections. The apparatus comprises the device 100 as shown in Fig. 12a and a host device 800, as shown in Fig. 12b.
The device 100 shown in Fig. 12a comprises the control unit 110, the emitter 201 and the receiver 301. The function of each of the control unit 110, the emitter 201 and the receiver 301 has been described, for example, with reference to Fig. 3 hereinabove.
However, the output of the receiver 301, in addition to being provided to the control unit 110 in order to control the operation of the emitter 201 and the receiver 301, for example according to the method of Fig. 6, is provided to a reflection algorithm 810. The reflection algorithm 810 is a computing unit that estimates the number of expected reflections for an individual dot.
The reflection algorithm 810 is comprised in the parent unit 800 or, in other words, executed by the parent unit 800. The parent unit 800 may be any electronic computing device, for example, a computer, a mobile terminal, a microprocessor or the like. Note that the device 100 may be comprised in the parent unit 800. Note that the parent unit 800 may be a computing block run by a host device or any other computing infrastructure connected to the device 100.
The reflection algorithm 810 outputs a number, for example an integer, that indicates a number of expected reflections corresponding to a dot. The number of expected reflections is provided to the device 100 (which, in the present example, may be a dToF system 100).
The reflection algorithm 810 may optionally use image data provided by an RGB image sensor 600. If the reflection algorithm uses image data provided by the RGB image sensor, then the number of expected reflections may be obtained as follows:
For the second dot Dot2 as shown in Fig. 10c, one reflection is expected from each of the first area A1 and the second area A2, i.e. two reflections. That is, the second dot Dot2 generates two reflections. The first area A1 is separated from the second area A2 by the edge E1. By detecting a presence of the edge E1, the number of expected reflections corresponding to the second dot Dot2 can be calculated by incrementing a number of detected edges in the area subtending the dot by one.
The edge E1 may be detected using an edge detection algorithm. Edge detection algorithms are widely known to the skilled person and any algorithm capable of detecting an edge in image data may in principle be used. Merely as an example representing a wider range of edge detection algorithms, the known Canny edge detector is mentioned.
Alternatively, feature detection, computer vision or a neural network-based approach may likewise be used.
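A minimal sketch of the edge-counting rule (number of detected edges in the dot's footprint plus one) follows; a toy gradient threshold stands in for a full Canny detector, numpy is assumed available, and all values are illustrative:

```python
import numpy as np

def expected_reflections(patch, grad_threshold=50.0):
    """Count vertical edges in the image patch subtended by a dot by
    thresholding the horizontal intensity gradient, then add one, per the
    rule described above. A real system would use e.g. a Canny detector."""
    grad = np.abs(np.diff(patch.astype(float), axis=1))
    edge_cols = np.any(grad > grad_threshold, axis=0)
    # count connected runs of edge columns as single edges
    edges = int(np.sum(edge_cols[1:] & ~edge_cols[:-1]) + edge_cols[0])
    return edges + 1

# One vertical edge (lampshade vs. wall, cf. Dot2): two expected reflections.
patch_two = np.hstack([np.full((8, 4), 30), np.full((8, 4), 200)])
# Uniform wall (cf. Dot1): a single expected reflection.
patch_one = np.full((8, 8), 30)
```

Grouping adjacent edge columns into runs prevents a single wide edge from being counted several times.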
If an RGB image sensor is not provided, the number of expected reflections may be estimated using previous ToF measurements. Alternatively, if an RGB image sensor is not provided, a photon count image may be provided by the ToF sensor. The photon count image may be analyzed like a monochrome image. In this case, a scene composition obtained through ToF measurements may be analyzed using, for example, a plane estimation algorithm. Plane estimation algorithms are known to the skilled person. For example, among a wider range of algorithms that may in principle be used, the known Random Sample Consensus (RANSAC) algorithm is mentioned. The number of expected reflections corresponding to the dot may then be calculated by incrementing a number of detected planes in the area subtending the dot by one.
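As a toy stand-in for the plane-estimation step (e.g. RANSAC) mentioned above, distinct surfaces within a dot's footprint can be approximated by clustering prior ToF depth samples; the gap threshold and the whole approach are illustrative assumptions:

```python
def count_surfaces(depths_mm, gap_mm=100.0):
    """Assign prior ToF depth samples within a dot's footprint to distinct
    surfaces: sorted depths that differ by more than `gap_mm` are taken to
    belong to different surfaces. A simplistic proxy for plane estimation."""
    ds = sorted(depths_mm)
    surfaces = 1
    for a, b in zip(ds, ds[1:]):
        if b - a > gap_mm:
            surfaces += 1
    return surfaces

# Samples from a lampshade (~1 m) and a wall behind it (~2.5 m): 2 surfaces.
two_surfaces = count_surfaces([1000, 1010, 2500, 2510])
```

Each counted surface corresponds to one expected reflection of the dot.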
Fig. 12c shows a receiver 301c that may be used in the present embodiment. The receiver 301c may be the receiver 301 in Fig. 12a.
The receiver 301c comprises analog circuitry 310, a time-to-digital conversion unit 311, a histogram building unit 312, a processing unit 313, a noise statistics echo peak detection unit 314b and a JEP-Algorithm unit 315. Note that the receiver 301c may be replaced with the receiver 301a of Fig. 5. The function of the analog circuitry 310, the time-to-digital conversion unit 311, the histogram building unit 312 and the processing unit 313 is as described, for example, with reference to Fig. 3 hereinabove. The noise statistics echo peak detection unit 314b comprises the same functionality as the echo peak detection unit 314 of Fig. 3, but is further configured to provide noise statistics according to known methods of the ToF principle. The JEP-Algorithm unit 315 provides the information required by the control unit 110 to control the emitter 201 and the receiver 301c according to the method described with respect to Fig. 6 hereinabove (i.e. causing a ceasing of emission by the emitter 201 and/or reception by the receiver 301 once an SNR of the received signal exceeds a preset threshold). The JEP-Algorithm unit 315 is provided by the parent unit 800 with the number of expected reflections separately for each dot. The JEP-Algorithm unit 315 may specifically override the method according to Fig. 6 hereinabove such that causing a ceasing of emission by the emitter 201 and/or reception by the receiver 301 is only performed if the number of sensed reflections is equal to the number of expected reflections and/or greater than one if the number of expected reflections is greater than one.
Fast movement scenario
Blurred images are a well-known issue that can occur due to the movement of objects within a scene or due to the movement of the camera itself.
In a direct Time-of-Flight (dToF) system, one pixel consists of a Vertical Cavity Surface Emitting Laser (VCSEL) on the transmitter (TX) side and a Single-Photon Avalanche Diode (SPAD) on the receiver (RX) side. The VCSEL emits a train of light pulses, which travel towards an object in the scene and are then reflected back to the SPAD. However, when there is movement in the scene or of the camera itself, the light pulses can be reflected back at different times, causing the image to appear blurred. This is because the movement causes the distance between the camera and the object to change, resulting in the light pulses taking a different amount of time to travel to and from the object. This can cause the image to appear distorted and make it difficult to accurately determine the distance to the object.
After the histogram is integrated, post-processing is performed to detect the peak of the histogram and verify if its signal-to-noise ratio (SNR) is above a predefined threshold.
However, if there is movement of the camera or the object during the pulse train, the reflected photons will be received at times corresponding to different distances. This can cause the histogram to be “smeared” and result in inaccurate distance measurements.
The distance change can be significant enough to cause the reflected photons to be received at different distances, potentially moving them to the next histogram bin. This can result in a “smearing” effect on the histogram, where the peak of the histogram becomes less distinct and the overall shape becomes more spread out. This smearing can impact the accuracy of the distance measurement and make it more difficult to detect the peak of the histogram and verify if its signal-to-noise ratio (SNR) is above a predefined threshold.
The movement of objects or the camera during the pulse train in a Time-of-Flight (ToF) system can have multiple negative impacts on the histogram output. One such impact is that the detected distance may be inaccurate, as it may be a combination of multiple distances due to the movement. Additionally, the distance may not be detected at all if the histogram peak is not high
enough above the noise level, resulting in a missed detection. This phenomenon is more severe in ToF systems compared to blurring in RGB sensors, as it can significantly impact power consumption due to excessive noise accumulation and/or waste of illumination time without meaningful signal reception. Moreover, unlike in RGB sensors where the pixel outputs a blurred signal in any case, in dToF systems, the peak may not cross the threshold leading to no output at all being generated.
Figures 13 and 14 provide a schematic representation of how blur affects power consumption in a direct Time-of-Flight (dToF) system. Specifically, they illustrate a scenario in which the camera is rotating and a change in the measured distance is detected. In this case, the new distance is reported as the peak value. However, this process results in wasted energy, as the movement of the camera causes the distance to change and the system must continuously adjust to the new distance, consuming additional power. This phenomenon can significantly impact power consumption due to excessive noise accumulation and/or waste of illumination time without meaningful signal reception.
Fig. 13 schematically shows a dToF camera 100 that is rotating from left to right. As the camera rotates, the light beam emitted by its VCSEL moves from left to right and hits the surrounding environment at various locations. The figure depicts six different states, labeled as Ray1, Ray2, ..., Ray6, of the emitted light beam as it is emitted at subsequent times. This demonstrates how the movement of the camera can cause the light beam to hit different locations in the environment, which can impact the accuracy of distance measurements and power consumption in the dToF system.
Fig. 14 schematically shows the change of the histogram captured by the dToF camera 100 of Fig. 13 for the six respective times at which Ray1, Ray2, ..., Ray6 of Fig. 13 are emitted. The diagram is symbolic. It depicts six different histograms that are arranged on the time line at six respective times T1, T2, ..., T6 when they have been captured.
At times T1 and T2, the peak of the histogram does not cross the threshold, which leads to wasted power. Moreover, if the exposure time setting does not take this into account, the SPAD/VCSEL may stop operating beforehand and no peak will be returned at all.
Starting at time T3 and proceeding until time T6, a new peak is detected and begins to accumulate in a separate histogram bin. As it continues to accumulate, it eventually crosses the threshold, while the old peak remains at a low level. It should be noted that the threshold increases in proportion to the accumulated noise level. During the first two exposure times, noise
is accumulated without any meaningful signal reception, resulting in an increase in the threshold. This phenomenon also has an impact on power consumption. If the histogram accumulation and SPAD/VCSEL operation were to commence at a later time, excessive power consumption could be eliminated.
Figures 15 and 16 provide a further schematic representation of how blur affects power consumption in a direct Time-of-Flight (dToF) system. Specifically, they illustrate a scenario in which the camera is rotating and a change in the measured distance occurs. However, in this case, the peak of the new distance is never returned and the exposure time continues until it reaches the maximum exposure time. This results in excessive noise accumulation and wasted illumination time without meaningful signal reception, leading to significant power consumption. The system is unable to continuously adjust to the new distance due to the movement of the camera, causing the distance to change and resulting in further power consumption.
Fig. 15 schematically shows a dToF camera 100 that is rotating from left to right. As the camera rotates, the light beam emitted by its VCSEL moves from left to right and hits the surrounding environment at various locations. The figure depicts fourteen different states, labeled as Ray1, Ray2, ..., Ray14, of the emitted light beam as it is emitted at subsequent times.
Fig. 16 schematically shows the change of the histogram captured by dToF camera 100 of Fig. 15 for the fourteen respective times at which Ray 1, Ray 2, ..., Ray 14 of Fig. 15 are emitted. The diagram is again symbolic. It depicts different histograms that are arranged on the time line at fourteen respective times T1, T2, ..., T14 when they have been captured. While the histogram is being populated, the peak persistently shifts, thereby preventing the peak from accumulating a sufficient value above the threshold. Consequently, no output will be generated and the exposure may continue indefinitely.
As shown in the examples of Figs. 13 to 16 above, in contrast to the blurring effect in RGB sensors, where the light continues to be captured and transmitted from the sensor to the host, the histogram processing operation in a SPAD sensor can have a significant impact on power consumption, in addition to incorrect distance calculations. In the case of an intelligent illuminator, this can be counterintuitive. When there is sudden, fast movement, the intelligent illuminator may request the transmission of many dots, as the previous history is no longer relevant. However, under such high movement, there is a possibility that most of the dots will either not be returned or will require a longer exposure time than originally anticipated, further increasing power consumption.
In the light of the above, the embodiments described below provide a system that involves movement and distance calculations, using a SPAD sensor and an illuminator. The embodiments provide a mechanism that can identify such movements and react accordingly. This mechanism detects and responds to movements in order to improve the accuracy of distance calculations and reduce power consumption. It may involve adjusting the exposure time or illumination level, or implementing other strategies to better handle movement in the system.
A mechanism that identifies and reacts to movements in the context of a Time-of-Flight (ToF) system should take the following into account: The mechanism is able to detect and track the movement and speed of the camera in order to adjust the ToF system's operation accordingly. This may involve adjusting the exposure time, illumination level, or other parameters to compensate for the camera's movement. The mechanism is able to detect and track moving objects within the scene. This information can be used to adjust the ToF system's operation to better handle moving objects and reduce the impact of blur on power consumption and distance calculations. Still further, the mechanism can take into account information about the scene: The mechanism may have access to information about the scene, such as object distances and flatness. This information can be used to optimize the ToF system's operation and improve the accuracy of distance calculations. Still further, the mechanism can take into account the exposure time: The mechanism may for example be able to adjust the exposure time of the ToF system in response to movements and changes in the scene. This can help to reduce power consumption and improve the accuracy of distance calculations.
By taking these factors into account in higher granularity or resolution, the new mechanism can better identify and react to movements in the context of a ToF system, improving the overall performance and efficiency of the system.
In the following, the algorithm that can be used to react to movements in a Time-of-Flight (ToF) system is described in more detail. Based on such information, the algorithm can select to perform the following actions for each pixel or group of pixels:
Stop or shorten exposure time:
The algorithm may stop or shorten the exposure time of the pixels in order to reduce power consumption and improve the accuracy of distance calculations.
Delay activation of selected dots:
The algorithm may delay the activation of selected dots. For example, the algorithm can delay the activation of selected dots until the movement in the scene has slowed down.
Restart histograms if time expires:
The algorithm may restart histogram capture if the time expires. For example, the algorithm can restart the histograms if the exposure time expires.
Note that the histogram may only be restarted after some time. This can be beneficial in certain situations. For example, at first a dot may be oriented at the sky, but due to movement the dot may subsequently be oriented at a close object. In this example, at first the histogram contains only ambient noise, which is very high and which causes the threshold to be high. However, after the movement the peak generated by the close object will not be high enough above the noise. Thus, if the histogram is cleared and started from the beginning, the noise level is lower and it is easier for the peak to be detected.
Advanced accumulation of histogram:
The algorithm may use advanced accumulation techniques for the histogram that take into account pixel movement.
Share histogram information:
The algorithm can share histogram information over adjacent pixels so that the accumulation of one pixel may contribute to the detection of adjacent pixels. This is particularly relevant for full ToF systems (as opposed to spot-ToF) and can help to improve the accuracy of distance calculations and reduce power consumption.
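As a sketch of such sharing, the histogram of one pixel can be combined with those of its four neighbours before peak detection. This pure-Python illustration assumes a row-major grid of per-pixel histograms; it is not tied to any particular sensor layout:

```python
def shared_histogram(hists, x, y):
    """Combine the histogram of pixel (x, y) with its 4-neighbours, so the
    accumulation of one pixel contributes to detection in adjacent pixels.
    hists[row][col] is a list of bin counts."""
    rows, cols = len(hists), len(hists[0])
    combined = list(hists[y][x])
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < cols and 0 <= ny < rows:
            combined = [c + n for c, n in zip(combined, hists[ny][nx])]
    return combined

# A weak echo in bin 2: after accumulation over the 4-neighbourhood, the
# bin-2 peak is raised further above the surrounding noise level.
hists = [[[5, 4, 6, 5] for _ in range(3)] for _ in range(3)]
combined = shared_histogram(hists, 1, 1)
```

A full implementation would weight neighbouring contributions and account for per-pixel distance offsets, but the basic accumulation step is as above.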
To overcome excessive power consumption in a system, the embodiment described here comprises the use of an Inertial Measurement Unit (IMU). An IMU is an electronic device that measures and reports a body’s specific force, angular rate, and sometimes the magnetic field surrounding the body, using a combination of accelerometers, gyroscopes, and magnetometers. By using an IMU, the system can detect and track movement and orientation changes, allowing it to adjust its operation accordingly and reduce power consumption. In a Time-of-Flight (ToF) system, an IMU can be used to detect and track movement in the scene, allowing the system to adjust exposure time, illumination level, and other parameters to reduce power consumption and improve distance calculations.
Note that movement may be tracked by other means as well. For example, movement may be detected using RGB information from an RGB sensor using a computer vision algorithm. Furthermore, ToF information may be used to detect movement.
An exemplifying algorithm considers the expected or configured exposure time for every dot (t_exposure) and the camera movement speed, calculated from an Inertial Measurement Unit (IMU) in degrees per second, which is converted to pixels per second:

deg/sec → δ_pixel/sec (8)
Note that the IMU provides movement for several axes. Here, for the sake of simplicity, different speeds per axis are not referenced, but this choice is to be understood as non-limiting. Furthermore, only rotational movement and not linear movement is referenced for simplicity, but this is likewise to be understood as non-limiting.
By multiplying the camera movement speed in pixels per second with the exposure time, the system can estimate how many pixels will be shifted for every dot, resulting in a value called δ_pixel.
If the value of δ_pixel is greater than a predefined threshold (TH), the dot will not be activated.
The threshold setting (TH) can be defined globally or per region, depending on the scene.
This process is used to reduce power consumption and improve the accuracy of distance calculations in a Time-of-Flight (ToF) system.
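A minimal sketch of this decision follows. The field of view and sensor resolution used for the deg/sec to pixel/sec conversion of equation (8) are illustrative assumptions; the actual mapping depends on the optics of the system:

```python
def deg_to_pixels_per_sec(speed_deg, fov_deg=60.0, resolution_px=240):
    """Convert the IMU rotation speed (deg/sec) to a pixel shift rate,
    assuming a simple linear mapping of the field of view onto the sensor."""
    return speed_deg * resolution_px / fov_deg

def activate_dot(speed_deg, exposure_s, th_pixels=1.0):
    """delta_pixel = pixel shift rate * exposure time; the dot is
    activated only if the estimated shift does not exceed threshold TH."""
    delta_pixel = deg_to_pixels_per_sec(speed_deg) * exposure_s
    return delta_pixel <= th_pixels

slow_pan = activate_dot(speed_deg=20, exposure_s=0.003)    # shift 0.24 px
fast_pan = activate_dot(speed_deg=200, exposure_s=0.003)   # shift 2.4 px
```

Under these assumptions a slow pan keeps the dot on essentially the same SPAD and the dot is activated, while a fast pan shifts it by more than a pixel and the dot is skipped, saving the illumination and exposure power.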
Fig. 17 illustrates the process of exposure time determination and decision in a system that includes a control unit 110, a transmitter (Tx) 201, a receiver (Rx) 301b, an RGB sensor 600, and an Inertial Measurement Unit (IMU) 700. The control unit 110 receives information from the exposure time determination and decision module 501, which is used to control the operation of the transmitter (Tx) 201 and receiver (Rx) 301b. The exposure time determination and decision module 501 obtains information from the RGB sensor 600 and the IMU 700 to determine the optimal exposure time for the system. The RGB sensor 600 provides information about the scene, such as object distances and flatness, while the IMU 700 provides acceleration information, from which information about the movement and speed of the camera is derived. By taking this information into account, the exposure time determination and decision module 501 can adjust the exposure time to improve the accuracy of distance calculations and reduce power consumption.
The system of Fig. 17 uses external information, such as camera speed from the IMU 700 or other sensor 600, 601, to control the operation of the SPAD/VCSEL in a dToF system. This external information can also include additional information about the scene, such as distance distribution and illumination time, which can be useful for configuring and controlling the dToF exposure time and other parameters. The control can include configuring the exposure time on the fly, such as starting, stopping, and setting limits, as well as configuring the histogram processing, the receivers and/or the emitters to take into account movement and perform multi-SPAD processing. The goal is to improve the accuracy and efficiency of the dToF system by using external information to dynamically adjust its operation.
Fig. 18 provides a schematic example of receiver (RX) 301b of Fig. 17. RX comprises a SPAD analog circuitry 310, a Time-to-digital conversion unit 311, a histogram builder 312, and a Noise Statistics Echo Peak Detection 314b. The RX is responsible for processing the incoming signal from the SPAD (Single Photon Avalanche Diode) analog circuitry 310. The SPAD analog circuitry 310 may also be responsible for amplifying and filtering the incoming signal from the SPAD. The Time-to-digital conversion unit 311 converts the analog signal into a digital format that can be processed by the histogram builder 312. The histogram builder 312 creates a histogram of the signal data, which is used to extract information about the distance to the object being measured. The Noise Statistics Echo Peak Detection 314b is responsible for analyzing the noise statistics of the signal and identifying the peak of the echo. The timing of the peak corresponds to the distance to the object.
As shown in Fig. 17 above, the RX 301b may be controlled by the control unit 110 based on information from the exposure time determination and decision 501, which obtains information from the RGB sensor 600, dToF sensor 601, and IMU 700.
Figure 19 provides a more detailed view of the exposure time determination and decision module 501. The module is responsible for determining the exposure time limits for each pixel in the scene, based on information about the scene and the movement of the camera. The module consists of several sub-modules, including camera movement estimation 720, scene estimation 620, rotational shift calculation 730, exposure time determination & decision 740, and configuration and decision 750.
The camera movement estimation 720 sub-module estimates the movement of the camera (speed[x,y] in deg/sec) based on data from the IMU 700, which may include 6DoF measurements or other relevant information. Camera movement can be estimated per axis separately, and rotational movement can also be estimated.
The scene estimation 620 sub-module determines information about the scene based on (past) dToF measurements 120 and RGB image data 610, including illumination time per pixel illum_time[x,y] and distance per pixel dist[x,y]. The scene estimation may be performed using computer vision (CV) techniques or neural network (NN) algorithms. The choice of method will depend on the specific application and requirements. If there is no information about the scene, the algorithm can estimate a range for the illumination time per pixel (or group of pixels), and similar information about the maximum delta difference for distances between adjacent pixels.
In other words: if there is no accurate information about the scene, some a priori information (based on the past, or preconfigured) is used. A priori information may be a distribution of distances in the scene, or a maximal difference between adjacent pixels that will affect the threshold selection algorithms.
The rotational shift calculation 730 sub-module calculates the rotational shift based on the camera movement estimated by camera movement estimation 720 and the scene information determined by scene estimation 620. The rotational shift is calculated as θ[x,y] = speed[x,y] * time[x,y], where speed[x,y] is the camera movement and time[x,y] is the illumination time in the scene information.
The exposure time determination & decision 740 sub-module calculates the exposure time limits per pixel, exp_time[x,y], based on the rotational shift calculated by rotational shift calculation 730 and the scene information determined by scene estimation 620. For example, the result may indicate that all pixels with θ[x,y] > TH should not be shot at all. In other words, all pixels with a value of θ[x,y] (representing the rotational shift of the pixel at position (x, y)) greater than the threshold value (TH) should not be active.
The configuration and decision 750 sub-module determines configuration and decision options based on the exposure time limits determined by exposure time determination & decision 740. It may use JEP (Just Enough Power) to determine these options. Configuration and decision 750 then controls control unit 110 to realize its configuration and decision options.
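The per-pixel pipeline of Fig. 19 can be sketched as follows. The maps and the threshold value are illustrative: speed[x,y] would come from camera movement estimation 720 and illum_time[x,y] from scene estimation 620:

```python
def rotational_shift(speed, illum_time):
    """Rotational shift calculation 730: theta[x,y] = speed[x,y] * time[x,y]."""
    return [[s * t for s, t in zip(s_row, t_row)]
            for s_row, t_row in zip(speed, illum_time)]

def exposure_decision(theta, th):
    """Exposure time determination & decision 740: pixels whose rotational
    shift theta exceeds TH are not shot at all."""
    return [[shift <= th for shift in row] for row in theta]

speed = [[200.0, 200.0], [200.0, 200.0]]       # deg/sec per pixel (from IMU)
illum_time = [[0.001, 0.003], [0.001, 0.003]]  # sec per pixel (scene estimate)
theta = rotational_shift(speed, illum_time)    # degrees of shift per pixel
active = exposure_decision(theta, th=0.3)      # which dots to actually shoot
```

With these numbers, the pixels with the longer illumination time accumulate too much rotational shift during their exposure and are excluded, while the short-exposure pixels remain active.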
Fig. 20 is a symbolic illustration of a processing method according to the third embodiment. The processing method describes estimating the impact for the simple scenario of a flat wall. The camera 20 rotates from left to right. At a first time instance, the light beam travels a distance d1 before being reflected by the wall. At a second time instance, the light beam travels a distance d2 before being reflected by the wall. At a third time instance, the light beam travels a distance D = 2 m before being reflected by the wall. The wall is at an angle of -45 degrees with respect to the camera orientation (ray direction) at the third time instance. The process determines whether the difference between d1 and d2, |d1 - d2|, is greater than a threshold, here for example 15 cm. If |d1 - d2| is larger than the threshold, then this indicates that the system starts to suffer from blur that affects power consumption (as described with regard to Figs. 15 and 16 above) as the camera is rotating and a change in the measured distance occurs.
Fig. 21 is an explanatory diagram of the rotational pixel shift of a device. The diagram shows the impact of exposure time and camera movement on non-returned pixels due to high movement for the situation described in Fig. 20 (wall angle: 45 deg, wall distance: 2 meters). The exposure time in the example was 0.003 seconds. The rotational speed of the ToF camera was 200 deg/sec. The diagram shows the distance difference (delta) |d1 - d2| in meters for the distance between ToF camera and wall (y axis) over the viewing direction in degrees (x axis). The area indicates that all SPADs looking at degrees 30-40 will suffer from a peak shift of more than, for example, 20 cm. In one dToF system, such a distance shift means that peaks will not be detected and the exposure time will be infinite, if not interrupted.
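The flat-wall geometry of Figs. 20 and 21 can be approximated as follows. The sketch assumes the ray distance to a flat wall is d0/cos(angle), with d0 the perpendicular wall distance; the numbers are illustrative and the exact curve depends on the geometry of the figure:

```python
import math

def wall_distance(d0, angle_deg):
    """Distance along a viewing ray to a flat wall, for a ray at angle_deg
    from the wall normal and perpendicular wall distance d0 (meters)."""
    return d0 / math.cos(math.radians(angle_deg))

def blur_suspected(d0, angle_deg, speed_deg=200.0, exposure_s=0.003,
                   th_m=0.15):
    """Compare |d1 - d2| over one exposure against the 15 cm threshold:
    d1 at the start of the exposure, d2 after the camera has rotated
    by speed_deg * exposure_s degrees."""
    d1 = wall_distance(d0, angle_deg)
    d2 = wall_distance(d0, angle_deg + speed_deg * exposure_s)
    return abs(d1 - d2) > th_m

near_normal = blur_suspected(1.4, angle_deg=0.0)   # tiny shift, no blur
grazing = blur_suspected(1.4, angle_deg=80.0)      # large shift, blur
```

The sketch reproduces the qualitative behaviour of Fig. 21: near the wall normal the per-exposure distance change is negligible, while at grazing viewing angles the same rotation shifts the measured distance by tens of centimetres and the peak is no longer returned.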
Implementation
In the following, an implementation example of a dToF camera is described in more detail. The system of Fig. 22 may be used to implement the system described in Fig. 17, which illustrates the process of exposure time determination and decision and the configuration and decision options based on the exposure time limits.
In Fig. 22 a scene is illuminated by an iToF camera 1002. The reflected light from the scene is captured by the iToF camera 1002. The iToF camera 1002 comprises an iToF camera controller 1002-1 which controls the operation of the illuminator and the sensor of the camera according to an operation mode which defines configuration settings related to the operation of the imaging sensor and the illuminator (such as exposure time, or the like). The controller 1002-1 provides the ToF measurements (e.g. depth data frames) to a ToF datapath 1002-2 which processes the ToF measurements into a ToF point cloud (defined e.g. in a 3D camera coordinate system). The ToF point cloud is a point representation of the ToF measurements which describes the current scene as viewed by the ToF camera. The ToF point cloud may for example be represented in a cartesian coordinate system of the iToF camera. This ToF point cloud obtained from the ToF datapath is forwarded to a 3D reconstruction 1004.
3D reconstruction 1004 creates and maintains a three-dimensional (3D) model of the scene 1001 based on technologies known to the skilled person, for example based on the “KinectFusion”
pipeline. In particular, 3D reconstruction 1004 comprises a pose estimation 1004-1 which receives the ToF point cloud. The pose estimation 1004-1 further receives auxiliary input from auxiliary sensors 1003, and a current 3D model from a 3D model reconstruction 1004-2. Based on the ToF point cloud, the auxiliary input, and the current 3D model, the pose estimation 1004-1 applies algorithms to the measurements to determine the pose of the ToF camera (defined by e.g. position and orientation) in a global scene ("world"). Such algorithms may include, for example, the iterative closest point (ICP) method between point cloud information and the current 3D model, or for example a SLAM (Simultaneous Localization and Mapping) pipeline. Knowing the camera pose, the pose estimation 1004-1 "registers" the ToF point cloud obtained from datapath 1002-2 to the global scene, thus producing a registered point cloud which represents the point cloud in the camera coordinate system as transformed into a global coordinate system (e.g. a "world" coordinate system) in which a model of the scene is defined.
The registered point cloud obtained by the pose estimation 1004-1 is forwarded to a 3D model reconstruction 1004-2. The 3D model reconstruction 1004-2 updates a 3D model of the scene based on the registered point cloud obtained from the pose estimation 1004-1 and based on auxiliary input obtained from the auxiliary sensors 1003.
As described above, the pose estimation 1004-1 and the 3D model reconstruction 1004-2 obtain auxiliary input from auxiliary sensors 1003. The auxiliary sensors 1003 comprise a colour camera 1003-1 which provides e.g. an RGB/LAB/YUV image of the scene, from which sparse or dense visual features can be extracted to perform conventional visual odometry, that is, determining the position and orientation of the current camera pose. The auxiliary sensors 1003 further comprise an event-based camera 1003-2 providing e.g. high frame rate cues for visual odometry from events. The auxiliary sensors 1003 further comprise an inertial measurement unit (IMU) 1003-3 which provides e.g. acceleration and orientation information that can be suitably integrated to provide pose estimates. These auxiliary sensors 1003 gather information about the scene 1001 in order to aid the 3D reconstruction 1004 in producing and updating a 3D model of the scene.
Fig. 23 shows a general configuration of circuitry 1200 according to the present disclosure. The circuitry may implement, for example, the control unit 110 and the exposure time determination & decision module 501 in Fig. 19. The circuitry 1200 can include a CPU 1201, interacting with storage 1202. The storage 1202 can, for example, be a solid state disk (SSD). The device 1200 can further include a random-access memory (RAM) 1203 interacting with the CPU 1201. The device can include a Bluetooth transceiver and decoder 1204 and an antenna and circuitry configured to interface with a wireless local area network (WLAN) 1205. The circuitry 1200 contains a ToF interface 1213 which interfaces with the electronic device 100 and allows the CPU 1201 to control the ToF sensor 1213 according to the embodiments described hereinabove. The circuitry 1200 can further include a user interface 1212. The user interface 1212 may be used to acquire user input as required. The circuitry can further include a sensor array 1211 capable of sensing a user reaction. The circuitry further comprises an RGB image sensor 1210, which interfaces with the CPU.
Note that the intelligent scheduling and management block may be provided as part of the ToF system 1213 or be executed by the CPU 1201. Integrating the intelligent scheduling and management block as part of the sensor would enable fast reaction and low latency decisions for the receiver and emitter (the emitter could also be controlled from the receiver, that is, the receiver is the master of the dToF system). External sensors, as well as the host application, could provide additional meta-data/configurations allowing the intelligent block to take the decisions.
Another option is split operation: some parts are performed internally, closer to the sensor, and some are performed externally, running on the host, and they communicate through an interface.
All units and entities described in this specification and claimed in the appended claims can, if not stated otherwise, be implemented as integrated circuit logic, for example on a chip, and functionality provided by such units and entities can, if not stated otherwise, be implemented by software.
In so far as the embodiments of the disclosure described above are implemented, at least in part, using software-controlled data processing apparatus, it will be appreciated that a computer program providing such software control and a transmission, storage or other medium by which such a computer program is provided are envisaged as aspects of the present disclosure.
Note that the present technology can also be configured as described below.
(1) An electronic device comprising circuitry configured to manage resources of a time-of-flight system to optimize time, and/or energy usage, and/or performance of the time-of-flight system based on sensor information, wherein the sensor information comprises time-of-flight measurements and/or measurements from one or more other sensors.
(2) The electronic device according to (1), wherein the circuitry is configured to perform grouping of the emitters and corresponding receivers to groups, based on conditions.
(3) The electronic device according to any of (1) to (2), wherein a group refers to emitters and receivers which are active in parallel at least part of the time.
(4) The electronic device according to any of (1) to (3), wherein the circuitry is configured to set parameters for each group of emitters/receivers including activation profiles, emitter parameters, and/or receiver parameters.
(5) The electronic device according to any of (1) to (4), wherein the circuitry is configured to dynamically control the activation of an emitter and corresponding receiver in a group to cease activation, continue activation, and/or modify the group's contents.
(6) The electronic device according to any of (1) to (5), wherein the circuitry is configured to perform dynamic modifications based on histogram contents.
(7) The electronic device according to any of (1) to (6), wherein the circuitry is configured to perform dynamic modifications based on system conditions.
(8) The electronic device according to any of (1) to (7), wherein the circuitry is configured to perform dynamic modifications based on host requirements.
(9) The electronic device according to any of (1) to (8), wherein grouping is based on estimated/projected parameters of the dots and/or hardware limitations.
(10) The electronic device according to any of (1) to (9), wherein the estimated/projected parameters are based on previous dToF measurement reports, upper layer a priori information, or other sensor information.
(11) The electronic device according to any of (1) to (10), wherein the scheduling of groups is targeted to optimize a dToF key performance indicator and take into account system considerations.
(12) The electronic device according to any of (1) to (11), wherein managing resources of a time-of-flight system comprises optimizing distance measurements and/or a number of returned dots.
(13) The electronic device according to any of (1) to (12), wherein the managing comprises selective cessation of an operation of emitters of a group of emitters for time-of-flight measurements, wherein each emitter is configured to commence emitting a respective dot simultaneously with the other emitters of the group of emitters.
(14) The electronic device according to any of (1) to (13), wherein the circuitry is further configured to cause an emitter of the group of emitters to cease emitting when an incident dot related to the emitter has been sensed by a receiver array, and cause the receivers in the receiver
array to cease operating when a predefined number of the emitters of the group of emitters have ceased emitting.
(15) The electronic device according to any of (1) to (14), wherein the receiver related to the emitter emitting the incident dot is caused to cease operating when the incident dot has been sensed by the receiver and the emitter emitting the incident dot is caused to cease emitting.
(16) The electronic device according to any of (1) to (14), wherein the circuitry is further configured to cause a particular receiver sensing the incident dot to cease operating when the incident dot has been sensed.
(17) The electronic device according to any of (1) to (14), wherein the circuitry is further configured to provide a time-of-flight measurement indicating a distance to an object based on the group of dots emitted by the predefined set of emitters and sensed by the receiver array.
(18) The electronic device according to any of (1) to (14), wherein the predefined number of the emitters is equal to a total number of emitters of the predefined group of emitters.
(19) The electronic device according to any of (1) to (14), wherein the circuitry is further configured to cause emitters of an emitter array to sequentially emit groups of dots and, for each group of emitted dots, to cause receivers in the receiver array (300) to cease operating when a number of dots of the group have been sensed by the receiver array.
(20) The electronic device according to (19), wherein when the number of the emitters of a first group of emitters have ceased emitting and the receivers in the receiver array have been caused to cease operating, remaining emitters keep emitting as part of a subsequent second group of emitters.
(21) The electronic device according to (20), wherein the circuitry is further configured to provide at least one first time-of-flight measurement based on the group of dots emitted by the first set of emitters and sensed by the receiver array and provide at least one second time-of- flight measurement based on the group of dots emitted by the second group of emitters and sensed by the receiver array.
(22) The electronic device according to (21), wherein the circuitry is further configured to cause the emitter to cease emitting after a predefined maximum emission time.
(23) The electronic device according to (22), wherein the circuitry is further configured to cause the receiver related to the emitter to cease operating after the predefined maximum emission time.
(24) The electronic device according to (23), wherein sensing the incident dot comprises monitoring whether a signal-to-noise-ratio of the incident dot measured by a receiver of the receiver array exceeds a predefined threshold.
(25) The electronic device according to (24), wherein the circuitry is further configured to cause the emitters to emit a plurality of dots in a dot pattern.
(26) The electronic device according to any of (1) to (25), wherein the dynamic managing comprises a selective managing of emitters and/or receivers based on a number of expected reflections of a dot.
(27) The electronic device according to any of (1) to (25), wherein the circuitry is further configured to determine, in image data, a number of edges within the subsection of the image sensor that corresponds to the dot, and to determine the number of expected reflections of the dot based on the number of edges and/or plane detection.
(28) The electronic device according to (27), wherein the circuitry is further configured to control an operating duration of the emitter based on the number of expected reflections.
(29) The electronic device according to (27), wherein the circuitry is further configured, if the number of expected reflections is greater than one, to cause a particular emitter related to an incident dot to cease emitting only after more than one reflection has been sensed.
(30) The electronic device according to (29), wherein the circuitry is further configured, if the number of expected reflections is greater than one, to cause a particular receiver sensing an incident dot to cease sensing only after more than one reflection has been sensed and/or after a maximum illumination time has elapsed.
(31) The electronic device according to (29), wherein the expected number of reflections is one more than the number of edges in the field of view of the image sensor coinciding with the incident dot.
(32) The electronic device according to (29), wherein the number of surfaces and/or number of distinct distances is estimated using a structure detection algorithm, an edge-detection algorithm, and/or a plane detection algorithm.
(33) The electronic device according to any of (1) to (32), wherein the dynamic managing comprises a selective managing of emitters and/or receivers based on a detected movement of the electronic device.
(34) The electronic device according to (33), wherein the circuitry is further configured to calculate, from the detected movement of the electronic device, an expected shift of a position of an incident dot on a receiver array comprising the receivers during an exposure duration, and wherein the emitters or the receivers are caused to operate based on the expected shift.
(35) The electronic device according to (33), wherein the detected movement of the electronic device is a rotation and/or a translation of the electronic device.
(36) The electronic device according to any of (1) to (34), wherein if the value of the shift exceeds a predetermined threshold, the emitter related to the incident dot is not caused to emit the dot.
(37) The electronic device according to (34), wherein the position of the receiver related to the emitter emitting the incident dot is known a priori due to the optical characteristics of the ToF system, and wherein, if the value of the shift exceeds a predetermined threshold, the particular receiver is not caused to operate.
(38) The electronic device according to any of (1) to (34), wherein the circuitry is further configured to estimate a position of the incident dot on the receiver array and determine a particular receiver located at the position of the receiver array, and wherein, if the value of the shift exceeds a predetermined threshold, the particular receiver is not caused to operate.
(39) The electronic device according to any of (1) to (34), wherein the position of the receiver related to the emitter emitting the incident dot is known a priori due to the optical characteristics of the ToF system, and if the value of the shift exceeds a predetermined threshold, operation of the emitter and/or receiver is delayed until the value of the shift decreases below the predetermined threshold.
(40) The electronic device according to (34), wherein the circuitry is further configured to estimate a position of the incident dot on the receiver array and determine a particular receiver located at the position of the receiver array, and wherein, if the value of the shift exceeds a predetermined threshold, operation of the particular receiver is delayed until the value of the shift decreases below the predetermined threshold.
(41) The electronic device according to any of (1) to (34), wherein the circuitry is further configured to provide a time-of-flight measurement indicating a distance to an object based on the detected movement.
(42) The electronic device according to (41), wherein the circuitry is further configured to provide the time-of-flight measurement indicating a distance using multi-receiver processing.
(43) The electronic device according to (34), wherein calculating the expected shift comprises multiplying a rotation of the device per unit time with the exposure duration.

(44) The electronic device according to (34), wherein the exposure duration is estimated based on a structure of a scene, and the structure of the scene is estimated based on depth data of the scene captured in a previous frame and/or based on image data obtained from an image sensor and/or a priori information from a host.
(45) The electronic device according to (44), wherein the structure of the scene is estimated based on the depth data or based on the image data using a neural network or computer vision.
(46) A managing method for a time-of-flight device comprising managing resources of a time-of-flight system to optimize time, energy and/or performance usage of the time-of-flight system based on sensor information, wherein the sensor information comprises time-of-flight measurements and/or measurements from one or more other sensors.

(47) A computer program that, if executed by a computer, causes the computer to manage resources of a time-of-flight system to optimize time, energy and/or performance usage of the time-of-flight system based on sensor information, wherein the sensor information comprises time-of-flight measurements and/or measurements from one or more other sensors.
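As a non-normative illustration (not part of the application), the expected-shift calculation of items (34) and (43) and the threshold gating of items (36) to (40) could be sketched as follows. All names (angular rate, exposure duration, focal length, threshold) are hypothetical placeholders chosen for the example, and the small-angle projection model is an assumption, not a detail taken from the application:

```python
import math

def expected_dot_shift_px(angular_rate_rad_s: float,
                          exposure_s: float,
                          focal_length_px: float) -> float:
    """Expected shift of an incident dot on the receiver array, in pixels.

    Item (43): the rotation of the device per unit time is multiplied by the
    exposure duration; the accumulated angle is then projected through the
    optics (here modeled by a pinhole with focal length in pixels).
    """
    angle_rad = angular_rate_rad_s * exposure_s        # rotation per exposure
    return abs(math.tan(angle_rad)) * focal_length_px  # projected pixel shift

def should_operate(shift_px: float, threshold_px: float) -> bool:
    """Items (36)-(40): skip or delay emitter/receiver operation when the
    expected shift exceeds the predetermined threshold."""
    return shift_px <= threshold_px
```

Under this sketch, a device rotating at 0.1 rad/s with a 10 ms exposure and a 1000 px focal length would see a dot shift of about one pixel, so emission would proceed for any threshold above one pixel and be skipped or delayed otherwise.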
Claims
1. An electronic device (100) comprising circuitry configured to manage resources of a time-of-flight system to optimize time, and/or energy usage, and/or performance of the time-of-flight system based on sensor information, wherein the sensor information comprises time-of-flight measurements and/or measurements from one or more other sensors.
2. The electronic device of claim 1, wherein the circuitry is configured to perform grouping of the emitters and corresponding receivers to groups, based on conditions.
3. The electronic device of claim 1, wherein a group refers to emitters and receivers which are active in parallel at least part of the time.
4. The electronic device of claim 1, wherein the circuitry is configured to set parameters for each group of emitters/receivers including activation profiles, emitter parameters, and/or receiver parameters.
5. The electronic device of claim 1, wherein the circuitry is configured to dynamically control the activation of an emitter and corresponding receiver in a group to cease activation, continue activation, and/or modify the group's contents.
6. The electronic device of claim 1, wherein the circuitry is configured to perform dynamic modifications based on histogram contents.
7. The electronic device of claim 1, wherein the circuitry is configured to perform dynamic modifications based on system conditions.
8. The electronic device of claim 1, wherein the circuitry is configured to perform dynamic modifications based on host requirements.
9. The electronic device of claim 1, wherein grouping is based on estimated/projected parameters of the dots and/or hardware limitations.
10. The electronic device of claim 9, wherein the estimated/projected parameters are based on previous dToF measurement reports, upper layer a priori information, or other sensor information.
11. The electronic device of claim 1, wherein the scheduling of groups is targeted to optimize a dToF key performance indicator and take into account system considerations.
12. The electronic device of claim 1, wherein managing resources of a time-of-flight system comprises optimizing distance measurements and/or a number of returned dots.
13. The electronic device (100) according to claim 1, wherein the managing comprises selective cessation of an operation of emitters of a group of emitters for time-of-flight measurements, wherein each emitter is configured to commence emitting a respective dot simultaneously with the other emitters of the group of emitters.
14. The electronic device (100) according to claim 13, wherein the circuitry is further configured to cause an emitter of the group of emitters to cease emitting when an incident dot related to the emitter has been sensed by a receiver array, and cause the receivers in the receiver array (300) to cease operating when a predefined number of the emitters of the group of emitters have ceased emitting.
15. The electronic device (100) according to claim 14, wherein the receiver related to the emitter emitting the incident dot is caused to cease operating when the incident dot has been sensed by the receiver and the emitter emitting the incident dot is caused to cease emitting.
16. The electronic device (100) according to claim 14, wherein the circuitry is further configured to cause a particular receiver sensing the incident dot to cease operating when the incident dot has been sensed.
17. The electronic device (100) according to claim 14, wherein the circuitry is further configured to provide a time-of-flight measurement indicating a distance to an object based on the group of dots emitted by the predefined set of emitters and sensed by the receiver array.
18. The electronic device according to claim 14, wherein the predefined number of the emitters is equal to a total number of emitters of the predefined group of emitters.
19. The electronic device according to claim 14, wherein the circuitry is further configured to cause emitters of an emitter array to sequentially emit groups of dots and, for each group of emitted dots, to cause receivers in the receiver array (300) to cease operating when a number of dots of the group have been sensed by the receiver array.
20. The electronic device according to claim 19, wherein when the number of the emitters of a first group of emitters have ceased emitting and the receivers in the receiver array (300) have been caused to cease operating, remaining emitters keep emitting as part of a subsequent second group of emitters.
21. The electronic device according to claim 20, wherein the circuitry is further configured to provide at least one first time-of-flight measurement based on the group of dots emitted by the first set of emitters and sensed by the receiver array and provide at least one second time-of-flight measurement based on the group of dots emitted by the second group of emitters and sensed by the receiver array.
22. The electronic device according to claim 14, wherein the circuitry is further configured to cause the emitter to cease emitting after a predefined maximum emission time.
23. The electronic device according to claim 22, wherein the circuitry is further configured to cause the receiver related to the emitter to cease operating after the predefined maximum emission time.
24. The electronic device (100) according to claim 14, wherein sensing the incident dot comprises monitoring whether a signal-to-noise-ratio of the incident dot measured by a receiver of the receiver array exceeds a predefined threshold.
25. The electronic device (100) according to claim 14, wherein the circuitry is further configured to cause the emitters to emit a plurality of dots in a dot pattern.
26. The electronic device (100) according to claim 1, wherein the dynamic managing comprises a selective managing of emitters and/or receivers based on a number of expected reflections of a dot.
27. The electronic device according to claim 26, wherein the circuitry is further configured to determine, in image data, a number of edges and/or number of surfaces and/or number of distinct distances within the subsection of the image sensor that correspond to the dot, and to determine the number of expected reflections of the dot based on the number of edges and/or plane detection.
28. The electronic device according to claim 27, wherein the circuitry is further configured to control an operating duration of the emitter based on the number of expected reflections.
29. The electronic device according to claim 27, wherein the circuitry is further configured, if the number of expected reflections is greater than one, to cause a particular emitter related to an incident dot to cease emitting only after more than one reflection has been sensed.
30. The electronic device according to claim 29, wherein the circuitry is further configured, if the number of expected reflections is greater than one, to cause a particular receiver sensing an incident dot to cease sensing only after more than one reflection has been sensed and/or after a maximum illumination time has elapsed.
31. The electronic device according to claim 29, wherein the expected number of reflections is one more than the number of edges in the field of view of the image sensor coinciding with the incident dot.
32. The electronic device according to claim 29, wherein the number of surfaces and/or number of distinct distances is estimated using a structure detection algorithm, an edge-detection algorithm, and/or a plane detection algorithm.
33. The electronic device (100) according to claim 1, wherein the dynamic managing comprises a selective managing of emitters and/or receivers based on a detected movement of the electronic device (100).
34. The electronic device according to claim 33, wherein the circuitry is further configured to calculate, from the detected movement of the electronic device, an expected shift of a position of an incident dot on a receiver array comprising the receivers during an exposure duration, and wherein the emitters or the receivers are caused to operate based on the expected shift.
35. The electronic device (100) according to claim 33, wherein the detected movement of the electronic device (100) is a rotation and/or a translation of the electronic device.
36. The electronic device according to claim 34, wherein, if the value of the shift exceeds a predetermined threshold, the emitter related to the incident dot is not caused to emit the dot.
37. The electronic device according to claim 34, wherein the position of the receiver related to the emitter emitting the incident dot is known a priori due to the optical characteristics of the ToF system, and wherein, if the value of the shift exceeds a predetermined threshold, the particular receiver and/or the emitter related to the particular receiver is not caused to operate.
38. The electronic device according to claim 34, wherein the circuitry is further configured to estimate a position of the incident dot on the receiver array and determine a particular receiver located at the position of the receiver array, and wherein, if the value of the shift exceeds a predetermined threshold, the particular receiver is not caused to operate.
39. The electronic device according to claim 34, wherein, if the value of the shift exceeds a predetermined threshold, operation of the emitter and/or receiver is delayed until the value of the shift decreases below the predetermined threshold.
40. The electronic device according to claim 34, wherein the circuitry is further configured to estimate a position of the incident dot on the receiver array and determine a particular receiver located at the position of the receiver array, and wherein, if the value of the shift exceeds a predetermined threshold, operation of the particular receiver is delayed until the value of the shift decreases below the predetermined threshold.
41. The electronic device according to claim 34, wherein the circuitry is further configured to provide a time-of-flight measurement indicating a distance to an object based on the detected movement.
42. The electronic device according to claim 41, wherein the circuitry is further configured to provide the time-of-flight measurement indicating a distance using multi-receiver processing.
43. The electronic device according to claim 34, wherein calculating the expected shift comprises multiplying a rotation of the device per unit time with the exposure duration.
44. The electronic device according to claim 34, wherein the exposure duration is estimated based on a structure of a scene, and the structure of the scene is estimated based on depth data of the scene captured in a previous frame and/or based on image data obtained from an image sensor and/or a priori information from a host.
45. The electronic device according to claim 44, wherein the structure of the scene is estimated based on the depth data or based on the image data using a neural network or computer vision.
46. A managing method for a time-of-flight device comprising managing resources of a time-of- flight system to optimize time, and/or energy usage, and/or performance of the time-of-flight system based on sensor information, wherein the sensor information comprises time-of-flight measurements and/or measurements from one or more other sensors.
47. A computer program that, if executed by a computer, causes the computer to manage resources of a time-of-flight system to optimize time, and/or energy usage, and/or performance of the time-of-flight system based on sensor information, wherein the sensor information comprises time-of-flight measurements and/or measurements from one or more other sensors.
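As a non-normative illustration of the selective-cessation scheme in claims 13, 14, and 18, the following sketch simulates one emitter group. The function name, the arrival order of sensed dots, and the integer indexing of emitters are all hypothetical assumptions introduced for the example, not elements of the claims:

```python
def run_group(num_emitters: int,
              sensed_order: list[int],
              stop_after: int) -> tuple[list[bool], int]:
    """Simulate selective cessation within one emitter group.

    All emitters of the group commence emitting simultaneously (claim 13);
    an emitter ceases once its incident dot is sensed by the receiver array,
    and the receivers cease operating once `stop_after` emitters of the
    group have ceased (claim 14; per claim 18, `stop_after` may equal the
    total number of emitters in the group).
    """
    emitting = [True] * num_emitters   # all emitters start together
    ceased = 0
    sensed = 0
    for idx in sensed_order:           # incident dots in arrival order
        if not emitting[idx]:
            continue                   # dot already accounted for
        emitting[idx] = False          # sensed dot -> its emitter ceases
        ceased += 1
        sensed += 1
        if ceased >= stop_after:       # predefined number of emitters ceased
            break                      # -> receiver array ceases operating
    return emitting, sensed
```

In the spirit of claims 19 and 20, any emitters still marked as emitting when the receivers stop could carry over into a subsequent group rather than being discarded.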
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP24167788.9 | 2024-03-28 | ||
| EP24167788 | 2024-03-28 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025202122A1 true WO2025202122A1 (en) | 2025-10-02 |
Family
ID=90675209
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2025/057987 Pending WO2025202122A1 (en) | 2024-03-28 | 2025-03-24 | Electronic device, method and computer program |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025202122A1 (en) |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2019010320A1 (en) * | 2017-07-05 | 2019-01-10 | Ouster, Inc. | Light ranging device with electronically scanned emitter array and synchronized sensor array |
| US20200256993A1 (en) * | 2019-02-11 | 2020-08-13 | Apple Inc. | Depth sensing using a sparse array of pulsed beams |
| WO2021159226A1 (en) * | 2020-02-10 | 2021-08-19 | Hesai Technology Co., Ltd. | Adaptive emitter and receiver for lidar systems |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| EP3367661B1 (en) | Method and system for using light emissions by a depth-sensing camera to capture video images under low-light conditions | |
| CN113316755B (en) | Environmental model maintenance using event-based vision sensors | |
| US11172126B2 (en) | Methods for reducing power consumption of a 3D image capture system | |
| CN105190426B (en) | Time-of-flight sensor binning | |
| JP7094937B2 (en) | Built-in calibration of time-of-flight depth imaging system | |
| JP6816097B2 (en) | Methods and equipment for determining depth maps for images | |
| TWI460615B (en) | Dynamically reconfigurable pixel array for optical navigation | |
| CN113296114B (en) | DTOF depth image acquisition method and device, electronic equipment and medium | |
| EP3788402B1 (en) | Time of flight ranging with varying fields of emission | |
| JPWO2015001770A1 (en) | Motion sensor device having a plurality of light sources | |
| WO2020145035A1 (en) | Distance measurement device and distance measurement method | |
| US12032065B2 (en) | System and method for histogram binning for depth detection | |
| US10616561B2 (en) | Method and apparatus for generating a 3-D image | |
| EP3994487A1 (en) | Phase depth imaging using machine-learned depth ambiguity dealiasing | |
| US9313376B1 (en) | Dynamic depth power equalization | |
| TWI647661B (en) | Image depth sensing method and image depth sensing device | |
| Vales et al. | An IoT system for smart building combining multiple mmWave FMCW radars applied to people counting | |
| WO2025202122A1 (en) | Electronic device, method and computer program | |
| US20100289745A1 (en) | System and method for automatically adjusting light source drive current during optical navigation operation | |
| US11585936B2 (en) | Range imaging camera and range imaging method | |
| Slattery et al. | ADI ToF depth sensing technology: new and emerging applications in industrial, automotive markets, and more | |
| CN114697560B (en) | Active exposure method based on TOF imaging system and exposure time calculation method | |
| CN113075672B (en) | Ranging method and system, and computer-readable storage medium | |
| WO2024200575A1 (en) | Object tracking circuitry and object tracking method | |
| WO2025162875A1 (en) | System and method for 3d-imaging using reconfigurable tof sensor and peripheral sensor |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 25713367 Country of ref document: EP Kind code of ref document: A1 |