
WO2025128235A1 - Control of pixel density in imaging systems - Google Patents


Info

Publication number
WO2025128235A1
Authority
WO
WIPO (PCT)
Prior art keywords
lidar
signal
output signals
system output
field
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2024/054609
Other languages
French (fr)
Inventor
Mehdi Asghari
Nirmal Chindhu WARKE
Hidetoshi Utsumi
Shunsuke Konishi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honda Motor Co Ltd
SILC Technologies Inc
Original Assignee
Honda Motor Co Ltd
SILC Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 18/539,251 (published as US20250199132A1)
Application filed by Honda Motor Co Ltd, SILC Technologies Inc
Publication of WO2025128235A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/481 Constructional features, e.g. arrangements of optical elements
    • G01S7/4814 Constructional features, e.g. arrangements of optical elements of transmitters alone
    • G01S7/4815 Constructional features, e.g. arrangements of optical elements of transmitters alone using multiple transmitters
    • G01S7/4817 Constructional features, e.g. arrangements of optical elements relating to scanning
    • G01S7/4818 Constructional features, e.g. arrangements of optical elements using optical fibres
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/93 Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931 Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles

Definitions

  • the invention relates to imaging.
  • the invention relates to LIDAR systems.
  • Imaging systems such as LIDAR systems are being used in an increasing number of applications.
  • LIDAR systems generate LIDAR data for pixels in the LIDAR system’s field of view.
  • the objects that are being sought tend to be concentrated in one or more regions within the field of view.
  • the field of view and/or the objects can temporarily shift such that the objects are located outside of these regions in the field of view.
  • There is a need for a LIDAR system that can more efficiently generate LIDAR data for applications where the objects being sought tend to be concentrated within one or more locations within the LIDAR system’s field of view.
  • a LIDAR system includes an optical component assembly that concurrently outputs multiple system output signals in a field of view.
  • the system output signals carry the same wavelength channel.
  • the imaging system includes solid-state beam steerers that are each configured to steer one of the system output signals to multiple different pixels within the field of view.
  • the pixels are arranged such that a density of the pixels in the field of view is higher in a concentrated region of the field of view than in a diluted region of the field of view.
  • the optical component assembly is configured such that the location of the concentrated region of the field of view shifts within the field of view in response to a change in a wavelength of the wavelength channel carried by the system output signals.
  • a method of operating a system includes concurrently transmitting multiple system output signals in the field of view of a LIDAR system.
  • the system output signals are transmitted from an optical assembly in the LIDAR system and carry the same wavelength channel.
  • the method also includes operating a solid-state beam-steerer so as to steer each of the system output signals to multiple different pixels within the field of view such that a density of the pixels in the field of view is higher in a concentrated region of the field of view than in a diluted region of the field of view.
  • the method also includes shifting the location of the concentrated region of the field within the field of view.
  • a LIDAR system has a LIDAR chip that includes a switch and multiple alternate waveguides.
  • the switch is configured to direct an outgoing LIDAR signal to any one of multiple different alternate waveguides.
  • Each of the alternate waveguides terminates at a facet through which the outgoing LIDAR signal passes when directed to the alternate waveguide.
  • the facets are arranged such that the distance between adjacent pairs of the facets is different for different adjacent pairs of facets.
  • the LIDAR system includes a signal redirector that receives the outgoing LIDAR signal from any of the alternate waveguides and redirects the received outgoing LIDAR signal such that the direction that the outgoing LIDAR signal travels away from the signal redirector changes in response to a change in the alternate waveguide from which the signal redirector receives the outgoing LIDAR signal.
  • Figure 1 illustrates an imaging system that includes a chip with a photonic circuit.
  • Figure 2 is a schematic of a LIDAR system that includes multiple different cores on a chip.
  • Figure 3 is a schematic of a LIDAR system that includes multiple different cores on a chip.
  • Figure 4A is a schematic of a LIDAR system constructed according to Figure 2 where the chip has an array of alternate waveguide facets with a varying separation distance between the facets.
  • Figure 4B is a schematic of the LIDAR system of Figure 4A after a change in the wavelength of the system output signal.
  • Figure 4C is a schematic of the relationship between the LIDAR system shown in Figure 4A and the field of view for the LIDAR system.
  • Figure 4D is a sideview of the field of view of a LIDAR system at the maximum operational distance of the LIDAR system.
  • Figure 4E is the sideview shown in Figure 4D after shifting of system output signals within the field of view.
  • Figure 5A through Figure 5B illustrate an example of a light signal processor that is suitable for use as the light signal processor in a LIDAR system constructed according to Figure 1.
  • Figure 5A is a schematic of an example of a suitable optical-to-electrical assembly for use in the light signal processor.
  • Figure 5B provides a schematic of the relationship between electronics and the optical-to-electrical assembly of Figure 5A.
  • Figure 5C illustrates an example of the frequency versus time pattern for a system output signal transmitted from the imaging system.
  • Figure 6A through Figure 6C illustrate a self-driving car having a LIDAR system that transmits system output signals within the LIDAR system’s field of view.
  • Figure 6A illustrates the highest density of system output signals within the LIDAR system's field of view positioned to detect oncoming traffic or other obstacles.
  • Figure 6B illustrates the highest density of system output signals within the LIDAR system’s field of view passing over oncoming traffic as a result of a decline in a road.
  • Figure 6C illustrates the highest density of system output signals within the LIDAR system’s field of view shifted downward within the field of view so as to detect oncoming traffic.
  • Figure 6D illustrates shifting of the system output signals within the LIDAR system’s field of view in response to the road decline illustrated in Figure 6A through Figure 6C.
  • Figure 7 is a cross section of a silicon-on-insulator wafer.
  • Figure 8A and Figure 8B illustrate an example of an optical switch that includes cascaded Mach-Zehnder interferometers.
  • Figure 8A is a topview of the optical switch.
  • Figure 8B is a cross section of the optical switch shown in Figure 8A taken along the line labeled B in Figure 8A.
  • the LIDAR system concurrently outputs multiple system output signals in a field of view.
  • the imaging system includes one or more solid-state beam steerers that steer the system output signals to multiple different pixels within the field of view.
  • the pixels are arranged such that a density of the pixels in the field of view is higher in a concentrated region of the field of view than in a diluted region of the field of view.
  • the location of the concentrated region can be selected so the concentrated region can be aligned with the one or more locations where the objects that are being sought by the LIDAR system tend to be found.
  • the LIDAR system is more likely to detect the presence of the objects and/or to provide more resolution regarding the detected objects. Accordingly, the LIDAR system has a more efficient allocation of the system output signals within the field of view.
  • the LIDAR system can be configured to shift the location of the concentrated region of the field of view.
  • the LIDAR system shifts the location of the concentrated region of the field of view in response to output from one or more sensors.
  • the one or more sensors can be configured to provide output that indicates the presence of conditions where the objects tend to move outside of the concentrated region of the field of view.
  • shifting the concentrated region within the field of view can be done in response to conditions where the objects tend to move outside of the concentrated region of the field of view.
  • the concentrated region within the field of view can be shifted to a new location where the concentrated region is more likely to align with the objects. Shifting the concentrated region to the new location allows the LIDAR system to continue providing LIDAR data for these objects despite the movement of the objects outside of the original concentrated region location in the field of view.
  • Figure 1 is a schematic of a portion of a LIDAR system that includes a LIDAR chip.
  • Figure 1 includes a topview of a portion of the LIDAR chip 2.
  • the LIDAR chip includes a LIDAR core 4.
  • the LIDAR system also includes a light source 10 and electronics 62.
  • the light source 10 outputs an outgoing LIDAR signal that can be one of M different wavelength channels.
  • There are M wavelength channels and each of the wavelength channels is associated with a wavelength channel index m where m has a value from 1 to M.
  • Each of the M wavelength channels is at a different wavelength.
  • the electronics 62 can operate the light source 10 so as to select which of the M different wavelength channels is carried by the outgoing LIDAR signal and can switch the selection of the M different wavelength channels that are carried by the outgoing LIDAR signal. In some instances, the electronics 62 operate the light source 10 such that the outgoing LIDAR signal carries one, or substantially one, wavelength channel at a time. Suitable values for M include, but are not limited to, values greater than or equal to 2, 4, 8, or 16 and less than 32, 64, or 128. In some instances, the separation between adjacent wavelength channels is greater than 0.4 nm, 0.8 nm, or 1.2 nm and/or less than 5 nm, 10 nm, or 20 nm.
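The wavelength-channel grid described above can be sketched in a few lines. All numeric values here are assumptions chosen for illustration (M and the spacing fall inside the ranges quoted in the text; the 1550 nm base wavelength is not stated in the source):

```python
M = 8               # number of wavelength channels (illustrative; text allows 2 <= M < 128)
spacing_nm = 0.8    # separation between adjacent channels (text range: 0.4-20 nm)
base_nm = 1550.0    # assumed wavelength of channel m = 1; not given in the source

# wavelength channel index m runs from 1 to M; each channel is at a different wavelength
channels = {m: base_nm + (m - 1) * spacing_nm for m in range(1, M + 1)}
```

Switching the selection of the wavelength channel carried by the outgoing LIDAR signal then amounts to choosing a different index m from this grid.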
  • the LIDAR core 4 includes a photonic integrated circuit with a utility waveguide 12.
  • the utility waveguide 12 receives the outgoing LIDAR signal from the light source 10.
  • the utility waveguide 12 carries the outgoing LIDAR signal to a signal director 14.
  • the LIDAR system can include electronics 62 that operate the signal director 14.
  • the electronics 62 can include a director controller 15 that operates the signal director 14 so as to direct light from the light source output signal to any one of multiple different alternate waveguides 16.
  • Each of the alternate waveguides 16 can receive the outgoing LIDAR signal from the signal director 14.
  • the alternate waveguide 16 that receives the outgoing LIDAR signal serves as an active waveguide and carries the outgoing LIDAR signal to a port 18 through which the outgoing LIDAR signal can exit from the LIDAR chip and serve as an outbound LIDAR signal.
  • the alternate waveguides 16 terminate at a facet that serves as the port 18. Accordingly, the outgoing LIDAR signal is output from the active waveguide.
  • the light signals that result from the outgoing LIDAR signal being directed to the alternate waveguide 16 with alternate waveguide index i can be classified as light signals carrying channel (Cm,i) where m is the wavelength channel index and i is the alternate waveguide index. Accordingly, a light signal output from alternate waveguide index i and carrying wavelength channel m is carrying channel Cm,i.
  • the path of the outbound LIDAR signal that carries the channel with alternate waveguide index 2 is labeled C1-M,2 in Figure 1.
  • a LIDAR input signal returns to the LIDAR chip such that a LIDAR input signal carrying channel Cm,i enters the alternate waveguide 16 that is associated with the same alternate waveguide index i.
  • LIDAR input signals carrying channels with different alternate waveguide indices are received at different alternate waveguides.
  • the portion of the LIDAR input signal that enters an alternate waveguide 16 serves as an incoming LIDAR signal.
  • the alternate waveguide that receives the incoming LIDAR signal can guide an outgoing LIDAR signal while also guiding the incoming LIDAR signal in the opposite direction.
  • the alternate waveguide 16 that receives the incoming LIDAR signal carries the incoming LIDAR signal to the signal director 14.
  • the signal director 14 outputs the incoming LIDAR signal on the utility waveguide 12.
  • the alternate waveguide 16 that receives the incoming LIDAR signal carries the incoming LIDAR signal to a 2x2 splitter 24 that moves a portion of the incoming LIDAR signal from the alternate waveguide 16 onto a comparative waveguide 26 as a comparative signal.
  • the comparative signal includes light from the outgoing LIDAR signal that has exited from the imaging system, that has been reflected by an object located outside of the imaging system, and that has returned to the imaging system.
  • the comparative waveguide 26 carries the comparative signal to a signal processor 28 for further processing.
  • Suitable splitters 24 include, but are not limited to, optical couplers, Y-junctions, and MMIs. In some instances, the splitter 24 is configured such that the power of the incoming LIDAR signal is divided evenly or substantially evenly between the utility waveguide 12 and the comparative waveguide 26.
  • the alternate waveguide 16 also carries the outgoing LIDAR signal to the splitter 24.
  • the splitter 24 moves a portion of the outgoing LIDAR signal from the alternate waveguide 16 onto a reference waveguide 32 as a reference signal.
  • the reference waveguide 32 carries the reference signal to the signal processor 28 for further processing.
  • a signal power reducer can optionally be positioned along the reference waveguide 32 to reduce the power of the reference signal to reduce or prevent saturation of one or more light sensors included in the signal processor 28.
  • suitable signal power reducers include, but are not limited to, attenuators, including variable optical attenuators (VOAs), and light splitters combined with beam dumps.
  • the signal processor 28 combines the comparative signal with the reference signal to form a composite signal that carries LIDAR data for a sample region on the field of view. Accordingly, the composite signal can be processed so as to extract LIDAR data (radial velocity and/or distance between a LIDAR system and an object external to the LIDAR system) for the sample region.
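The text states that the composite signal can be processed to extract distance and radial velocity but does not give the arithmetic. As one plausible form, the sketch below shows the standard FMCW up-chirp/down-chirp extraction; the function name, sign convention, and all values are assumptions, not the patent's method:

```python
C_LIGHT = 299_792_458.0  # speed of light, m/s

def lidar_data(f_up_hz: float, f_down_hz: float,
               chirp_rate_hz_per_s: float, wavelength_m: float):
    """Return (distance_m, radial_velocity_m_s) from beat frequencies measured
    during an up-chirp data period and a down-chirp data period."""
    f_range = (f_up_hz + f_down_hz) / 2.0    # component due to round-trip delay
    f_doppler = (f_up_hz - f_down_hz) / 2.0  # component due to Doppler shift
    distance_m = C_LIGHT * f_range / (2.0 * chirp_rate_hz_per_s)
    radial_velocity_m_s = f_doppler * wavelength_m / 2.0
    return distance_m, radial_velocity_m_s
```

For example, with an assumed chirp rate of 1 THz/s and a 1550 nm carrier, an object at 75 m moving at 10 m/s produces a pair of beat frequencies from which the function recovers both quantities.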
  • the electronics 62 can include a light source controller 63.
  • the light source controller 63 can operate the light source such that the outgoing LIDAR signal, and accordingly a resulting system output signal, has a particular frequency versus time pattern. For instance, the light source controller 63 can operate the light source such that the outgoing LIDAR signal, and accordingly a system output signal, has different chirp rates during different data periods. Additionally, or alternately, the light source controller 63 can operate the light source such that the outgoing LIDAR signal carries the wavelength channel that is currently desired for operation of the LIDAR system.
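A frequency versus time pattern with different chirp rates during different data periods, as described above, can be sketched as follows. The triangular shape and every number here are illustrative assumptions, not values from the source:

```python
def frequency_at(t_s: float, f0_hz: float = 0.0,
                 period_s: float = 2e-6, rate_hz_per_s: float = 1e12) -> float:
    """Triangular chirp: frequency ramps up at +rate for the first data period
    of each cycle and back down at -rate for the second data period."""
    t = t_s % period_s
    half = period_s / 2.0
    if t < half:
        return f0_hz + rate_hz_per_s * t             # up-chirp data period
    return f0_hz + rate_hz_per_s * (period_s - t)    # down-chirp data period
```

Sampling this function over one cycle yields the two data periods with opposite chirp rates that the controller bullet describes.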
  • the LIDAR chip can optionally include a control branch for controlling the operation of the light source 10.
  • the control branch can provide a feedback loop that the light source controller 63 uses in operating the light source such that the outgoing LIDAR signal has the desired frequency versus time pattern.
  • the control branch includes a directional coupler 66 that moves a portion of the outgoing LIDAR signal from the utility waveguide 12 onto a control waveguide 68.
  • the coupled portion of the outgoing LIDAR signal serves as a tapped signal.
  • Although Figure 1 illustrates a directional coupler 66 moving the portion of the outgoing LIDAR signal onto the control waveguide 68, other signal taps can be used to move a portion of the outgoing LIDAR signal from the utility waveguide 12 onto the control waveguide 68. Examples of suitable signal taps include, but are not limited to, Y-junctions and MMIs.
  • the control waveguide 68 carries the tapped signal to a feedback system 70.
  • the feedback system 70 can include one or more light sensors (not shown) that convert light signals carried by the feedback system 70 to electrical signals that are output from the feedback system 70.
  • the light source controller 63 can receive the electrical signals output from the feedback system 70. During operation, the light source controller 63 can adjust the frequency of the outgoing LIDAR signal in response to the electrical signals output from the feedback system 70.
  • An example of a suitable construction and operation of the feedback system 70 and light source controller 63 is provided in U.S. Patent Application serial number 16/875,987, filed on 16 May 2020, entitled “Monitoring Signal Chirp in Outbound LIDAR Signals,” and incorporated herein in its entirety; and also in U.S. Patent Application serial number 17/244,869, filed on 29 April 2021, entitled “Reducing Size of LIDAR System Control Assemblies,” and incorporated herein in its entirety.
  • Although Figure 1 illustrates the electronics 62 as a component that is separate from the signal processor(s) 28, a portion of the electronics 62 can be included in each of the signal processor(s) 28.
  • a LIDAR system can include a LIDAR chip with one or more LIDAR cores 4.
  • Figure 2 illustrates a LIDAR chip that includes multiple different cores.
  • the cores are each labeled corek where k represents a core index k with a value from 1 to K.
  • Each of the LIDAR cores can be constructed as disclosed in the context of Figure 1 or can have an alternate construction.
  • Each of the LIDAR cores outputs a different outbound LIDAR signal.
  • the outbound LIDAR signal output from the core labeled corek carries LIDAR channel Sk,i,m where k represents the core index, m represents the wavelength channel index, and i represents the alternate waveguide index.
  • LIDAR channel Sk,i,m is a function of the wavelength channel index m, the alternate waveguide index i, and the core index k.
  • the outbound LIDAR signal carrying LIDAR channel Sk,i,m is output from corek, carries wavelength channel m, and includes light that was received by alternate waveguide index i and output from alternate waveguide index i.
  • the outbound LIDAR signal carrying LIDAR channel Sk,i,m is output from corek and carries channel Cm,i.
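The (core, waveguide, wavelength) indexing in the bullets above can be made concrete with a small sketch. Only the index structure comes from the text; the type and function names are hypothetical:

```python
from typing import NamedTuple

class LidarChannel(NamedTuple):
    """Channel Sk,i,m as indexed in the text: core k, alternate waveguide i,
    wavelength channel m."""
    core: int        # core index k, 1..K
    waveguide: int   # alternate waveguide index i, 1..N
    wavelength: int  # wavelength channel index m, 1..M

def chip_channel(s: LidarChannel) -> tuple[int, int]:
    """Channel Cm,i carried on an individual core: the core index k drops out."""
    return (s.wavelength, s.waveguide)

# e.g. the signal from core 1, alternate waveguide 2, wavelength channel 3
s = LidarChannel(core=1, waveguide=2, wavelength=3)
```

Here `chip_channel` expresses the last bullet above: a signal carrying Sk,i,m also carries Cm,i.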
  • the LIDAR system can include an optical component assembly 75 that receives the outbound LIDAR signal from each of the different cores and outputs system output signals that each includes, consists of, or consists essentially of light from a different one of the outbound LIDAR signals.
  • When the optical assembly includes active components such as movable mirrors, the active components can be operated by assembly electronics 280 so as to steer the system output signals to different sample regions in the LIDAR system’s field of view.
  • Figure 2 illustrates an optical component assembly 75 that optionally includes a signal redirector 76 that receives the outbound LIDAR signal from different cores.
  • the signal redirector 76 changes the direction that at least a portion of the outbound LIDAR signals are traveling.
  • Suitable signal redirectors 76 include, but are not limited to, lenses such as convex lenses, mirrors such as concave mirrors and combinations of these elements.
  • the optical assembly illustrated in Figure 2 also includes a wavelength chromatic disperser 77 that receives the outbound LIDAR signals.
  • the wavelength chromatic disperser 77 receives all or a portion of the outbound LIDAR signals from a signal redirector 76, from all or a portion of the LIDAR cores, or from other optical component(s) depending on the configuration of the optical component assembly 75.
  • the wavelength chromatic disperser 77 is configured to cause chromatic dispersion such that direction that an outbound LIDAR signal travels away from the wavelength chromatic disperser 77 is a function of the wavelength channel carried by the outbound LIDAR signal. For instance, the direction that an outbound LIDAR signal travels away from the wavelength chromatic disperser 77 changes in response to changes in the wavelength channel carried by the outbound LIDAR signal.
  • the outbound LIDAR signals carrying the LIDAR channels labeled S1,2,1-M in Figure 2 are each received at the same location or substantially the same location on the wavelength chromatic disperser 77.
  • the wavelength chromatic disperser 77 directs each of the outbound LIDAR signals such that when the outbound LIDAR signal carries different LIDAR channels, the outbound LIDAR signal travels away from the wavelength chromatic disperser 77 in different directions.
  • the outbound LIDAR signal labeled S1,2,1-M can carry the LIDAR channel S1,2,1, S1,2,2, or S1,2,3.
  • the wavelength chromatic disperser 77 operates on the outbound LIDAR signal labeled S1,2,1-M such that the direction that the outbound LIDAR signal travels away from the LIDAR system changes depending on whether the outbound LIDAR signal is carrying LIDAR channel S1,2,1, S1,2,2, or S1,2,3.
  • the electronics can scan each of the outbound LIDAR signals to different sample regions in a field of view by changing the wavelength channel carried by the outbound LIDAR signal.
  • the optical component assembly 75 is configured such that changing the wavelength channel carried by an outbound LIDAR signal does not change, or does not substantially change, the location on the wavelength chromatic disperser 77 where the outbound LIDAR signal is received.
  • an outbound LIDAR signal carrying different wavelength channels can exit from the wavelength chromatic disperser 77 at the same or substantially the same location or can exit from the wavelength chromatic disperser 77 from different locations.
  • Suitable wavelength chromatic dispersers 77 can include or consist of one or more dispersive media and/or have a wavelength dependent refractive index.
  • wavelength chromatic dispersers 77 include, but are not limited to, reflective diffraction gratings, transmissive diffraction gratings, and prisms. In some instances, the wavelength chromatic disperser 77 is configured to provide a level of dispersion greater than 0.005°/nm, 0.01°/nm, or 0.02°/nm and less than 0.04°/nm, 0.08°/nm, or 0.12°/nm.
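A quick back-of-the-envelope check of the dispersion figures quoted above: with M wavelength channels on a uniform grid and a disperser providing a given angular dispersion, the angular sweep available from wavelength tuning alone follows directly. The example values are assumptions picked from the ranges stated in the text:

```python
def angular_sweep_deg(m_channels: int, spacing_nm: float,
                      dispersion_deg_per_nm: float) -> float:
    """Angular range swept by tuning across all M channels: the wavelength span
    between channel 1 and channel M times the disperser's deg/nm dispersion."""
    wavelength_span_nm = (m_channels - 1) * spacing_nm
    return wavelength_span_nm * dispersion_deg_per_nm

# e.g. 16 channels at 1.2 nm spacing through a 0.02 deg/nm disperser
sweep = angular_sweep_deg(16, 1.2, 0.02)
```

With these assumed values the wavelength span is 18 nm, giving a sweep of 0.36°, which is why the wavelength channel primarily shifts pixels within a region rather than across the whole field of view.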
  • the electronics can scan each of the outbound LIDAR signals to different sample regions in the field of view by changing the alternate waveguide that receives the outgoing LIDAR signal.
  • Figure 2 illustrates an outbound LIDAR signal that carries LIDAR channels labeled S1,1,1-M and LIDAR channels labeled S1,2,1-M.
  • the outbound LIDAR signal carries LIDAR channels S1,1,1-M
  • the outbound LIDAR signal carries LIDAR channels S1,2,1-M
  • the outbound LIDAR signal is output from different alternate waveguides on the same core.
  • the change in the alternate waveguide that receives the outgoing LIDAR signal causes a change in the direction that the outbound LIDAR signal and the resulting system output signal travel away from the LIDAR system.
  • Figure 2 also illustrates that the change in direction occurs when the outbound LIDAR signal carries the same wavelength channel during the change in alternate waveguide and/or when the outbound LIDAR signal carries different wavelength channels during the change in alternate waveguide.
  • the electronics can scan each of the outbound LIDAR signals to different sample regions in a field of view by changing the alternate waveguide from which the outbound LIDAR signal originates. For instance, the electronics can scan the system output signal to different sample regions in a field of view by changing the alternate waveguide that receives the light included in the outbound LIDAR signal.
  • Figure 2 also illustrates that the outbound LIDAR signals from different cores travel away from the LIDAR system in different directions.
  • Figure 2 illustrates an outbound LIDAR signal that carries LIDAR channels labeled S1,1,1-M and an outbound LIDAR signal that carries LIDAR channels labeled SK,1,1-M.
  • the electronics can operate the signal directors 14 on different cores so as to change the alternate waveguide 16 that receives the outbound LIDAR signal and steer the resulting system output signal from each of the cores within the LIDAR system’s field of view. Accordingly, the electronics can operate the signal directors 14 on different cores so as to steer the system output signals to different sample regions within the core’s field of view. As a result, each of the signal directors 14 can operate as a solid-state beam steerer.
  • a suitable method of operating the signal directors 14 on different cores and/or the one or more beam steering components 78 so as to steer the system output signals to different sample regions within the LIDAR system’s field of view is disclosed in U.S. Patent Application serial number 17/580,623, filed on January 20, 2022, entitled “Imaging System Having Multiple Cores,” and incorporated herein in its entirety.
  • the LIDAR chip and/or the optical component assembly 75 can be constructed such that each of the LIDAR channels Sk,i,m is incident on the chromatic disperser 77 at a different location and/or at a different angle of incidence.
  • the LIDAR chip and/or the optical component assembly 75 can be constructed such that an outbound LIDAR signal carrying different LIDAR channels Sk,i,m is incident on the chromatic disperser 77 at different locations and/or at different angles of incidence, and outbound LIDAR signals carrying different LIDAR channels Sk,i,m are incident on the chromatic disperser 77 at different locations and/or at different angles of incidence.
  • This difference in incident locations and/or incident angles can provide the difference in directions that the different LIDAR channels Sk,i,m, and accordingly the different system output signals, travel away from the LIDAR system.
  • the LIDAR channels from different alternate waveguides can be incident on the chromatic disperser 77 at a different location and/or at a different angle of incidence as a result of the facets of alternate waveguides 16 on the same core being spaced apart as shown in Figure 1 and Figure 2 and/or as a result of the facets of the alternate waveguides 16 on different cores being spaced apart as shown in Figure 2.
  • each of the outbound LIDAR signals carrying a LIDAR channel from a different one of the alternate waveguides (Sk,1-N,m) would be incident on the chromatic disperser 77 at a different location.
  • with dispersers 77 such as prisms, the outbound LIDAR signal(s) being incident on the chromatic disperser 77 at different locations results in system output signals that carry different LIDAR channels traveling away from the LIDAR system in different directions.
  • the construction of the signal redirector 76 can be selected such that the LIDAR channels from different alternate waveguides (Sk,i,1-M) can be incident on the chromatic disperser 77 at a different location and/or at a different angle of incidence.
  • the signal redirector 76 in Figure 2 is a concave lens. The lens is positioned such that an outbound LIDAR signal transmitted from different alternate waveguides (Sk,i,1-M) on the same core is incident on the signal redirector 76 at different angles of incidence and/or outbound LIDAR signals from different cores are incident on the signal redirector 76 at different angles of incidence.
  • outbound LIDAR signals output from different alternate waveguides (Sk,i,1-M) on the same core each travel away from the signal redirector 76 in a different direction and/or outbound LIDAR signals output from different cores travel away from the signal redirector 76 in different directions.
  • Outbound LIDAR signal(s) traveling away from the signal redirector 76 in a different direction are incident on the chromatic disperser 77 at a different location and/or at a different angle of incidence.
  • the different locations and/or different angles of incidence of the outbound LIDAR signals on the disperser 77 provide system output signals that can travel away from the LIDAR system in different directions. In some instances, the system output signals travel away from the LIDAR system in non-parallel directions.
  • the LIDAR system can provide solid-state steering (steering without moving parts) of system output signals that carry the same wavelength channel in K*N different directions.
  • the LIDAR system concurrently outputs K system output signals where one system output signal is output from each of the cores and the system output signals from different cores each carries the same wavelength channel. In these instances, each of the K system output signals can be steered in N different directions.
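The solid-state steering described above can be summarized with a small sketch. This is a hypothetical illustration, not the patent's implementation: it simply enumerates the K*N (core, alternate waveguide) selections, each of which corresponds to a distinct output direction for a fixed wavelength channel. The function name and tuple representation are assumptions.

```python
# Hypothetical sketch: enumerating the K*N solid-state steering states.
# K cores each output one system output signal; selecting one of N
# alternate waveguides per core steers each signal to a different direction.

def solid_state_states(num_cores: int, num_alternates: int):
    """Yield every (core, alternate_waveguide) steering state."""
    for core in range(num_cores):
        for alt in range(num_alternates):
            yield (core, alt)

states = list(solid_state_states(4, 4))
assert len(states) == 4 * 4  # K*N = 16 distinct steering states
```

With K = 4 and N = 4 this gives the sixteen solid-state sample regions discussed in the context of Figure 4D.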
  • the outbound LIDAR signals that exit from the wavelength chromatic disperser 77 can serve as system output signals for the LIDAR system; however, the optical component assembly 75 can optionally include other optical components.
  • Figure 2 illustrates the optical component assembly 75 including one or more beam steering components 78 that receive the outbound LIDAR signals output from the wavelength chromatic disperser 77.
  • the portion of the outbound LIDAR signals output from one or more beam steering components 78 serve as the system output signals for the LIDAR system.
  • the electronics can operate the one or more beam steering components 78 so as to steer each of the system output signals to different sample regions in the field of view.
  • the one or more beam steering components 78 can be configured such that the electronics can steer the system output signals in one dimension or in two dimensions.
  • the one or more beam steering components 78 can function as a beam-steering mechanism that is operated by the electronics so as to steer the system output signals within the field of view of the LIDAR system.
  • the one or more system output signals output by the LIDAR system can be steered within the LIDAR system’s field of view by operating the one or more beam steering components 78 in combination with switching the wavelength channel carried by all or a portion of the system output signals and/or switching the selection of alternate waveguides that output the system output signals.
  • Suitable beam steering components 78 include, but are not limited to, movable mirrors, polygon mirrors, MEMS mirrors, optical phased arrays (OPAs), optical gratings, and actuated optical gratings.
  • the signal redirector 76, wavelength chromatic disperser 77, and/or the one or more beam steering components 78 are configured to operate on the outbound LIDAR signals such that the system output signals are collimated or substantially collimated as they travel away from the LIDAR system.
  • the LIDAR system can include one or more collimating optical components (not illustrated) that operate on the outbound LIDAR signals, and/or the system output signals such that the system output signals are collimated or substantially collimated as they travel away from the LIDAR system.
  • the system output signals can be reflected by an object located outside of the LIDAR system. All or a portion of the reflected light from a system output signal can return to the LIDAR system as a system return signal.
  • the LIDAR system includes one or more beam steering components 78
  • each of the system return signals is received at the one or more beam steering components 78.
  • the one or more beam steering components 78 output at least a portion of each of the system return signals as a returned signal.
  • the returned signals are each received at the chromatic disperser 77.
  • each of the system return signals can serve as one of the returned signals received at the chromatic disperser 77.
  • the chromatic disperser 77 directs each returned signal to the one or more signal redirectors 76.
  • the one or more signal redirectors 76 outputs at least a portion of each one of the returned signals as a LIDAR input signal.
  • Each of the different LIDAR input signals is received by one of the alternate waveguides on a different one of the cores 4.
  • Each of the LIDAR input signals includes or consists of light from the outbound LIDAR signal that was output from the core that receives the LIDAR input signal.
  • the LIDAR input signal received at an alternate waveguide includes or consists of the light from the outbound LIDAR signal and system output signal that was output from the same alternate waveguide.
  • the optical component assembly 75 can have configurations other than the configuration shown in Figure 2.
  • the one or more beam steering components 78 can be positioned between the signal redirector 76 and the LIDAR chip.
  • the optical component assembly 75 can include optical components that are not illustrated.
  • the optical component assembly 75 can include one or more lenses configured to increase collimation of the outbound LIDAR signals and/or other signals derived from the outbound LIDAR signals and/or that include light from the outbound LIDAR signals.
  • Although the light source 10 is shown as being positioned off the LIDAR chip, all or a portion of the light source 10 can be located on the LIDAR chip.
  • Figure 3 illustrates an example of a light source 10 used in conjunction with the LIDAR system of Figure 2.
  • the light source 10 includes multiple laser sources 80.
  • Each of the laser sources 80 is configured to output a wavelength channel signal on a source waveguide 82.
  • Each of the source waveguides 82 carries a wavelength channel signal to a signal mixer 84 that combines the wavelength channel signals so as to form a light signal that is received on a channel waveguide 85.
  • the light signal mixer 84 can be a wavelength dependent multiplexer including, but not limited to, an Arrayed Waveguide Grating (AWG) multiplexer, and an echelle grating multiplexer.
  • the light signal mixer 84 can also be a wavelength independent mixer including, but not limited to, cascaded Y-junctions, cascaded MMI splitters, and a star coupler.
  • a light signal splitter 86 receives the light signal from the channel waveguide 85.
  • the light signal splitter 86 is configured to divide the light signal among multiple core waveguides 87.
  • the portion of the light signal received by a core waveguide 87 can serve as an outgoing LIDAR signal precursor.
  • Each of the core waveguides 87 carries one of the outgoing LIDAR signal precursors to the utility waveguide 12 on a different one of the cores 4.
  • the portion of the outgoing LIDAR signal precursor received by a utility waveguide 12 serves as the outgoing LIDAR signal received by the utility waveguide 12.
  • the light signal splitter 86 can be a wavelength independent splitter including, but not limited to, cascaded Y-junctions, cascaded MMI splitters, and a star coupler.
  • the outgoing LIDAR signal, the outbound LIDAR signal, and the system output signal each carries light from one of the wavelength channel signals. Since each of the wavelength channel signals carries one of the wavelength channels, the electronics can operate the light source 10 such that the outgoing LIDAR signal received by the utility waveguides 12 of the different cores carries one of the wavelength channels. For instance, the electronics can operate the laser sources 80 independently such that only one of the laser sources 80 outputs a wavelength channel signal while the other laser sources 80 do not output a wavelength channel signal. As an example, the electronics can turn on the laser source 80 that outputs the desired wavelength channel signal and turn off the laser source(s) 80 that do not output the desired wavelength channel signal.
  • the light source controller 63 can apply an electrical current through the gain element or laser cavity in one of the laser sources 80 so as to cause that laser source to output a wavelength channel signal while refraining from applying an electrical current through the gain element or laser cavity in the one or more remaining laser source(s) 80 so they do not output a wavelength channel signal.
  • the outgoing LIDAR signal received by the utility waveguides 12 of different cores carries one of the wavelength channels.
  • the electronics can also operate the laser source(s) 80 so as to change the wavelength channel that is present in the outgoing LIDAR signals received by the cores. For instance, the light source controller 63 can change the laser source to which the electrical current is applied.
  • the light source to which the electrical current is applied can be the light source that outputs the wavelength channel signal that carries the wavelength channel that is currently desired for the outgoing LIDAR signals and accordingly the system output signals.
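As a hypothetical sketch of the channel-switching behavior described above, the controller below drives current through exactly one laser source at a time. The class name, method name, and current value are illustrative assumptions, not the patent's actual electronics.

```python
# Hypothetical sketch of the light source controller selecting a wavelength
# channel by driving current through exactly one laser source's gain element.

class LightSourceController:
    def __init__(self, num_lasers: int):
        # drive current (mA, assumed unit) applied to each laser's gain element
        self.drive_current = [0.0] * num_lasers

    def select_channel(self, channel: int, current_ma: float = 100.0):
        """Turn on only the laser that emits the desired wavelength channel."""
        self.drive_current = [0.0] * len(self.drive_current)
        self.drive_current[channel] = current_ma

ctrl = LightSourceController(num_lasers=4)
ctrl.select_channel(2)
# exactly one laser is driven, so the outgoing LIDAR signal carries one channel
assert sum(1 for i in ctrl.drive_current if i > 0) == 1
```

Switching the channel is then just another `select_channel` call, mirroring the controller changing which laser source receives current.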
  • the light source 10 can optionally include one or more modulators 90 that are each positioned so as to modulate one of the wavelength channel signals.
  • the light source 10 can optionally include one or more modulators 90 positioned along each of the source waveguides 82.
  • the light source controller 63 can operate each of the modulators 90 so as to allow a wavelength channel signal carried in a source waveguide 82 to pass the modulator 90 without attenuation from the modulator or such that the wavelength channel signal carried in a source waveguide 82 is attenuated by the modulator.
  • the attenuation can be sufficient that the attenuated wavelength channel is not substantially present in the channel waveguide 85.
  • the attenuation can be sufficient that the attenuated wavelength channel is not substantially present in the outgoing LIDAR signals output from the light source and is accordingly not substantially present in the system output signals output from the LIDAR system.
  • the light source controller 63 can keep the laser sources that generate the needed channel wavelengths “on” and also operate the one or more modulators 90 so the outgoing LIDAR signals carry the currently desired wavelength channel.
  • the light source controller 63 can keep the laser sources that generate the channel wavelengths that will be needed “on” while operating the one or more modulators 90 so the system output signal(s) carry the currently desired wavelength channel.
  • the source controller 63 can operate laser sources 80 that generate channel wavelengths λ1-λM such that each of these laser sources 80 concurrently outputs a wavelength channel signal and can operate the modulators 90 such that the wavelength channel signal that carries wavelength channel λ2 passes the associated modulator 90 but the wavelength channel signals carrying λ1 and λ3 through λM are attenuated such that wavelength channels λ1 and λ3 through λM are not substantially present in the resulting outgoing LIDAR signals and are accordingly not substantially present in the resulting system output signals.
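The modulator-based channel selection described above can be sketched as follows. This is a hedged illustration: the function name and the boolean pass/attenuate model are assumptions standing in for whatever drive signals the light source controller 63 actually applies to the modulators 90.

```python
# Hypothetical sketch: all M lasers stay "on" while the per-channel modulators
# pass only the currently desired wavelength channel and attenuate the rest.

def modulator_settings(num_channels: int, desired: int):
    """Return per-modulator flags (True = pass unattenuated, False = attenuate)."""
    return [ch == desired for ch in range(num_channels)]

# Pass only the second wavelength channel (index 1), attenuate the others.
settings = modulator_settings(num_channels=4, desired=1)
assert settings == [False, True, False, False]
```

Because the lasers remain on, switching channels only changes the modulator states, which can be faster than turning laser sources on and off.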
  • Suitable modulators 90 include, but are not limited to, Variable Optical Attenuators (VOAs) and Mach-Zehnder modulators.
  • An example of a suitable optical attenuator can be found in U.S. Patent Application serial number 17/396,616, filed on August 6, 2021, entitled “Carrier Injector Having Increased Compatibility,” and incorporated herein in its entirety.
  • Suitable waveguides for use as the source waveguide 82, channel waveguide 85, and/or the core waveguides 87 include, but are not limited to optical fibers and planar optical waveguides.
  • Although Figure 3 illustrates the light source 10 as separate from the LIDAR chip, all or a portion of the light source 10 can be positioned on the LIDAR chip and/or integrated into the LIDAR chip.
  • suitable waveguides for use as the source waveguide 82, channel waveguide 85, and/or the core waveguides 87 also include, but are not limited to, rib waveguides, ridge waveguides, and buried waveguides.
  • All or a portion of the electronics 62 associated with different cores can optionally be consolidated in the assembly electronics 280 illustrated in Figure 2.
  • the light source controller 63 can be included in the assembly electronics 280 rather than in the electronics 62 associated with each of the individual cores.
  • the combination of the electronics 62 for each of the one or more cores and the assembly electronics 280 serve as the electronics for the LIDAR system.
  • the LIDAR system electronics can operate the LIDAR system and generate the LIDAR data for the LIDAR system.
  • the assembly electronics 280 can be positioned on the LIDAR chip or can be external to the LIDAR chip.
  • the assembly electronics 280 can collect or generate the LIDAR data results from different cores, and/or can coordinate the LIDAR data results from different cores so as to assemble LIDAR data results for the LIDAR system’s field of view.
  • the assembly electronics 280 can stitch together LIDAR data results for the fields of view of different cores so as to form LIDAR data results for the LIDAR system’s field of view.
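The stitching of per-core LIDAR data results can be illustrated with a minimal sketch. The data layout (tuples of sample-region id, range, and radial velocity) is an assumption made for illustration; the patent does not specify a data format.

```python
# Hypothetical sketch of the assembly electronics stitching per-core LIDAR
# data results into results for the LIDAR system's full field of view.
# Each per-core result is modeled as a list of
# (sample_region_id, range_m, radial_velocity_mps) tuples.

def stitch_fields_of_view(per_core_results):
    """Merge per-core results, keyed by a globally unique sample-region id."""
    field_of_view = {}
    for core_index, results in enumerate(per_core_results):
        for region_id, rng, vel in results:
            field_of_view[(core_index, region_id)] = (rng, vel)
    return field_of_view

fov = stitch_fields_of_view([
    [(0, 12.5, 0.1), (1, 13.0, 0.0)],  # core 0 results
    [(0, 40.2, -1.2)],                  # core 1 results
])
assert len(fov) == 3  # three sample regions across the combined field of view
```

Keying by (core, region) keeps the per-core fields of view distinct while presenting one combined result set, mirroring the stitching role of the assembly electronics 280.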
  • Suitable numbers of cores on the LIDAR chip and suitable values for K include, but are not limited to, numbers greater than or equal to 2, 4, or 6 and/or less than 32, 64, or 128.
  • Figure 2 illustrates each of the cores associated with the electronics 62 that operate the core and/or generate the LIDAR data for the core.
  • the electronics 62 for different cores and the assembly electronics 280 need not be separate and can be incorporated into the same electronics for the system as shown in Figure 3.
  • the distance between the facets of alternate waveguides 16 can be selected to achieve a particular pattern for the density of pixels in the field of view of the LIDAR system.
  • Figure 4A illustrates a LIDAR system constructed according to Figure 2.
  • the facets of the alternate waveguides 16 are arranged in an array.
  • the distance between the facets of two adjacent alternate waveguides 16 is labeled d.
  • the distance d represents a center-to-center distance.
  • the distances between the facets of the alternate waveguides 16 is selected such that the density of the facets is higher in the middle of the facet array than toward the edges of the array.
  • Figure 4A also illustrates the system output signals output from the optical component assembly 75 as a result of the illustrated configuration of alternate waveguides 16.
  • the increased density of the facets in the middle of the facet array can increase the density of the system output signals near the middle of the array of system output signals output by the LIDAR system as shown in Figure 4A.
  • the increased density of the system output signals near the middle of the array of system output signals output by the LIDAR system can produce an increased density of pixels in the center of the field of view.
  • the change in wavelength channel causes a shift in the direction that the system output signals travel away from the LIDAR system.
  • Figure 4C is a schematic of the relationship between the field of view and the LIDAR system shown in Figure 4A and Figure 4B.
  • the field of view is represented by the dashed lines that extend from the LIDAR system to an imaginary surface within the field of view.
  • the imaginary surface is positioned at a maximum operational distance (labeled dM) from the LIDAR system.
  • the maximum operational distance can generally be considered the maximum distance for which the LIDAR system is configured to provide reliable LIDAR data.
  • the imaginary surface can have a curved shape due to the fixed nature of the maximum operational distance; however, a planar surface is shown to simplify the following discussion.
  • Figure 4B illustrates the LIDAR system outputting a single system output signal.
  • a portion of a sample region illuminated by the system output signal is illustrated by the polygon on the plane of Figure 4B.
  • the electronics generate LIDAR data in a series of cycles by sequentially illuminating different sample regions in the field of view for the LIDAR system.
  • LIDAR data can be generated for each of the sample regions. For instance, a sample region is the portion of the field of view that is illuminated during the cycle that is used to generate the LIDAR data for the sample region.
  • each of the LIDAR data results is associated with one of the cycles and one of the sample regions.
  • the LIDAR data results can stitch together the LIDAR data results from multiple different sample regions to generate LIDAR data results for the field of view.
  • the electronics can operate the one or more beam steering mechanisms to steer the system output signal during the data period(s) associated with the sample region.
  • the one or more beam steering mechanisms can scan the system output signal in the direction of the arrow labeled A for the duration of a cycle. This scan can cause the system output signal to illuminate the length of the polygon labeled ct during the cycle.
  • Although the sample region is shown as two-dimensional in Figure 4C, the sample region is three-dimensional and can extend from the rectangle on the illustrated plane back to the LIDAR system. As a result, each sample region can serve as a three-dimensional pixel within the field of view.
  • Figure 4D is a sideview of an example of the imaginary plane from Figure 4C.
  • the imaginary plane can be a two-dimensional representation of the field of view of the LIDAR system.
  • the left side of the field of view contains a column of sample regions illustrated by dashed lines and labeled “solid-state.”
  • the sample regions in the column labeled “solid-state” can result from scanning the field using the signal director 14 but not using the one or more beam steering mechanisms to steer the system output signals.
  • the sample regions in the column labeled “solid-state” are a result of solid-state scanning of the system output signals.
  • the LIDAR system can concurrently output four different system output signals that are each directed to a different sample region.
  • the LIDAR system can concurrently output four different system output signals that are each directed to one of the sample regions illustrated by a combination of solid and dashed lines in the column labeled “solid-state.”
  • the sample region to which each of the four system output signals is directed can be changed by changing the alternate waveguide that receives the outgoing LIDAR signal through operation of the signal director 14.
  • the LIDAR system can illuminate the sixteen (K*N) different sample regions in the column labeled “solid-state” by operation of the signal director 14 so as to change the alternate waveguide from which the system output signals originate.
  • the electronics can also operate the one or more beam steering mechanisms to steer the system output signals to the sample regions illustrated by the solid lines in Figure 4D.
  • Figure 4D also shows an axis labeled x1 and an axis labeled x2.
  • the movement of the system output signals in the direction of the axis labeled x1 can be achieved through the use of the signal director 14.
  • the movement of the system output signals in the direction of the axis labeled x2 can be achieved through the use of the one or more beam steering mechanisms.
  • the axis labeled x2 can also represent time.
  • the LIDAR system can be constructed such that the field of view has one or more concentrated regions and one or more diluted regions.
  • the concentration of the sample regions in each of the one or more concentrated regions is higher than the concentration of sample regions in each of the one or more diluted regions.
  • the field of view shown in Figure 4D includes a concentrated region (labeled hd) between diluted regions (labeled ld).
  • the concentrated region results from the configuration of alternate waveguides disclosed in the context of Figure 4A.
  • the increased density of the sample regions in the concentrated region can be a result of the increased density of the facets in the middle of the facet array and the resulting increase in the density of the system output signals near the middle of the array of system output signals output by the LIDAR system as shown in Figure 4A.
  • the LIDAR system can shift the location of the one or more concentrated regions within the field of view.
  • the wavelength channel is held constant during the scanning of the sample regions shown in Figure 4D.
  • Figure 4E illustrates the result of the LIDAR system scanning the same field of view while the wavelength channel is held constant at a different wavelength channel.
  • A comparison of Figure 4D and Figure 4E shows that the sample regions have shifted lower in the field of view as a result of the change in the wavelength channel carried by the system output signals.
  • the shift in the location of the sample regions is also evident from Figure 4B.
  • the change in wavelength channel carried by each of the system output signal shifts each of the system output signals in the same direction.
  • the shift in the locations of sample regions also shifts the location where the highest density of sample regions can be found within the field of view. Accordingly, the electronics can tune the location of the one or more concentrated regions within the field of view.
  • Figure 4A illustrates the density of the facets being higher in the middle of the facet array than toward the edges of the array
  • the facets of the alternate waveguides 16 can be arranged in other configurations.
  • the density of the facets can be lower in the middle of the facet array than toward one or more edges of the array.
  • the density of the facets can be higher at one or more edges of the array and decrease moving toward the opposing edge of the array.
  • the closest pair of adjacent alternate waveguide facets 16 or one of the closest pairs of adjacent alternate waveguide facets 16 can serve as reference facets.
  • the distance between the reference facets can serve as a reference distance.
  • the facets can be arranged such that the distance between adjacent facets (d) becomes larger or stays the same for each adjacent pair starting at the reference facets and moving toward one or both ends of the array and the distance between adjacent facets (d) becomes larger for at least a portion of the adjacent pairs starting at the reference facets and moving toward one or both ends of the array.
  • the distance between adjacent facets (d) can increase linearly or non-linearly as a function of distance for each adjacent pair starting at the reference facets and moving toward one or both ends of the array.
  • the facets of the alternate waveguides 16 in the array are arranged such that the largest distance between adjacent facets (d) is greater than or equal to 1.5, 2, or 4 and less than 5, 10, or 20 times the reference distance. Additionally, or alternately, the facets of the alternate waveguides 16 can be arranged such that the distance between adjacent facets for all or a portion of the adjacent pairs of facets in the array are greater than 3 μm, 5 μm, or 10 μm, and less than 50 μm, 500 μm, or 1000 μm.
  • the distance between adjacent facets changes such that there are greater than or equal to 3, 4, N/8, N/4, or (N-1)/2 different distances between the adjacent pairs in the array.
  • the distance between adjacent facets is selected such that a first portion of the adjacent pairs have a distance between the adjacent pair that is more than 1.5, 2, or 2.5 times the reference distance and less than 3, 4, or 5 times the reference distance and a second portion of the adjacent pairs have a distance between the adjacent pair that is more than 5, 6, or 7 times the reference distance and less than 8, 9, or 10 times the reference distance.
  • the distance between adjacent facets is selected such that a first portion of the adjacent pairs have a distance that is less than or equal to 1.5, 2, or 2.5 times the reference distance, a second portion of the adjacent pairs have a distance that is more than 1.5, 2, or 2.5 times the reference distance and less than 3, 4, or 5 times the reference distance, and a third portion of the adjacent pairs have a distance that is more than 5, 6, or 7 times the reference distance.
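One way to realize the spacing rules above is sketched below: facet positions are generated so the center-to-center distance equals the reference distance at the array center and grows toward both edges, giving a higher facet density (and pixel density) mid-array. The linear growth law, function name, and numbers are illustrative assumptions.

```python
# Hypothetical sketch: facet positions whose center-to-center spacing grows
# away from a pair of reference facets at the array center.

def facet_positions(n_pairs: int, reference_um: float, growth_um: float):
    """Return facet positions (um) mirrored about the array center."""
    positions = [0.0]
    spacing = reference_um
    for _ in range(n_pairs):
        positions.append(positions[-1] + spacing)
        spacing += growth_um  # spacing increases toward the array edge
    # mirror the positions to the other side of the reference facet
    return sorted([-p for p in positions[1:]] + positions)

pos = facet_positions(n_pairs=3, reference_um=5.0, growth_um=5.0)
gaps = [round(b - a, 6) for a, b in zip(pos, pos[1:])]
assert gaps == [15.0, 10.0, 5.0, 5.0, 10.0, 15.0]  # densest at the center
```

The resulting gap sequence is monotone non-decreasing from the reference pair outward, matching the arrangement rule stated above.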
  • Figure 4D and Figure 4E illustrate the density of the sample regions being higher in the middle of the field of view than the density of the sample regions along two edges of the field of view
  • the facets of the alternate waveguides 16 can be arranged to provide the distribution of sample regions with other patterns.
  • the facets of the alternate waveguides 16 can be arranged so the density of the sample regions is lower in the middle of the field of view than the density of the sample regions along two edges of the field of view.
  • the density of the facets can be arranged so the density of the sample regions is higher along one edge of the field of view and decreases moving toward the opposing edge of the field.
  • the separation distance between the sample regions in the column labeled “solid-state” is at least partially a function of the divergence between adjacent system output signals carrying the same wavelength channel.
  • An example of the divergence between adjacent system output signals carrying the same wavelength channel is labeled φa in Figure 4A (signal divergence).
  • the signal divergence can be measured relative to the center ray of the system output signals.
  • the system output signals with the smallest signal divergence can serve as reference output signals.
  • the signal divergence for reference output signals can serve as a reference divergence.
  • the facets of the alternate waveguides 16 and the optical component assembly 75 can be configured such that the signal divergence becomes larger or stays the same for each pair of adjacent system output signals starting at the reference output signals and moving toward one or both edges of the field of view and the signal divergence becomes larger for at least a portion of the adjacent system output signals starting at the reference output signals and moving toward one or both edges of the field of view.
  • the largest signal divergence between adjacent system output signals is greater than or equal to 2, 5, or 10 and less than 20, 50, or 100 times the reference divergence.
  • the signal divergence between adjacent system output signals for all or a portion of the system output signals carrying the same channel can be greater than 0.01°, 0.1°, or 0.25° and less than 0.5°, 1°, or 2°.
  • a first portion of the adjacent system output signals each has a signal divergence that is more than 1.25, 2, or 2.5 times the reference divergence and less than 3, 4, or 5 times the reference divergence and a second portion of the adjacent system output signals each has a signal divergence that is more than 3, 6, or 7 times the reference divergence and/or less than 8, 9, or 10 times the reference divergence.
  • the degree of shift in the location of the sample regions within the field of view that occurs in response to the change in the wavelength channel carried by the system output signals is at least partially a function of the divergence between system output signals that originate from the same alternate waveguide but carry adjacent wavelength channels.
  • An example of the divergence between system output signals that originate from the same alternate waveguide but carry adjacent wavelength channels is labeled φ in Figure 4B. In some instances, the divergence between system output signals that originate from the same alternate waveguide but carry adjacent wavelength channels is greater than 1°, or 5° and less than 10° or 20°.
  • the distance between adjacent sample regions that result from solid-state scanning is labeled ds in Figure 4D and is a measure of the distance between the sample regions in the column labeled solid-state at the maximum operational distance.
  • the distance can be measured from the center ray of the system output signal that illuminates each of the sample regions.
  • the closest pair of adjacent sample regions or one of the closest pairs of adjacent sample regions can serve as reference sample regions.
  • the distance between the reference sample regions can serve as a reference sample region distance.
  • the facets and the optical component assembly 75 can be configured such that the distance between adjacent sample regions becomes larger or stays the same for each pair of adjacent sample regions starting at the reference sample regions and moving toward one or both ends of the field of view and the distance between adjacent sample regions (ds) becomes larger for at least a portion of the adjacent sample regions starting at the reference sample regions and moving toward one or both ends of the array.
  • the largest distance between adjacent sample regions (ds) is greater than or equal to 1.1, 2, or 2.5 and less than 3, 4, or 9 times the reference sample region distance.
  • the facets of the alternate waveguides 16 can be arranged such that the distance between adjacent sample regions for all or a portion of the adjacent pairs of sample regions in the array are greater than 1 cm, 50 cm, or 1 m, and less than 2 m, 5 m, or 10 m.
  • there can be N*K different sample regions that result from solid-state scanning of the field of view. In some instances, there are greater than or equal to 3, 4, N*K/8, N*K/4, or ((N*K)-1)/2 different distances between the adjacent sample regions (ds). In some instances, a first portion of the adjacent sample regions each has a distance between the adjacent pair that is more than 1.2, 2, or 2.5 times the reference sample region distance and less than 3, 4, or 5 times the reference sample region distance and a second portion of the adjacent pairs each has a distance between the adjacent pair that is more than 3, 6, or 7 times the reference sample region distance and/or less than 8, 9, or 10 times the reference sample region distance.
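As a hedged worked example of how a divergence angle maps to a sample-region separation ds at the maximum operational distance, the geometry below assumes the separation is measured between center rays on the imaginary surface at distance dM; the function name and specific numbers are illustrative.

```python
# Hypothetical worked example: separation between adjacent solid-state
# sample regions at the maximum operational distance dM, given the signal
# divergence between their center rays.
import math

def sample_region_separation(d_max_m: float, divergence_deg: float) -> float:
    """Separation (m) of adjacent center rays after traveling d_max_m."""
    return 2.0 * d_max_m * math.tan(math.radians(divergence_deg) / 2.0)

# e.g. a 0.25 deg divergence at a 200 m maximum operational distance
ds = sample_region_separation(d_max_m=200.0, divergence_deg=0.25)
assert 0.8 < ds < 0.9  # roughly 0.87 m between adjacent sample regions
```

The same geometry explains why unequal facet spacings (unequal divergences) produce unequal sample-region distances ds across the field of view.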
  • Figure 5A and Figure 5B illustrate an example of a light signal processor that is suitable for use as the light signal processor 28 in a LIDAR system constructed according to Figure 1.
  • the light signal processor includes an optical-to-electrical assembly configured to convert the light signals to electrical signals.
  • Figure 5A is a schematic of an example of a suitable optical-to-electrical assembly that includes a first splitter 200 that divides the comparative signal received from the comparative waveguide 26 onto a first comparative waveguide 204 and a second comparative waveguide 206.
  • the first comparative waveguide 204 carries a first portion of the comparative signal to a light combiner 211.
  • the second comparative waveguide 206 carries a second portion of the comparative signal to a second light combiner 212.
  • the light signal processor of Figure 5A also includes a second splitter 202 that divides the reference signal received from the reference waveguide 32 onto a first reference waveguide 210 and a second reference waveguide 208.
  • the first reference waveguide 210 carries a first portion of the reference signal to the light combiner 211.
  • the second reference waveguide 208 carries a second portion of the reference signal to the second light combiner 212.
  • the second light combiner 212 combines the second portion of the comparative signal and the second portion of the reference signal into a second composite signal. Due to the difference in frequencies between the second portion of the comparative signal and the second portion of the reference signal, the second composite signal includes beating between the second portion of the comparative signal and the second portion of the reference signal.
  • the first composite signal and the second composite signal are each an example of a composite signal.
  • the second light combiner 212 also splits the resulting second composite signal onto a first auxiliary detector waveguide 214 and a second auxiliary detector waveguide 216.
  • the first auxiliary detector waveguide 214 carries a first portion of the second composite signal to a first auxiliary light sensor 218 that converts the first portion of the second composite signal to a first auxiliary electrical signal.
  • the second auxiliary detector waveguide 216 carries a second portion of the second composite signal to a second auxiliary light sensor 220 that converts the second portion of the second composite signal to a second auxiliary electrical signal.
  • suitable light sensors include germanium photodiodes (PDs), and avalanche photodiodes (APDs).
  • the second light combiner 212 splits the second composite signal such that the portion of the comparative signal (i.e. the portion of the second portion of the comparative signal) included in the first portion of the second composite signal is phase shifted by 180° relative to the portion of the comparative signal (i.e. the portion of the second portion of the comparative signal) in the second portion of the second composite signal but the portion of the reference signal (i.e. the portion of the second portion of the reference signal) in the second portion of the second composite signal is not phase shifted relative to the portion of the reference signal (i.e. the portion of the second portion of the reference signal) in the first portion of the second composite signal.
  • the second light combiner 212 splits the second composite signal such that the portion of the reference signal (i.e. the portion of the second portion of the reference signal) in the first portion of the second composite signal is phase shifted by 180° relative to the portion of the reference signal (i.e. the portion of the second portion of the reference signal) in the second portion of the second composite signal but the portion of the comparative signal (i.e. the portion of the second portion of the comparative signal) in the first portion of the second composite signal is not phase shifted relative to the portion of the comparative signal (i.e. the portion of the second portion of the comparative signal) in the second portion of the second composite signal.
  • the first light combiner 211 combines the first portion of the comparative signal and the first portion of the reference signal into a first composite signal. Due to the difference in frequencies between the first portion of the comparative signal and the first portion of the reference signal, the first composite signal includes beating between the first portion of the comparative signal and the first portion of the reference signal.
  • the light combiner 211 also splits the first composite signal onto a first detector waveguide 221 and a second detector waveguide 222.
  • the first detector waveguide 221 carries a first portion of the first composite signal to a first light sensor 223 that converts the first portion of the first composite signal to a first electrical signal.
  • the second detector waveguide 222 carries a second portion of the first composite signal to a second light sensor 224 that converts the second portion of the first composite signal to a second electrical signal.
  • suitable light sensors include germanium photodiodes (PDs), and avalanche photodiodes (APDs).
  • the light combiner 211 splits the first composite signal such that the portion of the comparative signal (i.e. the portion of the first portion of the comparative signal) included in the first portion of the composite signal is phase shifted by 180° relative to the portion of the comparative signal (i.e. the portion of the first portion of the comparative signal) in the second portion of the composite signal but the portion of the reference signal (i.e. the portion of the first portion of the reference signal) in the first portion of the composite signal is not phase shifted relative to the portion of the reference signal (i.e. the portion of the first portion of the reference signal) in the second portion of the composite signal.
  • the light combiner 211 splits the composite signal such that the portion of the reference signal (i.e. the portion of the first portion of the reference signal) in the first portion of the composite signal is phase shifted by 180° relative to the portion of the reference signal (i.e. the portion of the first portion of the reference signal) in the second portion of the composite signal but the portion of the comparative signal (i.e. the portion of the first portion of the comparative signal) in the first portion of the composite signal is not phase shifted relative to the portion of the comparative signal (i.e. the portion of the first portion of the comparative signal) in the second portion of the composite signal.
  • When the second light combiner 212 splits the second composite signal such that the portion of the comparative signal in the first portion of the second composite signal is phase shifted by 180° relative to the portion of the comparative signal in the second portion of the second composite signal, the light combiner 211 also splits the composite signal such that the portion of the comparative signal in the first portion of the composite signal is phase shifted by 180° relative to the portion of the comparative signal in the second portion of the composite signal.
  • When the second light combiner 212 splits the second composite signal such that the portion of the reference signal in the first portion of the second composite signal is phase shifted by 180° relative to the portion of the reference signal in the second portion of the second composite signal, the light combiner 211 also splits the composite signal such that the portion of the reference signal in the first portion of the composite signal is phase shifted by 180° relative to the portion of the reference signal in the second portion of the composite signal.
  • the first reference waveguide 210 and the second reference waveguide 208 are constructed to provide a phase shift between the first portion of the reference signal and the second portion of the reference signal.
  • the first reference waveguide 210 and the second reference waveguide 208 can be constructed so as to provide a 90-degree phase shift between the first portion of the reference signal and the second portion of the reference signal.
  • one reference signal portion can be an in-phase component and the other a quadrature component.
  • one of the reference signal portions can be a sinusoidal function and the other reference signal portion can be a cosine function.
  • the first reference waveguide 210 and the second reference waveguide 208 are constructed such that the first reference signal portion is a cosine function and the second reference signal portion is a sine function. Accordingly, the portion of the reference signal in the second composite signal is phase shifted relative to the portion of the reference signal in the first composite signal, however, the portion of the comparative signal in the first composite signal is not phase shifted relative to the portion of the comparative signal in the second composite signal.
  • the first light sensor 223 and the second light sensor 224 can be connected as a balanced detector and the first auxiliary light sensor 218 and the second auxiliary light sensor 220 can also be connected as a balanced detector.
  • the balanced detector(s) serve as light sensors that convert a light signal to an electrical signal.
  • Figure 5B provides a schematic of the relationship between the electronics 62 and one of the light signal processors 28. For instance, Figure 5B provides a schematic of the relationship between the electronics 62 and the first light sensor 223, the second light sensor 224, the first auxiliary light sensor 218, and the second auxiliary light sensor 220 from the same light signal processor.
  • the symbol for a photodiode is used to represent the first light sensor 223, the second light sensor 224, the first auxiliary light sensor 218, and the second auxiliary light sensor 220 but one or more of these sensors can have other constructions.
  • all of the components illustrated in the schematic of Figure 5B are included on the LIDAR chip. In some instances, the components illustrated in the schematic of Figure 5B are distributed between the LIDAR chip and electronics located off the LIDAR chip.
  • the electronics 62 connect the first light sensor 223 and the second light sensor 224 as a first balanced detector 225 and the first auxiliary light sensor 218 and the second auxiliary light sensor 220 as a second balanced detector 226.
  • the first light sensor 223 and the second light sensor 224 are connected in series.
  • the first auxiliary light sensor 218 and the second auxiliary light sensor 220 are connected in series.
  • the serial connection in the first balanced detector is in communication with a first data line 228 that carries the output from the first balanced detector as a first data signal.
  • the serial connection in the second balanced detector is in communication with a second data line 232 that carries the output from the second balanced detector as a second data signal.
  • the first data line and the second data line are each an example of a data line.
  • the first data signal is an electrical data signal that carries a representation of the first composite signal and the second data signal is an electrical data signal that carries a representation of the second composite signal. Accordingly, the first data signal includes a contribution from a first waveform and a second waveform, and the second data signal is a composite of the first waveform and the second waveform.
  • the portion of the first waveform in the first data signal is phase-shifted relative to the portion of the first waveform in the second data signal, but the portion of the second waveform in the first data signal is in-phase relative to the portion of the second waveform in the second data signal.
  • the second data signal includes a portion of the reference signal that is phase shifted relative to a different portion of the reference signal that is included in the first data signal. Additionally, the second data signal includes a portion of the comparative signal that is in-phase with a different portion of the comparative signal that is included in the first data signal.
  • the first data signal and the second data signal are beating as a result of the beating between the comparative signal and the reference signal, i.e., the beating in the first composite signal and in the second composite signal.
  • the electronics 62 include a data processor 237 configured to generate the LIDAR data.
  • Figure 5B illustrates one data processor in the electronics 62, however, the electronics 62 for a core can include a data processor 237 for each light signal processor 28 operated by the electronics 62.
  • the data processor 237 includes a beat frequency identifier 238 configured to identify the beat frequency of the composite signal from the first data signal and the second data signal.
  • the beat frequency identifier 238 receives the first data signal and the second data signal. Since the first data signal is an in-phase component and the second data signal is its quadrature component, the first data signal and the second data signal together act as a complex data signal where the first data signal is the real component and the second data signal is the imaginary component of the complex data signal.
  • the data processor 237 includes a first Analog-to-Digital Converter (ADC) 264 that receives the first data signal from the first data line 228.
  • the first Analog-to-Digital Converter (ADC) 264 converts the first data signal from an analog form to a digital form and outputs a first digital data signal.
  • the beat frequency identifier 238 includes a second Analog-to-Digital Converter (ADC) 266 that receives the second data signal from the second data line 232.
  • the second Analog-to-Digital Converter (ADC) 266 converts the second data signal from an analog form to a digital form and outputs a second digital data signal.
  • the first digital data signal is a digital representation of the first data signal and the second digital data signal is a digital representation of the second data signal. Accordingly, the first digital data signal and the second digital data signal act together as a complex signal where the first digital data signal acts as the real component of the complex signal and the second digital data signal acts as the imaginary component of the complex data signal.
  • the beat frequency identifier 238 includes a mathematical transformer 268 that receives the complex data signal.
  • the mathematical transformer 268 receives the first digital data signal from the first Analog-to-Digital Converter (ADC) 264 as an input and also receives the second digital data signal from the second Analog-to-Digital Converter (ADC) 266 as an input.
  • the mathematical transformer 268 can be configured to perform a mathematical transform on the complex signal so as to convert from the time domain to the frequency domain.
  • the mathematical transform can be a complex transform such as a complex Fast Fourier Transform (FFT).
  • a complex transform such as a complex Fast Fourier Transform (FFT) provides an unambiguous solution for the shift in frequency of a comparative signal relative to the system output signal.
  • the mathematical transformer 268 can include a peak finder (not shown) configured to identify peaks in the output of the mathematical transformer 268.
  • the peak finder can be configured to identify any frequency peaks associated with reflection of the system output signal by one or more objects located outside of the LIDAR system. For instance, frequency peaks associated with reflection of the system output signal by one or more objects located outside of the LIDAR system can fall within a frequency range.
  • the peak finder can identify the frequency peak within the range of frequencies associated with the reflection of the system output signal by one or more objects located outside of the LIDAR system. The frequency of the identified frequency peak represents the beat frequency of the composite signal.
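As a sketch of why the complex transform followed by a peak search yields an unambiguous beat frequency, the following assumes the first data signal is the in-phase component and the second data signal is its quadrature component; the sample rate, record length, and beat frequency are illustrative values, not taken from the source.

```python
import numpy as np

# Illustrative values only; not taken from the source.
fs = 1_000.0        # sample rate of the ADCs (Hz)
n = 1024            # samples in the data window
f_beat = -125.0     # true beat frequency; the negative sign is resolvable

t = np.arange(n) / fs
# First data signal (in-phase) and second data signal (quadrature)
i_signal = np.cos(2 * np.pi * f_beat * t)
q_signal = np.sin(2 * np.pi * f_beat * t)

# The pair acts as one complex signal: I is the real part, Q the imaginary part
complex_signal = i_signal + 1j * q_signal

# A complex FFT resolves positive and negative frequencies separately, so the
# sign of the beat frequency is unambiguous; a real FFT of I alone would not be.
spectrum = np.fft.fft(complex_signal)
freqs = np.fft.fftfreq(n, d=1 / fs)

# Peak finder: the strongest bin gives the beat frequency of the composite signal
peak_frequency = freqs[np.argmax(np.abs(spectrum))]
```

With only the real (in-phase) channel, +125 Hz and -125 Hz would produce identical spectra; the quadrature channel is what breaks that ambiguity.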
  • the data processor 237 includes a LIDAR data generator 270 that receives the beat frequency of the composite signal from the peak finder.
  • the LIDAR data generator 270 processes the beat frequency of the composite signal so as to generate the LIDAR data (distance and/or radial velocity between the reflecting object and the LIDAR chip or LIDAR system).
  • the transform component 268 can execute the attributed functions using firmware, hardware or software or a combination thereof.
  • the light source controller 63 operates the light source 10 such that the outbound LIDAR signal and the resulting system output signal have a frequency versus time pattern. For instance, when a light source is constructed according to Figure 3 and the laser sources include a gain element or laser chip, the light source controller 63 can change the frequency of the outgoing LIDAR signal by changing the level of electrical current applied through the gain element or laser cavity. Additionally, or alternately, the light source 10 can include one or more modulators (not shown) that the light source controller 63 can use to modulate the frequency of the outgoing LIDAR signal.
  • the light source controller 63 can operate the modulator so as to achieve the desired frequency versus time pattern in light signals that include light from the outgoing LIDAR signal.
  • the light source controller 63 can execute the attributed functions using firmware, hardware or software or a combination thereof.
  • Figure 5C shows an example of a chirp pattern for the outgoing LIDAR signals, outbound LIDAR signals and the resulting system output signals.
  • Figure 5C shows an example of a relationship between the frequency of the system output signals, time, cycles, periods and sample regions.
  • the base frequency of the system output signal (fo) can be the frequency of the system output signal at the start of a cycle.
  • the frequency versus time pattern shown in Figure 5C can represent the frequency versus time pattern for the system output signals that are concurrently output from the LIDAR system carrying the same wavelength channel. However, different system output signals that carry the same wavelength channel illuminate a different selection of sample regions.
  • Since Figure 5C applies to a set of sample regions labeled Rnn and Rnn+1, the disclosure of Figure 5C applies to the system output signal that illuminates these sample regions. Additionally, the frequency versus time pattern shown in Figure 5C is for system output signals carrying a particular one of the wavelength channels. When the wavelength channel is switched, the system output signals can have the same pattern but at the wavelength of the new wavelength channel. Accordingly, the frequency versus time pattern shown in Figure 5C will be shifted upward or downward in response to the change in wavelength channel.
  • Figure 5C shows frequency versus time for a sequence of two cycles labeled cyclej and cyclej+1. In some instances, the frequency versus time pattern is repeated in each cycle as shown in Figure 5C. The illustrated cycles do not include re-location periods and/or re-location periods are not located between cycles. As a result, Figure 5C illustrates the results for a continuous scan where the steering of the system output signal is continuous.
  • Each cycle includes multiple data periods labeled DP1, DP2, and DP3.
  • the frequency versus time pattern is the same for the data periods that correspond to each other in different cycles as is shown in Figure 5C.
  • Corresponding data periods are data periods with the same period index.
  • the data periods labeled DP1 in different cycles can be considered corresponding data periods, and the associated frequency versus time patterns are the same in Figure 5C.
  • the electronics return the frequency to the same frequency level at which it started the previous cycle.
  • the electronics operate the light source such that the frequency of the system output signal changes at a linear rate a.
  • the electronics operate the light source such that the frequency of the system output signal changes at a linear rate -a.
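The alternating ±a frequency pattern over the data periods can be sketched as follows; the function, its name, and the two-data-period cycle (a flat DP3 is omitted here) are illustrative assumptions rather than the exact pattern of Figure 5C.

```python
import numpy as np

def chirp_frequency(t, f0, a, dp):
    """Frequency of the system output signal at time t within a cycle.

    The frequency rises at the linear rate +a during DP1 and falls at the
    rate -a during DP2; a flat DP3 is omitted from this sketch. f0 is the
    base frequency at the start of the cycle and dp is the duration of one
    data period.
    """
    t = np.asarray(t, dtype=float) % (2 * dp)   # the pattern repeats each cycle
    rising = f0 + a * t                         # DP1: rate +a
    falling = f0 + a * dp - a * (t - dp)        # DP2: rate -a
    return np.where(t < dp, rising, falling)
```

Because the fall rate mirrors the rise rate, the frequency returns to f0 at the end of each cycle, matching the requirement that each cycle starts at the same frequency level as the previous one.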
  • Figure 5C labels sample regions that are each associated with a sample region index n and are labeled Rnn.
  • Figure 5C labels sample regions Rnk and Rnk-1. Each sample region is illuminated with the system output signal during the data periods that Figure 5C shows as associated with the sample region. For instance, sample region Rnn is illuminated with the system output signal during the data periods labeled DP1 through DP3.
  • the sample region indices n can be assigned relative to time. For instance, the sample regions can be illuminated by the system output signal in the sequence indicated by the index n. As a result, the sample region Rn10 can be illuminated after sample region Rn9 and before Rn11.
  • the LIDAR system is typically configured to provide reliable LIDAR data when the object is within an operational distance range from the LIDAR system.
  • the operational distance range can extend from a minimum operational distance to a maximum operational distance.
  • a maximum roundtrip time can be the time required for a system output signal to exit the LIDAR system, travel the maximum operational distance to the object, and to return to the LIDAR system and is labeled TM in Figure 5C.
  • the composite signals do not include a contribution from the LIDAR signal until after the system return signal has returned to the LIDAR system. Since the composite signal needs the contribution from the system return signal for there to be a LIDAR beat frequency, the electronics measure the LIDAR beat frequency that results from system return signals that return to the LIDAR system during a data window in the data period. The data window is labeled “W” in Figure 5C. The contribution from the LIDAR signal to the composite signals will be present at times larger than the maximum operational time delay (TM). As a result, the data window is shown extending from the maximum operational time delay (TM) to the end of the data period.
  • TM represents the maximum operational time delay.
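The relationship between the maximum operational distance and the maximum roundtrip time TM, and the resulting data window W, can be sketched as follows; the helper names are hypothetical.

```python
C = 299_792_458.0  # speed of light in m/s

def max_roundtrip_time(max_distance_m):
    """Time TM for a system output signal to exit the LIDAR system, travel
    the maximum operational distance to an object, and return."""
    return 2.0 * max_distance_m / C

def data_window(data_period_s, max_distance_m):
    """The data window W extends from TM to the end of the data period."""
    tm = max_roundtrip_time(max_distance_m)
    return tm, data_period_s
```

For example, a 150 m maximum operational distance gives a TM of about 1 microsecond, so only the tail of each data period after that delay contributes to the measured beat frequency.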
  • a frequency peak in the output from a mathematical transform such as a Complex Fourier transform represents the beat frequency of the composite signals that each includes a comparative signal beating against a reference signal.
  • the beat frequencies from two or more different data periods can be combined to generate the LIDAR data.
  • the beat frequency determined from DP1 in Figure 5C can be combined with the beat frequency determined from DP2 in Figure 5C to determine the LIDAR data.
  • fdb = -fd - a·τ
  • where fdb is a frequency provided by the transform component (the beat frequency determined from DP2 in this case).
  • fd and τ are unknowns.
  • the electronics use each of the beat frequencies as a variable in one or more equations that yield the LIDAR data.
  • fdb represents the beat frequency during a data period in which the light source controller 63 decreases the frequency of the outgoing LIDAR signal, such as occurs in data period DP2 from Figure 5C through Figure 5E; adb represents the rate of the frequency decrease during the data period with a decreasing frequency, and aub represents the rate of the frequency increase during the data period with an increasing frequency.
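A minimal sketch of combining an up-chirp beat frequency and a down-chirp beat frequency into distance and radial velocity, assuming the sign conventions fub = -fd + a·τ and fdb = -fd - a·τ (the source states only the down-chirp form) together with the Doppler relation fd = 2v/λ; the function name and argument order are hypothetical.

```python
C = 299_792_458.0  # speed of light in m/s

def lidar_data(f_ub, f_db, a, wavelength):
    """Return (distance_m, radial_velocity_m_s) from a pair of beat
    frequencies: f_ub from a data period with chirp rate +a and f_db
    from a data period with chirp rate -a.

    Assumed conventions (a sketch, not the patent's exact equations):
        f_ub = -f_d + a * tau
        f_db = -f_d - a * tau
    where f_d = 2 * v / wavelength is the Doppler shift and tau is the
    roundtrip time of the system output signal.
    """
    f_d = -(f_ub + f_db) / 2.0       # the tau terms cancel in the sum
    tau = (f_ub - f_db) / (2.0 * a)  # the f_d terms cancel in the difference
    distance = C * tau / 2.0
    radial_velocity = f_d * wavelength / 2.0
    return distance, radial_velocity
```

The two beat frequencies give two equations in the two unknowns fd and τ, which is why a single data period alone cannot separate distance from radial velocity.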
  • the LIDAR data generator 270 can execute the attributed functions using firmware, hardware or software or a combination thereof.
  • the data period labeled DP3 in Figure 5C is optional. In some situations, there can be more than one object in a sample region. For instance, during the feedback period in DP1 for cycle2 and also during the feedback period in DP2 for cycle2, more than one frequency peak can be present. In these circumstances, it may not be clear which frequency peaks from DP2 correspond to which frequency peaks from DP1. As a result, it may be unclear which frequencies need to be used together to generate the LIDAR data for an object in the sample region. Accordingly, there can be a need to identify corresponding frequencies. The identification of corresponding frequencies can be performed such that the corresponding frequencies are frequencies from the same reflecting object within a sample region. The data period labeled DP3 can be used to find the corresponding frequencies. LIDAR data can be generated for each pair of corresponding frequencies and is considered and/or processed as the LIDAR data for the different reflecting objects in the sample region.
  • An example of the identification of corresponding frequencies uses a LIDAR system where the cycles include three data periods (DP1, DP2, and DP3) as shown in Figure 5C.
  • When there are two objects in a sample region illuminated by the LIDAR output signal, the transform component outputs two different frequencies for fub: fu1 and fu2 during DP1, and another two different frequencies for fdb: fd1 and fd2 during DP2.
  • the possible frequency pairings are: (fd1, fu1); (fd1, fu2); (fd2, fu1); and (fd2, fu2).
  • a value of fd and τ can be calculated for each of the possible frequency pairings.
  • the value of a3 is different from the value of a used in DP1 and DP2.
  • the value of a3 is zero.
  • the transform component also outputs two values for f3 that are each associated with one of the objects in the sample region.
  • the frequency pair with a theoretical f3 value closest to each of the actual f3 values is considered a corresponding pair.
  • LIDAR data can be generated for each of the corresponding pairs as described above and is considered and/or processed as the LIDAR data for a different one of the reflecting objects in the sample region.
  • Each set of corresponding frequencies can be used in the above equations to generate LIDAR data.
  • the generated LIDAR data will be for one of the objects in the sample region.
  • multiple different LIDAR data values can be generated for a sample region where each of the different LIDAR data values corresponds to a different one of the objects in the sample region.
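The matching step described above can be sketched as follows, under the same assumed sign conventions as before (fub = -fd + a·τ, fdb = -fd - a·τ, and f3 = -fd - a3·τ for the third data period); the function name and return shape are hypothetical.

```python
from itertools import product

def match_pairs(f_ups, f_downs, f3_peaks, a, a3=0.0):
    """Pair up-chirp and down-chirp frequency peaks using the DP3 peaks.

    For each candidate pairing (f_db, f_ub), a theoretical DP3 frequency
    is computed from the assumed conventions f_ub = -f_d + a*tau,
    f_db = -f_d - a*tau, and f_3 = -f_d - a3*tau; the pairing whose
    theoretical value is closest to an actual DP3 peak is kept.
    """
    matches = []
    for f3 in f3_peaks:
        def theoretical(pair):
            f_db, f_ub = pair
            neg_f_d = (f_ub + f_db) / 2.0        # equals -f_d
            tau = (f_ub - f_db) / (2.0 * a)
            return neg_f_d - a3 * tau
        best = min(product(f_downs, f_ups),
                   key=lambda p: abs(theoretical(p) - f3))
        matches.append((best[0], best[1], f3))
    return matches
```

With a3 = 0, the theoretical DP3 frequency reduces to -fd, so the third data period acts as a pure Doppler measurement that disambiguates the pairings.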
  • Figure 6A through Figure 6C illustrate a self-driving car having a LIDAR system.
  • the LIDAR system is configured to have the highest density of system output signals where information that affects driving is most likely to be found and/or is most concentrated.
  • Figure 6A illustrates the highest density of system output signals positioned to detect oncoming traffic or other obstacles. Meanwhile a lower density of system output signals is positioned to detect overhead items such as bridges.
  • Figure 6B illustrates the car of Figure 6A approaching a downward slope in the road.
  • the illustrated LIDAR system does not adjust for the slope by shifting the system output signals within the field of view of the LIDAR system.
  • the LIDAR system does not detect the presence of the oncoming traffic shown in Figure 6B because the system output signals pass over the oncoming traffic.
  • Figure 6C illustrates the LIDAR system shifting the system output signals within the field of view of the LIDAR system so as to compensate for the downward slope in the road.
  • the system output signals shift downward within the field of view of the LIDAR system.
  • the downward shift of the system output signals allows the LIDAR system to detect the oncoming traffic.
  • the downward shift of the system output signals can be sufficient for the oncoming traffic to be located within the highly concentrated region of the field of view where an increased density of sample regions is present.
  • the electronics can shift the system output signals within the field of view in response to output from one or more sensors.
  • the electronics can be in electrical communication with the one or more sensors 282 illustrated in Figure 1.
  • the light source controller 63 is in electrical communication with the one or more sensors 282.
  • the light source controller 63 is included in the assembly electronics 280 as shown in Figure 2 and is in electrical communication with one or more sensors 282. All or a portion of the electronics can be included in the one or more sensors. For instance, in some instances, all or a portion of the light source controller 63 is included in the one or more sensors.
  • the one or more sensors can be positioned within the LIDAR system or on a supporting object on which the LIDAR system is positioned. For instance, the one or more sensors can be positioned at one or more locations on the car.
  • An example of the one or more sensors 282 includes, but is not limited to, orientation sensors.
  • the output of orientation sensors can indicate a spatial orientation of the LIDAR system or a supporting object on which the LIDAR system is positioned.
  • the orientation sensors can indicate a spatial orientation of the LIDAR system or the support object relative to a reference such as horizontal or vertical.
  • the LIDAR system shown in Figure 6C can include one or more orientation sensors that output a signal that indicates the orientation of the LIDAR system or the car relative to a reference such as horizontal or vertical.
  • the electronics can process the output of the one or more sensors to determine other characteristics of the LIDAR system and/or the support object. For instance, the electronics can calculate the rate of change in the orientation of the LIDAR system or support object. The rate of change in the orientation of the LIDAR system or support object can be measured relative to time or relative to distance traveled by the LIDAR system or support object.
  • Figure 6D is a graph that illustrates a rate of change in the horizontal or vertical orientation of the car in Figure 6C relative to distance traveled as the car travels down the illustrated road.
  • the distance on the x-axis in Figure 6D corresponds to the location along the road shown in Figure 6C. Accordingly, the illustrated rate of orientation change represents the rate of orientation change at the corresponding location on the road of Figure 6C.
  • the rate of change increases at places where the curvature of the road increases and decreases at places where the curvature of the road decreases and/or becomes flat.
  • Figure 6D also includes multiple orientation rate change thresholds. For instance, Figure 6D includes a first orientation rate change threshold labeled T1, a second orientation rate change threshold labeled T2, a third orientation rate change threshold labeled T3, and a fourth orientation rate change threshold labeled T4.
  • the light source controller 63 can operate the light source 10 in response to the rate of orientation change.
  • the light source controller 63 can cause the light source to output the outgoing LIDAR signal carrying the wavelength channels shown in Figure 6D.
  • the light source controller 63 can change the wavelength channel carried by the system output signals.
  • changing the wavelength channel carried by the system output signals shifts the locations of the sample regions within the field of view.
  • increasing the wavelength channel index (m in λm) shifts the sample regions downward in the field of view while decreasing the wavelength channel index (m in λm) shifts the sample regions upward in the field of view; however, other conventions are possible.
  • when there is little change in the orientation of the car, the light source controller 63 operates the light source 10 such that the system output signals carry wavelength channel λ0. Accordingly, the system output signals carry wavelength channel λ0 when the car is on a substantially flat road.
  • the electronics operate the light source 10 such that the system output signals carry wavelength channel λ1 in response to the rate of orientation change going from below T1 to above T1.
  • an increase in the rate of orientation changes from below Ti to above Ti results from the car starting down the decline.
  • the system output signals are shifted downward in the LIDAR system’s field of view in response to the car starting down the decline.
  • the electronics operate the light source 10 such that the system output signals carry wavelength channel λ2 in response to the rate of orientation change going from below T2 to above T2. As a result, the system output signals are shifted further downward in the LIDAR system’s field of view in response to the car decline becoming steeper.
  • the electronics operate the light source 10 such that the system output signals carry a wavelength channel with a lower index in response to the rate of orientation change going from above T3 to below T3.
  • the system output signals are shifted upward in the LIDAR system's field of view in response to the car starting an incline.
  • Figure 6D illustrates that the light source controller 63 changes the wavelength channel carried by the system output signals in response to the rate of orientation change crossing an orientation rate change threshold. Additionally, the wavelength channel carried by the system output signal after the change in the wavelength channel is a function of whether the rate of orientation change increases above the orientation rate change threshold or decreases below the orientation rate change threshold.
  • the example of Figure 6D describes shifting the location of the system output signals in response to the output of an orientation sensor.
  • Suitable orientation sensors include, but are not limited to, accelerometers and gyroscopes.
  • the one or more sensors 282 can include sensors in addition to one or more orientation sensors or as an alternative to one or more orientation sensors.
  • the location of the system output signals can be shifted in response to the output of one or more sensors in addition to the one or more orientation sensors or as an alternative to the one or more orientation sensors.
  • sensors that can be used in addition to orientation sensors or as an alternative to orientation sensors include, but are not limited to, cameras, radar sensors, and LIDAR sensors.
  • Figure 6D discloses shifting of the system output signals vertically within the LIDAR system's field of view.
  • the LIDAR system can additionally or alternately be configured to shift the system output signals horizontally within the LIDAR system's field of view.
  • Examples of LIDAR system applications where varying density of system output signals and/or shifting of the system output signals within the field of view may be desired include, but are not limited to, automotive, robotics, and surveying.
  • Suitable platforms for the LIDAR chip include, but are not limited to, silica, indium phosphide, and silicon-on-insulator wafers.
  • Figure 7 is a cross section of a silicon-on-insulator wafer.
  • a silicon-on-insulator (SOI) wafer includes a buried layer 300 between a substrate 302 and a light-transmitting medium 304.
  • the buried layer 300 is silica while the substrate 302 and the light-transmitting medium 304 are silicon.
  • the substrate of an optical platform such as an SOI wafer can serve as the base for a LIDAR chip.
  • the optical components shown in Figure 1 are positioned on or over the top and/or lateral sides of the same substrate.
  • the substrate of an optical platform such as an SOI wafer can serve as base 298 shown in Figure 2B.
  • the portion of the LIDAR chip illustrated in Figure 7 includes a waveguide construction that is suitable for use with chips constructed from silicon-on-insulator wafers.
  • a ridge 306 of the light-transmitting medium 304 extends away from slab regions 308 of the light-transmitting medium 304.
  • the light signals are constrained between the top of the ridge and the buried layer 300.
  • the ridge 306 at least partially defines the waveguide.
  • the dimensions of the ridge waveguide are labeled in Figure 7.
  • the ridge has a width labeled w and a height labeled h.
  • the thickness of the slab regions is labeled t.
  • the ridge width (labeled w) is greater than 1 μm and less than 4 μm
  • the ridge height (labeled h) is greater than 1 μm and less than 4 μm
  • the slab region thickness is greater than 0.5 μm and less than 3 μm.
  • these dimensions apply to straight or substantially straight portions of a waveguide.
  • curved portions of a waveguide can have a reduced slab thickness in order to reduce optical loss in the curved portions of the waveguide.
  • a curved portion of a waveguide can have a ridge that extends away from a slab region with a thickness greater than or equal to 0.0 μm and less than 0.5 μm. While the above dimensions will generally provide the straight or substantially straight portions of a waveguide with a single-mode construction, they can result in tapered section(s) and/or curved section(s) that are multimode.
  • Coupling between the multimode geometry and the single-mode geometry can be done using tapers that do not substantially excite the higher order modes. Accordingly, the waveguides can be constructed such that the signals carried in the waveguides are carried in a single mode even when carried in waveguide sections having multimode dimensions.
  • the waveguide construction of Figure 7 is suitable for all or a portion of the waveguides on a LIDAR chip constructed according to Figure 1.
  • Suitable signal directors 14 for use with the LIDAR chip include, but are not limited to, optical switches such as cascaded Mach-Zehnder interferometers and micro-ring resonator switches.
  • the signal director 14 includes cascaded Mach-Zehnder interferometers that use thermal or free-carrier injection phase shifters.
  • Figure 8 A and Figure 8B illustrate an example of an optical switch that includes cascaded Mach-Zehnder interferometers 416.
  • Figure 8A is a topview of the optical switch.
  • Figure 8B is a cross section of the optical switch shown in Figure 8A taken along the line labeled B in Figure 8A.
  • the optical switch receives the outgoing LIDAR signal from the utility waveguide 12.
  • the optical switch is configured to direct the outgoing LIDAR signal to one of several alternate waveguides 16.
  • the optical switch includes interconnect waveguides 414 that connect multiple Mach-Zehnder interferometers 416 in a cascading arrangement. Each of the Mach-Zehnder interferometers 416 directs the outgoing LIDAR signal to one of two interconnect waveguides 414.
  • the signal director 14 can operate each Mach-Zehnder interferometer 416 so as to select which of the two interconnect waveguides 414 receives the outgoing LIDAR signal from the Mach-Zehnder interferometer 416.
  • the interconnect waveguides 414 that receive the outgoing LIDAR signal can be selected such that the outgoing LIDAR signal is guided through the optical switch to a particular one of the alternate waveguides 16.
  • Each of the Mach-Zehnder interferometers 416 includes two branch waveguides 418 that each receives a portion of the outgoing LIDAR signal from the utility waveguide 12 or from an interconnect waveguide 414.
  • Each of the Mach-Zehnder interferometers 416 includes a direction component 420 that receives two portions of the outgoing LIDAR signal from the branch waveguides 418. The direction component 420 steers the outgoing LIDAR signal to one of the two interconnect waveguides 414 configured to receive the outgoing LIDAR signal from the direction component 420.
  • the interconnect waveguide 414 to which the outgoing LIDAR signal is directed is a function of the phase differential between the two different portions of the outgoing LIDAR signal received by the direction component 420.
  • Although Figure 8A illustrates a directional coupler operating as the direction component 420, other direction components 420 can be used. Suitable alternate direction components 420 include, but are not limited to, Multi-Mode Interference (MMI) devices and tapered couplers.
  • Each of the Mach-Zehnder interferometers 416 includes a phase shifter 422 positioned along one of the branch waveguides 418.
  • the output component includes conductors 424 in electrical communication with the phase shifters 422.
  • the conductors 424 are illustrated as dashed lines so they can be easily distinguished from underlying features.
  • the conductors 424 each terminate at a contact pad 426.
  • the contact pads 426 can be used to provide electrical communication between the conductors 424 and the signal director 14. Accordingly, the conductors 424 provide electrical communication between the signal director 14 and the phase shifters 422 and allow the electronics to operate the phase shifters 422.
  • Suitable conductors 424 include, but are not limited to, metal traces. Suitable materials for the conductors include, but are not limited to, titanium, aluminum and gold.
  • the electronics can operate each of the phase shifters 422 so as to control the phase differential between the portions of the outgoing LIDAR signal received by a direction component 420.
  • a phase shifter 422 can be operated so as to change the index of refraction of at least a portion of a branch waveguide 418. Changing the index of a portion of a branch waveguide 418 in a Mach-Zehnder interferometer 416 changes the effective length of that branch waveguide 418 and accordingly changes the phase differential between the portions of the outgoing LIDAR signal received by a direction component 420.
  • the ability of the electronics to change the phase differential allows the electronics to select the interconnect waveguide 414 that receives the outgoing LIDAR signal from the direction component 420.
  • Figure 8B illustrates one example of a suitable construction of a phase shifter 422 on a branch waveguide 418.
  • the branch waveguide 418 is at least partially defined by a ridge 306 of the light-transmitting medium 304 that extends away from slab regions 308 of the light-transmitting medium 304.
  • Doped regions 428 extend into the slab regions 308 with one of the doped regions including an n-type dopant and one of the doped regions 428 including a p-type dopant.
  • a first cladding 430 is positioned between the light-transmitting medium 304 and a conductor 424. The conductors 424 each extend through an opening in the first cladding 430 into contact with one of the doped regions 428.
  • a second cladding 432 is optionally positioned over the first cladding 430 and over the conductor 424.
  • the electronics can apply a forward bias to the conductors 424 so as to generate an electrical current through the branch waveguide 418.
  • the resulting injection of carriers into the branch waveguide 418 causes free carrier absorption that changes the index of refraction in the branch waveguide 418.
  • the first cladding 430 and/or the second cladding 432 illustrated in Figure 8B can each represent one or more layers of materials.
  • the materials for the first cladding 430 and/or the second cladding 432 can be selected to provide electrical isolation of the conductors 424, lower index of refraction relative to the light-transmitting medium 304, stress reduction and mechanical and environmental protection.
  • Suitable materials for the first cladding 430 and/or the second cladding 432 include, but are not limited to, silicon nitride, tetraethyl orthosilicate (TEOS), silicon dioxide, and aluminum oxide.
  • the one or more materials for the first cladding 430 and/or the second cladding 432 can be doped or undoped.
  • the LIDAR system can optionally include one or more light signal amplifiers 446.
  • an amplifier 446 can optionally be positioned along a utility waveguide as illustrated in the LIDAR system of Figure 1. Additionally, or alternately, an amplifier 446 can be positioned along all or a portion of the alternate waveguides 16 as illustrated in the LIDAR system of Figure 1.
  • the electronics can operate the amplifier 446 so as to amplify the power of the outgoing LIDAR signal and accordingly of the system output signal.
  • the electronics can operate each of the amplifiers 446 so as to amplify the power of the outgoing LIDAR signal.
  • Suitable amplifiers 446 for use on the LIDAR chip include, but are not limited to, Semiconductor Optical Amplifiers (SOAs) and SOA arrays.
  • Light sensors that are interfaced with waveguides on a LIDAR chip can be a component that is separate from the chip and then attached to the chip.
  • the light sensor can be a photodiode, or an avalanche photodiode.
  • suitable light sensors include, but are not limited to, InGaAs PIN photodiodes and InGaAs APDs (Avalanche Photo Diodes) manufactured by Hamamatsu, located in Hamamatsu City, Japan. These light sensors can be centrally located on the LIDAR chip.
  • all or a portion of the waveguides that terminate at a light sensor can terminate at a facet located at an edge of the chip and the light sensor can be attached to the edge of the chip over the facet such that the light sensor receives light that passes through the facet.
  • the use of light sensors that are a separate component from the chip is suitable for all or a portion of the light sensors selected from the group consisting of the first light sensor and the second light sensor.
  • all or a portion of the light sensors can be integrated with the chip.
  • examples of light sensors that are interfaced with ridge waveguides on a chip constructed from a silicon-on-insulator wafer can be found in Optics Express Vol. 15, No. 21, 13965-13971 (2007); U.S. Patent number 8,093,080, issued on Jan 10, 2012; U.S. Patent number 8,242,432, issued on Aug 14, 2012; and U.S. Patent number 6,108,472, issued on Aug 22, 2000, each of which is incorporated herein in its entirety.
  • the use of light sensors that are integrated with the chip is suitable for all or a portion of the light sensors selected from the group consisting of the first light sensor and the second light sensor.
  • Suitable electronics 62 can include, but are not limited to, a controller that includes or consists of analog electrical circuits, digital electrical circuits, processors, microprocessors, digital signal processors (DSPs), Field Programmable Gate Arrays (FPGAs), computers, microcomputers, or combinations suitable for performing the operation, monitoring and control functions described above.
  • the controller has access to a memory that includes instructions to be executed by the controller during performance of the operation, control and monitoring functions.
  • the functions of the LIDAR data generator and the peak finder can be executed by Field Programmable Gate Arrays (FPGAs), digital signal processors (DSPs), Application Specific Integrated Circuits, firmware, software, hardware, and combinations thereof.
  • Although the electronics are illustrated as a single component in a single location, the electronics can include multiple different components that are independent of one another and/or placed in different locations. Additionally, as noted above, all or a portion of the disclosed electronics can be included on the chip, including electronics that are integrated with the chip.
  • An example of a suitable director controller 15 executes the attributed functions using firmware, hardware, or software or a combination thereof.
  • An example of a suitable light source controller 63 executes the attributed functions using firmware, hardware, or software or a combination thereof.
  • An example of a suitable data processor 237 executes the attributed functions using firmware, hardware, or software or a combination thereof.
  • An example of suitable assembly electronics 280 and electronics 62 executes the attributed functions using firmware, hardware, or software or a combination thereof.
  • the integrated optical components can include or consist of a portion of the wafer from which the LIDAR chip is fabricated.
  • a wafer that can serve as a platform for a LIDAR chip can include multiple layers of material. At least a portion of the different layers can be different materials.
  • the integrated on-chip components can be formed by using etching and masking techniques to define the features of the component in the light-transmitting medium 304.
  • the slab regions 308 that define the waveguides and the stop recess can be formed in the desired regions of the wafer using different etches of the wafer.
  • the LIDAR chip includes a portion of the wafer and the integrated on-chip components can each include or consist of a portion of the wafer.
  • the integrated on-chip components can be configured such that light signals traveling through the component travel through one or more of the layers that were originally included in the wafer.
  • the waveguide of Figure 7 guides light signals through the light-transmitting medium 304 from the wafer.
  • the integrated components can optionally include materials in addition to the materials that were present on the wafer.
  • the integrated components can include reflective materials and/or a cladding.
  • Numeric labels such as first, second, third, etc. are used to distinguish different features and components and do not indicate sequence or existence of lower numbered features.
  • a second component can exist without the presence of a first component and/or a third step can be performed before a first step.
  • the light signals disclosed above each include, consist of, or consist essentially of light from the prior light signal(s) from which the light signal is derived.
  • an incoming LIDAR signal includes, consists of, or consists essentially of light from the LIDAR input signal.
  • Although the LIDAR system is disclosed as using complex signals such as the complex data signal, the LIDAR system can also use real signals.
  • the mathematical transform can be a real transform and the components associated with the generation and use of the quadrature components can be removed from the LIDAR system.
  • the LIDAR system can use a single signal combiner. Additionally, or alternately, a single light sensor can replace each of the balanced detectors.
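The Mach-Zehnder switching behavior described in the bullets above can be illustrated numerically. The sketch below uses the textbook transfer function of an ideal 2x2 direction component together with the standard phase-shift relation for an index change along one branch; the phase-shifter length, index change, and function names are illustrative assumptions, not values taken from this disclosure.

```python
import math

# Ideal Mach-Zehnder transfer: with a phase differential dphi between the
# portions of the outgoing LIDAR signal in the two branch waveguides, an
# ideal 2x2 direction component divides the power between its two outputs
# as cos^2(dphi/2) and sin^2(dphi/2).
def mzi_output_powers(dphi):
    return math.cos(dphi / 2) ** 2, math.sin(dphi / 2) ** 2

# A phase differential of pi moves the signal to the other interconnect waveguide:
bar, cross = mzi_output_powers(0.0)
assert abs(bar - 1.0) < 1e-12 and cross < 1e-12
bar, cross = mzi_output_powers(math.pi)
assert bar < 1e-12 and abs(cross - 1.0) < 1e-12

# Phase differential produced by an index change dn over a phase-shifter
# length L at wavelength lam: dphi = 2*pi*dn*L/lam (illustrative values).
dphi = 2 * math.pi * 1.55e-3 * 500e-6 / 1.55e-6
assert abs(dphi - math.pi) < 1e-9
```

This is why operating a phase shifter so as to change the effective length of one branch waveguide selects which interconnect waveguide receives the outgoing LIDAR signal.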


Abstract

The imaging system includes a LIDAR system with an optical component assembly that concurrently outputs multiple system output signals in a field of view. The system output signals carry the same wavelength channel. The imaging system includes solid-state beam steerers that are each configured to steer one of the system output signals to multiple different pixels within the field of view. The pixels are arranged such that a density of the pixels in the field of view is higher in a concentrated region of the field of view than in a diluted region of the field of view. The optical component assembly is configured such that the location of the concentrated region of the field of view shifts within the field of view in response to a change in a wavelength of the wavelength channel carried by the system output signals.

Description

CONTROL OF PIXEL DENSITY IN IMAGING SYSTEMS
RELATED APPLICATIONS
[0001] This Application claims the benefit of U.S. Patent Application serial number 63/553,885, filed on February 15, 2024, entitled “Control of Pixel Density in Imaging Systems” and incorporated herein in its entirety; and this application is a continuation-in-part of U.S. Patent Application serial number 18/539,251, filed on December 13, 2023, entitled “Amplification of Signals in Imaging Systems” and incorporated herein in its entirety.
FIELD
[0002] The invention relates to imaging. In particular, the invention relates to LIDAR systems.
BACKGROUND
[0003] Imaging systems such as LIDAR systems are being used in an increasing number of applications. LIDAR systems generate LIDAR data for pixels in the LIDAR system’s field of view. In many applications, the objects that are being sought tend to be concentrated in one or more regions within the field of view. In many applications, the field of view and/or the objects can temporarily shift such that the objects are located outside of these regions in the field of view. There is a need for a LIDAR system that can more efficiently generate LIDAR data for applications where the objects being sought tend to be concentrated within one or more locations within the LIDAR system’s field of view.
SUMMARY
[0004] A LIDAR system includes an optical component assembly that concurrently outputs multiple system output signals in a field of view. The system output signals carry the same wavelength channel. The imaging system includes solid-state beam steerers that are each configured to steer one of the system output signals to multiple different pixels within the field of view. The pixels are arranged such that a density of the pixels in the field of view is higher in a concentrated region of the field of view than in a diluted region of the field of view. The optical component assembly is configured such that the location of the concentrated region of the field of view shifts within the field of view in response to a change in a wavelength of the wavelength channel carried by the system output signals.
[0005] A method of operating a system includes concurrently transmitting multiple system output signals in the field of view of a LIDAR system. The system output signals are transmitted from an optical assembly in the LIDAR system and carry the same wavelength channel. The method also includes operating a solid-state beam-steerer so as to steer each of the system output signals to multiple different pixels within the field of view such that a density of the pixels in the field of view is higher in a concentrated region of the field of view than in a diluted region of the field of view. The method also includes shifting the location of the concentrated region within the field of view.
[0006] A LIDAR system has a LIDAR chip that includes a switch and multiple alternate waveguides. The switch is configured to direct an outgoing LIDAR signal to any one of multiple different alternate waveguides. Each of the alternate waveguides terminates at a facet through which the outgoing LIDAR signals passes when directed to the alternate waveguide. The facets are arranged such that the distance between adjacent pairs of the facets is different for different adjacent pairs of facets. In some instances, the LIDAR system includes a signal redirector that receives the outgoing LIDAR signal from any of the alternate waveguides and redirects the received outgoing LIDAR signal such that a direction that the outgoing LIDAR signal travels away from the redirection component changes in response to a change in the alternate waveguide from which the redirection component receives the outgoing LIDAR signal.
BRIEF DESCRIPTION OF THE FIGURES
[0007] Figure 1 illustrates an imaging system that includes a chip with a photonic circuit.
[0008] Figure 2 is a schematic of a LIDAR system that includes multiple different cores on a chip.
[0009] Figure 3 is a schematic of a LIDAR system that includes multiple different cores on a chip.
[0010] Figure 4A is a schematic of a LIDAR system constructed according to Figure 2 where the chip has an array of alternate waveguide facets with a varying separation distance between the facets.
[0011] Figure 4B is a schematic of the LIDAR system of Figure 4A after a change in the wavelength of the system output signal.
[0012] Figure 4C is a schematic of the relationship between the LIDAR system shown in Figure 4A and the field of view for the LIDAR system.
[0013] Figure 4D is a sideview of the field of view of a LIDAR system at the maximum operational distance of the LIDAR system.
[0014] Figure 4E is the sideview shown in Figure 4D after shifting of system output signals within the field of view.
[0015] Figure 5A through Figure 5B illustrate an example of a light signal processor that is suitable for use as the light signal processor in a LIDAR system constructed according to Figure 1. Figure 5A is a schematic of an example of a suitable optical-to-electrical assembly for use in the light signal processor.
[0016] Figure 5B provides a schematic of the relationship between electronics and the optical-to-electrical assembly of Figure 5A.
[0017] Figure 5C illustrates an example of the frequency versus time pattern for a system output signal transmitted from the imaging system.
[0018] Figure 6A through Figure 6C illustrate a self-driving car having a LIDAR system that transmits system output signals within the LIDAR system’s field of view. Figure 6A illustrates the highest density of system output signals within the LIDAR system's field of view positioned to detect oncoming traffic or other obstacles.
[0019] Figure 6B illustrates the highest density of system output signals within the LIDAR system’s field of view passing over oncoming traffic as a result of a decline in a road.
[0020] Figure 6C illustrates the highest density of system output signals within the LIDAR system’s field of view shifted downward within the field of view so as to detect oncoming traffic.
[0021] Figure 6D illustrates shifting of the system output signals within the LIDAR system’s field of view system in response to the road decline illustrated in Figure 6A through Figure 6C.
[0022] Figure 7 is a cross section of a silicon-on-insulator wafer.
[0023] Figure 8A and Figure 8B illustrate an example of an optical switch that includes cascaded Mach-Zehnder interferometers. Figure 8A is a topview of the optical switch.
[0024] Figure 8B is a cross section of the optical switch shown in Figure 8A taken along the line labeled B in Figure 8A.
DESCRIPTION
[0025] The LIDAR system concurrently outputs multiple system output signals in a field of view. The imaging system includes one or more solid-state beam steerers that steer the system output signals to multiple different pixels within the field of view. The pixels are arranged such that a density of the pixels in the field of view is higher in a concentrated region of the field of view than in a diluted region of the field of view. The location of the concentrated region can be selected so the concentrated region can be aligned with the one or more locations where the objects that are being sought by the LIDAR system tend to be found. As a result, the LIDAR system is more likely to detect the presence of the objects and/or to provide more resolution regarding the detected objects. Accordingly, the LIDAR system has a more efficient allocation of the system output signals within the field of view.
[0026] Additionally, the LIDAR system can be configured to shift the location of the concentrated region of the field of view. In some instances, the LIDAR system shifts the location of the concentrated region in response to output from one or more sensors. The one or more sensors can be configured to provide output that indicates the presence of conditions under which the objects tend to move outside of the concentrated region of the field of view. When these conditions are present, the concentrated region can be shifted to a new location where the concentrated region is more likely to align with the objects. Shifting the concentrated region to the new location allows the LIDAR system to continue providing LIDAR data for these objects despite the movement of the objects outside of the original concentrated region location in the field of view.
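A minimal sketch of this kind of sensor-driven shifting, in the spirit of the Figure 6D example in which crossing rate-of-orientation-change thresholds changes the wavelength channel carried by the system output signals: the threshold values, channel indices, and function name below are hypothetical, and a real system would typically add hysteresis and direction-dependent thresholds.

```python
# Hypothetical sketch: map the rate of orientation change reported by an
# orientation sensor to a wavelength channel index m. Increasing m shifts
# the sample regions (and the concentrated region) downward in the field
# of view; the thresholds T1 and T2 are illustrative values.
def select_channel(rate_of_change, thresholds, base_m=1):
    m = base_m
    for t in thresholds:          # thresholds in ascending order
        if rate_of_change > t:
            m += 1                # each crossing shifts the region further down
    return m

T1, T2 = 0.5, 1.0                 # hypothetical thresholds (rad/s)
assert select_channel(0.2, [T1, T2]) == 1   # little orientation change: baseline
assert select_channel(0.7, [T1, T2]) == 2   # decline begins: shift downward
assert select_channel(1.4, [T1, T2]) == 3   # steeper decline: shift further down
```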
[0027] Figure 1 is a schematic of a portion of a LIDAR system that includes a LIDAR chip. Figure 1 includes a topview of a portion of the LIDAR chip 2. The LIDAR chip includes a LIDAR core 4. The LIDAR system also includes a light source 10 and electronics 62. The light source 10 outputs an outgoing LIDAR signal that can be one of M different wavelength channels. There are M wavelength channels and each of the wavelength channels is associated with a wavelength channel index m where m has a value from 1 to M. Each of the M wavelength channels is at a different wavelength. The electronics 62 can operate the light source 10 so as to select which of the M different wavelength channels is carried by the outgoing LIDAR signal and can switch the selection of the M different wavelength channels that are carried by the outgoing LIDAR signal. In some instances, the electronics 62 operate the light source 10 such that the outgoing LIDAR signal carries one, or substantially one, wavelength channel at a time. Suitable values for M include, but are not limited to, values greater than or equal to 2, 4, 8, or 16 and less than 32, 64, or 128. In some instances, the separation between adjacent wavelength channels is greater than 0.4 nm, 0.8 nm, or 1.2 nm and/or less than 5 nm, 10 nm, or 20 nm.
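As a numerical illustration of the channel ranges above, the sketch below builds a uniform grid of M wavelength channels. The center wavelength and 1 nm spacing are assumptions chosen to satisfy the stated 0.4 nm to 20 nm adjacent-channel separation range, not values from this disclosure.

```python
# Illustrative wavelength grid for channel indices m = 1..M. The start
# wavelength and spacing are hypothetical; only the constraints (each
# channel at a different wavelength, adjacent separation within the
# stated range) come from the text above.
def channel_grid(m_count, start_nm=1545.0, spacing_nm=1.0):
    return [start_nm + (m - 1) * spacing_nm for m in range(1, m_count + 1)]

grid = channel_grid(8)                       # M = 8
assert len(grid) == 8
assert grid[0] == 1545.0 and grid[-1] == 1552.0
assert all(0.4 < b - a < 20 for a, b in zip(grid, grid[1:]))
```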
[0028] The LIDAR core 4 includes a photonic integrated circuit with a utility waveguide 12. The utility waveguide 12 receives the outgoing LIDAR signal from the light source 10. The utility waveguide 12 carries the outgoing LIDAR signal to a signal director 14. The LIDAR system can include electronics 62 that operate the signal director 14. For instance, the electronics 62 can include a director controller 15 that operates the signal director 14 so as to direct light from the light source output signal to any one of multiple different alternate waveguides 16. There are N alternate waveguides and each of the alternate waveguides 16 is associated with an alternate waveguide index i where i has a value from 1 to N. Suitable values of N include, but are not limited to, values less than 128, 64, or 32 and/or greater than 2, 8, or 16. In one example, N is between 2 and 128.
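One common way to build a signal director that selects among N alternate waveguides is a binary tree of cascaded 1x2 switches, such as the cascaded Mach-Zehnder interferometers described later in this disclosure. The sketch below, with assumed names and an assumed bit convention, shows how a target alternate waveguide index maps to per-stage switch settings when N is a power of two.

```python
# Hypothetical routing sketch: with N = 2**num_stages alternate waveguides,
# the binary digits of the target index give the branch selected at each
# cascaded switch stage (most-significant bit at the first stage).
def switch_settings(target, num_stages):
    return [(target >> (num_stages - 1 - s)) & 1 for s in range(num_stages)]

# Directing the outgoing LIDAR signal to alternate waveguide 5 of N = 8
# requires three cascaded stages set to branches 1, 0, 1:
assert switch_settings(5, 3) == [1, 0, 1]
assert switch_settings(0, 3) == [0, 0, 0]
```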
[0029] Each of the alternate waveguides 16 can receive the outgoing LIDAR signal from the signal director 14. When any of the alternate waveguides 16 receives the outgoing LIDAR signal, that alternate waveguide 16 serves as an active waveguide and carries the outgoing LIDAR signal to a port 18 through which the outgoing LIDAR signal can exit from the LIDAR chip and serve as an outbound LIDAR signal. In some instances, the alternate waveguides 16 terminate at a facet that serves as the port 18. Accordingly, the outgoing LIDAR signal is output from the active waveguide.
[0030] The light signals that result from the outgoing LIDAR signal being directed to the alternate waveguide 16 with alternate waveguide index i can be classified as light signals carrying channel (Cm,i) where m is the wavelength channel index and i is the alternate waveguide index. Accordingly, a light signal output from alternate waveguide index i and carrying wavelength channel m is carrying channel (Cm,i). As an example, the path of the outbound LIDAR signal that carries the channel with alternate waveguide index 2 is labeled C1-M,2 in Figure 1. For the purposes of illustration, the LIDAR system is shown as generating three outbound LIDAR signals (N=3) labeled Cm,1 through Cm,3. Each of the illustrated outbound LIDAR signals can carry all or a portion of the wavelength channels m=1 through M.
[0031] A LIDAR input signal returns to the LIDAR chip such that a LIDAR input signal carrying channel Cm,i enters the alternate waveguide 16 that is associated with the same alternate waveguide index i. As a result, LIDAR input signals carrying channels with different alternate waveguide indices are received at different alternate waveguides. The portion of the LIDAR input signal that enters an alternate waveguide 16 serves as an incoming LIDAR signal. As a result, the alternate waveguide that receives the incoming LIDAR signal can guide an outgoing LIDAR signal while also guiding the incoming LIDAR signal in the opposite direction. The alternate waveguide 16 that receives the incoming LIDAR signal carries the incoming LIDAR signal to the signal director 14. The signal director 14 outputs the incoming LIDAR signal on the utility waveguide 12.
[0032] The alternate waveguide 16 that receives the incoming LIDAR signal carries the incoming LIDAR signal to a 2x2 splitter 24 that moves a portion of the incoming LIDAR signal from the alternate waveguide 16 onto a comparative waveguide 26 as a comparative signal. The comparative signal includes light from the outgoing LIDAR signal that has exited from the imaging system, that has been reflected by an object located outside of the imaging system, and that has returned to the imaging system. The comparative waveguide 26 carries the comparative signal to a signal processor 28 for further processing. Suitable splitters 24 include, but are not limited to, optical couplers, Y-junctions, and MMIs. In some instances, the splitter 24 is configured such that the power of the incoming LIDAR signal is divided evenly or substantially evenly between the utility waveguide 12 and the comparative waveguide 26.
[0033] The alternate waveguide 16 also carries the outgoing LIDAR signal to the splitter 24. The splitter 24 moves a portion of the outgoing LIDAR signal from the alternate waveguide 16 onto a reference waveguide 32 as a reference signal. The reference waveguide 32 carries the reference signal to the signal processor 28 for further processing. Although not illustrated, a signal power reducer can optionally be positioned along the reference waveguide 32 to reduce the power of the reference signal so as to reduce or prevent saturation of one or more light sensors included in the signal processor 28. Examples of suitable signal power reducers include, but are not limited to, attenuators including variable optical attenuators (VOAs) and light splitters combined with beam dumps.
[0034] As will be described in more detail below, the signal processor 28 combines the comparative signal with the reference signal to form a composite signal that carries LIDAR data for a sample region on the field of view. Accordingly, the composite signal can be processed so as to extract LIDAR data (radial velocity and/or distance between a LIDAR system and an object external to the LIDAR system) for the sample region.
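The text above does not spell out how the LIDAR data are extracted from the composite signal. As a hedged illustration only, the sketch below assumes the standard triangular-chirp FMCW arithmetic, in which the beat frequencies measured during an up-chirp data period and a down-chirp data period are combined to recover distance and radial velocity; the function name and sign conventions are hypothetical, not from the text.

```python
C = 299_792_458.0   # speed of light (m/s)

def lidar_data(f_up, f_down, chirp_rate, wavelength):
    """Recover distance (m) and radial velocity (m/s) from the beat
    frequencies (Hz) of the composite signal measured during an up-chirp
    and a down-chirp data period. These are the standard FMCW relations,
    used here as an illustrative assumption; sign conventions vary by
    implementation.

    chirp_rate : magnitude of the frequency sweep rate (Hz/s)
    wavelength : carrier wavelength (m)
    """
    distance = C * (f_up + f_down) / (4.0 * chirp_rate)
    radial_velocity = wavelength * (f_down - f_up) / 4.0
    return distance, radial_velocity
```

Under these assumed relations, the sum of the two beat frequencies isolates the round-trip delay term while their difference isolates the Doppler term.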
[0035] The electronics 62 can include a light source controller 63. The light source controller 63 can operate the light source such that the outgoing LIDAR signal, and accordingly a resulting system output signal, has a particular frequency versus time pattern. For instance, the light source controller 63 can operate the light source such that the outgoing LIDAR signal, and accordingly a system output signal, has different chirp rates during different data periods. Additionally, or alternately, the light source controller 63 can operate the light source such that the outgoing LIDAR signal carries the wavelength channel that is currently desired for operation of the LIDAR system.
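As a minimal sketch of the frequency versus time pattern described above, the following hypothetical function models an outgoing LIDAR signal whose instantaneous frequency chirps up during one data period and down during the next; the triangular waveform shape and the parameter names are assumptions for illustration, since the text says only that the chirp rates differ between data periods.

```python
def chirp_frequency(t, f0, data_period, rate_up, rate_down):
    """Instantaneous frequency (Hz) at time t for a repeating
    two-data-period pattern: an up-chirp at rate_up followed by a
    down-chirp at rate_down. Illustrative assumption only."""
    t = t % (2.0 * data_period)      # pattern repeats every two periods
    if t < data_period:              # data period 1: frequency ramps up
        return f0 + rate_up * t
    # data period 2: frequency ramps back down from the peak
    return f0 + rate_up * data_period - rate_down * (t - data_period)
```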
[0036] The LIDAR chip can optionally include a control branch for controlling the operation of the light source 10. For instance, the control branch can provide a feedback loop that the light source controller 63 uses in operating the light source such that the outgoing LIDAR signal has the desired frequency versus time pattern.
[0037] The control branch includes a directional coupler 66 that moves a portion of the outgoing LIDAR signal from the utility waveguide 12 onto a control waveguide 68. The coupled portion of the outgoing LIDAR signal serves as a tapped signal. Although Figure 1 illustrates a directional coupler 66 moving the portion of the outgoing LIDAR signal onto the control waveguide 68, other signal-taps can be used to move a portion of the outgoing LIDAR signal from the utility waveguide 12 onto the control waveguide 68. Examples of suitable signal taps include, but are not limited to, Y-junctions, and MMIs.
[0038] The control waveguide 68 carries the tapped signal to a feedback system 70. The feedback system 70 can include one or more light sensors (not shown) that convert light signals carried by the feedback system 70 to electrical signals that are output from the feedback system 70. The light source controller 63 can receive the electrical signals output from the feedback system 70. During operation, the light source controller 63 can adjust the frequency of the outgoing LIDAR signal in response to the electrical signals output from the feedback system 70. An example of a suitable construction and operation of feedback system 70 and light source controller 63 is provided in U.S. Patent Application serial number 16/875,987, filed on 16 May 2020, entitled "Monitoring Signal Chirp in Outbound LIDAR Signals," and incorporated herein in its entirety; and also in U.S. Patent Application serial number 17/244,869, filed on 29 April 2021, entitled "Reducing Size of LIDAR System Control Assemblies," and incorporated herein in its entirety.
[0039] Although Figure 1 illustrates the electronics 62 as a component that is separate from the signal processor(s) 28, a portion of the electronics 62 can be included in each of the signal processor(s) 28.
[0040] A LIDAR system can include a LIDAR chip with one or more LIDAR cores 4. As an example, Figure 2 illustrates a LIDAR chip that includes multiple different cores. The cores are each labeled corek where k represents a core index with a value from 1 to K. Each of the LIDAR cores can be constructed as disclosed in the context of Figure 1 or can have an alternate construction. Each of the LIDAR cores outputs a different outbound LIDAR signal. The outbound LIDAR signal output from the core labeled corek carries LIDAR channel Sk,i,m where k represents the core index, m represents the wavelength channel index, and i represents the alternate waveguide index. As a result, LIDAR channel Sk,i,m is a function of the wavelength channel index m, the alternate waveguide index i, and the core index k. As an example, the outbound LIDAR signal carrying LIDAR channel Sk,i,m is output from corek, carries wavelength channel m, and includes light that was received by alternate waveguide index i and output from alternate waveguide index i. Accordingly, the outbound LIDAR signal carrying LIDAR channel Sk,i,m is output from corek and carries channel Cm,i. As an example, an outbound LIDAR signal output from core k=1 and carrying light from an outgoing LIDAR signal received at alternate waveguide i=1 carries the LIDAR channels labeled S1,1,1-M in Figure 2. In contrast, an outbound LIDAR signal output from core k=1 and carrying light from alternate waveguide i=2 carries the LIDAR channels labeled S1,2,1-M in Figure 2. The outbound LIDAR signals labeled S1,1,1-M are shown as spaced apart from the outbound LIDAR signals labeled S1,2,1-M as a result of the spatial separation between the facets of the different alternate waveguides. In the labels S1,1,1-M and S1,2,1-M, the variable 1-M represents 1 through M and accordingly indicates that each of the different wavelength channels (1-M) travels the labeled portion of the pathway. For instance, each of the different wavelength channels 1-M that are output from core k=1 while carrying light from an outgoing LIDAR signal directed to alternate waveguide i=2 travels the portion of the optical pathway labeled S1,2,1-M in Figure 2.
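The three-index channel labeling above can be summarized with a short sketch that enumerates every LIDAR channel Sk,i,m for a system with K cores, N alternate waveguides per core, and M wavelength channels. The function name is hypothetical; the 1-based indexing follows the text.

```python
from itertools import product

def lidar_channels(K, N, M):
    """Enumerate the (k, i, m) index triples of every LIDAR channel
    Sk,i,m: k = core index (1..K), i = alternate waveguide index (1..N),
    m = wavelength channel index (1..M)."""
    return list(product(range(1, K + 1), range(1, N + 1), range(1, M + 1)))
```

For example, for K=2 cores, N=3 alternate waveguides, and M=4 wavelength channels there are 2*3*4 = 24 distinct LIDAR channels.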
[0041] The LIDAR system can include an optical component assembly 75 that receives the outbound LIDAR signal from each of the different cores and outputs system output signals that each includes, consists of, or consists essentially of light from a different one of the outbound LIDAR signals. When the optical assembly includes active components such as movable mirrors, the active components can be operated by assembly electronics 280 so as to steer the system output signals to different sample regions in the LIDAR system’s field of view.
[0042] Figure 2 illustrates an optical component assembly 75 that optionally includes a signal redirector 76 that receives the outbound LIDAR signal from different cores. The signal redirector 76 changes the direction that at least a portion of the outbound LIDAR signals are traveling. Suitable signal redirectors 76 include, but are not limited to, lenses such as convex lenses, mirrors such as concave mirrors, and combinations of these elements.

[0043] The optical assembly illustrated in Figure 2 also includes a wavelength chromatic disperser 77 that receives the outbound LIDAR signals. In some instances, the wavelength chromatic disperser 77 receives all or a portion of the outbound LIDAR signals from a signal redirector 76, from all or a portion of the LIDAR cores, or from other optical component(s) depending on the configuration of the optical component assembly 75. The wavelength chromatic disperser 77 is configured to cause chromatic dispersion such that the direction that an outbound LIDAR signal travels away from the wavelength chromatic disperser 77 is a function of the wavelength channel carried by the outbound LIDAR signal. For instance, the direction that an outbound LIDAR signal travels away from the wavelength chromatic disperser 77 changes in response to changes in the wavelength channel carried by the outbound LIDAR signal. As an example, the outbound LIDAR signals carrying the LIDAR channels labeled S1,2,1-M in Figure 2 are each received at the same location or substantially the same location on the wavelength chromatic disperser 77. The wavelength chromatic disperser 77 directs each of the outbound LIDAR signals such that when the outbound LIDAR signal carries different LIDAR channels, the outbound LIDAR signal travels away from the wavelength chromatic disperser 77 in different directions. For instance, the outbound LIDAR signal labeled S1,2,1-M can carry any of the wavelength channels m=1 through M. In particular, the outbound LIDAR signal labeled S1,2,1-M can carry the LIDAR channel S1,2,1, S1,2,2, or S1,2,3. As shown in Figure 2, the wavelength chromatic disperser 77 operates on the outbound LIDAR signal labeled S1,2,1-M such that the direction that the outbound LIDAR signal travels away from the LIDAR system changes depending on whether the outbound LIDAR signal is carrying LIDAR channel S1,2,1, S1,2,2, or S1,2,3. As a result, the electronics can scan each of the outbound LIDAR signals to different sample regions in a field of view by changing the wavelength channel carried by the outbound LIDAR signal.
[0044] In some instances, the optical component assembly 75 is configured such that changing the wavelength channel carried by an outbound LIDAR signal does not change, or does not substantially change, the location on the wavelength chromatic disperser 77 where the outbound LIDAR signal is received. Depending on the configuration of the wavelength chromatic disperser 77, an outbound LIDAR signal carrying different wavelength channels can exit from the wavelength chromatic disperser 77 at the same or substantially the same location or can exit from the wavelength chromatic disperser 77 from different locations.

[0045] Suitable wavelength chromatic dispersers 77 can include or consist of one or more dispersive media and/or have a wavelength dependent refractive index. Examples of suitable wavelength chromatic dispersers 77 include, but are not limited to, reflective diffraction gratings, transmissive diffraction gratings, and prisms. In some instances, the wavelength chromatic disperser 77 is configured to provide a level of dispersion greater than 0.005°/nm, 0.01°/nm, or 0.02°/nm and less than 0.04°/nm, 0.08°/nm, or 0.12°/nm.
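The dispersion levels quoted above imply a bounded angular sweep for wavelength-based steering. The sketch below estimates that sweep as dispersion times total wavelength span; the 1 nm channel spacing used in the example is an assumed value, not from the text.

```python
def wavelength_steering_sweep(dispersion_deg_per_nm, channel_spacing_nm, M):
    """Approximate angular sweep (degrees) available from switching among
    M wavelength channels with the given disperser dispersion. The linear
    approximation and the channel spacing are illustrative assumptions."""
    return dispersion_deg_per_nm * channel_spacing_nm * (M - 1)

# Example: 0.02 deg/nm dispersion, assumed 1 nm channel spacing, M = 16
sweep = wavelength_steering_sweep(0.02, 1.0, 16)   # 0.3 degrees
```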
[0046] The electronics can scan each of the outbound LIDAR signals to different sample regions in the field of view by changing the alternate waveguide that receives the outgoing LIDAR signal. For instance, Figure 2 illustrates an outbound LIDAR signal that carries LIDAR channels labeled S1,1,1-M and LIDAR channels labeled S1,2,1-M. When the outbound LIDAR signal carries LIDAR channels S1,1,1-M, the outbound LIDAR signal is output from the alternate waveguide with alternate waveguide index i=1 on the core with core index k=1. In contrast, when the outbound LIDAR signal carries LIDAR channels S1,2,1-M, the outbound LIDAR signal is output from the alternate waveguide with alternate waveguide index i=2 on the core with core index k=1. As a result, the outbound LIDAR signal is output from different alternate waveguides on the same core. As is evident from comparing the direction that the outbound LIDAR signal (system output signal) travels away from the LIDAR system when carrying LIDAR channels S1,1,1-M to the direction that the outbound LIDAR signal travels away from the LIDAR system when carrying LIDAR channels S1,2,1-M, the change in the alternate waveguide that receives the outgoing LIDAR signal causes a change in the direction that the outbound LIDAR signal and the resulting system output signal travel away from the LIDAR system. Figure 2 also illustrates that the change in direction occurs when the outbound LIDAR signal carries the same wavelength channel during the change in alternate waveguide and/or when the outbound LIDAR signal carries different wavelength channels during the change in alternate waveguide. As a result, the electronics can scan each of the outbound LIDAR signals to different sample regions in a field of view by changing the alternate waveguide from which the outbound LIDAR signal originates. For instance, the electronics can scan the system output signal to different sample regions in a field of view by changing the alternate waveguide that receives the light included in the outbound LIDAR signal.
[0047] Figure 2 also illustrates that the outbound LIDAR signals from different cores travel away from the LIDAR system in different directions. For instance, Figure 2 illustrates an outbound LIDAR signal that carries LIDAR channels labeled S1,1,1-M and an outbound LIDAR signal that carries LIDAR channels labeled S3,1,1-M. As a result, the outbound LIDAR signal that carries LIDAR channels S1,1,1-M and the outbound LIDAR signal that carries LIDAR channels labeled S3,1,1-M are output from alternate waveguides that have the same alternate waveguide index (i=1) on different cores (k=1 versus k=3). Comparing the direction that the outbound LIDAR signal (system output signal) carrying LIDAR channels S1,1,1-M travels away from the LIDAR system to the direction that the outbound LIDAR signal (system output signal) carrying LIDAR channels S3,1,1-M travels away from the LIDAR system illustrates that the outbound LIDAR signals (system output signals) from different cores travel away from the LIDAR system in different directions. This change in direction occurs when the outbound LIDAR signals carry the same wavelength channel or different wavelength channels and/or are output from alternate waveguides with the same or different waveguide indices.
[0048] The electronics can operate the signal directors 14 on different cores so as to change the alternate waveguide 16 that receives the outgoing LIDAR signal and steer the resulting system output signal from each of the cores within the LIDAR system's field of view. Accordingly, the electronics can operate the signal directors 14 on different cores so as to steer the system output signals to different sample regions within the core's field of view. As a result, each of the signal directors 14 can operate as a solid-state beam steerer. A suitable method of operating the signal directors 14 on different cores and/or the one or more beam steering components 78 so as to steer the system output signals to different sample regions within the LIDAR system's field of view is disclosed in U.S. Patent Application serial number 17/580,623, filed on January 20, 2022, entitled "Imaging System Having Multiple Cores," and incorporated herein in its entirety.
[0049] The LIDAR chip and/or the optical component assembly 75 can be constructed such that each of the LIDAR channels Sk,i,m is incident on the chromatic disperser 77 at a different location and/or at a different angle of incidence. For instance, the LIDAR chip and/or the optical component assembly 75 can be constructed such that an outbound LIDAR signal carrying different LIDAR channels Sk,i,m is incident on the chromatic disperser 77 at a different location and/or at a different angle of incidence, and outbound LIDAR signals carrying different LIDAR channels Sk,i,m are incident on the chromatic disperser 77 at different locations and/or at different angles of incidence. This difference in incident locations and/or incident angles can provide the difference in directions that the different LIDAR channels Sk,i,m, and accordingly the different system output signals, travel away from the LIDAR system.
[0050] The LIDAR channels from different alternate waveguides (Sk,i,1-M) can be incident on the chromatic disperser 77 at a different location and/or at a different angle of incidence as a result of the facets of alternate waveguides 16 on the same core being spaced apart as shown in Figure 1 and Figure 2 and/or as a result of the facets of the alternate waveguides 16 on different cores being spaced apart as shown in Figure 2. For instance, if the optical component assembly 75 of Figure 2 excluded the signal redirector 76 and the alternate waveguides 16 were constructed such that the outbound LIDAR signals exit the different alternate waveguides 16 traveling in parallel or substantially in parallel, each of the outbound LIDAR signals carrying a LIDAR channel from a different one of the alternate waveguides (Sk,i,1-M) would be incident on the chromatic disperser 77 at a different location. As is evident from the functionality of dispersers 77 such as prisms, the outbound LIDAR signal(s) being incident on the chromatic disperser 77 at different locations results in system output signals that carry different LIDAR channels traveling away from the LIDAR system in different directions.
[0051] The construction of the signal redirector 76 can be selected such that the LIDAR channels from different alternate waveguides (Sk,i,1-M) can be incident on the chromatic disperser 77 at a different location and/or at a different angle of incidence. For instance, the signal redirector 76 in Figure 2 is a concave lens. The lens is positioned such that an outbound LIDAR signal transmitted from different alternate waveguides (Sk,i,1-M) on the same core is incident on the signal redirector 76 at different angles of incidence and/or outbound LIDAR signals from different cores are incident on the signal redirector 76 at different angles of incidence. As a result, an outbound LIDAR signal output from different alternate waveguides (Sk,i,1-M) on the same core travels away from the signal redirector 76 in a different direction and/or outbound LIDAR signals output from different cores travel away from the signal redirector 76 in different directions. Outbound LIDAR signal(s) traveling away from the signal redirector 76 in different directions are incident on the chromatic disperser 77 at different locations and/or at different angles of incidence. As is evident from Figure 2, the different incident locations and/or angles of incidence of the outbound LIDAR signals on the disperser 77 provide system output signals that can travel away from the LIDAR system in different directions. In some instances, the system output signals travel away from the LIDAR system in non-parallel directions.
[0052] There are K*N different LIDAR channels that carry the same wavelength. As a result, there can be at least K*N directions in which the LIDAR system can output a system output signal. Accordingly, the LIDAR system can provide solid-state steering (steering without moving parts) of system output signals that carry the same wavelength channel in K*N different directions. In some instances, the LIDAR system concurrently outputs K system output signals where one system output signal is output from each of the cores and the system output signals from different cores each carries the same wavelength channel. In these instances, each of the K system output signals can be steered in N different directions.

[0053] As shown in Figure 2, the outbound LIDAR signals that exit from the wavelength chromatic disperser 77 can serve as system output signals for the LIDAR system; however, the optical component assembly 75 can optionally include other optical components. For instance, Figure 2 illustrates the optical component assembly 75 including one or more beam steering components 78 that receive the outbound LIDAR signals output from the wavelength chromatic disperser 77. The portion of the outbound LIDAR signals output from the one or more beam steering components 78 serves as the system output signals for the LIDAR system. The electronics can operate the one or more beam steering components 78 so as to steer each of the system output signals to different sample regions in the field of view. As is evident from the arrows labeled A and B in Figure 2, the one or more beam steering components 78 can be configured such that the electronics can steer the system output signals in one dimension or in two dimensions. As a result, the one or more beam steering components 78 can function as a beam-steering mechanism that is operated by the electronics so as to steer the system output signals within the field of view of the LIDAR system. Accordingly, the one or more system output signals output by the LIDAR system can be steered within the LIDAR system's field of view by operating the one or more beam steering components 78 in combination with switching the wavelength channel carried by all or a portion of the system output signals and/or switching the selection of alternate waveguides that output the system output signals.
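The direction counts discussed above can be expressed as simple arithmetic. The sketch below computes the K*N solid-state direction count for a fixed wavelength channel, plus a hypothetical K*N*M total that additionally assumes every (core, waveguide, wavelength) combination maps to a distinct direction, which the text does not guarantee.

```python
def solid_state_directions(K, N):
    """Distinct solid-state output directions for one wavelength channel:
    K cores times N alternate waveguides per core (per the text)."""
    return K * N

def total_addressable_directions(K, N, M):
    """Hypothetical total when wavelength switching is added, assuming
    every (core, waveguide, wavelength) combination gives a distinct
    direction."""
    return K * N * M

# Example: K = 4 cores and N = 8 alternate waveguides give 32 solid-state
# directions per wavelength channel
```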
[0054] Suitable beam steering components 78 include, but are not limited to, movable mirrors, polygon mirrors, MEMS mirrors, optical phased arrays (OPAs), optical gratings, and actuated optical gratings. In some instances, the signal redirector 76, wavelength chromatic disperser 77, and/or the one or more beam steering components 78 are configured to operate on the outbound LIDAR signals such that the system output signals are collimated or substantially collimated as they travel away from the LIDAR system. Additionally, or alternately, the LIDAR system can include one or more collimating optical components (not illustrated) that operate on the outbound LIDAR signals and/or the system output signals such that the system output signals are collimated or substantially collimated as they travel away from the LIDAR system.
[0055] The system output signals can be reflected by an object located outside of the LIDAR system. All or a portion of the reflected light from a system output signal can return to the LIDAR system as a system return signal. When the LIDAR system includes one or more beam steering components 78, each of the system return signals is received at the one or more beam steering components 78. The one or more beam steering components 78 output at least a portion of each of the system return signals as a returned signal. The returned signals are each received at the chromatic disperser 77. When the LIDAR system excludes the one or more beam steering components 78, each of the system return signals can serve as one of the returned signals received at the chromatic disperser 77. The chromatic disperser 77 directs each returned signal to the one or more signal redirectors 76. The one or more signal redirectors 76 output at least a portion of each one of the returned signals as a LIDAR input signal. Each of the different LIDAR input signals is received by one of the alternate waveguides on a different one of the cores 4. Each of the LIDAR input signals includes or consists of light from the outbound LIDAR signal that was output from the core that receives the LIDAR input signal. Additionally, the LIDAR input signal received at an alternate waveguide includes or consists of the light from the outbound LIDAR signal and system output signal that were output from the same alternate waveguide.
[0056] The optical component assembly 75 can have configurations other than the configuration shown in Figure 2. For instance, the one or more beam steering components 78 can be positioned between the signal redirector 76 and the LIDAR chip. Additionally, the optical component assembly 75 can include optical components that are not illustrated. For instance, the optical component assembly 75 can include one or more lenses configured to increase collimation of the outbound LIDAR signals and/or other signals derived from the outbound LIDAR signals and/or that include light from the outbound LIDAR signals.
[0057] Although the light source 10 is shown as being positioned off the LIDAR chip, all or a portion of the light source 10 can be located on the LIDAR chip. Figure 3 illustrates an example of a light source 10 used in conjunction with the LIDAR system of Figure 2. The light source 10 includes multiple laser sources 80. Each of the laser sources 80 is configured to output a wavelength channel signal on a source waveguide 82. Each wavelength channel signal can carry one of the m=1-M wavelength channels. For instance, Figure 3 illustrates one possible arrangement where the source waveguide 82 that guides the wavelength channel signal carrying wavelength channel m is labeled λm, where m represents the wavelength channel index and the wavelength channel(s) are each associated with wavelength channel index m=1 through m=M.
[0058] Each of the source waveguides 82 carries a wavelength channel signal to a signal mixer 84 that combines the wavelength channel signals so as to form a light signal that is received on a channel waveguide 85. The light signal mixer 84 can be a wavelength dependent multiplexer including, but not limited to, an Arrayed Waveguide Grating (AWG) multiplexer and an echelle grating multiplexer. The light signal mixer 84 can also be a wavelength independent mixer including, but not limited to, cascaded Y-junctions, cascaded MMI splitters, and a star coupler.
[0059] A light signal splitter 86 receives the light signal from the channel waveguide 85. The light signal splitter 86 is configured to divide the light signal among multiple core waveguides 87. The portion of the light signal received by a core waveguide 87 can serve as an outgoing LIDAR signal precursor. Each of the core waveguides 87 carries one of the outgoing LIDAR signal precursors to the utility waveguide 12 on a different one of the cores 4. The portion of the outgoing LIDAR signal precursor received by a utility waveguide 12 serves as the outgoing LIDAR signal received by the utility waveguide 12. The light signal splitter 86 can be a wavelength independent splitter including, but not limited to, cascaded Y-junctions, cascaded MMI splitters, and a star coupler.
[0060] The outgoing LIDAR signal, the outbound LIDAR signal, and the system output signal each carries light from one of the wavelength channel signals. Since each of the wavelength channel signals carries one of the wavelength channels, the electronics can operate the light source 10 such that the outgoing LIDAR signal received by the utility waveguides 12 of the different cores carries one of the wavelength channels. For instance, the electronics can operate the laser sources 80 independently such that only one of the laser sources 80 outputs a wavelength channel signal while the other laser sources 80 do not output a wavelength channel signal. As an example, the electronics can turn on the laser source 80 that outputs the desired wavelength channel signal and turn off the source(s) 80 that do not output the desired wavelength channel signal. When the laser sources 80 are each a gain element or laser chip, the light source controller 63 can apply an electrical current through the gain element or laser cavity in one of the laser sources 80 so as to cause that laser source to output a wavelength channel signal while refraining from applying an electrical current through the gain element or laser cavity in the one or more remaining laser source(s) 80 so they do not output a wavelength channel signal. As a result, the outgoing LIDAR signal received by the utility waveguides 12 of different cores carries one of the wavelength channels. The electronics can also operate the laser source(s) 80 so as to change the wavelength channel that is present in the outgoing LIDAR signals received by the cores. For instance, the light source controller 63 can change the laser source to which the electrical current is applied.
The light source to which the electrical current is applied can be the light source that outputs the wavelength channel signal that carries the wavelength channel that is currently desired for the outgoing LIDAR signals and accordingly the system output signals.

[0061] The light source 10 can optionally include one or more modulators 90 that are each positioned so as to modulate one of the wavelength channel signals. For instance, the light source 10 can optionally include one or more modulators 90 positioned along each of the source waveguides 82. The light source controller 63 can operate each of the modulators 90 so as to allow a wavelength channel signal carried in a source waveguide 82 to pass the modulator 90 without attenuation from the modulator or such that the wavelength channel signal carried in a source waveguide 82 is attenuated by the modulator. The attenuation can be sufficient that the attenuated wavelength channel is not substantially present in the channel waveguide 85. As a result, the attenuation can be sufficient that the attenuated wavelength channel is not substantially present in the outgoing LIDAR signals output from the light source and is accordingly not substantially present in the system output signals output from the LIDAR system. As a result, as an alternative to the light source controller 63 turning laser sources 80 on and off so as to select the wavelength channel carried in the system output signals, the light source controller 63 can keep the laser sources that generate the needed channel wavelengths "on" and also operate the one or more modulators 90 so the outgoing LIDAR signals carry the currently desired wavelength channel. Accordingly, the light source controller 63 can keep the laser sources that generate the channel wavelengths that will be needed "on" while operating the one or more modulators 90 so the system output signal(s) carry the currently desired wavelength channel. As an example, when it is desired for the outgoing LIDAR signals and system output signals to carry wavelength channel λ2, the light source controller 63 can operate the laser sources 80 that generate channel wavelengths λ1-λM such that each of these laser sources 80 concurrently outputs a wavelength channel signal and can operate the modulators 90 such that the wavelength channel signal that carries wavelength channel λ2 passes the associated modulator 90 but the wavelength channel signals carrying λ1 and λ3 through λM are attenuated such that wavelength channel λ1 and wavelength channels λ3 through λM are not substantially present in the resulting outgoing LIDAR signals and are accordingly not substantially present in the resulting system output signals. Suitable modulators 90 include, but are not limited to, Variable Optical Attenuators (VOAs) and Mach-Zehnder modulators. An example of a suitable optical attenuator can be found in U.S. Patent Application serial number 17/396,616, filed on August 6, 2021, entitled "Carrier Injector Having Increased Compatibility," and incorporated herein in its entirety.

[0062] When a light source 10 is constructed as shown in Figure 3, each of the outgoing LIDAR signals and system output signals concurrently carries the same wavelength channel; however, other light source configurations are possible.
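The modulator-based channel selection described above can be sketched as a simple settings map: with all laser sources kept on, only the modulator for the currently desired wavelength channel passes its signal. The function name and the binary pass/attenuate model are illustrative assumptions; real VOAs and Mach-Zehnder modulators provide finite, not infinite, extinction.

```python
def modulator_settings(selected_m, M):
    """Pass/attenuate setting for each of the M source-waveguide
    modulators 90 when wavelength channel selected_m should reach the
    outgoing LIDAR signal. Binary on/off model is an illustrative
    simplification."""
    return {m: ("pass" if m == selected_m else "attenuate")
            for m in range(1, M + 1)}
```

For example, selecting channel m=2 out of M=4 passes only the second wavelength channel signal while the other three are attenuated.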
[0063] Suitable waveguides for use as the source waveguide 82, channel waveguide 85, and/or the core waveguides 87 include, but are not limited to, optical fibers and planar optical waveguides. Although Figure 3 illustrates the light source 10 as separate from the LIDAR chip, all or a portion of the light source 10 can be positioned on the LIDAR chip and/or integrated into the LIDAR chip. As a result, suitable waveguides for use as the source waveguide 82, channel waveguide 85, and/or the core waveguides 87 also include, but are not limited to, rib waveguides, ridge waveguides, and buried waveguides.
[0064] All or a portion of the electronics 62 associated with different cores can optionally be consolidated in the assembly electronics 280 illustrated in Figure 2. For instance, the light source controller 63 can be included in the assembly electronics 280 rather than in the electronics 62 associated with each of the individual cores. The combination of the electronics 62 for each of the one or more cores and the assembly electronics 280 serve as the electronics for the LIDAR system. The LIDAR system electronics can operate the LIDAR system and generate the LIDAR data for the LIDAR system.
[0065] The assembly electronics 280 can be positioned on the LIDAR chip or can be external to the LIDAR chip. The assembly electronics 280 can collect or generate the LIDAR data results from different cores, and/or can coordinate the LIDAR data results from different cores so as to assemble LIDAR data results for the LIDAR system’s field of view. For instance, the assembly electronics 280 can stitch together LIDAR data results for the fields of view of different cores so as to form LIDAR data results for the LIDAR system’s field of view.
[0066] Although Figure 2 and Figure 3 illustrate four cores on the LIDAR chip (K=4), the LIDAR chip can include one (K=1), two (K=2), or more than two cores. Suitable numbers of cores on the LIDAR chip and suitable values for K include, but are not limited to, numbers greater than or equal to 2, 4, or 6 and/or less than 32, 64, or 128.
[0067] Figure 2 illustrates each of the cores associated with the electronics 62 that operate the core and/or generate the LIDAR data for the core. However, the electronics 62 for different cores and the assembly electronics 280 need not be separate and can be incorporated into the same electronics for the system as shown in Figure 3.
[0068] The distance between the facets of alternate waveguides 16 can be selected to achieve a particular pattern for the density of pixels in the field of view of the LIDAR system. For instance, Figure 4A illustrates a LIDAR system constructed according to Figure 2. In order to simplify the illustration, the LIDAR system is shown with K=2 cores that each has N=4 alternate waveguides. The facets of the alternate waveguides 16 are arranged in an array. The distance between the facets of two adjacent alternate waveguides 16 is labeled d. The distance d represents a center-to-center distance. The distances between the facets of the alternate waveguides 16 are selected such that the density of the facets is higher in the middle of the facet array than toward the edges of the array.
[0069] Figure 4A also illustrates the system output signals output from the optical component assembly 75 as a result of the illustrated configuration of alternate waveguides 16. The increased density of the facets in the middle of the facet array can increase the density of the system output signals near the middle of the array of system output signals output by the LIDAR system as shown in Figure 4A. As will be described below, the increased density of the system output signals near the middle of the array of system output signals output by the LIDAR system can produce an increased density of pixels in the center of the field of view. [0070] Each of the system output signals illustrated in Figure 4A carries the same wavelength channel m=1. However, the light source controller 63 can operate the light source 10 so as to change the wavelength channel carried by the system output signals. For instance, Figure 4B illustrates the LIDAR system of Figure 4A after the system output signals have changed from carrying wavelength channel m=1 to wavelength channel m=2. The system output signals shown by dashed lines in Figure 4B represent system output signals carrying wavelength channel m=2, and the system output signals shown by solid lines in Figure 4B represent system output signals carrying wavelength channel m=1 and are arranged as shown in Figure 4A. As illustrated, the change in wavelength channel causes a shift in the direction that the system output signals travel away from the LIDAR system.
[0071] Figure 4C is a schematic of the relationship between the field of view and the LIDAR system shown in Figure 4A and Figure 4B. The field of view is represented by the dashed lines that extend from the LIDAR system to an imaginary surface within the field of view. In order to show the extent of the field of view, the imaginary surface is positioned at a maximum operational distance (labeled dM) from the LIDAR system. The maximum operational distance can generally be considered the maximum distance for which the LIDAR system is configured to provide reliable LIDAR data. In reality, the imaginary surface can have a curved shape due to the fixed nature of the maximum operational distance; however, a planar surface is shown to simplify the following discussion. [0072] While the LIDAR system can concurrently output multiple system output signals, Figure 4C illustrates the LIDAR system outputting a single system output signal. In Figure 4C, a portion of a sample region illuminated by the system output signal is illustrated by the polygon on the plane of Figure 4C. The electronics generate LIDAR data in a series of cycles by sequentially illuminating different sample regions in the field of view for the LIDAR system. LIDAR data can be generated for each of the sample regions. For instance, a sample region is the portion of the field of view that is illuminated during the cycle that is used to generate the LIDAR data for the sample region. As a result, each of the LIDAR data results is associated with one of the cycles and one of the sample regions. The electronics can stitch together the LIDAR data results from multiple different sample regions to generate LIDAR data results for the field of view.
[0073] In Figure 4C, only a portion of the illustrated sample region is shown as illuminated by the system output signal because the electronics can operate the one or more beam steering mechanisms to steer the system output signal during the data period(s) associated with the sample region. For instance, the one or more beam steering mechanisms can scan the system output signal in the direction of the arrow labeled A for the duration of a cycle. This scan can cause the system output signal to illuminate the length of the polygon labeled ct during the cycle. Although the sample region is shown as two dimensional in Figure 4C, the sample region is three-dimensional and can extend from the rectangle on the illustrated plane back to the LIDAR system. As a result, each sample region can serve as a three-dimensional pixel within the field of view.
[0074] Figure 4D is a sideview of an example of the imaginary plane from Figure 4C. The imaginary plane can be a two-dimensional representation of the field of view of the LIDAR system. The left side of the field of view contains a column of sample regions illustrated by dashed lines and labeled “solid-state.” The sample regions in the column labeled “solid-state” can result from scanning the field using the signal director 14 but not using the one or more beam steering mechanisms to steer the system output signals. As a result, the sample regions in the column labeled “solid-state” are a result of solid-state scanning of the system output signals. The sample regions in the column labeled “solid-state” can result from a LIDAR system having K=4 cores that each has N=4 alternate waveguides. When the outgoing LIDAR signal is directed to one of the alternate waveguides, the LIDAR system can concurrently output four different system output signals that are each directed to a different sample region. For instance, when the outgoing LIDAR signal is directed to one of the alternate waveguides, the LIDAR system can concurrently output four different system output signals that are each directed to one of the sample regions illustrated by a combination of solid and dashed lines in the column labeled “solid-state.” The sample region to which each of the four system output signals is directed can be changed by changing the alternate waveguide that receives the outgoing LIDAR signal through operation of the signal director 14. As a result, the LIDAR system can illuminate the sixteen (K*N) different sample regions in the column labeled “solid-state” by operation of the signal director 14 so as to change the alternate waveguide from which the system output signals originate.
[0075] The electronics can also operate the one or more beam steering mechanisms to steer the system output signals to the sample regions illustrated by the solid lines in Figure 4D. For instance, Figure 4D also shows an axis labeled x1 and an axis labeled x2. The movement of the system output signals in the direction of the axis labeled x1 can be achieved through the use of the signal director 14. The movement of the system output signals in the direction of the axis labeled x2 can be achieved through the use of the one or more beam steering mechanisms. The axis labeled x2 can also represent time.
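The solid-state scan order described above can be sketched as a short enumeration. This is an illustrative sketch only (the function name and tuple layout are invented): with K cores and N alternate waveguides per core, directing the outgoing LIDAR signal to alternate waveguide n produces K concurrent system output signals, so stepping the signal director through n = 0..N-1 covers all K*N sample regions in the “solid-state” column.

```python
def solid_state_scan(K, N):
    """Enumerate the K*N sample regions reached by solid-state scanning.

    Each step of the signal director selects one alternate waveguide n;
    at that step, all K cores concurrently illuminate one sample region each.
    Returns (core_index, waveguide_index) pairs in illumination order.
    """
    regions = []
    for n in range(N):        # signal director steps through alternate waveguides
        for k in range(K):    # the K cores emit concurrently at each step
            regions.append((k, n))
    return regions
```

With K=4 and N=4, as in the Figure 4D example, the enumeration yields the sixteen distinct sample regions of the “solid-state” column, four of which are illuminated concurrently at each signal-director setting.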
[0076] The LIDAR system can be constructed such that the field of view has one or more concentrated regions and one or more diluted regions. The concentration of the sample regions in each of the one or more concentrated regions is higher than the concentration of sample regions in each of the one or more diluted regions. The field of view shown in Figure 4D includes a concentrated region (labeled hd) between diluted regions (labeled ld). The concentrated region results from the configuration of alternate waveguides disclosed in the context of Figure 4A. The increased density of the sample regions in the concentrated region can be a result of the increased density of the facets in the middle of the facet array and the resulting increase in the density of the system output signals near the middle of the array of system output signals output by the LIDAR system as shown in Figure 4A.
[0077] The LIDAR system can shift the location of the one or more concentrated regions within the field of view. For instance, in Figure 4D, the wavelength channel is held constant during the scanning of the sample regions shown in Figure 4D. As an example, each of the system output signals can carry the wavelength channel with channel index m=1 (λ1) during the scanning of the sample regions shown in Figure 4D. Figure 4E illustrates the result of the LIDAR system scanning the same field of view while the wavelength channel is held constant at a different wavelength channel. As an example, during the scanning of the sample regions shown in Figure 4E, each of the system output signals can carry the wavelength channel with channel index m=2 (λ2). The change in wavelength channel carried by the system output signals can be a result of the light source controller 63 (not shown) operating the light source so as to change the wavelength channel carried by the outgoing LIDAR signal from the wavelength channel with channel index m=1 (λ1) to the wavelength channel with channel index m=2 (λ2).
[0078] A comparison of Figure 4D and Figure 4E shows that the sample regions have shifted lower in the field of view as a result of the change in the wavelength channel carried by the system output signals. The shift in the location of the sample regions is also evident from Figure 4B. In Figure 4B, the system output signals that carry wavelength channel index m=1 (λ1) are shown by solid lines while the system output signals that carry wavelength channel index m=2 (λ2) are shown by dashed lines. The change in wavelength channel carried by each of the system output signals shifts each of the system output signals in the same direction. As a result, the shift in the locations of sample regions also shifts the location where the highest density of sample regions can be found within the field of view. Accordingly, the electronics can tune the location of the one or more concentrated regions within the field of view.
[0079] Although Figure 4A illustrates the density of the facets being higher in the middle of the facet array than toward the edges of the array, the facets of the alternate waveguides 16 can be arranged in other configurations. For instance, the density of the facets can be lower in the middle of the facet array than toward one or more edges of the array. Alternately, the density of the facets can be higher at one or more edges of the array and decrease moving toward the opposing edge of the array.
[0080] The closest pair of adjacent alternate waveguide facets 16, or one of the closest pairs of adjacent alternate waveguide facets 16, can serve as reference facets. The distance between the reference facets can serve as a reference distance. The facets can be arranged such that the distance between adjacent facets (d) becomes larger or stays the same for each adjacent pair starting at the reference facets and moving toward one or both ends of the array, and the distance between adjacent facets (d) becomes larger for at least a portion of the adjacent pairs starting at the reference facets and moving toward one or both ends of the array. The distance between adjacent facets (d) can increase linearly or non-linearly as a function of distance for each adjacent pair starting at the reference facets and moving toward one or both ends of the array. In some instances, the facets of the alternate waveguides 16 in the array are arranged such that the largest distance between adjacent facets (d) is greater than or equal to 1.5, 2, or 4 and less than 5, 10, or 20 times the reference distance. Additionally, or alternately, the facets of the alternate waveguides 16 can be arranged such that the distance between adjacent facets for all or a portion of the adjacent pairs of facets in the array is greater than 3 μm, 5 μm, or 10 μm, and less than 50 μm, 500 μm, or 1000 μm.
[0081] In some instances, the distance between adjacent facets changes such that there are greater than or equal to 3, 4, N/8, N/4, or (N-1)/2 different distances between the adjacent pairs in the array. In some instances, the distance between adjacent facets is selected such that a first portion of the adjacent pairs have a distance between the adjacent pair that is more than 1.5, 2, or 2.5 times the reference distance and less than 3, 4, or 5 times the reference distance and a second portion of the adjacent pairs have a distance between the adjacent pair that is more than 5, 6, or 7 times the reference distance and less than 8, 9, or 10 times the reference distance. In some instances, the distance between adjacent facets is selected such that a first portion of the adjacent pairs have a distance between the adjacent pair that is less than or equal to 1.5, 2, or 2.5 times the reference distance, a second portion of the adjacent pairs have a distance that is more than 1.5, 2, or 2.5 times the reference distance and less than 3, 4, or 5 times the reference distance, and a third portion of the adjacent pairs have a distance between the adjacent pair that is more than 5, 6, or 7 times the reference distance.
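One concrete way to realize a facet layout whose density is highest mid-array is sketched below. This is purely an illustrative sketch, not the patented layout: it assumes the spacing starts at the reference distance d_ref at the middle pair of facets and is multiplied by a growth factor g for each successive pair moving toward either end of the array, which satisfies the "larger or stays the same moving outward" property described above. The function name and parameters are invented for illustration.

```python
def facet_positions(n_facets, d_ref, g):
    """Compute facet center positions with spacing growing outward from the middle.

    d_ref: reference (smallest) center-to-center distance, at the middle pair.
    g: growth factor (>= 1) applied to the spacing for each successive pair.
    Returns positions in ascending order.
    """
    mid = n_facets // 2
    pos = {mid: 0.0}
    d = d_ref
    for i in range(mid + 1, n_facets):   # walk from the middle toward one end
        pos[i] = pos[i - 1] + d
        d *= g
    d = d_ref
    for i in range(mid - 1, -1, -1):     # walk from the middle toward the other end
        pos[i] = pos[i + 1] - d
        d *= g
    return [pos[i] for i in range(n_facets)]
```

For example, `facet_positions(8, 5e-6, 1.5)` spaces eight facets with a 5 μm reference distance in the middle growing to about 16.9 μm at the edges, inside the 3–1000 μm range of adjacent-facet distances recited above.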
[0082] Although Figure 4D and Figure 4E illustrate the density of the sample regions being higher in the middle of the field of view than the density of the sample regions along two edges of the field of view, the facets of the alternate waveguides 16 can be arranged to provide distributions of sample regions with other patterns. For instance, the facets of the alternate waveguides 16 can be arranged so the density of the sample regions is lower in the middle of the field of view than the density of the sample regions along two edges of the field of view. Alternately, the facets can be arranged so the density of the sample regions is higher along one edge of the field of view and decreases moving toward the opposing edge of the field.
[0083] The separation distance between the sample regions in the column labeled “solid-state” is at least partially a function of the divergence between adjacent system output signals carrying the same wavelength channel. An example of the divergence between adjacent system output signals carrying the same wavelength channel is labeled φa in Figure 4A (signal divergence). The signal divergence can be measured relative to the center ray of the system output signals. The system output signals with the smallest signal divergence can serve as reference output signals. The signal divergence for the reference output signals can serve as a reference divergence. The facets of the alternate waveguides 16 and the optical component assembly 75 can be configured such that the signal divergence becomes larger or stays the same for each pair of adjacent system output signals starting at the reference output signals and moving toward one or both edges of the field of view, and the signal divergence becomes larger for at least a portion of the adjacent system output signals starting at the reference output signals and moving toward one or both edges of the field of view. In some instances, the largest signal divergence between adjacent system output signals is greater than or equal to 2, 5, or 10 and less than 20, 50, or 100 times the reference divergence. Additionally, or alternately, the signal divergence between adjacent system output signals for all or a portion of the system output signals carrying the same channel can be greater than 0.01°, 0.1°, or 0.25° and less than 0.5°, 1°, or 2°.
[0084] In some instances, there are greater than or equal to 3, 4, N*K/8, N*K/4, or ((N*K)-1)/2 different signal divergences between the adjacent system output signals. In some instances, a first portion of the adjacent system output signals each has a signal divergence that is more than 1.25, 2, or 2.5 times the reference divergence and less than 3, 4, or 5 times the reference divergence and a second portion of the adjacent system output signals each has a signal divergence that is more than 3, 6, or 7 times the reference divergence and/or less than 8, 9, or 10 times the reference divergence.
[0085] The degree of shift in the location of the sample regions within the field of view that occurs in response to the change in the wavelength channel carried by the system output signals is at least partially a function of the divergence between system output signals that originate from the same alternate waveguide but carry adjacent wavelength channels. An example of the divergence between system output signals that originate from the same alternate waveguide but carry adjacent wavelength channels is labeled φb in Figure 4B. In some instances, the divergence between system output signals that originate from the same alternate waveguide but carry adjacent wavelength channels is greater than 1° or 5° and less than 10° or 20°.
[0086] The distance between adjacent sample regions that result from solid-state scanning is labeled ds in Figure 4D and is a measure of the distance between the sample regions in the column labeled “solid-state” at the maximum operational distance. The distance can be measured from the center ray of the system output signal that illuminates each of the sample regions. The closest pair of adjacent sample regions, or one of the closest pairs of adjacent sample regions, can serve as reference sample regions. The distance between the reference sample regions can serve as a reference sample region distance. The facets and the optical component assembly 75 can be configured such that the distance between adjacent sample regions becomes larger or stays the same for each pair of adjacent sample regions starting at the reference sample regions and moving toward one or both ends of the field of view, and the distance between adjacent sample regions (ds) becomes larger for at least a portion of the adjacent sample regions starting at the reference sample regions and moving toward one or both ends of the array. In some instances, the largest distance between adjacent sample regions (ds) is greater than or equal to 1.1, 2, or 2.5 and less than 3, 4, or 9 times the reference sample region distance. Additionally, or alternately, the sample regions can be arranged such that the distance between adjacent sample regions for all or a portion of the adjacent pairs of sample regions is greater than 1 cm, 50 cm, or 1 m, and less than 2 m, 5 m, or 10 m.
[0087] There can be N*K different sample regions that result from solid-state scanning of the field of view. In some instances, there are greater than or equal to 3, 4, N*K/8, N*K/4, or ((N*K)-1)/2 different distances between the adjacent sample regions (ds). In some instances, a first portion of the adjacent sample regions each has a distance between the adjacent pair that is more than 1.2, 2, or 2.5 times the reference sample region distance and less than 3, 4, or 5 times the reference sample region distance and a second portion of the adjacent pairs each has a distance between the adjacent pair that is more than 3, 6, or 7 times the reference sample region distance and/or less than 8, 9, or 10 times the reference sample region distance.
[0088] Figure 5A and Figure 5B illustrate an example of a light signal processor that is suitable for use as the light signal processor 28 in a LIDAR system constructed according to Figure 1. The light signal processor includes an optical-to-electrical assembly configured to convert the light signals to electrical signals. Figure 5A is a schematic of an example of a suitable optical-to-electrical assembly that includes a first splitter 200 that divides the comparative signal received from the comparative waveguide 26 onto a first comparative waveguide 204 and a second comparative waveguide 206. The first comparative waveguide 204 carries a first portion of the comparative signal to a light combiner 211. The second comparative waveguide 206 carries a second portion of the comparative signal to a second light combiner 212.
[0089] The light signal processor of Figure 5A also includes a second splitter 202 that divides the reference signal received from the reference waveguide 32 onto a first reference waveguide 210 and a second reference waveguide 208. The first reference waveguide 210 carries a first portion of the reference signal to the light combiner 211. The second reference waveguide 208 carries a second portion of the reference signal to the second light combiner 212. [0090] The second light combiner 212 combines the second portion of the comparative signal and the second portion of the reference signal into a second composite signal. Due to the difference in frequencies between the second portion of the comparative signal and the second portion of the reference signal, the second composite signal is beating between the second portion of the comparative signal and the second portion of the reference signal. The first composite signal and the second composite signal are each an example of a composite signal.
[0091] The second light combiner 212 also splits the resulting second composite signal onto a first auxiliary detector waveguide 214 and a second auxiliary detector waveguide 216. The first auxiliary detector waveguide 214 carries a first portion of the second composite signal to a first auxiliary light sensor 218 that converts the first portion of the second composite signal to a first auxiliary electrical signal. The second auxiliary detector waveguide 216 carries a second portion of the second composite signal to a second auxiliary light sensor 220 that converts the second portion of the second composite signal to a second auxiliary electrical signal. Examples of suitable light sensors include germanium photodiodes (PDs), and avalanche photodiodes (APDs).
[0092] In some instances, the second light combiner 212 splits the second composite signal such that the portion of the comparative signal (i.e. the portion of the second portion of the comparative signal) included in the first portion of the second composite signal is phase shifted by 180° relative to the portion of the comparative signal (i.e. the portion of the second portion of the comparative signal) in the second portion of the second composite signal but the portion of the reference signal (i.e. the portion of the second portion of the reference signal) in the second portion of the second composite signal is not phase shifted relative to the portion of the reference signal (i.e. the portion of the second portion of the reference signal) in the first portion of the second composite signal. Alternately, the second light combiner 212 splits the second composite signal such that the portion of the reference signal (i.e. the portion of the second portion of the reference signal) in the first portion of the second composite signal is phase shifted by 180° relative to the portion of the reference signal (i.e. the portion of the second portion of the reference signal) in the second portion of the second composite signal but the portion of the comparative signal (i.e. the portion of the second portion of the comparative signal) in the first portion of the second composite signal is not phase shifted relative to the portion of the comparative signal (i.e. the portion of the second portion of the comparative signal) in the second portion of the second composite signal. Examples of suitable light sensors include germanium photodiodes (PDs), and avalanche photodiodes (APDs).
[0093] The first light combiner 211 combines the first portion of the comparative signal and the first portion of the reference signal into a first composite signal. Due to the difference in frequencies between the first portion of the comparative signal and the first portion of the reference signal, the first composite signal is beating between the first portion of the comparative signal and the first portion of the reference signal.
[0094] The light combiner 211 also splits the first composite signal onto a first detector waveguide 221 and a second detector waveguide 222. The first detector waveguide 221 carries a first portion of the first composite signal to a first light sensor 223 that converts the first portion of the first composite signal to a first electrical signal. The second detector waveguide 222 carries a second portion of the first composite signal to a second light sensor 224 that converts the second portion of the first composite signal to a second electrical signal. Examples of suitable light sensors include germanium photodiodes (PDs), and avalanche photodiodes (APDs).
[0095] In some instances, the light combiner 211 splits the first composite signal such that the portion of the comparative signal (i.e. the portion of the first portion of the comparative signal) included in the first portion of the composite signal is phase shifted by 180° relative to the portion of the comparative signal (i.e. the portion of the first portion of the comparative signal) in the second portion of the composite signal but the portion of the reference signal (i.e. the portion of the first portion of the reference signal) in the first portion of the composite signal is not phase shifted relative to the portion of the reference signal (i.e. the portion of the first portion of the reference signal) in the second portion of the composite signal. Alternately, the light combiner 211 splits the composite signal such that the portion of the reference signal (i.e. the portion of the first portion of the reference signal) in the first portion of the composite signal is phase shifted by 180° relative to the portion of the reference signal (i.e. the portion of the first portion of the reference signal) in the second portion of the composite signal but the portion of the comparative signal (i.e. the portion of the first portion of the comparative signal) in the first portion of the composite signal is not phase shifted relative to the portion of the comparative signal (i.e. the portion of the first portion of the comparative signal) in the second portion of the composite signal.
[0096] When the second light combiner 212 splits the second composite signal such that the portion of the comparative signal in the first portion of the second composite signal is phase shifted by 180° relative to the portion of the comparative signal in the second portion of the second composite signal, the light combiner 211 also splits the composite signal such that the portion of the comparative signal in the first portion of the composite signal is phase shifted by 180° relative to the portion of the comparative signal in the second portion of the composite signal. When the second light combiner 212 splits the second composite signal such that the portion of the reference signal in the first portion of the second composite signal is phase shifted by 180° relative to the portion of the reference signal in the second portion of the second composite signal, the light combiner 211 also splits the composite signal such that the portion of the reference signal in the first portion of the composite signal is phase shifted by 180° relative to the portion of the reference signal in the second portion of the composite signal.
[0097] The first reference waveguide 210 and the second reference waveguide 208 are constructed to provide a phase shift between the first portion of the reference signal and the second portion of the reference signal. For instance, the first reference waveguide 210 and the second reference waveguide 208 can be constructed so as to provide a 90-degree phase shift between the first portion of the reference signal and the second portion of the reference signal. As an example, one reference signal portion can be an in-phase component and the other a quadrature component. Accordingly, one of the reference signal portions can be a sinusoidal function and the other reference signal portion can be a cosine function. In one example, the first reference waveguide 210 and the second reference waveguide 208 are constructed such that the first reference signal portion is a cosine function and the second reference signal portion is a sine function. Accordingly, the portion of the reference signal in the second composite signal is phase shifted relative to the portion of the reference signal in the first composite signal, however, the portion of the comparative signal in the first composite signal is not phase shifted relative to the portion of the comparative signal in the second composite signal.
[0098] The first light sensor 223 and the second light sensor 224 can be connected as a balanced detector and the first auxiliary light sensor 218 and the second auxiliary light sensor 220 can also be connected as a balanced detector. The balanced detector(s) serve as light sensors that convert a light signal to an electrical signal. Figure 5B provides a schematic of the relationship between the electronics 62 and one of the light signal processors 28. For instance, Figure 5B provides a schematic of the relationship between the electronics 62 and the first light sensor 223, the second light sensor 224, the first auxiliary light sensor 218, and the second auxiliary light sensor 220 from the same light signal processor. The symbol for a photodiode is used to represent the first light sensor 223, the second light sensor 224, the first auxiliary light sensor 218, and the second auxiliary light sensor 220 but one or more of these sensors can have other constructions. In some instances, all of the components illustrated in the schematic of Figure 5B are included on the LIDAR chip. In some instances, the components illustrated in the schematic of Figure 5B are distributed between the LIDAR chip and electronics located off the LIDAR chip.
[0099] The electronics 62 connect the first light sensor 223 and the second light sensor 224 as a first balanced detector 225 and the first auxiliary light sensor 218 and the second auxiliary light sensor 220 as a second balanced detector 226. In particular, the first light sensor 223 and the second light sensor 224 are connected in series. Additionally, the first auxiliary light sensor 218 and the second auxiliary light sensor 220 are connected in series. The serial connection in the first balanced detector is in communication with a first data line 228 that carries the output from the first balanced detector as a first data signal. The serial connection in the second balanced detector is in communication with a second data line 232 that carries the output from the second balanced detector as a second data signal. The first data line and the second data line are each an example of a data line. The first data signal is an electrical data signal that carries a representation of the first composite signal and the second data signal is an electrical data signal that carries a representation of the second composite signal. Accordingly, the first data signal includes a contribution from a first waveform and a second waveform, and the second data signal is a composite of the first waveform and the second waveform. The portion of the first waveform in the second data signal is phase-shifted relative to the portion of the first waveform in the first data signal, but the portion of the second waveform in the second data signal is in-phase with the portion of the second waveform in the first data signal. For instance, the second data signal includes a portion of the reference signal that is phase shifted relative to a different portion of the reference signal that is included in the first data signal.
Additionally, the second data signal includes a portion of the comparative signal that is in-phase with a different portion of the comparative signal that is included in the first data signal. The first data signal and the second data signal are beating as a result of the beating between the comparative signal and the reference signal, i.e., the beating in the first composite signal and in the second composite signal.
[00100] The electronics 62 include a data processor 237 configured to generate the LIDAR data. For the purposes of illustration, Figure 5B illustrates one data processor in the electronics 62; however, the electronics 62 for a core can include a data processor 237 for each light signal processor 28 operated by the electronics 62.
[00101] The data processor 237 includes a beat frequency identifier 238 configured to identify the beat frequency of the composite signal from the first data signal and the second data signal. The beat frequency identifier 238 receives the first data signal and the second data signal. Since the first data signal is an in-phase component and the second data signal is its quadrature component, the first data signal and the second data signal together act as a complex data signal where the first data signal is the real component and the second data signal is the imaginary component of the complex data signal.
[00102] The data processor 237 includes a first Analog-to-Digital Converter (ADC) 264 that receives the first data signal from the first data line 228. The first Analog-to-Digital Converter (ADC) 264 converts the first data signal from an analog form to a digital form and outputs a first digital data signal. The beat frequency identifier 238 includes a second Analog-to-Digital Converter (ADC) 266 that receives the second data signal from the second data line 232. The second Analog-to-Digital Converter (ADC) 266 converts the second data signal from an analog form to a digital form and outputs a second digital data signal. The first digital data signal is a digital representation of the first data signal and the second digital data signal is a digital representation of the second data signal. Accordingly, the first digital data signal and the second digital data signal act together as a complex signal where the first digital data signal acts as the real component of the complex signal and the second digital data signal acts as the imaginary component of the complex data signal.
[00103] The beat frequency identifier 238 includes a mathematical transformer 268 that receives the complex data signal. For instance, the mathematical transformer 268 receives the first digital data signal from the first Analog-to-Digital Converter (ADC) 264 as an input and also receives the second digital data signal from the second Analog-to-Digital Converter (ADC) 266 as an input. The mathematical transformer 268 can be configured to perform a mathematical transform on the complex signal so as to convert from the time domain to the frequency domain. The mathematical transform can be a complex transform such as a complex Fast Fourier Transform (FFT). A complex transform such as a complex Fast Fourier Transform (FFT) provides an unambiguous solution for the shift in frequency of a comparative signal relative to the system output signal.
[00104] The mathematical transformer 268 can include a peak finder (not shown) configured to identify peaks in the output of the mathematical transformer 268. The peak finder can be configured to identify any frequency peaks associated with reflection of the system output signal by one or more objects located outside of the LIDAR system. For instance, frequency peaks associated with reflection of the system output signal by one or more objects located outside of the LIDAR system can fall within a frequency range. The peak finder can identify the frequency peak within the range of frequencies associated with the reflection of the system output signal by one or more objects located outside of the LIDAR system. The frequency of the identified frequency peak represents the beat frequency of the composite signal.
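The path from the two digitized data signals to a beat frequency described in the preceding paragraphs can be sketched as follows. This is an illustrative model only; the function name, the sample values, and the choice of operational frequency window are assumptions for the sketch rather than part of the disclosure.

```python
import numpy as np

def find_beat_frequency(i_samples, q_samples, sample_rate, f_min, f_max):
    """Estimate the beat frequency from the in-phase (first data signal)
    and quadrature (second data signal) components, restricting the peak
    search to the frequency range associated with reflections from
    objects located outside the LIDAR system."""
    # The two digital data signals act together as one complex signal.
    z = np.asarray(i_samples) + 1j * np.asarray(q_samples)
    # A complex FFT resolves positive and negative frequency shifts
    # unambiguously (a real transform cannot distinguish the sign).
    spectrum = np.fft.fft(z)
    freqs = np.fft.fftfreq(len(z), d=1.0 / sample_rate)
    # Peak finder: search only the operational frequency window.
    in_window = (freqs >= f_min) & (freqs <= f_max)
    idx = np.argmax(np.abs(spectrum) * in_window)
    return freqs[idx]

# Synthetic composite signal beating at +2.5 kHz, sampled at 100 kHz.
fs, f_beat = 100_000.0, 2_500.0
t = np.arange(4096) / fs
i_sig = np.cos(2 * np.pi * f_beat * t)
q_sig = np.sin(2 * np.pi * f_beat * t)
est = find_beat_frequency(i_sig, q_sig, fs, 100.0, 40_000.0)
```

The estimate is quantized to the FFT bin spacing (sample_rate / N), so the recovered peak lies within one bin of the true beat frequency.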
[00105] The data processor 237 includes a LIDAR data generator 270 that receives the beat frequency of the composite signal from the peak finder. The LIDAR data generator 270 processes the beat frequency of the composite signal so as to generate the LIDAR data (distance and/or radial velocity between the reflecting object and the LIDAR chip or LIDAR system). The mathematical transformer 268 can execute the attributed functions using firmware, hardware or software or a combination thereof.
[00106] The light source controller 63 operates the light source 10 such that the outbound LIDAR signal and the resulting system output signal have a frequency versus time pattern. For instance, when a light source is constructed according to Figure 3 and the laser sources include a gain element or laser chip, the light source controller 63 can change the frequency of the outgoing LIDAR signal by changing the level of electrical current applied through the gain element or laser cavity. Additionally, or alternately, the light source 10 can include one or more modulators (not shown) that the light source controller 63 can use to modulate the frequency of the outgoing LIDAR signal. When the light source 10 includes one or more modulators, the light source controller 63 can operate the modulator(s) so as to achieve the desired frequency versus time pattern in light signals that include light from the outgoing LIDAR signal. The light source controller 63 can execute the attributed functions using firmware, hardware or software or a combination thereof.
Figure 5C shows an example of a chirp pattern for the outgoing LIDAR signals, outbound LIDAR signals and the resulting system output signals. Figure 5C shows an example of a relationship between the frequency of the system output signals, time, cycles, periods and sample regions. The base frequency of the system output signal (f0) can be the frequency of the system output signal at the start of a cycle. The frequency versus time pattern shown in Figure 5C can represent the frequency versus time pattern for the system output signals that are concurrently output from the LIDAR system carrying the same wavelength channel. However, different system output signals that carry the same wavelength channel illuminate a different selection of sample regions. Since Figure 5C applies to a set of sample regions labeled Rnk and Rnk+1, the disclosure of Figure 5C applies to the system output signal that illuminates these sample regions. Additionally, the frequency versus time pattern shown in Figure 5C is for system output signals carrying a particular one of the wavelength channels. When the wavelength channel is switched, the system output signals can have the same pattern but at the wavelength of the new wavelength channel. Accordingly, the frequency versus time pattern shown in Figure 5C will be shifted upward or downward in response to the change in wavelength channel.
[00107] Figure 5C shows frequency versus time for a sequence of two cycles labeled cyclej and cyclej+1. In some instances, the frequency versus time pattern is repeated in each cycle as shown in Figure 5C. The illustrated cycles do not include re-location periods and/or re-location periods are not located between cycles. As a result, Figure 5C illustrates the results for a continuous scan where the steering of the system output signal is continuous.
[00108] Each cycle includes multiple data periods labeled DP1, DP2, and DP3. In some instances, the frequency versus time pattern is the same for the data periods that correspond to each other in different cycles as is shown in Figure 5C. Corresponding data periods are data periods with the same period index. As a result, the data periods labeled DP1 in different cycles can be considered corresponding data periods, and the associated frequency versus time patterns are the same in Figure 5C. At the end of a cycle, the electronics return the frequency to the same frequency level at which it started the previous cycle.
[00109] During the data periods DP1 the electronics operate the light source such that the frequency of the system output signal changes at a linear rate α. During the data periods DP2 the electronics operate the light source such that the frequency of the system output signal changes at a linear rate −α.
[00110] Figure 5C labels sample regions that are each associated with a sample region index k and are labeled Rnk. Figure 5C labels sample regions Rnk and Rnk+1. Each sample region is illuminated with the system output signal during the data periods that Figure 5C shows as associated with the sample region. For instance, sample region Rnk is illuminated with the system output signal during the data periods labeled DP1 through DP3. The sample region indices k can be assigned relative to time. For instance, the sample regions can be illuminated by the system output signal in the sequence indicated by the index k. As a result, the sample region Rn10 can be illuminated after sample region Rn9 and before Rn11.
[00111] The LIDAR system is typically configured to provide reliable LIDAR data when the object is within an operational distance range from the LIDAR system. The operational distance range can extend from a minimum operational distance to a maximum operational distance. A maximum roundtrip time can be the time required for a system output signal to exit the LIDAR system, travel the maximum operational distance to the object, and return to the LIDAR system, and is labeled TM in Figure 5C.
[00112] Since there is a delay between the system output signal being transmitted and returning to the LIDAR system, the composite signals do not include a contribution from the LIDAR signal until after the system return signal has returned to the LIDAR system. Since the composite signal needs the contribution from the system return signal for there to be a LIDAR beat frequency, the electronics measure the LIDAR beat frequency that results from system return signals that return to the LIDAR system during a data window in the data period. The data window is labeled "W" in Figure 5C. The contribution from the LIDAR signal to the composite signals will be present at times larger than the maximum operational time delay (TM). As a result, the data window is shown extending from the maximum operational time delay (TM) to the end of the data period.
[00113] A frequency peak in the output from a mathematical transform such as a complex Fourier transform represents the beat frequency of the composite signals that each includes a comparative signal beating against a reference signal. The beat frequencies from two or more different data periods can be combined to generate the LIDAR data. For instance, the beat frequency determined from DP1 in Figure 5C can be combined with the beat frequency determined from DP2 in Figure 5C to determine the LIDAR data. As an example, the following equation applies during a data period where the electronics increase the frequency of the outgoing LIDAR signal during the data period such as occurs in data period DP1 of Figure 5C: fub = −fd + ατ where fub is the frequency provided by the transform component, fd represents the Doppler shift (fd = 2vfc/c) where fc represents the optical frequency (f0), c represents the speed of light, v is the radial velocity between the reflecting object and the LIDAR system where the direction from the reflecting object toward the chip is assumed to be the positive direction, and τ is the time in which the light from the system output signal travels to the object and returns to the LIDAR system (the roundtrip time). The following equation applies during a data period where the electronics decrease the frequency of the outgoing LIDAR signal such as occurs in data period DP2 of Figure 5C: fdb = −fd − ατ where fdb is the frequency provided by the transform component (the beat frequency determined from DP2 in this case). In these two equations, fd and τ are unknowns. These two equations can be solved for the two unknowns. The radial velocity for the sample region can then be calculated from the Doppler shift (v = c·fd/(2fc)) and/or the separation distance for that sample region can be calculated from c·τ/2. As a result, the electronics use each of the beat frequencies as a variable in one or more equations that yield the LIDAR data.
As an example, the distance between the LIDAR system and an object in the sample region (R) can be determined from Equation 1: R = c(fub − fdb)/(2(αub − αdb)) where αub represents the rate of the frequency increase during the data period with an increasing frequency. Additionally, fdb represents the beat frequency during a data period where the light source controller 63 decreases the frequency of the outgoing LIDAR signal during the data period such as occurs in data period DP2 from Figure 5C through Figure 5E and αdb represents the rate of the frequency decrease during the data period with a decreasing frequency. Additionally, the radial velocity between the reflecting object and the LIDAR system (v) can be calculated from Equation 2: v = λ(αdbfub − αubfdb)/(2(αub − αdb)) where λ represents the wavelength of the system output signal. The LIDAR data generator 270 can execute the attributed functions using firmware, hardware or software or a combination thereof.
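Using the DP1/DP2 convention above (chirp rates +α and −α), the two beat-frequency equations can be solved for the roundtrip time and Doppler shift, and then converted to distance and radial velocity. The following sketch is illustrative only; the carrier frequency, chirp rate, and object parameters are assumed example values, not values from the disclosure.

```python
C = 299_792_458.0  # speed of light (m/s)

def lidar_data(f_ub, f_db, alpha, f_c):
    """Solve the two data-period equations
        f_ub = -f_d + alpha * tau   (up-chirp, rate +alpha)
        f_db = -f_d - alpha * tau   (down-chirp, rate -alpha)
    for the roundtrip time tau and Doppler shift f_d, then convert to
    the separation distance R = c*tau/2 and the radial velocity
    v = c*f_d/(2*f_c), positive toward the LIDAR system."""
    tau = (f_ub - f_db) / (2.0 * alpha)   # roundtrip time (s)
    f_d = -(f_ub + f_db) / 2.0            # Doppler shift (Hz)
    distance = C * tau / 2.0
    velocity = C * f_d / (2.0 * f_c)
    return distance, velocity

# Illustrative example: object 150 m away closing at 20 m/s, with an
# assumed 200 THz optical carrier and a 1e12 Hz/s chirp rate.
f_c, alpha = 200e12, 1e12
tau_true = 2 * 150.0 / C
f_d_true = 2 * 20.0 * f_c / C
R, v = lidar_data(-f_d_true + alpha * tau_true,
                  -f_d_true - alpha * tau_true, alpha, f_c)
```

Subtracting the two equations isolates τ (the Doppler term cancels) and adding them isolates fd (the chirp term cancels), which is why the two data periods together yield both distance and velocity.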
[00114] The data period labeled DP3 in Figure 5C is optional. In some situations, there can be more than one object in a sample region. For instance, during DP1 for a cycle and also during DP2 for the same cycle, more than one frequency peak can be present. In these circumstances, it may not be clear which frequency peaks from DP2 correspond to which frequency peaks from DP1. As a result, it may be unclear which frequencies need to be used together to generate the LIDAR data for an object in the sample region. As a result, there can be a need to identify corresponding frequencies. The identification of corresponding frequencies can be performed such that the corresponding frequencies are frequencies from the same reflecting object within a sample region. The data period labeled DP3 can be used to find the corresponding frequencies. LIDAR data can be generated for each pair of corresponding frequencies and is considered and/or processed as the LIDAR data for the different reflecting objects in the sample region.
[00115] An example of the identification of corresponding frequencies uses a LIDAR system where the cycles include three data periods (DP1, DP2, and DP3) as shown in Figure 5C. When there are two objects in a sample region illuminated by the system output signal, the transform component outputs two different frequencies for fub: fu1 and fu2 during DP1 and another two different frequencies for fdb: fd1 and fd2 during DP2. In this instance, the possible frequency pairings are: (fd1, fu1); (fd1, fu2); (fd2, fu1); and (fd2, fu2). A value of fd and τ can be calculated for each of the possible frequency pairings. Each pair of values for fd and τ can be substituted into f3 = −fd + α3τ to generate a theoretical f3 for each of the possible frequency pairings. The value of α3 is different from the value of α used in DP1 and DP2. In Figure 5C, the value of α3 is zero. In this case, the transform component also outputs two values for f3 that are each associated with one of the objects in the sample region. The frequency pair with a theoretical f3 value closest to each of the actual f3 values is considered a corresponding pair. LIDAR data can be generated for each of the corresponding pairs as described above and is considered and/or processed as the LIDAR data for a different one of the reflecting objects in the sample region. Each set of corresponding frequencies can be used in the above equations to generate LIDAR data. The generated LIDAR data will be for one of the objects in the sample region. As a result, multiple different LIDAR data values can be generated for a sample region where each of the different LIDAR data values corresponds to a different one of the objects in the sample region.
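The pairing search described above can be sketched as follows for a DP3 with α3 = 0, where the theoretical DP3 frequency for a candidate pairing reduces to −fd. The function name and the peak values are illustrative assumptions; the matching criterion follows the description above (the assignment whose theoretical f3 values best match the measured f3 peaks wins).

```python
from itertools import permutations

def match_frequency_pairs(f_up, f_down, f_three):
    """Match up-chirp peaks (fub values from DP1) with down-chirp peaks
    (fdb values from DP2) using the peaks measured during DP3. With
    alpha3 = 0, the theoretical DP3 frequency for a pairing is -f_d."""
    best, best_err = None, float("inf")
    # Try every one-to-one assignment of down-chirp peaks to up-chirp peaks.
    for perm in permutations(range(len(f_down))):
        err = 0.0
        for i, j in enumerate(perm):
            f_d = -(f_up[i] + f_down[j]) / 2.0  # Doppler shift for this pairing
            theoretical_f3 = -f_d               # alpha3 * tau term vanishes
            err += min(abs(theoretical_f3 - f3) for f3 in f_three)
        if err < best_err:
            best_err = err
            best = [(f_up[i], f_down[j]) for i, j in enumerate(perm)]
    return best

# Two objects: the peaks below were generated with (f_d, alpha*tau) of
# (1000, 5000) Hz and (-2000, 3000) Hz respectively.
pairs = match_frequency_pairs([4000.0, 5000.0],
                              [-6000.0, -1000.0],
                              [-1000.0, 2000.0])
```

For the example values, the correct assignment produces theoretical f3 values of −1000 Hz and 2000 Hz, matching the measured DP3 peaks exactly, while the swapped assignment misses both.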
[00116] An example application for LIDAR systems with varying density of sample regions in the field of view includes vehicles for transportation of people and/or products. Figure 6A through Figure 6C illustrate a self-driving car having a LIDAR system. The LIDAR system is configured to have the highest density of system output signals where information that affects driving is most likely to be found and/or is most concentrated. For instance, Figure 6A illustrates the highest density of system output signals positioned to detect oncoming traffic or other obstacles. Meanwhile, a lower density of system output signals is positioned to detect overhead items such as bridges.
[00117] Figure 6B illustrates the car of Figure 6A approaching a downward slope in the road. The illustrated LIDAR system does not adjust for the slope by shifting the system output signals within the field of view of the LIDAR system. As a result, the LIDAR system does not detect the presence of the oncoming traffic shown in Figure 6B because the system output signals pass over the oncoming traffic. In contrast, Figure 6C illustrates the LIDAR system shifting the system output signals within the field of view of the LIDAR system so as to compensate for the downward slope in the road. For instance, the system output signals shift downward within the field of view of the LIDAR system. The downward shift of the system output signals allows the LIDAR system to detect the oncoming traffic. Further, the downward shift of the system output signals can be sufficient for the oncoming traffic to be located within the highly concentrated region of the field of view where an increased density of sample regions is present.
[00118] The electronics can shift the system output signals within the field of view in response to output from one or more sensors. The electronics can be in electrical communication with the one or more sensors 282 illustrated in Figure 1. In some instances, the light source controller 63 is in electrical communication with the one or more sensors 282. In some instances, the light source controller 63 is included in the assembly electronics 280 as shown in Figure 2 and is in electrical communication with one or more sensors 282. All or a portion of the electronics can be included in the one or more sensors. For instance, in some instances, all or a portion of the light source controller 63 is included in the one or more sensors. The one or more sensors can be positioned within the LIDAR system or on a supporting object on which the LIDAR system is positioned. For instance, the one or more sensors can be positioned at one or more locations on the car.
[00119] An example of the one or more sensors 282 includes, but is not limited to, orientation sensors. The output of orientation sensors can indicate a spatial orientation of the LIDAR system or a supporting object on which the LIDAR system is positioned. In some instances, the orientation sensors can indicate a spatial orientation of the LIDAR system or the support object relative to a reference such as horizontal or vertical. As an example, the LIDAR system shown in Figure 6C can include one or more orientation sensors that output a signal that indicates the orientation of the LIDAR system or the car relative to a reference such as horizontal or vertical.
[00120] The electronics can process the output of the one or more sensors to determine other characteristics of the LIDAR system and/or the support object. For instance, the electronics can calculate the rate of change in the orientation of the LIDAR system or support object. The rate of change in the orientation of the LIDAR system or support object can be measured relative to time or relative to distance traveled by the LIDAR system or support object.
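A finite-difference sketch of computing the rate of orientation change relative to distance traveled follows (substituting time stamps for the odometer readings yields a rate relative to time instead). The function name and sample values are illustrative assumptions.

```python
def orientation_rate(angles, stamps):
    """Estimate the rate of orientation change by finite differences.
    `angles` are successive orientation readings (e.g. pitch in degrees)
    and `stamps` are the matching distances traveled (or times), so the
    result is a rate relative to distance (or relative to time)."""
    rates = []
    for i in range(1, len(angles)):
        dx = stamps[i] - stamps[i - 1]
        rates.append((angles[i] - angles[i - 1]) / dx)
    return rates

# Pitch sampled every 5 m of travel as the car enters a decline.
pitch = [0.0, 0.0, -1.0, -3.0, -6.0]
odometer = [0.0, 5.0, 10.0, 15.0, 20.0]
rates = orientation_rate(pitch, odometer)  # degrees per meter
```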
[00121] Figure 6D is a graph that illustrates a rate of change in the horizontal or vertical orientation of the car in Figure 6C relative to distance traveled as the car travels down the illustrated road. The distance on the x-axis in Figure 6D corresponds to the location along the road shown in Figure 6C. Accordingly, the illustrated rate of orientation change represents the rate of orientation change at the corresponding location on the road of Figure 6C. The rate of change increases at places where the curvature of the road increases and decreases at places where the curvature of the road decreases and/or becomes flat. Figure 6D also includes multiple orientation rate change thresholds. For instance, Figure 6D includes a first orientation rate change threshold labeled T1, a second orientation rate change threshold labeled T2, a third orientation rate change threshold labeled T3, and a fourth orientation rate change threshold labeled T4. In some instances, T2 = -T3 and/or T1 = -T4.
[00122] The light source controller 63 can operate the light source 10 in response to the rate of orientation change. The light source controller 63 can cause the light source to output the outgoing LIDAR signal carrying the wavelength channels (λ0 through λ4) shown in Figure 6D. Accordingly, the light source controller 63 can change the wavelength channel carried by the system output signals. As noted above, changing the wavelength channel carried by the system output signals shifts the locations of the sample regions within the field of view. In Figure 6D, increasing the wavelength channel index (m in λm) shifts the sample regions downward in the field of view while decreasing the wavelength channel index (m in λm) shifts the sample regions upward in the field of view; however, other conventions are possible.
[00123] In Figure 6D, the electronics operate the light source 10 such that when there is little change in the orientation of the car, the light source controller 63 operates the light source 10 such that the system output signals carry wavelength channel λ2. Accordingly, the system output signals carry wavelength channel λ2 when the car is on a substantially flat road. The electronics operate the light source 10 such that the system output signals carry wavelength channel λ3 in response to the rate of orientation change going from below T1 to above T1. In Figure 6D, an increase in the rate of orientation change from below T1 to above T1 results from the car starting down the decline. As a result, the system output signals are shifted downward in the LIDAR system's field of view in response to the car starting down the decline. The electronics operate the light source 10 such that the system output signals carry wavelength channel λ4 in response to the rate of orientation change going from below T2 to above T2. As a result, the system output signals are shifted further downward in the LIDAR system's field of view in response to the car decline becoming steeper.
[00124] In Figure 6D, the electronics operate the light source 10 such that the system output signals carry wavelength channel λ3 in response to the rate of orientation change going from above T2 to below T2. As a result, the system output signals are shifted upward in the LIDAR system's field of view in response to the car decline becoming shallower. Additionally, the electronics operate the light source 10 such that the system output signals carry wavelength channel λ2 in response to the rate of orientation change going from above T1 to below T1. As a result, the system output signals are shifted back to the location in the field of view that the system output signals have for a substantially flat road. As a result, the system output signals carry the wavelength channel associated with a flat road in a location where the road is still declining but is flat in the decline.
[00125] In Figure 6D, the electronics operate the light source 10 such that the system output signals carry wavelength channel λ1 in response to the rate of orientation change going from above T3 to below T3. As a result, the system output signals are shifted upward in the LIDAR system's field of view in response to the car starting an incline.
[00126] Figure 6D illustrates that the light source controller 63 changes the wavelength channel carried by the system output signals in response to the rate of orientation change crossing an orientation rate change threshold. Additionally, the wavelength channel carried by the system output signal after the change in the wavelength channel is a function of whether the rate of orientation change increases above the orientation rate change threshold or decreases below the orientation rate change threshold.
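The threshold behavior described for Figure 6D can be approximated by mapping the measured rate of orientation change to a channel index, as in the sketch below. The five-channel layout (λ0 through λ4 with λ2 on a flat road) and the numeric threshold values are assumptions for illustration, not values from the disclosure.

```python
def select_channel(rate, t4=-2.0, t3=-1.0, t1=1.0, t2=2.0):
    """Map the rate of orientation change to a wavelength channel index
    (lambda_0 through lambda_4, nominal channel lambda_2 on a flat road).
    Higher indices shift the sample regions downward in the field of view
    (declines); lower indices shift them upward (inclines). Thresholds
    satisfy t4 < t3 < 0 < t1 < t2, e.g. t3 = -t2 and t4 = -t1."""
    if rate > t2:
        return 4   # steep decline: shift sample regions far down
    if rate > t1:
        return 3   # moderate decline: shift down
    if rate >= t3:
        return 2   # substantially flat road: nominal channel
    if rate >= t4:
        return 1   # moderate incline: shift up
    return 0       # steep incline: shift far up
```

Because the mapping is a pure function of the current rate, a channel change occurs exactly when the rate crosses one of the thresholds, matching the crossing-driven behavior described above.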
[00127] The example of Figure 6D describes shifting the location of the system output signals in response to the output of an orientation sensor. Suitable orientation sensors include, but are not limited to, accelerometers and gyroscopes. The one or more sensors 282 can include sensors in addition to one or more orientation sensors or as an alternative to one or more orientation sensors. As a result, the location of the system output signals can be shifted in response to the output of one or more sensors in addition to the one or more orientation sensors or as an alternative to the one or more orientation sensors. Examples of sensors that can be used in addition to orientation sensors or as an alternative to orientation sensors include, but are not limited to, cameras, radar sensors, and LIDAR sensors.
[00128] Although Figure 6D disclosed shifting of the system output signals vertically within the LIDAR system's field of view, the LIDAR system can additionally or alternately be configured to shift the system output signals horizontally within the LIDAR system's field of view. Examples of LIDAR system applications where varying density of system output signals and/or shifting of the system output signals within the field of view may be desired include, but are not limited to, automotive, robotics, and surveying.
[00129] Suitable platforms for the LIDAR chip include, but are not limited to, silica, indium phosphide, and silicon-on-insulator wafers. Figure 7 is a cross section of a silicon-on-insulator wafer. A silicon-on-insulator (SOI) wafer includes a buried layer 300 between a substrate 302 and a light-transmitting medium 304. In a silicon-on-insulator wafer, the buried layer 300 is silica while the substrate 302 and the light-transmitting medium 304 are silicon. The substrate of an optical platform such as an SOI wafer can serve as the base for a LIDAR chip. For instance, in some instances, the optical components shown in Figure 1 are positioned on or over the top and/or lateral sides of the same substrate. As a result, the substrate of an optical platform such as an SOI wafer can serve as base 298 shown in Figure 2B.
[00130] The portion of the LIDAR chip illustrated in Figure 7 includes a waveguide construction that is suitable for use with chips constructed from silicon-on-insulator wafers. A ridge 306 of the light-transmitting medium 304 extends away from slab regions 308 of the light-transmitting medium 304. The light signals are constrained between the top of the ridge and the buried layer 300. As a result, the ridge 306 at least partially defines the waveguide.
[00131] The dimensions of the ridge waveguide are labeled in Figure 7. For instance, the ridge has a width labeled w and a height labeled h. The thickness of the slab regions is labeled t. For LIDAR applications, these dimensions can be more important than in other applications because of the need to use higher levels of optical power than are used in other applications. The ridge width (labeled w) is greater than 1 μm and less than 4 μm, the ridge height (labeled h) is greater than 1 μm and less than 4 μm, and the slab region thickness is greater than 0.5 μm and less than 3 μm.
These dimensions can apply to straight or substantially straight portions of the waveguide, curved portions of the waveguide and tapered portions of the waveguide(s). Accordingly, these portions of the waveguide will be single mode.
However, in some instances, these dimensions apply to straight or substantially straight portions of a waveguide. Additionally, or alternately, curved portions of a waveguide can have a reduced slab thickness in order to reduce optical loss in the curved portions of the waveguide. For instance, a curved portion of a waveguide can have a ridge that extends away from a slab region with a thickness greater than or equal to 0.0 μm and less than 0.5 μm. While the above dimensions will generally provide the straight or substantially straight portions of a waveguide with a single-mode construction, they can result in tapered section(s) and/or curved section(s) that are multimode. Coupling between the multi-mode geometry and the single-mode geometry can be done using tapers that do not substantially excite the higher order modes. Accordingly, the waveguides can be constructed such that the signals carried in the waveguides are carried in a single mode even when carried in waveguide sections having multi-mode dimensions. The waveguide construction of Figure 7 is suitable for all or a portion of the waveguides on a LIDAR chip constructed according to Figure 1.
[00132] Suitable signal directors 14 for use with the LIDAR chip include, but are not limited to, optical switches such as cascaded Mach-Zehnder interferometers and micro-ring resonator switches. In one example, the signal director 14 includes cascaded Mach-Zehnder interferometers that use thermal or free-carrier injection phase shifters. Figure 8A and Figure 8B illustrate an example of an optical switch that includes cascaded Mach-Zehnder interferometers 416. Figure 8A is a topview of the optical switch. Figure 8B is a cross section of the optical switch shown in Figure 8A taken along the line labeled B in Figure 8A.
[00133] The optical switch receives the outgoing LIDAR signal from the utility waveguide 12. The optical switch is configured to direct the outgoing LIDAR signal to one of several alternate waveguides 16. The optical switch includes interconnect waveguides 414 that connect multiple Mach-Zehnder interferometers 416 in a cascading arrangement. Each of the Mach-Zehnder interferometers 416 directs the outgoing LIDAR signal to one of two interconnect waveguides 414. The electronics can operate each Mach-Zehnder interferometer so as to select which of the two interconnect waveguides 414 receives the outgoing LIDAR signal from the Mach-Zehnder interferometer 416. The interconnect waveguides 414 that receive the outgoing LIDAR signal can be selected such that the outgoing LIDAR signal is guided through the optical switch to a particular one of the alternate waveguides 16.
[00134] Each of the Mach-Zehnder interferometers 416 includes two branch waveguides 418 that each receives a portion of the outgoing LIDAR signal from the utility waveguide 12 or from an interconnect waveguide 414. Each of the Mach-Zehnder interferometers 416 includes a direction component 420 that receives two portions of the outgoing LIDAR signal from the branch waveguides 418. The direction component 420 steers the outgoing LIDAR signal to one of the two interconnect waveguides 414 configured to receive the outgoing LIDAR signal from the direction component 420. The interconnect waveguide 414 to which the outgoing LIDAR signal is directed is a function of the phase differential between the two different portions of the outgoing LIDAR signal received by the direction component 420. Although Figure 8A illustrates a directional coupler operating as the direction component 420, other direction components 420 can be used. Suitable alternate direction components 420 include, but are not limited to, Multi-Mode Interference (MMI) devices and tapered couplers.
[00135] Each of the Mach-Zehnder interferometers 416 includes a phase shifter 422 positioned along one of the branch waveguides 418. The optical switch includes conductors 424 in electrical communication with the phase shifters 422. The conductors 424 are illustrated as dashed lines so they can be easily distinguished from underlying features. The conductors 424 each terminate at a contact pad 426. The contact pads 426 can be used to provide electrical communication between the conductors 424 and the electronics. Accordingly, the conductors 424 provide electrical communication between the electronics and the phase shifters 422 and allow the electronics to operate the phase shifters 422. Suitable conductors 424 include, but are not limited to, metal traces. Suitable materials for the conductors include, but are not limited to, titanium, aluminum and gold.
[00136] The electronics can operate each of the phase shifters 422 so as to control the phase differential between the portions of the outgoing LIDAR signal received by a direction component 420. In one example, a phase shifter 422 can be operated so as to change the index of refraction of at least a portion of a branch waveguide 418. Changing the index of a portion of a branch waveguide 418 in a Mach-Zehnder interferometer 416 changes the effective length of that branch waveguide 418 and accordingly changes the phase differential between the portions of the outgoing LIDAR signal received by a direction component 420. The ability of the electronics to change the phase differential allows the electronics to select the interconnect waveguide 414 that receives the outgoing LIDAR signal from the direction component 420.
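As a hedged illustration of how the phase differential selects the receiving interconnect waveguide 414, the ideal transfer function of a 2x2 Mach-Zehnder interferometer with 50/50 couplers can be modeled as below. This textbook model is an assumption for illustration and is not taken from the specification.

```python
import math

def mzi_output_powers(delta_phi: float) -> tuple[float, float]:
    """Ideal 2x2 MZI with 50/50 couplers: fraction of the input power
    leaving the cross and bar output ports as a function of the branch
    phase differential delta_phi (radians)."""
    p_cross = math.cos(delta_phi / 2) ** 2
    p_bar = math.sin(delta_phi / 2) ** 2
    return p_cross, p_bar

# delta_phi = 0 sends all power to the cross port; delta_phi = pi sends
# all power to the bar port, so a pi phase shift switches the output.
```

In this idealized model, the electronics only need to toggle the phase differential between 0 and pi to route the signal fully to either interconnect waveguide.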
[00137] Figure 8B illustrates one example of a suitable construction of a phase shifter 422 on a branch waveguide 418. The branch waveguide 418 is at least partially defined by a ridge 306 of the light-transmitting medium 304 that extends away from slab regions 308 of the light-transmitting medium 304. Doped regions 428 extend into the slab regions 308 with one of the doped regions 428 including an n-type dopant and one of the doped regions 428 including a p-type dopant. A first cladding 430 is positioned between the light-transmitting medium 304 and a conductor 424. The conductors 424 each extend through an opening in the first cladding 430 into contact with one of the doped regions 428. A second cladding 432 is optionally positioned over the first cladding 430 and over the conductor 424. The electronics can apply a forward bias to the conductors 424 so as to generate an electrical current through the branch waveguide 418. The resulting injection of carriers into the branch waveguide 418 causes free carrier absorption that changes the index of refraction in the branch waveguide 418.
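The index change produced by carrier injection can be estimated with the widely used Soref-Bennett empirical relation for silicon near 1550 nm. The coefficients, injection level, and waveguide length below are illustrative assumptions and are not values from the specification.

```python
import math

def delta_n_soref(dNe: float, dNh: float) -> float:
    """Soref-Bennett empirical index change in silicon at 1550 nm for
    injected electron (dNe) and hole (dNh) densities in cm^-3."""
    return -8.8e-22 * dNe - 8.5e-18 * dNh ** 0.8

def phase_shift(delta_n: float, length_m: float,
                wavelength_m: float = 1.55e-6) -> float:
    """Phase shift (radians) accumulated over a waveguide section of the
    given length for the given index change."""
    return 2 * math.pi * abs(delta_n) * length_m / wavelength_m

# Assumed injection of 5e17 cm^-3 electrons and holes over a 200 um
# phase-shifter section; yields a phase shift on the order of a radian.
dn = delta_n_soref(5e17, 5e17)
phi = phase_shift(dn, 200e-6)
```

This rough estimate shows why free-carrier injection over a few hundred micrometers of branch waveguide suffices to reach the pi-scale phase differentials needed for switching.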
[00138] The first cladding 430 and/or the second cladding 432 illustrated in Figure 8B can each represent one or more layers of materials. The materials for the first cladding 430 and/or the second cladding 432 can be selected to provide electrical isolation of the conductors 424, lower index of refraction relative to the light-transmitting medium 304, stress reduction, and mechanical and environmental protection. Suitable materials for the first cladding 430 and/or the second cladding 432 include, but are not limited to, silicon nitride, tetraethyl orthosilicate (TEOS), silicon dioxide, and aluminum oxide. The one or more materials for the first cladding 430 and/or the second cladding 432 can be doped or undoped. [00139] As is evident from Figure 1, the LIDAR system can optionally include one or more light signal amplifiers 446. For instance, an amplifier 446 can optionally be positioned along a utility waveguide as illustrated in the LIDAR system of Figure 1. Additionally, or alternately, an amplifier 446 can be positioned along all or a portion of the alternate waveguides 16 as illustrated in the LIDAR system of Figure 1. The electronics can operate each of the amplifiers 446 so as to amplify the power of the outgoing LIDAR signal and accordingly the power of the system output signal. Suitable amplifiers 446 for use on the LIDAR chip include, but are not limited to, Semiconductor Optical Amplifiers (SOAs) and SOA arrays.
[00140] Light sensors that are interfaced with waveguides on a LIDAR chip can be a component that is separate from the chip and then attached to the chip. For instance, the light sensor can be a photodiode or an avalanche photodiode. Examples of suitable light sensors include, but are not limited to, InGaAs PIN photodiodes manufactured by Hamamatsu located in Hamamatsu City, Japan, or an InGaAs APD (Avalanche Photo Diode) manufactured by Hamamatsu located in Hamamatsu City, Japan. These light sensors can be centrally located on the LIDAR chip. Alternately, all or a portion of the waveguides that terminate at a light sensor can terminate at a facet located at an edge of the chip and the light sensor can be attached to the edge of the chip over the facet such that the light sensor receives light that passes through the facet. The use of light sensors that are a separate component from the chip is suitable for all or a portion of the light sensors selected from the group consisting of the first light sensor and the second light sensor.
[00141] As an alternative to a light sensor that is a separate component, all or a portion of the light sensors can be integrated with the chip. For instance, examples of light sensors that are interfaced with ridge waveguides on a chip constructed from a silicon-on-insulator wafer can be found in Optics Express Vol. 15, No. 21, 13965-13971 (2007); U.S. Patent number 8,093,080, issued on Jan. 10, 2012; U.S. Patent number 8,242,432, issued Aug. 14, 2012; and U.S. Patent number 6,108,472, issued on Aug. 22, 2000, each of which is incorporated herein in its entirety. The use of light sensors that are integrated with the chip is suitable for all or a portion of the light sensors selected from the group consisting of the first light sensor and the second light sensor.
[00142] Suitable electronics 62 can include, but are not limited to, a controller that includes or consists of analog electrical circuits, digital electrical circuits, processors, microprocessors, digital signal processors (DSPs), Field Programmable Gate Arrays (FPGAs), computers, microcomputers, or combinations suitable for performing the operation, monitoring and control functions described above. In some instances, the controller has access to a memory that includes instructions to be executed by the controller during performance of the operation, control and monitoring functions. In some instances, the functions of the LIDAR data generator and the peak finder can be executed by Field Programmable Gate Arrays (FPGAs), digital signal processors (DSPs), Application Specific Integrated Circuits, firmware, software, hardware, and combinations thereof. Although the electronics are illustrated as a single component in a single location, the electronics can include multiple different components that are independent of one another and/or placed in different locations. Additionally, as noted above, all or a portion of the disclosed electronics can be included on the chip including electronics that are integrated with the chip.
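For a frequency-modulated continuous-wave (FMCW) implementation, the LIDAR data generator mentioned above could recover the distance and radial velocity of an object from the beat frequencies of an up-chirp and a down-chirp using the standard FMCW relations sketched below. The function names, chirp parameters, and operating wavelength are assumptions for illustration, not details from the specification.

```python
# Hedged sketch of the standard FMCW relations a LIDAR data generator
# could apply to measured beat frequencies; parameters are assumptions.
C = 299_792_458.0        # speed of light, m/s
WAVELENGTH = 1.55e-6     # assumed operating wavelength, m

def lidar_data(f_up_hz: float, f_down_hz: float, chirp_slope_hz_s: float):
    """Distance (m) and radial velocity (m/s) from the beat frequencies
    measured during an up-chirp and a down-chirp of the given slope."""
    f_range = (f_up_hz + f_down_hz) / 2      # range-induced component
    f_doppler = (f_down_hz - f_up_hz) / 2    # Doppler-induced component
    distance = C * f_range / (2 * chirp_slope_hz_s)
    velocity = WAVELENGTH * f_doppler / 2
    return distance, velocity

# Example: beat frequencies consistent with an object near 30 m moving
# at roughly 1 m/s, for an assumed chirp slope of 1e14 Hz/s.
d, v = lidar_data(1.871e7, 2.129e7, 1e14)
```

Averaging the two beat frequencies cancels the Doppler contribution and isolates range, while their difference isolates the Doppler shift, which is why the claims describe computing both a distance and a radial velocity per pixel from the beat frequencies.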
[00143] Examples of a suitable director controller 15, a suitable light source controller 63, a suitable data processor 237, suitable assembly electronics 280, and suitable electronics 62 each execute the attributed functions using firmware, hardware, or software, or a combination thereof.
[00144] Components on the LIDAR chip can be fully or partially integrated with the LIDAR chip. For instance, the integrated optical components can include or consist of a portion of the wafer from which the LIDAR chip is fabricated. A wafer that can serve as a platform for a LIDAR chip can include multiple layers of material. At least a portion of the different layers can be different materials. As an example, in a silicon-on-insulator wafer that includes the buried layer 300 between the substrate 302 and the light-transmitting medium 304 as shown in Figure 7, the integrated on-chip components can be formed by using etching and masking techniques to define the features of the component in the light-transmitting medium 304. For instance, the slab regions 308 that define the waveguides and the stop recess can be formed in the desired regions of the wafer using different etches of the wafer. As a result, the LIDAR chip includes a portion of the wafer and the integrated on-chip components can each include or consist of a portion of the wafer. Further, the integrated on-chip components can be configured such that light signals traveling through the component travel through one or more of the layers that were originally included in the wafer. For instance, the waveguide of Figure 7 guides a light signal through the light-transmitting medium 304 from the wafer. The integrated components can optionally include materials in addition to the materials that were present on the wafer. For instance, the integrated components can include reflective materials and/or a cladding.
[00145] Although the gain medium is disclosed as having both a laser waveguide and an amplifier waveguide, the amplifier waveguide is optional. As a result, the utility waveguide can be continuous with the auxiliary waveguide and/or can serve as the auxiliary waveguide.
[00146] Numeric labels such as first, second, third, etc. are used to distinguish different features and components and do not indicate sequence or existence of lower numbered features. For instance, a second component can exist without the presence of a first component and/or a third step can be performed before a first step. The light signals disclosed above each include, consist of, or consist essentially of light from the prior light signal(s) from which the light signal is derived. For instance, an incoming LIDAR signal includes, consists of, or consists essentially of light from the LIDAR input signal.
[00147] Although the LIDAR system is disclosed as using complex signals such as the complex data signal, the LIDAR system can also use real signals. As a result, the mathematical transform can be a real transform and the components associated with the generation and use of the quadrature components can be removed from the LIDAR system. As a result, the LIDAR system can use a single signal combiner. Additionally, or alternately, a single light sensor can replace each of the balanced detectors.
[00148] Other embodiments, combinations and modifications of this invention will occur readily to those of ordinary skill in the art in view of these teachings. Therefore, this invention is to be limited only by the following claims, which include all such embodiments and modifications when viewed in conjunction with the above specification and accompanying drawings.

Claims

1. A system, comprising: a LIDAR chip that includes a switch configured to direct an outgoing LIDAR signal to one of multiple different alternate waveguides, each of the alternate waveguides terminating at a facet through which the outgoing LIDAR signal passes when directed to the alternate waveguide, the facets being arranged such that a distance between adjacent pairs of the facets is different for different adjacent pairs of facets.
2. The system of claim 1, further comprising a signal redirector configured to receive the outgoing LIDAR signal from any one of the alternate waveguides and to redirect the received outgoing LIDAR signal such that a direction that the outgoing LIDAR signal travels away from the signal redirector changes in response to changes in the alternate waveguide from which the signal redirector receives the outgoing LIDAR signal.
3. The system of claim 1, wherein the facets are arranged in an array and the distance between adjacent pairs of facets becomes larger or stays the same for each adjacent pair starting at a pair of reference facets and moving toward one or both ends of the array and the distance between adjacent facets becomes larger for at least a portion of the adjacent pairs starting at the reference facets and moving toward one or both ends of the array, the reference facets being an adjacent pair of facets that has the shortest of the distances between the adjacent pairs.
4. The system of claim 1, wherein a largest distance between adjacent pairs of facets is greater than or equal to 1.5 times and less than 20 times the distance between the adjacent pair of facets that has the shortest of the distances between the adjacent pairs.
5. A system, comprising: a LIDAR system having an optical component assembly that concurrently outputs multiple system output signals in a field of view, the system output signals carrying the same wavelength channel; solid-state beam steerers that are each configured to steer each of the system output signals to multiple different pixels within the field of view, the pixels arranged such that a density of the pixels in the field of view changes across the field of view; and the optical component assembly configured such that the location of the pixels shifts within the field of view in response to a change in a wavelength of the wavelength channel carried by the system output signals.
6. The system of claim 5, wherein the system includes electronics having a light source controller configured to operate a light source so as to change the wavelength of the wavelength channel carried by the system output signals.
7. The system of claim 6, wherein the light source controller is configured to change the wavelength of the wavelength channel carried by the system output signals in response to output from one or more sensors.
8. The system of claim 7, wherein the one or more sensors include an orientation sensor that provides an output indicating an orientation of the LIDAR system or of a support upon which the LIDAR system is positioned.
9. The system of claim 8, wherein the light source controller is configured to change the wavelength of the wavelength channel carried by the system output signals in response to a rate of change in an orientation of the LIDAR system crossing a threshold.
10. The system of claim 5, wherein the LIDAR system is configured to combine light that returns to the LIDAR system from each of the system output signals with light from a reference signal so as to generate beating signals that are each beating at a beat frequency; and the LIDAR system includes electronics configured to calculate LIDAR data for each of the pixels from the beat frequencies, the LIDAR data for each pixel indicating a radial velocity and/or a distance between the LIDAR system and an object located in the pixel.
11. The system of claim 5, wherein each of the solid-state beam steerers includes an optical switch configured to direct an outgoing LIDAR signal to any one of multiple different alternate waveguides and each of the system output signals includes light from one of the outgoing LIDAR signals.
12. The system of claim 5, wherein each of the alternate waveguides from multiple different beam-steerers terminate at a facet and the facets are arranged in an array such that the distance between adjacent facets in the array changes along a length of the array.
13. A method of operating a system, comprising: concurrently transmitting multiple system output signals in a field of view of a LIDAR system, the system output signals being transmitted from an optical assembly in the
LIDAR system, and the system output signals carrying the same wavelength channel; operating a solid-state beam-steerer so as to steer each of the system output signals to multiple different pixels within the field of view such that a density of the pixels in the field of view is higher in a concentrated region of the field of view than in a diluted region of the field of view; and shifting the location of the concentrated region of the field of view.
14. The method of claim 13, wherein shifting the location of the concentrated region includes changing the wavelength of the wavelength channel carried by the system output signals.
15. The method of claim 14, wherein changing the wavelength of the wavelength channel carried by the system output signals includes changing the wavelength of the wavelength channel carried by the system output signals in response to output from one or more sensors.
16. The method of claim 15, wherein the one or more sensors include an orientation sensor that provides an output indicating an orientation of the LIDAR system or of a support upon which the LIDAR system is positioned.
17. The method of claim 16, wherein changing the wavelength of the wavelength channel carried by the system output signals in response to output from one or more sensors includes changing the wavelength of the wavelength channel carried by the system output signals in response to a rate of change of the orientation crossing a threshold.
18. The method of claim 13, further comprising: combining light that returns to the LIDAR system from each of the system output signals with light from a reference signal so as to generate beating signals that are each beating at a beat frequency; and calculating LIDAR data for the pixels from the beat frequencies, the LIDAR data for each pixel indicating a radial velocity and/or a distance between the LIDAR system and an object located in the pixel.
19. The method of claim 13, wherein each of the solid-state beam steerers includes an optical switch configured to direct an outgoing LIDAR signal to any one of multiple different alternate waveguides and each of the system output signals includes light from one of the outgoing LIDAR signals, and steering each of the system output signals to multiple different pixels within the field of view includes switching the alternate waveguide to which the outgoing LIDAR signal is directed.
20. The method of claim 13, wherein each of the alternate waveguides from multiple different beam-steerers terminate at a facet and the facets are arranged in an array such that the distance between adjacent facets in the array changes along a length of the array.
PCT/US2024/054609 2023-12-13 2024-11-05 Control of pixel density in imaging systems Pending WO2025128235A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US18/539,251 2023-12-13
US18/539,251 US20250199132A1 (en) 2023-12-13 2023-12-13 Amplification of signals in imaging systems
US202463553885P 2024-02-15 2024-02-15
US63/553,885 2024-02-15

Publications (1)

Publication Number Publication Date
WO2025128235A1 true WO2025128235A1 (en) 2025-06-19

Family

ID=96058309

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2024/054609 Pending WO2025128235A1 (en) 2023-12-13 2024-11-05 Control of pixel density in imaging systems

Country Status (1)

Country Link
WO (1) WO2025128235A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140376001A1 (en) * 2013-06-23 2014-12-25 Eric Swanson Integrated optical system and components utilizing tunable optical sources and coherent detection and phased array for imaging, ranging, sensing, communications and other applications
US20170184450A1 (en) * 2015-12-26 2017-06-29 Intel Corporation Low power, high resolution solid state lidar circuit
US20170356983A1 (en) * 2016-06-08 2017-12-14 Lg Electronics Inc. Lidar apparatus for vehicles and vehicle having the same
US20220342048A1 (en) * 2018-06-25 2022-10-27 Silc Technologies, Inc. Optical Switching for Tuning Direction of LIDAR Output Signals



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24904598

Country of ref document: EP

Kind code of ref document: A1