WO2025125551A1 - Intraoral scanning system with focused images aligned to a 3D surface model
- Publication number
- WO2025125551A1 (PCT/EP2024/086180)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image sensor
- scanning system
- pixels
- reflected light
- sensor unit
- Prior art date
- Legal status: Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
- G01B11/25—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
- G01B11/2513—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object with several lines being projected in more than one direction, e.g. grids, patterns
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00002—Operational features of endoscopes
- A61B1/00004—Operational features of endoscopes characterised by electronic signal processing
- A61B1/00009—Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
- A61B1/000094—Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope extracting biological structures
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00002—Operational features of endoscopes
- A61B1/00004—Operational features of endoscopes characterised by electronic signal processing
- A61B1/00009—Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
- A61B1/000096—Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope using artificial intelligence
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00002—Operational features of endoscopes
- A61B1/00043—Operational features of endoscopes provided with output arrangements
- A61B1/00045—Display arrangement
- A61B1/0005—Display arrangement combining images e.g. side-by-side, superimposed or tiled
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00163—Optical arrangements
- A61B1/00172—Optical arrangements with means for scanning
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00163—Optical arrangements
- A61B1/00188—Optical arrangements with focusing or zooming features
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00163—Optical arrangements
- A61B1/00194—Optical arrangements adapted for three-dimensional imaging
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/04—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
- A61B1/046—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances for infrared imaging
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/06—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor with illuminating arrangements
- A61B1/0605—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor with illuminating arrangements for spatially modulated illumination
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/06—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor with illuminating arrangements
- A61B1/0638—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor with illuminating arrangements providing two or more wavelengths
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/24—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for the mouth, i.e. stomatoscopes, e.g. with tongue depressors; Instruments for opening or keeping open the mouth
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0082—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence adapted for particular medical purposes
- A61B5/0088—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence adapted for particular medical purposes for oral or dental tissue
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/04—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
- A61B1/043—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances for fluorescence imaging
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/04—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
- A61B1/05—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances characterised by the image sensor, e.g. camera, being in the distal end portion
- A61B1/051—Details of CCD assembly
Definitions
- the disclosure relates to an intraoral scanning system. More specifically, the intraoral scanning system provides an improved 3D surface model with aligned focused images of internal structures of a dental object.
- ionizing radiation, e.g., X-rays
- X-ray bitewing radiographs are often used to provide non-quantitative images of the teeth's internal structures.
- such images are typically limited in their ability to show early tooth mineralization changes (e.g., initial caries), resulting in underestimation of the demineralization depth; they are unable to assess the presence or absence of micro-cavitation; and they frequently suffer from overlap of the approximal tooth surfaces, which requires repeating the radiograph acquisition and thus may involve a lengthy and expensive procedure.
- NIR: near-infrared
- a light field camera, i.e. an image sensor unit with an array of pixels and a micro-lens array arranged in front of it, has been shown to be very efficient.
- the disadvantage of such a light field camera is its lack of accuracy in determining a 3D surface model of a dental object; therefore, a light field camera, or an image sensor unit with an array of pixels and a micro-lens array arranged in front of it, is not by itself suitable to provide an accurate 3D surface model with aligned focused images at different focus depths.
- a single camera has a limited field of view, and it is known to expand the field of view of an intraoral scanner by fitting multiple cameras in either a linear or a circular arrangement.
- however, the price of the intraoral scanner then increases significantly.
- integrating into an intraoral scanning system a first image sensor unit with a micro-lens array configured to acquire reflected light from within a dental object, together with a second image sensor unit configured to acquire structured light from a surface of the dental object, improves the accuracy of the reflected light acquired by the first image sensor unit. Furthermore, the combination of the first image sensor unit and the second image sensor unit gives a user of the intraoral scanning system the ability to adjust the focus depth into the dental object.
- an intraoral scanning system may comprise a projector unit configured to emit light onto a dental object, wherein the emitted light includes an infrared wavelength and structured light that includes a first visible wavelength.
- the infrared wavelength may be between 800 nm and 1150 nm, and the visible wavelength may be between 350 nm and 750 nm.
- the structured light may be a static pattern or a time-varying pattern which may be provided by a structural pattern unit arranged such that the emitted light from the projector unit may be received by the structural pattern unit.
- the structural pattern unit may be configured to change the pattern dynamically.
- the structural pattern unit may be part of the intraoral scanning system.
- the intraoral scanning system may include a first image sensor unit that includes a first image sensor configured to acquire first reflected light from the dental object, wherein the first reflected light includes the infrared wavelength.
- the first image sensor may include an array of pixels.
- the first image sensor unit may include a micro-lens array arranged in front of the array of pixels of the first image sensor, and wherein the micro-lens array may be configured to convey the first reflected light to the array of pixels.
- the one or more micro-lenses of the micro-lens array may direct the first reflected light to one or more pixels of the array of pixels.
- the emitted light that includes infrared wavelength(s) may include a structured pattern, such as a static pattern or a time-varying pattern.
- the processing unit may be configured to register datapoints of the plurality of focused images with the help of the structured pattern in the first reflected light that includes infrared wavelengths.
- a static pattern is a pattern that does not vary in time, e.g. a static checkerboard pattern or a static line pattern.
- a time-varying pattern is a pattern that varies in time, i.e. its embedded spatial structure varies in time. It may also be termed a "time-varying illumination pattern"; in the following it is also termed "fringes".
- the light field consists of radiant intensity and direction of the light.
- knowing the radiant intensity and direction of the light in one plane, it is possible to calculate the intensity and direction of this light in other planes.
- This enables the first image sensor unit to refocus the image onto other planes.
- the refocused image may include intraoral structures within a dental object determined by at least the first reflected light.
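- To make the refocusing step concrete, the following is a minimal shift-and-sum sketch, assuming the micro-lens data have already been rearranged into sub-aperture images and that per-view disparities at a reference plane are known from calibration; the function name and array layout are illustrative, not taken from the disclosure.

```python
import numpy as np

def refocus(subaperture_images, offsets, alpha):
    """Synthetic shift-and-sum refocusing of a light field.

    subaperture_images: (U, V, H, W) array, one image per micro-lens
                        viewpoint (u, v).
    offsets:            (U, V, 2) per-view pixel disparity relative to
                        the central view, measured at the reference plane.
    alpha:              relative depth of the synthetic focus plane
                        (alpha = 1.0 reproduces the reference plane).
    """
    U, V, H, W = subaperture_images.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy, dx = (1.0 - 1.0 / alpha) * offsets[u, v]
            # integer shifts keep the sketch short; a real implementation
            # would interpolate to sub-pixel accuracy
            shifted = np.roll(subaperture_images[u, v],
                              (int(round(dy)), int(round(dx))), axis=(0, 1))
            out += shifted
    return out / (U * V)
```

Sweeping alpha over a range of values produces a plurality of focused images at different focus depths, as described above.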
- the second image sensor unit may be a photo image sensor or a high-speed photo image sensor configured with a frame rate of 25 frames per second or more.
- Each frame includes a subscan of multiple two-dimensional images of a specific wavelength range determined by the second reflected light.
- the multiple two-dimensional images of the subscan may be used for generating a point cloud in a three-dimensional Euclidean coordinate system.
- the point cloud may then be used for generating a 3D subscan which is then stitched together with other 3D subscans acquired previously by the intraoral scanning system.
- the first image sensor unit may have a lower resolution than the second image sensor unit; however, this may be compensated for by combining with 3D data taken earlier or later by either image sensor, i.e. the first image sensor unit or the second image sensor unit, thus achieving high resolution in the final 3D model.
- the second image sensor unit of the intraoral scanning system may include a second image sensor configured to acquire second reflected light from the dental object, wherein the second reflected light includes the structured light.
- the second image sensor unit may include a plurality of pixels which receive the structured light from variable planes, which causes the structures within the structured light to vary in size.
- the second image sensor unit outputs the two-dimensional image with depth information, wherein the depth information may be determined from the size variations of the structures in the structured light.
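- A minimal sketch of this size-to-depth relation, assuming a simple pinhole magnification model and a calibrated reference measurement (all names and numbers below are hypothetical):

```python
def depth_from_structure_size(observed_px, reference_px, reference_depth_mm):
    """Under a pinhole model the imaged size of a fixed pattern structure
    is inversely proportional to distance, so
        depth = reference_depth * reference_size / observed_size."""
    return reference_depth_mm * reference_px / observed_px

# example: a pattern square calibrated to 50 px at 15 mm now appears
# at 40 px, so the surface lies farther from the scanner
print(depth_from_structure_size(40.0, 50.0, 15.0))  # -> 18.75 mm
```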
- the second image sensor may be configured to acquire second reflected light from the dental object via a moveable focus lens or via a triangulation arrangement of the second image sensor and the projector unit.
- the first image sensor unit and the second image sensor unit may be configured to acquire the first and second reflected light, respectively, in parallel or sequentially.
- the first and the second image sensor unit in combination with the projector unit may be arranged within a handheld intraoral scanner, and the processing unit may be arranged within or external to the handheld intraoral scanner.
- the processing unit may be part of a computer, a server, a cloud server or any external computing device.
- the intraoral scanning system may include a handheld intraoral scanner that includes a first optical path that guides the first and the second reflected light to the first image sensor unit and the second image sensor unit, respectively.
- the reflected light acquired by the two image sensor units is guided in parallel by optical components within a housing of the handheld intraoral scanner. At least partly between a mirror and the image sensor units, the first and the second optical path are parallel.
- the handheld intraoral scanner may include a first optical path and a second optical path for the first image sensor unit and the second image sensor unit, respectively, wherein the first and second optical path may be configured to guide the first and second reflected light to the first and second image sensor unit, respectively.
- the first optical path may be partly or fully orthogonal to the second optical path.
- the mirror receives the reflected light via a scanner window placed in the tip of the handheld intraoral scanner, and the mirror is configured to reflect the reflected light towards the first image sensor unit and/or the second image sensor unit.
- the scanner window may be arranged within a housing of the handheld intraoral scanner, and the scanner window may be an opening in the housing or made of glass.
- the second image sensor unit may include a Bayer filter arranged in front of the second image sensor.
- the Bayer filter receives and forwards the reflected light towards an array of pixels of the second image sensor unit.
- the reflected light may be white, and the Bayer filter may be configured to filter the reflected light into red, green and blue light, which are then forwarded to separate pixels of the array of pixels.
- the optical components may include one or more mirrors, one or more beam splitters and/or one or more lenses.
- the housing may include a tip end configured for being inserted into a mouth of a patient during a scanning sequence and a distal end that is opposite to the tip end. Furthermore, the housing may include a midpoint that has an equal distance to the tip end and the distal end.
- the handheld intraoral scanner may include a power unit configured to power the projector unit, the image sensor units, the processor unit, a memory unit and/or other components within the handheld intraoral scanner.
- the housing may include a user interface, such as a button, a touchpad or another input means configured for controlling the handheld intraoral scanner and/or a graphical user interface.
- the mirror is arranged at the tip end and the first and second image sensor unit are arranged in vicinity of the midpoint or the distal end.
- the optical path from the dental object to the image sensor units is thereby increased to an extent that results in a handheld intraoral scanner which is less sensitive to where the user places it relative to the dental object while scanning. Furthermore, placing only the mirror at the tip end, i.e. in the tip of the housing, also reduces the size of the tip end.
- the first and the second optical path may be parallel between the mirror and the image sensor units.
- the first and the second optical path are the same between the mirror and a beam splitter that is configured to split the first optical path and the second optical path, such that the first image sensor unit receives the first reflected light via the first optical path and the second image sensor unit receives the second reflected light via the second optical path.
- the first image sensor unit may be arranged in vicinity to the tip end and the second image sensor unit may be arranged in vicinity to the midpoint.
- a scanner window is arranged within the housing and at the tip end. The scanner window is arranged on a first side of the mirror, and the first image sensor unit is arranged on a second side of the mirror wherein a field of view of the first image sensor unit points directly towards the scanner window.
- the first reflected light that includes infrared wavelengths is not redirected by the mirror; instead, the first reflected light travels from the dental object through the scanner window directly to the first image sensor unit.
- the second reflected light is redirected by the mirror towards the second image sensor unit.
- the first optical path is between the scanner window and the first image sensor unit, and the second optical path is between the mirror and the second image sensor unit.
- the handheld intraoral scanner may include a housing which accommodates the first and the second image sensor unit.
- the fields-of-view of both image sensor units point directly towards the scanner window.
- the distance between the dental object and the image sensor units is shortened significantly, which results in an overall more compact design of the handheld intraoral scanner.
- the processing unit may be configured to determine the three-dimensional (3D) surface model of the dental object by merging the 3D data, and the merging of 3D data may include stitching of multiple 3D subscans generated from multiple two-dimensional images that include the second reflected light.
- the second reflected light includes visible wavelengths.
- the processor unit may be configured to align a position of the plurality of focused images to a position in the 3D model based on the 3D data.
- the alignment may involve aligning the plurality of focused images onto the 3D model, which is then displayed and visualized such that the user sees the plurality of focused images as part of the 3D model.
- the alignment may involve displaying the 3D model in a first window of a graphical user interface displayed on a display unit of the intraoral scanning system.
- one or more of the plurality of focused images are displayed in a second window of the graphical user interface. A user is able to manually select one or more of the plurality of focused images in the second window via the 3D model.
- the manual selection may be provided by moving a marker on the 3D model, and the position of the marker on the 3D model corresponds to the aligned position of the selected one or more focused images.
- an infrared blocker filter may be arranged in front of the second image sensor unit for preventing reflected light including infrared wavelengths from being acquired by the second image sensor unit, and an infrared pass filter may be arranged in front of the first image sensor unit for preventing reflected light outside the infrared wavelength range from being acquired by the first image sensor unit.
- the infrared pass filter may be configured to pass wavelengths above 800 nm or between 800 nm and 1150 nm to the first image sensor unit.
- the processor unit may be configured to align the plurality of focused images onto the 3D surface model.
- the processor unit may be configured to display on a graphical user interface the plurality of focused images aligned onto the 3D surface model, and to allow a user of the intraoral scanning system to change a focus depth of one or more of the aligned plurality of focused images.
- the one or more focused images may be selected by the user by moving a marker on the 3D surface model to different positions on the 3D model, and the change of focus depth may be determined by a focus means on the graphical user interface.
- the focus means may be a sliding bar which the user may move to adjust the focus.
- the marker may be a window, wherein the part of the 3D surface model which is within the window is affected by the adjustment of the focus.
- the one or more of the plurality of focused images that are within the window are selected for adjustment of their focus depth via the focus means.
- the processor unit may be configured to merge the plurality of focused images into the 3D surface model.
- the processor unit may be configured to determine a volumetric point cloud within the 3D surface model based on a neural radiance field model applied to each of the plurality of focused images, and wherein the processor unit may be configured to modify the three-dimensional (3D) surface model by applying the volumetric point clouds to the 3D surface model based on the aligned position of the plurality of focused images to the position in the 3D model.
- the projector unit emits light and the image sensor units acquire reflected light according to a scanning sequence, and during the scanning sequence the plurality of focused images and the 3D data are provided with time stamps. The time stamp of each of the plurality of focused images and the 3D data is used for aligning the position of the plurality of focused images to a position in the 3D surface model.
- the plurality of focused images may include a first time stamp and the 3D data may include a second time stamp, and wherein the alignment of the position of the plurality of focused images to the position in the 3D model may be performed by comparing the first time stamp and the second time stamp.
- the first time stamp may be the same or about the same as the second time stamp, which indicates that the focused image with the first time stamp has been acquired at the same position on the dental object as where the 3D data has been generated.
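- A minimal sketch of such timestamp matching, assuming each focused image and each 3D subscan carries a time stamp in seconds; the record layout and tolerance below are hypothetical:

```python
import bisect

def align_by_timestamp(focused_images, subscans, tolerance_s=0.02):
    """Pair each focused image with the 3D subscan whose time stamp is
    closest; subscans must be sorted by their 't' field (seconds)."""
    times = [s['t'] for s in subscans]
    pairs = []
    for img in focused_images:
        if not times:
            break
        i = bisect.bisect_left(times, img['t'])
        nearby = [j for j in (i - 1, i) if 0 <= j < len(times)]
        j = min(nearby, key=lambda k: abs(times[k] - img['t']))
        if abs(times[j] - img['t']) <= tolerance_s:
            pairs.append((img, subscans[j]))
    return pairs
```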
- the emitted light includes a second visible wavelength, and wherein the second visible wavelength may be equal to or different from the first visible wavelength.
- the first image sensor unit may be configured to acquire third reflected light from the dental object, wherein the third reflected light may include the second visible wavelength.
- the first image sensor unit may be configured to extend the field of view of the intraoral scanning system.
- both the first image sensor unit and the second image sensor unit are arranged within an intraoral scanner which results in an extended field of view of the second image sensor unit.
- the field of view of the first image sensor unit and the second image sensor unit may overlap.
- the first image sensor unit may have a first field-of-view
- the second image sensor unit may have a second field-of-view
- a total field of view of the intraoral scanning system includes a combination of the first and the second field-of-view, wherein the total field-of-view is larger than the first field-of-view and the second field-of-view, respectively.
- the stitching of the 3D subscans is based on an overlap between two or more 3D subscans, and the size of the overlap between the two or more 3D subscans may depend on how fast the user moves the intraoral scanner and on the field-of-view of the camera. Extending the field-of-view increases the overlap between the 3D subscans, and therefore the user is able to move the intraoral scanner faster and still maintain sufficient overlap between the 3D subscans for performing the stitching.
- the extended field-of-view may be achieved by combining the respective fields-of-view of all the image sensor units, which will improve accuracy due to a reduced number of image stitching errors, especially in edentulous regions, where the gum surface is smooth and there may be fewer clear high-resolution 3D features. Having an extended field-of-view enables large smooth features, such as the overall curve of a tooth, to appear in each image frame, which improves the accuracy of stitching the respective surfaces obtained from multiple such image frames.
- the fields-of-view of the first and the second image sensor unit may partially overlap.
- the fields-of-view of the first and the second image sensor unit may not overlap; in this example, the stitching of the 3D subscans is based on knowing the location of the field of view of both image sensor units.
- the processor unit may be configured to monitor a first position of a first field-of-view of the first image sensor unit on a dental object, and use the first position to determine a second position of the second field-of-view of the second image sensor unit.
- the processor unit may be configured to stitch the 3D subscans provided by the two image sensor units based on the first position and the second position.
- the second position may be determined based on a geometrical relation between the first and the second field-of-view.
- the geometrical relation between the first and the second field-of-view is calibrated and stored in a memory unit of the intraoral scanning system.
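- As an illustration, determining the second position from the first can amount to applying a calibrated rigid transform; the sketch below assumes the stored geometrical relation is a rotation matrix and a translation vector (hypothetical names):

```python
import numpy as np

def second_fov_position(first_position, R_calib, t_calib):
    """Map a point observed in the first field-of-view to the second one
    using the calibrated rigid relation (R, t) held in the memory unit."""
    return R_calib @ np.asarray(first_position) + np.asarray(t_calib)
```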
- the first image sensor unit may include a first field-of-view
- the second image sensor unit may include a second field-of-view
- a total field of view of the intraoral scanning system includes an overlap of the first and the second field-of-view, such that the total field of view of the intraoral scanning system may be equal to the first field-of-view or the second field-of-view.
- no extension of the field-of-view of the intraoral scanning system is obtained.
- the purpose of the non-extended field-of-view is to obtain the plurality of focused images and 3D data which are overlapping.
- the overlapping of the plurality of focused images and the 3D data are used for aligning the position of each of the plurality of focused images to a position in the 3D surface model based on the 3D data.
- the penetration depth of the first reflected light, i.e. the infrared reflected light, into the dental object is determined mainly by the wavelength of the first reflected light, and thus the number of micro-lenses in the micro-lens array determines the resolution of the focus depth into the dental object.
- Each of the plurality of focused images corresponds to a focus depth inside the dental object.
- the processor unit may be configured to determine a focus depth for each of the plurality of focused images by selecting two or more pixels of the set of pixels which acquire at least the same point on the dental object, and wherein the selection of the two or more pixels corresponds to the focus depth.
- the selection of the two or more pixels may include a certain combination of the two or more pixels, and wherein the processor unit may be configured to retrieve the focus depth that corresponds to the combination of the two or more pixels from a memory unit of the intraoral scanning system.
- the processor unit may be configured to determine a focus depth for each of the plurality of focused images by selecting two or more pixels of the set of pixels which acquire at least the same point on the dental object, determining a position of the same point on the object by performing ray tracing of the first reflected light from each of the selected two or more pixels to the same point on the dental object, and performing triangulation between the two or more pixels and the position of the same point on the object. A sketch of the triangulation step follows below.
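- The triangulation step can be illustrated with standard midpoint triangulation of two back-projected pixel rays; this is a generic textbook sketch, not the specific method of the disclosure:

```python
import numpy as np

def triangulate_rays(o1, d1, o2, d2):
    """Return the midpoint between the closest points of two rays
    o1 + t1*d1 and o2 + t2*d2 (origins o, directions d)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = o2 - o1
    d = d1 @ d2
    denom = 1.0 - d * d
    if abs(denom) < 1e-12:              # near-parallel rays: degenerate
        return (o1 + o2) / 2.0
    t1 = (b @ d1 - (b @ d2) * d) / denom
    t2 = ((b @ d1) * d - b @ d2) / denom
    return ((o1 + t1 * d1) + (o2 + t2 * d2)) / 2.0
```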
- the ray tracing may be performed at least through one or more dental elements within the dental object, and wherein the processor unit may be configured to determine or retrieve from a memory unit of the intraoral scanning system a refractive index of each of the one or more dental elements, and the ray tracing may be performed by including the refractive index of each of the one or more dental elements.
- the refractive index may correspond to an angle of incidence of a ray of the first reflected light into the one or more dental elements.
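- A minimal sketch of how a ray may be bent at each dental element boundary during the ray tracing, using Snell's law in vector form (a generic optics formula, not text from the disclosure):

```python
import numpy as np

def refract(direction, normal, n1, n2):
    """Refract a unit ray at an interface between media with refractive
    indices n1 and n2 (e.g. air -> enamel); 'normal' is the unit surface
    normal facing the incoming ray.
    Returns None on total internal reflection."""
    cos_i = -float(np.dot(normal, direction))
    ratio = n1 / n2
    sin2_t = ratio * ratio * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None
    cos_t = np.sqrt(1.0 - sin2_t)
    return ratio * direction + (ratio * cos_i - cos_t) * normal
```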
- a dental element may be dentine, enamel, pulp, a caries lesion, a crack, a filling, etc.
- the type of the dental element may be determined by a machine learning model.
- the machine learning model includes determining one or more dental elements for each of the plurality of focused images by determining an intensity level for each of the pixels of the array of pixels of the first image sensor unit, and correlating the intensity level to a trained intensity level which corresponds to a specific type of a dental element.
- the trained intensity level may be trained by manually identifying the type of a dental element on each of a plurality of focused images or on infrared (IR) images generated based on infrared reflected light.
- the identification on each of the plurality of focused images or the IR image(s) may be converted to a trained intensity level.
- the identification on each of the plurality of focused images or the IR image may be converted to a trained intensity level and a trained dental element position.
- the machine learning model includes determining one or more dental elements for each of the plurality of focused images by determining an intensity level for each of the pixels of the array of pixels of the first image sensor unit, determining a position of the intensity level relative to the dental object in the plurality of focused images, and determining the type of a dental element by correlating the intensity level to a trained intensity level and the position to the trained dental element position.
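- In its simplest form, the correlation described above reduces to a nearest-level lookup; the sketch below and its trained intensity values are purely illustrative:

```python
def classify_dental_element(intensity, trained_levels):
    """Return the dental element whose trained intensity level is closest
    to the measured pixel intensity."""
    return min(trained_levels, key=lambda k: abs(trained_levels[k] - intensity))

# hypothetical trained levels from manually annotated IR / focused images
levels = {'enamel': 210.0, 'dentine': 150.0, 'caries': 60.0}
print(classify_dental_element(135.0, levels))  # -> 'dentine'
```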
- the training of the trained intensity levels and/or the trained dental element position may be based on a plurality of focused images or IR images from different patients.
- the type of dental element may be determined by the processor unit based on the neural radiance field model.
- the neural radiance field model may include determining a set of input parameters for a casting object corresponding to each of the plurality of pixels of the first image sensor unit, wherein the set of input parameters comprises spatial location information and viewing angle information.
- the viewing angle information represents the viewing angle of the field-of-view of the first image sensor unit, and the spatial location information is the relative position between the first image sensor unit and the dental object while acquiring the reflected light.
- the casting object may comprise a plurality of point coordinates, and a continuous volumetric machine learning model configured to receive and process the set of input parameters to determine an intensity value and a density value of a three-dimensional inner geometry of the 3D surface model, wherein the continuous volumetric machine learning model is configured to be trained using a plurality of NIR images or focused images.
- the continuous volumetric machine learning model may be trained by: receiving the set of input parameters that correspond to the casting object for each of the plurality of IR images or focused images, wherein the casting object includes the plurality of point coordinates associated with the 3D surface model; generating, based on the continuous volumetric machine learning model using the set of input parameters, the intensity value and the density value for each of the plurality of point coordinates; determining a synthetic pixel value for the casting object based on the corresponding determined intensity value and density value for each of the plurality of point coordinates; and minimizing a loss function between the synthetic pixel value and a corresponding true pixel value of the plurality of pixels of the plurality of IR images or focused images by changing the intensity value and the density value for each of the plurality of point coordinates.
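- The synthetic pixel value and the loss can be sketched with the standard volumetric rendering rule used by neural radiance fields; this is a generic illustration of the training objective under assumed array shapes, not the applicant's implementation:

```python
import numpy as np

def render_ray(intensities, densities, deltas):
    """Composite one cast ray: intensities and densities are the model
    outputs at N sampled point coordinates, deltas the sample spacings."""
    alpha = 1.0 - np.exp(-densities * deltas)               # local opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha)))[:-1]
    weights = trans * alpha                                 # render weights
    return np.sum(weights * intensities)                    # synthetic pixel

def loss(synthetic_pixels, true_pixels):
    """Squared error between the synthetic pixels and the true pixels of
    the IR / focused images, minimized during training."""
    return float(np.mean((np.asarray(synthetic_pixels)
                          - np.asarray(true_pixels)) ** 2))
```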
- Each of the plurality of focused images corresponds to a focus depth which the processor unit may be configured to convert into a penetration depth in the dental object.
- the penetration depth may be in relation to a reference focus depth which is determined by the 3D data of the dental object.
- the processor unit may be configured to determine a surface focus depth based on the three-dimensional (3D) data of the dental object, and wherein the surface focus depth corresponds to a surface of the dental object.
- the processor unit may be configured to determine a penetration depth into the dental object of each of the plurality of focused images, and wherein the penetration depth is a difference between the reference focus depth and the focus depth of each of the plurality of focused images.
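- Numerically, the penetration depth is a simple difference; a sketch follows, assuming the sign convention that deeper focus planes have larger focus depth values:

```python
def penetration_depth(focus_depth_mm, surface_focus_depth_mm):
    """Depth below the tooth surface for one focused image, as the
    difference between its focus depth and the reference (surface)
    focus depth obtained from the 3D data."""
    return focus_depth_mm - surface_focus_depth_mm

print(penetration_depth(21.5, 18.0))  # image focused 3.5 mm into the tooth
```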
- the penetration depth may be indicated on a graphical user interface of a display unit on or next to each of the plurality of focused images or the 3D model which includes the plurality of focused images.
- the intraoral scanning system may include a display unit that may be configured to display the 3D model and one or more focused images of the plurality of focused images of the dental object.
- the display unit may further display a focus adjustment means, and wherein the processor unit may be configured to select the one or more focused images based on a focus depth that may be determined by the focus adjustment means.
- the focus adjustment means may be a range slider. The user may adjust the position of a slider on the focus adjustment means, and while moving the slider the focus depth changes, and so does the selection of one or more focused images of the plurality of focused images.
- the focus adjustment means may not be visible on the graphical user interface of the display unit, and the focus depth is then adjusted via the focus adjustment means by swiping across the 3D model with the plurality of focused images, or across a separate window displaying the one or more of the plurality of focused images. In another window the 3D model is displayed.
- FIGS. 1A to 1E illustrate different examples of an intraoral scanning system
- FIG. 2 illustrates another example of an intraoral scanning system
- FIGS. 3 A, 3B, and 3C illustrate different examples of a field-of-view of image sensor units
- FIGS. 4A and 4B illustrate a plurality of focused images and a surface focused depth, respectively;
- FIGS. 5A, 5B and 5C illustrate different examples of a first image sensor unit
- FIGS. 6A, 6B, and 6C illustrate different examples of a sequence of projected light from a projecting unit
- FIGS. 7A and 7B illustrate different examples of a neural radiance field model
- FIGS. 8A and 8B illustrate different examples of displaying a 3D surface model and one or more of the plurality of focused images.
- the electronic hardware may include microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure.
- Computer program shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
- a scanning for providing intra-oral scan data may be performed by a dental scanning system that may include an intraoral scanning device such as the TRIOS series scanners from 3Shape A/S.
- the dental scanning system may include a wireless capability as provided by a wireless network unit.
- the scanning device may employ a scanning principle such as triangulation-based scanning, confocal scanning, focus scanning, ultrasound scanning, x-ray scanning, stereo vision, structure from motion, optical coherent tomography OCT, or any other scanning principle.
- the scanning device is capable of obtaining surface information by projecting a pattern and translating a focus plane along an optical axis of the scanning device while capturing a plurality of 2D images at different focus plane positions, such that each series of captured 2D images corresponding to each focus plane forms a stack of 2D images.
- the acquired 2D images are also referred to herein as raw 2D images, wherein raw in this context means that the images have not been subject to image processing.
- the focus plane position is preferably shifted along the optical axis of the scanning system, such that 2D images captured at a number of focus plane positions along the optical axis form said stack of 2D images (also referred to herein as a sub-scan) for a given view of the object.
- the scanning device is generally moved and angled relative to the dentition during a scanning session, such that at least some sets of sub-scans overlap at least partially, in order to enable reconstruction of the digital dental 3D model by stitching overlapping 3D subscans together in real-time and display the progress of the virtual 3D model on a display as a feedback to the user.
- the result of stitching is the digital 3D representation of a surface larger than that which can be captured by a single sub-scan, i.e. which is larger than the field of view of the 3D scanning device.
- Stitching, also known as registration and fusion, works by identifying overlapping regions of 3D surface in various sub-scans and transforming the sub-scans to a common coordinate system such that the overlapping regions match, finally yielding the digital 3D model.
- An Iterative Closest Point (ICP) algorithm may be used for this purpose.
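- For illustration, one ICP iteration pairs nearest points and solves the optimal rigid transform in closed form (Kabsch/SVD); the sketch below is a textbook version under the assumption of brute-force correspondences:

```python
import numpy as np

def icp_step(source, target):
    """One ICP iteration: match each source point to its nearest target
    point, then fit the rigid transform (R, t) with the Kabsch/SVD method.

    source, target: (N, 3) and (M, 3) point clouds from two sub-scans."""
    d2 = ((source[:, None, :] - target[None, :, :]) ** 2).sum(-1)
    matched = target[np.argmin(d2, axis=1)]       # nearest neighbours
    cs, cm = source.mean(0), matched.mean(0)
    H = (source - cs).T @ (matched - cm)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                            # proper rotation
    t = cm - R @ cs
    return R, t                                   # apply as R @ p + t
```

Iterating this step until the transform converges aligns one sub-scan onto another; a full pipeline would use a spatial index instead of the brute-force distance matrix.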
- Another example of a scanning device is a triangulation scanner, where a time varying pattern is projected onto the dental arch and a sequence of images of the different pattern configurations are acquired by one or more cameras located at an angle relative to the projector unit.
- the scanning device comprises one or more light projectors configured to generate an illumination pattern to be projected on a three-dimensional dental arch during a scanning session.
- the light projector(s) preferably comprises a light source, a mask having a spatial pattern, and one or more lenses such as collimation lenses or projection lenses.
- the light source may be configured to generate light of a single wavelength or a combination of wavelengths (mono- or polychromatic). The combination of wavelengths may be produced by using a light source configured to produce light (such as white light) comprising different wavelengths.
- the light projector(s) may comprise multiple light sources such as LEDs individually producing light of different wavelengths (such as red, green, and blue) that may be combined to form light comprising the different wavelengths.
- the light produced by the light source may be defined by a wavelength defining a specific color, or a range of different wavelengths defining a combination of colors such as white light.
- the scanning device comprises a light source configured for exciting fluorescent material of the teeth to obtain fluorescence data from the dental arch.
- a light source may be configured to produce a narrow range of wavelengths.
- the light from the light source is infrared (IR) light, which is capable of penetrating dental tissue.
- the light projector(s) may be DLP projectors using a micro-mirror array for generating a time-varying pattern, or a diffractive optical element (DOE), or back-lit mask projectors, wherein the light source is placed behind a mask having a spatial pattern, whereby the light projected on the surface of the dental arch is patterned.
- the back-lit mask projector may comprise a collimation lens for collimating the light from the light source, said collimation lens being placed between the light source and the mask.
- the mask may have a checkerboard pattern, such that the generated illumination pattern is a checkerboard pattern. Alternatively, the mask may feature other patterns such as lines or dots, etc.
- the scanning device preferably further comprises optical components for directing the light from the light source to the surface of the dental arch.
- the specific arrangement of the optical components depends on whether the scanning device is a focus scanning apparatus, a scanning device using triangulation, or any other type of scanning device.
- a focus scanning apparatus is further described in EP 2 442 720 B1 by the same applicant, which is incorporated herein in its entirety.
- the image sensor(s) may be a monochrome sensor or include a color filter array such as a Bayer filter and/or additional filters that may be configured to substantially remove one or more color components from the reflected light and retain only the other non-removed components prior to conversion of the reflected light into an electrical signal.
- additional filters may be used to remove a certain part of a white light spectrum, such as a blue component, and retain only red and green components from a signal generated in response to exciting fluorescent material of the teeth.
- the network unit may be configured to connect the dental scanning system to a network comprising a plurality of network elements including at least one network element configured to receive the processed data.
- the network unit may include a wireless network unit or a wired network unit.
- the wireless network unit is configured to wirelessly connect the dental scanning system to the network comprising the plurality of network elements including the at least one network element configured to receive the processed data.
- the wired network unit is configured to establish a wired connection between the dental scanning system and the network comprising the plurality of network elements including the at least one network element configured to receive the processed data.
- the dental scanning system preferably further comprises a processor configured to generate scan data (such as extra-oral scan data and/or intra-oral scan data) by processing the two-dimensional (2D) images acquired by the scanning device.
- the processor may be part of the scanning device.
- the processor may comprise a Field- programmable gate array (FPGA) and/or an Advanced RISC Machines (ARM) processor located on the scanning device.
- the scan data comprises information relating to the three- dimensional dental arch.
- the scan data may comprise any of: 2D images, 3D point clouds, depth data, texture data, intensity data, color data, and/or combinations thereof.
- the scan data may comprise one or more point clouds, wherein each point cloud comprises a set of 3D points describing the three-dimensional dental arch.
- the scan data may comprise images, each image comprising image data e.g. described by image coordinates and a timestamp (x, y, t), wherein depth information can be inferred from the timestamp.
- the image sensor(s) of the scanning device may acquire a plurality of raw 2D images of the dental arch in response to illuminating said object using the one or more light projectors.
- the plurality of raw 2D images may also be referred to herein as a stack of 2D images.
- the 2D images may subsequently be provided as input to the processor, which processes the 2D images to generate scan data.
- the processing of the 2D images may comprise the step of determining which part of each of the 2D images are in focus in order to deduce/generate depth information from the images.
- the internal depth information may be used to generate 3D point clouds comprising a set of 3D points in space, e.g., described by cartesian coordinates (x, y, z).
- the 3D point clouds may be generated by the processor or by another processing unit.
- Each 2D/3D point may furthermore comprise a timestamp that indicates when the 2D/3D point was recorded, i.e., from which image in the stack of 2D images the point originates.
- the timestamp is correlated with the z-coordinate of the 3D points, i.e., the z-coordinate may be inferred from the timestamp.
- the output of the processor is the scan data, and the scan data may comprise image data and/or depth data, e.g. described by image coordinates and a timestamp (x, y, t) or alternatively described as (x, y, z).
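- Assuming a linear focus sweep, inferring the z-coordinate from the timestamp is a straightforward interpolation; the sweep endpoints in this sketch are hypothetical calibration values:

```python
def z_from_timestamp(t, t_start, t_end, z_start, z_end):
    """Map a 2D point's timestamp to a z-coordinate, assuming the focus
    plane sweeps linearly along the optical axis from z_start at t_start
    to z_end at t_end."""
    frac = (t - t_start) / (t_end - t_start)
    return z_start + frac * (z_end - z_start)
```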
- the scanning device may be configured to transmit other types of data in addition to the scan data. Examples of data include 3D information, texture information such as infra-red (IR) images, fluorescence images, reflectance color images, x-ray images, and/or combinations thereof.
- IR: infra-red
- FIGS. 1A to 1E illustrate different examples of an intraoral scanning system 1.
- the system 1 includes a projector unit 2 configured to emit light and structured light onto a dental object 11.
- the emitted light includes an infrared wavelength
- the structured light includes a first visible wavelength.
- the system 1 includes a first image sensor unit 4 including a first image sensor 5 configured to acquire first reflected light from the dental object 11, wherein the first reflected light includes the infrared wavelength.
- the first image sensor 5 includes an array of pixels 5, and a micro-lens array 7 arranged in front of the array of pixels 5 and configured to convey the first reflected light to the array of pixels 5.
- One or more micro lenses of the micro-lens array 7 directs the first reflected light to one or more pixels of the array of pixels 5.
- the system 1 further includes a second image sensor unit 3 including a second image sensor 3B configured to acquire second reflected light from the dental object 11, wherein the second reflected light includes the structured light.
- the system 1 further includes a processor unit 6 configured to determine three-dimensional (3D) data of the dental object 11 based on the second reflected light, a three-dimensional (3D) surface model of the dental object 11 by merging the 3D data, and a plurality of focused images at different focus depth into the dental object 11 by selection of a set of pixels from the array of pixels associated with each micro-lens of the micro-lens array 7.
- the processor unit 6 is configured to align a position of the plurality of focused images to a position in the 3D surface model based on the 3D data.
- FIG. 1C illustrates an example, where the first image sensor unit 4 is arranged in vicinity to the mirror 9 wherein the field- of-view of the first image sensor unit 4 is directed towards a scanning window 15.
- the first image sensor unit 4 includes an additional lens to direct the reflected light towards the micro-lens array 7.
- the projector unit 2 includes a first light source 2A that is configured to emit structured light with visible wavelengths and a second light source 2B configured to emit light including an infrared wavelength, and the emitted infrared light is directly transmitted towards the scanning window 15, and wherein the emitted structured light is directed towards the scanning window 15 via the mirror 9.
- the intraoral scanning system 1 in FIG. 1D is similar to the system 1 in FIG. 1C; however, the second image sensor unit 3 includes a plurality of cameras (3A, 3B). In this example, the plurality of cameras (3A, 3B) includes two cameras.
- the handheld intraoral scanner 10 includes a housing which has a tip end 20, a distal end 21 which is opposite to the tip end, and a midpoint 22 which is positioned equally distant from the tip end 20 and the distal end 21.
- the first image sensor unit 4 is arranged in vicinity to the tip end 20 or the midpoint 22.
- the second image sensor unit 3 is arranged in vicinity to the midpoint 22.
- the first and second image sensor unit (3, 4) and the projector unit 2 are arranged in vicinity to the tip end 20 of the housing 10.
- the fields-of-view of both image sensor units (3, 4) are directed towards the scanning window 15, and the projector unit 2 is configured to emit directly towards the scanning window 15.
- FIG. 2 illustrates yet another example of an intraoral scanning system 1 that comprises a projector unit 2 configured to emit light and structured light onto a dental object 11, wherein the emitted light includes an infrared wavelength and the structured light includes a first visible wavelength.
- the system 1 further includes a first image sensor unit 4 including a first image sensor 5 configured to acquire reflected light from the dental object, and wherein the first image sensor 5 includes an array of pixels, and a micro-lens array 7 arranged in front of the array of pixels and configured to convey the reflected light to the array of pixels, wherein one or more micro-lenses of the micro-lens array 7 direct the reflected light to one or more pixels of the array of pixels 5.
- the system 1 includes a processor unit 6 configured to determine three-dimensional (3D) data based on the reflected light that includes the structured light, a three-dimensional (3D) surface model of the dental object 11 by merging the 3D data, and a plurality of focused images at different focus depths into the dental object 11 by selection of a set of pixels from the array of pixels 5 associated with each micro-lens of the micro-lens array 7.
- the processor unit 6 is configured to align a position of the plurality of focused images to a position in the 3D model based on the 3D data.
- the second image sensor unit 3 is configured to capture the first reflected light that includes infrared wavelength(s).
- the second image sensor unit 3 includes a Bayer filter 13 that includes one or more red, green and blue filter channels that are configured to filter the white light of the second reflected light into the different colors and forward these colors to respective pixels of the pixel array of the second image sensor 12.
- the Bayer filter 13 includes one or more combined filter channels where at least a color and infrared wavelengths are filtered and forwarded to the array of pixels, such as green and infrared.
- the projector unit 2 is configured to switch between visible wavelengths and infrared wavelengths such that visible light and infrared light are being emitted sequentially, i.e. at different time slots, and not at the same time slot, i.e. in parallel.
- the switching between visible wavelengths and infrared wavelengths may include turning on and off the respective light sources of the projector unit 2 that emit the respective wavelengths, or turning the power of the light sources up and down such that capturing both visible and infrared wavelengths with the combined filter channel has minimal negative effect on the 3D data or the infrared data.
- the Bayer filter 13 includes the one or more red, green, infrared and blue filter channels that are configured to filter the white light and infrared such that each of the pixels of the array of pixels of the second image sensor unit 3 receives either the red, green, infrared or blue light.
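- As a sketch of reading out such a combined filter, the four channel planes of a 2x2 RGB-IR mosaic can be separated by strided slicing; the tile layout below is an assumption, as the disclosure does not fix one:

```python
import numpy as np

def split_rgbir_mosaic(raw):
    """Split an (H, W) sensor frame with an assumed 2x2 cell layout
        R  G
        IR B
    into its red, green, infrared and blue planes (H and W even)."""
    return {
        'red':      raw[0::2, 0::2],
        'green':    raw[0::2, 1::2],
        'infrared': raw[1::2, 0::2],
        'blue':     raw[1::2, 1::2],
    }
```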
- the second image sensor unit 3 is configured to capture infrared wavelengths and provide a reference infrared image to the processor unit 6.
- the processor unit 6 is configured to improve the resolution of the plurality of focused images by comparing each of the plurality of focused images with the reference infrared image for selecting a focused image which has the same or about the same focus depth.
- the selected focused image, the plurality of focused images and the reference infrared image are fed into a machine learning algorithm which is configured to improve the resolution of the plurality of focused images by performing an interpolation of the pixels of each of the plurality of focused images based on a comparison of the selected focused image and the reference infrared image.
- FIGS. 3A, 3B and 3C illustrate different examples of a field-of-view (30A, 30B) of the image sensor units (3, 4).
- the first image unit sensor unit 4 has a first field-of-view 30A and the second image sensor unit 3 has a second field-of-view 3 OB.
- the two field-of-views (30A, 30B) are arranged such that the first field-of-view 30A extends the second field-of-view 30B.
- the processor unit 6 is configured to align the 3D data provided by the images captured by the two image sensor units (3,4) based on a time stamp of each of the captured images.
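A minimal sketch of such time-stamp-based pairing follows, assuming monotonically increasing stamp lists and an arbitrary 5 ms skew tolerance (the patent only states that alignment is based on the time stamps).

```python
import bisect

def pair_frames_by_timestamp(first_stamps, second_stamps, tolerance_s=0.005):
    """Pair each frame of the first image sensor unit with the nearest-in-time
    frame of the second unit; frames with no partner within the tolerance
    are left unpaired."""
    pairs = []
    for i, t in enumerate(first_stamps):
        j = bisect.bisect_left(second_stamps, t)
        # Compare the neighbours around the insertion point.
        candidates = [k for k in (j - 1, j) if 0 <= k < len(second_stamps)]
        if not candidates:
            continue
        k = min(candidates, key=lambda k: abs(second_stamps[k] - t))
        if abs(second_stamps[k] - t) <= tolerance_s:
            pairs.append((i, k))  # frame i of unit 4 matches frame k of unit 3
    return pairs
```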
- the stitching of the 3D subscans is based on knowing the location of the field of view of both image sensor units (3,4).
- the processor unit 6 may be configured to monitor a first position of a first field-of-view of the first image sensor unit 4 on a dental object 11, and use the first position to determine a second position of the second field-of-view of the second image sensor unit 3.
- the processor unit 6 may be configured to stitch the 3D subscans provided by the two image sensor units (3,4) based on the first position and the second position.
- the second position may be determined based on a geometrical relation between the first and the second field-of-view (30A,30B).
- the geometrical relation between the first and the second field-of-view (30A, 30B) is calibrated and stored in a memory unit of the intraoral scanning system 1.
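A minimal sketch of deriving the second position from the first via the stored calibration, assuming the geometrical relation is representable as a 4x4 homogeneous transform (one possible encoding among several):

```python
import numpy as np

def second_fov_position(first_position, calibrated_transform):
    """Map the monitored 3D position of the first field-of-view to the
    corresponding position of the second field-of-view.

    `calibrated_transform` is assumed to be the factory-calibrated 4x4
    homogeneous matrix stored in the scanner's memory unit.
    """
    p = np.append(np.asarray(first_position, dtype=float), 1.0)  # homogeneous
    return (calibrated_transform @ p)[:3]
```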
- the stitching of the 3D subscans is based on an overlap between two or more 3D subscans, and the size of the overlap between the two or more 3D subscans may depend on how fast the user moves the intraoral scanner 10 and on the field-of-view of the camera (3,4). Extending the field-of-view increases the overlap between the 3D subscans; the user is therefore able to move the intraoral scanner faster and still maintain a proper overlap between the 3D subscans for performing the stitching, as the rough model below illustrates.
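As a rough illustration of this trade-off, the frame-to-frame overlap can be modelled as a function of field-of-view length and scanner speed; the linear model, the `overlap_fraction` helper and all numbers below are assumptions, since the patent only states the qualitative dependence.

```python
def overlap_fraction(fov_length_mm, scan_speed_mm_s, frame_interval_s):
    """Fraction of one field-of-view still shared with the next frame after
    the scanner has moved during one frame interval (purely illustrative)."""
    travel = scan_speed_mm_s * frame_interval_s
    return max(0.0, 1.0 - travel / fov_length_mm)

# Example: at 100 mm/s and 30 fps, a 15 mm FOV keeps more overlap than a
# 10 mm FOV, so faster hand motion is tolerated before stitching degrades.
print(overlap_fraction(15.0, 100.0, 1 / 30))  # ~0.78
print(overlap_fraction(10.0, 100.0, 1 / 30))  # ~0.67
```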
- the extended field-of-view may be achieved by combining the respective fields-of-view of all the image sensor units (3,4), which improves accuracy by reducing the number of image stitching errors, especially in edentulous regions, where the gum surface is smooth and there may be fewer clear high-resolution 3D features. Having an extended field-of-view enables large smooth features, such as the overall curve of the tooth, to appear in each image frame, which improves the accuracy of stitching respective surfaces obtained from multiple such image frames.
- the disadvantage of this solution is a reduction of the extended field-of-view compared with the example with no overlap, but the stitching of the 3D subscans provided by the two image sensor units (3,4) is considerably simpler.
- the two fields-of-view (30A, 30B) are fully overlapping, which means that no extension of the field-of-view of either image sensor unit (3,4) is obtained.
- the purpose of the non-extended field-of-view 30 is to obtain a plurality of focused images and 3D data which are overlapping. The overlap of the plurality of focused images and the 3D data is used for aligning the position of each of the plurality of focused images to a position in the 3D surface model based on the 3D data.
- FIGS. 4A and 4B illustrate the plurality of focused images (41A-41E) and the surface focus depth 40.
- the first reflected light 43 is a sum of multiple internal reflections from inside the dental object 11, which is resolved by selecting a focused image from the plurality of focused images 41.
- Each of the plurality of focused images corresponds to a focus depth.
- the structured reflected light 42 is used for determining a reference focus depth 40 which corresponds to the surface of the dental object 11.
- a penetration depth is determined by the processor unit 6 based on the reference focus depth 40 and the focus depth of each of the plurality of focused images (41A-41E).
- the penetration depths (44A-44E) are determined for each of the plurality of focused images (41A-41E) as the difference between the reference focus depth 40 and the focus depth of each of the plurality of focused images (41A-41E).
- the penetration depth (44A-44E) may be converted to a distance by the processor unit 6 based on a calibration factor stored in a memory unit of the system 1 (see the sketch below).
- the calibration factor may be determined during the manufacture of the system 1 or the handheld intraoral scanner 10.
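A numeric sketch of the penetration-depth computation and the calibration-factor conversion described above; the units, the focus-step values and the 0.05 factor are assumed for illustration only.

```python
def penetration_depths(reference_focus_depth, focus_depths, calibration_factor):
    """Penetration depth per focused image: the difference between the
    reference (surface) focus depth and each image's focus depth, scaled to
    a physical distance by the factory-stored calibration factor."""
    return [(reference_focus_depth - d) * calibration_factor for d in focus_depths]

# Example: surface at focus step 40, five focused images 41A-41E deeper in.
print(penetration_depths(40, [38, 36, 34, 32, 30], 0.05))
# -> [0.1, 0.2, 0.3, 0.4, 0.5]  (e.g. mm into the tooth, assumed units)
```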
- FIGS. 5A - 5C illustrate different examples of the first image sensor unit 4.
- the first reflected light is guided by an optical lens module 50 towards the micro-lens array 7, which then redirects the first reflected light towards one or more pixels of the array of pixels 5.
- the plurality of focused images includes two focused images from two different focus planes (54,55).
- the focused images include one focused image 55 from a focus depth that is within the enamel 51 and another focused image 54 from a focus depth within the dentine 52.
- a caries 56 is seen in the enamel 51 of the dental object 11, and in this example, the focused image 55 corresponds to the focus plane 55, which depicts the caries 56 inside the enamel 51 in focus.
- FIGS. 6A-6C illustrate different examples of a sequence (60, 60A, 60B) of projected light from the projector unit 2.
- FIG. 6A illustrates an example where visible wavelengths, including white 61 and blue 62 wavelengths, are emitted in parallel with the emitted infrared wavelengths 63.
- the projector unit 2 is configured to emit pulses of the different wavelengths; in FIG. 6B, however, the projector unit 2 is configured to constantly emit infrared wavelengths 63 while the white 61 and blue 62 wavelengths are interchangeably switched on and off.
- the projector unit 2 is configured to switch between three different wavelengths (61,62,63), wherein the emitted white 61 and blue 62 wavelengths are interchangeably switched on and off.
- the emitted infrared wavelengths 63 are turned up and down in power, as the illumination-plan sketch below illustrates.
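A sketch of such an illumination plan, with frame-level time slots and a 0.2 idle infrared power level as assumed values (the patent specifies neither the slot granularity nor the power levels):

```python
def illumination_plan(n_slots, mode="parallel"):
    """Per-time-slot illumination levels for the projector unit 2.

    'parallel' keeps infrared constantly on while white and blue alternate
    (FIG. 6B style); 'sequential' cycles white, blue and infrared slots,
    turning the infrared power down rather than fully off (FIG. 6C style).
    """
    plan = []
    for i in range(n_slots):
        if mode == "parallel":
            visible = "white" if i % 2 == 0 else "blue"
            plan.append({"white": float(visible == "white"),
                         "blue": float(visible == "blue"),
                         "ir": 1.0})
        else:  # 'sequential'
            if i % 3 == 0:
                plan.append({"white": 1.0, "blue": 0.0, "ir": 0.2})
            elif i % 3 == 1:
                plan.append({"white": 0.0, "blue": 1.0, "ir": 0.2})
            else:
                plan.append({"white": 0.0, "blue": 0.0, "ir": 1.0})
    return plan
```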
- FIGS. 7A and 7B illustrate different examples of a neural radiance field model used by the processor unit 6 to determine a volumetric point cloud within the 3D surface model based on one or more of the plurality of focused images 718.
- the neural radiance field model may include determining a set of input parameters for a casting object 724 corresponding to each of the plurality of pixels of the first image sensor unit 4, wherein the set of input parameters comprises spatial location information and viewing angle information.
- the viewing angle information represents the viewing angle of the field-of-view of the first image sensor unit 4, and the spatial location information is a relative position between the first image sensor unit 4 and the dental object while acquiring the reflected light.
- the casting object (724-728) may comprise a plurality of point coordinates (726A-726E), (732A-732E), and a continuous volumetric machine learning model configured to receive and process the set of input parameters to determine an intensity value and a density value of a three-dimensional inner geometry of the 3D surface model, wherein the continuous volumetric machine learning model is configured to be trained using a plurality of NIR images or focused images.
- the casting object 724 is a ray and in FIG. 7B, the casting object 728 is a cone.
- the processor unit 6 may be configured to determine an average of the content within a visible volume 734 for the pixel 722.
- the cone 728 may include a plurality of point coordinates, depicted as point coordinates 732A, 732B, 732C, 732D and 732E (collectively referred to as point coordinates 732, hereinafter). Further, the cone 728 may be sliced into conical frustums corresponding to the point coordinates 732. For example, a conical frustum 736 may correspond to the point coordinate 732D. To this end, each of the point coordinates 732 along the cone 728 may be transformed with a positional encoding of a volume of the corresponding conical frustums. The point coordinates 732 may be sampled along the cone 728.
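A minimal sketch of sampling point coordinates along a casting object through one pixel and positionally encoding them, assuming a standard NeRF-style sinusoidal encoding; the frustum-volume integration of the cone variant and the volumetric network that predicts intensity and density per point are omitted here.

```python
import numpy as np

def positional_encoding(x, n_freqs=6):
    """NeRF-style sinusoidal encoding of coordinates; the exact encoding
    used in the patent's model is not specified and is assumed here."""
    out = [x]
    for k in range(n_freqs):
        out.append(np.sin((2.0 ** k) * np.pi * x))
        out.append(np.cos((2.0 ** k) * np.pi * x))
    return np.concatenate(out, axis=-1)

def sample_casting_object(origin, direction, n_samples=5, near=0.1, far=2.0):
    """Sample point coordinates (e.g. 732A-732E) along a ray or cone axis
    through one pixel and encode each one. A full model would also encode
    the viewing angle and feed the result to the continuous volumetric
    network."""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    direction = direction / np.linalg.norm(direction)
    ts = np.linspace(near, far, n_samples)
    points = origin[None, :] + ts[:, None] * direction[None, :]
    return points, positional_encoding(points)
```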
- FIGS. 8A and 8B illustrate different examples of displaying the 3D surface model and one or more of the plurality of focused images on a graphical user interface 80 of the system 1.
- the plurality of focused images 41 has been aligned onto the 3D surface model 81.
- the focus depth is adjusted by a focus adjustment means 82, which in this example is a sliding bar.
- the focus adjustment means 82 can also be a rotatable button.
- a focused image (41A-41D) of the plurality of focused images 41 is displayed in a window separate from the window showing the 3D surface model 81; a sketch of mapping the slider position to the displayed image follows below.
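A sketch of how the focus adjustment means 82 could map to the displayed focused image; the callback name and the nearest-depth mapping are assumptions, as the patent only requires that the control sets the focus depth, and GUI wiring is omitted.

```python
import numpy as np

def on_focus_slider_change(slider_value, focus_depths, focused_images):
    """Map the slider position to the nearest available focus depth and
    return the focused image to show next to the 3D surface model."""
    idx = int(np.argmin([abs(d - slider_value) for d in focus_depths]))
    return focused_images[idx], focus_depths[idx]
```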
- a projector unit configured to emit light onto a dental object, wherein the emitted light includes an infrared wavelength and structured light that includes a first visible wavelength;
- the intraoral scanning system comprising an infrared blocker filter that is configured to block reflected light including infrared wavelengths from being acquired by the second image sensor unit, and an infrared pass filter configured to pass through wavelengths above 800 nm or between 800 nm and 1100 nm.
- the first image sensor unit includes a first field-of-view
- the second image sensor unit includes a second field-of-view
- a total field-of-view of the intraoral scanning system includes a combination of the first and the second field-of-view, wherein the total field-of-view is larger than the first field-of-view and the second field-of-view, respectively.
- the processor unit is configured to determine a volumetric point cloud within the dental object based on a neural radiance field model applied to each of the plurality of focused images.
- the intraoral scanning system according to item 14, wherein the processor unit is configured to modify the three-dimensional (3D) surface model by applying the volumetric point clouds to the 3D surface model based on the aligned position of the plurality of focused images to the position in the 3D model.
- the first time stamp and the second time stamp correspond to a scanning sequence
- the first time stamp corresponds to a time at which the first image sensor unit acquires the first reflected light
- the second time stamp corresponds to a time at which the second image sensor unit acquires the second reflected light
- the intraoral scanning system according to any of the previous items, wherein the first visible wavelength is between 400 nm and 700 nm, a second visible wavelength is between 350 nm and 500 nm, and the infrared wavelength is between 800 nm and 1150 nm.
- the intraoral scanning system according to any of the previous items comprising a handheld intraoral scanner that includes a first optical path that guides the first and the second reflected light to the first image sensor unit and the second image sensor unit, respectively, or the handheld intraoral scanner includes a first optical path and a second optical path for the first image sensor unit and the second image sensor unit, respectively, wherein the first and second optical paths are configured to guide the first and second reflected light to the first and second image sensor unit, respectively.
- the intraoral scanning system comprising a display unit configured to:
- the processor unit is configured to select the one or more focused images based on a focus depth that is determined by the focus adjustment means.
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Surgery (AREA)
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Medical Informatics (AREA)
- Biophysics (AREA)
- Pathology (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Animal Behavior & Ethology (AREA)
- Radiology & Medical Imaging (AREA)
- Optics & Photonics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Signal Processing (AREA)
- Dentistry (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)
- Endoscopes (AREA)
Abstract
The present disclosure relates to an intraoral scanning system configured to determine a three-dimensional surface model and a plurality of focused images. The intraoral scanning system comprises: a projector unit configured to emit light onto a dental object, the emitted light containing an infrared wavelength and structured light that includes a first visible wavelength; a first image sensor unit comprising a first image sensor configured to acquire a first reflected light from the dental object, the first reflected light comprising the infrared wavelength, and the first image sensor comprising an array of pixels, and a micro-lens array arranged in front of the array of pixels and configured to convey the first reflected light to the array of pixels, one or more micro-lenses of the micro-lens array directing the first reflected light to one or more pixels of the array of pixels; a second image sensor unit comprising a second image sensor configured to acquire a second reflected light from the dental object, the second reflected light comprising the structured light; a processor unit configured to determine three-dimensional (3D) data of the dental object based on the second reflected light, a three-dimensional (3D) surface model of the dental object by merging the 3D data, and a plurality of focused images at different focus depths into the dental object by selecting a set of pixels from the array of pixels associated with each micro-lens of the micro-lens array, the processor unit being configured to align a position of each of the plurality of focused images to a position in the 3D surface model based on the 3D data.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| DKPA202370617 | 2023-12-14 | ||
| DKPA202370617 | 2023-12-14 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025125551A1 (fr) | 2025-06-19 |
Family
ID=94173335
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2024/086180 Pending WO2025125551A1 (fr) | 2023-12-14 | 2024-12-13 | Système de balayage intrabuccal à images focalisées alignées sur un modèle de surface 3d |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025125551A1 (fr) |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP2442720B1 (fr) | 2009-06-17 | 2016-08-24 | 3Shape A/S | Appareil d'exploration à focalisation |
| CN108965653A (zh) * | 2017-05-27 | 2018-12-07 | 欧阳聪星 | 一种口腔内窥器 |
| US20230380942A1 (en) * | 2022-05-26 | 2023-11-30 | Align Technology, Inc. | Intraoral scanner with waveguide pattern projector |
- 2024-12-13: WO application PCT/EP2024/086180 published as WO2025125551A1 (fr), status: active, Pending
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10104363B2 (en) | Optical system in 3D focus scanner | |
| US10944953B2 (en) | Method and apparatus for colour imaging a three-dimensional structure | |
| US9956061B2 (en) | Methods and systems for generating color images | |
| JP5856610B2 (ja) | カラー光学印象による経時的三次元測定装置 | |
| US8279450B2 (en) | Intra-oral measurement device and intra-oral measurement system | |
| US20220296103A1 (en) | Intra-oral scanning device with integrated Optical Coherence Tomography (OCT) | |
| US20100253773A1 (en) | Intra-oral measurement device and intra-oral measurement system | |
| KR102458985B1 (ko) | 단층 촬영 융합형 구강 스캐너 | |
| KR101740334B1 (ko) | 치과용 3차원 스캐너 | |
| EP4272630A1 (fr) | Système et procédé pour fournir une rétroaction dynamique pendant le balayage d'un objet dentaire | |
| WO2025125551A1 (fr) | Système de balayage intrabuccal à images focalisées alignées sur un modèle de surface 3d | |
| WO2025202064A1 (fr) | Système de balayage intrabuccal doté d'un boîtier de pointe configuré pour transmettre une lumière infrarouge | |
| WO2025202067A1 (fr) | Système de balayage intrabuccal à programmes de séquence de balayage améliorés | |
| WO2024260907A1 (fr) | Système de balayage intrabuccal pour déterminer un signal de couleur visible et un signal infrarouge | |
| CN121038736A (zh) | 口腔外扫描仪系统 | |
| WO2024260743A1 (fr) | Système de balayage intra-buccal pour déterminer un signal infrarouge | |
| US20250387075A1 (en) | Method for determining optical parameters to be displayed on a three-dimensional model | |
| WO2025202066A1 (fr) | Système de balayage intrabuccal permettant de fournir un signal de rétroaction qui comporte un niveau de qualité d'un modèle tridimensionnel | |
| WO2024260906A1 (fr) | Mesures volumétriques d'une région interne d'un objet dentaire | |
| CN121039699A (zh) | 用于在三维模型上叠加二维图像的口内扫描仪系统和方法 | |
| WO2024146786A1 (fr) | Système de balayage intrabuccal pour déterminer des informations de balayage composites |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24833247; Country of ref document: EP; Kind code of ref document: A1 |