WO2025078622A1 - System and method for intraoral scanning - Google Patents
- Publication number
- WO2025078622A1 (PCT/EP2024/078723)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- dental object
- dental
- information
- images
- internal feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61C—DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
- A61C9/00—Impression cups, i.e. impression trays; Impression methods
- A61C9/004—Means or methods for taking digitized impressions
- A61C9/0046—Data acquisition means or methods
- A61C9/0053—Optical means or methods, e.g. scanning the teeth by a laser or light beam
Definitions
- Intraoral scanners are electronic devices that may be used, for example, for capturing digital images of the oral cavity of a subject.
- the intraoral scanners may include light sources that may project light rays onto an object to be scanned, such as teeth, gums, and other intraoral structures inside the oral cavity of the subject.
- computer-aided design process may be used to create a virtual three-dimensional model of the teeth of the subject using the digital images captured by the intraoral scanner.
- the digital images of the teeth are imported into a computer-aided design (CAD) program, which creates a final virtual 3D model of dentition of the teeth of the subject.
- intraoral scanners are used for examination or treatments inside the oral cavities of subjects.
- the use of intraoral scanners may eliminate the use of conventional impression material and plaster models, simplify clinical treatment procedures of teeth for the dentists, as well as reduce discomfort.
- the virtual 3D model of dentition may only provide surface information of the dentition relating to the teeth of the subject.
- conventional intraoral scanners may not be suited to detect internal structures of the teeth, for example, structures of enamel and dentin within each tooth of the subject.
- the intraoral scanners may fail to determine the general internal composition of a tooth inside the oral cavity of the subject.
- the 3D model of the surface of the dentition of the subject generated using digital images produced by conventional intraoral scanners is not suitable for detecting defects or anomalies occurring inside the teeth of the subject.
- the 3D model may fail to provide any information relating to development of caries, cracks and/or bacteria within enamel and underlying dentin of the teeth, bleeding or any other damage within the enamel and the underlying dentin of the teeth, and/or deep cracks or margin lines or other errors in a prepared dental prosthesis (such as crowns, bridges, implants, inlays, onlays, etc.).
- information on the internal structure of a natural tooth or a prosthetic implant, in conjunction with surface information, may be crucial for effective treatment of the patient and for ensuring long-lasting restorations with a natural tooth or prosthetic implant.
- conventional methods may be used for constructing a 3D model of the teeth by projecting 2D images of the teeth onto a 3D model.
- white light images may be segmented and placed onto a 3D model counterpart using segmentation and stitching techniques to create the 3D model of the teeth of the subject.
- conventional methods may cause an intraoral scanner to capture a large number of images from various viewpoints for constructing the 3D model of the teeth using white light images.
- the conventional methods of generating the 3D model of the teeth possess several disadvantages.
- Some embodiments are based on the understanding that sub-surface structure in the 3D model of the dental structure may get displaced if the 3D surface model is moved, such as rotated about an axis, to view from different directions.
- the IR or NIR images projected onto the 3D surface model may appear at incorrect positions when the 3D model is moved for viewing from different directions.
- such conventional 3D model may fail to provide accurate sub-surface structure information for the dental structure. This may result in incorrect examination, diagnosis, and treatment for the subject.
- the handheld intraoral scanner 104 may be configured to communicate with the processors 106 via the communication channel 108.
- the handheld intraoral scanner 104 may include a web server that may be configured to communicate via a web network and establish a connection to the communication channel 108.
- the handheld intraoral scanner 104 may be configured to execute the web server to provide visible light information and IR information obtained from the sensors to the processors 106.
- the handheld intraoral scanner 104 may further include a processing unit, a memory unit, a communication interface, the sensors, and additional components.
- the processing unit, the memory unit, the communication interface, and the additional components may be communicatively coupled to each other. Details of the components of the handheld intraoral scanner 104 are further provided, for example, in FIG. 1B and FIG. 9.
- the processors 106 are configured to determine 3D surface information of the dental object of the subject based on the captured visible light images 114, particularly, white light information or white light images.
- the processors 106 are configured to generate the 3D surface model 118 of the dental object based on the visible light information.
- the 3D surface information may be a digital representation of a dental arch of the dental object, such as tooth, gums, or a set of teeth, of the subject depicted in a 3D space.
- the 3D surface information may include, for example, 3D point cloud data corresponding to the visible light images 114 or white light images.
- the 3D point cloud data may correspond to 3D real-world coordinates.
- the 3D surface information may additionally include colour texture reflected from a surface of the dental object, i.e., a surface of a tooth or a set of teeth.
- the processors 106 may include processing capabilities that may be required to process the visible light images 114 (or visible light information) and IR images 112 (or IR information).
- the processors 106 may be configured to establish a connection to the communication channel 108.
- the processors 106 may be within the handheld intraoral scanner 104.
- the processors 106 may receive the visible light information or the visible light images 114 and IR information or the IR images 112 from the handheld intraoral scanner 104 or sensors of the handheld intraoral scanner 104 to generate 3D surface information for the dental object of the subject. Further, based on the 3D surface information derived from the visible light images 114, such as white light images, the 3D surface model 118 of the dental object of the subject is generated.
- the processors 106 may further render the 3D surface information or the 3D surface model 118 of the dental object into an interactive 3D graphical representation. Details of generating the 3D surface model 118 for the dental object are further described in conjunction with, for example, FIG. 2.
- the processors 106 are further configured to generate a plurality of two-dimensional (2D) internal feature images based on the visible light information and the IR information.
- the plurality of 2D internal feature images indicate 2D inner geometry features for the dental object.
- the plurality of 2D internal feature images may be hybrid images that may be generated based on the IR information and the visible light information relating to one or more sub-spectrums.
- the plurality of 2D internal feature images may be generated based on combination of white light image information, fluorescent light (such as fluorescent red and/or fluorescent green) image information and IR image information.
- the plurality of 2D internal feature images or the 2D hyperspectral images may be generated to differently colour the different enhanced internal structures of the dental object for making it easier for a user of the system 102 to distinguish between the different enhanced internal structures.
- each of a plurality of channels associated with composed scan information such as channels for visible light information and IR information, may be assigned to different colours.
- the processors 106 may be configured to weigh each of the plurality of channels differently or similarly with a channel weighting coefficient, as sketched below. Details of generating the 2D internal feature images or the 2D hyperspectral images are further described in conjunction with, for example, FIG. 3A, FIG. 3B, FIG. 3C, FIG. 3D, FIG. 8A, FIG. 8B, FIG. 8C, FIG. 8D, FIG. 8E, FIG. 8F, and FIG. 8G.
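- As an illustration of the channel weighting described above, the following Python sketch composes a colour image from weighted spectral channels; the function and parameter names are hypothetical, and each channel is assumed to be a normalised 2D array:

```python
import numpy as np

def compose_hyperspectral(channels, colours, weights):
    """Assign each spectral channel (e.g. white, fluorescent green,
    fluorescent red, IR) a display colour and a channel weighting
    coefficient, then sum the weighted contributions into one RGB image.

    channels: dict name -> 2D float array in [0, 1]
    colours:  dict name -> (r, g, b) display colour in [0, 1]
    weights:  dict name -> channel weighting coefficient
    """
    height, width = next(iter(channels.values())).shape
    out = np.zeros((height, width, 3), dtype=np.float32)
    for name, image in channels.items():
        rgb = np.asarray(colours[name], dtype=np.float32)
        # Weighted contribution of this channel in its assigned colour.
        out += weights[name] * image[..., None] * rgb[None, None, :]
    return np.clip(out, 0.0, 1.0)
```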
- the reference frames may indicate vectors that convey information about a local surface orientation at each control point of the 3D mesh.
- the control points may be used to map the pixels of the plurality of 2D internal feature images to their corresponding positions in order to generate the overlay 120 of the plurality of 2D internal feature images onto the 3D surface model 118.
- the one or more reference frames of the dental object in the 3D surface model 118 may provide information relating to geometry and orientation of surfaces of the 3D surface model 118. Subsequently, the reference frames of the dental object, when correlated with the 2D inner geometry features, provide accurate positions for overlaying the plurality of 2D internal feature images. Details of the reference frames are provided in conjunction with, for example, FIG. 1D.
- the plurality of 2D internal feature images may be overlaid and provided as output.
- the output may include the overlay 120 of the 2D image information, i.e., the 2D internal feature images, onto the 3D surface model 118.
- the correlated reference frames and the 2D inner geometry features may indicate a manner in which pixels from the plurality of 2D internal feature images should be displaced for each control point in the 3D mesh to accurately provide surface and sub-surface information for different viewing angles to a viewer.
- the overlay 120 of the dental object may be rendered as an interactive 3D graphical representation.
- the interactive 3D graphical representation may be rendered on a display unit of a device.
- the interactive 3D graphical representation may be viewed by the viewers or users, such as the dentists on the display to diagnose any disease or anomaly within the dental object of the subject.
- a view or viewing angle of the interactive 3D graphical representation of the overlay 120 may be modified by the users, based on a preference. For example, a perspective of the interactive 3D graphical representation may be changed, or the interactive 3D graphical representation may be zoomed-in or zoomed-out as per the preference of the users.
- FIG. 1B shows an example schematic diagram 122 of the intraoral scanning system 102, in accordance with an example embodiment.
- the intraoral scanning system 102 includes the handheld intraoral scanner 104 configured to scan a dental object 124 of a subject.
- the handheld intraoral scanner 104 may emit light of various wavelengths in a pulsating manner.
- the images of the dental object 124 may be captured using light having different wavelengths, such as white light, blue light, IR light, NIR light, fluorescent light, etc.
- visible light images 114 and/or the IR images 112 for the dental object may be captured from the same position and at the same time due to the high pulse repetition rate of the light emitted from the handheld intraoral scanner 104.
- the handheld intraoral scanner 104 may include a projector unit configured to emit light at different wavelengths, such as at near-infrared wavelength, infrared wavelength, white-coloured wavelengths and/or coloured visible wavelengths onto at least the dental object 124.
- the projector unit may be configured to emit light with different wavelengths in a pulsating manner during different time periods onto at least the dental object 124, wherein the different wavelengths include a near-infrared wavelength, an infrared wavelength, and a visible wavelength.
- the visible light emitted by the projector unit may include one or more colour lights of the filtered visible light signals, such as red, green, blue, and white.
- the projector unit may include multiple light sources that are configured to emit the one or more colour lights and the infrared light.
- the multiple light sources may be arranged within a single module that includes multiple Light Emitting Diodes (LEDs) that are configured to emit different wavelengths within the visible and non-visible wavelength ranges.
- the light source, i.e., one or more LEDs that may be configured to emit infrared light, may be arranged separately from the light source that is configured to emit the visible light.
- the handheld intraoral scanner 104 may include an image sensor configured to capture the visible light information and the IR information from at least the dental object 124 caused by the emitted light of the projector unit.
- the image sensor is configured to capture the visible light information and the internal light information from at least the dental object 124 caused by the visible wavelength and the near-infrared and/or infrared wavelength, respectively.
- the image sensor unit may include multiple cameras, such as, high speed cameras.
- the multiple cameras may be arranged around the projector unit or next to the projector unit.
- the image sensor unit may include a plurality of pixels, wherein each of a plurality of single-color channels and each of a plurality of combined-colour channels may be aligned to each of the plurality of pixels.
- each of the colour channels may overlap with a pixel of the image sensor or with a group of pixels of the image sensor.
- the visible light information and IR information generated by the handheld intraoral scanner 104 may be communicated to the processors 106 over wired or wireless communication channel 108.
- the processors 106 may be implemented as a computing device 126A or a server 126B external to the handheld intraoral scanner 104.
- the system 102 includes the one or more processors arranged in the handheld intraoral scanner 104, the external computer 126A and/or the server 126B.
- the handheld intraoral scanner 104 may include a processor, and the processor is configured to process sensor data from sensors of the handheld intraoral scanner 104 into information, such as the visible light information and IR information configured to be transmitted to the external computer 126A or the server 126B.
- the external computer 126A or the server 126B may include the processors 106 to process the received visible light information and IR information.
- the subject may require a dental treatment.
- the system 102 may be utilized by a user, such as a dentist to provide the dental treatment to the subject.
- the subject may be present at a dental clinic.
- the system 102 may be utilized in a treatment room of the dental clinic.
- the subject may have requested a home visit for the dental treatment.
- the system 102 may be utilized in the home of the subject.
- the handheld intraoral scanner 104 may be utilized by the user to capture the visible light images 114 and the IR images 112 of the dental object 124 of the subject, using one or more sensors.
- the projector unit may be configured to constantly emit the infrared light while emitting the visible light.
- an on-off switching of the emitted infrared light is avoided, and thereby, unwanted transients on the emitted infrared light are avoided.
- any timing issue between the visible light and the infrared light is also avoided.
- the pulsed infrared light and the visible light may be emitted for time periods having a ratio of 2:1, i.e., for every 2 seconds of visible light emission or 2 pulses of visible light, the IR light may be emitted for the next 1 second or 1 pulse, respectively.
- FIG. 1C illustrates another example schematic diagram of the intraoral scanning system 102, in accordance with an embodiment.
- the processors 106 of the system 102 receive the visible light information 128 or the visible light images 114 and IR information 130 or IR images 112, for example, from the handheld intraoral scanner 104.
- the processors 106 are configured to generate the IR information 130 based on a subtraction of combined light signals from one or more colour light signals of the acquired visible light information 128.
- the processors 106 are configured to generate, at 132, a plurality of 2D internal feature images based on the visible light information 128 and the IR information 130.
- the plurality of 2D internal feature images indicate 2D inner geometry features for the dental object 124.
- inner geometry features of the dental object 124 may be determined.
- the processors 106 are also configured to determine, at 134, 3D data of the dental object based on the visible light information 128 to further generate the 3D surface model 118 of the dental object 124.
- the 3D surface model 118 may include a mesh with a plurality of control points indicating surface colour and/or texture information of the dental object 124.
- the processors 106 are also configured to process, at 136, the plurality of 2D internal feature images to correlate the 2D inner geometry features of the dental object 124 with one or more normal lines of the mesh of the 3D surface model 118 of the dental object 124.
- the correlation of the 2D inner geometry features with the normal lines of the 3D surface model 118 may enable mapping of sub-surface information from the 2D internal feature images onto the 3D surface model 118.
- the subsurface information may provide colour information, shade information and inner region information determined based on at least the infrared information 130 or the plurality of 2D internal feature images.
- the processors 106 are configured to output, at 138, the overlay 120 of the plurality of 2D internal feature images onto the 3D surface model 118.
- the overlay 120 includes surface 3D geometry and inner geometry of the dental object 124 or a dentition.
- the processors 106 may be configured to determine the overlay 120 and the 3D surface model 118 in parallel based on the IR information 130 and the visible light information 128.
- the embodiments of the present disclosure may utilize mixing signals or information obtained from imaging the dental object 124 with different light sources, i.e., white, blue and IR or NIR, in order to obtain a 2D internal feature image or a hyperspectral image.
- the 2D internal feature images may include a 2D composition of inner geometry features for the dental object 124. It is an objective of the present disclosure to combine information from a 2D internal feature image or a hyperspectral image, i.e., images taken with white/blue/IR light in order to generate the overlay 120 displaying various diagnoses like cracks, caries, bacteria etc.
- the enamel 312A and the dentin 312B are more clearly seen in the composed 2D internal feature images 310 than in the infrared information 130.
- the plurality of 2D internal feature images may also be generated using excited red fluorescence light information, such as by summation of IR information 130, the green fluorescence light information 128C and the red fluorescence light information.
- Such 2D internal feature images generated based on the IR information 130, the green fluorescence light information 128C and the red fluorescence light information may also improve the visibility of the DEJ (dentin-enamel junction) compared to regular or enhanced fluorescence information, or compared to the IR information 130 independently.
- the visible light signals (128A, 128C and 128D) include surface information of the dental object 124 provided by the emitted white wavelength pulses, fluorescence green wavelength pulses and fluorescence red wavelength pulses provided by the emitted blue wavelengths.
- the surface information is used for generating or updating the 3D model 118.
- the fluorescence information (128C and 128D) may be used for generating a plurality of 2D internal feature images 314.
- the processors 106 may be configured to determine a first difference between the infrared information 130 and the green fluorescence light information 128C and a second difference between the infrared information 130 and the red fluorescence light information 128D.
- the 2D internal feature images 314 are generated based on a summation of the first difference and the second difference.
- the generated 2D internal feature images 314 provide enhanced internal structure information relating to the dental object 124.
- the 2D internal feature images 314 indicate sub-surface information, such as anomalies, anatomical structure, etc., more clearly in comparison to the independent infrared information 130. A sketch of this difference-and-summation step follows below.
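- A minimal Python sketch of the combination described above, assuming ir, fluo_green and fluo_red are co-registered 2D arrays (the names and the final contrast normalisation are illustrative assumptions):

```python
import numpy as np

def internal_feature_image(ir, fluo_green, fluo_red):
    """(IR - green fluorescence) + (IR - red fluorescence), followed by
    a contrast re-normalisation to [0, 1]."""
    ir = ir.astype(np.float32)
    first_difference = ir - fluo_green.astype(np.float32)
    second_difference = ir - fluo_red.astype(np.float32)
    combined = first_difference + second_difference
    # Re-adjust the signal contrast of the combined image.
    combined -= combined.min()
    if combined.max() > 0:
        combined /= combined.max()
    return combined
```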
- the plurality of 2D internal feature images generated may include non-object information.
- the plurality of 2D internal feature images may include parts corresponding to tongue, walls, etc. of the oral cavity of the subject.
- such information is not required for generating the 3D surface model 118 and/or the overlay 120 of the 2D internal features of the dental object 124 onto the 3D surface model. Further, such information may increase processing time and required computing power for generating the overlay 120.
- non-object information that may not relate to the dental object, such as the teeth and the gum, in the plurality of 2D internal feature images needs to be eliminated. Details of removing the non-object information are described in conjunction with, for example, FIG. 4A and FIG. 4B.
- FIG. 4A illustrates a flowchart 400 of a method for generating an object mask for the internal feature images, in accordance with an example embodiment.
- the 2D internal feature images may be generated based on the IR information 130 and the visible light information 128.
- the IR information 130 may be enhanced based on different spectrum or colour based excited fluorescence light information and/or white light information. Such enhancement of the IR information 130 based on different spectrums may result in a clearer view of the inner structure of the dental object 124.
- certain IR images and visible light images may be collected from the same time frame and the same scanning position due to the high pulse repetition rate.
- an object mask that may be generated using the visible light images may be implemented on or may also correspond to the IR images.
- the object mask may be used to filter or isolate specific parts of an image, such as the plurality of 2D internal feature images, while excluding or suppressing unwanted regions.
- the object mask may be implemented as a binary image where each pixel is assigned a value of either 1 (to include a pixel) or 0 (to exclude a pixel) based on predefined criteria or a pattern.
- the predefined criteria or pattern may be defined based on the visible light images.
- the object mask may be employed to perform masking or mask-based filtering to apply selective processing to the plurality of 2D internal feature images.
- the processors 106 are configured to generate an object mask using visible light information captured from a scanning position.
- the scanning position corresponds to a position of the one or more sensors of the handheld intraoral scanner 104.
- the processors 106 are configured to generate multiple masks based on different scanning positions in which the handheld intraoral scanner 104 (referred to as scanner 104, hereinafter) may be moved.
- the processors 106 are configured to identify visible light information or visible light images captured by the scanner 104. Based on the identified visible light images for the scanning position, one or more parts of the dental object 124 in the identified visible light images may be defined and identified.
- the object mask may be generated based on a predefined criteria or automatically based on image processing techniques.
- the object mask may be generated based on identification of parts of the dental object 124 in images, such as visible light images. Further, a pixel value '1' may be assigned to pixels that correspond to such parts of the dental object 124 in the images.
- the object mask may define which parts of the images should be included or excluded in the filtering process.
- the techniques used for creating the object mask may include, but are not limited to, thresholding, edge detection, and region segmentation.
- the processors 106 are configured to apply the object mask on the plurality of 2D internal feature images.
- a set of 2D internal feature images associated with the scanning position may be identified.
- the set of 2D internal feature images may be generated based on combining of visible light images and IR images captured from the scanning position. Further, as the pulse repetition rate for emitting visible wavelength pulses and IR wavelength pulses is high, the visible light images and the IR images may be captured from the same scanning position.
- the set of 2D internal feature images may be segmented.
- the object mask may be applied to the set of 2D internal feature images to segment at least a portion of the 2D internal feature images. Details of applying the object mask on the 2D internal feature images are described in conjunction with, for example, FIG. 4B.
- the object mask 408 may be an image corresponding to the visible light images, such that the object mask 408 may have assigned pixel values of '1' to parts of the dental object 124 in the visible light images.
- the object mask 408 may be generated based on assigning the pixel values of '1' to pixels corresponding to the dental object 124 in the visible light images.
- the object mask 408 may be generated based on assigning the pixel values of '1' to pixels corresponding to outer edges of the dental object 124 in the visible light images.
- the object mask 408 may be applied to segment parts of the set of 2D internal feature images indicating non-object information.
- the set of 2D internal feature images generated based on the combining of the visible light images with IR images taken from the same scanning position may be segmented.
- the object mask 408 may be, for example, overlaid on the 2D internal feature images to perform element-wise multiplication. Each pixel value in the mask is multiplied with the corresponding pixel value in an image from the set of 2D internal feature images. If the mask pixel is 1, the corresponding pixel in the image remains unchanged; if the mask pixel is 0, the corresponding pixel in the input image is set to 0.
- the object mask 408 is applied to the IR information in the set of 2D internal feature images to generate filtered IR information 414A as well as visible light information (such as fluorescence light information) in the set of 2D internal feature images to generate filtered visible light information 414B.
- the processors 106 are configured to remove the segmented portion of the 2D internal feature images indicating the non-object information.
- the result of the element-wise multiplication generates images where only the pixels of the set of 2D internal feature images that align with the 1s in the object mask 408 are retained. In this manner, the segmented portion indicating the non-object information may be removed from the set of 2D internal feature images, as sketched below.
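- The element-wise masking described above can be expressed in Python as follows (a simplified sketch; the names are hypothetical):

```python
import numpy as np

def apply_object_mask(object_mask, feature_images):
    """Element-wise multiplication of a binary object mask with each 2D
    internal feature image: pixels where the mask is 1 are retained,
    pixels where the mask is 0 (non-object information) are set to 0."""
    object_mask = object_mask.astype(np.float32)
    return [image.astype(np.float32) * object_mask for image in feature_images]
```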
- the one or more object masks generated based on the white light images may be applied to captured blue light images and the IR images in order to remove background, redundant information, or non-teeth information from the plurality of 2D images 110.
- the object masks may segment parts corresponding to the object-information e.g., the teeth. This is possible because the white light images, blue light images and IR images are taken from the same position at the same time frame.
- one or more object masks may be generated for filtering the plurality of 2D internal feature images based on different scanning positions of the scanner 104. Further, in some cases, the masked or filtered 2D internal feature images may undergo various filtering operations, such as blurring, sharpening, contrast adjustment, noise reduction, or any other image processing technique to enhance visibility of the internal structure of the dental object 124. These operations are applied only to the selected regions of interest defined by the object mask(s). Once segmented, the plurality of 2D internal feature images may be processed to be correlated with the 3D surface model 118 for generating the overlay 120. Details of processing the 2D internal feature images are described in conjunction with, for example, FIG. 5A and FIG. 5B.
- FIG. 5A illustrates a method flowchart 500 for estimating one or more object poses for the dental object, in accordance with an example embodiment.
- the one or more object poses are generated by the processors 106 using the 2D internal feature images and the 3D model 118.
- the one or more object poses are generated using a machine learning model.
- FIG. 5A is explained in conjunction with elements of FIG. 1A, FIG. 1B, FIG. 1C, FIG. 1D, FIG. 2, FIG. 3A, FIG. 3B, FIG. 3C, FIG. 3D, FIG. 4A and FIG. 4B.
- the one or more object poses of inner geometry for the dental object 124 may be estimated using a trained machine learning model. Moreover, the one or more object poses may be used to overlay a plurality of 2D internal feature images (referred to as 2D internal feature images) of the dental object onto the 3D surface model 118 (referred to as 3D model, hereinafter).
- each tooth in the 3D model 118 may belong to a tooth type, such as a central incisor, a lateral incisor, a canine, a first premolar, a second premolar, a first molar, a second molar, a third molar, etc.
- a template object model (also referred to as a template tooth model) may be used for the estimation.
- the template object model may be identified and aligned based on the position of the dental object or the tooth in the 3D model 118 and a tooth type of the dental object or the tooth. For example, based on the position of the tooth in the 3D model 118, a tooth type of the tooth may be identified. Further, the template tooth model of the identified tooth type may be aligned with the position of the dental object in the 3D model 118. Subsequently, the template object model has a corresponding tooth type, such as one of central incisor, a lateral incisor, a canine, a first premolar, a second premolar, a first molar, a second molar, and a third molar.
- the template object model may provide generic feature information relating to the dental object, i.e., the tooth, based on the type of the tooth.
- the template object model may indicate shape, size, and anatomical structure of surface and/or inner structure of the tooth.
- in FIG. 5B, a schematic diagram 510 for alignment of a template object model 512 is shown, in accordance with an example embodiment.
- the template object model 512 also includes a 3D coordinate system 514 indicating an orientation of the dental object with the 3D model 118.
- the template object model 512 may have the predefined 3D coordinate system 514 having coordinate axes.
- the different coordinate axes (such as X, Y and Z) of the 3D coordinate system 514 may correspond to different tooth axes to provide structure information of the tooth type in the different directions in 3D.
- the values of the coordinate axes of the 3D coordinate system 514 of the template object model 512 may indicate an orientation of the template object model 512 of the dental object or the tooth with the 3D model 118.
- the template object model 512 may be placed at an origin of the 3D coordinate system 514.
- the values along the coordinate axes of the 3D coordinate system 514 may be manipulated to transform characteristics of the template object model 512 into characteristics of the dental object or tooth corresponding to it.
- a coordinate axis of the 3D coordinate system 514 is aligned with an occlusal reference frame 516 of the dental object in the 3D model 118.
- the occlusal reference frame 516 may be a normal line or a perpendicular from an occlusal plane 518 of the dental object in the 3D model 118.
- the occlusal reference frame 516 may pass through a surface of the dental object, i.e., surface normal, in the 3D model 118.
- the surface normal or the occlusal reference frame 516 may define an occlusal direction for aligning the 3D coordinate system 514 with the 3D model 118. This may ensure that the template object model 512 is aligned within the surface of the dental object identified in the 3D model 118.
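- One way to compute such an alignment is a rotation that maps a template coordinate axis onto the occlusal surface normal; the following Python sketch uses Rodrigues' formula and is an illustrative assumption, not necessarily the method used by the system:

```python
import numpy as np

def align_axis_to_normal(axis, normal):
    """Rotation matrix mapping a template axis onto the occlusal normal
    (Rodrigues' rotation formula)."""
    a = axis / np.linalg.norm(axis)
    n = normal / np.linalg.norm(normal)
    v = np.cross(a, n)                     # rotation axis (unnormalised)
    c = float(np.dot(a, n))                # cosine of the rotation angle
    if np.isclose(c, -1.0):
        # Antiparallel vectors: rotate 180 degrees about any axis
        # perpendicular to `a`.
        p = np.array([1.0, 0.0, 0.0]) if abs(a[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        u = np.cross(a, p)
        u /= np.linalg.norm(u)
        return 2.0 * np.outer(u, u) - np.eye(3)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)
```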
- the estimation of the one or more object poses for transforming the template object model 512 to the actual dental object is performed using the mesh of the 3D model 118, the occlusal direction and the segmentation of the 2D internal feature images based on the object mask 408.
- the processors 106 are configured to correlate a position of each of the 2D internal feature images of the dental object with the position of the dental object in the 3D model 118.
- the position of the dental object in the 3D model may be identified based on calibration data relating to the sensors of the scanner 104.
- the 2D internal feature images showing or relating to the dental object may be identified based on a mapping between the scanning position and/or scanning time frame of the visible light information used for generating one or more control points of the dental object in the 3D model 118.
- 2D internal images for the dental object may be categorized for or moved to the position of the dental object in the 3D model 118. This may be done for each dental object or tooth in the set of teeth. For a full set of teeth, by positioning the 2D internal feature images based on their corresponding positions in the 3D surface model, certain gaps associated with the inner structure of the dental objects may be filled and/or repetitive information may be eliminated. This may improve accuracy and efficiency of further processing of the 2D internal feature images.
- the 2D internal feature images may be moved to or positioned at the position of the dental object in the 3D model 118.
- the scanner 104 may capture visible light information and IR information for the dental object from a same location due to high pulse repetition rate of the projecting unit emitting the IR light and the visible light (such as white light and blue light).
- the order in which the visible light information corresponding to the dental object is received, and subsequent points of the 3D model 118 corresponding to the dental object are generated, may be close to, such as within a predefined range from, the order of receiving the IR information that is used for generating the 2D internal feature images for the dental object.
- the processors 106 are configured to determine or correlate the position of the 2D internal feature images with the position of the dental object in the 3D model 118 based on a correlation of the order of the incoming visible light information and the order of the incoming IR information corresponding to the dental object.
- the order may be defined in terms of, for example, calibration data of the sensors of the scanner 104, scanning position or location of the scanner 104 with respect to the dental object, scanning time frame, etc.
- without such correlation, the positioning of the 2D internal feature images with respect to the 3D surface model may be misaligned.
- the correlation may allow establishing a relationship between the 2D internal feature images of the dental object and its corresponding 3D representation in the 3D model 118.
- This correlation may enable the processors 106 to accurately determine a location of the dental object within the 3D scene or the 3D model 118. This may also enable tracking of the dental object’s position in the 3D real-world coordinates when the 3D model is rendered.
- correlating the position of the 2D internal feature images with the position of the dental object in the 3D surface model may also be used for spatial registration, i.e., to ensure that the 3D model aligns correctly with the real-world environment.
- the processors 106 are configured to correlate a viewing orientation of each of the 2D internal feature images of the dental object with the 3D coordinate system 514 of the template object model 512 of the dental object.
- the viewing orientation of IR information as well as visible light information for a same time frame for the dental object may be close or the same, as the wavelength pulses are emitted at a high rate. Therefore, the viewing orientation or camera orientation of the scanner 104 while capturing the 2D internal feature image generated based on the IR information and the visible light information may be determined based on the calibration data of the sensors of the scanner 104.
- values of pixel(s) and/or points at 10 degrees from the Y coordinate axis of the template object model may be adjusted, updated, or modified based on the 2D internal feature image.
- the correlation of the viewing orientations of the 2D internal feature images with the 3D coordinate system 514 allows performing 3D reconstruction of the inner geometry of the dental object by transforming the template object model based on the 2D internal feature images.
- the mapping of information from the 2D internal feature images to the template object model 512 becomes accurate and straightforward.
- the one or more object poses of the dental object may be determined.
- the values of the 2D internal feature images may provide sub-surface information of the dental object from different directions.
- values of matrix of the template object model 512 may be updated.
- the one or more poses i.e., the matrix may be used to transform the template object model 512 to the dimensions of the scanned dental object or the scanned tooth.
- a manner in which the template object model 512 is transformed is further described in conjunction with, for example, FIG. 6A, FIG. 6B and FIG. 6C.
- FIG. 6A illustrates a method flowchart 600 for correlating 2D inner geometry features with the 3D model 118 for the dental object, in accordance with an example embodiment. It is crucial to determine the correlation between the 2D inner geometry features with the 3D model 118 accurately. To this end, overlay of the 2D internal feature images onto the 3D surface model is performed based on the correlation of the inner geometry features with the 3D surface model. The accurate correlation further ensures that the overlay of the 2D internal feature images is accurately updated in cases where a viewing angle of the 3D model by a viewer is changed, i.e., the 3D model is rotated, moved, zoomed, etc.
- FIG. 6A is explained in conjunction with elements of FIG. 1A, FIG. 1B, FIG. 1C, FIG. 1D, FIG. 2, FIG. 3A, FIG. 3B, FIG. 3C, FIG. 3D, FIG. 4A, FIG. 4B, FIG. 5A and FIG. 5B.
- This correlation of the positions of the 2D internal feature images with the position of the dental object in the 3D model 118 enables establishing a relationship between the 2D internal feature images and its corresponding 3D representation, so as to ensure tracking of the location of the dental object within the 3D scene or model.
- information from the 2D internal feature images may be mapped onto the template object model 512.
- the one or more object poses (such as position and orientation) of the inner geometry of the dental object may be estimated for the viewing orientation(s). Determination of the one or more poses is crucial for manipulation of information of the 2D internal feature images of the dental object during overlaying.
- the one or more object poses for the inner geometry of the dental object may include a matrix.
- each of the objects poses may be defined as a matrix that transforms a template object model 512 or a template tooth to the scanned dental object or tooth for corresponding viewing direction or orientation.
- the matrix of the object poses may indicate a direction or orientation corresponding to the transformation of the template object model 512 based on the viewing orientations of the 2D internal feature. Details of the estimation of the one or more object poses of the 2D inner geometry for the dental object are described in conjunction with, for example, FIG. 5 A and FIG. 5B.
- the trained machine learning model may be implemented as a convolutional neural network (CNN) and/or other deep learning architecture.
- the trained machine learning model may take the template object model 512 and the 2D internal feature images as input and predict the object poses for different orientations, usually represented as translation (X, Y, Z) and rotation (pitch, yaw, roll) values.
- training dataset comprising pairs of template object models and their corresponding ground truth poses for different orientations may be fed to the machine learning model.
- This data is used to teach the model how to associate visual features in the 2D internal feature images and the template object model with specific poses or orientations.
- relevant features such as key points, edges, corners, or other distinctive visual elements, may be extracted by the machine learning model during the training to learn to map the direction or orientation information from the 2D internal feature images with the coordinate axes of the 3D template object model.
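- For illustration, a pose-regression network of this kind could be sketched as below (PyTorch; the architecture, input size and channel count are assumptions and not taken from the disclosure):

```python
import torch
import torch.nn as nn

class PoseRegressor(nn.Module):
    """Regresses a 6-DoF object pose, i.e. translation (X, Y, Z) and
    rotation (pitch, yaw, roll), from a single-channel 2D internal
    feature image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 6)   # (tx, ty, tz, pitch, yaw, roll)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))
```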
- the processors 106 are configured to determine one or more intermediate poses for the dental object.
- matrix interpolation techniques may be applied to affine matrices to obtain the one or more intermediate poses for the dental objects.
- the 2D internal feature images may be positioned or placed on orientations or directions defined by the object poses for the inner geometry of the dental object.
- accurate orientation for positioning of 2D internal feature images may not be defined for the interproximal area indicated by one or more pairs of interproximal images, i.e., images indicating information relating to interproximal areas situated or occurring in between two adjacent dental objects or two adjacent teeth in the set of teeth.
- orientation for positioning 2D internal feature images for the interproximal areas may be obtained based on a smooth interpolation between two object poses defined by corresponding pair of interproximal images from the positioned 2D internal feature images.
- each of the two object poses for the interproximal area corresponding to the dental object includes an affine matrix.
- affine matrices may represent various types of geometric transformations that preserve parallel lines and ratios of distances.
- the affine matrices corresponding to the inner geometry of the dental object may be changed.
- the template object model may be transformed based on the dental object and the 2D internal feature images may be positioned onto the orientation defined by the poses of the inner geometry.
- the affine matrices may be used to transform the template object model while keeping straight lines straight and keeping the proportions of shapes the same.
- the affine matrix may include a set of numbers organized in a grid, where the numbers in the matrix enable scaling, and/or translation of the images.
- Affine matrices combine linear transformations with translations, and they include information relating to translations, rotations, scaling, shearing, and combinations thereof. Affine matrices are used to represent and apply these transformations to points or vectors in the 3D model 118, particularly, on the positioned 2D internal feature images. For example, an affine matrix for a pose or orientation of the 2D inner geometry indicated by a 2D internal feature image may combine linear transformations and translations for the 2D internal feature image.
- an affine matrix $E$ with a rigid motion $R \in \mathbb{R}^{3 \times 3}$ and translation $s \in \mathbb{R}^{3}$ may be defined in homogeneous coordinates as: $E = \begin{pmatrix} R & s \\ 0^{\top} & 1 \end{pmatrix}$
- the smooth interpolation may be performed as a linear interpolation.
- smooth interpolation may also be performed using Spherical Linear Interpolation (SLERP) for quaternion representations of $R_1$, $R_2$ of the two affine matrices $E_1$ and $E_2$.
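- A minimal Python sketch of such an intermediate-pose interpolation, assuming 4x4 homogeneous affine matrices and using SciPy's Slerp for the rotation parts (the function name is illustrative):

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_pose(E1, E2, t):
    """Intermediate pose between two 4x4 affine matrices: SLERP on the
    rotations R1, R2 and linear interpolation on the translations s1, s2."""
    rotations = Rotation.from_matrix([E1[:3, :3], E2[:3, :3]])
    slerp = Slerp([0.0, 1.0], rotations)
    E = np.eye(4)
    E[:3, :3] = slerp(t).as_matrix()                     # interpolated rotation
    E[:3, 3] = (1.0 - t) * E1[:3, 3] + t * E2[:3, 3]     # interpolated translation
    return E
```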
- in FIG. 6B and FIG. 6C, schematic diagrams of overlaying one or more object poses onto the 3D model 118 are illustrated, in accordance with an example embodiment.
- the 3D coordinate system (depicted as 3D coordinate systems 610A, 610B, 610C, 610D, and collectively referred to as 3D coordinate systems 610) corresponding to the template tooth model for each tooth from the set of teeth in the 3D model 118 may be positioned onto a position of the tooth in the 3D model 118.
- a template tooth model transformed to indicate features of the tooth may also be positioned at a position of the tooth in the 3D model 118.
- the 3D coordinate system of the tooth is correlated with the viewing or camera orientation of the 2D internal feature images of the tooth. Subsequently, the 3D coordinate systems may be positioned within the 3D model 118 to provide orientation information defined by the poses. This orientation information is used to overlay images onto the 3D model.
- in FIG. 6B, alignment of 3D coordinate systems corresponding to template object models of the set of teeth is shown.
- in FIG. 6C, alignment of 3D coordinate systems corresponding to template object models of the set of teeth with the 3D model 118 is shown.
- the processors 106 are configured to overlay the 2D internal feature images of the dental object on the 3D model 118.
- the processors 106 are configured to position the 2D internal feature images of the dental object in one or more orientations defined by the one or more object poses of the inner geometry.
- the object poses (or position and orientation information) for the inner geometry of the dental object, particularly, the inner geometry of the template object model 512 may define orientations or directions based on viewing orientations from which the 2D internal feature images are captured. Based on the defined orientations, the 2D internal feature images of the dental object may be positioned.
- the 2D internal feature images of the dental object are correctly positioned and oriented in relation to the virtual 3D model 118. Moreover, this allows seamless integration of the plurality of 2D internal feature images into the 3D model 118.
- the 2D internal feature images (or interproximal images) corresponding to the interproximal area relating to the dental object may be positioned based on the intermediate poses determined based on the interpolation of two object poses corresponding to two adjacent dental object or two adjacent teeth.
- if the 2D internal feature images are merely positioned onto a position corresponding to the dental object in the 3D model 118, then the alignment of the images may not be correct from different angles for viewing the 3D model 118.
- the positioning of the 2D internal feature images is more accurate. Further, such correlation-based positioning makes the overlay 120 of the 2D internal feature images onto the 3D model insensitive to changes in the viewing angle of the overlay 120.
- the 2D internal feature images may be positioned and aligned with the 3D model 118 for a particular viewing angle of the 3D model 118 or various viewing angles of the 3D model 118.
- 2D image information (such as 2D inner composition of the dental object) may be projected onto the 3D model 118.
- the 2D internal feature images may be placed and aligned onto the 3D model 118 from a viewpoint or perspective of an actual surface of the dental object, i.e., main tooth surface.
- the processors 106 are configured to join the positioned 2D internal feature images of the dental object.
- the overlay 120 may be visualized by stitching or joining the positioned 2D internal feature images or hyperspectral images covering the same or different parts of the dental object for a particular viewing orientation.
- Each of the 2D internal feature images may cover at least some part of the dental object.
- the joined or stitched 2D internal feature images may form one or more 2D inner geometry panorama images (referred to as 2D panorama images, hereinafter) for the dental object based on the corresponding viewing orientation.
- the 2D panorama images may then be overlaid, placed and aligned onto the 3D model 118 based on the corresponding viewing orientation.
- FIG. 7A illustrates a method flowchart 700 for overlaying 2D internal feature images with the 3D model 118 for the dental object, in accordance with an example embodiment. To this end, the overlay 120 of the 2D internal feature images onto the 3D model 118 is performed based on the correlation of the 2D inner geometry features with the 3D model 118.
- FIG. 7A is explained in conjunction with elements of FIG. 1A, FIG. 1B, FIG. 1C, FIG. 1D, FIG. 2, FIG. 3A, FIG. 3B, FIG. 3C, FIG. 3D, FIG. 4A, FIG. 4B, FIG. 5A, FIG. 5B, FIG. 6A, FIG. 6B and FIG. 6C.
- the processors 106 are configured to generate an offset surface of the 3D model 118 of the dental object.
- the offset surface is a mesh.
- the offset surface may be a grid or mesh of control points that are used to map pixels of an image to its corresponding position in the 3D model 118 to generate the overlay 120. Each control point has an associated offset that defines how much the pixel at that control point should be moved or adjusted.
- the offset mesh may correspond to the 3D model 118.
- the offset mesh may be generated over visible light images.
- the offset mesh may include a grid of points or a set of key points that are strategically placed based on a type of distortion or transformation needed.
- an offset vector may be calculated for a control point in the grid or the offset surface. This offset vector may specify how much the pixel at that control point should be shifted in both the horizontal (X) and vertical (Y) directions.
- the offset values can be positive or negative, indicating the direction of the displacement.
- the visible light images or the visible light information may be warped based on the offset values assigned to each control point. This may include moving each pixel in the visible light images according to its associated offset vector.
- the offset mesh may be used for image stitching and/or image registration of the visible light images.
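- The control-point warping described above can be illustrated with the following Python sketch, which upsamples a coarse grid of (dx, dy) offset vectors to pixel resolution and displaces each pixel accordingly (the array shapes and names are assumptions):

```python
import numpy as np
from scipy.ndimage import map_coordinates, zoom

def warp_with_offset_mesh(image, offsets):
    """Warp a 2D image using a low-resolution offset mesh.

    image:   (h, w) array
    offsets: (grid_h, grid_w, 2) array of per-control-point (dx, dy)
             offsets; positive or negative values set the direction
             of the displacement.
    """
    h, w = image.shape
    gh, gw, _ = offsets.shape
    # Bilinearly upsample the coarse offsets to one offset per pixel.
    dense = np.stack([zoom(offsets[..., k], (h / gh, w / gw), order=1)
                      for k in range(2)], axis=-1)
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    coords = [ys + dense[..., 1], xs + dense[..., 0]]   # (row, col) sampling
    return map_coordinates(image, coords, order=1, mode='nearest')
```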
- the offset surface may have a low resolution and the offset surface may be configured to have a constant distance field from the tooth or the dental object.
- the offset surface 710 may include 3D information relating to different dental objects, i.e., different teeth of the subject.
- the offset surface 710 may include a mesh or a grid for associating values of pixels with the mesh of the offset surface 710.
- FIG. 7C shows a low-resolution offset surface 710.
- the low-resolution offset surface may be generated using techniques such as marching cubes.
- the low resolution of the offset surface 710 may smoothen the surface of the offset surface 710. Due to this, reference frames 712 or normal lines generated from the offset surface 710 may be consistent.
- the processors 106 are configured to overlay the 2D internal feature images of the dental object on the offset surface based on the one or more orientations.
- the one or more orientations may be defined by the object poses for the inner geometry of the dental object.
- orientations for interproximal area(s) may be defined by intermediate poses of the dental object.
- the 2D internal feature images of the dental object may be overlaid on the offset surface 710 or the 3D model 118 based on an intersection between a point on the offset surface 710 and a ray extending in a first direction from the dental object towards the offset surface 710.
- the interpolated intermediate poses and the object poses for a dental object or a tooth may be overlaid on a position corresponding to the tooth in the offset surface 710. Further, overlaying the intermediate poses and the object poses may indicate various orientations for the tooth. For example, the orientations, positions, and the poses of the offset surface 710 may be used to project or place the 2D internal feature images. Moreover, the orientations for the tooth may also be used to select an optimal view of the dental object. To this end, the 2D internal feature images overlaid on the 3D model 118 or the offset surface 710 may be rendered from the selected optimal view or viewing angle.
- the projection or placement of the 2D internal feature images on the offset surface 710 or the 3D model 118 may be performed by generating a ray from the tooth in the first direction or upwards direction.
- the ray may extend from the surface of the tooth to a point on the offset mesh 710.
- one or more of the 2D internal feature images may be placed, positioned, or overlaid.
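- Finding the point where such a ray meets the offset mesh can be done with a standard ray/triangle test; the Moller-Trumbore sketch below is illustrative and not taken from the disclosure:

```python
import numpy as np

def ray_triangle_intersection(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore test: distance t along the ray (cast from the
    tooth surface towards the offset mesh) to triangle (v0, v1, v2),
    or None if the ray misses the triangle."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = float(np.dot(e1, p))
    if abs(det) < eps:
        return None                        # ray parallel to the triangle
    inv_det = 1.0 / det
    s = origin - v0
    u = float(np.dot(s, p)) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = float(np.dot(direction, q)) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = float(np.dot(e2, q)) * inv_det
    return t if t > eps else None
```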
- the offset surface 710 is mapped with 3D coordinate systems (depicted as 714A, 714B and 714C, and collectively referred to as 3D coordinate systems 714, hereinafter) for template tooth models of each tooth based on corresponding positions of the tooth in the offset surface 710.
- the 3D coordinate systems may indicate various orientations or directions for overlaying images onto the offset surface 710.
- a first layer of images (depicted as 716A, 716B and 716C, and collectively referred to as first layer of images 716, hereinafter) from the 2D internal feature images are positioned or placed on the offset surface 710.
- the first layer of images 716 may be placed based on an intersection of a point of the offset surface 710 and a ray extending from the tooth surface.
- the first layer of images 716 may correspond to the innermost layer, for example, corresponding to a side opposite to the side from which the ray is extending towards the offset surface 710.
- a first layer of image for a tooth may be positioned in the position of the offset surface 710 corresponding to the tooth based on orientations defined by 3D coordinate system associated with the tooth or template tooth model of the tooth.
- the 2D internal feature images may be projected on the offset mesh 710 or 3D model 118 based on correlations between orientations defined by one or more object poses and/or intermediate poses and viewing orientation and position of 2D internal feature images.
- the processors 106 are configured to cause to display the overlay 120 of the 2D internal feature images of the dental object with the offset surface 710.
- the overlay may be rendered on a display device, such as a computing device, a smartphone, a monitor, etc.
- the 2D internal feature images are not part of the 3D offset surface 710 or surface model 118.
- the image projection of the 2D internal feature images may be performed by mapping and aligning images to the one or more object poses and/or the one or more intermediate poses.
- the viewing angle of the displayed overlay 120 may be changed, for example, by a user or viewer to examine other parts of the teeth of the subject.
- the processors 106 are configured to update pose matrices of each of the one or more object poses and the one or more intermediate poses for the dental object. For example, to update the pose matrices, the processors 106 may be configured to map the current overlay to the updated viewing angle for viewing the overlay 120. Further, the processors 106 may be configured to generate an updated overlay of the correlated 2D inner geometry features of the dental object on the 3D model or the offset surface 710 for the updated viewing angle. For example, based on the updated viewing angle, the 2D internal feature images are re-positioned in the offset mesh 710 or the 3D model 118.
- FIG. 8A, FIG. 8B, FIG. 8C, FIG. 8D, FIG. 8E, FIG. 8F and FIG. 8G illustrate a manner of merging different spectrums of light for generating the 2D internal feature images and/or specific colour images. These 2D internal feature images and/or specific colour images may be used to enhance the quality of the captured images.
- a 2D internal feature image 802 may be generated using NIR wavelength, fluorescence (Fluo) light wavelength and white light wavelength.
- excited green fluorescence light information may be subtracted from NIR wavelength information, i.e., NIR(G) - Fluo(G).
- signal contrast may be readjusted after the subtraction.
- white light wavelength information may be subtracted from the obtained output from the previous subtraction.
- object edges or edges of a tooth may be removed to decrease signal from the tooth margin.
- the NIR wavelength may be captured with background in blue.
- overlaying features of the NIR information and white light information on the blue background may output images having the NIR signal highlighted with white edges.
- a 2D internal feature image 804 may be generated using NIR wavelength, fluorescence light wavelength and white light wavelength.
- the 2D internal feature image 804 may be obtained based on (NIR(G) - White(G)) + 0.3 (NIR(G) - Fluo(G)).
- signal contrast may be re-adjusted before the second-order sum, i.e., before adding the term 0.3 (NIR(G) - Fluo(G)).
- the object edges or edges of the tooth may be removed to decrease signal from the tooth margin.
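A hedged sketch of the green-channel arithmetic described above for images 802 and 804 follows; the contrast re-adjustment is modelled here as a simple min-max stretch, which is an assumption, as are the array names.

```python
import numpy as np

def stretch(x):
    # Re-adjust signal contrast to the full [0, 1] range.
    x = x.astype(np.float32)
    return (x - x.min()) / max(float(np.ptp(x)), 1e-6)

def image_802(nir_g, fluo_g, white_g):
    # NIR(G) - Fluo(G), contrast re-adjusted, then White(G) subtracted.
    x = stretch(nir_g.astype(np.float32) - fluo_g)
    return np.clip(x - stretch(white_g), 0.0, 1.0)

def image_804(nir_g, fluo_g, white_g):
    # (NIR(G) - White(G)) + 0.3 * (NIR(G) - Fluo(G)), with the second
    # term contrast re-adjusted before the weighted (second-order) sum.
    a = nir_g.astype(np.float32) - white_g
    b = stretch(nir_g.astype(np.float32) - fluo_g)
    return a + 0.3 * b
```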
- a 2D internal feature image 806 may be a composite image.
- the image 806 may be generated with the red channel hue taken from fluorescence light, the green channel generated based on NIR(G) - Fluo(G) - White(G), and the blue channel providing NIR features.
- a 2D internal feature image 808 may be composed by applying all pixels corresponding to visible light having (R>5 or G>5 or B>5) to the white light image.
- a 2D internal feature image 810 may be composed by applying all pixels corresponding to visible light having (R>5 and G>5 and B>5) to the white light image.
- a 2D internal feature image may be a composite image.
- an image 812 may be generated using a white light image in purple/blue as the background. Further, the red hue from fluorescence light is overlayed as overlayed signal 1 over the background.
- an image 814 may be generated by overlaying all pixels with (R>5 and G>5 and B>5) from the composite image as overlayed signal 2 over the overlayed signal 1.
- colour channels or filters corresponding to the different colours may be applied to NIR background images to generate 2D internal feature images.
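For illustration, the pixel-mask composition described for images 808, 810 and 814 may be sketched as follows; the threshold value of 5 follows the text, while the function names and the uint8 RGB layout are assumptions.

```python
import numpy as np

def visible_pixel_mask(rgb, mode="any", threshold=5):
    # mode="any" mirrors the (R>5 or G>5 or B>5) rule used for image 808;
    # mode="all" mirrors the (R>5 and G>5 and B>5) rule used for image 810.
    above = rgb > threshold
    return above.any(axis=-1) if mode == "any" else above.all(axis=-1)

def apply_to_white_light(white_rgb, composite_rgb, mode="any"):
    # Copy the masked composite pixels onto the white light image.
    out = white_rgb.copy()
    mask = visible_pixel_mask(composite_rgb, mode)
    out[mask] = composite_rgb[mask]
    return out
```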
- FIG. 9 illustrates a block diagram 900 of the handheld intraoral scanner 104, in accordance with an example embodiment.
- the scanner 104 may include at least one processing unit (hereinafter, also referred to as “processing unit 902”), a memory unit 904, a web server 906, a monitoring unit 908, a temporary storage unit 910, a scanning feedback unit 912, an input/output (I/O) unit 914, and a communication interface 916.
- the processing unit 902 may be embodied in a number of different ways.
- the processing unit 902 may be embodied as one or more of various hardware processing means such as a coprocessor, a microprocessor, a controller, a digital signal processor (DSP), a processing element with or without an accompanying DSP, or various other processing circuitry including integrated circuits such as, for example, an ASIC (application specific integrated circuit), an FPGA (field programmable gate array), a microcontroller unit (MCU), a hardware accelerator, a special-purpose computer chip, or the like.
- the processing unit 902 may be embodied as a high-performance microprocessor having a series of Systems on Chip (SoCs) which include relatively powerful and power-efficient Graphics Processing Units (GPUs) and Central Processing Units (CPUs) in a small form factor.
- the processing unit 902 may include one or more processing cores configured to perform independently.
- a multi-core processor may enable multiprocessing within a single physical package.
- the processing unit 902 may include one or more processors configured in tandem via the bus to enable independent execution of instructions, pipelining and/or multithreading.
- the processing unit 902 may be configured to detect NIR and visible light, using the I/O unit 914, during the scanning session of teeth of a subject, such as a patient requiring a dental treatment.
- the detected NIR and visible light may be used to generate the plurality of 2D images 110, such as the plurality of 2D IR images 112 and the visible light images 114.
- the plurality of 2D images 110 may include images of the dental object 124 or teeth of the subject from various angles or viewing points.
- the processing unit 902 may be configured to generate 3D surface information for the teeth based on the visible light images 114.
- the processing unit 902 may be in communication with the memory unit 904 via a bus for passing information among components of the scanner 104.
- the memory unit 904 may be non-transitory and may include, for example, one or more volatile and/or non-volatile memories.
- the memory unit 904 may be an electronic storage device (for example, a computer readable storage medium) comprising gates configured to store data (for example, bits) that may be retrievable by a machine (for example, a computing device like the processing unit 902).
- the memory unit 904 may be configured to store information, data, content, applications, instructions, or the like, for enabling the apparatus to carry out various functions in accordance with an example embodiment of the present disclosure.
- the memory unit 904 may be configured to store the detected IR and the detected visible light after the scanning session of the teeth is finished.
- the detected IR and the visible light after the scanning session may be stored as IR information and visible light information, respectively.
- the memory unit 904 may be configured to store compressed IR information and visible light information.
- the memory unit 904 may be configured to store calibration data required to measure the detected IR and the visible light to generate the IR information, the visible light information, the white light images, IR images and/or the plurality of 2D internal feature images.
- the memory unit 904 may be configured to store instructions for execution by the processing unit 902.
- the processing unit 902 may represent an entity (for example, physically embodied in circuitry) capable of performing operations according to an embodiment of the present disclosure while configured accordingly.
- when the processing unit 902 is embodied as a microprocessor, the processing unit 902 may be specifically configured hardware for conducting the operations described herein.
- when the processing unit 902 is embodied as an executor of software instructions, the instructions may specifically configure the processing unit 902 to perform the algorithms and/or operations described herein when the instructions are executed.
- the processing unit 902 may include, among other things, a clock, an arithmetic logic unit (ALU) and logic gates configured to support operation of the processing unit 902.
- the web server 906 may be a software, a hardware, or a combination thereof that may be configured to store and provide data to a web browser associated with the processors 106. For example, the visible light information and the IR information may be provided to the web browser of the processors 106 via the web server 906. As the web server 906 may be accessed by any web browser, the need to install additional software on the processors 106 to connect to the web server 906 may be eliminated.
- the web server 906 may communicate to one of the communication channels 108 via a web network. In an example, the web server 906 and the processors 106 may communicate to a common wireless full-duplex communication channel via the web network for transmission and reception of the visible light information and the IR information.
- the web server 906 and the web browser may communicate via, for example, Hypertext Transfer Protocol (HTTP), Simple Mail Transfer Protocol (SMTP), or File Transfer Protocol (FTP).
- the monitoring unit 908 may be a software, a hardware, or a combination thereof that may be configured to monitor a bandwidth of one of the communication channels 108 (such as the wireless full-duplex communication channel) via which the scanner 104 and the processors 106 may be connected. Moreover, the monitoring unit 908 may be configured to monitor a connection of one of the communication channels 108 via which the scanner 104 and the processors 106 may be connected.
- the monitoring unit 908 may provide such information to the processing unit 902.
- the processing unit 902 may downsample the visible light information and the IR information, based on the received information.
- when the monitoring unit 908 determines that the bandwidth of the communication channels 108 has been below the minimum bandwidth for longer than a maximum period, the monitoring unit 908 may provide such information to the processing unit 902.
- the processing unit 902 may compress and store the visible light information and the NIR information into the memory unit 904.
- when the monitoring unit 908 determines that the connection between the scanner 104 and the processors 106 is lost, the monitoring unit 908 may provide such information to the processing unit 902. In such a case, the processing unit 902 may compress and store the visible light information and the IR information into the memory unit 904.
- the temporary storage unit 910 may be a software, a hardware, or a combination thereof that may be configured to store the visible light information and the IR information when the bandwidth of the communication channels 108 (such as the wireless full-duplex communication channel) is determined to be below the minimum bandwidth.
- the temporary storage unit 910 may further transmit the stored visible light information and IR information to the processors 106 when the bandwidth is determined to be above or equal to the minimum bandwidth. Examples of the temporary storage unit 910 may include, but may not be limited to, a random-access memory (RAM), or a cache memory.
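A minimal sketch of the decision logic described for the monitoring unit 908, the temporary storage unit 910 and the processing unit 902 follows; the return labels and threshold parameters are illustrative placeholders, not the claimed behaviour.

```python
def handle_link_status(bandwidth, min_bandwidth, below_duration,
                       max_period, connected):
    # Connection lost, or bandwidth degraded for too long: compress the
    # visible light and IR information and store it in the memory unit 904.
    if not connected or (bandwidth < min_bandwidth
                         and below_duration > max_period):
        return "compress_and_store"
    # Briefly degraded: downsample and buffer in the temporary
    # storage unit 910 until the bandwidth recovers.
    if bandwidth < min_bandwidth:
        return "downsample_and_buffer"
    return "stream"  # full-rate transmission to the processors 106

print(handle_link_status(2.0, 10.0, below_duration=3.0,
                         max_period=5.0, connected=True))
# -> downsample_and_buffer
```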
- the scanning feedback unit 912 may be a software, a hardware, or a combination thereof that may be configured to receive status input from the monitoring unit 908.
- the scanning feedback unit 912 may provide a scanning feedback signal to the user, such as a dentist, of the handheld intraoral scanner 104.
- the scanning feedback signal is used to provide guidance to the user regarding an area of the teeth where a scanning quality of the scanning session is low and adequate visible light information and/or NIR information of the teeth is not received.
- the scanning feedback unit 912 may provide the scanning feedback signal as, for example, an acoustic feedback signal, a haptic feedback, or a visual feedback.
- the I/O unit 914 may include circuitry and/or software that may be configured to provide output to the user of the handheld intraoral scanner 104 and receive, measure or sense input information.
- the I/O unit 914 may include a speaker 914A, a vibrator 914B, a projector unit 914C, and one or more sensors 914D.
- the speaker 914A may be configured to output the acoustic feedback signal to guide the user.
- the vibrator 914B may be, for example, a transducer configured to convert the scanning feedback signal that may be an electrical signal into a mechanical output, such as the haptic feedback in form of vibrations to guide the user.
- the scanner 104 may be configured to detect the IR and the visible light that may be reflected from the teeth of the subject.
- the projector unit 914C may be configured to output one or more visible or white coloured wavelength pulses, and one or more IR wavelength pulses.
- the visible wavelength pulses and the IR wavelength pulses may be cast onto the dental object 124 or the teeth to illuminate the teeth of the subject, such as a patient.
- the visible light wavelength pulses and the IR wavelength pulses may be reflected or refracted from the surface and/or the inner region of the teeth.
- the one or more sensors 914D may be configured to detect visible coloured wavelength pulses and IR wavelength pulses that may be reflected and/or refracted from the surface or inner region of the teeth.
- the one or more sensors 914D may include one or more image sensors, such as cameras.
- the image sensors may be configured to generate the visible light images 114 and the IR images 112 based on the illumination of the teeth using the IR and the visible light.
- the communication interface 916 may comprise input interface and output interface for supporting communications to and from the handheld intraoral scanner 104.
- the communication interface 916 may be a device or circuitry embodied in either hardware or a combination of hardware and software that is configured to receive and/or transmit data to/from the scanner 104.
- the communication interface 916 may include, for example, an antenna (or multiple antennae) and supporting hardware and/or software for enabling communications with a wireless communication network. Additionally, or alternatively, the communication interface 916 may include the circuitry for interacting with the antenna(s) to cause transmission of signals via the antenna(s) or to handle receipt of signals received via the antenna(s).
- the communication interface 916 may alternatively or additionally support wired communication.
- the communication interface 916 may include a communication modem and/or other hardware and/or software for supporting communication via cable, digital subscriber line (DSL), universal serial bus (USB) or other mechanisms.
- FIG. 10 illustrates a pre-processing step for visible light images and IR images 1006, in accordance with an example.
- the visible light images may include white light images 1002 and blue light images 1004.
- the blue light images 1004 may be used to generate green and/or red excited fluorescence light information or images.
- the IR images 1006 may be captured using near-infrared wavelength pulses.
- the pre-processing step for the white light images 1002, the blue light images 1004 and the IR images 1006 may include contrast adjustment.
- the contrast adjustment of the white light images 1002 may include red (R) light contrast adjustment 1008A, green (G) light contrast adjustment 1008B, and blue (B) light contrast adjustment 1008C.
- the contrast adjustment of the blue light images 1004 may include red light contrast adjustment 1010A, green light contrast adjustment 1010B, and red green (RG) light contrast adjustment 1010C.
- the contrast adjustment of the IR light images 1006 may include red light contrast adjustment 1012A, green light contrast adjustment 1012B, and blue light contrast adjustment 1012C.
- the 2D internal feature images may be generated using the pre-processed white light images 1002, blue light images (or fluorescence red and/or green light images) 1004, and IR images 1006.
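By way of example, one plausible form of the per-channel contrast adjustment (1008A-C, 1010A-C, 1012A-C) is a percentile-based stretch, sketched below; the percentile window and function name are assumptions of this sketch.

```python
import numpy as np

def adjust_channel_contrast(image, channel, low_pct=1.0, high_pct=99.0):
    # Percentile-based contrast stretch of a single colour channel,
    # e.g. channel=0 for the red light contrast adjustment 1008A of a
    # white light image.
    out = image.astype(np.float32)
    ch = out[..., channel]
    lo, hi = np.percentile(ch, [low_pct, high_pct])
    out[..., channel] = np.clip((ch - lo) / max(hi - lo, 1e-6), 0.0, 1.0) * 255.0
    return out.astype(np.uint8)
```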
Abstract
The present disclosure relates to an intraoral scanning system (102) that is configured to generate an overlay of correlated 2D inner geometry features on a 3D surface model for dental objects. The system is configured to receive visible light information (128) and IR information (130) from one or more sensors, generate a 3D surface model (118) of a dental object (146) based on the visible light information, generate a plurality of 2D internal feature images (302, 306, 310, 314) based on the visible light information and the IR information, process the plurality of 2D internal feature images to correlate the 2D inner geometry features of the dental object with at least one reference frame (712) of the dental object in the 3D surface model, and output an overlay (120) of the correlated 2D inner geometry features of the dental object on the 3D surface model. The plurality of 2D internal feature images indicates 2D inner geometry features for the dental object.
Description
SYSTEM AND METHOD FOR INTRAORAL SCANNING
TECHNICAL FIELD
[0001] An example embodiment of the present disclosure generally relates to intraoral scan registration and more particularly relates to an intraoral scanning system and a method for intraoral scanning to generate a three-dimensional inner geometry of a tooth.
BACKGROUND OF THE INVENTION
[0002] Intraoral scanners are electronic devices that may be used for, for example, capturing digital images of oral cavity of a subject. In an example, the intraoral scanners may include light sources that may project light rays onto an object to be scanned, such as teeth, gums, and other intraoral structures inside the oral cavity of the subject. In certain cases, computer-aided design process may be used to create a virtual three-dimensional model of the teeth of the subject using the digital images captured by the intraoral scanner. The digital images of the teeth are imported into a computer-aided design (CAD) program, which creates a final virtual 3D model of dentition of the teeth of the subject.
[0003] Typically, intraoral scanners are used for examination or treatments inside the oral cavities of subjects. The use of intraoral scanners may eliminate the use of conventional impression material and plaster models, simplify clinical treatment procedures of teeth for the dentists, as well as reduce discomfort. However, the virtual 3D model of dentition of the teeth may only provide surface information or surface of dentition relating to the teeth of the subject. In other words, conventional intraoral scanners may not be suited to detect internal structures of the teeth, for example, structures of enamel and dentin within each tooth of the subject. In particular, the intraoral scanners may fail to determine the general internal composition of a tooth inside the oral cavity of the subject.
[0004] To this end, the 3D model of the surface of the dentition of the subject generated using digital images produced by conventional intraoral scanners is not suitable for detecting defects or anomalies occurring inside the teeth of the subject. For example, the 3D model may fail to provide any information relating to development of caries, cracks and/or bacteria within enamel and underlying dentin of the teeth, bleeding or any other damage within the enamel and the underlying dentin of the teeth, and/or deep crack or margin lines or other errors in prepared dental prostheses (such as crowns, bridges, implants, inlays, onlays, etc.). Particularly,
information of internal structure of a natural tooth or a prosthetic implant in conjunction with surface information may be crucial for effective treatment of the patient and ensuring long span restorations with natural tooth or prosthetic implant.
[0005] In certain cases, conventional methods may be used for constructing 3D model of the teeth, by projecting 2D images of the teeth onto a 3D model. For example, white light images may be segmented and placed onto a 3D model counterpart using segmentation and stitching techniques to create the 3D model of the teeth of the subject. In an example, conventional methods may cause an intraoral scanner to capture large number of images from various different viewpoints for constructing the 3D model of the teeth using white light images. However, the conventional methods of generating the 3D model of the teeth possess several disadvantages.
[0006] In an example, the conventional methods fail to disambiguate different compositions of inner geometry of the teeth reliably in the 3D model. In addition, the conventional methods require a large number of images captured from a large number of viewpoints, as reliable scattering coefficients can only be determined if multiple view angles are covered. This may increase computational load and cause delay in generating the 3D model. As a result, real-time operations may not be performed on such a 3D model of the teeth. Further, the 3D model generated using conventional methods may store captured colours of the teeth on a surface of the 3D model. Therefore, projecting sub-surface structure images of the teeth on the 3D model to show inner composition of the teeth is susceptible to incorrect displacements when the 3D model is viewed from different angles.
[0007] Therefore, there is a need for improved systems and methods of intraoral scanning to overcome the disadvantages of the conventional methods for generating 3D model of dental structures.
SUMMARY
[0008] An intraoral scanning system, a method and a computer programmable product are provided for generating an overlay of correlated 2D inner geometry features on a 3D surface model of a subject’s teeth using a handheld intraoral scanner. The handheld intraoral scanner comprises sensors to detect infrared (IR) or near infrared (NIR) and visible light.
[0009] Some embodiments are based on the understanding that visualization of IR or NIR image information of a dental structure may be used to create a 3D model for the dental structure providing inner geometry information. In this regard, the IR or NIR image information includes sub-surface information of the dental structure. The IR or NIR image information when projected onto a 3D surface model creates the 3D model with inner geometry information for the dental structure.
[0010] Some embodiments are based on the understanding that sub-surface structure in the 3D model of the dental structure may get displaced if the 3D surface model is moved, such as rotated about an axis, to view from different directions. In other words, the IR or NIR images projected onto the 3D surface model may appear at incorrect positions when the 3D model is moved for viewing from different directions. As a result, such conventional 3D model may fail to provide accurate sub-surface structure information for the dental structure. This may result in incorrect examination, diagnosis, and treatment for the subject.
[0011] It is an objective of the present disclosure to provide techniques for accurate overlay of 2D inner geometry features of a dental structure, such as a tooth onto a 3D surface model of the tooth.
[0012] In one aspect, an intraoral scanning system configured to generate a 3D model for one or more dental objects is provided. The intraoral scanning system comprises a hand-held intraoral scanner configured to operate with one or more sensors to detect infrared (IR) and visible light. The one or more sensors comprises an image sensor. The intraoral scanning system comprises one or more processors operably connected to the hand-held intraoral scanner. The one or more processors are configured to receive visible light information and IR information from the one or more sensors, generate a three-dimensional (3D) surface model of a dental object from the one or more dental objects based on the visible light information, and generate a plurality of two-dimensional (2D) internal feature images based on the visible light information and the IR information. The plurality of 2D internal feature images indicate 2D inner geometry features for the dental object. The one or more processors are configured to process the plurality of 2D internal feature images to correlate the 2D inner geometry features of the dental object with at least one reference frame of the dental object in the 3D surface model and output an overlay of correlated 2D inner geometry features of the dental object on the 3D surface model.
[0013] In accordance with some example embodiments, the at least one reference frame of the dental object is perpendicular to two or more planes of the dental object in the 3D surface model.
[0014] In accordance with some example embodiments, each of the two or more planes of the dental object in the 3D surface model comprises at least a first plane and a second plane. In an example, the first plane and the second plane are aligned with a buccal-lingual plane and a mesial-distal plane, respectively.
[0015] In accordance with some example embodiments, the two or more planes of the dental object includes at least one of an occlusal plane, a buccal plane, a lingual plane, a mesial plane, a distal plane, or a labial plane.
[0016] In accordance with some example embodiments, to correlate the 2D inner geometry features of the dental object with at least one reference frame of the dental object in the 3D surface model, the one or more processors are further configured to align a template object model with a position of the dental object in the 3D surface model, and correlate a position of each of the plurality of 2D internal feature images of the dental object with the position of the dental object in the 3D surface model. The template object model includes a 3D coordinate system indicating an orientation of the dental object with the 3D surface model. For example, an order of receiving the visible light information for the 3D surface model of the tooth is within a predefined range from an order of receiving the IR information for the plurality of 2D internal feature images. The one or more processors are further configured to correlate a viewing orientation of each of the plurality of 2D internal feature images of the dental object with the 3D coordinate system of the template object model of the dental object.
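For illustration, the template alignment may be sketched as a rigid Kabsch fit, shown below under the assumption that corresponding landmark points are available on the template object model and on the tooth in the 3D surface model; the disclosure does not prescribe this particular estimator.

```python
import numpy as np

def align_template_to_tooth(template_pts, tooth_pts):
    # Rigid (Kabsch) alignment of an (N, 3) template tooth point set to
    # N corresponding points of the tooth in the 3D surface model; the
    # recovered rotation carries the template's 3D coordinate system
    # (and hence the tooth orientation) into the model.
    mu_t, mu_s = template_pts.mean(axis=0), tooth_pts.mean(axis=0)
    H = (template_pts - mu_t).T @ (tooth_pts - mu_s)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_s - R @ mu_t
    return R, t  # tooth_pts ≈ template_pts @ R.T + t
```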
[0017] In accordance with some example embodiments, the one or more processors are further configured to join the positioned plurality of 2D internal feature images of the dental object to generate one or more 2D inner geometry panorama images for the dental object based on the viewing orientation; and overlay the one or more 2D inner geometry panorama images on the 3D surface model.
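A simplified sketch of joining positioned images into a 2D inner geometry panorama follows; it assumes equal image heights and a known viewing order along the dental arch, and omits the overlap blending a full stitcher would perform.

```python
import numpy as np

def join_into_panorama(positioned_images, arch_order):
    # Join already-positioned 2D internal feature images into one 2D
    # inner geometry panorama by concatenating them along the dental
    # arch in viewing order.
    return np.concatenate([positioned_images[i] for i in arch_order], axis=1)
```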
[0018] In accordance with some example embodiments, a coordinate axis of the 3D coordinate system is aligned with an occlusal reference frame of the at least one reference frame of the dental object, and wherein the occlusal reference frame is perpendicular to an occlusal plane of the dental object in the 3D surface model.
[0019] In accordance with some example embodiments, the template object model has a corresponding tooth type, and wherein the tooth type of the template object model is at least one of: a central incisor, a lateral incisor, a canine, a first premolar, a second premolar, a first molar, a second molar, or a third molar.
[0020] In accordance with some example embodiments, the one or more processors are configured to estimate, using a trained machine learning model, one or more object poses of inner geometry for the dental object based on the plurality of 2D internal feature images and the 3D surface model; determine one or more intermediate poses for the dental object based on a smooth interpolation between two object poses indicated by a pair of interproximal images from the positioned plurality of 2D internal feature images; and overlay the plurality of 2D internal feature images of the dental object on the 3D surface model based on the one or more intermediate poses for the interproximal area relating to the dental object and one or more orientations defined by the one or more object poses of inner geometry.
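The smooth interpolation between two object poses may, for example, take the form sketched below, where rotations are spherically interpolated (slerp) and translations blended linearly; the (R, t) pose format and the use of SciPy are assumptions of this sketch.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def intermediate_poses(pose_a, pose_b, n=3):
    # Smoothly interpolate n poses between the two object poses given by
    # a pair of interproximal images; each pose is a (3x3 rotation,
    # 3-vector translation) tuple.
    (R_a, t_a), (R_b, t_b) = pose_a, pose_b
    slerp = Slerp([0.0, 1.0], Rotation.from_matrix(np.stack([R_a, R_b])))
    poses = []
    for w in np.linspace(0.0, 1.0, n + 2)[1:-1]:  # interior samples only
        poses.append((slerp(w).as_matrix(), (1.0 - w) * t_a + w * t_b))
    return poses
```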
[0021] In accordance with some example embodiments, the one or more processors are configured to generate an offset surface of the 3D surface model of the dental object such that the offset surface is within a constant distance field from the dental object; overlay the plurality of 2D internal feature images on the offset surface based on the one or more orientations defined in at least one of the one or more object poses, or the one or more intermediate poses of the dental object and cause to display the overlay of the one or more intermediate poses of the dental object with the offset surface. Further, the overlay may be performed based on an intersection between a point on the offset surface and a ray extending in a first direction from the dental object towards the offset surface.
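A minimal sketch of generating such an offset surface by displacing mesh vertices along their normals is given below; resolving self-intersections, which a production implementation would require, is omitted.

```python
import numpy as np

def offset_surface(vertices, vertex_normals, distance=2.0):
    # Displace every vertex of the tooth surface mesh along its unit
    # normal so the new shell lies within a constant distance field from
    # the dental object; the face topology is reused unchanged.
    n = vertex_normals / np.linalg.norm(vertex_normals, axis=1, keepdims=True)
    return vertices + distance * n
```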
[0022] In accordance with some example embodiments, the one or more processors are configured to update pose matrices of each of the one or more intermediate poses for the dental object to map the overlay with an updated viewing angle; and generate an updated overlay of the correlated 2D inner geometry features of the dental object on the 3D surface model for the updated viewing angle based on the updated pose matrices.
[0023] In accordance with some example embodiments, the one or more processors are configured to generate an object mask using the visible light information captured from a scanning position, apply the object mask on the plurality of 2D internal feature images to segment at least a portion of the plurality of 2D internal feature images indicating non-object information, and remove the segmented portion of the plurality of 2D internal feature images indicating the non-object information. The scanning position corresponds to a position of the one or more sensors.
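As an illustration of the object masking described in the preceding paragraph, the sketch below derives a mask by simple intensity thresholding of the visible light frame from the same scanning position; the threshold and the thresholding rule are assumptions, since the disclosure does not fix how the mask is computed.

```python
import numpy as np

def remove_non_object(feature_img, visible_img, threshold=40):
    # Derive an object mask from the visible light frame captured at the
    # same scanning position (bright pixels taken as tooth), apply it to
    # the 2D internal feature image, and zero out the segmented
    # non-object portion.
    mask = visible_img.mean(axis=-1) > threshold
    out = feature_img.copy()
    out[~mask] = 0
    return out
```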
[0024] In accordance with some example embodiments, the plurality of 2D internal feature images includes a 2D composition of inner geometry features for the dental object.
[0025] In another aspect, a method for generating an overlay of correlated 2D inner geometry features on a 3D surface model for one or more dental objects is provided. The method is implemented using an intraoral scanning system comprising a hand-held intraoral scanner configured to operate with one or more sensors to detect infrared (IR) and visible light and one or more processors operably connected to the hand-held intraoral scanner. The method comprises receiving visible light information and IR information from the one or more sensors, generating a three-dimensional (3D) surface model of a dental object from the one or more dental objects based on the visible light information, generating a plurality of two-dimensional (2D) internal feature images based on the visible light information and the IR information, processing the plurality of 2D internal feature images to correlate the 2D inner geometry features of the dental object with at least one reference frame of the dental object in the 3D surface model, and outputting an overlay of the correlated 2D inner geometry features of the dental object on the 3D surface model. In an example, the plurality of 2D internal feature images indicate 2D inner geometry features for the dental object.
[0026] In yet another aspect, a computer programmable product is provided. The computer programmable product comprises a non-transitory computer readable medium having stored thereon computer executable instructions, which when executed by a processing circuitry, cause the processing circuitry to carry out operations. The operations comprise receiving visible light information and IR information from the one or more sensors, generating a three-dimensional (3D) surface model of a dental object from the one or more dental objects based on the visible light information, generating a plurality of two-dimensional (2D) internal feature images based on the visible light information and the IR information, processing the plurality of 2D internal feature images to correlate the 2D inner geometry features of the dental object with at least one reference frame of the dental object in the 3D surface model, and outputting an overlay of the correlated 2D inner geometry features of the dental object on the 3D surface model. In an example, the plurality of 2D internal feature images indicate 2D inner geometry features for the dental object.
[0027] The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
EFFECT(S) OF THE INVENTION
[0028] According to the present disclosure, an intraoral scanning system, a method and a computer programmable product are provided. One of the purposes of the present disclosure is to provide an accurate three-dimensional render of the dental objects with sub-surface structures of the dental objects.
[0029] Conventional systems may include intraoral scanners that are utilized to capture two-dimensional (2D) intraoral scans of dental objects of patients. However, the conventional intraoral scanners may possess limited processing capability, which may only be utilized to capture 2D intraoral scans and/or a 3D model of the surface of the dental objects of the subjects. However, 3D surface models of the dental objects may fail to detect internal structures of teeth, for example, structures of enamel and dentine within the dental objects of the subjects. As a result, diseases or anomalies within the internal structure of the teeth may not be identified unless such diseases grow to the surface of the teeth or the dentists make use of other devices such as X-ray machines; however, X-rays are ionizing radiation which can cause mutations in human cells upon exposure and should therefore be avoided. For example, conventional intraoral scanners may fail to identify early occurrence of an anomaly or a disease occurring inside a tooth, such as dentin erosion, enamel erosion, cracks or caries inside the tooth, bacteria growth, etc. unless a bone of the tooth or a surface of the tooth is affected. Due to delayed diagnosis of diseases or anomalies inside the tooth, irreparable damage may occur in the tooth, necessitating surgical methods to remove such a tooth. This may cause great discomfort to patients.
[0030] Moreover, the conventional intraoral scanners may fail to identify an anomaly occurring inside a dental prosthetic implant. The identification of any anomaly occurring inside the dental prosthetic implant may be crucial to ensure long life of such dental prosthetic implant. To this end, the information relating to inner geometry of dental objects, such as teeth or dental prosthetic implant in conjunction with surface information may be crucial for early diagnosis, effective treatment and ensuring long span restorations with natural tooth or prosthetic implant.
[0031] In certain conventional methods, inner structures of dental objects may be visualized using intraoral scans. The conventional methods for generating inner structures of the dental objects may include visualization of IR or NIR image information on a 3D surface model of the dental objects in the form of 2D IR image information. Typically, captured colours of the intraoral scans of an intraoral cavity having the dental objects are stored on a surface of the 3D surface model of the dental objects. In particular, colour values of the intraoral scans are associated to parts of a surface mesh forming the 3D surface model of the dental objects. Then, 2D IR image information indicating inner structures or sub-surface information is projected onto the 3D surface model.
[0032] However, the mere projection of 2D IR image information on 3D mesh does not reliably capture information of inner structure or sub-surface information of the dental objects. For example, due to overlapping and/or missing data, as well as different angles of capturing IR images, 2D IR image information projected onto the 3D mesh, or the 3D surface model may not give accurate inner structure details. In particular, when the 2D IR image information is projected onto the 3D surface model, the inner structure of the sub-surface structure information may get displaced if the 3D model is moved. For example, examiners, such as doctors and/or medical practitioners, may have to move the 3D model of the dental objects to acquire knowledge of the surface as well as inner structure of the dental objects from various directions and/or orientation. However, any movements in the 3D model may cause displacement of the projected 2D IR image information, thereby causing inaccurate or incorrect projection of the inner structure information. To this end, such displacement of 2D IR image information due to changing viewing angle of the 3D model is not desirable and may hinder accurate diagnosis and/or treatment. In certain cases, the displacement of 2D IR image information may give incorrect information about the dental objects leading to possible incorrect diagnosis and/or treatments for the subjects.
[0033] In addition, such processing of large number of 2D IR images corresponding to the dental objects may be processing intensive. A large amount of computing power may be required that may increase the size and price of the intraoral scanning system.
[0034] To this end, the intraoral scanning system of the present disclosure includes a handheld intraoral scanner, and one or more processors for generating an overlay of 2D IR information onto a 3D surface model such that a render of a 3D model of dental objects is accurate even for different viewing angles of the model. The overlay and subsequent output may, in one embodiment, be done in real time or as a subsequent processing step upon data acquisition. The intraoral scanning system may provide an enhanced processing capability within the handheld intraoral scanner, such that additional equipment or devices are not required to generate the 3D model of the dental objects.
[0035] An embodiment of the present disclosure provides visualization of 2D internal feature image information onto a 3D surface model through stitching plurality of 2D internal feature images, particularly, 2D IR hyperspectral images, covering at least part of a dental object. The present disclosure provides techniques to place and align stitched images between a 3D surface model and a viewing angle of the 3D surface model to project the 2D internal feature image information onto the 3D surface model. The visualization of the 2D internal feature image information onto the 3D surface model is done from a perspective of tooth or dental object surfaces and viewing angle of a viewer, such as a medical practitioner or an examiner.
[0036] The embodiments of the present disclosure allow for direct visualization of certain diagnoses using the 3D model that provides information of sub-surface structure of a dental object, such as a tooth of a subject. In an example, the projection of the 2D internal feature image information on the 3D surface model is done through mapping and aligning of plurality of 2D internal feature images relating to the dental object onto the 3D surface model using one or more estimated object poses for the dental object. The projection is further done by generating offset mesh for the 3D surface model such that the offset mesh represents a constant distance field from a surface of the dental object, i.e., a surface of a tooth, of the subject.
[0037] In this manner, the techniques for generating the overlay of the 2D internal feature image information on the 3D surface model according to the present disclosure enable estimations of depth of different layers within the surface of the dental object. For example, for a tooth, the sub-surface information in the overlay of the 3D surface model enables estimation of characteristics of dentin layer, enamel layer, etc. of the inner structure of the tooth. This enables the viewer to view and analyse the inner or internal structure of the dental object of the subject through multiple angles to ascertain conditions and characteristics of the dental object accurately.
[0038] The sub-surface information may also be used to simulate a manner in which images from the intraoral scanners are obtained. This may be useful in order to optimize image analysis for diagnostics and/or treatment. In particular, the intraoral scanners may be used to capture different features of the dental objects in the images, by controlling frequency, angle, etc. of operation of the intraoral scanners such that different features of the images may be used for different analysis. Embodiments of the present disclosure also provide an improved image stitching technique, particularly a stitching technique for panoramic images.
BRIEF DESCRIPTION OF THE DRAWINGS
[0039] The present disclosure is illustrated by way of example and not by way of limitation in the figures of the accompanying drawings, in which the like reference numerals indicate like elements and in which:
FIG. 1A illustrates a network environment in which an intraoral scanning system for oral scanning is implemented, in accordance with an example embodiment;
FIG. 1B and FIG. 1C show an example schematic diagram of the intraoral scanning system 102, in accordance with various example embodiments;
FIG. 1D illustrates a coordinate system for a dental object, in accordance with an example embodiment;
FIG. 2 illustrates a sequence diagram that depicts generation of a 3D surface model, in accordance with an example embodiment;
FIG. 3A, FIG. 3B, FIG. 3C, and FIG. 3D illustrate example diagrams for generating a plurality of 2D internal feature images, in accordance with various example embodiments;
FIG. 4A illustrates a flowchart of a method for generating an object mask for the plurality of 2D internal feature images, in accordance with an example embodiment;
FIG. 4B illustrates a schematic diagram of applying the object mask, in accordance with an example embodiment;
FIG. 5A illustrates a method flowchart for estimating one or more object poses for the dental object, in accordance with an example embodiment;
FIG. 5B illustrates a schematic diagram for alignment of a template object model, in accordance with an example embodiment;
FIG. 6A illustrates a method flowchart for correlating 2D inner geometry features with 3D model for the dental object, in accordance with an example embodiment;
FIG. 6B and FIG. 6C illustrate a schematic diagram of overlaying object poses onto the 3D model, in accordance with an example embodiment;
FIG. 7A illustrates a method flowchart for overlaying 2D internal feature images with the 3D model for the dental object, in accordance with an example embodiment;
FIG. 7B and FIG. 7C show a schematic diagram of an example offset surface, in accordance with an example embodiment;
FIG. 7D, FIG. 7E and FIG. 7F illustrate a schematic diagram for overlaying the 2D internal feature images onto an offset surface, in accordance with an example embodiment;
FIG. 8A, FIG. 8B, FIG. 8C, FIG. 8D, FIG. 8E, FIG. 8F and FIG. 8G illustrate a manner of merging different spectrums of light for generating the 2D internal feature images, in accordance with an example embodiment;
FIG. 9 illustrates a block diagram of a handheld intraoral scanner, in accordance with an example embodiment; and
FIG. 10 illustrates a pre-processing step for visible light images and IR images, in accordance with an example.
DETAILED DESCRIPTION
[0040] In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be apparent, however, to one skilled in the art that the present disclosure may be practiced without these specific details. In other instances, systems and methods are shown in block diagram form only in order to avoid obscuring the present disclosure.
[0041] Reference in this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Further, the terms “a” and “an” herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items. Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not for other embodiments.
[0042] Some embodiments of the present disclosure will now be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all, embodiments of the disclosure are shown. Indeed, various embodiments of the disclosure may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Like reference numerals refer to like elements throughout. As used herein, the terms “data,” “content,” “information,” and similar terms may be used interchangeably to refer to data capable of being transmitted, received and/or stored in accordance with embodiments of the present disclosure. Further, the terms “processor”, “controller” and “processing circuitry” and similar terms may be used interchangeably to refer to the processor capable of processing information in accordance with embodiments of the present disclosure. Further, the terms “electronic equipment”, “electronic devices” and “devices” are used interchangeably to refer to electronic equipment monitored by the system in accordance with embodiments of the present disclosure. Thus, use of any such terms should not be taken to limit the spirit and scope of embodiments of the present disclosure.
[0043] The embodiments are described herein for illustrative purposes and are subject to many variations. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient but are intended to cover the application or implementation without departing from the spirit or the scope of the present disclosure. Further, it is to be understood that the phraseology and terminology employed herein are for the purpose of the description and should not be regarded as limiting. Any heading utilized within this description is for convenience only and has no legal or limiting effect.
[0044] As used in this specification and claims, the terms “for example”, “for instance” and “such as”, and the verbs “comprising,” “having,” “including” and their other verb forms, when used in conjunction with a listing of one or more components or other items, are each to be construed as open ended, meaning that the listing is not to be considered as excluding other, additional components or items. Other terms are to be construed using their broadest reasonable meaning unless they are used in a context that requires a different interpretation.
[0045] An intraoral scanning system, a method and a computer programmable product are provided for generating an overlay of 2D internal feature image information onto a 3D surface model for a dental object of a subject to provide a holistic view of the dental object from different viewing angles.
[0046] For instance, an exemplary network environment of the intraoral scanning system for oral scanning and generating the overlay is provided below with reference to FIG. 1A, FIG. 1B, FIG. 1C and FIG. 1D.
[0047] FIG. 1A illustrates an exemplary network environment 100 in which an intraoral scanning system 102 for intraoral scanning is implemented, in accordance with an example embodiment. The intraoral scanning system 102 may be used to generate an overlay of 2D internal feature image information, specifically, 2D IR hyperspectral image information, of a dental object within oral cavity of a subject on a 3D surface model. For example, the intraoral scanning system 102 may be used by a user, such as a person having knowledge of dentistry, for example, a dentist, a dental technician, and so forth. Further, it is possible that one or more components may be rearranged, changed, added, and/or removed without deviating from the scope of the present disclosure.
[0048] The intraoral scanning system 102 includes a handheld intraoral scanner 104, and one or more processors 106. The network environment 100 may further include communication channels 108 that may be configured to establish communicative coupling between components of the intraoral scanning system 102, i.e., the handheld intraoral scanner 104 and one or more processors 106 (referred to as processors 106, hereinafter).
[0049] For example, the processors 106 may be embodied as one or more of various hardware processing means such as a coprocessor, a microprocessor, a controller, a digital signal processor (DSP), a processing element with or without an accompanying DSP, or various other processing circuitry including integrated circuits such as, for example, an ASIC (application specific integrated circuit), an FPGA (field programmable gate array), a microcontroller unit (MCU), a hardware accelerator, a special-purpose computer chip, or the like. As such, in some embodiments, the processors 106 may include one or more processing cores configured to perform independently. A multi-core processor may enable multiprocessing within a single physical package. Additionally, or alternatively, the processors 106 may include one or more processors configured in tandem via the bus to enable independent execution of instructions, pipelining and/or multithreading. Additionally, or alternatively, the processors 106 may include one or more processors capable of processing large volumes of workloads and operations to provide support for big data analysis. In an example embodiment, the processors 106 may be in communication with the other components of the intraoral scanning system 102 (referred to as system 102, hereinafter) via a bus or the communication channel 108 for passing information among components of the system 102.
[0050] In an example, when the processors 106 are embodied as an executor of software instructions, the instructions may specifically configure the processors 106 to perform the algorithms and/or operations described herein when the instructions are executed. However, in some cases, the processors 106 may be a processor specific device (for example, a mobile terminal or a fixed computing device) configured to employ an embodiment of the present disclosure by further configuration of the processors 106 by instructions for performing the algorithms and/or operations described herein. The processors 106 may include, among other things, a clock, an arithmetic logic unit (ALU) and logic gates configured to support operation of the processors 106. The network environment, such as, 100 may be accessed using the communication channel 108.
[0051] The communication channel 108 may be wired, wireless, or any combination of wired and wireless communication networks, such as cellular, wireless fidelity (Wi-Fi), internet, local area networks, or the like. In accordance with an embodiment, the communication channel 108 may be one or more wireless full-duplex communication channels. In one embodiment, the communication channel 108 may include one or more networks such as a data network, a wireless network, a telephony network, or any combination thereof. It is contemplated that the data network may be any local area network (LAN), metropolitan area network (MAN), wide area network (WAN), a public data network (e.g., the Internet), short range wireless network, or any other suitable packet-switched network, such as a commercially owned, proprietary packet-switched network, e.g., a proprietary cable or fibre-optic network, and the like, or any combination thereof. In addition, the wireless network may be, for example, a cellular network and may employ various technologies including enhanced data rates for global evolution (EDGE), general packet radio service (GPRS), global system for mobile communications (GSM), Internet protocol multimedia subsystem (IMS), universal mobile telecommunications system (UMTS), etc., as well as any other suitable wireless medium, e.g., worldwide interoperability for microwave access (WiMAX), Long Term Evolution (LTE) networks (for e.g. LTE-Advanced Pro), 5G New Radio networks, ITU-IMT 2020 networks, code division multiple access (CDMA), wideband code division multiple access (WCDMA), wireless fidelity (Wi-Fi), wireless LAN (WLAN), Bluetooth, Internet Protocol (IP) data casting, satellite, mobile ad-hoc network (MANET), and the like, or any combination thereof. The handheld intraoral scanner 104 may be configured to communicate with the processors 106 via the communication channel 108.
[0052] The intraoral scanning system 102 may be configured to store, such as in a memory, data generated by the intraoral scanning system 102. For example, intraoral scanning system 102 may be configured to store data captured by the handheld intraoral scanner 104, i.e., a plurality of 2D images 110. In an example, the plurality of 2D images 110 generated by the handheld intraoral scanner 104 includes a plurality of two-dimensional (2D) infrared (IR) images 112 and a plurality of visible light images 114. Moreover, intraoral scanning system 102 may be configured to store data generated by the processors 106. Such data may include three-dimensional (3D) models 116, i.e., a three-dimensional (3D) surface model 118 of the dental object of the subject and an overlay 120 of 2D image information onto the 3D surface model 118.
[0053] The intraoral scanning system 102 may be utilized for registration of intraoral scans and generating the overlay for the dental object to show sub-surface information along with surface information for the dental object. The intraoral scanning system 102 may include multiple components, such as the handheld intraoral scanner 104, the processors 106 and memory (not shown) that may communicate with each other to register the intraoral scans of the dental object.
[0054] In an example, the handheld intraoral scanner 104 may include the processors 106. In another example, the handheld intraoral scanner 104 may be coupled to the processors 106, wherein the processors 106 may be located remotely and may perform operations associated with the intraoral scanning system 102. The intraoral scanning system 102 may have enhanced processing capabilities that may be required to process the plurality of 2D IR images 112 and the visible light images 114 in real time to generate plurality of 2D internal feature images and further overlay the 2D internal features image information (or 2D internal features) of the dental object onto the 3D surface model 118.
[0055] The handheld intraoral scanner 104 may be configured to capture the plurality of 2D images 110 during a scanning session of an oral cavity of a subject. The plurality of 2D images 110 may include the plurality of 2D IR images 112 indicating images of a dental object of the subject captured using light that is emitted at a wavelength corresponding to a range of IR or near-infrared (NIR) wavelengths. The plurality of 2D IR images 112 may include images of the dental object of the subject from various viewing angles or viewing points. Further, the plurality of 2D images 110 may include the visible light images 114 indicating images of the dental object of the subject captured using light that is emitted at a wavelength corresponding to a range of white light wavelengths or a selected sub-spectrum of the visible light such as blue light (400 - 495 nm), green light (490 - 570 nm), and fluorescence light (300 - 800 nm). The visible light images 114 may also include images of the dental object of the subject from various viewing angles. In certain cases, the plurality of 2D images 110 may also include images captured at other visible and/or non-visible light wavelengths.
[0056] In operation, the processors 106 are configured to receive and/or obtain visible light information and IR information from one or more sensors (referred to as sensors, hereinafter) of the handheld intraoral scanner 104. The sensors of the handheld intraoral scanner 104 may include, for example, light sensors and/or image sensors. To this end, based on the visible and/or IR light sensed by the sensors, the sensors may capture and provide the visible light information and the IR information corresponding to the dental object of the subject to the processors 106. In an example, the processors 106 may receive or obtain the visible light information as the plurality of 2D visible light images 114 (referred to as visible light images 114, hereinafter) and the IR information as the plurality of 2D IR images 112 (referred to as IR images 112, hereinafter).
[0057] In an example, the handheld intraoral scanner 104 may include a web server that may be configured to communicate via a web network and establish a connection to the communication channel 108. The handheld intraoral scanner 104 may be configured to execute the web server to provide visible light information and IR information obtained from the sensors to the processors 106. The handheld intraoral scanner 104 may further include a processing unit, a memory unit, a communication interface, the sensors, and additional components. The processing unit, the memory unit, the communication interface, and the additional components may be communicatively coupled to each other. Details of the
components of the handheld intraoral scanner 104 are further provided, for example, in FIG. 1B and FIG. 9.
[0058] Further, the processors 106 are configured to determine 3D surface information of the dental object of the subject based on the captured visible light images 114, particularly, white light information or white light images. In particular, the processors 106 are configured to generate the 3D surface model 118 of the dental object based on the visible light information. The 3D surface information may be a digital representation of a dental arch of the dental object, such as tooth, gums, or a set of teeth, of the subject depicted in a 3D space. The 3D surface information may include, for example, 3D point cloud data corresponding to the visible light images 114 or white light images. The 3D point cloud data may correspond to 3D real-world coordinates. The 3D surface information may additionally include colour texture reflected from a surface of the dental object, i.e., a surface of a tooth or a set of teeth.
[0059] The processors 106 may include processing capabilities that may be required to process the visible light images 114 (or visible light information) and IR images 112 (or IR information). The processors 106 may be configured to establish a connection to the communication channel 108. In an example, the processors 106 may be within the handheld intraoral scanner 104. For example, the processors 106 may receive the visible light information or the visible light images 114 and IR information or the IR images 112 from the handheld intraoral scanner 104 or sensors of the handheld intraoral scanner 104 to generate 3D surface information for the dental object of the subject. Further, based on the 3D surface information derived from the visible light images 114, such as white light images, the 3D surface model 118 of the dental object of the subject is generated. The processors 106 may further render the 3D surface information or the 3D surface model 118 of the dental object into an interactive 3D graphical representation. Details of generating the 3D surface model 118 for the dental object are further described in conjunction with, for example, FIG. 2.
[0060] The processors 106 are further configured to generate a plurality of two-dimensional (2D) internal feature images based on the visible light information and the IR information. The plurality of 2D internal feature images indicate 2D inner geometry features for the dental object. In an example, the plurality of 2D internal feature images may be hybrid images that may be generated based on the IR information and the visible light information relating to one or more sub-spectrums. In one example, the plurality of 2D internal feature images may be generated based on a combination of white light image information, fluorescent
light (such as fluorescent red and/or fluorescent green) image information and IR image information. However, such spectrums of images for generating the hybrid or hyperspectral images should not be construed as a limitation. Details of generating the plurality of 2D internal feature images are described in conjunction with, for example, FIG. 3A, FIG. 3B, FIG. 3C and FIG. 3D.
[0061] In an example, the plurality of 2D internal feature images or the 2D hyperspectral images may be generated to differently colour the different enhanced internal structures of the dental object, making it easier for a user of the system 102 to distinguish between the different enhanced internal structures. In an example, each of a plurality of channels associated with composed scan information, such as channels for visible light information and IR information, may be assigned to a different colour. To improve the distinction between the different enhanced internal structures, the processors 106 may be configured to weight each of the plurality of channels, differently or similarly, with a channel weighting coefficient. Details of generating the 2D internal feature images or the 2D hyperspectral images are further described in conjunction with, for example, FIG. 3A, FIG. 3B, FIG. 3C, FIG. 3D, FIG. 8A, FIG. 8B, FIG. 8C, FIG. 8D, FIG. 8E, FIG. 8F, and FIG. 8G.
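By way of a non-limiting illustration only, such per-channel weighting and colour assignment may be sketched in Python as follows; the channel names, weights, colour assignments, and normalisation below are hypothetical stand-ins, not the claimed implementation:

    import numpy as np

    def compose_hyperspectral(channels, weights, colours):
        # Weighted, colour-coded composition of spectral channels into a
        # single 2D internal feature image; each channel is assigned its
        # own display colour and channel weighting coefficient.
        h, w = next(iter(channels.values())).shape
        out = np.zeros((h, w, 3), dtype=np.float32)
        for name, img in channels.items():
            norm = (img - img.min()) / (img.max() - img.min() + 1e-8)  # assumed normalisation
            out += weights[name] * norm[..., None] * np.asarray(colours[name], np.float32)
        return np.clip(out, 0.0, 1.0)

    # Hypothetical input: IR and green fluorescence channels of equal size.
    channels = {"ir": np.random.rand(480, 640), "green_fluo": np.random.rand(480, 640)}
    weights = {"ir": 0.7, "green_fluo": 0.3}             # channel weighting coefficients
    colours = {"ir": (1.0, 0.2, 0.2), "green_fluo": (0.2, 1.0, 0.2)}
    image = compose_hyperspectral(channels, weights, colours)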
[0062] Once the plurality of 2D internal feature images indicating the inner structure information or sub-surface information for the dental object is generated, the processors 106 are configured to process the plurality of 2D internal feature images. The processors 106 are configured to correlate the 2D inner geometry features of the dental object with at least one reference frame of the dental object in the 3D surface model 118. It may be noted that the 3D surface model 118 may include 3D surface information that may be represented as a mesh. Further, to overlay the plurality of 2D internal feature images onto the 3D surface model 118, the processors 106 may have to select positions in the 3D surface model 118 at which to place the plurality of 2D internal feature images. The selection of these positions ensures that the images are positioned at an approximately constant distance from the surface of the dental object, and that the positions and directions of placement of the plurality of 2D internal feature images vary smoothly across the 3D surface model. Details of correlating the 2D inner geometry features of the dental object with the 3D surface model 118 are described in conjunction with, for example, FIG. 5A, FIG. 5B, FIG. 6A, FIG. 6B and FIG. 6C.
[0063] In an example, the reference frames may include perpendicular lines or normal lines of the 3D mesh of the 3D surface model 118. For example, the reference frames may indicate vectors that convey information about a local surface orientation at each control point of the 3D mesh. To this end, the control points may be used to map the pixels of the plurality of 2D internal feature images to their corresponding positions in order to generate the overlay 120 of the plurality of 2D internal feature images onto the 3D surface model 118. To this end, the one or more reference frames of the dental object in the 3D surface model 118 may provide information relating to the geometry and orientation of surfaces of the 3D surface model 118. Subsequently, the reference frames of the dental object, when correlated with the 2D inner geometry features, provide accurate positions for overlaying the plurality of 2D internal feature images. Details of the reference frames are provided in conjunction with, for example, FIG. 1D.
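A minimal, non-limiting sketch of computing such per-control-point surface normals on a triangle mesh is given below, assuming the mesh is available as vertex and face arrays (all names are illustrative):

    import numpy as np

    def vertex_normals(vertices, faces):
        # Area-weighted average of adjacent face normals per vertex; these
        # vectors serve as local reference frames (surface orientation at
        # each control point of the 3D mesh).
        normals = np.zeros_like(vertices)
        tri = vertices[faces]                                  # (F, 3, 3)
        face_n = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
        for corner in range(3):
            np.add.at(normals, faces[:, corner], face_n)       # accumulate onto vertices
        lengths = np.linalg.norm(normals, axis=1, keepdims=True)
        return normals / np.maximum(lengths, 1e-12)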
[0064] Based on the correlated reference frames of the dental object and the 2D inner geometry features, the plurality of 2D internal feature images may be overlaid and provided as output. In an example, the output may include the overlay 120 of the 2D image information, i.e., the 2D internal feature images, onto the 3D surface model 118. The correlated reference frames and the 2D inner geometry features may indicate a manner in which pixels from the plurality of 2D internal feature images should be displaced for each control point in the 3D mesh to accurately provide surface and sub-surface information for different viewing angles to a viewer.
[0065] For example, the overlay 120 of the dental object may be rendered as an interactive 3D graphical representation. The interactive 3D graphical representation may be rendered on a display unit of a device. The interactive 3D graphical representation may be viewed by the viewers or users, such as dentists, on the display to diagnose any disease or anomaly within the dental object of the subject. A view or viewing angle of the interactive 3D graphical representation of the overlay 120 may be modified by the users, based on their preference. For example, a perspective of the interactive 3D graphical representation may be changed, or the interactive 3D graphical representation may be zoomed in or zoomed out as per the preference of the users.
[0066] In one example, the 3D graphical representation of the overlay 120 may be rendered by the intraoral scanner 104 or any other light projector, such that the overlay is rendered directly above the surface of the dental object of the subject. In another example, the 3D
graphical representation of the overlay 120 may be rendered on a display device. The display device may be associated with any user accessible device such as a displaying unit, a monitor, a mobile phone, a smartphone, a tablet, a computer, an artificial reality (XR) device, and the like. In some examples, the display device may be a part of the user accessible device. The display may be, for example, a touch screen display. Additional, different, or fewer components may be provided. Further, it is possible that one or more components may be rearranged, changed, added, and/or removed without deviating from the scope of the present disclosure.
[0067] FIG. 1B shows an example schematic diagram 122 of the intraoral scanning system 102, in accordance with an example embodiment. The intraoral scanning system 102 includes the handheld intraoral scanner 104 configured to scan a dental object 124 of a subject. In an example, the handheld intraoral scanner 104 may emit light of various wavelengths in a pulsating manner. For example, the images of the dental object 124 may be captured using light having different wavelengths, such as white light, blue light, IR light, NIR light, fluorescent light, etc. To this end, the visible light images 114 and/or the IR images 112 for the dental object may be captured from the same position and at the same time due to the high pulse repetition rate of the light emitted from the handheld intraoral scanner 104.
[0068] The handheld intraoral scanner 104 may include a projector unit configured to emit light at different wavelengths, such as at a near-infrared wavelength, an infrared wavelength, white-coloured wavelengths and/or coloured visible wavelengths onto at least the dental object 124. In an example, the projector unit may be configured to emit light with different wavelengths in a pulsating manner during different time periods onto at least the dental object 124, wherein the different wavelengths include a near-infrared wavelength, an infrared wavelength, and a visible wavelength.
[0069] In an example, the visible light emitted by the projector unit may include one or more colour lights of the filtered visible light signals, such as red, green, blue, and white. The projector unit may include multiple light sources that are configured to emit the one or more colour lights and the infrared light. The multiple light sources may be arranged within a single module that includes multiple Light Emitting Diodes (LEDs) that are configured to emit different wavelengths within the visible and non-visible wavelength ranges. In another example, a light source, i.e., one or more LEDs configured to emit infrared light, may be arranged separately from the light source that is configured to emit the visible light.
[0070] The handheld intraoral scanner 104 may include an image sensor configured to capture the visible light information and the IR information from at least the dental object 124 caused by the emitted light of the projector unit. In another example, the image sensor is configured to capture the visible light information and the internal light information from at least the dental object 124 caused by the visible wavelength and the near-infrared and/or infrared wavelength, respectively.
[0071] The image sensor unit may include multiple cameras, such as high-speed cameras. In one example, the multiple cameras may be arranged around the projector unit or next to the projector unit.
[0072] The image sensor unit may include a plurality of pixels, wherein each of a plurality of single-colour channels and each of a plurality of combined-colour channels may be aligned to each of the plurality of pixels. In this example, each of the colour channels may overlap with a pixel of the image sensor or with a group of pixels of the image sensor.
[0073] The visible light information and IR information generated by the handheld intraoral scanner 104 may be communicated to the processors 106 over the wired or wireless communication channel 108. In an example, the processors 106 may be implemented as a computing device 126A or a server 126B external to the handheld intraoral scanner 104.
[0074] The system 102 includes the one or more processors arranged in the handheld intraoral scanner 104, the external computer 126A and/or the server 126B. The handheld intraoral scanner 104 may include a processor, and the processor is configured to process sensor data from sensors of the handheld intraoral scanner 104 into information, such as the visible light information and IR information configured to be transmitted to the external computer 126A or the server 126B. Furthermore, the external computer 126A or the server 126B may include the processors 106 to process the received visible light information and IR information.
[0075] For example, the subject may require a dental treatment. In such a case, the system 102 may be utilized by a user, such as a dentist, to provide the dental treatment to the subject. In an embodiment, the subject may be present at a dental clinic. In such a case, the system 102 may be utilized in a treatment room of the dental clinic. In another embodiment, the subject may have requested a home visit for the dental treatment. In such a case, the system 102 may be utilized in the home of the subject. To start the dental treatment, the handheld
intraoral scanner 104 may be utilized by the user to capture the visible light images 114 and the IR images 112 of the dental object 124 of the subject, using one or more sensors.
[0076] In an example, throughout a scan of the subject, the projector unit may be configured to emit the infrared light constantly while emitting the visible light. In this example, an on-off switching of the emitted infrared light is avoided, and thereby, unwanted transients on the emitted infrared light are avoided. Furthermore, any timing issue between the visible light and the infrared light is also avoided. In other embodiments, the visible light and the pulsed infrared light may be emitted for time periods having a ratio of 2:1, i.e., for every 2 seconds of visible light emission or 2 pulses of visible light, the IR light may be emitted for the next 1 second or 1 pulse, respectively.
[0077] FIG. 1C illustrates another example schematic diagram of the intraoral scanning system 102, in accordance with an embodiment. According to the present example, the processors 106 of the system 102 receive the visible light information 128 or the visible light images 114 and IR information 130 or IR images 112, for example, from the handheld intraoral scanner 104. In an example, the processors 106 are configured to generate the IR information 130 based on a subtraction of combined light signals from one or more colour light signals of the acquired visible light information 128.
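One plausible reading of this channel arithmetic, offered only as a hedged sketch, is that a sensor channel integrating visible plus IR light ("combined") minus the sum of the colour-filtered channels leaves an IR estimate; the function and variable names below are hypothetical:

    import numpy as np

    def estimate_ir(combined, colour_channels):
        # Assumed arithmetic: combined (visible + IR) channel minus the
        # sum of the colour-only channels yields an estimate of the IR
        # component; clipped at zero to suppress negative residuals.
        visible = np.sum(np.stack(colour_channels), axis=0)
        return np.clip(combined - visible, 0.0, None)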
[0078] Further, the processors 106 are configured to generate, at 132, a plurality of 2D internal feature images based on the visible light information 128 and the IR information 130. The plurality of 2D internal feature images indicate 2D inner geometry features for the dental object 124. For example, based on the IR information 130, inner geometry features of the dental object 124 may be determined. The processors 106 are also configured to determine, at 134, 3D data of the dental object based on the visible light information 128 to further generate the 3D surface model 118 of the dental object 124. In an example, the 3D surface model 118 may include a mesh with a plurality of control points indicating surface colour and/or texture information of the dental object 124. Thereafter, the processors 106 are also configured to process, at 136, the plurality of 2D internal feature images to correlate the 2D inner geometry features of the dental object 124 with one or more normal lines of the mesh of the 3D surface model 118 of the dental object 124. The correlation of the 2D inner geometry features with the normal lines of the 3D surface model 118 may enable mapping of sub-surface information from the 2D internal feature images onto the 3D surface model 118. For example, the subsurface information may provide colour information, shade information and inner region
information determined based on at least the infrared information 130 or the plurality of 2D internal feature images. Furthermore, the processors 106 are configured to output, at 138, the overlay 120 of the plurality of 2D internal feature images onto the 3D surface model 118. The overlay 120 includes surface 3D geometry and inner geometry of the dental object 124 or a dentition. In an example, the processors 106 may be configured to determine the overlay 120 and the 3D surface model 118 in parallel based on the IR information 130 and the visible light information 128.
[0079] In the example illustrated in FIG. 1C, the system 102 includes a display unit 140. For example, an overlay 142 of the 2D internal feature images onto the 3D surface model 118 may be rendered on the display unit 140. The processors 106 are configured to display the overlay 142 in real-time. The displayed overlay 142 includes both the 3D surface model and the inner structure information in the form of pixels of the plurality of 2D internal feature images.
[0080] An inner structure or sub-surface information of the dental object 124 may be determined by the processors 106 based on the IR information 130. The sub-surface information may include information about dental features that are arranged within the dental object 124. The dental feature may be one or more of an anatomy feature, a disease feature, and a mechanical feature. The anatomy feature may indicate information relating to, for example, an enamel, a dentine, or a pulp within the dental object 124. The disease feature may indicate information relating to, for example, plaque, crack, or caries. The mechanical feature may indicate information relating to, for example, a filling and/or a composite restoration.
[0081] In an example, the plurality of 2D internal feature images, i.e., 2D hyperspectral IR images, may have to be placed over or on selected positions on the 3D surface model 118. Such placement of the 2D internal feature images on the 3D surface model 118 may also have to show newly obtained information directly on the 3D surface model 118, for example when the viewing of the 3D surface model 118 changes. Such visualization of updated internal structure information of the teeth upon a change in viewing angle may allow for direct visualization of certain diagnoses onto the 3D surface model 118 or the overlay 120 that may not be obtained with only white light images or a white light render in a 3D model. The projection or overlay of the 2D internal feature images is done through mapping and aligning the pixels of the 2D internal feature images to estimate one or more object poses for the dental object 124. In an example, one or more object poses may be determined for each
tooth of the subject. Based on the one or more object poses (or one or more tooth poses) the overlay of the 2D internal feature images onto the 3D surface model may be generated.
[0082] FIG. 1D illustrates a coordinate system 144 for a dental object, in accordance with an example embodiment. In particular, the coordinate system 144 corresponds to the structure of a dental object 146 or a tooth, and to the different information provided by different directions and/or planes. The coordinate system 144 may be associated with the 3D surface model 118 for overlaying 2D internal feature information of the inner geometry of the dental object 146 onto the 3D surface model.
[0083] To this end, the coordinate system 144 for the dental object 146 may have 3 axes (X, Y and Z). In addition, the coordinate system 144 may have two or more planes, such as an XY plane 148, a YZ plane 150 and an XZ plane 152 (collectively referred to as planes 148, 150 and 152).
[0084] In an example, each of the planes 148, 150, and 152 of the dental object or the tooth 146 in the 3D surface model 118 comprises at least a first plane and a second plane. For example, the XY plane 148 may be the first plane and the YZ plane 150 may be the second plane. To this end, the XY plane 148 or the first plane may correspond to a buccal-lingual plane of the dental object or the tooth 146 (referred to as tooth 146, hereinafter) and the YZ plane 150 or the second plane may correspond to a mesial-distal plane of the tooth 146.
[0085] To this end, the different planes and axes of the coordinate system 144 for the tooth 146 provide different information. In an example, the Y axis may correspond to a tooth axis. For example, the tooth axis may indicate information of a depth of the tooth 146.
[0086] In addition, the planes 148, 150 and 152 of the dental object or the tooth 146 include at least one of an occlusal plane, a buccal plane, a lingual plane, a mesial plane, a distal plane, or a labial plane.
[0087] FIG. 2 illustrates a sequence diagram 200 that depicts generation of the 3D surface model 118, in accordance with an example embodiment. FIG. 2 is explained in conjunction with elements of FIG. 1A, FIG. 1B, FIG. 1C and FIG. 1D. The sequence diagram 200 may include the handheld intraoral scanner 104 and the one or more processors 106. The sequence
diagram 200 may depict operations performed by at least one of the handheld intraoral scanner 104 and the processors 106.
[0088] At step 202, a projector unit of the handheld intraoral scanner 104 may illuminate an oral cavity of a subject. The projector unit may illuminate the dental object of the oral cavity using visible light or white light wavelength pulses and IR and/or NIR wavelength pulses. In an example, the dental object 124 may correspond to one or more structures, such as tooth, a set of teeth, and/or gums inside the oral cavity of the subject.
[0089] At step 204, one or more sensors of the handheld intraoral scanner 104 may detect visible light and IR or NIR light. Such detected light may be reflected or refracted from a surface or inner region of the dental object 124. In other words, the one or more sensors may detect the visible light information 128 and the IR information 130 reflected or refracted from the dental object 124 or tooth.
[0090] At step 206, the one or more sensors of the handheld intraoral scanner 104 may capture visible light images 114 and 2D IR images 112. For example, the one or more sensors may include image sensors that may be configured to capture the visible light images 114 based on the detected visible light information 128 and the IR images 112 based on the detected IR information 130. In this manner, the one or more sensors may be configured to generate the plurality of 2D images 110 of the dental object 124.
[0091] At step 208, the plurality of 2D images 110 of the dental object 124 are received by the processors 106. As mentioned above, the plurality of 2D images 110 of the dental object 124 may include the visible light images 114 and the IR images 112.
[0092] At step 210, 3D surface information is determined using the visible light information 128. In particular, the visible light images 114 may be processed to determine the 3D surface information. In an example, in-focus measurements in the visible light images 114 may be determined. Further, projected features of the dental object 124 may be tracked across the visible light images 114 or white light images. For example, a correspondence function may be performed or solved to triangulate depth information for the dental object 124 and determine the 3D surface information for the dental object 124.
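A minimal two-view sketch of such depth triangulation is shown below, assuming calibrated 3x4 projection matrices (from the sensor calibration data) and one feature tracked across two white light images; this linear DLT form is one common way to solve the correspondence, not necessarily the claimed one:

    import numpy as np

    def triangulate_point(P1, P2, x1, x2):
        # Linear (DLT) triangulation of one tracked feature: P1, P2 are
        # 3x4 camera projection matrices; x1, x2 are the feature's (u, v)
        # pixel coordinates in the two views.
        A = np.vstack([x1[0] * P1[2] - P1[0],
                       x1[1] * P1[2] - P1[1],
                       x2[0] * P2[2] - P2[0],
                       x2[1] * P2[2] - P2[1]])
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]        # homogeneous -> 3D real-world coordinates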
[0093] At step 212, the 3D surface model 118 is generated for the dental object 124 based on the 3D surface information. In an example, a 3D patch may be generated corresponding
to a part of a surface of the dental object 124, or a surface of the set of teeth of the subject by accessing calibration data of the one or more sensors and transforming the 3D surface information corresponding to the part into real-world 3D coordinates and texture information. For example, different 3D patches may be generated for different parts of the surface of the teeth using the visible light images 114 or the white light images. In certain cases, there may be an overlap in parts covered by different 3D patches. For example, the 3D patch for the part of the surface of the teeth may be registered or associated with one or more previously generated 3D patches for other parts and/or overlapping parts of the surface of the teeth by locating corresponding data points. Thereafter, the 3D patch for the part may be fused with the one or more previously generated 3D patches relating to other parts of the surface of the teeth to generate the 3D surface model 118.
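The registration of a new 3D patch onto previously generated patches may, for instance, be realised as a rigid best-fit over the corresponding data points located in the overlap region; the Kabsch/SVD sketch below assumes those correspondences are already available and is illustrative only:

    import numpy as np

    def rigid_align(src, dst):
        # Best-fit rotation R and translation t registering a new patch
        # (src, shape (N, 3)) onto earlier patches (dst), given point
        # correspondences src[i] <-> dst[i].
        c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
        H = (src - c_src).T @ (dst - c_dst)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = c_dst - R @ c_src
        return R, t                                 # fuse via (R @ src.T).T + t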
[0094] In an example, the 3D surface model 118 may be a 3D point cloud including 3D points or 3D control points within voxels in a signed distance field, and the signed distance field may be converted into a 3D mesh for rendering the 3D surface model 118. In an example, the 3D surface model 118 is generated as an offset mesh comprising a grid or mesh of 3D control points. For example, an initial grid or mesh of 3D control points for generating the 3D surface model 118 may be created over input images, i.e., the visible light images 114. The grid may be a grid of points or a set of key points (referred to as 3D control points) that are strategically placed based on a type of distortion or transformation to be performed on the images. These 3D control points may be used as references for mapping pixels of images in the corresponding mesh to generate the 3D surface model 118. For example, each of the 3D control points may provide associated offset information that defines how much the pixel at that control point should be moved or adjusted. The 3D surface model 118 may also include colour data and texture data of the surface of the teeth.
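As a non-limiting sketch of converting such a signed distance field into a renderable 3D mesh, the standard marching cubes algorithm may be applied to the zero level set; the spherical field below is merely a stand-in for fused scan data:

    import numpy as np
    from skimage import measure

    # Hypothetical signed distance field on a voxel grid: negative inside
    # the surface, positive outside (a sphere stands in for scan data).
    grid = np.mgrid[-32:32, -32:32, -32:32].astype(np.float32)
    sdf = np.sqrt((grid ** 2).sum(axis=0)) - 20.0

    # Extract the zero level set as a triangle mesh for rendering.
    verts, faces, normals, _ = measure.marching_cubes(sdf, level=0.0)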
[0095] After the 3D surface model 118 of the dental object 124 or the set of teeth is generated, the processors 106 may be configured to process the visible light information along with IR information to generate the overlay 120 indicating sub-surface information for the dental object 124 or the set of teeth. Details of the generation of the overlay are described in conjunction with, for example, FIG. 7A, FIG. 7B, FIG. 7C, FIG. 7D, FIG. 7E, and FIG. 7F.
[0096] FIG. 3A, FIG. 3B, FIG. 3C, and FIG. 3D illustrate example diagrams for generating a plurality of 2D internal feature images based on the IR information 130 and the visible light
information 128. In particular, FIG. 3A, FIG. 3B, FIG. 3C, and FIG. 3D describe generation of the 2D internal feature images based on combining (such as adding or subtracting) different spectrums of visible and IR light. As a result, hybrid or hyperspectral images are generated. It may be noted that the different manners of generation of the 2D internal feature images or 2D hyperspectral images for the dental object 124 are only exemplary and should not be construed as a limitation.
[0097] The embodiments of the present disclosure may utilize mixing signals or information obtained from imaging the dental object 124 with different light sources, i.e., white, blue and IR or NIR, in order to obtain a 2D internal feature image or a hyperspectral image. It may be noted that the 2D internal feature images may include a 2D composition of inner geometry features for the dental object 124. It is an objective of the present disclosure to combine information from a 2D internal feature image or a hyperspectral image, i.e., images taken with white/blue/IR light, in order to generate the overlay 120 displaying various diagnoses like cracks, caries, bacteria, etc.
[0098] In an example, the visible light information 128 includes surface reflection, i.e., surface information, provided by, for example, white coloured light emitted by the handheld intraoral scanner 104. Further, the IR information 130 includes sub-surface structure information provided by IR and/or NIR light emitted by the handheld intraoral scanner 104.
[0099] Referring to FIG. 3A, the visible light information 128 may include, for example, white light information 128A that is provided by white coloured light emitted by the handheld intraoral scanner 104. To this end, a plurality of 2D internal feature images 302 may be generated based on a subtraction of the white light information 128A from the infrared information 130. In an example, the visible light information 128 may be filtered, for example, using a single colour channel or multiple colour channels to produce visible light in different wavelengths, such as white light 128A, red light, blue light, green light, etc. Such filtered white light information 128A may be used for determining the 3D surface model 118 of the dental object 124. Further, the 2D internal feature images 302 include an enhanced internal structure 304 that is represented by a restoration that is not seen in the infrared information 130 but can easily be identified in the composed or generated 2D internal feature images 302. The composed 2D internal feature images 302 may be mapped onto the 3D surface model 118 by the processors 106, such that the composed 2D internal feature images 302 provide three-dimensional sub-surface information regarding the inner structure of the dental object 124.
[0100] Referring to FIG. 3B, the visible light information 128 may include, for example, excited fluorescence light information 128B that is provided by green and/or blue coloured light emitted by the handheld intraoral scanner 104. Pursuant to the present example, a plurality of 2D internal feature images 306 may be generated based on a subtraction of the fluorescence light information 128B from the IR information 130. For example, the fluorescence light information 128B may be used for applying fluorescence information onto the 3D surface model 118 of the dental object 124. It may be noted that the infrared information 130 on its own fails to show the anomalies 308 in the inner region of the dental object 124. However, the IR information 130 with the subtracted fluorescence light information 128B in the 2D internal feature images 306 enhances the visibility of the anomalies 308.
[0101] Referring to FIG. 3C, the visible light information 128 may include, for example, excited green fluorescence light information 128C that is provided by green coloured light emitted by the handheld intraoral scanner 104. In accordance with the present example, a plurality of 2D internal feature images 310 may be generated based on a summation of the green fluorescence light information 128C and the IR information 130. To this end, the 2D internal feature images 310 include enhanced textural information about different layers in the inner region of the dental object 124. The layers in the inner region may correspond to, for example, the enamel 312A and dentin 312B. As a result, the dentin-enamel junction (DEJ) becomes clearer to see due to an improved contrast in the 2D internal feature images 310.
[0102] It may be noted that the enamel 312A and the dentin 312B are more clearly seen in the composed 2D internal feature images 310 than in the infrared information 130. In certain cases, the plurality of 2D internal feature images may also be generated using excited red fluorescence light information, such as by a summation of the IR information 130, the green fluorescence light information 128C and the red fluorescence light information. Such 2D internal feature images generated based on the IR information 130, the green fluorescence light information 128C and the red fluorescence light information may also improve the visibility of the DEJ compared to regular or enhanced fluorescence information, or compared to the IR information 130 alone.
[0103] Referring to FIG. 3D, a projector unit is configured to emit light pulses corresponding to visible light pulses or visible wavelength pulses. In an example, the wavelengths of the visible light pulses may include blue wavelength pulses and white wavelength pulses. In other words, the projector unit is configured to emit visible light pulses
having wavelengths corresponding to white light and blue light. The projector unit may also emit non-visible light pulses that include infrared wavelengths. In one example, the emitted blue wavelength pulses may be used to capture excited green fluorescence light information 128C and excited red fluorescence light information 128D.
[0104] To this end, the visible light signals (128A, 128C and 128D) include surface information of the dental object 124 provided by the emitted white wavelength pulses, together with fluorescence green wavelength information and fluorescence red wavelength information excited by the emitted blue wavelengths. The surface information is used for generating or updating the 3D model 118. Moreover, the fluorescence information (128C and 128D) may be used for generating a plurality of 2D internal feature images 314. In the present example, the processors 106 may be configured to determine a first difference between the infrared information 130 and the green fluorescence light information 128C and a second difference between the infrared information 130 and the red fluorescence light information 128D. Further, the 2D internal feature images 314 are generated based on a summation of the first difference and the second difference. The generated 2D internal feature images 314 provide enhanced internal structure information relating to the dental object 124. For example, the 2D internal feature images 314 indicate sub-surface information, such as anomalies, anatomical structure, etc., more clearly in comparison to the independent infrared information 130.
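The per-pixel arithmetic described for FIG. 3A, FIG. 3C and FIG. 3D may be sketched as follows, assuming co-registered intensity images normalised to [0, 1]; the random arrays are merely stand-ins for captured data:

    import numpy as np

    def norm(img):
        return (img - img.min()) / (img.max() - img.min() + 1e-8)

    # Stand-ins for co-registered images captured from the same position.
    ir, white = np.random.rand(480, 640), np.random.rand(480, 640)
    green_f, red_f = np.random.rand(480, 640), np.random.rand(480, 640)

    hybrid_3a = norm(ir) - norm(white)       # FIG. 3A: IR minus white light
    hybrid_3c = norm(ir) + norm(green_f)     # FIG. 3C: IR plus green fluorescence
    first_diff = norm(ir) - norm(green_f)    # FIG. 3D: first difference
    second_diff = norm(ir) - norm(red_f)     # FIG. 3D: second difference
    hybrid_3d = first_diff + second_diff     # summation of the two differences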
[0105] In an example, the plurality of 2D internal feature images generated may include non-object information. For example, the plurality of 2D internal feature images may include parts corresponding to the tongue, walls, etc. of the oral cavity of the subject. However, such information is not required for generating the 3D surface model 118 and/or the overlay 120 of the 2D internal features of the dental object 124 onto the 3D surface model. Further, such information may increase the processing time and the computing power required for generating the overlay 120. To this end, such non-object information that may not relate to the dental object, such as the teeth and the gum, in the plurality of 2D internal feature images needs to be eliminated. Details of removing the non-object information are described in conjunction with, for example, FIG. 4A and FIG. 4B.
[0106] FIG. 4A illustrates a flowchart 400 of a method for generating an object mask for the internal feature images, in accordance with an example embodiment. As described above, the 2D internal feature images may be generated based on the IR information 130 and the visible light information 128. For example, the IR information 130 may be enhanced based on
different spectrum or colour based excited fluorescence light information and/or white light information. Such enhancement of the IR information 130 based on different spectrums may result in a clearer view of the inner structure of the dental object 124.
[0107] In an example, certain IR images and visible light images may be collected from the same time frame and the same scanning position due to the high pulse repetition rate. To this end, an object mask that may be generated using the visible light images may be implemented on, or may also correspond to, the IR images.
[0108] The object mask may be used to filter or isolate specific parts of an image, such as the plurality of 2D internal feature images, while excluding or suppressing unwanted regions. For example, the object mask may be implemented as a binary image where each pixel is assigned a value of either 1 (to include a pixel) or 0 (to exclude a pixel) based on predefined criteria or a pattern. For example, such criteria or pattern may be defined based on the visible light images. The object mask may be employed to perform masking or mask-based filtering to apply selective processing to the plurality of 2D internal feature images.
[0109] At 402, the processors 106 are configured to generate an object mask using visible light information captured from a scanning position. It may be noted, the scanning position corresponds to a position of the one or more sensors of the handheld intraoral scanner 104. In an example, the processors 106 are configured to generate multiple masks based on different scanning positions in which the handheld intraoral scanner 104 (referred to as scanner 104, hereinafter) may be moved. Further, for a particular scanning position, the processors 106 are configured to identify visible light information or visible light images captured by the scanner 104. Based on the identified visible light images for the scanning position, one or more parts of the dental object 124 in the identified visible light images may be defined and identified.
[0110] In an example, the object mask may be generated based on predefined criteria or automatically based on image processing techniques. The object mask may be generated based on identification of parts of the dental object 124 in images, such as visible light images. Further, a pixel value '1' may be assigned to pixels that correspond to such parts of the dental object 124 in the images. In this manner, the object mask may define which parts of the images should be included or excluded in the filtering process. For example, the techniques used for creating the object mask may include, but are not limited to, thresholding, edge detection, and region segmentation.
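A minimal thresholding sketch of such mask generation is given below; the threshold value and function name are hypothetical, and, as noted above, edge detection or region segmentation could equally be used:

    import numpy as np

    def object_mask_from_visible(visible_img, threshold=0.35):
        # Binary mask: 1 where the normalised visible light image is
        # brighter than the threshold (assumed to be the dental object),
        # 0 elsewhere.
        rng = visible_img.max() - visible_img.min()
        norm = (visible_img - visible_img.min()) / (rng + 1e-8)
        return (norm > threshold).astype(np.uint8)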
[0111] At 404, the processors 106 are configured to apply the object mask on the plurality of 2D internal feature images. As the object mask is generated for the scanning position, a set of 2D internal feature images associated with the scanning position may be identified. For example, the set of 2D internal feature images may be generated based on combining of visible light images and IR images captured from the scanning position. Further, as the pulse repetition rate for emitting visible wavelength pulses and IR wavelength pulses is high, the visible light images and the IR images may be captured from the same scanning position. To this end, based on the generated object mask for the scanning position, the set of 2D internal feature images may be segmented. In this regard, the object mask may be applied to the set of 2D internal feature images to segment at least a portion of the 2D internal feature images. Details of applying the object mask on the 2D internal feature images are described in conjunction with, for example, FIG. 4B.
[0112] Referring to FIG. 4B, a schematic diagram of applying an object mask 408 is shown, in accordance with an example embodiment. For example, the object mask 408 may be an image corresponding to the visible light images, such that the object mask 408 may have assigned pixel values of '1' to parts of the dental object 124 in the visible light images. For example, as shown in 410A, the object mask 408 may be generated by assigning the pixel values of '1' to pixels corresponding to the dental object 124 in the visible light images. Moreover, in certain cases, as shown in 410B, the object mask 408 may be generated by assigning the pixel values of '1' to pixels corresponding to outer edges of the dental object 124 in the visible light images.
[0113] Further, the object mask 408 may be applied to segment parts of the set of 2D internal feature images indicating non-object information. In this regard, the set of 2D internal feature images generated based on the combining of the visible light images with IR images taken from the same scanning position may be segmented. The object mask 408 may be, for example, overlaid on the 2D internal feature images to perform element-wise multiplication. Each pixel value in the mask is multiplied with the corresponding pixel value in an image from the set of 2D internal feature images. If the mask pixel is 1, the corresponding pixel in the image remains unchanged; if the mask pixel is 0, the corresponding pixel in the input image is set to 0. In this manner, the object mask 408 is applied to the IR information in the set of 2D internal feature images to generate filtered IR
information 414A as well as visible light information (such as fluorescence light information) in the set of 2D internal feature images to generate filtered visible light information 414B.
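The element-wise masking itself reduces to a per-pixel multiplication, as the short sketch below illustrates for a set of internal feature images from one scanning position (names are illustrative):

    import numpy as np

    def apply_object_mask(feature_images, mask):
        # mask == 1 keeps a pixel unchanged; mask == 0 sets it to zero.
        return [img * mask for img in feature_images]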
[0114] Returning to FIG. 4A, at 406, the processors 106 are configured to remove the segmented portion of the 2D internal feature images indicating the non-object information. In this regard, by applying the object mask 408, the result of the element-wise multiplication generates images where only the pixels of the set of 2D internal feature images that align with the 1s in the object mask 408 are retained. In this manner, the segmented portion indicating the non-object information may be removed from the set of 2D internal feature images.
[0115] Although the present example describes applying the object mask on the 2D internal feature/hyperspectral images, in certain cases, the one or more object masks generated based on the white light images may be applied to captured blue light images and the IR images in order to remove background, redundant information, or non-teeth information from the plurality of 2D images 110. The object masks may segment parts corresponding to the object information, e.g., the teeth. This is possible because the white light images, blue light images and IR images are taken from the same position at the same time frame.
[0116] To this end, one or more object masks may be generated for filtering the plurality of 2D internal feature images based on different scanning positions of the scanner 104. Further, in some cases, the masked or filtered 2D internal feature images may undergo various filtering operations, such as blurring, sharpening, contrast adjustment, noise reduction, or any other image processing technique to enhance visibility of the internal structure of the dental object 124. These operations are applied only to the selected regions of interest defined by the object mask(s). Once segmented, the plurality of 2D internal feature images may be processed to be correlated with the 3D surface model 118 for generating the overlay 120. Details of processing the 2D internal feature images are described in conjunction with, for example, FIG. 5A and FIG. 5B.
[0117] FIG. 5A illustrates a method flowchart 500 for estimating one or more object poses for the dental object, in accordance with an example embodiment. The one or more object poses are generated by the processors 106 using the 2D internal feature images and the 3D model 118. For example, the one or more object poses are generated using
a machine learning model. FIG. 5A is explained in conjunction with elements of FIG. 1A, FIG. 1B, FIG. 1C, FIG. 1D, FIG. 2, FIG. 3A, FIG. 3B, FIG. 3C, FIG. 3D, FIG. 4A and FIG. 4B.
[0118] The one or more object poses of inner geometry for the dental object 124 may be estimated using a trained machine learning model. Moreover, the one or more object poses may be used to overlay a plurality of 2D internal feature images (referred to as 2D internal feature images, hereinafter) of the dental object onto the 3D surface model 118 (referred to as 3D model, hereinafter).
[0119] To this end, for clarity, the embodiments of the present disclosure are explained in reference to processing of 2D internal feature images relating to a single tooth for estimating one or more object poses or tooth poses and further overlaying 2D internal feature images of the tooth onto the 3D model 118 at the position of the tooth. Such embodiments may be repeated for every tooth in the set of teeth of the subject to generate the overlay 120 for the subject.
[0120] Referring to FIG. 5A, at 502, the processors 106 are configured to align a template object model with a position of the dental object in the 3D model 118. In an example, each tooth in the 3D model 118 may belong to a tooth type, such as a central incisor, a lateral incisor, a canine, a first premolar, a second premolar, a first molar, a second molar, a third molar, etc. Moreover, a template object model (also referred to as a template tooth model) may be predefined for different tooth types. In an example, the template object model may be identified and aligned based on the position of the dental object or the tooth in the 3D model 118 and a tooth type of the dental object or the tooth. For example, based on the position of the tooth in the 3D model 118, a tooth type of the tooth may be identified. Further, the template tooth model of the identified tooth type may be aligned with the position of the dental object in the 3D model 118. Subsequently, the template object model has a corresponding tooth type, such as one of a central incisor, a lateral incisor, a canine, a first premolar, a second premolar, a first molar, a second molar, and a third molar.
[0121] It may be noted that the template object model may provide generic feature information relating to the dental object, i.e., the tooth, based on the type of the tooth. For example, the template object model may indicate the shape, size, and anatomical structure of the surface and/or inner structure of the tooth.
[0122] Referring to FIG. 5B, a schematic diagram 510 for alignment of a template object model 512 is shown, in accordance with an example embodiment. In one example, the template object model 512 also includes a 3D coordinate system 514 indicating an orientation of the dental object with the 3D model 118. In other words, the template object model 512 may have the predefined 3D coordinate system 514 having coordinate axes. For example, the different coordinate axes (such as X, Y and Z) of the 3D coordinate system 514 may correspond to different tooth axes to provide structure information of the tooth type in the different directions in 3D. Moreover, the values of the coordinate axes of the 3D coordinate system 514 of the template object model 512 may indicate an orientation of the template object model 512 of the dental object or the tooth with the 3D model 118.
[0123] In an example, the template object model 512 may be placed at an origin of the 3D coordinate system 514. For example, the values along the coordinate axes of the 3D coordinate system 514 may be manipulated to transform characteristics of the template object model 512 into characteristics of the dental object or tooth corresponding to it.
[0124] In accordance with an embodiment, a coordinate axis of the 3D coordinate system 514 is aligned with an occlusal reference frame 516 of the dental object in the 3D model 118. It may be noted that the occlusal reference frame 516 may be a normal line or a perpendicular from an occlusal plane 518 of the dental object in the 3D model 118. Particularly, the occlusal reference frame 516 may pass through a surface of the dental object, i.e., a surface normal, in the 3D model 118. The surface normal or the occlusal reference frame 516 may define an occlusal direction for aligning the 3D coordinate system 514 with the 3D model 118. This may ensure that the template object model 512 is aligned with the surface of the dental object identified in the 3D model 118.
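Aligning a coordinate axis of the template model with the occlusal reference frame amounts to finding the rotation taking one unit vector onto another; a Rodrigues-formula sketch, with hypothetical vectors, is given below as a non-limiting illustration:

    import numpy as np

    def rotation_between(a, b):
        # Rotation matrix taking unit vector a onto unit vector b.
        a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
        v, c = np.cross(a, b), float(np.dot(a, b))
        if np.isclose(c, -1.0):
            # Opposite vectors: 180-degree rotation about any axis
            # perpendicular to a.
            axis = np.cross(a, [1.0, 0.0, 0.0])
            if np.linalg.norm(axis) < 1e-8:
                axis = np.cross(a, [0.0, 1.0, 0.0])
            axis /= np.linalg.norm(axis)
            return 2.0 * np.outer(axis, axis) - np.eye(3)
        K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
        return np.eye(3) + K + (K @ K) / (1.0 + c)

    template_axis = np.array([0.0, 1.0, 0.0])       # Y axis as the tooth axis
    occlusal_normal = np.array([0.1, 0.95, 0.05])   # hypothetical surface normal
    R = rotation_between(template_axis, occlusal_normal)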
[0125] Further, the estimation of the one or more object poses for transforming the template object model 512 to the actual dental object is performed using the mesh of the 3D model 118, the occlusal direction and the segmentation of the 2D internal feature images based on the object mask 408.
[0126] Returning to FIG. 5A, at 504, the processors 106 are configured to correlate a position of each of the 2D internal feature images of the dental object with the position of the dental object in the 3D model 118. The position of the dental object in the 3D model may be identified based on calibration data relating to the sensors of the scanner 104. Moreover, the
2D internal feature images showing or relating to the dental object may be identified based on a mapping between the scanning position and/or scanning time frame of the visible light information used for generating one or more control points of the dental object in the 3D model 118.
[0127] In an example, based on the correlation of the position of each of the 2D internal feature images of the dental object with the position of the dental object in the 3D model 118, the 2D internal feature images for the dental object may be categorized for, or moved to, the position of the dental object in the 3D model 118. This may be done for each dental object or tooth in the set of teeth. For a full set of teeth, by positioning the 2D internal feature images based on their corresponding positions in the 3D surface model, certain gaps in the inner structure of the dental objects may be filled and/or repetitive information may be eliminated. This may improve the accuracy and efficiency of further processing of the 2D internal feature images. In an example, based on the correlation, the 2D internal feature images may be moved to or positioned at the position of the dental object in the 3D model 118.
[0128] According to an embodiment, the scanner 104 may capture visible light information and IR information for the dental object from the same location due to the high pulse repetition rate of the projecting unit emitting the IR light and the visible light (such as white light and blue light). As a result, an order of receiving or incoming of the visible light information corresponding to the dental object, and subsequent points of the 3D model 118 corresponding to the dental object, may be close to, such as within a predefined range from, an order of receiving the IR information that is used for generating the 2D internal feature images for the dental object. Therefore, the processors 106 are configured to determine or correlate the position of the 2D internal feature images with the position of the dental object in the 3D model 118 based on a correlation of the order of the incoming visible light information and the order of the incoming IR information corresponding to the dental object. The order may be defined in terms of, for example, calibration data of the sensors of the scanner 104, the scanning position or location of the scanner 104 with respect to the dental object, the scanning time frame, etc. In particular, if IR information of the 2D internal feature images of a particular order is not mapped or correlated with visible light information of the corresponding order for the 3D surface model, then the correlation may lead to misalignment in the positioning of the 2D internal feature images with respect to the 3D surface model.
[0129] Further, the correlation may allow a relationship to be established between the 2D internal feature images of the dental object and its corresponding 3D representation in the 3D model 118. This correlation may enable the processors 106 to accurately determine a location of the dental object within the 3D scene or the 3D model 118. This may also enable tracking of the dental object's position in the 3D real-world coordinates when the 3D model is rendered. In some cases, correlating the position of the 2D internal feature images with the position of the dental object in the 3D surface model may also be used for spatial registration, i.e., to ensure that the 3D model aligns correctly with the real-world environment.
[0130] Thereafter, at 506, the processors 106 are configured to correlate a viewing orientation of each of the 2D internal feature images of the dental object with the 3D coordinate system 514 of the template object model 512 of the dental object. In an example, the viewing orientations of the IR information and the visible light information for the same time frame for the dental object may be close or the same, as the wavelength pulses are emitted at a high rate. Therefore, the viewing orientation or camera orientation of the scanner 104 while capturing the 2D internal feature image generated based on the IR information and the visible light information may be determined based on the calibration data of the sensors of the scanner 104. Further, for example, if a 2D internal feature image is captured from a front viewing orientation, say 10 degrees from the Y axis, then values of pixel(s) and/or points at 10 degrees from the Y coordinate axis of the template object model may be adjusted, updated, or modified based on the 2D internal feature image.
[0131] In an example, the correlation of the viewing orientations of the 2D internal feature images with the 3D coordinate system 514 enables 3D reconstruction of the inner geometry of the dental object by transforming the template object model based on the 2D internal feature images. As the viewing orientations of the 2D internal feature images are mapped to the coordinates of the template object model, the mapping of information from the 2D internal feature images to the template object model 512 becomes accurate and straightforward.
[0132] Further, based on the correlation of the viewing orientation of the different 2D internal feature images with the 3D coordinate system 514 of the template object model 512, and the position of the 2D internal feature images with the 3D model 118, the one or more object poses of the dental object may be determined. In particular, the values of the 2D
internal feature images may provide sub-surface information of the dental object from different directions. In this manner, the one or more object poses (i.e., position and orientation) for manipulating the inner geometry of the template object model 512 or a template model of a tooth to transform the template object model 512 into the tooth may be determined. In particular, based on the correlations, the values of the matrix of the template object model 512 may be updated. Further, the one or more poses, i.e., the matrix, may be used to transform the template object model 512 to the dimensions of the scanned dental object or the scanned tooth. A manner in which the template object model 512 is transformed is further described in conjunction with, for example, FIG. 6A, FIG. 6B and FIG. 6C.
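Applying such a pose matrix to the template is a standard homogeneous transform; the sketch below assumes a 4x4 affine pose matrix and (N, 3) template vertices, and is illustrative only:

    import numpy as np

    def apply_pose(template_vertices, pose):
        # Transform the template object model's vertices by a 4x4 affine
        # pose matrix into the scanned tooth's position and orientation.
        homo = np.hstack([template_vertices, np.ones((len(template_vertices), 1))])
        return (homo @ pose.T)[:, :3]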
[0133] FIG. 6A illustrates a method flowchart 600 for correlating 2D inner geometry features with the 3D model 118 for the dental object, in accordance with an example embodiment. It is crucial to determine the correlation between the 2D inner geometry features and the 3D model 118 accurately. To this end, the overlay of the 2D internal feature images onto the 3D surface model is performed based on the correlation of the inner geometry features with the 3D surface model. The accurate correlation further ensures that the overlay of the 2D internal feature images is accurately updated in cases where a viewing angle of the 3D model by a viewer is changed, i.e., the 3D model is rotated, moved, zoomed, etc. FIG. 6A is explained in conjunction with elements of FIG. 1A, FIG. 1B, FIG. 1C, FIG. 1D, FIG. 2, FIG. 3A, FIG. 3B, FIG. 3C, FIG. 3D, FIG. 4A, FIG. 4B, FIG. 5A and FIG. 5B.
[0134] At 602, the processors 106 are configured to estimate one or more object poses of the 2D inner geometry for the dental object. In an example, a machine learning model may be trained to align the template object model 512 with the 3D model. Further, the positions of the 2D internal feature images are correlated with the position of the dental object in the 3D model 118; and the viewing orientations or camera orientations for the 2D internal feature images are correlated with the 3D coordinate system 514 of the aligned template object model 512 for the dental object. This correlation of the positions of the 2D internal feature images with the position of the dental object in the 3D model 118 enables establishing a relationship between the 2D internal feature images and their corresponding 3D representation, so as to ensure tracking of the location of the dental object within the 3D scene or model. Moreover, by aligning the viewing orientations with the 3D coordinate system 514 of the template object model 512, information from the 2D internal feature images may be mapped onto the template object model 512. Based on these correlations, the one or more
object poses (such as position and orientation) of the inner geometry of the dental object may be estimated for the viewing orientation(s). Determination of the one or more poses is crucial for manipulation of information of the 2D internal feature images of the dental object during overlaying.
[0135] In an example, the one or more object poses for the inner geometry of the dental object may include a matrix. In other words, each of the object poses may be defined as a matrix that transforms a template object model 512 or a template tooth to the scanned dental object or tooth for a corresponding viewing direction or orientation. To this end, the matrix of the object poses may indicate a direction or orientation corresponding to the transformation of the template object model 512 based on the viewing orientations of the 2D internal feature images. Details of the estimation of the one or more object poses of the 2D inner geometry for the dental object are described in conjunction with, for example, FIG. 5A and FIG. 5B.
[0136] In an example, the trained machine learning model may be implemented as a convolutional neural network (CNN) and/or other deep learning architecture. The trained machine learning model may take the template object model 512 and the 2D internal feature images as input and predict the object poses for different orientations, typically represented as translation (X, Y, Z) and rotation (pitch, yaw, roll) values.
[0137] For example, to train the machine learning model for pose estimation, a training dataset comprising pairs of template object models and their corresponding ground truth poses for different orientations may be fed to the machine learning model. This data is used to teach the model how to associate visual features in the 2D internal feature images and the template object model with specific poses or orientations. For example, relevant features, such as key points, edges, corners, or other distinctive visual elements, may be extracted by the machine learning model during the training to learn to map the direction or orientation information from the 2D internal feature images to the coordinate axes of the 3D template object model.
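For illustration, a minimal sketch of a pose-regression network of the kind described above, assuming a supervised 6-DoF regression setup in PyTorch; the architecture, tensor shapes and placeholder data are hypothetical stand-ins, not the disclosed model:

```python
import torch
import torch.nn as nn

class PoseRegressor(nn.Module):
    """Minimal CNN that regresses a 6-DoF pose (translation X, Y, Z and
    rotation pitch, yaw, roll) from a 2D internal feature image."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 6)  # [tx, ty, tz, pitch, yaw, roll]

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = PoseRegressor()
image = torch.randn(8, 1, 128, 128)  # batch of 2D internal feature images
target = torch.randn(8, 6)           # ground-truth poses from the training set
loss = nn.functional.mse_loss(model(image), target)
loss.backward()                      # one supervised pose-regression step
```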
[0138] At 604, the processors 106 are configured to determine one or more intermediate poses for the dental object. In an example, matrix interpolation techniques may be applied to affine matrices to obtain the one or more intermediate poses for the dental object. It may be noted that, for the dental object, the 2D internal feature images may be positioned or placed on orientations or directions defined by the object poses for the inner geometry of the dental object.
[0139] Further, an accurate orientation for positioning of the 2D internal feature images may not be defined for an interproximal area indicated by one or more pairs of interproximal images, i.e., images indicating information relating to interproximal areas situated or occurring in between two adjacent dental objects or two adjacent teeth in the set of teeth. To this end, the orientation (such as position and direction) for positioning 2D internal feature images for the interproximal areas may be obtained based on a smooth interpolation between two object poses defined by a corresponding pair of interproximal images from the positioned 2D internal feature images. For example, each of the two object poses for the interproximal area corresponding to the dental object includes an affine matrix.
[0140] In an example, affine matrices may represent various types of geometric transformations that preserve parallel lines and ratios of distances. In other words, for changing the shape, size, or position of the template object model based on the dental object and positioning the 2D internal feature images, the affine matrices corresponding to the inner geometry of the dental object may be changed. In this manner, the template object model may be transformed based on the dental object, and the 2D internal feature images may be positioned onto the orientation defined by the poses of the inner geometry. The affine matrices may be used to transform the template object model while keeping straight lines straight and keeping the proportions of shapes the same. For example, the affine matrix may include a set of numbers organized in a grid, where the numbers in the matrix enable scaling and/or translation of the images.
[0141] Affine matrices combine linear transformations with translations, and they include information relating to translations, rotations, scaling, shearing, and combinations thereof. Affine matrices are used to represent and apply these transformations to points or vectors in the 3D model 118, particularly on the positioned 2D internal feature images. For example, an affine matrix for a pose or orientation of the 2D inner geometry indicated by a 2D internal feature image may combine linear transformations and translations for the 2D internal feature image. In an example, an affine matrix $E$ with a rigid motion $R \in \mathbb{R}^{3 \times 3}$ and translation $s \in \mathbb{R}^{3}$ may be defined in homogeneous form as:

$$E = \begin{bmatrix} R & s \\ 0 & 1 \end{bmatrix}$$
[0142] In an example, for two affine matrices ($E_1$ and $E_2$) corresponding to a pair of interproximal images, the smooth interpolation may be performed as a linear interpolation of the translation components. The linear interpolation may be defined as:

$$s(t) = (1 - t) \cdot s_1 + t \cdot s_2$$
[0143] In certain cases, smooth interpolation may also be performed using Spherical Linear Interpolation (SLERP) for quaternion representations of the rotations $R_1$, $R_2$ of the two affine matrices $E_1$ and $E_2$.
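A minimal sketch of the described pose blending, assuming 4x4 affine matrices with the rigid-motion structure above; SciPy's Rotation/Slerp utilities stand in for whichever quaternion implementation is actually used:

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_pose(E1: np.ndarray, E2: np.ndarray, t: float) -> np.ndarray:
    """Blend two 4x4 affine poses: linear interpolation of translations and
    SLERP of rotations, as used for interproximal intermediate poses."""
    # Translation: s(t) = (1 - t) * s1 + t * s2
    s = (1.0 - t) * E1[:3, 3] + t * E2[:3, 3]
    # Rotation: spherical linear interpolation on the quaternion sphere.
    key_rots = Rotation.from_matrix(np.stack([E1[:3, :3], E2[:3, :3]]))
    R_t = Slerp([0.0, 1.0], key_rots)(t).as_matrix()
    E_t = np.eye(4)
    E_t[:3, :3] = R_t
    E_t[:3, 3] = s
    return E_t

# Halfway pose between two interproximal object poses.
E_mid = interpolate_pose(np.eye(4), np.eye(4), t=0.5)
```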
[0144] Referring to FIG. 6B and FIG. 6C, schematic diagrams of overlaying one or more object poses onto the 3D model 118 are illustrated, in accordance with an example embodiment. In this regard, the 3D coordinate system (depicted as 3D coordinate systems 610A, 610B, 610C, 610D, and collectively referred to as 3D coordinate systems 610) corresponding to a template tooth model for each tooth from the set of teeth in the 3D model 118 may be positioned onto a position of the tooth in the 3D model 118. In an example, for a tooth, a template tooth model transformed to indicate features of the tooth may also be positioned at a position of the tooth in the 3D model 118. Further, the 3D coordinate system of the tooth is correlated with the viewing or camera orientation of the 2D internal feature images of the tooth. Subsequently, the 3D coordinate systems may be positioned within the 3D model 118 to provide orientation information defined by the poses. This orientation information is used to overlay images onto the 3D model.
[0145] In FIG. 6B, alignment of 3D coordinate systems corresponding to template object models of the set of teeth is shown. In FIG. 6C, alignment of 3D coordinate systems corresponding to template object models of the set of teeth with the 3D model 118 is shown.
[0146] Thereafter, at 606, the processors 106 are configured to overlay the 2D internal feature images of the dental object on the 3D model 118. In an example, the processors 106 are configured to position the 2D internal feature images of the dental object in one or more orientations defined by the one or more object poses of the inner geometry. The object poses (or position and orientation information) for the inner geometry of the dental object, particularly the inner geometry of the template object model 512, may define orientations or directions based on the viewing orientations from which the 2D internal feature images are captured. Based on the defined orientations, the 2D internal feature images of the dental object may be positioned. To this end, by positioning and orienting the 2D internal feature images of the dental object on the 3D model 118 based on the identified orientations of the inner geometry, the 2D internal feature images are correctly positioned and oriented in relation to the virtual 3D model 118. Moreover, this allows seamless integration of the plurality of 2D internal feature images into the 3D model 118.
[0147] Moreover, the 2D internal feature images (or interproximal images) corresponding to the interproximal area relating to the dental object may be positioned based on the intermediate poses determined based on the interpolation of two object poses corresponding to two adjacent dental objects or two adjacent teeth.
[0148] In an example, if the 2D internal feature images are merely positioned onto a position corresponding to the dental object in the 3D model 118, then the alignment of the images may not be correct from different angles for viewing the 3D model 118. By positioning the 2D internal feature images in the orientation identified by the poses and/or intermediate poses of the transformed template object model, the positioning of the 2D internal feature images is more accurate. Further, such correlation-based positioning makes the overlay 120 of the 2D internal feature images onto the 3D model insensitive to changes in the viewing angle of the overlay 120.
[0149] In this manner, the 2D internal feature images may be positioned and aligned with the 3D model 118 for a particular viewing angle of the 3D model 118 or various viewing angles of the 3D model 118. Subsequently, 2D image information (such as 2D inner composition of the dental object) may be projected onto the 3D model 118. In an example, the 2D internal feature images may be placed and aligned onto the 3D model 118 from a viewpoint or perspective of an actual surface of the dental object, i.e., main tooth surface.
[0150] In certain cases, the processors 106 are configured to join the positioned 2D internal feature images of the dental object. In an example, the overlay 120 may be visualized by stitching or joining the positioned 2D internal feature images or hyperspectral images covering the same or different parts of the dental object for a particular viewing orientation. Each of the 2D internal feature images may cover at least some part of the dental object. The joined or stitched 2D internal feature images may form one or more 2D inner geometry panorama images (referred to as 2D panorama images, hereinafter) for the dental object based on the corresponding viewing orientation. In an example, the 2D panorama images may then be overlaid, placed and aligned onto the 3D model 118 based on the corresponding viewing orientation.
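For illustration, a sketch of such a joining step using OpenCV's generic scan stitcher as a stand-in for the described stitching of positioned images; the file names are hypothetical and the images are assumed to overlap sufficiently for feature matching:

```python
import cv2

# Hypothetical positioned 2D internal feature images that cover overlapping
# parts of a tooth for one viewing orientation.
paths = ["ir_view_01.png", "ir_view_02.png", "ir_view_03.png"]
images = [cv2.imread(p) for p in paths]

stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)  # SCANS mode suits flat overlap
status, panorama = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    cv2.imwrite("inner_geometry_panorama.png", panorama)
```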
[0151] FIG. 7A illustrates a method flowchart 700 for overlaying 2D internal feature images with the 3D model 118 for the dental object, in accordance with an example embodiment. To this end, the overlay 120 of the 2D internal feature images onto the 3D model 118 is performed based on the correlation of the 2D inner geometry features with the 3D model 118. FIG. 7A is explained in conjunction with elements of FIG. 1A, FIG. 1B, FIG. 1C, FIG. 1D, FIG. 2, FIG. 3A, FIG. 3B, FIG. 3C, FIG. 3D, FIG. 4A, FIG. 4B, FIG. 5A, FIG. 5B, FIG. 6A, FIG. 6B and FIG. 6C.
[0152] At 702, the processors 106 are configured to generate an offset surface of the 3D model 118 of the dental object. In an example, the offset surface is a mesh. The offset surface may be a grid or mesh of control points that are used to map pixels of an image to their corresponding positions in the 3D model 118 to generate the overlay 120. Each control point has an associated offset that defines how much the pixel at that control point should be moved or adjusted.
[0153] The offset mesh may correspond to the 3D model 118. The offset mesh may be generated over visible light images. The offset mesh may include a grid of points or a set of key points that are strategically placed based on a type of distortion or transformation needed. In an example, for a control point in the grid or the offset surface, an offset vector may be calculated. This offset vector may specify how much the pixel at that control point should be shifted in both the horizontal (X) and vertical (Y) directions. The offset values can be positive or negative, indicating the direction of the displacement. The visible light images or the visible light information may be warped based on the offset values assigned to each control point. This may include moving each pixel in the visible light images according to its associated offset vector. Pursuant to the present example, the offset mesh may be used for image stitching and/or image registration of the visible light images. In an example, the offset surface may have a low resolution, and the offset surface may be configured to have a constant distance field from the tooth or the dental object.
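A minimal sketch of warping an image with such a control-point offset grid, assuming the coarse grid is bilinearly upsampled to a dense per-pixel offset field; function and variable names are hypothetical:

```python
import numpy as np
from scipy.ndimage import map_coordinates, zoom

def warp_with_offset_grid(image: np.ndarray, grid_dx: np.ndarray,
                          grid_dy: np.ndarray) -> np.ndarray:
    """Warp an image using a coarse grid of control-point offsets.

    grid_dx/grid_dy hold the horizontal/vertical displacement at each
    control point; they are upsampled to per-pixel offsets before sampling.
    """
    h, w = image.shape
    # Upsample the coarse control grid to a dense per-pixel offset field.
    dx = zoom(grid_dx, (h / grid_dx.shape[0], w / grid_dx.shape[1]), order=1)
    dy = zoom(grid_dy, (h / grid_dy.shape[0], w / grid_dy.shape[1]), order=1)
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Sample each output pixel from its displaced source location.
    return map_coordinates(image, [ys + dy, xs + dx], order=1, mode="nearest")

# Example: an 8x8 control grid warping a 256x256 visible-light frame.
frame = np.random.rand(256, 256)
dx = np.random.uniform(-2, 2, (8, 8))  # horizontal offsets (positive/negative)
dy = np.random.uniform(-2, 2, (8, 8))  # vertical offsets
warped = warp_with_offset_grid(frame, dx, dy)
```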
[0154] Referring to FIG. 7B and FIG. 7C, there is shown a schematic diagram of an example offset surface 710, in accordance with an example embodiment. The offset surface 710 may include 3D information relating to different dental objects, i.e., different teeth of the subject. In
particular, the offset surface 710 may include a mesh or a grid for associating values of pixels with the mesh of the offset surface 710.
[0155] As shown in FIG. 7B, the visible light information and/or the visible light images may be represented using signed distance fields to create the offset surface 710. For example, the signed distance fields of the visible light information may include a data structure used to store information about the distance of a point in the offset surface 710 to its nearest surface or boundary of the dental object represented by the offset surface 710.
[0156] Further, FIG. 7C shows a low-resolution offset surface 710. The low-resolution offset surface may be generated using techniques such as marching cubes. For example, the low resolution of the offset surface 710 may smoothen the surface of the offset surface 710. Due to this, reference frames 712 or normal lines generated from the offset surface 710 may be consistent.
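For illustration, a sketch of extracting such a low-resolution offset surface from a signed distance volume with marching cubes; the synthetic sphere distance field and the offset value are hypothetical stand-ins for the tooth's signed distance field:

```python
import numpy as np
from skimage import measure

# Synthetic signed distance volume on a coarse grid: the distance to a
# sphere stands in for the distance to the tooth surface.
n = 48
coords = np.linspace(-2.0, 2.0, n)
x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 1.0  # zero level set = tooth surface

offset_distance = 0.5  # constant distance field offset from the surface

# Marching cubes on the shifted level set yields the low-resolution offset
# mesh; the coarse grid smooths the surface, so the vertex normals (and the
# reference frames derived from them) come out consistent.
verts, faces, normals, _ = measure.marching_cubes(sdf, level=offset_distance)
```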
[0157] Returning to FIG. 7A, at 704, the processors 106 are configured to overlay the 2D internal feature images of the dental object on the offset surface based on the one or more orientations. The one or more orientations may be defined by the object poses for the inner geometry of the dental object. Moreover, orientations for interproximal area(s) may be defined by intermediate poses of the dental object. Further, the 2D internal feature images of the dental object may be overlaid on the offset surface 710 or the 3D model 118 based on an intersection between a point on the offset surface 710 and a ray extending in a first direction from the dental object towards the offset surface 710.
[0158] In an example, the interpolated intermediate poses and the object poses for a dental object, or a tooth, may be overlaid on a position corresponding to the tooth in the offset surface 710. Further, overlaying the intermediate poses and the object poses may indicate various orientations for the tooth. For example, the orientations, positions, and the poses of the offset surface 710 may be used to project or place the 2D internal feature images. Moreover, the orientations for the tooth may also be used to select an optimal view of the dental object. To this end, the 2D internal feature images overlaid on the 3D model 118 or the offset surface 710 may be rendered from the selected optimal view or viewing angle.
[0159] In an example, the projection or placement of the 2D internal feature images on the offset surface 710 or the 3D model 118 may be performed by generating a ray from the tooth in the first direction or upwards direction. The ray may extend from the surface of the
tooth to a point on the offset mesh 710. Subsequently, at an intersection of the ray and the point of the offset surface 710, one or more of the 2D internal feature images may be placed, positioned, or overlaid.
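A minimal sketch of the described ray-to-offset-surface intersection, using trimesh's ray casting as a stand-in; the unit sphere, ray origin and direction are hypothetical:

```python
import numpy as np
import trimesh

# A unit sphere stands in for the offset surface mesh around one tooth.
offset_mesh = trimesh.creation.icosphere(subdivisions=3, radius=1.0)

# Cast a ray in the first (upwards) direction from a point on the tooth
# surface towards the offset surface.
ray_origin = np.array([[0.0, 0.0, 0.2]])
ray_direction = np.array([[0.0, 0.0, 1.0]])

locations, index_ray, index_tri = offset_mesh.ray.intersects_location(
    ray_origins=ray_origin, ray_directions=ray_direction)
if len(locations) > 0:
    anchor = locations[0]  # placement point for the 2D internal feature image
```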
[0160] FIG. 7D, FIG. 7E and FIG. 7F illustrate a schematic diagram for overlaying the 2D internal feature images onto the offset surface 710, in accordance with an example embodiment.
[0161] In FIG. 7D, the offset surface 710 is mapped with 3D coordinate systems (depicted as 714A, 714B and 714C, and collectively referred to as 3D coordinate systems 714, hereinafter) for template tooth models of each tooth based on corresponding positions of the tooth in the offset surface 710. The 3D coordinate systems may indicate various orientations or directions for overlaying images onto the offset surface 710.
[0162] In FIG. 7E, a first layer of images (depicted as 716A, 716B and 716C, and collectively referred to as first layer of images 716, hereinafter) from the 2D internal feature images is positioned or placed on the offset surface 710. In an example, the first layer of images 716 may be placed based on an intersection of a point of the offset surface 710 and a ray extending from the tooth surface. To this end, the first layer of images 716 may correspond to the innermost layer, for example, corresponding to a side opposite to the side from which the ray is extending towards the offset surface 710. Further, a first layer image for a tooth may be positioned in the position of the offset surface 710 corresponding to the tooth based on orientations defined by the 3D coordinate system associated with the tooth or the template tooth model of the tooth.
[0163] In FIG. 7F, multiple layers of images (depicted as 718A, 718B and 718C, and collectively referred to as multiple layers of images 718, hereinafter) from the 2D internal feature images are positioned or placed on the offset surface 710. In an example, the multiple layers of images 718 may be placed based on an intersection of a point of the offset surface 710 and a ray extending from the tooth surface. To this end, the multiple layers of images 718 may be projected or overlaid on the offset surface 710 to create the overlay 120. For example, the 2D internal feature images may be projected on the offset mesh 710 or the 3D model 118 based on correlations between orientations defined by one or more object poses and/or intermediate poses and the viewing orientation and position of the 2D internal feature images.
[0164] Returning to FIG. 7A, at 706, the processors 106 are configured to cause to display the overlay 120 of the 2D internal feature images of the dental object with the offset surface 710. In an example, the overlay may be rendered on a display device, such as a computing device, a smartphone, a monitor, etc. In particular, the 2D internal feature images are not part of the 3D offset surface 710 or surface model 118. The image projection of the 2D internal feature images may be performed by mapping and aligning images to the one or more object poses and/or the one or more intermediate poses.
[0165] In certain cases, the viewing angle of the displayed overlay 120 may be changed, for example, by a user or viewer to examine other parts of the teeth of the subject. In such a case, the processors 106 are configured to update pose matrices of each of the one or more object poses and the one or more intermediate poses for the dental object. For example, to update the pose matrices, the processors 106 may be configured to map the current overlay to the updated viewing angle for viewing the overlay 120. Further, the processors 106 may be configured to generate an updated overlay of the correlated 2D inner geometry features of the dental object on the 3D model or the offset surface 710 for the updated viewing angle. For example, based on the updated viewing angle, the 2D internal feature images are repositioned in the offset mesh 710 or the 3D model 118.
[0166] FIG. 8A, FIG. 8B, FIG. 8C, FIG. 8D, FIG. 8E, FIG. 8F and FIG. 8G illustrate a manner of merging different spectrums of light for generating the 2D internal feature images and/or specific colour images. These 2D internal feature images and/or specific colour images may be used to enhance the quality of the captured images.
[0167] According to FIG. 8A, a 2D internal feature image 802 may be generated using the NIR wavelength, the fluorescence (Fluo) light wavelength and the white light wavelength. In particular, excited green fluorescence light information may be subtracted from the NIR wavelength information, i.e., NIR(G) − Fluo(G). Moreover, the signal contrast may be readjusted after the subtraction. Further, the white light wavelength information may be subtracted from the output of the previous subtraction. In an example, object edges or edges of a tooth may be removed to decrease the signal from the tooth margin.
[0168] In certain cases, the NIR wavelength may be captured with the background in blue. In such a case, features of the NIR information and the white light information overlaid in the blue may output images having NIR highlighted with white edges.
[0169] According to FIG. 8B, a 2D internal feature image 804 may be generated using the NIR wavelength, the fluorescence light wavelength and the white light wavelength. In particular, the 2D internal feature image 804 may be obtained based on (NIR(G) − White(G)) + 0.3 · (NIR(G) − Fluo(G)). In an example, the signal contrast may be re-adjusted before the second-order term, i.e., 0.3 · (NIR(G) − Fluo(G)). Moreover, the object edges or edges of the tooth may be removed to decrease the signal from the tooth margin.
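For illustration, a sketch of the channel arithmetic of FIG. 8A and FIG. 8B, assuming co-registered green channels of the NIR, fluorescence and white-light frames as floating-point arrays in [0, 1]; the rescaling function and placeholder data are hypothetical:

```python
import numpy as np

def rescale_contrast(x: np.ndarray) -> np.ndarray:
    """Stretch a single channel to the full [0, 1] range."""
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo) if hi > lo else np.zeros_like(x)

# Placeholder green channels of aligned NIR, fluorescence and white frames.
rng = np.random.default_rng(0)
nir_g, fluo_g, white_g = (rng.random((256, 256)) for _ in range(3))

# FIG. 8A-style image: subtract fluorescence, readjust contrast, then
# subtract the white-light channel.
image_802 = rescale_contrast(nir_g - fluo_g) - white_g

# FIG. 8B-style image: (NIR(G) - White(G)) + 0.3 * (NIR(G) - Fluo(G)),
# with the second-order term contrast-adjusted before the sum.
image_804 = (nir_g - white_g) + 0.3 * rescale_contrast(nir_g - fluo_g)
```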
[0170] According to FIG. 8C, a 2D internal feature image 806 may be a composite image. In such a case, the red channel of the image 806 may be generated using hue from the fluorescence light, the green channel may be generated based on NIR(G) − Fluo(G) − White(G), and the blue channel provides NIR features.
[0171] According to FIG. 8D, a 2D internal feature image 808 may be composed by applying all pixels corresponding to visible light having (R>5 or G>5 or B>5) to the white light image.
[0172] According to FIG. 8E, a 2D internal feature image 810 may be composed by applying all pixels corresponding to visible light having (R>5 and G>5 and B>5) to the white light image.
[0173] According to FIG. 8F and FIG. 8G, a 2D internal feature image may be a composite image. In such a case, as shown in FIG. 8F, an image 812 may be generated using a white-light image in purple/blue as the background. Further, red hue from the fluorescence light is overlaid as overlaid signal 1 over the background. Thereafter, as shown in FIG. 8G, an image 814 is generated in which all pixels with (R>5 and G>5 and B>5) from the composite image may be overlaid as overlaid signal 2 over overlaid signal 1.
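A minimal sketch of the threshold-mask compositing of FIG. 8D and FIG. 8E, assuming 8-bit RGB frames; the placeholder data is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
visible = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)  # R, G, B
white = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)

r, g, b = visible[..., 0], visible[..., 1], visible[..., 2]

# FIG. 8D-style: copy visible-light pixels where any channel exceeds the
# threshold (R>5 or G>5 or B>5) onto the white-light image.
mask_any = (r > 5) | (g > 5) | (b > 5)
composite_any = white.copy()
composite_any[mask_any] = visible[mask_any]

# FIG. 8E-style: stricter mask requiring all channels above the threshold.
mask_all = (r > 5) & (g > 5) & (b > 5)
composite_all = white.copy()
composite_all[mask_all] = visible[mask_all]
```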
[0174] To this end, different colour combinations can be made from white light and blue light. Further, colour channels or filters corresponding to the different colours may be applied to NIR background images to generate 2D internal feature images.
[0175] FIG. 9 illustrates a block diagram 900 of the handheld intraoral scanner 104, in accordance with an example embodiment. FIG. 9 is explained in conjunction with elements of FIG. 1-8. The scanner 104 may include at least one processing unit (hereinafter, also referred to as “processing unit 902”), a memory unit 904, a web server 906, a monitoring unit 908, a
temporary storage unit 910, a scanning feedback unit 912, an input/output (I/O) unit 914, and a communication interface 916.
[0176] The processing unit 902 may be embodied in a number of different ways. For example, the processing unit 902 may be embodied as one or more of various hardware processing means such as a coprocessor, a microprocessor, a controller, a digital signal processor (DSP), a processing element with or without an accompanying DSP, or various other processing circuitry including integrated circuits such as, for example, an ASIC (application specific integrated circuit), an FPGA (field programmable gate array), a microcontroller unit (MCU), a hardware accelerator, a special-purpose computer chip, or the like. In an embodiment, the processing unit 902 may be embodied as a high-performance microprocessor having a series of Systems on Chip (SoCs) which include relatively powerful and power-efficient Graphics Processing Units (GPUs) and Central Processing Units (CPUs) in a small form factor. As such, in some embodiments, the processing unit 902 may include one or more processing cores configured to perform independently. A multi-core processor may enable multiprocessing within a single physical package. Additionally, or alternatively, the processing unit 902 may include one or more processors configured in tandem via the bus to enable independent execution of instructions, pipelining and/or multithreading.
[0177] In some embodiments, the processing unit 902 may be configured to detect NIR and visible light, using the I/O unit 914, during the scanning session of teeth of a subject, such as a patient requiring a dental treatment. The detected NIR and visible light may be used to generate the plurality of 2D images 110, such as the plurality of 2D IR images 112 and the visible light images 114. The plurality of 2D images 110 may include images of the dental object 124 or teeth of the subject from various angles or viewing points. For example, the processing unit 902 may be configured to generate 3D surface information for the teeth based on the visible light images 114.
[0178] In an example embodiment, the processing unit 902 may be in communication with the memory unit 904 via a bus for passing information among components of the scanner 104.
[0179] The memory unit 904 may be non-transitory and may include, for example, one or more volatile and/or non-volatile memories. In other words, for example, the memory unit 904 may be an electronic storage device (for example, a computer readable storage medium) comprising gates configured to store data (for example, bits) that may be retrievable by a
machine (for example, a computing device like the processing unit 902). The memory unit 904 may be configured to store information, data, content, applications, instructions, or the like, for enabling the apparatus to carry out various functions in accordance with an example embodiment of the present disclosure. For example, the memory unit 904 may be configured to store the detected IR and the detected visible light after the scanning session of the teeth is finished. The detected IR and the visible light after the scanning session may be stored as IR information and visible light information, respectively. In certain cases, the memory unit 904 may be configured to store compressed IR information and visible light information. In some embodiments, the memory unit 904 may be configured to store calibration data required to measure the detected IR and the visible light to generate the IR information, the visible light information, the white light images, IR images and/or the plurality of 2D internal feature images. As exemplarily illustrated in FIG. 9, the memory unit 904 may be configured to store instructions for execution by the processing unit 902. As such, whether configured by hardware or software methods, or by a combination thereof, the processing unit 902 may represent an entity (for example, physically embodied in circuitry) capable of performing operations according to an embodiment of the present disclosure while configured accordingly. Thus, for example, when the processing unit 902 is embodied as the microprocessor, the processing unit 902 may be specifically configured hardware for conducting the operations described herein. Alternatively, as another example, when the processing unit 902 is embodied as an executor of software instructions, the instructions may specifically configure the processing unit 902 to perform the algorithms and/or operations described herein when the instructions are executed. The processing unit 902 may include, among other things, a clock, an arithmetic logic unit (ALU) and logic gates configured to support operation of the processing unit 902.
[0180] The web server 906 may be a software, a hardware, or a combination thereof that may be configured to store and provide data to a web browser associated with the processors 106. For example, the visible light information and the IR information may be provided to the web browser of the processors 106 via the web server 906. As the web server 906 may be accessed by any web browser, the need for installation of additional software by the processors 106 to connect to the web server 906 may be eliminated. The web server 906 may communicate to one of the communication channels 108 via a web network. In an example, the web server 906 and the processors 106 may communicate to a common wireless full-duplex communication channel via the web network for transmission and reception of the visible light information and the IR information. The web server 906 and the web browser may communicate via, for
example, Hypertext Transfer Protocol (HTTP), Simple Mail Transfer Protocol (SMTP), or File Transfer Protocol (FTP). Once the web server 906 and the web browser are connected, the web server 906 may provide a web application on the web browser.
[0181] The monitoring unit 908 may be a software, a hardware, or a combination thereof that may be configured to monitor a bandwidth of one of the communication channels 108 (such as the wireless full-duplex communication channel) via which the scanner 104 and the processors 106 may be connected. Moreover, the monitoring unit 908 may be configured to monitor a connection of one of the communication channels 108 via which the scanner 104 and the processors 106 may be connected.
[0182] In an embodiment, if the monitoring unit 908 determines that the bandwidth of the communication channels 108 is below a minimum bandwidth, the monitoring unit 908 may provide such information to the processing unit 902. The processing unit 902 may downsample the visible light information and the IR information, based on the received information. In another embodiment, if the monitoring unit 908 determines that the bandwidth of the communication channels 108 is below the minimum bandwidth for longer than a maximum period, the monitoring unit 908 may provide such information to the processing unit 902. The processing unit 902 may compress and store the visible light information and the NIR information into the memory unit 904. In some embodiments, if the monitoring unit 908 determines that the connection between the scanner 104 and the processors 106 is lost, the monitoring unit 908 may provide such information to the processing unit 902. In such a case, the processing unit 902 may compress and store the visible light information and the IR information into the memory unit 904.
[0183] The temporary storage unit 910 may be a software, a hardware, or a combination thereof that may be configured to store the visible light information and the IR information when the bandwidth of the communication channels 108 (such as the wireless full-duplex communication channel) is determined to be below the minimum bandwidth. The temporary storage unit 910 may further transmit the stored visible light information and IR information to the processors 106 when the bandwidth is determined to be above or equal to the minimum bandwidth. Examples of the temporary storage unit 910 may include, but may not be limited to, a random-access memory (RAM), or a cache memory.
[0184] The scanning feedback unit 912 may be a software, a hardware, or a combination thereof that may be configured to receive status input from the monitoring unit 908. Based on the received status input, the scanning feedback unit 912 may provide a scanning feedback signal to the user, such as a dentist, of the handheld intraoral scanner 104. In an embodiment, the scanning feedback signal is used to provide guidance to the user regarding an area of the teeth where a scanning quality of the scanning session is low and adequate visible light information and/or NIR information of the teeth is not received. For example, the scanning feedback unit 912 may provide the scanning feedback signal as, for example, an acoustic feedback signal, a haptic feedback, or a visual feedback.
[0185] The I/O unit 914 may include circuitry and/or software that may be configured to provide output to the user of the handheld intraoral scanning device 104 and receive, measure or sense input information. The I/O unit 914 may include a speaker 914A, a vibrator 914B, a projector unit 914C, and one or more sensors 914D. In an embodiment, the speaker 914A may be configured to output the acoustic feedback signal to guide the user. The vibrator 914B may be, for example, a transducer configured to convert the scanning feedback signal, which may be an electrical signal, into a mechanical output, such as the haptic feedback in the form of vibrations to guide the user.
[0186] As may be understood, the scanner 104 may be configured to detect the IR and the visible light that may be reflected from the teeth of the subject. In this regard, the projector unit 914C may be configured to output one or more visible or white coloured wavelength pulses, and one or more IR wavelength pulses. For example, the visible wavelength pulses and the IR wavelength pulses may be cast onto the dental object 124 or the teeth to illuminate the teeth of the subject, such as a patient. Further, the visible light wavelength pulses and the IR wavelength pulses may be reflected or refracted from the surface and/or the inner region of the teeth. The one or more sensors 914D may be configured to detect visible coloured wavelength pulses and IR wavelength pulses that may be reflected and/or refracted from the surface or inner region of the teeth. In an example, the one or more sensors 914D may include one or more image sensors, such as cameras. For example, the image sensors may be configured to generate the visible light images 114 and the IR images 112 based on the illumination of the teeth using the IR and the visible light.
[0187] The communication interface 916 may comprise input interface and output interface for supporting communications to and from the handheld intraoral scanner 104. The
communication interface 916 may be a device or circuitry embodied in either hardware or a combination of hardware and software that is configured to receive and/or transmit data to/from the scanner 104. In this regard, the communication interface 916 may include, for example, an antenna (or multiple antennae) and supporting hardware and/or software for enabling communications with a wireless communication network. Additionally, or alternatively, the communication interface 916 may include the circuitry for interacting with the antenna(s) to cause transmission of signals via the antenna(s) or to handle receipt of signals received via the antenna(s). In some environments, the communication interface 916 may alternatively or additionally support wired communication. As such, for example, the communication interface 916 may include a communication modem and/or other hardware and/or software for supporting communication via cable, digital subscriber line (DSL), universal serial bus (USB) or other mechanisms.
[0188] FIG. 10 illustrates a pre-processing step for visible light images and IR images 1006, in accordance with an example. In an example, the visible light images may include white light images 1002 and blue light images 1004. As described above, the blue light images 1004 may be used to generate green and/or red excited fluorescence light information or images. Moreover, in an example, the IR images 1006 may be captured using near-infrared wavelength pulses.
[0189] For example, the pre-processing step for the white light images 1002, the blue light images 1004 and the IR images 1006 may include contrast adjustment. In an example, the contrast adjustment of the white light images 1002 may include red (R) light contrast adjustment 1008 A, green (G) light contrast adjustment 1008B, and blue (B) light contrast adjustment 1008C. Further, the contrast adjustment of the blue light images 1004 may include red light contrast adjustment 1010A, green light contrast adjustment 1010B, and red green (RG) light contrast adjustment 1010C. In addition, the contrast adjustment of the IR light images 1006 may include red light contrast adjustment 1012A, green light contrast adjustment 1012B, and blue light contrast adjustment 1012C. Further, the 2D internal feature images may be generated using the pre-processed white light images 1002, blue light images (or fluorescence red and/or green light images) 1004, and IR images 1006.
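For illustration, a sketch of a per-channel contrast adjustment of the kind shown in FIG. 10, assuming a percentile-based stretch; the percentile values and placeholder frame are hypothetical, as the exact adjustment used is not specified:

```python
import numpy as np

def adjust_channel_contrast(image: np.ndarray, channel: int,
                            low_pct: float = 2, high_pct: float = 98) -> np.ndarray:
    """Percentile-based contrast stretch of one colour channel, standing in
    for the per-channel contrast adjustments applied before image generation."""
    out = image.astype(float).copy()
    ch = out[..., channel]
    lo, hi = np.percentile(ch, [low_pct, high_pct])
    out[..., channel] = np.clip((ch - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    return out

# Example: adjust the R, G and B channels of a white-light frame in turn.
rng = np.random.default_rng(0)
white_frame = rng.random((256, 256, 3))  # placeholder frame, values in [0, 1]
for c in range(3):
    white_frame = adjust_channel_contrast(white_frame, c)
```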
[0190] Many modifications and other embodiments of the disclosure set forth herein will come to mind of one skilled in the art to which these disclosures pertain having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore,
it is to be understood that the disclosures are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Moreover, although the foregoing descriptions and the associated drawings describe example embodiments in the context of certain example combinations of elements and/or functions, it should be appreciated that different combinations of elements and/or functions may be provided by alternative embodiments without departing from the scope of the appended claims. In this regard, for example, different combinations of elements and/or functions than those explicitly described above are also contemplated as may be set forth in some of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.
Claims
1. An intraoral scanning system (102) configured to generate an overlay (120) of correlated 2D inner geometry features on a 3D surface model (118) for one or more dental objects (146), the intraoral scanning system comprising: a hand-held intraoral scanner (104) configured to operate with one or more sensors to detect infrared (IR) and visible light, wherein the one or more sensors comprises an image sensor; one or more processors (106) operably connected to the hand-held intraoral scanner, the one or more processors configured to: receive visible light information (128) and IR information (130) from the one or more sensors; generate a three-dimensional (3D) surface model (118) of a dental object (146) from the one or more dental objects based on the visible light information; generate a plurality of two-dimensional (2D) internal feature images (302, 306, 310, 314) based on the visible light information and the IR information, wherein the plurality of 2D internal feature images indicate 2D inner geometry features for the dental object; process the plurality of 2D internal feature images to correlate the 2D inner geometry features of the dental object with at least one reference frame (712) of the dental object in the 3D surface model; and output an overlay (120) of the correlated 2D inner geometry features of the dental object on the 3D surface model.
2. The intraoral scanning system (102) according to claim 1, wherein the at least one reference frame (712) of the dental object (146) is perpendicular to two or more planes (148, 150, 152) of the dental object in the 3D surface model (118).
3. The intraoral scanning system (102) according to claim 2, wherein each of the two or more planes (148, 150, 152) of the dental object (146) in the 3D surface model (118) comprises at least a first plane and a second plane, and wherein the first plane and the second plane are aligned with a buccal-lingual plane and a mesial-distal plane, respectively.
4. The intraoral scanning system (102) according to claims 2 or 3, wherein the two or more planes (148, 150, 152) of the dental object (146) includes at least one of: an occlusal plane (518), a buccal plane, a lingual plane, a mesial plane, a distal plane, or a labial plane.
5. The intraoral scanning system (102) according to any of previous claims, wherein to correlate the 2D inner geometry features of the dental object (146) with at least one reference frame (712) of the dental object in the 3D surface model (118), the one or more processors are configured to: align (502) a template object model (512) with a position of the dental object in the 3D surface model, wherein the template object model includes a 3D coordinate system (514) indicating an orientation of the dental object with the 3D surface model; correlate (504) a position of each of the plurality of 2D internal feature images (302, 306, 310, 314) of the dental object with the position of the dental object in the 3D surface model, such that an order of receiving the visible light information (128) for the 3D surface model of the dental object is within a predefined range from an order of receiving the IR information (130) for the plurality of 2D internal feature images; and correlate (506) a viewing orientation of each of the plurality of 2D internal feature images of the dental object with the 3D coordinate system of the template object model of the dental object.
6. The intraoral scanning system (102) according to claim 5, wherein a coordinate axis of the 3D coordinate system (514) is aligned with an occlusal reference frame (516) of the at least one reference frame (712) of the dental object (146), and wherein the occlusal reference frame is perpendicular to an occlusal plane (518) of the dental object in the 3D surface model (118).
7. The intraoral scanning system (102) according to any of claims 5 or 6, wherein the template object model (512) has a corresponding tooth type, and wherein the tooth type of the template object model is at least one of: a central incisor, a lateral incisor, a canine, a first premolar, a second premolar, a first molar, a second molar, or a third molar.
8. The intraoral scanning system (102) according to any of claims 5 - 7, wherein the one or more processors (106) are configured to:
estimate (602), using a trained machine learning model, one or more object poses of inner geometry for the dental object (146) based on the plurality of 2D internal feature images (302, 306, 310, 314) and the 3D surface model (118), wherein each of the one or more object poses indicate a matrix to transform the template object model (512) into the dental object; determine (604) one or more intermediate poses for an interproximal area relating to the dental object based on a smooth interpolation between two object poses indicated by a pair of interproximal images from the positioned plurality of 2D internal feature images, wherein each of the two object poses for the dental object includes an affine matrix; and overlay (606) the plurality of 2D internal feature images of the dental object on the 3D surface model based on the one or more intermediate poses for the interproximal area relating to the dental object and one or more orientations defined by the one or more object poses of inner geometry.
9. The intraoral scanning system (102) according to claim 8, wherein the one or more processors (106) are configured to: join the positioned plurality of 2D internal feature images (302, 306, 310, 314) of the dental object (146) to generate one or more 2D inner geometry panorama images for the dental object based on the viewing orientation; and overlay the one or more 2D inner geometry panorama images on the 3D surface model (118).
10. The intraoral scanning system (102) according to claim 8, wherein the one or more processors (106) are configured to: generate (702) an offset surface (710) of the 3D surface model (118) of the dental object (146), such that the offset surface is within a constant distance field from the dental object; overlay (704) the plurality of 2D internal feature images (302, 306, 310, 314) of the dental object on the offset surface based on an orientation defined in at least one of: the one or more object poses, or the one or more intermediate poses of the dental object, and an intersection between a point on the offset surface and a ray extending in a first direction from the dental object towards the offset surface; and cause to display (706) the overlay (120) of the plurality of 2D internal feature images of the dental object with the offset surface.
11. The intraoral scanning system (102) according to any of claims 8 - 10, wherein the one or more processors (106) are configured to: update pose matrices of each of the one or more intermediate poses for the dental object (146) to map the overlay (120) with an updated viewing angle; and based on the updated pose matrices, generate an updated overlay of the correlated 2D inner geometry features of the dental object on the 3D surface model (118) for the updated viewing angle.
12. The intraoral scanning system (102) according to any of the previous claims, wherein the one or more processors (106) are configured to: generate (402) an object mask (408) using the visible light information (128) captured from a scanning position, wherein the scanning position corresponds to a position of the one or more sensors; apply (404) the object mask on the plurality of 2D internal feature images (302, 306, 310, 314) to segment at least a portion of the plurality of 2D internal feature images indicating non-object information; and remove (406) the segmented portion of the plurality of 2D internal feature images indicating the non-object information.
13. The intraoral scanning system (102) according to any of the previous claims, wherein the plurality of 2D internal feature images (302, 306, 310, 314) include a 2D composition of inner geometry features for the dental object (146).
14. A method for generating an overlay (120) of correlated 2D inner geometry features on a 3D surface model (118) for one or more dental objects, the method being implemented using an intraoral scanning system (102) comprising a hand-held intraoral scanner (104) configured to operate with one or more sensors to detect infrared (IR) and visible light and one or more processors (106) operably connected to the hand-held intraoral scanner, the method comprising: receiving visible light information (128) and IR information (130) from the one or more sensors; generating (212) a three-dimensional (3D) surface model (118) of a dental object (146) from the one or more dental objects based on the visible light information;
generating a plurality of two-dimensional (2D) internal feature images (302, 306, 310, 314) based on the visible light information and the IR information, wherein the plurality of 2D internal feature images indicate 2D inner geometry features for the dental object; processing the plurality of 2D internal feature images to correlate the 2D inner geometry features of the dental object with at least one reference frame (712) of the dental object in the 3D surface model; and outputting an overlay (120) of the correlated 2D inner geometry features of the dental object on the 3D surface model.
15. A computer programmable product comprising a non-transitory computer readable medium having stored thereon computer executable instructions, which when executed by a processing circuitry, cause the processing circuitry to carry out operations, the operations comprising: receiving visible light information (128) and infrared (IR) information (130) from one or more sensors; generating (212) a three-dimensional (3D) surface model (118) of a dental object (146) based on the visible light information; generating a plurality of two-dimensional (2D) internal feature images (302, 306, 310, 314) based on the visible light information and the IR information, wherein the plurality of 2D internal feature images indicate 2D inner geometry features for the dental object; processing the plurality of 2D internal feature images to correlate the 2D inner geometry features of the dental object with at least one reference frame (712) of the dental object in the 3D surface model; and outputting an overlay (120) of the correlated 2D inner geometry features of the dental object on the 3D surface model.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| DKPA202370526 | 2023-10-11 | | |
| DKPA202370526 | 2023-10-11 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025078622A1 (en) | 2025-04-17 |
Family
ID=93119394
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2024/078723 (pending) WO2025078622A1 (en) | System and method for intraoral scanning | 2023-10-11 | 2024-10-11 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025078622A1 (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2025202065A1 (en) * | 2024-03-27 | 2025-10-02 | 3Shape A/S | An intraoral scanning system for improving composed scan information |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20230240819A1 (en) * | 2016-07-27 | 2023-08-03 | Align Technology, Inc. | Intraoral scanning apparatus |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24790387; Country of ref document: EP; Kind code of ref document: A1 |