
EP4627541A1 - Generation of dental renderings from model data - Google Patents

Generation of dental renderings from model data

Info

Publication number
EP4627541A1
Authority
EP
European Patent Office
Prior art keywords
dental
panoramic
projection
model
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP23836677.7A
Other languages
English (en)
French (fr)
Inventor
Guotu Li
Michael Chang
Christopher Cramer
Michael Austin Brown
Magdalena BLANKENBURG
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Align Technology Inc
Original Assignee
Align Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US18/522,169 external-priority patent/US20240177397A1/en
Application filed by Align Technology Inc filed Critical Align Technology Inc
Publication of EP4627541A1 publication Critical patent/EP4627541A1/de
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/41Medical
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2215/00Indexing scheme for image rendering
    • G06T2215/08Gnomonic or central projection
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/021Flattening

Definitions

  • Embodiments of the present disclosure relate to the field of dentistry and, in particular, to the use of three-dimensional (3D) models from intraoral scans to generate two-dimensional (2D) dental arch renderings.
  • FIG. 2 illustrates a cylindrical modeling approach for generating a 2D projection of a 3D dentition, in accordance with at least one embodiment.
  • FIG. 3 illustrates projection of the 3D dentition onto a cylindrical projection surface, in accordance with at least one embodiment.
  • FIG. 5 is a comparison of an actual X-ray image to an X-ray panoramic simulated image, in accordance with at least one embodiment.
  • FIG. 6B is a workflow illustrating generation of a panoramic projection from a 3D dentition based on the arch curve-following modeling approach, in accordance with at least one embodiment.
  • FIG. 7 illustrates a graphical user interface displaying various renderings of a 3D dentition, in accordance with at least one embodiment.
  • FIG. 8A illustrates a further arch curve-following modeling approach for generating a 2D projection of a 3D dentition, in accordance with at least one embodiment.
  • FIG. 8B illustrates a 2D buccal rendering of the 3D dentition using the arch curve-following modeling approach, in accordance with at least one embodiment.
  • FIG. 9B shows overlays of 2D panoramic renderings onto X-ray images, in accordance with at least one embodiment.
  • Described herein are methods and systems using 3D models of a dental site of a patient (e.g., a dentition) to generate panoramic 2D images of the dental site.
  • the 2D images may be used, for example, for inspecting and evaluating the shapes, positions, and orientations of teeth, as well as for identifying and labeling of dental features.
  • dental features that may be identified and/or labeled include cracks, chips, gum line, worn tooth regions, cavities (also known as caries), emergent profile (e.g., the gum tooth line intersection), an implant gum line, implant edges, scan body edge/curves, margin line of a preparation tooth, and so on.
  • Also described herein are methods and systems for simulating X-ray images (X-ray panoramic simulated images) from panoramic renderings of 3D models. Also described herein are methods and systems for labeling dental features in panoramic 2D images and assigning labels to corresponding dental features in the 3D model from which the panoramic 2D images are derived. Certain embodiments described herein parameterize the rendering process by projecting the 3D model onto various types of projection targets to reduce or minimize geometric distortions. Certain embodiments further relate to projection targets that closely track the contours of the patient’s dental arch. Such embodiments can provide more accurate panoramic renderings with minimal distortion, further helping a dentist conduct visual oral diagnostics and provide patient education.
  • the embodiments described herein provide a framework for panoramic dental arch renderings (both buccal and lingual views). When combined with the occlusal view of the jaw, dental personnel can have a comprehensive overview of the patient’s jaw to facilitate both diagnostics and patient education. Unlike traditional rendering approaches which often require at least seven images (i.e., right-buccal, right-lingual, anterior-buccal, anterior-lingual, left-buccal, left-lingual and occlusal views), the embodiments described herein can reduce the number of renderings used for fully visualizing the patient’s dentition down to three, i.e., buccal panoramic, lingual panoramic, and occlusal. Moreover, the panoramic arch rendering provides for easier image labeling for various image-based oral diagnostic modeling processes.
  • a lab scan or model/impression scan may include one or more images of a dental site or of a model or impression of a dental site, which may or may not include height maps, and which may or may not include color images.
  • FIG. 1 illustrates an exemplary system 100 for performing intraoral scanning and/or generating panoramic 2D images of a dental site, in accordance with at least one embodiment.
  • one or more components of system 100 carries out one or more operations described below with reference to FIGS. 10-12.
  • System 100 includes a dental office 108 and a dental lab 110.
  • the dental office 108 and the dental lab 110 each include a computing device 105, 106, where the computing devices 105, 106 may be connected to one another via a network 180.
  • the network 180 may be a local area network (LAN), a public wide area network (WAN) (e.g., the Internet), a private WAN (e.g., an intranet), or a combination thereof.
  • Computing device 105 may be coupled to an intraoral scanner 150 (also referred to as a scanner) and/or a data store 125.
  • Computing device 106 may also be connected to a data store (not shown).
  • the data stores may be local data stores and/or remote data stores.
  • Computing device 105 and computing device 106 may each include one or more processing devices, memory, secondary storage, one or more input devices (e.g., such as a keyboard, mouse, tablet, and so on), one or more output devices (e.g., a display, a printer, etc.), and/or other hardware components.
  • scanner 150 is wirelessly connected to computing device 105 via a direct wireless connection. In at least one embodiment, scanner 150 is wirelessly connected to computing device 105 via a wireless network. In at least one embodiment, the wireless network is a Wi-Fi network. In at least one embodiment, the wireless network is a Bluetooth network, a Zigbee network, or some other wireless network. In at least one embodiment, the wireless network is a wireless mesh network, examples of which include a Wi-Fi mesh network, a Zigbee mesh network, and so on. In an example, computing device 105 may be physically connected to one or more wireless access points and/or wireless routers (e.g., Wi-Fi access points/routers). Intraoral scanner 150 may include a wireless module such as a Wi-Fi module, and via the wireless module may join the wireless network via the wireless access point/router.
  • scanner 150 includes an inertial measurement unit (IMU).
  • the IMU may include an accelerometer, a gyroscope, a magnetometer, a pressure sensor and/or other sensor.
  • scanner 150 may include one or more micro-electromechanical system (MEMS) IMU.
  • the IMU may generate inertial measurement data (referred to herein as movement data or motion data), including acceleration data, rotation data, and so on.
  • Intraoral scanner 150 may include a probe (e.g., a hand held probe) for optically capturing three-dimensional structures.
  • the intraoral scanner 150 may be used to perform an intraoral scan of a patient’s oral cavity, in which a plurality of intraoral scans (also referred to as intraoral images) are generated.
  • An intraoral scan application 115 running on computing device 105 may communicate with the scanner 150 to effectuate the intraoral scanning process.
  • a result of the intraoral scanning may be intraoral scan data 135A, 135B through 135N that may include one or more sets of intraoral scans or intraoral images.
  • Intraoral scan data 135A-N may optionally include one or more color images (e.g., color 2D images) and/or images generated under particular lighting conditions (e.g., 2D ultraviolet (UV) images, 2D infrared (IR) images, 2D near-IR images, 2D fluorescent images, and so on).
  • the scanner 150 may transmit the intraoral scan data 135A, 135B through 135N to the computing device 105.
  • Computing device 105 may store the intraoral scan data 135A-135N in data store 125.
  • a user may subject a patient to intraoral scanning.
  • the user may apply scanner 150 to one or more patient intraoral locations.
  • the scanning may be divided into one or more segments.
  • the segments may include an upper dental arch segment, a lower dental arch segment, a bite segment, and optionally one or more preparation tooth segments.
  • the segments may include a lower buccal region of the patient, a lower lingual region of the patient, an upper buccal region of the patient, an upper lingual region of the patient, one or more preparation teeth of the patient (e.g., teeth of the patient to which a dental device such as a crown or other dental prosthetic will be applied), one or more teeth which are contacts of preparation teeth (e.g., teeth not themselves subject to a dental device but which are located next to one or more such teeth or which interface with one or more such teeth upon mouth closure), and/or patient bite (e.g., scanning performed with closure of the patient’s mouth with the scan being directed towards an interface area of the patient’s upper and lower teeth).
  • the scanner 150 may provide intraoral scan data 135A-N to computing device 105.
  • the intraoral scan data 135A-N may be provided in the form of intraoral scan data sets, each of which may include 3D point clouds, 2D scans/images and/or 3D scans/images of particular teeth and/or regions of an intraoral site.
  • separate data sets are created for the maxillary arch, for the mandibular arch, for a patient bite, and for each preparation tooth.
  • a single large data set is generated (e.g., for a mandibular and/or maxillary arch).
  • Such scans may be provided from the scanner 150 to the computing device 105 in the form of one or more points (e.g., one or more point clouds).
  • the manner in which the oral cavity of a patient is to be scanned may depend on the procedure to be applied thereto. For example, if an upper or lower denture is to be created, then a full scan of the mandibular or maxillary edentulous arches may be performed. In contrast, if a bridge is to be created, then just a portion of a total arch may be scanned which includes an edentulous region, the neighboring preparation teeth (e.g., abutment teeth) and the opposing arch and dentition. Additionally, the manner in which the oral cavity is to be scanned may depend on a doctor’s scanning preferences and/or patient conditions.
  • orthodontic procedure refers, inter alia, to any procedure involving the oral cavity and directed to the design, manufacture or installation of orthodontic elements at an intraoral site within the oral cavity, or a real or virtual model thereof, or directed to the design and preparation of the intraoral site to receive such orthodontic elements.
  • These elements may be appliances including but not limited to brackets and wires, retainers, clear aligners, or functional appliances.
  • intraoral scan application 115 may register and stitch together two or more intraoral scans (e.g., intraoral scan data 135 A and intraoral scan data 135B) generated thus far from the intraoral scan session.
  • performing registration includes capturing 3D data of various points of a surface in multiple scans, and registering the scans by computing transformations between the scans.
  • One or more 3D surfaces may be generated based on the registered and stitched together intraoral scans during the intraoral scanning. The one or more 3D surfaces may be output to a display so that a doctor or technician can view their scan progress thus far.
  • the one or more 3D surfaces may be updated, and the updated 3D surface(s) may be output to the display.
  • segmentation is performed on the intraoral scans and/or the 3D surface to segment points and/or patches on the intraoral scans and/or 3D surface into one or more classifications.
  • intraoral scan application 115 classifies points as hard tissue or as soft tissue.
  • the 3D surface may then be displayed using the classification information. For example, hard tissue may be displayed using a first visualization (e.g., an opaque visualization) and soft tissue may be displayed using a second visualization (e.g., a transparent or semi-transparent visualization).
  • intraoral scan application 115 may automatically generate a virtual 3D model of one or more scanned dental sites (e.g., of an upper jaw and a lower jaw).
  • the final 3D model may be a set of 3D points and their connections with each other (i.e., a mesh).
  • the final 3D model is a volumetric 3D model that has both surface and internal features.
  • the 3D model is a volumetric model generated as described in International Patent Application Publication No. WO 2019/147984 A1, entitled “Diagnostic Intraoral Scanning and Tracking,” which is hereby incorporated by reference herein in its entirety.
  • intraoral scan application 115 may register and stitch together the intraoral scans generated from the intraoral scan session that are associated with a particular scanning role or segment.
  • the registration performed at this stage may be more accurate than the registration performed during the capturing of the intraoral scans, and may take more time to complete than the registration performed during the capturing of the intraoral scans.
  • performing scan registration includes capturing 3D data of various points of a surface in multiple scans, and registering the scans by computing transformations between the scans.
  • the 3D data may be projected into a 3D space of a 3D model to form a portion of the 3D model.
  • the intraoral scans may be integrated into a common reference frame by applying appropriate transformations to points of each registered scan and projecting each scan into the 3D space.
  • registration is performed for adjacent or overlapping intraoral scans (e.g., each successive frame of an intraoral video).
  • registration is performed using blended scans.
  • Registration algorithms are carried out to register two adjacent or overlapping intraoral scans (e.g., two adjacent blended intraoral scans) and/or to register an intraoral scan with a 3D model, which essentially involves determination of the transformations which align one scan with the other scan and/or with the 3D model.
  • Registration may involve identifying multiple points in each scan (e.g., point clouds) of a scan pair (or of a scan and the 3D model), surface fitting to the points, and using local searches around points to match points of the two scans (or of the scan and the 3D model).
  • intraoral scan application 115 may match points of one scan with the closest points interpolated on the surface of another scan, and iteratively minimize the distance between matched points.
  • Other registration techniques may also be used.
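  • The patent does not prescribe a particular registration algorithm; as a minimal illustration of the iterative point matching and distance minimization described above, the following point-to-point ICP sketch (all function names and defaults are assumptions) aligns two roughly pre-aligned point clouds:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=50, tol=1e-6):
    """Rigidly align `source` (N, 3) to `target` (M, 3).

    Each iteration matches every source point to its nearest target point
    and solves for the rigid transform (Kabsch/SVD) that minimizes the
    distance between matched points. Returns the accumulated R and t.
    """
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    prev_err = np.inf
    for _ in range(iterations):
        dist, idx = tree.query(src)          # nearest-neighbor matching
        matched = target[idx]
        mu_s, mu_t = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        # Guard against reflections when solving for the rotation.
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t                  # apply the incremental update
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dist.mean()
        if abs(prev_err - err) < tol:        # converged
            break
        prev_err = err
    return R_total, t_total
```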
  • intraoral scan application 115 may process intraoral scans (e.g., which may be blended intraoral scans) to determine which intraoral scans (or which portions of intraoral scans) to use for portions of a 3D model (e.g., for portions representing a particular dental site).
  • Intraoral scan application 115 may use data such as geometric data represented in scans and/or time stamps associated with the intraoral scans to select optimal intraoral scans to use for depicting a dental site or a portion of a dental site.
  • images are input into a machine learning model that has been trained to select and/or grade scans of dental sites.
  • one or more scores are assigned to each scan, where each score may be associated with a particular dental site and indicate a quality of a representation of that dental site in the intraoral scans.
  • intraoral scans may be assigned weights based on scores assigned to those scans (e.g., based on proximity in time to a time stamp of one or more selected 2D images). Assigned weights may be associated with different dental sites. In at least one embodiment, a weight may be assigned to each scan (e.g., to each blended scan) for a dental site (or for multiple dental sites). During model generation, conflicting data from multiple intraoral scans may be combined using a weighted average to depict a dental site. The weights that are applied may be those weights that were assigned based on quality scores for the dental site.
  • processing logic may determine that data for a particular overlapping region from a first set of intraoral scans is superior in quality to data for the particular overlapping region of a second set of intraoral scans.
  • the first intraoral scan data set may then be weighted more heavily than the second intraoral scan data set when averaging the differences between the intraoral scan data sets.
  • the first intraoral scans assigned the higher rating may be assigned a weight of 70% and the second intraoral scans may be assigned a weight of 30%.
  • the merged result will look more like the depiction from the first intraoral scan data set and less like the depiction from the second intraoral scan data set.
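  • A minimal sketch of this quality-weighted merging, assuming two co-registered depth/height maps and the 70%/30% weighting from the example (names are illustrative; NaN marks pixels where a scan has no data):

```python
import numpy as np

def merge_scans(depth_a, depth_b, w_a=0.7, w_b=0.3):
    """Blend conflicting, co-registered depth data from two intraoral scan
    sets using quality-based weights. Where only one scan has data (NaN in
    the other), the available value is used directly."""
    a, b = np.asarray(depth_a, float), np.asarray(depth_b, float)
    merged = (w_a * a + w_b * b) / (w_a + w_b)   # weighted average
    merged = np.where(np.isnan(a), b, merged)    # only scan B has data
    merged = np.where(np.isnan(b), a, merged)    # only scan A has data
    return merged
```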
  • images and/or intraoral scans are input into a machine learning model that has been trained to select and/or grade images and/or intraoral scans of dental sites.
  • one or more scores are assigned to each image and/or intraoral scan, where each score may be associated with a particular dental site and indicate a quality of a representation of that dental site in the 2D image and/or intraoral scan.
  • Intraoral scan application 115 may generate one or more 3D surfaces and/or 3D models from intraoral scans, and may display the 3D surfaces and/or 3D models to a user (e.g., a doctor) via a user interface.
  • the 3D surfaces and/or 3D models can then be checked visually by the doctor.
  • the doctor can virtually manipulate the 3D surfaces and/or 3D models via the user interface with respect to up to six degrees of freedom (i.e., translated and/or rotated with respect to one or more of three mutually orthogonal axes) using suitable user controls (hardware and/or virtual) to enable viewing of the 3D model from any desired direction.
  • the doctor may review (e.g., visually inspect) the generated 3D surface and/or 3D model of an intraoral site and determine whether the 3D surface and/or 3D model is acceptable.
  • a 3D model of a dental site (e.g., of a dental arch or a portion of a dental arch including a preparation tooth) is generated, it may be sent to dental modeling logic 116 for review, analysis and/or updating. Additionally, or alternatively, one or more operations associated with review, analysis and/or updating of the 3D model may be performed by intraoral scan application 115.
  • Intraoral scan application 115 and/or dental modeling logic 116 may include modeling logic 118 and/or panoramic 2D image processing logic 119.
  • Modeling logic 118 may include logic for generating projection targets onto which a 3D model may be projected. The modeling logic 118 may import the 3D model data to identify various parameters used for generating the projection targets.
  • Such parameters include, but are not limited to, an arch center (which may serve as a projection center for performing projection transformations), a 3D coordinate axis, tooth locations/centers, and arch dimensions. From these parameters, the modeling logic 118 may be able to determine the positions, sizes, and orientations of various projection targets for positioning around the dental arch represented by the 3D model.
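  • As a minimal sketch (the patent does not prescribe how these parameters are computed), the arch center and rough arch dimensions could be estimated from per-tooth centers as follows; the function name is hypothetical:

```python
import numpy as np

def arch_parameters(tooth_centers):
    """Estimate the arch center (usable as the projection center) and
    rough arch dimensions from per-tooth centers (N, 3) via a simple
    centroid/extent computation."""
    center = tooth_centers.mean(axis=0)
    extent = tooth_centers.max(axis=0) - tooth_centers.min(axis=0)
    return center, extent
```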
  • the panoramic 2D image processing logic 119 may utilize one or more models (i.e., projection targets) generated from the modeling logic 118 for generating/deriving panoramic 2D images from the 3D model of the dental site.
  • the image processing logic 119 may generate 2D panoramic images from the 3D model based on the projection center.
  • a radially outward projection onto the projection target may result in a panoramic lingual view of the dentition
  • a radially inward projection onto the projection target may result in a panoramic buccal view of the dentition.
  • Image processing logic 119 may also be utilized to generate an X-ray panoramic simulated image from, for example, lingual and buccal 2D panoramic projections.
  • the result of such projection transformations may include not just raw image data, but may also preserve other information related to the 3D model.
  • each pixel of a 2D panoramic image may have associated depth information (e.g., a radial distance from the projection center), density information, 3D surface coordinates, and/or other data.
  • data may be used in transforming a 2D panoramic image back to a 3D image.
  • such data may be used in identifying overlaps of teeth detectable from the panoramic 2D images.
  • a visualization component 120 of the intraoral scan application 115 may be used to visualize the panoramic 2D images for inspection, labeling, patient education, or any other purpose.
  • the visualization component 120 may be utilized to compare panoramic 2D images generated from intraoral scans at various stages of a treatment plan. Such embodiments allow for visualization of tooth movement and shifting.
  • a machine learning model may be trained to detect and automatically label tooth movement and shifting using panoramic 2D images, panoramic X-ray images, and/or intraoral scan data as inputs.
  • FIG. 2 illustrates a cylindrical modeling approach for generating a 2D projection of a dental site (e.g., a 3D model of a patient’s dentition, or “3D dentition” 210), in accordance with at least one embodiment.
  • a top-down view of the 3D dentition 210 is shown with a projection center 230 in a central region of an arch of the 3D dentition 210.
  • the projection surface 220 is a partial cylinder that surrounds the dental arch.
  • a radius of the projection surface 220 may coincide with the projection center 230.
  • the radius may be selected to surround the dental arch while maintaining a minimum spacing away from the nearest tooth of the 3D dentition 210.
  • the projection center 230 corresponds to the center of the arch. In at least one embodiment, the projection center 230 is selected so that radial projection lines 232 are tangential or nearly tangential to the third molar of the 3D dentition 210 for a given radius of the projection surface 220.
  • FIG. 3 illustrates projection of the 3D dentition 210 onto the cylindrical projection surface 220, in accordance with at least one embodiment.
  • the 3D dentition 210 is projected onto the cylindrical projection surface 220 and then the 3D model/mesh is flattened to produce a flattened arch mesh 330.
  • the flattened arch mesh 330 can then be rendered using orthographic rendering to generate a panoramic projection.
  • a coordinate system (x-y-z) is based on the original coordinate system associated with the 3D dentition 210, and a coordinate system for the projection surface 220 is defined as x’-y’-z’.
  • the transform 320 is used to transform any coordinate of the 3D dentition 210 to x’-y’-z’ according to a set of cylindrical-coordinate relationships in which r is the distance from the origin of the 3D dentition 210 coordinate system to the projection surface 220. With this transformation, a “flattened” arch mesh 330 is obtained, as sketched below.
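  • The relationships themselves are not reproduced in this text; the following sketch shows one standard cylindrical unrolling consistent with the surrounding description (arc length along the cylinder, preserved height, signed radial depth). The axis conventions and function name are assumptions:

```python
import numpy as np

def flatten_cylindrical(vertices, r):
    """Unroll dentition vertices (N, 3) about a vertical cylinder of
    radius r centered at the projection center (assumed at the origin,
    with z as the vertical/occlusal axis):

        x' = r * theta   (arc length along the unrolled cylinder)
        y' = z           (height is preserved)
        z' = rho - r     (signed depth relative to the cylinder wall)
    """
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    theta = np.arctan2(y, x)   # angle about the projection center
    rho = np.hypot(x, y)       # radial distance from the projection center
    return np.column_stack((r * theta, z, rho - r))

# Orthographic rendering of the flattened mesh yields the panoramic image;
# negating z' (or rotating the flattened mesh 180 degrees about its
# vertical axis) swaps between the buccal and lingual views.
```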
  • FIG. 4 is a workflow 400 illustrating generation of an X-ray panoramic simulated image 450 (based on circular projection), in accordance with at least one embodiment.
  • orthographic rendering is applied via transformation 420, resulting in panoramic 2D images 430A and/or 430B.
  • applying the transformation results in a buccal image (panoramic 2D image 430A) due to the buccal side of the flattened arch mesh 330 facing the projection surface 220.
  • a lingual rendering may be obtained, for example, by rotating the flattened arch mesh 330 by 180° about the vertical axis or by flipping the sign of the depth coordinate (z’).
  • the panoramic 2D images 430A and 430B may retain the original color of the 3D dentition 210.
  • the panoramic 2D images 430A and 430B may be recolored. For example, as illustrated in FIG. 4, each tooth is recolored in grayscale using, for example, a gray pixel value of the tooth index number multiplied by 5.
  • transform 440 is applied to the panoramic 2D images 430A and 430B to generate an X-ray panoramic simulated image 450, which can be generated by comparing the buccal and lingual renderings of the same jaw and marking the regions having different color values from each other as a different color (e.g., white) to show tooth overlap that is representative of high-density regions of an X-ray image.
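  • A minimal sketch of this comparison, assuming the buccal and lingual renderings have already been brought into the same orientation (e.g., the lingual image mirrored into the buccal frame) and recolored by tooth index as described above; names and the overlap value are illustrative:

```python
import numpy as np

def simulate_panoramic_xray(buccal, lingual, overlap_value=255):
    """Combine co-oriented buccal and lingual tooth-ID renderings (uint8
    arrays where each tooth is shaded as tooth_index * 5, background 0)
    into a simulated panoramic X-ray: pixels where the two views show
    different teeth are marked bright to indicate overlap (high density)."""
    buccal, lingual = np.asarray(buccal), np.asarray(lingual)
    out = np.maximum(buccal, lingual)                   # base rendering
    overlap = (buccal != lingual) & (buccal > 0) & (lingual > 0)
    out[overlap] = overlap_value                        # mark overlap white
    return out
```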
  • FIG. 5 is a comparison of an actual X-ray image 500 to an X-ray panoramic simulated image 450 for the same patient, in accordance with at least one embodiment.
  • the simulated rendering of the X-ray panoramic simulated image 450, including the marked/highlighted areas, closely resembles the original X-ray image 500, including identification of high-density areas.
  • the simulation process can be calibrated to more closely resemble an X-ray image, for example, by adjusting the location of the projection center and the position and orientation of the projection surface 220. Such calibrations are advantageous, for example, if the patient’s jaw was not facing/orthogonal to the X-ray film at the time that the X-ray was captured.
  • these parameters may be iterated through and multiple X-ray panoramic simulated images may be generated in order to identify a best fit simulated image.
  • FIG. 6A illustrates an arch curve-following modeling approach for generating a 2D projection of the 3D dentition 210, in accordance with at least one embodiment.
  • In this approach, a plurality of projection surfaces 620 (e.g., a plurality of connected or continuous projection surfaces 620) are generated to follow the curve of the dental arch.
  • the dental arch of the 3D dentition is segmented into a plurality of arch segments 642 (e.g., 7 segments as shown) based on the angle span around an arch center.
  • a center vertex 644 is calculated for each of the arch segments 642.
  • the center vertices 644 are used to connect the segments 642 in a piecewise manner to produce an arch mesh 640.
  • the arch mesh 640 is scaled radially outward from the projection center 630 to form the projection surfaces 620 that encompass the dental arch.
  • a smoothing algorithm is applied to the projection surfaces 620 to produce a smoother transition between the segments 622 by eliminating/reducing discontinuities caused by the presence of the joints.
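  • A top-view sketch of this construction (angle binning around the arch center, per-segment center vertices, radial scaling, and a simple smoothing pass); the helper name, segment count, and scale factor are illustrative assumptions:

```python
import numpy as np

def arch_curve_target(tooth_points, center, n_segments=7, scale=1.3):
    """Build the piecewise projection polyline described above: bin
    dentition points by angle about the arch center, take each bin's
    centroid as a center vertex, then push the vertices radially outward
    so the connected polyline encloses the arch."""
    rel = tooth_points[:, :2] - center[:2]
    ang = np.arctan2(rel[:, 1], rel[:, 0])
    bins = np.linspace(ang.min(), ang.max(), n_segments + 1)
    verts = []
    for i in range(n_segments):
        mask = (ang >= bins[i]) & (ang <= bins[i + 1])
        if mask.any():
            verts.append(tooth_points[mask].mean(axis=0))  # center vertex
    out = np.array(verts)
    # Scale radially outward (in the occlusal plane) from the arch center.
    out[:, :2] = center[:2] + scale * (out[:, :2] - center[:2])
    # Simple smoothing to soften the joints between segments.
    if len(out) > 2:
        out[1:-1] = (out[:-2] + out[1:-1] + out[2:]) / 3.0
    return out
```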
  • the sign of the depth coordinate can be switched in the flattened mesh, the vertex order of all triangular mesh faces can be reversed, or the flattened arch mesh 690 can be rotated about its vertical axis prior to applying the orthographic rendering.
  • FIG. 7 illustrates a graphical user interface (GUI) 700 displaying various renderings of a 3D dentition, in accordance with at least one embodiment.
  • the GUI 700 may be utilized by dental personnel to display panoramic 2D images of the patient’s dentition.
  • the GUI 700 includes a buccal rendering 710, a lingual rendering 720, and an occlusal rendering 730.
  • the occlusal rendering 730 may be generated, for example, by projecting from a top of the 3D dentition down onto a plane underneath the 3D dentition.
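  • A minimal sketch of such a top-down projection, assuming z is the vertical axis (the function name and pixel size are illustrative):

```python
import numpy as np

def occlusal_heightmap(vertices, pixel_size=0.1):
    """Project mesh vertices (N, 3) straight down onto a plane beneath the
    dentition, keeping the highest z value per pixel (a simple z-buffer);
    the result is an occlusal height map that can be shaded for display."""
    xy = vertices[:, :2]
    mins = xy.min(axis=0)
    ij = np.floor((xy - mins) / pixel_size).astype(int)
    h, w = ij[:, 1].max() + 1, ij[:, 0].max() + 1
    img = np.full((h, w), -np.inf)
    np.maximum.at(img, (ij[:, 1], ij[:, 0]), vertices[:, 2])
    return img
```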
  • the upper jaw dentition may be shown separately, or the GUI 700 may include renderings of the top and bottom dentitions (e.g., up to 6 total views).
  • the GUI 700 may allow dental personnel to label dental features that are observable in the various renderings.
  • labeling of a dental feature in one rendering may cause a similar label to appear in a corresponding location of another rendering.
  • FIG. 8 A illustrates an arch curve-following modeling approach utilizing a hybrid projection surface for generating a 2D projection of a 3D dentition 810, in accordance with at least one embodiment.
  • a projection surface 820 is generated from 3 segments joined at their edges, with each section corresponding to a projection subsurface. In other embodiments, more than 3 segments are utilized.
  • the projection surface 820 is formed from planar portions 822 and a cylindrical portion 826 that connects at its edges to the planar portions 822, resulting in a symmetric shape that substantially surrounds 3D dentition 810. A smooth transition between the planar portions 822 and the cylindrical portion 826 can be utilized to reduce or eliminate discontinuities.
  • the cylindrical portion 826 encompasses a first portion of the 3D dentition 810, and the planar portions 822 extend past the first portion and towards a rear portion of the 3D dentition (left and right back molars).
  • the angle θ corresponds to the angle between two projection lines 824 extending from the projection center 830 to the edges at which the planar portions 822 are connected to the cylindrical portion 826.
  • the angle θ is from about 110° to about 130° (e.g., 120°).
  • the angle θ, the location of the projection center 830, and the orientation and location of the projection surface 820 are used as tunable parameters to optimize/minimize distortions in the resulting panoramic images.
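  • A sketch of the hybrid target's top-view cross-section under this parameterization: a circular arc spanning θ about the projection center with straight tangent extensions at its edges (extruding the polyline vertically gives the projection surface). All names and default values are assumptions:

```python
import numpy as np

def hybrid_target_curve(r, theta_deg=120.0, wing_len=30.0, n=120):
    """Top-view polyline of the hybrid projection target: a circular arc
    of radius r spanning theta (centered on the anterior, +y direction)
    joined smoothly to two planar "wings" tangent to the arc at its
    edges, extending back toward the molars."""
    half = np.radians(theta_deg) / 2.0
    phi = np.linspace(np.pi / 2 - half, np.pi / 2 + half, n)
    arc = np.column_stack((r * np.cos(phi), r * np.sin(phi)))
    s = np.linspace(0.0, wing_len, n // 4)[:, None]
    # Outward tangent directions at the two arc endpoints.
    d_right = np.array([np.sin(phi[0]), -np.cos(phi[0])])
    d_left = np.array([-np.sin(phi[-1]), np.cos(phi[-1])])
    wing_right = arc[0] + s * d_right    # extends toward the right molars
    wing_left = arc[-1] + s * d_left     # extends toward the left molars
    return np.vstack((wing_right[::-1], arc, wing_left))
```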
  • FIG. 9A illustrates a polynomial arch curve modeling approach for generating a 2D projection of the 3D dentition 210, in accordance with at least one embodiment.
  • a parabolic projection surface 920 surrounds the dental arch of the 3D dentition 210.
  • the parabolic projection surface 920 is an illustrative example of the polynomial curve modeling approach, and it is contemplated that higher-order polynomials may be used as would be appreciated by those of ordinary skill in the art.
  • FIG. 9B shows overlays of 2D panoramic renderings onto X-ray images for the upper and lower arches of a patient, demonstrating the accuracy of the polynomial arch curve modeling approach.
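  • A minimal sketch of such a polynomial fit, assuming per-tooth centers from the 3D model are available (a degree-2 fit corresponds to the parabolic surface 920; arc length along the fitted curve supplies the horizontal axis of the panoramic image):

```python
import numpy as np

def fit_arch_polynomial(tooth_centers, degree=2):
    """Fit a polynomial arch curve y = p(x) through tooth centers in the
    top view; degree=2 gives a parabola, and higher degrees can track the
    arch more closely."""
    x, y = tooth_centers[:, 0], tooth_centers[:, 1]
    return np.poly1d(np.polyfit(x, y, degree))

def arc_length_samples(poly, x_min, x_max, n=512):
    """Sample the fitted curve and accumulate arc length, which serves as
    the horizontal coordinate of the flattened panoramic image."""
    x = np.linspace(x_min, x_max, n)
    y = poly(x)
    ds = np.hypot(np.diff(x), np.diff(y))
    s = np.concatenate(([0.0], np.cumsum(ds)))
    return x, y, s
```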
  • FIGS. 10-12 illustrate methods related to generation of panoramic 2D images from 3D models of dental sites, for which the 3D model is generated from one or more intraoral scans.
  • the methods may be performed by a processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions run on a processing device to perform hardware simulation), or a combination thereof.
  • The methods may be performed, for example, by a computing device executing a dental modeling application such as dental modeling logic 116 of FIG. 1.
  • The dental modeling logic 116 may be, for example, a component of an intraoral scanning apparatus that includes a handheld intraoral scanner and a computing device operatively coupled (e.g., via a wired or wireless connection) to the handheld intraoral scanner.
  • the dental modeling application may execute on a computing device at a dentist office or dental lab.
  • FIG. 10 illustrates a flow diagram for a method 1000 of generating a panoramic 2D image from a 3D model of a dental site, in accordance with at least one embodiment.
  • a computing device receives a 3D model of a dental site.
  • the 3D model is generated from one or more intraoral scans.
  • the intraoral scan may be performed by a scanner (e.g., the scanner 150), which generates one or more intraoral scan data sets.
  • the intraoral scan data set may include 3D point clouds, 2D images, and/or 3D images of particular teeth and/or regions of the dental site.
  • the computing device (e.g., implementing the modeling logic 118) generates a projection target shaped to substantially surround an arch represented by the dental site.
  • the projection target is a cylindrically-shaped surface (e.g., the projection surface 220) that substantially surrounds the arch.
  • the projection target comprises a polynomial curve-shaped surface, such as a parabolically-shaped surface (e.g., the projection surface 920), that substantially surrounds the arch.
  • the projection target is a hybrid surface (e.g., the projection surface 820) formed from a cylindrically-shaped surface (e.g., the cylindrical portion 826) and first and second planar surfaces (e.g., the planar portions 822) that extend from edges of the cylindrically-shaped surface.
  • the cylindrically- shaped surface, the first planar surface, and the second planar surface collectively define a continuous surface that substantially surrounds the arch.
  • an angle between the first planar surface and the second planar surface is from about 110° to about 130° (e.g., 120°).
  • the computing device (e.g., implementing the panoramic 2D image processing logic 119) computes a surface projection by projecting the 3D model of the dental site onto one or more surfaces of the projection target.
  • the surface projection is computed based on a projection path surrounding the arch.
  • the computing device (e.g., implementing the panoramic 2D image processing logic 119) generates at least one panoramic two-dimensional (2D) image from the surface projection.
  • at least one panoramic 2D image is generated by orthographic rendering of a flattened mesh generated by projecting the 3D model along a projection path surrounding the arch (e.g., applying any of transforms 420 or 780).
  • FIG. 11 illustrates a flow diagram for a method 1100 of generating a panoramic 2D image based on a multi-surface projection target, in accordance with at least one embodiment.
  • the method 1100 may follow the workflow described with respect to FIGS. 6A and 6B.
  • A computing device (e.g., the computing device 105 of the dental office 108 or dental lab 110) receives a 3D model of a dental site.
  • the 3D model of the dental site may include a 3D dentition for the lower jaw, the upper jaw, or both, of the patient (e.g., any of the 3D dentitions 210, 660, or 810).
  • a plurality of vertices are computed along an arch represented by the dental site (e.g., the 3D dentition 210).
  • one or more of the plurality of vertices is positioned at a tooth center.
  • the number of vertices is greater than 5 (e.g., 10, 50, or greater).
  • an initial projection target is computed (e.g., the arch mesh 640).
  • the initial projection target is formed from a plurality of surface segments (e.g., segments 642) connected to each other in series at the location of the vertices.
  • The initial projection target is scaled radially outward with respect to the arch center to form a projection target (e.g., the projection surfaces 620); the resulting projection target includes a plurality of segments (e.g., segments 622).
  • a computing device receives a 3D model of a dental site.
  • the 3D model of the dental site may include a 3D dentition for the lower jaw, the upper jaw, or both, of the patient (e.g., any of the 3D dentitions 210, 660, or 810).
  • a projection target is generated.
  • the projection target may be shaped to substantially surround an arch represented by the dental site.
  • the projection target may correspond to any of those described above with respect to the methods 1000 and 1100.
  • a first surface projection is computed by projecting the 3D model of the dental site onto one or more surfaces of the projection target along the buccal direction (e.g., based on the transform 420).
  • the projection may be computed by utilizing the mathematical operation described above with respect to FIG. 4 to transform the coordinates of a 3D dentition.
  • a second surface projection is computed by projecting the 3D model of the dental site onto one or more surfaces of the projection target along the lingual direction (e.g., based on the transform 420).
  • the projection along the lingual direction may be performed by flipping the sign of the depth coordinate before or after applying the second surface projection; alternatively, the vertex order of all mesh faces of the 3D model can be reversed, or the second surface projection can be rotated about its vertical axis.
  • At block 1225, at least one panoramic 2D image is generated by combining the first surface projection and the second surface projection (e.g., applying transform 440).
  • the resulting panoramic 2D image corresponds to an X-ray panoramic simulated image (e.g., X-ray panoramic simulated image 450).
  • generating the panoramic 2D image includes marking regions of a panoramic 2D image corresponding to overlapping regions of the 3D model identified from the first and second surface projections.
  • the dental site corresponds to a single jaw.
  • a first panoramic 2D image can correspond to a buccal rendering
  • a second panoramic 2D image can correspond to a lingual rendering.
  • the buccal and lingual renderings of the jaw can be displayed, for example, in a GUI individually, together, with an occlusal rendering of the dental site, or with similar renderings for the opposite jaw.
  • the occlusal rendering is generated by projecting the 3D model of the dental site onto a flat surface from the occlusal side of the dental site.
  • the computing device may generate for display a panoramic 2D image for labeling one or more dental features in the image. Each labeled dental feature has an associated position within the panoramic 2D image.
  • the computing device determines a corresponding location in the 3D model from which the panoramic 2D image was generated and assigns a label for the dental feature to the corresponding location.
  • the 3D model when displayed, will include the one or more labels.
  • the labeling may be performed, for example, in response to a user input to directly label the dental feature.
  • the labeling may be performed using a trained machine learning model.
  • the trained machine learning model can be trained to identify and label dental features in panoramic 2D images, 3D dentitions, or both.
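  • Because each pixel of a panoramic 2D image may retain its 3D surface coordinates (as described above), a 2D label can be transferred back to the 3D model with a nearest-neighbor lookup. A minimal sketch, with the radius threshold purely illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def backproject_labels(label_mask, pixel_to_3d, mesh_vertices, radius=0.5):
    """Transfer a labeled 2D region back onto the 3D model using the
    per-pixel 3D surface coordinates retained during projection.
    `label_mask` is an (H, W) boolean mask of the labeled feature,
    `pixel_to_3d` an (H, W, 3) buffer of surface coordinates, and
    `radius` (model units) controls how close a mesh vertex must be to a
    labeled surface point to receive the label. Returns a boolean mask
    over mesh vertices."""
    labeled_pts = pixel_to_3d[label_mask]     # (K, 3) labeled 3D positions
    if labeled_pts.size == 0:
        return np.zeros(len(mesh_vertices), dtype=bool)
    tree = cKDTree(labeled_pts)
    dist, _ = tree.query(mesh_vertices)
    return dist <= radius
```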
  • one or more workflows may be utilized to implement model training in accordance with embodiments of the present disclosure.
  • the model training workflow may be performed at a server which may or may not include an intraoral scan application.
  • the model training workflow and the model application workflow may be performed by processing logic executed by a processor of a computing device.
  • One or more of these workflows may be implemented, for example, by one or more machine learning modules implemented in an intraoral scan application 115, by dental modeling logic 116, or other software and/or firmware executing on a processing device of computing device 1300 shown and described in FIG. 13.
  • the model training workflow is to train one or more machine learning models (e.g., deep learning models) to perform one or more classifying, segmenting, detection, recognition, prediction, etc. tasks for intraoral scan data (e.g., 3D intraoral scans, height maps, 2D color images, 2D NIRI images, 2D fluorescent images, etc.) and/or 3D surfaces generated based on intraoral scan data.
  • the model application workflow is to apply the one or more trained machine learning models to perform the classifying, segmenting, detection, recognition, prediction, etc. tasks for intraoral scan data (e.g., 3D scans, height maps, 2D color images, NIRI images, etc.) and/or 3D surfaces generated based on intraoral scan data.
  • One or more of the machine learning models may receive and process 3D data (e.g., 3D point clouds, 3D surfaces, portions of 3D models, etc.).
  • 3D data e.g., 3D point clouds, 3D surfaces, portions of 3D models, etc.
  • 2D data e.g., 2D panoramic images, height maps, projections of 3D surfaces onto planes, etc.
  • one or more machine learning models are trained to perform one or more of the below tasks.
  • Each task may be performed by a separate machine learning model.
  • a single machine learning model may perform each of the tasks or a subset of the tasks.
  • different machine learning models may be trained to perform different combinations of the tasks.
  • one or a few machine learning models may be trained, where the trained ML model is a single shared neural network that has multiple shared layers and multiple higher level distinct output layers, where each of the output layers outputs a different prediction, classification, identification, etc.
  • the tasks that the one or more trained machine learning models may be trained to perform are as follows: I) Canonical position determination - this can include determining canonical position and/or orientation of a 3D surface or of objects in an intraoral scan, or determining canonical positions of objects in a 2D image.
  • Scan/2D image assessment can include determining quality metric values associated with intraoral scans, 2D images and/or regions of 3D surfaces. This can include assigning a quality value to individual scans, 3D surfaces, portions of 3D surface, 3D models, portions of 3D models, 2D images, portions of 2D images, etc.
  • Moving tissue identification/removal can include performing pixel-level identification/classification of moving tissue (e.g., tongue, finger, lips, etc.) from intraoral scans and/or 2D images and optionally removing such moving tissue from intraoral scans, 2D images and/or 3D surfaces.
  • moving tissue identification and removal is described in U.S. Patent Application Publication No. 2020/0349698 A1, entitled “Excess Material Removal Using Machine Learning,” which is hereby incorporated by reference herein in its entirety.
  • Smart scanning is described in U.S. Patent Application Publication No. 2021/0321872 A1, entitled “Smart Scanning for Intraoral Scans,” which is hereby incorporated by reference herein in its entirety.
  • the trained machine learning models are trained to perform one or more of the tasks set forth in U.S. Patent Application Publication No. 2022/0202295 A1, entitled “Dental Diagnostics Hub,” which is hereby incorporated by reference herein in its entirety.
  • model application workflow includes a first trained model and a second trained model.
  • First and second trained models may each be trained to perform segmentation of an input and identify a dental feature therefrom, but may be trained to operate on different types of data.
  • first trained model may be trained to operate on 3D data
  • second trained model may be trained to operate on panoramic 2D images.
  • a single trained machine learning model is used for analyzing multiple types of data.
  • an intraoral scanner generates a sequence of intraoral scans and 2D images.
  • a 3D surface generator may perform registration between intraoral scans to stitch the intraoral scans together and generate a 3D surface/model from the intraoral scans.
  • 2D intraoral images (e.g., color 2D images and/or NIRI 2D images) may also be generated by the intraoral scanner.
  • motion data may be generated by an IMU of the intraoral scanner and/or based on analysis of the intraoral scans and/or 2D intraoral images.
  • Data from the 3D model/surface may be input into the first trained model, which outputs a first dental feature.
  • the first dental feature may be output as a probability map or mask in at least one embodiment, where each point has an assigned probability of being part of a dental feature and/or an assigned probability of not being part of a dental feature.
  • data from the panoramic 2D image is input into the second trained model, which outputs a second dental feature.
  • the dental feature(s) may each be output as a probability map or mask in at least one embodiment, where each pixel of the input 2D image has an assigned probability of being a dental feature and/or an assigned probability of not being a dental feature.
  • the machine learning model is additionally trained to identify teeth, gums and/or excess material. In at least one embodiment, the machine learning model is further trained to determine one or more specific tooth numbers and/or to identify a specific indication (or indications) for an input image. Accordingly, a single machine learning model may be trained to identify dental features and also to identify teeth generally, identify different specific tooth numbers, identify gums and/or identify other features (e.g., margin lines, etc.). In an alternative embodiment, a separate machine learning model is trained for each specific tooth number and for each specific indication.
  • the tooth number and/or indication (e.g., a particular dental prosthetic to be used) may be indicated (e.g., may be input by a user), and an appropriate machine learning model may be selected based on the specific tooth number and/or the specific indication.
  • the machine learning model may be trained to output an identification of a dental feature as well as separate information indicating one or more of the above (e.g., path of insertion, model orientation, teeth identification, gum identification, excess material identification, etc.).
  • the machine learning model (or a different machine learning model) is trained to perform one or more of : identify teeth represented in height maps, identify gums represented in height maps, identify excess material (e.g., material that is not gums or teeth) in height maps, and/or identify dental features in height maps.
  • a panoramic 2D image is generated from the 3D model of the dental site, for example, utilizing any of the methods 1000, 1100, or 1200 described in greater detail above.
  • one or more trained ML models may be utilized to segment/classify dental features identified in the panoramic 2D image.
  • the one or more trained ML models may be trained and utilized in accordance with the methodologies discussed in greater detail above.
  • information descriptive of the segmentation/classification is projected onto the 3D model of the dental site, for example, by identifying and/or labeling dental features at locations in the 3D model corresponding to those of the panoramic 2D image.
  • the example computing device 1300 includes a processing device 1302, a main memory 1304 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM), etc.), a static memory 1306 (e.g., flash memory, static random access memory (SRAM), etc.), and a secondary memory (e.g., a data storage device 1328), which communicate with each other via a bus 1308.
  • main memory 1304 e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM), etc.
  • DRAM dynamic random access memory
  • SDRAM synchronous DRAM
  • static memory 1306 e.g., flash memory, static random access memory (SRAM), etc.
  • secondary memory e.g., a data storage device 1328
  • Processing device 1302 represents one or more general-purpose processors such as a microprocessor, central processing unit, or the like. More particularly, the processing device 1302 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 1302 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. Processing device 1302 is configured to execute the processing logic (instructions 1326) for performing operations and steps discussed herein.
  • Embodiment 2 The method of Embodiment 1, wherein the surface projection is computed based on a projection path surrounding the arch.
  • Embodiment 4 The method of any of the preceding Embodiments, wherein the projection target comprises a cylindrically-shaped surface that substantially surrounds the arch.
  • Embodiment 5 The method of any of the preceding Embodiments, wherein the projection target comprises a polynomial curve-shaped surface that substantially surrounds the arch.
  • Embodiment 6 The method of any of the preceding Embodiments, wherein the projection target comprises: a cylindrically-shaped surface; a first planar surface that extends from a first edge of the cylindrically-shaped surface; and a second planar surface that extends from a second edge of the cylindrically-shaped surface that is opposite the first edge, wherein the cylindrically-shaped surface, the first planar surface, and the second planar surface collectively define a continuous surface that substantially surrounds the arch.
  • Embodiment 7 The method of Embodiment 6, wherein an angle between the first planar surface and the second planar surface is from about 110° to about 130°.
  • Embodiment 8 The method of any of the preceding Embodiments, wherein the dental site corresponds to a single jaw, wherein a first panoramic 2D image corresponds to a buccal rendering, and wherein a second panoramic 2D image corresponds to a lingual rendering.
  • Embodiment 10 The method of any of the preceding Embodiments, further comprising: generating for display a panoramic 2D image; labeling a dental feature at a first location in the panoramic 2D image; determining a second location of the 3D model corresponding to the first location of the panoramic 2D image; and assigning a label for the dental feature to the second location of the 3D model, wherein the 3D model is displayable with the label.
  • Embodiment 11 The method of Embodiment 10, wherein labeling the dental feature comprises one or more of receiving a user input to directly label the dental feature or using a trained machine learning model that has been trained to identify and label dental features in panoramic 2D images.
  • Embodiment 12 A method comprising: receiving a three-dimensional (3D) model of a dental site generated from one or more intraoral scans; generating a plurality of vertices along an arch represented by the dental site; computing a projection target comprising a plurality of surface segments connected to each other in series at locations of the vertices; scaling the projection target with respect to an arch center located within a central region of the arch such that the projection target substantially surrounds the arch; computing a surface projection by projecting the 3D model of the dental site onto each of the surface segments of the projection target; and generating at least one panoramic two-dimensional (2D) image of the dental site from the surface projection.
  • Embodiment 13 The method of Embodiment 12, wherein one or more of the plurality of vertices is positioned at a tooth center, and wherein the number of vertices is greater than 5.
  • Embodiment 14 A method comprising: receiving a three-dimensional (3D) model of a dental site generated from one or more intraoral scans; generating a projection target shaped to substantially surround an arch represented by the dental site; computing a first surface projection by projecting the 3D model of the dental site onto one or more surfaces of the projection target along a buccal direction; computing a second surface projection by projecting the 3D model of the dental site onto one or more surfaces of the projection target along a lingual direction; and generating at least one panoramic two-dimensional (2D) image by combining the first surface projection and the second surface projection.
  • Embodiment 15 The method of Embodiment 14, wherein generating the at least one panoramic 2D image comprises marking regions of a panoramic 2D image corresponding to overlapping regions of the 3D model identified from the first and second surface projections.
  • Embodiment 16 An intraoral scanning system comprising: an intraoral scanner; and a computing device operatively connected to the intraoral scanner, wherein the computing device is to perform the method of any of Embodiments 1-15 responsive to generating the one or more intraoral scans using the intraoral scanner.
  • Embodiment 17 A non-transitory computer readable medium comprising instructions that, when executed by a processing device, cause the processing device to perform the method of any of Embodiments 1-15.
  • Embodiment 18 A system comprising: a memory; and a processing device to execute instructions from the memory to perform a method comprising: receiving a three-dimensional (3D) model of a dental site generated from one or more intraoral scans; generating a projection target shaped to substantially surround an arch represented by the dental site; computing a surface projection by projecting the 3D model of the dental site onto one or more surfaces of the projection target; and generating at least one panoramic two-dimensional (2D) image of the dental site from the surface projection.
  • Embodiment 19 The system of Embodiment 18, wherein the surface projection is computed based on a projection path surrounding the arch, and wherein the at least one panoramic 2D image is generated by orthographic rendering of a flattened mesh generated by projecting the 3D model along the projection path.
  • Embodiment 20 The system of either Embodiment 18 or Embodiment 19, wherein the projection target comprises a cylindrically-shaped surface that substantially surrounds the arch.
  • Embodiment 21 The system of any of Embodiments 18-20, wherein the projection target comprises a polynomial curve-shaped surface that substantially surrounds the arch.
  • Embodiment 22 The system of any of Embodiments 18-21, wherein the projection target comprises: a cylindrically-shaped surface; a first planar surface that extends from a first edge of the cylindrically-shaped surface; and a second planar surface that extends from a second edge of the cylindrically-shaped surface that is opposite the first edge, wherein the cylindrically-shaped surface, the first planar surface, and the second planar surface collectively define a continuous surface that substantially surrounds the arch, and wherein an angle between the first planar surface and the second planar surface is from about 110° to about 130°.
  • Embodiment 23 The system of any of Embodiments 18-22, wherein the dental site corresponds to a single jaw, wherein a first panoramic 2D image corresponds to a buccal rendering, and wherein a second panoramic 2D image corresponds to a lingual rendering, and wherein the method further comprises: generating for display the buccal rendering, the lingual rendering, and optionally an occlusal rendering of the dental site generated by projecting the 3D model of the dental site onto a flat surface from the occlusal side of the dental site.
  • Embodiment 24 The system of any of Embodiments 18-22, further comprising: generating for display a panoramic 2D image; labeling a dental feature at a first location in the panoramic 2D image; determining a second location of the 3D model corresponding to the first location of the panoramic 2D image; and assigning a label for the dental feature to the second location of the 3D model, wherein the 3D model is displayable with the label.
  • Embodiment 25 The system of Embodiment 24, wherein labeling the dental feature comprises one or more of receiving a user input to directly label the dental feature or using a trained machine learning model that has been trained to identify and label dental features in panoramic 2D images.
  • Claim language or other language herein reciting “at least one of” a set and/or “one or more” of a set indicates that one member of the set or multiple members of the set (in any combination) satisfy the claim.
  • claim language reciting “at least one of A and B” or “at least one of A or B” means A, B, or A and B.
  • claim language reciting “at least one of A, B, and C” or “at least one of A, B, or C” means A, B, C, or A and B, or A and C, or B and C, or A and B and C.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)
EP23836677.7A 2022-11-30 2023-11-29 Erzeugung von dentalen rendering aus modelldaten Pending EP4627541A1 (de)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US202263428941P 2022-11-30 2022-11-30
US18/522,169 US20240177397A1 (en) 2022-11-30 2023-11-28 Generation of dental renderings from model data
PCT/US2023/081658 WO2024118819A1 (en) 2022-11-30 2023-11-29 Generation of dental renderings from model data

Publications (1)

Publication Number Publication Date
EP4627541A1 true EP4627541A1 (de) 2025-10-08

Family

ID=89474887

Family Applications (1)

Application Number Title Priority Date Filing Date
EP23836677.7A Pending EP4627541A1 (de) 2022-11-30 2023-11-29 Erzeugung von dentalen rendering aus modelldaten

Country Status (3)

Country Link
EP (1) EP4627541A1 (de)
CN (1) CN120604268A (de)
WO (1) WO2024118819A1 (de)

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3578131B1 (de) * 2016-07-27 2020-12-09 Align Technology, Inc. Intraoraler scanner mit zahnmedizinischen diagnosefähigkeiten
CN106846307B (zh) * 2017-01-19 2020-09-22 深圳市深图医学影像设备有限公司 基于锥形束计算机体层摄影的图像处理方法及装置
CN114587237B (zh) 2018-01-26 2025-09-30 阿莱恩技术有限公司 诊断性口内扫描和追踪
US11238586B2 (en) 2019-05-02 2022-02-01 Align Technology, Inc. Excess material removal using machine learning
US11995839B2 (en) 2019-09-04 2024-05-28 Align Technology, Inc. Automated detection, generation and/or correction of dental features in digital models
CN119837657A (zh) * 2019-09-10 2025-04-18 阿莱恩技术有限公司 牙科全景视图
US12453473B2 (en) 2020-04-15 2025-10-28 Align Technology, Inc. Smart scanning for intraoral scanners
US12127814B2 (en) 2020-12-30 2024-10-29 Align Technology, Inc. Dental diagnostics hub

Also Published As

Publication number Publication date
WO2024118819A1 (en) 2024-06-06
CN120604268A (zh) 2025-09-05

Similar Documents

Publication Publication Date Title
US20240221165A1 (en) Dental object classification and 3d model modification
US12062180B2 (en) Automated, generation of dental features in digital models
US11744682B2 (en) Method and device for digital scan body alignment
US11651494B2 (en) Apparatuses and methods for three-dimensional dental segmentation using dental image data
US20220218449A1 (en) Dental cad automation using deep learning
US20230068727A1 (en) Intraoral scanner real time and post scan visualizations
US20240024076A1 (en) Combined face scanning and intraoral scanning
US20230419631A1 (en) Guided Implant Surgery Planning System and Method
US20240058105A1 (en) Augmentation of 3d surface of dental site using 2d images
US20230309800A1 (en) System and method of scanning teeth for restorative dentistry
US20240144480A1 (en) Dental treatment video
US20250200894A1 (en) Modeling and visualization of facial structure for dental treatment planning
US20240202921A1 (en) Viewfinder image selection for intraoral scanning
WO2023028339A1 (en) Intraoral scanner real time and post scan visualizations
WO2024039547A1 (en) Augmentation of 3d surface of dental site using 2d images
US20240177397A1 (en) Generation of dental renderings from model data
EP4627541A1 (de) Erzeugung von dentalen rendering aus modelldaten
US20250225752A1 (en) Blood and saliva handling for intraoral scanning
US20240325124A1 (en) Method for providing annotated synthetic image data for training machine learning models
CN120475942A (zh) 用于口内扫描的取景器图像选择
WO2024097673A1 (en) Dental treatment video
CN119213462A (zh) 用于牙科修复的扫描牙齿的系统和方法

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20250512

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC ME MK MT NL NO PL PT RO RS SE SI SK SM TR