
WO2024127313A1 - Computation and visualization of metrics in digital oral care - Google Patents

Computation and visualization of metrics in digital oral care

Info

Publication number
WO2024127313A1
WO2024127313A1 (PCT/IB2023/062707)
Authority
WO
WIPO (PCT)
Prior art keywords
oral care
metrics
tooth
setups
teeth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/IB2023/062707
Other languages
English (en)
Inventor
Michael Starr
Jonathan D. Gandrud
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
3M Innovative Properties Co
Original Assignee
3M Innovative Properties Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 3M Innovative Properties Co filed Critical 3M Innovative Properties Co
Priority to EP23829131.4A (published as EP4634931A1)
Publication of WO2024127313A1
Legal status: Ceased


Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/30 ICT specially adapted for therapies or health-improving plans relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining for computer-aided diagnosis, e.g. based on medical expert systems

Definitions

  • the methods may modify at least one of the structure or the shape of the tooth (or other 3D oral care representation), such as modifying one of the position or orientation of a mesh element within the digital 3D model of the tooth (or other 3D oral care representation), to generate the modified state of a tooth (or other 3D oral care representation).
  • a mesh element comprises at least one of a point, a vertex, an edge, a face or a voxel.
  • the methods may apply a smoothing operation to one or more mesh elements.
  • the methods may remove one or more mesh elements from the tooth (or other 3D oral care representation).
  • the methods may add one or more mesh elements to the tooth (or other 3D oral care representation).
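The mesh-element operations above (moving a vertex, smoothing, adding or removing elements) can be illustrated with a minimal sketch. The following hypothetical one-pass Laplacian smoothing routine is not taken from the disclosure; the vertex layout and the `laplacian_smooth` helper are assumptions for illustration.

```python
import numpy as np

def laplacian_smooth(vertices, neighbors, lam=0.5):
    # One Laplacian smoothing pass: move each listed vertex toward the
    # centroid of its neighbors by a factor lam (0 = no change, 1 = snap
    # to the centroid). `neighbors` maps a vertex index to adjacent indices.
    out = vertices.copy()
    for i, nbrs in neighbors.items():
        centroid = vertices[list(nbrs)].mean(axis=0)
        out[i] = (1.0 - lam) * vertices[i] + lam * centroid
    return out

# A raised vertex (index 0) surrounded by three coplanar neighbors.
verts = np.array([[0.0, 0.0, 1.0],
                  [1.0, 0.0, 0.0],
                  [-1.0, 1.0, 0.0],
                  [-1.0, -1.0, 0.0]])
smoothed = laplacian_smooth(verts, {0: [1, 2, 3]}, lam=1.0)
# With lam=1.0 the raised vertex lands on its neighbors' centroid.
```

Production mesh libraries apply the same idea over every vertex of a tooth mesh, typically for several damped iterations.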
  • FIG.3 shows a plot of orthodontic metrics (for UR1) which have undergone dimensionality reduction.
  • FIG.4 shows a plot of orthodontic metrics (for the full arch) which have undergone dimensionality reduction.
  • FIG.5 shows the progress in orthodontic treatment of a patient case.
  • FIG.6 shows a tooth-arch alignment oral care metric.
  • FIG.7 shows the progress scores for the orthodontic treatment of a patient case.
  • FIG.8 shows a buccolingual inclination oral care metric.
  • FIG.9 shows a midline oral care metric.
  • FIG.10 shows an overjet oral care metric.
  • the metric value may be received as the input to the machine learning models described herein, as a way of training that model or those models to encode a distribution of such a metric over the several examples of the training dataset.
  • the network may then receive metric value(s) as input, to assist in training the network to link that inputted metric value to the physical aspects of the ground truth oral care mesh which is used in loss calculation.
  • a loss calculation may quantify the difference between a prediction and a ground truth example (e.g., between a predicted oral care mesh and a ground truth oral care mesh).
  • the network techniques of this disclosure may, through the course of loss calculation and subsequent backpropagation, train the network to encode a distribution of a given metric.
  • aspects of the present disclosure reduce computing resource consumption by decimating 3D representations of the patient’s dentition (e.g., reducing the counts of mesh elements used to describe aspects of the patient’s dentition) so that computing resources are not unnecessarily wasted by processing excess quantities of mesh elements.
  • decimating the meshes does not reduce the overall predictive accuracy of the computing system (and indeed may actually improve predictions because the input provided to the ML model after decimation is a more accurate (or better) representation of the patient’s dentition). For example, noise or other artifacts which are unimportant (and which may reduce the accuracy of the predictive models) are removed.
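As a concrete (hypothetical) illustration of decimation, the sketch below reduces element counts by vertex clustering: points are snapped to a coarse voxel grid and each occupied cell keeps one representative. Real pipelines would more likely use quadric-error decimation on the full triangle mesh; this is only a sketch of the count-reduction idea.

```python
import numpy as np

def decimate_by_clustering(points, cell_size):
    # Crude vertex-clustering decimation: snap each point to a voxel grid
    # of the given cell size and keep one representative (the mean) per
    # occupied cell, reducing element counts while preserving coarse shape.
    keys = np.floor(points / cell_size).astype(int)
    buckets = {}
    for k, p in zip(map(tuple, keys), points):
        buckets.setdefault(k, []).append(p)
    return np.array([np.mean(b, axis=0) for b in buckets.values()])

pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],   # near-duplicate points
                [5.0, 5.0, 5.0]])
reduced = decimate_by_clustering(pts, cell_size=1.0)
# The two near-duplicates merge into one representative point.
```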
  • computing systems specifically adapted to visualize and/or analyze configurations of orthodontic setups for oral care appliance generation are improved.
  • aspects of the present disclosure improve the performance of a computing system for visualizing oral care metrics data by reducing the consumption of computing resources.
  • aspects of the present disclosure reduce computing resource consumption by reducing the dimensionality of high-dimensional vectors of oral care metrics (e.g., reducing hundreds or thousands of dimensions to 2 or 3 dimensions which can be easily plotted and visualized) so that computing resources are not unnecessarily wasted by visualizing large quantities of oral care metrics plots.
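The reduction of hundreds of metric dimensions to 2 or 3 plottable dimensions can be sketched with PCA, one of the techniques this disclosure names alongside t-SNE and U-Map. The 50 x 470 metrics matrix below is synthetic and purely illustrative.

```python
import numpy as np

def reduce_metrics(metric_vectors, n_dims=2):
    # PCA via SVD: center the data, take the top principal axes from Vt,
    # and project each high-dimensional metrics vector onto them so each
    # case becomes a single plottable 2D (or 3D) point.
    X = metric_vectors - metric_vectors.mean(axis=0)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_dims].T

rng = np.random.default_rng(0)
metrics = rng.normal(size=(50, 470))        # 50 setups x 470 metric values
coords_2d = reduce_metrics(metrics, n_dims=2)
```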
  • aspects of the present disclosure may need to be executed in a time-constrained manner, such as when an oral care appliance must be generated for a patient immediately after intraoral scanning (e.g., while the patient waits in the clinician’s office).
  • aspects of the present disclosure are necessarily rooted in the underlying computer technology of oral care metrics dimensionality reduction and visualization and cannot be performed by a human, even with the aid of pen and paper.
  • IPR information (e.g., the quantity of IPR that is to be performed on one or more teeth, as measured in millimeters, or one or more binary flags to indicate whether or not IPR is to be performed on each tooth) may be concatenated with a latent vector produced by a VAE or a latent capsule autoencoder.
  • the vector(s) and/or capsule(s) resulting from such a concatenation may be provided to one or more of the neural networks of the present disclosure, with the technical improvement or added advantage of enabling that predictive neural network to account for IPR.
  • IPR is especially relevant to setups prediction methods, which may determine the positions and poses of teeth at the end of treatment or during one or more stages during treatment.
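A sketch of this conditioning step: planned IPR amounts and flags are concatenated onto the latent vector before it is passed to a downstream setups-prediction network. The dimensions (a 64-D latent, a 16-tooth arch) are assumptions for illustration.

```python
import numpy as np

# Hypothetical sizes: a 64-D latent vector from an encoder, plus per-tooth
# IPR amounts (mm) and binary IPR flags for a 16-tooth arch.
latent = np.random.default_rng(1).normal(size=64)
ipr_mm = np.zeros(16)
ipr_mm[[4, 5]] = 0.25                      # 0.25 mm of IPR on two teeth
ipr_flags = (ipr_mm > 0).astype(float)

# Concatenation gives the downstream network access to the planned IPR
# alongside the encoded tooth geometry.
conditioned = np.concatenate([latent, ipr_mm, ipr_flags])
```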
  • a VAE may be trained to perform this embedding operation, a U-Net may be trained to perform such an embedding, or a simple dense or fully connected network may be trained, or a combination thereof.
  • the transformer-based techniques of this disclosure may predict an action for an individual tooth, or may predict actions for multiple teeth (e.g., predict transformations for each of multiple teeth).
  • a 3D mesh transformer may include a transformer encoder structure (which may encode oral care data), and may be followed by a transformer decoder structure.
  • the 3D mesh transformer encoder may encode oral care data into a latent representation, which may be combined with attention information (e.g., to concatenate a vector of attention information to the latent representation).
  • a transformer may include modules such as one or more of: multi-headed attention modules, feed forward modules, normalization modules, linear modules, and softmax modules, and convolution models for latent vector compression, and/or representation.
  • the encoder may be stacked one or more times, thereby further encoding the oral care data, and enabling different representations of the oral care data to be learned (e.g., different latent representations). These representations may be embedded with attention information (which may influence the decoder’s focus to the relevant portions of the latent representation of the oral care data) and may be provided to the decoder in continuous form (e.g., as a concatenation of latent representations – such as latent vectors).
  • the latent output generated by the transformer encoder may be used to predict mesh element labels for mesh segmentation or mesh cleanup.
  • Such a transformer encoder (or transformer decoder) may be trained, at least in part, using a cross-entropy loss function (or other loss functions described herein), which may compare predicted mesh element labels to ground truth (or reference) mesh element labels.
  • Multi-headed attention and transformers may be advantageously applied to the setups-generation problem. Multi-headed attention is a module in a 3D transformer encoder network which computes the attention weights for the provided oral care data and produces an output vector with encoded information on how each example of oral care data should attend to the other oral care data in an arch.
  • multi-headed attention may enable the transformer to attend to mesh elements within local neighborhoods (or cliques), or to attend to global dependencies between mesh elements (or cliques).
  • multi-headed attention may enable a transformer for setups prediction (e.g., a setups prediction model which is based on a transformer) to generate a transform for a tooth, and to substantially concurrently attend to each of the other teeth in the arch while that transform is generated.
  • the transform for each tooth may be generated in light of the poses of one or more other teeth in the arch, leading to a more accurate transform (e.g., a transform which conforms more closely to the ground truth or reference transform).
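The attention mechanism behind this behavior can be sketched with plain scaled dot-product attention over per-tooth embeddings (multi-headed attention runs several such maps in parallel and concatenates them). The shapes and embeddings below are illustrative assumptions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V.  Each output row is
    # a weighted mix of all value rows, so the representation produced for
    # one tooth attends to every other tooth in the arch.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
tooth_embeddings = rng.normal(size=(16, 8))   # 16 teeth, 8-D embeddings
out, attn = scaled_dot_product_attention(tooth_embeddings,
                                         tooth_embeddings,
                                         tooth_embeddings)
```

Each row of `attn` is a probability distribution over the 16 teeth, i.e., how much the transform for one tooth should depend on each of the others.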
  • One implementation of the GDL Setups neural network model may include a representation generation module (e.g., containing a U-Net structure, an autoencoder encoder, a transformer encoder, another type of encoder-decoder structure, or an encoder, etc.) which may provide its output to a module which is trained to generate tooth transforms (e.g., a set of fully connected layers with optional skip connections, or an encoder structure) to generate the prediction of a transform for each individual tooth.
  • Skip connections may, in some implementations, connect the outputs of a particular layer in a neural network to the inputs of another layer in the neural network (e.g., a layer which is not immediately adjacent to the originating layer).
  • such orthodontic metrics may be incorporated into the feature vector for a mesh element, where these per-element feature vectors are provided to the setups prediction network as inputs.
  • such orthodontic metrics may be directly consumed by a generator, an MLP, a transformer, or other neural network as direct inputs (such as presented in one or more input vectors of real numbers, as described elsewhere in this disclosure).
  • the use of such orthodontic metrics in the training of the generator may improve the performance (i.e., correctness) of the resulting generator, resulting in predicted transforms which place teeth more nearly in the correct final setups poses than would otherwise be possible.
  • Such orthodontic metrics may be consumed by an encoder structure or by a U-Net structure (in the case of GDL Setups).
  • This metric may share some computational elements with the archform_parallelism_global orthodontic metric, except that this metric may input the mean distance from a tooth origin to the line formed by the neighboring teeth in opposing arches (e.g., a tooth in the upper arch and the corresponding tooth in the lower arch). The mean distance may be computed for one or more such pairs of teeth. In some implementations, this may be computed for all pairs of teeth. Then the mean distance may be subtracted from the distance that is computed for each tooth pair. This OM may yield the deviation of a tooth from a “typical” tooth parallelism in the arch.
  • This OM may compute how far forward or behind the tooth is positioned on the l-axis relative to the tooth or teeth of interest in the opposing arch.
  • Crossbite - Fossa in at least one upper molar may be located by finding the halfway point between distal and mesial marginal ridge saddles of the tooth.
  • a lower molar cusp may lie between the marginal ridges of the corresponding upper molar.
  • Compute the Curve of Spee in this plane by measuring the distance from the farthest of the projected intermediate points to the projected curve-of-spee line segment. This yields a measure for the curvature of the arch relative to the occlusal plane.
  • 4) Skip the projection and compute the distances and curvatures in the 3D space.
  • Compute the Curve of Spee by measuring the distance from the farthest of the intermediate points to the curve-of-spee line segment. This yields a measure for the curvature of the arch in 3D space.
  • [0099] 5) Compute the slope of the projected curve-of-spee line segment on the occlusal plane.
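Variant 4 above (computing the Curve of Spee directly in 3D) can be sketched as a point-to-segment distance calculation; the cusp coordinates below are hypothetical, in millimeters.

```python
import numpy as np

def curve_of_spee_depth(cusp_points, seg_a, seg_b):
    # Depth of the Curve of Spee in 3D: the largest distance from the
    # intermediate cusp tips to the line segment joining the anterior and
    # posterior reference points.
    ab = seg_b - seg_a
    t = np.clip((cusp_points - seg_a) @ ab / (ab @ ab), 0.0, 1.0)
    closest = seg_a + t[:, None] * ab            # nearest point on segment
    return np.linalg.norm(cusp_points - closest, axis=1).max()

# Segment endpoints at occlusal height 0; middle cusps dip up to 1.5 mm.
a = np.array([0.0, 0.0, 0.0])
b = np.array([40.0, 0.0, 0.0])
cusps = np.array([[10.0, 0.0, -1.2], [20.0, 0.0, -1.5], [30.0, 0.0, -1.0]])
depth = curve_of_spee_depth(cusps, a, b)
```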
  • the neural networks of this disclosure may exploit one or more benefits of the operation of parameter tuning, whereby the inputs and parameters of a neural network are optimized to produce more precise results.
  • One parameter which may be tuned is neural network learning rate (e.g., which may have values such as 0.1, 0.01, 0.001, etc.).
  • activation functions impart non-linear behavior to the network, including: sigmoid/logistic activation functions, Tanh (hyperbolic tangent) functions, rectified linear units (ReLU), leaky ReLU functions, parametric ReLU functions, exponential linear units (ELU), softmax function, swish function, Gaussian error linear unit (GELU), or scaled exponential linear unit (SELU).
  • a linear activation function may be well suited to some regression applications (among other applications), in an output layer.
  • a sigmoid/logistic activation function may be well suited to some binary classification applications (among other applications), in an output layer.
  • Softmax activation function may be well suited to some multiclass classification applications (among other applications), in an output layer.
  • a transform may be described by a 9x1 transformation vector (e.g., that specifies a translation vector and a quaternion). In other implementations, a transform may be described by a transformation matrix (e.g., a 4x4 affine transformation matrix).
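Either encoding can be converted to the other; the sketch below assembles a 4x4 affine matrix from a translation vector and a unit quaternion (w, x, y, z). The quaternion ordering and convention are assumptions for illustration; the disclosure does not fix one.

```python
import numpy as np

def transform_to_matrix(translation, quaternion):
    # Build a 4x4 affine transform from a translation vector and a unit
    # quaternion (w, x, y, z): rotation in the upper-left 3x3 block,
    # translation in the last column.
    w, x, y, z = quaternion / np.linalg.norm(quaternion)
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = translation
    return M

# A 90-degree rotation about z, followed by a 2 mm translation along x.
M = transform_to_matrix(np.array([2.0, 0.0, 0.0]),
                        np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)]))
p = M @ np.array([1.0, 0.0, 0.0, 1.0])   # a point in homogeneous coordinates
```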
  • systems of this disclosure may implement a principal components analysis (PCA) on an oral care mesh, and use the resulting principal components as at least a portion of the representation of the oral care mesh in subsequent machine learning and/or other predictive or generative processing.
  • An autoencoder may be trained to generate a latent form of a 3D oral care representation.
  • a neural network which was previously trained on a first dataset may subsequently receive further training on oral care data and be applied to oral care applications (such as setups prediction). Transfer learning may be employed to further train any of the following networks: GCN (Graph Convolutional Networks), PointNet, ResNet or any of the other neural networks from the published literature which are listed above.
  • a first neural network may be trained to predict coordinate systems for teeth (such as by using the techniques described in WO2022123402A1 or US Provisional Application No. US63/366492).
  • a 3D representation may be produced using a 3D scanner, such as an intraoral scanner, a computerized tomography (CT) scanner, an ultrasound scanner, a magnetic resonance imaging (MRI) machine, or a mobile device which is enabled to perform stereophotogrammetry.
  • a 3D representation may describe the shape and/or structure of a subject.
  • a 3D representation may include one or more of a 3D mesh, a 3D point cloud, and/or a 3D voxelized representation, among others.
  • one or more mesh element features may be computed, at least in part, via deep feature synthesis (DFS), e.g. as described in: J. M. Kanter and K.
  • mesh element features may convey aspects of a 3D representation’s surface shape and/or structure to the neural network models of this disclosure. Each mesh element feature describes distinct information about the 3D representation that may not be redundantly present in other input data that are provided to the neural network.
  • Predictive models which may operate on feature vectors of the aforementioned features include but are not limited to: GDL Setups, RL Setups, VAE Setups, Capsule Setups, MLP Setups, Diffusion Setups, PT Setups, Similarity Setups, Tooth Classification, Setups Classification, Setups Comparison, VAE Mesh Element Labeling, MAE Mesh In-filling, Mesh Reconstruction Autoencoder, Validation Using Autoencoders, Mesh Segmentation, Coordinate System Prediction, Mesh Cleanup, Restoration Design Generation, Appliance Component Generation and Placement, and Archform Prediction.
  • Such feature vectors may be presented to the input of a predictive model.
  • Some implementations of optimization algorithms may compute orthodontic metrics through the course of operation, as a means of quantifying progress and guiding the optimization algorithm towards an end state (e.g., the prediction of a final setup).
  • the present technique applies a dimensionality reduction technique, such as t-SNE or U-Map, to orthodontic metrics data.
  • the techniques of this disclosure may, in some examples, further implement visualization techniques that highlight the results of the dimensionality reduction.
  • Some implementations may further implement the plotting of the progress of an orthodontic treatment plan, over many successive stages.
  • the technique may also apply to the visualization of other types of metrics in digital dentistry and digital orthodontics, such as for metrics which may be used in the automation of dental restoration appliance creation (i.e., for the 3M® Filtek™ Matrix), such as Restoration Design Metrics (RDM) (e.g., as described under the “Restoration Design Metrics Calculation” heading herein).
  • Some implementations of the techniques of the present disclosure involve dimensionality reduction of these high-dimensional vectors of metrics, so that the vectors may be visualized in two dimensions (2D) or three dimensions (3D), for interpretation by a clinician.
  • the set of metrics values corresponding to a mal setup which reflects the starting arrangement of the teeth may be plotted in this space, and each successive intermediate setup may also be plotted in this space. Finally, the final setup (which reflects the target or end-point of treatment) may be plotted in this space.
  • the vector of metrics values for a setup may comprise 470 individual scalar values. Plotting a point relative to 470 orthogonal coordinate axes may produce a result which is perfectly understandable to a computer, but a human may struggle to understand such a plot (because the plot would likely have to be displayed to the human in incremental steps, including 2 or 3 axes at a time).
  • Dimensionality reduction techniques in accordance with this disclosure may include one or more of T-distributed Stochastic Neighborhood Embedding (t-SNE), Uniform Manifold Approximation and Projection (U-Map), CompressionVAE (CVAE), principal component analysis (PCA), multidimensional scaling (MDS), Sammon mapping, and/or graph-based techniques.
  • t-SNE, U-Map and other techniques may preserve neighborhood embeddings, which may transform each data point from a high dimensional dataset into a lower dimensional space (e.g., 2 or 3 dimensions).
  • the techniques of this disclosure may provide data precision-related technical improvements in the form of an improved aligner production process, by enabling quick and intuitive validation of the intermediate staging progression.
  • This approach may also provide a summary of the progress towards an automated system which may be used for monitoring or interacting with the setup design across one or multiple cases.
  • FIG. 4 shows that, after dimensionality reduction is applied, a 2D visualization of metrics for cases in a dataset can be formed.
  • FIG.5 shows an example of progress of an orthodontic case through the stages of treatment.
  • Standard machine learning data processing may be performed on the raw metrics data prior to their ingestion by the UMAP or T-SNE algorithms. This includes, but is not limited to, data normalization.
  • the plots may be used for one or more purposes, some of which are described below.
  • Patient treatment may be aided by dimensionality reduction techniques as outlined above by the use of automated systems to score each treatment plan in a plurality of treatment plans, thus serving in an automated quality control capacity.
  • the present technique may provide a framework by which the characteristics of the arch can be represented quantitatively rather than qualitatively, allowing for automated systems such as AI and Machine Learning based systems to learn facets of what constitutes good arches and setups.
  • oral care metrics include Orthodontic Metrics (OM) and Restoration Design Metrics (RDM).
  • One use case example is in the creation of one or more dental restoration appliances.
  • Another use case example is in the creation of one or more veneers (such as a zirconia veneer).
  • Some RDM may quantify the shape and/or other characteristics of a tooth.
  • one or more neural networks or other machine learning models may be trained to identify or extract one or more RDM from one or more 3D representations of teeth (or gums, hardware and/or other elements of the patient's dentition).
  • Techniques of this disclosure may use RDM in various ways. For instance, in some implementations, one or more neural networks or other machine learning models may be trained to classify or label one or more setups, arches, dentitions or other sets of teeth based at least in part on RDM. As such, in these examples, RDMs form a part of the training data used for training these models.
  • This autoencoder for restoration design generation is disclosed in US Provisional Application No. US63/366514.
  • This autoencoder (e.g., a variational autoencoder or VAE) takes as input a tooth mesh (or other 3D representation) that reflects a mal state (i.e., the pre-restoration tooth shape).
  • the encoder component of the autoencoder encodes that tooth mesh to a latent form (e.g., a latent vector). Modifications may be applied to this latent vector (e.g., based on a mapping of the latent space through prior experiments), for the purpose of altering the geometry and/or structure of the eventual reconstructed mesh.
  • Case Assignment: Such clusters may be used to gain further insight into the kinds of patient cases which exist in a dataset. Analysis of such clusters may reveal that patient treatment cases with certain RDM values (or ranges of values) may take less time to treat (or alternatively more time to treat). Cases which take more time to treat (or are otherwise more difficult) may be assigned to experienced or senior technicians for processing. Cases which take less time to treat may be assigned to newer or less-experienced technicians for processing. Such an assignment may be further aided by finding correlations between RDM values for certain cases and the known processing durations associated with those cases.
  • Bilateral Symmetry and/or Ratios: A measure of the symmetry between one or more teeth and one or more other teeth on opposite sides of the dental arch. For example, for a pair of corresponding teeth, a measure of the width of each tooth. In one instance, one tooth is of normal width and the other tooth is too narrow. In another instance, both teeth are of normal width.
  • Proportions of Adjacent Teeth: Measure the width proportions of adjacent teeth, as measured as a projection along an arch onto a plane (e.g., a plane that is situated in front of the patient's face).
  • the ideal proportions for use in the final restoration design can be, for example, the so-called golden proportions.
  • the golden proportions relate adjacent teeth, such as central incisors and lateral incisors. This metric pertains to measuring these proportions as they exist in the pre-restoration mal dentition.
  • Bolton analysis measurements may be made by measuring upper widths, lower widths, and proportions between those quantities. Arch discrepancies may be described in absolute measurements (e.g., in mm or other suitable units) or in terms of proportions or ratios, in various implementations.
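A Bolton-style overall ratio can be sketched as a proportion of summed mesiodistal widths. The widths below are hypothetical, and the commonly cited target of roughly 91.3% for the overall ratio comes from standard orthodontic practice, not from this disclosure.

```python
def bolton_overall_ratio(lower_widths_mm, upper_widths_mm):
    # Overall Bolton ratio: the sum of mandibular tooth widths divided by
    # the sum of maxillary tooth widths, expressed as a percentage.
    return 100.0 * sum(lower_widths_mm) / sum(upper_widths_mm)

# Hypothetical mesiodistal widths (mm) for 12 lower and 12 upper teeth
# (each half-arch mirrored for simplicity).
lower = [5.0, 5.9, 6.9, 7.1, 7.3, 5.5] * 2
upper = [8.5, 6.5, 7.6, 7.1, 6.7, 10.0] * 2
ratio = bolton_overall_ratio(lower, upper)
```

As the bullet notes, the same discrepancy could instead be reported in absolute millimeters by comparing summed widths directly.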
  • Midline: A measure of the midline of the maxillary incisors, relative to the midline of the mandibular incisors. Techniques of this disclosure may measure the midline of the maxillary incisors relative to the midline of the nose (if data about nose location is available).
  • Proximal Contacts: A measure of the size (area, volume, circumference, etc.) of the proximal contact between adjacent teeth.
  • the teeth touch along the mesial/distal surfaces and the gums fill in gingivally to where the teeth touch.
  • Black triangles may form if the gum tissue fails to fill the space below the proximal contact.
  • the size of the proximal contact may get progressively shorter for teeth located farther towards the posterior of the arch.
  • the proximal contact would be long enough so that there is an appropriately sized incisal embrasure and the gum tissue fills in the area below or gingival to the contact.
  • Embrasure: In some implementations, techniques of this disclosure may measure the size (area, volume, circumference, etc.) of an embrasure, the gap between teeth at either the gingival or incisal edge. In some implementations, techniques of this disclosure may measure the symmetry between embrasures on opposite sides of the arch. An embrasure is based at least in part on the length of the contact between teeth, and/or at least in part on the shape of the tooth. In some instances, the size of the embrasure may get progressively longer for teeth located farther towards the posterior of the arch.
  [00168] Non-limiting examples of Intra-tooth RDM are enumerated below, continuing with the numbering of other RDM listed above.
  • Length and/or Width: A measure of the length of a tooth relative to the width of that tooth. This metric may reveal, for example, that a patient has long central incisors. Width and length are defined as: a) width - mesial to distal distance; b) length - gingival to incisal distance; c) other dimensions of tooth body - the portions of the tooth between the gingival region and the incisal edge. In some implementations, either or both of a length and a width may be measured for a tooth and compared to the length and/or width of one or more other teeth.
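Using definitions a) and b) above, this metric reduces to a simple ratio; the 10.5 mm by 8.5 mm central incisor below is a hypothetical example.

```python
def length_width_ratio(gingival_to_incisal_mm, mesial_to_distal_mm):
    # Length-to-width ratio of a tooth: length is the gingival-to-incisal
    # distance, width is the mesial-to-distal distance.
    return gingival_to_incisal_mm / mesial_to_distal_mm

# Hypothetical central incisor: 10.5 mm long, 8.5 mm wide.
ratio = length_width_ratio(10.5, 8.5)
```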
  • Tooth Morphology: A measure of the primary anatomy of the tooth shape, such as line angles, buccal contours, and/or incisal angles and/or embrasures.
  • the frequency and/or dimensions may be measured.
  • the observed primary tooth shape aspects may be matched to one or more known styles.
  • Techniques of this disclosure may measure secondary anatomy of the tooth shape, such as mamelon grooves. For instance, the frequency and/or dimensions may be measured.
  • the observed secondary tooth shape aspects may be matched to one or more known styles.
  • techniques of this disclosure may measure tertiary anatomy of the tooth shape, such as perikymata or striations. For instance, the frequency and/or dimensions may be measured.
  • the observed tertiary tooth shape aspects may be matched to one or more known styles.
  • Shade and/or Translucency: A measure of tooth shade and/or translucency. Tooth shade is often described by the Vita Classical or 3D Master shade guide. Tooth translucency is described by transmittance or a contrast ratio. Tooth shade and translucency may be evaluated (or measured) based on one or more of the following kinds of data pertaining to teeth: the incisal edge, incisal third, body and gingival third. The enamel layer translucency is generally higher than that of the dentin or cementum layer. Shade and translucency may, in some implementations, be measured on a per-voxel (local) basis.
  • Shade and translucency may, in some implementations, be measured on a per-area basis, such as an incisal area, tooth body area, etc. Tooth body may pertain to the portions of the tooth between the gingival region and the incisal edge.
  • Height of Contour: A measure of the contour of a tooth. When viewed from the proximal view, all teeth have a specific contour or shape, moving from the gingival aspect to the incisal. This is referred to as the facial contour of the tooth. In each tooth, there is a height of contour, where that shape is the most pronounced. This height of contour changes from the teeth in the anterior of the arch to the teeth in the posterior of the arch.
  • this measurement may take the form of fitting against a template of known dimensions and/or known proportions. In some implementations, this measurement may quantify a degree of curvature along the facial tooth surface. In some implementations, this measurement may locate the point along the contour of the tooth where the curvature is most pronounced. This location may be measured as a distance away from the gingival margin or a distance away from the incisal edge, or as a percentage along the length of the tooth.
  • RDMs may be converted to restoration design scores (RDS) that represent the RDMs’ agreement with or deviation from ideal values in a patient case dataset.
  • the network may learn baseline values for each of the RDMs from a ground truth dataset of post-restoration designs.
  • the ideal post-restoration arches may be further refined or possibly tailored to specific practices or oral care providers (e.g., a separate set of ground truth restoration designs for each dentist). All features are computed for each restoration design in the set of ground truth restoration designs. For each RDM, the median, kth percentile and (100−k)th percentile are computed.
  • An RDS value between −1 and 1 indicates that the RDM value is within the 25th–75th percentile values of the RDM in the ground truth dataset, while an RDS greater than 1 or less than −1 indicates that the RDM is outside of the normal range in the ground truth dataset.
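One plausible reading of this scoring scheme (an assumption for illustration, not necessarily the exact formula used) maps an RDM piecewise-linearly so that the median scores 0 and the kth and (100−k)th percentiles score −1 and +1, with k = 25:

```python
# Hedged sketch of one RDM -> RDS mapping consistent with the description:
# the median maps to 0 and the 25th / 75th percentiles map to -1 / +1.
# The exact scaling used in practice may differ.

def rdm_to_rds(value, median, p_low, p_high):
    if value >= median:
        return (value - median) / (p_high - median)
    return (value - median) / (median - p_low)

# Hypothetical ground-truth statistics: median 10, 25th pct 8, 75th pct 13.
rds_in_range = rdm_to_rds(12.0, 10.0, 8.0, 13.0)   # within the normal range
rds_outside = rdm_to_rds(16.0, 10.0, 8.0, 13.0)    # outside the normal range
```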
  • the RDMs and/or RDSs described above may be used to train a machine learning (ML) classifier to rate restoration designs.
  • the classifier learns baseline values for each RDM and/or RDS from a dataset of case data, which includes pre-restoration and post-restoration dentition for each patient. This is followed by an optional normalization step which makes the features have zero mean and unit variance.
  • ML classifiers are subsequently trained using cross validation to identify post-restoration restoration designs.
  • Classifiers may include, for example, a Support Vector Machine (SVM), an elliptical covariance estimator, a Principal Components Analysis (PCA) reconstruction-error-based classifier, decision trees, random forests, an AdaBoost classifier, Naïve Bayes, or neural networks (such as those disclosed elsewhere in this disclosure). Other classifiers are also possible, such as classifiers disclosed elsewhere in this disclosure.
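As a minimal sketch of one of the classifier families named above, a Gaussian Naïve Bayes classifier can be trained on hypothetical RDM feature vectors to separate pre-restoration from post-restoration designs (the feature values and labels here are invented for illustration):

```python
import math

# Minimal Gaussian Naive Bayes sketch for rating whether an RDM feature
# vector resembles a post-restoration design.

def fit_gnb(X, y):
    """Fit per-class log-priors, feature means and variances."""
    model = {}
    for cls in set(y):
        rows = [x for x, label in zip(X, y) if label == cls]
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        variances = [max(sum((v - m) ** 2 for v in col) / n, 1e-9)
                     for col, m in zip(zip(*rows), means)]
        model[cls] = (math.log(n / len(y)), means, variances)
    return model

def predict_gnb(model, x):
    """Return the class with the highest Gaussian log-likelihood."""
    best_cls, best_lp = None, -math.inf
    for cls, (log_prior, means, variances) in model.items():
        lp = log_prior
        for v, m, var in zip(x, means, variances):
            lp += -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
        if lp > best_lp:
            best_cls, best_lp = cls, lp
    return best_cls

# Hypothetical normalized RDM features: [length_score, contour_score]
X = [[0.1, 0.2], [0.0, 0.1], [1.9, 2.1], [2.0, 1.8]]
y = ["post", "post", "pre", "pre"]
model = fit_gnb(X, y)
```

In a full pipeline the same interface would be wrapped in cross-validation, as described above.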
  • RDSs and/or ML classifiers can be used for several tasks during automated restoration design generation in accordance with the techniques of this disclosure, and are presented in enumerated format below:
  • Initialization: “target” restoration designs represent restoration designs that fix some issues with the initial malocclusion but still need to be optimized to result in an adequate restoration design. These restoration designs may allow for speed-up of the restoration design search process by allowing for difficult tooth restoration geometries to be generated upfront. Target restoration designs may be generated by performing simple operations (e.g., filling-in template shapes over “stub” teeth). Alternatively, the classifier may learn restoration designs based on data from existing restoration designs.
  • the classifier may select the best restoration design(s) by choosing the restoration design(s) that minimize one or more RDSs, or by choosing the restoration design(s) that a ML model rates as being most representative of a post-restoration restoration design.
  • Restoration designs may be automatically created by iteratively adjusting the geometries and/or structures of one or more teeth in a restoration arch.
  • the loss function can be defined as a single RDM or RDS, a linear or non-linear combination of RDM/RDS, or the continuous output of a ML classifier.
  • Output from a ML classifier of this disclosure may include, for example, the distance to a hyperplane in a SVM, the distance metric in a GMM, or a probability that the state is a member of the pre-restoration or post-restoration class in a two-class classifier.
  • the RDSs described in Section B indicate the deviation of RDMs from an ideal restoration design. An RDS between −1 and 1 indicates that the RDM lies within the range of ideal data, while values outside this range suggest that the RDM lies outside of ideal values and should be further improved. Thus, RDSs can be used to select RDMs in a restoration design that need to be further optimized.
  • RDSs can also be used to identify RDMs that are currently in the acceptable range and that should not be increased during optimization.
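A simple sketch of this triage, assuming the ±1 acceptability band described above (metric names are hypothetical):

```python
# Split RDMs by score: |RDS| > 1 needs optimization, |RDS| <= 1 is already
# in the acceptable range and should be preserved.

def triage_rdms(rds_by_name):
    to_optimize = {k: v for k, v in rds_by_name.items() if abs(v) > 1.0}
    to_preserve = {k: v for k, v in rds_by_name.items() if abs(v) <= 1.0}
    return to_optimize, to_preserve

scores = {"length_width": 0.4, "height_of_contour": -1.7, "shade": 2.3}
to_optimize, to_preserve = triage_rdms(scores)
```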
  • Stopping Criteria for Optimization: RDMs, RDSs, or the output of a ML classifier can be used to evaluate the acceptability of a restoration design. If the restoration design lies within an acceptable range, iterative optimization can be terminated.
  • Restoration Design generation can be designed to produce multiple candidate restoration designs, from which a subset can be selected. The subset may include the single best scoring restoration design or multiple restoration designs that achieve the best RDSs or RDSs above a threshold.
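One possible selection rule, assumed here for illustration, is to rank candidates by their worst-case |RDS| and keep either those within an acceptable range or the top few:

```python
# Sketch of selecting a subset of candidate restoration designs.
# Each candidate is a (design_id, {rdm_name: rds}) pair; the worst-case
# |RDS| ranking below is one plausible choice, not a prescribed rule.

def select_candidates(candidates, max_abs_rds=1.0, top_n=None):
    ranked = sorted(candidates,
                    key=lambda c: max(abs(s) for s in c[1].values()))
    if top_n is not None:
        return ranked[:top_n]
    return [c for c in ranked
            if max(abs(s) for s in c[1].values()) <= max_abs_rds]

candidates = [
    ("design_a", {"length": 0.2, "contour": 0.8}),
    ("design_b", {"length": 1.6, "contour": 0.1}),
    ("design_c", {"length": 0.5, "contour": 0.4}),
]
within_range = select_candidates(candidates)           # designs within +/-1
best_single = select_candidates(candidates, top_n=1)   # single best design
```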
  • RDSs can be used to identify a subset of restoration designs that represent certain tradeoffs in restoration design.
  • RDMs, RDSs, and/or ML classifiers can be used in interactive tools to assist clinicians during restoration design generation and evaluation.
  • RDM could be computed and displayed to an oral care provider during interactive restoration design generation. The oral care provider could use clinical expertise to determine which RDM may benefit from further improvement and could perform the appropriate tooth geometry and/or structure modifications to achieve these improvements. Such a tool could also be used to train new oral care providers in how to develop an acceptable restoration design.
  • scores for individual RDMs and/or ML classifier output could be displayed. This would provide information about the severity of issues with restoration design generation.
  • the ML classifier would alert the oral care provider if the restoration design did not closely resemble an ideal post-restoration restoration design.
  • RDSs would indicate which RDMs needed to be further refined to achieve a restoration design which is suitable for use in creating a dental restoration appliance.
  • Interface for Restoration Design Evaluation: RDMs, RDSs, and/or ML classifier output could be provided to a clinician and patient in a display interface alongside the restoration design. By providing this type of easily interpretable information, the systems of this disclosure may help the patient understand the target restoration design and promote treatment acceptance.
  • RDMs, RDSs, and/or ML classifier output could be used to demonstrate trade-offs between multiple candidate restoration designs.
  • systems of this disclosure may modify the shape and/or structure of a 3D representation of a tooth (e.g., a 3D mesh of a tooth), based at least in part on the dental restoration metrics and scores described herein.
  • One or more mesh elements of a tooth representation may be modified, including a 3D point (in the case of a 3D point cloud), a vertex, a face, an edge or a voxel (in the case of sparse processing).
  • a mesh element may be removed.
  • a mesh element may be added.
  • a mesh element may undergo modification to that mesh element’s position, to the mesh element’s orientation or to both.
  • one or more mesh elements may undergo smoothing in the course of forming the target restoration design.
  • One or more mesh elements may undergo deformations which are computed, at least in part, in consideration of one or more RDM.
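For illustration, a basic Laplacian smoothing pass over mesh vertices (a standard mesh operation; the disclosure does not prescribe this exact formulation) might look like:

```python
# Illustrative Laplacian smoothing: each vertex is moved a fraction `lam`
# toward the average position of its neighboring vertices.

def laplacian_smooth(vertices, neighbors, lam=0.5, iterations=1):
    verts = [list(v) for v in vertices]
    for _ in range(iterations):
        new_verts = []
        for i, v in enumerate(verts):
            if not neighbors[i]:
                new_verts.append(v)
                continue
            avg = [sum(verts[j][k] for j in neighbors[i]) / len(neighbors[i])
                   for k in range(3)]
            new_verts.append([v[k] + lam * (avg[k] - v[k]) for k in range(3)])
        verts = new_verts
    return verts

# Toy "spike": vertex 0 sits above four coplanar neighbors.
verts = [(0.0, 0.0, 1.0), (1.0, 0.0, 0.0), (-1.0, 0.0, 0.0),
         (0.0, 1.0, 0.0), (0.0, -1.0, 0.0)]
nbrs = [[1, 2, 3, 4], [0], [0], [0], [0]]
smoothed = laplacian_smooth(verts, nbrs)
```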
  • one or more RDM may be provided to a machine learning model (e.g., an encoder, a neural network - such as an autoencoder, a U-Net, a transformer or a network comprising convolution and/or pooling layers) which has been trained to generate a 3D oral care representation, such as a restoration tooth design (e.g., to design crown, root or both).
  • Such RDM may impart information about the shape and/or structure of the one or more teeth meshes (or point clouds, etc.) to a neural network which has been trained to generate representations of the teeth, thereby improving those representations.
  • an autoencoder (e.g., a variational autoencoder or a capsule autoencoder) may be trained to encode a 3D oral care representation into a latent vector. The autoencoder may also be trained to reconstruct the 3D oral care representation out of that latent vector.
  • the reconstructed 3D oral care representation may be compared to the inputted 3D oral care representation using a reconstruction error calculation.
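A minimal sketch of such a reconstruction error, assuming the reconstruction preserves point ordering (Chamfer distance is a common alternative when it does not):

```python
# Mean squared distance between corresponding 3D points of the input and
# the autoencoder's reconstruction. Point values here are hypothetical.

def reconstruction_error(original, reconstructed):
    assert len(original) == len(reconstructed)
    total = 0.0
    for p, q in zip(original, reconstructed):
        total += sum((a - b) ** 2 for a, b in zip(p, q))
    return total / len(original)

error = reconstruction_error([(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)],
                             [(0.0, 0.0, 0.1), (1.0, 1.0, 1.0)])
```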
  • FIG. 11 shows an example technique, using systems of this disclosure, to generate a tooth restoration design using RDM.
  • a pre-restoration tooth design may be received at step 1102 (e.g., a 3D representation).
  • the pre-restoration tooth design may be provided to a module at step 1104 which may compute one or more RDM (e.g., “Length and/or Width”, “Height of Contour”, or “Tooth Morphology”, among others) on that pre-restoration tooth.
  • a scoring function may be executed on the RDM, according to the descriptions herein.
  • a termination criterion is evaluated at step 1108. Examples of criteria include a maximum number of iterations. Other examples of termination criteria include evaluation of the RDM and/or score, to determine whether the RDM and/or score are within thresholds or tolerances (e.g., whether the tooth has become sufficiently wide or sufficiently long). If the termination criterion is not yet met at step 1108, then at step 1114 the shape and/or structure of the tooth is modified. After modifying the tooth in step 1114, one or more RDM are updated in step 1112. The technique then iterates as illustrated and described. After the termination criterion is met, the completed restoration design is outputted at step 1110.
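The control flow of FIG. 11 can be sketched schematically as follows; the scoring rule, modification step, and 8.0 mm target below are hypothetical stand-ins, not the modules of the disclosure:

```python
# Schematic sketch of the FIG. 11 loop: compute RDMs, score, check the
# termination criterion, modify the tooth, update RDMs, and iterate.

def generate_restoration(tooth, compute_rdms, score, modify, max_iters=100):
    rdms = compute_rdms(tooth)                 # step 1104: compute RDMs
    for _ in range(max_iters):
        if score(rdms) <= 0.0:                 # step 1108: termination check
            break
        tooth = modify(tooth, rdms)            # step 1114: modify shape
        rdms = compute_rdms(tooth)             # step 1112: update RDMs
    return tooth                               # step 1110: output design

# Toy example: the "tooth" is just a width in mm; widen until >= 8.0 mm.
result = generate_restoration(
    tooth=6.0,
    compute_rdms=lambda t: {"width": t},
    score=lambda r: 8.0 - r["width"],
    modify=lambda t, r: t + 0.5,
)
```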
  • Techniques of this disclosure may train an encoder-decoder structure to reconstruct a 3D oral care representation which is suitable for oral care appliance generation.
  • An encoder-decoder structure may comprise at least one encoder or at least one decoder.
  • Non-limiting examples of an encoder-decoder structure include a 3D U-Net, a transformer, a pyramid encoder-decoder or an autoencoder, among others.
  • Non-limiting examples of autoencoders include a variational autoencoder, a regularized autoencoder, a masked autoencoder or a capsule autoencoder.
  • An encoder-decoder structure may be trained to generate 3D oral care representations (e.g., tooth restoration designs, appliance components, and other examples of 3D oral care representations described herein).
  • Such 3D oral care representations may comprise point clouds, polylines, meshes, voxels and the like.
  • Such 3D oral care representations may be generated according to the requirements of the oral care arguments which may, in some implementations, be supplied to the generative model.
  • Oral care arguments may include oral care parameters as disclosed herein, or other real-valued, text-based or categorical inputs which specify intended aspects of the one or more 3D oral care representations which are to be generated.
  • oral care arguments may include oral care metrics, which may describe intended aspects of the one or more 3D oral care representations which are to be generated. Oral care arguments are specifically adapted to the implementations described herein. For example, the oral care arguments may specify the intended designs (e.g., including shape and/or structure) of 3D oral care representations which may be generated (or modified) according to techniques described herein. In short, implementations using the specific oral care arguments disclosed herein generate more accurate 3D oral care representations than implementations that do not use the specific oral care arguments.
  • a text encoder may encode a set of natural language instructions from the clinician (e.g., generate a text embedding). A text string may comprise tokens.
  • An encoder for generating text embeddings may, in some implementations, apply either mean-pooling or max-pooling between the token vectors.
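Mean-pooling and max-pooling over token vectors can be sketched as follows (the token values are hypothetical two-dimensional embeddings):

```python
# Form a single text embedding from per-token vectors via mean-pooling
# or max-pooling across the token dimension.

def mean_pool(token_vectors):
    n = len(token_vectors)
    return [sum(col) / n for col in zip(*token_vectors)]

def max_pool(token_vectors):
    return [max(col) for col in zip(*token_vectors)]

tokens = [[0.0, 1.0], [2.0, 3.0], [4.0, -1.0]]
sentence_embedding = mean_pool(tokens)
```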
  • a transformer (e.g., BERT or Siamese BERT) may, in some implementations, be used to generate text embeddings.
  • such a model for generating text embeddings may be trained using transfer learning (e.g., initially trained on another corpus of text, and then receive further training on text related to digital oral care).
  • Some text embeddings may encode text at the word level.
  • Some text embeddings may encode text at the token level.
  • a transformer for generating a text embedding may, in some implementations, be trained, at least in part, with a loss calculation which compares predicted outputs to ground truth outputs (e.g., softmax loss, multiple negatives ranking loss, MSE margin loss, cross-entropy loss or the like).
  • Non-text arguments, such as real values or categorical values, may be converted to text, and subsequently embedded using the techniques described herein.
  • For example, a natural language instruction may state: “the crown shape should take into consideration the shape of adjacent teeth and should have no more than x mm (e.g., 0.1 mm) space between the adjacent teeth.”
  • Techniques of this disclosure may, in some implementations, use PointNet, PointNet++, or derivative neural networks (e.g., networks trained via transfer learning using either PointNet or PointNet++ as a basis for training) to extract local or global neural network features from a 3D point cloud or other 3D representation (e.g., a 3D point cloud describing aspects of the patient’s dentition – such as teeth or gums).
  • Techniques of this disclosure may, in some implementations, use U-Nets to extract local or global neural network features from a 3D point cloud or other 3D representation.
  • input data may comprise 3D mesh data, 3D point cloud data, 3D surface data, 3D polyline data, 3D voxel data, or data pertaining to a spline (e.g., control points).
  • An encoder-decoder structure may comprise one or more encoders, or one or more decoders.
  • the encoder may take as input mesh element feature vectors for one or more of the inputted mesh elements, to improve the ability of the encoder to generate a representation of the input data.
  • Examples of encoder-decoder structures include U-Nets, autoencoders or transformers (among others).
  • a representation generation module may comprise one or more encoder-decoder structures (or portions of encoder-decoder structures – such as individual encoders or individual decoders).
  • a representation generation module may generate an information-rich (optionally reduced-dimensionality) representation of the input data, which may be more easily consumed by other generative or discriminative machine learning models.
  • a U-Net may comprise an encoder, followed by a decoder.
  • the architecture of a U-Net may resemble a U shape.
  • the encoder may extract one or more global neural network features from the input 3D representation, zero or more intermediate-level neural network features, or one or more local neural network features (at the most local level as contrasted with the most global level).
  • the output from each level of the encoder may be passed along to the input of corresponding levels of a decoder (e.g., by way of skip connections).
  • the decoder may operate on multiple levels of global-to-local neural network features. For instance, the decoder may output a representation of the input data which may contain global, intermediate or local information about the input data.
  • the U-Net may, in some implementations, generate an information-rich (optionally reduced-dimensionality) representation of the input data, which may be more easily consumed by other generative or discriminative machine learning models.
  • An autoencoder may be configured to encode the input data into a latent form.
  • An autoencoder may train an encoder to reformat the input data into a reduced-dimensionality latent form in between the encoder and the decoder, and then train a decoder to reconstruct the input data from that latent form of the data.
  • a reconstruction error may be computed to quantify the extent to which the reconstructed form of the data differs from the input data.
  • the latent form may, in some implementations, be used as an information-rich reduced-dimensionality representation of the input data which may be more easily consumed by other generative or discriminative machine learning models.
  • an autoencoder may be trained to input a 3D representation, encode that 3D representation into a latent form (e.g., a latent embedding), and then reconstruct a close facsimile of that input 3D representation at the output.
  • a transformer may be trained to use self-attention to generate, at least in part, representations of its input.
  • a transformer may encode long-range dependencies (e.g., encode relationships between a large number of inputs).
  • a transformer may comprise an encoder or a decoder.
  • Such an encoder may, in some implementations, operate in a bi-directional fashion or may operate a self-attention mechanism.
  • a decoder may, in some implementations, operate a masked self-attention mechanism, may operate a cross-attention mechanism, or may operate in an auto-regressive manner.
  • the self-attention operations of the transformers described herein may, in some implementations, relate different positions or aspects of an individual 3D oral care representation in order to compute a reduced-dimensionality representation of that 3D oral care representation.
  • the cross-attention operations of the transformers described herein may, in some implementations, mix or combine aspects of two (or more) different 3D oral care representations.
  • the auto-regressive operations of the transformers described herein may, in some implementations, consume previously generated aspects of 3D oral care representations (e.g., previously generated points, point clouds, transforms, etc.) as additional input when generating a new or modified 3D oral care representation.
  • the transformer may, in some implementations, generate a latent form of the input data, which may be used as an information-rich reduced-dimensionality representation of the input data, which may be more easily consumed by other generative or discriminative machine learning models.
  • an encoder-decoder structure may first be trained as an autoencoder. In deployment, one or more modifications may be made to the latent form of the input data.
  • This modified latent form may then proceed to be reconstructed by the decoder, yielding a reconstructed form of the input data which differs from the input data in one or more intended aspects.
  • Oral care arguments such as oral care parameters or oral care metrics may be supplied to the encoder, the decoder, or may be used in the modification of the latent form, to influence the encoder-decoder structure in generating a reconstructed form that has desired characteristics (e.g., characteristics which may differ from that of the input data).
  • Techniques of this disclosure may, in some instances, be trained using federated learning.
  • Federated learning may enable multiple remote clinicians to iteratively improve a machine learning model (e.g., validation of 3D oral care representations, mesh segmentation, mesh cleanup, other techniques which involve labeling mesh elements, coordinate system prediction, non-organic object placement on teeth, appliance component generation, tooth restoration design generation, techniques for placing 3D oral care representations, setups prediction, generation or modification of 3D oral care representations using autoencoders, generation or modification of 3D oral care representations using transformers, generation or modification of 3D oral care representations using diffusion models, 3D oral care representation classification, imputation of missing values), while protecting data privacy (e.g., the clinical data may not need to be sent “over the wire” to a third party). Data privacy is particularly important to clinical data, which is protected by applicable laws.
  • a clinician may receive a copy of a machine learning model, use a local machine learning program to further train that ML model using locally available data from the local clinic, and then send the updated ML model back to the central hub or third party.
  • the central hub or third party may integrate the updated ML models from multiple clinicians into a single updated ML model which benefits from the learnings of recently collected patient data at the various clinical sites. In this way, a new ML model may be trained which benefits from additional and updated patient data (possibly from multiple clinical sites), while those patient data are never actually sent to the 3rd party. Training on a local in-clinic device may, in some instances, be performed when the device is idle or otherwise be performed during off-hours (e.g., when patients are not being treated in the clinic).
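The hub-side aggregation step can be sketched as a weighted average of per-clinic model weights (federated averaging; the sample-count weighting is a common convention, assumed here rather than stated in the text, and a "model" is simplified to a flat list of weights):

```python
# Sketch of the central hub merging per-clinic model updates without any
# patient data leaving the clinics.

def federated_average(clinic_models, clinic_sample_counts):
    total = sum(clinic_sample_counts)
    merged = [0.0] * len(clinic_models[0])
    for weights, count in zip(clinic_models, clinic_sample_counts):
        for i, w in enumerate(weights):
            merged[i] += w * count / total
    return merged

# Two hypothetical clinics; clinic B trained on three times as much data.
merged = federated_average([[1.0, 2.0], [3.0, 4.0]], [1, 3])
```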
  • Devices in the clinical environment for the collection of data and/or the training of ML models for techniques described here may include intra-oral scanners, CT scanners, X-ray machines, laptop computers, servers, desktop computers or handheld devices (such as smart phones with image collection capability).
  • contrastive learning may be used to train, at least in part, the ML models described herein. Contrastive learning may, in some instances, augment samples in a training dataset to accentuate the differences in samples from different classes and/or increase the similarity of samples of the same class.
  • Machine learning models such as: U-Nets, encoders, autoencoders, pyramid encoder-decoders, transformers, or convolution and/or pooling layers, may be trained as a part of a method for hardware (or appliance component) placement.
  • Representation learning may train a first module to determine an embedded representation of a 3D oral care representation (e.g., encoding a mesh or point cloud into a latent form using an autoencoder, or using a U-Net, encoder, transformer, block of convolution and/or pooling layers or the like). That representation may comprise a reduced dimensionality form and/or information-rich version of the inputted 3D oral care representation.
  • a representation may be aided by the calculation of a mesh element feature vector for one or more mesh elements (e.g., each mesh element).
  • a representation may be computed for a hardware element (or appliance component).
  • Such representations are suitable to be provided to a second module, which may perform a generative task, such as oral care metric generation.
  • When a U-Net (among other neural networks) is trained to generate the representations of tooth meshes, the mesh convolution and/or mesh pooling techniques described herein enjoy invariance to rotations, translations and scaling of that tooth mesh.

Examples

  • Example 1. A method for generating one or more target shapes for a tooth, comprising: receiving, by processing circuitry of a computing device, a digital 3D model of the tooth in a pre-restoration state; executing, by the processing circuitry, a scoring function using one or more dental restoration metrics related to the tooth in the pre-restoration state as input to generate a score associated with the tooth in the pre-restoration state; modifying, by the processing circuitry, at least one of a structure or a shape of the tooth to form modified aspects of the tooth; and updating, by the processing circuitry, the at least one of the structure or the shape of the tooth based on the score and the modified aspects of the tooth to generate one or more post-restoration states of the tooth after implementation of a dental restoration treatment on the tooth.
  • Example 2. The method of Example 1, further comprising obtaining, by the processing circuitry, position information associated with the tooth from the 3D digital model.
  • Example 3. The method of Example 1, further comprising obtaining, by the processing circuitry, landmark information associated with the tooth from the 3D digital model.
  • Example 4. The method of Example 1, wherein the scoring function is represented by f(P(x)), where f is a function selected from one or more of linear functions, non-linear functions, and probabilistic framework functions.
  • Example 6. The method of Example 1, wherein modifying the at least one of the structure or the shape of the tooth comprises modifying at least one of the position or orientation of a mesh element, by the processing circuitry, within the digital 3D model of the tooth to generate the modified state of the tooth.
  • Example 7. The method of Example 6, wherein a mesh element comprises at least one of a point, a vertex, an edge, a face or a voxel.
  • Example 8. The method of Example 6, wherein a smoothing operation is applied to one or more mesh elements.
  • Example 9. The method of Example 6, wherein one or more mesh elements are removed from the tooth.
  • Example 10. The method of Example 6, wherein one or more mesh elements are added to the tooth.
  • Example 11. The method of Example 1, wherein the one or more post-restoration states of the tooth are used to generate a design for a dental restoration appliance.
  • Example 12. The method of Example 1, wherein the one or more post-restoration states of the tooth are used to generate a design for an orthodontic appliance.
  • Example 13. The method of Example 12, wherein the orthodontic appliance is a clear tray aligner (CTA).
  • Example 14. The method of Example 1, wherein the computing device is deployed at a clinical context, and wherein the method is performed at the clinical context.
  • Example 16. The method of Example 15, wherein the generated 3D oral care representation comprises one or more post-restoration states of the tooth.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Biomedical Technology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)

Abstract

Systems and techniques for visualizing oral care metrics are disclosed. The method includes receiving one or more three-dimensional (3D) oral care representations and using processing circuitry to compute the oral care metrics based on those representations. The processing circuitry further converts the computed oral care metrics into a multidimensional format. To improve visualization, the dimensionality of the oral care metrics in the multidimensional format is reduced, forming a reduced-dimensionality version of the metrics. Finally, the processing circuitry renders the reduced-dimensionality version of the oral care metrics in a visualized form. These systems and techniques enable efficient visualization of oral care metrics, providing valuable insights and facilitating improved analysis and decision-making in oral care applications.
PCT/IB2023/062707 2022-12-14 2023-12-14 Calcul et visualisation de métriques dans des soins buccaux numériques Ceased WO2024127313A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP23829131.4A EP4634931A1 (fr) 2022-12-14 2023-12-14 Calcul et visualisation de métriques dans des soins buccaux numériques

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202263432627P 2022-12-14 2022-12-14
US63/432,627 2022-12-14
US202363461236P 2023-04-21 2023-04-21
US63/461,236 2023-04-21

Publications (1)

Publication Number Publication Date
WO2024127313A1 true WO2024127313A1 (fr) 2024-06-20

Family

ID=89386143

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2023/062707 Ceased WO2024127313A1 (fr) 2022-12-14 2023-12-14 Calcul et visualisation de métriques dans des soins buccaux numériques

Country Status (2)

Country Link
EP (1) EP4634931A1 (fr)
WO (1) WO2024127313A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119782721A (zh) * 2025-03-10 2025-04-08 成都信息工程大学 动态生理指标填补方法、装置及系统和存储介质
CN119952934A (zh) * 2025-04-08 2025-05-09 广东材通实业有限公司 一种pvc线缆管的成型加工装置及方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020026117A1 (fr) 2018-07-31 2020-02-06 3M Innovative Properties Company Procédé de génération automatisée de configurations finales de traitement orthodontique
US20210259808A1 (en) 2018-07-31 2021-08-26 3M Innovative Properties Company Method for automated generation of orthodontic treatment final setups
US20220249201A1 (en) 2018-07-31 2022-08-11 3M Innovative Properties Company Dashboard for visualizing orthodontic metrics during setup design
WO2021110938A1 (fr) * 2019-12-06 2021-06-10 3Shape A/S Procédé de génération d'une courbe de bord pour dispositifs dentaires
WO2021245480A1 (fr) 2020-06-03 2021-12-09 3M Innovative Properties Company Système pour générer un traitement d'aligneur orthodontique par étapes
WO2022123402A1 (fr) 2020-12-11 2022-06-16 3M Innovative Properties Company Traitement automatisé de balayages dentaires à l'aide d'un apprentissage profond géométrique

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
"Equidistant and Uniform Data Augmentation for 3D Objects", IEEE ACCESS
ASHISH VASWANI, NOAM SHAZEER, NIKI PARMAR, JAKOB USZKOREIT, LLION JONES, AIDAN N. GOMEZ, LUKASZ KAISER, ILLIA POLOSUKHIN: "Attention Is All You Need", 2017
J. M. KANTER, K. VEERAMACHANENI: "Deep feature synthesis: Towards automating data science endeavors", 2015 IEEE INTERNATIONAL CONFERENCE ON DATA SCIENCE AND ADVANCED ANALYTICS (DSAA), 2015, pages 1 - 10, XP032826310, DOI: 10.1109/DSAA.2015.7344858
P. CIGNONI, C. ROCCHINI, R. SCOPIGNO: "Metro: measuring error on simplified surfaces", Computer Graphics Forum, vol. 17, Blackwell Publishers, June 1998, pages 167 - 174
TONIONI A ET AL.: "Learning to detect good 3D keypoints", INT J COMPUT. VIS., vol. 126, 2018, pages 1 - 20, XP036405732, DOI: 10.1007/s11263-017-1037-3

Also Published As

Publication number Publication date
EP4634931A1 (fr) 2025-10-22

Similar Documents

Publication Publication Date Title
WO2024127318A1 (fr) Denoising diffusion models for digital oral care
EP4634798A1 (fr) Neural network techniques for appliance creation in digital oral care
WO2024127309A1 (fr) Autoencoders for final setups and intermediate staging of clear tray aligners
WO2024127303A1 (fr) Reinforcement learning for final setups and intermediate staging in clear tray aligners
WO2024127316A1 (fr) Autoencoders for processing 3D representations in digital oral care
WO2024127311A1 (fr) Machine learning models for dental restoration design generation
US20250364117A1 (en) Mesh Segmentation and Mesh Segmentation Validation In Digital Dentistry
US20250366959A1 (en) Geometry Generation for Dental Restoration Appliances, and the Validation of That Geometry
WO2024127313A1 (fr) Computation and visualization of metrics in digital oral care
EP4634934A1 (fr) Geometric deep learning for final setups and intermediate staging in clear tray aligners
WO2024127304A1 (fr) Transformers for final setups and intermediate staging in clear tray aligners
US20250363269A1 (en) Fixture Model Validation for Aligners in Digital Orthodontics
US20250359964A1 (en) Coordinate System Prediction in Digital Dentistry and Digital Orthodontics, and the Validation of that Prediction
US20250366958A1 (en) Validation for Rapid Prototyping Parts in Dentistry
EP4540833A1 (fr) Validation of tooth setups for aligners in digital orthodontics
EP4539771A1 (fr) Bracket and attachment placement in digital orthodontics, and the validation of those placements
WO2024127314A1 (fr) Imputing of parameter values or metric values in digital oral care
WO2024127308A1 (fr) Classification of 3D oral care representations
EP4633527A1 (fr) Pose transfer techniques for 3D oral care representations
WO2025126117A1 (fr) Machine learning models for predicting data structures relating to interproximal reduction
WO2024127310A1 (fr) Autoencoders for the validation of 3D oral care representations
EP4633530A1 (fr) Setups comparison for final setups and intermediate staging of clear tray aligners
WO2025074322A1 (fr) Combined orthodontic and dental restoration treatments
WO2025257747A1 (fr) Machine learning models for layered dental restoration design generation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 23829131; Country of ref document: EP; Kind code of ref document: A1
WWE Wipo information: entry into national phase
    Ref document number: 2023829131; Country of ref document: EP
NENP Non-entry into the national phase
    Ref country code: DE
ENP Entry into the national phase
    Ref document number: 2023829131; Country of ref document: EP; Effective date: 20250714
WWP Wipo information: published in national office
    Ref document number: 2023829131; Country of ref document: EP