WO2025151667A1 - Blood and saliva handling for intraoral scanning - Google Patents
Blood and saliva handling for intraoral scanning
Info
- Publication number
- WO2025151667A1 (PCT/US2025/010981)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- bodily fluid
- images
- intraoral
- color
- point cloud
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0082—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence adapted for particular medical purposes
- A61B5/0088—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence adapted for particular medical purposes for oral or dental tissue
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61C—DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
- A61C9/00—Impression cups, i.e. impression trays; Impression methods
- A61C9/004—Means or methods for taking digitized impressions
- A61C9/0046—Data acquisition means or methods
- A61C9/0053—Optical means or methods, e.g. scanning the teeth by a laser or light beam
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61C—DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
- A61C9/00—Impression cups, i.e. impression trays; Impression methods
- A61C9/004—Means or methods for taking digitized impressions
- A61C9/0046—Data acquisition means or methods
- A61C9/0053—Optical means or methods, e.g. scanning the teeth by a laser or light beam
- A61C9/006—Optical means or methods, e.g. scanning the teeth by a laser or light beam projecting one or more stripes or patterns on the teeth
Definitions
- Some procedures also call for removable prosthetics to be fabricated to replace one or more missing teeth, such as a partial or full denture, in which case the surface contours of the areas where the teeth are missing need to be reproduced accurately so that the resulting prosthetic fits over the edentulous region with even pressure on the soft tissues.
- the dental site is prepared by a dental practitioner, and a positive physical model of the dental site is constructed using known methods.
- the dental site may be scanned to provide 3D data of the dental site (i.e. in the form of intraoral images such as height maps).
- the virtual or real model of the dental site is sent to the dental lab, which manufactures the prosthesis based on the model.
- the design of the prosthesis may be less than optimal.
- the coping geometry has to be altered to avoid the collision, which may result in the coping design being less than optimal.
- the area of the preparation containing a finish line lacks definition, it may not be possible to properly determine the finish line and thus the lower edge of the coping may not be properly designed. Indeed, in some circumstances, the model is rejected and the dental practitioner then rescans the dental site, or reworks the preparation, so that a suitable prosthesis may be produced.
- a virtual model of the oral cavity is also beneficial.
- Such a virtual model may be obtained by scanning the oral cavity directly, or by producing a physical model of the dentition, and then scanning the model with a suitable scanner.
- obtaining a three-dimensional (3D) model of a dental site in the oral cavity is an initial procedure that is performed.
- the 3D model is a virtual model
- a method comprises receiving scan data comprising an intraoral image during an intraoral scan of a dental site, identifying a representation of a foreign object in the intraoral image based on an analysis of the scan data, modifying the intraoral image by removing the representation of the foreign object from the intraoral image, receiving additional scan data comprising a plurality of additional intraoral images of the dental site during the intraoral scan, and generating a virtual three-dimensional (3D) model of the dental site using the modified intraoral image and the plurality of additional intraoral images.
- FIGS. 2B-2C comprise schematic illustrations of positioning configurations for cameras and structured light projectors of an intraoral scanner, in accordance with some applications of the present disclosure.
- FIG. 2E is a flow chart outlining a method for generating a digital three-dimensional image, in accordance with some applications of the present disclosure.
- FIGS. 2G-J are schematic illustrations depicting a simplified example of the steps of FIG. 2F, in accordance with some applications of the present disclosure.
- FIG. 6 illustrates a flow diagram for a method of addressing bodily fluids in intraoral scan data, in accordance with embodiments of the present disclosure.
- FIG. 7 illustrates a flow diagram for a method of identifying bodily fluids in intraoral scan data, in accordance with embodiments of the present disclosure.
- an alert or notification is generated to notify a user of the identified bodily fluids and/or suspected bodily fluids.
- the alert may be generated, for example, if an amount of bodily fluids and/or suspected bodily fluids exceeds a threshold.
- the alert may include a recommendation for a dental practitioner to remove the bodily fluids (e.g., by wiping or using suction) and/or take other corrective action. Responsive to such an alert, a dental practitioner may pause intraoral scanning and remove the bodily fluids before proceeding with intraoral scanning.
- Embodiments may improve the accuracy of 3D models of a patient’s dental arches, and may reduce a number of returned jobs.
- a returned job may occur when a lab or other facility that receives a 3D model of a patient’s dental arches determines that a quality of the 3D models is too low to use for generation of dental appliances (e.g., such as orthodontic aligners, palatal expanders, caps, crowns, bridges, etc.) and notifies a dental practitioner to repeat an intraoral scan of the patient’s dental arches to generate improved 3D models of those dental arches.
- an active intraoral scanner shines structured light on scanned surfaces and triangulates the location of geometric deformations.
- when structured light passes through fluids such as saliva and blood, reflections and refractions may alter the optical path of the structured light and cause geometrical deformations in a resultant 3D model.
- Intraoral scanners that use structured light projection may be particularly vulnerable to inaccuracies caused by bodily fluids due to the fact that structured light projectors and cameras that capture images of the reflected light on intraoral surfaces are typically at angles to one another.
- intraoral scanners that use confocal optics for determining depth may have light projectors and cameras that are parallel to an imaging axis, reducing sensitivity to geometrical distortions caused by bodily fluids (e.g., saliva bubbles, pooled blood, pooled saliva, etc.).
- Embodiments identify bodily fluids in intraoral scan data (e.g., point clouds generated from intraoral scan data) during scanning, and enable depictions of the bodily fluids to be removed during such scanning. This enables a dental practitioner to see which areas have been affected by buildup of bodily fluids such as blood and/or saliva, and to address such areas and then continue scanning with the bodily fluid removed.
- One advantage of this approach is that it reduces the confusion dentists may have with respect to handling saliva and/or blood.
- the approach described herein may also save dentists time during scanning by calling attention to areas in which there is too much blood and/or saliva.
- a sequence of operations is performed to identify bodily fluids in intraoral scan data, and to remove representations of such bodily fluids from the intraoral scan data.
- a first operation detects suspected saliva pixels in 2D images or 3D images (e.g., color 2D images) of a dental site generated by an intraoral scanner.
- the images may include a set of color images each generated by a different camera of an intraoral scanner at a same time (or at about a same time).
- a machine learning model such as a neural network may have been trained from a large set of labeled images to solve a binary image segmentation problem, where each pixel of an input image may be assigned a probability score which estimates the probability of that pixel being a bodily fluid pixel.
- the probability scores may be captured in a segmentation mask or probability mask for each image.
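- As an illustration of the segmentation step just described, the following is a minimal sketch of applying a trained binary segmentation network to the per-camera color images to obtain per-pixel bodily fluid probability masks. It assumes a PyTorch model that outputs one logit per pixel; the function name, tensor shapes, and the sigmoid output layer are illustrative assumptions rather than details taken from the disclosure.

```python
import torch

def bodily_fluid_masks(model: torch.nn.Module, color_images: torch.Tensor) -> torch.Tensor:
    """Apply a trained binary-segmentation network to a batch of 2D color images.

    color_images: float tensor of shape (N, 3, H, W), one image per camera.
    Returns a probability mask of shape (N, H, W); each value estimates the
    probability that the corresponding pixel depicts a bodily fluid.
    """
    model.eval()
    with torch.no_grad():
        logits = model(color_images)   # assumed output shape: (N, 1, H, W)
        probs = torch.sigmoid(logits)  # map logits to [0, 1] probabilities
    return probs.squeeze(1)            # per-image segmentation/probability mask
```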
- the amount of suspected bodily fluid pixels may be compared to one or more thresholds, and an alert may be generated if the amount of suspected bodily fluid pixels exceeds the one or more thresholds.
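- One simple way to realize this alert condition is to compare the fraction of suspected bodily fluid pixels in a mask against a limit, as in the sketch below. It assumes numpy masks, and the example thresholds (0.5 per pixel, 5% of the view) are assumptions, not values specified in the disclosure.

```python
import numpy as np

def maybe_alert(prob_mask: np.ndarray, pixel_threshold: float = 0.5,
                area_fraction: float = 0.05) -> str | None:
    """Return an alert message when too many pixels look like bodily fluid."""
    suspected = float((prob_mask >= pixel_threshold).mean())
    if suspected >= area_fraction:
        return (f"Bodily fluid suspected on {suspected:.0%} of the view; "
                "consider wiping or suction before continuing the scan.")
    return None
```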
- structured light points or features from images of the dental site captured using structured light projection may be projected on the generated segmentation mask or masks.
- Each of the images may be generated by one of the multiple cameras of the intraoral scanner, and may correspond to one of the color images generated by the same camera. The field of view of the multiple cameras may overlap. Accordingly, each structured light point may be viewed by multiple cameras.
- Each structured light point may be a 3D point generated by solving a correspondence algorithm for points or features captured in the multiple images.
- a combined score (e.g., an average score) may be computed per 3D point.
- the combined score may be a combined probability of the 3D point being a bodily fluid point.
- Use of the combined score for identification of bodily fluids may improve bodily fluid detection stability.
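- The combined score can be as simple as averaging, for each 3D point, the bodily fluid probabilities of the pixels that observed it, and then thresholding the result. The sketch below is a minimal numpy version; the 0.5 threshold and the function names are assumptions for illustration.

```python
import numpy as np

def combined_fluid_scores(pixel_probs_per_point: list[np.ndarray]) -> np.ndarray:
    """Average, per 3D point, the bodily fluid probabilities of the pixels
    (one per observing camera) associated with that point."""
    # Unweighted mean; a weighted mean or median could be substituted.
    return np.array([probs.mean() for probs in pixel_probs_per_point])

def suspected_fluid_points(scores: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Flag 3D points whose combined score exceeds the probability threshold."""
    return scores >= threshold
```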
- a thresholding operation may be performed to determine, for each of the 3D points, whether that point is a bodily fluid candidate.
- each 3D point in a generated point cloud is associated with one or more corresponding pixels in captured images, where each pixel has a determined probability value indicating a probability of that pixel depicting a bodily fluid.
- a voting algorithm is used to determine whether a 3D point is a bodily fluid point based on bodily fluid classifications (e.g., probability values) for the pixels associated with the 3D point.
- in a third operation, spatial evidence of bodily fluids may be collected, taking into account the fact that saliva bubbles and bodily fluids are typically continuous. Specifically, for each suspected bodily fluid point determined from the second operation, the density of additional suspected bodily fluid points around or proximate to that suspected bodily fluid point may be computed. A region considered for determining density may be a circular region centered on the suspected bodily fluid point in embodiments. Bodily fluid points in high density regions of bodily fluid points may be confirmed as bodily fluid points in embodiments.
- an inclusion circle may be computed.
- all points in these inclusion circles are confirmed as bodily fluid points.
- bodily fluid points may then be excluded from a surface reconstruction operation, which induces holes in these regions of a reconstructed 3D mesh of the dental arch.
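- A minimal sketch of the density check from the third operation is shown below: for each suspected point, other suspected points within a radius are counted, and only points in sufficiently dense neighborhoods are confirmed. It uses scipy's k-d tree; the 1 mm radius and 10-neighbor minimum are assumed example values, not parameters from the disclosure.

```python
import numpy as np
from scipy.spatial import cKDTree

def confirm_by_density(points: np.ndarray, suspected: np.ndarray,
                       radius: float = 1.0, min_neighbors: int = 10) -> np.ndarray:
    """Confirm suspected bodily fluid points that lie in dense fluid regions.

    points:    (N, 3) array of 3D point coordinates (e.g., in mm).
    suspected: (N,) boolean mask of suspected bodily fluid points.
    A suspected point is confirmed when at least min_neighbors other suspected
    points fall within radius of it.
    """
    suspect_idx = np.flatnonzero(suspected)
    tree = cKDTree(points[suspect_idx])
    # Count suspected neighbours inside the radius, excluding the point itself.
    counts = np.array([len(tree.query_ball_point(points[i], radius)) - 1
                       for i in suspect_idx])
    confirmed = np.zeros(len(points), dtype=bool)
    confirmed[suspect_idx] = counts >= min_neighbors
    return confirmed
```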
- Embodiments are described herein that accurately identify and filter out or remove depictions of bodily fluids from intraoral scan data and/or from 3D surfaces generated from intraoral scan data.
- embodiments are described that perform particularly well in identifying and removing or filtering out depictions of bodily fluids from intraoral scan data that has been generated using structured light projection.
- although embodiments are discussed with reference to identifying and removing regions depicting bodily fluids, the techniques discussed herein with regard to bodily fluids may also apply to other types of dental object classes.
- the same or similar techniques may be used to identify and remove depictions of foreign objects (e.g., dental tools, hands, fingers, etc.), moving tissue (e.g., tongue, lips, cheeks, etc.), excess tissue, etc.
- machine learning models may be trained to identify one or more other types of dental classes to be removed from intraoral scan data and/or 3D surfaces. Accordingly, though an emphasis is placed on bodily fluids in the following discussion, it should be understood that embodiments also apply to other types of dental object classes.
- FIG. 1 illustrates one embodiment of a system 101 for performing intraoral scanning and/or generating a three-dimensional (3D) surface and/or a virtual three- dimensional model of a dental site that identifies and filters out depictions of bodily fluids.
- System 101 includes a dental office 108 and optionally one or more dental labs 110.
- the dental office 108 and the dental lab 110 each include a computing device 105, 106, where the computing devices 105, 106 may be connected to one another via a network 180.
- the network 180 may be a local area network (LAN), a public wide area network (WAN) (e.g., the Internet), a private WAN (e.g., an intranet), or a combination thereof.
- Computing device 105 may be coupled to one or more intraoral scanner 150 (also referred to as a scanner) and/or a data store 125 via a wired or wireless connection.
- multiple scanners 150 in dental office 108 wirelessly connect to computing device 105.
- scanner 150 is wirelessly connected to computing device 105 via a direct wireless connection.
- scanner 150 is wirelessly connected to computing device 105 via a wireless network.
- the wireless network is a Wi-Fi network.
- the wireless network is a Bluetooth network, a Zigbee network, or some other wireless network.
- the wireless network is a wireless mesh network, examples of which include a Wi-Fi mesh network, a Zigbee mesh network, and so on.
- computing device 105 may be physically connected to one or more wireless access points and/or wireless routers (e.g., Wi-Fi access points/routers).
- Intraoral scanner 150 may include a wireless module such as a Wi-Fi module, and via the wireless module may join the wireless network via the wireless access point/router.
- Computing device 106 may also be connected to a data store (not shown).
- the data stores may be local data stores and/or remote data stores.
- Computing device 105 and computing device 106 may each include one or more processing devices, memory, secondary storage, one or more input devices (e.g., such as a keyboard, mouse, tablet, touchscreen, microphone, camera, and so on), one or more output devices (e.g., a display, printer, touchscreen, speakers, etc.), and/or other hardware components.
- scanner 150 includes an inertial measurement unit (IMU).
- the IMU may include an accelerometer, a gyroscope, a magnetometer, a pressure sensor and/or other sensor.
- scanner 150 may include one or more micro-electromechanical system (MEMS) IMU.
- the IMU may generate inertial measurement data (also referred to as movement data), including acceleration data, rotation data, and so on.
- Computing device 105 and/or data store 125 may be located at dental office 108 (as shown), at dental lab 110, or at one or more other locations such as a server farm that provides a cloud computing service.
- the manner in which the oral cavity of a patient is to be scanned may depend on the procedure to be applied thereto. For example, if an upper or lower denture is to be created, then a full scan of the mandibular or maxillary edentulous arches may be performed. In contrast, if a bridge is to be created, then just a portion of a total arch may be scanned which includes an edentulous region, the neighboring preparation teeth (e.g., abutment teeth) and the opposing arch and dentition. Alternatively, full scans of upper and/or lower dental arches may be performed if a bridge is to be created.
- one or more of the 3D points of a generated 3D point cloud may be determined from multiple images of a set of images. Each of those images may have one or more pixels that map to one or more of the 3D points, and each of those pixels may have its own associated probabilities (i.e., of depicting one or more dental object classes). Accordingly, some 3D points may have multiple probabilities assigned to them. In an example, a 3D point that was determined from 3 images may have three different probabilities of representing blood, three of representing saliva, three of representing a tooth, three of representing gingiva, and so on. Processing logic may combine the multiple probabilities into a single combined probability score.
- Such combination of multiple probabilities may be based on a weighted or unweighted averaging of the probabilities associated with that 3D point.
- other statistical techniques may be used to determine the combined probability, such as determining a median probability value.
- a voting algorithm is used to generate the combined probability score, or to otherwise determine suspect bodily fluid points.
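- Complementing the averaging approach sketched earlier, a per-point vote can be taken over the pixels mapped to a 3D point, as in the minimal sketch below; the per-pixel threshold and the minimum vote count are assumed example parameters.

```python
import numpy as np

def vote_bodily_fluid(pixel_probs: np.ndarray, pixel_threshold: float = 0.5,
                      min_votes: int = 2) -> bool:
    """Majority-style vote: the 3D point is a suspected bodily fluid point if
    enough of its associated pixels individually exceed the probability threshold."""
    return int((pixel_probs >= pixel_threshold).sum()) >= min_votes
```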
- 3D points having a probability of depicting a bodily fluid that is above a probability threshold are identified as suspected bodily fluid points.
- 3D points having a probability of depicting a tooth that exceeds a threshold may be identified as tooth points
- 3D points having a probability of depicting a gingiva that exceeds a threshold may be identified as gingival points, and so on.
- processing logic of the intraoral scanner and/or computing device may update suspected bodily fluid points based on one or more criteria.
- a criterion that is used to confirm or negate suspected bodily fluid points is a density or amount of surrounding bodily fluid points.
- the images generated by the intraoral scanner have sparsely populated points. For example, if a projected light pattern that is captured in images is a pattern of sparse dots, then the images have sparse information for captured features of the light pattern. Accordingly, there may be insufficient information to confirm or negate a suspected bodily fluid point from a single set of images.
- a moving window of 3D point dental class probability information is determined.
- the moving window may include a record of 3D point dental class probabilities (e.g., masks) for one or more 3D point clouds generated prior to a current 3D point cloud under process as well as one or more 3D point clouds generated after the current 3D point cloud under process.
- the moving window may extend, for example, a few (e.g., 1-3) milliseconds into the future and/or past.
- a moving window for a current 3D point cloud may be based in part on one or more 3D point clouds generated from sets of images captured after the set of images used to generate the current 3D point cloud. Accordingly, there may be a delay in performance of the operations of block 408 until those additional sets of images have been generated and processed.
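- One way to realize such a moving window is to buffer per-point-cloud classification records and only finalize a point cloud once a few later records are available, as in the sketch below; the class, its method names, and the window sizes are illustrative assumptions rather than disclosed details.

```python
from collections import deque

class PointCloudWindow:
    """Buffer per-point-cloud dental-class probability records so that a cloud is
    only processed once `future` newer records are also available."""

    def __init__(self, past: int = 2, future: int = 2):
        self.future = future
        self.records = deque(maxlen=past + 1 + future)  # past + current + future

    def push(self, record):
        """Add the newest record; return the record that now has enough future
        context to be processed, or None if no record is ready yet."""
        self.records.append(record)
        if len(self.records) > self.future:
            return self.records[-(self.future + 1)]
        return None
```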
- processing logic may determine an amount or density of proximate or surrounding suspected bodily fluid points. In one embodiment, processing logic determines an amount of other suspected bodily fluid points within a threshold distance (e.g., 0.5 mm, 1 mm, 1.5 mm, etc.) from a suspected bodily fluid point in question.
- the magnitude of the threshold distance may be optimized to a specific domain and/or device, and the provided values are merely examples. If the number or density of proximate or surrounding suspected bodily fluid points meets or exceeds a threshold, then the suspected bodily fluid point may be confirmed as a bodily fluid point.
- the threshold number of points may be optimized to a specific domain and/or device.
- Example threshold numbers of points include 5 points, 10 points, 15 points, 20 points, 25 points, 30 points, 100 points, and so on. If the number or density of proximate or surrounding suspected bodily fluid points is below the threshold, then the suspected bodily fluid point may be determined not to be a bodily fluid point. In some embodiments, statistics of suspected bodily fluid points are gathered over time. Clustering may then be performed to identify clusters of suspected bodily fluid points (e.g., points having a probability of being a bodily fluid that is greater than a threshold) that satisfy one or more criteria, confirming those suspected bodily fluid points in the clusters as true bodily fluid points. In some embodiments, a clustering algorithm such as density-based spatial clustering of applications with noise (DBSCAN) is used for this purpose.
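- The clustering variant mentioned above can be sketched with scikit-learn's DBSCAN, keeping only suspected points that fall inside a dense cluster; the eps and min_samples values below are assumed examples and would be tuned to the scanner and domain.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def confirm_by_clustering(suspect_xyz: np.ndarray,
                          eps_mm: float = 1.0, min_samples: int = 10) -> np.ndarray:
    """Cluster suspected bodily fluid points; DBSCAN labels noise points as -1,
    so any point with a non-negative label belongs to a dense cluster and is
    confirmed as a bodily fluid point.

    suspect_xyz: (M, 3) coordinates of the suspected bodily fluid points.
    Returns a boolean mask of length M (True = confirmed).
    """
    labels = DBSCAN(eps=eps_mm, min_samples=min_samples).fit_predict(suspect_xyz)
    return labels != -1
```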
- bodily fluid points in a 3D point cloud should be proximate to other bodily fluid points.
- Suspected bodily fluid points that are not immediately proximate to other bodily fluid points may not actually depict a bodily fluid.
- processing logic of the intraoral scanner and/or computing device may perform one or more operations to update bodily fluid points.
- processing logic draws a shape or boundary around the bodily fluid point in the 3D point cloud.
- the shape may be a 2D shape such as a circle or ellipse, or may be a 3D shape such as a sphere, hemisphere, or other rounded shape.
- the shape may be a set size, and may be centered on the bodily fluid point.
- the shape may have a size of 0.5 mm, 1 mm, 1.5 mm, or other size.
- the size and/or shape of the boundary may be optimized to a specific domain and/or device.
- the shapes/boundaries drawn around the bodily fluid points are combined, and a superposition of the shapes/boundaries may create general, organic or random shapes.
- processing logic of the intraoral scanner and/or computing device may make a final determination of a bodily fluid region (e.g., including multiple bodily fluid points).
- any points that are within the drawn shape(s) may be updated so that they are also classified as bodily fluid points.
- processing logic may ensure that, in general, regions of bodily fluids are contiguous (e.g., avoiding a situation where points classified as not bodily fluid lie between other points classified as bodily fluid points), which corresponds to how such bodily fluids form in a person’s mouth in real life. Accordingly, use of the shapes around classified bodily fluid points to update bodily fluid point classifications further increases the accuracy of bodily fluid detection.
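- The shape-superposition step can be sketched as a dilation of the confirmed bodily fluid labels: every point inside a sphere drawn around a confirmed point is relabeled as bodily fluid, which keeps the final region contiguous. The 0.5 mm radius and function name below are assumed example choices.

```python
import numpy as np
from scipy.spatial import cKDTree

def grow_fluid_region(points: np.ndarray, fluid_mask: np.ndarray,
                      shape_radius: float = 0.5) -> np.ndarray:
    """Superimpose a sphere of shape_radius around every confirmed bodily fluid
    point and mark every enclosed point as bodily fluid as well."""
    tree = cKDTree(points)
    grown = fluid_mask.copy()
    for idx in np.flatnonzero(fluid_mask):
        grown[tree.query_ball_point(points[idx], shape_radius)] = True
    return grown
```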
- processing logic of the intraoral scanner and/or computing device may remove the bodily fluid points from the 3D point cloud. Alternatively, processing logic may simply ignore or filter out the bodily fluid points without actually removing those points from the 3D point cloud. If the operations of blocks 404-418 were performed by an intraoral scanner, then the 3D point cloud may be provided to a computing device for further processing. Alternatively, the captured intraoral scan data 402 may have been transmitted to a computing device, and the computing device may have performed the operations of blocks 404-418.
- processing logic may register and stitch the 3D point cloud with other 3D point clouds and/or a generated 3D surface (e.g., a 3D surface that has been generated by registering and stitching together multiple 3D point clouds, each associated with or constituting a discrete intraoral scan).
- the bodily fluid points of the 3D point cloud may not be used in registering and/or stitching the 3D point cloud to the other 3D point clouds and/or 3D surface (e.g., 3D mesh).
- the 3D surface may include a void corresponding to the detected bodily fluid region (e.g., to the bodily fluid points).
- processing logic may display the generated 3D surface.
- the shown 3D surface may include one or more voids caused by the removed bodily fluid points.
- the voids may be emphasized or highlighted on a display. This may draw a dental practitioner’s attention to the voids.
- the user interface may also output a notice or alert to the display when a threshold amount of bodily fluid and/or suspected bodily fluid has been detected (e.g., when a threshold number of bodily fluid points has been detected, when a bodily fluid region having a threshold size has been detected, etc.).
- the alert may include a recommendation for the dental practitioner to wipe away or otherwise remove the bodily fluid from the patient’s mouth.
- FIG. 5 illustrates a flow diagram 500 for a method of identifying and removing representations of bodily fluids in intraoral scan data, in accordance with embodiments of the present disclosure.
- a computing device receives intraoral scan data comprising a first set of images generated using structured light projection and a corresponding second set of images (e.g., set of 2D color images).
- processing logic processes the second set of images using one or more trained machine learning models to generate, for each image in the second set of images, a mask such as a segmentation mask.
- the mask generated for an image may include, for each pixel of the image, a probability of that pixel being a bodily fluid pixel (e.g., a blood pixel or a saliva pixel).
- processing logic generates a 3D point cloud from the first set of intraoral images that were generated using structured light projection. Each image in the first set of images may have been generated by a different camera at a same point in time.
- the 3D point cloud may be generated by solving a correspondence problem, where the solution provides triangulation data that indicates depths for each of the 3D points.
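- Once pixel correspondences for a projected feature have been established across the overlapping cameras, the 3D point can be recovered by least-squares triangulation of the camera rays, as in the sketch below; the ray-based formulation and function name are illustrative assumptions rather than the disclosed correspondence algorithm itself.

```python
import numpy as np

def triangulate_rays(origins: np.ndarray, directions: np.ndarray) -> np.ndarray:
    """Least-squares triangulation of one 3D point from two or more camera rays.

    origins:    (K, 3) camera centers of the cameras that observed the feature.
    directions: (K, 3) unit ray directions through the matched feature pixels.
    Solves sum_i (I - d_i d_i^T) x = sum_i (I - d_i d_i^T) o_i, i.e. finds the
    point closest (in least squares) to all of the rays.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        P = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)
```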
- processing logic maps probabilities from the masks associated with the images of the second set of images to points in the 3D point cloud.
- processing logic determines points in the 3D point cloud that are suspected bodily fluid points from the mapped probabilities. This may include determining an average probability, a median probability, or some other statistical combination of probabilities for each 3D point based on the probabilities of the pixels mapped to that 3D point.
- processing logic updates suspected bodily fluid points based on a density of proximate suspected bodily fluid points (e.g., optionally using a clustering algorithm).
- the proximate suspected bodily fluid points may be from the current 3D point cloud and/or one or more additional 3D point clouds generated from intraoral scan data captured before and/or after the intraoral scan data used to generate the current 3D point cloud in embodiments.
- processing logic draws a shape around the bodily fluid points in the 3D point cloud.
- the shape may be a 2D shape such as a circle or ellipse, or may be a 3D shape such as a sphere, hemisphere, or other rounded shape.
- processing logic makes a final determination of a bodily fluid region (e.g., including multiple bodily fluid points).
- any points that are within the drawn shape may be updated so that they are also classified as bodily fluid points.
- processing logic removes the bodily fluid points from the 3D point cloud.
- processing logic may simply ignore or filter out the bodily fluid points without actually removing those points from the 3D point cloud.
- processing logic registers and stitches the 3D point cloud with other 3D point clouds and/or a generated 3D surface or model (e.g., a surface or model that has been generated by registering and stitching together multiple 3D point clouds, each associated with or constituting a discrete intraoral scan).
- the bodily fluid points of the 3D point cloud may not be used in registering and/or stitching the 3D point cloud to the other 3D point clouds and/or 3D surface (e.g., 3D mesh).
- the 3D surface may include a void corresponding to the detected bodily fluid region (e.g., to the bodily fluid points).
- FIG. 6 illustrates a flow diagram for a method 600 of addressing bodily fluids in intraoral scan data, in accordance with embodiments of the present disclosure.
- processing logic receives intraoral scan data of a portion of a dental site.
- the intraoral scan data may include a first set of images generated using structured light projection (where for each set of images each image may have been generated by a different camera of an intraoral scanner) and a second set of images (e.g., set of color 2D images, where each image may have been generated by a different camera of the intraoral scanner).
- processing logic generates a 3D point cloud of the portion of the dental site using the intraoral scan data.
- the 3D point cloud is determined by performing triangulation between corresponding points captured in different images of a first set of intraoral images. Such triangulation may be performed by solving a correspondence problem, as discussed herein above.
- the one or more regions comprising a bodily fluid or suspected bodily fluid are detected by inputting the 3D point cloud, or data for the 3D point cloud, into one or more trained machine learning models, which may output dental object classifications (e.g., a blood classification, a saliva classification, a tooth classification, a gingiva classification, etc.) for the 3D points of the 3D point cloud.
- the intraoral images from one or more sets of intraoral images are input into one or more trained machine learning models, which output dental object classifications for pixels in the intraoral images, which may be mapped to points on the 3D point cloud.
- generating one or more second training datasets 836B includes gathering one or more 2D images with labels of bodily fluids 812B.
- One or more images and optionally associated probability maps or pixel/patch-level labels in the training dataset 812B may be resized in embodiments.
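- Resizing a training image together with its pixel-level labels or probability map can be done as in the sketch below, using OpenCV; the 256x256 target size is an assumed example, and nearest-neighbor interpolation is chosen for the label map so that discrete labels are preserved.

```python
import cv2

def resize_training_pair(image, label_map, size=(256, 256)):
    """Resize a color training image and its label/probability map to the
    network input size, using interpolation suited to each."""
    image_r = cv2.resize(image, size, interpolation=cv2.INTER_LINEAR)
    label_r = cv2.resize(label_map, size, interpolation=cv2.INTER_NEAREST)
    return image_r, label_r
```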
- Once one or more trained ML models are generated, they may be stored in model storage 845.
- the model application workflow 817 is to apply the one or more trained machine learning models generated using the model training workflow 805 to perform the classifying, segmenting, detection, recognition, image generation, prediction, parameter generation, etc. tasks for intraoral scan data (e.g., 3D scans, 3D point clouds, height maps, 2D color images, NIRI images, etc.) and/or 3D surfaces generated based on intraoral scan data.
- One or more of the machine learning models may receive and process 3D data (e.g., 3D point clouds, 3D surfaces, portions of 3D models, etc.).
- One or more of the machine learning models may receive and process 2D data (e.g., 2D images, height maps, projections of 3D surfaces onto planes, etc.).
- a combined ML model 852 is added to model application workflow 817 from model storage 845.
- a point cloud generator 860 may process each intraoral scan (e.g., each set of images generated using structured light projection) to generate a 3D point cloud 862. Processing the set of images may include performing triangulation between captured features in the images and solving a correspondence algorithm.
- FIG. 9 illustrates a diagrammatic representation of a machine in the example form of a computing device 900 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed.
- the machine may be connected (e.g., networked) to other machines in a Local Area Network (LAN), an intranet, an extranet, or the Internet.
- the machine may operate in the capacity of a server or a client machine in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.
- Embodiment 2 The method of embodiment 1, further comprising: generating an alert to perform a corrective action responsive to detecting the one or more regions of the 3D point cloud comprising the bodily fluid.
- Embodiment 15 The method of embodiment 14, wherein combining the probabilities comprises determining an average of the probabilities.
- Embodiment 16 The method of embodiment 14 or 15, further comprising, for each point on the 3D point cloud classified as depicting the bodily fluid, performing the following: determining a density of surrounding points that are also classified as depicting the bodily fluid; and confirming the point as depicting the bodily fluid responsive to determining that the density of the surrounding points that are also classified as depicting the bodily fluid is above a threshold.
- Embodiment 17 The method of embodiment 16, wherein the density of surrounding points is determined using first additional intraoral scan data captured prior to the intraoral scan data and second additional intraoral scan data captured after the intraoral scan data was captured.
- Embodiment 18 The method of embodiment 16 or 17, further comprising: estimating a shape around a group of points classified as depicting the bodily fluid; and updating points within the circular shape as depicting the bodily fluid.
- Embodiment 19 The method of embodiment 18, wherein estimating the shape comprises: for each point classified as depicting the bodily fluid, determining a boundary around the point; and determining a superposition of boundaries around points.
- Embodiment 21 The method of any of embodiments 1-20, wherein the intraoral scan data is received at a first time, the method further comprising: receiving additional intraoral scan data of the dental site at a second time; generating a second 3D point cloud of the dental site using the additional intraoral scan data; detecting one or more second regions of the second 3D point cloud comprising the bodily fluid; determining that the one or more second regions are larger than the one or more regions; and generating an alert to perform a corrective action.
- Embodiment 22 The method of any of embodiments 1-21 wherein the 3D surface comprises one or more voids corresponding to the one or more regions comprising the bodily fluid, the method further comprising: highlighting the one or more voids in the 3D surface.
- Embodiment 23 The method of embodiment 22, further comprising: receiving additional intraoral scan data after the bodily fluid has been removed from the dental site; and updating the 3D surface based on the additional intraoral scan data, wherein the one or more voids are filled in based on the additional intraoral scan data.
- Embodiment 24 The method of any of embodiments 1-23, wherein the bodily fluid comprises blood.
- Embodiment 25 A non-transitory computer readable medium comprising instructions that, when executed by a processing device, cause the processing device to perform the method of any of embodiments 1-24.
- Embodiment 27 A system comprising: a handheld scanner to perform an intraoral scan; and a computing device to perform the method of any of embodiments 1-24.
- Embodiment 28 A system comprising: a handheld scanner to perform one or more operations of the method of any of embodiments 1-24 and a computing device to perform one or more remaining operations of the method of any of embodiments 1-24.
- Embodiment 29 A system comprising: a handheld scanner to perform an intraoral scan; a first computing device to perform one or more operations of the method of any of embodiments 1-24 and a second computing device to perform one or more remaining operations of the method of any of embodiments 1-24.
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Public Health (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Dentistry (AREA)
- Epidemiology (AREA)
- Animal Behavior & Ethology (AREA)
- Veterinary Medicine (AREA)
- Physics & Mathematics (AREA)
- Optics & Photonics (AREA)
- Engineering & Computer Science (AREA)
- Medical Informatics (AREA)
- Biophysics (AREA)
- Pathology (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Primary Health Care (AREA)
- Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)
Abstract
In a method of processing intraoral scan data, the method includes receiving intraoral scan data comprising one or more intraoral images of a portion of a dental site generated using structured light projection during an intraoral scan of the dental site, generating a three-dimensional (3D) point cloud of the portion of the dental site using the intraoral scan data, detecting one or more regions of the 3D point cloud comprising a bodily fluid, removing the one or more regions from the 3D point cloud, updating a 3D surface of the dental site using the 3D point cloud of the portion of the dental site, and outputting the 3D surface of the dental site to a display.
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202463619295P | 2024-01-09 | 2024-01-09 | |
| US63/619,295 | 2024-01-09 | ||
| US19/014,066 US20250225752A1 (en) | 2024-01-09 | 2025-01-08 | Blood and saliva handling for intraoral scanning |
| US19/014,066 | 2025-01-08 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025151667A1 (fr) | 2025-07-17 |
Family
ID=94601417
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2025/010981 (WO2025151667A1, pending) | Blood and saliva handling for intraoral scanning | 2024-01-09 | 2025-01-09 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025151667A1 (fr) |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20210059796A1 (en) * | 2019-09-04 | 2021-03-04 | Align Technology, Inc. | Automated detection, generation and/or correction of dental features in digital models |
| US11563929B2 (en) | 2019-06-24 | 2023-01-24 | Align Technology, Inc. | Intraoral 3D scanner employing multiple miniature cameras and multiple miniature pattern projectors |
| WO2023028339A1 (fr) * | 2021-08-27 | 2023-03-02 | Align Technology, Inc. | Visualisations en temps réel et post-balayage de scanner intrabuccal |
-
2025
- 2025-01-09 WO PCT/US2025/010981 patent/WO2025151667A1/fr active Pending
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11563929B2 (en) | 2019-06-24 | 2023-01-24 | Align Technology, Inc. | Intraoral 3D scanner employing multiple miniature cameras and multiple miniature pattern projectors |
| US20210059796A1 (en) * | 2019-09-04 | 2021-03-04 | Align Technology, Inc. | Automated detection, generation and/or correction of dental features in digital models |
| WO2023028339A1 (fr) * | 2021-08-27 | 2023-03-02 | Align Technology, Inc. | Visualisations en temps réel et post-balayage de scanner intrabuccal |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US12138013B2 (en) | Automatic generation of prosthodontic prescription | |
| US20230068727A1 (en) | Intraoral scanner real time and post scan visualizations | |
| US20240382295A1 (en) | Digital 3d model generation with accurate arch width | |
| US12370025B2 (en) | Intuitive intraoral scanning | |
| US20240202921A1 (en) | Viewfinder image selection for intraoral scanning | |
| US20250339247A1 (en) | Intraoral scanner with illumination sequencing and controlled polarization | |
| US20240285379A1 (en) | Gradual surface quality feedback during intraoral scanning | |
| WO2023028339A1 (fr) | Visualisations en temps réel et post-balayage de scanner intrabuccal | |
| US20230414331A1 (en) | Capture of intraoral features from non-direct views | |
| US20240058105A1 (en) | Augmentation of 3d surface of dental site using 2d images | |
| WO2023014995A1 (fr) | Numérisation intrabuccale intuitive | |
| US20240358482A1 (en) | Determining 3d data for 2d points in intraoral scans | |
| US20240177397A1 (en) | Generation of dental renderings from model data | |
| US20240023800A1 (en) | Minimalistic intraoral scanning system | |
| US20250225752A1 (en) | Blood and saliva handling for intraoral scanning | |
| WO2025151667A1 (fr) | Manipulation de sang et de salive pour balayage intrabuccal | |
| WO2023004147A1 (fr) | Scanner intra-buccal avec séquençage d'éclairage et polarisation commandée | |
| US20240307158A1 (en) | Automatic image selection for images of dental sites | |
| WO2024137515A1 (fr) | Sélection d'image de viseur pour balayage intra-buccal | |
| WO2024177891A1 (fr) | Retour progressif de qualité de surface pendant un balayage intra-buccal | |
| WO2024226825A1 (fr) | Détermination de données 3d pour des points 2d dans des balayages intrabuccaux | |
| CN120604268A (zh) | 根据模型数据生成牙科渲染 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 25705049; Country of ref document: EP; Kind code of ref document: A1 |