Detailed Description
The embodiments of the present application will now be described clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of the application.
The application combines artificial intelligence technology to realize automatic and accurate measurement of the form and position of the temporomandibular joint condyle and glenoid fossa, and can serve as a powerful auxiliary tool to help dentists diagnose diseases, formulate treatment plans and evaluate treatment effects.
In order that the above objects, features and advantages of the present application may be more readily understood, the application is described in further detail below with reference to the appended drawings and the detailed description.
The method for measuring the three-dimensional form of the temporomandibular joint provided by the embodiments of the application can be applied to the application environment shown in figure 1, in which the terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process; it may be provided separately, integrated on the server 104, or placed on a cloud or other server. The terminal 102 may send target temporomandibular joint CBCT data to the server 104. After receiving the data, the server 104 inputs the target temporomandibular joint CBCT data into a mandibular three-dimensional segmentation model to obtain a mandibular three-dimensional segmentation result, determines a temporomandibular joint central position area image based on the mandibular three-dimensional segmentation result and the target temporomandibular joint CBCT data, inputs the temporomandibular joint central position area image into a condyle and glenoid fossa two-dimensional segmentation model to obtain a two-dimensional segmentation result of the condylar area and the glenoid fossa area, and finally calculates the condylar morphology information, the glenoid fossa morphology information and the position information of the condyle in the glenoid fossa based on that two-dimensional segmentation result. The mandibular three-dimensional segmentation model and the condyle and glenoid fossa two-dimensional segmentation model are trained and stored on the server. The server 104 may feed back the obtained condylar morphology information, glenoid fossa morphology information, and position information of the condyle in the glenoid fossa to the terminal 102. In some embodiments, the method may also be implemented by the server 104 or the terminal 102 alone.
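For illustration only, the overall inference flow described above can be sketched in Python as follows. All function and argument names are hypothetical placeholders; the actual models and data handling are those described in steps 201 to 205 below.

```python
from typing import Callable, Dict
import numpy as np

def measure_tmj(
    volume: np.ndarray,                                    # step 201: target TMJ CBCT volume (assumed loaded)
    segment_mandible_3d: Callable[[np.ndarray], np.ndarray],
    crop_center_slices: Callable[[np.ndarray, np.ndarray], Dict[str, np.ndarray]],
    segment_condyle_fossa_2d: Callable[[np.ndarray], np.ndarray],
    compute_morphology: Callable[[Dict[str, np.ndarray]], Dict[str, float]],
) -> Dict[str, float]:
    """Hedged sketch of the measurement pipeline; the callables are placeholders."""
    mandible_mask = segment_mandible_3d(volume)            # step 202: 3D mandible segmentation
    slices = crop_center_slices(volume, mandible_mask)     # step 203: axial max / oblique sagittal / oblique coronal
    masks = {name: segment_condyle_fossa_2d(img) for name, img in slices.items()}  # step 204
    return compute_morphology(masks)                       # step 205: condyle, fossa and position metrics
```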
The terminal 102 may be, but is not limited to, various desktop computers, notebook computers, smart phones, tablet computers, and internet of things devices. The server 104 may be implemented as a stand-alone server or a server cluster composed of a plurality of servers, or may be a cloud server.
In an exemplary embodiment, as shown in fig. 2, a method for measuring the three-dimensional form of the temporomandibular joint is provided. The method is executed by a computer device, specifically by a terminal or a server alone, or by the terminal and the server together. In the embodiments of the present application, the method is described as applied to the server 104 in fig. 1 and includes the following steps 201 to 205.
Step 201, acquiring CBCT data of a target temporomandibular joint.
Step 202, inputting the target temporomandibular joint CBCT data into a mandibular three-dimensional segmentation model to obtain a mandibular three-dimensional segmentation result, wherein the mandibular three-dimensional segmentation model is obtained by training a first preset deep learning network by adopting a temporomandibular joint CBCT sample data set. Through the processing of the mandible three-dimensional segmentation model, the central position of the temporomandibular joint can be quickly found, and the influence of other redundant skull structures is removed.
Step 203, determining a temporomandibular joint central position area image based on the mandibular three-dimensional segmentation result and the target temporomandibular joint CBCT data. Specifically, the corresponding temporomandibular joint central position area images are cropped from the target temporomandibular joint CBCT data based on the mandibular three-dimensional segmentation result. The temporomandibular joint central position area image comprises three slice images: an image at the maximum axial position of the condyle, an image at the corrected central oblique sagittal position of the condyle, and an image at the corrected central oblique coronal position of the condyle.
Step 204, inputting the temporomandibular joint central position area image into a condyle and glenoid fossa two-dimensional segmentation model to obtain a two-dimensional segmentation result of the condylar area and the glenoid fossa area, wherein the condyle and glenoid fossa two-dimensional segmentation model is obtained by training a second preset deep learning network with a temporomandibular joint central position area sample data set. Through the processing of the condyle and glenoid fossa two-dimensional segmentation model, masks of the condyle and the glenoid fossa on each slice image of the temporomandibular joint central position area image can be obtained, and the condylar area and the glenoid fossa area are displayed on the corresponding masks.
Step 205, calculating temporomandibular joint three-dimensional form information based on the two-dimensional segmentation result of the condylar area and the glenoid fossa area, wherein the temporomandibular joint three-dimensional form information comprises condylar morphology information, glenoid fossa morphology information and position information of the condyle in the glenoid fossa. The condylar morphology information comprises the condylar length, condylar width, condylar head height and condylar height; the glenoid fossa morphology information comprises the glenoid fossa width, glenoid fossa depth and joint nodule inclination; and the position information of the condyle in the glenoid fossa comprises the anterior, upper and posterior joint gaps measured on the oblique sagittal slice, and the medial, central and lateral joint gaps measured on the oblique coronal slice.
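For reference, the measured quantities listed in step 205 can be grouped into a simple data structure. This is only an illustrative sketch; the field names are assumptions and are not terminology fixed by the application.

```python
from dataclasses import dataclass

@dataclass
class TMJMorphology:
    """Illustrative container for the measurements produced in step 205 (field names assumed)."""
    # condylar morphology
    condylar_length: float        # medial-lateral diameter at the maximum axial level
    condylar_width: float         # antero-posterior diameter at the maximum axial level
    condylar_head_height: float   # apex to medial-lateral line on the corrected coronal slice
    condylar_height: float        # apex to sigmoid-notch tangent on the corrected sagittal slice
    # glenoid fossa morphology
    fossa_width: float
    fossa_depth: float
    eminence_inclination_deg: float
    # position of the condyle in the fossa (joint gaps)
    anterior_gap: float
    upper_gap: float
    posterior_gap: float
    medial_gap: float
    central_gap: float
    lateral_gap: float
```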
In one specific application example, in step 205, the condylar morphology information is calculated based on the two-dimensional segmentation result of the condylar area and the glenoid fossa area through the following steps (11)-(14); a small geometric sketch is given after these steps.
(11) On the image at the maximum axial position of the condyle, the innermost point of the condyle is connected with the outermost point of the condyle to obtain the condylar length.
(12) On the image at the maximum axial position of the condyle, a line is drawn through the midpoint of the line connecting the innermost and outermost condylar points and perpendicular to the medial-lateral condylar diameter, giving the anterior condylar point and the posterior condylar point; connecting the anterior condylar point and the posterior condylar point gives the condylar width.
(13) On the corrected central oblique sagittal image of the condyle, a tangent line is drawn through the lowest point of the mandibular sigmoid notch, and a perpendicular is dropped from the condylar apex to this tangent line to obtain the condylar height.
(14) On the corrected central oblique coronal image of the condyle, the innermost and outermost condylar points are connected to obtain a medial-lateral line, and a perpendicular is dropped from the condylar apex to this line to obtain the condylar head height.
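A minimal NumPy sketch of the distance computations behind steps (11)-(14): the condylar length and width reduce to point-to-point distances, and the two heights reduce to point-to-line distances. The landmark coordinates are assumed example values extracted beforehand from the segmentation masks.

```python
import numpy as np

def distance(p, q) -> float:
    """Euclidean distance between two 2D landmark points (pixels or mm)."""
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

def point_to_line_distance(p, a, b) -> float:
    """Perpendicular distance from point p to the line through a and b."""
    p, a, b = (np.asarray(v, float) for v in (p, a, b))
    ab, ap = b - a, p - a
    cross = ab[0] * ap[1] - ab[1] * ap[0]
    return float(abs(cross) / np.linalg.norm(ab))

# Illustrative usage with assumed landmark coordinates (row, col):
innermost, outermost = np.array([40.0, 10.0]), np.array([42.0, 90.0])
anterior, posterior = np.array([20.0, 50.0]), np.array([60.0, 52.0])
condylar_apex = np.array([5.0, 48.0])

condylar_length = distance(innermost, outermost)                                      # step (11)
condylar_width = distance(anterior, posterior)                                        # step (12)
condylar_head_height = point_to_line_distance(condylar_apex, innermost, outermost)    # step (14)
```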
In one specific application example, in step 205, the glenoid fossa morphology information is calculated based on the two-dimensional segmentation result of the condylar area and the glenoid fossa area through the following steps (21)-(23); a sketch of the angle computation is given after these steps.
(21) The lowest point of the joint nodule is connected with the lowest point of the posterior wall of the glenoid fossa to obtain the glenoid fossa width.
(22) A perpendicular is dropped from the vertex of the glenoid fossa to the line connecting the lowest point of the joint nodule and the lowest point of the posterior wall of the glenoid fossa to obtain the glenoid fossa depth.
(23) A first connecting line is drawn through the vertex of the glenoid fossa and the lowest point of the joint nodule, and the angle between this first connecting line and the orbital-ear plane is taken as the joint nodule inclination.
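As an illustration of steps (21)-(23), the sketch below computes the fossa width and depth with simple point and line distances and derives the joint nodule inclination as the angle between the first connecting line and a horizontal reference line standing in for the orbital-ear plane. All coordinates are assumed example values.

```python
import numpy as np

def angle_to_horizontal_deg(p, q) -> float:
    """Angle (degrees) between line p-q and a horizontal reference line,
    used here as a stand-in for the orbital-ear plane in the 2D slice."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    dy, dx = q[0] - p[0], q[1] - p[1]            # rows increase downwards
    return float(np.degrees(np.arctan2(abs(dy), abs(dx))))

# Assumed example landmarks on the corrected oblique sagittal slice (row, col):
nodule_lowest = np.array([70.0, 20.0])            # lowest point of the joint nodule
fossa_posterior_lowest = np.array([72.0, 80.0])   # lowest point of the posterior fossa wall
fossa_vertex = np.array([40.0, 50.0])             # vertex of the glenoid fossa

fossa_width = float(np.linalg.norm(nodule_lowest - fossa_posterior_lowest))          # step (21)
ab, ap = fossa_posterior_lowest - nodule_lowest, fossa_vertex - nodule_lowest
fossa_depth = float(abs(ab[0] * ap[1] - ab[1] * ap[0]) / np.linalg.norm(ab))          # step (22)
eminence_inclination = angle_to_horizontal_deg(fossa_vertex, nodule_lowest)           # step (23)
```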
In one specific application example, in step 205, the position information of the condyle in the glenoid fossa is calculated based on the two-dimensional segmentation result of the condylar area and the glenoid fossa area through the following steps (31)-(32).
(31) On the corrected central oblique sagittal image of the condyle, the anterior condylar point is connected with the anterior glenoid point to obtain the anterior joint gap, the condylar apex is connected with the glenoid fossa vertex to obtain the upper joint gap, and the posterior condylar point is connected with the posterior glenoid point to obtain the posterior joint gap.
(32) On the corrected central oblique coronal image of the condyle, the medial 1/4 point on the condylar top is connected with the corresponding point on the glenoid fossa to obtain the medial joint gap, the condylar apex is connected with the corresponding point on the glenoid fossa to obtain the central joint gap, and the lateral 1/4 point on the condylar top is connected with the corresponding point on the glenoid fossa to obtain the lateral joint gap.
The above describes the measurement process of the three-dimensional form of the temporomandibular joint. In practical application, before measurement, the mandibular three-dimensional segmentation model and the condyle and glenoid fossa two-dimensional segmentation model need to be constructed and prepared. During construction and preparation, the temporomandibular joint CBCT sample data set, the temporomandibular joint central position area sample data set, the first preset deep learning network and the second preset deep learning network need to be prepared first.
Any temporomandibular joint CBCT sample in the temporomandibular joint CBCT sample data set includes historical temporomandibular joint CBCT data and a corresponding mandibular three-dimensional segmentation result label. Any temporomandibular joint central position area sample in the temporomandibular joint central position area sample data set comprises a historical temporomandibular joint central position area image and corresponding condylar area and glenoid fossa area segmentation labels.
The acquisition process of the historical temporomandibular joint CBCT data is as follows: when the patient undergoes the temporomandibular joint CBCT examination, the orbital-ear plane is kept parallel to the ground plane, and after the CBCT image scanning is completed, the original DICOM data are imported into the SmartVPro software. The temporomandibular joint CBCT of each patient is taken as one historical data record; 150 or more records may be collected, and the number may be adjusted by the relevant technician as needed. After obtaining a plurality of historical data records, screening is performed according to the following inclusion and exclusion criteria to obtain the final historical temporomandibular joint CBCT data.
The inclusion criteria include: ① the images are clear, without motion artifacts or beam-hardening artifacts; ② the bone structures of the condyle and the glenoid fossa are normal.
The exclusion criteria include: ① incomplete or unclear images of the condyle and the glenoid fossa; ② a history of temporomandibular joint tumor, trauma, ankylosis or systemic disease; ③ a history of temporomandibular joint trauma or surgery; ④ systemic diseases affecting the condyle, such as rheumatoid arthritis, systemic lupus erythematosus, etc.
On this basis, for each set of historical temporomandibular joint CBCT data, the medial-lateral long axis of the condyle is determined on the axial level of the maximum cross section of the condyle; this level is the maximum axial position of the condyle. The sagittal plane is adjusted to be perpendicular to the direction of the condylar long axis, and its central level is the corrected central oblique sagittal position of the condyle; the coronal plane is adjusted to be parallel to the direction of the condylar long axis, and its central level is the corrected central oblique coronal position of the condyle. The slice images of these three levels form a historical temporomandibular joint central position area image, i.e. the historical temporomandibular joint central position area image comprises an image at the maximum axial position of the condyle, an image at the corrected central oblique sagittal position of the condyle and an image at the corrected central oblique coronal position of the condyle.
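For illustration, the reslicing just described (rotating the sagittal and coronal planes relative to the condylar long axis and taking the central levels) might be approximated with a simple volume rotation. The sketch below uses scipy and assumes the volume is axially oriented and the long-axis angle is already known; it is not the exact procedure used by the application.

```python
import numpy as np
from scipy.ndimage import rotate

def oblique_center_slices(volume: np.ndarray, long_axis_deg: float, axial_index: int):
    """Return the maximum axial slice plus approximate corrected oblique
    sagittal/coronal central slices of a (z, y, x) CBCT sub-volume.
    The condylar long-axis angle (degrees, in the axial plane) is assumed known."""
    axial = volume[axial_index]                                  # maximum axial level of the condyle
    # Rotate in the axial (y, x) plane so the condylar long axis aligns with the x-axis.
    aligned = rotate(volume, angle=-long_axis_deg, axes=(1, 2), reshape=False, order=1)
    sagittal = aligned[:, :, aligned.shape[2] // 2]              # central plane perpendicular to the long axis
    coronal = aligned[:, aligned.shape[1] // 2, :]               # central plane parallel to the long axis
    return axial, sagittal, coronal
```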
The mandibular three-dimensional segmentation result label is obtained by segmenting the three-dimensional structure of the mandible with MIMICS Research 19.0, and the segmentation result includes the mandible and the teeth. The two-dimensional segmentation of the condylar area and the glenoid fossa area is performed with the image annotation software Labelme. Because accurate evaluation of the three-dimensional structure and form of the temporomandibular joint depends on accurate segmentation of the mandible, the condylar area and the glenoid fossa area, the doctors are uniformly trained before segmentation in the use of the software and in the three-dimensional mandibular segmentation and two-dimensional segmentation methods, so as to ensure the accuracy of segmentation.
After the two-dimensional segmentation of the condylar area and the glenoid fossa area is obtained, the condylar length and condylar width can be measured as shown in fig. 3. The condylar length is the distance between the innermost and outermost condylar points at the maximum axial level of the condyle, corresponding to A-B in fig. 3; the condylar width is the distance between the anterior and posterior condylar points at the maximum axial level of the condyle, measured along the line passing through the midpoint of the line connecting the innermost and outermost condylar points and perpendicular to the medial-lateral condylar diameter, corresponding to C-D in fig. 3.
The condylar head height may be measured as shown in fig. 4: on the corrected central oblique coronal image of the condyle, the condylar head height is the vertical distance from the condylar apex to the medial-lateral line passing through the innermost and outermost condylar points, corresponding to H-I in fig. 4.
The condylar height is measured as shown in fig. 5: on the corrected central oblique sagittal image of the condyle, the condylar height is the vertical distance from the condylar apex to the tangent line through the lowest point of the mandibular sigmoid notch, corresponding to J-K in fig. 5.
As shown in FIG. 6, the glenoid fossa width, the glenoid fossa depth and the joint nodule inclination are measured. The glenoid fossa width is the distance between the lowest point of the joint nodule and the lowest point of the posterior wall of the glenoid fossa, corresponding to N-M in FIG. 6; the glenoid fossa depth is the shortest distance from the vertex of the glenoid fossa to the line connecting the lowest point of the joint nodule and the lowest point of the posterior wall of the glenoid fossa, corresponding to S-O in FIG. 6; and the joint nodule inclination is the angle between the first connecting line, drawn through the vertex of the glenoid fossa and the lowest point of the joint nodule, and the orbital-ear plane, corresponding to the angle α in FIG. 6.
The anterior, upper and posterior joint gaps may be measured as shown in FIG. 7, where the anterior joint gap corresponds to Q1-Q2 in FIG. 7 and is denoted Q, the upper joint gap corresponds to S1-S2 in FIG. 7 and is denoted S, and the posterior joint gap corresponds to P1-P2 in FIG. 7 and is denoted P. In one specific application, the condylar position may be assessed according to the method of Pullinger and Hollender as linear percentage = (P − Q)/(P + Q) × 100%: a linear percentage < −12% indicates a posterior condylar position, a linear percentage > +12% indicates an anterior condylar position, and a linear percentage between −12% and +12% indicates a centered condylar position.
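A minimal sketch of the Pullinger and Hollender classification described above; the gap values in the usage example are assumed inputs, not measured data.

```python
def pullinger_hollender(posterior_gap: float, anterior_gap: float) -> tuple:
    """Linear percentage = (P - Q) / (P + Q) * 100 and its categorical reading."""
    percentage = (posterior_gap - anterior_gap) / (posterior_gap + anterior_gap) * 100.0
    if percentage < -12.0:
        position = "posterior condylar position"
    elif percentage > 12.0:
        position = "anterior condylar position"
    else:
        position = "centered condylar position"
    return percentage, position

# Example with assumed gap values in mm: (2.4 - 1.8) / 4.2 * 100 ~= 14.3% -> anterior position
print(pullinger_hollender(posterior_gap=2.4, anterior_gap=1.8))
```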
The medial, central and lateral joint gaps may be measured as shown in fig. 8, where the medial joint gap corresponds to V1-V2 in fig. 8 and is denoted V, the central joint gap corresponds to T1-T2 in fig. 8 and is denoted T, and the lateral joint gap corresponds to U1-U2 in fig. 8 and is denoted U.
In addition, the determination of the condylar apex, the glenoid fossa vertex and the related landmark points can be understood with reference to FIGS. 7 and 8. Specifically, in FIG. 7, a horizontal Line1 parallel to the orbital-ear plane is tangent to the upper edge of the glenoid fossa at the glenoid fossa vertex S2; a vertical Line2 through the glenoid fossa vertex S2 is tangent to the upper edge of the condyle at the condylar apex S1; a tangent to the anterior edge of the condyle touches the condyle at the anterior condylar point Q1, and the perpendicular through Q1 to this tangent meets the anterior edge of the glenoid fossa at the anterior glenoid point Q2; likewise, a tangent to the posterior edge of the condyle touches the condyle at the posterior condylar point P1, and the perpendicular through P1 to this tangent meets the posterior edge of the glenoid fossa at the posterior glenoid point P2. In FIG. 8, the line between the innermost point G of the condyle and the outermost point F of the condyle is denoted Line3, and a second reference line is denoted Line4; Line4 intersects the upper edge of the condyle at the condylar apex point T1 and intersects the glenoid fossa at the point T2; the medial angle bisector of Line3 and Line4 intersects the medial edge of the condyle at the medial 1/4 point V1 and intersects the glenoid fossa at the point V2; and the lateral angle bisector of Line3 and Line4 intersects the lateral edge of the condyle at the lateral 1/4 point U1 and intersects the glenoid fossa at the point U2.
In a specific application, the first preset deep learning network is a UNet model, and correspondingly, the training process of the mandibular three-dimensional segmentation model includes the following steps (41)-(42).
(41) Data expansion processing and data enhancement processing are performed on the temporomandibular joint CBCT sample data set to obtain the temporomandibular joint CBCT sample data set to be used. The expansion may adopt random cropping, random rotation, horizontal flipping, vertical flipping and contrast adjustment, and the data enhancement may be performed on-line, so as to improve the generalization of the model (an illustrative sketch of these operations is given after this training description).
(42) The temporomandibular joint CBCT sample data set to be used is input into the UNet model for 3D segmentation, and the mandibular three-dimensional segmentation model is obtained through iterative optimization training. Specifically, a lightweight mandible segmentation and detection model, i.e. a UNet model, is constructed based on the PyTorch deep learning framework; the temporomandibular joint CBCT sample data set to be used is divided into a training set, a validation set and a test set in a ratio of 3:1:1; the training set is input into the UNet 3D positioning model, which outputs a predicted position heat map, and the model is trained to achieve rough positioning, i.e. to find the position of the temporomandibular joint. The training process adopts a focal loss function to ensure sensitivity to position, which is defined as:
Loss = −(1/N) Σ_{x,y,z} [ (1 − p_xyz)^α · y_xyz · log(p_xyz) + (p_xyz)^β · (1 − y_xyz) · log(1 − p_xyz) ]
where Loss is the value of the loss function, N is the number of samples in the temporomandibular joint CBCT sample data set to be used, p_xyz is the predicted value at a pixel point, y_xyz is the label value at the corresponding pixel point, α is a focusing parameter set to 0.5, β is a focusing parameter set to 0.5, and x, y, z are the coordinate values of the pixel point.
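A hedged PyTorch sketch of a focal-style loss of the form given above, with α = β = 0.5 as focusing parameters; this is an illustrative implementation under those assumptions rather than the exact loss used by the application.

```python
import torch

def focal_position_loss(pred: torch.Tensor, target: torch.Tensor,
                        alpha: float = 0.5, beta: float = 0.5,
                        eps: float = 1e-6) -> torch.Tensor:
    """Focal-style loss over a predicted position heat map.
    pred and target have shape (N, D, H, W) with values in [0, 1]."""
    pred = pred.clamp(eps, 1.0 - eps)                      # numerical safety for the logarithms
    pos = (1.0 - pred) ** alpha * target * torch.log(pred)
    neg = pred ** beta * (1.0 - target) * torch.log(1.0 - pred)
    return -(pos + neg).sum() / pred.shape[0]

# Example with random tensors (batch of 2 heat maps of size 8x8x8):
pred = torch.rand(2, 8, 8, 8)
target = (torch.rand(2, 8, 8, 8) > 0.95).float()
print(focal_position_loss(pred, target))
```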
In one practical application, the image sizes in the training set, validation set and test set may be unified to 72 × 72. FIGS. 9, 10 and 11 respectively show an original image, the corresponding segmented image and the 3D segmented image from the temporomandibular joint CBCT sample data set to be used. After the image shown in FIG. 11 is obtained, the maximum axial level of the condyle is located, and the corresponding coronal and sagittal levels are output.
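As an illustration of the offline expansion operations mentioned in step (41) above (random cropping, rotation, flipping and contrast adjustment), the following NumPy/scipy sketch applies them to a 3D volume; the crop size and parameter ranges are arbitrary assumptions.

```python
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(0)

def augment_volume(vol: np.ndarray, crop: int = 64) -> np.ndarray:
    """Apply one random combination of the expansion operations to a (z, y, x) volume."""
    # random crop (assumes every dimension is at least `crop` voxels)
    z0, y0, x0 = (rng.integers(0, s - crop + 1) for s in vol.shape)
    out = vol[z0:z0 + crop, y0:y0 + crop, x0:x0 + crop].astype(np.float32)
    # random in-plane rotation
    out = rotate(out, angle=float(rng.uniform(-15, 15)), axes=(1, 2), reshape=False, order=1)
    # random horizontal / vertical flips
    if rng.random() < 0.5:
        out = out[:, :, ::-1]
    if rng.random() < 0.5:
        out = out[:, ::-1, :]
    # contrast adjustment around the mean intensity
    gain = float(rng.uniform(0.8, 1.2))
    return (out - out.mean()) * gain + out.mean()
```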
In another specific application, the second preset deep learning network is an nnUNet model, and correspondingly, the training process of the condyle and glenoid fossa two-dimensional segmentation model includes the following steps (51)-(52).
(51) Data enhancement processing is performed on the temporomandibular joint central position area sample data set to obtain the temporomandibular joint central position area sample data set to be used. In practical application, the temporomandibular joint central position area sample data set is acquired as follows: after rough positioning of the condyle and the glenoid fossa is achieved with the rough positioning model obtained in steps (41)-(42), the corresponding temporomandibular joint central area images are cropped out, and the condyle and the glenoid fossa are then annotated on them to form the condyle and glenoid fossa two-dimensional segmentation data set, i.e. the temporomandibular joint central position area sample data set. The data enhancement processing in this step includes random rotation, random scaling, noise addition and the like.
(52) The temporomandibular joint central position area sample data set to be used is input into the nnUNet model for 2D segmentation training to obtain the condyle and glenoid fossa two-dimensional segmentation model. In an application example, the data set is divided into a training set, a validation set and a test set in a ratio of 3:1:1, and the nnUNet model is selected for training the 2D segmentation model. Cross entropy is adopted as the loss function during training, two-dimensional segmentation of the condyle and the glenoid fossa on each 2D slice is achieved, masks of the condyle and the glenoid fossa on each slice are obtained, and the two-dimensional segmentation results of the condylar area and the glenoid fossa area are displayed on the corresponding masks. FIGS. 12 and 13 respectively show a coronal original image and the corresponding coronal two-dimensional segmentation image from the temporomandibular joint central position area sample data set to be used, and FIGS. 14 and 15 respectively show a sagittal original image and the corresponding sagittal two-dimensional segmentation image from that data set.
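For illustration, a single cross-entropy training step for a generic 2D segmentation network is sketched below; it stands in for the nnUNet training described in step (52) but is not the nnUNet code itself, and the three-class labeling (background / condyle / glenoid fossa) is an assumption.

```python
import torch
import torch.nn as nn

# Assumed: any 2D segmentation network mapping (N, 1, H, W) -> (N, 3, H, W) logits.
model = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 3, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()              # cross entropy over background / condyle / fossa

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step; images: (N, 1, H, W) floats, labels: (N, H, W) class indices."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return float(loss)

# Example with random data (batch of 4 slices of size 64x64):
print(train_step(torch.rand(4, 1, 64, 64), torch.randint(0, 3, (4, 64, 64))))
```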
After the training of the condyle and glenoid fossa two-dimensional segmentation model is completed, the segmentation result can be compared with the gold standard to calculate the segmentation accuracy and the accuracy of the measurement indices. If the requirements are not met, the temporomandibular joint CBCT sample data set and the temporomandibular joint central position area sample data set are reselected for training.
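A common way to compare a predicted mask with the gold-standard mask is the Dice coefficient; the sketch below shows this standard metric as one possible accuracy check, though the application does not prescribe a specific metric.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gold: np.ndarray) -> float:
    """Dice overlap between a predicted binary mask and the gold-standard mask."""
    pred, gold = pred.astype(bool), gold.astype(bool)
    intersection = np.logical_and(pred, gold).sum()
    denominator = pred.sum() + gold.sum()
    return 1.0 if denominator == 0 else 2.0 * intersection / denominator

# Example with assumed masks:
pred = np.zeros((64, 64), dtype=bool); pred[20:40, 20:40] = True
gold = np.zeros((64, 64), dtype=bool); gold[22:42, 20:40] = True
print(round(dice_coefficient(pred, gold), 3))  # ~0.9
```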
In summary, the application aims to establish an automatic CBCT image measurement system for the three-dimensional morphology and position of the temporomandibular joint condyle and glenoid fossa based on deep-learning artificial intelligence technology, so as to realize automatic and accurate three-dimensional measurement of the temporomandibular joint. The establishment of this automatic measurement model has remarkable beneficial effects in improving measurement precision, realizing rapid measurement, supporting personalized treatment, assisting diagnosis and scientific research, and promoting interdisciplinary cooperation, and will provide powerful technical support for the diagnosis and treatment of temporomandibular joint related diseases.
In an exemplary embodiment, a computer device, which may be a server or a terminal, is provided, and an internal structure thereof may be as shown in fig. 16. The computer device includes a processor, a memory, an Input/Output interface (I/O) and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by the processor to implement a method of measuring a three-dimensional form of a temporomandibular joint.
It will be appreciated by those skilled in the art that the structure shown in FIG. 16 is merely a block diagram of some of the structures associated with the solution of the present application and does not limit the computer device to which the solution of the present application is applied; a particular computer device may include more or fewer components than shown, combine some of the components, or have a different arrangement of components. In one exemplary embodiment, a computer device is provided that includes a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor executing the computer program to perform the steps of the method embodiments described above.
In an exemplary embodiment, a computer-readable storage medium is provided, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method embodiments described above.
In an exemplary embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are both information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data are required to meet the related regulations.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded nonvolatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory can include Random Access Memory (RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can be in various forms such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), etc.
The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processor referred to in the embodiments provided in the present application may be a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, a data processing logic unit based on quantum computing, or the like, but is not limited thereto.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of these technical features are described; however, as long as there is no contradiction in a combination of technical features, it should be considered to be within the scope of this description.
The principles and embodiments of the present application have been described herein with reference to specific examples, which are intended only to facilitate understanding of the principles and core concepts of the application. Meanwhile, for those of ordinary skill in the art, changes may be made to the specific embodiments and the scope of application based on the concepts of the present application. In view of the foregoing, this description should not be construed as limiting the application.