WO2024065343A1 - System and method for registering preoperative and intraoperative liver point cloud data, terminal, and storage medium
- Publication number: WO2024065343A1 (application PCT/CN2022/122374)
- Authority: WO (WIPO PCT)
- Prior art keywords: point cloud, features, preoperative, intraoperative, cloud data
- Legal status: Ceased (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS; G06—COMPUTING OR CALCULATING; COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three-dimensional [3D] modelling, e.g. data description of 3D objects; G06T17/20—Finite element generation, e.g. wire-frame surface description, tessellation
- G06T7/00—Image analysis; G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration; G06T7/33—Determination of transform parameters for the alignment of images using feature-based methods
Definitions
- the invention relates to a preoperative and intraoperative liver point cloud data registration system, method, terminal and storage medium.
- the key point of 3D laparoscopic liver tumor resection is how to accurately locate the liver tumor under the 3D laparoscopic field of view.
- Medical image registration technology is an effective way to solve this problem. Registration refers to the precise alignment of the patient's preoperative image or intraoperative image (image space) with the anatomical structure (physical space) of the patient's surgical area.
- the data used in medical image registration technology usually includes 2D data and 3D data: 2D medical images lose information about the affected area to a certain extent, while 3D data can display pathological information more intuitively and in more detail. Using 3D data for registration can improve the accuracy and stability of diagnosis and surgical planning.
- the images to be registered are preoperative liver CT image data and liver surface data under the intraoperative laparoscopic field of view.
- the use of registration technology can achieve accurate positioning of liver tumors and improve the accuracy and safety of surgery.
- inexperienced doctors can perform liver resection surgery more safely, while skilled professional doctors can also use this technology to improve the accuracy of surgery, shorten the operation time, increase the tumor resection rate, and reduce the tumor residue and local recurrence rate.
- RPMNet: Robust point matching using learned features
- a structure similar to PPFNet is adopted, and the three-dimensional coordinates of the points are added to the features to obtain mixed features; a parameter prediction network estimates the outlier parameters and annealing parameters, which are then combined with the mixed features to compute the point pair matching.
- a differentiable Sinkhorn layer is introduced to expand the matching matrix, and iterative normalization is then performed to obtain higher-confidence correspondences.
- RPMNet achieves excellent performance under noise and partial overlap, but it requires repeated feature computation across iterations, which incurs a high computational cost.
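The Sinkhorn normalization step mentioned above can be sketched in plain NumPy. The slack row/column construction, the log-domain normalization, and the iteration count below are illustrative assumptions, not the exact formulation used by RPMNet or by the present application.

```python
import numpy as np

def sinkhorn(log_scores, n_iters=5):
    """Alternate row/column normalization of a score matrix in log space.

    The matrix is augmented with an extra "outlier" row and column so that
    unmatched points can assign their mass to the slack entries (the
    construction used by RPMNet-style matchers).
    """
    M, N = log_scores.shape
    # pad with a slack row and column of zeros (log 1)
    z = np.zeros((M + 1, N + 1))
    z[:M, :N] = log_scores
    for _ in range(n_iters):
        # normalize the real rows over all columns (including slack)
        z[:M] -= np.logaddexp.reduce(z[:M], axis=1, keepdims=True)
        # normalize the real columns over all rows (including slack)
        z[:, :N] -= np.logaddexp.reduce(z[:, :N], axis=0, keepdims=True)
    return np.exp(z[:M, :N])   # soft correspondence matrix

scores = np.random.randn(4, 5)
P = sinkhorn(scores)
```

Each real row and column of the resulting matrix sums to at most 1, with the remaining mass absorbed by the slack entries.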
- OMNet Learning overlapping mask for partial-to-partial point cloud registration
- this method predicts the overlapping masks of the two source point clouds and the target point cloud in each iteration, filters the non-overlapping areas, and then predicts the relative motion parameters from the global features of the two point clouds through multilayer perceptrons (MLPs).
- OMNet avoids interference with global features by removing non-overlapping areas and has achieved excellent results on low-overlap point cloud data.
- however, the extracted global features of the point cloud contain little local information, making the method poorly suited to liver surface data.
- the present invention provides a preoperative and intraoperative liver point cloud data registration system, comprising: a local mixed feature extraction module, used for extracting local mixed features containing geometric structure information from the preoperative point cloud data and the intraoperative point cloud data respectively; a global feature extraction module, used for fusing the extracted local mixed features to obtain the global features of the preoperative point cloud and the global features of the intraoperative point cloud; a feature fusion module, used for fusing the local mixed features and global features of the preoperative point cloud with the global features of the intraoperative point cloud to obtain the fusion features of the preoperative point cloud, and similarly fusing the local mixed features and global features of the intraoperative point cloud with the global features of the preoperative point cloud to obtain the fusion features of the intraoperative point cloud;
- the overlapping area mask prediction module is used to fuse the fusion features of the preoperative point cloud and the intraoperative point cloud to obtain their respective overlapping area masks and decoding features;
- the transformation matrix prediction module is used to multiply the overlapping area masks of the preoperative point cloud and the intraoperative point cloud with their respective local mixed features, and concatenate them with their respective decoding features to obtain the spatial transformation matrix of the preoperative point cloud and the intraoperative point cloud;
- the registration module is used to apply the calculated spatial transformation matrix to the preoperative point cloud data to obtain the registration result of the preoperative point cloud data and the intraoperative point cloud data.
- the local mixed feature f_X consists of three parts: the spatial coordinates of the point, the offsets to its neighboring points, and the local geometric features. For a point X_i in the point cloud X, let its neighborhood be N(X_i); its local mixed feature F_{X_i} is then expressed as:
- F_{X_i} = f(M^(k-1) · [X_i, D_{ij}, PPF(X_i, X_j)]), X_j ∈ N(X_i)
- where f(·) represents a multilayer perceptron network used to extract local mixed features of point clouds, M^(k-1) represents the overlapping area mask obtained in the previous iteration, and D_{ij} denotes the neighborhood point converted into a local neighboring point by subtracting the centroid of the neighborhood:
- D_{ij} = X_j − (1/|N(X_i)|) · Σ_{X_l ∈ N(X_i)} X_l
- PPF(X_i, X_j) represents the 4D point pair spatial feature (PPF) between X_i and X_j, described by the distance between the two spatial points and the angles between the normal vectors:
- PPF(X_i, X_j) = (‖d‖_2, ∠(n_i, d), ∠(n_j, d), ∠(n_i, n_j)), where d = X_j − X_i
- n_i and n_j represent the normal vectors of points X_i and X_j.
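As an illustration of the 4D point pair feature described above, the following NumPy sketch computes the distance and the three angles for a single point pair; the ordering of the four components is an assumption following the common PPF convention.

```python
import numpy as np

def angle(u, v):
    """Unsigned angle between two 3-D vectors, in radians."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

def ppf(xi, xj, ni, nj):
    """4-D point pair feature: point distance plus three angles
    involving the offset vector d and the two surface normals."""
    d = xj - xi
    return np.array([np.linalg.norm(d),
                     angle(ni, d),
                     angle(nj, d),
                     angle(ni, nj)])

f = ppf(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
        np.array([0.0, 0.0, 1.0]), np.array([0.0, 1.0, 0.0]))
```

For this orthogonal configuration all three angles are 90 degrees, which makes the sketch easy to check by hand.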
- the global feature extraction module is specifically used for:
- the local mixed features of all points in the point cloud X in the i-th iteration are extracted; the resulting local mixed feature set is fed into a three-layer convolutional network and expanded to 1024-D, then multiplied by the overlapping area mask obtained in the previous iteration and passed through a max-pooling layer to obtain the final global feature.
- the feature fusion module is specifically used for:
- the feature fusion module consists of three convolutional layers; it takes as input the local mixed features and global features of the preoperative point cloud X together with the global features of the intraoperative point cloud Y, and outputs the 512-dimensional fusion features of the preoperative point cloud; the fusion features of the intraoperative point cloud are obtained in the same way.
- the overlap region mask prediction module is specifically used for:
- g( ⁇ ) represents the feature fusion module
- f( ⁇ ) represents the overlapping area mask prediction module
- the input fusion features pass through 4 convolutional layers; the last layer outputs the predicted overlapping area mask, while the decoding features of the point cloud are obtained by concatenating the outputs of the first three layers.
- the transformation matrix prediction module is specifically used for:
- the transformation matrix prediction module includes 5 convolution layers.
- the input features are sent to the transformation matrix prediction module after passing through the maximum pooling operation once.
- a 7-D feature vector is output, representing the spatial transformation of the current iteration: the first 4 values are the rotation quaternion q ∈ R^4 and the last 3 values are the translation vector t ∈ R^3; the input features here are the fusion features concatenated with the layer-by-layer concatenation features;
- p( ⁇ ) represents the transformation matrix prediction module, and after the iteration, the spatial transformation predicted in each round will be accumulated and calculated to obtain the overall transformation between the final preoperative point cloud and the intraoperative point cloud.
- the registration module is specifically used for:
- the transformation process can then be started. Specifically, the virtual position of the preoperative point cloud in the intraoperative point cloud area is recorded as P_T and the original position of the preoperative point cloud as P_I; with the rotation matrix R converted from the quaternion q and the translation vector t, then: P_T = R · P_I + t
- the present invention also provides a preoperative and intraoperative liver point cloud data registration method, comprising the following steps: a. extracting local mixed features containing geometric structure information from the preoperative point cloud data and the intraoperative point cloud data respectively; b. fusing the extracted local mixed features to obtain the global features of the preoperative point cloud and the global features of the intraoperative point cloud; c. fusing the local mixed features and global features of the preoperative point cloud with the global features of the intraoperative point cloud to obtain the fusion features of the preoperative point cloud, and similarly fusing the local mixed features and global features of the intraoperative point cloud with the global features of the preoperative point cloud to obtain the fusion features of the intraoperative point cloud; d. fusing the fusion features of the preoperative point cloud and the intraoperative point cloud to obtain their respective overlapping area masks and decoding features; e. multiplying the overlapping area masks of the preoperative and intraoperative point clouds with their respective local mixed features and concatenating them with their respective decoding features to obtain the spatial transformation matrix of the preoperative and intraoperative point clouds; f. applying the calculated spatial transformation matrix to the preoperative point cloud data to obtain the registration result of the preoperative point cloud data and the intraoperative point cloud data.
- the present invention also provides a terminal, which includes a processor and a memory coupled to the processor, wherein: the memory stores program instructions for implementing the preoperative and intraoperative liver point cloud data registration method; and the processor is used to execute the program instructions stored in the memory to implement preoperative and intraoperative liver point cloud data registration.
- the present invention also provides a storage medium storing program instructions executable by a processor, wherein the program instructions are used to execute the preoperative and intraoperative liver point cloud data registration method.
- the beneficial effects of the present application include: the features of preoperative and intraoperative point clouds are automatically extracted using deep learning methods, and are obtained by fusing local mixed features and global features.
- the local mixed features mainly focus on the local geometric information of the point cloud, while the global features have a wider field of view and capture the overall structure of the point cloud; combining the two can effectively improve point cloud registration accuracy and avoid falling into erroneous local optima. The point cloud is filtered using the learned overlapping area mask to reject points in non-overlapping areas, converting partial-to-partial point cloud registration into registration of same-shaped point clouds, which can effectively improve the registration accuracy between the preoperative liver point cloud data and the intraoperative laparoscopic liver surface point cloud.
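The mask-based rejection of non-overlapping points described above can be illustrated by a minimal sketch; the 0.5 threshold and the hard (binary) filtering are assumptions for illustration, since the application may equally use the mask as a soft weight.

```python
import numpy as np
rng = np.random.default_rng(1)

def filter_overlap(points, mask, threshold=0.5):
    """Keep only points whose predicted overlap score exceeds threshold,
    turning partial-to-partial registration into registration of two
    (approximately) same-shaped overlap regions."""
    return points[mask > threshold]

pts = rng.random((2048, 3))     # stand-in point cloud
mask = rng.random(2048)         # stand-in predicted overlap scores
kept = filter_overlap(pts, mask)
```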
- FIG1 is a schematic diagram of the structure of a preoperative and intraoperative liver point cloud data registration system according to an embodiment of the present application
- FIG2 is a flow chart of a method for preoperative and intraoperative liver point cloud data registration according to an embodiment of the present application
- FIG3 is a schematic diagram of a terminal structure according to an embodiment of the present application.
- FIG4 is a schematic diagram of the structure of a storage medium according to an embodiment of the present application.
- Figure 5 is a schematic diagram of the registration visualization results of the present application, document [1], and document [2] provided in an embodiment of the present application: (a) is the present application, (b) is document [1], (c) is document [2], and (d) is the ground truth alignment of the two point clouds; the points with the smallest diameter and lowest brightness represent the preoperative point cloud, the points with the largest diameter and highest brightness represent the intraoperative point cloud, and the points of intermediate diameter and brightness represent the preoperative point cloud after the registration transformation.
- FIG. 1 is a hardware architecture diagram of a preoperative and intraoperative liver point cloud data registration system 10 of the present invention.
- the system comprises: a local hybrid feature extraction module 101, a global feature extraction module 102, a feature fusion module 103, an overlap region mask prediction module 104, a transformation matrix prediction module 105, and a registration module 106.
- the local mixed feature extraction module 101 is used to extract local mixed features containing geometric structure information from the pre-operative point cloud data and the intra-operative point cloud data. Specifically:
- the local mixed feature f_X consists of three parts: the spatial coordinates of the point, the offsets to its neighboring points, and the local geometric features.
- for a point X_i in the point cloud X, let its neighborhood be N(X_i); its local mixed feature F_{X_i} is then expressed as:
- F_{X_i} = f(M^(k-1) · [X_i, D_{ij}, PPF(X_i, X_j)]), X_j ∈ N(X_i)
- where f(·) represents a multilayer perceptron network used to extract local mixed features of point clouds, M^(k-1) represents the overlapping area mask obtained in the previous iteration, and D_{ij} denotes the neighborhood point converted into a local neighboring point by subtracting the centroid of the neighborhood:
- D_{ij} = X_j − (1/|N(X_i)|) · Σ_{X_l ∈ N(X_i)} X_l
- PPF(X_i, X_j) represents the 4D point pair spatial feature (PPF) between X_i and X_j, described by the distance between the two spatial points and the angles between the normal vectors:
- PPF(X_i, X_j) = (‖d‖_2, ∠(n_i, d), ∠(n_j, d), ∠(n_i, n_j)), where d = X_j − X_i
- n_i and n_j represent the normal vectors of points X_i and X_j.
- the features of all points in the neighborhood N(X_i) of X_i are thus first converted into 10-D feature vectors of the form [X_i, D_{ij}, PPF(X_i, X_j)], and then passed through a series of convolutional layers, a max-pooling layer, and an output convolutional layer to obtain the final local mixed feature of point X_i.
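The per-point encoding just described (10-D neighbor features, shared layers, max-pooling over the neighborhood) can be sketched in NumPy as follows; the layer widths (64 and 96), the ReLU activations, and the random weights are illustrative assumptions standing in for the trained convolutional layers.

```python
import numpy as np
rng = np.random.default_rng(2)

def local_mixed_feature(neigh_feats, w1, w2):
    """PointNet-style encoder for one point: a shared two-layer MLP over
    the (K, 10) neighbor features followed by max-pooling over K."""
    h = np.maximum(neigh_feats @ w1, 0.0)   # shared layer + ReLU, (K, 64)
    h = np.maximum(h @ w2, 0.0)             # second shared layer, (K, 96)
    return h.max(axis=0)                    # order-invariant pooling, (96,)

K = 16                                      # neighborhood size (assumed)
neigh = rng.random((K, 10))                 # 3 coords + 3 offsets + 4 PPF
w1, w2 = rng.random((10, 64)), rng.random((64, 96))
f_local = local_mixed_feature(neigh, w1, w2)
```

Because the pooling is a max over the neighborhood axis, the result is invariant to the ordering of the neighbors, which is the property such encoders rely on.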
- the global feature extraction module 102 is used to fuse the local mixed features extracted above to obtain the global features of the pre-operative point cloud and the global features of the intra-operative point cloud. Specifically:
- the global features of point cloud X are obtained by expanding its local mixed features.
- pooling over the local mixed feature set of each point filters out some redundant information, so that the extracted global features are more representative.
- the local mixed feature set is fed into a three-layer convolutional network and expanded to 1024-D, then multiplied by the overlapping area mask obtained in the previous iteration and passed through a max-pooling layer to obtain the final global feature.
- the output dimensions of the three layers of the convolutional network are 96, 128 and 1024; the local mixed feature set is the collection of the local mixed features of all points.
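A minimal sketch of the masked max-pooling that produces the global feature; the 1024-D per-point features here are random stand-ins for the output of the three convolutional layers.

```python
import numpy as np
rng = np.random.default_rng(0)

def global_feature(local_feats, mask):
    """Mask the per-point features, then max-pool over the point axis.

    local_feats: (N, C) per-point features (C stands in for the 1024-D
    output of the three conv layers); mask: (N,) overlap scores in [0, 1]
    from the previous iteration.
    """
    masked = local_feats * mask[:, None]   # suppress non-overlap points
    return masked.max(axis=0)              # (C,) global descriptor

feats = rng.random((100, 1024))
mask = np.ones(100)
g = global_feature(feats, mask)
```

With a full mask the result equals a plain max-pool; a zero mask suppresses every point's contribution.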
- the feature fusion module 103 is used to fuse the local mixed features and global features of the preoperative point cloud and the global features of the intraoperative point cloud to obtain the fused features of the preoperative point cloud; similarly, the local mixed features and global features of the intraoperative point cloud and the global features of the preoperative point cloud are fused to obtain the fused features of the intraoperative point cloud. Specifically:
- the local mixed features and global features of the two point clouds are obtained through the local mixed feature extraction module 101 and the global feature extraction module 102.
- the feature fusion module 103 includes three convolution layers.
- it takes as input the local mixed features and global features of the preoperative point cloud X and the global features of the intraoperative point cloud Y, and outputs the 512-dimensional fusion features of the preoperative point cloud; the fusion features of the intraoperative point cloud are obtained in the same way.
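The fusion step can be sketched by tiling the two global descriptors onto each point's local feature and projecting to 512-D; the single random projection matrix is an illustrative stand-in for the three trained convolutional layers.

```python
import numpy as np
rng = np.random.default_rng(3)

def fuse(local_feats, g_self, g_other, w):
    """Tile both global descriptors onto every point's local feature and
    project to the fused dimension (512-D in the patent)."""
    n = local_feats.shape[0]
    stacked = np.concatenate(
        [local_feats,
         np.repeat(g_self[None, :], n, axis=0),    # own global feature
         np.repeat(g_other[None, :], n, axis=0)],  # other cloud's global
        axis=1)
    return stacked @ w                  # stand-in for the 3 conv layers

local = rng.random((2048, 96))          # per-point local mixed features
gX, gY = rng.random(1024), rng.random(1024)
w = rng.random((96 + 1024 + 1024, 512))
fused = fuse(local, gX, gY, w)
```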
- the overlap region mask prediction module 104 is used to fuse the obtained fusion features of the pre-operative point cloud and the fusion features of the intra-operative point cloud to obtain respective overlap region masks and decoding features. Specifically:
- g( ⁇ ) represents the feature fusion module 103
- f( ⁇ ) represents the overlap region mask prediction module 104
- the input fusion features pass through 4 convolutional layers; the last layer outputs the predicted overlapping area mask, while the decoding features of the point cloud are obtained by concatenating the outputs of the first three layers.
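A sketch of the mask prediction head: four per-point layers whose first three outputs are concatenated as decoding features, with a sigmoid on the last layer's scalar output. The layer widths (256/128/64/1) and activations are assumptions for illustration, not the patent's exact dimensions.

```python
import numpy as np
rng = np.random.default_rng(4)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mask_head(fused, weights):
    """Four per-point layers; the last outputs a scalar overlap score per
    point, and the first three layers' outputs are concatenated as the
    'decoding features' reused by the transform predictor."""
    h, skips = fused, []
    for w in weights[:-1]:
        h = np.maximum(h @ w, 0.0)        # shared layer + ReLU
        skips.append(h)
    mask = sigmoid((h @ weights[-1]).squeeze(-1))   # (N,) in [0, 1]
    decoding = np.concatenate(skips, axis=1)        # (N, 256+128+64)
    return mask, decoding

fused = rng.random((2048, 512))
ws = [rng.random((512, 256)), rng.random((256, 128)),
      rng.random((128, 64)), rng.random((64, 1))]
mask, dec = mask_head(fused, ws)
```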
- the transformation matrix prediction module 105 is used to multiply the overlapping area mask of the pre-operative point cloud and the intra-operative point cloud with their respective local mixed features, and link them with their respective decoded features to obtain the spatial transformation matrix of the pre-operative point cloud and the intra-operative point cloud. Specifically:
- the feature fusion module 103 provides the fusion features obtained by fusing the local mixed features and global features of the point cloud;
- the layer-by-layer concatenation features are obtained in the overlapping area mask prediction module 104;
- the fusion features and the layer-by-layer concatenation features are concatenated together to obtain the final features required for the spatial transformation prediction.
- the transformation matrix prediction module 105 includes 5 convolutional layers. The input features are sent to the transformation matrix prediction module 105 after passing through the maximum pooling operation once.
- a 7-D feature vector is output, representing the spatial transformation of the current iteration: the first 4 values are the rotation quaternion q ∈ R^4 and the last 3 values are the translation vector t ∈ R^3.
- the input features are the fusion features concatenated with the layer-by-layer concatenation features.
- the spatial transformations predicted in each round will be accumulated and calculated to obtain the overall transformation between the final pre-operative point cloud and the intra-operative point cloud.
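The accumulation of per-round spatial transformations described above can be sketched as quaternion-to-matrix conversion followed by rigid-transform composition; the (w, x, y, z) quaternion ordering is an assumption matching the common convention.

```python
import numpy as np

def quat_to_mat(q):
    """Rotation matrix from a unit quaternion (w, x, y, z)."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def compose(R1, t1, R2, t2):
    """Apply (R1, t1) first, then (R2, t2): x -> R2 (R1 x + t1) + t2."""
    return R2 @ R1, R2 @ t1 + t2

# accumulate two per-iteration 7-D predictions [q | t]:
# each rotates 45 degrees about z, with different translations
pred1 = np.array([np.cos(np.pi/8), 0, 0, np.sin(np.pi/8), 1.0, 0.0, 0.0])
pred2 = np.array([np.cos(np.pi/8), 0, 0, np.sin(np.pi/8), 0.0, 1.0, 0.0])
R, t = quat_to_mat(pred1[:4]), pred1[4:]
R, t = compose(R, t, quat_to_mat(pred2[:4]), pred2[4:])
```

Composing the two 45-degree rotations yields a 90-degree rotation about z, which is easy to verify by inspection.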
- the registration module 106 is used to apply the calculated spatial transformation matrix to the preoperative point cloud data to obtain the registration result of the preoperative point cloud data and the intraoperative point cloud data.
- the preoperative point cloud data is the preoperative point cloud after the registration transformation in the previous iteration; the registration result refers to the registration of the preoperative point cloud data to the intraoperative point cloud data through the transformation matrix.
- once the overall transformation has been obtained, the transformation process can be started.
- the specific operation is: the virtual position of the preoperative point cloud in the intraoperative point cloud area is recorded as P_T and the original position of the preoperative point cloud as P_I; with the rotation matrix R converted from the quaternion q and the translation vector t, then: P_T = R · P_I + t
- This embodiment is executed 4 times, that is, iterated 4 times, to improve the registration accuracy, so as to register the preoperative point cloud data to the intraoperative point cloud data.
- iterating 1, 2, or 3 times can also implement the present application and is also within the protection scope of the present application.
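The iterative scheme above (predict a per-round transform, warp the preoperative cloud, accumulate the overall transform, repeat 4 times) can be sketched as follows; the toy constant-translation predictor stands in for the learned transformation matrix prediction module.

```python
import numpy as np

def apply_transform(R, t, P):
    """P_T = R * P_I + t for an (N, 3) point array."""
    return P @ R.T + t

def register(P_pre, predict_step, n_iters=4):
    """Run the predictor for n_iters rounds, accumulating the overall
    rigid transform and warping the preoperative cloud each round."""
    R_all, t_all = np.eye(3), np.zeros(3)
    P = P_pre
    for _ in range(n_iters):
        R, t = predict_step(P)                 # one round's (R, t)
        R_all, t_all = R @ R_all, R @ t_all + t
        P = apply_transform(R, t, P)
    return R_all, t_all, P

# toy predictor: always shift by +1 along x (a trained predictor would
# be the transformation matrix prediction module)
step = lambda P: (np.eye(3), np.array([1.0, 0.0, 0.0]))
P0 = np.zeros((5, 3))
R_all, t_all, P_final = register(P0, step)
```

With the toy predictor, four iterations accumulate a total translation of 4 along x, matching the per-round composition rule.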
- FIG. 2 is a flowchart of a preferred embodiment of the preoperative and intraoperative liver point cloud data registration method of the present invention.
- Step S1 the local mixed feature extraction module 101 extracts local mixed features containing geometric structure information from the pre-operative point cloud data and the intra-operative point cloud data. Specifically:
- the local mixed feature f_X consists of three parts: the spatial coordinates of the point, the offsets to its neighboring points, and the local geometric features.
- for a point X_i in the point cloud X, let its neighborhood be N(X_i); its local mixed feature F_{X_i} is then expressed as:
- F_{X_i} = f(M^(k-1) · [X_i, D_{ij}, PPF(X_i, X_j)]), X_j ∈ N(X_i)
- where f(·) represents a multilayer perceptron network used to extract local mixed features of point clouds, M^(k-1) represents the overlapping area mask obtained in the previous iteration, and D_{ij} denotes the neighborhood point converted into a local neighboring point by subtracting the centroid of the neighborhood:
- D_{ij} = X_j − (1/|N(X_i)|) · Σ_{X_l ∈ N(X_i)} X_l
- PPF(X_i, X_j) represents the 4D point pair spatial feature (PPF) between X_i and X_j, described by the distance between the two spatial points and the angles between the normal vectors:
- PPF(X_i, X_j) = (‖d‖_2, ∠(n_i, d), ∠(n_j, d), ∠(n_i, n_j)), where d = X_j − X_i
- n_i and n_j represent the normal vectors of points X_i and X_j.
- the features of all points in the neighborhood N(X_i) of X_i are thus first converted into 10-D feature vectors of the form [X_i, D_{ij}, PPF(X_i, X_j)], and then passed through a series of convolutional layers, a max-pooling layer, and an output convolutional layer to obtain the final local mixed feature of point X_i.
- Step S2 the global feature extraction module 102 fuses the local mixed features extracted above to obtain the global features of the pre-operative point cloud and the global features of the intra-operative point cloud. Specifically:
- the global features of point cloud X are obtained by expanding its local mixed features.
- pooling over the local mixed feature set of each point filters out some redundant information, so that the extracted global features are more representative.
- the local mixed feature set is fed into a three-layer convolutional network and expanded to 1024-D, then multiplied by the overlapping area mask obtained in the previous iteration and passed through a max-pooling layer to obtain the final global feature.
- the output dimensions of the three layers of the convolutional network are 96, 128 and 1024; the local mixed feature set is the collection of the local mixed features of all points.
- Step S3 the feature fusion module 103 fuses the local mixed features and global features of the preoperative point cloud and the global features of the intraoperative point cloud to obtain the fused features of the preoperative point cloud; similarly, the local mixed features and global features of the intraoperative point cloud and the global features of the preoperative point cloud are fused to obtain the fused features of the intraoperative point cloud.
- the feature fusion module 103 includes three convolutional layers.
- it takes as input the local mixed features and global features of the preoperative point cloud X and the global features of the intraoperative point cloud Y, and outputs the 512-dimensional fusion features of the preoperative point cloud; the fusion features of the intraoperative point cloud are obtained in the same way.
- Step S4 fusing the obtained fusion features of the pre-operative point cloud and the fusion features of the intra-operative point cloud to obtain respective overlapping area masks and decoding features.
- g( ⁇ ) represents the feature fusion module 103 in step S3
- f( ⁇ ) represents the overlap region mask prediction module 104
- the input fusion features pass through 4 convolutional layers; the last layer outputs the predicted overlapping area mask, while the decoding features of the point cloud are obtained by concatenating the outputs of the first three layers.
- Step S5 the transformation matrix prediction module 105 multiplies the overlapping area masks of the pre-operative point cloud and the intra-operative point cloud with their respective local mixed features, and links them with their respective decoded features to obtain the spatial transformation matrices of the pre-operative point cloud and the intra-operative point cloud. Specifically:
- Step S3 provides the fusion features obtained by fusing the local mixed features and global features of the point cloud;
- Step S4 provides the layer-by-layer concatenation features;
- the fusion features and the layer-by-layer concatenation features are concatenated together to obtain the final features required for the spatial transformation prediction.
- the transformation matrix prediction module 105 includes 5 convolutional layers.
- the input features are sent to the transformation matrix prediction module 105 after one max-pooling operation. After 5 layers of convolution, a 7-D feature vector is output, representing the spatial transformation of the current iteration: the first 4 values are the rotation quaternion q ∈ R^4 and the last 3 values are the translation vector t ∈ R^3.
- the input features are the fusion features concatenated with the layer-by-layer concatenation features.
- the spatial transformation predicted in each round will be accumulated and calculated to obtain the overall transformation between the final pre-operative point cloud and the intra-operative point cloud.
- Step S6 applying the calculated spatial transformation matrix to the preoperative point cloud data to obtain the registration result of the preoperative point cloud data and the intraoperative point cloud data.
- the preoperative point cloud data is the preoperative point cloud after the registration transformation in the previous iteration; the registration result refers to the registration of the preoperative point cloud data to the intraoperative point cloud data through the transformation matrix.
- the transformation process can then be started.
- the specific operation is: the virtual position of the preoperative point cloud in the intraoperative point cloud area is recorded as P_T and the original position of the preoperative point cloud as P_I; with the rotation matrix R converted from the quaternion q and the translation vector t, then: P_T = R · P_I + t
- This embodiment is executed 4 times, that is, steps S1-S6 are iterated 4 times to improve the registration accuracy, so as to register the preoperative point cloud data to the intraoperative point cloud data.
- the present application can also be implemented by iterating 1, 2, or 3 times, which is also within the protection scope of the present application.
- the terminal 50 includes a processor 51 and a memory 52 coupled to the processor 51 .
- the memory 52 stores program instructions for implementing the above-mentioned preoperative and intraoperative liver point cloud data registration method.
- the processor 51 is used to execute program instructions stored in the memory 52 to achieve preoperative and intraoperative liver point cloud data registration.
- the processor 51 may also be referred to as a CPU (Central Processing Unit).
- the processor 51 may be an integrated circuit chip having signal processing capabilities.
- the processor 51 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
- the general-purpose processor may be a microprocessor or the processor may also be any conventional processor, etc.
- FIG 4 is a schematic diagram of the structure of the storage medium of the embodiment of the present application.
- the storage medium of the embodiment of the present application stores a program file 61 that can implement all the above methods; the program file 61 can be stored in the storage medium in the form of a software product, including a number of instructions to enable a computer device (which can be a personal computer, server, or network device, etc.) or a processor to execute all or part of the steps of each implementation method of the present application.
- the aforementioned storage medium includes: USB flash drives, removable hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, optical disks, and other media that can store program code, as well as terminal devices such as computers, servers, mobile phones, and tablets.
- the test set used in this application contains 13 sets of preoperative and intraoperative pig liver surface point cloud data pairs in different deformation states.
- the test set was taken from a 50 kg male white pig.
- Richard et al. used laparotomy to evenly distribute fifteen 1-cm-long metal clips on the ventral surface of the liver, and implanted 45 metal balls with a diameter of 2 mm inside the liver, in order to calculate the TRE after registration and evaluate the accuracy.
- the laparoscopic surface data of each configuration was recorded in the video using a three-dimensional camera system.
- the intraoperative liver surface data was obtained by 3D surface reconstruction of the laparoscopic video of each configuration with the PhotoScan Pro (4) software, using the dense SfM method.
- PhotoScan Pro is a state-of-the-art surface reconstruction software.
- the preoperative liver surface point cloud data was obtained by sampling the computed tomography (CT) data of each configuration.
- CT computed tomography
- the experimental pigs were kept in apnea during data collection, and the preoperative and intraoperative liver point cloud data had been registered.
- 2048 points were randomly sampled from the preoperative and intraoperative point cloud data as experimental point cloud data, and the position information of 15 metal clips on the ventral surface of the liver was used to evaluate the registration accuracy.
- The main evaluation indicators are the target registration error (TRE) for point-wise registration error, and the isotropic rotation error and translation error for the registration transformation matrix.
- The target registration error is the distance, after registration, between corresponding fiducial points that were not used to drive the registration.
- The isotropic rotation error and translation error measure the deviation of the predicted rotation matrix and translation vector from their ground-truth values.
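A sketch of these metrics using their standard definitions (assumed here; pure Python, with 3×3 rotation matrices as nested lists):

```python
import math

def tre(reg_pts, gt_pts):
    """Target registration error: mean Euclidean distance between
    corresponding fiducial points after registration."""
    return sum(math.dist(p, q) for p, q in zip(reg_pts, gt_pts)) / len(reg_pts)

def isotropic_rotation_error(r_pred, r_gt):
    """Geodesic angle (in degrees) between two 3x3 rotation matrices:
    arccos((trace(r_pred^T @ r_gt) - 1) / 2)."""
    # trace(A^T B) = sum of element-wise products of A and B
    trace = sum(r_pred[i][j] * r_gt[i][j] for i in range(3) for j in range(3))
    cos_angle = max(-1.0, min(1.0, (trace - 1.0) / 2.0))  # clamp for safety
    return math.degrees(math.acos(cos_angle))

def isotropic_translation_error(t_pred, t_gt):
    """Euclidean distance between predicted and ground-truth translations."""
    return math.dist(t_pred, t_gt)
```

For example, a prediction rotated 90° about the z-axis relative to ground truth yields an isotropic rotation error of 90°.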
- RMSE: root mean square error
- MAE: mean absolute error
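Both summary statistics are straightforward to compute over a list of per-pair scalar errors; a minimal sketch (not code from the application):

```python
import math

def rmse(errors):
    """Root mean square error over a list of scalar errors."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def mae(errors):
    """Mean absolute error over a list of scalar errors."""
    return sum(abs(e) for e in errors) / len(errors)
```

RMSE penalizes large outlier errors more heavily than MAE, so reporting both gives a fuller picture of registration robustness.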
- Table 1 compares the TRE of this application, reference [1], and reference [2] on the test set. The last column gives the mean TRE over the entire test set, and the best result is marked in bold. Reference [2] failed to align one data pair and could not produce a correct result, so its TRE in the sixth column is marked "--".
- Table 2 compares the errors of the three methods on the transformation matrix, where Error(R) and Error(t) denote the isotropic rotation and translation errors, and RMSE(R) and RMSE(t) denote the root mean square errors of the rotation matrix and translation vector; the corresponding mean errors between the predicted and ground-truth values are also reported.
- Table 3 compares the registration success rates of the three methods.
- FIG. 5 shows the registration visualization results of the three algorithms: (a) this application, (b) reference [1], and (c) reference [2].
- This application uses a learned overlapping-region mask to filter out non-overlapping areas, converting part-to-part point cloud registration into registration of identically shaped clouds, and then registers the extracted overlapping-region point clouds using local mixed features and global features.
- This application can better adapt to the registration of liver point cloud data under 3D laparoscopy.
- This application can register the preoperative liver CT image with the liver surface obtained by stereoscopic reconstruction of intraoperative 3D laparoscopic images, facilitating precise localization and resection of liver tumors during surgery, shortening operation time, increasing the tumor resection rate, and improving the accuracy and safety of the operation.
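The overlap-mask pipeline described above can be caricatured in a few lines. This is only a structural sketch: `mask_scores` stands in for the learned overlapping-region mask, and the translation-only centroid alignment stands in for the regressed spatial transformation matrix; neither is the application's actual network.

```python
def filter_overlap(points, mask_scores, threshold=0.5):
    """Keep only points whose predicted overlap score exceeds the threshold,
    reducing part-to-part registration to same-shape registration."""
    return [p for p, s in zip(points, mask_scores) if s > threshold]

def centroid(points):
    """Component-wise mean of a list of (x, y, z) tuples."""
    n = len(points)
    return tuple(sum(coords) / n for coords in zip(*points))

def register_translation(source, target):
    """Toy stand-in for the learned transform regression: the translation
    that aligns the source centroid with the target centroid."""
    cs, ct = centroid(source), centroid(target)
    return tuple(t - s for s, t in zip(cs, ct))

# Toy example: the intraoperative overlap region is the preoperative overlap
# shifted by (2.0, 3.0, 4.0); the non-overlapping points are masked out first.
pre = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (9.0, 9.0, 9.0)]
intra = [(2.0, 3.0, 4.0), (3.0, 3.0, 4.0), (-5.0, -5.0, -5.0)]
pre_overlap = filter_overlap(pre, [0.9, 0.8, 0.1])
intra_overlap = filter_overlap(intra, [0.9, 0.8, 0.1])
shift = register_translation(pre_overlap, intra_overlap)  # (2.0, 3.0, 4.0)
```

Masking before estimating the transform is the key idea: without it, the non-overlapping points would pull the centroids (or any least-squares fit) away from the true alignment.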
Abstract
The invention relates to a preoperative and intraoperative liver point cloud data registration system and method, a terminal, and a storage medium. The method comprises: extracting local hybrid features of preoperative point cloud data and local hybrid features of intraoperative point cloud data (S1); fusing the local hybrid features to obtain global features of the preoperative point cloud and global features of the intraoperative point cloud (S2); fusing the local hybrid features of the preoperative point cloud with the global features of the preoperative and intraoperative point clouds to obtain fused features of the preoperative point cloud, and obtaining fused features of the intraoperative point cloud in the same way (S3); fusing the fused features of the preoperative and intraoperative point clouds to obtain their respective overlapping-region masks and decoded features (S4); obtaining the spatial transformation matrices between the preoperative and intraoperative point clouds (S5); and applying the spatial transformation matrices to the preoperative point cloud data to obtain the registration result of the preoperative and intraoperative point cloud data (S6). The method and system facilitate precise localization of a liver tumor, thereby shortening operation time and improving the accuracy and safety of surgery.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2022/122374 WO2024065343A1 (fr) | 2022-09-29 | 2022-09-29 | Preoperative and intraoperative liver point cloud data registration system and method, terminal, and storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024065343A1 (fr) | 2024-04-04 |
Family
ID=90475352
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2022/122374 Ceased WO2024065343A1 (fr) | 2022-09-29 | 2022-09-29 | Système et procédé d'enregistrement de données de nuage de points hépatiques préopératoires et peropératoires, terminal, et support de stockage |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2024065343A1 (fr) |
Cited By (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119090930A (zh) * | 2024-11-07 | 2024-12-06 | 合肥工业大学 | Non-rigid registration method, apparatus, device, and storage medium for point cloud processing |
| CN119090927A (zh) * | 2024-09-02 | 2024-12-06 | 山东大学 | Non-rigid point cloud registration method for computer-assisted liver surgery |
| CN119090929A (zh) * | 2024-11-07 | 2024-12-06 | 合肥工业大学 | Deep-learning-based real-time registration method from a preoperative model to an intraoperative point cloud |
| CN119205865A (zh) * | 2024-11-29 | 2024-12-27 | 山东大学 | Partial-to-whole non-rigid point cloud registration method and system |
| CN119273726A (zh) * | 2024-09-02 | 2025-01-07 | 山东大学 | Feature-guided rigid point cloud registration method for computer-assisted orthopedic surgery |
| CN119863499A (zh) * | 2025-03-25 | 2025-04-22 | 浙江华是科技股份有限公司 | Low-overlap point cloud registration method and system |
| CN119941529A (zh) * | 2025-01-23 | 2025-05-06 | 山东奥柏生物科技有限公司 | Model fusion system for preoperative and intraoperative image data in cardiac interventional surgery |
| CN120065669A (zh) * | 2025-04-28 | 2025-05-30 | 深圳市八方同创科技有限公司 | Holographic optical imaging system for metaverse augmented reality display |
Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2017180097A1 (fr) * | 2016-04-12 | 2017-10-19 | Siemens Aktiengesellschaft | Deformable registration of intraoperative and preoperative inputs using generative mixture models and biomechanical deformation |
| CN111028242A (zh) * | 2019-11-27 | 2020-04-17 | 中国科学院深圳先进技术研究院 | Automatic tumor segmentation system and method, and electronic device |
| CN111179324A (zh) * | 2019-12-30 | 2020-05-19 | 同济大学 | Object six-degree-of-freedom pose estimation method based on fusion of color and depth information |
| CN112907642A (zh) * | 2021-03-01 | 2021-06-04 | 沈阳蓝软智能医疗科技有限公司 | Method, system, storage medium, and device for precise registration and overlay of preoperative CT or MRI images with corresponding intraoperative lesions |
| CN114022523A (zh) * | 2021-10-09 | 2022-02-08 | 清华大学 | Low-overlap point cloud data registration system and method |
| CN114638867A (zh) * | 2022-03-25 | 2022-06-17 | 西安电子科技大学 | Point cloud registration method and system based on a feature extraction module and dual quaternions |
| US20220254095A1 (en) * | 2021-02-03 | 2022-08-11 | Electronics And Telecommunications Research Institute | Apparatus and method for searching for global minimum of point cloud registration error |
2022
- 2022-09-29: WO PCT/CN2022/122374, patent WO2024065343A1 (fr), not active (ceased)
Non-Patent Citations (1)
| Title |
|---|
| HE Baochun, JIA Fucang: "Head and Neck CT Segmentation Based on a Combined U-Net Model", Journal of Integration Technology, Science Press, CN, vol. 9, no. 2, 15 March 2020, pages 17-24, XP093151586, ISSN 2095-3135, DOI 10.12146/j.issn.2095-3135.20191216001 * |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2024065343A1 (fr) | Preoperative and intraoperative liver point cloud data registration system and method, terminal, and storage medium | |
| CN112802185B (zh) | Endoscopic image three-dimensional reconstruction method and system for spatial perception in minimally invasive surgery | |
| CN102999938B (zh) | Method and system for model-based fusion of multimodal volumetric images | |
| Modrzejewski et al. | An in vivo porcine dataset and evaluation methodology to measure soft-body laparoscopic liver registration accuracy with an extended algorithm that handles collisions | |
| CN118037793B (zh) | Registration method and apparatus for intraoperative X-ray and CT images | |
| CN111524170A (zh) | Lung CT image registration method based on unsupervised deep learning | |
| CN107680688B (zh) | Visual navigation verification method for simulated minimally invasive pelvic surgery based on 3D printing | |
| CN113143459A (zh) | Laparoscopic augmented reality surgical navigation method and apparatus, and electronic device | |
| CN109903268B (zh) | Method and computing device for determining the anomaly type of a spine image set | |
| Liu et al. | Global and local panoramic views for gastroscopy: an assisted method of gastroscopic lesion surveillance | |
| CN115527003A (zh) | Preoperative and intraoperative liver point cloud data registration system, method, terminal, and storage medium | |
| CN115049806B (zh) | Facial augmented reality calibration method and apparatus based on Monte Carlo tree search | |
| CN111260704B (zh) | 3D/2D rigid registration method and apparatus for vascular structures based on heuristic tree search | |
| CN113823399B (zh) | Positioning control method and apparatus for two-dimensional medical imaging equipment, and computer device | |
| CN118505763A (zh) | Geometry-assisted 3D surface rigid registration method, system, and electronic device | |
| US11138736B2 (en) | Information processing apparatus and information processing method | |
| Guan et al. | Intraoperative laparoscopic liver surface registration with preoperative CT using mixing features and overlapping region masks | |
| CN112472293B (zh) | Registration method for preoperative three-dimensional images and intraoperative fluoroscopic images | |
| CN111063441A (zh) | Liver deformation prediction method, system, and electronic device | |
| CN114581340A (zh) | Image correction method and device | |
| WO2020031071A1 (fr) | Localization of an internal organ of a subject to provide assistance during surgery | |
| Daly et al. | Multimodal image registration using multiresolution genetic optimization | |
| Lecomte et al. | Beyond respiratory models: a physics-enhanced synthetic data generation method for 2D-3D deformable registration | |
| RU2505860C2 (ru) | Simultaneous model-based segmentation of objects satisfying predefined spatial relations | |
| CN114998239B (zh) | Respiration correction method, computer device, and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 22959986; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: PCT application non-entry in European phase | Ref document number: 22959986; Country of ref document: EP; Kind code of ref document: A1 |