
WO2024169341A1 - Registration method for multimodality image-guided radiotherapy - Google Patents


Info

Publication number
WO2024169341A1
WO2024169341A1 · PCT/CN2023/136602
Authority
WO
WIPO (PCT)
Prior art keywords
image
registered
point cloud
registration
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2023/136602
Other languages
French (fr)
Chinese (zh)
Inventor
赵汉卿
梁晓坤
谢耀钦
秦文健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Publication of WO2024169341A1


Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00: ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/40: ICT specially adapted for therapies or health-improving plans relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20084: Artificial neural networks [ANN]
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30004: Biomedical image processing
    • G06T2207/30096: Tumor; Lesion
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Definitions

  • the present invention relates to the technical field of medical image processing, and more specifically, to a registration method for multimodal image-guided radiotherapy.
  • Image-guided radiotherapy is a personalized radiotherapy method that is performed under the guidance of medical images of the human body. This method can show the different sizes, shapes and locations of cancer lesions. Doctors can develop a radiotherapy plan based on this, which can accurately irradiate the cancer lesions while ensuring that the surrounding healthy tissues receive low doses of radiation as much as possible to avoid damaging healthy tissues.
  • Medical images of different modalities can display different information about the human body from microscopic to macroscopic scales and from the inside to the outside.
  • Using multimodal medical image registration technology to match medical images of different modalities to the same spatial and temporal scales can help doctors accurately delineate the location of the patient's lesions and develop more accurate radiotherapy plans.
  • To some extent this technology has overcome the problems of unmatched medical image information and hard-to-calibrate temporal and spatial information, but traditional methods rely heavily on manually designed feature selectors and feature matchers: manually designed feature matchers are not robust enough for multimodal image registration, and feature selectors and matchers that perform well on single-modality images usually cannot adapt to images of multiple modalities.
  • the existing technology mainly has the following defects: 1) formulating a radiotherapy plan is time-consuming, and during radiotherapy the radiologist cannot update the plan in real time as the patient moves; 2) the boundaries and details of cancer lesions are difficult to delineate on a single-modality image, while fusing multiple modalities for planning demands considerable physician experience; 3) existing deep-learning-based multimodal registration methods suffer from high distortion and poor interpretability.
  • the purpose of the present invention is to overcome the above-mentioned defects of the prior art and provide a registration method for multimodal image-guided radiotherapy.
  • the method comprises the following steps:
  • a radiation therapy plan is formulated using the registered images.
  • the advantage of the present invention is that it proposes an image registration method based on deformation-vector-field constraints from anatomical point clouds and an attention mechanism, which can automatically extract features and match them in a deep feature space, thereby realizing fully automatic registration of multimodal images with end-to-end optimization.
  • the present invention combines the anatomical structure point cloud of the organ itself to constrain the image registration in the multimodal image-guided radiotherapy system, thereby improving the accuracy of image registration and medical interpretability.
  • FIG1 is a flow chart of a registration method for multi-modality image-guided radiotherapy according to an embodiment of the present invention
  • FIG2 is a schematic diagram of a process of a registration method for multi-modality image-guided radiotherapy according to an embodiment of the present invention
  • FIG3 is a schematic diagram of the architecture of a deep convolutional network based on a self-attention mechanism according to an embodiment of the present invention
  • FIG4 is a schematic diagram of two connected Swin-Transformer blocks according to one embodiment of the present invention.
  • FIG5 is a schematic diagram of registration effects under three evaluation indicators according to an embodiment of the present invention.
  • the present invention provides a multimodal image registration method based on deformation vector field constraints and attention mechanism of anatomical point cloud.
  • the method first uses a public multimodal image-guided radiotherapy image data set for training.
  • the designed training network is, for example, a deep convolutional network based on a self-attention mechanism.
  • the image features are extracted in combination with the attention mechanism, and the image is feature matched by a feature matcher.
  • the image deformation vector field is generated by feature matching, thereby obtaining a deformation vector field that is closer to the real situation and achieving a more accurate registration effect.
  • the provided multimodal image-guided radiotherapy registration method includes the following steps.
  • Step S110 collecting data of medical image-guided radiotherapy of different modalities.
  • Medical images of different modalities include magnetic resonance imaging, ultrasound imaging, computed tomography, cone beam computed tomography or other types of medical images.
  • Data of medical image-guided radiotherapy of different modalities can be obtained from public datasets or collected by the user.
  • Step S120 preprocessing the multimodal images to add labels of different target organs.
  • step S120 includes:
  • Step S121 normalizing the multimodal image data, including the voxel coordinate directions, the spacing between voxels, etc.
  • Step S122 label the organs and add identifiable organ marker labels.
  • Step S130 converting the target organ label into a point cloud, and based on the point cloud, using a non-rigid registration method to perform a coarse registration based on the organ anatomical structure point cloud.
  • step S130 includes:
  • Step S131 Use the marching cubes method to construct the human organ anatomical structure point cloud and the target area point cloud.
  • for each data point, the central difference method is used to calculate the gradients G_x, G_y, G_z at the vertices of the voxel containing the point, where a, b and c represent the voxel spacings along the three axes (the distances between adjacent hexahedral cells).
  • Step S132 After the gradients G_x, G_y and G_z of the eight voxel vertices have been calculated by central differences, the gradients at the vertices of the triangle patches inside the cell are obtained by linear interpolation in order to draw the isosurface.
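The central-difference gradient of steps S131 and S132 can be sketched with NumPy, whose `np.gradient` applies central differences in the interior and one-sided differences at the boundaries; the spacings `a`, `b`, `c` and the ramp volume below are only illustrative:

```python
import numpy as np

# Sketch of the central-difference gradients used in steps S131/S132.
# a, b, c are the voxel spacings along the three axes.
def voxel_gradients(volume, a, b, c):
    """Return (Gx, Gy, Gz) at every voxel vertex."""
    return np.gradient(volume.astype(float), a, b, c)

# Toy volume: a linear ramp f(i, j, k) = 2*i, whose exact gradient is (2, 0, 0).
vol = np.fromfunction(lambda i, j, k: 2.0 * i, (4, 4, 4))
gx, gy, gz = voxel_gradients(vol, 1.0, 1.0, 1.0)
```

Because central differences are exact for linear data, the computed gradient matches the analytic one everywhere on this toy volume.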
  • Step S133 extracting the coordinates of the isosurface vertices and defining them as a point cloud, and using a non-rigid registration method to perform registration based on the organ anatomical structure point cloud on the defined target point cloud.
  • min E(X) = min{ L_d(X) + α·L_s(X) + β·L_l(X) }    (4)
  • L_d(X) is the distance term
  • L_s(X) is the stiffness (rigidity) term
  • L_l(X) is the key point term
  • α and β are the weights of the stiffness and key point terms
  • the goal is to minimize this function and find the deformation vector field that meets the conditions.
  • w_i is the weight
  • v_i is the i-th point in the point cloud M to be deformed
  • dist(S, v_i) represents the distance between point v_i and the nearest point on the template image S
  • X_i is a transformation matrix of size 3×4
  • x, y and z represent the coordinate values of point v_i.
  • the stiffness term is L_s(X)
  • the key point term is L_l(X)
  • l represents a feature point in the template image (or reference image)
  • the last term of the loss function is the feature point constraint: if a feature point set is given, this loss accounts for the distance between the feature points to be registered and the feature points of the template image; if no feature point set is given, this term can be omitted.
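The explicit formulas for the three terms were lost in extraction. For reference, in the optimal-step non-rigid ICP energy (Amberg et al.), which this distance/stiffness/landmark decomposition resembles, the terms are typically written as follows; the edge set E and node-arc weighting matrix G belong to that formulation and are not taken from the source:

```latex
% Hedged reconstruction, following optimal-step non-rigid ICP (Amberg et al.):
L_d(X) = \sum_{v_i \in M} w_i \, \operatorname{dist}^2\!\left(S,\; X_i v_i\right)
\qquad
L_s(X) = \sum_{(i,j) \in E} \left\lVert (X_i - X_j)\, G \right\rVert_F^2
\qquad
L_l(X) = \sum_{(v_i,\, l)} \left\lVert X_i v_i - l \right\rVert^2
```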
  • the registration process based on the organ anatomical structure point cloud using the non-rigid registration method includes the following steps:
  • the parameters ⁇ and ⁇ are updated.
  • the rigid term weight is ⁇ i+1 ⁇ i
  • the key point weight is ⁇ i+1 ⁇ i .
  • the second loop is to solve the parameter X, including:
  • the template image and the image to be registered are initially deformed.
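The two-loop structure above can be sketched as follows. This is a toy stand-in, not the method's actual solver: `solve_step` merely moves each point part-way toward its nearest target, damped by the stiffness weight, whereas the real inner loop solves a regularized linear system for the 3×4 transforms; the weight schedule is likewise illustrative:

```python
import numpy as np

# Toy sketch of the two-loop optimization: an outer loop that monotonically
# lowers the stiffness weight alpha, and an inner loop that re-solves the
# per-point transforms X at the current weights.
def nearest_targets(pts, targets):
    # Brute-force nearest-neighbor lookup.
    d2 = ((pts[:, None, :] - targets[None, :, :]) ** 2).sum(-1)
    return targets[d2.argmin(axis=1)]

def solve_step(src, tgt, X, alpha):
    # Apply current transforms, then nudge translations toward targets;
    # larger alpha (stiffer) -> smaller per-step motion.
    cur = np.einsum('nij,nj->ni', X[:, :, :3], src) + X[:, :, 3]
    X = X.copy()
    X[:, :, 3] += (nearest_targets(cur, tgt) - cur) / (1.0 + alpha)
    return X

def register(src, tgt, alphas=(8.0, 4.0, 2.0, 1.0), inner_iters=5):
    X = np.tile(np.eye(3, 4), (len(src), 1, 1))  # identity 3x4 per point
    for alpha in alphas:                 # outer loop: alpha_{i+1} <= alpha_i
        for _ in range(inner_iters):     # inner loop: solve for X
            X = solve_step(src, tgt, X, alpha)
    return X

# Usage: a well-separated cloud displaced by a small translation.
src = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
tgt = src + np.array([1.0, 0.5, 0.25])
X = register(src, tgt)
warped = np.einsum('nij,nj->ni', X[:, :, :3], src) + X[:, :, 3]
```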
  • the multimodal images are roughly registered using the human organ anatomical structure point cloud method, which lends interpretability to the image registration task in image-guided radiotherapy.
  • the roughly registered deformation vector field is interpolated using linear interpolation to obtain a deformation vector field that is consistent with the original image size.
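Upsampling the coarse deformation vector field to the original image size can be sketched with `scipy.ndimage.zoom` (order=1 gives trilinear interpolation); the coarse and full grid shapes below are illustrative:

```python
import numpy as np
from scipy.ndimage import zoom

# Sketch of the interpolation step: a coarse deformation vector field
# (one 3-vector per coarse grid node) is linearly upsampled to the
# original image size.
def upsample_dvf(dvf_coarse, full_shape):
    factors = [full_shape[i] / dvf_coarse.shape[i] for i in range(3)] + [1.0]
    return zoom(dvf_coarse, factors, order=1)  # order=1 -> trilinear

coarse = np.zeros((5, 6, 5, 3))
coarse[..., 0] = 1.5                       # constant x-displacement
full = upsample_dvf(coarse, (40, 48, 40))  # -> shape (40, 48, 40, 3)
```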
  • Step S140 using a deep convolutional network, performing deep learning-based precise registration on the obtained multimodal images with the same label to obtain a final registered image.
  • FIG3 is an architecture diagram of a deep convolutional network as a registration network, which is used to further obtain the deformation field between the image to be registered and the template image.
  • the deep convolutional network as a whole includes an encoder and a decoder, wherein the encoder is a self-attention encoder.
  • This cascaded approach connects the point cloud registration of step S130 and the registration method based on a deep neural network, further improving the accuracy of image registration.
  • a convolutional network based on a self-attention mechanism is used for fine image registration in multimodal image-guided therapy. It should be noted that other types of deep convolutional network architectures may also be used.
  • step S140 includes:
  • Step S141 using a convolutional image feature extraction model based on a self-attention mechanism, inputting the template image preliminarily deformed in step S130 and the image to be registered into a self-attention encoder for feature encoding;
  • Step S142 input the fused features into the deep convolution layer for decoding prediction, perform prediction after fusing features of different scales, and output a predicted image with the same size as the original image;
  • Step S143 Compare the predicted result with the original image and back-propagate it to the self-attention encoder to satisfy the following conditions:
  • p represents the position of a voxel in the image
  • m and f represent the image to be registered and the template image, and their local means represent the average voxel values of the voxels adjacent to p_i in the image to be registered and the template image, respectively
  • the deformation vector field is defined from the image to be registered to the template image
  • p_i represents the central voxel of the current operation
  • R represents the image domain.
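The similarity above compares neighborhood statistics around each center voxel p_i; a common concrete choice consistent with that description is local normalized cross-correlation (LNCC). The exact formula is not reproduced in the text, so the form below is an assumption, written for clarity rather than speed:

```python
import numpy as np

# Hedged sketch of a local normalized cross-correlation (LNCC) similarity
# over cubic windows centered at each voxel p_i.
def local_ncc(m, f, win=3):
    pad = win // 2
    mp = np.pad(m, pad, mode='edge')
    fp = np.pad(f, pad, mode='edge')
    total = 0.0
    for idx in np.ndindex(m.shape):
        sl = tuple(slice(i, i + win) for i in idx)  # window centered at idx
        a = mp[sl] - mp[sl].mean()                  # zero-mean neighborhoods
        b = fp[sl] - fp[sl].mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-8
        total += (a * b).sum() / denom
    return total / m.size  # 1.0 means perfectly correlated neighborhoods

img = np.random.default_rng(0).random((4, 4, 4))
score = local_ncc(img, img)  # identical images -> close to 1
```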
  • Step S31 Segment the image to be registered and the template image obtained in S130 after point cloud guided deformation into non-overlapping voxel blocks (patches).
  • the original image size is 160 ⁇ 192 ⁇ 160
  • each voxel block finally has size 2×4×4×4 (a 4×4×4 patch from each of the two input images).
  • Step S32 Flatten each voxel block and input it into a linear mapping partition encoder for position encoding.
  • Step S33 Input the encoded tokens into the Swin-Transformer encoder based on the self-attention mechanism. After each encoder stage, adjacent 2×2×2 groups of tokens are merged, reducing the number of tokens by a factor of 8 and expanding the feature dimension by a factor of 8 through concatenation.
  • Step S34 Input the expanded features into a linear layer, which projects them down to twice the original feature dimension.
  • Step S35 After four stages of Swin-Transformer encoders and three patch-merging operations, the encoder finally outputs feature maps of size 5×6×5×8.
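The token bookkeeping of steps S31 to S35 can be checked with a few lines of arithmetic, assuming the usual Swin-style halving of each token-grid axis at every merge:

```python
# Arithmetic check of the patch/token bookkeeping in steps S31-S35.
img = (160, 192, 160)   # original image size (step S31)
patch = (4, 4, 4)       # non-overlapping patch size

grid = tuple(s // p for s, p in zip(img, patch))  # initial token grid
tokens = grid[0] * grid[1] * grid[2]              # number of tokens

for _ in range(3):                   # three 2x2x2 token mergings (S33-S35)
    grid = tuple(g // 2 for g in grid)
# grid is now (5, 6, 5), matching the encoder output size stated in S35
```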
  • Step S36 The decoder includes upsampling layers and convolution layers with 3×3 kernels. During decoding, the feature map at each upsampling layer is concatenated with the corresponding encoder features through skip connections, and two three-dimensional convolution layers follow each upsampling.
  • Step S37 After a three-dimensional convolution, the original image to be registered, the template image and the once-downsampled image are concatenated with the feature map obtained in step S36 to integrate global position information.
  • the concatenated features are passed through a 3×3 convolution with 16 channels, and the deformation vector field is output.
  • FIG. 4 is a schematic diagram of two connected Swin-Transformer blocks.
  • Each Swin-Transformer block contains a layer normalization layer, a window-based multi-head attention, and a multi-layer perceptron.
  • the specific processing process includes the following steps:
  • Step S41 The input features are normalized by layers and then input into the window-based multi-head self-attention module.
  • Step S42 Add the result obtained in step S41 to the original feature, perform layer normalization, pass through a multi-layer perceptron, and then add it to the result of step S41.
  • Step S43 After the result obtained in step S42 is layer-normalized, it is input into the multi-head self-attention module based on the sliding window, and the result is added to the result obtained in step S42.
  • Step S44 After the result obtained in step S43 is layer-normalized, it is input into a multilayer perceptron, and the result is added to the result obtained in step S43.
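The residual wiring of steps S41 to S44 can be sketched as below. Only the LayerNorm-plus-residual structure follows the steps above; `attention` and `mlp` are toy stand-ins for the real window-based / shifted-window multi-head self-attention and multi-layer perceptron:

```python
import numpy as np

# Sketch of the residual wiring in steps S41-S44 (two successive
# Swin-Transformer blocks).
def layer_norm(x, eps=1e-5):
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

def attention(x):  # stand-in: plain softmax(x x^T / sqrt(d)) x
    w = np.exp(x @ x.T / np.sqrt(x.shape[-1]))
    return (w / w.sum(-1, keepdims=True)) @ x

def mlp(x):        # stand-in for the multi-layer perceptron
    return np.maximum(x, 0.0)

def two_swin_blocks(x):
    x = x + attention(layer_norm(x))  # S41/S42: LN -> W-MSA, residual add
    x = x + mlp(layer_norm(x))        # S42: LN -> MLP, residual add
    x = x + attention(layer_norm(x))  # S43: LN -> SW-MSA, residual add
    x = x + mlp(layer_norm(x))        # S44: LN -> MLP, residual add
    return x

tokens = np.random.default_rng(0).random((4, 8))  # 4 tokens, dim 8
out = two_swin_blocks(tokens)
```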
  • the above is a training process described in a specific embodiment, and after the training of the deep convolutional network is completed, it can be used for actual medical image registration.
  • the actual application process includes: obtaining multi-modal images to be registered; performing point cloud processing on the target organ of the image to be registered, and using the deformation relationship learned by non-rigid registration to register the image to be registered based on the organ anatomical structure point cloud to obtain a preliminary deformed image to be registered; inputting the preliminary deformed image to be registered into the trained deep convolutional network to obtain a registered image, and the deep convolutional network reflects the deformation relationship between the multi-modal image to be registered and the template image; and using the registered image to formulate a radiotherapy plan.
  • the method was validated on an image-guided breast cancer radiotherapy dataset with a total of 14 cases.
  • the data of two patients were used as the test data set, and the rest of the data were used as the training set.
  • the comparison with the traditional convolutional neural network registration method and the linear-interpolation image registration method is shown in Figure 5, where for each indicator the rightmost result corresponds to the present invention. The proposed method, a deep convolutional network that cascades human-anatomical-structure point cloud pre-registration with a self-attention mechanism, effectively realizes registration for image-guided breast cancer radiotherapy and achieves higher accuracy and signal-to-noise ratio than the existing methods.
  • the present invention has the following advantages:
  • the present invention takes into account the application of human anatomical structure in multimodal image registration by clinicians, takes into account the guiding role of anatomical key points in the registration problem, improves the interpretability of the registration results, and improves the accuracy of the registration task.
  • a cascade registration method was used.
  • the front end used the anatomical information of human organs for rough registration
  • the back end used a deep convolutional neural network based on the self-attention mechanism for fine registration, which improved the accuracy of multimodal image-guided radiotherapy.
  • this fusion of anatomical point clouds with the actual multimodal images makes full use of the anatomical information of human organs and reduces the deep network model's dependence on the amount of clinical data.
  • the multimodal images are registered to solve the problems of insufficient cross-modal image feature extraction and low accuracy in multimodal image-guided radiotherapy.
  • Traditional registration methods generally use iterative algorithms, which consume a lot of running time.
  • the present invention adopts a one-step registration method to ultimately achieve fully automatic real-time registration in multimodal image-guided radiotherapy.
  • the present invention may be a system, a method and/or a computer program product.
  • the computer program product may include a computer-readable storage medium carrying computer-readable program instructions for causing a processor to implement various aspects of the present invention.
  • Computer readable storage medium can be a tangible device that can hold and store instructions used by an instruction execution device.
  • Computer readable storage medium can be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination thereof.
  • Non-exhaustive list of computer readable storage medium include: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disk read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanical encoding device, for example, a punch card or a convex structure in a groove on which instructions are stored, and any suitable combination thereof.
  • the computer readable storage medium used here is not interpreted as a transient signal itself, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagated by a waveguide or other transmission medium (for example, a light pulse by an optical fiber cable), or an electrical signal transmitted by a wire.
  • the computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to each computing/processing device, or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network can include copper transmission cables, optical fiber transmissions, wireless transmissions, routers, firewalls, switches, gateway computers, and/or edge servers.
  • the network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in the computer-readable storage medium in each computing/processing device.
  • the computer program instructions for performing the operation of the present invention may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages, such as Smalltalk, C++, Python, etc., and conventional procedural programming languages, such as "C" language or similar programming languages.
  • Computer-readable program instructions may be executed entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • the remote computer may be connected to the user's computer via any type of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., using an Internet service provider to connect via the Internet).
  • an electronic circuit such as a programmable logic circuit, a field programmable gate array (FPGA), or a programmable logic array (PLA), may be personalized by utilizing the state information of the computer-readable program instructions, and the electronic circuit may execute the computer-readable program instructions, thereby realizing various aspects of the present invention.
  • These computer-readable program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing device, thereby producing a machine, so that when these instructions are executed by the processor of the computer or other programmable data processing device, a device that implements the functions/actions specified in one or more boxes in the flowchart and/or block diagram is generated.
  • These computer-readable program instructions can also be stored in a computer-readable storage medium, and these instructions cause the computer, programmable data processing device, and/or other equipment to work in a specific manner, so that the computer-readable medium storing the instructions includes a manufactured product, which includes instructions for implementing various aspects of the functions/actions specified in one or more boxes in the flowchart and/or block diagram.
  • Computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device so that a series of operating steps are performed on the computer, other programmable data processing apparatus, or other device to produce a computer-implemented process, thereby causing the instructions executed on the computer, other programmable data processing apparatus, or other device to implement the functions/actions specified in one or more boxes in the flowchart and/or block diagram.
  • each box in the flowchart or block diagram can represent a part of a module, program segment or instruction, and the part of the module, program segment or instruction contains one or more executable instructions for realizing the specified logical function.
  • the functions marked in the box can also occur in a different order from the order marked in the accompanying drawings. For example, two consecutive boxes can actually be executed substantially in parallel, and they can sometimes be executed in the opposite order, depending on the functions involved.
  • each box in the block diagram and/or flowchart, and combinations of boxes in the block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that performs the specified function or action, or by a combination of dedicated hardware and computer instructions. It is well known to those skilled in the art that implementation in hardware, implementation in software, and implementation by a combination of software and hardware are equivalent.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Urology & Nephrology (AREA)
  • Epidemiology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Surgery (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Disclosed in the present invention is a registration method for multimodality image-guided radiotherapy. The method comprises: acquiring multimodality images to be registered; performing point cloud processing on a target organ in each image to be registered, and using a deformation relationship learned from non-rigid registration to perform organ anatomical structure point clouds-based registration on the images to be registered, so as to obtain preliminarily deformed images to be registered; inputting the preliminarily deformed images to be registered into a trained deep convolutional network to obtain a registered image, the deep convolutional network reflecting the deformation relationship between the multimodality images to be registered and a template image; and formulating a radiotherapy plan by using the registered image. The present invention improves the accuracy of image registration and medical interpretability.

Description

A registration method for multimodal image-guided radiotherapy

Technical Field

The present invention relates to the technical field of medical image processing, and more specifically, to a registration method for multimodal image-guided radiotherapy.

Background Art

Image-guided radiotherapy is a personalized radiotherapy method performed under the guidance of medical images of the human body. Such images can show the different sizes, shapes and locations of cancer lesions; on this basis doctors develop a radiotherapy plan that accurately irradiates the cancer lesions while ensuring, as far as possible, that the surrounding healthy tissues receive only low doses of radiation, so as to avoid damaging healthy tissue.

Medical images of different modalities can display different information about the human body, from microscopic to macroscopic scales and from the inside to the outside. Using multimodal medical image registration technology to map medical images of different modalities onto the same spatial and temporal scales helps doctors accurately delineate the location of the patient's lesions and develop more accurate radiotherapy plans. To some extent this technology has overcome the problems of unmatched medical image information and hard-to-calibrate temporal and spatial information, but traditional methods rely heavily on manually designed feature selectors and feature matchers; manually designed feature matchers are not robust enough for multimodal image registration, and feature selectors and matchers that perform well on single-modality images usually cannot adapt to images of multiple modalities.

After analysis, the existing technology mainly has the following defects:

1) The preparation of radiotherapy plans takes a long time, and during a radiotherapy operation the radiologist cannot adjust the plan in real time according to the patient's movement.

2) The boundaries and specific details of cancer lesions are difficult to define on a single-modality image, while fusing images of multiple modalities to formulate a radiotherapy plan demands a high level of physician experience.

3) Existing deep-learning-based multimodal image registration methods suffer from problems such as high distortion and poor interpretability.

Summary of the Invention

The purpose of the present invention is to overcome the above-mentioned defects of the prior art and provide a registration method for multimodal image-guided radiotherapy. The method comprises the following steps:

Acquire multimodal images to be registered;

Perform point cloud processing on the target organ of each image to be registered, and use the deformation relationship learned by non-rigid registration to register the images based on the organ anatomical structure point cloud, obtaining preliminarily deformed images to be registered;

Input the preliminarily deformed images to be registered into a trained deep convolutional network to obtain a registered image, the deep convolutional network reflecting the deformation relationship between the multimodal images to be registered and a template image;

Formulate a radiation therapy plan using the registered images.

Compared with the prior art, the advantage of the present invention is that it proposes an image registration method based on deformation-vector-field constraints from anatomical point clouds and an attention mechanism, which can automatically extract features and match them in a deep feature space, thereby realizing fully automatic registration of multimodal images with end-to-end optimization. In addition, the present invention uses the anatomical structure point cloud of the organ itself to constrain image registration in the multimodal image-guided radiotherapy system, improving both the accuracy of image registration and its medical interpretability.

通过以下参照附图对本发明的示例性实施例的详细描述,本发明的其它特征及其优点将会变得清楚。Further features and advantages of the present invention will become apparent from the following detailed description of exemplary embodiments of the present invention with reference to the attached drawings.

附图说明BRIEF DESCRIPTION OF THE DRAWINGS

被结合在说明书中并构成说明书的一部分的附图示出了本发明的实施例,并且连同其说明一起用于解释本发明的原理。The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.

图1是根据本发明一个实施例的多模态影像引导放射治疗的配准方法的流程图;FIG1 is a flow chart of a registration method for multi-modality image-guided radiotherapy according to an embodiment of the present invention;

图2是根据本发明一个实施例的多模态影像引导放射治疗的配准方法的过程示意图;FIG2 is a schematic diagram of a process of a registration method for multi-modality image-guided radiotherapy according to an embodiment of the present invention;

图3是根据本发明一个实施例的基于自注意机制的深度卷积网络的架构示意图;FIG3 is a schematic diagram of the architecture of a deep convolutional network based on a self-attention mechanism according to an embodiment of the present invention;

图4是根据本发明一个实施例的两个相连的Swin-Transformer块的示意图;FIG4 is a schematic diagram of two connected Swin-Transformer blocks according to one embodiment of the present invention;

图5是根据本发明一个实施例的三种评价指标下的配准效果示意图。FIG. 5 is a schematic diagram of registration effects under three evaluation indicators according to an embodiment of the present invention.

具体实施方式DETAILED DESCRIPTION

现在将参照附图来详细描述本发明的各种示例性实施例。应注意到:除非另外具体说明,否则在这些实施例中阐述的部件和步骤的相对布置、数字表达式和数值不限制本发明的范围。Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that the relative arrangement of components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless otherwise specifically stated.

以下对至少一个示例性实施例的描述实际上仅仅是说明性的,决不作为对本发明及其应用或使用的任何限制。The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.

对于相关领域普通技术人员已知的技术、方法和设备可能不作详细讨论,但在适当情况下,所述技术、方法和设备应当被视为说明书的一部分。Technologies, methods, and equipment known to ordinary technicians in the relevant art may not be discussed in detail, but where appropriate, the technologies, methods, and equipment should be considered as part of the specification.

在这里示出和讨论的所有例子中,任何具体值应被解释为仅仅是示例性的,而不是作为限制。因此,示例性实施例的其它例子可以具有不同的值。In all examples shown and discussed herein, any specific values should be interpreted as merely exemplary and not limiting. Therefore, other examples of the exemplary embodiments may have different values.

应注意到:相似的标号和字母在下面的附图中表示类似项,因此,一旦某一项在一个附图中被定义,则在随后的附图中不需要对其进行进一步讨论。It should be noted that like reference numerals and letters refer to similar items in the following figures, and therefore, once an item is defined in one figure, it need not be further discussed in subsequent figures.

本发明提供了一种基于解剖学点云的变形矢量场约束及注意力机制的多模态影像配准方法。该方法首先利用公开多模态影像引导放射治疗影像数据集进行训练,所设计的训练网络例如是基于自注意力机制的深度卷积网络,结合注意力机制提取影像特征,并通过特征匹配器对影像进行特征匹配,通过特征的匹配生成影像变形矢量场,从而得到更接近更真实情况的变形矢量场,实现了更精确的配准效果。The present invention provides a multimodal image registration method based on deformation vector field constraints and attention mechanism of anatomical point cloud. The method first uses a public multimodal image-guided radiotherapy image data set for training. The designed training network is, for example, a deep convolutional network based on a self-attention mechanism. The image features are extracted in combination with the attention mechanism, and the image is feature matched by a feature matcher. The image deformation vector field is generated by feature matching, thereby obtaining a deformation vector field that is closer to the real situation and achieving a more accurate registration effect.

具体地,结合图1和图2所示,所提供的多模态影像引导放射治疗的配准方法包括以下步骤。Specifically, as shown in FIG. 1 and FIG. 2, the provided registration method for multimodal image-guided radiotherapy includes the following steps.

步骤S110,搜集不同模态的医学影像引导放射治疗的数据。Step S110 , collecting data of medical image-guided radiotherapy of different modalities.

不同模态的医学影像包括核磁共振成像、超声成像、计算机断层成像和锥形束计算机断层成像或其他类型的医学影像。不同模态的医学影像引导放射治疗的数据可采用公开数据集或自行收集。Medical images of different modalities include magnetic resonance imaging, ultrasound imaging, computed tomography, cone-beam computed tomography or other types of medical images. Data for medical image-guided radiotherapy of different modalities can be obtained from public datasets or collected independently.

步骤S120,对多模态影像进行预处理,增加不同目标器官标签。Step S120 , preprocessing the multimodal images to add labels of different target organs.

在一个实施例中,步骤S120包括:In one embodiment, step S120 includes:

步骤S121:将多模态影像数据尺寸归一化,包括体素坐标方向、体素间距离等。Step S121: normalizing the size of the multimodal image data, including voxel coordinate direction, distance between voxels, etc.
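The spacing-normalization step above can be illustrated with a minimal sketch. The patent does not prescribe a resampling method; nearest-neighbour interpolation and the function name `resample_to_spacing` are illustrative assumptions, not the claimed implementation.

```python
import numpy as np

def resample_to_spacing(vol, spacing, new_spacing=(1.0, 1.0, 1.0)):
    """Nearest-neighbour resampling of a 3-D volume to a target voxel spacing."""
    old = np.asarray(spacing, dtype=float)
    new = np.asarray(new_spacing, dtype=float)
    # the physical extent stays the same, so the grid size scales by old/new
    new_shape = np.round(np.array(vol.shape) * old / new).astype(int)
    idx = [
        np.clip(np.round(np.arange(n) * new[i] / old[i]).astype(int), 0, vol.shape[i] - 1)
        for i, n in enumerate(new_shape)
    ]
    return vol[np.ix_(idx[0], idx[1], idx[2])]
```

For example, a volume with 2 mm slice spacing along the first axis doubles its slice count when resampled to 1 mm isotropic spacing; in practice a medical-imaging library resampler (with proper interpolation) would be used instead.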

步骤S122:将器官标签化,并添加可识别的器官标志标签。Step S122: label the organs and add identifiable organ marker labels.

步骤S130,将目标器官标签点云化,并在点云的基础上,利用非刚性配准方法进行基于器官解剖结构点云的粗配准。Step S130 , converting the target organ label into a point cloud, and based on the point cloud, using a non-rigid registration method to perform a coarse registration based on the organ anatomical structure point cloud.

在一个实施例中,步骤S130包括:In one embodiment, step S130 includes:

步骤S131:使用移动立方体法(Marching Cubes)构建人体器官解剖结构点云与靶区点云。Step S131: Use the marching cubes method to construct the point clouds of the human organ anatomical structures and of the target region.

例如,假设某一三维平面和六面体体元的任一交点的数值表示为f(x_i, y_j, z_k),采用中心差分法计算该数据点所在体元顶点的梯度G_x, G_y, G_z,在这里,a, b, c分别代表相邻两六面体之间的间距:

G_x = [f(x_{i+1}, y_j, z_k) − f(x_{i−1}, y_j, z_k)] / (2a)   (1)

G_y = [f(x_i, y_{j+1}, z_k) − f(x_i, y_{j−1}, z_k)] / (2b)   (2)

G_z = [f(x_i, y_j, z_{k+1}) − f(x_i, y_j, z_{k−1})] / (2c)   (3)

For example, assuming that the value of any intersection point between a three-dimensional plane and a hexahedral voxel is expressed as f(x_i, y_j, z_k), the central difference method is used to calculate the gradients G_x, G_y, G_z at the vertices of the voxel containing that data point, where a, b and c denote the spacings between two adjacent hexahedra along the three axes, as in equations (1)–(3).
步骤S132:使用移动立方体法对体元8个顶点分别计算梯度Gx,Gy,Gz之后,使用线性插值的方式计算中心三角面片顶点的梯度,对等值面进行绘制。Step S132: After respectively calculating the gradients G x , G y , and G z of the eight vertices of the voxel using the marching cubes method, the gradients of the vertices of the central triangle patch are calculated using linear interpolation to draw the isosurface.
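The central-difference gradient computation described above can be reproduced with NumPy's `gradient`, which applies central differences in the interior with per-axis spacings a, b, c. The linear test field below is an assumption chosen so the expected gradients are known exactly.

```python
import numpy as np

# voxel spacings a, b, c between adjacent hexahedral cells (illustrative values)
a, b, c = 1.0, 2.0, 0.5

# sample a linear scalar field f(x, y, z) = 2x + 3y + 4z on the voxel grid
i, j, k = np.meshgrid(np.arange(8), np.arange(8), np.arange(8), indexing="ij")
f = 2.0 * (i * a) + 3.0 * (j * b) + 4.0 * (k * c)

# np.gradient uses central differences in the interior, matching
# G_x = [f(x_{i+1}, y_j, z_k) - f(x_{i-1}, y_j, z_k)] / (2a), etc.
Gx, Gy, Gz = np.gradient(f, a, b, c)
```

Because the field is linear, the computed gradients are constant (2, 3, 4), which makes the spacing handling easy to verify.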

步骤S133:提取等值面顶点坐标,并定义为点云,并对定义目标点云使用非刚性配准方法进行基于器官解剖结构点云的配准。Step S133: extracting the coordinates of the isosurface vertices and defining them as a point cloud, and using a non-rigid registration method to perform registration based on the organ anatomical structure point cloud on the defined target point cloud.

例如,假设模板表示为S=(V,E),其中V代表顶点,有m个;E是边,有n条。待形变点云M=(v,e),其中v代表可移动顶点,有m个;e代表边,有n条;X代表变形矢量场。使待形变点云发生形变,并使其尽量满足如下条件:

min E(X) = min{L_d(X) + αL_s(X) + βL_l(X)}   (4)

For example, suppose the template is represented as S=(V,E), where V denotes the vertices (m in total) and E denotes the edges (n in total). The point cloud to be deformed is M=(v,e), where v denotes the m movable vertices, e denotes the n edges, and X denotes the deformation vector field. The point cloud to be deformed is deformed so as to satisfy, as far as possible:

min E(X) = min{L_d(X) + αL_s(X) + βL_l(X)}   (4)

其中,Ld(X)是距离项,Ls(X)是刚性项,Ll(X)是关键点项,α,β是权重,以最小化该函数为目标,找到满足条件的变形矢量场。Among them, L d (X) is the distance term, L s (X) is the rigid term, L l (X) is the key point term, α, β are weights, and the goal is to minimize the function and find the deformation vector field that meets the conditions.

在一个实施例中,距离项L_d(X)表示为:

L_d(X) = Σ_{v_i∈M} w_i · dist²(S, X_i·v_i)   (5)

v_i = [x, y, z, 1]^T   (6)

In one embodiment, the distance term L_d(X) is expressed as:

L_d(X) = Σ_{v_i∈M} w_i · dist²(S, X_i·v_i)   (5)

v_i = [x, y, z, 1]^T   (6)

其中,w_i是权重,v_i是待形变点云M中的第i个点,dist(S, X_i·v_i)表示变换后的点X_i·v_i与模板S上距其最近点之间的距离,X_i是大小为3×4的变换矩阵,x, y, z分别表示点v_i的坐标值。Here, w_i is a weight, v_i is the i-th point in the point cloud M to be deformed, dist(S, X_i·v_i) denotes the distance between the transformed point X_i·v_i and its nearest point on the template S, X_i is a transformation matrix of size 3×4, and x, y, z denote the coordinate values of point v_i.

在一个实施例中,刚性项L_s(X)表示为:

L_s(X) = Σ_{(i,j)∈e} ‖(X_i − X_j)G‖_F²   (7)

In one embodiment, the stiffness term L_s(X) is expressed as:

L_s(X) = Σ_{(i,j)∈e} ‖(X_i − X_j)G‖_F²   (7)

使用刚性约束对同一条边(i,j)∈e上的两点的仿射变换之差进行惩罚,其中G=diag(1,1,1,γ),γ可以用来为变形的旋转和倾斜部分与变形的平移部分的差异加权,γ可根据经验或仿真预先设定。The rigidity constraint penalizes the difference between the affine transformations of the two points on the same edge (i,j)∈e, where G = diag(1,1,1,γ); γ can be used to weight the rotational and skew parts of the deformation against the translational part, and can be preset based on experience or simulation.

在一个实施例中,关键点项L_l(X)表示为:

L_l(X) = Σ_{(v_i, l)∈𝓛} ‖X_i·v_i − l‖²   (8)

In one embodiment, the key-point term L_l(X) is expressed as:

L_l(X) = Σ_{(v_i, l)∈𝓛} ‖X_i·v_i − l‖²   (8)

其中,l表示模板影像(或称为参考影像)中的特征点,𝓛表示给定特征点集。该损失函数的最后一项是特征点的约束:若给定一个特征点集𝓛,该项损失可以考虑到待配准特征点和模板影像特征点之间的距离;如果不存在给定特征点集,该函数将会覆盖到全部特征点上。Here, l denotes a feature point in the template image (also called the reference image), and 𝓛 denotes the given feature-point set. The last term of the loss function constrains the feature points: if a feature-point set 𝓛 is given, this term accounts for the distance between the feature points to be registered and those of the template image; if no feature-point set is given, the function covers all feature points.
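The three energy terms above can be sketched numerically. This is a simplified illustration, not the claimed implementation: nearest-template-point correspondences are assumed precomputed and passed in as `targets`, the weights w_i are set to 1, and the function name `registration_energy` is hypothetical.

```python
import numpy as np

def registration_energy(X, v, targets, edges, landmarks, alpha, beta, gamma=1.0):
    """E(X) = L_d + alpha * L_s + beta * L_l for per-vertex 3x4 affine transforms X."""
    m = len(v)
    vh = np.hstack([v, np.ones((m, 1))])          # homogeneous coordinates [x, y, z, 1]
    warped = np.einsum("nij,nj->ni", X, vh)       # X_i applied to v_i
    # distance term: nearest template points assumed precomputed as `targets`
    L_d = np.sum((warped - targets) ** 2)
    # stiffness term: penalise transform differences along each edge, weighted by G
    G = np.diag([1.0, 1.0, 1.0, gamma])
    L_s = sum(np.linalg.norm((X[i] - X[j]) @ G, "fro") ** 2 for i, j in edges)
    # key-point term: pull selected vertices toward given landmark positions
    L_l = sum(np.sum((warped[i] - l) ** 2) for i, l in landmarks)
    return L_d + alpha * L_s + beta * L_l
```

With identity transforms and targets equal to the vertices, all three terms vanish; translating a single vertex's transform contributes to both the distance and stiffness terms.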

具体地,使用非刚性配准方法进行基于器官解剖结构点云的配准过程包括以下步骤:Specifically, the registration process based on the organ anatomical structure point cloud using the non-rigid registration method includes the following steps:

1)初始化X_0。1) Initialize X_0.

第一层循环,对参数α,β进行更新。In the first layer loop, the parameters α and β are updated.

2)对于每次迭代过程,刚性项权重满足α_{i+1} < α_i,关键点权重满足β_{i+1} < β_i。2) In each iteration, the stiffness weight satisfies α_{i+1} < α_i and the key-point weight satisfies β_{i+1} < β_i.

一直循环,直到‖X_{j+1} − X_j‖ < ε,其中ε是设定阈值。Loop until ‖X_{j+1} − X_j‖ < ε, where ε is a preset threshold.

第二层循环,对参数X的求解,包括:The second loop is to solve the parameter X, including:

(1)通过最近点寻找算法寻找初步对应关系;(1) Find preliminary correspondences via a nearest-point search algorithm;

(2)使用下山法等基于梯度的方法对X_j求解,并将X_j作为当前对应关系和α_i下的最优变形矢量场。(2) Solve for X_j using a gradient-based method such as the downhill method, and take X_j as the optimal deformation vector field under the current correspondences and α_i.
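The two-level loop above can be sketched on toy data. This is a heavily simplified illustration, not the claimed method: per-vertex translations stand in for the 3×4 affine transforms, correspondences use brute-force nearest-point search, the inner solve is plain gradient descent on L_d + α·L_s, and all names and parameter values are assumptions.

```python
import numpy as np

def nonrigid_icp_translations(v, template, edges, alphas=(10.0, 1.0, 0.1),
                              lr=0.05, eps=1e-4, max_inner=200):
    """Toy non-rigid ICP: per-vertex translations X, stiffness on edges."""
    X = np.zeros_like(v)
    for alpha in alphas:                       # outer loop: decreasing stiffness weight
        for _ in range(max_inner):             # inner loop: solve X for this alpha
            # (1) nearest-point correspondences on the template
            d = np.linalg.norm((v + X)[:, None, :] - template[None, :, :], axis=2)
            u = template[np.argmin(d, axis=1)]
            # (2) one gradient step on L_d + alpha * L_s
            grad = 2.0 * (v + X - u)
            for i, j in edges:
                grad[i] += 2.0 * alpha * (X[i] - X[j])
                grad[j] += 2.0 * alpha * (X[j] - X[i])
            X_new = X - lr * grad
            if np.linalg.norm(X_new - X) < eps:   # convergence: ||X_{j+1} - X_j|| < eps
                X = X_new
                break
            X = X_new
    return X
```

On a line of points offset from the template, the recovered translations pull the deformable cloud back onto the template while the stiffness term keeps neighbouring translations similar.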

在此步骤S130中,对模板影像和待配准影像进行初步形变。采用人体器官解剖结构点云的方式对多模态影像进行粗配准,提供了影像引导放射治疗中影像配准任务的可解释性。并且利用线性插值的方式将粗配准变形矢量场进行插值得到和原始影像尺寸一致的变形矢量场。In this step S130, the template image and the image to be registered are initially deformed. The multimodal images are roughly registered using the human organ anatomical structure point cloud method, which provides the interpretability of the image registration task in image-guided radiotherapy. And the roughly registered deformation vector field is interpolated using linear interpolation to obtain a deformation vector field that is consistent with the original image size.

步骤S140,利用深度卷积网络,对获得的具有相同标签的多模态影像进行基于深度学习的精配准,获得最终的配准影像。 Step S140, using a deep convolutional network, performing deep learning-based precise registration on the obtained multimodal images with the same label to obtain a final registered image.

图3是作为配准网络的深度卷积网络的架构图,用于进一步获得待配准影像和模板影像之间的形变场。该深度卷积网络整体上包括编码器和解码器,其中编码器是自注意力编码器。这种采用级联的方式将步骤S130的点云配准和基于深度神经网络的配准方法进行连接,进一步提高了影像配准的准确性。此外,采用了基于自注意力机制的卷积网络进行多模态影像引导治疗中的影像细配准。需说明的是,也可采用其他类型的深度卷积网络架构。FIG3 is an architecture diagram of a deep convolutional network as a registration network, which is used to further obtain the deformation field between the image to be registered and the template image. The deep convolutional network as a whole includes an encoder and a decoder, wherein the encoder is a self-attention encoder. This cascaded approach connects the point cloud registration of step S130 and the registration method based on a deep neural network, further improving the accuracy of image registration. In addition, a convolutional network based on a self-attention mechanism is used for fine image registration in multimodal image-guided therapy. It should be noted that other types of deep convolutional network architectures may also be used.

在一个实施例中,步骤S140包括:In one embodiment, step S140 includes:

步骤S141:使用基于自注意力机制的卷积影像特征提取模型,将经过步骤S130初步形变的模板影像和待配准影像输入自注意力编码器进行特征编码;Step S141: using a convolutional image feature extraction model based on a self-attention mechanism, inputting the template image preliminarily deformed in step S130 and the image to be registered into a self-attention encoder for feature encoding;

步骤S142:将融合后的特征输入深度卷积层进行解码预测,融合不同尺度特征后进行预测,输出和原始影像尺寸一致的预测影像;Step S142: input the fused features into the deep convolution layer for decoding prediction, perform prediction after fusing features of different scales, and output a predicted image with the same size as the original image;

步骤S143:将预测后的结果和原始影像进行比对,并将其反向传播到自注意力编码器中,使其满足下列条件:Step S143: Compare the predicted result with the original image and back-propagate it to the self-attention encoder so as to satisfy the following condition:

CC(I_f, I_m∘φ) = Σ_{p_i∈R} [ Σ_p (I_f(p) − Ī_f(p_i)) · (I_m∘φ(p) − Ī_m∘φ(p_i)) ]² / ( Σ_p (I_f(p) − Ī_f(p_i))² · Σ_p (I_m∘φ(p) − Ī_m∘φ(p_i))² )   (9)

其中,p代表影像中体素的位置,I_m和I_f代表待配准影像和模板影像,Ī_m和Ī_f代表待配准影像和模板影像在以p_i为中心点的相邻体素的平均体素值,φ代表由待配准影像到模板影像的变形矢量场,p_i表示当前操作的中心体素,R表示图像域,内层求和在以p_i为中心的窗口内进行。Here, p denotes a voxel position in the image; I_m and I_f denote the image to be registered and the template image; Ī_m and Ī_f denote the mean voxel values of the two images over the neighbourhood centred at p_i; φ denotes the deformation vector field from the image to be registered to the template image; p_i denotes the centre voxel of the current operation; R denotes the image domain; and the inner sums run over the window centred at p_i.
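The local windowed similarity described above can be sketched as follows. This is an illustrative, unoptimized reference (a loop over window centres; real training code would use convolutions for the window sums and differentiable tensors); the function name `lncc` and the window size are assumptions.

```python
import numpy as np

def lncc(fixed, moving, win=3, eps=1e-8):
    """Mean local normalized cross-correlation over win^3 neighbourhoods."""
    r = win // 2
    D, H, W = fixed.shape
    total, count = 0.0, 0
    # iterate over all window centres fully inside the volume
    for z in range(r, D - r):
        for y in range(r, H - r):
            for x in range(r, W - r):
                f = fixed[z - r:z + r + 1, y - r:y + r + 1, x - r:x + r + 1]
                m = moving[z - r:z + r + 1, y - r:y + r + 1, x - r:x + r + 1]
                fc = f - f.mean()          # subtract local mean of fixed image
                mc = m - m.mean()          # subtract local mean of moving image
                num = (fc * mc).sum() ** 2
                den = (fc * fc).sum() * (mc * mc).sum() + eps
                total += num / den
                count += 1
    return total / count
```

An image compared with itself scores (near) 1 per window, while an unrelated image scores much lower, which is why the negative of this measure is a common registration loss.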

为了进一步理解基于深度卷积网络的精细配准过程,结合图3所示,主要包括以下步骤:In order to further understand the fine registration process based on deep convolutional networks, as shown in Figure 3, it mainly includes the following steps:

步骤S31:将S130获得的经过点云引导变形的待配准影像和模板影像分割成不重叠的体素块(patch)。Step S31: Segment the image to be registered and the template image obtained in S130 after point cloud guided deformation into non-overlapping voxel blocks (patches).

例如,原始影像尺寸为160×192×160,最终获得每一个体素块大小为2×4×4×4。For example, the original image size is 160×192×160, and the final size of each voxel block is 2×4×4×4.

步骤S32:将每一个体素块拉平,输入线性映射分区编码器中进行位置编码。Step S32: Flatten each voxel block and input it into a linear mapping partition encoder for position encoding.

步骤S33:将编码后的词条(token)输入基于自注意力机制的Swin-Transformer编码器中,经过编码器后,相邻的2×2×2个词条会被合并,词条的数量将会减少8倍,同时特征维度扩展8倍。Step S33: Input the encoded token into the Swin-Transformer encoder based on the self-attention mechanism. After passing through the encoder, the adjacent 2×2×2 tokens will be merged, the number of tokens will be reduced by 8 times, and the feature dimension will be expanded by 8 times.

步骤S34:将被扩展之后的特征词条输入线性层,输出每两个维度的特征。Step S34: Input the expanded feature terms into the linear layer and output the features of every two dimensions.

步骤S35:经历了4个阶段的Swin-Transformer编码器和其中的3次分区拼合,编码器最后输出为5×6×5×8。Step S35: After four stages of Swin-Transformer encoders and three partition splicings, the encoder finally outputs 5×6×5×8.
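The patch and token dimensions quoted in steps S31–S35 can be checked with a few lines of arithmetic. The helper below is illustrative (`swin_token_shapes` is a hypothetical name), assuming each patch-merging step halves every spatial dimension, i.e. an 8× reduction in token count per merge:

```python
def swin_token_shapes(vol=(160, 192, 160), patch=(4, 4, 4), merges=3):
    """Token-grid shapes after patch partition and successive 2x2x2 merges."""
    shape = tuple(v // p for v, p in zip(vol, patch))   # after patch partition
    shapes = [shape]
    for _ in range(merges):
        # each 2x2x2 merge halves every spatial dimension (8x fewer tokens)
        shape = tuple(s // 2 for s in shape)
        shapes.append(shape)
    return shapes
```

For the 160×192×160 volume with 4×4×4 patches this yields (40, 48, 40) → (20, 24, 20) → (10, 12, 10) → (5, 6, 5), consistent with the 5×6×5 encoder output stated in step S35.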

步骤S36:解码器包含上采样层和卷积核为3×3的卷积层,在解码过程中,每个上采样层中的特征图都会通过跳跃链接和编码过程中的对应特征拼合,上采样后会连接两个三维卷积层。 Step S36: The decoder includes an upsampling layer and a convolution layer with a convolution kernel of 3×3. During the decoding process, the feature map in each upsampling layer will be spliced with the corresponding features in the encoding process through jump links, and two three-dimensional convolution layers will be connected after upsampling.

步骤S37:将原始的待配准影像和模板影像以及其进行一次降采样后的影像经过一次三维卷积后,与步骤S36得到的特征图拼接,用来融入全局位置信息,将拼接好的特征,经过16次卷积核为3×3的卷积并输出变形矢量场。Step S37: After a three-dimensional convolution, the original image to be registered, the template image and the image after a downsampling are spliced with the feature map obtained in step S36 to integrate the global position information. The spliced features are convolved 16 times with a convolution kernel of 3×3 and the deformation vector field is output.

图4是两个相连的Swin-Transformer块的示意图。每个Swin-Transformer块包含层归一化层、基于窗口的多头注意力和多层感知机。具体处理过程包括以下步骤:Figure 4 is a schematic diagram of two connected Swin-Transformer blocks. Each Swin-Transformer block contains a layer normalization layer, a window-based multi-head attention, and a multi-layer perceptron. The specific processing process includes the following steps:

步骤S41:将输入的特征经过层归一化后输入基于窗口的多头自注意力模块。Step S41: The input features are normalized by layers and then input into the window-based multi-head self-attention module.

步骤S42:将步骤S41得到结果与原始特征相加,再进行层归一化,通过多层感知机后,再与步骤S41结果相加。Step S42: Add the result obtained in step S41 to the original feature, perform layer normalization, pass through a multi-layer perceptron, and then add it to the result of step S41.

步骤S43:将步骤S42得到结果经过层归一化后,输入基于滑动窗口的多头自注意力模块,将结果与步骤S42得到的结果相加。Step S43: After the result obtained in step S42 is layer-normalized, it is input into the multi-head self-attention module based on the sliding window, and the result is added to the result obtained in step S42.

步骤S44:将步骤S43得到结果经过层归一化后,输入多层感知机,将结果与步骤S43得到的结果相加。Step S44: After the result obtained in step S43 is layer-normalized, it is input into a multilayer perceptron, and the result is added to the result obtained in step S43.
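The residual structure of steps S41–S44 can be sketched in plain NumPy. This is a deliberately simplified illustration, not the claimed block: single-head attention, no window partitioning or shifting (the shifted-window variant of steps S43–S44 differs only in how tokens are grouped before attention), and all function names and weight shapes are assumptions.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    return softmax(q @ k.T / np.sqrt(k.shape[-1])) @ v

def mlp(x, W1, W2):
    return np.maximum(x @ W1, 0.0) @ W2            # linear -> ReLU -> linear

def swin_block(x, Wq, Wk, Wv, W1, W2):
    # layer norm -> (window) attention -> residual add (steps S41/S43)
    x = x + self_attention(layer_norm(x), Wq, Wk, Wv)
    # layer norm -> MLP -> residual add (steps S42/S44)
    x = x + mlp(layer_norm(x), W1, W2)
    return x

# demo: 8 tokens with 16-dimensional features through one block
rng = np.random.default_rng(1)
x = rng.standard_normal((8, 16))
Wq, Wk, Wv = (rng.standard_normal((16, 16)) * 0.1 for _ in range(3))
W1 = rng.standard_normal((16, 32)) * 0.1
W2 = rng.standard_normal((32, 16)) * 0.1
y = swin_block(x, Wq, Wk, Wv, W1, W2)
```

Both sub-layers are shape-preserving, which is what allows two such blocks to be chained as in FIG. 4.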

应理解的是,上述是已具体实施例描述的训练过程,在完成深度卷积网络的训练后,即可用于实际的医学影像配准。例如,实际应用过程包括:获取多模态的待配准影像;对待配准影像的目标器官进行点云化处理,并使用非刚性配准所学习到的形变关系对待配准图像进行基于器官解剖结构点云的配准,以获得初步形变的待配准影像;将所述初步形变的待配准影像输入到经训练的深度卷积网络,获得配准后的影像,所述深度卷积网络反映所述多模态的待配准影像与模板影像之间的形变关系;利用所述配准后的影像制定放射治疗计划。It should be understood that the above is a training process described in a specific embodiment, and after the training of the deep convolutional network is completed, it can be used for actual medical image registration. For example, the actual application process includes: obtaining multi-modal images to be registered; performing point cloud processing on the target organ of the image to be registered, and using the deformation relationship learned by non-rigid registration to register the image to be registered based on the organ anatomical structure point cloud to obtain a preliminary deformed image to be registered; inputting the preliminary deformed image to be registered into the trained deep convolutional network to obtain a registered image, and the deep convolutional network reflects the deformation relationship between the multi-modal image to be registered and the template image; and using the registered image to formulate a radiotherapy plan.

为进一步验证本发明,在影像引导乳腺癌放射治疗数据集上进行了验证。数据集共14例,将其中两位患者的数据作为测试集,其余数据作为训练集。与传统卷积神经网络配准方法和基于线性插值的影像配准方法的对比结果如图5所示,其中对于每种指标,最右侧的图对应本发明。可以看出,本发明提出的基于人体解剖结构点云预配准级联自注意力机制的深度卷积网络方法,能够有效实现影像引导乳腺癌放射治疗配准,并且比现有方法精度更高、信噪比更大。To further validate the present invention, experiments were conducted on an image-guided breast cancer radiotherapy dataset of 14 cases, with the data of two patients used as the test set and the remaining data as the training set. Figure 5 compares the results against a conventional convolutional-neural-network registration method and a linear-interpolation registration method; for each metric, the rightmost plot corresponds to the present invention. It can be seen that the proposed deep convolutional network method, which cascades human-anatomy point-cloud pre-registration with a self-attention mechanism, effectively achieves registration for image-guided breast cancer radiotherapy with higher accuracy and a larger signal-to-noise ratio than existing methods.

综上所述,相对于现有技术,本发明具有以下优势:In summary, compared with the prior art, the present invention has the following advantages:

1)通过构建基于自注意力机制的深度卷积网络模型对医学影像进行配准,并结合人体解剖结构点云引导,实现多模态影像的全自动配准。本发明从技术角度考虑到了临床医生对于人体解剖结构在多模态影像配准问题中的应用,考虑到了解剖关键点在配准问题中的引导作用,提升了配准结果的可解释性,提升了配准任务精确度。1) By building a deep convolutional network model based on the self-attention mechanism to register medical images, and combining the guidance of the human anatomical structure point cloud, the fully automatic registration of multimodal images is realized. From a technical perspective, the present invention takes into account the application of human anatomical structure in multimodal image registration by clinicians, takes into account the guiding role of anatomical key points in the registration problem, improves the interpretability of the registration results, and improves the accuracy of the registration task.

2)使用了级联的配准方法,前端使用人体器官解剖学信息进行粗配准,后端使用基于自注意力机制的深度卷积神经网络进行精配准,提升了多模态影像引导放射治疗中的精准度。这种使用解剖结构点云和实际影像多模态融合的方式,充分利用了人体器官的解剖结构信息,降低了深度网络模型对于临床数据量的依赖。2) A cascade registration method was used. The front end used the anatomical information of human organs for rough registration, and the back end used a deep convolutional neural network based on the self-attention mechanism for fine registration, which improved the accuracy of multimodal image-guided radiotherapy. This method of using anatomical point cloud and actual image multimodal fusion makes full use of the anatomical information of human organs and reduces the dependence of deep network models on the amount of clinical data.

3)采用基于解剖学点云的变形矢量场约束及注意力机制的深度卷积网络对多模态影像进行配准,解决多模态影像引导放射治疗中跨模态影像特征提取不充分、准确度低的问题。传统配准方法普遍采用迭代式算法,耗费大量运行时间;本发明采用一步配准法,最终实现多模态影像引导放射治疗中的全自动实时配准。3) A deep convolutional network with anatomical point-cloud-based deformation vector field constraints and an attention mechanism is used to register multimodal images, solving the problems of insufficient cross-modal feature extraction and low accuracy in multimodal image-guided radiotherapy. Traditional registration methods generally use iterative algorithms that consume a lot of running time; the present invention adopts a one-step registration method, ultimately achieving fully automatic real-time registration in multimodal image-guided radiotherapy.

本发明可以是系统、方法和/或计算机程序产品。计算机程序产品可以包括计算机可读存储介质,其上载有用于使处理器实现本发明的各个方面的计算机可读程序指令。The present invention may be a system, a method and/or a computer program product. The computer program product may include a computer-readable storage medium carrying computer-readable program instructions for causing a processor to implement various aspects of the present invention.

计算机可读存储介质可以是可以保持和存储由指令执行设备使用的指令的有形设备。计算机可读存储介质例如可以是但不限于电存储设备、磁存储设备、光存储设备、电磁存储设备、半导体存储设备或者上述的任意合适的组合。计算机可读存储介质的更具体的例子(非穷举的列表)包括:便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、静态随机存取存储器(SRAM)、便携式压缩盘只读存储器(CD-ROM)、数字多功能盘(DVD)、记忆棒、软盘、机械编码设备、例如其上存储有指令的打孔卡或凹槽内凸起结构、以及上述的任意合适的组合。这里所使用的计算机可读存储介质不被解释为瞬时信号本身,诸如无线电波或者其他自由传播的电磁波、通过波导或其他传输媒介传播的电磁波(例如,通过光纤电缆的光脉冲)、或者通过电线传输的电信号。Computer readable storage medium can be a tangible device that can hold and store instructions used by an instruction execution device. Computer readable storage medium can be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination thereof. More specific examples (non-exhaustive list) of computer readable storage medium include: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disk read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanical encoding device, for example, a punch card or a convex structure in a groove on which instructions are stored, and any suitable combination thereof. The computer readable storage medium used here is not interpreted as a transient signal itself, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagated by a waveguide or other transmission medium (for example, a light pulse by an optical fiber cable), or an electrical signal transmitted by a wire.

这里所描述的计算机可读程序指令可以从计算机可读存储介质下载到各个计算/处理设备,或者通过网络、例如因特网、局域网、广域网和/或无线网下载到外部计算机或外部存储设备。网络可以包括铜传输电缆、光纤传输、无线传输、路由器、防火墙、交换机、网关计算机和/或边缘服务器。每个计算/处理设备中的网络适配卡或者网络接口从网络接收计算机可读程序指令,并转发该计算机可读程序指令,以供存储在各个计算/处理设备中的计算机可读存储介质中。The computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to each computing/processing device, or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network. The network can include copper transmission cables, optical fiber transmissions, wireless transmissions, routers, firewalls, switches, gateway computers, and/or edge servers. The network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in the computer-readable storage medium in each computing/processing device.

用于执行本发明操作的计算机程序指令可以是汇编指令、指令集架构(ISA)指令、机器指令、机器相关指令、微代码、固件指令、状态设置数据、或者以一种或多种编程语言的任意组合编写的源代码或目标代码,所述编程语言包括面向对象的编程语言—诸如Smalltalk、C++、Python等,以及常规的过程式编程语言—诸如“C”语言或类似的编程语言。计算机可读程序指令可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络—包括局域网(LAN)或广域网(WAN)—连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。在一些实施例中,通过利用计算机可读程序指令的状态信息来个性化定制电子电路,例如可编程逻辑电路、现场可编程门阵列(FPGA)或可编程逻辑阵列(PLA),该电子电路可以执行计算机可读程序指令,从而实现本发明的各个方面。The computer program instructions for performing the operation of the present invention may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages, such as Smalltalk, C++, Python, etc., and conventional procedural programming languages, such as "C" language or similar programming languages. Computer-readable program instructions may be executed entirely on a user's computer, partially on a user's computer, as an independent software package, partially on a user's computer, partially on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer via any type of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., using an Internet service provider to connect via the Internet). In some embodiments, an electronic circuit, such as a programmable logic circuit, a field programmable gate array (FPGA), or a programmable logic array (PLA), may be personalized by utilizing the state information of the computer-readable program instructions, and the electronic circuit may execute the computer-readable program instructions, thereby realizing various aspects of the present invention.

这里参照根据本发明实施例的方法、装置(系统)和计算机程序产品的流程图和 /或框图描述了本发明的各个方面。应当理解,流程图和/或框图的每个方框以及流程图和/或框图中各方框的组合,都可以由计算机可读程序指令实现。Reference is made herein to flowcharts and illustrations of methods, apparatus (systems) and computer program products according to embodiments of the present invention. The flowchart and/or block diagrams describe various aspects of the present invention. It should be understood that each block of the flowchart and/or block diagram and the combination of each block in the flowchart and/or block diagram can be implemented by computer-readable program instructions.

这些计算机可读程序指令可以提供给通用计算机、专用计算机或其它可编程数据处理装置的处理器,从而生产出一种机器,使得这些指令在通过计算机或其它可编程数据处理装置的处理器执行时,产生了实现流程图和/或框图中的一个或多个方框中规定的功能/动作的装置。也可以把这些计算机可读程序指令存储在计算机可读存储介质中,这些指令使得计算机、可编程数据处理装置和/或其他设备以特定方式工作,从而,存储有指令的计算机可读介质则包括一个制造品,其包括实现流程图和/或框图中的一个或多个方框中规定的功能/动作的各个方面的指令。These computer-readable program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing device, thereby producing a machine, so that when these instructions are executed by the processor of the computer or other programmable data processing device, a device that implements the functions/actions specified in one or more boxes in the flowchart and/or block diagram is generated. These computer-readable program instructions can also be stored in a computer-readable storage medium, and these instructions cause the computer, programmable data processing device, and/or other equipment to work in a specific manner, so that the computer-readable medium storing the instructions includes a manufactured product, which includes instructions for implementing various aspects of the functions/actions specified in one or more boxes in the flowchart and/or block diagram.

也可以把计算机可读程序指令加载到计算机、其它可编程数据处理装置、或其它设备上,使得在计算机、其它可编程数据处理装置或其它设备上执行一系列操作步骤,以产生计算机实现的过程,从而使得在计算机、其它可编程数据处理装置、或其它设备上执行的指令实现流程图和/或框图中的一个或多个方框中规定的功能/动作。Computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device so that a series of operating steps are performed on the computer, other programmable data processing apparatus, or other device to produce a computer-implemented process, thereby causing the instructions executed on the computer, other programmable data processing apparatus, or other device to implement the functions/actions specified in one or more boxes in the flowchart and/or block diagram.

附图中的流程图和框图显示了根据本发明的多个实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段或指令的一部分,所述模块、程序段或指令的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个连续的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或动作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。对于本领域技术人员来说公知的是,通过硬件方式实现、通过软件方式实现以及通过软件和硬件结合的方式实现都是等价的。The flowchart and block diagram in the accompanying drawings show the possible architecture, functions and operations of the system, method and computer program product according to multiple embodiments of the present invention. In this regard, each box in the flowchart or block diagram can represent a part of a module, program segment or instruction, and the part of the module, program segment or instruction contains one or more executable instructions for realizing the specified logical function. In some alternative implementations, the functions marked in the box can also occur in a different order from the order marked in the accompanying drawings. For example, two consecutive boxes can actually be executed substantially in parallel, and they can sometimes be executed in the opposite order, depending on the functions involved. It should also be noted that each box in the block diagram and/or flowchart, and the combination of the boxes in the block diagram and/or flowchart can be implemented by a dedicated hardware-based system that performs the specified function or action, or can be implemented by a combination of dedicated hardware and computer instructions. It is well known to those skilled in the art that it is equivalent to implement it by hardware, implement it by software, and implement it by combining software and hardware.

Embodiments of the present invention have been described above. The foregoing description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or technical improvements over technologies in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the present invention is defined by the appended claims.

Claims (10)

1. A registration method for multimodal image-guided radiotherapy, comprising the following steps:
acquiring multimodal images to be registered;
converting the target organ of the image to be registered into a point cloud, and registering the image to be registered on the basis of the organ anatomical structure point cloud using the deformation relationship learned by non-rigid registration, to obtain a preliminarily deformed image to be registered;
inputting the preliminarily deformed image to be registered into a trained deep convolutional network to obtain a registered image, the deep convolutional network reflecting the deformation relationship between the multimodal image to be registered and a template image;
formulating a radiation therapy plan using the registered image.

2. The method according to claim 1, wherein the deformation relationship learned by the non-rigid registration is obtained using the following optimization objective:

min E(X) = min{L_d(X) + αL_s(X) + βL_l(X)}

where X denotes the deformation vector field, L_d(X) is the distance term, L_s(X) is the rigidity term, L_l(X) is the key-point term, and α and β are the weights of the corresponding terms.
3. The method according to claim 2, wherein the distance term is expressed as:

L_d(X) = Σ_{v_i ∈ M} w_i · dist²(S, X_i v_i),    v_i = [x, y, z, 1]^T

where w_i is a weight, v_i is the i-th point in the point cloud M to be deformed, dist(S, v_i) is the distance between point v_i and the point nearest to it on the template image S, and X_i is a transformation matrix of size 3×4;

the rigidity term is expressed as:

L_s(X) = Σ_{(i,j) ∈ E} ‖(X_i − X_j)G‖²_F

where E is the set of neighbouring point pairs, G = diag(1, 1, 1, γ), and γ is a set weighting value;

the key-point term is expressed as:

L_l(X) = Σ_{(v_i, l) ∈ L} ‖X_i v_i − l‖²

where l denotes a feature point in the template image and L denotes the given feature point set.
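For illustration only, the objective min E(X) = L_d + αL_s + βL_l with the three terms of claims 2–3 can be sketched in NumPy. The brute-force nearest-point search, the explicit edge set, the landmark pairing, and the default weights are simplifying assumptions for readability, not part of the claimed method:

```python
import numpy as np

def energy(X, M, S, edges, landmarks, alpha=1.0, beta=1.0, weights=None, gamma=1.0):
    """Schematic E(X) = L_d + alpha * L_s + beta * L_l for non-rigid point-cloud registration.

    X         : (n, 3, 4) per-point affine transforms X_i
    M         : (n, 3) points v_i of the deforming point cloud
    S         : (m, 3) points sampled on the template surface
    edges     : iterable of (i, j) neighbouring-point index pairs
    landmarks : iterable of (i, l) pairs: point index i matched to template point l (3,)
    """
    n = len(M)
    w = np.ones(n) if weights is None else np.asarray(weights)
    Mh = np.hstack([M, np.ones((n, 1))])            # homogeneous v_i = [x, y, z, 1]^T
    warped = np.einsum('nij,nj->ni', X, Mh)         # X_i v_i for every point

    # L_d: squared distance from each warped point to its nearest template point
    d2 = ((warped[:, None, :] - S[None, :, :]) ** 2).sum(-1).min(axis=1)
    L_d = float((w * d2).sum())

    # L_s: rigidity — penalise neighbouring points having differing transforms
    G = np.diag([1.0, 1.0, 1.0, gamma])
    L_s = sum(np.linalg.norm((X[i] - X[j]) @ G, 'fro') ** 2 for i, j in edges)

    # L_l: key-point term — warped feature points should land on template landmarks
    L_l = sum(((X[i] @ Mh[i] - l) ** 2).sum() for i, l in landmarks)

    return L_d + alpha * L_s + beta * L_l
```

With identity transforms and coincident clouds every term vanishes, which is a quick sanity check on the implementation.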
4. The method according to claim 1, wherein the deep convolutional network comprises an encoder and a decoder, the encoder being a Swin-Transformer encoder and the decoder comprising upsampling layers and convolutional layers; during decoding, the feature map in each upsampling layer is concatenated, through skip connections, with the corresponding features from the encoding process, and each upsampling step in the decoding process is followed by two three-dimensional convolutional layers.

5. The method according to claim 4, wherein training the deep convolutional network comprises the following steps:
inputting the preliminarily deformed template image and the image to be registered into the encoder for feature encoding;
fusing the features extracted by the encoder, decoding the fused features with the decoder, performing prediction after fusing features of different scales, and outputting a predicted image of the same size as the original image;
comparing the predicted image with the original image, back-propagating the comparison to the encoder, and training the deep convolutional network using a set loss function.
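As an illustrative sketch of the skip-connection wiring described in claim 4 — the nearest-neighbour upsampling, the channel counts, and the omission of the convolution and attention blocks are simplifying assumptions; the claimed Swin-Transformer encoder is not reproduced here:

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of a (C, D, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2).repeat(2, axis=3)

def decode(encoder_feats):
    """Walk coarse-to-fine: upsample, then concatenate the same-scale encoder
    feature map along the channel axis (the 'skip connection' of claim 4).

    encoder_feats : list of (C, D, H, W) arrays in fine-to-coarse order,
                    each level half the spatial size of the previous one.
    """
    feats = list(encoder_feats)
    x = feats[-1]                                   # coarsest map starts the decoder
    for skip in reversed(feats[:-1]):
        x = upsample2x(x)
        x = np.concatenate([x, skip], axis=0)       # skip link: channel-wise concat
        # ...in the real network, two 3-D convolutional layers would follow here...
    return x
```

Only the shape bookkeeping is shown; the point is that each decoder level sees both the upsampled coarse features and the encoder features of matching resolution.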
The method according to claim 1, characterized in that the loss function for training the deep convolutional network is set to:
其中,p代表影像中体素的位置,Im和f代表待配准图像和模板影像,且代表待配准影像和模板影像在以pi为中心点相邻体素的平均体素值,φ代表由待配准影像到模板影像的变形矢量场,pi表示当前操作的中心体素,R表示图像域。Where p represents the position of the voxel in the image, Im and f represent the image to be registered and the template image, and and represents the average voxel value of the adjacent voxels of the image to be registered and the template image with pi as the center point, φ represents the deformation vector field from the image to be registered to the template image, pi represents the central voxel of the current operation, and R represents the image domain.
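The loss of claim 6 is a local normalized cross-correlation. A deliberately unoptimized NumPy sketch follows; the cubic window of side `win`, the edge padding, and the epsilon in the denominator are assumptions for illustration, and the moving image is assumed to be already warped by φ:

```python
import numpy as np

def lncc_loss(fixed, warped_moving, win=3):
    """Negative mean local normalized cross-correlation between the template
    (fixed) image and the already-warped moving image.

    For each voxel p, the statistics are taken over a win**3 neighbourhood,
    matching the 'average voxel value of adjacent voxels' in claim 6."""
    pad = win // 2
    f = np.pad(fixed, pad, mode='edge')
    m = np.pad(warped_moving, pad, mode='edge')
    total = 0.0
    D, H, W = fixed.shape
    for z in range(D):
        for y in range(H):
            for x in range(W):
                fw = f[z:z + win, y:y + win, x:x + win]
                mw = m[z:z + win, y:y + win, x:x + win]
                fc = fw - fw.mean()                 # f(p_i) - f_hat(p)
                mc = mw - mw.mean()                 # I_m(phi(p_i)) - I_m_hat(phi(p))
                num = (fc * mc).sum() ** 2
                den = (fc ** 2).sum() * (mc ** 2).sum() + 1e-8
                total += num / den
    return -total / (D * H * W)
```

For a non-constant image registered to itself each window correlates perfectly, so the loss approaches its minimum of −1.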
7. The method according to claim 3, wherein registering the image to be registered on the basis of the organ anatomical structure point cloud using the deformation relationship learned by non-rigid registration comprises:
constructing the human organ anatomical structure point cloud and the target-volume point cloud using the marching cubes method, wherein the gradient at the vertices of the voxel containing each data point is computed by the central difference method;
after computing the gradients at the eight vertices of each voxel by the marching cubes method, computing the gradients at the vertices of the central triangular facets by linear interpolation, and rendering the isosurface;
extracting the coordinates of the isosurface vertices, defining them as a point cloud, and performing registration based on the organ anatomical structure point cloud on the defined target point cloud using non-rigid registration.

8. The method according to claim 1, wherein the multimodal medical images comprise magnetic resonance imaging, ultrasound imaging, computed tomography, and cone-beam computed tomography.

9. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 8.
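The central-difference gradient of claim 7 can be sketched over a scalar volume as follows; unit isotropic voxel spacing is an assumption, and a real marching-cubes pipeline would then linearly interpolate these vertex gradients onto the isosurface triangle vertices:

```python
import numpy as np

def central_gradient(vol):
    """Central-difference gradient (gx, gy, gz) at each interior grid vertex:
    g_x(i, j, k) = (vol[i+1, j, k] - vol[i-1, j, k]) / 2, and likewise for
    the y and z axes. Boundary voxels are left at zero for simplicity."""
    gx = np.zeros_like(vol); gy = np.zeros_like(vol); gz = np.zeros_like(vol)
    gx[1:-1, :, :] = (vol[2:, :, :] - vol[:-2, :, :]) / 2.0
    gy[:, 1:-1, :] = (vol[:, 2:, :] - vol[:, :-2, :]) / 2.0
    gz[:, :, 1:-1] = (vol[:, :, 2:] - vol[:, :, :-2]) / 2.0
    return gx, gy, gz
```

On a linear ramp the estimate is exact, which makes the scheme easy to verify.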
10. A computer device comprising a memory and a processor, a computer program executable on the processor being stored in the memory, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 8.
PCT/CN2023/136602 2023-02-14 2023-12-05 Registration method for multimodality image-guided radiotherapy Ceased WO2024169341A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202310149191.7 2023-02-14
CN202310149191.7A CN116433734A (en) 2023-02-14 2023-02-14 Registration method for multi-mode image guided radiotherapy

Publications (1)

Publication Number Publication Date
WO2024169341A1 true WO2024169341A1 (en) 2024-08-22

Family

ID=87082190

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/136602 Ceased WO2024169341A1 (en) 2023-02-14 2023-12-05 Registration method for multimodality image-guided radiotherapy

Country Status (2)

Country Link
CN (1) CN116433734A (en)
WO (1) WO2024169341A1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118799658A (en) * 2024-09-13 2024-10-18 杭州电子科技大学 A cross-modal fusion classification method based on digital breast images and ultrasound images
CN119151778A (en) * 2024-11-20 2024-12-17 杭州智睿云康医疗科技有限公司 Image processing method, device, electronic equipment and storage medium
CN119228860A (en) * 2024-11-28 2024-12-31 江西省肿瘤医院(江西省第二人民医院、江西省癌症中心) A method and system for non-rigid image registration based on deep learning
CN119273724A (en) * 2024-08-30 2025-01-07 西安电子科技大学 Multi-organ registration method based on cross-modal attention mechanism and vector fusion
CN119455278A (en) * 2024-10-24 2025-02-18 迈胜医疗设备有限公司 VR-based SGRT positioning verification method and related equipment
CN119722766A (en) * 2025-02-27 2025-03-28 首都医科大学附属北京天坛医院 A brain CT and CTP image registration method based on point cloud technology
CN119941813A (en) * 2025-04-07 2025-05-06 西安交通大学医学院第一附属医院 Multimodal large model-assisted laparoscopic soft tissue registration surgery navigation method and system
CN120070440A (en) * 2025-04-28 2025-05-30 中国人民解放军总医院第三医学中心 Method and system for resolving radiological image data
CN120655689A (en) * 2025-08-15 2025-09-16 福建自贸试验区厦门片区Manteia数据科技有限公司 Deformation registration method and device, storage medium and electronic equipment
CN120707869A (en) * 2025-07-09 2025-09-26 安徽中医药大学神经病学研究所附属医院 A medical image analysis method based on multimodal brain MR structural feature recognition
CN120983150A (en) * 2025-10-24 2025-11-21 珠海横琴全星医疗科技有限公司 Intraoperative organ displacement real-time sensing method based on multi-mode fusion and related device

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116433734A (en) * 2023-02-14 2023-07-14 中国科学院深圳先进技术研究院 Registration method for multi-mode image guided radiotherapy

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108596961A (en) * 2018-04-17 2018-09-28 浙江工业大学 Point cloud registration method based on Three dimensional convolution neural network
US20200058156A1 (en) * 2018-08-17 2020-02-20 Nec Laboratories America, Inc. Dense three-dimensional correspondence estimation with multi-level metric learning and hierarchical matching
CN111882593A (en) * 2020-07-23 2020-11-03 首都师范大学 A point cloud registration model and method combining attention mechanism and 3D graph convolutional network
CN112802073A (en) * 2021-04-08 2021-05-14 之江实验室 Fusion registration method based on image data and point cloud data
CN113450294A (en) * 2021-06-07 2021-09-28 刘星宇 Multi-modal medical image registration and fusion method and device and electronic equipment
CN115358995A (en) * 2022-08-22 2022-11-18 复旦大学 Fully automatic spatial registration system based on multimodal information fusion
CN116433734A (en) * 2023-02-14 2023-07-14 中国科学院深圳先进技术研究院 Registration method for multi-mode image guided radiotherapy

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107358607A (en) * 2017-08-13 2017-11-17 强深智能医疗科技(昆山)有限公司 Tumour radiotherapy visual monitoring and visual servo intelligent control method
CN112907439B (en) * 2021-03-26 2023-08-08 中国科学院深圳先进技术研究院 Deep learning-based supine position and prone position breast image registration method
CN113450397B (en) * 2021-06-25 2022-04-01 广州柏视医疗科技有限公司 Image deformation registration method based on deep learning
CN114022521B (en) * 2021-10-13 2024-09-13 华中科技大学 A non-rigid multimodal medical image registration method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
董国亚 等 (DONG, GUOYA ET AL.): "基于深度学习的跨模态医学图像转换 (Cross-Modality Medical Image Synthesis Based on Deep Learning)", 中国医学物理学杂志 (CHINESE JOURNAL OF MEDICAL PHYSICS), vol. 37, no. 10, 31 October 2020 (2020-10-31), XP055941001, DOI: 10.3969/j.issn.1005-202X.2020.10.021 *


Also Published As

Publication number Publication date
CN116433734A (en) 2023-07-14

Similar Documents

Publication Publication Date Title
WO2024169341A1 (en) Registration method for multimodality image-guided radiotherapy
US12112483B2 (en) Systems and methods for anatomic structure segmentation in image analysis
US11488021B2 (en) Systems and methods for image segmentation
CN113516659B (en) An automatic segmentation method of medical images based on deep learning
Mahapatra et al. Joint registration and segmentation of xray images using generative adversarial networks
Kroon et al. MRI modality transformation in demon registration
US12254538B2 (en) Devices and process for synthesizing images from a source nature to a target nature
Wang et al. Annotation-efficient learning for medical image segmentation based on noisy pseudo labels and adversarial learning
CN110363802B (en) Prostate Image Registration System and Method Based on Automatic Segmentation and Pelvis Alignment
CN110009669A (en) A 3D/2D Medical Image Registration Method Based on Deep Reinforcement Learning
CN114359642A (en) Multi-modal medical image multi-organ positioning method based on one-to-one target query Transformer
JP7346553B2 (en) Determining the growth rate of objects in a 3D dataset using deep learning
Xu et al. 3D‐SIFT‐Flow for atlas‐based CT liver image segmentation
CN116797519A (en) Brain glioma segmentation and three-dimensional visualization model training method and system
JPWO2020110774A1 (en) Image processing equipment, image processing methods, and programs
Henderson et al. Automatic identification of segmentation errors for radiotherapy using geometric learning
Zhang et al. Shape prior modeling using sparse representation and online dictionary learning
Alchatzidis et al. A discrete MRF framework for integrated multi-atlas registration and segmentation
CN115409837B (en) A CTV automatic delineation method for endometrial cancer based on multimodal CT images
Chaisangmongkon et al. External validation of deep learning algorithms for cardiothoracic ratio measurement
CN113379770B (en) Construction method of nasopharyngeal carcinoma MR image segmentation network, image segmentation method and device
Albarqouni et al. Single-view X-ray depth recovery: toward a novel concept for image-guided interventions
US20240394870A1 (en) Systems and methods for determining anatomical deformations
Anitha Brain tumor detection in combined 3D MRI and CT images using Dictionary learning based Segmentation and Spearman Regression
Longuefosse et al. MR to CT synthesis using GANs: a practical guide applied to thoracic imaging

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23922457

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE