WO2025071977A1 - Pet/ct atlas for segmentation - Google Patents
Pet/ct atlas for segmentation
- Publication number
- WO2025071977A1 (PCT/US2024/046993)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- patient
- image
- segmentation
- neural network
- atlas
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/088—Non-supervised learning, e.g. competitive learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10104—Positron emission tomography [PET]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10108—Single photon emission computed tomography [SPECT]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20128—Atlas-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
Definitions
- a machine learning method of segmenting at least one anatomical feature in an image of a patient is presented.
- the method includes: obtaining a patient image, where the patient image is of an anatomical portion of the patient, and where the anatomical portion includes one or more of a chest, an abdomen, or a pelvis; providing the patient image to a trained neural network, where the trained neural network was trained based on an unsupervised machine learning technique to generate a deformation field that maps a standardized anatomical atlas to an input image; obtaining a patient deformation field from the trained neural network in response to the providing, where the deformation field warps the standardized anatomical atlas to the patient image; applying the patient deformation field to a labeling of the standardized atlas, where a segmentation corresponding to the patient image is produced; and outputting the segmentation.
- the patient image may include an Emission Computed Tomography (ECT) image, and the patient image may be captured after the patient is injected with a radiopharmaceutical.
- the segmentation may include a tumor segmentation.
- the radiopharmaceutical may include a Prostate-Specific Membrane Antigen (PSMA).
- the trained neural network may include a transformer architecture.
- the trained neural network may have been trained using a loss function that includes a deformation field smoothness measure and a similarity measure.
- the loss function may further include an average measure of magnitude of a deformation field over a training dataset.
- the trained neural network may have been further trained using a dice function for labeled training images.
- the method may further include providing an uncertainty map for the segmentation.
- the uncertainty map may be based on at least one of epistemic uncertainty or aleatoric uncertainty.
- a machine learning system for segmenting at least one anatomical feature in an image of a patient.
- the system includes: an electronic processor; and a non-transitory computer readable medium including instructions that, when executed by the electronic processor, configure the electronic processor to perform actions including: obtaining a patient image, where the patient image is of an anatomical portion of the patient, and where the anatomical portion includes one or more of a chest, an abdomen, or a pelvis, providing the patient image to a trained neural network, where the trained neural network was trained based on an unsupervised machine learning technique to generate a deformation field that maps a standardized anatomical atlas to an input image, obtaining a patient deformation field from the trained neural network in response to the providing, where the deformation field warps the standardized anatomical atlas to the patient image, applying the patient deformation field to a labeling of the standardized atlas, where a segmentation corresponding to the patient image is produced, and outputting the segmentation.
- the patient image may include an Emission Computed Tomography (ECT) image, and the patient image may be captured after the patient is injected with a radiopharmaceutical.
- the segmentation may include a tumor segmentation.
- the radiopharmaceutical may include a Prostate-Specific Membrane Antigen (PSMA).
- the trained neural network may include a transformer architecture.
- the trained neural network may have been trained using a loss function that includes a deformation field smoothness measure and a similarity measure.
- the loss function may further include an average measure of magnitude of a deformation field over a training dataset.
- the trained neural network may have been further trained using a dice function for labeled training images.
- the actions may further include providing an uncertainty map for the segmentation.
- Fig.1 is a schematic diagram representing training a machine learning system for segmenting at least one anatomical feature in an image of a patient, according to various embodiments;
- Figs. 2A and 2B illustrate an example hybrid transformer and convolutional neural network architecture 200, according to various embodiments;
- Fig. 3 is a schematic diagram representing using a trained machine learning system to segment at least one anatomical feature in an image of a patient, according to various embodiments
- Fig. 4 depicts layers of a constructed CT atlas, a deformation field, a warped atlas, a patient CT scan, a warped labeled atlas, and segmentation uncertainty, according to a reduction to practice;
- Fig.5 depicts layers of a constructed PET atlas, a deformation field, a warped atlas, a patient PET scan, a warped labeled atlas, and segmentation uncertainty, according to a reduction to practice;
- Fig.6 illustrates an organ segmentation of a patient PSMA PET scan, according to the reduction to practice of Fig.5; and
- FIG. 7 is a flow chart illustrating a method of segmenting at least one anatomical feature in an image of a patient, according to various embodiments.
- Description of the Examples [0018] Reference will now be made in detail to example implementations, illustrated in the accompanying drawings. Wherever convenient, the same reference numbers will be used throughout the drawings to refer to the same or like parts. In the following description, reference is made to the accompanying drawings that form a part thereof, and in which is shown by way of illustration specific exemplary examples in which the invention may be practiced. These examples are described in sufficient detail to enable those skilled in the art to practice the invention and it is to be understood that other examples may be utilized and that changes may be made without departing from the scope of the invention. The following description is, therefore, merely exemplary.
- Various embodiments provide an unsupervised machine learning technique for organ segmentation based on atlas-image registration.
- Various embodiments may be used to segment organs shown in three-dimensional scans, such as, by way of non-limiting example, PET/CT images.
- Various embodiments may be used to provide patient-specific dosimetry for radiopharmaceutical therapies (RPTs), e.g., for PSMA-targeted RPTs.
- there is no need for an annotated training dataset because various embodiments utilize an unsupervised machine learning system.
- Various embodiments provide fast and reliable segmentation results akin to those of methods trained with an accurately annotated labeled training dataset.
- Some embodiments solve the problem of prior machine learning techniques, which require a large number of annotated labeled training images, by using an unsupervised machine learning system to generate a deformation field that maps a standardized anatomical atlas to an input image, which does not require any annotated labeled training images.
- the use of a standardized atlas according to some embodiments solves the problem presented by inter-patient variability for anatomical features such as the chest, abdomen, or pelvis. The use of such a standardized atlas allows for segmentation of such anatomical features in the presence of such variability.
- FIG. 1 is a schematic diagram 100 representing training a machine learning system for segmenting at least one anatomical feature in an image of a patient, according to various embodiments.
- the diagram 100 shows a technique for generating an atlas 106 and deep neural network registration model 108 using a training image data set 102.
- the atlas 106 is a standardized anatomical reference image that can be geometrically deformed to fit the anatomical variations among patients.
- embodiments are described herein in reference to PSMA PET and CT images, however, embodiments are not limited to radiopharmacology in general or PSMA radiopharmacology in particular, nor are embodiments limited to PET or CT images.
- patient images such as the images in the training image data set 102
- training images such as the training image data set 102
- the atlas 106 may be constructed in a training stage by maximizing a posterior, which may be represented as follows, by way of non-limiting example: (1) log p(a, φ | x) = Σ_i [ log p(x_i | a, φ_i) + log p(φ_i) ] + const.
- x = {x_1, …, x_N} and a denote the image dataset 102 and the atlas 106, respectively.
- φ = {φ_1, …, φ_N} denotes the deformation fields 110 that warp the patient images to the atlas a 106.
- the prior over the deformation fields, p(φ_i), is modeled by a multivariate normal distribution with zero mean and covariance Σ, and a Gaussian likelihood is used for p(x_i | a, φ_i).
- letting Λ = Σ⁻¹ be the Laplacian of a neighborhood graph defined on the image grid, the registration network be f, and its parameters be θ (so that φ_i = f_θ(x_i, a)), the loss function 114 that is minimized to train the deep neural network registration model 108 and construct the atlas 106 may take the following form, by way of non-limiting example: (2) L(θ, a) = Σ_i [ ‖a ∘ φ_i − x_i‖² / (2σ²) + ½ φ_iᵀ Λ φ_i ] + γ ‖(1/N) Σ_i φ_i‖².
- the Gaussian likelihood ensures the similarity matching between the deformed atlas 112 and the individual patient image, while the prior ensures the atlas 106 is unbiased and encourages the spatial smoothness of the deformation.
- the loss function described herein in reference to Equation (2) is non-limiting. In general, various embodiments may use a loss function that includes any, or any combination, of the following: (a) a smoothness measure of the deformation field; (b) a measure of similarity between the patient image and the atlas after the atlas has been warped according to the deformation field; (c) a measure of the average (mean) magnitude of the deformation field over the entire training image data set (e.g., the sum of each pixel's movement).
- (c) can be used to ensure that the atlas is unbiased by serving as a regularizer, which prevents overfitting to any particular image in the training image data set.
- the loss function may incorporate a measure of difference between the atlas as warped by the deformation field and the segmentation of the labeled training image. Such a measure may include a Dice function, for example.
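By way of non-limiting illustration, the loss components discussed above — (a) smoothness, (b) similarity, (c) average deformation magnitude — together with the optional Dice term for labeled images, can be sketched as follows. This NumPy sketch is an assumption for illustration only: the function names, weights, and the choice of mean squared error as the similarity measure are not taken from the disclosure.

```python
import numpy as np

def composite_loss(warped_atlas, image, field, mean_field,
                   w_sim=1.0, w_smooth=0.01, w_mean=0.01):
    """Illustrative registration loss with three terms:
    (b) similarity between the warped atlas and the patient image,
    (a) smoothness of the deformation field (finite differences),
    (c) magnitude of the dataset-average deformation (unbiasedness)."""
    sim = np.mean((warped_atlas - image) ** 2)
    # finite differences along each spatial axis (axis 0 holds field components)
    smooth = sum(np.mean(np.diff(field, axis=ax) ** 2)
                 for ax in range(1, field.ndim))
    mean_mag = np.mean(mean_field ** 2)
    return w_sim * sim + w_smooth * smooth + w_mean * mean_mag

def dice_loss(pred_mask, true_mask, eps=1e-6):
    """Optional supervised term usable when labeled training images exist."""
    inter = np.sum(pred_mask * true_mask)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred_mask) + np.sum(true_mask) + eps)
```

A perfectly registered pair with a flat deformation field yields zero loss; the smoothness and mean-magnitude weights trade off fidelity to each image against regularity and unbiasedness of the atlas.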
- An off-the-shelf affine registration model 104 is used to align the images in the training data set 102.
- the TransMorph registration model may be used for the deep neural network registration model 108, f.
- the TransMorph architecture is shown and described herein in reference to Figs.2A and 2B.
- the parameters of the deep neural network registration model 108 and the atlas 106 are jointly optimized using, by way of non-limiting example, the Adam optimization algorithm.
- the deep neural network registration model 108 can input a patient image and an atlas and output a deformation field that warps the atlas to the patient image.
- the segmentation of the organs for a specific patient is achieved through a deformable registration process that warps the atlas 106 to the patient scans, as shown and described in reference to Fig.3.
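The joint optimization of the atlas and the network parameters can be illustrated with a deliberately simplified toy, under stated assumptions: identity deformations, a Gaussian likelihood, and synthetic 1-D "images". Under those assumptions the atlas update reduces to gradient descent toward the voxel-wise mean of the aligned dataset; a real implementation would additionally update the registration network parameters θ (e.g., with the Adam optimization algorithm).

```python
import numpy as np

rng = np.random.default_rng(0)
# 32 synthetic, already-aligned 1-D "images" of 64 voxels each
images = rng.normal(loc=5.0, scale=1.0, size=(32, 64))

# With identity deformations, the Gaussian likelihood term reduces to
# 0.5 * mean((a - x_i)^2); its gradient w.r.t. the atlas a is computed
# below, so the learned atlas converges to the voxel-wise dataset mean.
atlas = np.zeros(64)
learning_rate = 0.1
for _ in range(500):
    grad = np.mean(atlas - images, axis=0)
    atlas -= learning_rate * grad
```

This makes the "unbiased atlas" intuition concrete: absent deformations, the optimal atlas is simply the average anatomy of the training set.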
- the deep neural network registration model 108 of Fig.1 may include a transformer architecture, such as is included in the architecture 200 of Figs.2A and 2B.
- a transformer deploys a self-attention mechanism to determine which parts of the input sequence (e.g., an image) are essential based on contextual information.
- the specific architecture 200 is a hybrid Transformer-ConvNet model for volumetric medical image registration.
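A single head of scaled dot-product self-attention — the operation at the core of such a hybrid Transformer-ConvNet — can be sketched as follows. The sequence of patch embeddings `x` and the projection matrices are hypothetical stand-ins, not parameters of the architecture 200.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention over a sequence of
    n patch embeddings x (shape (n, d)): every output position is a
    convex combination of all positions, weighted by query-key similarity."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ v
```

With zero query/key projections the attention weights become uniform, so each output is simply the mean of the value vectors — a degenerate case that makes the mixing behavior easy to verify.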
- Fig.3 is a schematic diagram 300 representing using a trained machine learning system to segment at least one anatomical feature in an image of a patient, according to various embodiments.
- the trained machine learning system may include a machine learning system trained as shown and described herein in reference to Fig.1.
- the trained deep learning registration model 310 may be the deep neural network registration model 108 of Fig.1 after having undergone training.
- the constructed atlas 304 may be a standardized anatomical reference image, e.g., the atlas 106, constructed together with the deep neural network registration model 108 as shown and described herein in reference to Fig.1.
- the constructed atlas 304 is labeled to segment the organs and other tissue of interest, resulting in the labeled constructed atlas 302. This may be done by hand, for example; however, note that no labeling of the images in the training image data set is required according to various embodiments.
- the new patient image 306 is aligned, e.g., using the affine registration model 104 of Fig. 1, to obtain an affine registered patient image 308.
- the affine registered patient image 308 and the constructed atlas 304 are passed to the trained deep learning registration model 310.
- the trained deep learning registration model 310 accepts these data and outputs a deformation field 312.
- the deformation field 312 is applied to the labeled constructed atlas 302, which results in a patient image segmentation 314.
- the patient image segmentation 314 constitutes a segmentation of the new patient image 306.
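The inference flow just described — pass the registered patient image and the atlas to the trained model, obtain a deformation field, and warp the labeled atlas — may be sketched as follows. The nearest-neighbor warp (appropriate for integer label maps) and the stand-in `predict_field` callable are illustrative assumptions; in practice the field would come from the trained deep learning registration model 310.

```python
import numpy as np

def warp_nearest(volume, field):
    """Warp `volume` by a dense displacement `field` of shape
    (ndim, *volume.shape) using nearest-neighbor sampling, which keeps
    label values intact (no interpolation between organ labels)."""
    coords = np.indices(volume.shape, dtype=float) + field
    coords = np.rint(coords).astype(int)
    for ax, size in enumerate(volume.shape):
        np.clip(coords[ax], 0, size - 1, out=coords[ax])
    return volume[tuple(coords)]

def segment(patient_image, labeled_atlas, predict_field):
    """Predict the atlas-to-patient deformation field, then propagate
    the atlas labels onto the patient image grid."""
    field = predict_field(patient_image)
    return warp_nearest(labeled_atlas, field)
```

A zero displacement field returns the labeled atlas unchanged, which is a convenient sanity check when wiring up a real registration network.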
- the trained machine learning system may further output an uncertainty map, illustrating regions of uncertainty in the patient image segmentation 314.
- the uncertainty map may represent epistemic segmentation uncertainty, representing the uncertainty of the trained deep learning registration model 310 in predicting the deformation map, aleatoric segmentation uncertainty, representing segmentation uncertainty arising from the natural stochasticity of observations, or a different segmentation uncertainty.
- the trained machine learning system may include an uncertainty deep neural network that is conditioned on the appearance differences between a warped and fixed image to estimate the uncertainty in propagating the anatomical labels.
- Such an uncertainty deep neural network may estimate the aleatoric segmentation uncertainty without necessitating an actual anatomical label map at test time.
- the epistemic segmentation uncertainty may be estimated if anatomical labels are provided.
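One simple way to realize such an uncertainty map — an assumption for illustration, not necessarily the disclosed estimator — is to run several stochastic forward passes (e.g., Monte Carlo dropout samples of the deformation field), warp the labels with each, and measure per-voxel disagreement as label entropy:

```python
import numpy as np

def segmentation_entropy(seg_samples):
    """Per-voxel label entropy across S stochastic segmentations
    (array of shape (S, ...)): 0 where every sample agrees, up to
    log(number of labels) where the samples disagree completely."""
    labels = np.unique(seg_samples)
    probs = np.stack([(seg_samples == c).mean(axis=0) for c in labels])
    with np.errstate(divide="ignore", invalid="ignore"):
        ent = -np.sum(np.where(probs > 0.0, probs * np.log(probs), 0.0), axis=0)
    return ent
```

High-entropy voxels typically cluster along organ boundaries, which is where label propagation is least reliable.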
- Fig.4 depicts layers 400 of a constructed CT atlas 402, a deformation field 404, a warped atlas 406, a patient CT scan 408, a warped labeled atlas 410, and segmentation uncertainty 412, according to a reduction to practice.
- the atlas 402 was constructed from 275 whole-body PSMA CT scans as shown and described herein in reference to Fig.1. Seven organs in the constructed CT atlas 402 were manually delineated, including bone, lungs, kidneys, liver, and spleen, resulting in a labeled constructed CT atlas.
- the constructed CT atlas 402 and the patient CT scan 408 were input to the reduction to practice as shown and described in reference to Fig.3, and the deformation field 404 was output.
- the deformation field 404 when applied to the constructed CT atlas 402, deformed the constructed CT atlas 402 to produce the warped atlas 406, which matched the patient CT scan 408.
- the deformation field 404 was applied to the labeled constructed CT atlas, resulting in the warped labeled atlas 410, which provided a segmentation of the patient CT scan 408. Finally, a segmentation uncertainty 412 was generated, illustrating uncertain regions of the warped labeled atlas 410.
- Fig.5 depicts layers 500 of a constructed PET atlas 502, a deformation field 504, a warped atlas 506, a patient PET scan 508, a warped labeled atlas 510, and segmentation uncertainty 512, according to a reduction to practice.
- the PET atlas 502 was constructed from 275 whole-body PSMA PET scans as shown and described herein in reference to Fig.1. Eight organs on the constructed PET atlas 502 were manually delineated, including the bladder, bowel, kidneys, liver, spleen, lacrimal glands, parotid glands, and submandibular glands, resulting in a labeled constructed PET atlas.
- the constructed PET atlas 502 and the patient PET scan 508 were input to the reduction to practice as shown and described in reference to Fig.3, and the deformation field 504 was output.
- the deformation field 504, when applied to the constructed PET atlas 502, warped the constructed PET atlas 502 to produce the warped atlas 506, which matched the patient PET scan 508.
- the deformation field 504 was applied to the labeled constructed PET atlas, resulting in the warped labeled atlas 510, which provided a segmentation of the patient PET scan 508. Finally, a segmentation uncertainty 512 was generated, illustrating uncertain regions of the warped labeled atlas 510.
- Fig.6 illustrates an organ segmentation 600 of a patient PSMA PET scan, according to the reduction to practice of Fig.5.
- Fig.6 illustrates example layers of the organ segmentation 600 for the bladder, bowel, kidneys, liver, spleen, lacrimal glands, parotid glands, and submandibular glands.
- the segmentation illustrates qualitative results of the PSMA PET segmentation of Fig.5.
- Fig.7 is a flow chart illustrating a method 700 of segmenting at least one anatomical feature in an image of a patient, according to various embodiments. The method 700 may be implemented as shown and described in reference to Fig.3, using a system trained as shown and described in reference to Fig.1, for example.
- the method 700 includes obtaining a patient image.
- the patient image may be a three-dimensional image, such as a CT or PET scan, for example.
- the patient image may be of an anatomical portion of the patient, including one or more of a chest, an abdomen, or a pelvis of the patient.
- the method 700 includes providing the patient image to a trained neural network.
- the trained neural network may have been trained based on an unsupervised machine learning technique to generate a deformation field that maps a standardized anatomical atlas to an input image, e.g., as shown and described herein in reference to Fig.3.
- the method 700 includes obtaining a patient deformation field from the trained neural network in response to the providing.
- this deformation field preserves the overall quantification of voxel values even though individual voxel values may be altered during the registration process.
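Whether a deformation locally compresses or expands tissue — and hence how it redistributes voxel values — can be checked through the Jacobian determinant of the mapping p ↦ p + u(p): values near 1 indicate local volume is approximately preserved, and positive values indicate no folding. The 2-D sketch below is illustrative; it is not code from the disclosure.

```python
import numpy as np

def jacobian_determinant_2d(field):
    """Per-voxel Jacobian determinant of the map p -> p + u(p) for a 2-D
    displacement field `field` of shape (2, H, W): det J = det(I + grad u)."""
    d0_d0, d0_d1 = np.gradient(field[0])  # derivatives of component 0
    d1_d0, d1_d1 = np.gradient(field[1])  # derivatives of component 1
    return (1.0 + d0_d0) * (1.0 + d1_d1) - d0_d1 * d1_d0
```

An identity or pure-translation field has determinant exactly 1 everywhere, so total counts inside any region are unchanged by such a deformation.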
- the deformation field may be output from the trained neural network in response to inputting the patient image and a standardized anatomical atlas, e.g., as shown and described herein in reference to Fig.3.
- the anatomical atlas may also be input at 706, or may have previously been input to the trained neural network, e.g., as a fixed or selectable parameter.
- the deformation field may map the standardized anatomical atlas to the patient image, such that the deformation field warps the atlas to the patient image.
- the method 700 includes applying the patient deformation field to a labeling of the standardized atlas, e.g., as shown and described herein in reference to Fig. 3.
- the application of the deformation field to the labeling of the standardized atlas warps the standardized atlas to the patient image, producing a segmentation corresponding to the patient image, e.g., as shown and described in reference to Fig.3.
- the method 700 includes outputting the segmentation.
- the segmentation may be output in any of a variety of ways. For example, the segmentation may be output by displaying it on a monitor, e.g., to a care provider.
- the segmentation may be output by delivery to an electronic health care system, e.g., over a network.
- the segmentation may be output to a different computing system, such as a radiology system.
- the method 700 may include any of a variety of actions.
- the method 700 may output an uncertainty map, as shown and described herein in reference to Figs. 4 and 5. Such an uncertainty map may be consulted in any additional use case, e.g., to assess the reliability of the segmentation.
- the method 700 may include treating the patient, or altering treatment of the patient, based on the segmentation.
- the method 700 may include selecting or adjusting the dose of a radiopharmaceutical, such as a PSMA radiopharmaceutical, for example.
- the method 700 may include ensuring that the dose arrives at a targeted region or anatomical feature, such as a tumor, in the correct quantity.
- a PET scan may be used, which shows the presence of the radiopharmaceutical (or a tracer), and the segmentation may be used to determine a quantity of the radiopharmaceutical (or tracer) in any anatomical feature, such as a particular organ, tissue, or tumor.
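Given such a segmentation, the quantity of radiopharmaceutical (or tracer) in an anatomical feature can be estimated by summing the PET voxel values under that feature's label. In the sketch below, the function name is hypothetical and the voxel volume is assumed to be 1 for simplicity:

```python
import numpy as np

def organ_activity(pet, segmentation, organ_label):
    """Total tracer activity within one segmented anatomical feature:
    the sum of PET voxel values where the segmentation carries the
    feature's label (voxel volume assumed to be 1)."""
    return float(pet[segmentation == organ_label].sum())
```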
- the method 700 may be used to determine pharmacokinetics.
- the method 700 may be used with a digital phantom to generate a training corpus for a machine learning application.
- embodiments may be used to segment any Emission Computed Tomography (ECT) image.
- embodiments may be used to segment Magnetic Resonance Imaging (MRI) images.
- embodiments may be used to segment any ultrasound image.
- Certain examples can be performed using a computer program or set of programs.
- the computer programs can exist in a variety of forms both active and inactive.
- the computer programs can exist as software program(s) comprising program instructions in source code, object code, executable code, or other formats; firmware program(s); or hardware description language (HDL) files. Any of the above can be embodied on a transitory or non-transitory computer readable medium, which includes storage devices and signals, in compressed or uncompressed form.
- Exemplary computer readable storage devices include conventional computer system RAM (random access memory), ROM (read-only memory), EPROM (erasable, programmable ROM), EEPROM (electrically erasable, programmable ROM), flash memory, and magnetic or optical disks or tapes.
- These computer readable program instructions may be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the electronic processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
- the computer readable program instructions may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state- setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the C programming language or similar programming languages.
- the computer readable program instructions may execute entirely on a user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
- the terms “A or B” and “A and/or B” are intended to encompass A, B, or ⁇ A and B ⁇ . Further, the terms “A, B, or C” and “A, B, and/or C” are intended to encompass single items, pairs of items, or all items, that is, all of: A, B, C, ⁇ A and B ⁇ , ⁇ A and C ⁇ , ⁇ B and C ⁇ , and ⁇ A and B and C ⁇ .
Abstract
Machine learning techniques for segmenting at least one anatomical feature in an image of a patient are presented. The techniques can include: obtaining a patient image, where the patient image is of an anatomical portion of the patient, and where the anatomical portion comprises one or more of a chest, an abdomen, or a pelvis; providing the patient image to a trained neural network, where the trained neural network was trained based on an unsupervised machine learning technique to generate a deformation field that maps a standardized anatomical atlas to an input image; obtaining a patient deformation field from the trained neural network in response to the providing, where the deformation field warps the standardized anatomical atlas to the patient image; applying the patient deformation field to a labeling of the standardized atlas, where a segmentation corresponding to the patient image is produced; and outputting the segmentation.
Description
Attorney Docket No. C17805_P17805-02/0184.0281-PCT PET/CT ATLAS FOR SEGMENTATION Government Support [0001] This invention was made with government support under grant no. CA140204, awarded by the National Institutes of Health. The government has certain rights in the invention. Cross-Reference to Related Application [0002] This application claims the benefit of U.S. Provisional Patent Application No. 63/585,995, filed September 28, 2023, and entitled “PET/CT Atlas for Segmentation.” Field [0003] This disclosure relates generally to medical image segmentation. Background [0004] Accurate patient-specific dosimetry is important for maximizing the efficacy of PSMA-targeted radiopharmaceutical therapies (RPTs). It requires the segmentation of the organs-at-risk by imaging. Such segmentation is often defined manually in current clinical practice, which is time-consuming and the largest source of variability in dose calculations. Deep learning has been used for medical image segmentation, e.g., in the context of brains. However, there has been a dearth of research on learning-based segmentation methods for dosimetry. A potential reason is that obtaining sufficient training samples for neural networks is an obstacle. Furthermore, inter-patient variability of brain anatomy is significantly less than that of the chest, abdomen, or pelvis.
Summary [0005] According to various embodiments, a machine learning method of segmenting at least one anatomical feature in an image of a patient is presented. The method includes: obtaining a patient image, where the patient image is of an anatomical portion of the patient, and where the anatomical portion includes one or more of a chest, an abdomen, or a pelvis; providing the patient image to a trained neural network, where the trained neural network was trained based on an unsupervised machine learning technique to generate a deformation field that maps a standardized anatomical atlas to an input image; obtaining a patient deformation field from the trained neural network in response to the providing, where the deformation field warps the standardized anatomical atlas to the patient image; applying the patient deformation field to a labeling of the standardized atlas, where a segmentation corresponding to the patient image is produced; and outputting the segmentation. [0006] Various optional features of the above method embodiments include the following. The patient image may include an Emission Computed Tomography (ECT) image, and the patient image may be captured after the patient is injected with a radiopharmaceutical. The segmentation may include a tumor segmentation. The radiopharmaceutical may include a Prostate-Specific Membrane Antigen (PSMA). The trained neural network may include a transformer architecture. The trained neural network may have been trained using a loss function that includes a deformation field smoothness measure and a similarity measure. The loss function may further include an average measure of magnitude of a deformation field over a training dataset. The trained neural network may have been further trained using a dice function for labeled training images. The method may further include providing an uncertainty map for the
segmentation. The uncertainty map may be based on at least one of epistemic uncertainty or aleatoric uncertainty. [0007] According to various system embodiments, a machine learning system for segmenting at least one anatomical feature in an image of a patient is presented. The system includes: an electronic processor; and a non-transitory computer readable medium including instructions that, when executed by the electronic processor, configure the electronic processor to perform actions including: obtaining a patient image, where the patient image is of an anatomical portion of the patient, and where the anatomical portion includes one or more of a chest, an abdomen, or a pelvis, providing the patient image to a trained neural network, where the trained neural network was trained based on an unsupervised machine learning technique to generate a deformation field that maps a standardized anatomical atlas to an input image, obtaining a patient deformation field from the trained neural network in response to the providing, where the deformation field warps the standardized anatomical atlas to the patient image, applying the patient deformation field to a labeling of the standardized atlas, where a segmentation corresponding to the patient image is produced, and outputting the segmentation. [0008] Various optional features of the above system embodiments include the following. The patient image may include an Emission Computed Tomography (ECT) image, and the patient image may be captured after the patient is injected with a radiopharmaceutical. The segmentation may include a tumor segmentation. The radiopharmaceutical may include a Prostate-Specific Membrane Antigen (PSMA). The trained neural network may include a transformer architecture. The trained neural network may have been trained using a loss function that includes a deformation field smoothness measure and a similarity measure. The loss function may further include
an average measure of magnitude of a deformation field over a training dataset. The trained neural network may have been further trained using a dice function for labeled training images. The actions may further include providing an uncertainty map for the segmentation. The uncertainty map may be based on at least one of epistemic uncertainty or aleatoric uncertainty. [0009] Combinations (including multiple dependent combinations) of the above-described elements and those within the specification have been contemplated by the inventors and may be made, except where otherwise indicated or where contradictory. Brief Description of the Drawings [0010] Various features of the examples can be more fully appreciated, as the same become better understood with reference to the following detailed description of the examples when considered in connection with the accompanying figures, in which: [0011] Fig.1 is a schematic diagram representing training a machine learning system for segmenting at least one anatomical feature in an image of a patient, according to various embodiments; [0012] Figs. 2A and 2B illustrate an example hybrid transformer and convolutional neural network architecture 200 according to various embodiments; [0013] Fig. 3 is a schematic diagram representing using a trained machine learning system to segment at least one anatomical feature in an image of a patient, according to various embodiments; [0014] Fig. 4 depicts layers of a constructed CT atlas, a deformation field, a warped atlas, a patient CT scan, a warped labeled atlas, and segmentation uncertainty, according to a reduction to practice;
[0015] Fig.5 depicts layers of a constructed PET atlas, a deformation field, a warped atlas, a patient PET scan, a warped labeled atlas, and segmentation uncertainty, according to a reduction to practice; [0016] Fig.6 illustrates an organ segmentation of a patient PSMA PET scan, according to the reduction to practice of Fig.5; and [0017] Fig. 7 is a flow chart illustrating a method of segmenting at least one anatomical feature in an image of a patient, according to various embodiments. Description of the Examples [0018] Reference will now be made in detail to example implementations, illustrated in the accompanying drawings. Wherever convenient, the same reference numbers will be used throughout the drawings to refer to the same or like parts. In the following description, reference is made to the accompanying drawings that form a part thereof, and in which is shown by way of illustration specific exemplary examples in which the invention may be practiced. These examples are described in sufficient detail to enable those skilled in the art to practice the invention and it is to be understood that other examples may be utilized and that changes may be made without departing from the scope of the invention. The following description is, therefore, merely exemplary. [0019] Various embodiments provide an unsupervised machine learning technique for organ segmentation based on atlas-image registration. Various embodiments may be used to segment organs shown in three-dimensional scans, such as, by way of non-limiting example, PET/CT images. Various embodiments may be used to provide patient-specific dosimetry for radiopharmaceutical therapies (RPTs), e.g., for PSMA-targeted RPTs. According to various embodiments, there is no need
for an annotated training dataset, because various embodiments utilize an unsupervised machine learning system. Various embodiments provide fast and reliable segmentation results akin to those of methods trained with an accurately annotated labeled training dataset. Some embodiments solve the problem of prior machine learning techniques, which require a large number of annotated labeled training images, by using an unsupervised machine learning system to generate a deformation field that maps a standardized anatomical atlas to an input image, which does not require any annotated labeled training images. Further, the use of a standardized atlas according to some embodiments solves the problem presented by inter-patient variability for anatomical features such as the chest, abdomen, or pelvis. The use of such a standardized atlas allows for segmentation of such anatomical features in the presence of such variability. [0020] These and other features and advantages are shown and described herein in reference to the figures. [0021] Fig. 1 is a schematic diagram 100 representing training a machine learning system for segmenting at least one anatomical feature in an image of a patient, according to various embodiments. In particular, the diagram 100 shows a technique for generating an atlas 106 and deep neural network registration model 108 using a training image data set 102. [0022] In general, the atlas 106 is a standardized anatomical reference image that can be geometrically deformed to fit the anatomical variations among patients. By way of non-limiting examples, embodiments are described herein in reference to PSMA PET and CT images; however, embodiments are not limited to radiopharmacology in general or PSMA radiopharmacology in particular, nor are embodiments limited to PET or CT images. In general, patient images, such as the
images in the training image data set 102, are whole-body images, e.g., depicting one or more of a chest, an abdomen, and/or a pelvis. Because patient images according to various embodiments have moderate anatomical variations, such images, e.g., the training image data set 102, may be used as training images to construct the atlas 106, as shown and described presently. The atlas 106 may be constructed in a training stage by maximizing a posterior, which may be represented as follows, by way of non-limiting example:

$$\log p(x \mid a;\, \phi) + \log p(u) \tag{1}$$

[0023] In Equation (1), $x$ and $a$ are the image dataset 102 and the atlas 106, and $\phi = \mathrm{Id} + u$ denotes the deformation fields 110 that warp the patient images to the atlas $a$ 106. The prior over the deformation fields, $p(u)$, is modeled by multivariate normal distributions with zero mean and covariance $\Sigma$, and the Gaussian likelihood is used for $p(x \mid a; \phi)$. Letting $\Sigma^{-1}$ be the Laplacian of a neighborhood graph defined on the image grid, the registration network be $f$, and its parameters be $\theta$, the loss function 114 is minimized to train the deep neural network registration model 108 and construct the atlas 106. The loss function $\mathcal{L}$ may be represented as follows, by way of non-limiting example:

$$\mathcal{L}(\theta, a) = \frac{1}{\sigma^{2}} \sum_{i} \left\lVert x_{i} \circ \phi_{i} - a \right\rVert^{2} + \lambda \sum_{i} \lVert u_{i} \rVert^{2} + \lambda_{d} \sum_{i} \lVert \nabla u_{i} \rVert^{2} + \lambda_{a} \lVert \bar{u} \rVert^{2} \tag{2}$$

[0024] In Equation (2), $u_i = f_{\theta}(x_i, a)$; $\lambda$, $\sigma$, $\lambda_d$, and $\lambda_a$ are user-defined parameters; and $\bar{u}$ is the mean displacement across all data. The Gaussian likelihood ensures the similarity matching between the deformed atlas 112 and the individual patient image, while the prior ensures the atlas 106 is unbiased and encourages the spatial smoothness of the deformation.
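The loss terms described in paragraph [0024] can be illustrated with a schematic NumPy sketch. This is a simplified two-dimensional, non-limiting illustration; the function name, array shapes, default weights, and the finite-difference approximation of the smoothness term are assumptions made for illustration, not an exact implementation.

```python
import numpy as np

def atlas_loss(warped_atlas, images, disp_fields,
               sigma=1.0, lam=0.01, lam_d=0.1, lam_a=0.1):
    """Schematic, 2-D version of the atlas-construction loss.

    warped_atlas: (N, H, W) atlas deformed toward each training image
    images:       (N, H, W) training images x_i
    disp_fields:  (N, 2, H, W) displacement fields u_i
    """
    # Gaussian-likelihood similarity between warped atlas and images.
    sim = np.sum((warped_atlas - images) ** 2) / sigma ** 2
    # Smoothness: finite-difference gradient penalty on each u_i.
    dy = np.diff(disp_fields, axis=2)
    dx = np.diff(disp_fields, axis=3)
    smooth = lam_d * (np.sum(dy ** 2) + np.sum(dx ** 2))
    # Magnitude penalty on the individual displacements.
    mag = lam * np.sum(disp_fields ** 2)
    # Unbiasedness: penalize the mean displacement over the dataset.
    unbiased = lam_a * np.sum(disp_fields.mean(axis=0) ** 2)
    return sim + smooth + mag + unbiased
```

With identical images and zero displacement fields the loss vanishes; with equal-and-opposite constant displacement fields only the individual-magnitude penalty remains, because the dataset-mean term is zero, illustrating how the mean-displacement term keeps the atlas unbiased without forbidding individual deformations.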
[0025] The loss function described herein in reference to Equation (2) is non-limiting. In general, for various embodiments, a loss function may be used that includes any, or any combination, of the following: (a) a smoothness measure of the deformation field, (b) a measure of similarity between the patient image and the atlas after the atlas has been warped according to the deformation field, or (c) a measure of the average (mean) magnitude of the deformation field over the entire training image data set (e.g., the sum of each pixel’s movement). Note that (c) can be used to ensure that the atlas is unbiased by serving as a regularizer, which prevents overfitting to any particular image in the training image data set. According to some embodiments, if labeled (e.g., manually annotated) training images are available, the loss function may incorporate a measure of difference between the atlas as warped by the deformation field and the segmentation of the labeled training image. Such a measure may include a Dice function, for example. [0026] An off-the-shelf affine registration model 104 is used to align the images in the training data set 102. [0027] By way of non-limiting example, the TransMorph registration model may be used for the deep neural network registration model 108, f. An illustration of the TransMorph architecture is shown and described herein in reference to Figs.2A and 2B. During the training phase, the parameters of the deep neural network registration model 108 and the atlas 106 are jointly optimized using, by way of non-limiting example, the Adam optimization algorithm. Once trained, the deep neural network registration model 108 can input a patient image and an atlas and output a deformation field that warps the atlas to the patient image. In more detail, in the inference phase, the segmentation of the organs for a specific patient is achieved through a deformable
registration process that warps the atlas 106 to the patient scans, as shown and described in reference to Fig.3. [0028] Figs. 2A and 2B illustrate an example hybrid transformer and convolutional neural network architecture 200 according to various embodiments. In general, and according to various embodiments, the deep neural network registration model 108 of Fig.1 may include a transformer architecture, such as is included in the architecture 200 of Figs.2A and 2B. In contrast to a convolutional neural network, for example, a transformer deploys a self-attention mechanism to determine which parts of the input sequence (e.g., an image) are essential based on contextual information. As shown in Figs. 2A and 2B, and by way of non-limiting example, the specific architecture 200 is a hybrid Transformer-ConvNet model for volumetric medical image registration. For TransMorph, the Swin Transformer is employed as the encoder to capture the spatial correspondence between the input moving and fixed images. Then, a ConvNet decoder processes the information provided by the transformer encoder into a dense displacement field. Long skip connections are then deployed to maintain the flow of localization information between the encoder and decoder stages. According to various embodiments, a diffeomorphic variant, which ensures topology-preserving deformations, a Bayesian variant, which produces a well-calibrated registration uncertainty estimate, or a different variant, may be used. [0029] Fig.3 is a schematic diagram 300 representing using a trained machine learning system to segment at least one anatomical feature in an image of a patient, according to various embodiments. The trained machine learning system may include a machine learning system trained as shown and described herein in reference to Fig.1. For example, the trained deep learning registration model 310 may be the deep neural network registration model 108 of Fig.1 after having undergone training. The
constructed atlas 304 may be a standardized anatomical reference image, e.g., the atlas 106, constructed together with the deep neural network registration model 108 as shown and described herein in reference to Fig.1. The constructed atlas 304 is labeled to segment the organs and other tissue of interest, resulting in the labeled constructed atlas 302. This may be done by hand, for example; however, note that no labeling of the images in the training image data set is required according to various embodiments. [0030] For segmenting a new patient image 306, the new patient image 306 is aligned, e.g., using the affine registration model 104 of Fig. 1, to obtain an affine registered patient image 308. The affine registered patient image 308 and the constructed atlas 304 are passed to the trained deep learning registration model 310. The trained deep learning registration model 310 accepts these data and outputs a deformation field 312. The deformation field 312 is applied to the labeled constructed atlas 302, which results in a patient image segmentation 314. Thus, the patient image segmentation 314 constitutes a segmentation of the new patient image 306. [0031] The trained machine learning system may further output an uncertainty map, illustrating regions of uncertainty in the patient image segmentation 314. The uncertainty map may represent epistemic segmentation uncertainty, representing the uncertainty of the trained deep learning registration model 310 in predicting the deformation map, aleatoric segmentation uncertainty, representing segmentation uncertainty arising from the natural stochasticity of observations, or a different segmentation uncertainty. For example, the trained machine learning system may include an uncertainty deep neural network that is conditioned on the appearance differences between a warped and fixed image to estimate the uncertainty in propagating the anatomical labels. Such an uncertainty deep neural network may
estimate the aleatoric segmentation uncertainty without necessitating an actual anatomical label map at test time. According to some embodiments, if anatomical labels are provided, the epistemic segmentation uncertainty may be estimated. [0032] Fig.4 depicts layers 400 of a constructed CT atlas 402, a deformation field 404, a warped atlas 406, a patient CT scan 408, a warped labeled atlas 410, and segmentation uncertainty 412, according to a reduction to practice. Using the reduction to practice, the atlas 402 was constructed from 275 whole-body PSMA CT scans as shown and described herein in reference to Fig.1. Seven organs in the constructed CT atlas 402 were manually delineated, including bone, lungs, kidneys, liver, and spleen, resulting in a labeled constructed CT atlas. The constructed CT atlas 402 and the patient CT scan 408 were input to the reduction to practice as shown and described in reference to Fig.3, and the deformation field 404 was output. The deformation field 404, when applied to the constructed CT atlas 402, deformed the constructed CT atlas 402 to produce the warped atlas 406, which matched the patient CT scan 408. The deformation field 404 was applied to the labeled constructed CT atlas, resulting in the warped labeled atlas 410, which provided a segmentation of the patient CT scan 408. Finally, a segmentation uncertainty 412 was generated, illustrating uncertain regions of the warped labeled atlas 410. [0033] Because gold-standard organ delineations were not available for the PSMA dataset, the reduction to practice was quantitatively evaluated on an independent CT dataset that has 50 Chest-Abdomen-Pelvis (CAP) scans with gold-standard manual delineations of 11 organs. Note that no ground truth labeled images were used throughout training. Yet, the reduction to practice achieved visually accurate segmentation results, with an average run time < 5 s.
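The step of applying a deformation field to a labeled atlas, as described in paragraph [0032], can be sketched in NumPy. This is a two-dimensional, nearest-neighbor resampling sketch; the function name and array conventions are illustrative assumptions, and a practical implementation would use a three-dimensional interpolator.

```python
import numpy as np

def warp_labels(labels, disp):
    """Warp an integer label map by a displacement field.

    labels: (H, W) integer organ labels on the atlas grid
    disp:   (2, H, W) displacement (dy, dx) per output voxel
    Nearest-neighbor lookup keeps the organ labels integer-valued.
    """
    h, w = labels.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # For each output voxel, read the atlas label at the displaced location.
    src_y = np.clip(np.round(yy + disp[0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xx + disp[1]).astype(int), 0, w - 1)
    return labels[src_y, src_x]
```

Nearest-neighbor lookup is used rather than linear interpolation so that no spurious fractional label values are introduced at organ boundaries.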
On the CAP CT dataset, the reduction to practice attained promising accuracy, with eight out of eleven
organs achieving average Dice scores larger than 0.8. The quantitative results further evidence the capability of the reduction to practice for organ segmentation in medical images. [0034] Fig.5 depicts layers 500 of a constructed PET atlas 502, a deformation field 504, a warped atlas 506, a patient PET scan 508, a warped labeled atlas 510, and segmentation uncertainty 512, according to a reduction to practice. Using the reduction to practice, the PET atlas 502 was constructed from 275 whole-body PSMA PET scans as shown and described herein in reference to Fig.1. Eight organs on the constructed PET atlas 502 were manually delineated, including the bladder, bowel, kidneys, liver, spleen, lacrimal glands, parotid glands, and submandibular glands, resulting in a labeled constructed PET atlas. The constructed PET atlas 502 and the patient PET scan 508 were input to the reduction to practice as shown and described in reference to Fig.3, and the deformation field 504 was output. The deformation field 504, when applied to the constructed PET atlas 502, warped the constructed PET atlas 502 to produce the warped atlas 506, which matched the patient PET scan 508. The deformation field 504 was applied to the labeled constructed PET atlas, resulting in the warped labeled atlas 510, which provided a segmentation of the patient PET scan 508. Finally, a segmentation uncertainty 512 was generated, illustrating uncertain regions of the warped labeled atlas 510. [0035] Fig. 6 illustrates an organ segmentation 600 of a patient PSMA PET scan, according to the reduction to practice of Fig.5. In particular, Fig.6 illustrates example layers of the organ segmentation 600 for the bladder, bowel, kidneys, liver, spleen, lacrimal glands, parotid glands, and submandibular glands. Thus, the segmentation illustrates qualitative results of the PSMA PET segmentation of Fig.5.
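The Dice scores reported for the CAP CT evaluation above measure volumetric overlap between a predicted and a gold-standard organ mask. A minimal sketch follows; the function name and the convention of returning 1.0 when a label is absent from both masks are assumptions for illustration.

```python
import numpy as np

def dice_score(pred, gold, label):
    """Dice coefficient for one organ label: 2|A∩B| / (|A| + |B|)."""
    a = (pred == label)
    b = (gold == label)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # label absent from both masks: treat as perfect
    return 2.0 * np.logical_and(a, b).sum() / denom
```

A score of 1.0 indicates perfect overlap and 0.0 indicates none, so the reported per-organ averages above 0.8 correspond to substantial agreement with the manual delineations.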
[0036] Fig.7 is a flow chart illustrating a method 700 of segmenting at least one anatomical feature in an image of a patient, according to various embodiments. The method 700 may be implemented as shown and described in reference to Fig.3, using a system trained as shown and described in reference to Fig.1, for example. [0037] At 702, the method 700 includes obtaining a patient image. The patient image may be a three-dimensional image, such as a CT or PET scan, for example. The patient image may be of an anatomical portion of the patient, including one or more of a chest, an abdomen, and/or a pelvis of the patient. [0038] At 704, the method 700 includes providing the patient image to a trained neural network. The trained neural network may have been trained based on an unsupervised machine learning technique to generate a deformation field that maps a standardized anatomical atlas to an input image, e.g., as shown and described herein in reference to Fig.3. [0039] At 706, the method 700 includes obtaining a patient deformation field from the trained neural network in response to the providing. Within the context of PET imaging and radiopharmaceutical therapy, this deformation field preserves the overall quantification of voxel values even though individual voxel values may be altered during the registration process. For example, the deformation field may be output from the trained neural network in response to inputting the patient image and a standardized anatomical atlas, e.g., as shown and described herein in reference to Fig.3. The anatomical atlas may also be input at 706, or may have previously been input to the trained neural network, e.g., as a fixed or selectable parameter. The deformation field may map the standardized anatomical atlas to the patient image, such that the deformation field warps the atlas to the patient image.
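The data flow of the method 700 can be sketched end-to-end. Because the trained model weights are not part of this description, the network is replaced here by a hypothetical `predict_field` stub that returns the zero (identity) field; all names in this sketch are illustrative assumptions meant only to show how the pieces connect.

```python
import numpy as np

def predict_field(atlas, patient_image):
    """Stand-in for the trained registration network.

    A real model would predict a learned displacement field warping the
    atlas to the patient image; this stub returns the zero field.
    """
    return np.zeros((2,) + patient_image.shape)

def segment(patient_image, atlas, atlas_labels):
    # Obtain the patient deformation field from the network.
    disp = predict_field(atlas, patient_image)
    # Apply the field to the labeled atlas; nearest-neighbor lookup
    # keeps the organ labels integer-valued.
    h, w = atlas_labels.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    sy = np.clip(np.round(yy + disp[0]).astype(int), 0, h - 1)
    sx = np.clip(np.round(xx + disp[1]).astype(int), 0, w - 1)
    # Output the warped label map as the segmentation.
    return atlas_labels[sy, sx]
```

With the identity-field stub, the output segmentation is simply the atlas labeling; substituting a trained registration model would deform the labels toward the patient anatomy.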
[0040] At 708, the method 700 includes applying the patient deformation field to a labeling of the standardized atlas, e.g., as shown and described herein in reference to Fig. 3. The application of the deformation field to the labeling of the standardized atlas warps the standardized atlas to the patient image, producing a segmentation corresponding to the patient image, e.g., as shown and described in reference to Fig.3. [0041] At 710, the method 700 includes outputting the segmentation. The segmentation may be output in any of a variety of ways. For example, the segmentation may be output by displaying it on a monitor, e.g., to a care provider. The segmentation may be output by delivery to an electronic health care system, e.g., over a network. The segmentation may be output to a different computing system, such as a radiology system. [0042] Subsequent to 710, the method 700 may include any of a variety of actions. For example, the method 700 may output an uncertainty map, as shown and described herein in reference to Figs. 4 and 5. Such an uncertainty map may be consulted to assess the reliability of the segmentation in any additional use case. As another example, the method 700 may include treating the patient, or altering treatment of the patient, based on the segmentation. The method 700 may include selecting or adjusting the dose of a radiopharmaceutical, such as a PSMA radiopharmaceutical, for example. For example, the method 700 may include ensuring that the dose arrives at a targeted region or anatomical feature, such as a tumor, in the correct quantity. In such a use case, a PET scan may be used, which shows the presence of the radiopharmaceutical (or a tracer), and the segmentation may be used to determine a quantity of the radiopharmaceutical (or tracer) in any anatomical feature, such as a particular organ, tissue, or tumor. As another example, the method 700 may be used to determine
pharmacokinetics. As yet another example, the method 700 may be used with a digital phantom to generate a training corpus for a machine learning application. [0043] Although the invention is described herein in reference to PET and CT scans, embodiments are not so limited. For example, embodiments may be used to segment any Emission Computed Tomography (ECT) image. As another example, embodiments may be used to segment Magnetic Resonance Imaging (MRI) images. As yet another example, embodiments may be used to segment any ultrasound image. [0044] Certain examples can be performed using a computer program or set of programs. The computer programs can exist in a variety of forms, both active and inactive. For example, the computer programs can exist as software program(s) comprised of program instructions in source code, object code, executable code or other formats; firmware program(s), or hardware description language (HDL) files. Any of the above can be embodied on a transitory or non-transitory computer readable medium, which includes storage devices and signals, in compressed or uncompressed form. Exemplary computer readable storage devices include conventional computer system RAM (random access memory), ROM (read-only memory), EPROM (erasable, programmable ROM), EEPROM (electrically erasable, programmable ROM), flash memory, and magnetic or optical disks or tapes. [0045] Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented using computer readable program instructions that are executed by an electronic processor.
[0046] These computer readable program instructions may be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the electronic processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks. [0047] In embodiments, the computer readable program instructions may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state- setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the C programming language or similar programming languages. The computer readable program instructions may execute entirely on a user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. [0048] As used herein, the terms “A or B” and “A and/or B” are intended to encompass A, B, or {A and B}. Further, the terms “A, B, or C” and “A, B, and/or C” are
intended to encompass single items, pairs of items, or all items, that is, all of: A, B, C, {A and B}, {A and C}, {B and C}, and {A and B and C}. The term “or” as used herein means “and/or.” [0049] As used herein, language such as “at least one of X, Y, and Z,” “at least one of X, Y, or Z,” “at least one or more of X, Y, and Z,” “at least one or more of X, Y, or Z,” “at least one or more of X, Y, and/or Z,” or “at least one of X, Y, and/or Z,” is intended to be inclusive of both a single item (e.g., just X, or just Y, or just Z) and multiple items (e.g., {X and Y}, {X and Z}, {Y and Z}, or {X, Y, and Z}). The phrase “at least one of” and similar phrases are not intended to convey a requirement that each possible item must be present, although each possible item may be present. [0050] The techniques presented and claimed herein are referenced and applied to material objects and concrete examples of a practical nature that demonstrably improve the present technical field and, as such, are not abstract, intangible or purely theoretical. Further, if any claims appended to the end of this specification contain one or more elements designated as “means for [perform]ing [a function]…” or “step for [perform]ing [a function]…”, it is intended that such elements are to be interpreted under 35 U.S.C. § 112(f). However, for any claims containing elements designated in any other manner, it is intended that such elements are not to be interpreted under 35 U.S.C. § 112(f). [0051] While the invention has been described with reference to the exemplary examples thereof, those skilled in the art will be able to make various modifications to the described examples without departing from the true spirit and scope. The terms and descriptions used herein are set forth by way of illustration only and are not meant as limitations. In particular, although the method has been described by examples, the steps of the method can be performed in a different order than illustrated or
simultaneously. Those skilled in the art will recognize that these and other variations are possible within the spirit and scope as defined in the following claims and their equivalents.
Claims
What is claimed is: 1. A machine learning method of segmenting at least one anatomical feature in an image of a patient, the method comprising: obtaining a patient image, wherein the patient image is of an anatomical portion of the patient, and wherein the anatomical portion comprises one or more of a chest, an abdomen, or a pelvis; providing the patient image to a trained neural network, wherein the trained neural network was trained based on an unsupervised machine learning technique to generate a deformation field that maps a standardized anatomical atlas to an input image; obtaining a patient deformation field from the trained neural network in response to the providing, wherein the deformation field warps the standardized anatomical atlas to the patient image; applying the patient deformation field to a labeling of the standardized atlas, wherein a segmentation corresponding to the patient image is produced; and outputting the segmentation.
2. The method of claim 1, wherein the patient image comprises an Emission Computed Tomography (ECT) image, and wherein the patient image is captured after the patient is injected with a radiopharmaceutical.
3. The method of claim 2, wherein the segmentation comprises a tumor segmentation.
4. The method of claim 2, wherein the radiopharmaceutical comprises a Prostate-Specific Membrane Antigen (PSMA).
5. The method of claim 1, wherein the trained neural network comprises a transformer architecture.
6. The method of claim 1, wherein the trained neural network was trained using a loss function that comprises a deformation field smoothness measure and a similarity measure.
7. The method of claim 6, wherein the loss function further comprises an average measure of magnitude of a deformation field over a training dataset.
8. The method of claim 1, wherein the trained neural network was further trained using a dice function for labeled training images.
9. The method of claim 1, further comprising providing an uncertainty map for the segmentation.
10. The method of claim 9, wherein the uncertainty map is based on at least one of epistemic uncertainty or aleatoric uncertainty.
11. A machine learning system for segmenting at least one anatomical feature in an image of a patient, the system comprising: an electronic processor; and a non-transitory computer readable medium comprising instructions that, when
executed by the electronic processor, configure the electronic processor to perform actions comprising: obtaining a patient image, wherein the patient image is of an anatomical portion of the patient, and wherein the anatomical portion comprises one or more of a chest, an abdomen, or a pelvis, providing the patient image to a trained neural network, wherein the trained neural network was trained based on an unsupervised machine learning technique to generate a deformation field that maps a standardized anatomical atlas to an input image, obtaining a patient deformation field from the trained neural network in response to the providing, wherein the deformation field warps the standardized anatomical atlas to the patient image, applying the patient deformation field to a labeling of the standardized atlas, wherein a segmentation corresponding to the patient image is produced, and outputting the segmentation.
12. The system of claim 11, wherein the patient image comprises an Emission Computed Tomography (ECT) image, and wherein the patient image is captured after the patient is injected with a radiopharmaceutical.
13. The system of claim 12, wherein the segmentation comprises a tumor segmentation.
14. The system of claim 12, wherein the radiopharmaceutical comprises a Prostate-Specific Membrane Antigen (PSMA).
15. The system of claim 11, wherein the trained neural network comprises a transformer architecture.
16. The system of claim 11, wherein the trained neural network was trained using a loss function that comprises a deformation field smoothness measure and a similarity measure.
17. The system of claim 16, wherein the loss function further comprises an average measure of magnitude of a deformation field over a training dataset.
18. The system of claim 11, wherein the trained neural network was further trained using a dice function for labeled training images.
19. The system of claim 11, wherein the actions further comprise providing an uncertainty map for the segmentation.
20. The system of claim 19, wherein the uncertainty map is based on at least one of epistemic uncertainty or aleatoric uncertainty.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202363585995P | 2023-09-28 | 2023-09-28 | |
| US63/585,995 | 2023-09-28 | 2023-09-28 | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025071977A1 (en) | 2025-04-03 |
Family
ID=95202061
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2024/046993 (WO2025071977A1, pending) | Pet/ct atlas for segmentation | 2023-09-28 | 2024-09-17 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025071977A1 (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN120259388A (en) * | 2025-05-29 | 2025-07-04 | 南昌航空大学 | Image registration and segmentation joint optimization method, system, device and medium |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20210183484A1 (en) * | 2019-12-06 | 2021-06-17 | Surgical Safety Technologies Inc. | Hierarchical cnn-transformer based machine learning |
| US20210283279A1 (en) * | 2013-10-18 | 2021-09-16 | Deutsches Krebsforschungszentrum | Use of labeled inhibitors of prostate specific membrane antigen (psma), as agents for the treatment of prostate cancer |
| US20220007940A1 (en) * | 2017-06-20 | 2022-01-13 | Siemens Healthcare Gmbh | Deep-learnt tissue deformation for medical imaging |
| US20220114389A1 (en) * | 2020-10-09 | 2022-04-14 | GE Precision Healthcare LLC | Systems and methods of automatic medical image labeling |
| US20230135351A1 (en) * | 2021-11-02 | 2023-05-04 | GE Precision Healthcare LLC | System and methods for quantifying uncertainty of segmentation masks produced by machine learning models |
Non-Patent Citations (1)
| Title |
|---|
| GORTHI, S. ET AL.: "Active deformation fields: Dense deformation field estimation for atlas-based segmentation using the active contour framework", MEDICAL IMAGE ANALYSIS, vol. 15, no. 6, December 2011 (2011-12-01), pages 787 - 800, XP028312865, [retrieved on 2024-11-04], DOI: 10.1016/j.media.2011.05.008 * |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Zhang et al. | Context-guided fully convolutional networks for joint craniomaxillofacial bone segmentation and landmark digitization | |
| Mahapatra et al. | Joint registration and segmentation of xray images using generative adversarial networks | |
| CN110807755B (en) | Plane selection using locator images | |
| JP6567179B2 (en) | Pseudo CT generation from MR data using feature regression model | |
| US20200311932A1 (en) | Systems and Methods for Synthetic Medical Image Generation | |
| US20200146635A1 (en) | System and method for unsupervised deep learning for deformable image registration | |
| US9082169B2 (en) | Longitudinal monitoring of pathology | |
| Rao et al. | Brain tumor detection and segmentation using conditional random field | |
| US20090180675A1 (en) | System and method for image based multiple-modality cardiac image alignment | |
| Luo et al. | $\mathcal {X} $-Metric: An N-Dimensional Information-Theoretic Framework for Groupwise Registration and Deep Combined Computing | |
| CN108778416A (en) | Pseudo-CT generation from MR data using tissue parameter estimation | |
| CN102938013A (en) | Medical image processing apparatus and medical image processing method | |
| US20130188846A1 (en) | Method, system and computer readable medium for automatic segmentation of a medical image | |
| Emami et al. | Attention-guided generative adversarial network to address atypical anatomy in synthetic CT generation | |
| EP4239581A1 (en) | Generation of 3d models of anatomical structures from 2d radiographs | |
| US12106478B2 (en) | Deep learning based medical system and method for image acquisition | |
| WO2025071977A1 (en) | Pet/ct atlas for segmentation | |
| Zakirov et al. | Dental pathology detection in 3D cone-beam CT | |
| Lei et al. | Brain MRI classification based on machine learning framework with auto-context model | |
| Longuefosse et al. | Lung ct synthesis using gans with conditional normalization on registered ultrashort echo-time mri | |
| Lauritzen et al. | Evaluation of ct image synthesis methods: From atlas-based registration to deep learning | |
| Zakirov et al. | End-to-end dental pathology detection in 3D cone-beam computed tomography images | |
| US20080285822A1 (en) | Automated Stool Removal Method For Medical Imaging | |
| Zhang et al. | Shape prior-constrained deep learning network for medical image segmentation | |
| Zhu et al. | Deep learning-based automated scan plane positioning for brain magnetic resonance imaging |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24873327; Country of ref document: EP; Kind code of ref document: A1 |