WO2024112260A2 - A system for and a method of classifying adipose tissue - Google Patents
A system for and a method of classifying adipose tissue
- Publication number
- WO2024112260A2 (application PCT/SG2023/050744)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- adipose tissue
- recited
- medical image
- further configured
- segmentation module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/48—Other medical applications
- A61B5/4869—Determining body composition
- A61B5/4872—Body fat
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/30—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/40—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/60—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to nutrition control, e.g. diets
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H40/00—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
- G16H40/60—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
- G16H40/67—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/30—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/70—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B2576/00—Medical imaging apparatus involving image processing or analysis
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20076—Probabilistic image processing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Definitions
- This application relates to a system for classifying adipose tissue and a method of classifying adipose tissue.
- AAT abdominal adipose tissues
- Abdominal adipose tissue can generally be separated into two main depots: subcutaneous adipose tissue (SAT) and intra-abdominal adipose tissue (IAAT).
- SAT subcutaneous adipose tissue
- IAAT intra-abdominal adipose tissue
- SSAT superficial subcutaneous adipose tissue
- DSAT deep subcutaneous adipose tissue
- IAAT may be further separated into intraperitoneal adipose tissue (IPAT), retroperitoneal adipose tissue (RPAT), and paraspinal adipose tissue (PSAT).
- a system for classifying adipose tissue includes memory storing instructions; and a processor coupled to the memory and configured to process the stored instructions to implement: a segmentation module configured to: acquire a medical image of a subject; segment the medical image into a plurality of volumetric segments using a machine learning model, wherein each of the plurality of volumetric segments comprises a respective segmentation mask corresponding to one selected from a plurality of adipose tissue types.
- the processor is further configured to determine a metabolic outcome based on the plurality of volumetric segments in the adipose tissue.
- the segmentation module is further configured to determine a risk of gestational diabetes mellitus (GDM) based on a quantification of the plurality of volumetric segments in the adipose tissue.
- the segmentation module is further configured to determine a risk of birth of large for gestational age (LGA) offspring based on a quantification of the plurality of volumetric segments in the adipose tissue.
- GDM gestational diabetes mellitus
- LGA large for gestational age
- the segmentation module is further configured to determine a risk of a disease based on a distribution of the plurality of volumetric segments in the adipose tissue. In some embodiments, the segmentation module is further configured to determine a risk of a disease based on a quantification of the plurality of volumetric segments in the adipose tissue.
- According to another aspect, disclosed herein is a method of classifying adipose tissue. The method includes: acquiring a medical image of a subject; segmenting the medical image into a plurality of volumetric segments using a machine learning model, wherein each of the plurality of volumetric segments comprises a respective segmentation mask corresponding to one selected from a plurality of adipose tissue types.
- FIG. 1A is a schematic of a system for classifying adipose tissue according to embodiments of the present disclosure
- FIG. 1B is a schematic of a workflow of a method of classifying adipose tissue of the system of FIG. 1A;
- FIG. 2A is an example of a medical image of a subject
- FIG. 2B is an example of a segmented output of the medical image using the system of FIG. 1A;
- FIG. 2C is another example of a segmented output of the medical image using the system of FIG. 1A;
- FIG. 2D is yet another example of a segmented output of the medical image using the system of FIG. 1A;
- FIGs. 3A and 3B are illustrations of a graphical user interface of the system according to various embodiments.
- FIG. 4A is a flowchart of a method of classifying adipose tissue according to embodiments of the present disclosure
- FIG. 4B is a schematic workflow of a system for classifying adipose tissue according to embodiments of the present disclosure
- FIG. 5 is a schematic diagram of a machine learning model according to embodiments of the present disclosure.
- FIGs. 6 to 9 are schematic diagrams of another machine learning model and sublayers according to embodiments of the present disclosure.
- FIG. 10A shows an example of a medical image and segmented output of the abdomen of a normal-weight subject
- FIG. 10B shows another example of a medical image and respective segmented output of the abdomen of an overweight participant
- FIGs. 11A to 11E are Bland-Altman plots showing volumetric differences in cubic centimetres between the ground truth (GT) and the prediction (P) of model 1 for the hold-out test set for superficial subcutaneous (SSAT), deep subcutaneous (DSAT), intraperitoneal (IPAT), retroperitoneal (RPAT), and paraspinal (PSAT) adipose tissue;
- SSAT superficial subcutaneous
- DSAT deep subcutaneous
- IPAT intraperitoneal
- RPAT retroperitoneal
- PSAT paraspinal
- FIGs. 12A to 12H show example medical images and respective segmented outputs of model 1 on a hold-out test set
- the articles “a”, “an” and “the” as used with regard to a feature or element include a reference to one or more of the features or elements.
- the term “about” or “approximately” as applied to a numeric value encompasses the exact value and a reasonable variance as generally understood in the relevant technical field, e.g., within 10% of the specified value.
- modules may be implemented as circuits, logic chips or any sort of discrete component, and multiple modules may be combined into a single module or divided into sub-modules as required without departing from the disclosure. Still further, one skilled in the art will also recognize that a module may be implemented in software which may then be executed by a variety of processors. In embodiments of the disclosure, a module may also comprise computer instructions or executable codes that may instruct a computer processor to carry out a sequence of events based on instructions received. The choice of the implementation of the modules is left as a decision to a person skilled in the art and does not limit the scope of this disclosure in any way.
- machine learning model may be used to refer to any one or more of the terms “artificial intelligence model”, “neural network model”, “deep learning model”, “multi-layer perceptron model”, “ResNet model”, “back propagation model”, etc., as will be understood from the context.
- IAAT intra-abdominal adipose tissue
- IPAT drains into the portal circulation and may thereby affect hepatic energy regulation, such as increased gluconeogenesis and production of very low-density lipoproteins, whereas RPAT drains into the systemic circulation.
- SAT is heterogeneous; distinct associations of DSAT and SSAT with cardiometabolic risk factors have been shown. DSAT shares similar deleterious characteristics with IAAT and hence a close association with cardiometabolic risk factors, while SSAT is considered a protective fat storage site.
- adipose tissue infiltration into the lumbar paraspinal musculature is identified as a pathological phenotype in neuromuscular disease and may be a manifestation in patients with chronic lower back pain and symptomatic lumbar spinal stenosis.
- the present disclosure relates to a system for classifying adipose tissue and a method for classifying adipose tissue.
- the system and method may be a fully automated system/method for segmenting, quantifying, and visualizing distinct abdominal adipose tissue (AAT) depots or sub-depots from medical imaging data.
- AAT abdominal adipose tissue
- Medical imaging modalities, such as computed tomography (CT), magnetic resonance imaging (MRI), and electrical impedance tomography, may enable non-invasive imaging of tissue for specific characterization and quantification.
- the system and classification method as disclosed herein utilize medical images of a subject obtained from the medical imaging modalities, and autonomously compute and segment the abdominal adipose tissue into the respective adipose tissue types within a short duration (for example, below 20 seconds).
- the automated and standardized quantification system and method open up opportunities to better understand obesity and its physiological and pathological phenotypes in research settings, as well as enable improved and rapid health assessment in a clinical context.
- changes in AAT depots/sub-depots may be evaluated longitudinally or over a duration, in combination with lifestyle and metabolic interventions.
- large cohort studies and longitudinal studies relevant to abdominal adipose tissue may benefit enormously from utilization of the system/method as disclosed.
- the system and method of the various embodiments of the present disclosure may also be integrated with various MRI scanners/medical devices to achieve rapid results for various clinical radiological applications and wellness markets.
- the present disclosure demonstrates a method and system for comprehensive classification and quantification of various adipose tissue depots or sub-depots.
- the detailed quantification of distinct adipose tissue depots or sub-depots may enable phenotypic risk assessment, guide diagnosis, and surgery planning.
- the present disclosure is exemplary in nature and can be expanded to include analysis of neonates, children, and ageing subjects, as well as different populations such as ethnic groups.
- the technology can be easily integrated with MRI scanners and can be utilized in obesity clinics, metabolic surgeries, and lifestyle interventions.
- Some examples of clinical applications include but are not limited to: risk assessment of cardiometabolic disease in primary and secondary care; obesity/diabetes; childhood obesity; geriatric care; wellness applications (exercise/nutrition); metabolic/oncologic surgeries/cosmetic applications; and real-time surgical applications.
- the adipose tissue types segmented from the medical image may include AAT depots, such as subcutaneous adipose tissue (SAT) and intra-abdominal adipose tissue (IAAT).
- the SAT and IAAT depots may further be segmented into the respective sub-depots such as superficial subcutaneous adipose tissue (SSAT); deep subcutaneous adipose tissue (DSAT); intraperitoneal adipose tissue (IPAT); retroperitoneal adipose tissue (RPAT); and paraspinal adipose tissue (PSAT).
- SSAT superficial subcutaneous adipose tissue
- DSAT deep subcutaneous adipose tissue
- IPAT intraperitoneal adipose tissue
- RPAT retroperitoneal adipose tissue
- FIG. 1A illustrates a system 100 for classifying adipose tissue or an adipose tissue classification system, according to various embodiments of the present disclosure.
- FIG. IB illustrates a workflow of a method of classifying adipose tissue of the system 100.
- the system 100 may include memory which stores instructions, and a computational device, such as a processor coupled to the memory.
- the processor may be configured to process the stored instructions on the memory to implement a segmentation module 110.
- the segmentation module 110 may be configured to receive or acquire a medical image 210 of a subject 80 or patient from a medical imaging module or a database storing the medical images.
- medical imaging 132 of the subject 80 may be performed via an imaging modality 134 to obtain medical images 210, such as a raw image or raw image volume, which are stored in the memory of a database or data storage.
- the segmentation module 110 may be configured to acquire the medical image 210 from the database for image processing 112 or model application to produce an output.
- the segmentation module 110 may be configured to use a machine learning model 300 to segment the medical image 210 to determine a model output 114, such as a segmented output 220.
- the segmented output 220 may be one or more volumetric segments, each of the volumetric segment corresponding to a respective AAT depot or sub-depot in the medical image 210.
- each of the volumetric segment may include a respective segmentation mask, each of the segmentation mask corresponding to a respective adipose tissue type, such as a respective AAT depot or sub-depot.
- the system 100 may be embodied in the form of a workstation, a laptop, a mobile device, a network server, a PACS server, a cloud computing device, etc., which interfaces with or executes the machine learning model 300 of the segmentation module 110.
- the segmentation module 110 may carry out an adipose tissue classification method.
- the segmentation module 110 may be configured to segment the medical image 210 to determine or obtain a segmented output 220 or a segmented medical image.
- the model output or the segmented output 220 may be displayed via a graphical user interface 120, which provides visual and/or textual representations of the segmented output 220.
- clinical inferences 140 or clinical analysis may be performed by a medical practitioner.
- the clinical inferences or analysis may include a diagnostic report, health assessments, surgery planning, intervention planning, personalized medicine, clinical analysis, etc.
- the system 100 may be integrated or incorporated with an image acquisition module 130, such as an MRI machine or CT machine. This enables the system 100 to be a complete one-stop solution for AAT depot/sub-depot segmentation and characterization.
- the medical image 210 may be an output from one of the following: a Magnetic Resonance Imaging (MRI) scan, a Computed Tomography (CT) scan, and an electrical impedance tomography scan.
- the medical image 210 may be a three-dimensional (3D) volumetric image, such as a 3D-Computed Tomography image.
- the medical image 210 may include a plurality of 2D medical images, thus forming one or more stacks of 2D medical images.
- the medical image 210 may include a point cloud, each point (coordinate) in the point cloud corresponding to a value obtained from a medical imaging process, such as an MRI scan.
- the segmentation module 110 segments the medical image 210 by classifying each voxel (or a predetermined unit volume) of the medical image 210 with a label which corresponds to each AAT depot or AAT sub-depot.
- the label may correspond to one of: a background; SAT; and IAAT.
- each voxel with the label of SAT may be further classified into or provided with another label corresponding to one of: SSAT and DSAT.
- each voxel with the label of IAAT may be further classified into or provided with another label corresponding to one of: IPAT; RPAT; and PSAT.
- each voxel may be directly classified with a label corresponding to one of: a background; SSAT; DSAT; IPAT; RPAT; and PSAT.
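- The hierarchical labelling described above can be sketched as a simple mapping over a labelled volume. The following Python/NumPy example is illustrative only: the integer label codes and the dummy refinement rules are assumptions of this sketch, not values from the specification.

```python
import numpy as np

# Hypothetical integer codes for the coarse and fine label sets
# (the codes themselves are illustrative, not taken from the patent).
BACKGROUND, SAT, IAAT = 0, 1, 2
SSAT, DSAT, IPAT, RPAT, PSAT = 10, 11, 20, 21, 22

def refine_labels(coarse, sat_refiner, iaat_refiner):
    """Refine a coarse per-voxel labelling (background/SAT/IAAT) into
    sub-depot labels using two caller-supplied refinement functions."""
    fine = np.zeros_like(coarse)
    sat_mask = coarse == SAT
    iaat_mask = coarse == IAAT
    # Each refiner returns one sub-depot label per voxel of its depot.
    fine[sat_mask] = sat_refiner(sat_mask)        # SSAT or DSAT
    fine[iaat_mask] = iaat_refiner(iaat_mask)     # IPAT, RPAT or PSAT
    return fine

# Toy example: a 4x4x4 volume with dummy (random) refinement rules.
coarse = np.random.choice([BACKGROUND, SAT, IAAT], size=(4, 4, 4))
fine = refine_labels(
    coarse,
    sat_refiner=lambda m: np.where(np.random.rand(m.sum()) < 0.5, SSAT, DSAT),
    iaat_refiner=lambda m: np.random.choice([IPAT, RPAT, PSAT], size=m.sum()),
)
```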
- the segmented output 220 may include the medical image 210 overlaid with one or more segmentation masks 221/222/223/224/225 to obtain a visualization output.
- FIG. 2A illustrates the medical image 210
- FIG. 2B illustrates the visualization output which includes the medical image 210 overlaid with the segmentation masks 221/222/223/224/225.
- each segmentation mask 221/222/223/224/225 may correspond to a respective AAT sub-depot.
- segmentation mask 221 corresponds to the SSAT
- segmentation mask 222 corresponds to DSAT
- segmentation mask 223 corresponds to IPAT
- segmentation mask 224 corresponds to RPAT
- segmentation mask 225 corresponds to PSAT.
- each segmentation mask is provided with at least one predetermined visually-distinguishable characteristic.
- each segmentation mask may be represented by a unique hatching pattern corresponding to area/volume of each respective AAT sub-depot. Therefore, the segmented output 220 may include segmentation masks of different hatching patterns overlaid onto the medical image 210 for easy visualization.
- referring to FIG. 2C, each segmentation mask may be represented by a unique colour or shade of colour corresponding to the area/volume of each respective AAT sub-depot. Therefore, the segmented output 220 may include segmentation masks of different colours overlaid onto the medical image 210.
- referring to FIG. 2D, each segmentation mask may be represented by an edge or contour corresponding to the area/volume of each respective AAT sub-depot.
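- As an illustration of these overlay visualizations, the sketch below paints per-sub-depot masks over a 2D grayscale slice in distinct colours using Matplotlib; the colour assignments, opacity, and label codes are assumptions of the example rather than choices made in the specification.

```python
import numpy as np
import matplotlib.pyplot as plt

def overlay_slice(image_slice, label_slice, colours, alpha=0.4):
    """Overlay coloured segmentation masks on a grayscale image slice.

    `colours` maps integer sub-depot labels to RGB tuples in [0, 1];
    background (label 0) stays fully transparent."""
    rgba = np.zeros(image_slice.shape + (4,))
    for label, (r, g, b) in colours.items():
        rgba[label_slice == label] = (r, g, b, alpha)
    plt.imshow(image_slice, cmap="gray")
    plt.imshow(rgba)            # coloured, semi-transparent masks on top
    plt.axis("off")
    plt.show()

# Example with dummy data and arbitrary label-to-colour assignments.
img = np.random.rand(64, 64)
lbl = np.random.randint(0, 6, size=(64, 64))   # 0 = background, 1..5 = sub-depots
overlay_slice(img, lbl, colours={1: (1, 0, 0), 2: (0, 1, 0), 3: (0, 0, 1),
                                 4: (1, 1, 0), 5: (1, 0, 1)})
```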
- the segmented output 220 may only include one or more segmentation masks without being overlaid on the medical image 210.
- each of the segmentation masks may be rendered as a volumetric segment corresponding to each AAT sub-depot.
- each segmentation mask may be a three-dimensional volume. Therefore, the segmented output may include multiple segmentation masks, each being a three-dimensional volume, overlaid onto a three-dimensional medical image, to obtain a three-dimensional segmented output 220.
- the segmented output 220 may include data representing or corresponding to segmentation of the medical image 210 into one or more AAT depots or sub-depots.
- the segmented output 220 may be a two-dimensional surface field or a three-dimensional point cloud which includes values corresponding to each AAT sub-depot.
- the segmented output 220 may include quantitative data, such as the respective tissue-specific volumes, corresponding to each of the AAT depots or sub-depots in the medical image 210. Therefore, the segmented output 220 may include multiple volumetric values, each indicative of a specific volume corresponding to each of the AAT sub-depots.
- a graphical user interface 120 may be provided to display or to present the segmented output 220 to enable a convenient utilization for a user.
- the graphical user interface 120 may enable the unprocessed three-dimensional (3D) volume medical image 210 to be loaded.
- the user may be able to toggle between the medical image 210 (as shown in FIG. 3A) and the segmented output 220 (as shown in FIG. 3B) for better visualization.
- the segmented output 220 may be displayed as a visualization output including multiple two-dimensional (2D) images or in other words, multiple slices of 2D images forming the three-dimensional segmented output 220.
- Each of the plurality of 2D images may be representative of a cross section of the three-dimensional segmented output 220.
- the multiple 2D images corresponding to sections or cross-sections of the three-dimensional segmented output 220 may be displayed via the graphical user interface 120.
- the 2D images may be moveable or selectable relative to the three-dimensional segmented output 220.
- the graphical user interface 120 may allow a user to view each individual slice of the medical image 210 by sliding through the 3D-volume of the medical image 210.
- the user may choose to selectively apply the segmentation model to the volume or a defined region of interest, using appropriate action buttons in the menu bar.
- the GUI may then apply the developed algorithm to the image volume.
- the program will compute the volumes for each fat depot by multiplying the number of labelled voxels of the respective fat depot with the voxel resolution. Quantified volumes will then be displayed within the interface.
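- A minimal sketch of this volume computation is shown below; the label codes, voxel size, and function names are illustrative assumptions, while the calculation itself (labelled voxel count multiplied by the voxel resolution) follows the description above.

```python
import numpy as np

def depot_volumes_cm3(label_volume, voxel_size_mm, depot_labels):
    """Compute the volume of each fat depot in cubic centimetres by counting
    labelled voxels and multiplying by the voxel resolution."""
    voxel_cm3 = np.prod(voxel_size_mm) / 1000.0   # mm^3 -> cm^3
    return {name: float(np.count_nonzero(label_volume == code) * voxel_cm3)
            for name, code in depot_labels.items()}

labels = np.random.randint(0, 6, size=(40, 256, 224))        # dummy segmentation
volumes = depot_volumes_cm3(labels, voxel_size_mm=(3.0, 1.5, 1.5),
                            depot_labels={"SSAT": 1, "DSAT": 2, "IPAT": 3,
                                          "RPAT": 4, "PSAT": 5})
print(volumes)
```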
- the segmentation mask will be overlaid on the raw image highlighting the segmented areas by assigning distinct label colours to the unique fat depots.
- the graphical user interface 120 additionally enables visualization options, to investigate the individual adipose tissue depots, such as zooming options, adjustment of the opacity of the overlaid segmentation masks, editing of the segmentation masks, and measurement of regions of interest. The user may then export the produced results in the desired imaging formats.
- FIG. 4A is a flowchart illustrating a method of classifying adipose tissue 400 according to various embodiments of the present disclosure.
- the method 400 includes in stage 410, acquiring a medical image of a subject.
- the medical image is acquired from a database storing the medical image.
- the medical image may be acquired from a medical imaging module.
- the method may further include, in stage 420, segmenting the medical image into a plurality of volumetric segments using a machine learning model, wherein each of the plurality of volumetric segments comprises a respective segmentation mask corresponding to one selected from a plurality of adipose tissue types.
- the method 400 may further include in stage 430, classifying each voxel of the medical image with a label.
- the label may correspond to one of: a background; a subcutaneous adipose tissue (SAT); and an intra-abdominal adipose tissue (IAAT).
- SAT subcutaneous adipose tissue
- IAAT intra-abdominal adipose tissue
- Each voxel with the label of subcutaneous adipose tissue (SAT) may be further classified into one of: a superficial subcutaneous adipose tissue (SSAT) and a deep subcutaneous adipose tissue (DSAT).
- Each voxel with the label of intra-abdominal adipose tissue may be further classified into one of: an intraperitoneal adipose tissue (IPAT); a retroperitoneal adipose tissue (RPAT); and a paraspinal adipose tissue (PSAT).
- IPAT intraperitoneal adipose tissue
- RPAT retroperitoneal adipose tissue
- PSAT paraspinal adipose tissue
- the method 400 also includes providing a respective specific volume corresponding to each of the plurality of volumetric segments.
- the method 400 may further include converting a volume of each of the plurality of volumetric segments into a respective relative volume expressed as a percentage of a combined volume of all of the plurality of volumetric segments.
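- The conversion to relative volumes can be sketched as follows, assuming absolute depot volumes such as those computed earlier (the helper name and example numbers are illustrative):

```python
def relative_volumes_percent(volumes_cm3):
    """Express each depot volume as a percentage of the combined AAT volume."""
    total = sum(volumes_cm3.values())
    return {name: 100.0 * v / total for name, v in volumes_cm3.items()}

print(relative_volumes_percent({"SSAT": 1200.0, "DSAT": 800.0, "IPAT": 600.0,
                                "RPAT": 300.0, "PSAT": 100.0}))
```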
- segmenting of the medical image may include: down-sampling the medical image for at least one down-sampling iteration to obtain an intermediate representation of the medical image; and up-sampling the intermediate representation of the medical image for at least one up-sampling iteration to obtain the plurality of volumetric segments.
- the method may include providing a skip connection from one of the at least one down-sampling iteration to a corresponding at least one up-sampling iteration.
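- A functional sketch of one down-sampling/up-sampling iteration with a skip connection is given below (PyTorch is assumed). Max pooling is used here purely for illustration; the specific architecture described later in this disclosure down-samples with strided convolutions instead.

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 32, 128, 128)          # (batch, channel, z, y, x) image volume

# One down-sampling iteration: halve the spatial resolution.
encoded = F.max_pool3d(x, kernel_size=2)      # intermediate representation

# One up-sampling iteration: restore resolution with trilinear interpolation.
decoded = F.interpolate(encoded, scale_factor=2, mode="trilinear",
                        align_corners=False)

# Skip connection: concatenate encoder features with the decoder features.
merged = torch.cat([x, decoded], dim=1)
print(merged.shape)                           # torch.Size([1, 2, 32, 128, 128])
```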
- the method 400 includes training the machine learning model with a training data, wherein the training data includes augmented data. Further, the method 400 may include normalizing an image volume of the medical image.
- the method 400 may further include in stage 440, overlaying the segmentation masks on the medical image to obtain a visualization output, wherein each segmentation mask is provided with at least one predetermined visually-distinguishable characteristic. Further, the method 400 may include displaying the visualization output as a plurality of two-dimensional (2D) images, wherein each of the plurality of 2D images is representative of a cross section of the visualization output.
- 2D two-dimensional
- FIG. 4B illustrates embodiments of an overall workflow of a method of classifying adipose tissue according to embodiments of the present disclosure.
- one or more medical images 210 may be acquired and/or received by the segmentation module 110 from a database or a data source.
- the medical images 210 may first be pre-processed by an image pre-processor 116 prior to being input to a machine learning model 300.
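- A minimal sketch of such a pre-processing step is given below, assuming simple z-score intensity normalization of the image volume; the normalization scheme itself is an assumption, as the specification only states that the image volume is normalized.

```python
import numpy as np

def normalize_volume(volume):
    """Z-score normalize a 3D image volume (zero mean, unit variance)."""
    volume = volume.astype(np.float32)
    return (volume - volume.mean()) / (volume.std() + 1e-8)

raw = np.random.rand(40, 256, 224) * 1000.0    # dummy MRI-like intensities
pre = normalize_volume(raw)
print(pre.mean(), pre.std())                   # approximately 0.0 and 1.0
```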
- the machine learning model 300 may receive the medical images 210 sequentially to classify each medical image into the various AAT depots or subdepots.
- the machine learning model 300 may receive the medical images 210 concurrently to classify each medical image into the various AAT depots or subdepots.
- upon classification, the segmentation module 110 outputs one or more segmented outputs 220.
- the segmented output 220 may include tissue specific volumes corresponding to each classification of adipose tissue. Further, the segmented output 220 may also include multiple segmentation masks, each segmentation mask corresponding to one adipose tissue type.
- the segmented output 220 may be displayed by the graphical user interface 120.
- the graphical user interface 120 may include editing tools or editing functions such as an image editor, thus allowing a user to enhance the segmented output 220 to improve the visual representation of the segmented output 220.
- the image editor may allow the user to alter or darken an interface between two AAT sub-depots or to change the colour of representation of each AAT sub-depot.
- the graphical user interface may also include visualization tools or visualization functions such as 3D visualization of the segmented output via augmented reality, or visualization functions such as 2D slicing planes, etc.
- FIG. 5 illustrates an embodiment of a machine learning model 300 or segmentation model according to various embodiments of the disclosure.
- the machine learning model 300 may be a three-dimensional (3D) deep convolutional neural network.
- the machine learning model 300 may include an input block 310; one or more down-sampling blocks 320; and one or more up-sampling blocks 330, connected in series or in sequence.
- an input received by each down-sampling block 320 is down-sampled or down-scaled and provided as a respective output to a subsequent block, such as another down-sampling block 320.
- the last of the down-sampling blocks 320 may output an intermediate representation 215 of the medical image 210 into the first of the up-sampling block 330.
- the output of the down-sampling block 320 has a lower resolution or lower data size in comparison to the input as received by the same down-sampling block 320.
- an input received by each up-sampling block 330 is up-sampled or up- scaled and provided as a respective output to a subsequent block, such as another up-sampling block 330.
- the output of the up-sampling block 330 has a higher resolution or higher data size in comparison to the input as received by the same up-sampling block 330.
- the last of the up-sampling blocks 330 may output the segmented output 220 and/or the segmentation masks.
- one or more of the down-sampling blocks 320 may provide a skip connection 325 input to a respective up-sampling block 330.
- a skip connection 325 may be provided from one or more of the down-sampling blocks 320 to a respective one or more up-sampling blocks 330.
- a skip connection 325 is provided between each pair of corresponding down-sampling block 320 and up-sampling block 330.
- a skip connection 325 is provided between the input block 310 and the last of the up-sampling blocks 330.
- skip connections 325 are selectively provided between selected pairs of down-sampling blocks 320 and up-sampling blocks 330.
- the down-sampling blocks 320 and/or the up-sampling blocks 330 may have a different block dimension.
- one or more of the down-sampling blocks 320 may include sub-blocks such as: convolution layer; 3D convolution layer; 1×1×1 convolution layer; instance normalization layer; Leaky ReLU activation function, etc.
- one or more of the up-sampling blocks 330 may include sub-blocks such as: trilinear interpolation layer; convolution layer; concatenate layer; 3D convolution layer; instance normalization layer; Leaky ReLU activation function.
- the up- sampling blocks 330 may also include a skip connection input 325 and/or a feed forward connection.
- each convolution layer may include sub-blocks such as: 3D convolution layer; instance normalization layer; Leaky ReLU activation function.
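- A minimal PyTorch sketch of such blocks is shown below, assuming each convolutional unit is the 3D convolution → instance normalization → Leaky ReLU sequence listed above and that an up-sampling block applies trilinear interpolation followed by concatenation of the skip input; the channel counts and layer arrangement are illustrative, not the patented architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvUnit(nn.Module):
    """3D convolution -> instance normalization -> Leaky ReLU."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1)
        self.norm = nn.InstanceNorm3d(out_ch)
        self.act = nn.LeakyReLU(inplace=True)

    def forward(self, x):
        return self.act(self.norm(self.conv(x)))

class UpBlock(nn.Module):
    """Trilinear up-sampling, concatenation with the skip connection,
    then two convolutional units."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.conv1 = ConvUnit(in_ch + skip_ch, out_ch)
        self.conv2 = ConvUnit(out_ch, out_ch)

    def forward(self, x, skip):
        x = F.interpolate(x, size=skip.shape[2:], mode="trilinear",
                          align_corners=False)
        x = torch.cat([x, skip], dim=1)
        return self.conv2(self.conv1(x))

# Toy forward pass with illustrative channel counts.
skip = torch.randn(1, 24, 32, 64, 64)
deep = torch.randn(1, 48, 16, 32, 32)
out = UpBlock(in_ch=48, skip_ch=24, out_ch=24)(deep, skip)
print(out.shape)   # torch.Size([1, 24, 32, 64, 64])
```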
- FIGs. 6 to 9 illustrate a machine learning model 300 architecture according to embodiments of the disclosure.
- the machine learning model 300 may be a Deep Learning based AAT quantification model.
- the machine learning model 300 may be a ResNet based 3D-UNet implemented to segment the distinct AAT depots.
- the machine learning model 300 includes 11 building blocks, containing a total of 59,145,102 trainable parameters. As shown in FIG. 6, in a specific example, the machine learning model 300 includes one input block 310, five down-sampling blocks 320, and five up-sampling blocks 330. Still referring to FIG. 6, the numbers in parentheses within the blocks indicate the dimensions of the respective block: (x, y, z).
- Convolutional operations may be performed in a sequence of 3D convolution 350, instance normalization 360, followed by a Leaky ReLU activation 370.
- Trilinear interpolation 380 may be used to up-sample feature maps within the decoder path.
- Down-sampling is performed with a stride operation of two in the final convolutional layer in each block.
- the network may be configured to first pool over the x- and y-axis until their dimensions match the z-axis; thereafter, all axes are down-sampled synchronously.
- with each down-sampling step, the number of convolutional filters is doubled, for example starting with 24 convolutional filters in the first block and reaching 768 convolutional filters in the deepest block.
- Glorot uniform initialization is used for weight initialization.
- the model may be translated to other imaging modalities such as computed tomography (CT) images or data.
- CT computed tomography
- the segmentation task of the method is defined as a voxel-wise classification of AAT into background, SSAT, DSAT, IPAT, RPAT, and PSAT.
- the machine learning model 300 may be trained on manual expert-generated segmented data. The weight in the machine learning model 300 may be tuned or adjusted using backpropagation algorithms.
- the Adam optimizer is used to minimize the loss function, which is defined as a label-wise summation of the binary cross entropy and the generalized Dice loss, where Pc is the probability matrix for class c and GTc is the corresponding ground truth matrix. Hyper-parameters, including batch size, learning rate, and patience, may be determined empirically. In some embodiments, the training parameters are a batch size of 2, a learning rate of 1 × 10⁻⁴, and a patience of 40.
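- A hedged PyTorch sketch of a loss of this general form (a label-wise sum of a binary cross entropy term and a Dice-style term) is given below; because the exact expression is not reproduced here, the Dice formulation and smoothing constant are editorial assumptions, as are the dummy tensors and hyper-parameters in the usage example.

```python
import torch
import torch.nn.functional as F

def label_wise_bce_dice_loss(probs, target, eps=1e-6):
    """Label-wise sum of binary cross entropy and a Dice-style loss.

    probs:  (batch, classes, z, y, x) predicted probabilities P_c
    target: one-hot ground truth GT_c of the same shape
    """
    loss = probs.new_zeros(())
    for c in range(probs.shape[1]):
        p, gt = probs[:, c], target[:, c]
        bce = F.binary_cross_entropy(p, gt)
        intersection = (p * gt).sum()
        dice = 1.0 - (2.0 * intersection + eps) / (p.sum() + gt.sum() + eps)
        loss = loss + bce + dice
    return loss

# Illustrative usage with the Adam optimizer (batch size 2, learning rate 1e-4,
# as stated above) on dummy tensors; in practice the optimizer would hold the
# network parameters rather than the probability tensor itself.
probs = torch.rand(2, 6, 8, 16, 16, requires_grad=True)
target = torch.zeros_like(probs)
target[:, 0] = 1.0                          # everything labelled background
optimizer = torch.optim.Adam([probs], lr=1e-4)
loss = label_wise_bce_dice_loss(probs, target)
loss.backward()
optimizer.step()
```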
- FIG. 10A shows an exemplary segmented output 220 of the abdomen of a normal-weight subject, generated using the proposed method and machine learning model 300. Further, the segmented output 220 is presented as a three-dimensional volume showing different slices/cross-sections of the segmented output 220.
- FIG. 10B shows another exemplary segmented output 220 of the abdomen of an overweight participant. Referring to FIG. 10B, the segmented output 220 shows a larger volume (both relative volume and absolute volume) of IAAT in comparison to the segmented output 220 of FIG. 10A.
- the accuracy of the segmentation model or machine learning model 300 is evaluated against the manually generated ground truth. Volumes are calculated by multiplying the number of labelled voxels of the respective fat depot with the voxel resolution. The segmentation overlaps between the predicted and ground truth volumes are evaluated by computing the Dice similarity coefficients (0 indicating no overlap and 1 representing 100% overlap).
- evaluation metrics, including false positive rate (the number of predicted voxels assigned to a class that have a true label belonging to another class), false negative rate (the proportion of true labelled ground truth voxels for which the model predicted a different class), precision (the ratio of correctly positive predicted voxels to all positive predicted voxels), and sensitivity (the ratio of correctly positive predicted voxels to all actual positive voxels), are presented.
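- These per-class metrics can be sketched in NumPy as below; the exact normalisation of the false positive and false negative rates is an assumption of the example, and the dummy masks are for illustration only.

```python
import numpy as np

def segmentation_metrics(pred_mask, gt_mask):
    """Dice, false positive rate, false negative rate, precision, and
    sensitivity for one class, given boolean prediction/ground-truth masks."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    tp = np.count_nonzero(pred & gt)
    fp = np.count_nonzero(pred & ~gt)
    fn = np.count_nonzero(~pred & gt)
    dice = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    return {
        "dice": dice,
        "false_positive_rate": fp / max(tp + fp, 1),  # predicted voxels with another true label
        "false_negative_rate": fn / max(tp + fn, 1),  # ground-truth voxels missed by the model
        "precision": tp / max(tp + fp, 1),
        "sensitivity": tp / max(tp + fn, 1),
    }

pred = np.random.rand(40, 256, 224) > 0.5
gt = np.random.rand(40, 256, 224) > 0.5
print(segmentation_metrics(pred, gt))
```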
- SSAT Superficial Subcutaneous Adipose Tissue
- DSAT Deep Subcutaneous Adipose Tissue
- IPAT Intraperitoneal Adipose Tissue
- RPAT Retroperitoneal Adipose Tissue
- PSAT Paraspinal Adipose Tissue
- DICE Dice Similarity Coefficient
- FP False Positive Rate
- FN False Negative Rate.
- the proposed method shows high accuracy when compared with the manually created ground truth data with mean Dice similarity scores (5-fold cross-validation) of 98.3%, 97.2%, 96.5%, 96.3%, and 95.9% for SSAT, DSAT, IPAT, RPAT, and PSAT, respectively.
- the proposed method enables reliable segmentation of individual adipose tissue sub-depots from common medical images such as MRI volumes. Bland-Altman plots (FIGs. 11A to 11E) show the volumetric differences between the ground truth and the model predictions for the individual sub-depots of the hold-out test set.
- Model inference time for an abdominal volume was assessed and takes approximately 20 seconds and 1.5 seconds on an Intel Core™ i7-10750H CPU (@ 2.60 GHz) and an NVIDIA V100 GPU (32 GB), respectively, which points to a short computational time for prompt adipose tissue segmentation. Further, the short computational time advantageously allows the segmented output to be integrated or incorporated into an existing imaging system workflow, and the output may be presented collectively with the medical images.
- while anthropometric measurements like BMI and crude measurements of abdominal obesity such as WC and waist-to-hip ratio (WHR) have been adopted to assess the risk and progression of obesity and cardiometabolic disease in clinical care, those methods lack precision due to the extremely heterogeneous manifestation of obesity. Studies have shown that the metabolic heterogeneity of obesity is closely linked to adipose tissue distribution. Therefore, identifying distinct AAT partitioning patterns to advance risk stratification of obesity beyond traditional clinical obesity risk assessment methods is useful.
- the system may be configured to determine a risk stratification of a metabolic outcome.
- a method of risk stratification of the metabolic outcome is also disclosed.
- the method of risk stratification of a metabolic outcome may include determining a risk of the metabolic outcome based on a plurality of volumetric segments in the adipose tissue.
- the metabolic outcome may include metabolic syndromes, gestational diabetes mellitus (GDM), birth of large for gestational age (LGA) offspring, and diseases such as metabolic diseases, type 2 diabetes, cardiovascular diseases, etc.
- the method of risk stratification of the metabolic outcome may include determining a risk of the metabolic outcome, such as a disease, based on a relative distribution of the plurality of volumetric segments in the adipose tissue.
- the risk of a metabolic outcome, such as a disease is determined based on a distribution and/or location of respective ones of the plurality of volumetric segments.
- a relatively high risk of metabolic disease is determined based on a high amount of intraperitoneal adipose tissue (IPAT) sub-depot volume located at an anterior of the abdomen and a low amount of deep subcutaneous adipose tissue (DSAT) sub-depot volume located at a posterior of the abdomen.
- IPAT intraperitoneal adipose tissue
- DSAT deep subcutaneous adipose tissue
- a relatively high risk of disease may be determined based on a single adipose tissue sub-depot which is uniformly distributed in specific locations of the abdomen.
- the disease may include but not limited to: type two diabetes, cardiovascular disease, metabolic disease, diseases relating to liver, kidney, abdomen, etc.
- the method of risk stratification of the metabolic outcome may include determining a risk of a disease based on a relative quantification of the plurality of volumetric segments in the adipose tissue.
- a relatively high risk of metabolic disease is determined based on a high amount of intraperitoneal adipose tissue (IPAT) sub-depot volume relative to the deep subcutaneous adipose tissue (DSAT) sub-depot volume.
- IPAT intraperitoneal adipose tissue
- DSAT deep subcutaneous adipose tissue
- the risk of metabolic disease may be determined based on the presence of specific adipose tissue types.
- the system 100 may also be configured to determine a risk stratification of cardiometabolic disease.
- a method of risk stratification of cardiometabolic disease is disclosed.
- the method of risk stratification of cardiometabolic disease includes determining a risk of cardiometabolic disease of a subject based on a quantification (relative quantification or an absolute quantification) of respective AAT depot or sub-depot of an adipose tissue of the subject.
- the method may include determining a relative quantification or an absolute quantification of a plurality of volumetric segments corresponding to respective AAT depot or sub-depot in the adipose tissue.
- the method may utilize the system for and method of classifying adipose tissue as disclosed in previous sections.
- the method of risk stratification of cardiometabolic disease may further include determining a metabolically unfavourable abdominal adipose tissue distribution responsive to a higher amount of intraperitoneal adipose tissue (IPAT) relative to a reference subject or phenotype, and a lower amount of deep subcutaneous adipose tissue (DSAT) relative to the reference subject or phenotype.
- the reference subject or phenotype may be a subject of common physiological parameters, characterized by a predominantly normal BMI (58%) and healthy metabolic profile (72%).
- DBP diastolic blood pressure
- SBP systolic blood pressure
- HDL-C high density lipoprotein cholesterol
- LDL-C low density lipoprotein cholesterol
- TGs triglycerides
- FPG fasting plasma glucose
- hsCRP high-sensitivity C-reactive protein
- MetS Metabolic Syndrome
- participants defined by the P4 phenotype were characterized by an overall healthy metabolic profile
- participants defined by the P1 and P2 phenotypes appeared to be increasingly affected by obesity and showed elevated levels of BMI, WC, WHR, total body fat (BF), and TAAT, compared to P4 (FIG. 13).
- the P1 and P2 phenotypes showed elevated levels of IMCL, liver fat, hsCRP, and reduced levels of HDL, compared to P4 (FIG. 14). This indicated that the two groups with elevated levels of IPAT, P1 and P2, are characterized by increased obesity and obesity-related metabolic alterations.
- a healthy abdominal fat partitioning pattern may be defined by a decreased relative amount of IPAT (<14.5%) and an increased relative amount of DSAT (>27.3%) (P4 phenotype). Phenotypes characterized by increased IPAT accumulation (P1 and P2) showed elevated levels of obesity and metabolic impairments.
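- As a purely illustrative rule (not the patented risk-stratification algorithm), the quoted cut-offs can be applied to the relative volumes produced by the segmentation pipeline to flag a favourable, P4-like partitioning pattern; the helper name and the strict treatment of the thresholds are editorial assumptions.

```python
def is_favourable_partitioning(relative_volumes_percent):
    """Return True when the relative AAT volumes match the healthy (P4-like)
    pattern described above: IPAT < 14.5% and DSAT > 27.3% of total AAT."""
    return (relative_volumes_percent["IPAT"] < 14.5
            and relative_volumes_percent["DSAT"] > 27.3)

print(is_favourable_partitioning({"SSAT": 40.0, "DSAT": 30.0, "IPAT": 13.0,
                                  "RPAT": 12.0, "PSAT": 5.0}))   # True
```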
- While individuals defined by the P1 and P2 phenotypes showed similar levels of traditional clinical obesity measurements, the P1 phenotype exhibited a higher cardiometabolic risk profile, characterized by increased liver fat, elevated circulating TGs, and decreased HDL-C concentrations, resulting in significantly increased prevalence and relative risk for MetS, and hence increased risk for developing cardiometabolic disease. Therefore, the P1 phenotype of a relatively high or higher amount of intraperitoneal adipose tissue (IPAT) relative to the reference phenotype (P4), and a relatively low or lower amount of deep subcutaneous adipose tissue (DSAT) relative to the reference phenotype, appears to define a metabolically unfavourable adipose tissue partitioning pattern.
- IPAT intraperitoneal adipose tissue
- DSAT deep subcutaneous adipose tissue
- the system 100 may also be configured to determine a risk stratification of gestation events.
- a method of risk stratification of gestation events is disclosed according to various embodiments. In some embodiments, the method includes determining a risk of gestational diabetes mellitus (GDM) based on a quantification of a respective AAT depot or sub-depot of an adipose tissue of the subject prior to conception. In other embodiments, the method includes determining a risk of birth of large for gestational age (LGA) offspring based on a quantification of a respective AAT depot or sub-depot of an adipose tissue of the subject prior to conception.
- GDM gestational diabetes mellitus
- LGA large for gestational age
- the method may include determining a relative quantification or an absolute quantification of a plurality of volumetric segments corresponding to respective AAT depot or sub-depot in the adipose tissue.
- The method may utilize the adipose tissue classification method or system as disclosed in previous sections.
- the method of risk stratification of gestation events may further include determining a metabolically unfavourable abdominal adipose tissue distribution responsive to a higher amount of intraperitoneal adipose tissue (IPAT) relative to a reference subject or phenotype, and a lower amount of deep subcutaneous adipose tissue (DSAT) relative to the reference subject or phenotype.
- IPAT intraperitoneal adipose tissue
- DSAT deep subcutaneous adipose tissue
- the reference subject or phenotype may be a subject of common physiological parameters, characterized by a predominantly normal BMI (58%) and healthy metabolic profile (72%).
- GDM was diagnosed by an oral glucose tolerance test according to the IADPSG criteria.
- LGA offspring was defined by a birth weight >90th percentile of the S-PRESTO population.
- phenotypic odds for developing GDM and having LGA offspring were determined by binary logistic regression models adjusted for age, ethnicity, educational status, parity, and BMI.
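- A sketch of such an adjusted logistic regression is shown below using statsmodels, which is an assumption (no software package is named here); the column names and cohort table are hypothetical, and only a subset of the covariates listed above is included.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical cohort table; column names and values are illustrative only.
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "gdm": rng.integers(0, 2, n),                      # outcome: GDM yes/no
    "phenotype": rng.choice(["P1", "P2", "P4"], n),    # AAT partitioning phenotype
    "age": rng.normal(31, 4, n),
    "bmi": rng.normal(23, 3, n),
    "parity": rng.integers(0, 3, n),
})

# Binary logistic regression of GDM on phenotype, adjusted for covariates,
# with P4 as the reference category.
model = smf.logit(
    "gdm ~ C(phenotype, Treatment(reference='P4')) + age + bmi + parity",
    data=df,
).fit(disp=False)
print(np.exp(model.params))        # odds ratios relative to the P4 phenotype
print(np.exp(model.conf_int()))    # 95% confidence intervals for the odds ratios
```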
- AAT depots were segmented and quantified from MRI volumes, converted to relative volumes, and expressed as % of total AAT.
- P4 defined most participants with normal weight (Asian cut-off: BMI < 23) and was considered the reference group or reference phenotype. Regression results are shown in Table 4. In comparison to P4, the odds for GDM were >5 times higher for individuals in P1 (odds ratio (95% CI): 5.36 (1.12, 27.85)) after adjusting for confounders including pre-pregnancy BMI (Table S1). Additionally, women categorized as P1 exhibited 4-fold higher odds (4.5 (0.96, 22.00)) for having LGA offspring, compared to P4. While this did not reach statistical significance upon adjusting for BMI (Table 4), it provides an indication of the risk of LGA offspring in the P1 phenotype relative to the P4 phenotype. The P2 phenotype was not associated with GDM or LGA.
- the proposed phenotype risk stratification method collectively enhances metabolic risk stratification in women affected by obesity. While AAT depot-specific expansion mechanisms for maintaining metabolic homeostasis during excess weight gain remain largely unknown, this study is the first to propose a pathomechanism between reduced DSAT expansion and MetS and GDM. Since MetS, GDM, and T2D share similar underlying obesity-related metabolic impairments (e.g., lipotoxicity and insulin resistance), collectively our findings underscore the critical role of AAT distribution in shaping metabolic health among Asian women.
- the proposed system and method for phenotyping abdominal obesity using relative measurements of distinct AAT depots improved risk stratification of MetS in an Asian female cohort. Therefore, automated and rapid assessment of distinct AAT depots could not only help to improve understanding of obesity but also improve risk assessment of obesity and obesity-related disease in clinical care. Further, the proposed system and method advantageously provide phenotyping/differentiation of IAAT sub-depots (IPAT, RPAT, and PSAT). Furthermore, three-dimensional volumes can be fed to the convolutional neural network, which enables improved segmentation, as anatomical three-dimensional contexts are needed for accurate differentiation of individual adipose tissue depots.
Landscapes
- Health & Medical Sciences (AREA)
- Engineering & Computer Science (AREA)
- Medical Informatics (AREA)
- Public Health (AREA)
- General Health & Medical Sciences (AREA)
- Epidemiology (AREA)
- Primary Health Care (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Physics & Mathematics (AREA)
- Pathology (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Data Mining & Analysis (AREA)
- Biophysics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Databases & Information Systems (AREA)
- Radiology & Medical Imaging (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- Heart & Thoracic Surgery (AREA)
- Veterinary Medicine (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Molecular Biology (AREA)
- Mathematical Physics (AREA)
- Urology & Nephrology (AREA)
- Signal Processing (AREA)
- Nutrition Science (AREA)
- Physiology (AREA)
- Fuzzy Systems (AREA)
- Physical Education & Sports Medicine (AREA)
- Psychiatry (AREA)
- Evolutionary Computation (AREA)
- Quality & Reliability (AREA)
- Business, Economics & Management (AREA)
- General Business, Economics & Management (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP23895140.4A EP4622555A2 (en) | 2022-11-23 | 2023-11-09 | A system for and a method of classifying adipose tissue |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| SG10202260168Q | 2022-11-23 | ||
| SG10202260168Q | 2022-11-23 |
Publications (3)
| Publication Number | Publication Date |
|---|---|
| WO2024112260A2 true WO2024112260A2 (en) | 2024-05-30 |
| WO2024112260A9 WO2024112260A9 (en) | 2024-07-04 |
| WO2024112260A3 WO2024112260A3 (en) | 2024-08-15 |
Family
ID=91196705
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/SG2023/050744 Ceased WO2024112260A2 (en) | 2022-11-23 | 2023-11-09 | A system for and a method of classifying adipose tissue |
Country Status (2)
| Country | Link |
|---|---|
| EP (1) | EP4622555A2 (en) |
| WO (1) | WO2024112260A2 (en) |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10157462B2 (en) * | 2016-06-27 | 2018-12-18 | University Of Central Florida Research Foundation, Inc. | System and method for image-based quantification of white and brown adipose tissue at the whole-body, organ and body-region levels |
| WO2019182520A1 (en) * | 2018-03-22 | 2019-09-26 | Agency For Science, Technology And Research | Method and system of segmenting image of abdomen of human into image segments corresponding to fat compartments |
- 2023
- 2023-11-09 EP EP23895140.4A patent/EP4622555A2/en active Pending
- 2023-11-09 WO PCT/SG2023/050744 patent/WO2024112260A2/en not_active Ceased
Also Published As
| Publication number | Publication date |
|---|---|
| WO2024112260A3 (en) | 2024-08-15 |
| WO2024112260A9 (en) | 2024-07-04 |
| EP4622555A2 (en) | 2025-10-01 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Lee et al. | Deep neural network for automatic volumetric segmentation of whole-body CT images for body composition assessment | |
| US11443433B2 (en) | Quantification and staging of body-wide tissue composition and of abnormal states on medical images via automatic anatomy recognition | |
| Zhou et al. | Deep learning-based carotid plaque segmentation from B-mode ultrasound images | |
| CN113711271A (en) | Deep convolutional neural network for tumor segmentation by positron emission tomography | |
| CN103054563B (en) | A kind of quantification of blood vessel wall image texture characteristic and extracting method | |
| Huang et al. | ISA-Net: Improved spatial attention network for PET-CT tumor segmentation | |
| Zopfs et al. | Evaluating body composition by combining quantitative spectral detector computed tomography and deep learning-based image segmentation | |
| Kawahara et al. | Image synthesis with deep convolutional generative adversarial networks for material decomposition in dual-energy CT from a kilovoltage CT | |
| CN111598864A (en) | A method for evaluating the differentiation of hepatocellular carcinoma based on fusion of multimodal image contributions | |
| CN117788435A (en) | Physical examination CT image data processing and analyzing system and application thereof | |
| WO2011139232A1 (en) | Automated identification of adipose tissue, and segmentation of subcutaneous and visceral abdominal adipose tissue | |
| Yang et al. | A multi-stage progressive learning strategy for COVID-19 diagnosis using chest computed tomography with imbalanced data | |
| Oh et al. | Segmentation of white matter hyperintensities on 18F-FDG PET/CT images with a generative adversarial network | |
| Jin et al. | Segmentation and evaluation of adipose tissue from whole body MRI scans | |
| Pusterla et al. | An automated pipeline for computation and analysis of functional ventilation and perfusion lung MRI with matrix pencil decomposition: TrueLung | |
| EP4622555A2 (en) | A system for and a method of classifying adipose tissue | |
| Benrabha et al. | Automatic ROI detection and classification of the achilles tendon ultrasound images | |
| Memiş et al. | A new scheme for automatic 2D detection of spheric and aspheric femoral heads: A case study on coronal MR images of bilateral hip joints of patients with Legg-Calve-Perthes disease | |
| CN119151967A (en) | Medical image analysis method and system based on flat scanning CT data | |
| Saadizadeh | Breast cancer detection in thermal images using GLRLM algorithm | |
| Takahashi et al. | Automated volume measurement of abdominal adipose tissue from entire abdominal cavity in Dixon MR images using deep learning | |
| Moghbeli et al. | A method for body fat composition analysis in abdominal magnetic resonance images via self-organizing map neural network | |
| Wald et al. | Automated quantification of adipose and skeletal muscle tissue in whole-body MRI data for epidemiological studies | |
| Dietz et al. | Diabetes detection from whole-body magnetic resonance imaging using deep learning | |
| Yunardi et al. | Contrast-enhanced Based on Abdominal Kernels for CT Image Noise Reduction |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| WWE | Wipo information: entry into national phase |
Ref document number: 2023895140 Country of ref document: EP |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 11202503083Y Country of ref document: SG |
|
| WWP | Wipo information: published in national office |
Ref document number: 11202503083Y Country of ref document: SG |
|
| ENP | Entry into the national phase |
Ref document number: 2023895140 Country of ref document: EP Effective date: 20250623 |
|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23895140 Country of ref document: EP Kind code of ref document: A2 |
|
| WWP | Wipo information: published in national office |
Ref document number: 2023895140 Country of ref document: EP |