
WO2024112260A2 - A system for and a method of classifying adipose tissue - Google Patents

A system for and a method of classifying adipose tissue

Info

Publication number
WO2024112260A2
Authority
WO
WIPO (PCT)
Prior art keywords
adipose tissue
recited
medical image
further configured
segmentation module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/SG2023/050744
Other languages
French (fr)
Other versions
WO2024112260A3 (en)
WO2024112260A9 (en)
Inventor
Yeshe Manuel KWAY
Suresh Anand SADANANTHAN
Sambasivam SENDHIL VELAN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Agency for Science Technology and Research Singapore
Original Assignee
Agency for Science Technology and Research Singapore
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Agency for Science Technology and Research Singapore filed Critical Agency for Science Technology and Research Singapore
Priority to EP23895140.4A priority Critical patent/EP4622555A2/en
Publication of WO2024112260A2 publication Critical patent/WO2024112260A2/en
Publication of WO2024112260A9 publication Critical patent/WO2024112260A9/en
Publication of WO2024112260A3 publication Critical patent/WO2024112260A3/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current


Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/48 Other medical applications
    • A61B5/4869 Determining body composition
    • A61B5/4872 Body fat
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/30 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/40 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/60 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to nutrition control, e.g. diets
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/67 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/70 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2576/00 Medical imaging apparatus involving image processing or analysis
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20076 Probabilistic image processing
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Definitions

  • This application relates to a system for classifying adipose tissue and a method of classifying adipose tissue.
  • AAT abdominal adipose tissues
  • Abdominal adipose tissue can generally be separated into two main depots: subcutaneous adipose tissue (SAT) and intra-abdominal adipose tissue (IAAT).
  • SAT subcutaneous adipose tissue
  • IAAT intra-abdominal adipose tissue
  • SSAT superficial subcutaneous adipose tissue
  • DSAT deep subcutaneous adipose tissue
  • IAAT may be further separated into intraperitoneal adipose tissue (IPAT), retroperitoneal adipose tissue (RPAT), and paraspinal adipose tissue (PSAT).
  • a system for classifying adipose tissue includes memory storing instructions; and a processor coupled to the memory and configured to process the stored instructions to implement: a segmentation module configured to: acquire a medical image of a subject; segment the medical image into a plurality of volumetric segments using a machine learning model, wherein each of the plurality of volumetric segments comprises a respective segmentation mask corresponding to one selected from a plurality of adipose tissue types.
  • the processor is further configured to determine a metabolic outcome based on the plurality of volumetric segments in the adipose tissue.
  • the segmentation module is further configured to determine a risk of gestational diabetes mellitus (GDM) based on a quantification of the plurality of volumetric segments in the adipose tissue.
  • the segmentation module is further configured to determine a risk of birth of large for gestational age (LGA) offspring based on a quantification of the plurality of volumetric segments in the adipose tissue.
  • GDM gestational diabetes mellitus
  • LGA large for gestational age
  • the segmentation module is further configured to determine a risk of a disease based on a distribution of the plurality of volumetric segments in the adipose tissue. In some embodiments, the segmentation module is further configured to determine a risk of a disease based on a quantification of the plurality of volumetric segments in the adipose tissue.
  • According to another aspect, disclosed herein is a method of classifying adipose tissue. The method includes: acquiring a medical image of a subject; segmenting the medical image into a plurality of volumetric segments using a machine learning model, wherein each of the plurality of volumetric segments comprises a respective segmentation mask corresponding to one selected from a plurality of adipose tissue types.
  • FIG. 1A is a schematic of a system for classifying adipose tissue according to embodiments of the present disclosure
  • FIG. 1B is a schematic of a workflow of a method of classifying adipose tissue of the system of FIG. 1A;
  • FIG. 2A is an example of a medical image of a subject
  • FIG. 2B is an example of a segmented output of the medical image using the system of FIG. 1;
  • FIG. 2C is another example of a segmented output of the medical image using the system of FIG. 1;
  • FIG. 2D is yet another example of a segmented output of the medical image using the system of FIG. 1;
  • FIGs. 3A and 3B are illustrations of a graphical user interface of the system according to various embodiments.
  • FIG. 4A is a flowchart of a method of classifying adipose tissue according to embodiments of the present disclosure
  • FIG. 4B is a schematic workflow of a system for classifying adipose tissue according to embodiments of the present disclosure
  • FIG. 5 is a schematic diagram of a machine learning model according to embodiments of the present disclosure.
  • FIGs. 6 to 9 are schematic diagrams of another machine learning model and sublayers according to embodiments of the present disclosure.
  • FIG. 10A shows an example of a medical image and segmented output of the abdomen of a normal-weight subject;
  • FIG. 10B shows another example of a medical image and respective segmented output of the abdomen for an overweight participant;
  • FIGs. 11A to 11E are Bland-Altman plots showing volumetric differences in cubic centimetres between the ground truth (GT) and the prediction (P) of model 1 for the hold-out test set for superficial subcutaneous (SSAT), deep subcutaneous (DSAT), intraperitoneal (IPAT), retroperitoneal (RPAT), and paraspinal (PSAT) adipose tissue;
  • SSAT superficial subcutaneous
  • DSAT deep subcutaneous
  • IPAT intraperitoneal
  • RPAT retroperitoneal
  • PSAT paraspinal
  • FIGs. 12A to 12H show example medical images and respective segmented outputs of model 1 on a hold-out test set
  • the articles “a”, “an” and “the” as used with regard to a feature or element include a reference to one or more of the features or elements.
  • the term “about” or “approximately” as applied to a numeric value encompasses the exact value and a reasonable variance as generally understood in the relevant technical field, e.g., within 10% of the specified value.
  • modules may be implemented as circuits, logic chips or any sort of discrete component, and multiple modules may be combined into a single module or divided into sub-modules as required without departing from the disclosure. Still further, one skilled in the art will also recognize that a module may be implemented in software which may then be executed by a variety of processors. In embodiments of the disclosure, a module may also comprise computer instructions or executable codes that may instruct a computer processor to carry out a sequence of events based on instructions received. The choice of the implementation of the modules is left as a decision to a person skilled in the art and does not limit the scope of this disclosure in any way.
  • machine learning model may be used to refer to any one or more of the terms “artificial intelligence model”, “neural network model”, “deep learning model”, “multi-layer perceptron model”, “ResNet model”, “back propagation model”, etc., as will be understood from the context.
  • IAAT intra-abdominal adipose tissue
  • IPAT intraperitoneal adipose tissue
  • RPAT drains into the systemic circulation.
  • hepatic energy regulation such as increased gluconeogenesis and production of very low-density lipoproteins.
  • While SAT is heterogeneous, distinct associations of DSAT and SSAT with cardiometabolic risk factors have been shown: DSAT shares similar deleterious characteristics with IAAT and hence a close association with cardiometabolic risk factors, whereas SSAT is considered a protective fat storage site.
  • adipose tissue infiltration into the lumbar paraspinal musculature is identified as a pathological phenotype in neuromuscular disease and may be a manifestation in patients with chronic lower back pain and symptomatic lumbar spinal stenosis.
  • the present disclosure relates to a system for classifying adipose tissue and a method for classifying adipose tissue.
  • the system and method may be a fully automated system/method for segmenting, quantifying, and visualizing distinct abdominal adipose tissue (AAT) depots or sub-depots from medical imaging data.
  • AAT abdominal adipose tissue
  • Medical imaging modalities, such as computed tomography (CT), magnetic resonance imaging (MRI), and electrical impedance tomography, may enable non-invasive imaging of tissue for specific characterization and quantification.
  • the system and classification method as disclosed herein utilize medical images of a subject obtained from the medical imaging modalities, and autonomously compute and segment the abdominal adipose tissue into the respective adipose tissue types within a short duration (for example, under 20 seconds).
  • the automated and standardized quantification system and method opens up opportunities to better understand obesity and its physiological and pathological phenotypes in research settings, as well as enabling improved and rapid health assessment in a clinical context.
  • changes in AAT depots/sub-depots may be evaluated longitudinally or over a duration, in combination with lifestyle and metabolic interventions.
  • large cohort studies and longitudinal studies relevant to abdominal adipose tissue may benefit enormously from utilization of the system/method as disclosed.
  • the system and method of the various embodiments of the present disclosure may also be integrated with various MRI scanners/medical devices to achieve rapid results for various clinical radiological applications and wellness markets.
  • the present disclosure demonstrates a method and system for comprehensive classification and quantification of various adipose tissue depots or sub-depots.
  • the detailed quantification of distinct adipose tissue depots or sub-depots may enable phenotypic risk assessment, guide diagnosis, and surgery planning.
  • the present disclosure is exemplary in nature and can be expanded to include analysis of neonates, children, and ageing subjects, as well as different populations such as ethnic groups.
  • the technology can be easily integrated with MRI scanners and can be utilized in obesity clinics, metabolic surgeries, and lifestyle interventions.
  • Some examples of clinical application include but are not limited to: risk assessment of cardio-metabolic disease in primary and secondary care; obesity/diabetes; childhood obesity; geriatric care; wellness applications (exercise/nutrition); metabolic/oncologic surgeries/cosmetic applications; real-time surgical applications.
  • the adipose tissue types segmented from the medical image may include AAT depots, such as subcutaneous adipose tissue (SAT); and an intra-abdominal adipose tissue (IAAT).
  • AAT depots such as subcutaneous adipose tissue (SAT); and an intra-abdominal adipose tissue (IAAT).
  • the SAT and IAAT depots may further be segmented into the respective sub-depots such as superficial subcutaneous adipose tissue (SSAT); deep subcutaneous adipose tissue (DSAT); intraperitoneal adipose tissue (IPAT); retroperitoneal adipose tissue (RPAT); and paraspinal adipose tissue (PSAT).
  • SSAT superficial subcutaneous adipose tissue
  • DSAT deep subcutaneous adipose tissue
  • IPAT intraperitoneal adipose tissue
  • RPAT retroperitoneal adipose tissue
  • FIG. 1A illustrates a system 100 for classifying adipose tissue or an adipose tissue classification system, according to various embodiments of the present disclosure.
  • FIG. IB illustrates a workflow of a method of classifying adipose tissue of the system 100.
  • the system 100 may include memory which stores instructions, and a computational device, such as a processor coupled to the memory.
  • the processor may be configured to process the stored instructions on the memory to implement: a segmentation module 110.
  • the segmentation module 110 may be configured to receive or acquire a medical image 210 of a subject 80 or patient from a medical imaging module or a database storing the medical images.
  • medical imaging 132 of the subject 80 may be performed via an imaging modality 134 to obtain medical images 210, such as a raw image or raw image volume, which are stored in the memory of a database or data storage.
  • the segmentation module 110 may be configured to acquire the medical image 210 from the database for image processing 112 or a model application to produce an output.
  • the segmentation module 110 may be configured to use a machine learning model 300 to segment the medical image 210 to determine a model output 114, such as a segmented output 220.
  • the segmented output 220 may be one or more volumetric segments, each volumetric segment corresponding to a respective AAT depot or sub-depot in the medical image 210.
  • each volumetric segment may include a respective segmentation mask, each segmentation mask corresponding to a respective adipose tissue type, such as a respective AAT depot or sub-depot.
  • the system 100 may be embodied in the form of a workstation, a laptop, a mobile device, a network server, a PACS server, a cloud computing device, etc., which interfaces with or executes the machine learning model 300 of the segmentation module 110.
  • the segmentation module 110 may carry out an adipose tissue classification method.
  • the segmentation module 110 may be configured to segment the medical image 210 to determine or obtain a segmented output 220 or a segmented medical image.
  • the model output or the segmented output 220 may be displayed via a graphical user interface 120, which provides visual and/or textual representations of the segmented output 220.
  • clinical inferences 140 or clinical analysis may be performed by a medical practitioner.
  • the clinical inferences or analysis may include a diagnostic report, health assessments, surgery planning, intervention planning, personalized medicine, clinical analysis, etc.
  • the system 100 may be integrated or incorporated with an image acquisition module 130, such as an MRI machine or CT machine. This enables the system 100 to be a complete one-stop solution for AAT depot/sub-depot segmentation and characterization.
  • the medical image 210 may be an output from one of the following: a Magnetic Resonance Imaging (MRI) scan, a Computed Tomography (CT) scan, and an electrical impedance tomography scan.
  • the medical image 210 may be a three-dimensional (3D) volumetric image, such as a 3D-Computed Tomography image.
  • the medical image 210 may include a plurality of 2D medical images, thus forming one or more stacks of 2D medical images.
  • the medical image 210 may include a point cloud, each point (coordinate) in the point cloud corresponding to a value obtained from a medical imaging process, such as an MRI scan.
  • the segmentation module 110 segments the medical image 210 by classifying each voxel (or a predetermined unit volume) of the medical image 210 with a label which corresponds to each AAT depot or AAT sub-depot.
  • the label may correspond to one of: a background; SAT; and IAAT.
  • each voxel with the label of SAT may be further classified into or provided with another label corresponding to one of: SSAT and DSAT.
  • each voxel with the label of IAAT may be further classified into or provided with another label corresponding to one of: IPAT; RPAT; and PSAT.
  • each voxel may be directly classified with a label corresponding to one of: a background; SSAT; DSAT; IPAT; RPAT; and PSAT.
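The two-stage voxel labelling described above can be sketched as a simple lookup. Note this is an illustrative assumption: the integer label codes and the `classify_voxel` helper are hypothetical, not the patent's actual model output format.

```python
# Hypothetical two-stage label scheme: coarse labels first, then an
# optional sub-label refines SAT into SSAT/DSAT and IAAT into
# IPAT/RPAT/PSAT. All codes here are assumed for illustration.
COARSE = {0: "background", 1: "SAT", 2: "IAAT"}
FINE = {
    "SAT": {0: "SSAT", 1: "DSAT"},
    "IAAT": {0: "IPAT", 1: "RPAT", 2: "PSAT"},
}

def classify_voxel(coarse_label, fine_label=None):
    """Map a coarse label (and optional sub-label) to a depot name."""
    depot = COARSE[coarse_label]
    if depot in FINE and fine_label is not None:
        return FINE[depot][fine_label]
    return depot
```

Calling `classify_voxel(1, 0)` yields `"SSAT"`, while `classify_voxel(0)` stays `"background"`; the direct single-stage classification mentioned above would simply use one flat label table instead.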
  • the segmented output 220 may include the medical image 210 overlaid with one or more segmentation masks 221/222/223/224/225 to obtain a visualization output.
  • FIG. 2A illustrates the medical image 210
  • FIG. 2B illustrates the visualization output which includes the medical image 210 overlaid with the segmentation masks 221/222/223/224/225.
  • each segmentation mask 221/222/223/224/225 may correspond to a respective AAT sub-depot.
  • segmentation mask 221 corresponds to the SSAT
  • segmentation mask 222 corresponds to DSAT
  • segmentation mask 223 corresponds to IPAT
  • segmentation mask 224 corresponds to RPAT
  • segmentation mask 225 corresponds to PSAT.
  • each segmentation mask is provided with at least one predetermined visually-distinguishable characteristic.
  • each segmentation mask may be represented by a unique hatching pattern corresponding to area/volume of each respective AAT sub-depot. Therefore, the segmented output 220 may include segmentation masks of different hatching patterns overlaid onto the medical image 210 for easy visualization.
  • in some embodiments, as shown in FIG. 2C, each segmentation mask may be represented by a unique colour or shade of colour corresponding to the area/volume of each respective AAT sub-depot. Therefore, the segmented output 220 may include segmentation masks of different colours overlaid onto the medical image 210.
  • each segmentation mask may be represented by an edge or contour corresponding to the area/volume of each respective AAT sub-depot.
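The overlay of visually distinguishable segmentation masks described above can be illustrated with a minimal alpha-blend; the palette, label codes, and the `overlay` function are hypothetical choices for demonstration, not the system's actual rendering.

```python
# Assumed palette: labels 1..5 stand in for SSAT, DSAT, IPAT, RPAT, PSAT.
PALETTE = {1: (255, 0, 0), 2: (0, 255, 0), 3: (0, 0, 255),
           4: (255, 255, 0), 5: (255, 0, 255)}

def overlay(gray, labels, alpha=0.5):
    """Blend label colours over a 2D greyscale image (lists of lists)."""
    out = []
    for row_g, row_l in zip(gray, labels):
        out_row = []
        for g, lab in zip(row_g, row_l):
            if lab in PALETTE:
                # alpha-blend the depot colour with the underlying pixel
                out_row.append(tuple(int((1 - alpha) * g + alpha * c)
                                     for c in PALETTE[lab]))
            else:
                out_row.append((g, g, g))  # background stays greyscale
        out.append(out_row)
    return out
```

Adjusting `alpha` corresponds to the mask-opacity control mentioned later for the graphical user interface.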
  • the segmented output 220 may only include one or more segmentation masks without being overlaid on the medical image 210.
  • each of the segmentation masks may be rendered as a volumetric segment corresponding to each AAT sub-depot.
  • each segmentation mask may be a three-dimensional volume. Therefore, the segmented output may include multiple segmentation masks, each being a three-dimensional volume, overlaid onto a three-dimensional medical image, to obtain a three-dimensional segmented output 220.
  • the segmented output 220 may include data representing or corresponding to segmentation of the medical image 210 into one or more AAT depots or subdepots.
  • the segmented output 220 may be a two-dimensional surface field or a three-dimensional point cloud which includes values corresponding to each AAT sub-depot.
  • the segmented output 220 may include quantitative data, such as the respective tissue-specific volumes, corresponding to each of the AAT depots or sub-depots in the medical image 210. Therefore, the segmented output 220 may include multiple volumetric values, each indicative of a specific volume corresponding to each AAT sub-depot.
  • a graphical user interface 120 may be provided to display or to present the segmented output 220 to enable a convenient utilization for a user.
  • the graphical user interface 120 may enable the unprocessed three-dimensional (3D) volume medical image 210 to be loaded.
  • the user may be able to toggle between the medical image 210 (as shown in FIG. 3A) and the segmented output 220 (as shown in FIG. 3B) for better visualization.
  • the segmented output 220 may be displayed as a visualization output including multiple two-dimensional (2D) images or in other words, multiple slices of 2D images forming the three-dimensional segmented output 220.
  • Each of the plurality of 2D images may be representative of a cross section of the three- dimensional segmented output 220.
  • the multiple 2D images corresponding to sections or cross-sections of the three-dimensional segmented output 220 may be displayed via the graphical user interface 120.
  • the 2D images may be moveable or selectable relative to the three-dimensional segmented output 220.
  • the graphical user interface 120 may allow a user to view each individual slice of the medical image 210 by sliding through the 3D-volume of the medical image 210.
  • the user may choose to selectively apply the segmentation model to the volume or a defined region of interest, using appropriate action buttons in the menu bar.
  • the GUI may then apply the developed algorithm to the image volume.
  • the program will compute the volumes for each fat depot by multiplying the number of labelled voxels of the respective fat depot with the voxel resolution. Quantified volumes will then be displayed within the interface.
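The volume computation described above (labelled voxel count multiplied by the voxel resolution) can be sketched as follows, under the assumption that the segmentation is a nested-list label volume and that label 0 denotes background; the `depot_volumes` helper is illustrative, not the program's actual code.

```python
def depot_volumes(label_volume, voxel_volume_cc):
    """Compute per-depot volumes from a labelled volume.

    label_volume: nested lists [z][y][x] of integer labels (0 = background).
    voxel_volume_cc: volume of one voxel in cubic centimetres.
    Returns {label: volume in cubic centimetres}.
    """
    counts = {}
    for slab in label_volume:          # iterate over slices
        for row in slab:               # rows within a slice
            for lab in row:            # individual voxels
                if lab != 0:
                    counts[lab] = counts.get(lab, 0) + 1
    # number of labelled voxels times the voxel resolution
    return {lab: n * voxel_volume_cc for lab, n in counts.items()}
```

For example, with a 1.5 mm × 1.5 mm × 5 mm voxel, `voxel_volume_cc` would be 0.01125, and each depot's volume is simply its voxel count scaled by that constant.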
  • the segmentation mask will be overlaid on the raw image highlighting the segmented areas by assigning distinct label colours to the unique fat depots.
  • the graphical user interface 120 additionally enables visualization options, to investigate the individual adipose tissue depots, such as zooming options, adjustment of the opacity of the overlaid segmentation masks, editing of the segmentation masks, and measurement of regions of interest. The user may then export the produced results in the desired imaging formats.
  • FIG. 4A is a flowchart illustrating a method of classifying adipose tissue 400 according to various embodiments of the present disclosure.
  • the method 400 includes in stage 410, acquiring a medical image of a subject.
  • the medical image is acquired from a database storing the medical image.
  • the medical image may be acquired from a medical imaging module.
  • the method may further include, in stage 420, segmenting the medical image into a plurality of volumetric segments using a machine learning model, wherein each of the plurality of volumetric segments comprises a respective segmentation mask corresponding to one selected from a plurality of adipose tissue types.
  • the method 400 may further include in stage 430, classifying each voxel of the medical image with a label.
  • the label may correspond to one of: a background; a subcutaneous adipose tissue (SAT); and an intra-abdominal adipose tissue (IAAT).
  • SAT subcutaneous adipose tissue
  • IAAT intra-abdominal adipose tissue
  • Each voxel with the label of subcutaneous adipose tissue (SAT) may be further classified into one of: a superficial subcutaneous adipose tissue (SSAT) and a deep subcutaneous adipose tissue (DSAT).
  • Each voxel with the label of intra-abdominal adipose tissue may be further classified into one of: an intraperitoneal adipose tissue (IPAT); a retroperitoneal adipose tissue (RPAT); and a paraspinal adipose tissue (PSAT).
  • the method 400 also includes providing a respective specific volume corresponding to each of the plurality of volumetric segments.
  • the method 400 may further include converting a volume of each of the plurality of volumetric segments into a respective relative volume expressed as a percentage of a combined volume of all of the plurality of volumetric segments.
  • segmenting of the medical image may include: downsampling the medical image for at least one down-sampling iteration to obtain an intermediate representation of the medical image; and up-sampling the intermediate representation of the medical image for at least one up-sampling iteration to obtain the plurality of volumetric segments.
  • the method may include providing a skip connection from one of the at least one down-sampling iteration to a corresponding at least one up-sampling iteration.
  • the method 400 includes training the machine learning model with a training data, wherein the training data includes augmented data. Further, the method 400 may include normalizing an image volume of the medical image.
  • the method 400 may further include in stage 440, overlaying the segmentation masks on the medical image to obtain a visualization output, wherein each segmentation mask is provided with at least one predetermined visually-distinguishable characteristic. Further, the method 400 may include displaying the visualization output as a plurality of two-dimensional (2D) images, wherein each of the plurality of 2D images is representative of a cross section of the visualization output.
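The relative-volume conversion described in the method above is a simple normalisation of each depot volume by the combined volume; the depot volumes below are made-up example inputs.

```python
def relative_volumes(volumes_cm3):
    """Express each depot volume as a percentage of the combined volume
    of all volumetric segments, as described for method 400."""
    total = sum(volumes_cm3.values())
    return {depot: 100.0 * v / total for depot, v in volumes_cm3.items()}

# Hypothetical absolute volumes (cm^3) for the five sub-depots
rel = relative_volumes({"SSAT": 1.2, "DSAT": 0.9, "IPAT": 0.6,
                        "RPAT": 0.2, "PSAT": 0.1})
print(round(rel["SSAT"], 1))  # 40.0
```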
  • FIG. 4B illustrates embodiments of an overall workflow of a method of classifying adipose tissue according to embodiments of the present disclosure.
  • one or more medical images 210 may be acquired and/or received by the segmentation module 110 from a database or a data source.
  • the medical images 210 may first be pre-processed by an image pre-processor 116 prior to inputting to a machine learning model 300.
  • the machine learning model 300 may receive the medical images 210 sequentially to classify each medical image into the various AAT depots or subdepots.
  • the machine learning model 300 may receive the medical images 210 concurrently to classify each medical image into the various AAT depots or subdepots.
  • Upon classification, the segmentation module 110 outputs one or more segmented outputs 220.
  • the segmented output 220 may include tissue specific volumes corresponding to each classification of adipose tissue. Further, the segmented output 220 may also include multiple segmentation masks, each segmentation mask corresponding to one adipose tissue type.
  • the segmented output 220 may be displayed by the graphical user interface 120.
  • the graphical user interface 120 may include editing tools or editing functions such as an image editor, thus allowing a user to enhance the segmented output 220 to improve the visual representation of the segmented output 220.
  • the image editor may allow the user to alter or darken an interface between two AAT sub-depots or to change the colour of representation of each AAT sub-depot.
  • the graphical user interface may also include visualization tools or visualization functions such as 3D visualization of the segmented output via augmented reality, or visualization functions such as 2D slicing planes, etc.
  • FIG. 5 illustrates an embodiment of a machine learning model 300 or segmentation model according to various embodiments of the disclosure.
  • the machine learning model 300 may be a three-dimensional (3D) deep convolutional neural network.
  • the machine learning model 300 may include an input block 310; one or more down-sampling blocks 320; and one or more up-sampling blocks 330, connected in series or in sequence.
  • an input received by each down-sampling block 320 is down-sampled or down-scaled and provided as a respective output to a subsequent block, such as another down-sampling block 320.
  • the last of the down-sampling blocks 320 may output an intermediate representation 215 of the medical image 210 into the first of the up-sampling block 330.
  • the output of the down-sampling block 320 has a lower resolution or lower data size in comparison to the input as received by the same down-sampling block 320.
  • an input received by each up-sampling block 330 is up-sampled or up- scaled and provided as a respective output to a subsequent block, such as another up-sampling block 330.
  • the output of the up-sampling block 330 has a higher resolution or higher data size in comparison to the input as received by the same up-sampling block 330.
  • the last of the up-sampling blocks 330 may output the segmented output 220 and/or the segmentation masks.
  • one or more of the down-sampling blocks 320 may provide a skip connection 325 input to a respective up-sampling block 330.
  • a skip connection 325 may be provided from one or more of the down-sampling blocks 320 to a respective one or more up-sampling blocks 330.
  • a skip connection 325 is provided between each pair of corresponding down-sampling block 320 and up-sampling block 330.
  • a skip connection 325 is provided between the input block 310 and the last of the up-sampling blocks 330.
  • skip connections 325 are selectively provided between selected pairs of down-sampling block 320 and up-sampling block 330.
  • the down-sampling blocks 320 and/or the up-sampling blocks 330 may have a different block dimension.
  • one or more of the down-sampling blocks 320 may include sub-blocks such as: convolution layer; 3D convolution layer; 1×1×1 convolution layer; instance normalization layer; Leaky ReLU activation function, etc.
  • one or more of the up-sampling blocks 330 may include sub-blocks such as: trilinear interpolation layer; convolution layer; concatenate layer; 3D convolution layer; instance normalization layer; Leaky ReLU activation function.
  • the up-sampling blocks 330 may also include a skip connection input 325 and/or a feed-forward connection.
  • each convolution layer may include sub-blocks such as: 3D convolution layer; instance normalization layer; Leaky ReLU activation function.
  • FIGs. 6 to 9 illustrate a machine learning model 300 architecture according to embodiments of the disclosure.
  • the machine learning model 300 may be a Deep Learning based AAT quantification model.
  • the machine learning model 300 may be a ResNet-based 3D U-Net implemented to segment the distinct AAT depots.
  • the machine learning model 300 includes 11 building blocks, containing a total of 59,145,102 trainable parameters. As shown in FIG. 6, in a specific example, the machine learning model 300 includes one input block 310, five down-sampling blocks 320, and five up-sampling blocks 330. Still referring to FIG. 6, the numbers in the parentheses within the blocks indicate dimensions of the respective block: (x, y, z).
  • Convolutional operations may be performed in a sequence of 3D convolution 350, instance normalization 360, followed by a Leaky-Relu activation 370.
  • Trilinear interpolation 380 may be used to up-sample feature maps within the decoder path.
  • Down-sampling is performed with a stride operation of two in the final convolutional layer in each block.
  • the network may be configured to first pool over the x- and y-axes until their dimensions match the z-axis; thereafter, all axes are down-sampled synchronously.
  • with each down-sampling step, the number of convolutional filters is doubled, starting with 24 convolutional filters in the first block and reaching 768 convolutional filters in the deepest block.
  • Glorot uniform initialization is used for weight initialization.
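The anisotropic pooling and filter-doubling scheme described above can be traced with a small sketch; the 256×256×64 input shape is an assumed example, not a dimension specified by this disclosure.

```python
def encoder_schedule(shape, n_blocks=5, base_filters=24):
    """Sketch of the encoder's shape/filter schedule: pool x and y until
    they match z, then down-sample all axes together; the filter count
    doubles at each down-sampling block (24 in the first block, 768 in
    the deepest)."""
    x, y, z = shape
    filters = base_filters
    schedule = []
    for _ in range(n_blocks):
        if x > z or y > z:               # pool in-plane axes first
            x, y = x // 2, y // 2
        else:                            # then pool all axes synchronously
            x, y, z = x // 2, y // 2, z // 2
        filters *= 2
        schedule.append(((x, y, z), filters))
    return schedule

# Hypothetical 256 x 256 x 64 input volume
sched = encoder_schedule((256, 256, 64))
for dims, f in sched:
    print(dims, f)
```

With these assumptions the spatial dimensions become cubic after two blocks and the deepest block ends with 24 × 2⁵ = 768 filters, consistent with the filter counts stated above.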
  • the model may be translated to other imaging modalities such as computed tomography (CT) images or data.
  • the segmentation task of the method is defined as a voxel-wise classification of AAT into background, SSAT, DSAT, IPAT, RPAT, and PSAT.
  • the machine learning model 300 may be trained on manual expert-generated segmented data. The weight in the machine learning model 300 may be tuned or adjusted using backpropagation algorithms.
  • the Adam optimizer is used to minimize the loss function, which is defined as a label-wise summation of the binary cross entropy (BCE) and the generalized Dice loss, i.e., L = Σc [BCE(Pc, GTc) + GDL(Pc, GTc)], wherein Pc is the probability matrix for class c and GTc is the corresponding ground truth matrix. Hyper-parameters, including batch size, learning rate, and patience, may be determined empirically. In some embodiments, the training parameters are found to be a batch size of 2, a learning rate of 1 × 10⁻⁴, and a patience of 40.
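A minimal NumPy sketch of such a label-wise loss is given below; it uses a simple per-class soft Dice term, whereas the generalized Dice loss in the disclosure may additionally weight classes, so this is an approximation for illustration only.

```python
import numpy as np

def bce(p, gt, eps=1e-7):
    """Binary cross entropy between probability map p and ground truth gt."""
    p = np.clip(p, eps, 1.0 - eps)
    return float(-np.mean(gt * np.log(p) + (1 - gt) * np.log(1 - p)))

def soft_dice_loss(p, gt, eps=1e-7):
    """1 - soft Dice overlap between probability map p and ground truth gt."""
    return float(1.0 - (2.0 * np.sum(p * gt) + eps) / (np.sum(p) + np.sum(gt) + eps))

def segmentation_loss(probs_per_class, gts_per_class):
    """Label-wise summation over classes c of BCE(Pc, GTc) + DiceLoss(Pc, GTc)."""
    return sum(bce(p, g) + soft_dice_loss(p, g)
               for p, g in zip(probs_per_class, gts_per_class))

# A near-perfect prediction drives the combined loss towards zero
gt = np.array([1.0, 0.0, 1.0, 0.0])
print(segmentation_loss([gt], [gt]))
```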
  • FIG. 10A shows an exemplary segmented output 220, generated using the proposed method and machine learning model 300, of the abdomen of a normal-weight subject. Further, the segmented output 220 is presented as a three-dimensional volume showing different slices/cross-sections of the segmented output 220.
  • FIG. 10B shows another exemplary segmented output 220 of the abdomen for an overweight participant. Referring to FIG. 10B, the segmented output 220 shows a larger volume (both relative volume and absolute volume) of IAAT in comparison to the segmented output 220 of FIG. 10A.
  • the accuracy of the segmentation model or machine learning model 300 is evaluated against the manually generated ground truth. Volumes are calculated by multiplying the number of labelled voxels of the respective fat depot with the voxel resolution. The segmentation overlaps between the predicted and ground truth volumes are evaluated by computing the Dice similarity coefficients (0 indicating no overlap and 1 representing 100% overlap).
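The Dice similarity coefficient used for this evaluation can be computed as below; the 2D toy masks are illustrative stand-ins for the predicted and ground truth label volumes.

```python
import numpy as np

def dice_coefficient(pred_mask, gt_mask):
    """Dice similarity: 0 indicates no overlap, 1 indicates 100% overlap."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
d = dice_coefficient(a, b)
print(d)  # 2*2 / (3+3) = 0.666...
```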
  • evaluation metrics including, false positive rate (the number of predicted voxels assigned to a class that have a true label belonging to another class), false negative rate (the proportion of true labelled ground truth voxels for which the model predicted a different class), precision (ratio of the correctly positive predicted voxels to all positive predicted voxels), and sensitivity (ratio of correctly positive predicted voxels to all actual positive voxels) are presented.
  • SSAT: Superficial Subcutaneous Adipose Tissue
  • DSAT: Deep Subcutaneous Adipose Tissue
  • IPAT: Intraperitoneal Adipose Tissue
  • RPAT: Retroperitoneal Adipose Tissue
  • PSAT: Paraspinal Adipose Tissue
  • DICE: Dice Similarity Coefficient
  • FP: False Positive Rate
  • FN: False Negative Rate
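The evaluation metrics listed above can be sketched as follows; normalising the false positive and false negative counts by the predicted-positive and actual-positive totals, respectively, is an assumption about the exact definitions, and the label lists are toy data.

```python
def class_metrics(pred, true, cls):
    """Per-class FP rate, FN rate, precision, and sensitivity for voxel labels."""
    tp = sum(p == cls and t == cls for p, t in zip(pred, true))
    fp = sum(p == cls and t != cls for p, t in zip(pred, true))
    fn = sum(p != cls and t == cls for p, t in zip(pred, true))
    pred_pos, actual_pos = tp + fp, tp + fn
    return {
        "FP": fp / pred_pos if pred_pos else 0.0,      # predicted cls, true label differs
        "FN": fn / actual_pos if actual_pos else 0.0,  # true cls, predicted differently
        "precision": tp / pred_pos if pred_pos else 0.0,
        "sensitivity": tp / actual_pos if actual_pos else 0.0,
    }

# Toy voxel labels: predicted vs ground truth, evaluated for class 2
m = class_metrics([1, 1, 2, 2], [1, 2, 2, 2], cls=2)
print(m["precision"], round(m["sensitivity"], 2))  # 1.0 0.67
```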
  • the proposed method shows high accuracy when compared with the manually created ground truth data with mean Dice similarity scores (5-fold cross-validation) of 98.3%, 97.2%, 96.5%, 96.3%, and 95.9% for SSAT, DSAT, IPAT, RPAT, and PSAT, respectively.
  • the proposed method enables reliable segmentation of individual adipose tissue sub-depots from common medical images such as MRI volumes. Bland-Altman plots (FIGs. 11A to 11E) show the volumetric differences between the ground truth and the predicted volumes for each sub-depot.
  • Model inference time for an abdominal volume was assessed and takes approximately 20 seconds on an Intel Core™ i7-10750H CPU (2.60 GHz) and approximately 1.5 seconds on an NVIDIA V100 GPU (32 GB), which points to a short computational time for prompt adipose tissue segmentation. Further, the short computational time advantageously allows the segmented output to be integrated or incorporated with existing imaging system workflows, and may be presented collectively with the medical images.
  • while anthropometric measurements like BMI and crude measurements of abdominal obesity such as WC and waist-to-hip ratio (WHR) have been adopted to assess the risk and progression of obesity and cardiometabolic disease in clinical care, those methods suffer from limited precision due to the extremely heterogeneous manifestation of obesity. Studies have shown that the metabolic heterogeneity of obesity is closely linked to adipose tissue distribution. Therefore, identifying distinct AAT partitioning patterns to advance risk stratification of obesity, beyond traditional clinical obesity risk assessment methods, is useful.
  • the system may be configured to determine a risk stratification of a metabolic outcome.
  • a method of risk stratification of the metabolic outcome is also disclosed.
  • the method of risk stratification of a metabolic outcome may include determining a risk of the metabolic outcome based on a plurality of volumetric segments in the adipose tissue.
  • the metabolic outcome may include metabolic syndromes, gestational diabetes mellitus (GDM), birth of large for gestational age (LGA) offspring, and diseases such as metabolic diseases, type 2 diabetes, cardiovascular diseases, etc.
  • the method of risk stratification of the metabolic outcome may include determining a risk of the metabolic outcome, such as a disease, based on a relative distribution of the plurality of volumetric segments in the adipose tissue.
  • the risk of a metabolic outcome, such as a disease is determined based on a distribution and/or location of respective ones of the plurality of volumetric segments.
  • a relatively high risk of metabolic disease is determined based on a high amount of intraperitoneal adipose tissue (IPAT) sub-depot volume located at an anterior of the abdomen and a low amount of deep subcutaneous adipose tissue (DSAT) sub-depot volume located at a posterior of the abdomen.
  • a relatively high risk of disease may be determined based on a single adipose tissue sub-depot which is uniformly distributed in specific locations of the abdomen.
  • the disease may include, but is not limited to: type 2 diabetes, cardiovascular disease, metabolic disease, and diseases relating to the liver, kidney, abdomen, etc.
  • the method of risk stratification of the metabolic outcome may include determining a risk of a disease based on a relative quantification of the plurality of volumetric segments in the adipose tissue.
  • a relatively high risk of metabolic disease is determined based on a high amount of intraperitoneal adipose tissue (IPAT) sub-depot volume relative to the deep subcutaneous adipose tissue (DSAT) sub-depot volume.
  • the risk of metabolic disease may be determined based on the presence of specific adipose tissue types.
  • the system 100 may also be configured to determine a risk stratification of cardiometabolic disease.
  • a method of risk stratification of cardiometabolic disease is disclosed.
  • the method of risk stratification of cardiometabolic disease includes determining a risk of cardiometabolic disease of a subject based on a quantification (relative quantification or an absolute quantification) of respective AAT depot or sub-depot of an adipose tissue of the subject.
  • the method may include determining a relative quantification or an absolute quantification of a plurality of volumetric segments corresponding to respective AAT depot or sub-depot in the adipose tissue.
  • the method may utilize the system for and method of classifying adipose tissue as disclosed in previous sections.
  • the method of risk stratification of cardiometabolic disease may further include determining a metabolically unfavourable abdominal adipose tissue distribution responsive to a higher amount of intraperitoneal adipose tissue (IPAT) relative to a reference subject or phenotype, and a lower amount of deep subcutaneous adipose tissue (DSAT) relative to the reference subject or phenotype.
  • the reference subject or phenotype may be a subject of common physiological parameters, characterized by a predominantly normal BMI (58%) and healthy metabolic profile (72%).
  • DBP: diastolic blood pressure
  • SBP: systolic blood pressure
  • HDL-C: high-density lipoprotein cholesterol
  • LDL-C: low-density lipoprotein cholesterol
  • TGs: triglycerides
  • FPG: fasting plasma glucose
  • hsCRP: high-sensitivity C-reactive protein
  • MetS: metabolic syndrome
  • participants defined by the P4 phenotype were characterized by an overall healthy metabolic profile.
  • participants defined by the P1 and P2 phenotypes appeared to be increasingly affected by obesity and showed elevated levels of BMI, WC, WHR, total body fat (BF), and TAAT, compared to P4 (FIG. 13).
  • the P1 and P2 phenotypes showed elevated levels of IMCL, liver fat, hsCRP, and reduced levels of HDL, compared to P4 (FIG. 14). This indicates that the two groups with elevated levels of IPAT, P1 and P2, are characterized by increased obesity and obesity-related metabolic alterations.
  • a healthy abdominal fat partitioning pattern may be defined by a decreased relative amount of IPAT (<14.5%) and an increased relative amount of DSAT (>27.3%) (P4 phenotype). Phenotypes characterized by increased IPAT accumulation (P1 and P2) showed elevated levels of obesity and metabolic impairments.
  • while individuals defined by the P1 and P2 phenotypes showed similar levels of traditional clinical obesity measurements, the P1 phenotype exhibited a higher cardiometabolic risk profile, characterized by increased liver fat, elevated circulating TGs, and decreased HDL-C concentrations, resulting in significantly increased prevalence and relative risk for MetS, and hence increased risk for developing cardiometabolic disease. Therefore, the P1 phenotype of a relatively high or higher amount of intraperitoneal adipose tissue (IPAT) relative to the reference phenotype (P4), and a relatively low or lower amount of deep subcutaneous adipose tissue (DSAT) relative to the reference phenotype, appears to define a metabolically unfavourable adipose tissue partitioning pattern.
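The relative IPAT/DSAT thresholds reported for the healthy P4 pattern (<14.5% relative IPAT, >27.3% relative DSAT) can be encoded as a simple illustrative rule; this is a sketch of the stratification idea observed in this cohort, not a validated clinical classifier, and the function name and default cutoffs are assumptions for illustration.

```python
def is_favourable_partitioning(rel_ipat_pct, rel_dsat_pct,
                               ipat_cutoff=14.5, dsat_cutoff=27.3):
    """Illustrative rule only: the healthy pattern (P4) described above pairs
    a low relative IPAT (<14.5%) with a high relative DSAT (>27.3%). Cutoffs
    come from the cohort analysed in this disclosure, not clinical guidelines."""
    return rel_ipat_pct < ipat_cutoff and rel_dsat_pct > dsat_cutoff

print(is_favourable_partitioning(12.0, 30.0))  # True (P4-like pattern)
print(is_favourable_partitioning(20.0, 22.0))  # False (IPAT-dominant pattern)
```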
  • the system 100 may also be configured to determine a risk stratification of gestation events.
  • a method of risk stratification of gestation events is disclosed according to various embodiments. In some embodiments, the method includes determining a risk of gestational diabetes mellitus (GDM) based on a quantification of respective AAT depot or sub-depot of an adipose tissue of the subject prior to conception. In other embodiments, the method includes determining a risk of birth of large for gestational age (LGA) offspring based on a quantification of respective AAT depot or sub-depot of an adipose tissue of the subject prior to conception.
  • the method may include determining a relative quantification or an absolute quantification of a plurality of volumetric segments corresponding to respective AAT depot or sub-depot in the adipose tissue.
  • The method may utilize the adipose tissue classification method or system as disclosed in previous sections.
  • the method of risk stratification of gestation events may further include determining a metabolically unfavourable abdominal adipose tissue distribution responsive to a higher amount of intraperitoneal adipose tissue (IPAT) relative to a reference subject or phenotype, and a lower amount of deep subcutaneous adipose tissue (DSAT) relative to the reference subject or phenotype.
  • the reference subject or phenotype may be a subject of common physiological parameters, characterized by a predominantly normal BMI (58%) and healthy metabolic profile (72%).
  • GDM was diagnosed by an oral glucose tolerance test according to the IADPSG criteria.
  • LGA offspring was defined by birth weight >90th percentile of the S-PRESTO population.
  • phenotypic odds for developing GDM and having LGA offspring were determined by binary logistic regression models adjusted for age, ethnicity, educational status, parity, and BMI.
  • AAT depots were segmented and quantified from MRI volumes, converted to relative volumes, and expressed as % of total AAT.
  • P4 defined most participants with normal weight (Asian cut-off: BMI < 23) and was considered as the reference group or reference phenotype. Regression results are shown in Table 4. In comparison to P4, the odds for GDM were >5 times higher for individuals in P1 (odds ratio (95% CI): 5.36 (1.12, 27.85)) after adjusting for confounders including prepregnancy BMI (Table S1). Additionally, women categorized as P1 exhibited 4-fold higher odds (4.5 (0.96, 22.00)) for having LGA offspring, compared to P4. While this did not reach statistical significance upon adjusting for BMI (Table 4), it provides an indication of risk of LGA offspring in the P1 phenotype relative to the P4 phenotype. The P2 phenotype was not associated with GDM or LGA.
  • the proposed phenotype risk stratification method collectively enhances metabolic risk stratification in women affected by obesity. While AAT depot-specific expansion mechanisms for maintaining metabolic homeostasis during excess weight gain remain largely unknown, this study is the first to propose a pathomechanism linking reduced DSAT expansion to MetS and GDM. Since MetS, GDM, and T2D share similar underlying obesity-related metabolic impairments (e.g., lipotoxicity and insulin resistance), collectively our findings underscore the critical role of AAT distribution in shaping metabolic health among Asian women.
  • the proposed system and method for phenotyping abdominal obesity using relative measurements of distinct AAT depots improved risk stratification of MetS in an Asian female cohort. Therefore, automated and rapid assessment of distinct AAT depots could not only help to improve understanding of obesity but also improve risk assessment of obesity and obesity-related disease in clinical care. Further, the proposed system and method advantageously provide phenotyping/differentiation of IAAT sub-depots (IPAT, RPAT, and PSAT). Furthermore, three-dimensional volumes can be fed to the convolutional neural network, which enables improved segmentation as anatomical three-dimensional contexts are needed for accurate differentiation of individual adipose tissue depots.


Abstract

A system for classifying adipose tissue and a method of classifying adipose tissue. The system is configured to implement: a segmentation module configured to: acquire a medical image of a subject; segment the medical image into a plurality of volumetric segments using a machine learning model, wherein each of the plurality of volumetric segments includes a respective segmentation mask corresponding to one selected from a plurality of adipose tissue types. The system may also be configured to determine a risk of a metabolic outcome based on the plurality of volumetric segments in the adipose tissue. The method includes acquiring a medical image of a subject; segmenting the medical image into a plurality of volumetric segments using a machine learning model, wherein each of the plurality of volumetric segments comprises a respective segmentation mask corresponding to one selected from a plurality of adipose tissue types.

Description

A SYSTEM FOR AND A METHOD OF CLASSIFYING ADIPOSE TISSUE
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application claims the benefit of priority to the Singapore application no. 10202260168Q filed November 23, 2022, the contents of which are hereby incorporated by reference in their entirety for all purposes.
TECHNICAL FIELD
[0002] This application relates to a system for classifying adipose tissue and a method of classifying adipose tissue.
BACKGROUND
[0003] Obesity has been recognized as a world-wide pandemic and WHO has estimated 2.8 million deaths each year as a result of being overweight or obese. Abdominal obesity is a key independent marker for the development of obesity-related diseases, characterized by excess accumulation of abdominal adipose tissues (AAT). AAT is highly heterogeneous and distinct AAT depots show different associations with cardiometabolic risk factors.
[0004] Abdominal adipose tissue (AAT) can generally be separated into two main depots: subcutaneous adipose tissue (SAT) and intra-abdominal adipose tissue (IAAT). Each of the SAT and IAAT are highly heterogeneous and can be further divided into anatomically distinct sub-depots. For example, SAT may be separated into superficial subcutaneous adipose tissue (SSAT) and deep subcutaneous adipose tissue (DSAT). IAAT may be further separated into intraperitoneal adipose tissue (IPAT), retroperitoneal adipose tissue (RPAT), and paraspinal adipose tissue (PSAT). These individual sub-depots show differences in their lipolytic activity, secretome, and association with obesity related cardiometabolic risk factors and disease.
[0005] As traditional clinical obesity measurements, such as body mass index (BMI) and waist circumference (WC), are not able to capture information about adipose tissue distribution, quantification or classification of individual tissues in AAT is highly challenging and requires expert knowledge, and often is very time consuming and expensive.
SUMMARY
[0006] According to an aspect, disclosed herein is a system for classifying adipose tissue. The system includes memory storing instructions; and a processor coupled to the memory and configured to process the stored instructions to implement: a segmentation module configured to: acquire a medical image of a subject; and segment the medical image into a plurality of volumetric segments using a machine learning model, wherein each of the plurality of volumetric segments comprises a respective segmentation mask corresponding to one selected from a plurality of adipose tissue types.
[0007] In some embodiments, the processor is further configured to determine a metabolic outcome based on the plurality of volumetric segments in the adipose tissue. In some embodiments, the segmentation module is further configured to determine a risk of gestational diabetes mellitus (GDM) based on a quantification of the plurality of volumetric segments in the adipose tissue. In some embodiments, the segmentation module is further configured to determine a risk of birth of large for gestational age (LGA) offspring based on a quantification of the plurality of volumetric segments in the adipose tissue. In some embodiments, the segmentation module is further configured to determine a risk of a disease based on a distribution of the plurality of volumetric segments in the adipose tissue. In some embodiments, the segmentation module is further configured to determine a risk of a disease based on a quantification of the plurality of volumetric segments in the adipose tissue.
[0008] According to another aspect, disclosed herein is a method of classifying adipose tissue. The method includes: acquiring a medical image of a subject; segmenting the medical image into a plurality of volumetric segments using a machine learning model, wherein each of the plurality of volumetric segments comprises a respective segmentation mask corresponding to one selected from a plurality of adipose tissue types.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] Various embodiments of the present disclosure are described below with reference to the following drawings:
[0010] FIG. 1A is a schematic of a system for classifying adipose tissue according to embodiments of the present disclosure;
[0011] FIG. IB is a schematic of a workflow of a method of classifying adipose tissue of the system of FIG. 1 A;
[0012] FIG. 2A is an example of a medical image of a subject;
[0013] FIG. 2B is an example of a segmented output of the medical image using the system of FIG. 1 ;
[0014] FIG. 2C is another example of a segmented output of the medical image using the system of FIG. 1;
[0015] FIG. 2D is yet another example of a segmented output of the medical image using the system of FIG. 1;
[0016] FIGs. 3A and 3B are illustrations of a graphical user interface of the system according to various embodiments;
[0017] FIG. 4A is a flowchart of a method of classifying adipose tissue according to embodiments of the present disclosure;
[0018] FIG. 4B is a schematic workflow of a system for classifying adipose tissue according to embodiments of the present disclosure;
[0019] FIG. 5 is a schematic diagram of a machine learning model according to embodiments of the present disclosure;
[0020] FIGs. 6 to 9 are schematic diagrams of another machine learning model and sublayers according to embodiments of the present disclosure;
[0021] FIG. 10A shows an example of medical image and segmented output of the abdomen of a normal weight subject;
[0022] FIG. 10B shows another example of a medical image and respective segmented output of the abdomen for an overweight participant;
[0023] FIGs. 11A to 11E are Bland-Altman plots showing volumetric differences in cubic centimetres between the ground truth (GT) and the prediction (P) of model 1 for the hold-out test set for superficial subcutaneous (SSAT), deep subcutaneous (DSAT), intraperitoneal (IPAT), retroperitoneal (RPAT), and paraspinal (PSAT) adipose tissue;
[0024] FIGs. 12A to 12H show example medical images and respective segmented outputs of model 1 on a hold-out test set;
[0025] FIG. 13 shows the distributional differences of obesity measurements and abdominal adipose tissue depots between distinct phenotype strata, P1 to P4, where BMI=Body Mass Index, WC=Waist Circumference, WHR=Waist-to-Hip Ratio, TAAT=Total Abdominal Adipose Tissue, SSAT=Superficial Subcutaneous Adipose Tissue, DSAT=Deep Subcutaneous Adipose Tissue, IPAT=Intraperitoneal Adipose Tissue, RPAT=Retroperitoneal Adipose Tissue, and PSAT=Paraspinal Adipose Tissue.
[0026] FIG. 14 shows the distributional differences of ectopic fat depots and clinical measurements between distinct phenotype strata, P1 to P4, where IMCL=Intramyocellular Lipids, TG=Triglyceride, HDL=High-Density Lipoprotein, LDL=Low-Density Lipoprotein, FPG=Fasting Plasma Glucose, DBP=Diastolic Blood Pressure, SBP=Systolic Blood Pressure, hsCRP=High-sensitivity C-Reactive Protein.
DETAILED DESCRIPTION
[0027] The following detailed description is made with reference to the accompanying drawings, showing details and embodiments of the present disclosure for the purposes of illustration. Features that are described in the context of an embodiment may correspondingly be applicable to the same or similar features in the other embodiments, even if not explicitly described in these other embodiments. Additions and/or combinations and/or alternatives as described for a feature in the context of an embodiment may correspondingly be applicable to the same or similar feature in the other embodiments.
[0028] In the context of various embodiments, the articles “a”, “an” and “the” as used with regard to a feature or element include a reference to one or more of the features or elements.
[0029] In the context of various embodiments, the term “about” or “approximately” as applied to a numeric value encompasses the exact value and a reasonable variance as generally understood in the relevant technical field, e.g., within 10% of the specified value.
[0030] As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
[0031] Further, one skilled in the art will recognize that many functional units in this description have been labelled as modules throughout the description. The person skilled in the art will also recognize that a module may be implemented as circuits, logic chips or any sort of discrete component, and multiple modules may be combined into a single module or divided into sub-modules as required without departing from the disclosure. Still further, one skilled in the art will also recognize that a module may be implemented in software which may then be executed by a variety of processors. In embodiments of the disclosure, a module may also comprise computer instructions or executable codes that may instruct a computer processor to carry out a sequence of events based on instructions received. The choice of the implementation of the modules is left as a decision to a person skilled in the art and does not limit the scope of this disclosure in any way.
[0032] For the sake of brevity, the term "machine learning model" may be used to refer to any one or more of the terms "artificial intelligence model", "neural network model ", “deep learning model”, "multi-layer perceptron model", “ResNet model”, “back propagation model”, etc., as will be understood from the context.
[0033] Due to the highly challenging delineation of intra-abdominal adipose tissue (IAAT) sub-depots, IAAT is commonly quantified as a single and unique fat depot. While IAAT has been identified as the main cause or indicator of obesity-related cardiometabolic disease, fat deposition into RPAT may not be as deleterious as fat deposition into IPAT. Biopsy studies have identified that IPAT in particular (omental and mesenteric fat) shows increased lipolytic activity and increased production and secretion of pro-inflammatory cytokines. Since only IPAT drains into the portal circulation (portal vein) while RPAT drains into the systemic circulation (inferior vena cava), it is postulated that an increase of IPAT may contribute to impaired hepatic energy regulation, such as increased gluconeogenesis and production of very low-density lipoproteins. These alterations may then lead to the development of insulin resistance, hyperglycaemia, and hypertriglyceridemia. Similar to IAAT, SAT is heterogeneous, and distinct associations of DSAT and SSAT with cardiometabolic risk factors have been shown. DSAT shares similar deleterious characteristics with IAAT and hence a close association with cardiometabolic risk factors, while SSAT is considered a protective fat storage site. Therefore, to account for the heterogeneity of AAT, accurate delineation and distinction of individual AAT sub-depots play a pivotal role in understanding the complex manifestation of obesity and the assessment of cardiometabolic disease risk factors.
[0034] While IAAT and SAT have been studied in the context of white adipose tissue and cardiometabolic disease, RPAT and PSAT have been identified as potential sites of brown adipose tissue (BAT). Attributed to its non-shivering thermogenic function, BAT contributes significantly to thermogenesis and energy expenditure and has been proposed as a potential target to treat obesity.
Since data on BAT function and location are limited, accurate segmentation of potential BAT storage sites could help to improve the identification and characterization of BAT. In addition, adipose tissue infiltration into the lumbar paraspinal musculature is identified as a pathological phenotype in neuromuscular disease and may be a manifestation in patients with chronic lower back pain and symptomatic lumbar spinal stenosis.
[0035] The present disclosure relates to a system for classifying adipose tissue and a method of classifying adipose tissue. The system and method may be a fully automated system/method for segmenting, quantifying, and visualizing distinct abdominal adipose tissue (AAT) depots or sub-depots from medical imaging data. Medical imaging modalities, such as computed tomography, magnetic resonance imaging (MRI) and electrical impedance tomography, may enable non-invasive imaging of tissue for specific characterization and quantification. The system and classification method disclosed herein utilize medical images of a subject obtained from the medical imaging modalities, and autonomously compute and segment the abdominal adipose tissue into the respective adipose tissue types within a short duration (for example, below 20 seconds).
[0036] The automated and standardized quantification system and method opens up opportunities to better understand obesity and its physiological and pathological phenotypes in a research setting, as well as to enable improved and rapid health assessment in a clinical context. There is a large potential to perform targeted interventions to improve wellness and metabolic health. Further, changes in AAT depots/sub-depots may be evaluated longitudinally or over a duration, in combination with lifestyle and metabolic interventions. In a research setting, large cohort studies and longitudinal studies relevant to abdominal adipose tissue may benefit immensely from utilization of the system/method as disclosed. The system and method of the various embodiments of the present disclosure may also be integrated with various MRI scanners/medical devices to achieve rapid results for various clinical radiological applications and wellness markets.
[0037] The present disclosure demonstrates a method and system for comprehensive classification and quantification of various adipose tissue depots or sub-depots. The detailed quantification of distinct adipose tissue depots or sub-depots may enable phenotypic risk assessment, and may guide diagnosis and surgery planning. The presented disclosure is exemplary in nature and can be expanded to include analysis of neonates, children, and ageing subjects, as well as different populations such as ethnic groups. The technology can be easily integrated with MRI scanners and can be utilized in obesity clinics, metabolic surgeries, and lifestyle interventions. Some examples of clinical application include but are not limited to: risk assessment of cardiometabolic disease in primary and secondary care; obesity/diabetes; childhood obesity; geriatrics; wellness applications (exercise/nutrition); metabolic/oncologic surgeries/cosmetic applications; and real-time surgical applications.
[0038] In some embodiments, the adipose tissue types segmented from the medical image may include AAT depots, such as a subcutaneous adipose tissue (SAT) and an intra-abdominal adipose tissue (IAAT). In some embodiments, the SAT and IAAT depots may further be segmented into respective sub-depots such as a superficial subcutaneous adipose tissue (SSAT); a deep subcutaneous adipose tissue (DSAT); an intraperitoneal adipose tissue (IPAT); a retroperitoneal adipose tissue (RPAT); and a paraspinal adipose tissue (PSAT). Thereafter, based on the segmented adipose tissue types (such as AAT depots or AAT sub-depots), clinical inference or clinical analysis may be performed by a medical practitioner.
[0039] FIG. 1A illustrates a system 100 for classifying adipose tissue, or an adipose tissue classification system, according to various embodiments of the present disclosure. FIG. 1B illustrates a workflow of a method of classifying adipose tissue of the system 100. The system 100 may include a memory which stores instructions, and a computational device, such as a processor coupled to the memory. The processor may be configured to process the stored instructions on the memory to implement a segmentation module 110. The segmentation module 110 may be configured to receive or acquire a medical image 210 of a subject 80 or patient from a medical imaging module or a database storing the medical images. In some embodiments, medical imaging 132 of the subject 80 may be performed via an imaging modality 134 to obtain medical images 210, such as a raw image or raw image volume, which may be stored in the memory of a database or data storage. The segmentation module 110 may be configured to acquire the medical image 210 from the database for image processing 112 or a model application to produce an output.
In some embodiments, the segmentation module 110 may be configured to use a machine learning model 300 to segment the medical image 210 to determine a model output 114, such as a segmented output 220. In some embodiments, the segmented output 220 may be one or more volumetric segments, each of the volumetric segments corresponding to a respective AAT depot or sub-depot in the medical image 210. In some embodiments, each of the volumetric segments may include a respective segmentation mask, each of the segmentation masks corresponding to a respective adipose tissue type, such as a respective AAT depot or sub-depot.
[0040] In some embodiments, the system 100 may be embodied in the form of a workstation, a laptop, a mobile device, a network server, a PACS server, a cloud computing device, etc., which interfaces with or executes the machine learning model 300 of the segmentation module 110. In some embodiments, the segmentation module 110 may carry out an adipose tissue classification method. The segmentation module 110 may be configured to segment the medical image 210 to determine or obtain a segmented output 220 or a segmented medical image. The model output or the segmented output 220 may be displayed via a graphical user interface 120, which provides visual and/or textual representations of the segmented output 220. Thereafter, clinical inferences 140 or clinical analysis may be performed by a medical practitioner. For example, the clinical inferences or analysis may include a diagnostic report, health assessments, surgery planning, intervention planning, personalized medicine, clinical analysis, etc.
[0041] In some embodiments, the system 100 may be integrated or incorporated with an image acquisition module 130, such as an MRI machine or CT machine. This enables the system 100 to be a complete one-stop solution for AAT depot/sub-depot segmentation and characterization.
[0042] In some embodiments, the medical image 210 may be an output from one of the following: a Magnetic Resonance Imaging (MRI) scan, a Computed Tomography (CT) scan, and an electrical impedance tomography scan. In some examples, the medical image 210 may be a three-dimensional (3D) volumetric image, such as a 3D Computed Tomography image. In other examples, the medical image 210 may include a plurality of 2D medical images, thus forming one or more stacks of 2D medical images. In other examples, the medical image 210 may include a point cloud, each point (coordinate) in the point cloud corresponding to a value obtained from a medical imaging process, such as an MRI scan.
[0043] In some embodiments, the segmentation module 110 segments the medical image 210 by classifying each voxel (or a predetermined unit volume) of the medical image 210 with a label which corresponds to an AAT depot or AAT sub-depot. In some embodiments, the label may correspond to one of: a background; SAT; and IAAT. Further, each voxel with the label of SAT may be further classified into, or provided with, another label corresponding to one of: SSAT and DSAT. Similarly, each voxel with the label of IAAT may be further classified into, or provided with, another label corresponding to one of: IPAT; RPAT; and PSAT. In other embodiments, each voxel may be directly classified with a label corresponding to one of: a background; SSAT; DSAT; IPAT; RPAT; and PSAT.
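By way of illustration, the two-stage labelling described above may be sketched as a voxel-wise look-up from fine sub-depot labels to coarse depot labels. The integer label codes below are illustrative assumptions and are not prescribed by the disclosure.

```python
import numpy as np

# Assumed label codes (illustrative only):
# coarse: 0 = background, 1 = SAT, 2 = IAAT
# fine:   0 = background, 1 = SSAT, 2 = DSAT, 3 = IPAT, 4 = RPAT, 5 = PSAT
FINE_TO_COARSE = {0: 0, 1: 1, 2: 1, 3: 2, 4: 2, 5: 2}

def coarse_labels(fine: np.ndarray) -> np.ndarray:
    """Collapse the six fine AAT labels into background/SAT/IAAT voxel-wise."""
    lut = np.array([FINE_TO_COARSE[k] for k in range(6)], dtype=fine.dtype)
    return lut[fine]

fine = np.array([[0, 1, 2], [3, 4, 5]])
print(coarse_labels(fine))  # SSAT/DSAT map to SAT (1); IPAT/RPAT/PSAT map to IAAT (2)
```

Applying the look-up table to the whole label volume keeps the operation vectorized, so the coarse view can be derived from the fine segmentation without re-running the model.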
[0044] Referring to FIG. 2A and 2B, in various embodiments of the present disclosure, the segmented output 220 may include the medical image 210 overlaid with one or more segmentation masks 221/222/223/224/225 to obtain a visualization output. FIG. 2A illustrates the medical image 210 and FIG. 2B illustrates the visualization output which includes the medical image 210 overlaid with the segmentation masks 221/222/223/224/225.
[0045] In some embodiments, each segmentation mask 221/222/223/224/225 may correspond to a respective AAT sub-depot. For example, segmentation mask 221 corresponds to SSAT; segmentation mask 222 corresponds to DSAT; segmentation mask 223 corresponds to IPAT; segmentation mask 224 corresponds to RPAT; and segmentation mask 225 corresponds to PSAT.
[0046] In some embodiments, each segmentation mask is provided with at least one predetermined visually-distinguishable characteristic. Taking FIGs. 2A and 2B as examples, each segmentation mask may be represented by a unique hatching pattern corresponding to the area/volume of each respective AAT sub-depot. Therefore, the segmented output 220 may include segmentation masks of different hatching patterns overlaid onto the medical image 210 for easy visualization. In other examples, referring to FIG. 2C, each segmentation mask may be represented by a unique colour or shade of colour corresponding to the area/volume of each respective AAT sub-depot. Therefore, the segmented output 220 may include segmentation masks of different colours overlaid onto the medical image 210. In other examples, referring to FIG. 2D, each segmentation mask may be represented by an edge or contour corresponding to the area/volume of each respective AAT sub-depot. In alternative embodiments, the segmented output 220 may include only one or more segmentation masks without being overlaid on the medical image 210.
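By way of illustration, overlaying per-label colours onto a greyscale slice may be sketched as an alpha blend. The colour table and opacity value below are illustrative assumptions, not values prescribed by the disclosure.

```python
import numpy as np

# Assumed colour table: one RGB colour per sub-depot label (0 = background untouched).
COLOURS = {1: (255, 0, 0), 2: (0, 255, 0), 3: (0, 0, 255),
           4: (255, 255, 0), 5: (255, 0, 255)}

def overlay(image: np.ndarray, mask: np.ndarray, alpha: float = 0.4) -> np.ndarray:
    """Alpha-blend per-label colours onto a 2D greyscale slice (values 0-255)."""
    rgb = np.stack([image] * 3, axis=-1).astype(float)
    for label, colour in COLOURS.items():
        sel = mask == label
        rgb[sel] = (1 - alpha) * rgb[sel] + alpha * np.array(colour, dtype=float)
    return rgb.astype(np.uint8)
```

Adjusting `alpha` corresponds to the opacity control of the overlaid segmentation masks described for the graphical user interface.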
[0047] In some embodiments, each of the segmentation masks may be rendered as a volumetric segment corresponding to each AAT sub-depot. In other words, each segmentation mask may be a three-dimensional volume. Therefore, the segmented output may include multiple segmentation masks, each being a three-dimensional volume, overlaid onto a three-dimensional medical image, to obtain a three-dimensional segmented output 220.
[0048] In other examples, the segmented output 220 may include data representing or corresponding to segmentation of the medical image 210 into one or more AAT depots or sub-depots. The segmented output 220 may be a two-dimensional surface field or a three-dimensional point cloud which includes values corresponding to each AAT sub-depot. In other examples, the segmented output 220 may include quantitative data, such as the respective tissue-specific volumes, corresponding to each of the AAT depots or sub-depots in the medical image 210. Therefore, the segmented output 220 may include multiple volumetric values, each indicative of a specific volume corresponding to each AAT sub-depot.
[0049] Referring to FIGs. 3A and 3B, in some embodiments of the system 100, a graphical user interface 120 may be provided to display or present the segmented output 220 to enable convenient utilization by a user. The graphical user interface 120 may enable the unprocessed three-dimensional (3D) volume medical image 210 to be loaded. The user may be able to toggle between the medical image 210 (as shown in FIG. 3A) and the segmented output 220 (as shown in FIG. 3B) for better visualization.
[0050] In some embodiments as shown in FIGs. 3A and 3B, the segmented output 220 may be displayed as a visualization output including multiple two-dimensional (2D) images, or in other words, multiple slices of 2D images forming the three-dimensional segmented output 220. Each of the plurality of 2D images may be representative of a cross section of the three-dimensional segmented output 220. In other embodiments, the multiple 2D images corresponding to sections or cross-sections of the three-dimensional segmented output 220 may be displayed via the graphical user interface 120. In other embodiments, the 2D images may be moveable or selectable relative to the three-dimensional segmented output 220.
[0051] In some embodiments, the graphical user interface 120 may allow a user to view each individual slice of the medical image 210 by sliding through the 3D volume of the medical image 210. The user may choose to selectively apply the segmentation model to the volume or to a defined region of interest, using appropriate action buttons in the menu bar. The GUI may then apply the developed algorithm to the image volume. Subsequently, the program may compute the volumes for each fat depot by multiplying the number of labelled voxels of the respective fat depot with the voxel resolution. Quantified volumes may then be displayed within the interface. The segmentation mask may be overlaid on the raw image, highlighting the segmented areas by assigning distinct label colours to the unique fat depots.
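By way of illustration, the volume computation described above (labelled-voxel count multiplied by the voxel resolution) may be sketched as follows. The label codes and voxel spacing below are illustrative assumptions and are not prescribed by the disclosure.

```python
import numpy as np

def depot_volumes_cm3(mask: np.ndarray, spacing_mm=(1.5, 1.5, 5.0)) -> dict:
    """Volume per fat depot: labelled-voxel count x per-voxel volume.

    `spacing_mm` is the voxel resolution in millimetres (assumed values);
    1 cm^3 = 1000 mm^3. Label codes are illustrative.
    """
    voxel_mm3 = float(np.prod(spacing_mm))
    labels = {1: "SSAT", 2: "DSAT", 3: "IPAT", 4: "RPAT", 5: "PSAT"}
    return {name: np.count_nonzero(mask == code) * voxel_mm3 / 1000.0
            for code, name in labels.items()}
```

For example, 200 voxels labelled SSAT at the assumed 1.5 x 1.5 x 5.0 mm resolution correspond to 200 x 11.25 mm^3 = 2.25 cm^3.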
[0052] In some embodiments, the graphical user interface 120 additionally enables visualization options, to investigate the individual adipose tissue depots, such as zooming options, adjustment of the opacity of the overlaid segmentation masks, editing of the segmentation masks, and measurement of regions of interest. The user may then export the produced results in the desired imaging formats.
[0053] FIG. 4A is a flowchart illustrating a method of classifying adipose tissue 400 according to various embodiments of the present disclosure. The method 400 includes in stage 410, acquiring a medical image of a subject. In some examples, the medical image is acquired from a database storing the medical image. In other examples, the medical image may be acquired from a medical imaging module. The method may further include, in stage 420, segmenting the medical image into a plurality of volumetric segments using a machine learning model, wherein each of the plurality of volumetric segments comprises a respective segmentation mask corresponding to one selected from a plurality of adipose tissue types.
[0054] In some embodiments, the method 400 may further include, in stage 430, classifying each voxel of the medical image with a label. In some embodiments, the label may correspond to one of: a background; a subcutaneous adipose tissue (SAT); and an intra-abdominal adipose tissue (IAAT). Each voxel with the label of subcutaneous adipose tissue (SAT) may be further classified into one of: a superficial subcutaneous adipose tissue (SSAT) and a deep subcutaneous adipose tissue (DSAT). Each voxel with the label of intra-abdominal adipose tissue (IAAT) may be further classified into one of: an intraperitoneal adipose tissue (IPAT); a retroperitoneal adipose tissue (RPAT); and a paraspinal adipose tissue (PSAT).
[0055] In some embodiments, the method 400 also includes providing a respective specific volume corresponding to each of the plurality of volumetric segments. The method 400 may further include converting a volume of each of the plurality of volumetric segments into a respective relative volume expressed as a percentage of a combined volume of all of the plurality of volumetric segments.
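By way of illustration, the conversion of absolute depot volumes into relative volumes, expressed as a percentage of the combined volume of all volumetric segments, may be sketched as follows; the function name is illustrative.

```python
def relative_volumes(volumes: dict) -> dict:
    """Express each depot volume as a percentage of the combined volume
    of all depots. Returns zeros if the combined volume is zero."""
    total = sum(volumes.values())
    if total == 0:
        return {name: 0.0 for name in volumes}
    return {name: 100.0 * v / total for name, v in volumes.items()}

# A hypothetical example: two depots of 3.0 and 1.0 cm^3.
print(relative_volumes({"SSAT": 3.0, "DSAT": 1.0}))  # {'SSAT': 75.0, 'DSAT': 25.0}
```

The resulting percentages sum to 100, which makes depot composition comparable across subjects of different body size.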
[0056] In some embodiments, segmenting of the medical image may include: down-sampling the medical image for at least one down-sampling iteration to obtain an intermediate representation of the medical image; and up-sampling the intermediate representation of the medical image for at least one up-sampling iteration to obtain the plurality of volumetric segments. Further, the method may include providing a skip connection from one of the at least one down-sampling iteration to a corresponding at least one up-sampling iteration. In some embodiments, the method 400 includes training the machine learning model with training data, wherein the training data includes augmented data. Further, the method 400 may include normalizing an image volume of the medical image.
[0057] In some embodiments, the method 400 may further include, in stage 440, overlaying the segmentation masks on the medical image to obtain a visualization output, wherein each segmentation mask is provided with at least one predetermined visually-distinguishable characteristic. Further, the method 400 may include displaying the visualization output as a plurality of two-dimensional (2D) images, wherein each of the plurality of 2D images is representative of a cross section of the visualization output.
[0058] FIG. 4B illustrates embodiments of an overall workflow of a method of classifying adipose tissue according to embodiments of the present disclosure. In the process of adipose tissue classification, one or more medical images 210 may be acquired and/or received by the segmentation module 110 from a database or a data source. The medical images 210 may first be pre-processed by an image pre-processor 116 prior to being input to a machine learning model 300. In some embodiments, the machine learning model 300 may receive the medical images 210 sequentially to classify each medical image into the various AAT depots or sub-depots. Alternatively, the machine learning model 300 may receive the medical images 210 concurrently to classify each medical image into the various AAT depots or sub-depots. Upon classification, the segmentation module 110 outputs one or more segmented outputs 220. As previously described, the segmented output 220 may include tissue-specific volumes corresponding to each classification of adipose tissue. Further, the segmented output 220 may also include multiple segmentation masks, each segmentation mask corresponding to one adipose tissue type. The segmented output 220 may be displayed by the graphical user interface 120. The graphical user interface 120 may include editing tools or editing functions such as an image editor, thus allowing a user to enhance the segmented output 220 to improve its visual representation. For example, the image editor may allow the user to alter or darken an interface between two AAT sub-depots or to change the colour of representation of each AAT sub-depot. In some embodiments, the graphical user interface may also include visualization tools or visualization functions such as 3D visualization of the segmented output via augmented reality, or visualization functions such as 2D slicing planes, etc.
[0059] FIG. 5 illustrates an embodiment of a machine learning model 300 or segmentation model according to various embodiments of the disclosure. The machine learning model 300 may be a three-dimensional (3D) deep convolutional neural network. The machine learning model 300 may include an input block 310, one or more down-sampling blocks 320, and one or more up-sampling blocks 330, connected in series or in sequence. In some embodiments, an input received by each down-sampling block 320 is down-sampled or down-scaled and provided as a respective output to a subsequent block, such as another down-sampling block 320. In some embodiments, the last of the down-sampling blocks 320 may output an intermediate representation 215 of the medical image 210 into the first of the up-sampling blocks 330. In some examples, the output of the down-sampling block 320 has a lower resolution or lower data size in comparison to the input received by the same down-sampling block 320. In some embodiments, an input received by each up-sampling block 330 is up-sampled or up-scaled and provided as a respective output to a subsequent block, such as another up-sampling block 330. In some examples, the output of the up-sampling block 330 has a higher resolution or higher data size in comparison to the input received by the same up-sampling block 330. In some embodiments, the last of the up-sampling blocks 330 may output the segmented output 220 and/or the segmentation masks.
[0060] In some embodiments, one or more of the down-sampling blocks 320 may provide a skip connection 325 input to a respective up-sampling block 330. In other words, a skip connection 325 may be provided from one or more of the down-sampling blocks 320 to a respective one or more up-sampling blocks 330. In some embodiments, a skip connection 325 is provided between each pair of corresponding down-sampling block 320 and up-sampling block 330. In some embodiments, a skip connection 325 is provided between the input block 310 and the last of the up-sampling blocks 330. In some embodiments, skip connections 325 are selectively provided between selected pairs of down-sampling blocks 320 and up-sampling blocks 330.
[0061] In some embodiments, the down-sampling blocks 320 and/or the up-sampling blocks 330 may have different block dimensions. In some embodiments, one or more of the down-sampling blocks 320 may include sub-blocks such as: a convolution layer; a 3D convolution layer; a 1x1x1 convolution layer; an instance normalization layer; and a Leaky ReLU activation function. In some embodiments, one or more of the up-sampling blocks 330 may include sub-blocks such as: a trilinear interpolation layer; a convolution layer; a concatenate layer; a 3D convolution layer; an instance normalization layer; and a Leaky ReLU activation function. In some embodiments, the up-sampling blocks 330 may also include a skip connection input 325 and/or a feed-forward connection. In some embodiments, each convolution layer may include sub-blocks such as: a 3D convolution layer; an instance normalization layer; and a Leaky ReLU activation function.
[0062] FIGs. 6 to 9 illustrate a machine learning model 300 architecture according to embodiments of the disclosure. The machine learning model 300 may be a deep learning based AAT quantification model. The machine learning model 300 may be a ResNet-based 3D U-Net implemented to segment the distinct AAT depots.
[0063] In some embodiments, the machine learning model 300 includes 11 building blocks, containing a total of 59,145,102 trainable parameters. As shown in FIG. 6, in a specific example, the machine learning model 300 includes one input block 310, five down-sampling blocks 320, and five up-sampling blocks 330. Still referring to FIG. 6, the numbers in the parentheses within the blocks indicate the dimensions of the respective block: (x, y, z).
[0064] Convolutional operations may be performed in a sequence of 3D convolution 350 and instance normalization 360, followed by a Leaky ReLU activation 370. Trilinear interpolation 380 may be used to up-sample feature maps within the decoder path. Down-sampling is performed with a stride operation of two in the final convolutional layer in each block. The network may be configured to first pool over the x- and y-axes until their dimensions match the z-axis; thereafter, all axes are down-sampled synchronously. After each down-sampling block 320, the number of convolutional filters is doubled, starting with 24 convolutional filters in the first block and reaching 768 convolutional filters in the deepest block. In some embodiments, Glorot uniform initialization is used for weight initialization.
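The filter-doubling scheme described above implies the deepest width follows directly from the first: with 24 filters in the first block and five down-sampling steps, the deepest block has 24 x 2^5 = 768 filters. A short sketch, with the depth indexing as an illustrative assumption:

```python
def filters_at_depth(base: int = 24, depth: int = 0) -> int:
    """Number of convolutional filters after `depth` down-sampling steps,
    doubling at each step (24 filters in the first block, per the text)."""
    return base * (2 ** depth)

# Five down-sampling blocks after the input block give the stated deepest width:
print([filters_at_depth(depth=d) for d in range(6)])
# [24, 48, 96, 192, 384, 768]
```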
[0065] While the described example is specific to MRI data as the input medical image, the model may be translated to other imaging modalities such as computed tomography (CT) images or data. The segmentation task of the method is defined as a voxel-wise classification of AAT into background, SSAT, DSAT, IPAT, RPAT, and PSAT. In some embodiments, the machine learning model 300 may be trained on manually generated, expert-segmented data. The weights in the machine learning model 300 may be tuned or adjusted using backpropagation algorithms. In some embodiments, the Adam optimizer is used to minimize the loss function, which is defined as a label-wise summation of the binary cross entropy and the generalized Dice loss as follows:
Loss = Σ_c [ BCE(P_c, GT_c) + GDL(P_c, GT_c) ]
wherein P_c is the probability matrix for class c and GT_c is the corresponding ground truth matrix. Hyper-parameters, including batch size, learning rate, and patience, may be determined empirically. In some embodiments, the training parameters are found to be a batch size of 2, a learning rate of 1 x 10^-4, and a patience of 40.
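The label-wise combination of binary cross-entropy and Dice loss described above may be sketched in NumPy as follows. The exact formulation of the generalized Dice term is not reproduced in the text, so the expressions below follow common conventions and are an assumption.

```python
import numpy as np

def combined_loss(p: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Label-wise sum of binary cross-entropy and a Dice loss.

    p, gt: arrays of shape (num_labels, ...) holding per-class predicted
    probabilities and one-hot ground truth. This is a standard formulation,
    assumed for illustration.
    """
    loss = 0.0
    for c in range(p.shape[0]):
        pc, gc = np.clip(p[c], eps, 1 - eps), gt[c]
        # Binary cross-entropy for class c, averaged over voxels.
        bce = -np.mean(gc * np.log(pc) + (1 - gc) * np.log(1 - pc))
        # Soft Dice loss for class c (1 minus the soft Dice coefficient).
        dice = 1 - (2 * np.sum(pc * gc) + eps) / (np.sum(pc) + np.sum(gc) + eps)
        loss += bce + dice
    return float(loss)
```

A perfect prediction drives both terms towards zero, while a uniform (uninformative) prediction leaves a substantial residual loss, which is the property the optimizer exploits during training.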
[0066] Imaging volumes are normalized to a scale of 0-1. Patches with a shape of 320x320x40 are cropped randomly from each image volume for each training cycle. Data augmentation is applied on the fly during training. Augmentation methods are implemented using the TorchIO library and include the addition of noise, affine and elastic transformations, induction of bias field inhomogeneity, ghosting to simulate respiratory and cardiac motion, and additional motion artifacts to mimic patient movements. The final model configuration is then evaluated using 5-fold cross-validation. In addition, the model is tested on an external hold-out set (N=89).
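The normalization and random patch cropping described above may be sketched as follows. The 320x320x40 patch shape is taken from the text; the min-max normalization form and the random-number generator are illustrative assumptions.

```python
import numpy as np

def normalize(volume: np.ndarray) -> np.ndarray:
    """Min-max normalize an image volume to the 0-1 range."""
    v = volume.astype(float)
    lo, hi = v.min(), v.max()
    return (v - lo) / (hi - lo) if hi > lo else np.zeros_like(v)

def random_patch(volume: np.ndarray, shape=(320, 320, 40), rng=None) -> np.ndarray:
    """Crop a random patch of the given shape (320x320x40 in the text)."""
    rng = rng or np.random.default_rng()
    starts = [rng.integers(0, d - s + 1) for d, s in zip(volume.shape, shape)]
    return volume[tuple(slice(st, st + s) for st, s in zip(starts, shape))]
```

In practice a fresh patch is drawn per training cycle, so each epoch sees a different spatial crop of every volume.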
[0067] FIG. 10A shows an exemplary segmented output 220, generated using the proposed method and machine learning model 300, of the abdomen of a normal-weight subject. Further, the segmented output 220 is presented as a three-dimensional volume showing different slices/cross-sections of the segmented output 220. FIG. 10B shows another exemplary segmented output 220 of the abdomen for an overweight participant. Referring to FIG. 10B, the segmented output 220 shows a larger volume (both relative volume and absolute volume) of IAAT in comparison to the segmented output 220 of FIG. 10A.
[0068] The accuracy of the segmentation model or machine learning model 300 is evaluated against the manually generated ground truth. Volumes are calculated by multiplying the number of labelled voxels of the respective fat depot with the voxel resolution. The segmentation overlaps between the predicted and ground truth volumes are evaluated by computing Dice similarity coefficients (0 indicating no overlap and 1 representing 100% overlap). Further evaluation metrics are presented, including the false positive rate (the number of predicted voxels assigned to a class that have a true label belonging to another class), the false negative rate (the proportion of true labelled ground truth voxels for which the model predicted a different class), precision (the ratio of correctly positive predicted voxels to all positive predicted voxels), and sensitivity (the ratio of correctly positive predicted voxels to all actual positive voxels).
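The per-label evaluation metrics defined above may be sketched directly from the voxel-wise confusion counts; the function name and label coding are illustrative.

```python
import numpy as np

def label_metrics(pred: np.ndarray, gt: np.ndarray, label: int) -> dict:
    """Dice, precision, and sensitivity for one label, per the definitions above."""
    p, g = pred == label, gt == label
    tp = np.count_nonzero(p & g)   # correctly positive predicted voxels
    fp = np.count_nonzero(p & ~g)  # predicted positive, truly another class
    fn = np.count_nonzero(~p & g)  # truly positive, predicted another class
    return {
        "dice": 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0,
        "precision": tp / (tp + fp) if (tp + fp) else 0.0,
        "sensitivity": tp / (tp + fn) if (tp + fn) else 0.0,
    }
```

The Dice coefficient equals the harmonic mean of precision and sensitivity, which is why the three metrics in Tables 1 and 2 track each other closely.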
[0069] The evaluation metrics are computed for each individual label/fat depot. Results for the 5-fold experiment are presented in Table 1. To assess the volumetric agreement with the ground truth data, one model of the 5-fold experiment is randomly selected and further evaluated, denoted by model 1 in the following. Numerical results are displayed in Table 2 and volumetric agreement is illustrated using Bland-Altman plots in FIGs. 11A to 11E. FIGs. 12A to 12H further show example medical images 210 and respective segmented outputs 220 of model 1 on the hold-out test set.
Table 1: 5-fold cross-validation mean performance

Fat Depot | Dice (%)     | FP (%)      | FN (%)      | Precision (%) | Sensitivity (%)
SSAT      | 98.31 ± 0.08 | 1.58 ± 0.23 | 0.17 ± 0.03 | 98.21 ± 0.20  | 98.21 ± 0.24
DSAT      | 97.23 ± 0.13 | 2.84 ± 0.31 | 0.16 ± 0.03 | 97.33 ± 0.50  | 96.95 ± 0.48
IPAT      | 96.45 ± 0.22 | 3.67 ± 0.67 | 0.07 ± 0.01 | 96.60 ± 0.27  | 97.01 ± 0.37
RPAT      | 96.25 ± 0.25 | 3.70 ± 0.20 | 0.05 ± 0.01 | 96.21 ± 0.50  | 96.63 ± 0.17
PSAT      | 95.91 ± 0.34 | 4.44 ± 0.43 | 0.02 ± 0.00 | 96.28 ± 0.36  | 95.57 ± 0.45

Quantitative evaluation metrics: all values are in %, presented as mean ± standard deviation. SSAT = Superficial Subcutaneous Adipose Tissue, DSAT = Deep Subcutaneous Adipose Tissue, IPAT = Intraperitoneal Adipose Tissue, RPAT = Retroperitoneal Adipose Tissue, PSAT = Paraspinal Adipose Tissue, Dice = Dice Similarity Coefficient, FP = False Positive Rate, FN = False Negative Rate.
Table 2: Evaluation results on hold-out test set

Fat Depot | Dice (%)     | FP (%)      | FN (%)      | Precision (%) | Sensitivity (%) | Volume Prediction (cm³)    | Volume Ground Truth (cm³)
SSAT      | 98.21 ± 0.05 | 1.71 ± 0.20 | 0.19 ± 0.02 | 98.15 ± 0.15  | 98.19 ± 0.16    | 1660.38 [1200.10, 2349.62] | 1627.34 [1197.39, 2360.29]
DSAT      | 97.29 ± 0.07 | 2.77 ± 0.31 | 0.17 ± 0.02 | 97.38 ± 0.37  | 97.05 ± 0.32    | 1072.26 [781.31, 1533.57]  | 1083.95 [777.22, 1520.58]
IPAT      | 96.76 ± 0.15 | 3.37 ± 0.50 | 0.07 ± 0.00 | 96.91 ± 0.23  | 97.22 ± 0.32    | 433.91 [277.13, 829.31]    | 441.86 [275.97, 825.18]
RPAT      | 96.50 ± 0.09 | 3.52 ± 0.24 | 0.06 ± 0.00 | 96.53 ± 0.26  | 96.73 ± 0.20    | 336.51 [239.82, 239.82]    | 333.65 [238.01, 451.82]
PSAT      | 96.08 ± 0.19 | 4.24 ± 0.40 | 0.02 ± 0.00 | 96.41 ± 0.38  | 95.78 ± 0.39    | 109.54 [91.36, 126.19]     | 109.61 [91.55, 125.93]

Quantitative evaluation metrics are in %, presented as mean ± standard deviation. Volumes are presented in cm³, median [Q1, Q3]. SSAT = Superficial Subcutaneous Adipose Tissue, DSAT = Deep Subcutaneous Adipose Tissue, IPAT = Intraperitoneal Adipose Tissue, RPAT = Retroperitoneal Adipose Tissue, PSAT = Paraspinal Adipose Tissue, Dice = Dice Similarity Coefficient, FP = False Positive Rate, FN = False Negative Rate.
[0070] The proposed method shows high accuracy when compared with the manually created ground truth data, with mean Dice similarity scores (5-fold cross-validation) of 98.3%, 97.2%, 96.5%, 96.3%, and 95.9% for SSAT, DSAT, IPAT, RPAT, and PSAT, respectively. The proposed method enables reliable segmentation of individual adipose tissue sub-depots from common medical images such as MRI volumes. Bland-Altman plots (FIGs. 11A to 11E) show a slight overestimation for SSAT (mean difference: 12.63 ± 51.20 cm3), IPAT (mean difference: 0.92 ± 7.66 cm3), and PSAT (mean difference: 0.72 ± 2.44 cm3), and slight underestimation for DSAT (mean difference: 15.23 ± 50.76 cm3) and RPAT (mean difference: 0.12 ± 5.85 cm3). With respect to depot-specific median values, the above over- and under-estimations are not significant, as they only present an estimation error of 0.78%, 1.4%, 0.21%, 0.04%, and 0.66% for SSAT, DSAT, IPAT, RPAT, and PSAT, respectively. Model inference time, for an abdominal volume, was assessed and takes approximately 20 seconds and 1.5 seconds on an Intel Core i7-10750H CPU (@ 2.60 GHz) and an NVIDIA V100 GPU (32GB), respectively, which points to a short computational time for prompt adipose tissue segmentation. Further, the short computational time advantageously allows the segmented output to be integrated or incorporated with existing imaging system workflows, and the output may be presented collectively with the medical images.
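The Bland-Altman agreement statistics referenced above (mean difference ± standard deviation of differences) can be computed as in the following sketch; the paired volumes passed in are hypothetical placeholders for per-subject depot volumes from the hold-out test set.

```python
import numpy as np

def bland_altman_stats(predicted, ground_truth):
    """Mean difference (bias) and 95% limits of agreement between predicted
    and ground-truth depot volumes, as plotted in a Bland-Altman analysis."""
    diff = np.asarray(predicted, dtype=float) - np.asarray(ground_truth, dtype=float)
    mean_diff = diff.mean()        # > 0: over-estimation; < 0: under-estimation
    sd_diff = diff.std(ddof=1)     # sample standard deviation of differences
    limits = (mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff)
    return mean_diff, sd_diff, limits

# hypothetical paired volumes in cm^3 for three subjects
mean_diff, sd_diff, (lo, hi) = bland_altman_stats([102.0, 98.0, 101.0],
                                                  [100.0, 100.0, 100.0])
```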
[0071] While anthropometric measurements like BMI and crude measurements of abdominal obesity such as WC and waist-to-hip ratio (WHR) have been adopted to assess the risk and progression of obesity and cardiometabolic disease in clinical care, these methods lack precision due to the extremely heterogeneous manifestation of obesity. Studies have shown that the metabolic heterogeneity of obesity is closely linked to adipose tissue distribution. Therefore, identifying distinct AAT partitioning patterns to advance risk stratification of obesity beyond traditional clinical obesity risk assessment methods is useful.
[0072] According to another aspect, the system may be configured to determine a risk stratification of a metabolic outcome. A method of risk stratification of the metabolic outcome is also disclosed. The method may include determining a risk of the metabolic outcome based on a plurality of volumetric segments in the adipose tissue. As examples, the metabolic outcome may include metabolic syndromes, gestational diabetes mellitus (GDM), birth of large for gestational age (LGA) offspring, and diseases such as metabolic diseases, type 2 diabetes, cardiovascular diseases, etc.
[0073] In some embodiments, the method of risk stratification of the metabolic outcome may include determining a risk of the metabolic outcome, such as a disease, based on a relative distribution of the plurality of volumetric segments in the adipose tissue. In other words, the risk of a metabolic outcome, such as a disease, is determined based on a distribution and/or location of respective ones of the plurality of volumetric segments. For example, a relatively high risk of metabolic disease is determined based on a high amount of intraperitoneal adipose tissue (IPAT) sub-depot volume located at an anterior of the abdomen and a low amount of deep subcutaneous adipose tissue (DSAT) sub-depot volume located at a posterior of the abdomen. In other embodiments, a relatively high risk of disease may be determined based on a single adipose tissue sub-depot which is uniformly distributed in specific locations of the abdomen. In some examples, the disease may include, but is not limited to: type two diabetes, cardiovascular disease, metabolic disease, and diseases relating to the liver, kidney, abdomen, etc.
[0074] In other embodiments, the method of risk stratification of the metabolic outcome may include determining a risk of a disease based on a relative quantification of the plurality of volumetric segments in the adipose tissue. As an example, a relatively high risk of metabolic disease is determined based on a high amount of intraperitoneal adipose tissue (IPAT) sub-depot volume relative to the deep subcutaneous adipose tissue (DSAT) sub-depot volume. In some examples, the risk of metabolic disease may be determined based on the presence of specific adipose tissue types.
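A minimal illustration of such a relative-quantification rule is sketched below; the 0.5 ratio threshold is a hypothetical placeholder rather than a value taken from this disclosure, and a clinically validated cut-off would be used in practice.

```python
def ipat_to_dsat_flag(ipat_volume_cm3, dsat_volume_cm3, ratio_threshold=0.5):
    """Flag a relatively high metabolic risk when the IPAT sub-depot volume
    is large relative to the DSAT sub-depot volume. The default threshold
    is a hypothetical placeholder, not a value from this disclosure."""
    ratio = ipat_volume_cm3 / dsat_volume_cm3
    return ratio > ratio_threshold, ratio

# hypothetical depot volumes in cm^3
high_risk, ratio = ipat_to_dsat_flag(600.0, 1000.0)
```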
[0075] The system 100 may also be configured to determine a risk stratification of cardiometabolic disease. A method of risk stratification of cardiometabolic disease is disclosed. The method includes determining a risk of cardiometabolic disease of a subject based on a quantification (a relative quantification or an absolute quantification) of respective AAT depots or sub-depots of an adipose tissue of the subject. The method may include determining a relative quantification or an absolute quantification of a plurality of volumetric segments corresponding to respective AAT depots or sub-depots in the adipose tissue. The method may utilize the system for and method of classifying adipose tissue as disclosed in previous sections. In some embodiments, the method of risk stratification of cardiometabolic disease may further include determining a metabolically unfavourable abdominal adipose tissue distribution responsive to a higher amount of intraperitoneal adipose tissue (IPAT) relative to a reference subject or phenotype, and a lower amount of deep subcutaneous adipose tissue (DSAT) relative to the reference subject or phenotype. The reference subject or phenotype may be a subject of common physiological parameters, characterized by a predominantly normal BMI (58%) and healthy metabolic profile (72%).

[0076] The following examples illustrate the importance and usefulness of the method of risk stratification by quantifying distinct AAT depots, including SSAT, DSAT, IPAT, and RPAT, to improve risk stratification of cardiometabolic disease. Abdominal MRI and clinical measurements were collected from 385 participants. The MRI data were processed by the proposed segmentation method to segment and quantify SSAT, DSAT, IPAT, and RPAT tissue volumes. Clinical measurements included diastolic (DBP) and systolic (SBP) blood pressure, high density lipoprotein cholesterol (HDL-C), low density lipoprotein cholesterol (LDL-C), triglycerides (TGs), fasting plasma glucose (FPG), and high-sensitivity C-reactive protein (hsCRP), as well as measurements of ectopic fat depots such as liver fat and intramyocellular lipids (IMCL), and traditional obesity measurements like BMI, WC, WHR, and total body fat (BF). Metabolic Syndrome (MetS) was defined, as per the International Diabetes Federation criteria, as the primary outcome variable to assess cardiometabolic disease risk.
[0077] To phenotype abdominal obesity, absolute volumes of distinct AAT depots were converted to relative volumes, expressed as a percentage of total AAT (TAAT). Receiver operating characteristic (ROC) statistics were computed to assess the diagnostic power of the individual AAT measurements for risk stratification of MetS. Optimal AAT depot-specific cut-offs that maximize Youden's index (YI) for classifying MetS were selected. AAT depots with diagnostic value (YI>0.5) were used to create unique strata for phenotyping and risk stratification of abdominal obesity.
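The cut-off selection described in this paragraph can be sketched as a brute-force search for the threshold maximizing Youden's index over candidate values; the %IPAT values and outcome labels below are hypothetical.

```python
import numpy as np

def youden_optimal_cutoff(values, outcome):
    """Select the cut-off on one AAT measurement (e.g. %IPAT) that maximizes
    Youden's index J = sensitivity + specificity - 1 for a binary outcome
    (e.g. MetS). A brute-force sketch of the ROC-based selection."""
    values = np.asarray(values, dtype=float)
    outcome = np.asarray(outcome, dtype=bool)
    best_j, best_cut = -1.0, None
    for cut in np.unique(values):
        pred = values >= cut  # classify as "high" at or above the candidate cut-off
        sensitivity = (pred & outcome).sum() / outcome.sum()
        specificity = (~pred & ~outcome).sum() / (~outcome).sum()
        j = sensitivity + specificity - 1.0
        if j > best_j:
            best_j, best_cut = j, float(cut)
    return best_cut, best_j

# hypothetical relative %IPAT values and MetS status (1 = MetS)
cut, j = youden_optimal_cutoff([5, 8, 10, 16, 20, 25], [0, 0, 0, 1, 1, 1])
```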
[0078] Only relative measurements of DSAT and IPAT showed diagnostic value for MetS, with optimal cut-off values of 27% and 15%, respectively; hence phenotypic stratification into specific AAT partitioning patterns was only further investigated using DSAT and IPAT. First, DSAT and IPAT measurements were dichotomized using the optimal cut-off values. Subsequently, participants were stratified into four possible phenotype categories: P1 (high IPAT, low DSAT), P2 (high IPAT, high DSAT), P3 (low IPAT, low DSAT), and P4 (low IPAT, high DSAT). As illustrated in Table 3, the P4 phenotype, characterized by predominantly normal BMI (58%) and healthy metabolic profile (72%), defined most of the study participants (245 out of 385); hence P4 was chosen as the reference group or reference phenotype in the following section.
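The dichotomization into the four phenotype categories can be expressed as a simple rule, sketched below using the rounded cut-off values reported in this paragraph (27% DSAT, 15% IPAT); the example inputs are hypothetical.

```python
def aat_phenotype(dsat_pct, ipat_pct, dsat_cut=27.0, ipat_cut=15.0):
    """Assign one of the four abdominal-obesity phenotypes from relative
    DSAT and IPAT volumes (% of total AAT), using the depot-specific
    cut-offs reported in the text."""
    high_ipat = ipat_pct > ipat_cut
    high_dsat = dsat_pct > dsat_cut
    if high_ipat and not high_dsat:
        return "P1"  # high IPAT, low DSAT: metabolically unfavourable pattern
    if high_ipat and high_dsat:
        return "P2"  # high IPAT, high DSAT
    if not high_ipat and not high_dsat:
        return "P3"  # low IPAT, low DSAT
    return "P4"      # low IPAT, high DSAT: reference phenotype

# hypothetical participant: 22% DSAT, 18% IPAT
phenotype = aat_phenotype(dsat_pct=22.0, ipat_pct=18.0)
```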
[0079] Participants' demographics as well as the prevalence and relative risk for MetS for all unique phenotypes are shown in Table 3. Groupwise comparisons of the distributions of distinct obesity and clinical measurements between the individual phenotypes are shown in FIGs. 13 and 14, respectively. Participants defined by the P3 phenotype (N=10) showed the lowest levels for all obesity measurements and similar characteristics for all clinical measurements as group P4 (FIGs. 13 and 14); therefore, in the following elucidation P3 is not contrasted against the other groups. Only the P1 and P2 phenotypes are contrasted with each other and with the P4 phenotype.
[0080] While participants defined by the P4 phenotype were characterized by an overall healthy metabolic profile, participants defined by the P1 and P2 phenotypes appeared to be increasingly affected by obesity and showed elevated levels of BMI, WC, WHR, total body fat (BF), and TAAT, compared to P4 (FIG. 13). Additionally, the P1 and P2 phenotypes showed elevated levels of IMCL, liver fat, hsCRP, and reduced levels of HDL, compared to P4 (FIG. 14). This indicated that the two groups with elevated levels of IPAT, P1 and P2, are characterized by increased obesity and obesity-related metabolic alterations. Interestingly, individuals defined by P1 (high IPAT, low DSAT) showed more than 4 times higher prevalence of MetS compared to individuals defined by P2 (high IPAT, high DSAT) (MetS prevalence: P1=34.7% vs. P2=9.1%). Furthermore, the relative risk (RR) for MetS was more than 3-fold higher for individuals defined by the P1 phenotype compared to individuals characterized by P2 (MetS RR: P1=17.4% vs. P2=4.6%), with reference to P4. While RR and prevalence of MetS were significantly elevated in individuals defined by P1, there was no statistically significant difference in traditional obesity measurements, including BMI, WC, WHR, BF, and TAAT, when compared to individuals in P2.
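The prevalence and relative-risk comparison above follows the standard computation sketched here; the group counts are hypothetical, chosen only to produce magnitudes similar to those reported for P1 versus the P4 reference.

```python
def prevalence_and_relative_risk(cases_group, n_group, cases_ref, n_ref):
    """Prevalence of MetS in a phenotype group and its relative risk (RR)
    versus the reference phenotype: RR = prevalence_group / prevalence_ref."""
    prevalence_group = cases_group / n_group  # e.g. MetS cases among P1
    prevalence_ref = cases_ref / n_ref        # e.g. MetS cases among P4
    return prevalence_group, prevalence_group / prevalence_ref

# hypothetical counts: 26/75 MetS cases in the group vs. 5/245 in the reference
prev_p1, rr_p1 = prevalence_and_relative_risk(26, 75, 5, 245)
```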
[0081] As described above, in a cohort of Asian women aged 21-44 years, the utility of profiling distinct AAT partitioning patterns was illustrated, with consideration of SSAT, DSAT, IPAT, and RPAT, for risk assessment of cardiometabolic disease. A healthy abdominal fat partitioning pattern may be defined by a decreased relative amount of IPAT (<14.5%) and increased relative amount of DSAT (>27.3%) (P4 phenotype). Phenotypes characterized by increased IPAT accumulation (P1 and P2) showed elevated levels of obesity and metabolic impairments. While individuals defined by the P1 and P2 phenotypes showed similar levels of traditional clinical obesity measurements, the P1 phenotype exhibited a higher cardiometabolic risk profile, characterized by increased liver fat, elevated circulating TGs, and decreased HDL-C concentrations, resulting in significantly increased prevalence and relative risk for MetS, and hence increased risk for developing cardiometabolic disease. Therefore, the P1 phenotype of a relatively high or higher amount of intraperitoneal adipose tissue (IPAT) relative to the reference phenotype (P4), and a relatively low or lower amount of deep subcutaneous adipose tissue (DSAT) relative to the reference phenotype, appears to define a metabolically unfavourable adipose tissue partitioning pattern.
[0082] In relation to the above risk stratification of cardiometabolic disease for the general population, maternal obesity similarly poses a higher short- and long-term risk for both maternal and child health. During pregnancy, disruptions in energy storage capacity and supply can result in increased risk for adverse metabolic pregnancy events or gestation events, such as gestational diabetes mellitus (GDM). Moreover, impaired maternal metabolic homeostasis can also lead to inadequate fetal development, including the birth of large for gestational age (LGA) offspring. The previously identified metabolically unfavorable abdominal adipose tissue distribution pattern (P1 phenotype), associated with metabolic syndrome in Asian women independent of BMI, may also indicate reduced DSAT expandability due to decreased metabolic flexibility during pregnancy.
[0083] According to another aspect, in addition to risk stratification of cardiometabolic disease, the system 100 may also be configured to determine a risk stratification of gestation events. A method of risk stratification of gestation events is disclosed according to various embodiments. In some embodiments, the method includes determining a risk of gestational diabetes mellitus (GDM) based on a quantification of respective AAT depots or sub-depots of an adipose tissue of the subject prior to conception. In other embodiments, the method includes determining a risk of birth of large for gestational age (LGA) offspring based on a quantification of respective AAT depots or sub-depots of an adipose tissue of the subject prior to conception. The method may include determining a relative quantification or an absolute quantification of a plurality of volumetric segments corresponding to respective AAT depots or sub-depots in the adipose tissue. The method may utilize the adipose tissue classification method or system as disclosed in previous sections.
[0084] In some embodiments, the method of risk stratification of gestation events may further include determining a metabolically unfavourable abdominal adipose tissue distribution responsive to a higher amount of intraperitoneal adipose tissue (IPAT) relative to a reference subject or phenotype, and a lower amount of deep subcutaneous adipose tissue (DSAT) relative to the reference subject or phenotype. The reference subject or phenotype may be a subject of common physiological parameters, characterized by a predominantly normal BMI (58%) and healthy metabolic profile (72%).
[0085] In some examples relating to pregnant subjects, 101 women from the S-PRESTO cohort (21-39 years) who underwent pre-conception magnetic resonance imaging (MRI) scans
(<1 year) were longitudinally followed during pregnancy. Among them, 17 women were diagnosed with GDM, 19 had LGA offspring, and 5 women had both GDM and an LGA offspring. GDM was diagnosed based on an oral glucose tolerance test (OGTT) at 26-28 weeks of gestation (IADPSG criteria), and LGA offspring was defined by birth weight >90th percentile of the S-PRESTO population. To assess the value of the obesity phenotypes to predict future metabolic events, phenotypic odds for developing GDM and having LGA offspring were determined by binary logistic regression models adjusted for age, ethnicity, educational status, parity, and BMI. AAT depots were segmented and quantified from MRI volumes, converted to relative volumes, and expressed as % of total AAT. Participants were stratified into four obesity phenotypes: P1 (high IPAT, low DSAT), P2 (high IPAT, high DSAT), P3 (low IPAT, low DSAT), and P4 (low IPAT, high DSAT) using previously identified cut-off values of 27.3% and 14.5% for DSAT and IPAT, respectively.
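The odds-ratio estimation referenced here is sketched below for the unadjusted 2x2 case. The study's estimates come from multivariable logistic regression adjusted for age, ethnicity, educational status, parity, and BMI; this sketch shows only the crude computation, and the counts are hypothetical.

```python
import math

def unadjusted_odds_ratio(group_cases, group_noncases, ref_cases, ref_noncases):
    """Crude odds ratio for an outcome (e.g. GDM) in a phenotype group versus
    the reference, with a 95% CI from the log-OR normal approximation."""
    or_ = (group_cases / group_noncases) / (ref_cases / ref_noncases)
    se = math.sqrt(1.0 / group_cases + 1.0 / group_noncases
                   + 1.0 / ref_cases + 1.0 / ref_noncases)
    ci = (math.exp(math.log(or_) - 1.96 * se),
          math.exp(math.log(or_) + 1.96 * se))
    return or_, ci

# hypothetical counts: 6/16 cases in the group vs. 5/50 in the reference
or_p1, (ci_lo, ci_hi) = unadjusted_odds_ratio(6, 10, 5, 45)
```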
[0086] P4 defined most participants with normal weight (Asian cut-off: BMI<23) and was considered as the reference group or reference phenotype. Regression results are shown in Table 4. In comparison to P4, the odds for GDM were >5 times higher for individuals in P1 (odds ratio (95% CI): 5.36 (1.12, 27.85)) after adjusting for confounders including pre-pregnancy BMI (Table S1). Additionally, women categorized as P1 exhibited 4-fold higher odds (4.5 (0.96, 22.00)) for having LGA offspring, compared to P4. While this did not reach statistical significance upon adjusting for BMI (Table 4), it provides an indication of risk of LGA offspring in the P1 phenotype relative to the P4 phenotype. The P2 phenotype was not associated with GDM or LGA.
[0087] Particularly during rapid weight gain, as in gestation, adipose tissue plasticity is crucial to maintain metabolic homeostasis. Therefore, limited adipose tissue plasticity could become apparent in the form of gestational metabolic complications like GDM. It was determined that women defined by P1 prior to pregnancy (for <1 year) showed >5-fold increased odds for GDM, independent of pre-pregnancy BMI. Since higher visceral fat before and during pregnancy is a strong indicator for GDM, the increased odds for GDM could be simply explained by the elevated %IPAT in P1. However, from a holistic perspective, it is plausible that women in P1 exhibited elevated %IPAT prior to pregnancy, possibly due to an exploitation of their DSAT expansion capacity.
[0088] Consequently, this would have resulted in reduced DSAT plasticity during gestation, thus predisposing these women to developing GDM. Specifically, reduced DSAT plasticity during gestational weight gain may lead to impaired metabolic flexibility and cause ectopic fat accumulation, lipotoxicity, and glucotoxicity. A failure of the pancreas to adapt to this suddenly increased lipid and glucose flux would lead to beta-cell dysfunction, ultimately resulting in GDM. This indicates that in women with 'healthy' physiques, DSAT expansion is an innate adaptive mechanism to handle excess weight gain, and that this mechanism might be compromised in women who are affected by obesity.
[0089] Although not reaching statistical significance, it is worth noting that women in P1 also demonstrated >4-fold increased odds of having an LGA offspring. While the association was borderline significant, the limited sample size may have influenced the results. Therefore, P1 might not only contribute to inadequate maternal energy storage but also to excessive fetal growth during pregnancy. Given the increased susceptibility of LGA offspring to adverse cardiometabolic health, the identification of women categorized by P1 prior to pregnancy might present an opportunity for potential intergenerational intervention strategies.
[0090] The proposed phenotype risk stratification method collectively enhances metabolic risk stratification in women affected by obesity. While AAT depot-specific expansion mechanisms for maintaining metabolic homeostasis during excess weight gain remain largely unknown, this study is the first to propose a pathomechanism linking reduced DSAT expansion to MetS and GDM. Since MetS, GDM, and T2D share similar underlying obesity-related metabolic impairments (e.g., lipotoxicity and insulin resistance), these findings collectively underscore the critical role of AAT distribution in shaping metabolic health among Asian women.
[0091] In comparison to traditional obesity measurements, using the proposed system and method for phenotyping abdominal obesity using relative measurements of distinct AAT depots improved risk stratification of MetS in an Asian female cohort. Therefore, automated and rapid assessment of distinct AAT depots could not only help to improve understanding of obesity but also improve risk assessment of obesity and obesity-related disease in clinical care. Further, the proposed system and method advantageously provide phenotyping/differentiation of IAAT sub-depots (IPAT, RPAT, and PSAT). Furthermore, three-dimensional volumes can be fed to the convolutional neural network, which enables improved segmentation, as anatomical three-dimensional context is needed for accurate differentiation of individual adipose tissue depots.
[0092] All examples described herein, whether of apparatus, methods, materials, or products, are presented for the purpose of illustration and to aid understanding, and are not intended to be limiting or exhaustive. Modifications may be made by one of ordinary skill in the art without departing from the scope of the invention as claimed.

CLAIMS

1. A system for classifying adipose tissue, comprising: memory storing instructions; and a processor coupled to the memory and configured to process the stored instructions to implement a segmentation module configured to: acquire a medical image of a subject; and segment the medical image into a plurality of volumetric segments using a machine learning model, wherein each of the plurality of volumetric segments comprises a respective segmentation mask corresponding to one selected from a plurality of adipose tissue types.

2. The system as recited in claim 1, wherein the segmentation module is further configured to classify each voxel of the medical image with a label corresponding to one of: a background, a subcutaneous adipose tissue (SAT), and an intra-abdominal adipose tissue (IAAT).

3. The system as recited in claim 2, wherein the segmentation module is further configured to classify each voxel with the label of subcutaneous adipose tissue (SAT) with one of: a superficial subcutaneous adipose tissue (SSAT) and a deep subcutaneous adipose tissue (DSAT).

4. The system as recited in claim 2, wherein the segmentation module is further configured to classify each voxel with the label of intra-abdominal adipose tissue (IAAT) with one of: an intraperitoneal adipose tissue (IPAT), a retroperitoneal adipose tissue (RPAT), and a paraspinal adipose tissue (PSAT).

5. The system as recited in claim 1, wherein the medical image comprises a three-dimensional (3D) volumetric image.

6. The system as recited in claim 1, wherein each of the plurality of adipose tissue types is selected from the group consisting of: a subcutaneous adipose tissue (SAT), an intra-abdominal adipose tissue (IAAT), a superficial subcutaneous adipose tissue (SSAT), a deep subcutaneous adipose tissue (DSAT), an intraperitoneal adipose tissue (IPAT), a retroperitoneal adipose tissue (RPAT), and a paraspinal adipose tissue (PSAT).

7. The system as recited in claim 1, wherein the segmentation module is further configured to provide a respective specific volume corresponding to each of the plurality of volumetric segments.

8. The system as recited in claim 1, wherein the segmentation module is further configured to convert a volume of each of the plurality of volumetric segments into a respective relative volume expressed as a percentage of a combined volume of all of the plurality of volumetric segments.

9. The system as recited in claim 1, wherein to segment the medical image further includes: down-sampling the medical image for at least one down-sampling iteration to obtain an intermediate representation of the medical image; and up-sampling the intermediate representation of the medical image for at least one up-sampling iteration to obtain the plurality of volumetric segments.

10. The system as recited in claim 9, wherein the segmentation module is further configured to provide a skip connection from one of the at least one down-sampling iteration to a corresponding at least one up-sampling iteration.

11. The system as recited in claim 1, wherein the segmentation module is further configured to train the machine learning model with training data, wherein the training data includes augmented data.

12. The system as recited in claim 1, wherein the segmentation module is further configured to normalize an image volume of the medical image.

13. The system as recited in claim 1, wherein the machine learning model is a three-dimensional (3D) deep convolutional neural network.

14. The system as recited in claim 1, wherein the medical image comprises an output from one of the following: a Magnetic Resonance Imaging (MRI) scan, a Computed Tomography (CT) scan, and an electrical impedance tomography scan.

15. The system as recited in claim 1, wherein the segmentation module is further configured to overlay the segmentation masks on the medical image to obtain a visualization output, wherein each segmentation mask is provided with at least one predetermined visually-distinguishable characteristic.

16. The system as recited in claim 15, wherein the segmentation module is further configured to display the visualization output as a plurality of two-dimensional (2D) images, wherein each of the plurality of 2D images is representative of a cross section of the visualization output.

17. The system as recited in claim 1, wherein the segmentation module is further configured to determine a risk of a metabolic outcome based on the plurality of volumetric segments in the adipose tissue.

18. The system as recited in claim 17, wherein the segmentation module is further configured to determine a risk of cardiometabolic disease based on a quantification of the plurality of volumetric segments in the adipose tissue.

19. The system as recited in claim 17, wherein the segmentation module is further configured to determine a risk of gestational diabetes mellitus (GDM) based on a quantification of the plurality of volumetric segments in the adipose tissue.

20. The system as recited in claim 17, wherein the segmentation module is further configured to determine a risk of birth of large for gestational age (LGA) offspring based on a quantification of the plurality of volumetric segments in the adipose tissue.

21. The system as recited in any of claims 18 to 20, wherein the segmentation module is further configured to determine an unfavorable abdominal adipose tissue (AAT) distribution risk responsive to a higher amount of an intraperitoneal adipose tissue (IPAT) relative to a reference phenotype, and a lower amount of deep subcutaneous adipose tissue (DSAT) relative to the reference phenotype.

22. The system as recited in claim 17, wherein the segmentation module is further configured to determine a risk of a disease based on a relative distribution of the plurality of volumetric segments in the adipose tissue.

23. The system as recited in claim 17, wherein the segmentation module is further configured to determine a risk of a disease based on a relative quantification of the plurality of volumetric segments in the adipose tissue.

24. The system as recited in any of claims 22 to 23, wherein the disease includes at least one of: type two diabetes and cardiovascular disease.

25. A method of classifying adipose tissue, the method comprising: acquiring a medical image of a subject; and segmenting the medical image into a plurality of volumetric segments using a machine learning model, wherein each of the plurality of volumetric segments comprises a respective segmentation mask corresponding to one selected from a plurality of adipose tissue types.
PCT/SG2023/050744 2022-11-23 2023-11-09 A system for and a method of classifying adipose tissue Ceased WO2024112260A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP23895140.4A EP4622555A2 (en) 2022-11-23 2023-11-09 A system for and a method of classifying adipose tissue

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SG10202260168Q 2022-11-23

Publications (3)

Publication Number Publication Date
WO2024112260A2 true WO2024112260A2 (en) 2024-05-30
WO2024112260A9 WO2024112260A9 (en) 2024-07-04
WO2024112260A3 WO2024112260A3 (en) 2024-08-15


Country Status (2)

Country Link
EP (1) EP4622555A2 (en)
WO (1) WO2024112260A2 (en)


Also Published As

Publication number Publication date
WO2024112260A3 (en) 2024-08-15
WO2024112260A9 (en) 2024-07-04
EP4622555A2 (en) 2025-10-01


Legal Events

Code  Title and description
WWE   Wipo information: entry into national phase. Ref document number: 2023895140; country of ref document: EP.
NENP  Non-entry into the national phase. Ref country code: DE.
WWE   Wipo information: entry into national phase. Ref document number: 11202503083Y; country of ref document: SG.
WWP   Wipo information: published in national office. Ref document number: 11202503083Y; country of ref document: SG.
ENP   Entry into the national phase. Ref document number: 2023895140; country of ref document: EP. Effective date: 20250623.
121   Ep: the epo has been informed by wipo that ep was designated in this application. Ref document number: 23895140; country of ref document: EP; kind code of ref document: A2.
WWP   Wipo information: published in national office. Ref document number: 2023895140; country of ref document: EP.