
WO2024038443A1 - Method and system for segmentation of findings in a scan of an imaging system - Google Patents

Method and system for segmentation of findings in a scan of an imaging system

Info

Publication number
WO2024038443A1
Authority
WO
WIPO (PCT)
Prior art keywords
scan
target
scans
model
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/IL2023/050853
Other languages
English (en)
Inventor
Leo Joskowicz
Jacob Sosna
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hadasit Medical Research Services and Development Co
Yissum Research Development Co of Hebrew University of Jerusalem
Original Assignee
Hadasit Medical Research Services and Development Co
Yissum Research Development Co of Hebrew University of Jerusalem
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hadasit Medical Research Services and Development Co, Yissum Research Development Co of Hebrew University of Jerusalem filed Critical Hadasit Medical Research Services and Development Co
Publication of WO2024038443A1 publication Critical patent/WO2024038443A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current


Classifications

    • G06T 7/0012 — Image analysis; Inspection of images, e.g. flaw detection; Biomedical image inspection
    • G06N 3/09 — Neural networks; Learning methods; Supervised learning
    • G06T 7/11 — Segmentation; Edge detection; Region-based segmentation
    • G06N 3/0464 — Neural networks; Architecture; Convolutional networks [CNN, ConvNet]
    • G06T 2207/10072 — Image acquisition modality; Tomographic images
    • G06T 2207/20081 — Special algorithmic details; Training; Learning
    • G06T 2207/20084 — Special algorithmic details; Artificial neural networks [ANN]
    • G06T 2207/30096 — Subject of image; Biomedical image processing; Tumor; Lesion

Definitions

  • the present invention relates generally to the field of assistive diagnostics. More specifically, the present invention relates to automated segmentation of findings in a scan of an imaging system.
  • Embodiments of the invention make use of experimental observations that were obtained by the inventors. It is speculated that latent data may be included in the relations between different scan instances of a specific subject over time, referred to herein as “longitudinal studies”, or “longitudinal scans”. Such latent information may be useful for training ML models to perform AI-based functions, such as segmentation of regions in medical images. As elaborated herein, embodiments of the invention may simultaneously apply multichannel ML models on a plurality of scans, pertaining to a specific patient. As elaborated herein, the specific methods of training the multichannel ML model may reduce the need for expert-provided annotations, and improve performance of the trained ML model.
  • Embodiments of the invention may include a method of segmenting findings in a scan of an imaging system by at least one processor.
  • Embodiments of the method may include providing, or receiving, a machine-learning (ML) model that includes a plurality of channels.
  • the ML model may be pretrained, or configured, to (a) receive, via the plurality of channels, a corresponding plurality of scans depicting at least a portion of a patient organ, and (b) produce pairwise segmentation of findings in at least one of the plurality of scans.
  • the at least one processor may, e.g., during an inference stage, receive a target scan of the imaging system, depicting at least a portion of an organ of a target patient.
  • the at least one processor may introduce the target scan as input for two or more channels of the plurality of channels, and obtain, from the ML model, pairwise segmentation of at least one finding depicted in the target scan, based on the training of the ML model.
  • the at least one processor may (e.g., continuously, and/or during a training stage) train the ML model by: receiving a scan sequence that may include two or more scans depicting at least a portion of an organ of the same patient, at different points in time; receiving at least one label, representing an annotated segmentation of a scan of the scan sequence; introducing the two or more scans of the scan sequence as input for two or more channels of the plurality of channels; and training the ML model, based on said label, to produce pairwise segmentation of at least one scan of the scan sequence.
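  • By way of illustration only (not part of the original disclosure), the training stage described above could be sketched with a generic two-channel segmentation network in PyTorch: two registered scans of the same patient are stacked along the channel dimension, and the expert annotation of one scan serves as the supervisory target. The network architecture, tensor shapes, and loss below are hypothetical placeholders for ML model 110.

```python
import torch
import torch.nn as nn

# Hypothetical two-channel segmentation network standing in for ML model 110.
# A real system might use a 3D U-Net; a small stack of 3D convolutions is used
# here only to keep the sketch runnable.
model = nn.Sequential(
    nn.Conv3d(in_channels=2, out_channels=8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv3d(8, 1, kernel_size=1),  # per-voxel lesion logit
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Synthetic longitudinal pair: two registered scans of the same organ,
# taken at different points in time, plus an annotated mask for one of them.
prior_scan   = torch.randn(1, 1, 16, 64, 64)   # (batch, channel, D, H, W)
current_scan = torch.randn(1, 1, 16, 64, 64)
label_mask   = (torch.rand(1, 1, 16, 64, 64) > 0.95).float()  # expert annotation

# Each scan occupies its own input channel, so the network sees both
# time points simultaneously (channels 110C1 and 110C2).
pair_input = torch.cat([prior_scan, current_scan], dim=1)  # shape (1, 2, 16, 64, 64)

# One supervised training step, using the label as supervisory data.
optimizer.zero_grad()
logits = model(pair_input)
loss = loss_fn(logits, label_mask)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")
```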
  • the target scan may be a volumetric scan data element, that may include a plurality of two-dimensional (2D) scan slices.
  • the at least one processor may obtain pairwise segmentation of a finding depicted in the target scan by classifying at least one section of at least one 2D scan slice of the target scan as pertaining to a predefined class of findings.
  • classes of findings may include, for example, regions in the patient’s body, such as a region of lesion, a region of a tumor, a region of metastasis, an infarcted region, a region of inflammation, and an ischemic region.
  • embodiments of the invention may include a method of segmenting findings in a scan of an imaging system by at least one processor.
  • the at least one processor may, e.g., at a training stage, receive a scan sequence that may include two or more scans depicting at least a portion of an organ of the same patient, at different points in time; receive at least one first label, representing an annotated segmentation of a first scan of the scan sequence; and introduce the two or more scans of the scan sequence as input (e.g., unique, respective input) for two or more channels of an ML model that may include a plurality of channels.
  • the at least one processor may then use the first label as supervisory data to train the ML model, so as to produce pairwise segmentation of at least one finding depicted in a scan of the scan sequence.
  • the at least one processor may subsequently, e.g., at an inference stage, apply the trained ML model on one or more target scans, to produce a pairwise segmentation of at least one finding depicted in the one or more target scans, based on said training.
  • Embodiments of the invention may include an inference stage, whereat the at least one processor may be configured to receive a target scan sequence, that may include two or more target scans, each depicting at least a portion of an organ of the same patient, at different points in time.
  • the at least one processor may introduce the two or more target scans of the target scan sequence as input for two or more channels of the plurality of channels, and produce one or more pairwise segmentations of one or more findings depicted in at least one of the two or more target scans, based on the training of the ML model.
  • the segmentations of findings may be, or may include lesion segments, corresponding to regions of lesions in a patient’s body.
  • the at least one processor may be configured to apply a pair-wise analysis algorithm on the target scans of the target scan sequence, to determine a lesion category that pertains to at least one lesion segment.
  • lesion categories may include, for example, new lesions (e.g., lesions that appear in new scans, but are absent in older scans), existing lesions (e.g., lesions that appear both in new scans and in older scans), and disappeared lesions (e.g., lesions that appear in old scans, but are absent in newer scans).
  • the at least one processor may subsequently provide, e.g., via a user interface (UI), a notification that represents at least one lesion segment (e.g., an identification of the lesion, a highlighted region of the lesion, and the like), and/or a corresponding lesion category (e.g., new, disappeared, and existing lesions). Additionally, or alternatively, the at least one processor may calculate a metric (e.g., size, area, diameter, volume, circumference, and the like) of the at least one lesion segment, as depicted in one or more target scans of the target scan sequence.
  • the at least one processor may subsequently provide, e.g., via the UI, a notification that represents the calculated metric (e.g., size, area, diameter, volume, circumference, and the like) of the at least one lesion segment.
  • the at least one processor may repeat the pair-wise analysis, so as to traverse, or cascade through the target scans of the target scan sequence, in a chronological order.
  • the at least one processor may thus produce a report data element, that represents evolution of lesions along a timeline of the scan sequence.
  • evolution may include, for example, a change in one or more lesions' category (e.g., newly appearing, disappearing, etc.), a change in one or more lesions' location, and/or a change in one or more lesions' calculated metric (e.g., a change in volume, etc.).
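  • As a hedged illustration of the metric and report bullets above (not the inventors' implementation), the sketch below computes a lesion volume metric from a binary segment mask and the voxel spacing, and assembles a simple evolution report along a timeline; the function names (lesion_volume_cc, evolution_report) and spacing values are hypothetical.

```python
import numpy as np

def lesion_volume_cc(mask: np.ndarray, spacing_mm=(1.0, 1.0, 1.0)) -> float:
    # Volume of a binary lesion segment: voxel count times per-voxel volume.
    voxel_cc = np.prod(spacing_mm) / 1000.0   # mm^3 -> cc
    return float(mask.sum()) * voxel_cc

def evolution_report(dated_masks):
    # dated_masks: list of (date_string, binary 3D mask) in chronological order.
    # Returns a per-time-point volume report for one tracked lesion.
    report, prev = [], None
    for when, mask in dated_masks:
        vol = lesion_volume_cc(mask, spacing_mm=(0.8, 0.8, 2.0))
        delta = None if prev is None else vol - prev
        report.append({"date": when, "volume_cc": round(vol, 2),
                       "change_cc": None if delta is None else round(delta, 2)})
        prev = vol
    return report

# Synthetic example: a lesion that grows between two scans.
m1 = np.zeros((40, 100, 100), bool); m1[10:14, 40:50, 40:50] = True
m2 = np.zeros((40, 100, 100), bool); m2[10:16, 38:52, 38:52] = True
print(evolution_report([("2022-01-10", m1), ("2023-01-12", m2)]))
```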
  • the ML model may include at least one annotation channel.
  • an annotation channel may allow input of annotated or labeled data (e.g., expert-provided segmentation maps), during inference of the ML model, and may not necessarily be used for training the ML model.
  • an annotated data channel may introduce labels or annotations (e.g., segmentation maps) that pertain to specific (e.g., historic) scans in a scan sequence.
  • the at least one processor may (e.g., during the training stage) receive, via an input of the annotation channel, at least one second label or annotation, representing an annotated segmentation of a second scan of the scan sequence.
  • the at least one processor may subsequently (a) use the at least one second label or annotation as an instance of a data sample, and/or (b) use the first label or annotation as supervisory data to train the ML model, so as to produce segmentation of at least one finding depicted in at least one scan, other than the second scan.
  • the at least one processor may (e.g., during an inference stage) receive a target scan sequence that may include two or more target scans, each depicting at least a portion (e.g., the same portion) of an organ of the same patient, at different points in time.
  • the at least one processor may introduce the two or more target scans of the target scan sequence as input for two or more separate, corresponding channels of the plurality of channels.
  • the at least one processor may introduce at least one target label, representing an annotated segmentation of a first target scan of the target scan sequence as input to the annotation channel.
  • the at least one processor may subsequently obtain, from the ML model, a segmentation of at least one finding, depicted in at least one second target scan of the two or more target scans, based on said training.
  • the at least one processor may apply a pair-wise analysis algorithm on the target scans of the target scan sequence, to determine a lesion category pertaining to at least one lesion segment, and provide, via a user interface (UI), a notification representing at least one lesion segment and a corresponding lesion category.
  • Embodiments of the invention may include a system for classifying findings in a scan of an imaging system.
  • Embodiments of the system may include a non-transitory memory device, wherein modules of instruction code are stored, and at least one processor associated with the memory device, and configured to execute the modules of instruction code.
  • the at least one processor may be configured to pre-train a multichannel ML model to (a) receive, via the plurality of channels, a corresponding plurality of scans, depicting at least a portion of a patient organ, and (b) produce pairwise segmentation of findings in at least one of the plurality of scans; receive a target scan of the imaging system, depicting at least a portion of an organ of a target patient; introduce the target scan as input for two or more channels of the plurality of channels; and obtain, from the ML model, pairwise segmentation of at least one finding depicted in the target scan, based on said training of the ML model.
  • FIG. 1 is a block diagram, depicting a computing device which may be included in a system for segmentation of findings in a scan of an imaging system, according to some embodiments of the invention.
  • FIG. 2 is a block diagram, depicting a system for automated segmentation of findings in a scan of an imaging system, according to some embodiments of the invention.
  • Fig. 3 is an image depicting an exemplary comparison between results of lesion detection and segmentation in a longitudinal scan of a liver of a patient, as obtained by currently available systems of image analysis, as opposed to systems based on embodiments of the invention.
  • Fig. 4 is a table depicting experimental results of lesion detection and segmentation in the lungs, as obtained by embodiments of the invention.
  • Fig. 5 is a flow diagram, depicting a method of automated segmentation of findings in a scan of an imaging system, by at least one processor, according to some embodiments of the invention.
  • Fig. 6 is a flow diagram depicting a method of segmenting findings in a scan of an imaging system by at least one processor, according to some embodiments of the invention.
  • Fig. 7 is a schematic diagram depicting four scenarios of lesion and lesion-change analysis, according to some embodiments of the invention.
  • the terms “plurality” and “a plurality” as used herein may include, for example, “multiple” or “two or more”.
  • the terms “plurality” or “a plurality” may be used throughout the specification to describe two or more components, devices, elements, units, parameters, or the like.
  • the term “set” when used herein may include one or more items.
  • a neural network (NN) or an artificial neural network (ANN), e.g., a neural network implementing a machine learning (ML) or artificial intelligence (AI) function may refer to an information processing paradigm that may include nodes, referred to as neurons, organized into layers, with links between the neurons. The links may transfer signals between neurons and may be associated with weights.
  • a NN may be configured or trained for a specific task, e.g., pattern recognition or classification. Training a NN for the specific task may involve adjusting these weights based on examples.
  • Each neuron of an intermediate or last layer may receive an input signal, e.g., a weighted sum of output signals from other neurons, and may process the input signal using a linear or nonlinear function (e.g., an activation function).
  • the results of the input and intermediate layers may be transferred to other neurons and the results of the output layer may be provided as the output of the NN.
  • the neurons and links within a NN are represented by mathematical constructs, such as activation functions and matrices of data elements and weights.
  • a processor, e.g., one or more CPUs or graphics processing units (GPUs), or a dedicated hardware device, may perform the relevant calculations.
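  • As a minimal, generic illustration of the neuron computation described above (a weighted sum of inputs followed by an activation function), and not specific to the invention:

```python
import numpy as np

# A single artificial neuron: weighted sum of input signals followed by a
# nonlinear activation function (here, ReLU).
inputs  = np.array([0.2, -1.3, 0.7])
weights = np.array([0.5,  0.1, -0.4])
bias = 0.05

weighted_sum = np.dot(weights, inputs) + bias
output = max(0.0, weighted_sum)          # ReLU activation
print(weighted_sum, output)
```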
  • Computing device 1 may include a processor or controller 2 that may be, for example, a central processing unit (CPU) processor, a chip or any suitable computing or computational device, an operating system 3, a memory 4, executable code 5, a storage system 6, input devices 7 and output devices 8.
  • More than one computing device 1 may be included in, and one or more computing devices 1 may act as the components of, a system according to embodiments of the invention.
  • Operating system 3 may be or may include any code segment (e.g., one similar to executable code 5 described herein) designed and/or configured to perform tasks involving coordination, scheduling, arbitration, supervising, controlling or otherwise managing operation of computing device 1, for example, scheduling execution of software programs or tasks or enabling software programs or other modules or units to communicate.
  • Operating system 3 may be a commercial operating system. It will be noted that an operating system 3 may be an optional component, e.g., in some embodiments, a system may include a computing device that does not require or include an operating system 3.
  • Memory 4 may be or may include, for example, a Random-Access Memory (RAM), a read only memory (ROM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a double data rate (DDR) memory chip, a Flash memory, a volatile memory, a nonvolatile memory, a cache memory, a buffer, a short term memory unit, a long term memory unit, or other suitable memory units or storage units.
  • Memory 4 may be or may include a plurality of possibly different memory units.
  • Memory 4 may be a computer or processor non-transitory readable medium, or a computer non-transitory storage medium, e.g., a RAM.
  • a non-transitory storage medium such as memory 4, a hard disk drive, another storage device, etc. may store instructions or code which when executed by a processor may cause the processor to carry out methods as described herein.
  • Executable code 5 may be any executable code, e.g., an application, a program, a process, task, or script. Executable code 5 may be executed by processor or controller 2 possibly under control of operating system 3. For example, executable code 5 may be an application that may automatically segment findings in a scan of an imaging system as further described herein. Although, for the sake of clarity, a single item of executable code 5 is shown, a system may include a plurality of executable code segments similar to executable code 5 that may be loaded into memory 4 and cause processor 2 to carry out methods described herein.
  • Storage system 6 may be or may include, for example, a flash memory as known in the art, a memory that is internal to, or embedded in, a micro controller or chip as known in the art, a hard disk drive, a CD-Recordable (CD-R) drive, a Blu-ray disk (BD), a universal serial bus (USB) device or other suitable removable and/or fixed storage unit.
  • Data pertaining to scans of an imaging system such as a Computerized Tomography (CT) system or a Magnetic Resonance Imaging (MRI) system may be stored in storage system 6, and may be loaded from storage system 6 into memory 4 where it may be processed by processor or controller 2.
  • memory 4 may be a non-volatile memory having the storage capacity of storage system 6. Accordingly, although shown as a separate component, storage system 6 may be embedded or included in memory 4.
  • Input devices 7 may be or may include any suitable input devices, components, or systems, e.g., a detachable keyboard or keypad, a mouse and the like.
  • Output devices 8 may include one or more (possibly detachable) displays or monitors, speakers and/or any other suitable output devices.
  • Any applicable input/output (I/O) devices, e.g., a network interface card (NIC) or a universal serial bus (USB) device, may be connected to Computing device 1 as shown by blocks 7 and 8.
  • any suitable number of input devices 7 and output device 8 may be operatively connected to Computing device 1 as shown by blocks 7 and 8.
  • a system may include components such as, but not limited to, a plurality of central processing units (CPU) or any other suitable multi-purpose or specific processors or controllers (e.g., similar to element 2), a plurality of input units, a plurality of output units, a plurality of memory units, and a plurality of storage units.
  • FIG. 2 is a block diagram, depicting a system 10 for automated segmentation of findings in a scan 20B, or scan sequence 20A of an imaging system, according to some embodiments of the invention.
  • system 10 may be implemented as a software module, a hardware module, or any combination thereof.
  • system 10 may be, or may include a computing device such as element 1 of Fig. 1, and may be adapted to execute one or more modules of executable code (e.g., element 5 of Fig. 1) to automatically segment, and/or categorize findings in a scan 20B of an imaging system, as further described herein.
  • arrows may represent flow of one or more data elements to and from system 10 and/or among modules or elements of system 10. Some arrows have been omitted in Fig. 2 for the purpose of clarity.
  • system 10 may be communicatively connected to a scan module 20, such as a CT scan module or MRI scan module.
  • System 10 may be adapted to receive, e.g., from scan module 20 (and/or from another computing device, via input 7 of Fig. 1) at least one scan data element 20B of interest, also referred to herein as a “target” scan 20B.
  • scan 20B may be a data structure (e.g., a multidimensional matrix) that may represent a volumetric scan of a region or organ of interest.
  • Scan 20B may include a plurality of two-dimensional (2D) scan slices or images 20C each representing structural 2D information pertaining to the scanned organ or region.
  • system 10 may receive, e.g., from one or more scan modules 20 and/or computing devices 1 of Fig. 1 a scan sequence 20A (also referred to herein as a longitudinal scan 20A), pertaining to a specific patient or subject.
  • a scan sequence 20A or longitudinal scan 20A may refer to a group or series of scans 20B (e.g., volumetric scans 20B), pertaining to the same patient or subject, depicting the same or a common region or organ of interest, and taken at different points in time.
  • a scan sequence 20A may include (a) a first volumetric scan 20B that includes or depicts a liver of a patient, as scanned by a first scanning module 20, at a first healthcare facility, at a first point in time, and (b) a second volumetric scan 20B that includes or depicts the liver of the same patient, as scanned by a second scanning module 20, at a second healthcare facility, and at a second point in time (e.g., one year later).
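  • Purely as a non-limiting illustration, a longitudinal scan sequence 20A could be represented in software as an ordered collection of volumetric scans of the same patient, each tagged with its acquisition time; the field names below (patient_id, acquired_at, volume, spacing_mm) are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List
import numpy as np

@dataclass
class Scan:
    # One volumetric scan 20B: a stack of 2D slices (depth, height, width),
    # plus the voxel spacing in mm needed later for physical metrics.
    acquired_at: date
    volume: np.ndarray
    spacing_mm: tuple = (1.0, 1.0, 1.0)

@dataclass
class ScanSequence:
    # A longitudinal scan sequence 20A: scans of the same patient/organ,
    # kept in chronological order.
    patient_id: str
    scans: List[Scan] = field(default_factory=list)

    def add(self, scan: Scan) -> None:
        self.scans.append(scan)
        self.scans.sort(key=lambda s: s.acquired_at)

# Example: a liver scanned twice, one year apart (synthetic data).
seq = ScanSequence("patient-001")
seq.add(Scan(date(2022, 1, 10), np.zeros((64, 128, 128), dtype=np.float32)))
seq.add(Scan(date(2023, 1, 12), np.zeros((64, 128, 128), dtype=np.float32)))
```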
  • segmentation or image segmentation may be used herein to refer to an ML-based algorithm for identifying, or separating portions of an image, as known in the art.
  • a scan image or scan slice 20C obtained from a scanning modality 20, may be segmented to identify specific regions in scan slice 20C, that pertain to specific organs, tissues, anatomical features and the like.
  • embodiments of the invention may segment a finding (e.g., a medical finding) depicted in the target scan 20B by classifying at least one section or region of at least one two-dimensional (2D) scan slice 20C of target scan 20B as pertaining to a predefined class of findings (e.g., lesions, tumors, ischemic regions, a region of inflammation, an infarcted region, regions of anomalous anatomical features, regions of tumor metastasis, and the like).
  • volumetric segmentation may be used herein to refer to an ML-based algorithm for identifying, or separating three-dimensional (3D) regions within a scan 20B, e.g., by identifying contiguous segmented regions from a plurality of scan slices 20C, as known in the art.
  • system 10 may include a multi-channel machine-learning (ML) model 110.
  • system 10 may obtain, or receive ML model 110 (e.g., from input 7 of Fig. 1) as an untrained, or partially trained NN model, that includes a predefined NN architecture, including a plurality of NN nodes, arranged in one or more layers, as known in the art.
  • ML model 110 may be a Deep Neural Network (DNN) model or Convolutional Neural Network (CNN) that may be configured to receive at least one scan slice 20C, and perform segmentation of scan slice 20C, to obtain one or more segment data elements 111S, representing unique, contiguous two-dimensional areas within the scanned region.
  • ML model 110 may be, or may include a simultaneous multi-channel 3D U-Net model, that may be trained on pairs of registered scans 20B or scan slices 20C of individual patients, so as to identify segments (e.g., lung lesions) and changes in identified segments (e.g., changes in lung lesions) based on the relative lesion and healthy tissue appearance differences, as elaborated herein.
  • segments 111S may refer interchangeably to the physical areas within the scanned region or tissue, as well as the corresponding data elements that are obtained from ML model 110 and represent these physical areas.
  • system 10 may include a registration module 150, adapted to register, or match between pairs of consecutive scan slices 20C, as known in the art, so as to identify contiguous regions between scan slices 20C of each pair.
  • ML model 110 may collaborate with registration module 150 to identify contiguous regions among a plurality of scan slices 20C, thereby obtaining volumetric segments 111VS data elements, representing unique, contiguous volumes within the scanned region.
  • volumetric segments 111VS may refer interchangeably to the physical volumes within the scanned region or tissue, as well as the corresponding data elements that are obtained from ML model 110, and that represent these physical volumes.
  • ML model 110 may be referred to as a multi-channel model, in a sense that ML model 110 may include a plurality of input channels, adapted to receive a respective plurality of data structures (e.g., scan slices 20C) of the same type as input.
  • ML model 110 may be configured to analyze the plurality of data structures simultaneously or concurrently, rather than separately, to produce a synergistic effect from the concurrent combination of input data structures.
  • the terms “simultaneous” or “concurrent” may be used in this context to indicate that analysis of two or more scans 20B or scan slices 20C is performed substantially at the same time, such that computations in a first channel 110C1 of ML model 110 may affect an outcome of computations in a second channel 110C2 of ML model 110, and vice-versa.
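  • The following sketch (an assumption-laden illustration, not the disclosed implementation) shows one plausible realization of such a multi-channel model: a tiny two-channel, U-Net-style network whose first convolution mixes the “prior” and “current” inputs, so that computations driven by one channel affect the features computed for the other. The class name and layer sizes are hypothetical.

```python
import torch
import torch.nn as nn

class TinyTwoChannelUNet(nn.Module):
    """Illustrative stand-in for multi-channel ML model 110: a minimal
    U-Net-style encoder/decoder whose first convolution mixes the 'prior'
    and 'current' slices, so features from one input channel influence
    the segmentation produced for the other."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.dec = nn.Sequential(nn.Conv2d(32 + 16, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 1))   # per-pixel lesion logit

    def forward(self, prior_slice, current_slice):
        x = torch.cat([prior_slice, current_slice], dim=1)  # channels 110C1/110C2
        e = self.enc(x)
        m = self.mid(self.down(e))
        u = self.up(m)
        return self.dec(torch.cat([u, e], dim=1))            # skip connection

# Registered 2D slices of the same organ, taken months apart (synthetic).
prior   = torch.randn(1, 1, 64, 64)
current = torch.randn(1, 1, 64, 64)
logits = TinyTwoChannelUNet()(prior, current)
print(logits.shape)   # torch.Size([1, 1, 64, 64])
```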
  • Fig. 3 is an image depicting an exemplary comparison between results of lesion detection and segmentation in a longitudinal scan 20A of a liver of a patient, as obtained by currently available systems of image analysis, as opposed to systems based on embodiments of the invention.
  • the term “longitudinal” is used herein to indicate that scans 20B included in scan sequence 20A may be taken or sampled at a clinically significant time gap, e.g., days, months or years apart.
  • the top row of images in Fig. 3 relate to a first, “prior” scan slice 20C, depicting a tomographic image of a patient’s liver at a first point in time.
  • the bottom row of images in Fig. 3 relate to a second, “current” scan slice 20C, depicting a tomographic image of the same patient’s liver at a second point in time.
  • registration module 150 may further be configured to match and align between scan slices 20C of different scans 20B in a scan sequence 20A.
  • registration module 150 may be configured to search for at least one second scan slice 20C of a second volumetric scan 20B, that best fits the at least one first scan slice 20C. Registration module 150 may subsequently register or align the second scan slice 20C to the first scan slice 20C according to any image registration algorithm, as known in the art.
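  • As a toy illustration of such registration (real systems would typically use established rigid or deformable registration algorithms), the sketch below aligns a “moving” slice to a “fixed” slice by brute-force search over small integer translations, scoring each candidate circular shift by correlation; the function name and shift range are hypothetical.

```python
import numpy as np

def best_translation(fixed: np.ndarray, moving: np.ndarray, max_shift=5):
    """Brute-force search over small integer (dy, dx) shifts, scoring each
    candidate circular shift of 'moving' by its correlation with 'fixed'."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = np.corrcoef(fixed.ravel(), shifted.ravel())[0, 1]
            if score > best_score:
                best, best_score = (dy, dx), score
    return best, best_score

# Synthetic example: the "prior" slice is the "current" slice shifted by (2, -3).
rng = np.random.default_rng(0)
current = rng.random((64, 64))
prior = np.roll(np.roll(current, 2, axis=0), -3, axis=1)
print(best_translation(current, prior))   # recovers (-2, 3), the inverse shift
```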
  • the leftmost column (a) depicts a tomography image (e.g., a scan slice 20C), of a liver of a patient, acquired 4 months apart from two, separate CT scanning modalities 20, following registration or alignment of the images by registration module 150.
  • FIG. 3 shows ground truth (e.g., expert-provided) segmentation of a lesion (marked in orange and dashed circle) whose volume has increased between the prior scan and current scan by 1.64cc.
  • FIG. 3 shows standalone classification of lesions in the prior scan 20B and the current scan 20B, as performed by currently available systems for image analysis.
  • the term “standalone” is used to indicate that the segmentation network was trained on single, annotated tomography images, and subsequently inferred separately, on each of the scan slices 20C, e.g., the “prior” and “current” slices 20C of the “prior” and “current” scans 20B.
  • the lesion in the prior scan 20B was missed by the standalone system (top, false negative).
  • the lesion in the current scan 20B was detected, but only partially segmented, as reflected by its dice segmentation coefficient (e.g., the portion of the segmented area with respect to the ground-truth, or expert-annotated, segmentation).
  • the standalone system erroneously identified (false positive) two more regions as new lesions (marked in red and dashed circles).
  • FIG. 3 shows results of segmentation that were obtained by embodiments of the invention (by multi-channel ML model 110), which, as explained herein is trained to perform simultaneous or concurrent classification of segments 111VS.
  • the lesion was correctly detected and segmented (marked in orange and dashed circle) in both the “prior” scan slice 20C (top, dice segmentation coefficient 0.88) and the “current” scan slice 20C (bottom, dice segmentation coefficient 0.90).
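  • The dice segmentation coefficient values quoted above (e.g., 0.88 and 0.90) may, for example, be computed with the standard Dice formula, as in the following illustrative snippet:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    # Dice = 2 * |pred AND truth| / (|pred| + |truth|), on binary masks.
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

pred  = np.zeros((64, 64), bool); pred[20:30, 20:30] = True
truth = np.zeros((64, 64), bool); truth[22:32, 20:30] = True
print(round(dice_coefficient(pred, truth), 2))   # 0.8
```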
  • ML model 110 may receive a longitudinal scan sequence 20A that may include two or more (e.g., a plurality) of scans 20B or scan slices 20C, depicting at least a portion of an organ of the same subject or patient, at different points in time.
  • ML model 110 may receive or introduce, via a first input channel 110C1, a first scan slice 20C of a first scan 20B, depicting a tomographic image of a patient's organ (e.g., a liver) at a first point in time (e.g., obtained at a first medical imaging facility), and receive or introduce via a second input channel 110C2, a second scan slice 20C of a second scan 20B, depicting a tomographic image of the patient's organ at a second point in time (e.g., obtained at a second medical imaging facility).
  • At least one scan 20B or scan slices 20C may be annotated 30, so as to facilitate training of ML model 110 via a supervised training algorithm.
  • ML model 110 may receive via another (e.g., a third) input channel 110C3 at least one label 110C3’, representing annotation or labeling 30 of a segment or region of a scan 20B or scan slice 20C of the scan sequence 20A.
  • the at least one label 110C3’ may be a data structure (e.g., a map, a table, a 3D matrix and the like) that identifies or represents one or more specific, expert-annotated segments (e.g., lesions, tissues, organs, etc.) in one of the received scans 20B or scan slices 20C.
  • system 10 may further include a training module 120, configured to train ML model 110, using the at least one label 110C3’ as supervisory data, to segment (e.g., produce pairwise segmentation (111VS/111S)) at least one scan 20B or scan slice 20C in incoming, target scans 20B or scan sequences 20A.
  • Training module 120 may supervise the training of ML model 110, by serially introducing a plurality of annotated data samples (e.g., scans 20B or scan slices 20C associated with corresponding labels 110C3’), using labels 110C3’ as supervisory data, as known in the art. It may be appreciated that training module 120 may train ML model 110 either during a predefined training phase, or as a continuous or repetitive process, e.g., as additional annotated scans 20B or scan slices 20C are received.
  • ML model 110 may receive via the plurality of channels 110C1, 110C2 a corresponding plurality of scans 20B or scan slices 20C, depicting at least a portion of a patient’s organ, and subsequently produce pairwise segmentation 111VS/111S of findings (e.g., lesions, areas of inflammation, ischemic regions, and the like) in at least one of the plurality of scans 20B or scan slices 20C.
  • ML model 110 may receive a target scan 20B or scan slice 20C from imaging system or scan modality 20, depicting at least a portion of an organ (e.g. a lung) of a target patient.
  • target scan 20B or scan slice 20C may be a single scan 20B or scan slice 20C, in a sense that it may not be related, or associated with, a longitudinal scan sequence 20A.
  • system 10 may introduce the target scan 20B or scan slice 20C as concurrent, or simultaneous input for two or more channels (e.g., together, into input channels 110C1 and 110C2) of the plurality of channels.
  • System 10 may thus simultaneously infer two or more channels of ML model 110 on the target scan 20B or scan slice 20C, to obtain pairwise segmentation 111VS/111S of at least one finding (e.g., a lesion in the lung) depicted in the target scan 20B or scan slice 20C, based on the training of ML model 110.
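  • A hedged sketch of this single-scan (“simultaneous single”) inference mode follows: the same target scan tensor is introduced into both input channels of a stub pretrained two-channel model; the stub network and threshold are hypothetical stand-ins for the trained ML model 110.

```python
import torch
import torch.nn as nn

# Stub standing in for a pretrained two-channel ML model 110 (weights would
# normally be loaded from the training stage described above).
model = nn.Sequential(nn.Conv3d(2, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv3d(8, 1, 1))
model.eval()

target_scan = torch.randn(1, 1, 16, 64, 64)   # a single target scan 20B

with torch.no_grad():
    # "Simultaneous single" mode: the same scan is introduced into both
    # channels 110C1 and 110C2, and inferred concurrently.
    pair_input = torch.cat([target_scan, target_scan], dim=1)
    lesion_probability = torch.sigmoid(model(pair_input))

segmentation_mask = lesion_probability > 0.5   # per-voxel finding segmentation
print(segmentation_mask.shape)
```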
  • Fig. 4 is a table depicting experimental results of lesion detection and segmentation in the lungs, as obtained by embodiments of the invention.
  • the data exhibited in Fig. 4 relates separately to different sizes or diameters of detected lesions: the first four rows (marked by a dashed rectangle) relate to lesion diameter in excess of 10mm, the next four rows relate to lesion diameter between 5mm and 10mm, and the last four rows relate to lesions of all diameters.
  • each row relates to a different scenario of training of ML model 110, and inference of ML model 110 on incoming data:
  • ML model 110 was trained on separate, annotated scan slices 20C, using labels 110C3’ of annotated segmentation maps for each scan slice 20C as supervisory information. ML model 110 was subsequently inferred in a standalone mode (e.g., separately), on individual target scan slices 20C. This is comparable to the performance of currently available systems and methods of image analysis, as discussed above (e.g., in relation to Fig. 3). As shown in row (iv), the precision of lesion detection, when using the stand-alone system was 0.59.
  • precision may be used in this context to refer to a performance metric, representing a portion of true-positive predictions from an overall number of positive predictions provided by the ML segmentation model 110.
  • ML model 110 included two input channels (110C1, 110C2). ML model 110 was trained on pairs of scan slices 20C, using labels 110C3’ of annotated segmentation maps for at least one scan slice 20C of the pair as supervisory information. In a subsequent inference stage, ML model 110 was sequentially inferred in a standalone mode, e.g., on individual target scan slices 20C. In each instance of inference, the relevant target scan slice 20C was introduced as input into both input channels 110C1 and 110C2, to be concurrently analyzed by both channels. As shown in row (iii), the precision of lesion detection, when using the stand-alone system improved to 0.87, with respect to the standalone, single network of row (iv).
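  • As an illustration of how the precision values cited above (e.g., 0.59 and 0.87) might be computed per detected lesion (an assumption about the evaluation protocol, not a statement of the inventors' exact method), a predicted connected component may be counted as a true positive if it overlaps the expert segmentation:

```python
import numpy as np
from scipy import ndimage

def detection_precision(pred_mask: np.ndarray, truth_mask: np.ndarray) -> float:
    """Precision = true-positive detections / all positive detections.
    A predicted connected component counts as a true positive if it
    overlaps the ground-truth (expert) segmentation."""
    labeled, n_pred = ndimage.label(pred_mask)
    if n_pred == 0:
        return 0.0
    true_pos = sum(
        1 for i in range(1, n_pred + 1)
        if np.logical_and(labeled == i, truth_mask).any()
    )
    return true_pos / n_pred

pred  = np.zeros((64, 64), bool); pred[5:10, 5:10] = True; pred[40:45, 40:45] = True
truth = np.zeros((64, 64), bool); truth[6:11, 6:11] = True
print(detection_precision(pred, truth))   # 0.5: one of two detections is real
```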
  • Fig. 5 is a flow diagram depicting a method of automated segmentation of findings in a scan of an imaging system, by at least one processor (e.g., processor 2 of Fig. 1), according to some embodiments of the invention.
  • the at least one processor 2 may provide or receive (e.g., via an input device 7 of Fig. 1) an ML model (e.g., ML model 110 of Fig. 2).
  • ML model 110 may include a plurality of channels (e.g., 110C1, 110C2, hence - a multichannel ML model).
  • ML model 110 may be pretrained to (a) receive, via the plurality of channels 110C1, 110C2, a corresponding plurality of scans 20B depicting at least a portion of an organ of a patient, and (b) produce pairwise segmentation of findings in at least one of the plurality of scans 20B.
  • the at least one processor 2 may (e.g., during an inference stage) receive a target scan 20B of the imaging system, the target scan 20B may depict at least a portion of an organ of a target patient.
  • the term “target” may be used herein to indicate an object of interest.
  • the at least one processor 2 may introduce the target scan (e.g., a single scan 20B, or a single slice 20C) as input for two or more channels (e.g., 110C1, 110C2) of the plurality of channels.
  • the at least one processor 2 may subsequently (e.g., as elaborated herein in relation to Fig. 2), obtain, from ML model 110 a segmentation of at least one finding depicted in the target scan, based on said training of the ML model 110.
  • Fig. 6 is a flow diagram depicting a method of segmenting (111VS/111S) findings in a scan 20B of an imaging system 20 by at least one processor (e.g., processor 2 of Fig. 1) of system 10, according to some embodiments of the invention.
  • the at least one processor 2 of system 10 may receive a scan sequence (e.g., longitudinal scan sequence 20A of Fig. 2) that may include two or more scans or scan slices (e.g., 20B/20C of Fig. 2). As elaborated herein, the two or more scans 20B or scan slices 20C may depict at least a portion of an organ of the same patient, at different points in time. Additionally, or alternatively, and as shown in step S2010, the at least one processor 2 may receive at least one first label (e.g., 30/110C3’ of Fig. 2) representing an annotated segmentation of a first scan 20B or scan slice 20C of the received scan sequence 20A.
  • the at least one first label 30/110C3’ may be an expert-annotated segmentation map of at least one (or exactly one) scan slice 20C, identifying or labeling one or more regions depicted in the at least one (e.g., exactly one) scan slice 20C as pertaining to a specific class of anatomical regions and/or clinical findings, as elaborated herein.
  • the at least one processor 2 may introduce the two or more scans 20B or scan slices 20C of the received scan sequence 20A as input for two or more respective channels of a multi-channel ML model (e.g., 110 of Fig. 2), as elaborated herein (e.g., in relation to Fig. 2).
  • the at least one processor may then use the at least one first label 110C3’ as supervisory data, to train ML model 110 to produce a pairwise segmentation (e.g., 111VS/111S of Fig. 2) of at least one finding depicted in a scan 20B or scan slice 20C of the scan sequence 20A.
  • the at least one processor 2 may train ML model 110 to classify at least one 2D segment 111S of scan slice 20C, as pertaining to at least one anatomical feature or clinical finding, as elaborated herein. Additionally, or alternatively, the at least one processor 2 may employ a registration module (e.g., 150 of Fig. 2) to integrate a plurality of segments 111S from a plurality of scan slices 20C of scan 20B, to classify at least one 3D region 111VS of scan 20B as pertaining to at least one anatomical feature or clinical finding, thereby producing a volumetric segmentation 111VS of scan 20B.
  • processor 2 may perform this training during an initial training stage, as known in the art. Additionally, or alternatively, processor 2 may continuously (e.g., repetitively, over time) train ML model 110, as additional annotated data 30 is received.
  • the at least one processor 2 may apply or infer the trained ML model 110 on one or more target scans 20B or target scan slices 20C, to produce a pairwise segmentation 111VS/111S of at least one finding depicted in the one or more target scans 20B or target scan slices 20C, based on the training of ML model 110.
  • system 10 may receive a target scan sequence 20A that includes two or more target scans 20B or target scan slices 20C depicting at least a portion of an organ of the same patient, at different points in time.
  • System 10 may introduce the two or more target scans 20B or target scan slices 20C of sequence 20A as respective input for two or more channels 110C1, 110C2 of the plurality of channels of ML model 110.
  • System 10 may then produce one or more pairwise segmentations 111S of one or more findings depicted in at least one of the two or more target scan slices 20C, based on the training of ML model 110, as elaborated herein.
  • system 10 may apply registration module 150 on the one or more pairwise segmentations 111S, to produce one or more volumetric pairwise segmentations 111VS of findings depicted in at least one of the two or more target scans 20B, as elaborated herein.
  • the training of ML model 110 on groups (e.g., pairs) of annotated scan slices 20C, and the subsequent inference of the trained ML model 110 on groups (e.g., pairs) of scan slices 20C has been demonstrated to have improved results with respect to stand-alone training and/or inference of segmentation models, even when using the same annotated data for training.
  • row (iv) represents a workflow commonly used in the art, where a segmentation ML model was trained in “stand-alone” mode, e.g., on separate, annotated scan slices 20C, and was subsequently inferred in “stand-alone” mode, e.g., applied on individual target scan slices 20C, and not pairwise.
  • row (ii) represents a mode of the present invention (“Simultaneous, without prior”), where ML model 110 was trained by training module 120 of Fig. 2 simultaneously, e.g., on pairs of scan slices 20C, where at least one scan slice 20C was annotated. Subsequently, ML model 110 was inferred on pairs of target scan slices 20C, e.g., provided by registration module 150, as elaborated herein.
  • system 10 may be configured to analyze longitudinal information (e.g., longitudinal scans 20A), to segment regions of interest in a scanned region or organ, and subsequently assess a condition of a patient.
  • Such longitudinal information may include partial labeling (e.g., expert-annotation) of scans, that may be accumulated over time, e.g., from different healthcare centers, and in relation to different scanning devices 20.
  • embodiments of the invention may therefore utilize such partial annotation or labeling, associated with a first scan as input data samples for segmenting regions of interest in a second scan 20B, that may have been obtained at a later (longitudinal) point in time.
  • ML model 110 may include at least one annotation channel 110C3, configured to receive at least one label 30, representing an annotated segmentation 110C3’ of a scan 20B or scan slice 20C of a scan sequence 20A.
  • annotated segmentation 110C3’ may be used as supervisory information for training ML model 110.
  • since ML model 110 may include multiple channels, each configured to receive a respective scan 20B or scan slice 20C as input, it may be appreciated that annotated segmentation 110C3’ may also be used as a data sample (as opposed to supervisory data) for training ML model 110.
  • system 10 may receive via an annotation channel 110C3 at least one first label 30/110C3’ representing an annotated segmentation of a first scan 20B or scan slice 20C of the received scan sequence 20A, and at least one second label 30/110C3’ representing an annotated segmentation of a second scan 20B or scan slice 20C of the scan sequence 20A.
  • System 10 (e.g., processor 2) may then utilize training module 120 to train the ML model 110, so as to produce segmentation 111VS/111S of at least one finding depicted in at least one scan, other than the second scan.
  • training module 120 may train ML model 110 based on (a) the first scan 20B or scan slice 20C, (b) the second scan 20B or scan slice 20C, and/or (c) the at least one second label 30/110C3’, while using the at least one first label 30/110C3’ as supervisory data.
  • system 10 may receive a target scan sequence 20A that may include two or more target scans 20B or scan slices 20C, each depicting at least a portion of an organ of the same patient, at different points in time. System 10 may then introduce the two or more target scans 20B or scan slices 20C as respective input for two or more channels of the plurality of channels. Additionally, system 10 may receive at least one target label 30/110C3’ representing an annotated segmentation of a first target scan 20B or scan slice 20C of the two or more target scans 20B or scan slices 20C, and introduce the at least one target label as input to the annotation channel 110C3.
  • ML model 110 may subsequently produce a segmentation 111S of at least one finding, depicted in at least one second target scan slice 20C of the two or more target scans 20B, based on the training. Additionally, system 10 may apply registration module 150 on the one or more segmentations 111S, to produce one or more volumetric segmentations 111VS of findings depicted in at least one second target scan slice 20C of the two or more target scans 20B, as elaborated herein.
  • row (iv) represents a workflow commonly used in the art, where a segmentation ML model was trained in a “stand-alone” mode, e.g., on separate, annotated scan slices 20C, and was subsequently inferred in a non-pairwise, “stand-alone” mode, e.g., applied on individual target scan slices 20C.
  • row (i) represents a mode of the present invention (“Simultaneous with prior”), where ML model 110 was trained by training module 120 of Fig. 2 simultaneously, e.g., on pairs of scan slices 20C, where at least one scan slice 20C was annotated. Subsequently, ML model 110 was inferred on pairs of target scan slices 20C, together with target labels or annotations which were not included in the training process, and which pertain to one scan of the pair of target scan slices.
  • segmentations 111VS/111S may respectively represent or define an area or region in a scan 20B or scan slice 20C that is suspected to include a clinical finding, such as a lesion (e.g., as depicted in Fig. 3)
  • system 10 may further include a pairwise analysis module 130.
  • Pairwise analysis module 130 may be configured to perform a pairwise analysis algorithm on the target scans 20B of a received target scan sequence 20A, to determine a lesion category or class 130A/130B of at least one lesion associated with, or pertaining to the at least one lesion segment 111VS/111S.
  • pairwise may be used herein to indicate concurrent analysis of scans 20B and/or scan slices 20C in pairs, e.g., pairs of consecutive scans 20B, taken at different points in time (e.g., within more than a day’s difference), or at different medical facilities.
  • pairwise analysis may be synergistically beneficial, to improve image analysis in relation to currently available methods.
  • a first type of segment category or lesion category 130B may be associated with a 2D segment 111S of a scan slice 20C.
  • a second type of segment category or lesion category 130A may be associated with a volumetric segment 111VS of a scan 20B.
  • segment category or lesion category 130A/130B may represent a category of new lesions, e.g., lesions that have appeared within the time-gap between a first scan 20B and a second scan 20B of the longitudinal scan sequence 20A, as presented by red areas and dashed circles in Fig. 3.
  • lesion category 130A/130B may represent a category of existing lesions, e.g., lesions that have been apparent throughout the time period of the longitudinal scan sequence 20A, as presented by brown areas in Fig. 3.
  • lesion category 130A/130B may represent a category of disappeared lesions, e.g., lesions that have disappeared within the time period of the longitudinal scan sequence 20A.
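  • A minimal sketch of such a pairwise categorization follows, assuming registered binary lesion masks and connected-component analysis (the overlap rule below is an illustrative assumption, not the disclosed algorithm):

```python
import numpy as np
from scipy import ndimage

def categorize_lesions(prior_mask: np.ndarray, current_mask: np.ndarray):
    """Toy pairwise analysis: count connected lesion components that are
    'new' (present only in the current scan), 'existing' (present in both),
    or 'disappeared' (present only in the prior scan). Assumes the two
    masks are already registered to the same frame of reference."""
    categories = {"new": 0, "existing": 0, "disappeared": 0}

    cur_labels, n_cur = ndimage.label(current_mask)
    for i in range(1, n_cur + 1):
        component = cur_labels == i
        if np.logical_and(component, prior_mask).any():
            categories["existing"] += 1
        else:
            categories["new"] += 1

    prior_labels, n_prior = ndimage.label(prior_mask)
    for i in range(1, n_prior + 1):
        component = prior_labels == i
        if not np.logical_and(component, current_mask).any():
            categories["disappeared"] += 1

    return categories

prior   = np.zeros((64, 64), bool); prior[5:10, 5:10] = True      # persists
current = np.zeros((64, 64), bool); current[6:11, 6:11] = True    # existing lesion
current[40:45, 40:45] = True                                      # new lesion
print(categorize_lesions(prior, current))  # {'new': 1, 'existing': 1, 'disappeared': 0}
```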
  • system 10 may produce or transmit a notification 40 representing at least one lesion segment 111VS/111S and/or corresponding lesion category 130A/130B, to a predefined computing device (e.g., 1 of Fig. 1) or application.
  • system 10 may present notification 40 of lesion segment 111VS/111S and/or corresponding lesion category 130A/130B via a user interface (UI) such as output device 8 of Fig. 1.
  • system 10 may transmit notification 40 of lesion segment 111VS/111S and/or corresponding lesion category 130A/130B as an electronic message (e.g., an email, a Short Messaging Service (SMS), and the like) to a computing device (e.g., 1 of Fig. 1) and/or account of a predefined user.
  • system 10 may include a metric analysis module 140, configured to calculate at least one 2D physical metric 140B or 3D physical metric 140A (e.g., a size, a volume, a diameter, and the like) of at least one lesion segment 111VS/111S, as depicted in one or more target scans 20B of the target scan sequence 20A.
  • System 10 may subsequently include the at least one physical metric 140A/140B within notification 40.
  • target longitudinal scan sequence 20A may include three or more scans 20B, each taken at a different point in time.
  • pairwise analysis module 130 may repeatedly implement the pairwise analysis algorithm on pairs of subsequent scans 20B in a cascading order, to chronologically indicate (e.g., via notification 40) evolution of a condition of the patient, throughout the longitudinal scan sequence 20A.
  • pairwise analysis module 130 may repeat the pairwise analysis, so as to traverse through the target scans 20B of the target scan sequence 20A, in a chronological order.
  • System 10 may subsequently produce notification 40 as a report data element, representing evolution of lesions along a timeline of the scan sequence.
  • evolution may include, for example, evolution of metrics 140A/140B (e.g., size, volume, etc.) of specific segments 111VS/111S. Additionally, or alternatively, such evolution may include, for example, evolution of specific segments’ 111VS/111S classification or category 130A/130B.
  • Fig. 7 is a schematic diagram depicting four scenarios of lesion and lesion-change analysis, according to some embodiments of the invention.
  • pairwise analysis module 130 may concatenate a plurality of multichannel ML models 110 (e.g., U-Net neural networks), to perform a concatenated pairwise lesion and/or lesion-change analysis as a basis for longitudinal analysis of a plurality of consecutive scans of a patient.
  • system 10 may facilitate incorporation of all the available patient scans and previous lesion segmentations, to accommodate complete and accurate analysis of both minor and major changes in longitudinal scan sequences 20A.
  • Scenario (d), denoted “Longitudinal”, is applicable for studies with multiple consecutive scans, e.g., where more than two consecutive scans 20B, and possibly some of the previous lesion segmentations 111S/111VS, are also available (n > 2).
  • a first ML model 110, denoted M1, may include a simultaneous two-channel U-Net NN architecture, and may be trained on registered pairs of prior and current scans.
  • a second ML model 110, denoted M2, may include a simultaneous three-channel U-Net NN architecture, trained on triplets of registered current and prior scans 20B and prior lesion segmentations 111S/111VS.
  • the lesion detection and pairwise segmentation may be performed with ML model 110 denoted M1.
  • the same target scan 20B may be input, or introduced into both channels 110C1 (denoted ‘1’) and 110C2 (denoted ‘2’).
  • this configuration may be referred to as “Simultaneous single”, to indicate that ML model 110 (e.g., U-Net) was trained on pairs of scans, and is used when only one scan of the subject or patient is available, or when the prior and current scans 20B are not registered due to major changes in the depicted tissue or organ of scan(s) 20B.
  • pairwise analysis module 130 of Fig. 2 may perform lesion detection and pairwise segmentation with two M1 models 110: one for the prior scan 20B and one for the current scan 20B.
  • Each M1 model 110 may receive both scans 20B in dedicated, respective channels 110C1, 110C2 (denoted ‘1’ and ‘2’). The difference between the two M1 models lies in the roles assigned to the scans.
  • Such a configuration may be referred to herein as simultaneous, as it may perform the segmentation process concurrently, or in parallel, on both scans 20B.
  • in scenario (c), when the prior lesion segmentations 111S/111VS are also available from a previous analysis, they may be introduced as input via a third, dedicated channel 110C3 of ML model 110 to improve robustness and accuracy. This may be performed with an ML model 110 such as M2.
  • Such a scenario may be referred to herein as “Simultaneous with prior”, meaning that both scans are handled simultaneously, or in parallel, each in a dedicated channel (110C1, 110C2), together with an annotation of the prior segmentation introduced into a third channel 110C3 (an illustrative input-stacking sketch is provided after this list).
  • Sn is the current, most recent scan 20B, and S1 is the first, oldest scan 20B of the same patient.
  • Such a scenario may be referred to herein as a Longitudinal study.
  • system 10 may include, or implement, a cascade of ML models 110, allowing a longitudinal study of lesions in a particular patient.
  • pairwise analysis module 130 may start the processing with the oldest scan pair in the cascade, denoted here as S1 and S2. For that purpose, pairwise analysis module 130 may use an M1 simultaneous ML model 110 to produce their corresponding lesion segmentations (LS1, LS2), as elaborated herein (e.g., in relation to scenario (b)).
  • pairwise analysis module 130 may utilize M2 ML models 110 to process subsequent scan pairs (Si, Si+1). Each such degree of the cascade may relate to a specific scan 20B (Si+1) of scan sequence 20C, and may include an M2 ML model 110.
  • input channels 110C1, 110C2 of the M2 ML model 110 may serve to input the dedicated scan 20B (Si+1) and the previous scan 20B (Si), respectively.
  • input channel 110C3 may receive as input a computed lesion segmentation 111S/111VS (LSi) of the previous scan (Si).
  • M2 ML model 110 may analyze the current and previous scans simultaneously, together with the computed lesion segmentation 111S/111VS (LSi) of the previous scan (Si), to obtain a computed lesion segmentation 111S/111VS (LSi+1) of the current scan (Si+1), as elaborated herein (e.g., in relation to scenario (c)).
  • Pairwise analysis module 130 may proceed with this cascade until the final pair of current and prior scans 20B (Sn-1, Sn) is reached. When lesion segmentation priors 111S/111VS (LSi) are available, they may be used instead of the computed ones. At the end of this process, pairwise analysis module 130 may produce a longitudinal classification 130A/130B (e.g., new lesion, disappearing lesion, a change in a lesion, etc.) as an aggregation of classifications 130A/130B of one or more (e.g., all) degrees in the cascade (an illustrative cascade sketch is provided after this list).
  • system 10 may receive a target scan sequence 20A that includes two or more (e.g., more than three) target scans 20B depicting at least a portion of an organ of the same patient at different points in time (e.g., more than a day apart).
  • System 10 may provide a cascade that may include a plurality of ML model 110 instances, where each ML model instance is dedicated to producing pairwise segmentations of one or more findings in a pair of consecutive target scans 20B.
  • System 10 may subsequently produce, as elaborated herein, a longitudinal categorization or classification 130A/130B of findings in the target scan sequence, based on (e.g., as an aggregation of) the pairwise segmentations of the cascaded ML models.
  • the longitudinal categorization of findings 130A/130B may include, for example, a representation of changes in the depicted tissue or organ, such as the appearance of new lesions, disappearing lesions, changes in lesion volume or morphology, and the like.
  • embodiments of the invention may include a practical application for improving computer-based, assistive diagnostic technology.
  • embodiments of the invention may improve clinical decision making by providing accurate and reliable segmentation of scanned regions.
  • improvement in segmentation may, for example, provide improved volumetric measurements of lesions and lesion changes in support of disease status evaluation, and may allow accurate assessment of treatment efficacy and response to therapy.
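
The input-stacking sketch referenced above is a non-limiting Python illustration of how the two-channel (M1) and three-channel (M2) inputs described in the list may be assembled. It assumes a PyTorch-style U-Net whose first convolution accepts two or three input channels; the function names make_m1_input and make_m2_input, and the tensor layout, are assumptions made for the sake of the example and are not taken from the embodiments above.

    import torch

    def make_m1_input(current_scan, prior_scan=None):
        """Stack two scans into the two input channels (110C1, 110C2) of an
        M1-style model. In the "Simultaneous single" configuration (only one
        scan available, or prior/current scans not registered), the same scan
        is fed to both channels."""
        if prior_scan is None:
            prior_scan = current_scan
        # scans are assumed to be 3-D volumes of shape (D, H, W);
        # output shape: (batch=1, channels=2, D, H, W)
        return torch.stack([current_scan, prior_scan]).unsqueeze(0)

    def make_m2_input(current_scan, prior_scan, prior_segmentation):
        """Stack two scans and the prior lesion segmentation into the three
        input channels (110C1, 110C2, 110C3) of an M2-style model
        ("Simultaneous with prior")."""
        # output shape: (batch=1, channels=3, D, H, W)
        return torch.stack([current_scan, prior_scan, prior_segmentation]).unsqueeze(0)

Under these assumptions, the only architectural difference between M1 and M2 is the number of input channels of the first convolutional layer; the remainder of the U-Net may be identical.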
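
The cascade sketch referenced above illustrates, under the same assumptions, the control flow of scenario (d) as a simple loop over consecutive scan pairs. The callables m1 and m2 stand in for trained instances of ML model 110 (two-channel and three-channel, respectively), and the name longitudinal_cascade is illustrative only; this is a sketch, not a definitive implementation.

    def longitudinal_cascade(scans, m1, m2, prior_segmentations=None):
        """Cascaded pairwise analysis of a chronologically ordered scan
        sequence S1..Sn (oldest first).

        m1: callable taking (scan_a, scan_b) and returning their two lesion
            segmentations (LSa, LSb) -- as in scenario (b).
        m2: callable taking (current_scan, prior_scan, prior_segmentation)
            and returning the lesion segmentation of the current scan -- as
            in scenario (c).
        prior_segmentations: optional dict mapping a scan index to a known
            (e.g., manually annotated) lesion segmentation; when available,
            it is used instead of the computed one.
        """
        if prior_segmentations is None:
            prior_segmentations = {}
        n = len(scans)
        if n < 2:
            raise ValueError("a longitudinal cascade requires at least two scans")
        segmentations = [None] * n
        # Start with the oldest pair (S1, S2) using the two-channel model M1.
        segmentations[0], segmentations[1] = m1(scans[0], scans[1])
        # Process each subsequent pair (Si, Si+1) with the three-channel model M2,
        # feeding it the given or computed lesion segmentation LSi of the prior scan.
        for i in range(1, n - 1):
            prior_seg = prior_segmentations.get(i, segmentations[i])
            segmentations[i + 1] = m2(scans[i + 1], scans[i], prior_seg)
        return segmentations

For example, calling longitudinal_cascade on a sequence of four registered scans would yield four lesion segmentations, which may then be aggregated into a longitudinal classification 130A/130B (e.g., new lesion, disappearing lesion, a change in a lesion).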
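
The report sketch referenced above is purely an illustration of a report data element such as notification 40, aggregating per-scan volumetric metrics and coarse change labels along the timeline. The labels and the simple volume-based rules are assumptions made for the example and do not reflect the classification logic of ML model 110.

    def lesion_evolution_report(segmentations, timestamps, voxel_volume_mm3):
        """Summarize lesion evolution along the timeline of a scan sequence.

        segmentations: binary lesion masks (e.g., numpy arrays), one per scan,
            in chronological order.
        timestamps: acquisition times matching the segmentations.
        voxel_volume_mm3: volume of a single voxel, used to convert voxel
            counts into a volumetric metric.
        """
        report = []
        prev_volume = None
        for mask, when in zip(segmentations, timestamps):
            volume = float(mask.sum()) * voxel_volume_mm3
            if prev_volume is None:
                change = "baseline"
            elif prev_volume == 0 and volume > 0:
                change = "new lesion"
            elif prev_volume > 0 and volume == 0:
                change = "disappearing lesion"
            elif volume > prev_volume:
                change = "lesion growth"
            elif volume < prev_volume:
                change = "lesion shrinkage"
            else:
                change = "stable"
            report.append({"time": when, "lesion_volume_mm3": volume, "change": change})
            prev_volume = volume
        return report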

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A system and a method for segmentation of findings in a scan of an imaging system, by at least one processor, may include: providing a machine learning (ML) model comprising a plurality of channels, the ML model being pretrained to (a) receive, via the plurality of channels, a corresponding plurality of scans depicting at least a portion of a patient organ, and (b) produce a segmentation of findings in at least one of the plurality of scans; receiving a target scan from the imaging system, depicting at least a portion of an organ of a target patient; introducing the target scan as input to at least two channels of the plurality of channels; and obtaining, from the ML model, a segmentation of at least one finding depicted in the target scan, based on said training of the ML model.
PCT/IL2023/050853 2022-08-14 2023-08-14 Method and system for segmentation of findings in a scan of an imaging system Ceased WO2024038443A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202263397877P 2022-08-14 2022-08-14
US63/397,877 2022-08-14

Publications (1)

Publication Number Publication Date
WO2024038443A1 true WO2024038443A1 (fr) 2024-02-22

Family

ID=89941398

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2023/050853 Ceased WO2024038443A1 (fr) 2022-08-14 2023-08-14 Method and system for segmentation of findings in a scan of an imaging system

Country Status (1)

Country Link
WO (1) WO2024038443A1 (fr)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200167930A1 (en) * 2017-06-16 2020-05-28 Ucl Business Ltd A System and Computer-Implemented Method for Segmenting an Image

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200167930A1 (en) * 2017-06-16 2020-05-28 Ucl Business Ltd A System and Computer-Implemented Method for Segmenting an Image

Similar Documents

Publication Publication Date Title
Hennessey et al. Artificial intelligence in veterinary diagnostic imaging: A literature review
Ali et al. Machine learning based automated segmentation and hybrid feature analysis for diabetic retinopathy classification using fundus image
Bhandarkar et al. Deep learning based computer aided diagnosis of Alzheimer’s disease: a snapshot of last 5 years, gaps, and future directions
US20250139767A1 (en) Machine learning for predicting cancer genotype and treatment response using digital histopathology images
Kang et al. Deep learning based on ResNet-18 for classification of prostate imaging-reporting and data system category 3 lesions
Ho et al. Feature-level ensemble approach for COVID-19 detection using chest X-ray images
Anlin Sahaya Infant Tinu et al. Detection of brain tumour via reversing hexagonal feature pattern for classifying double-modal brain images
Jain et al. Early detection of brain tumor and survival prediction using deep learning and an ensemble learning from radiomics images
Shamrat et al. Analysing most efficient deep learning model to detect COVID-19 from computer tomography images
Gleichgerrcht et al. Radiological identification of temporal lobe epilepsy using artificial intelligence: a feasibility study
Sahu et al. Analysis of deep learning methods for healthcare sector-medical imaging disease detection
Bentaher et al. R2A-UNET: double attention mechanisms with residual blocks for enhanced MRI image segmentation
Luong et al. A computer-aided detection to intracranial hemorrhage by using deep learning: a case study
Liu et al. Dual-branch image projection network for geographic atrophy segmentation in retinal OCT images
Najjar et al. Hybrid Deep Learning Model for Hippocampal Localization in Alzheimer's Diagnosis Using U-Net and VGG16
US10839513B2 (en) Distinguishing hyperprogression from other response patterns to PD1/PD-L1 inhibitors in non-small cell lung cancer with pre-therapy radiomic features
WO2024038443A1 (fr) Method and system for segmentation of findings in a scan of an imaging system
Jun et al. Medical data science in rhinology: Background and implications for clinicians
Guttulsrud Generating synthetic medical images with 3d gans
Azam Zia et al. Identification of Alzheimer disease by using hybrid deep models
Zaman et al. Efficient labelling for efficient deep learning: the benefit of a multiple-image-ranking method to generate high volume training data applied to ventricular slice level classification in cardiac MRI
Bardwell et al. Cognitive impairment prediction by normal cognitive brain MRI scans using deep learning
Karpagam et al. Automated diagnosis system for Alzheimer disease using features selected by artificial bee colony
Wang et al. Detection of mild cognitive impairment based on attention mechanism and parallel dilated convolution
Bhagat et al. Computational Intelligence approach to improve the Classification accuracy of Brain Tumor Detection

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 23854642; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 23854642; Country of ref document: EP; Kind code of ref document: A1)