
WO2018205922A1 - Methods and system for pulmonary function test based on medical imaging and machine learning - Google Patents


Info

Publication number
WO2018205922A1
Authority
WO
WIPO (PCT)
Prior art keywords
images
learning based
deep
machine learning
pulmonary function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2018/085987
Other languages
English (en)
Inventor
Yin Zhou
Hui Liu
Taofeng ZHU
Meng Zhu
Zhouguang HUI
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Complexis Medical Inc
Original Assignee
Suzhou Complexis Medical Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Complexis Medical Inc filed Critical Suzhou Complexis Medical Inc
Publication of WO2018205922A1 Critical
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10116 X-ray image
    • G06T 2207/10132 Ultrasound image
    • G06T 2207/10136 3D ultrasound image
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30061 Lung

Definitions

  • The present disclosure relates to automatic and accurate pulmonary function testing and, more particularly, to the use of image segmentation methods to segment and reconstruct the regions of interest in thoracic diagnostic medical images, and to the use of machine learning methods to derive pulmonary function indicators from the features extracted from these regions.
  • Pulmonary function is vital to the health condition of a person.
  • The volume of the lungs is roughly homogeneously taken up by alveoli.
  • Each alveolus consists of three kinds of elements. Two of them are lung tissue and the blood vessels going through it, which together form the alveolar wall. The third is the volume confined in an alveolus, which takes in air during inhalation and is uniformly emptied during exhalation.
  • During inhalation and exhalation, i.e. respiration, inhaled air goes through the trachea and bronchi, fills up and enlarges the alveoli, and eventually leaves the alveoli after air exchange, while refreshed blood goes to other parts of the body through the cardiovascular system.
  • The total volume of the alveoli, i.e. the lung capacity, is therefore a central quantity evaluated by a pulmonary function test (PFT).
  • Medical imaging techniques may help physicians improve diagnosis by increasing the objectivity of the evaluation. These medical images give physicians a direct impression of pulmonary structures and a certain degree of knowledge of what to expect during clinical examination. Nevertheless, the information conveyed in a medical image has to be interpreted with knowledge of the imaging modality behind it, both to exclude regions not contributing to pulmonary capacity and to identify the condition of the alveoli represented by the pixels in the lung region, i.e. a proper “segmentation” of anatomic structures and an accurate “translation” from pixel values to the physical condition of the alveoli. Besides, while a patient has to maintain one kind of posture in a traditional examination, the patient has to take a different posture before an imaging device, which brings extra difficulty to diagnosis.
  • Some software tools for imaging-based PFTs require images from successive time phase points over a whole cycle of respiration and localize the low-attenuation area as the region with emphysematous changes. While this technique improves the recognition of the emphysematous region, a higher radiation dose from imaging, such as CT, is imposed on the patient, which may lead to more radiation-related side effects.
  • An objective of this invention is to provide effective and efficient solutions for this purpose.
  • The present disclosure is directed to methods and systems for pulmonary function test based on diagnostic imaging and machine learning.
  • Provided herein is a method for pulmonary function test based on imaging, comprising:
  • The set of diagnostic thoracic images is received from one or more medical imaging modalities for radiotherapy, preferably selected from one or more of digital radiography (DR), computerized tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), ultrasound, and single-photon emission computed tomography (SPECT).
  • The diagnostic thoracic images are two-dimensional or three-dimensional.
  • The lung segmentation and region condition classification comprises: segmenting the pulmonary sacs for each lung; and classifying the condition of each pulmonary alveolar sac into a set of categories including, but not limited to, functioning and malfunctioning.
  • The segmentation method is selected from one or a combination of: connected-component, threshold-based, Canny edge, level-set, active contour, and pixel- or voxel-wise machine learning based segmentation methods; the simplest of these options is sketched below.
  • The machine learning based segmentation method is selected from one or a combination of: neural network, SVM, random forest, AdaBoost, and deep learning based methods.
  • The deep learning based method is selected from one or a combination of: convolutional neural network, deep Boltzmann machine, stacked (de-noising) auto-encoder, and deep belief network.
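A minimal sketch of the simplest of these options, a threshold-based segmentation followed by connected-component filtering. It assumes scikit-image and CT input in Hounsfield units; the library choice, the -320 HU threshold, and the size cutoff are illustrative assumptions, not values taken from this disclosure.

```python
import numpy as np
from skimage import measure, morphology

def rough_lung_mask(ct_slice_hu: np.ndarray) -> np.ndarray:
    """Candidate lung mask for one CT slice given in Hounsfield units (HU)."""
    air_like = ct_slice_hu < -320            # lung parenchyma is mostly air-like on CT
    air_like = morphology.remove_small_objects(air_like, min_size=500)
    labels = measure.label(air_like)         # connected-component labeling
    # Drop components touching the image border: air outside the body, not lung.
    border_ids = np.unique(np.concatenate(
        [labels[0, :], labels[-1, :], labels[:, 0], labels[:, -1]]))
    return (labels > 0) & ~np.isin(labels, border_ids)
```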
  • The pulmonary region condition classification method is a machine learning based method, which takes as input features from the region in a single image, or the union of features from the same region in a group of images taken at different time phase points during respiration.
  • The machine learning based method is selected from one or a combination of: neural network, SVM, random forest, AdaBoost, and deep learning based methods.
  • The deep learning based method is selected from one or a combination of: convolutional neural network, deep Boltzmann machine, stacked (de-noising) auto-encoder, and deep belief network.
  • The derivation of pulmonary function indicators is based on a machine learning method.
  • The machine learning based method is selected from one or a combination of: neural network, SVM, random forest, AdaBoost, and deep learning based methods.
  • The deep learning based method is selected from one or a combination of: convolutional neural network, deep Boltzmann machine, stacked (de-noising) auto-encoder, and deep belief network.
  • The pulmonary function indicators comprise tidal volume (VT), inspiratory reserve volume (IRV), expiratory reserve volume (ERV), residual volume (RV), total lung capacity (TLC), inspiratory capacity (IC), functional residual capacity (FRC), vital capacity (VC), forced vital capacity (FVC), forced expiratory volume in 1 second (FEV1), forced expiratory flow (FEF), forced inspiratory flow rates (FIFs), and maximum voluntary ventilation (MVV).
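The volume indicators above are linked by standard spirometric identities; the relations in this sketch are general physiology, not equations stated in this disclosure.

```python
def derived_capacities(vt: float, irv: float, erv: float, rv: float) -> dict:
    """Standard spirometric identities relating lung volumes (liters)."""
    ic = vt + irv           # inspiratory capacity
    frc = erv + rv          # functional residual capacity
    vc = vt + irv + erv     # vital capacity
    tlc = vc + rv           # total lung capacity
    return {"IC": ic, "FRC": frc, "VC": vc, "TLC": tlc}

# Typical adult values, for illustration only:
print(derived_capacities(vt=0.5, irv=3.0, erv=1.1, rv=1.2))
```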
  • Also provided herein is a system having computer program code stored on a non-transitory computer-readable medium for pulmonary function test, comprising:
  • a central station having a processor unit for storing and processing the image data of a diagnostic thoracic imaging into pulmonary function indicators;
  • a user interface device connected to the central station, configured for:
  • a. having an interface for receiving a set of diagnostic thoracic images from an imaging device;
  • wherein the processing of the image data comprises:
  • The central station is selected from a standalone workstation or a central server.
  • The user interface device is selected from a device directly connected to the central station or a remote device remotely connected to the central station via a local network, the internet, or a wireless network.
  • The remote device is a device with a pure web-browser based interface and/or a wireless mobile device.
  • The lung segmentation and region-of-interest classification comprises: segmenting the pulmonary sacs for each lung; and classifying the condition of each pulmonary alveolar sac into two categories: functioning and malfunctioning.
  • The set of diagnostic thoracic images is received from one or more medical imaging modalities for radiotherapy, preferably selected from one or more of digital radiography (DR), computerized tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), ultrasound, and single-photon emission computed tomography (SPECT).
  • The diagnostic thoracic images are two-dimensional or three-dimensional.
  • The segmentation method is selected from one or a combination of: connected-component, threshold-based, Canny edge, level-set, active contour, and pixel- or voxel-wise machine learning based segmentation methods.
  • The machine learning based segmentation method is selected from one or a combination of: neural network, SVM, random forest, AdaBoost, and deep learning based methods.
  • The deep learning based method is selected from one or a combination of: convolutional neural network, deep Boltzmann machine, stacked (de-noising) auto-encoder, and deep belief network.
  • The pulmonary region condition classification method is a machine learning based method, which takes as input features from the region in a single image, or the union of features from the same region in a group of images taken at different time phase points during respiration.
  • The machine learning based method is selected from one or a combination of: neural network, SVM, random forest, AdaBoost, and deep learning based methods.
  • The deep learning based method is selected from one or a combination of: convolutional neural network, deep Boltzmann machine, stacked (de-noising) auto-encoder, and deep belief network.
  • The derivation of pulmonary function indicators is based on a machine learning method.
  • The machine learning based segmentation method is selected from one or a combination of: neural network, SVM, random forest, AdaBoost, and deep learning based methods.
  • The deep learning based method is selected from one or a combination of: convolutional neural network, deep Boltzmann machine, stacked (de-noising) auto-encoder, and deep belief network.
  • The pulmonary function indicators comprise tidal volume (VT), inspiratory reserve volume (IRV), expiratory reserve volume (ERV), residual volume (RV), total lung capacity (TLC), inspiratory capacity (IC), functional residual capacity (FRC), vital capacity (VC), forced vital capacity (FVC), forced expiratory volume in 1 second (FEV1), forced expiratory flow (FEF), forced inspiratory flow rates (FIFs), and maximum voluntary ventilation (MVV).
  • Fig. 1 is an illustration of the structure of the lung.
  • Fig. 2 is a work-flow of pulmonary function test based on diagnostic imaging and machine learning.
  • Fig. 3 is an illustration of a system for pulmonary function test based on diagnostic imaging and machine learning, as well as some other related devices.
  • Fig. 4 is an illustration of the machine learning model with the support vector machine (SVM) method for lung segmentation.
  • Fig. 5 is an illustration of the deep convolution neural network (DCNN) for lung segmentation.
  • Fig. 6 is an illustration of the deep convolution neural network (DCNN) for classification of pulmonary region conditions.
  • Fig. 7 is an illustration of the machine learning model with the support vector machine (SVM) method for prediction of pulmonary function indicators.
  • “Pulmonary function” refers to the lungs’ capability in respiration.
  • Pulmonary function testing (PFT) is a complete evaluation of the respiratory system, including patient history, physical examinations, chest X-ray examinations, arterial blood gas analysis, and tests of pulmonary function. The primary purpose of pulmonary function testing is to identify the severity of pulmonary impairment. Pulmonary function testing has diagnostic and therapeutic roles and helps clinicians answer some general questions about patients with lung disease. PFTs are normally performed by a respiratory therapist and measured by procedures such as spirometry.
  • An “alveolar sac”, or a small cluster of alveoli, is the place where the pulmonary artery meets the pulmonary vein. Fresh air coming through the bronchi and the alveolar duct at the end of them is held in an alveolar sac and participates in air exchange with the pulmonary artery and vein. In most cases, several alveolar sacs are very close to each other and connect to the “tree” of bronchi through the same bronchiole, i.e. the endmost part of the bronchi.
  • Alveolar conditions are states describing the performance of alveoli in respiration. Most alveoli in a patient should be in a “functioning” condition for the patient to go through his daily routines. A “malfunctioning” condition, or a “disorder of the respiratory system”, can be classified into four general areas: obstructive conditions, e.g. emphysema; restrictive conditions, e.g. fibrosis; vascular diseases, e.g. pulmonary embolism; and other diseases. For example, dust of various scales from smoking, polluted air, or bad working conditions may aggregate in parts of the bronchi, especially the bronchioles, so that air no longer goes into and comes out of the alveolar sacs connected to said bronchioles.
  • Alveoli are likely to fall into this condition during inhalation, leaving alveolar sacs of inflated alveoli attached to obstructed bronchioles. An infant of pre-term birth may suffer from collapsed alveoli due to a lack of so-called “alveolar type II cells” in underdeveloped lungs. Furthermore, an obstruction in either the pulmonary artery or the pulmonary vein may hinder or even stop blood flow in that part of the lungs. While this condition causes a relatively small change in the behavior of the alveoli, it changes the appearance of the blood vessels in a thoracic medical image.
  • A “respiratory cycle” or “inhalation-exhalation cycle” consists of a full inhalation and a successive full exhalation, and a person goes through many cycles of respiration every day.
  • During inhalation, an alveolus inflates and holds fresh air coming in through the bronchi and the alveolar duct at the end of them. Gases are exchanged between the alveolar cavity holding the fresh air and the pulmonary artery and vein going around the alveolus. Then begins exhalation, during which the alveolus is emptied and shrinks in volume so as to push the not-so-fresh air back to the atmosphere.
  • A “time phase point of the respiratory cycle” refers to an instant between the beginning and the end of a respiratory cycle. Images taken at the same time phase point of different respiratory cycles appear in roughly the same manner, while images taken at different time phase points of a single cycle appear differently due to the change in alveolar volume.
  • “Diagnostic medical imaging” is the technique and process of creating visual representations of the interior of a body for clinical analysis and medical intervention, as well as visual representations of the function of some organs or tissues (physiology). Diagnostic medical imaging seeks to reveal internal structures hidden by the skin and bones in order to diagnose disease. Diagnostic medical imaging also establishes a database of normal anatomy and physiology to make it possible to identify abnormalities. Although imaging of removed organs and tissues can be performed for medical reasons, such procedures are usually considered part of pathology instead of medical imaging.
  • Radiology uses the imaging technologies of X-ray radiography, magnetic resonance imaging, medical ultrasonography or ultrasound, endoscopy, elastography, tactile imaging, thermography, medical photography, and nuclear medicine functional imaging techniques such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT).
  • A CT scan uses computer-controlled X-rays to create images of the body.
  • An X-ray tube is rotated around the patient.
  • X-rays are emitted by the tube as it traverses around the body.
  • Linear detectors are positioned on the opposite side of the X-ray tube to receive the transmitted X-ray beams after attenuation. Since the X-ray attenuation properties of various tissues differ, the final transmitted X-rays can be correlated to the tissue properties within their path.
  • The detectors collect the profiles of X-rays of different strengths that have passed through the patient and generate the projection data. Through backprojection, the cross-sectional image slices are then reconstructed from the collected data, as sketched below.
  • CT scan images are three-dimensional and are preferably interpreted as a series of transverse images in slices.
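As a minimal sketch of this reconstruction step, the following applies filtered backprojection to a synthetic phantom. The scikit-image library and the parallel-beam geometry are assumptions of the sketch; clinical scanners use fan- or cone-beam geometries and their own reconstruction pipelines.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

slice_2d = shepp_logan_phantom()                      # stand-in for one tissue layer
angles = np.linspace(0.0, 180.0, 180, endpoint=False)

sinogram = radon(slice_2d, theta=angles)              # detector profiles (projection data)
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")  # backprojection
```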
  • MRI uses radio waves in the presence of a strong magnetic field that surrounds the opening of the MRI machine where the patient lies to get tissues to emit radio waves of their own. Different tissues (including tumors) emit a more or less intense signal based on their chemical makeup, so a picture of the body organs can be displayed on a computer screen. Much like CT scans, MRI can produce 3D images of sections of the body, but MRI is sometimes more sensitive than CT scans for distinguishing soft tissues.
  • PET scan creates computerized images of chemical changes, such as sugar metabolism, that take place in tissue.
  • The patient is given an injection of a substance that consists of a combination of a sugar and a small amount of non-harmful, radioactively labeled sugar.
  • The radioactive sugar can help in locating a tumor, because cancer cells take up or absorb sugar more avidly than other tissues in the body, so the radioactive sugar accumulates in the tumor.
  • A PET scanner is used to detect the distribution of the sugar in the tumor and in the body. In some embodiments, by the combined matching of a CT scan with PET images, there is an improved capacity to discriminate normal from abnormal tissues.
  • SPECT uses non-harmful radioactive tracers and a scanner to record data that a computer constructs into 2D or 3D images.
  • A small amount of a radioactive drug is injected into a vein, and a scanner is used to make detailed images of areas inside the body where the radioactive material is taken up by the cells.
  • SPECT can give information about blood flow to tissues and chemical reactions (metabolism) in the body.
  • A “region of interest (ROI)” is a selected subset of samples in a set of medical images identified for a clinical purpose. In the context of radiotherapy, it may refer, in the discretized version, to a sub-region of pixels in a slice of a 2D medical image or a sub-region of voxels in the reconstructed 3D image; or it may refer, in the continuous version, to the area inside a closed boundary curve in a slice of a 2D medical image or the volume inside a closed boundary surface in the reconstructed 3D image.
  • The ROI may be the alveoli or the vessels in the lung, which provide useful information for the prediction of pulmonary function.
  • Image segmentation recognizes an object or a number of objects displayed in an unspecified image and localizes the placement of said objects in said image.
  • The purpose of image segmentation here is to provide a set of medical images with related ROIs, which may provide valuable information for the diagnosis of disease.
  • The result of “image segmentation” is a “segmentation” in the form of an image containing sub-regions of pixels labeled with distinct ROI IDs referring to the various ROIs, as illustrated below. An accurate segmentation is important for an effective diagnosis, and an automatic segmentation is preferred to reduce the work-hours spent on medical image interpretation and diagnosis.
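For concreteness, such a label image can be held as an integer array with one ROI ID per pixel. The IDs and region shapes below are hypothetical, chosen only to illustrate the representation.

```python
import numpy as np

# 0 = background, 1 = left lung, 2 = right lung (hypothetical ROI IDs)
label_image = np.zeros((512, 512), dtype=np.uint8)
label_image[100:400, 60:220] = 1       # pixels labeled as the left-lung ROI
label_image[100:400, 300:460] = 2      # pixels labeled as the right-lung ROI

left_lung_area_px = int(np.count_nonzero(label_image == 1))
```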
  • Machine learning refers to the field of computer science that, according to Arthur Samuel in 1959, gives “computers the ability to learn without being explicitly programmed.” Evolved from the study of pattern recognition and computational learning theory in artificial intelligence, machine learning explores the study and construction of algorithms that can learn from and make predictions on data; such algorithms overcome strictly static program instructions by making data-driven predictions or decisions through building a model from sample inputs. Machine learning is employed in a range of computing tasks where designing and programming explicit algorithms with good performance is difficult or infeasible; example applications include medical image segmentation and understanding.
  • Pixel- or voxel-wise recognition is enabled by learning from ground-truth pixel or voxel data and making predictions on each pixel or voxel of the image to be segmented.
  • Medical condition recognition is enabled by learning from, and making predictions on, image data.
  • A “support vector machine (SVM)” is a supervised machine learning model used for classification and regression analysis.
  • An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall.
  • SVMs can efficiently perform a non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces.
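A minimal sketch of such a kernelized SVM classifier, using scikit-learn (the library and the toy data are assumptions of this sketch, not named in the disclosure):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (np.hypot(X[:, 0], X[:, 1]) < 1.0).astype(int)  # circular boundary: not linearly separable

clf = SVC(kernel="rbf", C=1.0, gamma="scale")       # kernel trick: implicit high-dim mapping
clf.fit(X, y)
print(clf.n_support_)                               # number of support vectors per class
print(clf.predict([[0.2, 0.1], [2.0, 2.0]]))        # inside vs. outside the learned boundary
```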
  • Deep learning refers to a class of machine learning algorithms that: (1) use a cascade of many layers of nonlinear processing units for feature extraction and transformation, where each successive layer uses the output of the previous layer as input; the algorithms may be supervised or unsupervised, and applications include pattern analysis (unsupervised) and classification (supervised); (2) are based on the (unsupervised) learning of multiple levels of features or representations of the data, where higher-level features are derived from lower-level features to form a hierarchical representation; (3) are part of the broader machine learning field of learning representations of data; and (4) learn multiple levels of representations that correspond to different levels of abstraction, the levels forming a hierarchy of concepts.
  • An observation (e.g., an image) can be represented in many ways, such as a vector of intensity values per pixel, or in a more abstract way as a set of edges, regions of particular shape, etc.
  • Deep Convolutional Neural Network refers to a type of feed-forward artificial neural network in which the connectivity pattern between its neurons is inspired by the organization of the animal visual cortex. Individual cortical neurons respond to stimuli in a restricted region of space known as the receptive field. The receptive fields of different neurons partially overlap such that they tile the visual field. The response of an individual neuron to stimuli within its receptive field can be approximated mathematically by a convolution operation. Convolutional networks were inspired by biological processes and are variations of multilayer perceptrons designed to use minimal amounts of preprocessing.
  • Embodiments of the present disclosure are generally related to automatic and accurate pulmonary function testing. The methods and systems disclosed herein enable a user to automatically produce a set of diagnostic indicators with a system of devices integrated for this purpose, at an unspecified time.
  • In FIG. 1, an exemplary illustration of the lungs 100 of a patient is provided. Thoracic images of the lungs are acquired from an imaging device and processed following a work-flow 200 with an integrated Pulmonary Function Test (PFT) system and related devices 300.
  • This PFT system is implemented with machine learning methods selected from, but not limited to: an SVM for segmenting lungs and ROIs in lungs 400, a DCNN for segmenting lungs and ROIs in lungs 500, a DCNN for classifying ROIs according to pulmonary region conditions 600, and an SVM for predicting pulmonary function indicators 700.
  • Referring again to FIG. 1, an exemplary illustration of the lungs 100 of a patient is provided.
  • Anatomic structures not shown in the figure are considered not to contribute to respiration.
  • Anatomic structures like the mouth, the larynx 110, the trachea 120, and the bronchi 130, including the primary bronchi 131, secondary bronchi 132, tertiary bronchi 133, and bronchioles 134, are considered not to contribute to lung capacity.
  • The alveoli 144 are mainly in lung tissues other than these two kinds of anatomic structures. Since the diameter of an alveolus is about 0.2 mm, a pixel in a thoracic image roughly describes the condition of a small cluster of alveoli, i.e. an alveolar sac, connected to an end of the “tree” of bronchi. With proper methods implemented in a system of adequate hardware, a detailed recognition of the conditions of these sacs can be achieved based on a segmentation of the thoracic images.
  • A “slice” of thoracic image in a target set of images represents information gathered from a certain imaging modality about a layer of tissue with a non-trivial thickness. Hence, the thicker the “slice” is, the vaguer its appearance. This makes it necessary to reconstruct the 3D alveolar regions for an accurate prediction of pulmonary function indicators based on the thoracic image, which is packed with information about lung tissues.
  • In FIG. 2, a work-flow of pulmonary function test based on diagnostic imaging and machine learning 200 is provided.
  • The work-flow starts at step 201:
  • a target set of diagnostic thoracic images including the lungs 100 is acquired;
  • the lung regions are segmented from the other anatomic structures in the thoracic images with some straightforward methods;
  • ROIs of alveolar regions are segmented from the lung regions in the thoracic images;
  • ROIs of alveolar regions are put into two or more categories, e.g. functioning and malfunctioning, according to the information conveyed in the thoracic images;
  • ROIs of alveolar regions are reconstructed into 3D alveolar regions according to the information conveyed in the thoracic images and in the condition classification of the ROIs;
  • at step 231, features are extracted from the thoracic images, the reconstructed 3D alveolar regions, and the condition classification of the alveolar regions;
  • a set of pulmonary function indicators is derived based on these features, as sketched after this list.
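Sketched end to end, the work-flow reduces to the chain below. Every function name is a hypothetical placeholder for the corresponding module, injected as a callable so the skeleton itself stays runnable.

```python
def pulmonary_function_test(images, segment_lungs, segment_rois, classify_rois,
                            reconstruct_3d, extract_features, derive_indicators):
    """Hypothetical end-to-end pipeline mirroring work-flow 200."""
    lung_masks = segment_lungs(images)                         # lungs vs. other anatomy
    roi_masks = segment_rois(images, lung_masks)               # ROIs of alveolar regions
    conditions = classify_rois(images, roi_masks)              # e.g. functioning / malfunctioning
    volume_3d = reconstruct_3d(images, roi_masks, conditions)  # 3D alveolar regions
    features = extract_features(images, volume_3d, conditions)
    return derive_indicators(features)                         # VT, FEV1, and other indicators
```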
  • The Pulmonary Function Test (PFT) system 302 acquires a target set of diagnostic images from the imaging device 301 following a DICOM protocol 304 to generate a set of indicators 305. A user at the user interface of the PFT system 311, or at a remote user interface device 303, may view these indicators.
  • An embodiment of the PFT system 302 in the present disclosure consists of processors 310, a user interface 311, storage devices 312, and a memory 313.
  • The processors 310 execute the code of the PFT engine loaded in the memory 313.
  • A user may interact with this system at the user interface 311.
  • The storage devices 312 store the code and/or configuration files of the machine learning models 320 utilized by the PFT engine 313 and its software modules once they are loaded into memory when the system is activated.
  • The target set of diagnostic images acquired from the imaging device 321 is also stored in one or more of the storage devices 312. When the PFT engine is loaded into the memory, the engine processes the diagnostic images 321 with its modules in an orderly fashion to generate a set of indicators, which may be viewed at the user interface 311 or at a remote user interface device 303.
  • The PFT engine 313 mainly consists of three software modules, i.e. the segmentation module 330, the classification module 331, and the indicator module 332.
  • The memory loads the code of the PFT engine 313 from the storage devices 320 through a series of processes, wherein the segmentation module 330 loads its code and initializes with its configuration file through process 340, the classification module 331 through process 341, and the indicator module 332 through process 342.
  • Once the PFT system 302 finishes activation, these modules are called upon in an orderly fashion.
  • The PFT engine 313 retrieves the diagnostic images 321 and feeds them to the segmentation module 330 to generate label images representing a segmentation of the ROIs of alveolar regions.
  • The PFT engine 313 then feeds the label images of the segmentation and the diagnostic images 321 into the classification module 331 to generate label images representing a classification of the ROIs according to alveolar conditions, e.g. functioning and malfunctioning alveoli and other types of conditions.
  • The PFT engine 313 then feeds the thoracic images, the label images of the segmentation of ROIs, and the label images of the condition classification of ROIs to the indicator module 332. The 3D alveolar regions are reconstructed based on this information. This module then identifies and extracts features about the sacs and derives a set of pulmonary function indicators based on these features.
  • The PFT engine 313 may provide the pulmonary function indicators to the user interface 311 or the remote user interface device 303 on demand.
  • In FIG. 4, an illustration of the machine learning model with the Support Vector Machine (SVM) method for segmenting lungs and ROIs within lungs 400 is provided.
  • An SVM has to be trained, or be initialized with the configuration of a trained model through process 340, before being used to segment lungs and ROIs within lungs. The process of training an SVM can be summarized in three stages.
  • First, the model receives a first set of multi-dimensional data points, which is selected from parts of the thoracic images 420. These data points can be put into two types, i.e. within a region 432 and out of this region 430, and the two types of data points should roughly suggest a probable boundary 431 that separates them.
  • Second, one of the kernel functions 440 adopted in the SVM model is used to transform the first set of data points into a second set of data points with extra dimensions. When a proper kernel function is chosen, the information conveyed in the extra dimensions should make the data points linearly separable.
  • Third, the SVM figures out this boundary 451 by gradually excluding data points too far away from it while trying to make the margins on both sides of it as wide as possible.
  • Once a proper kernel function 440 has been chosen to generate the second set of data points, a boundary separating the two types of data points may be found within reasonable time, using only a small subset of data points at the edge of the margin 450, i.e. the support vectors.
  • Information about the boundary, the margin, and the support vectors can be used to formulate an explicit formula describing the boundary; the SVM model is then trained.
  • When a trained SVM is utilized by the segmentation module 330 for segmenting lungs and ROIs within lungs, input images are fed to the module, and the information conveyed in the images is processed by the SVM 402 with the kernel function 412 and the explicit formula for the boundary 451 to separate the pixels in the images into two types, i.e. within certain kinds of ROIs and outside of them, yielding label images of segmentation 403.
  • The PFT engine 313 collects these label images and feeds them to the successive modules for further treatment.
  • In FIG. 5, an illustration of the deep convolution neural network (DCNN) for segmenting lungs and ROIs within lungs 500 is provided.
  • The illustrated embodiment of the DCNN consists of three kinds of transformations used in seventeen layers to transform each of the thoracic images fed to the DCNN into sixteen intermediate feature maps of multiple channels, and eventually into a multi-channel score map. Each channel of such a score map predicts the location of the ROI related to that channel, and together the channels of a score map give a segmentation of ROIs for one of the input images in the form of a label image.
  • The first kind of transformation is the convolution + ReLU transformation 501, in which a receptive field of 3*3*channel-dimension in a feature map 531 is convolved with a kernel matrix of 3*3*channel-dimension, representing a convolution filter, to generate a value, which is then processed with a ReLU filter to generate a successive value as one element of a multi-channel pixel in the generated feature map 532. The other values of that multi-channel pixel are generated similarly from the same receptive field but with other filters.
  • By repeatedly applying said filters across the entire input feature map 531, this transformation pixel-wise generates a feature map 532 of the same spatial dimensions as the preceding feature map 531. Again, the other channels of that feature map are generated from the same preceding feature map but with other, different filters; a sketch of one such layer follows below.
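A minimal sketch of one such convolution + ReLU layer, written in PyTorch (the framework is an assumption of this illustration); padding keeps the spatial size unchanged, as described above.

```python
import torch
import torch.nn as nn

conv_relu = nn.Sequential(
    nn.Conv2d(in_channels=2, out_channels=64, kernel_size=3, padding=1),  # 3x3xchannels kernels
    nn.ReLU(inplace=True),
)
fused_image = torch.randn(1, 2, 512, 512)   # [batch, channels, height, width]
feature_map = conv_relu(fused_image)        # spatial size preserved: [1, 64, 512, 512]
```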
  • The second kind of transformation is pooling, which down-samples a preceding feature map into a feature map of smaller spatial size, as in the layers listed below. The third kind of transformation is used only in the last layer. It up-samples a feature map into an output image of the same spatial size as the input image, representing the segmentation of ROIs of alveolar regions 503, which serves as the output of the segmentation module 330 corresponding to the input image.
  • As illustrated, the DCNN takes a fused image of [512*512*2] 510 as its input.
  • One of the channels 512 represents a diagnostic image from the target set 321.
  • The other channel 511 represents an initial label image generated with straightforward non-deep-learning methods also implemented in the module;
  • a layer of convolution + ReLU receives the fused image 510 and transforms it into a successive feature map of [512*512*64] 521 with 64 pairs of convolution + ReLU filters in the layer;
  • a layer of convolution + ReLU receives the preceding feature map 521 and transforms it into a successive feature map of [512*512*64] 522 with 64 pairs of convolution + ReLU filters in the layer;
  • a layer of pooling receives the preceding feature map 522 and transforms it into a successive feature map of [256*256*128] 531 with 128 pooling filters in the layer;
  • a layer of convolution + ReLU receives the preceding feature map 531 and transforms it into a successive feature map of [256*256*128] 532 with 128 pairs of convolution + ReLU filters in the layer;
  • a layer of pooling receives the preceding feature map 532 and transforms it into a successive feature map of [128*128*256] 541 with 256 pooling filters in the layer;
  • a layer of convolution + ReLU receives the preceding feature map 541 and transforms it into a successive feature map of [128*128*256] 542 with 256 pairs of convolution + ReLU filters in the layer;
  • a layer of convolution + ReLU receives the preceding feature map 542 and transforms it into a successive feature map of [128*128*256] 543 with 256 pairs of convolution + ReLU filters in the layer;
  • a layer of pooling receives the preceding feature map 543 and transforms it into a successive feature map of [64*64*512] 551 with 512 pooling filters in the layer;
  • a layer of convolution + ReLU receives the preceding feature map 551 and transforms it into a successive feature map of [64*64*512] 552 with 512 pairs of convolution + ReLU filters in the layer;
  • a layer of convolution + ReLU receives the preceding feature map 552 and transforms it into a successive feature map of [64*64*512] 553 with 512 pairs of convolution + ReLU filters in the layer;
  • a layer of pooling receives the preceding feature map 553 and transforms it into a successive feature map of [32*32*512] 561 with 512 pooling filters in the layer;
  • a layer of convolution + ReLU receives the preceding feature map 561 and transforms it into a successive feature map of [32*32*512] 562 with 512 pairs of convolution + ReLU filters in the layer;
  • a layer of convolution + ReLU receives the preceding feature map 562 and transforms it into a successive feature map of [32*32*512] 563 with 512 pairs of convolution + ReLU filters in the layer;
  • a layer of pooling receives the preceding feature map 563 and transforms it into a successive feature map of [16*16*4096] 571 with 4096 pooling filters in the layer;
  • a layer of convolution + ReLU receives the preceding feature map 571 and transforms it into a successive feature map of [16*16*4096] 572 with 4096 pairs of convolution + ReLU filters in the layer;
  • a layer of convolution + ReLU receives the preceding feature map 572 and transforms it into a successive feature map of [16*16*ROI-count] 573 with ROI-count pairs of convolution + ReLU filters in the layer.
  • Finally, a label image of the same spatial size as the input image 510 is generated from feature map 573 with up-sampling and interpolation methods.
  • Each channel of the feature map 573 predicts the location of one kind of ROI or alveolar region in a label map corresponding to an input image, and these predictions are collected to compose a label image that describes the segmentation for the corresponding input image.
  • In this way, an input image 512 is processed by the DCNN, assisted by an initial label image 511, into a label image of segmentation 580 of the ROIs in it. A runnable sketch of this encoder follows below.
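Assembled as runnable code, the layers listed above closely resemble a VGG-style fully convolutional network. The PyTorch sketch below follows the listed spatial sizes; because a standard pooling layer does not itself change the channel count, the channel growth described above is realized here by the convolution following each pooling step. That placement is an interpretive assumption, not a detail fixed by the disclosure.

```python
import torch
import torch.nn as nn

def vgg_block(cin: int, cout: int, n_convs: int) -> list:
    layers = []
    for _ in range(n_convs):
        layers += [nn.Conv2d(cin, cout, kernel_size=3, padding=1), nn.ReLU(inplace=True)]
        cin = cout
    layers.append(nn.MaxPool2d(2))                 # halves the spatial size
    return layers

roi_count = 3                                      # hypothetical number of ROI channels
net = nn.Sequential(
    *vgg_block(2, 64, 2),                          # convs at 512x512, pooled to 256x256
    *vgg_block(64, 128, 1),                        # pooled to 128x128
    *vgg_block(128, 256, 2),                       # pooled to 64x64
    *vgg_block(256, 512, 2),                       # pooled to 32x32
    *vgg_block(512, 512, 2),                       # pooled to 16x16
    nn.Conv2d(512, 4096, 3, padding=1), nn.ReLU(inplace=True),        # 16x16x4096
    nn.Conv2d(4096, roi_count, 3, padding=1), nn.ReLU(inplace=True),  # 16x16xROI-count
    nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),  # back to 512x512
)
scores = net(torch.randn(1, 2, 512, 512))          # [1, roi_count, 512, 512] score maps
label_image = scores.argmax(dim=1)                 # one predicted ROI label per pixel
```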
  • In FIG. 6, an illustration of the deep convolution neural network (DCNN) for classification of pulmonary region conditions 600 is provided.
  • The first kind of transformation is the convolution + ReLU transformation 601, in which a receptive field of 3*3*channel-dimension in a feature map 631 is convolved with a kernel matrix of 3*3*channel-dimension, representing a convolution filter, to generate a value, which is then processed with a ReLU filter to generate a successive value as one element of a multi-channel pixel in the generated feature map 632. The other values of that multi-channel pixel are generated similarly from the same receptive field but with other filters.
  • By repeatedly applying said filters across the entire input feature map 631, this transformation pixel-wise generates a feature map 632 of the same spatial dimensions as the preceding feature map 631. Again, the other channels of that feature map are generated from the same preceding feature map but with other, different filters.
  • The third kind of transformation is used only in the last layer, which up-samples and interpolates a feature map into an output of the same spatial size as the input image, representing the alveolar regions within the ROI of lungs 603; this serves as the output of the classification module 331 corresponding to the input image.
  • As illustrated, the DCNN takes a fused image of [512*512*2] 610 as its input.
  • One of the channels 612 represents a diagnostic image from the target set 321.
  • The other channel 611 represents a label image of the segmentation of alveolar regions received from the segmentation module 330.
  • a layer of convolution + ReLU receives the fused image 610 and transforms it into a successive feature map of [512*512*64] 621 with 64 pairs of convolution + ReLU filters in the layer;
  • a layer of convolution + ReLU receives the preceding feature map 621 and transforms it into a successive feature map of [512*512*64] 622 with 64 pairs of convolution + ReLU filters in the layer;
  • a layer of pooling receives the preceding feature map 622 and transforms it into a successive feature map of [256*256*128] 631 with 128 pooling filters in the layer.
  • a layer of convolution + ReLU receives the preceding feature map 631 and transforms it into a successive feature map of [256*256*128] 632 with 128 pairs of convolution + ReLU filters in the layer;
  • a layer of pooling receives the preceding feature map 632 and transforms it into a successive feature map of [128*128*256] 641 with 256 pooling filters in the layer;
  • a layer of convolution + ReLU receives the preceding feature map 641 and transforms it into a successive feature map of [128*128*256] 642 with 256 pairs of convolution + ReLU filters in the layer;
  • a layer of convolution + ReLU receives the preceding feature map 642 and transforms it into a successive feature map of [128*128*256] 643 with 256 pairs of convolution + ReLU filters in the layer;
  • a layer of pooling receives the preceding feature map 643 and transforms it into a successive feature map of [64*64*512] 651 with 512 pooling filters in the layer;
  • a layer of convolution + ReLU receives the preceding feature map 651 and transforms it into a successive feature map of [64*64*512] 652 with 512 pairs of convolution + ReLU filters in the layer;
  • a layer of convolution + ReLU receives the preceding feature map 652 and transforms it into a successive feature map of [64*64*512] 653 with 512 pairs of convolution + ReLU filters in the layer;
  • a layer of pooling receives the preceding feature map 653 and transforms it into a successive feature map of [32*32*512] 661 with 512 pooling filters in the layer;
  • a layer of convolution + ReLU receives the preceding feature map 661 and transforms it into a successive feature map of [32*32*512] 662 with 512 pairs of convolution + ReLU filters in the layer;
  • a layer of convolution + ReLU receives the preceding feature map 662 and transforms it into a successive feature map of [32*32*512] 663 with 512 pairs of convolution + ReLU filters in the layer;
  • a layer of pooling receives the preceding feature map 663 and transforms it into a successive feature map of [16*16*4096] 671 with 4096 pooling filters in the layer;
  • a layer of convolution + ReLU receives the preceding feature map 671 and transforms it into a successive feature map of [16*16*4096] 672 with 4096 pairs of convolution + ReLU filters in the layer;
  • a layer of convolution + ReLU receives the preceding feature map 672 and transforms it into a successive feature map of [16*16*ROI-count] 673 with ROI-count pairs of convolution + ReLU filters in the layer, each pair generating a score map related to one kind of ROIs.
  • Finally, an output label image of the same spatial size as the input image 610 is generated from feature map 673 with up-sampling and interpolation methods.
  • Each channel of the feature map 673 predicts that the related ROI takes one of two or more conditions.
  • The ROIs can be put into two or more categories, e.g. functioning and malfunctioning alveoli, according to these predictions.
  • A label image can then be composed, for the corresponding input image, according to this classification of the ROIs by condition.
  • In this way, an input image 612 is processed by the DCNN for classification, assisted by a label image of segmentation 611, into an output label image of classification 680.
  • In FIG. 7, an illustration of the machine learning model with the Support Vector Machine (SVM) method for deriving pulmonary function indicators 700 is provided.
  • An SVM has to be trained, or be initialized with the configuration of a trained model through process 342, before being used for this purpose. The process of training an SVM can be summarized in three stages.
  • First, the model receives a first set of multi-dimensional data points, which is selected from parts of the thoracic images 701. These data points can be put into two types, i.e. within a region 732 and out of this region 730, and the two types of data points should roughly suggest a probable boundary 731 that separates them.
  • Second, one of the kernel functions 740 adopted in the SVM model is used to transform the first set of data points into a second set of data points with extra dimensions. When a proper kernel function is chosen, the information conveyed in the extra dimensions should make the data points linearly separable.
  • Third, the SVM figures out this boundary 751 by gradually excluding data points too far away from it while trying to make the margins on both sides of it as wide as possible.
  • Once a proper kernel function 740 has been chosen to generate the second set of data points, a boundary separating the two types of data points may be found within reasonable time, using only a small subset of data points at the edge of the margin 750, i.e. the support vectors.
  • Information about the boundary, the margin, and the support vectors can be used to formulate an explicit formula describing the boundary; the SVM model is then trained.
  • The first type of input is a thoracic image 701 from the target set 321.
  • This image is fused with a label image 403 or another label image 580.
  • These two label images both describe a segmentation of ROIs but are generated with different machine learning models.
  • The 3D alveolar regions are reconstructed from fused images of this kind with straightforward methods implemented in the indicator module 332, e.g. the tri-linear interpolation method sketched below.
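A minimal sketch of such a tri-linear reconstruction, upsampling a stack of segmented slices into an isotropic volume with scipy; the library and the spacing values are assumptions of this sketch.

```python
import numpy as np
from scipy.ndimage import zoom

# Stack of segmented alveolar-region slices: [n_slices, height, width]
slices = np.random.rand(40, 512, 512)
slice_thickness_mm, pixel_spacing_mm = 5.0, 1.0

# order=1 selects (tri-)linear interpolation along all three axes.
volume_3d = zoom(slices, (slice_thickness_mm / pixel_spacing_mm, 1.0, 1.0), order=1)
print(volume_3d.shape)   # (200, 512, 512): isotropic 1 mm voxels
```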
  • The second type of input is a label image of the condition classification of ROIs 702.
  • ROIs are given distinct labels representing one of the two or more conditions of the alveoli in them, e.g. functioning alveoli 721 and malfunctioning alveoli 720.
  • One or more label images from different time phase points 703 may help improve the quality of the predicted indicators, since functioning alveoli in these images may change appearance, e.g. a region of alveoli shrinking in volume 722.
  • The information conveyed in the thoracic image with extra channels 701, the label image of classification at one time phase point 702, and one or more label images of classification at other time phase points 703 is processed by the SVM 704 with the kernel function 712 and the explicit formula for the boundary 751. From these images, features about the pixels in the thoracic images are extracted. Assisted by information on the reconstructed 3D alveolar regions, these features are used to derive a complete set of pulmonary function indicators 705 for the diagnosis of pulmonary function; a sketch of this regression step follows below.
  • The PFT engine 313 collects these indicators and provides them to the user interface 311 or a remote user interface device 303 on demand.
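A minimal sketch of this final step, using support vector regression from scikit-learn; the feature layout, the training targets, and the use of SVR as the regression form of the SVM are all assumptions of this sketch.

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

# X: one feature vector per subject (e.g. functioning vs. malfunctioning volumes);
# y: indicators measured by conventional PFT, used here as training targets.
X_train = np.random.rand(100, 16)
y_train = np.random.rand(100, 3)                    # hypothetical [VT, FEV1, FVC] columns

model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0))
model.fit(X_train, y_train)
indicators = model.predict(np.random.rand(1, 16))   # predicted [VT, FEV1, FVC]
```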

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

Methods are provided for performing pulmonary function testing based on diagnostic imaging and machine learning. In one embodiment, a set of diagnostic thoracic images is acquired from an imaging device at one or more successive time phase points during a subject's respiration; the left and right lungs are segmented from the images, and each lung is further segmented into a plurality of regions of interest; the condition of each region of interest is classified into a specific category; a set of features is extracted from the images, the reconstructed regions, and their categories; and the pulmonary function indicators are derived from these features using machine learning models.
PCT/CN2018/085987 2017-05-08 2018-05-08 Methods and system for pulmonary function test based on medical imaging and machine learning Ceased WO2018205922A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2017083431 2017-05-08
CNPCT/CN2017/083431 2017-05-08

Publications (1)

Publication Number Publication Date
WO2018205922A1 true WO2018205922A1 (fr) 2018-11-15

Family

ID=64104329

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/085987 Ceased WO2018205922A1 (fr) 2018-05-08 Methods and system for pulmonary function test based on medical imaging and machine learning

Country Status (1)

Country Link
WO (1) WO2018205922A1 (fr)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070102011A1 (en) * 1998-06-10 2007-05-10 Asthmatx, Inc. Methods of evaluating individuals having reversible obstructive pulmonary disease
US20060239522A1 (en) * 2005-03-21 2006-10-26 General Electric Company Method and system for processing computed tomography image data
CN102240212A (zh) * 2010-05-14 2011-11-16 GE Medical Systems Global Technology Co., LLC Method and apparatus for measuring pneumothorax
CN105101878A (zh) * 2013-04-05 2015-11-25 Toshiba Medical Systems Corporation Medical image processing apparatus and medical image processing method
US20160171689A1 (en) * 2013-08-20 2016-06-16 The Asan Foundation Method for quantifying medical image

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113226459A (zh) * 2018-12-24 2021-08-06 Koninklijke Philips N.V. Automated detection of lung conditions for monitoring thoracic patients undergoing external beam radiation therapy
CN109741316A (zh) * 2018-12-29 2019-05-10 成都金盘电子科大多媒体技术有限公司 Intelligent medical image assessment system
CN109741316B (zh) * 2018-12-29 2023-03-31 成都金盘电子科大多媒体技术有限公司 Intelligent medical image assessment system
CN111563523A (zh) * 2019-02-14 2020-08-21 Siemens Healthcare GmbH COPD classification with machine-trained abnormality detection
CN111563523B (zh) * 2019-02-14 2024-03-26 Siemens Healthcare GmbH COPD classification with machine-trained abnormality detection
CN110766701A (zh) * 2019-10-31 2020-02-07 北京推想科技有限公司 Network model training method and apparatus, and region division method and apparatus
CN110930378A (zh) * 2019-11-18 2020-03-27 上海体素信息科技有限公司 Emphysema image processing method and system based on low data requirements
CN110930378B (zh) * 2019-11-18 2023-05-16 上海体素信息科技有限公司 Emphysema image processing method and system based on low data requirements
CN111028248A (zh) * 2019-12-19 2020-04-17 杭州健培科技有限公司 Method and apparatus for vein-artery separation based on CT images
CN111127482A (zh) * 2019-12-20 2020-05-08 广州柏视医疗科技有限公司 Method and system for segmenting the pulmonary trachea in CT images based on deep learning
CN111724361A (zh) * 2020-06-12 2020-09-29 深圳技术大学 Method and apparatus for displaying lesions in real time, electronic device, and storage medium
CN112308853A (zh) * 2020-10-20 2021-02-02 Ping An Technology (Shenzhen) Co., Ltd. Electronic device, medical image indicator generation method and apparatus, and storage medium
CN112580153B (zh) * 2020-12-29 2022-10-11 成都运达科技股份有限公司 Health status management system and method for vehicle running gear monitoring components
CN112580153A (zh) * 2020-12-29 2021-03-30 成都运达科技股份有限公司 Health status management system and method for vehicle running gear monitoring components
CN118452959A (zh) * 2024-05-07 2024-08-09 广州医科大学附属第一医院(广州呼吸中心) Method for calculating the radioactivity distribution in both lungs based on dynamic PET/CT imaging
CN119606409A (zh) * 2024-11-26 2025-03-14 北京大学第三医院(北京大学第三临床医学院) Lung lesion analysis system based on parametric response maps and an image generation system

Similar Documents

Publication Publication Date Title
WO2018205922A1 (fr) Methods and system for pulmonary function test based on medical imaging and machine learning
JP2022025095A (ja) Systems and methods for transformation of medical imaging using machine learning
JP6220310B2 (ja) Medical image information system, medical image information processing method, and program
JP5676269B2 (ja) Image analysis of brain image data
CN111598895A (zh) Method for measuring pulmonary function indicators based on diagnostic imaging and machine learning
US20180020998A1 (en) Medical-image processing apparatus and medical-image diagnostic apparatus
CN112884759A (zh) Method for detecting axillary lymph node metastasis status in breast cancer, and related apparatus
EP4150569B1 (fr) Functional imaging features from computed tomography images
Kamiya Deep learning technique for musculoskeletal analysis
CN115515479B (zh) Apparatus for monitoring treatment side effects
Kumaraswamy et al. A review on cancer detection strategies with help of biomedical images using machine learning techniques
JP7155274B2 (ja) Systems and methods for accelerated clinical workflow
US20140228667A1 (en) Determining lesions in image data of an examination object
Lalitha et al. Medical imaging modalities and different image processing techniques: State of the art review
Wang et al. X-Recon: learning-based patient-specific high-resolution CT reconstruction from orthogonal X-ray images
US20250308019A1 (en) Method for training a system adapted for aiding evaluation of a medical image
CN108230289A (zh) Computer-aided diagnosis system and method based on frontal chest X-ray films from physical examinations
Lauritzen et al. Evaluation of ct image synthesis methods: From atlas-based registration to deep learning
Machado et al. Radiologists' Gaze Characterization During Lung Nodule Search in Thoracic CT
CN117788614A (zh) CT reconstruction for machine consumption
Klinwichit et al. The Radiographic view classification and localization of Lumbar spine using Deep Learning Models
Althof integrityNet: a deep learning approach for pulmonary fissure integrity classification
Ratul A Class-Conditioned Deep Neural Network to Reconstruct CT Volumes from X-Ray Images: Depth-Aware Connection and Adaptive Feature Fusion
Lamrani et al. U-Net-based Artificial Intelligence for Accurate and Robust Brain Tumor Diagnosis using Magnetic Resonance Imaging
Qureshi Computer aided assessment of CT scans of traumatic brain injury patients

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18798466

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18798466

Country of ref document: EP

Kind code of ref document: A1