WO2021020198A1 - Information processing device, program, learned model, diagnosis support device, learning device, and method for generating a prediction model - Google Patents
Information processing device, program, learned model, diagnosis support device, learning device, and method for generating a prediction model
- Publication number
- WO2021020198A1 (PCT/JP2020/028074)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image data
- feature amount
- learning
- image
- information
- Legal status
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B10/00—Instruments for taking body samples for diagnostic purposes; Other methods or instruments for diagnosis, e.g. for vaccination diagnosis, sex determination or ovulation-period determination; Throat striking implements
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/05—Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
- A61B5/055—Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/02—Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
- A61B6/03—Computed tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/809—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
- G06V10/811—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data the classifiers operating on different input data, e.g. multi-modal recognition
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/30—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30016—Brain
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
Definitions
- the present invention relates to an information processing device, a program, a learned model, a diagnostic support device, a learning device, and a method for generating a prediction model, and particularly relates to an artificial intelligence technique for predicting an unknown matter using an image.
- CT Computed Tomography
- MRI Magnetic Resonance Imaging
- CAD Computer-Aided Diagnosis
- Non-Patent Document 1 proposes a method of automatically discriminating between a normal brain image and a dementia brain image by machine learning.
- Patent Document 1 discloses a diagnostic support device that predicts whether or not a patient with mild cognitive impairment will develop Alzheimer's disease within a predetermined period of time by analyzing a brain image.
- Non-Patent Document 2 proposes a method of predicting whether or not mild cognitive impairment (MCI: Mild Cognitive Impairment) progresses to Alzheimer's disease by using a multimodal recurrent neural network.
- MCI Mild Cognitive Impairment
- It is desired to support a doctor's diagnosis by predicting the progression of a disease. For example, it is desirable to be able to determine whether or not a patient with MCI will progress to Alzheimer's disease in the future.
- Non-Patent Document 1 is a technique for determining whether Alzheimer's disease or a normal state is present from an MRI image of the brain, and does not predict the future.
- Patent Document 1 and Non-Patent Document 2 are techniques for determining whether or not Alzheimer's disease will progress in the future, but the determination accuracy is not sufficient.
- Such a problem is not limited to the use of predicting the future progression of a disease from medical images; it can be understood as a problem common to processes that predict the future or past state of various matters that change over time.
- For example, in social infrastructure inspection, improved prediction accuracy is also required in applications such as predicting the future deterioration of a structure such as a bridge or a building from an image of the structure, or verification that estimates, from an image showing the current state, the state at a point in the past.
- The present invention has been made in view of such circumstances, and an object thereof is to provide an information processing device, a program, a learned model, a diagnosis support device, a learning device, and a method of generating a prediction model that are capable of performing highly accurate prediction using images.
- The information processing apparatus according to one aspect of the present disclosure includes an information acquisition unit that accepts input of image data and non-image data relating to a target matter, and a prediction unit that, based on the image data and non-image data input via the information acquisition unit, predicts the state of the matter at a time different from the time when the image data was taken. The prediction unit calculates a third feature amount, in which a first feature amount calculated from the image data and a second feature amount calculated from the non-image data are fused, by performing a weighting calculation using a calculation method that outputs combinations of products of the elements of the two feature amounts, and makes a prediction based on the third feature amount.
- According to this aspect, the prediction accuracy can be improved as compared with the case of adopting a simple linear combination (linear method).
- The term "prediction" includes the concepts of inference, estimation, and discrimination, and covers both predicting the state after the time of imaging and predicting the state at a time before the time of imaging.
- "Time when the image data was taken" refers to the time when the object from which the image data was acquired was imaged. Since the image data captures the state of the object at the time of imaging, the "time when the image data was taken" may be understood as the time at which the state shown in the image content indicated by the image data was observed.
- the term "time point” is evaluated to be approximately the same point in time, not only when it strictly points to a specific point of "time”, but also from the viewpoint of a common sense measure of the speed of change over time for the subject matter. Time range may be included. Even if the shooting time of the image data cannot be clearly specified, it is possible to perform prediction processing by regarding at least the time when the image data is input to the information processing device as the "shooting time”. ..
- "A time different from the time when the image data was taken" is a comprehensive expression including both the concept of a time after the time of imaging and that of a time before the time of imaging.
- The prediction unit can be configured to include a trained prediction model that has been machine-learned so as to receive input of image data and non-image data and to output, as a prediction result, information indicating the state of the matter at a time different from the time of imaging.
- the prediction unit may be configured by using a neural network.
- The prediction unit can perform, on the input of image data and non-image data, a class classification process that determines to which of a plurality of classes the input belongs, each class corresponding to one of a plurality of candidate states of the matter at a time different from the time of imaging, and can output the result of the class classification.
- The prediction unit can perform a two-class classification process that determines whether the state of the matter at a time different from the time of imaging, that is, the state after a specific period has elapsed from the time of imaging or the past state a specific period before the time of imaging, is a first state or a second state different from the first state, and can output the result of the two-class classification.
- The prediction unit may be configured to include a first processing unit that calculates the first feature amount from the image data, a second processing unit that calculates the second feature amount from the non-image data, and a third processing unit that calculates the third feature amount from the first feature amount and the second feature amount by performing a weighting calculation using a calculation method that outputs combinations of products of the elements.
- The weighting calculation performed by the third processing unit can be configured to include processing that multiplies the first feature amount and the second feature amount at random ratios.
- The first processing unit may be configured by using a first neural network including a plurality of convolutional layers and a first fully connected layer, and the second processing unit may be configured by using a second neural network including a second fully connected layer.
- the information processing apparatus can be configured to include a third fully connected layer for calculating the final output value from the third feature amount.
- the non-image data may be configured to include information data on matters not appearing in the image indicated by the image data.
- the non-image data can be configured to include information data in which information at a plurality of time points is inherent.
- In one aspect, the target matter is the health condition of a subject, the image data is a medical image obtained by imaging the subject, and the non-image data includes biological information of the subject. The prediction unit can be configured to predict the health condition of the subject after a specific period has elapsed from the time when the medical image was taken, or at a past time before the time when the medical image was taken.
- the term "health condition of a subject” includes various conditions related to the health of a subject, such as the condition of the subject, the progress of the disease, or whether or not the subject is a healthy person. Concept is included.
- In one aspect, the target matter is the medical condition of a subject with mild cognitive impairment, the image data is an MRI (Magnetic Resonance Imaging) image obtained by imaging the subject's brain, and the non-image data includes at least one of the subject's blood test data, genetic data, and cognitive ability score, as well as the subject's age and gender. The prediction unit can be configured to predict whether the condition of the subject will be Alzheimer's disease or mild cognitive impairment after a specific period has elapsed from the time when the MRI image was taken.
- A subject for whom the prediction unit predicts that the condition will still be mild cognitive impairment after the lapse of the specific period can be configured to be excluded from the subjects of a clinical trial.
- An information processing apparatus according to another aspect of the present disclosure includes a processor and a non-transitory computer-readable medium on which a program executed by the processor is recorded. In accordance with instructions of the program, the processor accepts input of image data and non-image data relating to a target matter, calculates a third feature amount, in which a first feature amount calculated from the image data and a second feature amount calculated from the non-image data are fused, by performing a weighting calculation using a calculation method that outputs combinations of products of the elements of the two feature amounts, and predicts, based on the third feature amount, the state of the matter at a time different from the time when the image data was taken.
- A program according to still another aspect of the present disclosure causes a computer to realize a function of accepting input of image data and non-image data relating to a target matter, a function of calculating a third feature amount, in which a first feature amount calculated from the image data and a second feature amount calculated from the non-image data are fused, by performing a weighting calculation using a calculation method that outputs combinations of products of the elements of the two feature amounts, and a function of predicting, based on the third feature amount, the state of the matter at a time different from the time when the image data was taken.
- A trained model according to still another aspect of the present disclosure is a model machine-learned so as to receive input of image data and non-image data relating to a target matter and to output information predicted from the image data and non-image data. The trained model causes a computer to function so as to calculate a third feature amount, in which a first feature amount calculated from the image data and a second feature amount calculated from the non-image data are fused, by performing a weighting calculation using a calculation method that outputs combinations of products of the elements of the two feature amounts, and to output, based on the third feature amount, information indicating the state of the matter at a time different from the time when the image data was taken.
- In one aspect, the target matter is the health condition of a subject, the image data is a medical image obtained by imaging the subject, and the non-image data includes biological information of the subject. The trained model can be configured to predict the health condition of the subject after a specific period has elapsed from the time when the medical image was taken, or at a past time before the time when the medical image was taken.
- A diagnosis support device according to still another aspect of the present disclosure includes a non-transitory computer-readable medium on which the trained model according to one aspect of the present disclosure is recorded, and a processor that operates in accordance with the trained model.
- A learning device according to still another aspect of the present disclosure includes a processor and a non-transitory computer-readable medium on which a learning program executed by the processor is recorded. In accordance with instructions of the learning program, the processor acquires learning data including image data and non-image data relating to a target matter and data indicating a known state of the matter corresponding to the combination of the image data and the non-image data, inputs the image data and the non-image data to a learning model, and performs machine learning of the learning model so that the learning model outputs, for the input of the image data and the non-image data, prediction information indicating the state of the matter at a time different from the time when the image data was taken. The learning model calculates a third feature amount, in which the first feature amount calculated from the image data and the second feature amount calculated from the non-image data are fused, by performing a weighting calculation using a calculation method that outputs combinations of products of the elements of the two feature amounts, and outputs the prediction information based on the third feature amount.
- A method of generating a prediction model according to still another aspect of the present disclosure includes acquiring learning data including image data and non-image data relating to a target matter and data indicating a known state of the matter corresponding to the combination of the image data and the non-image data, and performing machine learning of a learning model using the learning data, thereby generating a trained prediction model that outputs, for the input of image data and non-image data, prediction information indicating the state of the matter at a time different from the time when the image data was taken. The learning model calculates a third feature amount, in which a first feature amount calculated from the image data and a second feature amount calculated from the non-image data are fused, by performing a weighting calculation using a calculation method that outputs combinations of products of the elements of the two feature amounts, and outputs the prediction information based on the third feature amount.
- The method of generating a prediction model can be understood as an invention of a method of manufacturing a prediction model.
- According to the present invention, by using image data and non-image data, it is possible to accurately predict the state of a target matter at a time different from the time when the image data was taken.
- FIG. 1 is an explanatory diagram showing an outline of processing in the information processing apparatus according to the embodiment of the present invention.
- FIG. 2 is a conceptual diagram showing a network structure of a learning model used for machine learning for generating a prediction model.
- FIG. 3 is an explanatory diagram showing an outline of weighting calculation using the bilinear method.
- FIG. 4 is an explanatory diagram showing an outline of weighting calculation using the bilinear shake method.
- FIG. 5 is a hardware configuration diagram illustrating an outline of a medical image information system including an information processing apparatus according to an embodiment of the present invention.
- FIG. 6 is a block diagram showing a schematic configuration of the learning device.
- FIG. 7 is a conceptual diagram of the learning data stored in the learning data storage unit.
- FIG. 8 is a functional block diagram showing a function of learning processing in the learning device.
- FIG. 9 is a flowchart illustrating the procedure of the learning method using the learning device.
- FIG. 10 is a block diagram showing a schematic configuration of the information processing apparatus.
- FIG. 11 is a functional block diagram showing a function of dementia progression prediction processing in the information processing apparatus.
- FIG. 12 is a flowchart illustrating the procedure of the diagnosis support method using the information processing apparatus.
- FIG. 13 is a flowchart showing an example of the processing content of the prediction processing in step S24 of FIG.
- FIG. 14 is a block diagram showing an example of a computer hardware configuration.
- FIG. 1 is an explanatory diagram showing an outline of processing in the information processing apparatus 10 according to the embodiment of the present invention.
- The information processing device 10 is a computer system that performs a dementia progression prediction task: predicting whether or not a patient with mild cognitive impairment (MCI) will have progressed to Alzheimer's disease (AD) one year after baseline.
- MCI mild cognitive impairment
- AD Alzheimer's disease
- the information processing device 10 performs arithmetic processing using the trained prediction model 12 generated by machine learning.
- the prediction model 12 is, for example, a learning model constructed by using a hierarchical multi-layer neural network, and network parameters are determined by deep learning. Network parameters include the filter coefficients (weights of connections between nodes) of the filters used to process each layer and the bias of the nodes.
- the "neural network” is a mathematical model of information processing that simulates the mechanism of the cranial nerve system. Processing using a neural network can be realized by using a computer.
- the processing unit including the neural network can be configured as a program module.
- In response to the input of an MRI image IM obtained by imaging the brain of the patient who is the subject and the biological information BI of the patient, the prediction model 12 predicts whether or not the patient will have progressed from MCI to Alzheimer's disease (AD) one year later.
- The prediction model 12 can be understood as a classifier, and may also be understood as a discriminator that identifies classes.
- the biological information BI includes, for example, at least one, preferably a plurality of combinations, of blood test data, genetic data, cognitive ability score, cerebrospinal fluid data, age, and gender.
- As the biological information BI, it is preferable to use at least one of blood test data, genetic data, cognitive ability score, and cerebrospinal fluid data, together with age and gender.
- In this embodiment, blood test data, genetic data, cognitive ability score, age, and gender are used as the biological information BI input to the information processing device 10.
- Data on other biomarkers correlated with MCI or Alzheimer's disease may also be used as the biological information BI.
- In the prediction model 12, the first feature amount calculated from the input MRI image IM and the second feature amount calculated from the input biological information BI are fused by a bilinear-method calculation.
- the two-class classification is determined based on the obtained third feature amount.
- the "bilinear method" referred to here is a calculation method for calculating a combination of products of elements of two different types of feature quantities using a first feature quantity and a second feature quantity.
- The baseline for predicting the progression of dementia is, for example, the state of the target patient at the time of diagnosis; specifically, it is the state at the time when the data used for the diagnosis were acquired by carrying out various examinations, such as taking the MRI image IM and administering a cognitive ability test. Note that "one year later" need not be strict; it may be approximately one year later, including a generally accepted margin.
- The time when the MRI image IM is taken, or the baseline time at which the examinations including the taking of the MRI image IM are performed, is an example of the "time when the image data was taken" in the present disclosure. "One year later" is an example of "a time different from the time of imaging" and of "after a specific period has elapsed from the time of imaging" in the present disclosure.
- The data used for learning is, for example, data of MCI patients for whom data on the plurality of types of items shown below are available, and for whom it has been confirmed whether or not the patient actually progressed to Alzheimer's disease one year later. The plurality of types of items include MRI images, blood test data, genetic data, cognitive ability scores, age, and gender.
- As the cognitive ability score, for example, one of the ADAS (Alzheimer's Disease Assessment Scale) score, the MMSE (Mini Mental State Examination) score, the FAQ (Functional Activities Questionnaire) score, and the CDR (Clinical Dementia Rating) score, or a combination of a plurality of these, may be used.
- ADAS Alzheimer's Disease Assessment Scale
- MMSE Mini Mental State Examination
- FAQ Functional Activities Questionnaire
- CDR Clinical Dementia Rating
- the gene data may be, for example, data indicating a genotype, and specifically, test data of apolipoprotein E (ApoE).
- ApoE is a gene involved in the development of Alzheimer's disease.
- the ApoE gene has three subtypes ( ⁇ 2, ⁇ 3, ⁇ 4), of which those with " ⁇ 4" have a relatively high risk of developing Alzheimer's disease.
- Correct answer data plays the role of a teacher signal in supervised learning.
- Data of a plurality of MCI patients satisfying these data conditions are used as the set of learning data.
- the data used for learning is subjected to the following preprocessing in order to perform learning efficiently.
- For the MRI images, the brightness values are normalized and alignment with an atlas image is performed to facilitate learning. Alternatively, alignment with a standard brain may be performed.
- the preprocessing as described above may be performed by a learning device incorporating a learning program that executes machine learning, or may be performed by using a computer system different from the learning device.
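- As an illustration of the brightness normalization described above, the following is a minimal sketch in Python; the z-score formulation and the function name are assumptions, since the patent does not specify the normalization formula, and the atlas registration itself would typically be done with a dedicated tool.

```python
import numpy as np

def normalize_brightness(volume: np.ndarray) -> np.ndarray:
    """Normalize the voxel brightness values of a 3D MRI volume to zero mean and
    unit variance so that inputs to the learning model share a common scale."""
    v = volume.astype(np.float32)
    return (v - v.mean()) / (v.std() + 1e-8)  # epsilon guards against division by zero
```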
- FIG. 2 is a conceptual diagram showing a network structure of a learning model 14 used for machine learning to generate a prediction model 12. By performing deep learning using the learning model 14, network parameters are determined.
- the network structure shown in FIG. 2 may be understood as the network structure of the prediction model 12.
- the MRI image IM (p) and the biological information BI (p) are input data for learning, and “p” represents an index of the learning data.
- The learning model 14 includes a neural network 16, which includes a plurality of convolutional layers and a fully connected layer FC1 and calculates the first feature amount Fv1 from the input MRI image IM(p); a fully connected layer FC2, which calculates the second feature amount Fv2 from the input biological information BI(p); a fusion layer FU, which fuses the first feature amount Fv1 and the second feature amount Fv2 by a weighting calculation using the bilinear method; and a fully connected layer FC3, which calculates the final output value from the third feature amount Fv3 obtained from the fusion layer FU.
- Each of the blocks indicated by the symbols C1, C2, C3, C4, and C5 in FIG. 2 represents a network module in which the arithmetic processing of a plurality of layers is represented collectively as one block. For example, one block represents a group of layers in which convolution, batch normalization, a ReLU (Rectified Linear Unit) activation, convolution, batch normalization, a ReLU activation, and pooling are applied in this order.
- the vertical size of each block gradually decreasing from C1 to C5 indicates that the image size of the feature map calculated in each block gradually decreases.
- the horizontal size of each block represents the relative change in the number of channels of the feature map calculated in each block.
- The outline of the processing by the learning model 14 is as follows. For the MRI image IM(p), the first feature amount Fv1 is extracted by passing the image through the plurality of convolutional layers. The first feature amount Fv1 is represented by a vector including a plurality of elements. For the biological information BI(p), fully connected processing by the fully connected layer FC2 is performed once to obtain the second feature amount Fv2. The second feature amount Fv2 is represented by a vector including a plurality of elements. Then, the first feature amount Fv1 and the second feature amount Fv2 are fused in the fusion layer FU, and the final output value is calculated in the fully connected layer FC3 using the fused third feature amount Fv3.
- The final output value obtained from the fully connected layer FC3 may be, for example, a classification score representing the certainty (likelihood) of each class.
- The classification score may be converted into a value normalized to the range of 0 to 1, that is, a probability, by using a softmax function or the like.
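- For concreteness, the following PyTorch sketch mirrors the network structure of FIG. 2 under stated assumptions: the channel counts, the feature dimensions d1 and d2, and the use of 3D convolutions with global average pooling are not specified in the source and are chosen for illustration only.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """One block C1..C5: conv, batch norm, ReLU, conv, batch norm, ReLU, pooling."""
    def __init__(self, cin: int, cout: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(cin, cout, 3, padding=1), nn.BatchNorm3d(cout), nn.ReLU(),
            nn.Conv3d(cout, cout, 3, padding=1), nn.BatchNorm3d(cout), nn.ReLU(),
            nn.MaxPool3d(2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

class FusionModel(nn.Module):
    def __init__(self, bio_dim: int, d1: int = 64, d2: int = 16, n_classes: int = 2):
        super().__init__()
        chans = [1, 8, 16, 32, 64, 128]  # channel counts are assumptions
        self.blocks = nn.Sequential(*[ConvBlock(chans[i], chans[i + 1]) for i in range(5)])
        self.fc1 = nn.Linear(chans[-1], d1)        # FC1 -> first feature amount Fv1
        self.fc2 = nn.Linear(bio_dim, d2)          # FC2 -> second feature amount Fv2
        self.w = nn.Parameter(torch.randn(d1 * d2, d1, d2) * 0.01)  # fusion weights
        self.fc3 = nn.Linear(d1 * d2, n_classes)   # FC3 -> classification scores

    def forward(self, image: torch.Tensor, bio: torch.Tensor) -> torch.Tensor:
        f = self.blocks(image)             # (B, 128, D', H', W')
        f = f.flatten(2).mean(-1)          # global average pooling -> (B, 128)
        fv1 = self.fc1(f)                  # Fv1: (B, d1)
        fv2 = self.fc2(bio)                # Fv2: (B, d2)
        # Fusion layer FU (bilinear method): weighted sum over all products of elements.
        fv3 = torch.einsum('kij,bi,bj->bk', self.w, fv1, fv2)  # Fv3: (B, d1*d2)
        return self.fc3(fv3)               # classification scores (softmax applied outside)
```

A softmax over the two output scores then yields the class probabilities described above.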
- the neural network 16 shown in FIG. 2 is an example of the "first processing unit” and the "first neural network” in the present disclosure.
- the fully connected layer FC1 is an example of the "first fully connected layer” in the present disclosure.
- the fully connected layer FC2 is an example of the "second processing unit”, the “second fully connected layer”, and the “second neural network” in the present disclosure.
- the fusion layer FU is an example of the "third processing unit” in the present disclosure.
- the fully connected layer FC3 is an example of the "third fully connected layer” in the present disclosure.
- Each of the fully connected layers FC1, FC2, and FC3 may be configured to include a plurality of layers.
- the final output value obtained through the fully connected layer FC3 is an example of "prediction information" in the present disclosure.
- FIG. 3 is an explanatory diagram showing an outline of weighting calculation using the bilinear method in the fusion layer FU.
- x represents the first feature amount obtained from the MRI image IM (p)
- y represents the second feature amount obtained from the biological information BI (p).
- z represents a third feature amount output from the fusion layer FU.
- In the fusion layer FU, the elements of x and y are multiplied pairwise, weighted, and summed. That is, each element of z is calculated according to the following Equation 1.
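- A plausible form of Equation 1, reconstructed from the description above (the weight symbol w_kij is an assumption), is:

$$z_k = \sum_{i}\sum_{j} w_{kij}\, x_i\, y_j \qquad \text{(Equation 1)}$$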
- In the equation, i is the index of an element of x, j is the index of an element of y, and k is the index of an element of z.
- In this way, the fusion layer FU performs a weighting calculation using the first feature amount and the second feature amount by a calculation method that outputs combinations of products of the elements of the two different types of feature amounts, thereby generating a third feature amount in which the two types of feature amounts are fused.
- In contrast, in the combination processing by the linear method shown in Equation 2, the elements of x and y are each simply weighted and added.
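- A plausible form of Equation 2, reconstructed from the description of the linear method (the weight symbols u_ki and v_kj are assumptions), is:

$$z_k = \sum_{i} u_{ki}\, x_i + \sum_{j} v_{kj}\, y_j \qquad \text{(Equation 2)}$$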
- The linear method cannot take combinations of products of the elements of x and y into account.
- In the fusion processing using the bilinear method shown in Equation 1, combinations of products of the elements of x and y are taken into account, which correspondingly increases the expressive power of the network. Since taking the product of elements also has the meaning of correlating them, the prediction accuracy can be improved by considering the correlation between the two feature amounts obtained from different types of information.
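- The contrast between Equation 1 and Equation 2 can be seen concretely in a few lines of PyTorch; this is a minimal sketch, and the tensor sizes are illustrative assumptions.

```python
import torch

def linear_fuse(x, y, u, v):
    """Equation 2 (linear method): elements of x and y are weighted and added;
    no products between elements of x and y appear."""
    return x @ u.T + y @ v.T

def bilinear_fuse(x, y, w):
    """Equation 1 (bilinear method): weighted sum over all products x_i * y_j."""
    return torch.einsum('kij,bi,bj->bk', w, x, y)

x = torch.randn(4, 64)        # first feature amount Fv1 (batch of 4)
y = torch.randn(4, 16)        # second feature amount Fv2
u = torch.randn(32, 64)       # linear weights for x
v = torch.randn(32, 16)       # linear weights for y
w = torch.randn(32, 64, 16)   # bilinear weight tensor (output size 32 is an assumption)
z_lin = linear_fuse(x, y, u, v)   # (4, 32): no cross terms between x and y
z_bil = bilinear_fuse(x, y, w)    # (4, 32): includes every x_i * y_j correlation
```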
- The calculation method in the fusion layer FU is not limited to the bilinear method shown in Equation 1; for example, the bilinear shake method may be applied. In the bilinear shake method, the weighting of each element when calculating the products of the two types of feature amounts is varied by random numbers.
- FIG. 4 is an explanatory diagram showing an outline of weighting calculation using the bilinear shake method in the fusion layer FU.
- x represents the first feature amount obtained from the MRI image IM (p)
- y represents the second feature amount obtained from the biological information BI (p).
- α represents a value from 0 to 1 generated as a random number.
- α is generated for each set of elements k, i, j.
- z represents a third feature amount output from the fusion layer FU.
- In the fusion layer FU, the elements of x and y are multiplied pairwise, weighted, and summed. That is, each element of z is calculated according to the following Equation 3.
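- A plausible form of Equation 3, reconstructed on the assumption that the random value α_kij splits the weighting between x_i and y_j (the exact placement of α is an assumption), is:

$$z_k = \sum_{i}\sum_{j} w_{kij}\, (\alpha_{kij}\, x_i)\, \big((1-\alpha_{kij})\, y_j\big) \qquad \text{(Equation 3)}$$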
- In the equation, i is the index of an element of x, j is the index of an element of y, and k is the index of an element of z.
- In this way as well, the fusion layer FU performs a weighting calculation using the first feature amount and the second feature amount by a calculation method that outputs combinations of products of the elements of the two different types of feature amounts, thereby generating a third feature amount in which the two types of feature amounts are fused.
- Since the bilinear shake method considers combinations of products of the first feature amount and the second feature amount at random ratios, it is possible to prevent learning from becoming biased toward either feature amount.
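- A sketch of the bilinear shake fusion, following the reconstruction of Equation 3 given above (the placement of α and the inference-time behavior are assumptions; the source only states that α is drawn per element set k, i, j):

```python
import torch

def bilinear_shake_fuse(x, y, w, training: bool = True):
    """Bilinear shake: the product x_i * y_j is weighted at a random ratio, with
    alpha ~ U(0, 1) drawn anew per element set (k, i, j) during training."""
    if training:
        alpha = torch.rand_like(w)        # fresh random ratios on each call
    else:
        alpha = torch.full_like(w, 0.5)   # expected ratio at inference time (assumed)
    scale = alpha * (1.0 - alpha)         # alpha applied to x, (1 - alpha) to y
    return torch.einsum('kij,bi,bj->bk', w * scale, x, y)
```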
- The following modes are conceivable as random-number generation patterns. For example, assuming that there are 10 combinations of x and y as training data and that they are trained over 10 epochs, the following three patterns are conceivable for generating the random numbers α. (1) A random number is generated for each combination of x and y and for each epoch; in this case, the same α is never reused.
- FIG. 5 is a hardware configuration diagram illustrating an outline of the medical image information system 40 including the information processing device 10 according to the embodiment of the present invention.
- In the medical image information system 40, the three-dimensional imaging device 42, the image storage server 44, and the information processing device 10 are connected to one another in a communicable state via a communication line 46.
- the communication line 46 may be, for example, a local area network constructed in a medical institution such as a hospital.
- the format of connection to the communication line 46 and communication between devices is not limited to wired, and may be wireless.
- The three-dimensional imaging device 42 is a device that generates a three-dimensional image representing a site to be diagnosed by imaging that site of the patient who is the subject; specific examples include a CT device, an MRI device, and a PET device.
- The three-dimensional image, composed of a plurality of slice images, generated by the three-dimensional imaging device 42 is transmitted to the image storage server 44 for each examination unit and stored there.
- In this example, the diagnosis target site of the patient is the brain, the three-dimensional imaging device 42 is an MRI device, and a three-dimensional MRI image including the patient's brain is generated.
- the MRI image is a diffusion-weighted image.
- Although FIG. 5 shows one three-dimensional imaging device 42, a plurality of three-dimensional imaging devices may be connected to the communication line 46. The plurality of three-dimensional imaging devices may include devices of different modalities.
- the image storage server 44 is a computer that stores and manages various data, and is provided with a large-capacity external storage device and database management software.
- the image storage server 44 communicates with other devices via the communication line 46, and transmits / receives image data and the like.
- The image storage server 44 acquires various data, including the image data of three-dimensional images generated by the three-dimensional imaging device 42, via the communication line 46, and saves and manages them on a recording medium such as the large-capacity external storage device.
- The storage format of the image data and the communication between devices via the communication line 46 are based on a protocol such as DICOM (Digital Imaging and Communications in Medicine).
- The image storage server 44 also stores biological information including the patient's blood test data, genetic data, cognitive ability score, age, and sex.
- the in-hospital terminal device 50 may be connected to the communication line 46.
- Although FIG. 5 shows one in-hospital terminal device 50, a plurality of in-hospital terminal devices may be connected to the communication line 46.
- Biological information, including blood test data and other test data, can be input from the information processing device 10 and/or the in-hospital terminal device 50, and can be transmitted from the information processing device 10 and/or the in-hospital terminal device 50 to the image storage server 44.
- the function of the information processing device 10 may be incorporated in the in-hospital terminal device 50.
- the communication line 46 may be connected to the wide area communication network 66 via the router 60.
- the wide area communication network 66 may be configured to include the Internet and / or a dedicated communication line.
- the learning device 100 is configured by a computer system to generate the prediction model 12 incorporated in the information processing device 10.
- the learning device 100 is connected to the wide area communication network 66, and can collect learning data via the wide area communication network 66.
- the learning device 100 can collect learning data from a plurality of image storage servers installed in a plurality of medical institutions (not shown in FIG. 5) in addition to the image storage server 44.
- In the learning data, personal information such as names that could identify individual patients is kept confidential.
- the learning data set which is a collection of a plurality of learning data, is stored in the internal storage of the learning device 100, the external storage connected to the learning device 100, the data storage server, or the like.
- FIG. 6 is a block diagram showing a schematic configuration of the learning device 100.
- the learning device 100 can be realized by a computer system configured by using one or a plurality of computers.
- The computer system constituting the learning device 100 may be the same system as the computer system constituting the information processing device 10, may be a different system, or may be a system sharing some elements.
- the learning device 100 is realized by installing a learning program on a computer.
- The learning device 100 includes a processor 102, a non-transitory computer-readable medium 104, an input/output interface 106, a communication interface 108, a bus 110, an input device 114, and a display device 116.
- the processor 102 includes a CPU.
- Processor 102 may include a GPU.
- the processor 102 is connected to the computer-readable medium 104, the input / output interface 106, and the communication interface 108 via the bus 110.
- the computer-readable medium 104 includes a memory that is a main storage device and a storage that is an auxiliary storage device.
- the computer-readable medium 104 may be a semiconductor memory, a hard disk (HDD: Hard Disk Drive) device, a solid state drive (SSD: Solid State Drive) device, or a plurality of combinations thereof.
- the learning device 100 is connected to the learning data storage unit 170 via the communication interface 108 or the input / output interface 106.
- the learning data storage unit 170 is configured to include a storage for storing learning data necessary for the learning device 100 to perform machine learning.
- the "learning data” is training data used for machine learning, and is synonymous with “learning data” or "training data”.
- In this example, the learning data storage unit 170 and the learning device 100 are configured as separate devices, but their functions may be realized by a single computer, or the processing functions may be divided and realized by two or more computers.
- the learning program is a program that realizes a function of causing a computer to learn a learning model 14.
- When the processor 102 executes instructions of the learning program, the computer functions as an information acquisition unit 121, a preprocessing unit 122, the learning model 14, an error calculation unit 124, and an optimizer 125.
- the information acquisition unit 121 acquires learning data from the learning data storage unit 170.
- The information acquisition unit 121 may be configured to include a data input terminal that takes in data from an external device or from another signal processing unit within the device. Further, the information acquisition unit 121 may be configured to include the input/output interface 106, the communication interface 108, a media interface that reads from and writes to a portable external storage medium such as a memory card (not shown), or an appropriate combination of these forms.
- The information acquisition unit 121 can acquire the data needed to assemble the learning data from the image storage server 44 described with reference to FIG. 5.
- the pre-processing unit 122 performs pre-processing on the MRI image and biological information acquired from the image storage server 44 or the like in order to improve the efficiency of machine learning processing.
- The learning data processed by the preprocessing unit 122 can be stored in the learning data storage unit 170. If a learning data set that has already undergone the necessary preprocessing is prepared in advance, the processing by the preprocessing unit 122 can be omitted.
- the error calculation unit 124 calculates the error between the predicted value indicated by the classification score output from the learning model 14 and the correct answer data.
- the error calculation unit 124 evaluates the error using the loss function.
- the loss function may be, for example, cross entropy or mean square error.
- the optimizer 125 performs a process of updating the network parameters of the learning model 14 from the calculation result of the error calculation unit 124.
- When the processor 102 executes instructions of a display control program, the computer functions as the display control unit 130.
- the display control unit 130 generates a display signal necessary for display output to the display device 116, and controls the display of the display device 116.
- the input device 114 is composed of, for example, a keyboard, a mouse, a touch panel, or other pointing device, a voice input device, or an appropriate combination thereof.
- the input device 114 receives various inputs by the operator.
- the display device 116 is composed of, for example, a liquid crystal display, an organic electro-luminescence (OEL) display, a projector, or an appropriate combination thereof.
- the input device 114 and the display device 116 are connected to the bus 110 via the input / output interface 106.
- the display device 116 and the input device 114 may be integrally configured by using the touch panel.
- FIG. 7 is a conceptual diagram of the learning data stored in the learning data storage unit 170.
- The learning data storage unit 170 stores, for each of a plurality of MCI patients, learning data LD(p) in which an MRI image IM(p), biological information BI(p), and correct answer information CD(p), which is known information indicating the medical condition one year later, are associated with one another.
- The MRI image IM(p) may be image data that has already undergone preprocessing. p represents, for example, an index corresponding to the patient number.
- FIG. 8 is a functional block diagram showing the function of the learning process in the learning device 100.
- the learning device 100 reads the learning data LD (p) from the learning data storage unit 170 and executes machine learning.
- The learning device 100 can read the learning data LD(p) and update the parameters in units of mini-batches, into which a plurality of pieces of learning data LD(p) are grouped.
- FIG. 8 shows the flow of processing of one set of learning data, but when performing mini-batch learning, a plurality of sets (for example, m sets) of learning data included in the mini-batch are collectively processed.
- the information acquisition unit 121 of the learning device 100 includes an image acquisition unit 141, a biological information acquisition unit 142, and a correct answer information acquisition unit 143.
- the image acquisition unit 141 acquires the MRI image IM (p).
- the biological information acquisition unit 142 acquires the biological information BI (p).
- the correct answer information acquisition unit 143 acquires the correct answer information CD (p).
- The correct answer information CD(p) is, for example, classification score data (correct answer data) representing the correct class label. In the case of the two-class classification here, specifically, the score for Alzheimer's disease can be set to "1" and the score for MCI can be set to "0".
- the MRI image IM (p) acquired via the image acquisition unit 141 is input to the learning model 14.
- the biological information BI (p) acquired via the biological information acquisition unit 142 is input to the learning model 14.
- the learning model 14 outputs a classification score according to the class according to the process described with reference to FIGS. 2 and 3 or according to the process described with reference to FIGS. 2 and 4.
- the classification score calculated by the learning model 14 corresponds to the predicted value.
- the error calculation unit 124 performs a calculation for evaluating the error between the classification score output from the learning model 14 and the correct answer data acquired from the correct answer information acquisition unit 143.
- the optimizer 125 includes a parameter update amount calculation unit 151 and a parameter update processing unit 152.
- the parameter update amount calculation unit 151 uses the error calculation result obtained from the error calculation unit 124 to calculate the update amount of the network parameter of the learning model 14.
- the parameter update processing unit 152 performs the parameter update processing of the learning model 14 according to the parameter update amount calculated by the parameter update amount calculation unit 151.
- the optimizer 125 updates the parameters based on an algorithm such as the backpropagation method.
- The parameters to be updated include the parameters of the neural network 16, which includes the plurality of convolutional layers and the fully connected layer FC1, the weight parameters of the fully connected layer FC2, the weight parameters of the fusion layer FU, and the parameters of the fully connected layer FC3. Note that some of the parameters of the neural network 16 may be excluded from the update targets; for example, among the blocks indicated by C1 to C5 in FIG. 2, some parameters of the layers near the input side may be fixed, as in the sketch below.
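- As a sketch of how such fixing of input-side parameters might look (the FusionModel and block structure follow the earlier illustrative sketch; all names and values are assumptions):

```python
import torch

model = FusionModel(bio_dim=5)  # FusionModel from the earlier sketch; bio_dim is assumed

# Freeze, e.g., blocks C1 and C2 so that they are excluded from the update targets.
for block in list(model.blocks)[:2]:
    for p in block.parameters():
        p.requires_grad = False

# The optimizer then receives only the parameters that remain update targets.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
```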
- FIG. 9 is a flowchart illustrating an example of the procedure of the learning method using the learning device 100.
- As advance preparation before executing the learning process, a learning data set is prepared; that is, a plurality of pieces of learning data, each combining an MRI image IM(p), biological information BI(p), and correct answer information CD(p) as described with reference to FIG. 7, are prepared.
- the function of generating such a learning data set may be incorporated in the learning device 100, or may be incorporated in a device other than the learning device 100.
- In step S1 of FIG. 9, the learning device 100 acquires learning data.
- the learning device 100 can acquire a plurality of learning data from the learning data storage unit 170 in units of mini-batch.
- In step S2, the learning device 100 inputs an MRI image and biological information to the learning model 14 and performs prediction processing using the learning model 14. The prediction processing performed by the learning model 14 includes a process of calculating the first feature amount Fv1 from the MRI image, a process of calculating the second feature amount Fv2 from the biological information, a process of fusing the two feature amounts by the bilinear method (or the bilinear shake method) to calculate the third feature amount Fv3, and a process of calculating the final output value from the third feature amount.
- In step S3, the error calculation unit 124 calculates the error between the predicted value obtained from the learning model 14 and the correct answer data.
- In step S4, the optimizer 125 calculates the update amount of the parameters of the learning model 14 based on the error calculated in step S3.
- In step S5, the optimizer 125 updates the parameters of the learning model 14 according to the update amount calculated in step S4. The parameter update processing is performed in mini-batch units.
- In step S6, the learning device 100 determines whether or not to end learning.
- the learning end condition may be determined based on the value of the error, or may be determined based on the number of parameter updates. As a method based on the error value, for example, the learning end condition may be that the error has converged within a specified range. As a method based on the number of updates, for example, the learning end condition may be that the number of updates reaches the specified number of times.
- If the determination result in step S6 is No, the learning device 100 returns to step S1 and repeats the learning process until the learning end condition is satisfied. When the determination result in step S6 is Yes, the learning device 100 ends the flowchart shown in FIG. 9.
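- Under the same assumptions as the earlier sketches (the model, the optimizer, and a data loader yielding mini-batches of MRI images, biological information, and correct answer labels are assumed, and the end-condition thresholds are illustrative), the loop of steps S1 to S6 might look as follows:

```python
import torch.nn.functional as F

max_updates = 10_000      # end condition on the number of updates (assumed value)
tol = 1e-4                # end condition on error convergence (assumed value)
prev_loss = float('inf')

for step, (images, bios, labels) in enumerate(loader):  # step S1: mini-batch of LD(p)
    scores = model(images, bios)             # step S2: prediction by learning model 14
    loss = F.cross_entropy(scores, labels)   # step S3: error vs. correct answer data
    optimizer.zero_grad()
    loss.backward()                          # step S4: update amounts (backpropagation)
    optimizer.step()                         # step S5: parameter update
    if step >= max_updates or abs(prev_loss - loss.item()) < tol:  # step S6: end?
        break
    prev_loss = loss.item()
```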
- the trained learning model 14 thus obtained is applied as the prediction model 12 of the information processing device 10.
- the learning method described with reference to FIG. 9 is an example of a method for generating the prediction model 12.
- After the prediction model 12 has been generated, the learning device 100 may update its parameters by performing additional learning using newly collected learning data.
- the learned parameters obtained by the additional learning can be provided to the information processing apparatus 10 via the wide area communication network 66 or by using a portable external storage medium such as a memory card. With such a mechanism, it is possible to update the prediction performance of the prediction model 12.
- FIG. 10 is a block diagram showing a schematic configuration of the information processing apparatus 10.
- The information processing device 10 is realized by installing a diagnosis support program on a computer.
- The information processing device 10 includes a processor 202, a non-transitory computer-readable medium 204, an input/output interface 206, a communication interface 208, a bus 210, an input device 214, and a display device 216.
- The hardware configuration of the information processing device 10 may be the same as the hardware configuration of the learning device 100 described with reference to FIG. 6.
- That is, the hardware configurations of the processor 202, computer-readable medium 204, input/output interface 206, communication interface 208, bus 210, input device 214, and display device 216 in FIG. 10 may be similar to those of the processor 102, computer-readable medium 104, input/output interface 106, communication interface 108, bus 110, input device 114, and display device 116 in FIG. 6, respectively.
- the diagnosis support program is a program that predicts the progression of dementia based on the MRI image and biological information of the patient to be diagnosed.
- When the processor 202 executes instructions of the diagnosis support program, the computer functions as an information acquisition unit 221, a preprocessing unit 222, and the prediction model 12.
- Likewise, when the processor 202 executes instructions of the display control program, the computer functions as the display control unit 230.
- the display control unit 230 generates a display signal necessary for display output to the display device 216, and controls the display of the display device 216.
- the information acquisition unit 221 acquires the MRI image and biological information of the patient who has undergone the examination.
- the information acquisition unit 221 can acquire patient data from the image storage server 44.
- The information acquisition unit 221 may be configured to include a data input terminal that takes in data from an external source or from another signal processing unit within the device. Further, the information acquisition unit 221 may be configured to include the input/output interface 206, the communication interface 208, a media interface for reading from and writing to a portable external storage medium such as a memory card (not shown), or an appropriate combination of these forms.
- the pre-processing unit 222 performs pre-processing on the MRI image and biological information acquired via the information acquisition unit 221.
- the processing content by the preprocessing unit 222 may be the same as that of the preprocessing unit 122 of the learning device 100.
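As an illustration of such preprocessing, the following is a sketch under the assumption that it consists of intensity normalization of the MRI volume plus scaling of the non-image elements; spatial registration to an atlas image or standard brain is a separate step omitted here:

```python
import numpy as np

def normalize_intensity(volume: np.ndarray) -> np.ndarray:
    """Z-score normalization of MRI voxel intensities. Registration to an
    atlas image or standard brain would be a separate step, omitted here."""
    return (volume - volume.mean()) / (volume.std() + 1e-8)

def normalize_bio(bio: np.ndarray, mean: np.ndarray, std: np.ndarray) -> np.ndarray:
    """Scale the non-image elements (e.g. age, cognitive score) with
    statistics computed on the learning data, so that the same preprocessing
    is applied at learning time and at prediction time."""
    return (bio - mean) / (std + 1e-8)
```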
- the prediction model 12 predicts whether or not the target patient will progress to Alzheimer's disease one year later according to the algorithm described in FIG. 2 based on the input MRI image and biological information.
- The task of predicting "whether or not the disease will progress to Alzheimer's disease in one year" is equivalent to the task of predicting "whether or not Alzheimer's disease will develop within one year"; in short, it is the task of predicting whether the patient's condition one year later will be Alzheimer's disease or MCI.
- the medical condition of the patient one year later is an example of "a state related to matters at a time different from the time when the image data was taken" and "health condition of the subject” in this disclosure.
- the prediction model 12 is an example of the “prediction unit” in the present disclosure.
- Alzheimer's disease and MCI, as candidates for the medical condition one year later, are examples of the "plurality of candidates for the state relating to the matter at a time different from the time of imaging" in the present disclosure.
- Alzheimer's disease as a candidate for the medical condition one year later is an example of the "first state" in the present disclosure, and MCI as a candidate for the medical condition one year later is an example of the "second state" in the present disclosure.
- FIG. 11 is a functional block diagram showing a function of dementia progression prediction processing in the information processing apparatus 10.
- the information processing apparatus 10 reads the MRI image and the biological information of the MCI patient who is the subject from the image storage server 44 or the like.
- the information acquisition unit 221 of the information processing device 10 includes an image acquisition unit 241 and a biological information acquisition unit 242.
- the image acquisition unit 241 acquires an MRI image IM.
- the biological information acquisition unit 242 acquires biological information.
- the MRI image IM acquired via the image acquisition unit 241 is input to the prediction model 12 after being preprocessed by the preprocessing unit 222. Further, the biometric information BI acquired via the biometric information acquisition unit 242 is input to the prediction model 12 after being preprocessed by the preprocessing unit 222.
- the preprocessing by the preprocessing unit 222 can be omitted.
- the prediction model 12 outputs the prediction result according to the processing described with reference to FIGS. 2 and 3 or according to the processing described with reference to FIGS. 2 and 4.
- the prediction result is displayed on the display device 216 via the display control unit 230.
- the information processing device 10 is an example of the "diagnosis support device” in the present disclosure.
- The MRI image IM and the biological information BI of an MCI patient correlate with the progression of the medical condition.
- The MRI image IM and the biological information BI are examples of "image data and non-image data relating to a target matter" in the present disclosure.
- the MRI image IM is an example of "image data” in the present disclosure.
- Biological information BI is an example of "non-image data” in the present disclosure.
- the prediction result output from the prediction model 12 is an example of the "classification processing result" in the present disclosure.
- FIG. 12 is a flowchart illustrating the procedure of the diagnosis support method using the information processing apparatus 10.
- In step S21, the information processing device 10 acquires an MRI image of the subject.
- In step S22, the information processing device 10 acquires the biological information of the subject.
- The order of steps S21 and S22 can be interchanged. Further, steps S21 and S22 may be executed in parallel or concurrently.
- In step S23, the preprocessing unit 222 of the information processing device 10 preprocesses the input MRI image and biological information as necessary.
- In step S24, the information processing device 10 inputs the MRI image and the biological information into the prediction model 12 and performs prediction processing.
- In step S26, the information processing device 10 outputs the prediction result obtained by the prediction model 12.
- After step S26, the information processing apparatus 10 ends the flowchart of FIG. 12.
- FIG. 13 is a flowchart showing an example of the processing content of the prediction processing (step S24).
- The flowchart of FIG. 13 is applied to the process of step S24 of FIG. 12.
- In step S31, the prediction model 12 calculates the feature amount of the input MRI image. That is, the prediction model 12 calculates the first feature amount Fv1 from the MRI image along the forward propagation path of the neural network 16.
- In step S32, the prediction model 12 calculates the feature amount of the input biological information. That is, the prediction model 12 calculates the second feature amount Fv2 by combining the elements of the biological information through the fully connected layer FC2.
- The order of steps S31 and S32 can be interchanged. Further, steps S31 and S32 may be executed in parallel or concurrently.
- In step S34, the prediction model 12 fuses the two types of feature amounts obtained in steps S31 and S32 by weighting calculation based on the bilinear method to generate the third feature amount Fv3.
- In step S35, the prediction model 12 performs the two-class classification based on the third feature amount Fv3, which is the feature amount fused in step S34.
- After step S35, the information processing device 10 ends the flowchart of FIG. 13 and returns to the flowchart of FIG. 12.
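A minimal sketch of steps S31 to S35 in PyTorch-style Python follows; the feature sizes (32 image features, 8 biological features) and class count are assumptions for illustration, while the fusion itself follows the bilinear scheme of products of all element pairs described above:

```python
import torch
import torch.nn as nn

class BilinearFusionHead(nn.Module):
    """Steps S34-S35: fuse Fv1 and Fv2 via the products of all element
    pairs (bilinear fusion), then classify into two classes with FC3."""

    def __init__(self, d_img: int = 32, d_bio: int = 8, n_classes: int = 2):
        super().__init__()
        self.fc3 = nn.Linear(d_img * d_bio, n_classes)  # fully connected layer FC3

    def forward(self, fv1: torch.Tensor, fv2: torch.Tensor) -> torch.Tensor:
        # S34: outer product of Fv1 (B, d_img) and Fv2 (B, d_bio) -> Fv3
        fv3 = torch.einsum("bi,bj->bij", fv1, fv2).flatten(1)
        # S35: two-class logits (e.g. AD vs. MCI one year later)
        return self.fc3(fv3)

head = BilinearFusionHead()
logits = head(torch.randn(4, 32), torch.randn(4, 8))  # shape (4, 2)
```

Because Fv3 contains every cross term between an image feature element and a biological feature element, correlations between the two modalities are passed to the classifier directly, which is the stated advantage of bilinear fusion over simple concatenation.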
- FIG. 14 is a block diagram showing an example of a computer hardware configuration.
- the computer 800 may be a personal computer, a workstation, or a server computer.
- The computer 800 can be used as a device having some or all of the functions of the information processing device 10, the image storage server 44, the in-hospital terminal device 50, the learning device 100, and the learning data storage unit 170 described above, or as a device combining a plurality of these functions.
- The computer 800 includes a CPU (Central Processing Unit) 802, a RAM (Random Access Memory) 804, a ROM (Read Only Memory) 806, a GPU (Graphics Processing Unit) 808, a storage 810, a communication unit 812, an input device 814, a display device 816, and a bus 818.
- The GPU 808 may be provided as needed.
- the CPU 802 reads various programs stored in the ROM 806, the storage 810, or the like, and executes various processes.
- the RAM 804 is used as a work area of the CPU 802. Further, the RAM 804 is used as a storage unit for temporarily storing the read program and various data.
- the storage 810 includes, for example, a hard disk device, an optical disk, a magneto-optical disk, or a semiconductor memory, or a storage device configured by using an appropriate combination thereof.
- the storage 810 stores various programs, data, and the like necessary for prediction processing and / or learning processing.
- the program stored in the storage 810 is loaded into the RAM 804, and the CPU 802 executes the program, so that the computer 800 functions as a means for performing various processes specified by the program.
- the communication unit 812 is an interface that performs communication processing with an external device by wire or wirelessly and exchanges information with the external device.
- the communication unit 812 can play the role of an information acquisition unit that accepts input such as an image.
- the input device 814 is an input interface that accepts various operation inputs to the computer 800.
- the input device 814 may be, for example, a keyboard, mouse, touch panel, or other pointing device, or voice input device, or any combination thereof.
- the display device 816 is an output interface that displays various types of information.
- the display device 816 may be, for example, a liquid crystal display, an organic electro-luminescence (OEL) display, a projector, or an appropriate combination thereof.
- It is possible to record a program that causes a computer to realize some or all of the processing functions of at least one of the prediction function and the learning function described in the above embodiments on a computer-readable medium, that is, a tangible, non-transitory information storage medium such as an optical disk, a magnetic disk, or a semiconductor memory, and to provide the program through this information storage medium.
- Instead of storing the program on such a tangible, non-transitory computer-readable medium and providing it, it is also possible to provide the program signal as a download service using a telecommunication line such as the Internet.
- The hardware structure of the processing units that execute various kinds of processing, such as the prediction model 12 shown in FIG. 1 and elsewhere, the learning model 14 shown in FIG. 2 and elsewhere, the information acquisition unit 121, preprocessing unit 122, error calculation unit 124, optimizer 125, and display control unit 130 shown in FIG. 6, the image acquisition unit 141, biological information acquisition unit 142, correct answer information acquisition unit 143, parameter update amount calculation unit 151, and parameter update processing unit 152 shown in FIG. 8, the information acquisition unit 221, preprocessing unit 222, and display control unit 230 shown in FIG. 10, and the image acquisition unit 241 and biological information acquisition unit 242 shown in FIG. 11, is realized by various processors, for example, as described below.
- The various processors include: a CPU, which is a general-purpose processor that executes programs and functions as various processing units; a GPU, which is a processor specialized for image processing; a programmable logic device (PLD), which is a processor whose circuit configuration can be changed after manufacture, such as an FPGA (Field Programmable Gate Array); and a dedicated electric circuit, which is a processor having a circuit configuration designed exclusively for executing specific processing, such as an ASIC (Application Specific Integrated Circuit).
- One processing unit may be composed of one of these various processors, or may be composed of two or more processors of the same type or different types.
- one processing unit may be composed of a plurality of FPGAs, a combination of a CPU and an FPGA, or a combination of a CPU and a GPU.
- Further, a plurality of processing units may be configured by one processor.
- As a first example of configuring a plurality of processing units with one processor, there is a form in which one processor is configured by a combination of one or more CPUs and software, as represented by computers such as clients and servers, and this processor functions as the plurality of processing units.
- As a second example, there is a form of using a processor that realizes the functions of an entire system including the plurality of processing units on a single SoC (System On Chip), that is, a single IC (Integrated Circuit) chip. In this way, the various processing units are configured, as a hardware structure, using one or more of the various processors described above. More specifically, the hardware structure of these various processors is electric circuitry that combines circuit elements such as semiconductor elements.
- Example: In the information processing apparatus 10 according to the embodiment of the present invention, whether or not a patient would convert from MCI to Alzheimer's disease one year later was discriminated using the patient's MRI image, genotype, cognitive ability score, age, and sex.
- The discrimination accuracy was 85% or more.
- “Discrimination” is included in the concept of "prediction”. Discrimination accuracy is synonymous with “prediction accuracy”.
- the method according to the embodiment of the present invention has improved discrimination accuracy as compared with the method described in Non-Patent Document 2.
- According to the information processing apparatus 10 of the embodiment of the present invention, the future state of a patient can be predicted with high accuracy using the patient's MRI image and biological information.
- << Assumed implementation modes >> As one mode of using the dementia progression prediction function of the information processing device 10 according to the present embodiment, it is conceivable, for example, to exclude from a clinical trial those subjects who are predicted not to progress to Alzheimer's disease in the first place, in order to improve the accuracy of the clinical trial when developing a new drug.
- As another mode of use, after a drug that suppresses the progression of Alzheimer's disease is marketed, it is conceivable to identify the patients for whom the drug will be effective, that is, the patients whose Alzheimer's disease is predicted to progress, so that the drug is administered only to such patients.
- << Modification 1 >> In the above-described embodiment, an example of predicting the state "one year later" has been described, but the period condition of "one year later" is not limited to this example. Predictions over various periods, such as "6 months later", "18 months later", or "2 years later", can be realized by changing how the correct answer information used for learning is given according to the purpose of the prediction.
- << Modification 2 >> In the above-described embodiment, an example of predicting the progression of dementia with MCI patients as subjects has been described, but healthy persons may also be included as subjects. In the class classification, healthy persons may be classified into the "no change" class, or alternatively, a "healthy person" class may be provided among the classification candidates. To realize prediction for healthy persons, data on healthy persons is used as learning data.
- << Modification 3 >> In the above-described embodiment, a two-class classification network has been described as an example, but the scope of application of the technique of the present disclosure is not limited to this example; it can also be applied to multi-class classification networks with three or more classes. Further, the technique of the present disclosure can be applied not only to prediction models that perform class classification but also to prediction models that solve regression problems. A sketch of these extensions follows.
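Concretely, extending the two-class network of the embodiment would mainly change the final fully connected layer and the loss function. The following sketch uses illustrative sizes and class sets, which are assumptions rather than details from this application:

```python
import torch.nn as nn

d_fused = 32 * 8  # size of the fused feature amount Fv3 (illustrative)

# Three-class classification, e.g. {Alzheimer's disease, MCI, healthy}:
multiclass_head = nn.Linear(d_fused, 3)
multiclass_loss = nn.CrossEntropyLoss()

# Regression, e.g. predicting a future cognitive score as a real number:
regression_head = nn.Linear(d_fused, 1)
regression_loss = nn.MSELoss()
```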
- << Modification 4 >>
- an example of using an MRI image has been described, but the technique of the present disclosure is not limited to the MRI image, and an image of another modality may be used.
- a CT image acquired by a CT device may be used.
- The CT image may be a non-contrast CT image acquired without using a contrast medium, or a contrast-enhanced CT image acquired using a contrast medium. In addition, a PET (Positron Emission Tomography) image acquired by a PET apparatus, an OCT (Optical Coherence Tomography) image acquired by an OCT apparatus, a three-dimensional ultrasound image acquired by a three-dimensional ultrasound imaging apparatus, or the like may be used.
- the prediction technique according to the present disclosure can be applied not only to a three-dimensional tomographic image but also to various two-dimensional images.
- the image to be processed may be a two-dimensional X-ray image.
- the prediction technique according to the present disclosure is not limited to medical images, and can be applied to various images such as ordinary camera images.
- the technique according to the present disclosure can be applied to an application for predicting a future deterioration state after a lapse of a specific period from an image of a building such as a bridge.
- The non-image information used in combination with an image of a structure such as a bridge may be, for example, at least one of the material of the structure, position information indicating the installation location, the structural style, and inspection data from periodic inspections, and is preferably a combination of multiple types.
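As an illustration of how such non-image items could be turned into the second input branch of the same fusion network (every name and category below is a hypothetical assumption, not from this application):

```python
import numpy as np

# Hypothetical encoding of non-image data for a bridge:
MATERIALS = ["steel", "reinforced_concrete", "wood"]
STYLES = ["girder", "truss", "arch"]

def encode_bridge_record(material, style, latitude, longitude, inspection_scores):
    one_hot_material = [1.0 if material == m else 0.0 for m in MATERIALS]
    one_hot_style = [1.0 if style == s else 0.0 for s in STYLES]
    return np.array(one_hot_material + one_hot_style
                    + [latitude, longitude] + list(inspection_scores),
                    dtype=np.float32)

# The resulting vector plays the same role as the biological information BI:
# it is fed to the second branch and fused with the image features as before.
```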
- the non-image data used in combination with the image data in order to improve the prediction accuracy includes information that is not directly related to the content of the image.
- the biological information used as input in combination with the MRI image of the brain for predicting the progression of dementia is an example of information that is not directly related to the image content of the MRI image. That is, the biological information includes information about matters that do not appear in the MRI image.
- the biological information is an example of "information about matters not appearing in the image indicated by the image data" in the present disclosure.
- Image data such as an MRI image is information captured at the specific moment the target is photographed; so to speak, a single piece of image data is "point-in-time information".
- The task of predicting future matters is a question of how things will change from the present or the past toward the future. Therefore, in order to improve prediction accuracy, it is preferable to use, in combination with the image data, non-image information having the dimension of "time" that the image data lacks.
- Genetic information, exemplified by a genotype, is not information at a single point in time; it inherently contains information on how the patient's characteristics have changed, or may change, from the past into the future. Information that inherently contains information at a plurality of time points in this way can be interpreted as information having a time dimension.
- the genetic data is an example of "data of information in which information at a plurality of time points is inherent" in the present disclosure.
- the technique of the present disclosure can also be applied to a process of predicting the appearance at a time point in the past before the time when the image was taken.
- In this case, data indicating a known state before the time of imaging is used as the correct answer information corresponding to the image data and the non-image data in the learning data.
- The algorithm for predicting the past state a specific period before the time of imaging is the same as the algorithm for predicting the state after a specific period has elapsed from the time of imaging.
- << Distillation model and derived model >> It is possible to generate a derived model and/or a distillation model based on a trained model generated by implementing the learning method of the present disclosure.
- the derived model is a derived trained model obtained by performing additional training on the trained model, and is also referred to as a “reuse model”.
- additional training means that a new trained parameter is generated by applying a different training data set to an existing trained model and performing further training. The additional learning is performed, for example, for the purpose of maintaining or improving the accuracy of the trained model, or adapting to a region different from the region originally trained.
- "Distillation" means performing machine learning using the inputs to an existing trained model and the output results for those inputs as the training data set of a new model, thereby generating a new trained model and/or new trained parameters.
- the "distillation model” is an inference program (prediction model) in which learned parameters newly generated by distillation are incorporated.
- the distillation model may have a different network structure than the original trained model.
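A sketch of one such distillation step in PyTorch-style Python follows; the temperature T, the model interfaces, and the soft-target loss are common distillation conventions assumed here, not details from this application:

```python
import torch
import torch.nn.functional as F

def distill_step(teacher, student, image, bio, optimizer, T=2.0):
    """One distillation step (sketch): the existing trained model's outputs
    serve as soft targets for a new, possibly smaller, model."""
    teacher.eval()
    with torch.no_grad():
        soft_targets = F.softmax(teacher(image, bio) / T, dim=1)
    log_probs = F.log_softmax(student(image, bio) / T, dim=1)
    loss = F.kl_div(log_probs, soft_targets, reduction="batchmean") * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```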
- AD: Alzheimer's Disease
- The multi-modal input data are MRI images and clinical data, including cognitive scores, APOE genotype, gender, and age. Our criterion for selecting these input data is that they can mostly be obtained by non-invasive examination.
- The proposed method integrates the features obtained from MRI images and clinical data effectively by using bi-linear fusion.
- Bi-linear fusion computes the products of all element pairs between image features and clinical features, so that the correlations between them are taken into account. This led to a large improvement in prediction performance: bi-linear fusion predicted conversion within one year with 84.8% accuracy, compared with 75.3% accuracy using linear fusion.
- The proposed method is useful for screening examination for AD, while the input data are relatively easy to obtain.
Abstract
Description
FIG. 1 is an explanatory diagram showing an overview of processing in the information processing apparatus 10 according to an embodiment of the present invention. The information processing apparatus 10 is a computer system that performs the task of predicting dementia progression, namely whether a patient with mild cognitive impairment (MCI) will progress to Alzheimer's disease (AD) one year after the baseline. Note that Alzheimer's disease is synonymous with Alzheimer-type dementia.
A learning method for generating the prediction model 12 will be described.
The data used for learning are, for example, data of MCI patients having the plurality of types of items shown below, for whom it has been ascertained whether or not the patient actually progressed to Alzheimer's disease one year later. In this example, the data of the plurality of types of items include MRI images, blood test data, genetic data, cognitive ability scores, age, and sex.
The MRI images are normalized in intensity and registered to an atlas image to make them easier to learn from. Registration to a standard brain may be performed instead of registration to an atlas image.
FIG. 2 is a conceptual diagram showing the network structure of the learning model 14 used for the machine learning that generates the prediction model 12. The parameters of the network are determined by performing deep learning using the learning model 14. The network structure shown in FIG. 2 may be understood as the network structure of the prediction model 12.
FIG. 5 is a hardware configuration diagram exemplifying an overview of a medical image information system 40 including the information processing apparatus 10 according to an embodiment of the present invention. In the medical image information system 40, a three-dimensional imaging apparatus 42, an image storage server 44, and the information processing apparatus 10 are connected in a communicable state via a communication line 46. The communication line 46 may be, for example, a local area network built in a medical institution such as a hospital. The connection to the communication line 46 and the form of communication between the apparatuses are not limited to wired connections and may be wireless.
FIG. 6 is a block diagram showing a schematic configuration of the learning device 100. The learning device 100 can be realized by a computer system configured using one or more computers. The computer system constituting the learning device 100 may be the same system as the computer system constituting the information processing apparatus 10, a different system, or a system sharing some elements with it.
FIG. 7 is a conceptual diagram of the learning data stored in the learning data storage unit 170. The learning data storage unit 170 stores learning data LD(p) in which, for each patient, a combination of an MRI image IM(p) of an MCI patient, biological information BI(p), and correct answer information CD(p), which is known information indicating the medical condition one year later, is associated. The MRI image IM(p) may be preprocessed image data. p represents, for example, an index corresponding to a patient number.
FIG. 8 is a functional block diagram showing the learning processing functions of the learning device 100. In FIG. 8, elements identical to those described in FIG. 6 are denoted by the same reference signs. The learning device 100 reads the learning data LD(p) from the learning data storage unit 170 and executes machine learning. The learning device 100 can read the learning data LD(p) and update the parameters in units of mini-batches, each of which groups a plurality of pieces of learning data LD(p).
FIG. 9 is a flowchart exemplifying the procedure of the learning method using the learning device 100. A learning data set is prepared in advance before executing the learning process. That is, a plurality of pieces of learning data, each a combination of an MRI image IM(p), biological information BI(p), and correct answer information CD(p) as described with reference to FIG. 7, are prepared. The function of generating such a learning data set may be incorporated in the learning device 100 or in a device other than the learning device 100.
FIG. 10 is a block diagram showing a schematic configuration of the information processing apparatus 10. The information processing apparatus 10 is realized by installing a diagnosis support program on a computer. The information processing apparatus 10 includes a processor 202, a non-transitory computer-readable medium 204, an input/output interface 206, a communication interface 208, a bus 210, an input device 214, and a display device 216. The hardware configuration of the information processing apparatus 10 may be similar to the hardware configuration of the learning device 100 described with reference to FIG. 6. That is, the hardware configurations of the processor 202, computer-readable medium 204, input/output interface 206, communication interface 208, bus 210, input device 214, and display device 216 in FIG. 10 may be similar to those of the processor 102, computer-readable medium 104, input/output interface 106, communication interface 108, bus 110, input device 114, and display device 116 in FIG. 6.
FIG. 11 is a functional block diagram showing the dementia progression prediction function of the information processing apparatus 10. In FIG. 11, elements identical to those described in FIG. 10 are denoted by the same reference signs. The information processing apparatus 10 reads the MRI image and biological information of the MCI patient who is the subject from the image storage server 44 or the like.
FIG. 12 is a flowchart exemplifying the procedure of the diagnosis support method using the information processing apparatus 10.
FIG. 14 is a block diagram showing an example of a computer hardware configuration. The computer 800 may be a personal computer, a workstation, or a server computer. The computer 800 can be used as a device having some or all of the functions of the information processing apparatus 10, the image storage server 44, the in-hospital terminal device 50, the learning device 100, or the learning data storage unit 170 described above, or as a device combining a plurality of these functions.
A program that causes a computer to realize some or all of the processing functions of at least one of the prediction function and the learning function described in the above embodiments can be recorded on a computer-readable medium, that is, a tangible, non-transitory information storage medium such as an optical disk, a magnetic disk, or a semiconductor memory, and the program can be provided through this information storage medium.
The hardware structure of the processing units that execute various kinds of processing, such as the prediction model 12 shown in FIG. 1 and elsewhere, the learning model 14 shown in FIG. 2 and elsewhere, the information acquisition unit 121, preprocessing unit 122, error calculation unit 124, optimizer 125, and display control unit 130 shown in FIG. 6, the image acquisition unit 141, biological information acquisition unit 142, correct answer information acquisition unit 143, parameter update amount calculation unit 151, and parameter update processing unit 152 shown in FIG. 8, the information acquisition unit 221, preprocessing unit 222, and display control unit 230 shown in FIG. 10, and the image acquisition unit 241 and biological information acquisition unit 242 shown in FIG. 11, is realized by various processors, for example, as described below.
In the information processing apparatus 10 according to the embodiment of the present invention, whether or not a patient would convert from MCI to Alzheimer's disease one year later was discriminated using the patient's MRI image, genotype, cognitive ability score, age, and sex, and discrimination was possible with an accuracy of 85% or more. "Discrimination" is included in the concept of "prediction", and discrimination accuracy is synonymous with "prediction accuracy".
According to the information processing apparatus 10 of the embodiment of the present invention, the future state of a patient can be predicted with high accuracy using the patient's MRI image and biological information.
As one mode of using the dementia progression prediction function of the information processing apparatus 10 according to the present embodiment, it is conceivable, for example, to exclude from a clinical trial those subjects who are predicted not to progress to Alzheimer's disease in the first place, in order to improve the accuracy of the clinical trial when developing a new drug.
In the above embodiment, an example of predicting the state "one year later" has been described, but the period condition of "one year later" is not limited to this example. Predictions over various periods, such as "6 months later", "18 months later", or "2 years later", can be realized by changing how the correct answer information used for learning is given according to the purpose of the prediction.
In the above embodiment, an example of predicting dementia progression for MCI patients as subjects was described, but healthy persons may also be included as subjects. In the class classification, healthy persons may be classified into the "no change" class, or a "healthy person" class may be provided among the classification candidates. To realize prediction for healthy persons, data on healthy persons is used as learning data.
In the above embodiment, a two-class classification network was described as an example, but the scope of application of the technique of the present disclosure is not limited to this example; it can also be applied to multi-class classification networks with three or more classes. Furthermore, the technique of the present disclosure can be applied not only to prediction models that perform class classification but also to prediction models that solve regression problems.
In the above embodiment, an example using MRI images was described, but the technique of the present disclosure is not limited to MRI images, and images of other modalities may be used. For example, CT images acquired by a CT apparatus may be used. In this case, the CT images may be non-contrast CT images acquired without using a contrast medium or contrast-enhanced CT images acquired using a contrast medium. PET (Positron Emission Tomography) images acquired by a PET apparatus, OCT (Optical Coherence Tomography) images acquired by an OCT apparatus, three-dimensional ultrasound images acquired by a three-dimensional ultrasound imaging apparatus, and the like may also be used. Moreover, the prediction technique according to the present disclosure is applicable not only to three-dimensional tomographic images but also to various two-dimensional images. For example, the image to be processed may be a two-dimensional X-ray image.
The prediction technique according to the present disclosure is not limited to medical images and can be applied to various images such as ordinary camera images. For example, the technique according to the present disclosure can be applied to the use of predicting a future deterioration state after a specific period from an image of a structure such as a bridge. The non-image information used in combination with the image of a structure such as a bridge may be, for example, at least one of the material of the structure, position information indicating the installation location, the structural style, and inspection data from periodic inspections, and is preferably a combination of multiple types.
In the technique of the present disclosure, the non-image data used in combination with the image data to improve prediction accuracy includes information not directly related to the content of the image. The biological information used as input in combination with a brain MRI image for predicting dementia progression is an example of information not directly related to the image content of the MRI image. That is, the biological information includes information on matters that do not appear in the MRI image. The biological information is an example of "information on matters not appearing in the image indicated by the image data" in the present disclosure.
The technique of the present disclosure is also applicable to processing that predicts the state at a past point in time before the time when the image was taken. In this case, data indicating a known state before the time of imaging is used as the correct answer information corresponding to the image data and non-image data in the learning data.
It is possible to generate a derived model and/or a distillation model based on a trained model generated by carrying out the learning method of the present disclosure. A derived model is a derivative trained model obtained by performing further additional learning on a trained model, and is also called a "reuse model". "Additional learning" here means generating new learned parameters by applying a different learning data set to an existing trained model and performing further learning. Additional learning is performed, for example, for the purpose of maintaining or improving the accuracy of the trained model or adapting it to a domain different from the one on which it was originally trained.
The matters described in the above embodiments and modifications can be used in appropriate combinations, and some matters can also be replaced. It goes without saying that the present invention is not limited to the above-described embodiments and that various modifications are possible without departing from the spirit of the present invention.
An overview, written in English, of a specific example of the multi-modal deep learning technique for predicting the progression of Alzheimer's disease using fusion processing based on the bilinear method is disclosed above.
10 Information processing device
12 Prediction model
14 Learning model
16 Neural network
40 Medical image information system
42 Three-dimensional imaging apparatus
44 Image storage server
46 Communication line
50 In-hospital terminal device
60 Router
66 Wide area communication network
100 Learning device
102 Processor
104 Computer-readable medium
106 Input/output interface
108 Communication interface
110 Bus
114 Input device
116 Display device
121 Information acquisition unit
122 Preprocessing unit
124 Error calculation unit
125 Optimizer
130 Display control unit
141 Image acquisition unit
142 Biological information acquisition unit
143 Correct answer information acquisition unit
151 Parameter update amount calculation unit
152 Parameter update processing unit
170 Learning data storage unit
202 Processor
204 Computer-readable medium
206 Input/output interface
208 Communication interface
210 Bus
214 Input device
216 Display device
221 Information acquisition unit
222 Preprocessing unit
230 Display control unit
241 Image acquisition unit
242 Biological information acquisition unit
800 Computer
802 CPU
804 RAM
806 ROM
808 GPU
810 Storage
812 Communication unit
814 Input device
816 Display device
818 Bus
IM MRI image
BI Biological information
CD Correct answer information
LD Learning data
FC1 Fully connected layer
FC2 Fully connected layer
FC3 Fully connected layer
FU Fusion layer
Fv1 First feature amount
Fv2 Second feature amount
Fv3 Third feature amount
S1 to S6 Steps of the learning method
S21 to S26 Steps of the diagnosis support method
S31 to S35 Steps of the prediction processing
Claims (22)
- 1. An information processing apparatus comprising: an information acquisition unit that receives input of image data and non-image data relating to a target matter; and a prediction unit that, based on the image data and the non-image data input via the information acquisition unit, predicts a state relating to the matter at a time different from the time at which the image data was taken, wherein the prediction unit calculates a third feature amount in which a first feature amount calculated from the image data and a second feature amount calculated from the non-image data are fused, by performing weighting calculation by a calculation method whose output is the combination of element-wise products of the first feature amount and the second feature amount, and performs the prediction based on the third feature amount.
- 2. The information processing apparatus according to claim 1, wherein the prediction unit includes a trained prediction model machine-learned so as to receive input of the image data and the non-image data and output, as a result of the prediction, information indicating a state relating to the matter at a time different from the time of imaging.
- 3. The information processing apparatus according to claim 1 or 2, wherein the prediction unit is configured using a neural network.
- 4. The information processing apparatus according to any one of claims 1 to 3, wherein, for the input of the image data and the non-image data, the prediction unit performs class classification processing that determines to which of a plurality of classes, each corresponding to one of a plurality of candidates for the state relating to the matter at a time different from the time of imaging, the input belongs, and outputs the result of the class classification processing.
- 5. The information processing apparatus according to any one of claims 1 to 3, wherein the prediction unit performs two-class classification processing that determines whether, as the state relating to the matter at a time different from the time of imaging, the state after a specific period has elapsed from the time of imaging, or the past state a specific period before the time of imaging, is a first state or a second state different from the first state, and outputs the result of the two-class classification processing.
- 6. The information processing apparatus according to any one of claims 1 to 5, wherein the prediction unit includes: a first processing unit that calculates the first feature amount from the image data; a second processing unit that calculates the second feature amount from the non-image data; and a third processing unit that calculates the third feature amount by performing, from the first feature amount and the second feature amount, the weighting calculation by the calculation method whose output is the combination of the element-wise products.
- 7. The information processing apparatus according to claim 6, wherein the weighting calculation performed by the third processing unit includes processing of multiplying the first feature amount and the second feature amount together at random ratios.
- 8. The information processing apparatus according to claim 6 or 7, wherein the first processing unit is configured using a first neural network including a plurality of convolutional layers and a first fully connected layer, and the second processing unit is configured using a second neural network including a second fully connected layer.
- 9. The information processing apparatus according to claim 8, further comprising a third fully connected layer that calculates a final output value from the third feature amount.
- 10. The information processing apparatus according to any one of claims 1 to 9, wherein the non-image data includes data of information on matters not appearing in the image indicated by the image data.
- 11. The information processing apparatus according to any one of claims 1 to 10, wherein the non-image data includes data of information in which information at a plurality of time points is inherent.
- 12. The information processing apparatus according to any one of claims 1 to 11, wherein the target matter is the health condition of a subject, the image data is a medical image obtained by imaging the subject, the non-image data includes biological information of the subject, and the prediction unit predicts the health condition of the subject after a specific period has elapsed from the time the medical image was taken, or the health condition of the subject at a past time a specific period before the time the medical image was taken.
- 13. The information processing apparatus according to any one of claims 1 to 11, wherein the target matter is the medical condition of a subject with mild cognitive impairment, the image data is an MRI (Magnetic Resonance Imaging) image obtained by imaging the subject's brain, the non-image data includes at least one of blood test data, genetic data, and a cognitive ability score of the subject, together with the age and sex of the subject, and the prediction unit predicts whether the subject's medical condition after a specific period has elapsed from the time the MRI image was taken will be Alzheimer's disease or mild cognitive impairment.
- 14. The information processing apparatus according to claim 13, wherein a subject for whom the prediction unit has obtained a prediction result that the subject's medical condition after the elapse of the specific period will be mild cognitive impairment is excluded from the subjects of a clinical trial.
- 15. An information processing apparatus comprising a processor and a non-transitory computer-readable medium on which a program executed by the processor is recorded, wherein, in accordance with instructions of the program, the processor: receives input of image data and non-image data relating to a target matter; calculates a third feature amount in which a first feature amount calculated from the image data and a second feature amount calculated from the non-image data are fused, by performing weighting calculation by a calculation method whose output is the combination of element-wise products of the first feature amount and the second feature amount; and predicts, based on the third feature amount, a state relating to the matter at a time different from the time at which the image data was taken.
- 16. A program for causing a computer to realize: a function of receiving input of image data and non-image data relating to a target matter; a function of calculating a third feature amount in which a first feature amount calculated from the image data and a second feature amount calculated from the non-image data are fused, by performing weighting calculation by a calculation method whose output is the combination of element-wise products of the first feature amount and the second feature amount; and a function of predicting, based on the third feature amount, a state relating to the matter at a time different from the time at which the image data was taken.
- 17. A non-transitory computer-readable recording medium that, when instructions stored on the recording medium are read by a computer, causes the computer to execute the program according to claim 16.
- 18. A trained model machine-learned so as to receive input of image data and non-image data relating to a target matter and output information predicted from the image data and the non-image data, the trained model causing a computer to function so as to calculate a third feature amount in which a first feature amount calculated from the image data and a second feature amount calculated from the non-image data are fused, by performing weighting calculation by a calculation method whose output is the combination of element-wise products of the first feature amount and the second feature amount, and to output, based on the third feature amount, the information indicating a state relating to the matter at a time different from the time at which the image data was taken.
- 19. The trained model according to claim 18, wherein the target matter is the health condition of a subject, the image data is a medical image obtained by imaging the subject, the non-image data includes biological information of the subject, and the trained model predicts the health condition of the subject after a specific period has elapsed from the time the medical image was taken, or the health condition of the subject at a past time a specific period before the time the medical image was taken.
- 20. A diagnosis support apparatus comprising: a non-transitory computer-readable medium on which the trained model according to claim 19 is recorded; and a processor that operates in accordance with the trained model.
- 21. A learning device comprising a processor and a non-transitory computer-readable medium on which a learning program executed by the processor is recorded, wherein, in accordance with instructions of the learning program, the processor: acquires learning data including image data and non-image data relating to a target matter, and data indicating a known state of the matter corresponding to the combination of the image data and the non-image data; and performs machine learning of a learning model by inputting the image data and the non-image data to the learning model so that prediction information indicating a state relating to the matter at a time different from the time at which the image data was taken is output from the image data and the non-image data, and wherein the learning model calculates a third feature amount in which a first feature amount calculated from the image data and a second feature amount calculated from the non-image data are fused, by performing weighting calculation by a calculation method whose output is the combination of element-wise products of the first feature amount and the second feature amount, and outputs the prediction information based on the third feature amount.
- 22. A method of generating a prediction model, comprising: acquiring learning data including image data and non-image data relating to a target matter, and data indicating a known state of the matter corresponding to the combination of the image data and the non-image data; and generating, by performing machine learning of a learning model using the learning data, a trained prediction model that outputs, for input of the image data and the non-image data, prediction information indicating a state relating to the matter at a time different from the time at which the image data was taken, wherein the learning model calculates a third feature amount in which a first feature amount calculated from the image data and a second feature amount calculated from the non-image data are fused, by performing weighting calculation by a calculation method whose output is the combination of element-wise products of the first feature amount and the second feature amount, and outputs the prediction information based on the third feature amount.
Priority Applications (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202080048846.6A CN114080646A (zh) | 2019-07-26 | 2020-07-20 | 信息处理装置、程序、学习完毕模型、诊断支援装置、学习装置及预测模型的生成方法 |
| JP2021536958A JP7170145B2 (ja) | 2019-07-26 | 2020-07-20 | 情報処理装置、プログラム、学習済みモデル、診断支援装置、学習装置及び予測モデルの生成方法 |
| EP20846460.2A EP4005498B1 (en) | 2019-07-26 | 2020-07-20 | Information processing device, program, learned model, diagnostic assistance device, learning device, and method for generating prediction model |
| US17/565,412 US12169932B2 (en) | 2019-07-26 | 2021-12-29 | Information processing device, program, trained model, diagnostic support device, learning device, and prediction model generation method |
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2019137875 | 2019-07-26 | ||
| JP2019-137875 | 2019-07-26 | ||
| JP2020036935 | 2020-03-04 | ||
| JP2020-036935 | 2020-03-04 |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/565,412 Continuation US12169932B2 (en) | 2019-07-26 | 2021-12-29 | Information processing device, program, trained model, diagnostic support device, learning device, and prediction model generation method |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2021020198A1 (ja) | 2021-02-04 |
Family
ID=74229630
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2020/028074 Ceased WO2021020198A1 (ja) | 2019-07-26 | 2020-07-20 | 情報処理装置、プログラム、学習済みモデル、診断支援装置、学習装置及び予測モデルの生成方法 |
Country Status (5)
| Country | Link |
|---|---|
| US (1) | US12169932B2 (ja) |
| EP (1) | EP4005498B1 (ja) |
| JP (1) | JP7170145B2 (ja) |
| CN (1) | CN114080646A (ja) |
| WO (1) | WO2021020198A1 (ja) |
Cited By (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113411236A (zh) * | 2021-06-23 | 2021-09-17 | 中移(杭州)信息技术有限公司 | 质差路由器检测方法、装置、设备及存储介质 |
| WO2022209290A1 (ja) * | 2021-03-30 | 2022-10-06 | 富士フイルム株式会社 | 構造物の状態予測装置、方法及びプログラム |
| WO2022224524A1 (ja) * | 2021-04-22 | 2022-10-27 | ソニーグループ株式会社 | 患者モニタリングシステム |
| WO2023276977A1 (ja) * | 2021-06-28 | 2023-01-05 | 富士フイルム株式会社 | 医療支援装置、医療支援装置の作動方法、医療支援装置の作動プログラム |
| WO2023276563A1 (ja) * | 2021-06-29 | 2023-01-05 | 大日本印刷株式会社 | 診断支援装置、コンピュータプログラム及び診断支援方法 |
| WO2023105976A1 (ja) * | 2021-12-08 | 2023-06-15 | 富士フイルム株式会社 | 臨床試験支援装置、臨床試験支援装置の作動方法、および臨床試験支援装置の作動プログラム |
| JP2023117592A (ja) * | 2022-02-14 | 2023-08-24 | コニカミノルタ株式会社 | プログラム、動態解析システム及び動態解析装置 |
| JPWO2024062931A1 (ja) * | 2022-09-20 | 2024-03-28 | ||
| JP7554439B1 (ja) | 2023-05-30 | 2024-09-20 | メディカルリサーチ株式会社 | 情報処理方法、コンピュータプログラム及び情報処理装置 |
| WO2025177780A1 (ja) * | 2024-02-22 | 2025-08-28 | 国立大学法人大阪大学 | 認知機能予測システム |
Families Citing this family (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TWI775161B (zh) * | 2020-09-28 | 2022-08-21 | 臺北醫學大學 | 腫瘤復發預測裝置與方法 |
| KR20230018929A (ko) * | 2021-07-30 | 2023-02-07 | 주식회사 루닛 | 환자에 대한 해석가능한 예측 결과를 생성하는 방법 및 시스템 |
| CN114398983B (zh) * | 2022-01-14 | 2024-11-05 | 腾讯科技(深圳)有限公司 | 分类预测方法、装置、设备、存储介质及计算机程序产品 |
| CN115240854B (zh) * | 2022-07-29 | 2023-10-03 | 中国医学科学院北京协和医院 | 一种胰腺炎预后数据的处理方法及其系统 |
| CN115187151B (zh) * | 2022-09-13 | 2022-12-09 | 北京锘崴信息科技有限公司 | 基于联邦学习的排放可信分析方法及金融信息评价方法 |
| CN115590481B (zh) * | 2022-12-15 | 2023-04-11 | 北京鹰瞳科技发展股份有限公司 | 一种用于预测认知障碍的装置和计算机可读存储介质 |
| CN121058068A (zh) * | 2023-04-03 | 2025-12-02 | 莫尔研究应用有限公司 | 使用血液检测数据和机器学习模型预测神经系统变性疾病的发生和进展 |
| KR102775338B1 (ko) * | 2023-05-25 | 2025-03-06 | (주)그래디언트 바이오컨버전스 | 유전자 정보 데이터와 뇌 이미지 데이터를 이용한 치매 진단 방법 및 시스템 |
| CN120833343A (zh) * | 2025-09-19 | 2025-10-24 | 季华实验室 | 液体体积预测方法、电子设备及计算机可读存储介质 |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2017114810A1 (en) * | 2015-12-31 | 2017-07-06 | Vito Nv | Methods, controllers and systems for the control of distribution systems using a neural network arhcitecture |
| RU2020114290A (ru) * | 2017-10-31 | 2021-12-01 | ДжиИ ХЕЛТКЕР ЛИМИТЕД | Медицинская система для диагностики патологии и / или исхода когнитивного заболевания |
| EP3987455A1 (en) * | 2019-06-19 | 2022-04-27 | Yissum Research Development Company of the Hebrew University of Jerusalem Ltd. | Machine learning-based anomaly detection |
| EP4088883A1 (en) * | 2021-05-11 | 2022-11-16 | Siemens Industry Software Ltd. | Method and system for predicting a collision free posture of a kinematic system |
- 2020-07-20: CN application CN202080048846.6A, published as CN114080646A, status Pending
- 2020-07-20: EP application EP20846460.2A, published as EP4005498B1, status Active
- 2020-07-20: WO application PCT/JP2020/028074, published as WO2021020198A1, status Ceased
- 2020-07-20: JP application JP2021536958A, published as JP7170145B2, status Active
- 2021-12-29: US application US17/565,412, published as US12169932B2, status Active
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPS648390B2 (ja) | 1980-10-24 | 1989-02-14 | Tokyo Shibaura Electric Co | |
| JP2008157640A (ja) * | 2006-12-20 | 2008-07-10 | Fujifilm Ri Pharma Co Ltd | 脳画像データに関する時系列データの解析方法、プログラムおよび記録媒体 |
| JP2011521220A (ja) * | 2008-05-15 | 2011-07-21 | ユニヴェルシテ ピエール エ マリー キュリー(パリ シス) | アルツハイマー病の予測を支援する方法及び自動化システム、並びに、前記システムをトレーニングする方法 |
| JP2015166962A (ja) * | 2014-03-04 | 2015-09-24 | 日本電気株式会社 | 情報処理装置、学習方法、及び、プログラム |
| JP2019008742A (ja) * | 2017-06-28 | 2019-01-17 | ヤフー株式会社 | 学習装置、生成装置、学習方法、生成方法、学習プログラム、および生成プログラム |
Non-Patent Citations (5)
| Title |
|---|
| GARAM LEE, KWANGSIK NHO ET AL.: "Predicting Alzheimer's disease progression using multi-modal deep learning approach", SCIENTIFIC REPORTS, vol. 9, no. 1952, 2019 |
| SARRAF, SAMAN ET AL.: "DeepAD: Alzheimer's Disease Classification via Deep Convolutional Neural Networks using MRI and fMRI", BIORXIV, 2016, pages 070441 |
| SPASOV, SIEMON E. ET AL.: "A Multi-modal Convolutional Neural Network Framework for the Prediction of Alzheimer's Disease", 2018 40TH ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY, 18 July 2018 (2018-07-18), pages 1271 - 1274, XP033431915, DOI: 10.1109/EMBC.2018.8512468 * |
| SPASOV, SIMEON ET AL.: "A parameter-efficient deep learning approach to predict conversion from mild cognitive impairment to Alzheimer's disease", NEUROIMAGE, vol. 189, 1 April 2019 (2019-04-01), pages 276 - 287, XP085636020, DOI: 10.1016/j.neuroimage.2019.01.031 * |
Cited By (17)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2022209290A1 (ja) * | 2021-03-30 | 2022-10-06 | 富士フイルム株式会社 | 構造物の状態予測装置、方法及びプログラム |
| WO2022224524A1 (ja) * | 2021-04-22 | 2022-10-27 | ソニーグループ株式会社 | 患者モニタリングシステム |
| CN113411236B (zh) * | 2021-06-23 | 2022-06-14 | 中移(杭州)信息技术有限公司 | 质差路由器检测方法、装置、设备及存储介质 |
| CN113411236A (zh) * | 2021-06-23 | 2021-09-17 | 中移(杭州)信息技术有限公司 | 质差路由器检测方法、装置、设备及存储介质 |
| US20240120038A1 (en) * | 2021-06-28 | 2024-04-11 | Fujifilm Corporation | Medical support device, operation method of medical support device, and operation program of medical support device |
| WO2023276977A1 (ja) * | 2021-06-28 | 2023-01-05 | 富士フイルム株式会社 | 医療支援装置、医療支援装置の作動方法、医療支援装置の作動プログラム |
| WO2023276563A1 (ja) * | 2021-06-29 | 2023-01-05 | 大日本印刷株式会社 | 診断支援装置、コンピュータプログラム及び診断支援方法 |
| JP2023005697A (ja) * | 2021-06-29 | 2023-01-18 | 大日本印刷株式会社 | 診断支援装置、コンピュータプログラム及び診断支援方法 |
| WO2023105976A1 (ja) * | 2021-12-08 | 2023-06-15 | 富士フイルム株式会社 | 臨床試験支援装置、臨床試験支援装置の作動方法、および臨床試験支援装置の作動プログラム |
| JP2023117592A (ja) * | 2022-02-14 | 2023-08-24 | コニカミノルタ株式会社 | プログラム、動態解析システム及び動態解析装置 |
| JP7779167B2 (ja) | 2022-02-14 | 2025-12-03 | コニカミノルタ株式会社 | プログラム、動態解析システム及び動態解析装置 |
| WO2024062931A1 (ja) * | 2022-09-20 | 2024-03-28 | 学校法人順天堂 | 神経変性疾患のリスク判定方法及び判定装置 |
| JPWO2024062931A1 (ja) * | 2022-09-20 | 2024-03-28 | ||
| JP7610810B2 (ja) | 2022-09-20 | 2025-01-09 | 学校法人順天堂 | 神経変性疾患のリスク判定方法及び判定装置 |
| JP7554439B1 (ja) | 2023-05-30 | 2024-09-20 | メディカルリサーチ株式会社 | 情報処理方法、コンピュータプログラム及び情報処理装置 |
| JP2024171733A (ja) * | 2023-05-30 | 2024-12-12 | メディカルリサーチ株式会社 | 情報処理方法、コンピュータプログラム及び情報処理装置 |
| WO2025177780A1 (ja) * | 2024-02-22 | 2025-08-28 | 国立大学法人大阪大学 | 認知機能予測システム |
Also Published As
| Publication number | Publication date |
|---|---|
| US12169932B2 (en) | 2024-12-17 |
| EP4005498A4 (en) | 2022-09-21 |
| JP7170145B2 (ja) | 2022-11-11 |
| EP4005498B1 (en) | 2025-07-23 |
| CN114080646A (zh) | 2022-02-22 |
| JPWO2021020198A1 (ja) | 2021-02-04 |
| EP4005498A1 (en) | 2022-06-01 |
| US20220122253A1 (en) | 2022-04-21 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP7170145B2 (ja) | 情報処理装置、プログラム、学習済みモデル、診断支援装置、学習装置及び予測モデルの生成方法 | |
| Ghorbani et al. | Deep learning interpretation of echocardiograms | |
| CN109447183B (zh) | 预测模型训练方法、装置、设备以及介质 | |
| US10984905B2 (en) | Artificial intelligence for physiological quantification in medical imaging | |
| CN111247595B (zh) | 用于诊断认知疾病病理和/或结果的医疗系统 | |
| Gatta et al. | Towards a modular decision support system for radiomics: A case study on rectal cancer | |
| CN109727660B (zh) | 在医学成像中针对血液动力学量化的不确定性或敏感性的机器学习预测 | |
| JP2020513615A (ja) | 深層学習ニューラルネットワークの分散化された診断ワークフロー訓練 | |
| Kadry et al. | Res-Unet based blood vessel segmentation and cardio vascular disease prediction using chronological chef-based optimization algorithm based deep residual network from retinal fundus images | |
| Kumar et al. | Deep-learning-enabled multimodal data fusion for lung disease classification | |
| Zhao et al. | Multi-view prediction of Alzheimer’s disease progression with end-to-end integrated framework | |
| Alsadoun et al. | Artificial intelligence (AI)-Enhanced detection of diabetic retinopathy from fundus images: the current landscape and future directions | |
| Chang et al. | Application of multimodal deep learning and multi-instance learning fusion techniques in predicting STN-DBS outcomes for Parkinson's disease patients | |
| JP7457292B2 (ja) | 脳画像解析装置、制御方法、及びプログラム | |
| Kalita et al. | Artificial intelligence in diagnostic medical image processing for advanced healthcare applications | |
| Aghaei et al. | Brain age gap estimation using attention-based resnet method for Alzheimer’s disease detection | |
| Akbarifar et al. | Multimodal dementia identification using lifestyle and brain lesions, a machine learning approach | |
| Akan et al. | ViViEchoformer: deep video regressor predicting ejection fraction | |
| Abinaya et al. | Accurate Liver Fibrosis Detection Through Hybrid MRMR-BiLSTM-CNN Architecture with Histogram Equalization and Optimization | |
| CN119339873B (zh) | 一种神经系统影像分析方法和装置 | |
| Liu et al. | Dual-branch image projection network for geographic atrophy segmentation in retinal OCT images | |
| Brzus et al. | A Clinical Neuroimaging Platform for Rapid, Automated Lesion Detection and Personalized Post-Stroke Outcome Prediction | |
| Weigel et al. | Normative connectome-based analysis of sensorimotor deficits in acute subcortical stroke | |
| Sagiroglu et al. | A novel brain tumor magnetic resonance imaging dataset (Gazi Brains 2020): initial benchmark results and comprehensive analysis | |
| Koh | Deep Learning Framework for Kidney Tumor Segmentation with Sur-gical Method Recommendation and Operative Time Prediction |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20846460; Country of ref document: EP; Kind code of ref document: A1 |
| | ENP | Entry into the national phase | Ref document number: 2021536958; Country of ref document: JP; Kind code of ref document: A |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | ENP | Entry into the national phase | Ref document number: 2020846460; Country of ref document: EP; Effective date: 20220228 |
| | WWG | Wipo information: grant in national office | Ref document number: 2020846460; Country of ref document: EP |