US11645748B2 - Three-dimensional automatic location system for epileptogenic focus based on deep learning - Google Patents
Three-dimensional automatic location system for epileptogenic focus based on deep learning
- Publication number
- US11645748B2 (U.S. application Ser. No. 17/047,392)
- Authority
- US
- United States
- Prior art keywords
- image
- epileptogenic focus
- pet
- layer
- data
- Prior art date
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/02—Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
- A61B6/03—Computed tomography [CT]
- A61B6/032—Transmission computed tomography [CT]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/02—Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
- A61B6/03—Computed tomography [CT]
- A61B6/037—Emission tomography
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/50—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications
- A61B6/501—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications for diagnosis of the head, e.g. neuroimaging or craniography
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
- A61B6/5211—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
- A61B6/5217—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data extracting a diagnostic or physiological parameter from medical diagnostic data
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/09—Supervised learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/255—Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
- A61B6/5211—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10104—Positron emission tomography [PET]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30016—Brain
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
Definitions
- the present disclosure relates to the technical field of medical imaging engineering, and in particular, to a three-dimensional automatic location system for an epileptogenic focus based on deep learning.
- detection systems for epilepsy include positron emission tomography (PET), magnetic resonance imaging (MRI), single photon emission computed tomography (SPECT) and electroencephalography (EEG), among which PET offers relatively high sensitivity for the detection and prognosis of epilepsy.
- PET: positron emission tomography
- MRI: magnetic resonance imaging
- SPECT: single photon emission computed tomography
- EEG: electroencephalography
- imaging technology usually judges abnormalities based on statistical inference over standardized uptake values (SUV) and/or asymmetry indices (AI) of regions or voxels.
- regional statistical methods usually divide the brain into larger regions of interest (ROI) and then compare the average SUV or AI within each region. Since a region is often much larger than the lesion, this approach ignores subtle changes, reducing detection sensitivity.
- voxel statistical methods usually use statistical parametric mapping (SPM) software to compare data from individual cases against a control group, but they are highly sensitive to registration errors and therefore tend to generate false positives in misaligned regions.
- SPM: statistical parametric mapping
- an object of the present disclosure is to provide a deep-learning-based three-dimensional location system for a brain epileptogenic focus, which automatically locates the position of the focus with high location accuracy and relatively high model robustness.
- a three-dimensional automatic location system for an epileptogenic focus based on deep learning includes the following modules:
- a PET image acquisition and labelling module including image acquisition and epileptogenic focus region labelling:
- image acquisition: a 3D PET/CT scanner is used to acquire a PET image of the brain, with the subject maintaining the same posture throughout the acquisition process.
- image format conversion is then performed, that is, the originally acquired image sequence in the DICOM format is converted into an easier-to-process image in the NIfTI format.
- a PET image registration module, which uses cross-correlation as the similarity measure between images and a symmetric diffeomorphic (SyN) registration algorithm to deform all PET images and their labelled images into the same symmetric standard space, in order to register the acquired PET images and labelled images to a standard symmetric brain template.
- SyN: symmetric diffeomorphic normalization
- a Gaussian smoothing algorithm is used to reduce registration errors caused by individual differences.
- the Gaussian smoothing process uses a Gaussian function whose full width at half maximum (FWHM) is 5 to 15 mm.
- Z-score normalization is performed on the smoothed image.
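By way of a non-limiting illustration, the smoothing and normalization steps may be sketched as follows, assuming a NumPy volume loaded from the registered NIfTI image (e.g., via nibabel); the FWHM and voxel size shown are illustrative values within the disclosed 5 to 15 mm range:

```python
# A minimal sketch of Gaussian smoothing followed by Z-score normalization.
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_and_normalize(vol, fwhm_mm=10.0, voxel_mm=2.0):
    # Convert FWHM to the Gaussian sigma in voxel units:
    # sigma = FWHM / (2 * sqrt(2 * ln 2)) ~= FWHM / 2.3548.
    sigma = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / voxel_mm
    smoothed = gaussian_filter(vol, sigma=sigma)
    # Z-score normalization: zero mean, unit standard deviation.
    return (smoothed - smoothed.mean()) / smoothed.std()

normalized = smooth_and_normalize(np.random.rand(96, 96, 96))  # toy volume
```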
- image distortion is applied for data enhancement: P_u = P_d + (P_d − P_c)(k_1 r^2 + k_2 r^4 + k_3 r^6 + …), where P_u is a pixel point of the original image, P_d is a pixel point of the distorted image, P_c is the distortion center, and r is the distance between P_d and P_c in the vector space.
- image intensity enhancement is applied as P_a = g_mult × P_u + g_add, where P_a is an image pixel point after the intensity enhancement, g_mult is an image pixel point of a multiplicative Gaussian bias field, and g_add is an image pixel point of an additive Gaussian bias field.
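A sketch of these two enhancement operations on a 2D slice (kept two-dimensional for readability; the distortion coefficients and bias-field scales are illustrative assumptions):

```python
# Radial distortion and Gaussian bias-field intensity enhancement.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def radial_distort(img, center, k=(1e-6, 0.0, 0.0)):
    # P_u = P_c + (P_d - P_c)(1 + k_1 r^2 + k_2 r^4 + k_3 r^6): sample the
    # original image at radially displaced coordinates.
    ys, xs = np.indices(img.shape, dtype=float)
    dy, dx = ys - center[0], xs - center[1]
    r2 = dy ** 2 + dx ** 2
    scale = 1.0 + k[0] * r2 + k[1] * r2 ** 2 + k[2] * r2 ** 3
    return map_coordinates(img, [center[0] + dy * scale,
                                 center[1] + dx * scale], order=1)

def gaussian_bias_field(img, rng, mult_sigma=8.0, add_sigma=8.0):
    # P_a = g_mult * P_u + g_add, with smooth random bias fields.
    g_mult = 1.0 + gaussian_filter(rng.normal(0.0, 0.1, img.shape), mult_sigma)
    g_add = gaussian_filter(rng.normal(0.0, 0.05, img.shape), add_sigma)
    return g_mult * img + g_add

rng = np.random.default_rng(0)
img = np.random.rand(128, 128)                        # toy slice
aug = gaussian_bias_field(radial_distort(img, center=(64.0, 64.0)), rng)
```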
- image block division: image block division is performed on the enhanced image data; a three-dimensional sliding window divides the left and right hemispheres L and R of the PET image into mirror image pairs of image blocks, and the mirror-image-pair data is divided into a training set, a verification set and a test set in set proportions. The training set, verification set and test set all contain two types of PET image block data: epileptogenic focus and normal.
- the resolution of each PET image in the data set is X × Y × Z pixels, the sliding scanning window is set to m × m × m, and the sliding step length is set to t, so each image block has a size of m × m × m.
- accordingly, the left and right hemispheres of a PET image can be divided into a number of mirror image pairs of image blocks determined by X, Y, Z, m and t, as illustrated in the sketch below.
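A sketch of the mirror-pair block division, assuming the volume is already registered to a symmetric template so that flipping the first axis maps one hemisphere onto the other (the axis choice and the m and t values are illustrative):

```python
# Divide left/right hemispheres into mirror image pairs of m^3 blocks.
import numpy as np

def mirror_block_pairs(vol, m=48, t=24):
    flipped = vol[::-1, :, :]               # mirror across the midline axis
    X, Y, Z = vol.shape
    pairs = []
    for x in range(0, X // 2 - m + 1, t):   # scan only one hemisphere
        for y in range(0, Y - m + 1, t):
            for z in range(0, Z - m + 1, t):
                pairs.append((vol[x:x + m, y:y + m, z:z + m],
                              flipped[x:x + m, y:y + m, z:z + m]))
    return pairs

pairs = mirror_block_pairs(np.random.rand(192, 192, 192))   # toy volume
```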
- a network building module for building a deep Siamese network, SiameseNet.
- This network contains two identical convolutional neural networks, a fully connected layer and an output layer.
- each of the convolutional neural networks has a ten-layer structure: the first layer includes one convolution layer (conv), one batch normalization operation unit, one ReLU function, and one pooling layer (pool) connected in sequence; each of the second to ninth layers is a ResBlock, and each ResBlock contains two convolution layers, two normalization operations and one ReLU function connected in sequence; the tenth layer is one convolution layer. The outputs of the tenth layers of the two convolutional neural networks are connected to one fully connected layer (fc) for nonlinear transformation, and finally one output layer is connected.
- the two convolutional neural networks of the SiameseNet share the same weight parameters θ in each layer, and the inputs of the network are a pair of mirror image blocks, from which two high-dimensional image features, L_feature and R_feature, are obtained.
- Dimensions of the fully connected layer vector are 2048, 1024, 512 and 2 in sequence.
- the output layer uses a softmax regression function to output a classification probability, that is, the probability that the image block carries the epileptogenic focus or is normal.
- a cross entropy function is used as a loss function of the network.
- the cross entropy is calculated as Loss(a, b) = −Σ_{i=1}^{n} a_i ln b_i, where n is the number of samples, a is the correct probability distribution, and b is the probability distribution predicted by the network model.
- the weights are updated by stochastic gradient descent (SGD): θ_k ← θ_{k−1} − η · ∂Loss(a, b)/∂θ_{k−1}, where η is the learning rate and θ_k is the k-th weight parameter.
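As a toy numerical illustration of this loss and update rule (all values illustrative):

```python
# Cross-entropy Loss(a, b) = -sum_i a_i ln b_i, and one SGD step.
import numpy as np

a = np.array([1.0, 0.0])            # correct distribution (one-hot: focus)
b = np.array([0.8, 0.2])            # distribution predicted by the network
loss = -np.sum(a * np.log(b))       # ~= 0.223

eta = 0.01                          # learning rate
theta = np.array([0.5, -0.3])       # toy weight parameters theta_{k-1}
grad = np.array([0.1, -0.2])        # placeholder for dLoss/dtheta_{k-1}
theta = theta - eta * grad          # updated weights theta_k
```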
- image classification: the trained model is used to calculate a probability heatmap for each PET image of the test set.
- the probability heatmap is a probability map stitched from the probabilities of the different image blocks of one PET image, and its size is determined by the number of image blocks along each dimension.
- a logistic regression algorithm is used to classify the probability heatmap corresponding to each PET image, to obtain a classification result, that is, the normal PET image or the epileptogenic focus PET image.
- locating of the epileptogenic focus: bilinear interpolation is performed on the probability heatmap of an image identified as an epileptogenic focus PET image, the heatmap is resized to the size of the original image, and the region whose probability exceeds a probability threshold is predicted as the epileptogenic focus region.
- the SiameseNet can automatically learn high-dimensional asymmetric features in PET images to discover the internal relationship between the PET image and the epileptogenic focus. Compared with traditional location systems for the epileptogenic focus, the system proposed by the present disclosure can learn high-order features that are difficult for human eyes to recognize, and it also takes into account a priori knowledge of the asymmetric metabolic distribution in patients having unilateral epilepsy.
- the system proposed by the present disclosure can accurately detect images of patients having abnormal metabolism, and compared with the existing SPM software, the epileptogenic focus region predicted by the system is more consistent with a physician's visual assessment and maintains relatively high accuracy and efficiency. Therefore, it has relatively high value in helping doctors locate the epileptogenic region and in follow-up surgical treatment.
- the system proposed by the present disclosure is effective for the detection of epileptogenic focus in different brain regions of the whole brain and is suitable for epileptic patients having epileptogenic focus in different brain regions.
- the present disclosure utilizes image enhancement and mirror image pairs in the division of the image blocks to increase the sample amount, on which model training and testing are performed, thereby avoiding overfitting during network training and improving robustness.
- the present disclosure uses sample weighting as data enhancement and sets a relatively large weight for the minority class of samples, to balance the proportion of normal-region samples and epileptogenic-region samples in each batch during training.
- FIG. 1 is a structural block diagram of a three-dimensional location system for an epileptogenic focus based on deep learning according to an embodiment of the present disclosure
- FIG. 2 is a flowchart of implementation of a three-dimensional location system for an epileptogenic focus based on deep learning according to an embodiment of the present disclosure
- FIG. 3 is a schematic diagram of building of a deep SiameseNet according to an embodiment of the present disclosure
- FIG. 4 is a structural schematic diagram of a single residual neural network of SiameseNet according to the present disclosure
- FIG. 5 is a probability heatmap corresponding to a PET image according to an embodiment of the present disclosure.
- the three-dimensional automatic location system for an epileptogenic focus includes the following modules:
- a PET image acquisition and labelling module including image acquisition and epileptogenic focus region labelling:
- image acquisition: a 3D PET/CT scanner is used to acquire a PET image of the brain, with the subject maintaining the same posture throughout the acquisition process.
- image format conversion is then performed, that is, the originally acquired image sequence in the DICOM format is converted into an easier-to-process image in the NIfTI format.
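By way of illustration, this conversion may be performed with the dicom2nifti package (a sketch; the directory paths are placeholders):

```python
# Convert an acquired DICOM series into compressed NIfTI files.
import dicom2nifti

dicom2nifti.convert_directory("pet_dicom_dir/", "pet_nifti_dir/",
                              compression=True, reorient=True)
```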
- a PET image registration module, which uses cross-correlation as the similarity measure between images and the symmetric diffeomorphic (SyN) registration algorithm to deform all PET images and their labelled images into the same symmetric standard space, in order to register the acquired PET images and labelled images to the standard symmetric brain templates.
- SyN: symmetric diffeomorphic normalization
- in the SyN objective function, the first term is a smoothing term, in which L is a smoothing operator and v is a velocity field; λ in the second term controls the accuracy of matching; C(I, J) is the similarity measure between images I and J, taken as their cross-correlation.
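One possible realization of this registration step uses the ANTsPy package, whose SyN transform with a cross-correlation metric matches the description above; the file paths are placeholders, and this is a sketch rather than the patent's exact tooling:

```python
# SyN registration of a PET image and its label to a symmetric template.
import ants

template = ants.image_read("symmetric_brain_template.nii.gz")  # placeholder
pet = ants.image_read("subject_pet.nii.gz")                    # placeholder
label = ants.image_read("subject_label.nii.gz")                # placeholder

reg = ants.registration(fixed=template, moving=pet,
                        type_of_transform="SyN",
                        syn_metric="CC")        # cross-correlation measure
warped_pet = reg["warpedmovout"]

# Warp the labelled focus mask with the same transform; nearest-neighbor
# interpolation keeps the labels discrete.
warped_label = ants.apply_transforms(fixed=template, moving=label,
                                     transformlist=reg["fwdtransforms"],
                                     interpolator="nearestNeighbor")
```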
- a Gaussian smoothing algorithm is used to reduce registration errors caused by individual differences.
- the Gaussian smoothing process uses a Gaussian function whose full width at half maximum (FWHM) is 5 to 15 mm, to eliminate the registration errors caused by individual differences.
- Z-score normalization is performed on the smoothed image: J_norm = (J − μ)/σ, where μ is the mean of the registered image J and σ is its standard deviation.
- image distortion is applied for data enhancement: P_u = P_d + (P_d − P_c)(k_1 r^2 + k_2 r^4 + k_3 r^6 + …), where P_u is a pixel point of the original image, P_d is a pixel point of the distorted image, P_c is the distortion center, and r is the distance between P_d and P_c in the vector space.
- image intensity enhancement is applied as P_a = g_mult × P_u + g_add, where P_a is an image pixel point after the intensity enhancement, g_mult is an image pixel point of a multiplicative Gaussian bias field, and g_add is an image pixel point of an additive Gaussian bias field.
- image block division: image block division is performed on the enhanced image data; a three-dimensional sliding window divides the left and right hemispheres L and R of the PET image into mirror image pairs of image blocks, and the mirror-image-pair data is divided into a training set, a verification set and a test set in set proportions. The training set, verification set and test set all contain two types of PET image block data: epileptogenic focus and normal.
- the resolution of each PET image in the data set is X × Y × Z pixels, the sliding scanning window is set to m × m × m, and the sliding step length is set to t, so each image block has a size of m × m × m.
- accordingly, the left and right hemispheres of a PET image can be divided into a number of mirror image pairs of image blocks determined by X, Y, Z, m and t.
- a network building module for building a deep twin (Siamese) network, SiameseNet.
- This network contains two identical convolutional neural networks, a fully connected layer and an output layer.
- each of the convolutional neural networks has a ten-layer structure: the first layer includes one convolution layer (conv), one batch normalization operation unit, one ReLU function, and one pooling layer (pool) connected in sequence; each of the second to ninth layers is a ResBlock, and each ResBlock contains two convolution layers, two normalization operations and one ReLU function connected in sequence; the tenth layer is one convolution layer. The outputs of the tenth layers of the two convolutional neural networks are connected to one fully connected layer (fc) for nonlinear transformation, and finally one output layer is connected. The parameter of one random dropout can be set to 0.5.
- the size of the data output by each convolution layer satisfies output_conv = (input_conv − kernel)/stride + 1 (padding omitted), where output_conv is the three-dimensional size (length, width and depth) of the output image data of the convolution layer, input_conv is the three-dimensional size of the input image, kernel is the three-dimensional size of the convolution kernel, and stride is the step length of the convolution kernel.
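As a worked check of this relationship (here extended with a padding term, an assumption consistent with the 48 → 24 size reduction reported below):

```python
# output = floor((input - kernel + 2 * padding) / stride) + 1
def conv_out(size, kernel, stride, padding=0):
    return (size - kernel + 2 * padding) // stride + 1

print(conv_out(48, kernel=7, stride=2, padding=3))   # -> 24
```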
- for each of the convolution layers, a batch normalization operation is used to accelerate the convergence speed and stability of the network: output_norm = γ · (input_norm − μ)/√(σ² + ε) + β, where input_norm is each batch of input data, output_norm is the batch data output by the batch normalization operation, μ and σ² are respectively the mean and variance of each batch, γ and β are respectively scaling and translation variables, and ε is a relatively small constant added to increase training stability.
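A minimal NumPy sketch of this operation (the batch shape and ε value are illustrative):

```python
# Batch normalization: output = gamma * (x - mu) / sqrt(var + eps) + beta.
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    mu = x.mean(axis=0)              # per-feature batch mean
    var = x.var(axis=0)              # per-feature batch variance
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

y = batch_norm(np.random.randn(32, 64))   # a batch of 32 feature vectors
```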
- the two convolutional neural networks of the SiameseNet share the same weight parameters θ in each layer, and the inputs of the network are a pair of mirror image blocks.
- the size of each input image block is 48 × 48 × 48 × 1, where 48 × 48 × 48 is the length, width and height of the image block and 1 is the number of channels.
- after the first layer, the resulting feature size is 24 × 24 × 24 × 64.
- the feature sizes obtained through the successive ResBlocks are 12 × 12 × 12 × 64, 12 × 12 × 12 × 64, 6 × 6 × 6 × 128, 6 × 6 × 6 × 128, 3 × 3 × 3 × 256, 3 × 3 × 3 × 256, 3 × 3 × 3 × 512 and 3 × 3 × 3 × 512.
- L_feature and R_feature, each having a size of 1 × 1 × 1 × 2048, are then obtained.
- the absolute difference between L_feature and R_feature is fed to a multi-layer perceptron (MLP) for probability regression.
- the output layer uses a softmax regression function to output a classification probability, that is, the probability that the image block carries the epileptogenic focus or is normal: softmax(d_i) = e^{d_i} / Σ_j e^{d_j}, where d_j represents the outputs of the different categories.
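A condensed PyTorch sketch of this twin architecture is given below. It is a simplified stand-in rather than a layer-for-layer reproduction of the disclosed ten-layer network: the trunk uses one initial convolution plus four residual blocks, and the kernel sizes, strides and paddings are assumptions chosen so the feature sizes match those listed above (48³ input, 1 × 1 × 1 × 2048 features, fully connected dimensions 2048→1024→512→2, dropout 0.5):

```python
# Weight-sharing twin network over mirror image block pairs.
import torch
import torch.nn as nn

class ResBlock3d(nn.Module):
    def __init__(self, ch_in, ch_out, stride=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(ch_in, ch_out, 3, stride, 1), nn.BatchNorm3d(ch_out),
            nn.ReLU(inplace=True),
            nn.Conv3d(ch_out, ch_out, 3, 1, 1), nn.BatchNorm3d(ch_out))
        self.skip = (nn.Identity() if stride == 1 and ch_in == ch_out else
                     nn.Conv3d(ch_in, ch_out, 1, stride))

    def forward(self, x):
        return torch.relu(self.body(x) + self.skip(x))

class SiameseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(                    # shared weights
            nn.Conv3d(1, 64, 7, 2, 3), nn.BatchNorm3d(64),
            nn.ReLU(inplace=True), nn.MaxPool3d(2),    # 48^3 -> 12^3
            ResBlock3d(64, 64), ResBlock3d(64, 128, 2),
            ResBlock3d(128, 256, 2), ResBlock3d(256, 512),
            nn.Conv3d(512, 2048, 3),                   # 3^3 -> 1x1x1x2048
            nn.Flatten())
        self.mlp = nn.Sequential(                      # 2048->1024->512->2
            nn.Linear(2048, 1024), nn.ReLU(inplace=True), nn.Dropout(0.5),
            nn.Linear(1024, 512), nn.ReLU(inplace=True),
            nn.Linear(512, 2))

    def forward(self, left, right):
        l_feat = self.trunk(left)                      # L_feature
        r_feat = self.trunk(right)                     # R_feature
        logits = self.mlp(torch.abs(l_feat - r_feat))
        return torch.softmax(logits, dim=1)            # focus vs. normal

net = SiameseNet()
probs = net(torch.randn(2, 1, 48, 48, 48), torch.randn(2, 1, 48, 48, 48))
```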
- a cross entropy function is used as a loss function of the network.
- the cross entropy is calculated as Loss(a, b) = −Σ_{i=1}^{n} a_i ln b_i, where n is the number of samples, a is the correct probability distribution, and b is the probability distribution predicted by the network model.
- the weights are updated by stochastic gradient descent (SGD): θ_k ← θ_{k−1} − η · ∂Loss(a, b)/∂θ_{k−1}, where η is the learning rate and θ_k is the k-th weight parameter.
- flowcharts of the training phase and the test phase are shown in FIG. 4; the basic network framework adopted by SiameseNet is ResNet18, and the two ResNets share the same network weight parameters θ.
- the network is trained using a training set of epileptogenic focus PET images and normal images, and a network model is obtained through the training process.
- a small number of mirror image pairs of image background blocks are added to the normal samples of the training set, to reduce the impact of the image background on the model.
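A sketch of the training phase under these choices, assuming the SiameseNet sketch above as `net` and synthetic stand-in data; the batch size, learning rate and class proportions are illustrative. The inverse-frequency sample weights mirror the sample-weighting strategy described earlier:

```python
# Weighted-sampler training loop for the twin network.
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

left = torch.randn(20, 1, 48, 48, 48)      # stand-in left-hemisphere blocks
right = torch.randn(20, 1, 48, 48, 48)     # mirrored right-hemisphere blocks
y = torch.zeros(20, dtype=torch.long)
y[:4] = 1                                  # a minority of focus samples
train_set = TensorDataset(left, right, y)

class_w = 1.0 / torch.bincount(y, minlength=2).float()  # inverse frequency
sampler = WeightedRandomSampler(class_w[y], num_samples=len(train_set))
loader = DataLoader(train_set, batch_size=4, sampler=sampler)

opt = torch.optim.SGD(net.parameters(), lr=0.01)        # eta assumed
loss_fn = torch.nn.CrossEntropyLoss()                   # expects raw logits

for lb, rb, t in loader:
    opt.zero_grad()
    logits = net.mlp(torch.abs(net.trunk(lb) - net.trunk(rb)))
    loss_fn(logits, t).backward()                       # cross-entropy loss
    opt.step()                                          # SGD weight update
```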
- image classification: the trained model is used to calculate a probability heatmap for each PET image of the test set.
- the probability heatmap is a probability map stitched from the probabilities of the different image blocks of one PET image, and its size is determined by the number of image blocks along each dimension.
- a logistic regression algorithm is used to classify the probability heatmap corresponding to each PET image, to obtain a classification result, that is, the normal PET image or the epileptogenic focus PET image.
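One possible realization of this whole-image classification step, assuming scikit-learn and heatmaps flattened to fixed-length vectors (shapes and labels are illustrative):

```python
# Logistic regression over flattened probability heatmaps.
import numpy as np
from sklearn.linear_model import LogisticRegression

train_maps = np.random.rand(40, 4 * 8 * 8)    # 40 flattened heatmaps
train_labels = np.random.randint(0, 2, 40)    # 0: normal, 1: focus image

clf = LogisticRegression(max_iter=1000).fit(train_maps, train_labels)
pred = clf.predict(np.random.rand(5, 4 * 8 * 8))   # classify test heatmaps
```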
- locating of the epileptogenic focus: bilinear interpolation is performed on the probability heatmap of an image identified as an epileptogenic focus PET image, changing the heatmap to a heatmap having the same size as the original image, and the region whose probability exceeds a probability threshold is predicted as the epileptogenic focus region.
- the bilinear interpolation is f(m+u, n+v) = (1−u)(1−v) f(m, n) + u(1−v) f(m+1, n) + (1−u)v f(m, n+1) + uv f(m+1, n+1), where f(m+u, n+v) is the newly calculated pixel value; f(m, n), f(m+1, n), f(m, n+1) and f(m+1, n+1) are the four original pixel values around the new pixel; and u and v are the distances between the original pixel point and the new pixel point.
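A sketch of this final locating step, assuming a NumPy heatmap; the 0.5 threshold and the use of scipy's order-1 (linear) interpolation are illustrative assumptions:

```python
# Upsample the block-level heatmap to image size and threshold it.
import numpy as np
from scipy.ndimage import zoom

def locate_focus(heatmap, image_shape, threshold=0.5):
    factors = [s / h for s, h in zip(image_shape, heatmap.shape)]
    upsampled = zoom(heatmap, factors, order=1)    # (bi/tri)linear
    return upsampled > threshold                   # focus-region mask

mask = locate_focus(np.random.rand(4, 8, 8), (96, 192, 192))
```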
- an acquired PET data set is divided into a training set, a verification set and a test set, a twin network learning system is used to extract two feature vectors of left and right brain image blocks, an absolute difference between the two feature vectors is calculated, and then a multi-layer perceptron is added for probability regression.
- a sliding window is used to scan each entire image during testing; a probability heatmap is output after scanning, and a detection result map is finally obtained, so as to achieve classification and locating of the epileptogenic focus in the PET image.
- the AUC of the whole-image classification result is 94%.
- the epileptogenic focus region predicted by the system is more consistent with a physician's visual assessment and maintains a higher accuracy and efficiency.
Description
P_u = P_d + (P_d − P_c)(k_1 r^2 + k_2 r^4 + k_3 r^6 + …)
P_a = g_mult × P_u + g_add
output_relu = max(input_relu, 0)
f(m+u, n+v) = (1−u)(1−v) f(m, n) + u(1−v) f(m+1, n) + (1−u)v f(m, n+1) + uv f(m+1, n+1)
Claims (9)
P_u = P_d + (P_d − P_c)(k_1 r^2 + k_2 r^4 + k_3 r^6 + …),
P_a = g_mult × P_u + g_add,
Loss(a, b) = −Σ_{i=1}^{n} a_i ln b_i,
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910549416.1 | 2019-06-24 | ||
| CN201910549416.1A CN110390351B (en) | 2019-06-24 | 2019-06-24 | A three-dimensional automatic localization system for epileptogenic foci based on deep learning |
| PCT/CN2019/103530 WO2020224123A1 (en) | 2019-06-24 | 2019-08-30 | Deep learning-based seizure focus three-dimensional automatic positioning system |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20220230302A1 US20220230302A1 (en) | 2022-07-21 |
| US11645748B2 true US11645748B2 (en) | 2023-05-09 |
Family
ID=68285820
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/047,392 Active US11645748B2 (en) | 2019-06-24 | 2019-08-30 | Three-dimensional automatic location system for epileptogenic focus based on deep learning |
Country Status (3)
| Country | Link |
|---|---|
| US (1) | US11645748B2 (en) |
| CN (1) | CN110390351B (en) |
| WO (1) | WO2020224123A1 (en) |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109447966A (en) * | 2018-10-26 | 2019-03-08 | 科大讯飞股份有限公司 | Lesion localization recognition methods, device, equipment and the storage medium of medical image |
- 2019-06-24: CN application CN201910549416.1A filed; granted as patent CN110390351B (active)
- 2019-08-30: US application US17/047,392 filed; granted as patent US11645748B2 (active)
- 2019-08-30: PCT application PCT/CN2019/103530 filed; published as WO2020224123A1 (ceased)
Patent Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20160116603A1 (en) * | 2014-10-23 | 2016-04-28 | National Yang-Ming University | Method for pet attenuation correction |
| CN107403201A (en) | 2017-08-11 | 2017-11-28 | 强深智能医疗科技(昆山)有限公司 | Tumour radiotherapy target area and jeopardize that organ is intelligent, automation delineation method |
| CN109447996A (en) | 2017-08-28 | 2019-03-08 | 英特尔公司 | Hand Segmentation in 3-D image |
| US20190130569A1 (en) | 2017-10-26 | 2019-05-02 | Wisconsin Alumni Research Foundation | Deep learning based data-driven approach for attenuation correction of pet data |
| US20190247662A1 (en) * | 2017-12-04 | 2019-08-15 | Neuroenhancement Lab, LLC | Method and apparatus for neuroenhancement to facilitate learning and performance |
| CN108629784A (en) | 2018-05-08 | 2018-10-09 | 上海嘉奥信息科技发展有限公司 | A kind of CT image intracranial vessel dividing methods and system based on deep learning |
| CN109523521A (en) | 2018-10-26 | 2019-03-26 | 复旦大学 | Lung neoplasm classification and lesion localization method and system based on more slice CT images |
| CN109754387A (en) | 2018-11-23 | 2019-05-14 | 北京永新医疗设备有限公司 | Medical image lesion detection and positioning method, device, electronic device and storage medium |
Non-Patent Citations (1)
| Title |
|---|
| International Search Report (PCT/CN2019/103530); dated Mar. 5, 2020. |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2020224123A1 (en) | 2020-11-12 |
| CN110390351B (en) | 2020-07-24 |
| CN110390351A (en) | 2019-10-29 |
| US20220230302A1 (en) | 2022-07-21 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
| AS | Assignment |
Owner name: ZHEJIANG UNIVERSITY, CHINA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ZHUO, CHENG;TIAN, MEI;ZHANG, HONG;AND OTHERS;SIGNING DATES FROM 20200929 TO 20201010;REEL/FRAME:054087/0719 |
|
| FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |