WO2018180386A1 - Ultrasound imaging diagnosis assistance method and system - Google Patents
- Publication number
- WO2018180386A1 (PCT/JP2018/009336)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- ultrasonic
- region
- images
- tissue
- Prior art date
- Legal status: Ceased (the legal status is an assumption and is not a legal conclusion)
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/08—Clinical applications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/13—Tomography
- A61B8/14—Echo-tomography
Definitions
- the present invention relates to an ultrasonic diagnostic imaging support method, system, and apparatus.
- For lesion detection in breast ultrasound images, there is B-CAD, developed by the Canadian company Medipattern (registered trademark).
- in B-CAD, a lesion (tumor) that appears as a dark mass is targeted, and the examiner (user) specifies the approximate location of the lesion.
- the contour of the lesion is automatically extracted from that position information, and the malignancy is calculated from its shape and size.
- in an ultrasonic inspection apparatus, the measurement result is displayed as a moving image.
- a tumor is depicted as a dark, mass-like shadow, and can be detected even if each frame of a moving image is treated as an independent still image.
- the tumor can therefore also be detected by the still-image (pathological image) abnormality detection technique disclosed in Patent Document 1.
- the shape of a non-mass lesion, however, is not clear, and the texture changes of the mammary gland tissue must be observed; the approach of Patent Document 1 therefore cannot be used, and moving-image pattern recognition, such as measuring the correlation between preceding and following frames, is essential.
- in Patent Document 2, a position sensor is attached to the ultrasonic probe, image information and position information are combined to construct three-dimensional data of the internal structure, and a technique is disclosed for determining whether an image is a tumor based on the ratio of the tumor's surface area to its volume.
- Patent Document 3 discloses a technique for estimating the position by analyzing the acquired images instead of attaching a position sensor to the ultrasonic probe.
- Medipattern registered trademark
- B-CAD registered trademark
- B-CAD and the methods using luminance-value gradients assume that a lesion appears in the input image, so they inevitably overdetect normal regions; they cannot be applied when images without lesions are targeted, and are not practical in breast ultrasonography.
- the estimated position information is displayed on screen as a body mark and is used only to make it easier for a doctor to grasp the examination site; it is not used for automatic lesion detection.
- an object of the present invention is to improve the detection accuracy of an ultrasonic inspection system that automatically detects lesions from a moving image consisting of a plurality of temporally continuous frames output from an ultrasonic inspection apparatus as the ultrasonic probe is moved.
- the ultrasonic image diagnosis support system or method shown in FIG. 1 includes a learning phase (S10) and an examination phase (S20 to S24).
- the diagnostic part is the diagnostic tissue (observation site) and its surroundings, and "output" includes display.
- diagnosis tissue observation site
- the diagnostic tissue is described as mammary gland tissue and the lesion as a tumor by way of example, but the combination is not limited to these.
- in the learning phase (S10), images showing previously cut-out tumors and other images are used as input, and a model that classifies tumor versus non-tumor is created from those images (patch images) using a deep learning method.
- in the examination phase, candidate tumor regions are detected by comparing the model obtained in the learning phase with the image of each frame of the moving image (S20, S21); the mammary gland tissue is then automatically extracted and tumor candidate regions outside the mammary gland are removed (S22); tumor candidate regions that occur only sporadically are further removed using the continuity of frames (S23); and the finally remaining tumor candidate regions are output as the detection result (S24).
- An ultrasonic diagnostic imaging support method comprising a learning phase (S10) and an examination phase (S20 to S24): in the learning phase (S10), images showing previously cut-out lesions and other images are input as patch images, and a model that classifies the lesion versus other tissue is created from the patch images using a deep learning method.
- in the examination phase, a moving image consisting of a sequence of frames of a diagnostic part including the diagnostic tissue is acquired by operating the ultrasonic probe of an ultrasonic inspection apparatus (S20); the model obtained in the learning phase is compared with each frame image to detect lesion candidate regions in the diagnostic part (S21); the diagnostic tissue region is automatically extracted from the frame images, and regions other than it, together with the lesion candidate regions they contain, are removed (S22); lesion candidate regions occurring only sporadically in the diagnostic tissue are removed using the continuity of the frame sequence (S23); and only the diagnostic tissue region with the finally remaining lesion candidate regions marked is output as the detection result (S24).
- the detection of lesion candidate regions (S21) may create a multi-resolution image consisting of images at a plurality of resolutions from each frame (S210), perform lesion candidate detection on each layer of the multi-resolution image (S211), and convert the coordinates of the abnormal regions in each layer back to the original resolution, integrating the results across the resolutions (S212).
- An ultrasonic diagnostic imaging support system that uses an ultrasonic moving image (hereinafter simply a moving image) obtained by operating the ultrasonic probe of an ultrasonic inspection apparatus; it consists of a learning phase (S10) and an examination phase (S20 to S24); in the learning phase (S10), images showing previously cut-out lesions and other images are input as patch images, and a model that classifies the lesion versus other tissue is created from them by deep learning.
- in the examination phase, a moving image consisting of a sequence of frames of a diagnostic part including the diagnostic tissue is acquired by operating the ultrasonic probe (S20); the model obtained in the learning phase is compared with each frame image to detect lesion candidate regions of the lesion in the diagnostic part (S21); the diagnostic tissue region is automatically extracted from the frame images, and regions other than it, together with the lesion candidate regions they contain, are removed (S22); lesion candidate regions occurring only sporadically in the diagnostic tissue are removed using the continuity of the frame sequence (S23); and only the diagnostic tissue region with the finally remaining lesion candidate regions marked is output as the detection result (S24).
- the detection of lesion candidate regions (S21) may likewise create a multi-resolution image from each frame (S210), perform detection on each layer (S211), and convert the abnormal-region coordinates of each layer back to the original resolution, integrating the results across resolutions (S212).
- the combination of the automatic extraction of the diagnostic tissue and the improved suppression of lesion overdetection according to the present invention improves the accuracy of the ultrasonic image diagnosis support method and system; as a result, more ultrasonic image diagnosis support can be performed accurately in a short period of time.
- a tumor patch image is created as an abnormal image for learning by perturbing the tumor's center-of-gravity position within a range in which the tumor does not protrude beyond the patch image.
- normal patch images are created at an arbitrary size, centered on positions specified by random numbers, from frames of the breast ultrasound moving image in which no tumor is depicted.
- Model learning: to compute a model that classifies normal versus abnormal (tumor), the proposed system uses a machine learning method.
- concretely, a deep learning method is used: a neural network trained with a Deep Belief Network (DBN) or a Stacked Denoising Auto-Encoder (SDAE).
- DBN Deep Belief Network
- SDAE Neural Network by Stacked Denoising Auto Encoder
- CNN Convolutional Neural Network
- SVM Support Vector Machine
- logistic regression analysis
- linear discriminant analysis
- random forest method
- Boosting method (AdaBoost, LogitBoost, etc.)
- AdaBoost Boosting method
- the neural network shown in FIG. 3 consists of units and the links connecting them; the model that classifies normal versus abnormal is computed by appropriately and automatically adjusting (updating) the link weights (parameters) based on the training patch images.
- weights are updated sequentially using optimization methods such as stochastic gradient descent, momentum method, Adam, AdaGrad, and AdaDelta.
- normal regions (regions other than tumors) of a breast ultrasound image can resemble a tumor, and such a normal region may be erroneously judged as a tumor (overdetection).
- therefore, normal samples that are easily overdetected are preferentially selected, and the model is updated using those images.
- Pre-training: in the deep learning method, training is performed in two stages, pre-training and fine-tuning.
- in pre-training, the weights are set by unsupervised learning; the weights obtained are then used as initial values and updated by an ordinary learning method such as error back-propagation (fine-tuning).
- DBN Deep Belief Network
- SDAE Stacked Denoising Auto Encoder
- an ultrasonic moving image of the subject's observation site and its surroundings for a predetermined time is acquired (S20).
- a breast ultrasound moving image of depth-direction cut planes, obtained by scanning the ultrasound probe in one direction along the surface of the examined part commonly called the breast, consists of multiple frame images arranged in order of occurrence in time; this image is called a breast ultrasound image.
- FIG. 13 shows a flowchart of the tumor detection process (S21).
- the deep learning method is applied to the local region of each frame of the input moving image, and the region determined to be abnormal (tumor) is set as a tumor candidate region.
- a multi-resolution image composed of a plurality of resolution images is created from the input image (moving image frame) (S210).
- the tumor candidate region detection process is performed on the images of each layer of the multi-resolution image (S211).
- the coordinates of the abnormal area in the image of each hierarchy calculated above are converted into the coordinates of the original resolution, and the detection results at each resolution are integrated (S212). Each process is described in detail below.
- FIG. 4 shows an example of a multi-resolution image when the magnification is set to 1 ⁇ , 0.75 ⁇ , and 0.5 ⁇ .
- the Bicubic method was used as the image enlargement / reduction algorithm.
- the Nearest Neighbour method or Bilinear method can be selected as an image enlargement / reduction algorithm.
- there are two methods for detecting tumor candidate regions from each layer: the first prepares a rectangular region (search window) set in advance, raster-scans it over the image of each layer of the multi-resolution image, and judges each region normal or abnormal (S211a); the second divides each layer's image into a plurality of regions with the superpixel method and judges each region normal or abnormal (S211b).
- HOG Histograms of Oriented Gradients
- LBP Local Binary Pattern
- GLAC Gradient Local AutoCorrelation
- NLAC Normal Local AutoCorrelations
- HLAC Higher-order Local AutoCorrelation
- Gabor feature quantities can be used.
- while shifting the search window, the "feature vector acquisition" and "normal/abnormal judgment" steps are repeated to scan the entire region of the input image.
- the search window is moved dx pixels horizontally and dy pixels vertically, and the two steps above are repeated.
- the coordinates of each search window and the label at each search window position are accumulated.
- the movement widths dx and dy of the search window are made smaller than the window size h x w (for example, dx half of w and dy half of h) so that the window overlaps regions already judged; a single tumor is then judged by a plurality of mutually shifted search windows, which can be expected to improve tumor detection accuracy.
- the regions judged abnormal become tumor candidate regions, and the upper-left coordinates (x0, y0) and lower-right coordinates (x1, y1) of each region are acquired.
- a superpixel method is applied to the image to divide the image into a plurality of non-overlapping regions (superpixels).
- a feature vector ((h x w)-dimensional) is obtained by rearranging into one row the pixel values of the h x w rectangular region centered on the superpixel's center of gravity.
- the "feature vector acquisition" and "normal/abnormal judgment" steps are applied to all the superpixels.
- the coordinates of each judged region and its label are accumulated.
- the regions judged abnormal become tumor candidate regions, and the upper-left coordinates (x0, y0) and lower-right coordinates (x1, y1) of each region are acquired.
- FIG. 14 shows a flowchart of the overdetection suppression process for regions other than the mammary gland tissue (S22); since tumors arise in the mammary gland tissue, any region detected by the tumor detection process (S21) that lies in tissue other than the mammary gland is an overdetection.
- mammary gland tissue is automatically extracted from the breast ultrasound image, and tumor candidate areas other than the mammary gland are removed (S22).
- two automatic mammary-tissue extraction methods are available for S22: "automatic extraction of the mammary gland tissue by Otsu's binarization and graph cuts (S221)" and "automatic extraction of the mammary gland tissue by ZCA whitening and a CRF (S222)"; either method can be selected.
- the breast ultrasound image is divided into strip-shaped regions of width w, and Otsu's binarization is applied to each strip-shaped region.
- the mean u and variance σ of the original image's luminance values are acquired over the regions above the threshold automatically set by Otsu's binarization (regions judged white).
- the breast ultrasound image is divided into M local regions of length h and width w.
- $V$ represents the set of $M$ local regions.
- $N_i$ represents the regions adjacent to local region $i$ (in this system there are 8 adjacent regions).
- $\theta_u(y_i)$ is called the data term.
- $\theta_p(y_i, y_j)$ is called the smoothing term; both are defined as follows in this system.
- the first step is ZCA whitening, which reduces the correlation between depth and luminance values in order to reduce the effect of depth on the luminance values.
- the depth is the distance from the top edge of the image to the pixel; since ultrasound attenuates as it travels deeper into the tissue from the skin and the reflected wave weakens, the echo level (brightness on the ultrasound image) decreases with depth, and the whitening mitigates this phenomenon.
- next, luminance values in an arbitrary range are enhanced using a piecewise linear function.
- ZCA whitening is a process of making the correlation between variables close to zero in order to eliminate the bias in a plurality of variables having strong correlations.
- for each pixel $i = 1, \dots, L$, a two-dimensional vector $t_i = (d_i, v_i)^T$ of depth $d_i$ and luminance value $v_i$ is formed, and ZCA whitening is applied to these vectors ($T$ denotes transposition).
- in ZCA whitening, principal component analysis is first applied to the $L$ vectors $[t_1, \dots, t_L]$ to compute a matrix $U$ whose columns are the eigenvectors and a diagonal matrix $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2)$ of the corresponding eigenvalues (where $\lambda_1 > \lambda_2$).
- the ZCA whitening transformation matrix $W = U \Lambda^{-1/2} U^T$ is then calculated, and the whitened vector is $\bar{t}_i = W t_i$; its luminance component $\bar{v}_i$ is used as the luminance value after applying ZCA whitening.
- the luminance value $z_i$ after linear conversion is calculated by transforming $\bar{v}_i$ with the piecewise linear function above.
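A sketch of this whitening over the (depth, luminance) pairs, assuming they are stacked as rows of a NumPy array; `eps` is a small stabilizer added by assumption:

```python
import numpy as np

def zca_whiten(t, eps=1e-6):
    """t: (L, 2) array of (depth d_i, luminance v_i) vectors.
    Returns the whitened vectors; column 1 is the luminance after ZCA."""
    mu = t.mean(axis=0)
    c = np.cov((t - mu).T)                      # 2x2 covariance of depth/luminance
    lam, U = np.linalg.eigh(c)                  # eigenvalues and eigenvectors
    W = U @ np.diag(1.0 / np.sqrt(lam + eps)) @ U.T  # ZCA transform U L^{-1/2} U^T
    return (t - mu) @ W.T
```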
- Luminance histogram acquisition (S222b): to capture the brightness within the mammary tissue, a histogram of luminance values is used as the feature vector; the breast ultrasound image is divided into M rectangular regions (patch images), and a luminance histogram is calculated from each patch image; since the image has been converted to G gradations, a G-dimensional feature vector is obtained from each of the M patch images.
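A sketch of this histogram feature extraction; the patch grid size and gradation count G are free parameters:

```python
import numpy as np

def patch_histograms(image, ph=32, pw=32, G=64):
    """Divide the image into rectangular patches and return one G-dimensional
    luminance histogram (feature vector) per patch."""
    feats = []
    for y in range(0, image.shape[0] - ph + 1, ph):
        for x in range(0, image.shape[1] - pw + 1, pw):
            patch = image[y:y + ph, x:x + pw]
            hist, _ = np.histogram(patch, bins=G, range=(0, G))
            feats.append(hist)
    return np.array(feats)                      # M x G feature matrix
```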
- a conditional random field (CRF), which can estimate the tissue (label) depicted in each patch image, is used, and the mammary gland likelihood is calculated for each region.
- the CRF is defined by the following probability model, where $E(X, Y, w)$ is called the energy function.
- $V$ represents the set of patch images, and $N_i$ represents the $n$ neighbors of patch image $i$ (the number of neighbors $n$ can be set arbitrarily; usually 8 neighbors are used).
- $\theta_u$ is called the data term and $\theta_p$ the pair-wise term (smoothing term); $\theta_u$ is defined as follows.
- $w = [w_u, w_p]$ are the learning parameters of the CRF; the estimate $\hat{w}$ is calculated by solving the following equation, which can be handled with optimization methods such as stochastic gradient descent, the momentum method, Adam, AdaGrad, or AdaDelta.
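The probability model and the term definitions appear as images in the original; a standard CRF form consistent with the named terms (a reconstruction, not the patent's exact equations) is:

```latex
P(Y \mid X, w) = \frac{1}{Z(X, w)} \exp\bigl(-E(X, Y, w)\bigr), \quad
E(X, Y, w) = \sum_{i \in V} w_u\,\theta_u(y_i, X) + \sum_{i \in V}\sum_{j \in N_i} w_p\,\theta_p(y_i, y_j, X)
```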
- a larger value means a higher degree of mammary-gland likeness; regions with a value of 0.5 or more are judged to be mammary gland tissue.
- since a shadow caused by speckle noise has no volume, it is not depicted at the same position across multiple consecutive frames; a candidate detected only as a single, isolated tumor candidate region should therefore be regarded as an overdetection.
- tumor candidate regions that occur at the same position in consecutive frames are treated as the final tumor regions.
- to remove such overdetections, the position information of tumor candidate regions in a plurality of consecutive frames (reference frames) captured before the frame under observation (frame of interest) is used.
- overdetections among the candidate regions in the consecutive frames are removed, and only tumor candidate regions detected repeatedly in both the spatial and temporal directions are finally kept as the tumor regions.
- the number of reference frames used for overdetection suppression can be set arbitrarily; the more reference frames are used, the stronger the suppression.
- for example, if only two frames including the frame of interest are used, the ultrasound images depicted in them are similar, overdetections are likely to occur at the same position, and may not be removed.
- if more than two frames are used (for example, five frames including the frame of interest), the shape (pattern) of the depicted tissue varies across frames, so the chance that an overdetection occurs at the same position in all frames is low, and overdetections can be removed appropriately.
- increasing the number of reference frames strengthens overdetection removal but risks erroneously removing true tumor candidates: with five reference frames, a tumor depicted in only two of them yields only two detections of the candidate region and may be removed.
- the first contrivance is that, in (S211a) "detection of tumor candidate regions by sliding window", the search windows are moved so as to overlap each other, so that a single tumor is detected as a plurality of candidate regions at the same position as far as possible.
- mean shift clustering is performed based on the obtained center coordinates and frame numbers of the tumor candidate regions, and the candidates are grouped.
- the number of elements of each group is calculated, and all tumor candidate regions belonging to groups whose element count is at or below a preset threshold are removed.
- the remaining tumor candidate regions are finally confirmed as tumor regions.
- let $T$ be the currently displayed frame number and $s$ the number of frames considered.
- the center coordinates $(X_c, Y_c)$ and frame number $T_c$ of each tumor candidate region $c$ in the $s$ frames from frame $T-s+1$ to frame $T$ (FIG. 8) are acquired.
- the center coordinates are calculated, as shown in FIG. 9, from the upper-left coordinates (x0, y0) and lower-right coordinates (x1, y1) of the tumor candidate region and the frame number $T$.
- mean shift clustering is then executed with the center coordinates and frame numbers of all tumor candidate regions as input vectors, and $K$ group numbers $\{g_1, \dots, g_K\}$ are assigned to the input vectors.
- the number K of groups is automatically determined by the mean shift clustering algorithm.
- a clustering method that can automatically adjust the number of clusters, such as the x-means method or the Infinite Gaussian Mixture Model (IGMM), can also be applied.
- IGMM Infinite Gaussian Mixture Model
- the number of elements in each of the $K$ groups $\{g_1, \dots, g_K\}$ calculated above is acquired; if the element count is at or below a preset threshold $Th_n$, all abnormal regions belonging to that group are deleted. In the present invention, the threshold $Th_n$ is set to 5.
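A sketch of this grouping with scikit-learn's MeanShift over $(X_c, Y_c, T_c)$ vectors; the threshold $Th_n = 5$ follows the text, while the bandwidth is a hypothetical setting:

```python
import numpy as np
from sklearn.cluster import MeanShift

def suppress_sporadic(candidates, th_n=5, bandwidth=30.0):
    """candidates: array of (Xc, Yc, Tc) center coordinates and frame numbers.
    Groups them by mean shift and drops groups with <= th_n members."""
    X = np.asarray(candidates, dtype=float)
    labels = MeanShift(bandwidth=bandwidth).fit_predict(X)  # K found automatically
    keep = []
    for g in np.unique(labels):
        members = np.where(labels == g)[0]
        if len(members) > th_n:                 # sporadic groups are removed
            keep.extend(members.tolist())
    return X[keep]
```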
- the network based on the deep learning method had five layers (with 2500, 900, 625, 225, and 2 units per layer, from the input layer), the search window was 50 x 50 pixels, and the window movement width was 25 pixels in both the x and y directions.
- the multi-resolution image had three layers (1x, 0.75x, 0.5x); to verify the effect of priority learning, the overdetection suppression of steps S22 and S23 was not applied in the examination phase, and only the tumor candidate regions of step S21 were used.
- FIG. 10 shows the average number of overdetections per frame and the detection rate; introducing priority learning reduces overdetections while further raising the detection rate, so priority learning is considered effective for improving detection accuracy.
- the network based on the deep learning method had four layers (625, 500, 500, and 2 units per layer from the input layer), the search window was 50 x 50 pixels, and the window movement width was 25 pixels in both directions.
- the 50 x 50 image in the search window was reduced to 25 x 25 by the bicubic method before being input to the deep learning network.
- the multi-resolution image had three layers (1x, 0.75x, 0.5x).
- to verify the effect of automatic mammary-tissue extraction, priority learning and the overdetection suppression of step S23 were not applied; only the tumor candidate detection of step S21 and the non-mammary overdetection suppression of step S22 were used.
- for the automatic extraction of the mammary gland tissue, the method of step S221 (Otsu's binarization and graph cuts) was adopted.
- FIG. 11 compares the numbers of overdetections in the experimental results; automatic extraction of the mammary gland tissue is effective in reducing overdetections in tissue other than the mammary gland (non-mammary regions).
- the network based on the deep learning method had four layers (625, 500, 500, and 2 units per layer from the input layer), the search window was 50 x 50 pixels, and the window movement width was 25 pixels in both directions.
- the 50 x 50 image in the search window was reduced to 25 x 25 by the bicubic method before being input to the deep learning network.
- the multi-resolution image had three layers (1x, 0.75x, 0.5x).
- to verify the effect of the frame-continuity overdetection suppression, priority learning and the non-mammary overdetection suppression of step S22 were not applied; only the tumor candidate detection of step S21 and the frame-continuity overdetection suppression of step S23 were used.
- FIG. 12 compares the average number of overdetections per frame; applying the frame-continuity overdetection suppression reduces overdetections.
- Reference numerals: 1 abnormality judgment area (lesion candidate area); 2 abnormality judgment area not included in the observation site region; 3 observation site region; 4 area that is not the observation site region; 5 abnormality judgment area that is in the observation site region but is not depicted continuously across frames; 6 patch image; 7 learning model DB; 8 observation site and its surroundings (diagnostic part).
Landscapes
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Pathology (AREA)
- Radiology & Medical Imaging (AREA)
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Heart & Thoracic Surgery (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Ultrasonic Diagnosis Equipment (AREA)
- Image Analysis (AREA)
Abstract
Description
The present invention relates to an ultrasonic image diagnosis support method, system, and apparatus.
For lesion detection in breast ultrasound images, there is B-CAD, developed by the Canadian company Medipattern (registered trademark). B-CAD targets lesions (tumors) that appear as dark masses; the examiner (user) specifies the approximate location of the lesion, the contour of the lesion is automatically extracted from that position information, and the malignancy is calculated from its shape and size.
Focusing on the fact that lesions appear dark, a method has also been proposed that automatically detects lesions using the gradient of luminance values.
In an ultrasonic inspection apparatus, the measurement result is displayed as a moving image. For example, in breast ultrasonography a tumor is depicted as a dark, mass-like shadow, so it can be detected even if each frame of the moving image is treated as an independent still image.
Therefore, a tumor can also be found by the abnormality detection technique for still images (pathological images) disclosed in Patent Document 1. However, the shape of a non-mass lesion is not clear, and changes in the texture of the mammary gland tissue must be observed; the approach of Patent Document 1 therefore cannot handle such lesions, and moving-image pattern recognition, such as measuring the correlation between preceding and following frames, is essential.
Patent Document 2 discloses a technique in which a position sensor is attached to the ultrasonic probe, image information and position information are combined to construct three-dimensional data of the internal structure, and whether an image is a tumor is determined based on the ratio of the tumor's surface area to its volume.
Patent Document 3 discloses a technique for estimating the position by analyzing the acquired images, instead of attaching a position sensor to the ultrasonic probe.
However, B-CAD from Medipattern (registered trademark) and the methods using luminance-value gradients assume that a lesion appears in the input image, so they inevitably overdetect normal regions when applied to images that contain no lesion. They therefore cannot be applied when lesion-free images are targeted, and are not practical for breast ultrasonography.
The technique of Patent Document 2 targets only masses with clear shapes, not non-mass lesions with unclear shapes.
In the technique of Patent Document 3, the estimated position information is displayed on screen as a body mark and is used only to help the doctor grasp the examination site; it is not used for automatic lesion detection.
An object of the present invention is to improve the detection accuracy of an ultrasonic inspection system that automatically detects lesions from a moving image consisting of a sequence of temporally continuous frames output from an ultrasonic inspection apparatus as the ultrasonic probe is moved.
(Overview of the proposed system)
The ultrasonic image diagnosis support system or method shown in FIG. 1 consists of a learning phase (S10) and an examination phase (S20 to S24).
Once the learning phase has been run and the learning model DB generated, the examination phase alone can then be repeated any number of times. The diagnostic part is the diagnostic tissue (observation site) and its surroundings, and "output" includes display. In the following, the diagnostic tissue (observation site) is described as mammary gland tissue and the lesion as a tumor (mass) by way of example, but the combination is not limited to these.
In the learning phase (S10), images showing previously cut-out tumors and other images are used as input, and a model that classifies tumor versus non-tumor is created from these images (patch images) using a deep learning method. In the examination phase, candidate tumor regions are detected by comparing the model obtained in the learning phase with the image of each frame of the moving image (S20, S21). The mammary gland tissue is then automatically extracted, and tumor candidate regions outside the mammary gland are removed (S22). Further, tumor candidate regions that occur only sporadically are removed using the continuity of frames (S23), and the finally remaining tumor candidate regions are output as the detection result (S24).
(1) An ultrasonic image diagnosis support method consisting of a learning phase (S10) and an examination phase (S20 to S24), wherein in the learning phase (S10): images showing previously cut-out lesions and other images are input as patch images; and a model that classifies the lesion versus other tissue is created from the patch images using a deep learning method; and in the examination phase: a moving image consisting of a sequence of frames of a diagnostic part including the diagnostic tissue is acquired by operating the ultrasonic probe of an ultrasonic inspection apparatus (S20); the model obtained in the learning phase is compared with each frame image of the moving image to detect lesion candidate regions of the frame image in the diagnostic part (S21); the diagnostic tissue region in the diagnostic part is automatically extracted from the frame images, and regions other than the diagnostic tissue region, together with the lesion candidate regions contained in them, are removed (S22); lesion candidate regions that occur only sporadically in the diagnostic tissue are removed using the continuity of the frame sequence (S23); and only the diagnostic tissue region with the finally remaining lesion candidate regions marked is output as the detection result (S24).
(2) The ultrasonic image diagnosis support method according to (1), wherein the diagnostic tissue is mammary gland tissue and the lesion is a mass (tumor) therein.
(3) The ultrasonic image diagnosis support method according to (2), wherein the detection of lesion candidate regions (S21) is performed by: creating a multi-resolution image consisting of images at a plurality of resolutions from each frame of the moving image (S210); performing the lesion candidate region detection on the image of each layer of the multi-resolution image (S211); and converting the coordinates of the abnormal regions in each layer's image to the coordinates of the original resolution and integrating the results at the plurality of resolutions (S212).
(4) The ultrasonic image diagnosis support method according to (3), wherein the lesion candidate region detection (S211) is performed with a sliding window (S211a).
(5) The ultrasonic image diagnosis support method according to (3), wherein the lesion candidate region detection (S211) is performed with superpixels (S211b).
(6) The ultrasonic image diagnosis support method according to (4) or (5), wherein the removal of lesion candidate regions (S22) is performed by automatic extraction of the mammary gland tissue using Otsu's binarization and graph cuts.
(7) The ultrasonic image diagnosis support method according to (4) or (5), wherein the removal of lesion candidate regions (S22) is performed by automatic extraction of the mammary gland tissue using a CRF (Conditional Random Field) method.
(8) An ultrasonic image diagnosis support system that uses an ultrasonic moving image (hereinafter simply a moving image) obtained by operating the ultrasonic probe of an ultrasonic inspection apparatus, the system consisting of a learning phase (S10) and an examination phase (S20 to S24), wherein in the learning phase (S10): images showing previously cut-out lesions and other images are input as patch images; and a model that classifies the lesion versus other tissue is created from the patch images using a deep learning method; and in the examination phase (S20 to S24): a moving image consisting of a sequence of frames of a diagnostic part including the diagnostic tissue is acquired by operating the ultrasonic probe of the ultrasonic inspection apparatus (S20); the model obtained in the learning phase is compared with each frame image of the moving image to detect lesion candidate regions of the lesion in the diagnostic part of the frame image (S21); the diagnostic tissue region in the diagnostic part is automatically extracted from the frame images, and regions other than the diagnostic tissue region, together with the lesion candidate regions contained in them, are removed (S22); lesion candidate regions that occur only sporadically in the diagnostic tissue are removed using the continuity of the frame sequence (S23); and only the diagnostic tissue region with the finally remaining lesion candidate regions marked is output as the detection result (S24).
(9) The ultrasonic image diagnosis support system according to (8), wherein the diagnostic tissue is mammary gland tissue and the lesion is a mass (tumor) therein.
(10) The ultrasonic image diagnosis support system according to (9), wherein the detection of lesion candidate regions (S21) is performed by: creating a multi-resolution image consisting of images at a plurality of resolutions from each frame of the moving image (S210); performing the lesion candidate region detection on the image of each layer of the multi-resolution image (S211); and converting the coordinates of the abnormal regions in each layer's image to the coordinates of the original resolution and integrating the results at the plurality of resolutions (S212).
(11) The ultrasonic image diagnosis support system according to (10), wherein the lesion candidate region detection (S211) is performed with a sliding window (S211a).
(12) The ultrasonic image diagnosis support system according to (10), wherein the lesion candidate region detection (S211) is performed with superpixels (S211b).
(13) The ultrasonic image diagnosis support system according to (11) or (12), wherein the removal of lesion candidate regions (S22) is performed by automatic extraction of the mammary gland tissue using Otsu's binarization and graph cuts.
(14) The ultrasonic image diagnosis support system according to (11) or (12), wherein the removal of lesion candidate regions (S22) is performed by automatic extraction of the mammary gland tissue using a CRF.
The combination of the automatic extraction of the diagnostic tissue and the improved suppression of lesion overdetection according to the present invention improves the accuracy of the ultrasonic image diagnosis support method and system. As a result, more ultrasonic image diagnosis support can be performed accurately in a short period of time.
The processing in the learning phase and the examination phase is described in detail below, beginning with the learning phase (S10).
(Generation of learning patch images)
In many machine learning methods, including deep learning, it is necessary to train with both images of the detection target (tumors, in the present invention) and other images. As shown in FIG. 2, the proposed system uses images cut out as rectangles (patch images) as training images.
To learn tumors properly, images in which most of the tumor is depicted must be input. Therefore, tumor patch images are created as abnormal training images by perturbing the tumor's center-of-gravity position within a range in which the tumor does not protrude beyond the patch image. Normal patch images are created at an arbitrary size from frames of the breast ultrasound moving image in which no tumor is depicted, by picking a position with random numbers and centering the patch on it.
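As an illustration, the following is a minimal sketch of this patch-generation step in Python, assuming grayscale frames held as NumPy arrays and a known tumor bounding box; the patch size, number of shifts, and perturbation range are hypothetical parameters, not values from the patent.

```python
import numpy as np

def tumor_patches(frame, bbox, patch=50, n_shifts=8, max_shift=10, rng=np.random):
    """Cut abnormal training patches around a tumor, perturbing its center
    within a range that keeps the tumor inside the patch."""
    x0, y0, x1, y1 = bbox                      # tumor bounding box in the frame
    cx, cy = (x0 + x1) // 2, (y0 + y1) // 2    # tumor center of gravity
    half = patch // 2
    out = []
    for _ in range(n_shifts):
        dx = rng.randint(-max_shift, max_shift + 1)
        dy = rng.randint(-max_shift, max_shift + 1)
        px, py = cx + dx, cy + dy
        if 0 <= px - half and px + half <= frame.shape[1] and \
           0 <= py - half and py + half <= frame.shape[0]:
            out.append(frame[py - half:py + half, px - half:px + half])
    return out

def normal_patches(frame, patch=50, n=16, rng=np.random):
    """Cut normal patches at random positions from a tumor-free frame."""
    half = patch // 2
    h, w = frame.shape
    xs = rng.randint(half, w - half, size=n)
    ys = rng.randint(half, h - half, size=n)
    return [frame[y - half:y + half, x - half:x + half] for x, y in zip(xs, ys)]
```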
(Model learning)
To compute a model that classifies normal versus abnormal (tumor), the proposed system uses a machine learning method: concretely, a deep learning method, namely a neural network trained with a Deep Belief Network (DBN) or a Stacked Denoising Auto-Encoder (SDAE). As other machine learning methods, sequential learning by stochastic gradient descent can be applied, and a Convolutional Neural Network (CNN), Support Vector Machine (SVM), logistic regression analysis, linear discriminant analysis, the random forest method, or boosting methods (AdaBoost, LogitBoost, etc.) may be used.
The neural network shown in FIG. 3 consists of units and the links connecting them; the model that classifies normal versus abnormal is computed by appropriately and automatically adjusting (updating) the link weights (parameters) based on the training patch images.
(Learning with an unbalanced number of samples)
In training the deep learning method, the weights are updated sequentially using optimization methods such as stochastic gradient descent, the momentum method, Adam, AdaGrad, or AdaDelta. In general, the learning effect is reduced when the numbers of normal and abnormal patch images are extremely unbalanced: for example, if abnormal samples are very few compared with normal samples, the model may learn only the normal samples and become unable to detect abnormalities.
To avoid the effect of this sample imbalance, the learning phase of the proposed system performs the following at every parameter update: (1) the same number of normal and abnormal samples are randomly selected from the training data and the network weights are automatically adjusted; (2) at the next parameter update, the normal and abnormal samples are randomly swapped.
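A minimal sketch of this balanced re-sampling, assuming the normal and abnormal patches are stacked in NumPy arrays (all names are illustrative):

```python
import numpy as np

def balanced_batch(normal, abnormal, batch_size, rng=np.random):
    """Draw equal numbers of normal and abnormal patches for one update;
    a fresh random draw at each update swaps the samples in and out."""
    k = batch_size // 2
    ni = rng.choice(len(normal), size=k, replace=False)
    ai = rng.choice(len(abnormal), size=k, replace=len(abnormal) < k)
    x = np.concatenate([normal[ni], abnormal[ai]])
    y = np.concatenate([np.zeros(k), np.ones(k)])
    p = rng.permutation(len(y))                 # shuffle within the minibatch
    return x[p], y[p]
```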
(Priority learning)
Normal regions (regions other than tumors) of a breast ultrasound image can resemble a tumor, and such normal regions may be erroneously judged as tumors (overdetection). Therefore, instead of selecting the normal samples randomly as described above, normal samples that are easily overdetected are preferentially selected, and the model is updated using those images.
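One way to realize this priority selection is hard-negative mining: score the normal pool with the current model and oversample the normals it rates most tumor-like. A sketch under that assumption; `predict_proba` is a placeholder for the classifier's tumor-probability output, not an interface defined in the patent:

```python
import numpy as np

def priority_normals(model, normal, k, rng=np.random):
    """Preferentially pick normal patches that the current model is most
    likely to overdetect (highest predicted tumor probability)."""
    scores = model.predict_proba(normal)        # tumor-likeness per normal patch
    hardest = np.argsort(scores)[::-1][:2 * k]  # candidate pool of hard negatives
    picked = rng.choice(hardest, size=k, replace=False)
    return normal[picked]
```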
(Pre-training)
In the deep learning method, training is performed in two stages: pre-training and fine-tuning. In pre-training, the weights are set by unsupervised learning; the weights obtained are then used as initial values and updated by an ordinary learning method such as error back-propagation (fine-tuning). There are two kinds of pre-training, Deep Belief Network (DBN) and Stacked Denoising Auto-Encoder (SDAE); SDAE is effective when priority learning is adopted, and DBN is effective when it is not.
The processing in the examination phase (S20 to S24) is described in detail below.
(Frame image acquisition)
First, an ultrasonic moving image of the subject's observation site and its surroundings is acquired for a predetermined time (S20). The breast ultrasound moving image of depth-direction cut planes, obtained by scanning the ultrasound probe in one direction along the surface of the examined part commonly called the breast, consists of multiple frame images arranged in order of occurrence in time. This image is called a breast ultrasound image.
(Tumor detection)
FIG. 13 shows a flowchart of the tumor detection process (S21). In this process, the deep learning method is applied to local regions of each frame of the input moving image, and regions judged abnormal (tumor) become tumor candidate regions.
The detailed procedure of the tumor detection process is as follows. A multi-resolution image consisting of images at a plurality of resolutions is created from the input image (a frame of the moving image) (S210). Tumor candidate region detection is performed on the image of each layer of the multi-resolution image (S211). The coordinates of the abnormal regions computed in each layer are converted to the coordinates of the original resolution, and the detection results at the different resolutions are integrated (S212). Each step is described in detail below.
(Creating the multi-resolution image) (S210)
A multi-resolution image with a plurality of resolutions is created by reducing (or enlarging) the input image: concretely, the input image is converted to arbitrary magnifications $a_1, \dots, a_k$, creating a multi-resolution image of k layers. FIG. 4 shows an example with the magnifications set to 1x, 0.75x, and 0.5x. The bicubic method was used as the enlargement/reduction algorithm; the nearest-neighbour or bilinear method can also be selected.
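A sketch of this pyramid construction with OpenCV, using the 1x / 0.75x / 0.5x magnifications of the example in FIG. 4 and bicubic interpolation as stated (nearest-neighbour or bilinear are drop-in alternatives):

```python
import cv2

def multi_resolution(image, scales=(1.0, 0.75, 0.5),
                     interp=cv2.INTER_CUBIC):  # or INTER_NEAREST / INTER_LINEAR
    """Build the k-layer multi-resolution image by rescaling the input frame."""
    layers = []
    for a in scales:
        h, w = image.shape[:2]
        layers.append((a, cv2.resize(image, (int(w * a), int(h * a)),
                                     interpolation=interp)))
    return layers  # list of (magnification, image) pairs
```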
(Detecting tumor candidate regions in each layer) (S211)
There are two methods for detecting tumor candidate regions. The first prepares a rectangular region (search window) set in advance, raster-scans it over the image of each layer of the multi-resolution image, and judges each region normal or abnormal (S211a). The second divides each layer's image into a plurality of regions with the superpixel method and judges each region normal or abnormal (S211b).
(Detecting tumor candidate regions with a sliding window) (S211a)
In sliding-window detection, normal/abnormal estimation is performed on each layer of the multi-resolution image created above by the following procedure.
(Feature vector acquisition) The region inside a search window of height h (pixels) and width w (pixels) is cut out as a patch image, and its pixel values are rearranged into one row and used as an (h x w)-dimensional feature vector. Besides pixel values, HOG (Histograms of Oriented Gradients), LBP (Local Binary Pattern), GLAC (Gradient Local AutoCorrelation), NLAC (Normal Local AutoCorrelations), HLAC (Higher-order Local AutoCorrelation), GLCM (Gray Level Correlation Matrix) based features, or Gabor features can be used.
(Normal/abnormal judgment)
Using the model (weights) computed in "Model learning" above and the feature vector obtained above, the label (normal or abnormal) of the feature vector is estimated.
The "feature vector acquisition" and "normal/abnormal judgment" steps are repeated while shifting the search window, scanning the entire region of the input image: the search window is moved dx pixels horizontally and dy pixels vertically, and the coordinates of each window and the label at each window position are accumulated. Here, the movement widths dx and dy are made smaller than the window size h x w (for example, dx half of w and dy half of h) so that the window overlaps regions already judged. As a result, a single tumor is judged by a plurality of mutually shifted search windows, which can be expected to improve tumor detection accuracy.
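A sketch of the raster scan with half-window strides as described; `classify` stands in for the learned normal/abnormal model:

```python
def sliding_window_scan(layer, classify, h=50, w=50):
    """Raster-scan a search window over one pyramid layer; strides of half
    the window size give the overlap described above."""
    dy, dx = h // 2, w // 2
    hits = []
    for y in range(0, layer.shape[0] - h + 1, dy):
        for x in range(0, layer.shape[1] - w + 1, dx):
            patch = layer[y:y + h, x:x + w]
            feat = patch.reshape(-1)            # pixel values as one row (h*w dims)
            if classify(feat) == "abnormal":
                hits.append((x, y, x + w, y + h))  # (x0, y0, x1, y1)
    return hits
```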
(Confirming tumor candidate regions)
Regions judged abnormal above become tumor candidate regions, and the upper-left coordinates (x0, y0) and lower-right coordinates (x1, y1) of each region are acquired.
(Detecting tumor candidate regions with superpixels) (S211b)
In superpixel-based detection, normal/abnormal estimation is performed on each layer of the multi-resolution image created above by the following procedure.
(Feature vector acquisition) The superpixel method is applied to the image to divide it into a plurality of non-overlapping regions (superpixels). An (h x w)-dimensional feature vector is obtained by rearranging into one row the pixel values of the h x w rectangular region centered on each superpixel's center of gravity. As before, HOG, LBP, GLAC, NLAC, HLAC, GLCM-based, or Gabor features can be used instead of pixel values.
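A sketch of the superpixel step using scikit-image's SLIC as a stand-in for whichever superpixel method is intended (recent scikit-image assumed; the segment count is a hypothetical setting):

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_features(layer, h=50, w=50, n_segments=200):
    """Divide one pyramid layer into superpixels and cut an h x w patch
    around each superpixel's center of gravity as its feature vector."""
    labels = slic(layer, n_segments=n_segments, channel_axis=None)  # grayscale
    feats = []
    for sp in np.unique(labels):
        ys, xs = np.nonzero(labels == sp)
        cy, cx = int(ys.mean()), int(xs.mean())    # center of gravity
        y0, x0 = max(cy - h // 2, 0), max(cx - w // 2, 0)
        patch = layer[y0:y0 + h, x0:x0 + w]
        if patch.shape == (h, w):
            feats.append((sp, patch.reshape(-1)))  # (h*w)-dimensional vector
    return feats
```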
(Normal/abnormal judgment)
Using the model (weights) computed in "Model learning" and the feature vector obtained above, the label (normal or abnormal) of the feature vector is estimated. The "feature vector acquisition" and "normal/abnormal judgment" steps are applied to all the superpixels, and the coordinates of each judged region and its label are accumulated.
(Confirming tumor candidate regions)
Regions judged abnormal become tumor candidate regions, and the upper-left coordinates (x0, y0) and lower-right coordinates (x1, y1) of each region are acquired.
(Integration of tumor candidate regions)
As shown in FIG. 6, the coordinates (x0, y0) and (x1, y1) of a tumor candidate region found in an image reduced or enlarged by a factor of a are converted to coordinates (X0, Y0) and (X1, Y1) in the original image, i.e., the rescaling (X0, Y0) = (x0/a, y0/a) and (X1, Y1) = (x1/a, y1/a) implied by the scale factor a.
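As a small illustration of this coordinate conversion, assuming the straightforward rescaling stated above:

```python
def to_original_coords(box, a):
    """Map a candidate box found in an image scaled by factor `a`
    back to original-image coordinates (a < 1 means the image was
    shrunk, so coordinates are divided by `a` to map back)."""
    x0, y0, x1, y1 = box
    return (x0 / a, y0 / a, x1 / a, y1 / a)
```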
(Suppression of over-detection outside the mammary gland tissue) (S22)
FIG. 14 shows a flowchart of the process for suppressing over-detection outside the mammary gland tissue (S22). Since tumors arise in mammary gland tissue, any region detected by the tumor detection process (S21) that lies in tissue other than the mammary gland is an over-detection.
In the present invention, to remove obvious over-detections outside the mammary gland, the mammary gland tissue is automatically extracted from the breast ultrasound image, and tumor candidate regions outside the mammary gland are removed (S22). One of two automatic extraction methods can be selected: "automatic extraction of mammary gland tissue by Otsu's binarization and graph cut (S221)" or "automatic extraction of mammary gland tissue by ZCA whitening and CRF (S222)".
The Otsu-binarization-and-graph-cut method (S221) does not require the mammary gland tissue to be learned in advance. The ZCA-whitening-and-CRF method (S222), on the other hand, requires prior training on mammary gland tissue, but can automatically extract the mammary gland tissue more accurately than the Otsu-based method.
(Automatic extraction of mammary gland tissue by Otsu's binarization and graph cut) (S221)
In this method, Otsu's binarization is first performed to obtain a rough estimate of the luminance range of the mammary gland tissue. A graph cut is then performed so that adjacent regions are, as far as possible, assigned to the same tissue class (mammary gland or non-mammary gland).
(Otsu's binarization)
The breast ultrasound image is divided into vertical strips of width w, and Otsu's binarization is applied to each strip. The mean u and variance σ of the original luminance values are then computed over the pixels that exceed the automatically selected threshold (the pixels judged white).
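A sketch of the per-strip Otsu step, using scikit-image's `threshold_otsu`; the strip width is an illustrative choice, not a value fixed by the description:

```python
import numpy as np
from skimage.filters import threshold_otsu

def strip_otsu_stats(image, w=32):
    """Apply Otsu's threshold to each vertical strip of width `w` and
    return the mean u and variance of the original luminance values in
    the above-threshold ("white") pixels."""
    bright = []
    for x in range(0, image.shape[1], w):
        strip = image[:, x:x + w]
        if strip.min() == strip.max():
            continue  # Otsu's threshold is undefined on flat strips
        t = threshold_otsu(strip)
        bright.append(strip[strip > t])
    bright = np.concatenate(bright)
    return bright.mean(), bright.var()  # u and sigma of the bright class
```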
(Graph cut method)
The breast ultrasound image is divided into M local regions of height h and width w. When the mean luminance values of the M local regions are X = [x1, ..., xM], the graph cut method computes the label set Y = [y1, ..., yM] of the M local regions that minimizes the energy function E(Y), which takes the standard form E(Y) = Σi∈V φu(yi) + λ Σi∈V Σj∈Ni φp(yi, yj).
Here, V denotes the set of M local regions and Ni the set of regions adjacent to region i (8-connected in this system). φu(yi) is called the data term and φp(yi, yj) the smoothing term, defined as follows in this system. In these definitions, ||x||2 denotes the L2 norm of x, and λ and k are user-specified parameters: increasing λ and decreasing k makes adjacent regions more likely to be assigned the same label. In the present invention, λ = 1 and k = 0.5. Pr(xi|yi) denotes the probability of xi given label yi and is defined by Equation (4).
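A hedged sketch of the graph-cut labeling with the PyMaxflow library follows. Because Equation (4) and the exact term definitions are given only as figures, the data term here is an assumed Gaussian negative log-likelihood around the Otsu statistics (with sigma treated as a standard deviation) and the pairwise term a constant-weight Potts penalty; PyMaxflow's default grid edges are 4-connected, whereas the description uses an 8-neighborhood.

```python
import numpy as np
import maxflow  # PyMaxflow, one possible graph-cut solver

def graphcut_labels(mean_lum, u, sigma, lam=1.0, k=0.5):
    """Binary graph-cut labeling over the grid of local regions.
    `mean_lum` is a float array of per-region mean luminances;
    (u, sigma) come from the Otsu step. The Gaussian data term stands
    in for Equation (4), and the constant Potts pairwise weight for
    the smoothing term."""
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(mean_lum.shape)
    g.add_grid_edges(nodes, lam * k)  # pairwise smoothing between neighbors
    nll_gland = (mean_lum - u) ** 2 / (2.0 * sigma ** 2)
    nll_other = np.full_like(nll_gland, 0.5)  # assumed flat cost
    g.add_grid_tedges(nodes, nll_gland, nll_other)
    g.maxflow()
    return g.get_grid_segments(nodes)  # boolean per-region labels
```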
(Automatic extraction of mammary gland tissue by ZCA whitening and CRF) (S222)
In the automatic extraction of mammary gland tissue by ZCA whitening and CRF, preprocessing by ZCA whitening and a piecewise linear function is performed first (S222a). Next, the image is divided into local regions and a luminance histogram is obtained from each local region (S222b).
Finally, the mammary gland tissue is automatically extracted with a conditional random field (CRF) (S222c). The three steps are described below.
(Image preprocessing) (S222a)
The following two image-processing operations are applied to the breast ultrasound image.
The first is ZCA whitening, which reduces the correlation between depth and luminance so as to lessen the influence of depth on the luminance values. Here, depth means the distance from the top edge of the image to a pixel. Because ultrasound attenuates as it travels from the skin deeper into the tissue and the reflected wave weakens, the echo level (brightness in the ultrasound image) decreases with depth; the whitening mitigates this effect. The second is a piecewise linear function that enhances luminance values in a chosen range, in order to reduce the speckle noise that pervades ultrasound images and to emphasize the mammary gland tissue.
ZCA whitening removes bias among strongly correlated variables by driving the correlation between them toward zero. Let vi be the luminance value and di the depth of each of the L pixels in the image (i = 1, ..., L); in this system the depth di is the y-coordinate of the pixel, with the origin at the top-left corner of the image.
ZCA whitening is applied to the vectors ti = [vi, di]T whose elements are the luminance value and the depth, where T denotes transposition. First, principal component analysis is applied to the L vectors [t1, ..., tL] to compute the matrix U whose columns are the eigenvectors and the diagonal matrix Λ = diag(λ1, λ2) of eigenvalues (with λ1 > λ2). Next, the ZCA whitening transformation matrix U Λ^(-1/2) U^T is computed, and the whitened vector is obtained by applying it to each ti; the whitened luminance v̄i (vi with an overbar) is used as the luminance value after ZCA whitening.
In the luminance conversion by the piecewise linear function, the luminance values are converted by a piecewise linear mapping to obtain the transformed value zi. Here, G denotes the number of gray levels in the converted image; based on preliminary experiments, zl = -1, zu = 3, and G = 8.
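The preprocessing can be sketched as follows; the clip-and-quantize step is an assumed reading of the piecewise linear function, whose exact equation is given only as a figure.

```python
import numpy as np

def zca_preprocess(image, zl=-1.0, zu=3.0, G=8):
    """ZCA-whiten (luminance, depth) pairs, then clip the whitened
    luminance to [zl, zu] and quantize it to G gray levels (an assumed
    reading of the piecewise linear function)."""
    H, W = image.shape
    v = image.astype(float).ravel()
    d = np.repeat(np.arange(H), W).astype(float)   # depth = y-coordinate
    T = np.stack([v, d])                           # 2 x L data matrix
    T -= T.mean(axis=1, keepdims=True)
    lam, U = np.linalg.eigh(T @ T.T / T.shape[1])  # eigen-decomposition
    Wzca = U @ np.diag(1.0 / np.sqrt(lam + 1e-8)) @ U.T
    v_bar = (Wzca @ T)[0]                          # whitened luminance
    z = np.clip(v_bar, zl, zu)
    z = np.floor((z - zl) / (zu - zl) * (G - 1e-9)).astype(int)
    return z.reshape(H, W)                         # values in 0 .. G-1
```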
(Acquisition of luminance histograms) (S222b)
To capture the brightness of the mammary gland tissue, histograms of luminance values are used as feature vectors. The breast ultrasound image is divided into M rectangular regions (patch images), and a histogram of luminance values is computed from each patch image. Since the image has been quantized to G gray levels, a G-dimensional feature vector is obtained from the patch image of each of the M regions.
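A short sketch of the per-patch histogram features (the patch size is an illustrative choice):

```python
import numpy as np

def patch_histograms(quantized, ph=32, pw=32, G=8):
    """Return one normalized G-bin histogram per rectangular patch of a
    G-level quantized image."""
    H, W = quantized.shape
    feats = []
    for y in range(0, H - ph + 1, ph):
        for x in range(0, W - pw + 1, pw):
            patch = quantized[y:y + ph, x:x + pw]
            hist = np.bincount(patch.ravel(), minlength=G).astype(float)
            feats.append(hist / hist.sum())  # G-dimensional feature
    return np.array(feats)
```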
(Conditional random field) (S222c)
The plausibility (likelihood) that each of the M patch images depicts mammary gland tissue is computed. Mammary gland tissue forms a contiguous region: when a given region is mammary gland tissue, its neighbors are also likely to be mammary gland (spatial continuity). A conditional random field (CRF), which captures the relationship between neighboring regions and can therefore estimate the tissue (label) depicted in each patch while respecting this spatial continuity, is used to compute a mammary-gland likelihood for each region. For the feature vector group X extracted from the M patch images and the corresponding label set Y (where yi = 1 denotes mammary gland tissue and yi = -1 denotes tissue other than the mammary gland), the CRF is defined by the probability model Pr(Y|X, w) = exp(-E(X, Y, w)) / Z.
Here, Z is called the partition function and guarantees 0 ≤ Pr(Y|X, w) ≤ 1 by summing exp(-E(X, Y, w)) over all label configurations. E(X, Y, w) is called the energy function; V denotes the set of patch images and Ni the n-neighborhood of patch image i (the number of neighbors n can be set arbitrarily; 8 neighbors are usually used). Within the energy function, φu is called the data term and φp the pairwise (smoothing) term, defined as follows in the present invention.
Here, δ(yi ≠ yj) is an indicator function that takes the value 1 when yi ≠ yj and 0 when yi = yj. σ is a parameter set by the user; in the present invention, σ = 3. w = [wu, wp] are the learning parameters of the CRF. In practice, given N training images with feature sets Xn = {xN} and corresponding label sets Yn = {yN}, the estimate ŵ is obtained by fitting the model to the training data (maximizing the conditional likelihood), which can be solved with optimization methods such as stochastic gradient descent, the momentum method, Adam, AdaGrad, or AdaDelta.
Using the learned CRF parameters ŵ and the feature vectors extracted from the M patch images of a breast ultrasound image under examination, a mammary-gland likelihood is computed for each rectangular region (patch image). The larger this value, the more strongly the region resembles mammary gland; in the present invention, regions with a value of 0.5 or greater are judged to be mammary gland tissue.
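As an illustration of label inference over patches, the sketch below uses ICM (iterated conditional modes) as a simple stand-in for CRF inference; the unary costs and weights are assumptions, not the patent's learned terms.

```python
import numpy as np

def crf_icm(unary, neighbors, w_u=1.0, w_p=1.0, iters=10):
    """Iterated-conditional-modes sketch of binary label inference over
    patches. `unary[i, l]` is the assumed data-term cost of giving
    patch i the label with index l (0 = other, 1 = mammary gland);
    `neighbors[i]` lists the indices of patch i's neighbors."""
    labels = np.argmin(unary, axis=1)  # greedy initialization
    for _ in range(iters):
        for i in range(unary.shape[0]):
            costs = [w_u * unary[i, l]
                     + w_p * sum(l != labels[j] for j in neighbors[i])
                     for l in (0, 1)]
            labels[i] = int(np.argmin(costs))
    return labels
```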
(Suppression of over-detection using frame continuity) (S23)
In a breast ultrasound examination, where cross-sections of the breast are recorded as a moving image, a mass with volume, being a three-dimensional structure, is extremely likely to be depicted continuously at the same position across multiple frames. Consequently, when a mass is present, the tumor candidate regions computed in step S21 are detected continuously, with very high probability, at the same position in a series of consecutive frames.
In contrast, a shadow produced by speckle noise or similar effects has no volume, so it is not depicted at the same position in multiple consecutive frames; it is detected only as a one-off tumor candidate region and should be regarded as an over-detection. In the present invention, to suppress such sporadic over-detections, only tumor candidate regions that occur at the same position in consecutive frames are treated as final tumor regions. To remove candidates not detected at the same position across consecutive frames, the position information of tumor candidate regions in several consecutive frames (reference frames) captured before the frame currently under observation (the frame of interest) is used. Specifically, among the tumor candidate regions in a series of consecutive frames, those that are isolated in space and time are removed, and only regions detected multiple times at spatially and temporally close positions are judged to be final tumor regions.
The number of reference frames used for over-detection suppression can be set arbitrarily; the more reference frames are used, the stronger the suppression. For example, if only two frames including the frame of interest are used, the ultrasound images depicted in those frames are so similar that an over-detection is likely to recur at the same position, and it may not be removed. If more than two frames are used (for example, five frames including the frame of interest), the shape (texture) of the depicted tissue varies across the frames, so an over-detection is unlikely to occur at the same position in all of them and can be removed appropriately.
Increasing the number of reference frames strengthens over-detection removal, but risks erroneously removing candidate regions in which a true mass was detected. For example, with five reference frames, a mass depicted in only two frames yields candidate regions in only two frames, and these may be mistakenly discarded as over-detections.
To avoid erroneously removing correctly detected candidate regions, this system incorporates two measures in advance so that masses depicted in only a few reference frames can still be detected correctly. First, in "detection of tumor candidate regions by sliding window" (S211a), the search windows are moved so that they overlap, so that several candidate regions are detected at the same position whenever possible. Second, "creation of multi-resolution images" (S210) is introduced, so that images at different resolutions yield multiple candidate regions in the same area.
The over-detection suppression based on frame continuity proceeds as follows. First, the center coordinates of all tumor candidate regions between the current frame and the frame a prescribed number of positions earlier are collected. Mean-shift clustering is then performed on the collected center coordinates and frame numbers to group the candidate regions. For each resulting group, the number of member regions is counted, and all candidate regions belonging to groups whose member count is at or below a preset threshold are removed. The remaining candidate regions are finally judged to be tumor regions.
(Acquiring the coordinates of tumor candidate regions)
Let T be the number of the currently displayed frame and s the number of frames considered. The center coordinates (Xc, Yc) and frame number Tc of every tumor candidate region c in the s frames from frame T-s+1 through frame T of the moving image (FIG. 8) are acquired.
(Grouping by clustering)
The center coordinates (Xc, Yc, Tc) of all the tumor candidate regions acquired above are computed; the center is derived from the region's upper-left coordinates (x0, y0), lower-right coordinates (x1, y1), and frame number T as shown in FIG. 9. Mean-shift clustering is then run with the center coordinates and frame numbers of all candidate regions as input vectors, assigning one of K group numbers {g1, ..., gK} to every input vector. The number of groups K is determined automatically by the mean-shift algorithm. Besides mean shift, clustering methods that adjust the number of clusters automatically, such as the x-means method or the Infinite Gaussian Mixture Model (IGMM), can be applied.
(Removing groups by element count)
For each of the K groups {g1, ..., gK} computed above, the number of member elements is obtained. If the count is at or below a preset threshold Thn, all abnormal regions belonging to that group are deleted. In the present invention, Thn is set to 5.
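A sketch of this grouping step with scikit-learn's `MeanShift`; the bandwidth is left to the library's automatic estimate, which is an assumption.

```python
import numpy as np
from sklearn.cluster import MeanShift

def suppress_sporadic(candidates, th_n=5):
    """Group candidate centers (Xc, Yc, Tc) by mean shift and drop any
    group with th_n or fewer members. `candidates` is an (N, 3) array
    of center x, center y, and frame number."""
    X = np.asarray(candidates, dtype=float)
    groups = MeanShift().fit_predict(X)  # one group id per candidate
    keep = []
    for g in np.unique(groups):
        members = np.nonzero(groups == g)[0]
        if len(members) > th_n:  # groups of th_n or fewer are removed
            keep.extend(members.tolist())
    return X[keep]
```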
(Verification of priority learning)
The effectiveness of the priority learning described in the learning phase (S10) was verified. The experiment used breast ultrasound images from 7 patients for training and 15 patients for testing.
The Deep Learning network had five layers (2500, 900, 625, 225, and 2 units, from the input layer onward); the search window was 50 × 50 pixels and was moved 25 pixels in the y direction and 25 pixels in the x direction. The multi-resolution image had three levels (×1, ×0.75, ×0.5). To isolate the effect of priority learning, the over-detection suppression of steps S22 and S23 was not used in the examination phase; only the tumor candidate regions from step S21 were evaluated.
FIG. 10 shows the average number of over-detections per frame and the detection rate. Introducing priority learning reduced the number of over-detections while increasing the detection rate. These results indicate that priority learning is an effective technique for improving detection accuracy.
(Over-detection suppression by automatic extraction of mammary gland tissue)
To verify the effectiveness of the suppression of over-detection outside the mammary gland tissue (step S22 of the examination phase), the number of over-detections with and without automatic mammary-gland extraction was compared. The experiment used breast ultrasound images from 7 patients for training and 5 patients for testing.
The Deep Learning network had four layers (625, 500, 500, and 2 units, from the input layer onward); the search window was 50 × 50 pixels and was moved 25 pixels in the y direction and 25 pixels in the x direction. The 50 × 50 image inside the search window was reduced to 25 × 25 by bicubic interpolation before being fed to the network, and the multi-resolution image had three levels (×1, ×0.75, ×0.5). To isolate the effect of mammary-gland extraction, neither priority learning in the learning phase nor the suppression process of step S23 was used; only the candidate detection of step S21 and the non-mammary-gland suppression of step S22 were applied. The extraction method adopted was the Otsu-binarization-and-graph-cut method of step S221.
FIG. 11 compares the numbers of over-detections. Over-detections in tissue other than the mammary gland (non-mammary tissue) were successfully reduced, showing that automatic extraction of the mammary gland tissue is effective for suppressing over-detection.
(Verification of over-detection suppression using frame continuity)
To verify the effectiveness of the frame-continuity-based suppression of step S23 in the examination phase, the over-detections with and without this process were compared.
The experiment used breast ultrasound images from 7 patients for training and 5 patients for testing. The Deep Learning network had four layers (625, 500, 500, and 2 units, from the input layer onward); the search window was 50 × 50 pixels and was moved 25 pixels in the y direction and 25 pixels in the x direction. The 50 × 50 image inside the search window was reduced to 25 × 25 by bicubic interpolation before being fed to the network, and the multi-resolution image had three levels (×1, ×0.75, ×0.5). To isolate the effect of the frame-continuity process, neither priority learning in the learning phase nor the non-mammary-gland suppression of step S22 was used; only the candidate detection of step S21 and the frame-continuity suppression of step S23 were applied.
FIG. 12 compares the average number of over-detections per frame, showing that applying the frame-continuity-based suppression successfully reduced over-detections.
1 Abnormality determination region (lesion candidate region)
2 Abnormality determination region not included in the region of the observation site (diagnostic tissue)
3 Region of the observation site
4 Region outside the observation site
5 Abnormality determination region within the observation site that is not depicted continuously across still-image frames
6 Patch image
7 Learning model DB
8 Observation site and its surroundings (diagnostic part)
Claims (14)
An ultrasonic diagnostic imaging support method comprising a learning phase (S10) and an examination phase (S20 to S24), wherein
in the learning phase (S10), images showing lesions cut out in advance and other images are input as patch images, and a model that classifies the lesions from everything else is created from the patch images using a Deep Learning method; and
in the examination phase,
a moving image consisting of a sequence of frames of a diagnostic part including diagnostic tissue is acquired by operating an ultrasonic probe of an ultrasonic inspection apparatus (S20),
the model obtained in the learning phase is compared with the frame images of the moving image to detect lesion candidate regions of the frame images in the diagnostic part (S21),
the region of the diagnostic tissue in the diagnostic part is automatically extracted from the frame images of the moving image, and regions other than the diagnostic tissue region, together with the lesion candidate regions contained in them, are removed (S22),
lesion candidate regions that occur only sporadically in the diagnostic tissue are removed using the continuity of the frame sequence (S23), and
only the region of the diagnostic tissue in which the finally remaining lesion candidate regions of the frame images of the moving image are marked is output as the detection result (S24).
The ultrasonic diagnostic imaging support method according to claim 2, wherein the detection of the lesion candidate regions (S21) is performed by
creating, from a frame of the moving image, a multi-resolution image composed of images at a plurality of resolutions (S210),
applying the lesion-candidate detection process to the image at each level of the multi-resolution image (S211), and
converting the coordinates of the abnormal regions in the image at each level into coordinates at the original resolution and integrating the images at the plurality of resolutions (S212).
An ultrasonic diagnostic imaging support system that uses an ultrasonic moving image (hereinafter simply referred to as a moving image) obtained by operating an ultrasonic probe of an ultrasonic inspection apparatus, the system comprising a learning phase (S10) and an examination phase (S20 to S24), wherein
in the learning phase (S10), images showing lesions cut out in advance and other images are input as patch images, and a model that classifies the lesions from everything else is created from the patch images using a Deep Learning method; and
in the examination phase (S20 to S24),
a moving image consisting of a sequence of frames of a diagnostic part including diagnostic tissue is acquired by operating the ultrasonic probe of the ultrasonic inspection apparatus (S20),
the model obtained in the learning phase is compared with the frame images of the moving image to detect lesion candidate regions of the lesions in the diagnostic part of the frame images (S21),
the region of the diagnostic tissue in the diagnostic part is automatically extracted from the frame images of the moving image, and regions other than the diagnostic tissue region, together with the lesion candidate regions contained in them, are removed (S22),
lesion candidate regions that occur only sporadically in the diagnostic tissue are removed using the continuity of the frame sequence (S23), and
only the region of the diagnostic tissue in which the finally remaining lesion candidate regions of the frame images of the moving image are marked is output as the detection result (S24).
The ultrasonic diagnostic imaging support system according to claim 9, wherein the detection of the lesion candidate regions (S21) is performed by
creating, from a frame of the moving image, a multi-resolution image composed of images at a plurality of resolutions (S210),
applying the lesion-candidate detection process to the image at each level of the multi-resolution image (S211), and
converting the coordinates of the abnormal regions in the image at each level into coordinates at the original resolution and integrating the images at the plurality of resolutions (S212).
The ultrasonic image diagnosis support system according to claim 11 or 12, wherein the lesion candidate region is removed (S22) by automatic extraction of the mammary gland tissue by CRF.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2019509166A JP6710373B2 (en) | 2017-03-30 | 2018-03-09 | Ultrasound image diagnosis support method and system |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2017068394 | 2017-03-30 | | |
| JP2017-068394 | 2017-03-30 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018180386A1 true WO2018180386A1 (en) | 2018-10-04 |
Family
ID=63677201
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2018/009336 Ceased WO2018180386A1 (en) | 2017-03-30 | 2018-03-09 | Ultrasound imaging diagnosis assistance method and system |
Country Status (2)
| Country | Link |
|---|---|
| JP (1) | JP6710373B2 (en) |
| WO (1) | WO2018180386A1 (en) |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR102446638B1 (en) * | 2021-04-28 | 2022-09-26 | 주식회사 딥바이오 | Method for training artificial neural network determining lesion area caused by breast cancer, and computing system performing the same |
| JP7555170B2 (en) | 2021-05-12 | 2024-09-24 | 富士フイルム株式会社 | Ultrasound diagnostic device and diagnostic support method |
- 2018
  - 2018-03-09 WO PCT/JP2018/009336 patent/WO2018180386A1/en not_active Ceased
  - 2018-03-09 JP JP2019509166A patent/JP6710373B2/en active Active
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2015154918A (en) * | 2014-02-19 | 2015-08-27 | 三星電子株式会社Samsung Electronics Co.,Ltd. | Lesion detection apparatus and method |
| WO2016088758A1 (en) * | 2014-12-01 | 2016-06-09 | 国立研究開発法人産業技術総合研究所 | Ultrasound examination system and ultrasound examination method |
Cited By (31)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112638279A (en) * | 2018-10-22 | 2021-04-09 | 百合医疗科技株式会社 | Ultrasonic diagnostic system |
| CN109784692B (en) * | 2018-12-29 | 2020-11-24 | 重庆大学 | A fast and safe constrained economic scheduling method based on deep learning |
| CN109784692A (en) * | 2018-12-29 | 2019-05-21 | 重庆大学 | A kind of fast and safely constraint economic load dispatching method based on deep learning |
| JP7185242B2 (en) | 2019-02-27 | 2022-12-07 | 株式会社フィックスターズ | Program and diagnostic imaging aid |
| WO2020175356A1 (en) * | 2019-02-27 | 2020-09-03 | 学校法人慶應義塾 | Storage medium, image diagnosis assistance device, learning device, and learned model generation method |
| JPWO2020175356A1 (en) * | 2019-02-27 | 2020-09-03 | ||
| JP2021033826A (en) * | 2019-08-28 | 2021-03-01 | 龍一 中原 | Medical image processing device, medical image processing method and medical image processing program |
| JP7418730B2 (en) | 2019-08-28 | 2024-01-22 | 龍一 中原 | Medical image processing device, medical image processing method, and medical image processing program |
| WO2021145584A1 (en) * | 2020-01-16 | 2021-07-22 | 성균관대학교산학협력단 | Apparatus for correcting position of ultrasound scanner for artificial intelligence-type ultrasound self-diagnosis using augmented reality glasses, and remote medical diagnosis method using same |
| US12144687B2 (en) | 2020-01-16 | 2024-11-19 | OMNI C&S Inc | Apparatus for correcting posture of ultrasound scanner for artificial intelligence-type ultrasound self-diagnosis using augmented reality glasses, and remote medical diagnosis method using same |
| JP2023511300A (en) * | 2020-01-16 | 2023-03-17 | コーニンクレッカ フィリップス エヌ ヴェ | Method and system for automatically finding anatomy in medical images |
| KR20220050977A (en) * | 2020-02-10 | 2022-04-25 | 텐센트 테크놀로지(센젠) 컴퍼니 리미티드 | Medical image processing method, image processing method and apparatus |
| KR102874690B1 (en) * | 2020-02-10 | 2025-10-21 | 텐센트 테크놀로지(센젠) 컴퍼니 리미티드 | Medical image processing method, image processing method and device |
| JP2022553979A (en) * | 2020-02-10 | 2022-12-27 | ▲騰▼▲訊▼科技(深▲セン▼)有限公司 | Medical image processing method, image processing method, medical image processing device, image processing device, computer device and program |
| US12322092B2 (en) | 2020-02-10 | 2025-06-03 | Tencent Technology (Shenzhen) Company Limited | Medical image processing method and apparatus, image processing method and apparatus, terminal and storage medium |
| JP7628536B2 (en) | 2020-02-10 | 2025-02-10 | ▲騰▼▲訊▼科技(深▲セン▼)有限公司 | Medical image processing method, image processing method, medical image processing device, image processing device, computer device, and program |
| JPWO2021206170A1 (en) * | 2020-04-10 | 2021-10-14 | ||
| WO2021206170A1 (en) * | 2020-04-10 | 2021-10-14 | 公益財団法人がん研究会 | Diagnostic imaging device, diagnostic imaging method, diagnostic imaging program, and learned model |
| WO2022158843A1 (en) * | 2021-01-20 | 2022-07-28 | 주식회사 딥바이오 | Method for refining tissue specimen image, and computing system performing same |
| CN116745809A (en) * | 2021-01-20 | 2023-09-12 | 第一百欧有限公司 | Tissue sample image refining method and computing system for performing same |
| KR102304609B1 (en) * | 2021-01-20 | 2021-09-24 | 주식회사 딥바이오 | Method for refining tissue specimen image, and computing system performing the same |
| KR20220129144A (en) * | 2021-03-15 | 2022-09-23 | 비엔제이바이오파마 주식회사 | Apparatus and method for determining disease of target site based on patch image |
| KR102692189B1 (en) * | 2021-03-15 | 2024-08-07 | 연세대학교 산학협력단 | Apparatus and method for determining disease of target object based on patch image |
| JPWO2022259299A1 (en) * | 2021-06-07 | 2022-12-15 | ||
| JP7622837B2 (en) | 2021-06-07 | 2025-01-28 | 日本電信電話株式会社 | Object detection device and method |
| WO2022259299A1 (en) * | 2021-06-07 | 2022-12-15 | 日本電信電話株式会社 | Object detection device and method |
| WO2023113414A1 (en) * | 2021-12-13 | 2023-06-22 | 주식회사 딥바이오 | Method for training artificial neural network providing determination result of pathology specimen, and computing system performing same |
| WO2024181188A1 (en) | 2023-03-02 | 2024-09-06 | 富士フイルム株式会社 | Ultrasound diagnostic device and control method for ultrasound diagnostic device |
| WO2024203214A1 (en) * | 2023-03-29 | 2024-10-03 | 富士フイルム株式会社 | Ultrasonic diagnostic device and method for controlling ultrasonic diagnostic device |
| CN116485791B (en) * | 2023-06-16 | 2023-09-29 | 华侨大学 | Automatic detection method and system for double-view breast tumor lesion area based on absorbance |
| CN116485791A (en) * | 2023-06-16 | 2023-07-25 | 华侨大学 | Method and system for automatic detection of double-view breast tumor lesion area based on absorption |
Also Published As
| Publication number | Publication date |
|---|---|
| JPWO2018180386A1 (en) | 2019-11-07 |
| JP6710373B2 (en) | 2020-06-17 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP6710373B2 (en) | Ultrasound image diagnosis support method and system | |
| CN109754007A (en) | Peplos intelligent measurement and method for early warning and system in operation on prostate | |
| JP2008530700A (en) | Fast object detection method using statistical template matching | |
| CN111784701B (en) | Ultrasonic image segmentation method and system combining boundary feature enhancement and multi-scale information | |
| Strisciuglio et al. | Multiscale blood vessel delineation using B-COSFIRE filters | |
| CN117636116A (en) | Method for intelligently fusing CT image data with MRI data | |
| David et al. | Robust classification of brain tumor in MRI images using salient structure descriptor and RBF kernel-SVM | |
| Pham et al. | A comparison of texture models for automatic liver segmentation | |
| CN112837325A (en) | Medical image image processing method, device, electronic device and medium | |
| CN112488996A (en) | Inhomogeneous three-dimensional esophageal cancer energy spectrum CT (computed tomography) weak supervision automatic labeling method and system | |
| KR20220061076A (en) | Target data prediction method using correlation information based on multi medical image | |
| WO2024173568A1 (en) | Detecting and analyzing scanning gaps in colonoscopy videos | |
| Susomboon et al. | Automatic single-organ segmentation in computed tomography images | |
| US8165375B2 (en) | Method and system for registering CT data sets | |
| Gong et al. | An automatic pulmonary nodules detection method using 3d adaptive template matching | |
| Sathananthavathi et al. | Improvement of thin retinal vessel extraction using mean matting method | |
| Ogul et al. | Unsupervised rib delineation in chest radiographs by an integrative approach | |
| Selvy et al. | A proficient clustering technique to detect CSF level in MRI brain images using PSO algorithm | |
| Ravishankar et al. | Four novel approaches for detection of region of interest in mammograms—A comparative study | |
| JP2006175036A (en) | Rib shape estimating apparatus, rib profile estimating method, and its program | |
| JP2005160916A (en) | Method, apparatus and program for determining calcification shadow | |
| CN120908727B (en) | Method for identifying cervical squamous carcinoma and cervical adenocarcinoma by synthetic MRI | |
| CN120013944B (en) | A method for extracting trabecular bone features based on image analysis | |
| JP2005177037A (en) | Calcified shadow judgment method, calcified shadow judgment apparatus and program | |
| Veeramani et al. | Hybrid and automated segmentation algorithm for malignant melanoma using chain codes and active contours |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18774821; Country of ref document: EP; Kind code of ref document: A1 |
| | ENP | Entry into the national phase | Ref document number: 2019509166; Country of ref document: JP; Kind code of ref document: A |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 18774821; Country of ref document: EP; Kind code of ref document: A1 |