WO2021171394A1 - Method for creating a trained model, image generation method, and image processing device - Google Patents
Method for creating a trained model, image generation method, and image processing device
- Publication number
- WO2021171394A1 (PCT/JP2020/007600)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- ray
- extracted
- generated
- trained model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/02—Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
- A61B6/03—Computed tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/003—Reconstruction from projections, e.g. tomography
- G06T11/008—Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
- G06V10/7747—Organisation of the process, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/778—Active pattern-learning, e.g. online learning of image or video features
- G06V10/7784—Active pattern-learning, e.g. online learning of image or video features based on feedback from supervisors
- G06V10/7792—Active pattern-learning, e.g. online learning of image or video features based on feedback from supervisors the supervisor being an automated module, e.g. "intelligent oracle"
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
- G06V20/647—Three-dimensional objects by matching two-dimensional images to three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30008—Bone
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30101—Blood vessel; Artery; Vein; Vascular
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
Definitions
- the present invention relates to a trained model creation method, an image generation method, and an image processing device.
- the International Publication No. WO 2019/138438 discloses that an image representing a specific part is created by converting, with a trained model, an X-ray image of a region including the specific part of a subject.
- the above-mentioned International Publication No. 2019/138438 exemplifies the bone part of a subject, a blood vessel into which a contrast medium is injected, and a stent placed in the body as specific sites.
- the trained model is created by performing machine learning using a first DRR (Digitally Reconstructed Radiography) image and a second DRR image, each reconstructed from CT image data, as the teacher input image and the teacher output image, respectively. By taking the difference between the image obtained by conversion with the trained model and the original image, an image from which the specific part has been removed can be generated.
- the present invention has been made to solve the above-mentioned problems, and one object of the present invention is to enable image processing to be performed on various image elements, including on a plurality of image elements.
- the method of creating a trained model in the first aspect of the present invention generates a reconstructed image obtained by reconstructing three-dimensional X-ray image data into a two-dimensional projected image, generates by simulation a two-dimensional projected image from a three-dimensional model of the image element to be extracted, superimposes the projected image of the image element on the reconstructed image to generate a superimposed image, and performs machine learning using the superimposed image as teacher input data and the reconstructed image or the projected image as teacher output data, thereby creating a trained model that performs a process of extracting the image element included in an input image.
- in the image generation method in the second aspect of the present invention, a plurality of image elements are separately extracted from an X-ray image by using a trained model trained in a process of extracting a specific image element from an input image, and a processed image in which image processing has been performed on each image element included in the X-ray image is generated.
- the image processing apparatus in the third aspect of the present invention includes an image acquisition unit that acquires an X-ray image, an extraction processing unit that separately extracts a plurality of image elements from the X-ray image using a trained model trained in a process of extracting a specific image element from an input image, and an image generation unit that generates a processed image in which image processing has been performed on each image element included in the X-ray image, by performing an inter-image calculation using the plurality of extracted images extracted for each image element and the X-ray image.
- the “process of extracting an image element” is a broad concept that includes both generating an image representing the extracted image element and generating an X-ray image from which the image element has been removed by the extraction. More specifically, “extracting” an image element includes generating an image of only that image element and generating an image obtained by removing the image element from the original X-ray image. Further, “inter-image calculation” means generating one image by performing operations such as addition, subtraction, multiplication, and division between one image and another image. More specifically, an “inter-image calculation” performs a pixel-value operation on each corresponding pixel across a plurality of images to determine the pixel value of that pixel in the resulting image.
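- as a concrete illustration (not part of the original disclosure), a pixel-wise inter-image calculation can be sketched in a few lines of Python with NumPy; the array contents below are arbitrary example values:

```python
import numpy as np

# two images of identical shape; pixel values in arbitrary units
xray = np.array([[100.0, 120.0],
                 [110.0, 130.0]])    # original X-ray image
element = np.array([[20.0, 0.0],
                    [0.0, 30.0]])    # one extracted image element

# an inter-image calculation determines each output pixel from the
# corresponding pixels of the input images
removed = xray - element             # subtraction: element removed
enhanced = xray + 0.5 * element      # weighted addition: element emphasized
```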
- in the method of creating a trained model according to the first aspect, the superimposed image, obtained by superimposing a two-dimensional projected image generated by simulation from a three-dimensional model of the image element to be extracted onto the reconstructed image obtained by reconstructing three-dimensional X-ray image data into a two-dimensional projected image, is used as the teacher input data, and the reconstructed image or the projected image is used as the teacher output data. Machine learning can therefore be performed using image elements generated by simulation even if the three-dimensional X-ray image data does not contain the image element to be extracted. That is, the teacher data can be prepared without actually acquiring CT image data that includes the image element to be extracted.
- the teacher data can likewise be prepared for image elements that are included in CT image data but are difficult to separate and extract. As a result, a trained model for performing image processing on various image elements, including a plurality of image elements, can be created efficiently.
- in the image generation method of the second aspect and the image processing apparatus of the third aspect, a plurality of image elements are separately extracted from the X-ray image using a trained model trained in a process of extracting a specific image element from an input image, and a processed image is generated by performing an inter-image calculation using the plurality of extracted images extracted for each image element and the X-ray image.
- various image elements are thus extracted separately from the input X-ray image as extracted images, and each extracted image can be freely added to or subtracted from the X-ray image according to the type of the extracted image element.
- as a result, image processing can be performed on various image elements, including on a plurality of image elements.
- FIG. 1 is a block diagram showing an image processing apparatus according to one embodiment. FIG. 2 is a diagram showing an example of an X-ray imaging apparatus. FIG. 3 is a diagram for explaining the machine learning and the trained model. FIG. 4 is a diagram showing examples of image elements. FIG. 5 is a diagram showing a first example of image element extraction by the trained model. FIG. 6 is a diagram showing a second example of image element extraction by the trained model and of the generation of a processed image. FIG. 7 is a diagram showing an example in which, unlike the preceding figure, image processing is performed on the extracted images. FIG. 8 is a flowchart for explaining the image generation method according to one embodiment.
- the image processing device 100 extracts the image elements 50 included in the X-ray image 201 using the trained model 40 created by machine learning, and is configured to perform image processing on the X-ray image 201 using the extracted image elements 50.
- the image processing device 100 takes the X-ray image 201 captured by the X-ray imaging device 200 as an input, and generates a processed image 22 in which image processing is performed on each image element 50 included in the X-ray image 201 as an output.
- the image processing device 100 includes an image acquisition unit 10, an extraction processing unit 20, and an image generation unit 30.
- the image processing device 100 is composed of a computer including one or a plurality of processors 101, such as a CPU (Central Processing Unit) or a GPU (Graphics Processing Unit), and one or a plurality of storage units 102, such as an HDD (Hard Disk Drive) or an SSD (Solid State Drive).
- the image processing device 100 is connected to the display device 103.
- the image acquisition unit 10 is configured to acquire the X-ray image 201.
- the image acquisition unit 10 is composed of, for example, an interface for communicably connecting an external device and an image processing device 100.
- the interface of the image acquisition unit 10 may include a communication interface such as a LAN (Local Area Network).
- the image acquisition unit 10 may include an input / output interface such as HDMI (registered trademark), DisplayPort, and a USB port.
- the image acquisition unit 10 can acquire the X-ray image 201 from the X-ray photographing apparatus 200 or from the server apparatus connected via the network by communication.
- the X-ray image 201 is a medical image obtained by photographing a patient or a subject with an X-ray imaging apparatus 200.
- the X-ray image 201 may be either a still image or a moving image.
- a moving image is a collection of still images taken at a predetermined frame rate.
- the X-ray image 201 is a two-dimensional image.
- the X-ray image 201 may be various images taken by simple X-ray photography, fluoroscopy, angiography, or the like.
- the extraction processing unit 20 is configured to separately extract a plurality of image elements 50 from the X-ray image 201 by using the trained model 40 stored in the storage unit 102.
- the trained model 40 is a trained model in which a process of extracting a specific image element 50 from an input image is trained.
- the image element 50 is an image portion or image information constituting the X-ray image 201, defined as a group of parts that are identical or of the same type.
- Image element 50 can be a part of the human body that is anatomically classified.
- Such an image element 50 is a living tissue such as a bone or a blood vessel.
- the image element 50 can be an object introduced or placed in the body of a subject in an operation or the like.
- Such image element 50 can be, for example, a device such as a catheter, guidewire, stent, surgical instrument, fixture, etc. that is introduced into the body.
- the image element 50 may be noise, artifacts, scattered X-ray components, or the like generated during imaging processing in X-ray photography.
- the image element 50 may be clothing worn by the subject that appears in the image at the time of imaging.
- here, clothing is a concept that includes garments, ornaments, and other accessories.
- for example, buttons, fasteners, accessories, metal fittings, and the like of the clothing appear in the X-ray image 201.
- the trained model 40 is created in advance by machine learning that learns the process of extracting a specific image element 50 from the input image.
- the extraction processing unit 20 extracts the image element 50 using one or a plurality of trained models 40.
- the extraction processing unit 20 generates a plurality of extracted images 21 obtained by extracting different image elements 50 from the X-ray image 201.
- the first extracted image 21 includes a first image element 50
- the second extracted image 21 includes a second image element 50 different from the first image element 50.
- the method of creating the trained model 40 will be described later.
- the image generation unit 30 is configured to generate a processed image 22 in which image processing has been performed on each image element 50 included in the X-ray image 201, by performing an inter-image calculation using the plurality of extracted images 21 extracted for each image element 50 and the X-ray image 201.
- the image processing includes, for example, an enhancement process of the image element 50 or a removal process of the image element 50.
- the enhancement process is a process of relatively increasing the pixel value of the pixels belonging to the image element 50.
- the removal process is a process of relatively lowering the pixel values of the pixels belonging to the image element 50.
- the removal process includes not only removing completely from the image, but also reducing visibility by partial removal.
- the enhancement process may be, for example, an edge enhancement process.
- the removal process can be a noise removal process.
- the inter-image calculation determines the pixel value of each pixel in the processed image 22 by performing a pixel-value operation on each corresponding pixel across the plurality of extracted images 21 and the X-ray image 201.
- the content of the operation is not particularly limited, but may be, for example, four arithmetic operations of addition, subtraction, multiplication and division.
- the inter-image calculation includes weighted addition or subtraction of the individual extracted images 21 with respect to the X-ray image 201. By weighting an extracted image 21 and adding it to the X-ray image 201, the corresponding image element 50 included in the X-ray image 201 can be emphasized. By weighting an extracted image 21 and subtracting it from the X-ray image 201, the corresponding image element 50 can be removed. By adjusting the weight value, the degree of emphasis or removal of the image element 50 can be optimized.
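- expressed as a formula, this amounts to processed = X + Σᵢ sᵢ·wᵢ·Eᵢ, where Eᵢ is the i-th extracted image, wᵢ its weighting coefficient 23, and sᵢ = +1 for emphasis or -1 for removal. A minimal sketch (the function name and sign convention are illustrative assumptions, not taken from the disclosure):

```python
import numpy as np

def combine(xray, extracted_images, weights, signs):
    """Weighted addition/subtraction of extracted images onto the X-ray image.

    signs[i] = +1 emphasizes image element i, -1 removes it; weights[i]
    (the weighting coefficient 23) controls the degree of emphasis/removal.
    """
    processed = xray.astype(np.float64).copy()
    for img, w, s in zip(extracted_images, weights, signs):
        processed += s * w * img
    return processed

# e.g. remove bone while emphasizing a guide wire:
# processed = combine(xray, [bone_img, wire_img], [1.0, 0.8], [-1, +1])
```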
- the processor 101 functions as the extraction processing unit 20 and the image generation unit 30 by executing a program (not shown) stored in the storage unit 102. That is, in the example of FIG. 1, the extraction processing unit 20 and the image generation unit 30 are realized as functional blocks of the processor 101.
- the extraction processing unit 20 and the image generation unit 30 may be configured as separate hardware.
- configuring them as separate hardware includes, for example, implementing the extraction processing unit 20 and the image generation unit 30 on separate processors.
- it also includes providing a plurality of computers (PCs) in the image processing device 100, with one computer functioning as the extraction processing unit and another computer functioning as the image generation unit.
- the image processing device 100 causes the display device 103 to display the processed image 22 generated by the image generation unit 30.
- the image processing device 100 transmits, for example, the generated processed image 22 to the server device via the network.
- the image processing device 100 records, for example, the generated processed image 22 in the storage unit 102.
- FIG. 2 shows a configuration example of the X-ray imaging apparatus 200.
- FIG. 2 shows an example of a blood vessel X-ray imaging apparatus capable of performing fluoroscopic imaging of blood vessels.
- the X-ray imaging apparatus 200 includes a top plate 210, an X-ray irradiation unit 220, and an X-ray detector 230.
- the top plate 210 is configured to support the subject 1 (person).
- the X-ray irradiation unit 220 includes an X-ray source such as an X-ray tube, and is configured to irradiate X-rays toward the X-ray detector 230.
- the X-ray detector 230 is composed of, for example, an FPD (Flat Panel Detector), and is configured to detect X-rays emitted from the X-ray irradiation unit 220 and transmitted through the subject 1.
- the X-ray irradiation unit 220 and the X-ray detector 230 are held by the C arm 240.
- the C-arm 240 is movable in a first direction 250 along its arcuate arm portion and is rotatable in a second direction 252 about a rotation axis 251.
- the X-ray imaging apparatus 200 can thereby change the projection direction from the X-ray irradiation unit 220 toward the X-ray detector 230 within a predetermined angle range in each of the first direction 250 and the second direction 252.
- the extraction process of the image element 50 included in the X-ray image 201 is performed by the trained model 40 created by machine learning. As shown in FIG. 3, the trained model 40 extracts a pre-learned image element 50 from the input image, and outputs an extracted image 21 displaying only the extracted image element 50.
- the trained model 40 is created in advance by machine learning using the reconstructed image 60, obtained by reconstructing three-dimensional image data into a two-dimensional projected image, and the projected image 61, created by simulation from a three-dimensional model of the image element 50.
- for the machine learning, any method can be used, such as a fully convolutional network (FCN), another neural network, a support vector machine (SVM), or boosting.
- Such a learning model LM (trained model 40) includes an input layer 41 into which an image is input, a convolution layer 42, and an output layer 43.
- machine learning is performed using the training data 66 including the teacher input data 64 and the teacher output data 65.
- the teacher input data 64 and the teacher output data 65 included in one learning data 66 have a relationship between the data before extraction and the data after extraction for the same image element 50.
- Machine learning is performed for each image element of the plurality of image elements 50 to be extracted. That is, the learning data 66 is prepared for each image element 50 to be extracted.
- the plurality of image elements 50 include a first element 51, which is a living tissue, and a second element 52, which is a non-living tissue. Further, the plurality of image elements 50 include at least a plurality of: a bone 53, a blood vessel 54, a device 55 introduced into the body, clothing 56, noise 57, and a scattered X-ray component 58. Of these, the bone 53 and the blood vessel 54 correspond to the first element 51.
- the first element 51 may include living tissues other than the bone 53 and the blood vessel 54. The device 55 introduced into the body, the clothing 56, the noise 57, and the scattered X-ray component 58 correspond to the second element 52.
- the second element 52 may include image elements other than the device 55, the clothing 56, the noise 57, and the scattered radiation component 58.
- one trained model 40 is configured to extract a plurality of image elements 50 separately.
- the trained model 40 has one input channel and a plurality (N) output channels. N is an integer greater than or equal to 2.
- the trained model 40 extracts N image elements 50 separately.
- the trained model 40 outputs the extracted first to Nth image elements 50 as first extracted images 21-1 to Nth extracted images 21-N from N output channels, respectively.
- in a second example (see FIG. 6), the trained model 40 has one input channel and N + 1 output channels.
- the trained model 40 extracts a plurality of (N) image elements 50 from the input image without duplication.
- “no duplication” means that the image information extracted into one of the extracted images (for example, the first extracted image 21-1) is not included in the other extracted images (for example, the second extracted image 21-2 to the Nth extracted image 21-N).
- the trained model 40 outputs the extracted first to Nth image elements as the first extracted image 21-1 to the Nth extracted image 21-N, respectively, and outputs the residual image element 59 remaining after extraction as the (N+1)th extracted image 21x from the (N+1)th output channel.
- the first extracted image 21-1 to the Nth extracted image 21-N therefore do not contain the same image information, and the image information of the input X-ray image 201 that remains unextracted is contained in the (N+1)th extracted image 21x. Consequently, adding the first extracted image 21-1 to the Nth extracted image 21-N and the (N+1)th extracted image 21x together recovers the original X-ray image 201.
- in this way, the trained model 40 is configured to extract a plurality of image elements 50 from the input image without duplication and to output both the extracted image elements 50 and the residual image element 59 remaining after extraction. As a result, the total image information after the extraction process neither increases nor decreases relative to the input image.
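- a minimal sketch of such a one-input, (N+1)-output extraction network, under the assumption of a PyTorch implementation (the patent does not prescribe a specific architecture; the layer sizes here are placeholders):

```python
import torch
import torch.nn as nn

class ExtractionNet(nn.Module):
    """One input channel; N+1 output channels: N image elements plus one
    residual channel, so that the outputs sum back to the input image."""
    def __init__(self, n_elements: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_elements + 1, kernel_size=3, padding=1),
        )

    def forward(self, x):                 # x: (batch, 1, H, W)
        return self.body(x)               # (batch, N+1, H, W)

# the no-duplication property can be checked after training:
#   out = model(xray)
#   assert torch.allclose(out.sum(dim=1, keepdim=True), xray, atol=1e-2)
```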
- the image generation unit 30 applies a weighting coefficient 23 to each of the first extracted image 21-1 to the Nth extracted image 21-N and performs an inter-image calculation of addition or subtraction with respect to the input X-ray image 201 to generate the processed image 22.
- the (N+1)th extracted image 21x representing the residual image element 59 need not be used for the image processing.
- the weighting coefficients w1 to wN are set individually, corresponding to the first extracted image 21-1 to the Nth extracted image 21-N, respectively.
- the weighting coefficient 23 may be, for example, a fixed value set in advance in the storage unit 102. However, a plurality of types of weighting coefficients may be set depending on the use of the processed image 22 and the like.
- for example, a first set value of the weighting coefficient 23 may be used for the first extracted image 21-1 in one use of the processed image 22, a second set value in another use, and so on.
- the weighting coefficient 23 may be set to an arbitrary value according to the operation input of the user.
- image processing may be performed on each of the extracted images 21 before performing the inter-image calculation between each extracted image 21 and the X-ray image 201.
- the image generation unit 30 is configured to separately perform image processing on a part or all of the plurality of extracted images 21.
- the processed image 22 is generated by an inter-image calculation between the plurality of extracted images 21 after image processing and the X-ray image 201.
- the image generation unit 30 performs first image processing 25-1 on the first extracted image 21-1, second image processing 25-2 on the second extracted image 21-2, and so on up to Nth image processing 25-N on the Nth extracted image 21-N. Since image processing may be unnecessary for some image elements 50, it may be performed on only a part of the extracted images 21.
- the image processing performed on each extracted image is not particularly limited, but may be, for example, image correction processing or image interpolation processing.
- the image correction process may include an edge enhancement process and a noise removal process.
- the image correction process may be, for example, contrast adjustment, line enhancement process, smoothing process, or the like.
- the image interpolation process interpolates interrupted portions of an image element 50, such as a guide wire or a catheter, that appears broken in places because it is difficult to capture in the X-ray image 201.
- for contrast adjustment, line enhancement processing, and the like, the appropriate parameters differ for each image element 50, and it is difficult to process all the image elements 50 at once; by performing image processing on each extracted image 21 individually, optimal image processing can be performed for each image element 50.
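- one natural arrangement (an assumption for illustration, not specified by the disclosure) is a per-element table of processing functions applied to each extracted image before the inter-image calculation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_enhance(img, amount=1.0, sigma=1.0):
    """Unsharp masking: emphasize fine structure such as a guide wire."""
    return img + amount * (img - gaussian_filter(img, sigma=sigma))

def contrast_stretch(img, gain=1.2):
    """Simple linear contrast adjustment around the mean."""
    return (img - img.mean()) * gain + img.mean()

# hypothetical mapping from element type to its step-S3 processing;
# elements without an entry are passed through unchanged
PER_ELEMENT_PROCESSING = {"device": edge_enhance, "vessel": contrast_stretch}

def preprocess(extracted, element_type):
    fn = PER_ELEMENT_PROCESSING.get(element_type)
    return fn(extracted) if fn else extracted
```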
- the image generation method of the present embodiment includes at least the following steps S2 and S5 shown in FIG. 8. (S2) A plurality of image elements 50 are separately extracted from the X-ray image 201 using the trained model 40 trained in a process of extracting a specific image element 50 from an input image. (S5) A processed image 22 in which image processing has been performed on each image element 50 included in the X-ray image 201 is generated by performing an inter-image calculation using the plurality of extracted images 21 extracted for each image element 50 and the X-ray image 201. The image generation method of the present embodiment may further include steps S1, S3, S4, and S6 shown in FIG. 8.
- in step S1, the X-ray image 201 is acquired.
- the image acquisition unit 10 acquires the X-ray image 201 captured by the X-ray imaging apparatus 200 shown in FIG. 2, for example by communication with the X-ray imaging apparatus 200 or with a server apparatus.
- in step S2, the extraction processing unit 20 separately extracts a plurality of image elements 50 from the X-ray image 201 using the trained model 40.
- the extraction processing unit 20 inputs the X-ray image 201 acquired in step S1 into the trained model 40.
- the trained model 40 outputs the first extracted image 21-1 to the Nth extracted image 21-N as shown in FIG. 5 or FIG.
- in step S3, image processing may be performed on some or all of the extracted images 21.
- the image generation unit 30 executes preset image processing with predetermined parameters for the extracted image 21 to be image processed. Whether or not to perform image processing may be determined according to the input from the user. Whether or not to perform image processing may be determined according to the image quality of the extracted image 21. Step S3 does not have to be executed.
- in step S4, the image generation unit 30 acquires calculation parameters for each extracted image 21.
- the calculation parameters include, for example, a set value of the weighting coefficient 23 and a set value of the calculation method.
- the set value of the calculation method indicates whether to perform weighting addition (that is, emphasis processing of the image element 50) or weighting subtraction (that is, removal processing of the image element 50) for the target extracted image 21.
- the setting value of the calculation method and the setting value of the weighting coefficient 23 are preset in the storage unit 102 for each type of the extracted image element 50.
- in step S5, the image generation unit 30 performs the inter-image calculation using the plurality of extracted images 21 extracted for each image element 50 and the X-ray image 201.
- the image generation unit 30 performs an inter-image calculation according to the parameters acquired in step S4.
- the image generation unit 30 multiplies each of the first extracted image 21-1 to the Nth extracted image 21-N by its corresponding weighting coefficient 23, and performs the inter-image calculation on the X-ray image 201 by the corresponding calculation method.
- as a result, the X-ray image 201 with each extracted image 21 weighted and added or subtracted is generated as the processed image 22.
- the image generation unit 30 generates a processed image 22 (see FIG. 6 or 7) in which each image element 50 included in the X-ray image 201 is subjected to image processing that is an enhancement process or a removal process.
- in step S6, the image processing device 100 outputs the processed image 22.
- the image processing device 100 outputs the processed image 22 to the display device 103 or the server device. Further, the image processing device 100 stores the processed image 22 in the storage unit 102.
- the image generation unit 30 may accept an operation input for changing the calculation parameter.
- the image generation unit 30 may accept the input of the value of the weighting coefficient 23, or may accept the selection of another preset parameter.
- the image generation unit 30 may accept changes in the image processing parameters in step S3. Then, the image generation unit 30 may regenerate the processed image 22 using the changed parameters according to the operation input of the user.
- the creation of the trained model 40 may be performed by the processor 101 of the image processing device 100, but may also be executed using a separate computer for machine learning (learning device 300; see FIG. 10).
- the method of creating the trained model of the present embodiment includes the following steps S11 to S14.
- (S11) A reconstructed image 60 is generated by reconstructing CT (Computed Tomography) image data 80 into a two-dimensional projected image.
- the CT image data 80 is an example of "three-dimensional X-ray image data".
- (S12) A two-dimensional projected image 61 is generated by simulation from a three-dimensional model of the image element 50 to be extracted.
- (S13) The projected image 61 of the image element 50 is superimposed on the reconstructed image 60 to generate a superimposed image 67.
- (S14) A trained model 40 (see FIG. 3) that performs a process of extracting the image element 50 included in an input image is created by performing machine learning using the superimposed image 67 as the teacher input data 64 (see FIG. 3) and the reconstructed image 60 or the projected image 61 as the teacher output data 65 (see FIG. 3).
- in the machine learning, the teacher input data 64 (see FIG. 3) and the teacher output data 65 (see FIG. 3) created for each image element 50 may be input to one learning model LM.
- machine learning may be performed on a separate learning model LM for each of the image elements 50 to be extracted.
- CT image data 80 is acquired.
- the CT image data 80 is three-dimensional image data that reflects the three-dimensional structure of the subject 1 obtained by taking a CT image of the subject 1.
- the CT image data 80 is a three-dimensional aggregate of voxel data including three-dimensional position coordinates and a CT value at the position coordinates.
- four-dimensional 4D-CT data including time changes in three-dimensional information may be used. As a result, it is possible to accurately learn even an object that moves with time.
- in step S11, a reconstructed image 60 obtained by reconstructing the CT image data 80 into a two-dimensional projected image is generated.
- the reconstructed image 60 is a DRR image generated from the CT image data 80.
- the DRR image is a simulated X-ray image created as a two-dimensional projected image by virtual perspective projection that reproduces the geometric projection conditions of the X-ray irradiation unit 220 and the X-ray detector 230 of the X-ray imaging apparatus 200 shown in FIG. 2.
- the virtual X-ray tube 91 and the virtual X-ray detector 92 are virtually placed in a predetermined projection direction with respect to the CT image data 80.
- a three-dimensional spatial arrangement (photographing geometry) of a virtual X-ray imaging system is generated.
- the arrangement of the CT image data 80, the virtual X-ray tube 91, and the virtual X-ray detector 92 has the same imaging geometry as the arrangement of the actual subject 1, the X-ray irradiation unit 220, and the X-ray detector 230 shown in FIG. 2.
- the imaging geometry means the geometrical arrangement relationship between the subject 1 and the X-ray irradiation unit 220 and the X-ray detector 230 in a three-dimensional space.
- the pixel value of each pixel in the reconstructed image 60 is calculated by summing the CT values of the voxels that an X-ray emitted from the virtual X-ray tube 91 passes through before reaching the virtual X-ray detector 92.
- by changing the imaging geometry, it is possible to generate a simulated X-ray image at an arbitrary projection angle.
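- the ray summation described here can be sketched very simply; the version below uses a parallel-beam simplification (summing voxels along one axis), whereas the disclosure describes perspective projection matching the actual imaging geometry:

```python
import numpy as np

def simple_drr(ct_volume, axis=0):
    """Toy DRR: line integral (sum) of CT values along the projection axis.

    ct_volume: 3-D array of CT values. A faithful implementation would cast
    diverging rays from a virtual X-ray tube 91 to a virtual detector 92
    using the same imaging geometry as the real apparatus.
    """
    return ct_volume.sum(axis=axis)       # (H, W) simulated X-ray image
```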
- in the method of creating the trained model, the learning device 300 generates a plurality of reconstructed images 60 from one set of three-dimensional data (CT image data 80).
- the number of generated reconstructed images 60 can be, for example, about 100,000.
- the learning device 300 generates a plurality of reconstructed images 60 by making each parameter such as projection angle, projection coordinates, parameters for DRR image generation, contrast, and edge enhancement different from each other.
- the plurality of reconstructed images 60 that are different from each other can be generated by, for example, an algorithm that randomly changes the above parameters to generate the reconstructed image 60.
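- generating many mutually different reconstructed images then amounts to randomly sampling the generation parameters; a sketch (the parameter names and ranges are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_drr_params():
    """Randomly sample the DRR generation parameters named in the text."""
    return {
        "projection_angle_deg": rng.uniform(-30.0, 30.0),
        "projection_shift_px": rng.uniform(-20.0, 20.0, size=2),
        "contrast_gain": rng.uniform(0.8, 1.2),
        "edge_enhance_amount": rng.uniform(0.0, 1.0),
    }

# params_list = [random_drr_params() for _ in range(100_000)]
```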
- since the superimposed image 67 is created by superimposing the projected image 61 containing the image element 50, the reconstructed image 60 itself need not include the image element 50 to be extracted, or, even if it does include the element, the element need not have extractable contrast.
- in this specification, "random" and "random number" mean non-regularity and a non-regular sequence of numbers (set of numbers), but they need not be completely random; pseudo-randomness and pseudo-random numbers are included.
- in step S12 of FIG. 9, a two-dimensional projected image 61 is generated by simulation from the three-dimensional model of the image element 50 to be extracted.
- the projected image 61 is a two-dimensional image representing the image element 50 to be extracted by the trained model 40.
- the projected image 61 includes, for example, only the image element 50.
- the learning device 300 acquires a three-dimensional model of the image element 50 to be extracted, and generates a projection image 61 from the three-dimensional model by simulation.
- the three-dimensional model is created, for example, by performing CT imaging of an object including the image element 50 to be extracted, and extracting the image element 50 from the obtained CT data.
- the three-dimensional model can be created by using, for example, a CT image database published by a research institution or the like.
- the CT image database includes, for example, the lung CT image data set LIDC-IDRI (The Lung Image Database Consortium and Image Database Initiative) by the National Cancer Institute of the United States.
- a brain CT image data set, a standardized three-dimensional model of the skeleton, and the like can be used.
- three-dimensional models of the device 55 and the clothing 56 can be created using three-dimensional CAD data.
- the three-dimensional model is, for example, three-dimensional image data including only the image element 50 to be extracted.
- a projection image 61 can be created by acquiring two-dimensional data (X-ray image) that actually includes the image element 50 to be extracted and separating and extracting the image element 50 included in the acquired image.
- the learning device 300 generates a plurality of two-dimensional projected images 61 by making parameters such as a projection direction, a translation amount, a rotation amount, a deformation amount, and a contrast different from each other with respect to a three-dimensional model or two-dimensional data.
- the plurality of projected images 61 can be generated by a processing algorithm that randomly changes variable parameters such as the translation amount, rotation amount, deformation amount, and contrast with respect to the original data (three-dimensional model, two-dimensional data, or another projected image 61).
- the learning device 300 generates a plurality of two-dimensional projected images 61 for each type of image element 50.
- the learning device 300 generates a plurality of projection images 61 from one original data (three-dimensional data, two-dimensional data, or projection image 61).
- the number of projected images 61 generated from one original data can be, for example, about 100,000.
- in step S13, the projected image 61 generated in step S12 is superimposed on the reconstructed image 60 generated in step S11 to generate a superimposed image 67 (see FIG. 10).
- as a result, a superimposed image 67 containing the projected image 61 of the image element 50 to be extracted within the two-dimensional reconstructed image 60 is generated.
- a plurality of superimposed images 67 are also generated by combining the plurality of reconstructed images 60 and the plurality of projected images 61.
- One unit of learning data 66 includes a superimposed image 67 and either the reconstructed image 60 or the projected image 61 that was used to generate that superimposed image 67.
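- in code, assembling one unit of learning data 66 is essentially additive superimposition plus a choice of target; the sketch below assumes that images combine additively in the projection domain:

```python
def make_training_pair(reconstructed, projected, target="projected"):
    """Build one training sample (teacher input data 64, teacher output data 65).

    The superimposed image 67 is the reconstructed image 60 with the
    projected image 61 of the element overlaid. The teacher output is
    either the projected image (learn to extract the element) or the
    reconstructed image (learn to remove the element).
    """
    superimposed = reconstructed + projected       # teacher input data 64
    teacher_out = projected if target == "projected" else reconstructed
    return superimposed, teacher_out               # one learning data 66
```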
- Machine learning is performed in step S14 of FIG. 9.
- the superimposed image 67 is used as the teacher input data 64 input to the input layer 41 of the learning model LM of FIG.
- the reconstructed image 60 or the projected image 61 is used as the teacher output data 65 input to the output layer 43 of the learning model.
- when the teacher output data 65 is the reconstructed image 60, the learning model LM learns to extract the image element 50 from the input image and to generate an extracted image 21 that does not include the image element 50.
- when the teacher output data 65 is the projected image 61, the learning model LM learns to extract the image element 50 from the input image and to generate an extracted image 21 representing the extracted image element 50.
- the extracted image 21 can be an image containing only the image element 50.
- Adopting the reconstructed image 60 as the teacher output data 65 and adopting the projected image 61 can be considered equivalent from the viewpoint of image processing.
- the reconstructed image 60 that actually includes the image element 50 to be extracted may be used as the teacher input data 64, and the projected image 61 of the image element 50 extracted from the reconstructed image 60 may be used as the teacher output data 65.
- in step S15 of FIG. 9, the learning device 300 determines whether or not the machine learning is complete.
- the learning device 300 determines that the machine learning is completed when, for example, all the learning data 66 are machine-learned a predetermined number of iterations.
- the learning device 300 determines that machine learning is completed when, for example, the value of the evaluation function for evaluating the performance of the learning model LM becomes equal to or greater than a predetermined value.
- if the learning device 300 determines that the machine learning is not complete, the learning data 66 is changed in step S16, and the machine learning of step S14 is executed using the next learning data 66.
- when the machine learning is determined to be complete, the learning model LM is stored as the trained model 40 in step S17. This completes the creation of the trained model 40.
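- steps S14 to S17 can be summarized in a hedged training-loop sketch (assuming a PyTorch implementation; the loss function, optimizer, and concrete stopping values are assumptions, while the two completion criteria are those named in the text):

```python
import torch

def train(model, loader, val_metric, max_iterations=100, target_score=0.95):
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.MSELoss()
    for it in range(max_iterations):
        for teacher_in, teacher_out in loader:     # change learning data (S16)
            opt.zero_grad()
            loss = loss_fn(model(teacher_in), teacher_out)  # learning (S14)
            loss.backward()
            opt.step()
        if val_metric(model) >= target_score:      # evaluation check (S15)
            break
    torch.save(model.state_dict(), "trained_model_40.pt")   # store (S17)
```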
- the created trained model 40 is provided to the image processing apparatus 100 via a network or recorded on a non-transitory recording medium.
- FIG. 12 shows an example in which the image element 50 is a bone 53.
- the teacher input data 64 is, for example, a superimposed image 67 of the reconstructed image 60 not including the bone 53 and the projected image 61 including the bone 53.
- the superimposed image 67 is generated by creating a projection image 61 from the CT image data 80 or a three-dimensional model of the skeleton and superimposing it on the reconstructed image 60 from which the bone 53 has been removed.
- the reconstructed image 60 from which the bone 53 has been removed is generated by clipping (fixing) the CT value to zero for the pixels in the CT value range of the bone 53.
- the CT value of the bone is generally about 200 HU to about 1000 HU, so the threshold value may be set to a predetermined value of about 0 HU to 200 HU.
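- the CT-value clipping described here is a simple threshold operation; a sketch using the 200 HU figure from the stated range:

```python
import numpy as np

BONE_THRESHOLD_HU = 200   # bone is roughly 200 HU to 1000 HU per the text

def remove_bone(ct_volume):
    """Clip voxels in the bone CT-value range to zero before DRR generation."""
    out = ct_volume.copy()
    out[out >= BONE_THRESHOLD_HU] = 0
    return out

def keep_only_bone(ct_volume):
    """Converse: zero out voxels below the bone range (cf. the bone-only
    reconstructed image used as teacher output data 65)."""
    out = ct_volume.copy()
    out[out < BONE_THRESHOLD_HU] = 0
    return out
```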
- the teacher output data 65 is a projection image 61 including only the bone 53.
- the projection image 61 used for generating the superimposed image 67 is used.
- when the CT image data 80 includes the bone 53, the reconstructed image 60 generated from the CT image data 80 may itself be used as the teacher input data 64; in that case, it is not always necessary to create the superimposed image 67.
- the teacher output data 65 is the reconstructed image 60 including only the bone 53.
- the reconstructed image 60 including only the bone 53 is generated, for example, by clipping the CT value to zero for pixels whose CT value is below the CT value range of the bone 53 in the reconstructed image 60 that includes the bone 53.
- the learning model LM learns to generate an extracted image 21 in which the bone 53 is extracted like the teacher output data 65 from the input image such as the teacher input data 64.
- the image processing device 100 weights and subtracts the extracted image 21 generated by the trained model 40 with respect to the X-ray image 201 acquired by the X-ray photographing device 200. As a result, as shown in FIG. 13, a processed image 22 in which the image element 50 of the bone 53 is removed from the X-ray image 201 is generated. In FIG. 13, for convenience of explanation, the image element 50 of the removed bone 53 is shown by a broken line.
- FIG. 14 shows an example in which the image element 50 is the device 55.
- the device 55 is a guide wire.
- the teacher input data 64 is a superimposed image 67 including the device 55.
- the reconstructed image 60 generated from the CT image data 80 does not include the image element 50.
- the projection image 61 of only the device 55 generated from the three-dimensional model of the device 55 is superimposed on the reconstructed image 60.
- a superimposed image 67 including the device 55 is generated as shown in FIG.
- a plurality of variations of the projection image 61 whose shape and the like are changed by a simulation may be created by taking a two-dimensional X-ray image of the device 55.
- a plurality of superimposed images 67 are generated using the plurality of projected images 61.
- the projected image 61 of the image element 50 is generated by a simulation in which the shape of the three-dimensional model of the device 55 is given by a curve generated from random coordinate values.
- examples (A) to (I) in which a projected image 61 of the guide wire 55b holding the stent 55a is generated in a random shape by a curve simulation are shown.
- the guide wire 55b in each projected image 61 is generated in a different shape by a Bezier curve with a random coordinate value as a base point.
- the Bezier curve is a degree K-1 curve obtained from K control points, where K is an integer of 3 or more.
- a large number of projection images 61 of devices 55 having various shapes can be generated by an algorithm that randomly specifies the coordinate values of K control points.
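- a degree K-1 Bezier curve through K random control points can be evaluated with de Casteljau's algorithm; a sketch (the image size and point counts are assumptions):

```python
import numpy as np

rng = np.random.default_rng()

def bezier(control_points, n_samples=200):
    """Evaluate a degree K-1 Bezier curve from K control points
    (de Casteljau's algorithm: repeated linear interpolation)."""
    pts = np.asarray(control_points, dtype=float)
    curve = []
    for t in np.linspace(0.0, 1.0, n_samples):
        p = pts.copy()
        while len(p) > 1:
            p = (1.0 - t) * p[:-1] + t * p[1:]
        curve.append(p[0])
    return np.array(curve)                 # (n_samples, 2) x/y points

def random_guidewire_curve(k=5, image_size=512):
    """Random wire shape: K control points at random image coordinates."""
    ctrl = rng.uniform(0, image_size, size=(k, 2))
    return bezier(ctrl)
```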
- a plurality of single projection images 61 of the stent 55a placed in the body are generated by applying random translation, rotation, deformation, and contrast change to the three-dimensionally modeled pseudo-stent.
- the teacher output data 65 is a projection image 61 including only the device 55.
- the projection image 61 used for generating the superimposed image 67 is used.
- the learning model LM learns to generate an extracted image 21 obtained by extracting the device 55 like the teacher output data 65 from an input image such as the teacher input data 64.
- the image processing device 100 weights and adds the extracted image 21 generated by the trained model 40 to the X-ray image 201 acquired by the X-ray photographing device 200.
- a processed image 22 in which the image element 50 of the device 55 is emphasized is generated from the X-ray image 201.
- in the figure, the emphasis is represented by drawing the device 55 thicker than in the input X-ray image 201.
- the enhancement process may include not only a process of increasing the pixel value but also a process of coloring and displaying the image element 50 of the device 55 by the image process before the inter-image calculation shown in FIG. 7.
- FIG. 17 shows an example in which the image element 50 is noise 57.
- the noise 57 is, for example, random noise, but is shown in FIG. 17 as a set of dotted lines in the horizontal direction for convenience of explanation.
- the teacher input data 64 is a superimposed image 67 including noise 57.
- the reconstructed image 60 generated from the CT image data 80 does not include the noise 57.
- the randomly generated projection image 61 containing only the noise 57 is superimposed on the reconstructed image 60.
- the superimposed image 67 including the noise 57 is generated as shown in FIG.
- the noise 57 is created by randomly generating Gaussian noise that follows a Gaussian distribution and Poisson noise that follows a Poisson distribution for each projected image 61.
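- the two noise components can be generated per projected image as follows (a sketch; the parameter values are assumptions):

```python
import numpy as np

rng = np.random.default_rng()

def random_noise_image(shape, gauss_sigma=5.0, poisson_lam=3.0):
    """Noise-only projected image 61: Gaussian noise (e.g. electronic noise)
    plus zero-mean Poisson noise (photon-counting statistics)."""
    gaussian = rng.normal(0.0, gauss_sigma, size=shape)
    poisson = rng.poisson(poisson_lam, size=shape) - poisson_lam
    return gaussian + poisson
```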
- the teacher output data 65 is a projection image 61 containing only noise 57.
- the projection image 61 used for generating the superimposed image 67 is used.
- the learning model LM learns to generate an extracted image 21 in which noise 57 is extracted, such as teacher output data 65, from an input image such as teacher input data 64.
- the image processing device 100 weights and subtracts the extracted image 21 generated by the trained model 40 with respect to the X-ray image 201 acquired by the X-ray photographing device 200.
- a processed image 22 in which the image element 50 of the noise 57 is removed from the X-ray image 201 is generated.
- the processed image 22 of FIG. 18 shows the result of removing the noise 57 from an X-ray image 201 that contained the noise 57, like the teacher input data 64 of FIG. 17.
- FIG. 19 shows an example in which the image element 50 is a blood vessel 54.
- the blood vessel 54 is a contrast-enhanced blood vessel imaged by introducing a contrast medium.
- FIG. 19 shows an example of a cerebral blood vessel in the head, but other blood vessels may be used.
- the blood vessels can be, for example, the coronary arteries of the heart.
- the teacher input data 64 is a superposed image 67 including a blood vessel 54.
- the reconstructed image 60 generated from the CT image data 80 taken without contrast medium contains almost no blood vessels 54 (does not have sufficient contrast).
- the projection image 61 of only the blood vessel 54 generated from the three-dimensional model of the blood vessel 54 is superimposed on the reconstructed image 60.
- a superimposed image 67 including the blood vessel 54 is generated as shown in FIG.
- the projected image 61 of the image element 50 is generated by a simulation in which the shape of the three-dimensional model of the blood vessel 54 is randomly changed.
- the blood vessels of the projected image 61 are subjected to random translation, rotation, deformation, contrast change, etc. by simulation. That is, similar to the device 55 shown in FIG. 16, a variation of the projected image 61 of the blood vessel 54 that has been randomly changed is generated.
- the projected image 61 of the blood vessel 54 may also be created from CT image data of blood vessels taken with contrast enhancement.
- the teacher output data 65 is a projection image 61 including only the blood vessel 54.
- the projection image 61 used for generating the superimposed image 67 is used.
- the learning model LM learns to generate an extracted image 21 in which a blood vessel 54 is extracted like a teacher output data 65 from an input image such as a teacher input data 64.
- the image processing device 100 weights and adds the extracted image 21 generated by the trained model 40 to the X-ray image 201 acquired by the X-ray photographing device 200.
- a processed image 22 in which the image element 50 of the blood vessel 54 is emphasized is generated from the X-ray image 201.
- the processed image 22 of FIG. 20 shows the blood vessel 54 highlighted in an X-ray image 201 containing the image element 50 of the blood vessel 54, like the teacher input data 64 of FIG. 19.
- FIG. 21 shows an example in which the image element 50 is clothing 56.
- in the illustrated example, a button of a garment and a necklace worn by the subject are shown.
- the teacher input data 64 is a superimposed image 67 including clothing 56.
- the reconstructed image 60 generated from the CT image data 80 does not include clothing 56.
- the projection image 61 of only the clothing 56 generated from the three-dimensional model of the clothing 56 is superimposed on the reconstructed image 60.
- a superimposed image 67 including clothing 56 is generated.
- the three-dimensional model of the clothing 56 may be created from the CT image of the clothing 56 alone, or may be created from, for example, CAD data.
- a two-dimensional X-ray image of the clothing 56 may be taken to obtain a projected image 61.
- the projected image 61 is subjected to random translation, rotation, deformation, contrast change, etc. by simulation.
- the teacher output data 65 is a projection image 61 including only clothing 56.
- the same data as the projection image 61 used for generating the superimposed image 67 is used.
- the learning model LM learns to generate an extracted image 21 in which clothing 56 is extracted like the teacher output data 65 from an input image such as the teacher input data 64.
- the image processing device 100 weights and subtracts the extracted image 21 generated by the trained model 40 with respect to the X-ray image 201 acquired by the X-ray photographing device 200. As a result, as shown in FIG. 22, a processed image 22 in which the image element 50 of the garment 56 is removed from the X-ray image 201 is generated. In FIG. 22, for convenience, the portion of the image element 50 of the removed clothing 56 is shown by a broken line to explain that it has been removed.
- FIG. 23 shows an example in which the image element 50 is an X-ray scattered ray component 58.
- the teacher input data 64 is a superposed image 67 including a scattered radiation component 58.
- the scattered ray component 58 is not included in the reconstructed image 60 generated by the reconstruction operation.
- the projected image 61 of only the scattered ray component 58 generated by the Monte Carlo simulation modeling the shooting environment of the input image is superimposed on the reconstructed image 60.
- the superimposed image 67 including the scattered radiation component 58 is generated.
- each of the X-ray photons emitted from the X-ray irradiation unit 220 and detected by the X-ray detector 230 is calculated (simulated) as a stochastic phenomenon using a random number. That is, in the simulation, the physical properties related to the projection direction of X-rays, the shape (body shape) of the subject, and the interaction with photons are assumed.
- the interaction such as absorption and scattering phenomenon that occurs when the X-ray photon passes through the subject is calculated by using a random number as a stochastic phenomenon.
- a predetermined number of photons is calculated, and a projection image 61 formed by the X-ray photons detected by the virtual X-ray detector 230 is generated.
- the predetermined number of photons may be a number sufficient for imaging, for example, about 10 billion.
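- at the risk of oversimplifying, the stochastic character of the simulation can be conveyed with a toy sketch: each photon either passes through, is absorbed, or is scattered to a displaced detector pixel, with all outcomes decided by random numbers (a real Monte Carlo simulation models the imaging geometry, subject shape, and photon-matter interactions in detail; every number below is an assumption):

```python
import numpy as np

rng = np.random.default_rng()

def scatter_only_image(n_photons=100_000, size=64, mu=0.5, p_scatter=0.4):
    """Toy Monte Carlo: accumulate only scattered photons on the detector."""
    img = np.zeros((size, size))
    for _ in range(n_photons):
        x, y = rng.integers(0, size, 2)          # entry pixel of the photon
        if rng.random() < 1.0 - np.exp(-mu):     # interaction in the subject?
            if rng.random() < p_scatter:         # scattering (else absorption)
                dx, dy = rng.normal(0.0, 5.0, 2) # lateral displacement
                img[int(y + dy) % size, int(x + dx) % size] += 1.0
        # unscattered (primary) photons are omitted: scatter-only image
    return img
```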
- a plurality of projected images 61 of the image element 50 are generated by changing the projection angle over the projection angle range that can be captured by the X-ray imaging apparatus 200 modeled in the imaging environment model 85.
- the projection direction can be changed in the first direction 250 and the second direction 252 by moving the C-arm 240. Therefore, as shown in FIG. 25, the projected images 61 are generated by Monte Carlo simulation at a plurality of projection angles changed to different angle values over the entire projection angle range of ±α degrees in the first direction 250 and ±β degrees in the second direction 252.
- the projection angle may be changed at equal intervals over the entire projection angle range, or may be a value randomly changed by a predetermined number within the projection angle range. Further, the projected image 61 may be generated at an angle value outside the projection angle range and near the limit of the projection angle ( ⁇ ⁇ degree, ⁇ ⁇ degree).
- A plurality of projection images 61 of the image element 50, which is the scattered radiation component 58, are also generated by changing the energy spectrum of the virtual radiation source in the imaging environment model 85. That is, projection images 61 are created by Monte Carlo simulation under a plurality of conditions in which the energy spectrum of the X-rays emitted by the X-ray irradiation unit 220 assumed in the imaging environment model 85 is changed to different spectra. For example, the second energy spectrum 112 has relatively higher energy than the first energy spectrum 111. In the graph, the horizontal axis shows the energy [keV] of the X-ray photons and the vertical axis shows the relative intensity of the X-rays (that is, the number of detected X-ray photons). Between irradiation of the subject and detection, the energy spectrum of the X-rays becomes relatively biased toward the high-energy side; this is the beam hardening phenomenon. The reconstructed image 60 generated from the CT image data 80 cannot reproduce the image quality change caused by beam hardening, whereas the projection image 61 obtained by Monte Carlo simulation can simulate the influence of the beam hardening phenomenon.
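The beam hardening effect itself is straightforward to reproduce numerically. The following sketch, with an invented tube spectrum and attenuation curve (none of these numbers come from the patent), shows the mean photon energy shifting upward after attenuation:

```python
import numpy as np

def harden(spectrum_keV, weights, mu, thickness_cm):
    """Attenuate a polychromatic spectrum and report the mean-energy shift."""
    out = weights * np.exp(-mu * thickness_cm)   # Beer-Lambert law per energy bin
    mean_in = np.average(spectrum_keV, weights=weights)
    mean_out = np.average(spectrum_keV, weights=out)
    return out, mean_in, mean_out

E = np.linspace(20, 120, 101)                    # photon energies in keV
w = np.exp(-((E - 60) / 25) ** 2)                # toy tube spectrum
mu = 2.0 * (E / 20.0) ** -1.5                    # attenuation falls with energy
_, m_in, m_out = harden(E, w, mu, thickness_cm=5.0)
print(f"mean energy: {m_in:.1f} keV -> {m_out:.1f} keV")  # shifts upward
```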
- FIG. 23 shows, as an example, a projection image 61 of the scattered radiation component 58 due to Compton scattering obtained by Monte Carlo simulation. Scattered radiation components 58 other than Compton scattering, such as Rayleigh scattering, may also be obtained, as may scattered radiation components 58 due to multiple scattering in addition to single scattering. These various scattered radiation components 58 may be created as separate projection images 61 and machine learning performed so that they are extracted separately, or a projection image 61 containing the various scattered radiation components 58 together may be created and machine learning performed so that they are extracted collectively.
- The teacher output data 65 is a projection image 61 containing only the scattered radiation component 58. For the teacher output data 65, the same data as the projection image 61 used for generating the superimposed image 67 is used. Through machine learning, the learning model LM learns to generate, from an input image such as the teacher input data 64, an extracted image 21 in which the scattered radiation component 58 is extracted as in the teacher output data 65.
- The image processing device 100 applies weighted subtraction of the extracted image 21 generated by the trained model 40 to the X-ray image 201 acquired by the X-ray imaging apparatus 200. As a result, a processed image 22 in which the image element 50 of the scattered radiation component 58 has been removed from the X-ray image 201 is generated. Since the scattered radiation component 58 is a factor that lowers the contrast of the X-ray image, removing it improves the contrast of the processed image 22. The processed image 22 of FIG. 24 shows the contrast improved by removing the scattered radiation component 58 from an X-ray image 201 whose contrast had been lowered by the scattered radiation component 58, as in the teacher input data 64 of FIG. 23.
- A part of each of the created teacher input data 64 and teacher output data 65 consists of collimator images 68, in which the imaging range is limited by a collimator (not shown) provided in the X-ray imaging apparatus 200 (X-ray irradiation unit 220). In a collimator image 68, an image is formed in only a part of the image area, and the area shielded by the collimator contains no image information. Based on images actually taken with a collimator, a part of each of the teacher input data 64 and teacher output data 65 includes, by simulation, a plurality of collimator images 68 in which the shape of the X-ray irradiation range (that is, the image area) and the image quality parameters affected by the collimator are randomly varied. The image quality parameters affected by the collimator include the degree of transmission (contrast), edge blur, and noise content.
- The collimator image 68 is generated, for example, by removing the image portion outside a simulated irradiation range from the superimposed image 67, the reconstructed image 60, or the projection image 61 and then applying image processing that simulates the influence of the collimator. The collimator image 68 may also be generated from an image actually taken using a collimator. Including such images improves the robustness of the extraction processing of the image element 50 against changes in the X-ray irradiation range and changes in image quality caused by the use of a collimator.
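A collimator image of this kind might be simulated roughly as follows (the rectangular aperture, Gaussian edge blur, and leak parameter are illustrative assumptions; the image is assumed to be reasonably large):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def collimate(img, blur_sigma=2.0, leak=0.05):
    """Mask an image to a random rectangular irradiation range and soften the
    edges, as a stand-in for collimator effects."""
    h, w = img.shape
    top, left = rng.integers(0, h // 4), rng.integers(0, w // 4)
    bot, right = rng.integers(3 * h // 4, h), rng.integers(3 * w // 4, w)
    mask = np.zeros_like(img, dtype=float)
    mask[top:bot, left:right] = 1.0
    mask = gaussian_filter(mask, blur_sigma)     # blurred collimator edges
    return img * (leak + (1.0 - leak) * mask)    # slight transmission outside
```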
- In the present embodiment, machine learning for each type of image element 50 and generation of the processed image 22 by the image processing device 100 are performed as described above. Although each image element 50 and processed image 22 has been described individually for simplicity, the X-ray image 201 input to the image processing device 100 may contain a plurality of the above-described image elements: the bone 53, the blood vessel 54, the device 55, the clothing 56, the noise 57, and the scattered radiation component 58.
- The image processing device 100 generates extracted images 21, in which the individual image elements 50 are extracted from the input X-ray image 201 by the trained model 40, and performs inter-image calculations to generate a processed image 22 in which each of the plurality of image elements 50 is emphasized or removed (a sketch of this composition follows the example below).
- For example, the first extracted image 21-1 represents the bone 53, the second extracted image 21-2 represents the device 55, the fourth extracted image represents the blood vessel 54, the fifth extracted image represents the clothing 56, the sixth extracted image represents the noise 57, and a further extracted image represents the scattered radiation component 58.
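A minimal sketch of the extraction-then-composition flow (element names and weights are placeholders; the trained model 40 is assumed to have already produced the extracted images):

```python
import numpy as np

def compose(xray, extracted, weights):
    """Inter-image calculation: add or subtract each extracted element.

    extracted: dict mapping element name -> extracted image 21
    weights:   dict mapping element name -> weight (positive to emphasize,
               negative to remove)
    """
    processed = xray.astype(float).copy()
    for name, img in extracted.items():
        processed += weights.get(name, 0.0) * img
    return np.clip(processed, 0.0, None)

# e.g. remove bone and scatter, emphasize vessels and devices:
# compose(xray, ex, {"bone": -1.0, "scatter": -0.8, "vessel": +0.5, "device": +0.5})
```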
- One example applies the processed image 22 to an X-ray image of the front of the chest of a subject taken by plain X-ray radiography. In this example, the bone 53, the noise 57, the clothing 56, and the scattered radiation component 58 are removed. Removing the bone 53 improves the visibility of regions of interest such as the heart and lungs, and removing the noise 57 and the scattered radiation component 58 improves the visibility of the entire image. Since the image element 50 of the clothing 56 can be removed, the subject can undergo X-ray radiography without taking off clothing or accessories containing metal or the like. This brings useful effects such as improved work efficiency and reduced waiting times when X-ray radiography of a large number of subjects is performed continuously, for example in mass screening.
- Another example applies the processed image 22 to an X-ray fluoroscopic image in X-ray interventional radiology (IVR), such as catheter treatment using an X-ray angiography apparatus. In this example, the bone 53, the noise 57, and the scattered radiation component 58 are removed, and devices 55 such as catheters, guide wires, and stents, as well as the blood vessels 54, are emphasized. Removing the bone 53, the noise 57, and the scattered radiation component 58 improves the visibility of the fluoroscopic image, and emphasizing the devices 55 and the blood vessels 54 improves the visibility of the region of interest in catheter treatment and of the device being operated.
- In the present embodiment, as described above, the superimposed image 67, in which the reconstructed image 60 obtained by reconstructing the CT image data 80 into a two-dimensional projection image and the two-dimensional projection image 61 generated by simulation from the three-dimensional model of the image element 50 to be extracted are superimposed, is used as the teacher input data 64, and the reconstructed image 60 or the projection image 61 is used as the teacher output data 65. This makes it possible to prepare teacher data without actually preparing CT image data 80 that contains the image element 50 to be extracted, and even for an image element 50 that would be difficult to separate and extract from CT image data 80 containing it.
- In the image generation method of the present embodiment, a plurality of image elements 50 are extracted separately from the X-ray image 201 using the trained model 40 trained in the process of extracting a specific image element 50 from an input image, and the processed image 22 is generated by performing an inter-image calculation using the plurality of extracted images 21 extracted for each image element 50 and the X-ray image 201. Various image elements 50 can thus be extracted separately as extracted images 21 from the input X-ray image 201, and each extracted image 21 can be freely added to or subtracted from the X-ray image 201 according to the type of the extracted image element 50. Image processing can therefore be performed on various image elements 50, and on a plurality of image elements 50 at once.
- In the present embodiment, a plurality of superimposed images 67 are created for each of a plurality of mutually different image elements 50, and the plurality of image elements 50 include a first element 51 that is a biological tissue and a second element 52 that is a non-biological tissue. The trained model 40 can therefore handle both image elements 50 of biological tissue, such as the bone 53 and the blood vessel 54, and image elements 50 of non-biological tissue, such as the device 55 introduced into the body and the clothing 56 worn by the subject. By using such a trained model 40 in the image processing device 100, image processing on image elements 50 of biological tissue and image processing on image elements 50 of non-biological tissue can be performed in combination.
- Further, in the present embodiment, a plurality of superimposed images 67 are created for each of a plurality of mutually different image elements 50, and the plurality of image elements 50 include at least a plurality of the bone 53, the blood vessel 54, the device 55 introduced into the body, the clothing 56, the noise 57, and the scattered X-ray component 58.
- In the present embodiment, the image element 50 includes a linear or tubular device 55, and the projection image 61 of the image element 50 is generated by simulating the shape of the three-dimensional model of the device 55 with curves generated based on random coordinate values. Teacher data for learning the image element 50 of a device 55 that is long and bends into various shapes, such as a guide wire or a catheter, can thereby be generated in large quantities and in diverse shapes by simulation, and efficient machine learning can be performed without actually preparing a large amount of three-dimensional CT data in which the device 55 is placed inside a subject.
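One plausible way to generate such random curve shapes, assuming SciPy splines (the random-walk control points and all parameter values are our choices, not the patent's):

```python
import numpy as np
from scipy.interpolate import splprep, splev

rng = np.random.default_rng(0)

def random_device_curve(n_ctrl=6, n_samples=200, box=100.0):
    """Generate a smooth random 3-D curve standing in for a guide wire or
    catheter model. A cubic B-spline is fitted through a short random walk
    of control points, giving a long, gently bending shape."""
    ctrl = np.cumsum(rng.uniform(-1, 1, size=(n_ctrl, 3)), axis=0) * box / n_ctrl
    tck, _ = splprep(ctrl.T, s=0, k=3)           # interpolating cubic B-spline
    u = np.linspace(0, 1, n_samples)
    return np.stack(splev(u, tck), axis=1)       # (n_samples, 3) polyline
```

Projecting such a polyline onto the detector plane at each simulated angle would then yield projection images 61 of the device in many shapes.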
- In the present embodiment, the image element 50 includes the blood vessel 54, and the projection image 61 of the image element 50 is generated by a simulation in which the shape of the three-dimensional model of the blood vessel 54 is randomly changed. Teacher data for learning the image element 50 of the blood vessel 54, which is long and bends into complicated shapes, can thereby be generated by simulation in large quantities and with a wide range of individual differences, and efficient machine learning can be performed without preparing a large amount of three-dimensional CT data of various subjects.
- In the present embodiment, the image element 50 includes the scattered X-ray component 58, and the projection image 61 of the image element 50 is generated by a Monte Carlo simulation modeling the imaging environment of the input image. A projection image 61 of the scattered X-ray component 58, which is difficult to separate and extract from actual three-dimensional CT data or a two-dimensional X-ray image 201, can thereby be generated by Monte Carlo simulation.
- In the present embodiment, a plurality of projection images 61 of the image element 50 are generated by changing the projection angle over the projection angle range (±α degrees, ±β degrees) that can be imaged by the X-ray imaging apparatus 200 in the imaging environment model 85. X-ray radiography can be performed at various projection angles, and this configuration makes it possible to create a highly versatile trained model 40 capable of effectively extracting the scattered radiation component 58 even when X-ray images are taken while the projection angle is changed.
- In the present embodiment, a plurality of projection images 61 of the image element 50 are generated by changing the energy spectrum of the virtual radiation source in the imaging environment model 85. X-ray imaging is performed under various imaging conditions with different energy spectra depending on the imaging site and other factors, but according to the above configuration, a highly versatile trained model 40 capable of effectively extracting the scattered radiation component 58 can be created even when the X-ray image 201 is taken with various energy spectra.
- In the present embodiment, the machine learning includes inputting the teacher input data 64 and the teacher output data 65 created for each image element 50 into one learning model LM, and the trained model 40 is configured to extract the plurality of image elements 50 from the input image without duplication and to output each of the extracted image elements 50 together with the residual image element 59 remaining after the extraction. This prevents the extraction from losing image information: doctors and others who make diagnoses using the X-ray image 201 seek the basis of a diagnosis in the image information contained in the original image even after various visibility-improving processes have been applied, so a trained model 40 that extracts the image elements 50 without information loss provides them with a reliable image even when complex image processing is performed.
- In the present embodiment, image processing is performed separately on a part or all of the plurality of extracted images 21, and the processed image 22 is generated by an inter-image calculation between the plurality of extracted images 21 after the image processing and the X-ray image 201. Taking advantage of the fact that a plurality of image elements 50 can be extracted separately using the trained model 40, the extracted image 21 of each extracted image element 50 can be corrected independently, and image processing such as interpolation can be performed on it. If the same processing were instead applied to the entire X-ray image 201, the image processing algorithm would become complicated and heavy, and processing aimed at a specific image element 50 could adversely affect the other image elements 50.
- In the present embodiment, the inter-image calculation includes weighted addition or weighted subtraction of each extracted image 21 with respect to the X-ray image 201.
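As a compact illustration (the notation is ours, not the patent's), the weighted inter-image calculation can be written as

$$I_{22} = I_{201} + \sum_{k} w_k \, E_k,$$

where $I_{201}$ is the X-ray image, $E_k$ is the extracted image 21 of the $k$-th image element, and a positive weight $w_k$ emphasizes that element while a negative weight removes it.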
- In the above embodiment, the device that performs the machine learning (the learning device 300) and the image processing device 100 are separate devices, but the present invention is not limited to this. The machine learning may be performed in the image processing device 100. Alternatively, the learning device 300 may be configured as a server computer provided on a cloud.
- In the above embodiment, a plurality of image elements 50 are extracted by one trained model 40, but they may instead be extracted by a plurality of trained models. For example, one trained model 40 may be provided for each image element 50 to be extracted: a trained model 40-1, a trained model 40-2, ..., and a trained model 40-N, each of which extracts one (one type of) image element 50. Alternatively, a plurality of trained models 40 may be provided, each created so as to extract a plurality of image elements 50.
- In the above embodiment, the plurality of image elements 50 include at least a plurality of the bone 53, the blood vessel 54, the device 55, the clothing 56, the noise 57, and the scattered radiation component 58, but the image elements 50 may include image elements other than these. For example, an image element 50 may be a specific structural part such as a specific organ in the body. Further, for example, the image element 50 of a bone at a specific site among the bones 53, or of a blood vessel at a specific site among the blood vessels 54, may be extracted separately from the other bones or blood vessels. The image elements 50 also need not include any of the bone 53, the blood vessel 54, the device 55, the clothing 56, the noise 57, and the scattered radiation component 58.
- In the above embodiment, an example is shown in which the bone 53 is removed and the blood vessel 54 and the device 55 are emphasized by the inter-image calculation, but the present invention is not limited to this. The bone 53 may be emphasized, or one or both of the blood vessel 54 and the device 55 may be removed. The inter-image calculation may be addition or subtraction without weighting factors. The emphasis processing of an image element 50 may also be performed by multiplication with or without a weighting factor, and the removal processing of an image element 50 may be performed by division with or without a weighting factor.
Description
First, the configuration of the image processing device 100 will be described with reference to FIG. 1.

As shown in FIG. 1, the process of extracting the image elements 50 included in the X-ray image 201 is performed by the trained model 40 created by machine learning. As shown in FIG. 3, the trained model 40 extracts image elements 50 learned in advance from an input image and outputs an extracted image 21 showing only the extracted image element 50.

Next, the image generation method of the present embodiment will be described with reference to FIG. 8. The image generation method can be carried out by the image processing device 100 and includes at least the following steps S2 and S5 shown in FIG. 8.

(S2) Using the trained model 40 trained in the process of extracting a specific image element 50 from an input image, a plurality of image elements 50 are extracted separately from the X-ray image 201.

(S5) By performing an inter-image calculation using the plurality of extracted images 21 extracted for each image element 50 and the X-ray image 201, a processed image 22 in which image processing has been applied to each image element 50 included in the X-ray image 201 is generated.

The image generation method of the present embodiment may further include steps S1, S3, S4, and S6 shown in FIG. 8.

Next, the method for creating the trained model will be described. The trained model 40 may be created by the processor 101 of the image processing device 100, but it can also be created using a computer for machine learning (the learning device 300; see FIG. 10).
(S11) A reconstructed image 60 is generated by reconstructing CT (computed tomography) image data 80 into a two-dimensional projection image. The CT image data 80 is an example of the "three-dimensional X-ray image data".

(S12) A two-dimensional projection image 61 is generated by simulation from the three-dimensional model of the image element 50 to be extracted.

(S13) The projection image 61 of the image element 50 is superimposed on the reconstructed image 60 to generate a superimposed image 67.

(S14) Machine learning is performed using the superimposed image 67 as teacher input data 64 (see FIG. 3) and the reconstructed image 60 or the projection image 61 as teacher output data 65 (see FIG. 3), thereby creating a trained model 40 (see FIG. 3) that performs the process of extracting the image element 50 included in an input image.
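The following sketch outlines steps S11 to S14 as a data-preparation routine. The function name, the parallel-beam projection, and the use of SciPy are illustrative assumptions, not the patent's renderer:

```python
import numpy as np
from scipy.ndimage import rotate

def make_training_pair(ct_volume, element_volume, angle_deg=0.0):
    """Toy rendering of steps S11-S14 for one projection angle.

    ct_volume:      3-D CT image data 80 (without the target element)
    element_volume: voxelized 3-D model of the image element 50
    Returns (teacher input data 64, teacher output data 65).
    """
    # S11: reconstruct the CT volume into a 2-D projection (parallel beam)
    recon = rotate(ct_volume, angle_deg, axes=(0, 2), reshape=False).sum(axis=0)
    # S12: projection image 61 of the element's 3-D model at the same angle
    proj = rotate(element_volume, angle_deg, axes=(0, 2), reshape=False).sum(axis=0)
    # S13: superimposed image 67 = teacher input data 64
    superimposed = recon + proj
    # S14: the pair fed to the learning model LM (output = projection image 61)
    return superimposed, proj
```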
Next, specific examples of the superimposed image 67 (teacher input data 64) and the reconstructed image 60 or projection image 61 (teacher output data 65) for each image element 50 will be described, together with examples of the processed image 22 obtained using the extracted image 21 for each image element 50.

FIG. 12 shows an example in which the image element 50 is a bone 53. The teacher input data 64 is, for example, a superimposed image 67 of a reconstructed image 60 not containing the bone 53 and a projection image 61 containing the bone 53.

FIG. 14 shows an example in which the image element 50 is a device 55. In FIG. 14, the device 55 is a guide wire.

FIG. 17 shows an example in which the image element 50 is noise 57. The noise 57 is, for example, random noise, but in FIG. 17 it is shown, for convenience of explanation, as a set of horizontal dotted lines.

FIG. 19 shows an example in which the image element 50 is a blood vessel 54. The blood vessel 54 is a contrast-enhanced vessel imaged by introducing a contrast agent. FIG. 19 shows an example of cerebral blood vessels in the head, but other blood vessels, for example the coronary arteries of the heart, may also be used.

FIG. 21 shows an example in which the image element 50 is clothing 56. FIG. 21 shows, as examples of the clothing 56, buttons on a garment and a necklace worn by the subject.

FIG. 23 shows an example in which the image element 50 is an X-ray scattered radiation component 58.
In the present embodiment, the effects described above can be obtained.

The embodiments disclosed herein should be considered in all respects as illustrative and not restrictive. The scope of the present invention is indicated by the claims rather than by the above description of the embodiments, and includes all modifications (variations) within the meaning and scope equivalent to the claims.

It will be understood by those skilled in the art that the exemplary embodiments described above are specific examples of the following aspects.
(Item 1) A method for creating a trained model, comprising: generating a reconstructed image by reconstructing three-dimensional X-ray image data into a two-dimensional projection image; generating, by simulation, a two-dimensional projection image from a three-dimensional model of an image element to be extracted; superimposing the projection image of the image element on the reconstructed image to generate a superimposed image; and performing machine learning using the superimposed image as teacher input data and the reconstructed image or the projection image as teacher output data, thereby creating a trained model that performs a process of extracting the image element included in an input image.

(Item 2) The method for creating a trained model according to item 1, wherein a plurality of the superimposed images are created for each of a plurality of mutually different image elements, and the plurality of image elements include a first element that is a biological tissue and a second element that is a non-biological tissue.

(Item 3) The method for creating a trained model according to item 1, wherein a plurality of the superimposed images are created for each of a plurality of mutually different image elements, and the plurality of image elements include at least a plurality of a bone, a blood vessel, a device introduced into the body, clothing, noise, and an X-ray scattered radiation component.

(Item 4) The method for creating a trained model according to item 1, wherein the image element includes a linear or tubular device, and the projection image of the image element is generated by simulating the shape of a three-dimensional model of the device with curves generated based on random coordinate values.

(Item 5) The method for creating a trained model according to item 1, wherein the image element includes a blood vessel, and the projection image of the image element is generated by a simulation in which the shape of a three-dimensional model of the blood vessel is randomly changed.

(Item 6) The method for creating a trained model according to item 1, wherein the image element includes an X-ray scattered radiation component, and the projection image of the image element is generated by a Monte Carlo simulation modeling the imaging environment of the input image.

(Item 7) The method for creating a trained model according to item 6, wherein a plurality of the projection images of the image element are generated by changing the projection angle over a projection angle range that can be imaged by an X-ray imaging apparatus in an imaging environment model.

(Item 8) The method for creating a trained model according to item 6, wherein a plurality of the projection images of the image element are generated by changing the energy spectrum of a virtual radiation source in an imaging environment model.

(Item 9) The method for creating a trained model according to item 1, wherein the machine learning includes inputting the teacher input data and the teacher output data created for each image element into one learning model, and the trained model is configured to extract the plurality of image elements from the input image without duplication and to output each of the extracted plurality of image elements and a residual image element remaining after the extraction.

(Item 10) An image generation method, comprising: extracting a plurality of image elements separately from an X-ray image using a trained model trained in a process of extracting a specific image element from an input image; and generating a processed image in which image processing has been applied to each image element included in the X-ray image, by performing an inter-image calculation using the plurality of extracted images extracted for each image element and the X-ray image.

(Item 11) The image generation method according to item 10, wherein the image processing includes emphasis processing or removal processing.

(Item 12) The image generation method according to item 10, wherein the plurality of image elements include a first element that is a biological tissue and a second element that is a non-biological tissue.

(Item 13) The image generation method according to item 10, wherein the plurality of image elements include at least a plurality of a bone, a blood vessel, a device introduced into the body, clothing, noise, and an X-ray scattered radiation component.

(Item 14) The image generation method according to item 10, wherein image processing is performed separately on a part or all of the plurality of extracted images, and the processed image is generated by an inter-image calculation between the plurality of extracted images after the image processing and the X-ray image.

(Item 15) The image generation method according to item 10, wherein the inter-image calculation includes weighted addition or weighted subtraction of each extracted image with respect to the X-ray image.

(Item 16) The image generation method according to item 10, wherein the trained model is configured to extract the plurality of image elements from the input image without duplication and to output each of the extracted plurality of image elements and a residual image element remaining after the extraction.

(Item 17) The image generation method according to item 10, wherein the trained model is created in advance by machine learning using a reconstructed image reconstructed from three-dimensional image data into a two-dimensional projection image and a projection image created by simulation from a three-dimensional model of the image element.

(Item 18) An image processing device comprising: an image acquisition unit that acquires an X-ray image; an extraction processing unit that separately extracts a plurality of image elements from the X-ray image using a trained model trained in a process of extracting a specific image element from an input image; and an image generation unit that generates a processed image in which image processing has been applied to each image element included in the X-ray image, by performing an inter-image calculation using the plurality of extracted images extracted for each image element and the X-ray image.
20 Extraction processing unit
21 (21-1, 21-2, 21-N) Extracted image
22 Processed image
30 Image generation unit
40 (40-1, 40-2, 40-N) Trained model
50 Image element
51 First element
52 Second element
53 Bone
54 Blood vessel
55 Device
56 Clothing
57 Noise
58 Scattered radiation component
59 Residual image element
60 Reconstructed image
61 Projection image
64 Teacher input data
65 Teacher output data
67 Superimposed image
80 CT image data (three-dimensional X-ray image data)
85 Imaging environment model
100 Image processing device
111 First energy spectrum (energy spectrum)
112 Second energy spectrum (energy spectrum)
200 X-ray imaging apparatus
201 X-ray image
LM Learning model
Claims (18)
- 1. A method for creating a trained model, comprising: generating a reconstructed image by reconstructing three-dimensional X-ray image data into a two-dimensional projection image; generating, by simulation, a two-dimensional projection image from a three-dimensional model of an image element to be extracted; superimposing the projection image of the image element on the reconstructed image to generate a superimposed image; and performing machine learning using the superimposed image as teacher input data and the reconstructed image or the projection image as teacher output data, thereby creating a trained model that performs a process of extracting the image element included in an input image.
- 2. The method for creating a trained model according to claim 1, wherein a plurality of the superimposed images are created for each of a plurality of mutually different image elements, and the plurality of image elements include a first element that is a biological tissue and a second element that is a non-biological tissue.
- 3. The method for creating a trained model according to claim 1, wherein a plurality of the superimposed images are created for each of a plurality of mutually different image elements, and the plurality of image elements include at least a plurality of a bone, a blood vessel, a device introduced into the body, clothing, noise, and an X-ray scattered radiation component.
- 4. The method for creating a trained model according to claim 1, wherein the image element includes a linear or tubular device, and the projection image of the image element is generated by simulating the shape of a three-dimensional model of the device with curves generated based on random coordinate values.
- 5. The method for creating a trained model according to claim 1, wherein the image element includes a blood vessel, and the projection image of the image element is generated by a simulation in which the shape of a three-dimensional model of the blood vessel is randomly changed.
- 6. The method for creating a trained model according to claim 1, wherein the image element includes an X-ray scattered radiation component, and the projection image of the image element is generated by a Monte Carlo simulation modeling the imaging environment of the input image.
- 7. The method for creating a trained model according to claim 6, wherein a plurality of the projection images of the image element are generated by changing the projection angle over a projection angle range that can be imaged by an X-ray imaging apparatus in an imaging environment model.
- 8. The method for creating a trained model according to claim 6, wherein a plurality of the projection images of the image element are generated by changing the energy spectrum of a virtual radiation source in an imaging environment model.
- 9. The method for creating a trained model according to claim 1, wherein the machine learning includes inputting the teacher input data and the teacher output data created for each image element into one learning model, and the trained model is configured to extract the plurality of image elements from the input image without duplication and to output each of the extracted plurality of image elements and a residual image element remaining after the extraction.
- 10. An image generation method, comprising: extracting a plurality of image elements separately from an X-ray image using a trained model trained in a process of extracting a specific image element from an input image; and generating a processed image in which image processing has been applied to each image element included in the X-ray image, by performing an inter-image calculation using the plurality of extracted images extracted for each image element and the X-ray image.
- 11. The image generation method according to claim 10, wherein the image processing includes emphasis processing or removal processing.
- 12. The image generation method according to claim 10, wherein the plurality of image elements include a first element that is a biological tissue and a second element that is a non-biological tissue.
- 13. The image generation method according to claim 10, wherein the plurality of image elements include at least a plurality of a bone, a blood vessel, a device introduced into the body, clothing, noise, and an X-ray scattered radiation component.
- 14. The image generation method according to claim 10, wherein image processing is performed separately on a part or all of the plurality of extracted images, and the processed image is generated by an inter-image calculation between the plurality of extracted images after the image processing and the X-ray image.
- 15. The image generation method according to claim 10, wherein the inter-image calculation includes weighted addition or weighted subtraction of each extracted image with respect to the X-ray image.
- 16. The image generation method according to claim 10, wherein the trained model is configured to extract the plurality of image elements from the input image without duplication and to output each of the extracted plurality of image elements and a residual image element remaining after the extraction.
- 17. The image generation method according to claim 10, wherein the trained model is created in advance by machine learning using a reconstructed image reconstructed from three-dimensional image data into a two-dimensional projection image and a projection image created by simulation from a three-dimensional model of the image element.
- 18. An image processing device comprising: an image acquisition unit that acquires an X-ray image; an extraction processing unit that separately extracts a plurality of image elements from the X-ray image using a trained model trained in a process of extracting a specific image element from an input image; and an image generation unit that generates a processed image in which image processing has been applied to each image element included in the X-ray image, by performing an inter-image calculation using the plurality of extracted images extracted for each image element and the X-ray image.
Priority Applications (5)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/JP2020/007600 WO2021171394A1 (ja) | 2020-02-26 | 2020-02-26 | Creation method of trained model, image generation method, and image processing device |
| CN202080097558.XA CN115209808B (zh) | 2020-02-26 | 2020-02-26 | Creation method of trained model, image generation method, and image processing device |
| US17/802,303 US12400324B2 (en) | 2020-02-26 | 2020-02-26 | Creation method of trained model, image generation method, and image processing device |
| JP2022502635A JPWO2021171394A1 (ja) | 2020-02-26 | 2020-02-26 | |
| JP2024133883A JP2024149738A (ja) | 2020-02-26 | 2024-08-09 | Creation method of trained model and image generation method |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/JP2020/007600 WO2021171394A1 (ja) | 2020-02-26 | 2020-02-26 | Creation method of trained model, image generation method, and image processing device |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2021171394A1 true WO2021171394A1 (ja) | 2021-09-02 |
Family
ID=77490800
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2020/007600 Ceased WO2021171394A1 (ja) | 2020-02-26 | 2020-02-26 | Creation method of trained model, image generation method, and image processing device |
Country Status (4)
| Country | Link |
|---|---|
| US (1) | US12400324B2 (ja) |
| JP (2) | JPWO2021171394A1 (ja) |
| CN (1) | CN115209808B (ja) |
| WO (1) | WO2021171394A1 (ja) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2023161848A1 (en) * | 2022-02-24 | 2023-08-31 | Auris Health, Inc. | Three-dimensional reconstruction of an instrument and procedure site |
| WO2023166417A1 (en) * | 2022-03-01 | 2023-09-07 | Verb Surgical Inc. | Apparatus, systems, and methods for intraoperative instrument tracking and information visualization |
Families Citing this family (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP7352382B2 (ja) * | 2019-05-30 | 2023-09-28 | Canon Inc. | Image processing apparatus, image processing method, and program |
| JP7413216B2 (ja) * | 2020-09-15 | 2024-01-15 | FUJIFILM Corporation | Learning device, method, and program, trained model, and radiation image processing device, method, and program |
| EP4248404B1 (en) * | 2020-11-20 | 2024-06-26 | Koninklijke Philips N.V. | Determining interventional device shape |
| CN116433476B (zh) * | 2023-06-09 | 2023-09-08 | Yofo (Hefei) Medical Technology Co., Ltd. | CT image processing method and device |
| EP4488948A1 (de) * | 2023-07-05 | 2025-01-08 | Ziehm Imaging GmbH | Method for adjusting the visibility of objects in a projection image generated by radiation |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2018159775A1 (ja) * | 2017-03-03 | 2018-09-07 | 国立大学法人筑波大学 | 対象追跡装置 |
| JP2018206382A (ja) * | 2017-06-01 | 2018-12-27 | 株式会社東芝 | 画像処理システム及び医用情報処理システム |
| WO2019138438A1 (ja) * | 2018-01-09 | 2019-07-18 | 株式会社島津製作所 | 画像作成装置 |
| JP2019130275A (ja) * | 2018-01-31 | 2019-08-08 | 株式会社リコー | 医用画像処理装置、医用画像処理方法、プログラム及び医用画像処理システム |
| WO2019176734A1 (ja) * | 2018-03-12 | 2019-09-19 | 東芝エネルギーシステムズ株式会社 | 医用画像処理装置、治療システム、および医用画像処理プログラム |
| JP2020018705A (ja) * | 2018-08-02 | 2020-02-06 | キヤノンメディカルシステムズ株式会社 | 医用画像処理装置、画像生成方法、及び画像生成プログラム |
Family Cites Families (13)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6459457B1 (en) * | 1999-12-21 | 2002-10-01 | Texas Instruments Incorporated | Adaptive color comb filter |
| WO2007029467A1 (ja) | 2005-09-05 | 2007-03-15 | Konica Minolta Medical & Graphic, Inc. | Image processing method and image processing apparatus |
| US9490183B2 (en) * | 2014-05-16 | 2016-11-08 | Tokyo Electron Limited | Nondestructive inline X-ray metrology with model-based library method |
| RU2698997C1 (ru) * | 2016-09-06 | 2019-09-02 | Elekta, Inc. | Neural network for generating synthetic medical images |
| KR101857624B1 (ko) * | 2017-08-21 | 2018-05-14 | Dongguk University Industry-Academic Cooperation Foundation | Medical diagnosis method reflecting clinical information and device using the same |
| US10885629B2 (en) * | 2018-01-31 | 2021-01-05 | Ricoh Company, Ltd. | Medical image processing apparatus, medical image processing method, medium, and medical image processing system |
| US10896508B2 (en) | 2018-02-07 | 2021-01-19 | International Business Machines Corporation | System for segmentation of anatomical structures in cardiac CTA using fully convolutional neural networks |
| US10910099B2 (en) | 2018-02-20 | 2021-02-02 | Siemens Healthcare Gmbh | Segmentation, landmark detection and view classification using multi-task learning |
| CN108898606B (zh) * | 2018-06-20 | 2021-06-15 | South-Central University for Nationalities | Automatic segmentation method, system, device, and storage medium for medical images |
| WO2020056485A1 (en) * | 2018-09-21 | 2020-03-26 | Tenova Goodfellow Inc. | In situ apparatus for furnace off-gas constituent and flow velocity measurement |
| CN112822982B (zh) | 2018-10-10 | 2023-09-15 | Shimadzu Corporation | Image creation device, image creation method, and method for creating trained model |
| KR101981202B1 (ko) | 2018-12-11 | 2019-05-22 | MEDICALIP Co., Ltd. | Medical image reconstruction method and apparatus therefor |
| CN110009613A (zh) * | 2019-03-28 | 2019-07-12 | Southeast University | Low-dose CT imaging method, device, and system based on deep dense networks |
Application Events

- 2020-02-26: WO PCT/JP2020/007600 — filed as WO2021171394A1 (not active, Ceased)
- 2020-02-26: CN 202080097558.XA — granted as CN115209808B (active)
- 2020-02-26: JP 2022502635 — published as JPWO2021171394A1 (pending)
- 2020-02-26: US 17/802,303 — granted as US12400324B2 (active)
- 2024-08-09: JP 2024133883 — published as JP2024149738A (pending)
Non-Patent Citations (2)
| Title |
|---|
| AMBROSINI, PIERRE ET AL.: "Fully Automatic and Real-Time Catheter Segmentation in X-Ray Fluoroscopy", ARXIV:1707.05137V1 [CS.CV], 2017, XP080777275, Retrieved from the Internet <URL:https://arxiv.org/pdf/1707.05137.pdf> [retrieved on 20200420] * |
| ECKERT, DOMINIK ET AL.: "Deep learning-based denoising of mammographic images using physics-driven data augmentation", ARXIV:1912.05240V1 [EESS.IV], 2019, XP055850408, Retrieved from the Internet <URL:https://arxiv.org/pdf/1912.05240.pdf> [retrieved on 20200420] * |
Also Published As
| Publication number | Publication date |
|---|---|
| JP2024149738A (ja) | 2024-10-18 |
| CN115209808B (zh) | 2025-08-01 |
| US20230097849A1 (en) | 2023-03-30 |
| US12400324B2 (en) | 2025-08-26 |
| CN115209808A (zh) | 2022-10-18 |
| JPWO2021171394A1 (ja) | 2021-09-02 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US12400324B2 (en) | Creation method of trained model, image generation method, and image processing device | |
| US9934597B2 (en) | Metal artifacts reduction in cone beam reconstruction | |
| US9754390B2 (en) | Reconstruction of time-varying data | |
| JP5635730B2 (ja) | 画像から関心のある特徴部を抽出するためのシステム及び方法 | |
| CN102483853B (zh) | 用于处理投影数据的装置和方法 | |
| US10147207B2 (en) | System and method for high-temporal resolution, time-resolved cone beam CT angiography | |
| EP1913558B1 (en) | 3d-2d adaptive shape model supported motion compensated reconstruction | |
| JP2017537674A (ja) | 複数回の取得にわたってコントラストを正規化するための方法およびシステム | |
| Schnurr et al. | Simulation-based deep artifact correction with convolutional neural networks for limited angle artifacts | |
| CN101453950A (zh) | 分层运动估计 | |
| JP7657635B2 (ja) | 画像処理装置、学習装置、放射線画像撮影システム、画像処理方法、学習方法、画像処理プログラム、及び学習プログラム | |
| WO2016134127A1 (en) | Common-mask guided image reconstruction for enhanced four-dimensional come-beam computed tomography | |
| EP4160539B1 (en) | Adaptive auto-segmentation in computed tomography | |
| EP4160538A1 (en) | Metal artifact reduction in computed tomography | |
| JP6479919B2 (ja) | 流動データの再構築 | |
| Do et al. | A decomposition-based CT reconstruction formulation for reducing blooming artifacts | |
| Natalinova et al. | Computer spatially oriented reconstruction of a 3d heart shape based on its tomographic imaging | |
| JP2021041090A (ja) | 医用画像処理装置、x線画像処理システム、および、学習モデルの生成方法 | |
| EP4312772B1 (en) | Subtraction imaging | |
| Manhart et al. | Iterative denoising algorithms for perfusion C-arm CT with a rapid scanning protocol | |
| Nikolau | Development of a Prototype 2D Dual-energy Subtraction Angiography System for Image-guided Interventions | |
| WO2025061493A1 (en) | Providing a projection image of a vascular region | |
| Giordano | Perfusion imaging in the peripheral vasculature using interventional C-arm systems | |
| Zhang | Motion and Metal Artifact Correction for Enhancing Plaque Visualization in Coronary Computed Tomography Angiography | |
| Lewis | Lung tumor tracking, trajectory reconstruction, and motion artifact removal using rotational cone-beam projections |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20922377; Country of ref document: EP; Kind code of ref document: A1 |
| | ENP | Entry into the national phase | Ref document number: 2022502635; Country of ref document: JP; Kind code of ref document: A |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 20922377; Country of ref document: EP; Kind code of ref document: A1 |
| | WWG | Wipo information: grant in national office | Ref document number: 202080097558.X; Country of ref document: CN |
| | WWG | Wipo information: grant in national office | Ref document number: 17802303; Country of ref document: US |