WO2019221222A1 - Learning device, learning method, program, learned model, and bone metastasis detection device - Google Patents
Learning device, learning method, program, learned model, and bone metastasis detection device
- Publication number
- WO2019221222A1 (PCT/JP2019/019478; JP2019019478W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- bone metastasis
- patch image
- scintigram
- learning
- region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01T—MEASUREMENT OF NUCLEAR OR X-RADIATION
- G01T1/00—Measuring X-radiation, gamma radiation, corpuscular radiation, or cosmic radiation
- G01T1/16—Measuring radiation intensity
- G01T1/161—Applications in the field of nuclear medicine, e.g. in vivo counting
- G01T1/164—Scintigraphy
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
Definitions
- the present invention relates to a technique for detecting a bone metastasis region from a subject's scintigram.
- As a related study of bone metastasis detection on bone scintigrams, Kawakami's research can be cited (Non-Patent Document 1).
- In Non-Patent Document 1, after segmentation (whole-body skeleton classification) is performed, regions of high accumulation are detected in each of eight skeletal regions using information such as the mean and standard deviation.
- In Non-Patent Document 2, 225 prostate cancer cases were analyzed using whole-body bone scintigrams and the CAD system "BONENAVI version 2.1.7" (FUJIFILM RI Pharma Co., Ltd., Tokyo, Japan).
- ANN: Artificial Neural Network
- BSI: Bone Scan Index
- The ANN value indicates the likelihood of bone metastasis as a continuous value between 0 and 1: "0" means there is no possibility of bone metastasis, and "1" means the suspicion of bone metastasis is high.
- BSI indicates the metastatic tumor burden (the ratio of the bone metastasis region to the whole-body skeleton). The reported detection performance was a sensitivity of 82% (102/124) and a specificity of 83% (84/101).
- A bone metastasis detection support system for bone scintigrams mainly comprises anatomical structure recognition, abnormal-site enhancement (feature extraction), and detection processing. These processes are combined to determine sites suspected of bone metastasis, and the result is presented to the doctor.
- In such systems, a bone metastasis region is displayed on the subject's scintigram, but a non-bone metastasis region (a non-malignant lesion region such as a fracture or inflammation) having density values similar to those of the bone metastasis region may be falsely detected as a bone metastasis region, a so-called "over-pickup" (picking up too much), which lowers the detection accuracy.
- The present invention therefore provides a technique for reducing over-pickup while maintaining the detection rate of the bone metastasis region.
- One aspect of the present invention is a learning device that generates a neural network model for detecting a bone metastasis region from a subject's scintigram. The device comprises an input unit that inputs, as teacher data, scintigrams of a plurality of subjects together with correct labels of the bone metastasis regions and non-bone metastasis regions in each scintigram, and a learning unit that trains, using the teacher data, the neural network model used to detect bone metastasis regions in bone scintigrams.
- Another aspect of the present invention is a learning method for generating a neural network model for detecting a bone metastasis region from a subject's scintigram, comprising a step of inputting, as teacher data, scintigrams of a plurality of subjects and correct labels of the bone metastasis regions and non-bone metastasis regions in each scintigram, and a step of training, using the teacher data, the neural network model used to detect bone metastasis regions in bone scintigrams.
- Another aspect of the present invention is a program for generating a neural network model for detecting a bone metastasis region from a subject's scintigram. The program causes a computer to execute a step of inputting, as teacher data, scintigrams of a plurality of subjects and correct labels of the bone metastasis regions and non-bone metastasis regions in each scintigram, and a step of training, using the teacher data, the neural network model used to detect bone metastasis regions in bone scintigrams.
- Another aspect of the present invention is a learned model for causing a computer to detect a bone metastasis region from a subject's scintigram. The model consists of a neural network that has convolution layers and deconvolution layers and includes a structure in which feature maps obtained by the convolution layers are input to the deconvolution layers, and is trained using, as teacher data, scintigrams of a plurality of subjects and correct labels of the bone metastasis regions and non-bone metastasis regions in each scintigram. The learned model causes a computer to detect a bone metastasis region from a subject's scintigram input to the neural network.
- the bone metastasis area can be appropriately detected from the scintigram of the subject using this model.
- FIG. 1 is a diagram illustrating a configuration of a learning device according to the first embodiment.
- FIG. 2A is a diagram showing a subject's scintigram and a correct answer label.
- FIG. 2B is a diagram illustrating an example of a patch image.
- FIG. 2C is a diagram illustrating another example of the patch image.
- FIG. 3 is a diagram showing the configuration of the neural network model.
- FIG. 4 is a diagram illustrating a configuration of the bone metastasis detection apparatus according to the first embodiment.
- FIG. 5 is a diagram illustrating an example of a patch image cut out from the scintigram.
- FIG. 6 is a diagram illustrating an operation of the learning device according to the first embodiment.
- FIG. 7 is a diagram illustrating an operation of the bone metastasis detection apparatus according to the first embodiment.
- FIG. 8 is a diagram illustrating a configuration of the learning device according to the second embodiment.
- FIG. 9 is a diagram illustrating a configuration of a learning device according to a modification.
- FIG. 10 is a FROC curve showing the relationship between Sensitivity and FP (P) obtained by experiment.
- FIG. 11 is a diagram illustrating a configuration of a learning device according to the third embodiment.
- FIG. 12 is a diagram illustrating a scintigram of a subject input to the learning device according to the third embodiment.
- FIG. 13 is a diagram illustrating a configuration of a neural network model used in the learning device according to the third embodiment.
- FIG. 14 is a diagram illustrating a configuration of a bone metastasis detection apparatus according to the third embodiment.
- FIG. 15 is a diagram illustrating an operation of the learning device according to the third embodiment.
- A learning device according to an embodiment is a learning device that generates a neural network model for detecting a bone metastasis region from a subject's scintigram, comprising an input unit that inputs, as teacher data, scintigrams of a plurality of subjects and correct labels of the bone metastasis regions and non-bone metastasis regions in each scintigram, and a learning unit that trains, using the teacher data, the neural network model used to detect bone metastasis regions in bone scintigrams.
- the non-bone metastasis region is a region having a density value similar to that of the bone metastasis region, but is a region where bone metastasis has not occurred.
- Non-bone metastasis regions include non-malignant lesion regions (fractures, inflammations, etc.).
- the bone metastasis region is also referred to as “abnormal accumulation”.
- the bone metastasis area can be appropriately detected from the scintigram of the subject using this model.
- The learning device may include a patch image creation unit that creates patch images by cutting out regions in which the subject's bones appear from the scintigrams of the plurality of subjects, and the learning unit may perform learning using the patch images and the correct labels corresponding to the patch images as teacher data.
- the memory size required for learning in the neural network model increases as the image size increases. According to the configuration of the embodiment, it is possible to reduce the memory size required at the time of learning by generating a patch image obtained by cutting out an area in which a subject's bone is captured and performing learning using the patch image. Further, since the appearance of the bone metastasis region does not depend so much on the shape of the organ, learning can be performed even when the entire organ is not shown. Therefore, the patch image is suitable for teacher data for learning a neural network model for detecting a bone metastasis region.
- The patch image creation unit may scan a window of a predetermined size over the subject's scintigram and, when the subject's bone appears in the window, cut out the window region as a patch image. By scanning the window and cutting out patch images in this way, patch images can be cut out uniformly from the subject's scintigram.
- The learning device may include a teacher data analysis unit that calculates, among the patch images created by the patch image creation unit, the composition ratio between patch images that include a bone metastasis region or a non-bone metastasis region and patch images that include neither.
- the inventors conducted inference using a model generated by learning with various teacher data, and examined conditions for generating a neural network model that can appropriately detect a bone metastasis region.
- The content of the patch images constituting the teacher data (the composition ratio of patch images including a bone metastasis region or a non-bone metastasis region to patch images including neither) affects the accuracy of the neural network model. By analyzing the teacher data used for learning and displaying its content, the teacher data can be adjusted so that appropriate learning can be performed.
- The learning device may include a patch image selection unit that thins out, from the patch images created by the patch image creation unit, patch images that include neither a bone metastasis region nor a non-bone metastasis region, so that the composition ratio obtained by the teacher data analysis unit falls within a predetermined range. With this configuration, the composition ratio of patch images that include a bone metastasis region is increased.
- the learning apparatus may include a patch image inversion unit that inverts at least a part of the patch image created by the patch image creation unit.
- the inverted patch image may be used, or both the inverted patch image and the patch image before being inverted may be used as teacher data.
- the neural network may include an Encoder-Decoder structure, and a structure in which a feature map obtained by the Encoder structure is input to the Decoder structure.
- The neural network may have a structure in which a first network part having an Encoder-Decoder structure and a second network part having an Encoder-Decoder structure are combined. The input unit inputs, for each subject, the scintigrams taken from the front and from the rear together with their correct labels, and the learning unit may perform learning by inputting the scintigram taken from the front of the subject to the input layer of the first network part and the scintigram taken from the rear of the subject to the input layer of the second network part.
- Alternatively, the neural network may have a structure in which a first network part having an Encoder-Decoder structure and a second network part having an Encoder-Decoder structure are combined, the input unit inputs, for each subject, the scintigrams taken from the front and from the rear together with their correct labels, and the learning unit may perform learning by inputting a first patch image cut out from the scintigram taken from the front to the input layer of the first network part, and a second patch image, cut out from the scintigram taken from the rear and corresponding to the first patch image, to the input layer of the second network part.
- By simultaneously processing the front and rear images with a neural network in which two network parts having an Encoder-Decoder structure are combined, it is possible to generate a neural network model with improved accuracy in discriminating between bone metastasis regions and non-bone metastasis regions.
- Non-malignant lesion regions are included in the non-bone metastasis regions. The input unit may receive, as teacher data, scintigrams of a plurality of subjects to which correct labels of the bone metastasis regions and non-malignant lesion regions are attached, and the learning unit may train, using the teacher data, a neural network model that detects both the bone metastasis regions and the non-malignant lesion regions.
- The bone metastasis detection device of an embodiment comprises a storage unit that stores the learned neural network model trained by the learning device, an input unit that inputs a subject's scintigram, a patch image creation unit that creates patch images from the scintigram, an inference unit that inputs the patch images to the input layer of the learned model read from the storage unit and obtains the bone metastasis regions included in the patch images, and an output unit that outputs data indicating the bone metastasis regions.
- over-pickup can be reduced while maintaining the detection rate of the bone metastasis region.
- A program according to an embodiment is a program for detecting a bone metastasis region from a subject's scintigram. It causes a computer to execute a step of inputting the subject's scintigram, a step of creating patch images from the scintigram, a step of reading the learned model from a storage unit storing the learned neural network model trained by the learning device described above, a step of inputting the patch images to the input layer of the learned model and obtaining the bone metastasis regions included in the patch images, and a step of outputting data indicating the bone metastasis regions.
- Another program according to an embodiment is a program for detecting a bone metastasis region from a subject's scintigram. It causes a computer to execute a step of inputting two scintigrams taken from the front and the rear of the subject, a step of horizontally inverting one of the two scintigrams, a step of reading a learned model from a storage unit storing a learned model generated in advance by learning using teacher data, a step of obtaining the bone metastasis regions included in the scintigrams, and a step of outputting data indicating the bone metastasis regions.
- FIG. 1 is a diagram illustrating a configuration of a learning device 1 according to the first embodiment.
- the learning device 1 according to the first embodiment is a device that generates a neural network model for detecting a bone metastasis region from a subject's scintigram by learning.
- the neural network model generated by the learning device 1 according to the present embodiment is a model for classifying the area of the subject's scintigram into three classes: a bone metastasis area, a non-bone metastasis area, and a background.
- the non-bone metastatic region class includes physiologically accumulated regions such as the kidney and the bladder in addition to the non-malignant lesion region.
- The learning device 1 includes an input unit 10 for inputting teacher data, a control unit 11 for training a neural network model based on the teacher data, a storage unit 16 for storing the model generated by learning, and an output unit 17 that outputs the model stored in the storage unit 16 to the outside.
- FIG. 2A is a diagram illustrating an example of teacher data input to the input unit 10.
- the teacher data includes a subject's scintigram and correct label data given to the subject's scintigram.
- the size of the scintigram is 512 ⁇ 1024 [pixels].
- The correct label specifies, for each pixel, whether the pixel corresponds to accumulation or to the background (non-accumulation). For pixels corresponding to accumulation, it further specifies whether the region is a bone metastasis region, injection leakage or urine leakage, or a non-bone metastasis region. Note that injection leakage and urine leakage are excluded from detection by the bone metastasis detection device 20 of the present embodiment.
- the control unit 11 includes a density normalization processing unit 12, a patch image creation unit 13, a patch image inversion unit 14, and a learning unit 15.
- The density normalization processing unit 12 has a function of normalizing density values in order to suppress variation in the density values of normal bone regions, which differ from subject to subject.
- The density normalization processing unit 12 normalizes density values through density range adjustment, normal bone level identification, and gray-scale normalization processing. For example, the density range is adjusted by linear conversion so that the pixel value at the cumulative top 0.2% of the density histogram (excluding density value 0) of the input image becomes 1023 and the pixel value at the cumulative top 98% becomes 0.
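- The following Python sketch illustrates the density range adjustment step just described, assuming NumPy arrays as input; the function name and the percentile interpretation of "cumulative top 0.2% / 98%" are assumptions for illustration and are not taken from the patent text.

```python
import numpy as np


def adjust_density_range(img: np.ndarray) -> np.ndarray:
    """Linear density-range adjustment: map the pixel value at the cumulative
    top 0.2% of the nonzero-density histogram to 1023 and the value at the
    cumulative top 98% to 0, then clip to [0, 1023]."""
    nonzero = img[img > 0].astype(np.float64)
    hi = np.percentile(nonzero, 100 - 0.2)    # cumulative top 0.2%
    lo = np.percentile(nonzero, 100 - 98.0)   # cumulative top 98%
    scaled = (img.astype(np.float64) - lo) / max(hi - lo, 1e-6) * 1023.0
    return np.clip(scaled, 0.0, 1023.0)
```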
- a multiple threshold method is used for the density histogram excluding the density value 0 of the image after the density range adjustment.
- Threshold values are set at the pixel values from the cumulative top 1% to the cumulative top 25% of the histogram, in increments of 1%.
- 4-connected labeling is performed. For the result, an area having an area of 10 [pixels] or more and less than 4900 [pixels] is extracted.
- the average density values in the obtained area are arranged in descending order to obtain a transition point (between the normal area and the abnormal area).
- the transition point is where two consecutive average density values are 3% or less of the peak pixel.
- the peak pixel is the maximum value of the average density value for each region.
- the patch image creation unit 13 has a function of cutting out and creating a patch image from the subject's scintigram.
- the size of the patch image is 64 ⁇ 64 [pixels].
- a 64 ⁇ 64 [pixels] window is scanned over the subject's scintigram (512 ⁇ 1024 [pixels]) at intervals of 2 [pixels].
- A window region is cut out as a patch image when (1) it contains an accumulation label (a bone metastasis region or a non-bone metastasis region), or (2) it contains a bone region and no accumulation.
- the patch image creation unit 13 cuts out image patches from the input bone scintigrams of a plurality of subjects.
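- A minimal Python sketch of this sliding-window patch extraction is given below; the label codes (BONE, METASTASIS, NON_METASTASIS) and the array layout are illustrative assumptions, not values defined in the patent.

```python
import numpy as np


def extract_patches(scinti: np.ndarray, label: np.ndarray, patch: int = 64, stride: int = 2):
    """Scan a patch x patch window at a fixed stride and keep the window when it
    contains an accumulation label (bone metastasis or non-bone metastasis) or
    a bone region without accumulation (the union of conditions (1) and (2))."""
    BONE, METASTASIS, NON_METASTASIS = 1, 2, 3   # assumed label codes
    patches, targets = [], []
    h, w = scinti.shape
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            win = label[y:y + patch, x:x + patch]
            has_accum = np.any((win == METASTASIS) | (win == NON_METASTASIS))
            has_bone = np.any(win == BONE)
            if has_accum or has_bone:
                patches.append(scinti[y:y + patch, x:x + patch])
                targets.append(win)
    return np.stack(patches), np.stack(targets)
```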
- FIG. 2B and FIG. 2C are diagrams showing examples of image patches cut out from a subject's scintigram and corresponding correct labels.
- the patch image reversing unit 14 has a function of reversing part of the created patch images.
- the learning unit 15 has a function of learning a neural network model for detecting a bone metastasis region from a scintigram using a patch image.
- U-Net, a type of FCN (Fully Convolutional Network), is used as the neural network model.
- FIG. 3 is a diagram showing an example of a neural network model used in the present embodiment.
- FIG. 3 shows an example of the structure when a patch having a patch size of 64 ⁇ 64 [pixel] is input.
- the neural network model used in the present embodiment has an Encoder-Decoder structure.
- In the Encoder structure, convolution and pooling are repeated to extract global features of the image.
- The Decoder structure then restores these features to an image of the original size.
- Local features are also learned by combining the feature maps obtained in the Encoder process into the Decoder.
- To extract more advanced features, the neural network model used in the present embodiment incorporates Bottleneck, a type of residual block (K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," arXiv:1512.03385, 2015).
- the input image is grayscale, and the input dimension is 64 ⁇ 64 ⁇ 1.
- The number of channels is set to 32 by a convolution layer, and the result is passed through Bottleneck. After that, 2 × 2 MAX pooling is performed, and the result is passed through Bottleneck so that the number of channels is doubled. These layers are repeated a total of 4 times, and the final feature map size of the Encoder is 4 × 4 × 512.
- the size of the feature map is doubled using a deconvolution layer.
- the output of the deconvolution layer and the Encoder feature map are concatenated and passed through Bottleneck. Similar to the Encoder, these layers are repeated a total of 4 times, and the final feature map size of the Decoder is 64 ⁇ 64 ⁇ 32.
- The output has 3 channels: background, bone metastasis region, and non-bone metastasis region.
- Zero padding is used in all 3 × 3 convolution layers, each followed by Batch Normalization (S. Ioffe and C. Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift," arXiv:1502.03167, 2015) and a ReLU activation.
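- A compact sketch of this Encoder-Decoder network in Python with PyTorch follows. The framework, class names, and exact placement of normalization layers are assumptions; the channel progression (32 to 512 over four pooling steps, concatenation skip connections, Bottleneck residual blocks, 3-channel output) follows the description above.

```python
import torch
import torch.nn as nn


class Bottleneck(nn.Module):
    """Residual bottleneck block: 1x1 -> 3x3 -> 1x1 convolutions plus a skip connection."""
    def __init__(self, ch: int):
        super().__init__()
        mid = max(ch // 4, 8)
        self.body = nn.Sequential(
            nn.Conv2d(ch, mid, 1), nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, 3, padding=1), nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, ch, 1), nn.BatchNorm2d(ch))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))


class PatchUNet(nn.Module):
    """64x64x1 patch in, 3-class per-pixel logits out (background / bone metastasis / non-bone metastasis)."""
    def __init__(self, base: int = 32, depth: int = 4, num_classes: int = 3):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(1, base, 3, padding=1), Bottleneck(base))
        self.downs = nn.ModuleList()
        ch = base
        for _ in range(depth):                       # encoder: 64x64x32 -> 4x4x512
            self.downs.append(nn.Sequential(
                nn.MaxPool2d(2),
                nn.Conv2d(ch, ch * 2, 3, padding=1),
                Bottleneck(ch * 2)))
            ch *= 2
        self.ups, self.decs = nn.ModuleList(), nn.ModuleList()
        for _ in range(depth):                       # decoder: back to 64x64x32
            self.ups.append(nn.ConvTranspose2d(ch, ch // 2, 2, stride=2))
            self.decs.append(nn.Sequential(
                nn.Conv2d(ch, ch // 2, 3, padding=1),  # ch = deconv output + encoder skip
                Bottleneck(ch // 2)))
            ch //= 2
        self.head = nn.Conv2d(ch, num_classes, 1)

    def forward(self, x):
        skips = []
        x = self.stem(x)
        for down in self.downs:
            skips.append(x)
            x = down(x)
        for up, dec, skip in zip(self.ups, self.decs, reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))
        return self.head(x)


if __name__ == "__main__":
    out = PatchUNet()(torch.zeros(1, 1, 64, 64))
    print(out.shape)   # torch.Size([1, 3, 64, 64])
```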
- The learning unit 15 trains the neural network model using the patch images (including those horizontally reversed by the patch image reversing unit 14) and their correct labels. Learning is performed by evaluating the error (loss function) between the probability p_i, obtained by applying the softmax function to the network output when a patch image is input, and the correct-answer probability.
- the softmax function and the loss function are shown below.
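- The equations themselves are not reproduced in this text. A standard formulation consistent with the description above (an assumption; the patent may use a per-pixel or weighted variant) is

$$p_i = \frac{\exp(x_i)}{\sum_{j}\exp(x_j)}, \qquad E = -\sum_{i} t_i \log p_i,$$

where $x_i$ is the network output for class $i$, $p_i$ the softmax probability, and $t_i$ the one-hot correct label.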
- the learning unit 15 verifies the learned network using a verification data set (validation set).
- Models trained with the teacher data are saved at arbitrary numbers of iterations, and the parameters of all saved models are evaluated on the validation set.
- The number of training iterations is determined using, as the evaluation value, FP(P) + FN(P), the sum of the pixel-level over-pickups FP(P) and the pixel-level misses FN(P).
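- A small Python sketch of this evaluation value is shown below; the class encoding (1 = bone metastasis) is an assumption.

```python
import numpy as np


def fp_fn_pixels(pred: np.ndarray, truth: np.ndarray, positive_class: int = 1) -> int:
    """Return FP(P) + FN(P): pixel-level over-pickups plus pixel-level misses
    of the bone metastasis class."""
    pred_pos = pred == positive_class
    true_pos = truth == positive_class
    fp = int(np.sum(pred_pos & ~true_pos))   # over-pickup
    fn = int(np.sum(~pred_pos & true_pos))   # overlooked
    return fp + fn
```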
- the learning unit 15 stores the model generated by learning in the storage unit 16.
- the configuration of the learning device 1 according to the present embodiment has been described above.
- The hardware of the learning device 1 described above can be a computer including a CPU, RAM, ROM, a hard disk, a display, a keyboard, a mouse, a communication interface, and the like.
- the learning device 1 described above is realized by storing a program having a module for realizing each function described above in a RAM or ROM and executing the program by the CPU. Such a program is also included in the scope of the present invention.
- FIG. 4 is a diagram illustrating a configuration of the bone metastasis detection apparatus 20.
- The bone metastasis detection device 20 includes an input unit 21 that inputs a subject's scintigram, a control unit 22 that detects bone metastasis regions from the subject's scintigram, and a learned model storage unit 26 that stores the learned model trained by the learning device 1 described above.
- the control unit 22 includes a density normalization processing unit 23, a patch image creation unit 24, and an inference unit 25.
- The density normalization processing unit 23 is the same as the density normalization processing unit 12 included in the learning device 1.
- the patch image creation unit 24 has a function of cutting out a 64 ⁇ 64 [pixels] patch image from the input subject's scintigram.
- the basic configuration is the same as that of the patch image creation unit 13 provided in the learning apparatus 1, but the interval at which the patch image is cut out is different. That is, while the learning device 1 cuts out at intervals of 2 [pixels], the bone metastasis detection device 20 cuts out patch images at intervals of 32 [pixels].
- The inference unit 25 reads the learned model from the learned model storage unit 26, inputs the patch image to the input layer of the learned model, and obtains, for each pixel of the patch image, the probability of belonging to each of the classes background, bone metastasis region, and non-bone metastasis region.
- FIG. 5 is a diagram showing an example of a patch image cut out from the scintigram.
- the patch images are cut out from the subject's scintigram so that the patch images adjacent to each other overlap each other. Therefore, for example, in the region R, the patch images A to D are overlapped, and the feature map of the pixels in the region R is obtained for each of the four patch images A to D.
- the inference unit 25 takes the average of the feature maps obtained for each of the four patch images. Then, the inference unit 25 converts the reconstructed output into a probability using a softmax function, determines a class having the highest probability for each pixel, and sets it as a final output.
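- The following Python sketch shows one way to merge overlapping patch outputs as described: average the class maps where patches overlap, apply the softmax, and take the per-pixel argmax. The data layout (a list of per-patch class maps plus top-left positions) is an assumption.

```python
import numpy as np


def merge_patch_outputs(class_maps, positions, image_shape, num_classes=3):
    """class_maps: list of (num_classes, 64, 64) arrays; positions: list of (y, x)
    top-left corners of the corresponding patches."""
    h, w = image_shape
    accum = np.zeros((num_classes, h, w))
    count = np.zeros((h, w))
    for cmap, (y, x) in zip(class_maps, positions):
        ph, pw = cmap.shape[1:]
        accum[:, y:y + ph, x:x + pw] += cmap
        count[y:y + ph, x:x + pw] += 1
    avg = accum / np.maximum(count, 1)                 # average in overlapping regions
    exp = np.exp(avg - avg.max(axis=0, keepdims=True))
    prob = exp / exp.sum(axis=0, keepdims=True)        # softmax over classes
    return prob.argmax(axis=0)                         # final class per pixel
```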
- FIG. 6 is a diagram illustrating the operation of the learning device 1.
- The learning device 1 inputs scintigrams of a plurality of subjects and the corresponding correct labels (background, bone metastasis region, non-bone metastasis region) as teacher data (S10).
- the learning device 1 normalizes the density of the input scintigram (S11), and creates a patch image from the normalized scintigram (S12).
- the learning device 1 horizontally flips a part of the created patch images (S13). Subsequently, the learning device 1 learns the neural network model using the patch image and the correct answer label corresponding to the patch image (S14), and stores the neural network model obtained by the learning in the storage unit 16 (S15).
- The learned model stored in the storage unit 16 can be read out and output to an external device or the like.
- FIG. 7 is a diagram illustrating the operation of the bone metastasis detection apparatus 20.
- the bone metastasis detection apparatus 20 inputs the scintigram of the subject to be examined (S20).
- The bone metastasis detection device 20 normalizes the density of the input scintigram (S21), and creates patch images from the normalized scintigram (S22).
- The bone metastasis detection device 20 reads the learned neural network model from the storage unit 26, inputs the patch images to the input layer of the read model, and detects, for each pixel included in the patch images, whether it belongs to a bone metastasis region (S23).
- the bone metastasis detection apparatus 20 integrates detection results for pixels in a region where a plurality of patch images overlap (S24).
- the bone metastasis detection apparatus 20 outputs the final result of the obtained bone metastasis region (S25).
- the learning device 1 learns a neural network model using a subject's scintigram and a corresponding correct answer label. By using this learned model, it is possible to reduce so-called “too much picking up” and to appropriately detect the bone metastasis region.
- the learning device 1 of the first embodiment can reduce the memory size required for learning by performing learning using a patch image cut out from the subject's scintigram.
- appropriate learning can be performed even if learning is performed by dividing into patch images.
- the learning device 1 increases the variation of the teacher data by horizontally reversing some of the patch images, and obtains a robust learning result.
- the patch image may be vertically reversed.
- the method using the vertically inverted patch image is suitable when the anatomical structure of the background bone is vertically symmetric (for example, when examining the accumulation of extremities extending in the vertical direction).
- FIG. 8 is a diagram illustrating a configuration of the learning device 2 according to the second embodiment.
- The neural network model generated by the learning device 2 of the second embodiment is a model for classifying the regions of a subject's scintigram into three classes: bone metastasis region, non-bone metastasis region, and background.
- The basic configuration of the learning device 2 of the second embodiment is the same as that of the learning device 1 of the first embodiment, but the learning device 2 further includes a teacher data analysis unit 18 that analyzes the contents of the many patch images serving as teacher data.
- The many patch images include both patch images that contain a bone metastasis region or a non-bone metastasis region and patch images that contain neither.
- The teacher data analysis unit 18 obtains the composition ratio, among the many patch images serving as teacher data, between patch images that include a bone metastasis region or a non-bone metastasis region and patch images that include neither.
- The output unit 17 outputs the composition ratio data of the patch images used to generate the learned model stored in the storage unit 16.
- By outputting the composition ratio of the patch images used to generate the learned model in this way, when the detection accuracy of the bone metastasis region obtained with the learned model does not improve and a new learned model must be generated, hints on how to change the teacher data for learning can be obtained.
- In the example above, a user who reviews the composition ratio of the patch images changes the teacher data; alternatively, the learning device 2 may change the teacher data based on the composition ratio of the patch images.
- FIG. 9 is a diagram showing a learning device 3 which is a modification of the second embodiment.
- the learning device 3 according to the modification includes a patch image selection unit 19 in addition to the configuration of the learning device 2 of the second embodiment.
- the patch image selection unit 19 has a function of selecting a patch image used for learning based on the analysis result by the teacher data analysis unit 18. According to the studies by the present inventors, it is considered that an appropriate model cannot be generated if there are too many patch images that do not include any of the bone metastasis region and the non-bone metastasis region.
- When the composition ratio of patch images that include neither a bone metastasis region nor a non-bone metastasis region is equal to or greater than a predetermined threshold, the learning device 3 selects the patch images to be used for learning instead of using all such patch images. This increases the likelihood of generating a model with high detection accuracy for the bone metastasis region.
- FIG. 11 is a diagram illustrating the learning device 4 according to the third embodiment.
- the learning device 4 of the third embodiment uses Butterfly-Net as a model of a neural network to be learned.
- Butterfly-Net has a structure in which two network parts having an Encoder-Decoder structure are combined.
- Butterfly-Net is described in detail in "Btrfly Net: Vertebrae Labeling with Energy-based Adversarial Learning of Local Spine Prior," Anjany Sekuboyina et al., MICCAI 2018.
- The learning device 4 includes an input unit 40 for inputting teacher data, a control unit 41 for training a neural network model based on the teacher data, a storage unit 47 for storing the model generated by learning, and an output unit 48 that outputs the model stored in the storage unit 47 to the outside.
- The learning device 4 of the third embodiment generates a model for classifying the regions of a subject's scintigram into three classes: bone metastasis region, non-malignant lesion region (fracture, inflammation, etc.), and other regions (physiological accumulation regions such as the kidneys and bladder, injection leakage / urine leakage, and background).
- the physiological accumulation region is included in the class of the other region, and is classified into a class different from the non-malignant lesion region.
- The learning device 4 uses, as teacher data, a scintigram of the subject photographed from the front (hereinafter, "front image"), a scintigram of the subject photographed from the rear (hereinafter, "rear image"), and the correct labels given to each scintigram.
- FIG. 12 is a diagram illustrating an example of a front image and a rear image. The rear image is inverted in the horizontal direction.
- the control unit 41 includes an image inverting unit 42, a front / rear image alignment unit 43, a density normalization processing unit 44, a patch image creation unit 45, and a learning unit 46.
- the image reversing unit 42 has a function of inverting the rear image.
- the front and rear image alignment unit 43 performs alignment between the front image and the inverted rear image.
- the rear image is inverted and aligned with the front image.
- the front image may be inverted and aligned with the rear image.
- The density normalization processing unit 44 has a function of normalizing density values in order to suppress variation in the density values of normal bone regions, which differ from subject to subject.
- the density normalization processing unit 44 normalizes density values by density range adjustment, normal bone level identification, and gray scale normalization processing.
- The density normalization processing unit 44 converts the input scintigram density I_in into a normalized density I_normalized according to equation (3).
- the patch image creation unit 45 has a function of cutting out and creating a patch image from the subject's scintigram.
- the patch image creation unit 45 cuts out patch images at corresponding locations from the front image and the rear image, and generates a pair of front and rear patch images.
- a patch image A obtained from the front image and a patch image A ′ obtained from the rear image are paired patch images.
- the patch image B and the patch image B ′ are also a pair of patch images.
- the size of the patch image is 64 ⁇ 64 [pixels].
- a 64 ⁇ 64 [pixels] window is scanned over the subject's scintigram (512 ⁇ 1024 [pixels]) at intervals of 2 [pixels].
- A window region is cut out as a patch image when (1) it contains an accumulation label (a bone metastasis region or a non-bone metastasis region), or (2) it contains a bone region and no accumulation.
- the paired patch image is cut out from the other of the front image and the rear image.
- the learning unit 46 has a function of learning a neural network model for detecting a bone metastasis region from a scintigram using a patch image.
- Butterfly-Net having a structure in which two U-Nets are combined is used as a neural network model.
- FIG. 13 is a diagram illustrating an example of Butterfly-Net used in the present embodiment.
- The upper half of Butterfly-Net has a downward-convex configuration and has almost the same structure as the network shown in FIG. 3.
- the lower side of Butterfly-Net has an upwardly convex configuration, and has the same structure as the network shown in FIG. 3 (only drawn upside down).
- Butterfly-Net combines the two U-Nets at their 8 × 8 × 128 feature maps.
- To extract more advanced features, the neural network model used in the present embodiment incorporates Bottleneck, a type of residual block (K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," arXiv:1512.03385, 2015).
- the improved Butterfly-Net is referred to as “ResButterfly-Net” in this document.
- the input image is grayscale, and the input dimension is 64 ⁇ 64 ⁇ 1.
- The number of channels is set to 32 by a convolution layer, and the result is passed through Bottleneck.
- Then 2 × 2 MAX pooling is performed, and the result is passed through Bottleneck so that the number of channels is doubled.
- the process of repeating these layers three times in total is performed on each of the upper and lower U-Nets.
- A feature map of size 8 × 8 × 128 is obtained from each arm, the upper and lower U-Net feature maps are combined, and Bottleneck and MAX pooling are applied twice more, so that the Encoder finally yields a feature map of size 2 × 2 × 512.
- FIG. 13 labels the bone metastasis region (bone metastatic lesion) and the non-malignant lesion region; portions other than the bone metastasis region and the non-malignant lesion region are the other regions.
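- A minimal PyTorch sketch of this two-armed structure is shown below. For brevity it uses plain convolution blocks instead of Bottleneck blocks and omits the encoder-to-decoder skip connections, so it illustrates only the fusion of the front and rear arms at the 8 × 8 × 128 level; the framework and all names are assumptions.

```python
import torch
import torch.nn as nn


def conv_block(cin: int, cout: int):
    """3x3 conv + BN + ReLU (a simplification of the Bottleneck blocks in the text)."""
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))


class MiniButterflyNet(nn.Module):
    """Two encoder arms (front / rear patch) fused at 8x8, a shared deeper encoder
    down to 2x2x512, and two decoder arms producing 3-class maps."""
    def __init__(self, num_classes: int = 3):
        super().__init__()

        def arm_encoder():
            return nn.Sequential(
                conv_block(1, 32),                      # 64x64x32
                nn.MaxPool2d(2), conv_block(32, 64),    # 32x32x64
                nn.MaxPool2d(2), conv_block(64, 128),   # 16x16x128
                nn.MaxPool2d(2), conv_block(128, 128))  # 8x8x128

        def arm_decoder():
            return nn.Sequential(
                nn.ConvTranspose2d(512, 256, 2, stride=2), conv_block(256, 256),  # 4x4
                nn.ConvTranspose2d(256, 128, 2, stride=2), conv_block(128, 128),  # 8x8
                nn.ConvTranspose2d(128, 64, 2, stride=2), conv_block(64, 64),     # 16x16
                nn.ConvTranspose2d(64, 32, 2, stride=2), conv_block(32, 32),      # 32x32
                nn.ConvTranspose2d(32, 32, 2, stride=2), conv_block(32, 32),      # 64x64
                nn.Conv2d(32, num_classes, 1))

        self.enc_front, self.enc_back = arm_encoder(), arm_encoder()
        self.shared = nn.Sequential(
            conv_block(256, 256), nn.MaxPool2d(2),      # 4x4x256
            conv_block(256, 512), nn.MaxPool2d(2),      # 2x2x512
            conv_block(512, 512))
        self.dec_front, self.dec_back = arm_decoder(), arm_decoder()

    def forward(self, front, back):
        fused = torch.cat([self.enc_front(front), self.enc_back(back)], dim=1)  # 8x8x256
        z = self.shared(fused)
        return self.dec_front(z), self.dec_back(z)
```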
- The learning unit 46 trains the neural network model using the paired front and rear patch images and their correct labels. Learning is performed by evaluating the error (loss function) between the probability p_i, obtained by applying the softmax function to the network output when a pair of patch images is input, and the correct-answer probability.
- the loss function is shown below.
- w_c is the weight of class c, used to reduce the influence of the difference in the number of pixels between classes.
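- The loss function itself is not reproduced in this text. A weighted cross-entropy consistent with the description of $w_c$ (an assumed standard form, summing over pixels $i$ and classes $c$) is

$$E = -\sum_{i}\sum_{c} w_c \, t_{i,c} \log p_{i,c},$$

where $t_{i,c}$ is the one-hot correct label and $p_{i,c}$ the softmax output for pixel $i$ and class $c$.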
- FIG. 14 is a diagram illustrating a configuration of a bone metastasis detection apparatus 50 according to the third embodiment.
- The bone metastasis detection device 50 includes an input unit 51 that inputs a subject's scintigrams, a control unit 52 that detects bone metastasis regions from the subject's scintigrams, a learned model storage unit 58 that stores the learned model trained by the learning device 4 described above, and an output unit 59 for outputting data of the detected bone metastasis regions.
- the control unit 52 includes an image inverting unit 53, a front / rear image alignment unit 54, a density normalization processing unit 55, a patch image creation unit 56, and an inference unit 57.
- the image reversing unit 53, the front and rear image alignment unit 54, and the density normalization processing unit 55 are the same as the image reversing unit 42, the front and rear image alignment unit 43, and the density normalization processing unit 44 provided in the learning device 4.
- the patch image creation unit 56 has a function of cutting out a patch image from the input subject's scintigram (front image and rear image).
- The patch image creation unit 56 generates pairs of patch images by cutting out corresponding regions from the front and rear scintigrams. Note that the patch image creation unit 56 may use a different cut-out interval from that of the patch image creation unit 45 included in the learning device 4.
- The inference unit 57 reads the learned model from the learned model storage unit 58, inputs a pair of patch images to the input layer of the learned model, and obtains, for each pixel of the patch images, the probability of belonging to each of the classes bone metastasis region, non-malignant lesion region, and other region.
- FIG. 15 is a diagram illustrating the operation of the learning device 4.
- The learning device 4 inputs the scintigrams (front images and rear images) of a plurality of subjects and the corresponding correct labels (bone metastasis region, non-malignant lesion region, other region) as teacher data (S30).
- the learning device 4 inverts the rear image (S31), and performs alignment between the front image and the inverted rear image (S32).
- the learning device 4 normalizes the density of the input front image and rear image (S33), cuts out corresponding regions of the previous and next images, and generates a plurality of pairs of patch images (S34).
- the learning device 4 learns the neural network model using the patch image and the correct answer label corresponding to the patch image (S35).
- a pair of front and back patch images are input to the input layer of Butterfly-Net, and learning is performed based on the class and correct data output from the output layer.
- the learning device 4 stores the neural network model obtained by learning in the storage unit 47 (S36).
- the learning model stored in the storage unit 47 is read and output to another device or the like.
- The learning device 4 uses a Butterfly-Net neural network model as the learning model and performs learning by inputting pairs of front and rear patch images to the input layer. By processing the front and rear patch images simultaneously, a neural network model that can accurately detect the bone metastasis region can be generated.
- When patch image inversion is used, some of the many patch images are horizontally inverted.
- the patch image that is the teacher data is a pair of front and rear images. Therefore, when the front or rear patch image is inverted, the other patch image is also inverted in the same direction.
- robust learning can be performed by increasing the teacher data by inverting the patch image.
- For the learning device 4 of the third embodiment, an example was described in which the learned model classifies regions into classes different from those of the first embodiment; it is of course also possible to generate a learned model that classifies into the same classes as the first embodiment. Conversely, in the first and second embodiments, injection leakage and urine leakage are excluded, but by treating physiological accumulation in the kidneys and bladder, injection leakage / urine leakage, and the background as the other regions, a model that classifies into the same classes as the third embodiment can be generated.
- Example 1 An example in which a bone metastasis region is detected using a learned model generated by using the learning device 1 according to the first embodiment will be described.
- As the learned models, a model generated using teacher data in which some patch images were inverted and a model generated using teacher data without patch image inversion were both used to detect bone metastasis regions.
- Data: frontal bone scintigrams with normalized density values, 103 cases; image size: 512 × 1024 [pixels]; resolution: 2.8 × 2.8 [mm/pixel]; patch size: 64 × 64 [pixels]
- the verification data is data for determining the number of learning iterations.
- For comparison, MadaBoost, which performs detection based on the classification results of many weak classifiers, was used (C. Domingo and O. Watanabe, "MadaBoost: A modification of AdaBoost," Proc. Thirteenth Annual Conference on Computational Learning Theory, pp. 180-189, 2000).
- the algorithm of MadaBoost used the method developed by the inventors' laboratory (Yuta Minami “Improved bone metastasis detection process on bone scintigram”, 3rd Nuclear Medicine Image Analysis Software Development Conference).
- FIG. 10 is a FROC curve showing the relationship between Sensitivity and FP (P) obtained by experiment.
- The FROC curve in FIG. 10 shows that as the Sensitivity on the vertical axis increases, the over-pickup, i.e., picking up non-bone metastasis regions as bone metastasis regions, also increases.
- At a Sensitivity of 0.8, the over-pickup in the example was 200 pixels or less.
- With MadaBoost, an over-pickup of 500 pixels or more occurred, whereas the example was able to suppress over-pickup.
- In FIG. 10, the U-Net (Flip) curve shows the detection result using the learned model generated with teacher data in which some patch images were inverted, and the U-Net curve shows the detection result using the learned model generated without patch image inversion.
- Example 2 An example in which a bone metastasis region is detected using a learned model generated using the learning device 4 according to the third embodiment will be described.
- As the learned models, ResButterfly-Net described in the third embodiment and a Butterfly-Net in which the Bottleneck blocks of ResButterfly-Net were replaced with convolution layers were used. U-Net was also used as a comparative example.
- the verification data is data for determining the optimal number of iterations for learning.
- Table 1 shows each evaluation value when the sensitivity for the bone metastasis region is 0.9; the top row is the result for the front image and the bottom row the result for the rear image. As Table 1 shows, it was confirmed for many indicators that ResButterfly-Net and Butterfly-Net produced fewer errors when detecting hot spots than the model using U-Net.
- (1) A learning device for generating a neural network model for detecting abnormal accumulation from a subject's scintigram, comprising: an input unit that inputs, as teacher data, scintigrams of a plurality of subjects and correct labels of normal accumulation and abnormal accumulation in each scintigram; and a learning unit that trains, using the teacher data, the neural network model used to detect abnormal accumulation in bone scintigrams.
- (2) The learning device according to (1), further comprising a patch image creation unit that creates patch images by cutting out regions in which the subject's bones appear from the scintigrams of the plurality of subjects, wherein the learning unit performs learning using the patch images and the correct labels corresponding to the patch images as teacher data.
- (3) The learning device according to (2), wherein the patch image creation unit scans a window of a predetermined size over the subject's scintigram and, when the subject's bone appears in the window, cuts out the window region as a patch image.
- (4) The learning device according to (2) or (3), further comprising a teacher data analysis unit that obtains the composition ratio between patch images including normal accumulation or abnormal accumulation and patch images including neither normal accumulation nor abnormal accumulation.
- (5) The learning device according to (4), further comprising a patch image selection unit that thins out patch images including neither normal accumulation nor abnormal accumulation so that the composition ratio falls within a predetermined range.
- (6) The learning device according to any one of (1) to (5), further comprising a patch image inversion unit that inverts at least some of the patch images created by the patch image creation unit.
- (7) The learning device according to any one of (1) to (6), wherein the neural network has an Encoder-Decoder structure and includes a structure in which a feature map obtained by the Encoder structure is input to the Decoder structure.
- (8) A learning method for generating a neural network model for detecting abnormal accumulation from a subject's scintigram, comprising: a step of inputting, as teacher data, scintigrams of a plurality of subjects and correct labels of normal accumulation and abnormal accumulation in each scintigram; and a step of training, using the teacher data, the neural network model used to detect abnormal accumulation in bone scintigrams.
- (9) A program for generating a neural network model for detecting abnormal accumulation from a subject's scintigram, causing a computer to execute: a step of inputting, as teacher data, scintigrams of a plurality of subjects and correct labels of normal accumulation and abnormal accumulation in each scintigram; and a step of training, using the teacher data, the neural network model used to detect abnormal accumulation in bone scintigrams.
- (10) A learned model for causing a computer to detect abnormal accumulation from a subject's scintigram, consisting of a neural network that has a convolution layer and a deconvolution layer and includes a structure in which a feature map obtained by the convolution layer is input to the deconvolution layer.
- An abnormal accumulation detection apparatus comprising: a storage unit storing a learned model of a neural network trained by the learning device according to any one of (2) to (7); an inference unit that inputs a patch image to the input layer of the learned model read from the storage unit and obtains an abnormal accumulation region included in the patch image; and an output unit that outputs data indicating the abnormal accumulation region.
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Molecular Biology (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Optics & Photonics (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- High Energy & Nuclear Physics (AREA)
- Medical Informatics (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Nuclear Medicine (AREA)
Abstract
The invention relates to a learning device (1) capable of generating a neural network model for detecting a bone metastasis region from a subject's scintigram. The learning device (1) comprises: an input unit (10) for inputting, as teacher data, scintigrams of a plurality of subjects and correct labels of the bone metastasis regions and non-bone metastasis regions in the scintigrams; a patch image creation unit (13) for creating patch images by cutting out regions in which the subject's bones appear from the scintigrams of the plurality of subjects; and a learning unit (15) for training a neural network model used to detect bone metastasis regions in bone scintigrams, using the patch images and the correct labels corresponding to them as teacher data.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2020519910A JP7352261B2 (ja) | 2018-05-18 | 2019-05-16 | 学習装置、学習方法、プログラム、学習済みモデルおよび骨転移検出装置 |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2018-096186 | 2018-05-18 | ||
| JP2018096186 | 2018-05-18 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2019221222A1 true WO2019221222A1 (fr) | 2019-11-21 |
Family
ID=68539742
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2019/019478 Ceased WO2019221222A1 (fr) | 2018-05-18 | 2019-05-16 | Dispositif d'apprentissage, procédé d'apprentissage, programme, modèle appris, et dispositif de détection de métastase osseuse |
Country Status (3)
| Country | Link |
|---|---|
| JP (1) | JP7352261B2 (fr) |
| TW (1) | TW202004572A (fr) |
| WO (1) | WO2019221222A1 (fr) |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111627554A (zh) * | 2020-05-28 | 2020-09-04 | 浙江德尔达医疗科技有限公司 | 一种基于深度卷积神经网络的骨折图像自动分类系统 |
| JP2022017098A (ja) * | 2020-07-13 | 2022-01-25 | 繁 塩澤 | データ生成装置、検出装置、及びプログラム |
| JP2022169484A (ja) * | 2021-04-27 | 2022-11-09 | インターナショナル・ビジネス・マシーンズ・コーポレーション | コンピュータ実施方法、コンピュータプログラム、コンピュータシステム(光学式文字認識のための文書セグメンテーション) |
| JP2023025415A (ja) * | 2021-08-10 | 2023-02-22 | 株式会社スタージェン | プログラム、記憶媒体、システム、学習済モデルおよび判定方法 |
| JP2024000482A (ja) * | 2022-06-20 | 2024-01-05 | 緯創資通股▲ふん▼有限公司 | 医療画像の処理方法及び医療画像処理用コンピュータ装置 |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2007029467A1 (fr) * | 2005-09-05 | 2007-03-15 | Konica Minolta Medical & Graphic, Inc. | Procédé et dispositif de traitement d'image |
| JP2016142664A (ja) * | 2015-02-04 | 2016-08-08 | 日本メジフィジックス株式会社 | 核医学画像解析技術 |
| WO2018008593A1 (fr) * | 2016-07-04 | 2018-01-11 | 日本電気株式会社 | Dispositif d'apprentissage de diagnostic par l'image, dispositif de diagnostic par l'image, procédé de diagnostic par l'image, et support d'enregistrement pour stocker un programme |
| JP6294529B1 (ja) * | 2017-03-16 | 2018-03-14 | 阪神高速技術株式会社 | ひび割れ検出処理装置、およびひび割れ検出処理プログラム |
-
2019
- 2019-05-16 WO PCT/JP2019/019478 patent/WO2019221222A1/fr not_active Ceased
- 2019-05-16 TW TW108117252A patent/TW202004572A/zh unknown
- 2019-05-16 JP JP2020519910A patent/JP7352261B2/ja active Active
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2007029467A1 (fr) * | 2005-09-05 | 2007-03-15 | Konica Minolta Medical & Graphic, Inc. | Procédé et dispositif de traitement d'image |
| JP2016142664A (ja) * | 2015-02-04 | 2016-08-08 | 日本メジフィジックス株式会社 | 核医学画像解析技術 |
| WO2018008593A1 (fr) * | 2016-07-04 | 2018-01-11 | 日本電気株式会社 | Dispositif d'apprentissage de diagnostic par l'image, dispositif de diagnostic par l'image, procédé de diagnostic par l'image, et support d'enregistrement pour stocker un programme |
| JP6294529B1 (ja) * | 2017-03-16 | 2018-03-14 | 阪神高速技術株式会社 | ひび割れ検出処理装置、およびひび割れ検出処理プログラム |
Non-Patent Citations (2)
| Title |
|---|
| "Image recognition by deep learning 9-convolutional neural network by Keras", DEEP LEARNING, vol. 5, 3 August 2017 (2017-08-03), Retrieved from the Internet <URL:https://lp-tech.net/articles/5MIeh> [retrieved on 20190628] * |
| KAWAKAMI KAZUKIMI: "Analysis of nuclear medicine image using artificial intelligence", MEDICAL IMAGING AND INFORMATION SCIENCES, vol. 34, no. 2, June 2017 (2017-06-01), pages 100 - 102 * |
Cited By (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111627554A (zh) * | 2020-05-28 | 2020-09-04 | 浙江德尔达医疗科技有限公司 | 一种基于深度卷积神经网络的骨折图像自动分类系统 |
| JP2022017098A (ja) * | 2020-07-13 | 2022-01-25 | 繁 塩澤 | データ生成装置、検出装置、及びプログラム |
| JP7491755B2 (ja) | 2020-07-13 | 2024-05-28 | 繁 塩澤 | データ生成装置、検出装置、及びプログラム |
| JP2022169484A (ja) * | 2021-04-27 | 2022-11-09 | インターナショナル・ビジネス・マシーンズ・コーポレーション | コンピュータ実施方法、コンピュータプログラム、コンピュータシステム(光学式文字認識のための文書セグメンテーション) |
| JP2023025415A (ja) * | 2021-08-10 | 2023-02-22 | 株式会社スタージェン | プログラム、記憶媒体、システム、学習済モデルおよび判定方法 |
| JP7757076B2 (ja) | 2021-08-10 | 2025-10-21 | 英毅 森 | プログラム、記憶媒体、システム、学習済モデルおよび判定方法 |
| JP2024000482A (ja) * | 2022-06-20 | 2024-01-05 | 緯創資通股▲ふん▼有限公司 | 医療画像の処理方法及び医療画像処理用コンピュータ装置 |
Also Published As
| Publication number | Publication date |
|---|---|
| JPWO2019221222A1 (ja) | 2021-08-12 |
| TW202004572A (zh) | 2020-01-16 |
| JP7352261B2 (ja) | 2023-09-28 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP7352261B2 (ja) | 学習装置、学習方法、プログラム、学習済みモデルおよび骨転移検出装置 | |
| Ramdlon et al. | Brain tumor classification using MRI images with K-nearest neighbor method | |
| US12026876B2 (en) | System and method for automatic detection of vertebral fractures on imaging scans using deep networks | |
| CN115131290B (zh) | 图像处理方法 | |
| Bohaju | Brain tumor | |
| Shamrat et al. | Analysing most efficient deep learning model to detect COVID-19 from computer tomography images | |
| Ali et al. | Breast tumor segmentation using neural cellular automata and shape guided segmentation in mammography images | |
| Arun et al. | Deep vein thrombosis detection via combination of neural networks | |
| Ahmad et al. | Fusion of metadata and dermoscopic images for melanoma detection: Deep learning and feature importance analysis | |
| Sridhar et al. | Lung Segment Anything Model (LuSAM): A Prompt-integrated Framework for Automated Lung Segmentation on ICU Chest X-Ray Images | |
| Afnaan et al. | VisioRenalNet: Spatial vision transformer UNet for enhanced T2-weighted kidney MRI segmentation | |
| Fatima et al. | Synthetic Lung Ultrasound Data Generation Using Autoencoder with Generative Adversarial Network | |
| bin Azhar et al. | Enhancing COVID-19 Detection in X-Ray Images Through Deep Learning Models with Different Image Preprocessing Techniques. | |
| CN111445456A (zh) | 分类模型、网络模型的训练方法及装置、识别方法及装置 | |
| Fan et al. | LW-MorphCNN: a lightweight morphological attention-based subtype classification network for lung cancer | |
| EP4544493A1 (fr) | Système et procédé de classification de carcinome basocellulaire sur la base d'une microscopie confocale | |
| Saptasagar et al. | Diagnosis and Prediction of Lung Tumour Using Combined ML Techniques | |
| Sathish et al. | Accurate Deep Learning Models for Predicting Brain Cancer at begin Stage | |
| Agafonova et al. | Meningioma detection in MR images using convolutional neural network and computer vision methods | |
| Khan et al. | Early pigment spot segmentation and classification from iris cellular image analysis with explainable deep learning and multiclass support vector machine | |
| Wu et al. | Mscan: Multi-scale channel attention for fundus retinal vessel segmentation | |
| Carson | Automatic Bone Structure Segmentation of Under-sampled CT/FLT-PET Volumes for HSCT Patients | |
| Moazzami et al. | Open Set Recognition for Endoscopic Image Classification: A Deep Learning Approach on the Kvasir Dataset | |
| Liu et al. | Detection of clavicle fracture after martial arts training based on 3D segmentation and multi-perspective fusion | |
| US20250217974A1 (en) | Cervical vertebral maturation assessment using an innovative artificial intelligence-based imaging analysis system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19803999 Country of ref document: EP Kind code of ref document: A1 |
|
| ENP | Entry into the national phase |
Ref document number: 2020519910 Country of ref document: JP Kind code of ref document: A |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 19803999 Country of ref document: EP Kind code of ref document: A1 |