WO2021054477A2 - Disease diagnostic support method using endoscopic image of digestive system, diagnostic support system, diagnostic support program, and computer-readable recording medium having said diagnostic support program stored therein - Google Patents
- Publication number
- WO2021054477A2 (PCT/JP2020/035652)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- endoscopic image
- neural network
- convolutional neural
- disease
- endoscopic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/04—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
- A61B1/045—Control thereof
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
Definitions
- the present invention relates to a method for supporting diagnosis of a disease by endoscopic images of the digestive organs using a neural network, a diagnosis support system, a diagnosis support program, and a computer-readable recording medium that stores this diagnosis support program.
- Endoscopy is performed on the digestive organs, such as the larynx, pharynx, esophagus, stomach, duodenum, biliary tract, pancreatic duct, small intestine, and large intestine.
- Endoscopy of the upper digestive organs is for screening for gastric cancer, esophageal cancer, peptic ulcer, reflux gastritis, etc.
- Endoscopy of the large intestine is often performed for screening for colorectal cancer, colorectal polyps, ulcerative colitis, and the like.
- endoscopy of the upper digestive organs is commonly incorporated into detailed examinations of various upper abdominal symptoms, detailed examinations based on positive barium tests for gastric diseases, and regular medical examinations in Japan. It is also useful for close examination of abnormal serum pepsinogen levels.
- gastric cancer screening has been shifting from conventional barium examination to gastroscopy.
- Gastric cancer is one of the most common malignant tumors; it is estimated that roughly one million new cases occurred worldwide a few years ago. In particular, 50% to 70% of gastric cancers in East Asian countries are detected as early gastric cancer. Endoscopic submucosal dissection (ESD) is a minimally invasive treatment for early gastric cancer, and according to the Japanese Gastric Cancer Treatment Guidelines it carries almost no risk of lymph node metastasis. Early detection and treatment of gastric cancer is therefore the best way to reduce mortality, and improving the accuracy of endoscopic diagnosis of early gastric cancer is very helpful in reducing gastric cancer incidence and mortality.
- ESD: endoscopic submucosal dissection
- H. pylori: Helicobacter pylori
- H. pylori infection causes atrophic gastritis and intestinal metaplasia.
- It is believed that H. pylori contributes to 98% of non-cardia gastric cancers worldwide.
- Patients infected with H. pylori are at increased risk of gastric cancer.
- The International Agency for Research on Cancer classifies H. pylori as a definite carcinogen. For this reason, eradication of H. pylori is useful for reducing the risk of developing gastric cancer; eradication of H. pylori with antibacterial agents has become an insurance-covered practice in Japan and will continue to be a treatment strongly encouraged from the standpoint of public health and hygiene. In fact, in February 2013 the Ministry of Health, Labour and Welfare of Japan approved health insurance coverage for eradication treatment of patients with gastritis due to H. pylori infection.
- Gastroscopy provides extremely useful information for the differential diagnosis of H. pylori infection. A regular arrangement of collecting venules (RAC) in the capillaries and fundic gland polyps are characteristic of H. pylori-negative gastric mucosa, whereas atrophy, redness, mucosal swelling, and enlarged gastric folds are typical findings of H. pylori-infected gastritis. In addition, patchy red spots are characteristic of gastric mucosa after H. pylori eradication.
- An accurate endoscopic diagnosis of H. pylori infection is confirmed by various tests, such as measurement of anti-H. pylori IgG levels in blood or urine, fecal antigen measurement, the urea breath test, or the rapid urease test; patients with a positive result can proceed to H. pylori eradication. Endoscopy is widely used to examine gastric lesions, but if H. pylori infection could also be identified when confirming gastric lesions, without analysis of clinical specimens, the burden on patients would be greatly reduced by avoiding uniform blood and urine tests, and a contribution to the medical economy can be expected.
- A general endoscope is about 2 m in length, and to reach the small intestine it must be inserted orally via the stomach and duodenum, or transanally via the large intestine. Moreover, the small intestine itself is a long organ of 6-7 m, so it is difficult to insert a general endoscope and observe the entire small intestine. Therefore, for endoscopy of the small intestine, double-balloon endoscopy (see Patent Document 1) or wireless capsule endoscopy (hereinafter sometimes referred to simply as "WCE"; see Patent Document 2) is used.
- In double-balloon endoscopy, a balloon provided on the tip side of the endoscope and a balloon provided on the tip side of an overtube covering the endoscope are inflated and deflated alternately or simultaneously, and the examination is performed while shortening and straightening the long small intestine by drawing it together. Because the small intestine is so long, it is difficult to examine its entire length at one time; examination of the small intestine by double-balloon endoscopy is therefore usually performed in two sessions, one oral and one transanal.
- In endoscopy by WCE, the subject swallows an orally ingestible capsule with a built-in camera, flash, battery, transmitter, and so on; the images taken while the capsule moves through the digestive tract are wirelessly transmitted to an external receiver and recorded there, which makes it possible to image the entire small intestine at one time.
- pharyngeal cancer is often detected at an advanced stage, and the prognosis is poor.
- Patients with advanced pharyngeal cancer require surgical resection and chemoradiotherapy, which cause cosmetic problems as well as dysphagia and loss of speech, resulting in a significant reduction in quality of life.
- EGD: esophagogastroduodenoscopy
- Unlike in the esophagus, iodine staining cannot be used endoscopically in the pharynx because of the risk of aspiration into the airways. For this reason, superficial pharyngeal cancer has rarely been detected.
- WLI: white light imaging
- ME-NBI: magnifying endoscopy with narrow-band imaging
- VSCS: vessel plus surface classification system, a diagnostic system that makes a differential diagnosis between cancer and non-cancer using anatomical structures visualized by narrow-band imaging as indicators
- MESDA-G: Magnifying Endoscopy Simple Diagnostic Algorithm for early Gastric cancer, a simplified diagnostic algorithm for early gastric cancer based on the VSCS
- ME-NBI is believed to make a significant contribution to clinical practice, but these benefits have been reported primarily on the basis of results obtained by expert endoscopists, and acquiring ME-NBI diagnostic skills using the VSCS requires considerable expertise and experience.
- Endoscopic image-based diagnoses are not only time-consuming, in both training endoscopists and checking stored images, but are also subjective, so various false-positive and false-negative judgments may occur.
- Diagnosis by an endoscopist may also be less accurate due to fatigue. Such a large burden on the clinical site and the accompanying decrease in accuracy may limit the number of examinees, raising concern that medical services will not be provided sufficiently to meet demand.
- AI: artificial intelligence
- Deep learning can learn higher-order features from input data using a neural network configured by stacking multiple layers. Using the backpropagation algorithm, which indicates how the device should change its internal parameters, deep learning can update the parameters used to calculate the representation of each layer from the representation of the previous layer.
- Deep learning can be a powerful machine learning technique that can be trained with previously accumulated medical images and can directly derive the patient's clinical features from the medical images.
- Neural networks are mathematical models that express the characteristics of neural circuits in the brain by computer simulations, and the algorithmic approach that supports deep learning is neural networks.
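The layer-by-layer representations and backpropagation updates described above can be made concrete with a toy example. The following is a minimal sketch, not the patent's CNN: a tiny fully connected network on synthetic data, with layer sizes, labels, and learning rate chosen arbitrarily for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))                     # 64 toy "images" of 8 features
y = (X.sum(axis=1) > 0).astype(float)[:, None]   # synthetic positive/negative labels

W1 = rng.normal(scale=0.5, size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for step in range(300):
    # forward pass: each layer's representation is computed from the previous layer's
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    losses.append(float(-np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))))
    # backward pass (backpropagation): propagate the error to each parameter
    dlogit = (p - y) / len(X)
    dW2 = h.T @ dlogit; db2 = dlogit.sum(axis=0)
    dh = (dlogit @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh; db1 = dh.sum(axis=0)
    # update the internal parameters by gradient descent
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.1 * grad
```

Over the 300 steps the cross-entropy loss decreases, which is the sense in which backpropagation "shows how the device should change" its internal parameters.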
- The inventors have already constructed a CNN system that can classify images of the esophagus, stomach, and duodenum according to anatomical sites and can reliably find gastric cancer in endoscopic images (see Non-Patent Documents 8 and 9). Furthermore, the inventors have reported the role of a CNN in the diagnosis of H. pylori gastritis, showing that the ability of the CNN is comparable to that of an experienced endoscopist while the diagnosis time is considerably shortened (see Patent Document 5 and Non-Patent Document 10).
- CNNs are expected to be applied to WCE because they can process a large number of images automatically and quickly, and it has also been shown that good results can be achieved in some WCE fields (see Non-Patent Documents 11-13).
- Even so, as in Non-Patent Documents 11-13, there are still many unclear points about applying a CNN to WCE images of the small intestine to diagnose its various diseases, such as bleeding or elevated lesions, and further improvement has been demanded. In particular, there is a great demand to reduce the burden on endoscopists by applying a CNN to WCE moving images of the small intestine and automatically detecting its various lesions.
- Accordingly, an object of the present invention is to provide a method for supporting the diagnosis of diseases of the small intestine that can accurately identify the presence or absence of mucosal damage (erosion/ulcer), vasodilation, elevated lesions, or bleeding of the small intestine using a CNN system based on WCE endoscopic images or moving images of the small intestine, as well as a diagnosis support system, a diagnosis support program, and a computer-readable recording medium that stores the diagnosis support program.
- Another object of the present invention is to provide a diagnostic support method, a diagnostic support system, a diagnostic support program, and a computer-readable recording medium storing the diagnostic support program for early gastric cancer that can accurately diagnose early gastric cancer using a CNN system based on ME-NBI images.
- Another object of the present invention is to provide a diagnostic support method, a diagnostic support system, a diagnostic support program, and a computer-readable recording medium storing the diagnostic support program with which, when diagnosing early gastric cancer using the CNN system, the depth of invasion of the lesion can be diagnosed more accurately and the applicability of ESD can be accurately determined.
- The method for supporting the diagnosis of a disease by an endoscopic image of the digestive organ using the CNN system of the first aspect of the present invention is a method in which the CNN system is trained using a first endoscopic image of the digestive organ and at least one definitive diagnosis result of information corresponding to the positive or negative of a disease of the digestive organ corresponding to the first endoscopic image, and the trained CNN system, based on a second endoscopic image of the digestive organ input from an endoscopic image input unit, detects the disease of the digestive organ and outputs at least one of a region corresponding to the positive of the disease and a probability score, wherein the endoscopic image is a WCE image of the small intestine, and the trained CNN system outputs the region of an elevated lesion as the disease detected in the second WCE image input from the endoscopic image input unit.
- Since the CNN system is trained based on first endoscopic images consisting of a plurality of WCE images of the small intestine obtained in advance for each of a plurality of subjects, together with the positive or negative definitive diagnosis result of the disease obtained in advance for each of those subjects, it is possible to obtain, in a short time and with accuracy substantially comparable to that of an endoscopist, a region or probability score corresponding to the positive of the digestive organ disease of a subject, and it becomes possible to select in a short time the subjects for whom a definitive diagnosis must be made.
- According to the method for supporting the diagnosis of a disease by endoscopic images of the digestive organs using the CNN system of the first aspect of the present invention, WCE endoscopic images of the small intestine of a large number of subjects can be evaluated in a shorter time.
- the elevated lesions of such an embodiment include not only polyps but also nodules, epithelial tumors, submucosal tumors, vascular structures and the like.
- The method for supporting the diagnosis of a disease by an endoscopic image of the digestive organ using the CNN system of the second aspect of the present invention is the method of the first aspect, wherein the trained convolutional neural network system further displays the probability score of the disease in the second endoscopic image.
- As a result, the region identified by a definitive diagnosis by an endoscopic specialist in the second endoscopic image and the region positive for the disease detected by the trained CNN system can be accurately contrasted, which can lead to better sensitivity and specificity of the CNN.
- The method for supporting the diagnosis of a disease by an endoscopic image of the digestive organ using the CNN system of the third aspect of the present invention is the method of the second aspect, wherein the trained convolutional neural network system displays, in the second endoscopic image, both the region of the elevated lesion based on the positive or negative definitive diagnosis of the disease in the small intestine and the region of the elevated lesion detected by the trained convolutional neural network system, and the correctness of the diagnosis result of the trained convolutional neural network system is determined by the overlap between the region of the elevated lesion based on the definitive diagnosis result and the detected region of the elevated lesion.
- As a result, the region identified by the definitive diagnosis of an endoscopist in the second endoscopic image and the region positive for the disease detected by the trained CNN system are displayed together, so that the overlap of those regions can be immediately compared and the diagnostic results of the trained CNN can be verified.
- The method for supporting the diagnosis of a disease by the endoscopic image of the digestive organ using the CNN system of the fourth aspect of the present invention is the method of the third aspect, wherein the diagnosis of the trained convolutional neural network system is determined to be correct when the overlap (1) is 80% or more of the region of the elevated lesion based on the definitive diagnosis result, or (2) when there are a plurality of positive regions of the disease detected by the trained convolutional neural network system and any one of those regions overlaps with the region of the elevated lesion based on the definitive diagnosis result.
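The correctness rule of the fourth aspect can be sketched as follows, assuming for illustration that regions are axis-aligned boxes (x1, y1, x2, y2); this representation and all function names are assumptions, not part of the patent.

```python
def intersection_area(a, b):
    """Overlap area of two axis-aligned boxes (x1, y1, x2, y2)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

def box_area(a):
    return (a[2] - a[0]) * (a[3] - a[1])

def diagnosis_correct(truth_box, detected_boxes):
    """Apply the fourth aspect's rule: correct if a single detection covers at
    least 80% of the definitive-diagnosis lesion area (rule 1), or, when there
    are multiple detections, if any one of them overlaps the lesion (rule 2)."""
    if not detected_boxes:
        return False
    if len(detected_boxes) > 1:
        return any(intersection_area(truth_box, d) > 0 for d in detected_boxes)
    return intersection_area(truth_box, detected_boxes[0]) >= 0.8 * box_area(truth_box)

lesion = (10, 10, 50, 50)                           # region from the definitive diagnosis
ok = diagnosis_correct(lesion, [(12, 12, 48, 48)])  # covers 81% of the lesion area
```

In the usage line, the single detection covers 1296 of the lesion's 1600 area units (81%), so the diagnosis is judged correct under rule (1).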
- As a result, the correctness of the diagnosis of the trained CNN system can be easily determined, the accuracy of the diagnosis of the CNN system can be improved, and the direction in which improvement is necessary can be clarified.
- The method for supporting the diagnosis of a disease by endoscopic images of the digestive organs using the CNN system of the fifth aspect of the present invention is the method of any one of the first to fourth aspects, wherein the trained convolutional neural network system determines that the detected elevated lesion is one of a polyp, a nodule, an epithelial tumor, a submucosal tumor, and a vascular structure.
- As a result, since the CNN system itself determines the specific type of elevated lesion even from a large number of WCE endoscopic images of the small intestine, the endoscopist can select in a short time the subjects for whom a definitive diagnosis must be made.
- The method for supporting the diagnosis of a disease by an endoscopic image of the digestive organ using the CNN system of the sixth aspect of the present invention is a method in which a convolutional neural network system is trained using a first endoscopic image of the digestive organ and at least one definitive diagnosis result of information corresponding to the positive or negative of a disease of the digestive organ corresponding to the first endoscopic image, and the trained convolutional neural network system, based on a second endoscopic image of the digestive organ input from an endoscopic image input unit, detects the disease of the digestive organ and outputs at least one of a region corresponding to the positive of the disease and a probability score, wherein the endoscopic image is a WCE image of the small intestine, and the trained convolutional neural network system outputs the probability score of bleeding as the disease detected in the second WCE image input from the endoscopic image input unit.
- As a result, images of the small intestine containing blood components and normal mucosal images can be distinguished accurately and at high speed, and the output is easy for an endoscopist to check and correct.
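To make the probability-score output concrete, the following is a hedged stand-in for the trained CNN (which the patent does not specify at code level): a crude colour heuristic that scores the fraction of strongly red pixels in a frame. The thresholds and names are arbitrary assumptions for illustration only.

```python
import numpy as np

def bleeding_probability_score(rgb):
    """rgb: HxWx3 uint8 frame -> crude bleeding score in [0, 1].
    A real system would obtain this score from the trained CNN instead."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    # a pixel counts as "blood-like" if it is bright and strongly red-dominant
    reddish = (r > 120) & (r > g + 40) & (r > b + 40)
    return float(reddish.mean())

blood_like = np.zeros((4, 4, 3), dtype=np.uint8)
blood_like[..., 0] = 200                               # uniformly strong red patch
mucosa_like = np.full((4, 4, 3), 180, dtype=np.uint8)  # neutral grey patch
```

The red patch scores 1.0 and the neutral patch 0.0, mirroring how a probability score separates blood-containing frames from normal mucosa.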
- The method for supporting the diagnosis of a disease by endoscopic images of the digestive organs using the CNN system of the seventh aspect of the present invention is a method in which the convolutional neural network system is trained using a first endoscopic image of the digestive organ and the definitive diagnosis result of information corresponding to the positive or negative of a disease of the digestive organ corresponding to the first endoscopic image, and the trained convolutional neural network system, based on a second endoscopic image of the digestive organ input from an endoscopic image input unit, detects the disease of the digestive organ and outputs at least one of a region corresponding to the positive of the disease and a probability score, wherein the first endoscopic image is a still image of the WCE of the small intestine, the second endoscopic image is a moving image of the WCE of the small intestine, and the trained convolutional neural network system displays the region of the disease detected in the second WCE image input from the endoscopic image input unit.
- As a result, the endoscopist can select in a short time the subjects for whom a definitive diagnosis must be made.
- In the method for supporting the diagnosis of diseases by endoscopic images of the digestive organs using the CNN system of the above aspect, the moving images by WCE of the small intestine include not only continuous moving images captured as with a so-called video camera but also still images taken at imaging intervals short enough that the sequence can be regarded as a moving image.
- The method for supporting the diagnosis of a disease by the endoscopic image of the digestive organ using the CNN system of the eighth aspect of the present invention is the method of the seventh aspect, wherein the region of the disease is at least one of mucosal damage, vasodilation, elevated lesions, and bleeding.
- According to the method of the eighth aspect, since at least one of mucosal damage, vasodilation, elevated lesions, and bleeding can be detected individually or simultaneously even with one CNN system, the endoscopist can quickly select the subjects for whom a separate definitive diagnosis must be made.
- The method for supporting the diagnosis of a disease by an endoscopic image of the digestive organ using the CNN system of the ninth aspect of the present invention is the method of the eighth aspect, wherein the region of mucosal damage is at least one of erosions and ulcers, the region of vasodilation is at least one of vasodilation type 1a and type 1b, and the region of elevated lesions is at least one of polyps, nodules, submucosal tumors, vascular structures, and epithelial tumors.
- According to the method for supporting the diagnosis of diseases by endoscopic images of the digestive organs using the CNN system of the ninth aspect of the present invention, not only can at least one of mucosal damage, vasodilation, elevated lesions, and bleeding be detected individually or simultaneously even with one CNN system, but mucosal damage, vasodilation, and elevated lesions can be further subdivided and detected individually or simultaneously, so the endoscopist can quickly and in detail select the subjects for whom a separate definitive diagnosis must be made.
- The method for supporting the diagnosis of a disease by an endoscopic image of the digestive organ using the CNN system of the tenth aspect of the present invention is the method of any one of the seventh to ninth aspects, wherein the trained CNN system has a first CNN system portion trained with definitive diagnosis results from still images of mucosal damage and a second CNN system portion trained with definitive diagnosis results from still images of vasodilation; that is, the CNN system uses both still images of mucosal damage and still images of vasodilation together with their definitive diagnosis results.
- As a result, the CNN system becomes able to accurately detect at least one of the four types: mucosal damage, vasodilation, elevated lesions, and bleeding.
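The multi-portion design above can be sketched as one system that queries a per-finding model for each category. The stand-in models below are placeholder callables, not trained CNNs, and every name here is an illustrative assumption.

```python
CATEGORIES = ("mucosal damage", "vasodilation", "elevated lesion", "bleeding")

def combined_scores(models, image):
    """models: dict mapping a category to a callable(image) -> probability score.
    Returns one probability score per finding, as in the tenth aspect."""
    return {category: models[category](image) for category in CATEGORIES}

# Placeholder "CNN portions": score 0.9 if the category name appears in the
# image description, 0.1 otherwise (purely for demonstration).
models = {c: (lambda image, c=c: 0.9 if c in image else 0.1) for c in CATEGORIES}
scores = combined_scores(models, "frame showing bleeding")
```

Each per-category portion can be trained on its own still images while the combined system reports all four findings for one frame.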
- The method for supporting the diagnosis of a disease by endoscopic images of the digestive organs using the CNN system of the eleventh aspect of the present invention is a method in which the convolutional neural network system is trained using a first endoscopic image of the digestive organs and at least one definitive diagnosis result of information corresponding to the positive or negative of a disease of the digestive organs corresponding to the first endoscopic image, and the trained convolutional neural network system, based on a second endoscopic image of the digestive organs input from an endoscopic image input unit, detects the disease of the digestive organs and outputs at least one of a region corresponding to the positive of the disease and a probability score, wherein the region of early gastric cancer as the disease is output for the endoscopic image input from the endoscopic image input unit.
- As a result, endoscopic images of the stomach taken by narrow-band imaging with the water immersion method have less halation, and well-focused, clear images of uniform quality suitable for endoscopic diagnosis can be generated; therefore, even in cases that are difficult to diagnose with conventional endoscopic images, such as gastric cancer after H. pylori eradication and H. pylori-uninfected gastric cancer, early gastric cancer can be diagnosed with high diagnostic accuracy.
- The method for supporting the diagnosis of a disease by the endoscopic image of the digestive organ using the CNN system of the twelfth aspect of the present invention is the method of the eleventh aspect, wherein the trained convolutional neural network system has a function of displaying the region of early gastric cancer as the disease on a heat map.
- Here, the heat map displays a color corresponding to the point of interest, in this case the probability value of gastric cancer; for example, the higher the probability value of gastric cancer, the darker the red that can be displayed. Therefore, the part of the second endoscopic image where gastric cancer may be present and the magnitude of the gastric cancer probability at that part can be confirmed at a glance.
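The red-intensity mapping described above can be sketched as follows; the exact colour scale is an assumption, since the text only requires that a higher probability give a deeper red.

```python
import numpy as np

def probability_to_heatmap(prob):
    """prob: HxW array of gastric cancer probabilities in [0, 1]
    -> HxWx3 uint8 RGB image whose red channel deepens with probability."""
    prob = np.clip(np.asarray(prob, dtype=float), 0.0, 1.0)
    rgb = np.zeros(prob.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = (prob * 255).astype(np.uint8)  # red channel tracks probability
    return rgb

prob_map = np.array([[0.0, 0.5], [0.9, 1.0]])  # per-region probability values
heat = probability_to_heatmap(prob_map)
```

In practice such a map would be alpha-blended over the second endoscopic image so that high-probability regions stand out at a glance.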
- The method for supporting the diagnosis of a disease by endoscopic images of the digestive organs using the CNN system of the thirteenth aspect of the present invention is a method in which the convolutional neural network system is trained using a first endoscopic image of the digestive organs and at least one definitive diagnosis result of information corresponding to the positive or negative of a disease of the digestive organs corresponding to the first endoscopic image, and the trained convolutional neural network system, based on a second endoscopic image of the digestive organs input from an endoscopic image input unit, detects the disease of the digestive organs and outputs at least one of a region corresponding to the positive of the disease and a probability score, wherein the trained convolutional neural network system outputs the depth of invasion of the disease for the endoscopic image input from the endoscopic image input unit.
- As a result, the invasion depth (infiltration depth) of the disease can be accurately known, so it becomes possible to accurately diagnose whether a lesion is early gastric cancer or advanced gastric cancer.
- The method for supporting the diagnosis of a disease by the endoscopic image of the digestive organ using the CNN system of the 14th aspect of the present invention is the method of the 13th aspect, wherein the depth of invasion is output as whether the submucosal invasion is less than 500 µm or 500 µm or more.
- As a result, the applicability of ESD (endoscopic submucosal dissection) can be accurately determined.
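The binary depth output of the 14th aspect amounts to a one-line rule; the function name and the inclusive placement of the 500 µm boundary are illustrative assumptions.

```python
def invasion_depth_class(depth_um):
    """Report the depth class of submucosal invasion around the 500 um boundary,
    the output form of the 14th aspect (boundary assumed inclusive on the upper side)."""
    if depth_um >= 500:
        return "submucosal invasion >= 500 um"
    return "submucosal invasion < 500 um"
```

The class below the boundary corresponds to lesions for which minimally invasive ESD is more likely to be applicable.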
- The method for supporting the diagnosis of a disease by endoscopic images of the digestive organs using the CNN system according to the fifteenth aspect of the present invention is characterized in that, in the diagnosis support method of any one of the first to fourteenth aspects, the convolutional neural network is further combined with three-dimensional information from an X-ray computed tomography apparatus, an ultrasonic computed tomography apparatus, or a magnetic resonance imaging apparatus.
- As a result, since the structure of each digestive organ can be represented three-dimensionally, combining this with the output of the CNN system according to any one of the first to fourteenth aspects makes it possible to grasp more accurately the site where the endoscopic image was taken.
- The method for supporting the diagnosis of a disease by endoscopic images of the digestive organs using the CNN system according to the sixteenth aspect of the present invention is the method of any one of the first to fourteenth aspects, wherein the second endoscopic image is at least one of an image being taken with an endoscope, an image transmitted via a communication network, an image provided by a remote control system or a cloud-based system, an image recorded on a computer-readable recording medium, and a moving image.
- As a result, since the probability or severity of the digestive organ disease being positive or negative can be output in a short time for the input second endoscopic image, the method can be used regardless of the input format of the second endoscopic image, for example an image transmitted from a remote location or a moving image.
- As the communication network, the well-known Internet, an intranet, an extranet, a LAN, an ISDN, a VAN, a CATV communication network, a virtual private network, a telephone line network, a mobile communication network, a satellite communication network, and the like can be used.
- As the transmission media constituting the communication network, well-known wired media such as an IEEE 1394 serial bus, USB, power line carrier, cable TV line, telephone line, or ADSL line, wireless media such as infrared, Bluetooth (registered trademark), or IEEE 802.11, and wireless networks such as mobile phone networks, satellite lines, and terrestrial digital networks can be used. As a result, the invention can be used in the form of a so-called cloud service or remote support service.
- As the computer-readable recording medium, well-known media can be used: tape systems such as magnetic tapes and cassette tapes; disk systems including magnetic disks such as floppy (registered trademark) disks and hard disks, and optical disks such as CD-ROM/MO/MD/DVD/CD-R; card systems such as IC cards, memory cards, and optical cards; and semiconductor memory systems such as mask ROM/EPROM/EEPROM/flash ROM.
- The disease diagnosis support system using endoscopic images of the digestive organs using the CNN system of the seventeenth aspect of the present invention comprises an endoscopic image input unit, an output unit, and a computer incorporating the CNN. The computer comprises a first storage area for storing a first endoscopic image of the digestive organs, a second storage area for storing, in correspondence with the first endoscopic image, a definitive diagnosis result of information corresponding to the positive or negative of the disease in the digestive organs, and a third storage area for storing the CNN program. The CNN program is trained based on the first endoscopic image stored in the first storage area and the definitive diagnosis result stored in the second storage area.
- The endoscopic image is a WCE endoscopic image of the small intestine, and the trained CNN program is characterized by outputting the area of an elevated lesion as the disease in a WCE endoscopic image input from the endoscopic image input unit.
- The disease diagnosis support system using endoscopic images of the digestive organs using the CNN system of the eighteenth aspect of the present invention is characterized in that, in the system of the seventeenth aspect, the trained CNN program further displays the probability score of the disease in the second endoscopic image.
- The disease diagnosis support system using endoscopic images of the digestive organs using the CNN system of the nineteenth aspect of the present invention is characterized in that, in the system of the seventeenth aspect, the CNN program displays in the second endoscopic image both the area of the elevated lesion based on the positive or negative definitive diagnosis result of the disease in the small intestine and the area of the elevated lesion detected by the trained CNN system, and the correctness of the diagnosis result is determined by the overlap between the area of the elevated lesion based on the definitive diagnosis result and the detected area of the elevated lesion.
- The disease diagnosis support system using endoscopic images of the digestive organs using the CNN system of the twentieth aspect of the present invention is characterized in that, in the system of the seventeenth aspect, the diagnosis of the trained CNN system is determined to be correct (1) when the overlap is 80% or more of the area of the elevated lesion based on the definitive diagnosis result, or (2) when there are a plurality of positive regions of the disease detected by the trained CNN system and any one of those regions overlaps the region of the elevated lesion based on the definitive diagnosis result.
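The correctness criterion above (the 80%-coverage rule, plus the any-overlap rule for multiple detections) can be sketched in a few lines. This is an illustrative implementation only, assuming axis-aligned rectangular regions `(x0, y0, x1, y1)`; the function names are not from the patent:

```python
def intersection_area(a, b):
    """Overlap area of two axis-aligned regions given as (x0, y0, x1, y1)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

def region_area(r):
    return (r[2] - r[0]) * (r[3] - r[1])

def diagnosis_correct(detected, truth):
    """Correct if (1) some detected region covers >= 80% of the lesion area
    based on the definitive diagnosis, or (2) there are multiple detected
    regions and any one of them overlaps the lesion region at all."""
    if any(intersection_area(d, truth) >= 0.8 * region_area(truth) for d in detected):
        return True
    if len(detected) > 1 and any(intersection_area(d, truth) > 0 for d in detected):
        return True
    return False
```

For example, a single detection covering 90% of the ground-truth lesion satisfies rule (1), while two detections of which only one grazes the lesion satisfy rule (2).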
- The disease diagnosis support system using endoscopic images of the digestive organs using the CNN system of the twenty-first aspect of the present invention is characterized in that, in the system of the seventeenth aspect, the trained CNN program indicates in the second endoscopic image whether the elevated lesion is a polyp, a nodule, an epithelial tumor, a submucosal tumor, or a vascular structure.
- The disease diagnosis support system using endoscopic images of the digestive organs of the twenty-second aspect of the present invention comprises an endoscopic image input unit, an output unit, and a computer incorporating a CNN. The computer comprises a first storage area for storing a first endoscopic image of the digestive organs, a second storage area for storing, in correspondence with the first endoscopic image, a definitive diagnosis result of information corresponding to the positive or negative of the disease in the digestive organs, and a third storage area for storing the CNN program. The CNN program is trained based on the first endoscopic image stored in the first storage area and the definitive diagnosis result stored in the second storage area.
- The endoscopic image is a WCE image of the small intestine, and the trained CNN program is characterized by displaying the probability score of bleeding as the disease in the second endoscopic image.
- The disease diagnosis support system using endoscopic images of the digestive organs of the twenty-third aspect of the present invention comprises an endoscopic image input unit, an output unit, and a computer incorporating the CNN. The computer comprises a first storage area for storing a first endoscopic image of the digestive organs, a second storage area for storing, in correspondence with the first endoscopic image, a definitive diagnosis result of information corresponding to the positive or negative of the disease in the digestive organs, and a third storage area for storing the CNN program. The CNN program is trained based on the first endoscopic image stored in the first storage area and the definitive diagnosis result stored in the second storage area.
- The first endoscopic image is a still image of a WCE of the small intestine, the second endoscopic image is a moving image of a WCE of the small intestine, and the trained CNN program is characterized by displaying the area of the detected disease in a second WCE image input from the endoscopic image input unit.
- The disease diagnosis support system using endoscopic images of the digestive organs using the CNN system of the twenty-fourth aspect of the present invention is characterized in that, in the system of the twenty-third aspect, the area of the disease is at least one of mucosal damage, vasodilation, an elevated lesion, and bleeding.
- The disease diagnosis support system using endoscopic images of the digestive organs using the CNN system of the twenty-fifth aspect of the present invention is characterized in that, in the system of the twenty-fourth aspect, the area of mucosal damage is at least one of erosion and ulcer, the area of vasodilation is at least one of vasodilation Type 1a and vasodilation Type 1b, and the area of the elevated lesion is at least one of a polyp, a nodule, a submucosal tumor, a vascular structure, and an epithelial tumor.
- The disease diagnosis support system using endoscopic images of the digestive organs using the CNN system according to the twenty-sixth aspect of the present invention is characterized in that, in the system according to any one of the twenty-third to twenty-fifth aspects, the trained CNN system is a composite CNN system consisting of a first CNN system part trained with still images of mucosal damage with definitive diagnosis results, a second CNN system part trained with still images of vasodilation with definitive diagnosis results, a third CNN system part trained with still images of elevated lesions with definitive diagnosis results, and a fourth CNN system part trained with still images of bleeding with definitive diagnosis results.
- The disease diagnosis support system using endoscopic images of the digestive organs using the convolutional neural network system according to the twenty-seventh aspect of the present invention comprises an endoscopic image input unit, an output unit, and a computer incorporating a convolutional neural network. The computer comprises a first storage area for storing a first endoscopic image of the digestive organs, a second storage area for storing, in correspondence with the first endoscopic image, a definitive diagnosis result of information corresponding to the positive or negative of the disease in the digestive organs, and a third storage area for storing the convolutional neural network program. The convolutional neural network program is trained based on the first endoscopic image stored in the first storage area and the definitive diagnosis result stored in the second storage area, and, based on a second endoscopic image of the digestive organs input from the endoscopic image input unit, outputs to the output unit information corresponding to the positive or negative of the disease of the digestive organs with respect to the second endoscopic image. The endoscopic image is an endoscopic image obtained by water-immersion narrow-band imaging of the stomach, and the trained convolutional neural network program is characterized by outputting the area of early gastric cancer as the disease in the endoscopic image input from the endoscopic image input unit.
- The disease diagnosis support system using endoscopic images of the digestive organs using the CNN system of the twenty-eighth aspect of the present invention is characterized in that, in the system of the twenty-seventh aspect, the trained convolutional neural network program has a function of displaying the area of early gastric cancer as the disease on a heat map.
- The disease diagnosis support system using endoscopic images of the digestive organs using the convolutional neural network system according to the twenty-ninth aspect of the present invention comprises an endoscopic image input unit, an output unit, and a computer incorporating a convolutional neural network. The computer comprises a first storage area for storing a first endoscopic image of the digestive organs, a second storage area for storing, in correspondence with the first endoscopic image, a definitive diagnosis result of information corresponding to the positive or negative of the disease in the digestive organs, and a third storage area for storing the convolutional neural network program. The convolutional neural network program is trained based on the first endoscopic image stored in the first storage area and the definitive diagnosis result stored in the second storage area, and outputs to the output unit information corresponding to the positive or negative of the disease. The endoscopic image is selected from a white light image of the stomach, a non-magnifying narrow-band light image, and an indigo carmine dye spray image, and the trained convolutional neural network program is characterized by outputting the invasion depth of the disease for the endoscopic image input from the endoscopic image input unit.
- The disease diagnosis support system using endoscopic images of the digestive organs using the CNN system of the thirtieth aspect of the present invention is characterized in that, in the system of the twenty-ninth aspect, the trained convolutional neural network program has a function of outputting, as the invasion depth of the disease, whether the submucosal invasion is less than 500 μm or 500 μm or more.
- The disease diagnosis support system using endoscopic images of the digestive organs using the CNN system according to the thirty-first aspect of the present invention is characterized in that, in the system according to any one of the seventeenth to thirtieth aspects, the CNN program is further combined with three-dimensional information from an X-ray computed tomography apparatus, an ultrasonic computed tomography apparatus, or a magnetic resonance imaging apparatus.
- The disease diagnosis support system using endoscopic images of the digestive organs using the CNN system according to the thirty-second aspect of the present invention is characterized in that, in the system according to any one of the seventeenth to thirty-first aspects, the second endoscopic image is at least one of an image being taken by the endoscope, an image transmitted via a communication network, an image provided by a remote control system or a cloud-based system, an image recorded on a computer-readable recording medium, and a moving image.
- These systems can achieve the same effects as the disease diagnosis support method using endoscopic images of the digestive organs using the CNN according to any one of the first to sixteenth aspects of the present invention.
- The diagnosis support program using endoscopic images of the digestive organs using the CNN system according to the thirty-third aspect of the present invention is a program for operating a computer as each means in the disease diagnosis support system using endoscopic images of the digestive organs according to any one of the seventeenth to thirty-second aspects. This makes it possible to provide a diagnosis support program using endoscopic images of the digestive organs for operating a computer as each of those means.
- The computer-readable recording medium of the thirty-fourth aspect of the present invention is characterized by recording the diagnosis support program using endoscopic images of the digestive organs using the CNN system of the thirty-third aspect of the present invention. This makes it possible to provide a computer-readable recording medium recording the diagnosis support program using endoscopic images of the digestive organs.
- Since the program incorporating the CNN is trained based on WCE images of the small intestine or various endoscopic images of the stomach obtained in advance for each of a plurality of subjects, and on the positive or negative definitive diagnosis results of the disease obtained in advance for each of those subjects, areas or probability scores corresponding to a positive gastrointestinal disease of a subject can be obtained in a short time and with substantially the same accuracy as an endoscopist, and subjects who require a definitive diagnosis can be selected in a short time.
- FIGS. 4A-4D are diagrams showing representative enteroscopy images correctly diagnosed by the CNN of Embodiment 1 and the probability scores of the specific sites recognized by the CNN.
- FIGS. 5A-5E are examples of images diagnosed as false positives by the CNN of Embodiment 1 due to darkness, laterality, bubbles, debris, and vasodilation, respectively, and FIGS. 5F-5H are examples of images containing true erosions that were nevertheless diagnosed as false positives.
- It is a schematic of the flowchart for the construction of the CNN system of Embodiment 2.
- It is a diagram showing an example of the ROC curve by the CNN of Embodiment 2.
- FIGS. 8A-8E show representative regions correctly detected and classified by the CNN of Embodiment 2 into five categories: polyps, nodules, epithelial tumors, submucosal tumors, and vascular structures.
- FIGS. 9A-9C are examples of images of one patient that could not be detected correctly by the CNN of Embodiment 2.
- Among the false-positive images of Embodiment 2, it is an example of an image diagnosed by an endoscopist as a true elevated lesion.
- It is a schematic of the flowchart for the construction of the CNN system of Embodiment 3.
- FIG. 13A is an image containing representative blood correctly classified by the CNN system of Embodiment 3, and FIG. 13B is an image showing a normal mucosa likewise correctly classified.
- FIG. 14A is an image correctly classified as containing a blood component by the red region estimation display function (SBI), and FIG. 14B is an image erroneously classified as normal mucosa by the SBI.
- FIG. 16A is a graph showing the difference in the detection rate of the whole disease
- FIG. 16B is a graph showing the difference in the detection rate for each of the four types of diseases, namely mucosal damage, vasodilation, elevated lesions, and blood components.
- FIG. 16C is a graph showing the difference in the detection rate for each disease when the diseases shown in FIG. 16B are further subdivided.
- FIGS. 17A-17C are graphs corresponding to FIGS. 16A-16C, respectively, under strict criteria considering single or multiple lesions.
- FIG. 25A is a diagram showing an example of the ROC curve for gastric cancer using white light images in the CNN system of Embodiment 6.
- FIG. 25B is a diagram showing an example of the ROC curve for gastric cancer using non-magnifying narrow-band light images.
- FIG. 25C is a diagram showing an example of the ROC curve for gastric cancer using indigo carmine dye spray images.
- It is a block diagram of the disease diagnosis support method using endoscopic images of the digestive organs using the neural network of Embodiment 7.
- It is a block diagram of the disease diagnosis support system using endoscopic images of the digestive organs of Embodiment 8.
- First, a disease diagnosis support method, a diagnosis support system, a diagnosis support program, and a computer-readable recording medium storing the diagnosis support program, based on endoscopic images of the digestive organs according to the present invention using a wireless capsule endoscope (WCE), will be described.
- The disease diagnosis method based on endoscopic images of the present invention, the diagnosis support system, the diagnosis support program, and the computer-readable recording medium storing the diagnosis support program are described below as applied to the small intestine using WCE.
- The main indications for WCE are obscure gastrointestinal bleeding (OGIB) of unknown cause, as well as cases in which abnormal small intestinal images were observed with other medical equipment, abdominal pain, follow-up of past small intestinal cases, diarrhea, and screening referrals from primary care physicians.
- The main causes were non-steroidal anti-inflammatory drugs, followed by inflammatory bowel disease, malignant tumors of the small intestine, and anastomotic ulcers, although the etiology could not be determined in many cases.
- Table 1 shows the patient characteristics of the dataset used for training and validation of the CNN system.
- The CNN system used in Embodiment 1 was trained using backpropagation. A deep neural network architecture called Single Shot MultiBox Detector (SSD, https://arxiv.org/abs/1512.02325) was used without changing the algorithm.
- The Caffe framework, one of the earliest developed and most widely used deep learning frameworks, was used.
- The CNN system of Embodiment 1 was trained so that the area inside the bounding box was the erosion/ulcer area and the other areas were background. The CNN system then extracted specific features of the bounding-box regions by itself and learned the features of erosions/ulcers from the training dataset. All layers of the CNN were stochastically optimized with a global learning rate of 0.0001. Each image was resized to 300 × 300 pixels, and the size of the bounding box was changed accordingly. These values were set by trial and error to ensure that all data were compatible with the SSD architecture.
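The resizing step above implies rescaling each annotated bounding box together with the image. A minimal sketch of that coordinate transform is shown below; the function name and the 600 × 600 example frame size are illustrative assumptions, not values from the patent:

```python
def rescale_bbox(bbox, orig_size, target_size=(300, 300)):
    """Scale a bounding box (x0, y0, x1, y1) from the original image
    resolution to the resized training resolution used by the SSD."""
    sx = target_size[0] / orig_size[0]
    sy = target_size[1] / orig_size[1]
    x0, y0, x1, y1 = bbox
    return (x0 * sx, y0 * sy, x1 * sx, y1 * sy)

# Example: an annotation on a hypothetical 600 x 600 frame, resized to 300 x 300
print(rescale_bbox((100, 200, 300, 400), (600, 600)))  # (50.0, 100.0, 150.0, 200.0)
```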
- An Intel Core i7-7700K was used as the CPU, and an NVIDIA GeForce GTX 1070 was used as the GPU of the graphics processing device.
- The WCE endoscopic images determined to be correct in this way can be used as a diagnostic aid by adding that information to the images at the site where the captured images are double-checked, or by displaying the information in real time on the video during WCE endoscopy.
- A receiver operating characteristic (ROC) curve was plotted, and the area under the curve (AUC) was calculated to evaluate erosion/ulcer discrimination by the trained CNN system of Embodiment 1. Various cutoff values for the probability score, including the score according to the Youden index, were used to calculate the sensitivity, specificity, and accuracy of the CNN system of Embodiment 1 in detecting erosions/ulcers.
- The Youden index is one of the standard methods for determining the optimum cutoff value from sensitivity and specificity: the cutoff value that maximizes the value of "sensitivity + specificity − 1" is selected.
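The Youden-index cutoff selection described above can be sketched in a few lines of illustrative code (not from the patent); it scans candidate cutoffs and keeps the one maximizing sensitivity + specificity − 1:

```python
def youden_cutoff(scores, labels):
    """Return (cutoff, J) maximizing J = sensitivity + specificity - 1.
    labels: 1 = lesion present per definitive diagnosis, 0 = normal."""
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        sens = tp / (tp + fn) if (tp + fn) else 0.0
        spec = tn / (tn + fp) if (tn + fp) else 0.0
        if sens + spec - 1.0 > best_j:
            best_t, best_j = t, sens + spec - 1.0
    return best_t, best_j

# Toy example: a cutoff of 0.6 perfectly separates these four images
cut, j = youden_cutoff([0.2, 0.3, 0.6, 0.9], [0, 0, 1, 1])
```

On real data the candidate cutoffs would be the CNN's probability scores over the whole validation set, yielding a value such as the 0.481 reported later in the text.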
- the data were statistically analyzed using Stata software (version 13; Stata Corp, College Station, TX, USA).
- The trained CNN system of Embodiment 1 took 233 seconds to evaluate these images, which corresponds to a speed of 44.8 images per second.
- The AUC of the trained CNN system of Embodiment 1 for detecting erosions/ulcers was 0.960 (95% confidence interval [CI], 0.950-0.969; see FIG. 3).
- The optimum cutoff value of the probability score was 0.481, and regions with a probability score of 0.481 or higher were recognized as erosions/ulcers by the CNN.
- The sensitivity, specificity, and accuracy of the CNN were 88.2% (95% CI, 84.8-91.0%), 90.9% (95% CI, 90.3-91.4%), and 90.8% (95% CI, 90.2-91.3%), respectively (see Table 2).
- Table 2 shows the sensitivity, specificity, and accuracy of each calculated by increasing the cutoff value of the probability score by 0.1 from 0.2 to 0.9.
- FIGS. 4A-4D show typical regions correctly detected by the CNN system
- FIGS. 5A-5H show typical regions misclassified by the CNN system, respectively.
- As shown in Table 4, false-negative images were classified into four causes: unclear boundaries (see FIG. 5A); a color similar to the surrounding normal mucosa; lesions that were too small; and lesions that were not fully observable, either lateral (difficult to see because the affected area is on the side) or partial (only partially visible) (see FIG. 5B).
- As described above, in Embodiment 1, a CNN-based program for automatic detection of erosions and ulcers in WCE images of the small intestine was constructed, and it was revealed that erosions/ulcers could be detected in independent test images with a high accuracy of 90.8% (AUC, 0.960).
- In Embodiment 2, a method for diagnosing diseases related to elevated lesions of the small intestine using wireless capsule endoscopy (WCE) images, a diagnosis support system, a diagnosis support program, and a computer-readable recording medium storing the diagnosis support program will be described.
- The morphological characteristics of elevated lesions vary from polyps, nodules, and masses/tumors to vascular structures, and their etiologies include neuroendocrine tumors, adenocarcinomas, familial adenomatous polyposis, Peutz-Jeghers syndrome, follicular lymphoma, and gastrointestinal stromal tumors. Since these lesions require early diagnosis and treatment, they must not be overlooked in the WCE examination.
- The outline of the flowchart for the construction of the CNN system of Embodiment 2 is shown in FIG. 1.
- The CNN system of Embodiment 2 used the same SSD deep neural network architecture and Caffe framework as in Embodiment 1.
- Six endoscopists manually annotated all areas of the elevated lesion in the image of the training dataset with a rectangular bounding box. Annotations were performed individually by each endoscopist and a consensus was later determined. These images were incorporated into the SSD architecture through the Caffe framework.
- The CNN system of Embodiment 2 was trained so that the area inside the bounding box was an elevated lesion and the other areas were background. The CNN system then extracted specific features of the bounding-box regions by itself and "learned" the features of elevated lesions from the training dataset. All layers of the CNN were stochastically optimized with a global learning rate of 0.0001. Each image was resized to 300 × 300 pixels, and the size of the bounding box was changed accordingly. These values were set by trial and error to ensure that all data were compatible with the SSD architecture.
- An Intel Core i7-7700K was used as the CPU, and an NVIDIA GeForce GTX 1070 was used as the GPU of the graphics processing device.
- As the WCE, the same PillCam SB2 or SB3 WCE device as in Embodiment 1 was used. Data were analyzed using Stata software (version 13; Stata Corp, College Station, TX, USA).
- To evaluate the ability of the CNN system of Embodiment 2 to determine whether each image contained an elevated lesion, the inventors evaluated the CNN boxes in descending order of the probability score of each image. When a CNN box clearly surrounded an elevated lesion, the CNN box, the probability score for the elevated lesion, and the category of the elevated lesion were adopted as the result of the CNN.
- The ROC curve was plotted by varying the cutoff value of the probability score, and the area under the curve (AUC) was calculated to assess the ability of the trained CNN system of Embodiment 2 to identify elevated lesions.
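For intuition, the AUC obtained from such an ROC analysis can equivalently be computed as the probability that a randomly chosen lesion image receives a higher score than a randomly chosen lesion-free image. The sketch below is illustrative only (the patent's analysis was done with statistical software, not this code):

```python
def auc_mann_whitney(scores, labels):
    """AUC via the Mann-Whitney statistic: the fraction of (positive, negative)
    image pairs where the positive image scores higher (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfect separation gives AUC = 1.0; one mis-ranked pair lowers it
print(auc_mann_whitney([0.9, 0.8, 0.4, 0.1], [1, 1, 0, 0]))  # 1.0
```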
- The secondary outcomes in Embodiment 2 were the classification of elevated lesions into five categories by the CNN and the detection of elevated lesions in per-patient analysis. Regarding classification accuracy, the concordance rate of classification between the CNN and endoscopists was examined. In the per-patient analysis of the detection rate of elevated lesions, detection by the CNN was defined as correct if the CNN detected at least one elevated-lesion image among the multiple images of the same patient.
- Table 6 shows the patient characteristics of the dataset used for training and validation of the CNN system of Embodiment 2, and the details of the training dataset and the validation dataset.
- The validation dataset consisted of 7,507 images containing elevated lesions from 73 patients (male, 65.8%; mean age, 60.1 years; standard deviation, 18.7 years) and 10,000 lesion-free images from 20 patients (male, 60.0%; mean age, 51.9 years; standard deviation, 11.4 years).
- The CNN constructed in Embodiment 2 completed the analysis of all images in 530.462 seconds, an average of 0.0303 seconds per image.
- The AUC of the CNN of Embodiment 2 used to detect elevated lesions was 0.911 (95% confidence interval (CI), 0.9069-0.9155) (see FIG. 7).
- The optimal cutoff value for the probability score was 0.317; therefore, regions with a probability score of 0.317 or higher were recognized as elevated lesions detected by the CNN. Using that cutoff value, the sensitivity and specificity of the CNN were 90.7% (95% CI, 90.0-91.4%) and 79.8% (95% CI, 79.0-80.6%), respectively (see Table 7).
- The sensitivities of the CNN for the detection of polyps, nodules, epithelial tumors, submucosal tumors, and vascular structures were 86.5%, 92.0%, 95.8%, 77.0%, and 94.4%, respectively. FIGS. 8A-8E show representative regions correctly detected and classified by the CNN of Embodiment 2 into these five categories.
- In the per-patient analysis, the detection rate of elevated lesions was 98.6% (72/73), and the per-patient detection rates for polyps, nodules, epithelial tumors, submucosal tumors, and vascular structures were 96.7% (29/30), 100% (14/14), 100% (14/14), 100% (11/11), and 100% (4/4), respectively.
- In contrast, all three images of the polyp of one patient shown in FIGS. 9A-9C could not be detected by the CNN of Embodiment 2. In these images, all probability scores for the CNN boxes were less than 0.317, so they were not detected as elevated lesions by the CNN.
- Table 8 shows the labeling of elevated lesions by CNN and a specialist endoscopist.
- The concordance rates between the CNN and the endoscopists in labeling polyps, nodules, epithelial tumors, submucosal tumors, and vascular structures were 42.0%, 83.0%, 82.2%, 44.5%, and 48.0%, respectively.
- As described above, in Embodiment 2, the category classification based on CEST was applied, and it was clarified that, although sensitivity differs among categories such as polyps, nodules, epithelial tumors, submucosal tumors, and vascular structures, elevated lesions can be detected and classified with high sensitivity and a good detection rate.
- (Embodiment 3) WCE has become an indispensable tool for investigating small bowel diseases, and its main indication is obscure gastrointestinal bleeding (OGIB), for which no clear source of bleeding is found.
- a "red region estimation display function" (Suspected Blood Indicator; hereinafter simply referred to as "SBI") (Non-Patent Document 15).
- SBI is an image selection tool included in the RAPID CE interpretation software (Medtronic, Minneapolis, MN, USA) that tags potentially bleeding areas with red pixels.
- In Embodiment 3, a method for diagnosing small intestinal bleeding from WCE images using a CNN system, a diagnosis support system, a diagnosis support program, and a computer-readable recording medium storing the diagnosis support program will be described.
- By detecting blood components in the small intestine, it is possible to estimate bleeding, and in this case it is also possible to estimate the blood volume from the distribution range of the blood or the like. Here, the case of detecting the presence or absence of blood components, that is, the presence or absence of bleeding, is illustrated.
- The algorithm of the CNN system used in Embodiment 3 was developed using ResNet50 (https://arxiv.org/abs/1512.03385), a 50-layer deep neural network architecture. The Caffe framework, first developed at the Berkeley Vision and Learning Center, was used to train and validate the newly developed CNN system. All layers of the network were then stochastically optimized using SGD (Stochastic Gradient Descent) with a global learning rate of 0.0001. All images were resized to 224 × 224 pixels for compatibility with ResNet50.
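A single plain-SGD parameter update at the global learning rate quoted above would look like the following schematic sketch (the actual optimization was performed inside Caffe; the names here are illustrative):

```python
LEARNING_RATE = 0.0001  # the global learning rate stated in the text

def sgd_step(params, grads, lr=LEARNING_RATE):
    """One stochastic gradient descent update: p <- p - lr * grad."""
    return [p - lr * g for p, g in zip(params, grads)]

# Toy update of two parameters with their current gradients
updated = sgd_step([1.0, -0.5], [10.0, -20.0])
```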
- The main outcome measures of the CNN system of Embodiment 3 include the area under the curve (AUC) of the receiver operating characteristic (ROC) curve and the accuracy of the CNN system's ability to discriminate between images of blood components and images of normal mucosa.
- The trained CNN system of Embodiment 3 outputs a continuous number between 0 and 1 as a probability score for blood components for each image. The higher the probability score, the higher the probability that the image contains blood components.
- The validation test of the CNN system in Embodiment 3 was performed using single still images, plotting the ROC curve by varying the threshold of the probability score and calculating the AUC to assess the degree of discrimination.
- For final classification by the CNN system, the threshold of the probability score was simply set to 0.5 to discriminate between images containing blood components and images of normal mucosa.
- The sensitivity, specificity, and accuracy of this discriminating ability were calculated.
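The evaluation procedure just described — sweeping the probability-score threshold to trace an ROC curve, computing the AUC, and reporting sensitivity and specificity at the fixed 0.5 cutoff — can be sketched in plain Python. The labels and scores below are illustrative stand-ins, not the patent's data:

```python
def roc_points(labels, scores):
    """Return (FPR, TPR) pairs obtained by sweeping the score threshold."""
    thresholds = sorted(set(scores), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    pts = [(0.0, 0.0)]
    for t in thresholds:
        tp = sum(1 for l, s in zip(labels, scores) if s >= t and l == 1)
        fp = sum(1 for l, s in zip(labels, scores) if s >= t and l == 0)
        pts.append((fp / neg, tp / pos))
    pts.append((1.0, 1.0))
    return pts

def auc(points):
    """Trapezoidal area under the ROC curve."""
    pts = sorted(points)
    return sum((x2 - x1) * (y1 + y2) / 2.0
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

def sens_spec(labels, scores, threshold=0.5):
    """Sensitivity and specificity at a fixed probability-score cutoff."""
    tp = sum(1 for l, s in zip(labels, scores) if l == 1 and s >= threshold)
    fn = sum(1 for l, s in zip(labels, scores) if l == 1 and s < threshold)
    tn = sum(1 for l, s in zip(labels, scores) if l == 0 and s < threshold)
    fp = sum(1 for l, s in zip(labels, scores) if l == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative labels (1 = blood component present) and CNN probability scores.
y = [1, 1, 1, 0, 0, 0]
p = [0.95, 0.90, 0.40, 0.60, 0.10, 0.05]
area = auc(roc_points(y, p))
sensitivity, specificity = sens_spec(y, p)
```

In the validation described above the same sweep is performed over all 10,208 still images rather than this toy list.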
- For the validation set, the sensitivity, specificity, and accuracy of SBI in discriminating between images containing blood components and images of normal mucosa were evaluated by examining 10,208 images. Differences in performance between the CNN system of Embodiment 3 and SBI were compared using McNemar's test. The data obtained were statistically analyzed using STATA software (version 13; Stata Corp, College Station, TX, USA).
- The validation dataset consisted of 10,208 images from 25 patients (male: 56%, mean age: 53.4 years, standard deviation: 12.4 years).
- The trained CNN system of Embodiment 3 took 250 seconds to evaluate these images, corresponding to a speed of 40.8 images per second.
- the AUC of the CNN system of Embodiment 3 for identifying images containing blood components was 0.9998 (95% CI (confidence interval), 0.9996-1.0000; see FIG. 12).
- Table 9 also shows the respective sensitivities, specificities and accuracy calculated by increasing the probability score cutoff value by 0.1 from 0.1 to 0.9.
- FIG. 13 shows a representative image containing blood (FIG. 13A) and a normal mucosa image (FIG. 13B) correctly classified by the CNN system of Embodiment 3.
- The probability scores obtained by the CNN system of Embodiment 3 for FIGS. 13A and 13B are shown in Table 10 below.
- FIG. 14 shows seven false negative images classified as normal mucosa by the CNN of Embodiment 3. Of these, the four images shown in FIG. 14A were correctly classified as containing blood components by SBI, and the three images in FIG. 14B were erroneously classified as normal mucosa by SBI.
- The classifications obtained by the CNN system of Embodiment 3 and by SBI for FIGS. 14A and 14B are shown in Table 12 below, and the relationship between the classifications of the CNN system and SBI is shown in Table 13.
- With the trained CNN system of Embodiment 3, it was possible to distinguish between images containing blood components and normal mucosal images with a high accuracy of 99.9% (AUC: 0.9998).
- AUC area under the curve
- A direct comparison with SBI showed that the trained CNN system of Embodiment 3 could classify images more accurately than SBI.
- the trained CNN system of Embodiment 3 was superior to SBI in both sensitivity and specificity. This result shows that the trained CNN system of Embodiment 3 can be used as a highly accurate screening tool for WCE.
- Endoscopy by WCE is performed by externally receiving and recording images taken while the capsule moves through the digestive tract, making it possible to image the entire small intestine in a single examination; the images can also be captured as a moving image.
- screening for moving WCE images is more burdensome for physicians than for still images.
- The moving images of the present invention include, in addition to those continuously shot by a so-called video camera, series of still images shot at very short intervals that can be recognized as a movie when played back continuously.
- QuickView mode is known as a means of shortening the screening time for such WCE moving images.
- QuickView mode is an image selection tool installed in the RAPID CE interpretation software (Medtronic, Minneapolis, Minnesota, USA). However, QuickView mode is reported to be unsuitable for initial screening because of its high error rate for important diseases, despite its relatively high sensitivity and ability to shorten reading time (see Non-Patent Document 16).
- For this reason, in screening by WCE images, WCE still images are used and WCE moving images are not.
- In Embodiment 4, a diagnostic support method for various diseases of the small intestine using the CNN system, a diagnostic support system, a diagnostic support program, and a computer-readable recording medium storing the diagnostic support program will be described in comparison with the QuickView mode described above.
- For the verification data set, WCE moving images of 379 patients, taken between June 2018 and May 2019 at institutions to which some of the inventors belong (University of Tokyo Hospital, Hiroshima University Hospital, and Sendai Kousei Hospital, Japan), were acquired retrospectively. The WCE examinations were performed using a PillCam SB3 device. The training data set and the verification data set are completely independent.
- In the CNN system of Embodiment 4, when images containing mucosal damage and nodules were trained together with images containing other abnormalities, the detection sensitivity for mucosal damage and nodules was shown to be lower.
- Therefore, the SSD that detects mucosal damage and the SSD that detects nodules were separated from the other SSDs. That is, the CNN system of Embodiment 4 was constructed and used as a composite CNN system with the following four subsystems: (1) an SSD that detects mucosal damage, (2) an SSD that detects nodules, (3) an SSD that detects other abnormalities (vasodilation, polyps, submucosal tumors, vascular structures, and epithelial tumors), and (4) ResNet50 for detecting blood components.
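The composite arrangement of four side-by-side subsystems can be sketched as follows. The detector functions here are stubs standing in for the trained sub-networks; their names, inputs, and outputs are illustrative assumptions, not the patent's implementation:

```python
def classify_image(image, detectors):
    """Run each subsystem on the image and collect every finding.

    `detectors` maps an abnormality category to a callable that returns a
    list of findings for that image (empty when nothing is detected).
    """
    findings = {}
    for category, detect in detectors.items():
        hits = detect(image)
        if hits:
            findings[category] = hits
    return findings

# Stub subsystems standing in for the four trained sub-networks.
detectors = {
    "mucosal_damage":    lambda img: ["erosion"] if "erosion" in img else [],
    "nodules":           lambda img: ["nodule"] if "nodule" in img else [],
    "other_abnormality": lambda img: [t for t in ("polyp", "vasodilation") if t in img],
    "blood_component":   lambda img: ["blood"] if "blood" in img else [],
}

# An 'image' is represented here as the set of features it contains.
result = classify_image({"erosion", "blood"}, detectors)
```

Keeping the subsystems independent, as described above, lets each detector be trained on its own class without the interference observed when the classes were combined.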
- the main analytical results in the CNN system of Embodiment 4 include the detection of various abnormalities in the small intestine.
- The main analysis is a per-patient disease analysis. If at least one image of a particular abnormality was obtained from a patient's moving image, the detection result for that type of abnormality was defined as correct for that patient. For example, if the CNN acquired at least one mucosal damage image from a patient whose video contained multiple mucosal damage images, the CNN was determined to have accurately detected mucosal damage in that patient.
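The per-patient rule above (a patient counts as correctly detected for an abnormality type if at least one frame of that type is picked up) can be written directly; the frame data below is an illustrative assumption:

```python
def patient_detected(frame_detections, abnormality):
    """True if at least one frame from the patient's video was flagged
    with the given abnormality type (the per-patient criterion above)."""
    return any(abnormality in frame for frame in frame_detections)

# Illustrative per-frame detection results for one patient's video.
frames = [set(), {"mucosal_damage"}, set(), {"mucosal_damage", "blood"}]
```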
- the detection rate by QuickView mode installed in RAPID CE interpretation software v8.3 was also evaluated.
- QuickView mode picks up images containing significant lesions and enables fast-forward review of WCE video.
- the sampling rate can be set between 2% and 80% in QuickView mode.
- The sampling rate in QuickView mode was set according to the average acquisition rate of images by the CNN. By setting the threshold in this way, the CNN and QuickView modes reduced the number of images to the same extent, which allowed a direct comparison of detectability between the two systems.
- a patient-by-patient analysis was performed based on strict criteria for the detection rate of various abnormalities, taking into account the number of abnormalities (single or multiple) of each type.
- Under the strict criteria, correct detection for a patient with multiple lesions of a particular abnormality requires detecting multiple lesions, not just a single lesion. For example, if the CNN detects only one ulcer in a patient with multiple ulcers, under the strict criteria the CNN is determined not to have properly detected the ulcers in that patient.
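The strict criterion can likewise be sketched. Interpreting "multiple" as "at least two" is an assumption here; the source only states that a single detection is insufficient for a patient with multiple lesions:

```python
def strictly_correct(num_true_lesions, num_detected_lesions):
    """Strict per-patient criterion for one abnormality type: a patient
    with multiple true lesions requires more than one detected lesion,
    while a patient with a single true lesion requires that one detection.
    (Reading 'multiple' as at least two is an assumption.)"""
    if num_true_lesions >= 2:
        return num_detected_lesions >= 2
    return num_detected_lesions >= 1
```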
- Previous studies have suggested that information on whether lesions are single or multiple may be useful in investigating the etiology and management of small bowel disease (see Non-Patent Document 17); this rigorous criterion was therefore adopted as a sub-analysis.
- Detectability between the CNN and QuickView modes was compared using McNemar's test. Prior to the search by the two systems, abnormalities were diagnosed at the original institutions by two endoscopists, and a consensus review was conducted by two other endoscopists at the University of Tokyo Hospital as the gold standard. These reviews were limited to the small intestine section of each full WCE video. The size of mucosal damage was classified as erosion (< 5 mm) or ulceration (> 5 mm). Vasodilation lesions were classified into type 1a or 1b according to the Yano-Yamamoto classification (see Non-Patent Document 18). Type 1a lesions are characterized by punctate erythema (< 1 mm) with or without exudation, and type 1b lesions by patchy erythema (2-3 mm) with or without exudation.
- The secondary result was the classification of abnormalities detected by the CNN into the following four categories: (1) mucosal damage, (2) vasodilation, (3) elevated lesions including nodules, polyps, epithelial tumors, submucosal tumors, and vascular structures, and (4) blood components.
- Table 14 shows the patient characteristics of the verification data set used for the verification of the CNN system of the fourth embodiment.
- the validation dataset consisted of WCE moving images of the small intestine of 379 patients (male: 57%, mean age: 62.3 years, standard deviation: 17.5 years). The most common indication for WCE was unexplained gastrointestinal bleeding (39%).
- In Table 14, duplicate data are allowed; values given with "±" indicate mean ± standard deviation, and values in parentheses indicate percentages.
- The average reading speed of the CNN system of Embodiment 4 was 0.09 seconds per image. Since this CNN system picked up 1,135,104 images (22.5%) out of 5,050,226 verification images, the sampling rate in QuickView mode was set to 23%. The average number of small intestine images was 13,325 per video, and the average number of images picked up by the CNN system of Embodiment 4 was 2,295 per video (average rate: 22.5%, standard deviation: 8.7%). Representative images of various abnormalities correctly detected by the CNN system of Embodiment 4 are shown in FIG. 15 together with the respective definitive diagnoses. The rectangular frames in FIG. 15 indicate the regions of the diseased parts given by the endoscopists, and the numerical values show the probability scores given by the CNN.
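The matching of the QuickView sampling rate to the CNN's pickup rate follows from the counts reported above: 1,135,104 of 5,050,226 images is about 22.5%, and the nearest available whole-percent setting above it is 23%. Rounding up with `ceil` is an assumed rule; the source only states the resulting 23% setting:

```python
import math

picked = 1_135_104   # images picked up by the CNN (from the verification run above)
total = 5_050_226    # total verification images

pickup_rate = 100.0 * picked / total        # ~22.5 (percent)
quickview_setting = math.ceil(pickup_rate)  # next whole-percent setting: 23
```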
- FIG. 16A is a graph showing the difference in the detection rate of the whole disease
- FIG. 16B is a graph showing the difference in the detection rate of each of the four types of diseases such as mucosal disorder, vasodilation, elevated lesion and blood component
- FIG. 16C is a graph showing the difference in the detection rate for each disease when the diseases shown in FIG. 16B are further subdivided.
- FIGS. 17A to 17C are graphs corresponding to FIGS. 16A to 16C, respectively, under the strict criteria that take single or multiple lesions into account. In FIGS. 16 and 17, "*" indicates p < 0.05, "**" indicates p < 0.001, and "NS" indicates no significant difference.
- Mucosal disorders, vasodilation, elevated lesions, and blood components were detected in 94, 29, 81, and 23 patients, respectively (see Table 14).
- The detection rate of per-patient abnormalities by the CNN was significantly higher than that by QuickView mode (99% vs. 89%, p < 0.001), as shown in FIG. 16A.
- the detection rates of erosion, ulcer, vasodilator type 1a, vasodilator type 1b, polyps, nodules, submucosal tumors, vascular structure, epithelial tumors and blood components by the CNN system of Embodiment 4 are 100, respectively.
- The detection rate of per-patient abnormalities by the CNN based on the strict criteria was significantly higher than that in QuickView mode (98% vs. 85%, p < 0.001) (FIG. 17A).
- the detection rate of erosion, ulcer, vasodilation type 1a, vasodilation type 1b, polyps, nodules, submucosal tumors, vascular structure, epithelial tumors, and blood volume by CNN is 99% (69 / 69 /).
- FIG. 18 shows four false negative images that could not be detected by CNN in a patient-by-patient analysis based on strict criteria.
- FIG. 18D shows a submucosal tumor (n = 1). All of these images were correctly acquired in QuickView mode.
- Table 15 shows the correspondence between the results of the CNN of the fourth embodiment and the diagnosis results of the endoscopist.
- the numerical values in parentheses in Table 15 indicate the ratio to the whole.
- The concordance rate between the results of the CNN of Embodiment 4 and the diagnoses of the endoscopists was 95.7% for mucosal disorders, 75.9% for vasodilation, 98.8% for elevated lesions, and 100% for blood components.
- With the trained CNN system of Embodiment 4, various abnormalities in WCE moving images captured at multiple facilities could be detected with high sensitivity, as shown by direct comparison with the existing QuickView mode.
- The detection rate of per-patient abnormalities by the trained CNN system of Embodiment 4 was found to be significantly higher than that in QuickView mode (99% vs. 89%). That is, the trained CNN system of Embodiment 4 may be superior to the existing QuickView mode and can help reduce the burden on physicians without reducing the detection rate of abnormalities.
- In Embodiment 5, an example in which the method for diagnosing a disease by the endoscopic image of the present invention, the diagnosis support system, the diagnosis support program, and a computer-readable recording medium storing the diagnosis support program are applied to early gastric cancer using ME-NBI images will be described.
- A is a differentiated cancer (0-IIc, tub1)
- B is a differentiated cancer (0-IIc, tub2)
- C is a differentiated cancer (0-IIa, tub1)
- D to F indicate gastric gland mucosa
- G to I indicate pyloric gland mucosa
- J and K indicate patchy redness
- L indicates adenoma
- M indicates a xanthoma (yellow tumor)
- N indicates local atrophy
- O indicates ulcer scar.
- 0-IIc indicates a superficial depressed type
- 0-IIa indicates a superficial elevated type
- tub1 indicates a well-differentiated adenocarcinoma
- tub2 indicates a moderately differentiated adenocarcinoma.
- The CNN system of Embodiment 5 was developed by transfer learning utilizing a Deep Residual Network (ResNet-50, Non-Patent Document 21), a state-of-the-art CNN architecture pre-trained on the ImageNet database containing more than 14 million images.
- the same Caffe framework as in Embodiment 1 was used for training, verification, and testing of CNNs.
- In the transfer learning, the final classification layer was replaced with another fully connected layer, the network was retrained using the training dataset, and the parameters of all layers were fine-tuned.
- Each image was resized to 224 x 224 pixels. Rotated images were also used to increase the number of images.
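The augmentation step above (rotated copies to enlarge the training set) can be illustrated with plain-Python 90-degree rotations of a pixel grid. Real training would rotate the full-resolution endoscopic images; this toy grid is only an illustration:

```python
def rotate90(grid):
    """Rotate a 2-D pixel grid 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

def augment_with_rotations(image):
    """Return the image plus its 90-, 180-, and 270-degree rotations."""
    images = [image]
    for _ in range(3):
        images.append(rotate90(images[-1]))
    return images

tile = [[1, 2],
        [3, 4]]
augmented = augment_with_rotations(tile)  # 4 images from 1
```

Each original training image thus contributes several geometrically distinct samples, which is the stated purpose of the rotation step.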
- the CNN system of Embodiment 5 was trained and validated with a dataset of 5,574 ME-NBI images (early gastric cancer: 3977 images of 267 cases, non-cancerous images: 1777 images).
- The dataset was randomly divided at a ratio of 8:2 into a training dataset (4,460 images) and a verification dataset (1,114 images), and the CNN system of Embodiment 5 was constructed by training on them (see FIG. 20).
- Table 16 shows the definitions of the evaluation criteria (accuracy, sensitivity, specificity, PPV (positive predictive value) for early gastric cancer, NPV (negative predictive value) for early gastric cancer, false positives, and false negatives). In addition, to evaluate the accuracy of the CNN system of Embodiment 5, the area under the receiver operating characteristic (ROC) curve (AUC) was obtained. The overall test speed was defined as the time from the start to the end of the analysis of the test images, measured by timing incorporated in the CNN system of Embodiment 5.
- We also attempted to understand how the CNN system of Embodiment 5 recognizes an input image.
- Grad-CAM produces a coarse localization map that emphasizes important areas in the image to predict the target concept (in this case gastric cancer).
- A heat map was created from the position-identification data of the localization map.
- test dataset contained data that was later collected.
- Table 17 shows the characteristics of patients and lesions used in the training and test datasets.
- The test dataset included cases of Helicobacter pylori-negative gastric cancer (including uninfected and post-eradication cases) and cases with a small tumor diameter and lesions in the lower part (L) of the stomach.
- Table 18 shows the results of the VS classification system for early gastric cancer in the training and test datasets. There are no major differences in demarcation line, MVP (microvascular pattern), MSP (microsurface pattern), or diagnosis between the training and test datasets. There were 18 lesions that were not diagnosed as cancer by endoscopy. Of these, 8 lesions, 7/267 (2.6%) in the training dataset and 1/82 (1.2%) in the test dataset, were erroneously diagnosed as non-cancerous due to the lack of a demarcation line. The remaining 10 lesions, 6/267 (2.2%) in the training dataset and 4/82 (4.9%) in the test dataset, were erroneously diagnosed as non-cancerous due to the presence of a normal MVP and MSP, although a demarcation line was present. Compared with the training dataset, the test dataset may have contained more lesions with a normal MVP and MSP, but the difference is not significant.
- The overall test speed was 38.3 images/second (0.026 seconds/image).
- The performance of the CNN system of Embodiment 5 is shown in Table 19. The accuracy was 98.7%, with 2,271 of 2,300 images diagnosed correctly. The sensitivity, specificity, PPV, NPV, false positive rate, and false negative rate were 98% (1401/1430), 100% (870/870), 100% (1401/1401), 96.8% (870/899), 0% (0/870), and 2% (29/1430), respectively.
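As a consistency check, the percentages quoted above can be reproduced from the underlying counts implied by the reported fractions (TP = 1401, FN = 29, TN = 870, FP = 0) using the standard definitions:

```python
# Confusion-matrix counts taken from the fractions reported in Table 19.
tp, fn, tn, fp = 1401, 29, 870, 0

accuracy    = (tp + tn) / (tp + fn + tn + fp)  # 2271 / 2300
sensitivity = tp / (tp + fn)                   # 1401 / 1430
specificity = tn / (tn + fp)                   # 870 / 870
ppv         = tp / (tp + fp)                   # 1401 / 1401
npv         = tn / (tn + fn)                   # 870 / 899
```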
- the receiver operating characteristic curve (ROC) by the CNN system of the fifth embodiment was evaluated, the area under the curve (AUC) was 99% (see FIG. 21).
- A to C are examples of images diagnosed as false negatives by the CNN system of Embodiment 5; the lesions were diagnosed as intestinal metaplasia or gastritis.
- A is a differentiated cancer (0-IIc, tub1, after eradication of H. pylori), showing a normal MVP + MSP with a demarcation line
- B is a differentiated cancer (0-IIc, tub2, after eradication of H. pylori), showing a normal MVP + MSP with a demarcation line
- C is a differentiated cancer (0-IIa, tub1, after eradication of H. pylori), showing a normal MVP + MSP with a demarcation line.
- D is an example of an image with bleeding diagnosed as a false negative
- E is an example of an image with a low-magnification field and out of focus, also diagnosed as a false negative
- 0-IIc indicates a superficial depressed type
- 0-IIa indicates a superficial elevated type
- tub1 indicates a well-differentiated adenocarcinoma
- tub2 indicates a moderately differentiated adenocarcinoma.
- An example of a heat map is shown in FIG. 23.
- A is a differentiated cancer (0-IIc, tub1)
- B is a differentiated cancer (0-IIc, tub1)
- C is a patchy redness
- D is an adenoma.
- A′ to D′ are heat maps corresponding to A to D, respectively.
- 0-IIc indicates a superficial depressed type
- tub1 indicates a well-differentiated adenocarcinoma.
- the area determined to be cancer by the CNN system of Embodiment 5 was displayed in red, which coincided with the area determined to be cancer by the endoscopist.
- The image of the gastric adenoma and the image showing patchy redness were determined to be non-cancerous by the CNN system of Embodiment 5 and were not displayed in red on the heat map. These non-cancerous areas likewise coincided with the areas determined by the endoscopist to be non-cancerous.
- The CNN system of Embodiment 5 showed higher diagnostic accuracy than previously reported systems.
- the most important difference from the conventional one was the ME-NBI observation method.
- The maximum-magnification water-immersion method used for the CNN system of Embodiment 5 can eliminate halation and produce perfectly focused, clear images of uniform quality suitable for endoscopic diagnosis; such images are ideal for diagnostic support by the CNN system.
- The CNN system of Embodiment 5 can analyze 38.3 images per second, thus overcoming this technical limitation. It is therefore expected that the CNN system of Embodiment 5 can be applied to diagnosis using video images from procedures such as ME-NBI.
- In Embodiment 6, an example in which the method for supporting the diagnosis of a disease using an endoscopic image of the present invention, the diagnosis support system, the diagnosis support program, and a computer-readable recording medium storing the diagnosis support program are applied using ordinary white light imaging and other imaging modes will be described.
- NBI non-magnification narrow-band imaging
- Indigo indigocarmine dye application
- The stomach is roughly divided, from the inner surface side, into the mucosal layer (M), the submucosal layer (SM), the muscularis propria (MP), the subserosal layer (SS), and the serosa (SE).
- M mucosal layer
- SM submucosal layer
- MP muscularis propria
- SS subserosal layer
- SE serosa
- The Evis Lucera Spectrum system or Evis Lucera Elite system (Olympus, Tokyo, Japan) was mainly used for the endoscopic examinations, with high-resolution or high-definition endoscopes (GIF-Q240, GIF-Q240Z, GIF-H260, GIF-Q260, GIF-H260Z, GIF-PQ260, GIF-XP260N, GIF-H290, GIF-H290Z, GIF-XP290N; Olympus, Tokyo, Japan). All images that could show at least half of the entire lesion of interest were extracted. Where the video file was accessible, a total of two or more close-up and distant-view images were, where possible, extracted from the video for each of conventional white light imaging (WLI), non-magnification narrow-band imaging (NBI), and 0.1% indigo carmine dye imaging (Indigo).
- WLI white light imaging
- NBI non-magnification narrowband imaging
- Indigo 0.1% indigocarmine dyeing imaging
- the collected images were randomly divided into training and test datasets at a ratio of 4: 1 by computerized randomization.
- Facebook's PyTorch (https://pytorch.org/) deep learning framework was used to train, validate, and test the AI system of Embodiment 6.
- The AI systems of Embodiment 6 using WLI images, NBI images, and Indigo images are referred to as the AI system (WLI), the AI system (NBI), and the AI system (Indigo), respectively.
- the training dataset for the CNN algorithm includes 8217 WLI images (6030 for training and 2241 for verification), including 884 lesions (428 for training and 184 for verification), and 629 lesions (for training).
- The test dataset consisted of a total of 1,715 WLI images including 236 gastric cancer lesions, 575 NBI images including 158 lesions, and 639 Indigo images including 111 lesions. The images of the training and test datasets were mutually exclusive.
- In Embodiment 6, three independent AI systems were developed to predict the depth of gastric cancer invasion using WLI images, NBI images, and Indigo images, respectively.
- the AI system of the sixth embodiment was developed by transfer learning utilizing ResNet-50, which is a state-of-the-art CNN architecture, as in the case of the fifth embodiment.
- ResNet-50 is a state-of-the-art CNN architecture, as in the case of the fifth embodiment.
- In the transfer learning, the final classification layer was replaced with another fully connected layer, the network was retrained using the training dataset, and the parameters of all layers were fine-tuned. All images were resized to 224 x 224 pixels. To increase the number of images, the dataset was expanded by vertical flipping, horizontal flipping, and scaling. All layers of the CNN were trained using a stochastic gradient descent algorithm with a batch size of 64, a global learning rate of 0.001, and an epoch count of 100.
- The trained CNN-based AI system of Embodiment 6 is programmed to output probability scores (range: 0 to 1) for "M or SM1" and "SM2 or deeper".
- M indicates the case where the tip of the lesion stays in the mucosal layer
- SM1 indicates the case where the deepest part of the lesion stays within the submucosa at a depth of less than 500 μm
- SM2 or deeper indicates the case where the lesion infiltrates the submucosa to a depth of 500 μm or more.
- MP muscularis propria
- SS subserosal layer
- SE serosa
- SI infiltration
- Positive was defined as histologically proven cancer infiltrating to SM2 or deeper.
- Sensitivity, specificity, accuracy, positive predictive value and negative predictive value were calculated on an image-based and lesion-based basis for the three CNN-based AI systems of Embodiment 6.
- A lesion in the test images was counted as correctly diagnosed when the majority of the images of that lesion were classified correctly.
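The lesion-based rule above (a lesion counts as correctly diagnosed when the majority of its images are correct) can be written directly. A strict majority is assumed for even image counts, since the source does not specify tie-breaking:

```python
def lesion_correct(per_image_correct):
    """Majority vote over a lesion's images: the lesion counts as
    correctly diagnosed when more than half of its images are correct
    (a strict majority is assumed for ties)."""
    return sum(per_image_correct) > len(per_image_correct) / 2
```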
- Table 22 shows the detailed clinical features of the patients and lesions included in the training and test datasets.
- the distribution of depth of invasion is also shown in Table 22.
- the receiver operating characteristic curve (ROC) of the probability of invasion classification for the WLI test image is shown in FIG. 25A.
- the area under the curve (AUC) of the AI system (WLI) was 0.9590.
- the optimal cutoff value for the probability score was 0.5448. Diagnosis by the AI system (WLI) was made based on this probability score.
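The diagnosis rule based on this cutoff can be sketched directly: a probability score at or above 0.5448 is read as "SM2 or deeper", and otherwise as "M or SM1". Whether the boundary value itself falls on the positive side is an assumption here:

```python
CUTOFF_WLI = 0.5448  # optimal cutoff for the AI system (WLI), from the ROC analysis

def classify_depth(probability_score, cutoff=CUTOFF_WLI):
    """Map the CNN probability score to an invasion-depth class."""
    return "SM2 or deeper" if probability_score >= cutoff else "M or SM1"
```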
- the performance of the AI system (WLI) is shown in Table 23.
- The image-based sensitivity, specificity, accuracy, positive predictive value, and negative predictive value of the AI system (WLI) were 89.2%, 98.7%, 94.4%, 98.3%, and 91.7%, respectively.
- The lesion-based sensitivity, specificity, accuracy, positive predictive value, and negative predictive value of the AI system (WLI) were 84.4%, 99.4%, 94.5%, 98.5%, and 92.9%, respectively.
- the AUC of the AI system (NBI) was 0.9048, and the optimum cutoff value of the probability score was 0.4031.
- the AUC of the AI system (Indigo) was 0.9491, and the optimum cutoff value of the probability score was 0.6094.
- It was confirmed that whether the deepest part of a gastric cancer lesion infiltrates the submucosa to a depth of less than 500 μm (SM1) or 500 μm or more (SM2) could be diagnosed accurately, and therefore that it is possible to accurately determine whether ESD is the optimal treatment.
- Embodiment 7 A method for supporting the diagnosis of a disease by endoscopic images of the digestive organs using the CNN system of Embodiment 7 will be described with reference to FIG.
- the method for supporting the diagnosis of a disease by endoscopic images of the digestive organs using the CNN system of the first to sixth embodiments can be used.
- In step S1, the CNN system is trained/verified using first endoscopic images of the digestive organs and the definitive positive or negative diagnostic results of the digestive organ disease corresponding to the first endoscopic images.
- In step S2, based on a second endoscopic image of the digestive organs, the CNN system trained/verified in S1 outputs at least one of a positive and/or negative determination of the digestive organ disease and a probability score of being positive. The second endoscopic image is a newly observed or input endoscopic image.
- The second endoscopic image may be at least one of an image being taken by an endoscope, an image transmitted via a communication network, an image provided by a remote control system or a cloud-based system, an image recorded on a computer-readable recording medium, or a moving image.
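The two steps S1 (training/verification on first endoscopic images with definitive diagnoses) and S2 (outputting positivity and a probability score for a second endoscopic image) can be summarized in a minimal sketch. The trivial threshold model and the single-number "images" stand in for the trained CNN and are purely illustrative assumptions:

```python
class DiagnosisSupport:
    """Minimal stand-in for the CNN system used in steps S1 and S2."""

    def train(self, images, diagnoses):
        """S1: fit on first endoscopic images and their definitive results.
        Here the 'model' is just the mean feature value of positive cases."""
        positives = [img for img, dx in zip(images, diagnoses) if dx == "positive"]
        self.threshold = sum(positives) / len(positives)

    def predict(self, image):
        """S2: return positivity and a probability score for a second image."""
        score = min(1.0, max(0.0, image / (2.0 * self.threshold)))
        return ("positive" if score >= 0.5 else "negative"), score

support = DiagnosisSupport()
support.train([0.9, 0.8, 0.2, 0.1], ["positive", "positive", "negative", "negative"])
label, score = support.predict(0.95)
```

The interface mirrors the workflow: S1 consumes first images plus definitive diagnoses, and S2 returns at least a positive/negative determination together with a probability score for the second image.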
- Embodiment 8 The disease diagnosis support system based on endoscopic images of the digestive organs, the diagnosis support program based on endoscopic images of the digestive organs, and the computer-readable recording medium of Embodiment 8 will be described with reference to FIG. 27.
- the method for supporting the diagnosis of a disease by the endoscopic image of the digestive organ described in the seventh embodiment can be used.
- The disease diagnosis support system 1 using endoscopic images of the digestive organs includes an endoscopic image input unit 10, a computer 20 incorporating a CNN program, and an output unit 30.
- The computer 20 includes a first storage area 21 for storing first endoscopic images of the digestive organs, a second storage area 22 for storing at least one definitive diagnosis result corresponding to the first endoscopic images (positive or negative for the digestive organ disease, a past disease, a severity level, or information corresponding to the imaged site), and a third storage area 23 for storing the CNN program.
- The CNN program stored in the third storage area 23 is trained/verified using the first endoscopic images stored in the first storage area 21 and the definitive diagnosis results stored in the second storage area 22.
- Based on a second endoscopic image of the digestive organs input from the endoscopic image input unit 10, the trained/verified CNN program outputs to the output unit 30, for the second endoscopic image, at least one of a positive and/or negative determination of the digestive organ disease and a probability score of being positive.
- The second endoscopic image may be at least one of an image being taken by the endoscope, an image transmitted via a communication network, an image provided by a remote control system or a cloud-based system, an image recorded on a computer-readable recording medium, or a moving image.
- The disease diagnosis support system based on endoscopic images of the digestive organs of Embodiment 8 includes a diagnosis support program based on endoscopic images of the digestive organs that causes a computer to operate as each of these means.
- the diagnostic support program using endoscopic images of the digestive organs can be stored in a computer-readable recording medium.
Abstract
Description
The present invention relates to a method for supporting the diagnosis of a disease using endoscopic images of the digestive organs with a neural network, a diagnosis support system, a diagnosis support program, and a computer-readable recording medium storing the diagnosis support program.
Endoscopy is frequently performed on the digestive organs, for example the larynx, pharynx, esophagus, stomach, duodenum, biliary tract, pancreatic duct, small intestine, and large intestine. Endoscopy of the upper digestive organs is often performed for screening for gastric cancer, esophageal cancer, peptic ulcer, reflux gastritis, and the like, and endoscopy of the large intestine for screening for colorectal cancer, colorectal polyps, ulcerative colitis, and the like. In particular, endoscopy of the upper digestive organs is also useful for detailed examination of various upper abdominal symptoms, for close examination following a positive barium test for gastric disease, and for close examination of the abnormal serum pepsinogen levels commonly checked in Japan's periodic health examinations. In recent years, gastric cancer screening has been shifting from the conventional barium examination to gastric endoscopy.
Gastric cancer is one of the most common malignant tumors; as of several years ago, it was estimated to occur in about one million cases worldwide. In particular, 50% to 70% of gastric cancers in East Asian countries are detected as early gastric cancer. Endoscopic submucosal dissection (ESD) is a minimally invasive treatment for early gastric cancer, and according to the Japanese gastric cancer treatment guidelines it carries almost no risk of lymph node metastasis. Therefore, early detection and early treatment of gastric cancer are the best way to reduce mortality, and improving the accuracy of endoscopic diagnosis of early gastric cancer would contribute greatly to reducing the incidence of, and mortality from, gastric cancer.
Among the root causes of gastric cancer, infection with Helicobacter pylori (hereinafter sometimes referred to as "H. pylori") induces atrophic gastritis and intestinal metaplasia and ultimately leads to the development of gastric cancer. H. pylori is considered to contribute to 98% of non-cardia gastric cancers worldwide. In view of the increased risk of gastric cancer in patients infected with H. pylori and the decreased incidence of gastric cancer after H. pylori eradication, the International Agency for Research on Cancer has classified H. pylori as a definite carcinogen. Accordingly, eradication of H. pylori is useful for reducing the risk of developing gastric cancer; eradication of H. pylori with antibacterial agents is now covered by insurance-based medical care in Japan, and it is a treatment that will continue to be strongly encouraged from a public health standpoint. In fact, in February 2013 the Ministry of Health, Labour and Welfare of Japan approved health insurance coverage for the eradication treatment of patients with gastritis caused by H. pylori infection.
Gastric endoscopy provides extremely useful information for the differential diagnosis of the presence of H. pylori infection. A regular arrangement of collecting venules (RAC), in which the capillaries appear clean, and fundic gland polyps are characteristic of H. pylori-negative gastric mucosa, whereas atrophy, redness, mucosal swelling, and enlarged gastric folds are representative findings of H. pylori-infected gastritis. Patchy erythema is characteristic of gastric mucosa after H. pylori eradication. An accurate endoscopic diagnosis of H. pylori infection is confirmed by various tests such as measurement of anti-H. pylori IgG levels in blood or urine, fecal antigen measurement, the urea breath test, or the rapid urease test, and patients with positive test results can proceed to H. pylori eradication. Endoscopy is widely used to examine gastric lesions; if H. pylori infection could also be identified at the time gastric lesions are confirmed, without relying on analysis of clinical specimens, the burden on patients would be greatly reduced by eliminating blanket blood and urine testing, and a contribution to the medical economy could also be expected.
As described above, endoscopy of the upper digestive organs and of the large intestine has become widespread, but endoscopy of the small intestine is performed far less often because it is difficult to insert a general-purpose endoscope deep into the small intestine. A typical endoscope is about 2 m long, and to reach the small intestine it must be inserted either orally via the stomach and duodenum or transanally via the large intestine; moreover, the small intestine itself is a long organ of about 6-7 m, so insertion and observation over the entire small intestine are difficult with a general-purpose endoscope. Therefore, double-balloon endoscopy (see Patent Document 1) or wireless capsule endoscopy (hereinafter sometimes simply "WCE") (see Patent Document 2) is used for endoscopy of the small intestine.
In double-balloon endoscopy, a balloon provided at the distal end of the endoscope and a balloon provided at the distal end of an overtube covering the endoscope are inflated and deflated alternately or simultaneously, so that the examination proceeds while the long small intestine is gathered up, shortened, and straightened. Because the small intestine is so long, however, it is difficult to examine its entire length at once; examination of the small intestine by double-balloon endoscopy is therefore usually divided into two sessions, an oral endoscopy and a transanal endoscopy.
In endoscopy by WCE, the patient swallows an orally ingestible capsule containing a camera, flash, battery, transmitter, and so on; images captured while the capsule moves through the digestive tract are transmitted wirelessly to the outside, where they are received and recorded, making it possible to image the entire small intestine in a single examination.
The most common findings in the small intestine detected by WCE are mucosal injuries such as erosions and ulcers. These are mainly caused by nonsteroidal anti-inflammatory drugs (NSAIDs) and sometimes by Crohn's disease or malignant tumors of the small intestine, so early diagnosis and early treatment are needed. According to various previous reports, areas where the mucosa has been destroyed by erosion or ulceration of the small intestine show only a small color difference from the surrounding normal mucosa, so automatic detection by software has performed worse for them than for angioectasia (see Non-Patent Document 1).
Pharyngeal cancer is often detected at an advanced stage, and its prognosis is poor. Patients with advanced pharyngeal cancer require surgical resection and chemoradiotherapy, which cause both cosmetic problems and loss of swallowing and speech function, resulting in a significant reduction in quality of life. In conventional esophagogastroduodenoscopy (EGD), it was considered important for the endoscope to pass through the pharynx quickly to reduce patient discomfort, and observation of the pharynx was not established. Unlike in the esophagus, endoscopists cannot use iodine staining in the pharynx because of the risk of aspiration into the airway. Superficial pharyngeal cancer was therefore rarely detected.
In recent years, however, with the development of image-enhanced endoscopy such as narrow-band imaging (NBI) and the growing awareness of endoscopists, detection of pharyngeal cancer during esophagogastroduodenoscopy has increased. With the increasing detection of superficial pharyngeal cancer (SPC), endoscopic resection (ER) techniques established for the local resection of superficial gastrointestinal cancer, namely endoscopic submucosal dissection (ESD) and endoscopic mucosal resection (EMR), provide an opportunity to treat superficial pharyngeal cancer. Favorable short-term and long-term results of endoscopic submucosal dissection for superficial pharyngeal cancer, an ideal minimally invasive treatment that preserves the function and quality of life of patients, have also been reported.
White-light imaging (WLI) in esophagogastroduodenoscopy is the most sensitive method for the early detection of gastric cancer. However, accurate diagnosis of early gastric cancer can be difficult with WLI alone, especially for small lesions. Among narrow-band-imaging endoscopy techniques, magnifying endoscopy with narrow-band imaging (ME-NBI) is a recently developed image-enhanced endoscopy technique. In ME-NBI diagnosis of early gastric cancer, the vessel plus surface classification system (VSCS; a diagnostic system that differentiates cancer from non-cancer using anatomical structures and indicators visualized by narrow-band imaging) has been shown to be very useful for distinguishing early gastric cancer from non-cancer, allowing each cancer to be diagnosed with fewer biopsies (see Non-Patent Document 19).
Recently, the Magnifying Endoscopy Simple Diagnostic Algorithm for Early Gastric Cancer (MESDA-G) was proposed as a unified system for ME-NBI diagnosis of early gastric cancer and has become widely known (see Non-Patent Document 20). VSCS is an important diagnostic algorithm and serves as the basis of the MESDA-G algorithm. ME-NBI is believed to make a significant contribution to clinical practice, but these benefits have been reported mainly on the basis of results obtained by expert endoscopists, and acquiring the skill to diagnose by ME-NBI using VSCS requires considerable expertise and experience.
In such endoscopy of the digestive organs, many endoscopic images are collected, and double-checking of the endoscopic images by endoscopy specialists is mandatory for quality control. With tens of thousands of endoscopic screenings per year, the number of images each specialist reads in secondary interpretation is enormous, about 2,800 per hour, placing a heavy burden on clinical sites.
In examination of the small intestine by WCE in particular, the capsule moves not by its own motion but by intestinal peristalsis, so its movement cannot be controlled from the outside. To prevent oversights, a large number of images are therefore captured in a single examination; since the WCE spends about eight hours traveling through the small intestine, the number of images per examination is very large. For example, a WCE wirelessly transmits about 60,000 images per patient, so endoscopy specialists check them in fast-forward; because an abnormal finding may appear in only one or two frames, the average WCE image analysis requires 30-120 minutes of strict attention and concentration. Moreover, in recent years WCE has come to capture moving images as well, making the checking burden on endoscopy specialists extremely large.
Moreover, diagnosis based on these endoscopic images not only requires much time for training endoscopy specialists and for checking stored images, but is also subjective and may produce various false-positive and false-negative judgments. In addition, the accuracy of diagnosis by endoscopy specialists may deteriorate due to fatigue. Such a heavy burden on clinical sites and such a decline in accuracy may lead to limits on the number of examinees, raising the concern that medical services will not be provided sufficiently to meet demand.
In addition, the incidence of gastric cancer in patients who have never been infected with H. pylori or who have undergone eradication has recently been increasing. Endoscopic diagnosis of these gastric cancers is considered particularly difficult because they are tumors of very low-grade atypia and their surface is covered with non-neoplastic mucosa. Since the number of cases of early gastric cancer (EGC) that are difficult to diagnose endoscopically is expected to keep increasing, more accurate endoscopic diagnostic techniques for early gastric cancer using not only WLI but also ME-NBI are in demand. Furthermore, in early gastric cancer, advances in endoscopic submucosal dissection (ESD) have made en bloc resection of tumors possible regardless of size or the presence of ulcers, so a preoperative diagnosis of whether ESD is applicable is required.
The use of AI (artificial intelligence) is anticipated as a way to reduce the labor load and the decline in accuracy of the endoscopy described above. If AI, whose image-recognition ability has surpassed that of humans in recent years, can be used to assist endoscopy specialists, it is expected to improve the accuracy and speed of secondary interpretation work.
In recent years, AI using deep learning has attracted attention in various medical fields, and it has been reported that AI can screen medical images in place of specialists not only in radiation oncology, skin cancer classification, diabetic retinopathy (see Non-Patent Documents 2-4), and gastrointestinal endoscopy, especially fields including colonoscopy (see Non-Patent Documents 5-7), but also in various other medical fields. There are also patent documents describing medical image diagnosis using various forms of AI (see Patent Documents 3 and 4). However, whether the endoscopic image diagnostic capability of AI can satisfy the accuracy and performance (speed) required to be useful in actual clinical practice has not been sufficiently verified, and diagnosis based on various endoscopic images using AI has not yet been put to practical use.
Deep learning can learn high-order features from input data using a neural network composed of multiple stacked layers. Using the backpropagation algorithm, deep learning can update the internal parameters used to compute the representation of each layer from the representation of the previous layer by indicating how the device should change those parameters.
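As a minimal illustration of the backpropagation idea described above (not the patent's method), the following sketch updates a single weight by gradient descent on a squared-error loss; the gradient indicates how the parameter should change:

```python
# Toy gradient-descent update: fit y = 2 * x with one weight.
# The data, learning rate, and step count are illustrative assumptions.
def train_single_weight(xs, ys, lr=0.1, steps=100):
    w = 0.0
    for _ in range(steps):
        # Backward pass: gradient of the mean squared error w.r.t. w.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad  # parameter update indicated by the gradient
    return w

w = train_single_weight([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
print(round(w, 3))  # ≈ 2.0
```

In a real CNN the same update rule is applied layer by layer, with each layer's gradient computed from the layer above it.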
In associating medical images, deep learning can be trained with medical images accumulated in the past and can thus become a powerful machine-learning technique capable of obtaining a patient's clinical features directly from medical images. A neural network is a mathematical model that expresses the characteristics of the neural circuits of the brain by computer simulation, and the algorithmic approach underlying deep learning is the neural network. The convolutional neural network (CNN), developed by Szegedy et al., is the most common network architecture for deep learning on images.
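The core operation of a CNN layer can be sketched in a few lines; this is a toy forward pass in NumPy, not the patent's network, and the "endoscopic" patch and edge-detector kernel are assumptions for illustration:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D convolution: the local feature extraction a CNN
    layer applies across an image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified-linear activation applied after the convolution."""
    return np.maximum(x, 0.0)

# A toy 6x6 intensity patch with a bright vertical edge at column 3.
patch = np.zeros((6, 6))
patch[:, 3:] = 1.0

# A vertical-edge detector; stacking such layers lets a CNN learn
# progressively higher-order features, as described above.
edge_kernel = np.array([[-1.0, 1.0],
                        [-1.0, 1.0]])

features = relu(conv2d(patch, edge_kernel))
print(features.shape)  # (5, 5); the edge column responds strongly
```

In practice the kernels are not hand-designed as here but are the internal parameters learned by backpropagation.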
The inventors have already constructed CNN systems that can classify images of the esophagus, stomach, and duodenum according to anatomical site and can reliably find gastric cancer in endoscopic images (see Non-Patent Documents 8 and 9). Furthermore, the inventors have reported the role of a CNN in the diagnosis of H. pylori gastritis based on endoscopic images, showing that the capability of the CNN is comparable to that of experienced endoscopists and that the diagnosis time is considerably shortened (see Patent Document 5 and Non-Patent Document 10).
Because a CNN can process a large number of images automatically and rapidly, its application to WCE is also anticipated, and reasonably good results have already been achieved in several WCE fields (see Non-Patent Documents 11-13). However, many questions remain about applying a CNN to WCE images of the small intestine to diagnose its various diseases and bleeding or elevated lesions, and further improvement is needed. In particular, there is strong demand to reduce the burden on endoscopy specialists by applying a CNN to WCE moving images of the small intestine to automatically detect its various abnormalities.
The present invention has been made to solve the above problems of the prior art. That is, an object of the present invention is to provide a diagnosis support method, diagnosis support system, and diagnosis support program for diseases of the small intestine that can accurately identify the presence or absence of mucosal injury (erosion/ulcer), angioectasia, elevated lesions, or bleeding in the small intestine using a CNN system based on WCE endoscopic images or moving images of the small intestine, and a computer-readable recording medium storing the diagnosis support program.
Another object of the present invention is to provide a diagnosis support method, diagnosis support system, and diagnosis support program for early gastric cancer that can accurately diagnose early gastric cancer using a CNN system based on ME-NBI images, and a computer-readable recording medium storing the diagnosis support program.
A further object of the present invention is to provide a diagnosis support method, diagnosis support system, and diagnosis support program that, in diagnosing early gastric cancer using a CNN system, can diagnose the depth of invasion of a lesion more accurately and can accurately determine whether ESD is applicable, and a computer-readable recording medium storing the diagnosis support program.
A method for supporting the diagnosis of a disease from endoscopic images of the digestive organs using a CNN according to a first aspect of the present invention trains a CNN system using first endoscopic images of a digestive organ and at least one definitive diagnosis result, corresponding to the first endoscopic images, of information corresponding to positivity or negativity for a disease of the digestive organ; the trained CNN system detects the disease of the digestive organ based on a second endoscopic image of the digestive organ input from an endoscopic image input unit and outputs at least one of a region corresponding to positivity for the disease and a probability score. The method is characterized in that the endoscopic image is a WCE image of the small intestine and the trained CNN system outputs the region of an elevated lesion, as the disease, detected in a second WCE image input from the endoscopic image input unit.
According to the method for supporting the diagnosis of a disease from endoscopic images of the digestive organs using the CNN system of the first aspect of the present invention, because the CNN system is trained on first endoscopic images consisting of multiple small-intestine WCE images obtained in advance for each of multiple subjects and on the definitive positive or negative diagnosis results of the disease obtained in advance for each of those subjects, a region or probability score corresponding to positivity for the digestive-organ disease of a subject can be obtained in a short time with an accuracy substantially comparable to that of an endoscopy specialist, and the subjects who require a separate definitive diagnosis can be selected in a short time.
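The selection step described above can be sketched as a simple triage over per-image probability scores; the image names, scores, and the 0.5 threshold below are illustrative assumptions, not values from the patent:

```python
# Hypothetical per-image CNN outputs: (image_id, probability score that
# the image is disease-positive).
predictions = [
    ("img_001", 0.04),
    ("img_002", 0.92),
    ("img_003", 0.37),
    ("img_004", 0.81),
]

THRESHOLD = 0.5  # assumed cut-off for flagging an image for review

def triage(preds, threshold=THRESHOLD):
    """Return the images whose probability score meets the threshold,
    i.e. those a specialist should review for a definitive diagnosis."""
    return [img for img, score in preds if score >= threshold]

flagged = triage(predictions)
print(flagged)  # ['img_002', 'img_004']
```

In a deployed system the threshold would be chosen to balance the sensitivity and specificity reported for the trained CNN.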
Moreover, according to the method for supporting the diagnosis of a disease from endoscopic images of the digestive organs using the CNN system of the first aspect of the present invention, for small-intestine WCE endoscopic images of a large number of subjects, the presence or absence of an elevated lesion of the small intestine and, if present, its region and the probability score of that region can be obtained in a short time with an accuracy substantially comparable to that of an endoscopy specialist; the subjects who require a separate definitive diagnosis can thus be selected quickly, facilitating checking and correction by endoscopy specialists. The elevated lesions in this aspect include not only polyps but also nodules, epithelial tumors, submucosal tumors, vascular structures, and the like.
In the method for supporting the diagnosis of a disease from endoscopic images of the digestive organs using the CNN system according to a second aspect of the present invention, in the method according to the first aspect, the trained convolutional neural network system further displays the probability score of the disease within the second endoscopic image.
According to the method for supporting the diagnosis of a disease from endoscopic images of the digestive organs using the CNN system of the second aspect of the present invention, the region for which a definitive diagnosis result was obtained by an endoscopy specialist and the disease-positive region detected by the trained CNN system can be accurately compared within the second endoscopic image, so the sensitivity and specificity of the CNN can be improved.
In the method for supporting the diagnosis of a disease from endoscopic images of the digestive organs using the CNN system according to a third aspect of the present invention, in the method according to the second aspect, the trained convolutional neural network system displays, within the second endoscopic image, the region of the elevated lesion based on the definitive positive or negative diagnosis result of the disease of the small intestine and the region of the elevated lesion detected in the second endoscopic image by the trained convolutional neural network system, and the correctness of the diagnosis result of the trained convolutional neural network system is determined from the overlap between the region of the elevated lesion based on the definitive diagnosis result displayed in the second endoscopic image and the detected region of the elevated lesion.
According to the method for supporting the diagnosis of a disease from endoscopic images of the digestive organs using the CNN system of the third aspect of the present invention, because the region for which a definitive diagnosis result was obtained by an endoscopy specialist and the disease-positive region detected by the trained CNN system are both displayed within the second endoscopic image, the diagnosis result of the trained CNN can be evaluated immediately from the overlap of those regions.
In the method for supporting the diagnosis of a disease from endoscopic images of the digestive organs using the CNN system according to a fourth aspect of the present invention, in the method according to the third aspect, the diagnosis of the trained convolutional neural network system is determined to be correct when the overlap satisfies either of the following:
(1) the overlap covers 80% or more of the region of the elevated lesion based on the definitive diagnosis result; or
(2) when there are multiple disease-positive regions detected by the trained convolutional neural network system, any one of those regions overlaps the region of the elevated lesion based on the definitive diagnosis result.
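The two criteria above can be sketched as follows. The patent does not specify how regions are represented, so axis-aligned rectangles `(x1, y1, x2, y2)` are assumed here purely for illustration:

```python
def area(box):
    """Area of an axis-aligned box given as (x1, y1, x2, y2)."""
    return max(0, box[2] - box[0]) * max(0, box[3] - box[1])

def intersection_area(a, b):
    """Area of the overlap between two boxes (0 if disjoint)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    return area((x1, y1, x2, y2))

def diagnosis_is_correct(truth_box, detected_boxes):
    """Apply the two criteria of the fourth aspect:
    (1) a detection covers >= 80% of the confirmed lesion region, or
    (2) with multiple detections, at least one overlaps the lesion."""
    truth_area = area(truth_box)
    for det in detected_boxes:
        if intersection_area(truth_box, det) >= 0.8 * truth_area:
            return True  # criterion (1)
    if len(detected_boxes) > 1:
        return any(intersection_area(truth_box, det) > 0
                   for det in detected_boxes)  # criterion (2)
    return False

truth = (10, 10, 30, 30)  # confirmed lesion region
print(diagnosis_is_correct(truth, [(12, 12, 32, 32)]))   # True via (1)
print(diagnosis_is_correct(truth, [(50, 50, 60, 60),
                                   (25, 25, 40, 40)]))   # True via (2)
print(diagnosis_is_correct(truth, [(50, 50, 60, 60)]))   # False
```

The same logic applies to pixel-mask regions, with the box areas replaced by pixel counts of the mask intersections.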
According to the method for supporting the diagnosis of a disease from endoscopic images of the digestive organs using the CNN system of the fourth aspect of the present invention, the correctness of a diagnosis by the CNN system can be judged easily, which improves the diagnostic accuracy of the trained CNN system and clarifies the directions in which it must be improved.
In the method for supporting the diagnosis of a disease from endoscopic images of the digestive organs using the CNN system according to a fifth aspect of the present invention, in the method according to any of the first to fourth aspects, the trained convolutional neural network system determines whether the detected elevated lesion is a polyp, nodule, epithelial tumor, submucosal tumor, or vascular structure.
According to the method for supporting the diagnosis of a disease from endoscopic images of a digestive organ using the CNN system of the fifth aspect of the present invention, even for the large volume of small-intestine endoscopic images produced by wireless capsule endoscopy (WCE), the CNN system itself determines the specific type of the elevated lesion, so an endoscopist can select, in a short time, the subjects who require a separate definitive diagnosis.
The method for supporting the diagnosis of a disease from endoscopic images of a digestive organ using the CNN system of the sixth aspect of the present invention is a diagnostic support method in which a convolutional neural network system is trained using first endoscopic images of a digestive organ and, for each first endoscopic image, at least one definitive diagnosis result of information corresponding to a positive or negative finding of a disease of the digestive organ, and in which the trained convolutional neural network system detects the disease of the digestive organ based on a second endoscopic image of the digestive organ input from an endoscopic image input unit and outputs at least one of a region corresponding to a positive finding of the disease and a probability score, wherein the endoscopic image is a WCE image of the small intestine, and the trained convolutional neural network system outputs the probability score of bleeding, as the disease, detected in the second WCE image input from the endoscopic image input unit.
According to the diagnostic support method using the CNN system of the sixth aspect, images of the small intestine containing blood components can be distinguished from images of normal mucosa accurately and at high speed, which makes checking and correction by an endoscopist easier.
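As a minimal illustration of how the bleeding probability score could be consumed downstream, the sketch below flags frames whose score reaches a review threshold. The list-of-scores interface and the 0.5 threshold are assumptions for illustration; the text specifies only that a probability score is output.

```python
def flag_bleeding_frames(scores, threshold=0.5):
    """Return indices of WCE frames whose bleeding probability score
    is at or above the review threshold (threshold value illustrative)."""
    return [i for i, score in enumerate(scores) if score >= threshold]
```

Frames flagged this way could then be queued for the endoscopist's check/correction step mentioned above.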
The method for supporting the diagnosis of a disease from endoscopic images of a digestive organ using the CNN system of the seventh aspect of the present invention is a diagnostic support method in which a convolutional neural network system is trained using first endoscopic images of a digestive organ and, for each first endoscopic image, a definitive diagnosis result of information corresponding to a positive or negative finding of a disease of the digestive organ, and in which the trained convolutional neural network system detects the disease of the digestive organ based on a second endoscopic image of the digestive organ input from an endoscopic image input unit and outputs at least one of a region corresponding to a positive finding of the disease and a probability score, wherein the first endoscopic image is a WCE still image of the small intestine, the second endoscopic image is a WCE moving image of the small intestine, and the trained convolutional neural network system displays the disease region detected in the second WCE image input from the endoscopic image input unit.
According to the method for supporting the diagnosis of a disease from endoscopic images of a digestive organ using the CNN system of the seventh aspect of the present invention, because the CNN system has been accurately trained on WCE still images of the small intestine, substantially all diseases appearing in a WCE moving image of the small intestine can be picked up without being overlooked, so an endoscopist can select, in a short time, the subjects who require a separate definitive diagnosis. In this aspect, the WCE moving image of the small intestine includes not only continuous video captured by a so-called video camera but also sequences of still images captured at intervals short enough that they can be regarded substantially as a moving image.
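Applying a detector trained on still images to a moving image amounts to running it frame by frame and collecting the frames in which a disease region appears. The sketch below assumes a hypothetical `detect` callable that returns a list of detected regions per frame; the text does not define a detector interface.

```python
def screen_wce_video(frames, detect):
    """Apply a still-image-trained detector to each frame of a WCE
    moving image; return (frame index, regions) for flagged frames."""
    flagged = []
    for i, frame in enumerate(frames):
        regions = detect(frame)  # disease regions detected in this frame
        if regions:
            flagged.append((i, regions))
    return flagged
```

Only the flagged frames, a small fraction of a full WCE recording, would then need the endoscopist's attention.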
The method for supporting the diagnosis of a disease from endoscopic images of a digestive organ using the CNN system of the eighth aspect of the present invention is the diagnostic support method according to the seventh aspect, wherein the disease region is at least one of mucosal damage, vasodilation, an elevated lesion, and bleeding.
According to the method for supporting the diagnosis of a disease from endoscopic images of a digestive organ using the CNN system of the eighth aspect of the present invention, even a single CNN system can detect at least one of the diseases of mucosal damage, vasodilation, elevated lesions, and bleeding, individually or simultaneously, so an endoscopist can select, in a short time, the subjects who require a separate definitive diagnosis.
The method for supporting the diagnosis of a disease from endoscopic images of a digestive organ using the CNN system of the ninth aspect of the present invention is the diagnostic support method according to the eighth aspect, wherein the mucosal damage region is at least one of an erosion and an ulcer, the vasodilation region is at least one of vasodilation type 1a and vasodilation type 1b, and the elevated lesion region is at least one of a polyp, a nodule, a submucosal tumor, a vascular structure, and an epithelial tumor.
According to the method for supporting the diagnosis of a disease from endoscopic images of a digestive organ using the CNN system of the ninth aspect of the present invention, a single CNN system can not only detect at least one of the diseases of mucosal damage, vasodilation, elevated lesions, and bleeding, individually or simultaneously, but can also classify mucosal damage, vasodilation, and elevated lesions in finer detail, individually or simultaneously, so an endoscopist can select in detail, in a short time, the subjects who require a separate definitive diagnosis.
The method for supporting the diagnosis of a disease from endoscopic images of a digestive organ using the CNN system of the tenth aspect of the present invention is the diagnostic support method according to any one of the seventh to ninth aspects, wherein the trained CNN system is a composite CNN system comprising a first CNN system portion trained on still images whose definitive diagnosis is mucosal damage, a second CNN system portion trained on still images whose definitive diagnosis is vasodilation, a third CNN system portion trained on still images whose definitive diagnosis is an elevated lesion, and a fourth CNN system portion trained on still images whose definitive diagnosis is bleeding.
If a CNN system is trained on a dataset that mixes still images definitively diagnosed as mucosal damage with still images definitively diagnosed as vasodilation, the detectability of mucosal damage and nodules decreases. According to the method for supporting the diagnosis of a disease from endoscopic images of a digestive organ using the CNN system of the tenth aspect of the present invention, the CNN systems trained on still images definitively diagnosed as mucosal damage and as vasodilation are separated from the CNN systems trained on still images of the other definitively diagnosed diseases, so at least one of the four disease types, namely mucosal damage, vasodilation, elevated lesions, and bleeding, can be detected accurately.
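The composite arrangement can be sketched as four specialist detectors run side by side, with results merged per category. The per-part callable interface below is an assumption made for illustration; the text does not define one.

```python
class CompositeCNN:
    """Sketch of the composite system: one specialist detector per disease
    category, kept separate so the categories do not interfere in training."""

    def __init__(self, mucosal, vasodilation, elevated, bleeding):
        # each argument: a callable, image -> list of detected regions
        self.parts = {
            "mucosal_damage": mucosal,
            "vasodilation": vasodilation,
            "elevated_lesion": elevated,
            "bleeding": bleeding,
        }

    def detect(self, image):
        # run every specialist on the same image and merge by category
        return {name: part(image) for name, part in self.parts.items()}
```

Because each specialist sees only its own training distribution, the interference described above is avoided while a single `detect` call still covers all four categories.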
The method for supporting the diagnosis of a disease from endoscopic images of a digestive organ using the CNN system of the eleventh aspect of the present invention is a diagnostic support method in which a convolutional neural network system is trained using first endoscopic images of a digestive organ and, for each first endoscopic image, at least one definitive diagnosis result of information corresponding to a positive or negative finding of a disease of the digestive organ, and in which the trained convolutional neural network system detects the disease of the digestive organ based on a second endoscopic image of the digestive organ input from an endoscopic image input unit and outputs at least one of a region corresponding to a positive finding of the disease and a probability score, wherein the endoscopic image is an endoscopic image of the stomach obtained by water-immersion narrow-band imaging, and the trained convolutional neural network system outputs the region of early gastric cancer, as the disease, in the endoscopic image input from the endoscopic image input unit.
According to the method for supporting the diagnosis of a disease from endoscopic images of a digestive organ using the CNN system of the eleventh aspect of the present invention, endoscopic images of the stomach obtained by water-immersion narrow-band imaging exhibit little halation and provide sharp, fully focused images of uniform quality suitable for endoscopic diagnosis, so early gastric cancer can be diagnosed with high accuracy even in cases that are difficult to diagnose from conventional endoscopic images, such as gastric cancer after H. pylori eradication or H. pylori-uninfected gastric cancer.
The method for supporting the diagnosis of a disease from endoscopic images of a digestive organ using the CNN system of the twelfth aspect of the present invention is the diagnostic support method according to the eleventh aspect, wherein the trained convolutional neural network system has a function of displaying the region of early gastric cancer, as the disease, as a heat map.
According to the method for supporting the diagnosis of a disease from endoscopic images of a digestive organ using the CNN system of the twelfth aspect of the present invention, the heat map can be displayed in a color corresponding to the quantity of interest, here the probability value of gastric cancer, for example in a deeper red the higher the probability, so the possibly cancerous sites in the second endoscopic image and the magnitude of the cancer probability at each site can be confirmed at a glance.
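A heat-map overlay of this kind can be sketched by alpha-blending a red layer onto the image in proportion to the per-pixel probability. This is a generic sketch under assumed interfaces (a NumPy RGB image and a probability map in [0, 1]); the text does not specify the rendering method.

```python
import numpy as np

def cancer_heatmap_overlay(image_rgb, prob_map, alpha=0.5):
    """Blend a per-pixel cancer-probability map onto an RGB image:
    the higher the probability, the deeper the red, as described above."""
    assert image_rgb.shape[:2] == prob_map.shape
    overlay = image_rgb.astype(np.float32)
    red = np.zeros_like(overlay)
    red[..., 0] = 255.0                   # pure red layer
    w = (alpha * prob_map)[..., None]     # per-pixel blend weight
    blended = (1.0 - w) * overlay + w * red
    return blended.astype(np.uint8)
```

Pixels with zero probability are left untouched, so normal mucosa remains visible under the overlay.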
The method for supporting the diagnosis of a disease from endoscopic images of a digestive organ using the CNN system of the thirteenth aspect of the present invention is a diagnostic support method in which a convolutional neural network system is trained using first endoscopic images of a digestive organ and, for each first endoscopic image, at least one definitive diagnosis result of information corresponding to a positive or negative finding of a disease of the digestive organ, and in which the trained convolutional neural network system detects the disease of the digestive organ based on a second endoscopic image of the digestive organ input from an endoscopic image input unit and outputs at least one of a region corresponding to a positive finding of the disease and a probability score, wherein the endoscopic image is at least one selected from a white-light image, a non-magnified narrow-band image, and an indigo carmine dye-sprayed image of the stomach, and the trained convolutional neural network system outputs the invasion depth of the disease in the endoscopic image input from the endoscopic image input unit.
According to the method for supporting the diagnosis of a disease from endoscopic images of a digestive organ using the CNN system of the thirteenth aspect of the present invention, the invasion depth of the disease can be determined accurately, so whether the case is early gastric cancer or advanced gastric cancer can be diagnosed accurately.
The method for supporting the diagnosis of a disease from endoscopic images of a digestive organ using the CNN system of the fourteenth aspect of the present invention is the diagnostic support method according to the thirteenth aspect, wherein, as the invasion depth, the system outputs whether the submucosal invasion is less than 500 μm or 500 μm or more.
If the submucosal invasion depth of the disease is less than 500 μm, curative resection may be achieved by endoscopic submucosal dissection (ESD). According to the method for supporting the diagnosis of a disease from endoscopic images of a digestive organ using the CNN system of the fourteenth aspect of the present invention, the submucosal invasion depth can be diagnosed accurately, so cases of early gastric cancer that are eligible for ESD can be distinguished accurately from cases that require surgery.
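The 500 μm criterion maps the CNN's two-class depth output directly onto a treatment pathway. The probability interface and the 0.5 cutoff below are illustrative assumptions, not specified in the text, and real triage would of course remain a clinical decision.

```python
def triage_by_invasion_depth(p_lt_500um, cutoff=0.5):
    """Map the predicted probability that submucosal invasion is < 500 um
    onto the ESD-vs-surgery pathway described above (cutoff illustrative)."""
    return "ESD candidate" if p_lt_500um >= cutoff else "surgery candidate"
```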
The method for supporting the diagnosis of a disease from endoscopic images of a digestive organ using the CNN system of the fifteenth aspect of the present invention is the diagnostic support method according to any one of the first to fourteenth aspects, wherein the convolutional neural network is further combined with three-dimensional information from an X-ray computed tomography apparatus, an ultrasound computed tomography apparatus, or a magnetic resonance imaging apparatus.
Since an X-ray computed tomography apparatus, an ultrasound computed tomography apparatus, or a magnetic resonance imaging apparatus can represent the structure of each digestive organ three-dimensionally, combining such information with the output of the CNN system of any one of the first to fourteenth aspects makes it possible to grasp more accurately the site at which an endoscopic image was captured.
The method for supporting the diagnosis of a disease from endoscopic images of a digestive organ using the CNN system of the sixteenth aspect of the present invention is the diagnostic support method according to any one of the first to fourteenth aspects, wherein the second endoscopic image is at least one of an image being captured by an endoscope, an image transmitted via a communication network, an image provided by a remote operation system or a cloud-based system, an image recorded on a computer-readable recording medium, and a moving image.
According to the diagnostic support method using the CNN system of the sixteenth aspect, the probability or severity of each of a positive and a negative finding of a disease of the digestive organ can be output in a short time for an input second endoscopic image, so the method can be used regardless of the input format of the second endoscopic image, for example even for an image transmitted from a remote location or for a moving image. As the communication network, the well-known Internet, an intranet, an extranet, a LAN, ISDN, a VAN, a CATV communication network, a virtual private network, a telephone network, a mobile communication network, a satellite communication network, or the like can be used. The transmission media constituting the communication network may likewise be well-known wired media such as an IEEE 1394 serial bus, USB, power-line carrier, cable television line, telephone line, or ADSL line, or wireless media such as infrared, Bluetooth (registered trademark), IEEE 802.11, a mobile telephone network, a satellite link, or a terrestrial digital network. These allow the method to be used in the form of a so-called cloud service or remote support service.
As the computer-readable recording medium, well-known media can be used: tape media such as magnetic tape and cassette tape; disk media including magnetic disks such as floppy (registered trademark) disks and hard disks, and optical discs such as CD-ROM, MO, MD, DVD, and CD-R; card media such as IC cards, memory cards, and optical cards; and semiconductor memory such as mask ROM, EPROM, EEPROM, and flash ROM. These provide a form in which the system can easily be ported to or installed in a medical institution or a health screening institution.
Furthermore, the disease diagnosis support system using endoscopic images of a digestive organ according to the seventeenth aspect of the present invention comprises an endoscopic image input unit, an output unit, and a computer incorporating a CNN, wherein the computer comprises a first storage area that stores first endoscopic images of a digestive organ, a second storage area that stores, for each first endoscopic image, definitive diagnosis results of information corresponding to a positive or negative finding of the disease of the digestive organ, and a third storage area that stores the CNN program; the CNN program is trained on the first endoscopic images stored in the first storage area and the definitive diagnosis results stored in the second storage area, and, based on a second endoscopic image of the digestive organ input from the endoscopic image input unit, outputs to the output unit information corresponding to a positive or negative finding of the disease of the digestive organ for the second endoscopic image; the endoscopic image is a WCE endoscopic image of the small intestine, and the trained CNN program outputs the region of an elevated lesion, as the disease, in the WCE endoscopic image input from the endoscopic image input unit.
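The three-storage-area structure of this system can be sketched as a small container class: training images, definitive diagnoses, and the trained CNN program each occupy one slot, and diagnosis routes an input image through the trained program to the output. The trainer/detector callables are assumed interfaces for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class DiagnosisSupportSystem:
    """Sketch of the system described above: three storage areas plus
    train/diagnose operations. Detector interfaces are assumptions."""
    training_images: list = field(default_factory=list)  # first storage area
    diagnoses: list = field(default_factory=list)        # second storage area
    cnn_program: object = None                           # third storage area

    def train(self, trainer):
        # trainer: callable (images, diagnoses) -> trained detector
        self.cnn_program = trainer(self.training_images, self.diagnoses)

    def diagnose(self, second_image):
        # output unit: regions of elevated lesions found in the input image
        return self.cnn_program(second_image)
```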
Further, the disease diagnosis support system using endoscopic images of a digestive organ according to the eighteenth aspect of the present invention is the system according to the seventeenth aspect, wherein the trained CNN program further displays the probability score of the disease within the second endoscopic image.
Further, the disease diagnosis support system using endoscopic images of a digestive organ according to the nineteenth aspect of the present invention is the system according to the seventeenth aspect, wherein the CNN program displays, within the second endoscopic image, the elevated lesion region based on the positive or negative definitive diagnosis result of the disease of the small intestine and the elevated lesion region detected by the trained CNN system, and determines the correctness of the diagnosis result from the overlap between the elevated lesion region based on the definitive diagnosis result displayed within the second endoscopic image and the detected elevated lesion region.
Further, the disease diagnosis support system using endoscopic images of a digestive organ according to the twentieth aspect of the present invention is the system according to the seventeenth aspect, wherein the diagnosis made by the trained CNN system is determined to be correct when the overlap satisfies either of the following conditions:
(1) the overlap covers 80% or more of the area of the elevated lesion region based on the definitive diagnosis result; or
(2) when the trained CNN system detects a plurality of disease-positive regions, at least one of those regions overlaps the elevated lesion region based on the definitive diagnosis result.
Further, the disease diagnosis support system using endoscopic images of a digestive organ according to the twenty-first aspect of the present invention is the system according to the seventeenth aspect, wherein the trained CNN program displays, within the second image, whether the elevated lesion is a polyp, a nodule, an epithelial tumor, a submucosal tumor, or a vascular structure.
Further, the disease diagnosis support system using endoscopic images of a digestive organ according to the twenty-second aspect of the present invention comprises an endoscopic image input unit, an output unit, and a computer incorporating a CNN, wherein the computer comprises a first storage area that stores first endoscopic images of a digestive organ, a second storage area that stores, for each first endoscopic image, definitive diagnosis results of information corresponding to a positive or negative finding of the disease of the digestive organ, and a third storage area that stores the CNN program; the CNN program is trained on the first endoscopic images stored in the first storage area and the definitive diagnosis results stored in the second storage area, and, based on a second endoscopic image of the digestive organ input from the endoscopic image input unit, outputs to the output unit information corresponding to a positive or negative finding of the disease of the digestive organ for the second endoscopic image; the endoscopic image is a WCE image of the small intestine, and the trained CNN program displays the probability score of bleeding, as the disease, within the second image.
Further, the disease diagnosis support system using endoscopic images of a digestive organ according to the twenty-third aspect of the present invention comprises an endoscopic image input unit, an output unit, and a computer incorporating a CNN, wherein the computer comprises a first storage area that stores first endoscopic images of a digestive organ, a second storage area that stores, for each first endoscopic image, definitive diagnosis results of information corresponding to a positive or negative finding of the disease of the digestive organ, and a third storage area that stores the CNN program; the CNN program is trained on the first endoscopic images stored in the first storage area and the definitive diagnosis results stored in the second storage area, and, based on a second endoscopic image of the digestive organ input from the endoscopic image input unit, outputs to the output unit information corresponding to a positive or negative finding of the disease of the digestive organ for the second endoscopic image; the first endoscopic image is a WCE still image of the small intestine, the second endoscopic image is a WCE moving image of the small intestine, and the trained CNN program displays the disease region detected in the second WCE image input from the endoscopic image input unit.
A disease diagnosis support system based on endoscopic images of a digestive organ using a CNN system according to the 24th aspect of the present invention is the system according to the 23rd aspect, characterized in that the region of the disease is at least one of mucosal damage, angioectasia, a protruding lesion, and bleeding.
A disease diagnosis support system based on endoscopic images of a digestive organ using a CNN system according to the 25th aspect of the present invention is the system according to the 24th aspect, characterized in that the region of mucosal damage is at least one of an erosion and an ulcer, the region of angioectasia is at least one of angioectasia type 1a and angioectasia type 1b, and the region of the protruding lesion is at least one of a polyp, a nodule, a submucosal tumor, a vascular structure, and an epithelial tumor.
A disease diagnosis support system based on endoscopic images of a digestive organ using a CNN system according to the 26th aspect of the present invention is the system according to any of the 23rd to 25th aspects, characterized in that the trained CNN system is a composite CNN system comprising a first CNN system part trained on still images with definitive diagnoses of mucosal damage, a second CNN system part trained on still images with definitive diagnoses of angioectasia, a third CNN system part trained on still images with definitive diagnoses of protruding lesions, and a fourth CNN system part trained on still images with definitive diagnoses of bleeding.
A disease diagnosis support system based on endoscopic images of a digestive organ using a CNN system according to the 27th aspect of the present invention comprises an endoscopic image input unit, an output unit, and a computer incorporating a convolutional neural network, wherein
the computer comprises:
a first storage area that stores first endoscopic images of the digestive organ;
a second storage area that stores definitive diagnosis results, corresponding to the first endoscopic images, of information on the positivity or negativity of the disease of the digestive organ; and
a third storage area that stores the convolutional neural network program,
and the convolutional neural network program:
is trained on the first endoscopic images stored in the first storage area and the definitive diagnosis results stored in the second storage area; and
based on a second endoscopic image of a digestive organ input from the endoscopic image input unit, outputs to the output unit information corresponding to the positivity or negativity of the disease of the digestive organ for the second endoscopic image,
the system being characterized in that the endoscopic images are endoscopic images of the stomach obtained by water-immersion narrow-band imaging, and the trained convolutional neural network program outputs the region of early gastric cancer as the disease in the endoscopic image input from the endoscopic image input unit.
A disease diagnosis support system based on endoscopic images of a digestive organ using a CNN system according to the 28th aspect of the present invention is the system according to the 27th aspect, characterized in that the trained convolutional neural network program has a function of displaying the region of early gastric cancer as the disease as a heat map.
A disease diagnosis support system based on endoscopic images of a digestive organ using a CNN system according to the 29th aspect of the present invention comprises an endoscopic image input unit, an output unit, and a computer incorporating a convolutional neural network, wherein the computer comprises a first storage area that stores first endoscopic images of the digestive organ; a second storage area that stores definitive diagnosis results, corresponding to the first endoscopic images, of information on the positivity or negativity of the disease of the digestive organ; and a third storage area that stores the convolutional neural network program. The convolutional neural network program is trained on the first endoscopic images stored in the first storage area and the definitive diagnosis results stored in the second storage area, and, based on a second endoscopic image of a digestive organ input from the endoscopic image input unit, outputs to the output unit information corresponding to the positivity or negativity of the disease of the digestive organ for the second endoscopic image. The system is characterized in that the endoscopic image is at least one selected from a white-light image, a non-magnified narrow-band image, and an indigo carmine dye-sprayed image of the stomach, and the trained convolutional neural network program outputs the invasion depth of the disease for the endoscopic image input from the endoscopic image input unit.
A disease diagnosis support system based on endoscopic images of a digestive organ using a CNN system according to the 30th aspect of the present invention is the system according to the 29th aspect, characterized in that the trained convolutional neural network program has a function of outputting, as the invasion depth of the disease, whether submucosal invasion is less than 500 μm or 500 μm or more.
A disease diagnosis support system based on endoscopic images of a digestive organ using a CNN system according to the 31st aspect of the present invention is the system according to any of the 17th to 30th aspects, characterized in that the CNN program is further combined with three-dimensional information from an X-ray computed tomography apparatus, an ultrasound computed tomography apparatus, or a magnetic resonance imaging apparatus.
A disease diagnosis support system based on endoscopic images of a digestive organ using a CNN system according to the 32nd aspect of the present invention is the system according to any of the 17th to 31st aspects, characterized in that the second endoscopic image is at least one of an image being captured by an endoscope, an image transmitted via a communication network, an image provided by a remote operation system or a cloud-based system, an image recorded on a computer-readable recording medium, and a video.
According to the disease diagnosis support systems based on endoscopic images of a digestive organ of any of the 17th to 32nd aspects of the present invention, effects equivalent to those of the disease diagnosis support methods using a CNN according to any of the 1st to 16th aspects of the present invention can be obtained.
A diagnosis support program based on endoscopic images of a digestive organ using a CNN system according to the 33rd aspect of the present invention is characterized in that it causes a computer to operate as each of the means in the disease diagnosis support system based on endoscopic images of a digestive organ according to any of the 17th to 32nd aspects of the present invention.
According to the diagnosis support program based on endoscopic images of a digestive organ of the 33rd aspect of the present invention, a diagnosis support program can be provided that causes a computer to operate as each of the means in the disease diagnosis support system based on endoscopic images of a digestive organ according to any of the 17th to 32nd aspects.
Furthermore, a computer-readable recording medium according to the 34th aspect of the present invention is characterized in that it stores the diagnosis support program based on endoscopic images of a digestive organ using a CNN system according to the 33rd aspect of the present invention.
According to the computer-readable recording medium of the 34th aspect of the present invention, a computer-readable recording medium storing the diagnosis support program based on endoscopic images of a digestive organ according to the 33rd aspect can be provided.
As described above, according to the present invention, a program incorporating a CNN is trained on small-intestinal WCE images or various endoscopic images of the stomach obtained in advance for each of a plurality of subjects, together with definitive positive or negative diagnoses of the disease obtained in advance for each of those subjects. Regions or probability scores corresponding to disease positivity in a subject's digestive organ can therefore be obtained in a short time, with accuracy substantially comparable to that of an endoscopy specialist, and subjects who require a separate definitive diagnosis can be identified quickly.
Hereinafter, the disease diagnosis support method, diagnosis support system, diagnosis support program, and computer-readable recording medium storing the diagnosis support program based on endoscopic images of a digestive organ according to the present invention are described in detail, taking as an example the use of a wireless capsule endoscope (WCE). However, the embodiments shown below are examples for embodying the technical idea of the present invention and are not intended to limit the present invention to these cases; the present invention applies equally to other embodiments falling within the scope of the claims.
[Embodiment 1]
In Embodiment 1, an example is described in which the disease diagnosis support method, diagnosis support system, diagnosis support program, and computer-readable recording medium storing the program based on endoscopic images according to the present invention are applied, using WCE, to erosions/ulcers of the small intestine. In Embodiment 1, because erosions and ulcers were difficult to distinguish, the two are collectively denoted "erosion/ulcer". That is, the term "erosion/ulcer" in this specification is used to mean not only "erosion", "ulcer", and "erosion and ulcer", but also lesions for which it is unclear whether the lesion is an erosion or an ulcer, but which are at least not normal mucosa.
[Datasets]
At a clinic to which one of the inventors belongs, 5,360 images of small-intestinal erosions/ulcers were collected as a training dataset from 115 patients who underwent WCE between October 2009 and December 2014. For validation of the CNN system of Embodiment 1, 10,440 independent images from 65 patients examined between January 2015 and January 2018 were prepared and used as a validation dataset. Of these validation images, 440 images from 45 patients were diagnosed by three endoscopy specialists as containing small-intestinal erosions/ulcers, and 10,000 images from 20 patients as showing normal small-intestinal mucosa. WCE was performed using Pillcam (registered trademark) SB2 or SB3 WCE devices (Given Imaging, Yoqneam, Israel).
To train/validate the CNN system of Embodiment 1, all patient information accompanying the images was anonymized prior to algorithm development, and none of the endoscopists involved in the CNN system of Embodiment 1 could access identifiable patient information. Because this training/validation was a retrospective study using anonymized data, an opt-out approach was adopted for patient consent. The study was approved by the Ethics Committee of the University of Tokyo (No. 11931) and the Japan Medical Association (ID: JMA-IIA00283). An overview of the flowchart of the CNN system of Embodiment 1 is shown in FIG. 1.
The main indication for WCE was obscure gastrointestinal bleeding (OGIB); other indications included abnormal small-intestinal images observed with other medical devices, abdominal pain, follow-up of past small-intestinal cases, and referrals from primary care physicians for diarrhea or screening. The most common etiology was use of non-steroidal anti-inflammatory drugs, followed by inflammatory bowel disease, small-intestinal malignant tumors, and anastomotic ulcers, although in many cases the etiology could not be determined. Table 1 shows the patient characteristics of the datasets used for training and validation of the CNN system.
[Training/Validation and Algorithm]
As shown in FIG. 2, the CNN system used in Embodiment 1 was trained using backpropagation. To construct the CNN system of Embodiment 1, a deep neural network architecture called the Single Shot MultiBox Detector (SSD, https://arxiv.org/abs/1512.02325) was used without modifying the algorithm. First, two endoscopy specialists manually annotated all regions of erosions/ulcers in the images of the training dataset with rectangular bounding boxes. These images were fed into the SSD architecture through the Caffe framework, originally developed at the Berkeley Vision and Learning Center; Caffe is one of the earliest developed and most widely used frameworks.
The CNN system of Embodiment 1 was trained such that the region inside each bounding box was an erosion/ulcer region and all other regions were background. The CNN system then extracted the specific features of the bounding-box regions by itself and learned the features of erosions/ulcers through the training dataset. All layers of the CNN were stochastically optimized with a global learning rate of 0.0001. Each image was resized to 300×300 pixels, and the bounding boxes were resized accordingly. These values were set by trial and error to ensure that all data were compatible with the SSD. An Intel Core i7-7700K was used as the CPU and an NVIDIA GeForce GTX 1070 as the graphics processing unit (GPU).
[Outcome Measures and Statistics]
First, all erosions/ulcers in the images of the validation dataset were manually annotated with rectangular bounding boxes drawn in thick lines (hereinafter, "true boxes"). The trained CNN system of Embodiment 1 then drew a rectangular bounding box in thin lines (hereinafter, a "CNN box") around each erosion/ulcer region it detected in the validation images, and output a probability score (range 0-1) for erosion/ulcer. A higher probability score indicates that the trained CNN system of Embodiment 1 judged the region more likely to contain an erosion/ulcer.
The inventors evaluated the ability of the CNN system of Embodiment 1 to determine whether each image contained an erosion/ulcer. The following definitions were used for this evaluation:
1) A detection was considered correct when a CNN box overlapped a true box by 80% or more.
2) When multiple CNN boxes were present in one image and even one of them correctly detected an erosion/ulcer, the image was concluded to have been correctly identified.
WCE endoscopic images judged correct in this way can be used as a diagnostic aid, for example by attaching that information to the captured images for use when they are double-checked, or by displaying the information in real time on video during WCE endoscopy.
By varying the cutoff value of the probability score, a receiver operating characteristic (ROC) curve was plotted, and the area under the curve (AUC) was calculated to evaluate erosion/ulcer discrimination by the trained CNN system of Embodiment 1. Using various cutoff values for the probability score, including the value given by the Youden index, the sensitivity, specificity, and accuracy of the CNN system of Embodiment 1 in detecting erosions/ulcers were calculated. The Youden index is one of the standard methods for determining an optimal cutoff value from sensitivity and specificity; it selects the cutoff value that maximizes "sensitivity + specificity − 1". Data were statistically analyzed using STATA software (version 13; Stata Corp, College Station, TX, USA).
The validation dataset consisted of 10,440 images from 65 patients (male = 62%, mean age = 57 years, standard deviation (SD) = 19 years). The trained CNN system of Embodiment 1 took 233 seconds to evaluate these images, corresponding to a rate of 44.8 images per second. The AUC of the trained CNN system of Embodiment 1 for detecting erosions/ulcers was 0.960 (95% confidence interval [CI], 0.950-0.969; see FIG. 3).
According to the Youden index, the optimal cutoff value of the probability score was 0.481, and regions with a probability score of 0.481 or higher were recognized as erosions/ulcers by the CNN. At that cutoff value, the sensitivity, specificity, and accuracy of the CNN were 88.2% (95% CI (confidence interval), 84.8-91.0%), 90.9% (95% CI, 90.3-91.4%), and 90.8% (95% CI, 90.2-91.3%), respectively (see Table 2). Table 2 shows the sensitivity, specificity, and accuracy calculated with the cutoff value of the probability score increased in steps of 0.1 from 0.2 to 0.9.
Table 3 summarizes the relationship between the erosion/ulcer classification results of the trained CNN system of Embodiment 1 at the probability-score cutoff value of 0.481 and the classification results of the endoscopy specialists.
FIGS. 4A-4D each show representative regions correctly detected by the CNN system, and FIGS. 5A-5H each show typical regions misclassified by the CNN system. As shown in Table 4, the false-negative images were classified into four causes: unclear boundaries (see FIG. 5A); color similar to the surrounding normal mucosa; lesions that were too small; and lesions that could not be observed in their entirety, being either lateral (hard to see because the affected area is on the side) or partial (only partially visible) (see FIG. 5B).
On the other hand, as shown in Table 5, the false-positive images were classified into five causes: normal mucosa, bubbles (FIG. 5C), debris (FIG. 5D), vasodilation (FIG. 5E), and true erosions (FIGS. 5F-5H).
As described above, according to the trained CNN system of Embodiment 1, a CNN-based program for automatic detection of erosions and ulcers in small-intestinal WCE images was constructed, and it was shown that erosions/ulcers in independent test images could be detected with a high accuracy of 90.8% (AUC, 0.960).
[Embodiment 2]
Embodiment 2 describes a disease diagnosis support method, diagnosis support system, diagnosis support program, and computer-readable recording medium storing the program for protruding lesions of the small intestine in wireless capsule endoscopy (WCE) images. The morphological features of protruding lesions vary from polyps, nodules, and masses/tumors to vascular structures, and the etiologies of these lesions include neuroendocrine tumors, adenocarcinoma, familial adenomatous polyposis, Peutz-Jeghers syndrome, follicular lymphoma, and gastrointestinal stromal tumors. Because these lesions require early diagnosis and treatment, overlooking them in WCE examinations must be avoided.
[Datasets]
At the clinics to which the inventors belong, 30,584 images of protruding lesions were collected as a training dataset from 292 patients who underwent WCE between October 2009 and May 2018. Independently of the images used for CNN training, a total of 17,507 images from 93 patients, comprising 10,000 images without protruding lesions and 7,507 images with protruding lesions, were used for validation. The protruding lesions were morphologically classified into five categories, polyps, nodules, epithelial tumors, submucosal tumors, and vascular structures, based on the definitions of the CEST classification (see Non-Patent Document 14). However, mass/tumor lesions in the CEST classification definition were divided into epithelial tumors and submucosal tumors.
[Training/Validation and Algorithm]
An overview of the flowchart of the CNN system of Embodiment 2 is shown in FIG. 6. The CNN system of Embodiment 2 used the same SSD deep neural network architecture and Caffe framework as in Embodiment 1. First, six endoscopy specialists manually annotated all regions of protruding lesions in the images of the training dataset with rectangular bounding boxes. The annotations were performed individually by each endoscopy specialist, and a consensus was determined afterward. These images were fed into the SSD architecture through the Caffe framework.
The CNN system of Embodiment 2 was trained such that the regions inside the bounding boxes were protruding lesions and all other regions were background. The CNN system then extracted the specific features of the bounding-box regions by itself and "learned" the features of protruding lesions through the training dataset. All layers of the CNN were stochastically optimized with a global learning rate of 0.0001. Each image was resized to 300×300 pixels, and the bounding boxes were resized accordingly. These values were set by trial and error to ensure that all data were compatible with the SSD. An Intel Core i7-7700K was used as the CPU and an NVIDIA GeForce GTX 1070 as the GPU. WCE was performed using the same Pillcam SB2 or SB3 WCE devices as in Embodiment 1. Data were analyzed using STATA software (version 13; Stata Corp, College Station, TX, USA).
To train/validate the CNN system of Embodiment 2, all patient information accompanying the images was anonymized prior to algorithm development, and none of the endoscopists involved in the CNN system of Embodiment 2 could access identifiable patient information. Because this training/validation was a retrospective study using anonymized data, an opt-out approach was adopted for patient consent. The study was approved by the Ethics Committee of the Japan Medical Association (ID: JMA-IIA00283), Sendai Kousei Hospital (No. 30-5), the University of Tokyo Hospital (No. 11931), and Hiroshima University Hospital (No. E-1246).
[Outcome Measures and Statistics]
First, all regions of protruding lesions in the images of the validation dataset were manually annotated with rectangular bounding boxes drawn in thick lines (hereinafter, "true boxes"). The trained CNN system of Embodiment 2 then drew a rectangular bounding box in thin lines (hereinafter, a "CNN box") around each protruding-lesion region it detected in the validation images, and output a probability score (PS; range 0-1) for the region. A higher probability score indicates that the trained CNN system of Embodiment 2 judged the region more likely to contain a protruding lesion.
The inventors evaluated the ability of the CNN system of Embodiment 2 to determine whether each image contained a protruding lesion by examining the CNN boxes in descending order of probability score within each image. The CNN box, the probability score of the protruding lesion, and the category of the protruding lesion were taken as the CNN result when the CNN box clearly surrounded the protruding lesion.
When visual judgment was difficult because many CNN boxes were drawn, a CNN box was taken as the CNN result if its overlap with the true box was equal to or greater than 0.05 in Intersection over Union (IoU). IoU is an evaluation metric for measuring the accuracy of an object detector and is calculated by dividing the overlapping area of the two boxes by the area of their union:
IoU = (overlap area) / (union area)
When a CNN box did not satisfy the above rules, the CNN box with the next lower probability score was evaluated in turn.
When multiple true boxes appeared in one image, a CNN box that overlapped any true box was determined to be the CNN result. For images without protruding lesions, the CNN box with the highest probability score was taken as the CNN result. Three endoscopy specialists performed these tasks on all images.
By varying the cutoff value of the probability score, a receiver operating characteristic (ROC) curve was plotted, and the area under the curve (AUC) was calculated to evaluate the degree of protruding-lesion discrimination by the trained CNN system of Embodiment 2 (see FIG. 7). Then, in the same manner as in Embodiment 1, using various cutoff values for the probability score, including the value given by the Youden index, the sensitivity, specificity, and accuracy of the CNN system of Embodiment 2 in detecting protruding lesions were calculated.
The secondary outcomes in Embodiment 2 were the classification of protruding lesions into the five categories by the CNN and the detection of protruding lesions in per-patient analysis. For classification accuracy, the concordance rate between the CNN's classification and that of the endoscopy specialists was examined. In the per-patient analysis of the protruding-lesion detection rate, detection by the CNN was defined as correct when the CNN detected at least one protruding-lesion image among the multiple images of the same patient.
Furthermore, after the CNN process, the 10,000 clinically normal images in the validation dataset were re-evaluated. Among the images deemed normal, several CNN boxes that appeared to be true protruding lesions, lesions that may have been overlooked by physicians, were extracted, based on the consensus of three endoscopy specialists.
Table 6 shows the patient characteristics of the datasets used for training and validation of the CNN system of Embodiment 2 and the details of the training and validation datasets. The validation dataset consisted of 7,507 images containing protruding lesions from 73 patients (male, 65.8%; mean age, 60.1 years; standard deviation, 18.7 years) and 10,000 lesion-free images from 20 patients (male, 60.0%; mean age, 51.9 years; standard deviation, 11.4 years).
The CNN constructed in Embodiment 2 completed the analysis of all images in 530.462 seconds, an average of 0.0303 seconds per image. The AUC of the CNN of Embodiment 2 for detecting protruding lesions was 0.911 (95% confidence interval (CI), 0.9069-0.9155) (see FIG. 7).
According to the Youden index, the optimal cutoff value for the probability score was 0.317. Therefore, regions with a probability score of 0.317 or higher were recognized as elevated lesions detected by the CNN. Using that cutoff value, the sensitivity and specificity of the CNN were 90.7% (95% CI, 90.0%-91.4%) and 79.8% (95% CI, 79.0%-80.6%), respectively (see Table 7).
In the subgroup analysis by category of elevated lesion, the sensitivities of the CNN for detecting polyps, nodules, epithelial tumors, submucosal tumors, and vascular structures were 86.5%, 92.0%, 95.8%, 77.0%, and 94.4%, respectively. FIGS. 8A-8E show representative regions correctly detected and classified by the CNN of Embodiment 2 into the five categories of polyp, nodule, epithelial tumor, submucosal tumor, and vascular structure.
In the per-patient analysis, the detection rate of elevated lesions was 98.6% (72/73). By category of elevated lesion, the per-patient detection rates for polyps, nodules, epithelial tumors, submucosal tumors, and vascular structures were 96.7% (29/30), 100% (14/14), 100% (14/14), 100% (11/11), and 100% (4/4), respectively. However, all three images of a polyp of one patient, shown in FIGS. 9A-9C, could not be detected by the CNN of Embodiment 2: in these images, all probability scores of the CNN boxes were below 0.317, so the CNN did not detect them as elevated lesions. In addition, among the false-positive images that appeared to contain no elevated lesion but for which the CNN provided a CNN box with a probability score of 0.317 or higher (n=2,019), two were suggested by expert endoscopists to be true elevated lesions (see FIG. 10).
Table 8 shows the labeling of elevated lesions by the CNN and by expert endoscopists. The concordance rates between the CNN and the endoscopists in labeling polyps, nodules, epithelial tumors, submucosal tumors, and vascular structures were 42.0%, 83.0%, 82.2%, 44.5%, and 48.0%, respectively.
As described above, the CNN of Embodiment 2, applying categories based on CEST, was shown to detect and classify elevated lesions with high sensitivity and a good detection rate, although sensitivity differed among the categories of polyp, nodule, epithelial tumor, submucosal tumor, and vascular structure.
[Embodiment 3]
WCE has become an indispensable tool for investigating small-bowel disease, and its main indication is obscure gastrointestinal bleeding (OGIB), in which no clear source of bleeding can be found. When screening WCE images, physicians spend 30-120 minutes reading 10,000 images per patient. Whether blood content can be detected automatically is therefore important for WCE image analysis. As a means for automatically detecting blood content in such WCE images, the "Suspected Blood Indicator" (hereinafter simply "SBI") is known (see Non-Patent Document 15). The SBI is an image-selection tool included in the RAPID CE reading software (Medtronic, Minneapolis, Minnesota, USA) that tags potentially bleeding areas containing red pixels.
Embodiment 3 describes, in comparison with the above-described SBI, a diagnostic support method, diagnostic support system, and diagnostic support program for small-bowel bleeding using WCE images with a CNN system, and a computer-readable recording medium storing the diagnostic support program. When detecting blood content in the small bowel, a quantitative estimate of blood is possible; in that case, the blood volume can also be estimated from the distribution range of the blood and the like. In the following, however, detection of the presence or absence of blood content, that is, the presence or absence of bleeding, is illustrated.
[About dataset]
WCE images from November 2009 to August 2015 were retrospectively acquired at a single institution to which some of the inventors belong (The University of Tokyo Hospital, Japan). During that period, WCE was performed using Pillcam SB2 or SB3 WCE devices, as in Embodiment 2. Two expert endoscopists classified images containing blood content in the lumen and images of normal small-bowel mucosa, without reference to the SBI results. Blood content in the lumen was defined as active bleeding or a blood clot.
As the training dataset for the CNN system of Embodiment 3, 27,847 images were collected (6,503 images containing blood content from 29 patients and 21,344 images of normal small-bowel mucosa from 12 patients). As the validation dataset for the CNN system, separate from the training dataset, 10,208 images from 25 patients were prepared. Of these, 208 images from 5 patients showed bleeding in the small bowel, and 10,000 images from 20 patients were of normal small-bowel mucosa. An overview flowchart of the CNN system of Embodiment 3 is shown in FIG. 11.
To train/validate the CNN system of Embodiment 3, all patient information accompanying the images was anonymized prior to algorithm development, so that none of the endoscopists involved in the CNN of Embodiment 3 could access identifiable patient information. Because this training/validation of the CNN system of Embodiment 3 was a retrospective study using anonymized data, an opt-out approach was adopted for patient consent. The study was approved by the Ethics Committee of the University of Tokyo (No. 11931) and the Japan Medical Association (ID JMA-IIA00283).
[Training / Verification / Algorithm]
The algorithm of the CNN system used in Embodiment 3 was developed using ResNet50 (https://arxiv.org/abs/1512.03385), a 50-layer deep neural network architecture. The newly developed CNN system was then trained and validated using the Caffe framework, originally developed at the Berkeley Vision and Learning Center. Stochastic optimization of all layers of the network was performed with stochastic gradient descent (SGD) at a global learning rate of 0.0001. All images were resized to 224×224 pixels for compatibility with ResNet50.
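The two concrete details above, the 224×224 resize and plain SGD with a single global learning rate of 0.0001, can be sketched as follows. This is an illustrative NumPy sketch, not the Caffe implementation actually used, and the nearest-neighbour interpolation and frame size are assumptions the patent does not specify.

```python
import numpy as np

def resize_nearest(img, size=(224, 224)):
    """Nearest-neighbour resize to the 224x224 input size expected by
    ResNet50 (stand-in: the patent does not state the interpolation used)."""
    h, w = img.shape[:2]
    rows = np.arange(size[0]) * h // size[0]   # source row per output row
    cols = np.arange(size[1]) * w // size[1]   # source column per output column
    return img[rows][:, cols]

def sgd_step(params, grads, lr=1e-4):
    """One plain SGD update with the single global learning rate (0.0001),
    applied uniformly to the parameters of every layer."""
    return [w - lr * g for w, g in zip(params, grads)]

frame = np.zeros((576, 576, 3), dtype=np.uint8)  # hypothetical capsule frame
small = resize_nearest(frame)                    # shape (224, 224, 3)
```

In the actual system these steps correspond to the data layer's resize and the solver's `base_lr` setting in Caffe.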
[Measurement and statistics of results]
The primary outcomes for the CNN system of Embodiment 3 included the area under the curve (AUC) of the receiver operating characteristic (ROC) curve and the sensitivity, specificity, and accuracy of the CNN system's ability to discriminate between images of blood content and images of normal mucosa. The trained CNN system of Embodiment 3 output a continuous value between 0 and 1 as the probability score for blood content per image; a higher probability score indicates that the CNN system judges the image more likely to contain blood content. The validation test of the CNN system in Embodiment 3 was performed using single still images; the ROC curve was plotted by varying the threshold of the probability score, and the AUC was calculated to evaluate the degree of discrimination.
In Embodiment 3, for the final classification by the CNN system, the threshold of the probability score was simply set to 0.5, and the sensitivity, specificity, and accuracy of the CNN system's ability to discriminate between images containing blood content and images of normal mucosa were calculated. Furthermore, in the validation set, by examining the 10,208 images, the sensitivity, specificity, and accuracy of discrimination by the SBI between images containing blood content and images of normal mucosa were evaluated. The difference in performance between the CNN system of Embodiment 3 and the SBI was compared using McNemar's test. The obtained data were statistically analyzed using STATA software (version 13; Stata Corp, College Station, TX, USA).
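McNemar's test, used above to compare the paired CNN and SBI classifications, depends only on the discordant pairs. A minimal standard-library version is sketched below; the counts are hypothetical and are not the study's actual cross-tabulation.

```python
from math import erfc, sqrt

def mcnemar(b, c):
    """McNemar's chi-squared test with continuity correction.
    b: images the CNN classified correctly but the SBI missed;
    c: images the SBI classified correctly but the CNN missed.
    Returns (chi-squared statistic, two-sided p-value, 1 degree of freedom)."""
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    p = erfc(sqrt(chi2 / 2))  # survival function of chi-squared with 1 df
    return chi2, p

chi2, p = mcnemar(41, 3)  # hypothetical discordant counts
```

With these illustrative counts the statistic far exceeds the 3.84 critical value at the 0.05 level, matching the kind of p<0.01 result reported below; STATA's `mcnemar` performs the same computation.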
The validation dataset consisted of 10,208 images from 25 patients (male, 56%; mean age, 53.4 years; standard deviation, 12.4 years). The trained CNN system of Embodiment 3 required 250 seconds to evaluate these images, equivalent to a speed of 40.8 images per second. The AUC of the CNN system of Embodiment 3 for identifying images containing blood content was 0.9998 (95% CI (confidence interval), 0.9996-1.0000; see FIG. 12). Table 9 shows the sensitivity, specificity, and accuracy calculated by increasing the cutoff value of the probability score from 0.1 to 0.9 in steps of 0.1.
With a probability-score cutoff value of 0.5, the sensitivity, specificity, and accuracy of the CNN system of Embodiment 3 were 96.63% (95% CI, 93.19-98.64%), 99.96% (95% CI, 99.90-99.99%), and 99.89% (95% CI, 99.81-99.95%), respectively. The probability-score cutoff value of 0.21 in Table 15 is the optimal cutoff value calculated according to the Youden index; in this validation dataset, however, the accuracy at the Youden-index cutoff value was lower than that at the simple cutoff value of 0.5. FIG. 13 shows a representative blood-containing image (FIG. 13A) and a normal mucosa image (FIG. 13B) correctly classified by the CNN system of Embodiment 3. The probability values obtained by the CNN system of Embodiment 3 for FIGS. 13A and 13B are shown in Table 10 below.
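The 95% confidence intervals quoted with these proportions can be approximated with a normal (Wald) interval for a single proportion. The intervals in the text appear to be exact binomial intervals, so this sketch will not reproduce them digit for digit; a sensitivity of 96.63% on the 208 bleeding images corresponds to 201 correctly identified images.

```python
from math import sqrt

def wald_ci_95(successes, n):
    """Normal-approximation 95% confidence interval for a proportion
    such as sensitivity (successes out of n trials)."""
    p = successes / n
    half = 1.96 * sqrt(p * (1 - p) / n)
    return p - half, p + half

lo, hi = wald_ci_95(201, 208)  # 201/208 = 96.63% sensitivity
```

The interval is wider at this sample size than for the specificity (9,996/10,000-scale denominators), which is why the sensitivity CI in the text spans several percentage points while the specificity CI spans hundredths.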
In contrast, the sensitivity, specificity, and accuracy of the SBI were 76.92% (95% CI, 70.59-82.47%), 99.82% (95% CI, 99.72-99.89%), and 99.35% (95% CI, 99.18-99.50%), respectively, all of which were significantly lower than those of the CNN system (p<0.01). Table 11 shows the differences in classification between the CNN system of Embodiment 3 and the SBI.
FIG. 14 shows seven false-negative images classified as normal mucosa by the CNN of Embodiment 3. Of these, the four images shown in FIG. 14A were correctly classified by the SBI as containing blood content, while the three images in FIG. 14B were incorrectly classified by the SBI as normal mucosa. The classifications obtained by the CNN system of Embodiment 3 and by the SBI for FIGS. 14A and 14B are shown in Table 12 below, and the relationship between the CNN system and SBI classifications is shown in Table 13.
As described above, the trained CNN system of Embodiment 3 could distinguish images containing blood content from normal mucosa images with a high accuracy of 99.9% (AUC, 0.9998). Direct comparison with the SBI also showed that the trained CNN system of Embodiment 3 classified more accurately than the SBI; even at a simple cutoff point of 0.5, it outperformed the SBI in both sensitivity and specificity. These results indicate that the trained CNN system of Embodiment 3 can be used as a highly accurate screening tool for WCE.
[Embodiment 4]
Endoscopy by WCE is performed by externally receiving and recording images captured while the capsule travels through the digestive tract; it can image the entire small bowel in a single examination and can also capture moving images. However, screening moving WCE images places a greater burden on physicians than screening still images. Note that the moving images in the present invention include, in addition to images captured continuously by a so-called video camera, sequences of still images captured at very short intervals that can be recognized as a moving image when the series of still images is played back continuously.
The Quick View mode is known as a tool for shortening the screening time of such moving WCE images. The Quick View mode is an image-selection tool included in the RAPID CE reading software (Medtronic, Minneapolis, Minnesota, USA). It has been reported that, despite its relatively high sensitivity and ability to shorten reading time, the Quick View mode is not suitable for initial screening because of its high miss rate for important diseases (see Non-Patent Document 16).
Furthermore, Embodiments 1-3 used WCE still images for screening and did not use WCE moving images. Embodiment 4 describes, in comparison with the above-described Quick View mode, a diagnostic support method, diagnostic support system, and diagnostic support program for various diseases of the small bowel using WCE moving images with a CNN system, and a computer-readable recording medium storing the diagnostic support program. When detecting blood content in the small bowel, a quantitative estimate of blood is possible, but as in Embodiment 3, the following describes detection of the presence or absence of blood content, that is, the presence or absence of bleeding.
[About dataset]
As the training dataset for the CNN system of Embodiment 4, still images of mucosal damage (n=5,360), nodules (n=11,907), angioectasia (n=2,237), polyps (n=10,704), epithelial tumors (n=6,514), submucosal tumors (n=1,875), vascular structures (n=393), blood content (n=6,503), and normal images according to the CEST classification (n=21,344) were used. These training still images were collected from patients between October 2009 and May 2018 at institutions to which some of the inventors belong (The University of Tokyo Hospital, Hiroshima University Hospital, and Sendai Kousei Hospital, Japan). The WCE examinations were performed using Pillcam SB2 or SB3 devices (Medtronic, Minneapolis, Minnesota, USA).
As the validation dataset, 379 WCE moving images captured between June 2018 and May 2019 were retrospectively acquired at the same institutions to which some of the inventors belong (The University of Tokyo Hospital, Hiroshima University Hospital, and Sendai Kousei Hospital, Japan). These WCE examinations were performed using the Pillcam SB3 device. The training dataset and the validation dataset were completely independent.
To train/validate the CNN system of Embodiment 4, all patient information accompanying the images was anonymized prior to algorithm development, so that none of the endoscopists involved in the CNN of Embodiment 4 could access identifiable patient information. Because this training/validation of the CNN system of Embodiment 4 was a retrospective study using anonymized data, an opt-out approach was adopted for patient consent. The study was approved by the Ethics Committee of the University of Tokyo (No. 11931), the Japan Medical Association (ID JMA-IIA00283), Hiroshima University Hospital (No. E-1246), and Sendai Kousei Hospital (No. 30-5).
[Training / Verification / Algorithm]
The algorithm of the CNN system used in Embodiment 4 used the same SSD and ResNet50 architectures as in Embodiment 3 and was then trained using the Caffe framework. Stochastic optimization of all layers of the network was performed with SGD at a global learning rate of 0.0001.
In a preliminary evaluation using images from a previous study, it was shown that, in the CNN system of Embodiment 4, the detectability of mucosal damage and nodules decreased when images containing mucosal damage and nodules were trained together with images containing other abnormalities; the SSD for detecting mucosal damage and the SSD for detecting nodules were therefore separated from the other SSDs. That is, the CNN system of Embodiment 4 was constructed and used as a composite CNN system with the following four subsystems:
(1) an SSD for detecting mucosal damage;
(2) an SSD for detecting nodules;
(3) an SSD for detecting other abnormalities (angioectasia, polyps, submucosal tumors, vascular structures, and epithelial tumors); and
(4) a ResNet50 for detecting blood content.
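The division of labor among the four subsystems can be sketched as a simple dispatcher over one frame. Every callable below is a hypothetical stand-in for a trained network, and the box format and the 0.5 threshold are illustrative assumptions:

```python
def composite_findings(frame, ssd_detectors, blood_classifier, threshold=0.5):
    """Run the three SSD detectors and the ResNet50-style blood classifier
    on one frame and gather all findings at or above the threshold."""
    findings = []
    for category, detect in ssd_detectors.items():
        findings += [(category, box, score)
                     for box, score in detect(frame) if score >= threshold]
    blood_score = blood_classifier(frame)
    if blood_score >= threshold:
        findings.append(("blood content", None, blood_score))
    return findings

# Hypothetical stand-in models:
detectors = {
    "mucosal damage": lambda f: [((10, 10, 60, 60), 0.92)],
    "nodule": lambda f: [],
    "other abnormality": lambda f: [((5, 5, 20, 20), 0.31)],  # below threshold
}
found = composite_findings(None, detectors, lambda f: 0.71)
```

Here `found` keeps only the mucosal-damage box and the blood-content flag; frames with at least one finding are the ones "picked up" in the analysis below.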
For training the composite CNN system of Embodiment 4, all abnormalities in the training image dataset other than blood content were manually annotated with rectangular bounding boxes (true boxes) by six expert endoscopists, each of whom had reviewed more than 400 WCE cases (n=38,181 images). An Intel Core i7-7700K central processing unit and a GeForce GTX 1070 graphics processing unit were used for the computation.
[Measurement and statistics of results]
The primary analysis results for the CNN system of Embodiment 4 included the detection of various abnormalities of the small bowel. The primary analysis was mainly a per-patient analysis of disease. When at least one image of a particular abnormality was picked up from a patient's moving image, the detection result for that type of abnormality was defined as correct for that patient. For example, if the CNN picked up at least one mucosal-damage image from a patient whose video contained multiple mucosal-damage images, the CNN was judged to have correctly detected mucosal damage in that patient.
In addition to the composite CNN system, the detection rate of the Quick View mode included in the RAPID CE reading software v8.3 was also evaluated. The Quick View mode picks up images containing significant lesions and enables fast-forward review of WCE moving images. In the RAPID CE reading software v8.3, the sampling rate (threshold percentage) of the Quick View mode can be set between 2% and 80%. In Embodiment 4, the sampling rate of the Quick View mode was set according to the average percentage of images picked up by the CNN. By setting the threshold in this way, the CNN and the Quick View mode reduced the number of images to a comparable extent, allowing a direct comparison of detectability between the two systems.
As a sub-analysis, a per-patient analysis based on a strict criterion for the detection rate of various abnormalities was performed, taking into account the number of lesions of each type (single or multiple). In this analysis, the criterion for correct detection was that, for a patient with multiple lesions of a particular abnormality, the multiple lesions, not just a single lesion, had to be detected. For example, if the CNN detected only one ulcer in a patient with multiple ulcers, under the strict criterion the CNN was judged not to have properly detected the ulcers in that patient. This strict criterion was adopted for the sub-analysis because previous studies had suggested that information on single versus multiple lesions may be useful for examining the etiology and management of small-bowel disease (see Non-Patent Document 17).
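The ordinary and strict per-patient rules differ only in how a patient's individual lesions are aggregated: one detected lesion suffices under the ordinary rule, whereas every lesion of the type must be detected under the strict rule. A minimal sketch with hypothetical per-lesion detection flags:

```python
def patient_correct(lesion_detected, strict=False):
    """lesion_detected: one boolean per individual lesion of a given type.
    Ordinary rule: any detected lesion counts as correct for the patient.
    Strict rule: every lesion of that type must be detected."""
    return all(lesion_detected) if strict else any(lesion_detected)

# A patient with three ulcers, only one of which was picked up:
flags = [True, False, False]
```

For these flags, `patient_correct(flags)` is True under the ordinary rule but `patient_correct(flags, strict=True)` is False, mirroring the ulcer example above.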
Detectability of the CNN and the Quick View mode was compared using McNemar's test. Abnormalities were diagnosed at the original institutions by two expert endoscopists, and a consensus review was performed by two other expert endoscopists at The University of Tokyo Hospital before retrieval by the two systems, serving as the gold standard. These reviews were limited to the small-bowel section of the full small-bowel WCE moving images. The size of mucosal damage was classified as erosion (≤5 mm) or ulceration (>5 mm). The type of angioectasia was classified as type 1a or 1b according to the Yano-Yamamoto classification (see Non-Patent Document 18): type 1a lesions are characterized by punctate erythema (<1 mm) with or without oozing, and type 1b lesions by patchy erythema (2-3 mm) with or without oozing.
The secondary outcome measure was the classification by the CNN of abnormalities into the following four categories:
(1) mucosal damage;
(2) angioectasia;
(3) elevated lesions, including nodules, polyps, epithelial tumors, submucosal tumors, and vascular structures; and
(4) blood content.
In the evaluation of the primary outcome (that is, the detection rate of abnormalities), a detection was defined as correct regardless of the label assigned when the lesion was picked up. In the evaluation of the secondary outcome, by contrast, the agreement between the classifications of the CNN and of the expert endoscopists (that is, the gold standard) was examined in the per-patient analysis. When at least one image of a particular abnormality was picked up from a patient's moving image and labeled with the correct classification, the classification of that type of abnormality was defined as correct for that patient. The data were statistically analyzed using the same STATA software as described above.
Table 14 shows the patient characteristics of the validation dataset used for validating the CNN system of Embodiment 4. The validation dataset consisted of WCE moving images of the small bowel of 379 patients (male, 57%; mean age, 62.3 years; standard deviation, 17.5 years). The most common indication for WCE was obscure gastrointestinal bleeding (39%). In Table 14, overlapping data are allowed; values marked with "±" indicate "mean ± standard deviation", and values in parentheses indicate percentages.
The average reading speed of the CNN system of Embodiment 4 was 0.09 seconds per image. Because this CNN system picked up 1,135,104 images (22.5%) of the 5,050,226 validation images, the sampling rate of the Quick View mode was set to 23%. The average number of images of the small-bowel section was 13,325 per video, and the average number of images picked up by the CNN system of Embodiment 4 was 2,295 per video (average rate, 22.5%; standard deviation, 8.7%). Representative images of various abnormalities correctly detected by the CNN system of Embodiment 4 are shown in FIG. 15 together with the respective definitive diagnoses. The rectangular frames in FIG. 15 indicate the regions of the diseased sites annotated by expert endoscopists, and the numerical values indicate the probability scores assigned by the CNN.
FIG. 16 shows the differences in detection rates for each disease between the CNN system and the Quick View mode, and FIG. 17 shows the corresponding differences under the strict criterion that takes single or multiple lesions into account. In FIG. 16, FIG. 16A is a graph showing the difference in the overall detection rate of abnormalities, FIG. 16B is a graph showing the differences in detection rates for the four categories of mucosal damage, angioectasia, elevated lesions, and blood content, and FIG. 16C is a graph showing the differences in detection rates for each disease when the categories shown in FIG. 16B are further subdivided. FIGS. 17A-17C are the graphs corresponding to FIGS. 16A-16C under the strict criterion considering single or multiple lesions. In FIGS. 16 and 17, "*" indicates p<0.05, "**" indicates p<0.001, and "N.S." indicates no significant difference.
Mucosal damage, angioectasia, elevated lesions, and blood content were detected in 94, 29, 81, and 23 patients, respectively (see Table 14). Overall, the per-patient detection rate of abnormalities by the CNN was significantly higher than that by the Quick View mode (99% vs. 89%, p<0.001), as shown in FIG. 16. Specifically, the detection rates of the CNN system of Embodiment 4 for erosion, ulceration, angioectasia type 1a, angioectasia type 1b, polyps, nodules, submucosal tumors, vascular structures, epithelial tumors, and blood content were 100% (70/70), 100% (24/24), 95% (20/21), 100% (8/8), 100% (22/22), 100% (21/21), 94% (15/16), 100% (13/13), 100% (9/9), and 100% (23/23), respectively, while those of the Quick View mode were 90%, 96%, 100%, 88%, 82%, 91%, 75%, 54%, 100%, and 96%, respectively. These results show that the detection rates of the CNN system of Embodiment 4 for erosion, polyps, and vascular structures were significantly higher than those of the Quick View mode (p<0.05).
In the per-patient analysis based on the strict criterion considering the number of lesions (single or multiple), the expert endoscopists were able to detect 53% (37/70) of patients with erosion, 42% (10/24) of patients with ulceration, 14% (3/21) of patients with angioectasia type 1a, 13% (1/8) of patients with angioectasia type 1b, 41% (9/22) of patients with polyps, 81% (17/21) of patients with nodules, 6% (1/16) of patients with submucosal tumors, 23% (3/13) of patients with vascular structures, 22% (2/9) of patients with epithelial tumors, and 100% (23/23) of patients with blood content.
Overall, the per-patient detection rate of abnormalities by the CNN based on the strict criterion was significantly higher than that of the Quick View mode (98% vs. 85%, p<0.001) (FIG. 17A). Specifically, the detection rates of the CNN for erosion, ulceration, angioectasia type 1a, angioectasia type 1b, polyps, nodules, submucosal tumors, vascular structures, epithelial tumors, and blood content were 99% (69/70), 100% (24/24), 90% (19/21), 100% (8/8), 100% (22/22), 100% (21/21), 94% (15/16), 100% (13/13), 100% (9/9), and 100% (23/23), respectively, while those of the Quick View mode were 87%, 96%, 100%, 88%, 73%, 81%, 75%, 38%, 100%, and 96%, respectively. This confirmed that, under the strict criterion, the detection rates of the CNN of Embodiment 4 for erosion, polyps, nodules, and vascular structures were significantly higher than those of the Quick View mode (p<0.05).
FIG. 18 shows the four false-negative images that the CNN could not detect in the per-patient analysis based on the strict criterion: FIG. 18A shows an erosion (n=1), FIGS. 18B and 18C show angioectasia type 1a (n=2), and FIG. 18D shows a submucosal tumor (n=1). All of these were correctly picked up by the Quick View mode.
Table 15 shows the correspondence between the results of the CNN of Embodiment 4 and the diagnoses of the expert endoscopists. The values in parentheses in Table 15 indicate percentages of the total.
As described above, the trained CNN system of Embodiment 4 detected various abnormalities in WCE moving images captured at multiple facilities with high sensitivity, and direct comparison with the existing Quick View mode showed that the per-patient detection rate of abnormalities by the trained CNN system of Embodiment 4 was significantly higher than that of the Quick View mode (99% vs. 89%). That is, the trained CNN system of Embodiment 4 may be superior to the existing Quick View mode and can help reduce the burden on physicians without lowering the detection rate of abnormalities.
[Embodiment 5]
Embodiment 5 describes an example in which the diagnostic support method, diagnostic support system, and diagnostic support program for disease based on endoscopic images of the present invention, and a computer-readable recording medium storing the diagnostic support program, are applied to early gastric cancer using ME-NBI images.
[About dataset]
At a clinic to which one of the inventors belongs, a total of 349 lesions were selected from 745 lesions of differentiated-type early gastric cancer resected endoscopically between April 2013 and March 2018. From these lesions, 5,227 ME-NBI images captured at maximum magnification using the water-immersion method, of sufficient quality to enable diagnosis by MESDA-G, were collected as the training dataset (FIG. 19). ME-NBI images of gastric adenocarcinoma of the fundic gland type (GAFG) and of diffuse-type early gastric cancer, as well as unanalyzable low-quality images, were excluded because the ME-NBI findings of these cases were unlikely to be diagnosed accurately.
In FIG. 19, A shows a differentiated-type cancer (0-IIc, tub1), B a differentiated-type cancer (0-IIc, tub2), C a differentiated-type cancer (0-IIa, tub1), D to F fundic gland mucosa, G to I pyloric gland mucosa, J and K patchy redness, L an adenoma, M a xanthoma, N localized atrophy, and O an ulcer scar. Here, 0-IIc denotes the superficial depressed type and 0-IIa the superficial elevated type; tub1 denotes well-differentiated adenocarcinoma and tub2 moderately differentiated adenocarcinoma.
The histopathological diagnosis for all early gastric cancer patients was made by pathologists specializing in gastrointestinal pathology. The main histological types were determined according to the Japanese Classification of Gastric Carcinoma, 3rd English edition. In addition, 2,647 ME-NBI images of non-cancerous mucosa or non-cancerous lesions obtained under the same conditions were collected. These ME-NBI images included fundic gland mucosa, pyloric gland mucosa, patchy redness, adenomas, xanthomas, localized atrophy, and ulcer scars (see FIG. 19); they were diagnosed endoscopically, but no pathological examination of biopsy tissue was performed.
All images were collected using high-resolution endoscopes (GIF-H260Z or GIF-H290Z; Olympus Medical Systems, Tokyo, Japan) and standard endoscopic video systems (EVIS LUCERA CV260/CLV-260 and EVIS LUCERA ELITE CV-290/CLV-290SL; Olympus Medical Systems). For all patients, maximum-magnification ME-NBI with the water-immersion technique was performed by three experienced endoscopists. The video processor was always used with the structure-enhancement function set to level B8 for ME-NBI and the NBI color mode fixed at level 1.
[Training/validation and algorithm]
The CNN system of Embodiment 5 was developed by transfer learning based on the Deep Residual Network (ResNet-50; Non-Patent Document 21), a state-of-the-art CNN architecture pre-trained on the ImageNet database of more than 14 million images. The same Caffe framework as in Embodiment 1 was used for training, validation, and testing of the CNN. In the CNN system of Embodiment 5, transfer learning replaced the final classification layer with another fully connected layer; the network was then retrained on the training dataset, and the parameters of all layers were fine-tuned. Each image was resized to 224×224 pixels. To augment the number of images, rotated copies of the images were also used; to improve the classification performance of the CNN system of Embodiment 5, this rotation augmentation was applied strictly to the training dataset only. All layers of the CNN were fine-tuned by stochastic gradient descent with a global learning rate of 0.001. These values were set by trial and error so that all data were compatible with ResNet-50.
The CNN system of Embodiment 5 was trained and validated on a dataset of 5,574 ME-NBI images (early gastric cancer: 3,797 images from 267 cases; non-cancerous: 1,777 images). During training, this dataset was randomly split at a ratio of 8:2 into a training set (4,460 images) and a validation set (1,114 images), and the CNN system of Embodiment 5 was constructed (see FIG. 20).
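The 8:2 random split described above can be sketched in a few lines of Python. This is illustrative only; the patent does not specify the randomization procedure or the random seed.

```python
import random

def split_dataset(items, train_ratio=0.8, seed=0):
    """Randomly split a dataset into training and validation subsets
    at the given ratio (8:2 in Embodiment 5)."""
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    n_train = round(len(shuffled) * train_ratio)
    return shuffled[:n_train], shuffled[n_train:]

# The patent's actual split was 4,460 training / 1,114 validation images.
train, val = split_dataset(range(5574))
print(len(train), len(val))
```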
[Outcome measures and statistics]
To evaluate diagnostic accuracy (diagnosis of cancer or non-cancer), a separate test dataset of 2,300 ME-NBI images (early gastric cancer: 1,430 images from 82 cases; non-cancerous: 870 images) was applied to the CNN system of Embodiment 5 (see FIG. 20). The test dataset was not augmented; the obtained images were used as they were. In validation and testing, the trained CNN system of Embodiment 5 generated for each image a continuous number between 0 and 1 corresponding to the probability that the image was cancerous or non-cancerous.
[Evaluation algorithm]
Table 16 shows the definitions of the evaluation metrics (accuracy, sensitivity, specificity, positive predictive value (PPV) for early gastric cancer, negative predictive value (NPV) for early gastric cancer, false positives, and false negatives). To evaluate the accuracy of the CNN system of Embodiment 5, the area under the receiver operating characteristic (ROC) curve (AUC) was also calculated. The overall test speed was defined as the time from the start to the end of the analysis of the test images, measured by a timer incorporated in the CNN system of Embodiment 5.
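Assuming the Table 16 definitions follow the standard confusion-matrix formulas, the performance figures reported later for the test dataset (1,401 of 1,430 cancer images and all 870 non-cancer images correct) can be reproduced as follows:

```python
def binary_metrics(tp, fn, tn, fp):
    """Standard confusion-matrix metrics (assumed to match the
    Table 16 definitions)."""
    return {
        "accuracy": (tp + tn) / (tp + fn + tn + fp),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "false_positive_rate": fp / (fp + tn),
        "false_negative_rate": fn / (fn + tp),
    }

# Counts from the Embodiment 5 test set.
m = binary_metrics(tp=1401, fn=29, tn=870, fp=0)
print(round(100 * m["accuracy"], 1))  # 98.7
print(round(100 * m["npv"], 1))       # 96.8
```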
In addition, to determine which regions of an image were most important for the classification result, Gradient-weighted Class Activation Mapping (Grad-CAM; Non-Patent Document 22) was applied in an attempt to understand how the CNN system of Embodiment 5 recognizes input images. Grad-CAM produces a coarse localization map that highlights the regions in an image that are important for predicting the target concept (in this case, gastric cancer). Here, a heat-map image, i.e., a heat map, was created from the localization data of this map.
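The Grad-CAM computation referred to above reduces to averaging the class-score gradients over each feature-map channel and taking the ReLU of the weighted channel sum. A framework-free sketch on plain nested lists follows; the actual system would apply this to the activations and gradients of the CNN's final convolutional layer.

```python
def grad_cam(activations, gradients):
    """Coarse Grad-CAM localization map: channel weights are the
    spatially averaged gradients of the target class score; the map
    is the ReLU of the weighted sum of activation channels."""
    h, w = len(activations[0]), len(activations[0][0])
    weights = [sum(map(sum, g)) / (h * w) for g in gradients]
    return [[max(0.0, sum(wk * a[i][j] for wk, a in zip(weights, activations)))
             for j in range(w)] for i in range(h)]

# Two 2x2 channels: the first pushes the target score up, the second down.
acts = [[[1.0, 0.0], [0.0, 1.0]], [[0.0, 2.0], [2.0, 0.0]]]
grads = [[[1.0, 1.0], [1.0, 1.0]], [[-1.0, -1.0], [-1.0, -1.0]]]
print(grad_cam(acts, grads))  # [[1.0, 0.0], [0.0, 1.0]]
```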
All statistical analyses were performed with EZR (Easy R; Saitama Medical Center, Jichi Medical University, Saitama, Japan), a graphical user interface for R (The R Foundation for Statistical Computing, Vienna, Austria). Continuous data were compared using the Mann-Whitney U test. Categorical analysis of variables was performed using Fisher's exact test. A p-value < 0.05 was considered to indicate a statistically significant difference.
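For reference, the U statistic underlying the Mann-Whitney test used above for continuous data can be computed directly. The study itself used EZR/R; obtaining a p-value additionally requires the null distribution of U and is omitted from this sketch.

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for sample x versus sample y:
    the number of pairs (xi, yj) with xi > yj, counting ties as 1/2."""
    return sum((xi > yj) + 0.5 * (xi == yj) for xi in x for yj in y)

print(mann_whitney_u([1, 2, 3], [4, 5, 6]))  # 0.0 (x entirely below y)
print(mann_whitney_u([4, 5, 6], [1, 2, 3]))  # 9.0 (x entirely above y)
```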
The construction of the CNN system of Embodiment 5 was reviewed and approved by the institutional review board of Juntendo University School of Medicine (approval number: #18-229). Because the analysis used anonymized clinical data obtained after each patient had given oral consent to treatment, individual consent to the study was not required, and individuals cannot be identified from the data presented.
From April 2013 to March 2018, a total of 349 patients with early gastric cancer were registered. According to the time frame of the ME-NBI procedures, the data obtained from these early gastric cancers were divided into two sets: a training dataset (n = 267) and a test dataset (n = 82). The test dataset contained the more recently collected data. Table 17 shows the characteristics of the patients and lesions in the training and test datasets. Compared with the training dataset, the test dataset contained cases of H. pylori-negative gastric cancer (including uninfected and post-eradication cases) and cases with smaller tumor diameters and with lesions in the lower part (L) of the stomach.
Table 18 shows the results of the VS classification system for early gastric cancer in the training and test datasets. There were no major differences in demarcation line, microvascular pattern (MVP), microsurface pattern (MSP), or diagnosis between the training and test datasets. Eighteen lesions were not diagnosed as cancer by endoscopy. Of these, eight lesions (7/267 (2.6%) in the training dataset and 1/82 (1.2%) in the test dataset) were misdiagnosed as non-cancerous because no demarcation line was present. The remaining ten lesions (6/267 (2.2%) in the training dataset and 4/82 (4.9%) in the test dataset) were misdiagnosed as non-cancerous owing to the presence of a regular MVP and MSP, although a demarcation line was present. The test dataset appears to have contained more lesions with a regular MVP and MSP than the training dataset, but the difference was not large.
The overall test speed was 38.3 images/second (0.026 seconds/image). The performance of the CNN system of Embodiment 5 is shown in Table 19. The accuracy was 98.7%, with 2,271 of 2,300 images diagnosed correctly. The sensitivity, specificity, PPV, NPV, false-positive rate, and false-negative rate were 98% (1,401/1,430), 100% (870/870), 100% (1,401/1,401), 96.8% (870/899), 0% (0/870), and 2% (29/1,430), respectively. When the receiver operating characteristic (ROC) curve of the CNN system of Embodiment 5 was evaluated, the area under the curve (AUC) was 99% (see FIG. 21).
A total of 12 early gastric cancers, shown in 29 ME-NBI images, were misdiagnosed as non-cancerous. In six cases, even experienced endoscopists found it difficult to distinguish the lesion from intestinal metaplasia or gastritis (Table 20, FIGS. 22A to 22C). Furthermore, lesions located in the middle or lower stomach, of relatively small size, and classified as 0-IIc were seen more frequently in the test dataset. Regarding depth of invasion, all of these cases were diagnosed as intramucosal cancer, and most of them were H. pylori-negative. The images of the other six cases were of insufficient quality for diagnosis owing to bleeding, a low-magnification field of view, or defocusing (FIGS. 22D and 22E). No false-positive images appeared in the test dataset.
In FIG. 22, A to C are examples of images diagnosed as false negatives by the CNN system of Embodiment 5; the lesions were diagnosed as intestinal metaplasia or gastritis. Of these, A is a differentiated-type cancer (0-IIc, tub1, after H. pylori eradication) showing a regular MVP plus MSP with a demarcation line; B is a differentiated-type cancer (0-IIc, tub2, after H. pylori eradication) showing a regular MVP plus MSP with a demarcation line; and C is a differentiated-type cancer (0-IIa, tub1, after H. pylori eradication) showing a regular MVP plus MSP with a demarcation line. D is an example of an image with bleeding diagnosed as a false negative, and E is an example of a low-magnification, defocused image likewise diagnosed as a false negative. Here, 0-IIc denotes the superficial depressed type and 0-IIa the superficial elevated type; tub1 denotes well-differentiated adenocarcinoma and tub2 moderately differentiated adenocarcinoma.
Examples of the heat maps are shown in FIG. 23. In FIG. 23, A is a differentiated-type cancer (0-IIc, tub1), B a differentiated-type cancer (0-IIc, tub1), C patchy redness, and D an adenoma; a to d are the heat maps corresponding to A to D, respectively. Here, 0-IIc denotes the superficial depressed type, and tub1 denotes well-differentiated adenocarcinoma.
In the early gastric cancer images, the regions determined to be cancerous by the CNN system of Embodiment 5 are displayed in red, and they coincided with the regions judged to be cancerous by endoscopists. The image of a gastric adenoma and the image showing patchy redness were judged to be non-cancerous by the CNN system of Embodiment 5 and are not displayed in red on the heat maps. Likewise, these non-cancerous regions coincided with the regions judged to be non-cancerous by endoscopists.
The CNN system of Embodiment 5 showed higher diagnostic accuracy than previously reported systems. The most important difference from conventional systems was the ME-NBI observation method. Because the maximum-magnification water-immersion technique used with the CNN system of Embodiment 5 eliminates halation and yields sharp, fully focused images of uniform quality suitable for endoscopic diagnosis, such images are ideal for diagnosis support by a CNN system. Moreover, to diagnose early gastric cancer recorded on video, a CNN system must analyze more than 30 images per second. The CNN system of Embodiment 5 can analyze 38.3 images per second and thus overcomes this technical limitation. It is therefore expected that the CNN system of Embodiment 5 can be applied to diagnosis from video images in procedures such as ME-NBI.
[Embodiment 6]
Embodiment 6 describes a diagnosis support method, a diagnosis support system, a diagnosis support program, and a computer-readable recording medium storing the diagnosis support program for diagnosing the invasion depth of gastric cancer, using three types of endoscopic images with the disease diagnosis support method based on endoscopic images of the present invention: conventional white-light imaging (WLI), non-magnifying narrow-band imaging (NBI), and indigo carmine dye spraying (Indigo).
From the inner surface outward, the stomach consists of the mucosa (M), the submucosa (SM), the muscularis propria (MP), the subserosa (SS), and the serosa (SE). Gastric cancers classified as early are those in which the deepest point of the lesion remains within the mucosa (M) or submucosa (SM); those invading deeper than the submucosa (SM) are classified as advanced gastric cancer. According to the guidelines of the Japanese Gastric Cancer Association, curative resection by endoscopic submucosal dissection (ESD) can be achieved for gastric cancers whose lesions remain within the mucosa (M) or whose submucosal (SM) invasion is less than 500 μm (SM1); gastric cancers with deeper invasion require surgery. Therefore, to judge whether ESD is optimal for an early gastric cancer, it is essential to diagnose whether the invasion depth of the deepest point of the lesion within the submucosa is less than 500 μm (SM1) or 500 μm or more (SM2).
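The treatment-eligibility rule described above can be stated as a small decision function. This is a simplification for illustration only; the layer names and the 500 μm threshold follow the text, while the function name and arguments are hypothetical.

```python
def esd_depth_class(deepest_layer, sm_depth_um=0):
    """Classify invasion depth per the rule above: 'M/SM1' (curative ESD
    possible) versus 'SM2+' (surgery indicated).  deepest_layer is one of
    'M', 'SM', 'MP', 'SS', 'SE', 'SI'; sm_depth_um applies when 'SM'."""
    if deepest_layer == "M":
        return "M/SM1"
    if deepest_layer == "SM":
        return "M/SM1" if sm_depth_um < 500 else "SM2+"
    return "SM2+"  # muscularis propria or deeper

print(esd_depth_class("SM", sm_depth_um=300))  # M/SM1
print(esd_depth_class("SM", sm_depth_um=500))  # SM2+
```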
[Dataset]
The construction of the CNN system of Embodiment 6 was approved by the ethics committees of the University of Tokyo (No. 11931) and the Japan Medical Association (ID JMA-IIA00283). At the clinic to which one of the inventors belongs, endoscopy and pathology reports from January 2013 to June 2019 were retrospectively reviewed, and cases were extracted in which preoperative endoscopy was followed by endoscopic resection or surgery and the final pathological diagnosis was gastric cancer. Images of representative gastric cancers are shown in FIG. 24. In FIG. 24, (a) is an example of an endoscopic image of a gastric cancer invading the deep submucosa, and (b) is an example of an endoscopic image of a gastric cancer confined to the mucosa. Among the extracted cases, fundic gland-type gastric cancers, cases with a previous history of gastric cancer treatment, unanalyzable low-quality images, and images containing multiple lesions in a single endoscopic image were excluded.
The Evis Lucera Spectrum or Evis Lucera Elite system (Olympus, Tokyo, Japan) was mainly used for the endoscopic procedures, with high-resolution or high-definition endoscopes (GIF-Q240, GIF-Q240Z, GIF-H260, GIF-Q260, GIF-H260Z, GIF-PQ260, GIF-XP260N, GIF-H290, GIF-H290Z, GIF-XP290N; Olympus, Tokyo, Japan). All images in which at least half of the entire target lesion was likely visible were extracted. When video files were accessible, at least two images per imaging mode, a close-up image and a distant image where possible, were extracted from the video for each of conventional white-light imaging (WLI), non-magnifying narrow-band imaging (NBI), and 0.1% indigo carmine dye-spraying imaging (Indigo).
The collected images were randomly divided by computerized randomization into a training dataset and a test dataset at a ratio of 4:1. Facebook's PyTorch deep learning framework (https://pytorch.org/) was used for training, validation, and testing of the AI systems of Embodiment 6. The AI systems of Embodiment 6 using WLI, NBI, and Indigo images are referred to as the AI system (WLI), AI system (NBI), and AI system (Indigo), respectively.
A total of 16,557 images were extracted from 1,084 gastric cancer cases that underwent endoscopic resection or surgery at the University of Tokyo Hospital from January 2013 to June 2019. Details of the training and test image datasets are shown in Table 21.
The training dataset for the CNN algorithm consisted of 8,271 WLI images (6,030 for training and 2,241 for validation) of 884 lesions (428 for training and 184 for validation), 2,701 NBI images (1,889 for training and 812 for validation) of 629 lesions (445 for training and 184 for validation), and 2,656 Indigo images (1,909 for training and 747 for validation) of 416 lesions (291 for training and 125 for validation). The test dataset consisted of a total of 1,715 WLI images of 236 gastric cancer lesions, 575 NBI images of 158 lesions, and 639 Indigo images of 111 lesions. The images in the training and test datasets were mutually exclusive.
[Training/validation and algorithm]
In Embodiment 6, three independent AI systems were developed to predict the invasion depth of gastric cancer using WLI, NBI, and Indigo images, respectively. As in Embodiment 5, the AI systems of Embodiment 6 were developed by transfer learning based on ResNet-50, a state-of-the-art CNN architecture. In the CNN systems of Embodiment 6, transfer learning replaced the final classification layer with another fully connected layer; the networks were then retrained on the training dataset, and the parameters of all layers were fine-tuned. All images were resized to 224×224 pixels. To augment the number of images, the images were expanded by vertical flipping, horizontal flipping, and scaling. All layers of the CNN were trained by a stochastic gradient descent algorithm with a batch size of 64, a global learning rate of 0.001, and 100 epochs.
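The flip augmentation described above can be sketched without any framework by treating an image as a 2-D array of pixel rows. The scaling step is omitted from this sketch, and the function names are illustrative.

```python
def flip_horizontal(img):
    """Mirror each pixel row left to right."""
    return [row[::-1] for row in img]

def flip_vertical(img):
    """Reverse the order of the pixel rows."""
    return img[::-1]

def augment(images):
    """Expand the training set with the vertical and horizontal flips
    used in Embodiment 6 (scaling not shown)."""
    out = []
    for im in images:
        out.extend([im, flip_horizontal(im), flip_vertical(im)])
    return out

imgs = [[[1, 2], [3, 4]]]
print(len(augment(imgs)))  # 3: original plus two flipped copies
```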
The trained CNN-based AI systems of Embodiment 6 are programmed to output probability scores (range: 0 to 1) for "M or SM1" and for "SM2 or deeper". Here, "M" denotes a lesion whose deepest point remains within the mucosa, and "SM1" a lesion whose deepest point remains within the submucosa at a depth of less than 500 μm. "SM2 or deeper" covers not only lesions whose deepest point lies 500 μm or more into the submucosa but also those invading the muscularis propria (MP), the subserosa (SS), or the serosa (SE), or infiltrating other organs (SI). "Positive" was defined as histologically proven cancer invading to SM2 or deeper.
[Outcome measures and statistics]
For the three CNN-based AI systems of Embodiment 6, sensitivity, specificity, accuracy, positive predictive value, and negative predictive value were calculated on both an image basis and a lesion basis. For lesion-based accuracy, a lesion in the test images was counted as correct when the majority of the images of that lesion were diagnosed correctly.
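The lesion-based scoring rule above (a lesion counts as correct when more than half of its images are diagnosed correctly) can be sketched as follows; the record format is an assumption for illustration.

```python
from collections import defaultdict

def lesion_based_accuracy(records):
    """records: iterable of (lesion_id, predicted_label, true_label),
    one entry per image.  A lesion is counted as correct when a majority
    of its images are classified correctly."""
    tally = defaultdict(lambda: [0, 0])  # lesion_id -> [n_correct, n_images]
    for lesion, pred, truth in records:
        tally[lesion][0] += int(pred == truth)
        tally[lesion][1] += 1
    correct = sum(c * 2 > n for c, n in tally.values())
    return correct / len(tally)

# Lesion "a": 2 of 3 images correct; lesion "b": 1 of 2 (no majority).
recs = [("a", 1, 1), ("a", 1, 1), ("a", 0, 1), ("b", 1, 1), ("b", 0, 1)]
print(lesion_based_accuracy(recs))  # 0.5
```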
All data were statistically analyzed using JMP version 14.2 (SAS Institute Inc., Cary, North Carolina, USA). Data are presented as frequencies and percentages or as means (standard deviations) as appropriate, and the number of lesions is also given for each data category. The lesion-based accuracies of the three AI systems using WLI, NBI, and Indigo images were compared by the Wilcoxon rank-sum test. The Wilcoxon rank-sum test was also performed on numerical data variables between the training and test datasets, and Fisher's exact test was performed on all categorical variables. All p-values are two-sided, and p < 0.05 was considered statistically significant. In addition, receiver operating characteristic (ROC) curves were used to evaluate the cutoff values of the three CNN-based AI systems of Embodiment 6.
Table 22 shows the detailed clinical characteristics of the patients and lesions in the training and test datasets. The background factors of the WLI training and test datasets did not differ substantially: the proportions of male patients were 75.9% and 74.6% (p = 0.67), and the mean ages were 70.4 and 71.7 years (p = 0.10), respectively. The distribution of invasion depth is also shown in Table 22.
FIG. 25A shows the receiver operating characteristic (ROC) curve of the classification probability of invasion depth for the WLI test images. The area under the curve (AUC) of the AI system (WLI) was 0.9590. According to the Youden index, the optimal cutoff value of the probability score was 0.5448, and diagnoses by the AI system (WLI) were made on the basis of this probability score. The performance of the AI system (WLI) is shown in Table 23.
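The Youden-index cutoff selection mentioned above picks the ROC threshold maximizing J = sensitivity + specificity - 1. A minimal sketch follows; in practice the candidate thresholds are the model's own probability scores, and the toy data here are illustrative.

```python
def youden_cutoff(scores, labels, thresholds):
    """Return the threshold maximizing Youden's J = sensitivity +
    specificity - 1.  labels are True for 'positive' (SM2 or deeper)."""
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j = None, float("-inf")
    for t in thresholds:
        tp = sum(s >= t and l for s, l in zip(scores, labels))
        tn = sum(s < t and not l for s, l in zip(scores, labels))
        j = tp / pos + tn / neg - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t

scores = [0.1, 0.2, 0.4, 0.6, 0.8, 0.9]
labels = [False, False, False, True, True, True]
print(youden_cutoff(scores, labels, scores))  # 0.6 separates the classes
```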
The image-based sensitivity, specificity, accuracy, positive predictive value, and negative predictive value of the AI system (WLI) were 89.2%, 98.7%, 94.4%, 98.3%, and 91.7%, respectively. The lesion-based sensitivity, specificity, accuracy, positive predictive value, and negative predictive value of the AI system (WLI) were 84.4%, 99.4%, 94.5%, 98.5%, and 92.9%, respectively.
As shown in FIG. 25B, the AUC of the AI system (NBI) was 0.9048, with an optimal probability-score cutoff of 0.4031. Furthermore, as shown in FIG. 25C, the AUC of the AI system (Indigo) was 0.9491, with an optimal probability-score cutoff of 0.6094.
The image-based accuracies of the AI system (NBI) and the AI system (Indigo) were 93.9% and 94.2%, respectively, and their lesion-based accuracies were 94.3% and 95.5%, respectively. That is, there was no significant difference in lesion-based accuracy among the AI system (WLI), AI system (NBI), and AI system (Indigo) for classifying invasion depth (p = 0.41), and all three were confirmed to diagnose invasion depth with high accuracy. Thus, all three AI systems of Embodiment 6 were confirmed to diagnose with high accuracy whether the invasion depth of the deepest point of a gastric cancer lesion within the submucosa is less than 500 μm (SM1) or 500 μm or more (SM2), and hence to judge accurately whether ESD is the optimal treatment.
[Embodiment 7]
A disease diagnosis support method based on endoscopic images of the digestive organs using the CNN system of Embodiment 7 is described with reference to FIG. 26. Embodiment 7 can use the disease diagnosis support methods based on endoscopic images of the digestive organs with the CNN systems of Embodiments 1 to 6. In S1, the CNN system is trained/validated using first endoscopic images of a digestive organ and, corresponding to the first endoscopic images, definitive diagnostic results of the disease of the digestive organ as positive or negative.
In S2, the CNN system trained/validated in S1 outputs, on the basis of a second endoscopic image of the digestive organ, at least one of a positive and/or negative determination of the disease of the digestive organ and a probability score of being positive. The second endoscopic image is a newly observed or newly input endoscopic image.
In S2, the second endoscopic image may be at least one of an image being captured by an endoscope, an image transmitted via a communication network, an image provided by a remote operation system or a cloud-based system, an image recorded on a computer-readable recording medium, and a moving image.
[Embodiment 8]
A disease diagnosis support system based on endoscopic images of the digestive organs, a diagnosis support program based on endoscopic images of the digestive organs, and a computer-readable recording medium according to Embodiment 8 will be described with reference to FIG. 27. Embodiment 8 can use the disease diagnosis support method based on endoscopic images of the digestive organs described in Embodiment 7.
This disease diagnosis support system 1 based on endoscopic images of the digestive organs has an endoscopic image input unit 10, a computer 20 incorporating the CNN program, and an output unit 30. The computer 20 includes a first storage area 21 that stores first endoscopic images of a digestive organ; a second storage area 22 that stores, corresponding to each first endoscopic image, at least one definitive diagnosis result of information on the positivity or negativity of a disease of the digestive organ, a past disease, a severity level, or the imaged site; and a third storage area 23 that stores the CNN program. The CNN program stored in the third storage area 23 has been trained/validated on the first endoscopic images stored in the first storage area 21 and the definitive diagnosis results stored in the second storage area 22, and, based on a second endoscopic image of the digestive organ input from the endoscopic image input unit 10, outputs to the output unit 30 at least one of a positive and/or negative determination of the disease of the digestive organ for the second endoscopic image and a probability score of positivity.
The second endoscopic image to be stored in the third storage area may likewise be at least one of an image being captured by an endoscope, an image transmitted via a communication network, an image provided by a remote operation system or a cloud-based system, an image recorded on a computer-readable recording medium, or a moving image.
The disease diagnosis support system based on endoscopic images of the digestive organs of Embodiment 8 includes a diagnosis support program based on endoscopic images of the digestive organs for operating a computer as each of the above means. The diagnosis support program can be stored on a computer-readable recording medium.
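The storage-area layout of system 1 can be sketched as follows. The class and method names are illustrative only, and a trivial stub stands in for the CNN program held in the third storage area.

```python
class _StubCNN:
    """Illustrative stand-in for the CNN program in the third storage area."""

    def __init__(self):
        self.trained = False

    def train(self, images, diagnoses):
        self.trained = True  # a real program would fit network weights here

    def predict(self, image):
        return ("positive", 0.9) if self.trained else ("unknown", 0.0)


class DiagnosisSupportSystem:
    """Computer 20: three storage areas wired between input and output units."""

    def __init__(self, cnn_program):
        self.first_storage = []           # 21: first endoscopic images
        self.second_storage = []          # 22: definitive diagnosis results
        self.third_storage = cnn_program  # 23: the CNN program

    def load_training_pair(self, image, definitive_diagnosis):
        self.first_storage.append(image)
        self.second_storage.append(definitive_diagnosis)

    def train(self):
        self.third_storage.train(self.first_storage, self.second_storage)

    def diagnose(self, second_image, output_unit):
        # Input unit 10 supplies second_image; the result goes to output unit 30.
        result = self.third_storage.predict(second_image)
        output_unit(result)
        return result


system = DiagnosisSupportSystem(_StubCNN())
system.load_training_pair("first_image_1", "positive")
system.train()
shown = []
system.diagnose("second_image", shown.append)
```

Here `output_unit` is any callable sink, e.g. a monitor driver or a network response writer; the callback form is a design choice of this sketch, not of the text.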
10 … Endoscopic image input unit
20 … Computer
21 … First storage area
22 … Second storage area
23 … Third storage area
30 … Output unit
Claims (34)
A method for supporting the diagnosis of a disease from endoscopic images of a digestive organ using a convolutional neural network system, in which
a first endoscopic image of a digestive organ, and
at least one definitive diagnosis result, corresponding to the first endoscopic image, of information on the positivity or negativity of a disease of the digestive organ,
are used to train a convolutional neural network system, and
the trained convolutional neural network system detects the disease of the digestive organ based on a second endoscopic image of the digestive organ input from an endoscopic image input unit and outputs at least one of a region corresponding to positivity of the disease and a probability score,
the method being characterized in that the endoscopic image is a wireless capsule endoscopic image of the small intestine, and the trained convolutional neural network system outputs a region of an elevated lesion, as the disease, detected in the second wireless capsule endoscopic image input from the endoscopic image input unit.
The method for supporting the diagnosis of a disease from endoscopic images of a digestive organ using a convolutional neural network system according to claim 2, wherein
the region of the elevated lesion displayed in the second endoscopic image based on a positive or negative definitive diagnosis of the disease of the small intestine, and the region of the elevated lesion detected by the trained convolutional neural network system, are displayed, and
the correctness of the diagnosis by the trained convolutional neural network system is judged from the overlap between the region of the elevated lesion based on the definitive diagnosis result displayed in the second endoscopic image and the detected region of the elevated lesion.
The method for supporting the diagnosis of a disease from endoscopic images of a digestive organ using a convolutional neural network system according to claim 3, wherein the diagnosis by the trained convolutional neural network system is judged to be correct when the overlap
(1) is 80% or more of the region of the elevated lesion based on the definitive diagnosis result, or
(2) when a plurality of regions positive for the disease are detected by the trained convolutional neural network system, any one of those regions overlaps the region of the elevated lesion based on the definitive diagnosis result.
A method for supporting the diagnosis of a disease from endoscopic images of a digestive organ using a convolutional neural network system, in which
a first endoscopic image of a digestive organ, and
at least one definitive diagnosis result, corresponding to the first endoscopic image, of information on the positivity or negativity of a disease of the digestive organ,
are used to train a convolutional neural network system, and
the trained convolutional neural network system detects the disease of the digestive organ based on a second endoscopic image of the digestive organ input from an endoscopic image input unit and outputs at least one of a region corresponding to positivity of the disease and a probability score,
the method being characterized in that the endoscopic image is a wireless capsule endoscopic image of the small intestine, and the trained convolutional neural network system outputs a probability score of bleeding, as the disease, detected in the second wireless capsule endoscopic image input from the endoscopic image input unit.
A method for supporting the diagnosis of a disease from endoscopic images of a digestive organ using a convolutional neural network system, in which
a first endoscopic image of a digestive organ, and
a definitive diagnosis result, corresponding to the first endoscopic image, of information on the positivity or negativity of a disease of the digestive organ,
are used to train a convolutional neural network system, and
the trained convolutional neural network system detects the disease of the digestive organ based on a second endoscopic image of the digestive organ input from an endoscopic image input unit and outputs at least one of a region corresponding to positivity of the disease and a probability score,
the method being characterized in that the first endoscopic image is a still image from a wireless capsule endoscope of the small intestine, the second endoscopic image is a moving image from a wireless capsule endoscope of the small intestine, and the trained convolutional neural network system displays the region of the disease detected in the second wireless capsule endoscopic image input from the endoscopic image input unit.
The method for supporting the diagnosis of a disease from endoscopic images of a digestive organ using a convolutional neural network system according to claim 8, wherein
the region of mucosal injury is at least one of an erosion and an ulcer,
the region of angioectasia is at least one of angioectasia type 1a and angioectasia type 1b, and
the region of the elevated lesion is at least one of a polyp, a nodule, a submucosal tumor, a vascular structure, and an epithelial tumor.
The method for supporting the diagnosis of a disease from endoscopic images of a digestive organ using a convolutional neural network system according to any one of claims 7-9, wherein the trained convolutional neural network system is a combined convolutional neural network system composed of:
a first convolutional neural network system part trained on still images with definitive diagnoses of mucosal injury;
a second convolutional neural network system part trained on still images with definitive diagnoses of angioectasia;
a third convolutional neural network system part trained on still images with definitive diagnoses of elevated lesions; and
a fourth convolutional neural network system part trained on still images with definitive diagnoses of bleeding.
A method for supporting the diagnosis of a disease from endoscopic images of a digestive organ using a convolutional neural network system, in which
a first endoscopic image of a digestive organ, and
at least one definitive diagnosis result, corresponding to the first endoscopic image, of information on the positivity or negativity of a disease of the digestive organ,
are used to train a convolutional neural network system, and
the trained convolutional neural network system detects the disease of the digestive organ based on a second endoscopic image of the digestive organ input from an endoscopic image input unit and outputs at least one of a region corresponding to positivity of the disease and a probability score,
the method being characterized in that the endoscopic image is an endoscopic image of the stomach obtained by narrow-band imaging with the water-immersion method, and the trained convolutional neural network system outputs a region of early gastric cancer, as the disease, in the endoscopic image input from the endoscopic image input unit.
A method for supporting the diagnosis of a disease from endoscopic images of a digestive organ using a convolutional neural network system, in which
a first endoscopic image of a digestive organ, and
at least one definitive diagnosis result, corresponding to the first endoscopic image, of information on the positivity or negativity of a disease of the digestive organ,
are used to train a convolutional neural network system, and
the trained convolutional neural network system detects the disease of the digestive organ based on a second endoscopic image of the digestive organ input from an endoscopic image input unit and outputs at least one of a region corresponding to positivity of the disease and a probability score,
the method being characterized in that the endoscopic image is at least one selected from a white-light image, a narrow-band image, and an indigo carmine dye-sprayed image of the stomach, and the trained convolutional neural network system outputs the invasion depth of the disease for the endoscopic image input from the endoscopic image input unit.
A disease diagnosis support system based on endoscopic images of a digestive organ using a convolutional neural network system, the system having an endoscopic image input unit, an output unit, and a computer incorporating a convolutional neural network program, wherein
the computer comprises:
a first storage area that stores a first endoscopic image of a digestive organ;
a second storage area that stores a definitive diagnosis result, corresponding to the first endoscopic image, of information on the positivity and negativity of the disease of the digestive organ; and
a third storage area that stores the convolutional neural network program,
the convolutional neural network program is trained on the first endoscopic image stored in the first storage area and the definitive diagnosis result stored in the second storage area, and outputs, based on a second endoscopic image of the digestive organ input from the endoscopic image input unit, information corresponding to the positivity or negativity of the disease of the digestive organ for the second endoscopic image to the output unit, and
the system being characterized in that the endoscopic image is a wireless capsule endoscopic image of the small intestine, and the trained convolutional neural network program outputs a region of an elevated lesion, as the disease, in the wireless capsule endoscopic image input from the endoscopic image input unit.
The disease diagnosis support system based on endoscopic images of a digestive organ using a convolutional neural network system according to claim 17, wherein the convolutional neural network program
displays, in the second endoscopic image, the region of the elevated lesion based on a positive or negative definitive diagnosis of the disease of the small intestine and the region of the elevated lesion detected by the trained convolutional neural network system, and
judges the correctness of the diagnosis result from the overlap between the region of the elevated lesion based on the definitive diagnosis result displayed in the second endoscopic image and the detected region of the elevated lesion.
The disease diagnosis support system based on endoscopic images of a digestive organ using a convolutional neural network system according to claim 19, wherein the diagnosis by the trained convolutional neural network system is judged to be correct when the overlap
(1) is 80% or more of the region of the elevated lesion based on the definitive diagnosis result, or
(2) when a plurality of regions positive for the disease are detected by the trained convolutional neural network system, any one of those regions overlaps the region of the elevated lesion based on the definitive diagnosis result.
A disease diagnosis support system based on endoscopic images of a digestive organ using a convolutional neural network system, the system having an endoscopic image input unit, an output unit, and a computer incorporating a convolutional neural network program, wherein
the computer comprises:
a first storage area that stores a first endoscopic image of a digestive organ;
a second storage area that stores a definitive diagnosis result, corresponding to the first endoscopic image, of information on the positivity and negativity of the disease of the digestive organ; and
a third storage area that stores the convolutional neural network program,
the convolutional neural network program is trained on the first endoscopic image stored in the first storage area and the definitive diagnosis result stored in the second storage area, and outputs, based on a second endoscopic image of the digestive organ input from the endoscopic image input unit, information corresponding to the positivity or negativity of the disease of the digestive organ for the second endoscopic image to the output unit, and
the system being characterized in that the endoscopic image is a wireless capsule endoscopic image of the small intestine, and the trained convolutional neural network program displays, in the second image, a probability score of bleeding as the disease.
A disease diagnosis support system based on endoscopic images of a digestive organ using a convolutional neural network system, the system having an endoscopic image input unit, an output unit, and a computer incorporating a convolutional neural network program, wherein
the computer comprises:
a first storage area that stores a first endoscopic image of a digestive organ;
a second storage area that stores a definitive diagnosis result, corresponding to the first endoscopic image, of information on the positivity and negativity of the disease of the digestive organ; and
a third storage area that stores the convolutional neural network program,
the convolutional neural network program is trained on the first endoscopic image stored in the first storage area and the definitive diagnosis result stored in the second storage area, and outputs, based on a second endoscopic image of the digestive organ input from the endoscopic image input unit, information corresponding to the positivity or negativity of the disease of the digestive organ for the second endoscopic image to the output unit, and
the system being characterized in that the first endoscopic image is a still image from a wireless capsule endoscope of the small intestine, the second endoscopic image is a moving image from a wireless capsule endoscope of the small intestine, and the trained convolutional neural network program displays the region of the disease detected in the second wireless capsule endoscopic image input from the endoscopic image input unit.
The disease diagnosis support system based on endoscopic images of a digestive organ using a convolutional neural network system according to claim 24, wherein
the region of mucosal injury is at least one of an erosion and an ulcer,
the region of angioectasia is at least one of angioectasia type 1a and angioectasia type 1b, and
the region of the elevated lesion is at least one of a polyp, a nodule, a submucosal tumor, a vascular structure, and an epithelial tumor.
The disease diagnosis support system based on endoscopic images of a digestive organ using a convolutional neural network system according to any one of claims 23-25, wherein the trained convolutional neural network system is a combined convolutional neural network system composed of:
a first convolutional neural network system part trained on still images with definitive diagnoses of mucosal injury;
a second convolutional neural network system part trained on still images with definitive diagnoses of angioectasia;
a third convolutional neural network system part trained on still images with definitive diagnoses of elevated lesions; and
a fourth convolutional neural network system part trained on still images with definitive diagnoses of bleeding.
A disease diagnosis support system based on endoscopic images of a digestive organ using a convolutional neural network system, the system having an endoscopic image input unit, an output unit, and a computer incorporating a convolutional neural network program, wherein
the computer comprises:
a first storage area that stores a first endoscopic image of a digestive organ;
a second storage area that stores a definitive diagnosis result, corresponding to the first endoscopic image, of information on the positivity and negativity of the disease of the digestive organ; and
a third storage area that stores the convolutional neural network program,
the convolutional neural network program is trained on the first endoscopic image stored in the first storage area and the definitive diagnosis result stored in the second storage area, and outputs, based on a second endoscopic image of the digestive organ input from the endoscopic image input unit, information corresponding to the positivity or negativity of the disease of the digestive organ for the second endoscopic image to the output unit, and
the system being characterized in that the endoscopic image is an endoscopic image of the stomach obtained by narrow-band imaging with the water-immersion method, and the trained convolutional neural network program outputs a region of early gastric cancer, as the disease, in the endoscopic image input from the endoscopic image input unit.
A disease diagnosis support system based on endoscopic images of a digestive organ using a convolutional neural network system, the system having an endoscopic image input unit, an output unit, and a computer incorporating a convolutional neural network program, wherein
the computer comprises:
a first storage area that stores a first endoscopic image of a digestive organ;
a second storage area that stores a definitive diagnosis result, corresponding to the first endoscopic image, of information on the positivity and negativity of the disease of the digestive organ; and
a third storage area that stores the convolutional neural network program,
the convolutional neural network program is trained on the first endoscopic image stored in the first storage area and the definitive diagnosis result stored in the second storage area, and outputs, based on a second endoscopic image of the digestive organ input from the endoscopic image input unit, information corresponding to the positivity or negativity of the disease of the digestive organ for the second endoscopic image to the output unit, and
the system being characterized in that the endoscopic image is at least one selected from a white-light image, a narrow-band image, and an indigo carmine dye-sprayed image of the stomach, and the trained convolutional neural network program outputs the invasion depth of the disease for the endoscopic image input from the endoscopic image input unit.
Applications Claiming Priority (10)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2019-172355 | 2019-09-20 | ||
| JP2019172355 | 2019-09-20 | ||
| JP2019197174 | 2019-10-30 | ||
| JP2019-197174 | 2019-10-30 | ||
| JP2019225961 | 2019-12-13 | ||
| JP2019-225961 | 2019-12-13 | ||
| JP2020-027627 | 2020-02-20 | ||
| JP2020027627 | 2020-02-20 | ||
| JP2020-080865 | 2020-04-30 | ||
| JP2020080865 | 2020-04-30 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| WO2021054477A2 true WO2021054477A2 (en) | 2021-03-25 |
| WO2021054477A3 WO2021054477A3 (en) | 2021-07-22 |
Family
ID=74882973
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2020/035652 Ceased WO2021054477A2 (en) | 2019-09-20 | 2020-09-19 | Disease diagnostic support method using endoscopic image of digestive system, diagnostic support system, diagnostic support program, and computer-readable recording medium having said diagnostic support program stored therein |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2021054477A2 (en) |
Cited By (13)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113222932A (en) * | 2021-05-12 | 2021-08-06 | 上海理工大学 | Small intestine endoscope image feature extraction method based on multi-convolution neural network integrated learning |
| CN113344926A (en) * | 2021-08-05 | 2021-09-03 | 武汉楚精灵医疗科技有限公司 | Method, device, server and storage medium for recognizing biliary-pancreatic ultrasonic image |
| CN114332056A (en) * | 2021-12-31 | 2022-04-12 | 南京鼓楼医院 | Early gastric cancer endoscope real-time auxiliary detection system based on target detection algorithm |
| CN115661086A (en) * | 2022-10-28 | 2023-01-31 | 电子科技大学长三角研究院(湖州) | Gastritis image analysis method based on integrated network structure and multi-device migration |
| WO2023053854A1 (en) * | 2021-09-29 | 2023-04-06 | 京都府公立大学法人 | Diagnostic assistance device and diagnostic assistance program |
| JP7349005B1 (en) | 2022-11-08 | 2023-09-21 | 株式会社両備システムズ | Program, information processing method, information processing device, and learning model generation method |
| CN117058467A (en) * | 2023-10-10 | 2023-11-14 | 湖北大学 | Gastrointestinal tract lesion type identification method and system |
| WO2024018581A1 (en) * | 2022-07-21 | 2024-01-25 | 日本電気株式会社 | Image processing device, image processing method, and storage medium |
| JPWO2024024022A1 (en) * | 2022-07-28 | 2024-02-01 | ||
| CN118743531A (en) * | 2024-06-17 | 2024-10-08 | 山东大学齐鲁医院 | A fluorescence diagnosis auxiliary system for Helicobacter pylori |
| EP4497368A1 (en) * | 2023-07-25 | 2025-01-29 | Olympus Medical Systems Corporation | Endoscopic imaging manipulation method and system |
| EP4497369A1 (en) * | 2023-07-25 | 2025-01-29 | Olympus Medical Systems Corporation | Method and system for medical endoscopic imaging analysis and manipulation |
| EP4579685A4 (en) * | 2022-08-26 | 2025-11-26 | Univ Korea Res & Bus Found | Training method and training device of a model for determining nasal cavity volume, as well as method and device for determining nasal cavity volume |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20120316421A1 (en) * | 2009-07-07 | 2012-12-13 | The Johns Hopkins University | System and method for automated disease assessment in capsule endoscopy |
| WO2016185617A1 (en) * | 2015-05-21 | 2016-11-24 | オリンパス株式会社 | Image processing device, image processing method, and image processing program |
| JP6656357B2 (en) * | 2016-04-04 | 2020-03-04 | オリンパス株式会社 | Learning method, image recognition device and program |
| EP3705025A4 (en) * | 2017-10-30 | 2021-09-08 | Japanese Foundation For Cancer Research | DEVICE FOR SUPPORTING IMAGE DIAGNOSIS, DATA ACQUISITION METHOD, METHOD FOR SUPPORTING IMAGE DIAGNOSIS, AND PROGRAM FOR SUPPORTING IMAGE DIAGNOSIS |
- 2020-09-19: WO application PCT/JP2020/035652 published as WO2021054477A2 (status: not active, Ceased)
Cited By (17)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113222932B (en) * | 2021-05-12 | 2023-05-02 | 上海理工大学 | Small intestine endoscope picture feature extraction method based on multi-convolution neural network integrated learning |
| CN113222932A (en) * | 2021-05-12 | 2021-08-06 | 上海理工大学 | Small intestine endoscope image feature extraction method based on multi-convolution neural network integrated learning |
| CN113344926A (en) * | 2021-08-05 | 2021-09-03 | 武汉楚精灵医疗科技有限公司 | Method, device, server and storage medium for recognizing biliary-pancreatic ultrasonic image |
| CN113344926B (en) * | 2021-08-05 | 2021-11-02 | 武汉楚精灵医疗科技有限公司 | Method, device, server and storage medium for recognizing biliary-pancreatic ultrasonic image |
| WO2023053854A1 (en) * | 2021-09-29 | 2023-04-06 | 京都府公立大学法人 | Diagnostic assistance device and diagnostic assistance program |
| CN114332056A (en) * | 2021-12-31 | 2022-04-12 | 南京鼓楼医院 | Early gastric cancer endoscope real-time auxiliary detection system based on target detection algorithm |
| WO2024018581A1 (en) * | 2022-07-21 | 2024-01-25 | 日本電気株式会社 | Image processing device, image processing method, and storage medium |
| JPWO2024024022A1 (en) * | 2022-07-28 | 2024-02-01 | ||
| EP4579685A4 (en) * | 2022-08-26 | 2025-11-26 | Univ Korea Res & Bus Found | Training method and training device of a model for determining nasal cavity volume, as well as method and device for determining nasal cavity volume |
| CN115661086A (en) * | 2022-10-28 | 2023-01-31 | 电子科技大学长三角研究院(湖州) | Gastritis image analysis method based on integrated network structure and multi-device migration |
| JP2024068425A (en) * | 2022-11-08 | 2024-05-20 | 株式会社両備システムズ | Program, information processing method, information processing device, and method for generating learning model |
| JP7349005B1 (en) | 2022-11-08 | 2023-09-21 | 株式会社両備システムズ | Program, information processing method, information processing device, and learning model generation method |
| EP4497368A1 (en) * | 2023-07-25 | 2025-01-29 | Olympus Medical Systems Corporation | Endoscopic imaging manipulation method and system |
| EP4497369A1 (en) * | 2023-07-25 | 2025-01-29 | Olympus Medical Systems Corporation | Method and system for medical endoscopic imaging analysis and manipulation |
| CN117058467B (en) * | 2023-10-10 | 2023-12-22 | 湖北大学 | Gastrointestinal tract lesion type identification method and system |
| CN117058467A (en) * | 2023-10-10 | 2023-11-14 | 湖北大学 | Gastrointestinal tract lesion type identification method and system |
| CN118743531A (en) * | 2024-06-17 | 2024-10-08 | 山东大学齐鲁医院 | A fluorescence diagnosis auxiliary system for Helicobacter pylori |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2021054477A3 (en) | 2021-07-22 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP7037220B2 (en) | Disease diagnosis support system using endoscopic images of the digestive organs, method of operating the diagnosis support system, diagnosis support program, and computer-readable recording medium storing the diagnosis support program | |
| JP7017198B2 (en) | Disease diagnosis support method using endoscopic images of the digestive organs, diagnosis support system, diagnosis support program, and computer-readable recording medium storing the diagnosis support program | |
| JP7216376B2 (en) | Diagnosis support method using endoscopic images of digestive organs, diagnosis support system, diagnosis support program, and computer-readable recording medium storing the diagnosis support program | |
| WO2021054477A2 (en) | Disease diagnostic support method using endoscopic image of digestive system, diagnostic support system, diagnostic support program, and computer-readable recording medium having said diagnostic support program stored therein | |
| Okagawa et al. | Artificial intelligence in endoscopy | |
| WO2019245009A1 (en) | Method of assisting disease diagnosis based on endoscope image of digestive organ, diagnosis assistance system, diagnosis assistance program, and computer-readable recording medium having said diagnosis assistance program stored thereon | |
| Cai et al. | Using a deep learning system in endoscopy for screening of early esophageal squamous cell carcinoma (with video) | |
| Kamba et al. | Reducing adenoma miss rate of colonoscopy assisted by artificial intelligence: a multicenter randomized controlled trial | |
| Igarashi et al. | Anatomical classification of upper gastrointestinal organs under various image capture conditions using AlexNet | |
| JP7335552B2 (en) | Diagnostic imaging support device, learned model, operating method of diagnostic imaging support device, and diagnostic imaging support program | |
| Horiuchi et al. | Convolutional neural network for differentiating gastric cancer from gastritis using magnified endoscopy with narrow band imaging | |
| Cho et al. | Automated classification of gastric neoplasms in endoscopic images using a convolutional neural network | |
| Namikawa et al. | Utilizing artificial intelligence in endoscopy: a clinician’s guide | |
| Suzuki et al. | Artificial intelligence for cancer detection of the upper gastrointestinal tract | |
| Li et al. | Intelligent detection endoscopic assistant: An artificial intelligence-based system for monitoring blind spots during esophagogastroduodenoscopy in real-time | |
| Jiang et al. | Differential diagnosis of Helicobacter pylori-associated gastritis with the linked-color imaging score | |
| Park et al. | Effectiveness of a novel artificial intelligence-assisted colonoscopy system for adenoma detection: a prospective, propensity score-matched, non-randomized controlled study in Korea | |
| Cychnerski et al. | ERS: a novel comprehensive endoscopy image dataset for machine learning, compliant with the MST 3.0 specification | |
| JP2023079866A (en) | Stomach cancer examination method using super-magnifying endoscope, diagnosis support method, diagnosis support system, diagnosis support program, trained model, and image diagnosis support device | |
| Zachariah et al. | The potential of deep learning for gastrointestinal endoscopy—a disruptive new technology | |
| Foerster et al. | Advanced endoscopic imaging methods | |
| Ikeya et al. | Real-time Use of Computer-aided Diagnosis in the Optical Diagnosis of Gastric Neoplasia: A Multicenter Randomized Controlled Trial | |
| Samuel et al. | PTH-015 Developing The Recorded Image Quality Index (RIQI) Tool–Measuring Recorded Image Quality, Degree of Representation and Utility | |
| Shiroma et al. | Ability of artificial intelligence to detect T1 esophageal squamous cell carcinoma from endoscopic videos: supportive effects of real-time assistance | |
| Pittayanon et al. | Su1401 The Learning Curve on the Images Obtained by Probe-Based Confocal LASER Endomicroscopy (pCLE) for the Interpretation of Malignant Biliary Stricture |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20865830; Country of ref document: EP; Kind code of ref document: A2 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 20865830; Country of ref document: EP; Kind code of ref document: A2 |
| | NENP | Non-entry into the national phase | Ref country code: JP |