WO2021261323A1 - Information processing device, information processing method, program, and information processing system - Google Patents
Information processing device, information processing method, program, and information processing system
- Publication number: WO2021261323A1 (application PCT/JP2021/022634)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- region
- information processing
- image data
- area
- fitting
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G06T7/11 — Region-based segmentation
- G06T7/12 — Edge-based segmentation
- G06T7/181 — Segmentation; Edge detection involving edge growing; involving edge linking
- G06T7/187 — Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
- G06T11/203 — Drawing of straight lines or curves
- G06T11/40 — Filling a planar surface by adding surface attributes, e.g. colour or texture
- G06T2200/24 — Indexing scheme for image data processing or generation involving graphical user interfaces [GUIs]
- G06T2207/10056 — Microscopic image
- G06T2207/20081 — Training; Learning
- G06T2207/20092 — Interactive image processing based on input by user
- G06T2207/20104 — Interactive definition of region of interest [ROI]
- G06T2207/30024 — Cell structures in vitro; Tissue sections in vitro
- G06T2207/30096 — Tumor; Lesion

(All codes belong to CPC class G06T — Image data processing or generation.)
Description
- This disclosure relates to information processing devices, information processing methods, programs and information processing systems.
- In the analysis of image data of living tissue, it is useful to automatically extract a target area (for example, a lesion area). To this end, machine learning is performed using images of a plurality of known (labeled) target areas as teacher data, and a classifier and the data (model data) used by the classifier are constructed by such machine learning.
- Using the classifier and the model data, the target area can be automatically extracted from newly obtained image data.
- In the following description, the image data of the known target area used as the teacher data is called annotation data.
- Various techniques for generating annotation data are disclosed; Non-Patent Document 1 below is one example.
- Conventionally, annotation data is generated as follows: the user draws a line on the image data using an input device (for example, a mouse or an electronic pen) to specify the range of the target area, and the image in the specified range is extracted. In order to perform the automatic extraction of the target area described above with high accuracy, machine learning must be performed using a large amount of appropriately labeled, accurate annotation data, from which the classifier and the model data used by the classifier are constructed.
- The present disclosure therefore proposes an information processing device, an information processing method, a program, and an information processing system that can efficiently generate data (annotation data) to be subjected to a predetermined process (machine learning).
- According to the present disclosure, an information processing apparatus is provided that includes an information acquisition unit that acquires information of a first region designated by a user's fill input operation on image data of a living tissue, and a region determination unit that, based on the image data and the information of the first region, performs fitting on the boundary of the first region and determines a second region to be subjected to a predetermined process.
- Further, according to the present disclosure, an information processing method is provided in which a processor acquires information of a first region designated by a user's fill input operation on image data of a living tissue, and, based on the image data and the information of the first region, performs fitting on the boundary of the first region and determines a second region to be subjected to a predetermined process.
- Further, according to the present disclosure, a program is provided that causes a computer to function as an information acquisition unit that acquires information of a first region designated by a user's fill input operation on image data of a living tissue, and as a region determination unit that, based on the image data and the information of the first region, performs fitting on the boundary of the first region and determines a second region to be subjected to a predetermined process.
- Further, according to the present disclosure, an information processing system is provided that includes an information processing apparatus and a program for causing the information processing apparatus to execute information processing. According to the program, the information processing apparatus functions as an information acquisition unit that acquires information of a first region designated by a user's fill input operation on image data of a living tissue, and as an area determination unit that, based on the image data and the information of the first region, executes fitting on the boundary of the first region and determines a second region to be subjected to a predetermined process.
- FIGS. 9 and 10 are explanatory diagrams (No. 1 and No. 2) of the input screen according to the embodiment of the present disclosure.
- FIG. 11 is a sub-flowchart (No. 1) of step S230 shown in FIG. 8, and FIGS. 12 to 14 are explanatory diagrams explaining each submode according to the embodiment of the present disclosure.
- In pathological diagnosis, a pathologist makes a diagnosis using a pathological image, but the diagnosis result for the same pathological image may differ depending on the pathologist.
- Such variability in diagnosis arises, for example, from differences in experience (years of practice) and specialty among pathologists, and is difficult to avoid. Therefore, in recent years, techniques have been developed that use machine learning to derive diagnostic support information, that is, information for supporting pathological diagnosis, with the aim of enabling all pathologists to perform highly accurate pathological diagnosis.
- Specifically, machine learning is performed using images of a plurality of known target areas as teacher data, whereby a classifier and the data (model data) used by the classifier are constructed. By using the classifier constructed by such machine learning and its model data, an image of a notable target area can be automatically extracted from a new pathological image. Such a technique can provide the pathologist with information on the target area of interest in the new pathological image, so that the pathologist can perform the pathological diagnosis of the pathological image more appropriately.
- In the following description, data in which a label (annotation) is attached to the image of a target area (for example, a lesion area) and which is used as teacher data for the machine learning is referred to as annotation data.
- The label (annotation) attached to the target area can carry various kinds of information about the area.
- For example, the information can include diagnostic results such as the subtype of a cancer, the stage of a cancer, and the degree of differentiation of cancer cells, as well as the presence or absence of a lesion in the target area, the probability that the target area contains a lesion, the location of the lesion, and the type of the lesion.
- The degree of differentiation, for example, can be used to predict what kind of drug (anticancer drug, etc.) is likely to be effective.
- FIG. 1 is a diagram showing a configuration example of the information processing system 1 according to the embodiment of the present disclosure.
- the information processing system 1 according to the embodiment of the present disclosure includes an information processing device 10, a display device 20, a scanner 30, a learning device 40, and a network 50.
- the information processing device 10, the scanner 30, and the learning device 40 are configured to be able to communicate with each other via the network 50.
- As the communication method used in the network 50, any method, wired or wireless, can be applied, but it is desirable to use a communication method capable of maintaining stable operation.
- The information processing device 10 and the display device 20 may be separate devices as shown in FIG. 1, or may be an integrated device; they are not particularly limited.
- the outline of each device included in the information processing system 1 will be described below.
- the information processing device 10 is configured by, for example, a computer, and can generate annotation data used for the machine learning and output it to the learning device 40 described later.
- The information processing apparatus 10 is used by a user (for example, a doctor or a laboratory technician).
- various operations by the user are input to the information processing apparatus 10 via a mouse (not shown) or a pen tablet (not shown).
- various operations by the user may be input to the information processing apparatus 10 via a terminal (not shown).
- various presentation information to the user is output from the information processing device 10 via the display device 20.
- various presentation information to the user may be output from the information processing apparatus 10 via a terminal (not shown).
- the details of the information processing apparatus 10 according to the embodiment of the present disclosure will be described later.
- The display device 20 is, for example, a liquid crystal display, an EL (Electro Luminescence) display, or a CRT (Cathode Ray Tube) display, and can display a pathological image under the control of the information processing device 10 described above. The display device 20 may also have a superimposed touch panel that receives input from the user. In the present embodiment, the display device 20 may be compatible with 4K or 8K, or may be configured by a plurality of display devices; it is not particularly limited. While viewing the pathological image displayed on the display device 20, the user can use the above-mentioned mouse (not shown), pen tablet (not shown), or the like to freely specify a notable target area (for example, a lesion area) on the pathological image and attach an annotation (label) to the target area.
- The scanner 30 can read living tissue, such as a cell specimen obtained from a specimen. The scanner 30 thereby generates a pathological image in which the living tissue is captured and outputs the pathological image to the above-mentioned information processing apparatus 10.
- the scanner 30 has an image sensor and generates a pathological image by imaging a living tissue with the image sensor.
- The reading method of the scanner 30 is not limited to a specific type; in the present embodiment, it may be a CCD (Charge Coupled Device) type or a CIS (Contact Image Sensor) type, and is not particularly limited.
- The CCD type reads light (reflected light or transmitted light) from the living tissue with a CCD sensor and converts the light read by the CCD sensor into image data.
- The CIS type uses RGB three-color LEDs (Light Emitting Diodes) as a light source, reads light (reflected light or transmitted light) from the living tissue with a photosensor, and converts the read result into image data.
- the image data according to the embodiment of the present disclosure is not limited to the lesion image.
- Further, the pathological image may include a single image obtained by stitching together a plurality of images acquired by continuously photographing a living tissue (slide) set on the stage of a scanner (a microscope having an image sensor).
- Such a method of stitching a plurality of images together to generate one image is called whole slide imaging (WSI).
- The learning device 40 is configured by, for example, a computer, and can construct a classifier and the model data used by the classifier by performing machine learning using a plurality of annotation data. By using the classifier constructed by the learning device 40 and its model data, it becomes possible to automatically extract an image of a notable target area from a new pathological image. Deep learning can typically be used for the machine learning.
- the classifier is realized by a neural network.
- the model data may correspond to the weight of each neuron in the neural network.
- The classifier may also be realized by a method other than a neural network. In the present embodiment, for example, the classifier may be realized by a random forest, a support vector machine, or AdaBoost, and is not particularly limited.
- the learning device 40 acquires a plurality of annotation data and calculates the feature amount of the image of the target area included in the annotation data.
- The features include, for example, color features of cells or cell nuclei (brightness, saturation, wavelength, spectrum, etc.), shape features (circularity, circumference, etc.), density, distance from a specific morphology, local features, structure extraction processing (nuclear detection, etc.), and aggregated information (cell density, orientation, etc.).
- the learning device 40 inputs an image of a target area into an algorithm such as a neural network, and calculates the feature amount of the image.
- Further, the learning device 40 calculates a representative feature amount, which is the feature amount of a plurality of target areas as a whole, by aggregating the feature amounts of the images of the plurality of target areas that share the same annotation (label).
- For example, the learning device 40 calculates the representative feature amount of the plurality of target areas as a whole based on the distribution of the feature amounts of each image in the plurality of target areas (for example, a color histogram) and on features such as LBP (Local Binary Pattern), which focuses on the texture structure of the image. The classifier can then extract, from the areas included in a new pathological image, an image of another target area similar to the known target areas, based on the calculated feature amounts.
- the information processing device 10, the scanner 30, and the learning device 40 exist as separate devices.
- a part or all of the information processing device 10, the scanner 30, and the learning device 40 may exist as an integrated device.
- a part of the functions of any one of the information processing device 10, the scanner 30, and the learning device 40 may be incorporated in the other device.
- FIG. 2 is a flowchart showing an operation example of the information processing system 1 according to the embodiment of the present disclosure; specifically, it shows the flow from acquiring a pathological image through generating annotation data to constructing the classifier and the like. Further, FIG. 3 is an explanatory diagram illustrating an operation example of the information processing system 1 according to the embodiment of the present disclosure.
- the information processing method according to the present embodiment includes steps S100 to S300.
- The details of steps S100 to S300 of the information processing method according to the present embodiment are described below.
- the scanner 30 photographs (reads) a biological tissue, which is an observation object contained in a slide, generates a pathological image in which the biological tissue is captured, and outputs the pathological image to the information processing apparatus 10 (step S100).
- The biological tissue can be, for example, tissue or cells collected from a patient, a piece of an organ, saliva, blood, or the like.
- Next, the information processing device 10 presents the pathological image 610 to the user via the display device 20, as shown on the left side of FIG. 3. While viewing the pathological image 610, the user uses a mouse (not shown) or a pen tablet (not shown) to specify the range of a notable target area 702 (for example, a lesion area) on the pathological image 610, as shown in the center of FIG. 3, and attaches an annotation (label) to the specified target area 702. Then, as shown on the right side of FIG. 3, the information processing apparatus 10 generates annotation data 710 based on the image of the target region 702 to which the annotation is attached, and outputs the annotation data 710 to the learning apparatus 40 (step S200).
- the learning device 40 constructs a classifier and model data used by the classifier by performing machine learning using a plurality of annotation data 710 (step S300).
- FIGS. 4 and 5 are explanatory views illustrating an operation example of the information processing apparatus 10 according to the embodiment of the present disclosure.
- For machine learning, a large amount of annotation data 710 must be prepared. If a sufficient amount of annotation data 710 cannot be prepared, the accuracy of the machine learning decreases, the accuracy of the constructed classifier and of the model data used by the classifier decreases, and it becomes difficult to accurately extract a notable target area (for example, a lesion area) from a new pathological image.
- As described above, the annotation data 710 (specifically, the image included in the annotation data 710) is generated as follows: the user draws a curve 704 on the pathological image 610 using a mouse (not shown) or the like to specify a boundary indicating the range of the target area 702, and the image of the specified range is extracted.
- In the following description, the target area 702 means not only the boundary input by the user but the entire area surrounded by the boundary.
- Here, it is conceivable to perform a fitting process on the curve 704 drawn by the user so as to acquire the contour of the actual target area 702, and to acquire an image of the target area 702 from the pathological image 610 based on the acquired contour. By executing such a fitting process, even if the curve 704 drawn by the user deviates from the contour of the actual target area 702, the contour of the target area 702 can be accurately acquired as the user intended.
- Examples of fitting processing methods applicable here include "foreground background fitting", "cell membrane fitting", and "cell nucleus fitting"; their details will be described later.
- However, the target area 702 may have a complicated, intricate shape, as cancer cells do. In such a case, when the user draws a curve 704 on the pathological image 610, the path of the curve 704 is long, and a long input time is difficult to avoid. It is therefore difficult to efficiently generate a large amount of highly accurate annotation data 710.
- In view of such circumstances, the present inventors conceived of specifying the range of the target area 702 by performing a fill input operation on the pathological image 610.
- the work of filling the target area 702 can reduce the time and effort of the user as compared with the work of drawing the curve 704.
- Further, the contour of the actual target region 702 can be acquired by a fitting process based on the boundary of the region filled by the fill input operation, and the image of the target region 702 can be extracted from the pathological image 610 based on the acquired contour.
- In the following description, the fill input operation means an operation in which the user specifies the range of the target area 702 by a fill range 700 that fills the target area 702 on the pathological image 610, as shown in the center of FIG. 5.
- In the following description, a tissue section or cells that are part of a tissue (for example, an organ or epithelial tissue) acquired from a living body (for example, a human body or a plant) are referred to as living tissue.
- various types of the target area 702 are assumed.
- the tumor area is mainly assumed.
- examples of the target region 702 include a region containing a sample, a tissue region, an artifact region, an epithelial tissue, a squamous epithelium, a glandular region, a cell atypical region, and a tissue atypical region.
- the biological tissue described below may be subjected to various stainings as needed.
- the biological tissue specimen may or may not be stained with various stains, and is not particularly limited.
- The staining includes not only general staining represented by HE (hematoxylin-eosin) staining, Giemsa staining, Papanicolaou staining, and the like, but also periodic acid-Schiff (PAS) staining used when focusing on a specific tissue, and fluorescent staining such as FISH (Fluorescence In Situ Hybridization) and the enzyme antibody method.
- More specifically, in the present embodiment, the fill input operation means an operation in which, based on the user's input, a locus having a predetermined width is displayed superimposed on the pathological image (image data) 610 so as to fill the target area 702, which is a part of the pathological image 610. Further, in the following description, when the predetermined width is set to less than a threshold value, the input operation becomes a line drawing input operation (stroke), in which the user draws a locus having the same width as the threshold value so as to be superimposed on the pathological image (image data) 610.
- FIG. 6 is a diagram showing a functional configuration example of the information processing apparatus 10 according to the present embodiment.
- the information processing apparatus 10 mainly includes a processing unit 100, an image data receiving unit 120, a storage unit 130, an operation unit 140, and a transmitting unit 150. The details of each functional unit of the information processing apparatus 10 will be sequentially described below.
- the processing unit 100 can generate annotation data 710 from the pathological image 610 based on the pathological image (image data) 610 and an input operation from the user.
- The processing unit 100 functions by, for example, a CPU (Central Processing Unit) or an MPU (Micro Processing Unit) executing a program stored in the storage unit 130, which will be described later, using a RAM (Random Access Memory) or the like as a work area.
- the processing unit 100 may be configured by an integrated circuit such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array). The details of the processing unit 100 will be described later.
- the image data receiving unit 120 and the transmitting unit 150 are configured to include a communication circuit.
- the image data receiving unit 120 can receive the pathological image (image data) 610 from the scanner 30 via the network 50.
- the image data receiving unit 120 outputs the received pathological image 610 to the processing unit 100 described above.
- the transmission unit 150 can transmit the annotation data 710 to the learning device 40 via the network 50.
- the storage unit 130 is realized by, for example, a semiconductor memory element such as a RAM or a flash memory, or a storage device such as a hard disk or an optical disk.
- the storage unit 130 stores annotation data 710 already generated by the processing unit 100, a program executed by the processing unit 100, and the like.
- the operation unit 140 has a function of accepting an operation input by the user.
- For example, the operation unit 140 includes a mouse and a keyboard.
- However, the operation unit 140 is not limited to these; it may include, for example, an electronic pen, a touch panel, or an image sensor for detecting the line of sight.
- the above configuration described with reference to FIG. 6 is merely an example, and the configuration of the information processing apparatus 10 according to the present embodiment is not limited to such an example. That is, the configuration of the information processing apparatus 10 according to the present embodiment can be flexibly modified according to specifications and operations.
- FIG. 7 is a diagram showing a functional configuration example of the processing unit 100 shown in FIG. 6. Specifically, as shown in FIG. 7, the processing unit 100 mainly includes a locus width setting unit 102, an information acquisition unit 104, a determination unit 106, an area determination unit 108, an extraction unit 110, and a display control unit 112. The details of each functional unit of the processing unit 100 will be sequentially described below.
- the locus width setting unit 102 can acquire input information by the user from the operation unit 140, and can set the width of the locus in the fill input operation based on the acquired information. Then, the locus width setting unit 102 can output the information of the set locus width to the information acquisition unit 104 and the display control unit 112, which will be described later. The details of the input setting of the locus width by the user will be described later.
- the locus width setting unit 102 may switch from the fill input operation to the line drawing input operation when the locus width is set to be less than a predetermined threshold value. That is, the locus width setting unit 102 can switch between the fill input operation and the line drawing input operation.
- Note that, as described above, the line drawing input operation means an input operation in which a locus having the same width as the threshold value is drawn by the user so as to be superimposed on the pathological image (image data) 610.
- Further, the locus width setting unit 102 may automatically set the width of the locus based on an analysis result for the pathological image 610 (for example, a frequency analysis result for the pathological image 610, or an extraction result of recognizing and extracting a specific tissue from the pathological image 610) or on the display magnification of the pathological image 610. Further, the locus width setting unit 102 may automatically set the width of the locus based on the speed at which the user draws the locus on the pathological image 610.
- Further, the locus width setting unit 102 may automatically set the width of the locus, or switch between the fill input operation and the line drawing input operation, based on the input start position (start point of the locus) of the fill input operation on the pathological image 610, for example, based on the positional relationship of the input start position with respect to the area related to existing annotation data (other learning image data) 710 (details will be described later).
- By automatically setting and switching the width of the locus in this way, the convenience of the input operation is further enhanced, and a large amount of highly accurate annotation data 710 can be generated efficiently.
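The width-threshold switching described above can be expressed compactly. The following Python sketch is one hypothetical reading of it; the threshold value and the class layout are assumptions, not the patent's implementation:

```python
from dataclasses import dataclass

STROKE_WIDTH_THRESHOLD = 5.0  # pixels; an assumed value, tunable per deployment

@dataclass
class LocusWidthSetting:
    width: float = 20.0

    @property
    def mode(self) -> str:
        # Below the threshold the tool behaves as a line drawing (stroke) input;
        # at or above it, the locus fills the target area.
        return "line_drawing" if self.width < STROKE_WIDTH_THRESHOLD else "fill"

    def effective_width(self) -> float:
        # Per the text, a line drawing locus is rendered at exactly the threshold width.
        return STROKE_WIDTH_THRESHOLD if self.mode == "line_drawing" else self.width

setting = LocusWidthSetting(width=3.0)
print(setting.mode, setting.effective_width())  # line_drawing 5.0
```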
- The information acquisition unit 104 can acquire information on an input operation by the user from the operation unit 140, and outputs the acquired information to the determination unit 106, which will be described later. Specifically, the information acquisition unit 104 acquires information on the fill range (first region) 700, which is filled in and thereby designated by the user's fill input operation on the pathological image (image data of living tissue) 610. Further, the information acquisition unit 104 may acquire information on a range (third region) specified by being surrounded by the curve 704 drawn by the user's line drawing input operation on the pathological image 610.
- The determination unit 106 can determine whether or not the fill range (first area) 700 specified by the user's fill input operation on the pathological image 610 overlaps one or more pieces of existing annotation data 710 already stored in the storage unit 130. Further, the determination unit 106 can also determine in what state the fill range 700 overlaps the existing annotation data 710 (for example, whether or not it overlaps so as to straddle it). The determination unit 106 then outputs the determination result to the area determination unit 108, which will be described later.
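Treating the fill range and each existing annotation as boolean pixel masks, the overlap tests performed by the determination unit 106 can be sketched as follows. This is an illustrative reading, not the patent's code; in particular, interpreting "straddle" as "subtracting the fill range splits the existing annotation into multiple connected components" is an assumption.

```python
import numpy as np
from scipy import ndimage

def overlaps(fill: np.ndarray, annotation: np.ndarray) -> bool:
    """True if the fill range shares any pixel with the existing annotation."""
    return bool(np.any(fill & annotation))

def straddles(fill: np.ndarray, annotation: np.ndarray) -> bool:
    """True if the fill range crosses the annotation from one side to the other,
    i.e. removing the fill splits the annotation into two or more pieces."""
    if not overlaps(fill, annotation):
        return False
    _, num_pieces = ndimage.label(annotation & ~fill)
    return num_pieces >= 2
```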
- The area determination unit 108 performs fitting on the boundary line of all or part of the fill range 700 filled by the fill input operation, based on the pathological image (image data) 610, the fill range (first area) 700 designated by the user's fill input operation on the pathological image 610, and the determination result of the determination unit 106 described above. By this fitting process, the area determination unit 108 can acquire the contour of all or part of the target area (second area) 702. Further, the area determination unit 108 outputs the acquired contour information of the target area 702 to the extraction unit 110 and the display control unit 112, which will be described later.
- More specifically, the area determination unit 108 determines the fitting range, that is, which part of the boundary line of the fill range 700 designated by the fill input operation is to be fitted, based on the mode (range setting mode: addition mode or correction mode) preset by the user and on the determination result of the determination unit 106 described above.
- The area determination unit 108 then executes fitting in the determined fitting range.
- The fitting performed here can be, for example, a fitting based on the boundary between the foreground and the background, a fitting based on the contour of the cell membrane, or a fitting based on the contour of the cell nucleus (details will be described later). Which fitting method to use may be determined in advance by the user, or may be determined according to the characteristics of the pathological image (image data) 610.
- The determination of the fitting range in the present embodiment is executed as follows. For example, in the addition mode (first range setting mode), when the fill range (first area) 700 specified by the fill input operation does not overlap the area related to other existing annotation data (other learning image data) 710, the area determination unit 108 determines the fitting range so that fitting is executed on the entire boundary line of the fill range 700.
- Further, in the addition mode, when the fill range (first area) 700 specified by the fill input operation overlaps the area related to other existing annotation data 710, the area determination unit 108 determines the fitting range so that fitting is executed on the boundary line of the portion of the fill range 700 that does not overlap the other existing annotation data 710.
- In this case, the area related to the newly fitted contour and the other existing annotation data 710 are integrated (combined) to form the target area (second area) 702 corresponding to the image that can be included in the new annotation data 710.
- On the other hand, in the correction mode (second range setting mode), when the fill range (first area) 700 specified by the fill input operation overlaps the area related to other existing annotation data 710, the area determination unit 108 determines the fitting range so that fitting is executed on the boundary line of the portion of the fill range 700 that overlaps the other existing annotation data 710.
- Note that the area determination unit 108 may also perform fitting on the boundary line of a range (third area) specified by the user's line drawing input operation on the pathological image 610, based on the pathological image 610 and the information of that range, and determine the target region (second region) 702 corresponding to the image that can be included in the new annotation data 710.
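Continuing the boolean-mask view used above, the way a fitted region is combined with existing annotations in each mode can be sketched as below. The `fitted` mask stands in for the output of whichever fitting method (foreground/background, cell membrane, cell nucleus) is selected; the data layout and submode names as strings are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def combine(fitted: np.ndarray, existing: np.ndarray, mode: str) -> np.ndarray:
    """Combine a fitted region mask with existing annotation masks.

    fitted   -- boolean mask produced by fitting the relevant boundary
    existing -- union of overlapping existing annotation masks (all False if none)
    mode     -- submode chosen from the range setting mode and the overlap tests
    """
    if mode == "new":                      # addition mode, no overlap
        return fitted
    if mode in ("integrate", "extend"):    # addition mode, overlap
        return fitted | existing           # merge the fitted part into annotations
    if mode in ("separate", "erase"):      # correction mode, overlap
        return existing & ~fitted          # carve the fitted part out
    raise ValueError(f"unknown submode: {mode}")

# Example: erase a filled hole from an existing annotation.
existing = np.zeros((6, 6), bool); existing[1:5, 1:5] = True
fitted = np.zeros((6, 6), bool); fitted[2:4, 2:4] = True
print(combine(fitted, existing, "erase").astype(int))
```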
- The extraction unit 110 can extract, from the pathological image (image data) 610, the image of the target area 702 used for machine learning, based on the target region (second region) 702 that the region determination unit 108 has determined as corresponding to the image that can be included in the new annotation data 710.
- The extraction unit 110 then outputs the extracted image, together with the annotation attached by the user, to the learning device 40 as new annotation data 710.
- The display control unit 112 can control the display of the display device 20 based on various information. For example, the display control unit 112 can set the magnification of the pathological image 610 displayed on the display device 20 based on an input operation by the user. Further, the display control unit 112 may automatically set the magnification of the displayed pathological image 610 based on an analysis result for the pathological image 610 (for example, a frequency analysis result for the pathological image 610, or an extraction result of recognizing and extracting a specific tissue from the pathological image 610) or on the speed at which the user draws a locus on the pathological image 610.
- In the present embodiment, automatically setting the magnification in this way improves the convenience of the input operation and makes it possible to generate a large amount of highly accurate annotation data 710 efficiently.
- the configuration of the processing unit 100 according to the present embodiment is not limited to such an example. That is, the configuration of the processing unit 100 according to the present embodiment can be flexibly modified according to the specifications and operation.
- the area determination unit 108 executes the fitting process in the determined fitting range.
- the fitting process executed here can be, for example, the “foreground background fitting”, the “cell membrane fitting”, the “cell nucleus fitting” and the like described above.
- "Foreground background fitting" is a fitting process for the boundary between the foreground and the background.
- The "foreground background fitting" can be applied when the target region 702 is, for example, a region containing a sample, a tissue region, an artifact region, an epithelial tissue, a squamous epithelium, a glandular region, a cell atypical region, or a tissue atypical region.
- In this case, the fitting process can be performed using a segmentation algorithm based on graph cuts. Machine learning may also be used for the segmentation algorithm.
- the "foreground background fitting" process for example, a set of pixels having the same or similar color values as the color values of the pixels existing in the range specified by the user on the pathological image 610 on the pathological image 610. Assuming that it is the target area 702 to be extracted (segmentation), the outline thereof is acquired. At this time, a part of the area to be the foreground object and a part of the area to be the background object are specified in advance on the image. Then, assuming that the pixels in the area adjacent to the foreground object and the background object have different color values, a cost function that becomes the minimum cost when the foreground label or the background label is appropriately attached to all the pixels is used. By giving and calculating the combination of labels that minimizes the cost (graph cut) (solving the energy minimization problem), segmentation can be performed.
- cell membrane fitting is a fitting process for the cell membrane.
- the characteristics of the cell membrane are recognized from the pathological image, and the fitting process is performed along the contour of the cell membrane based on the recognized characteristics of the cell membrane and the range surrounded by the curve 704 drawn by the user.
- For the fitting, for example, the edge stained brown by the membrane staining of immunostaining can be used.
- the staining conditions are not limited to the above examples, and may be any of general staining, immunostaining, fluorescent immunostaining, and the like.
- cell nucleus fitting is a fitting to the cell nucleus.
- the characteristics of the cell nucleus are recognized from the pathological image, and fitting is performed along the contour of the cell nucleus based on the recognized characteristics of the cell nucleus and the range surrounded by the curve 704 drawn by the user.
- In this case, the nucleus is stained blue, so that staining information from hematoxylin-eosin (HE) staining can be used for the fitting.
- the staining conditions are not limited to the above examples, and may be any of general staining, immunostaining, fluorescent immunostaining, and the like.
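The stain cues just described (brown membrane staining from immunostaining, blue hematoxylin-stained nuclei) can be isolated by color deconvolution; scikit-image ships an HED (hematoxylin-eosin-DAB) separation for exactly this purpose. The thresholds below are illustrative assumptions, not values from the patent.

```python
import numpy as np
from skimage.color import rgb2hed

def stain_masks(rgb: np.ndarray, h_thresh: float = 0.05, d_thresh: float = 0.05):
    """Split an RGB pathology image into nucleus and membrane cue masks."""
    hed = rgb2hed(rgb)                  # channels: hematoxylin, eosin, DAB
    nuclei = hed[..., 0] > h_thresh     # blue hematoxylin: cell nucleus fitting cue
    membrane = hed[..., 2] > d_thresh   # brown DAB: cell membrane fitting cue
    return nuclei, membrane
```

Either mask can then serve as the edge evidence along which the cell membrane or cell nucleus fitting traces the contour.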
- Specifically, the area determination unit 108 acquires the boundary line (contour) of the fill range 700 based on the information of the fill range (first area) 700 specified by the user's fill input operation on the pathological image 610. Then, based on the pathological image 610 and the boundary line of the fill range 700, the region determination unit 108 can perform fitting by extracting the contour of the target region (second region) 702 (a region with a sample, a tissue region, an artifact region, an epithelial tissue, a squamous epithelium, a glandular region, a cell atypical region, a tissue atypical region, etc.) using a segmentation algorithm based on graph cuts. Alternatively, machine learning may be used for the segmentation algorithm.
- At that time, the contour of the target region 702 may be determined so that the certainty (reliability) of the contour becomes higher.
- In this way, the contour of the target area 702 can be accurately acquired even from a rough fill input. Therefore, according to the present embodiment, a large amount of highly accurate annotation data 710 can be generated efficiently.
- Further, in the present embodiment, the contour search during the fitting process is performed within a range (having a predetermined width) separated by a predetermined distance from the boundary line of the fill range (first area) 700 specified by the fill input operation.
- In the following description, the range in which the contour is searched during the fitting process is referred to as the "search range"; for example, the search range can extend up to a predetermined distance along the normal direction from the boundary line of the fill range 700 specified by the fill input operation. More specifically, in the present embodiment, the search range may be a range located both outside and inside the boundary line of the fill range 700 and separated from the boundary line by a predetermined distance along the normal direction. Alternatively, the search range may be a range located only outside, or only inside, the boundary line of the fill range 700 and separated from the boundary line by a predetermined distance along the normal direction; it is not particularly limited (details will be described later).
- the predetermined distance (predetermined width) in the search range may be preset by the user.
- Alternatively, the predetermined distance (predetermined width) of the search range may be automatically set based on an analysis result for the pathological image 610 (for example, a frequency analysis result for the pathological image 610, or an extraction result of recognizing and extracting a specific tissue from the pathological image 610) or on the speed at which the user draws the locus on the pathological image 610.
- the information processing device 10 may display the search range to the user via the display device 20.
- the user may repeatedly make corrections.
- FIG. 8 is a flowchart showing the information processing method according to the present embodiment, and FIGS. 9 and 10 are explanatory views of the input screen according to the present embodiment.
- the method of creating annotation data 710 in the information processing method according to the present embodiment includes steps S210 to S260. The details of each of these steps will be described below.
- First, the information processing device 10 acquires the data of the pathological image 610 and presents it to the user via the display device 20. The information processing apparatus 10 then acquires from the user the information of the selected mode (range setting mode: addition mode or correction mode), and sets either the addition mode or the correction mode (step S210). For example, as shown in FIGS. 9 and 10, the user can select a mode by pressing one of the two icons 600 displayed at the upper left of the display unit 200 of the display device 20.
- Next, the user performs a fill input operation on the target area 702 of the pathological image 610, and the information processing apparatus 10 acquires the information of the fill range (first area) 700 designated by the user's fill input operation (step S220). For example, as shown in FIGS. 9 and 10, the user can perform the fill input operation by moving the icon 602 over the pathological image 610 displayed on the display unit 200 of the display device 20.
- Next, the information processing apparatus 10 determines the submode for determining the fitting range, based on the mode (range setting mode: addition mode or correction mode) preset by the user and on the determination result of the determination unit 106 described above (step S230).
- For example, in the correction mode (second range setting mode), when the determination unit 106 determines that the fill range 700 overlaps other existing annotation data 710 so as to straddle it, the submode is determined to be the separation mode (see FIG. 15). Further, for example, in the correction mode, when the determination unit 106 determines that the fill range 700 does not overlap the other existing annotation data 710 so as to straddle it, the submode is determined to be the erase mode (see FIG. 15). The details of step S230 will be described later.
- Next, the information processing apparatus 10 determines the fitting range based on the submode determined in step S230 described above, and performs the fitting process using the preset fitting method (step S240). Specifically, the information processing apparatus 10 performs an energy (cost) calculation using a graph cut based on the pathological image 610 and the boundary line of the fill range 700 specified by the fill input operation, and corrects (fits) the boundary line based on the calculation result, thereby acquiring a new contour. The information processing apparatus 10 then acquires the target region (second region) 702 corresponding to the image that can be included in the new annotation data 710, based on the newly acquired contour.
- For example, in the new mode, the fitting range is determined so that fitting is executed on the entire boundary line of the fill range 700 specified by the fill input operation. Further, for example, in the integrated mode and the extended mode, the fitting range is determined so that fitting is executed on the boundary line of the portion of the fill range 700 that does not overlap the other existing annotation data 710; in this case, the area related to the newly fitted contour and the other existing annotation data 710 are integrated into the target area (second area) 702 corresponding to the image that can be included in the new annotation data 710. Further, for example, in the separation mode and the erase mode, the fitting range is determined so that fitting is executed on the boundary line of the portion of the fill range 700 that overlaps the other existing annotation data 710; in this case, the region related to the newly fitted contour is removed from the other existing annotation data 710 to obtain the target region (second region) 702 corresponding to the image that can be included in the new annotation data 710.
- Next, the information processing device 10 displays the target area (second area) 702 obtained by the fitting in step S240 to the user via the display device 20 for visual confirmation (step S250). In the present embodiment, the process may return to step S220 depending on the user's confirmation result. The information processing apparatus 10 then generates new annotation data 710 by associating the image of the target area 702 with the annotation attached to the target area 702 by the user.
- Then, the information processing apparatus 10 determines whether or not the generation of annotation data 710 can be finished (step S260). The information processing apparatus 10 ends the process when it can be finished (step S260: Yes), and returns to step S210 described above when it cannot be finished (step S260: No).
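Steps S210 to S260 amount to a simple interactive loop. The sketch below restates them in Python; `ui`, `fitter`, and `store` and all of their methods are hypothetical placeholders for the user interface, the fitting routines sketched earlier, and the annotation storage, not APIs defined by the patent.

```python
def annotation_loop(ui, fitter, store):
    """Interactive annotation flow corresponding to steps S210-S260 (a sketch)."""
    while True:
        mode = ui.select_mode()                    # S210: addition or correction mode
        while True:
            fill = ui.get_fill_input()             # S220: user fills the target area
            submode = fitter.decide_submode(       # S230: new/integrate/extend or
                mode, fill, store.annotations)     #       separate/erase
            region = fitter.fit(fill, submode, store.annotations)  # S240: fitting
            if ui.confirm(region):                 # S250: visual confirmation
                store.add(region, ui.get_label())  # label + image -> annotation data
                break
        if ui.finished():                          # S260: end, or annotate more
            break
```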
- Next, step S230 will be described for each of the addition mode and the correction mode.
- First, step S230 in the addition mode will be described with reference to FIGS. 11 to 14.
- FIG. 11 is a sub-flowchart of step S230 shown in FIG. 8, and FIGS. 12 to 14 are explanatory views illustrating each submode according to the present embodiment.
- Step S230 in the addition mode includes substeps S231 to S235. The details of each of these substeps will be described below.
- First, the information processing apparatus 10 determines whether or not the fill range (first area) 700 specified by the user's fill input operation on the pathological image 610 overlaps existing annotation data 710 (substep S231). When the fill range 700 and the other existing annotation data 710 overlap (substep S231: Yes), the information processing apparatus 10 proceeds to substep S233. On the other hand, when they do not overlap (substep S231: No), the information processing apparatus 10 proceeds to substep S232.
- The information processing apparatus 10 determines the fitting range so that fitting is executed on the entire boundary line of the fill range 700 (new mode) (substep S232). Next, for example, as shown in FIG. 12, the information processing apparatus 10 performs fitting on the entire boundary line of the fill range 700 and acquires a new contour. Then, based on the newly acquired contour, the information processing apparatus 10 acquires the target region (second region) 702 corresponding to the image that can be included in the new annotation data 710.
- Next, the information processing apparatus 10 determines whether or not the fill range 700 overlaps a plurality of pieces of other existing annotation data 710 (substep S233). When it does (substep S233: Yes), the information processing apparatus 10 proceeds to substep S234; when it does not (substep S233: No), the information processing apparatus 10 proceeds to substep S235.
- The information processing apparatus 10 determines the fitting range so that fitting is executed on the boundary line of the portion of the fill range 700 that does not overlap any of the other existing annotation data 710 (integrated mode) (substep S234). Then, the information processing apparatus 10 performs fitting in the above fitting range and acquires a new contour. Then, based on the newly acquired contour, for example, as shown in FIG. 14, the information processing apparatus 10 acquires the target area (second area) 702 by integrating the region related to the newly fitted contour with the other existing annotation data 710a and 710b.
- The information processing apparatus 10 determines the fitting range so that fitting is executed on the boundary line of the portion of the fill range 700 that does not overlap the other existing annotation data 710 (extended mode) (substep S235). Next, the information processing apparatus 10 performs fitting in the fitting range and acquires a new contour. Then, based on the newly acquired contour, for example, as shown in FIG. 13, the information processing apparatus 10 extends the other existing annotation data 710 by the region related to the newly fitted contour, and acquires the target area (second area) 702.
- Step S230 in the correction mode includes substeps S236 to S238. The details of each of these substeps will be described below.
- First, the information processing apparatus 10 determines whether or not the fill range (first area) 700 overlaps the other existing annotation data 710 so as to straddle it (that is, whether or not the fill range 700 extends across the other existing annotation data 710 from one end to the other) (substep S236).
- When the fill range (first region) 700 overlaps the other existing annotation data 710 so as to straddle it (substep S236: Yes), the information processing apparatus 10 proceeds to substep S237. When it does not (substep S236: No), the information processing apparatus 10 proceeds to substep S238.
- The information processing apparatus 10 determines the fitting range so that fitting is executed on the boundary line of the portion of the fill range (first area) 700 that overlaps the other existing annotation data 710 (separation mode) (substep S237). Next, the information processing apparatus 10 performs fitting in the fitting range and acquires a new contour. Then, based on the newly acquired contour, the information processing apparatus 10 removes the region related to the newly fitted contour from the other existing annotation data 710, thereby acquiring the target regions (second regions) 702a and 702b corresponding to the images that can be included in the new annotation data 710.
- The information processing apparatus 10 determines the fitting range so that fitting is executed on the boundary line of the portion of the fill range (first area) 700 that overlaps the other existing annotation data 710 (erase mode) (substep S238). Next, the information processing apparatus 10 performs fitting in the fitting range and acquires a new contour. Then, based on the newly acquired contour, the information processing apparatus 10 removes (erases) the region related to the newly fitted contour from the other existing annotation data 710, thereby acquiring the target area (second area) 702 corresponding to the image that can be included in the new annotation data 710.
- FIGS. 18 to 20 are explanatory views illustrating a search range according to the present embodiment.
- As shown in FIG. 18, the search range may be a range 810 located both outside and inside the boundary line 800 of the fill range 700 (not shown in FIG. 18) and separated from the boundary line 800 by a predetermined distance along the normal direction.
- Alternatively, as shown in FIG. 19, the search range may be a range 810 located outside the boundary line 800 of the fill range 700 (not shown in FIG. 19) and separated from the boundary line 800 by a predetermined distance along the normal direction.
- Alternatively, as shown in FIG. 20, the search range may be a range 810 located inside the boundary line 800 of the fill range 700 (not shown in FIG. 20) and separated from the boundary line 800 by a predetermined distance along the normal direction.
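All three search-range variants can be computed from the fill mask with a distance transform: take the pixels whose distance to the boundary line is at most the predetermined width, then keep the outside part, the inside part, or both. A minimal sketch, assuming boolean masks and a Euclidean distance (the patent does not fix the metric):

```python
import numpy as np
from scipy import ndimage

def search_range(fill: np.ndarray, width: int, side: str = "both") -> np.ndarray:
    """Band of pixels within `width` of the fill boundary (FIGS. 18-20 variants)."""
    boundary = fill & ~ndimage.binary_erosion(fill)      # inner boundary line 800
    dist = ndimage.distance_transform_edt(~boundary)     # distance to the boundary
    band = dist <= width                                 # both sides of the line
    if side == "outside":                                # FIG. 19 variant
        return band & ~fill
    if side == "inside":                                 # FIG. 20 variant
        return band & fill
    return band                                          # FIG. 18 variant
```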
- As described above, in the present embodiment, the range of the target area 702 can be specified by the user performing a fill input operation on the pathological image 610. Therefore, according to the present embodiment, even when the target area 702 has a complicated, intricate shape such as that of a cancer cell, as shown in FIG. 9, using the fill input operation makes it possible to generate highly accurate annotation data with less time and effort for the user than the work of drawing a curve 704. As a result, according to the present embodiment, a large amount of highly accurate annotation data 710 can be generated efficiently.
- In the embodiment described above, a fill input operation can be performed on the pathological image 610, and the target area 702 is determined from the fill range 700.
- In this modification, a line drawing input operation for drawing a curve 704 can also be performed on the pathological image 610, and the target area 702 can be determined from the range surrounded by the curve 704. That is, in this modification, the fill input operation and the line drawing input operation can be switched.
- For example, there are cases where a lesion site spreads as a whole, but normal sites (the region indicated by reference numeral 700 in the figure) exist here and there within the lesion site.
- In such a case, the lesion site that spreads as a whole is designated by drawing a curve 704 with the line drawing input operation, and the normal sites are filled by the fill input operation in the correction mode.
- In this way, the target area 702 is obtained by excluding the above-mentioned normal sites from the range surrounded by the curve 704. If the fill input operation and the line drawing input operation can be switched and used appropriately in this way, annotation data 710 with the lesion site as the target area 702, as shown in FIG. 22, can be generated efficiently while further reducing the time and effort of the user.
- In this modification, the user may switch between the fill input operation and the line drawing input operation by a selection operation on an icon or the like, or the operation may switch to the line drawing input operation when the user sets the width of the locus to less than the threshold value.
- Further, the fill input operation and the line drawing input operation may be switched based on the positional relationship of the input start position with respect to existing annotation data (other learning image data) 710. Specifically, as shown on the left side of FIG. 23, when the input is started from the vicinity of the contour of the existing annotation data 710, the line drawing input operation is set; on the other hand, as shown on the right side of FIG. 23, when the input is started from the inside of the existing annotation data 710, the fill input operation is set.
- In this modification, the width of the locus may also be automatically adjusted based on the positional relationship of the input start position with respect to the existing annotation data (other learning image data) 710.
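With the existing annotation stored as a contour polygon, this start-position rule maps directly onto OpenCV's signed point-to-contour distance. A sketch under that assumption; the "vicinity" tolerance and the fallback for points far outside are illustrative choices, not values from the patent.

```python
import cv2
import numpy as np

def choose_input_mode(start_xy, contour, near_tol: float = 15.0) -> str:
    """Pick the input operation from where the user starts drawing."""
    # Signed distance: positive inside the contour, negative outside, 0 on it.
    d = cv2.pointPolygonTest(contour, start_xy, measureDist=True)
    if abs(d) <= near_tol:
        return "line_drawing"   # started near the contour: trace its edge
    if d > 0:
        return "fill"           # started inside the annotation: fill input
    return "fill"               # started elsewhere: default to fill (assumed)

contour = np.array([[10, 10], [200, 10], [200, 200], [10, 200]],
                   np.int32).reshape(-1, 1, 2)
print(choose_input_mode((100.0, 100.0), contour))  # fill
print(choose_input_mode((12.0, 100.0), contour))   # line_drawing
```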
- the object to be imaged is not limited to living tissue, and may be any subject having a fine structure or the like; it is not particularly limited.
- the technology according to the present disclosure can be applied to various products. For example, the technology according to the present disclosure may be applied to a pathological diagnosis system in which a doctor or the like observes cells or tissues collected from a patient to diagnose a lesion, or to a support system for such diagnosis (hereinafter referred to as a diagnosis support system).
- This diagnostic support system may be a WSI (Whole Slide Imaging) system that diagnoses or supports a lesion based on an image acquired by using digital pathology technology.
- FIG. 24 is a diagram showing an example of a schematic configuration of a diagnostic support system 5500 to which the technique according to the present disclosure is applied.
- the diagnostic support system 5500 includes one or more pathological systems 5510, and may further include a medical information system 5530 and a derivation device 5540.
- Each of the one or more pathological systems 5510 is a system mainly used by pathologists, and is introduced into, for example, a laboratory or a hospital.
- Each pathological system 5510 may be introduced in a different hospital, and each is connected to the medical information system 5530 and the derivation device 5540 via various networks such as a WAN (Wide Area Network) (including the Internet), a LAN (Local Area Network), a public line network, or a mobile communication network.
- Each pathological system 5510 includes a microscope (specifically, a microscope used in combination with digital imaging technology) 5511, a server 5512, a display control device 5513, and a display device 5514.
- the microscope 5511 has the function of an optical microscope, photographs an observation object mounted on a glass slide, and acquires a pathological image, which is a digital image.
- the observation object is, for example, tissue or cells collected from a patient, and may be a piece of organ tissue, saliva, blood, or the like.
- the microscope 5511 functions as the scanner 30 shown in FIG.
- the server 5512 stores the pathological images acquired by the microscope 5511 in a storage unit (not shown). When the server 5512 receives a viewing request from the display control device 5513, it searches the storage unit (not shown) for the pathological image and sends the retrieved pathological image to the display control device 5513.
- the server 5512 functions as the information processing apparatus 10 according to the embodiment of the present disclosure.
- the display control device 5513 sends a viewing request for a pathological image, received from the user, to the server 5512. The display control device 5513 then displays the pathological image received from the server 5512 on the display device 5514, which uses liquid crystal, EL (Electro-Luminescence), CRT (Cathode Ray Tube), or the like.
- the display device 5514 may be compatible with 4K or 8K, and is not limited to a single device; a plurality of display devices may be used.
- when the object to be observed is a solid substance, such as a piece of organ tissue, the observation object may be, for example, a stained thin section.
- the thin section may be prepared, for example, by slicing a block piece cut out from a sample such as an organ. Further, when slicing, the block pieces may be fixed with paraffin or the like.
- various types of staining may be applied, such as general staining showing the morphology of the tissue, e.g., HE (Hematoxylin-Eosin) staining; immunostaining showing the immune state of the tissue, e.g., IHC (Immunohistochemistry) staining; and fluorescent immunostaining. At that time, one thin section may be stained with a plurality of different reagents, or two or more thin sections cut consecutively from the same block piece (also referred to as adjacent thin sections) may be stained with reagents different from each other.
- the microscope 5511 may include a low-resolution photographing unit for photographing at a low resolution and a high-resolution photographing unit for photographing at a high resolution.
- the low-resolution photographing unit and the high-resolution photographing unit may have different optical systems or may be the same optical system. When the optical system is the same, the resolution of the microscope 5511 may be changed according to the object to be photographed.
- the glass slide containing the observation object is placed on a stage located within the angle of view of the microscope 5511.
- the microscope 5511 acquires an entire image within the angle of view using a low-resolution photographing unit, and identifies a region of an observation object from the acquired overall image.
- the microscope 5511 divides the area where the observation object exists into a plurality of divided areas of a predetermined size, and sequentially photographs each divided area with the high-resolution photographing unit to acquire a high-resolution image of each divided area.
- in photographing each divided area, the stage may be moved, the photographing optical system may be moved, or both may be moved.
- each divided region may overlap with the adjacent divided region in order to prevent the occurrence of a shooting omission region due to an unintended slip of the glass slide.
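As a rough sketch of such an overlapping scan (the tile size and overlap are illustrative assumptions, not values from the disclosure), the stage positions can be enumerated as follows:

```python
def divided_region_origins(width, height, tile, overlap):
    """Top-left coordinates of divided regions of size `tile` x `tile` that
    overlap their neighbours by `overlap` pixels; assumes tile > overlap."""
    step = tile - overlap

    def origins(extent):
        pts = list(range(0, max(extent - tile, 0) + 1, step))
        if extent > tile and pts[-1] != extent - tile:
            pts.append(extent - tile)      # final region flush with the edge
        return pts

    return [(x, y) for y in origins(height) for x in origins(width)]
```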
- the whole image may include identification information for associating the whole image with the patient. This identification information may be, for example, a character string, a QR code (registered trademark), or the like.
- the high resolution image acquired by the microscope 5511 is input to the server 5512.
- the server 5512 divides each high-resolution image into partial images of a smaller size (hereinafter referred to as tile images). For example, the server 5512 divides one high-resolution image into a total of 100 tile images, 10 × 10 vertically and horizontally. At that time, if adjacent divided areas overlap, the server 5512 may apply stitching processing to the mutually adjacent high-resolution images using a technique such as template matching. In that case, the server 5512 may generate tile images by dividing the entire high-resolution image stitched together by the stitching processing. Tile images may, however, also be generated from the high-resolution images before the stitching processing.
- the server 5512 can generate a tile image of a smaller size by further dividing the tile image. The generation of such a tile image may be repeated until a tile image having a size set as a minimum unit is generated.
- the server 5512 executes the tile composition process of generating one tile image by synthesizing a predetermined number of adjacent tile images for all the tile images. This tile composition process can be repeated until one tile image is finally generated.
- through the repetition of these processes, a tile image group having a pyramid structure, in which each layer is composed of one or more tile images, is finally generated.
- a tile image of one layer and a tile image of a different layer have the same number of pixels, but their resolutions differ. For example, when a total of four 2 × 2 tile images are combined to generate one tile image of the layer above, the resolution of the upper-layer tile image is 1/2 the resolution of the lower-layer tile images used for the composition.
- by constructing a tile image group having such a pyramid structure, the level of detail of the observation object displayed on the display device can be switched depending on the layer to which the displayed tile image belongs. For example, when tile images of the lowermost layer are used, a narrow area of the observation object can be displayed in detail, and as tile images of upper layers are used, a wider area of the observation object is displayed more coarsely.
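A compact sketch of such pyramid generation with Pillow; halving the level image is equivalent to composing each 2 × 2 group of tiles into one upper-layer tile. The tile size, resampling filter, and tiling scheme are assumptions for illustration.

```python
from PIL import Image

TILE = 256  # assumed minimum tile size in pixels

def tile_pyramid(level_image: Image.Image):
    """Yield (level, tx, ty, tile) for every layer, from full resolution
    (level 0) up to the layer that fits in a single tile."""
    level = 0
    while True:
        cols = (level_image.width + TILE - 1) // TILE
        rows = (level_image.height + TILE - 1) // TILE
        for ty in range(rows):
            for tx in range(cols):
                box = (tx * TILE, ty * TILE,
                       min((tx + 1) * TILE, level_image.width),
                       min((ty + 1) * TILE, level_image.height))
                yield level, tx, ty, level_image.crop(box)
        if cols == 1 and rows == 1:
            break
        # Halving the level image merges each 2x2 group of tiles into one tile
        level_image = level_image.resize((max(1, level_image.width // 2),
                                          max(1, level_image.height // 2)),
                                         Image.BILINEAR)
        level += 1
```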
- the generated tile image group of the pyramid structure is stored in a storage unit (not shown) together with identification information (referred to as tile identification information) that can uniquely identify each tile image, for example.
- when the server 5512 receives a request to acquire a tile image including tile identification information from another device (for example, the display control device 5513 or the derivation device 5540), the server 5512 transmits the tile image corresponding to that tile identification information to the other device.
- note that a tile image, which is a pathological image, acquired under specific imaging conditions may be displayed side by side with another pathological image of the same region corresponding to imaging conditions different from the specific imaging conditions.
- the specific imaging conditions may be specified by the viewer. When a plurality of imaging conditions are specified by the viewer, pathological images of the same region corresponding to each imaging condition may be displayed side by side.
- the server 5512 may store the tile image group having a pyramid structure in a storage device other than the server 5512, for example, a cloud server. Further, a part or all of the tile image generation process as described above may be executed by a cloud server or the like.
- the display control device 5513 extracts a desired tile image from the tile image group having a pyramid structure in response to an input operation from the user, and outputs this to the display device 5514.
- through such operations, the user can obtain the feeling of observing the observation object while changing the observation magnification. That is, the display control device 5513 functions as a virtual microscope.
- the virtual observation magnification here actually corresponds to the resolution.
- any method may be used for shooting a high-resolution image.
- for example, the divided areas may be photographed while repeatedly stopping and moving the stage to acquire high-resolution images, or the divided areas may be photographed while moving the stage at a predetermined speed to acquire strip-shaped high-resolution images.
- the process of generating tile images from high-resolution images is not an indispensable configuration; an image whose resolution changes stepwise may instead be generated by gradually changing the resolution of the entire high-resolution image stitched together by the stitching processing. Even in this case, the user can be presented stepwise with images ranging from a low-resolution image of a wide area to a high-resolution image of a narrow area.
- the medical information system 5530 is a so-called electronic medical record system, and stores information related to diagnosis such as patient identification information, patient disease information, test information and image information used for diagnosis, diagnosis results, and prescription drugs.
- in the pathological system 5510, a pathological image obtained by photographing an observation object of a patient can be stored once via the server 5512 and then displayed on the display device 5514 by the display control device 5513.
- the pathologist using the pathological system 5510 makes a pathological diagnosis based on the pathological image displayed on the display device 5514.
- the results of the pathological diagnosis made by the pathologist are stored in the medical information system 5530.
- the derivation device 5540 can perform analysis on the pathological image.
- a learning model created by machine learning can be used for this analysis.
- the derivation device 5540 may derive, as analysis results, classification results of specific regions, tissue identification results, and the like. Further, the derivation device 5540 may derive identification results such as cell information, counts, positions, and luminance information, as well as scoring information for them. The information derived by the derivation device 5540 may be displayed on the display device 5514 of the pathological system 5510 as diagnostic support information.
- the derivation device 5540 may be a server system composed of one or more servers (including cloud servers) or the like. The derivation device 5540 may also be incorporated in, for example, the display control device 5513 or the server 5512 within the pathological system 5510. That is, various analyses of pathological images may be performed within the pathological system 5510.
- among the configurations described above, the technology according to the present disclosure can be suitably applied to the server 5512.
- specifically, the technology according to the present disclosure can be suitably applied to image processing in the server 5512.
- by applying the technology according to the present disclosure, a clearer pathological image can be obtained, so that lesions can be diagnosed more accurately.
- the configuration described above can be applied not only to the diagnostic support system but also to general biological microscopes such as confocal microscopes, fluorescence microscopes, and video microscopes using digital imaging technology.
- the observation target may be a biological sample such as cultured cells, a fertilized egg, or sperm, a biomaterial such as a cell sheet or a three-dimensional cell tissue, or a living organism such as a zebrafish or a mouse.
- the observation object is not limited to the glass slide, and can be observed in a state of being stored in a well plate, a petri dish, or the like.
- a moving image may be generated from a still image of an observation object acquired by using a microscope using digital imaging technology.
- a moving image may be generated from still images taken continuously for a predetermined period, or an image sequence may be generated from still images taken at predetermined intervals.
- it is possible to analyze, using machine learning, the dynamic characteristics of observation objects, such as the beating and elongation of cancer cells, nerve cells, and myocardial tissue, movements such as the migration of sperm, and the division processes of cultured cells and fertilized eggs.
- in the above, the information processing system 1 having the information processing device 10, the scanner 30, the learning device 40, and the network 50 has been mainly described. However, an information processing system having only some of these may also be provided. For example, an information processing system having a part or all of the information processing device 10, the scanner 30, and the learning device 40 may be provided. At this time, not every element of the information processing system needs to be a complete device (a combination of hardware and software).
- for example, an information processing system having a first device (a combination of hardware and software) and the software of a second device can also be provided.
- an information processing system having a scanner 30 (a combination of hardware and software) and software of an information processing apparatus 10 can also be provided.
- an information processing system including a plurality of configurations arbitrarily selected from the information processing device 10, the scanner 30, and the learning device 40 can also be provided.
- FIG. 25 is a hardware configuration diagram showing an example of a computer 1000 that realizes the functions of the information processing apparatus 10.
- the computer 1000 includes a CPU 1100, a RAM 1200, a ROM (Read Only Memory) 1300, an HDD (Hard Disk Drive) 1400, a communication interface 1500, and an input / output interface 1600.
- Each part of the computer 1000 is connected by a bus 1050.
- the CPU 1100 operates based on the program stored in the ROM 1300 or the HDD 1400, and controls each part. For example, the CPU 1100 expands a program stored in the ROM 1300 or the HDD 1400 into the RAM 1200, and executes processing corresponding to various programs.
- the ROM 1300 stores a boot program such as a BIOS (Basic Input Output System) executed by the CPU 1100 when the computer 1000 starts up, programs that depend on the hardware of the computer 1000, and the like.
- the HDD 1400 is a computer-readable recording medium that non-temporarily records a program executed by the CPU 1100 and data used by such a program.
- the HDD 1400 is a recording medium for recording an image processing program according to the present disclosure, which is an example of program data 1450.
- the communication interface 1500 is an interface for the computer 1000 to connect to an external network 1550 (for example, the Internet).
- the CPU 1100 receives data from another device or transmits data generated by the CPU 1100 to another device via the communication interface 1500.
- the input / output interface 1600 is an interface for connecting the input / output device 1650 and the computer 1000.
- the CPU 1100 receives data from an input device such as a keyboard or mouse via the input / output interface 1600. Further, the CPU 1100 transmits data to an output device such as a display, a speaker, or a printer via the input / output interface 1600. Further, the input / output interface 1600 may function as a media interface for reading a program or the like recorded on a predetermined computer-readable recording medium (media).
- the media include, for example, optical recording media such as a DVD (Digital Versatile Disc) or PD (Phase change rewritable Disk), magneto-optical recording media such as an MO (Magneto-Optical disk), tape media, magnetic recording media, and semiconductor memories.
- the CPU 1100 of the computer 1000 realizes the functions of the processing unit 100 and the like by executing the information processing program loaded on the RAM 1200.
- the information processing program according to the present disclosure and the data in the storage unit 130 may be stored in the HDD 1400.
- the CPU 1100 reads the program data 1450 from the HDD 1400 and executes it, but as another example, an information processing program may be acquired from another device via the external network 1550.
- the information processing device 10 according to the present embodiment may be applied to a system composed of a plurality of devices premised on connection to a network (or communication between devices), such as cloud computing. That is, the information processing device 10 according to the present embodiment described above can be realized as the information processing system 1 according to the present embodiment by, for example, a plurality of devices.
- the above is an example of the hardware configuration of the information processing apparatus 10.
- Each of the above-mentioned components may be configured by using general-purpose members, or may be configured by hardware specialized for the function of each component. Such a configuration may be appropriately modified depending on the technical level at the time of implementation.
- each step in the information processing method of the embodiment of the present disclosure described above does not necessarily have to be processed in the order described.
- each step may be processed in an appropriately reordered manner.
- each step may be partially processed in parallel or individually instead of being processed in chronological order.
- the processing of each step does not necessarily have to be processed according to the described method, and may be processed by another method, for example, by another functional unit.
- each component of each device shown in the figures is a functional concept and does not necessarily have to be physically configured as shown. That is, the specific form of distribution and integration of the devices is not limited to that shown in the figures, and all or part of them may be functionally or physically distributed or integrated in arbitrary units according to various loads and usage conditions.
- the following configurations also belong to the technical scope of the present disclosure.
- (1) An information processing apparatus comprising: an information acquisition unit that acquires information on a first region designated by a user's fill input operation on image data of living tissue; and a region determination unit that executes fitting on a boundary of the first region based on the image data and the information on the first region, and determines a second region to be subjected to predetermined processing.
- (2) The information processing apparatus according to (1) above, further comprising an extraction unit that extracts, based on the second region, learning image data, which is image data used for machine learning, from the image data.
- (3) The information processing apparatus according to (2) above, wherein the living tissue is a cell specimen.
- (4) The information processing apparatus according to (2) or (3) above, wherein the region determination unit executes fitting based on a boundary between foreground and background, fitting based on a cell membrane, or fitting based on a cell nucleus.
- (5) The information processing apparatus according to any one of (2) to (4) above, further comprising a determination unit that determines whether the first region overlaps a region related to other learning image data.
- (6) The information processing apparatus according to (5) above, wherein the region determination unit determines, based on the determination result of the determination unit, a fitting range in which fitting is to be executed among the boundaries of the first region, and executes the fitting in the fitting range.
- (7) The information processing apparatus according to (6) above, wherein the region determination unit determines the fitting range according to a range setting mode.
- (8) The information processing apparatus according to (7) above, wherein, in a first range setting mode, when the first region does not overlap the region related to the other learning image data, the region determination unit executes the fitting on the entire boundary of the first region.
- (9) The information processing apparatus according to (8) above, wherein, in the first range setting mode, when the first region overlaps the region related to the other learning image data, the region determination unit executes the fitting on the boundary of the portion of the first region that does not overlap the region related to the other learning image data.
- (10) The information processing apparatus according to (9) above, wherein the region determination unit determines the second region by combining the portion of the first region related to the boundary of the range where the fitting was newly executed with the region related to the other learning image data.
- (11) The information processing apparatus according to any one of (7) to (10) above, wherein, in a second range setting mode, when the first region overlaps the region related to the other learning image data, the region determination unit executes the fitting on the boundary of the portion of the first region that overlaps the region related to the other learning image data.
- (12) The information processing apparatus according to (11) above, wherein the region determination unit determines the second region by removing, from the region related to the other learning image data, the portion of the first region related to the boundary of the range where the fitting was newly executed.
- (13) The information processing apparatus according to any one of (2) to (12) above, wherein the region determination unit executes the fitting on the boundary of the first region based on image data of a region outside or inside the boundary of the first region.
- (14) The information processing apparatus according to (2) above, wherein the region determination unit executes the fitting on the boundary of the first region based on image data of regions outside and inside the contour of the first region.
- (15) The information processing apparatus according to any one of (2) to (4) above, wherein the fill input operation is an operation in which the user fills a part of the image data with a locus having a predetermined width displayed superimposed on the image data.
- (16) The information processing apparatus according to (15) above, further comprising a locus width setting unit that sets the predetermined width.
- (17) The information processing apparatus according to (16) above, wherein the locus width setting unit switches between a line drawing input operation in which the user draws a locus having the predetermined width superimposed on the image data, and the fill input operation.
- (18) The information processing apparatus according to (17) above, wherein, when the predetermined width is set to less than a threshold value, the operation switches to the line drawing input operation.
- (19) The information processing apparatus according to any one of (16) to (18) above, wherein the locus width setting unit sets the predetermined width based on the user's input.
- (20) The information processing apparatus according to any one of (16) to (18) above, wherein the locus width setting unit sets the predetermined width based on an analysis result of the image data or a display magnification of the image data.
- (21) The information processing apparatus according to any one of (16) to (18) above, wherein the locus width setting unit sets the predetermined width based on an input start position of an input operation on the image data.
- (22) The information processing apparatus according to (21) above, wherein the locus width setting unit sets the predetermined width based on a positional relationship of the input start position with respect to a region related to other learning image data.
- (23) The information processing apparatus according to (17) above, wherein the information acquisition unit acquires information on a third region designated by the user's line drawing input operation on the image data, and the region determination unit executes fitting on a boundary of the third region based on the image data and the information on the third region, and determines the second region.
- (24) An information processing method comprising: a processor acquiring information on a first region designated by a user's fill input operation on image data of living tissue; and executing fitting on a boundary of the first region based on the image data and the information on the first region, and determining a second region to be subjected to predetermined processing.
- (25) A program causing a computer to function as: an information acquisition unit that acquires information on a first region designated by a user's fill input operation on image data of living tissue; and a region determination unit that executes fitting on a boundary of the first region based on the image data and the information on the first region, and determines a second region to be subjected to predetermined processing.
- (26) An information processing system including an information processing apparatus and a program for causing the information processing apparatus to execute information processing, wherein, in accordance with the program, the information processing apparatus functions as: an information acquisition unit that acquires information on a first region designated by a user's fill input operation on image data of living tissue; and a region determination unit that executes fitting on a boundary of the first region based on the image data and the information on the first region, and determines a second region to be subjected to predetermined processing.
- 1 Information processing system
- 10 Information processing device
- 20 Display device
- 30 Scanner
- 40 Learning device
- 50 Network
- 100 Processing unit
- 102 Locus width setting unit
- 104 Information acquisition unit
- 106 Determination unit
- 108 Area determination unit
- 110 Extraction unit
- 112 Display control unit
- 120 Image data reception unit
- 130 Storage unit
- 140 Operation unit
- 150 Transmission unit
- 200 Display unit
- 600, 602 Icon
- 610 Pathological image
- 700 Fill range
- 702, 702a, 702b Target area
- 704 Curve
- 710, 710a, 710b Annotation data
- 800 Boundary line
- 810 Range
Abstract
Description
1. Overview of embodiments of the present disclosure
1.1 Background
1.2 Overview of embodiments of the present disclosure
2. Embodiment
2.1 Functional configuration example of the information processing apparatus
2.2 Functional configuration example of the processing unit
2.3 Fitting processing
2.4 Information processing method
3. Modifications
4. Summary
5. Application examples
6. Hardware configuration
7. Supplement
<1.1 Background>
Before describing the overview of the embodiments of the present disclosure, the background that led the present inventors to create the embodiments of the present disclosure will be described with reference to FIG. 1.
The information processing apparatus 10 is configured by, for example, a computer, and can generate annotation data used for the above machine learning and output it to the learning device 40 described later. The information processing apparatus 10 is used by a user (for example, a doctor or a laboratory technician). In the embodiments of the present disclosure, it is mainly assumed that various operations by the user are input to the information processing apparatus 10 via a mouse (not shown) or a pen tablet (not shown). However, in the present embodiment, various operations by the user may also be input to the information processing apparatus 10 via a terminal (not shown). In the present embodiment, it is also mainly assumed that various kinds of information presented to the user are output from the information processing apparatus 10 via the display device 20; however, such information may be output from the information processing apparatus 10 via a terminal (not shown). Details of the information processing apparatus 10 according to the embodiments of the present disclosure will be described later.
The display device 20 is, for example, a liquid crystal, EL (Electro Luminescence), or CRT (Cathode Ray Tube) display device, and can display pathological images under the control of the information processing apparatus 10 described above. A touch panel that accepts input from the user may be superimposed on the display device 20. In the present embodiment, the display device 20 may be compatible with 4K or 8K and may be composed of a plurality of display devices; it is not particularly limited. While viewing the pathological image displayed on the display device 20, the user can freely designate a target region of interest (for example, a lesion region) on the pathological image using the above-mentioned mouse (not shown), pen tablet (not shown), or the like, and attach an annotation (label) to the target region.
The scanner 30 can read living tissue, such as a cell specimen obtained from a sample. The scanner 30 thereby generates a pathological image showing the living tissue and outputs it to the information processing apparatus 10 described above. For example, the scanner 30 has an image sensor and generates the pathological image by imaging the living tissue with the image sensor. The reading method of the scanner 30 is not limited to a specific type; it may be a CCD (Charge Coupled Device) type or a CIS (Contact Image Sensor) type, and is not particularly limited. Here, the CCD type can correspond to a type in which light (reflected or transmitted light) from the living tissue is read by a CCD sensor and converted into image data. The CIS type can correspond to a type that uses LEDs (Light Emitting Diodes) of the three RGB colors as a light source, reads light (reflected or transmitted light) from the living tissue with a photosensor, and converts the reading result into image data.
The learning device 40 is configured by, for example, a computer, and can construct a classifier and the model data used by the classifier by performing machine learning using a plurality of pieces of annotation data. Using the classifier constructed by the learning device 40 and the model data used by the classifier makes it possible to automatically extract images of target regions of interest in new pathological images. Deep learning can typically be used for the machine learning. In the description of the embodiments of the present disclosure, it is mainly assumed that the classifier is realized by a neural network; in that case, the model data can correspond to the weights of each neuron of the neural network. However, in the present embodiment, the classifier may be realized by something other than a neural network; for example, it may be realized by a random forest, a support vector machine, or AdaBoost, and is not particularly limited.
Next, an overview of the embodiments of the present disclosure will be described with reference to FIGS. 4 and 5. FIGS. 4 and 5 are explanatory diagrams illustrating an operation example of the information processing apparatus 10 according to the embodiments of the present disclosure.
<2.1 Functional configuration example of the information processing apparatus>
First, a functional configuration example of the information processing apparatus 10 according to the embodiment of the present disclosure will be described with reference to FIG. 6. FIG. 6 is a diagram showing a functional configuration example of the information processing apparatus 10 according to the present embodiment. In detail, as shown in FIG. 6, the information processing apparatus 10 mainly has a processing unit 100, an image data reception unit 120, a storage unit 130, an operation unit 140, and a transmission unit 150. Details of each functional unit of the information processing apparatus 10 will be described in turn below.
The processing unit 100 can generate annotation data 710 from a pathological image (image data) 610 based on the pathological image 610 and input operations from the user. The processing unit 100 functions, for example, by a CPU (Central Processing Unit) or MPU (Micro Processing Unit) executing a program stored in the storage unit 130 described later, using a RAM (Random Access Memory) or the like as a work area. The processing unit 100 may also be configured by an integrated circuit such as an ASIC (Application Specific Integrated Circuit) or FPGA (Field Programmable Gate Array). Details of the processing unit 100 will be described later.
The image data reception unit 120 and the transmission unit 150 include communication circuits. The image data reception unit 120 can receive the pathological image (image data) 610 from the scanner 30 via the network 50, and outputs the received pathological image 610 to the processing unit 100 described above. When the annotation data 710 is output from the processing unit 100, the transmission unit 150 can transmit it to the learning device 40 via the network 50.
The storage unit 130 is realized by, for example, a semiconductor memory element such as a RAM or flash memory, or a storage device such as a hard disk or optical disk. The storage unit 130 stores annotation data 710 already generated by the processing unit 100, programs executed by the processing unit 100, and the like.
The operation unit 140 has a function of accepting input of operations by the user. In the embodiments of the present disclosure, it is mainly assumed that the operation unit 140 includes a mouse and a keyboard. However, the operation unit 140 is not limited to these; for example, it may include an electronic pen, a touch panel, or an image sensor that detects the user's line of sight.
Next, a functional configuration example of the processing unit 100 described above will be described with reference to FIG. 7. FIG. 7 is a diagram showing a functional configuration example of the processing unit 100 shown in FIG. 6. In detail, as shown in FIG. 7, the processing unit 100 mainly has a locus width setting unit 102, an information acquisition unit 104, a determination unit 106, a region determination unit 108, an extraction unit 110, and a display control unit 112. Details of each functional unit of the processing unit 100 will be described in turn below.
The locus width setting unit 102 acquires input information from the user via the operation unit 140 and, based on the acquired information, can set the width of the locus in the fill input operation. The locus width setting unit 102 can then output the information on the set locus width to the information acquisition unit 104 and the display control unit 112 described later. Details of the user's input setting of the locus width will be described later.
The information acquisition unit 104 can acquire information on the user's input operations from the operation unit 140, and outputs the acquired information to the determination unit 106 described later. In detail, the information acquisition unit 104 acquires information on the fill range (first region) 700 filled in and designated by the user's fill input operation on the pathological image (for example, image data of living tissue) 610. The information acquisition unit 104 may also acquire information on the range (third region) designated by being enclosed by a curve 704 drawn by the user's line drawing input operation on the pathological image 610.
The determination unit 106 can determine whether the fill range (first region) 700 designated by the user's fill input operation on the pathological image 610 overlaps one or more pieces of existing annotation data 710 already stored in the storage unit 130. The determination unit 106 can also determine in what state the fill range 700 overlaps the other existing annotation data 710 (for example, whether it straddles the contour). The determination unit 106 then outputs the determination result to the region determination unit 108 described later.
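A minimal sketch of this overlap determination, assuming both the fill range and the existing annotation data are available as boolean masks of the same shape (the mask representation itself is an assumption, not something the disclosure specifies):

```python
import numpy as np

def overlap_state(fill_mask: np.ndarray, annotation_mask: np.ndarray) -> str:
    """Classify how the fill range 700 overlaps existing annotation data 710."""
    intersection = np.logical_and(fill_mask, annotation_mask)
    if not intersection.any():
        return "disjoint"                  # no overlap at all
    if intersection.sum() == fill_mask.sum():
        return "contained"                 # fill range lies entirely inside
    return "partial"                       # e.g. straddles the annotation contour
```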
The region determination unit 108 performs fitting on the boundary line of all or part of the fill range 700 filled by the fill input operation, based on the pathological image (image data) 610, the fill range (first region) 700 designated by the user's fill input operation on the pathological image 610, and the determination result of the determination unit 106 described above. Through this fitting processing, the region determination unit 108 can acquire the contour of all or part of the target region (second region) 702. Further, the region determination unit 108 outputs the acquired information on the contour of the target region 702 to the extraction unit 110 and the display control unit 112 described later.
Based on the target region (second region) 702, determined by the region determination unit 108 and corresponding to an image that can be included in new annotation data 710, the extraction unit 110 can extract the image of the target region 702 used for machine learning from the pathological image (image data) 610. The extraction unit 110 then outputs the extracted image, together with the annotation attached by the user, to the learning device 40 as new annotation data 710.
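As an illustrative sketch of this extraction step (the array layout and return format are assumptions), the target region can be cut out of the pathological image by its bounding box together with its mask:

```python
import numpy as np

def extract_learning_image(pathological_image: np.ndarray, target_mask: np.ndarray):
    """Crop the image to the bounding box of the target region 702 and return
    the crop with the corresponding mask; a sketch, not the disclosed method.
    Assumes `target_mask` is a non-empty boolean mask."""
    ys, xs = np.nonzero(target_mask)
    top, bottom = ys.min(), ys.max() + 1
    left, right = xs.min(), xs.max() + 1
    return (pathological_image[top:bottom, left:right],
            target_mask[top:bottom, left:right])
```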
The display control unit 112 can control the display of the display device 20 based on various kinds of information. For example, the display control unit 112 can set the magnification of the pathological image 610 displayed on the display device 20 based on the user's input operation. Further, the display control unit 112 may automatically set the magnification of the displayed pathological image 610 based on an analysis result for the pathological image 610 (for example, a frequency analysis result for the pathological image 610, or an extraction result obtained by recognizing and extracting a specific tissue from the pathological image 610), or based on the speed at which the user draws a locus on the pathological image 610. In the present embodiment, automatically setting the magnification in this way further improves the convenience of the input operation and makes it possible to efficiently generate a large amount of highly accurate annotation data 710.
As described above, the region determination unit 108 executes fitting processing in the determined fitting range. The fitting processing executed here can be, for example, the "foreground-background fitting", "cell membrane fitting", or "cell nucleus fitting" described above.
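The disclosure does not fix a particular algorithm for these fittings; as one hypothetical realization of the foreground-background fitting, a fill range could seed a GrabCut-style refinement (the use of OpenCV's grabCut and the iteration count are assumptions for illustration):

```python
import cv2
import numpy as np

def fit_foreground_background(image_bgr: np.ndarray, fill_mask: np.ndarray,
                              iterations: int = 5) -> np.ndarray:
    """Refine the boundary of a user fill range toward the actual
    foreground/background boundary in the image (illustrative sketch)."""
    gc_mask = np.full(image_bgr.shape[:2], cv2.GC_PR_BGD, np.uint8)
    gc_mask[fill_mask > 0] = cv2.GC_PR_FGD            # fill range: probable foreground
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, gc_mask, None, bgd_model, fgd_model,
                iterations, cv2.GC_INIT_WITH_MASK)
    return np.isin(gc_mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)
```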
The details of the information processing apparatus 10, the processing unit 100, and the fitting according to the present embodiment have been described above. Next, with reference to FIGS. 8 to 20, the details of the method of creating the annotation data 710 (step S200 shown in FIG. 2) in the information processing method according to the present embodiment will be described. FIG. 8 is a flowchart showing the information processing method according to the present embodiment, and FIGS. 9 and 10 are explanatory diagrams of input screens according to the present embodiment.
As described above, when the target region 702 has a complicated, intricate shape, the fill input operation is an efficient way of designating the range, but it is difficult to input a detailed boundary line with such a wide locus. Therefore, if the fill input operation and the line drawing input operation could be switched, or the locus width changed, according to the shape of the target region 702, highly accurate annotation data could be generated while further reducing the user's effort. Accordingly, in the modification of the embodiment of the present disclosure described below, the locus width can be changed frequently, and the fill input operation and the line drawing input operation can be switched. The details of this modification will be described below with reference to FIGS. 21 to 23. FIGS. 21 to 23 are explanatory diagrams illustrating the modification of the embodiment of the present disclosure.
As described above, in the present embodiment, the range of the target region 702 can be designated by the user performing a fill input operation on the pathological image 610. Therefore, according to the present embodiment, even when the target region 702 has a complicated, intricate shape such as a cancer cell as shown in FIG. 9, using the fill input operation makes it possible to generate highly accurate annotation data while reducing the user's effort compared with the work of drawing a curve 704. As a result, according to the present embodiment, a large amount of highly accurate annotation data 710 can be generated efficiently.
The technology according to the present disclosure can be applied to various products. For example, the technology according to the present disclosure may be applied to a pathological diagnosis system in which a doctor or the like observes cells or tissues collected from a patient and diagnoses a lesion, or to a support system therefor (hereinafter referred to as a diagnosis support system). This diagnosis support system may be a WSI (Whole Slide Imaging) system that diagnoses a lesion, or supports the diagnosis of a lesion, based on images acquired using digital pathology technology.
The information equipment such as the information processing apparatus 10 according to each embodiment described above is realized by, for example, a computer 1000 having the configuration shown in FIG. 25. The information processing apparatus 10 according to the embodiment of the present disclosure will be described below as an example. FIG. 25 is a hardware configuration diagram showing an example of the computer 1000 that realizes the functions of the information processing apparatus 10. The computer 1000 has a CPU 1100, a RAM 1200, a ROM (Read Only Memory) 1300, an HDD (Hard Disk Drive) 1400, a communication interface 1500, and an input/output interface 1600. The parts of the computer 1000 are connected by a bus 1050.
The embodiments of the present disclosure described above may include, for example, an information processing method executed by the information processing apparatus or information processing system as described above, a program for causing the information processing apparatus to function, and a non-transitory tangible medium on which the program is recorded. The program may also be distributed via a communication line (including wireless communication) such as the Internet.
(1)
An information processing apparatus comprising:
an information acquisition unit that acquires information on a first region designated by a user's fill input operation on image data of living tissue; and
a region determination unit that executes fitting on a boundary of the first region based on the image data and the information on the first region, and determines a second region to be subjected to predetermined processing.
(2)
The information processing apparatus according to (1) above, further comprising an extraction unit that extracts, based on the second region, learning image data, which is image data used for machine learning, from the image data.
(3)
The information processing apparatus according to (2) above, wherein the living tissue is a cell specimen.
(4)
The information processing apparatus according to (2) or (3) above, wherein the region determination unit executes fitting based on a boundary between foreground and background, fitting based on a cell membrane, or fitting based on a cell nucleus.
(5)
The information processing apparatus according to any one of (2) to (4) above, further comprising a determination unit that determines whether the first region overlaps a region related to other learning image data.
(6)
The information processing apparatus according to (5) above, wherein the region determination unit determines, based on the determination result of the determination unit, a fitting range in which fitting is to be executed among the boundaries of the first region, and executes the fitting in the fitting range.
(7)
The information processing apparatus according to (6) above, wherein the region determination unit determines the fitting range according to a range setting mode.
(8)
The information processing apparatus according to (7) above, wherein, in a first range setting mode, when the first region does not overlap the region related to the other learning image data, the region determination unit executes the fitting on the entire boundary of the first region.
(9)
The information processing apparatus according to (8) above, wherein, in the first range setting mode, when the first region overlaps the region related to the other learning image data, the region determination unit executes the fitting on the boundary of the portion of the first region that does not overlap the region related to the other learning image data.
(10)
The information processing apparatus according to (9) above, wherein the region determination unit determines the second region by combining the portion of the first region related to the boundary of the range where the fitting was newly executed with the region related to the other learning image data.
(11)
The information processing apparatus according to any one of (7) to (10) above, wherein, in a second range setting mode, when the first region overlaps the region related to the other learning image data, the region determination unit executes the fitting on the boundary of the portion of the first region that overlaps the region related to the other learning image data.
(12)
The information processing apparatus according to (11) above, wherein the region determination unit determines the second region by removing, from the region related to the other learning image data, the portion of the first region related to the boundary of the range where the fitting was newly executed.
(13)
The information processing apparatus according to any one of (2) to (12) above, wherein the region determination unit executes the fitting on the boundary of the first region based on image data of a region outside or inside the boundary of the first region.
(14)
The information processing apparatus according to (2) above, wherein the region determination unit executes the fitting on the boundary of the first region based on image data of regions outside and inside the contour of the first region.
(15)
The information processing apparatus according to any one of (2) to (4) above, wherein the fill input operation is an operation in which the user fills a part of the image data with a locus having a predetermined width displayed superimposed on the image data.
(16)
The information processing apparatus according to (15) above, further comprising a locus width setting unit that sets the predetermined width.
(17)
The information processing apparatus according to (16) above, wherein the locus width setting unit switches between a line drawing input operation in which the user draws a locus having the predetermined width superimposed on the image data, and the fill input operation.
(18)
The information processing apparatus according to (17) above, wherein, when the predetermined width is set to less than a threshold value, the operation switches to the line drawing input operation.
(19)
The information processing apparatus according to any one of (16) to (18) above, wherein the locus width setting unit sets the predetermined width based on the user's input.
(20)
The information processing apparatus according to any one of (16) to (18) above, wherein the locus width setting unit sets the predetermined width based on an analysis result of the image data or a display magnification of the image data.
(21)
The information processing apparatus according to any one of (16) to (18) above, wherein the locus width setting unit sets the predetermined width based on an input start position of an input operation on the image data.
(22)
The information processing apparatus according to (21) above, wherein the locus width setting unit sets the predetermined width based on a positional relationship of the input start position with respect to a region related to other learning image data.
(23)
The information processing apparatus according to (17) above, wherein the information acquisition unit acquires information on a third region designated by the user's line drawing input operation on the image data, and
the region determination unit executes fitting on a boundary of the third region based on the image data and the information on the third region, and determines the second region.
(24)
An information processing method comprising:
a processor acquiring information on a first region designated by a user's fill input operation on image data of living tissue; and
executing fitting on a boundary of the first region based on the image data and the information on the first region, and determining a second region to be subjected to predetermined processing.
(25)
A program causing a computer to function as:
an information acquisition unit that acquires information on a first region designated by a user's fill input operation on image data of living tissue; and
a region determination unit that executes fitting on a boundary of the first region based on the image data and the information on the first region, and determines a second region to be subjected to predetermined processing.
(26)
An information processing system including:
an information processing apparatus; and
a program for causing the information processing apparatus to execute information processing,
wherein, in accordance with the program, the information processing apparatus functions as:
an information acquisition unit that acquires information on a first region designated by a user's fill input operation on image data of living tissue; and
a region determination unit that executes fitting on a boundary of the first region based on the image data and the information on the first region, and determines a second region to be subjected to predetermined processing.
10 Information processing apparatus
20 Display device
30 Scanner
40 Learning device
50 Network
100 Processing unit
102 Locus width setting unit
104 Information acquisition unit
106 Determination unit
108 Region determination unit
110 Extraction unit
112 Display control unit
120 Image data reception unit
130 Storage unit
140 Operation unit
150 Transmission unit
200 Display unit
600, 602 Icon
610 Pathological image
700 Fill range
702, 702a, 702b Target region
704 Curve
710, 710a, 710b Annotation data
800 Boundary line
810 Range
Claims (26)
- An information processing apparatus comprising:
an information acquisition unit that acquires information on a first region designated by a user's fill input operation on image data of living tissue; and
a region determination unit that executes fitting on a boundary of the first region based on the image data and the information on the first region, and determines a second region to be subjected to predetermined processing.
- The information processing apparatus according to claim 1, further comprising an extraction unit that extracts, based on the second region, learning image data, which is image data used for machine learning, from the image data.
- The information processing apparatus according to claim 2, wherein the living tissue is a cell specimen.
- The information processing apparatus according to claim 2, wherein the region determination unit executes fitting based on a boundary between foreground and background, fitting based on a cell membrane, or fitting based on a cell nucleus.
- The information processing apparatus according to claim 2, further comprising a determination unit that determines whether the first region overlaps a region related to other learning image data.
- The information processing apparatus according to claim 5, wherein the region determination unit determines, based on the determination result of the determination unit, a fitting range in which fitting is to be executed among the boundaries of the first region, and executes the fitting in the fitting range.
- The information processing apparatus according to claim 6, wherein the region determination unit determines the fitting range according to a range setting mode.
- The information processing apparatus according to claim 7, wherein, in a first range setting mode, when the first region does not overlap the region related to the other learning image data, the region determination unit executes the fitting on the entire boundary of the first region.
- The information processing apparatus according to claim 8, wherein, in the first range setting mode, when the first region overlaps the region related to the other learning image data, the region determination unit executes the fitting on the boundary of the portion of the first region that does not overlap the region related to the other learning image data.
- The information processing apparatus according to claim 9, wherein the region determination unit determines the second region by combining the portion of the first region related to the boundary of the range where the fitting was newly executed with the region related to the other learning image data.
- The information processing apparatus according to claim 7, wherein, in a second range setting mode, when the first region overlaps the region related to the other learning image data, the region determination unit executes the fitting on the boundary of the portion of the first region that overlaps the region related to the other learning image data.
- The information processing apparatus according to claim 11, wherein the region determination unit determines the second region by removing, from the region related to the other learning image data, the portion of the first region related to the boundary of the range where the fitting was newly executed.
- The information processing apparatus according to claim 2, wherein the region determination unit executes the fitting on the boundary of the first region based on image data of a region outside or inside the boundary of the first region.
- The information processing apparatus according to claim 2, wherein the region determination unit executes the fitting on the boundary of the first region based on image data of regions outside and inside the contour of the first region.
- The information processing apparatus according to claim 2, wherein the fill input operation is an operation in which the user fills a part of the image data with a locus having a predetermined width displayed superimposed on the image data.
- The information processing apparatus according to claim 15, further comprising a locus width setting unit that sets the predetermined width.
- The information processing apparatus according to claim 16, wherein the locus width setting unit switches between a line drawing input operation in which the user draws a locus having the predetermined width superimposed on the image data, and the fill input operation.
- The information processing apparatus according to claim 17, wherein, when the predetermined width is set to less than a threshold value, the operation switches to the line drawing input operation.
- The information processing apparatus according to claim 16, wherein the locus width setting unit sets the predetermined width based on the user's input.
- The information processing apparatus according to claim 16, wherein the locus width setting unit sets the predetermined width based on an analysis result of the image data or a display magnification of the image data.
- The information processing apparatus according to claim 16, wherein the locus width setting unit sets the predetermined width based on an input start position of an input operation on the image data.
- The information processing apparatus according to claim 21, wherein the locus width setting unit sets the predetermined width based on a positional relationship of the input start position with respect to a region related to other learning image data.
- The information processing apparatus according to claim 17, wherein the information acquisition unit acquires information on a third region designated by the user's line drawing input operation on the image data, and the region determination unit executes fitting on a boundary of the third region based on the image data and the information on the third region, and determines the second region.
- An information processing method comprising:
a processor acquiring information on a first region designated by a user's fill input operation on image data of living tissue; and
executing fitting on a boundary of the first region based on the image data and the information on the first region, and determining a second region to be subjected to predetermined processing.
- A program causing a computer to function as:
an information acquisition unit that acquires information on a first region designated by a user's fill input operation on image data of living tissue; and
a region determination unit that executes fitting on a boundary of the first region based on the image data and the information on the first region, and determines a second region to be subjected to predetermined processing.
- An information processing system including:
an information processing apparatus; and
a program for causing the information processing apparatus to execute information processing,
wherein, in accordance with the program, the information processing apparatus functions as:
an information acquisition unit that acquires information on a first region designated by a user's fill input operation on image data of living tissue; and
a region determination unit that executes fitting on a boundary of the first region based on the image data and the information on the first region, and determines a second region to be subjected to predetermined processing.
Priority Applications (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202180042618.2A CN115943305A (zh) | 2020-06-24 | 2021-06-15 | 信息处理装置、信息处理方法、程序和信息处理系统 |
| JP2022531832A JPWO2021261323A1 (ja) | 2020-06-24 | 2021-06-15 | |
| EP21830137.2A EP4174764A4 (en) | 2020-06-24 | 2021-06-15 | Information processing device, information processing method, program, and information processing system |
| US18/000,683 US20230215010A1 (en) | 2020-06-24 | 2021-06-15 | Information processing apparatus, information processing method, program, and information processing system |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2020108732 | 2020-06-24 | ||
| JP2020-108732 | 2020-06-24 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2021261323A1 true WO2021261323A1 (ja) | 2021-12-30 |
Family
ID=79281205
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2021/022634 Ceased WO2021261323A1 (ja) | 2020-06-24 | 2021-06-15 | 情報処理装置、情報処理方法、プログラム及び情報処理システム |
Country Status (5)
| Country | Link |
|---|---|
| US (1) | US20230215010A1 (ja) |
| EP (1) | EP4174764A4 (ja) |
| JP (1) | JPWO2021261323A1 (ja) |
| CN (1) | CN115943305A (ja) |
| WO (1) | WO2021261323A1 (ja) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20220262040A1 (en) * | 2021-02-16 | 2022-08-18 | Hitachi, Ltd. | Microstructural image analysis device and microstructural image analysis method |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20240427474A1 (en) * | 2023-06-21 | 2024-12-26 | GE Precision Healthcare LLC | Systems and methods for annotation panels |
| CN116740768B (zh) * | 2023-08-11 | 2023-10-20 | 南京诺源医疗器械有限公司 | 基于鼻颅镜的导航可视化方法、系统、设备及存储介质 |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2013041357A (ja) * | 2011-08-12 | 2013-02-28 | Sony Corp | 情報処理装置及び情報処理方法 |
| JP2013152699A (ja) * | 2011-12-26 | 2013-08-08 | Canon Inc | 画像処理装置、画像処理システム、画像処理方法およびプログラム |
| JP2018165718A (ja) * | 2012-09-06 | 2018-10-25 | ソニー株式会社 | 情報処理装置、情報処理方法、および顕微鏡システム |
| WO2019230447A1 (ja) * | 2018-06-01 | 2019-12-05 | 株式会社フロンティアファーマ | 画像処理方法、薬剤感受性試験方法および画像処理装置 |
| JP2020035094A (ja) * | 2018-08-28 | 2020-03-05 | オリンパス株式会社 | 機械学習装置、教師用データ作成装置、推論モデル、および教師用データ作成方法 |
| JP2020038600A (ja) * | 2018-08-31 | 2020-03-12 | ソニー株式会社 | 医療システム、医療装置および医療方法 |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2006016317A2 (en) * | 2004-08-09 | 2006-02-16 | Koninklijke Philips Electronics N.V. | Segmentation based on region-competitive deformable mesh adaptation |
- 2021
- 2021-06-15 US US18/000,683 patent/US20230215010A1/en active Pending
- 2021-06-15 CN CN202180042618.2A patent/CN115943305A/zh not_active Withdrawn
- 2021-06-15 EP EP21830137.2A patent/EP4174764A4/en not_active Withdrawn
- 2021-06-15 WO PCT/JP2021/022634 patent/WO2021261323A1/ja not_active Ceased
- 2021-06-15 JP JP2022531832A patent/JPWO2021261323A1/ja active Pending
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2013041357A (ja) * | 2011-08-12 | 2013-02-28 | Sony Corp | 情報処理装置及び情報処理方法 |
| JP2013152699A (ja) * | 2011-12-26 | 2013-08-08 | Canon Inc | 画像処理装置、画像処理システム、画像処理方法およびプログラム |
| JP2018165718A (ja) * | 2012-09-06 | 2018-10-25 | ソニー株式会社 | 情報処理装置、情報処理方法、および顕微鏡システム |
| WO2019230447A1 (ja) * | 2018-06-01 | 2019-12-05 | 株式会社フロンティアファーマ | 画像処理方法、薬剤感受性試験方法および画像処理装置 |
| JP2020035094A (ja) * | 2018-08-28 | 2020-03-05 | オリンパス株式会社 | 機械学習装置、教師用データ作成装置、推論モデル、および教師用データ作成方法 |
| JP2020038600A (ja) * | 2018-08-31 | 2020-03-12 | ソニー株式会社 | 医療システム、医療装置および医療方法 |
Non-Patent Citations (2)
| Title |
|---|
| JESSICA L. BAUMANN ET AL.: "Annotation of Whole Slide Images Using Touchscreen Technology", PATHOLOGY VISIONS, 2018 |
| See also references of EP4174764A4 |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20220262040A1 (en) * | 2021-02-16 | 2022-08-18 | Hitachi, Ltd. | Microstructural image analysis device and microstructural image analysis method |
| US12254654B2 (en) * | 2021-02-16 | 2025-03-18 | Hitachi, Ltd. | Microstructural image analysis device and microstructural image analysis method |
Also Published As
| Publication number | Publication date |
|---|---|
| JPWO2021261323A1 (ja) | 2021-12-30 |
| EP4174764A1 (en) | 2023-05-03 |
| CN115943305A (zh) | 2023-04-07 |
| EP4174764A4 (en) | 2023-12-27 |
| US20230215010A1 (en) | 2023-07-06 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21830137 Country of ref document: EP Kind code of ref document: A1 |
|
| ENP | Entry into the national phase |
Ref document number: 2022531832 Country of ref document: JP Kind code of ref document: A |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| ENP | Entry into the national phase |
Ref document number: 2021830137 Country of ref document: EP Effective date: 20230124 |
|
| WWW | Wipo information: withdrawn in national office |
Ref document number: 2021830137 Country of ref document: EP |