
WO2025225015A1 - Image processing device, image display method, image display program, recording medium, and image diagnosis system - Google Patents


Info

Publication number
WO2025225015A1
Authority
WO
WIPO (PCT)
Prior art keywords
emphasis
image
emphasized
unit
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/JP2024/016557
Other languages
French (fr)
Japanese (ja)
Inventor
誠 北村
大夢 杉田
明広 窪田
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Olympus Medical Systems Corp
Original Assignee
Olympus Medical Systems Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Olympus Medical Systems Corp filed Critical Olympus Medical Systems Corp
Priority to PCT/JP2024/016557
Publication of WO2025225015A1
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/22 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of characters or indicia using display control signals derived from coded signals representing the characters or indicia, e.g. with a character-code memory
    • G09G5/24 Generation of individual character patterns
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • The present invention therefore aims to provide an image processing device, image display method, image display program, storage medium, and image diagnostic system that can generate display images allowing medical professionals and others to quickly identify the underlying basis information when referring to image diagnosis results of medical images generated by CAD or other methods.
  • An image processing device includes a classification result receiving unit that receives lesion classification results for an image, and a sentence generation unit that generates sentences from the basis information for the classification.
  • the image processing device also includes a classification unit that classifies components of the sentences into emphasis targets and non-emphasis targets.
  • the image processing device also includes an emphasis unit that emphasizes the emphasis targets.
  • the image processing device also includes an output unit that outputs at least the emphasis targets from the sentences to a monitor.
  • An image processing device includes one or more processors.
  • the processors receive the image classification results, generate the basis information for the classification as text, classify the components of the text into those to be emphasized and those not to be emphasized, emphasize the emphasized components, and output at least the emphasized components of the text and the classification results to a monitor.
  • An image display method receives the results of image classification, generates the basis information for the classification as text, classifies the components of the text into emphasis targets and non-emphasis targets, emphasizes the emphasis targets, and outputs at least the emphasis targets of the text to a monitor.
  • An image display program receives the results of image classification, generates text as the basis for the classification, classifies the components of the text into emphasis targets and non-emphasis targets, emphasizes the emphasis targets, generates an image associated with the emphasis targets, and outputs at least the emphasis targets of the text to a monitor.
  • A storage medium stores an image display program that receives the results of image classification, generates the basis information for the classification as text, classifies the components of the text into emphasis targets and non-emphasis targets, emphasizes the emphasis targets, generates images associated with the emphasis targets, and outputs at least the emphasis targets of the text to a monitor.
  • An image diagnostic system includes a classification unit that receives evidence information for the differentiation of lesions and classifies the components of the text that make up the evidence information into emphasis targets and non-emphasis targets.
  • the image diagnostic system also includes an emphasis unit that emphasizes the emphasis targets.
  • the image diagnostic system includes an output unit that outputs at least the emphasis targets from the text to a monitor.
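The device, method, program, and system bullets above all describe the same pipeline: receive a classification result, verbalize its basis, classify the text components into emphasis and non-emphasis targets, emphasize, and output. A minimal Python sketch of that flow follows; every class, function, and field name here is an illustrative assumption, not the patent's implementation, and bold `**…**` markers stand in for the visual emphasis processing.

```python
from dataclasses import dataclass

@dataclass
class ClassificationResult:
    disease: str      # e.g. "cancer"
    certainty: float  # diagnostic certainty, 0.0 to 1.0
    basis: list       # basis (evidence) phrases behind the classification

def generate_sentence(result):
    """Sentence generation unit: verbalize the basis information as text."""
    return ", ".join(result.basis) + "."

def classify_components(result, emphasis_keys):
    """Classification unit: split components into emphasis / non-emphasis targets."""
    emphasized = [b for b in result.basis if any(k in b for k in emphasis_keys)]
    plain = [b for b in result.basis if b not in emphasized]
    return emphasized, plain

def render(result, emphasis_keys):
    """Emphasis unit + output unit: mark emphasis targets and emit the display text."""
    emphasized, _ = classify_components(result, emphasis_keys)
    text = generate_sentence(result)
    for phrase in emphasized:
        text = text.replace(phrase, f"**{phrase}**")
    header = f"{result.disease} (certainty {result.certainty:.0%})"
    return header + "\n" + text

r = ClassificationResult("cancer", 0.9,
                         ["there is a clear boundary",
                          "it has an irregular texture",
                          "it is reddish in color"])
print(render(r, emphasis_keys=["boundary"]))
```

In a real device the `render` step would drive a monitor rather than return a string; the point is only the ordering of the four processing stages.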
  • the present invention has the advantage of being able to generate a display image that allows medical professionals and others to quickly identify the underlying information when referring to the image diagnosis results of medical images obtained using CAD or other methods.
  • FIG. 1 is a block diagram illustrating an example of the configuration of a medical system including an image processing apparatus according to a first embodiment of the present invention.
  • FIG. 2 is a schematic block diagram illustrating a detailed configuration of the endoscope device.
  • FIG. 3 is a block diagram for explaining the functional configuration of an image processing apparatus according to the first embodiment.
  • FIG. 4 is a flowchart illustrating an example of a discrimination algorithm.
  • FIG. 2 is a diagram illustrating an example of a display image according to the first embodiment.
  • FIG. 2 is a diagram illustrating an example of a display image according to the first embodiment.
  • FIG. 2 is a diagram illustrating an example of a display image according to the first embodiment.
  • FIG. 2 is a diagram illustrating an example of a display image according to the first embodiment.
  • FIG. 2 is a diagram illustrating an example of a display image according to the first embodiment.
  • FIG. 2 is a diagram illustrating an example of a display image according to the first embodiment.
  • FIG. 10 is a diagram illustrating an example of a display image according to the second embodiment.
  • FIG. 10 is a diagram illustrating an example of a display image according to the second embodiment.
  • FIG. 10 is a diagram illustrating an example of a display image according to the second embodiment.
  • FIG. 11 is a diagram illustrating an example of a display image according to the third embodiment.
  • FIG. 13 is a diagram illustrating an example of a display image according to the fourth embodiment.
  • FIG. 13 is a view for explaining an example of a display image according to the fifth embodiment.
  • FIG. 13 is a view for explaining an example of a display image according to the fifth embodiment.
  • FIG. 20 is a view for explaining an example of a display image according to the sixth embodiment.
  • FIG. 23 is a view for explaining an example of a display image according to the seventh embodiment.
  • FIG. 23 is a view for explaining an example of a display image according to the seventh embodiment.
  • FIG. 23 is a view for explaining an example of a display image according to the seventh embodiment.
  • FIG. 23 is a diagram illustrating an example of a display image according to the eighth embodiment.
  • FIG. 1 is a block diagram illustrating an example of the configuration of a medical system including an image processing device according to a first embodiment of the present invention.
  • the medical system according to this embodiment is mainly composed of an image processing device 1, an endoscope device 3, a display device 41, and a network 5. As shown in FIG. 1, it may also include a server 2.
  • the medical system shown in FIG. 1 analyzes medical images acquired by the endoscope device 3 and the like, and outputs information to assist doctors in making diagnoses by differentiating lesions and the like.
  • the image processing device 1 differentiates lesions from medical images and outputs the results of the differentiation.
  • the image processing device 1 includes a processor 11 and a storage device 12.
  • the processor 11 is connected to the network 5 and includes a central processing unit (hereinafter referred to as the CPU) 111 and hardware circuits 112 such as ROM and RAM.
  • the processor 11 may also include an integrated circuit such as an FPGA instead of or separate from the CPU.
  • the storage device 12 stores various software programs. Each software program is read and executed by the CPU 111 of the processor 11. Note that all or part of the various programs may be stored in the ROM of the processor 11.
  • the storage device 12 stores a program (not shown) used to control the operation of the image processing device 1, as well as an image display program 121.
  • the image display program 121 is a program that performs processing related to the generation of display images based on the results of lesion differentiation in medical images.
  • the storage device 12 also stores various setting values and parameters required for executing the image display program 121.
  • the storage device 12 is also configured to be able to store medical images, etc. obtained from the server 2 and the endoscope device 3.
  • the server 2 is connected to the network 5 and includes a processor 21 and a storage device 22.
  • the processor 21 includes a CPU and other components.
  • the storage device 22 is a large-capacity storage device such as a hard disk drive.
  • the server 2 stores medical images output from endoscope devices 3 and other devices connected to the network 5.
  • the server 2 also has the function of transmitting medical images in response to requests from image processing devices 1 and other devices connected to the network 5.
  • the endoscope device 3 is a device used to observe the inside of a subject's body.
  • Figure 2 is a schematic block diagram illustrating the detailed configuration of the endoscope device.
  • the endoscope device 3 includes an endoscope 31 and a video processor 32.
  • the endoscope 31 has an elongated insertion section 311, an operating section 312 provided at the base end of the insertion section 311, and a universal cable 313 extending from the operating section 312.
  • the image sensor 314 is provided at the tip of the insertion section 311. Light reflected from the subject illuminated by illumination light emitted from an illumination window (not shown) is incident on the imaging surface of the image sensor 314 via an observation window (not shown). The image sensor 314 outputs an imaging signal of the subject image.
  • the operation unit 312 has various operation members 312a, including an up-down bending knob, a left-right bending knob, and various operation buttons.
  • the user of the endoscope 31 can operate these operation members 312a to bend the bending section (not shown) provided in the insertion section 311, or give instructions to record endoscopic images.
  • the endoscope 31 is connected to the video processor 32 via a connector 313a provided at the base end of the universal cable 313.
  • the imaging signal from the image sensor 314 is supplied to the video processor 32 via a signal line 315 that passes through the insertion section 311, the operation section 312, and the universal cable 313.
  • the video processor 32 includes a control unit 321, an image processing unit 322, a communication interface (hereinafter abbreviated as communication I/F) 323, and an operation panel 324.
  • the video processor 32 is connected to the network 5.
  • the control unit 321 controls the overall operation of the endoscope device 3 and the execution of various functions.
  • the control unit 321 includes a CPU, ROM, RAM, etc., and programs for various operations and controls are recorded in the ROM.
  • the image processing unit 322 drives the image sensor 314 and receives an image signal from the image sensor 314. Under the control of the control unit 321, the image processing unit 322 generates an endoscopic image based on the image signal and outputs it to the control unit 321.
  • the operation panel 324 has various buttons and the like that allow the user to specify the execution of various functions.
  • the operation panel 324 is, for example, a display device with a touch panel. The user can operate the operation panel 324 to instruct the endoscope device 3 to execute the desired function.
  • the control unit 321 generates an image signal for the endoscopic image to be displayed on the display device 41 and outputs it to the display device 41.
  • the endoscopic image based on the image signal is displayed on the screen of the display device 41.
  • FIG. 3 is a block diagram for explaining the functional configuration of the image processing device according to the first embodiment.
  • the image processing device 1 is configured to have a basis information generation unit 102, a sentence generation unit 102a, a classification unit 103, an emphasis unit 104, and an output unit 105. It may further include a detection unit 100 and a discrimination unit 101, or at least one of these units may be provided separately from the image processing device 1.
  • the discrimination unit 101 is equipped with artificial intelligence (AI) that has undergone deep learning for CADx (Computer-Aided Diagnosis).
  • the results of the medical image discrimination performed by the discrimination unit 101 are output to the basis information generation unit 102, which serves as a classification result receiving unit.
  • the basis information generation unit 102 estimates the basis for the classification, i.e., why the classification unit 101 output such a classification result.
  • a configuration may be provided that allows the display of the basis information to be turned on and off, so that the results of the classification are output to the basis information generation unit 102 only when the display of the basis information is turned on.
  • the sentence generation unit 102a converts the basis for discrimination into sentences that can be understood by humans.
  • the classification unit 103 classifies multiple types of sentences into emphasis targets and non-emphasis targets. Specific examples will be described later, but when there are multiple types of sentences generated by the sentence generation unit 102a, each sentence may be classified into emphasis targets and non-emphasis targets, or emphasis targets and non-emphasis targets may be classified within a single sentence. When classifying emphasis targets and non-emphasis targets within a single sentence, this may be done on a phrase-by-phrase basis, on a word-by-word basis, or a combination of these. It is also possible to switch emphasis targets and non-emphasis targets over time.
  • the emphasis unit 104 generates information for emphasizing the emphasis target according to the classification result, and outputs it to the display device 41 via the output unit 105 as an emphasis process.
  • Examples of emphasis methods used by the emphasis unit 104 include: "underlining the emphasis target and not underlining the non-emphasis target", "making the font of the emphasis target bolder than that of the non-emphasis target", "making the font size of the emphasis target larger than that of the non-emphasis target", "making the font type of the emphasis target different from that of the non-emphasis target", "making the color of the emphasis target different from that of the non-emphasis target", "making the tilt angle of the font of the emphasis target different from that of the non-emphasis target", or "making the blinking speed of the emphasis target faster than that of the non-emphasis target".
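The text-emphasis methods enumerated in the bullet above could be realized in many rendering technologies; the patent does not name one. As an illustrative assumption only, the sketch below maps each listed method to a CSS declaration and wraps the emphasis target in a styled HTML `<span>`, leaving non-emphasis text plain.

```python
# Hypothetical mapping from the listed emphasis methods to CSS declarations.
EMPHASIS_STYLES = {
    "underline":       "text-decoration: underline",
    "bolder":          "font-weight: bold",
    "larger":          "font-size: 120%",
    "different_font":  "font-family: sans-serif",
    "different_color": "color: #00a000",
    "tilted":          "font-style: italic",
    "blinking":        "animation: blink 0.5s step-start infinite",
}

def emphasize_html(text, target, method="bolder"):
    """Wrap the emphasis target in a styled <span>; the rest stays unstyled."""
    style = EMPHASIS_STYLES[method]
    return text.replace(target, f'<span style="{style}">{target}</span>')

html = emphasize_html("There is a clear boundary and it is reddish.",
                      "clear boundary", method="underline")
```

Any of the listed methods reduces to choosing a different value for the style string, which is why a lookup table is a natural fit here.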
  • the emphasis unit 104 may be equipped with an image generation unit 104a.
  • the image generation unit 104a generates an emphasized image by adding marker images to locations on the differentiated medical image that correspond to the components to be emphasized.
  • the marker image may be an icon, such as an arrow, that points to the area to be highlighted, or a bounding box that surrounds the area to be highlighted, or may be linked to the evidence information.
  • if the evidence information includes information regarding the outline of the lesion, a frame that traces the outline of the lesion may be displayed.
  • it is preferable that the marker image be an image that is visually consistent with the emphasis processing of the component to be emphasized.
  • it is preferable that the display color of the marker image be similar or identical to the display color of the component to be emphasized. Similar colors are colors that fall within a specified range on the color wheel.
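The "specified range on the color wheel" test above can be made concrete as a hue-distance check. The sketch below, using only the standard library, treats two colors as similar when their hues differ by at most a threshold (the 30-degree default is an illustrative assumption; the patent does not give a value).

```python
import colorsys

def hue_degrees(rgb):
    """Hue of an (r, g, b) color with channels in [0, 255], in degrees."""
    r, g, b = (c / 255.0 for c in rgb)
    h, _, _ = colorsys.rgb_to_hls(r, g, b)
    return h * 360.0

def is_similar_color(rgb_a, rgb_b, max_hue_diff=30.0):
    """True if the two colors fall within a hue window on the color wheel."""
    diff = abs(hue_degrees(rgb_a) - hue_degrees(rgb_b))
    diff = min(diff, 360.0 - diff)  # the wheel wraps around at 360 degrees
    return diff <= max_hue_diff

# A marker green and a slightly different text green land within the window.
similar = is_similar_color((0, 160, 0), (40, 180, 20))
```

A production check would likely also compare saturation and lightness, but hue alone captures the color-wheel criterion stated in the text.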
  • the results of detection of the position and contour of the lesion by the detection unit 100 may be input to the image generation unit 104a, and the detection information of the position and contour of the lesion may be used to generate a marker image.
  • the detection unit 100 may be equipped with artificial intelligence (AI) that has undergone deep learning for Computer-Aided Detection (CADe).
  • the sentence generation unit 102a generates multiple types of sentences, "sentences verbalizing the first basis", "sentences verbalizing the second basis", and "sentences verbalizing the third basis", from the basis information generated by the basis information generation unit 102.
  • the emphasis unit 104 uses the above-mentioned emphasis methods or the like to emphasize the "sentences verbalizing the first basis" so that, to a doctor's eye, they stand out more than the "sentences verbalizing the second basis" and the "sentences verbalizing the third basis".
  • the sentence generation unit 102a generates a "sentence verbalizing the basis" that includes a "first word", a "second word", and a "third word".
  • the classification unit 103 may determine that the "first word" is to be emphasized and that the "second word" and "third word" are not to be emphasized.
  • the classification unit 103 may add timing information to the emphasis target information and non-emphasis target information and output the information to the emphasis unit 104. Even when there are multiple areas to which the doctor's attention should be drawn, emphasizing them one after another at different times reduces the doctor's burden while still conveying all of the information.
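One simple way to realize the timing information described above is a schedule that gives each emphasis target a start time, with the display switching the emphasized component as time passes. The sketch below is a minimal assumption of how such a schedule could look; the function names and the 2-second interval are illustrative, not taken from the patent.

```python
def emphasis_schedule(targets, interval=2.0):
    """Attach timing information: each target receives the time (in seconds)
    at which it becomes the emphasized component, in the given order."""
    return [(i * interval, target) for i, target in enumerate(targets)]

def active_target(schedule, t):
    """Return the component that is emphasized at time t (seconds since
    the display image was first shown)."""
    current = None
    for start, target in schedule:
        if t >= start:
            current = target
    return current

sched = emphasis_schedule(["redness", "clear boundary", "irregular texture"])
# "redness" is emphasized first, then the schedule advances every interval.
```

A renderer would poll `active_target` each frame and apply the emphasis processing only to the returned component.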
  • the diagnostic order can be used, for example, to determine the order of highlighting.
  • the diagnostic order can be based on a user-customized and registered order, or a diagnostic algorithm can be used.
  • Figure 4 shows the diagnostic procedure using MESDA-G as an example of a diagnostic algorithm.
  • MESDA-G is a diagnostic algorithm used to differentiate gastric cancer. It first determines color changes (whitish or reddish) or morphological changes (protrusion, flattening, or depression) on the gastric mucosal surface from the image. Next, it identifies the demarcation line (DL) between the lesion and the background mucosa (S1). If there is no DL, it is diagnosed as non-cancerous (benign lesion).
  • IMVP: irregular microvascular pattern
  • IMSP: irregular microsurface pattern
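The MESDA-G flow described above can be sketched as a small decision function. Note a hedge: this excerpt's description breaks off after the no-DL branch, so the cancer branch below (an irregular microvascular or microsurface pattern within the demarcation line suggests cancer) follows the published MESDA-G algorithm rather than text preserved here, and the sketch is in any case a simplification, not a clinical tool.

```python
def mesda_g(has_demarcation_line, has_imvp, has_imsp):
    """Simplified MESDA-G decision flow.
    Step S1: no demarcation line (DL) -> non-cancerous (benign lesion).
    With a DL, an irregular microvascular pattern (IMVP) or irregular
    microsurface pattern (IMSP) suggests cancer; otherwise non-cancerous."""
    if not has_demarcation_line:
        return "non-cancerous"
    if has_imvp or has_imsp:
        return "cancer"
    return "non-cancerous"
```

Expressed this way, the diagnostic order (color/morphology, then DL, then vessel and surface pattern) is exactly the order the classification unit later reuses to sequence its emphasis targets.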
  • the discrimination unit 101 differentiates gastric cancer from a medical image and obtains a result of cancer with a diagnostic certainty of 90%.
  • from the basis information generated by the basis information generation unit 102, the sentence generation unit 102a generates multiple types of sentences, such as "there is a clear boundary" as the "first evidence", "it has an irregular texture" as the "second evidence", and "it is reddish in color" as the "third evidence". These sentences are then merged to generate the evidence information sentence "there is a clear boundary, it has an irregular texture, and it is reddish in color".
  • the classification unit 103 first selects as emphasis targets the sentences, phrases, or words related to the color of the gastric mucosal surface.
  • the next emphasis targets are the sentences, phrases, or words related to the boundary line between the lesion and the background mucosa.
  • the next emphasis targets are the sentences, phrases, or words related to the microvascular pattern. Therefore, the components in the sentence "It has clear boundaries, an irregular texture, and is reddish in color" are emphasized in the order "redness", "clear boundaries", and "irregular texture".
  • the highlighting timing can be set to be simultaneous, or the order can be determined by user settings.
  • diagnostic algorithms may be switched to perform differentiation depending on the type of organ. If the target organ is the large intestine, for example, the JNET classification, NICE classification, Sano classification, Jikei classification, Showa classification, Hiroshima classification, Kudo-Tsuruta classification, and Akita-Nisseki classification are used. If the target organ is the esophagus, for example, the Inoue classification, Japan Esophageal Society classification, Arima classification, and BING classification are used. If the target organ is the stomach, for example, the MESDA-G, Koyama classification, Jikei classification, Yao classification, and Yagi classification are used. In addition to the type of organ, diagnostic algorithms may be switched to perform differentiation depending on the target lesion, the type of endoscope used to capture the medical images, etc.
  • a keyword can be set for each component of the diagnostic sequence, and the order of emphasis for each component of the evidence information can be determined based on the degree of match between each component of the evidence information and the keyword.
  • for example, the keywords related to the first item in the diagnostic order are "red, bleeding, protuberance, flat, depression", the keyword related to the second item is "boundary", and the keywords related to the third item are "capillary, texture". These keywords are stored in the classification unit 103, or the classification unit 103 can access them in a separate storage structure. Then, if the sentence generation unit generates sentences such as "There is a clear boundary" and "The color is reddish", the sentence "The color is reddish", which has a higher degree of match with the keywords related to the first item in the diagnostic order, is emphasized first in chronological order.
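The keyword-matching scheme above amounts to scoring each evidence sentence by the earliest diagnostic step whose keywords it matches, then emphasizing in that order. A sketch under that assumption (function names and the exact keyword sets are illustrative):

```python
DIAGNOSTIC_KEYWORDS = [
    # one keyword set per step of the diagnostic order (illustrative values)
    {"red", "bleeding", "protuberance", "flat", "depression"},  # step 1
    {"boundary"},                                               # step 2
    {"capillary", "texture"},                                   # step 3
]

def match_step(sentence, keyword_sets=DIAGNOSTIC_KEYWORDS):
    """Index of the first diagnostic step whose keywords appear in the
    sentence; len(keyword_sets) if no keyword matches at all."""
    text = sentence.lower()
    for step, keywords in enumerate(keyword_sets):
        if any(k in text for k in keywords):
            return step
    return len(keyword_sets)

def emphasis_order(sentences):
    """Sort evidence sentences so the one matching the earliest diagnostic
    step is emphasized first in chronological order."""
    return sorted(sentences, key=match_step)

order = emphasis_order(["There is a clear boundary", "The color is reddish"])
# "The color is reddish" matches a step-1 keyword ("red"), so it sorts first.
```

Substring matching is deliberately crude here; a real system would presumably use the medical-dictionary terms mentioned later in the text rather than bare substrings.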
  • the sentence generation unit 102a may generate sentences using medical terms contained in a medical dictionary set by the user or the like. For example, prior to image diagnosis, an arbitrary dictionary may be selected and set from multiple medical dictionaries, and the dictionary may be used to generate sentences for the evidence information. The generated sentences are output to the classification unit 103.
  • the output unit 105 generates and outputs a display image to be displayed on the display device 41.
  • the display image includes the results of the diagnosis (disease name, diagnostic certainty), the text of the evidence information that has been subjected to emphasis processing, and an emphasized diagram.
  • Figures 5A, 5B, 5C, and 5D are diagrams illustrating an example of a display image according to the first embodiment.
  • Figure 5A shows an example of a display image 411 displayed on the display device 41.
  • the display image 411 has two display areas D1 and D2.
  • the highlighted diagram G1 generated by the image generation unit 104a is placed in the display area D1, and the results of the diagnosis (disease name, diagnostic certainty) and the text of the evidence information generated by the emphasis unit 104 are placed in the display area D2.
  • Figure 5A shows a display image when the "first evidence" is to be highlighted.
  • the marker image L1 in the highlighted diagram G1 is added to the location corresponding to the "first evidence".
  • the bolded portions of the text of the evidence information and the marker image L1 indicated by a bold line are displayed in green, for example.
  • the rest of the evidence information text is displayed in black, for example.
  • Figure 5B is an example of a display image in the example of gastric cancer differentiation described above, where the "first basis", "there is a clear boundary", is to be highlighted.
  • Figure 5C is an example of a display image in the example of gastric cancer differentiation described above, where the "second basis", "it has an irregular texture", is to be highlighted.
  • Figure 5D is an example of a display image in the example of gastric cancer differentiation described above, where the "third basis", "the color is reddish", is to be highlighted.
  • the emphasis (bold part) in the text of the evidence information and the marker image L1 indicated by a thick line are displayed in green, for example.
  • the emphasis (left italic part) in the text of the evidence information and the marker image L2 indicated by a double line are displayed in blue, for example.
  • the emphasis (right italic part) in the text of the evidence information and the marker image L3 indicated by a thick dotted line are displayed in red, for example. In this way, when the components classified as emphasis targets differ, they may be displayed using different colors.
  • the image processing device of the first embodiment differentiates the display color of the components to be emphasized from the display color of the other components in the text of the evidence information to be displayed in the display image. This allows the user to quickly identify the evidence information to be emphasized. Furthermore, the image processing device of the first embodiment adds marker images to the medical image that has been differentiated at locations corresponding to the components to be emphasized, to generate an emphasized image. At this time, the display color of the marker images is made the same as the display color of the components to be emphasized. This allows the user to quickly identify the evidence information to be emphasized and the location of the corresponding lesion.
  • the description method settings for colors may be switchable. For example, two modes, a normal mode and a detailed mode, may be set, and when the detailed mode is set, a sentence that describes in detail the color of a component that includes color may be generated.
  • in the first embodiment, the process of visually emphasizing components classified as emphasis targets is to set the display color of those components in the displayed image to a color different from the display color of the components not to be emphasized.
  • in this embodiment, the texture of the components to be emphasized in the displayed image is made different from the texture of the components not to be emphasized.
  • the processing performed by the highlighting unit 104 in this embodiment differs from the first embodiment.
  • the image processing device in this embodiment and the following embodiments has the same configuration as the image processing device 1 in the first embodiment.
  • the same components are designated by the same reference numerals and their descriptions are omitted.
  • Figures 6A, 6B, and 6C are diagrams illustrating an example of a display image according to the second embodiment.
  • Figure 6A is an example of a display image in the example of gastric cancer differentiation described above, where the "first basis", "there is a clear boundary", is to be highlighted.
  • Figure 6B is an example of a display image in the example of gastric cancer differentiation described above, where the "second basis", "it has an irregular texture", is to be highlighted.
  • Figure 6C is an example of a display image in the example of gastric cancer differentiation described above, where the "third basis", "the color is reddish", is to be highlighted.
  • marker image L4 in the highlighted image G1 is added to the location corresponding to the component to be highlighted.
  • the component to be highlighted in the text of the evidence information is shown in bold and underlined.
  • the other parts of the text of the evidence information are shown in regular weight without underlining. In this way, by making the texture of the characters of the component to be highlighted different from that of the other components, the user can quickly identify the evidence information to be highlighted.
  • in this embodiment, the texture of the component to be emphasized is differentiated from that of the other components by character weight and underlining, but the texture can also be varied by changing the size, font, or tilt of the characters, or by a combination of these.

(Third embodiment)
  • in the first embodiment, a display image is generated in which an emphasis image is placed when one specific component is to be emphasized. Therefore, when another component is to be emphasized, a new display image must be generated in which an emphasis image corresponding to that component is placed.
  • this embodiment differs from the first embodiment in that emphasis images for all of the components to be emphasized are placed in a single display image.
  • the display image 411 generated by the output unit 105 includes text of the evidence information that has been subjected to emphasis processing, a medical image that has been differentiated, and an emphasis diagram.
  • Figure 7 is a diagram illustrating an example of a display image according to the third embodiment.
  • the display image 411 has three display areas D1, D2, and D3. Highlighted diagrams G1, G2, and G3 are arranged in display area D1, and text of the evidence information is arranged in display area D2. Furthermore, the medical image G0 that has been differentiated is arranged in display area D3.
  • the text of the evidence information arranged in display area D2 is visually highlighted in different ways for each component. For example, the "first evidence", "has a clear boundary", is displayed in green, the "second evidence", "has an irregular texture", is displayed in blue, and the "third evidence", "is reddish in color", is displayed in red.
  • Marker image L1 in the emphasis diagram G1 arranged in display area D1 is added at the location corresponding to the "first evidence."
  • The color of marker image L1 is the same as the display color of the "first evidence."
  • Marker image L2 in the emphasis diagram G2 is added at the location corresponding to the "second evidence."
  • The color of marker image L2 is the same as the display color of the "second evidence."
  • Marker image L3 in the emphasis diagram G3 is added at the location corresponding to the "third evidence."
  • The color of marker image L3 is the same as the display color of the "third evidence."
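As a concrete illustration of the color scheme above, the pairing of each piece of evidence with a single color, reused for both its text and its marker image, can be sketched as follows. This is a minimal sketch: the function and variable names are illustrative assumptions, not part of the disclosed device.

```python
# Hypothetical sketch of the third embodiment's color assignment:
# each piece of evidence gets one color, reused for both its text
# (display area D2) and its marker image (display area D1).
EVIDENCE_COLORS = ["green", "blue", "red"]  # first, second, third evidence

def assign_colors(evidence_texts):
    """Pair each evidence sentence with a shared text/marker color."""
    styled = []
    for i, text in enumerate(evidence_texts):
        color = EVIDENCE_COLORS[i % len(EVIDENCE_COLORS)]
        styled.append({
            "text": text,
            "text_color": color,    # color of the evidence text
            "marker_color": color,  # color of marker L1/L2/L3
        })
    return styled

evidence = ["has a clear boundary", "has an irregular texture", "is reddish in color"]
plan = assign_colors(evidence)
```

Because the same color value is stored for both the text and the marker, the user can match a highlighted sentence to its marker at a glance, which is the point of this embodiment.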
(Fourth embodiment)

  • In the above embodiments, a display image is generated in which an emphasis image is placed when a specific component is the emphasis target.
  • This embodiment differs in that a display image is generated in which a reference image containing a lesion similar to the component is also placed.
  • The reference image is, for example, an atlas image published in a medical encyclopedia or the like that contains a lesion similar to the component.
  • The display image 411 generated by the output unit 105 includes the text of the evidence information after emphasis processing, the medical image that has been differentiated, and the reference image.
  • Figure 8 is a diagram illustrating an example of a display image according to the fourth embodiment.
  • The display image 411 has three display areas D1, D2, and D4.
  • An emphasis image G1 is placed in display area D1, and the text of the evidence information is placed in display area D2.
  • A reference image G11 is placed in display area D4.
  • A marker image L11 is added to the reference image G11.
  • The marker image L11 is added at a location in the reference image G11 that corresponds to the component to be emphasized.
  • The marker image L11 is an image that is visually consistent with the marker image L1.
  • The display color of the marker image L11 is the same as the display color of the marker image L1 and the display color of the component to be emphasized.
(Fifth embodiment)

  • This embodiment differs from the above-described embodiments in that the display images are switched sequentially according to the order of the diagnoses in the differentiation. For example, when differentiation is performed using the MESDA-G diagnostic procedure shown in Figure 4, a display image (Figure 9A) in which the components based on the diagnosis shown in S1 are emphasized is displayed first. Next, a display image (Figure 9B) in which the components based on the diagnosis shown in S2 are emphasized is displayed.
  • Figures 9A and 9B are diagrams illustrating an example of a display image according to the fifth embodiment.
  • The differential diagnosis results shown in display area D1 preferably display not only the disease name and the diagnostic certainty but also the judgment result of each diagnosis (for example, "DL+" in Figure 9A or "IMVP/IMSP+" in Figure 9B). In this case, the judgment result is preferably displayed in the same color as the component being emphasized.
  • The display color of the component to be emphasized and the display color of the marker image may be changed depending on the judgment status, for example, red when the judgment result is "+" and green when the judgment result is "-".
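The judgment-dependent coloring described above can be sketched as a small helper. The function name and the neutral fallback color are assumptions for illustration only; the embodiment only specifies red for "+" and green for "-".

```python
# Illustrative sketch of judgment-status coloring: "+" results are
# shown in red, "-" results in green, as described in the fifth
# embodiment. The fallback color for other strings is an assumption.
def judgment_color(judgment: str) -> str:
    """Return the display color for a judgment result such as 'DL+'."""
    if judgment.endswith("+"):
        return "red"
    if judgment.endswith("-"):
        return "green"
    return "white"  # neutral fallback for undetermined results
```

The same color value would then be applied to the judgment text, the emphasized component, and the marker image so that all three stay visually linked.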
  • This embodiment differs from the above-described embodiments in that, as a result of the differentiation, the evidence information is displayed on the display image only when the lesion is determined to be a disease; the evidence information is not displayed when the lesion is determined not to be a disease.
  • The display image shown in Figure 11A is displayed first.
  • The display image shown in Figure 11B is displayed.
  • The display image shown in Figure 11C is displayed.
  • This embodiment differs from the above-described embodiments in that, as a result of the differentiation, the evidence information is displayed on the display image only when the lesion is determined to be a disease and the diagnostic certainty is high (higher than a set value). The evidence information is not displayed when the lesion is determined not to be a disease, or when the diagnostic certainty is low even though the lesion is determined to be a disease.
  • The emphasis unit 104 synthesizes audio data that has been subjected to emphasis processing, for example by increasing the volume of the emphasis targets relative to the non-emphasis targets, or by reading out only the emphasis targets first and then the entire evidence, and transmits this data to the audio device 42 via the output unit 105.
  • The audio device 42 may be a general speaker, or it may be a directional speaker, earphones, headphones, or a bone-conduction device. When a directional speaker, earphones, headphones, or a bone-conduction device is used, the evidence information can be conveyed only to a specific party (the doctor), thereby reducing the possibility that the subject will notice the evidence information.
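One way to realize the audio emphasis described above — emphasis targets read first and louder, followed by the full evidence at normal volume — is to build a playback plan before synthesis. Speech synthesis and output to the audio device 42 are out of scope here; all names and the gain values are illustrative assumptions.

```python
# Hedged sketch: build an ordered playback plan in which emphasized
# items are read first at a higher gain, then all evidence items are
# read at normal gain, mirroring the emphasis processing above.
def build_playback_plan(items, emphasized, gain_emph=1.5, gain_other=1.0):
    """items: all evidence sentences; emphasized: subset to stress."""
    plan = []
    # Emphasized items first, at the higher gain ...
    for text in items:
        if text in emphasized:
            plan.append({"text": text, "gain": gain_emph})
    # ... then the full evidence at normal gain.
    for text in items:
        plan.append({"text": text, "gain": gain_other})
    return plan
```

Each plan entry would then be passed to a text-to-speech engine with the given gain before transmission to the audio device.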
  • The present invention is not limited to the above-described embodiments, and the components can be modified in the implementation stage without departing from the spirit of the invention. Furthermore, various inventions can be formed by appropriately combining the multiple components disclosed in the above-described embodiments. For example, some of the components shown in the embodiments may be deleted, and components from different embodiments may be combined as appropriate.
  • The control procedures mainly illustrated in the flowcharts can often be implemented as programs, which may be stored on a recording medium or in a recording unit.
  • The programs may be recorded on the recording medium or in the recording unit at the time of product shipment, may be distributed on a recording medium, or may be downloaded via the Internet.
  • The elements described as "units" may be configured by dedicated circuits or by combining multiple general-purpose circuits and, as necessary, a microcomputer, a processor such as a CPU, or a sequencer such as an FPGA that operates according to pre-programmed software. The system may also be designed so that some or all of the control is taken over by an external device, in which case a wired or wireless communication circuit is involved. The communication may be via Bluetooth (registered trademark), Wi-Fi, telephone lines, USB, or the like.
  • The dedicated circuits, general-purpose circuits, and control units may be integrated into an ASIC.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

An image processing device 1 comprises: an identification result reception unit 102 that receives a lesion identification result a from an image; a text generation unit 102a that generates, as text, basis information of the identification; a classification unit 103 that classifies constituent elements of the text into an object to be emphasized and an object not to be emphasized; an emphasis unit 104 that emphasizes the object to be emphasized; and an output unit 105 that outputs at least the object to be emphasized in the text to a monitor.

Description

Image processing device, image display method, image display program, recording medium, and image diagnosis system

 In recent years, advances in IT technology have led to the practical application of image diagnosis using AI (Artificial Intelligence) in a variety of fields. In image diagnosis, a technique is known that presents the supporting evidence for inferences made by AI (see, for example, Patent Document 1).

Japanese Patent No. 7161979

 In the medical field as well, AI for CAD (computer-aided diagnosis) used in endoscopic diagnosis and other procedures may be required to present all of its evidence information to the doctor who uses it. However, the time available for diagnosis is limited.

 The present invention therefore aims to provide an image processing device, an image display method, an image display program, a storage medium, and an image diagnosis system that can generate a display image from which medical professionals and others can quickly identify the evidence information when referring to the image diagnosis results of medical images produced by CAD or the like.

 An image processing device according to one aspect of the present invention includes a differentiation result receiving unit that receives a lesion differentiation result for an image, and a text generation unit that generates the evidence information for the differentiation as text. The image processing device also includes a classification unit that classifies the components of the text into emphasis targets and non-emphasis targets, an emphasis unit that emphasizes the emphasis targets, and an output unit that outputs at least the emphasis targets of the text to a monitor.

 An image processing device according to one aspect of the present invention includes one or more processors. The processors receive an image differentiation result, generate the evidence information for the differentiation as text, classify the components of the text into emphasis targets and non-emphasis targets, emphasize the emphasis targets, and output at least the emphasis targets of the text and the differentiation result to a monitor.

 An image display method according to one aspect of the present invention receives an image differentiation result, generates the evidence information for the differentiation as text, classifies the components of the text into emphasis targets and non-emphasis targets, emphasizes the emphasis targets, and outputs at least the emphasis targets of the text to a monitor.

 An image display program according to one aspect of the present invention receives an image differentiation result, generates the evidence information for the differentiation as text, classifies the components of the text into emphasis targets and non-emphasis targets, emphasizes the emphasis targets, generates a figure associated with the emphasis targets, and outputs at least the emphasis targets of the text to a monitor.

 A storage medium according to one aspect of the present invention stores an image display program that receives an image differentiation result, generates the evidence information for the differentiation as text, classifies the components of the text into emphasis targets and non-emphasis targets, emphasizes the emphasis targets, generates a figure associated with the emphasis targets, and outputs at least the emphasis targets of the text to a monitor.

 An image diagnosis system according to one aspect of the present invention includes a classification unit that receives evidence information for the differentiation of a lesion and classifies the components of the text constituting the evidence information into emphasis targets and non-emphasis targets. The image diagnosis system also includes an emphasis unit that emphasizes the emphasis targets, and an output unit that outputs at least the emphasis targets of the text to a monitor.

 According to the present invention, when medical professionals and others refer to the image diagnosis results of medical images produced by CAD or the like, a display image can be generated from which the evidence information can be quickly identified.

 A block diagram illustrating an example of the configuration of a medical system including an image processing device according to the first embodiment of the present invention.
 A schematic block diagram illustrating the detailed configuration of the endoscope device.
 A block diagram for explaining the functional configuration of the image processing device according to the first embodiment.
 A flowchart illustrating an example of a differentiation algorithm.
 A diagram illustrating an example of a display image according to the first embodiment.
 A diagram illustrating an example of a display image according to the first embodiment.
 A diagram illustrating an example of a display image according to the first embodiment.
 A diagram illustrating an example of a display image according to the first embodiment.
 A diagram illustrating an example of a display image according to the second embodiment.
 A diagram illustrating an example of a display image according to the second embodiment.
 A diagram illustrating an example of a display image according to the second embodiment.
 A diagram illustrating an example of a display image according to the third embodiment.
 A diagram illustrating an example of a display image according to the fourth embodiment.
 A diagram illustrating an example of a display image according to the fifth embodiment.
 A diagram illustrating an example of a display image according to the fifth embodiment.
 A diagram illustrating an example of a display image according to the sixth embodiment.
 A diagram illustrating an example of a display image according to the seventh embodiment.
 A diagram illustrating an example of a display image according to the seventh embodiment.
 A diagram illustrating an example of a display image according to the seventh embodiment.
 A diagram illustrating an example of a display image according to the eighth embodiment.

 Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.
(First embodiment)

 FIG. 1 is a block diagram illustrating an example of the configuration of a medical system including an image processing device according to the first embodiment of the present invention. The medical system according to this embodiment mainly comprises an image processing device 1, an endoscope device 3, a display device 41, and a network 5. As shown in FIG. 1, it may also include a server 2. The medical system shown in FIG. 1 analyzes medical images acquired by the endoscope device 3 or the like, differentiates lesions and the like, and outputs information that assists doctors in making diagnoses.

 The image processing device 1 differentiates lesions from medical images and outputs the differentiation results. The image processing device 1 includes a processor 11 and a storage device 12.

 The processor 11 is connected to the network 5 and includes a central processing unit (hereinafter, CPU) 111 and hardware circuits 112 such as a ROM and a RAM. The processor 11 may also include an integrated circuit such as an FPGA instead of, or separate from, the CPU.

 The storage device 12 stores various software programs. Each software program is read and executed by the CPU 111 of the processor 11. Note that all or part of the various programs may be stored in the ROM of the processor 11.

 The storage device 12 stores a program (not shown) used to control the operation of the image processing device 1, as well as an image display program 121. The image display program 121 performs processing related to the generation of a display image based on the results of lesion differentiation for a medical image. The storage device 12 also stores various setting values and parameters required for executing the image display program 121, and is configured to be able to store medical images and the like acquired from the server 2 and the endoscope device 3.

 The server 2 is connected to the network 5 and includes a processor 21 and a storage device 22. The processor 21 includes a CPU and the like. The storage device 22 is a large-capacity storage device such as a hard disk drive. The server 2 stores medical images output from the endoscope device 3 and other devices connected to the network 5, and has a function of transmitting medical images in response to requests from the image processing device 1 and other devices connected to the network 5.

 The endoscope device 3 is a device for observing the inside of a subject's body. FIG. 2 is a schematic block diagram illustrating the detailed configuration of the endoscope device. The endoscope device 3 includes an endoscope 31 and a video processor 32.

 The endoscope 31 has an elongated insertion section 311, an operation section 312 provided at the proximal end of the insertion section 311, and a universal cable 313 extending from the operation section 312.

 An image sensor 314 is provided at the distal end of the insertion section 311. Light reflected from the subject, which is illuminated by illumination light emitted from an illumination window (not shown), is incident on the imaging surface of the image sensor 314 via an observation window (not shown). The image sensor 314 outputs an imaging signal of the subject image.

 The operation section 312 has various operation members 312a, including an up-down bending knob, a left-right bending knob, and various operation buttons. The user of the endoscope 31 can operate these operation members 312a to bend a bending section (not shown) provided in the insertion section 311 or to instruct the recording of endoscopic images.

 The endoscope 31 is connected to the video processor 32 via a connector 313a provided at the proximal end of the universal cable 313.

 The imaging signal from the image sensor 314 is supplied to the video processor 32 via a signal line 315 that runs through the insertion section 311, the operation section 312, and the universal cable 313.

 The video processor 32 includes a control unit 321, an image processing unit 322, a communication interface (hereinafter, communication I/F) 323, and an operation panel 324. The video processor 32 is connected to the network 5.

 The control unit 321 controls the overall operation of the endoscope device 3 and the execution of its various functions. The control unit 321 includes a CPU, a ROM, a RAM, and the like, and programs for various operations and controls are recorded in the ROM.

 The CPU reads programs from the ROM and executes them in response to operation signals from the various operation members 312a of the operation section 312 or from the operation panel 324, thereby performing the overall operation of the endoscope device 3 and executing its various functions.

 Under the control of the control unit 321, the image processing unit 322 drives the image sensor 314 and receives the imaging signal from the image sensor 314. Under the control of the control unit 321, the image processing unit 322 generates an endoscopic image based on the imaging signal and outputs it to the control unit 321.

 The communication I/F 323 is a circuit that connects the control unit 321 to the network 5. The control unit 321 outputs endoscopic images to the server 2 via the communication I/F 323.

 The operation panel 324 has various buttons and the like that allow the user to specify the execution of various functions. The operation panel 324 is, for example, a display device with a touch panel. The user can operate the operation panel 324 to instruct the endoscope device 3 to execute a desired function.

 The control unit 321 generates an image signal of the endoscopic image to be displayed on the display device 41 and outputs it to the display device 41. The endoscopic image based on the image signal is displayed on the screen of the display device 41.

 The display device 41 is included in the notification device 4. The notification device 4 may include an audio device 42 in addition to the display device 41. The display device 41 includes a monitor or the like. The video processor 32 may transmit image data to the display device 41 via the network 5, or via the network 5 and the image processing device 1. When the evidence described below is presented as text, it may be sensitive information for the subject undergoing the endoscopic examination, such as text suggesting the possibility of cancer. For this reason, the relative angle and height between the display device 41 and the examination table on which the subject lies may be adjusted so that the evidence information is not seen by the subject. Alternatively, goggles with a built-in monitor may be used as the display device 41 instead of a stationary monitor.

 Next, the function of the image processing device 1 for generating a display image from a medical image will be described. FIG. 3 is a block diagram for explaining the functional configuration of the image processing device according to the first embodiment. The image processing device 1 includes an evidence information generation unit 102, a text generation unit 102a, a classification unit 103, an emphasis unit 104, and an output unit 105. It may further include a detection unit 100 or a differentiation unit 101, or at least one of these may be provided as a body separate from the detection unit 100.

 The differentiation unit 101 is equipped with artificial intelligence (AI) trained by deep learning for CADx (Computer-Aided Diagnosis). The differentiation results of the differentiation unit 101, for example, whether the target is a lesion, a blood vessel, or a nerve, or the type of lesion, the history of infection with Helicobacter pylori or the like, the type of blood vessel, the type of organ, or the site within the organ, are output to the display device 41 via the output unit 105.

 The result of the differentiation of the medical image by the differentiation unit 101 is output to the evidence information generation unit 102, which serves as a differentiation result receiving unit. The evidence information generation unit 102 estimates the basis for the differentiation, that is, why the differentiation unit 101 output that differentiation result. Although not shown, a configuration for switching the display of the evidence information on and off may be provided, so that the differentiation result is output to the evidence information generation unit 102 only when the display of the evidence information is turned on.

 The text generation unit 102a verbalizes the basis for the differentiation into text that a person can understand. The classification unit 103 classifies multiple pieces of text into emphasis targets and non-emphasis targets. Specific examples will be described later; when the text generation unit 102a generates multiple pieces of text, each piece may be classified as an emphasis target or a non-emphasis target, or emphasis targets and non-emphasis targets may be classified within a single piece of text. When classifying emphasis targets and non-emphasis targets within a single piece of text, this may be done phrase by phrase, word by word, or by a combination of these. Emphasis targets and non-emphasis targets can also be swapped over time.
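The classification step described above can be sketched as follows, with a simple keyword rule standing in for the classifier, which this embodiment leaves unspecified. All names here are illustrative assumptions.

```python
# Illustrative sketch of the classification unit 103: split the
# generated evidence sentences into emphasis targets and
# non-emphasis targets. A keyword rule is used only as a stand-in
# for whatever criterion the classifier actually applies.
def classify(sentences, emphasis_keywords):
    """Return (emphasized, non_emphasized) lists of sentences."""
    emphasized, plain = [], []
    for s in sentences:
        if any(k in s for k in emphasis_keywords):
            emphasized.append(s)
        else:
            plain.append(s)
    return emphasized, plain
```

The same split could equally be applied phrase by phrase or word by word within a single sentence, as the text notes.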

 The emphasis unit 104 generates, as emphasis processing according to the classification result, information that emphasizes the emphasis targets, and outputs it to the display device 41 via the output unit 105. Examples of emphasis methods used by the emphasis unit 104 include: underlining the emphasis target while leaving the non-emphasis target non-underlined; making the font of the emphasis target bolder than that of the non-emphasis target; making the font size of the emphasis target larger than that of the non-emphasis target; using a different font type for the emphasis target than for the non-emphasis target; displaying the emphasis target in a different color from the non-emphasis target; slanting the font of the emphasis target at a different angle from the non-emphasis target; and blinking the emphasis target faster than the non-emphasis target.
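To make the emphasis processing concrete, the following sketch renders an emphasis target as simple HTML. Which attributes are combined would be decided by the emphasis unit 104; the function and its defaults are illustrative assumptions, not the disclosed implementation.

```python
# Hypothetical sketch: render one emphasis method (or a combination)
# as HTML. Underline, then bold, then color are applied from the
# inside out; non-emphasis targets are returned unchanged.
def emphasize_html(text, underline=False, bold=False, color=None):
    """Wrap text in simple HTML tags according to the chosen emphasis."""
    if underline:
        text = f"<u>{text}</u>"
    if bold:
        text = f"<b>{text}</b>"
    if color:
        text = f'<span style="color:{color}">{text}</span>'
    return text
```

Font size, font type, slant, and blinking could be expressed the same way with additional style attributes.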

 As an example of an emphasis method using color, the color of the characters may be selected to match the color of the background, as shown in Table 1.
[Table 1]

 The emphasis unit 104 may include a figure generation unit 104a. The figure generation unit 104a generates an emphasis figure by adding a marker image to the location in the differentiated medical image that corresponds to the component to be emphasized. The marker image may be an icon that points to the part to be emphasized, such as an arrow; something that surrounds the part to be emphasized, such as a bounding box; or something linked to the evidence information. For example, when the evidence information includes information about the contour of a lesion, a frame tracing the contour of the lesion may be displayed. The marker image is preferably an image that is visually consistent with the emphasis processing of the component to be emphasized. For example, the display color of the marker image is preferably the same color as, or a color similar to, the display color of the component to be emphasized. Similar colors here refer to colors that fall within a predetermined range of each other on the color wheel.
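As an illustration of the bounding-box style marker mentioned above, the following sketch draws a one-pixel rectangular outline in the emphasis color on an image modeled as a 2-D grid. A real figure generation unit would draw on the differentiated medical image itself; the function name and pixel model are assumptions.

```python
# Hedged sketch of figure generation unit 104a: draw a bounding-box
# marker as a rectangle outline in the emphasis color. The image is
# modeled as rows of pixels (None = untouched background pixel).
def draw_bounding_box(width, height, box, color):
    """box = (x0, y0, x1, y1), inclusive corners of the marker."""
    img = [[None] * width for _ in range(height)]
    x0, y0, x1, y1 = box
    for x in range(x0, x1 + 1):   # top and bottom edges
        img[y0][x] = color
        img[y1][x] = color
    for y in range(y0, y1 + 1):   # left and right edges
        img[y][x0] = color
        img[y][x1] = color
    return img
```

Using the same `color` value as the emphasized text keeps the marker visually consistent with the emphasis processing, as the text recommends.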

 検出部100による病変の位置や輪郭の検出結果を図生成部104aに入力し、病変の位置や輪郭の検出情報をマーカ画像の生成に使用してもよい。検出部100はCADe(Computer-Aided Detection)のための深層学習を施した人工知能(AI)を搭載していてもよい。 The results of detection of the position and contour of the lesion by the detection unit 100 may be input to the image generation unit 104a, and the detection information of the position and contour of the lesion may be used to generate a marker image. The detection unit 100 may be equipped with artificial intelligence (AI) that has undergone deep learning for Computer-Aided Detection (CADe).

 具体的には、根拠情報生成部102が生成した根拠情報から、文章生成部102aが、「第1根拠を言語化した文章」、「第2根拠を言語化した文章」、および「第3根拠を言語化した文章」の複数種類の文章を生成したとする。これらの文章に対して分類部103は「第1根拠を言語化した文章」を強調対象、「第2根拠を言語化した文章」、および「第3根拠を言語化した文章」を非強調対象と判断した場合、強調部104は上述の強調方法などを使用し、医師の目で見た場合に、「第1根拠を言語化した文章」の方が、「第2根拠を言語化した文章」、および「第3根拠を言語化した文章」よりも目立って見える様に強調する。 Specifically, it is assumed that the sentence generation unit 102a generates multiple types of sentences, "sentences verbalizing the first basis," "sentences verbalizing the second basis," and "sentences verbalizing the third basis," from the evidence information generated by the evidence information generation unit 102. When the classification unit 103 determines that the "sentences verbalizing the first basis" is to be emphasized and the "sentences verbalizing the second basis" and "sentences verbalizing the third basis" are not to be emphasized, the highlighting unit 104 uses the above-mentioned highlighting method or the like to highlight the "sentences verbalizing the first basis" so that, when viewed by a doctor's eyes, they stand out more than the "sentences verbalizing the second basis" and "sentences verbalizing the third basis."

 勿論、上述のとおり文章単位で強調することは必須ではない。例えば、文章生成部102aが、「第1単語」「第2単語」および「第3単語」を含んで構成された「根拠を言語化した文章」を生成した場合、分類部103が「第1単語」を強調対象とし、「第2単語」および「第3単語」を非強調対象と判断することも可能である。 Of course, as mentioned above, emphasis need not be applied sentence by sentence. For example, if the sentence generation unit 102a generates a "sentence verbalizing the basis" that includes a "first word," a "second word," and a "third word," the classification unit 103 may designate the "first word" as the emphasis target and the "second word" and "third word" as non-emphasis targets.

 分類部103は強調対象情報および非強調対象情報にタイミング情報を加えて強調部104に出力してもよい。医師に注目して貰いたい箇所が複数ある場合でも、タイミングをずらして順に強調することにより、医師の負荷を減らしつつ情報をインプットすることができる。 The classification unit 103 may add timing information to the emphasis target information and the non-emphasis target information and output them to the emphasis unit 104. Even when there are multiple areas to which the doctor's attention should be drawn, emphasizing them one at a time at staggered timings conveys the information while reducing the doctor's burden.

 例えば、『第1のタイミングでは「第1根拠を言語化した文章」を強調対象、「第2根拠を言語化した文章」、および「第3根拠を言語化した文章」を非強調対象とする』、『第1のタイミングより遅い第2のタイミングでは「第2根拠を言語化した文章」を強調対象、「第1根拠を言語化した文章」、および「第3根拠を言語化した文章」を非強調対象とする』、『第2のタイミングより遅い第3タイミングでは「第3根拠を言語化した文章」を強調対象、「第1根拠を言語化した文章」、および「第2根拠を言語化した文章」を非強調対象とする』という情報を出力することも可能である。 For example, it is possible to output information such as, "At a first timing, the 'sentence verbalizing the first basis' is to be emphasized, and the 'sentence verbalizing the second basis' and the 'sentence verbalizing the third basis' are to be de-emphasized," "At a second timing later than the first timing, the 'sentence verbalizing the second basis' is to be emphasized, and the 'sentence verbalizing the first basis' and the 'sentence verbalizing the third basis' are to be de-emphasized," and "At a third timing later than the second timing, the 'sentence verbalizing the third basis' is to be emphasized, and the 'sentence verbalizing the first basis' and the 'sentence verbalizing the second basis' are to be de-emphasized."
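The timing information described above could be represented, for example, as a simple schedule consumed by the emphasis unit (a sketch under assumed names; the data layout is not specified by the embodiment):

```python
from dataclasses import dataclass

@dataclass
class EmphasisSlot:
    timing: int          # 1 = first timing, 2 = second timing, ...
    emphasized: list     # components emphasized at this timing
    de_emphasized: list  # components shown without emphasis

# The three-timing example above as data: each basis is emphasized in
# turn while the other two are de-emphasized.
schedule = [
    EmphasisSlot(1, ["first basis"], ["second basis", "third basis"]),
    EmphasisSlot(2, ["second basis"], ["first basis", "third basis"]),
    EmphasisSlot(3, ["third basis"], ["first basis", "second basis"]),
]
```

At each timing, the emphasis unit would apply the emphasis processing only to the components in `emphasized` for the slot whose `timing` matches the current step.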

 強調させる順番を決める方法として、例えば診断順序を活用することができる。診断順序は、ユーザがカスタマイズして登録したものに沿っても良いし、診断アルゴリズムを活用することもできる。図4には、診断アルゴリズムの一例として、MESDA-Gによる診断手順を示している。MESDA-Gは、胃癌の鑑別に用いられる診断アルゴリズムであり、まず、画像から胃粘膜表面の色の変化(白っぽいまたは赤みがかった)または形態学的変化(隆起、平ら、または陥没)を判断する。次に、病変と背景粘膜の間の境界線(DL:Demarcation line)を同定する(S1)。DLがない場合、非癌(良性病変)と診断する。DLが存在する場合は、その後の不規則な微小血管パターン(IMVP:irregular microvascular pattern)と不規則な微小表面パターン(IMSP:irregular microsurface pattern)の存在を観察する(S2)。不規則な微小血管および/または微小表面パターンが境界線内に存在する場合、癌と診断する。不規則な微小血管および/または微小表面パターンが境界線内に存在しない場合、非癌と診断する。 The diagnostic order can be used, for example, as a method of determining the order of highlighting. The diagnostic order may follow an order customized and registered by the user, or a diagnostic algorithm can be used. Figure 4 shows the diagnostic procedure of MESDA-G as an example of a diagnostic algorithm. MESDA-G is a diagnostic algorithm used to differentiate gastric cancer. First, color changes (whitish or reddish) or morphological changes (protrusion, flatness, or depression) of the gastric mucosal surface are determined from the image. Next, the demarcation line (DL) between the lesion and the background mucosa is identified (S1). If there is no DL, the lesion is diagnosed as non-cancerous (benign). If a DL is present, the presence of an irregular microvascular pattern (IMVP) and an irregular microsurface pattern (IMSP) is then observed (S2). If an irregular microvascular and/or microsurface pattern is present within the demarcation line, the lesion is diagnosed as cancer; if not, it is diagnosed as non-cancerous.

 具体例として、鑑別部101において、医用画像に対して胃癌の鑑別を行い、診断確信度90%で癌であるという結果が得られた場合について説明する。根拠情報生成部102が生成した根拠情報から、文章生成部102aが、「第1根拠」として「明確な境界がある」という文章、「第2根拠」として「イレギュラーなテクスチャを有している」という文章、「第3根拠」として「色は赤みを帯びている」という複数種類の文章を生成したとする。さらに、これらの文章をマージして、「明確な境界があり、イレギュラーなテクスチャを有しており、色は赤みを帯びている」という根拠情報の文章を生成したとする。この場合、診断アルゴリズムに沿って分類部は、胃粘膜表面の色に関わる文章、文節、または単語を最初の強調対象とする。そして、病変と背景粘膜の間の境界線に関わる文章、文節、または単語を次の強調対象とする。微小血管パターンに関わる文章、文節、または単語を更に次の強調対象とする。したがって、「明確な境界があり、イレギュラーなテクスチャを有しており、色は赤みを帯びている」という文章に含まれる構成要素は「赤み」、「明確な境界」、「イレギュラーなテクスチャ」の順で強調が行われる。 As a specific example, let us consider a case where the differentiation unit 101 differentiates gastric cancer from a medical image and obtains a result of cancer with a diagnostic certainty of 90%. From the evidence information generated by the evidence information generation unit 102, the sentence generation unit 102a generates multiple types of sentences, such as the sentence "there is a clear boundary" as the "first evidence," the sentence "it has an irregular texture" as the "second evidence," and the sentence "it is reddish in color" as the "third evidence." These sentences are then merged to generate the evidence information sentence "there is a clear boundary, it has an irregular texture, and it is reddish in color." In this case, in accordance with the diagnostic algorithm, the classification unit first highlights sentences, phrases, or words related to the color of the gastric mucosal surface. Then, the next highlighting target is sentences, phrases, or words related to the boundary line between the lesion and the background mucosa. The next highlighting target is sentences, phrases, or words related to the microvascular pattern. Therefore, the components in the sentence "It has clear boundaries, an irregular texture, and is reddish in color" are emphasized in the order "redness," "clear boundaries," and "irregular texture."

 「胃粘膜表面の色の変化」および「形態学的変化」の様に同列のものがある場合は強調のタイミングが同時となる様に設定してもよいし、ユーザ設定により順列をつけられるようにしてもよい。 When items rank at the same level in the diagnostic order, such as "color change of the gastric mucosal surface" and "morphological change," the emphasis timing may be set so that they are emphasized simultaneously, or an order may be assigned to them by user setting.

 なお、臓器の種類に応じて、診断アルゴリズムを切り替えて鑑別を行ってもよい。対象臓器が大腸の場合、例えば、JNET分類、NICE分類、佐野分類、慈恵分類、昭和分類、広島分類、工藤・鶴田分類、秋田日石分類などを用いる。対象臓器が食道の場合、例えば、井上分類、日本食道学会分類、有馬分類、BINGなどを用いる。対象臓器が胃の場合、例えば、MESDA-G、小山分類、慈恵分類、八尾分類、八木分類などを用いる。臓器の種類のほかに、対象となる病変、医用画像の撮像に用いられた内視鏡の種類などに応じて、診断アルゴリズムを切り替えて鑑別を行ってもよい。 In addition, diagnostic algorithms may be switched to perform differentiation depending on the type of organ. If the target organ is the large intestine, for example, the JNET classification, NICE classification, Sano classification, Jikei classification, Showa classification, Hiroshima classification, Kudo-Tsuruta classification, and Akita-Nisseki classification are used. If the target organ is the esophagus, for example, the Inoue classification, Japan Esophageal Society classification, Arima classification, and BING classification are used. If the target organ is the stomach, for example, the MESDA-G, Koyama classification, Jikei classification, Yao classification, and Yagi classification are used. In addition to the type of organ, diagnostic algorithms may be switched to perform differentiation depending on the target lesion, the type of endoscope used to capture the medical images, etc.

 診断順序に沿って強調させる順番を決める場合、例えば、診断順序の構成要素毎にキーワードを設定しておき、根拠情報の各構成要素とキーワードとの合致度合から根拠情報の各構成要素の強調する順序を決めることができる。 When determining the order of emphasis in accordance with the diagnostic sequence, for example, a keyword can be set for each component of the diagnostic sequence, and the order of emphasis for each component of the evidence information can be determined based on the degree of match between each component of the evidence information and the keyword.

 例えば、上述のMESDA-Gを使用する場合、診断順序の一番手に関わるキーワードとして「赤、出血、隆起、平ら、陥没」、診断順序の二番手に関わるキーワードとして「境界」、診断順序の三番手に関わるキーワードとして「毛細血管、毛細、テクスチャ、質感」を分類部103に保管しておくか、別の構成に保管されたこれらの情報に分類部103がアクセスできるようにしておく。そして、文章生成部が「明確な境界がある」、「色は赤みを帯びている」という文章を生成した場合、診断順序の一番手に関わるキーワードとの合致度が高い「色は赤みを帯びている」の方を時系列で先に強調する。 For example, when the above-mentioned MESDA-G is used, "red, bleeding, protrusion, flat, depression" as keywords related to the first step of the diagnostic order, "boundary" as a keyword related to the second step, and "microvessel, capillary, texture, surface quality" as keywords related to the third step are stored in the classification unit 103, or the classification unit 103 is given access to this information stored in another component. Then, if the sentence generation unit generates the sentences "there is a clear boundary" and "the color is reddish," the sentence "the color is reddish," which has a higher degree of match with the keywords related to the first step of the diagnostic order, is emphasized first in time.
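The keyword-matching scheme described above can be sketched as follows (illustrative only; the English keywords and function names are assumptions standing in for the stored keyword lists):

```python
# Keywords per diagnostic step (step 1 is emphasized first).
STEP_KEYWORDS = {
    1: ("red", "bleeding", "protrusion", "flat", "depression"),
    2: ("boundary",),
    3: ("microvessel", "capillary", "texture"),
}

def emphasis_order(sentences):
    """Order evidence sentences by the earliest diagnostic step whose
    keywords they contain; sentences matching no keyword come last."""
    def step_of(sentence):
        text = sentence.lower()
        for step in sorted(STEP_KEYWORDS):
            if any(keyword in text for keyword in STEP_KEYWORDS[step]):
                return step
        # Unmatched sentences sort after every defined step.
        return len(STEP_KEYWORDS) + 1
    return sorted(sentences, key=step_of)
```

For "there is a clear boundary" and "the color is reddish," the latter matches the step-1 keyword "red" and is therefore emphasized first, matching the behavior described above.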

 なお、文章生成部102aは、ユーザなどにより設定された医学辞書に記載されている医学用語を用いて、文章の生成を行ってもよい。例えば、画像診断に先立って、複数の医学辞書から任意の辞書を選択・設定しておき、当該辞書を用いて根拠情報の文章を生成してもよい。生成された文章は、分類部103へ出力される。 The sentence generation unit 102a may generate sentences using medical terms contained in a medical dictionary set by the user or the like. For example, prior to image diagnosis, an arbitrary dictionary may be selected and set from multiple medical dictionaries, and the dictionary may be used to generate sentences for the evidence information. The generated sentences are output to the classification unit 103.

 出力部105は、表示装置41に表示させる表示画像を生成して出力する。表示画像には、鑑別結果(疾患名、診断確信度)および強調処理が行われた根拠情報の文章と、強調図とが含まれる。図5A、5B、5C、5Dは、第1実施形態に係る表示画像の一例を説明する図である。図5Aは、表示装置41に表示された表示画像411の一例を示している。表示画像411には、2つの表示領域D1、D2が設けられている。表示領域D1には、図生成部104aで生成された強調図G1が配置され、表示領域D2には、鑑別結果(疾患名、診断確信度)、および、強調部104で生成された根拠情報の文章が配置される。図5Aは、「第1根拠」が強調対象である場合の表示画像である。この場合、強調図G1におけるマーカ画像L1は、「第1根拠」に対応する箇所に付加されている。表示画像411において、根拠情報の文章の太字部分と太線で示すマーカ画像L1は、例えば緑色で表示されている。根拠情報の文章の他の部分は、例えば黒色で表示されている。このように、根拠情報の文章のうち、強調対象の構成要素の表示色を他の構成要素の表示色と異ならせることにより、ユーザは、強調対象の根拠情報を迅速に識別することができる。また、強調対象の構成要素の表示色とマーカ画像L1の表示色とを同一にすることにより、ユーザは、強調対象の根拠情報と、それに対応する病変の位置とを迅速に識別することができる。 The output unit 105 generates and outputs a display image to be displayed on the display device 41. The display image includes the differentiation result (disease name, diagnostic certainty), the text of the evidence information that has undergone emphasis processing, and a highlighted diagram. Figures 5A, 5B, 5C, and 5D are diagrams illustrating an example of a display image according to the first embodiment. Figure 5A shows an example of a display image 411 displayed on the display device 41. The display image 411 has two display areas D1 and D2. The highlighted diagram G1 generated by the diagram generation unit 104a is placed in display area D1, and the differentiation result (disease name, diagnostic certainty) and the text of the evidence information generated by the emphasis unit 104 are placed in display area D2. Figure 5A shows the display image when the "first basis" is the emphasis target. In this case, the marker image L1 in the highlighted diagram G1 is added to the location corresponding to the "first basis." In the display image 411, the bolded portion of the evidence information text and the marker image L1 indicated by a bold line are displayed in green, for example. The rest of the evidence information text is displayed in black, for example. 
In this way, by making the display color of the component to be highlighted different from the display colors of the other components in the evidence information text, the user can quickly identify the evidence information to be highlighted. Furthermore, by making the display color of the component to be highlighted the same as the display color of the marker image L1, the user can quickly identify the evidence information to be highlighted and the location of the corresponding lesion.

 なお、強調対象の構成要素やマーカ画像の表示色は、強調対象の構成要素ごとに異なる色にすることが望ましい。図5Bは、上述した胃癌の鑑別の例において、「第1根拠」である「明確な境界があり」が強調対象である場合の表示画像の一例である。図5Cは、上述した胃癌の鑑別の例において、「第2根拠」である「イレギュラーなテクスチャを有しており」が強調対象である場合の表示画像の一例である。図5Dは、上述した胃癌の鑑別の例において、「第3根拠」である「色は赤みを帯びている」が強調対象である場合の表示画像の一例である。 Note that it is desirable that the display colors of the components to be highlighted and of the marker images differ for each component to be highlighted. In the gastric cancer differentiation example described above, Figure 5B is an example of the display image when the "first basis," "there is a clear boundary," is the emphasis target; Figure 5C is an example of the display image when the "second basis," "it has an irregular texture," is the emphasis target; and Figure 5D is an example of the display image when the "third basis," "the color is reddish," is the emphasis target.

 図5Bに示す表示画像411において、根拠情報の文章の強調対象(太字部分)と太線で示すマーカ画像L1は、例えば緑色で表示されている。図5Cに示す表示画像411において、根拠情報の文章の強調対象(左斜体の斜字部分)と二重線で示すマーカ画像L2は、例えば青色で表示されている。図5Dに示す表示画像411において、根拠情報の文章の強調対象(右斜体の斜字部分)と太点線で示すマーカ画像L3は、例えば赤色で表示されている。このように、強調対象に分類された構成要素が異なる場合、異なる色を用いて表示するようにしてもよい。 In the display image 411 shown in Figure 5B, the emphasis target in the evidence information text (bold portion) and the marker image L1 indicated by a thick line are displayed in green, for example. In the display image 411 shown in Figure 5C, the emphasis target in the evidence information text (left-italic portion) and the marker image L2 indicated by a double line are displayed in blue, for example. In the display image 411 shown in Figure 5D, the emphasis target in the evidence information text (right-italic portion) and the marker image L3 indicated by a thick dotted line are displayed in red, for example. In this way, components classified as different emphasis targets may be displayed in different colors.

 このように、第1実施形態の画像処理装置は、医用画像の鑑別の結果に基づき表示画像を生成する際に、表示画像に表示させる根拠情報の文章において、強調対象の構成要素の表示色を他の構成要素の表示色と異ならせている。これにより、ユーザは、強調対象の根拠情報を迅速に識別することができる。また、第1実施形態の画像処理装置は、鑑別が行われた医用画像に対し、強調対象の構成要素に対応する箇所にマーカ画像を付加して、強調図を生成する。このとき、マーカ画像の表示色を、強調対象の構成要素の表示色と同一にしている。これにより、ユーザは、強調対象の根拠情報と、それに対応する病変の位置とを迅速に識別することができる。 In this way, when generating a display image based on the result of differentiation of a medical image, the image processing device of the first embodiment makes the display color of the components to be emphasized in the displayed evidence information text different from the display color of the other components. This allows the user to quickly identify the evidence information to be emphasized. Furthermore, the image processing device of the first embodiment adds marker images to the differentiated medical image at the locations corresponding to the components to be emphasized, to generate a highlighted diagram. At this time, the display color of each marker image is made the same as the display color of the corresponding component to be emphasized. This allows the user to quickly identify the evidence information to be emphasized and the location of the corresponding lesion.

 なお、文章生成部102aにおいて根拠情報の文章の生成においては、医学辞書の設定のほかに、色にかかわる記述方法の設定を切替可能にしてもよい。例えば、通常モードと詳細モードの2つのモードを設定しておき、詳細モードが設定された場合には、色を含む構成要素については、色についての説明を詳細に記述した文章が生成されるようにしてもよい。
(第2実施形態)
In generating the text of the evidence information in the sentence generation unit 102a, in addition to the medical dictionary setting, the setting of the description method for colors may be switchable. For example, two modes, a normal mode and a detailed mode, may be provided; when the detailed mode is set, a sentence describing the color in detail may be generated for components that involve color.
Second Embodiment

 上述した第1実施形態では、強調対象に分類された構成要素を視覚的に強調させる処理として、表示画像における強調対象の構成要素の表示色を、非強調対象の構成要素の表示色と異なる色に設定している。これに対し、本実施形態では、表示画像における強調対象の構成要素のテクスチャを、非強調対象の構成要素のテクスチャと異ならせている。すなわち、本実施形態は、強調部104における処理が第1実施形態と異なる。 In the first embodiment described above, the process of visually highlighting components classified as those to be highlighted is to set the display color of the components to be highlighted in the displayed image to a color different from the display color of components not to be highlighted. In contrast, in this embodiment, the texture of the components to be highlighted in the displayed image is made different from the texture of the components not to be highlighted. In other words, the processing performed by the highlighting unit 104 in this embodiment differs from the first embodiment.

 本実施形態ならびに以降の実施形態における画像処理装置は、第1実施形態の画像処理装置1と同様の構成を有している。同じ構成要素については同じ符号を付して説明は省略する。 The image processing device in this embodiment and the following embodiments has the same configuration as the image processing device 1 in the first embodiment. The same components are designated by the same reference numerals and their descriptions are omitted.

 本実施形態における強調部104では、強調対象の構成要素の文字に下線を付したり、非強調対象の構成要素の文字よりも太字にしたりすることで、視覚的に強調させている。図6A、6B、6Cは、第2実施形態に係る表示画像の一例を説明する図である。図6Aは、上述した胃癌の鑑別の例において、「第1根拠」である「明確な境界があり」が強調対象である場合の表示画像の一例である。図6Bは、上述した胃癌の鑑別の例において、「第2根拠」である「イレギュラーなテクスチャを有しており」が強調対象である場合の表示画像の一例である。図6Cは、上述した胃癌の鑑別の例において、「第3根拠」である「色は赤みを帯びている」が強調対象である場合の表示画像の一例である。 In this embodiment, the highlighting unit 104 visually highlights the text of the component to be highlighted by underlining it or by making it bolder than the text of the component not to be highlighted. Figures 6A, 6B, and 6C are diagrams illustrating an example of a display image according to the second embodiment. Figure 6A is an example of a display image in the example of gastric cancer differentiation described above, where the "first basis" "there is a clear boundary" is to be highlighted. Figure 6B is an example of a display image in the example of gastric cancer differentiation described above, where the "second basis" "has an irregular texture" is to be highlighted. Figure 6C is an example of a display image in the example of gastric cancer differentiation described above, where the "third basis" "the color is reddish" is to be highlighted.

 図6A~図6Cにおいて、強調図G1におけるマーカ画像L4は、強調対象の構成要素に対応する箇所に付加されている。表示画像411において、根拠情報の文章における強調対象の構成要素は、太字下線付きで示されている。根拠情報の文章の他の部分は、細字下線なしで示されている。このように、根拠情報の文章のうち、強調対象の構成要素の文字のテクスチャを他の構成要素のテクスチャと異ならせることにより、ユーザは、強調対象の根拠情報を迅速に識別することができる。 In Figures 6A to 6C, the marker image L4 in the highlighted diagram G1 is added to the location corresponding to the component to be highlighted. In the display image 411, the component to be highlighted in the evidence information text is shown in bold with underlining, while the rest of the evidence information text is shown in regular weight without underlining. In this way, by making the character texture of the component to be highlighted different from that of the other components in the evidence information text, the user can quickly identify the evidence information to be highlighted.

 なお、上述では、文字の太さや下線の有無により、強調対象の構成要素とその他の構成要素とのテクスチャを異ならせているが、文字の大きさやフォント、文字の傾き度合などを異ならせてもよい。また、これらを組み合わせてもよい。
(第3実施形態)
In the above description, the texture of the component to be emphasized is made different from that of other components depending on the thickness of the characters and whether or not they are underlined, but it is also possible to make the texture different by changing the size, font, or tilt of the characters, or by combining these.
(Third embodiment)

 上述した第1実施形態では、特定の一の構成要素を強調対象とした場合の強調図が配置された表示画像を生成している。従って、別の構成要素を強調対象とする場合、当該構成要素に対応する強調図が配置された表示画像を生成する必要があった。これに対し、本実施形態では、1つの表示画像において、各構成要素のそれぞれを強調対象とした場合の強調図を配置している点が、第1実施形態と異なる。 In the first embodiment described above, a display image is generated in which a highlighted diagram for one specific component designated as the emphasis target is placed. Therefore, to emphasize a different component, a display image containing the highlighted diagram corresponding to that component had to be generated separately. In contrast, this embodiment differs from the first embodiment in that highlighted diagrams, one for each component designated as the emphasis target, are placed in a single display image.

 本実施形態における出力部105で生成される表示画像411には、強調処理が行われた根拠情報の文章と、鑑別が行われた医用画像と、強調図とが含まれる。図7は、第3実施形態に係る表示画像の一例を説明する図である。表示画像411には、3つの表示領域D1、D2、D3が設けられている。表示領域D1には、強調図G1、G2、G3が配置され、表示領域D2には、根拠情報の文章が配置される。また、表示領域D3には、鑑別が行われた医用画像G0が配置される。 In this embodiment, the display image 411 generated by the output unit 105 includes text of the evidence information that has been subjected to emphasis processing, a medical image that has been differentiated, and an emphasis diagram. Figure 7 is a diagram illustrating an example of a display image according to the third embodiment. The display image 411 has three display areas D1, D2, and D3. Highlighted diagrams G1, G2, and G3 are arranged in display area D1, and text of the evidence information is arranged in display area D2. Furthermore, the medical image G0 that has been differentiated is arranged in display area D3.

 表示領域D2に配置された根拠情報の文章は、構成要素ごとに異なる方法で視覚的に強調されている。例えば、「第1根拠」である「明確な境界があり」は緑色、「第2根拠」である「イレギュラーなテクスチャを有しており」は青色、「第3根拠」である「色は赤みを帯びている」は赤色で表示されている。表示領域D1に配置された強調図G1におけるマーカ画像L1は、「第1根拠」に対応する箇所に付加されている。マーカ画像L1の色は、「第1根拠」の表示色と同じ色になされている。強調図G2におけるマーカ画像L2は、「第2根拠」に対応する箇所に付加されている。マーカ画像L2の色は、「第2根拠」の表示色と同じ色になされている。強調図G3におけるマーカ画像L3は、「第3根拠」に対応する箇所に付加されている。マーカ画像L3の色は、「第3根拠」の表示色と同じ色になされている。 The text of the evidence information arranged in display area D2 is visually highlighted in different ways for each component. For example, the "first evidence" "has a clear boundary" is displayed in green, the "second evidence" "has an irregular texture" is displayed in blue, and the "third evidence" "is reddish in color" is displayed in red. Marker image L1 in the highlighted image G1 arranged in display area D1 is added to the location corresponding to the "first evidence." The color of marker image L1 is the same as the display color of the "first evidence." Marker image L2 in the highlighted image G2 is added to the location corresponding to the "second evidence." The color of marker image L2 is the same as the display color of the "second evidence." Marker image L3 in the highlighted image G3 is added to the location corresponding to the "third evidence." The color of marker image L3 is the same as the display color of the "third evidence."

 このように、根拠情報の文章の各構成要素に対応する強調図を1つの表示画像に配置することで、構成要素間の比較を迅速に行うことができる。
(第4実施形態)
In this way, by arranging the highlighted images corresponding to the respective components of the sentence of the evidence information in one display image, it is possible to quickly compare the components.
(Fourth embodiment)

 上述した第1実施形態では、特定の一の構成要素を強調対象とした場合の強調図が配置された表示画像を生成している。これに対し、本実施形態では、当該構成要素と類似した病変を含む参照画像も配置した表示画像を生成する点が異なっている。参照画像は、例えば、医学図鑑などに掲載されているアトラス画像において、当該構成要素と類似した病変を含む画像である。 In the first embodiment described above, a display image is generated in which an emphasis image is placed when a specific component is the emphasis target. In contrast, this embodiment differs in that a display image is generated in which a reference image containing a lesion similar to the component is also placed. The reference image is, for example, an atlas image published in a medical encyclopedia or the like, which contains a lesion similar to the component.

 本実施形態における出力部105で生成される表示画像411には、強調処理が行われた根拠情報の文章と、鑑別が行われた医用画像と、参照画像とが含まれる。図8は、第4実施形態に係る表示画像の一例を説明する図である。表示画像411には、3つの表示領域D1、D2、D4が設けられている。表示領域D1には、強調図G1が配置され、表示領域D2には、根拠情報の文章が配置される。また、表示領域D4には、参照画像G11が配置される。参照画像G11には、マーカ画像L11が付加されている。マーカ画像L11は、参照画像G11において、強調対象の構成要素に対応する箇所に付加される。マーカ画像L11は、マーカ画像L1と視覚的に統一感のある画像とする。例えば、マーカ画像L11の表示色は、マーカ画像L1の表示色、及び、強調対象の構成要素の表示色と同一にする。 In this embodiment, the display image 411 generated by the output unit 105 includes text of the evidence information that has been subjected to emphasis processing, a medical image that has been differentiated, and a reference image. Figure 8 is a diagram illustrating an example of a display image according to the fourth embodiment. The display image 411 has three display areas D1, D2, and D4. An emphasis image G1 is placed in display area D1, and text of the evidence information is placed in display area D2. Furthermore, a reference image G11 is placed in display area D4. A marker image L11 is added to the reference image G11. The marker image L11 is added to a location in the reference image G11 that corresponds to the component to be emphasized. The marker image L11 is an image that is visually consistent with the marker image L1. For example, the display color of the marker image L11 is the same as the display color of the marker image L1 and the display color of the component to be emphasized.

 このように、根拠情報の文章と、強調図と、参照画像を1つの表示画像に配置することで、他の症例との比較を迅速に行うことができる。
(第5実施形態)
In this way, by arranging the evidence information text, the highlighted image, and the reference image in one displayed image, comparison with other cases can be made quickly.
Fifth Embodiment

 本実施形態は、鑑別における診断の順番に基づいて、表示画像を順次切り替えて表示させる点が、上述の各実施形態と異なる。例えば、図4に示すMESDA-Gによる診断手順を用いて鑑別を行った場合、まず、S1に示す診断に基づく構成要素を強調対象とした表示画像(図9A)を表示する。続いて、S2に示す診断に基づく構成要素を強調対象とした表示画像(図9B)を表示する。図9A、9Bは、第5実施形態に係る表示画像の一例を説明する図である。 This embodiment differs from the above-described embodiments in that the display images are switched sequentially based on the order of diagnoses in the differentiation. For example, when differentiation is performed using the MESDA-G diagnostic procedure shown in Figure 4, a display image (Figure 9A) in which the components based on the diagnosis shown in S1 are highlighted is displayed first. Next, a display image (Figure 9B) in which the components based on the diagnosis shown in S2 are highlighted is displayed. Figures 9A and 9B are diagrams illustrating an example of a display image according to the fifth embodiment.

 表示領域D1に示す鑑別結果は、疾患名、診断確信度、に加えて、各診断における判定結果(例えば、図9Aにおける"DL+"や、図9Bにおける"IMVP/IMSP+")も表示させることが好ましい。その場合、判定結果は、強調対象の構成要素の表示色と同一色で表示させることが好ましい。 The differential diagnosis results shown in display area D1 preferably display not only the disease name and diagnostic certainty, but also the judgment result for each diagnosis (for example, "DL+" in Figure 9A or "IMVP/IMSP+" in Figure 9B). In this case, it is preferable to display the judgment result in the same color as the component being highlighted.

 なお、強調対象の構成要素の表示色やマーカ画像の表示色は、判定結果が"+"の場合は赤色、"-"の場合は緑色など、判定の状況に応じて切り替えてもよい。
(第6実施形態)
The display color of the component to be highlighted and the display color of the marker image may be changed depending on the judgment status, such as red if the judgment result is "+" and green if the judgment result is "-".
Sixth Embodiment

 本実施形態は、鑑別の結果として、病変が疾患であると判定された場合のみ、表示画像に根拠情報を表示し、疾患でないと判定された場合には根拠情報を表示しない点が、上述の実施形態と異なる。 This embodiment differs from the above-described embodiment in that, as a result of the differentiation, evidence information is displayed on the displayed image only if the lesion is determined to be a disease, and evidence information is not displayed if the lesion is determined not to be a disease.

 図10は、第6実施形態に係る表示画像の一例を説明する図である。病変が疾患であると判定された場合の表示画像は、上述の実施形態と同様である(例えば、図5Bなど)。病変が疾患でないと判定された場合には、例えば図10に示す表示画面が生成される。すなわち、表示領域D1には、強調図G1aが配置され、表示領域D2には、否定された疾患名、診断確信度が配置される。強調図G1aは、医用画像の病変に対応する箇所にマーカ画像L5を付加して生成された図である。 FIG. 10 is a diagram illustrating an example of a display image according to the sixth embodiment. The display image when the lesion is determined to be a disease is the same as that of the above-described embodiments (for example, FIG. 5B). When the lesion is determined not to be a disease, for example, a display screen as shown in FIG. 10 is generated. That is, an emphasized image G1a is placed in display area D1, and the rejected disease name and diagnostic certainty are placed in display area D2. The emphasized image G1a is generated by adding a marker image L5 to the location of the lesion on the medical image.

 このように、鑑別の結果、疾患でないと判定された場合には、表示画像の情報量を減らすことにより、ユーザは、迅速に画像診断を行うことができる。
(第7実施形態)
In this way, if the result of the differential diagnosis is that there is no disease, the amount of information in the displayed image is reduced, allowing the user to quickly perform image diagnosis.
Seventh Embodiment

 本実施形態は、診断における判断根拠の重要度に基づいて、表示画像を順次切り替えて表示させる点が、上述の各実施形態と異なる。図11A、11B、11Cは、第7実施形態に係る表示画像の一例を説明する図である。図11A、11B、11Cは、それぞれ、図5B、5C、5Dに示す表示画像に対し、判断根拠の重要度が付加されている。 This embodiment differs from the above-described embodiments in that the display images are sequentially switched based on the importance of the basis for the diagnosis. Figures 11A, 11B, and 11C are diagrams illustrating an example of a display image according to the seventh embodiment. Figures 11A, 11B, and 11C are the display images shown in Figures 5B, 5C, and 5D, respectively, to which the importance of the basis for the diagnosis has been added.

 例えば、DLの同定結果が疾患の判定に対する重要度が最も高く(例えば、重要度100)、IMVP/IMSPの観察結果が次に重要度が高く(例えば、重要度70)、胃粘膜表面の色の変化の重要度が最も低い(例えば、重要度30)場合、図11Aに示す表示画像が最初に表示される。続いて、図11Bに示す表示画像が表示される。最後に、図11Cに示す表示画像が表示される。 For example, if the DL identification result has the highest importance for disease diagnosis (e.g., importance 100), the IMVP/IMSP observation result has the next highest importance (e.g., importance 70), and the color change of the gastric mucosal surface has the lowest importance (e.g., importance 30), the display image shown in Figure 11A will be displayed first. Next, the display image shown in Figure 11B will be displayed. Finally, the display image shown in Figure 11C will be displayed.
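The importance-based switching described above amounts to sorting the findings by their importance scores before display (a sketch; the names and scores merely restate the example above):

```python
# (finding, importance) pairs from the example above; findings with
# higher importance are displayed first.
findings = [
    ("color change of the gastric mucosal surface", 30),
    ("DL identification", 100),
    ("IMVP/IMSP observation", 70),
]

# Sort in descending order of importance to obtain the display order.
display_order = [name for name, importance in
                 sorted(findings, key=lambda f: f[1], reverse=True)]
```

With these scores, the display images corresponding to DL identification, IMVP/IMSP observation, and the color change would be shown in that order.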

 このように、判断根拠の重要度が高い順に、構成要素を強調した画像を表示させることで、ユーザは迅速に画像診断を行うことができる。
(第8実施形態)
In this way, by displaying an image in which components are highlighted in descending order of importance of the basis for judgment, the user can quickly perform image diagnosis.
Eighth Embodiment

 本実施形態は、鑑別の結果として、病変が疾患であると判定され、かつ、診断確信度が高い(設定された値よりも高い)場合のみ、表示画像に根拠情報を表示し、疾患でないと判定された場合や、疾患であると判定されても診断確信度が低い場合には根拠情報を表示しない点が、上述の実施形態と異なる。 This embodiment differs from the above-described embodiment in that, as a result of the differentiation, evidence information is displayed on the displayed image only when the lesion is determined to be a disease and the diagnostic certainty is high (higher than a set value); it does not display evidence information when the lesion is determined not to be a disease, or when the diagnostic certainty is low even if the lesion is determined to be a disease.

 図12は、第8実施形態に係る表示画像の一例を説明する図である。病変が疾患であると判定された場合の表示画像は、上述の実施形態と同様である(例えば、図5Bなど)。また病変が疾患でないと判定された場合の表示画像は、例えば、図10に示す画像である。病変が疾患と判定されたものの診断確信度が低い場合には、例えば図12に示す表示画面が生成される。すなわち、表示領域D1には、強調図G1aが配置され、表示領域D2には、判定疾患名、診断確信度が配置される。強調図G1aは、医用画像の病変に対応する箇所にマーカ画像L5を付加して生成された図である。 FIG. 12 is a diagram illustrating an example of a display image according to the eighth embodiment. The display image when the lesion is determined to be a disease is the same as that of the above-described embodiments (for example, FIG. 5B). The display image when the lesion is determined not to be a disease is, for example, the image shown in FIG. 10. When the lesion is determined to be a disease but the diagnostic certainty is low, a display screen such as that shown in FIG. 12 is generated. That is, an emphasized image G1a is placed in display area D1, and the determined disease name and diagnostic certainty are placed in display area D2. The emphasized image G1a is generated by adding a marker image L5 to the location of the lesion on the medical image.

 このように、鑑別の結果、病変が疾患と判定されたものの診断確信度が低い場合にも、疾患でないと判定された場合と同様に表示画像の情報量を減らすことにより、ユーザは、迅速に画像診断を行うことができる。
(第9実施形態)
In this way, even if the lesion is determined to be a disease as a result of differentiation but the diagnostic certainty is low, the amount of information in the displayed image can be reduced, just as in the case where the lesion is determined to be not a disease, allowing the user to perform image diagnosis quickly.
Ninth Embodiment

 上述の実施形態では、表示装置41に根拠を文章として提示する場合について紹介したが、本発明はこれに限定されず、根拠情報を音声で医師にインプットしてもよい。音声と実施例1~8とを組み合わせても良い。 In the above embodiments, the case where the basis is presented as text on the display device 41 has been described; however, the present invention is not limited to this, and the evidence information may be input to the doctor by voice. Voice may also be combined with the first to eighth embodiments.

 この場合、強調部104は強調対象の音量を非強調対象よりも大きくする、強調対象のみ最初に読み上げて、続いて根拠を全て読み上げる等の強調処理を施した音声データを合成し、出力部105を経由して音響装置42に送信する。音響装置42は、一般的なスピーカーであってもよいし、指向性スピーカー、イヤホン、ヘッドホン、または骨伝導装置であってもよい。指向性スピーカー、イヤホン、ヘッドホン、または骨伝導装置を用いる場合、これらを通じて特定の相手(医師)にのみ根拠情報を送信することで、被験者が根拠情報を目にする可能性を減らすことができる。 In this case, the emphasis unit 104 synthesizes audio data that has been subjected to emphasis processing, such as increasing the volume of the emphasized items compared to the non-emphasized items, or reading out only the emphasized items first, followed by all of the evidence, and transmits this data to the audio device 42 via the output unit 105. The audio device 42 may be a general speaker, or it may be a directional speaker, earphones, headphones, or a bone conduction device. When a directional speaker, earphones, headphones, or bone conduction device is used, the evidence information can be transmitted only to a specific party (the doctor) via these, thereby reducing the possibility that the subject will see the evidence information.
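The volume-based emphasis mentioned above could be sketched as assigning a per-component playback gain before speech synthesis (illustrative only; the gain values and function name are assumptions):

```python
def assign_gains(components, emphasized, base_gain=1.0, boost=1.6):
    """Return (component, gain) pairs for speech synthesis: components
    in the emphasized set are read out louder than the others."""
    return [(c, base_gain * boost if c in emphasized else base_gain)
            for c in components]
```

The synthesized audio would then play each component's sentence at its assigned gain, so the emphasis target stands out to the listener.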

The present invention is not limited to the above-described embodiments as they are, and in the implementation stage the components can be modified and embodied without departing from the gist of the invention. Various inventions can also be formed by appropriately combining the plurality of components disclosed in the above embodiments. For example, some of the components shown in the embodiments may be deleted. Furthermore, components from different embodiments may be combined as appropriate.

Regarding the operation flows in the claims, specification, and drawings, even where they are described using terms such as "first" and "next" for convenience, this does not mean that the steps must be performed in that order. It also goes without saying that steps constituting these operation flows may be omitted as appropriate insofar as they do not affect the essence of the invention.

Of the techniques described here, the control described mainly with reference to the flowcharts can often be set by a program, and may be stored in a recording medium or a recording unit. The program may be recorded in the recording medium or recording unit at the time of product shipment, may be provided on a distributed recording medium, or may be downloaded via the Internet.

In the embodiments, the parts described as "units" (sections or units) may be configured as dedicated circuits or as combinations of a plurality of general-purpose circuits, and may, as necessary, be configured by combining processors such as microcomputers and CPUs that operate according to pre-programmed software, or sequencers such as FPGAs. A design in which an external device takes over part or all of the control is also possible; in that case, a wired or wireless communication circuit is interposed. Communication may be performed via Bluetooth (registered trademark), Wi-Fi, a telephone line, or the like, and may also be performed via USB or the like. The dedicated circuits, general-purpose circuits, and control unit may be integrated and configured as an ASIC.

Claims (16)

1. An image processing device comprising:
a differentiation result receiving unit that receives a result of differentiation of a lesion from an image;
a sentence generation unit that generates rationale information for the differentiation as a sentence;
a classification unit that classifies components of the sentence into emphasis targets and non-emphasis targets;
an emphasis unit that emphasizes the emphasis targets; and
an output unit that outputs at least the emphasis targets of the sentence to a monitor.
 前記出力部は前記鑑別の結果もモニタに出力し、
 前記強調対象に紐づく強調図を生成する図生成部、を含み、
 前記出力部は、
 前記強調対象の出力状態と、前記強調図の表示とが連動するように前記強調図を前記鑑別の結果に紐づけて前記モニタに出力する
 請求項1に記載の画像処理装置。
The output unit also outputs the discrimination result to a monitor,
a picture generating unit that generates an emphasis picture associated with the emphasis target,
The output unit
The image processing device according to claim 1 , wherein the emphasized image is linked to the result of the discrimination and output to the monitor so that the output state of the object to be emphasized and the display of the emphasized image are linked.
 前記出力部は前記文章のうち前記強調対象、および、前記非強調対象の両方を出力し、
 前記強調部は、
 強調対象に下線を引いて、非強調対象に下線を引かない、
 強調対象のフォントを、非強調対象よりも太字にする、
 強調対象のフォントサイズを、非強調対象のフォントサイズよりも大きくする、
 強調対象のフォントの種類を、非強調対象のフォントの種類と異ならせる、
 強調対象の色を、非強調対象とは異なる色にする
 強調対象の傾斜角度を、非強調対象の傾斜角度と異ならせる、および、
 強調対象の点滅速度を、非強調対象の点滅速度よりも早くする
 のうちの少なくとも1種を実施する請求項1に記載の画像処理装置。
the output unit outputs both the emphasis target and the non-emphasis target of the sentence,
The highlighting section is
Underline what is emphasized and do not underline what is not emphasized,
Make the font of emphasized items bolder than that of non-emphasized items,
Make the font size of emphasized items larger than the font size of non-emphasized items,
Use a different font type for the highlighted text than for the unhighlighted text.
The color of the highlighted object is different from that of the non-highlighted object. The tilt angle of the highlighted object is different from that of the non-highlighted object. And
2. The image processing device according to claim 1, wherein the image processing device performs at least one of the following: making the blinking speed of the highlighted object faster than the blinking speed of the non-highlighted object.
4. The image processing device according to claim 1, wherein the figure generation unit makes the color of an outline used in the emphasized figure a color similar to that of the emphasis target.
5. The image processing device according to claim 1, wherein the figure generation unit generates, as the emphasized figure, a figure that encloses or points to a portion of the image corresponding to the descriptive text of the emphasis target.
6. The image processing device according to claim 1, wherein
the classification unit classifies the emphasis targets into a first emphasis target and a second emphasis target,
the emphasis unit emphasizes the second emphasis target with an appearance or at a timing different from that of the first emphasis target,
the figure generation unit generates a first emphasized figure associated with the first emphasis target and a second emphasized figure associated with the second emphasis target, and
the output unit links the emphasis state of the first emphasis target with the display of the first emphasized figure, and links the emphasis state of the second emphasis target with the display of the second emphasized figure.
7. The image processing device according to claim 6, wherein, as a method of emphasizing the second emphasis target with an appearance different from that of the first emphasis target, the emphasis unit displays the second emphasis target in a color different from that of the first emphasis target.
8. The image processing device according to claim 6, wherein the emphasis unit emphasizes the first emphasis target and the second emphasis target at different timings, and whichever of the first emphasis target and the second emphasis target is not at its timing to be emphasized is displayed in the same manner as a non-emphasis target.
9. The image processing device according to claim 6, wherein the emphasis unit emphasizes the first emphasis target and the second emphasis target at different timings, and the output unit does not output to the monitor whichever of the first emphasis target and the second emphasis target is not at its timing to be emphasized.
10. The image processing device according to claim 6, wherein the emphasis unit emphasizes the first emphasis target and the second emphasis target at different timings, and the order of emphasis is based on a predetermined diagnostic order.
11. The image processing device according to claim 1, wherein the emphasis unit emphasizes the emphasis target when the differentiation result matches a predetermined condition.
12. An image processing device comprising one or more processors, wherein the one or more processors:
receive a result of differentiation of an image;
generate rationale information for the differentiation as a sentence;
classify components of the sentence into emphasis targets and non-emphasis targets;
emphasize the emphasis targets; and
output at least the emphasis targets of the sentence to a monitor.
13. An image display method comprising:
receiving a result of differentiation of an image;
generating rationale information for the differentiation as a sentence;
classifying components of the sentence into emphasis targets and non-emphasis targets;
emphasizing the emphasis targets; and
outputting at least the emphasis targets of the sentence to a monitor.
14. An image display program causing a computer to:
receive a result of differentiation of an image;
generate rationale information for the differentiation as a sentence;
classify components of the sentence into emphasis targets and non-emphasis targets;
emphasize the emphasis targets;
generate a figure associated with the emphasis target; and
output at least the emphasis targets of the sentence to a monitor.
15. A storage medium storing an image display program that causes a computer to:
receive a result of differentiation of an image;
generate rationale information for the differentiation as a sentence;
classify components of the sentence into emphasis targets and non-emphasis targets;
emphasize the emphasis targets;
generate a figure associated with the emphasis target; and
output at least the emphasis targets of the sentence to a monitor.
16. An image diagnosis system comprising:
a classification unit that receives rationale information for differentiation of a lesion and classifies components of a sentence constituting the rationale information into emphasis targets and non-emphasis targets;
an emphasis unit that emphasizes the emphasis targets; and
an output unit that outputs at least the emphasis targets of the sentence to a monitor.
PCT/JP2024/016557 2024-04-26 2024-04-26 Image processing device, image display method, image display program, recording medium, and image diagnosis system Pending WO2025225015A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2024/016557 WO2025225015A1 (en) 2024-04-26 2024-04-26 Image processing device, image display method, image display program, recording medium, and image diagnosis system


Publications (1)

Publication Number Publication Date
WO2025225015A1 true WO2025225015A1 (en) 2025-10-30

Family

ID=97489647

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2024/016557 Pending WO2025225015A1 (en) 2024-04-26 2024-04-26 Image processing device, image display method, image display program, recording medium, and image diagnosis system

Country Status (1)

Country Link
WO (1) WO2025225015A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013134744A (en) * 2011-12-27 2013-07-08 Canon Inc Image processing device, image processing system, image processing method and program
JP2020119224A (en) * 2019-01-23 2020-08-06 Phcホールディングス株式会社 Medication guidance support method and medication guidance support device
JP2020144498A (en) * 2019-03-05 2020-09-10 株式会社日立製作所 Image diagnosis support device and image processing method
JP2022161823A (en) * 2021-04-09 2022-10-21 コニカミノルタ株式会社 Failed photography judgment support device, failed photography judgment support system, failed photography judgment support method and program
US20240016366A1 (en) * 2020-11-25 2024-01-18 Aidot Inc. Image diagnosis system for lesion


Similar Documents

Publication Publication Date Title
US12022991B2 (en) Endoscope processor, information processing device, and program
JP4418400B2 (en) Image display device
CN101637379B (en) Image display method and endoscope system
US20160133014A1 (en) Marking And Tracking An Area Of Interest During Endoscopy
CN103025227B (en) Image processing device and method
US20230395250A1 (en) Customization, troubleshooting, and wireless pairing techniques for surgical instruments
EP4091532A1 (en) Medical image processing device, endoscope system, diagnosis assistance method, and program
JPWO2019078102A1 (en) Medical image processing equipment
WO2019008942A1 (en) Medical image processing device, endoscope device, diagnostic support device, medical service support device and report generation support device
US20240382065A1 (en) Information processing apparatus, control method, and non-transitory storage medium
JP2021045337A (en) Medical image processing equipment, processor equipment, endoscopic systems, medical image processing methods, and programs
WO2021210676A1 (en) Medical image processing device, endoscope system, operation method for medical image processing device, and program for medical image processing device
WO2019087969A1 (en) Endoscope system, reporting method, and program
US20230260117A1 (en) Information processing system, endoscope system, and information processing method
US20220351396A1 (en) Medical image data creation apparatus for training, medical image data creation method for training and non-transitory recording medium in which program is recorded
JP2006296569A (en) Image display device
WO2025225015A1 (en) Image processing device, image display method, image display program, recording medium, and image diagnosis system
US12426774B2 (en) Endoscopy support apparatus, endoscopy support method, and computer readable recording medium
US20230410304A1 (en) Medical image processing apparatus, medical image processing method, and program
JP2025037660A (en) Medical support device, endoscope device, medical support method, and program
WO2022270582A1 (en) Examination assistance device, examination assistance method, and examination assistance program
WO2023233323A1 (en) Customization, troubleshooting, and wireless pairing techniques for surgical instruments
CN105899123B (en) Video processor for endoscope and the endoscopic system with the video processor
US20250169676A1 (en) Medical support device, endoscope, medical support method, and program
US20240335093A1 (en) Medical support device, endoscope system, medical support method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24936931

Country of ref document: EP

Kind code of ref document: A1