
WO2025173164A1 - Endoscopic examination support device, endoscopic examination support method, and recording medium - Google Patents

Endoscopic examination support device, endoscopic examination support method, and recording medium

Info

Publication number
WO2025173164A1
WO2025173164A1 (PCT/JP2024/005237; JP2024005237W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
endoscopic
lesion area
support device
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/JP2024/005237
Other languages
English (en)
Japanese (ja)
Inventor
弘泰 齊賀
貴行 奥野
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NEC Corp
Original Assignee
NEC Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Corp filed Critical NEC Corp
Priority to PCT/JP2024/005237 priority Critical patent/WO2025173164A1/fr
Publication of WO2025173164A1 publication Critical patent/WO2025173164A1/fr
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 — Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/04 — Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes, combined with photographic or television appliances
    • A61B 1/045 — Control thereof

Definitions

  • This disclosure relates to assistance in endoscopic examinations.
  • Even with the technique of Patent Document 1, it is not necessarily possible to accurately diagnose lesions.
  • One objective of the present disclosure is to provide an endoscopic examination support device that is capable of accurately diagnosing lesions during endoscopic examinations.
  • an endoscopic examination support device includes: an image acquisition means for acquiring an endoscopic image; a gaze point detection means for detecting a gaze point of a user with respect to the endoscopic image; a mask image generation means for generating a mask image based on the gaze point; a first lesion area estimation means for estimating a first lesion area in the endoscopic image based on the endoscopic image and the mask image; and a first extraction means for generating a first extracted image by extracting the first lesion area from the endoscopic image.
  • an endoscopic examination support method includes: performing image acquisition to acquire an endoscopic image; performing gaze point detection to detect a gaze point of a user on the endoscopic image; generating a mask image based on the gaze point; performing first lesion area estimation based on the endoscopic image and the mask image to estimate a first lesion area in the endoscopic image; and performing first extraction to generate a first extracted image by extracting the first lesion area from the endoscopic image.
  • a recording medium records a program for causing a computer to execute: image acquisition to acquire an endoscopic image; gaze point detection to detect a gaze point of a user on the endoscopic image; mask image generation based on the gaze point; first lesion area estimation based on the endoscopic image and the mask image to estimate a first lesion area in the endoscopic image; and first extraction processing to generate a first extracted image by extracting the first lesion area from the endoscopic image.
  • This disclosure makes it possible to accurately diagnose lesions during endoscopic examinations.
  • FIG. 1 is a block diagram showing a schematic configuration of an endoscopic examination system according to the present disclosure.
  • FIG. 2 is a block diagram showing a hardware configuration of an endoscopic examination support device according to the present disclosure.
  • FIG. 3 is a block diagram showing a functional configuration of an endoscopic examination support device according to the present disclosure.
  • FIG. 4 is a flowchart of processing by the endoscopic examination support device according to the present disclosure.
  • FIG. 5 shows an example of a lesion area evaluation screen.
  • FIG. 6 shows an example of an editing screen for a lesion area.
  • FIG. 7 shows an example of a confirmation screen and a diagnosis result.
  • FIG. 8 is a block diagram showing a functional configuration of another endoscopic examination support device according to the present disclosure.
  • FIG. 9 shows an example of a lesion area selection screen.
  • FIG. 10 is a flowchart of processing by another endoscopic examination support device according to the present disclosure.
  • FIG. 11 is a block diagram showing a functional configuration of another endoscopic examination support device according to the present disclosure.
  • FIG. 12 shows an example of a selection screen for diagnosis results.
  • FIG. 13 is a flowchart of processing by another endoscopic examination support device according to the present disclosure.
  • FIG. 14 is a block diagram showing a functional configuration of another endoscopic examination support device according to the present disclosure.
  • FIG. 15 is a flowchart of processing by another endoscopic examination support device according to the present disclosure.
  • FIG. 1 shows a schematic configuration of an endoscopic examination system 100.
  • the endoscopic examination system 100 diagnoses the lesion area using AI and displays the diagnosis result.
  • the endoscopic examination system 100 of this embodiment estimates the lesion area based on the line of sight of the doctor directed at the display device that displays the endoscopic image.
  • the endoscopic examination system 100 then extracts only the lesion area from the endoscopic image and diagnoses the lesion area using AI. In this way, the endoscopic examination system 100 of this embodiment can use the lesion area from which unnecessary background has been removed for AI diagnosis, thereby enabling accurate lesion diagnosis.
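  • The flow just described can be summarized as a short pipeline. The sketch below is illustrative only: it composes five stand-in callables, one per component detailed in the following sections, and none of these names are APIs defined by this disclosure.

```python
from typing import Callable
import numpy as np

def diagnose_from_gaze(endoscopic_image: np.ndarray,
                       eyeball_image: np.ndarray,
                       detect_gaze_points: Callable,
                       generate_mask: Callable,
                       segment_lesion: Callable,
                       remove_background: Callable,
                       classify_lesion: Callable) -> str:
    """Compose the five steps of the system; each callable is a stand-in
    for the corresponding unit of the support device."""
    gaze_points = detect_gaze_points(eyeball_image)                  # gaze point detection
    mask = generate_mask(gaze_points, endoscopic_image.shape[:2])    # mask image generation
    lesion_mask = segment_lesion(endoscopic_image, mask)             # first lesion area estimation
    lesion_only = remove_background(endoscopic_image, lesion_mask)   # first extraction
    return classify_lesion(lesion_only)                              # AI diagnosis
```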
  • the endoscopic examination system 100 mainly comprises an endoscopic examination support device 1, a display device 2, an endoscope 3 connected to the endoscopic examination support device 1, and an eye tracking device 4.
  • the endoscopic examination support device 1 acquires from the endoscope 3 the images (i.e., video images; hereinafter also referred to as "endoscopic images Ic") captured by the endoscope 3 during an endoscopic examination, and displays on the display device 2 display data for the examiner (doctor) performing the endoscopic examination to check. Specifically, the endoscopic examination support device 1 acquires video images of the inside of organs captured by the endoscope 3 during an endoscopic examination as endoscopic images Ic. Furthermore, if the doctor finds a lesion during an endoscopic examination, he or she operates the endoscope 3 to input an instruction to capture the lesion location. The endoscopic examination support device 1 generates an endoscopic image that captures the lesion location based on the doctor's imaging instruction. Specifically, the endoscopic examination support device 1 generates a still endoscopic image from the video images Ic.
  • the display device 2 is a display or the like that displays a predetermined image based on a display signal supplied from the endoscopic examination support device 1.
  • the endoscope 3 mainly comprises an operating unit 36 that allows the doctor to input instructions for air supply, water supply, angle adjustment, and imaging, a flexible shaft 37 that is inserted into the subject's organ to be examined, a tip 38 that incorporates an imaging unit such as a miniature image sensor, and a connection unit 39 for connecting to the endoscopic examination support device 1.
  • [Hardware configuration] Fig. 2 shows the hardware configuration of the endoscopic examination support device 1.
  • the endoscopic examination support device 1 mainly includes a processor 11, a memory 12, an interface 13, an input unit 14, a light source unit 15, a sound output unit 16, and a database (hereinafter referred to as "DB") 17. These elements are connected via a data bus 19.
  • Processor 11 performs predetermined processing by executing programs stored in memory 12.
  • Processor 11 is a processor such as a CPU (Central Processing Unit), GPU (Graphics Processing Unit), or TPU (Tensor Processing Unit). Note that processor 11 may be composed of multiple processors.
  • Processor 11 is an example of a computer.
  • the memory 12 is composed of various memories such as RAM (Random Access Memory), used as working memory, and ROM (Read Only Memory), as well as non-volatile memory that stores information necessary for the processing of the endoscopic examination support device 1.
  • the memory 12 may include an external storage device such as a hard disk connected to or built into the endoscopic examination support device 1, or may include a storage medium such as a removable flash memory or disk medium.
  • the memory 12 stores programs that enable the endoscopic examination support device 1 to execute each process in this embodiment.
  • the memory 12 temporarily stores a series of endoscopic images Ic captured by the endoscope 3 during an endoscopic examination.
  • the memory 12 also temporarily stores endoscopic images captured during an endoscopic examination based on imaging instructions from a doctor. These images are stored in the memory 12 in association with, for example, the subject's identification information (e.g., patient ID), timestamp information, etc.
  • the interface 13 acts as an interface between the endoscopic examination support device 1 and an external device.
  • the interface 13 supplies the display data Id generated by the processor 11 to the display device 2.
  • the interface 13 also supplies illumination light generated by the light source unit 15 to the endoscope 3.
  • the interface 13 also supplies an electrical signal indicating the endoscopic image Ic supplied from the endoscope 3 to the processor 11.
  • the interface 13 also supplies an electrical signal indicating the eyeball image supplied from the eye tracking device 4 to the processor 11.
  • the interface 13 may be a communications interface such as a network adapter for wired or wireless communication with an external device, or may be a hardware interface compliant with USB (Universal Serial Bus), SATA (Serial AT Attachment), etc.
  • the input unit 14 generates input signals based on the doctor's operations.
  • the input unit 14 is, for example, a button, a touch panel, a remote controller, or an audio input device.
  • the light source unit 15 generates light to be supplied to the tip 38 of the endoscope 3.
  • the light source unit 15 may also incorporate a pump for sending water or air to be supplied to the endoscope 3.
  • the sound output unit 16 outputs sound based on the control of the processor 11.
  • the DB 17 stores machine learning models such as the segmentation model, lesion classification model, and lesion detection model described later.
  • the DB 17 may include an external storage device such as a hard disk connected to or built into the endoscopic examination support device 1, or may include a storage medium such as a removable flash memory. Instead of providing the DB 17 within the endoscopic examination system 100, the DB 17 may be provided on an external server, and data may be obtained from that server via communication.
  • [Functional configuration] Fig. 3 is a block diagram showing the functional configuration of the endoscopic examination support device 1 according to the first embodiment.
  • the endoscopic examination support device 1 functionally includes a gaze position identification unit 111, a coordinate conversion unit 112, a mask image generation unit 113, a lesion region analysis unit 114, a background removal unit 115, a lesion classification unit 116, and an output unit 117.
  • the endoscopic examination support device 1 receives the endoscopic video Ic from the endoscope 3. The endoscopic examination support device 1 then generates an endoscopic image from the endoscopic video Ic based on imaging instructions from a doctor. The endoscopic image is input to the gaze position identification unit 111, the lesion area analysis unit 114, and the background removal unit 115. In addition, an eyeball image is input to the endoscopic examination support device 1 from the eye tracking device 4. The eyeball image is input to the gaze position identification unit 111.
  • the gaze position identification unit 111 identifies the position of the doctor's gaze on the display device 2 (hereinafter also referred to as the "gaze point") based on the eyeball image. It is assumed that display data including an endoscopic image is displayed on the display device 2, and that the doctor is gazing at the area in the endoscopic image that he or she wishes to analyze.
  • the gaze position identification unit 111 can identify the gaze point using, for example, the corneal reflex method. Specifically, the gaze position identification unit 111 recognizes the reflection point of the near-infrared LED on the cornea from the eyeball image. The gaze position identification unit 111 also recognizes the position of the pupil in the eye from the eyeball image. The gaze position identification unit 111 then detects the doctor's gaze direction from the positional relationship between the reflection point and the pupil. Based on the gaze direction, the gaze position identification unit 111 identifies the position coordinates (hereinafter also referred to as "gaze point coordinates") of the doctor's gaze point in a coordinate system based on the display device 2 (hereinafter also referred to as the "display device coordinate system"). The gaze position identification unit 111 outputs the gaze point coordinates in the display device coordinate system and information regarding the time the gaze point was measured to the coordinate conversion unit 112.
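  • As a rough illustration of the corneal reflection method described above: after a per-user calibration, the gaze point can be approximated from the offset between the pupil center and the near-infrared LED glint. The affine mapping below is a minimal sketch under that assumption; commercial eye trackers use richer 3D eye models, and all names here are illustrative.

```python
import numpy as np

def estimate_gaze_point(pupil_xy, glint_xy, calib_matrix, calib_offset):
    """Map the pupil-glint offset to a point on the display with a calibrated
    affine model; calib_matrix (2x2) and calib_offset (2,) come from a
    per-user calibration procedure."""
    offset = np.asarray(pupil_xy, dtype=float) - np.asarray(glint_xy, dtype=float)
    return calib_matrix @ offset + calib_offset  # gaze point in the display coordinate system
```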
  • the coordinate conversion unit 112 converts the gaze point coordinates in the display device coordinate system into gaze point coordinates in a coordinate system based on the endoscopic image (hereinafter also referred to as the "endoscopic coordinate system").
  • the relationship between the display device coordinate system and the endoscope coordinate system is predetermined, and it is assumed that mutual coordinate conversion is possible between the display device coordinate system and the endoscope coordinate system.
  • a conversion formula or conversion table from the display device coordinate system to the endoscope coordinate system is prepared in advance, and the coordinate conversion unit 112 performs coordinate conversion using that conversion formula or conversion table.
  • the coordinate conversion unit 112 outputs the gaze point coordinates in the endoscope coordinate system to the mask image generation unit 113.
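  • When the endoscopic image occupies a known, axis-aligned rectangle on the display, the conversion from the display device coordinate system to the endoscope coordinate system reduces to a scale and a translation. A minimal sketch under that assumption (viewport and size parameters are illustrative):

```python
def display_to_image_coords(gx: float, gy: float,
                            view_x: float, view_y: float,
                            view_w: float, view_h: float,
                            img_w: int, img_h: int) -> tuple[float, float]:
    """Convert a gaze point (gx, gy) in display pixels into endoscopic-image
    pixels, given the on-screen viewport of the image."""
    u = (gx - view_x) * img_w / view_w
    v = (gy - view_y) * img_h / view_h
    return u, v
```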
  • the mask image generation unit 113 generates a mask image of the same size as the endoscopic image based on the gaze point coordinates in the endoscopic coordinate system. For example, the mask image generation unit 113 plots each gaze point indicated by the gaze point coordinates on a frame image of the same size as the endoscopic image, generating a mask image in which the area gazed at by the doctor is indicated by black pixels and the other areas are indicated by white pixels.
  • Examples of binarization processing by the mask image generation unit 113 are shown below.
  • (Example 1) The mask image generation unit 113 generates a mask image by converting the gaze point with the longest dwell time within a predetermined number of seconds to a black pixel value and converting the other areas to white pixel values.
  • (Example 2) The mask image generation unit 113 generates a mask image by converting gaze points whose dwell time is equal to or longer than a predetermined threshold into black pixel values and converting the other areas into white pixel values.
  • (Example 3) The mask image generation unit 113 generates a mask image by converting all gaze points that exist within a predetermined number of seconds into black pixel values and converting the other areas into white pixel values.
  • (Example 4) The mask image generation unit 113 generates a mask image by converting the positions and areas where the gaze point has been measured a predetermined number of times or more into black pixel values and converting the other areas into white pixel values.
  • the mask image generated by the mask image generation unit 113 is not limited to a binary image, but may be an image (for example, a grayscale image) that represents the dwell time of the gaze point using shades of a single color.
  • the mask image generation unit 113 may also generate a mask image by representing the gaze point estimated in a single measurement as a circle of a predetermined size.
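  • A minimal sketch of Example 2 combined with the circle representation just mentioned (gazed areas black, everything else white); the threshold and radius values are illustrative assumptions, not values specified by the disclosure.

```python
import numpy as np
import cv2  # OpenCV, used here only to rasterize circles

def make_gaze_mask(gaze_points, dwell_times, img_h, img_w,
                   dwell_threshold=0.5, radius=20):
    """Binarize per Example 2: gaze points whose dwell time (seconds) meets
    the threshold become black (0) circles on a white (255) background."""
    mask = np.full((img_h, img_w), 255, dtype=np.uint8)
    for (x, y), dwell in zip(gaze_points, dwell_times):
        if dwell >= dwell_threshold:
            cv2.circle(mask, (int(x), int(y)), radius, color=0, thickness=-1)
    return mask
```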
  • the mask image generation unit 113 outputs the generated mask image to the lesion area analysis unit 114.
  • the lesion area analysis unit 114 estimates the lesion area in the endoscopic image based on the endoscopic image and the mask image. Specifically, the lesion area analysis unit 114 inputs the endoscopic image and the mask image into a segmentation model, thereby segmenting the area identified by the mask image in the endoscopic image (hereinafter also referred to as the "specific area"). The lesion area analysis unit 114 then estimates the segmented specific area as the lesion area.
  • the segmentation model may be, for example, a model constructed by performing machine learning using training data in which a set of an image and a mask image is used as the input image and the result of segmenting the specific area identified by the mask image is used as the correct answer.
  • alternatively, a segmentation model trained on a large number of general and medical images, such as SAM (Segment Anything Model) published by Meta, may be used; such a model accepts a visual prompt that specifies the target area as a mask image.
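  • A sketch of this estimation step, kept deliberately generic because prompt formats differ between models (for instance, SAM's official implementation passes mask prompts as low-resolution logits rather than full-size masks). Here `predictor` is a hypothetical stand-in for any promptable segmentation model:

```python
import numpy as np

def segment_specific_area(endoscopic_image: np.ndarray,
                          gaze_mask: np.ndarray,
                          predictor) -> np.ndarray:
    """Segment the area indicated by the gaze mask. `predictor` is assumed
    to take an image plus a mask-style visual prompt and return a lesion mask."""
    prompt = (gaze_mask == 0).astype(np.float32)       # gazed pixels are the black ones
    lesion_mask = predictor(endoscopic_image, prompt)  # model-specific call
    return np.asarray(lesion_mask, dtype=bool)
```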
  • the lesion area analysis unit 114 outputs the segmentation results to the background removal unit 115.
  • the background removal unit 115 extracts only the segmented lesion area from the endoscopic image. This generates an image of the lesion area with unnecessary background information removed.
  • the background removal unit 115 outputs the image of the extracted lesion area to the lesion classification unit 116.
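  • A minimal sketch of this extraction: keep only the pixels inside the segmented lesion mask and crop to their bounding box. The zero-fill convention for the removed background is an assumption for illustration.

```python
import numpy as np

def remove_background(endoscopic_image: np.ndarray,
                      lesion_mask: np.ndarray) -> np.ndarray:
    """Zero out non-lesion pixels, then crop to the lesion's bounding box."""
    result = np.zeros_like(endoscopic_image)
    result[lesion_mask] = endoscopic_image[lesion_mask]
    ys, xs = np.nonzero(lesion_mask)
    return result[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```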
  • the lesion classification unit 116 diagnoses the lesion area using the image of the extracted lesion area. Diagnosis includes, for example, differential diagnosis of the lesion, such as distinguishing between tumor and non-tumor and estimating the depth of invasion, as well as measuring the size of the lesion. Specifically, the lesion classification unit 116 diagnoses the lesion area using a pre-prepared image recognition model.
  • This image recognition model is a machine learning model that is trained in advance to diagnose the lesion area using an endoscopic image containing the lesion area as input, and is hereinafter also referred to as the "lesion classification model.”
  • the internal configuration of the machine learning model is arbitrary, but can be configured, for example, using a CNN (Convolutional Neural Network).
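  • A toy CNN classifier in PyTorch to make this concrete; the layer sizes and the two-class output (e.g., tumor vs. non-tumor) are assumptions for illustration, not an architecture specified by the disclosure.

```python
import torch
import torch.nn as nn

class LesionClassifier(nn.Module):
    """Minimal CNN: a 3-channel lesion crop in, one score per class out."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# Usage sketch: scores = LesionClassifier()(torch.randn(1, 3, 224, 224))
```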
  • the lesion classification unit 116 outputs the diagnosis results to the output unit 117.
  • the output unit 117 outputs the diagnosis results to the display device 2.
  • the endoscopic examination support device 1 can generate images of lesion areas from which unnecessary background information has been removed, making it possible to improve the accuracy of diagnoses using lesion classification models.
  • the gaze position identification unit 111 and the coordinate conversion unit 112 are examples of an image acquisition means and a gaze point detection means
  • the mask image generation unit 113 is an example of a mask image generation means
  • the lesion area analysis unit 114 is an example of a first lesion area estimation means
  • the background removal unit 115 is an example of a first extraction means
  • the lesion classification unit 116 is an example of a diagnosis means
  • the output unit 117 is an example of a diagnosis result output means.
  • FIG. 4 is a flowchart of the endoscopic examination support process performed by the endoscopic examination support device 1. This process is realized by the processor 11 shown in Fig. 2 executing a program prepared in advance and operating as each element shown in Fig. 3.
  • the endoscopic examination support device 1 receives an endoscopic video Ic from the endoscope 3.
  • the endoscopic examination support device 1 generates an endoscopic image from the endoscopic video Ic based on imaging instructions from a doctor.
  • the endoscopic examination support device 1 also receives an eyeball image from an eye tracking device 4.
  • the endoscopic image is input to the gaze position identification unit 111, the lesion area analysis unit 114, and the background removal unit 115, and the eyeball image is input to the gaze position identification unit 111 (step S111).
  • the endoscopic examination support device 1 may present an evaluation screen for evaluating the estimated lesion area to the doctor.
  • the doctor looks at a display like that shown in Figure 5 and evaluates whether the lesion area 22 and extracted image 23 are appropriate.
  • the doctor then inputs the evaluation into the endoscopic examination support device 1.
  • the doctor can input the evaluation by voice into the endoscopic examination support device 1 via the input unit 14.
  • the endoscopic examination support device 1 is equipped with a voice recognition function and obtains the evaluation by voice recognition of the doctor's input voice.
  • the editing screen generation process and the editing process based on the doctor's instructions can be performed, for example, by the background removal unit 115 of the endoscopic examination support device 1.
  • the background removal unit 115 is an example of an editing screen data output means and an editing operation reception means.
  • after that, the endoscopic examination support process is executed to the end.
  • the endoscopic examination support device 1 may stop the endoscopic examination support process midway and ask the doctor whether or not to continue the endoscopic examination support process.
  • the endoscopic examination support device 1 may present the doctor with a confirmation screen to confirm whether or not to perform a diagnosis using the lesion classification unit 116.
  • FIG. 7(A) shows an example of a confirmation screen.
  • the display area of the display device 2 includes an endoscopic image 31, a lesion area 32, a cropped image 33, and a Reject area 34.
  • the endoscopic image 31 is an endoscopic image generated based on imaging instructions from a doctor.
  • the lesion area 32 indicates the lesion area estimated by the lesion area analysis unit 114.
  • the cropped image 33 indicates an image of the lesion area cropped by the background removal unit 115.
  • the Reject area 34 is an area for controlling the execution of a diagnosis. For example, the doctor checks the lesion area 32 and the cropped image 33, and if he or she decides not to perform a diagnosis, gazes at the Reject area 34. When the endoscopic examination support device 1 detects that the doctor has been gazing at the Reject area 34 for a predetermined time TH1 or more, it does not perform a diagnosis using the lesion classification unit 116 and terminates processing. On the other hand, if the doctor does not gaze at the Reject area 34, the endoscopic examination support device 1 causes the lesion classification unit 116 to perform a diagnosis after a predetermined time TH2 has elapsed. The endoscopic examination support device 1 then displays the diagnosis results on the display device 2, as shown in Figure 7(B). The doctor may also instruct the endoscopic examination support device 1 whether or not to perform a diagnosis by voice input. A sketch of this dwell-time gating follows below.
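  • The Reject-area behavior amounts to dwell-time gating on the gaze stream. A minimal sketch, with TH1/TH2 passed as parameters and a simple rectangle test standing in for the device's actual hit-testing; the default values are illustrative only.

```python
def should_run_diagnosis(gaze_samples, reject_rect, th1=2.0, th2=5.0):
    """gaze_samples: iterable of (timestamp_sec, x, y) in display pixels.
    reject_rect: (x, y, w, h) of the Reject area.
    Returns False if the gaze dwells in the Reject area for th1 seconds or
    more; returns True once th2 seconds elapse without such a rejection."""
    rx, ry, rw, rh = reject_rect
    dwell_start = None
    t0 = None
    for t, x, y in gaze_samples:
        t0 = t if t0 is None else t0
        if rx <= x <= rx + rw and ry <= y <= ry + rh:  # gaze is inside the Reject area
            dwell_start = t if dwell_start is None else dwell_start
            if t - dwell_start >= th1:
                return False                            # doctor rejected the diagnosis
        else:
            dwell_start = None
        if t - t0 >= th2:
            return True                                 # timeout reached: run the diagnosis
    return True
```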
  • Figure 7(B) shows an example of the display of the diagnosis results.
  • the display area of the display device 2 includes an endoscopic image 31, a lesion area 32, a cut-out image 33, and a diagnosis result 35.
  • the diagnosis result 35 indicates the diagnosis result of the lesion area 32 by the lesion classification unit 116.
  • the process of generating the confirmation screen and the process of controlling the execution of the diagnosis can be performed, for example, by the lesion classification unit 116 of the endoscopic examination support device 1.
  • the lesion classification unit 116 is an example of a confirmation screen data output means and a response acquisition means.
  • the endoscopic examination system of the second embodiment includes an additional lesion area analysis unit.
  • a doctor can select the lesion area to be used for diagnosis from the lesion areas produced by each lesion area analysis unit. Note that the system configuration and hardware configuration are the same as those of the first embodiment, and therefore will not be described here.
  • [Functional configuration] Fig. 8 is a block diagram showing the functional configuration of an endoscopic examination support device 1a according to the second embodiment.
  • the endoscopic examination support device 1a includes a gaze position identification unit 211, a coordinate conversion unit 212, a mask image generation unit 213, a first lesion area analysis unit 214, a background removal unit 215, a second lesion area analysis unit 216, a lesion area selection unit 217, a lesion classification unit 218, and an output unit 219. That is, the endoscopic examination support device 1a includes a second lesion area analysis unit 216 in addition to the first lesion area analysis unit 214, which corresponds to the lesion area analysis unit 114 of the first embodiment.
  • the gaze position identification unit 211, coordinate conversion unit 212, mask image generation unit 213, first lesion area analysis unit 214, lesion classification unit 218, and output unit 219 have the same configuration and operate in the same way as the gaze position identification unit 111, coordinate conversion unit 112, mask image generation unit 113, lesion area analysis unit 114, lesion classification unit 116, and output unit 117 of the endoscopic examination support device 1, and will not be described again.
  • the background removal unit 215 receives the segmentation results from the first lesion area analysis unit 214.
  • the background removal unit 215 cuts out only the segmented lesion area from the endoscopic image.
  • the background removal unit 215 outputs the segmentation results and an image of the cut-out lesion area to the lesion area selection unit 217.
  • Second lesion area analysis unit 216 detects a lesion area contained in the endoscopic image using a pre-prepared image recognition model or the like.
  • This image recognition model is a machine learning model that has been trained in advance to use an endoscopic image as input and estimate a lesion area contained in the endoscopic image, and is hereinafter also referred to as a "lesion detection model.”
  • the internal configuration of the machine learning model is arbitrary, but can be configured using a CNN, for example.
  • the lesion area estimated by the first lesion area analysis unit 214 will hereinafter also be referred to as the "first lesion area.”
  • the lesion area detected by the second lesion area analysis unit 216 will hereinafter also be referred to as the "second lesion area.”
  • the lesion area selection unit 217 generates a selection screen for selecting a lesion area to be used for diagnosis and outputs it to the display device 2.
  • the doctor selects the lesion area to be used for diagnosis from the multiple lesion areas included on the selection screen.
  • the lesion area selection unit 217 then outputs the lesion area selected by the doctor to the lesion classification unit 218.
  • Figure 9 shows an example of a selection screen display generated by the lesion area selection unit 217.
  • Figure 9 includes endoscopic image 41, lesion areas 42-44, and cropped images 42a-44a.
  • Endoscopic image 41 is an endoscopic image generated based on imaging instructions from a doctor.
  • Lesion area 42 is a first lesion area.
  • Cropped image 42a is an image cropped from lesion area 42.
  • Lesion area 43 and lesion area 44 are second lesion areas.
  • Cropped images 43a and 44a are images cropped from lesion area 43 and lesion area 44, respectively.
  • the doctor looks at a display like that shown in Figure 9 and selects which of the cropped images 42a to 44a to use for diagnosis. For example, the doctor gazes at the cropped image to be used for diagnosis from among the cropped images 42a to 44a.
  • the lesion area selection unit 217 outputs the cropped image that the doctor has gazed at for a predetermined time TH3 or more to the lesion classification unit 218.
  • the doctor may also select the cut-out image to be used for diagnosis by voice input.
  • the second lesion area analysis unit 216 is an example of a second lesion area estimation means and a second extraction means
  • the lesion area selection unit 217 is an example of a lesion area selection screen data output means and a selection operation reception means.
  • Fig. 10 is a flowchart of the endoscopic examination support processing performed by the endoscopic examination support device 1a. This processing is realized by the processor 11 shown in Fig. 2 executing a program prepared in advance and operating as each element shown in Fig. 8. Note that the processing of steps S211 to S216 is similar to the processing of steps S111 to S116 of the endoscopic examination support processing shown in Fig. 4, and therefore description thereof will be omitted.
  • the endoscopic image is input to the second lesion area analysis unit 216.
  • the second lesion area analysis unit 216 uses a lesion detection model to detect the lesion area contained in the endoscopic image (step S217).
  • the lesion area selection unit 217 generates a selection screen for selecting the lesion area to be used for diagnosis and outputs it to the display device 2.
  • the lesion area selection unit 217 then outputs the lesion area selected by the doctor to the lesion classification unit 218.
  • the lesion classification unit 218 diagnoses the lesion area input from the lesion area selection unit 217 (step S218).
  • the output unit 219 then outputs the diagnosis results to the display device 2 (step S219). The process then ends.
  • the endoscopic examination system of the third embodiment includes the two lesion area analysis units, and a diagnosis is obtained for the lesion area produced by each of them. The doctor then selects an appropriate diagnosis result from the multiple diagnosis results. Note that the system configuration and hardware configuration are the same as those of the first embodiment, and therefore will not be described here.
  • [Functional configuration] Fig. 11 is a block diagram showing the functional configuration of an endoscopic examination support device 1b according to the third embodiment.
  • the endoscopic examination support device 1b includes a gaze position identification unit 311, a coordinate conversion unit 312, a mask image generation unit 313, a first lesion area analysis unit 314, a background removal unit 315, a second lesion area analysis unit 316, a lesion classification unit 317, a classification result selection unit 318, and an output unit 319.
  • the gaze position identification unit 311, coordinate conversion unit 312, mask image generation unit 313, first lesion area analysis unit 314, background removal unit 315, and second lesion area analysis unit 316 have the same configuration and operate in the same way as the gaze position identification unit 211, coordinate conversion unit 212, mask image generation unit 213, first lesion area analysis unit 214, background removal unit 215, and second lesion area analysis unit 216 of the endoscopic examination support device 1a of the second embodiment, and therefore will not be described again.
  • the lesion area estimated by the first lesion area analysis unit 314 will also be referred to as the "first lesion area” below.
  • the lesion area detected by the second lesion area analysis unit 316 will also be referred to as the "second lesion area” below.
  • the classification result selection unit 318 generates a selection screen for selecting a diagnostic result and outputs it to the display device 2.
  • the doctor selects the diagnostic result to adopt from the multiple diagnostic results included on the selection screen.
  • the classification result selection unit 318 then outputs the diagnostic result selected by the doctor to the output unit 319.
  • Figure 12 shows an example of a selection screen display generated by the classification result selection unit 318.
  • Figure 12 includes endoscopic image 51, lesion areas 52-54, cropped images 52a-54a, and diagnosis results 52b-54b.
  • Endoscopic image 51 is an endoscopic image generated based on imaging instructions from a doctor.
  • Lesion area 52 is a first lesion area.
  • Cropped image 52a is an image cropped from lesion area 52.
  • Diagnosis result 52b is the diagnosis result for cropped image 52a.
  • Lesion area 53 and lesion area 54 are second lesion areas.
  • Cropped images 53a and 54a are images cropped from lesion area 53 and lesion area 54, respectively.
  • Diagnosis results 53b and 54b are the diagnosis results for cropped images 53a and 54a.
  • the doctor looks at a display like that shown in Figure 12 and selects which diagnosis result to adopt from diagnosis results 52b to 54b. For example, the doctor gazes at the diagnosis result to be adopted from among diagnosis results 52b to 54b.
  • the classification result selection unit 318 outputs to the output unit 319 the diagnosis result that the doctor has gazed at for a predetermined time TH4 or more. The doctor may also select the diagnosis result to adopt by voice input.
  • the output unit 319 outputs the diagnosis results input from the classification result selection unit 318 to the display device 2.
  • the classification result selection unit 318 is an example of a diagnosis result selection screen data output means and a diagnosis result reception means.
  • Fig. 13 is a flowchart of the endoscopic examination support processing by the endoscopic examination support device 1b. This processing is realized by the processor 11 shown in Fig. 2 executing a program prepared in advance and operating as each element shown in Fig. 11. Note that the processing of steps S311 to S317 is similar to the processing of steps S211 to S217 of the endoscopic examination support processing shown in Fig. 10, and therefore description thereof will be omitted.
  • the lesion classification unit 317 diagnoses the first lesion area and the second lesion area (step S318). Specifically, the lesion classification unit 317 diagnoses the first lesion area based on the image of the first lesion area cut out by the background removal unit 315. The lesion classification unit 317 also diagnoses the second lesion area based on the endoscopic image input from the second lesion area analysis unit 316. Specifically, the lesion classification unit 317 diagnoses the second lesion area based on the image of the second lesion area enclosed by a rectangle on the endoscopic image.
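  • Unlike the first lesion area, which is cut out along the segmentation mask, the second lesion area is taken as the detector's rectangle. A minimal sketch, assuming the lesion detection model returns an (x, y, w, h) box in image pixels:

```python
import numpy as np

def crop_detection_box(endoscopic_image: np.ndarray, box) -> np.ndarray:
    """Cut out the rectangular lesion region returned by the detection model."""
    x, y, w, h = box
    return endoscopic_image[y:y + h, x:x + w]
```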
  • the classification result selection unit 318 generates a selection screen for selecting a diagnosis result and outputs it to the display device 2.
  • the classification result selection unit 318 then outputs the diagnosis result selected by the doctor to the output unit 319.
  • the output unit 319 outputs the diagnosis result input from the classification result selection unit 318 to the display device 2 (step S319). The processing then ends.
  • FIG. 14 is a block diagram showing the functional configuration of an endoscopic examination support device 500 according to the fourth embodiment. The endoscopic examination support device 500 includes an image acquisition means 501, a gaze point detection means 502, a mask image generation means 503, a first lesion area estimation means 504, and a first extraction means 505.
  • FIG. 15 is a flowchart of processing by the endoscopic examination support device of the fourth embodiment.
  • the image acquisition means 501 acquires an endoscopic image (step S501).
  • the gaze point detection means 502 detects the user's gaze point on the endoscopic image (step S502).
  • the mask image generation means 503 generates a mask image based on the gaze point (step S503).
  • the first lesion area estimation means 504 estimates a first lesion area in the endoscopic image based on the endoscopic image and the mask image (step S504).
  • the first extraction means 505 generates a first extracted image by extracting the first lesion area from the endoscopic image (step S505).
  • the endoscopic examination support device 500 of the fourth embodiment enables accurate diagnosis of lesions. Furthermore, the endoscopic examination support device 500 can support user decision-making in the medical field.
  • (Appendix 1) An endoscopic examination support device comprising: an image acquisition means for acquiring an endoscopic image; a gaze point detection means for detecting a gaze point of a user with respect to the endoscopic image; a mask image generation means for generating a mask image based on the gaze point; a first lesion area estimation means for estimating a first lesion area in the endoscopic image based on the endoscopic image and the mask image; and a first extraction means for generating a first extracted image by extracting the first lesion area from the endoscopic image.
  • (Appendix 2) The endoscopic examination support device described in Appendix 1, wherein the first lesion area estimation means estimates the first lesion area using a first machine learning model that receives an endoscopic image and a mask image as input and segments the lesion area contained in the endoscopic image.
  • (Appendix 5) The endoscopic examination support device described in Appendix 3, comprising: a confirmation screen data output means for outputting, to the user, confirmation screen data for confirming whether or not the first extracted image is to be used for diagnosis; and an answer acquisition means for acquiring from the user an answer as to whether or not the first extracted image is to be used for diagnosis, wherein the diagnosis means acquires a diagnosis result of the first lesion area from the first extracted image when the answer is that the image will be used for diagnosis.
  • The endoscopic examination support device described in Appendix 4, comprising: a second lesion area estimation means for estimating a second lesion area from the endoscopic image using a third machine learning model that receives an endoscopic image as input and detects a lesion area included in the endoscopic image; a second extraction means for extracting the second lesion area from the endoscopic image to generate a second extracted image; a diagnosis result selection screen data output means for outputting, to the user, diagnosis result selection screen data for selecting a diagnosis result to be adopted from a plurality of diagnosis results; and a diagnosis result receiving means for receiving a diagnosis result selection operation by the user, wherein the diagnosis means uses the second machine learning model to obtain a diagnosis result of the first lesion area from the first extracted image and a diagnosis result of the second lesion area from the second extracted image, the diagnosis result selection screen data includes the diagnosis result of the first lesion area and the diagnosis result of the second lesion area, and the diagnosis result output means outputs the diagnosis result selected by the user as a final result.
  • (Appendix 13) The endoscopic examination support device described in Appendix 1, further comprising an eyeball image acquisition means for acquiring an eyeball image including an eyeball of the user, wherein the gaze point detection means detects the gaze point based on the eyeball image.
  • (Appendix 15) The endoscopic examination support device described in Appendix 1, wherein the mask image generation means generates a mask image binarized into areas within a predetermined range based on gaze points whose dwell time is equal to or greater than a predetermined threshold, and the other areas.
  • (Appendix 18) An endoscopic examination support method comprising: performing image acquisition to acquire an endoscopic image; performing gaze point detection to detect a gaze point of a user on the endoscopic image; generating a mask image based on the gaze point; performing first lesion area estimation based on the endoscopic image and the mask image to estimate a first lesion area in the endoscopic image; and performing first extraction to generate a first extracted image by extracting the first lesion area from the endoscopic image.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Optics & Photonics (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biophysics (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Endoscopes (AREA)

Abstract

An endoscopic examination support device in which an image acquisition means acquires an endoscopic image. A gaze point detection means detects a gaze point of a user with respect to the endoscopic image. A mask image generation means generates a mask image based on the gaze point. A first lesion area estimation means estimates a first lesion area in the endoscopic image based on the endoscopic image and the mask image. A first extraction means generates a first extracted image obtained by extracting the first lesion area from the endoscopic image. The endoscopic examination support device can support users' decision-making in the medical field.
PCT/JP2024/005237 2024-02-15 2024-02-15 Endoscopic examination support device, endoscopic examination support method, and recording medium Pending WO2025173164A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2024/005237 WO2025173164A1 (fr) 2024-02-15 2024-02-15 Endoscopic examination support device, endoscopic examination support method, and recording medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2024/005237 WO2025173164A1 (fr) 2024-02-15 2024-02-15 Endoscopic examination support device, endoscopic examination support method, and recording medium

Publications (1)

Publication Number Publication Date
WO2025173164A1 (fr) 2025-08-21

Family

Family ID: 96772561

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2024/005237 Pending WO2025173164A1 (fr) Endoscopic examination support device, endoscopic examination support method, and recording medium

Country Status (1)

Country Link
WO (1) WO2025173164A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04242624A (ja) * 1991-01-08 1992-08-31 A T R Shichiyoukaku Kiko Kenkyusho:Kk Eye gaze region display device
WO2016117277A1 (fr) * 2015-01-21 2016-07-28 Hoya Corporation Endoscope system
WO2017183353A1 (fr) * 2016-04-19 2017-10-26 Olympus Corporation Endoscope system
WO2019087790A1 (fr) * 2017-10-31 2019-05-09 FUJIFILM Corporation Inspection support device, endoscope device, inspection support method, and inspection support program
WO2023042273A1 (fr) * 2021-09-14 2023-03-23 NEC Corporation Image processing device, image processing method, and storage medium


Similar Documents

Publication Publication Date Title
US9445713B2 (en) Apparatuses and methods for mobile imaging and analysis
US11298012B2 (en) Image processing device, endoscope system, image processing method, and program
CN101170940A (zh) Image display device
CN113365545B (zh) Image recording device, image recording method, and image recording program
EP4437931A1 (fr) Surgery assistance system, surgery assistance method, and surgery assistance program
KR20220130855A (ko) Artificial intelligence-based colonoscopy image diagnosis assistance system and method
US12433478B2 (en) Processing device, endoscope system, and method for processing captured image
US20250281022A1 (en) Endoscopy support device, endoscopy support method, and recording medium
WO2023126999A1 (fr) Image processing device, image processing method, and storage medium
US20230410304A1 (en) Medical image processing apparatus, medical image processing method, and program
US20250037278A1 (en) Method and system for medical endoscopic imaging analysis and manipulation
WO2025173164A1 (fr) Endoscopic examination support device, endoscopic examination support method, and recording medium
JP2024164943A (ja) Medical image processing device and endoscope system
WO2023218523A1 (fr) Second endoscope system, first endoscope system, and endoscopic inspection method
CN115035086A (zh) Deep learning-based intelligent screening and analysis method and device for tuberculin skin tests
CN116849593A (zh) Visual laryngoscope system with organ recognition function and organ recognition method
JP7768398B2 (ja) Endoscopic examination support device, endoscopic examination support method, and program
US12444052B2 (en) Learning apparatus, learning method, program, trained model, and endoscope system
JP7148193B1 (ja) Surgery support system, surgery support method, and surgery support program
US20240335093A1 (en) Medical support device, endoscope system, medical support method, and program
EP4470448A1 (fr) Image determination device, image determination method, and recording medium
KR20250164482A (ko) System, device, and method for providing medical examination information and feedback using image processing and artificial intelligence technology
WO2024176780A1 (fr) Medical assistance device, endoscope, medical assistance method, and program
WO2024121886A1 (fr) Endoscopic examination assistance device, endoscopic examination assistance method, and recording medium
WO2024166731A1 (fr) Image processing device, endoscope, image processing method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24924858

Country of ref document: EP

Kind code of ref document: A1