
WO2021200985A1 - Program, information processing method, information processing system, and method for generating a learning model - Google Patents

Program, information processing method, information processing system, and method for generating a learning model

Info

Publication number
WO2021200985A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
catheter
complementary
information
control unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2021/013600
Other languages
English (en)
Japanese (ja)
Inventor
太輝人 犬飼
雄紀 坂口
悠介 関
陽 井口
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Terumo Corp
Original Assignee
Terumo Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Terumo Corp filed Critical Terumo Corp
Priority to JP2022512562A priority Critical patent/JP7615127B2/ja
Publication of WO2021200985A1 publication Critical patent/WO2021200985A1/fr
Anticipated expiration legal-status Critical
Priority to JP2024232685A priority patent/JP7747865B2/ja
Ceased legal-status Critical Current

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/04: Instruments as in A61B 1/00 combined with photographic or television appliances
    • A61B 1/045: Control thereof
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/12: Diagnosis using ultrasonic, sonic or infrasonic waves in body cavities or body tracts, e.g. by using catheters

Definitions

  • The present invention relates to a program, an information processing method, an information processing system, and a learning model generation method.
  • A catheter system is used in which a diagnostic imaging catheter is inserted into a luminal organ such as a blood vessel to take tomographic images (Patent Document 1).
  • Information-missing portions may occur in the tomographic image due to the influence of guide wires, stents, or strongly calcified sites. Experts such as skilled doctors perform catheter treatment and the like while mentally supplementing the missing information. Long training is therefore required to use the catheter system.
  • One aspect is to provide a program or the like that makes the catheter system easy to use.
  • In one aspect, the program causes a computer to execute processing that acquires a catheter image generated using a diagnostic imaging catheter inserted into a luminal organ, acquires complementary information that complements an information-missing portion of the acquired catheter image, and displays the catheter image together with the complementary information.
  • FIG. 1 is an explanatory diagram illustrating an outline of the catheter system 10.
  • The catheter system 10 includes a diagnostic imaging catheter 40, an MDU (Motor Driving Unit) 33, and an information processing device 20.
  • The diagnostic imaging catheter 40 is connected to the information processing device 20 via the MDU 33.
  • A display device 31 and an input device 32 are connected to the information processing device 20.
  • The input device 32 is, for example, a keyboard, mouse, trackball, or microphone.
  • The display device 31 and the input device 32 may be laminated together to form a touch panel.
  • The input device 32 and the information processing device 20 may be configured as a single unit.
  • FIG. 2 is an explanatory diagram illustrating an outline of the diagnostic imaging catheter 40.
  • FIG. 2 shows an example of a diagnostic imaging catheter 40 for IVUS (Intravascular Ultrasound), that is, for ultrasonic tomographic image generation, used when generating an ultrasonic tomographic image from inside a blood vessel.
  • The ultrasonic tomographic image is an example of a catheter image generated using the diagnostic imaging catheter 40.
  • The IVUS catheter is an example of a tomographic image generating catheter.
  • The diagnostic imaging catheter 40 has a probe portion 41 and a connector portion 45 arranged at the end of the probe portion 41.
  • The probe portion 41 is connected to the MDU 33 via the connector portion 45.
  • In the following description, the side of the diagnostic imaging catheter 40 far from the connector portion 45 is referred to as the distal end side.
  • A shaft 43 is inserted inside the probe portion 41.
  • A sensor 42 is connected to the distal end side of the shaft 43. The shaft 43 and the sensor 42 can rotate and advance or retract inside the probe portion 41.
  • The sensor 42 is an ultrasonic transducer that transmits and receives ultrasonic waves.
  • An annular tip marker 44 is fixed in the vicinity of the tip of the probe portion 41.
  • The tip marker 44 is made of a radiopaque material, such as metal, that does not transmit X-rays.
  • The diagnostic imaging catheter 40 may instead be a catheter for OCT (Optical Coherence Tomography) or OFDI (Optical Frequency Domain Imaging) that generates optical tomographic images using near-infrared light.
  • The optical tomographic image is an example of a catheter image generated using the diagnostic imaging catheter 40.
  • The sensor 42 of such a diagnostic imaging catheter 40 is a transmission/reception unit that emits near-infrared light and receives reflected light.
  • The catheter for optical tomographic image generation is an example of a tomographic image generating catheter.
  • The diagnostic imaging catheter 40 may have both an ultrasonic transducer and a transmission/reception unit for OCT or OFDI as sensors 42.
  • The diagnostic imaging catheter 40 may have a total of three sensors 42: an ultrasonic transducer, a transmission/reception unit for OCT, and a transmission/reception unit for OFDI.
  • The diagnostic imaging catheter 40 is not limited to the mechanical scanning method, in which the sensor is mechanically rotated and advanced or retracted. It may be an electronic radial scanning type diagnostic imaging catheter 40 using a sensor 42 in which a plurality of ultrasonic transducers are arranged in an annular shape.
  • The diagnostic imaging catheter 40 may have a so-called linear scanning type sensor 42 in which a plurality of ultrasonic transducers are arranged in a row along the longitudinal direction.
  • The diagnostic imaging catheter 40 may have a so-called two-dimensional array type sensor 42 in which a plurality of ultrasonic transducers are arranged in a matrix.
  • The luminal organ into which the diagnostic imaging catheter 40 is inserted is, for example, a blood vessel, pancreatic duct, bile duct, or bronchus.
  • In the following description, the diagnostic imaging catheter 40 is, as an example, the mechanical scanning IVUS catheter shown in FIG. 2.
  • Using the diagnostic imaging catheter 40, it is possible to generate a tomographic image containing reflectors existing inside the luminal organ, such as erythrocytes, and organs existing outside the luminal organ, such as the epicardium and the heart.
  • FIG. 3 is an explanatory diagram illustrating the configuration of the catheter system 10.
  • The catheter system 10 includes the information processing device 20, the MDU 33, and the diagnostic imaging catheter 40.
  • The information processing device 20 includes a control unit 21, a main storage device 22, an auxiliary storage device 23, a communication unit 24, a display unit 25, an input unit 26, a catheter control unit 271, and a bus.
  • The control unit 21 is an arithmetic control device that executes the program of the present embodiment.
  • One or more CPUs (Central Processing Units), GPUs (Graphics Processing Units), TPUs (Tensor Processing Units), multi-core CPUs, or the like are used for the control unit 21.
  • The control unit 21 is connected to each hardware unit constituting the information processing device 20 via the bus.
  • The main storage device 22 is a storage device such as SRAM (Static Random Access Memory), DRAM (Dynamic Random Access Memory), or flash memory.
  • The main storage device 22 temporarily stores information needed in the middle of processing performed by the control unit 21 and the program being executed by the control unit 21.
  • The auxiliary storage device 23 is a storage device such as an SRAM, a flash memory, a hard disk, or a magnetic tape.
  • The auxiliary storage device 23 stores the program to be executed by the control unit 21 and the various data necessary for executing the program.
  • The communication unit 24 is an interface for communication between the information processing device 20 and the network.
  • The display unit 25 is an interface that connects the display device 31 and the bus.
  • The input unit 26 is an interface that connects the input device 32 and the bus.
  • The catheter control unit 271 performs control of the MDU 33, control of the sensor 42, generation of images based on signals received from the sensor 42, and the like.
  • The MDU 33 rotates the sensor 42 and the shaft 43 inside the probe portion 41.
  • The catheter control unit 271 generates one image for each rotation of the sensor 42.
  • The generated image is a transverse tomographic image centered on the probe portion 41 and substantially perpendicular to the probe portion 41.
  • The MDU 33 can advance and retract the sensor 42 and the shaft 43 inside the probe portion 41 while rotating them.
  • The catheter control unit 271 thereby continuously generates a plurality of transverse tomographic images substantially perpendicular to the probe portion 41 at predetermined intervals.
  • The control unit 21 may realize the function of the catheter control unit 271.
  • The information processing device 20 is connected, via a HIS (Hospital Information System) or the like, to various diagnostic imaging devices 37 such as an X-ray angiography device, an X-ray CT (Computed Tomography) device, an MRI (Magnetic Resonance Imaging) device, a PET (Positron Emission Tomography) device, or an ultrasonic diagnostic device.
  • The information processing device 20 of the present embodiment is a dedicated ultrasonic diagnostic device, or a personal computer, tablet, smartphone, or the like having the functions of an ultrasonic diagnostic device.
  • FIGS. 4 to 7 show examples of screens displayed by the catheter system 10.
  • The screen shown in FIG. 4 includes a first image field 51, a stop button 581, a selection button 582, and a complement button 583.
  • FIG. 4 shows a state in which the stop button 581 is not selected.
  • The selection button 582 and the complement button 583 are set to be non-selectable.
  • In the first image field 51, a tomographic image obtained using the diagnostic imaging catheter 40 is displayed in real time.
  • In the following description, an image displayed in real time is referred to as a real-time image.
  • A small crescent-shaped guide wire image 472 is displayed at the 5 o'clock position relative to the circular catheter image 471 displayed in the central portion of the first image field 51. Since the ultrasonic waves emitted from the sensor 42 do not reach behind the guide wire, which is a strong reflector, a substantially fan-shaped information-missing portion 473 is formed outside the guide wire image 472. Such an information-missing portion 473 is called an acoustic shadow.
  • The information-missing portion 473 is not limited to acoustic shadows.
  • Information-missing portions are also formed by, for example, ringdown caused by vibration of the ultrasonic transducer, multiple echoes formed by a strong reflector such as a guide wire, and artifacts such as external noise.
  • An information-missing portion 473 is also formed outside the image showing the strands constituting the mesh of a stent, which is a strong reflector (see FIG. 23). Similarly, information-missing portions 473 are formed outside strongly calcified sites and highly attenuating plaques.
  • FIG. 5 shows an example of a screen displayed when the control unit 21 accepts the selection of the stop button 581.
  • In FIG. 5, the selection button 582 is set to be selectable.
  • A second image field 52 is displayed at the upper right of the screen. A real-time image is displayed in the second image field 52.
  • In the first image field 51, the image displayed at the time the selection of the stop button 581 was accepted is displayed in a stationary state.
  • The control unit 21 may accept the selection of the stop button 581 by voice recognition, operation of a foot switch (not shown), or the like.
  • FIG. 6 shows an example of a screen displayed when the control unit 21 accepts the selection of the selection button 582.
  • A cursor 68 is displayed in the first image field 51.
  • The user operates the cursor 68 via the input device 32 to place designated point marks 571, for example, on the edge of the luminal organ.
  • FIG. 7 shows an example of a screen displayed when the control unit 21 accepts the selection of the complement button 583.
  • The control unit 21 displays a ring-shaped, that is, closed-curve, complementary line 572 that complements the edge of the luminal organ based on the designated point marks 571 placed by the user.
  • The complementary line 572 is generated by connecting the designated point marks 571 placed by the user with a curve such as a spline curve.
  • The complementary line 572 is an example of complementary information that complements the information-missing portion 473.
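  • As an illustration, the closed-curve generation described above can be sketched in a few lines of Python. This is a minimal sketch assuming scipy is available; the periodic-spline approach is one plausible reading of "a curve such as a spline curve", and all names are illustrative, not the patent's implementation.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def closed_complementary_line(points, n_samples=200):
    """Connect designated point marks 571 into a ring-shaped complementary line 572."""
    pts = np.asarray(points, dtype=float)
    pts = np.vstack([pts, pts[:1]])            # repeat the first mark to close the loop
    # per=1 requests a periodic spline, i.e. a closed curve
    tck, _ = splprep([pts[:, 0], pts[:, 1]], s=0, per=1)
    u = np.linspace(0.0, 1.0, n_samples)
    cx, cy = splev(u, tck)
    return np.column_stack([cx, cy])           # sampled points of the closed curve

# e.g. five marks placed on the edge of the luminal organ
marks = [(100, 60), (140, 100), (120, 150), (80, 150), (60, 100)]
line_572 = closed_complementary_line(marks)
```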
  • FIG. 8 is a flowchart illustrating the flow of program processing.
  • The program shown in FIG. 8 is started when the user selects the selection button 582.
  • The control unit 21 accepts the position designation of a designated point mark 571 by the user (step S501).
  • The control unit 21 determines whether or not the input of designated point marks 571 has been completed (step S502). For example, when the user instructs the end of input, the control unit 21 determines that the reception of designated point marks 571 has been completed. The control unit 21 may also determine that the input has been completed when it has received a predetermined number of designated point marks 571.
  • If it is determined that the input has not been completed (NO in step S502), the control unit 21 returns to step S501.
  • If it is determined that the input has been completed (YES in step S502), the control unit 21 generates an annular complementary line 572 connecting the designated point marks 571 received in step S501 (step S503).
  • The control unit 21 superimposes the generated complementary line 572 on the paused image and displays the screen described with reference to FIG. 7 (step S504).
  • The control unit 21 then ends the process.
  • The control unit 21 may calculate parameters such as the area, the maximum diameter, and the minimum diameter based on the completed complementary line 572 and display them on the screen described with reference to FIG. 7.
  • The control unit 21 may accept designation of the type of parameter to be calculated or of the calculation formula.
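  • For reference, a minimal numpy sketch of such parameter calculations from the sampled closed curve: area by the shoelace formula, maximum diameter as the largest pairwise distance, and minimum diameter approximated as the smallest caliper width. These formulas are standard; the function is illustrative and not taken from the patent.

```python
import numpy as np

def lumen_parameters(curve):
    """Area, maximum diameter and (approximate) minimum diameter of a closed curve."""
    x, y = curve[:, 0], curve[:, 1]
    # Shoelace formula for the enclosed area
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    # Maximum diameter: largest distance between any two sampled points
    dists = np.linalg.norm(curve[:, None, :] - curve[None, :, :], axis=-1)
    max_diameter = dists.max()
    # Minimum diameter approximated as the smallest projection width over directions
    angles = np.linspace(0.0, np.pi, 180, endpoint=False)
    proj = np.outer(np.cos(angles), x) + np.outer(np.sin(angles), y)
    min_diameter = (proj.max(axis=1) - proj.min(axis=1)).min()
    return area, max_diameter, min_diameter
```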
  • According to the present embodiment, it is possible to provide a catheter system 10 that easily embodies and displays the complementary line 572 that the user pictures in his or her head based on the tomographic image. It is therefore possible to provide a catheter system 10 that is easy for the user to use.
  • For example, before stent placement, the doctor inserts the diagnostic imaging catheter 40 into the blood vessel where the stent is scheduled to be placed and determines the stenotic state of the blood vessel based on the tomographic image.
  • A doctor, or a medical staff member instructed by the doctor, operates the input device 32 to measure the area and inner diameter of the blood vessel lumen in the tomographic image.
  • The doctor decides on the stent to be placed based on the measurement results.
  • The control unit 21 can calculate the area, inner diameter, and the like of the blood vessel lumen based on the above-mentioned complementary line 572. That is, it is possible to provide a catheter system 10 that calculates the area and inner diameter of the blood vessel lumen.
  • The control unit 21 may display information on recommended stents based on the calculated area, inner diameter, and the like. Since methods for calculating the area and inner diameter of a portion surrounded by a closed curve are conventional, their detailed description is omitted.
  • After stent placement, the doctor inserts the diagnostic imaging catheter 40 into the same blood vessel and confirms, based on the tomographic image, that there is no gap between the blood vessel wall and the placed stent. If there is a gap between the vessel wall and the stent, the doctor takes measures such as re-dilation of the stent.
  • The tomographic image after stent placement contains a large number of information-missing portions 473 due to the strands of the stent. The external elastic membrane is therefore displayed as a broken line interrupted by the information-missing portions 473.
  • By using the above-mentioned complementary line 572, not only the doctor who set the positions of the designated point marks 571 but also other medical staff can easily grasp the indwelling state of the stent.
  • Part or all of the program may be executed on a large computer connected via a network, on a virtual machine running on the large computer, or on a cloud computing system.
  • Part or all of the program may also be executed by a plurality of personal computers or the like that perform distributed processing.
  • The information processing device 20 may be a general-purpose personal computer, tablet, smartphone, or the like that does not have a function for connecting the MDU 33 and the diagnostic imaging catheter 40.
  • In a modification, when, for example, five or more designated point marks 571 have been received in step S501, the control unit 21 generates and displays the complementary line 572 (steps S503 and S504). After that, the control unit 21 returns to step S501 and accepts further designations of designated point marks 571.
  • The user places an additional designated point mark 571 where the line he or she actually wants to draw deviates from the displayed complementary line 572. In this way, the user can create the desired complementary line 572 with a small number of designated point marks 571.
  • The present embodiment relates to a catheter system 10 that automatically displays candidate point marks 573 at locations similar to a designated point mark 571 specified by the user.
  • The description of the parts common to the first embodiment is omitted.
  • FIG. 9 is an example of a screen displayed by the catheter system 10 of the second embodiment. As described with reference to FIG. 6, the user operates the cursor 68 via the input device 32 to specify the position of the designated point mark 571.
  • The control unit 21 detects a region similar to the peripheral portion of the designated point mark 571 whose designation was received. Specifically, the control unit 21 extracts a template region 574 of a predetermined number of pixels centered on the designated point mark 571. In FIG. 9, the template region 574 is shown by a virtual line.
  • The control unit 21 detects a region similar to the template region 574 from the paused image by template matching. Since template matching is a conventional technique, its details are omitted. It is desirable that the control unit 21 exclude the vicinity of locations where designated point marks 571 are already placed from the targets of template matching.
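  • A minimal sketch of this candidate search, assuming OpenCV's cv2.matchTemplate for the template matching and masking out the neighbourhood of already-placed marks; the patch size, exclusion radius, threshold, and coordinate conventions are illustrative assumptions.

```python
import cv2
import numpy as np

def suggest_candidate(image, mark, placed_marks, half=16, exclude=24, thresh=0.7):
    """Return the centre (y, x) of a region similar to the patch around `mark`, or None."""
    y, x = mark
    template = image[y - half:y + half, x - half:x + half]     # template region 574
    response = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    # Exclude the vicinity of marks that are already placed (including `mark` itself)
    for my, mx in [mark] + list(placed_marks):
        ry, rx = my - half, mx - half                          # response-map coordinates
        response[max(ry - exclude, 0):ry + exclude,
                 max(rx - exclude, 0):rx + exclude] = -1.0
    _, best, _, loc = cv2.minMaxLoc(response)
    if best < thresh:
        return None                        # no sufficiently similar region found
    return (loc[1] + half, loc[0] + half)  # candidate point mark 573 centre
```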
  • When a similar region is detected, the control unit 21 displays a candidate point mark 573 in the central portion of that region.
  • The control unit 21 may display a plurality of candidate point marks 573 at the same time.
  • The control unit 21 may instead display only the candidate point mark 573 with the highest degree of similarity in the template matching.
  • The candidate point mark 573 has a different appearance from the designated point mark 571 so that the user can easily distinguish between the two.
  • The user decides whether or not to approve the displayed candidate point mark 573 and operates the input device 32 accordingly. For example, the user approves by clicking the candidate point mark 573 and rejects it by dragging and dropping it outside the first image field 51.
  • The user may also input an approval or rejection instruction by voice.
  • The control unit 21 changes a candidate point mark 573 approved by the user into a designated point mark 571.
  • The control unit 21 deletes a candidate point mark 573 rejected by the user.
  • The user may defer the decision on a displayed candidate point mark 573 and designate the position of the next point.
  • When the designated point mark 571 indicates the external elastic membrane, the control unit 21 extracts points indicating the external elastic membrane by template matching and displays candidate point marks 573.
  • Likewise, when the designated point mark 571 indicates the inner surface of the lumen wall, the control unit 21 extracts points indicating the inner surface of the lumen wall and displays candidate point marks 573.
  • FIG. 10 is a flowchart illustrating a processing flow of the program of the second embodiment.
  • The control unit 21 accepts the position designation of a designated point mark 571 by the user (step S511).
  • The control unit 21 determines whether or not the input of designated point marks 571 has been completed (step S512). For example, when the user instructs the end of input, the control unit 21 determines that the reception of designated point marks 571 has been completed.
  • The control unit 21 may also determine that the input has been completed when it has received a predetermined number of designated point marks 571.
  • If it is determined that the input has not been completed (NO in step S512), the control unit 21 extracts the template region 574 centered on the position received in step S511 from the paused image (step S513).
  • The control unit 21 executes template matching to detect a region similar to the template region 574 in the paused image (step S514). Since template matching is a conventional technique, its details are omitted.
  • The control unit 21 determines whether or not the detection of a region similar to the template region 574 has succeeded (step S515). For example, the control unit 21 determines that the detection has succeeded when the similarity between the region detected in step S514 and the template region 574 exceeds a predetermined threshold.
  • The degree of similarity can be evaluated by, for example, the sum of squared differences (SSD: Sum of Squared Difference) of the pixel values at the same positions in the two regions, the sum of absolute differences (SAD: Sum of Absolute Difference), or normalized cross-correlation (NCC: Normalized Cross-Correlation). These methods are merely examples; the similarity evaluation method is not limited to them.
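  • The three measures named above can be written directly in numpy. This sketch is for illustration only; the NCC variant below subtracts the mean (zero-mean NCC), which is one common definition.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences: smaller means more similar."""
    d = a.astype(float) - b.astype(float)
    return float(np.sum(d * d))

def sad(a, b):
    """Sum of absolute differences: smaller means more similar."""
    return float(np.sum(np.abs(a.astype(float) - b.astype(float))))

def ncc(a, b):
    """Normalized cross-correlation: closer to 1 means more similar."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))
```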
  • If it is determined that the detection has not succeeded (NO in step S515), the control unit 21 returns to step S511. If it is determined that it has succeeded (YES in step S515), the control unit 21 displays a candidate point mark 573 in the center of the region matching the template region 574 (step S516).
  • The control unit 21 displays the candidate point mark 573 in the center of, for example, the region with the highest degree of similarity.
  • The control unit 21 may instead display a candidate point mark 573 at the center of each of the plurality of regions determined to satisfy the condition in step S515.
  • The control unit 21 then returns to step S511.
  • The user inputs approval or rejection of a displayed candidate point mark 573 at an arbitrary timing by operating the cursor 68, by voice input, or the like.
  • The control unit 21 changes an approved candidate point mark 573 into a designated point mark 571 and deletes a rejected candidate point mark 573.
  • When it is determined that the input has been completed (YES in step S512), the control unit 21 generates an annular complementary line 572 connecting the designated point marks 571 received in step S511 (step S517). The control unit 21 superimposes the generated complementary line 572 on the paused image and displays the screen described with reference to FIG. 7 (step S518). The control unit 21 then ends the process.
  • According to the present embodiment, it is possible to provide a catheter system 10 that automatically displays candidate point marks 573 at positions similar to the position where the user placed a designated point mark 571. It is thus possible to provide a catheter system 10 that reduces the user's burden in inputting designated point marks 571.
  • The present embodiment relates to a catheter system 10 that automatically generates complementary lines 572 in real time.
  • The description of the parts common to the first embodiment is omitted.
  • FIG. 11 is an explanatory diagram illustrating the configuration of the learning model 61.
  • The learning model 61 is a model that accepts an input image and outputs a prediction in the form of an output image showing a complementary line 572.
  • The learning model 61 is generated by machine learning. The method of generating the learning model 61 will be described later.
  • A learning model 61 is created for each complementary line to be generated: for example, a model that outputs a complementary line 572 corresponding to the external elastic membrane, a model that outputs a complementary line 572 corresponding to the inner surface of the lumen wall, a model that outputs a complementary line 572 corresponding to the periphery of a calcified portion, and a model that outputs a complementary line 572 corresponding to the periphery of a plaque.
  • The learning model 61 may also be created for each combination of a site into which the diagnostic imaging catheter 40 is inserted, such as a coronary artery, a lower limb artery, a bile duct, a pancreatic duct, or a bronchus, and a complementary line to be generated.
  • Each learning model 61 is stored in the auxiliary storage device 23.
  • The learning model 61 may instead be stored in an external large-capacity storage device connected to the information processing device 20.
  • The control unit 21 may also acquire a learning model 61 stored on a server or the like via the network each time it is needed.
  • The learning model 61 may be a model that accepts an input image and outputs an output image in which the complementary line 572 is superimposed on the input image.
  • The input image input to the learning model 61 may be a longitudinal tomographic image.
  • The learning model 61 may be a model that accepts an input image and outputs a group of coordinates through which the complementary line 572 passes.
  • FIGS. 12 and 13 show examples of screens displayed by the catheter system 10 of the third embodiment.
  • FIG. 12 shows an example of a screen displayed on the display device 31 by the control unit 21 before creating the complementary line 572.
  • the screen shown in FIG. 12 includes a first image field 51, a target selection button 591, and a start button 584.
  • The target selection button 591 is a pull-down menu button with which the user selects the part for which the complementary line 572 is to be displayed.
  • In FIG. 12, "EEM", that is, the external elastic membrane, is selected.
  • The user observes the real-time image displayed in the first image field 51 and decides on the part for which the complementary line 572 is to be displayed.
  • The user operates the target selection button 591 to select the part for which to display the complementary line 572, and then selects the start button 584.
  • FIG. 13 shows an example of a screen displayed in the first image field 51 by the control unit 21 after accepting the user's selection of the start button 584.
  • The screen shown in FIG. 13 includes the first image field 51, the target selection button 591, and an end button 585.
  • In the first image field 51, a real-time image on which the complementary line 572 corresponding to "EEM" is superimposed is displayed.
  • When the user wants to end the display of the complementary line 572, the user selects the end button 585.
  • A button or the like for accepting changes to the display mode of the complementary line 572, such as its color and thickness, may also be displayed.
  • A button or the like for temporarily hiding the complementary line 572 may also be displayed.
  • FIG. 14 is a flowchart illustrating a processing flow of the program of the third embodiment.
  • The control unit 21 acquires the complement target part set by the user using the target selection button 591 (step S521).
  • The control unit 21 selects the learning model 61 corresponding to the complement target part acquired in step S521 (step S522). In the subsequent processing, the control unit 21 uses the learning model 61 selected in step S522.
  • The control unit 21 acquires a real-time image from the catheter control unit 271 (step S523).
  • The control unit 21 inputs the acquired real-time image into the learning model 61 and acquires an output image showing the complementary line 572 (step S524).
  • The control unit 21 displays the real-time image on which the complementary line 572 is superimposed in the first image field 51 (step S525).
  • The control unit 21 determines whether or not to end the process (step S526). For example, when the selection of the end button 585 is accepted, or when the diagnostic imaging catheter 40 is removed from the MDU 33, the control unit 21 determines that the process is to be ended.
  • If it is determined that the process is not to be ended (NO in step S526), the control unit 21 returns to step S523. If it is determined that the process is to be ended (YES in step S526), the control unit 21 ends the process.
  • The control unit 21 may execute step S524 only once every two or three frames, for example.
  • The control unit 21 may determine the frames to be processed in step S524 in synchronization with the electrocardiogram.
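  • Put together, steps S523 to S526 form a simple acquisition-inference-display loop. Below is a minimal sketch of that loop, where `catheter`, `model`, `display`, and `compose` are hypothetical stand-ins for the catheter control unit 271, the learning model 61, the first image field 51, and the superimposition step.

```python
def compose(frame, overlay):
    """Naive superimposition of the complementary-line image on the frame."""
    return frame if overlay is None else frame + overlay

def realtime_complement_loop(catheter, model, display, every_n=2):
    overlay = None
    frame_index = 0
    while not catheter.should_stop():       # end button selected / catheter removed
        frame = catheter.read_frame()       # step S523: acquire real-time image
        if frame_index % every_n == 0:      # run inference only every N frames
            overlay = model(frame)          # step S524: image showing complementary line 572
        display(compose(frame, overlay))    # step S525: superimposed display
        frame_index += 1
```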
  • According to the present embodiment, it is possible to provide a catheter system 10 that automatically displays a complementary line 572 on a real-time image, and in which the user can specify the site to be indicated by the complementary line 572.
  • The present embodiment relates to a catheter system 10 that displays an image complementing the information-missing portion 473.
  • The description of the parts common to the first embodiment is omitted.
  • FIG. 15 is an explanatory diagram illustrating the configuration of the learning model 61 of the fourth embodiment.
  • The learning model shown in FIG. 15 is a model that, when an input image having an information-missing portion 473 is input, outputs a prediction in the form of an output image in which the information-missing portion 473 is complemented.
  • The output image of FIG. 15 is an example of a complemented image generated so as to complement the information-missing portion 473.
  • The learning model 61 includes a missing region extraction model 611, a complementary model 612, a cutting section 613, and a compositing section 614.
  • The missing region extraction model 611 and the complementary model 612 are neural network models having learnable parameters.
  • The cutting section 613 and the compositing section 614 are arithmetic units that calculate the pixel value of each pixel constituting an image.
  • The missing region extraction model 611 is a model that receives an input image including the information-missing portion 473 and generates a missing area image 483 by extracting the part showing the information-missing portion 473.
  • The missing region extraction model 611 has, for example, the configuration of Mask R-CNN (Region-based Convolutional Neural Network), which is a kind of object detection model.
  • In FIG. 15, the information-missing portion 473 in the missing area image 483 is shown by hatching.
  • The missing area image 483 is an image of the same size as the input image, in which the pixel value of pixels corresponding to the information-missing portion 473 is "1" and the pixel value of all other pixels is "0".
  • The complementary model 612 is a model that accepts an input image including the information-missing portion 473 and generates an estimated image 486 without the information-missing portion 473.
  • The complementary model 612 has, for example, a configuration in which a plurality of convolutional layers are stacked.
  • The cutting section 613 generates the cut image 487 by replacing the pixel values of the pixels of the estimated image 486 that do not correspond to the information-missing portion 473 with the pixel value of the background color.
  • The compositing section 614 composites the input image and the cut image 487 to generate the output image.
  • The output image is an image in which only the portion of the input image corresponding to the information-missing portion 473 is replaced with the image generated by the complementary model 612.
  • The cut image 487 is an example of complementary information that complements the information-missing portion 473.
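  • The cutting section 613 and the compositing section 614 reduce to simple per-pixel arithmetic on the binary missing area image 483. A minimal numpy sketch, assuming all images are arrays of identical shape:

```python
import numpy as np

def cut_and_composite(input_image, missing_mask, estimated_image):
    """Replace only the information-missing pixels of the input with the estimate."""
    m = missing_mask.astype(input_image.dtype)   # 1 in the information-missing portion 473
    cut_image = estimated_image * m              # cutting section 613: cut image 487
    output = input_image * (1 - m) + cut_image   # compositing section 614
    return output, cut_image
```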
  • The learning model 61 has been trained so that, when an input image having an information-missing portion 473 is input, it outputs an output image that does not look unnatural to an expert.
  • The method of generating the learning model 61 will be described later.
  • FIG. 16 is an explanatory diagram illustrating how to use the learning model 61 of the fourth embodiment.
  • The control unit 21 inputs the input image into the learning model 61 and acquires the cut image 487 output from the cutting section 613.
  • FIGS. 17 to 21 show examples of screens displayed by the catheter system 10 of the fourth embodiment.
  • The screen shown in FIG. 17 includes the first image field 51, a non-complement button 580, the complement button 583, a coloring button 586, a border button 587, and a shading button 588.
  • One of the non-complement button 580 and the complement button 583 is always selected.
  • FIG. 17 shows a state in which the non-complement button 580 is selected.
  • The coloring button 586, the border button 587, and the shading button 588 are set to be non-selectable.
  • A real-time image including the information-missing portion 473 is displayed in the first image field 51. In FIG. 17, the information-missing portion 473 is not complemented.
  • FIG. 18 shows an example of a screen displayed when the control unit 21 accepts the selection of the complement button 583.
  • In FIG. 18, the non-complement button 580 is deselected, and the coloring button 586, the border button 587, and the shading button 588 become selectable.
  • The second image field 52 is displayed at the upper right of the screen.
  • In the first image field 51, an image obtained by compositing the real-time image with the cut image 487 acquired from the learning model 61 is displayed.
  • A real-time image is displayed in the second image field 52.
  • The user can determine the cross-sectional shape, cross-sectional area, inner diameter, and the like of the luminal organ based on the image complementing the information-missing portion 473 displayed in the first image field 51.
  • The control unit 21 can calculate the area, inner diameter, and the like of the blood vessel lumen based on the image without the information-missing portion 473 displayed in the first image field 51.
  • The control unit 21 may display information on recommended stents based on the calculated area, inner diameter, and the like.
  • The user can observe the real-time image before complementation displayed in the second image field 52 as needed. By comparing the first image field 51 and the second image field 52, the user can confirm which portions were complemented by the learning model 61.
  • FIG. 19 shows an example of a screen displayed when the control unit 21 accepts the selection of the border button 587.
  • In FIG. 19, the control unit 21 composites the real-time image with the cut image 487 to which a border has been added along its edge, and displays the result in the first image field 51.
  • The user can thereby easily distinguish between the portion complemented by the learning model 61 and the portion actually acquired by the diagnostic imaging catheter 40.
  • FIG. 20 is an example of a screen displayed when the control unit 21 accepts the selection of the border button 587 and the shaded button 588.
  • The control unit 21 composites the real-time image with the cut image 487 that has been bordered and shaded, and displays the result in the first image field 51. This provides a catheter system 10 in which even an inexperienced user is unlikely to mistake the portion complemented by the learning model 61 for a portion actually acquired by the diagnostic imaging catheter 40.
  • The user can select the coloring button 586, the border button 587, and the shading button 588 in any combination.
  • The coloring button 586, the border button 587, and the shading button 588 are examples of means for selecting the marking mode for the cut image 487.
  • The marking method is not limited to these; marking of any aspect can be set.
  • FIG. 21 is an example of a screen displayed when the control unit 21 accepts the selection of the coloring button 586.
  • In FIG. 21, the control unit 21 composites the real-time image with the cut image 487 whose high-echo portions are colored, and displays the result in the first image field 51. The user can easily distinguish between the portion complemented by the learning model 61 and the portion actually acquired by the diagnostic imaging catheter 40.
  • FIG. 22 is a flowchart illustrating a processing flow of the program of the fourth embodiment.
  • The control unit 21 acquires a real-time image from the catheter control unit 271 (step S531).
  • The control unit 21 determines whether or not the selection of the complement button 583 has been accepted (step S532).
  • When it is determined that it has not been accepted (NO in step S532), the control unit 21 displays the real-time image in the first image field 51 (step S534).
  • When it is determined that it has been accepted (YES in step S532), the control unit 21 inputs the acquired real-time image into the learning model 61 and acquires the cut image 487 (step S533). The control unit 21 then determines whether or not a marking selection for the cut image 487 has been accepted via the coloring button 586, the border button 587, or the shading button 588 (step S535).
  • When it is determined that no marking selection has been accepted (NO in step S535), the control unit 21 composites the real-time image and the cut image 487 (step S536). When it is determined that a marking selection has been accepted (YES in step S535), the control unit 21 composites the real-time image and the cut image 487 with the specified marking (step S537).
  • After step S536 or step S537, the control unit 21 displays the composited image (step S538).
  • The control unit 21 then determines whether or not to end the process (step S539). For example, when the diagnostic imaging catheter 40 is removed from the MDU 33, the control unit 21 determines that the process is to be ended.
  • If it is determined that the process is not to be ended (NO in step S539), the control unit 21 returns to step S531. If it is determined that it is to be ended (YES in step S539), the control unit 21 ends the process.
  • According to the present embodiment, it is possible to provide a catheter system 10 that complements and displays the information-missing portion 473 in a natural state while preserving the portions actually acquired using the diagnostic imaging catheter 40.
  • The present embodiment relates to a catheter system 10 that complements the information-missing portion 473 of a transverse tomographic image with a transverse tomographic image of another frame.
  • The description of the parts common to the first embodiment is omitted.
  • FIG. 23 is an explanatory diagram illustrating a method of generating a screen displayed by the catheter system 10 of the fifth embodiment.
  • The present embodiment will be described taking as an example a case where the diagnostic imaging catheter 40 is inserted into a site where a stent has been placed.
  • The first tomographic image and the second tomographic image are transverse tomographic images acquired at positions slightly shifted in the longitudinal direction of the probe portion 41. There is no significant difference in the structure of the luminal organ itself between the two images. Due to the strands of the stent, a plurality of information-missing portions 473 occur radially. Since the stent has a mesh shape, the positions of the information-missing portions 473 differ when the slice plane of the transverse tomographic image shifts in the longitudinal direction of the probe portion 41.
  • The control unit 21 extracts the information-missing portions 473 from the second tomographic image.
  • The control unit 21 cuts out the portions corresponding to each information-missing portion 473 from the first tomographic image and composites them onto the second tomographic image. As a result, a composite image that does not include the information-missing portions 473 is generated.
  • FIG. 24 is a flowchart illustrating a processing flow of the program of the fifth embodiment.
  • The control unit 21 acquires a first real-time image from the catheter control unit 271 and records it in the main storage device 22 or the auxiliary storage device 23 (step S551).
  • The first real-time image is an example of a catheter image generated using the diagnostic imaging catheter 40.
  • The control unit 21 extracts the information-missing portions 473 in the first real-time image (step S552).
  • The information-missing portion 473 is extracted by, for example, template matching using a fan-shaped low-echo region as the template. A fan-shaped region with a high echo at its central end and a low echo elsewhere may also be used as the template.
  • The information-missing portion 473 may instead be extracted by a learning model using an object detection algorithm such as Mask R-CNN.
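  • One simple way to locate acoustic-shadow candidates, sketched below, samples the image along rays from the catheter centre and flags directions whose mean echo is low; the fan-shaped missing portion then spans those angles. This is only an illustrative alternative to the template matching described above, and the radii, angle count, and threshold are assumptions.

```python
import numpy as np

def shadow_angles(image, center, r0=30, r1=200, n_angles=360, thresh=20.0):
    """Return angle indices (degrees) whose outward ray shows a low mean echo."""
    cy, cx = center
    radii = np.arange(r0, r1, dtype=float)
    angles = np.deg2rad(np.arange(n_angles, dtype=float))
    # Nearest-neighbour sampling of each ray from the centre outwards
    ys = np.clip((cy + radii[None, :] * np.sin(angles[:, None])).astype(int),
                 0, image.shape[0] - 1)
    xs = np.clip((cx + radii[None, :] * np.cos(angles[:, None])).astype(int),
                 0, image.shape[1] - 1)
    mean_echo = image[ys, xs].mean(axis=1)   # one mean brightness per direction
    return np.flatnonzero(mean_echo < thresh)
```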
  • The control unit 21 acquires a second real-time image from the catheter control unit 271 and records it in the main storage device 22 or the auxiliary storage device 23 (step S553). During the execution of this program, the user slightly advances and retracts the diagnostic imaging catheter 40 in the vicinity of the observation target site.
  • This program may also be executed during a pullback operation using the MDU 33.
  • The control unit 21 acquires the first real-time image and then acquires the second real-time image one to several frames later.
  • The second real-time image is an example of a second catheter image generated at a different time from the first catheter image.
  • The control unit 21 extracts the information-missing portions 473 in the second real-time image (step S554).
  • The control unit 21 determines whether or not the information-missing portions 473 extracted in step S552 and those extracted in step S554 overlap (step S555). For example, the control unit 21 determines that two information-missing portions 473 overlap when the area of their overlapping part exceeds a predetermined area.
  • If it is determined that they overlap (YES in step S555), the control unit 21 returns to step S553. When it is determined that there is no overlap (NO in step S555), the control unit 21 composites the first real-time image and the second real-time image (step S556).
  • The control unit 21 cuts out the portions corresponding to the information-missing portions 473 extracted in step S554 from the first real-time image and composites them onto the second real-time image.
  • Alternatively, the control unit 21 may cut out the portions corresponding to the information-missing portions 473 extracted in step S552 from the second real-time image and composite them onto the first real-time image.
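  • Steps S555 and S556 amount to an overlap test on the two extracted masks followed by a masked copy. A minimal numpy sketch, with the overlap threshold as an illustrative parameter:

```python
import numpy as np

def combine_frames(first, mask_first, second, mask_second, max_overlap_px=50):
    """Fill the second frame's missing portions from the first frame, if possible."""
    overlap = np.logical_and(mask_first, mask_second).sum()   # step S555: overlap area
    if overlap > max_overlap_px:
        return None                             # shadows overlap; wait for another frame
    m = mask_second.astype(second.dtype)
    return second * (1 - m) + first * m         # step S556: composite the two frames
```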
  • The control unit 21 displays the composited image in the first image field 51 of, for example, the screens described with reference to FIGS. 17 to 21 (step S557).
  • The control unit 21 then ends the process.
  • According to the present embodiment, since the information-missing portions 473 are complemented using a transverse tomographic image of another frame, it is possible to provide a catheter system 10 in which false images due to image synthesis are unlikely to occur.
  • The present embodiment relates to a catheter system 10 that complements the information-missing portion 473 of an image using the image of the same frame.
  • The description of the parts common to the first embodiment is omitted.
  • FIG. 25 is an explanatory diagram illustrating a method of generating a screen displayed by the catheter system 10 of the sixth embodiment. The present embodiment will be described taking as an example an image including an information-missing portion 473 caused by a guide wire.
  • The control unit 21 extracts the information-missing portion 473 from the original image.
  • The control unit 21 copies a pasting area 485 having a shape corresponding to the information-missing portion 473 from a portion of the original image where information is not missing, and composites it onto the information-missing portion 473. As a result, a composite image that does not include the information-missing portion 473 is generated.
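  • Because the missing sector is roughly fan-shaped around the image centre, a rotated copy of the same frame can serve as the pasting area 485. A minimal sketch assuming scipy; in practice the rotation angle would be searched until the re-extracted missing portion is small enough, as in steps S563 to S566 below.

```python
import numpy as np
from scipy.ndimage import rotate

def paste_rotated(image, missing_mask, angle_deg):
    """Fill the missing sector with the same image rotated about its centre."""
    candidate = rotate(image, angle_deg, reshape=False, order=1)  # pasting-area source
    m = missing_mask.astype(image.dtype)
    return image * (1 - m) + candidate * m      # composite without the missing part
```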
  • FIG. 26 is a flowchart illustrating a processing flow of the program of the sixth embodiment.
  • The control unit 21 acquires a real-time image from the catheter control unit 271 (step S561).
  • The control unit 21 extracts the information-missing portion 473 in the real-time image (step S562).
  • The control unit 21 extracts a candidate for the pasting area 485 from the real-time image acquired in step S561 (step S563).
  • The candidate for the pasting area 485 is extracted, for example, by cutting out from the real-time image a portion corresponding to the shape of the information-missing portion 473 rotated about the center of the image.
  • The control unit 21 composites the real-time image acquired in step S561 and the candidate for the pasting area 485 extracted in step S563 (step S564).
  • The control unit 21 extracts the information-missing portion 473 from the composite image composited in step S564 (step S565).
  • The method for extracting the information-missing portion 473 in step S565 is the same as in step S562.
  • The control unit 21 determines whether or not the composite image includes an information-missing portion 473 (step S566). Specifically, when an information-missing portion 473 exceeding a predetermined area is extracted, the control unit 21 determines that the composite image includes an information-missing portion 473.
  • When it is determined that an information-missing portion 473 is included (YES in step S566), the control unit 21 returns to step S563. When it is determined that none is included (NO in step S566), the control unit 21 displays the image composited in step S564 in, for example, the first image field 51 of the screens described with reference to FIGS. 17 to 21 (step S567). The control unit 21 then ends the process.
  • According to the present embodiment, since the information-missing portion 473 is complemented using a pasting area 485 acquired from the transverse tomographic image of the same frame, it is possible to provide a catheter system 10 with little time lag associated with the complementing process.
  • The present embodiment relates to a method of generating the learning model 61 of the third embodiment described with reference to FIG. 11. The description of the parts common to the third embodiment is omitted.
  • FIG. 27 is an explanatory diagram illustrating the record layout of the training DB.
  • The training DB is a database that records detection targets, input images, and output images in association with each other, and is used for training the learning model 61 by machine learning.
  • The training DB has a target field, an input image field, and an output image field.
  • In the target field, the name of the target for which the complementary line 572 is to be created is recorded.
  • In the input image field, an image acquired using the diagnostic imaging catheter 40 is recorded.
  • In the output image field, an image of the complementary line indicating the target recorded in the target field is recorded.
  • In the training DB, a large number of combinations of a target name, an input image generated using the diagnostic imaging catheter 40, and an output image confirmed to be correct by an expert or the like are recorded.
  • The training DB is generated, for example, based on case records created using the first or second embodiment by a specialist skilled in the use of the diagnostic imaging catheter 40.
  • FIG. 28 is a flowchart illustrating a processing flow of the program of the seventh embodiment. A case where machine learning of the learning model 61 is performed using the information processing device 20 will be described as an example.
  • The program of FIG. 28 may instead be executed on hardware different from the information processing device 20, and the learning model 61 for which machine learning has been completed may be copied to the auxiliary storage device 23 via the network.
  • A learning model 61 trained on one piece of hardware can thus be used by a plurality of information processing devices 20.
  • An untrained model combining, for example, convolutional layers, pooling layers, and fully connected layers is prepared in advance.
  • The program of FIG. 28 adjusts the parameters of the prepared model to perform machine learning.
  • The control unit 21 acquires the training records used for one epoch of training from the training DB (step S571).
  • The control unit 21 adjusts the parameters of the model so that, when an input image is fed to the input layer of the model, the corresponding output image is output from the output layer (step S572).
  • The control unit 21 determines whether or not to end the process (step S573). For example, when the control unit 21 has finished training a predetermined number of epochs, it determines that the process is to be ended.
  • The control unit 21 may instead acquire test data from the training DB, input it to the model undergoing machine learning, and determine that the process is to be ended when output of a predetermined accuracy is obtained.
  • If it is determined that the process is not to be ended (NO in step S573), the control unit 21 returns to step S571.
  • If it is determined that the process is to be ended (YES in step S573), the control unit 21 records the parameters of the trained model in the auxiliary storage device 23 (step S574). After that, the control unit 21 ends the process.
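  • For concreteness, the loop of steps S571 to S574 can be sketched as an ordinary supervised training loop. The sketch below assumes PyTorch, a pixel-wise regression loss, and a hypothetical `training_db` iterable of (input image, output image) tensor pairs; none of these choices are specified by the patent.

```python
import torch
from torch import nn

def train_learning_model(model, training_db, epochs=50, lr=1e-3):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()                              # pixel-wise image regression loss
    for _ in range(epochs):                             # step S573: predetermined epoch count
        for input_image, output_image in training_db:   # step S571: training records
            prediction = model(input_image)
            loss = loss_fn(prediction, output_image)    # step S572: parameter adjustment
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    torch.save(model.state_dict(), "learning_model_61.pt")  # step S574: record parameters
```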
  • Through the above processing, a trained model is generated.
  • According to the present embodiment, the learning model 61 described in the third embodiment can be generated by machine learning.
  • The present embodiment relates to a method of generating the learning model 61 of the fourth embodiment described with reference to FIG. 15. The description of the parts common to the fourth embodiment is omitted.
  • FIG. 29 is an explanatory diagram illustrating an outline of a method for generating the learning model 61.
  • The learning model 61 is a model that accepts an input image and generates an output image.
  • The classifier 65 is a model that receives the output image output from the learning model 61 and determines whether it is a true image or a false image.
  • The classifier 65 has, for example, a structure in which convolutional layers and pooling layers are repeated, followed by a fully connected layer and a softmax layer.
  • By adversarially training the learning model 61 and the classifier 65, that is, by the technique of GAN (Generative Adversarial Networks), the learning model 61 becomes able to generate natural output images.
  • FIG. 30 is a flowchart illustrating a processing flow of the program of the eighth embodiment. A case where machine learning of the learning model 61 is performed using the information processing device 20 will be described as an example.
  • The program of FIG. 30 may instead be executed on hardware different from the information processing device 20, and the learning model 61 for which machine learning has been completed may be copied to the auxiliary storage device 23 via the network.
  • A learning model 61 trained on one piece of hardware can thus be used by a plurality of information processing devices 20.
  • The control unit 21 acquires a plurality of input images (step S581).
  • The input images are recorded, for example, in the input image field of the training DB described with reference to FIG. 27.
  • The input images include images that do not contain an information-missing portion 473.
  • The control unit 21 adjusts the parameters of the classifier 65 so that it outputs "false" for images input to the classifier 65 via the learning model 61, and outputs "true" for images that do not contain an information-missing portion 473 and are input to the classifier 65 without passing through the learning model 61 (step S582).
  • The control unit 21 adjusts the parameters of the learning model 61, specifically the parameters of the missing region extraction model 611 and the complementary model 612, so that the classifier 65 outputs "true" and "false" each with a probability of 50% (step S583).
  • The control unit 21 may repeat the processing of steps S582 and S583 a plurality of times.
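  • A minimal sketch of one alternation of steps S582 and S583 in PyTorch, where `generator` stands for the learning model 61 and `classifier` for the classifier 65, assumed here to end in a sigmoid that outputs the probability of "true". This is a generic GAN step, not the patent's exact procedure.

```python
import torch
from torch import nn

bce = nn.BCELoss()

def adversarial_step(generator, classifier, g_opt, d_opt, real_images):
    n = real_images.shape[0]
    true_label, false_label = torch.ones(n, 1), torch.zeros(n, 1)
    # Step S582: teach the classifier "true" for images that did not pass through
    # the learning model and "false" for images that did
    fake_images = generator(real_images).detach()
    d_loss = bce(classifier(real_images), true_label) + \
             bce(classifier(fake_images), false_label)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    # Step S583: adjust the learning model so the classifier is fooled,
    # ideally answering "true"/"false" with 50% probability
    g_loss = bce(classifier(generator(real_images)), true_label)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```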
  • The control unit 21 records the parameters of the trained learning model 61 in the auxiliary storage device 23 (step S584). After that, the control unit 21 ends the process. Through the above processing, the trained learning model 61 is generated.
  • According to the present embodiment, the learning model 61 described in the fourth embodiment can be generated by machine learning.
  • The present embodiment relates to a catheter system 10 that distinguishes between high-reliability portions and low-reliability portions of the complementary line 572.
  • The description of the parts common to the third embodiment is omitted.
  • FIG. 31 is an explanatory diagram illustrating a method of generating an image displayed by the catheter system 10 of the ninth embodiment.
  • A case where the display range of the tomographic image obtained using the diagnostic imaging catheter 40 is small, that is, a case where an input image showing the vicinity of the diagnostic imaging catheter 40 is used, will be described as an example.
  • In the input image, the guide wire image 472 and a shadow-forming portion image 474 are displayed.
  • The shadow-forming portion image 474 is an image showing a strong reflector such as a strongly calcified site, a stent, or another medical device used together with the diagnostic imaging catheter 40.
  • Information-missing portions 473 due to acoustic shadows are formed outside the guide wire image 472 and outside the shadow-forming portion image 474. That is, the guide wire image 472 and the shadow-forming portion image 474 are examples of images showing shadow-forming portions.
  • The control unit 21 generates two complementary lines 572, a first complementary line 561 and a second complementary line 562, based on the input image (step S101).
  • The first complementary line 561 indicates the inner surface of the blood vessel, and the second complementary line 562 indicates the external elastic membrane.
  • The number of complementary lines 572 generated in step S101 may be one, or three or more.
  • The control unit 21 can generate an image with complementary lines by compositing the input image and the complementary lines 572 (step S102).
  • The image with complementary lines is shown for convenience of explanation and need not actually be generated.
  • the control unit 21 extracts the guide wire image 472 and the shadow forming unit image 474 from the input image or the image with complementary lines (step S103).
  • the guide wire image 472 is a region having a brightness higher than a predetermined brightness, and is a region existing inside the first complementary line 561.
  • the control unit 21 may extract the guide wire image 472 by pattern matching based on the shape and dimensions specified in advance.
  • the shadow forming portion image 474 is a region having a brightness higher than a predetermined brightness and is a region existing outside the first complementary line 561.
  • the shadow forming portion image 474 corresponds to, for example, a stent wire or a strongly calcified site.
  • the guide wire image 472 and the shadow forming portion image 474 are examples of the portion in which the cause of the formation of the acoustic shadow portion is depicted.
  • the control unit 21 determines the low reliability region 55 based on the extracted guide wire image 472 and the shadow forming unit image 474.
  • the low-reliability region 55 consists of a substantially fan-shaped region combining the vicinity of the guide wire image 472 with the region outside it, and a substantially fan-shaped region combining the vicinity of the shadow forming portion image 474 with the region outside it.
  • the control unit 21 may include a region having a predetermined width closer to the diagnostic imaging catheter than the guide wire image 472 and the shadow forming portion image 474 in the low reliability region 55.
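  • A minimal sketch of one way to determine such a region, assuming an RT-format image whose rows are scanning angles and whose columns run from the diagnostic imaging catheter outward; the threshold and the margins are illustrative placeholders for the predetermined values mentioned above.

```python
import numpy as np

def low_reliability_mask(rt_image: np.ndarray,
                         brightness_thr: float = 200.0,
                         angle_margin: int = 2,
                         inner_margin: int = 5) -> np.ndarray:
    """Return a boolean mask approximating the low reliability region 55."""
    n_angles, _ = rt_image.shape
    mask = np.zeros_like(rt_image, dtype=bool)
    bright = rt_image > brightness_thr          # shadow-forming candidates
    for a in range(n_angles):
        depths = np.nonzero(bright[a])[0]
        if depths.size == 0:
            continue
        # include a predetermined width on the catheter side of the bright
        # region, then everything outward of it (the acoustic shadow side)
        start = max(int(depths.min()) - inner_margin, 0)
        for da in range(-angle_margin, angle_margin + 1):
            mask[(a + da) % n_angles, start:] = True  # fan shape in XY view
    return mask
```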
  • FIG. 32 is an explanatory diagram illustrating a method of generating the complementary line 572 of the ninth embodiment.
  • the learning model 61 of the present embodiment is a model that accepts an input image and outputs classification data.
  • the classification data is data in which each part constituting the input image is associated with the label classified for each subject drawn in the part. Each part is, for example, each pixel.
  • the classification data can be used to generate a classification image in which the input image is painted separately for each drawn subject.
  • the learning model 61 outputs classification data in which each pixel constituting the input image is classified into, for example, a first label, a second label, or a third label.
  • An example of a classification image generated based on the classification data is shown.
  • the first label region 541, the second label region 542, and the third label region 543 are arranged substantially concentrically with the catheter image 471 at the center.
  • the first label region 541 indicates the lumen of the luminal organ into which the diagnostic imaging catheter 40 is inserted, that is, the lumen region of the blood vessel through which blood flows.
  • the second label region 542 indicates the luminal wall, that is, the blood vessel wall.
  • the third label region 543 indicates the region outside the luminal wall, that is, the region outside the outer elastic plate showing the outer surface of the luminal organ.
  • the third label area 543 includes, for example, muscles, nerves, fats, and other blood vessels in close proximity to the blood vessel into which the diagnostic imaging catheter 40 is inserted.
  • the boundary line between the first label area 541 and the second label area 542 corresponds to the above-mentioned first complementary line 561.
  • the boundary line between the second label area 542 and the third label area 543 corresponds to the above-mentioned second complementary line 562.
  • the complementary line 572 created based on the input image may be referred to as the mode 1 complementary line 572 in the following description.
  • In FIG. 32, an input image displayed in the so-called XY format and a classification image in which the classification data is displayed in the XY format are schematically shown.
  • the learning model 61 may receive a so-called RT-format input image, formed by arranging in parallel, in scanning-angle order, the scanning line data obtained by the sensor 42 transmitting and receiving ultrasonic waves, and output classification data. Since the conversion method from the RT format to the XY format is known, its description is omitted. Because such an input image is not affected by the interpolation processing involved in converting from the RT format to the XY format, more appropriate classification data is generated.
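  • For reference, the known RT-to-XY conversion can be sketched as a nearest-neighbour polar-to-Cartesian resampling; this toy version assumes rows are scanning angles and columns are depths, and shows where interpolation effects would otherwise enter.

```python
import numpy as np

def rt_to_xy(rt_image: np.ndarray, size: int = 512) -> np.ndarray:
    """Nearest-neighbour conversion of an RT-format image (angles x depths)
    into a square XY-format tomographic image centered on the catheter."""
    n_angles, n_depths = rt_image.shape
    c = (size - 1) / 2.0
    y, x = np.mgrid[0:size, 0:size].astype(float)
    r = np.hypot(x - c, y - c) * (n_depths / c)           # radius -> depth
    theta = np.mod(np.arctan2(y - c, x - c), 2 * np.pi)   # angle -> scan line
    a = (theta / (2 * np.pi) * n_angles).astype(int) % n_angles
    d = np.clip(r.astype(int), 0, n_depths - 1)
    xy = rt_image[a, d]
    xy[r >= n_depths] = 0                                 # outside scan range
    return xy
```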
  • the learning model 61 is, for example, a trained model that performs semantic segmentation on an input image.
  • the trained model that performs semantic segmentation is generated by machine learning using teacher data that pairs an input image generated using the diagnostic imaging catheter 40 with a classification image painted separately by an expert for each subject depicted in the input image.
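  • A minimal sketch of this supervised training, assuming PyTorch; the tiny SegmentationNet and the dummy teacher_loader (pairs of an input image and an expert-painted label map holding 0/1/2 for the first/second/third labels) are illustrative assumptions, as no network architecture is fixed here.

```python
import torch
from torch import nn

class SegmentationNet(nn.Module):
    """Toy per-pixel classifier standing in for the learning model 61."""
    def __init__(self, n_labels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_labels, 3, padding=1))

    def forward(self, x):
        return self.net(x)   # logits, shape (batch, n_labels, H, W)

model = SegmentationNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# dummy teacher data: (input image, expert label map) pairs
teacher_loader = [(torch.rand(4, 1, 64, 64),
                   torch.randint(0, 3, (4, 64, 64))) for _ in range(8)]

for image, label_map in teacher_loader:
    loss = loss_fn(model(image), label_map)   # per-pixel cross entropy
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```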
  • the learning model 61 may be created for each site into which the diagnostic imaging catheter 40 is inserted, such as the coronary artery, lower limb aorta, bile duct, pancreatic duct, and bronchus.
  • the learning model 61 may be created for each display range for generating a tomographic image using the diagnostic imaging catheter 40.
  • the learning model 61 may be created for each patient attribute, such as the patient's age or gender.
  • the complementary line 572 of the mode 1 may be generated by using the learning model 61 of the third embodiment described with reference to FIG.
  • a first complementary line 561 indicating the inner surface of the blood vessel and a second complementary line 562 indicating the outer elastic plate are generated using the appropriate learning model 61, respectively.
  • the complementary line 572 of mode 1 may be generated by any other method.
  • FIG. 33 is an explanatory diagram illustrating a method of generating the mode 2 complementary line 572. The description starts from the state in which the generation of the mode 1 complementary line 572 and the determination of the low reliability region 55, described with reference to FIG. 31, have been completed. In FIG. 33, the guide wire image 472 and the shadow forming portion image 474 are not shown.
  • the control unit 21 deletes the portion of the complementary line 572 of the mode 1 that is included in the low reliability region 55 (step S111).
  • the complementary line 572 is in a partially broken state.
  • the control unit 21 generates a change line 565 that smoothly connects the break points of the complementary line 572.
  • the control unit 21 generates the complementary line 572 of the second mode (step S112).
  • the change line 565 corresponding to each complementary line 572 is shown by a thick line.
  • the control unit 21 may accept the designation of the generation method by the user.
  • the complementary line 572 in the second mode is less susceptible to artifacts such as multiple echoes formed by a strong reflector such as a guide wire.
  • As described above, the mode 2 complementary line is a complementary line in which the regions of the complementary line 572 having lower reliability than the others have been modified based on the regions whose reliability is not low.
  • the control unit 21 determines that the reliability of the entire complementary line 572 is low when most of the complementary line 572 is included in the low reliability region 55. For example, when the shadow forming portion image 474 is a stent wire, or when there is extensive calcification, the low reliability region 55 is wide and most of the complementary line 572 is included in it.
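  • The deletion of the low-reliability portions, the bridging change line 565, and the whole-line reliability check can be sketched as follows. The polar representation (one radius per scanning angle), the linear bridging standing in for the smooth connection, and the 50% threshold are illustrative assumptions.

```python
import numpy as np

def mode2_line(radii: np.ndarray, low_rel: np.ndarray, max_ratio: float = 0.5):
    """radii[a]: radius of the mode 1 complementary line at scanning angle a;
    low_rel[a]: True where that angle lies in the low reliability region 55."""
    if low_rel.mean() > max_ratio:
        return None                      # treat the entire line as unreliable
    angles = np.arange(radii.size)
    kept = ~low_rel
    # delete the low-reliability portions (step S111) and bridge each break
    # with a change line (step S112); periodic interpolation keeps the
    # closed curve continuous across the 0/360-degree seam
    return np.interp(angles, angles[kept], radii[kept], period=radii.size)
```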
  • FIG. 34 is a flowchart illustrating a processing flow of the program of the ninth embodiment.
  • the control unit 21 acquires a real-time image from the catheter control unit 271 (step S601).
  • the control unit 21 determines whether or not an instruction to perform complementation is received from the user (step S602).
  • An example of a screen that accepts instructions from the user will be described later.
  • When it is determined that the instruction is not accepted (NO in step S602), the control unit 21 displays a real-time image in the first image field 51 (step S603). When it is determined that the instruction is accepted (YES in step S602), the control unit 21 activates the complementary line generation subroutine (step S611).
  • the complementary line generation subroutine is a subroutine that generates the complementary lines 572. Its processing flow will be described later.
  • the control unit 21 extracts a high-luminance region from the real-time image (step S612).
  • the high-luminance region is, for example, a region in which more than a predetermined number of pixels having a brightness higher than a predetermined threshold are clustered together.
  • the brightness threshold value and the pixel number threshold value may be appropriately set by the user.
  • the guide wire image 472 and the shadow forming portion image 474 are extracted.
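  • Under the same assumptions, step S612 can be sketched with SciPy connected-component labelling; the two thresholds below are placeholders for the user-set values mentioned above.

```python
import numpy as np
from scipy import ndimage

def bright_regions(image: np.ndarray, brightness_thr: float = 200.0,
                   min_pixels: int = 20) -> np.ndarray:
    """Boolean mask of clusters of more than min_pixels pixels brighter than
    brightness_thr: candidates for the guide wire image 472 and the shadow
    forming portion image 474."""
    bright = image > brightness_thr
    labeled, n = ndimage.label(bright)                 # connected components
    sizes = ndimage.sum(bright, labeled, range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_pixels]
    return np.isin(labeled, keep)
```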
  • the control unit 21 sets the range of the low reliability region 55 (step S613).
  • the low reliability region 55 is a substantially fan-shaped region outside the guide wire image 472 and the shadow forming portion image 474.
  • the control unit 21 calculates the ratio of the length of the range included in the low reliability region 55 to the total length of one of the complementary lines 572 generated in step S611 (step S614).
  • the control unit 21 determines whether or not the ratio of the range included in the low reliability region 55 is larger than a predetermined threshold value (step S615). When it is determined that the ratio is larger (YES in step S615), the control unit 21 temporarily records in the main storage device 22 or the auxiliary storage device 23 that the ratio of the range included in the low reliability region 55 is large for the complementary line 572 being processed (step S616).
  • the control unit 21 determines whether or not an instruction to display the mode 2 complementary line 572 has been received from the user (step S617).
  • An example of a screen that accepts instructions from the user will be described later.
  • When it is determined that the instruction is accepted (YES in step S617), the control unit 21 generates the mode 2 complementary line 572 described with reference to FIG. 33 (step S618). When it is determined that the instruction is not accepted (NO in step S617), or after step S618 or step S616 ends, the control unit 21 determines whether or not the processing of all the complementary lines 572 generated in step S611 has been completed (step S619).
  • When it is determined that the processing has not been completed (NO in step S619), the control unit 21 returns to step S614. When it is determined that the processing has been completed (YES in step S619), the control unit 21 displays a real-time image and the complementary lines 572 in the first image field 51 (step S620).
  • the control unit 21 determines whether or not to end the processing (step S621). For example, when the diagnostic imaging catheter 40 is removed from the MDU 33, the control unit 21 determines that the processing should end.
  • When it is determined that the processing does not end (NO in step S621), the control unit 21 returns to step S601. When it is determined that the processing ends (YES in step S621), the control unit 21 ends the processing.
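  • Tying the steps together, the following is a hedged orchestration sketch of this flowchart; segment (inference by the learning model 61) and display (drawing into the first image field 51) are hypothetical stubs, and the other helpers refer to the sketches accompanying this description.

```python
import numpy as np

def process_frame(rt_image, complement_requested, show_mode2, max_ratio=0.5):
    """One pass of FIG. 34 (steps S601-S621) for a single real-time image."""
    if not complement_requested:                        # step S602: NO
        display(rt_image)                               # step S603
        return
    label_map = segment(rt_image)                       # step S611 (subroutine)
    first, second = extract_complementary_lines(label_map)
    low_rel = low_reliability_mask(rt_image)            # steps S612-S613
    shown = []
    for radii in (first, second):
        depth = np.nan_to_num(radii).astype(int)
        in_low = low_rel[np.arange(radii.size), depth]  # point inside region 55?
        if in_low.mean() > max_ratio:                   # steps S614-S616
            shown.append(None)                          # whole line unreliable
        elif show_mode2:                                # steps S617-S618
            shown.append(mode2_line(radii, in_low))
        else:
            shown.append(radii)
    display(rt_image, shown)                            # step S620
```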
  • FIG. 35 is a flowchart illustrating the processing flow of the complementary line generation subroutine.
  • the control unit 21 inputs a real-time image into the learning model 61 and acquires classification data (step S631). That is, in step S631, the control unit 21 acquires the label corresponding to the subject drawn in each portion constituting the real-time image.
  • the control unit 21 extracts a complementary line corresponding to a boundary between regions where the acquired labels are different from each other (step S632).
  • the control unit 21 ends the process.
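  • A minimal sketch of steps S631 and S632, assuming an RT-format label map (angles x depths) whose labels 0/1/2 correspond to the first/second/third label regions arranged from the catheter outward.

```python
import numpy as np

def extract_complementary_lines(label_map: np.ndarray):
    """For each scanning angle, the depth where the label changes 0->1 gives
    the first complementary line 561 and 1->2 gives the second complementary
    line 562 (one radius per angle, NaN where no boundary is found)."""
    n_angles, _ = label_map.shape
    first = np.full(n_angles, np.nan)
    second = np.full(n_angles, np.nan)
    for a in range(n_angles):
        for d in np.nonzero(np.diff(label_map[a]))[0]:   # label boundaries
            if label_map[a, d] == 0 and label_map[a, d + 1] == 1:
                first[a] = d + 1
            elif label_map[a, d] == 1 and label_map[a, d + 1] == 2:
                second[a] = d + 1
    return first, second
```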
  • FIG. 36 is an example of a screen displayed by the catheter system 10 of the ninth embodiment.
  • the control unit 21 displays the screen shown in FIG. 36 on the display device 31 in step S603 of the flowchart described with reference to FIG. 34.
  • the screen shown in FIG. 36 includes a first image field 51, two target selection buttons 591, an intraluminal high-brightness portion button 594, and an in-wall high-brightness portion button 595.
  • Two mode selection buttons 592 and a non-display selection button 593 are displayed below each target selection button 591.
  • In the first image field 51, a real-time image obtained using the diagnostic imaging catheter 40 is displayed.
  • In step S602 of the flowchart described with reference to FIG. 34, the control unit 21 determines that the instruction to perform complementation has not been received from the user (NO in step S602), and executes step S603.
  • the target selection button 591 accepts the selection of the complementary line 572 to be extracted.
  • the user can select “lumen inner surface” and “EEM”.
  • “lumen inner surface” and “EEM” are examples.
  • the control unit 21 appropriately displays the type of the complementary line 572 corresponding to the luminal organ into which the diagnostic imaging catheter 40 is inserted.
  • the control unit 21 may display the target selection button 591 regarding the complementary line 572 of the type preset by the user.
  • the mode selection button 592 accepts the selection regarding the mode of the complementary line 572.
  • When the selection of "mode 1" is accepted, the control unit 21 generates the mode 1 complementary line 572 described with reference to FIG. 31.
  • When the selection of "mode 2" is accepted, the control unit 21 generates the mode 2 complementary line 572 described with reference to FIG. 33.
  • the non-display selection button 593 accepts a selection regarding whether or not to display the complementary line 572 of the portion included in the low reliability area 55.
  • the intraluminal high-brightness portion button 594 accepts a selection regarding whether or not to display information about a high-brightness portion existing in the lumen of the luminal organ.
  • the in-wall high-brightness portion button 595 accepts a selection regarding whether or not to display information about a high-brightness portion existing inside the luminal wall, that is, between the inner wall of the luminal organ and the outer elastic plate.
  • the control unit 21 displays the screens shown in FIGS. 37 to 41 on the display device 31 in step S620 of the flowchart described with reference to FIG. 34.
  • In the first image field 51, an image in which the first complementary line 561 and the second complementary line 562 are superimposed on the real-time image obtained using the diagnostic imaging catheter 40 is displayed.
  • For the first complementary line 561 corresponding to the "lumen inner surface", the portion that does not overlap with the low reliability region 55 described with reference to FIG. 31 is indicated by a thick solid line, and the portion that overlaps with the low reliability region 55 is indicated by a thin solid line.
  • For the second complementary line 562 corresponding to "EEM", the portion that does not overlap with the low reliability region 55 is indicated by a thick broken line, and the portion that overlaps with the low reliability region 55 is indicated by a thin broken line.
  • In this way, the regions of the complementary lines 572 having lower reliability than the others are displayed in a mode different from that of the regions whose reliability is not low.
  • The low-reliability regions and the other regions of the complementary lines 572 may instead be distinguished by color, brightness, or the like.
  • the user can easily distinguish between the high-reliability portion and the low-reliability portion of the complementary line 572 generated by the control unit 21.
  • By operating the target selection button 591, the user can appropriately select the complementary line 572 to be displayed in the first image field 51.
  • the selection of the non-display selection button 593 corresponding to "EEM" is accepted.
  • the displayed portion of the second complementary line 562 that overlaps with the low reliability region 55 is erased.
  • the user can confirm the unreliable part of the second complementary line 562 with his / her own eyes and make an appropriate judgment based on his / her specialized knowledge.
  • the intraluminal high-brightness portion button 594 and the in-wall high-brightness portion button 595 are selected.
  • a low reliability region 55 generated by the guide wire image 472, which is a high-brightness portion in the lumen, and a low reliability region 55 generated by the shadow forming portion image 474, which is a high-brightness portion in the wall, are displayed.
  • the control unit 21 may display the guide wire image 472 and the shadow forming portion image 474 in a colored state. The user can confirm based on which portion the control unit 21 determined to generate the low reliability region 55.
  • FIG. 40 shows a case where a real-time image is obtained at the portion where the stent is placed.
  • a large number of shadow forming part images 474 formed by the strands of the stent are drawn side by side in a ring shape. Therefore, a low reliability region 55 corresponding to each shadow forming portion image 474 is generated.
  • FIG. 40 shows an example of a screen displayed by the control unit 21 when it is determined in step S615, described with reference to FIG. 34, that the range of the second complementary line 562 included in the low reliability region 55 is larger than the threshold value.
  • the control unit 21 does not display the second complementary line 562 in the first image field 51.
  • the control unit 21 displays a notification column 597 indicating that "display is not possible” instead of the legend column 596 on the right side of the character "EEM". The user can grasp that the second complementary line 562 generated by the control unit 21 is not displayed in the first image field 51 because the reliability is low.
  • FIG. 41 shows an example of a screen displayed by the control unit 21 when the user clicks, for example, the notification field 597 and instructs the display of the second complementary line 562, which has low reliability.
  • the entire second complementary line 562 is indicated by a thin dashed line indicating low reliability.
  • the notification "EEM has low reliability" indicates that the reliability of the second complementary line 562 is low.
  • the learning model 61 may output labels corresponding to the guide wire image 472 and the shadow forming portion image 474, respectively. In such a case, step S612 described with reference to FIG. 34 is unnecessary.
  • the learning model 61 may output a label corresponding to, for example, a high-attenuation plaque.
  • In that case, the control unit 21 may also include the region outside the high-attenuation plaque in the low reliability region 55 in step S613 described with reference to FIG. 34.
  • Instead of a real-time image, a moving image or a still image stored in the auxiliary storage device 23 or the like may be used. This makes it possible to provide a catheter system 10 that can also be used, for example, for recording medical records after a case has ended.
  • the information processing device 20 may be a personal computer, tablet, smartphone, or the like that does not have a function of connecting to the MDU 33 and the diagnostic imaging catheter 40.
  • FIG. 42 is a functional block diagram of the information processing system 10 of the tenth embodiment.
  • the information processing system 10 includes an image acquisition unit 81, a complementary information acquisition unit 82, and a display unit 83.
  • the image acquisition unit 81 acquires a catheter image generated by using the diagnostic imaging catheter 40 inserted into the luminal organ.
  • the complementary information acquisition unit 82 acquires complementary information that complements the information missing portion 473 of the catheter image acquired by the image acquisition unit 81.
  • the display unit 83 displays the catheter image acquired by the image acquisition unit 81 and the complementary information acquired by the complementary information acquisition unit 82.
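  • As a hedged illustration only, the three functional blocks can be expressed as Python protocols; the method names and signatures below are assumptions, since only the units' responsibilities are specified here.

```python
from typing import Protocol
import numpy as np

class ImageAcquisitionUnit(Protocol):
    def acquire(self) -> np.ndarray:
        """Return a catheter image generated with the diagnostic imaging
        catheter 40 inserted into the luminal organ."""

class ComplementaryInformationAcquisitionUnit(Protocol):
    def acquire_complement(self, catheter_image: np.ndarray) -> np.ndarray:
        """Return complementary information for the information missing
        portion 473 of the given catheter image."""

class DisplayUnit(Protocol):
    def show(self, catheter_image: np.ndarray,
             complement: np.ndarray) -> None:
        """Display the catheter image together with the complementary
        information."""
```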
  • FIG. 43 is an explanatory diagram illustrating the configuration of the catheter system 10 of the eleventh embodiment.
  • the catheter system 10 of the present embodiment is realized by operating the catheter control device 27, the MDU 33, the diagnostic imaging catheter 40, the general-purpose computer 90, and the program 97 in combination.
  • The description of the parts common to the first embodiment will be omitted.
  • the catheter system 10 is also an example of the information processing system of the present embodiment.
  • the catheter control device 27 is an ultrasonic diagnostic device for IVUS that controls the MDU 33, controls the sensor 42, and generates a transverse tomographic image and a longitudinal tomographic image based on the signal received from the sensor 42. Since the function and configuration of the catheter control device 27 are the same as those of a conventionally used ultrasonic diagnostic device, the description thereof is omitted.
  • the catheter system 10 of this embodiment includes a computer 90.
  • the computer 90 includes a control unit 21, a main storage device 22, an auxiliary storage device 23, a communication unit 24, a display unit 25, an input unit 26, a reading unit 29, and a bus.
  • the computer 90 is an information device such as a general-purpose personal computer, a tablet, a smartphone, or a server computer.
  • Program 97 is recorded on the portable recording medium 96.
  • the control unit 21 reads the program 97 via the reading unit 29 and stores it in the auxiliary storage device 23. Alternatively, the control unit 21 may read the program 97 stored in a semiconductor memory 98, such as a flash memory, mounted in the computer 90. Further, the control unit 21 may download the program 97 via the communication unit 24 from another server computer (not shown) connected over a network (not shown) and store it in the auxiliary storage device 23.
  • the program 97 is installed as a control program of the computer 90, loaded into the main storage device 22, and executed. As a result, the computer 90 functions as the information processing device 20 described above.
  • the computer 90 is a general-purpose personal computer, tablet, smartphone, mainframe, virtual machine running on a mainframe, cloud computing system, or quantum computer.
  • the computer 90 may be a plurality of personal computers or the like that perform distributed processing.
  • Appendix 2 The program according to Appendix 1 that inputs the acquired catheter image to a learning model that outputs complementary information when a catheter image is input and acquires the complementary information output from the learning model.
  • Appendix 3 The program according to Appendix 2, wherein the complementary information is a post-complementary image generated so as to complement the missing information portion.
  • Appendix 4 The program according to Appendix 1, which complements the information missing portion based on a second catheter image acquired at a time different from that of the catheter image.
  • Appendix 5 The program according to Appendix 1, which complements the information missing portion based on a portion different from the information missing portion of the catheter image.
  • Appendix 6 The program according to any one of Appendix 1 to Appendix 5, wherein the diagnostic imaging catheter is a tomographic image generation catheter, the catheter image is a tomographic image generated using the tomographic image generation catheter, and the information missing portion is a shadow portion.
  • Appendix 7 The program according to Appendix 6, wherein the tomographic image generating catheter is a catheter for ultrasonic tomographic image generation.
  • Appendix 8 The program according to Appendix 6, wherein the tomographic image generation catheter is a catheter for optical tomographic image generation.
  • Appendix 11 The program according to any one of Appendix 1 to Appendix 10, which displays the acquired catheter image and the complemented image that complements the missing information side by side.
  • An information processing system including: an image acquisition unit that acquires a catheter image generated using a diagnostic imaging catheter inserted into a luminal organ; a complementary information acquisition unit that acquires complementary information that complements the information missing portion of the acquired catheter image; and a display unit that displays the catheter image and the complementary information.
  • 10 Catheter system (information processing system)
  • 20 Information processing device
  • 21 Control unit
  • 22 Main storage device
  • 23 Auxiliary storage device
  • 24 Communication unit
  • 25 Display unit
  • 26 Input unit
  • 27 Catheter control device
  • 271 Catheter control unit
  • 29 Reading unit
  • 31 Display device
  • 32 Input device
  • 33 MDU
  • 37 Diagnostic imaging device
  • 40 Diagnostic imaging catheter
  • 41 Probe part
  • 42 Sensor
  • 43 Shaft
  • 44 Tip marker
  • 45 Connector part
  • 471 Catheter image
  • 472 Guide wire image
  • 473 Information missing part
  • 474 Shadow forming part image
  • 483 Missing area image
  • 485 Paste area
  • 486 Estimated image
  • 487 Cutout image (complementary information)
  • 51 First image field
  • 52 Second image field
  • 541 First label area
  • 542 Second label area
  • 543 Third label area
  • 55 Low reliability area
  • 561 First complementary line
  • 562 Second complementary line
  • 565 Change line
  • 571 Designated point mark
  • 572 Complementary line (complementary information)
  • 573 Candidate point mark
  • 574 Template area
  • 580 No completion button
  • 581 Stop button
  • 582 Select button
  • 583 Completion button
  • 584 Start button
  • 585 End button
  • 586 Colored button
  • 587 Border

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Optics & Photonics (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Ultra Sonic Daignosis Equipment (AREA)

Abstract

The invention provides a program and the like that make a catheter system easy to use. This program causes a computer to execute: acquiring a catheter image generated using a diagnostic imaging catheter inserted into a luminal organ; acquiring complementary information (572) that complements an information missing portion (473) of the acquired catheter image; and displaying the catheter image and the complementary information (572). The acquired catheter image is input to a learning model that outputs complementary information (572) when a catheter image is input, and the complementary information (572) output by the learning model is acquired.
PCT/JP2021/013600 2020-03-30 2021-03-30 Program, information processing method, information processing system, and learning model generation method Ceased WO2021200985A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2022512562A JP7615127B2 (ja) 2020-03-30 2021-03-30 Program, information processing method, information processing system, and learning model generation method
JP2024232685A JP7747865B2 (ja) 2024-12-27 Program, information processing method, and information processing device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-061515 2020-03-30
JP2020061515 2020-03-30

Publications (1)

Publication Number Publication Date
WO2021200985A1 true WO2021200985A1 (fr) 2021-10-07

Family

ID=77928669

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/013600 2020-03-30 2021-03-30 Program, information processing method, information processing system, and learning model generation method

Country Status (2)

Country Link
JP (2) JP7615127B2 (fr)
WO (1) WO2021200985A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009014914A (ja) * 2007-07-03 2009-01-22 Olympus Corp Endoscope apparatus for measurement
US20120075638A1 (en) * 2010-08-02 2012-03-29 Case Western Reserve University Segmentation and quantification for intravascular optical coherence tomography images
JP2013505782A (ja) * 2009-09-23 2013-02-21 Lightlab Imaging, Inc. System, apparatus, and method for collecting lumen morphology and vascular resistance measurement data
JP2013111443A (ja) * 2011-12-01 2013-06-10 Hitachi Aloka Medical Ltd Ultrasonic image processing apparatus
JP2017104550A (ja) * 2015-12-09 2017-06-15 Canon Inc Photoacoustic apparatus, display control method, and program

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2025518086A (ja) * 2022-05-27 2025-06-12 Boston Scientific Scimed, Inc. Systems and methods for intravascular visualization
JP2024096050A (ja) * 2022-12-29 2024-07-11 Veran Medical Technologies, Inc. System and method for combining previously acquired image data into an occluded region of a real-time image stream
JP7715789B2 (ja) 2022-12-29 2025-07-30 Veran Medical Technologies, Inc. System and method for combining previously acquired image data into an occluded region of a real-time image stream

Also Published As

Publication number Publication date
JPWO2021200985A1 (fr) 2021-10-07
JP2025036696A (ja) 2025-03-14
JP7747865B2 (ja) 2025-10-01
JP7615127B2 (ja) 2025-01-16

Similar Documents

Publication Publication Date Title
JP7747865B2 (ja) Program, information processing method, and information processing device
EP2965263B1 (fr) Segmentation multimodale dans des images intravasculaires
KR101797042B1 (ko) 의료 영상 합성 방법 및 장치
US20110245651A1 (en) Medical image playback device and method, as well as program
US20220039778A1 (en) Diagnostic assistance device and diagnostic assistance method
JP2023066260A (ja) Learning model generation method, image processing device, program, and training data generation method
US20240013514A1 (en) Information processing device, information processing method, and program
US20230260120A1 (en) Information processing device, information processing method, and program
US12444170B2 (en) Program, information processing method, method for generating learning model, method for relearning learning model, and information processing system
JP2022055170A (ja) Computer program, image processing method, and image processing device
WO2021193024A1 (fr) Program, information processing method, information processing device, and model generation method
CN114902288A (zh) 利用基于解剖结构的三维(3d)模型切割进行三维(3d)打印的方法和系统
JP7774259B2 (ja) Information processing device and method for generating trained model
WO2021199960A1 (fr) Program, information processing method, and information processing system
JP2025186422A (ja) Program, information processing method, and information processing device
JP7644092B2 (ja) Program, information processing method, learning model generation method, learning model relearning method, and information processing system
JP7577734B2 (ja) Program, information processing method, and information processing device
WO2023127785A1 (fr) Information processing method, information processing device, and program
CN117522887A (zh) 用于定义超声图像中的感兴趣区域的边界的系统和方法
JP7774258B2 (ja) Information processing device, information processing method, program, and method for generating trained model
KR20160076951A (ko) Method and apparatus for generating a body marker
WO2021193021A1 (fr) Program, information processing method, information processing device, and model generation method
US20250029509A1 (en) Manipulating a medical image acquisition system
US20240221366A1 (en) Learning model generation method, image processing apparatus, information processing apparatus, training data generation method, and image processing method
WO2024071322A1 (fr) Information processing method, learning model generation method, computer program, and information processing device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21778802

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022512562

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21778802

Country of ref document: EP

Kind code of ref document: A1