WO2017034020A1 - Medical image processing device and medical image processing program - Google Patents
Medical image processing device and medical image processing program
- Publication number
- WO2017034020A1 (PCT application PCT/JP2016/074966)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- medical image
- display
- image processing
- processing apparatus
- operator
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
Definitions
- the present invention relates to a medical image processing apparatus and a medical image processing program.
- the present invention particularly relates to a medical image processing apparatus and the like that can display a three-dimensional medical image obtained by imaging a patient and perform various display processes with good workability while maintaining the cleanliness of the operator.
- known medical diagnostic imaging devices include CT (Computed Tomography) devices, MRI (Magnetic Resonance Imaging) devices, PET (Positron Emission Tomography) devices, ultrasound diagnostic devices, angiographic imaging devices, and the like.
- a simulation may be performed before performing an operation on an organ such as a liver in which blood vessels are intertwined in a complicated manner.
- the simulation is performed, for example, by performing a contrast CT examination, preparing a fluoroscopic image of a site to be treated, and confirming it on a display.
- Such simulations are useful for studying treatment plans.
- Patent Document 1 discloses a technique for simulating a surgical operation by displaying an organ such as a liver on a tablet terminal.
- Because the technique of Patent Document 1 uses a tablet terminal, it is useful in that the terminal can be carried anywhere and preoperative simulation can be performed. In addition, since the terminal can be brought into the operating room, certain confirmations can also be performed during the operation.
- during an operation, however, the operator must keep his or her hands clean and cannot freely touch the terminal, so the functions provided may not be usable.
- the present invention has been made in view of such problems. Its purpose is to provide a medical image processing apparatus and an image processing program that can display a three-dimensional medical image obtained by imaging a patient and perform various display processes with good workability while keeping the operator clean.
- a medical image processing apparatus for solving the above-described problems comprises: a display; a control unit (processor) connected to the display; an audio input device; and a motion sensor. The control unit (processor) includes: (a) an image display unit for displaying a three-dimensional medical image on the display; (b) a mode selection unit for recognizing voice input via the audio input device and switching a mode related to the display of the three-dimensional medical image in accordance with the voice; and (c) a display processing unit for recognizing an operator's motion input via the motion sensor and changing the display of the three-dimensional medical image accordingly.
- the control unit is configured to: display a 3D medical image on the display; recognize voice input via the audio input device and switch modes relating to the display of the 3D medical image accordingly; and recognize an operator's motion input via the motion sensor and change the display of the 3D medical image accordingly.
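- the division of roles above, in which voice input only switches the mode and motion input only changes the display within the current mode, can be sketched as follows. This is an illustrative Python sketch, not code from the patent; the class name, mode words, and gain constants are assumptions for illustration.

```python
class MedicalImageController:
    # voice word -> internal mode name (words follow the examples in the text)
    MODES = {"move": "zoom_pan", "rotation": "rotate", "multi": "multi", "stop": None}

    def __init__(self):
        self.mode = None        # no mode is active until a voice command arrives
        self.scale = 1.0
        self.angle_deg = 0.0

    def on_voice(self, word):
        """Mode selection unit: voice input only switches the mode."""
        if word in self.MODES:
            self.mode = self.MODES[word]

    def on_motion(self, dx, dy, dh):
        """Display processing unit: motion input changes the display
        according to the current mode. dh > 0 means the hand moved up."""
        if self.mode in ("zoom_pan", "multi"):
            self.scale *= 1.0 - 0.1 * dh   # raise hand -> reduce, lower -> enlarge
        if self.mode in ("rotate", "multi"):
            self.angle_deg += dx * 90.0    # horizontal sweep rotates the image
```

- with this structure, motion events arriving before any voice command are simply ignored, matching the described behavior of waiting for a mode to be selected.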
- Anatomical structure refers to an object (for example, an organ, bone, blood vessel, etc.) that can be recognized in a subject, and includes fat, lesions such as a tumor, and the like.
- Terminal refers to an information processing apparatus that performs data processing, whether connected to a network or used standalone. Arbitrary peripheral devices may be connected to the information processing apparatus. Basically, it is preferable that the various functions are provided in a single device such as a tablet terminal or a laptop computer; however, part of the functions may also be distributed, functionally or physically, in arbitrary units depending on, for example, processing load.
- Connected includes, in addition to the case where two elements are directly connected, the case where one element is indirectly connected to another via some intermediate element, without departing from the spirit of the present invention. It also includes both wired and wireless connections.
- the component expressed as “function” + “part” corresponds to a functional block that performs a predetermined function.
- a functional block does not necessarily indicate a division between hardware circuits.
- one or more functional blocks can be implemented by a single piece of hardware, but can also be implemented by multiple pieces of hardware.
- according to the present invention, a medical image processing apparatus and the like can be provided that display a three-dimensional medical image obtained by imaging a patient and perform various display processes with good workability while maintaining the cleanliness of the operator.
- FIG. 1 is a diagram showing the medical image processing apparatus of one embodiment. FIG. 2 is a diagram showing an example block diagram of the image processing apparatus of FIG. 1. FIG. 3 is a diagram showing an example of the components of a hospital system.
- FIG. 4 is a flowchart of an operation example of the image processing apparatus of FIG. 1. FIG. 5 is a diagram showing an example of a three-dimensional medical image displaying the liver and the surrounding blood vessels.
- the medical image processing apparatus 301 of the present embodiment is, as an example, a portable computer device such as a tablet terminal. Alternatively, it may be a laptop PC (notebook PC) having a touch panel display.
- FIG. 1 shows an example of a tablet terminal, which may be configured by installing an image processing program according to one embodiment of the present invention in a commercially available tablet terminal.
- the medical image processing apparatus will be described simply as an image processing apparatus.
- the screen size is preferably 9 inches or more, or 10 inches or more.
- the thickness is preferably 20 mm or less, or 15 mm or less.
- the mass is preferably 2 kg or less, or 1.5 kg or less.
- the image processing apparatus 301 has a thin casing 301a, and a touch panel display 360 is provided on one surface thereof.
- the touch panel display 360 includes a display 361 (see FIG. 2) and a touch panel 363 (see FIG. 2).
- Such an image processing apparatus 301 can be connected to a network of a hospital system, for example, as shown in FIG. 3.
- the hospital system of this example includes the following devices connected to the network 30: an imaging device 1, a chemical injection device 10, a hospital information system (HIS) 21, and a radiology information system (RIS).
- Each of the above elements may be only one or plural.
- the connection to the network may be a wired connection or a wireless connection.
- Examples of the imaging device 1 include a CT device, an MR device, and an angiography device. Other types of imaging devices may also be used, and a plurality of imaging devices of the same type or different types may be used.
- a three-dimensional medical image which will be described later, may be created using a plurality of modality images, such as combining an image captured by a CT apparatus and an image captured by an MR apparatus.
- the drug solution injection device 10 may be a contrast agent injection device that injects at least a contrast agent.
- the drug solution injection device 10 may include a drive mechanism that pushes a drug solution out of a container (for example, a syringe) filled with the drug solution, and a control circuit that controls its operation.
- a contrast medium injection device including an injection head and a console can be used.
- the drive mechanism may be a piston drive mechanism, a roller pump, or the like.
- the image processing apparatus 301 includes a display 361, a touch panel 363, an input device 365, a communication unit 367, an interface 368, a slot 369, a control unit 350, a storage unit 359, and the like. Note that not all of these are essential, and some may be omitted.
- Examples of the display 361 include devices such as liquid crystal panels and organic EL panels.
- a touch panel display in which the touch panel 363 is integrally provided can also be used.
- for the touch panel 363, a system such as resistive film, capacitance, electromagnetic induction, surface acoustic wave, or infrared can be used.
- multi-touch, that is, touches at a plurality of positions, may also be detected, as with a capacitance-type panel.
- the touch operation can be performed using a user's finger or a touch pen.
- the touch panel may detect the start of a touch operation on the touch panel, the movement of the touch position, the end of the touch operation, and the like, and output the detected touch type and coordinate information.
- the image processing apparatus 301 of the present embodiment can perform operations related to image display using voice input or motion input, as will be described later. Therefore, the touch panel 363 may be omitted depending on circumstances.
- as the input device 365, a general device such as a keyboard or a mouse can be used.
- the storage unit 359 includes a hard disk drive (HDD), a solid state drive (SSD), and/or memory, etc., and may store an OS (Operating System) program and a medical image processing program according to an embodiment of the present invention (including algorithm data and graphical user interface data).
- the computer program may be downloaded in whole or in part from an external device when necessary via an arbitrary network.
- the computer program may be stored in a computer-readable recording medium.
- the “recording medium” shall include any “portable physical medium” such as a memory card, USB memory, SD card (registered trademark), flexible disk, magneto-optical disk, ROM, EPROM, EEPROM, CD-ROM, MO, DVD, or Blu-ray (registered trademark) Disc.
- the medical image processing apparatus according to the present embodiment may be provided with a slot 369 for reading the storage medium as described above.
- the communication unit 367 is a unit for enabling communication with an external network or device in a wired or wireless manner.
- the communication unit 367 may include a transmitter for sending data to the outside, a receiver for receiving data from the outside, and the like.
- the interface 368 is for connecting various external devices and the like, and only one interface is shown in the drawing, but a plurality of interfaces may naturally be provided.
- the slot 369 is a part for reading data from a computer readable medium.
- the image processing apparatus 301 of this embodiment includes a microphone 370 as an audio input device.
- as the microphone, one built into the housing may be used, or an external microphone that is separate from the housing and connected to the terminal by wire or wirelessly may be used. A device in which the motion sensor 380 described below and a microphone are integrated can also be used.
- voice recognition software is installed in the image processing apparatus 301, and thereby a voice recognition unit 351 is configured.
- the motion sensor 380 is a sensor that three-dimensionally detects the movement of at least a part of the operator's body in a non-contact manner.
- Motion recognition software is installed in the image processing apparatus 301, and thereby a motion recognition unit 353 is configured.
- as the motion sensor 380, for example, a Leap Motion controller (manufactured by Leap Motion, Inc.; "Leap Motion" is a registered trademark) can be used.
- the leap motion controller is an input device that can recognize the position, shape, and movement of the operator's finger and / or the position and movement of the palm in real time without contact.
- the leap motion controller is configured as a sensor unit including an infrared irradiation unit and a CCD camera.
- the upper area of the sensor unit is the recognition area. The sensor unit is used by connecting it to a tablet terminal or a laptop PC by wire or wirelessly.
- the motion sensor 380 for example, Kinect (manufactured by Microsoft Corporation, registered trademark) can be used.
- the motion sensor 380 may include one or more cameras and one or more distance sensors, or only one of these.
- the unit of the motion sensor 380 may have a built-in microphone.
- the motion sensor 380 preferably detects its detection target (for example, a hand) with an accuracy of 5 mm or less, more preferably 1 mm or less.
- the detection principle of the motion sensor 380 is not limited to a specific one.
- for example, a system called Light Coding can be used. In this method, a large number of dot patterns are projected from an infrared emitting unit, and a camera reads the amount of change (distortion) in the dot pattern when it hits the detection target (a person).
- a system called TOF (Time Of Flight) may also be used; its recognition accuracy is higher than that of the Light Coding method, and its accuracy degrades less with distance.
- alternatively, a system may be used in which the reflection, from an object, of light emitted by an infrared LED is photographed by two cameras to recognize movement.
- the control unit 350 includes hardware such as a central processing unit (CPU) and a memory, and a computer program is installed to perform various arithmetic processes.
- the control unit 350 includes an image display unit 355a, an operation determination unit 355b, a display processing unit 355c, and a mode selection unit 355d. Further, as described above, the voice recognition unit 351 and the motion recognition unit 353 are included.
- the image display unit 355a displays a three-dimensional medical image on the display 361.
- the image display unit 355a displays each of anatomical structures such as the liver and blood vessels as independent objects.
- each anatomical structure may be displayed in a different color.
- the color in which each anatomical structure is displayed may be set manually by the operator, but is not necessarily limited thereto. As described later, when a color assignment or the like has been made in advance on a predetermined data server side (by a table or the like), the display may follow that assignment.
- the operation determination unit 355b receives an input operation on the input device 365 and the touch panel 363.
- the display processing unit 355c performs various image processing, for example: rotation of the 3D image; translation of the 3D image; enlargement/reduction of the 3D image; changing the transparency of objects displayed in 3D; switching the display/non-display of a given object; a cut (split) function for a given object; and a function for specifying a region of a given object. The specific contents of these functions are described in detail in the series of operations below.
- the voice recognition unit 351 performs various kinds of voice recognition. For example, the following words are recognized.
- the names of anatomical structures (for example, organ names such as "liver" and blood vessel names such as "portal vein" or "hepatic artery").
- the motion recognition unit 353 performs various motion recognition processes. For example, assuming that the motion sensor detects a hand, the position or movement of the hand (finger) in the detection space is detected.
- processing performed by other computing means, not limited to a tablet terminal or a notebook PC, can also be a subject of one embodiment of the present invention.
- the invention disclosed mainly as a description of "operation" can be understood by those skilled in the art as an invention of an apparatus or an invention of a computer program, expressed in different categories. Accordingly, the present specification also discloses such inventions.
- This three-dimensional medical image includes a liver 371 and a blood vessel 375.
- three-dimensional medical image data is acquired in step S11.
- the “three-dimensional medical image data” may be created based on data obtained by tomographic imaging of a patient with an imaging device.
- volume data by volume rendering may be used.
- the data format of the three-dimensional image is not particularly limited, and various types can be used.
- an STL (Standard Triangulated Language) file format can be used.
- the image data may be stored in a predetermined data storage area such as a predetermined database server, PACS, DICOM server, or workstation.
- the image processing apparatus 301 reads the data from a predetermined data storage area on the network and stores it in the storage unit 359 in the apparatus.
- the image processing apparatus 301 displays a three-dimensional medical image on the display 361 (step S12).
- the creation of a three-dimensional medical image can basically be performed using a known method. An image creation flow according to an embodiment of the present invention will be described later with reference to the drawings.
- the generated three-dimensional medical image data may be stored in the image processing apparatus 301 and / or stored in an external server (for example, a server on the cloud).
- various display modes are prepared for the display of medical images, including at least one of the following: displaying a given anatomical structure in a translucent state; displaying a given anatomical structure in an opaque state; displaying a given anatomical structure in color; displaying a given anatomical structure with a shadow; and displaying an image of three-dimensional coordinate axes (or an equivalent, for example a cube) on the screen.
- the liver is displayed in a translucent state and the blood vessel is displayed in an opaque state.
- the tumor may be displayed in an opaque state.
- the ability to perform such confirmation is very useful in that the positional relationship between the liver, blood vessels, tumor, and the like can be confirmed well, particularly in surgery in which a portion of the liver is removed laparoscopically.
- the image processing apparatus 301 according to the present embodiment is portable, and therefore it is possible to check a three-dimensional medical image by operating the apparatus in the operating room.
- the liver and blood vessels may be displayed in different colors. If there is a tumor, the tumor may be displayed in a different color. More specifically, regarding the blood vessels, the liver, portal vein, and hepatic artery may be displayed in different colors. When blood vessels are grouped, they may be displayed together in the same color.
- in step S13, the image processing apparatus 301 detects the position of the hand so that the distance between the motion sensor 380 and the operator's hand can be made appropriate. Specifically, the operator places a hand above the motion sensor 380.
- the appropriate distance between the sensor and the operator's hand (the height from the sensor to the hand) is set in advance, for example, in a range of h1 (mm) to h2 (mm). This appropriate range is set because the movement of the hand may not be recognized well when the operator's hand is too close to or too far from the sensor.
- a reference circle (first circle) 391 having a predetermined size is displayed in the approximate center of the screen.
- the first circle 391 is always displayed in a fixed size regardless of the position of the operator's hand.
- a second circle 393 is also displayed on the screen.
- the center of the second circle 393 corresponds to the position of the operator's hand. That is, when the operator's hand is directly above the motion sensor 380 (as an example), the center of the second circle 393 coincides with the center of the first circle 391, and the two circles 391 and 393 are displayed as concentric circles.
- when the operator moves the hand in a horizontal direction (to the right, as an example), the second circle 393 is also displaced in the same direction and displayed in real time.
- the operator can thus confirm whether his or her hand is in an appropriate position (horizontal position) with respect to the motion sensor 380 while viewing the positional relationship between the two circles 391 and 393 on the screen.
- the diameter of the second circle 393 corresponds to the height of the operator's hand.
- when the hand is at an appropriate height, the second circle 393 is displayed so that its diameter is the same as the diameter of the first circle 391.
- as the hand position rises, the size of the second circle 393 decreases accordingly; conversely, as the hand position lowers, the size of the second circle 393 increases accordingly.
- the operator can check whether the height of his / her hand is appropriate while viewing the magnitude relationship between the two circles 391 and 393 on the screen.
- the second circle 393 may be displayed in a special manner when the horizontal position of the hand, its height, or a combination thereof enters a predetermined appropriate range.
- for example, the second circle 393 may be displayed in different colors depending on whether it is outside the appropriate range as shown in FIG. 6(a) or within it as shown in FIG. 6(b). In one embodiment, switching between a blinking display and a steadily lit display is preferable.
- with the above, step S13, the step for making the distance between the motion sensor and the operator's hand appropriate, is completed.
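- the two-circle guide described above can be sketched as follows. This is an illustrative Python sketch, not code from the patent; the screen coordinates, scaling factors, and the h1/h2 range values are assumptions, and the mapping (hand position to circle center, hand height to circle diameter) follows the description above.

```python
REF_DIAMETER = 100.0          # fixed diameter of the first circle 391 (pixels)
H1, H2 = 100.0, 300.0         # assumed appropriate height range (mm)
SCREEN_CENTER = (400.0, 300.0)

def second_circle(hand_x_mm, hand_y_mm, hand_h_mm, px_per_mm=2.0):
    """Return (center, diameter, in_range) for the guide circle 393.
    A hand at (0, 0) horizontally maps to the screen center, so the two
    circles become concentric; the mid-height (H1 + H2) / 2 maps to the
    reference diameter, so equal diameters mean an appropriate height."""
    cx = SCREEN_CENTER[0] + hand_x_mm * px_per_mm
    cy = SCREEN_CENTER[1] + hand_y_mm * px_per_mm
    mid = (H1 + H2) / 2.0
    # higher hand -> smaller circle, lower hand -> larger circle
    diameter = REF_DIAMETER * mid / max(hand_h_mm, 1.0)
    in_range = H1 <= hand_h_mm <= H2 and abs(hand_x_mm) < 50 and abs(hand_y_mm) < 50
    return (cx, cy), diameter, in_range
```

- the `in_range` flag corresponds to the special display (color change or blinking/steady switch) described above.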
- in step S14, a voice input for selecting a display mode is received.
- as the speech input, the following words may be recognized: "move", "stop", "rotation", and "multi".
- the voice recognition function may be turned ON, as a trigger, when the position of the operator's hand enters the predetermined appropriate range in step S13.
- in other words, the voice recognition function is normally OFF and is turned ON only while the hand is within the appropriate range.
- while voice recognition is active, it is preferable to show an indication such as "During voice recognition" on the screen.
- the step of detecting the hand position in S13 may be omitted.
- the operator utters “move” with the voice recognition function ON.
- the image processing apparatus 301 analyzes the voice input from the microphone 370 by the voice recognition unit 351 and recognizes the word “move”. In response to this, a transition is made to the “zoom / pan” mode (step S15).
- the image processing apparatus 301 then waits for input of motion by the operator's hand.
- the image processing apparatus 301 uses the motion sensor 380 and the motion recognition unit 353 to recognize the position and movement of the operator's hand in real time.
- when the operator moves the hand upward, the image is gradually reduced in accordance with the movement.
- when the operator moves the hand downward (for example, from the initial height h0 to a lower height hL), the medical image is gradually enlarged in accordance with the movement.
- when the operator moves the hand horizontally, the three-dimensional medical image is panned (translated) in accordance with the moving direction and moving amount.
- in short, the image is reduced or enlarged by moving the hand up and down, and translated by moving the hand horizontally.
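- the zoom/pan mapping described above can be sketched as follows. This is an illustrative Python sketch, not code from the patent; the gain constants, the reference height h0, and the state representation are assumptions.

```python
def zoom_pan_update(state, hand_x, hand_y, hand_h, h0,
                    zoom_gain=0.005, pan_gain=1.0):
    """Update the display state in 'zoom/pan' mode.
    state is a dict with 'scale' and 'offset' (x, y); h0 is the hand height
    at which the mode was entered (all units and gains are illustrative)."""
    # hand below h0 -> enlarge, hand above h0 -> reduce (clamped to stay positive)
    state["scale"] = max(0.1, 1.0 + (h0 - hand_h) * zoom_gain)
    # horizontal hand position pans (translates) the image in the same direction
    state["offset"] = (hand_x * pan_gain, hand_y * pan_gain)
    return state

state = {"scale": 1.0, "offset": (0.0, 0.0)}
# hand lowered by 50 below the reference height -> the image is enlarged
state = zoom_pan_update(state, hand_x=0, hand_y=0, hand_h=150, h0=200)
```

- note that rotation is deliberately absent here, matching the description that the "zoom/pan" mode does not rotate the image.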
- in this way, zoom/pan of the image (and, in other modes, rotation, etc.) is performed using motion input, which allows intuitive analog input. The operation is therefore intuitive for the operator, excellent in operability, and contributes to practical use.
- since the motion sensor 380 accepts input without the device being touched, input can be performed while keeping the operator's hands clean.
- Such a configuration is very advantageous in that, for example, a doctor can check an image during surgery using the image processing apparatus in the operating room.
- a plurality of blood vessels run through the liver in a branching pattern. Therefore, when part of the liver parenchyma is excised, the positional relationship between the blood vessels and the tumor must be sufficiently confirmed so that the blood vessels are not damaged more than necessary.
- the positional relationship of blood vessels and the like can be confirmed while viewing a three-dimensional medical image during the operation.
- depending on the display orientation, a blood vessel may be on the near side and a tumor may be hidden behind it (the tumor is not shown in this figure; see FIG. 5 for reference).
- the image processing apparatus of the present embodiment can check the tumor by rotating the image in the “rotation” mode.
- rotation is not performed in fixed angular steps; the image can be rotated freely (steplessly) by an arbitrary angle through motion input, which enables good observation.
- the “zoom / pan” mode is configured not to perform “rotation” (details below) of the three-dimensional medical image.
- this is because, apart from rotation, it is often the case that only enlargement/reduction or parallel movement is desired. Separating the modes makes it easier for the operator to perform rotation, enlargement, reduction, and parallel movement while keeping the image in the desired posture.
- <Rotation> To rotate the displayed 3D medical image, the following is performed. First, the operator utters "stop" to cancel the "zoom/pan" mode. The image processing apparatus 301 accepts this via the voice recognition function, cancels the "zoom/pan" mode, and transitions to a state of accepting another mode.
- the operator says “Rotate”.
- the image processing apparatus 301 accepts this through the voice recognition function and transitions to the “rotation” mode. Next, the image processing apparatus 301 waits for input of motion by the operator's hand.
- the image processing apparatus 301 recognizes the position and movement of the operator's hand in real time, and rotates the three-dimensional medical image around a predetermined rotation axis (X axis, Y axis, or Z axis) in accordance with the movement of the operator's hand. Specifically, horizontal movement of the hand, or movement of the hand along the surface of a virtual sphere, is recognized, and the three-dimensional medical image is rotated by an angle corresponding to the moving direction, moving speed, and moving amount.
- in the rotation mode, it is preferable in one embodiment that only rotation is allowed, and that panning (parallel movement) and zooming (enlargement/reduction) are prohibited. This makes it possible, for example, to rotate the image in a desired direction while maintaining a predetermined image size, and then perform subsequent image processing and observation.
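- the rotation-only behavior described above can be sketched as follows. This is an illustrative Python sketch, not code from the patent; the axis convention (horizontal sweep rotates about the Y axis, vertical sweep about the X axis) and the gain are assumptions.

```python
import math

def rotate_point(p, yaw_deg, pitch_deg):
    """Rotate point p = (x, y, z) about the Y axis (yaw), then the X axis (pitch)."""
    x, y, z = p
    a = math.radians(yaw_deg)
    x, z = x * math.cos(a) + z * math.sin(a), -x * math.sin(a) + z * math.cos(a)
    b = math.radians(pitch_deg)
    y, z = y * math.cos(b) - z * math.sin(b), y * math.sin(b) + z * math.cos(b)
    return (x, y, z)

class RotationMode:
    GAIN = 90.0  # degrees per unit of hand travel (assumed)

    def __init__(self):
        self.yaw = 0.0
        self.pitch = 0.0

    def on_motion(self, dx, dy, dh):
        # only rotation is allowed in this mode: the height change dh
        # (which would mean zooming) is deliberately ignored
        self.yaw += dx * self.GAIN
        self.pitch += dy * self.GAIN
```

- ignoring `dh` in `on_motion` is the sketch's counterpart of prohibiting pan/zoom while rotating, so the image size stays fixed during rotation.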
- when the operator says "multi", the image processing apparatus 301 recognizes this and shifts to the "multi" mode.
- in the "multi" mode, the three-dimensional medical image is rotated, moved, and enlarged/reduced in accordance with the motion of the operator's hand.
- a function capable of rotating a three-dimensional medical image only by voice input instead of motion input may be implemented.
- for example, when the operator says "rotation", "left", and "15°", the image processing apparatus 301 recognizes this and rotates the image by 15° around a predetermined rotation axis (for example, the Z axis extending in the vertical direction of the screen). To rotate the image 15° upward around the axis extending in the horizontal direction, "rotation", "up", and "15°" may be input by voice.
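- parsing the voice-only rotation commands above could look like the following. This is an illustrative Python sketch; the word list and axis mapping are assumptions for illustration, not the patent's exact command grammar.

```python
def parse_rotation_command(words):
    """Return (axis, signed_angle_deg) or None if the command is incomplete.
    "left"/"right" rotate about the screen's vertical (Z) axis,
    "up"/"down" about its horizontal (X) axis (assumed sign convention)."""
    if not words or words[0] != "rotation":
        return None
    directions = {"left": ("Z", -1), "right": ("Z", +1),
                  "up": ("X", +1), "down": ("X", -1)}
    axis_sign = None
    angle = None
    for w in words[1:]:
        if w in directions:
            axis_sign = directions[w]
        elif w.endswith("°"):
            angle = float(w.rstrip("°"))   # e.g. "15°" -> 15.0
    if axis_sign is None or angle is None:
        return None                        # incomplete command, ignore it
    axis, sign = axis_sign
    return axis, sign * angle
```

- returning None for incomplete commands keeps the apparatus from acting on a partially recognized utterance.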
- the image processing apparatus 301 can select an anatomical structure portion or change the transmittance of the selected one by voice input. This will be described again after explaining the operation through the touch panel.
- the image processing apparatus 301 displays anatomical structures in the three-dimensional medical image as independent objects. Thereby, each can be selected individually or display can be switched on and off. For example, hepatic arteries, portal veins, and hepatic veins may be grouped and selected at once, or may be selected individually.
- the rotation of the three-dimensional medical image can also be performed by an operation on the touch panel.
- the image processing apparatus 301 rotates the three-dimensional medical image accordingly.
- the image processing apparatus 301 also responds when the operator touches two points on the screen and performs an operation to increase or decrease the distance between the two points (a pinch-out or pinch-in operation); the image is enlarged or reduced accordingly.
- the image processing apparatus 301 may change the display density when the operator touches any anatomical structure (for example, the liver). Specifically, the display may switch between two states: a normal opaque state and a semi-transparent state. As an example, the structure may become semi-transparent when touched once and return to the normal display state when touched again.
- alternatively, the transparency may be set in a plurality of stages, such as 0%, 30%, 70%, and 100% (non-display), and the display density may be switched sequentially, in a loop, each time the structure is touched. In this case, the 100% transparency (that is, the non-display state) may be excluded from the loop. Naturally, the specific transparency values can be changed as appropriate; in short, it is only necessary that the transparency has at least a plurality of stages that are switched among.
- the display transparency switching function is invoked simply by touching an arbitrary anatomical structure. The operation is therefore simpler and more intuitive than methods that require separately selecting some icon or command in order to switch the transparency.
- Gestures for changing the transparency are not limited to those described above.
- for example, the image processing apparatus may be configured to accept a predetermined motion input and change the transparency accordingly.
- in that case, the transparency may be set in several steps, for example, 0%, 30%, 70%, and 100% (non-display), or may instead be changed steplessly (continuously).
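- the looped stage switching described above can be sketched as follows. This is an illustrative Python sketch; the stage values follow the example in the text, and the option to skip the non-display stage mirrors the description above.

```python
STAGES = [0, 30, 70, 100]  # percent transparency; 100 == non-display (hidden)

def next_transparency(current, include_hidden=True):
    """Return the transparency stage after `current`, looping back to the
    start. With include_hidden=False the 100% (non-display) stage is
    excluded from the loop, as the text allows."""
    stages = STAGES if include_hidden else [s for s in STAGES if s != 100]
    i = stages.index(current)
    return stages[(i + 1) % len(stages)]
```

- each touch on the structure would call `next_transparency` with the structure's current stage, so repeated touches cycle through the stages.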
- the image processing apparatus 301 sets the anatomical structure as “selected”.
- the anatomical structure may be displayed in a color different from the initial state or may be blinked.
- FIG. 8 is a flowchart of a series of operations.
- the image processing apparatus 301 first displays a three-dimensional medical image as shown in FIG. 7A (step S1). Then, when the operator touches two points on an arbitrary anatomical structure (here, the liver 71) as shown in FIG. 7B, the apparatus determines that the two points are being touched (step S2). As for timing, the two points may be touched simultaneously or substantially simultaneously.
- the image processing apparatus 301 determines whether or not the state where the two points are touched has continued for a certain period of time (step S3).
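Steps S2 and S3 above amount to a simple condition on the touch state. The following is a minimal sketch under assumed names; the hold-time threshold is an illustrative value, not one specified in the text.

```python
# Determine whether a cutting line should be designated: exactly two
# points are touched (step S2) and that state has continued for at
# least a fixed time (step S3). HOLD_TIME_SEC is an assumed value.

HOLD_TIME_SEC = 1.0

def is_cut_line_designated(touch_points, hold_duration_sec):
    """True when exactly two points are touched for the required time."""
    return len(touch_points) == 2 and hold_duration_sec >= HOLD_TIME_SEC
```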
- when it is determined in step S3 that the state has continued for longer than the certain time, the image processing apparatus 301 indicates on the screen, in a predetermined display mode, that the two touched points P1 and P2 have been designated.
- any "predetermined display mode" may be used: for example, (i) both the points P1 and P2 and the line L1 connecting them are displayed, (ii) only the points P1 and P2 are displayed, or (iii) only the line L1 is displayed.
- the points P1 and P2 may be displayed not merely as small dots but as slightly larger graphical images, such as the circles shown in FIG. 7B (any shape, such as a circle, rectangle, polygon, or star, may be used), so that the designated positions can be clearly understood.
- the image processing apparatus 301 may keep displaying the designated points P1 and P2 as shown in FIG. 7C even after the operator releases the hand from the screen. It may also be configured to accept fine adjustment of the positions of the points P1 and P2. In that case, for example, the circular graphical images of P1 and P2 and/or the line L1 may blink so that the operator can understand that the apparatus is in a mode for accepting fine adjustment.
- FIG. 7C illustrates a state in which the point P2 is slightly moved and finely adjusted to the point P2 ′ as an example.
- This fine adjustment may be performed by the operator moving the graphical images of the points P1 and P2 with a finger, for example (operation on the touch panel).
- motion input may be used to finely adjust the positions of the points P1 and P2 without contacting the device. Note that displaying the cutting reference line L1 by voice input will be described later again.
- the cut function and the like of the present embodiment will be described first on the premise of touch panel operation.
- in step S4, the operator touches a predetermined icon (for example, an "OK" input icon) on the screen. The cutting function then cuts the liver along the line L1 connecting the points P1 and P2, as shown in FIG. 7D (step S5).
- the first part 71-1 and the second part 71-2 divided into two with the line L1 in between can be operated as independent anatomical structures.
- the above functions may also be executed by a method other than the above operation: for example, (i) touching not an icon but a predetermined area on the screen, or (ii) touching multiple times (a double tap in one example). Voice input may also be used.
- when the first part 71-1 is touched (step S6), only that part is selected, as shown in FIG. 7E. The display density is then switched: specifically, only the first part 71-1 is displayed semi-transparently. Touching it again returns it to the original display.
- when the first part 71-1 is long-pressed and swiped or dragged to the periphery of the screen, that part is hidden, and only the second part 71-2 and the blood vessels 73 and 75 remain.
- the non-displayed image may be displayed as a thumbnail image 66 as illustrated in FIG. 7F.
- FIG. 9A shows a state in which two points P1 and P2 are touched, as in the operation described with reference to FIG. 7B (the operator's fingers remain touching the two points on the screen; their illustration is omitted).
- when the operator then moves the two touching fingers, the image processing apparatus 301 specifies the positions of the two points P1′ and P2′ after the movement and designates a substantially rectangular area based on them. Specifically, the quadrangle surrounded by the four points, namely the two points P1 and P2 before movement and the two points P1′ and P2′ after movement, is designated as the area.
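The region bounded by the two points before movement and the two points after movement can be sketched as follows. This is an illustrative stand-in; for simplicity it computes the axis-aligned bounding rectangle of the four corners, which is one possible interpretation of the "substantially rectangular area".

```python
# Given the two touched points before movement (p1, p2) and after
# movement (p1m, p2m), return the bounding rectangle of the four
# corners as ((min_x, min_y), (max_x, max_y)).

def designated_area(p1, p2, p1m, p2m):
    """Bounding rectangle of the four corner points."""
    xs = [p[0] for p in (p1, p2, p1m, p2m)]
    ys = [p[1] for p in (p1, p2, p1m, p2m)]
    return (min(xs), min(ys)), (max(xs), max(ys))
```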
- the image processing apparatus 301 is configured so that fine adjustment can be performed.
- the operator may touch a predetermined icon (for example, an “OK” input icon) on the screen.
- to indicate that the positions of the points P1, P2, P1′, and P2′ can be finely adjusted, their circular graphical images (one example) and the lines connecting them may be displayed blinking.
- the designated area Sa1 (see FIG. 9B) is divided from other parts and can be operated as an independent object. Therefore, it is possible to change the display density of only the region or switch the display on and off. According to such a function, for example, by not displaying only the region Sa1, it is possible to observe the inner blood vessels 73 and 75 and to confirm the relationship between the blood vessels 73 and 75 and the liver 71.
- the area designation need not be performed with a quadrangle; a triangle, or a polygon with five or more sides, may also be used to designate the area.
- medical images of the liver and its surroundings are taken as an example, but of course, the anatomical structure is not limited to a specific one in the present invention.
- for example, a medical image of the examinee's head may be displayed and various image processing may be performed on it.
- selection is performed when a target is uttered by voice instead of touching and recognized. For example, if the operator says “liver”, the image processing apparatus 301 recognizes it and sets the liver to the selected state. In order to indicate the “selected state”, the anatomical structure (for example, the liver) may be displayed in a color different from the initial state, or may be blinked.
- the operator utters “Transparent”.
- the image processing apparatus 301 recognizes the voice and switches the display to a semi-transparent display.
- other anatomical structures (in one example, blood vessels and tumors) may remain displayed in their normal state.
- the transparency may be changed according to the distance from the motion sensor to the operator's hand. That is, the transparency gradually increases (or decreases) as the hand is brought closer to the motion sensor and, conversely, gradually decreases (or increases) as the hand is moved away from it.
- the apparatus is first set to a mode that accepts motion input as described above. The apparatus then detects the distance from the motion sensor to the operator's hand and changes the transparency accordingly.
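The distance-to-transparency mapping described above can be sketched as follows. The near/far range and the linear mapping are assumptions for illustration; the text specifies only that transparency varies monotonically with hand distance, in either direction.

```python
# Map the hand-to-sensor distance to a transparency percentage.
# By default a closer hand gives higher transparency; invert=True
# gives the opposite behavior mentioned in the text.

def transparency_from_distance(distance_cm, near=5.0, far=50.0, invert=False):
    """Linearly map a clamped distance in [near, far] to 0-100%."""
    d = min(max(distance_cm, near), far)
    t = (far - d) / (far - near) * 100.0  # near -> 100%, far -> 0%
    return 100.0 - t if invert else t
```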
- Line cut / box cut: when performing a line cut, for example, the operator utters "line cut".
- the image processing apparatus 301 recognizes it and displays a cutting reference line on the screen. This reference line may be like the line L1 in FIG. 7B.
- the image processing apparatus 301 waits for motion input.
- the operator can change the position, length, orientation, etc. of the cutting reference line by motion input. Thereby, the reference line can be set at a predetermined position without contact.
- FIG. 5 (b) shows an example of this, and the portion 371-2 on the right side of the reference line remains opaque and the portion 371-1 on the left side is semi-transparent.
- when performing a box cut, for example, the operator utters "box cut". Although detailed illustration is omitted, the image processing apparatus 301 recognizes the voice and displays a rectangle (one example) serving as a reference for excision on the screen.
- the size of the rectangle may be only one predetermined size, or a plurality of sizes such as large, medium, and small may be prepared.
- This box cut is to cut the target anatomical structure at a predetermined depth.
- the image processing apparatus 301 waits for motion input while displaying a rectangle serving as a reference for excision on the screen.
- the size and shape of the quadrangle may be fixed but may also be changeable. For example, they can be changed by moving the corners of the default quadrangle displayed first; motion input can be used to move a corner's position.
- a substantially rectangular parallelepiped hole, with the rectangle as its outline and with a predetermined depth corresponding to the moving distance of the hand, is then formed in the liver. As a result, it is possible to obtain a medical observation image in which a part of the liver is excised while the internal blood vessels are not.
- the image processing apparatus 301 recognizes the corresponding input and rotates the medical image in which the hole is formed by 15°. Thereby, the internal configuration of the hole (for example, a part of the liver is excised but the blood vessels are displayed) can be observed from various angles.
- the removal is performed using a rectangular outline, but it is needless to say that the outline may be defined by a triangular, polygonal, circular, elliptical or other arbitrary geometric shape.
- the area designated as a box is cut out, but conversely, only the area designated as a box may be left and the other areas may be hidden.
- image data are obtained by performing fluoroscopic imaging a predetermined time after injecting the contrast agent; owing to differences in the arrival time of the contrast agent and the like, the CT values (signal values) differ among blood vessels and organs. Threshold data are therefore set and filtering is performed for each blood vessel or organ to create volume data for each part.
- without such preparation, the following operations may be relatively time-consuming. In the viewer function for a three-dimensional medical image, for example, it is desirable to be able to switch between displaying blood vessels with and without emphasis. With this capability, the peripheral portions of the blood vessels (where the CT value (signal value) is low) can be shown or hidden as necessary for observation (see FIG. 10).
- conventionally, unless the display threshold is reset or filtering is redone for each blood vessel, the display mode of all blood vessels is changed uniformly. There is therefore a problem in that the display is difficult to change easily.
- the image processing apparatus has the following functions.
- the image processing apparatus reads volume data based on information obtained by imaging a patient (see also step S1 in FIG. 3).
- the signal value, CT value, and standard deviation (SD) in the volume are analyzed. For example, if the average CT value is 300 HU or more, the region is automatically determined to be an artery, and if the average CT value is 100 HU or less, it is automatically determined to be an organ. In addition to the CT value, the histogram shape of the CT values is also recognized: generally, arteries tend to have a high peak and a narrow width (distribution), while portal veins and veins have a low peak and a wide width (distribution). Automatic recognition of the blood vessel type can therefore be realized based on such elements.
- a histogram originally exists for each vessel type (artery, vein, portal vein, and so on), but the histograms may be integrated and normalized so that all blood vessels (or, in another embodiment, any two or more arbitrary blood vessels) can be represented together. Specifically, as an example, the average value and the center of gravity of each histogram may be calculated, and one histogram may be created by shifting each whole histogram so that the lower one matches the higher one.
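The automatic discrimination described above can be sketched as follows. The 300 HU and 100 HU thresholds come from the text; the spread-based heuristic for the intermediate range, and its 50 HU cutoff, are illustrative assumptions standing in for the histogram-shape recognition.

```python
# Roughly classify a region from its CT values (HU): mean value first
# (artery >= 300 HU, organ <= 100 HU per the text), then histogram
# spread (arteries narrow, veins/portal veins wide) for the rest.
from statistics import mean, pstdev

def classify_region(ct_values):
    """Return "artery", "organ", or "vein_or_portal" for a region."""
    avg = mean(ct_values)
    if avg >= 300:
        return "artery"
    if avg <= 100:
        return "organ"
    # Intermediate mean: decide by spread (assumed 50 HU cutoff).
    return "artery" if pstdev(ct_values) < 50 else "vein_or_portal"
```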
- the image processing apparatus accepts a predetermined input from the operator and hides the portions whose values fall below a certain reference value (or are at or below it).
- peripheral parts such as arteries, veins, and portal veins can be collectively hidden (see, for example, FIG. 10B).
- conversely, to display the peripheral portions, the lower limit of the CT values to be displayed may be set lower, as shown in FIG. 10A (the threshold here is 130 HU).
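The collective hiding of low-value portions described above reduces to a threshold filter. The following is a pure-Python stand-in for what would be an operation on volume data; the function name is an assumption.

```python
# Keep only voxels at or above the lower display limit (e.g. the
# 130 HU threshold mentioned in the text); everything below is hidden.

def apply_display_threshold(voxels, lower_limit_hu):
    """Return the voxel values that remain displayed."""
    return [v for v in voxels if v >= lower_limit_hu]
```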
- predetermined input from the operator may be performed by operating an image button such as an icon, a cursor, or a slider on the screen, for example.
- alternatively, a predetermined gesture of the operator's finger on the touch panel may be recognized and the change performed based on it.
- the blood vessel display may be changed by the operator touching the touch panel with several fingers and simultaneously moving the fingers in a predetermined direction. More specifically, when several fingers are simultaneously moved upward (in a first direction), the peripheral part of the blood vessels is displayed; conversely, when they are moved downward (in a second direction), the peripheral portion (more precisely, the vicinity of the outer edge of the thick portion of the blood vessel) may be removed.
- the above-described operation can be performed by a motion input via the motion sensor 380 instead of an operation on the touch panel. That is, in this configuration, the display of blood vessels (for example, other anatomical structures may be used) can be switched only by voice input and motion input, so it is not necessary to touch the touch panel, and the cleanliness is maintained. The three-dimensional medical image can be observed as it is.
- a pointer may appear as an image on the screen and be positioned at a predetermined part so that the part can be watched.
- in this case, the pointer needs to be arranged at an arbitrary part in the three-dimensional space, not merely in a plane. Such an operation of moving the pointer to an arbitrary part in three-dimensional space is relatively difficult to perform with an input interface such as a mouse or a touch panel.
- the pointers may be arranged three-dimensionally using motion input.
- the medical image processing apparatus 301 first receives an input of "pointer" (one example) using the voice recognition function. Then, as an example, a three-dimensionally displayed pointer appears on the screen.
- in the vertical and horizontal directions of the screen, the pointer may be moved in accordance with the movement of the operator's hand in a horizontal plane. With respect to the depth direction, the pointer may move toward the back of the three-dimensional medical image when the operator brings a hand close to the motion sensor 380, and toward the front when the hand is moved away.
- a display mode in which the display size gradually decreases as it moves in the back direction and gradually increases as it moves in the front direction may be adopted.
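The depth-linked pointer behavior above can be sketched as follows. All scales and ranges are illustrative assumptions; the mapping follows the text (hand close to the sensor moves the pointer deeper, and the drawn size shrinks toward the back).

```python
# Map hand position and sensor distance to a 3-D pointer state:
# (x, y) follow the hand in the horizontal plane, distance controls
# depth (0 = front, 1 = back), and the display size shrinks with depth.

def update_pointer(hand_x, hand_y, hand_distance_cm,
                   base_size=20.0, near=5.0, far=50.0):
    """Return (x, y, depth, display_size) for the 3-D pointer."""
    d = min(max(hand_distance_cm, near), far)
    depth = (far - d) / (far - near)        # hand close -> deeper (1.0)
    size = base_size * (1.0 - 0.5 * depth)  # smaller toward the back
    return hand_x, hand_y, depth, size
```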
- an image processing apparatus may include a schema image writing function.
- the image processing apparatus when there is a predetermined input by the operator, the image processing apparatus creates a schema image corresponding to the data using data of a three-dimensional medical image (see, for example, FIG. 5).
- the schema image may be any two-dimensional image such as a line drawing, a monochrome image, or a color image.
- Examples of the predetermined input by the operator include various inputs such as an input by touching an icon on the screen, a voice input, or a predetermined gesture input via a motion sensor.
- for example, a medical image currently displayed in a given orientation (one example) as shown in FIG. 5 may be converted into two dimensions as it is and written out as the schema image. At this time, a line drawing may be created by performing contour extraction processing.
- the data format may be any format, but for example, a PDF (Portable Document Format) format or any other image format such as GIF, PNG, or JPEG can be used.
- a doctor can write a sketch, a finding, or the like on the schema image created in this way, for example, with a touch pen or a finger.
- the schema image created by the image processing apparatus can be sent out from the apparatus and stored in a predetermined storage area connected on the network (see FIG. 3). For example, it may be imported as a part of the electronic medical record.
- the image processing apparatus may be a portable type such as a tablet terminal, which can in some cases be taken out of the hospital and used. Such a configuration may be useful, for example, when performing a simulation of a procedure while confirming a three-dimensional medical image of a specific patient outside the hospital. However, when it is taken out of the hospital, it is necessary from a security viewpoint that the internal information be kept secret.
- for this purpose, the image processing apparatus preferably has the following functions: (a) a function for recognizing the current position of the apparatus, and (b) a function for concealing predetermined internal information when the apparatus is determined to be outside the hospital. The information to be concealed includes at least information that can identify a patient. Specifically, the target information may be encrypted, or access to the information may be prohibited.
- determining whether the hospital is inside or outside may be based on whether it is within the range of the wireless network system in the hospital.
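The in-hospital determination and concealment described above can be sketched as follows. The network identifier and the masked field names are hypothetical; the text specifies only that being within range of the hospital wireless network may serve as the inside/outside criterion and that patient-identifying information is concealed.

```python
# Treat the apparatus as in-hospital when a known hospital network
# identifier is visible; otherwise mask patient-identifying fields.

HOSPITAL_SSIDS = {"hospital-net"}  # hypothetical identifier

def conceal_if_outside(record, visible_ssids):
    """Return a copy of record, masked when outside the hospital network."""
    if HOSPITAL_SSIDS & set(visible_ssids):
        return dict(record)  # inside: leave as-is
    hidden = dict(record)
    for key in ("patient_name", "patient_id"):  # assumed field names
        if key in hidden:
            hidden[key] = "*****"
    return hidden
```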
- FIG. 12 schematically illustrates a state in which concealment is performed. On this screen, all information that can identify the patient is concealed, and an icon 441 is displayed.
- a doctor who is viewing a three-dimensional medical image may want to temporarily confirm patient information, for example, when it is desired to confirm which patient the image belongs to.
- the medical image processing apparatus of this example first determines that the icon 441 has been pressed (FIGS. 12 and 13), and then displays patient information.
- the patient information may be data stored inside the apparatus, or may be data obtained by accessing an external server (a server in a hospital system, for example).
- the conditions under which such patient information can be displayed may be limited. One example is a case where fingerprint authentication of the operator has been performed on the apparatus; the identity authentication may also be performed by another authentication method.
- as one example, communication with the external server may be configured to be possible only under secure communication conditions such as a VPN (Virtual Private Network).
- as the information to be displayed, for example, one, two, or three or more of the patient's initials, date of birth, sex, age, address, operation date, doctor in charge, and the like may be used. A patient ID, examination ID, and so on may also be displayed.
- the displayed patient information may be automatically hidden again after a certain period of time.
- the display may be continued during the operation (in one example, the display is continued until the operation is completed (logout)).
- since the minimum patient information and/or operation information can be confirmed as necessary, the possibility of problems such as the displayed three-dimensional medical image being mistaken for that of a patient other than the one actually being operated on can be reduced.
- the medical image processing apparatus may have a function of displaying a stereo image as described below.
- an image including the first image 431L and the second image 431R may be displayed so that the operator can view stereoscopically.
- the first image 431L and the second image 431R display the same subject with a predetermined parallax; they are sometimes referred to as a left-eye image and a right-eye image.
- an operation pad area 433 may be displayed as shown in FIG.
- the operation pad area 433 is an area for changing the display angle of the subject.
- the medical image processing apparatus changes the display angle of the subject (two images) at the same time accordingly. That is, the direction of the subject can be changed in conjunction with the movement of the finger.
- the subject image is not particularly limited, but may be a blood vessel contrast image.
- the doctor can more accurately grasp the three-dimensional structure of the subject through stereoscopic vision.
- the apparatus has a function of displaying a first image and a second image having different parallaxes. More specifically, it further has a function of displaying an operation pad area for operating those display angles. For example, one or a combination of gesture input, voice input, motion input, and the like can be used to change the display of these stereoscopic images.
- this specification also discloses inventions of a method and a program corresponding to the above content.
- a fluoroscopic image of a patient is stored in a predetermined data server (for example, a DICOM server: one that stores data received from a modality in a predetermined format).
- management may be performed by dividing into sections such as arteries, veins, bones, organs, and the like (further subdivided sections).
- information for selection by the voice recognition function (for example, the voice input "aorta" for the aorta) may be set for each object.
- automatic discrimination keys (symbols, alphabets, numbers, and combinations thereof) may also be set for each object.
- an offset value may be set for each individual object.
- the offset value (see “CombineOfs” in the table) is set to, for example, “+50” for the aorta, “+100” for the abdominal artery, “+400” for the vein, and the like. .
- the offset value is used as follows, as an example. Suppose that the central CT value of the artery is 350 HU and that of the vein is 200 HU; blood vessels thus have different degrees of contrast between arteries and veins (and portal veins). In this example, to align them the vein offset value is set to +150 HU, virtually increasing the contrast effect so that the artery and vein can be handled with the same threshold setting.
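The offset mechanism in the example above can be sketched as follows. The offset and threshold values follow the artery/vein example in the text; the table and function names are assumptions.

```python
# Add a per-vessel offset (e.g. +150 HU for veins) before testing a
# single shared display threshold, so vessels with different contrast
# levels can be handled with one threshold setting.

OFFSETS_HU = {"artery": 0, "vein": 150}

def passes_common_threshold(ct_value, vessel_type, threshold=300):
    """Apply the per-vessel offset, then test the shared threshold."""
    return ct_value + OFFSETS_HU[vessel_type] >= threshold
```

With these values, an artery voxel at 350 HU and a vein voxel at 200 HU both pass the common 300 HU threshold, while a 100 HU vein voxel does not.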
- the offset value may be set not only in the artery and vein but also in other blood vessels such as the portal vein.
- the following usage may also be performed with respect to the offset value of a real (solid) organ rather than the vascular system. For example, the offset value of a real organ such as the liver may be set to a large value such as +700 HU. Otherwise, when the threshold is changed, the display of real organs such as the liver would change accordingly and their shape would collapse; displaying the real organ with a large offset value such as +700 HU avoids this.
- the presence of a setting table as described above is preferable in that the labor of preparation can be saved as much as possible.
- several such tables are preferably prepared. This is because in one surgical procedure it may be preferable that blood vessel A (or organ A) be clearly visible, while in another procedure it may be preferable that blood vessel B (or organ B) be seen more clearly than blood vessel A (or organ A). That is, in one embodiment it is preferable that several tables, each having offset values corresponding to a technique, be registered.
- the system or apparatus may be configured as follows: (i) a plurality of techniques are displayed, (ii) the operator selects one of them, and (iii) the corresponding table is called (and displayed as necessary).
- the display of the surgical techniques may be performed in accordance with a mode for selecting which part of the body the treatment is performed in, for example, in a user interface for setting injection conditions.
- it may be configured to display a technique corresponding to a selected predetermined part (head, chest, abdomen, etc.) (see (i) above).
- the embodiment described above uses voice input or motion input (input corresponding to human movement).
- a predetermined physical switch (for example, a foot switch) may be used in combination with these inputs.
- the “foot switch” is an example that is used by being placed on the floor, and includes a switch housing on which a sensor and a substrate are placed, and a pressing unit that can be stepped on with a foot or the like.
- the pressing portion is not limited, but may be a portion configured to be movable so as to be pushed down when stepped on.
- the detection signal from the foot switch may be supplied to the outside via a cable (wired), or may be configured to be supplied to the outside wirelessly.
- the foot switch may be electrically connected to the medical image processing apparatus of the present invention (for example, the control unit 350, see FIG. 2), but is not limited thereto.
- a configuration in which the foot switch is connected to another device may be used.
- FIG. 15 shows an example of the arrangement of foot switches.
- the chemical injection device includes an injection head 475 disposed near the imaging device 470, a first control unit (power supply unit) 478 connected thereto, and a console (second control unit) 476 connected thereto.
- the foot switch 477d is connected to the power supply unit as an example.
- a plurality of reception conditions may be set for voice operation.
- a foot switch may be used as one of the reception conditions.
- for example, the following settings may be made (one or more):
  1) Accept voice input only when there is motion input
  2) Accept voice input only when the foot switch is ON
  3) Accept voice input only when there is motion input and the foot switch is ON
  4) Always accept voice input
- the input conditions as described above may be registered in the voice operation command table of the device.
- setting 1) above means that no voice input is accepted in the absence of motion input.
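The acceptance conditions 1) to 4) above can be sketched as they might appear in a voice-operation command table. The function name and argument names are assumptions.

```python
# Decide whether a voice input should be accepted under the registered
# condition: 1) motion input required, 2) foot switch required,
# 3) both required, 4) always accept.

def voice_input_accepted(condition, motion_active, foot_switch_on):
    """Return True if voice input is accepted under the given condition."""
    if condition == 1:
        return motion_active
    if condition == 2:
        return foot_switch_on
    if condition == 3:
        return motion_active and foot_switch_on
    return True  # condition 4: always accept
```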
- as an example, the following input method is used for voice input: a command input is accepted only when a combination of direction and angle, such as "45° left" or "30° above", is uttered. This is because, without such combination information, the probability of misrecognition may increase. This will be described in detail below.
- if voice input is ON and single words such as "up" and "previous" can be recognized on their own, it is possible that a word in an ongoing conversation is recognized and an unintended input is made. Therefore, a configuration may be adopted in which a command is accepted only when a combination of a plurality of words is recognized.
- Such a voice input method for preventing erroneous recognition is not necessarily limited to the combination of direction and angle as described above.
- for example, a command may be accepted only when words are uttered in a predetermined order: an input such as "direction" + "front" may be accepted, while the reverse input "front" + "direction" may not be (that is, the order of the combination is fixed).
- similarly, the command may be accepted only when a combination such as "3D" + "enlarge" is recognized, for example, instead of simply recognizing the single word "enlarge".
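The order-sensitive combination commands described above can be sketched as a lookup on exact word sequences. The table entries and command names are illustrative assumptions.

```python
# A command fires only when a registered word sequence is recognized
# in order; a reversed or partial sequence matches nothing.

COMMANDS = {
    ("direction", "front"): "set_view_front",  # assumed command name
    ("3d", "enlarge"): "zoom_in_3d",           # assumed command name
}

def match_command(words):
    """Return the command for an exact, ordered word combination."""
    return COMMANDS.get(tuple(w.lower() for w in words))
```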
- a combination such as “3D” + “enlarge” is recognized, for example, instead of simply recognizing the word “enlarge”.
- alternatively, a configuration may be adopted in which voice input is accepted only for a certain period of time (and is not accepted outside that period).
- the foot switch may be turned ON only while the pressing part is stepped on.
- alternatively, the subsequent process may be performed only when the foot switch is stepped on for a certain period of time (a so-called long press). This can prevent unnecessary processing from being performed when the foot switch is stepped on unintentionally.
- input as described above may be required for processes in which such unintended processing is a concern.
- the above operation may be performed when other switches (physical switches) are ON instead of the foot switch.
- basically, voice input may be used only in a predetermined mode suited to voice input, with other input methods used otherwise. However, when a predetermined input (for example, uttering "speech all ON", as one example) is made, a function may be provided that also enables voice input for commands (or at least a part of them) that are not voice-enabled by default.
- such expansion of voice input is just an example, and the apparatus may return to the original state after a certain time has elapsed (timeout function). For example, a timeout period of about 1, 3, or 5 minutes may be set in advance.
- the input for designating the angle may be configured to always react only to voice.
- a table of voice-input words may be prepared, setting for each word the circumstances under which its input is permitted (for example, the word input "aaa" is accepted only when the foot switch is ON, whereas the word input "bbb" is always accepted).
- the image processing apparatus of the present invention may basically be configured such that a medical image viewed from a certain direction is always displayed by default regardless of the part.
- the configuration is such that an image viewed from the front of the body is always displayed by default regardless of the site and / or technique.
- alternatively, a configuration in which an image viewed from a preset angle is displayed by default, depending on the type of the region and/or technique, may be preferable.
- in thoracoscopic surgery, for example, the lateral position is fundamental. Therefore, in the case of thoracoscopic surgery, the orientation of the lateral position may be set as the home position and used for the default display.
- the selection of the home position may be set manually or automatically according to at least one of the surgical method, the site, and the position of the tumor.
- for example, whether the lesion is in the left lung or the right lung determines whether the left lateral position or the right lateral position is used.
- a configuration is also useful in which the position of a lesion (tumor) is automatically recognized and one of them is automatically determined accordingly.
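The automatic home-position selection above can be sketched as follows. The text states only that the lesion side determines the lateral position; the specific mapping (a left-lung lesion giving the right lateral position, so the affected side faces up) and the fallback are assumptions.

```python
# Choose the default display orientation (home position) from the
# lesion side for a thoracoscopic procedure.

def home_position(lesion_side):
    """Return the assumed home position for the given lesion side."""
    if lesion_side == "left":
        return "right_lateral"   # left-lung lesion: lie on the right side
    if lesion_side == "right":
        return "left_lateral"    # right-lung lesion: lie on the left side
    return "supine"              # assumed fallback
```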
- the screen displayed when the voice input is ON may be as shown in FIG. 16, for example.
- this screen adds several indicators to the display example shown in FIG. 6, though the images in FIG. 6 (see reference numerals 391 and 393) may be omitted, as will be easily understood by those skilled in the art.
- in addition to a voice recognition (voice input) ON display, there may be a motion input ON display 397a.
- there may also be a display unit 398 that displays the recognized voice as characters. This makes it possible to visually confirm what kind of voice input has been made.
- there may further be a display unit 399 indicating whether or not the recognized voice has been accepted as a command. If it is not accepted, an indication such as "REJECTED" may be displayed, which allows the operator to visually confirm that it was not accepted.
- A medical image processing apparatus (301) comprising: a display (361); a control unit (350) connected to the display; an audio input device (370); and a motion sensor (380), wherein the control unit (350) has: a: an image display unit for displaying a three-dimensional medical image on the display; b: a mode selection unit for recognizing a voice input using the voice recognition device (170) and switching a mode related to the display of the three-dimensional medical image accordingly; and c: a display processing unit for recognizing an operator's motion input via the motion sensor and changing the display of the three-dimensional medical image accordingly.
- an apparatus or system having at least a control unit having the above-described characteristics.
- Zoom mode to enlarge and reduce the image
- Pan mode to translate the image
- Rotation mode to rotate the image
- the three-dimensional medical image rotates in response to the movement of the operator's hand.
- the medical image processing apparatus as described above.
- the three-dimensional medical image includes at least liver and blood vessel image data.
- a: processing for displaying a three-dimensional medical image on a display
- b: processing for recognizing a voice input using the voice recognition device (170) and switching a mode relating to the display of the three-dimensional medical image accordingly
- c: processing for recognizing an operator's motion input via a motion sensor and changing the display of the three-dimensional medical image accordingly
- a medical image processing program for causing a computer to execute processing for recognizing an operator's motion input via a motion sensor and changing the display of the three-dimensional medical image accordingly.
- Zoom mode to enlarge and reduce the image
- Pan mode to translate the image
- Rotation mode to rotate the image
- a method for operating a medical image processing apparatus, comprising: a computer displaying a three-dimensional medical image on a display; a computer recognizing an input voice using a voice recognition device (170) and switching a mode relating to display of the three-dimensional medical image accordingly; and a computer recognizing an operator's motion input via a motion sensor and changing the display of the three-dimensional medical image accordingly.
- Zoom mode to enlarge and reduce the image
- Pan mode to translate the image
- Rotation mode to rotate the image
- in the zoom mode, when the operator's hand is moved in a first direction, the three-dimensional medical image is enlarged, and when the hand is moved in the opposite second direction, the three-dimensional medical image is reduced.
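The zoom-mode behavior above can be sketched as a simple mapping from hand displacement to display scale. The axis choice, sensitivity, and clamp range are illustrative assumptions, not values from the patent.

```python
def apply_zoom(scale: float, hand_dz: float, sensitivity: float = 0.5) -> float:
    """Update the display scale from the hand's displacement along one axis.

    Positive displacement (the "first direction") enlarges the image;
    negative displacement (the opposite "second direction") reduces it.
    The result is clamped to an assumed sane zoom range.
    """
    new_scale = scale * (1.0 + sensitivity * hand_dz)
    return max(0.1, min(new_scale, 10.0))
```

A multiplicative update like this keeps zooming symmetric in feel (equal hand motions in and out roughly cancel), which is one plausible way to realize the enlarge/reduce gesture the claim describes.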
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Medical Informatics (AREA)
- Engineering & Computer Science (AREA)
- Public Health (AREA)
- General Health & Medical Sciences (AREA)
- Pathology (AREA)
- Molecular Biology (AREA)
- Epidemiology (AREA)
- Physics & Mathematics (AREA)
- Radiology & Medical Imaging (AREA)
- Biophysics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Primary Health Care (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- Veterinary Medicine (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- User Interface Of Digital Computer (AREA)
- Measuring And Recording Apparatus For Diagnosis (AREA)
- Medical Treatment And Welfare Office Work (AREA)
- Image Analysis (AREA)
Abstract
The invention concerns an image processing device (301) comprising: a display (361); a control unit (350) connected to the display; a voice input device (370); and a motion sensor (380). The control unit (350) comprises: a: an image display unit for displaying a three-dimensional medical image on the display; b: a mode selection unit for switching, upon recognition of a voice input via a voice recognition device (170), the display mode of the three-dimensional medical image in response to the speech; and c: a display processing unit for changing, upon recognition by the motion sensor of a motion input by an operator, the display of the three-dimensional medical image in response to the motion.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2017536488A JPWO2017034020A1 (ja) | 2015-08-26 | 2016-08-26 | 医用画像処理装置および医用画像処理プログラム |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2015-166871 | 2015-08-26 | ||
| JP2015166871 | 2015-08-26 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2017034020A1 true WO2017034020A1 (fr) | 2017-03-02 |
Family
ID=58100556
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2016/074966 Ceased WO2017034020A1 (fr) | 2015-08-26 | 2016-08-26 | Dispositif de traitement d'images médicales et programme de traitement d'images médicales |
Country Status (2)
| Country | Link |
|---|---|
| JP (4) | JPWO2017034020A1 (fr) |
| WO (1) | WO2017034020A1 (fr) |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2021094264A (ja) * | 2019-12-18 | 2021-06-24 | 東朋テクノロジー株式会社 | 身体支持追従装置 |
| JP2023051235A (ja) * | 2021-09-30 | 2023-04-11 | 富士フイルム株式会社 | 画像処理装置、画像処理方法、及びプログラム |
| JP2023104568A (ja) * | 2022-01-18 | 2023-07-28 | 富士フイルムビジネスイノベーション株式会社 | 情報処理装置及び情報処理プログラム |
| JP2024123195A (ja) * | 2020-02-28 | 2024-09-10 | 株式会社根本杏林堂 | 医用画像処理装置、医用画像処理方法および医用画像処理プログラム |
| US12488458B2 (en) | 2021-09-30 | 2025-12-02 | Fujifilm Corporation | Image processing device, image processing method, and program for outputting information for displaying two-dimensional image corresponding to cross section |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2009116581A (ja) * | 2007-11-06 | 2009-05-28 | Ziosoft Inc | 医療画像処理装置および医療画像処理プログラム |
| WO2011085815A1 (fr) * | 2010-01-14 | 2011-07-21 | Brainlab Ag | Commande d'un système de navigation chirurgical |
| JP2014523772A (ja) * | 2011-06-22 | 2014-09-18 | コーニンクレッカ フィリップス エヌ ヴェ | 医用画像を処理するシステム及び方法 |
Family Cites Families (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP3625933B2 (ja) * | 1995-12-14 | 2005-03-02 | ジーイー横河メディカルシステム株式会社 | 医用画像表示装置 |
| JPH10301567A (ja) * | 1997-04-22 | 1998-11-13 | Kawai Musical Instr Mfg Co Ltd | 電子楽器の音声制御装置 |
| JP2002304467A (ja) * | 2001-04-03 | 2002-10-18 | National Cancer Center-Japan | 診断支援システム |
| JP2003280681A (ja) * | 2002-03-25 | 2003-10-02 | Konica Corp | 医用画像処理装置、医用画像処理方法、プログラム、及び記録媒体 |
| JP2006051170A (ja) * | 2004-08-11 | 2006-02-23 | Toshiba Corp | 画像診断装置、頭部虚血部位解析システム、頭部虚血部位解析プログラムおよび頭部虚血部位解析方法 |
| JP2006181146A (ja) * | 2004-12-28 | 2006-07-13 | Fuji Photo Film Co Ltd | 診断支援装置、診断支援方法およびそのプログラム |
| CA2558653C (fr) | 2005-09-08 | 2012-12-18 | Aloka Co., Ltd. | Appareil de tomographie informatisee utilisant les rayons x et methode de traitement d'image |
| JP2008259710A (ja) | 2007-04-12 | 2008-10-30 | Fujifilm Corp | 画像処理方法および装置ならびにプログラム |
| JP2009061028A (ja) * | 2007-09-05 | 2009-03-26 | Nemoto Kyorindo:Kk | 画像処理装置及びそれを備えた医用ワークステーション |
| AU2008331807A1 (en) | 2007-12-03 | 2009-06-11 | Dataphysics Research, Inc. | Systems and methods for efficient imaging |
| JP2011118684A (ja) * | 2009-12-03 | 2011-06-16 | Toshiba Tec Corp | 調理補助端末及びプログラム |
| JP5747007B2 (ja) | 2012-09-12 | 2015-07-08 | 富士フイルム株式会社 | 医用画像表示装置、医用画像表示方法および医用画像表示プログラム |
| JP5989498B2 (ja) | 2012-10-15 | 2016-09-07 | 東芝メディカルシステムズ株式会社 | 画像処理装置及びプログラム |
| US20150212676A1 (en) * | 2014-01-27 | 2015-07-30 | Amit Khare | Multi-Touch Gesture Sensing and Speech Activated Radiological Device and methods of use |
- 2016
  - 2016-08-26 WO PCT/JP2016/074966 patent/WO2017034020A1/fr not_active Ceased
  - 2016-08-26 JP JP2017536488A patent/JPWO2017034020A1/ja active Pending
- 2021
  - 2021-05-14 JP JP2021082521A patent/JP7229569B2/ja active Active
- 2023
  - 2023-02-08 JP JP2023017220A patent/JP2023071677A/ja active Pending
- 2025
  - 2025-05-02 JP JP2025076512A patent/JP2025109737A/ja active Pending
Non-Patent Citations (1)
| Title |
|---|
| RYOMA FUJII ET AL.: "Development of hands-free 3D medical image visualization system using Kinect", IEICE TECHNICAL REPORT, vol. 115, no. 139, 7 July 2015 (2015-07-07), pages 33 - 38, ISSN: 0913-5685 * |
Also Published As
| Publication number | Publication date |
|---|---|
| JP2025109737A (ja) | 2025-07-25 |
| JPWO2017034020A1 (ja) | 2018-08-02 |
| JP2021121337A (ja) | 2021-08-26 |
| JP2023071677A (ja) | 2023-05-23 |
| JP7229569B2 (ja) | 2023-02-28 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP7229569B2 (ja) | 医用画像処理装置および医用画像処理プログラム | |
| US11096668B2 (en) | Method and ultrasound apparatus for displaying an object | |
| US10849597B2 (en) | Method of providing copy image and ultrasound apparatus therefor | |
| CN106922190A (zh) | 用于执行医疗手术并且用于访问和/或操纵医学相关信息的方法和系统 | |
| US10269453B2 (en) | Method and apparatus for providing medical information | |
| JP2025098287A (ja) | 医用画像処理システム、医用画像処理方法および医用画像処理プログラム | |
| JP2025065436A (ja) | 医用画像処理装置、医用画像処理方法および医用画像処理プログラム | |
| US10772595B2 (en) | Method and apparatus for displaying medical image | |
| JP6462358B2 (ja) | 医用画像表示端末および医用画像表示プログラム | |
| JP6501525B2 (ja) | 情報処理装置、情報処理方法およびプログラム | |
| JP6902012B2 (ja) | 医用画像表示端末および医用画像表示プログラム | |
| JP7107590B2 (ja) | 医用画像表示端末および医用画像表示プログラム | |
| JP7555560B2 (ja) | 医用画像処理装置、医用画像処理装置の制御方法および医用画像処理プログラム | |
| JP2022145671A (ja) | ビューワ、前記ビューワの制御方法および前記ビューワの制御プログラム | |
| JP2022009606A (ja) | 情報処理装置、情報処理方法およびプログラム | |
| CN120780139A (zh) | 包括来自一个或多个外部传感器的实况馈送的用于扩展现实体验的用户界面 | |
| JP2020177709A (ja) | 情報処理装置、情報処理方法およびプログラム |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16839367 Country of ref document: EP Kind code of ref document: A1 |
|
| ENP | Entry into the national phase |
Ref document number: 2017536488 Country of ref document: JP Kind code of ref document: A |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 16839367 Country of ref document: EP Kind code of ref document: A1 |