
WO2017038300A1 - Ultrasonic imaging device, and image processing device and method - Google Patents

Ultrasonic imaging device, and image processing device and method

Info

Publication number
WO2017038300A1
WO2017038300A1
Authority
WO
WIPO (PCT)
Prior art keywords
volume data
image
ultrasonic
alignment
image processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2016/071731
Other languages
English (en)
Japanese (ja)
Inventor
子盛 黎
容弓 柿下
佩菲 朱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Ltd
Original Assignee
Hitachi Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Ltd filed Critical Hitachi Ltd
Priority to JP2017537650A priority Critical patent/JP6490820B2/ja
Publication of WO2017038300A1 publication Critical patent/WO2017038300A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 - Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/13 - Tomography
    • A61B 8/14 - Echo-tomography

Definitions

  • The present invention relates to an ultrasonic imaging apparatus, and more particularly to an imaging technique for simultaneously displaying an acquired ultrasonic image, an image of the same cross section taken by another imaging apparatus, and a predetermined characteristic part in a subject.
  • Since the ultrasonic imaging apparatus irradiates the subject with ultrasonic waves and images the structure inside the subject using the reflected signal, the patient can be observed non-invasively and in real time.
  • Other medical imaging apparatuses, such as an X-ray CT (Computed Tomography) apparatus or an MRI (Magnetic Resonance Imaging) apparatus, can capture images over a wide range and at high resolution, so the detailed positional relationships of lesions and organs can be grasped easily.
  • For example, a tumor such as liver cancer can be found at an early stage from an MRI image or an X-ray CT image.
  • Against this background, diagnostic imaging systems have begun to spread in which a position sensor is attached to the ultrasonic probe to calculate the positional relationship of the scan plane, and a two-dimensional image corresponding to the image of the ultrasonic scan plane is constructed from three-dimensional diagnostic volume data captured by a medical image diagnostic apparatus and displayed.
  • Patent Document 1 describes a method of constructing an MRI cross-sectional image corresponding to an image of an ultrasonic scan plane by performing alignment between an ultrasonic three-dimensional image (volume) and an MRI three-dimensional image using blood vessel information.
  • In this method, a blood vessel region is extracted as a binarized image from each of the ultrasonic image and the MRI image, thinned, and blood vessel branches are detected from the result.
  • The blood vessel branches obtained from the ultrasonic image and the MRI image are matched in a round-robin manner, the corresponding branches are identified, and a geometric transformation matrix for alignment is estimated.
  • Using this matrix, the MRI image is aligned with the ultrasonic image, and a corresponding cross-sectional image is generated and displayed.
  • However, since the method of Patent Document 1 can specify only the position information of blood vessel branches, it is difficult to present the name of an anatomical characteristic site in the organ to the user. Further, at a site where blood vessels are not abundant, or when blood vessels cannot be clearly imaged, it is difficult to extract and align blood vessel branches.
  • An object of the present invention is to provide an ultrasonic imaging apparatus, an image processing apparatus, and a method capable of easily estimating and identifying the position of a characteristic part from ultrasonic volume data and diagnostic volume data captured by another medical imaging apparatus.
  • To achieve this object, there is provided an ultrasonic imaging apparatus including: an ultrasonic probe that transmits ultrasonic waves to a subject and receives ultrasonic waves from the subject; a position sensor attached to the ultrasonic probe; an image generation unit that generates an ultrasonic image from the reception signal of the ultrasonic probe and generates first volume data from the ultrasonic image and the position information of the ultrasonic probe obtained from the position sensor; and an image processing device that receives and processes the first volume data and second volume data for the subject. The image processing device estimates and identifies characteristic parts of the subject from each of the first volume data and the second volume data, and performs alignment between the first volume data and the second volume data using the position information of the characteristic parts.
  • There is also provided an image processing apparatus including: a characteristic part estimation/identification unit that estimates and identifies characteristic parts of a subject from each of first volume data of an ultrasonic image of the subject and second volume data of an image, different from the ultrasonic image, of the subject; and an alignment unit that aligns the first volume data and the second volume data using the position information of the characteristic parts.
  • There is further provided an image processing method in an image processing apparatus, in which the image processing apparatus estimates and identifies characteristic parts of a subject from each of first volume data of an ultrasonic image of the subject and second volume data of an image, different from the ultrasonic image, of the subject, and aligns the first volume data and the second volume data using the position information of the characteristic parts.
  • According to the present invention, it is possible to estimate and identify the position of a predetermined characteristic part from the ultrasonic volume data and the volume data of another imaging apparatus, and to display the part name on each cross-sectional image.
  • FIG. 1 is a block diagram showing an example of the overall configuration of an ultrasonic imaging apparatus according to Embodiment 1.
  • FIG. 2 is a block diagram illustrating a hardware configuration example of the ultrasonic imaging apparatus according to Embodiment 1.
  • FIG. 3 is a functional block diagram of the image processing apparatus of the ultrasonic imaging apparatus according to Embodiment 1.
  • FIG. 4 is a flowchart showing the process flow of the ultrasonic imaging apparatus according to Embodiment 1.
  • FIGS. 5A, 5B, and 5C are explanatory diagrams showing examples of characteristic parts according to Embodiment 1.
  • FIG. 6 is a diagram showing an example of the ultrasonic/CT corresponding characteristic part information according to Embodiment 1.
  • FIG. 7 is a flowchart showing the position estimation/identification processing of a characteristic part from volume data according to Embodiment 1.
  • FIG. 8 is a flowchart illustrating the alignment processing between an ultrasonic volume and a CT volume according to Embodiment 1.
  • FIG. 9 is a functional block diagram of the image processing apparatus of an ultrasonic imaging apparatus according to Embodiment 2.
  • FIG. 10 is a flowchart illustrating the processing for correcting a characteristic part according to Embodiment 2.
  • FIG. 11 is a functional block diagram of the image processing apparatus of an ultrasonic imaging apparatus according to Embodiment 3.
  • FIG. 12 is a flowchart illustrating the blood vessel point group rigid body alignment processing according to Embodiment 3.
  • FIG. 13 is a flowchart illustrating the image-based rigid body alignment processing according to Embodiment 3.
  • FIG. 14 is a flowchart illustrating the image-based non-rigid alignment processing according to Embodiment 3.
  • FIGS. 15A and 15B are diagrams showing an example of the display screen of the display and the button selection means according to each embodiment.
  • The first embodiment is an ultrasonic imaging apparatus including: an ultrasonic probe that transmits ultrasonic waves to a subject and receives ultrasonic waves from the subject; a position sensor attached to the ultrasonic probe; an image generation unit that generates an ultrasonic image from the reception signal of the ultrasonic probe and generates first volume data from the ultrasonic image and the position information of the ultrasonic probe obtained from the position sensor; and an image processing device that receives and processes the first volume data and second volume data for the subject, estimates and identifies characteristic parts of the subject from each of the first and second volume data, and aligns the two using the position information of the characteristic parts.
  • The first embodiment is likewise an example of an image processing apparatus, including a characteristic part estimation/identification unit that estimates and identifies characteristic parts of the subject from each of the first volume data of the ultrasonic image of the subject and the second volume data of an image different from the ultrasonic image of the subject, and an alignment unit that aligns the first volume data and the second volume data using the position information of the characteristic parts, and of its processing method.
  • In the present embodiment, a predetermined anatomical characteristic part is obtained from each of the ultrasonic volume data, which is the first volume data obtained by imaging the subject, and the volume data of a medical image diagnostic modality, which is the second volume data, for example CT volume data. Position estimation and name identification are performed for each characteristic part, and the name of the identified characteristic part is displayed on the display of each volume data based on the estimated position information. Further, a geometric transformation matrix for alignment between the ultrasonic volume and the CT volume is calculated using the position information of the corresponding characteristic parts of the two volumes.
  • With this alignment, a 2D ultrasonic image captured in real time during an operation and the corresponding CT cross-sectional image are displayed on the display unit at the same time. Furthermore, the operation can be guided by displaying the names and positional relationships of the corresponding characteristic parts of the ultrasound and CT data on the ultrasonic cross-sectional image and the CT cross-sectional image.
  • The ultrasonic imaging apparatus of the present embodiment includes an ultrasonic probe 7, a position sensor 8, an image generation unit 107, and an image processing device 108, and further includes a transmission unit 102, a transmission/reception switching unit 101, a reception unit 105, a position detection unit 6, a user interface (UI) 121, and a control unit 106.
  • The transmission unit 102 generates a transmission signal under the control of the control unit 106 and delivers it to each of the plurality of ultrasonic elements constituting the ultrasonic probe 7.
  • Each of the plurality of ultrasonic elements of the ultrasonic probe 7 transmits an ultrasonic wave toward the subject 120.
  • The ultrasonic waves reflected by the subject 120 reach the plural ultrasonic elements of the ultrasonic probe 7 again, where they are received and converted into electric signals.
  • The signal received by each ultrasonic element is delayed by a predetermined delay amount corresponding to the position of the reception focal point and then phase-aligned and summed (phasing addition) by the reception unit 105; this is repeated for each of a plurality of reception focal points.
  • The signal after the phasing addition is transferred to the image generation unit 107; a minimal sketch of this delay-and-sum operation follows below.
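  • The sketch assumes a linear array and a constant speed of sound; its function and parameter names are illustrative, not taken from the apparatus itself:

```python
import numpy as np

def delay_and_sum(rf, element_x, focus, c=1540.0, fs=40e6):
    """Phasing addition (delay-and-sum) for a single receive focal point.

    rf:        (n_elements, n_samples) raw traces from the ultrasonic elements
    element_x: (n_elements,) lateral element positions [m]
    focus:     (x, z) receive focal point [m], z = depth
    c, fs:     assumed speed of sound [m/s] and sampling rate [Hz]
    """
    fx, fz = focus
    # Round-trip time: transmit travel to the focus plus the return path
    # from the focus back to each individual element.
    dist = fz + np.sqrt((element_x - fx) ** 2 + fz ** 2)
    idx = np.clip(np.round(dist / c * fs).astype(int), 0, rf.shape[1] - 1)
    # Pick each element's sample at its own delay and sum across elements.
    return rf[np.arange(rf.shape[0]), idx].sum()
```

  • Repeating this over a grid of reception focal points yields the samples from which the 2D image is formed.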
  • The transmission/reception switching unit 101 selectively connects the transmission unit 102 or the reception unit 105 to the ultrasonic probe 7.
  • The position detection unit 6 detects the position of the ultrasonic probe 7 from the output of the position sensor 8.
  • For example, a magnetic sensor unit can be used as the position detection unit 6.
  • The position detection unit 6 forms a magnetic field space, and the position sensor 8 detects the magnetic field, thereby detecting its coordinates relative to a reference point.
  • The image generation unit 107 performs processing such as arranging the phasing addition signals received from the reception unit 105 at the positions corresponding to the reception foci, and generates a 2D ultrasonic image.
  • The image generation unit 107 also receives the position information of the ultrasonic probe 7 at that time from the position detection unit 6 and attaches the position information to the generated ultrasonic image.
  • While the user moves the ultrasonic probe 7, the image generation unit 107 generates ultrasonic images to which the position information of the ultrasonic probe 7 at each moment is attached and outputs them to the image processing device 108. From these, the image processing device 108 can generate first volume data of a three-dimensional ultrasound image.
  • The image processing apparatus 108 receives, via the user interface (UI) 121, second volume data obtained for the subject 120 by another image capturing apparatus, and identifies characteristic parts of, and aligns, the first volume data and the second volume data.
  • Other image capturing apparatuses such as an MRI apparatus, an X-ray CT apparatus, or another ultrasonic diagnostic apparatus are referred to as medical modalities.
  • In the present embodiment, an X-ray CT apparatus is used as the example medical modality, and the volume data of the X-ray CT apparatus is referred to as the second volume data.
  • FIG. 2 is a block diagram illustrating a hardware configuration example of the image processing apparatus 108 and the user interface 121.
  • The hardware configuration shown in FIG. 2 is also used in the other embodiments described later.
  • The image processing apparatus 108 includes a CPU (processor) 1, a ROM (nonvolatile memory: a read-only storage medium) 2, a RAM (volatile memory: a storage medium from which data can be read and to which data can be written) 3, a storage device 4, and a display control unit 15.
  • The user interface 121 includes an image input unit 9, a medium input unit 11, an input control unit 13, and an input device 14. These, together with the ultrasonic image generation unit 107 and the position detection unit 6, are connected to each other by a bus 5.
  • A display 16 is connected to the display control unit 15.
  • At least one of the ROM 2 and the RAM 3 stores in advance the programs and data necessary for realizing the operation of the image processing apparatus 108 through the arithmetic processing of the CPU 1.
  • Various processes of the image processing apparatus 108 are realized by the CPU 1 executing a program stored in advance in at least one of the ROM 2 and the RAM 3.
  • The program executed by the CPU 1 may be stored in a storage medium 12 such as an optical disk, and the medium input unit 11 (for example, an optical disk drive) may read the program and store it in the RAM 3.
  • Alternatively, the program may be stored in the storage device 4 and loaded from the storage device 4 into the RAM 3, or may be stored in the ROM 2 in advance.
  • The image input unit 9 is an interface for capturing the CT volume data captured by the image capturing apparatus 10, which is a medical modality such as an X-ray CT apparatus.
  • The storage device 4 is a magnetic storage device that stores the CT volume data and the like input via the image input unit 9.
  • The storage device 4 may instead include a nonvolatile semiconductor storage medium such as a flash memory, or an external storage device connected via a network or the like may be used.
  • The input device 14 is a device that receives user operations, and includes, for example, a keyboard, a trackball, an operation panel, and a foot switch.
  • The input control unit 13 is an interface that receives the operation inputs entered by the user, and the received operation inputs are processed by the CPU 1.
  • The display control unit 15 controls the display of the image data obtained by the processing of the CPU 1 on the display 16, and the display 16 displays the images under its control.
  • FIG. 3 is a functional block diagram showing an example of functions of the image processing apparatus 108 of the present embodiment.
  • As shown in FIG. 3, the image processing apparatus 108 includes an ultrasonic image acquisition unit 28, an ultrasonic probe position information acquisition unit 29, an ultrasonic volume data generation unit 21 for the first volume data, a characteristic part position estimation/identification unit 23 for the ultrasonic volume data, a reception unit 22 for the CT volume data as the second volume data, and a characteristic part position estimation/identification unit 24 for the CT volume data.
  • The image processing apparatus 108 further includes the ultrasonic/CT corresponding characteristic part information 25 indicating the names and positions of the characteristic parts, a CT volume coordinate conversion calculation unit 26 for initial alignment, a real-time 2D-CT image calculation unit 27, a real-time 2D ultrasonic image acquisition unit 30, and an image display unit 31.
  • In step S201, the CT volume data reception unit 22 receives the CT volume data from the image capturing apparatus 10 via the image input unit 9.
  • In step S202, a message prompting the user to move and scan the ultrasonic probe 7 is displayed on the display 16.
  • While the probe is scanned, 2D ultrasonic images are generated by the transmission unit 102, the reception unit 105, and the image generation unit 107.
  • The ultrasonic volume data generation unit 21 receives the continuously generated 2D ultrasonic images from the image generation unit 107.
  • In step S203, the position detection unit 6 detects the position of the ultrasonic probe 7 from the output of the position sensor 8, and the ultrasonic volume data generation unit 21 receives the real-time position information of the ultrasonic probe.
  • In step S204, the ultrasonic volume data generation unit 21 generates ultrasonic volume data as the first volume data from the continuously generated 2D ultrasonic images and the position information of the ultrasonic probe attached to them; one common reconstruction scheme is sketched below.
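  • The following sketch makes illustrative assumptions (nearest-neighbor insertion with averaging; the names are not the patent's): each tracked 2D frame is pushed through its pose matrix into a voxel grid:

```python
import numpy as np

def compound_volume(frames, poses, vol_shape, spacing):
    """Accumulate tracked 2D frames into a voxel grid (nearest-neighbor).

    frames:    iterable of (h, w) B-mode images
    poses:     matching 4x4 frame-to-volume transforms from the position sensor
    vol_shape: (nx, ny, nz) size of the output voxel grid
    spacing:   voxel edge length, in the same units as the poses
    """
    acc = np.zeros(vol_shape, dtype=np.float64)
    hits = np.zeros(vol_shape, dtype=np.int64)
    shape = np.array(vol_shape)
    for img, T in zip(frames, poses):
        h, w = img.shape
        ys, xs = np.mgrid[0:h, 0:w]
        # Homogeneous pixel coordinates on the frame plane (z = 0).
        pts = np.stack([xs.ravel(), ys.ravel(),
                        np.zeros(h * w), np.ones(h * w)])
        ijk = np.round((T @ pts)[:3] / spacing).astype(int)
        ok = np.all((ijk >= 0) & (ijk < shape[:, None]), axis=0)
        np.add.at(acc, tuple(ijk[:, ok]), img.ravel()[ok])
        np.add.at(hits, tuple(ijk[:, ok]), 1)
    return acc / np.maximum(hits, 1)  # average where frames overlap
```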
  • The characteristic part position estimation/identification unit 23 for the ultrasonic volume and the characteristic part position estimation/identification unit 24 for the CT volume estimate the position of predetermined anatomical characteristic parts from the ultrasonic volume data and the CT volume data, respectively, using well-known machine learning techniques, and identify the name of each part according to the estimation result.
  • A characteristic site is a medically defined organ, or a site within an organ, such as the umbilical portion of the portal vein of the liver, the inflow portion of the inferior vena cava, the gallbladder, or a bifurcation point of the hepatic portal vein or hepatic vein.
  • FIGS. 5A, 5B, and 5C respectively show the three-dimensional position and image features of the umbilical portion of the portal vein of the liver, the inflow portion of the inferior vena cava, and the gallbladder as characteristic parts in the ultrasonic volume data and the CT volume data.
  • In each of FIGS. 5A, 5B, and 5C, (a) shows the ultrasonic volume and (b) shows the CT volume, and the cube 50 shown in each figure indicates the position of the corresponding anatomical characteristic part. Details of the position estimation and name identification of these characteristic parts will be described later.
  • FIG. 6 shows the ultrasonic/CT corresponding characteristic part information 25: the names of the characteristic parts estimated and identified from the ultrasonic volume and the CT volume, together with their three-dimensional position information, that is, the coordinates in each volume. This ultrasonic characteristic part information and CT characteristic part information can be stored as a table in the RAM 3 or the storage device 4.
  • The CT volume coordinate transformation calculation (initial alignment) unit 26 receives the ultrasonic/CT corresponding characteristic part information 25 and, using the position information of the corresponding characteristic parts, calculates an alignment transformation matrix for initially aligning the CT volume to the ultrasonic volume. Details of the calculation of the alignment transformation matrix will be described later.
  • In step S208, the real-time 2D ultrasonic image acquisition unit 30 receives a 2D ultrasonic image acquired in real time from the ultrasonic image acquisition unit 28.
  • In step S209, the real-time 2D-CT image calculation unit 27 accepts, as in step S203, the real-time position information of the ultrasonic probe corresponding to the 2D ultrasonic image.
  • In step S210, the real-time 2D-CT image calculation unit 27 calculates, in real time from the CT volume, a 2D-CT cross-sectional image corresponding to the 2D ultrasonic image acquired in real time, using the position information of the ultrasonic probe and the coordinate conversion matrix of the CT volume, as sketched below.
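  • The sketch below assumes the CT volume is indexed (z, y, x) and that a single 4x4 matrix (T_plane_to_ct, an illustrative name) already combines the probe pose with the CT registration transform:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def ct_slice_for_probe(ct, T_plane_to_ct, h, w, spacing=1.0):
    """Resample the CT plane that matches the current ultrasound scan plane."""
    ys, xs = np.mgrid[0:h, 0:w].astype(float) * spacing
    # Homogeneous in-plane pixel coordinates (the scan plane is z = 0).
    pts = np.stack([xs.ravel(), ys.ravel(),
                    np.zeros(h * w), np.ones(h * w)])
    zyx = (T_plane_to_ct @ pts)[:3][::-1]  # reorder (x, y, z) -> (z, y, x)
    # Trilinear interpolation of CT intensities at the plane's sample points.
    vals = map_coordinates(ct, zyx, order=1, mode='constant', cval=0.0)
    return vals.reshape(h, w)
```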
  • In step S211, the image display unit 31 receives the 2D ultrasonic image, the 2D-CT cross-sectional image, and the name/position information of the ultrasonic/CT corresponding characteristic parts.
  • As shown in FIG. 15A, the image display unit 31 displays the 2D-CT and 2D ultrasonic cross-sectional images on separate screens 16A and 16B of the display 16, and displays on each screen the names 17A and 17B of the identified characteristic parts and the positional relationship between the characteristic parts and the 2D images.
  • To do so, the image display unit 31 projects each characteristic part position in the three-dimensional coordinate system onto the currently displayed 2D image and displays markers 18A and 18B at the projected locations.
  • Since the image display unit 31 can display the names and positional relationships of the characteristic parts estimated from the ultrasonic volume data (the first volume data) and the CT volume data (the second volume data) on each of the images generated from the first and second volume data after alignment with the ultrasonic image, accurate surgical navigation for the user can be realized.
  • The image display unit 31 can also scale the size of the markers 18A and 18B in proportion to the projection distance from the characteristic part to the 2D image, as in the sketch below.
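  • In this sketch the transform name and the scaling constants are illustrative assumptions:

```python
import numpy as np

def project_marker(p_world, T_world_to_plane, base_radius=4.0, gain=0.5):
    """Project a 3D characteristic-part position onto the displayed 2D plane.

    Returns the in-plane position and a marker radius that grows in
    proportion to the out-of-plane distance, so parts lying far from the
    current scan plane are drawn with larger markers.
    """
    p = T_world_to_plane @ np.append(p_world, 1.0)  # into plane coordinates
    u, v, dist = p[0], p[1], abs(p[2])              # |z| = distance to plane
    return (u, v), base_radius + gain * dist
```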
  • The touch panel operation buttons 19 in FIGS. 15A and 15B are described in the second embodiment.
  • In step S211, instead of displaying the 2D ultrasonic image and the 2D-CT cross-sectional image side by side on the screens 16A and 16B, the image display unit 31 can also change the color of one of them, generate a transparent superimposed image, and display it on the display 16.
  • In this case, the image display unit 31 displays the characteristic part names 17A and 17B and the markers 18A and 18B on the superimposed 2D image. Even in this case, the image display unit 31 can display the marker size in proportion to the projection distance from the characteristic part to the 2D image.
  • The processing of the characteristic part position estimation/identification unit 23 for the ultrasonic volume and the characteristic part position estimation/identification unit 24 for the CT volume will be described using the flowchart shown in FIG. 7.
  • The two units basically comprise the same machine learning and identification means; only the volume data to be processed, and the structure and parameter settings of the discriminator used for machine learning, differ.
  • In step S401, the characteristic part position estimation/identification units 23 and 24 accept the volume data.
  • In step S402, the characteristic part position estimation/identification units 23 and 24 perform candidate position estimation and name identification for the characteristic parts: they reduce the size of the volume data and search for characteristic part candidates at coarse resolution using machine learning.
  • As the method of position estimation and name identification of a characteristic part, for example, the Hough Forest method, a known machine learning method, can be used.
  • In step S403, each unit acquires a local region (local volume data) surrounding each searched characteristic part candidate from the normal-size volume data.
  • In step S404, each unit searches for and identifies the characteristic part in detail within the local region surrounding the candidate. For this as well the Hough Forest method can be used, or a 3D CNN (convolutional neural network) can be used instead.
  • In step S405, each unit excludes a characteristic part as an erroneous identification when the identification score obtained in step S404 is equal to or less than a predetermined threshold.
  • In step S406, each characteristic part position estimation/identification unit 23, 24 outputs the position/name information of the identified characteristic parts. A coarse-to-fine skeleton of this search is sketched below.
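  • In the skeleton below, the learned discriminator itself (the Hough Forest or 3D CNN) is abstracted behind a pluggable score_fn, and the shrink factor and patch size are illustrative:

```python
import numpy as np
from scipy.ndimage import zoom

def detect_landmark(volume, score_fn, shrink=4, patch=32):
    """Coarse-to-fine landmark search over a 3D volume.

    score_fn(subvolume) -> float stands in for the learned discriminator;
    higher means more landmark-like.
    """
    # Coarse pass: scan a downsampled copy of the volume block by block.
    coarse = zoom(volume, 1.0 / shrink, order=1)
    step = max(1, patch // (2 * shrink))
    best, best_score = np.zeros(3, dtype=int), -np.inf
    for idx in np.ndindex(*(s // step for s in coarse.shape)):
        c = np.array(idx) * step
        s = score_fn(coarse[tuple(slice(v, v + step) for v in c)])
        if s > best_score:
            best, best_score = c * shrink, s  # candidate in full-res coords
    # Fine pass: re-score full-resolution patches around the candidate.
    centre, fine_score = best, -np.inf
    for off in np.ndindex(3, 3, 3):
        c = best + (np.array(off) - 1) * (patch // 2)
        c = np.clip(c, 0, np.array(volume.shape) - patch)
        s = score_fn(volume[tuple(slice(v, v + patch) for v in c)])
        if s > fine_score:
            centre, fine_score = c + patch // 2, s
    return centre, fine_score  # reject as misidentified if below a threshold
```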
  • The CT volume coordinate conversion calculation unit 26 is also realized by the CPU 1 executing a program; its processing is shown in the flowchart of FIG. 8.
  • In step S301, the CT volume coordinate conversion calculation unit 26 receives the ultrasonic/CT corresponding characteristic part information 25 from the characteristic part position estimation/identification units 23 and 24.
  • In step S302, the unit determines whether the number of corresponding parts exceeds a predetermined number N. Here, N is set to 3, the minimum required to calculate the coordinate conversion parameters.
  • In step S303, when the number of corresponding parts exceeds N, the unit calculates the coordinate conversion parameters between the corresponding parts using the position information of the corresponding characteristic parts. As the geometric transformation parameters, those minimizing the sum of squared geometric distances between the coordinates of corresponding characteristic parts are obtained; the known SVD (singular value decomposition) method can be applied, as in the sketch below.
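  • The sketch implements the standard Kabsch/SVD least-squares fit for three or more corresponding landmarks:

```python
import numpy as np

def rigid_from_landmarks(P, Q):
    """Rigid transform (R, t) mapping landmark set P (CT) onto Q (ultrasound).

    P, Q: (n, 3) corresponding points, n >= 3.
    Minimizes sum ||R p_i + t - q_i||^2 via SVD of the cross-covariance.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```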
  • In step S304, when the number of corresponding parts does not reach N, the CT volume coordinate conversion calculation unit 26 calculates the displacements between the corresponding parts and uses their average value as the initial value of the translation.
  • In step S305, the CT volume coordinate transformation calculation unit 26 cuts out a three-dimensional local region centered on a corresponding characteristic part from each of the ultrasonic volume and the CT volume.
  • In step S306, the unit calculates a similarity evaluation function between the three-dimensional local regions of the CT volume and the ultrasonic volume centered on the corresponding characteristic part. For example, a characteristic region such as a blood vessel region may be extracted from each three-dimensional local region and the degree of overlap of the blood vessel regions calculated, or the image similarity between the CT and ultrasonic three-dimensional local regions may be calculated; a known mutual information measure can be used as the image similarity.
  • In step S307, a convergence calculation is performed to obtain the geometric transformation information (translation and rotation angles) that maximizes, at least locally, the similarity between the CT and ultrasonic three-dimensional local regions.
  • In step S308, if the similarity has not converged, the geometric transformation information (translation and rotation angles) is updated to obtain a higher similarity, and steps S306 to S308 are performed again using the updated geometric transformation information.
  • In step S309, the CT volume coordinate transformation calculation unit 26 outputs the obtained geometric transformation information, which completes the processing of the CT volume coordinate transformation calculation unit 26 of FIG. 3. A sketch of the mutual information similarity follows below.
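  • The bin count in this histogram-based sketch is an illustrative choice:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two co-sampled intensity arrays.

    a, b: same-shape arrays, e.g. the CT and ultrasonic local regions
    resampled onto a common grid. Higher values mean greater similarity.
    """
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()              # joint probability
    px = pxy.sum(axis=1, keepdims=True)  # marginal over a
    py = pxy.sum(axis=0, keepdims=True)  # marginal over b
    nz = pxy > 0                         # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```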
  • As described above, in the present embodiment, predetermined characteristic parts are estimated and identified from each of the ultrasonic volume and the CT volume, and the two volumes are aligned using the obtained position information of the corresponding characteristic parts. As a result, the ultrasonic image with ultrasonic probe position information acquired in real time, the corresponding CT cross-sectional image, and the names and positional relationships of the characteristic parts are displayed at the same time, so that automatic and accurate surgical navigation can be realized.
  • In the configuration described above, the image processing apparatus 108 is provided inside the ultrasonic imaging apparatus 100; however, the image processing apparatus 108 shown in FIGS. 1 and 2 can also be an apparatus separate from the ultrasonic imaging apparatus 100.
  • In that case, the image processing apparatus 108 and the ultrasonic imaging apparatus 100 are connected via a signal line or a network.
  • For example, the image processing apparatus 108 may be mounted on a general computer or on image processing hardware such as a workstation and connected to the ultrasonic imaging apparatus 100 via a network. The image processing apparatus 108 then receives the ultrasonic volume data and the CT volume data to be aligned from a client terminal via the network, performs the alignment processing, and transmits the aligned CT volume data back to the ultrasonic imaging apparatus acting as the client terminal.
  • Since the ultrasonic imaging apparatus 100 can then perform the alignment processing using the computing capability of the image processing apparatus 108 connected via the network, a small and simple ultrasonic imaging apparatus capable of displaying an ultrasonic image and a CT image of the same cross section on its display in real time can be provided.
  • As described in detail above, according to the present embodiment, position estimation and identification of predetermined characteristic parts are performed from each of the ultrasonic volume data and the volume data of another imaging apparatus, and the part names can be displayed on each cross-sectional image. Furthermore, automatic and accurate alignment between the ultrasonic volume and the volume of the other apparatus can be provided based on the corresponding positional relationships of the obtained characteristic parts.
  • In the first embodiment, position estimation and name identification of predetermined anatomical characteristic parts were performed from the ultrasonic volume data and the volume data of a medical image diagnostic modality, for example CT volume data, and the geometric transformation matrix for alignment between the ultrasonic volume and the CT volume was calculated using the obtained position information. The second embodiment additionally adds or corrects corresponding parts, or corrects the calculation of the geometric transformation for alignment, based on user instructions.
  • In the second embodiment, the same components and processes as in the first embodiment are denoted by the same reference numerals, and their description is omitted.
  • FIG. 9 is a functional block diagram illustrating functions of the image processing apparatus 108 according to the second embodiment.
  • As in Embodiment 1, the image processing apparatus 108 includes an ultrasonic image acquisition unit 28, an ultrasonic probe position information acquisition unit 29, an ultrasonic volume data generation unit 21, a characteristic part position estimation/identification unit 23 for the ultrasonic volume data, a CT volume data reception unit 22, and a characteristic part position estimation/identification unit 24 for the CT volume data.
  • The image processing apparatus 108 further includes the ultrasonic/CT corresponding characteristic part information 25, a CT volume coordinate conversion calculation unit 26, an initially aligned CT volume generation unit 32, an image display unit 31, and a characteristic part identification and alignment result correction unit 33.
  • The characteristic part identification and alignment result correction unit 33 displays on the display 16, as an inquiry into whether the characteristic part identification and alignment are judged successful, touch panel operation buttons 19 such as volume addition, manual correction, and detailed alignment, and accepts the user's judgment via button selection means such as the input device 14. When the user inputs via the input device 14 that the characteristic part identification and alignment are successful, the alignment processing is completed.
  • Otherwise, the characteristic part identification and alignment result correction unit 33 executes correction processing for the characteristic part identification and alignment. That is, the image processing apparatus 108 of the present embodiment includes the correction unit 33 that corrects the characteristic parts estimated and identified from the ultrasonic volume data and the CT volume data by the characteristic part estimation/identification units 23 and 24.
  • In step S800, the characteristic part identification and alignment result correction unit 33 acquires the position/name information of the ultrasonic/CT corresponding characteristic parts.
  • In step S801, the unit displays on the display 16 an inquiry asking whether the user wishes to additionally acquire an ultrasonic volume, and accepts the user's judgment via the input device 14 or the touch panel operation buttons 19.
  • If the user chooses to add a volume, the unit executes the processing of steps S802 to S803; otherwise, it executes the processing of step S804.
  • In step S802, a different region of the organ is imaged with the ultrasonic probe 7 in addition to the ultrasonic volume of the first volume data, and an additional ultrasonic volume is acquired.
  • In step S803, the unit executes, on the newly added ultrasonic volume, the characteristic part position estimation/identification processing described with reference to steps S401 to S406 of FIG. 7.
  • In step S804, the user manually corrects the positions and names of the obtained ultrasonic/CT corresponding characteristic parts via the input device 14.
  • The unit then accepts the position/name information of the corrected ultrasonic/CT corresponding characteristic parts obtained in step S803 or step S804.
  • In step S805, the unit proceeds to the processing of steps S301 to S309 of FIG. 8 using the added or corrected position/name information and recalculates the CT volume coordinate conversion information. That is, the image processing apparatus of the present embodiment estimates and identifies predetermined characteristic parts of the subject from volume data captured with the ultrasonic probe 7 in addition to the ultrasonic volume data that is the first volume data, and re-outputs the coordinate conversion information for the position of the second volume data using the position information of the characteristic parts of the CT volume data, which is the second volume data, and the position information of the characteristic parts of the additional volume data.
  • In step S806, the unit displays the corrected corresponding characteristic parts and the alignment result on the display 16.
  • If further correction is required, the characteristic part identification and alignment result correction unit 33 performs steps S801 to S806 again; otherwise, the characteristic part identification and alignment result correction processing is completed.
  • As described above, according to the present embodiment, an ultrasonic imaging apparatus can be provided that executes the addition or correction of corresponding parts and the recalculation of the coordinate conversion information for alignment based on user instructions.
  • The third embodiment is an embodiment of an ultrasonic imaging apparatus that can improve alignment accuracy by performing more detailed alignment on the ultrasonic volume data and the CT volume data that have already been aligned as in the first or second embodiment.
  • FIG. 11 is a functional block diagram illustrating functions of the image processing apparatus 108 according to the third embodiment.
  • As in the preceding embodiments, the image processing apparatus 108 includes an ultrasonic image acquisition unit 28, an ultrasonic probe position information acquisition unit 29, an ultrasonic volume data generation unit 21, a characteristic part position estimation/identification unit 23 for the ultrasonic volume data, a CT volume data reception unit 22, and a characteristic part position estimation/identification unit 24 for the CT volume data.
  • The image processing apparatus 108 further includes the ultrasonic/CT corresponding characteristic part information 25, a CT volume coordinate conversion calculation unit 26 for initial alignment, a real-time 2D-CT image calculation unit 27, a real-time 2D ultrasonic image acquisition unit 30, an image display unit 31, and a detailed alignment unit 34.
  • In step S501 of FIG. 12, the detailed alignment unit 34 receives the initially aligned CT volume obtained with the configuration of the first or second embodiment.
  • In step S502, the detailed alignment unit 34 extracts blood vessel data from the initially aligned CT volume. The extracted blood vessel data is the three-dimensional coordinate data of the voxels in the segmented blood vessel region.
  • In step S503, the detailed alignment unit 34 receives the ultrasonic volume data.
  • In step S504, the detailed alignment unit 34 extracts blood vessel data from the ultrasonic volume data.
  • In step S505, the detailed alignment unit 34 performs alignment between the ultrasonic and CT blood vessel data. As a known automatic alignment method, the known ICP (Iterative Closest Point) method can be used.
  • In ICP, the point cloud of the CT blood vessel data is geometrically transformed, that is, translated and rotated, the distances to the corresponding points in the point cloud of the ultrasonic blood vessel data are obtained, and the calculation is repeated so as to minimize those distances, thereby aligning the two, as in the sketch below.
  • The detailed alignment unit 34 then outputs the alignment result.
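  • This point-to-point ICP sketch reuses the rigid_from_landmarks solver sketched earlier; the iteration count and tolerance are illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(ct_pts, us_pts, iters=50, tol=1e-6):
    """Rigidly move the CT vessel point cloud onto the ultrasonic one.

    ct_pts, us_pts: (n, 3) and (m, 3) vessel voxel coordinates.
    """
    tree = cKDTree(us_pts)                 # fast nearest-neighbor queries
    src, prev_err = ct_pts.copy(), np.inf
    for _ in range(iters):
        dist, idx = tree.query(src)        # current point correspondences
        R, t = rigid_from_landmarks(src, us_pts[idx])
        src = src @ R.T + t                # apply the incremental transform
        err = dist.mean()
        if abs(prev_err - err) < tol:      # stop once distances settle
            break
        prev_err = err
    return src, err
```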
  • Alternatively, the image-based rigid alignment method shown in FIG. 13 can be used. A case where the detailed alignment unit 34 performs image-based rigid alignment between the ultrasonic volume data and the CT volume data is described next.
  • First, the detailed alignment unit 34 receives the initially aligned CT volume and the ultrasonic volume, respectively, as the targets of image-based rigid alignment.
  • In step S603, the detailed alignment unit 34 extracts image sampling points at coordinates belonging to the ultrasonic volume.
  • All pixels in the image area may be extracted as sampling points; however, to speed up the alignment processing, a grid may be placed on the image and only the pixels at the grid nodes used as sampling points, or a predetermined number of coordinates may be selected at random and the luminance values at the obtained coordinates used as the luminance values of the sampling points.
  • In step S604, the detailed alignment unit 34 geometrically transforms the coordinates of the sampling points extracted from the ultrasonic volume into the coordinates of the corresponding points in the CT volume.
  • In step S605, the detailed alignment unit 34 acquires the luminance data at the sampling points of the ultrasonic volume and at the corresponding sampling points of the CT volume, and calculates the image similarity between the ultrasonic volume and the CT volume by applying a predetermined evaluation function to these luminance data. A known mutual information measure can be used as the image similarity.
  • In steps S606 and S607, the detailed alignment unit 34 performs a convergence calculation for the geometric transformation information that maximizes, at least locally, the image similarity between the ultrasonic volume and the CT volume, updating the geometric transformation information at each iteration.
  • In step S608, the detailed alignment unit 34 outputs the alignment result. A sketch of this sampled rigid optimization follows below.
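  • The sketch reuses mutual_information from above; the random sampling, Euler-angle parameterization, and Powell's derivative-free optimizer are illustrative choices rather than the patent's prescribed ones:

```python
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import minimize

def rigid_register(us_vol, ct_vol, n_samples=20000, seed=0):
    """Optimize 3 translations + 3 rotations to maximize mutual information
    between sampled ultrasonic voxels and the CT voxels they map to."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(0, np.array(us_vol.shape) - 1, (n_samples, 3)).T
    us_vals = map_coordinates(us_vol, pts, order=1)     # fixed-image samples
    centre = (np.array(us_vol.shape)[:, None] - 1) / 2.0

    def rot(rx, ry, rz):  # rotation composed about the volume centre
        Rx = np.array([[1, 0, 0],
                       [0, np.cos(rx), -np.sin(rx)],
                       [0, np.sin(rx), np.cos(rx)]])
        Ry = np.array([[np.cos(ry), 0, np.sin(ry)],
                       [0, 1, 0],
                       [-np.sin(ry), 0, np.cos(ry)]])
        Rz = np.array([[np.cos(rz), -np.sin(rz), 0],
                       [np.sin(rz), np.cos(rz), 0],
                       [0, 0, 1]])
        return Rz @ Ry @ Rx

    def neg_mi(theta):
        t, r = theta[:3], theta[3:]
        mapped = rot(*r) @ (pts - centre) + centre + t[:, None]
        ct_vals = map_coordinates(ct_vol, mapped, order=1)
        return -mutual_information(us_vals, ct_vals)

    res = minimize(neg_mi, np.zeros(6), method='Powell')  # derivative-free
    return res.x, -res.fun  # transform parameters and achieved similarity
```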
  • Alignment accuracy can be further improved by performing non-rigid alignment between the ultrasonic volume data and the CT volume data aligned in Embodiment 1 or 2, or between the ultrasonic volume data and the CT volume data rigidly aligned as described above.
  • In the non-rigid alignment, a control grid is installed in the CT volume, which is deformed with the ultrasonic volume as the reference; the CT image is deformed by moving the control points of the control grid. The image similarity between the deformed CT image and the reference ultrasonic image is obtained, an optimization calculation based on the obtained image similarity is performed, and the movement amounts of the control points of the control grid, that is, the deformation amounts, are obtained. The movement amount of a pixel lying between control points is calculated by interpolating the movement amounts of the surrounding control points. In this way, the coordinates of the CT image are transformed, and alignment is performed so as to locally deform the image.
  • In step S701 of FIG. 14, the detailed alignment unit 34 receives the ultrasonic volume and the aligned CT volume.
  • In step S702, the detailed alignment unit 34 arranges grid-like control points in the aligned CT volume.
  • In step S703, the detailed alignment unit 34 performs the same processing as in step S603 and acquires the image sampling points of the ultrasonic volume.
  • In step S704, the detailed alignment unit 34 calculates the coordinates of the image data in the CT volume corresponding to the coordinates of the sampling points. The coordinates of each corresponding sampling point in the CT volume are obtained by interpolation based on the positions of the surrounding control points, using, for example, a known B-spline function. The detailed alignment unit 34 then calculates the luminance value at each corresponding sampling point of the CT volume (the sampling point corresponding to each sampling point of the ultrasonic volume) by, for example, linear interpolation. In this way, the coordinates (sampling points) of the CT volume that change with the movement of the control points, and the luminance values at those coordinates, are obtained; that is, the CT volume is deformed as the control points move.
  • In step S705, the detailed alignment unit 34 calculates the image similarity between the ultrasonic volume and the CT volume by applying a predetermined evaluation function to the luminance data at the sampling points of the ultrasonic volume and the luminance data generated in step S704 at the corresponding sampling points of the geometrically transformed CT volume. A known mutual information measure can be used, as in the rigid alignment.
  • In step S706, the detailed alignment unit 34 performs a convergence calculation to obtain the movement amount of each control point such that the image similarity between the ultrasonic volume and the CT volume is maximized, at least locally.
  • In step S707, when the image similarity has not converged in step S706, the detailed alignment unit 34 updates the control point movement amounts to obtain a higher image similarity, and steps S704 to S706 are performed again using the updated movement amounts.
  • In step S708, based on the obtained control point movement amounts, the detailed alignment unit 34 calculates the coordinates of every pixel of the CT volume by the same interpolation calculation as in step S704, calculates the luminance at the obtained coordinates, and generates and outputs an aligned CT volume. This completes the non-rigid alignment processing; a free-form deformation sketch follows below.
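  • Here a coarse grid of control-point displacements is interpolated to a dense per-voxel field and used to warp the CT volume; linear interpolation stands in for the cubic B-spline weights named in the text, and the helper assumes each volume axis is an integer multiple of the corresponding grid axis:

```python
import numpy as np
from scipy.ndimage import map_coordinates, zoom

def ffd_warp(ct_vol, disp_grid):
    """Warp a CT volume with a coarse control grid of displacements.

    disp_grid: (3, gx, gy, gz) displacement of each control point, in voxels.
    """
    shape = np.array(ct_vol.shape, dtype=float)
    factors = shape / np.array(disp_grid.shape[1:])
    # Interpolate control-point motion to one displacement vector per voxel.
    dense = np.stack([zoom(disp_grid[a], factors, order=1) for a in range(3)])
    # Each output voxel samples the CT at its own displaced coordinate.
    grid = np.indices(ct_vol.shape).astype(float)
    return map_coordinates(ct_vol, grid + dense, order=1)
```

  • In the registration loop, the convergence calculation of step S706 adjusts disp_grid so that the mutual information between the warped CT volume and the ultrasonic volume increases.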
  • Using the ultrasonic volume data and CT volume data aligned in this way, steps S208 to S211 of FIG. 4 are then performed.
  • Since the ultrasonic image and the CT image can thereby be matched with higher accuracy, the configuration of the present embodiment enables high-definition correspondence between the two images, and the positional relationship between the corresponding characteristic parts of the ultrasound and CT data can be confirmed with higher accuracy.
  • As described in detail above, the present invention can provide an ultrasonic imaging apparatus capable of simultaneously displaying an ultrasonic image captured in real time, an image of the same cross section obtained from volume data captured in advance by another imaging apparatus, and the positions/names of predetermined characteristic parts.
  • That is, position estimation and name identification of predetermined anatomical characteristic parts are performed from each of the ultrasonic volume data and the diagnostic volume data captured in advance by another medical imaging apparatus, and an ultrasonic imaging apparatus can be provided that displays a two-dimensional image of the ultrasonic scan plane together with the corresponding cross-sectional image of the diagnostic volume.
  • Furthermore, an ultrasonic imaging apparatus can be provided that performs automatic and accurate alignment using the position information of the obtained characteristic parts.
  • The present invention is not limited to an ultrasonic imaging apparatus; it can also be realized as an image processing apparatus connected to an ultrasonic imaging apparatus via a network, and as an image processing method thereof.
  • A part of the configuration of one embodiment can be replaced with the configuration of another embodiment, and the configuration of another embodiment can be added to the configuration of one embodiment. It is also possible to add, delete, or replace other configurations for a part of the configuration of each embodiment.

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention provides automatic and reliable surgical navigation by simultaneously displaying: the name and positional relationship of a distinctive site inside a subject; ultrasound images; and CT cross-sectional images corresponding to the ultrasound images. Ultrasound volume data for an ultrasound image and CT volume data acquired by a different image capturing device are received, and a plurality of prescribed distinctive sites of a subject are estimated and identified from the ultrasound volume data and the CT volume data by respective distinctive site position estimation and identification units (23, 24). An image display unit (31) displays the name of a distinctive site contained in the acquired ultrasound and CT distinctive site information (25), on the basis of its positional information, on a display section based on the respective volume data. A CT volume coordinate conversion calculation unit (26) calculates coordinate conversion information for alignment using the positional information of the distinctive sites corresponding to the ultrasound volume data and the CT volume data.
PCT/JP2016/071731 2015-09-02 2016-07-25 Ultrasonic imaging device, and image processing device and method Ceased WO2017038300A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2017537650A JP6490820B2 (ja) 2015-09-02 2016-07-25 Ultrasonic imaging apparatus, image processing apparatus, and method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015172627 2015-09-02
JP2015-172627 2015-09-02

Publications (1)

Publication Number Publication Date
WO2017038300A1 (fr) 2017-03-09

Family

ID=58187167

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/071731 Ceased WO2017038300A1 (fr) 2015-09-02 2016-07-25 Ultrasonic imaging device, and image processing device and method

Country Status (2)

Country Link
JP (1) JP6490820B2 (fr)
WO (1) WO2017038300A1 (fr)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006167267A * 2004-12-17 2006-06-29 Hitachi Medical Corp Ultrasonic diagnostic apparatus
JP2011167331A * 2010-02-18 2011-09-01 Ge Medical Systems Global Technology Co Llc Ultrasonic diagnostic apparatus
JP2012245230A * 2011-05-30 2012-12-13 Ge Medical Systems Global Technology Co Llc Ultrasonic diagnostic apparatus and control program therefor
JP2014113421A * 2012-12-12 2014-06-26 Toshiba Corp Ultrasonic diagnostic apparatus and image processing program
JP2015020036A * 2013-07-24 2015-02-02 ジーイー・メディカル・システムズ・グローバル・テクノロジー・カンパニー・エルエルシー Data analysis apparatus, program for data analysis apparatus, and ultrasonic diagnostic apparatus

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11701090B2 (en) 2017-08-16 2023-07-18 Mako Surgical Corp. Ultrasound bone registration with learning-based segmentation and sound speed calibration
US12220281B2 (en) 2017-08-16 2025-02-11 Mako Surgical Corp. Ultrasound bone registration with learning-based segmentation and sound speed calibration
WO2019035049A1 * 2017-08-16 2019-02-21 Mako Surgical Corp. Bone registration in ultrasound imaging with learning-based segmentation and sound speed calibration
WO2019130636A1 * 2017-12-27 2019-07-04 株式会社日立製作所 Ultrasonic imaging device, image processing device, and method therefor
JP2019115487A (ja) 2017-12-27 2019-07-18 株式会社日立製作所 Ultrasonic imaging apparatus, image processing apparatus, and method
CN112055870A (zh) 2018-03-02 2020-12-08 皇家飞利浦有限公司 Image registration qualification evaluation
JP7245256B2 (ja) 2018-03-02 2023-03-23 コーニンクレッカ フィリップス エヌ ヴェ Qualification of image registration
JP2021514767A (ja) 2018-03-02 2021-06-17 コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. Qualification of image registration
JP7171228B2 (ja) 2018-05-09 2022-11-15 キヤノンメディカルシステムズ株式会社 Ultrasonic diagnostic apparatus and medical information processing program
JP2019195447A (ja) 2018-05-09 2019-11-14 キヤノンメディカルシステムズ株式会社 Ultrasonic diagnostic apparatus and medical information processing program
JPWO2021117349A1 (fr) * 2019-12-10 2021-06-17
JP7209113B2 (ja) 2019-12-10 2023-01-19 富士フイルム株式会社 Information processing apparatus, information processing system, information processing method, and information processing program
CN113729781A (zh) 2020-05-29 2021-12-03 株式会社日立制作所 Ultrasonic imaging apparatus, treatment support system, and image display method
CN113729781B (zh) 2020-05-29 2024-03-19 富士胶片医疗健康株式会社 Ultrasonic imaging apparatus, treatment support system, and image display method
JP2022024965A (ja) 2020-07-28 2022-02-09 株式会社リコー Alignment apparatus, alignment system, alignment method, and program
JP7574642B2 (ja) 2020-07-28 2024-10-29 株式会社リコー Alignment apparatus, alignment system, alignment method, and program
JP2024540039A (ja) 2021-10-28 2024-10-31 ホウメディカ・オステオニクス・コーポレイション Mixed reality guidance for an ultrasound probe
JP7739615B2 (ja) 2021-10-28 2025-09-16 ホウメディカ・オステオニクス・コーポレイション Mixed reality guidance for an ultrasound probe
JP2025013277A (ja) 2023-07-13 2025-01-24 ▲か▼本(深▲せん▼)医療器械有限公司 Intelligent anesthesia system for transperineal prostate puncture based on multimodal medical images

Also Published As

Publication number Publication date
JPWO2017038300A1 (ja) 2018-04-26
JP6490820B2 (ja) 2019-03-27

Similar Documents

Publication Publication Date Title
JP6490820B2 (ja) Ultrasonic imaging apparatus, image processing apparatus, and method
EP2680778B1 (fr) System and method for automatic initialization and registration of a navigation system
CN107016717B (zh) System and method for perspective views of a patient
JP6383483B2 (ja) Ultrasonic imaging apparatus and image processing apparatus
JP5990834B2 (ja) Diagnostic image generation apparatus and diagnostic image generation method
KR20150027637A (ko) Method and apparatus for registering medical images
US20180360427A1 (en) Ultrasonic diagnostic apparatus and medical image processing apparatus
KR102273020B1 (ko) Medical image registration method and apparatus therefor
JP6644795B2 (ja) Ultrasound imaging apparatus and method for segmenting anatomical objects
JP6887942B2 (ja) Ultrasonic imaging apparatus, image processing apparatus, and method
CN108697410B (zh) Ultrasonic imaging apparatus, image processing apparatus, and method therefor
KR20200140683A (ko) Apparatus and method for aligning an ultrasound image with a three-dimensional medical image
US20190271771A1 (en) Segmented common anatomical structure based navigation in ultrasound imaging
US12239491B2 (en) System and method for real-time fusion of acoustic image with reference image
JP6204544B2 (ja) Diagnostic image generation apparatus
KR20150026354A (ko) Method and apparatus for registering medical images
JP7299100B2 (ja) Ultrasonic diagnostic apparatus and ultrasonic image processing method
CN116194045A (zh) Method for providing a secondary medical imaging source
JP6991354B2 (ja) Image data processing method, device, and system
JP7270331B2 (ja) Medical image diagnostic apparatus and image processing apparatus
JP6598565B2 (ja) Image processing apparatus, image processing method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16841332

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2017537650

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16841332

Country of ref document: EP

Kind code of ref document: A1