
WO2017180097A1 - Deformable registration of intraoperative and preoperative inputs using generative mixture models and biomechanical deformation - Google Patents

Deformable registration of intraoperative and preoperative inputs using generative mixture models and biomechanical deformation Download PDF

Info

Publication number
WO2017180097A1
WO2017180097A1 (PCT/US2016/027018)
Authority
WO
WIPO (PCT)
Prior art keywords
model
intraoperative
dimensional model
mesh
preoperative
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2016/027018
Other languages
English (en)
Inventor
Ali Kamen
Ankur KAPOOR
Stefan Kluckner
Thomas Pheiffer
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Siemens AG
Siemens Corp
Original Assignee
Siemens AG
Siemens Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens AG, Siemens Corp filed Critical Siemens AG
Priority to PCT/US2016/027018 priority Critical patent/WO2017180097A1/fr
Publication of WO2017180097A1 publication Critical patent/WO2017180097A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/344Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10068Endoscopic image
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images

Definitions

  • the following disclosure describes the present invention according to several embodiments directed at methods, systems, and apparatuses for registering intra and preoperative inputs using generative mixture models and biomechanical deformation techniques.
  • the technology described herein is especially applicable to, but not limited to, minimally invasive surgical techniques.
  • a laparoscopic camera is used to provide the surgeon with a visualization of the anatomical area of interest. For example, when removing a tumor, the surgeon's goal is to safely remove the tumor without damaging critical structures such as vessels.
  • the laparoscopic camera can only visualize the surface of the tissue. This makes localizing sub-surface structures, such as vessels and tumors, challenging. Therefore, intraoperative 3D images are introduced to provide updated information. While the intraoperative images typically have limited image information due to the constraints imposed in operating rooms, the preoperative images can provide supplementary anatomical and functional details, and carry accurate segmentation of organs, vessels, and tumors. To bridge the gap between surgical plans and laparoscopic images, registration of pre- and intraoperative 3D images is needed. However, this registration is challenging due to liquid injection or gas insufflation, breathing motion, and other surgical preparation, which result in large organ deformation and sliding between viscera and abdominal wall. Therefore, a standard non-rigid registration method cannot be directly applied, and enhanced registration techniques which account for deformation and sliding are needed.
  • Embodiments of the present invention address and overcome one or more of the above shortcomings and drawbacks, by providing methods, systems, articles of manufacture, and apparatuses for performing deformable registration of intra and preoperative inputs using generative mixture models and biomechanical deformation. More specifically, the techniques described perform deformable registration between endoscopic or laparoscopic video with preoperative or intraoperative 3D models. Gaussian mixture models are used to create forces for a biomechanical model which is applied to deform the 3D image volume data such that it matches a point cloud derived from the intraoperative video data.
  • the techniques described herein do not require strict point-to-point or surface-to-surface feature correspondences, but can also be used in the presence of known correspondences.
  • the disclosed technology may be applied to, for example, endoscopic-to-tomographic registrations.
  • a computer-implemented method of performing registration of preoperative and intraoperative image data includes receiving a first three-dimensional model (e.g., a mesh) of an anatomical area of interest derived from one or more image volumes acquired in a preoperative setting and acquiring images of the anatomical area of interest in an operative setting using an intraoperative image acquisition device.
  • a second three-dimensional model (e.g., a point cloud) of the anatomical area of interest is then generated based on the acquired intraoperative images.
  • the first three-dimensional model is aligned with the second three-dimensional model using a rigid registration process.
  • an iterative deformable registration process is performed to further align the two three-dimensional models.
  • This iterative deformable registration process may include, for example, computing a generative mixture model representative of the second three-dimensional model, using the generative mixture model to derive physical force vectors, and biomechanically deforming the first three-dimensional model toward the second three-dimensional model using the physical force vectors.
  • a fused image display is presented which overlays the first three-dimensional model on live images acquired from the intraoperative image acquisition device in the operative setting.
  • dampening coefficients are applied during the iterative deformable registration process to velocities of points in the first three-dimensional model while deforming the first three-dimensional model toward the second three-dimensional model.
  • Various techniques may be used for selecting the dampening coefficients. For example, in one embodiment, for each respective point in the first three-dimensional model, dampening coefficients are selected which are proportional to the generative mixture model applied at that respective point.
  • one or more feature correspondences are identified between the first three-dimensional model and the second three-dimensional model and, during the iterative deformable registration process, the generative mixture model is weighted to favor the one or more feature correspondences.
  • a Von Mises-Fisher (VMF) model representative of the second three-dimensional model is computed and used to derive physical wrench vectors. These physical wrench vectors may then be applied to biomechanically deform the first three-dimensional model toward the second three-dimensional model.
  • damping wrench coefficients are applied to velocities of points in the first three-dimensional model while deforming the first three-dimensional model toward the second three-dimensional model.
  • a second computer-implemented method of performing registration of preoperative and intraoperative image data includes receiving a mesh prior to a surgical procedure.
  • This mesh includes mesh points which are representative of an anatomical area of interest.
  • intraoperative data representative of the anatomical area of interest is generated using a live image sequence acquired using an intraoperative imaging device.
  • this intraoperative data is a stitched three-dimensional point cloud.
  • the intraoperative data comprises individual depth data extracted from the live intraoperative image sequence(s).
  • a Gaussian mixture model is constructed based on the intraoperative data and, for each mesh point, a physical force vector is generated pointing to the intraoperative data using a corresponding gradient value in the Gaussian mixture model.
  • this Gaussian mixture model is weighted based on feature correspondences between the mesh and the intraoperative data.
  • the mesh is biomechanically deformed by modifying each mesh point based on its corresponding physical force vector. Once deformed, a fused image display which overlays the mesh on the live image sequence(s) may be presented.
  • a damping force vector is generated for each mesh point based on its corresponding gradient value in the Gaussian mixture model. Then, each respective damping force vector is applied to its corresponding mesh point while biomechanically deforming the mesh.
  • a VMF model is constructed based on the intraoperative data. Then, for each mesh point, a physical wrench vector pointing to the intraoperative data is generated using a corresponding VMF gradient value in the VMF model. The mesh may then be biomechanically deformed by modifying each mesh point based on its corresponding physical wrench vector. Additionally, the mesh may be biomechanically deformed using dampening wrench vectors which are computed using a corresponding VMF gradient value associated with the VMF model.
  • a system for performing registration of preoperative and intraoperative image data includes a database, an intraoperative image acquisition device and an imaging computer.
  • the database is configured to store a preoperative model representative of an anatomical area of interest.
  • the intraoperative image acquisition device is configured to acquire a live image sequence(s) of the anatomical area of interest during a surgical procedure.
  • the imaging computer is configured to generate an intraoperative model representative of the anatomical area of interest using the live image sequence(s) and construct a generative mixture model based on the intraoperative model.
  • the imaging computer generates a physical force vector pointing to the intraoperative model using a corresponding gradient value associated with the generative mixture model.
  • the imaging computer biomechanically deforms the mesh by modifying each point of the preoperative model based on its corresponding physical force vector.
  • the system also includes a display which is configured to present a fused image display which overlays the preoperative model on the live intraoperative image sequence(s).
  • the image acquisition device comprises an endoscope and the live intraoperative image sequences comprise stereo images.
  • the image acquisition device comprises an endoscope and a patterned light source, and the live intraoperative image sequences comprise images from one or more view directions.
  • the image acquisition device is inserted inside a body cavity and acquires a three dimensional representation of the cavity structure using reflection of acoustic waves.
  • FIG. 1 shows a computer-assisted surgical system, used in some embodiments of the present invention
  • FIG. 2 provides a high-level overview of a method for performing deformable registration, according to some embodiments of the present invention
  • FIG. 3 provides an illustration of point cloud to mesh closest point distance error for each iteration of a biomechanical model, with and without damping forces, as may be applied in some embodiments;
  • FIG. 4 illustrates an exemplary computing environment within which embodiments of the invention may be implemented.
  • the following disclosure describes the present invention according to several embodiments directed at methods, systems, and apparatuses for deformable registration of intra and preoperative inputs using generative mixture models and biomechanical deformation.
  • One application of the disclosed technology is to support the fusion of preoperative image data to intraoperatively acquired video images.
  • the preoperative data typically comprises high resolution computed tomography (CT) or magnetic resonance (MR) images. These images are used to construct a digital patient-specific organ represented by points and possibly a topological structure such as a mesh.
  • the intraoperative video data typically comprises optical images from an endoscope, laparoscope, surgical microscope, or similar device which contains depth information to give a geometric point cloud representation of the organ surface during the intervention.
  • the techniques described herein are based on a surface- or point-based registration with a generative mixture model, such as a Gaussian mixture model (GMM), to provide weak correspondences between the mesh and the point cloud, and a biomechanical regularization to capture the non-rigid component of the registration.
  • the various methods, systems, and apparatuses described herein are especially applicable to, but not limited to, minimally invasive surgical techniques.
  • FIG. 1 shows a computer-assisted surgical system 100, used in some embodiments of the present invention.
  • the system 100 includes components which may be categorized generally as being associated with a preoperative site 105 or an intraoperative site 110.
  • the various components located at each site 105, 110 may be operably connected with a network 115.
  • the components may be located at different areas of a facility, or even at different facilities.
  • the preoperative site 105 and the intraoperative site 110 are co-located.
  • the network 115 may be absent and the components may be directly connected.
  • a small scale network (e.g., a local area network) may be used to connect the components.
  • an imaging system 105A performs a scan on a subject 110A and gathers image volumes of an anatomical area of interest using any of a variety of imaging modalities including, for example, tomographic modalities, such as computed tomography (CT), magnetic resonance imaging (MRI), single-photon emission computed tomography (SPECT), and positron emission tomography (PET).
  • a polygonal or polyhedral mesh is generated using one or more techniques generally known in the art. This mesh comprises a plurality of vertices which approximate the geometric domain of an object in the anatomical area of interest. Once the mesh is generated by the imaging system 105A, it is transferred (e.g., via the network 115) to the database 110B.
  • the image volumes are transferred to the database 110B and a computer at the intraoperative site 110 (e.g., imaging computer 110F) generates this mesh.
  • a laparoscope 110D is used during surgery to acquire live video sequences of structures within the abdomen and pelvis of the subject 110A for presentation on a display 110E.
  • a small incision is made in a patient's abdominal wall allowing the laparoscope to be inserted.
  • various types of laparoscopes may be used including, for example, telescopic rod lens systems (usually connected to a video camera) and digital systems where a miniature digital video camera is placed at the end of the laparoscope.
  • laparoscopes may be configured to capture stereo images using either a two-lens optical system or a single optical channel.
  • a tracking system 110C provides tracking data to the imaging computer 110F for use in registration of the preoperative planning data (received from imaging system 105A) with data gathered by laparoscope 110D.
  • an optical tracking system 110C is depicted; in other embodiments, an electromagnetic (EM) tracking system may be used.
  • while FIG. 1 only illustrates a single imaging computer 110F, in other embodiments, multiple imaging computers may be used.
  • the one or more imaging computers provide functionality for viewing, manipulating, communicating and storing medical images on computer readable media. Example implementations of computers that may be used as the imaging computer 110F are described below with reference to FIG. 4.
  • the system further includes a gas insufflation device (not shown in FIG. 1) which may be used to expand the anatomical area of interest (e.g., abdomen) to provide additional workroom or reduce obstruction during surgery.
  • This insufflation device may be configured to provide pressure measurement values to the imaging computer 110F for display or for use in other applications such as the modeling techniques described herein.
  • devices such as liquid injection systems may be used to create and measure pressure during surgery as an alternative to the aforementioned gas insufflation device.
  • the imaging computer 110F retrieves the preoperative mesh from the database 110B (or generates the mesh based on a stored image volume).
  • the imaging computer 110F then performs a deformable registration of the mesh to the intraoperative video sequences acquired with the laparoscope 110D.
  • the process of performing this deformable registration is described in further detail below with reference to FIG. 2.
  • the mesh is biomechanically deformed to match an intraoperative point cloud representative of the intraoperative video sequences. This deformation is performed using forces computed from a probabilistic model constructed on the point cloud.
  • Once the mesh has been deformed, it may be presented on the display 110E overlaying the intraoperative video sequences. Although a single display 110E is shown in the embodiment illustrated in FIG. 1, multiple displays may be used, for example, to display different perspectives of the anatomical area of interest (e.g., based on the preoperative data and/or the intraoperative data), indications of sensitive tissue areas, or messages indicating that a new intraoperative scan should be performed to update the intraoperative planning data.
  • FIG. 2 provides a high-level overview of a method 200 for performing deformable registration, according to some embodiments of the present invention.
  • Registration of endoscopic/laparoscopic 3D video data to 3D image volumes is a challenging task due to intraoperative organ movements which occur with phenomena like breathing or surgical manipulation, such that correspondence between features in the video and features in the image volumes can be difficult to achieve.
  • the goal of fusing these images together can be cast as two steps: 1) an initial rigid alignment, 2) and a non-rigid alignment.
  • the method 200 shown in FIG. 2 and discussed in further detail below primarily addresses the latter non- rigid alignment.
  • a core concept disclosed herein is to establish fuzzy correspondences between geometry from the video data and geometry from the 3D image volumes and then force a tissue model created from the 3D images to match the intraoperative data.
  • the input images comprise preoperative image volumes 205 and intraoperative video sequences 210.
  • the preoperative image volumes 205 comprise 3D volumes captured by an image scanner (e.g., a CT, MR, or PET) before or during surgery. These preoperative image volumes 205 provide dense anatomical or functional data.
  • the organ of interest is segmented from this image data and used to construct a 3D point representation of the tissue.
  • this 3D point representation is a preoperative mesh 215; however, in other embodiments, other representations may be used such as a 3D point cloud 220.
  • the intraoperative video sequences 210 are captured by an optical image acquisition system such as a camera-projector system, a stereo camera system, or cameras combined with a time-of-flight sensor to provide 2D visual information and 2.5D depth information.
  • the 2.5D stream is of particular utility, in that it can provide metric geometric information about the object surface.
  • a 3D intraoperative point cloud 220 is created from this depth information using intrinsic camera parameters or by stitching individual 3D frames together to form a larger field of view of the organ surface.
  • intraoperative point cloud 220 is derived from dense stereo vision computations or a structured light-based vision system.
  • the intraoperative point cloud is derived from a contact or contact-less surface scanning device such as a range scanner.
  • Medical 3D organ data in the intraoperative data can comprise a triangulated organ surface or point cloud data generated by a segmentation process or a binary mask.
  • any rigid registration technique generally known in the art suitable to modalities being used may be applied to compute the initial registration.
  • intensity-based registration techniques are used to compare intensity patterns in the images of interest via correlation metrics.
  • feature-based methods are used to determine correspondences between image features such as points, lines, and contours.
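Given such point feature correspondences, the rigid part of the alignment can be estimated in closed form. The sketch below is a generic illustration, not the patent's specific method: it uses the standard SVD-based Kabsch/Procrustes solution, and `rigid_align` is a hypothetical helper name.

```python
import numpy as np

def rigid_align(src, dst):
    """Estimate rotation R and translation t minimizing ||R @ src_i + t - dst_i||
    over known point correspondences (Kabsch / orthogonal Procrustes)."""
    src_c = src - src.mean(axis=0)          # center both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                     # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Toy check: recover a known rotation about z and a translation.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
src = np.random.default_rng(0).normal(size=(50, 3))
dst = src @ R_true.T + t_true
R, t = rigid_align(src, dst)
assert np.allclose(R, R_true, atol=1e-6) and np.allclose(t, t_true, atol=1e-6)
```

In practice the correspondences would come from the matched image features; iterating this closed-form step with nearest-neighbor matching yields the familiar ICP scheme.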
  • the method 200 illustrated in FIG. 2 next performs a deformable registration at steps 235 - 245.
  • a biomechanical model is used to control the deformation of the preoperative data. In some embodiments, this would be performed by solving equations of motion using the finite element method with a mesh representation of the preoperative organ. In other embodiments, meshless methods are used to solve the biomechanics equations using the method 200.
  • This deformable registration process requires designating proper boundary conditions for the motion equations in terms of forces or displacements.
  • a generative mixture model such as Gaussian Mixture Model (GMM) is computed on the intraoperative point cloud.
  • GMM is a parametric probability density function represented as a weighted sum of Gaussian component densities.
  • the GMM will be treated as stationary in the registration.
  • Each cloud point is given a Gaussian function, with the point position as the mean.
  • the Gaussian standard deviation for the point is selected to be proportional to its distance to the closest point on the preoperative mesh.
  • the cloud point Gaussians are then combined into a single GMM functional.
  • for each point of the preoperative mesh 215, the GMM gradient at that position is used to define a 3D vector which points toward the intraoperative point cloud 220.
  • These unitless vectors are multiplied at step 235 by a scaling factor to convert them into physical force vectors.
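As an illustration of steps 230 and 235, the sketch below (a hypothetical helper `gmm_forces`, not code from the patent) builds a GMM on the cloud points with each Gaussian's standard deviation proportional to the point's distance to the closest mesh vertex, then evaluates the mixture gradient at each mesh vertex and scales it into a force vector:

```python
import numpy as np

def gmm_forces(mesh_pts, cloud_pts, scale=1.0):
    """For each mesh vertex, evaluate the gradient of a GMM built on the
    intraoperative point cloud and scale it into a force vector."""
    # sigma_k ~ distance from cloud point k to its nearest mesh vertex
    d = np.linalg.norm(cloud_pts[:, None, :] - mesh_pts[None, :, :], axis=2)
    sigma = d.min(axis=1) + 1e-6                           # avoid zero std. dev.
    diff = cloud_pts[None, :, :] - mesh_pts[:, None, :]    # (M, K, 3)
    sq = (diff ** 2).sum(axis=2) / sigma[None, :] ** 2
    # isotropic Gaussian densities, one component per cloud point
    w = np.exp(-0.5 * sq) / ((2 * np.pi) ** 1.5 * sigma[None, :] ** 3)
    # gradient of the mixture w.r.t. each mesh vertex position
    grad = (w[:, :, None] * diff / sigma[None, :, None] ** 2).sum(axis=1)
    return scale * grad / len(cloud_pts)

# A mesh vertex at the origin and a cloud to its +x side:
mesh = np.array([[0.0, 0.0, 0.0]])
cloud = np.array([[1.0, 0.0, 0.0], [1.0, 0.1, 0.0]])
f = gmm_forces(mesh, cloud)
assert f[0, 0] > 0.0   # force points from the mesh toward the cloud
```

Because the gradient of each component points from the evaluation position toward that component's mean, the summed force naturally pulls mesh vertices toward nearby cloud points without explicit correspondences.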
  • the physical force vectors determined at step 235 are utilized in a biomechanical model to drive the preoperative mesh 215 to deform toward the intraoperative point cloud 220.
  • Various biomechanical models may be applied at step 240. For example, one embodiment of the equations of motion applied at step 240 is Mü + Ku = R (1)
  • in Equation 1, u is a vector of displacements at the 3D points, M is a mass matrix, K is a stiffness matrix which depends on the material type, and R is a vector of external active forces.
  • the GMM gradient forces determined at steps 230 and 235 would be added to the R term in Equation 1.
  • Equation 1 is only one example of a biomechanical model that may be applied at step 240. In other embodiments, other biomechanical models may be applied, for example, to incorporate other characteristics of the materials involved in the deformation.
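Equation 1 can be advanced in time with a simple semi-implicit (symplectic) Euler scheme. The sketch below is a hypothetical illustration of such an integrator on a toy one-degree-of-freedom system, not the patent's actual solver:

```python
import numpy as np

def step(u, v, M_inv, K, R, dt=1e-3):
    """One semi-implicit Euler step of M*u'' + K*u = R (Equation 1):
    acceleration from the force residual, then velocity, then displacement."""
    a = M_inv @ (R - K @ u)   # u'' = M^{-1} (R - K u)
    v = v + dt * a
    u = u + dt * v
    return u, v

# Toy 1-DOF system: unit mass, stiffness 4, constant external force 2.
M_inv = np.array([[1.0]])
K = np.array([[4.0]])
R = np.array([2.0])
u, v = np.zeros(1), np.zeros(1)
for _ in range(5000):
    u, v = step(u, v, M_inv, K, R)
# Undamped, the solution oscillates about the static equilibrium u = K^-1 R = 0.5
assert -0.01 <= u[0] <= 1.01
```

The GMM gradient forces would enter through the R term at every step; note that without damping the displacement oscillates rather than settling, which motivates the damping forces discussed below.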
  • the final output of the method 200 is a deformation field which describes the non-rigid alignment of the preoperative data with the intraoperative data.
  • this deformation field is used to generate a fused image display which overlays the deformed preoperative mesh 215 on the intraoperative point cloud 220.
  • This fused image display may then be presented on a monitor at the intraoperative site to guide the medical staff in performing surgical procedures.
  • the GMM gradient forces or other biomechanical parameters associated with the anatomical area of interest are used to determine a time for performing a new intraoperative scan to update the intraoperative data. As this time approaches, a visual and/or audible indicator may be presented to alert the surgical team that a new intraoperative scan should be performed. Alternatively, the time may be used to automatically trigger the scan. It should be noted that this time may be derived far in advance of when the scan is needed. Thus, any automatic or manual preparation of the device providing the intraoperative scan may be done while surgery is being performed, allowing minimal time to be lost transitioning between surgery and intraoperative scanning.
  • damping forces may be applied to the moving preoperative mesh 215 to aid in convergence to the final alignment.
  • without damping, the gradient forces may be overly strong and cause oscillating movement of the mesh through the intraoperative point cloud 220.
  • another set of forces may be introduced which penalize fast motion through the intraoperative point cloud 220. This could take the form of a force which is the product of the negative velocity of the mesh point and a damping coefficient, written as F_damping = D(−v), where D is the damping coefficient and v is the velocity of the mesh point.
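A minimal sketch of such a damping force, assuming the per-point damping coefficient is chosen proportional to the mixture model evaluated at the point (one of the selection strategies described earlier); `damping_force` is a hypothetical helper:

```python
import numpy as np

def damping_force(velocity, gmm_value, base_coeff=1.0):
    """Damping force opposing a mesh point's motion. With the coefficient
    proportional to the mixture density at the point, damping is strongest
    near the point cloud, where oscillation through the target surface
    would otherwise occur."""
    D = base_coeff * gmm_value      # per-point damping coefficient
    return -D * velocity

v = np.array([0.5, 0.0, -0.2])
f = damping_force(v, gmm_value=2.0)
assert np.allclose(f, [-1.0, 0.0, 0.4])   # opposes the velocity direction
```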
  • the probabilistic model on the intraoperative point cloud 220 can incorporate surface normals.
  • This surface normal can be generated from methods generally known in the art.
  • the eigenvector corresponding to the smallest eigenvalue of the covariance matrix, computed using the neighborhood of the point, can be used as a surrogate for the surface normal n̂.
  • C = (1/|N|) Σ_{p_i ∈ N} (p_i − p̄)(p_i − p̄)ᵀ,  C·v_j = λ_j·v_j,  j ∈ {0, 1, 2} (3)
  • where p̄ is the mean of all the points in the neighborhood N, p_i is the i-th point, and v_j is the j-th eigenvector.
  • a second pass to resolve the ambiguity in the direction of surface normal is required. The ambiguity is resolved using the assumption that the surface is generated from an endoscopic view and hence must be pointed towards the viewport.
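The normal estimation and viewpoint disambiguation described above can be sketched as follows (a generic PCA-based implementation with hypothetical names, under the stated assumption that the normal must face the endoscope viewport):

```python
import numpy as np

def estimate_normal(neighborhood, viewpoint):
    """Surface normal of a point from its neighborhood: the eigenvector of
    the neighborhood covariance matrix with the smallest eigenvalue, with
    its sign flipped so the normal faces the camera viewpoint."""
    p_bar = neighborhood.mean(axis=0)
    centered = neighborhood - p_bar
    C = centered.T @ centered / len(neighborhood)   # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)            # eigenvalues ascending
    n = eigvecs[:, 0]                               # smallest-eigenvalue eigenvector
    # second pass: resolve the sign ambiguity toward the viewport
    if np.dot(n, viewpoint - p_bar) < 0:
        n = -n
    return n

# Points on the z = 0 plane; camera above at z = +5, so the normal is +z.
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [0.5, 0.5, 0]], float)
n = estimate_normal(pts, viewpoint=np.array([0.0, 0.0, 5.0]))
assert np.allclose(n, [0, 0, 1])
```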
  • the probabilistic GMM model applied at step 230 of FIG. 2 is generated on the augmented 6D vector x_i ∈ ℝ⁶, composed of [x_i; n_i] (point position and surface normal), instead of just the points.
  • for each mesh point, the VMF gradient at that position is used to define a 6D vector which points toward the intraoperative point cloud.
  • These vectors are unitless, and so are then multiplied by a scaling factor to convert them into physical wrench (force/torque) vectors.
  • the wrenches are then utilized in the biomechanical model to drive the tissue to deform toward the intraoperative point cloud 220.
  • one embodiment of the damping term in the equations of motion may be written as F_damping,n = D_n(−[v_n; ω_n]) (2)
  • in Equation 2, D_n is the vector of damping coefficients for point n, and v_n, ω_n are the translational and rotational velocities of the point and the point normal, respectively.
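The Von Mises-Fisher distribution used for the orientation component is a density on the unit sphere. The exact 6D construction is not spelled out here, but the standard VMF density on S² (which such a model would combine with a positional Gaussian) can be sketched as:

```python
import numpy as np

def vmf_pdf(n, mu, kappa):
    """Von Mises-Fisher density on the unit sphere S^2 for a unit normal n,
    mean direction mu and concentration kappa:
    f(n) = kappa / (4*pi*sinh(kappa)) * exp(kappa * mu.n)."""
    c3 = kappa / (4.0 * np.pi * np.sinh(kappa))     # normalization on S^2
    return c3 * np.exp(kappa * np.dot(mu, n))

mu = np.array([0.0, 0.0, 1.0])
aligned = vmf_pdf(np.array([0.0, 0.0, 1.0]), mu, kappa=5.0)
opposed = vmf_pdf(np.array([0.0, 0.0, -1.0]), mu, kappa=5.0)
assert aligned > opposed   # the density peaks at the mean direction
```

Larger kappa concentrates the density around mu, so normals that disagree with the target orientation contribute little, which is what lets the augmented model penalize orientation mismatch as well as positional mismatch.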
  • FIG. 4 illustrates an exemplary computing environment 400 within which embodiments of the invention may be implemented.
  • This environment 400 may be used, for example, to implement a portion of one or more components used at the preoperative site 105 or the intraoperative site 110 illustrated in FIG. 1.
  • Computing environment 400 may include computer system 410, which is one example of a computing system upon which embodiments of the invention may be implemented.
  • Computers and computing environments, such as computer system 410 and computing environment 400, are known to those of skill in the art and thus are described briefly here.
  • the computer system 410 may include a communication mechanism such as a bus 421 or other communication mechanism for communicating information within the computer system 410.
  • the system 410 further includes one or more processors 420 coupled with the bus 421 for processing the information.
  • the processors 420 may include one or more central processing units (CPUs), graphical processing units (GPUs), or any other processor known in the art. More generally, a processor as used herein is a device for executing machine-readable instructions stored on a computer readable medium, for performing tasks, and may comprise any one or combination of hardware and firmware. A processor may also comprise memory storing machine-readable instructions executable for performing tasks. A processor acts upon information by manipulating, analyzing, modifying, converting or transmitting information for use by an executable procedure or an information device, and/or by routing the information to an output device.
  • CPUs central processing units
  • GPUs graphical processing units
  • a processor may use or comprise the capabilities of a computer, controller or microprocessor, for example, and be conditioned using executable instructions to perform special purpose functions not performed by a general purpose computer.
  • a processor may be coupled (electrically and/or as comprising executable components) with any other processor enabling interaction and/or communication there-between.
  • a user interface processor or generator is a known element comprising electronic circuitry or software or a combination of both for generating display images or portions thereof.
  • a user interface comprises one or more display images enabling user interaction with a processor or other device.
  • the computer system 410 also includes a system memory 430 coupled to the bus 421 for storing information and instructions to be executed by processors 420.
  • the system memory 430 may include computer readable storage media in the form of volatile and/or nonvolatile memory, such as read only memory (ROM) 431 and/or random access memory (RAM) 432.
  • the system memory RAM 432 may include other dynamic storage device(s) (e.g., dynamic RAM, static RAM, and synchronous DRAM).
  • the system memory ROM 431 may include other static storage device(s) (e.g., programmable ROM, erasable PROM, and electrically erasable PROM).
  • the system memory 430 may be used for storing temporary variables or other intermediate information during the execution of instructions by the processors 420.
  • a basic input/output system 433 (BIOS) containing the basic routines that help to transfer information between elements within computer system 410, such as during start-up, may be stored in ROM 431.
  • RAM 432 may contain data and/or program modules that are immediately accessible to and/or presently being operated on by the processors 420.
  • System memory 430 may additionally include, for example, operating system 434, application programs 435, other program modules 436 and program data 437.
  • the computer system 410 also includes a disk controller 440 coupled to the bus 421 to control one or more storage devices, such as a magnetic hard disk 441 and a removable media drive 442 (e.g., floppy disk drive, compact disc drive, tape drive, and/or solid state drive).
  • the storage devices may be added to the computer system 410 using an appropriate device interface (e.g., a small computer system interface (SCSI), integrated device electronics (IDE), Universal Serial Bus (USB), or FireWire).
  • the computer system 410 may also include a display controller 465 coupled to the bus 421 to control a display or monitor 466, such as a cathode ray tube (CRT) or liquid crystal display (LCD), for displaying information to a computer user.
  • the computer system includes an input interface 460 and one or more input devices, such as a keyboard 462 and a pointing device 461, for interacting with a computer user and providing information to the processor 420.
  • the pointing device 461 for example, may be a mouse, a light pen, a trackball, or a pointing stick for communicating direction information and command selections to the processor 420 and for controlling cursor movement on the display 466.
  • the display 466 may provide a touch screen interface which allows input to supplement or replace the communication of direction information and command selections by the pointing device 461.
  • the computer system 410 may perform a portion or all of the processing steps of embodiments of the invention in response to the processors 420 executing one or more sequences of one or more instructions contained in a memory, such as the system memory 430. Such instructions may be read into the system memory 430 from another computer readable medium, such as a hard disk 441 or a removable media drive 442.
  • the hard disk 441 may contain one or more datastores and data files used by embodiments of the present invention. Datastore contents and data files may be encrypted to improve security.
  • the processors 420 may also be employed in a multi-processing arrangement to execute the one or more sequences of instructions contained in system memory 430.
  • hard-wired circuitry may be used in place of or in combination with software instructions. Thus, embodiments are not limited to any specific combination of hardware circuitry and software.
  • the computer system 410 may include at least one computer readable medium or memory for holding instructions programmed according to embodiments of the invention and for containing data structures, tables, records, or other data described herein.
  • the term "computer readable medium” as used herein refers to any medium that participates in providing instructions to the processor 420 for execution.
  • a computer readable medium may take many forms including, but not limited to, non-transitory, nonvolatile media, volatile media, and transmission media.
  • Non-limiting examples of nonvolatile media include optical disks, solid state drives, magnetic disks, and magneto-optical disks, such as hard disk 441 or removable media drive 442.
  • Non-limiting examples of volatile media include dynamic memory, such as system memory 430.
  • Non-limiting examples of transmission media include coaxial cables, copper wire, and fiber optics, including the wires that make up the bus 421.
  • Transmission media may also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications.
  • the computing environment 400 may further include the computer system 410 operating in a networked environment using logical connections to one or more remote computers, such as remote computer 480.
  • Remote computer 480 may be a personal computer (laptop or desktop), a mobile device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to computer system 410.
  • computer system 410 may include modem 472 for establishing communications over a network 471, such as the Internet. Modem 472 may be connected to system bus 421 via a user network interface, or via another appropriate mechanism.
  • Network 471 may be any network or system generally known in the art, including the Internet, an intranet, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a direct connection or series of connections, a cellular telephone network, or any other network or medium capable of facilitating communication between computer system 410 and other computers (e.g., remote computing system 480).
  • the network 471 may be wired, wireless or a combination thereof. Wired connections may be implemented using Ethernet, Universal Serial Bus (USB), RJ-11, or any other wired connection generally known in the art.
  • Wireless connections may be implemented using Wi-Fi, WiMAX, Bluetooth, infrared, cellular networks, satellite, or any other wireless connection methodology generally known in the art. Additionally, several networks may work alone or in communication with each other to facilitate communication in the network 471.
  • computers in computing environment 400 may include a hardware or software receiver module (not shown in FIG. 4) configured to receive one or more data items used in performing the techniques described herein.
  • An executable application comprises code or machine readable instructions for conditioning the processor to implement predetermined functions, such as those of an operating system, a context data acquisition system or other information processing system, for example, in response to user command or input.
  • An executable procedure is a segment of code or machine readable instruction, sub-routine, or other distinct section of code or portion of an executable application for performing one or more particular processes. These processes may include receiving input data and/or parameters, performing operations on received input data and/or performing functions in response to received input parameters, and providing resulting output data and/or parameters.
  • a graphical user interface comprises one or more display images, generated by a display processor and enabling user interaction with a processor or other device and associated data acquisition and processing functions.
  • the GUI also includes an executable procedure or executable application.
  • the executable procedure or executable application conditions the display processor to generate signals representing the GUI display images. These signals are supplied to a display device which displays the image for viewing by the user.
  • the processor under control of an executable procedure or executable application, manipulates the UI display images in response to signals received from the input devices. In this way, the user may interact with the display image using the input devices, enabling user interaction with the processor or other device.
  • the embodiments of the present invention can be included in an article of manufacture comprising, for example, a non-transitory computer readable medium.
  • This computer readable medium may have embodied therein a method for facilitating one or more of the techniques utilized by some embodiments of the present invention.
  • the article of manufacture may be included as part of a computer system or sold separately.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

A computer-implemented method for registering preoperative and intraoperative image data includes receiving a first three-dimensional model of an anatomical region of interest derived from one or more image volumes acquired in a preoperative setting, and acquiring images of the anatomical region of interest in an operative setting using an intraoperative image acquisition device. A second three-dimensional model of the anatomical region of interest is generated from these images. The first three-dimensional model is then aligned with the second three-dimensional model using a rigid registration process. Next, an iterative deformable registration process is performed to further align the two three-dimensional models. This iterative deformable registration process may comprise, for example, computing a generative mixture model representative of the second three-dimensional model, using the generative mixture model to derive physical force vectors, and biomechanically deforming the first three-dimensional model toward the second three-dimensional model using the physical force vectors.
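The iterative loop described in the abstract can be illustrated with a compact numerical sketch. This is a hedged illustration only, not the patented implementation: the function names (`gmm_force_vectors`, `deformable_register`), the isotropic Gaussian kernel, the annealing schedule, and the damped explicit update (which stands in for the biomechanical, e.g. finite-element, solve of the first model) are all assumptions made for the sketch, and a rigid pre-alignment (e.g., Procrustes) is assumed to have been applied already.

```python
import numpy as np

def gmm_force_vectors(model_pts, observed_pts, sigma):
    """E-step of a Gaussian-mixture point match: every model vertex acts as
    a mixture centroid over the observed intraoperative points, and the
    responsibility-weighted offset toward its soft correspondence is
    returned as a per-vertex force vector."""
    # M x N squared distances between model centroids and observed points.
    d2 = ((model_pts[:, None, :] - observed_pts[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w /= w.sum(axis=0, keepdims=True) + 1e-12  # posteriors P(centroid | point)
    n_eff = w.sum(axis=1, keepdims=True)       # effective mass per centroid
    # Force = soft-correspondence target minus current position; centroids
    # that explain no observed points receive (numerically) zero force.
    return (w @ observed_pts - n_eff * model_pts) / (n_eff + 1e-12)

def deformable_register(model_pts, observed_pts, iters=60, step=0.5, sigma=0.3):
    """Iteratively pushes the preoperative model toward the intraoperative
    surface.  The damped explicit update below is a stand-in for the
    biomechanical deformation step of the actual method."""
    pts = model_pts.copy()
    for _ in range(iters):
        pts = pts + step * gmm_force_vectors(pts, observed_pts, sigma)
        sigma = max(0.05, sigma * 0.95)        # anneal the kernel width
    return pts
```

On a toy cloud, `deformable_register` pulls a copy of the preoperative vertices onto a rigidly shifted "intraoperative" cloud; the per-vertex forces play the role of the physical force vectors that drive the biomechanical deformation in the method above.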
PCT/US2016/027018 2016-04-12 2016-04-12 Deformable registration of intraoperative and preoperative inputs using generative mixture models and biomechanical deformation Ceased WO2017180097A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2016/027018 WO2017180097A1 (fr) 2016-04-12 2016-04-12 Deformable registration of intraoperative and preoperative inputs using generative mixture models and biomechanical deformation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2016/027018 WO2017180097A1 (fr) 2016-04-12 2016-04-12 Deformable registration of intraoperative and preoperative inputs using generative mixture models and biomechanical deformation

Publications (1)

Publication Number Publication Date
WO2017180097A1 true WO2017180097A1 (fr) 2017-10-19

Family

ID=55910352

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2016/027018 Ceased WO2017180097A1 (fr) Deformable registration of intraoperative and preoperative inputs using generative mixture models and biomechanical deformation

Country Status (1)

Country Link
WO (1) WO2017180097A1 (fr)


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014127321A2 (fr) * 2013-02-15 2014-08-21 Siemens Aktiengesellschaft Biomechanically driven registration of a pre-operative image to intra-operative 3D images in laparoscopic surgery

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
BILLINGS SETH ET AL: "Generalized iterative most likely oriented-point (G-IMLOP) registration", INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY, SPRINGER, DE, vol. 10, no. 8, 23 May 2015 (2015-05-23), pages 1213 - 1226, XP035524243, ISSN: 1861-6410, [retrieved on 20150523], DOI: 10.1007/S11548-015-1221-2 *
DUAY V ET AL: "Non-rigid registration algorithm with spatially varying stiffness properties", BIOMEDICAL IMAGING: MACRO TO NANO, 2004. IEEE INTERNATIONAL SYMPOSIUM ON ARLINGTON,VA, USA APRIL 15-18, 2004, PISCATAWAY, NJ, USA,IEEE, 15 April 2004 (2004-04-15), pages 408 - 411, XP010773884, ISBN: 978-0-7803-8389-0, DOI: 10.1109/ISBI.2004.1398561 *
MOHAMMADI AMROLLAH ET AL: "Estimation of intraoperative brain shift by combination of stereovision and doppler ultrasound: phantom and animal model study", INTERNATIONAL JOURNAL OF COMPUTER ASSISTED RADIOLOGY AND SURGERY, SPRINGER, DE, vol. 10, no. 11, 10 May 2015 (2015-05-10), pages 1753 - 1764, XP035574224, ISSN: 1861-6410, [retrieved on 20150510], DOI: 10.1007/S11548-015-1216-Z *
TAO WENBING ET AL: "Asymmetrical Gauss Mixture Models for Point Sets Matching", 2014 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, IEEE, 23 June 2014 (2014-06-23), pages 1598 - 1605, XP032649508, DOI: 10.1109/CVPR.2014.207 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11857153B2 (en) 2018-07-19 2024-01-02 Activ Surgical, Inc. Systems and methods for multi-modal sensing of depth in vision systems for automated surgical robots
WO2020140044A1 (fr) * 2018-12-28 2020-07-02 Activ Surgical, Inc. Generation of synthetic three-dimensional imaging from partial depth maps
CN113906479A (zh) * 2018-12-28 2022-01-07 艾科缇弗外科公司 Generating synthetic three-dimensional imaging from partial depth maps
CN113143459A (zh) * 2020-01-23 2021-07-23 海信视像科技股份有限公司 Laparoscopic augmented reality surgical navigation method and apparatus, and electronic device
WO2024065343A1 (fr) * 2022-09-29 2024-04-04 中国科学院深圳先进技术研究院 System and method for registering preoperative and intraoperative liver point cloud data, terminal, and storage medium
WO2024108409A1 (fr) * 2022-11-23 2024-05-30 北京肿瘤医院(北京大学肿瘤医院) Non-contact four-dimensional imaging method and system based on a four-dimensional surface respiration signal
US20240242426A1 (en) * 2023-01-12 2024-07-18 Clearpoint Neuro, Inc. Dense non-rigid volumetric mapping of image coordinates using sparse surface-based correspondences

Similar Documents

Publication Publication Date Title
CN104000655B (zh) Combined surface reconstruction and registration for laparoscopic surgery
US9129422B2 (en) Combined surface reconstruction and registration for laparoscopic surgery
Plantefeve et al. Patient-specific biomechanical modeling for guidance during minimally-invasive hepatic surgery
Haouchine et al. Image-guided simulation of heterogeneous tissue deformation for augmented reality during hepatic surgery
Grasa et al. Visual SLAM for handheld monocular endoscope
US8712016B2 (en) Three-dimensional shape data processing apparatus and three-dimensional shape data processing method
CN102999938B (zh) Method and system for model-based fusion of multimodal volumetric images
US20110282151A1 (en) Image-based localization method and system
US11900541B2 (en) Method and system of depth determination with closed form solution in model fusion for laparoscopic surgical guidance
US9155470B2 (en) Method and system for model based fusion on pre-operative computed tomography and intra-operative fluoroscopy using transesophageal echocardiography
EP2452649A1 (fr) Augmented reality visualization of anatomical data
US20180189966A1 (en) System and method for guidance of laparoscopic surgical procedures through anatomical model augmentation
WO2017180097A1 (fr) Deformable registration of intraoperative and preoperative inputs using generative mixture models and biomechanical deformation
CN113302660A (zh) Method for visualizing dynamic anatomical structures
KR20190080703A (ko) System, method and program for calculating an optimal entry position of a surgical tool
JP6608165B2 (ja) Image processing apparatus and method, and computer program
Tella-Amo et al. Probabilistic visual and electromagnetic data fusion for robust drift-free sequential mosaicking: application to fetoscopy
WO2014127321A2 (fr) Biomechanically driven registration of a pre-operative image to intra-operative 3D images in laparoscopic surgery
Shu et al. Seamless augmented reality integration in arthroscopy: a pipeline for articular reconstruction and guidance
Zampokas et al. Real-time stereo reconstruction of intraoperative scene and registration to preoperative 3D models for augmenting surgeons' view during RAMIS
Paulus et al. Surgical augmented reality with topological changes
Wang et al. Non-rigid scene reconstruction of deformable soft tissue with monocular endoscopy in minimally invasive surgery
JP5904976B2 (ja) Three-dimensional data processing apparatus, three-dimensional data processing method, and program
Boussot et al. Statistical model for the prediction of lung deformation during video-assisted thoracoscopic surgery
Zhang 3D Reconstruction of Colon Structures and Textures from Colonoscopic Videos

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16720232

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 16720232

Country of ref document: EP

Kind code of ref document: A1