
WO2016042297A1 - Computer and computer-implemented method for supporting laparoscopic surgery - Google Patents

Computer and computer-implemented method for supporting laparoscopic surgery

Info

Publication number
WO2016042297A1
Authority
WO
WIPO (PCT)
Prior art keywords
anatomical structure
laparoscope
registration
images
stereo
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/GB2015/052631
Other languages
English (en)
Inventor
Steve Thompson
David Hawkes
Matt CLARKSON
Johannes TOTZ
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
UCL Business Ltd
Original Assignee
UCL Business Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by UCL Business Ltd filed Critical UCL Business Ltd
Publication of WO2016042297A1

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36Image-producing devices or illumination devices not otherwise provided for
    • A61B2090/364Correlation of different images or relation of image positions in respect to the body
    • A61B2090/365Correlation of different images or relation of image positions in respect to the body augmented reality, i.e. correlating a live optical image with another image
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/70Manipulators specially adapted for use in surgery
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30056Liver; Hepatic
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Definitions

  • the present invention relates to laparoscopic surgery, and in particular to a computer and computer-implemented method that use stereoscopic images of an anatomical structure for supporting such surgery.
  • Image guidance systems require a method to register the pre-operative data to the intraoperative scene.
  • fiducial markers are in general impractical for abdominal surgery, so existing systems typically use exposed surfaces or natural anatomical landmarks to register the preoperative data.
  • Systems have been proposed using manually picked landmarks [3], structured light, laser depth scanners, and touching the surface with a tracked pointer [5].
  • the approach described herein helps to enable image guidance for laparoscopic (keyhole) surgery.
  • image guidance allows the surgeon to refer to preoperative images during surgery in an intuitive way - for example, by overlaying one or more preoperative images onto a laparoscopic video image (although other display options are possible).
  • This overlay of the images depends on a registration of the pre-operative images to the intra-operative video images.
  • Various methods have previously been proposed to achieve this registration; however, they have generally required the use of very specialised (and typically still prototype) hardware - e.g. structured light or laser range finders - or else depend upon explicit manual definition (and hence alignment) of landmark surface points by the surgeon.
  • the approach described herein helps to support registration using only a readily available (and increasingly common) stereo laparoscope in combination with a commercially available tracking system for the laparoscope, which is also widely used in image guided procedures.
  • a computer program comprising program instructions in machine-readable format that when executed by one or more processors in a computer system cause the computer system to implement any of the various methods as described above.
  • These program instructions may be stored on a non-transitory computer readable storage medium, such as a hard disk drive, read only memory (ROM) such as flash memory, an optical storage disk, and so on.
  • the program instructions may be loaded into random access memory (RAM) for execution by the one or more processors of a computer system from the computer readable storage medium. This loading may involve first downloading or transferring the program instructions over a computer network, such as a local area network (LAN) or the Internet.
  • the computer system may comprise one or more machines, which may be general purpose machines running program instructions configured to perform such methods.
  • the general purpose machines may be supplemented with graphics processing units (GPUs) to provide additional processing capability.
  • the computer system may also comprise at least some special purpose hardware for performing some or all of the processing described above, such as determining the visualisations.
  • the computer system may be incorporated into apparatus specifically customised for performing computer-assisted (image-guided) surgery using a laparoscope. Such apparatus may be used to provide support during a surgical operation, such as by providing real-time visualisation of the position of the inserted laparoscope in combination (and registered) with one or more pre-operative images.
  • Figure 1 is a graph showing how the triangulation error and patch area vary with distance from the laparoscope lens.
  • Figure 2 is an image of multiple surface patches overlaid onto a liver phantom
  • Figure 3 is an image of mounting prongs as used for subsurface landmarks for the liver phantom, as employed during an assessment of accuracy.
  • Figure 4 is an image showing estimated locations (circles) and corresponding true locations
  • Figure 5 is a flowchart illustrating a method for supporting laparoscopic surgery in accordance with some embodiments of the invention.
  • Published image guidance systems for laparoscopic surgery generally involve the surgeon performing a manual alignment between the visible anatomy, as obtained from the laparoscope, and any preoperative data, or rely on specialised hardware, such as a laparoscope with one or more laser range finders attached.
  • the system described here removes the need for specialist equipment or manual alignment (point selection). Instead, a commercially available stereo laparoscope and tracking system are used to reconstruct and localise multiple surface patches.
  • a carefully designed user interface is also provided to enable multiple surface patches, each of approximately 30 cm², to be captured, localised, and visualised within around 5 seconds each. Such visualisation is important as it allows the user to assess interactively the quality and spread of reconstructed patches.
  • a good set of patches can be collected and checked in under 2 minutes. Registration between the pre-operative surface and this set of surface patches is achieved using an iterative closest point (ICP) approach in conjunction with a sterile, manual initialisation. The registration, including manual initialisation, can be achieved within 3 minutes. The entire procedure (surface reconstruction, manual initialisation and ICP) was reliably achieved in under 5 minutes. N.B. all timings presented herein are given by way of example only and are based on the hardware and software of the current implementation - it will be appreciated that the timing may vary for other implementations that use different hardware and/or software.
  • the system described herein has been validated using a silicone liver phantom and in-vivo porcine data.
  • the results presented below are based on registrations performed "live", not subject to any post-operative adjustment.
  • the porcine data uses computed tomography (CT) data from an insufflated patient.
  • the registration process is performed in four steps: reconstruction and localisation of individual surface patches; filtering and compositing of the surface patches; manual initialisation; and registration using ICP.
  • Camera calibration parameters are determined prior to surgery via Zhang's method [7], for example using the implementation in OpenCV, an open source computer vision library (see www.opencv.org), with the hand-eye calibration performed as per Tsai [8].
  • the focal length is not changed during surgery; however, other implementations may involve changing the focal length, and re-calibrating as appropriate.
  • the median right-to-left lens transform is also determined via OpenCV; an illustrative calibration sketch follows below.
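By way of illustration only, the following sketch shows how such a calibration might be performed using OpenCV's standard chessboard-based API. This is an assumed workflow rather than the NifTK implementation; the function name, board geometry and square size are hypothetical.

```python
# Hedged sketch: Zhang-style intrinsic calibration of each lens plus the
# right-to-left stereo extrinsics, using OpenCV. Not the patent's code.
import cv2
import numpy as np

def calibrate_stereo(left_imgs, right_imgs, board=(9, 6), square_mm=3.0):
    """left_imgs/right_imgs: corresponding 8-bit grayscale chessboard views."""
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square_mm

    obj_pts, l_pts, r_pts = [], [], []
    for li, ri in zip(left_imgs, right_imgs):
        ok_l, corners_l = cv2.findChessboardCorners(li, board)
        ok_r, corners_r = cv2.findChessboardCorners(ri, board)
        if ok_l and ok_r:  # keep only views where both lenses see the board
            obj_pts.append(objp); l_pts.append(corners_l); r_pts.append(corners_r)

    size = left_imgs[0].shape[::-1]  # (width, height)
    _, K_l, d_l, _, _ = cv2.calibrateCamera(obj_pts, l_pts, size, None, None)
    _, K_r, d_r, _, _ = cv2.calibrateCamera(obj_pts, r_pts, size, None, None)

    # Fix the intrinsics and solve only for the inter-lens rotation/translation.
    _, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, l_pts, r_pts, K_l, d_l, K_r, d_r, size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    # For the hand-eye step, OpenCV offers cv2.calibrateHandEye with
    # method=cv2.CALIB_HAND_EYE_TSAI (Tsai's method, as cited above).
    return K_l, d_l, K_r, d_r, R, T
```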
  • Figure 1 shows the point triangulation error and the area of the reconstructed patches, both of which increase with distance from the lens (in mm) - the upper curve of Figure 1 represents the point triangulation error (RMS, in mm), while the lower curve represents the area of the reconstructed patches (in cm²).
  • Figure 2 shows nine surface patches 315, each with an area of around 30 cm², shown overlaid on the liver phantom 310.
  • the scale of the visible features is also application-specific and helps to determine at what depth the feature matching works reliably.
  • the approach described herein was found to give the best results when the liver surface was between approximately 50 and 80 mm from the lens, thereby giving surface patch areas of between approximately 14 and 36 cm².
  • the laparoscope is tracked using an optical tracking system, NDI Polaris Spectra from Northern Digital Inc (NDI) - see www.ndigital.com. Passive tracking markers were placed on the external end (590 mm from the lens) of the laparoscope.
  • the estimated tracking transform, referred to herein as T_Camera2World, is used to transform each set of triangulated points to the world (tracker) coordinate system. Accurate synchronisation of the tracking and video signals is important.
  • a time stamping and signalling protocol within NifTK, accurate to the millisecond, was implemented based on OpenIGTLink (the Open Network Interface for Image Guided Therapy - see www.openigtlink.org). By moving the laparoscope, it is straightforward to confirm that the surface patch has been placed in the correct location on the liver surface; a sketch of this world-coordinate mapping is given below.
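As a hedged sketch of this step (the data layout and function names below are assumptions, not the patent's code), each video frame can be matched to the nearest tracking sample by timestamp, and the triangulated points mapped into world coordinates with T_Camera2World:

```python
# Sketch: timestamp-matched mapping of triangulated points (lens frame)
# into tracker/world coordinates via the 4x4 transform T_Camera2World.
import numpy as np

def nearest_tracking_matrix(track_times_ns, track_matrices, frame_time_ns):
    """Pick the tracking sample closest in time to the video frame."""
    i = int(np.argmin(np.abs(np.asarray(track_times_ns) - frame_time_ns)))
    return track_matrices[i]

def camera_points_to_world(points_cam, T_camera2world):
    """points_cam: (N, 3) points in the lens frame -> (N, 3) in world (mm)."""
    homog = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (T_camera2world @ homog.T).T[:, :3]
```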
  • the resulting point clouds are filtered and composited to a single point cloud.
  • Filtering reduces the number of points in each patch from hundreds of thousands to hundreds, and therefore helps to reduce subsequent processing time (although some implementations may dispense with such filtering).
  • This reduction in number of points is done using voxel re-sampling, implemented within the Point Cloud Library (PCL) (see www.pointclouds.org).
  • the filtering process removes some of the triangulation noise by fitting the points in each patch to a local surface based on a maximum curvature function, also implemented within PCL.
  • This point cloud can be considered as representing the surface of the liver as viewed from the laparoscope at the image capture position for each respective patch (prior to the compositing).
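PCL itself is a C++ library; the Python sketch below reproduces the same two-stage filter using Open3D as a stand-in, with statistical outlier removal substituted for PCL's curvature-based surface fit. All parameter values are illustrative assumptions.

```python
# Sketch of the filtering stage with Open3D standing in for PCL:
# voxel re-sampling reduces each patch from hundreds of thousands of
# points to hundreds, and outlier removal suppresses triangulation noise.
import numpy as np
import open3d as o3d

def filter_patch(points_xyz, voxel_mm=5.0):
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points_xyz))
    pcd = pcd.voxel_down_sample(voxel_size=voxel_mm)
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    return np.asarray(pcd.points)
```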
  • the registration process estimates the transform from the model co-ordinate system to world coordinates, referred to herein as T_Model2World.
  • the transform T_Model2World is determined so as to minimise the mean Euclidean distance between the filtered point cloud and the model liver surface.
  • registration is performed using an iterative closest point (ICP) algorithm, implemented using VTK (the Visualization Toolkit). VTK enables interpolation of a surface between its defined vertices, and has been found to work well and repeatably for the phantom, provided a suitable initialisation (within about 30 mm) is given.
  • for the in-vivo porcine data, the registration was somewhat less repeatable, and susceptible to small changes in initialisation.
  • the ICP algorithm generally requires a good starting estimate of T_Model2World. This can be coarsely estimated from the position of the lens T_Camera2World and a preset offset transform T_Offset, as per Equation 1 below:

    T_Model2World = T_Camera2World · T_Offset    (Equation 1)
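A minimal numerical sketch of Equation 1 follows, assuming 4×4 homogeneous matrices and that T_Offset maps model coordinates into the lens frame; the 70 mm offset is purely illustrative, chosen to sit within the 50-80 mm working range noted above.

```python
# Sketch of Equation 1: compose the tracked lens pose with a preset
# offset to obtain a coarse starting estimate for ICP.
import numpy as np

def initial_model_to_world(T_camera2world, T_offset):
    return T_camera2world @ T_offset  # coarse T_Model2World

T_camera2world = np.eye(4)            # lens pose from the tracker
T_offset = np.eye(4)
T_offset[:3, 3] = [0.0, 0.0, 70.0]    # e.g. ~70 mm in front of the lens (assumed)
T_model2world_init = initial_model_to_world(T_camera2world, T_offset)
```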
  • the virtual anatomy remains static over the real scene (derived from the laparoscope video) while the laparoscope is inserted through the trocar and positioned so that the visible scene matches, as closely as possible, the virtual scene. In practice, it is generally only necessary to get both the virtual and real livers visible.
  • the user "picks up" the virtual liver using a second tracked object. This is configured so that the user can now move the virtual liver in 6 degrees of freedom in the coordinate system of the Image Guided Laproscopy overlay screen.
  • T_Model2Centre, which defines the transform from the origin of the pre-operative model to the desired centre of rotation of the model.
  • the user may select any centre of rotation - typically it would be the centroid of the anatomy of interest, for example the left lobe of the liver.
  • the application may include an intuitive interface for performing this operation.
  • T_World2Screen, which defines the location of the centre of the user interface screen relative to the "world" (tracking system) origin. This will depend on the geometry of the operating room.
  • the transform may be set manually or the application may include a user interface for setting the transform automatically.
  • the incremental motion of a tracked handheld "reference" object relative to the user screen is applied to the centre of the model, relative to the laparoscope lens (one possible form of this update is sketched below).
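One plausible form of this update is sketched below. It is an assumption rather than the patent's algorithm: the reference object's frame-to-frame motion is applied about the model's chosen centre of rotation (via T_Model2Centre), and the screen-frame remapping via T_World2Screen is omitted for brevity.

```python
# Hedged sketch: apply the tracked reference object's incremental motion
# to the virtual model, rotating about the model's centre of rotation.
import numpy as np

def trans(v):
    T = np.eye(4)
    T[:3, 3] = v
    return T

def apply_reference_motion(T_model2world, T_ref_prev, T_ref_now, T_model2centre):
    delta = T_ref_now @ np.linalg.inv(T_ref_prev)           # 6-DoF increment
    c_world = (T_model2world @ T_model2centre[:, 3])[:3]    # centre in world
    pivot = trans(c_world) @ delta @ trans(-c_world)        # rotate about centre
    return pivot @ T_model2world                            # updated model pose
```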
  • the user may use any tracked object for this procedure.
  • a physical representation of the patient's liver might be used to help make the process more intuitive.
  • the user positions the virtual liver over the real liver as closely as possible. To aid the process it is possible to "pause" the laparoscopic video and tracking streams if so desired.
  • This system has been used on the phantom and in-vivo to provide successful initialisation.
  • the surgeon has chosen to show the registered model as a 2D overlay (rather than using the 3D visualisation capability of the laparoscope).
  • the error of interest is the difference between the predicted position of a given feature on screen and the actual position of the feature on the screen.
  • a set of landmark features that can be unambiguously located in the video images are used as a "gold standard" against which the system errors can be measured.
  • the phantom is designed so that, after the silicone liver has been imaged, the flexible silicone liver can be removed to allow the rigid mounting pins or prongs 415 to be imaged with the laparoscope (see Figure 3).
  • the mounting prongs are used as subsurface landmarks for the phantom data (at a clinically relevant depth).
  • Figure 4 illustrates the surface ablation and anatomical notch that were used for error measurement in the in-vivo data.
  • the green circles show the model estimates of positions corresponding to the neighbouring gold standard (crosshair) locations.
  • since the individual lobes of a porcine liver can move independently, validation was limited to the lobe upon which registration was performed, in this case the right lobe.
  • a sample set of frames (every 25th frame for each channel) was extracted (791 for the phantom, 1193 for the porcine data). Any pertinent landmarks in this set of frames were manually selected. The pixel location for each of the landmarks was stored along with the relevant frame number. Right and left channels were treated independently.
  • each of the landmark points was manually identified in the CT data and transformed to world coordinates using the model-to-world transform from the registration process.
  • the position in world coordinates of each of the validation landmarks was estimated by triangulation and averaging of the manually selected gold standard screen points. This enables an estimate of laparoscope tracking and calibration errors independent of other system errors.
  • a further estimate of the model-to-world transform was achieved by performing a landmark point based registration between the triangulated landmark points and those identified in the CT. Doing this enables an estimate of the errors in manually locating landmark points and any deformation between the CT data and the video data.
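Such a landmark point based registration has a standard closed-form solution; the sketch below uses the Kabsch/SVD method, a common choice, though the patent does not specify the exact algorithm used.

```python
# Sketch: rigid point-based registration (Kabsch/SVD) between triangulated
# landmark points and the corresponding points identified in the CT.
import numpy as np

def rigid_register(src, dst):
    """Return a 4x4 T such that T applied to src best matches dst (both (N, 3))."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)                    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = mu_d - R @ mu_s
    return T
```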
  • The three approaches to landmark localisation, and the errors present in each method, are shown in Table 1.
  • the numbers of samples for each porcine data set were 478 samples of 4 surface landmarks, 234 samples of 6 surface landmarks, and 483 samples of 6 surface landmarks, respectively.
  • the error associated with "re-projection" was calculated as follows.
  • the gold standard pixel coordinates are undistorted using a 4-parameter distortion model, then re-projected to normalised points (x_gs/z_gs, y_gs/z_gs, 1.0) in the lens's coordinate system using the camera's projection matrix.
  • the model points are transformed into the lens's coordinate system, using the tracking transform for the relevant frame, to give (x_m, y_m, z_m).
  • the error is then defined by Equation 2 as the distance between the normalised gold standard point and the normalised model point, scaled by the depth z_m of the model point to give a value in millimetres:

    error = z_m · √( (x_gs/z_gs − x_m/z_m)² + (y_gs/z_gs − y_m/z_m)² )    (Equation 2)
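Under the form of Equation 2 assumed above, the per-landmark error might be computed as follows; cv2.undistortPoints both undistorts and normalises, matching the 4-parameter distortion model mentioned earlier. This is a sketch under that assumption, not the original implementation.

```python
# Sketch of the Equation 2 re-projection error, in millimetres at the
# depth of the model point, under the assumptions stated above.
import cv2
import numpy as np

def reprojection_error_mm(pixel_gs, model_point_lens, K, dist_coeffs):
    """pixel_gs: (u, v) gold standard pixel; model_point_lens: (x, y, z) in lens frame."""
    norm = cv2.undistortPoints(
        np.array([[pixel_gs]], dtype=np.float64), K, dist_coeffs).reshape(2)
    x_m, y_m, z_m = model_point_lens
    residual = norm - np.array([x_m / z_m, y_m / z_m])  # in normalised image plane
    return z_m * np.linalg.norm(residual)               # scale by depth -> mm
```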
  • Table 1 summarises the results for each validation experiment. For each experiment, the root mean square (RMS) and maximum error are presented. For both the phantom and porcine data, the results are taken from a single registration experiment. Repetition of the experiment for the phantom data was straightforward with results similar to those in Table 1 achieved reliably in under 5 minutes. Repetition of the porcine experiment was more difficult. It took 4 attempts, each taking 3 minutes, to achieve a successful registration. However, as the registration is based on the liver surface, which is visible in the laparoscopic video, failed registrations can be identified in real-time, and the process repeated until a satisfactory result is achieved.
  • the ICP registration was repeated 100 times from starting estimates based on random perturbations of the landmark based registration.
  • all 100 registrations resulted in RMS projection errors less than 4 mm.
  • the system described here uses a rigid or locally rigid registration between the pre-operative images and the intra-operative images, and hence does not adjust for intraoperative deformation of the liver.
  • one possibility is to allow some localised recovery of the dynamic organ deformation using a non-rigid, or locally rigid, variant of the ICP algorithm.
  • modelling of the insufflation process should enable use of CT data from non-insufflated patients, as will be appropriate for human cases.
  • FIG. 5 presents a flowchart of a computer-implemented method for supporting laparoscopic surgery in accordance with various embodiments of the invention.
  • the method includes providing a 3-dimensional model of an anatomical structure of the subject of the laparoscopic surgery (operation 210).
  • the anatomical structure may be (for example) an organ such as the liver or pancreas.
  • this 3-D model will be derived from one or more 3-D pre-operative images of the subject, such as by using magnetic resonance imaging (MRI) or X-ray computed tomography imaging (CTI).
  • the 3-D image(s) will have been processed to extract the anatomical structure of interest (for example as a surface mesh or image), although it might also be feasible to use the 3-D preoperative image of the subject directly as the model. Note that this processing of the 3-D image(s) does not have to be performed in real-time, but rather can complete in the interval between the preoperative imaging and the surgical procedure.
  • the 3-D image(s) may be used for planning the operation, for example, for identifying a portion of an organ (the anatomical structure) to be removed, such as in a liver resection.
  • the model may be designed to accommodate motion or deformation of the anatomical structure, which may be a relevant factor for some organs.
  • the model may incorporate information as to how the organ is likely to deform, based perhaps on bio-mechanically modelling and/or statistical data on the deformation of organs from a large number of images. This deformation information can then be used to assist in the registration of the 3-D model of the anatomical structure to the laparoscope imaging.
  • the method includes, at the intra-operative stage, obtaining (receiving or acquiring) from a stereo laparoscope corresponding stereo pairs of images of the anatomical structure (operation 220). Corresponding pairs of the stereo images are then processed to generate a topographical representation of the anatomical structure (operation 230). Note that this processing of the stereo images may be performed, at least in part, within the stereo laparoscope itself, or by some external processing unit or device. In addition, it is important to be able to perform the processing in real-time or near real-time, so that the results are quickly available to the surgeon who is performing the operative procedure.
  • the stereo laparoscope is generally provided with two lenses, referred to as left and right, which acquire corresponding pairs of images - i.e. for each image from the left lens there is a corresponding image from the right lens.
  • the pairs of images may be acquired as a video stream from each lens (hence each individual image can be considered as a frame of the video), although in other embodiments, the stereo laparoscope may provide successive pairs of individual (still) images.
  • processing the corresponding stereo pairs of images comprises matching features between the left and right images and, based on the matched features, triangulating individual points using left and right images from the stereo pairs to determine a 3-D position relative to a known position on the stereo laparoscope - for example relative to the left lens (a sketch of such a pipeline follows).
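As an illustrative sketch of such a pipeline (ORB is an assumed choice of feature; the patent does not name one), the following uses OpenCV to match features between the left and right images and triangulate the matches into the left-lens coordinate frame:

```python
# Sketch: feature matching between the left and right images, followed by
# triangulation. P_l and P_r are the 3x4 projection matrices of the two
# lenses from the stereo calibration.
import cv2
import numpy as np

def reconstruct_patch(img_l, img_r, P_l, P_r):
    orb = cv2.ORB_create(nfeatures=2000)
    kp_l, des_l = orb.detectAndCompute(img_l, None)   # 8-bit grayscale input
    kp_r, des_r = orb.detectAndCompute(img_r, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_l, des_r)

    pts_l = np.float64([kp_l[m.queryIdx].pt for m in matches]).T  # 2xN
    pts_r = np.float64([kp_r[m.trainIdx].pt for m in matches]).T

    pts4 = cv2.triangulatePoints(P_l, P_r, pts_l, pts_r)  # 4xN homogeneous
    return (pts4[:3] / pts4[3]).T                         # Nx3, left-lens frame
```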
  • other techniques might be used for processing the images, for example, based on some form of global correlation between the left and right images (i.e. without first matching features between the two images), and/or by incorporating information indicating how the images change with movement of the laparoscope (which gives a generally known change in viewing angle onto the surface).
  • the topographical representation of the anatomical structure is based on one or more patches from different locations on the surface of the anatomical structure, where each patch corresponds to a given viewing area from the laparoscope - e.g. one field of view.
  • the use of multiple patches, for example, between 6 and 12, is particularly helpful when the anatomical structure is relatively sparse in terms of distinct topography.
  • the topographical representation of the anatomical structure may comprise any suitable form, such as a point cloud, a surface mesh, etc.
  • This representation may be filtered to reduce complexity (and noise), which can then help to reduce the subsequent computational burden of registering the topographical representation to the 3-D model.
  • filtering may comprise filtering the point cloud by fitting the points in a patch to a local surface based on a maximum curvature function.
  • the method further comprises tracking the position of the laparoscope (usually in combination with tracking the orientation of the laparoscope as well). For example, this tracking may be performed using an optical tracking system with passive tracking markers placed on the proximal end of the laparoscope. There are various other options available for such tracking, for example, attaching one or more ultrasound or microwave emitters to the laparoscope, or using a magnetic field sensor. A synchronisation is provided between the stereo images obtained by the laparoscope and the tracked position of the laparoscope.
  • a registration is now determined between the 3-D model of the anatomical structure and the topographical representation of the anatomical structure (operation 240). Again, this registration is performed as part of the intra-operative procedure, and hence should be completed quickly. In some embodiments, the registration is determined using an iterative closest point (ICP) technique, but any suitable algorithm or technique may be utilised.
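A sketch of such an ICP step is given below, with Open3D's implementation standing in for the one described above; following the earlier description, the composited surface patches are registered to the model surface and the result inverted to give T_Model2World. The correspondence distance is an illustrative parameter.

```python
# Sketch: ICP refinement with Open3D as a stand-in implementation.
import numpy as np
import open3d as o3d

def refine_registration(patch_pts_world, model_pts, T_world2model_init,
                        max_corr_dist_mm=10.0):
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(patch_pts_world))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(model_pts))
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_corr_dist_mm, T_world2model_init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    # result.transformation maps world -> model; invert for T_Model2World.
    return np.linalg.inv(result.transformation)
```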
  • the registration requires, or at least performs better and/or more quickly with, an initialisation that provides a very approximate (coarse) registration between the 3-D model of the anatomical structure and the topographical representation of the anatomical structure.
  • This initialisation is based at least in part on a standard clinical laparoscopic approach to the anatomical structure, because in this case the view from the laparoscope relative to the model is predictable (to a certain extent).
  • a more accurate, manual, initialisation is performed by providing on a display screen: (i) a virtual view of the anatomical structure derived from the model; and (ii) a real view obtained from the stereo laparoscope.
  • the real view may be generated from the topographic representation derived above, or may alternatively comprise actual images obtained from the laparoscope (e.g. an image obtained from one lens, or a stereo or flattened composite obtained from both lenses).
  • a clinician is then able to manually adjust at least one of the displayed virtual view and the displayed real view to provide alignment between the two views.
  • the user interface may allow one of the views to be scaled, translated and rotated in order to achieve at least approximate alignment with the other view.
  • based on this alignment, an appropriate initialisation for the registration can be determined. This initialisation will be based on the mutual (relative) geometry (orientation, etc.) of the two views as aligned on the screen. In a situation where the real view is derived directly from the imaging of the laparoscope (rather than from the topographic representation), this geometry can also relate the displayed real view to the topographic representation.
  • the registration procedure can then commence, based on this approximate alignment, to determine the registration between the topographic representation and the 3-D model.
  • this then allows the position of the laparoscope (which is known relative to the topographic representation) to be determined relative to the 3-D model (operation 250).
  • This information can then be used, for example, to display the 3-D model, including relevant information from the pre-operative imaging, in combination (and registration) with the view obtained from the stereo laparoscope, thereby supporting the image-guided surgical procedure.
  • the processing of operations 220, 230, and 240 to determine the registration can be performed as a preliminary portion of the operative procedure - e.g. by acquiring the image patches, determining the topographic representation, and registering to the 3-D model from the pre-operative imaging. As described herein, this procedure (and the associated processing) can be performed quickly, within a few minutes, which is feasible within a real-time, intra-operative context. The resulting registration can then be used to provide the image-guided support for the laparoscopic procedure by allowing the 3-D model and pre-operative imaging to be displayed to a clinician in conjunction with (and aligned to) the view currently obtained from the stereo laparoscope.
  • the approach described herein provides an image-guided laparoscopy system that may be used (for example) for abdominal surgery such as liver resection.
  • a validation of this approach has been performed based on a realistic anatomy phantom and in-vivo porcine data.
  • Registration of pre-operative contrast-enhanced CT data to intra-operative video has been achieved by combining stereoscopic surface reconstruction and optical tracking of the laparoscope. Multiple patches of visible surfaces may be reconstructed and combined accurately and quickly from stereo laparoscopy. Coupled with a locally rigid transformation model, registration has been achieved within 5 minutes. This has allowed laparoscopic surgical guidance to be obtained in a surgically realistic setting (believed to be the first time this has been achieved). Testing of the system on a realistic liver phantom has shown that subsurface landmarks can be localised to an accuracy of 2.9 mm RMS, while testing on porcine liver models has indicated an accuracy of 8.6 mm RMS for anatomical landmarks.
  • the approach is relevant to a range of procedures, including:
  • liver resections, of which there are about 1,800 per year in the UK;
  • pancreatic resections, about 2,200 per year in the UK;
  • kidney operations for cancer, about 3,300 per year in the UK;
  • gallbladder removal surgery, about 60,000 per year in the UK.
  • Other contexts for the use of the method and system described herein will be apparent to the skilled person.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Endoscopes (AREA)

Abstract

A computer and a computer-implemented method are disclosed for supporting laparoscopic surgery. The method comprises the steps of: providing a 3-dimensional model of an anatomical structure of the subject of the laparoscopic surgery; obtaining, from a stereo laparoscope, corresponding stereo pairs of images of the anatomical structure of the subject; processing the corresponding stereo pairs of images of the anatomical structure to generate a topographical representation of the anatomical structure; determining a registration between the 3-dimensional model of the anatomical structure and the topographical representation of the anatomical structure; and using the registration to determine a position of the laparoscope with respect to the 3-dimensional model.
PCT/GB2015/052631 2014-09-19 2015-09-11 Computer and computer-implemented method for supporting laparoscopic surgery Ceased WO2016042297A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GBGB1416586.4A GB201416586D0 (en) 2014-09-19 2014-09-19 Computer and computer-implemented method for supporting laparoscopic surgery
GB1416586.4 2014-09-19

Publications (1)

Publication Number Publication Date
WO2016042297A1 (fr) 2016-03-24

Family

ID=51869169

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2015/052631 Ceased WO2016042297A1 (fr) Computer and computer-implemented method for supporting laparoscopic surgery

Country Status (2)

Country Link
GB (1) GB201416586D0 (fr)
WO (1) WO2016042297A1 (fr)



Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140241600A1 (en) * 2013-02-25 2014-08-28 Siemens Aktiengesellschaft Combined surface reconstruction and registration for laparoscopic surgery

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Dan Wang et al., "Real Time 3D Visualization of Intraoperative Organ Deformations Using Structured Dictionary", IEEE Transactions on Medical Imaging, vol. 31, no. 4, 1 April 2012, pages 924-937, XP011491076, ISSN: 0278-0062, DOI: 10.1109/TMI.2011.2177470 *
Thompson, Stephen et al., "Accuracy validation of an image guided laparoscopy system for liver resection", Progress in Biomedical Optics and Imaging, SPIE, vol. 9415, 18 March 2015, page 941509, XP060051293, ISSN: 1605-7422, DOI: 10.1117/12.2080974 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113143459A (zh) * 2020-01-23 2021-07-23 Hisense Visual Technology Co., Ltd. Laparoscopic augmented reality surgical navigation method and apparatus, and electronic device
CN115120350A (zh) * 2021-03-24 2022-09-30 Shanghai MicroPort MedBot (Group) Co., Ltd. Computer-readable storage medium, electronic device, position calibration, and robot system
CN115607285A (zh) * 2022-12-20 2023-01-17 Changchun University of Science and Technology Single-port laparoscope positioning device and method
CN115607285B (zh) * 2022-12-20 2023-02-24 Changchun University of Science and Technology Single-port laparoscope positioning device and method

Also Published As

Publication number Publication date
GB201416586D0 (en) 2014-11-05

Similar Documents

Publication Publication Date Title
US11025889B2 (en) Systems and methods for determining three dimensional measurements in telemedicine application
Shahidi et al. Implementation, calibration and accuracy testing of an image-enhanced endoscopy system
US9646423B1 (en) Systems and methods for providing augmented reality in minimally invasive surgery
EP3007635B1 (fr) Computer-implemented technique for determining a coordinate transformation for surgical navigation
US10327624B2 (en) System and method for image processing to generate three-dimensional (3D) view of an anatomical portion
US20180158201A1 (en) Apparatus and method for registering pre-operative image data with intra-operative laparoscopic ultrasound images
CN111494009B (zh) Image registration method and apparatus for surgical navigation, and surgical navigation system
US10716457B2 (en) Method and system for calculating resected tissue volume from 2D/2.5D intraoperative image data
Thompson et al. Accuracy validation of an image guided laparoscopy system for liver resection
Wengert et al. Markerless endoscopic registration and referencing
JP2012525190A (ja) Real-time depth estimation from monocular endoscope images
CN102428496A (zh) Registration and calibration for markerless tracking of an EM-tracking endoscope system
CN105931237A (zh) Image calibration method and system
KR101767005B1 (ko) Image registration method and image registration apparatus using surface matching
Ma et al. Moving-tolerant augmented reality surgical navigation system using autostereoscopic three-dimensional image overlay
JP6493885B2 (ja) Image registration apparatus, method of operating an image registration apparatus, and image registration program
Lapeer et al. Image‐enhanced surgical navigation for endoscopic sinus surgery: evaluating calibration, registration and tracking
US20210128243A1 (en) Augmented reality method for endoscope
WO2020031071A1 (fr) Locating an internal organ of a subject for providing assistance during surgery
WO2016042297A1 (fr) Computer and computer-implemented method for supporting laparoscopic surgery
KR101988531B1 (ko) Surgical navigation system for liver lesions using augmented reality technology, and organ image display method
CN113470184A (zh) Endoscope augmented reality error compensation method and apparatus
Bernhardt et al. Automatic detection of endoscope in intraoperative ct image: Application to ar guidance in laparoscopic surgery
US10049480B2 (en) Image alignment device, method, and program
Field et al. Stereo endoscopy as a 3-D measurement tool

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 15766216; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 15766216; Country of ref document: EP; Kind code of ref document: A1)