WO2023036848A1 - Augmented reality surgical navigation system - Google Patents
- Publication number
- WO2023036848A1 (PCT/EP2022/074921)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- orientation
- virtual environment
- displacement sensor
- patient
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
- A61B2034/101—Computer-aided simulation of surgical operations
- A61B2034/102—Modelling of surgical devices, implants or prosthesis
- A61B2034/105—Modelling of the patient, e.g. for ligaments or bones
- A61B2034/107—Visualisation of planned trajectories or target regions
- A61B2034/2048—Tracking techniques using an accelerometer or inertia sensor
- A61B2034/2051—Electromagnetic tracking systems
- A61B2034/2055—Optical tracking systems
- A61B2034/2065—Tracking using image or pattern recognition
- A61B2034/2068—Tracking using pointers, e.g. pointers having reference marks for determining coordinates of body points
- A61B2090/365—Correlation of different images or relation of image positions in respect to the body; augmented reality, i.e. correlating a live optical image with another image
- A61B2090/372—Details of monitor hardware
- A61B2090/502—Headgear, e.g. helmet, spectacles
- G06V2201/03—Recognition of patterns in medical or anatomical images
- G06V2201/034—Recognition of patterns in medical or anatomical images of medical instruments
Definitions
- Figure 1A shows a surgeon 1 carrying out a surgical procedure on a patient 3.
- the surgical procedure involves cranial neurosurgery on the brain, on blood vessels or on nerves located in the skull or near the brain.
- Such neurosurgical procedures require high precision to avoid significant complications, and the surgeon 1 makes use of an Augmented Reality (AR) surgical navigation system to assist with carrying out the neurosurgical procedure with such high precision.
- the AR surgical navigation system includes a head-mounted AR display device 5. More particularly, in this example the AR display device 5 is a Microsoft HoloLens 2 device.
- the AR display device 5 senses the physical environment of the head-mounted AR display device, e.g. a surgical theatre, and generates a virtual environment corresponding to the physical environment using a first co-ordinate system, which will hereafter be called the virtual environment co-ordinate system.
- the head-mounted AR display device 5 presents an image 15 to the surgeon 1 of a three-dimensional model of the head of the patient that is derived from scan data and positioned and oriented within the virtual environment of the AR surgical navigation system so as to match the position and orientation of the head of the patient 3 in the physical environment.
- the image is rendered from a viewpoint in the virtual environment corresponding to the position and orientation of the head-mounted AR device 5.
- the displayed image is superimposed over the corresponding portion of the physical body of the patient 3 in the field of view of the surgeon 1.
- the head-mounted AR display device 5 presents information detailing the trajectory and distance to a target location.
- the AR surgical navigation system also includes a displacement sensing system, which in this example is an electromagnetic tracking system in which movement of a displacement sensor 7 in six degrees of freedom (three translational and three rotational) is monitored using a field generator 9, which generates an electromagnetic field that induces currents in the displacement sensor 7 that can be analysed to determine the sensed movements.
- the displacement sensor 7 is a substantially planar device, as shown in Figure 2, having optical fiducial markers 21a, 21b and 21c (such as AprilTag, ARTag, ARToolkit, ArUco and the like) positioned thereon in a configuration such that a cartesian co-ordinate system for the sensed movement corresponds to a first axis 23a aligned with a line joining the first optical fiducial marker 21a and the second optical fiducial marker 21b, a second axis 23b aligned with a line joining the first optical fiducial marker 21a and the third fiducial marker 21c, and a third axis 23c aligned with a line perpendicular to the plane of the displacement sensor 7 and passing through the first optical fiducial marker 21a.
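A minimal sketch of this axis construction, assuming the three-dimensional positions of the three fiducial markers are already known in a common reference frame (the function and variable names below are illustrative, not taken from the disclosure):

```python
import numpy as np

def sensor_frame_from_markers(m_a, m_b, m_c):
    """Build the displacement sensor co-ordinate frame from the 3D positions of
    the optical fiducial markers 21a, 21b and 21c (origin placed at marker 21a).
    Returns (origin, R), where the columns of R are the first, second and third
    axes (23a, 23b, 23c) expressed in the frame of the input points."""
    m_a, m_b, m_c = (np.asarray(p, dtype=float) for p in (m_a, m_b, m_c))
    x = m_b - m_a                       # first axis: along the line 21a -> 21b
    x /= np.linalg.norm(x)
    y = m_c - m_a                       # second axis: along the line 21a -> 21c
    y -= np.dot(y, x) * x               # re-orthogonalise in case the markers
    y /= np.linalg.norm(y)              # are not placed at exactly 90 degrees
    z = np.cross(x, y)                  # third axis: normal to the sensor plane
    return m_a, np.column_stack((x, y, z))
```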
- Displacement sensors 7 can be fixed to the forehead of the patient 3, although other positions on the patient 3 are possible, and to surgical devices.
- the AR surgical navigation system also includes a fixed display 11 that displays the field of view of the surgeon 1, including both an image from a camera in the AR display device 5 that captures the view of the surgeon 1 of the physical environment and the displayed image corresponding with the same view of the virtual environment.
- the fixed display 11 may show a first image of the head of the patient 3 captured in the physical environment with a second image of the brain of the patient 3 captured from the virtual environment, with the first and second images having matching positions and orientations.
- the AR surgical navigation system includes a processing system having three different modes of operation, namely a patient registration mode, a displacement sensor registration mode and a surgery mode.
- the patient registration mode involves, before surgery, identifying the position and orientation in the virtual environment corresponding to the head of the patient 3, and then introducing the three-dimensional model derived from the previously scanned data of the head of the patient 3 into the virtual environment at a position and orientation matching the identified position and orientation for the physical head of the patient 3.
- the patient registration mode uses image processing techniques that require the head of the patient 3 to be in the field of view of the surgeon 1 wearing the head-mounted AR device 5.
- in other examples, the patient registration mode uses the displacement sensing system to perform the orientation and positioning of the three-dimensional model into the virtual environment.
- the displacement sensor registration mode involves determining the position and orientation in the virtual world corresponding to the position and orientation of one or more six-dimensional displacement sensors (three axes of translation and three axes of rotation) which can be fixed relative to the head of the patient, for example on the forehead of the patient, or on a surgical device. In this way, detected movement of the displacement sensor can be converted into a corresponding translation and/or rotation in the virtual world of the corresponding object to which it is fixed.
- the displacement sensor registration mode also uses image processing techniques that require the head of the patient 3 to be in the field of view of the surgeon 1 wearing the head-mounted AR device 5.
- the surgery mode involves a rendered image of the three-dimensional model being displayed to the surgeon 1 from a viewpoint corresponding to the position and orientation of the AR display device 5.
- movement of the displacement sensor 7 fixed to the head of the patient 3 is sensed and converted by the AR surgical navigation system to a corresponding translation and/or rotation of the three-dimensional model of the head of the patient 3 in the virtual environment so as to maintain the rendered image being superimposed over the corresponding part of the head of the patient 3 in the field of view of the surgeon 1 even if the head of the patient 3 moves.
- movement of a displacement sensor fixed relative to a surgical device such as the stylet 13 is used to track movement of a virtual model of the surgical device in the virtual environment.
- the surgery mode does not require the displacement sensor 7 to be in the field of view of the surgeon 1 wearing the AR display device 5. This is advantageous because during surgery the head of the patient 3 is typically draped for hygiene reasons and therefore the displacement sensor 7 is not in the field of view of the surgeon 1. In addition, this is advantageous because during surgery a surgical device may be inserted into body tissue and accordingly the tip of the surgical device is no longer visible to the surgeon 1.
- Figure 3A schematically illustrates the functionality of the surgical navigation system in the patient registration mode, the patient registration mode in this example using an optical system which employs image processing techniques.
- a head-mounted augmented reality (AR) device 201 (corresponding to the AR device 5 of Figure 1) has a display 203, a camera 205 having associated depth sensing capabilities and a device pose calculator 207.
- the camera 205 takes images of the field of view of the surgeon 1 and the device pose calculator 207 processes those images to determine the position and orientation in the virtual environment corresponding to the position and orientation of the AR device 201 in the physical environment of the AR surgical navigation system.
- scan data 209 corresponding to a pre-operative scan of the portion of the head of the patient is input to a processing system in which the scan data is processed by a 3D model generation module 211 to generate a virtual three-dimensional model of the portion of the head of the patient using a second co-ordinate system, which will hereafter be referred to as the scan model coordinate system.
- the pre-operative scan data is received in Digital Imaging and Communications in Medicine (DICOM) format which stores slice-based images.
- the pre-operative scan data may be from CT scans, MRI scans, ultrasound scans, or any other known medical imaging procedure.
- the slice-based images of the DICOM files are processed to construct a plurality of three-dimensional (3D) scan models each concentrating on a different aspect of the scan data using conventional segmentation filtering techniques to identify and separate anatomical features within the 2D DICOM images.
- one of the 3D scan models is of the exterior surface of the head of the patient.
- Other 3D scan models may represent internal features of the head and brain such as brain tissue, blood vessels, the skull of the patient, etc. All these other models are constructed using the scan model co-ordinate system and the same scaling so as to allow the scan models to be selectively exchanged.
- the patient registration process then generates a 3D point cloud of landmark points in the 3D scan model of the exterior surface of the head.
- the patient registration mode renders, at S201, an image of the 3D model of the exterior surface of the head of the patient under virtual lighting conditions and from the viewpoint of a virtual camera positioned directly in front of the face of the patient 3 to create two-dimensional image data corresponding to an image of the face of the patient 3.
- a pre-trained facial detection model analyses, at S203, this image of the face of the patient 3 to obtain a set of 2D landmark points relating to features of the face of the patient 3.
- These landmark points are then converted, at S205, into a 3D set of points using ray tracing or casting.
- the position of the virtual camera is used to project a ray from a 2D landmark point in the image plane of the virtual camera into virtual model space, and the collision point of this ray with the 3D model of the exterior surface of the head of the patient corresponds to the corresponding 3D position of that landmark point.
- in this way, the 3D point cloud of landmark points corresponding to positions of facial landmarks is generated.
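The disclosure does not name a particular ray-casting implementation; the sketch below assumes the exterior-surface model is available as a triangle mesh loaded with the open-source trimesh package and that the virtual camera is a simple pinhole camera with intrinsic matrix K, so these choices are assumptions rather than part of the described system:

```python
import numpy as np
import trimesh

def landmarks_2d_to_3d(mesh, cam_pos, cam_rot, K, landmarks_2d):
    """Cast a ray from the virtual camera through each 2D facial landmark and
    keep the nearest intersection with the head-surface mesh (trimesh.Trimesh).
    cam_pos: (3,) camera position in model space; cam_rot: (3, 3) rotation whose
    columns are the camera axes in model space; landmarks_2d: (N, 2) pixels."""
    cam_pos = np.asarray(cam_pos, dtype=float)
    K_inv = np.linalg.inv(K)
    points_3d = []
    for u, v in landmarks_2d:
        ray_cam = K_inv @ np.array([u, v, 1.0])          # ray in camera frame
        ray_dir = cam_rot @ (ray_cam / np.linalg.norm(ray_cam))
        hits, _, _ = mesh.ray.intersects_location(
            ray_origins=[cam_pos], ray_directions=[ray_dir])
        if len(hits):
            nearest = hits[np.argmin(np.linalg.norm(hits - cam_pos, axis=1))]
            points_3d.append(nearest)                    # front of the face
    return np.array(points_3d)
```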
- the patient registration process compares the 3D point cloud with image data for an image of the face of the patient captured by the camera 205 to determine, at 213, transform data for transforming the 3D model into a co-ordinate system relative to the camera 205, which will hereafter be referred to as the camera co-ordinate system.
- the patient registration process determines a translation vector and rotation matrix which matches the position and orientation of the 3D point cloud with the position and orientation of the head of the patient in the captured image.
- the patient registration process processes, at S303, the corresponding image data to detect faces within the image, and optical fiducial markers in the image that indicate which of the detected faces belongs to the patient 3, as only the patient 3 has optical fiducial markers on them.
- the optical fiducial markers are useful because there may be one or more persons other than the patient 3 within the image.
- the patient registration process then identifies, at S305, a 2D set of landmark points of the face of the patient 3 within the captured image using the same pre-trained facial detection model as was used to process the rendered image of the 3D model of the exterior surface of the head.
- the patient registration process then aligns, at S307, the 3D point cloud of landmark points for the 3D model and the 2D set of landmark points from the captured image using a conventional perspective-n-point (PnP) pose estimation process, which determines a translation vector and a rotation matrix that positions and orientates the 3D model in the field of view of the AR camera 205 co-registered with the physical head of the patient 3.
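OpenCV's solvePnP is one conventional implementation of such a perspective-n-point estimate; the camera intrinsics, distortion coefficients and landmark correspondences below are assumptions made purely for illustration:

```python
import cv2
import numpy as np

def estimate_head_pose(model_points_3d, image_points_2d, K, dist_coeffs=None):
    """Estimate the pose of the head relative to the camera from N >= 4 pairs of
    3D model landmarks and matching 2D image landmarks.
    Returns (R, t): rotation matrix and translation vector taking scan-model
    co-ordinates into the camera co-ordinate system."""
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)                 # assume an undistorted camera
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(model_points_3d, dtype=np.float64),
        np.asarray(image_points_2d, dtype=np.float64),
        K, dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)                    # axis-angle -> 3x3 rotation
    return R, tvec.reshape(3)
```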
- the determined translation vector and rotation matrix are relative to the field of view of the camera when capturing the image.
- the head-mounted AR device moves within the physical environment, and therefore a further transformation is required to determine compensated transform data to locate the 3D model in the virtual environment at a position and orientation that matches the position and orientation of the head of the patient 3 in the physical environment.
- the patient registration process then transforms, at 215, the 3D model into the virtual environment co-ordinate system.
- the patient registration process determines a compensated translation vector and a compensated rotation matrix which matches the position and orientation of the 3D point cloud with the position and orientation in the virtual environment that corresponds to the position and orientation of the head in the physical environment.
- the patient registration process receives, at S309, data from the device pose calculator 207 that identifies the position and the orientation within the virtual environment that corresponds to the position and orientation of the camera 205 in the physical environment when the image was captured.
- the patient registration process then calculates, at S311, a compensating translation vector and rotation matrix using the data provided by the device pose calculator 207.
- the compensating translation vector and rotation matrix form compensated transform data to transform the location and orientation of the 3D model of the exterior surface of the head into a location and orientation in the virtual environment, defined using the virtual environment co-ordinate system, so that the position and orientation of the 3D model in the virtual environment matches the position and orientation of the head of the patient 3 in the real world.
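A hedged sketch of this compensation step, treating the pose-estimation result as the model pose in the camera co-ordinate system and the device pose calculator output as the camera pose in the virtual environment co-ordinate system (4x4 homogeneous matrices; the helper names are illustrative):

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a rotation matrix and translation vector into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(t, dtype=float)
    return T

def model_pose_in_virtual_environment(R_cam_model, t_cam_model,
                                      R_world_cam, t_world_cam):
    """Compose the camera-relative model pose (from pose estimation) with the
    camera pose in the virtual environment (from the device pose calculator) to
    obtain the compensated transform placing the 3D model in the virtual
    environment co-ordinate system."""
    T_cam_model = to_homogeneous(R_cam_model, t_cam_model)
    T_world_cam = to_homogeneous(R_world_cam, t_world_cam)
    return T_world_cam @ T_cam_model              # model -> virtual environment
```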
- the patient registration process then estimates the accuracy of the coregistration of the physical head and the virtual model.
- depth data from the depth sensors associated with the camera 205 is used to construct, at S313, a point cloud of landmark points of the physical face of the patient from the 2D set of landmark points determined from the captured image. The accuracy of the overlap between this point cloud and the point cloud of landmark points of the virtual model is then calculated, at S315, for example by calculating the sum of the cartesian distances between corresponding points of each point cloud.
- the patient registration process repeats, at S317, the above processing for multiple captured images, while the head of the patient is immobilised but the camera 205 may be mobile, until sufficient accuracy is achieved.
- This assessment of accuracy can involve averaging the previous best-fits, and determining, at S317, whether the difference between current and previous averaged fits has fallen below a defined accuracy threshold and hence converged on some suitable alignment.
- once converged, co-registration is deemed to be sufficiently accurate and the averaged compensated translation vector and rotation matrix can be used to introduce the 3D models into the virtual environment.
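One possible form of the overlap metric and of the convergence test on repeated captures is sketched below; the window length and tolerance are placeholders rather than values from the disclosure:

```python
import numpy as np

def registration_error(model_points, measured_points):
    """Sum of cartesian distances between corresponding landmark points of the
    virtual model and of the physical face (built from the depth data)."""
    diff = np.asarray(model_points) - np.asarray(measured_points)
    return float(np.linalg.norm(diff, axis=1).sum())

def fits_converged(translations, window=5, tol=1.0):
    """Check whether the running average of the compensated translation vectors
    from successive captures has stopped changing (the rotation component can be
    handled analogously)."""
    if len(translations) < 2 * window:
        return False
    prev_avg = np.mean(np.asarray(translations[-2 * window:-window]), axis=0)
    curr_avg = np.mean(np.asarray(translations[-window:]), axis=0)
    return bool(np.linalg.norm(curr_avg - prev_avg) < tol)
```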
- Figure 4A schematically illustrates the functionality of the surgical navigation system in the displacement sensor registration mode.
- each displacement sensor 7 is equipped with optical fiducial markers 21a-21c which are positioned to define the origin and axes of a co-ordinate system in which the displacement sensor 7 measures movement in six directions (three translational and three rotational), hereafter referred to as the displacement sensor co-ordinate system.
- the aim of the displacement sensor registration process is to identify the position of the origin of the displacement sensor co-ordinate system, and the orientation of the axes of the displacement sensor co-ordinate system, in the virtual environment.
- the displacement sensor registration process receives, at step S401, an image of the physical environment, which may be an image used for patient registration, together with associated depth data and identifies, at step S403, the optical fiducial markers on the displacement sensor 7, which can be done using available software from libraries such as OpenCV combined with helper libraries which provide specific implementations for the optical fiducial marker used.
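As an illustration of the marker identification step, the fragment below uses the classic cv2.aruco interface (present in OpenCV builds before 4.7; later releases expose the same functionality through cv2.aruco.ArucoDetector) and back-projects each detected marker centre to a 3D point in the camera frame using the aligned depth map; the intrinsic matrix K and the choice of marker dictionary are assumptions for illustration only:

```python
import cv2
import numpy as np

def locate_fiducials(image_bgr, depth_map, K, dictionary=cv2.aruco.DICT_4X4_50):
    """Detect ArUco-style fiducial markers on the displacement sensor and return
    a dict mapping marker id to its 3D centre in the camera co-ordinate system,
    using the depth value sampled at the marker centre."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    aruco_dict = cv2.aruco.getPredefinedDictionary(dictionary)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)
    if ids is None:
        return {}
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    points = {}
    for marker_id, quad in zip(ids.flatten(), corners):
        u, v = quad[0].mean(axis=0)               # marker centre in pixels
        z = float(depth_map[int(v), int(u)])      # metric depth at that pixel
        points[int(marker_id)] = np.array([(u - cx) * z / fx,
                                           (v - cy) * z / fy,
                                           z])
    return points
```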
- the displacement sensor registration process uses pose estimation to determine, at S405, a rotational transformation required to rotate the plane of the displacement sensor 7 containing the optical fiducial markers to align with the object plane of the camera 205. From this rotation transformation, the displacement sensor registration process determines, at S407, a rotation matrix transforming the displacement sensor co-ordinate system to the AR camera co-ordinate system. In addition, the depth data corresponding to the optical fiducial markers is used to calculate a 3D position of the origin of the displacement sensor 7 in the AR camera coordinate system.
- the determined origin position and rotation matrix are relative to the field of view of the camera 205 when capturing the image.
- the displacement sensor registration process generates, at S409, a compensating translation vector and rotation matrix using a location and orientation of the camera 205, provided by the device pose calculator 207, for when the image was captured.
- the compensating translation vector and rotation matrix converts the location of the origin of the displacement sensor 7 and the orientation of the displacement sensor 7 from the AR camera co-ordinate system to the virtual environment co-ordinate system, so that the position of the origin of the displacement sensor 7 and the orientation of the displacement sensor co-ordinate system in the virtual environment matches the position and orientation of the displacement sensor 7 in the physical environment.
- the displacement sensor registration process can then be repeated, at S413, using multiple different images captured by the camera 205 to acquire an average position of the origin of the displacement sensor 7 and the orientation of the displacement sensor co-ordinate system in the virtual environment.
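Averaging the origin position over captures is straightforward, whereas averaging the orientation calls for a rotation mean; a small sketch using SciPy's Rotation class (an implementation choice that is not specified in the disclosure) might be:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def average_sensor_registration(origins, rotations):
    """Average per-capture estimates of the displacement sensor registration.
    origins: list of (3,) origin positions in the virtual environment;
    rotations: list of 3x3 matrices giving the sensor axes in the virtual
    environment for each capture."""
    mean_origin = np.mean(np.asarray(origins, dtype=float), axis=0)
    mean_rotation = Rotation.from_matrix(np.asarray(rotations)).mean().as_matrix()
    return mean_origin, mean_rotation
```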
- Figure 5 schematically illustrates the functionality of the surgical navigation system in the surgery mode.
- the 3D models 211 are transformed into the virtual environment co-ordinate system.
- for the 3D models derived from the patient scan data, this involves using the transformation data generated during the patient registration process, so that the position and orientation of each model in the virtual environment matches the position and orientation of the head of the patient 3 in the physical environment.
- for 3D models corresponding to surgical devices, this involves determining the position and orientation of the surgical device in the virtual environment based on the determined position and orientation of the displacement sensor 7 attached to the surgical device.
- Each 3D model then undergoes a further transformation based on the readings from the displacement sensor 7, and then the resultant transformed model is output to a rendering engine 235.
- the pose calculator 207 outputs data indicating the position and orientation in the virtual environment corresponding to the position and orientation of the AR camera 205 in the physical environment. This allows the rendering engine 235 to render two-dimensional image data corresponding to the view of a virtual camera, whose optical specification matches the optical specification of the AR camera 205, positioned and orientated in the virtual environment with the position and orientation indicated by the pose calculator 207.
- the resultant two-dimensional image data is then output by the rendering engine 235 to the display 203 for viewing by the surgeon. More particularly, the display of the head-mounted AR device 5 is a semi-transparent display device enabling the displayed image to be superimposed in the view of the surgeon 1.
- if the head of the patient 3 moves during surgery, the displacement sensor 7 attached to the head will make a corresponding movement and the displacement sensing system will output displacement sensor readings 218a in six dimensions corresponding to translational movement of the origin point and rotational movement of the displacement sensor co-ordinate system.
- the displacement sensor readings 218a are converted to a displacement transformation 218b to effect a corresponding translation and rotation of the 3D model in the virtual environment. In this way, co-registration between the head of the patient in the physical environment and the 3D model in the virtual environment is maintained.
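The disclosure gives no explicit formulas for converting the readings 218a into the displacement transformation 218b; assuming the reading is reported relative to the sensor's registered origin and co-ordinate axes, one possible conversion is the following sketch:

```python
import numpy as np

def displacement_to_world_transform(T_world_sensor, delta_t_sensor, R_delta_sensor):
    """Convert a six-axis reading (translation of the origin point and rotation
    of the sensor axes, both expressed in the sensor's own co-ordinate system)
    into the equivalent rigid transform in the virtual environment, given the
    registered sensor pose T_world_sensor (4x4)."""
    T_delta = np.eye(4)
    T_delta[:3, :3] = R_delta_sensor
    T_delta[:3, 3] = np.asarray(delta_t_sensor, dtype=float)
    # conjugate by the registered pose to express the motion in world co-ordinates
    return T_world_sensor @ T_delta @ np.linalg.inv(T_world_sensor)

def update_model_pose(T_world_model, T_world_sensor, delta_t_sensor, R_delta_sensor):
    """Apply the sensed movement to the 3D model so that co-registration with the
    physical body part is maintained."""
    T_world_delta = displacement_to_world_transform(
        T_world_sensor, delta_t_sensor, R_delta_sensor)
    return T_world_delta @ T_world_model
```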
- the surgeon 1 may make a visual check that the virtual image of one or more of the 3D models is aligned with the head of the patient 3 before draping the head of the patient 3 for surgery. After draping, the surgeon 1 is reliant on the transformations based on the sensor readings from the displacement sensor 7 to maintain alignment.
- similarly, if the surgical device moves, the displacement sensor attached to the surgical device will make a corresponding movement and the displacement sensing system will output displacement sensor readings 218a in six dimensions corresponding to translational movement of the origin point and rotational movement of the displacement sensor co-ordinate system.
- the displacement sensor readings 218a are converted to a displacement transformation 218b to effect a corresponding translation and rotation of the 3D model of the surgical device in the virtual environment.
- a target location can be identified in the 3D models of the head of the patient 3, and data corresponding to a distance and trajectory between the tip of the stylet 13 and the target location can be calculated and displayed superimposed over the rendered image of the virtual environment.
- This data may be displayed with or without a rendered virtual model of the stylet 13.
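A simple illustration of how the distance and trajectory information could be computed in virtual-environment co-ordinates (comparing against a planned approach direction is an added assumption, not a feature recited above):

```python
import numpy as np

def tip_to_target(tip_world, target_world, planned_direction=None):
    """Distance and direction from the tracked stylet tip to the target location,
    plus (optionally) the angular deviation from a planned approach direction."""
    offset = np.asarray(target_world, dtype=float) - np.asarray(tip_world, dtype=float)
    distance = float(np.linalg.norm(offset))
    direction = offset / distance if distance > 0 else offset
    result = {"distance": distance, "direction": direction}
    if planned_direction is not None:
        planned = np.asarray(planned_direction, dtype=float)
        planned /= np.linalg.norm(planned)
        cosine = np.clip(np.dot(direction, planned), -1.0, 1.0)
        result["angle_to_plan_deg"] = float(np.degrees(np.arccos(cosine)))
    return result
```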
- the augmented reality surgical navigation system can also be used to modify the form of a 3D model during surgery to take account of modifications to the corresponding body part during the surgical procedure, or to track the trajectory of instruments such as spinal screws.
- Figures 6A and 6B illustrate a surgeon 1 carrying out spinal surgery in which vertebrae of the spine are physically moved relative to each other.
- the 3D model is accordingly of the relevant vertebrae of the spine, with each vertebra having a respective sub-model.
- a displacement sensor 7 is fixed relative to each vertebra, and the position of the origin and the orientation of the co-ordinate axes of each displacement sensor in the virtual environment is determined in the manner described above.
- when the surgeon 1 physically inserts a screw using the electromagnetically tracked tool A, the insertion and fixation of the screw moves the vertebrae relative to each other, and so the displacement sensor 7 on each vertebra will move with its respective vertebra.
- the displacement readings for each displacement sensor can then be converted to transformation data for transforming the position and orientation of the corresponding sub-model in the virtual environment.
- the form of the 3D model is modified during surgery to take account of the relative movement of the vertebrae as a result of surgical manipulation.
- the trajectory and position of the spinal screws B will be tracked within the vertebrae.
- patient registration involves the processing of two-dimensional images of a body part of the patient to generate a set of three-dimensional points to allow co-registration between three-dimensional scan data in the virtual world and the body part of the patient.
- in other examples, the patient registration mode uses the displacement sensing system instead of, or in conjunction with, the optical system described earlier.
- a displacement sensor 7 can output displacement sensor readings 218a in six dimensions corresponding to translational movement of the origin point and rotational movement of the displacement sensor coordinate system which, due to the registration process, corresponds to a position and orientation in the virtual environment.
- whereas the optical patient registration process identifies a 2D set of landmark points of the face of the patient 3 within a captured image and aligns a 3D point cloud of landmark points for the 3D model with the 2D set of landmark points from the captured image, in the patient registration process which uses the displacement sensing system it is not necessary to capture an image of the patient 3.
- the patient registration process which uses the displacement sensing system obtains a 3D set of landmark points which can be aligned with the 3D model point cloud.
- the 3D position of each landmark point is obtained by, for example, the surgeon bringing a displacement sensor into close proximity with the landmark point and triggering the displacement sensor system to provide a displacement sensor reading 218a that provides the three-dimensional position of that landmark point in the virtual world.
- upon capturing the 3D point cloud of displacement sensor readings relating to the landmark points, the registration system performs an alignment process as described previously to align the 3D landmark point cloud with the 3D model point cloud.
- the alignment process seeks to minimize the cartesian distance between the landmark points and the 3D model points until a pre-determined accuracy has been achieved.
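With corresponding landmark pairs, a standard least-squares rigid alignment (the Kabsch method) minimises exactly this kind of cartesian distance; the sketch below is illustrative and is not necessarily the specific alignment algorithm of the disclosure:

```python
import numpy as np

def rigid_align(model_points, measured_points):
    """Least-squares rigid alignment of the 3D model landmark point cloud onto
    the landmark points captured with the displacement sensor. Both inputs are
    (N, 3) arrays with corresponding rows. Returns (R, t, residual) such that
    R @ p + t maps model points onto the measured points."""
    P = np.asarray(model_points, dtype=float)
    Q = np.asarray(measured_points, dtype=float)
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    residual = float(np.linalg.norm((P @ R.T + t) - Q, axis=1).sum())
    return R, t, residual
```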
- the displacement sensor used in the patient registration process may be in the form of a stylet sensor, for example, which permits precise identification of each landmark point by bringing the displacement sensor into close proximity with the landmark point of the patient’s body.
- the displacement sensor may be positioned on a pointing tool, and may be positioned away from e.g. a sterile pointing end of the pointing tool.
- the sterile pointing end of the pointing tool is at a known position relative to the displacement sensor, such that when the sterile pointing end is in close proximity with a landmark position on the patient’s body, the displacement sensor reading can be converted to a 3D landmark position based on the positional relationship between the sterile pointing end of the tool and the position of the displacement sensor on the tool.
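Assuming the offset of the sterile pointing end is known in the sensor's co-ordinate system as a fixed calibration constant, the conversion reduces to a single rigid transform:

```python
import numpy as np

def tip_position_world(T_world_sensor, tip_offset_sensor):
    """Convert a displacement sensor reading on the pointing tool (expressed as a
    4x4 pose in the virtual environment) into the 3D position of the sterile
    pointing end, using its fixed offset in the sensor co-ordinate system."""
    tip_h = np.append(np.asarray(tip_offset_sensor, dtype=float), 1.0)
    return (np.asarray(T_world_sensor, dtype=float) @ tip_h)[:3]
```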
- the registration process may, in some examples, use the displacement sensing system in conjunction with the optical system. For example, it may be desirable to calculate the position of some landmark points optically to avoid the risk of physical contact with a particular body part of the patient. Combining the two methods of patient registration may help improve accuracy of the overall patient registration process by using one method to verify the other. Furthermore, the optical patient registration system may be performed automatically, and the surgeon may use the displacement sensing system where verification of particular landmark points is required, which may further improve the speed of the patient registration process whilst retaining accuracy.
- the head-mounted AR device is a Microsoft HoloLens 2 device. It will be appreciated that other head-mounted AR devices that display a virtual environment in association with a physical environment could be used. Further, examples of the augmented reality surgical navigation system need not include a headmounted AR device as alternatively a fixed camera could be used, for example in conjunction with the fixed display 11, with the displacement sensing system being used to maintain registration between a 3D model in a virtual environment and a corresponding body part in a physical environment.
- while the displacement sensing system of the illustrated embodiment is an electromagnetic tracking system, other displacement sensing systems that do not rely upon image analysis to track movement in six dimensions could be used; for example, a six-axis gyroscopic system could be used.
- optical fiducial markers on the displacement sensors assist in the registration of the position of the origin of the displacement sensor and the orientation of the co-ordinate axes of the displacement sensor in the virtual environment.
- the optical fiducial markers are not, however, essential as the displacement sensor may be designed to have sufficient landmark points to enable registration to be performed.
Abstract
An augmented reality surgery system is disclosed. The augmented reality surgery system comprises a camera, display (5), displacement sensing system, and processing system. A 3D model is displayed in a virtual environment. The displacement sensing system comprises displacement sensors (7) which are used to update the position of the 3D model in the virtual environment.
Description
AUGMENTED REALITY SURGICAL NAVIGATION SYSTEM
Technical Field
An augmented reality surgical navigation system is disclosed in which a computer-generated image of a portion of a patient derived from scan data is displayed to the surgeon in alignment with that portion of the patient, or an image of that portion of the patient, during a surgical procedure to provide assistance to the surgeon. The invention is particularly concerned with monitoring movement of the patient, tissue displacement and tracking of surgical instruments during surgery and adjusting the display of the computer-generated image in order to maintain alignment between the computer-generated image and the portion of the patient during the surgical procedure.
Background
Many fields of surgery, such as cranial and spinal neurosurgery, require a high precision of execution to avoid significant complications. For example, in cranial neurosurgery the placement of external ventricular drains is a common emergency neurosurgical procedure, used to release raised pressure in the brain. Placement of an emergency external ventricular drain is often performed free hand by a neurosurgeon. The positioning of approximately 1 in 5 drains needs to be subsequently revised due to misplacement, and with repeated insertions the likelihood of brain bleeds increases by 40%. Studies have shown that image-guided placement leads to improved accuracy of the external ventricular drain tip, reducing the risk of malposition by over 50%. In spinal surgery, a unique pitfall is when the incorrect vertebral level is exposed or operated upon, known as wrong level surgery, or when fixation screws are incorrectly placed. To avoid this, the current convention is to perform checks with plain radiographs. These are resource intensive, requiring X-ray machine use, a radiographer, further anaesthetic time and consumables such as drapes. Current checklist site-verification systems have been shown to have a very weak effect on reducing wrong site or placement errors. Real-time imaging feedback provided by augmented reality systems has the potential to reduce this risk. Studies have shown that augmented reality systems generally give more accurate placement of pedicle screws than conventional navigation. This again improves patient outcomes and reduces further revision work.
Likewise, surgery which requires differentiation between diseased tissues and healthy tissues can be simplified and made more effective if the diseased tissue is identified beforehand and a surgical navigation system highlights this tissue to the surgeon, rather than forcing the surgeon to inspect and determine this during the procedure.
During surgical procedures which utilise augmented reality to provide this navigational information, there is a requirement for optical alignment between a view of a virtual object within a virtual environment generated by the augmented reality system and a corresponding object in a physical environment (e.g. a surgical theatre). This may involve aligning a virtual model of a body part in the virtual environment with the actual body part in the physical environment. It can be critical that the virtual model of the body part remains in accurate alignment with the actual body part at all points during surgery.
Optical alignment with the head of a patient can be achieved by using facial recognition techniques to identify landmark points on the face of a patient. Information to be displayed to the surgeon can be rendered in positions relative to these points. However, for infection control purposes and to maintain sterility, the body is usually draped during surgery which renders tracking of these landmark points during surgery using facial recognition techniques impossible. As a result, if the body moves during surgery alignment between the virtual model of the body part in the virtual environment and the actual body part in the physical environment will be lost.
It is therefore desirable to find an alternative way of tracking the motion of a body part of a patient during surgery without a reliance on images of the body part. Furthermore it is highly desirable to be able to level check, track instruments, assess screw placement and assess spinal tissue changes post fixation using augmented reality rather than plain radiographs, thus reducing radiation exposure to both patient and staff.
Summary
According to a first aspect of the present invention, there is provided an augmented reality surgery system having a camera for imaging a view of a physical environment
and a display for displaying a virtual environment. A displacement sensing system has one or more displacement sensors for fixing relative to a body part of a patient in the physical environment. The displacement sensing system outputs measurement data for each displacement sensor corresponding to translational and rotational movement of that displacement sensor relative to an origin and co-ordinate system defined by that displacement sensor. The augmented reality surgery system also includes a processing system which generates the virtual environment based on the physical environment of the camera so that each position in the physical environment has a corresponding position in the virtual environment. The processing system receives image data from the camera, and scan data corresponding to a three-dimensional model of the body part of the patient. The processing system determines a position and orientation for the three-dimensional model of the body part in the virtual environment matching the position and orientation of the body part of the patient in the physical environment, and introduces the three-dimensional model of the body part into the virtual environment with the determined position and orientation. The processing system determines, for each displacement sensor, the position in the virtual environment corresponding to the position of the origin point in the physical environment and the orientation of the coordinate axes of the displacement sensor in the virtual environment using image data for one or more images from the camera. The processing system then renders a first visual representation of the virtual environment including the model of the body part and outputs the first visual representation to the display, the first visual representation corresponding to the view of the physical environment imaged by the camera at a first time. The processing system subsequently receives measurement data from the displacement sensor corresponding to movement of the body part of the patient in the physical environment and modifies at least one of the position, orientation and form of the model in the virtual environment based on the received measurement data, the determined position of the origin point of the displacement sensor and the determined orientation of the co-ordinate axes of the displacement sensor in the physical environment, and renders a second visual representation of the virtual environment including the modified model of the body part and outputs the second visual representation to the display, the second visual representation corresponding to the view of the physical environment imaged by the camera at a second time. In this way, by initially registering
the position and orientation in the virtual environment of one or more displacement sensors fixed to a body part using imaging techniques at a first time before surgery commences, measurement data from the displacement sensing system can be used to track movement or reconfiguration of the body part at a second time during surgery.
According to a second aspect of the invention, there is provided an augmented reality surgery system having a camera for imaging a view of a physical environment, a display for displaying a view of a virtual environment, and a displacement sensing system having one or more displacement sensors for fixing relative to a surgical instrument in the physical environment, the displacement sensing system being operable to output measurement data corresponding to translational and rotational movement of each displacement sensor relative to a respective origin and co-ordinate system. The augmented reality surgery system also includes a processing system which generates the virtual environment based on the physical environment of the camera so that each position in the physical environment has a corresponding position in the virtual environment, and for each displacement sensor determines the position of the origin and the orientation of the co-ordinate axes of that displacement sensor in the virtual environment using image data for one or more images from the camera. The processing system then introduces a three-dimensional model of the surgical instrument into the virtual environment with a position based on the determined position of the origin and an orientation based on the determined orientation of the co-ordinate axes in the virtual environment. At a first time, the processing system renders a first visual representation of the virtual environment including the three-dimensional model of the surgical instrument and outputs the first visual representation to the display, the first visual representation corresponding to the view of the physical environment imaged by the camera at the first time. Following receiving measurement data from the displacement sensor corresponding to movement of the surgical instrument in the physical environment, the processing system modifies at least one of the position and orientation of the three-dimensional model of the surgical instrument in the virtual environment based on the received measurement data and the determined position of the origin point of the displacement sensor and the determined orientation of the co-ordinate axes of the displacement sensor in the virtual environment and at a second time renders a second
visual representation of the virtual environment including the modified model of the surgical instrument and outputs the second visual representation to the display, the second visual representation corresponding to the view of the physical environment imaged by the camera at the second time.
In an example of the present invention, the displacement sensing system is an electromagnetic tracking system and the displacement sensors are probes whose translational and rotational movement is tracked in six dimensions. To assist the optical registration of the probes, optical fiducial markers are provided on each probe, positioned in a known relationship to the origin point and co-ordinate system of the probe.
The augmented reality surgical navigation system could be used during neurosurgery to allow a virtual image of the head of a patient to track movement of the head of the patient during surgery. Alternatively, the augmented reality surgical navigation system could be used during spinal surgery in which the virtual image is modified to take account of relative movement of spinal vertebrae pre- and post-fixation, and to track the trajectory of screw insertions into the vertebrae.
Further features and advantages of the invention will become apparent from the following description of embodiments of the invention, given by way of example only, which is made with reference to the accompanying drawings.
Brief Description of the Drawings
Figure 1A shows a perspective view of a surgeon performing cranial neurosurgery assisted by an augmented reality surgical navigation system in accordance with examples described herein.
Figure 1B shows a perspective view of a surgeon performing cranial neurosurgery whereby a probe or instrument is tracked via the augmented reality surgical navigation system as it enters the tissue, allowing the surgeon to follow and
visualise its trajectory towards a desired target in accordance with examples described herein.
Figure 2 shows a schematic view of a displacement sensor with surface optical fiducial markers that forms part of the augmented reality surgical navigation system of Figure 1.
Figure 3A schematically shows the functional components of the augmented reality surgical navigation system for co-registering a 3D virtual model in a virtual environment with a corresponding body part of a patient in the physical environment.
Figure 3B is a flow chart showing steps performed to generate a point cloud from the 3D virtual model in which each point represents a landmark feature on the surface of the 3D virtual model.
Figure 3C is a flow chart showing operations performed to position the 3D virtual model in the virtual environment in a position and orientation corresponding to the position and orientation of the corresponding body part in the physical environment.
Figure 4A schematically shows the functional components of the augmented reality surgical navigation system for registering the location of an origin point and the orientation of co-ordinate axes associated with a displacement sensor in the virtual environment.
Figure 4B is a flow chart showing operations performed to determine the position in the virtual environment corresponding to the origin of the displacement sensor and the orientation of the co-ordinate axes of the displacement sensor in the virtual environment.
Figure 5 schematically shows the operation of the augmented reality surgical navigation system during surgery.
Figures 6A and 6B show perspective views of a surgeon performing spinal surgery assisted by an augmented reality surgical navigation system in accordance with examples described herein.
Detailed Description
Overview
Figure 1A shows a surgeon 1 carrying out a surgical procedure on a patient 3. In this example, the surgical procedure involves cranial neurosurgery on the brain, on blood vessels or on nerves located in the skull or near the brain. Such neurosurgical procedures require high precision to avoid significant complications, and the surgeon 1 makes use of an Augmented Reality (AR) surgical navigation system to assist with carrying out the neurosurgical procedure with such high precision.
In this example, the AR surgical navigation system includes a head-mounted AR display device 5. More particularly, in this example the AR display device 5 is a Microsoft HoloLens 2 device. The AR display device 5 senses the physical environment of the head-mounted AR display device, e.g. a surgical theatre, and generates a virtual environment corresponding to the physical environment using a first co-ordinate system, which will hereafter be called the virtual environment co-ordinate system.
The surgeon 1 holds a stylet 13 and during surgery, as shown in Figure 1B, the head-mounted AR display device 5 presents an image 15 to the surgeon 1 of a three-dimensional model of the head of the patient that is derived from scan data and positioned and oriented within the virtual environment of the AR surgical navigation system so as to match the position and orientation of the head of the patient 3 in the physical environment. The image is rendered from a viewpoint in the virtual environment corresponding to the position and orientation of the head-mounted AR device 5. In this way, the displayed image is superimposed over the corresponding portion of the physical body of the patient 3 in the field of view of the surgeon 1. Further, the head-mounted AR display device 5 presents information detailing the trajectory and distance to a target location.
The AR surgical navigation system also includes a displacement sensing system, which in this example is an electromagnetic tracking system in which movement of a
displacement sensor 7 in six degrees of freedom (three translational and three rotational) is monitored using a field generator 9, which generates an electromagnetic field that induces currents in the displacement sensor 7 that can be analysed to determine the sensed movements. In this example, the displacement sensor 7 is a substantially planar device, as shown in Figure 2, having optical fiducial markers 21a, 21b and 21c (such as AprilTag, ARTag, ARToolkit, ArUco and the like) positioned thereon in a configuration such that a Cartesian co-ordinate system for the sensed movement corresponds to a first axis 23a aligned with a line joining the first optical fiducial marker 21a and the second optical fiducial marker 21b, a second axis 23b aligned with a line joining the first optical fiducial marker 21a and the third fiducial marker 21c, and a third axis 23c aligned with a line perpendicular to the plane of the displacement sensor 7 and passing through the first optical fiducial marker 21a. Displacement sensors 7 can be fixed to the forehead of the patient 3, although other positions on the patient 3 are possible, and to surgical devices.
In this example, the AR surgical navigation system also includes a fixed display 11 that displays the field of view of the surgeon 1, including both an image from a camera in the AR display device 5 that captures the view of the surgeon 1 of the physical environment and the displayed image corresponding with the same view of the virtual environment. For example, as shown in Figure 1A, the fixed display 11 may show a first image of the head of the patient 3 captured in the physical environment with a second image of the brain of the patient 3 captured from the virtual environment, with the first and second images having matching positions and orientations.
The AR surgical navigation system includes a processing system having three different modes of operation, namely a patient registration mode, a displacement sensor registration mode and a surgery mode.
The patient registration mode involves, before surgery, identifying the position and orientation in the virtual environment corresponding to the head of the patient 3, and then introducing the three-dimensional model derived from the previously scanned data of the head of the patient 3 into the virtual environment at a position and orientation
matching the identified position and orientation for the physical head of the patient 3. As will be discussed in more detail hereafter, in this example the patient registration mode uses image processing techniques that require the head of the patient 3 to be in the field of view of the surgeon 1 wearing the head-mounted AR device 5. In another example also described in more detail hereafter, the patient registration mode uses the displacement sensing system to position and orient the three-dimensional model in the virtual environment. These patient registration modes may be used independently or in combination.
The displacement sensor registration mode involves determining the position and orientation in the virtual world corresponding to the position and orientation of one or more six-dimensional displacement sensors (three axes of translation and three axes of rotation) which can be fixed relative to the head of the patient, for example on the forehead of the patient, or on a surgical device. In this way, detected movement of the displacement sensor can be converted into a corresponding translation and/or rotation in the virtual world of the corresponding object to which it is fixed. As will be discussed in more detail hereafter, in this example the displacement sensor registration mode also uses image processing techniques that require the head of the patient 3 to be in the field of view of the surgeon 1 wearing the head-mounted AR device 5.
The surgery mode involves a rendered image of the three-dimensional model being displayed to the surgeon 1 from a viewpoint corresponding to the position and orientation of the AR display device 5. During the surgery mode, movement of the displacement sensor 7 fixed to the head of the patient 3 is sensed and converted by the AR surgical navigation system to a corresponding translation and/or rotation of the three-dimensional model of the head of the patient 3 in the virtual environment so as to maintain the rendered image being superimposed over the corresponding part of the head of the patient 3 in the field of view of the surgeon 1 even if the head of the patient 3 moves. Further, movement of a displacement sensor fixed relative to a surgical device such as the stylet 13 is used to track movement of a virtual model of the surgical device in the virtual environment. As will be described in more detail hereafter, the surgery mode does not require the displacement sensor 7 to be in the field of view of the surgeon
1 wearing the AR display device 5. This is advantageous because during surgery the head of the patient 3 is typically draped for hygiene reasons and therefore the displacement sensor 7 is not in the field of view of the surgeon 1. In addition, this is advantageous because during surgery a surgical device may be inserted into body tissue and accordingly the tip of the surgical device is no longer visible to the surgeon 1.
An overview of the functionality in each of these modes will now be described in more detail.
Patient Registration Mode
Figure 3A schematically illustrates the functionality of the surgical navigation system in the patient registration mode, the patient registration mode in this example using an optical system which employs image processing techniques. As shown, a head-mounted augmented reality (AR) device 201 (corresponding to the AR device 5 of Figure 1) has a display 203, a camera 205 having associated depth sensing capabilities and a device pose calculator 207. The camera 205 takes images of the field of view of the surgeon 1 and the device pose calculator 207 processes those images to determine the position and orientation in the virtual environment corresponding to the position and orientation of the AR device 201 in the physical environment of the AR surgical navigation system.
In this example, scan data 209 corresponding to a pre-operative scan of the portion of the head of the patient is input to a processing system in which the scan data is processed by a 3D model generation module 211 to generate a virtual three-dimensional model of the portion of the head of the patient using a second co-ordinate system, which will hereafter be referred to as the scan model coordinate system. In this embodiment, the pre-operative scan data is received in Digital Imaging and Communications in Medicine (DICOM) format which stores slice-based images. The pre-operative scan data may be from CT scans, MRI scans, ultrasound scans, or any other known medical imaging procedure.
In this example, the slice-based images of the DICOM files are processed to construct a plurality of three-dimensional (3D) scan models each concentrating on a different aspect of the scan data using conventional segmentation filtering techniques to identify and separate anatomical features within the 2D DICOM images. One of the constructed 3D scan models is of the exterior surface of the head of the patient. Other 3D scan models may represent internal features of the head and brain such as brain tissue, blood vessels, the skull of the patient, etc. All these other models are constructed using the scan model co-ordinate system and the same scaling so as to allow the scan models to be selectively exchanged.
The patient registration process then generates a 3D point cloud of landmark points in the 3D scan model of the exterior surface of the head. In particular, referring to Figure 3B, the patient registration mode renders, at S201, an image of the 3D model of the exterior surface of the head of the patient under virtual lighting conditions and from the viewpoint of a virtual camera positioned directly in front of the face of the patient 3 to create two-dimensional image data corresponding to an image of the face of the patient 3. A pre-trained facial detection model analyses, at S203, this image of the face of the patient 3 to obtain a set of 2D landmark points relating to features of the face of the patient 3. These landmark points are then converted, at S205, into a 3D set of points using ray tracing or casting. In particular, the position of the virtual camera is used to project a ray from a 2D landmark point in the image plane of the virtual camera into virtual model space, and the collision point of this ray with the 3D model of the exterior surface of the head of the patient corresponds to the corresponding 3D position of that landmark point. In this way, the 3D point cloud of landmark points corresponding to positions of facial landmarks is generated.
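By way of illustration only, the ray-casting step described above might look like the following Python sketch. It assumes the exterior head surface is available as a trimesh mesh in scan model space, models the virtual camera as a simple pinhole camera, and takes the 2D facial landmarks as already produced by a pre-trained detector; none of the names below are taken from the described system.

```python
import numpy as np
import trimesh

def landmarks_2d_to_3d(mesh, landmarks_2d, cam_pos, cam_rot, fx, fy, cx, cy):
    """Project 2D facial landmarks back onto the 3D head model by ray casting.

    mesh           : trimesh.Trimesh of the exterior head surface (scan model space)
    landmarks_2d   : (N, 2) pixel co-ordinates from a facial landmark detector
    cam_pos        : (3,) virtual camera position in scan model space
    cam_rot        : (3, 3) rotation mapping camera axes into scan model axes
    fx, fy, cx, cy : pinhole intrinsics of the virtual camera
    """
    # Direction of the ray through each landmark pixel, in camera co-ordinates.
    u, v = landmarks_2d[:, 0], landmarks_2d[:, 1]
    dirs_cam = np.stack([(u - cx) / fx, (v - cy) / fy, np.ones_like(u)], axis=1)

    # Rotate the rays into scan model space and normalise them.
    dirs_model = (cam_rot @ dirs_cam.T).T
    dirs_model /= np.linalg.norm(dirs_model, axis=1, keepdims=True)
    origins = np.tile(cam_pos, (len(dirs_model), 1))

    # The first intersection of each ray with the head surface is the 3D landmark.
    locations, index_ray, _ = mesh.ray.intersects_location(
        ray_origins=origins, ray_directions=dirs_model, multiple_hits=False)

    # Keep results ordered by landmark index; rays that miss the surface stay NaN.
    points_3d = np.full((len(landmarks_2d), 3), np.nan)
    points_3d[index_ray] = locations
    return points_3d
```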
The patient registration process then compares the 3D point cloud with image data for an image of the face of the patient captured by the camera 205 to determine, at 213, transform data for transforming the 3D model into a co-ordinate system relative to the camera 205, which will hereafter be referred to as the camera co-ordinate system. In particular, the patient registration process determines a translation vector and rotation
matrix which matches the position and orientation of the 3D point cloud with the position and orientation of the head of the patient in the captured image.
More particularly, with reference to Figure 3C, after the AR camera 205 captures, at S301, an image of the physical environment, the patient registration process processes, at S303, the corresponding image data to detect faces within the image, and optical fiducial markers in the image that indicate which of the detected faces belongs to the patient 3, as only the patient 3 has optical fiducial markers on them. The optical fiducial markers are useful because there may be one or more persons other than the patient 3 within the image. The patient registration process then identifies, at S305, a 2D set of landmark points of the face of the patient 3 within the captured image using the same pre-trained facial detection model as was used to process the rendered image of the 3D model of the exterior surface of the head. The patient registration process then aligns, at S307, the 3D point cloud of landmark points for the 3D model and the 2D set of landmark points from the captured image using a conventional pose estimation process, which determines a translation vector and a rotation matrix that positions and orientates the 3D model in the field of view of the AR camera 205 co-registered with the physical head of the patient 3. This is a well-studied problem in computer vision known as a perspective n-point (PNP) problem. In this way, the translation vector and the rotation matrix form transform data.
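A minimal sketch of this pose-estimation step is shown below, using OpenCV's solvePnP as one conventional perspective-n-point solver; the camera intrinsics matrix and the corresponded landmark arrays are assumed inputs rather than values taken from the described system.

```python
import cv2
import numpy as np

def estimate_head_pose(points_3d, points_2d, camera_matrix, dist_coeffs=None):
    """Solve the perspective-n-point problem: find the rotation and translation
    that place the 3D landmark point cloud (scan model space) so that it projects
    onto the 2D landmarks detected in the captured image."""
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)  # assume an undistorted image
    ok, rvec, tvec = cv2.solvePnP(
        points_3d.astype(np.float64),
        points_2d.astype(np.float64),
        camera_matrix,
        dist_coeffs,
        flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP pose estimation did not converge")
    rotation_matrix, _ = cv2.Rodrigues(rvec)  # axis-angle vector -> 3x3 matrix
    return rotation_matrix, tvec.reshape(3)   # model-to-camera transform
```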
The determined translation vector and rotation matrix are relative to the field of view of the camera when capturing the image. It will be appreciated that the head-mounted AR device moves within the physical environment, and therefore a further transformation is required to determine compensated transform data to locate the 3D model in the virtual environment at a position and orientation that matches the position and orientation of the head of the patient 3 in the physical environment. Accordingly, as shown in Figure 3A, the patient registration process then transforms, at 215, the 3D model into the virtual environment co-ordinate system. In particular, the patient registration process determines a compensated translation vector and a compensated rotation matrix which matches the position and orientation of the 3D point cloud with
the position and orientation in the virtual environment that corresponds to the position and orientation of the head in the physical environment.
Returning to Figure 3C, more particularly, the patient registration process receives, at S309, data from the device pose calculator 207 that identifies the position and the orientation within the virtual environment that corresponds to the position and orientation of the camera 205 in the physical environment when the image was captured. The patient registration process then calculates, at S311, a compensating translation vector and rotation matrix using the data provided by the device pose calculator 207. The compensating translation vector and rotation matrix form compensated transform data to transform the location and orientation of the 3D model of the exterior surface of the head into a location and orientation in the virtual environment, defined using the virtual environment co-ordinate system, so that the position and orientation of the 3D model in the virtual environment matches the position and orientation of the head of the patient 3 in the real world.
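A minimal sketch of this compensation step, assuming both poses are available as 4x4 homogeneous matrices: the model-to-camera transform from the pose estimation described earlier and the camera pose in the virtual environment reported by the device pose calculator.

```python
import numpy as np

def to_homogeneous(rotation, translation):
    """Pack a 3x3 rotation matrix and a 3-vector translation into a 4x4 matrix."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def model_pose_in_virtual_env(camera_T_model, world_T_camera):
    """Compose the camera-relative model pose with the camera pose in the
    virtual environment to obtain the compensated transform that places the
    3D model directly in virtual environment co-ordinates."""
    return world_T_camera @ camera_T_model

# Usage sketch: R, t from the PnP step; cam_R, cam_t from the device pose calculator.
# world_T_model = model_pose_in_virtual_env(to_homogeneous(R, t),
#                                           to_homogeneous(cam_R, cam_t))
```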
In this example, the patient registration process then estimates the accuracy of the coregistration of the physical head and the virtual model. In particular, depth data from the depth sensors associated with the camera 205 is used to construct, at S313, a point cloud of landmark points of the physical face of the patient from the 2D set of landmark points determined from the captured image, and then the accuracy of the overlap of the point cloud of landmark points of the physical face of the patient determined from the captured image and the point cloud of landmark points of the virtual model is calculated, at S315, for example by calculating the sum of the cartesian distance between corresponding points of each point cloud.
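The overlap metric described above reduces to a distance calculation between corresponding points of the two clouds; a minimal sketch, assuming both point clouds are ordered so that row i of each array refers to the same landmark:

```python
import numpy as np

def coregistration_error(physical_points, model_points):
    """Sum and mean of the Cartesian distances between corresponding landmark
    points of the physical face point cloud and the virtual model point cloud."""
    distances = np.linalg.norm(physical_points - model_points, axis=1)
    return distances.sum(), distances.mean()
```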
To improve accuracy of co-registration, the patient registration process repeats, at S317, the above processing for multiple captured images, while the head of the patient is immobilised but the camera 205 may be mobile, until sufficient accuracy is achieved. This assessment of accuracy can involve averaging the previous best-fits, and determining, at S317, whether the difference between current and previous averaged fits has fallen below a defined accuracy threshold and hence converged on some
suitable alignment. At this point, co-registration is deemed to be sufficiently accurate and the averaged compensated translation vector and rotation matrix can be used to introduce the 3D models into the virtual environment.
Displacement Sensor Registration Process
Figure 4A schematically illustrates the functionality of the surgical navigation system in the displacement sensor registration mode. As discussed previously, each displacement sensor 7 is equipped with optical fiducial markers 21a-21c which are positioned to define the origin and axes of a co-ordinate system in which the displacement sensor 7 measures movement in six directions (three translational and three rotational), hereafter referred to as the displacement sensor co-ordinate system. The aim of the displacement sensor registration process is to identify the position of the origin of the displacement sensor co-ordinate system, and the orientation of the axes of the displacement sensor co-ordinate system, in the virtual environment.
In this example, as illustrated in Figure 4B, the displacement sensor registration process receives, at step S401, an image of the physical environment, which may be an image used for patient registration, together with associated depth data and identifies, at step S403, the optical fiducial markers on the displacement sensor 7, which can be done using available software from libraries such as OpenCV combined with helper libraries which provide specific implementations for the optical fiducial marker used. In this way, the 2D image co-ordinates of the fiducial markers in the image, and their correct orientation based on the patterns forming the optical fiducial markers 21a-21c, are provided.
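As a purely illustrative sketch, marker detection of this kind could be written with the opencv-contrib aruco module as follows, assuming ArUco markers are used; the dictionary chosen here is an arbitrary placeholder and the aruco API differs slightly between OpenCV releases.

```python
import cv2

def detect_sensor_markers(image_bgr):
    """Detect ArUco fiducial markers and return their pixel corners keyed by id.

    The dictionary must match the markers actually printed on the displacement
    sensor; DICT_4X4_50 is only a placeholder for this sketch."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _rejected = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is None:
        return {}
    return {int(marker_id): c.reshape(4, 2)
            for marker_id, c in zip(ids.flatten(), corners)}
```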
Based on the positions and orientations of the optical fiducial markers 21a-21c in the image, the displacement sensor registration process then uses pose estimation to determine, at S405, a rotational transformation required to rotate the plane of the displacement sensor 7 containing the optical fiducial markers to align with the object plane of the camera 205. From this rotation transformation, the displacement sensor registration process determines, at S407, a rotation matrix transforming the displacement sensor co-ordinate system to the AR camera co-ordinate system. In
addition, the depth data corresponding to the optical fiducial markers is used to calculate a 3D position of the origin of the displacement sensor 7 in the AR camera coordinate system.
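Once the three markers' 3D positions in camera co-ordinates are known from the depth data, the axis convention of Figure 2 can be turned into an origin and a rotation matrix. The sketch below assumes the origin is taken at marker 21a and re-orthogonalises the second axis, since measured marker positions are never exactly perpendicular; the sign of the normal depends on the marker layout.

```python
import numpy as np

def sensor_frame_from_markers(p_a, p_b, p_c):
    """Build the displacement sensor co-ordinate frame in camera co-ordinates.

    p_a, p_b, p_c : (3,) positions of fiducial markers 21a, 21b and 21c.
    Returns (origin, R) where the columns of R are the sensor x, y, z axes."""
    x_axis = p_b - p_a                    # axis 23a: marker 21a towards 21b
    x_axis = x_axis / np.linalg.norm(x_axis)
    y_raw = p_c - p_a                     # axis 23b: marker 21a towards 21c
    z_axis = np.cross(x_axis, y_raw)      # axis 23c: normal to the sensor plane
    z_axis = z_axis / np.linalg.norm(z_axis)
    y_axis = np.cross(z_axis, x_axis)     # re-orthogonalised second axis
    R = np.column_stack([x_axis, y_axis, z_axis])
    return p_a, R                         # origin taken at marker 21a
```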
The determined origin position and rotation matrix are relative to the field of view of the camera 205 when capturing the image. The displacement sensor registration process generates, at S409, a compensating translation vector and rotation matrix using a location and orientation of the camera 205, provided by the device pose calculator 207, for when the image was captured. The compensating translation vector and rotation matrix convert the location of the origin of the displacement sensor 7 and the orientation of the displacement sensor 7 from the AR camera co-ordinate system to the virtual environment co-ordinate system, so that the position of the origin of the displacement sensor 7 and the orientation of the displacement sensor co-ordinate system in the virtual environment matches the position and orientation of the displacement sensor 7 in the physical environment.
The displacement sensor registration process can then be repeated, at S413, using multiple different images captured by the camera 205 to acquire an average position of the origin of the displacement sensor 7 and the orientation of the displacement sensor co-ordinate system in the virtual environment.
Surgery Mode
Figure 5 schematically illustrates the functionality of the surgical navigation system in the surgery mode. 3D models 211 are transformed into the virtual environment coordinate system. For 3D models generated from scan data, this involves using the transformation data generated during the patient registration process, so that its position and orientation in the virtual environment matches the position and orientation of the head of the patient 3 in the physical environment. For 3D models corresponding to surgical devices, this involves determining the position and orientation of the surgical
device in the virtual environment based on the determined position and orientation of the displacement sensor 7 attached to the surgical device.
Each 3D model then undergoes a further transformation based on the readings from the displacement sensor 7, and then the resultant transformed model is output to a rendering engine 235. The pose calculator 207 outputs data indicating the position and orientation in the virtual environment corresponding to the position and orientation of the AR camera 205 in the physical environment. This allows the rendering engine 235 to render two-dimensional image data corresponding to the view of a virtual camera, whose optical specification matches the optical specification of the AR camera 205, positioned and orientated in the virtual environment with the position and orientation indicated by the pose calculator 207. The resultant two-dimensional image data is then output by the rendering engine 235 to the display 203 for viewing by the surgeon. More particularly, the display of the head-mounted AR device 5 is a semi-transparent display device enabling the displayed image to be superimposed in the view of the surgeon 1.
If the head of the patient 3 moves during surgery, then the displacement sensor 7 attached to the head will make a corresponding movement and the displacement sensing system will output displacement sensor readings 218a in six dimensions corresponding to translational movement of the origin point and rotational movement of the displacement sensor co-ordinate system. Based on the location of the origin of the displacement sensor 7 and the orientation of the displacement sensor co-ordinate axes in the virtual environment co-ordinate system determined in the displacement sensor registration process, the displacement sensor readings 218a are converted to a displacement transformation 218b to effect a corresponding translation and rotation of the 3D model in the virtual environment. In this way, co-registration between the head of the patient in the physical environment and the 3D model in the virtual environment is maintained.
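One way to express this conversion, sketched below under the assumption that the tracking system reports each displacement as a rotation and translation in the sensor's own registered frame, is to conjugate the reading by the registered sensor pose so that it becomes a transform in virtual environment co-ordinates that can be applied to the co-registered 3D model.

```python
import numpy as np

def displacement_to_virtual_env(R_reg, t_reg, R_delta, t_delta):
    """Convert a sensor-frame displacement into a virtual environment transform.

    R_reg, t_reg     : registered sensor axes (3x3) and origin (3,) in the virtual environment
    R_delta, t_delta : rotation and translation reported by the displacement sensor
                       in its own co-ordinate system
    Returns a 4x4 transform to apply to the co-registered 3D model."""
    T_reg = np.eye(4)
    T_reg[:3, :3] = R_reg
    T_reg[:3, 3] = t_reg
    T_delta = np.eye(4)
    T_delta[:3, :3] = R_delta
    T_delta[:3, 3] = t_delta
    # Move into the sensor frame, apply the measured displacement, move back.
    return T_reg @ T_delta @ np.linalg.inv(T_reg)
```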
It will be appreciated that at the start of the cranial neurosurgery, the surgeon 1 may make a visual check that the virtual image of one or more of the 3D models is aligned with the head of the patient 3 before draping the head of the patient 3 for surgery. After
draping, the surgeon 1 is reliant on the transformations based on the sensor readings from the displacement sensor 7 to maintain alignment.
As a surgical device, e.g. the stylet 13, moves during surgery, the displacement sensor attached to the surgical device will make a corresponding movement and the displacement sensing system will output displacement sensor readings 218a in six dimensions corresponding to translational movement of the origin point and rotational movement of the displacement sensor co-ordinate system. Based on the location of the origin of the displacement sensor 7 and the orientation of the displacement sensor coordinate axes in the virtual environment co-ordinate system determined in the displacement sensor registration process, the displacement sensor readings 218a are converted to a displacement transformation 218b to effect a corresponding translation and rotation of the 3D model of the surgical device in the virtual environment.
In this example, as shown in Figure 1B, a target location can be identified in the 3D models of the head of the patient 3, and data corresponding to a distance and trajectory between the tip of the stylet 13 and the target location can be calculated and displayed superimposed over the rendered image of the virtual environment. This data may be displayed with or without a rendered virtual model of the stylet 13.
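The distance and trajectory data reduces to a straightforward vector calculation once the tip and target positions are known in the same co-ordinate system; a minimal sketch:

```python
import numpy as np

def tip_to_target(tip_position, target_position):
    """Distance and unit direction from the stylet tip to the target location,
    both expressed in virtual environment co-ordinates."""
    offset = target_position - tip_position
    distance = np.linalg.norm(offset)
    direction = offset / distance if distance > 0 else offset
    return distance, direction
```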
While the above description has related to maintaining the co-registration of the head of a patient 3 in the physical environment and a 3D model of the head of the patient 3 in the virtual environment in case the head of the patient moves during surgery, the augmented reality surgical navigation system can also be used to modify the form of a 3D model during surgery to take account of modifications to the corresponding body part during the surgical procedure or to track the trajectory of instruments such as spinal screws.
For example, Figures 6A and 6B illustrate a surgeon 1 carrying out spinal surgery in which vertebrae of the spine are physically moved relative to each other. In this example, the 3D model is accordingly of the relevant vertebrae of the spine, with each vertebra having a respective sub-model. Before surgery, a displacement sensor 7 is
fixed relative to each vertebra, and the position of the origin and the orientation of the co-ordinate axes of each displacement sensor in the virtual environment are determined in the manner described above. During surgery, as the surgeon 1 physically inserts a screw using the electromagnetically tracked tool A, the insertion and fixation of the screw moves the vertebrae relative to each other, and so the displacement sensor 7 on each vertebra will move with that vertebra respectively. The displacement readings for each displacement sensor can then be converted to transformation data for transforming the position and orientation of the corresponding sub-model in the virtual environment. In this way, the form of the 3D model is modified during surgery to take account of the relative movement of the vertebrae as a result of surgical manipulation. In addition, the trajectory and position of the spinal screws B will be tracked within the vertebrae.
Modifications and Further Embodiments
In the example described above, patient registration involves the processing of two-dimensional images of a body part of the patient to generate a set of three-dimensional points to allow co-registration between three-dimensional scan data in the virtual world and the body part of the patient. In another example, the patient registration mode uses the displacement sensing system instead of, or in conjunction with, the optical system described earlier.
Following displacement sensor registration, a displacement sensor 7 can output displacement sensor readings 218a in six dimensions corresponding to translational movement of the origin point and rotational movement of the displacement sensor coordinate system which, due to the registration process, corresponds to a position and orientation in the virtual environment.
Whereas the optical patient registration process identifies a 2D set of landmark points of the face of the patient 3 within a captured image and aligns a 3D point cloud of landmark points for the 3D model with the 2D set of landmark points from the captured
image, in the patient registration process which uses the displacement sensing system it is not necessary to capture an image of the patient 3.
Instead, the patient registration process which uses the displacement sensing system obtains a 3D set of landmark points which can be aligned with the 3D model point cloud. The 3D position of each landmark point is obtained by e.g. the surgeon bringing a displacement sensor into close proximity with the landmark point and triggering the displacement sensor system to provide a displacement sensor reading 218a that provides the three-dimensional position of that landmark point in the virtual world.
Upon capturing the 3D point cloud of displacement sensor readings relating to the landmark points, the registration system performs an alignment process as described previously to align the 3D landmark point cloud with the 3D model point cloud. The alignment process seeks to minimize the cartesian distance between the landmark points and the 3D model points until a pre-determined accuracy has been achieved.
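One conventional way to perform this 3D-to-3D alignment, when the probed landmarks and the model landmarks are already paired, is the SVD-based least-squares (Kabsch-style) solution sketched below; if the correspondence is not known in advance, an iterative-closest-point style loop would be needed instead. This is offered as an illustration, not as the specific solver used by the described system.

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rigid transform (R, t) mapping source points onto target points.

    source, target : (N, 3) arrays, row i of each referring to the same landmark.
    Returns the rotation, translation and mean residual distance after alignment."""
    src_centroid = source.mean(axis=0)
    tgt_centroid = target.mean(axis=0)
    H = (source - src_centroid).T @ (target - tgt_centroid)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = tgt_centroid - R @ src_centroid
    residual = np.linalg.norm((R @ source.T).T + t - target, axis=1).mean()
    return R, t, residual
```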
The displacement sensor used in the patient registration process may be in the form of a stylet sensor, for example, which permits precise identification of each landmark point by bringing the displacement sensor into close proximity with the landmark point of the patient’s body.
More generally, the displacement sensor may be positioned on a pointing tool, and may be positioned away from e.g. a sterile pointing end of the pointing tool. The sterile pointing end of the pointing tool is at a known position relative to the displacement sensor, such that when the sterile pointing end is in close proximity with a landmark position on the patient’s body, the displacement sensor reading can be converted to a 3D landmark position based on the positional relationship between the sterile pointing end of the tool and the position of the displacement sensor on the tool.
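A minimal sketch of that conversion, assuming the tracking system reports the sensor's position and orientation in virtual environment co-ordinates and that the offset of the sterile pointing end is known in the sensor's own frame:

```python
import numpy as np

def landmark_from_pointer(sensor_position, sensor_rotation, tip_offset_local):
    """Convert a pointing tool reading into a 3D landmark position.

    sensor_position  : (3,) sensor position in virtual environment co-ordinates
    sensor_rotation  : (3, 3) orientation of the sensor co-ordinate axes
    tip_offset_local : (3,) fixed position of the sterile pointing end in the sensor frame
    """
    return sensor_position + sensor_rotation @ tip_offset_local
```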
Acquisition of a 3D point cloud by the displacement sensing system, rather than a 2D point cloud by the optical system may improve the speed of the patient registration process, as it can remove the need to project the 2D landmarks into a 3D space by ray
tracing or casting. Furthermore, using the displacement sensing system to acquire the 3D points does not rely on the photo resolution of the camera, which can improve the positional accuracy of the landmark points.
The registration process may, in some examples, use the displacement sensing system in conjunction with the optical system. For example, it may be desirable to calculate the position of some landmark points optically to avoid the risk of physical contact with a particular body part of the patient. Combining the two methods of patient registration may help improve accuracy of the overall patient registration process by using one method to verify the other. Furthermore, the optical patient registration system may be performed automatically, and the surgeon may use the displacement sensing system where verification of particular landmark points is required, which may further improve the speed of the patient registration process whilst retaining accuracy.
In the illustrated embodiments, the head-mounted AR device is a Microsoft HoloLens 2 device. It will be appreciated that other head-mounted AR devices that display a virtual environment in association with a physical environment could be used. Further, examples of the augmented reality surgical navigation system need not include a head-mounted AR device as alternatively a fixed camera could be used, for example in conjunction with the fixed display 11, with the displacement sensing system being used to maintain registration between a 3D model in a virtual environment and a corresponding body part in a physical environment.
While the displacement sensing system of the illustrated embodiment is an electromagnetic tracking system, other displacement systems that do not rely upon image analysis to track movement in six dimensions could be used. For example, a six-axis gyroscopic system could be used.
The addition of optical fiducial markers on the displacement sensors assists in the registration of the position of the origin of the displacement sensor and the orientation of the co-ordinate axes of the displacement sensor in the virtual environment. The
optical fiducial markers are not, however, essential as the displacement sensor may be designed to have sufficient landmark points to enable registration to be performed.
The above embodiments are to be understood as illustrative examples of the invention. Further embodiments of the invention are envisaged. It is to be understood that any feature described in relation to any one embodiment may be used alone, or in combination with other features described, and may also be used in combination with one or more features of any other of the embodiments, or any combination of any other of the embodiments. Furthermore, equivalents and modifications not described above may also be employed without departing from the scope of the invention, which is defined in the accompanying claims.
Claims
1. An augmented reality surgery system comprising: a camera for imaging a view of a physical environment; a display for displaying a view of a virtual environment; a displacement sensing system having one or more displacement sensors to be fixed relative to a body part of a patient in the physical environment, the displacement sensing system being operable to output measurement data corresponding to translational and rotational movement of each displacement sensor relative to a respective origin and coordinate system; a processing system operable to: generate the virtual environment based on the physical environment of the camera so that each position in the physical environment has a corresponding position in the virtual environment; receive image data from the camera; receive scan data corresponding to a three-dimensional model of the body part of the patient; determine a position and orientation for the three-dimensional model of the body part in the virtual environment matching the position and orientation of the body part of the patient in the physical environment, and introduce the three- dimensional model of the body part into the virtual environment with the determined position and orientation; for each displacement sensor, determine the position of the origin point and the orientation of the co-ordinate axes of that displacement sensor in the virtual environment using image data for one or more images from the camera; at a first time, render a first visual representation of the virtual environment including the three-dimensional model of the body part and output the first visual representation to the display, the first visual representation corresponding to the view of the physical environment imaged by the camera at the first time; receive measurement data from the displacement sensor corresponding to movement of the body part of the patient in the physical environment;
modify at least one of the position, orientation and form of the three-dimensional model in the virtual environment based on the received measurement data and the determined position of the origin point of the displacement sensor and the determined orientation of the co-ordinate axes of the displacement sensor in the virtual environment; and at a second time, render a second visual representation of the virtual environment including the modified model of the body part and output the second visual representation to the display, the second visual representation corresponding to the view of the physical environment imaged by the camera at the second time.
2. The augmented reality surgery system of claim 1, wherein each displacement sensor comprises a plurality of optical fiducial markers positioned relative to the origin and co-ordinate axes of the displacement sensor, wherein the processing system is arranged to identify the positions of the plurality of optical fiducial markers in the one or more images and to determine the position of the origin point and the orientation of the co-ordinate axes of the displacement sensor in the physical environment based on the identified positions.
3. The augmented reality surgery system of claim 1 or claim 2, wherein the displacement sensing system is an electromagnetic tracking system.
4. The augmented reality surgery system of any preceding claim, wherein the camera and the display form part of a head-mounted AR device, and the augmented reality surgery system is operable to detect the position and orientation of the head-mounted AR device in the physical environment.
5. The system of any preceding claim, wherein the image data includes an image of the body part of the patient in the physical environment; and the processing system is operable to process the image data and scan data to determine a position and orientation for the three-dimensional model of the body part in the virtual environment matching the position and orientation of the body part of the patient in the physical environment, and
introduce the three-dimensional model of the body part into the virtual environment with the determined position and orientation.
6. The augmented reality surgery system of claim 5, wherein the processor is further operable to receive a patient image from the camera; and identify landmark points in the patient image.
7. The augmented reality surgery system of claim 6, wherein the processor is further operable to use perspective n-point methods to calculate a transformation matrix required to map the model to the landmark points identified in the patient image.
8. The augmented reality surgery system of claims 6 or 7, wherein the processor is further operable to automatically identify features by a pre-trained model.
9. The system of any preceding claim, wherein the model received by the processor is a 3D point cloud constructed from DICOM data using automatic landmark recognition.
10. The system of any preceding claim, wherein the visual representation of the model is a 3D holographic rendering constructed from the DICOM data.
11. The system of any preceding claim, wherein the processing system is operable to use the displacement sensing system to determine a position and orientation for the three-dimensional model of the body part in the virtual environment matching the position and orientation of the body part of the patient in the physical environment, and introduce the three-dimensional model of the body part into the virtual environment with the determined position and orientation.
12. An augmented reality surgery system comprising:
a camera for imaging a view of a physical environment; a display for displaying a view of a virtual environment; a displacement sensing system having one or more displacement sensors for fixing relative to a surgical device in the physical environment, the displacement sensing system being operable to output measurement data corresponding to translational and rotational movement of each displacement sensor relative to a respective origin and coordinate system; a processing system operable to: generate the virtual environment based on the physical environment of the camera so that each position in the physical environment has a corresponding position in the virtual environment; for each displacement sensor fixed to a surgical device, determine the position of the origin point and the orientation of the co-ordinate axes of that displacement sensor in the virtual environment using image data for one or more images from the camera; introduce a three-dimensional model of the surgical device into the virtual environment with a position based on the determined position of the origin and an orientation based on the determined orientation of the co-ordinate axes in the virtual environment so that the position and orientation of the three-dimensional model of the surgical device in the virtual environment matches the position and orientation of the surgical device in the physical environment; at a first time, render a first visual representation of the virtual environment including at least one of the three-dimensional model of the surgical device and data derived from the position and orientation of the three-dimensional model of the surgical device, and output the first visual representation to the display, the first visual representation corresponding to the view of the physical environment imaged by the camera at the first time; receive measurement data from the displacement sensor corresponding to movement of the surgical device in the physical environment; modify at least one of the position and orientation of the three-dimensional model of the surgical device in the virtual environment based on the received measurement data and the determined position of the origin point of the
displacement sensor and the determined orientation of the co-ordinate axes of the displacement sensor in the virtual environment; and at a second time, render a second visual representation of the virtual environment including at least one of the modified model of the surgical device and data derived from the position and orientation of the three-dimensional model of the surgical device, and output the second visual representation to the display, the second visual representation corresponding to the view of the physical environment imaged by the camera at the second time.

13. An augmented reality surgery system according to claim 12, wherein the displacement sensor is fixed to a stylet.

14. A computer program for an augmented reality surgical system according to any of claims 1 to 11, the computer program comprising instructions that, when executed by the processing system: receive image data from the camera, the image data including an image of the body part of the patient in the physical environment; receive scan data corresponding to a three-dimensional model of the body part of the patient; determine a position and orientation for the three-dimensional model of the body part in the virtual environment matching the position and orientation of the body part of the patient in the physical environment, and introduce the three-dimensional model of the body part into the virtual environment with the determined position and orientation; for each displacement sensor, determine the position of the origin point and the orientation of the co-ordinate axes of that displacement sensor in the virtual environment using image data for one or more images from the camera; at a first time, render a first visual representation of the virtual environment including the three-dimensional model of the body part and output the first visual representation to the display, the first visual representation corresponding to the view of the physical environment imaged by the camera at the first time;
receive measurement data from the displacement sensor corresponding to movement of the body part of the patient in the physical environment; modify at least one of the position, orientation and form of the three-dimensional model in the virtual environment based on the received measurement data and the determined position of the origin point of the displacement sensor and the determined orientation of the co-ordinate axes of the displacement sensor in the virtual environment; and at a second time, render a second visual representation of the virtual environment including the modified model of the body part and output the second visual representation to the display, the second visual representation corresponding to the view of the physical environment imaged by the camera at the second time.

15. The augmented reality surgery system of any preceding claim, wherein the programming instructions for determining the pose of the patient comprise programming instructions configured to: receive a patient image from the camera; and identify landmark points in the patient image.

16. The augmented reality surgery system of claim 15, wherein the programming instructions for registration between the model and the patient comprise programming instructions configured to: use perspective n-point methods to calculate the transformation matrix required to map the model to the landmark points identified in the patient image.

17. The augmented reality surgery system of claims 15 or 16, wherein the programming instructions for identifying the landmark points comprise programming instructions configured to: automatically identify features by a pre-trained model.

18. A computer program for an augmented reality surgical system according to claim 12 or claim 13, the computer program comprising instructions that, when executed by the processing system:
for each displacement sensor fixed to a surgical device, determine the position of the origin point and the orientation of the co-ordinate axes of that displacement sensor in the virtual environment using image data for one or more images from the camera; introduce a three-dimensional model of the surgical device into the virtual environment with a position based on the determined position of the origin and an orientation based on the determined orientation of the co-ordinate axes in the virtual environment so that the position and orientation of the three-dimensional model of the surgical device in the virtual environment matches the position and orientation of the surgical device in the physical environment; at a first time, render a first visual representation of the virtual environment including at least one of the three-dimensional model of the surgical device and data derived from the position and orientation of the three-dimensional model of the surgical device, and output the first visual representation to the display, the first visual representation corresponding to the view of the physical environment imaged by the camera at the first time; receive measurement data from the displacement sensor corresponding to movement of the surgical device in the physical environment; modify at least one of the position and orientation of the three-dimensional model of the surgical device in the virtual environment based on the received measurement data and the determined position of the origin point of the displacement sensor and the determined orientation of the co-ordinate axes of the displacement sensor in the virtual environment; and at a second time, render a second visual representation of the virtual environment including at least one of the modified model of the surgical device and data derived from the position and orientation of the three-dimensional model of the surgical device, and output the second visual representation to the display, the second visual representation corresponding to the view of the physical environment imaged by the camera at the second time.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB2112741.0A GB2614025B (en) | 2021-09-07 | 2021-09-07 | Augmented reality surgical navigation system |
| GB2112741.0 | 2021-09-07 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2023036848A1 true WO2023036848A1 (en) | 2023-03-16 |
Family
ID=78076860
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2022/074921 Ceased WO2023036848A1 (en) | 2021-09-07 | 2022-09-07 | Augmented reality surgical navigation system |
Country Status (2)
| Country | Link |
|---|---|
| GB (1) | GB2614025B (en) |
| WO (1) | WO2023036848A1 (en) |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8325614B2 (en) * | 2010-01-05 | 2012-12-04 | Jasper Wireless, Inc. | System and method for connecting, configuring and testing new wireless devices and applications |
- 2021-09-07: GB GB2112741.0A patent/GB2614025B/en (active)
- 2022-09-07: WO PCT/EP2022/074921 patent/WO2023036848A1/en (not active, ceased)
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20160225192A1 (en) * | 2015-02-03 | 2016-08-04 | Thales USA, Inc. | Surgeon head-mounted display apparatuses |
| US20170258526A1 (en) * | 2016-03-12 | 2017-09-14 | Philipp K. Lang | Devices and methods for surgery |
| WO2019215550A1 (en) * | 2018-05-10 | 2019-11-14 | 3M Innovative Properties Company | Simulated orthodontic treatment via augmented visualization in real-time |
| WO2020163358A1 (en) * | 2019-02-05 | 2020-08-13 | Smith & Nephew, Inc. | Computer-assisted arthroplasty system to improve patellar performance |
Non-Patent Citations (1)
| Title |
|---|
| PEPE ANTONIO ET AL: "A Marker-Less Registration Approach for Mixed Reality-Aided Maxillofacial Surgery: a Pilot Evaluation", JOURNAL OF DIGITAL IMAGING, SPRINGER INTERNATIONAL PUBLISHING, CHAM, vol. 32, no. 6, 4 September 2019 (2019-09-04), pages 1008 - 1018, XP037047699, ISSN: 0897-1889, [retrieved on 20190904], DOI: 10.1007/S10278-019-00272-6 * |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116993794A (en) * | 2023-08-02 | 2023-11-03 | 德智鸿(上海)机器人有限责任公司 | Virtual-real registration method and device for augmented reality surgery assisted navigation |
| CN116993794B (en) * | 2023-08-02 | 2024-05-24 | 德智鸿(上海)机器人有限责任公司 | Virtual-real registration method and device for augmented reality surgery assisted navigation |
| WO2025229542A1 (en) * | 2024-05-01 | 2025-11-06 | Auris Health, Inc. | Target localization for percutaneous access |
Also Published As
| Publication number | Publication date |
|---|---|
| GB2614025B (en) | 2023-12-27 |
| GB2614025A (en) | 2023-06-28 |
| GB202112741D0 (en) | 2021-10-20 |
Similar Documents
| Publication | Title |
|---|---|
| US11717376B2 | System and method for dynamic validation, correction of registration misalignment for surgical navigation between the real and virtual images |
| US11712307B2 | System and method for mapping navigation space to patient space in a medical procedure |
| EP3773305B1 | Systems for performing intraoperative guidance |
| JP7429120B2 | Non-vascular percutaneous procedure system and method for holographic image guidance |
| US10166079B2 | Depth-encoded fiducial marker for intraoperative surgical registration |
| CA2973479C | System and method for mapping navigation space to patient space in a medical procedure |
| US11191595B2 | Method for recovering patient registration |
| Grimson et al. | Clinical experience with a high precision image-guided neurosurgery system |
| US20080119725A1 | Systems and Methods for Visual Verification of CT Registration and Feedback |
| JP2002186603A | Coordinate transformation method for object guidance |
| US20240285351A1 | Surgical assistance system with improved registration, and registration method |
| WO2023036848A1 | Augmented reality surgical navigation system |
| EP4169470B1 | Apparatus and method for positioning a patient's body and tracking the patient's position during surgery |
| Giraldez et al. | Design and clinical evaluation of an image-guided surgical microscope with an integrated tracking system |
| EP4169468B1 | Technique for providing guidance to a user on where to arrange an object of interest in an operating room |
| Jing et al. | Navigating system for endoscopic sinus surgery based on augmented reality |
| Li et al. | C-arm based image-guided percutaneous puncture of minimally invasive spine surgery |
| Edwards et al. | Guiding therapeutic procedures |
Legal Events
| Code | Title | Description |
|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22782667; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 10/06/2024) |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 22782667; Country of ref document: EP; Kind code of ref document: A1 |