
WO2005088539A2 - Augmented reality system with simultaneous registration of virtual objects on images of real objects - Google Patents

Augmented reality system with simultaneous registration of virtual objects on images of real objects

Info

Publication number
WO2005088539A2
Authority
WO
WIPO (PCT)
Prior art keywords
points
image
cloud
images
combined
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/BE2005/000036
Other languages
English (en)
Other versions
WO2005088539A3 (fr)
Inventor
Benoit Macq
Quentin Noirhomme
Michel Cornet D'elzius
Pierre Delville
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Universite Catholique de Louvain UCL
Original Assignee
Universite Catholique de Louvain UCL
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Universite Catholique de Louvain UCL filed Critical Universite Catholique de Louvain UCL
Publication of WO2005088539A2 publication Critical patent/WO2005088539A2/fr
Publication of WO2005088539A3 publication Critical patent/WO2005088539A3/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing

Definitions

  • the present invention relates to methods and apparatus for augmented reality, especially in medical applications such as surgery, and software therefor.
  • Augmented Reality (AR) systems have been investigated in several fields: in the medical domain, in entertainment, for military training, in engineering design, in robotics and telerobotics, in manufacturing and maintenance, and in consumer product design.
  • the basic structure to support augmented reality applications is based on a perception of the real world, a representation of at least one virtual world and a registration procedure to align the real with the virtual.
  • the registration procedure makes it possible to know where a virtual object should be positioned in the real world.
  • the main challenges in developing robust AR systems concern methods for perceiving the real world and registering it to the virtual world.
  • the real world can be acquired by using different methods such as laser scanning, infrared scanning, computer vision with and without fiducial markers, GPS methods for mobile systems, etc.
  • the appropriate method for a specific application will depend on the task, desired resolution, performance and financial resources.
  • computer vision methods make it possible to acquire a view of the real world from one or two cameras worn by the user or fixed in the environment. For instance, Sauer [Sauer, F., Schoepf, U. J., Khamene, A., Vogt, S., Das, M., Silverman, S. G.: "Augmented Reality system for CT-guided interventions: system description and initial phantom trials", Medical Imaging 2003: Visualization, Image-Guided Procedures, and Display, Proceedings of the SPIE, Vol. 5029.
  • the environment may be perceived in various ways; the appropriate method for an application will depend on the use, the budget, the resolution and the desired speed.
  • a 3D surface of the environment is deduced from two or more views coming from two or more cameras carried by the user.
  • This technique is rapid and cheap but is limited by problems inherent to stereovision (e.g. occlusions). Since stereovision is an ill-posed problem, a perfect result can never be guaranteed for all environments under all conditions.
  • Various companies have developed software that achieves good results under specific conditions.
  • the libraries of Point Grey are an example thereof.
  • the advantage here is that the user does not have to carry the cameras. However, there may be occlusions in the 3D reconstruction.
  • the position of the user's eyes will then have to be found by other means.
  • the cameras will be placed at locations judged strategic, chosen to optimise the three-dimensional reconstruction. In this case, if the position of the user's eyes can be determined precisely, transparent spectacles may be used.
  • this problem may be solved by adding two cameras on the user. If this were used in an operating theatre, a series of cameras fixed around the surgical table would permit reconstruction of a three-dimensional surface.
  • Coregistration: this step consists in positioning the virtual images in or onto the environment. In the case of a global perception of the scene, this can be achieved by scanning, by stereovision or by infrared. For clouds of points obtained by laser scanning, studies have been carried out at MIT and at the Harvard Medical School. At MIT the accuracy is such that the majority of points obtained by scanning lie within two mm of the model.
  • the coregistration points are superimposed on the model of the skin in order to check the quality of the registration. They can be shown as lines of dots, colour coded according to their distance from the model, e.g. one colour at 0 mm from the model, yellow at 1 mm and red at 5 mm.
  • An object of the present invention is to provide improved augmented reality systems and methods, especially those which are more economical to implement.
  • The present invention presents a markerless technique and apparatus to merge 3D virtual models onto real scenes without the need for tracking markers. Neither landmark markers nor projected structured light are required.
  • Real scenes are acquired using a stereovision technique and a region of interest in the real scene is selected. The selection can be made, for instance, by applying colour and/or spatial filters.
  • the registration algorithm operates by minimizing a distance, such as the mean square distance, between points obtained from stereovision and a surface from a virtual model of the object to be viewed.
  • the approach is integrated into a multi-platform framework for visualization and processing of images such as medical images.
  • the achieved spatial accuracy (e.g. rotation and translation) and time performance are suitable for real time applications such as surgery.
  • An advantage of the present invention is that it provides a low-cost markerless approach for augmented reality systems. Instead of using fiducial markers or feature tracking, the present invention makes use of preliminary knowledge of the scene to be viewed. This solution is cost-effective as it is targeted at any stereo camera providing a pair of digital pictures of a real scene. The same camera can also be used to project the final augmented scene, e.g. in a head-mounted display. It can be considered a low-cost solution when compared to laser scanning techniques for image acquisition.
  • the method uses off-the-shelf stereo cameras, virtual reality goggles and MRI data sets to build up the 3D model in the medical case.
  • the present invention provides a system for augmenting a user's view of real-world objects to provide a combined augmented reality image comprising: a display device for displaying the combined image, a means for obtaining a first segmented image of the surface of the object from a volumetric image of the object, a stereo camera system for capturing stereo images of the object, a processor for generating a cloud of points from the stereo images and for colour or spatial filtering of the cloud of points, the processor being adapted to register the filtered cloud of points with the first segmented image and to display the combined image.
  • the present invention also provides a method for augmenting a user's view of real-world objects to provide a combined augmented reality image comprising: obtaining a first segmented image of the surface of the object from a volumetric image of the object, capturing stereo images of the object, generating a cloud of points from the stereo images, colour or spatial filtering of the cloud of points, registering the filtered cloud of points with the first segmented image, and displaying the combined image.
  • the present invention provides a computer program product which, when executed on a processing engine, augments a user's view of real-world objects to provide a combined augmented reality image,
  • the computer program product comprising code for: obtaining a first segmented image of the surface of the object from a volumetric image of the object, capturing stereo images of the object, generating a cloud of points from the stereo images, colour or spatial filtering of the cloud of points, registering the filtered cloud of points with the first segmented image, and displaying the combined image.
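  • purely as an illustration, this processing chain can be sketched as the following skeleton (a minimal sketch in C++; the structure, the function names and the stub bodies are illustrative assumptions, not the actual implementation):

    #include <array>
    #include <vector>

    // One point of the stereovision cloud: 3 coordinates plus 3 RGB intensities.
    struct Point { float x, y, z; unsigned char r, g, b; };

    using Cloud     = std::vector<Point>;
    using Matrix4x4 = std::array<std::array<double, 4>, 4>;

    // Stage 1: stereovision computation (stubbed; a real system would call the
    // stereo library here to turn a disparity map into 3D points).
    Cloud computeCloudFromStereo() { return Cloud{}; }

    // Stage 2: colour and/or spatial filtering keeping only the region of interest.
    Cloud filterCloud(const Cloud& in) { return in; }

    // Stage 3: registration of the filtered cloud with the segmented surface,
    // yielding a 4x4 rigid transformation (stubbed here to the identity).
    Matrix4x4 registerCloud(const Cloud&) {
        Matrix4x4 t{};
        for (int i = 0; i < 4; ++i) t[i][i] = 1.0;
        return t;
    }

    // Stage 4: superimpose the virtual model on the initial stereo images.
    void displayCombined(const Cloud&, const Matrix4x4&) {}

    int main() {
        Cloud raw      = computeCloudFromStereo();
        Cloud filtered = filterCloud(raw);
        Matrix4x4 t    = registerCloud(filtered);
        displayCombined(filtered, t);
    }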
  • the computer program product may be stored on a machine readable medium such as an optical disk (CD-ROM, DVD-ROM), a magnetic tape or a magnetic disk, for example.
  • the present invention also provides a combined digital image or video image generated by any of the methods of the present invention.
  • the present invention is suitable for indoor Augmented Reality (AR) applications with a possible preliminary modelling of the objects of interest to be seen.
  • AR Augmented Reality
  • the present invention is appropriate for applications where focused attention, e.g. distances between 50 and 100 cm, and high accuracy are required, e.g. surgery.
  • the present invention has been integrated into a medical framework which provides a rich set of tools for visualization and manipulation of 2D and 3D medical data sets, e.g. by using VTK and ITK libraries.
  • Fig. 1 shows a schematic representation of a computer arrangement according to an embodiment of the present invention.
  • Fig. 2 is a schematic flow diagram of a method in accordance with an embodiment of the present invention, e.g. for an AR system for a medical application.
  • Fig. 3 shows the results of the method of Fig. 2.
  • Fig. 4 shows an image of a phantom head used in an example of the present invention.
  • Fig. 5 shows steps in the acquisition of images in AR system in accordance with an embodiment of the present invention.
  • Fig. 6 shows internal information of a patient's head reconstructed from MRI data in accordance with an embodiment of the present invention.
  • Fig. 7 shows the axes of the stereo camera.
  • Fig. 8 shows an RGB spectral analysis of a stereo image; from left to right: red, green and blue components.
  • Fig. 9 shows a high pass filtering step as used in an embodiment of the present invention.
  • a white area stands for the accepted points and a black area for the rejected ones.
  • Fig. 10 shows a low pass filtering step as used in an embodiment of the present invention.
  • a white area stands for the rejected points and a black area for the accepted ones.
  • Fig. 11 - left figure shows a 2D model whereby elements of the surface are in black
  • right figure illustrates the distance map of this model.
  • Fig. 12 - left figure shows a cranium slice
  • right figure shows the corresponding distance map where light intensity represents distance, shorter distances are darker.
  • Fig. 13 shows the final augmented real scene.
  • Fig. 14 shows a cube used for validation - left for rotation, right for translation.
  • Fig. 15 shows real (black) and computer (lighter grey) rotations of a cube.
  • Fig. 16 shows real (black) and computer (lighter grey) translations of a cube.
  • Fig. 17 shows stereo cameras coupled to 3D stereo glasses and a gypsum phantom representing the patient.
  • first, second, third and the like in the description and in the claims are used for distinguishing between similar elements and not necessarily for describing a sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances and that the embodiments of the invention described herein are capable of operation in other sequences than described or illustrated herein. Moreover, the terms top, bottom, over, under and the like in the description and the claims are used for descriptive purposes and not necessarily for describing relative positions. It is to be understood that the terms so used are interchangeable under appropriate circumstances and that the embodiments of the invention described herein are capable of operation in other orientations than described or illustrated herein.
  • the present invention provides a method and apparatus for performing augmented reality. The method consists in co-registering a virtual object with an image from one or several cameras directed towards the real object. For example, the real object is filmed by one or more cameras, which makes it possible to deduce an approximate three-dimensional image therefrom.
  • the virtual model of the object can be obtained beforehand, e.g. from volumetric scans such as a CT-scan or MRI.
  • the present invention relates to an augmented reality application.
  • augmented reality relates to an application wherein the normal vision of a person is enhanced by means of one or more further images.
  • the additional image and the normal image are preferably co-registered.
  • a surface-to-surface registration is performed, with one surface extracted from the stereovision images and representing the real world, and the other obtained from volumetric data, e.g. by thresholding a volumetric scan, and representing the virtual world.
  • the methods and apparatus of the present invention are adapted to perform the co-registration on the stereoscopic view itself.
  • One of the inputs to a method or system in accordance with the present invention is from one or more cameras.
  • the invention uses cameras for capturing the dynamic and noisy reality, not a static object, e.g. in real time.
  • the output is to an AR display, e.g. goggles.
  • one model is derived from a stereoscopic image whereby the stereoscopic image need not be previously acquired and can be obtained in real-time which is very different from a laser acquisition process, or PET, MRI, etc.
  • the 3D points modelling the real time images are deduced from the stereovision images.
  • the present invention is able to deal with noise in acquired images.
  • the present invention uses a depth map, which is noisy after stereovision processing. The noise being random, a higher precision is usually obtained when working with more points. However, more points increase the computational intensity.
  • a filtering is done on the basis of the colour in the depth map. If faster results are required, the points of the stereovision image are subsampled. Alternatively or additionally, the images can be spatially filtered. As the coregistration is performed directly on the stereoscopic view, the installation or projection of markers onto the object to be viewed is not required. Moreover, obtaining a three-dimensional image by means of a camera rather than by laser scanning means that the three-dimensional image can be provided in real time. Furthermore, the use of simple cameras makes the product much cheaper. Hence, the product allows coregistration to be performed, at low cost, in any environment for which a model can be made.
  • Another aspect of the present invention is a medical intraoperative application although the present invention is not limited to this application.
  • the present invention provides an intraoperative tool allowing an augmented vision for the surgeon.
  • augmented reality goggles can be used for display.
  • the present invention presents low-cost markerless methods and apparatus for augmented reality systems. Instead of fiducial markers or feature tracking, the present invention relies, in one aspect, on preliminary knowledge of the scene to be viewed. This solution is cost-effective as it is targeted at any stereo camera providing a pair of digital pictures of a real scene. The same camera (or the stored images from the camera) can also be used in the projection of the final augmented scene, e.g. in a head-mounted display.
  • the present invention is particularly suitable for indoor Augmented Reality (AR) applications with a possible preliminary modelling of the objects of interest to be seen.
  • AR Augmented Reality
  • the method and apparatus have been integrated into a medical framework which provides a rich set of tools for visualization and manipulation of 2D and 3D medical data sets by using the VTK (Visualization ToolKit - www.vtk.org) and ITK (Insight Registration and Segmentation ToolKit - www.itk.org) libraries.
  • VTK Visualization ToolKit - www.vtk.org
  • ITK Insight Registration and Segmentation ToolKit - www.itk.org
  • This framework is called Medical Studio (www.medicalstudio.org) and has been developed at the Communication and Remote Sensing Laboratory from UCL, Belgium (www.tele.ucl.ac.be).
  • a system according to an embodiment of the present invention basically comprises three main parts: a stereo camera system, a processing unit, and an AR display, e.g. goggles.
  • Fig. 1 is a schematic representation of a computing system which can be utilized with the methods and in a system according to the present invention.
  • a computer 10 is depicted which may include a display such as AR goggles 14 and/or a video display terminal, a data input means such as a keyboard 16, and a graphic user interface indicating means such as a mouse 18.
  • Computer 10 may be implemented as a general purpose computer, e.g. a UNIX workstation or a Personal Computer.
  • Computer 10 includes a Central Processing Unit ("CPU") 15, such as a conventional microprocessor of which a Pentium IV processor supplied by Intel Corp. USA is only an example, and a number of other units interconnected via system bus 22 (which may include a hierarchy of busses).
  • the computer 10 includes at least one memory.
  • Memory may include any of a variety of data storage devices known to the skilled person such as random-access memory (“RAM”), read-only memory (“ROM”), non-volatile read/write memory such as a hard disc as known to the skilled person.
  • computer 10 may further include random-access memory ("RAM") 24, read-only memory ("ROM") 26, as well as an optional display adapter 27 for connecting system bus 22 to an optional video display terminal 14 such as AR goggles, and an optional input/output (I/O) adapter 29 for connecting peripheral devices (e.g., solid state, disk and/or tape drives 23) to system bus 22.
  • Computer 10 further includes user interface adapter 19 for connecting a keyboard 16, mouse 18, optional speaker 36, as well as allowing stereovision inputs, e.g.
  • a stereo camera system 20 and optional camera controllers 40 and/or network cards.
  • a system 21 for capturing volumetric data, e.g. MRI or CT-scan data, as well as a controller 41 therefor.
  • Data transfer can be allowed via a network 39, such as a LAN or WAN, connected to bus 22 via a communications adapter 39.
  • the adapter 39 may also connect computer 10 to a data network such as the Internet. This allows transmission of volumetric data over a telecommunications network, e.g. entering the volumetric data at a near location and transmitting it to a remote location, e.g. via the Internet, where a processor carries out a method in accordance with the present invention.
  • Computer 10 also includes a graphical user interface that resides within machine- readable media to direct the operation of computer 10. Any suitable machine-readable media may retain the graphical user interface, such as a random access memory (RAM) 24, a read-only memory (ROM) 26, a magnetic diskette, magnetic tape, or optical disk (the last three being located in disk and tape drives 23). Any suitable operating system and associated graphical user interface (e.g., Microsoft Windows) may direct CPU 15.
  • computer 10 includes a control program 51 which resides within computer memory storage 52. Control program 51 contains instructions that when executed on CPU 15 carry out operations supporting any of the methods of the present invention.
  • the arrangement shown in Fig. 1 may vary for specific applications.
  • peripheral devices such as optical disk media, audio adapters, or chip programming devices, such as PAL or EPROM programming devices well-known in the art of computer hardware, and the like may be utilized in addition to or in place of the hardware already described.
  • the computer program product for carrying out any of the methods of the present invention may be stored in any of the above mentioned memories.
  • a stereo video camera, e.g. a Bumblebee camera from Point Grey Research [RESEARCH P. G.: Bumblebee camera, digiclops and triclops libraries, http://www.ptgrey.com (2003). 2,4] and
  • the augmented scene can be displayed in the AR video-based goggles. These goggles are opaque, but as the images captured by the stereo cameras are placed right in front of them, they become virtually transparent. In the optimal (but optional) case the distance between the camera lenses should correspond to the distance between the screens of the AR goggles and the distance between the operator's eyes.
  • the real world image is captured using two or more cameras in a stereoscopic arrangement.
  • PGR libraries for stereovision computation [RESEARCH P. G.: Bumblebee camera, digiclops and triclops libraries. http://www.ptgrey.com (2003). 2,4].
  • VTK Visualization ToolKit
  • Fig. 2 schematically shows a method in accordance with an embodiment of the present invention.
  • the algorithm is shown pictorially in Fig. 3 showing the results of each related step described in Fig. 2.
  • the method has two distinct phases of processing: a) off-line and b) real time (on-line) or optionally off-line.
  • off-line processing involves the following steps: (i) volumetric image acquisition, e.g. MRI or CT-scan; (ii) segmentation of the surface of interest; (iii) distance map computation.
  • real-time processing involves the following steps: (i) stereovision algorithm computation; (ii) points filtering; (iii) registration algorithm.
  • in step 110 the stereo images are acquired.
  • in step 100 the images from the stereo camera are processed to generate a graphics image in a suitable format, e.g. .jpg images.
  • the stereo pictures (i.e. the .jpg files shown in Fig. 2) are processed by the stereovision algorithms in step 113.
  • any suitable libraries may be used, e.g. the PGR libraries provided with the Bumblebee camera. The initial images are kept, to be put back into the final augmented scene.
  • at this point a cloud of points has been obtained, which may be transformed into a VTK data format in step 101 (Fig. 2).
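  • by way of example, such a conversion can be sketched with the standard VTK point-cloud types (a minimal sketch: the Point structure and the function name are assumptions, and the actual step 101 implementation is not disclosed at this level of detail):

    #include <vector>
    #include <vtkSmartPointer.h>
    #include <vtkPoints.h>
    #include <vtkPolyData.h>
    #include <vtkPointData.h>
    #include <vtkUnsignedCharArray.h>

    struct Point { float x, y, z; unsigned char r, g, b; }; // illustrative input type

    // Wrap a cloud of coloured points into a vtkPolyData.
    vtkSmartPointer<vtkPolyData> cloudToVtk(const std::vector<Point>& cloud) {
        auto points  = vtkSmartPointer<vtkPoints>::New();
        auto colours = vtkSmartPointer<vtkUnsignedCharArray>::New();
        colours->SetNumberOfComponents(3);   // one RGB triple per point
        colours->SetName("Colors");

        for (const Point& p : cloud) {
            points->InsertNextPoint(p.x, p.y, p.z);
            colours->InsertNextTuple3(p.r, p.g, p.b);
        }

        auto poly = vtkSmartPointer<vtkPolyData>::New();
        poly->SetPoints(points);
        poly->GetPointData()->SetScalars(colours);
        return poly;
    }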
  • these points are filtered in step 102 using colour and/or spatial filters to select only a region of interest in the image; this region is used as one of the inputs to the registration algorithm.
  • restricting the cloud to a spatial or colour region reduces the computational intensity of the registration procedure.
  • the other input data for the registration algorithm is a reference 3D mesh representation of the virtual model (see Fig. 2) stored in the computer.
  • the mesh is obtained from volumetric data by a suitable segmentation scheme, e.g. by first capturing MRI or CT-scan images (Fig. 3, step 111) and then segmenting them to obtain a virtual model (step 103, Fig. 2).
  • a distance map is computed from the virtual model (step 104, Fig. 2). This is done by taking the volumetric data and first rasterizing the model surface.
  • a distance transform, such as a Euclidean distance transform, is then applied.
  • the basic data used will typically be volumetric data; the surface of the object, e.g. the skin contours, is obtained by a suitable algorithm, e.g. thresholding or Marching Cubes.
  • the registration algorithm minimizes a distance between these two sets of data in step 105.
  • for example, the mean square distance between the points obtained from the camera images and the virtual model's surface obtained from the volumetric data. As a result, a 4x4 matrix is obtained in step 106, the matrix representing the rigid transformation which best matches the cloud of points onto the distance map.
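  • the role of the distance map in this minimization can be illustrated as follows (a sketch only: the voxel-grid layout, the nearest-voxel lookup and all names are simplifying assumptions; a practical implementation would interpolate the map and drive it with the optimizer of choice):

    #include <algorithm>
    #include <array>
    #include <cstddef>
    #include <vector>

    using Matrix4x4 = std::array<std::array<double, 4>, 4>;
    struct Vec3 { double x, y, z; };

    // Apply a 4x4 rigid transformation (rotation + translation) to a point.
    Vec3 transform(const Matrix4x4& t, const Vec3& p) {
        return { t[0][0]*p.x + t[0][1]*p.y + t[0][2]*p.z + t[0][3],
                 t[1][0]*p.x + t[1][1]*p.y + t[1][2]*p.z + t[1][3],
                 t[2][0]*p.x + t[2][1]*p.y + t[2][2]*p.z + t[2][3] };
    }

    // Precomputed distance map: distance to the surface, sampled on a
    // non-empty voxel grid.
    struct DistanceMap {
        std::vector<double> d;    // one distance per voxel
        std::size_t nx, ny, nz;   // grid dimensions
        double voxel;             // voxel size, same units as the points
        double at(const Vec3& p) const {
            long i = std::clamp(static_cast<long>(p.x / voxel), 0L, static_cast<long>(nx) - 1);
            long j = std::clamp(static_cast<long>(p.y / voxel), 0L, static_cast<long>(ny) - 1);
            long k = std::clamp(static_cast<long>(p.z / voxel), 0L, static_cast<long>(nz) - 1);
            return d[(static_cast<std::size_t>(k) * ny + j) * nx + i];
        }
    };

    // Mean square distance of the transformed cloud to the surface: the cost
    // that the registration minimizes over rigid transformations T.
    double cost(const Matrix4x4& t, const std::vector<Vec3>& cloud, const DistanceMap& map) {
        double sum = 0.0;
        for (const Vec3& p : cloud) { double d = map.at(transform(t, p)); sum += d * d; }
        return cloud.empty() ? 0.0 : sum / cloud.size();
    }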
  • the virtual model is applied to the real images (although the present invention is not limited thereto).
  • in step 114 of Fig. 3, filtered points of the virtual model are applied to the originally stored stereovision images.
  • the inverse of the 4x4 matrix is applied to the reference mesh in step 106.
  • in step 107 augmented reality is performed by superimposing the virtual model on the initial images coming from the stereo camera. This is done using camera parameter information and stereo processing information such as focal length and distances of the lenses, distance of the real object from the centre of the cameras, etc.
  • the skin image generated from the volumetric data can be peeled back to reveal the internal structures located relative to the viewpoint of the camera.
  • the surgeon has "x-ray vision" useful for minimally invasive surgery.
  • the present invention is not limited to applying the virtual to the real.
  • An example of the apparatus and methods of the present invention will mainly be described in the following with respect to a craniofacial surgery tool and a situation close to a surgical room but the present invention is not limited thereto.
  • a gypsum "phantom" head was made of a patient's head.
  • Such a phantom head can be made by printing a 3D model reconstructed from volumetric scans of a patient's head.
  • Fig. 4 shows an example generated from segmenting MRI data.
  • the phantom head was put on a dark grey-blue sheet.
  • a patient is typically covered with a blue or green sheet except where the surgery is to take place.
  • Fig. 5 shows, in the two top windows, the left and right views acquired with a stereo camera.
  • the bottom left window shows the cloud of points acquired and filtered in accordance with the present invention in real time.
  • the bottom right window shows the cloud of points filtered and correctly registered to the 3D model.
  • the reference 3D mesh representation that will be used for the distance map creation must be as accurate as possible.
  • the relevant surface is the skin.
  • the skin is segmented from the volumetric dataset, e.g. the MRI dataset.
  • any suitable algorithm may be used, e.g. the Marching Cubes algorithm [LORENSEN W., CLINE H.: Marching cubes, a high resolution 3D surface construction algorithm. Computer Graphics (1987), 163-169. 4], which will construct a 3D surface from the MRI images given the intensity of the skin as an input parameter.
  • An alternative algorithm could be a thresholding segmentation as described in the book by Suetens, "Fundamentals of Medical Imaging", Cambridge University Press, 2001. The algorithm should be applied carefully, as each structure within the head with the same intensity as the skin will also be reconstructed.
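  • a minimal sketch of such a thresholding segmentation, assuming the volume is stored as a flat array of 16-bit intensities (the layout and the parameter choice are assumptions; suitable bounds depend on the imaging modality):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Binary thresholding of a volumetric dataset: every voxel whose intensity
    // lies in [lo, hi] is marked as belonging to the structure of interest
    // (e.g. skin). As noted above, any other structure falling in the same
    // intensity range is segmented as well.
    std::vector<std::uint8_t> thresholdSegment(const std::vector<std::int16_t>& volume,
                                               std::int16_t lo, std::int16_t hi) {
        std::vector<std::uint8_t> mask(volume.size(), 0);
        for (std::size_t i = 0; i < volume.size(); ++i)
            mask[i] = (volume[i] >= lo && volume[i] <= hi) ? 1 : 0;
        return mask;
    }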
  • the internal volume of the object to be viewed is preferably also segmented to show the internal structures which will be of interest to the operator, e.g. surgeon.
  • internal brain structures can be derived. This can be done by any suitable algorithm such as Marching Cubes or a thresholding technique. For example, an automatic atlas-based segmentation method [D'Haese, P.F., Bois d'Aische, A., Merchan, Th.
  • FIG. 6 illustrates the reconstructed 3D model highlighting some internal structures of the cranium (e.g. the segmented brain).
  • Stereovision has already been extensively covered in many papers [SCHARSTEIN D., SZELISKI R.: A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. IJCV 47(1/2/3) (June 2002), 7-42. 4; SCHARSTEIN D., SZELISKI R.: High-accuracy stereo depth maps using structured light.
  • Any suitable library can be used, such as the Point Grey Research [RESEARCH P. G.: Bumblebee camera, digiclops and triclops libraries, http://www.ptgrey.com (2003). 2,4] Digiclops and Triclops libraries, which were supplied with the Bumblebee stereo camera.
  • Such libraries supply functions for computer vision applications. They provide a standard C/C++ interface and expose stereovision parameters, e.g. disparity range, matching window size, etc. Camera calibration is part of the Digiclops library.
  • a major characteristic (and drawback) of stereovision is its intrinsic lack of accuracy. Because of occlusions and imperfections of stereovision algorithms the output (e.g. 3D image, depth map, cloud of points) is usually "noisy". Moreover, the cloud of points after stereovision processing describes the whole scene captured and is consequently not so suitable for registration with a virtual model corresponding to a small part of it. Selecting the points belonging to the object of interest is of advantage. Two main assumptions regarding the real scene are considered:
  • the object of interest's position does not change abruptly from one frame to the other.
  • a further useful restriction is that the operator (e.g. surgeon) needs accuracy when the real object is at the centre of his sight. For this reason there is a spatial limitation on the relevant part of the images.
  • a filter based on colour and/or space assumptions is applied. These assumptions can be easily deduced from a preliminary observation. For instance during surgery where the patient body is "wrapped" in a green (or blue) sheet points in the captured image having high green (or blue) colour content can be deleted.
  • the criteria for filtering e.g. colour and distance or position
  • each point of the cloud computed by the stereovision algorithms is composed of 3 coordinates and 3 RGB colour intensities.
  • the 3 coordinates give the point's position in space (in metres) relative to the centre of the stereo camera, along the three axes of the Bumblebee camera (Fig. 7).
  • any suitable image analysis method can be used, e.g. regional colour coding or a colour histogram.
  • a "spectral" analysis highlights the differences in colour distribution.
  • Fig. 8 shows three plots corresponding, from left to right, to the red, green and blue components.
  • the horizontal axis represents the intensity (from 0 to 255) and the vertical axis the number of points with that intensity. Peaks corresponding to uniform intensities of light can be detected. Those peaks reflect the first assumption, namely that large parts of the background have a uniform colour. It is thus possible to isolate pixels with regard to their intensity as well as their dominant colour. For instance, the dark blue sheet of Fig. 5 is responsible for the first peak in all three plots of Fig. 8. This "spectral" analysis makes it possible to determine the criteria of a band pass colour filter. Band pass filters are composed of combined high pass and low pass filters. The high pass filter criteria are determined so as to filter out dark backgrounds of blue dominance (the sheet in the example is grey-blue).
  • a threshold is set for each RGB component to increase filter reliability.
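  • the histogram analysis and the resulting band pass filter can be sketched as follows (an illustrative sketch: the Point structure is assumed, and the per-component pass bands would in practice be read off the histogram peaks rather than fixed in code):

    #include <array>
    #include <vector>

    struct Point { float x, y, z; unsigned char r, g, b; };

    // Per-channel intensity histogram used for the "spectral" analysis:
    // peaks reveal the uniform background colours to be filtered out.
    std::array<std::array<unsigned, 256>, 3> histogram(const std::vector<Point>& cloud) {
        std::array<std::array<unsigned, 256>, 3> h{};  // [channel][intensity]
        for (const Point& p : cloud) { ++h[0][p.r]; ++h[1][p.g]; ++h[2][p.b]; }
        return h;
    }

    // Band pass colour filter: a point is kept only if every RGB component
    // lies between its high pass (lo) and low pass (hi) thresholds.
    std::vector<Point> bandPassFilter(const std::vector<Point>& cloud,
                                      const std::array<unsigned char, 3>& lo,
                                      const std::array<unsigned char, 3>& hi) {
        std::vector<Point> kept;
        for (const Point& p : cloud) {
            const unsigned char c[3] = { p.r, p.g, p.b };
            bool pass = true;
            for (int i = 0; i < 3; ++i)
                if (c[i] < lo[i] || c[i] > hi[i]) { pass = false; break; }
            if (pass) kept.push_back(p);
        }
        return kept;
    }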
  • Fig. 9 and Fig. 10 illustrate on one of the two stereo pictures the concept of colour filtering.
  • Fig. 9 illustrates the red component of the high pass filter. Comparison with the two other colours (not shown here) shows that the head has a dominant red colour.
  • the low pass filter criteria are set to filter out the light blue sheet standing in front of the cranium.
  • Fig. 10 illustrates the blue component of the low pass filter.
  • a comparison with the two other components (not shown here) clearly shows the sheet's blue dominance.
  • a spatial filtering step useful in the present invention is built on observations of the relevant real space, e.g. a surgical room. The surgeon's space of major interest is determined by observing the surgeon performing surgical acts.
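  • under these assumptions the spatial filter reduces to a bounding box in camera coordinates (a sketch; the Point structure and the bounds are illustrative, the real bounds being derived from the observed working space, e.g. the 50-100 cm zone mentioned above):

    #include <vector>

    struct Point { float x, y, z; unsigned char r, g, b; };

    // Keep only points inside an axis-aligned box expressed in metres,
    // relative to the centre of the stereo camera (cf. the Fig. 7 axes).
    std::vector<Point> spatialFilter(const std::vector<Point>& cloud,
                                     float xMin, float xMax,
                                     float yMin, float yMax,
                                     float zMin, float zMax) {
        std::vector<Point> kept;
        for (const Point& p : cloud)
            if (p.x >= xMin && p.x <= xMax &&
                p.y >= yMin && p.y <= yMax &&
                p.z >= zMin && p.z <= zMax)
                kept.push_back(p);
        return kept;
    }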
  • the registration algorithm minimizes a distance between the virtual model and the real-world model. For example, a mean square distance is minimized between the points derived from the stereovision camera and the surface representation derived from segmentation of volumetric data, e.g. MRI segmentation.
  • formally, $T = \arg\min_{T} \sum_{p \in M} d_S(T(p))$ (Eq. 1), where $M$ is the cloud of points obtained from stereovision and $d_S$ is the distance from a point to the reference surface $S$:
  • $d_S(q) = \min_{s \in S} d(q, s)$ (Eq. 2)
  • Equation 2 can be efficiently pre-computed for all points q using a suitable distance transform such as a Euclidean distance transform (EDT).
  • EDT Euclidean distance transform
  • the segmented scalp surface is rasterized into a 3D image based on the volumetric patient data.
  • a suitable EDT is applied, e.g. an implementation of Saito's EDT [SAITO T., TORIWAKI, J.: New algorithms for Euclidean distance transformations of an n-dimensional digitized picture with applications. Pattern Recognition 27 (Nov. 1994), 1551-1565] found in [CUISENAIRE O.: Distance transformations: fast algorithms and applications to medical image processing.
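  • for illustration, the pre-computation of Eq. 2 can be pictured with a brute-force Euclidean distance transform over the rasterized surface (a naive sketch for small grids only; Saito's algorithm produces the same map far more efficiently using separable per-axis scans):

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <cstdint>
    #include <limits>
    #include <vector>

    // Brute-force EDT of a binary 3D image: mask is 1 on the rasterized surface
    // and 0 elsewhere; the output holds, for every voxel, the distance (in
    // voxels) to the nearest surface voxel.
    std::vector<double> naiveEdt(const std::vector<std::uint8_t>& mask,
                                 std::size_t nx, std::size_t ny, std::size_t nz) {
        struct V { long x, y, z; };
        std::vector<V> surface;                       // collect surface voxels once
        for (std::size_t k = 0; k < nz; ++k)
            for (std::size_t j = 0; j < ny; ++j)
                for (std::size_t i = 0; i < nx; ++i)
                    if (mask[(k * ny + j) * nx + i])
                        surface.push_back({ (long)i, (long)j, (long)k });

        std::vector<double> dist(nx * ny * nz, std::numeric_limits<double>::infinity());
        for (std::size_t k = 0; k < nz; ++k)
            for (std::size_t j = 0; j < ny; ++j)
                for (std::size_t i = 0; i < nx; ++i) {
                    double best = std::numeric_limits<double>::infinity();
                    for (const V& s : surface) {
                        double dx = double((long)i - s.x);
                        double dy = double((long)j - s.y);
                        double dz = double((long)k - s.z);
                        best = std::min(best, dx*dx + dy*dy + dz*dz);
                    }
                    dist[(k * ny + j) * nx + i] = std::sqrt(best);
                }
        return dist;
    }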
  • the augmented view can be improved by eliminating some noise (e.g. which comes from the segmentation process) in the reference 3D model.
  • An indirect method has been used to determine the precision of the method according to the present invention.
  • the precision of display is hard to determine, unless there are several operators with AR goggles and real-time temporal resolution.
  • the precision of the registration algorithm has already been determined, for instance in [NOIRHOMME Q., ROMERO E., CUISENAIRE O., FERRANT M., VANDERMEEREN Y., OLIVIER E., MACQ B.: Registration of transcranial magnetic stimulation, a visualization tool for brain functions.
  • a cube in rotation (20 pictures of 1 degree rotations from -20 to +17° ) - see Fig. 15.
  • a cube in translation (20 pictures of 1 cm translations from -5cm to +15 cm) - see Fig. 16.
  • the goal was to emulate the movement of the operator looking at the real object and treat it as real-time data.
  • the Bumblebee camera was statically fixed on a tripod, at 70 cm from the object and at a 45° horizontal angle.
  • a ruler was placed under the object to ensure the maximum precision the human hand can reach (Fig. 14).
  • the mean square error found for rotation was 0.7°, and the mean square error found for translation was 1 cm. Two remarks are important here:
  • the precision determined corresponds to the precision of the human hand displacing the object.
  • the Augmented Reality system according to the present invention, which does not use fiducial markers, can be implemented using off-the-shelf stereo cameras and virtual reality goggles.
  • volumetric data is captured by well-known techniques, e.g. MRI, to generate data sets from which to build up the 3D model.
  • the present invention merges the following aspects:
  • the selection of a smaller region of interest in the real scene is performed by applying colour and/or spatial filters. This contributes to eliminating the noise in the acquired image and speeds up the registration algorithm.
  • the registration algorithm is performed by minimizing a distance such as a mean square distance between the points derived from stereovision and a surface derived from the virtual model.
  • a dynamic registration for deformable objects using finite elements. For instance, in maxillo-facial surgery there are some structures that can be moved during the surgical acts.
  • the bounding box can be automatically positioned around the object of attention in order to reduce the influence of noise, e.g. by pattern recognition algorithms.
  • FIG. 17 shows an arrangement in accordance with an embodiment of the present invention.
  • the operator wears AR goggles with a stereo camera mounted in front thereof.
  • the augmented image is displayed within the goggles.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to methods and apparatus for performing registration in markerless augmented reality systems. Spatial and/or colour filtering of the real-world images is implemented to reduce computational intensity. The invention provides a powerful tool to assist a surgeon in planning and/or performing complex surgical operations.
PCT/BE2005/000036 2004-03-15 2005-03-15 Augmented reality system with simultaneous registration of virtual objects on images of real objects Ceased WO2005088539A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB0405792.3 2004-03-15
GBGB0405792.3A GB0405792D0 (en) 2004-03-15 2004-03-15 Augmented reality vision system and method

Publications (2)

Publication Number Publication Date
WO2005088539A2 true WO2005088539A2 (fr) 2005-09-22
WO2005088539A3 WO2005088539A3 (fr) 2005-11-03

Family

ID=32117708

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/BE2005/000036 Ceased WO2005088539A2 (fr) Augmented reality system with simultaneous registration of virtual objects on images of real objects

Country Status (2)

Country Link
GB (1) GB0405792D0 (fr)
WO (1) WO2005088539A2 (fr)

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102006004731A1 (de) * 2006-02-02 2007-08-09 Bayerische Motoren Werke Ag Verfahren und Einrichtung zur Bestimmung der Position und/oder Orientierung einer Kamera bezüglich eines realen Objektes
US8073243B2 (en) 2008-05-30 2011-12-06 General Instrument Corporation Replacing image information in a captured image
US8352415B2 (en) 2010-06-15 2013-01-08 International Business Machines Corporation Converting images in virtual environments
WO2013176829A1 (fr) * 2012-05-23 2013-11-28 Qualcomm Incorporated Vidéo augmentée enregistrée spatialement
DE102012106890A1 (de) * 2012-07-30 2014-01-30 Carl Zeiss Microscopy Gmbh Dreidimensionale Darstellung von Objekten
US8984503B2 (en) 2009-12-31 2015-03-17 International Business Machines Corporation Porting virtual images between platforms
CN104794748A (zh) * 2015-03-17 2015-07-22 上海海洋大学 基于Kinect视觉技术的三维空间地图构建方法
US9677840B2 (en) 2014-03-14 2017-06-13 Lineweight Llc Augmented reality simulator
US9713871B2 (en) 2015-04-27 2017-07-25 Microsoft Technology Licensing, Llc Enhanced configuration and control of robots
CN107305595A (zh) * 2017-05-22 2017-10-31 朗动信息咨询(上海)有限公司 一种基于ar工具实现的产品设计转化系统
US9980780B2 (en) 2016-03-12 2018-05-29 Philipp K. Lang Guidance for surgical procedures
US10007413B2 (en) 2015-04-27 2018-06-26 Microsoft Technology Licensing, Llc Mixed environment display of attached control elements
WO2018162078A1 (fr) * 2017-03-10 2018-09-13 Brainlab Ag Navigation à réalité augmentée médicale
WO2018162079A1 (fr) * 2017-03-10 2018-09-13 Brainlab Ag Pré-enregistrement de réalité augmentée
US10194131B2 (en) 2014-12-30 2019-01-29 Onpoint Medical, Inc. Augmented reality guidance for spinal surgery and spinal procedures
WO2019032143A1 (fr) * 2016-08-16 2019-02-14 Insight Medical Systems, Inc. Systèmes et procédés de renforcement sensoriel lors d'interventions médicales
US10398514B2 (en) 2016-08-16 2019-09-03 Insight Medical Systems, Inc. Systems and methods for sensory augmentation in medical procedures
CN110288636A (zh) * 2019-05-05 2019-09-27 中国矿业大学 一种基于平面特征约束的LiDAR点云无初值配准方法
US10678338B2 (en) 2017-06-09 2020-06-09 At&T Intellectual Property I, L.P. Determining and evaluating data representing an action to be performed by a robot
US10788672B2 (en) 2016-03-01 2020-09-29 Mirus Llc Augmented visualization during surgery
US10803608B1 (en) 2019-10-30 2020-10-13 Skia Medical procedure using augmented reality
EP3789965A1 (fr) * 2019-09-09 2021-03-10 apoQlar GmbH Procédé de commande d'affichage, programme informatique et dispositif d'affichage de réalité mixte
US11006093B1 (en) 2020-01-22 2021-05-11 Photonic Medical Inc. Open view, multi-modal, calibrated digital loupe with depth sensing
US11071596B2 (en) 2016-08-16 2021-07-27 Insight Medical Systems, Inc. Systems and methods for sensory augmentation in medical procedures
US11348257B2 (en) 2018-01-29 2022-05-31 Philipp K. Lang Augmented reality guidance for orthopedic and other surgical procedures
US11553969B1 (en) 2019-02-14 2023-01-17 Onpoint Medical, Inc. System for computation of object coordinates accounting for movement of a surgical site for spinal and other procedures
US11751944B2 (en) 2017-01-16 2023-09-12 Philipp K. Lang Optical guidance for surgical, medical, and dental procedures
US11786206B2 (en) 2021-03-10 2023-10-17 Onpoint Medical, Inc. Augmented reality guidance for imaging systems
US11801114B2 (en) 2017-09-11 2023-10-31 Philipp K. Lang Augmented reality display for vascular and other interventions, compensation for cardiac and respiratory motion
US11857378B1 (en) 2019-02-14 2024-01-02 Onpoint Medical, Inc. Systems for adjusting and tracking head mounted displays during surgery including with surgical helmets
US12053247B1 (en) 2020-12-04 2024-08-06 Onpoint Medical, Inc. System for multi-directional tracking of head mounted displays for real-time augmented reality guidance of surgical procedures
US12211151B1 (en) 2019-07-30 2025-01-28 Onpoint Medical, Inc. Systems for optimizing augmented reality displays for surgical procedures
US12433761B1 (en) 2022-01-20 2025-10-07 Onpoint Medical, Inc. Systems and methods for determining the shape of spinal rods and spinal interbody devices for use with augmented reality displays, navigation systems and robots in minimally invasive spine procedures

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
BETTING F ET AL: "A new framework for fusing stereo images with volumetric medical images" COMPUTER VISION, VIRTUAL REALITY AND ROBOTICS IN MEDICINE. FIRST INTERNATIONAL CONFERENCE, CVRMED '95. PROCEEDINGS SPRINGER-VERLAG BERLIN, GERMANY, 1995, pages 30-39, XP008052514 ISBN: 3-540-59120-6 *
CUISENAIRE O: "DISTANCE TRANSFORMATIONS: FAST ALGORITHMS AND APPLICATIONS TO MEDICAL IMAGE PROCESSING" THESE PRESENTEE EN VUE DE L'OBTENTION DU GRADE DE DOCTEUR EN SCIENCES APPLIQUEES DE UNIVERSITE CATHOLIQUE DE LOUVAIN, October 1999 (1999-10), pages II-X,1, XP000962220 cited in the application *
HENRI C J ET AL: "Registration of 3-D surface data for intra-operative guidance and visualization in frameless stereotactic neurosurgery" COMPUTER VISION, VIRTUAL REALITY AND ROBOTICS IN MEDICINE. FIRST INTERNATIONAL CONFERENCE, CVRMED '95. PROCEEDINGS SPRINGER-VERLAG BERLIN, GERMANY, 1995, pages 47-56, XP008052515 ISBN: 3-540-59120-6 *

Cited By (96)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102006004731B4 (de) * 2006-02-02 2019-05-09 Bayerische Motoren Werke Aktiengesellschaft Verfahren und Einrichtung zur Bestimmung der Position und/oder Orientierung einer Kamera bezüglich eines realen Objektes
DE102006004731A1 (de) * 2006-02-02 2007-08-09 Bayerische Motoren Werke Ag Verfahren und Einrichtung zur Bestimmung der Position und/oder Orientierung einer Kamera bezüglich eines realen Objektes
US8073243B2 (en) 2008-05-30 2011-12-06 General Instrument Corporation Replacing image information in a captured image
US8984503B2 (en) 2009-12-31 2015-03-17 International Business Machines Corporation Porting virtual images between platforms
US8990794B2 (en) 2009-12-31 2015-03-24 International Business Machines Corporation Porting virtual images between platforms
US10528617B2 (en) 2009-12-31 2020-01-07 International Business Machines Corporation Porting virtual images between platforms
USRE46748E1 (en) 2010-06-15 2018-03-06 International Business Machines Corporation Converting images in virtual environments
US8352415B2 (en) 2010-06-15 2013-01-08 International Business Machines Corporation Converting images in virtual environments
WO2013176829A1 (fr) * 2012-05-23 2013-11-28 Qualcomm Incorporated Vidéo augmentée enregistrée spatialement
US9153073B2 (en) 2012-05-23 2015-10-06 Qualcomm Incorporated Spatially registered augmented video
DE102012106890A1 (de) * 2012-07-30 2014-01-30 Carl Zeiss Microscopy Gmbh Dreidimensionale Darstellung von Objekten
US9677840B2 (en) 2014-03-14 2017-06-13 Lineweight Llc Augmented reality simulator
US10511822B2 (en) 2014-12-30 2019-12-17 Onpoint Medical, Inc. Augmented reality visualization and guidance for spinal procedures
US10326975B2 (en) 2014-12-30 2019-06-18 Onpoint Medical, Inc. Augmented reality guidance for spinal surgery and spinal procedures
US10602114B2 (en) 2014-12-30 2020-03-24 Onpoint Medical, Inc. Augmented reality guidance for spinal surgery and spinal procedures using stereoscopic optical see-through head mounted displays and inertial measurement units
US11483532B2 (en) 2014-12-30 2022-10-25 Onpoint Medical, Inc. Augmented reality guidance system for spinal surgery using inertial measurement units
US11350072B1 (en) 2014-12-30 2022-05-31 Onpoint Medical, Inc. Augmented reality guidance for bone removal and osteotomies in spinal surgery including deformity correction
US10594998B1 (en) 2014-12-30 2020-03-17 Onpoint Medical, Inc. Augmented reality guidance for spinal procedures using stereoscopic optical see-through head mounted displays and surface representations
US12010285B2 (en) 2014-12-30 2024-06-11 Onpoint Medical, Inc. Augmented reality guidance for spinal surgery with stereoscopic displays
US10194131B2 (en) 2014-12-30 2019-01-29 Onpoint Medical, Inc. Augmented reality guidance for spinal surgery and spinal procedures
US10841556B2 (en) 2014-12-30 2020-11-17 Onpoint Medical, Inc. Augmented reality guidance for spinal procedures using stereoscopic optical see-through head mounted displays with display of virtual surgical guides
US11272151B2 (en) 2014-12-30 2022-03-08 Onpoint Medical, Inc. Augmented reality guidance for spinal surgery with display of structures at risk for lesion or damage by penetrating instruments or devices
US11750788B1 (en) 2014-12-30 2023-09-05 Onpoint Medical, Inc. Augmented reality guidance for spinal surgery with stereoscopic display of images and tracked instruments
US11652971B2 (en) 2014-12-30 2023-05-16 Onpoint Medical, Inc. Image-guided surgery with surface reconstruction and augmented reality visualization
US10951872B2 (en) 2014-12-30 2021-03-16 Onpoint Medical, Inc. Augmented reality guidance for spinal procedures using stereoscopic optical see-through head mounted displays with real time visualization of tracked instruments
US11153549B2 (en) 2014-12-30 2021-10-19 Onpoint Medical, Inc. Augmented reality guidance for spinal surgery
US11050990B2 (en) 2014-12-30 2021-06-29 Onpoint Medical, Inc. Augmented reality guidance for spinal procedures using stereoscopic optical see-through head mounted displays with cameras and 3D scanners
US12063338B2 (en) 2014-12-30 2024-08-13 Onpoint Medical, Inc. Augmented reality guidance for spinal surgery with stereoscopic displays and magnified views
US10742949B2 (en) 2014-12-30 2020-08-11 Onpoint Medical, Inc. Augmented reality guidance for spinal procedures using stereoscopic optical see-through head mounted displays and tracking of instruments and devices
CN104794748A (zh) * 2015-03-17 2015-07-22 上海海洋大学 基于Kinect视觉技术的三维空间地图构建方法
US10449673B2 (en) 2015-04-27 2019-10-22 Microsoft Technology Licensing, Llc Enhanced configuration and control of robots
US9713871B2 (en) 2015-04-27 2017-07-25 Microsoft Technology Licensing, Llc Enhanced configuration and control of robots
US10099382B2 (en) 2015-04-27 2018-10-16 Microsoft Technology Licensing, Llc Mixed environment display of robotic actions
US10007413B2 (en) 2015-04-27 2018-06-26 Microsoft Technology Licensing, Llc Mixed environment display of attached control elements
US11275249B2 (en) 2016-03-01 2022-03-15 Mirus Llc Augmented visualization during surgery
US10788672B2 (en) 2016-03-01 2020-09-29 Mirus Llc Augmented visualization during surgery
US10159530B2 (en) 2016-03-12 2018-12-25 Philipp K. Lang Guidance for surgical interventions
US12472000B2 (en) 2016-03-12 2025-11-18 Philipp K. Lang Augmented reality system for monitoring size and laterality of physical implants during surgery and for billing and invoicing
US10743939B1 (en) 2016-03-12 2020-08-18 Philipp K. Lang Systems for augmented reality visualization for bone cuts and bone resections including robotics
US10603113B2 (en) 2016-03-12 2020-03-31 Philipp K. Lang Augmented reality display systems for fitting, sizing, trialing and balancing of virtual implant components on the physical joint of the patient
US10799296B2 (en) 2016-03-12 2020-10-13 Philipp K. Lang Augmented reality system configured for coordinate correction or re-registration responsive to spinal movement for spinal procedures, including intraoperative imaging, CT scan or robotics
US9980780B2 (en) 2016-03-12 2018-05-29 Philipp K. Lang Guidance for surgical procedures
US11957420B2 (en) 2016-03-12 2024-04-16 Philipp K. Lang Augmented reality display for spinal rod placement related applications
US10849693B2 (en) 2016-03-12 2020-12-01 Philipp K. Lang Systems for augmented reality guidance for bone resections including robotics
US11602395B2 (en) 2016-03-12 2023-03-14 Philipp K. Lang Augmented reality display systems for fitting, sizing, trialing and balancing of virtual implant components on the physical joint of the patient
US10368947B2 (en) 2016-03-12 2019-08-06 Philipp K. Lang Augmented reality guidance systems for superimposing virtual implant components onto the physical joint of a patient
US12127795B2 (en) 2016-03-12 2024-10-29 Philipp K. Lang Augmented reality display for spinal rod shaping and placement
US10405927B1 (en) 2016-03-12 2019-09-10 Philipp K. Lang Augmented reality visualization for guiding physical surgical tools and instruments including robotics
US11311341B2 (en) 2016-03-12 2022-04-26 Philipp K. Lang Augmented reality guided fitting, sizing, trialing and balancing of virtual implants on the physical joint of a patient for manual and robot assisted joint replacement
US11013560B2 (en) 2016-03-12 2021-05-25 Philipp K. Lang Systems for augmented reality guidance for pinning, drilling, reaming, milling, bone cuts or bone resections including robotics
US10278777B1 (en) 2016-03-12 2019-05-07 Philipp K. Lang Augmented reality visualization for guiding bone cuts including robotics
US11172990B2 (en) 2016-03-12 2021-11-16 Philipp K. Lang Systems for augmented reality guidance for aligning physical tools and instruments for arthroplasty component placement, including robotics
US10292768B2 (en) 2016-03-12 2019-05-21 Philipp K. Lang Augmented reality guidance for articular procedures
US11452568B2 (en) 2016-03-12 2022-09-27 Philipp K. Lang Augmented reality display for fitting, sizing, trialing and balancing of virtual implants on the physical joint of a patient for manual and robot assisted joint replacement
US12465438B2 (en) 2016-08-16 2025-11-11 Insight Medical Systems, Inc. Augmented reality assisted navigation of knee replacement
US11071596B2 (en) 2016-08-16 2021-07-27 Insight Medical Systems, Inc. Systems and methods for sensory augmentation in medical procedures
US10398514B2 (en) 2016-08-16 2019-09-03 Insight Medical Systems, Inc. Systems and methods for sensory augmentation in medical procedures
WO2019032143A1 (fr) * 2016-08-16 2019-02-14 Insight Medical Systems, Inc. Systèmes et procédés de renforcement sensoriel lors d'interventions médicales
EP4385446A2 (fr) 2016-08-16 2024-06-19 Insight Medical Systems, Inc. Systèmes d'augmentation sensorielle dans des procédures médicales
US11751944B2 (en) 2017-01-16 2023-09-12 Philipp K. Lang Optical guidance for surgical, medical, and dental procedures
US11135016B2 (en) 2017-03-10 2021-10-05 Brainlab Ag Augmented reality pre-registration
WO2018162079A1 (fr) * 2017-03-10 2018-09-13 Brainlab Ag Pré-enregistrement de réalité augmentée
US11759261B2 (en) 2017-03-10 2023-09-19 Brainlab Ag Augmented reality pre-registration
US11460915B2 (en) 2017-03-10 2022-10-04 Brainlab Ag Medical augmented reality navigation
WO2018162078A1 (fr) * 2017-03-10 2018-09-13 Brainlab Ag Navigation à réalité augmentée médicale
US12197637B2 (en) 2017-03-10 2025-01-14 Brainlab Ag Medical augmented reality navigation
CN107305595A (zh) * 2017-05-22 2017-10-31 朗动信息咨询(上海)有限公司 一种基于ar工具实现的产品设计转化系统
US10678338B2 (en) 2017-06-09 2020-06-09 At&T Intellectual Property I, L.P. Determining and evaluating data representing an action to be performed by a robot
US11106284B2 (en) 2017-06-09 2021-08-31 At&T Intellectual Property I, L.P. Determining and evaluating data representing an action to be performed by a robot
US11801114B2 (en) 2017-09-11 2023-10-31 Philipp K. Lang Augmented reality display for vascular and other interventions, compensation for cardiac and respiratory motion
US12290414B2 (en) 2017-09-11 2025-05-06 Philipp K. Lang Augmented reality guidance for vascular procedures
US11727581B2 (en) 2018-01-29 2023-08-15 Philipp K. Lang Augmented reality guidance for dental procedures
US12086998B2 (en) 2018-01-29 2024-09-10 Philipp K. Lang Augmented reality guidance for surgical procedures
US11348257B2 (en) 2018-01-29 2022-05-31 Philipp K. Lang Augmented reality guidance for orthopedic and other surgical procedures
US11553969B1 (en) 2019-02-14 2023-01-17 Onpoint Medical, Inc. System for computation of object coordinates accounting for movement of a surgical site for spinal and other procedures
US12364570B1 (en) 2019-02-14 2025-07-22 Onpoint Medical, Inc. Systems for adjusting and tracking head mounted displays during surgery including with surgical helmets
US11857378B1 (en) 2019-02-14 2024-01-02 Onpoint Medical, Inc. Systems for adjusting and tracking head mounted displays during surgery including with surgical helmets
US12161428B1 (en) 2019-02-14 2024-12-10 Onpoint Medical, Inc. System for computation of object coordinates accounting for movement of a surgical site for spinal and other procedures including interpolation of vertebral position and orientation
CN110288636A (zh) * 2019-05-05 2019-09-27 中国矿业大学 一种基于平面特征约束的LiDAR点云无初值配准方法
CN110288636B (zh) * 2019-05-05 2020-02-18 中国矿业大学 一种基于平面特征约束的LiDAR点云无初值配准方法
US12211151B1 (en) 2019-07-30 2025-01-28 Onpoint Medical, Inc. Systems for optimizing augmented reality displays for surgical procedures
WO2021048158A1 (fr) * 2019-09-09 2021-03-18 apoQlar GmbH Procédé de commande d'un dispositif d'affichage, programme informatique et dispositif d'affichage à réalité mixte
EP3789965A1 (fr) * 2019-09-09 2021-03-10 apoQlar GmbH Procédé de commande d'affichage, programme informatique et dispositif d'affichage de réalité mixte
US11961193B2 (en) 2019-09-09 2024-04-16 apoQlar GmbH Method for controlling a display, computer program and mixed reality display device
US10970862B1 (en) 2019-10-30 2021-04-06 Skia Medical procedure using augmented reality
US11341662B2 (en) 2019-10-30 2022-05-24 Skia Medical procedure using augmented reality
US11710246B2 (en) 2019-10-30 2023-07-25 Skia Skin 3D model for medical procedure
US10803608B1 (en) 2019-10-30 2020-10-13 Skia Medical procedure using augmented reality
US12075019B2 (en) 2020-01-22 2024-08-27 Photonic Medical Inc. Open view, multi-modal, calibrated digital loupe with depth sensing
US11166006B2 (en) 2020-01-22 2021-11-02 Photonic Medical Inc. Open view, multi-modal, calibrated digital loupe with depth sensing
US11006093B1 (en) 2020-01-22 2021-05-11 Photonic Medical Inc. Open view, multi-modal, calibrated digital loupe with depth sensing
US11412202B2 (en) 2020-01-22 2022-08-09 Photonic Medical Inc. Open view, multi-modal, calibrated digital loupe with depth sensing
US11611735B2 (en) 2020-01-22 2023-03-21 Photonic Medical Inc. Open view, multi-modal, calibrated digital loupe with depth sensing
US12053247B1 (en) 2020-12-04 2024-08-06 Onpoint Medical, Inc. System for multi-directional tracking of head mounted displays for real-time augmented reality guidance of surgical procedures
US11786206B2 (en) 2021-03-10 2023-10-17 Onpoint Medical, Inc. Augmented reality guidance for imaging systems
US12433761B1 (en) 2022-01-20 2025-10-07 Onpoint Medical, Inc. Systems and methods for determining the shape of spinal rods and spinal interbody devices for use with augmented reality displays, navigation systems and robots in minimally invasive spine procedures

Also Published As

Publication number Publication date
GB0405792D0 (en) 2004-04-21
WO2005088539A3 (fr) 2005-11-03

Similar Documents

Publication Publication Date Title
WO2005088539A2 (fr) Augmented reality system with simultaneous registration of virtual objects on images of real objects
Colchester et al. Development and preliminary evaluation of VISLAN, a surgical planning and guidance system using intra-operative video imaging
Sun et al. Stereopsis-guided brain shift compensation
Dey et al. Automatic fusion of freehand endoscopic brain images to three-dimensional surfaces: creating stereoscopic panoramas
KR101608848B1 (ko) 다차원 영상 생성 방법 및 시스템
US5531520A (en) System and method of registration of three-dimensional data sets including anatomical body data
Yip et al. Tissue tracking and registration for image-guided surgery
US8704827B2 (en) Cumulative buffering for surface imaging
US12096987B2 (en) Method and assembly for spatial mapping of a model, such as a holographic model, of a surgical tool and/or anatomical structure onto a spatial position of the surgical tool respectively anatomical structure, as well as a surgical tool
US20250232538A1 (en) Systems and methods for masking a recognized object during an application of a synthetic element to an original image
JP3910239B2 (ja) 医用画像合成装置
WO2010081094A2 (fr) Système de calage et de superposition d'information sur des surfaces déformables à partir de données vidéo
US20210128243A1 (en) Augmented reality method for endoscope
KR100346363B1 (ko) 자동 의료 영상 분할을 통한 3차원 영상 데이터 구축방법/장치, 및 그를 이용한 영상유도 수술 장치
WO2001059708A1 (fr) Procede d'enregistrement en 3d/2d de perspectives d'un objet par rapport a un modele de surface
Reichard et al. Intraoperative on-the-fly organ-mosaicking for laparoscopic surgery
Kosaka et al. Augmented reality system for surgical navigation using robust target vision
Vagvolgyi et al. Video to CT registration for image overlay on solid organs
Dey et al. Mixed reality merging of endoscopic images and 3-D surfaces
Wang et al. Endoscopic video texture mapping on pre-built 3-D anatomical objects without camera tracking
Trevisan et al. Towards markerless augmented medical visualization
Jannin et al. Visual matching between real and virtual images in image-guided neurosurgery
ElHelw et al. Photorealistic rendering of large tissue deformation for surgical simulation
EP4641511A1 (fr) Procédé de fourniture d'un signal d'image pour un système de visualisation médicale et système de visualisation médicale
Ackerman et al. Real-time anatomical 3D image extraction for laparoscopic surgery

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

122 Ep: pct application non-entry in european phase