WO2025067673A1 - Vue en réalité mixte - Google Patents
Vue en réalité mixte (Mixed reality view)
- Publication number
- WO2025067673A1 (application PCT/EP2023/077034)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- free space
- positioning data
- semi
- viewing device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- A—HUMAN NECESSITIES
  - A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
    - A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
      - A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
        - A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
          - A61B2034/101—Computer-aided simulation of surgical operations
          - A61B2034/105—Modelling of the patient, e.g. for ligaments or bones
          - A61B2034/107—Visualisation of planned trajectories or target regions
        - A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
          - A61B2034/2046—Tracking techniques
            - A61B2034/2051—Electromagnetic tracking systems
            - A61B2034/2055—Optical tracking systems
            - A61B2034/2063—Acoustic tracking systems, e.g. using ultrasound
      - A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
        - A61B90/36—Image-producing devices or illumination devices not otherwise provided for
          - A61B2090/364—Correlation of different images or relation of image positions in respect to the body
            - A61B2090/365—Correlation of different images or relation of image positions in respect to the body augmented reality, i.e. correlating a live optical image with another image
          - A61B90/37—Surgical systems with images on a monitor during operation
            - A61B2090/372—Details of monitor hardware
        - A61B90/50—Supports for surgical instruments, e.g. articulated arms
          - A61B2090/502—Headgear, e.g. helmet, spectacles
Definitions
- Augmented reality head mounted displays can be used to overlay virtual objects on the real environment.
- This technology is well known and already used in the medical environment.
- a 3D model of an anatomical body part is overlaid on its physical counterpart (i.e., the anatomical body part), such that a user of the augmented reality head mounted display can see the 3D model of the anatomical body part at the same location as the physical anatomical body part.
- However, the 3D model may obstruct the user's view of the physical anatomical body part. It has been found that a further need exists to provide a method for providing display data for such a semi-transparent see through viewing device in a medical environment.
- the invention proposes a computer implemented method for providing display data for a semi-transparent see through viewing device in a medical environment.
- the display data is displayed on the semi-transparent see through viewing device.
- the display data comprises a model (e.g., 2D or 3D model) of an anatomical body part (e.g., a spine) with a so called free space.
- the free space is rendered transparent such that a user of the semi-transparent see through viewing device sees the physical anatomical body part in the free space area and the model outside the free space.
- the free space corresponds, for example, to an instrument (e.g., an operating knife). Thereby, it is possible to show a user information about the inside of the anatomical body part and simultaneously not to hide the view of the instrument.
- a computer implemented method for providing display data for a semi-transparent see through viewing device in a medical environment comprises the steps: providing a model of a first object (step S1); receiving first positioning data of the first object (step S2); receiving second positioning data of a second object (step S3); acquiring a free space based on the second positioning data (step S4); determining display data for displaying on the semi-transparent see through viewing device based on the model, the acquired free space, the first positioning data and the second positioning data (step S5); and providing the determined display data for displaying on the semi-transparent see through viewing device (step S6).
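A minimal sketch of steps S1–S6 as a data pipeline, in Python; all class, function and parameter names are illustrative assumptions, not taken from the publication:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class Pose:
    """Positioning data: three spatial and three orientation coordinates."""
    position: np.ndarray     # shape (3,), spatial coordinates
    orientation: np.ndarray  # shape (3,), e.g. Euler angles


@dataclass
class FreeSpace:
    """A spherical free space; one simple shape discussed later in the text."""
    center: np.ndarray       # shape (3,)
    radius: float


def acquire_free_space(second_pose: Pose, radius: float = 0.03) -> FreeSpace:
    # S4: centre a fixed-radius sphere on the tracked instrument (second object)
    return FreeSpace(center=second_pose.position, radius=radius)


def provide_display_data(model, first_pose: Pose, second_pose: Pose) -> dict:
    # S1-S3: the model and both positioning data sets arrive as inputs
    free_space = acquire_free_space(second_pose)               # S4
    display_data = {                                           # S5 (rendering is
        "model": model, "free_space": free_space,              # sketched in later
        "first_pose": first_pose, "second_pose": second_pose}  # snippets)
    return display_data                                        # S6: hand over
```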
- display data is to be understood broadly and may relate to data used to adapt one or more pixels in a digital display device.
- the display data may comprise a specific colour (e.g. blue or green) for the respective pixel in a digital display.
- the term semi-transparent see through viewing device is to be understood broadly and may relate to a digital display configured to display one or more pixels in one or more colours and to render one or more pixels transparent to the user.
- the semi-transparent see through viewing device may be a head mounted device.
- the semi-transparent see through viewing device may be a movable device.
- the movable device may comprise a movable stand or movable arm.
- the semi-transparent see through viewing device may comprise one or more position sensors to track its own position and/or orientation.
- the semi-transparent see through viewing device may comprise one or more interfaces in order to exchange data with peripherals (e.g., a data processing apparatus used to carry out the described method).
- the semi-transparent see through viewing device may be augmented reality glasses or mixed reality glasses that can be worn by a user.
- augmented reality glasses may allow a user to see real objects (i.e., physical reality) and models at least partly overlaid onto these objects (i.e., augmented reality).
- the term medical environment is to be understood broadly and may relate to a room used to carry out a medical treatment (e.g., a surgery or medical examination).
- the medical environment may comprise an anatomical body part to be treated, a patient bed, one or more medical machines (e.g., a C-arm CT), and/or a tracking unit.
- the term model is to be understood broadly and may relate to a geometric model.
- the geometric model may be a 2D model.
- the geometric model may be a 3D model.
- the geometric model may be a combination of a 2D model and a 3D model.
- the geometric model may relate to the first object.
- the first object may be an anatomical body part.
- the anatomical body part may be a vertebra, a spine, or a joint.
- the geometric model may be rendered in order to be displayed on the semi-transparent see through viewing device.
- the geometric model may be derived from medical image data.
- the medical image data may comprise, e.g., X-ray data, CT data, and/or MRT (MRI) data.
- the model may be registered to the first positioning data of the first object.
- the model may additionally comprise a planned implant and/or a target trajectory for such an implant.
- the model may be received from a database.
- the model may be generated during a medical procedure by means of an imaging device (e.g., a C-arm CT).
- the model may be modified by a user during a medical procedure for example by adding a planned implant.
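One plausible way to derive such a geometric model from medical image data, sketched with scikit-image's marching cubes; the Hounsfield threshold and voxel spacing are assumptions, and a clinical pipeline would segment the anatomy first:

```python
import numpy as np
from skimage import measure  # scikit-image


def mesh_from_ct(volume: np.ndarray, iso_level: float = 300.0,
                 voxel_spacing: tuple = (1.0, 1.0, 1.0)):
    """Extract a triangle mesh of bone-like tissue from a CT volume.

    `iso_level` is an assumed Hounsfield-unit threshold for bone.
    Returns vertices (N, 3), faces (M, 3) and per-vertex normals.
    """
    verts, faces, normals, _ = measure.marching_cubes(
        volume, level=iso_level, spacing=voxel_spacing)
    return verts, faces, normals
```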
- the term first object is to be understood broadly and may relate to any anatomical body part.
- the first object may be, for example, a spine, a joint, or an arm bone.
- the term first positioning data is to be understood broadly and may particularly relate to spatial coordinates.
- the first positioning data may comprise three spatial coordinates and three orientation coordinates.
- the first positioning data may be received by optically tracking a marker arranged at the first object.
- the first positioning data may be received by optically tracking the first object without a marker.
- the first positioning data may be received by electromagnetic tracking of the first object.
- the first positioning data may be determined by a tracking unit.
- the tracking unit may be an optical tracking unit.
- the tracking unit may be an electromagnetic tracking unit.
- the first positioning data may be transmitted from the tracking unit to a data processing apparatus carrying out the method described above.
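The three spatial and three orientation coordinates can be packed into a homogeneous transform for the registration steps; a sketch using SciPy (the Euler-angle convention and all numeric values are assumptions):

```python
import numpy as np
from scipy.spatial.transform import Rotation


def pose_to_matrix(position, euler_xyz) -> np.ndarray:
    """Build a 4x4 transform from three spatial and three orientation coordinates."""
    matrix = np.eye(4)
    matrix[:3, :3] = Rotation.from_euler("xyz", euler_xyz).as_matrix()
    matrix[:3, 3] = position
    return matrix


# e.g. the tracking unit reports the first object at (x, y, z) in millimetres
# with roll/pitch/yaw in radians (illustrative values only)
anatomy_to_tracker = pose_to_matrix([120.0, -40.0, 310.0], [0.0, 0.1, 1.2])
```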
- the term second object is to be understood broadly and may relate to any instrument or the like used in a medical treatment.
- the second object may be, for example, a knife, a drilling sleeve, or a tactile rod.
- the term second positioning data is to be understood broadly and may particularly relate to spatial coordinates.
- the second positioning data may comprise three spatial coordinates and three orientation coordinates.
- a spatial coordinate may relate to a translatory degree of freedom.
- an orientation coordinate may relate to a rotatory degree of freedom.
- the second positioning data may be used to determine a pose of the second object.
- the second positioning data may be received by optically tracking a marker arranged on the second object.
- the second positioning data may be received by optically tracking the second object without a marker.
- the second positioning data may be received by electromagnetic tracking of the second object.
- the second positioning data may be determined by a tracking unit.
- the tracking unit may be an optical tracking unit.
- the tracking unit may be an electromagnetic tracking unit.
- the second positioning data may be transmitted from the tracking unit to a data processing apparatus carrying out the method described above.
- the term free space is to be understood broadly and particularly means an area that is displayed transparent in the semi-transparent see through viewing device, such that a user looking through the semi-transparent see through viewing device can see at least a part of the second object and/or the first object in physical reality. This means particularly that the free space is rendered transparent and that no part of a model of the first object is displayed in this area. The user can see the physical reality in the area of the free space.
- the free space can be a 2D space or a 3D space.
- the free space may have fixed dimensions or dynamic dimensions varying over time. The dimensions may be predefined. The dimensions of the free space may depend on the dimensions of the first object and/or the second object.
- the term acquiring a free space is to be understood broadly and may relate to a determination based on the second positioning data of the second object.
- the free space may be defined by a predefined area (e.g., a 3D volume (e.g., a ball) or a 2D area (e.g., a circle)) and the positioning data of the second object and/or the positioning data of the first object.
- the free space may comprise geometric data of the second object.
- the geometric data of the second object may comprise spatial dimensions of the second object.
- the acquiring may comprise a calculation. For example, if the second object is not in the vicinity of the first object, no free space has to be determined, as the model overlaid onto the first object does not hide the second object. For example, if the second object is arranged above the first object and within the view of the semi-transparent see through viewing device, the free space has to be determined such that this area can be rendered transparent.
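The vicinity check described above can refine the S4 stub from the first sketch; the thresholds (in metres) are illustrative assumptions:

```python
import numpy as np


def acquire_free_space_if_needed(tool_tip: np.ndarray, anatomy_center: np.ndarray,
                                 near_threshold: float = 0.15,
                                 radius: float = 0.03):
    """Return a spherical free space around the tool tip, or None.

    If the instrument is far from the anatomy, the overlaid model cannot
    hide it, so no free space needs to be rendered at all.
    """
    if np.linalg.norm(tool_tip - anatomy_center) > near_threshold:
        return None  # second object not in the vicinity of the first object
    return {"center": tool_tip, "radius": radius}
```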
- determining display data for displaying is to be understood broadly and may comprise a rendering of one or more pixels of a semi-transparent see through viewing device.
- the determining may comprise a registration of the model of the first object to the first object based on the first positioning data.
- the determining may comprise a registration of the free space to the first object and/or second object.
- the determining may comprise the process of rendering the model and the free space.
- rendering relates in general to generating an image from a 2D model and/or a 3D model. The rendering here particularly considers the model and the free space.
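Registering the model to the first positioning data amounts to mapping its vertices into tracker coordinates with the pose transform from the earlier sketch:

```python
import numpy as np


def register_vertices(vertices: np.ndarray, model_to_tracker: np.ndarray) -> np.ndarray:
    """Map (N, 3) model vertices into tracker coordinates via a 4x4 transform."""
    homogeneous = np.hstack([vertices, np.ones((len(vertices), 1))])
    return (model_to_tracker @ homogeneous.T).T[:, :3]
```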
- the invention is based on the finding that virtual objects can be layered onto the real environment.
- such overlays can be provided by augmented reality head mounted displays, e.g., augmented reality glasses.
- These augmented reality head mounted displays can be either optical see through devices (i.e., the display is semi-transparent) or video see through devices (i.e., the real environment is filmed by cameras embedded in the augmented reality head mounted display and streamed to the displays).
- the virtual objects may be 2D or 3D objects that represent a physical object, such as anatomical body parts (e.g. spine or head model, reconstructed from CT/MRI scans), surgical instruments, tracking markers, or implants.
- a virtual object can be overlaid on its physical counterpart (overlaid virtual object): it is for example possible to display a 3D model of an anatomical body part such that the user of the AR-HMD can see the 3D model at the same location as the physical anatomical body part. This is particularly useful during minimally invasive surgery, as it gives the surgeon the possibility to look virtually inside the body of the patient.
- the overlaid virtual object may sometimes be disturbing for the surgeon.
- the 3D spine model may hide the skin and the trajectory’s entry point (intersection between the skin surface and the trajectory) while making a skin incision or while navigating/drilling a hole into the pedicle.
- the idea of the invention is to automatically mask/hide a part of the overlaid virtual object based on the position of the navigated instrument.
- the invention proposes a plurality of possibilities to provide such a “modified image” by providing display data for displaying on the semi-transparent see through viewing device. It should be noted that the display data are merely provided for displaying on a display of a semi-transparent see through viewing device.
- the computer implemented method can therefore not be seen as therapeutic, medical or diagnostic method practiced on the human or the animal body.
- the semi-transparent see through viewing device may be a head mountable device; and the method may further comprise the step: obtaining third positioning data of the head mountable device and determining the display data based on the obtained third positioning data.
- a head mountable device may be worn by a user.
- the head mountable semi-transparent see through viewing device may preferably be glasses.
- the head mountable semi-transparent see through viewing device may preferably comprise two spectacle lenses. On each of the two spectacle lenses a specific image may be presented such that the user can see a 3D model with the free space through the two spectacle lenses.
- the head mountable semi-transparent see through viewing device may be so-called mixed reality glasses.
- the third positioning data may be determined by a tracking unit.
- the third positioning data may comprise three spatial coordinates and three orientations coordinates.
- the third positioning data may be received by optically tracking a marker arranged on the head mountable device.
- the third positioning data may be received by optically tracking the head mountable device without a marker.
- the head mountable semi-transparent see through viewing device may be able to determine its positioning data by itself, e.g., by means of a simultaneous localization and mapping (SLAM) algorithm.
- the head mountable semi-transparent see through viewing device may therefore be equipped with one or more cameras.
- the SLAM algorithm may determine the positioning data of the head mountable semi-transparent see through viewing device in relation to other objects such as the first object. This may be advantageous as the head mountable semi-transparent see through viewing device does not have to be equipped with markers.
- the third positioning data may be received by electromagnetic tracking of the head mountable device.
- the third positioning data may be determined by a tracking unit.
- the tracking unit may be an optical tracking unit.
- the tracking unit may be an electromagnetic tracking unit.
- the third positioning data may be transmitted from the tracking unit to a data processing apparatus carrying out the method described above.
- a viewing direction of a user can be determined.
- the display data can therefore take into account the viewing direction or viewing angle of a user in order to always provide a correct image for his or her specific viewing angle. This may increase the accuracy of the overlaid model and also of the transparently rendered free space in the overlaid model.
- the free space may be determined as a spherical element. This may be advantageous as it simplifies and reduces the calculation and/or programming effort for different viewing angles.
- the head mountable device may allow a user different views of the first object, wherein from each view the free space allows a free view of the second object. This may advantageously increase the visibility of the first object in combination with the model and the second object. This may increase the quality and accuracy of the image presented to a user through the semi-transparent see through viewing device.
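Why a spherical free space simplifies the handling of different viewing angles: under a pinhole model a near-axis sphere projects to a circle from every head pose, so only its projected centre and pixel radius need updating. A sketch with an assumed focal length in pixels:

```python
import numpy as np


def project_sphere(center_in_view: np.ndarray, radius: float,
                   focal_px: float = 900.0):
    """Project a sphere given in viewer coordinates onto the display plane.

    Returns the circle centre (pixels, relative to the principal point)
    and its pixel radius (small-angle approximation).
    """
    x, y, z = center_in_view            # z: distance along the viewing axis
    center_px = (focal_px * x / z, focal_px * y / z)
    radius_px = focal_px * radius / z
    return center_px, radius_px
```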
- the semi-transparent see through viewing device may be a movable display; and wherein the method may further comprise the step: obtaining fourth positioning data of the movable display and determining the display data based on the obtained fourth positioning data.
- the movable display may be a display on a stand or an arm that can be moved over the first object.
- the movable display may be movable in three spatial directions and in three orientations.
- the movable display may be arranged between a user’s head and the first object.
- a tracking unit may track a position of the movable display.
- the tracking unit may transmit the positioning data of the movable device to a data processing apparatus carrying out the method described above.
- the tracking may comprise optical tracking and/or electromagnetic tracking.
- Based on the fourth positioning data, a theoretical viewing angle of a user in front of the movable display can be estimated or determined.
- Based on the fourth positioning data, display data can be determined that takes into account the position of the movable display and particularly the viewing angle of a user. In sum, this may increase the visibility of the first object in combination with the model and the second object. This may increase the quality and accuracy of the image presented to a user.
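Such a theoretical viewing angle can be estimated from the display's own pose alone; the sketch below assumes the user stands a fixed distance in front of the screen along its normal, which is a heuristic, not something stated in the publication:

```python
import numpy as np


def estimated_view_ray(display_center: np.ndarray, display_normal: np.ndarray,
                       head_offset_m: float = 0.6):
    """Estimate head position and viewing direction for a movable display."""
    normal = display_normal / np.linalg.norm(display_normal)
    head_position = display_center + head_offset_m * normal  # assumed stance
    view_direction = -normal                                 # looking at the screen
    return head_position, view_direction
```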
- the free space is rendered transparent on a display of the semi-transparent see through viewing device.
- the method may firstly render the complete model for the display of the semi-transparent see through viewing device such that the complete model is overlaid onto the first object when looking through the display.
- the determined free space is rendered transparent in the display such that a user looking at a free space area of the display looks through the display directly onto the first object.
- a head mountable semi-transparent see through viewing device is in general not able to display a pixel in black. When such a semi-transparent see through viewing device receives black as the display data colour for a pixel, it displays the pixel transparent.
- the method uses this aspect by rendering the free space black such that it is displayed transparent.
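Because an additive optical see through display emits no light for black, painting the free space black makes it read as transparent. A per-frame sketch (the frame layout and pixel units are assumptions):

```python
import numpy as np


def apply_free_space_mask(frame: np.ndarray, center_px, radius_px: float) -> np.ndarray:
    """Paint a circular region of an RGB frame black.

    On an additive see-through display, black pixels emit nothing, so the
    user sees the physical scene (first and second object) in that area.
    """
    height, width = frame.shape[:2]
    ys, xs = np.mgrid[0:height, 0:width]
    inside = (xs - center_px[0]) ** 2 + (ys - center_px[1]) ** 2 <= radius_px ** 2
    frame[inside] = 0  # black -> rendered transparent
    return frame
```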
- the method may comprise the step: receiving a target trajectory of the second object; wherein the step S5 of determining display data may be based on the received target trajectory of the second object, wherein the display data may comprise a representation of the target trajectory, and wherein the representation of the target trajectory may be superimposed on the model and the free space.
- target trajectory is to be understood broadly and may relate to a desired position of the second object.
- the target trajectory may relate to the arrangement of a drilling sleeve.
- the target trajectory and corresponding positioning data may be obtained via an HMI (e.g., a physician inputs the target trajectory) or via an interface to a database (e.g., an electronic medical record system).
- the representation of the target trajectory may be a single arrow.
- the representation may be superimposed on the display of the semi-transparent see through viewing device such that first the model is shown with the free space, and the representation is shown superimposed on the model and the free space. This may be advantageous as a user sees in the free space the first object, the second object and the representation of the target trajectory of the second object. This may advantageously simplify navigation of a second object (e.g., a scalpel) in relation to a first object (e.g., a body part).
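The entry point mentioned earlier (the intersection of the target trajectory with the skin surface) is one natural anchor for such a representation. A sketch that approximates the skin locally as a plane — a simplification; a real system would intersect with the segmented skin mesh:

```python
import numpy as np


def trajectory_entry_point(traj_origin, traj_dir, skin_point, skin_normal):
    """Intersect the target trajectory (a ray) with a locally planar skin patch."""
    origin = np.asarray(traj_origin, float)
    direction = np.asarray(traj_dir, float)
    normal = np.asarray(skin_normal, float)
    denom = np.dot(direction, normal)
    if abs(denom) < 1e-9:
        return None  # trajectory runs parallel to the skin patch
    t = np.dot(np.asarray(skin_point, float) - origin, normal) / denom
    return origin + t * direction  # world coordinates of the entry point
```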
- the method may further comprise the step: determining a current trajectory of the second object based on the received second positioning data, wherein the step S5 of determining display data may be based on the determined current trajectory of the second object, wherein the display data may comprise a representation of the current trajectory, and wherein the representation of the current trajectory may be superimposed on the 3D model and the free space.
- the representation of the current trajectory of the second object may be a single arrow.
- the representation may be superimposed on the free space in the display such that the user sees the model, the first object in the free space, the second object in the free space, and a virtual representation of the current trajectory of the second object in the free space.
- this may be advantageous as it simplifies navigation of the second object in relation to the first object.
- the method may further comprise the step: receiving fifth positioning data of a third object, wherein the step S5 of determining display data may be based on the received fifth positioning data of a third object, wherein the display data may comprise a representation of the third object, and wherein the representation of the third object may be superimposed on the 3D model and the free space.
- the third object may be an implant or the like.
- the fifth positioning data may be received from a database storing respective planning data.
- the fifth positioning data may be received via an HMI (e.g., a physician inputs a desired position for an implant).
- the representation may be an arrow or the like. In other words, a virtual representation of the third object may be superimposed over the model and the corresponding free space.
- a shape of the free space may have a fixed radius.
- the free space may be a 2D or a 3D space.
- the free space has a fixed circle shape.
- the free space has a fixed ball shape.
- the fixed ball shape is advantageous in comparison to the fixed circle shape as it does not have to be adapted for different viewing angles of a user looking through the semi-transparent see through viewing device.
- the fixed radius may be chosen depending on one or more second objects. The fixed radius may be a simple but efficient way to implement the method.
- a shape of the free space depends on a geometry of the first object and/or the second object.
- the free space may not be a predefined free space with a fixed radius. Instead, the free space may take into account the geometry of the second object (e.g., in case the second object has a large tip, the free space is larger than in case the second object has a small tip).
- the free space may take into account the geometry of the first object (e.g., if the first object is an oblong shaped spine, the free space may have an oblong shape).
- the dependence of the free space on the geometry of the first object and/or the second object may increase efficiency during navigation of the second object in relation to the first object; a combined geometry- and speed-based sizing sketch follows the speed discussion below.
- a shape of the free space may depend on a speed of the second object.
- the speed of the second object may be determined by analysing a change of the second positioning data and a corresponding time.
- the speed may be determined by an acceleration sensor attached to the second object.
- if the speed of the second object increases, the shape of the free space may be enlarged, and vice versa. This may be advantageous as it reduces the risk of an unwanted collision during a fast navigation of the second object in relation to the first object, since a bigger part of the first object, for example, is seen compared to a free space that has a smaller spatial extension.
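A combined sketch of the geometry- and speed-dependent sizing discussed in the last two passages: the base radius follows the tip size, and the radius grows with the speed estimated from successive second positioning data. The gain constant is an illustrative assumption:

```python
import numpy as np


def free_space_radius(tip_diameter_m: float, positions: np.ndarray,
                      timestamps_s: np.ndarray, gain_s: float = 0.5) -> float:
    """Size the free space from instrument geometry and current speed.

    `positions` is (T, 3) tracked tip positions, `timestamps_s` is (T,);
    at least two samples are required for a speed estimate.
    """
    base = tip_diameter_m                                    # geometry-dependent part
    velocity = np.diff(positions, axis=0) / np.diff(timestamps_s)[:, None]
    speed = float(np.linalg.norm(velocity[-1]))              # latest speed (m/s)
    return base + gain_s * speed                             # larger while moving fast
```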
- the display data may comprise a transition area surrounding the free space, and wherein the transition area may comprise a fading effect.
- the transition area may relate to a layer with a constant layer width.
- the fading effect may relate to a transition from transparent to coloured, wherein coloured relates to pixels adapted to display the model.
- the fading effect may improve a representation of the free space, the model and the first object.
- the fading effect may also contribute to enhancing the depth effect and/or increasing the distinguishability between the real world and the virtual representation.
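The fading effect can be a radial alpha ramp across the constant-width ring around the free space; combined with the black-is-transparent behaviour above, fading the model colour towards black fades the view towards the physical scene. The ring width is an assumed parameter:

```python
import numpy as np


def fade_alpha(distance_px: np.ndarray, radius_px: float,
               ring_px: float = 20.0) -> np.ndarray:
    """Per-pixel model opacity: 0 inside the free space, ramping linearly
    to 1 across the transition ring, 1 (full model colour) outside."""
    return np.clip((distance_px - radius_px) / ring_px, 0.0, 1.0)
```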
- the step S5 of determining display data may comprise a sequence of rendering, wherein the sequence of rendering may be: generating pixel data for the 3D model, generating pixel data for the free space, optionally generating pixel data for the representation of the target trajectory of the second object, optionally generating pixel data for the representation of the current trajectory of the second object, and optionally generating pixel data for the representation of the third object.
- the above outlined sequence may advantageously reduce the programming effort, as no sectional models (i.e., cut models) have to be calculated. Instead, the outlined sequence of the model, the free space, the representation of the target trajectory of the second object, the representation of the current trajectory and the representation of the third object is rendered. This makes it possible to simply superimpose the model, the free space and the one or more representations onto the first object and the second object (i.e., physical reality).
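The rendering sequence of step S5 behaves like a painter's algorithm: no cut model is computed, later layers simply overwrite earlier ones. A frame-buffer sketch (the layer inputs are assumed to be precomputed masks and colours):

```python
import numpy as np


def render_sequence(height: int, width: int, model_pixels: np.ndarray,
                    free_space_mask: np.ndarray, overlays: list) -> np.ndarray:
    """Compose the display data in the order described above."""
    frame = np.zeros((height, width, 3), dtype=np.uint8)
    frame[...] = model_pixels          # 1. pixel data for the model
    frame[free_space_mask] = 0         # 2. free space painted black (transparent)
    for mask, colour in overlays:      # 3.-5. optional representations (target
        frame[mask] = colour           #    trajectory, current trajectory,
    return frame                       #    third object) drawn last, on top
```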
- a further aspect of the present disclosure relates to a data processing apparatus comprising means for carrying out the steps of the method described above.
- a further aspect of the present disclosure relates to a computer program comprising instructions, which, when the program is executed by a computer, cause the computer to carry out the method described above.
- This program can be seen as a computer program element and may be part of an existing computer program, but it can also be an entire program by itself. For example, the program may be used to update an already existing computer program in order to arrive at the present invention.
- the invention is directed to a computer program which, when running on at least one processor (for example, a processor) of at least one computer (for example, a computer) or when loaded into at least one memory (for example, a memory) of at least one computer (for example, a computer), causes the at least one computer to perform the above-described method according to the first aspect.
- the invention may alternatively or additionally relate to a (physical, for example electrical, for example technically generated) signal wave, for example a digital signal wave, carrying information which represents the program, for example the aforementioned program, which for example comprises code means which are adapted to perform any or all of the steps of the method according to the first aspect.
- a computer program stored on a disc is a data file, and when the file is read out and transmitted it becomes a data stream for example in the form of a (physical, for example electrical, for example technically generated) signal.
- the signal can be implemented as the signal wave which is described herein.
- the signal for example the signal wave is constituted to be transmitted via a computer network, for example LAN, WLAN, WAN, for example the internet.
- the invention according to this aspect therefore may alternatively or additionally relate to a data stream representative of the aforementioned program.
- this aspect of the invention is directed to a non-transitory computer- readable program storage medium on which the program according to the previous aspect is stored.
- the computer readable medium may be seen as a storage medium, such as for example, a USB stick, a CD, a DVD, a data storage device, a hard disk, or any other medium on which a program element as described above can be stored.
Landscapes
- Health & Medical Sciences (AREA)
- Surgery (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Medical Informatics (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Molecular Biology (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Robotics (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Pathology (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention relates to a computer-implemented method for providing display data for a semi-transparent see through viewing device in a medical environment, the method comprising the following steps: providing a 2D and/or 3D model of a first object (step S1); receiving first positioning data of the first object (step S2); receiving second positioning data of a second object (step S3); acquiring a free space based on the second positioning data (step S4); determining display data to be displayed on a semi-transparent see through viewing device based on the 2D and/or 3D model, the acquired free space, and the first and second positioning data (step S5); and providing the determined display data for display on the semi-transparent see through viewing device (step S6).
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/EP2023/077034 WO2025067673A1 (fr) | 2023-09-29 | 2023-09-29 | Vue en réalité mixte |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025067673A1 (fr) | 2025-04-03 |
Family
ID=88295764
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2023/077034 Pending WO2025067673A1 (fr) | 2023-09-29 | 2023-09-29 | Vue en réalité mixte |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025067673A1 (fr) |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20180049622A1 (en) * | 2016-08-16 | 2018-02-22 | Insight Medical Systems, Inc. | Systems and methods for sensory augmentation in medical procedures |
| US20210169578A1 (en) * | 2019-12-10 | 2021-06-10 | Globus Medical, Inc. | Augmented reality headset with varied opacity for navigated robotic surgery |
| US11553969B1 (en) * | 2019-02-14 | 2023-01-17 | Onpoint Medical, Inc. | System for computation of object coordinates accounting for movement of a surgical site for spinal and other procedures |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20240245463A1 (en) | Visualization of medical data depending on viewing-characteristics | |
| US12086988B2 (en) | Augmented reality patient positioning using an atlas | |
| EP3593227B1 (fr) | Pré-enregistrement de réalité augmentée | |
| CA3065436C (fr) | Enregistrement et suivi de patient base sur video | |
| US10507064B1 (en) | Microscope tracking based on video analysis | |
| EP3917430B1 (fr) | Planification de trajectoire virtuelle | |
| WO2025067673A1 (fr) | Vue en réalité mixte | |
| US11869216B2 (en) | Registration of an anatomical body part by detecting a finger pose | |
| EP4062382A1 (fr) | Positionnement de vues médicales en réalité augmentée | |
| US20240122650A1 (en) | Virtual trajectory planning | |
| EP4409594B1 (fr) | Détermination de la hauteur de la colonne vertébrale à l'aide de la réalité augmentée | |
| US20250078418A1 (en) | Conjunction of 2d and 3d visualisations in augmented reality |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23785989; Country of ref document: EP; Kind code of ref document: A1 |