WO2024056676A1 - Système et procédé de recalage d'un modèle 3D virtuel par affichage en semi-transparence (System and method for registering a virtual 3D model by semi-transparent display) - Google Patents
- Publication number
- WO2024056676A1 (PCT/EP2023/075046)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- model
- target organ
- orientation
- endoscope
- axis
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/344—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20092—Interactive image processing based on input by user
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
Definitions
- the invention relates to a system and a method for registering a 3D model on an optical image.
- the invention relates to the registration of a virtual 3D model of an organ obtained from preoperative imaging to an image of the organ obtained by optical imaging, in particular by monocular or stereoscopic endoscopic imaging.
- the invention can be used in particular in the context of laparoscopic and robotic imaging.
- Registration is an operation making it possible to obtain a correspondence of position and orientation of a virtual 3D model of an object to an optical image of the same object obtained by a camera.
- the objective is to determine the transformation (change of reference and deformation) which must be carried out so that the 3D model of the object and the image of the object having distinct coordinate systems and distinct states are aligned.
- knowledge of this transformation resulting from registration makes it possible to display, in an augmented reality context, the virtual 3D model, resulting from precalculated data, on the image obtained by the camera, representing the real world.
- the objective is then to ensure the tracking or location of the object on the image to display the 3D model, after registration.
- registration methods are implemented in the medical and/or surgical field to match a so-called preoperative 3D model (so called because it is obtained upstream, from one or more medical imaging techniques, e.g. fluoroscopy, ultrasound, MRI, CT, etc.) with a real image obtained by an optical camera, for example an endoscope.
- refinement methods which allow improvement of a previous adjustment, the first adjustment being typically obtained by an initialization method.
- refinement methods also allow adaptation to a change in the shape or position of the object to be tracked.
- the difficulty of initialization methods lies in the absence of precise prior knowledge of the transformation to be implemented, since it is this method which must provide the first registration transformation.
- Known initialization methods can be classified into two categories, automatic methods and manual methods.
- Automatic methods use, for example, descriptors and visual cues or automatic matches to automatically calculate the registration. These methods give variable results, and in a medical and/or surgical context they must be systematically validated or corrected by an operator. These automatic methods are moreover difficult to implement when the 3D model has no texture and is only a shape, since few visual cues are then available, which is usually the case for the preoperative 3D model.
- Methods requiring manipulation of the model require acting on a rotation, position and possibly scale of the 3D model, which can be very complex even for an experienced person: certain objects have several axes of symmetry which complicate the operation, determining the correct correspondence between scale and depth can be complex, the object may only be partially visible, etc.
- Correspondence point methods can be difficult to implement if said correspondence points are difficult to identify, if the colors of the object seen in the image and the 3D model are different, etc.
- the inventors have thus sought to provide a registration method for initialization making it possible to simplify operator interaction and limit the risks of initialization errors common in existing methods.
- the proposed registration method would form a new subcategory of manual methods.
- the invention aims to provide a system and a method for registering a 3D model of a target organ on at least one image of said organ obtained by an endoscope type camera.
- the invention aims to provide, in at least one embodiment, a registration system and method making it possible to simplify the initialization of the registration while presenting a robust result.
- the invention aims to provide, in at least one embodiment, a registration system and method which can be used for registration of a 3D model on an image obtained by an endoscope.
- the invention aims to provide, in at least one embodiment, a registration system and method that does not require intervention on a computer device external to the medical and/or surgical intervention. Presentation of the invention
- the invention relates to a method for registering a virtual three-dimensional model of a target organ, called a 3D model, with at least one image of said target organ in a scene obtained by an endoscope, comprising: a step of receiving the 3D model of the target organ, a step of predicting a position and an orientation of the target organ relative to a reference frame of the scene in which the position and orientation of the endoscope are known, as a function of a reference position and orientation of the endoscope relative to the scene, a step of simulating a position and an orientation of the 3D model as a function of said predicted position and orientation of the target organ, a step of displaying at least one current image of the endoscope on a display device, a step of superimposing on at least one current image of the endoscope a semi-transparent projection of the 3D model as a function of the predicted position and orientation, on the display device, said projection being fixed relative to the endoscope, a step of receiving a command indicating an alignment between the semi-transparent projection of the 3D model and the image of the target organ on the current image, and a step of calculating the position and orientation of the target organ on the current image relative to the reference frame.
- such a registration method facilitates the initialization of the registration by displaying in semi-transparency a projection of the virtual 3D model at a predicted, predefined reference position and orientation: the user manipulates the endoscope until the projection of the 3D model is aligned with the target organ.
- the operator can then indicate that the alignment is done by an interaction with a validation means, for example a physical validation interface or a graphical validation interface.
- the validation means comprises an automatic validation module making it possible to automatically test the alignment of the projection of the 3D model with the target organ, and to suggest to a user that the alignment is correct and/or to send the command indicating an alignment between the semi-transparent projection of the 3D model and the image of the target organ on the current image.
- the command indicating the alignment allows the calculation of the position and orientation of the target organ on the current image relative to the reference mark from the position of the endoscope for the current image when the command indicating alignment is received.
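The pose calculation at validation time reduces to a composition of rigid transforms: the predicted pose of the model in the endoscope frame is fixed, so when the alignment command is received, the organ pose in the reference frame follows from the endoscope pose at that instant. A minimal numpy sketch (all names and values are illustrative, not the patent's implementation):

```python
import numpy as np

def make_pose(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Fixed, predicted pose of the 3D model in the endoscope frame (set preoperatively).
T_model_in_endo = make_pose(np.eye(3), np.array([0.0, 0.0, 0.12]))  # 12 cm in front of the lens

# Pose of the endoscope in the scene reference frame at the moment the
# alignment command is received (illustrative value).
T_endo_in_ref = make_pose(np.eye(3), np.array([0.05, 0.0, 0.30]))

# When the operator validates the alignment, the organ coincides with the
# projected model, so its pose in the reference frame is the composition:
T_organ_in_ref = T_endo_in_ref @ T_model_in_endo
```

The composition order matters: the fixed model-in-endoscope transform is applied first, then mapped into the reference frame through the endoscope pose.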
- the couple formed by the position and orientation of an object or a 3D model is commonly called the pose of this object.
- in known manual methods, the operator interacts with the virtual model so that it corresponds with the real world; here, on the contrary, the operator interacts with the real world (by moving the endoscope) to make it correspond with the virtual model.
- the operation can be carried out without direct interaction with equipment not used in routine medical and/or surgical procedures and therefore very naturally by the operator.
- the method is also particularly suitable when the target organ as visible in the current image and the 3D model of the target organ have distinct states, that is to say they have different shapes, in particular because a deformation applies to the model or to the target organ as visible in the current image.
- the registration method makes it possible to calculate the pose even in the presence of deformations, unlike most prior art methods.
- Predicted position and orientation are linked to prior knowledge of the scene and the reference position and orientation of the endoscope relative to the target organ.
- the predicted position and orientation can thus be calculated or precalculated, manually or automatically, and remain fixed during the rest of the registration process.
- the endoscope is arranged at a standard position and orientation that is substantially identical from one laparoscopy to another, and the position of the 3D model can thus be adjusted according to this standard position and orientation.
- Several reference position and orientation pairs can be precalculated in order to adapt to several possible configurations.
- the endoscope can be arranged at different entry points depending on the type of intervention, the target organ and the possible pathology to be treated.
- the reference frame is for example a frame attached to the endoscope, which makes it possible to easily define a fixed transformation of the 3D model to obtain the projection of the 3D model fixed in the current image.
- other frames can be used if the transformation from this frame to the endoscope frame is known or calculable.
- the current image can be a 2D image or a stereoscopic image depending on the variants of the invention.
- the steps of receiving the 3D model of the target organ, predicting a position and an orientation of the target organ and simulating a position and an orientation of the 3D model can advantageously be carried out “preoperatively”, that is to say before the image capture procedure.
- the steps of displaying at least one current image, superimposing a semi-transparent projection of the 3D model, receiving a command and calculating the position and orientation of the target organ are advantageously carried out in parallel with the image capture procedure, “intraoperatively”.
- the method comprises a step of defining a canonical frame of the 3D model of the organ, said canonical frame being defined by an origin and three axes; the step of predicting a position and an orientation of the target organ then makes it possible to define a transformation of the 3D model of the organ between the canonical frame and the reference frame.
- the transformation of the canonical frame to the reference frame makes it possible to define the position and orientation of the 3D model in the reference frame which will be displayed in the current image as a function of the position and the predicted orientation.
- This step can be carried out upstream, that is to say in the preoperative step, before images of the target organ are captured by the endoscope.
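The transformation from the canonical frame to the reference frame can be represented as a 4x4 homogeneous matrix built from the frame's origin and axes. A small illustrative sketch (the origin and axis values are hypothetical):

```python
import numpy as np

def frame_to_transform(origin, x_axis, y_axis):
    """Build the 4x4 transform taking canonical-frame coordinates into the
    frame in which origin and axes are expressed. The third axis is the
    cross product of the first two (right-handed frame)."""
    x = x_axis / np.linalg.norm(x_axis)
    y = y_axis / np.linalg.norm(y_axis)
    z = np.cross(x, y)
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2] = x, y, z
    T[:3, 3] = origin
    return T

# Canonical frame of the model, expressed in reference-frame coordinates
# (illustrative values):
T_canonical_to_ref = frame_to_transform(
    origin=np.array([0.1, 0.0, 0.2]),
    x_axis=np.array([1.0, 0.0, 0.0]),
    y_axis=np.array([0.0, 1.0, 0.0]),
)

# A vertex of the 3D model given in the canonical frame (homogeneous coords)...
v_canonical = np.array([0.01, 0.02, 0.0, 1.0])
# ...is mapped into the reference frame for display:
v_ref = T_canonical_to_ref @ v_canonical
```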
- the target organ is a uterus comprising in particular a uterine fundus, an anterior uterine wall and a cervix.
- the canonical frame is defined by an origin forming the center of mass of a distal part of the uterus, a first left-right axis of the uterus, a second axis connecting a center of mass of the uterine fundus and a center of mass of the cervix, and a third axis formed by the cross product of the first axis and the second axis.
- the registration method is particularly suitable for registering a 3D model of a uterus on a current image of a uterus as a target organ.
- the particular shape of the uterus allows the definition of a canonical frame adapted to the implementation of the registration method.
- the step of defining the canonical frame comprises: a sub-step of calculating a main axis of the medial part of the uterine fundus, called the uterine fundus axis, a sub-step of calculating a main axis of the medial part of the anterior uterine wall, called the anterior wall axis, a sub-step of calculating the left-right axis of the uterus by the cross product of the uterine fundus axis and the anterior wall axis, a sub-step of calculating the center of mass of the uterine fundus, a sub-step of calculating the center of mass of the cervix, a sub-step of calculating the plane of which each point is equidistant from the center of mass of the uterine fundus and the center of mass of the cervix, a sub-step of determining two parts of the uterus, delimited by the plane: a distal part comprising the uterine fundus and the anterior wall, and a proximal part connected to the cervix, and a sub-step of calculating the center of mass of the distal part, forming the origin of the canonical frame.
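The sub-steps above can be sketched with numpy, using centers of mass and principal axes of labelled vertex sets. The vertex sets and their values below are synthetic stand-ins for a segmented preoperative mesh, not real anatomy:

```python
import numpy as np

def principal_axis(points):
    """Main axis of a point set: dominant right singular vector of the
    centered coordinates (equivalently, first PCA component)."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]

# Hypothetical labelled vertex sets of the preoperative uterus mesh.
rng = np.random.default_rng(0)
fundus = rng.normal([0.0, 0.05, 0.0], [0.03, 0.005, 0.005], (200, 3))
anterior_wall = rng.normal([0.0, 0.02, 0.02], [0.005, 0.02, 0.005], (200, 3))
cervix = rng.normal([0.0, -0.05, 0.0], [0.01, 0.01, 0.01], (100, 3))

n_f = principal_axis(fundus)          # uterine fundus axis
n_w = principal_axis(anterior_wall)   # anterior wall axis
left_right = np.cross(n_f, n_w)       # first axis: left-right axis of the uterus

g_f = fundus.mean(axis=0)             # center of mass of the uterine fundus
g_c = cervix.mean(axis=0)             # center of mass of the cervix
second = g_f - g_c                    # second axis: cervix-to-fundus direction

# The separating plane is the set of points equidistant from g_f and g_c:
plane_point = (g_f + g_c) / 2
plane_normal = second / np.linalg.norm(second)

third = np.cross(left_right, second)  # third axis: cross product of the first two
```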
- the target organ may be an organ other than the uterus, and the canonical frame is then determined according to the general shape of this target organ.
- the target organ can be a liver (the virtual 3D model of which is generally obtained by CT imaging) or a kidney (the virtual 3D model of which is generally obtained by MRI and/or CT imaging), or many other organs.
- the canonical frame can be defined by a center of mass forming the origin and by axes fixed according to the principal axes of the 3D model, obtained for example by principal component analysis of the vertices of the mesh: a first axis extending along the main axis of greatest length and positioned parallel to the image plane of the endoscope, horizontal or vertical depending on the organ considered, a second axis extending along the main axis of smallest length and oriented towards the endoscope, and a third axis formed by the cross product of the first axis and the second axis.
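A PCA-based canonical frame of the kind described can be sketched as follows. The point cloud is a synthetic stand-in for an organ mesh, and the axis ordering follows the text (longest axis first, smallest axis second, third axis by cross product):

```python
import numpy as np

def pca_canonical_frame(vertices):
    """Canonical frame from a mesh: origin at the center of mass, axes from a
    principal component analysis of the vertices."""
    origin = vertices.mean(axis=0)
    centered = vertices - origin
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))  # ascending order
    first = eigvecs[:, -1]           # principal axis of greatest extent
    second = eigvecs[:, 0]           # principal axis of smallest extent
    third = np.cross(first, second)  # completes an orthogonal frame
    return origin, first, second, third

# Illustrative elongated point cloud standing in for a liver mesh
# (anisotropic spread so the principal axes are well separated).
rng = np.random.default_rng(1)
verts = rng.normal(0.0, [0.08, 0.03, 0.01], (500, 3))
origin, a1, a2, a3 = pca_canonical_frame(verts)
```

In practice the first and second axes would additionally be re-oriented relative to the endoscope (parallel to the image plane, towards the camera) as the text describes; that sign convention is omitted here.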
- the method comprises a step of pre-calculating the projection of the 3D model from the 3D model and the predicted position and orientation, upstream of the step of superimposing said projection on at least one current image of the endoscope.
- the projection of the 3D model makes it possible to obtain a simple two-dimensional image which can be easily combined with the current image to create the effect of semi-transparency.
- the projection can be complete but can also consist of a whole or partial silhouette or outline of the target organ.
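The semi-transparency effect itself can be obtained by alpha-blending the projected silhouette into the current image. A minimal numpy sketch; the mask, overlay color and alpha value are illustrative choices, not specified by the text:

```python
import numpy as np

def overlay_projection(frame, projection_mask, color=(0, 255, 0), alpha=0.4):
    """Blend a binary projection mask of the 3D model into the current image
    to produce the semi-transparency effect (alpha = overlay opacity)."""
    out = frame.astype(np.float64)
    rgb = np.asarray(color, dtype=np.float64)
    out[projection_mask] = (1 - alpha) * out[projection_mask] + alpha * rgb
    return out.astype(np.uint8)

# Hypothetical 2D projection of the 3D model as a boolean mask over the frame.
frame = np.full((480, 640, 3), 50, dtype=np.uint8)
mask = np.zeros((480, 640), dtype=bool)
mask[100:300, 200:400] = True

blended = overlay_projection(frame, mask)
```

The same routine works whether the pre-computed projection is a full rendering, a silhouette, or an outline: only the mask changes.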
- the method comprises a step of receiving a 3D model generated from images captured by the endoscope, called an intraoperative 3D model, and a step of calculating a translation and a scale factor between the 3D model of the target organ and the intraoperative 3D model, comprising: a sub-step of expressing the 3D model of the target organ and the intraoperative 3D model in a common reference frame as a function of the position and orientation of the target organ on the current image relative to the reference frame, a sub-step of selecting an origin point in the common frame, a sub-step of generating at least one ray starting from the origin point and extending in the direction of the optical axis of the endoscope, a sub-step of calculating the distance between the origin and the intersection of each ray with the virtual 3D model of the organ, a sub-step of calculating the distance between the origin and the intersection of each ray with the intraoperative 3D model, and a sub-step of calculating the translation and the scale factor from each distance between the origin and the intersection of a ray with the virtual 3D model of the organ and each distance between the origin and the intersection of the same ray with the intraoperative 3D model.
- the calculation of translation and scale factor makes it possible to ensure the registration of the 3D model of the target organ, called the preoperative 3D model, with the target organ as modeled in the model of the scene obtained by the endoscope, called the intraoperative 3D model.
- these steps are relevant if a scale difference exists between the preoperative 3D model and the intraoperative 3D model.
- calculating the position and orientation of the target organ on the current image makes it possible to obtain registration of the 3D model on said image but does not guarantee registration of the preoperative 3D model with the target organ in the intraoperative 3D model, due to the partial knowledge of depth and distance based on a single image.
- the translation and scale factor calculation steps allow the complete registration of the preoperative 3D model with the intraoperative 3D model by matching the marks associated with each model, in particular by allowing the use of a metric mark for each model making it possible to know the dimensions of each object of each 3D model.
- the method comprises the acquisition of several images and a step of selecting, from among the images, an image of higher quality than the other images for registration of the 3D model.
- the acquisition of several images makes it possible to guarantee the acquisition of at least one image of sufficient quality which can be used as a key image for generating the intraoperative 3D model, and monitoring the 3D model of the target organ in relation to this intraoperative 3D model.
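The text does not specify a quality measure for selecting the key image; one common proxy is the variance of a Laplacian filter response, which scores sharp, well-focused frames higher. A numpy-only sketch of that plausible choice:

```python
import numpy as np

def sharpness(gray):
    """Variance of a 4-neighbour Laplacian response: a simple proxy for the
    amount of fine detail in a grayscale image."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def select_key_frame(frames):
    """Index of the highest-quality (sharpest) frame among several captures."""
    return int(np.argmax([sharpness(f) for f in frames]))

# Synthetic frames: one rich in high-frequency content, one featureless
# (standing in for a defocused or smoke-obscured capture).
rng = np.random.default_rng(2)
detailed = rng.random((64, 64))
flat = np.full((64, 64), 0.5)
best = select_key_frame([flat, detailed])
```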
- the invention also relates to a system for registering a virtual three-dimensional model of a target organ with an image of said target organ, comprising: an endoscope, configured to capture said image, a display device, configured to display said image, a validation means, configured to provide a command indicating an alignment between a semi-transparent projection of the 3D model and the image of the target organ, and a processing unit, the processing unit comprising: a module for receiving the 3D model of the target organ, a module for predicting a position and an orientation of the target organ relative to a reference frame of the scene in which the position and orientation of the endoscope are known, as a function of a reference position and orientation of the endoscope relative to the scene, a module for simulating a position and an orientation of the 3D model as a function of said predicted position and orientation of the target organ, a module for displaying at least one current image of the endoscope on the display device, a module for superimposing on at least one current image of the endoscope a semi-transparent projection of the 3D model as a function of the predicted position and orientation, said projection being fixed relative to the endoscope, a module for receiving the command indicating an alignment between the semi-transparent projection of the 3D model and the image of the target organ on the current image, and a module for calculating the position and orientation of the target organ on the current image relative to the reference frame.
- such a registration system allows the display of the projection of the 3D model on at least one current image and the reception of the command indicating the alignment of the projection of the 3D model with the target organ on the current image.
- the information relating to the alignment of the projection of the 3D model with the target organ on the current image allows the calculation of the position of the target organ in the current image and thus the registration of the 3D model of the target organ with its image in the current image and in future images captured by the endoscope.
- a module may for example consist of a computing device such as a computer, a set of computing devices, an electronic component or a set of electronic components, or for example a computer program, a set of computer programs, a library of a computer program or a function of a computer program executed by a computer device such as a computer, a set of computer devices, an electronic component or a set of electronic components.
- the validation means may preferably be a physical validation interface with the operator and may for example comprise a physical button, a lever, a pedal intended to be actuated by an operator's foot, etc.
- the foot pedal allows interaction by the operator without using their hands.
- the validation means can also be a graphical validation interface.
- the validation means can also be without physical contact, for example with a sensor making it possible to detect a gesture by the operator or to detect a sound command.
- the validation means can also be automatic, in particular can include a validation module allowing automatic processing of images and automatic detection of the alignment between the semi-transparent projection of the 3D model and the image of the target organ on the current image, so as to suggest to the user that the alignment is done or to directly send the command indicating such alignment.
- the registration system according to the invention is configured to implement the registration method according to the invention.
- the registration method according to the invention is configured to be implemented by a registration system according to the invention.
- the processing unit comprises a module for defining a canonical frame of the 3D model of the organ, said canonical frame being defined by an origin and three axes; the module for predicting a position and an orientation of the target organ then makes it possible to define a transformation of the 3D model of the organ between the canonical frame and the reference frame.
- the target organ is a uterus comprising in particular a uterine fundus, an anterior uterine wall and a cervix.
- the canonical frame is defined by an origin forming the center of mass of a distal part of the uterus, a first left-right axis of the uterus, a second axis connecting a center of mass of the uterine fundus and a center of mass of the cervix, and a third axis formed by the cross product of the first axis and the second axis.
- the canonical frame definition module is configured for: a calculation of a main axis of the medial part of the uterine fundus, called the uterine fundus axis, a calculation of a main axis of the medial part of the anterior uterine wall, called the anterior wall axis, a calculation of the left-right axis of the uterus by the cross product of the uterine fundus axis and the anterior wall axis, a calculation of the center of mass of the uterine fundus, a calculation of the center of mass of the cervix, a calculation of the plane each point of which is equidistant from the center of mass of the uterine fundus and the center of mass of the cervix, a determination of two parts of the uterus, delimited by the plane: a distal part comprising the uterine fundus and the anterior wall, and a proximal part connected to the cervix, and a calculation of the center of mass of the distal part, forming the origin of the canonical frame.
- the processing unit comprises a module for pre-calculating the projection of the 3D model from the 3D model and the predicted position and orientation.
- the processing unit comprises a module for receiving a 3D model of the scene generated from images captured by the endoscope, called an intraoperative 3D model, and a module for calculating a translation and a scale factor between the 3D model of the target organ and the intraoperative 3D model, configured for: an expression of the 3D model of the target organ and the intraoperative 3D model in a common reference frame as a function of the position and orientation of the target organ on the current image relative to the reference frame, a selection of an origin point in the common frame, a generation of at least one ray starting from the origin point and extending in the direction of the optical axis of the endoscope, a calculation of the distance between the origin and the intersection of each ray with the virtual 3D model of the organ, a calculation of the distance between the origin and the intersection of each ray with the intraoperative 3D model, and a calculation of the translation and the scale factor from each distance between the origin and the intersection of a ray with the virtual 3D model of the organ and each distance between the origin and the intersection of the same ray with the intraoperative 3D model.
- the processing unit comprises a module for acquiring several images and a module for selecting among the images an image of higher quality than the other images for registration of the 3D model.
- the invention also relates to a computer program product for registering a virtual three-dimensional model of a target organ, called a 3D model, with at least one image of said target organ in a scene obtained by the endoscope, said computer program product comprising program code instructions for executing, when said computer program product is executed on a computer, the following steps: a step of receiving the 3D model of the target organ, a step of predicting a position and an orientation of the target organ relative to a reference frame of the scene in which the position and orientation of the endoscope are known, as a function of a reference position and orientation of the endoscope relative to the scene, a step of simulating a position and an orientation of the 3D model as a function of said predicted position and orientation of the target organ, a step of displaying at least one current image of the endoscope on a display device, a step of superimposing on at least one current image of the endoscope a semi-transparent projection of the 3D model as a function of the predicted position and orientation, said projection being fixed relative to the endoscope, a step of receiving a command indicating an alignment between the semi-transparent projection of the 3D model and the image of the target organ on the current image, and a step of calculating the position and orientation of the target organ on the current image relative to the reference frame.
- the registration computer program product according to the invention comprises program code instructions for the execution, when said computer program product is executed on a computer, of the steps of the registration method according to the invention, in particular the steps of the registration process of all the variants of the invention described above.
- the registration method according to the invention is configured to be implemented by a registration computer program product according to the invention.
- the invention also relates to a registration system, a registration method, and a registration computer program product characterized in combination by all or part of the characteristics mentioned above or below.
- FIG. 1 is a schematic view of a registration system 10 according to one embodiment of the invention, integrated into a laparoscopic imaging system, in a first configuration.
- FIG. 2 is a schematic view of a registration system 10 according to one embodiment of the invention, integrated into a laparoscopic imaging system, in a second configuration.
- FIG. 3 is a schematic view of a registration method according to one embodiment of the invention.
- FIG. 4 is a schematic view of a uterus forming a target organ of a registration method according to one embodiment of the invention.
- Figures 1 and 2 schematically represent a registration system 10 according to one embodiment of the invention, integrated into a laparoscopic imaging system.
- the objective of the imaging system is to make it possible to acquire and broadcast images taken in a cavity 50 of the patient's body, here a cavity of the abdomen of a patient (or abdominal cavity 50), in particular in the context of a laparoscopic procedure, for example laparoscopic surgery.
- the laparoscopic surgery operation can for example be intended for intervention on a target organ 52.
- the laparoscopic imaging system comprises a registration system 10 according to one embodiment of the invention, receiving images provided for example by an endoscope type camera 12 configured to acquire images of the patient's abdominal cavity 50.
- the endoscope used in a laparoscopic operation is commonly called a laparoscope.
- the registration system comprises several modules making it possible to implement a method according to the invention, brought together here in a processing unit 16.
- the processing unit 16 is for example a computer or an electronic card comprising a processor, for example a processor dedicated to the image processing of the method according to the invention, or a general-purpose processor configured, among several functions, to execute program instructions for the execution of the steps of the method according to the invention.
- the images acquired from the endoscope 12 are displayed on a display device such as a display screen 18 of the registration system intended for an operator.
- the images acquired can be augmented, that is to say, include additional information added by the laparoscopic imaging system, which can come from a registration system or other devices.
- the registration system 10 is configured to determine the position and orientation of the target organ 52 in a reference frame of the scene in order to allow the display of additional information depending on the position and orientation of the target organ 52.
- an objective is to display a 3D model of the target organ on the image of the target organ 52, which requires registration of the 3D model with the image of the target organ 52.
- the registration system 10 implements a registration process as shown in Figure 3.
- the registration method 100 comprises: a step 110 of receiving the 3D model of the target organ by the processing unit 16, for example provided by an external computer device and obtained by medical imaging, in particular magnetic resonance imaging (MRI), a step 112 of predicting, by the processing unit 16, a position and an orientation of the target organ relative to a reference frame of the scene in which the position and orientation of the endoscope are known, as a function of a reference position and orientation of the endoscope relative to the scene, and a step 114 of simulating, by the processing unit 16, a position and an orientation of the 3D model as a function of said predicted position and orientation of the target organ. These steps can be carried out before the medical and/or surgical procedure (preoperative phase).
- the registration method 100 also comprises the following steps, preferably implemented in parallel with the medical and/or surgical procedure (intraoperative phase): a step 116 of displaying at least one current image of the endoscope 12 on the display device 18, and a step 118 of superimposing on at least one current image of the endoscope a semi-transparent projection 20 of the 3D model as a function of the predicted position and orientation, on the display device 18, said projection being fixed relative to the endoscope 12.
- the semi-transparent projection 20 is shown in dotted lines and is fixed relative to the image.
- the registration method 100 may comprise, upstream of the superposition step 118 and preferably in the preoperative phase, a step 128 of pre-calculating the projection of the 3D model from the 3D model and the predicted position and orientation. This step makes it possible to prepare the projection of the 3D model which will be displayed on the display device 18.
- the operator in charge of handling the endoscope can then attempt to align the image 22 of the target organ in the current image with the semi-transparent projection 20.
- Figure 1 represents a first position where the semi-transparent projection 20 and the image 22 of the target organ are not aligned and
- Figure 2 represents a second position where the semi-transparent projection 20 and the image 22 of the target organ are aligned.
- the operator then uses a validation means, in particular a physical validation interface 24, for example comprising a pedal actuated by the foot, in order to send a command indicating an alignment between the semi-transparent projection of the 3D model and the image 22 of the target organ on the current image.
- the validation means can also be contactless, for example a sensor capable of detecting an operator gesture or an audio command.
- the validation means can also be automatic, in particular comprising a validation module that automatically processes the images and automatically detects the alignment between the semi-transparent projection of the 3D model and the image of the target organ on the current image, so as either to suggest to the user that alignment has been achieved or to send the command indicating such alignment directly.
- the registration method 100 then comprises: a step 120 of receiving the command indicating the alignment between the semi-transparent projection 20 of the 3D model and the image 22 of the target organ on the current image; and a step 122 of calculating the position and orientation of the target organ on the current image relative to the reference frame. The calculation is carried out from the current image at the moment the command indicating the alignment between the projection 20 and the image 22 of the target organ is received.
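Since the projection is fixed relative to the endoscope, the pose of step 122 can be obtained, at the instant of validation, by composing the tracked endoscope pose with the predicted organ-to-endoscope pose. A minimal sketch under that reading (hypothetical names, 4x4 homogeneous transforms):

```python
import numpy as np

def organ_pose_at_alignment(T_ref_endoscope, T_endoscope_organ_predicted):
    """Pose of the target organ in the scene reference frame at validation time.

    T_ref_endoscope: 4x4 pose of the endoscope in the reference frame
        (known, per step 112) at the moment the alignment command arrives.
    T_endoscope_organ_predicted: 4x4 predicted pose of the organ relative
        to the endoscope, i.e. the pose used to render the fixed projection.
    """
    # Composing the two transforms expresses the organ in the reference frame.
    return T_ref_endoscope @ T_endoscope_organ_predicted
```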
- the registration method 100 also includes a step 124 of defining a canonical frame of the 3D model of the organ, said canonical frame being defined by an origin and three axes and preferably carried out in the preoperative phase; step 112 of predicting a position and an orientation of the target organ then makes it possible to define a transformation of the 3D model of the organ between the canonical frame and the reference frame.
- the canonical frame is defined by an origin Gu forming the center of mass of a distal part of the uterus, a first left-right axis of the uterus, a second axis Y connecting a center of mass GF of the uterine fundus to a center Gc of the cervix, and a third axis obtained as the vector product of the first axis and the second axis.
- step 124 of defining the canonical frame for a uterus 200 comprises: a sub-step of calculating a main axis of the medial part of the uterine fundus 210, called the axis NF of the uterine fundus; a sub-step of calculating a main axis of the medial part of the anterior uterine wall 212, called the axis Nw of the anterior wall; a sub-step of calculating the left-right axis of the uterus (not shown) as the vector product of the axis NF of the uterine fundus by the axis Nw of the anterior wall; a sub-step of calculating the center of mass GF of the uterine fundus; a sub-step of calculating the center of mass Gc of the cervix; and a sub-step of calculating a plane P defined as the plane whose every point is equidistant from the center of mass GF of the uterine fundus and the center of mass Gc of the cervix.
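The sub-steps above can be sketched numerically, with each main axis estimated as the dominant direction of a point cloud (via singular value decomposition) and each center of mass taken as the cloud's mean. The helper below is an illustrative assumption: it presumes the mesh has already been segmented into fundus, anterior-wall and cervix point sets, which step 124 does not detail.

```python
import numpy as np

def canonical_frame(fundus_pts, anterior_wall_pts, cervix_pts):
    """Sketch of the canonical-frame axes for a uterus model.

    Each argument is an N x 3 array of surface points of one anatomical region.
    """
    def principal_axis(pts):
        centered = pts - pts.mean(axis=0)
        # Dominant right-singular vector = main elongation axis of the cloud.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return vt[0]

    n_f = principal_axis(fundus_pts)          # axis NF of the uterine fundus
    n_w = principal_axis(anterior_wall_pts)   # axis Nw of the anterior wall
    x_axis = np.cross(n_f, n_w)               # left-right axis (vector product)
    x_axis /= np.linalg.norm(x_axis)

    g_f = fundus_pts.mean(axis=0)             # center of mass GF of the fundus
    g_c = cervix_pts.mean(axis=0)             # center of mass Gc of the cervix
    y_axis = g_f - g_c                        # second axis: Gc -> GF direction
    y_axis /= np.linalg.norm(y_axis)

    # Third axis completes the frame; x and y are only approximately
    # orthogonal in general, so a re-orthogonalization step could follow.
    z_axis = np.cross(x_axis, y_axis)
    z_axis /= np.linalg.norm(z_axis)
    return x_axis, y_axis, z_axis
```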
- the registration method 100 can also include a step 126 of receiving a 3D model generated from images captured by the endoscope, called an intraoperative 3D model, and a step of calculating a translation and a scale factor between the 3D model of the target organ and the intraoperative 3D model, comprising: a sub-step of expressing the 3D model of the target organ and the intraoperative 3D model in a common reference frame as a function of the position and orientation of the target organ on the current image relative to the reference frame; a sub-step of selecting an origin point in the common reference frame; a sub-step of generating at least one ray starting from the origin point and extending in the direction of the optical axis of the endoscope; a sub-step of calculating the distance between the origin and the intersection of each ray with the 3D model of the target organ; a sub-step of calculating the distance between the origin and the intersection of each ray with the intraoperative 3D model; and a sub-step of calculating the translation and the scale factor from each distance so calculated.
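One possible reading of the final sub-step is a simple per-ray estimate: the scale factor as the ratio of mean ray distances, and the translation along the optical axis as the mean residual after scaling. This is only a hedged sketch, not the formulation fixed by the application:

```python
import numpy as np

def scale_and_translation(d_model, d_intra):
    """Estimate scale factor and axial translation between the two 3D models.

    d_model: distances origin -> intersection with the preoperative 3D model,
        one value per ray cast along the endoscope's optical axis.
    d_intra: distances origin -> intersection with the intraoperative 3D model,
        for the same rays.
    """
    d_model = np.asarray(d_model, dtype=float)
    d_intra = np.asarray(d_intra, dtype=float)
    scale = d_intra.mean() / d_model.mean()
    # Mean residual along the optical axis once the scale is applied.
    translation = float(np.mean(d_intra - scale * d_model))
    return float(scale), translation
```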
- the endoscope 12 can be configured to acquire several images, and the registration method 100 can also include a step of selecting, from among these images, an image of higher quality than the others for registration of the 3D model.
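A common image-quality criterion for such a selection is the variance of a discrete Laplacian, a standard focus measure; the application does not fix the metric, so the sketch below is only one plausible choice:

```python
import numpy as np

def sharpness(gray):
    """Variance of a discrete 5-point Laplacian: a simple focus/quality score.

    gray: 2D float array (grayscale frame).
    """
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def select_best_frame(frames):
    """Pick the sharpest frame among several endoscope acquisitions."""
    return max(frames, key=sharpness)
```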
- the invention is not limited to the embodiments described.
- the registration system can be integrated into different types of imaging, in particular other types of medical imaging, notably whenever a position and an orientation of the target organ relative to the endoscope can be predicted.
- the target organ may be different: in a laparoscopic context, other organs of the abdomen that can be operated on by laparoscopy may be target organs, for example a liver or a kidney.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
- Image Processing (AREA)
- User Interface Of Digital Computer (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Magnetic Resonance Imaging Apparatus (AREA)
- Image Analysis (AREA)
Abstract
Description
Claims
Priority Applications (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2025514194A JP2025530176A (ja) | 2022-09-13 | 2023-09-12 | 半透明の表示を通して仮想3dモデルの位置合わせをもたらすためのシステム及び方法 |
| EP23765284.7A EP4588004A1 (fr) | 2022-09-13 | 2023-09-12 | Système et procédé de recalage d'un modèle 3d virtuel par affichage en semi-transparence |
| CA3267119A CA3267119A1 (fr) | 2022-09-13 | 2023-09-12 | System and method for bringing a virtual 3d model into register through display in see-through |
| CN202380063840.XA CN120226044A (zh) | 2022-09-13 | 2023-09-12 | 通过半透明显示配准虚拟3d模型的系统和方法 |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| FRFR2209161 | 2022-09-13 | ||
| FR2209161A FR3139651B1 (fr) | 2022-09-13 | 2022-09-13 | Système et procédé de recalage d’un modèle 3d virtuel par affichage en semi-transparence |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024056676A1 true WO2024056676A1 (fr) | 2024-03-21 |
Family
ID=83899844
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2023/075046 Ceased WO2024056676A1 (fr) | 2022-09-13 | 2023-09-12 | Système et procédé de recalage d'un modèle 3d virtuel par affichage en semi-transparence |
Country Status (6)
| Country | Link |
|---|---|
| EP (1) | EP4588004A1 (fr) |
| JP (1) | JP2025530176A (fr) |
| CN (1) | CN120226044A (fr) |
| CA (1) | CA3267119A1 (fr) |
| FR (1) | FR3139651B1 (fr) |
| WO (1) | WO2024056676A1 (fr) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP4139888A1 (fr) * | 2020-04-21 | 2023-03-01 | Koninklijke Philips N.V. | Détermination automatique d'orientation d'image médicale en 3d |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107580716A (zh) * | 2015-05-11 | 2018-01-12 | 西门子公司 | 2d/2.5d腹腔镜和内窥镜图像数据与3d立体图像数据配准的方法和系统 |
| US20210137634A1 (en) * | 2017-09-11 | 2021-05-13 | Philipp K. Lang | Augmented Reality Display for Vascular and Other Interventions, Compensation for Cardiac and Respiratory Motion |
- 2022
- 2022-09-13 FR FR2209161A patent/FR3139651B1/fr active Active
- 2023
- 2023-09-12 CA CA3267119A patent/CA3267119A1/fr active Pending
- 2023-09-12 JP JP2025514194A patent/JP2025530176A/ja active Pending
- 2023-09-12 EP EP23765284.7A patent/EP4588004A1/fr active Pending
- 2023-09-12 CN CN202380063840.XA patent/CN120226044A/zh active Pending
- 2023-09-12 WO PCT/EP2023/075046 patent/WO2024056676A1/fr not_active Ceased
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107580716A (zh) * | 2015-05-11 | 2018-01-12 | 西门子公司 | 2d/2.5d腹腔镜和内窥镜图像数据与3d立体图像数据配准的方法和系统 |
| US20210137634A1 (en) * | 2017-09-11 | 2021-05-13 | Philipp K. Lang | Augmented Reality Display for Vascular and Other Interventions, Compensation for Cardiac and Respiratory Motion |
Non-Patent Citations (2)
| Title |
|---|
| PUERTO-SOUZA GUSTAVO A ET AL: "Toward Long-Term and Accurate Augmented-Reality for Monocular Endoscopic Videos", IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, IEEE, USA, vol. 61, no. 10, 1 October 2014 (2014-10-01), pages 2609 - 2620, XP011559073, ISSN: 0018-9294, [retrieved on 20140916], DOI: 10.1109/TBME.2014.2323999 * |
| ROBU MARIA R.: "Automatic registration of 3D models to laparoscopic video images for guidance during liver surgery", DISSERTATION, 4 April 2020 (2020-04-04), pages 1,7, 51, XP093006865, Retrieved from the Internet <URL:https://discovery.ucl.ac.uk/id/eprint/10094578> [retrieved on 20221212] * |
Also Published As
| Publication number | Publication date |
|---|---|
| CA3267119A1 (fr) | 2024-03-21 |
| FR3139651B1 (fr) | 2024-10-25 |
| EP4588004A1 (fr) | 2025-07-23 |
| JP2025530176A (ja) | 2025-09-11 |
| FR3139651A1 (fr) | 2024-03-15 |
| CN120226044A (zh) | 2025-06-27 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US12201384B2 (en) | Tracking systems and methods for image-guided surgery | |
| JP7127785B2 (ja) | 情報処理システム、内視鏡システム、学習済みモデル、情報記憶媒体及び情報処理方法 | |
| US9681925B2 (en) | Method for augmented reality instrument placement using an image based navigation system | |
| US20140160264A1 (en) | Augmented field of view imaging system | |
| EP1155383A1 (fr) | Dispositif d'observation endoscopique | |
| Lerotic et al. | Pq-space based non-photorealistic rendering for augmented reality | |
| US20190008592A1 (en) | Registration of a surgical image acquisition device using contour signatures | |
| US20180192964A1 (en) | System and method for scanning anatomical structures and for displaying a scanning result | |
| CN103025227B (zh) | 图像处理设备、方法 | |
| FR3095331A1 (fr) | Procédé de chirurgie orthopédique assistée par ordinateur | |
| US20140003738A1 (en) | Method and apparatus for gaze point mapping | |
| WO2014097702A1 (fr) | Appareil de traitement d'images, dispositif électronique, appareil endoscopique, programme et procédé de traitement d'images | |
| CN112734776A (zh) | 一种微创手术器械定位方法和系统 | |
| CN107111875A (zh) | 用于多模态自动配准的反馈 | |
| WO2024056676A1 (fr) | Système et procédé de recalage d'un modèle 3d virtuel par affichage en semi-transparence | |
| Speidel et al. | Recognition of risk situations based on endoscopic instrument tracking and knowledge based situation modeling | |
| CA3178587A1 (fr) | Methode de prediction de la recidive d'une lesion par analyse d'images | |
| US10049480B2 (en) | Image alignment device, method, and program | |
| WO2025007493A1 (fr) | Procédé d'étalonnage et de navigation automatiques de traitement dentaire assisté par ar basé sur l'apprentissage | |
| US20230288690A1 (en) | Microscope system and system, method, and computer program for a microscope system | |
| EP4557218A1 (fr) | Procédé et système de traitement de données d'image à l'aide d'un système d'ia | |
| JP2018153345A (ja) | 内視鏡位置特定装置、方法およびプログラム | |
| US20240378826A1 (en) | Fast, dynamic registration with augmented reality | |
| HK40115638A (en) | Method and device for real-time tracking of the pose of a camera by automatic management of a key-image database | |
| CN119672375A (zh) | 图像处理方法、装置、设备及存储介质 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23765284 Country of ref document: EP Kind code of ref document: A1 |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 202380063840.X Country of ref document: CN |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2025514194 Country of ref document: JP |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 202547035216 Country of ref document: IN |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2023765284 Country of ref document: EP |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| ENP | Entry into the national phase |
Ref document number: 2023765284 Country of ref document: EP Effective date: 20250414 |
|
| WWP | Wipo information: published in national office |
Ref document number: 202547035216 Country of ref document: IN |
|
| WWP | Wipo information: published in national office |
Ref document number: 202380063840.X Country of ref document: CN |
|
| WWP | Wipo information: published in national office |
Ref document number: 2023765284 Country of ref document: EP |