
CN120431135A - A real-time tracking system and method for spinal motion - Google Patents

A real-time tracking system and method for spinal motion

Info

Publication number
CN120431135A
CN120431135A (application CN202510580261.3A)
Authority
CN
China
Prior art keywords
dimensional
module
tracking
image
camera module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202510580261.3A
Other languages
Chinese (zh)
Other versions
CN120431135B (en)
Inventor
刘新宇
王连雷
田永昊
原所茂
汤世福
王�锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qilu Hospital of Shandong University
Original Assignee
Qilu Hospital of Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qilu Hospital of Shandong University filed Critical Qilu Hospital of Shandong University
Priority to CN202510580261.3A priority Critical patent/CN120431135B/en
Publication of CN120431135A publication Critical patent/CN120431135A/en
Application granted granted Critical
Publication of CN120431135B publication Critical patent/CN120431135B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the field of image processing and discloses a real-time spinal motion tracking system and method. The system comprises a vertebra registration module that registers an intraoperative image of a target vertebra with its preoperative image; a camera module that, after registration, acquires a depth image and a visible light image of the intraoperative target vertebra and reconstructs its three-dimensional surface point cloud from the depth image; a feature point identification module that detects two-dimensional feature points in the visible light image acquired by the camera module and obtains the corresponding three-dimensional associated points in the three-dimensional surface point cloud of the intraoperative target vertebra; an optical flow tracking module that sets position templates for the two-dimensional feature points and their corresponding three-dimensional associated points and accordingly performs optical flow tracking between the current frame image and the previous frame image acquired by the camera module; and a motion tracking module that calculates the motion amplitude of the target vertebra from the optical flow tracking. The invention can directly track the motion of the target vertebra in real time with low computational complexity, ensuring real-time algorithm performance.

Description

System and method for tracking spine movement in real time
Technical Field
The invention relates to the technical field of image processing, in particular to a system and a method for tracking spinal motion in real time.
Background
In orthopedic spine navigation surgery, for example when registering an intraoperative point cloud of the vertebral surface to a preoperative CT or MRT point cloud, one key step is tracking the motion of the target vertebra accurately and in real time during the operation. In current mainstream solutions, a tracer is typically fixed to vertebrae or soft tissue adjacent to the target vertebra in the operative field; assuming the tracer is rigidly connected to the body, an optical tracking system tracks the motion of the tracer as a substitute for the motion of the target vertebra. In reality, however, the connections between vertebrae, and between vertebrae and soft tissue, are non-rigid, so relative motion can occur between the tracer and the target vertebra, and without effective motion compensation the motion of the tracer cannot stand in for the motion of the target vertebra.
For this purpose, a real-time spinal motion tracking algorithm is proposed that directly tracks the target vertebra itself rather than tracers fixed around it. For direct tracking of the target vertebra, existing optical tracking schemes generally reconstruct a surface point cloud with a 3D camera and identify and track distinctive bony features directly on the three-dimensional point cloud; such schemes have high computational complexity and are difficult to realize.
Disclosure of Invention
In view of these shortcomings, the invention provides a system and a method for real-time tracking of spinal motion that can directly track the motion of the target vertebra in real time, reduce computational complexity, and ensure real-time algorithm performance.
The invention provides a real-time spinal motion tracking system, which comprises:
the vertebrae registration module is used for registering the intraoperative image and the preoperative image of the target vertebrae;
the camera module is used for acquiring a depth image and a visible light image of the target vertebra in the operation after registration, setting an operation area tracking area according to the depth image, and reconstructing according to the depth image to obtain a three-dimensional surface point cloud of the target vertebra in the operation;
the characteristic point identification module is used for carrying out two-dimensional characteristic point identification on the visible light image of the target vertebra collected by the camera module and acquiring three-dimensional association points in the three-dimensional surface point cloud of the target vertebra in operation according to the parameters of the camera module;
The optical flow tracking module is used for setting a position template of the two-dimensional characteristic points and the corresponding three-dimensional association points, and accordingly optical flow tracking is carried out between the current frame image and the previous frame image acquired by the camera module;
and the motion tracking module is used for calculating the motion amplitude of the target vertebrae between the adjacent frame images according to the optical flow tracking of the optical flow tracking module, and thus the motion amplitude of the target vertebrae is obtained.
Specifically, the system further comprises an optical tracking module, wherein the pose of the camera module is acquired through the optical tracking unit, an operation area tracking area is reset on an image acquired by the camera module according to the pose, and the two-dimensional feature points and the position templates of the corresponding three-dimensional association points are updated by combining the position templates set by the optical flow tracking module.
More specifically, the optical flow tracking module acquires the position relationship between the optical tracking unit and the camera module in real time, resets the operation area tracking area on the image acquired by the camera module according to the position relationship, and updates the position templates of the two-dimensional feature points and the corresponding three-dimensional association points according to the position relationship.
Furthermore, the camera module and the optical tracking unit are combined into a whole, and the camera module and the optical tracking unit are rigidly connected, so that the position relationship between the optical tracking unit and the camera module can be obtained;
Or the camera module and the optical tracking unit are mutually independent, the camera module is provided with a camera tracer, the optical tracking unit obtains the pose of the camera tracer, the position relation between the optical tracking unit and the camera module is obtained through calculation, and meanwhile, the optical tracking module monitors the spatial movement condition of the camera tracer relative to the reference tracer in real time, the spatial movement condition of the camera module relative to an affected part of a patient is calculated according to the spatial movement condition, and the corresponding position template is updated according to the spatial movement condition.
Specifically, a plurality of marking points are set on the target vertebra of the patient and the marking points are identified in the preoperative image of the target vertebra; during the operation, the marking points on the target vertebra are touched with a navigation probe, the vertebra registration module obtains the position information of the optical tracer array on the navigation probe through an optical tracking unit, calculates the position of each marking point in the intraoperative image of the target vertebra, and registers the intraoperative image of the target vertebra with the preoperative image accordingly.
Specifically, the registration of the vertebra registration module comprises primary registration and re-registration, wherein the primary registration is the first registration at the beginning of an operation, and the re-registration is an operation after the motion tracking module judges that the motion amplitude of the target vertebra exceeds a set threshold.
Specifically, the camera module frames a target vertebra and an adjacent area within a set range thereof on a depth image and a visible light image of an affected part of a patient acquired by the camera module as the operation area tracking area.
Specifically, the ratio of the number of points in the three-dimensional surface point cloud of the intraoperative target vertebra reconstructed by the camera module to the number of two-dimensional points on its visible light image is greater than 80%.
Specifically, corner detection is performed on the visible light image based on a computer vision library, thereby completing the two-dimensional feature point identification of the visible light image of the target vertebra acquired by the camera module.
Specifically, the feature point identification module acquires the number of two-dimensional feature points with corresponding three-dimensional association points on the visible light image of the target vertebra, and if the ratio of the number to the total number of the two-dimensional feature points on the visible light image of the target vertebra is lower than a set ratio, the intra-operative image and the pre-operative image of the target vertebra are registered again through the vertebra registration module until the ratio is higher than the set ratio.
Specifically, if the current frame image acquired by the camera module is the first frame image after first registration or re-registration, the optical flow tracking module verifies whether a three-dimensional associated point exists at a corresponding coordinate of a three-dimensional point cloud associated with a pixel coordinate of a two-dimensional feature point in the visible light image, if so, the three-dimensional point is considered to be effective and is set as a position template of the corresponding three-dimensional associated point in the three-dimensional point cloud, and meanwhile, the corresponding two-dimensional feature point is set as a position template of the two-dimensional feature point, and the position templates of the two-dimensional feature point and the three-dimensional associated point are not changed before next registration;
If the current frame image acquired by the camera module is a non-first frame image after first registration or re-registration, setting the two-dimensional feature points in the previous frame image and the three-dimensional associated points in the corresponding three-dimensional point cloud as corresponding position templates, and detecting the two-dimensional feature points in a search area with a set size on the current frame image by using the optical flow tracking module as a center by taking the position templates of the two-dimensional feature points in the previous frame image as the center, wherein if the two-dimensional feature points are detected, the two-dimensional feature points are successfully tracked.
More specifically, the search area is set to the motion range formed by expanding outward k times in the upward, downward, leftward, and rightward directions, taking the target vertebra as the reference.
More specifically, the optical flow tracking module calculates the ratio between the number of successfully tracked two-dimensional feature points and the number of the position templates of the two-dimensional feature points in the previous frame of image, and when the ratio is smaller than a set proportion, the intra-operative image and the pre-operative image of the target vertebra are registered again through the vertebra registration module until the ratio is higher than the set proportion.
Specifically, the optical flow tracking module adopts sparse optical flow tracking for optical flow tracking between the current frame image and the previous frame image acquired by the camera module.
Specifically, the motion tracking module calculates Euclidean distance between corresponding three-dimensional points in adjacent frame images according to optical flow tracking of the optical flow tracking module, calculates an average value of accumulated values of the Euclidean distance, and then obtains motion amplitude of the target vertebrae between the adjacent frame images.
More specifically, when a two-dimensional feature point has no corresponding three-dimensional point, the motion tracking module instead calculates the Euclidean distance between that two-dimensional feature point and its corresponding two-dimensional feature point in the previous frame image, and this distance participates in the accumulation.
The invention also provides a spine motion real-time tracking method based on the spine motion real-time tracking system, which comprises the following steps:
S1, registering the intraoperative image of the target vertebra with the preoperative image by means of the vertebra registration module;
S2, acquiring a depth image and a visible light image of the intraoperative target vertebra with the camera module, setting an operation area tracking area accordingly, and reconstructing the three-dimensional surface point cloud of the intraoperative target vertebra from the depth image;
S3, the feature point identification module carries out feature point identification on the visible light image of the target vertebra obtained in the S2, and three-dimensional association points in the three-dimensional surface point cloud of the intraoperative target vertebra obtained in the S2 are obtained according to parameters of the camera module;
S4, the optical flow tracking module judges whether the current frame image acquired by the camera module is the first frame image registered by the S1;
If yes, setting the two-dimensional characteristic points in the visible light image and the three-dimensional associated points in the corresponding three-dimensional point cloud as corresponding position templates;
Otherwise, setting two-dimensional characteristic points in a visible light image in the previous frame image acquired by the camera module and three-dimensional association points in the corresponding three-dimensional point cloud as corresponding position templates, and carrying out optical flow tracking between the current frame image and the previous frame image acquired by the camera module;
and S5, the motion tracking module calculates the motion amplitude of the target vertebrae between the adjacent frame images according to the optical flow tracking of the S4, and the motion amplitude of the target vertebrae is obtained.
The method has the advantages that optical flow tracking is performed on the visible light image based on two-dimensional feature point recognition, then the spatial movement of the corresponding three-dimensional point cloud is directly indexed by the two-dimensional feature points to replace the movement of vertebrae, a 2D-3D cooperative tracking system is constructed, and real-time movement tracking can be directly performed on the target vertebrae. Compared with the method for directly identifying and tracking the bone characteristics on the three-dimensional point cloud, the method for identifying and tracking the two-dimensional characteristic points greatly reduces the computational complexity, and further ensures the algorithm instantaneity.
Drawings
In order to more clearly illustrate the technical solutions of the present invention, the following description will briefly explain the drawings required to be used in the description of the embodiments, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is an exemplary diagram of the fusion of a visible light image and the three-dimensional surface point cloud reconstructed therefrom, wherein fig. 1 (a) is an exemplary visible light image of a vertebra, fig. 1 (b) is an exemplary reconstructed three-dimensional surface point cloud of a vertebra, and fig. 1 (c) is an exemplary image fusing the visible light image of a vertebra with the three-dimensional surface point cloud reconstructed therefrom;
FIG. 2 is an exemplary diagram of a correlation of two-dimensional feature points of a visible light image with corresponding three-dimensional points in a reconstructed three-dimensional surface point cloud, wherein the left diagram of FIG. 2 is an exemplary diagram of FIG. 1 (a) and the operating region tracking area set thereon, and the right diagram of FIG. 2 is an exemplary diagram of FIG. 1 (c) and the operating region tracking area set thereon;
FIG. 3 is a diagram illustrating an exemplary configuration for performing real-time tracking of spinal motion in accordance with one embodiment of the present invention;
FIG. 4 is a diagram illustrating an example of a structure for performing real-time tracking of spinal motion in accordance with another embodiment of the present invention;
fig. 5 is a flow chart of the method of the present invention for real-time tracking of spinal motion.
In the figures: 1. optical tracking unit; 2. patient; 3. fiducial reference tracer; 4. anatomical region; 5. target vertebra; 6. hospital bed; 7. RGB-D camera; 8. camera tracer;
71. depth camera; 72. visible light camera;
A. operation area tracking area; P1. two-dimensional feature point; P2. three-dimensional point associated with the two-dimensional feature point.
Detailed Description
The present application will be further described in detail below with reference to specific embodiments and with reference to the accompanying drawings, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be noted that unless otherwise defined, technical or scientific terms used in the embodiments of the present application should be given the ordinary meaning as understood by one of ordinary skill in the art to which the present application belongs.
The invention provides a real-time spinal motion tracking system, comprising:
the vertebrae registration module is used for registering the intraoperative image of the target vertebrae with the preoperative image of the target vertebrae;
The camera module is used for acquiring a depth image and a visible light image of the target vertebra in the operation after registration, setting an operation area tracking area according to the depth image, and reconstructing according to the depth image to obtain a three-dimensional surface point cloud of the target vertebra in the operation;
the characteristic point identification module is used for carrying out two-dimensional characteristic point identification on the visible light image of the target vertebra collected by the camera module and acquiring three-dimensional association points in the three-dimensional surface point cloud of the target vertebra in operation according to the parameters of the camera module;
The optical flow tracking module is used for setting a position template of the two-dimensional characteristic points and the corresponding three-dimensional association points, and accordingly optical flow tracking is carried out between the current frame image and the previous frame image acquired by the camera module;
And the motion tracking module is used for calculating the motion amplitude between the images of the adjacent frames according to the optical flow tracking of the optical flow tracking module, and obtaining the motion amplitude of the target vertebrae.
The invention also comprises an optical tracking module which acquires the pose of the camera module through the optical tracking unit, resets the tracking area of the operation area on the image acquired by the camera module according to the pose, and updates the position templates of the two-dimensional feature points and the corresponding three-dimensional association points by combining the position templates set by the optical flow tracking module.
In the invention, a plurality of mark points are arranged on a target vertebra of a patient, the mark points in the preoperative image of the target vertebra are acquired, the mark points on the target vertebra of the patient are point-touched through a navigation probe in the operation, the vertebra registration module acquires the position information of an optical tracer array on the navigation probe through an optical tracking unit, further calculates the position information of each mark point, and displays the position information in the intraoperative image of the target vertebra, thereby registering the intraoperative image of the target vertebra with the preoperative image.
In the invention, the optical tracer array is mounted on the navigation probe and the probe tip is touched to each marking point in turn; the optical tracking module obtains the position of the probe tip by acquiring the pose of the optical tracer array, so that the positions of all the marking points can be obtained.
In the invention, the vertebra registration module acquires the positions of the marking points touched by the navigation probe on the surface of the target vertebra during the operation and displays them in the intraoperative image of the target vertebra; it then performs coarse registration between these positions and the corresponding marking points in the preoperative image of the target vertebra by marker matching, followed by fine registration with an ICP algorithm. In the invention, the registration accuracy can be verified by randomly selecting verification points on the surface of the target vertebra with the navigation probe, ensuring accurate matching between the intraoperative and preoperative images of the target vertebra and meeting the accuracy requirements of the optical flow tracking module and the motion tracking module.
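For illustration only (not part of the original disclosure), the paired-marker coarse registration described above could be sketched as a standard Kabsch/SVD rigid fit; the function name and the assumption that the intraoperative and preoperative marker lists are ordered so that entries correspond are hypothetical, and the subsequent fine registration would rely on a separate ICP implementation.

```python
import numpy as np

def rigid_transform_from_landmarks(src, dst):
    """Estimate rotation R and translation t mapping src -> dst (Kabsch/SVD).

    src, dst: (N, 3) arrays of paired marker coordinates, e.g. probe-touched
    intraoperative marker positions and the same markers picked in the
    preoperative image. Names and ordering assumption are illustrative only.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```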
In the invention, the preoperative image of the target vertebra of the patient can be a preoperative CT image or MRT point cloud of the target vertebra of the patient, and the preoperative image can be obtained through pre-acquisition. The intraoperative image of the target vertebra may be a CBCT point cloud of the target vertebra acquired intraoperatively by a C-arm machine or a depth point cloud acquired by a depth camera.
In the invention, the real-time tracking of the spinal motion can be applied to various spinal navigation operations, and the embodiment only takes the registration of the intraoperative three-dimensional point cloud of the affected part of the patient, namely the target vertebra, and the preoperative CT image or the MRT point cloud as an example for illustration.
In the present invention, the preoperative image of the affected part is generally a pure vertebra point cloud obtained after processing, whereas intraoperatively only part of the vertebra is exposed and is surrounded by various tissues, so the intraoperative scene does not correspond directly to the pure vertebra region of the preoperative image; registration is therefore performed between the exposed intraoperative target vertebra and the corresponding vertebral region in the preoperative image of the affected part, as shown in fig. 1. In the existing mainstream scheme, the surgeon cleans the surface tissue from specific bony features such as the spinous and transverse processes, which improves registration accuracy and lays a foundation for the subsequent tracking algorithm. In general, the larger the area of tissue cleaned from the vertebral surface, the higher the registration accuracy and the higher the accuracy of subsequent real-time spinal motion tracking.
In the invention, the registration performed by the vertebra registration module involves primary registration and re-registration, where primary registration is the first registration at the beginning of the operation and re-registration is performed after the motion amplitude of the target vertebra is found to exceed a set threshold. The two differ only in when they occur; the registration method need not be the same in both cases, and any registration method that meets the current registration accuracy requirement may be used. In the invention, the set threshold can be determined according to actual requirements.
In the invention, after the surgeon moves the camera module to a suitable working distance and angle in the actual surgical scene, the camera's region of interest covers the entire anatomical region. To accelerate the reconstruction of the three-dimensional surface point cloud of the affected part and to reduce interference from non-bone features when identifying two-dimensional feature points in the visible light image, the target vertebra and an adjacent region within a set range, chosen in view of the allowable motion amplitude of the vertebra, are framed on the depth image and the visible light image as the operation area tracking area, such as the region indicated by A in fig. 2.
In the invention, because the accuracy requirement for spinal registration is high, the target vertebra is generally only allowed millimeter-level motion after registration. The camera module used in the invention therefore reaches sub-millimeter accuracy and the points in the reconstructed three-dimensional surface point cloud are dense; specifically, the ratio of the number of points in the reconstructed three-dimensional surface point cloud to the number of two-dimensional points on the visible light image of the target vertebra is greater than 80%.
The feature point identification module performs two-dimensional feature point identification on the visible light image of the target vertebra; specifically, corner detection is performed on the visible light image based on a computer vision library such as OpenCV to obtain the corresponding two-dimensional feature points. The corner types include, but are not limited to, robust corners such as Harris and Shi-Tomasi. In the invention, the selected corner type is matched to the subsequent optical flow tracking algorithm.
Furthermore, based on the corner detection result, the invention can further compute the corresponding sub-pixel corners to obtain more accurate two-dimensional feature points, improving the precision of the subsequent optical flow tracking algorithm and yielding a good tracking effect.
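As an illustrative sketch only, the corner detection with sub-pixel refinement described above could be performed with OpenCV roughly as follows; the function name, the mask restricting detection to the operation area tracking area, and the parameter values are assumptions rather than values taken from the disclosure.

```python
import cv2
import numpy as np

def detect_feature_points(gray, roi_mask, max_corners=200):
    """Shi-Tomasi corners refined to sub-pixel accuracy (illustrative parameters).

    gray:     visible light image converted to 8-bit grayscale.
    roi_mask: uint8 mask marking the operation area tracking area (region A).
    Returns an (N, 2) array of sub-pixel corner coordinates.
    """
    corners = cv2.goodFeaturesToTrack(
        gray, maxCorners=max_corners, qualityLevel=0.01,
        minDistance=5, mask=roi_mask)
    if corners is None:
        return np.empty((0, 2), dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
    cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1), criteria)
    return corners.reshape(-1, 2)
```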
Existing camera modules such as RGB-D cameras are calibrated before leaving the factory and generally already align the visible light image with the depth image; that is, there is a mapping function, given by the RGB-D camera parameters, relating pixel coordinates in the visible light image to point coordinates in the three-dimensional surface point cloud of the depth image. The three-dimensional associated points in the three-dimensional surface point cloud of the intraoperative target vertebra can therefore be obtained from the two-dimensional feature points of the visible light image, as shown by P2 in fig. 2, and each three-dimensional point in the surface point cloud can find a uniquely corresponding two-dimensional pixel in the visible light image through the mapping function, as shown by P1 in fig. 2. Because the depth camera and the visible light camera in the camera module are rigidly connected, the mapping function does not change as long as the mechanical structure does not change, so the coordinates of the three-dimensional associated points can be obtained directly from the pixel coordinates of the two-dimensional feature points detected in each visible light frame, saving the time of computing bone features on the three-dimensional surface point cloud every frame. It should be noted that, owing to factors such as the reconstruction angle, the number of three-dimensional points in the surface point cloud reconstructed by the camera module is not exactly equal to the number of two-dimensional feature points in the visible light image; generally it is less than or equal to that number. It is therefore necessary to further check whether a three-dimensional associated point exists for each two-dimensional feature point. When the proportion of missing points reaches a certain level, the intraoperative and preoperative images of the target vertebra should be registered again for safety; at that point the position of the camera module can be adjusted appropriately, and the images are re-registered through the vertebra registration module until the proportion of feature points with valid three-dimensional associated points exceeds the set ratio.
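A minimal sketch of the 2D-3D association, assuming a standard pinhole model with intrinsics fx, fy, cx, cy and a depth image aligned to the visible light image in which 0 denotes a missing measurement; real RGB-D SDKs typically expose an equivalent deprojection call, so the helper below is only illustrative.

```python
import numpy as np

def associate_3d_point(u, v, depth, fx, fy, cx, cy):
    """Return the 3D point (camera frame) behind pixel (u, v), or None.

    depth: depth image aligned to the visible light image (assumed here to be
    in millimeters, with 0 meaning no measurement). A None return corresponds
    to a 2D feature point that has no valid 3D associated point.
    """
    z = float(depth[int(round(v)), int(round(u))])
    if z <= 0:
        return None                      # no valid 3D associated point
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])
```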
In the invention, if the current frame image acquired by the camera module is the first frame image after first registration or re-registration, the optical flow tracking module verifies whether a three-dimensional associated point exists at the corresponding coordinate of the three-dimensional point cloud associated with the pixel coordinate of the two-dimensional feature point in the visible light image, if so, the three-dimensional point is considered to be effective and is set as a position template of the corresponding three-dimensional associated point in the three-dimensional point cloud, and meanwhile, the corresponding two-dimensional feature point is set as a position template of the two-dimensional feature point, and the position templates of the two-dimensional feature point and the three-dimensional associated point are not changed before the next registration.
In the invention, if the current frame image acquired by the camera module is a non-first frame image after first registration or re-registration, the two-dimensional feature points in the previous frame image and the three-dimensional associated points in the corresponding three-dimensional point cloud are set as corresponding position templates, the optical flow tracking module takes the position templates of the two-dimensional feature points in the previous frame image as the center, performs two-dimensional feature point detection in a search area with a set size on the current frame image, and if the two-dimensional feature points are detected, the two-dimensional feature point tracking is successful.
In a specific embodiment of the present invention, the search area with a set size may take a rectangular or circular shape.
In the invention, to improve recognition accuracy, the search area is generally set neither too large nor too small, so as to avoid frequent re-registration caused by interruption of the operation when a large number of two-dimensional feature points fail to be recognized during tracking, and to avoid tracking anomalies, and hence medical accidents, caused by misidentifying a large number of corners. In the present invention, the search area is generally set to the motion range formed by expanding outward k times in the upward, downward, leftward, and rightward directions, taking the target vertebra as the reference, with k preferably 2 to 4.
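One plausible reading of this search area construction, shown here only as a sketch: a rectangle centered on the two-dimensional position template whose half-extent is k times a base size and clamped to the image bounds. The base size and the exact meaning of "expanding k times" are assumptions, as is the helper name.

```python
def search_window(template_xy, base_half_size, k, img_w, img_h):
    """Rectangular search area around a 2D position template (illustrative).

    base_half_size: half-extent in pixels of the reference region; k is the
    expansion factor (2-4 per the text). Returns (x0, y0, x1, y1) clamped to
    the image bounds.
    """
    x, y = template_xy
    r = k * base_half_size
    x0, y0 = max(0, int(x - r)), max(0, int(y - r))
    x1, y1 = min(img_w - 1, int(x + r)), min(img_h - 1, int(y + r))
    return x0, y0, x1, y1
```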
In the invention, the optical flow tracking module calculates the ratio between the number of successfully tracked two-dimensional feature points and the number of the position templates of the two-dimensional feature points in the previous frame of image, and the intra-operative image of the target vertebra and the pre-operative image of the target vertebra are registered again through the vertebra registration module when the ratio is smaller than the set ratio until the ratio is higher than the set ratio.
In the invention, sparse optical flow tracking is used for the optical flow tracking. Compared with dense optical flow tracking, sparse optical flow tracking selects key pixels to represent the whole, involves little computation, runs fast, and suits scenes with high real-time requirements; it may be less accurate than dense optical flow when handling large-range object motion, but in the application scenario of the invention the target vertebra is generally only allowed small, millimeter-level motion, so sparse optical flow tracking can be used.
Furthermore, the invention can adopt LK (Lucas-Kanade) optical flow tracking, which handles moving objects at different scales, reduces noise interference, and has good robustness.
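A minimal sketch of pyramidal Lucas-Kanade sparse optical flow between consecutive visible light frames using OpenCV; the window size, pyramid depth, and termination criteria below are illustrative assumptions, not values from the disclosure.

```python
import cv2
import numpy as np

def track_sparse_lk(prev_gray, curr_gray, prev_pts):
    """Track 2D feature points with pyramidal LK optical flow.

    prev_pts: (N, 1, 2) float32 array of position templates from the previous
    frame. Returns (tracked_prev, tracked_curr) containing only the points
    whose status flag indicates a successful track.
    """
    curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    ok = status.reshape(-1) == 1
    return prev_pts[ok], curr_pts[ok]
```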
In the invention, during the operation the surgeon may intentionally or unintentionally move the camera module, such as an RGB-D camera, to obtain a better operating position or to move it out of the way. At that moment the positions of the two-dimensional feature point templates and the corresponding three-dimensional associated point templates set during registration change relative to the camera module, and if the template positions are not corrected the operation may be interrupted.
The specific correction is as follows:
The optical flow tracking module acquires the position relation between the optical tracking unit and the camera module in real time, resets the operation area tracking area on the image acquired by the camera module according to the position relation, and updates the position templates of the two-dimensional feature points and the corresponding three-dimensional association points according to the position relation.
Further, referring to fig. 3, in one embodiment of the present invention the patient 2 lies on the hospital bed 6, an anatomical region 4 is exposed at the affected part and contains the target vertebra 5, and a fiducial reference tracer 3 is arranged on the affected part of the patient. The RGB-D camera 7 is integrated with the optical tracking unit 1: the binocular camera of the optical tracking unit 1 and the depth camera 71 and visible light camera 72 of the RGB-D camera 7 are combined and rigidly connected, so that the positional relationship between the optical tracking unit 1 and the RGB-D camera 7 is fixed and known. At the same time, the optical tracking unit 1 can acquire the pose of the fiducial reference tracer 3.
Further, referring to fig. 4, the patient 2 lies on the hospital bed 6, an anatomical region 4 is exposed at the affected part and contains the target vertebra 5, and a fiducial reference tracer 3 is arranged on the affected part of the patient. Here the RGB-D camera 7 and the optical tracking unit 1 are arranged independently, and a camera tracer 8 is mounted on the RGB-D camera 7. The optical tracking unit 1 obtains the pose of the camera tracer 8 through its binocular camera and optical sensor, and the positional relationship between the optical tracking unit 1 and the RGB-D camera 7 is then obtained by calculation. At the same time, the spatial motion of the camera tracer 8 relative to the fiducial reference tracer 3 is monitored in real time, which gives the spatial motion of the RGB-D camera 7 relative to the affected part of the patient, and the optical tracking module can correct accordingly to update the position templates.
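As an illustrative sketch, the template correction after a camera move could re-express the three-dimensional position templates in the new camera pose and re-project them with the pinhole intrinsics used above; the 4x4 transform T_new_old, assumed to be derived from the optical tracking measurements, and the function name are hypothetical.

```python
import numpy as np

def update_templates(points_3d, T_new_old, fx, fy, cx, cy):
    """Re-express 3D position templates after a camera move and re-project them.

    points_3d: (N, 3) templates in the old camera frame.
    T_new_old: 4x4 homogeneous transform taking old-camera coordinates to
               new-camera coordinates (assumed to come from the optical
               tracking unit). Returns updated 3D templates and their 2D
               pixel templates.
    """
    pts_h = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    new_pts = (T_new_old @ pts_h.T).T[:, :3]
    u = fx * new_pts[:, 0] / new_pts[:, 2] + cx
    v = fy * new_pts[:, 1] / new_pts[:, 2] + cy
    return new_pts, np.stack([u, v], axis=1)
```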
According to the invention, the motion tracking module computes the Euclidean distances between corresponding three-dimensional points in adjacent frame images from the optical flow tracking of the optical flow tracking module, and the average of their accumulated values gives the motion amplitude of the target vertebra between adjacent frame images.
Further, the motion tracking module may further determine whether the motion amplitude of the target vertebra between the adjacent frame images is greater than the set threshold, and if so, the intra-operative image of the target vertebra and the pre-operative image of the target vertebra need to be registered again by the vertebra registration module.
In the invention, not every two-dimensional feature point has a corresponding three-dimensional point; in that case the motion tracking module calculates the Euclidean distance between the two-dimensional feature point and its corresponding two-dimensional feature point in the previous frame image as a substitute, and this distance participates in the accumulation.
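A sketch of the motion amplitude computation: the mean Euclidean distance between corresponding three-dimensional points of adjacent frames, with the two-dimensional pixel distance substituted, as the text describes, for feature points lacking a valid three-dimensional associated point. The function name, data layout, and handling of the mixed units are assumptions.

```python
import numpy as np

def motion_amplitude(pairs_3d, pairs_2d_fallback):
    """Mean displacement of the target vertebra between adjacent frames.

    pairs_3d:          list of (p_prev, p_curr) 3D associated points (e.g. mm).
    pairs_2d_fallback: list of (q_prev, q_curr) 2D feature points, used only
                       when a feature has no valid 3D point; per the text the
                       2D distance simply substitutes in the accumulation.
    """
    dists = [np.linalg.norm(np.subtract(c, p)) for p, c in pairs_3d]
    dists += [np.linalg.norm(np.subtract(c, p)) for p, c in pairs_2d_fallback]
    return float(np.mean(dists)) if dists else 0.0
```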
The invention also provides a real-time tracking method for the spinal motion, which is shown in fig. 5 and comprises the following steps:
S1, registering an intraoperative image of a target vertebra with a preoperative image of the target vertebra through a vertebra registration module;
S2, acquiring a depth image and a visible light image of the intraoperative target vertebra with the camera module, setting an operation area tracking area accordingly, and reconstructing the three-dimensional surface point cloud of the intraoperative target vertebra from the depth image;
S3, the feature point identification module carries out feature point identification on the visible light image of the target vertebra obtained in the S2, and three-dimensional association points in the three-dimensional surface point cloud of the intraoperative target vertebra obtained in the S2 are obtained according to parameters of the camera module;
S4, the optical flow tracking module judges whether the current frame image acquired by the camera module is the first frame image registered by the S1;
If yes, setting the two-dimensional characteristic points in the visible light image and the three-dimensional associated points in the corresponding three-dimensional point cloud as corresponding position templates;
Otherwise, setting two-dimensional characteristic points in a visible light image in the previous frame image acquired by the camera module and three-dimensional association points in the corresponding three-dimensional point cloud as corresponding position templates, and carrying out optical flow tracking between the current frame image and the previous frame image acquired by the camera module;
And S5, the motion tracking module calculates the motion amplitude between the adjacent frame images according to the optical flow tracking of the S4, and the motion amplitude of the target vertebrae is obtained.
According to the invention, a visible light camera is added on the basis of a depth camera, a visible light image is simultaneously acquired on the basis of three-dimensional surface point cloud reconstruction, optical flow tracking is performed on the visible light image based on two-dimensional characteristic point recognition, and then the spatial movement of the corresponding three-dimensional point cloud is directly indexed by the two-dimensional characteristic points to replace the movement of vertebrae, so that a 2D-3D collaborative tracking system is constructed. Compared with the method for directly identifying and tracking the bone characteristics on the three-dimensional point cloud, the method for identifying and tracking the two-dimensional characteristic points greatly reduces the computational complexity, and further ensures the algorithm instantaneity.
It will be appreciated by persons skilled in the art that the foregoing discussion of any embodiment is merely exemplary and is not intended to imply that the scope of the invention (including the claims) is limited to these examples, that combinations of technical features in the foregoing embodiments or in different embodiments may be implemented in any order and that many other variations of the different aspects of the embodiments described above exist within the spirit of the invention, which are not provided in detail for clarity.
The present embodiments are intended to embrace all such alternatives, modifications and variances which fall within the broad scope of the appended claims. Therefore, any omissions, modifications, equivalent substitutions, improvements, and the like, which are within the spirit and principles of the embodiments of the invention, are intended to be included within the scope of the invention.

Claims (17)

1. A real-time spinal motion tracking system, comprising:
a vertebra registration module, used to register the intraoperative image of a target vertebra with its preoperative image;
a camera module, used to acquire a depth image and a visible light image of the intraoperative target vertebra after registration, set an operation area tracking area accordingly, and reconstruct a three-dimensional surface point cloud of the intraoperative target vertebra from the depth image;
a feature point identification module, used to perform two-dimensional feature point identification on the visible light image of the target vertebra acquired by the camera module and obtain three-dimensional associated points in the three-dimensional surface point cloud of the intraoperative target vertebra according to the parameters of the camera module;
an optical flow tracking module, used to set position templates of the two-dimensional feature points and their corresponding three-dimensional associated points and accordingly perform optical flow tracking between the current frame image and the previous frame image acquired by the camera module;
and a motion tracking module, which calculates the motion amplitude of the target vertebra between adjacent frame images according to the optical flow tracking of the optical flow tracking module, thereby obtaining the motion amplitude of the target vertebra.
2. The real-time spinal motion tracking system according to claim 1, characterized in that it further comprises an optical tracking module, which obtains the pose of the camera module through an optical tracking unit, resets the operation area tracking area on the image acquired by the camera module accordingly, and updates the position templates of the two-dimensional feature points and their corresponding three-dimensional associated points in combination with the position templates set by the optical flow tracking module.
3. The real-time spinal motion tracking system according to claim 2, characterized in that the optical flow tracking module obtains the positional relationship between the optical tracking unit and the camera module in real time, resets the operation area tracking area on the image acquired by the camera module according to that relationship, and updates the position templates of the two-dimensional feature points and their corresponding three-dimensional associated points according to that relationship.
4. The real-time spinal motion tracking system according to claim 3, characterized in that the camera module and the optical tracking unit are combined into one and rigidly connected, so that the positional relationship between the optical tracking unit and the camera module is obtained;
or the camera module and the optical tracking unit are arranged independently of each other, a camera tracer is mounted on the camera module, the optical tracking unit obtains the pose of the camera tracer and the positional relationship between the optical tracking unit and the camera module is obtained by calculation, and at the same time the optical tracking module monitors in real time the spatial motion of the camera tracer relative to a fiducial reference tracer, calculates therefrom the spatial motion of the camera module relative to the affected part of the patient, and updates the corresponding position templates accordingly.
5. The real-time spinal motion tracking system according to claim 1, characterized in that a plurality of marking points are set on the target vertebra of the patient and the marking points are obtained in the preoperative image of the target vertebra; during the operation, the marking points on the target vertebra are touched by a navigation probe, and the vertebra registration module obtains the position information of the optical tracer array on the navigation probe through the optical tracking unit, calculates the position information of each marking point in the intraoperative image of the target vertebra, and registers the intraoperative image of the target vertebra with the preoperative image accordingly.
6. The real-time spinal motion tracking system according to claim 1, characterized in that the registration of the vertebra registration module comprises primary registration and re-registration, the primary registration being the first registration at the beginning of the operation and the re-registration being performed after the motion tracking module determines that the motion amplitude of the target vertebra exceeds a set threshold.
7. The real-time spinal motion tracking system according to claim 1, characterized in that the camera module frames the target vertebra and its adjacent area within a set range on the depth image and the visible light image of the affected part of the patient acquired by it as the operation area tracking area.
8. The real-time spinal motion tracking system according to claim 1, characterized in that the ratio between the number of points in the three-dimensional surface point cloud of the intraoperative target vertebra reconstructed by the camera module and the number of two-dimensional points on its visible light image is greater than 80%.
9. The real-time spinal motion tracking system according to claim 1, characterized in that corner detection is performed on the visible light image based on a computer vision library, thereby completing the two-dimensional feature point identification of the visible light image of the target vertebra acquired by the camera module.
10. The real-time spinal motion tracking system according to claim 1, characterized in that the feature point identification module obtains the number of two-dimensional feature points on the visible light image of the target vertebra that have corresponding three-dimensional associated points; if the ratio of this number to the total number of two-dimensional feature points on the visible light image of the target vertebra is lower than a set ratio, the intraoperative image and the preoperative image of the target vertebra are registered again through the vertebra registration module until the ratio is higher than the set ratio.
11. The real-time spinal motion tracking system according to claim 1, characterized in that if the current frame image acquired by the camera module is the first frame image after primary registration or re-registration, the optical flow tracking module verifies whether a three-dimensional associated point exists at the coordinates of the three-dimensional point cloud associated with the pixel coordinates of each two-dimensional feature point in the visible light image; if so, the three-dimensional point is considered valid and is set as the position template of the corresponding three-dimensional associated point in the three-dimensional point cloud, the corresponding two-dimensional feature point is set as the position template of the two-dimensional feature point, and the position templates of the two-dimensional feature points and the three-dimensional associated points are not changed before the next registration;
if the current frame image acquired by the camera module is not the first frame image after primary registration or re-registration, the two-dimensional feature points in the previous frame image and the three-dimensional associated points in the corresponding three-dimensional point cloud are set as the corresponding position templates, and the optical flow tracking module, taking the position template of each two-dimensional feature point in the previous frame image as the center, performs two-dimensional feature point detection within a search area of a set size on the current frame image; if a two-dimensional feature point is detected, the tracking of that two-dimensional feature point is successful.
12. The real-time spinal motion tracking system according to claim 11, characterized in that the search area is set to the motion range formed by expanding outward k times in the upward, downward, leftward, and rightward directions, taking the target vertebra as the reference.
13. The real-time spinal motion tracking system according to claim 11, characterized in that the optical flow tracking module calculates the ratio between the number of successfully tracked two-dimensional feature points and the number of position templates of two-dimensional feature points in the previous frame image; when the ratio is smaller than a set proportion, the intraoperative image and the preoperative image of the target vertebra are registered again through the vertebra registration module until the ratio is higher than the set proportion.
14. The real-time spinal motion tracking system according to claim 1, characterized in that the optical flow tracking module uses sparse optical flow tracking for the optical flow tracking between the current frame image and the previous frame image acquired by the camera module.
15. The real-time spinal motion tracking system according to claim 1, characterized in that the motion tracking module obtains the Euclidean distances between corresponding three-dimensional points in adjacent frame images according to the optical flow tracking of the optical flow tracking module and calculates the average of their accumulated values, thereby obtaining the motion amplitude of the target vertebra between adjacent frame images.
16. The real-time spinal motion tracking system according to claim 15, characterized in that when a two-dimensional feature point has no corresponding three-dimensional point, the motion tracking module instead calculates the Euclidean distance between that two-dimensional feature point and its corresponding two-dimensional feature point in the previous frame image, and this distance participates in the accumulation.
17. A real-time spinal motion tracking method based on the real-time spinal motion tracking system according to any one of claims 1-16, characterized in that it comprises the steps of:
S1, the vertebra registration module registers the intraoperative image of the target vertebra with the preoperative image;
S2, a camera module is used to acquire a depth image and a visible light image of the intraoperative target vertebra, an operation area tracking area is set accordingly, and the three-dimensional surface point cloud of the intraoperative target vertebra is reconstructed from the depth image;
S3, the feature point identification module performs feature point identification on the visible light image of the target vertebra obtained in S2 and obtains the three-dimensional associated points in the three-dimensional surface point cloud of the intraoperative target vertebra obtained in S2 according to the parameters of the camera module;
S4, the optical flow tracking module determines whether the current frame image acquired by the camera module is the first frame image after the registration of S1;
if so, the two-dimensional feature points in its visible light image and the three-dimensional associated points in the corresponding three-dimensional point cloud are set as the corresponding position templates;
otherwise, the two-dimensional feature points in the visible light image of the previous frame image acquired by the camera module and the three-dimensional associated points in the corresponding three-dimensional point cloud are set as the corresponding position templates, and optical flow tracking is performed between the current frame image and the previous frame image acquired by the camera module;
S5, the motion tracking module calculates the motion amplitude of the target vertebra between adjacent frame images according to the optical flow tracking of S4, thereby obtaining the motion amplitude of the target vertebra.
CN202510580261.3A 2025-05-07 2025-05-07 A real-time spinal motion tracking system and method Active CN120431135B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510580261.3A CN120431135B (en) 2025-05-07 2025-05-07 A real-time spinal motion tracking system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202510580261.3A CN120431135B (en) 2025-05-07 2025-05-07 A real-time spinal motion tracking system and method

Publications (2)

Publication Number Publication Date
CN120431135A true CN120431135A (en) 2025-08-05
CN120431135B CN120431135B (en) 2025-11-11

Family

ID=96551508

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202510580261.3A Active CN120431135B (en) 2025-05-07 2025-05-07 A real-time spinal motion tracking system and method

Country Status (1)

Country Link
CN (1) CN120431135B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150133785A1 (en) * 2012-04-30 2015-05-14 Christopher Schlenger Ultrasonographic systems and methods for examining and treating spinal conditions
CN118279364A (en) * 2024-06-03 2024-07-02 青岛山大齐鲁医院(山东大学齐鲁医院(青岛)) Registration method of MRI image and CBCT image
CN118691647A (en) * 2024-08-22 2024-09-24 中国人民解放军国防科技大学 A contour-based method for tracking spatial targets' pose
CN119055358A (en) * 2024-11-07 2024-12-03 合肥工业大学 A surgical operation force feedback guidance method based on virtual marker tracking and instrument posture


Also Published As

Publication number Publication date
CN120431135B (en) 2025-11-11

Similar Documents

Publication Publication Date Title
US12106497B2 (en) Method and navigation system for registering two-dimensional image data set with three-dimensional image data set of body of interest
JP6976266B2 (en) Methods and systems for using multi-view pose estimation
US7970174B2 (en) Medical marker tracking with marker property determination
JP7391399B2 (en) Artificial intelligence-based determination of relative positions of objects in medical images
CN113164149A (en) Method and system for multi-view pose estimation using digital computer tomography
JP2019500185A (en) 3D visualization during surgery with reduced radiation exposure
CN101862220A (en) Pedicle internal fixation navigation surgery system and method based on structured light image
US12239494B1 (en) Hand-held stereovision system for image updating in surgery
US20250349016A1 (en) Registration of time-separated x-ray images
JP2025515072A (en) Method and system for alignment parameters of a surgical object such as the spine - Patents.com
CA3243567A1 (en) Methods and systems for registering initial image data to intraoperative image data of a scene
EP4094185A2 (en) Methods and systems for using multi view pose estimation
CN120431135B (en) A real-time spinal motion tracking system and method
US20220343518A1 (en) Systems and methods for three-dimensional navigation of objects
AU2023290030B2 (en) Patient monitoring during a scan
AU2022389067B2 (en) Method and navigation system for registering two-dimensional image data set with three-dimensional image data set of body of interest
US12213844B2 (en) Operation image positioning method and system thereof
EP4439463A1 (en) Method for use in positioning an anatomy of interest for x-ray imaging
US20250045938A1 (en) Systems and methods for three-dimensional navigation of objects
CN120938603A (en) High-precision spatial registration method and system based on total ankle bone
CN116762095A (en) Registration of time-lapse X-ray images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant