
WO2025185175A1 - Method for guiding endoscopic surgery, computer-readable storage medium, control apparatus, computer program product, electronic device, navigation system, and robotic system - Google Patents

Method for guiding endoscopic surgery, computer-readable storage medium, control apparatus, computer program product, electronic device, navigation system, and robotic system

Info

Publication number
WO2025185175A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
endoscope
endoscopic
enhanced
physiological structure
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/CN2024/126088
Other languages
English (en)
Chinese (zh)
Other versions
WO2025185175A8 (fr)
Inventor
吕鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changzhou Kanghui Medical Innovation Co Ltd
Original Assignee
Changzhou Kanghui Medical Innovation Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changzhou Kanghui Medical Innovation Co Ltd filed Critical Changzhou Kanghui Medical Innovation Co Ltd
Publication of WO2025185175A1
Publication of WO2025185175A8
Legal status: Pending

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 - Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/20 - Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 - Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/30 - Surgical robots
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 - Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/20 - Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 2034/2046 - Tracking techniques
    • A61B 2034/2065 - Tracking using image or pattern recognition
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 - Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/20 - Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 2034/2068 - Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis using pointers, e.g. pointers having reference marks for determining coordinates of body points
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 - Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/30 - Surgical robots
    • A61B 2034/301 - Surgical robots for introducing or steering flexible instruments inserted into the body, e.g. catheters or endoscopes
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 - Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/30 - Surgical robots
    • A61B 2034/305 - Details of wrist mechanisms at distal ends of robotic arms

Definitions

  • the present invention relates to the technical field of medical equipment, and in particular to a surgical navigation system, and more specifically to a method, electronic equipment, navigation system, and robotic system for guiding endoscopic surgery.
  • In traditional endoscopic surgery, the endoscope is used solely for surgical observation, tool manipulation, and control. However, the surgeon's field of vision is limited to the narrow field of view of the endoscope, and the surgeon cannot see areas outside that field. This can result in the endoscope not reaching the affected area correctly or in inadequate manipulation, leading to failure to achieve the intended surgical goal.
  • Endoscopic spinal surgery, owing to its minimally invasive nature, is an effective means of treating nerve compression symptoms without vertebral structural instability, and has gained widespread clinical recognition and promotion in recent years.
  • Due to the physiological structure of the intervertebral foramen and the unique configuration of the spinal endoscope, doctors face the following challenges in familiarizing themselves with and ultimately mastering endoscopic spinal surgery techniques:
  • The visual field under the endoscope is limited.
  • the spinal endoscope is used in a working channel sleeve, which causes the doctor's field of view to be blocked by the working channel sleeve during use.
  • If, following common intuition, the doctor pulls the spinal endoscope back away from the target point of observation in the hope of seeing a larger range of physiological structures, the first thing seen is the wall of the working channel sleeve. If the doctor also pulls the working channel sleeve back away from the target point, the soft tissue outside the sleeve collapses inward and invades the channel.
  • the distal end of the spinal endoscope is designed with an angled bevel.
  • the lens of the imaging device is mounted on the angled bevel and arranged eccentrically relative to the axis of the endoscope, so that a wider range can be observed by rotating the endoscope around the axis.
  • In addition, the configuration of the spinal endoscope is such that the rigid optical endoscope module (also called the imaging device of the endoscope) and the channel for the surgical tool are integrated into the same endoscope insertion tube.
  • As a result, when the spinal endoscope is rotated about its axis, the surgical tool rotates with it. Because this special configuration prevents any relative movement between the rigid optical endoscope module and the surgical tool, the doctor sees on the screen that the position of the surgical tool has not changed, while in reality the axial rotation has changed the tool's position relative to the physiological structures under the endoscope.
  • The "learning curve" is therefore steep. Because of the difficulties in the three aspects described above, spinal endoscopists require superior anatomical and physiological knowledge, excellent spatial visualization, and a large caseload to overcome this so-called "learning curve." Typically, doctors need 30 to 50 surgeries, or even more, to master spinal endoscopy. Training a qualified spinal endoscopic surgeon is therefore often time-consuming and costly, which also limits the promotion and widespread adoption of spinal endoscopy.
  • navigation technology is primarily used to guide the placement of pedicle screws during spinal fixation surgery, enabling real-time tracking and positioning of tools and implants relative to the patient's anatomy. This helps doctors achieve the clinical goals of "precision, safety, and minimally invasive procedures.”
  • Navigation technology, particularly electromagnetic navigation, has also been applied to endoscopic surgery. This approach uses an electromagnetic tracker within the navigation system to track the real-time spatial position of the tracer on the endoscope.
  • the navigation system's processor calculates the endoscope's real-time position and field of view, projecting this information onto a spatial 3D image model or 2D fluoroscopic image model that has been aligned with the patient's actual anatomy through a navigation registration process.
  • This relative positional relationship is then displayed on the navigation system's display.
  • This solution presents a problem: On the electromagnetic navigation interface, the doctor's simulated viewing angle corresponds to the position of the X-ray source when acquiring the 3D or 2D fluoroscopic image, located somewhere outside the patient's body. However, on the display showing the endoscopic image of the spinal endoscope, the actual viewing angle is located behind the lens of the rigid endoscope module, within the patient's body.
  • Since the viewing angles of the navigation view and the spinal endoscope image view differ, each time the doctor moves or rotates the spinal endoscope, and thereby changes the position or orientation of the lens of the rigid optical endoscope module, the doctor must draw on anatomical knowledge and mentally transform and unify the different perspectives in order to work out the precise position and orientation of the physiological structures seen under the endoscope within the patient's body. This constant mental conversion can lead to increased fatigue.
  • This navigation solution, which tracks the endoscope like a standard surgical tool and visualizes its spatial position and orientation, simply and directly applies navigation technology to endoscopic surgery. It does not effectively address clinical needs such as the limited visual field and the difficulty of hand-eye coordination. Even for the positioning need arising from how easily one becomes disoriented under the endoscope, current electromagnetic navigation technology has not substantially resolved this problem, which has long plagued spinal endoscopic surgeons.
  • the object of the present invention is to solve at least one of the above problems and defects in the prior art as well as other technical problems.
  • the present invention provides a method for guiding endoscopic surgery, the method comprising the following steps:
  • an endoscopic image acquisition step: acquiring a current image of the endoscope;
  • an enhanced information acquisition step: acquiring at least two of the following three types of enhanced information: a stitched image obtained by stitching a plurality of images around the endoscope tip, a three-dimensional image of the patient's physiological structure, and a guidance indication associated with a marker made on a three-dimensional image or two-dimensional image of the patient's physiological structure;
  • an endoscopic enhanced image fusion step: fusing the acquired current endoscopic image with the at least two types of enhanced information acquired in the enhanced information acquisition step to obtain an endoscopic enhanced image; and
  • a displaying step: displaying a view including at least a portion of the endoscopic enhanced image.
  • In this way, the endoscopic enhanced image can show the soft tissue in a larger field of view around the endoscope tip, including areas that would otherwise be invisible under the endoscope.
  • The endoscopic image is thus enhanced in multiple respects (at least two) with soft-tissue, bony-structure, and planned orientation information. Because the endoscopic enhanced image incorporates a stitched image covering a larger area around the endoscope tip, it provides the operator with a wider field of view.
  • the integration of three-dimensional images of bony structures allows the operator to see previously invisible bony structures, thereby gaining a global understanding of the position.
  • the integration of guidance instructions related to markers facilitates the operator's rapid positioning of the target structure and determination of its direction and orientation as well as the position of surgical tools.
  • the endoscopic enhanced image fusion step is performed with the aid of a navigation system, wherein the method further comprises, before the endoscopic enhanced image fusion step:
  • an imaging orientation acquisition step: acquiring the orientation of the imaging device of the endoscope under the navigation system; and
  • an enhanced information orientation acquisition step: acquiring the orientation, under the navigation system, of the stitched image, three-dimensional image or marker to be fused;
  • wherein the fusion is performed according to the orientation of the imaging device and the orientation of the stitched image, three-dimensional image or marker to be fused.
  • In this way, the image of the imaging device to be fused and the various types of enhancement information are unified into the navigation coordinate system.
  • This fusion method yields higher image fusion accuracy and therefore better navigation accuracy and a better navigation effect.
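  • As a concrete but non-authoritative illustration of working in a unified navigation coordinate system, the following Python sketch projects enhancement information given in navigation coordinates into the current endoscope image using the imaging device's pose and a pinhole camera model. All names (project_to_endoscope_image, T_nav_cam, K, points_nav) are hypothetical and not taken from the patent; a real system would obtain the pose from its tracking and calibration steps and the intrinsics from the endoscope's internal-parameter calibration.

        import numpy as np

        def project_to_endoscope_image(points_nav, T_nav_cam, K):
            """Project 3D points given in the navigation coordinate system into the
            endoscope image plane (simple pinhole model, distortion ignored).

            points_nav : (N, 3) points in navigation coordinates
            T_nav_cam  : 4x4 pose of the endoscope imaging device in navigation coordinates
            K          : 3x3 intrinsic matrix of the imaging device
            """
            # Transform the points from the navigation frame into the camera frame.
            T_cam_nav = np.linalg.inv(T_nav_cam)
            pts_h = np.hstack([points_nav, np.ones((len(points_nav), 1))])
            pts_cam = (T_cam_nav @ pts_h.T).T[:, :3]

            # Keep only points in front of the lens, then apply the pinhole projection.
            in_front = pts_cam[:, 2] > 0
            uvw = (K @ pts_cam[in_front].T).T
            pixels = uvw[:, :2] / uvw[:, 2:3]
            # Vertices of a registered bone model or marker points could be drawn over
            # the current endoscope frame at the returned pixel positions.
            return pixels, in_front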
  • the method further includes: a global enhanced image fusion step, wherein in the global enhanced image fusion step, at least two of the stitched image, the three-dimensional image and the guidance indication associated with the marker are fused to obtain a global enhanced image, wherein the current field of view position of the imaging device and/or the position of the view of the endoscopic enhanced image are indicated on the global enhanced image; and a step of displaying the global enhanced image, wherein the edges of the current field of view position of the imaging device and the edges of the view of the endoscopic enhanced image are indicated by lines of different colors and/or line types on the global enhanced image.
  • In this way, the endoscopic enhanced view is displayed simultaneously with the global enhanced view, as a part of the global enhanced image, and the relationship between the two is indicated on the global enhanced image, that is, the position of the endoscopic enhanced view within the global enhanced image is shown.
  • Lines of different colors and/or line types are used to indicate the edge of the current field of view of the imaging device and the edge of the view of the endoscopic enhanced image, so that the operator can understand the position of the endoscopic real-time image in the global context and the position and direction information relative to the surrounding soft tissue, bony structure, and target while viewing the larger endoscopic real-time image, thereby providing all-round guidance to the operator.
  • the type of enhancement information acquired and fused is selected based on operator input, which enables the surgeon to flexibly select the enhancement information for the endoscopic image according to his or her needs and preferences.
  • In addition, the range of the endoscopic enhanced image displayed in the endoscopic enhanced image view (corresponding to the size of the current endoscope image in the endoscopic enhanced view) can be determined based on operator input.
  • the operator can change the size ratio of the current endoscope image in the endoscopically enhanced view as needed by interacting with the system, for example, by zooming in or out.
  • the real-time endoscopic image can be zoomed in (correspondingly, the range of the entire endoscopically enhanced image that can be displayed in the view window becomes smaller), while the content displayed in the surrounding enhanced information remains proportional to the real-time endoscopic image.
  • Enhanced information that extends beyond the edge of the "endoscopically enhanced view” is no longer displayed in the "endoscopically enhanced view.”
  • the enhancement information acquisition step includes a stitching step, in which a plurality of endoscopic images are stitched together based on the orientation of an imaging device corresponding to at least one endoscopic image to obtain a stitched image.
  • the enhancement information acquisition step further includes: a distortion calibration step before the stitching step, wherein the image of the endoscope to be stitched is subjected to distortion calibration in the distortion calibration step; and a processing step after the stitching step, wherein a plane image is generated according to the image obtained in the stitching step, wherein the plane image is fused with the current image of the endoscope as a stitched image in the endoscope enhanced image fusion step.
  • the distortion effect of the endoscope is removed by the distortion calibration before stitching, so that the stitched image is closer to the real image.
  • Through the processing step after stitching, the visual differences in the stitched image caused by the different observation angles are eliminated or reduced.
  • the two distortion processing methods are combined to make the stitched image closer to the real surrounding scene, which is easier for the operator to observe.
  • the enhanced information orientation acquisition step includes a marker orientation acquisition step, in which the orientation of the marker under the navigation system is acquired with the aid of the registration of the three-dimensional image or the two-dimensional image under the navigation system; wherein in the endoscopic enhanced image fusion step, the guidance instructions related to the marker are fused with the current image of the endoscope according to the orientation of the marker under the navigation system.
  • the method enables the operator to select one or more of a directional marker and a marker indicating a physiological structure. This enables the operator to make a selection based on their preferences or actual scenarios.
  • the directional marker includes one or more of a directional marker toward the dorsal, ventral, cranial, and caudal sides of the patient.
  • the patient's physiological structure is a spine
  • the marker indicating the physiological structure includes an image model of a marked target (e.g., one or more of a protruding or free intervertebral disc, osteophytes, and ossified yellow ligament). The image model can be removed or separated from the image of the patient's physiological structure, in which case the image model is the marker indicating the physiological structure.
  • the marker indicating the physiological structure includes a physiological structure marker point; preferably, the patient's physiological structure is a spine, and the physiological structure marker point is a marker point indicating one or more of a protruding or free intervertebral disc, osteophytes and ossified yellow ligament, or a physiological structure marker point that does not displace during endoscopic surgery, such as one or more of the ventral side of the articular process, the pedicle of the anterior vertebra, the pedicle of the posterior vertebra, and the intervertebral disc; preferably, there are multiple physiological structure marker points.
  • the enhancement information orientation acquisition step includes registering the three-dimensional image of the patient's physiological structure with the navigation system to acquire the orientation of the three-dimensional image.
  • the three-dimensional image of the patient's physiological structure includes a preoperative three-dimensional image or an intraoperative three-dimensional image.
  • the endoscope is a spinal endoscope.
  • the navigation method of the present invention is particularly beneficial for spinal endoscopes (such as foraminal endoscopes).
  • spinal endoscopes are used in working channel sleeves, which further limits the doctor's visual field under the endoscope.
  • The imaging device of the endoscope and the channel for surgical tools are integrated into the same endoscope insertion tube, which readily causes hand-eye coordination difficulties for the doctor, and these difficulties, arising from the special design and use of the spinal endoscope, are hard for the doctor to avoid during spinal endoscopic surgery.
  • In the prior art, the limited viewing field of the endoscope (which is placed within the working channel sleeve) and of the rigid optical endoscope module, as well as the hand-eye coordination difficulty and disorientation caused by endoscope rotation, have not been easily and reliably addressed.
  • the present method addresses these issues by providing multi-dimensional enhanced information to the endoscopic image, significantly reducing operator error and increasing the reliability of spinal surgery.
  • the imaging position acquisition step is performed by acquiring the position of a tracer having a fixed positional relationship relative to the endoscope from a tracking device of a navigation system.
  • This method conveniently implements position tracking of the imaging device of the endoscope by means of the navigation system.
  • a computer-readable storage medium on which a computer program is stored.
  • the program is executed by a processor, the steps of the methods in the above examples are executed.
  • a control device includes a memory, a processor, and a program stored in the memory and capable of running on the processor, wherein the methods in the above examples are executed when the processor runs the program.
  • a computer program product which includes a computer program.
  • the computer program is executed by a processor, the steps of the methods in the above examples are implemented.
  • an electronic device for endoscopic surgery navigation includes a display device and a processor, wherein the processor has a data interface, wherein the data interface is connectable to an endoscope, so that the processor can acquire an image from the endoscope; and wherein the processor is configured to acquire one or more of a three-dimensional image of a patient's physiological structure and a marker on the three-dimensional image or a two-dimensional image of the patient's physiological structure; wherein the processor is configured to display a view of at least a portion of an endoscopic enhanced image on the display device for at least a period of time when the processor is running, wherein the endoscopic enhanced image is an image that fuses a current image of the endoscope with at least two of the following three types of enhancement information: a stitched image obtained by stitching a plurality of images around the endoscope tip, the three-dimensional image of the patient's physiological structure, and a guidance indication associated with the marker.
  • a navigation system for endoscopic surgery which includes a tracking device suitable for tracking a tracer having a fixed positional relationship relative to an endoscope, a display device and a processor, wherein the processor is suitable for being connected to the tracking device and the endoscope, wherein the methods in the above examples are executed when the processor is running, and the display in the method is realized through the display device.
  • Because the navigation system integrates a stitched image covering a larger range around the endoscope, it provides the operator with a wider field of view; because it integrates the three-dimensional image of the bony structure, the operator can see bony structure that would otherwise be invisible and grasp the overall orientation; and because it integrates direction markers, markers indicating physiological structures, or their guidance instructions, the operator can quickly locate the target position and grasp the orientation of the surgical tools. The system therefore has a better navigation effect and higher navigation accuracy.
  • a robot system which includes a robot arm and the above-mentioned navigation system.
  • the concept of the present invention can also be implemented in the navigation system of the robot system.
  • FIG1 exemplarily shows a flow chart of a method for guiding endoscopic surgery according to the present invention.
  • FIG2 is a schematic diagram showing a principle of a navigation system for spinal endoscopic surgery according to an exemplary embodiment of the present invention.
  • FIG3 exemplarily shows an endoscopic enhanced image obtained by fusing the stitched image, the three-dimensional image of the patient's physiological structure, and the current image of the endoscope.
  • FIG. 4 exemplarily shows a view in which the endoscopy enhanced image of FIG. 3 is displayed on a window of a display device.
  • Figure 5 exemplarily shows a view of an endoscopically enhanced image displayed on a window of a display device according to another exemplary embodiment, wherein the endoscopically enhanced image fuses a stitched image, a three-dimensional image of the patient's physiological structure, physiological structure markers and a current image of the endoscope.
  • FIG6 exemplarily shows a display interface of a navigation system having an endoscopic enhanced image view and a global enhanced image.
  • FIG7 exemplarily shows a global enhanced image that is a fusion of a stitched image, a three-dimensional image of a patient's physiological structure, and a direction mark, in which the current field of view position of the imaging device and the position of the view of the endoscopic enhanced image are indicated.
  • FIG8 exemplarily shows a global enhanced image that is a fusion of a stitched image, a three-dimensional image of a patient's physiological structure, and physiological structure markers, wherein the current field of view position of the imaging device and the position of the view of the endoscopic enhanced image are indicated.
  • FIG9 exemplarily shows a flow chart of an enhancement information acquisition step including a stitching step to acquire a stitched image.
  • FIG10 exemplarily shows a schematic diagram of the principle of moving the endoscope to obtain multiple endoscopic images of a larger range.
  • FIG11 is a schematic diagram showing the principle of rotating an endoscope to obtain a stitched image.
  • FIG. 12 shows an example of an operator marking a direction on a preoperative 3D image or an intraoperative 3D image.
  • FIG13 shows an example of an operator marking a direction on an image obtained by performing quasi-three-dimensional fitting on a two-dimensional perspective image.
  • FIG14 shows an example of an operator completing physiological structure marking on a preoperative 3D image or an intraoperative 3D image.
  • FIG15 shows an example of an operator completing physiological structure marking on an intraoperative anteroposterior image.
  • FIG16 shows an example of an operator completing physiological structure marking on an intraoperative lateral image.
  • The terms "imaging device" and "image" of the endoscope are used herein. However, those skilled in the art will understand that "imaging device" is a broad concept that includes functions such as video recording, video capture, and image capture, and "image" is a broad concept that includes video, dynamic continuous images, and static images.
  • The imaging device can be the endoscope module used for video imaging of the endoscope. In the image acquisition step of the present invention, what is acquired is a frame of the image from the endoscope.
  • the navigation system used includes a tracking device 1, a control device 2, and a display device 3.
  • the tracking device 1 can be an optical tracking device (e.g., an NDI navigator), and a corresponding tracer 4 can be provided on the endoscope 5.
  • the control device 2 can be a general-purpose computer, a dedicated computer, an embedded processor, or any other suitable programmable data processing device, such as a single-chip microcomputer or chip.
  • the control device 2 can include a processor and a memory for storing programs, or it can include only a processor, in which case the processor can be attached to the memory storing the programs. In other words, the control device includes at least a processor.
  • the control device 2 (or processor) and the display device 3 can be integrated or provided separately.
  • The control device or processor has data interfaces, including one that can be connected to the endoscope, allowing the control device/processor to obtain images of the endoscope in real time.
  • the control device or processor also includes a data interface that can be connected to the tracking device 1 of the navigation system, so that the position and orientation of the tracked target, such as the tracer 4 on the endoscope, can be obtained from the tracking device 1 in real time.
  • the endoscope 5 and/or the tracer 4 can also be considered part of the navigation system of the present invention.
  • the distal end of the endoscope enters the patient's tissue structure or bony structure to observe and/or operate on it, and the proximal end of the endoscope (the end closest to the operator, i.e., the end opposite to the distal end of the endoscope's insertion tube) is located outside the patient's body for manipulation by the operator.
  • the navigation system by providing a tracer 4 suitable for being tracked by a tracking device 1 at the proximal end of the endoscope located outside the patient's body, the navigation system can obtain real-time information about the position and orientation of the tracer 4 on the endoscope 5 in the navigation coordinate system.
  • The imaging device of the endoscope, i.e., the distal lens of the rigid optical endoscope module, is provided at the distal end of the insertion tube of the endoscope.
  • By calibration, the relative positional relationship of the imaging device with respect to the tracer 4 can be obtained, so that once the tracking device 1 of the navigation system determines the orientation of the tracer 4, the position and orientation of the imaging device in the navigation coordinate system can also be determined.
  • This calibration is typically performed before performing endoscopic surgery and its navigation, and is also referred to as calibration of the external parameters of the endoscope.
  • For this purpose, the relative positional relationship of the imaging device of the endoscope, i.e., the rigid endoscope module, with respect to the tracer 4 can be calibrated first.
  • the calibration is performed by a calibration tool (not shown in the figure) without the need to obtain images with the help of an endoscope, which reduces the workload of image processing.
  • the navigation system of the present invention optionally includes the calibration tool.
  • a plurality of calibration holes can be formed on the calibration tool, and these calibration holes have bottom surfaces with different inclination angles and/or different apertures to calibrate endoscopes with different end bevel inclination angles and/or barrel diameters.
  • the calibration tool also includes another tracer fixed on its bracket, and the navigation system can know the position of the tracer on the calibration tool in the navigation coordinate system. Moreover, the positional relationship of each calibration hole relative to the tracer on the calibration tool can be known based on the design size of the calibration tool, and thus the positions of these calibration holes in the navigation system are known.
  • the endoscope is positioned by inserting the distal end of the insertion tube into a matching calibration hole. The inclined surface of the distal end of the insertion tube aligns with the inclined bottom surface of the calibration hole. Because the navigation system also knows the position of the endoscope's tracer 4 within the navigation coordinate system, it can determine the relative position of the distal end of the endoscope's insertion tube relative to the tracer 4, completing the calibration process.
  • the position (location and orientation) of the tracer 4 on the endoscope is acquired in real time by the tracking device 1, thereby indirectly acquiring the position and orientation of the endoscope tip, i.e., the orientation of the imaging device.
  • the imaging orientation acquisition step in the method according to the present invention is performed in this manner.
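  • As a minimal, hypothetical sketch of this indirect pose acquisition (not the patent's implementation), the two 4x4 homogeneous transforms involved can be composed as follows; the function and variable names (calibrate_extrinsics, imaging_device_pose, T_nav_tracer, T_nav_hole, T_tracer_cam) are illustrative assumptions only.

        import numpy as np

        def calibrate_extrinsics(T_nav_tracer_at_cal, T_nav_hole):
            """External-parameter calibration: with the insertion tube seated in a
            calibration hole whose pose under the navigation system is known, recover
            the fixed transform from the endoscope tracer to the imaging device."""
            return np.linalg.inv(T_nav_tracer_at_cal) @ T_nav_hole  # T_tracer_cam

        def imaging_device_pose(T_nav_tracer_now, T_tracer_cam):
            """Imaging orientation acquisition: compose the tracer pose reported in real
            time by the tracking device with the calibrated, fixed
            tracer-to-imaging-device transform."""
            return T_nav_tracer_now @ T_tracer_cam  # imaging device pose in navigation coordinates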
  • In the following, the steps of a specific embodiment of the method for guiding endoscopic surgery of the present invention are described in detail in conjunction with FIG1.
  • the internal and external parameters of the endoscope can be calibrated as described above.
  • the navigation system is aligned, i.e., the navigation coordinate system is determined.
  • the method of the present invention can be used in scenarios where two-dimensional images are used for navigation, as well as in scenarios where three-dimensional images are used for navigation.
  • the alignment method for the navigation system is well-known and will not be described in detail here.
  • During surgery, the control device automatically obtains the real-time image of the endoscope and, according to the operator's input, determines the type of enhancement information to be used to enhance the real-time image of the endoscope.
  • the operator can select the command based on their needs, for example, by inputting the command via various input means such as a keyboard, mouse, handle, or foot switch.
  • The present invention provides the operator with a solution capable of fusing at least two of the following three types of enhanced information onto the endoscopic image: a stitched image obtained by stitching together multiple images around the endoscope tip, a three-dimensional image of the patient's physiological structure, and, for example, directional markings or markings indicating physiological structures made by the operator on the three-dimensional or two-dimensional image of the patient's physiological structure during surgical planning.
  • the stitched image (indicated by reference numeral 6 in Figures 3 and 4) can also be called a "complete three-dimensional map" or a "panoramic image".
  • the "complete” or “panoramic” here only refers to an image of a larger range relative to the limited endoscopic field of view of the endoscope, and does not necessarily mean a 360-degree panoramic image. This is obtained, for example, by the doctor according to his observation needs during the operation, by flexibly translating (for example, as shown in Figure 10) and/or rotating (for example, as shown in Figure 11) the endoscope within a larger range that needs to be observed.
  • Figures 10 and 11 also exemplarily show the conical field of view 51 of the endoscope 5 at three positions.
  • the conical field of view 51 represents the field of view that the imaging device of the endoscope can see.
  • This stitched image around the endoscope tip allows the physician to observe a wider field of view (or global field of view), better displaying the soft tissue surrounding the endoscope tip, such as nerves, dura mater, blood vessels, and intervertebral discs. This allows the physician to quickly determine the endoscope's orientation and the spatial position of key physiological structures relative to the current endoscope image, eliminating the limitation of the endoscope's field of view.
  • the acquisition and fusion of stitched images will be further described below.
  • The three-dimensional image of the patient's physiological structure displays the three-dimensional bone structure from the CT or CBCT image. Therefore, when it is fused with the endoscopic image, the doctor can observe, on the endoscopic enhanced image, the three-dimensional bone structure that is covered by soft tissue and cannot be seen in the endoscopic field of view (the dotted portion indicated by reference numeral 7 in Figure 4 represents the three-dimensional image of the bone structure), thereby obtaining global orientation information of the endoscope tip and the surgical tool tip, for example within the entire spinal structure.
  • the endoscopic image is enhanced from multiple angles with respect to the bony structure, soft tissue, and/or planning marker orientation indication information.
  • The fusion of a stitched image covering a larger range around the endoscope provides the operator with a wider field of view; the fusion of the three-dimensional image of the bony structure allows the operator to see bony structures that would otherwise be invisible and to grasp the overall orientation; and the fusion of directional markers, markers indicating physiological structures, or their guidance instructions helps the operator quickly locate the target position and determine the direction.
  • the method of this specific embodiment of the present invention includes an endoscopic image acquisition step, where the endoscopic image can be a frame of the endoscopic image acquired in real time by the control device.
  • the control device acquires and records the position and direction of the imaging device of the endoscope corresponding to at least one image of the endoscope (for example, corresponding to each image), that is, the imaging orientation acquisition step in FIG1 .
  • Specifically, the processor acquires, from the tracking device 1 of the navigation system, the position of the tracer 4 disposed on the proximal end of the endoscope 5 outside the patient's body, and indirectly acquires the position and direction of the imaging device in combination with the external parameters calibrated above.
  • The endoscopic image acquisition step here includes the acquisition of the current image of the endoscope that is to be fused in the endoscopic enhanced image fusion step.
  • the method of the present invention further includes an enhancement information acquisition step, wherein the type of enhancement information to be acquired is determined based on the operator's input.
  • the input component for the operator to input may include, for example, a selection box, a dialog box, etc. displayed on a display device.
  • the operator may, for example, select two types of enhancement information, namely, a stitched image and a three-dimensional image of the patient's physiological structure, or may select two types of enhancement information, namely, a stitched image and a mark, or may select two types of enhancement information, namely, a three-dimensional image of the patient's physiological structure and a mark, or may select three types of enhancement information, namely, a stitched image, a three-dimensional image of the patient's physiological structure, and a mark.
  • the present invention can obtain enhancement information based on the above selections.
  • The corresponding enhancement information is obtained, and the orientation information of that enhancement information is obtained accordingly, so that in the fusion step the endoscopic enhanced image can be fused according to the orientation of the imaging device and the orientation of the stitched image, three-dimensional image or marker to be fused.
  • the enhanced information acquisition step includes stitching multiple endoscope images.
  • Figure 9 illustrates the enhanced information acquisition step including the stitching step.
  • the control device automatically acquires multiple images around the endoscope's distal end.
  • the control device automatically acquires and records the position and orientation of the endoscope's imaging device corresponding to at least one image (e.g., each image) of the endoscope, i.e., the imaging orientation acquisition step in Figure 9 .
  • the above steps are repeated at multiple positions and orientations of the endoscope, thereby obtaining images of the imaging device at multiple orientations, for example, after translating (as shown in Figure 10 ) and/or rotating (as shown in Figure 11 ) the endoscope over a larger range of observation.
  • distortion calibration can be performed on the multiple images of the endoscope, combining the calibrated imaging device internal parameters described above, to remove distortion effects of the imaging device, i.e., the distortion calibration step in Figure 9 .
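  • A minimal sketch of such an undistortion step, assuming the internal parameters (intrinsic matrix and distortion coefficients) were obtained beforehand, might use OpenCV as follows; the numerical values and names (K, dist, undistort_frame) are placeholders rather than values from the patent, and a strongly fisheye-like endoscope lens might instead call for OpenCV's fisheye model.

        import cv2
        import numpy as np

        # Intrinsics from the internal-parameter calibration of the endoscope imaging
        # device (e.g. a prior checkerboard calibration); the numbers are placeholders.
        K = np.array([[800.0,   0.0, 320.0],
                      [  0.0, 800.0, 240.0],
                      [  0.0,   0.0,   1.0]])
        dist = np.array([-0.35, 0.12, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

        def undistort_frame(frame):
            # Remove lens distortion from one endoscope frame before it is stitched.
            return cv2.undistort(frame, K, dist)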
  • the images are stitched together based on the position and orientation (i.e., orientation) of the endoscope corresponding to the at least one image, thereby obtaining a stitched image.
  • the imaging orientation acquisition step acquires the position and/or orientation of the imaging device under the navigation system corresponding to at least one of the stitched images.
  • the orientation of the imaging device under the navigation system can be acquired for each acquired image, but it is also possible to acquire the position and/or orientation of the imaging device for only a portion of the images, or even just one image.
  • the enhanced information acquisition step may further include a processing step after the stitching step, in which a planar image is generated based on the image obtained in the stitching step to reduce visual distortion, and then the planar image is used as a stitched image to be fused with the current image of the endoscope in the endoscopic enhanced image fusion step (for example, it is placed around the current image of the endoscope).
  • Since the stitching step stitches the images together based on the orientation of the imaging device corresponding to at least one image, the orientation of the resulting stitched image under the navigation system is known. Therefore, the orientation of the stitched image under the navigation system can be obtained during the enhancement information orientation acquisition step, allowing the subsequent endoscopic enhanced image fusion step to fuse the stitched image with the current endoscopic image and other enhancement information based on this orientation.
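  • One possible way to realize such pose-based stitching, offered purely as an illustrative sketch rather than the patent's method, is to approximate the scene near the endoscope tip as a plane and warp each undistorted frame into a reference view using the plane-induced homography; every name and the plane parameters below are assumptions.

        import cv2
        import numpy as np

        def homography_ref_from_src(T_nav_cam_src, T_nav_cam_ref, K,
                                    n_ref=np.array([0.0, 0.0, 1.0]), d_ref=20.0):
            """Homography mapping pixels of a source endoscope frame into the image plane
            of a reference frame, assuming the observed scene is roughly the plane
            n_ref . X = d_ref expressed in the reference camera frame."""
            # Relative pose taking reference-camera coordinates to source-camera coordinates.
            T_src_ref = np.linalg.inv(T_nav_cam_src) @ T_nav_cam_ref
            R, t = T_src_ref[:3, :3], T_src_ref[:3, 3:4]
            H_src_from_ref = K @ (R + t @ n_ref.reshape(1, 3) / d_ref) @ np.linalg.inv(K)
            return np.linalg.inv(H_src_from_ref)

        def paste_into_mosaic(canvas, frame_undistorted, H_ref_from_src):
            # Warp the frame into the common reference plane and fill only empty canvas pixels.
            warped = cv2.warpPerspective(frame_undistorted, H_ref_from_src,
                                         (canvas.shape[1], canvas.shape[0]))
            empty = (canvas.sum(axis=2) == 0) & (warped.sum(axis=2) > 0)
            canvas[empty] = warped[empty]
            return canvas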
  • the enhanced information orientation acquisition step includes a marker orientation acquisition step, in which the orientation of the marker under the navigation system is acquired with the aid of the registration of the three-dimensional image or the two-dimensional image under the navigation system; wherein in the endoscope enhanced image fusion step, the guidance instructions related to the marker are fused with the current image of the endoscope and other enhanced information according to the orientation of the marker under the navigation system.
  • the image on which the operator makes markings can be a preoperative 3D image, such as a preoperative CT image, or an image obtained during surgery, such as an intraoperative CBCT image or an intraoperative 2D fluoroscopic image.
  • If the navigation system uses a preoperative CT image for navigation, the operator makes the markings on the preoperative 3D image.
  • If the navigation system uses an intraoperative CBCT image or an intraoperative 2D fluoroscopic image for navigation, the operator makes the markings on the image after it is acquired during surgery.
  • the operator can choose to make various marks, such as directional marks or marks indicating physiological structures, thereby providing the operator with a variety of options and possibilities.
  • the mark indicating the physiological structure can, for example, include an image of the target tissue or target bony structure (such as a protruding or free intervertebral disc, ossified yellow ligament or osteophyte generated by degeneration) removed by the operator during surgical planning, or it can be a physiological structure marker point.
  • the doctor can refer to the MRI image to mark the points indicating the target tissue or target bony structure in the surgical planning (for example, marking points indicating one or more of a protruding or free intervertebral disc, osteophytes and ossified yellow ligament), that is, physiological structure marking points.
  • Physiological structure markers may also include physiological structure markers that do not shift during endoscopic surgery. As shown in FIG14 , the operator may also mark (for example, using preoperative planning software) certain physiological structure markers within or around the intervertebral foramen that do not shift during the entire spinal endoscopic surgery on the three-dimensional image, including but not limited to physiological structure points such as the ventral side of the articular process, the pedicles of the anterior vertebral body, the pedicles of the posterior vertebral body, and the intervertebral disc.
  • the doctor uses the intraoperative planning software to select physiological structure markers on the intraoperative anteroposterior image ( FIG15 ) and lateral image ( FIG16 ), including but not limited to physiological structure points such as the ventral side of the articular process, the pedicles of the anterior vertebral body, the pedicles of the posterior vertebral body, and the intervertebral disc.
  • the operator can make direction marks on the preoperative or intraoperative three-dimensional image ( Figure 12), such as arrow marks toward the dorsal, ventral, cephalic, and caudal sides of the patient.
  • The operator can also mark the direction on an intraoperative two-dimensional image, for example the intraoperative anteroposterior and lateral images.
  • The intraoperative two-dimensional images can also be fitted into a quasi-three-dimensional image, and the operator then marks the direction on the fitted image (as shown in Figure 13).
  • This approach avoids the common mistake of marking the cephalad and caudal directions in reverse when marking directly on a two-dimensional image.
  • Although direction markers and physiological structure markers are described herein, the choice of direction markers or physiological structure markers can be made freely based on the operator's preferences and needs, and is not limited to these examples. Furthermore, although four direction markers or physiological structure markers are shown in the embodiments of Figures 12-16, the number of direction markers or physiological structure markers can be one, two, three, or more than three as needed.
  • the processor first obtains the preoperative 3D image and simultaneously acquires the markers (e.g., directional markers or markers indicating physiological structures) placed on the preoperative 3D image by the operator as described above.
  • the preoperative 3D image is then registered with the navigation system, and the coordinates of each marker in the navigation system are determined based on the registration relationship established in the registration step, thereby completing the marker position acquisition step.
  • In the case of intraoperative images, the registration of the image to the navigation system is completed at the same time as the intraoperative 3D image or intraoperative 2D image is acquired. Therefore, in this case, the operator can mark the image directly. Since the coordinates of the registered image under the navigation system are known, the coordinates or positions of the marks made in the image under the navigation system can be obtained accordingly, thus completing the marker orientation acquisition step.
  • a quasi-3D fitting can be performed on at least two 2D images from different body positions to obtain a quasi-3D image to facilitate operator marking.
  • the markings subsequently captured by the processor are the operator's markings on the fitted image.
  • the images used for the quasi-3D fitting can be intraoperative anteroposterior and lateral images, or images from other body positions, depending on the operator's needs and the actual location of the patient undergoing surgery.
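  • The patent does not spell out the fitting procedure; purely as an illustration, under the simplifying assumption that the anteroposterior and lateral views are approximately parallel projections along two orthogonal axes of a shared patient coordinate system and are already scaled to the same units, two marked 2D points could be combined into a quasi-3D point as follows (all names are hypothetical).

        import numpy as np

        def quasi_3d_point(ap_point, lat_point):
            """Combine one marked point from the anteroposterior (AP) view and one from
            the lateral view into a quasi-3D point.

            ap_point  : (left-right, cranio-caudal) coordinates read off the AP image
            lat_point : (anterior-posterior, cranio-caudal) coordinates read off the lateral image
            """
            x, z_ap = ap_point
            y, z_lat = lat_point
            z = 0.5 * (z_ap + z_lat)   # the cranio-caudal axis is seen in both views; average it
            return np.array([x, y, z])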
  • the operator can input or select the various forms of markings described above in various ways. This can be input through the interface of the display device, for example, by using the touch interface of the display device and using preoperative planning software or intraoperative planning software to select the cephalad, caudal, ventral, and dorsal direction markings, or the selection of physiological structure markings, as shown in Figures 12 and 13. Input can also be performed using the keyboard and/or mouse of an electronic device, as shown in Figure 14.
  • Although the description herein uses the approach of an operator making marks on an image of the patient's physiological structure to form the marks, it will also be understood that in other embodiments the marks need not be made by the operator; for example, the marks may be formed automatically when the image is formed.
  • a marker may be pre-placed on a certain part of the patient's body or on a location such as an operating table so that the marks are generated when the image is acquired.
  • fusion of the endoscopic enhanced image can be performed in conjunction with the acquired orientation information of the endoscopic imaging device. Because the enhanced information required for fusion and the imaging device have a defined orientation relationship under the navigation system, and thus, the enhanced information required for fusion and the image captured by the imaging device also have a defined orientation relationship under the navigation system, fusion of the enhanced information and the current endoscopic image can be performed based on these orientation relationships.
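  • A highly simplified compositing sketch of this fusion idea is given below; it assumes the enhancement information has already been rendered into the same planar canvas as the stitched image, and every name (compose_endoscope_enhanced_image, top_left, alpha) is an assumption for illustration rather than the patent's implementation.

        import numpy as np

        def compose_endoscope_enhanced_image(stitched, bone_overlay, current_frame, top_left, alpha=0.4):
            """Blend a rendering of the registered three-dimensional bone structure into the
            stitched image, then paste the current (real-time) endoscope frame at the location
            that follows from the orientation relationship between the imaging device and the
            stitched image (here simply passed in as 'top_left')."""
            enhanced = stitched.astype(np.float32).copy()
            has_bone = bone_overlay.sum(axis=2) > 0
            enhanced[has_bone] = ((1.0 - alpha) * enhanced[has_bone]
                                  + alpha * bone_overlay.astype(np.float32)[has_bone])

            r, c = top_left
            h, w = current_frame.shape[:2]
            enhanced[r:r + h, c:c + w] = current_frame   # keep the real-time image un-blended in the centre
            return enhanced.astype(np.uint8)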
  • the enhanced information acquisition step and the enhanced information orientation acquisition step can be performed before, after, or simultaneously with the endoscopic image acquisition step and/or the imaging orientation acquisition step.
  • the global enhanced image fusion step (described below) can also be performed before, after, or simultaneously with the endoscopic enhanced image fusion step.
  • FIG3 shows an example of the obtained endoscopic enhanced image, in which a stitched image 6, a three-dimensional image 7 of the patient's physiological structure and a current image 10 of the endoscope are fused.
  • FIG4 exemplarily shows a view (also referred to as an endoscopic enhanced view) in which the endoscopic enhanced image in FIG3 is displayed on a window of a display device, wherein the current image of the endoscope (i.e., the real-time image) is located in the central area of the window.
  • the endoscopic image displayed in this view is larger and shows the surrounding enhancement information.
  • the range of the endoscopic enhanced image displayed in the view of the endoscopic enhanced image (corresponding to the proportion of the current image of the endoscope in the endoscopic enhanced view) can be determined based on the input of the operator.
  • the operator can change the proportion of the current image of the endoscope in the endoscopic enhanced view by interacting with the system, for example, by zooming in or out the current image of the endoscope.
  • the real-time endoscopic image can be enlarged (correspondingly, the portion of the endoscopic enhanced image displayed in the view window becomes smaller), while the proportion of the content displayed in the surrounding enhanced information to the real-time endoscopic image remains unchanged, and the enhanced information beyond the edge of the "endoscopic enhanced view” is no longer displayed on the "endoscopic enhanced view”.
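  • The relationship between the zoom factor chosen by the operator and the portion of the endoscopic enhanced image that remains visible can be expressed, purely as an illustration, with a small helper like the following; the function and parameter names are assumptions.

        def visible_enhanced_range(view_size, enhanced_size, live_size, zoom):
            """Return how much of the endoscopic enhanced image still fits in the view window,
            and how large the real-time endoscope image appears on screen, for a given zoom.

            view_size     : (width, height) of the display window in pixels
            enhanced_size : (width, height) of the full endoscopic enhanced image
            live_size     : (width, height) of the real-time endoscope image within it
            zoom          : operator-selected zoom factor (> 1 enlarges the live image)
            """
            # The whole enhanced image is scaled by the same factor, so the surrounding
            # enhancement information stays proportional to the live image; whatever then
            # falls outside the window is simply not displayed.
            visible_w = min(enhanced_size[0], int(view_size[0] / zoom))
            visible_h = min(enhanced_size[1], int(view_size[1] / zoom))
            live_on_screen = (int(live_size[0] * zoom), int(live_size[1] * zoom))
            return (visible_w, visible_h), live_on_screen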
  • the operator's input can be carried out in the form of a zoom icon input component on the display device.
  • the operator's input method for determining the size of the real-time endoscopic image in the view is not limited to the zoom icon method, and can also be carried out in other ways, such as providing the operator with several different size options, or determining it by the operator entering a numerical value.
  • the input component displayed on the display interface can be, for example, a tab, or a dialog box for the operator to enter a numerical value.
  • FIG5 exemplarily shows a view of an endoscopic enhanced image displayed on a window of a display device according to another exemplary embodiment, wherein the endoscopic enhanced image fuses the current image 10 of the endoscope and three types of enhanced information, namely, the stitched image 6, the three-dimensional image 7 of the patient's physiological structure and the physiological structure markers shown in the figure.
  • In FIG5, the ventral side of the articular process, the pedicle of the anterior vertebra and the intervertebral disc are shown, while the pedicle of the posterior vertebra is not shown because it lies outside the edge of the view.
  • Guidance instructions related to the physiological structure markers can also be fused to the endoscopic enhanced image and displayed.
  • The guidance instruction can be an arrow pointing toward the physiological structure marker (for example, an arrow pointing toward the pedicle of the posterior vertebral body that is not shown in FIG5).
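  • A sketch of one possible form of such a guidance instruction is shown below: once a marker has been projected into the view (for example with the projection sketch given earlier), an arrow is drawn toward it whenever it falls outside the visible area. The drawing routine and its names are assumptions, not the patent's implementation.

        import cv2
        import numpy as np

        def draw_guidance(view, marker_px, color=(0, 255, 255)):
            """Highlight a projected marker if it is inside the view; otherwise draw an
            arrow near the view edge pointing toward the off-screen marker."""
            h, w = view.shape[:2]
            cx, cy = w / 2.0, h / 2.0
            mx, my = marker_px
            if 0 <= mx < w and 0 <= my < h:
                cv2.circle(view, (int(mx), int(my)), 6, color, 2)
                return view
            direction = np.array([mx - cx, my - cy], dtype=np.float32)
            direction /= (np.linalg.norm(direction) + 1e-6)
            tip = np.array([cx, cy], dtype=np.float32) + direction * (min(w, h) / 2.0 - 10.0)
            tail = tip - direction * 40.0
            cv2.arrowedLine(view, (int(tail[0]), int(tail[1])), (int(tip[0]), int(tip[1])),
                            color, 2, tipLength=0.3)
            return view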
  • The present invention may also include a global enhanced image. Specifically, at least two of the stitched image, the three-dimensional image, and the guidance instructions associated with the markers are fused to generate and display a global enhanced image.
  • the global enhanced image indicates the current field of view of the imaging device and the position of the endoscopic enhanced image. As shown in FIG6 , the endoscopic enhanced image is shown on the left side of the display device, and the global enhanced image is shown on the lower right.
  • FIG7 exemplarily illustrates a global enhanced image fused from the stitched image 6, the three-dimensional image 7 of the patient's anatomy, and directional markers pointing in the dorsal, ventral, caudal, and cephalad directions.
  • FIG8 exemplarily illustrates a global enhanced image fused from the stitched image 6, the three-dimensional image 7 of the patient's anatomy, and markers of physiological structures such as the intervertebral disc.
  • the fusion method for the global enhanced image is the same as the fusion method for the enhanced information described above: each acquired enhanced information is fused based on its orientation information as determined by the navigation system. The detailed process will not be elaborated upon.
  • the edge of the imaging device's current field of view (i.e., the position of the current endoscope image in the global enhanced image) is indicated by a dotted line 8
  • the edge of the view corresponding to the endoscopic enhanced image is framed by a dotted line 9.
  • the method of the present invention also includes a step of indicating the current imaging device's field of view in the global enhanced image. In this step, the region corresponding to the current imaging device's field of view is determined in the global enhanced image using the imaging device's current position and orientation, and a marker 8 indicating the edge of this region is generated and displayed in the display step. It will be appreciated by those skilled in the art that other forms other than dotted lines may be used to indicate the edge of the current field of view and the edge of the view of the endoscopic enhanced image, and both may be represented by lines of different colors and/or line types.
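  • The sketch below illustrates, hypothetically, how the two edges might be drawn in different colours on the global enhanced image; OpenCV has no built-in dashed lines, so colour and thickness are used to distinguish them here, and the polygon inputs are assumed to come from the field-of-view region and the endoscopic enhanced view region determined above.

        import cv2
        import numpy as np

        def indicate_regions(global_img, fov_polygon_px, view_polygon_px):
            # Edge of the imaging device's current field of view (e.g. red, thin).
            fov = np.asarray(fov_polygon_px, dtype=np.int32).reshape(-1, 1, 2)
            cv2.polylines(global_img, [fov], isClosed=True, color=(0, 0, 255), thickness=1)
            # Edge of the endoscopic enhanced image view (e.g. green, thicker).
            view = np.asarray(view_polygon_px, dtype=np.int32).reshape(-1, 1, 2)
            cv2.polylines(global_img, [view], isClosed=True, color=(0, 255, 0), thickness=2)
            return global_img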
  • In this way, the endoscopic enhanced view is displayed simultaneously with the global enhanced view as a part of the global enhanced image, and the correlation information between the two is displayed in the global enhanced image, that is, the positions of the edge of the real-time endoscopic image and of the edge of the endoscopic enhanced view within the global enhanced image are displayed. While watching the larger real-time endoscopic image, the operator can thus know the position of the endoscopic field of view in the global context, as well as the position and direction information of the surrounding soft tissue, bony structure, and target, thereby providing all-round guidance to the operator.
  • The "step of displaying the endoscopic enhanced image" or the "step of displaying the global enhanced image" described herein does not mean that the image is always displayed.
  • the endoscopic enhanced image or the global enhanced image may be displayed only in a certain period of time according to the need. For example, when the operator desires to observe the enhanced image, he or she may trigger the display of the corresponding enhanced image, for example, by means of a foot switch.
  • the present invention also provides an electronic device for endoscopic surgical navigation, comprising a display device 3 and a processor as described above.
  • the processor is included in the control device 2, i.e., the illustrated host.
  • the control device 2 or processor has a data interface.
  • the control device or processor is electrically connected to the tracking device 1 and the endoscope 5 of the navigation system via the data interface to obtain the position and orientation of the imaging device of the endoscope and to obtain an image of the endoscope in the corresponding position and orientation.
  • the processor's data interface also enables the processor to obtain one or both of: a three-dimensional image of the patient's physiological structure, and a marker on the three-dimensional image or on a two-dimensional image of the patient's physiological structure.
  • the processor executes a computer program (which may be stored in a memory included in the control device or in another memory) to perform the method of the present invention and to display, for at least a period of time, a view of at least a portion of an endoscopic enhanced image on the display device 3, wherein the endoscopic enhanced image is an image that fuses the current image of the endoscope with at least two of the following three types of enhancement information: a) a stitched image obtained by stitching a plurality of endoscope images; b) a three-dimensional image of the patient's physiological structure; and c) a marker on the three-dimensional image or on a two-dimensional image of the patient's physiological structure.
  • a global enhanced image is also displayed on the display interface of the display device 3.
  • the global enhanced image is an image obtained by fusing at least two of the stitched image 6, the three-dimensional image 7 and the guidance indication, wherein the global enhanced image indicates the current field of view position 8 of the imaging device and/or the view position 9 of the endoscopic enhanced image.
  • the display interface of the display device 3 may also display one or more of the following images or views: a view on a fitted two-dimensional perspective plane, a sagittal plane view, a coronal plane view, an axial plane view, a real-time endoscopic image, etc. This allows the operator to conveniently obtain more comprehensive navigation information. Those skilled in the art will appreciate that other views of other planes, orientations, or any other suitable view may also be displayed as needed.
  • the present invention also provides a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, performs the steps of the above method of the present invention.
  • a control device is provided, which may include a memory, a processor, and a program stored in the memory and executable on the processor, wherein when the processor executes the program, the steps of the method of the present invention are performed.
  • the present invention also provides a computer program product, including the computer program, which, when executed by the processor, implements the steps of the method of the present invention.
  • the storage medium may be random access memory (RAM), read-only memory (ROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
  • the method can be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • when implemented using software, it can be implemented in whole or in part in the form of a computer program product.
  • a computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the process or function described in the present invention is generated in whole or in part.
  • the computer can be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device.
  • Computer instructions can be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another.
  • Computer instructions can be transmitted from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) means.
  • Computer-readable storage media can be any available medium that can be accessed by a computer or a data storage device such as a server or data center that includes one or more available media. Available media can be magnetic media (e.g., floppy disk, hard disk, tape), optical media (e.g., DVD), or semiconductor media (e.g., solid-state disk).
  • the memory of the control device of the present invention may include random access memory (RAM) or non-volatile memory (NVM), such as at least one disk storage.
  • the memory may be at least one storage device separate from the processor.
  • the processor of the control device may be a general-purpose processor, such as a central processing unit (CPU) or a network processor (NP); it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
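
The mosaic (stitched) image referenced throughout this description can be built in many ways; the description above does not prescribe a particular stitching algorithm. The following Python sketch shows one common, generic approach, feature-based registration with ORB keypoints and a RANSAC homography in OpenCV, purely as an illustration; the function name `stitch_pair` and its parameters are hypothetical and are not taken from the patent.

```python
import cv2
import numpy as np

def stitch_pair(mosaic, new_frame, min_matches=12):
    """Warp new_frame into the mosaic's coordinate frame via an ORB/RANSAC homography.

    Assumes 3-channel BGR images; for simplicity the mosaic canvas is not expanded.
    """
    orb = cv2.ORB_create(2000)
    g1 = cv2.cvtColor(mosaic, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(new_frame, cv2.COLOR_BGR2GRAY)
    kp1, des1 = orb.detectAndCompute(g1, None)
    kp2, des2 = orb.detectAndCompute(g2, None)
    if des1 is None or des2 is None:
        return mosaic

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return mosaic                      # not enough overlap to register the frame

    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return mosaic

    h, w = mosaic.shape[:2]
    warped = cv2.warpPerspective(new_frame, H, (w, h))
    mask = warped.sum(axis=2) > 0          # overwrite the mosaic only where the new frame maps
    out = mosaic.copy()
    out[mask] = warped[mask]
    return out
```

In practice the registration could equally be driven by the tracked endoscope poses rather than by image features; the feature-based variant is shown only because it is self-contained.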
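The pose-based fusion of enhancement information into the current endoscope image can likewise be illustrated with a minimal sketch. The example below assumes a calibrated pinhole camera model for the endoscope and a tracking system that reports the 4x4 pose of the imaging device in the patient coordinate frame; `project_marker`, `fuse_endoscopic_enhanced_image`, and all variable names are hypothetical and are not taken from the patent.

```python
import numpy as np
import cv2

def project_marker(p_world, T_world_cam, K):
    """Project a 3D marker (patient frame) into the endoscope image plane.

    p_world     : (3,) marker position in the patient/navigation frame.
    T_world_cam : 4x4 camera-to-world pose of the endoscope imaging device,
                  as reported by the tracking device; inverted to map world points into the camera.
    K           : 3x3 pinhole intrinsic matrix from endoscope camera calibration.
    Returns (u, v) pixel coordinates, or None if the marker lies behind the camera.
    """
    T_cam_world = np.linalg.inv(T_world_cam)
    p_cam = T_cam_world @ np.array([*p_world, 1.0])
    if p_cam[2] <= 1e-6:                          # behind or on the image plane
        return None
    u, v, w = K @ p_cam[:3]
    return int(round(u / w)), int(round(v / w))

def fuse_endoscopic_enhanced_image(frame, markers, T_world_cam, K):
    """Draw each visible marker (e.g. a planned target point) onto the live endoscope frame."""
    enhanced = frame.copy()
    for name, p_world in markers.items():
        uv = project_marker(p_world, T_world_cam, K)
        if uv is not None:
            cv2.drawMarker(enhanced, uv, (0, 255, 0), cv2.MARKER_CROSS, 20, 2)
            cv2.putText(enhanced, name, (uv[0] + 8, uv[1] - 8),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return enhanced
```

The same projection step can be reused for the other types of enhancement information (e.g. rendering the relevant portion of the three-dimensional image into the endoscope view) once their poses in the patient frame are known.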
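Finally, the dotted boundary (marker 8) indicating the current field of view within the global enhanced image can be drawn once the mapping from the current endoscope image to the global image is known. The sketch below assumes such a 3x3 mapping (`H_frame_to_global`, a hypothetical name) is available, e.g. derived from the navigation poses or from the stitching registration, and draws a dashed outline of the projected image corners.

```python
import cv2
import numpy as np

def draw_fov_outline(global_img, H_frame_to_global, frame_shape,
                     color=(0, 0, 255), dash=12):
    """Outline the current endoscope field of view inside the global enhanced image.

    The outline is drawn as short dashes, mimicking the dotted boundary in the figures;
    color and line style are arbitrary choices here.
    """
    h, w = frame_shape[:2]
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    poly = cv2.perspectiveTransform(corners, H_frame_to_global).reshape(-1, 2)

    out = global_img.copy()
    for i in range(4):                            # walk each edge of the projected quadrilateral
        p0, p1 = poly[i], poly[(i + 1) % 4]
        n = max(int(np.linalg.norm(p1 - p0) // dash), 1)
        for k in range(0, n, 2):                  # draw every other segment -> dashed appearance
            a = p0 + (p1 - p0) * (k / n)
            b = p0 + (p1 - p0) * (min(k + 1, n) / n)
            cv2.line(out, (int(a[0]), int(a[1])), (int(b[0]), int(b[1])), color, 2)
    return out
```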

Landscapes

  • Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Robotics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Endoscopes (AREA)

Abstract

The present invention relates to a method for guiding endoscopic surgery, and to a corresponding computer-readable storage medium, control device, computer program product and electronic device for endoscopic surgical navigation, as well as to a navigation system and a robotic system. The method comprises: an endoscopic image acquisition step; an enhancement information acquisition step of acquiring at least two of the following three types of enhancement information: a) a stitched image obtained by stitching a plurality of images from an endoscope; b) a three-dimensional image of a physiological structure of a patient; and c) a marker on the three-dimensional image or on a two-dimensional image of the physiological structure of the patient; an endoscopic enhanced image fusion step of fusing a currently acquired image of the endoscope with the at least two types of enhancement information; and a step of displaying a view comprising at least a part of an endoscopic enhanced image. According to the present invention, the current image of the endoscope is fused with at least two of the three types of enhancement information, so that the endoscopic image is enhanced in multiple dimensions, thereby providing comprehensive, multi-dimensional guidance for an operator.
PCT/CN2024/126088 2024-03-06 2024-10-21 Procédé de guidage de chirurgie endoscopique, support de stockage lisible par ordinateur, appareil de commande, produit programme informatique, dispositif électronique, système de navigation et système robotique Pending WO2025185175A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202410252657.0 2024-03-06
CN202410252657.0A CN117898834A (zh) 2024-03-06 2024-03-06 用于引导内镜手术的方法及计算机可读存储介质、控制装置和计算机程序产品、电子设备、导航系统及机器人系统

Publications (2)

Publication Number Publication Date
WO2025185175A1 true WO2025185175A1 (fr) 2025-09-12
WO2025185175A8 WO2025185175A8 (fr) 2025-10-02

Family

ID=90690803

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2024/126088 Pending WO2025185175A1 (fr) 2024-03-06 2024-10-21 Procédé de guidage de chirurgie endoscopique, support de stockage lisible par ordinateur, appareil de commande, produit programme informatique, dispositif électronique, système de navigation et système robotique

Country Status (2)

Country Link
CN (1) CN117898834A (fr)
WO (1) WO2025185175A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118021446A (zh) * 2024-02-07 2024-05-14 常州市康辉医疗器械有限公司 用于光学硬镜手术的导航方法、电子设备、导航系统及机器人系统
CN117898834A (zh) * 2024-03-06 2024-04-19 常州市康辉医疗器械有限公司 用于引导内镜手术的方法及计算机可读存储介质、控制装置和计算机程序产品、电子设备、导航系统及机器人系统
CN119184849B (zh) * 2024-11-25 2025-06-27 温州医科大学附属第一医院 人工智能结合气道工具进行智能多模式定位以及预警系统

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070225553A1 (en) * 2003-10-21 2007-09-27 The Board Of Trustees Of The Leland Stanford Junio Systems and Methods for Intraoperative Targeting
US20050182295A1 (en) * 2003-12-12 2005-08-18 University Of Washington Catheterscope 3D guidance and interface system
CN107536643A (zh) * 2017-08-18 2018-01-05 北京航空航天大学 一种前交叉韧带重建的增强现实手术导航系统
CN111588464A (zh) * 2019-02-20 2020-08-28 忞惪医疗机器人(苏州)有限公司 一种手术导航方法及系统
CN114945937A (zh) * 2019-12-12 2022-08-26 皇家飞利浦有限公司 用于内窥镜流程的引导式解剖操纵
CN114711961A (zh) * 2022-04-12 2022-07-08 山东大学 一种脊柱内镜手术的虚拟现实导航方法及系统
CN115375595A (zh) * 2022-07-04 2022-11-22 武汉联影智融医疗科技有限公司 图像融合方法、装置、系统、计算机设备和存储介质
CN117462257A (zh) * 2023-11-13 2024-01-30 常州市康辉医疗器械有限公司 用于脊柱内窥镜手术的导航方法、电子设备及导航系统
CN118021446A (zh) * 2024-02-07 2024-05-14 常州市康辉医疗器械有限公司 用于光学硬镜手术的导航方法、电子设备、导航系统及机器人系统
CN117898834A (zh) * 2024-03-06 2024-04-19 常州市康辉医疗器械有限公司 用于引导内镜手术的方法及计算机可读存储介质、控制装置和计算机程序产品、电子设备、导航系统及机器人系统
CN118285913A (zh) * 2024-04-01 2024-07-05 常州市康辉医疗器械有限公司 在导航系统下引导内镜手术的方法、电子设备、导航系统及手术机器人系统

Also Published As

Publication number Publication date
WO2025185175A8 (fr) 2025-10-02
CN117898834A (zh) 2024-04-19

Similar Documents

Publication Publication Date Title
US11800970B2 (en) Computerized tomography (CT) image correction using position and direction (P and D) tracking assisted optical visualization
US11819292B2 (en) Methods and systems for providing visuospatial information
US9289267B2 (en) Method and apparatus for minimally invasive surgery using endoscopes
US8320992B2 (en) Method and system for superimposing three dimensional medical information on a three dimensional image
WO2025185175A1 (fr) Procédé de guidage de chirurgie endoscopique, support de stockage lisible par ordinateur, appareil de commande, produit programme informatique, dispositif électronique, système de navigation et système robotique
CN111970986A (zh) 用于执行术中指导的系统和方法
US20070276234A1 (en) Systems and Methods for Intraoperative Targeting
US20070073136A1 (en) Bone milling with image guided surgery
US20050085718A1 (en) Systems and methods for intraoperative targetting
US20240315778A1 (en) Surgical assistance system and display method
WO2007115825A1 (fr) Procédé et dispositif d'augmentation sans enregistrement
EP3273854A1 (fr) Procédés et systèmes pour chirurgie assistée par ordinateur au moyen d'une vidéo intra-opératoire acquise par une caméra à mouvement libre
AU2018202682A1 (en) Endoscopic view of invasive procedures in narrow passages
JP6952740B2 (ja) ユーザーを支援する方法、コンピュータープログラム製品、データ記憶媒体、及び撮像システム
CN117462257A (zh) 用于脊柱内窥镜手术的导航方法、电子设备及导航系统
WO2025167189A1 (fr) Procédé de navigation pour chirurgie d'endoscope rigide optique, dispositif électronique, système de navigation et système de robot
CN118285913A (zh) 在导航系统下引导内镜手术的方法、电子设备、导航系统及手术机器人系统
Jackson et al. Surgical tracking, registration, and navigation characterization for image-guided renal interventions
JP4510415B2 (ja) 3d対象物のコンピュータ支援表示方法
JP2024541293A (ja) 腹腔鏡手術及びビデオ補助手術のための対話型拡張現実システム
CN117860379A (zh) 导航系统下的内窥镜引导方法、电子设备及导航系统
JP2001204739A (ja) 顕微鏡下手術支援システム
US20250241631A1 (en) System and method for real-time surgical navigation
Ivanov et al. Surgical navigation systems based on augmented reality technologies
Giraldez et al. Multimodal augmented reality system for surgical microscopy

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24928105

Country of ref document: EP

Kind code of ref document: A1