
WO2025185175A1 - Method for guiding endoscopic surgery, computer-readable storage medium, control apparatus, computer program product, electronic device, navigation system, and robotic system - Google Patents

Method for guiding endoscopic surgery, computer-readable storage medium, control apparatus, computer program product, electronic device, navigation system, and robotic system

Info

Publication number
WO2025185175A1
WO2025185175A1 (application PCT/CN2024/126088)
Authority
WO
WIPO (PCT)
Prior art keywords
image
endoscope
endoscopic
enhanced
physiological structure
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/CN2024/126088
Other languages
French (fr)
Chinese (zh)
Other versions
WO2025185175A8 (en)
Inventor
吕鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changzhou Kanghui Medical Innovation Co Ltd
Original Assignee
Changzhou Kanghui Medical Innovation Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changzhou Kanghui Medical Innovation Co Ltd filed Critical Changzhou Kanghui Medical Innovation Co Ltd
Publication of WO2025185175A1 publication Critical patent/WO2025185175A1/en
Publication of WO2025185175A8 publication Critical patent/WO2025185175A8/en

Classifications

    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 — Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20 — Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046 — Tracking techniques
    • A61B2034/2065 — Tracking using image or pattern recognition
    • A61B2034/2068 — Surgical navigation using pointers, e.g. pointers having reference marks for determining coordinates of body points
    • A61B34/30 — Surgical robots
    • A61B2034/301 — Surgical robots for introducing or steering flexible instruments inserted into the body, e.g. catheters or endoscopes
    • A61B2034/305 — Details of wrist mechanisms at distal ends of robotic arms

Definitions

  • the present invention relates to the technical field of medical equipment, and in particular to a surgical navigation system, and more specifically to a method, electronic equipment, navigation system, and robotic system for guiding endoscopic surgery.
  • In traditional endoscopic surgery, the endoscope is used solely for surgical observation, tool manipulation, and control. However, the surgeon's field of vision is limited to the narrow scope of the endoscope, and areas outside that scope cannot be seen. This can result in the endoscope failing to reach the affected area correctly, or in inadequate manipulation, so that the intended surgical goal is not achieved.
  • endoscopic spinal surgery, owing to its minimally invasive nature, is an effective means of treating nerve compression symptoms without vertebral structural instability, and has gained widespread clinical recognition and promotion in recent years.
  • owing to the physiological structure of the intervertebral foramen and the unique configuration of the spinal endoscope, doctors face the following challenges in familiarizing themselves with and ultimately mastering endoscopic spinal surgery techniques:
  • the visual field under the endoscope is limited.
  • the spinal endoscope is used in a working channel sleeve, which causes the doctor's field of view to be blocked by the working channel sleeve during use.
  • if the doctor pulls the spinal endoscope up and away from the observation target, hoping (as common intuition suggests) to observe a larger range of physiological structures, the first thing seen is the wall of the working channel sleeve. If the doctor also pulls the working channel sleeve away from the target, the soft tissue outside the sleeve shrinks inward and invades the channel.
  • the distal end of the spinal endoscope is designed with an angled bevel.
  • the lens of the imaging device is mounted on the angled bevel and arranged eccentrically relative to the axis of the endoscope, so that a wider range can be observed by rotating the endoscope around the axis.
  • in the configuration of the spinal endoscope, the optical rigid endoscope module (also called the imaging device of the endoscope) and the channel for the surgical tool are integrated into the same endoscope insertion tube.
  • when the spinal endoscope is rotated, the surgical tool rotates with it. Because this special configuration prevents any relative change between the actual positions of the optical rigid endoscope module and the surgical tool, the doctor observes that the position of the surgical tool on the screen has not changed; in reality, however, the axial rotation of the spinal endoscope has changed the position of the surgical tool relative to the physiological structure under the endoscope.
  • the "learning curve” is steep. Because of the difficulties encountered during these three endoscopic procedures, spinal endoscopists require superior anatomical and physiological knowledge, excellent spatial visualization, and a large caseload to overcome this so-called “learning curve.” Typically, doctors need 30 to 50 surgeries, or even more, to master spinal endoscopy. This makes training a qualified spinal endoscopy surgeon often time-consuming and costly, which also limits the promotion and widespread adoption of spinal endoscopy.
  • navigation technology is primarily used to guide the placement of pedicle screws during spinal fixation surgery, enabling real-time tracking and positioning of tools and implants relative to the patient's anatomy. This helps doctors achieve the clinical goals of "precision, safety, and minimally invasive procedures.”
  • navigation technology, particularly electromagnetic navigation, has also been applied to endoscopic surgery. This approach uses an electromagnetic tracker within the navigation system to track the real-time spatial position of the endoscope's tracer.
  • the navigation system's processor calculates the endoscope's real-time position and field of view, projecting this information onto a spatial 3D image model or 2D fluoroscopic image model that has been aligned with the patient's actual anatomy through a navigation registration process.
  • This relative positional relationship is then displayed on the navigation system's display.
  • This solution presents a problem: On the electromagnetic navigation interface, the doctor's simulated viewing angle corresponds to the position of the X-ray source when acquiring the 3D or 2D fluoroscopic image, located somewhere outside the patient's body. However, on the display showing the endoscopic image of the spinal endoscope, the actual viewing angle is located behind the lens of the rigid endoscope module, within the patient's body.
  • since the viewing angles of the navigation view and the spinal endoscope image view differ, each time the doctor moves or rotates the spinal endoscope, changing the position or orientation of the lens of its optical rigid lens module, the doctor must draw on their understanding of the anatomy and mentally transform and unify the different perspectives to reconstruct the precise position and orientation of the endoscopic physiological structures within the patient's body. This constant mental effort can lead to increased fatigue.
  • this navigation solution, which tracks the endoscope like a standard surgical tool and visualizes its spatial position and orientation, simply and directly applies navigation technology to endoscopic surgery. It does not effectively address clinical needs such as the limited visual field and difficult hand-eye coordination. Even for the positioning requirement arising from the ease of disorientation under the endoscope, current electromagnetic navigation technology has not substantially resolved this problem that has long plagued spinal endoscopic surgeons.
  • the object of the present invention is to solve at least one of the above problems and defects in the prior art as well as other technical problems.
  • the present invention provides a method for guiding endoscopic surgery, the method comprising the following steps:
  • an endoscopic image acquisition step: acquiring a current endoscopic image;
  • an enhanced information acquisition step: acquiring at least two of the following three types of enhanced information: a stitched image, a three-dimensional image of the patient's physiological structure, and guidance indications associated with markers;
  • an endoscopic enhanced image fusion step: fusing the acquired current endoscopic image with the at least two types of enhanced information acquired in the enhanced information acquisition step to obtain an endoscopic enhanced image; and
  • a displaying step: displaying a view including at least a portion of the endoscopic enhanced image.
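As a rough illustration of how these four steps could fit together (a hypothetical sketch, not the patent's implementation — the function name, array shapes, and the simple alpha-blend fusion rule are assumptions, since the patent does not specify a blending method):

```python
import numpy as np

def fuse_endoscopic_enhanced_image(current, enhancements, alpha=0.5):
    """Fuse the current endoscopic frame with at least two enhancement
    layers (stitched image, 3-D structure render, marker overlay).

    All inputs are HxWx3 float arrays already resampled into the same
    camera view; simple alpha compositing stands in for the (unspecified)
    fusion rule.
    """
    if len(enhancements) < 2:
        raise ValueError("the method fuses at least two enhancement types")
    fused = current.astype(float)
    for layer in enhancements:
        fused = (1 - alpha) * fused + alpha * layer.astype(float)
    return fused

# toy frames: current endoscopic image plus two enhancement layers
h, w = 4, 4
current = np.full((h, w, 3), 100.0)   # real-time endoscopic image
stitched = np.full((h, w, 3), 200.0)  # stitched surrounding image
bone_3d = np.full((h, w, 3), 50.0)    # rendered bony-structure layer
view = fuse_endoscopic_enhanced_image(current, [stitched, bone_3d])
```

In this toy run the blend is deterministic: 100 blended with 200 gives 150, then blended with 50 gives 100 everywhere.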
  • the enhanced endoscopic image allows soft tissue in a larger field of view around the endoscope tip, as well as areas invisible to the naked eye, to be observed.
  • this provides multi-dimensional (at least two-type) enhancement with bony structure and planned orientation information. Because the enhanced endoscopic image incorporates a larger mosaic of images from around the endoscope tip, it provides the operator with a wider field of view.
  • the integration of three-dimensional images of bony structures allows the operator to see previously invisible bony structures, thereby gaining a global understanding of the position.
  • the integration of guidance instructions related to markers facilitates the operator's rapid positioning of the target structure and determination of its direction and orientation as well as the position of surgical tools.
  • the endoscopic enhanced image fusion step is performed with the aid of a navigation system, wherein the method further comprises, before the endoscopic enhanced image fusion step:
  • an imaging orientation acquisition step: acquiring the orientation of the imaging device of the endoscope under the navigation system; and
  • an enhanced information orientation acquisition step: acquiring the orientation, under the navigation system, of the stitched image, three-dimensional image, or marker to be fused;
  • wherein fusion is performed according to the orientation of the imaging device and the orientation of the stitched image, three-dimensional image, or marker to be fused.
  • in this way, the image of the imaging device and the various types of enhancement information to be fused are all unified into the navigation coordinate system.
  • this fusion method achieves higher image-fusion accuracy and, consequently, higher navigation accuracy and a better navigation effect.
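A minimal sketch of this coordinate unification, assuming 4×4 homogeneous poses reported by the navigation system (the function names and example poses are illustrative, not from the patent):

```python
import numpy as np

def pose(t=(0.0, 0.0, 0.0)):
    """Build a 4x4 homogeneous pose with identity rotation and translation t."""
    T = np.eye(4)
    T[:3, 3] = t
    return T

def layer_points_in_camera(T_nav_cam, T_nav_layer, pts_layer):
    """Express points defined in an enhancement layer's own frame
    (stitched image, 3-D model, or marker) in the imaging-device frame,
    using the navigation coordinate system as the common reference."""
    T_cam_layer = np.linalg.inv(T_nav_cam) @ T_nav_layer
    pts_h = np.hstack([pts_layer, np.ones((len(pts_layer), 1))])
    return (T_cam_layer @ pts_h.T).T[:, :3]

# camera at x=10 in the navigation frame, enhancement layer at x=4
T_nav_cam = pose(t=(10.0, 0.0, 0.0))
T_nav_layer = pose(t=(4.0, 0.0, 0.0))
pts = layer_points_in_camera(T_nav_cam, T_nav_layer, np.array([[1.0, 0.0, 0.0]]))
# the layer point (1, 0, 0) lands at (-5, 0, 0) in the camera frame
```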
  • the method further includes: a global enhanced image fusion step, wherein in the global enhanced image fusion step, at least two of the stitched image, the three-dimensional image and the guidance indication associated with the marker are fused to obtain a global enhanced image, wherein the current field of view position of the imaging device and/or the position of the view of the endoscopic enhanced image are indicated on the global enhanced image; and a step of displaying the global enhanced image, wherein the edges of the current field of view position of the imaging device and the edges of the view of the endoscopic enhanced image are indicated by lines of different colors and/or line types on the global enhanced image.
  • the endoscopic enhanced view is displayed simultaneously with the global enhanced view, and the association between the two is indicated in the global enhanced image; that is, the position of the endoscopic enhanced view is marked on the global enhanced image.
  • Lines of different colors and/or line types are used to indicate the edge of the current field of view of the imaging device and the edge of the view of the endoscopic enhanced image, so that the operator can understand the position of the endoscopic real-time image in the global context and the position and direction information relative to the surrounding soft tissue, bony structure, and target while viewing the larger endoscopic real-time image, thereby providing all-round guidance to the operator.
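One way this edge indication could be sketched (hypothetical: a plain one-pixel colored border stands in for the patent's differing colors and line types):

```python
import numpy as np

def mark_view_edge(global_img, top_left, size, color):
    """Overlay a 1-pixel colored border on the global enhanced image to
    indicate a view's extent; different colors distinguish the imaging
    device's current field of view from the endoscopic enhanced view."""
    y, x = top_left
    h, w = size
    global_img[y, x:x + w] = color          # top edge
    global_img[y + h - 1, x:x + w] = color  # bottom edge
    global_img[y:y + h, x] = color          # left edge
    global_img[y:y + h, x + w - 1] = color  # right edge
    return global_img

canvas = np.zeros((100, 100, 3), dtype=np.uint8)
mark_view_edge(canvas, (10, 10), (30, 40), (255, 255, 0))  # enhanced-image view
mark_view_edge(canvas, (20, 25), (10, 10), (255, 0, 0))    # current field of view
```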
  • the type of enhancement information acquired and fused is selected based on operator input, which enables the surgeon to flexibly select the enhancement information for the endoscopic image according to his or her needs and preferences.
  • the range of the endoscopically enhanced image displayed in the endoscopically enhanced image view (corresponding to the size of the current endoscope image in the endoscopically enhanced view) can be determined based on operator input.
  • the operator can change the size ratio of the current endoscope image in the endoscopically enhanced view as needed by interacting with the system, for example, by zooming in or out.
  • the real-time endoscopic image can be zoomed in (correspondingly, the range of the entire endoscopically enhanced image that can be displayed in the view window becomes smaller), while the content displayed in the surrounding enhanced information remains proportional to the real-time endoscopic image.
  • Enhanced information that extends beyond the edge of the "endoscopically enhanced view” is no longer displayed in the "endoscopically enhanced view.”
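The viewport behavior described above can be sketched as a simple crop whose covered range shrinks as the zoom factor grows (a hypothetical illustration; the function name and windowing rule are assumptions):

```python
import numpy as np

def endoscopic_enhanced_view(enhanced, center, zoom):
    """Select the portion of the endoscopic enhanced image shown in the
    view window. Zooming in on the real-time image shrinks the covered
    range of the enhanced image proportionally; enhancement content
    outside the crop is simply not displayed."""
    h, w = enhanced.shape[:2]
    ch, cw = int(h / zoom), int(w / zoom)  # covered range shrinks as zoom grows
    cy, cx = center
    y0 = min(max(cy - ch // 2, 0), h - ch)
    x0 = min(max(cx - cw // 2, 0), w - cw)
    return enhanced[y0:y0 + ch, x0:x0 + cw]

enhanced = np.arange(400.0).reshape(20, 20)
full = endoscopic_enhanced_view(enhanced, (10, 10), zoom=1.0)    # whole image
zoomed = endoscopic_enhanced_view(enhanced, (10, 10), zoom=2.0)  # half the range
```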
  • the enhancement information acquisition step includes a stitching step, in which a plurality of endoscopic images are stitched together based on the orientation of an imaging device corresponding to at least one endoscopic image to obtain a stitched image.
  • the enhancement information acquisition step further includes: a distortion calibration step before the stitching step, wherein the image of the endoscope to be stitched is subjected to distortion calibration in the distortion calibration step; and a processing step after the stitching step, wherein a plane image is generated according to the image obtained in the stitching step, wherein the plane image is fused with the current image of the endoscope as a stitched image in the endoscope enhanced image fusion step.
  • the distortion effect of the endoscope is removed by the distortion calibration before stitching, so that the stitched image is closer to the real image.
  • in the processing step after stitching, the visual differences in the stitched image caused by different observation angles are eliminated or reduced.
  • combining the two distortion-processing methods makes the stitched image closer to the real surrounding scene and easier for the operator to observe.
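A toy sketch of the calibrate-then-stitch pipeline, assuming a first-order radial distortion model and translation-only frame placement (both simplifications; the patent does not commit to either, and the function names are illustrative):

```python
import numpy as np

def undistort(points, center, k1):
    """First-order radial distortion correction — a simplified stand-in
    for a full endoscope lens-distortion model."""
    p = points - center
    r2 = (p ** 2).sum(axis=1, keepdims=True)
    return center + p * (1.0 + k1 * r2)

def stitch_frames(frames, offsets, canvas_shape):
    """Paste calibrated frames into one canvas at offsets derived from
    the tracked imaging-device orientation for each frame; later frames
    overwrite earlier ones where they overlap."""
    canvas = np.zeros(canvas_shape)
    for frame, (oy, ox) in zip(frames, offsets):
        h, w = frame.shape
        canvas[oy:oy + h, ox:ox + w] = frame
    return canvas

# points at the optical center are unaffected by radial distortion
c = np.array([[5.0, 5.0]])
corrected = undistort(c, np.array([5.0, 5.0]), 0.1)

# two overlapping 4x4 frames stitched into a 4x7 panorama
a = np.full((4, 4), 1.0)
b = np.full((4, 4), 2.0)
pano = stitch_frames([a, b], [(0, 0), (0, 3)], (4, 7))
```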
  • the enhanced information orientation acquisition step includes a marker orientation acquisition step, in which the orientation of the marker under the navigation system is acquired with the aid of the alignment of the three-dimensional image or the two-dimensional image under the navigation system; wherein in the endoscope enhanced image fusion step, the guidance instructions related to the marker are fused with the current image of the endoscope according to the orientation of the marker under the navigation system.
  • the method enables the operator to select one or more of a directional marker and a marker indicating a physiological structure. This enables the operator to make a selection based on their preferences or actual scenarios.
  • the directional marker includes one or more of a directional marker toward the dorsal, ventral, cranial, and caudal sides of the patient.
  • the patient's physiological structure is a spine
  • the marker indicating the physiological structure includes an image model of a marked target (e.g., one or more of a protruding or free intervertebral disc, osteophytes, and ossified yellow ligament). The image model can be removed or separated from the image of the patient's physiological structure, in which case the image model is the marker indicating the physiological structure.
  • the marker indicating the physiological structure includes a physiological structure marker point; preferably, the patient's physiological structure is a spine, and the physiological structure marker point is a marker point indicating one or more of a protruding or free intervertebral disc, osteophytes and ossified yellow ligament, or a physiological structure marker point that does not displace during endoscopic surgery, such as one or more of the ventral side of the articular process, the pedicle of the anterior vertebra, the pedicle of the posterior vertebra, and the intervertebral disc; preferably, there are multiple physiological structure marker points.
  • the enhancement information orientation acquisition step includes registering the three-dimensional image of the patient's physiological structure with the navigation system to acquire the orientation of the three-dimensional image.
  • the three-dimensional image of the patient's physiological structure includes a preoperative three-dimensional image or an intraoperative three-dimensional image.
  • the endoscope is a spinal endoscope.
  • the navigation method of the present invention is particularly beneficial for spinal endoscopes (such as foraminal endoscopes).
  • spinal endoscopes are used in working channel sleeves, which further limits the doctor's visual field under the endoscope.
  • the imaging device of the endoscope and the channel for surgical tools are integrated into the same endoscope insertion tube; because of this special configuration of the spinal endoscope, the doctor readily experiences hand-eye coordination difficulties during spinal endoscopic surgery.
  • the limited viewing angles of the endoscope (where the endoscope is placed within the working channel sleeve) and the rigid optical endoscope module, as well as the difficulty in hand-eye coordination and disorientation caused by endoscope rotation, have not been easily and reliably addressed.
  • the present method addresses these issues by providing multi-dimensional enhanced information to the endoscopic image, significantly reducing operator error and increasing the reliability of spinal surgery.
  • the imaging position acquisition step is performed by acquiring the position of a tracer having a fixed positional relationship relative to the endoscope from a tracking device of a navigation system.
  • This method conveniently implements position tracking of the imaging device of the endoscope by means of the navigation system.
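This indirect position acquisition amounts to one transform composition; a minimal sketch, assuming 4×4 homogeneous matrices (the names and numbers are illustrative):

```python
import numpy as np

def imaging_device_pose(T_nav_tracer, T_tracer_cam):
    """Indirect pose acquisition: the tracking device reports the pose of
    the tracer fixed to the endoscope; composing it with the
    pre-calibrated, fixed tracer-to-imaging-device transform yields the
    imaging device's pose in the navigation coordinate system."""
    return T_nav_tracer @ T_tracer_cam

# tracer reported at x=5; imaging device sits 2 units further along x
T_nav_tracer = np.eye(4); T_nav_tracer[0, 3] = 5.0
T_tracer_cam = np.eye(4); T_tracer_cam[0, 3] = 2.0
T_nav_cam = imaging_device_pose(T_nav_tracer, T_tracer_cam)
```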
  • a computer-readable storage medium on which a computer program is stored.
  • when the program is executed by a processor, the steps of the methods in the above examples are executed.
  • a control device includes a memory, a processor, and a program stored in the memory and capable of running on the processor, wherein the methods in the above examples are executed when the processor runs the program.
  • a computer program product which includes a computer program.
  • when the computer program is executed by a processor, the steps of the methods in the above examples are implemented.
  • an electronic device for endoscopic surgery navigation includes a display device and a processor, wherein the processor has a data interface, wherein the data interface is connectable to an endoscope, so that the processor can acquire an image from the endoscope; and wherein the processor is configured to acquire one or more of a three-dimensional image of a patient's physiological structure and a marker on the three-dimensional image or a two-dimensional image of the patient's physiological structure; wherein the processor is configured to display a view of at least a portion of an endoscopic enhanced image on the display device for at least a period of time when the processor is running, wherein the endoscopic enhanced image is an image that fuses a current image of the endoscope with at least two of the following three types of enhancement information: a stitched image, a three-dimensional image of the patient's physiological structure, and guidance indications associated with markers.
  • a navigation system for endoscopic surgery which includes a tracking device suitable for tracking a tracer having a fixed positional relationship relative to an endoscope, a display device and a processor, wherein the processor is suitable for being connected to the tracking device and the endoscope, wherein the methods in the above examples are executed when the processor is running, and the display in the method is realized through the display device.
  • because the navigation system integrates a larger range of stitched images around the endoscope, it provides the operator with a wider field of view; it integrates the three-dimensional image of the bony structure, allowing the operator to see bony structure that was previously invisible and grasp the overall orientation; and it integrates direction markers or markers indicating physiological structures, or their guidance indications, helping the operator quickly locate the target position and grasp the orientation of surgical tools, thus achieving a better navigation effect and higher navigation accuracy.
  • a robot system which includes a robot arm and the above-mentioned navigation system.
  • the concept of the present invention can also be implemented in the navigation system of the robot system.
  • FIG1 exemplarily shows a flow chart of a method for guiding endoscopic surgery according to the present invention.
  • FIG2 is a schematic diagram showing a principle of a navigation system for spinal endoscopic surgery according to an exemplary embodiment of the present invention.
  • FIG3 exemplarily shows an endoscopic enhanced image obtained by fusing the stitched image, the three-dimensional image of the patient's physiological structure, and the current image of the endoscope.
  • FIG4 exemplarily shows a view in which the endoscopic enhanced image of FIG3 is displayed on a window of a display device.
  • FIG5 exemplarily shows a view of an endoscopic enhanced image displayed on a window of a display device according to another exemplary embodiment, wherein the endoscopic enhanced image fuses a stitched image, a three-dimensional image of the patient's physiological structure, physiological structure markers, and a current image of the endoscope.
  • FIG6 exemplarily shows a display interface of a navigation system having an endoscopic enhanced image view and a global enhanced image.
  • FIG7 exemplarily shows a global enhanced image that is a fusion of a stitched image, a three-dimensional image of a patient's physiological structure, and a direction mark, in which the current field of view position of the imaging device and the position of the view of the endoscopic enhanced image are indicated.
  • FIG8 exemplarily shows a global enhanced image that is a fusion of a stitched image, a three-dimensional image of a patient's physiological structure, and physiological structure markers, wherein the current field of view position of the imaging device and the position of the view of the endoscopic enhanced image are indicated.
  • FIG9 exemplarily shows a flow chart of an enhancement information acquisition step including a stitching step to acquire a stitched image.
  • FIG10 exemplarily shows a schematic diagram of the principle of moving the endoscope to obtain multiple endoscopic images of a larger range.
  • FIG11 is a schematic diagram showing the principle of rotating an endoscope to obtain a stitched image.
  • FIG12 shows an example of an operator marking a direction on a preoperative 3D image or an intraoperative 3D image.
  • FIG13 shows an example of an operator marking a direction on an image obtained by performing quasi-three-dimensional fitting on a two-dimensional perspective image.
  • FIG14 shows an example of an operator completing physiological structure marking on a preoperative 3D image or an intraoperative 3D image.
  • FIG15 shows an example of an operator completing physiological structure marking on an intraoperative anteroposterior image.
  • FIG16 shows an example of an operator completing physiological structure marking on an intraoperative lateral image.
  • the terms "imaging device" and "image" of the endoscope are used in this document. However, those skilled in the art will understand that "imaging device" is a broad concept that includes functions such as video recording, video capture, and image capture, and "image" is a broad concept that includes video, dynamic continuous images, and static images.
  • the imaging device can be the endoscope module used for video imaging of the endoscope. In the image acquisition step of the present invention, what is acquired is a frame image from the endoscope.
  • the navigation system used includes a tracking device 1, a control device 2, and a display device 3.
  • the tracking device 1 can be an optical tracking device (e.g., an NDI navigator), and a corresponding tracer 4 can be provided on the endoscope 5.
  • the control device 2 can be a general-purpose computer, a dedicated computer, an embedded processor, or any other suitable programmable data processing device, such as a single-chip microcomputer or chip.
  • the control device 2 can include a processor and a memory for storing programs, or it can include only a processor, in which case the processor can be attached to the memory storing the programs. In other words, the control device includes at least a processor.
  • the control device 2 (or processor) and the display device 3 can be integrated or provided separately.
  • the control device or processor has a data interface, which can include a data interface that can be connected to the endoscope, allowing the control device/processor to obtain images of the endoscope in real time.
  • the control device or processor also includes a data interface that can be connected to the tracking device 1 of the navigation system, so that the position and orientation of the tracked target, such as the tracer 4 on the endoscope, can be obtained from the tracking device 1 in real time.
  • the endoscope 5 and/or the tracer 4 can also be considered part of the navigation system of the present invention.
  • the distal end of the endoscope enters the patient's tissue structure or bony structure to observe and/or operate on it, and the proximal end of the endoscope (the end closest to the operator, i.e., the end opposite to the distal end of the endoscope's insertion tube) is located outside the patient's body for manipulation by the operator.
  • the navigation system by providing a tracer 4 suitable for being tracked by a tracking device 1 at the proximal end of the endoscope located outside the patient's body, the navigation system can obtain real-time information about the position and orientation of the tracer 4 on the endoscope 5 in the navigation coordinate system.
  • the imaging device of the endoscope, i.e., the distal lens of the optical rigid endoscope module, is provided at the distal end of the insertion tube of the endoscope.
  • the relative positional relationship of the imaging device with respect to the tracer 4 can be obtained, thereby enabling the tracking device 1 in the navigation system to determine the orientation of the tracer 4 and, therefore, the position and orientation of the imaging device in the navigation coordinate system.
  • This calibration is typically performed before performing endoscopic surgery and its navigation, and is also referred to as calibration of the external parameters of the endoscope.
  • the relative positional relationship of the imaging device of the endoscope, i.e., the rigid endoscope module, with respect to the tracer 4 can be calibrated first.
  • the calibration is performed by a calibration tool (not shown in the figure) without the need to obtain images with the help of an endoscope, which reduces the workload of image processing.
  • the navigation system of the present invention optionally includes the calibration tool.
  • a plurality of calibration holes can be formed on the calibration tool, and these calibration holes have bottom surfaces with different inclination angles and/or different apertures to calibrate endoscopes with different end bevel inclination angles and/or barrel diameters.
  • the calibration tool also includes another tracer fixed on its bracket, and the navigation system can know the position of the tracer on the calibration tool in the navigation coordinate system. Moreover, the positional relationship of each calibration hole relative to the tracer on the calibration tool can be known based on the design size of the calibration tool, and thus the positions of these calibration holes in the navigation system are known.
  • the endoscope is positioned by inserting the distal end of the insertion tube into a matching calibration hole. The inclined surface of the distal end of the insertion tube aligns with the inclined bottom surface of the calibration hole. Because the navigation system also knows the position of the endoscope's tracer 4 within the navigation coordinate system, it can determine the relative position of the distal end of the endoscope's insertion tube relative to the tracer 4, completing the calibration process.
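The calibration described above reduces to composing two known navigation-frame poses; a minimal sketch under the assumption of 4×4 homogeneous matrices (the function name and example values are hypothetical):

```python
import numpy as np

def calibrate_tip_to_tracer(T_nav_tracer4, T_nav_hole):
    """External-parameter calibration with the calibration tool: the
    hole's pose in the navigation frame is known from the tool's own
    tracer and its design dimensions; with the insertion-tube tip seated
    in the hole, the tip pose coincides with the hole pose, so the fixed
    tip-relative-to-tracer-4 transform follows by composition."""
    return np.linalg.inv(T_nav_tracer4) @ T_nav_hole

T_nav_tracer4 = np.eye(4); T_nav_tracer4[2, 3] = 1.0  # endoscope tracer at z=1
T_nav_hole = np.eye(4); T_nav_hole[2, 3] = -3.0       # calibration hole at z=-3
T_tracer4_tip = calibrate_tip_to_tracer(T_nav_tracer4, T_nav_hole)
# the tip sits 4 units along -z from tracer 4
```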
  • the position (location and orientation) of the tracer 4 on the endoscope is acquired in real time by the tracking device 1, thereby indirectly acquiring the position and orientation of the endoscope tip, i.e., the orientation of the imaging device.
  • the imaging orientation acquisition step in the method according to the present invention is performed in this manner.
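The indirect pose derivation described above can be sketched as a composition of homogeneous transforms: the tracking device reports the tracer's pose in the navigation frame, and the calibrated external parameters give the imaging device's pose relative to the tracer. The frame names and numeric values below are illustrative assumptions, not calibration data from the invention.

```python
import numpy as np

def pose_matrix(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Pose of the tracer in the navigation coordinate system, as reported in
# real time by the tracking device (placeholder values).
T_nav_tracer = pose_matrix(np.eye(3), [100.0, 50.0, 30.0])

# Calibrated external parameters: pose of the imaging device relative to the tracer.
T_tracer_cam = pose_matrix(np.eye(3), [0.0, 0.0, 250.0])

# Composing the two gives the imaging device's pose in navigation coordinates.
T_nav_cam = T_nav_tracer @ T_tracer_cam
```

With the identity rotations used here, the camera position is simply the sum of the two translations; in practice both transforms carry full rotations.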
  • The following describes in detail, in conjunction with FIG. 1, the steps of a specific embodiment of the method for guiding endoscopic surgery of the present invention.
  • the internal and external parameters of the endoscope can be calibrated as described above.
  • the navigation system is aligned, i.e., the navigation coordinate system is determined.
  • the method of the present invention can be used in scenarios where two-dimensional images are used for navigation, as well as in scenarios where three-dimensional images are used for navigation.
  • the alignment method for the navigation system is well-known and will not be described in detail here.
  • The control device automatically obtains the real-time endoscopic image and, according to the operator's input, determines the type of enhancement information to be used to enhance it.
  • the operator can select the command based on their needs, for example, by inputting the command via various input means such as a keyboard, mouse, handle, or foot switch.
  • the present invention provides the operator with a solution capable of fusing at least two of the following three types of enhanced information onto the endoscopic image: a spliced image obtained by stitching together multiple images around the endoscope tip, a three-dimensional image of the patient's physiological structure, and, for example, directional markings or markings indicating physiological structures made by the operator on the three-dimensional or two-dimensional image of the patient's physiological structure during surgical planning.
  • the stitched image (indicated by reference numeral 6 in Figures 3 and 4) can also be called a "complete three-dimensional map" or a "panoramic image".
  • the "complete” or “panoramic” here only refers to an image of a larger range relative to the limited endoscopic field of view of the endoscope, and does not necessarily mean a 360-degree panoramic image. This is obtained, for example, by the doctor according to his observation needs during the operation, by flexibly translating (for example, as shown in Figure 10) and/or rotating (for example, as shown in Figure 11) the endoscope within a larger range that needs to be observed.
  • Figures 10 and 11 also exemplarily show the conical field of view 51 of the endoscope 5 at three positions.
  • the conical field of view 51 represents the field of view that the imaging device of the endoscope can see.
  • This stitched image around the endoscope tip allows the physician to observe a wider field of view (or global field of view), better displaying the soft tissue surrounding the endoscope tip, such as nerves, dura mater, blood vessels, and intervertebral discs. This allows the physician to quickly determine the endoscope's orientation and the spatial position of key physiological structures relative to the current endoscope image, eliminating the limitation of the endoscope's field of view.
  • the acquisition and fusion of stitched images will be further described below.
  • The three-dimensional image of the patient's physiological structure displays the three-dimensional bone structure from the CT or CBCT image. When it is fused with the endoscopic image, the doctor can therefore observe, on the endoscopic enhanced image, the three-dimensional bone structure that is covered by soft tissue and cannot be seen in the endoscopic field of view (the dotted region indicated by reference numeral 7 in Figure 4 represents this three-dimensional bone structure), thereby obtaining global orientation information for the endoscope tip and the surgical tool tip, for example within the entire spinal structure.
  • The endoscopic image is thus enhanced in multiple dimensions with bony-structure, soft-tissue, and/or planning-marker orientation information.
  • the fusion of a larger range of spliced images around the endoscope provides the operator with a wider field of view; the fusion of the three-dimensional image of the bony structure allows the operator to see bony structures that would otherwise be invisible and grasp the overall orientation; the fusion of directional markers or markers indicating physiological structures or their guiding instructions facilitates the operator to quickly locate the target position and determine the direction.
  • the method of this specific embodiment of the present invention includes an endoscopic image acquisition step, where the endoscopic image can be a frame of the endoscopic image acquired in real time by the control device.
  • The control device acquires and records the position and direction of the endoscope's imaging device corresponding to at least one endoscopic image (for example, corresponding to each image), i.e., the imaging orientation acquisition step in FIG. 1.
  • The processor acquires from the tracking device 1 of the navigation system the position of the tracer 4, which is disposed on the endoscope 5 and located outside the patient's body, and indirectly derives the position and direction of the imaging device using the external parameters calibrated above.
  • The endoscopic image acquisition step here includes acquiring the current endoscopic image to be fused in the endoscopic enhanced image fusion step.
  • the method of the present invention further includes an enhancement information acquisition step, wherein the type of enhancement information to be acquired is determined based on the operator's input.
  • the input component for the operator to input may include, for example, a selection box, a dialog box, etc. displayed on a display device.
  • The operator may select, for example: the stitched image and the three-dimensional image of the patient's physiological structure; the stitched image and the marker; the three-dimensional image of the patient's physiological structure and the marker; or all three types of enhancement information, namely the stitched image, the three-dimensional image of the patient's physiological structure, and the marker.
  • the present invention can obtain enhancement information based on the above selections.
  • The corresponding enhancement information is then obtained, together with its orientation information, so that in the fusion step the endoscopic enhanced image can be fused according to the orientation of the imaging device and the orientation of the stitched image, three-dimensional image, or marker to be fused.
  • the enhanced information acquisition step includes stitching multiple endoscope images.
  • Figure 9 illustrates the enhanced information acquisition step including the stitching step.
  • the control device automatically acquires multiple images around the endoscope's distal end.
  • the control device automatically acquires and records the position and orientation of the endoscope's imaging device corresponding to at least one image (e.g., each image) of the endoscope, i.e., the imaging orientation acquisition step in Figure 9 .
  • the above steps are repeated at multiple positions and orientations of the endoscope, thereby obtaining images of the imaging device at multiple orientations, for example, after translating (as shown in Figure 10 ) and/or rotating (as shown in Figure 11 ) the endoscope over a larger range of observation.
  • Distortion correction can be performed on the multiple endoscopic images using the calibrated internal parameters of the imaging device described above, removing the imaging device's distortion effects, i.e., the distortion calibration step in Figure 9.
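As an illustration of what such distortion removal involves, the sketch below inverts a single-coefficient radial distortion model by fixed-point iteration. The one-term model and the value of `k1` are illustrative assumptions; the invention does not specify the endoscope's distortion model, and real calibrations typically use more coefficients.

```python
def undistort_normalized(xd, yd, k1, iters=20):
    """Invert the radial distortion model xd = xu * (1 + k1 * r^2), where
    (xd, yd) are distorted normalized image coordinates and r is the radius
    of the undistorted point, using fixed-point iteration."""
    xu, yu = xd, yd
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        factor = 1.0 + k1 * r2
        xu, yu = xd / factor, yd / factor
    return xu, yu

# Distort a known ideal point forward, then recover it.
xu0, yu0, k1 = 0.3, 0.2, 0.1
f = 1.0 + k1 * (xu0 ** 2 + yu0 ** 2)
xd, yd = xu0 * f, yu0 * f
xr, yr = undistort_normalized(xd, yd, k1)
```

For small distortion coefficients the iteration contracts quickly, so twenty iterations recover the ideal point to machine precision.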
  • The images are then stitched together based on the position and orientation (i.e., the pose) of the endoscope corresponding to the at least one image, thereby obtaining a stitched image.
  • the imaging orientation acquisition step acquires the position and/or orientation of the imaging device under the navigation system corresponding to at least one of the stitched images.
  • the orientation of the imaging device under the navigation system can be acquired for each acquired image, but it is also possible to acquire the position and/or orientation of the imaging device for only a portion of the images, or even just one image.
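As a toy illustration of pose-driven stitching, the sketch below assumes pure in-plane translation between frames, so each tile's offset on a shared canvas follows directly from the tracked camera translation; stitching real endoscopic frames from arbitrary poses would require full projective warping. The tile values and offsets are fabricated.

```python
import numpy as np

def stitch_by_offset(images, offsets, canvas_shape):
    """Paste grayscale tiles onto a shared canvas at pixel offsets derived
    from the tracked camera poses, averaging intensities where tiles overlap."""
    canvas = np.zeros(canvas_shape, dtype=float)
    weight = np.zeros(canvas_shape, dtype=float)
    for img, (row, col) in zip(images, offsets):
        h, w = img.shape
        canvas[row:row + h, col:col + w] += img
        weight[row:row + h, col:col + w] += 1.0
    # Avoid division by zero in uncovered regions; overlaps get the mean.
    return canvas / np.maximum(weight, 1.0)

# Two 4x4 tiles whose tracked poses say they overlap by two columns.
a = np.full((4, 4), 10.0)
b = np.full((4, 4), 20.0)
pano = stitch_by_offset([a, b], [(0, 0), (0, 2)], (4, 6))
```

The overlap columns hold the average of the two tiles, a crude stand-in for the blending a real stitching pipeline would perform.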
  • the enhanced information acquisition step may further include a processing step after the stitching step, in which a planar image is generated based on the image obtained in the stitching step to reduce visual distortion, and then the planar image is used as a stitched image to be fused with the current image of the endoscope in the endoscopic enhanced image fusion step (for example, it is placed around the current image of the endoscope).
  • The stitching step involves stitching the images together based on the orientation of the imaging device corresponding to at least one image to produce a stitched image.
  • The orientation of the stitched image under the navigation system is known. It can therefore be obtained during the enhancement information orientation acquisition step, allowing the subsequent endoscopic enhanced image fusion step to fuse the stitched image with the current endoscopic image and other enhancement information based on this orientation.
  • The enhanced information orientation acquisition step includes a marker orientation acquisition step, in which the orientation of the marker under the navigation system is acquired with the aid of the registration of the three-dimensional or two-dimensional image to the navigation system. In the endoscopic enhanced image fusion step, the guidance instructions related to the marker are then fused with the current endoscopic image and other enhancement information according to the marker's orientation under the navigation system.
  • the image on which the operator makes markings can be a preoperative 3D image, such as a preoperative CT image, or an image obtained during surgery, such as an intraoperative CBCT image or an intraoperative 2D fluoroscopic image.
  • When the navigation system uses a preoperative CT image for navigation, the operator makes markings on the preoperative 3D image.
  • When the navigation system uses an intraoperative CBCT image or an intraoperative 2D fluoroscopic image for navigation, the operator makes markings on the image after it is acquired during surgery.
  • the operator can choose to make various marks, such as directional marks or marks indicating physiological structures, thereby providing the operator with a variety of options and possibilities.
  • The mark indicating the physiological structure can, for example, include an image of the target tissue or target bony structure removed by the operator during surgical planning (such as a protruding or free intervertebral disc, an ossified ligamentum flavum, or an osteophyte produced by degeneration), or it can be a physiological structure marker point.
  • The doctor can refer to the MRI image to place points indicating the target tissue or target bony structure during surgical planning (for example, points indicating one or more of a protruding or free intervertebral disc, osteophytes, and an ossified ligamentum flavum), that is, physiological structure marking points.
  • Physiological structure markers may also include markers that do not shift during endoscopic surgery. As shown in FIG. 14, the operator may also mark on the three-dimensional image (for example, using preoperative planning software) certain physiological structure markers within or around the intervertebral foramen that do not shift during the entire spinal endoscopic procedure, including but not limited to physiological structure points such as the ventral side of the articular process, the pedicles of the anterior vertebral body, the pedicles of the posterior vertebral body, and the intervertebral disc.
  • The doctor uses the intraoperative planning software to select physiological structure markers on the intraoperative anteroposterior image (FIG. 15) and lateral image (FIG. 16), including but not limited to physiological structure points such as the ventral side of the articular process, the pedicles of the anterior vertebral body, the pedicles of the posterior vertebral body, and the intervertebral disc.
  • The operator can make direction marks on the preoperative or intraoperative three-dimensional image (Figure 12), such as arrow marks toward the dorsal, ventral, cephalad, and caudal sides of the patient.
  • The operator can also mark the direction on the intraoperative two-dimensional image, for example, the intraoperative anteroposterior and lateral images.
  • The intraoperative two-dimensional images can also undergo quasi-three-dimensional fitting, with the operator marking the direction on the fitted image (as shown in Figure 13).
  • This approach avoids the common problem of reversing the cephalad and caudal directions when marking on a two-dimensional image.
  • Although direction markers and physiological structure markers are described herein, they can be freely selected based on the operator's preferences and needs and are not limited to these examples. Furthermore, although four direction markers or physiological structure markers are shown in the embodiments of Figures 12-16, their number can be one, two, three, or more than three as needed.
  • the processor first obtains the preoperative 3D image and simultaneously acquires the markers (e.g., directional markers or markers indicating physiological structures) placed on the preoperative 3D image by the operator as described above.
  • the preoperative 3D image is then registered with the navigation system, and the coordinates of each marker in the navigation system are determined based on the registration relationship established in the registration step, thereby completing the marker position acquisition step.
  • Registration of the image to the navigation system is completed at the same time as the intraoperative 3D image or intraoperative 2D image is acquired. Therefore, in this case, the operator can mark the registered image directly. Since the coordinates of the registered image under the navigation system are known, the coordinates or positions of the marks made in the image can be obtained accordingly, thus completing the marker orientation acquisition step.
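One common way such an image-to-navigation registration could be computed from paired fiducial points is the Kabsch algorithm, which recovers the rigid transform mapping image coordinates to navigation coordinates. This is a generic sketch with synthetic points, not necessarily the registration method used by the navigation system.

```python
import numpy as np

def rigid_register(src, dst):
    """Estimate the rigid transform (R, t) with dst ~= R @ src + t from paired
    3D points (Kabsch algorithm, no scaling)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # Guard against a reflection solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Synthetic check: rotate fiducials 90 degrees about z and translate.
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -3.0, 2.0])
src = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], dtype=float)
dst = src @ R_true.T + t_true
R, t = rigid_register(src, dst)

# Once registered, a mark made in image coordinates maps into navigation coordinates.
marker_nav = R @ np.array([10.0, 20.0, 5.0]) + t
```

After the transform is recovered, every marker placed in image space can be carried into the navigation frame by the same `R` and `t`.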
  • a quasi-3D fitting can be performed on at least two 2D images from different body positions to obtain a quasi-3D image to facilitate operator marking.
  • the markings subsequently captured by the processor are the operator's markings on the fitted image.
  • the images used for the quasi-3D fitting can be intraoperative anteroposterior and lateral images, or images from other body positions, depending on the operator's needs and the actual location of the patient undergoing surgery.
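A deliberately simplified illustration of combining two body positions into a quasi-3D coordinate is shown below. It assumes calibrated, distortion-free anteroposterior and lateral projections that share one axis at the same scale, which is far cruder than a real fitting procedure; the coordinates are fabricated.

```python
def fit_quasi_3d(ap_point, lateral_point):
    """Toy quasi-3D fitting: treat the anteroposterior (AP) view as a
    projection onto the (x, z) plane and the lateral view as a projection
    onto the (y, z) plane; the shared z axis is averaged to reconcile the
    two projections. ap_point = (x, z_ap), lateral_point = (y, z_lat)."""
    x, z_ap = ap_point
    y, z_lat = lateral_point
    return (x, y, (z_ap + z_lat) / 2.0)

# A point picked in both views yields one quasi-3D coordinate.
p = fit_quasi_3d((12.0, 30.0), (-4.0, 31.0))
```

Averaging the shared axis is only a stand-in for the consistency constraints a genuine two-view fitting would enforce.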
  • the operator can input or select the various forms of markings described above in various ways. This can be input through the interface of the display device, for example, by using the touch interface of the display device and using preoperative planning software or intraoperative planning software to select the cephalad, caudal, ventral, and dorsal direction markings, or the selection of physiological structure markings, as shown in Figures 12 and 13. Input can also be performed using the keyboard and/or mouse of an electronic device, as shown in Figure 14.
  • the description herein uses the method of an operator making marks on an image of a patient's physiological structure to form the marks, it is also understood that in other embodiments, the marks may not be made by the operator, for example, the marks may be automatically formed when the image is formed.
  • a marker may be pre-placed on a certain part of the patient's body or on a location such as an operating table so that the marks are generated when the image is acquired.
  • Fusion of the endoscopic enhanced image can then be performed using the acquired orientation information of the endoscopic imaging device. Because the enhancement information to be fused and the imaging device have a defined orientation relationship under the navigation system, the enhancement information and the image captured by the imaging device likewise have a defined orientation relationship, and fusion of the enhancement information with the current endoscopic image can be performed on that basis.
  • the enhanced information acquisition step and the enhanced information orientation acquisition step can be performed before, after, or simultaneously with the endoscopic image acquisition step and/or the imaging orientation acquisition step.
  • the global enhanced image fusion step (described below) can also be performed before, after, or simultaneously with the endoscopic enhanced image fusion step.
  • FIG3 shows an example of the obtained endoscopic enhanced image, in which a stitched image 6, a three-dimensional image 7 of the patient's physiological structure and a current image 10 of the endoscope are fused.
  • FIG4 exemplarily shows a view (also referred to as an endoscopic enhanced view) in which the endoscopic enhanced image in FIG3 is displayed on a window of a display device, wherein the current image of the endoscope (i.e., the real-time image) is located in the central area of the window.
  • the endoscopic image displayed in this view is larger and shows the surrounding enhancement information.
  • the range of the endoscopic enhanced image displayed in the view of the endoscopic enhanced image (corresponding to the proportion of the current image of the endoscope in the endoscopic enhanced view) can be determined based on the input of the operator.
  • the operator can change the proportion of the current image of the endoscope in the endoscopic enhanced view by interacting with the system, for example, by zooming in or out the current image of the endoscope.
  • The real-time endoscopic image can be enlarged (correspondingly, the portion of the endoscopic enhanced image displayed in the view window becomes smaller), while the scale of the surrounding enhancement information relative to the real-time endoscopic image remains unchanged; enhancement information beyond the edge of the endoscopic enhanced view is simply no longer displayed in it.
  • the operator's input can be carried out in the form of a zoom icon input component on the display device.
  • the operator's input method for determining the size of the real-time endoscopic image in the view is not limited to the zoom icon method, and can also be carried out in other ways, such as providing the operator with several different size options, or determining it by the operator entering a numerical value.
  • the input component displayed on the display interface can be, for example, a tab, or a dialog box for the operator to enter a numerical value.
  • FIG5 exemplarily shows a view of an endoscopic enhanced image displayed on a window of a display device according to another exemplary embodiment, wherein the endoscopic enhanced image fuses the current image 10 of the endoscope and three types of enhanced information, namely, the stitched image 6, the three-dimensional image 7 of the patient's physiological structure and the physiological structure markers shown in the figure.
  • the ventral side of the articular process, the pedicles of the anterior vertebra and the intervertebral disc are shown, and the pedicles of the posterior vertebra are not shown because they are outside the edge of the view.
  • Guidance instructions related to the physiological structure markers can also be fused to the endoscopic enhanced image and displayed.
  • The guidance instruction can be an arrow pointing to the physiological structure marker (e.g., an arrow pointing to the pedicles of the posterior vertebral body that are not shown in FIG. 5).
  • The present invention may also include a global enhanced image. Specifically, at least two of the stitched image, the three-dimensional image, and the guidance instructions associated with the markers are fused to generate and display a global enhanced image.
  • the global enhanced image indicates the current field of view of the imaging device and the position of the endoscopic enhanced image. As shown in FIG6 , the endoscopic enhanced image is shown on the left side of the display device, and the global enhanced image is shown on the lower right.
  • FIG. 7 exemplarily illustrates a global enhanced image fusing the stitched image 6, the three-dimensional image 7 of the patient's anatomy, and directional markers pointing in the dorsal, ventral, caudal, and cephalad directions.
  • FIG. 8 exemplarily illustrates a global enhanced image fusing the stitched image 6, the three-dimensional image 7 of the patient's anatomy, and markers of physiological structures such as the intervertebral disc.
  • the fusion method for the global enhanced image is the same as the fusion method for the enhanced information described above: each acquired enhanced information is fused based on its orientation information as determined by the navigation system. The detailed process will not be elaborated upon.
  • the edge of the imaging device's current field of view (i.e., the position of the current endoscope image in the global enhanced image) is indicated by a dotted line 8
  • the edge of the view corresponding to the endoscopic enhanced image is framed by a dotted line 9.
  • The method of the present invention also includes a step of indicating the imaging device's current field of view in the global enhanced image. In this step, the region corresponding to the current field of view is determined in the global enhanced image using the imaging device's current position and orientation, and a marker 8 indicating the edge of this region is generated and shown in the display step. Those skilled in the art will appreciate that forms other than dotted lines may be used to indicate the edge of the current field of view and the edge of the endoscopic enhanced image's view, and the two may be represented by lines of different colors and/or line types.
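Determining the region corresponding to the current field of view could, for instance, amount to projecting the edge of the conical field of view 51 from the imaging device's pose into navigation coordinates. The half-angle and working depth below are assumed inputs for illustration, not values from the invention.

```python
import numpy as np

def fov_edge_points(T_nav_cam, half_angle_deg, depth, n=8):
    """Sample points on the edge of the endoscope's conical field of view at
    a given working depth along the optical axis, expressed in navigation
    coordinates, e.g., for drawing the field-of-view marker."""
    r = depth * np.tan(np.radians(half_angle_deg))
    a = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    # Edge circle in the camera frame, as 4 x n homogeneous coordinates.
    pts_cam = np.stack([r * np.cos(a), r * np.sin(a),
                        np.full(n, depth), np.ones(n)])
    # Transform into the navigation frame and return as n x 3 points.
    return (T_nav_cam @ pts_cam)[:3].T

# With an identity pose, a 45-degree half-angle at depth 10 gives a radius-10 circle.
edge = fov_edge_points(np.eye(4), 45.0, 10.0)
```

These navigation-frame points would then be mapped into the global enhanced image's pixel coordinates to draw the dotted edge.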
  • The endoscopic enhanced view is displayed simultaneously with the global enhanced view, and their correlation is shown in the global enhanced image: the positions of the edge of the real-time endoscopic image and the edge of the endoscopic enhanced view within the global enhanced image are displayed. While watching the larger real-time endoscopic image, the operator can thus know the position of the endoscopic field of view within the overall anatomy, together with the surrounding soft tissue, bony structures, target position, and direction information, thereby realizing all-round guidance.
  • The "step of displaying the endoscopic enhanced image" or "step of displaying the global enhanced image" described herein does not mean that the image is always displayed.
  • the endoscopic enhanced image or the global enhanced image may be displayed only in a certain period of time according to the need. For example, when the operator desires to observe the enhanced image, he or she may trigger the display of the corresponding enhanced image, for example, by means of a foot switch.
  • the present invention also provides an electronic device for endoscopic surgical navigation, comprising a display device 3 and a processor as described above.
  • the processor is included in the control device 2, i.e., the illustrated host.
  • the control device 2 or processor has a data interface.
  • the control device or processor is electrically connected to the tracking device 1 and the endoscope 5 of the navigation system via the data interface to obtain the position and orientation of the imaging device of the endoscope and to obtain an image of the endoscope in the corresponding position and orientation.
  • the processor's data interface also enables the processor to obtain one or more of a three-dimensional image of a patient's physiological structure and a marker on the three-dimensional or two-dimensional image of the patient's physiological structure.
  • The processor executes a computer program (which may be stored in a memory included in the control device or in another memory) to perform the method of the present invention and display a view of at least a portion of an endoscopic enhanced image on the display device 3 for at least a period of time, wherein the endoscopic enhanced image is an image that fuses the current image of the endoscope with at least two of the following three types of enhancement information: a) a stitched image obtained by stitching a plurality of endoscopic images; b) a three-dimensional image of the patient's physiological structure; and c) a marker on the three-dimensional or two-dimensional image of the patient's physiological structure.
  • a global enhanced image is also displayed on the display interface of the display device 3.
  • the global enhanced image is an image obtained by fusing at least two of the stitched image 6, the three-dimensional image 7 and the guidance indication, wherein the global enhanced image indicates the current field of view position 8 of the imaging device and/or the view position 9 of the endoscopic enhanced image.
  • the display interface of the display device 3 may also display one or more of the following images or views: a view on a fitted two-dimensional perspective plane, a sagittal plane view, a coronal plane view, an axial plane view, a real-time endoscopic image, etc. This allows the operator to conveniently obtain more comprehensive navigation information. Those skilled in the art will appreciate that other views of other planes, orientations, or any other suitable view may also be displayed as needed.
  • the present invention also provides a computer readable storage medium having a computer program stored thereon, wherein the computer program can execute the steps of the above method of the present invention when executed by a processor.
  • a control device is provided, which may include a memory, a processor, and a program stored in the memory and executable on the processor, wherein when the processor executes the program, the steps of the method of the present invention are performed.
  • the present invention also provides a computer program product, including the computer program, which, when executed by the processor, implements the steps of the method of the present invention.
  • The storage medium may be RAM (random access memory), ROM (read-only memory), EPROM (electrically programmable ROM), EEPROM (electrically erasable programmable ROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
  • the method can be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • When implemented using software, the method can be implemented in whole or in part in the form of a computer program product.
  • a computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the process or function described in the present invention is generated in whole or in part.
  • the computer can be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device.
  • Computer instructions can be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another.
  • Computer instructions can be transmitted from one website, computer, server, or data center to another via wired (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless (e.g., infrared, radio, microwave) means.
  • Computer-readable storage media can be any available medium that can be accessed by a computer or a data storage device such as a server or data center that includes one or more available media. Available media can be magnetic media (e.g., floppy disk, hard disk, tape), optical media (e.g., DVD), or semiconductor media (e.g., solid-state disk).
  • the memory of the control device of the present invention may include random access memory (RAM) or non-volatile memory (NVM), such as at least one disk storage.
  • the memory may be at least one storage device separate from the processor.
  • The processor of the control device can be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it can also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.

Abstract

The present invention relates to a method for guiding an endoscopic surgery, a corresponding computer-readable storage medium, a control apparatus, a computer program product, and an electronic device for navigation of an endoscopic surgery, a navigation system, and a robotic system. The method comprises: a step of endoscopic image acquisition; a step of enhancement information acquisition: acquiring at least two of the following three types of enhancement information: a) a spliced image obtained by splicing a plurality of images of an endoscope; b) a three-dimensional image of a physiological structure of a patient; and c) a marker on the three-dimensional image or a two-dimensional image of the physiological structure of the patient; a step of endoscopic enhanced image fusion: fusing an acquired current image of the endoscope with at least two types of enhancement information; and a step of displaying a view including at least a portion of an endoscopic enhanced image. According to the present invention, the current image of the endoscope is fused with at least two of the three types of enhancement information, such that the endoscopic image is enhanced in multiple dimensions, thereby achieving comprehensive, multi-dimensional guidance for the operator.

Description

用于引导内镜手术的方法及计算机可读存储介质、控制装置和计算机程序产品、电子设备、导航系统及机器人系统Method for guiding endoscopic surgery, computer-readable storage medium, control device and computer program product, electronic device, navigation system and robotic system

技术领域Technical Field

本发明涉及医疗设备技术领域,尤其涉及手术导航系统,更具体地涉及用于引导内镜手术的方法、电子设备、导航系统及机器人系统。The present invention relates to the technical field of medical equipment, and in particular to a surgical navigation system, and more specifically to a method, electronic equipment, navigation system, and robotic system for guiding endoscopic surgery.

背景技术Background Art

在传统的内镜手术中,单纯地使用内镜视野进行手术观察、工具操作与控制。而医生的视野受限于内镜视野的小范围,无法看到内镜视野之外的区域,从而导致内镜无法正确抵达患处或操作不充分而导致无法实现既定手术目标。In traditional endoscopic surgery, the endoscope is used solely for surgical observation, tool manipulation, and control. However, the surgeon's field of vision is limited to the narrow scope of the endoscope, and the surgeon cannot see areas outside the scope of the endoscope. This can result in the endoscope not reaching the affected area correctly or inadequate manipulation, leading to failure to achieve the intended surgical goal.

尤其是脊柱内镜手术,因其微创的特点,是治疗没有发生锥体结构性不稳定的神经压迫症状的有效手段,在近年得到了广泛的临床认可和推广。但因为椎间孔的生理结构和脊柱内镜的特殊构型,使得医生在熟悉并最终掌握脊柱内镜手术技术的过程中不得不面对以下挑战:In particular, endoscopic spinal surgery, due to its minimally invasive nature, is an effective means of treating nerve compression symptoms without pyramidal structural instability and has gained widespread clinical recognition and promotion in recent years. However, due to the physiological structure of the intervertebral foramen and the unique configuration of the spinal endoscope, doctors must face the following challenges in their journey to familiarize themselves with and ultimately master endoscopic spinal surgery techniques:

第一,镜下视野受限。脊柱内镜在工作通道套筒中使用,导致医生在使用的过程中,视野一直受到工作通道套筒遮挡。当医生按照通常的思维,拉起脊柱内镜,使其远离观察目标点,希望观察到更大的生理结构范围时,首先看到的是工作通道套筒的管壁。而如果医生将工作通道套筒也向远离目标点的方向拉起,则工作通道套筒外侧的软组织会向内收拢,侵入通道中,医生看到的将是工作通道套筒外侧的软组织,挡住医生想要观察的目标点的结构,因此依旧无法实现其观察到更大范围的生理结构的目的。“见木不见林”是脊柱内镜手术医生对于脊柱内镜手术镜下操作视野受限这一困难的形象比喻。First, the visual field under the microscope is limited. The spinal endoscope is used in a working channel sleeve, which causes the doctor's field of view to be blocked by the working channel sleeve during use. When the doctor pulls up the spinal endoscope and moves it away from the target point of observation, hoping to observe a larger range of physiological structures according to common thinking, the first thing he sees is the wall of the working channel sleeve. If the doctor also pulls up the working channel sleeve away from the target point, the soft tissue on the outside of the working channel sleeve will shrink inward and invade the channel. What the doctor sees will be the soft tissue on the outside of the working channel sleeve, which blocks the structure of the target point that the doctor wants to observe. Therefore, he still cannot achieve the purpose of observing a wider range of physiological structures. "Seeing the trees but not the forest" is a figurative metaphor for the difficulty of limited visual field under spinal endoscope surgery by spinal endoscope surgeons.

第二,容易迷失方向。为了克服镜下视野受限这一挑战,脊柱内镜的末端被设计为具有成角度的斜面的末端,成像装置的镜片被安装在成角度斜面上并相对于内镜的轴线偏心地布置,以便通过绕轴线旋转内镜以观察到更大范围。Second, it is easy to get lost. To overcome the challenge of limited visual field under the microscope, the distal end of the spinal endoscope is designed with an angled bevel. The lens of the imaging device is mounted on the angled bevel and arranged eccentrically relative to the axis of the endoscope, so that a wider range can be observed by rotating the endoscope around the axis.

然而,也正是因为这种构型,导致了如果初学者对镜下软组织结构不熟悉,反而会经常性地由于转动内镜而迷失方向。However, it is precisely because of this configuration that beginners who are not familiar with the soft-tissue structures seen under the endoscope will often become disoriented when rotating it.

第三,手眼协调困难。脊柱内镜的构型为内镜的光学硬镜模组(也称内镜的成像装置)和手术工具的通道集成在同一个内镜伸入筒内。当医生轴向转动脊柱内镜观察目标位置时,手术工具也随脊柱内镜转动。因为光学硬镜模组与手术工具的实际位置因为脊柱内镜的特殊构型而不可能发生任何相对变化,其直观结果是医生观察到手术工具在屏幕上的位置并没有发生任何变化,但此时的实际情况是因为脊柱内镜的轴向旋转,此时手术工具相对于镜下生理结构的位置已经改变。但医生无法“看穿”人体而直观地看到工具相对于周边生理结构的位置变化,而只能通过显示脊柱内镜图像的屏幕进行观察。因此,医生非常容易出现“手眼协调”困难,每次内镜旋转后,都需要重新适应工具在屏幕中的运动方向,思维一直处于运转状态,极易产生疲劳。Third, hand-eye coordination is difficult. The configuration of the spinal endoscope is that the optical rigid endoscope module of the endoscope (also called the imaging device of the endoscope) and the channel for the surgical tool are integrated into the same endoscope insertion tube. When the doctor axially rotates the spinal endoscope to observe the target position, the surgical tool also rotates with the spinal endoscope. Because the actual position of the optical rigid endoscope module and the surgical tool cannot undergo any relative change due to the special configuration of the spinal endoscope, the intuitive result is that the doctor observes that the position of the surgical tool on the screen has not changed, but the actual situation at this time is that due to the axial rotation of the spinal endoscope, the position of the surgical tool relative to the physiological structure under the mirror has changed. However, the doctor cannot "see through" the human body and intuitively see the change in the position of the tool relative to the surrounding physiological structure, but can only observe through the screen displaying the spinal endoscope image. Therefore, doctors are very likely to have difficulty in "hand-eye coordination". After each rotation of the endoscope, they need to readjust to the movement direction of the tool on the screen. Their thinking is always in a state of operation, which easily leads to fatigue.

第四,“学习曲线”陡峭。因为以上三个镜下操作中面临的困难,导致了脊柱内镜医生需要超群的生理解剖知识、优秀的空间想象能力和更多的手术案例积累才能渡过所谓的“学习曲线”。通常,医生需要30至50例手术甚至更多,方能熟练掌握脊柱内镜技术,这使得培养一位合格的脊柱内镜手术医生往往需要较长的时间和不菲的成本,这也限制了脊柱内镜技术的推广和普及。Fourth, the "learning curve" is steep. Because of the difficulties encountered during these three endoscopic procedures, spinal endoscopists require superior anatomical and physiological knowledge, excellent spatial visualization, and a large caseload to overcome this so-called "learning curve." Typically, doctors need 30 to 50 surgeries, or even more, to master spinal endoscopy. This makes training a qualified spinal endoscopy surgeon often time-consuming and costly, which also limits the promotion and widespread adoption of spinal endoscopy.

而且,传统的导航技术,多用于在脊柱内固定手术中引导椎弓根螺钉的置入过程中对工具和植入物相对于患者生理结构的实时追踪和定位,辅助医生实现“精准、安全、微创”的临床目标。近年来,开始出现将导航技术,特别是电磁导航技术,应用在脊柱内镜手术中。其具体的实现方式为:通过导航系统中的电磁追踪器追踪内镜示踪器的实时空间位置,并使用导航系统所包括的处理器计算内镜的实时位置和视场,将其投影到已经通过导航配准流程与患者实际生理结构匹配的空间三维影像模型或二维透视影像模型上,并在导航系统的显示器上显示这种相对位置关系。该种解决方案存在如下问题,即在电磁导航界面上,医生的模拟观察视角是与获取该三维影像或二维透视影像时X射线源的位置是一致的,位于患者体外某处。而在显示脊柱内镜的内镜图像的显示器上,实际的观察视角位于内镜硬镜模组的镜片后方,在患者体内。因此,由于导航视图与脊柱内镜影像视图的观察视角位置不同,每次医生通过移动或者转动脊柱内镜从而改变脊柱内镜的光学硬镜模组的镜片的位置或者调整镜片朝向之后,医生必须要结合自己对解剖结构的了解,经过思考过程将不同的观察视角转化并统一,才能“脑补”出此刻内镜的镜下生理结构在患者体内的准确位置和方向。其结果就是医生的思维一直处于运转状态,更容易产生疲劳。Furthermore, traditional navigation technology is primarily used to guide the placement of pedicle screws during spinal fixation surgery, enabling real-time tracking and positioning of tools and implants relative to the patient's anatomy. This helps doctors achieve the clinical goals of "precision, safety, and minimally invasive procedures." In recent years, navigation technology, particularly electromagnetic navigation, has begun to be applied to endoscopic spinal surgery. This approach involves using an electromagnetic tracker within the navigation system to track the real-time spatial position of the endoscopic tracer. The navigation system's processor calculates the endoscope's real-time position and field of view, projecting this information onto a spatial 3D image model or 2D fluoroscopic image model that has been aligned with the patient's actual anatomy through a navigation registration process. This relative positional relationship is then displayed on the navigation system's display. This solution presents a problem: on the electromagnetic navigation interface, the doctor's simulated viewing angle corresponds to the position of the X-ray source when the 3D or 2D fluoroscopic image was acquired, located somewhere outside the patient's body. However, on the display showing the endoscopic image of the spinal endoscope, the actual viewing angle is located behind the lens of the rigid endoscope module, within the patient's body. Therefore, because the navigation view and the spinal endoscope image view are observed from different viewpoints, each time the doctor moves or rotates the spinal endoscope and thereby changes the position or orientation of the lens of its rigid optical module, the doctor must draw on their own knowledge of the anatomy and mentally transform and unify the different viewpoints before they can "fill in" the exact position and orientation, within the patient's body, of the physiological structures currently seen under the endoscope. The result is that the doctor's mind is constantly working, which easily leads to fatigue.

因此这种将内镜当作普通手术工具一样追踪并将其空间位置和朝向可视化的导航解决方案,只是将导航技术简单直接的应用在内镜手术中,并没有有效的解决镜下视野受限、手眼协调困难这些临床需求。甚至对于“镜下容易迷失方向”这一与“定位”相关的需求,目前的电磁导航技术也并没有在实质上解决长久以来困扰脊柱内镜手术医生的问题。Therefore, this navigation solution, which tracks the endoscope like a standard surgical tool and visualizes its spatial position and orientation, simply and directly applies navigation technology to endoscopic surgery. It does not effectively address clinical needs such as limited visual field and difficulty with hand-eye coordination. Even with regard to the "positioning" requirement of "easy disorientation under the endoscope," current electromagnetic navigation technology has not substantially resolved a problem that has long plagued spinal endoscopic surgeons.

如何发挥导航系统的优势,解决目前内镜手术例如脊柱内镜手术中遇到的“镜下视野受限”、“容易迷失方向”、“手眼协调困难”并克服“学习曲线”陡峭的挑战,使得初学者也能在导航系统的辅助下,精准、高效的完成手术操作,是本领域亟待解决的问题。How to leverage the advantages of the navigation system to address the challenges of "limited visual field under the microscope", "easy to get lost", "difficult hand-eye coordination" and the steep "learning curve" encountered in current endoscopic surgeries such as spinal endoscopic surgery, so that beginners can also complete surgical operations accurately and efficiently with the assistance of the navigation system, is an urgent problem to be solved in this field.

发明内容Summary of the Invention

本发明的目的旨在解决现有技术中存在的上述问题和缺陷的至少一个以及其它技术问题。The object of the present invention is to solve at least one of the above problems and defects in the prior art as well as other technical problems.

本发明一方面提供了一种用于引导内镜手术的方法,该方法包括如下步骤:In one aspect, the present invention provides a method for guiding endoscopic surgery, the method comprising the following steps:

内镜图像获取步骤:获取内镜的图像;Endoscopic image acquisition steps: acquiring an endoscopic image;

增强信息获取步骤:获取以下三种类型的增强信息中的至少两种:Enhanced information acquisition step: acquiring at least two of the following three types of enhanced information:

a)将内镜的多个图像拼接得到的拼接图像;a) a stitched image obtained by stitching multiple images of the endoscope;

b)患者生理结构的三维影像;b) 3D images of the patient’s physiological structures;

c)在患者生理结构的三维影像或二维影像上的标记;c) markings on a three-dimensional image or a two-dimensional image of a patient's physiological structure;

内镜增强图像融合步骤:将所获取的内镜的当前图像与增强信息获取步骤中获取的至少两种增强信息相融合以获得内镜增强图像;以及Endoscopic enhanced image fusion step: fusing the acquired current endoscopic image with the at least two types of enhanced information acquired in the enhanced information acquisition step to obtain an endoscopic enhanced image; and

显示步骤:显示包括内镜增强图像的至少一部分的视图。Displaying step: displaying a view including at least a portion of the endoscopically enhanced image.

在该示例的方案中,通过将内镜的当前图像与上述三种增强信息中的至少两种融合,使得内镜图像得到内镜末端周围更大视野内的软组织、肉眼看不到的骨性结构、规划方位信息的多维度(至少两种)的增强。由于内镜增强图像融合了内镜末端周围的更大范围的拼接图像而为操作者提供了更大的视野;由于融合了骨性结构的三维影像而使操作者看到本来看不到的骨性结构从而掌握全局方位;由于融合了与标记相关的引导指示而便于操作者快速定位靶点结构位置并确定方向以及手术工具方位。现有技术中即使有对内镜图像的增强,也仅通过有限的一种方式来增强,而没有从至少两个方面同时多维度地实现对内镜图像的增强。在本发明中,操作者能够直观地观察到内镜视野下的当前图像及其周围的软组织、骨性结构、靶点位置和方向信息,实现对操作者的全方位多维度的引导。该方案完美解决了内镜手术中遇到的“镜下视野受限”、“容易迷失方向”、“手眼协调困难”的问题以及“学习曲线陡峭”的问题。In the solution of this example, by fusing the current endoscope image with at least two of the three types of enhancement information above, the endoscopic image is enhanced in multiple dimensions (at least two): with soft tissue in a larger field of view around the endoscope tip, with bony structures invisible to the naked eye, and with planned orientation information. Because the enhanced endoscopic image incorporates a larger stitched image of the surroundings of the endoscope tip, it provides the operator with a wider field of view; because it fuses three-dimensional images of bony structures, the operator can see otherwise invisible bony structures and thereby grasp the overall orientation; and because it fuses guidance indications related to the markers, the operator can quickly locate the target structure and determine its direction as well as the orientation of the surgical tools. In the prior art, even where the endoscopic image is enhanced, it is enhanced in only one limited way, not multi-dimensionally from at least two aspects at once. In the present invention, the operator can intuitively observe the current image in the endoscopic field of view together with the surrounding soft tissue, bony structures, and target position and direction information, achieving all-round, multi-dimensional guidance for the operator.
This solution perfectly addresses the problems encountered in endoscopic surgery, namely "limited visual field," "easy disorientation," "difficult hand-eye coordination," and the "steep learning curve."
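
As a purely illustrative sketch of the fusion idea (not the claimed implementation — the array shapes and the centered placement of the live frame are assumptions for the example), the current endoscope frame can be composited intact into an enhancement canvas that has already been rendered in the same image coordinates, so that the live view stays unaltered while the enhancement information fills the surroundings:

```python
import numpy as np

def fuse_center(live, enhanced_bg):
    """Keep the live endoscope frame intact and let the pre-aligned
    enhancement layer fill the remaining canvas around it."""
    out = enhanced_bg.copy()
    h, w = live.shape
    H, W = out.shape
    r0, c0 = (H - h) // 2, (W - w) // 2  # center the live frame in the canvas
    out[r0:r0 + h, c0:c0 + w] = live
    return out

# Tiny synthetic example: a 2x2 live frame centered in a 6x6 enhancement canvas.
fused = fuse_center(np.ones((2, 2)), np.zeros((6, 6)))
```

In a real system the `enhanced_bg` layer would itself be the result of rendering the stitched image, 3D image, or marker indications at the correct pose; this sketch only shows the compositing step.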

根据一种示例,内镜增强图像融合步骤借助于导航系统来进行,其中,方法在内镜增强图像融合步骤之前还包括:According to an example, the endoscopic enhanced image fusion step is performed with the aid of a navigation system, wherein the method further comprises, before the endoscopic enhanced image fusion step:

成像方位获取步骤:获取内镜的成像装置在导航系统下的方位;以及Imaging orientation acquisition step: acquiring the orientation of the imaging device of the endoscope under the navigation system; and

增强信息方位获取步骤:获取所需融合的拼接图像、三维影像或标记在导航系统下的方位;Enhanced information position acquisition step: obtaining the position of the required fused spliced image, three-dimensional image or marker under the navigation system;

其中,在内镜增强图像融合步骤中根据成像装置的方位以及所需融合的拼接图像、三维影像或标记的方位来进行融合。In the endoscopic enhanced image fusion step, fusion is performed according to the orientation of the imaging device and the orientation of the spliced image, three-dimensional image or marker to be fused.

该示例中借助于导航系统,将待融合的成像装置的图像以及各增强信息统一到导航坐标系下,这种融合方式获得的图像的融合精度更高,具有更高的导航精度和导航效果。In this example, with the help of the navigation system, the images of the imaging devices to be fused and the various enhancement information are unified into the navigation coordinate system. This fusion method has higher fusion accuracy of the images and has higher navigation accuracy and navigation effect.

根据一种示例,所述方法还包括:全局增强图像融合步骤,其中在全局增强图像融合步骤中将拼接图像、三维影像和与标记相关的引导指示中的至少两种相融合以获取全局增强图像,其中全局增强图像上指示出成像装置的当前视野位置和/或内镜增强图像的视图的位置;以及显示全局增强图像的步骤,其中在全局增强图像上由不同颜色和/或线型的线来表示出成像装置的当前视野位置的边缘和内镜增强图像的视图的边缘。According to one example, the method further includes: a global enhanced image fusion step, wherein in the global enhanced image fusion step, at least two of the stitched image, the three-dimensional image and the guidance indication associated with the marker are fused to obtain a global enhanced image, wherein the current field of view position of the imaging device and/or the position of the view of the endoscopic enhanced image are indicated on the global enhanced image; and a step of displaying the global enhanced image, wherein the edges of the current field of view position of the imaging device and the edges of the view of the endoscopic enhanced image are indicated by lines of different colors and/or line types on the global enhanced image.

在该示例中,内镜增强视图作为全局增强图像的一个局部,与全局增强视图同时显示,并在全局增强图像中显示出两者的关联信息,即在全局增强图像上醒目地由不同颜色和/或线型的线来表示出成像装置的当前视野位置的边缘和内镜增强图像的视图的边缘,使得操作者在观看较大的内镜实时图像的同时,能够获知内镜实时图像在全局中的位置以及相对于周围的软组织、骨性结构、靶点位置和方向信息,实现对操作者的全方位的引导。In this example, the endoscopically enhanced view, as a local portion of the global enhanced image, is displayed simultaneously with the global enhanced view, and the relationship between the two is shown in the global enhanced image: the edge of the imaging device's current field-of-view position and the edge of the view of the endoscopically enhanced image are conspicuously indicated on the global enhanced image by lines of different colors and/or line types. Thus, while viewing the larger real-time endoscopic image, the operator can know its position in the global context and its position and direction relative to the surrounding soft tissue, bony structures, and target, achieving all-round guidance for the operator.
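
Purely as an illustration of the edge-marking idea (a rendering pipeline would draw styled, colored lines instead), marking the circular edge of the imaging device's field of view and the edge of the enhanced view on a global canvas can be reduced to overwriting the pixels lying on each circle with a distinct value; the canvas size, centers, and radii below are made-up example numbers:

```python
import numpy as np

def draw_fov_edge(canvas, center, radius, value):
    """Overwrite pixels lying on a circle of the given radius with a distinct
    value, standing in for a distinctly colored/styled edge line."""
    yy, xx = np.indices(canvas.shape)
    dist = np.hypot(yy - center[0], xx - center[1])
    canvas[np.abs(dist - radius) < 0.5] = value
    return canvas

# Mark the current field-of-view edge (value 1) and the larger enhanced-view
# edge (value 2) on one global canvas.
g = np.zeros((15, 15))
g = draw_fov_edge(g, (7, 7), 3, 1)
g = draw_fov_edge(g, (7, 7), 6, 2)
```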

根据一种示例,所获取和融合的增强信息的类型基于操作者的输入而选取。该示例使得手术医生能够根据其需要和偏好灵活选择对内镜图像的增强信息。According to one example, the type of enhancement information acquired and fused is selected based on operator input, which enables the surgeon to flexibly select the enhancement information for the endoscopic image according to his or her needs and preferences.

根据一种示例,在内镜增强图像的视图中所显示的内镜增强图像的范围能够根据操作者的输入而确定。在该示例中,在该内镜增强图像的视图中所显示的内镜增强图像的范围(对应于内镜的当前图像在内镜增强视图中的大小)能够根据操作者的输入而确定。换言之,操作者可以根据需要通过与系统交互而改变内镜的当前图像在内镜增强视图中的大小比例,例如将内镜的当前图像放大或缩小。当医生需要更细致地观察镜下视野中的某细节时,可以将内镜实时图像放大(相应地整个内镜增强图像在该视图窗口中能显示的范围变小),而周围的增强信息中显示的内容与内镜实时图像的比例关系保持不变,超出“内镜增强视图”边缘部分的增强信息不再在“内镜增强视图”上显示。According to one example, the range of the endoscopically enhanced image displayed in the endoscopically enhanced image view can be determined based on operator input. In this example, the range of the endoscopically enhanced image displayed in the endoscopically enhanced image view (corresponding to the size of the current endoscope image in the endoscopically enhanced view) can be determined based on operator input. In other words, the operator can change the size ratio of the current endoscope image in the endoscopically enhanced view as needed by interacting with the system, for example, by zooming in or out. When a physician needs to observe a detail in the endoscopic field of view in greater detail, the real-time endoscopic image can be zoomed in (correspondingly, the range of the entire endoscopically enhanced image that can be displayed in the view window becomes smaller), while the content displayed in the surrounding enhanced information remains proportional to the real-time endoscopic image. Enhanced information that extends beyond the edge of the "endoscopically enhanced view" is no longer displayed in the "endoscopically enhanced view."
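
The zoom behavior described above can be stated numerically: if the operator zooms the live image by a factor z, every structure in the view grows by the same factor, so the physical span of the enhanced image that still fits in the fixed view window shrinks by 1/z. A minimal sketch (the window width and pixel density are made-up example values):

```python
def visible_span_mm(view_width_px, px_per_mm_at_1x, zoom):
    """Physical width of the enhanced image visible in a fixed view window:
    zooming by `zoom` enlarges every structure equally, so the visible span
    shrinks by the same factor."""
    return view_width_px / (px_per_mm_at_1x * zoom)

span_1x = visible_span_mm(800, 10.0, 1.0)  # span visible at 1x zoom
span_2x = visible_span_mm(800, 10.0, 2.0)  # doubling zoom halves the span
```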

根据一种示例,在融合的增强信息包括拼接图像的情况下,增强信息获取步骤包括拼接步骤,在拼接步骤中,将多个内镜的图像根据与至少一个内镜图像对应的成像装置的方位进行拼接而获取拼接图像。在本方案中,由于根据与至少一个图像对应的成像装置的方位来拼接各图像,相对于现有的各种拼接方式(例如基于算法)而言,所需要处理的计算更少,很大程度地提高拼接速度,这对于手术过程中的对内镜图像的实时观察来讲尤其重要。According to one example, when the fused enhancement information includes a stitched image, the enhancement information acquisition step includes a stitching step, in which a plurality of endoscopic images are stitched together based on the orientation of an imaging device corresponding to at least one endoscopic image to obtain a stitched image. In this solution, because the images are stitched together based on the orientation of the imaging device corresponding to the at least one image, less computational processing is required compared to existing stitching methods (e.g., algorithm-based methods), significantly improving stitching speed. This is particularly important for real-time observation of endoscopic images during surgery.
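
In its simplest form, pose-based stitching places each frame directly at the canvas position implied by its tracked imaging-device pose, so no feature matching is needed. A toy sketch, assuming the tracked poses have already been reduced to integer pixel offsets in the mosaic plane (a real system would warp by the full relative pose):

```python
import numpy as np

def stitch_by_pose(frames, offsets_px, canvas_shape):
    """Place each frame at the canvas position implied by its tracked pose
    (already reduced to pixel offsets); later frames overwrite overlap."""
    canvas = np.zeros(canvas_shape, dtype=frames[0].dtype)
    for frame, (r0, c0) in zip(frames, offsets_px):
        h, w = frame.shape
        canvas[r0:r0 + h, c0:c0 + w] = frame
    return canvas

# Two overlapping 4x4 frames whose relative shift of 2 px is known from
# tracking, so registration requires no image-content computation at all.
mosaic = stitch_by_pose([np.full((4, 4), 1), np.full((4, 4), 2)],
                        [(0, 0), (0, 2)], (4, 6))
```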

根据一种示例,在融合的增强信息包括拼接图像的情况下,增强信息获取步骤还包括:在所述拼接步骤之前的畸变校准步骤,其中,在所述畸变校准步骤中将待拼接的内镜的图像进行畸变校准;以及在拼接步骤之后的处理步骤,在处理步骤中根据在拼接步骤中获得的图像生成平面图像,其中在内镜增强图像融合步骤中将平面图像作为拼接图像与内镜的当前图像进行融合。在该示例中,通过在拼接之前的畸变校准去除内镜的畸变效应,使拼接图像更接近于真实图像。并在拼接之后的处理步骤中消除或减少因观察视角的不同而产生的拼接图像的视 觉上的变形。两种畸变处理方式相结合以使得拼接图像更接近于真实的周围场景,便于操作者观察。According to one example, in the case where the fused enhancement information includes a stitched image, the enhancement information acquisition step further includes: a distortion calibration step before the stitching step, wherein the image of the endoscope to be stitched is subjected to distortion calibration in the distortion calibration step; and a processing step after the stitching step, wherein a plane image is generated according to the image obtained in the stitching step, wherein the plane image is fused with the current image of the endoscope as a stitched image in the endoscope enhanced image fusion step. In this example, the distortion effect of the endoscope is removed by the distortion calibration before stitching, so that the stitched image is closer to the real image. And in the processing step after stitching, the visual difference of the stitched image caused by the different observation angles is eliminated or reduced. The two distortion processing methods are combined to make the stitched image closer to the real surrounding scene, which is easier for the operator to observe.
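
A minimal sketch of the pre-stitching distortion-calibration step, assuming a single-coefficient radial model on normalized image points (real endoscope calibration typically uses fuller models with several radial and tangential terms); since the model has no closed-form inverse, the undistortion is done by fixed-point iteration:

```python
import numpy as np

def distort(pts, k1):
    """Single-coefficient radial model: x_d = x_u * (1 + k1 * r_u^2)."""
    r2 = np.sum(pts**2, axis=1, keepdims=True)
    return pts * (1.0 + k1 * r2)

def undistort(pts_d, k1, iters=25):
    """Invert the radial model by fixed-point iteration."""
    und = pts_d.copy()
    for _ in range(iters):
        r2 = np.sum(und**2, axis=1, keepdims=True)
        und = pts_d / (1.0 + k1 * r2)
    return und

# Round-trip check on a normalized image point with mild barrel distortion.
p = np.array([[0.3, 0.4]])
recovered = undistort(distort(p, k1=-0.2), k1=-0.2)
```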

根据一种示例,在融合的增强信息包括标记的情况下,增强信息方位获取步骤包括标记方位获取步骤,在该步骤中借助于三维影像或二维影像在导航系统下的配准来获取标记在导航系统下的方位;其中在内镜增强图像融合步骤中,根据标记在导航系统下的方位,将与标记相关的引导指示与内镜的当前图像进行融合。According to one example, when the fused enhanced information includes a marker, the enhanced information orientation acquisition step includes a marker orientation acquisition step, in which the orientation of the marker under the navigation system is acquired with the aid of the alignment of the three-dimensional image or the two-dimensional image under the navigation system; wherein in the endoscope enhanced image fusion step, the guidance instructions related to the marker are fused with the current image of the endoscope according to the orientation of the marker under the navigation system.
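
Once the marker's position is known in navigation coordinates, fusing a guidance indication into the current endoscope image amounts to projecting that 3D point through the tracked camera pose and the calibrated intrinsics. A hedged pinhole-model sketch (the intrinsic values are made-up, and real systems would also apply the distortion model):

```python
import numpy as np

def project_marker(p_nav, T_nav_cam, fx, fy, cx, cy):
    """Map a marker from navigation coordinates into endoscope pixel
    coordinates via the tracked camera pose and pinhole intrinsics."""
    p_cam = np.linalg.inv(T_nav_cam) @ np.append(p_nav, 1.0)  # nav -> camera
    u = fx * p_cam[0] / p_cam[2] + cx
    v = fy * p_cam[1] / p_cam[2] + cy
    return u, v

# Camera at the navigation origin looking along +z; a marker 100 mm ahead and
# 10 mm to the side projects right of the principal point.
u, v = project_marker(np.array([10.0, 0.0, 100.0]), np.eye(4),
                      fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```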

根据一种示例,所述方法使得操作者能够选择方向标记和指示生理结构的标记中的一者或多者。这使得操作者能够根据其偏好或实际场景进行选择。根据一种示例,方向标记包括朝向患者的背侧、腹侧、头侧和尾侧的方向标记中的一者或多者。根据一种示例,患者生理结构为脊柱,指示生理结构的标记包括标记出的靶点(例如突出的或游离的椎间盘、骨赘和骨化的黄韧带中的一者或多者)的影像模型。该影像模型能够从患者生理结构的影像中摘除或分离出来,这种情况下该影像模型即为指示该生理结构的标记。According to one example, the method enables the operator to select one or more of a directional marker and a marker indicating a physiological structure. This enables the operator to make a selection based on their preferences or actual scenarios. According to one example, the directional marker includes one or more of a directional marker toward the dorsal, ventral, cranial, and caudal sides of the patient. According to one example, the patient's physiological structure is a spine, and the marker indicating the physiological structure includes an image model of a marked target (e.g., one or more of a protruding or free intervertebral disc, osteophytes, and ossified yellow ligament). The image model can be removed or separated from the image of the patient's physiological structure, in which case the image model is the marker indicating the physiological structure.

根据一种示例,指示生理结构的标记包括生理结构标记点;优选地,患者生理结构为脊柱,并且生理结构标记点为指示突出的或游离的椎间盘、骨赘和骨化的黄韧带中的一者或多者的标记点,或为在内镜手术过程中不发生位移的生理结构标记点,例如关节突腹侧、前节椎体的椎弓根、后节椎体的椎弓根、椎间盘中的一者或多者;优选地,生理结构标记点为多个。According to one example, the marker indicating the physiological structure includes a physiological structure marker point; preferably, the patient's physiological structure is a spine, and the physiological structure marker point is a marker point indicating one or more of a protruding or free intervertebral disc, osteophytes and ossified yellow ligament, or a physiological structure marker point that does not displace during endoscopic surgery, such as one or more of the ventral side of the articular process, the pedicle of the anterior vertebra, the pedicle of the posterior vertebra, and the intervertebral disc; preferably, there are multiple physiological structure marker points.

根据一种示例,在融合的增强信息包括患者生理结构的三维影像的情况下,增强信息方位获取步骤包括将患者生理结构的三维影像配准到导航系统下以获取三维影像的方位的步骤。优选地,患者生理结构的三维影像包括术前三维影像或术中三维影像。According to one example, when the fused enhancement information includes a three-dimensional image of the patient's physiological structure, the enhancement information orientation acquisition step includes registering the three-dimensional image of the patient's physiological structure with the navigation system to acquire the orientation of the three-dimensional image. Preferably, the three-dimensional image of the patient's physiological structure includes a preoperative three-dimensional image or an intraoperative three-dimensional image.
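
Registering a 3D image to the navigation coordinate system is commonly done (in general navigation practice; the patent does not prescribe a specific algorithm) by a least-squares rigid fit between fiducial points picked in the image and the same points measured by the tracking device. A minimal Kabsch/SVD sketch with synthetic data:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) mapping fiducials picked in
    the 3D image (src) onto their tracked navigation positions (dst)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard vs. reflection
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

# Synthetic check: recover a known 90-degree rotation about z plus a shift.
src = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]])
Rz = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
t_true = np.array([1.0, 2.0, 3.0])
dst = src @ Rz.T + t_true
R, t = rigid_register(src, dst)
```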

根据一种示例,所述内镜为脊柱内镜。According to one example, the endoscope is a spinal endoscope.

本发明的导航方法对于脊柱内镜(例如椎间孔镜)尤其有益。如背景技术中所介绍的,脊柱内镜在工作通道套筒中使用,使得医生的镜下视野进一步受限,内镜的成像装置和手术工具的通道集成在同一个内镜伸入筒中,使得医生容易出现手眼协调困难。如何在脊柱内镜手术中,让医生不受脊柱内镜特殊使用方式(内镜在工作通道套筒中使用)和内镜光学硬镜模组的可视角度有限的影响、减少“手眼协调困难”以及由于内镜转动造成的迷失方向的问题,一直尚未有便捷、可靠的解决方案。而本发明的方法通过为内镜图像提供多维度的增强信息,解决了上述问题,能够很大程度上避免医生误操作和增加脊柱手术的可靠性。The navigation method of the present invention is particularly beneficial for spinal endoscopes (such as foraminal endoscopes). As described in the background art, a spinal endoscope is used inside a working channel sleeve, which further limits the doctor's endoscopic field of view, and the endoscope's imaging device and the channel for surgical tools are integrated into the same insertion tube, which makes hand-eye coordination difficult for the doctor. In endoscopic spinal surgery there has long been no convenient, reliable solution that frees the doctor from the effects of the spinal endoscope's special mode of use (the endoscope operates inside a working channel sleeve) and of the limited viewing angle of its rigid optical module, reduces "hand-eye coordination difficulty," and avoids the disorientation caused by rotating the endoscope. By providing multi-dimensional enhancement information for the endoscopic image, the method of the present invention solves the above problems, and can largely prevent operating errors by the doctor and increase the reliability of spinal surgery.

根据一种示例,其中成像方位获取步骤借助于从导航系统的追踪装置获取相对于内镜具有固定的位置关系的示踪器的位置来进行。该方法借助于导航系统方便地实现了对内镜的成像装置的方位追踪。According to an example, the imaging position acquisition step is performed by acquiring the position of a tracer having a fixed positional relationship relative to the endoscope from a tracking device of a navigation system. This method conveniently implements position tracking of the imaging device of the endoscope by means of the navigation system.
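
The pose of the imaging device then follows from composing the tracked tracer pose with the fixed tracer-to-lens transform. A sketch under assumed values (the 150 mm offset along the shaft is a made-up calibration result, not a value from the patent):

```python
import numpy as np

def pose_matrix(R, t):
    """4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Fixed tracer-to-lens offset, measured once during instrument calibration.
T_tracker_cam = pose_matrix(np.eye(3), np.array([0.0, 0.0, 150.0]))

def camera_pose_in_nav(T_nav_tracker):
    """Tracked tracer pose composed with the fixed offset gives the pose of
    the imaging device in navigation coordinates."""
    return T_nav_tracker @ T_tracker_cam

# Tracker reported at (10, 20, 30) with no rotation: the lens sits 150 mm
# further along the shaft axis.
T_nav_cam = camera_pose_in_nav(pose_matrix(np.eye(3),
                                           np.array([10.0, 20.0, 30.0])))
```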

根据本发明的另一方面,还提供一种计算机可读存储介质,其上存储有计算机程序,该程序被处理器运行时执行上述各示例中的方法中的步骤。According to another aspect of the present invention, a computer-readable storage medium is provided, on which a computer program is stored. When the program is executed by a processor, the steps of the methods in the above examples are executed.

根据本发明的再一方面,还提供一种控制装置,该控制装置包括存储器、处理器及存储在存储器上并能在处理器上运行的程序,其中在所述处理器运行所述程序时执行上述各示例中的方法。According to another aspect of the present invention, a control device is provided. The control device includes a memory, a processor, and a program stored in the memory and capable of running on the processor, wherein the methods in the above examples are executed when the processor runs the program.

根据本发明的再一方面,还提供了一种计算机程序产品,其包括计算机程序,该计算机程序被处理器执行时实现上述各示例中的方法的步骤。According to yet another aspect of the present invention, a computer program product is provided, which includes a computer program. When the computer program is executed by a processor, the steps of the methods in the above examples are implemented.

根据本发明的再一方面,还提供了一种用于内镜手术的导航的电子设备,其特征在于,电子设备包括显示装置和处理器,处理器具有数据接口,其中数据接口能够连接到内镜,使得处理器能够获取内镜的图像;并且,其中处理器被配置成能够获取患者生理结构的三维影像和在患者生理结构的三维影像或二维影像上的标记中的一者或多者;其中处理器被配置成在处理器运行时在至少一段时间内在显示装置上显示内镜增强图像的至少一部分的视图,其中该内镜增强图像为将内镜的当前图像与以下三种类型的增强信息中的至少两种相融合的图像:According to yet another aspect of the present invention, an electronic device for endoscopic surgery navigation is provided, characterized in that the electronic device includes a display device and a processor, wherein the processor has a data interface, wherein the data interface is connectable to an endoscope, so that the processor can acquire an image from the endoscope; and wherein the processor is configured to acquire one or more of a three-dimensional image of a patient's physiological structure and a marker on the three-dimensional image or a two-dimensional image of the patient's physiological structure; wherein the processor is configured to display a view of at least a portion of an endoscope-enhanced image on the display device for at least a period of time when the processor is running, wherein the endoscope-enhanced image is an image that fuses a current image of the endoscope with at least two of the following three types of enhancement information:

a)将内镜的多个图像拼接得到的拼接图像;a) a stitched image obtained by stitching multiple images of the endoscope;

b)患者生理结构的三维影像;b) 3D images of the patient’s physiological structures;

c)与标记相关的引导指示。c) Guidance instructions related to the marking.

根据本发明的再一方面,还提供一种用于内镜手术的导航系统,该导航系统包括适于追踪相对于内镜具有固定的位置关系的示踪器的追踪装置、显示装置和处理器,其中处理器适于连接到追踪装置和内镜,其中在处理器运行时执行上述各示例中的方法,并通过显示装置实现方法中的显示。 According to another aspect of the present invention, a navigation system for endoscopic surgery is also provided, which includes a tracking device suitable for tracking a tracer having a fixed positional relationship relative to an endoscope, a display device and a processor, wherein the processor is suitable for being connected to the tracking device and the endoscope, wherein the methods in the above examples are executed when the processor is running, and the display in the method is realized through the display device.

由于该导航系统融合了内镜周围的更大范围的拼接图像而为操作者提供了更大的视野;融合了骨性结构的三维影像而使操作者看到本来看不到的骨性结构、掌握全局方位;融合了方向标记或指示生理结构的标记或其引导指示而便于操作者快速定位到目标位置并掌握手术工具方位,因而具有更好的导航效果和导航精度。Because the navigation system integrates a larger range of spliced images around the endoscope, it provides the operator with a wider field of view; it integrates the three-dimensional image of the bone structure, allowing the operator to see the bone structure that was originally invisible and grasp the overall orientation; it integrates direction marks or marks indicating physiological structures or their guiding instructions to facilitate the operator to quickly locate the target position and grasp the orientation of surgical tools, thus having better navigation effect and navigation accuracy.

根据本发明的再一方面,还提供一种机器人系统,该机器人系统包括机械臂以及上述导航系统。换言之,本发明的构思也可在机器人系统中的导航系统下来实施。According to another aspect of the present invention, a robot system is provided, which includes a robot arm and the above-mentioned navigation system. In other words, the concept of the present invention can also be implemented in the navigation system of the robot system.

附图说明BRIEF DESCRIPTION OF THE DRAWINGS

下面参照附图经由示例性实施例对本发明进行详细描述。The present invention is described in detail below by way of exemplary embodiments with reference to the accompanying drawings.

图1示例性地示出本发明的用于引导内镜手术的方法的流程图。FIG1 exemplarily shows a flow chart of a method for guiding endoscopic surgery according to the present invention.

图2示出根据本发明的一种示例性的实施例的用于脊柱内镜手术的导航系统的原理示意图。FIG2 is a schematic diagram showing a principle of a navigation system for spinal endoscopic surgery according to an exemplary embodiment of the present invention.

图3示例性地示出了将拼接图像、患者生理结构的三维影像与内镜的当前图像融合所获得的内镜增强图像。FIG3 exemplarily shows an endoscopic enhanced image obtained by fusing the stitched image, the three-dimensional image of the patient's physiological structure, and the current image of the endoscope.

图4示例性地示出了将图3的内镜增强图像在显示装置的窗口上显示的视图。FIG. 4 exemplarily shows a view in which the endoscopy enhanced image of FIG. 3 is displayed on a window of a display device.

图5示例性地示出了根据另一种示例性的实施例的内镜增强图像在显示装置的窗口上显示的视图,其中该内镜增强图像融合了拼接图像、患者生理结构的三维影像、生理结构标记点与内镜的当前图像。Figure 5 exemplarily shows a view of an endoscopically enhanced image displayed on a window of a display device according to another exemplary embodiment, wherein the endoscopically enhanced image fuses a stitched image, a three-dimensional image of the patient's physiological structure, physiological structure markers and a current image of the endoscope.

图6示例性地示出了导航系统的具有内镜增强图像视图和全局增强图像的显示界面。FIG6 exemplarily shows a display interface of a navigation system having an endoscopic enhanced image view and a global enhanced image.

图7示例性地示出了融合了拼接图像、患者生理结构的三维影像以及方向标记的全局增强图像,其中指示出了成像装置的当前视野位置和内镜增强图像的视图的位置。FIG7 exemplarily shows a global enhanced image that is a fusion of a stitched image, a three-dimensional image of a patient's physiological structure, and a direction mark, in which the current field of view position of the imaging device and the position of the view of the endoscopic enhanced image are indicated.

图8示例性地示出了融合了拼接图像、患者生理结构的三维影像以及生理结构标记点的全局增强图像，其中指示出了成像装置的当前视野位置和内镜增强图像的视图的位置。FIG8 exemplarily shows a global enhanced image that fuses a stitched image, a three-dimensional image of the patient's physiological structure, and physiological structure marker points, in which the current field-of-view position of the imaging device and the position of the view of the endoscopic enhanced image are indicated.

图9示例性地示出了包括拼接步骤以获取拼接图像的增强信息获取步骤的流程图。FIG9 exemplarily shows a flow chart of an enhancement information acquisition step including a stitching step to acquire a stitched image.

图10示例性地示出了移动内镜以获得更大范围的多个内镜图像的原理示意图。FIG10 exemplarily shows a schematic diagram of the principle of moving the endoscope to obtain multiple endoscopic images of a larger range.

图11示例性地示出了转动内镜以获取拼接图像的原理示意图。FIG11 exemplarily shows a schematic diagram of the principle of rotating the endoscope to obtain a stitched image.

图12示出了操作者在术前三维影像或术中三维影像上进行方向标记的示例。FIG. 12 shows an example of an operator marking a direction on a preoperative 3D image or an intraoperative 3D image.

图13示出了操作者在将二维透视影像进行类三维拟合后的影像上进行方向标记的示例。FIG13 shows an example of an operator marking a direction on an image obtained by performing quasi-three-dimensional fitting on a two-dimensional perspective image.

图14示出了操作者在术前三维影像或术中三维影像上完成生理结构标记的示例。FIG14 shows an example of an operator completing physiological structure marking on a preoperative 3D image or an intraoperative 3D image.

图15示出了操作者在术中正位影像上完成生理结构标记的示例。FIG15 shows an example of an operator completing physiological structure marking on an intraoperative anteroposterior image.

图16示出了操作者在术中侧位影像上完成生理结构标记的示例。FIG16 shows an example of an operator completing physiological structure marking on an intraoperative lateral image.

应说明的是,附图仅是示意性的。它们仅示出为了阐明本发明而需要的那些部件或步骤,而其它部件或步骤可能被省略或仅仅简单提及。除了附图中所示出的部件或步骤外,本发明还可以包括其它部件或步骤。It should be noted that the drawings are schematic only. They illustrate only those components or steps necessary to illustrate the present invention, and other components or steps may be omitted or only briefly mentioned. In addition to the components or steps shown in the drawings, the present invention may also include other components or steps.

具体实施方式DETAILED DESCRIPTION

下面通过实施例,并结合附图,对本发明的技术方案作进一步具体的说明。下述参照附图对本发明实施方式的说明旨在对本发明的总体构思进行解释,而不应当理解为对本发明的限制。The following examples and accompanying drawings further illustrate the technical solution of the present invention. The following description of the embodiments of the present invention with reference to the accompanying drawings is intended to explain the overall concept of the present invention and should not be construed as limiting the present invention.

以下作为一个具体的实施例,描述本发明的在导航系统下利用内镜(例如脊柱内镜、神经内镜等)的图像引导内镜手术的方法的具体步骤以及所涉及的电子设备、导航系统以及机器人系统(或称定位与导航系统)等。在下面的详细描述中,以极为具体和详细的方式阐述了许多具体的细节、步骤以提供对实施例的全面理解。然而应当理解,一个或多个其他实施例在没有这些具体细节、步骤的情况下也可以被实施。The following describes, as a specific embodiment, the specific steps of the method of the present invention for image-guided endoscopic surgery using an endoscope (e.g., a spinal endoscope, a neuroendoscope, etc.) under a navigation system, as well as the electronic equipment, navigation system, and robotic system (or positioning and navigation system) involved. In the detailed description below, many specific details and steps are set forth in a highly specific and detailed manner to provide a comprehensive understanding of the embodiments. However, it should be understood that one or more other embodiments may be implemented without these specific details and steps.

需要说明的是，虽然本文中采用了内镜的“成像装置”和“图像”的表述方式，但本领域技术人员可以理解，“成像装置”为广义的概念，包括摄像、视频采集和图像采集等功能，“图像”为广义的概念，包括视频、动态连续的图像以及静态的图像。在本文中，内镜的“成像装置”可以为用于该内镜的视频成像的内镜模组，在本发明的图像获取步骤中获取的是来自于该内镜的影像的成帧的图像。It should be noted that, although the terms "imaging device" and "image" of the endoscope are used herein, those skilled in the art will understand that "imaging device" is a broad concept covering functions such as video recording, video capture, and image capture, and that "image" is a broad concept covering video, dynamic continuous images, and still images. Herein, the "imaging device" of the endoscope may be an endoscope module used for the video imaging of the endoscope, and what is acquired in the image acquisition step of the present invention is a framed image from the video of the endoscope.

在本发明的具体的实施例中,所使用的导航系统如图2所示包括追踪装置1、控制装置2和显示装置3。该追踪装置1可以为光学追踪装置(例如NDI导航仪),相应地可在内镜5上设置示踪器4。作为一种具体的示例,控制装置2可以为通用计算机、专用计算机、嵌入式处理机,也可以为例如单片机、芯片等其他任何适当的可编程数据处理设备。该控制装置2可以包括处理器以及用于存储程序的存储器,然而也可以只包括处理器,在这种情况下处理器可以附接存储有程序的存储器。换言之,控制装置至少包括处理器。控制装置2(或处理器)和显示装置3可以集成为一体,也可以分开设置。该控制装置或处理器具有数据接口,该数据接口可包括能连接到内镜的数据接口,使得所述控制装置/处理器能够实时获取内镜的图像。该控制装置或处理器还包括能连接到导航系统的追踪装置1的数据接口,从而能从追踪装置1实时获取被追踪目标例如内镜上的示踪器4的位置和方向。作为一种示例,内镜5和/或示踪器4也可视为本发明的导航系统的一部分。In a specific embodiment of the present invention, the navigation system used, as shown in FIG2 , includes a tracking device 1, a control device 2, and a display device 3. The tracking device 1 can be an optical tracking device (e.g., an NDI navigator), and a corresponding tracer 4 can be provided on the endoscope 5. As a specific example, the control device 2 can be a general-purpose computer, a dedicated computer, an embedded processor, or any other suitable programmable data processing device, such as a single-chip microcomputer or chip. The control device 2 can include a processor and a memory for storing programs, or it can include only a processor, in which case the processor can be attached to the memory storing the programs. In other words, the control device includes at least a processor. The control device 2 (or processor) and the display device 3 can be integrated or provided separately. The control device or processor has a data interface, which can include a data interface that can be connected to the endoscope, allowing the control device/processor to obtain images of the endoscope in real time. The control device or processor also includes a data interface that can be connected to the tracking device 1 of the navigation system, so that the position and orientation of the tracked target, such as the tracer 4 on the endoscope, can be obtained from the tracking device 1 in real time. As an example, the endoscope 5 and/or the tracer 4 can also be considered part of the navigation system of the present invention.

通常,内镜的末端进入到患者的组织结构、骨性结构内部以对其进行观察和/或操作,内镜的近端(接近操作者的一端、亦即与内镜的伸入筒的末端相反的一端)位于患者体外以供操作者操控。在导航系统中,通过在内镜的位于患者体外的近端处设置适于被追踪装置1追踪到的示踪器4,导航系统能够实时获知内镜5上的示踪器4在导航坐标系中的位置和方向。作为本发明的一种示例,内镜的成像装置即光学硬镜模组的远端镜片设置在该内镜的伸入筒的末端。通过标定该内镜的伸入筒的末端相对于内镜5上的示踪器4的相对位置关系,就能获得该成像装置相对于示踪器4的相对位置关系,从而能够通过导航系统中的追踪装置1获知示踪器4的方位进而获知该成像装置在导航坐标系下的位置和方向。该标定通常在进行内镜手术及其导航之前进行,也称为内镜的外参的标定。 Typically, the distal end of the endoscope enters the patient's tissue structure or bony structure to observe and/or operate on it, and the proximal end of the endoscope (the end closest to the operator, i.e., the end opposite to the distal end of the endoscope's insertion tube) is located outside the patient's body for manipulation by the operator. In the navigation system, by providing a tracer 4 suitable for being tracked by a tracking device 1 at the proximal end of the endoscope located outside the patient's body, the navigation system can obtain real-time information about the position and orientation of the tracer 4 on the endoscope 5 in the navigation coordinate system. As an example of the present invention, the imaging device of the endoscope, i.e., the distal lens of the optical hard lens module, is provided at the distal end of the insertion tube of the endoscope. By calibrating the relative positional relationship of the distal end of the insertion tube of the endoscope with respect to the tracer 4 on the endoscope 5, the relative positional relationship of the imaging device with respect to the tracer 4 can be obtained, thereby enabling the tracking device 1 in the navigation system to determine the orientation of the tracer 4 and, therefore, the position and orientation of the imaging device in the navigation coordinate system. This calibration is typically performed before performing endoscopic surgery and its navigation, and is also referred to as calibration of the external parameters of the endoscope.

具体地，在进行内镜手术及其导航之前，可以先标定该内镜的成像装置即硬镜模组相对于示踪器4的相对位置关系。优选地，在本实施例中，该标定通过标定工具（图中未示出）而不需要借助内镜获取图像来进行，这种方式减少了图像处理的工作量。本发明的导航系统可选地包括该标定工具。标定工具上可形成有多个标定孔，这些标定孔具有不同倾斜角度的底面和/或不同孔径，以分别标定具有不同的末端斜面倾角和/或筒径的内镜。具体地，标定工具还包括固定在其支架上的另外的示踪器，导航系统能够获知该标定工具上的示踪器在导航坐标系下的位置。并且，根据标定工具的设计尺寸可已知每个标定孔相对于标定工具上的示踪器的位置关系，进而这些标定孔在导航系统中的位置是已知的。将待标定的某一内镜的伸入筒的末端插入到与之相匹配的标定孔中，伸入筒的末端的倾斜面与标定孔的倾斜底面贴合，实现该内镜的定位。由于导航系统同时还能够获知内镜上的示踪器4在导航坐标系下的位置，因此能够获知内镜伸入筒的末端相对于内镜上的示踪器4的相对位置关系，完成该标定过程。Specifically, before performing endoscopic surgery and its navigation, the relative positional relationship of the endoscope's imaging device, i.e., the rigid-scope module, with respect to the tracer 4 can first be calibrated. Preferably, in this embodiment, the calibration is performed with a calibration tool (not shown) and does not require acquiring images through the endoscope, which reduces the image-processing workload. The navigation system of the present invention optionally includes this calibration tool. A plurality of calibration holes can be formed on the calibration tool; these holes have bottom surfaces with different inclination angles and/or different apertures, so as to calibrate endoscopes with different tip bevel angles and/or barrel diameters. Specifically, the calibration tool further includes another tracer fixed on its bracket, and the navigation system can determine the position of this tracer in the navigation coordinate system. Moreover, from the design dimensions of the calibration tool, the position of each calibration hole relative to the tracer on the tool is known, and therefore the positions of these holes in the navigation system are known. The endoscope to be calibrated is positioned by inserting the distal end of its insertion tube into the matching calibration hole, with the inclined surface of the tube end fitting against the inclined bottom surface of the hole. Because the navigation system simultaneously knows the position of the tracer 4 on the endoscope in the navigation coordinate system, it can determine the relative positional relationship of the distal end of the insertion tube with respect to the tracer 4, completing the calibration process.
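
The geometric core of this calibration step can be sketched as a chain of rigid transforms: the known hole position in the calibration-tool frame is mapped into the navigation frame through the tool tracer's tracked pose, and then into the endoscope-tracer frame through the inverse of the endoscope tracer's tracked pose. This is an illustrative Python sketch only, not part of the patent disclosure; the function names and the nested-list matrix convention are assumptions.

```python
def mat_vec(T, p):
    """Apply a 4x4 homogeneous transform (nested lists) to a 3-D point."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return tuple(sum(T[i][j] * v[j] for j in range(4)) for i in range(3))

def invert_rigid(T):
    """Invert a rigid transform: transpose the rotation, re-express the translation."""
    R = [[T[j][i] for j in range(3)] for i in range(3)]  # R^T
    t = [-sum(R[i][j] * T[j][3] for j in range(3)) for i in range(3)]
    return [R[0] + [t[0]], R[1] + [t[1]], R[2] + [t[2]], [0, 0, 0, 1]]

def calibrate_tip_offset(T_endo_tracer, T_tool_tracer, p_hole_tool):
    """Tip position expressed in the endoscope-tracer frame, from the two
    tracked tracer poses and the known hole position in the tool frame."""
    p_hole_nav = mat_vec(T_tool_tracer, p_hole_tool)
    return mat_vec(invert_rigid(T_endo_tracer), p_hole_nav)
```

With the tip seated in the hole, the returned offset is exactly the extrinsic relationship that is stored and reused during navigation.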

在术中通过追踪装置1实时获取内镜上的示踪器4的方位(位置和方向),即可间接地实时获取内镜末端的位置和方向,即获取成像装置的方位。根据本发明的方法中的成像方位获取步骤即通过这种方式进行。During surgery, the position (location and orientation) of the tracer 4 on the endoscope is acquired in real time by the tracking device 1, thereby indirectly acquiring the position and orientation of the endoscope tip, i.e., the orientation of the imaging device. The imaging orientation acquisition step in the method according to the present invention is performed in this manner.
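The imaging orientation acquisition step described above amounts to chaining the tracked tracer pose with the calibrated tracer-to-camera offset. A minimal sketch of that chaining, with illustrative names only:

```python
def mat_mul(A, B):
    """4x4 matrix product for chaining rigid transforms (nested lists)."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def camera_pose_in_nav(T_tracer_nav, T_cam_tracer):
    # Tracked tracer pose chained with the calibrated tracer-to-camera offset
    # gives the imaging device's pose in the navigation coordinate system.
    return mat_mul(T_tracer_nav, T_cam_tracer)
```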

在本发明的该具体的实施例中，在进行内镜手术及其导航之前，还可以标定内镜的成像参数（也称内参）。示例性地，内参的标定可以使用标定板，例如使用该内镜在不同的方位角度对标定板进行拍摄，保存图像以及记录标定板规格，进而标定内镜的内参，包括畸变矩阵和/或旋转和位移向量。在后文中将介绍的畸变校准过程中将使用该处所标定的内参。In this specific embodiment of the present invention, the imaging parameters of the endoscope (also called the intrinsic parameters) can also be calibrated before performing endoscopic surgery and its navigation. Exemplarily, the intrinsic calibration can use a calibration board: the endoscope photographs the board at different azimuth angles, the images are saved, and the board specifications are recorded; the intrinsic parameters of the endoscope, including the distortion matrix and/or the rotation and translation vectors, are then calibrated. The intrinsic parameters calibrated here will be used in the distortion correction process described later.
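
A hedged sketch of the idea behind intrinsic calibration: if distortion is ignored and the board points are already expressed in the camera frame, the pinhole parameters fx, fy, cx, cy follow from two independent line fits. Real calibration with a full distortion model is considerably more involved; this simplified least-squares fit, and all names in it, are for illustration only.

```python
def fit_axis(ratios, pixels):
    """Least-squares line fit pixel = f * ratio + c for one image axis."""
    n = len(ratios)
    sx = sum(ratios); sy = sum(pixels)
    sxx = sum(r * r for r in ratios)
    sxy = sum(r * p for r, p in zip(ratios, pixels))
    f = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c = (sy - f * sx) / n
    return f, c

def fit_pinhole_intrinsics(points_cam, pixels):
    """Recover fx, fy, cx, cy from 3-D board points in the camera frame and
    their observed pixel projections, ignoring lens distortion."""
    xr = [X / Z for X, _, Z in points_cam]
    yr = [Y / Z for _, Y, Z in points_cam]
    fx, cx = fit_axis(xr, [u for u, _ in pixels])
    fy, cy = fit_axis(yr, [v for _, v in pixels])
    return fx, fy, cx, cy
```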

下文将结合附图1,详细描述本发明的用于引导内镜手术的方法的具体实施例的步骤。在手术开始之前,可如上文所示,先标定该内镜的内参和外参。并在手术开始以及导航开始之前,进行导航系统的配准,即确定导航坐标系。本发明的方法可以用在使用二维影像进行导航的情景中,也可以用在使用三维影像进行导航的情景中。关于导航系统的配准方法是已知方法,在此不做赘述。The following, in conjunction with FIG1 , describes in detail the steps of a specific embodiment of the method for guiding endoscopic surgery of the present invention. Before the surgery begins, the internal and external parameters of the endoscope can be calibrated as described above. Furthermore, before the surgery and navigation begin, the navigation system is aligned, i.e., the navigation coordinate system is determined. The method of the present invention can be used in scenarios where two-dimensional images are used for navigation, as well as in scenarios where three-dimensional images are used for navigation. The alignment method for the navigation system is well-known and will not be described in detail here.

在内镜手术过程中，控制装置自动地获取内镜的实时影像。并且，根据操作者的输入来确定对内镜的实时图像进行增强的增强信息的种类。操作者的输入可由操作者根据自身的需要而选取，例如可通过键盘、鼠标、手柄、脚踏开关等各种输入构件来输入该指令。在本发明中，提供给操作者如下的方案，即能够在内镜图像上融合如下三种类型的增强信息中的至少两种，即由内镜末端周围的多个图像拼接得到的拼接图像、患者生理结构的三维影像以及例如由操作者在手术规划过程中在患者生理结构的三维影像或者二维影像上做出的方向标记或指示生理结构的标记。During endoscopic surgery, the control device automatically acquires the real-time video of the endoscope, and the type of enhancement information used to enhance the real-time endoscopic image is determined according to the operator's input. The operator can make this selection according to his or her own needs, for example via various input members such as a keyboard, mouse, handle, or foot switch. The present invention provides the operator with a scheme in which at least two of the following three types of enhancement information can be fused onto the endoscopic image: a stitched image obtained by stitching together multiple images around the endoscope tip, a three-dimensional image of the patient's physiological structure, and marks, such as direction marks or marks indicating physiological structures, made by the operator on the three-dimensional or two-dimensional image of the patient's physiological structure during surgical planning.

在上述三种类型的增强信息中，对于该拼接图像（在附图3、4中以附图标记6来指示），其也可称为“完整三维地图”或“全景图像”，然而应当说明的是，该处的“完整”或是“全景”仅表示相对于内镜的有限的镜下视野而言更大范围的图像，不代表必须是360度的全景图像。这例如由医生根据其手术过程中的观察需求，灵活地通过将内镜在需要观察的更大范围内平移（例如图10所示）和/或旋转（例如图11所示）来获取。在图10和图11中还示例性地示出了内镜5在三个位置的锥形视场51。该锥形视场51表示内镜的成像装置能看到的视野范围。Among the three types of enhancement information above, the stitched image (indicated by reference numeral 6 in FIG3 and FIG4) may also be called a "complete three-dimensional map" or a "panoramic image". It should be noted, however, that "complete" or "panoramic" here only means an image covering a larger range than the endoscope's limited endoscopic field of view; it does not have to be a 360-degree panorama. The physician can acquire it flexibly, according to the observation needs during surgery, by translating (e.g., FIG10) and/or rotating (e.g., FIG11) the endoscope within the larger range to be observed. FIG10 and FIG11 also exemplarily show the conical field of view 51 of the endoscope 5 at three positions; the conical field of view 51 represents the range that the endoscope's imaging device can see.

该内镜末端周围的拼接图像使得医生能够观察到更大的视野(也可以为全局视野),能够更好地显示内镜末端周围的软组织例如神经、硬膜、血管、椎间盘等,因此能够引导医生快速判断内镜朝向和关键生理结构相对于此刻内镜当前图像的空间位置关系,消除内镜的视野受限的缺陷。在下文中还将对拼接图像的获取以及融合做进一步的描述。This stitched image around the endoscope tip allows the physician to observe a wider field of view (or global field of view), better displaying the soft tissue surrounding the endoscope tip, such as nerves, dura mater, blood vessels, and intervertebral discs. This allows the physician to quickly determine the endoscope's orientation and the spatial position of key physiological structures relative to the current endoscope image, eliminating the limitation of the endoscope's field of view. The acquisition and fusion of stitched images will be further described below.

患者生理结构的三维影像显示来自于CT或者CBCT影像的三维骨性结构，因此在将其融合到内镜图像上时，医生在内镜增强图像上能够观察到在内镜视野下被软组织所覆盖而无法观察到的三维骨性结构（图4中的附图标记7指向的虚线部分代表该三维骨性结构的三维影像），从而能够获知内镜末端及手术工具末端在全局中例如在整个脊柱结构中的方位信息。The three-dimensional image of the patient's physiological structure shows the three-dimensional bony structure derived from the CT or CBCT image. When it is fused onto the endoscopic image, the physician can therefore observe, on the endoscopic enhanced image, the three-dimensional bony structure that is covered by soft tissue and invisible in the endoscopic field of view (the dotted portion indicated by reference numeral 7 in FIG4 represents the three-dimensional image of this bony structure), and can thus obtain global orientation information of the endoscope tip and the surgical tool tip, for example within the entire spinal structure.

对于上述在患者生理结构的三维影像或二维影像上的标记，其可以由操作者在手术规划中标记出，例如包括在手术规划中标记出的方向、生理结构标志点、标记出的靶点（例如突出的或游离的椎间盘、骨赘或骨化的黄韧带）的影像模型等方向和/或目标信息。在将与该标记相关的引导指示（其可包括该标记本身，例如方向标记、生理结构标记点、生理结构靶点的影像模型，以及指向该标记的指示符例如箭头，这些统称为与该标记相关的引导指示）融合到内镜图像上时，在显示出的内镜图像上具有方位引导指示，例如方向标记、生理结构标记、靶点影像模型等。因此医生可以快速地确定此刻内镜下生理结构的方位，从而引导镜下工具的操作。在下文中还将对该标记的做出以及其方位的获取以及融合做进一步的描述。The above-mentioned marks on the three-dimensional or two-dimensional image of the patient's physiological structure can be made by the operator during surgical planning, and include, for example, directions marked during planning, physiological structure landmark points, and image models of marked targets (such as protruding or free intervertebral discs, osteophytes, or an ossified yellow ligament), i.e., direction and/or target information. When the guidance indications related to a mark (which may include the mark itself, for example a direction mark, a physiological structure marker point, or an image model of a physiological target, as well as an indicator pointing at the mark such as an arrow, all collectively referred to as the guidance indications related to the mark) are fused onto the endoscopic image, the displayed endoscopic image carries orientation guidance indications such as direction marks, physiological structure marks, and target image models. The physician can therefore quickly determine the orientation of the physiological structures currently under the endoscope and thereby guide the operation of the endoscopic tools. The making of these marks, the acquisition of their orientation, and their fusion will be further described below.
综上，在本发明的该方案中，通过将内镜当前图像（来自于该内镜的实时影像的成帧的图像）与上述三种增强信息中的至少两种融合，使得内镜图像得到骨性结构、软组织和/或规划标记方位指示信息的多角度的增强。由于融合内镜周围的更大范围的拼接图像而为操作者提供了更大的视野；由于融合骨性结构的三维影像而使操作者看到本来看不到的骨性结构、掌握全局方位；由于融合了方向标记或指示生理结构的标记或其引导指示而便于操作者快速定位到目标位置并确定方向。完美解决了内镜手术中遇到的“镜下视野受限”、“容易迷失方向”、“手眼协调困难”的问题以及“学习曲线陡峭”的问题。In summary, in this solution of the present invention, by fusing the current endoscopic image (a framed image from the endoscope's real-time video) with at least two of the three types of enhancement information above, the endoscopic image is enhanced from multiple angles with bony structure, soft tissue, and/or planned-mark orientation indication information. Fusing the wider-range stitched image around the endoscope gives the operator a larger field of view; fusing the three-dimensional image of the bony structure lets the operator see otherwise invisible bony structures and grasp the overall orientation; and fusing direction marks or marks indicating physiological structures, or their guidance indications, helps the operator quickly locate the target position and determine direction. This effectively solves the problems encountered in endoscopic surgery of a "limited endoscopic field of view", "easy disorientation", "difficult hand-eye coordination", and a "steep learning curve".

具体地，如图1所示，本发明的该具体实施例的方法包括内镜图像获取步骤，该处的内镜图像可以为控制装置实时获取的内镜影像中的某帧图像。同时，控制装置获取并记录与所述内镜的至少一个图像对应（例如与每个图像对应的）的内镜的成像装置的位置和方向，即图1中的成像方位获取步骤。在该成像方位获取步骤中，处理器从导航系统的追踪装置1获取设置在内镜5近端上的、位于患者体外的示踪器4的位置，结合上文所标定出的外参，间接地获取成像装置的位置和方向。该处的内镜图像获取步骤包括用于内镜增强图像融合中所融合的内镜当前图像的获取。Specifically, as shown in FIG1, the method of this specific embodiment of the present invention includes an endoscopic image acquisition step, in which the endoscopic image may be a frame of the endoscopic video acquired in real time by the control device. At the same time, the control device acquires and records the position and orientation of the endoscope's imaging device corresponding to at least one image of the endoscope (for example, corresponding to each image), i.e., the imaging orientation acquisition step in FIG1. In this imaging orientation acquisition step, the processor acquires, from the tracking device 1 of the navigation system, the position of the tracer 4, which is arranged at the proximal end of the endoscope 5 and located outside the patient's body, and, combined with the extrinsic parameters calibrated above, indirectly acquires the position and orientation of the imaging device. The endoscopic image acquisition step here includes acquiring the current endoscope image to be fused in the endoscopic enhanced image fusion.

如图1所示，本发明的方法还包括增强信息获取步骤，其中根据操作者的输入来确定获取哪种类型的增强信息。供操作者输入的输入构件例如可以包括在显示装置上所显示的选择框、对话框等。操作者例如可以选择拼接图像与患者生理结构的三维影像两种增强信息，也可以选择拼接图像以及标记两种增强信息，也可以选择患者生理结构的三维影像与标记两种增强信息，或拼接图像、患者生理结构的三维影像以及标记三种增强信息。本发明可以根据上述选择来获取相应的增强信息，并相应地获取这些增强信息的方位信息，以用于在融合步骤根据成像装置的方位以及所需融合的拼接图像、三维影像或标记的方位来进行内镜增强图像的融合。As shown in FIG1, the method of the present invention further includes an enhancement information acquisition step, in which the type of enhancement information to be acquired is determined according to the operator's input. The input members for the operator may include, for example, selection boxes or dialog boxes displayed on the display device. The operator may, for example, select two types of enhancement information, such as the stitched image and the three-dimensional image of the patient's physiological structure, the stitched image and the marks, or the three-dimensional image and the marks, or all three types: the stitched image, the three-dimensional image of the patient's physiological structure, and the marks. According to this selection, the present invention acquires the corresponding enhancement information and, accordingly, the orientation information of this enhancement information, so that in the fusion step the endoscopic enhanced image can be fused according to the orientation of the imaging device and the orientations of the stitched image, three-dimensional image, or marks to be fused.
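
Once these orientations are known, fusing a piece of enhancement information into the endoscope view reduces to projecting its navigation-frame position into the current image with the pinhole model. An illustrative sketch only; the navigation-to-camera transform convention and all names are assumptions, not the patent's:

```python
def project_marker(p_nav, T_cam_from_nav, fx, fy, cx, cy):
    """Project a navigation-frame 3-D point into endoscope pixel coordinates
    with the pinhole model; returns None when the point is behind the camera."""
    x, y, z = p_nav
    v = (x, y, z, 1.0)
    X, Y, Z = (sum(T_cam_from_nav[i][j] * v[j] for j in range(4))
               for i in range(3))
    if Z <= 0:
        return None  # behind the imaging device: nothing to overlay
    return (fx * X / Z + cx, fy * Y / Z + cy)
```

The returned pixel coordinates are where a guidance indication (e.g., an arrow toward the mark) would be drawn on the displayed image.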

具体地,当需要融合的增强信息包括拼接图像的情况下,所述增强信息获取步骤包括将多个内镜的图像拼接的步骤。图9示出了包括拼接步骤的增强信息获取步骤。在图9示出的图像获取步骤中,控制装置自动地获取内镜末端周围的多个图像。同时,控制装置自动地获取并记录与所述内镜的至少一个图像对应(例如与每个图像对应的)的内镜的成像装置的位置和方向,即图9中的成像方位获取步骤。在内镜的多个位置和方位重复进行上述步骤,从而在例如将内镜在需要观察的更大范围内平移(如图10所示)和/或旋转(如图11所示)之后,获得了成像装置在多个方位下的图像;与此同时或之后,可结合上文所标定的成像装置的内参,对内镜的多个图像进行畸变校准,以去除成像装置的畸变效应,即图9中的畸变校准步骤。之后,依据与至少一个图像对应的内镜的位置和方向(即方位),对各图像进行拼接,从而获得拼接图像。Specifically, when the enhanced information to be fused includes a stitched image, the enhanced information acquisition step includes stitching multiple endoscope images. Figure 9 illustrates the enhanced information acquisition step including the stitching step. In the image acquisition step shown in Figure 9 , the control device automatically acquires multiple images around the endoscope's distal end. Simultaneously, the control device automatically acquires and records the position and orientation of the endoscope's imaging device corresponding to at least one image (e.g., each image) of the endoscope, i.e., the imaging orientation acquisition step in Figure 9 . The above steps are repeated at multiple positions and orientations of the endoscope, thereby obtaining images of the imaging device at multiple orientations, for example, after translating (as shown in Figure 10 ) and/or rotating (as shown in Figure 11 ) the endoscope over a larger range of observation. Simultaneously or subsequently, distortion calibration can be performed on the multiple images of the endoscope, combining the calibrated imaging device internal parameters described above, to remove distortion effects of the imaging device, i.e., the distortion calibration step in Figure 9 . Subsequently, the images are stitched together based on the position and orientation (i.e., orientation) of the endoscope corresponding to the at least one image, thereby obtaining a stitched image.
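
The pose-based stitching described here can be sketched as pasting each already-undistorted tile into a shared canvas at an offset computed directly from the tracked camera positions, with no feature matching. The row/column position convention and the single millimeters-per-pixel scale below are simplifying assumptions for illustration:

```python
def stitch_by_pose(tiles, positions_mm, mm_per_px, canvas_rows, canvas_cols):
    """Paste each (already undistorted) tile into a shared canvas at the
    offset implied by its tracked camera position."""
    canvas = [[0] * canvas_cols for _ in range(canvas_rows)]
    r0 = min(p[0] for p in positions_mm)
    c0 = min(p[1] for p in positions_mm)
    for tile, (pr, pc) in zip(tiles, positions_mm):
        r = round((pr - r0) / mm_per_px)  # tracked pose -> canvas offset
        c = round((pc - c0) / mm_per_px)
        for i, row in enumerate(tile):
            for j, px in enumerate(row):
                canvas[r + i][c + j] = px
    return canvas
```

Because the offsets come from the navigation system rather than from image content, the per-frame cost is essentially the paste itself, which is what makes this approach attractive for intraoperative, real-time use.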

在该方案中,由于采用根据与至少一个图像对应的成像装置的方位来拼接各图像以得到拼接图像的方式,相对于现有的各种拼接方式(例如基于算法)而言,所需要处理的计算更少,很大程度地提高了拼接速度,这对于手术过程中的实时观察来讲尤其重要。并且该方案的拼接精度更高,对处理器的运算能力要求更低。应当说明的是,在成像方位获取步骤中获取成像装置的与所拼接的图像中的至少一个图像对应的在导航系统下的位置和/或方向。示例性地,可以对应于每个所获取的图像都相应获取成像装置在导航系统下的方位,但也可以仅获取成像装置的对应于部分图像的甚至仅一个图像的位置和/或方向。例如,有些情形下,仅需要在时间上间隔开地获取某些图像所对应的成像装置的位置和/或方向,或是仅需要获取开始获取待拼接图像时的成像装置的位置/和/或方向。In this approach, because the images are stitched together based on the orientation of the imaging device corresponding to at least one image to produce a stitched image, compared to existing stitching methods (e.g., algorithm-based), fewer computations are required, significantly improving stitching speed. This is particularly important for real-time observation during surgery. Furthermore, this approach offers higher stitching accuracy and requires less processor power. It should be noted that the imaging orientation acquisition step acquires the position and/or orientation of the imaging device under the navigation system corresponding to at least one of the stitched images. For example, the orientation of the imaging device under the navigation system can be acquired for each acquired image, but it is also possible to acquire the position and/or orientation of the imaging device for only a portion of the images, or even just one image. For example, in some cases, it is only necessary to acquire the position and/or orientation of the imaging device corresponding to certain images at intervals in time, or only to acquire the position and/or orientation of the imaging device at the start of acquisition of the images to be stitched.

如图9所示，该增强信息获取步骤还可包括在拼接步骤之后的处理步骤，在该处理步骤中根据在拼接步骤中获得的图像生成平面图像以减小视觉上的变形，之后将该平面图像作为拼接图像在所述内镜增强图像融合步骤中将其与内镜的当前图像进行融合（例如将其置于内镜当前图像的周围）。该方法消除或减少了因观察视角的不同而产生的拼接图像的畸变。As shown in FIG9, the enhancement information acquisition step may further include a processing step after the stitching step, in which a planar image is generated from the image obtained in the stitching step so as to reduce visual distortion; this planar image is then used as the stitched image to be fused with the current image of the endoscope in the endoscopic enhanced image fusion step (for example, it is placed around the current endoscope image). This method eliminates or reduces the distortion of the stitched image caused by differences in viewing angle.
由于在上述拼接步骤中,依据与至少一个图像对应的成像装置的方位来拼接各图像以得到拼接图像,因此,拼接所得的图像在导航系统下的方位信息是可获知的。因此在增强信息方位获取步骤中能够获取该拼接图像在导航系统下的方位,以用于在之后的内镜增强图像融合步骤中根据该方位将拼接图像与内镜当前图像以及其它增强信息进行融合。Because the stitching step involves stitching the images together based on the orientation of the imaging device corresponding to at least one image to produce a stitched image, the orientation of the stitched image as viewed by the navigation system is known. Therefore, the orientation of the stitched image as viewed by the navigation system can be obtained during the enhancement information orientation acquisition step, allowing for the subsequent endoscopic enhanced image fusion step to fuse the stitched image with the current endoscopic image and other enhancement information based on this orientation.

当需要融合的增强信息包括上文所述的标记的情况下,增强信息方位获取步骤包括标记方位获取步骤,在该步骤中借助于三维影像或二维影像在导航系统下的配准来获取标记在导航系统下的方位;其中在内镜增强图像融合步骤中,根据标记在导航系统下的方位,将与标记相关的引导指示与内镜的当前图像以及其它增强信息进行融合。When the enhanced information to be fused includes the markers mentioned above, the enhanced information orientation acquisition step includes a marker orientation acquisition step, in which the orientation of the marker under the navigation system is acquired with the aid of the registration of the three-dimensional image or the two-dimensional image under the navigation system; wherein in the endoscope enhanced image fusion step, the guidance instructions related to the marker are fused with the current image of the endoscope and other enhanced information according to the orientation of the marker under the navigation system.

具体地,在本文中,操作者在其上做标记的影像可以为术前三维影像,例如术前CT影像,也可以为术中获得的影像,例如术中CBCT影像或是术中二维透视影像。当导航系统使用术前CT影像来导航时,操作者在术前三维影像上做出标记。当导航系统使用术中CBCT影像或是术中二维透视影像来导航时,操作者在术中获取影像后在其上做出标记。Specifically, in this context, the image on which the operator makes markings can be a preoperative 3D image, such as a preoperative CT image, or an image obtained during surgery, such as an intraoperative CBCT image or an intraoperative 2D fluoroscopic image. When the navigation system uses a preoperative CT image for navigation, the operator makes markings on the preoperative 3D image. When the navigation system uses an intraoperative CBCT image or an intraoperative 2D fluoroscopic image for navigation, the operator makes markings on the image after it is acquired during surgery.

在本发明中，操作者能够选择做出各种标记，例如方向标记或指示生理结构的标记，从而为操作者提供了多种选择和可能。在患者生理结构为脊柱的示例中，指示生理结构的标记例如可以包括操作者在手术规划中摘离出的靶点组织或靶点骨性结构（例如突出的或游离的椎间盘、骨化的黄韧带或退化生成的骨赘）的影像，也可以为生理结构标记点。对于操作者来说，如果靶点组织或骨性结构能够在手术规划中使用的三维影像中区分出来（例如在CT影像中能区分出骨赘，或在MRI影像中区分出突出的或游离的椎间盘），则在手术规划中直接形成其影像模型。该影像模型作为所述标记用作增强信息融合到内镜增强影像中。而如果手术规划使用的三维影像中无法显示靶点组织或靶点骨性结构，例如使用的是CT影像，在其中不能显示椎间盘，则医生可以参考MRI影像在手术规划中标记出指示该靶点组织或靶点骨性结构的点（例如指示突出的或游离的椎间盘、骨赘和骨化的黄韧带中的一者或多者的标记点），即生理结构标记点。In the present invention, the operator can choose to make various marks, such as direction marks or marks indicating physiological structures, which provides the operator with multiple options and possibilities. In the example where the patient's physiological structure is the spine, a mark indicating a physiological structure may include, for example, an image of the target tissue or target bony structure segmented out by the operator during surgical planning (such as a protruding or free intervertebral disc, an ossified yellow ligament, or a degenerative osteophyte), or it may be a physiological structure marker point. If the target tissue or bony structure can be distinguished in the three-dimensional image used for surgical planning (for example, osteophytes in a CT image, or a protruding or free intervertebral disc in an MRI image), its image model is formed directly during planning; this image model, as the mark, is fused into the endoscopic enhanced image as enhancement information. If, however, the target tissue or target bony structure cannot be displayed in the three-dimensional image used for planning, for example a CT image in which the intervertebral disc is not visible, the physician can refer to an MRI image and mark, during planning, points indicating that target tissue or bony structure (for example, marker points indicating one or more of a protruding or free intervertebral disc, osteophytes, and an ossified yellow ligament), i.e., physiological structure marker points.

生理结构标记点也可以包括在内镜手术过程中不发生位移的生理结构标记点。如图14所示,操作者也可以在三维影像上标记出(例如使用术前规划软件)某些椎间孔内或周边的在脊柱内镜手术过程中全程不发生任何位移的生理结构标记点,包括但不限于关节突腹侧、前节椎体的椎弓根、后节椎体的椎弓根、椎间盘等生理结构点。如图15、16所示,在使用术中二维透视影像时,医生分别在术中正位影像(图15)和侧位影像(图16)上,使用术中规划软件选择生理结构标记点,包括但不限于关节突腹侧、前节椎体的椎弓根、后节椎体的椎弓根、椎间盘等生理结构点。Physiological structure markers may also include physiological structure markers that do not shift during endoscopic surgery. As shown in FIG14 , the operator may also mark (for example, using preoperative planning software) certain physiological structure markers within or around the intervertebral foramen that do not shift during the entire spinal endoscopic surgery on the three-dimensional image, including but not limited to physiological structure points such as the ventral side of the articular process, the pedicles of the anterior vertebral body, the pedicles of the posterior vertebral body, and the intervertebral disc. As shown in FIG15 and FIG16 , when using intraoperative two-dimensional fluoroscopic images, the doctor uses the intraoperative planning software to select physiological structure markers on the intraoperative anteroposterior image ( FIG15 ) and lateral image ( FIG16 ), including but not limited to physiological structure points such as the ventral side of the articular process, the pedicles of the anterior vertebral body, the pedicles of the posterior vertebral body, and the intervertebral disc.

As shown in FIG. 12 and FIG. 13, the operator can make direction marks on a preoperative or intraoperative three-dimensional image (FIG. 12), for example arrow marks pointing toward the dorsal, ventral, cephalic, and caudal sides of the patient. The operator can also mark directions on intraoperative two-dimensional images. Preferably, to make marking easier, a quasi-three-dimensional fitting can first be performed on the intraoperative two-dimensional images (for example, on the intraoperative anteroposterior and lateral images), and the operator then marks the directions on the fitted image (as shown in FIG. 13). Compared with marking directly on intraoperative two-dimensional images such as the anteroposterior and lateral images, this approach avoids the common error of reversing the cephalic and caudal directions when marking on a two-dimensional image.

Those skilled in the art will appreciate that, although some specific examples of direction marks and of marks indicating physiological structures are described herein, the choice of such marks can be made freely according to the operator's preferences and needs and is not limited to these examples. Furthermore, although four direction marks or physiological structure marker points are shown in the embodiments of FIGS. 12-16, their number may be set as needed to one, two, three, or more than three.

For the enhancement mode in which marks are made on a preoperative three-dimensional image, the processor first obtains the preoperative three-dimensional image together with the marks that the operator has already made on it as described above (for example, direction marks or marks indicating physiological structures). The preoperative three-dimensional image is then registered to the navigation system, and the coordinates of each mark under the navigation system are determined from the registration relationship established in that step, thereby completing the mark orientation acquisition step.
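The mark orientation acquisition step above can be sketched minimally as follows, assuming the registration step yields a rigid 4x4 homogeneous transform from image coordinates to navigation-system coordinates; the function and variable names here are illustrative only and are not part of the disclosure.

```python
import numpy as np

def to_navigation_frame(T_image_to_nav, points_image):
    """Map marker points from image coordinates into the navigation
    frame using the rigid transform found during registration.

    T_image_to_nav : (4, 4) homogeneous transform (hypothetical output
                     of the registration step)
    points_image   : (N, 3) marker coordinates in the image frame
    """
    pts = np.asarray(points_image, dtype=float)
    homo = np.hstack([pts, np.ones((pts.shape[0], 1))])  # (N, 4)
    return (T_image_to_nav @ homo.T).T[:, :3]

# Example: a registration that is a pure translation of +10 mm along x.
T = np.eye(4)
T[0, 3] = 10.0
print(to_navigation_frame(T, [[0.0, 0.0, 0.0]]))  # [[10.  0.  0.]]
```

In practice the transform would come from a point-based or image-based registration algorithm; the same matrix multiplication then carries every operator mark into navigation coordinates.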

For the enhancement mode in which marks are made on an intraoperative three-dimensional or two-dimensional image, the registration of that image to the navigation system is usually completed at the same time the image is acquired. In this case, the operator therefore makes the marks on the already-registered intraoperative three-dimensional or two-dimensional image. Since the coordinates of the registered image under the navigation system are known, the coordinates, i.e., the positions, of the marks made in the image under the navigation system can be obtained accordingly, thereby completing the mark orientation acquisition step.

As described above, when intraoperative two-dimensional images are used, a quasi-three-dimensional fitting can be performed on two-dimensional images of at least two body positions to obtain a quasi-three-dimensional image that is easier for the operator to mark; the marks subsequently acquired by the processor are then the marks the operator made on the fitted image. Those skilled in the art will appreciate that the images used for the quasi-three-dimensional fitting can be the intraoperative anteroposterior and lateral images, or images of other body positions, selected according to the operator's needs or the actual condition of the part of the patient to be operated on.
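The idea behind the quasi-three-dimensional fitting can be illustrated with a deliberately simplified sketch, assuming idealized orthogonal anteroposterior and lateral projections that share the cranio-caudal axis; the actual fitting used by the planning software is not disclosed here, so this is an assumption-laden illustration only.

```python
def fit_quasi_3d(ap_point, lateral_point, tol=1.0):
    """Combine a point picked on the anteroposterior image with the same
    anatomical point picked on the lateral image into one quasi-3D
    coordinate.

    Idealized orthogonal projections are assumed: the AP view supplies
    (x, z) and the lateral view supplies (y, z), where z is the shared
    cranio-caudal coordinate. The two z picks are averaged, and a
    tolerance check flags inconsistent selections.
    """
    x, z_ap = ap_point
    y, z_lat = lateral_point
    if abs(z_ap - z_lat) > tol:
        raise ValueError("AP and lateral picks disagree along the cranio-caudal axis")
    return (x, y, (z_ap + z_lat) / 2.0)

# The same landmark picked on both fluoroscopic views, in millimetres.
print(fit_quasi_3d((12.0, 40.0), (-5.0, 40.4)))
```

A real C-arm setup would additionally account for projective magnification and non-orthogonal view angles.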

The operator can input or select the various forms of marks described above in various ways. Input can be made through the interface of the display device, for example through its touch interface, using the preoperative or intraoperative planning software to select the cephalic, caudal, ventral, and dorsal direction marks or the individual physiological structure marks, as shown in FIGS. 12 and 13. Input can also be made using the keyboard and/or mouse of the electronic device, as shown by way of example in FIG. 14.

It should be noted that, although the description herein forms the marks by having the operator make them on an image of the patient's physiological structure, it will also be understood that in other embodiments the marks need not be made by the operator. For example, the marks may be formed automatically when the image is formed, for instance by pre-placing markers on a part of the patient's body or on the operating table so that the marks are generated as the image is acquired.

When the fused enhancement information includes a three-dimensional image of the patient's physiological structure, the enhancement information acquisition step acquires that three-dimensional image. Moreover, since the three-dimensional image used for navigation has a determined orientation under the navigation system through registration, its orientation under the navigation system can be obtained conveniently.

After the enhancement information to be fused (stitched image, three-dimensional image, or marks) and the orientation of each piece of enhancement information under the navigation system have been obtained as described above, the endoscopic enhanced image can be fused in combination with the acquired orientation information of the endoscope's imaging device. Because each piece of enhancement information to be fused has a determined orientation relationship with the imaging device under the navigation system, and therefore also with the image captured by the imaging device, the enhancement information and the current endoscopic image can be fused according to these orientation relationships.
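The fusion according to these orientation relationships can be sketched as projecting a point known in navigation coordinates into the pixel frame of the current endoscopic image. This assumes a simple pinhole camera model with hypothetical intrinsics and poses; the disclosure does not specify the camera model, so every matrix below is illustrative.

```python
import numpy as np

def project_to_endoscope(T_nav_to_cam, K, point_nav):
    """Project a point given in navigation-system coordinates into the
    pixel coordinates of the endoscope's current image.

    T_nav_to_cam : (4, 4) transform mapping navigation frame -> camera
                   frame (assumed available from the tracked pose)
    K            : (3, 3) intrinsic matrix of the imaging device
    point_nav    : (3,) point in navigation coordinates
    """
    p = T_nav_to_cam @ np.append(point_nav, 1.0)
    x, y, z = p[:3]
    if z <= 0:
        return None  # behind the imaging plane: not visible in the view
    uv = K @ np.array([x / z, y / z, 1.0])
    return uv[:2]

# Illustrative intrinsics; with the camera frame coinciding with the
# navigation frame, a point 100 mm straight ahead lands at the
# principal point.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)
print(project_to_endoscope(T, K, np.array([0.0, 0.0, 100.0])))  # [320. 240.]
```

Applying this projection to every element of the enhancement information yields the overlay positions at which the stitched image, three-dimensional image, or marks are drawn around and over the current endoscopic image.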

It should be understood that, although the above steps are listed and described in sequence in the flowchart of FIG. 1, in the claims, and in the description herein, this does not imply any particular order among them. For example, the enhancement information acquisition step and the enhancement information orientation acquisition step may be performed before, after, or simultaneously with the endoscopic image acquisition step and/or the imaging orientation acquisition step. The global enhanced image fusion step (described below) may likewise be performed before, after, or simultaneously with the endoscopic enhanced image fusion step.

FIG. 3 shows an example of the resulting endoscopic enhanced image, in which the stitched image 6, the three-dimensional image 7 of the patient's physiological structure, and the current image 10 of the endoscope are fused. FIG. 4 shows, by way of example, a view (also called the endoscopic enhanced view) in which the endoscopic enhanced image of FIG. 3 is displayed in a window of the display device, with the current (i.e., real-time) endoscopic image located in the central area of the window. The endoscopic image displayed in this view is relatively large, with the surrounding enhancement information shown around it. By way of example, the extent of the endoscopic enhanced image displayed in this view (corresponding to the proportion of the current endoscopic image within the endoscopic enhanced view) can be determined according to the operator's input. In other words, the operator can change the proportion of the current endoscopic image in the endoscopic enhanced view by interacting with the system, for example by enlarging or shrinking the current endoscopic image. When the operator needs to examine some detail in the endoscopic field of view more closely, the real-time endoscopic image can be enlarged (correspondingly, the portion of the endoscopic enhanced image displayed in the view window becomes smaller), while the scale relationship between the content shown in the surrounding enhancement information and the real-time endoscopic image remains unchanged; enhancement information beyond the edge of the endoscopic enhanced view is then no longer displayed in it. The operator's input can be made, for example, via a zoom-icon input element on the display device. However, those skilled in the art will understand that the form of operator input used to determine the size of the real-time endoscopic image in the view is not limited to a zoom icon; other forms are possible, for example offering the operator a number of different size options or letting the operator enter a numerical value. The input element shown on the display interface can be, for example, a tab or a dialog box in which the operator enters a value.
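The zoom behavior described above, where magnifying the live image shrinks the visible portion of the enhanced image while preserving their scale relationship, can be sketched as a simple centered crop; the window geometry and factor-based interface here are assumptions for illustration.

```python
def enhanced_crop(window_w, window_h, zoom):
    """Return the region (x0, y0, w, h) of the endoscopic enhanced image
    that stays visible in a fixed-size view window when the operator
    magnifies the live endoscopic image by `zoom`.

    Because the scale relationship between the live image and the
    surrounding enhancement information is preserved, magnifying by
    `zoom` shrinks the visible crop by the same factor, centered on the
    live image; everything outside the crop falls off the edge of the
    endoscopic enhanced view.
    """
    if zoom < 1.0:
        raise ValueError("zoom factor must be >= 1")
    crop_w = window_w / zoom
    crop_h = window_h / zoom
    x0 = (window_w - crop_w) / 2.0
    y0 = (window_h - crop_h) / 2.0
    return (x0, y0, crop_w, crop_h)

# Doubling the live-image magnification halves the visible extent of
# the enhanced image in each direction.
print(enhanced_crop(800, 600, 2.0))  # (200.0, 150.0, 400.0, 300.0)
```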

FIG. 5 shows, by way of example, a view of an endoscopic enhanced image displayed in a window of the display device according to another exemplary embodiment, in which the current image 10 of the endoscope is fused with three types of enhancement information: the stitched image 6, the three-dimensional image 7 of the patient's physiological structure, and the physiological structure marker points shown in the figure. In this example, the ventral side of the articular process, the pedicle of the anterior vertebral body, and the intervertebral disc are shown, while the pedicle of the posterior vertebral body is not displayed because it lies outside the edge of the view. Guidance indications related to the physiological structure marker points can also be fused into the endoscopic enhanced image and displayed; for example, when a physiological structure marker point lies outside the view range of the endoscopic enhanced image, the guidance indication can be an arrow pointing toward that marker point (for example, an arrow pointing toward the pedicle of the posterior vertebral body that is not shown in FIG. 5).
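One plausible way to place such an arrow toward an off-view marker point is to anchor it where the line from the view center to the marker crosses the view border; this geometry, and the rectangular border assumption, are illustrative rather than part of the disclosure.

```python
import math

def guidance_arrow(view_w, view_h, marker_xy):
    """Compute where on the border of the endoscopic enhanced view to
    draw an arrow pointing toward a physiological structure marker that
    lies outside the view, plus the arrow's direction in degrees
    (0 degrees = toward the right edge).
    """
    cx, cy = view_w / 2.0, view_h / 2.0
    dx, dy = marker_xy[0] - cx, marker_xy[1] - cy
    # Scale the direction vector so that the arrow anchor lands exactly
    # on the rectangular border of the view.
    sx = (view_w / 2.0) / abs(dx) if dx else math.inf
    sy = (view_h / 2.0) / abs(dy) if dy else math.inf
    s = min(sx, sy)
    anchor = (cx + dx * s, cy + dy * s)
    return anchor, math.degrees(math.atan2(dy, dx))

# A marker far to the right of an 800x600 view anchors an arrow on the
# middle of the right edge, pointing right.
print(guidance_arrow(800, 600, (1600.0, 300.0)))  # ((800.0, 300.0), 0.0)
```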

Because the current endoscopic image, i.e., the real-time image, in the endoscopic enhanced view should be as large as possible for easier observation by the operator, the enhancement information that can be displayed there is very local and limited. To give the operator better global guidance, the present invention may also include a global enhanced image. Specifically, at least two of the stitched image, the three-dimensional image, and the guidance indications related to the marks are fused to obtain a global enhanced image, which is displayed and on which the current field-of-view position of the imaging device and the position of the view of the endoscopic enhanced image are indicated. As shown in FIG. 6, the endoscopic enhanced image is displayed on the left side of the display device and the global enhanced image on the lower right. Specifically, FIG. 7 shows, by way of example, a global enhanced image that fuses the stitched image 6, the three-dimensional image 7 of the patient's physiological structure, and direction marks pointing dorsally, ventrally, caudally, and cephalically. FIG. 8 shows, by way of example, a global enhanced image that fuses the stitched image 6, the three-dimensional image 7 of the patient's physiological structure, and physiological structure marker points such as the intervertebral disc. The global enhanced image is fused in the same way as the enhancement information described above: each piece of acquired enhancement information is fused according to its orientation information under the navigation system. The details of this process are not repeated here.

In the global enhanced images shown in FIGS. 7 and 8, the edge of the imaging device's current field-of-view position (i.e., the position of the current endoscopic image within the global enhanced image) is indicated by a dashed line 8, and the edge of the view corresponding to the endoscopic enhanced image is framed by a dashed line 9. Specifically, the method of the present invention further includes a step of indicating the current field-of-view position of the imaging device in the global enhanced image; in this step, the region of the global enhanced image corresponding to the current field of view is determined from the current position and orientation of the imaging device, and a marker 8 indicating the edge of this region is generated and displayed in the display step. Those skilled in the art will appreciate that forms other than dashed lines may be used to indicate the edge of the current field of view and the edge of the view of the endoscopic enhanced image; the two may also be represented by lines of different colors and/or line types.
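Generating the edge marker 8 can be sketched by sampling points on the boundary of the current field of view in global-image coordinates. A circular endoscopic field of view is assumed here for simplicity, and the center and radius are treated as already derived from the tracked pose; both assumptions go beyond what the disclosure specifies.

```python
import math

def fov_edge_in_global(center_xy, fov_radius, n=16):
    """Sample n points on the edge of the imaging device's current
    (assumed circular) field of view, expressed in global-enhanced-image
    coordinates, for rendering the dashed edge marker 8.
    """
    cx, cy = center_xy
    return [(cx + fov_radius * math.cos(2.0 * math.pi * k / n),
             cy + fov_radius * math.sin(2.0 * math.pi * k / n))
            for k in range(n)]

edge = fov_edge_in_global((100.0, 50.0), 20.0, n=8)
print(edge[0])  # (120.0, 50.0)
```

Drawing every other sampled segment produces a dashed outline; the same routine with a different center, radius, or color yields the endoscopic enhanced view's edge marker 9.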

In the above scheme, the endoscopic enhanced view, as a local part of the global enhanced image, is displayed simultaneously with the global enhanced image, and the information relating the two is shown in the global enhanced image, namely the positions within it of the edge of the real-time endoscopic image and of the edge of the endoscopic enhanced view. While viewing the larger real-time endoscopic image, the operator can thus know the position of the endoscopic field of view within the whole scene as well as the surrounding soft tissue, bony structures, target positions, and direction information, providing all-around guidance to the operator.

It should also be noted that the "step of displaying the endoscopic enhanced image" or "step of displaying the global enhanced image" described herein does not mean that the image is displayed at all times. By way of example, the endoscopic enhanced image or the global enhanced image may be displayed only during a certain period according to the operator's needs; for example, when the operator wishes to observe the enhanced image, the display of the corresponding enhanced image can be triggered, for instance, by a foot switch.

The present invention also provides an electronic device for navigation in endoscopic surgery, comprising the display device 3 described above and a processor; in the specific example shown in FIG. 2, the processor is included in the control device 2, i.e., the illustrated host. Those skilled in the art will appreciate that the processor and the display device may be integrated or separate. The control device 2 or processor has a data interface through which it is electrically connected to the tracking device 1 of the navigation system and to the endoscope 5, so as to obtain the position and orientation of the endoscope's imaging device and the endoscopic image captured at that position and orientation. The processor's data interface also enables the processor to obtain one or more of a three-dimensional image of the patient's physiological structure and marks made on the three-dimensional or two-dimensional image of the patient's physiological structure. When running, the processor executes a computer program (which may be stored in a memory included in the control device or in another memory) to carry out the method of the present invention and to display, at least for a period of time, a view of at least a portion of an endoscopic enhanced image on the display device 3, the endoscopic enhanced image being an image in which the current endoscopic image is fused with at least two of the following three types of enhancement information:

a) a stitched image obtained by stitching multiple endoscopic images;

b) a three-dimensional image of the patient's physiological structure;

c) guidance indications related to the marks.

As an example, a global enhanced image is also displayed on the display interface of the display device 3; the global enhanced image is obtained by fusing at least two of the stitched image 6, the three-dimensional image 7, and the guidance indications, and indicates the current field-of-view position 8 of the imaging device and/or the position 9 of the view of the endoscopic enhanced image.

As an example, the display interface of the display device 3 may also display one or more of the following images or views: a view on the fitted two-dimensional fluoroscopic plane, a sagittal section view, a coronal section view, an axial section view, the real-time endoscopic image, and so on, so that the operator can conveniently obtain more comprehensive navigation information. Those skilled in the art will appreciate that views of other sections or orientations, or any other suitable views, may also be displayed as needed.

The present invention also provides a computer-readable storage medium on which a computer program is stored, the computer program being capable of executing the steps of the above method of the present invention when run by a processor. The present invention further provides a control device that may include a memory, a processor, and a program stored in the memory and executable on the processor, wherein the steps of the above method of the present invention are performed when the processor runs the program. The present invention also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the above method of the present invention.

Those skilled in the art will appreciate that the steps of the methods or algorithms described herein can be implemented directly in hardware, in software modules executed by a processor, or in a combination of the two. A software module can reside in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.

In the above embodiments, the method can be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it can be implemented in whole or in part in the form of a computer program product, which comprises one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the present invention are produced in whole or in part. The computer here may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wired means (e.g., coaxial cable, optical fiber, digital subscriber line) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available media may be magnetic media (e.g., floppy disk, hard disk, magnetic tape), optical media (e.g., DVD), or semiconductor media (e.g., a solid-state disk).

Those skilled in the art will appreciate that the memory of the control device of the present invention may include random access memory (RAM) or non-volatile memory (NVM), for example at least one disk memory. Optionally, the memory may also be at least one storage device separate from the processor.

The processor of the control device may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.

Although certain embodiments of the present invention have been shown and described, those of ordinary skill in the art will understand that changes may be made to these embodiments without departing from the principles and spirit of the general inventive concept, the scope of the present invention being defined by the claims and their equivalents.

Claims (29)

1. A method for guiding endoscopic surgery, characterized in that the method comprises the following steps:
an endoscopic image acquisition step: acquiring images of an endoscope;
an enhancement information acquisition step: acquiring at least two of the following three types of enhancement information:
a) a stitched image obtained by stitching multiple endoscopic images;
b) a three-dimensional image of the patient's physiological structure;
c) marks on a three-dimensional or two-dimensional image of the patient's physiological structure;
an endoscopic enhanced image fusion step: fusing the acquired current endoscopic image with the at least two types of enhancement information acquired in the enhancement information acquisition step to obtain an endoscopic enhanced image; and
a display step: displaying a view including at least a portion of the endoscopic enhanced image.
2. The method according to claim 1, characterized in that the endoscopic enhanced image fusion step is performed with the aid of a navigation system, and in that, before the endoscopic enhanced image fusion step, the method further comprises:
an imaging orientation acquisition step: acquiring the orientation of the imaging device of the endoscope under the navigation system; and
an enhancement information orientation acquisition step: acquiring the orientation under the navigation system of the stitched image, the three-dimensional image, or the marks to be fused;
wherein, in the endoscopic enhanced image fusion step, the fusion is performed according to the orientation of the imaging device and the orientation of the stitched image, three-dimensional image, or marks to be fused.

3. The method according to claim 1 or 2, further comprising the following steps:
a global enhanced image fusion step: fusing at least two of the stitched image, the three-dimensional image, and the guidance indications related to the marks to obtain a global enhanced image, wherein the current field-of-view position of the imaging device and/or the position of the view of the endoscopic enhanced image is indicated on the global enhanced image; and
a step of displaying the global enhanced image;
wherein, on the global enhanced image, the edge of the current field-of-view position of the imaging device and the edge of the view of the endoscopic enhanced image are represented by lines of different colors and/or line types.
4. The method according to claim 1 or 2, characterized in that the type of enhancement information acquired and fused is selected based on operator input.

5. The method according to claim 1 or 2, characterized in that the extent of the endoscopic enhanced image displayed in the view of the endoscopic enhanced image can be determined according to operator input.

6. The method according to claim 2, characterized in that, when the fused enhancement information includes the stitched image, the enhancement information acquisition step comprises:
a stitching step, in which multiple endoscopic images are stitched according to the orientation of the imaging device corresponding to at least one endoscopic image to obtain the stitched image.

7. The method according to claim 6, characterized in that, when the fused enhancement information includes the stitched image, the enhancement information acquisition step further comprises:
a distortion calibration step before the stitching step, in which the endoscopic images to be stitched are distortion-calibrated; and
a processing step after the stitching step, in which a planar image is generated from the image obtained in the stitching step, wherein in the endoscopic enhanced image fusion step the planar image is fused, as the stitched image, with the current endoscopic image.
8. The method according to claim 2, characterized in that, when the fused enhancement information includes the marks, the enhancement information orientation acquisition step comprises a mark orientation acquisition step, in which the orientation of the marks under the navigation system is obtained by means of the registration of the three-dimensional image or the two-dimensional image under the navigation system;
wherein, in the endoscopic enhanced image fusion step, the guidance indications related to the marks are fused with the current endoscopic image according to the orientation of the marks under the navigation system.

9. The method according to claim 8, characterized in that the method enables the operator to select one or more of direction marks and marks indicating physiological structures.

10. The method according to claim 9, characterized in that the patient's physiological structure is the spine, and the marks indicating physiological structures include image models of one or more of a herniated or free intervertebral disc, osteophytes, and an ossified ligamentum flavum.
11. The method according to claim 9, characterized in that the marks indicating physiological structures include physiological structure marker points; preferably, the patient's physiological structure is the spine, and the physiological structure marker points are points indicating one or more of a herniated or free intervertebral disc, osteophytes, and an ossified ligamentum flavum, or are points on physiological structures that undergo no displacement during endoscopic surgery, for example one or more of the ventral side of the articular process, the pedicle of the anterior vertebral body, the pedicle of the posterior vertebral body, and the intervertebral disc; preferably, there is a plurality of physiological structure marker points.

12. The method according to claim 9, characterized in that the direction marks include one or more of direction marks pointing toward the dorsal, ventral, cephalic, and caudal sides of the patient.

13. The method according to claim 2, characterized in that, when the fused enhancement information includes the three-dimensional image of the patient's physiological structure, the enhancement information orientation acquisition step comprises a step of registering the three-dimensional image of the patient's physiological structure to the navigation system to obtain the orientation of the three-dimensional image.

14. The method according to claim 13, characterized in that the three-dimensional image of the patient's physiological structure includes a preoperative three-dimensional image or an intraoperative three-dimensional image.
The method according to any one of claims 1-14, characterized in that the endoscope is a spinal endoscope. The method according to claim 2, characterized in that the imaging-position acquisition step is performed by acquiring, from a tracking device (1) of the navigation system, the position of a tracer (4) having a fixed positional relationship relative to the endoscope. A computer-readable storage medium on which a computer program is stored, characterized in that the steps of the method according to any one of claims 1-16 are executed when the computer program is run by a processor. A control apparatus comprising a memory, a processor, and a program stored in the memory and runnable on the processor, characterized in that the steps of the method according to any one of claims 1-16 are executed when the processor runs the program. A computer program product comprising a computer program, characterized in that the steps of the method according to any one of claims 1-16 are implemented when the computer program is executed by a processor. 
An electronic device for navigation of endoscopic surgery, characterized in that the electronic device comprises a display device (3) and a processor, the processor having a data interface, wherein the data interface is connectable to an endoscope (5) so that the processor can acquire images of the endoscope; wherein the processor is configured to acquire one or more of a three-dimensional image of the patient's physiological structure and markers on the three-dimensional image or a two-dimensional image of the patient's physiological structure; and wherein the processor is configured to display, while the processor is running and for at least a period of time, a view of at least a part of an endoscopic enhanced image on the display device (3), the endoscopic enhanced image being an image obtained by fusing the current image of the endoscope with at least two of the following three types of enhancement information: a) a stitched image obtained by stitching multiple images of the endoscope; b) a three-dimensional image of the patient's physiological structure; c) guidance indications associated with the markers. The electronic device according to claim 20, characterized in that the data interface is connectable to a tracking device (1) of a navigation system. 
The electronic device according to claim 20, characterized in that a global enhanced image is also displayed on the display device (3), the global enhanced image being an image obtained by fusing at least two of the stitched image, the three-dimensional image, and the guidance indications, wherein the edge of the current field-of-view position of the imaging device and the edge of the view of the endoscopic enhanced image are represented on the global enhanced image by lines of different colors and/or line types. The electronic device according to any one of claims 20-22, characterized in that one or more of the following are also displayed on the display interface of the display device: a view on a fitted two-dimensional fluoroscopic section, a sagittal section view, a coronal section view, an axial section view, and a real-time image of the endoscope. The electronic device according to any one of claims 20-22, characterized in that the electronic device further comprises an input member by which an operator inputs instructions to select the type of enhancement information to be fused. The electronic device according to any one of claims 20-22, characterized in that the electronic device further comprises an input member by which an operator inputs instructions to determine the range of the endoscopic enhanced image displayed in the view of the endoscopic enhanced image. 
The electronic device according to any one of claims 20-22, characterized in that the markers include one or more of direction markers and markers indicating a physiological structure; preferably, the patient's physiological structure is a spine, and the markers indicating the physiological structure include image models of one or more of a herniated or free intervertebral disc, osteophytes, and ossified ligamentum flavum, or the markers indicating the physiological structure include physiological structure marker points, the physiological structure marker points being marker points indicating one or more of a herniated or free intervertebral disc, osteophytes, and ossified ligamentum flavum, or marker points on physiological structures that do not displace during endoscopic surgery, for example one or more of the ventral side of the articular process, the pedicle of the anterior vertebral body, the pedicle of the posterior vertebral body, and the intervertebral disc; preferably, the direction markers include one or more of direction markers pointing toward the dorsal, ventral, cranial, and caudal sides of the patient. 
A navigation system for endoscopic surgery, characterized in that the navigation system comprises: a tracking device (1) adapted to track a tracer (4) having a fixed positional relationship relative to the endoscope (5); and a display device (3) and a processor, the processor being adapted to be connected to the tracking device (1) and the endoscope (5), wherein the method according to any one of claims 1-16 is executed when the processor is running, and the display in the method is realized by the display device (3). A navigation system for endoscopic surgery, characterized in that the navigation system comprises: a tracking device (1) adapted to track a tracer (4) having a fixed positional relationship relative to the endoscope (5); and the electronic device according to any one of claims 20-26, wherein the processor in the electronic device is adapted to be connected to the tracking device (1) and the endoscope (5). A robotic system, characterized in that it comprises a robotic arm and the navigation system according to any one of claims 27-28. 
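The fusion step recited in the claims — projecting a marker whose position is known in navigation-system coordinates into the tracked endoscope's current image and drawing a guidance indication there — can be sketched as follows. This is an illustrative outline only, not the claimed implementation: the intrinsic matrix `K`, the transform `T_cam_from_nav` (derived from tracking the tracer fixed to the endoscope), and the cross-hair style of guidance indication are all assumptions for the example.

```python
import numpy as np

def project_marker(K, T_cam_from_nav, p_nav):
    """Project a 3-D marker position given in navigation-system
    coordinates into endoscope image pixel coordinates.

    K              -- 3x3 camera intrinsic matrix of the endoscope
    T_cam_from_nav -- 4x4 rigid transform, navigation frame -> camera frame
                      (obtained via the tracked tracer fixed to the endoscope)
    p_nav          -- marker position, shape (3,), navigation coordinates
    """
    p_h = np.append(p_nav, 1.0)          # homogeneous coordinates
    p_cam = T_cam_from_nav @ p_h         # marker expressed in the camera frame
    if p_cam[2] <= 0:
        return None                      # marker lies behind the camera
    uvw = K @ p_cam[:3]
    return uvw[:2] / uvw[2]              # pixel coordinates (u, v)

def fuse_guidance(image, K, T_cam_from_nav, p_nav):
    """Return a fused frame with a cross-hair guidance indication drawn
    at the marker's projected location (no-op if off-screen)."""
    fused = image.copy()
    uv = project_marker(K, T_cam_from_nav, p_nav)
    if uv is None:
        return fused
    u, v = int(round(uv[0])), int(round(uv[1]))
    h, w = fused.shape[:2]
    if 0 <= v < h and 0 <= u < w:
        fused[v, max(0, u - 5):min(w, u + 6)] = 255   # horizontal bar
        fused[max(0, v - 5):min(h, v + 6), u] = 255   # vertical bar
    return fused
```

In the claimed system the same registration machinery would supply `T_cam_from_nav` continuously from the tracking device (1) and the tracer (4), so the overlay follows the endoscope's motion frame by frame.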
PCT/CN2024/126088 2024-03-06 2024-10-21 Method for guiding endoscopic surgery, computer-readable storage medium, control apparatus, computer program product, electronic device, navigation system, and robotic system Pending WO2025185175A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202410252657.0 2024-03-06
CN202410252657.0A CN117898834A (en) 2024-03-06 2024-03-06 Method for guiding endoscopic surgery, computer-readable storage medium, control device and computer program product, electronic device, navigation system and robotic system

Publications (2)

Publication Number Publication Date
WO2025185175A1 true WO2025185175A1 (en) 2025-09-12
WO2025185175A8 WO2025185175A8 (en) 2025-10-02

Family

ID=90690803

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2024/126088 Pending WO2025185175A1 (en) 2024-03-06 2024-10-21 Method for guiding endoscopic surgery, computer-readable storage medium, control apparatus, computer program product, electronic device, navigation system, and robotic system

Country Status (2)

Country Link
CN (1) CN117898834A (en)
WO (1) WO2025185175A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118021446A (en) * 2024-02-07 2024-05-14 常州市康辉医疗器械有限公司 Navigation method, electronic equipment, navigation system and robot system for optical hard lens surgery
CN117898834A (en) * 2024-03-06 2024-04-19 常州市康辉医疗器械有限公司 Method for guiding endoscopic surgery, computer-readable storage medium, control device and computer program product, electronic device, navigation system and robotic system
CN119184849B (en) * 2024-11-25 2025-06-27 温州医科大学附属第一医院 Artificial intelligence combined with airway tools for intelligent multi-mode positioning and early warning system

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050182295A1 (en) * 2003-12-12 2005-08-18 University Of Washington Catheterscope 3D guidance and interface system
US20070225553A1 (en) * 2003-10-21 2007-09-27 The Board Of Trustees Of The Leland Stanford Junio Systems and Methods for Intraoperative Targeting
CN107536643A (en) * 2017-08-18 2018-01-05 北京航空航天大学 A kind of augmented reality operation guiding system of Healing in Anterior Cruciate Ligament Reconstruction
CN111588464A (en) * 2019-02-20 2020-08-28 忞惪医疗机器人(苏州)有限公司 Operation navigation method and system
CN114711961A (en) * 2022-04-12 2022-07-08 山东大学 Virtual reality navigation method and system for spinal endoscopic surgery
CN114945937A (en) * 2019-12-12 2022-08-26 皇家飞利浦有限公司 Guided anatomical steering for endoscopic procedures
CN115375595A (en) * 2022-07-04 2022-11-22 武汉联影智融医疗科技有限公司 Image fusion method, device, system, computer equipment and storage medium
CN117462257A (en) * 2023-11-13 2024-01-30 常州市康辉医疗器械有限公司 Navigation method, electronic equipment and navigation system for spinal endoscopic surgery
CN117898834A (en) * 2024-03-06 2024-04-19 常州市康辉医疗器械有限公司 Method for guiding endoscopic surgery, computer-readable storage medium, control device and computer program product, electronic device, navigation system and robotic system
CN118021446A (en) * 2024-02-07 2024-05-14 常州市康辉医疗器械有限公司 Navigation method, electronic equipment, navigation system and robot system for optical hard lens surgery
CN118285913A (en) * 2024-04-01 2024-07-05 常州市康辉医疗器械有限公司 Method for guiding endoscopic surgery under navigation system, electronic equipment, navigation system and surgical robot system

Also Published As

Publication number Publication date
WO2025185175A8 (en) 2025-10-02
CN117898834A (en) 2024-04-19

Similar Documents

Publication Publication Date Title
US11800970B2 (en) Computerized tomography (CT) image correction using position and direction (P and D) tracking assisted optical visualization
US11819292B2 (en) Methods and systems for providing visuospatial information
US9289267B2 (en) Method and apparatus for minimally invasive surgery using endoscopes
US8320992B2 (en) Method and system for superimposing three dimensional medical information on a three dimensional image
WO2025185175A1 (en) Method for guiding endoscopic surgery, computer-readable storage medium, control apparatus, computer program product, electronic device, navigation system, and robotic system
CN111970986A (en) System and method for performing intraoperative guidance
US20070276234A1 (en) Systems and Methods for Intraoperative Targeting
US20070073136A1 (en) Bone milling with image guided surgery
US20050085718A1 (en) Systems and methods for intraoperative targetting
US20240315778A1 (en) Surgical assistance system and display method
WO2007115825A1 (en) Registration-free augmentation device and method
EP3273854A1 (en) Methods and systems for computer-aided surgery using intra-operative video acquired by a free moving camera
AU2018202682A1 (en) Endoscopic view of invasive procedures in narrow passages
JP6952740B2 (en) How to assist users, computer program products, data storage media, and imaging systems
CN117462257A (en) Navigation method, electronic equipment and navigation system for spinal endoscopic surgery
WO2025167189A1 (en) Navigation method for optical rigid endoscope surgery, electronic device, navigation system, and robot system
CN118285913A (en) Method for guiding endoscopic surgery under navigation system, electronic equipment, navigation system and surgical robot system
Jackson et al. Surgical tracking, registration, and navigation characterization for image-guided renal interventions
JP4510415B2 (en) Computer-aided display method for 3D objects
JP2024541293A (en) An interactive augmented reality system for laparoscopic and video-assisted surgery
CN117860379A (en) Endoscope guiding method under navigation system, electronic equipment and navigation system
JP2001204739A (en) Microscopic medical operation support system
US20250241631A1 (en) System and method for real-time surgical navigation
Ivanov et al. Surgical navigation systems based on augmented reality technologies
Giraldez et al. Multimodal augmented reality system for surgical microscopy

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24928105

Country of ref document: EP

Kind code of ref document: A1