
WO2025167189A1 - Navigation method for optical rigid endoscope surgery, electronic device, navigation system and robot system - Google Patents

Navigation method for optical rigid endoscope surgery, electronic device, navigation system and robot system

Info

Publication number
WO2025167189A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
navigation method
navigation
optical
optical hard
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/CN2024/126086
Other languages
English (en)
Chinese (zh)
Inventor
吕鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changzhou Kanghui Medical Innovation Co Ltd
Original Assignee
Changzhou Kanghui Medical Innovation Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changzhou Kanghui Medical Innovation Co Ltd filed Critical Changzhou Kanghui Medical Innovation Co Ltd
Publication of WO2025167189A1
Legal status: Pending

Links

Classifications

    • A61B 34/20 Surgical navigation systems; devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 1/00009 Operational features of endoscopes characterised by electronic signal processing of image signals during a use of the endoscope
    • A61B 1/04 Endoscopes combined with photographic or television appliances
    • A61B 1/313 Endoscopes for introducing through surgical openings, e.g. laparoscopes
    • A61B 1/317 Endoscopes for bones or joints, e.g. osteoscopes, arthroscopes
    • A61B 34/25 User interfaces for surgical systems
    • A61B 34/30 Surgical robots
    • A61B 34/70 Manipulators specially adapted for use in surgery
    • G06T 19/003 Navigation within 3D models or images
    • A61B 2034/2046 Tracking techniques
    • A61B 2034/2065 Tracking using image or pattern recognition

Definitions

  • the present invention relates to the field of medical equipment technology, and in particular to a surgical navigation system, and more specifically to a navigation method, a navigation system, and a robotic system for optical endoscopic surgery.
  • The doctor's field of view under the endoscope is thus further limited.
  • When the doctor, following common intuition, pulls the spinal endoscope back away from the observation target point, hoping to observe a larger range of physiological structures, the first thing he sees is the tube wall of the working channel sleeve 51.
  • Meanwhile, the soft tissue outside the working channel sleeve shrinks inward and invades the channel; what the doctor then sees is the soft tissue outside the working channel sleeve 51, which blocks the target structures he wants to observe, so the goal of observing a larger range of physiological structures is still not achieved.
  • The present invention provides a navigation method for optical rigid endoscope surgery, the method comprising the following steps: an image acquisition step: acquiring a plurality of images of the optical rigid endoscope;
  • an imaging orientation acquisition step: acquiring a position and/or orientation, under a navigation system, of an imaging device of the optical rigid endoscope corresponding to at least one of the plurality of images;
  • a stitching step: stitching the plurality of images according to the orientation of the imaging device corresponding to the at least one image to obtain a stitched image; and
  • a displaying step: displaying at least a portion of the stitched image.
  • the stitched images formed in this example provide the operator with a wider field of view (e.g., a global field of view) around the end of the optical rigid scope, successfully resolving the problem of the operator's limited field of view.
  • the navigation method of the present invention is particularly beneficial for endoscope surgery using optical rigid scopes, because, compared to soft scopes whose ends can be bent appropriately, optical rigid scopes are generally limited in their observation range due to the inflexibility of their insertion tubes. This solution addresses this problem, as well as the resulting loss of sense of direction and reduced surgical efficiency caused by the need to frequently rotate the optical rigid scope, thereby improving the operational accuracy and efficiency of the optical rigid scope and reducing its learning difficulty.
  • In this solution, since the orientation of the imaging device is obtained while the images are acquired, and the images are stitched based on the orientation of the imaging device corresponding to at least one image, less computation is required than with existing stitching methods (e.g., purely algorithm-based feature matching), greatly improving the stitching speed, which is particularly important for real-time observation during surgery.
  • Moreover, this solution has higher stitching accuracy and places lower demands on the processor's computing power. It should be noted that in the imaging orientation acquisition step, the position and/or orientation of the imaging device under the navigation system is acquired for at least one of the images to be stitched.
  • That is, the orientation of the imaging device under the navigation system may be acquired for each image, only for a portion of the images, or even for only one image.
  • For example, in some cases it is sufficient to obtain the position and/or orientation of the imaging device for certain images at intervals in time, or only at the beginning of the acquisition of the images to be stitched.
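  • As a concrete illustration of the steps above, the following minimal Python sketch pairs acquired frames with imaging-device poses, querying a pose only for some frames and reusing the last known pose in between, as permitted above. All names (collect_frames_and_poses, T_tracer_cam, every_n) are hypothetical placeholders of this sketch, not anything defined by this publication.

```python
import numpy as np

def collect_frames_and_poses(frames, tracer_poses, T_tracer_cam, every_n=5):
    """Pair acquired endoscope frames with imaging-device poses.

    frames: list of images; tracer_poses: parallel list of 4x4 tracked
    tracer poses (entries may be None where no pose was recorded);
    T_tracer_cam: calibrated 4x4 tracer-to-camera transform.
    A pose is looked up only for every `every_n`-th frame and reused in
    between, reflecting the sparse-acquisition option described above.
    """
    paired, last = [], None
    for i, frame in enumerate(frames):
        if i % every_n == 0 and tracer_poses[i] is not None:
            last = tracer_poses[i] @ T_tracer_cam   # imaging orientation step
        if last is not None:
            paired.append((frame, last))            # input to the stitching step
    return paired
```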
  • In some embodiments, the navigation method further includes an image fusion step between the stitching step and the display step, in which the stitched image is fused with the current image of the optical rigid endoscope according to the current position and/or orientation of the imaging device, and the fused image is displayed in the display step.
  • In some embodiments, the navigation method further includes a processing step between the stitching step and the image fusion step, in which a planar image is generated from the image obtained in the stitching step; this planar image is then fused, as the stitched image, with the current image of the optical rigid endoscope in the image fusion step.
  • the processing step in this example eliminates or reduces visual distortion of the stitched image caused by different viewing angles.
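  • A minimal sketch of the image fusion step, under the simplifying assumption that the processed stitched image is already a planar array and that the current frame is simply placed in its central area (as in the display described later); the function name and the simple paste-in-place blending are illustrative assumptions only.

```python
import numpy as np

def fuse_current_into_stitched(stitched: np.ndarray,
                               current: np.ndarray) -> np.ndarray:
    """Place the live endoscope frame in the central area of the (planar)
    stitched image, so the operator sees the real-time view surrounded by
    the wider stitched context."""
    fused = stitched.copy()
    sh, sw = stitched.shape[:2]
    ch, cw = current.shape[:2]
    y0, x0 = (sh - ch) // 2, (sw - cw) // 2
    fused[y0:y0 + ch, x0:x0 + cw] = current
    return fused
```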
  • the navigation method further includes: a marker orientation acquisition step, in which the position and/or orientation of the marker on the image of the patient's physiological structure under the navigation system is acquired; and a marker fusion step between the stitching step and the display step, wherein the marker and/or guidance instructions related to the marker are fused to the stitched image based on the position and/or orientation of the marker under the navigation system.
  • the navigation method further includes: a marker orientation acquisition step, in which the position and/or orientation of the marker on the image of the patient's physiological structure under the navigation system is acquired; and a marker fusion step between the image fusion step and the display step, wherein the marker and/or guidance instructions related to the marker are fused to the fused image obtained in the image fusion step based on the position and/or orientation of the marker under the navigation system.
  • the mark on the image of the patient's physiological structure includes one or more of a direction mark and a physiological structure mark.
  • After fusion, a direction mark on the global image always indicates a predetermined direction, such as pointing toward the patient's head, tail, ventral side or dorsal side, providing an orientation cue for the operator.
  • The direction marks can include one or more marks pointing toward the patient's dorsal, ventral, cephalic and caudal sides.
  • There may be multiple direction marks, for example four direction marks pointing toward the dorsal, ventral, cephalic and caudal sides of the patient respectively.
  • the markers may include physiological structure markers. Since the markers in this example are set at the patient's physiological structure, during the surgical operation, these marker points displayed on the spliced image can help the doctor identify specific position points of the patient's physiological structure even when covered by soft tissue, such as the pedicles of the anterior vertebra, etc., to better help the doctor determine the endoscopic position.
  • the physiological structure markers include physiological structure markers that do not shift during optical hard endoscope surgery. This method can ensure the accuracy and reliability of directional guidance.
  • the patient's physiological structure is a spine
  • the physiological structure markers include one or more of the ventral side of the articular process, the pedicles of the anterior vertebra, the pedicles of the posterior vertebra, and the intervertebral disc.
  • There may be multiple guidance indications, for example four indicator symbols pointing respectively to the ventral side of the articular process, the pedicle of the anterior vertebra, the pedicle of the posterior vertebra, and the intervertebral disc.
  • Since the marking may include one or both of the direction marking and the physiological structure marking, the operator is provided with a variety of marking options and can flexibly select among them as needed.
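  • One possible way to fuse such a mark into the stitched image is sketched below, assuming (as a simplification of this sketch, not part of the disclosure) that the stitched image uses an equirectangular, angle-based layout around the scope tip and that OpenCV is available for drawing; the mapping convention and all names here are assumptions.

```python
import numpy as np
import cv2

def draw_marker_on_panorama(pano, marker_nav, pivot_nav, label):
    """Fuse one marker into the panorama: convert the marker's
    navigation-frame position into a viewing direction from the scope's
    pivot point, map it to panorama pixel coordinates, and draw an arrow
    plus a text label."""
    d = marker_nav - pivot_nav
    d = d / np.linalg.norm(d)
    theta = np.arctan2(d[1], d[0])                 # azimuth in [-pi, pi]
    phi = np.arccos(np.clip(d[2], -1.0, 1.0))      # polar angle in [0, pi]
    ph, pw = pano.shape[:2]
    x = int((theta + np.pi) / (2 * np.pi) * (pw - 1))
    y = int(phi / np.pi * (ph - 1))
    cv2.arrowedLine(pano, (pw // 2, ph // 2), (x, y), (0, 255, 0), 2)
    cv2.putText(pano, label, (x, y), cv2.FONT_HERSHEY_SIMPLEX,
                0.6, (0, 255, 0), 1)
```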
  • In some embodiments, the imaging orientation acquisition step is performed by acquiring, from a tracking device of a navigation system, the position of a tracer that is located outside the patient's body and has a fixed positional relationship relative to the optical rigid endoscope.
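  • The pose chain implied here is simple: the tracked pose of the tracer in the navigation frame, composed with the calibrated tracer-to-camera transform, gives the pose of the imaging device. A small numpy sketch with made-up numbers:

```python
import numpy as np

def camera_pose_from_tracer(T_nav_tracer: np.ndarray,
                            T_tracer_cam: np.ndarray) -> np.ndarray:
    """Chain the tracked tracer pose with the calibrated tracer-to-camera
    transform to get the imaging device's pose in the navigation frame.
    Both inputs are 4x4 homogeneous transforms."""
    return T_nav_tracer @ T_tracer_cam

# Illustrative numbers only: tracer at (100, 0, 0) mm in the navigation
# frame, camera 300 mm further along the insertion tube (local z axis).
T_nav_tracer = np.eye(4); T_nav_tracer[:3, 3] = [100.0, 0.0, 0.0]
T_tracer_cam = np.eye(4); T_tracer_cam[:3, 3] = [0.0, 0.0, 300.0]
print(camera_pose_from_tracer(T_nav_tracer, T_tracer_cam)[:3, 3])  # [100. 0. 300.]
```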
  • the start of the navigation method can be triggered by an operator, and then the steps included in the navigation method are automatically performed.
  • the implementation of the navigation method can be stopped after receiving a stop input from an operator.
  • In some embodiments, the navigation method further includes a step of indicating the position of the current imaging-device field of view in the stitched image.
  • In this step, the area of the stitched image corresponding to the current field of view of the imaging device is determined with the aid of the position and orientation of the imaging device, and a mark indicating this area is generated and displayed in the display step.
  • the optical rigid endoscope is a spinal endoscope.
  • The navigation method of the present invention is particularly beneficial for spinal endoscopes, such as interlaminar foraminal endoscopes.
  • A spinal endoscope is used inside a working channel sleeve, which further limits the doctor's field of view under the endoscope. Until now there has been no convenient and reliable way for the doctor, unaffected by this special mode of use (the endoscope inside the working channel sleeve) and by the limited viewing angle of the optical rigid endoscope module, to quickly obtain a "complete map" of the target physiological structure and its surrounding soft tissue, blood vessels and nerves from the spinal endoscope image alone, and to use it to guide the efficient operation of the endoscopic tools.
  • The navigation method of the present invention obtains a stitched image around the end of the spinal endoscope, which solves the problem of its limited field of view, largely avoids misoperation by the doctor, and increases the reliability of spinal surgery.
  • The need for the doctor to frequently rotate the endoscope about its axis to observe different directions is also reduced, which improves the efficiency of spinal endoscopic surgery to a certain extent.
  • In some embodiments, the method further includes a calibration step for calibrating the positional relationship of the imaging device of the optical rigid endoscope relative to the tracer, wherein the calibration step is performed using a calibration tool, without acquiring images through the optical rigid endoscope.
  • This calibration method therefore avoids image processing.
  • In some embodiments, a distortion processing step is included before the stitching step, in which the distortion of the optical rigid endoscope images to be stitched is corrected. Removing the distortion introduced by the optical rigid endoscope makes the stitched image closer to the real scene and easier for the operator to observe.
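  • A minimal sketch of such a distortion processing step using OpenCV's standard lens model; the intrinsic matrix K and distortion coefficients dist are assumed to come from the intrinsic calibration described further below, and for strongly curved endoscope optics OpenCV's fisheye model would be an alternative.

```python
import cv2
import numpy as np

def undistort_frame(frame: np.ndarray, K: np.ndarray,
                    dist: np.ndarray) -> np.ndarray:
    """Remove lens distortion from one endoscope frame before stitching.

    K (3x3 camera matrix) and dist (distortion coefficients) are assumed
    to come from a prior intrinsic calibration; cv2.undistort applies the
    standard radial/tangential distortion model.
    """
    return cv2.undistort(frame, K, dist)
```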
  • The navigation method can be combined with both 3D-image-based navigation and 2D-image-based navigation.
  • In either case, the reconstruction of the stitched image and its fused display with the current image are the same, which also gives the navigation method of the present invention a wider range of applications.
  • the current image is located in the center of the stitched image.
  • the center area is more convenient for the operator to observe the current image.
  • Also provided is a computer-readable storage medium on which a computer program is stored.
  • When the program is executed by a processor, the steps of the navigation method in the above examples are performed.
  • Also provided is a control device comprising a memory, a processor, and a program stored in the memory and executable on the processor, wherein the methods in the above examples are performed when the processor runs the program.
  • Also provided is a computer program product comprising a computer program.
  • When the computer program is executed by a processor, the steps of the methods in the above examples are implemented.
  • Also provided is an electronic device for navigation of optical rigid endoscope surgery, characterized in that the electronic device comprises a display device and a processor, the processor having a data interface connectable to a tracking device of a navigation system and to an optical rigid endoscope, so that the processor can obtain multiple images of the optical rigid endoscope as well as the position and/or direction, under the navigation system, of the imaging device of the optical rigid endoscope corresponding to at least one of those images; the processor is configured to display, for at least a period of time while it is running, at least a portion of a stitched image on the display device 3, the stitched image being stitched together from the multiple images of the optical rigid endoscope.
  • a navigation system for optical rigid endoscope surgery includes a tracking device, a display device, and a processor.
  • the tracking device is adapted to track a tracer disposed on an optical rigid endoscope.
  • the processor is adapted to be connected to the tracking device and the optical rigid endoscope.
  • a navigation system for optical rigid endoscope surgery includes a tracking device and the electronic device described above.
  • the tracking device is adapted to track a tracer disposed on an optical rigid endoscope; and a processor in the electronic device is adapted to be connected to the tracking device and the optical rigid endoscope.
  • a robot system which includes a robot arm and the above-mentioned navigation system.
  • The concept of the present invention can thus also be implemented in the navigation system of the robot system.
  • Figure 1A is a schematic diagram showing the field of view of a spinal endoscope in one orientation
  • Figure 1B is a schematic diagram showing the field of view of the spinal endoscope shown in Figure 1A in another orientation. Both Figures 1A and 1B show a working channel sleeve.
  • FIG2 is a flowchart of the steps of an embodiment of the navigation method of the present invention.
  • FIG3 shows a schematic diagram of the principle of a navigation system for spinal endoscopic surgery according to an exemplary embodiment of the present invention.
  • FIG4 exemplarily shows a schematic diagram of moving the optical rigid endoscope to obtain multiple images over a larger range.
  • FIG5 and FIG6 are schematic diagrams showing the principle of rotating the optical rigid endoscope to obtain a stitched image, for endoscopes with end bevel angles of 45 degrees and 15 degrees, respectively.
  • FIG7 schematically shows the fusion of a real-time endoscope image, i.e., the current image, into a stitched image.
  • FIG. 8 is a schematic diagram showing the image of FIG. 7 being displayed on a window of a display device.
  • FIG9 shows a schematic diagram of the fusion of direction markers on the stitched image.
  • FIG10 shows a schematic diagram of integrating physiological structure markers into a stitched image.
  • FIG. 11 shows an example of an operator marking a direction on a preoperative 3D image or an intraoperative 3D image.
  • The present invention relates to a navigation method for surgery using an optical rigid endoscope (e.g., a spinal endoscope, a neuroendoscope, or a nasal endoscope), based on images from the optical rigid endoscope under a navigation system, as well as to the navigation system, electronic device, and robot system (or positioning and navigation system) involved.
  • the navigation system used includes a tracking device 1, a control device 2, and a display device 3.
  • the tracking device 1 can be an optical tracking device (e.g., an NDI navigator), and accordingly, a tracer 4 can be provided on an endoscope 5.
  • the control device 2 can be a general-purpose computer, a dedicated computer, an embedded processor, or any other suitable programmable data processing device, such as a single-chip microcomputer or a chip.
  • the control device 2 can include a processor and a memory for storing programs, but it can also include only a processor, in which case the processor can be attached to the memory storing the programs. In other words, the control device includes at least a processor.
  • the control device 2 (or processor) and the display device 3 can be integrated or provided separately.
  • The control device or processor has a data interface, which can include an interface connectable to the optical rigid endoscope, allowing the control device/processor to obtain images from the optical rigid endoscope in real time.
  • The control device or processor also includes a data interface connectable to the tracking device 1 of the navigation system, so that the position and orientation of a tracked target, such as the tracer 4 on the optical rigid endoscope, can be obtained from the tracking device 1 in real time.
  • the endoscope 5 and/or the tracer 4 can also be considered as part of the navigation system of the present invention.
  • By providing, at the proximal end of the optical rigid endoscope outside the patient's body, a tracer suitable for being tracked by the tracking device 1, the navigation system can obtain the position and direction of the tracer 4 on the optical rigid endoscope 5 in the navigation coordinate system in real time.
  • The imaging device of the optical rigid endoscope, i.e., the distal lens of the rigid-scope module, is arranged at the end of the insertion tube of the optical rigid endoscope.
  • The positional relationship of the imaging device of the optical rigid endoscope, i.e., the rigid-scope module, relative to the tracer 4 can be calibrated first.
  • The calibration is performed with a calibration tool (not shown in the figures), without the need to obtain images through the optical rigid endoscope, which reduces the image processing workload.
  • the navigation system of the present invention optionally includes the calibration tool.
  • a plurality of calibration holes can be formed on the calibration tool, and these calibration holes have bottom surfaces with different inclination angles and/or different apertures to calibrate optical hard endoscopes with different end bevel angles and/or barrel diameters.
  • the calibration tool also includes another tracer fixed on its bracket, and the navigation system can know the position of the tracer on the calibration tool in the navigation coordinate system.
  • the positional relationship of each calibration hole relative to the tracer on the calibration tool can be known based on the design size of the calibration tool, and thus the positions of these calibration holes in the navigation system are known.
  • the end of the insertion tube of the optical rigid scope to be calibrated is inserted into the matching calibration hole.
  • the inclined surface of the end of the insertion tube aligns with the inclined bottom surface of the calibration hole, thereby achieving the positioning of the optical rigid scope.
  • the navigation system also knows the position of the tracer 4 on the optical rigid scope in the navigation coordinate system, it can also determine the relative position of the end of the endoscope's insertion tube with respect to the tracer 4 on the endoscope, completing the calibration process.
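  • In transform terms, this calibration determines the fixed transform from the scope's tracer to the scope tip. A sketch under the assumption that all poses are expressed as 4x4 homogeneous matrices (names are illustrative only):

```python
import numpy as np

def calibrate_tip_transform(T_nav_tracer: np.ndarray,
                            T_nav_hole: np.ndarray) -> np.ndarray:
    """Compute the fixed tip-to-tracer transform during calibration.

    With the insertion tube seated in a calibration hole, the tip pose in
    the navigation frame equals the known hole pose T_nav_hole, so the
    sought transform is T_tracer_tip = inv(T_nav_tracer) @ T_nav_hole.
    """
    return np.linalg.inv(T_nav_tracer) @ T_nav_hole
```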
  • In this way, the position and orientation of the optical rigid endoscope tip, that is, the position and orientation of the rigid-scope module, can be determined in real time from the tracked tracer.
  • The imaging orientation acquisition step in the navigation method according to the present invention is performed in this manner.
  • The imaging parameters of the endoscope lens, also known as internal (intrinsic) parameters, can be calibrated before performing optical rigid endoscope surgery and navigation.
  • A calibration plate can be used to calibrate these internal parameters.
  • the endoscopic scope can be used to photograph the calibration plate at different azimuth angles, saving the images and recording the plate specifications.
  • the internal parameters of the endoscopic scope can then be calibrated, including the distortion matrix and/or rotation and displacement vectors. These calibrated internal parameters will be used in the distortion calibration process described later.
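  • A conventional OpenCV sketch of this intrinsic calibration, assuming the calibration plate is a checkerboard; the plate specification and file pattern below are invented placeholders for illustration, not values from this publication.

```python
import glob
import cv2
import numpy as np

# Placeholder plate specification: 9x6 inner corners, 3 mm squares.
CORNERS, SQUARE_MM = (9, 6), 3.0

grid = np.zeros((CORNERS[0] * CORNERS[1], 3), np.float32)
grid[:, :2] = np.mgrid[0:CORNERS[0], 0:CORNERS[1]].T.reshape(-1, 2) * SQUARE_MM

obj_pts, img_pts, size = [], [], None
for path in glob.glob("calib_plate_*.png"):   # placeholder file pattern
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, CORNERS)
    if found:
        obj_pts.append(grid)
        img_pts.append(corners)
        size = gray.shape[::-1]

# cv2.calibrateCamera returns the camera matrix, the distortion
# coefficients, and per-view rotation/translation vectors, i.e. the
# internal parameters mentioned in the text.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts,
                                                 size, None, None)
print("RMS reprojection error:", rms)
```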
  • the following will describe in detail the steps of an embodiment of the navigation method for optical endoscopic surgery of the present invention in conjunction with Figure 2.
  • Before surgery, the internal and external parameters of the optical rigid endoscope can be calibrated as described above.
  • the navigation system is aligned, that is, the navigation coordinate system is determined.
  • the navigation method of the present invention can be used in scenarios where two-dimensional images are used for navigation, and can also be used in scenarios where three-dimensional images are used for navigation.
  • the alignment method of the navigation system is a known method and will not be described in detail here.
  • The activation of the navigation method of the present invention can be triggered by the operator, so that the operator (doctor) can choose whether and when to use the navigation function provided by the method according to his or her preferences or needs, for example activating it when he or she wants to observe the global image around the end of the endoscope.
  • the operator's triggering instruction can be input through various input components such as a keyboard, a mouse, a handle, a foot switch, etc.
  • the navigation method can also be performed automatically. For example, during the surgical operation, the processor executes the various steps in the navigation method in real time and displays the acquired images in real time. Compared with the automatic activation method, the method of manually triggering the activation of the navigation method by the operator processes a smaller amount of data and requires less information processing capability of the processor.
  • After activation, the control device automatically obtains multiple images from the optical rigid endoscope (the image acquisition step in Figure 2); at the same time, the control device automatically obtains and records the position and direction of the imaging device corresponding to at least one image of the optical rigid endoscope (for example, corresponding to each image), i.e., the imaging orientation acquisition step in Figure 2.
  • Specifically, the processor obtains from the tracking device 1 of the navigation system the position of the tracer 4, which is located outside the patient's body at the proximal end of the optical rigid endoscope 5, and indirectly obtains the position and direction of the imaging device, i.e., the rigid-scope module, by combining it with the external parameters calibrated above.
  • The above steps are repeated at multiple positions and orientations of the optical rigid endoscope, for example by translating (as shown in Figure 4) and/or rotating (as shown in Figures 5 and 6) the endoscope within the larger range that needs to be observed, so that images of the imaging device at multiple orientations are obtained.
  • The multiple optical rigid endoscope images can then be corrected for distortion, using the imaging device's internal parameters calibrated above, to remove the distortion effects of the imaging device.
  • The images are stitched together based on the position and/or direction (i.e., the orientation) of the imaging device corresponding to at least one image, thereby obtaining a stitched image of the patient's physiological structure.
  • This stitched image can then be displayed on a display device.
  • The stitched image can also be called a "complete three-dimensional map" or a "panoramic image", but it should be noted that "complete" or "panoramic" here only means an image covering a larger range than the limited field of view of the optical rigid endoscope, and does not necessarily mean a 360-degree panoramic image. Such an image can be obtained, for example, by the doctor flexibly choosing, according to his observation needs during the operation, to translate (for example, as shown in FIG4) and/or rotate (for example, as shown in FIG5 and FIG6) the optical rigid endoscope within the larger range that needs to be observed.
  • The stitched image around the end of the optical rigid endoscope enables the doctor to observe a larger field of view (which can also be a global field of view), thereby guiding the doctor in operating the optical rigid endoscope, allowing him to quickly determine the current direction of the endoscope and the spatial positions of key physiological structures relative to the endoscopic image, and eliminating the defect of the endoscope's limited field of view.
  • The overall shape of the stitched image can be similar to a spherical cap whose radius is the focal length of the optical rigid-scope module.
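  • A simplified sketch of pose-based stitching consistent with this spherical-cap picture: each pixel ray of an undistorted frame is back-projected through the intrinsic matrix, rotated into the navigation frame using the tracked camera orientation, and accumulated into an angle-indexed panorama. This sketch assumes a pure rotation about a fixed pivot and last-writer-wins blending, both simplifications relative to a real implementation.

```python
import numpy as np

def splat_frame_to_sphere(frame, K, R_nav_cam, pano):
    """Project one undistorted HxWx3 frame onto an angle-indexed panorama.

    Each pixel's viewing ray is back-projected through K, rotated into the
    navigation frame by the tracked camera orientation R_nav_cam, and
    indexed by its spherical angles. pano is an HxWx3 accumulation buffer.
    """
    h, w = frame.shape[:2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    rays = R_nav_cam @ (np.linalg.inv(K) @ pix)     # rays in nav frame
    rays /= np.linalg.norm(rays, axis=0)
    theta = np.arctan2(rays[1], rays[0])            # azimuth, [-pi, pi]
    phi = np.arccos(np.clip(rays[2], -1.0, 1.0))    # polar angle, [0, pi]
    ph, pw = pano.shape[:2]
    px = ((theta + np.pi) / (2 * np.pi) * (pw - 1)).astype(int)
    py = (phi / np.pi * (ph - 1)).astype(int)
    pano[py, px] = frame.reshape(-1, 3)             # last-writer-wins blend
```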
  • FIG5 exemplarily shows three positions of an optical rigid endoscope with an end bevel angle of 45 degrees during rotation, with the conical field of view 53 of the endoscope shown at each of the three positions.
  • FIG6 exemplarily shows three positions of an optical rigid endoscope with an end bevel angle of 15 degrees during rotation, with the conical field of view 53 of the endoscope shown at each of the three positions.
  • the field of view angle of the exemplarily shown conical field of view is 120 degrees.
  • The current image of the optical rigid endoscope can also be fused with the stitched image for display, i.e., the image fusion step.
  • the current image 6 is fused with the central area of the stitched image 7.
  • the displayed image is the fused image, so that the operator can simultaneously view the current image and the stitched image around it.
  • FIG8 shows the fused image of FIG7 displayed in a window (the surrounding square frame) of a display device; a portion of the image is shown, with the current real-time image 6 located in the central area.
  • a processing step may be included between the stitching step and the image fusion step.
  • a planar image is generated based on the image obtained in the stitching step to reduce visual distortion.
  • This planar image is then used as the stitched image and fused with the current image from the optical rigid endoscope in the image fusion step (e.g., placed around the current image). This eliminates or reduces distortion in the stitched image caused by different viewing angles.
  • Markers, e.g., direction markers or physiological structure markers, can also be fused onto the stitched image, thereby providing orientation guidance indicators, such as direction guidance indicators and physiological-structure-point guidance indicators, on the displayed stitched image (or on the image obtained after the stitched image is fused with the current image). The operator can therefore rely on the displayed image to quickly grasp the global orientation and determine the orientation of the physiological structure at that moment.
  • Figure 9 shows a schematic diagram of a stitched image fused with direction markers
  • Figure 10 shows a schematic diagram of a stitched image fused with physiological structure marker points.
  • the navigation method of the present invention may further include a marker orientation acquisition step, in which the position and/or orientation of the marker on the image of the patient's physiological structure under the navigation system is acquired.
  • the image of the patient's physiological structure described herein may be a preoperative three-dimensional image, such as a preoperative CT image, or an image obtained during surgery, such as an intraoperative CBCT image or an intraoperative two-dimensional fluoroscopic image.
  • the navigation system uses the preoperative CT image for navigation, the operator makes a mark on the preoperative three-dimensional image.
  • the navigation system uses the intraoperative CBCT image or the intraoperative two-dimensional fluoroscopic image for navigation, the operator makes a mark on the image after acquiring it during surgery.
  • the operator can choose to make various markings, such as directional markings or physiological structure point markings, thereby providing the operator with a variety of options and possibilities.
  • the operator can mark directions on the preoperative or intraoperative 3D image, such as arrows pointing toward the patient's dorsal, ventral, cranial, and caudal sides.
  • Figure 11 shows a schematic diagram of the operator marking directions on a preoperative 3D image or an intraoperative 3D image.
  • the operator can also mark directions on an intraoperative 2D image.
  • The operator can also mark on the three-dimensional or two-dimensional image (for example, using preoperative planning software) certain physiological structure landmarks in or around the intervertebral foramen that do not shift during the entire spinal endoscopic surgery, including but not limited to the ventral side of the articular process, the pedicle of the anterior vertebra, the pedicle of the posterior vertebra, and the intervertebral disc.
  • the marker position acquisition step described above can be performed, for example, as follows. If the operator has made marks on the preoperative 3D image, the processor first acquires the preoperative 3D image and simultaneously acquires the marks (e.g., point marks or direction marks) made by the operator on the preoperative 3D image, i.e., the image and marker acquisition step. This is followed by an image registration step, where the preoperative 3D image is registered with the navigation system. The coordinates of each mark in the navigation system are then determined based on the registration relationship determined in this registration step, thereby completing the marker position acquisition step described above.
  • the three-dimensional image or the two-dimensional image used for navigation is generated during the operation.
  • the registration of the intraoperative three-dimensional image or the intraoperative two-dimensional image to the navigation system is completed at the same time as the intraoperative three-dimensional image or the intraoperative two-dimensional image is acquired. Therefore, in this case, the mark acquisition step is performed after the image acquisition and registration steps, and the operator marks the intraoperative three-dimensional image or the intraoperative two-dimensional image after the registration is completed. Since the coordinates of the registered image under the navigation system are known, the coordinates or positions of the marks made by the operator in the image under the navigation system can be obtained accordingly, thereby completing the above-mentioned mark orientation acquisition step.
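  • In both cases the final computation is the same: applying the registration transform (image frame to navigation frame) to the marked points. A sketch, assuming 4x4 homogeneous matrices and an (N, 3) array of marks (function and variable names are illustrative only):

```python
import numpy as np

def marks_to_navigation_frame(marks_img: np.ndarray,
                              T_nav_img: np.ndarray) -> np.ndarray:
    """Apply the registration transform T_nav_img (image frame to
    navigation frame) to marks picked on the pre- or intra-operative
    image, completing the marker orientation acquisition step."""
    homo = np.hstack([marks_img, np.ones((marks_img.shape[0], 1))])
    return (T_nav_img @ homo.T).T[:, :3]

# Made-up example: registration is a pure 10 mm shift along x.
T = np.eye(4); T[0, 3] = 10.0
print(marks_to_navigation_frame(np.array([[0.0, 0.0, 0.0]]), T))  # [[10. 0. 0.]]
```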
  • the description herein uses the method of an operator making marks on an image of a patient's physiological structure to form the marks, it is also understood that in some other embodiments, the marks may not be made by the operator, for example, the marks may be automatically formed when the image of the patient's physiological structure is formed. This can be achieved, for example, by pre-positioning a marker on a certain part of the patient's body or on a location such as an operating table so that the marks are generated when the image is acquired.
  • After the position and/or orientation of the marker on the image of the patient's physiological structure under the navigation system has been obtained (i.e., the marker orientation acquisition step), the marker 10 and/or a guide indicator related to the marker (e.g., an arrow pointing in a certain direction or to a certain marked point) can be fused into the stitched image described above, because every point in the stitched image has a determined position and/or orientation under the navigation system; in other words, the relative position of the marker and the stitched image under the navigation system can be determined. Alternatively, the marker and/or the related guide indicator can be fused into the fused image obtained in the image fusion step described above.
  • The marker orientation acquisition step can be performed simultaneously with, or before, the image acquisition step, the imaging orientation acquisition step, and the stitching step (and the possible image fusion step).
  • Figures 9 and 10 also exemplarily illustrate the display of a marker 9 on the stitched image, indicating the area corresponding to the current imaging device's field of view.
  • the navigation method of the present invention further includes a step of indicating the current imaging device's field of view in the stitched image. In this step, the area corresponding to the current imaging device's field of view in the stitched image is determined using the imaging device's current position and orientation, and a marker 9 indicating this area is generated and displayed in the display step.
  • the dashed line 11 in Figures 9 and 10 represents the area displayed in the display device's window.
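  • One way to compute such a field-of-view marker, assuming the same equirectangular angle mapping as in the stitching sketch above (all conventions and names here are assumptions of this sketch): sample points along the image border, back-project them through the intrinsics, rotate them by the current tracked orientation, and map them to panorama coordinates.

```python
import numpy as np

def fov_outline_on_panorama(K, R_nav_cam, frame_shape, pano_shape, n=64):
    """Trace the border of the current camera field of view on the
    panorama (marker 9 in Figures 9 and 10), returning an (4n, 2) array
    of panorama pixel coordinates to draw as a polyline."""
    h, w = frame_shape
    t = np.linspace(0, 1, n)
    border = np.concatenate([
        np.stack([t * (w - 1), np.zeros(n)], 1),        # top edge
        np.stack([np.full(n, w - 1), t * (h - 1)], 1),  # right edge
        np.stack([(1 - t) * (w - 1), np.full(n, h - 1)], 1),  # bottom
        np.stack([np.zeros(n), (1 - t) * (h - 1)], 1),  # left edge
    ])
    pix = np.hstack([border, np.ones((border.shape[0], 1))]).T
    rays = R_nav_cam @ (np.linalg.inv(K) @ pix)
    rays /= np.linalg.norm(rays, axis=0)
    theta = np.arctan2(rays[1], rays[0])
    phi = np.arccos(np.clip(rays[2], -1.0, 1.0))
    ph, pw = pano_shape
    xs = (theta + np.pi) / (2 * np.pi) * (pw - 1)
    ys = phi / np.pi * (ph - 1)
    return np.stack([xs, ys], axis=1)
```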
  • In some embodiments, the navigation method and the navigation function it provides can be started without being triggered by the operator; instead, the steps of the navigation method are executed automatically and in real time by the control device. That is, during the operation, the control device automatically records all real-time optical rigid endoscope images and the corresponding positions and orientations of the endoscope in the navigation system, performs distortion correction on the acquired images, performs real-time stitching and reconstruction, and updates the stitched image (or the image obtained after fusing the stitched image with the current image and/or the markers).
  • The steps of "displaying the stitched image" or "displaying the fused image" described herein do not necessarily mean that the image is always displayed.
  • The stitched/fused image can be displayed only for a certain period of time according to the operator's needs; for example, when the operator wishes to observe the entire area around the end of the optical rigid endoscope, he or she can trigger the display of the stitched image using, for example, a foot switch.
  • The present invention also provides an electronic device for navigation of optical rigid endoscope surgery, which includes the display device 3 and a processor as described above.
  • the processor is included in the control device 2, i.e., the host shown in the figure. It will be understood by those skilled in the art that the processor and the display device can be integrated or separated.
  • the control device 2 or the processor has a data interface.
  • The control device or processor is electrically connected, through the data interface, to the tracking device 1 of the navigation system and to the optical rigid endoscope 5, in order to obtain the position and orientation of the imaging device of the optical rigid endoscope and an image of the optical rigid endoscope at the corresponding position and orientation.
  • the data interface of the processor also enables the processor to obtain an image of the patient's physiological structure, and the processor is configured to obtain marks made by the operator on the image of the patient's physiological structure.
  • A computer program is executed (the computer program can be stored in a memory included in the control device or in another memory) to carry out the navigation method of the present invention and to display, on the display device 3 for at least a period of time, a stitched image composed of the images of the optical rigid endoscope obtained at multiple positions and directions; or, according to the preferred schemes described above, to display the image obtained after fusing the stitched image with the current image of the optical rigid endoscope, and/or the stitched image fused with direction marks/physiological structure marks or with guidance indications related to these marks, or the image obtained after fusing the planar image derived from the stitched image with the current image of the optical rigid endoscope.
  • For example, one or more of the following images or views can be shown on the display interface of the display device 3: the stitched image; the image obtained by fusing the stitched image with the current image of the optical rigid endoscope; the image obtained by fusing the planar image corresponding to the stitched image with the current image of the optical rigid endoscope; the image obtained by fusing the direction marks with the stitched image; the image obtained by fusing the physiological structure marks with the stitched image; navigation views such as the view on the fitted two-dimensional perspective section, the sagittal section view, the coronal section view, and the axial section view; and real-time endoscopic images.
  • For example, the stitched image and the real-time endoscopic image can be displayed simultaneously with any one or more of the navigation views, such as the fitted two-dimensional perspective view, the sagittal view, the coronal view, and the axial view;
  • the image obtained by fusing the stitched image with the current image of the optical rigid endoscope can likewise be displayed simultaneously with any one or more of the navigation views as well as the real-time endoscopic image;
  • and the image obtained by fusing the direction marks or physiological structure marks with the stitched image can also be displayed simultaneously with any one or more of the navigation views as well as the real-time endoscopic image, and so on.
  • The present invention also provides a computer-readable storage medium having a computer program stored thereon; when executed by a processor, the computer program performs the steps of the navigation method of the present invention.
  • the present invention provides a control device, which may include a memory, a processor, and a program stored in the memory and executable by the processor, wherein the steps of the navigation method of the present invention are performed when the processor executes the program.
  • the present invention also provides a computer program product, including a computer program, which, when executed by a processor, implements the steps of the navigation method of the present invention.
  • The present invention also provides a navigation system. Because the navigation system uses the optical rigid endoscope to form a stitched image of a larger area around the end of the endoscope, such as a panoramic image, thereby providing a wider field of view for the operator, and incorporates direction markers, physiological structure markers or other guiding indicators to help the operator grasp the overall orientation, the navigation system has improved navigation effectiveness and accuracy.
  • The present invention also provides a robotic system (also referred to as a positioning and navigation system), which includes a robotic arm and the navigation system of any of the examples of the present invention.
  • The concept of the present invention can also be implemented in the navigation system of the robotic system.
  • The storage medium may be a random access memory (RAM), a read-only memory (ROM), an electrically programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
  • the method can be implemented in whole or in part by software, hardware, firmware or any combination thereof.
  • When implemented using software, the method can be realized in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • When the computer program instructions are loaded and executed on a computer, the processes or functions described in the present invention are produced in whole or in part.
  • the computer here can be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device.
  • Computer instructions can be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • Computer instructions can be transmitted from one website, computer, server or data center to another by wired means (such as coaxial cable, optical fiber, or digital subscriber line) or wireless means (such as infrared, radio, or microwave).
  • A computer-readable storage medium can be any available medium that a computer can access, or a data storage device, such as a server or data center, that integrates one or more available media.
  • the available media can be magnetic media (e.g., floppy disks, hard disks, tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid-state drives).
  • the memory of the control device of the present invention may include random access memory (RAM) or non-volatile memory (NVM), such as at least one disk storage.
  • the memory may be at least one storage device separate from the processor.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biomedical Technology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Physics & Mathematics (AREA)
  • Pathology (AREA)
  • Optics & Photonics (AREA)
  • Biophysics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Robotics (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Orthopedic Medicine & Surgery (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Endoscopes (AREA)

Abstract

The present invention relates to a navigation method for optical rigid endoscope surgery, a corresponding computer-readable storage medium, a control device, a computer program product, and an electronic device, a navigation system and a robot system for navigation of optical rigid endoscope surgery. The navigation method comprises: an image acquisition step of acquiring a plurality of images of an optical rigid endoscope; an imaging orientation acquisition step of acquiring a position and a direction of an imaging device of the optical rigid endoscope corresponding to each image of the optical rigid endoscope; a stitching step of stitching the plurality of images according to the orientation of the imaging device corresponding to each image to obtain a stitched image; and a display step of displaying the stitched image. According to the present invention, forming the stitched image provides a global field of view for the operator, thereby solving the problem of the operator's limited field of view. The present invention is particularly advantageous for optical rigid endoscope surgery within endoscopic surgery: the computation required by the stitching approach is lower, the stitching speed and stitching accuracy are higher, and the demands on the computing power of a processor are lower.
PCT/CN2024/126086 2024-02-07 2024-10-21 Navigation method for optical rigid endoscope surgery, electronic device, navigation system and robot system Pending WO2025167189A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202410173936.8A CN118021446A (zh) 2024-02-07 2024-02-07 Navigation method, electronic device, navigation system and robot system for optical rigid endoscope surgery
CN202410173936.8 2024-02-07

Publications (1)

Publication Number Publication Date
WO2025167189A1 (fr) 2025-08-14

Family

Family ID: 90985277

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2024/126086 Pending WO2025167189A1 (fr) 2024-02-07 2024-10-21 Navigation method for optical rigid endoscope surgery, electronic device, navigation system and robot system

Country Status (2)

Country Link
CN (1) CN118021446A (fr)
WO (1) WO2025167189A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118021446A (zh) * 2024-02-07 2024-05-14 Changzhou Kanghui Medical Innovation Co Ltd Navigation method, electronic device, navigation system and robot system for optical rigid endoscope surgery
CN117898834A (zh) * 2024-03-06 2024-04-19 Changzhou Kanghui Medical Innovation Co Ltd Method for guiding endoscopic surgery, computer-readable storage medium, control device, computer program product, electronic device, navigation system and robot system

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105596005A (zh) * 2009-03-26 2016-05-25 System for providing visual guidance for steering the tip of an endoscopic device towards one or more landmarks and assisting an operator in endoscopic navigation
CN107079097A (zh) * 2014-11-06 2017-08-18 Endoscope system, image processing apparatus, image processing method, and program
CN108882964A (zh) * 2015-10-09 2018-11-23 Method of visualizing a body cavity using an angled endoscope with a robotic surgical system
CN114599263A (zh) * 2019-08-21 2022-06-07 Systems and methods for medical imaging
CN115590456A (zh) * 2022-09-29 2023-01-13 Shanghai Jiao Tong University (CN) Rotary real-time panoramic endoscope for otology
US20230181262A1 (en) * 2021-12-10 2023-06-15 Leica Microsystems Cms Gmbh Devices and Methods for Imaging and Surgical Applications
CN116849832A (zh) * 2023-08-15 2023-10-10 Intracavity dynamic panoramic imaging method, apparatus, device and medium
CN117898834A (zh) * 2024-03-06 2024-04-19 Method for guiding endoscopic surgery, computer-readable storage medium, control device, computer program product, electronic device, navigation system and robot system
CN118021446A (zh) * 2024-02-07 2024-05-14 Navigation method, electronic device, navigation system and robot system for optical rigid endoscope surgery
CN118285913A (zh) * 2024-04-01 2024-07-05 Method for guiding endoscopic surgery under a navigation system, electronic device, navigation system and surgical robot system


Also Published As

Publication number Publication date
CN118021446A (zh) 2024-05-14

Similar Documents

Publication Number Title
US11800970B2 Computerized tomography (CT) image correction using position and direction (P and D) tracking assisted optical visualization
RU2668490C2 Guidance tools for manually steering an endoscope using pre-operative and intra-operative 3D images
US10674891B2 Method for assisting navigation of an endoscopic device
CN106659373B Dynamic 3D lung map view for tool navigation inside the lung
EP2433262B1 Marker-free tracking registration and calibration for an electromagnetically tracked endoscope system
WO2025167189A1 Navigation method for optical rigid endoscope surgery, electronic device, navigation system and robot system
CN111970986A Systems and methods for performing intraoperative guidance
CN108969099B Correction method, surgical navigation system, electronic device and storage medium
JP2007531553A System and method for intraoperative targeting
JP6952740B2 Method for assisting a user, computer program product, data storage medium, and imaging system
WO2025185175A1 Method for guiding endoscopic surgery, computer-readable storage medium, control apparatus, computer program product, electronic device, navigation system and robotic system
JP7580694B2 Medical system and associated method
WO2025103076A1 Navigation method for spinal endoscopic surgery, electronic device and navigation system
US20240285351A1 Surgical assistance system with improved registration, and registration method
CN118285913A Method for guiding endoscopic surgery under a navigation system, electronic device, navigation system and surgical robot system
JP4510415B2 Computer-aided display method for a 3D object
CN117860379A Endoscope guidance method under a navigation system, electronic device and navigation system
JP2022013628A Control system and method for operating a robot
JP2025501263A Two-dimensional image registration
CN115607275A Image display method, apparatus, storage medium and electronic device
Bisson et al. 3D visualization tool for minimally invasive discectomy assistance
CN114828768B Selecting a cursor position on a medical image using the direction from the distal end of a probe
US20240156549A1 Cavity modeling system and cavity modeling method
EP4299029A2 Integrating cone beam computed tomography to create a navigation pathway to a target in the lung and method of navigating to the target
CN117898664A Method for displaying prior endoscope images, electronic device, navigation system and robot system

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 24923393

Country of ref document: EP

Kind code of ref document: A1