
WO2025079075A1 - Following navigation camera - Google Patents

Following navigation camera

Info

Publication number
WO2025079075A1
Authority
WO
WIPO (PCT)
Prior art keywords
surgical
camera
view
image information
space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/IL2024/050992
Other languages
English (en)
Inventor
Ofir RUF
Ziv SEEMANN
Itamar ESHEL
Gal BARAZANI
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mazor Robotics Ltd
Original Assignee
Mazor Robotics Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mazor Robotics Ltd filed Critical Mazor Robotics Ltd
Publication of WO2025079075A1
Legal status: Pending


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/30Surgical robots
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046Tracking techniques
    • A61B2034/2055Optical tracking systems
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046Tracking techniques
    • A61B2034/2055Optical tracking systems
    • A61B2034/2057Details of tracking cameras

Definitions

  • the present disclosure is generally directed to surgical navigation systems and imaging devices for use in surgical navigation systems.
  • Imaging devices and navigation systems may assist a surgeon or other medical provider in carrying out a surgical procedure.
  • Imaging may be used by a medical provider for visual guidance in association with diagnostic and/or therapeutic procedures.
  • Navigation systems may be used for tracking objects (e.g., instruments, imaging devices, etc.) associated with carrying out the surgical procedure.
  • the present disclosure provides improvements to navigation systems.
  • embodiments of the present disclosure aim to enable a change of view of a camera during a surgical procedure. Enabling motion of a navigation camera comes with technical difficulties (e.g., changes in registration due to motion), but helps improve performance of the system because the camera may not always be optimally positioned at the onset of the surgical procedure or because environmental changes may occur during a surgical procedure that frustrate optimal performance of the navigation system.
  • overall performance of the system can be improved, which may result in improved patient outcomes.
  • Example aspects of the present disclosure include:
  • a system including: a camera; a motorized control unit coupled with the camera and configured to physically adjust a position of the camera; a processor; and a memory storing data that, when executed by the processor, causes the processor to: receive first image information from the camera at a first point in time, where the first image information provides a first view of a surgical space and one or more surgical navigation trackers positioned within the surgical space; determine, based on the first image information, that a position of the camera at the first point in time is suboptimal with respect to the surgical space; determine, based on the first image information, a physical motion for the camera that improves the position of the camera with respect to the surgical space; generate a camera movement signal configured to move the camera according to the physical motion; transmit the camera movement signal to the motorized control unit; receive second image information from the camera at a second point in time, after the camera movement signal has been transmitted, where the second image information provides a second view of the surgical space and the one or more surgical navigation trackers positioned within the surgical space; and confirm, based on an analysis of the second image information, that the second view of the surgical space is improved relative to the first view.
  • a system including: a processor; and memory storing data that, when executed by the processor, causes the processor to: receive first image information from a camera at a first point in time, where the first image information provides a first view of a surgical space and surgical navigation trackers positioned within the surgical space; determine, based on the first image information, that a position of the camera at the first point in time is suboptimal with respect to the surgical space; determine, based on the first image information, a physical motion for the camera that improves the position of the camera with respect to the surgical space; generate a camera movement signal configured to move the camera according to the physical motion; transmit the camera movement signal to a motorized control unit configured to physically move the camera; receive second image information from the camera at a second point in time, after the camera movement signal has been transmitted, where the second image information provides a second view of the surgical space and the surgical navigation trackers positioned within the surgical space; and confirm, based on an analysis of the second image information, that the second view of the surgical space is improved relative to the first view
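  • As a purely illustrative sketch (not the disclosed implementation), the camera-repositioning loop described in the two preceding items could be organized as follows; the function names, the simple visibility-based quality score, and the `camera.capture()` / `control_unit.move()` interfaces are assumptions made for this example.

```python
# Hypothetical sketch of the camera-repositioning loop; the function names,
# the visibility-based quality score, and the camera/control-unit interfaces
# are illustrative assumptions, not the disclosed implementation.
from dataclasses import dataclass


@dataclass
class ViewAssessment:
    trackers_visible: int
    trackers_expected: int
    quality: float  # 0.0 (unusable) .. 1.0 (ideal)


def assess_view(image_info: dict, expected_trackers: int) -> ViewAssessment:
    """Score how well the current view captures the surgical navigation trackers."""
    visible = len(image_info["tracker_pixels"])  # detected tracker centroids (assumed key)
    return ViewAssessment(visible, expected_trackers, visible / max(expected_trackers, 1))


def plan_motion(assessment: ViewAssessment) -> dict:
    """Pick a small corrective motion (here, a fixed pan step) when the view is poor."""
    return {"pan_deg": 5.0, "tilt_deg": 0.0}


def reposition_camera(camera, control_unit, expected_trackers: int, max_iters: int = 5) -> bool:
    """Iteratively move the camera until the view is acceptable, then confirm improvement."""
    first = assess_view(camera.capture(), expected_trackers)   # first image information
    current = first
    for _ in range(max_iters):
        if current.quality >= 1.0:
            break                                              # view already acceptable
        control_unit.move(**plan_motion(current))              # camera movement signal
        current = assess_view(camera.capture(), expected_trackers)  # second image information
    # Confirm that the latest view is improved relative to the first view.
    return current.quality >= first.quality
```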
  • FIG. 7 illustrates an example of a process flow in accordance with aspects of the present disclosure.
  • the described methods, processes, and techniques may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. Alternatively or additionally, functions may be implemented using machine learning models, neural networks, artificial neural networks, or combinations thereof (alone or in combination with instructions).
  • Computer-readable media may include non-transitory computer-readable media, which corresponds to a tangible medium such as data storage media (e.g., RAM, ROM, EEPROM, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer).
  • processors such as one or more digital signal processors (DSPs), general purpose microprocessors (e.g., Intel Core i3, i5, i7, or i9 processors; Intel Celeron processors; Intel Xeon processors; Intel Pentium processors; AMD Ryzen processors; AMD Athlon processors; AMD Phenom processors; Apple A10 or A10X Fusion processors; Apple A11, A12, A12X, A12Z, or A13 Bionic processors; or any other general purpose microprocessors), graphics processing units (e.g., Nvidia GeForce RTX 2000-series processors, Nvidia GeForce RTX 3000-series processors, AMD Radeon RX 5000-series processors, AMD Radeon RX 6000-series processors, or any other graphics processing units), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry
  • proximal and distal are used in this disclosure with their conventional medical meanings, proximal being closer to the operator or user of the system, and further from the region of surgical interest in or on the patient, and distal being closer to the region of surgical interest in or on the patient, and further from the operator or user of the system.
  • the systems and techniques described herein may support autonomous or semi-autonomous object tracking. Embodiments will be described in connection with moving a camera during a surgical procedure, but it should be appreciated that the claims are not so limited. For example, embodiments of the present disclosure contemplate the ability to add a motorized degree of freedom to a camera (or other type of imaging device) that enables changing the view of the camera due to a re-orientation of the camera. While particularly useful in navigation systems that support surgical procedures, it should be appreciated that embodiments can also be applied to non-surgical solutions. Moreover, the terms “camera”, “image capture device”, and “imaging device” will be used interchangeably herein.
  • a camera or imaging device may include multiple image sensors. Examples of such cameras or imaging devices include, without limitation, optical cameras, infrared cameras, still cameras, video cameras, electromagnetic sensors, laser sensors, ultrasound imaging devices, etc.
  • the camera's degree of freedom may be located in its center. Centering the camera’s range of motion around the camera center can provide a faster way to change the camera’s range of view in the room. Centering the camera’s range of motion at the camera center can also help to support efficient registration and re-registration for the navigation system.
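  • The registration benefit of rotating about the camera center can be seen in the extrinsics bookkeeping: the optical center stays fixed, so only the rotational part of the camera pose changes. A minimal numpy sketch under those assumptions (the axis and angle conventions are illustrative) follows.

```python
# Minimal numpy sketch (illustrative assumptions: pinhole extrinsics convention,
# pan about the camera's local y-axis) showing that a rotation about the camera
# center leaves the center fixed and only updates the rotational part of the pose.
import numpy as np


def rotate_extrinsics_about_center(R_wc: np.ndarray, C_w: np.ndarray, R_delta: np.ndarray):
    """R_wc: 3x3 world-to-camera rotation; C_w: camera center in world coordinates;
    R_delta: rotation applied in the camera frame (e.g., a pan/tilt step)."""
    R_new = R_delta @ R_wc        # orientation changes
    t_new = -R_new @ C_w          # translation recomputed from the SAME camera center
    return R_new, t_new


# Example: a 10-degree pan about the camera's vertical (y) axis.
theta = np.deg2rad(10.0)
R_delta = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                    [0.0, 1.0, 0.0],
                    [-np.sin(theta), 0.0, np.cos(theta)]])
R_new, t_new = rotate_extrinsics_about_center(np.eye(3), np.array([0.0, 0.0, 2.0]), R_delta)
```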
  • the camera may attempt to keep all surgical navigation trackers within its field of view.
  • the camera may be automatically or semi-automatically rotated to include all or some of the surgical navigation trackers.
  • If the image plane is not initially parallel with a plane passing through multiple surgical navigation trackers, then the accuracy of tracking may be lowered. In such a situation, the camera may be configured to automatically or semi-automatically rotate such that the image plane is closer to parallel with the plane passing through the multiple surgical navigation trackers.
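  • To make the parallelism criterion concrete, one possible (assumed, not disclosed) way to quantify it is to fit a plane to the tracker positions and measure the angle between that plane and the image plane, as in the sketch below.

```python
# Illustrative sketch (assumed approach, not the disclosed method): fit a plane to
# the tracker positions expressed in the camera frame and measure how far from
# parallel it is to the image plane.
import numpy as np


def tracker_plane_normal(tracker_positions_cam: np.ndarray) -> np.ndarray:
    """Fit a plane to Nx3 tracker positions and return its unit normal."""
    centered = tracker_positions_cam - tracker_positions_cam.mean(axis=0)
    # The right singular vector with the smallest singular value (last row of vt)
    # is the normal of the best-fit plane.
    _, _, vt = np.linalg.svd(centered)
    return vt[-1]


def misalignment_deg(tracker_positions_cam: np.ndarray) -> float:
    """Angle between the tracker plane and the image plane (0 degrees means parallel)."""
    normal = tracker_plane_normal(tracker_positions_cam)
    optical_axis = np.array([0.0, 0.0, 1.0])  # image-plane normal in the camera frame
    cos_a = abs(float(normal @ optical_axis))
    return float(np.degrees(np.arccos(np.clip(cos_a, 0.0, 1.0))))
```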
  • the camera may be equipped with one, two, or three degrees of rotational freedom.
  • the camera may be equipped with an ability to translate positions (e.g., move linearly) along one, two, or three axes.
  • the camera may be equipped with 1, 2, 3, 4, 5, or 6 Degrees of Freedom (DoFs).
  • Some situations may benefit from a camera having fewer DoFs to provide certainty and system simplicity.
  • Some situations may benefit from a camera having more DoFs to provide increased flexibility. Enabling automated or semi-automated motion of the camera helps surgical staff avoid the need for manually re-aligning the camera during a surgical procedure, thereby decreasing the duration of the surgical procedure.
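  • For illustration only, the available degrees of freedom of a camera mount could be captured in a small configuration object like the sketch below; the pan/tilt/roll plus x/y/z decomposition and the field names are assumptions rather than part of the disclosure.

```python
# Illustrative configuration object for a camera mount's degrees of freedom;
# the field names and pan/tilt/roll plus x/y/z decomposition are assumptions.
from dataclasses import dataclass


@dataclass(frozen=True)
class CameraDofConfig:
    pan: bool = True           # rotational DoFs
    tilt: bool = True
    roll: bool = False
    translate_x: bool = False  # translational DoFs
    translate_y: bool = False
    translate_z: bool = False

    @property
    def dof_count(self) -> int:
        return sum([self.pan, self.tilt, self.roll,
                    self.translate_x, self.translate_y, self.translate_z])


pan_tilt_unit = CameraDofConfig()                                   # simple 2-DoF mount
full_mount = CameraDofConfig(roll=True, translate_x=True,
                             translate_y=True, translate_z=True)    # 6-DoF mount
assert pan_tilt_unit.dof_count == 2 and full_mount.dof_count == 6
```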
  • Techniques described herein may be implemented in hardware, software, firmware, or any combination thereof that may automatically detect instrument landmarks on ultrasound images during a medical procedure.
  • FIG. 1 illustrates an example of a system 100 that supports aspects of the present disclosure.
  • the system 100 is shown to include a computing device 102, one or more imaging devices 112, a robot 114, a navigation system 118, a database 142, and/or a cloud network 144 (or other network).
  • Systems according to other implementations of the present disclosure may include more or fewer components than the system 100.
  • the system 100 may omit and/or include additional instances of one or more components of the computing device 102, the imaging device(s) 112, a robot 114, a navigation system 118, a database 142, and/or a cloud network 144.
  • the system 100 may omit any instance of the computing device 102, the imaging device(s) 112, the robot 114, the navigation system 118, the database 142, and/or the cloud network 144.
  • the system 100 may support the implementation of one or more other aspects of one or more of the methods disclosed herein.
  • the computing device 102 is shown to include a processor 104, a memory 106, a communication interface 108, and a user interface 110.
  • Computing devices according to other implementations of the present disclosure may include more or fewer components than the computing device 102.
  • the computing device 102 may be, for example, a control device including electronic circuitry associated with controlling any components of the system 100.
  • the processor 104 of the computing device 102 may be any processor described herein or any similar processor.
  • the processor 104 may be configured to execute instructions stored in the memory 106, which instructions may cause the processor 104 to carry out one or more computing steps utilizing or based on data received from the imaging device(s) 112, the robot 114, the navigation system 118, the database 142, and/or the cloud network 144.
  • the memory 106 may be or include RAM, DRAM, SDRAM, other solid-state memory, any memory described herein, or any other tangible, non-transitory memory for storing computer-readable data and/or instructions.
  • the memory 106 may store information or data associated with completing, for example, any step of the process flow 700 described herein, or of any other methods.
  • the memory 106 may store, for example, instructions and/or machine learning models that support one or more functions of the imaging device(s) 112, the robot 114, and the navigation system 118.
  • the memory 106 may store content (e.g., instructions and/or machine learning models) that, when executed by the processor 104, enable image processing 120, segmentation 122, transformation 124, and/or registration 128.
  • Such content, if provided as an instruction, may, in some implementations, be organized into one or more applications, modules, packages, layers, or engines.
  • the processor 104 and memory 106 may also cooperate to autonomously or semi-autonomously move the imaging device(s) 112.
  • the memory 106 may include instructions for camera control 132, which may be supported by instructions that perform image comparison 134.
  • the processor 104 may be configured to send signals to a control unit 138 that cause one or more motors 140 to physically actuate or move the one or more imaging devices 112.
  • the processor 104 may be configured to autonomously or semi- autonomously instruct the control unit 138 to actuate a motor 140, which causes the imaging device(s) 112 to perform a physical motion (e.g., rotate and/or translate).
  • the imaging device(s) 112 may be physically moved in an attempt to improve performance of the robot 114 and navigation system 118 as disclosed herein.
  • the memory 106 may store other types of content or data (e.g., machine learning models, artificial neural networks, deep neural networks, etc.) that can be processed by the processor 104 to carry out the various method and features described herein.
  • content or data e.g., machine learning models, artificial neural networks, deep neural networks, etc.
  • the data, algorithms, and/or instructions may cause the processor 104 to manipulate data stored in the memory 106 and/or received from or via the imaging device(s) 112, the robot 114, the navigation system 118, the database 142, and/or the cloud network 144.
  • the processor may also receive feedback from the control unit 138 indicating a motion of motor(s) 140 and/or imaging devices 112 as perceived by the control unit 138.
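  • A hypothetical sketch of how a camera movement signal might be sent to a control unit and feedback returned to the processor is shown below; the `CameraMovementSignal` fields, the clamping behavior, and the `MotorizedControlUnit` class are illustrative assumptions, not the disclosed control unit 138.

```python
# Hypothetical command/feedback interface between the processor and a motorized
# control unit; the signal fields, clamping behavior, and class names are
# illustrative assumptions, not the disclosed control unit 138.
from dataclasses import dataclass


@dataclass
class CameraMovementSignal:
    pan_deg: float = 0.0
    tilt_deg: float = 0.0
    roll_deg: float = 0.0


class MotorizedControlUnit:
    def __init__(self, max_step_deg: float = 15.0):
        self.max_step_deg = max_step_deg
        self.pose_deg = {"pan": 0.0, "tilt": 0.0, "roll": 0.0}

    def apply(self, signal: CameraMovementSignal) -> dict:
        """Clamp each axis to a safe per-command step, actuate, and report feedback."""
        for axis, delta in (("pan", signal.pan_deg),
                            ("tilt", signal.tilt_deg),
                            ("roll", signal.roll_deg)):
            step = max(-self.max_step_deg, min(self.max_step_deg, delta))
            self.pose_deg[axis] += step
        return dict(self.pose_deg)  # feedback returned to the processor


unit = MotorizedControlUnit()
feedback = unit.apply(CameraMovementSignal(pan_deg=30.0, tilt_deg=-5.0))
# feedback -> {'pan': 15.0, 'tilt': -5.0, 'roll': 0.0}; the pan was clamped to one step
```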
  • the computing device 102 may also include one or more user interfaces 110.
  • the user interface 110 may be or include user input and/or user output devices.
  • Non-limiting examples of a user input include a keyboard, mouse, trackball, etc.
  • Non-limiting examples of a user output include a monitor, television, screen, etc.
  • a combination user input/user output device may include a touchscreen, and/or any other device for receiving information from a user and/or for providing information to a user.
  • the user interface 110 may be used, for example, to receive a user selection or other user input regarding any step of any method described herein.
  • the computing device 102 may utilize a user interface 110 that is housed separately from one or more remaining components of the computing device 102.
  • the user interface 110 may be located proximate one or more other components of the computing device 102, while in other implementations, the user interface 110 may be located remotely from one or more other components of the computing device 102.
  • the imaging device(s) 112 may be operable to capture images (moving or still) of an environment.
  • the imaging device(s) 112 may be configured to capture images of a surgical space and objects within the surgical space.
  • the imaging device(s) 112 may image anatomical feature(s) (e.g., a bone, veins, tissue, etc.) and/or other aspects of patient anatomy to yield image data (e.g., image data depicting or corresponding to a bone, veins, tissue, etc.).
  • An “image” or “image data” as used herein refers to the data generated or captured by an imaging device 112, including in a machine-readable form, a graphical/visual form, and in any other form.
  • the image or image data may include data corresponding to an anatomical feature of a patient, an object in a surgical space, surgical navigation trackers, or to a portion thereof.
  • the image or image data may be or include a preoperative image, an intraoperative image, a postoperative image, or an image taken independently of any surgical procedure.
  • a first imaging device 112 may be used to obtain first image data (e.g., a first image) at a first time
  • a second imaging device 112 may be used to obtain second image data (e.g., a second image) at a second time after the first time.
  • a single imaging device 112 may be used to capture multiple images before, during, and/or after a surgical procedure.
  • the imaging device(s) 112 may be capable of taking a 2D image or a 3D image to yield the image data.
  • the imaging device(s) 112 may be or include, for example, an ultrasound scanner (which may include, for example, a physically separate transducer and receiver, or a single ultrasound transceiver), an O-arm, a C-arm, a G-arm, or any other device utilizing X-ray-based imaging (e.g., a fluoroscope, a CT scanner, or other X-ray machine), a magnetic resonance imaging (MRI) scanner, an optical coherence tomography (OCT) scanner, an endoscope, a microscope, an optical camera, a stereo imaging device, a thermographic camera (e.g., an infrared camera), a radar system (which may include, for example, a transmitter, a receiver, a processor, and one or more antennae), or any other imaging device 112 suitable for obtaining images of an anatomical feature of a patient.
  • the imaging device(s) 112 may include more than one imaging device 112.
  • a first imaging device may provide first image data and/or a first image
  • a second imaging device may provide second image data and/or a second image.
  • the same imaging device may be used to provide both the first image data and the second image data, and/or any other image data described herein.
  • the imaging device(s) 112 may be operable to generate a stream of image data.
  • the imaging device 112 may be configured to operate with an open shutter, or with a shutter that continuously alternates between open and shut so as to capture successive images.
  • Motion of the imaging device(s) 112 may be controlled by the control unit 138 and may be imparted by one or more motor(s) 140.
  • the control unit 138 may cooperate with camera control 132 instructions to actuate the one or more motor(s) 140, which cause the imaging device(s) 112 to physically move.
  • the imaging device(s) 112 may be physically rotated, tilted, and/or rolled with one or more motor(s) 140 responsive to instructions of the control unit 138.
  • Rotational motion of the imaging device(s) 112 may be achieved using any type of known motor 140, examples of which include servo motors, stepper motors, etc.
  • the imaging device(s) 112 may also be physically translated (e.g., along one or more axes) in addition to being rotated.
  • the same motor(s) 140 used to impart rotational motion may be used to impart translational motion, although it may be possible to use different motor(s) 140 to achieve different types of physical motion.
  • the robot 114 may be any surgical robot or surgical robotic system.
  • the robot 114 may be or include, for example, the Mazor X™ Stealth Edition robotic guidance system.
  • the robot 114 may be configured to position the imaging device(s) 112 at one or more precise position(s) and orientation(s), and/or to return the imaging device(s) 112 to the same position(s) and orientation(s) at a later point in time.
  • the robot 114 may additionally or alternatively be configured to manipulate a surgical tool (whether based on guidance from the navigation system 118 or not) to accomplish or to assist with a surgical task.
  • the robot 114 may be configured to hold and/or manipulate an anatomical element during or in connection with a surgical procedure.
  • the robot 114 may include one or more robotic arms 116.
  • the robotic arm 116 may include a first robotic arm and a second robotic arm, though the robot 114 may include more than two robotic arms.
  • one or more of the robotic arms 116 may be used to hold and/or maneuver the imaging device 112.
  • the imaging device 112 includes two or more physically separate components (e.g., a transmitter and receiver)
  • one robotic arm 116 may hold one such component
  • another robotic arm 116 may hold another such component.
  • Each robotic arm 116 may be positionable independently of the other robotic arm.
  • the robotic arms 116 may be controlled in a single, shared coordinate space, or in separate coordinate spaces.
  • the robot 114 together with the robotic arm 116, may have, for example, one, two, three, four, five, six, seven, or more degrees of freedom. Further, the robotic arm 116 may be positioned or positionable in any pose, plane, and/or focal point. The pose includes a position and an orientation. As a result, an imaging device 112, surgical tool, or other object held by the robot 114 (or, more specifically, by the robotic arm 116) may be precisely positionable in one or more needed and specific positions and orientations.
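  • Because a pose combines a position and an orientation, it is often represented as a 4x4 homogeneous transform; the generic numpy helper below is a sketch of that representation and is not taken from the disclosure.

```python
# Generic sketch of a pose (position plus orientation) as a 4x4 homogeneous
# transform; not code from the disclosure.
import numpy as np


def make_pose(rotation: np.ndarray, position: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector position."""
    pose = np.eye(4)
    pose[:3, :3] = rotation
    pose[:3, 3] = position
    return pose


def transform_point(pose: np.ndarray, point: np.ndarray) -> np.ndarray:
    """Map a 3D point from the pose's local frame into the parent frame."""
    return pose[:3, :3] @ point + pose[:3, 3]


# Example: a tool tip 10 cm along the local z-axis of an end-effector pose.
end_effector = make_pose(np.eye(3), np.array([0.25, 0.10, 0.40]))
tip_in_parent = transform_point(end_effector, np.array([0.0, 0.0, 0.10]))
```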
  • the robotic arm(s) 116 may include one or more sensors that enable the processor 104 (or a processor of the robot 114) to determine a precise pose in space of the robotic arm (as well as any object or element held by or secured to the robotic arm).
  • Reference markers (e.g., surgical navigation markers or tracking devices 138) may be tracked by the navigation system 118, and the results of the tracking may be used by the robot 114 and/or by an operator of the system 100 or any component thereof.
  • the navigation system 118 can be used to track other components (e.g., imaging device 112, surgical tools, instruments, etc.) of the system and the system can operate without the use of the robot 114 (e.g., with the surgeon manually manipulating the imaging device 112 and/or one or more surgical tools, based on information and/or instructions generated by the navigation system 118, for example).
  • the imaging device(s) 112 may be configured to move or change position such that the tracking device(s) 138 are best positioned within an image to facilitate the accurate tracking thereof.
  • the navigation system 118 may provide navigation for a surgeon and/or a surgical robot during an operation.
  • the navigation system 118 may be any now-known or future-developed navigation system, including, for example, the Medtronic StealthStation™ S8 surgical navigation system or any successor thereof.
  • the navigation system 118 may receive inputs from the imaging device(s) and other sensor(s) for tracking one or more reference markers, navigated trackers (e.g., tracking devices 138, etc.) or other objects within the operating room or other room in which some or all of the system 100 is located.
  • the imaging device(s) 112 supporting operation of the navigation system 118 may include optical cameras, stereo cameras, infrared cameras, or other cameras.
  • the navigation system 118 may include one or more of an optical tracking system, an acoustic tracking system, an electromagnetic tracking system, a radar tracking system, an inertial measurement unit (IMU) based tracking system, and a computer vision based tracking system.
  • the navigation system 118 may include a corresponding transmission device 136 capable of transmitting signals associated with the tracking type.
  • the navigation system 118 may be capable of computer vision based tracking of objects and tracking device(s) 138 present in images captured by the imaging device(s) 112.
  • the navigation system 118 may be used to track a position and orientation (e.g., a pose) of the imaging device 112, the robot 114 and/or robotic arm 116, and/or one or more surgical tools (e.g., instrument, etc.) (or, more particularly, to track a pose of a tracking device 138 attached, directly or indirectly, in fixed relation to the one or more of the foregoing).
  • An array of tracking devices 138 (e.g., two or more tracking devices 138 secured in a known orientation) may be connected to one or more robotic arms 116.
  • the navigation system 118 may include a display for displaying one or more images from an external source (e.g., the computing device 102, imaging device 112, or other source) or for displaying an image and/or video stream from the one or more cameras or other sensors of the navigation system 118.
  • the system 100 can operate without the use of the navigation system 118.
  • the navigation system 118 may be configured to provide guidance to a surgeon or other user of the system 100 or a component thereof, to the robot 114, or to any other element of the system 100 regarding, for example, a pose of one or more anatomical elements, whether or not a surgical instrument is in the proper trajectory, and/or how to move a surgical instrument into the proper trajectory to carry out a surgical task according to a preoperative or other surgical plan.
  • the neural network and machine learning model(s) 130 may support AI/machine learning computer vision algorithms and object detection in association with automatically detecting, identifying, and tracking target objects (e.g., instruments, tools, tracking devices 138, etc.) in one or more images 204 or a multimedia file 208, as illustrated in Fig. 2.
  • the database 142 may store information that correlates one coordinate system to another (e.g., one or more robotic coordinate systems, an ultrasound space coordinate system, a patient coordinate system, and/or a navigation coordinate system, etc.).
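  • Correlating one coordinate system to another typically reduces to storing and composing rigid transforms; the small lookup class below sketches that bookkeeping, with frame names and storage scheme chosen only for illustration.

```python
# Illustrative bookkeeping for correlating coordinate systems by storing and
# composing rigid transforms; frame names and storage scheme are assumptions.
import numpy as np


class TransformStore:
    """Stores 4x4 rigid transforms T[child <- parent] keyed by (parent, child)."""

    def __init__(self):
        self._transforms = {}

    def set(self, parent: str, child: str, T: np.ndarray) -> None:
        self._transforms[(parent, child)] = T
        self._transforms[(child, parent)] = np.linalg.inv(T)  # inverse mapping for free

    def get(self, parent: str, child: str) -> np.ndarray:
        return self._transforms[(parent, child)]

    def compose(self, frames: list) -> np.ndarray:
        """Chain transforms along a list of frames, e.g. navigation -> robot -> patient."""
        T = np.eye(4)
        for parent, child in zip(frames[:-1], frames[1:]):
            T = self.get(parent, child) @ T
        return T


store = TransformStore()
store.set("navigation", "robot", np.eye(4))   # registration results would go here
store.set("robot", "patient", np.eye(4))
T_patient_from_navigation = store.compose(["navigation", "robot", "patient"])
```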
  • the database 142 may additionally or alternatively store, for example, one or more surgical plans (including, for example, pose information about a target and/or image information about a patient's anatomy at and/or proximate the surgical site, for use by the robot 114, the ultrasound space coordinate system, the navigation system 118, and/or a user of the computing device 102 or of the system 100); one or more images 204 useful in connection with a surgery to be completed by or with the assistance of one or more other components of the system 100; and/or any other useful information.
  • the database 142 may be configured to provide any such information to the computing device 102 or to any other device of the system 100 or external to the system 100, whether directly or via the cloud network 144.
  • the database 142 may include information associated with images obtained at different configurations of an imaging device 112 and whether such images correlate with accurate tracking of components in the surgical space or not.
  • the computing device 102 may communicate with a server(s) and/or a database (e.g., database 142) directly or indirectly over a communications network (e.g., the cloud network 144).
  • the communications network may include any type of known communication medium or collection of communication media and may use any type of protocols to transport data between endpoints.
  • the communications network may include wired communications technologies, wireless communications technologies, or any combination thereof.
  • the computing device 102 may be connected to the cloud network 144 via the communication interface 108, using a wired connection, a wireless connection, or both. In some implementations, the computing device 102 may communicate with the database 142 and/or an external device (e.g., a computing device) via the cloud network 144.
  • the system 100 or similar systems may be used, for example, to carry out one or more aspects of any of the process flow 700 described herein. The system 100 or similar systems may also be used for other purposes.
  • FIG. 2 illustrates an example of a surgical environment 200 in which components of the system 100 can be deployed.
  • the surgical environment 200 is shown to include a surgical space 212 in which the robot 114 may support a surgical procedure of a patient 220.
  • the robot 114 may be operated with the assistance of the navigation system 118 and computing device 102.
  • image(s) of the surgical space 212 may be captured by the imaging device(s) 112.
  • surgical navigation trackers 228 may be positioned at various locations in the surgical space 212.
  • the surgical navigation trackers 228 may be similar or identical to tracking devices 138.
  • the surgical navigation trackers 228 may correspond to objects having known geometric properties (e.g., size, shape, etc.) that are visible by the imaging device(s) 112.
  • Multiple surgical navigation trackers 228 may be attached to a common instrument (e.g., a tracking array) in a known pattern or relative configuration.
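  • As an illustration of trackers with known geometric properties arranged in a known relative configuration, the sketch below defines a hypothetical four-marker array and checks observed marker positions against its known inter-marker distances; the geometry, tolerance, and distance-based matching approach are assumptions.

```python
# Illustrative sketch of a tracking array with known geometric properties; the
# marker coordinates, tolerance, and distance-based matching are assumptions.
import numpy as np
from itertools import combinations

# Known relative marker positions for a hypothetical four-marker array (millimetres).
ARRAY_GEOMETRY = np.array([
    [0.0,  0.0,  0.0],
    [50.0, 0.0,  0.0],
    [0.0,  70.0, 0.0],
    [50.0, 70.0, 10.0],
])


def pairwise_distances(points: np.ndarray) -> np.ndarray:
    """Sorted inter-marker distances; invariant to rigid motion of the array."""
    return np.sort([np.linalg.norm(a - b) for a, b in combinations(points, 2)])


def matches_known_array(observed, tolerance_mm: float = 1.0) -> bool:
    """Check whether observed marker positions fit the known array geometry."""
    observed = np.asarray(observed, dtype=float)
    if observed.shape != ARRAY_GEOMETRY.shape:
        return False
    expected = pairwise_distances(ARRAY_GEOMETRY)
    return bool(np.all(np.abs(pairwise_distances(observed) - expected) <= tolerance_mm))
```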
  • If a surgical navigation tracker 228 moves at some point during the surgical procedure, or an object obstructs the imaging device(s) 112 view of a surgical navigation tracker 228, then it may be desirable to move the affected imaging device 112 to re-capture the surgical navigation trackers 228 and/or avoid the obstruction.
  • FIG. 3A illustrates a first image 304 of a surgical area 212 in which a first set of surgical navigation trackers 312 are contained within the first image 304.
  • the imaging device 112 may have been in a first orientation that placed the first set of surgical navigation trackers 312 within the field of view 232.
  • the first set of surgical navigation trackers 312 may be part of a tracking array that is connected to a surgical instrument, a part of the robot 114, a robotic arm 116, another imaging device 112, the table 216, an anatomical element 224a-N, or the like.
  • the first set of surgical navigation trackers 312 may include different surgical navigation trackers 228 associated with different objects in the surgical space 212.
  • Figs. 4A and 4B illustrate a second example of first and second images 404, 408 captured with an imaging device 112 before and after physical motion.
  • the first image 404 and second image 408 are both shown to include the same number of surgical navigation trackers 228; however, the first image 404 captures a first and second set of surgical navigation trackers 412, 416 in a first orientation (e.g., from a first perspective) whereas the second image 408 captures the first and second set of surgical navigation trackers 412, 416 in a second orientation (e.g., from a second perspective).
  • Moving the imaging device 112 to change the view of the first and second set of surgical navigation trackers 512, 516 helps to capture all surgical navigation trackers 228 belonging to each set of surgical navigation trackers 512, 516, which may help facilitate more accurate tracking of the surgical navigation trackers 228 and the objects associated therewith.
  • Fig. 7 illustrates an example of a process flow 700 in accordance with aspects of the present disclosure.
  • process flow 700 may implement aspects of a computing device 102, an imaging device 112, a robot 114, a navigation system 118, and a control unit 138 described with reference to Figs. 1 through 6.
  • process flow 700 may be performed by any device (e.g., a computing device 102, an imaging device 112, a robot 114, navigation system 118, control unit 138, etc.) of the system 100 described herein.
  • While the process flow 700 will be described in connection with capturing two images, it should be appreciated that embodiments of the present disclosure are not so limited.
  • a process flow 700 may be used to optimize gathering a sequence of images (e.g., two, three, four, or more images). Additionally, the process flow 700 may be triggered automatically by the system and may be iteratively repeated until an optimized solution is reached.
  • the process flow 700 starts when a first image or first image information (e.g., an image 204 or multimedia file 208) is received from a camera or imaging device 112 (step 704).
  • the first image or image file may include information that provides a first view of a surgical space and one or more surgical navigation trackers positioned within the surgical space.
  • the process flow 700 continues by determining that a position of the camera is suboptimal with respect to the surgical space (step 708). This determination may be made based on feedback from a user of the system or based on image quality not meeting a particular threshold.
  • the determination of step 708 may alternatively or additionally be made in response to determining that not all of the surgical navigation trackers are present in the first image or that two or more of the surgical navigation trackers are not separated by at least a predetermined distance in the first image.
  • the determination of step 708 may alternatively or additionally be made in response to determining that the tracking of objects by the navigation system 118 is not as accurate as desired or fails to meet a particular tracking threshold.
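  • A minimal sketch of the presence and separation checks described above (step 708) might look like the following; the pixel-distance threshold and the (x, y) centroid data layout are assumptions for illustration.

```python
# Minimal sketch of two of the "suboptimal view" checks described above; the
# pixel-distance threshold and (x, y) centroid data layout are assumptions.
def view_is_suboptimal(detected_trackers: list, expected_count: int,
                       min_pixel_separation: float = 30.0) -> bool:
    """Return True if not all trackers are present, or if any two trackers are not
    separated by at least the predetermined distance in the image."""
    if len(detected_trackers) < expected_count:
        return True  # not all surgical navigation trackers are present in the image
    for i in range(len(detected_trackers)):
        for j in range(i + 1, len(detected_trackers)):
            dx = detected_trackers[i][0] - detected_trackers[j][0]
            dy = detected_trackers[i][1] - detected_trackers[j][1]
            if (dx * dx + dy * dy) ** 0.5 < min_pixel_separation:
                return True  # two trackers are too close together in the image
    return False
```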
  • the process flow 700 may then include receiving a second image or second image information from the camera (step 720).
  • the second image may be received after the camera has been moved and is in the second position.
  • the second image may provide a second view of the surgical space and the one or more surgical navigation trackers.
  • the process flow 700 may then include utilizing the image comparison 134 to compare the first image with the second image (step 724).
  • the positions of surgical navigation trackers may be compared between images, spacing between surgical navigation trackers may be compared between images, a number of surgical navigation trackers may be compared between images, etc.
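  • One possible (assumed) way the image comparison 134 could score the second view against the first, using tracker count, spacing, and centering, is sketched below.

```python
# Illustrative sketch of how the image comparison 134 could score the second
# view against the first; the metrics and image size are assumptions.
def compare_views(first_trackers: list, second_trackers: list,
                  image_size=(1920, 1080)) -> dict:
    """Each argument is a list of (x, y) pixel centroids of detected trackers."""

    def min_spacing(pts):
        if len(pts) < 2:
            return 0.0
        return min(((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
                   for i, a in enumerate(pts) for b in pts[i + 1:])

    def mean_offcenter(pts):
        cx, cy = image_size[0] / 2, image_size[1] / 2
        return sum(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in pts) / max(len(pts), 1)

    return {
        "tracker_count_delta": len(second_trackers) - len(first_trackers),
        "min_spacing_delta": min_spacing(second_trackers) - min_spacing(first_trackers),
        "centering_delta": mean_offcenter(first_trackers) - mean_offcenter(second_trackers),
    }
# Positive deltas indicate the second view improved on that criterion relative to the first.
```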
  • the present disclosure encompasses methods with fewer than all of the steps identified herein (and the corresponding description of respective process flows), as well as methods that include additional steps beyond those identified in the figures and process flows described herein.
  • the present disclosure also encompasses methods that include one or more steps from one method described herein, and one or more steps from another method described herein. Any correlation described herein may be or include a registration or any other correlation.
  • Example aspects of the present disclosure include:
  • a system comprising: a camera; a motorized control unit coupled with the camera and configured to physically adjust a position of the camera; a processor; and a memory storing data that, when executed by the processor, causes the processor to: receive first image information from the camera at a first point in time, wherein the first image information provides a first view of a surgical space and one or more surgical navigation trackers positioned within the surgical space; determine, based on the first image information, that a position of the camera at the first point in time is suboptimal with respect to the surgical space; determine, based on the first image information, a physical motion for the camera that improves the position of the camera with respect to the surgical space; generate a camera movement signal configured to move the camera according to the physical motion; transmit the camera movement signal to the motorized control unit; receive second image information from the camera at a second point in time, after the camera movement signal has been transmitted, wherein the second image information provides a second view of the surgical space and the one or more surgical navigation trackers positioned within the surgical space; and confirm, based on an analysis of the second image information, that the second view of the surgical space is improved relative to the first view.
  • the physical motion causes the one or more surgical navigation trackers to move closer to a center of a field of view of the camera.
  • the physical motion causes a change in perspective of the one or more surgical navigation trackers that improves a tracking accuracy associated with the one or more surgical navigation trackers.
  • the physical motion causes an image plane to move closer to parallel with a plane bisecting the one or more surgical navigation trackers.
  • the one or more surgical navigation trackers comprises a first set of surgical navigation trackers associated with a first surgical instrument and a second set of surgical navigation trackers associated with a second surgical instrument.
  • At least one of the first surgical instrument and the second surgical instrument comprises a robotically-guided surgical instrument.
  • the camera movement signal causes the camera to at least one of rotate, tilt, and roll.
  • the second view of the surgical space captures at least one additional surgical navigation tracker not captured by the first view.
  • the physical motion causes the surgical navigation trackers to move closer to a center of a field of view of the camera.
  • the surgical navigation trackers comprises a first set of surgical navigation trackers associated with a first surgical instrument and a second set of surgical navigation trackers associated with a second surgical instrument.
  • each of the expressions “at least one of A, B and C,” “at least one of A, B, or C,” “one or more of A, B, and C,” “one or more of A, B, or C,” “A, B, and/or C,” and “A, B, or C” means A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together.
  • a computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof.
  • a computer-readable signal medium may be any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including, but not limited to, wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.

Landscapes

  • Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Robotics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present disclosure relates to a system and a method. An illustrative method includes receiving first image information from a camera at a first point in time, the first image information providing a first view of a surgical space and surgical navigation trackers positioned within the surgical space. The method may further include determining, based on the first image information, that a position of the camera at the first point in time is suboptimal with respect to the surgical space, determining, based on the first image information, a physical motion for the camera that improves the position of the camera with respect to the surgical space, and generating a camera movement signal configured to move the camera according to the physical motion. The method may further include transmitting the camera movement signal to a motorized control unit configured to physically move the camera.
PCT/IL2024/050992 2023-10-12 2024-10-10 Following navigation camera Pending WO2025079075A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363543867P 2023-10-12 2023-10-12
US63/543,867 2023-10-12

Publications (1)

Publication Number Publication Date
WO2025079075A1 true WO2025079075A1 (fr) 2025-04-17

Family

ID=93432249

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2024/050992 Pending WO2025079075A1 (fr) 2023-10-12 2024-10-10 Caméra de navigation suivante

Country Status (1)

Country Link
WO (1) WO2025079075A1 (fr)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150332465A1 (en) * 2012-01-12 2015-11-19 Brain-Lab Ag Method and system for medical tracking using a plurality of camera positions

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150332465A1 (en) * 2012-01-12 2015-11-19 Brain-Lab Ag Method and system for medical tracking using a plurality of camera positions

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
BRAINLAB: "Automatic Registration", 3 May 2021 (2021-05-03), XP093241427, Retrieved from the Internet <URL:https://data2bids.greydongilmore.com/static/brainlab_automatic_registration_v2.5.pdf> *
KARL STORZ: "Navigation Camera Assistant - Gebrauchsanweisung", 1 January 2014 (2014-01-01), XP093242205, Retrieved from the Internet <URL:https://www.bioclinicalservices.com.au/karl-storz-endoskope/clinical/model-408120-01-navigation-camera-assistant-instruction-manual-v1-0-0-jan-2012> *
KARL STORZ: "Navigation Camera Assistant NCA - Die intelligente Positionierungseinheit für die Navigationskamera", 1 January 2013 (2013-01-01), MEDICA 2013, XP093242219, Retrieved from the Internet <URL:https://www.yumpu.com/de/document/read/26072950/presse-mitteilungen-press-releases-medica-2013-karl-storz> *
KLEIN FRANZISKA SOPHIE: "Ein System zur automatischen Registrierung in der Hals-Nasen-Ohren-Chirurgie", 30 March 2021 (2021-03-30), XP093241428, Retrieved from the Internet <URL:https://mediatum.ub.tum.de/doc/1551064/1551064.pdf> *
SCHALLER S. ET AL: "Die Auswirkungen einer robotergeführten Navigationskamera in der HNO-Chirurgie", LARYNGORHINOOTOLOGIE 2011, 1 January 2011 (2011-01-01), XP093241598, Retrieved from the Internet <URL:https://www.thieme-connect.com/products/ejournals/abstract/10.1055/s-0031-1275340#A0160-1> DOI: 10.1055/s-0031-1275340 *

Similar Documents

Publication Publication Date Title
US12465441B2 (en) Multi-arm robotic systems and methods for identifying a target
US20220395342A1 (en) Multi-arm robotic systems and methods for monitoring a target or performing a surgical procedure
US20230346492A1 (en) Robotic surgical system with floating patient mount
WO2025079075A1 (fr) Following navigation camera
US12094128B2 (en) Robot integrated segmental tracking
US20240156531A1 (en) Method for creating a surgical plan based on an ultrasound view
WO2024229651A1 (fr) Intelligent positioning of a robot arm cart
WO2022162668A1 (fr) Multi-arm robotic systems for identifying a target
US12310676B2 (en) Navigation at ultra low to high frequencies
US20240382169A1 (en) Long image multi-field of view preview
US20240398362A1 (en) Ultra-wide 2d scout images for field of view preview
WO2025141396A1 (fr) System and method for orienting the display of a probe for real-time navigation
US20240415496A1 (en) System and method to register and calibrate ultrasound probe for navigation in real time
WO2024238179A1 (fr) Long image multi-field of view preview
WO2023141800A1 (fr) Mobile X-ray positioning system
WO2024249025A1 (fr) Ultra-wide 2D scout images for field of view preview
WO2024103286A1 (fr) Plug-and-play arm for spinal robotics
US11847809B2 (en) Systems, devices, and methods for identifying and locating a region of interest
US20240358461A1 (en) Multi-arm robotic systems and methods for monitoring a target or performing a surgical procedure
WO2025122631A1 (fr) System and method for automatic detection and selection of 3D ultrasound points for registration of an ultrasound probe for navigation
WO2024257035A1 (fr) System and method for registering and calibrating an ultrasound probe for real-time navigation
WO2024236472A1 (fr) Systems and methods for skive prevention and detection
WO2024246897A1 (fr) Systems and methods for long-scan adjustment and anatomy tracking
WO2024254040A1 (fr) Anatomy localization by touch-and-move
EP4472555A1 (fr) Systems and methods for robotic collision avoidance using medical imaging

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24802333

Country of ref document: EP

Kind code of ref document: A1