WO2025141396A1 - System and method to orient display of probe for navigation in real time - Google Patents
System and method to orient display of probe for navigation in real time
- Publication number
- WO2025141396A1 (PCT/IB2024/062814)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- tracked device
- display
- virtual space
- tracked
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/25—User interfaces for surgical systems
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0077—Devices for viewing the surface of the body, e.g. camera, magnifying lens
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/74—Details of notification to user or communication with user or patient; User input means
- A61B5/742—Details of notification to user or communication with user or patient; User input means using visual displays
- A61B5/7425—Displaying combinations of multiple images regardless of image source, e.g. displaying a reference anatomical image with a live image
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2051—Electromagnetic tracking systems
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2055—Optical tracking systems
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2063—Acoustic tracking systems, e.g. using ultrasound
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B2090/364—Correlation of different images or relation of image positions in respect to the body
- A61B2090/365—Correlation of different images or relation of image positions in respect to the body augmented reality, i.e. correlating a live optical image with another image
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/42—Details of probe positioning or probe attachment to the patient
- A61B8/4245—Details of probe positioning or probe attachment to the patient involving determining the position of the probe, e.g. with respect to an external reference frame or to the patient
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/52—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/5215—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
- A61B8/5238—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for combining image data of patient, e.g. merging several images from different acquisition modes into one image
- A61B8/5246—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for combining image data of patient, e.g. merging several images from different acquisition modes into one image combining images from the same or different imaging techniques, e.g. color Doppler and B-mode
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/52—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/5215—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
- A61B8/5238—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for combining image data of patient, e.g. merging several images from different acquisition modes into one image
- A61B8/5261—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for combining image data of patient, e.g. merging several images from different acquisition modes into one image combining images from different diagnostic modalities, e.g. ultrasound and X-ray
Definitions
- Navigation systems may be used for tracking objects (e.g., ultrasound probes, ablation probes, instruments, imaging devices, etc.) associated with carrying out the surgical procedure.
- a method of orienting a display of a virtual space including: receiving image data; generating a navigation space based on one or more tracking signals; generating the virtual space by overlaying the image data on the navigation space; identifying a position of a tracked device in relation to the virtual space; determining a focal plane of a user based on the position of the tracked device; and orienting the display of the virtual space based on the determined focal plane of the user.
- determining the focal plane of the user comprises receiving a user input and recording the position of the tracked device upon receiving the user input.
- the tracked device is one of an ultrasound probe and an ablation probe.
- identifying the position of the tracked device comprises guiding a user through a registration process and detecting the tracked device in relation to one or more fiducials.
- FIG. 1 illustrates an example of a system in accordance with aspects of the present disclosure.
- Fig. 2 illustrates an example implementation of a system in accordance with aspects of the present disclosure.
- Fig. 3 illustrates an example implementation of a system in accordance with aspects of the present disclosure.
- Fig. 5A illustrates a user in control of a tracked device in accordance with aspects of the present disclosure.
- Fig. 5B illustrates a display device in accordance with aspects of the present disclosure.
- Fig. 6A illustrates a user interacting with a computing system in accordance with aspects of the present disclosure.
- Fig. 7A illustrates a user in control of a tracked device in accordance with aspects of the present disclosure.
- Fig. 7B illustrates a display device in accordance with aspects of the present disclosure.
- FIG. 8 illustrates an example of a process flow in accordance with aspects of the present disclosure.
- the described methods, processes, and techniques may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. Alternatively, or additionally, functions may be implemented using machine learning models, neural networks, artificial neural networks, or combinations thereof (alone or in combination with instructions).
- Computer-readable media may include non-transitory computer-readable media, which corresponds to a tangible medium such as data storage media (e.g., RAM, ROM, EEPROM, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer).
- a system may enable a method or process for providing an intuitive viewpoint for a user, such as a medical professional, of a tracked device such as an ultrasound or ablation probe.
- a user may view the image data received from the tracked device on a display.
- the image data from the tracked device may be displayed in relation to existing image data, such as image data from an MRI or CT scan.
- a virtual space comprising a navigable three-dimensional view may be generated by a system as described herein.
- the three-dimensional view may be displayed on a display device, for example in an operating room or examination room.
- a user, such as a medical professional, may activate a tracked device, such as an ultrasound probe.
- the tracked device may have a viewpoint or field of view from which image data may be collected by the tracked device.
- image data captured by the tracked device may be displayed on the display overlaid on the navigable three-dimensional view.
- the user may use the navigable three-dimensional view to guide their use of the tracked device.
- the navigable three-dimensional view may include visual indicators, such as markers, which may have been added by the user or another person, or automatically from a computing system. Such indicators may represent points of space which the user may seek to examine with the tracked device, such as during a procedure or operation.
- the user may use the displayed navigable three-dimensional view to guide their use of the tracked device.
- the navigable three-dimensional view displayed on the display may be captured by a virtual camera as described herein.
- the virtual camera may be a point in space of the navigable three-dimensional view from which the viewpoint of the display is captured.
- a problem with conventional systems arises when the viewpoint of the display is from a point in space that is different from the real-life viewpoint of the user. For example, in the examination of a patient, the user may be standing on the patient’s left side and looking down at the patient’s chest.
- the viewpoint of the display may be captured from a point in space in a navigable three-dimensional view of a rendering of the patient.
- the point in space may originate at the patient’s right side and look up at the patient’s back, for example.
- Such a viewpoint may be a hindrance to the user as the user attempts to examine the patient.
- a visual representation of the tracked device in the display may move to the left.
- a visual representation of the tracked device in the display may move down.
- the user must remember to move the tracked device in a mirrored fashion relative to what is displayed on the display device.
- What is needed is a way to reorient the display by moving the virtual camera within the navigable three-dimensional view to a point that is similar to the viewpoint of the user.
- a system may enable such reorientation in a quick and user-friendly manner.
- Fig. 1 illustrates an example of a system 100 that supports aspects of the present disclosure.
- the system 100 includes a computing system 102, one or more imaging devices 112, a navigation system 118, a tracked device 114, a computing device 120, a database 130, and/or a cloud network 134 (or other network).
- Systems according to other implementations of the present disclosure may include more or fewer components than the system 100.
- the system 100 may omit and/or include additional instances of one or more components of the computing system 102, the imaging device(s) 112, navigation system 118, the tracked device 114, the computing device 120, the database 130, and/or the cloud network 134.
- system 100 may omit any instance of the computing system 102, the imaging device(s) 112, navigation system 118, the tracked device 114, the computing device 120, the database 130, and/or the cloud network 134.
- the system 100 may support the implementation of one or more other aspects of one or more of the methods disclosed herein.
- the computing system 102 includes a processor 104, a memory 106, a communication interface 108, and a user interface 110.
- Computing devices may include more or fewer components than the computing system 102.
- the computing system 102 may be, for example, a control device including electronic circuitry associated with controlling any components of the system 100.
- the imaging device 112 may include more than one imaging device 112.
- a first imaging device may provide first image data and/or a first image
- a second imaging device may provide second image data and/or a second image.
- the same imaging device may be used to provide both the first image data and the second image data, and/or any other image data described herein.
- the imaging device 112 may be operable to generate a stream of image data.
- the imaging device 112 may be configured to operate with an open shutter, or with a shutter that continuously alternates between open and shut so as to capture successive images.
- image data may be considered to be continuous and/or provided as an image data stream if the image data represents two or more frames per second.
- the computing system 102 may communicate with a server(s) and/or a database (e.g., database 130) directly or indirectly over a communications network (e.g., the cloud network 134).
- the communications network may include any type of known communication medium or collection of communication media and may use any type of protocols to transport data between endpoints.
- the communications network may include wired communications technologies, wireless communications technologies, or any combination thereof.
- the Internet is an example of the communications network that constitutes an Internet Protocol (IP) network consisting of multiple computers, computing networks, and other communication devices located in multiple locations, and components in the communications network (e.g., computers, computing networks, communication devices) may be connected through one or more telephone systems and other means.
- the computing system 102 may be connected to the cloud network 134 via the communication interface 108, using a wired connection, a wireless connection, or both. In some implementations, the computing system 102 may communicate with the database 130 and/or an external device (e.g., a computing device) via the cloud network 134.
- FIG. 2 illustrates an example environment 200 in which the system 100 may be used to support aspects of the present disclosure. Aspects of the example may be implemented by the computing system 102, imaging device(s) 112, navigation system 118, and tracked device 114.
- a transmission device 136 may be installed within the environment 200 and may be in communication with a computing system 102.
- the transmission device 136 may be, for example, a field emitter held by an emitter holder.
- the transmission device 136 may be used in relation to one or more tracking devices 140 to track a location, position, and/or orientation of a tracked device 114.
- the environment 200 may optionally include an imaging device 112 such as an MRI machine, an O-arm machine, a CT scanner, etc.
- the computing system 102 may be configured to output visual data to a display 202.
- the display 202 may display a live view based on data from the tracked device 114. As the user moves the tracked device 114 in relation to the tracking devices 140 and the transmission device 136, the view of the display 202 may automatically update with the live view.
- In addition to a live view, image data may be acquired with one or more imaging systems prior to and/or during a surgical procedure and displayed, in whole or in part, for viewing by a user via, for example, a display 202 such as a monitor of a computing system 102 or computing device 120.
- the additional information or data may be viewed by the user with a live view, as discussed herein.
- a navigation or tracking domain or volume generally defines a navigation space or patient space in which tracked objects, such as the tracked device 114, may be moved and tracked, for example with the tracking devices 140.
- the navigation volume or patient space may be registered to an image space defined by the additional information (e.g., prior acquired image or selected image) of the subject, allowing for illustrating and displaying determined positions of various objects relative to one another and/or the image space.
- the orientation of the display 202 may not be intuitive and may be disorienting to the user. As described herein, the user may be enabled to orient the point-of-view (POV) displayed in the display 202 to match the user's own perspective through the methods and systems described herein.
- a user 406 may use a tracked device 114 during a procedure.
- a display 202, illustrated in Fig. 4B, may assist the user 406 in positioning the tracked device 114.
- the display 202 may update in real-time based on the movements.
- a POV 400 of the display 202 may be referred to as a virtual space POV 400 and is represented by the line extending from the camera symbol 408. To best serve the user 406, the POV 400 shown in the display should match the POV of the user. An uncalibrated system may not be properly oriented and may be disorienting to the user 406.
- a POV 404 of the user 406 may be referred to as a focal plane of the user 406 and is represented by the line extending from the eyes of the user 406.
- An orientation 402, or pose, of the tracked device 114 is represented by the line extending through the tracked device 114.
- the POV 400 of the display 202 does not match the POV 404 of the user 406.
- a visual representation of the tracked device 114 is shown in the display 202.
- the orientation 402 of the tracked device 114 is illustrated in the display 202 by a cone 414.
- the cone 414 extends in a direction that is unnatural to the user 406. What is needed is a way to calibrate the display 202 to show a view that is aligned with the POV 404 of the user 406.
- the display 202 may show image data captured by the tracked device 114 along with other image data, either live or recorded image data.
- the display may include annotation information, such as markers 410 added by a medical professional during a review of image data such as MRI images or O-arm or CT scans.
- Tracking devices 140 may be represented by markers 416 in the display.
- the user 406 has moved the tracked device 114 such that the orientation 402 of the tracked device 114 is in a predetermined position.
- the orientation 402 of the tracked device 114 is in line with the POV 404 of the user 406.
- the predetermined position may be any position for which the user 406 would prefer the POV 400 shown in the display 202 to match the POV 404 of the user 406.
- the predetermined position may be a 3D position which may be saved in memory of a computing system 102.
- a plurality of users may each be associated with one or more associated predetermined positions.
- a user 406 may hold the tracked device 114 in a predetermined position and may instruct the computing system 102 to reorient the display 202.
- predetermined positions need not be saved in memory. Instead, the user 406 may choose a predetermined position in the moment (e.g., in real-time prior to or during a surgical procedure).
- the position and/or orientation of the tracked device 114 may be determined by the navigation system 118 tracking the tracked device 114. The position and/or the orientation of the tracked device 114 can be used to determine a trajectory that is in line with the user POV 404, as described below.
- the user 406 has positioned the orientation 402 of the tracked device 114 in line with his or her POV 404.
- the POV 400 of the display 202, originating at the camera symbol 408, is not in line with the POV 404 of the user 406.
- the POV 400 of the display 202, originating at the camera symbol 408, is to be in line with the POV 404 of the user 406.
- the display 202 may be an interactive display and may be capable of receiving input, such as via touch.
- the user 406 (or another user 600) may be capable of interacting with the computing system 102 to issue a command to orient the display to the device plane.
- the command may be issued via voice, a button on the tracked device 114, a GUI button 602 displayed on the display 202 as illustrated in Fig. 6B, or by any other imaginable means.
- the POV 400 of the display 202 may be updated.
- the virtual source of the POV 400 of the display, represented by the camera symbol 408, is in the correct location such that the POV 400 of the display 202 is in line with the POV 404 of the user 406 and the orientation 402 of the tracked device 114.
- the display 202 is updated with an intuitive view such that the cone 414 representing the view of the tracked device 114 is parallel with the display 202.
- the display 202 may be updated by aligning the POV 400 of the display 202 with the POV 402 of the tracked device 114 by matching a trajectory of the display 202 with a trajectory or angle of the tracked device 114 (which is in line with a trajectory of the POV 404 of the user 406).
- the trajectory or angle of the tracked device can be determined from a plane of the ultrasound probe as tracked by the navigation system 118.
- an orientation and trajectory or angle of the surgical tool or instrument may be determined by, for example, the navigation system 118.
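By way of illustration only, the sketch below shows one way such an alignment of the display POV with the tracked device's trajectory could be computed, assuming the navigation system 118 reports the device tip position and a unit trajectory vector in virtual-space coordinates. The function names, the `standoff` parameter, and the use of NumPy are assumptions for this example, not part of the disclosure.

```python
import numpy as np

def look_at_rotation(forward, up_hint=np.array([0.0, 0.0, 1.0])):
    """Build a rotation whose third column is the viewing direction `forward`."""
    f = forward / np.linalg.norm(forward)
    r = np.cross(up_hint, f)                      # camera "right" axis
    if np.linalg.norm(r) < 1e-6:                  # trajectory parallel to the up hint
        r = np.cross(np.array([0.0, 1.0, 0.0]), f)
    r /= np.linalg.norm(r)
    u = np.cross(f, r)                            # re-orthogonalized "up" axis
    return np.column_stack((r, u, f))

def align_camera_to_device(tip_position, trajectory, standoff=0.25):
    """Place the virtual camera behind the device tip, looking along its trajectory,
    so the display POV matches the trajectory that is in line with the user's POV."""
    d = trajectory / np.linalg.norm(trajectory)
    camera_position = tip_position - standoff * d
    camera_rotation = look_at_rotation(d)
    return camera_position, camera_rotation

# Toy example: device tip at the origin, pointing along +X in virtual-space coordinates.
cam_pos, cam_rot = align_camera_to_device(np.array([0.0, 0.0, 0.0]),
                                          np.array([1.0, 0.0, 0.0]))
```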
- the user 406 may continue to use the tracked device 114 in an ordinary fashion.
- the POV 400 of the display 202 may continue to follow the orientation 402 of the tracked device 114 during the procedure. If at any point the user 406 is unhappy with the POV 400 of the display 202, the user 406 may issue another calibration command and reset the view.
- image data may be received by a computing system 102, such as the computing system 102 illustrated in Fig. 1 and described above.
- Image data may be received, for example, from one or more imaging devices 112, from a tracked device 114, from a database 130, or another source.
- Image data as described herein may comprise medical imaging data, such as image data from a CT scan, MRI, etc., and may be three-dimensional (3D) or may be a two-dimensional (2D) image that can be overlaid onto a 3D virtual environment.
- image data may include annotation details entered by a user.
- a 3D navigable image may include one or more visual markers, such as targets for which a user may seek to gather additional information during a procedure.
- the computing system 102 may generate a navigation space based on one or more tracking signals. Generating the navigation space may comprise receiving tracking information relating to a position or orientation of a tracked device 114. Such tracking information may be received by the computing system 102 and may be used to generate the navigation space.
- the computing system may generate a virtual space by overlaying the image data on the navigation space.
- the system 102 may enhance the navigation space through the integration of the image data received at 802.
- Generating the virtual space may comprise overlaying the navigation space with the image data which may include one or more of images, graphics, and informational data.
- the overlaid data may include a spectrum of elements such as directional arrows, labels, landmarks, and various other interactive and informative markers.
- the computing system 102 may identify a position of the tracked device 114 in relation to the virtual space.
- the computing system 102 may maintain an awareness of a position of the tracked device 114 within the virtual space.
- a continuous stream of positional data may be relayed to the computing system 102.
- the computing system 102 may generate an updated and synchronized representation of the tracked device 114 within the virtual space.
- the computing system 102 may be capable of identifying the position of the tracked device 114 and determining an angle or orientation of the tracked device 114. Identifying the position of the tracked device 114 may comprise guiding a user through a registration process and detecting the tracked device in relation to one or more fiducials, such as tracking devices 140. In some implementations, electromagnetic (EM) navigation may be used for real-time localization, angle detection, position detection, and/or orientation detection of the tracked device 114 within the virtual space.
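As a non-authoritative illustration of such a registration step, the sketch below computes a rigid tracker-to-image transform from paired fiducial locations using the common least-squares (Kabsch/SVD) method; the disclosure does not prescribe this particular algorithm, and the fiducial coordinates shown are invented for the example.

```python
import numpy as np

def register_point_sets(tracker_pts, image_pts):
    """Least-squares rigid transform (rotation R, translation t) such that
    image_pt ≈ R @ tracker_pt + t, from paired fiducial locations."""
    A = np.asarray(tracker_pts, dtype=float)
    B = np.asarray(image_pts, dtype=float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                                    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t

# Invented example: three fiducials seen in tracker space and in image space
# (here the image space is rotated 90 degrees about Z and translated).
tracker_fiducials = [[0, 0, 0], [100, 0, 0], [0, 100, 0]]
image_fiducials   = [[10, 20, 5], [10, 120, 5], [-90, 20, 5]]
R, t = register_point_sets(tracker_fiducials, image_fiducials)
tracked_device_in_image = R @ np.array([50.0, 0.0, 0.0]) + t     # map a tracked point
```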
- a focal plane of a user may be determined or estimated based on the position of the tracked device 114. Determining, or estimating, the focal plane of the user may comprise receiving a user input. Upon receiving the user input, the position of the tracked device may be recorded.
- the user input may be received from the tracked device 114, such as a button on the device, or from a user input device of the system 102, such as a GUI button on a display 202, via voice input received from a microphone of the system 102, or another source.
- the user input may be received from a user other than the user operating the tracked device 114.
- determining or estimating the focal plane of the user may comprise determining data relating to one or more predetermined positions.
- a user may record predetermined positions of different types of tracked devices 114 and in different positions. For example, a user may set one predetermined position of a tracked device 114 by holding the tracked device 114 in his or her left hand and set another predetermined position of the tracked device 114 by holding the tracked device 114 in his or her right hand. Information relating to each predetermined position may be stored in memory of the computing system 102.
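Purely as an illustrative sketch of how such per-user predetermined positions might be stored and checked, the snippet below keeps one entry per user and hand and compares the current device pose against a saved entry. The user names, tolerances, and data layout are assumptions for the example only.

```python
import numpy as np

# Hypothetical store: each user may save one or more predetermined device poses,
# e.g., one for the left hand and one for the right hand.
predetermined_positions = {
    "user_a": {
        "left_hand":  {"position": np.array([0.10, 0.40, 1.05]),
                       "trajectory": np.array([0.0, -1.0, 0.0])},
        "right_hand": {"position": np.array([0.55, 0.40, 1.05]),
                       "trajectory": np.array([0.0, -1.0, 0.0])},
    }
}

def matches_preset(device_pos, device_dir, preset, pos_tol=0.05, ang_tol_deg=10.0):
    """Return True when the current device pose is within tolerance of a saved preset."""
    close = np.linalg.norm(device_pos - preset["position"]) <= pos_tol
    cos_angle = np.clip(np.dot(device_dir, preset["trajectory"]), -1.0, 1.0)
    aligned = np.degrees(np.arccos(cos_angle)) <= ang_tol_deg
    return bool(close and aligned)

# Example check against the saved right-hand pose of "user_a".
hit = matches_preset(np.array([0.56, 0.41, 1.04]), np.array([0.0, -1.0, 0.0]),
                     predetermined_positions["user_a"]["right_hand"])
```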
- the user may use two tracked devices simultaneously during a procedure.
- the user may be enabled to select one of the two tracked devices to use to orient the display.
- such a user may be enabled to switch between perspectives of different tracked devices.
- a perspective may be saved in memory, or bookmarked, such that users can save specific views, orientations, perspectives, etc., so that the users can quickly switch from one view to another.
- a user may bookmark a first perspective from the user’s standpoint and another perspective from a different side of the operating table. By bookmarking both perspectives, the user may be enabled to easily and quickly switch back and forth between the two perspectives.
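The following is a minimal, hypothetical sketch of such bookmarking: named virtual-camera poses are saved and recalled so the user can jump between perspectives. The class and bookmark names are illustrative only and are not taken from the disclosure.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class PerspectiveBookmarks:
    """Save named display perspectives (virtual-camera poses) and recall them later."""
    _saved: dict = field(default_factory=dict)

    def save(self, name, camera_position, camera_rotation):
        self._saved[name] = (np.asarray(camera_position, dtype=float),
                             np.asarray(camera_rotation, dtype=float))

    def recall(self, name):
        # Returns (position, rotation) for the renderer to apply to the virtual camera.
        return self._saved[name]

# Example: bookmark the user's own standpoint and a view from across the operating table.
bookmarks = PerspectiveBookmarks()
bookmarks.save("my_side", camera_position=[0.2, -0.5, 1.2], camera_rotation=np.eye(3))
bookmarks.save("across_table", camera_position=[0.2, 0.9, 1.2], camera_rotation=np.eye(3))
position, rotation = bookmarks.recall("across_table")
```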
- the computing system 102 may orient the display of the virtual space based on the determined focal plane of the user. For example, the computing system 102 may relocate a virtual camera within the virtual space to match the determined focal plane of the user.
- the display of the virtual space may remain stationary while the user continues to use the tracked device. In some implementations, after orienting the display of the virtual space, the display of the virtual space may move to track the orientation of the tracked device while the user continues to use the tracked device.
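As a hedged sketch of these two post-orientation behaviors (a stationary display versus a display that follows the device), the class below relocates a notional virtual camera to the determined focal plane and, when a follow flag is set, keeps re-aligning it as new device poses arrive. All names here are assumptions, not the disclosed implementation.

```python
import numpy as np

class DisplayOrienter:
    """Hypothetical virtual-camera controller: orient once to the determined focal
    plane, then either stay stationary or follow the tracked device."""

    def __init__(self, follow=False, standoff=0.25):
        self.follow = follow
        self.standoff = standoff
        self.camera_position = np.zeros(3)
        self.camera_direction = np.array([0.0, 0.0, -1.0])

    def orient_to_focal_plane(self, device_position, device_trajectory):
        # Relocate the virtual camera so the display POV matches the focal plane.
        d = device_trajectory / np.linalg.norm(device_trajectory)
        self.camera_position = device_position - self.standoff * d
        self.camera_direction = d

    def on_device_pose(self, device_position, device_trajectory):
        # Called for each new tracked pose; only re-orients when follow mode is on.
        if self.follow:
            self.orient_to_focal_plane(device_position, device_trajectory)
        return self.camera_position, self.camera_direction

# Example: calibrate once, then stream a new pose with follow mode enabled.
orienter = DisplayOrienter(follow=True)
orienter.orient_to_focal_plane(np.array([0.3, 0.2, 1.0]), np.array([0.0, -1.0, 0.0]))
orienter.on_device_pose(np.array([0.32, 0.2, 1.0]), np.array([0.0, -1.0, 0.1]))
```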
- the present disclosure encompasses methods with fewer than all of the steps identified herein (and the corresponding description of respective process flows), as well as methods that include additional steps beyond those identified in the figures and process flows described herein.
- the present disclosure also encompasses methods that include one or more steps from one method described herein, and one or more steps from another method described herein. Any correlation described herein may be or include a registration or any other correlation.
- Example aspects of the present disclosure include:
- identifying the position of the tracked device comprises determining an angle of the tracked device.
- determining the focal plane of the user comprises receiving a user input and recording the position of the tracked device upon receiving the user input.
- the tracked device is one of an ultrasound probe and an ablation probe.
- aspects of the above system include wherein the user input is received from a user input device of the system.
- identifying the position of the tracked device comprises guiding a user through a registration process and detecting the tracked device in relation to one or more fiducials.
- a method of orienting a display of a virtual space comprising: receiving image data; generating a navigation space based on one or more tracking signals; generating the virtual space by overlaying the image data on the navigation space; identifying a position of a tracked device in relation to the virtual space; determining a focal plane of a user based on the position of the tracked device; and orienting the display of the virtual space based on the determined focal plane of the user.
- identifying the position of the tracked device comprises determining an angle of the tracked device.
- determining the focal plane of the user comprises receiving a user input and recording the position of the tracked device upon receiving the user input.
- aspects of the above method include wherein the user input is received from the tracked device.
- aspects of the above method include wherein the user input is received from a user input device of the system.
- identifying the position of the tracked device comprises guiding a user through a registration process and detecting the tracked device in relation to one or more fiducials.
- One or more processing units comprising processing circuitry to perform operations comprising: receiving image data; generating a navigation space based on one or more tracking signals; generating a virtual space by overlaying the image data on the navigation space; identifying a position of a tracked device in relation to the virtual space; determining a focal plane of a user based on the position of the tracked device; and orienting a display of the virtual space based on the determined focal plane of the user.
- identifying the position of the tracked device comprises determining an angle of the tracked device.
- aspects of the above processing unit(s) include wherein determining the focal plane of the user comprises receiving a user input and recording the position of the tracked device upon receiving the user input.
- aspects of the above processing unit(s) include wherein the user input is received from the tracked device.
- aspects of the above processing unit(s) include wherein orienting the display of the virtual space comprises bookmarking the orientation in memory.
- each of the expressions “at least one of A, B and C,” “at least one of A, B, or C,” “one or more of A, B, and C,” “one or more of A, B, or C,” “A, B, and/or C,” and “A, B, or C” means A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together.
- aspects of the present disclosure may take the form of an implementation that is entirely hardware, an implementation that is entirely software (including firmware, resident software, micro-code, etc.) or an implementation combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module,” or “system.” Any combination of one or more computer-readable medium(s) may be utilized.
- the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
- a computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
- a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
- a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof.
- a computer-readable signal medium may be any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
- Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including, but not limited to, wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
- Example 1 A system comprising: a processor; and a memory storing instructions that, when executed by the processor, cause the processor to: receive image data; generate a navigation space based on one or more tracking signals; generate a virtual space by overlaying the image data on the navigation space; identify a position of a tracked device in relation to the virtual space; determine a focal plane of a user based on the position of the tracked device; and orient a display of the virtual space based on the determined focal plane of the user.
- Example 2 The system of Example 1, wherein identifying the position of the tracked device comprises determining an angle of the tracked device.
- Example 3 The system of Example 1, wherein determining the focal plane of the user comprises receiving a user input and recording the position of the tracked device upon receiving the user input.
- Example 4 The system of Example 3, wherein the user input is received from the tracked device.
- Example 5 The system of Example 4, wherein the tracked device is one of an ultrasound probe and an ablation probe.
- Example 6 The system of Example 3, wherein the user input is received from a user input device of the system.
- Example 7 The system of Example 1, wherein identifying the position of the tracked device comprises guiding a user through a registration process and detecting the tracked device in relation to one or more fiducials.
- Example 8 The system of Example 1, wherein after orienting the display of the virtual space, the display of the virtual space follows the position of the tracked device.
- Example 9 A method of orienting a display of a virtual space comprising: receiving image data; generating a navigation space based on one or more tracking signals; generating the virtual space by overlaying the image data on the navigation space; identifying a position of a tracked device in relation to the virtual space; determining a focal plane of a user based on the position of the tracked device; and orienting the display of the virtual space based on the determined focal plane of the user.
- Example 10 The method of Example 9, wherein identifying the position of the tracked device comprises determining an angle of the tracked device.
- Example 11 The method of Example 9, wherein determining the focal plane of the user comprises receiving a user input and recording the position of the tracked device upon receiving the user input.
- Example 12 The method of Example 11, wherein the user input is received from the tracked device.
- Example 13 The method of Example 12, wherein the tracked device is one of an ultrasound probe and an ablation probe.
- Example 14 The method of Example 11, wherein the user input is received from a user input device.
- Example 15 The method of Example 9, wherein identifying the position of the tracked device comprises guiding a user through a registration process and detecting the tracked device in relation to one or more fiducials.
- Example 16 The method of Example 9, wherein after orienting the display of the virtual space, the display of the virtual space follows the position of the tracked device.
- Example 17 One or more processing units comprising processing circuitry to perform operations comprising: receiving image data; generating a navigation space based on one or more tracking signals; generating a virtual space by overlaying the image data on the navigation space; identifying a position of a tracked device in relation to the virtual space; determining a focal plane of a user based on the position of the tracked device; and orienting a display of the virtual space based on the determined focal plane of the user.
- Example 18 The one or more processing units of Example 17, wherein identifying the position of the tracked device comprises determining an angle of the tracked device.
- Example 19 The one or more processing units of Example 17, wherein determining the focal plane of the user comprises receiving a user input and recording the position of the tracked device upon receiving the user input.
- Example 20 The one or more processing units of Example 19, wherein the user input is received from the tracked device.
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Surgery (AREA)
- Public Health (AREA)
- Molecular Biology (AREA)
- Veterinary Medicine (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Medical Informatics (AREA)
- Animal Behavior & Ethology (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Biophysics (AREA)
- Physics & Mathematics (AREA)
- Pathology (AREA)
- Robotics (AREA)
- Radiology & Medical Imaging (AREA)
- Human Computer Interaction (AREA)
- Measuring And Recording Apparatus For Diagnosis (AREA)
Abstract
A system includes a processor and memory storing instructions that cause the processor to receive image data, generate a navigation space based on one or more tracking signals, generate a virtual space by overlaying the image data on the navigation space, and identify a position of a tracked device in relation to the virtual space. A focal plane of a user is determined based on the position of the tracked device. A display of the virtual space is oriented based on the determined focal plane of the user.
Description
SYSTEM AND METHOD TO ORIENT DISPLAY OF PROBE FOR NAVIGATION IN REAL TIME
[0001] This application claims the benefit of U.S. Provisional Patent Application Serial No. 63/614,808, filed December 26, 2023, the entire content of which is incorporated herein by reference.
FIELD OF INVENTION
[0002] The present disclosure is generally directed to navigation and imaging and relates more particularly to providing a display in line with a user’s perspective for navigation.
BACKGROUND
[0003] Imaging devices and navigation systems may assist a surgeon or other medical provider in carrying out a surgical procedure. Imaging may be used by a medical provider for visual guidance in association with diagnostic and/or therapeutic procedures.
Navigation systems may be used for tracking objects (e.g., ultrasound probes, ablation probes, instruments, imaging devices, etc.) associated with carrying out the surgical procedure.
BRIEF SUMMARY
[0004] Example aspects of the present disclosure include: A system including a processor; and a memory storing instructions that, when executed by the processor, cause the processor to: receive image data; generate a navigation space based on one or more tracking signals; generate a virtual space by overlaying the image data on the navigation space; identify a position of a tracked device in relation to the virtual space; determine a focal plane of a user based on the position of the tracked device; and orient a display of the virtual space based on the determined focal plane of the user.
[0005] A method of orienting a display of a virtual space including: receiving image data; generating a navigation space based on one or more tracking signals; generating the virtual space by overlaying the image data on the navigation space; identifying a position of a tracked device in relation to the virtual space; determining a focal plane of a user based on the position of the tracked device; and orienting the display of the virtual space based on the determined focal plane of the user.
[0006] One or more processing units including processing circuitry to perform operations including: receiving image data; generating a navigation space based on one or more tracking signals; generating a virtual space by overlaying the image data on the navigation space; identifying a position of a tracked device in relation to the virtual space; determining a focal plane of a user based on the position of the tracked device; and orienting a display of the virtual space based on the determined focal plane of the user.
[0007] Any of the aspects herein, wherein identifying the position of the tracked device comprises determining an angle of the tracked device.
[0008] Any of the aspects herein, wherein determining the focal plane of the user comprises receiving a user input and recording the position of the tracked device upon receiving the user input.
[0009] Any of the aspects herein, wherein the user input is received from the tracked device.
[0010] Any of the aspects herein, wherein the tracked device is one of an ultrasound probe and an ablation probe.
[0011] Any of the aspects herein, wherein the user input is received from a user input device of the system.
[0012] Any of the aspects herein, wherein identifying the position of the tracked device comprises guiding a user through a registration process and detecting the tracked device in relation to one or more fiducials.
[0013] Any of the aspects herein, wherein after orienting the display of the virtual space, the display of the virtual space follows the position of the tracked device.
[0014] Any aspect in combination with any one or more other aspects.
[0015] Any one or more of the features disclosed herein.
[0016] Any one or more of the features as substantially disclosed herein.
[0017] Any one or more of the features as substantially disclosed herein in combination with any one or more other features as substantially disclosed herein.
[0018] Any one of the aspects/features/implementations in combination with any one or more other aspects/features/implementations.
[0019] Use of any one or more of the aspects or features as disclosed herein.
[0020] It is to be appreciated that any feature described herein can be claimed in combination with any other feature(s) as described herein, regardless of whether the features come from the same described implementation.
[0021] The details of one or more aspects of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the techniques described in this disclosure will be apparent from the description and drawings, and from the claims.
[0022] The preceding is a simplified summary of the disclosure to provide an understanding of some aspects of the disclosure. This summary is neither an extensive nor exhaustive overview of the disclosure and its various aspects, implementations, and configurations. It is intended neither to identify key or critical elements of the disclosure nor to delineate the scope of the disclosure but to present selected concepts of the disclosure in a simplified form as an introduction to the more detailed description presented below. As will be appreciated, other aspects, implementations, and configurations of the disclosure are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
[0023] Numerous additional features and advantages of the present disclosure will become apparent to those skilled in the art upon consideration of the implementation descriptions provided hereinbelow.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
[0024] The accompanying drawings are incorporated into and form a part of the specification to illustrate several examples of the present disclosure. These drawings, together with the description, explain the principles of the disclosure. The drawings simply illustrate preferred and alternative examples of how the disclosure can be made and used and are not to be construed as limiting the disclosure to only the illustrated and described examples. Further features and advantages will become apparent from the following more detailed description of the various aspects, implementations, and configurations of the disclosure, as illustrated by the drawings referenced below.
[0025] Fig. 1 illustrates an example of a system in accordance with aspects of the present disclosure.
[0026] Fig. 2 illustrates an example implementation of a system in accordance with aspects of the present disclosure.
[0027] Fig. 3 illustrates an example implementation of a system in accordance with aspects of the present disclosure.
[0028] Fig. 4A illustrates a user in control of a tracked device in accordance with aspects of the present disclosure.
[0029] Fig. 4B illustrates a display device in accordance with aspects of the present disclosure.
[0030] Fig. 5A illustrates a user in control of a tracked device in accordance with aspects of the present disclosure.
[0031] Fig. 5B illustrates a display device in accordance with aspects of the present disclosure.
[0032] Fig. 6A illustrates a user interacting with a computing system in accordance with aspects of the present disclosure.
[0033] Fig. 6B illustrates a display device in accordance with aspects of the present disclosure.
[0034] Fig. 7A illustrates a user in control of a tracked device in accordance with aspects of the present disclosure.
[0035] Fig. 7B illustrates a display device in accordance with aspects of the present disclosure.
[0036] Fig. 8 illustrates an example of a process flow in accordance with aspects of the present disclosure.
DETAILED DESCRIPTION
[0037] It should be understood that various aspects disclosed herein may be combined in different combinations than the combinations specifically presented in the description and accompanying drawings. It should also be understood that, depending on the example or implementation, certain acts or events of any of the processes or methods described herein may be performed in a different sequence, and/or may be added, merged, or left out altogether (e.g., all described acts or events may not be necessary to carry out the disclosed techniques according to different implementations of the present disclosure). In addition, while certain aspects of this disclosure are described as being performed by a single module or unit for purposes of clarity, it should be understood that the techniques of this disclosure may be performed by a combination of units or modules associated with, for example, a computing device and/or a medical device.
[0038] In one or more examples, the described methods, processes, and techniques may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. Alternatively, or additionally, functions may be implemented using machine learning models, neural networks, artificial neural networks, or combinations thereof (alone or in combination with instructions). Computer-readable media may include non-transitory computer-readable media, which corresponds to a tangible medium such as data storage media (e.g., RAM, ROM, EEPROM, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer).
[0039] Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors (e.g., Intel Core i3, i5, i7, or i9 processors; Intel Celeron processors; Intel Xeon processors; Intel Pentium processors; AMD Ryzen processors; AMD Athlon processors; AMD Phenom processors; Apple A10 or 10X Fusion processors; Apple A11, A12, A12X, A12Z, or A13 Bionic processors; or any other general purpose microprocessors), graphics processing units (e.g., Nvidia GeForce RTX 2000-series processors, Nvidia GeForce RTX 3000-series processors, AMD Radeon RX 5000-series processors, AMD Radeon RX 6000-series processors, or any other graphics processing units), application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term “processor” as used herein may refer to any of the foregoing structure or any other physical structure suitable for implementation of the described techniques. Also, the techniques could be fully implemented in one or more circuits or logic elements.
[0040] Before any implementations of the disclosure are explained in detail, it is to be understood that the disclosure is not limited in its application to the details of construction and the arrangement of components set forth in the following description or illustrated in the drawings. The disclosure is capable of other implementations and of being practiced or of being carried out in various ways. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” or “having” and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. Further, the present disclosure may use examples to illustrate one or more aspects thereof. Unless explicitly stated otherwise, the use or listing of one or more examples (which may be denoted by “for example,” “by way of example,” “e.g.,” “such as,” or similar language) is not intended to and does not limit the scope of the present disclosure.
[0041] The terms proximal and distal are used in this disclosure with their conventional medical meanings, proximal being closer to the operator or user of the system, and further from the region of surgical interest in or on the patient, and distal being closer to the region of surgical interest in or on the patient, and further from the operator or user of the system.
[0042] As described herein, a system may enable a method or process for providing an intuitive viewpoint for a user, such as a medical professional, of a tracked device such as an ultrasound or ablation probe. When such a user uses a tracked device, the user may view the image data received from the tracked device on a display. The image data from the tracked device may be displayed in relation to existing image data, such as image data from an MRI or CT scan. For example, a virtual space comprising a navigable three-dimensional view may be generated by a system as described herein. The three-dimensional view may be displayed on a display device, for example in an operating room or examination room. A user, such as a medical professional, may activate a tracked device, such as an ultrasound probe. The tracked device may have a viewpoint or field of view from which image data may be collected by the tracked device. As the user uses the tracked device, image data captured by the tracked device may be displayed on the display overlaid on the navigable three-dimensional view.
[0043] The user may use the navigable three-dimensional view to guide their use of the tracked device. The navigable three-dimensional view may include visual indicators, such as markers, which may have been added by the user or another person, or automatically from a computing system. Such indicators may represent points of space which the user may seek to examine with the tracked device, such as during a procedure or operation. As the user uses the tracked device, the user may use the displayed navigable three-dimensional view to guide their use of the tracked device.
[0044] The navigable three-dimensional view displayed on the display may be captured by a virtual camera as described herein. The virtual camera may be a point in space of the navigable three-dimensional view from which the viewpoint of the display is captured. A problem with conventional systems arises when the viewpoint of the display is from a point in space that is different from the real-life viewpoint of the user. For example, in the examination of a patient, the user may be standing on the patient’s left side and looking down at the patient’s chest. The viewpoint of the display may be captured from a point in space in a navigable three-dimensional view of a rendering of the patient. The point in space may originate at the patient’s right side and look up at the patient’s back, for example. Such a viewpoint may be a hindrance to the user as the user attempts to examine the patient. For example, when the user moves the tracked device to the right, a visual representation of the tracked device in the display may move to the left. When the user moves the tracked device up, a visual representation of the tracked device in the display may move down. As a result, the user must remember to move the tracked device in a mirrored fashion relative to what is displayed on the display device. What is needed is a way to reorient the display by moving the virtual camera within the navigable three-dimensional view to a point that is similar to the viewpoint of the user. As described herein, a system may enable such reorientation in a quick and user-friendly manner.
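To make the overall flow concrete, here is a minimal, non-authoritative sketch of the sequence summarized above (receive image data, generate a navigation space, overlay the image data, locate the tracked device, determine the focal plane on a user command, orient the display). Every helper and data layout in it is a placeholder standing in for the imaging, navigation, and display components of the system, not the disclosed implementation.

```python
import numpy as np

def generate_navigation_space(tracking_signals):
    # Stand-in: the navigation space is just the latest set of tracked poses.
    return {"poses": dict(tracking_signals)}

def generate_virtual_space(image_data, navigation_space):
    # Stand-in: the virtual space pairs the navigation space with the overlaid image data.
    return {"navigation": navigation_space, "images": image_data, "markers": [], "camera": None}

def determine_focal_plane(device_pose, user_confirmed):
    # The device pose recorded at the moment of the user's command serves as the focal plane.
    return device_pose if user_confirmed else None

def orient_display(virtual_space, focal_plane):
    # Stand-in for relocating the virtual camera; a renderer would consume this value.
    if focal_plane is not None:
        virtual_space["camera"] = focal_plane
    return virtual_space

# Toy end-to-end run with invented data.
signals = {"probe": {"position": np.array([0.1, 0.2, 0.3]),
                     "trajectory": np.array([0.0, 0.0, -1.0])}}
space = generate_virtual_space(image_data={"ct": "volume placeholder"},
                               navigation_space=generate_navigation_space(signals))
space = orient_display(space, determine_focal_plane(signals["probe"], user_confirmed=True))
```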
[0045] Fig. 1 illustrates an example of a system 100 that supports aspects of the present disclosure. The system 100 includes a computing system 102, one or more imaging devices 112, a navigation system 118, a tracked device 114, a computing device 120, a database 130, and/or a cloud network 134 (or other network). Systems according to other implementations of the present disclosure may include more or fewer components than the system 100. For example, the system 100 may omit and/or include additional instances of one or more components of the computing system 102, the imaging device(s) 112, navigation system 118, the tracked device 114, the computing device 120, the database 130, and/or the cloud network 134. In an example, the system 100 may omit any instance of the computing system 102, the imaging device(s) 112, navigation system 118, the tracked device 114, the computing device 120, the database 130, and/or the cloud network 134. The system 100 may support the implementation of one or more other aspects of one or more of the methods disclosed herein.
[0046] The computing system 102 includes a processor 104, a memory 106, a communication interface 108, and a user interface 110. Computing devices according to other implementations of the present disclosure may include more or fewer components than the computing system 102. The computing system 102 may be, for example, a control device including electronic circuitry associated with controlling any components of the system 100.
[0047] The processor 104 of the computing system 102 may be any processor described herein or any similar processor. The processor 104 may be configured to execute instructions stored in the memory 106, instructions which may cause the processor 104 to carry out one or more computing steps utilizing or based on data received from the imaging device(s) 112, the navigation system 118, the tracked device 114, the computing device 120, the database 130, and/or the cloud network 134.
[0048] The memory 106 may be or include RAM, DRAM, SDRAM, other solid state memory, any memory described herein, or any other tangible, non-transitory memory for storing computer-readable data and/or instructions. The memory 106 may store information or data associated with completing, for example, any step of the method(s) described herein, or of any other methods. The memory 106 may store, for example, instructions and/or machine learning models that support one or more functions of the imaging devices 112, the navigation system 118, the tracked device 114, the computing device 120, and/or other components of the system 100. Such content, if provided as an instruction, may, in some implementations, be organized into one or more applications, modules, packages, layers, or engines.
[0049] Alternatively, or additionally, the memory 106 may store other types of content or data (e.g., machine learning models, artificial neural networks, deep neural networks, etc.) that can be processed by the processor 104 to carry out various methods and features described herein. Thus, although various contents of memory 106 may be described as instructions, it should be appreciated that functionality described herein can be achieved through use of instructions, algorithms, and/or machine learning models. The data, algorithms, and/or instructions may cause the processor 104 to manipulate data stored in the memory 106 and/or received from or via the imaging devices 112, the navigation
system 118, the tracked device 114, the computing device 120, the database 130, and/or the cloud network 134.
[0050] The computing system 102 may also include a communication interface 108. The communication interface 108 may be used for receiving data or other information from an external source (e.g., the imaging devices 112, the navigation system 118, the tracked device 114, the computing device 120, the database 130, the cloud network 134, and/or any other system or component separate from the system 100), and/or for transmitting instructions, data (e.g., image data, tracking data, navigation data, calibration data, registration data, etc.), or other information to an external system or device (e.g., another computing system 102, the imaging devices 112, the navigation system 118, the tracked device 114, the computing device 120, the database 130, the cloud network 134, and/or any other system or component not part of the system 100). The communication interface 108 may include one or more wired interfaces (e.g., a USB port, an Ethernet port, a Firewire port) and/or one or more wireless transceivers or interfaces (configured, for example, to transmit and/or receive information via one or more wireless communication protocols such as 802.11a/b/g/n, Bluetooth, NFC, ZigBee, and so forth). In some implementations, the communication interface 108 may support communication between the computing system 102 and one or more other processors 104 or computing systems 102, whether to reduce the time needed to accomplish a computing-intensive task or for any other reason.
[0051] The computing system 102 may also include one or more user interfaces 110. The user interface 110 may be or include a keyboard, mouse, trackball, monitor, television, screen, touchscreen, and/or any other device for receiving information from a user and/or for providing information to a user. The user interface 110 may be used, for example, to receive a user selection or other user input regarding any step of any method described herein. Notwithstanding the foregoing, any required input for any step of any method described herein may be generated automatically by the system 100 (e.g., by the processor 104 or another component of the system 100) or received by the system 100 from a source external to the system 100. In some implementations, the user interface 110 may support user modification (e.g., by a surgeon, medical personnel, a patient, etc.) of instructions to be executed by the processor 104 according to one or more
implementations of the present disclosure, and/or to user modification or adjustment of a setting or other information displayed on the user interface 110 or corresponding thereto. [0052] In some implementations, the computing system 102 may utilize a user interface 110 that is housed separately from one or more remaining components of the computing system 102. In some implementations, the user interface 110 may be located proximate one or more other components of the computing system 102, while in other implementations, the user interface 110 may be located remotely from one or more other components of the computing system 102. For example, the user interface 110 may be located on a display of a computing device 120.
[0053] The imaging device(s) 112 may be operable to image anatomical feature(s) (e.g., a bone, veins, tissue, etc.) and/or other aspects of patient anatomy to yield image data (e.g., image data depicting or corresponding to a bone, veins, tissue, etc.). Image data as used herein may refer to data generated or captured by an imaging device 112, including in a machine-readable form, a graphical or visual form, and in any other form. In various examples, the image data may include data corresponding to an anatomical feature of a patient, or to a portion thereof. The image data may be or include a preoperative image, an intraoperative image, a postoperative image, or an image taken independently of any surgical procedure. In some implementations, a first imaging device 112 may be used to obtain first image data (e.g., a first image) at a first time, and a second imaging device 112 may be used to obtain second image data (e.g., a second image) at a second time after the first time.
[0054] The imaging device 112 may be capable of taking a 2D image or a 3D image to yield the image data. The imaging device 112 may be or include, for example, an ultrasound scanner (which may include, for example, a physically separate transducer and receiver, or a single ultrasound transceiver), an O-arm, a C-arm, a G-arm, or any other device utilizing X-ray-based imaging (e.g., a fluoroscope, a CT scanner, or other X-ray machine), a magnetic resonance imaging (MRI) scanner, an optical coherence tomography (OCT) scanner, an endoscope, a microscope, an optical camera, a thermographic camera (e.g., an infrared camera), a radar system (which may include, for example, a transmitter, a receiver, a processor, and one or more antennae), or any other imaging device 112 suitable for obtaining images of an anatomical feature of a patient. The imaging device 112 may be
contained entirely within a single housing or may include a transmitter/emitter and a receiver/detector that are in separate housings or are otherwise physically separated.
[0055] In some implementations, the imaging device 112 may include more than one imaging device 112. For example, a first imaging device may provide first image data and/or a first image, and a second imaging device may provide second image data and/or a second image. In still other implementations, the same imaging device may be used to provide both the first image data and the second image data, and/or any other image data described herein. The imaging device 112 may be operable to generate a stream of image data. For example, the imaging device 112 may be configured to operate with an open shutter, or with a shutter that continuously alternates between open and shut so as to capture successive images. For purposes of the present disclosure, unless specified otherwise, image data may be considered to be continuous and/or provided as an image data stream if the image data represents two or more frames per second.
[0056] The navigation system 118 may provide navigation for a surgeon and/or a surgical robot during an operation. The navigation system 118 may be any now known or future developed navigation system, including, for example, the Medtronic
StealthStation™ S8 surgical navigation system or any successor thereof. The navigation system 118 may include one or more cameras or other sensor(s) for tracking one or more reference markers, navigated trackers (e.g., tracking devices 140, etc.) or other objects within the operating room or other room in which some or all of the system 100 is located. The one or more cameras may be optical cameras, infrared cameras, or other cameras. In some implementations, the navigation system 118 may include one or more tracking devices 140 (e.g., electromagnetic sensors, acoustic sensors, etc.).
[0057] In some aspects, the navigation system 118 may include one or more of an optical tracking system, an acoustic tracking system, an electromagnetic tracking system, a radar tracking system, an inertial measurement unit (IMU) based tracking system, and a computer vision based tracking system. The navigation system 118 may include a corresponding transmission device 136 capable of transmitting signals associated with the tracking type. In some aspects, the navigation system 118 may be capable of computer vision-based tracking of objects present in images captured by the imaging device(s) 112. [0058] In various implementations, the navigation system 118 may be used to track a position and orientation (e.g., a pose) of the tracked device 114 or to track a pose of a
navigated tracker attached, directly or indirectly, in fixed relation to the tracked device 114. In some examples, the tracked device 114 may be an electromagnetic pointer (or stylus). In other embodiments, the tracked device 114 may be, for example, an ultrasound probe, an ablation probe, and/or a surgical instrument or surgical tool (e.g., a catheter, a screw, a drill, etc.). The navigation system 118 may include a display for displaying one or more images from an external source (e.g., the computing system 102, imaging device 112, or other source) or for displaying an image and/or video stream from the one or more cameras or other sensors of the navigation system 118.
[0059] In some implementations, tracking device(s) 140 (e.g., reference markers, navigation markers, fiducials) may be placed on an object in the surgical space. The reference markers may be tracked by the navigation system 118, and the results of the tracking may be used by a component of the system 100. In some implementations, the navigation system 118 can be used to track other components (e.g., tracked device 114), of the system 100.
[0060] Users, such as medical professionals, may use or operate the system 100 to perform a procedure on a subject with one or more instruments such as the tracked device 114. The instruments may include one or more tracked devices 114 that may be tracked with one or more localization systems such as the tracking devices 140 of the navigation system 118. The tracking devices 140 may include one or more of an optical, electromagnetic, acoustic, or other localization system that may be used to track tracked devices 114. For example, the tracking devices 140 may be used to track a location of the tracked device 114, which may be an instrument, relative to the subject. In some implementations, the subject may also have one or more tracking devices 140 attached thereto. Therefore, the tracked device 114 and the subject may be tracked relative to one another. As discussed herein, a pose of a tracked device 114 may include all position and orientation information, e.g., six degrees of freedom, such as translational (x, y, z) coordinates and orientation (yaw, pitch, roll) coordinates.
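The following sketch, which is illustrative only, shows one way a six-degree-of-freedom pose of the kind just described might be represented and converted to a 4x4 homogeneous transform; the rotation convention (yaw about z, then pitch about y, then roll about x) and the unit choices are assumptions rather than details taken from the disclosure.

```python
# Hypothetical 6-DOF pose container; the rotation order and millimeter/radian
# units are assumptions made for illustration.
from dataclasses import dataclass
import numpy as np

@dataclass
class Pose6D:
    x: float      # translation (mm)
    y: float
    z: float
    yaw: float    # orientation (radians)
    pitch: float
    roll: float

    def matrix(self) -> np.ndarray:
        """Return the 4x4 homogeneous transform for this pose."""
        cy, sy = np.cos(self.yaw), np.sin(self.yaw)
        cp, sp = np.cos(self.pitch), np.sin(self.pitch)
        cr, sr = np.cos(self.roll), np.sin(self.roll)
        Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
        Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
        Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
        T = np.eye(4)
        T[:3, :3] = Rz @ Ry @ Rx
        T[:3, 3] = [self.x, self.y, self.z]
        return T

print(Pose6D(120.0, 40.0, 910.0, np.pi / 2, 0.0, 0.0).matrix().round(3))
```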
[0061] The solutions described herein support directly using electromagnetic tools implemented in some existing medical procedures. In some aspects, direct use of such existing electromagnetic tools may provide increased accuracy due to accurate tracking of electromagnetic tools by some navigation systems.
[0062] In some implementations, a tracked device 114 may be, for example, an ultrasound probe, an ablation probe, or any type of device which a user, such as a doctor or nurse, may use during an examination or other type of procedure. In some implementations, tracked devices 114 may be calibrated and/or registered relative to a navigation means (e.g., one or more tracking devices 140, etc.) in association with navigated image acquisition. For example, some systems may establish a transformation matrix that maps a six-dimensional (6D) pose (e.g., position and orientation information) of the tracking device(s) 140 to the 6D pose of a tracked device 114. Some systems may map the 6D pose of the tracked sensor to an image generated by the ultrasound probe or to the ultrasound beam of the ultrasound probe.
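As a hedged illustration of the transformation-matrix mapping described above, the sketch below composes a live tracker pose with a fixed tracker-to-device calibration transform to obtain the device (or image) pose; the matrix names and numeric values are assumptions, not values from the disclosure.

```python
# Hypothetical composition of a live tracker pose with a fixed calibration
# transform to obtain the tracked device (e.g., ultrasound image) pose.
import numpy as np

def device_pose(T_world_tracker: np.ndarray, T_tracker_device: np.ndarray) -> np.ndarray:
    """Map the tracked sensor's pose to the device/image pose.

    T_world_tracker: 4x4 pose of the tracking device in navigation space.
    T_tracker_device: 4x4 calibration transform (tracker -> device), assumed to
                      have been established once during calibration/registration.
    """
    return T_world_tracker @ T_tracker_device

# Assumed values: tracker at (10, 0, 0) with identity rotation; device offset
# 5 units along the tracker's z axis.
T_wt = np.eye(4)
T_wt[:3, 3] = [10.0, 0.0, 0.0]
T_td = np.eye(4)
T_td[:3, 3] = [0.0, 0.0, 5.0]
print(device_pose(T_wt, T_td)[:3, 3])   # -> [10.  0.  5.]
```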
[0063] In some aspects, the computing system 102 may communicate with a server(s) and/or a database (e.g., database 130) directly or indirectly over a communications network (e.g., the cloud network 134). The communications network may include any type of known communication medium or collection of communication media and may use any type of protocols to transport data between endpoints. The communications network may include wired communications technologies, wireless communications technologies, or any combination thereof.
[0064] Wired communications technologies may include, for example, Ethernet-based wired local area network (LAN) connections using physical transmission mediums (e.g., coaxial cable, copper cable/wire, fiber optic cable, etc.). Wireless communications technologies may include, for example, cellular or cellular data connections and protocols (e.g., digital cellular, personal communications service (PCS), cellular digital packet data (CDPD), general packet radio service (GPRS), enhanced data rates for global system for mobile communications (GSM) evolution (EDGE), code division multiple access (CDMA), single carrier radio transmission technology (1×RTT), evolution data optimized (EVDO), high speed packet access (HSPA), universal mobile telecommunications service (UMTS), 3G, long term evolution (LTE), 4G, and/or 5G, etc.), Bluetooth®, Bluetooth® low energy, Wi-Fi, radio, satellite, infrared connections, and/or ZigBee® communication protocols.
[0065] The Internet is an example of the communications network that constitutes an Internet Protocol (IP) network consisting of multiple computers, computing networks, and other communication devices located in multiple locations, and components in the
communications network (e.g., computers, computing networks, communication devices) may be connected through one or more telephone systems and other means. Other examples of the communications network may include, without limitation, a standard Plain Old Telephone System (POTS), an Integrated Services Digital Network (ISDN), the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a wireless LAN (WLAN), a Session Initiation Protocol (SIP) network, a Voice over Internet Protocol (VoIP) network, a cellular network, and any other type of packet switched or circuit switched network known in the art. In some cases, the communications network may include any combination of networks or network types. In some aspects, the communications network may include any combination of communication mediums such as coaxial cable, copper cable/wire, fiber optic cable, or antennas for communicating data (e.g., transmitting/receiving data).
[0066] The computing system 102 may be connected to the cloud network 134 via the communication interface 108, using a wired connection, a wireless connection, or both. In some implementations, the computing system 102 may communicate with the database 130 and/or an external device (e.g., a computing device) via the cloud network 134.
[0067] The system 100 or similar systems may be used, for example, to carry out one or more aspects of any of the process flows and methods described herein. The system 100 or similar systems may also be used for other purposes. Fig. 2 illustrates an example environment 200 in which the system 100 may be used to support aspects of the present disclosure. Aspects of the example may be implemented by the computing system 102, imaging device(s) 112, navigation system 118, and tracked device 114.
[0068] During a procedure, such as a medical operation, a user such as a medical professional may utilize one or more tracked devices 114 such as ultrasound probes, ablation probes, or other devices. A transmission device 136 may be installed within the environment 200 and may be in communication with a computing system 102. The transmission device 136 may be, for example, a field emitter held by an emitter holder. The transmission device 136 may be used in relation to one or more tracking devices 140 to track a location, position, and/or orientation of a tracked device 114. The environment 200 may optionally include an imaging device 112 such as an MRI machine, an O-arm machine, a CT scanner, etc.
[0069] The computing system 102 may be configured to output visual data to a display 202. As illustrated in Fig. 3, as a user uses a tracked device 114 near the transmission device 136 and tracking devices 140, the display 202 may display a live view based on data from the tracked device 114. As the user moves the tracked device 114 in relation to the tracking devices 140 and the transmission device 136, the view of the display 202 may automatically update with the live view.
[0070] Image data, in addition to a live view, may be acquired, such as with one or more imaging systems, prior to and/or during a surgical procedure for displaying an image or a portion thereof for viewing by a user via, for example, a display 202 such as a monitor of a computing system 102 or computing device 120. The additional information or data may be viewed by the user with a live view, as discussed herein.
[0071] A navigation or tracking domain or volume generally defines a navigation space or patient space in which objects, such as the tracked device 114, may be moved and tracked, such as with the tracking devices 140. The navigation volume or patient space may be registered to an image space defined by the additional information (e.g., prior acquired image or selected image) of the subject, allowing for illustrating and displaying determined positions of various objects relative to one another and/or the image space. [0072] Using conventional tracking devices, however, the orientation of the display 202 may not be intuitive and may be disorienting to the user. As described herein, the user may be enabled to orient the point-of-view (POV) displayed in the display 202 to match the user's own perspective.
[0073] As illustrated in Fig. 4A, a user 406 may use a tracked device 114 during a procedure. A display 202, illustrated in Fig. 4B, may assist the user 406 in positioning the tracked device 114. As the user 406 moves the tracked device 114 in relation to the transmission device 136, the display 202 may update in real-time based on the movements. [0074] A POV 400 of the display 202 may be referred to as a virtual space POV 400 and is represented by the line extending from the camera symbol 408. To best serve the user 406, the POV 400 shown in the display 202 should match the POV of the user. An uncalibrated system may not be properly oriented and may be disorienting to the user 406. [0075] A POV 404 of the user 406 may be referred to as a focal plane of the user 406 and is represented by the line extending from the eyes of the user 406. An orientation 402,
or pose, of the tracked device 114 is represented by the line extending through the tracked device 114.
[0076] As can be appreciated in the display 202 of Fig. 4B, when the system is not yet calibrated, the POV 400 of the display 202 does not match the POV 404 of the user 406. A visual representation of the tracked device 114 is shown in the display 202. The orientation 402 of the tracked device 114 is illustrated in the display 202 by a cone 414. When the display 202 is uncalibrated, the cone 414 extends in a direction that is unnatural to the user 406. What is needed is a way to calibrate the display 202 to show a view that is aligned with the POV 404 of the user 406.
[0077] The display 202 may show image data captured by the tracked device 114 along with other image data, either live or recorded image data. The display may include annotation information, such as markers 410 added by a medical professional during a review of image data such as MRI images or O-arm or CT scans. Tracking devices 140 may be represented by markers 416 in the display.
[0078] As illustrated in Fig. 5A, the user 406 has moved the tracked device 114 such that the orientation 402 of the tracked device 114 is in a predetermined position. At the predetermined position, the orientation 402 of the tracked device 114 is in line with the POV 404 of the user 406. The predetermined position may be any position for which the user 406 would prefer the POV 400 shown in the display 202 to match the POV 404 of the user 406.
[0079] The predetermined position may be a 3D position which may be saved in memory of a computing system 102. In some implementations, a plurality of users may each be associated with one or more associated predetermined positions. As described below, to move the POV 400 of the display 202, a user 406 may hold the tracked device 114 in a predetermined position and may instruct the computing system 102 to reorient the display 202.
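A minimal sketch, assuming a simple in-memory layout, of how predetermined positions might be stored per user and labeled (for example, left-hand versus right-hand holds); the class and key names are hypothetical.

```python
# Hypothetical in-memory store of predetermined positions, keyed by user and a
# label such as "left_hand" or "right_hand"; layout and names are illustrative.
from typing import Dict, Tuple

Pose = Tuple[float, float, float, float, float, float]   # x, y, z, yaw, pitch, roll

class PredeterminedPositions:
    def __init__(self) -> None:
        self._store: Dict[str, Dict[str, Pose]] = {}

    def save(self, user_id: str, label: str, pose: Pose) -> None:
        """Record a predetermined position for a given user under a label."""
        self._store.setdefault(user_id, {})[label] = pose

    def get(self, user_id: str, label: str) -> Pose:
        return self._store[user_id][label]

positions = PredeterminedPositions()
positions.save("dr_smith", "left_hand", (120.0, 40.0, 910.0, 0.0, 1.2, 0.0))
positions.save("dr_smith", "right_hand", (150.0, 40.0, 910.0, 0.0, -1.2, 0.0))
print(positions.get("dr_smith", "left_hand"))
```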
[0080] In some implementations, predetermined positions need not be saved in memory. Instead, the user 406 may choose a predetermined position in the moment (e.g., in real-time prior to or during a surgical procedure). In such embodiments, the position and/or orientation of the tracked device 114 may be determined by the navigation system 118 tracking the tracked device 114. The position and/or the orientation of the tracked
device 114 can be used to determine a trajectory that is in line with the user POV 404, as described below.
[0081] In the example scenario illustrated by Figs. 5A and 5B, the user 406 has positioned the orientation 402 of the tracked device 114 in line with his or her POV 404. Before calibration, the POV 400 of the display 202, originating at the camera symbol 408, is not in line with the POV 404 of the user 406. After calibration, as described below, the POV 400 of the display 202, originating at the camera symbol 408, will be in line with the POV 404 of the user 406.
[0082] As illustrated in Fig. 6A, the display 202 may be an interactive display and may be capable of receiving input, such as via touch. The user 406 (or another user 600) may be capable of interacting with the computing system 102 to issue a command to orient the display to the device plane. The command may be issued via voice, a button on the tracked device 114, a GUI button 602 displayed on the display 202 as illustrated in Fig. 6B, or by any other suitable means.
[0083] As illustrated in Fig. 7A, upon issuing the command, the POV 400 of the display 202 may be updated. The virtual source of the POV 400 of the display, represented by the camera symbol 408, is in the correct location such that the POV 400 of the display 202 is in line with the POV 404 of the user 406 and the orientation 402 of the tracked device 114.
[0084] As illustrated in Fig. 7B, the display 202 is updated with an intuitive view such that the cone 414 representing the view of the tracked device 114 is parallel with the display 202. The display 202 may be updated by aligning the POV 400 of the display 202 with the orientation 402 of the tracked device 114 by matching a trajectory of the display 202 with a trajectory or angle of the tracked device 114 (which is in line with a trajectory of the POV 404 of the user 406). In some embodiments where the tracked device 114 is an ultrasound probe, the trajectory or angle of the tracked device can be determined from a plane of the ultrasound probe as tracked by the navigation system 118. In other embodiments where the tracked device 114 is, for example, a surgical tool or instrument, an orientation and trajectory or angle of the surgical tool or instrument may be determined by, for example, the navigation system 118.
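One possible realization of this alignment step, offered only as a sketch, is to build the virtual-camera pose from the tracked device's tip position and trajectory using a standard look-at construction; the stand-off distance, the up-vector choice, and the OpenGL-style axis convention are assumptions, not the disclosed implementation.

```python
# Hypothetical look-at construction: place the virtual camera behind the tracked
# device's tip, looking along its trajectory, so the display POV matches the
# device (and, by assumption, the user) POV.
import numpy as np

def _normalize(v):
    return v / np.linalg.norm(v)

def camera_pose_from_probe(probe_tip, probe_direction, standoff=300.0, world_up=(0.0, 0.0, 1.0)):
    """Return a 4x4 camera-to-world pose whose viewing direction is probe_direction.

    standoff and world_up are assumed values; the construction degenerates when the
    trajectory is parallel to world_up, which a real system would have to handle.
    """
    forward = _normalize(np.asarray(probe_direction, dtype=float))
    right = _normalize(np.cross(forward, np.asarray(world_up, dtype=float)))
    up = np.cross(right, forward)
    eye = np.asarray(probe_tip, dtype=float) - standoff * forward

    pose = np.eye(4)
    pose[:3, 0], pose[:3, 1], pose[:3, 2] = right, up, -forward   # OpenGL-style camera axes
    pose[:3, 3] = eye
    return pose

cam = camera_pose_from_probe(probe_tip=[0.0, 0.0, 0.0], probe_direction=[0.0, 1.0, 0.0])
print(cam[:3, 3])   # camera pulled back 300 units along -y, looking toward +y
```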
[0085] After orienting the display 202, the user 406 may continue to use the tracked device 114 in an ordinary fashion. The POV 400 of the display 202 may continue to follow the orientation 402 of the tracked device 114 during the procedure. If at any point
the user 406 is unhappy with the POV 400 of the display 202, the user 406 may issue another calibration command and reset the view.
[0086] The system 100 illustrated in Fig. 1, in relation to the scenarios illustrated in Figs. 2-7B, may enable the above-described systems and methods by performing a method 800 as illustrated in Fig. 8. At 802, image data may be received by a computing system 102, such as the computing system 102 illustrated in Fig. 1 and described above. Image data may be received, for example, from one or more imaging devices 112, from a tracked device 114, from a database 130, or another source.
[0087] Image data as described herein may comprise medical imaging data, such as image data from a CT scan, MRI, etc., and may be three-dimensional (3D) or may be a two-dimensional (2D) image which can be overlaid onto a 3D virtual environment. In some implementations, image data may include annotation details entered by a user. For example, a 3D navigable image may include one or more visual markers, such as targets for which a user may seek to gather additional information during a procedure.
[0088] At 804, the computing system 102 may generate a navigation space based on one or more tracking signals. Generating the navigation space may comprise receiving tracking information relating to a position or orientation of a tracked device 114. Such tracking information may be received by the computing system 102 and may be used to generate the navigation space.
[0089] At 806, the computing system may generate a virtual space by overlaying the image data on the navigation space. Following the establishment of the navigation space, the system 102 may enhance the navigation space through the integration of the image data received at 802. Generating the virtual space may comprise overlaying the navigation space with the image data which may include one or more of images, graphics, and informational data. The overlaid data may include a spectrum of elements such as directional arrows, labels, landmarks, and various other interactive and informative markers.
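As an illustrative sketch of this overlay step, the code below maps annotation points defined in image coordinates into the navigation space with an assumed image-to-navigation registration transform so they can be rendered in the virtual space; the transform values and point coordinates are invented for the example.

```python
# Hypothetical overlay step: map annotation points defined in image coordinates
# (e.g., targets marked on an MRI) into navigation space for rendering.
import numpy as np

def to_navigation_space(points_image: np.ndarray, T_nav_image: np.ndarray) -> np.ndarray:
    """Apply a 4x4 image->navigation registration transform to Nx3 points."""
    points_h = np.hstack([points_image, np.ones((points_image.shape[0], 1))])
    return (T_nav_image @ points_h.T).T[:, :3]

# Assumed registration: 90-degree rotation about z plus a translation.
T_nav_image = np.array([[0.0, -1.0, 0.0, 50.0],
                        [1.0,  0.0, 0.0, 20.0],
                        [0.0,  0.0, 1.0,  0.0],
                        [0.0,  0.0, 0.0,  1.0]])

markers_image = np.array([[10.0, 0.0, 5.0],
                          [ 0.0, 8.0, 5.0]])
print(to_navigation_space(markers_image, T_nav_image))
```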
[0090] At 808, the computing system 102 may identify a position of the tracked device 114 in relation to the virtual space. The computing system 102 may maintain an awareness of a position of the tracked device 114 within the virtual space. As the tracked device 114 maneuvers through physical space, a continuous stream of positional data may be relayed to the computing system 102. The computing system 102 may generate an
updated and synchronized representation of the tracked device 114 within the virtual space.
[0091] The computing system 102 may be capable of identifying the position of the tracked device 114 and determining an angle or orientation of the tracked device 114. Identifying the position of the tracked device 114 may comprise guiding a user through a registration process and detecting the tracked device in relation to one or more fiducials, such as tracking devices 140. In some implementations, electromagnetic (EM) navigation may be used for real-time localization, angle detection, position detection, and/or orientation detection of the tracked device 114 within the virtual space.
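One common way to perform fiducial-based registration of this kind, shown here only as a sketch and not necessarily the disclosed process, is point-based rigid registration using the Kabsch/SVD method between fiducial positions measured by the navigation system and their counterparts in the virtual space; the fiducial coordinates are assumed values.

```python
# Hypothetical fiducial-based registration using the Kabsch/SVD method: find the
# rigid transform mapping fiducial positions measured by the navigation system
# onto their counterparts in the virtual/image space.
import numpy as np

def register_rigid(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Return the 4x4 rigid transform that best maps Nx3 src points onto dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = dst_c - R @ src_c
    return T

# Assumed fiducial coordinates (tracker frame) and their virtual-space positions.
fiducials_tracker = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0], [0, 0, 100]], float)
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
fiducials_virtual = fiducials_tracker @ R_true.T + [5.0, 5.0, 5.0]
print(np.round(register_rigid(fiducials_tracker, fiducials_virtual), 3))
```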
[0092] At 810, a focal plane of a user may be determined or estimated based on the position of the tracked device 114. Determining, or estimating, the focal plane of the user may comprise receiving a user input. Upon receiving the user input, the position of the tracked device at that time may be recorded.
[0093] As described above, the user input may be received from the tracked device 114, such as a button on the device, or from a user input device of the system 102, such as a GUI button on a display 202, via voice input received from a microphone of the system 102, or another source. As should be appreciated, in some circumstances, the user input may be received from a user other than the user operating the tracked device 114.
[0094] In some implementations, determining or estimating the focal plane of the user may comprise determining data relating to one or more predetermined positions. A user may record predetermined positions for different types of tracked devices 114 and in different positions. For example, a user may set one predetermined position of a tracked device 114 by holding the tracked device 114 in his or her left hand and set another predetermined position of the tracked device 114 by holding the tracked device 114 in his or her right hand. Information relating to each predetermined position may be stored in memory of the computing system 102.
[0095] In some implementations, the computing system 102 may compare a detected position of the tracked device 114 with one or more predetermined positions and identify one of the predetermined positions based on the comparison. Comparing a detected position of a tracked device 114 with predetermined positions may comprise comparing localization information, angle information, and/or other orientation information of the
tracked device 114 as detected using a navigation system. In this way, orientation of the display, as described herein, may be based on the identified predetermined position.
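A hedged sketch of the comparison just described: the detected position and direction of the tracked device are compared against the stored predetermined positions, and the closest match within assumed distance and angle tolerances is selected; the thresholds and the distance-versus-angle weighting are arbitrary assumptions.

```python
# Hypothetical matching of a detected pose against stored predetermined positions
# using a position-distance threshold and an angular-difference threshold.
import numpy as np

def angular_difference(dir_a, dir_b) -> float:
    """Angle in radians between two direction vectors."""
    a = np.asarray(dir_a, float) / np.linalg.norm(dir_a)
    b = np.asarray(dir_b, float) / np.linalg.norm(dir_b)
    return float(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

def match_predetermined(detected_pos, detected_dir, stored,
                        max_dist=50.0, max_angle=np.radians(15.0)):
    """Return the label of the closest stored (position, direction) pair, or None.

    stored maps label -> (position 3-vector, direction 3-vector). The tolerances
    and the 100.0 weighting of angle against distance are arbitrary assumptions.
    """
    best, best_score = None, np.inf
    for label, (pos, direction) in stored.items():
        d = np.linalg.norm(np.asarray(detected_pos, float) - np.asarray(pos, float))
        a = angular_difference(detected_dir, direction)
        score = d + 100.0 * a
        if d <= max_dist and a <= max_angle and score < best_score:
            best, best_score = label, score
    return best

stored = {"left_hand": ([120.0, 40.0, 910.0], [0.0, 1.0, 0.0]),
          "right_hand": ([150.0, 40.0, 910.0], [0.0, -1.0, 0.0])}
print(match_predetermined([123.0, 42.0, 905.0], [0.05, 1.0, 0.0], stored))   # -> left_hand
```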
[0096] In some implementations, instructions may be presented to a user to place the tracked device in a predetermined position. For example, a display may present instructions directing a user to align a tracked device 114 with his or her own line of sight and, once aligned, to issue a command to align the POV of the display. [0097] In some implementations, the focal plane of the user may be determined or estimated at least in part through the use of an EM tracker on a head of the user. For example, a tracking device 140 may be worn on a head of the user and may be used to locate a position of the user's head. In such implementations, the POV of the display may be oriented based at least in part on the located position of the user's head.
[0098] In some implementations, the user may use two tracked devices simultaneously during a procedure. In such a scenario, the user may be enabled to select one of the two tracked devices to use to orient the display. In some implementations, such a user may be enabled to switch between perspectives of different tracked devices.
[0099] In some implementations, a perspective may be saved in memory, or bookmarked, such that users can save specific views, orientations, perspectives, etc., so that the users can quickly switch from one view to another. For example, a user may bookmark a first perspective from the user’s standpoint and another perspective from a different side of the operating table. By bookmarking both perspectives, the user may be enabled to easily and quickly switch back and forth between the two perspectives.
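The bookmarking behavior could be sketched, under assumed names and data structures, as a small store of named virtual-camera poses that the user can save and switch between.

```python
# Hypothetical bookmark store for virtual-camera perspectives, letting a user
# save named views and switch between them quickly.
import numpy as np

class PerspectiveBookmarks:
    def __init__(self) -> None:
        self._views = {}      # name -> 4x4 camera pose
        self.active = None

    def bookmark(self, name: str, camera_pose: np.ndarray) -> None:
        """Save a copy of the given camera pose under a name."""
        self._views[name] = np.array(camera_pose, copy=True)

    def switch_to(self, name: str) -> np.ndarray:
        """Mark the named view active and return its camera pose."""
        self.active = name
        return self._views[name]

bookmarks = PerspectiveBookmarks()
bookmarks.bookmark("my_side", np.eye(4))
far_side = np.eye(4)
far_side[:3, 3] = [0.0, 600.0, 0.0]
bookmarks.bookmark("far_side", far_side)
print(bookmarks.switch_to("far_side")[:3, 3])   # -> [  0. 600.   0.]
```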
[0100] At 812, the computing system 102 may orient the display of the virtual space based on the determined focal plane of the user. For example, the computing system 102 may relocate a virtual camera within the virtual space to match the determined focal plane of the user.
[0101] Orienting the display of the virtual space may in some implementations comprise positioning a virtual camera in the virtual space to adjust a POV of the display. For example, a virtual camera may be placed in a position of the virtual space to match a determined or estimated focal plane of the user. Upon the camera being positioned, the POV of the display may match the focal plane of the user.
[0102] In some implementations, the computing system 102 may display views oriented in directions other than the perspective of the tracked device. For example, a 90-degree
view may be displayed. Views illustrating different perspectives may be displayed simultaneously or separately.
[0103] In some implementations, after orienting the display of the virtual space, the display of the virtual space may remain stationary while the user continues to use the tracked device. In some implementations, after orienting the display of the virtual space, the display of the virtual space may move to track the orientation of the tracked device while the user continues to use the tracked device.
[0104] As noted above, the present disclosure encompasses methods with fewer than all of the steps identified herein (and the corresponding description of respective process flows), as well as methods that include additional steps beyond those identified in the figures and process flows described herein. The present disclosure also encompasses methods that include one or more steps from one method described herein, and one or more steps from another method described herein. Any correlation described herein may be or include a registration or any other correlation.
[0105] The foregoing is not intended to limit the disclosure to the form or forms disclosed herein. In the foregoing Detailed Description, for example, various features of the disclosure are grouped together in one or more aspects, implementations, and/or configurations for the purpose of streamlining the disclosure. The features of the aspects, implementations, and/or configurations of the disclosure may be combined in alternate aspects, implementations, and/or configurations other than those discussed above. This method of disclosure is not to be interpreted as reflecting an intention that the claims require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed aspect, implementation, and/or configuration. Thus, the following claims are hereby incorporated into this Detailed Description, with each claim standing on its own as a separate preferred implementation of the disclosure.
[0106] Moreover, though the foregoing has included description of one or more aspects, implementations, and/or configurations and certain variations and modifications, other variations, combinations, and modifications are within the scope of the disclosure, e.g., as may be within the skill and knowledge of those in the art, after understanding the present disclosure. It is intended to obtain rights which include alternative aspects, implementations, and/or configurations to the extent permitted, including alternate,
interchangeable and/or equivalent structures, functions, ranges or steps to those claimed, whether or not such alternate, interchangeable and/or equivalent structures, functions, ranges or steps are disclosed herein, and without intending to publicly dedicate any patentable subject matter.
[0107] Example aspects of the present disclosure include:
[0108] A system including: a processor; and a memory storing instructions that, when executed by the processor, cause the processor to: receive image data; generate a navigation space based on one or more tracking signals; generate a virtual space by overlaying the image data on the navigation space; identify a position of a tracked device in relation to the virtual space; determine a focal plane of a user based on the position of the tracked device; and orient a display of the virtual space based on the determined focal plane of the user.
[0109] Aspects of the above system include wherein identifying the position of the tracked device comprises determining an angle of the tracked device.
[0110] Aspects of the above system include wherein determining the focal plane of the user comprises receiving a user input and recording the position of the tracked device upon receiving the user input.
[0111] Aspects of the above system include wherein the user input is received from the tracked device.
[0112] Aspects of the above system include wherein the tracked device is one of an ultrasound probe and an ablation probe.
[0113] Aspects of the above system include wherein the user input is received from a user input device of the system.
[0114] Aspects of the above system include wherein identifying the position of the tracked device comprises guiding a user through a registration process and detecting the tracked device in relation to one or more fiducials.
[0115] Aspects of the above system include wherein after orienting the display of the virtual space, the display of the virtual space follows the position of the tracked device. [0116] A method of orienting a display of a virtual space comprising: receiving image data; generating a navigation space based on one or more tracking signals; generating the virtual space by overlaying the image data on the navigation space; identifying a position of a tracked device in relation to the virtual space; determining a focal plane of a user
based on the position of the tracked device; and orienting the display of the virtual space based on the determined focal plane of the user.
[0117] Aspects of the above method include wherein identifying the position of the tracked device comprises determining an angle of the tracked device.
[0118] Aspects of the above method include wherein determining the focal plane of the user comprises receiving a user input and recording the position of the tracked device upon receiving the user input.
[0119] Aspects of the above method include wherein the user input is received from the tracked device.
[0120] Aspects of the above method include wherein the tracked device is one of an ultrasound probe and an ablation probe.
[0121] Aspects of the above method include wherein the user input is received from a user input device of the system.
[0122] Aspects of the above method include wherein identifying the position of the tracked device comprises guiding a user through a registration process and detecting the tracked device in relation to one or more fiducials.
[0123] Aspects of the above method include wherein after orienting the display of the virtual space, the display of the virtual space follows the position of the tracked device. [0124] One or more processing units comprising processing circuitry to perform operations comprising: receiving image data; generating a navigation space based on one or more tracking signals; generating a virtual space by overlaying the image data on the navigation space; identifying a position of a tracked device in relation to the virtual space; determining a focal plane of a user based on the position of the tracked device; and orienting a display of the virtual space based on the determined focal plane of the user.
[0125] Aspects of the above processing unit(s) include wherein identifying the position of the tracked device comprises determining an angle of the tracked device.
[0126] Aspects of the above processing unit(s) include wherein determining the focal plane of the user comprises receiving a user input and recording the position of the tracked device upon receiving the user input.
[0127] Aspects of the above processing unit(s) include wherein the user input is received from the tracked device.
[0128] Aspects of the above processing unit(s) include wherein orienting the display of the virtual space comprises bookmarking the orientation in memory.
[0129] Any aspect in combination with any one or more other aspects.
[0130] Any one or more of the features disclosed herein.
[0131] Any one or more of the features as substantially disclosed herein.
[0132] Any one or more of the features as substantially disclosed herein in combination with any one or more other features as substantially disclosed herein.
[0133] Any one of the aspects/features/implementations in combination with any one or more other aspects/features/implementations.
[0134] Use of any one or more of the aspects or features as disclosed herein.
[0135] It is to be appreciated that any feature described herein can be claimed in combination with any other feature(s) as described herein, regardless of whether the features come from the same described implementation.
[0136] The phrases “at least one,” “one or more,” “or,” and “and/or” are open-ended expressions that are both conjunctive and disjunctive in operation. For example, each of the expressions “at least one of A, B and C,” “at least one of A, B, or C,” “one or more of A, B, and C,” “one or more of A, B, or C,” “A, B, and/or C,” and “A, B, or C” means A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together.
[0137] The term “a” or “an” entity refers to one or more of that entity. As such, the terms “a” (or “an”), “one or more,” and “at least one” can be used interchangeably herein. It is also to be noted that the terms “comprising,” “including,” and “having” can be used interchangeably.
[0138] The term “automatic” and variations thereof, as used herein, refers to any process or operation, which is typically continuous or semi-continuous, done without material human input when the process or operation is performed. However, a process or operation can be automatic, even though performance of the process or operation uses material or immaterial human input, if the input is received before performance of the process or operation. Human input is deemed to be material if such input influences how the process or operation will be performed. Human input that consents to the performance of the process or operation is not deemed to be “material.”
[0139] Aspects of the present disclosure may take the form of an implementation that is entirely hardware, an implementation that is entirely software (including firmware, resident software, micro-code, etc.) or an implementation combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module,” or “system.” Any combination of one or more computer-readable medium(s) may be utilized. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
[0140] A computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
[0141] A computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer-readable signal medium may be any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including, but not limited to, wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
[0142] The terms “determine,” “calculate,” “compute,” and variations thereof, as used herein, are used interchangeably, and include any type of methodology, process, mathematical operation, or technique.
[0143] The following examples are a non-limiting list of clauses in accordance with one or more techniques of this disclosure.
[0144] Example 1. A system comprising: a processor; and a memory storing instructions that, when executed by the processor, cause the processor to: receive image data; generate a navigation space based on one or more tracking signals; generate a virtual space by overlaying the image data on the navigation space; identify a position of a tracked device in relation to the virtual space; determine a focal plane of a user based on the position of the tracked device; and orient a display of the virtual space based on the determined focal plane of the user.
[0145] Example 2. The system of Example 1, wherein identifying the position of the tracked device comprises determining an angle of the tracked device.
[0146] Example 3. The system of Example 1, wherein determining the focal plane of the user comprises receiving a user input and recording the position of the tracked device upon receiving the user input.
[0147] Example 4. The system of Example 3, wherein the user input is received from the tracked device.
[0148] Example 5. The system of Example 4, wherein the tracked device is one of an ultrasound probe and an ablation probe.
[0149] Example 6. The system of Example 3, wherein the user input is received from a user input device of the system.
[0150] Example 7. The system of Example 1, wherein identifying the position of the tracked device comprises guiding a user through a registration process and detecting the tracked device in relation to one or more fiducials.
[0151] Example 8. The system of Example 1, wherein after orienting the display of the virtual space, the display of the virtual space follows the position of the tracked device.
[0152] Example 9. A method of orienting a display of a virtual space comprising: receiving image data; generating a navigation space based on one or more tracking signals; generating the virtual space by overlaying the image data on the navigation space; identifying a position of a tracked device in relation to the virtual space; determining a focal plane of a user based on the position of the tracked device; and orienting the display of the virtual space based on the determined focal plane of the user.
[0153] Example 10. The method of Example 9, wherein identifying the position of the tracked device comprises determining an angle of the tracked device.
[0154] Example 11. The method of Example 9, wherein determining the focal plane of the user comprises receiving a user input and recording the position of the tracked device upon receiving the user input.
[0155] Example 12. The method of Example 11, wherein the user input is received from the tracked device.
[0156] Example 13. The method of Example 12, wherein the tracked device is one of an ultrasound probe and an ablation probe.
[0157] Example 14. The method of Example 11, wherein the user input is received from a user input device.
[0158] Example 15. The method of Example 9, wherein identifying the position of the tracked device comprises guiding a user through a registration process and detecting the tracked device in relation to one or more fiducials.
[0159] Example 16. The method of Example 9, wherein after orienting the display of the virtual space, the display of the virtual space follows the position of the tracked device.
[0160] Example 17. One or more processing units comprising processing circuitry to perform operations comprising: receiving image data; generating a navigation space based on one or more tracking signals; generating a virtual space by overlaying the image data on the navigation space; identifying a position of a tracked device in relation to the virtual space; determining a focal plane of a user based on the position of the tracked device; and orienting a display of the virtual space based on the determined focal plane of the user.
[0161] Example 18. The one or more processing units of Example 17, wherein identifying the position of the tracked device comprises determining an angle of the tracked device.
[0162] Example 19. The one or more processing units of Example 17, wherein determining the focal plane of the user comprises receiving a user input and recording the position of the tracked device upon receiving the user input.
[0163] Example 20. The one or more processing units of Example 19, wherein the user input is received from the tracked device.
Claims
1. A system comprising: a processor; and a memory storing instructions that, when executed by the processor, cause the processor to: receive image data (802); generate a navigation space (804) based on one or more tracking signals; generate a virtual space (806) by overlaying the image data on the navigation space; identify a position (808) of a tracked device (114) in relation to the virtual space; determine a focal plane (810) of a user (406) based on the position of the tracked device (114); and orient a display (812) of the virtual space based on the determined focal plane of the user (406).
2. The system of claim 1, wherein identifying the position of the tracked device comprises determining an angle of the tracked device.
3. The system of claims 1 or 2, wherein determining the focal plane of the user comprises receiving a user input and recording the position of the tracked device upon receiving the user input.
4. The system of any of claims 1-3, wherein the user input is received from the tracked device.
5. The system of any of claims 1-4, wherein the tracked device is one of an ultrasound probe and an ablation probe.
6. The system of any of claims 1-5, wherein the user input is received from a user input device of the system.
7. The system of any of claims 1-6, wherein identifying the position of the tracked device comprises guiding a user through a registration process and detecting the tracked device in relation to one or more fiducials.
8. The system of any of claims 1-7, wherein after orienting the display of the virtual space, the display of the virtual space follows the position of the tracked device.
9. A method of orienting a display of a virtual space comprising: receiving image data; generating a navigation space based on one or more tracking signals; generating the virtual space by overlaying the image data on the navigation space; identifying a position of a tracked device in relation to the virtual space; determining a focal plane of a user based on the position of the tracked device; and orienting the display of the virtual space based on the determined focal plane of the user.
10. The method of claim 9, wherein identifying the position of the tracked device comprises determining an angle of the tracked device.
11. The method of claims 9 or 10, wherein determining the focal plane of the user comprises receiving a user input and recording the position of the tracked device upon receiving the user input.
12. The method of any of claims 9-11, wherein the user input is received from the tracked device.
13. The method of any of claims 9-12, wherein the tracked device is one of an ultrasound probe and an ablation probe.
14. The method of any of claims 9-13, wherein the user input is received from a user input device.
15. One or more processing units comprising processing circuitry to perform operations comprising: receiving image data; generating a navigation space based on one or more tracking signals; generating a virtual space by overlaying the image data on the navigation space; identifying a position of a tracked device in relation to the virtual space; determining a focal plane of a user based on the position of the tracked device; and orienting a display of the virtual space based on the determined focal plane of the user.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202363614808P | 2023-12-26 | 2023-12-26 | |
| US63/614,808 | 2023-12-26 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025141396A1 true WO2025141396A1 (en) | 2025-07-03 |
Family
ID=94283671
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/IB2024/062814 Pending WO2025141396A1 (en) | 2023-12-26 | 2024-12-18 | System and method to orient display of probe for navigation in real time |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025141396A1 (en) |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20190357982A1 (en) * | 2016-09-27 | 2019-11-28 | Brainlab Ag | Microscope Tracking Based on Video Analysis |
| US20210338367A1 (en) * | 2020-04-29 | 2021-11-04 | Medtronic Navigation, Inc. | System and Method for Viewing a Subject |
| US20230081244A1 (en) * | 2021-04-19 | 2023-03-16 | Globus Medical, Inc. | Computer assisted surgical navigation system for spine procedures |