
WO2025184378A1 - Updating a user interface based on force applied by an instrument during teleoperation

Updating a user interface based on force applied by an instrument during teleoperation

Info

Publication number
WO2025184378A1
Authority
WO
WIPO (PCT)
Prior art keywords
force
anatomical structure
medical
amount
data
Legal status
Pending
Application number
PCT/US2025/017648
Other languages
French (fr)
Inventor
Anthony M. JARC
Xi Liu
Alfred SONG
Huan Lac Phan
Andrea E. VILLA
Current Assignee
Intuitive Surgical Operations Inc
Original Assignee
Intuitive Surgical Operations Inc
Application filed by Intuitive Surgical Operations Inc


Classifications

    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/25 User interfaces for surgical systems
    • A61B 34/30 Surgical robots
    • A61B 34/76 Manipulators having means for providing feel, e.g. force or tactile feedback
    • A61B 2034/254 User interfaces for surgical systems being adapted depending on the stage of the surgical procedure
    • A61B 90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B 1/00 - A61B 50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B 2090/064 Measuring instruments not otherwise provided for, for measuring force, pressure or mechanical tension
    • A61B 2090/364 Correlation of different images or relation of image positions in respect to the body

Definitions

  • Teleoperation of robotic systems can provide teleoperators with multiple advantages. For example, teleoperators can operate such robotic systems with greater control and precision than would otherwise be achievable using conventional techniques. Further, teleoperators can suspend movements made by such instruments when addressing fatigue, and continue operation when the fatigue has subsided. But because teleoperators are not in physical control of these instruments, it can be difficult to determine whether, and to what degree, the instruments are exerting force on one or more objects in proximity to the instruments.
  • Technical solutions disclosed herein are generally related to systems and methods for updating a user interface based on force applied by an instrument (e.g., a medical instrument) during teleoperation.
  • described herein are specific techniques for updating graphical user interfaces during teleoperation to establish a specific teleoperation experience.
  • the graphical user interface can be generated based at least in part on video data received from a sensing system during a medical procedure.
  • the graphical user interface can include one or more indications of the amount of force to control performance of the medical procedure with the robotic medical system. These indications can be in the form of numbers, letters, or the like, or in the form of updates to the images represented by the video data that occur in accordance with the techniques described herein.
  • the one or more processors can generate a graphical user interface based at least in part on images representing instruments and anatomical structures that are in a field of view of an imaging device, the images captured at a plurality of time steps.
  • the graphical user interface comprises a timer representing an amount of time during which the amount of force to be applied is applied to the anatomical structure or a remaining amount of time during which the amount of force to be applied satisfies a cumulative force threshold.
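  • As a minimal, hypothetical sketch of such a timer (the function and parameter names below are illustrative and not taken from the disclosure), the remaining time can be estimated from a cumulative force budget and the force currently being applied:

```python
def remaining_time_s(cumulative_force_ns: float,
                     cumulative_threshold_ns: float,
                     current_force_n: float) -> float:
    """Estimate the seconds left before the cumulative force (in newton-seconds)
    reaches the cumulative force threshold, assuming the currently applied
    force (in newtons) stays constant."""
    if current_force_n <= 0.0:
        return float("inf")  # no force applied, so the budget is not being consumed
    budget_ns = max(cumulative_threshold_ns - cumulative_force_ns, 0.0)
    return budget_ns / current_force_n
```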
  • the one or more processors can receive the data stream of the medical procedure.
  • the data stream can comprise data associated with force vectors that represent directions and magnitudes of force interactions between instruments and anatomical structures involved in the medical procedure, the force vectors captured at a plurality of time steps.
  • the one or more processors can determine the amount of force to be applied to the anatomical structure based at least in part on the force vectors.
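  • As one possible way to reduce such force vectors to an amount of force (a sketch assuming one (fx, fy, fz) vector per time step; the names and the uniform one-second step are assumptions), per-step magnitudes can be computed and summarized:

```python
import numpy as np

def summarize_force_vectors(force_vectors) -> dict:
    """force_vectors: sequence of (fx, fy, fz) tuples, one per time step.
    Returns per-step magnitudes plus simple summaries that could serve as
    the 'amount of force' surfaced to the operator."""
    vectors = np.asarray(force_vectors, dtype=float).reshape(-1, 3)
    magnitudes = np.linalg.norm(vectors, axis=1)
    return {
        "per_step_n": magnitudes,
        "peak_n": float(magnitudes.max()),
        "mean_n": float(magnitudes.mean()),
        "cumulative_ns": float(magnitudes.sum()),  # assumes uniform 1 s time steps
    }
```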
  • the one or more processors can receive the data stream of the medical procedure, the data stream comprising data associated with a skill level of the one or more surgeons or force vectors involved in previous medical procedures involving the one or more surgeons.
  • the one or more processors can determine the amount of force to be applied to the anatomical structure based at least in part on the skill level of the one or more surgeons or the force vectors involved in the previous medical procedures involving the one or more surgeons.
  • the one or more processors can determine the amount of force to be applied to the anatomical structure based at least in part on comparing a magnitude of a force interaction between the anatomical structure and an instrument involved in the medical procedure to one or more of: a historical magnitude of a prior force interaction, a force magnitude associated with a force limit, an optimal magnitude associated with an optimal force, a cumulative force associated with magnitudes of one or more previous force interactions, a first interaction-type magnitude associated with a primary interaction type for an instrument involved in the interaction with the anatomical structure, or a second interaction-type magnitude associated with a secondary interaction type for the instrument involved in the interaction with the anatomical structure.
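  • A hedged sketch of that comparison logic (the reference names mirror the categories listed above, but the interface is hypothetical):

```python
def compare_force_magnitude(magnitude_n: float, references_n: dict) -> dict:
    """references_n maps a reference name (e.g., 'historical', 'force_limit',
    'optimal', 'cumulative', 'primary_interaction', 'secondary_interaction')
    to a magnitude in newtons. Returns, per reference, whether the measured
    magnitude exceeds it."""
    return {name: magnitude_n > value for name, value in references_n.items()}
```

  • For example, compare_force_magnitude(1.7, {"force_limit": 1.5, "optimal": 0.8}) would flag both references as exceeded.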
  • the one or more processors can provide, for display via a graphical user interface, one or more images associated with the data stream that correspond to the anatomical structure.
  • the one or more processors can update one or more pixels of the one or more images based at least in part on the amount of force to be applied.
  • the one or more pixels correspond to at least a portion of the anatomical structure represented by the one or more images of instruments and anatomical structures that are in a field of view of the imaging device.
  • the one or more processors can determine at least one area associated with at least one overlay, the at least one area corresponding to the anatomical structure represented by the one or more images.
  • the overlay can be configured to cause the graphical user interface to update a representation of the at least one area when displayed via the display.
  • the one or more processors can update the one or more pixels based at least in part on the at least one overlay.
  • the one or more processors can determine at least one area associated with at least one overlay, the at least one area corresponding to the anatomical structures represented by the one or more images, construct a heatmap based at least in part on the at least one area and the amount of force to be applied; and update the one or more pixels based at least in part on the at least one overlay and the heatmap.
  • the one or more processors can determine at least one first area where the amount of force to be applied satisfies a first threshold.
  • the one or more processors can determine at least one second area where the amount of force to be applied satisfies a second threshold.
  • the one or more processors can update the one or more pixels based at least in part on the at least one first area and the at least one second area.
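  • One way such a two-threshold pixel update could look (purely illustrative; the array shapes, colors, and blending factor are assumptions rather than the disclosed implementation):

```python
import numpy as np

def overlay_force_areas(image: np.ndarray, force_map: np.ndarray,
                        structure_mask: np.ndarray,
                        first_threshold_n: float, second_threshold_n: float,
                        alpha: float = 0.4) -> np.ndarray:
    """image: (H, W, 3) uint8 frame; force_map: (H, W) per-pixel force estimate;
    structure_mask: (H, W) bool area of the anatomical structure.
    Pixels satisfying the first threshold are tinted yellow and pixels
    satisfying the second (higher) threshold are tinted red."""
    out = image.astype(float)
    first_area = structure_mask & (force_map >= first_threshold_n) & (force_map < second_threshold_n)
    second_area = structure_mask & (force_map >= second_threshold_n)
    for area, color in ((first_area, (255, 255, 0)), (second_area, (255, 0, 0))):
        out[area] = (1.0 - alpha) * out[area] + alpha * np.array(color, dtype=float)
    return out.astype(np.uint8)
```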
  • the one or more processors can generate data associated with a graphical user interface based at least in part on the indication of the amount of force to control performance of the medical procedure with the robotic medical system and images representing instruments and anatomical structures that are in a field of view of an imaging device.
  • the one or more processors can provide the data associated with the graphical user interface to a display device.
  • the data can be associated with the graphical user interface configured to cause the display device to provide an output representing the graphical user interface.
  • the one or more processors can determine an amount of force to be applied to the anatomical structure based at least on a lookup table, where the lookup table represents a plurality of force limits corresponding to a plurality of types of anatomical structures.
  • the one or more processors can determine the amount of force to be applied to the anatomical structure based at least in part on the type of the anatomical structure and the type of the interaction with the anatomical structure, the amount of force to be applied representing a range of forces to be applied.
  • the one or more processors can determine the amount of force to be applied to at least a portion of the anatomical structure based at least in part on the type of the anatomical structure and the type of the interaction with the anatomical structure.
  • the one or more processors can determine the type of the interaction with the anatomical structure as a grab interaction, a retract interaction, a cut interaction, or a cauterize interaction.
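  • A toy sketch of such a lookup table keyed by (anatomical structure type, interaction type); the structure names and force ranges are placeholder values for illustration only, not clinical values from the disclosure:

```python
# (structure type, interaction type) -> (min force, max force) in newtons.
# All entries are illustrative placeholders.
FORCE_RANGES_N = {
    ("liver", "retract"): (0.5, 2.0),
    ("liver", "grab"): (0.3, 1.5),
    ("bowel", "grab"): (0.2, 1.0),
    ("bowel", "cut"): (0.1, 0.8),
}

def force_range_for(structure_type: str, interaction_type: str,
                    default=(0.1, 1.0)) -> tuple:
    """Return the range of forces to be applied for the detected structure
    and interaction, falling back to a default when no entry exists."""
    return FORCE_RANGES_N.get((structure_type, interaction_type), default)
```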
  • the one or more processors can detect a change to the anatomical structure during the medical procedure.
  • the one or more processors can update the amount of force to be applied to the anatomical structure based at least in part on the change to the anatomical structure.
  • the one or more processors can provide a second indication of the amount of force to control performance of the medical procedure with the robotic medical system based at least on the update to the amount of force to be applied to the anatomical structure.
  • the interaction with the anatomical structure can involve contact between an instrument of the robotic medical system and the anatomical structure.
  • the method can include the one or more processors receiving a data stream of a medical procedure performed with a robotic medical system.
  • the method can include the one or more processors identifying, using the data stream and with one or more models trained with machine learning, a type of an anatomical structure on which the medical procedure is performed, and a type of an interaction with the anatomical structure.
  • the method can include the one or more processors determining an amount of force to be applied to the anatomical structure based at least in part on the type of the anatomical structure and the type of the interaction with the anatomical structure.
  • the method can include the one or more processors providing an indication of the amount of force to control performance of the medical procedure with the robotic medical system.
  • the method can include the one or more processors generating a graphical user interface based at least in part on images representing instruments and anatomical structures that are in a field of view of an imaging device, the images captured at a plurality of time steps.
  • the graphical user interface comprises a timer representing an amount of time during which the amount of force to be applied is applied to the anatomical structure or a remaining amount of time during which the amount of force to be applied satisfies a cumulative force threshold.
  • the method can include the one or more processors receiving the data stream of the medical procedure.
  • the data stream can comprise data associated with force vectors that represent directions and magnitudes of force interactions between instruments and anatomical structures involved in the medical procedure, the force vectors captured at a plurality of time steps.
  • the method can include the one or more processors determining the amount of force to be applied to the anatomical structure based at least in part on the force vectors.
  • aspects of the technical solutions are directed to a non-transitory computer-readable medium storing processor-executable instructions that, when executed by one or more processors, cause the one or more processors to receive a data stream of a medical procedure performed with a robotic medical system.
  • the instructions can include instructions to identify, using the data stream and with one or more models trained with machine learning, a type of an anatomical structure on which the medical procedure is performed, and a type of an interaction with the anatomical structure.
  • the instructions can include instructions to determine an amount of force to be applied to the anatomical structure based at least in part on the type of the anatomical structure and the type of the interaction with the anatomical structure.
  • the instructions can include instructions to provide an indication of the amount of force to control performance of the medical procedure with the robotic medical system.
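  • Tying the pieces above together, a hedged end-to-end sketch of that flow (the model objects, their predict methods, and the display hook are assumptions made for illustration, not the claimed implementation):

```python
def update_user_interface(data_stream: dict, anatomy_model, interaction_model,
                          force_ranges_n: dict, display) -> dict:
    """Illustrative flow: identify the structure and interaction from the data
    stream, look up the amount of force to be applied, and provide an
    indication of it for display."""
    frames = data_stream["video"]
    structure_type = anatomy_model.predict(frames)        # e.g., "liver"
    interaction_type = interaction_model.predict(frames)  # e.g., "retract"
    low_n, high_n = force_ranges_n.get((structure_type, interaction_type), (0.1, 1.0))
    indication = {
        "structure": structure_type,
        "interaction": interaction_type,
        "force_range_n": (low_n, high_n),
    }
    display.show(indication)  # hypothetical display hook
    return indication
```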
  • FIG. 1A depicts an example system to update a user interface based on force applied by an instrument during teleoperation of robotic systems.
  • FIG. 1B illustrates a schematic block diagram of an example environment for updating a user interface based on force applied by an instrument during teleoperation using a system, according to some embodiments;
  • FIG. 2 illustrates a flowchart diagram illustrating an example method for updating a user interface based on force applied by an instrument during teleoperation, according to some embodiments;
  • FIG. 3 illustrates an image of an example graphical user interface, according to some embodiments.
  • FIG. 4 illustrates a graph of example force limits, according to some embodiments.
  • FIG. 5 illustrates a diagram of a medical environment, according to some embodiments.
  • FIG. 6 illustrates a block diagram depicting an architecture for a computer system that can be employed to implement elements of the systems and methods described and illustrated herein.
  • while the present disclosure is discussed in the context of a surgical procedure, in some embodiments the present disclosure can be applicable to other medical sessions, environments, or activities, as well as non-medical activities where removal of irrelevant information is desired.
  • Systems, methods, apparatuses, and non-transitory computer-readable media are provided for updating a user interface based on force applied by an instrument (e.g., a medical instrument) during teleoperation.
  • methods described herein include receiving a data stream of a medical procedure performed with a robotic medical system; identifying, using the data stream and with one or more models trained with machine learning, a type of an anatomical structure on which the medical procedure is performed, and a type of an interaction with the anatomical structure; and determining an amount of force to be applied to the anatomical structure based at least in part on the type of the anatomical structure and the type of the interaction with the anatomical structure.
  • Arrangements also relate to providing an indication of the amount of force to control performance of the medical procedure with the robotic medical system.
  • the surgeons controlling such robotic surgical systems often need to estimate the force applied by medical instruments involved in the surgeries almost solely based on the images displayed for the surgeon during the surgery via a user input system (described herein).
  • the present disclosure includes systems and methods that enable surgeons to quickly quantify an amount of force that is or can be applied to one or more anatomical structures involved in the surgery.
  • the described techniques can improve the perception of the way the robotic system is interacting with patients during medical procedures, reduce the chances of applying force to anatomical structures that unnecessarily result in adverse effects to the short- and long-term health outcomes of patients, and generally improve overall patient outcomes.
  • a surgeon can move faster than they otherwise would while still avoiding unintended damage to the anatomical structures involved in the medical procedure.
  • FIG. 1A depicts an example system 100 to update a user interface based on force applied by a medical instrument during teleoperation of robotic systems such as, for example, robotic medical systems used in robot-assisted surgeries.
  • the example system 100 can include a combination of hardware and software for generating indications of an amount of force during operation of a robotic system.
  • the example system 100 can include a network 101, a medical environment 102, and a data processing system 130 as described herein.
  • the example system 100 can include a medical environment 102 (e.g., a medical environment that is the same as, or similar to, the example medical environment 500 of FIG. 5) including one or more data capture devices 110, medical instruments 112, visualization tools 114, displays 116 and robotic medical systems (RMSs) 120.
  • RMS 120 can include or generate various types of data streams 158 that are described herein, and can operate using system configurations 122.
  • One or more RMSs 120 can be communicatively coupled with one or more data processing systems 130.
  • the RMS 120 can be deployed in any medical environment 102.
  • the medical environment 102 can include any space or facility for performing medical procedures, such as a surgical facility, or an operating room.
  • the medical environment 102 can include medical instruments 112 (e.g., surgical tools used for specialized tasks) that the RMS 120 can use for performing operational procedures, such as surgical patient procedures, whether invasive, non-invasive, or any in-patient or out-patient procedures.
  • RMS 120 can be centralized or distributed across a plurality of computing devices or systems, such as computing devices 600 (e.g., used on servers, network devices or cloud computing products) to implement various functionalities of the RMS 120, including communicating or processing data streams 158 across various devices via the network 101.
  • the medical environment 102 can include one or more data capture devices 110 (e.g., optical devices, such as cameras or sensors or other types of sensors or detectors) for capturing data streams 158.
  • the data streams 158 can include any sensor data, such as images or videos of a surgery, kinematics data on any movement of medical instruments 112, or any events data, such as installation, configuration or selection events corresponding to medical instruments 112.
  • the medical environment 102 can include one or more visualization tools 114 to gather the captured data streams 158 and process it for display to the user (e.g., a surgeon, a medical professional or an engineer or a technician configuring RMS) via one or more (e.g., touchscreen) displays 116.
  • a display 116 can present data stream 158 (e.g., images or video frames) of a medical procedure (e.g., surgery) being performed using the RMS 120 while handling, manipulating, holding or otherwise utilizing medical instruments 112 to perform surgical tasks at the surgical site.
  • RMS 120 can include system configurations 122 based on which RMS 120 can operate, and the functionality of which can impact the data flow of the data streams 158.
  • the system 100 can include one or more data capture devices 110 (e.g., video cameras, sensors or detectors) for collecting any data stream 158, that can be used for machine learning, including detection of objects from sensor data (e.g., video frames or force or feedback data), detection of particular events (e.g., user interface selection of, or a surgeon’s engaging of, a medical instrument 112) or detection of kinematics (e.g., movements of the medical instrument 112).
  • the data capture devices 110 can include cameras or other image capture devices for capturing videos or images from a particular viewpoint within the medical environment 102.
  • the data capture devices 110 can be positioned, mounted, or otherwise located to capture content from any viewpoint that facilitates the data processing system capturing various surgical tasks or actions.
  • the data capture devices 110 can include any of a variety of detectors, sensors, cameras, video imaging devices, infrared imaging devices, visible light imaging devices, intensity imaging devices (e.g., black, color, grayscale imaging devices, etc.), depth imaging devices (e.g., stereoscopic imaging devices, time-of-flight imaging devices, etc.), medical imaging devices such as endoscopic imaging devices, ultrasound imaging devices, etc., non-visible light imaging devices, any combination or sub-combination of the above-mentioned imaging devices, or any other type of imaging devices that can be suitable for the purposes described herein.
  • the data capture devices 110 can include cameras that a surgeon can use to perform a surgery and observe manipulation components within a purview of field of view suitable for the given task performance.
  • the data capture devices can output any type of data streams 158, including data streams 158 of kinematics data (e.g., kinematics data streams), data streams 158 of events data (e.g., events data streams) and data streams 158 of sensor data (e.g., sensors data streams).
  • data capture devices 110 can capture, detect, or acquire sensor data such as videos or images, including for example, still images, video images, vector images, bitmap images, other types of images, or combinations thereof.
  • the data capture devices 110 can capture the images at any suitable predetermined capture rate or frequency.
  • Settings, such as zoom settings or resolution, of each of the data capture devices 110 can vary as desired to capture suitable images from any viewpoint.
  • data capture devices 110 can have fixed viewpoints, locations, positions, or orientations.
  • the data capture devices 110 can be portable, or otherwise configured to change orientation or telescope in various directions.
  • the data capture devices 110 can be part of a multi-sensor architecture including multiple sensors, with each sensor being configured to detect, measure, or otherwise capture a particular parameter (e.g., sound, images, or pressure).
  • the data capture devices 110 can generate sensor data from any type and form of a sensor, such as a positioning sensor, a biometric sensor, a velocity sensor, an acceleration sensor, a vibration sensor, a motion sensor, a pressure sensor, a light sensor, a distance sensor, a current sensor, a focus sensor, a temperature or pressure sensor or any other type and form of sensor used for providing data on the medical instruments 112, or the data capture devices (e.g., optical devices).
  • the data capture device 110 can include a location sensor, a distance sensor or a positioning sensor providing coordinate locations of a medical instrument 112 (e.g., kinematics data).
  • the data capture device 110 can include a sensor providing information or data on a location, position or spatial orientation of an object (e.g., medical instrument 112 or a lens of data capture device 110) with respect to a reference point for kinematics data.
  • the reference point can include any fixed, defined location used as the starting point for measuring distances and positions in a specific direction, serving as the origin from which all other points or locations can be determined.
  • the display 116 can show, illustrate or play the data stream 158, such as a video stream, in which the medical instruments 112 at or near surgical sites are shown.
  • the display 116 can display a rectangular image of a surgical site along with at least a portion of the medical instruments 112 being used to perform surgical tasks.
  • the display 116 can provide compiled or composite images generated by the visualization tool 114 from a plurality of data capture devices 110 to provide a visual feedback from one or more points of view.
  • the visualization tool 114 can be configured or designed to receive any number of different data streams 158 from any number of data capture devices 110 and combine them into a single data stream displayed on a display 116.
  • the visualization tool 114 can be configured to receive a plurality of data stream components and combine the plurality of data stream components into a single data stream 158.
  • the visualization tool 114 can receive a visual sensor data from one or more of the medical instruments 112, sensors or cameras with respect to a surgical site or an area in which a surgery is performed.
  • the visualization tool 114 can incorporate, combine or utilize multiple types of data (e.g., positioning data of a medical instrument 112 along with sensor readings of pressure, temperature, vibration or any other data) to generate an output to present on a display 116.
  • the visualization tool 114 can present locations of medical instruments 112 along with locations of any reference points or surgical sites, including locations of anatomical parts of the patient (e.g., organs, glands or bones).
  • the medical instruments 112 can be any type and form of tool or instrument used for surgery, medical procedures or a tool in an operating room or environment.
  • the medical instrument 112 can be imaged by, associated with, or include an image capture device.
  • a medical instrument 112 can be a tool for making incisions, a tool for suturing a wound, an endoscope for visualizing organs or tissues, an imaging device, a needle and a thread for stitching a wound, a surgical scalpel, forceps, scissors, retractors, graspers, or any other tool or instrument to be used during a surgery.
  • the medical instruments 112 can include hemostats, trocars, surgical drills, suction devices or any instruments for use during a surgery.
  • the medical instrument 112 can include other or additional types of therapeutic or diagnostic medical imaging implements.
  • the medical instrument 112 can be configured to be installed in, coupled with, or manipulated by an RMS 120, such as by manipulator arms or other components for holding, using and manipulating the medical instruments.
  • the medical instruments 112 can be the same as, or similar to, the medical instruments discussed with respect to FIG. 5.
  • the RMS 120 can be a computer-assisted system configured to perform a surgical or medical procedure or activity on a patient via or using or with the assistance of one or more robotic components or the medical instruments 112.
  • the RMS 120 can include any number of manipulator arms for grasping, holding or manipulating various medical instruments 112 and performing computer-assisted medical tasks using the medical instruments 112 controlled by the manipulator arms.
  • the data streams 158 can be generated by the RMS 120.
  • sensor data associated with the data streams 158 can include images (e.g., video images) captured by a medical instrument 112, which can be sent to the visualization tool 114.
  • the RMS 120 can include one or more input ports to receive direct or indirect connection of one or more auxiliary devices.
  • the visualization tool 114 can be connected to the RMS 120 to receive the images from the medical instrument when the medical instrument is installed in the RMS 120 (e.g., on a manipulator arm for handling medical instruments 112).
  • the data stream 158 can include data indicative of positioning and movement of the medical instruments 112 that can be captured or identified by data packets of a kinematics data.
  • the visualization tool 114 can combine the data stream components from the data capture devices 110 and the medical instrument 112 into a single combined data stream 158 which can be indicated or presented on a display 116.
  • the RMS 120 provides the data streams 158 to the data processing system 130 periodically, continuously, or in real-time.
  • Data packets can include a unit of data in a data stream 158.
  • the data packets can include the actual information being sent and metadata, such as a source and a destination address, a port identifier or any other information for transmitting data.
  • the data packets can include a data (e.g., a payload) corresponding to an event (e.g., installation, uninstallation, engagement or setup of a medical instrument 112).
  • the data packets can include data corresponding to sensor information (e.g., a video frame captured by a camera), or data on movement of a medical instrument 112.
  • the data packets can be transmitted in the data streams 158 that can be separated or combined.
  • Data packets can include one or more timestamps, which can indicate a particular time when particular events took place. Timestamps can include time indications expressed in any combination of nanoseconds, microseconds, milliseconds, seconds, hours, days, months or years. Timestamps can be included in the payload or metadata of data packets and can indicate the time when a data packet was generated, the time when the data packet was transmitted from the device that generated the data packet, the time when the data packet was received by another device (e.g., a system within the RMS 120, or another device on a network) or a time when the data packet is stored into a data repository 132.
  • the data repository 132 can include one or more data files, data structures, arrays, values, or other information that facilitates operation of the data processing system 130.
  • the data repository 132 can include one or more local or distributed databases and can include a database management system.
  • the data repository 132 can include, maintain, or manage one or more data streams 158.
  • the data streams 158 can include or be formed from one or more of a video stream, image stream, stream of sensor measurements, event stream, or kinematics stream.
  • the data streams 158 can include data collected by one or more data capture devices 110, such as a set of 3D sensors from a variety of angles or vantage points with respect to the procedure activity (e.g., point or area of surgery).
  • the data stream 158 can include any stream of data.
  • the data streams 158 can include a video stream, including a series of video frames or organized into video fragments, such as video fragments of about 1, 2, 3, 4, 5, 10 or 15 seconds of a video. Each second of the video can include, for example, 30, 45, 60, 90 or 120 video frames per second.
  • the data streams 158 can include an event stream which can include a stream of event data or information, such as packets, which identify or convey a state of the RMS 120 or an event that occurred in association with the RMS 120.
  • data stream 158 can include any portion of system configuration 122, including information on operations on data streams 158, data on installation, uninstallation, calibration, set up, attachment, detachment or any other action performed by or on an RMS 120 with respect to the medical instruments 112.
  • the data stream 158 can include data about an event, such as a state of the RMS 120 indicating whether the medical instrument 112 is calibrated, adjusted or includes a manipulator arm installed on the RMS 120.
  • a data stream 158 representing event data (e.g., event data stream) can include data on whether an RMS 120 was fully functional (e.g., without errors) during the procedure. For example, when a medical instrument 112 is installed on a manipulator arm of the RMS 120, a signal or data packet(s) can be generated indicating that the medical instrument 112 has been installed on the manipulator arm of the RMS 120.
  • the data stream 158 can include a stream of kinematics data which can refer to or include data associated with one or more of the manipulator arms or medical instruments 112 attached to the manipulator arms, such as arm locations or positioning.
  • the data corresponding to the medical instruments 112 can be captured or detected by one or more displacement transducers, orientational sensors, positional sensors, or other types of sensors and devices to measure parameters or generate kinematics information.
  • the kinematics data can include sensor data along with time stamps and an indication of the medical instrument 112 or type of medical instrument 112 associated with the data stream 158.
  • the data repository 132 can store sensor data having video frames that can include one or more static images or frames extracted from a sequence of images of a video file.
  • A video frame can represent a specific moment in time and can be identified by metadata including a timestamp.
  • Video frames can display visual content of the video of a medical procedure being analyzed by the data processing system 130 (e.g., by the anatomy detector 146), including the performance of the surgeon performing the procedure. For example, in a video file capturing a robotic surgical procedure, a video frame can depict a snapshot of the surgical task, illustrating a movement or usage of a medical instrument 112 such as a robotic arm manipulating a surgical tool within the patient's body.
  • the data streams 158 corresponding to sensor data (e.g., videos), events, and kinematics can include related, corresponding or duplicate information that can be used for cross-data comparisons and verification that all three data sources are in agreement.
  • the detection function can implement a check for consistency between diverse data types and data sources by mapping and comparing timestamps between different data types to verify that they consistently progress over time, such as in accordance with the expected flow and correlation of events, video stream details and kinematics values.
  • an installation of a medical instrument 112 can be recorded as a system event and provided in a data stream 158 of events data.
  • the installed medical instrument 112 can show up in sensor data (e.g., in a video), which can be detected by the data processing system 130, which can include a computer vision model.
  • Kinematics data can confirm movements of the medical instrument 112 according to the movements detected by the data processing system 130.
  • the data processing system 130 can verify time synchronization across the three data sources (e.g., three data streams 158).
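  • A small sketch of such a cross-stream consistency check (the tolerance value and the assumption that each stream exposes a list of timestamps in seconds are illustrative):

```python
def streams_time_consistent(event_ts, video_ts, kinematics_ts,
                            tolerance_s: float = 0.5) -> bool:
    """Verify that timestamps in each of the three streams increase
    monotonically and that the streams cover roughly the same time span."""
    streams = (event_ts, video_ts, kinematics_ts)
    if any(len(ts) == 0 for ts in streams):
        return False
    monotonic = all(all(a <= b for a, b in zip(ts, ts[1:])) for ts in streams)
    starts = [ts[0] for ts in streams]
    ends = [ts[-1] for ts in streams]
    aligned = (max(starts) - min(starts) <= tolerance_s) and \
              (max(ends) - min(ends) <= tolerance_s)
    return monotonic and aligned
```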
  • the data processing system 130 can include any combination of hardware or software that perform one or more of the functions described herein.
  • the data processing system 130 can include any combination of hardware and software for updating a user interface based on force applied by a medical instrument during teleoperation.
  • the data processing system 130 can include any computing device (e.g., a computing device that is the same as, or similar to, the computing device 600 of FIG. 6) and can include one or more servers, virtual machines, or can be part of or include a cloud computing environment.
  • the data processing system 130 can be provided via a centralized computing device or be provided via distributed computing components, such as including multiple, logically grouped servers and facilitating distributed computing techniques.
  • the logical group of servers can be referred to as a data center, server farm or a machine farm.
  • the servers which can include virtual machines, can also be geographically dispersed.
  • a data center or machine farm can be administered as a single entity, or the machine farm can include a plurality of machine farms.
  • the servers within each machine farm can be heterogeneous - one or more of the servers or machines can operate according to one or more type of operating system platform.
  • the topology of the network 101 can assume any form such as point-to-point, bus, star, ring, mesh, tree, etc.
  • the network 101 can utilize different techniques and layers or stacks of protocols, including, for example, the Ethernet protocol, the internet protocol suite (TCP/IP), the ATM (Asynchronous Transfer Mode) technique, the SONET (Synchronous Optical Networking) protocol, the SDH (Synchronous Digital Hierarchy) protocol, etc.
  • the TCP/IP internet protocol suite can include application layer, transport layer, internet layer (including, e.g., IPv6), or the link layer.
  • the network 101 can be a type of a broadcast network, a telecommunications network, a data communication network, a computer network, a Bluetooth network, or other types of wired and wireless networks.
  • the data processing system 130 can be located at least partially at the location of the surgical facility associated with the medical environment 102 or remotely therefrom. Elements of the data processing system 130, or components thereof can be accessible via portable devices such as laptops, mobile devices, wearable smart devices, etc.
  • the data processing system 130, or components thereof can include other or additional elements that can be considered desirable to have in performing the functions described herein.
  • the data processing system 130, or components thereof can include, or be associated with, one or more components or functionality of a computing including, for example, one or more processors coupled with memory that can store instructions, data or commands for implementing the functionalities of the data processing system 130 discussed herein.
  • the data processing system 130 can include data collector 144, an anatomy detector 146, an interaction classifier 148, a force predictor 150, a performance controller 152, or a data repository 132.
  • the performance controller 152 can include a timer 154 or a user interface 156.
  • the data processing system 130 can be communicatively coupled with one or more data processing systems 130.
  • the data processing system 130 can be implemented by one or more components of the medical environment 102.
  • the data processing system 130 can receive one or more data streams 158 that are described herein, and can monitor operation of the RMS 120 using the system configurations 122.
  • One or more RMSs 120 can be communicatively coupled with one or more data processing systems 130.
  • the data repository 132 can be configured to receive, store, and provide the data streams 158 (e.g., one or more data packets associated with the data streams 158) before, during, or after a medical procedure.
  • the data repository 132 stores data associated with one or more of a machine learning (ML) model 134, historical data 136 associated with one or more previously performed medical procedures involving the RMS 120, types 138 (e.g., one or more force types), thresholds 140 (e.g., thresholds representing force limits), or tables 142 (e.g., tables representing one or more sets of force limits).
  • the data collector 144 can be implemented by the data processing system 130 or can be a device that is the same as, or similar to, the computing device 600 of FIG. 6.
  • the data collector 144 can receive the data streams 158.
  • the data collector 144 can receive the data streams via the network 101.
  • the data collector 144 can receive the data streams 158 from the data processing system 130.
  • the data collector 144 can receive the data streams 158 of a medical procedure performed with the RMS 120.
  • the one or more packets associated with the data streams 158 can represent one or more images during a medical procedure. The one or more images can be captured or otherwise obtained by the visualization tool 114.
  • the one or more images can represent one or more anatomical features or one or more medical instruments as described herein.
  • the data collector 144 can provide the data streams 158 (e.g., one or more packets of the data streams 158) to the anatomy detector 146 or the interaction classifier 148.
  • the anatomy detector 146 can be implemented by the data processing system 130 or can be a device that is the same as, or similar to, the computing device 600 of FIG. 6.
  • the anatomy detector 146 can receive the data streams 158.
  • the anatomy detector 146 can receive the data streams 158 from the data collector 144.
  • the anatomy detector 146 can identify a type of an anatomical structure on which a medical procedure is performed.
  • the anatomy detector 146 can identify the type of the anatomical structure on which a medical procedure is performed based on the data streams 158.
  • the anatomy detector 146 can identify the type of the anatomical structure on which a medical procedure is performed based on the data streams 158 and an ML model.
  • the anatomy detector 146 can provide the data streams 158 to the ML model to cause the ML model to provide an output, the output representing the type of the anatomical structure on which the medical procedure is performed.
  • the anatomy detector 146 can provide data associated with the type of the anatomical structure to the force predictor 150 or the performance controller 152.
  • the interaction classifier 148 can be implemented by the data processing system 130 or can be a device that is the same as, or similar to, the computing device 600 of FIG. 6.
  • the interaction classifier 148 can receive the data streams 158.
  • the interaction classifier 148 can receive the data streams 158 from the data collector 144.
  • the interaction classifier 148 can identify a type of an interaction involving an anatomical structure.
  • the interaction classifier 148 can identify the type of the interaction involving the anatomical structure based on the data streams 158.
  • the interaction classifier 148 can identify the type of the interaction involving the anatomical structure based on the data streams 158 and an ML model.
  • the interaction classifier 148 can provide the data streams 158 to the ML model to cause the ML model to provide an output, the output representing the type of the interaction with the anatomical structure on which the medical procedure is performed.
  • the interaction classifier 148 can provide data associated with the type of the interaction to the force predictor 150 or the performance controller 152.
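  • As a sketch of how such a model output could be mapped to a type (the anatomy class list and the scoring interface are assumptions; the interaction labels mirror the interaction types listed above):

```python
ANATOMY_CLASSES = ["liver", "bowel", "stomach", "vessel"]      # illustrative labels
INTERACTION_CLASSES = ["grab", "retract", "cut", "cauterize"]  # per the interaction types above

def classify(features, model, class_names):
    """model is assumed to return one score per class for the given features;
    the highest-scoring class name is reported as the detected type."""
    scores = model(features)
    best_index = max(range(len(class_names)), key=lambda i: scores[i])
    return class_names[best_index], scores[best_index]
```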
  • the force predictor 150 can be implemented by the data processing system 130 or can be a device that is the same as, or similar to, the computing device 600 of FIG. 6.
  • the force predictor 150 can receive the data streams 158.
  • the force predictor 150 can receive the data streams 158 from the data collector 144.
  • the force predictor 150 can receive data associated with the type of the anatomical structure from the anatomy detector 146 or the force predictor 150 can receive data associated with the type of the interaction from the interaction classifier 148.
  • the force predictor 150 can determine an amount of force to be applied to the anatomical structure.
  • the force predictor 150 can determine the amount of force to be applied to the anatomical structure based on the type of the anatomical structure or the type of the interaction with the anatomical structure. In some embodiments, the force predictor 150 provides data associated with the amount of force to be applied to the anatomical structure to the performance controller 152.
  • the performance controller 152 can be implemented by the data processing system 130 or can be a device that is the same as, or similar to, the computing device 600 of FIG. 6.
  • the performance controller 152 can receive the data associated with the amount of force to be applied to the anatomical structure from the force predictor 150.
  • the performance controller 152 can determine an indication of the amount of force to control performance of the medical procedure with the robotic medical system.
  • the performance controller 152 can determine a timer 154.
  • the performance controller 152 determines the timer 154 periodically (e.g., every 1 second, every 2 seconds, etc.). In some examples, the performance controller 152 determines the timer 154 continuously.
  • the performance controller provides the indication of the amount of force to be applied, where the indication represents the timer 154.
  • the performance controller 152 can determine a user interface 156. In examples, the performance controller 152 determines the user interface 156 periodically (e.g., every 1 second, every 2 seconds, etc.). In some examples, the performance controller 152 determines the user interface 156 continuously. In some embodiments, the performance controller provides the indication of the amount of force to be applied, where the indication represents the user interface 156. In some embodiments, the performance controller 152 can provide data associated with the indication of the amount of force to cause a device to display the indication of the amount of force.
  • the performance controller 152 can provide the data associated with the indication of the amount of force to cause display 116 to display the indication of the amount of force.
  • the data associated with the indication of the amount of force can be configured to cause the display 116 to display the indication.
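  • A sketch of the kind of indication data the performance controller 152 could assemble for the display 116 (the field names and rounding are hypothetical):

```python
def build_force_indication(current_force_n: float, force_range_n: tuple,
                           remaining_time_s: float) -> dict:
    """Package the amount of force, its allowed range, and the timer value
    into a payload a display could render."""
    low_n, high_n = force_range_n
    status = "within_range" if low_n <= current_force_n <= high_n else "out_of_range"
    return {
        "force_n": round(current_force_n, 2),
        "force_range_n": [low_n, high_n],
        "remaining_time_s": round(remaining_time_s, 1),
        "status": status,
    }
```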
  • the data repository 132 can be implemented by the data processing system 130 or can be a device that is the same as, or similar to, the computing device 600 of FIG. 6.
  • the data repository can receive data from any of the devices of FIG. 1A either directly or indirectly.
  • the data includes the ML model 134, the historical data 136, the types 138, the threshold 140, or the table 142.
  • the data stored by the data repository 132 is associated with a previously-performed medical procedure.
  • the data stored by the data repository 132 is associated with a current medical procedure.
  • the data repository 132 can receive the data streams 158 or the system configurations 122 and store the data streams 158 or the system configurations 122. The data repository 132 can then provide the data streams 158 or the system configurations 122 (e.g., one or more data packets thereof) to the one or more of the components of the data processing system 130.
  • FIG. 1B is a schematic block diagram illustrating an example environment 160 in which devices, systems, methods, or products described herein can be implemented, according to some embodiments.
  • the environment 160 includes a user interface system 162, a sensing system 164, and a user input system 168.
  • the user interface system 162 is the same as, or similar to, the data processing system 130 of FIG. 1A.
  • the sensing system 164 is the same as, or similar to, one or more data capture devices 110 of FIG. 1A.
  • the user interface system 162 can receive video data 166 from the sensing system 164. Additionally, the user interface system 162 can receive robotic system data 170, and medical instrument data 172. In examples, the user interface system 162 can receive the video data 166, the robotic system data 170, or the medical instrument data 172 as part of a data stream.
  • the data stream can be received from a robotic medical system that is the same as, or similar to, the robotic medical system 120 of FIG. 1A.
  • the data stream (e.g., one or more packets included in the data stream) can be the same as, or similar to, the data streams 158 of FIG. 1A.
  • the user interface system 162 can also communicate (e.g., establish communication connections to exchange data) with the user input system 168.
  • the user interface system 162, the sensing system 164, or the user input system 168 can include or be implemented by one or more suitable computing systems, such as the computing device 600 of FIG. 6.
  • user interface system 162, sensing system 164, or user input system 168 can include one or more components that are the same as, or similar to, one or more of the components of the computing device 600.
  • the user interface system 162, sensing system 164, or user input system 168 can be configured to communicate (e.g., to establish communication connections to exchange data).
  • the system 100 can include one or more devices or systems that are the same as, or similar to, one or more devices or systems discussed with respect to example medical environment 500 of FIG. 5.
  • the processes described herein can be implemented by the user interface system 162.
  • some or all of the processes implemented by the user interface system 162 can be implemented by one or more other devices (alone or in cooperation with the user interface system 162) such as, for example, the sensing system 164 or the user input system 168, which can be the same as, or similar to, the computing device 600 of FIG. 6.
  • although the user interface system 162 is illustrated as a separate system from the user input system 168, in examples, the user interface system 162 can be included in (e.g., implemented by) the user input system 168. Accordingly, one or more of the functions described herein as being performed by the user interface system 162 can similarly be performed by the user input system 168.
  • the user interface system 162 can be the same as, or similar to, the computing device 600 of FIG. 6.
  • the user input system 168 can be the same as, or similar to, the user control system 510 of FIG. 5 or the computing device 600 of FIG. 6.
  • the sensing system 164 can be the same as, or similar to, the computing device 600 of FIG. 6.
  • the user interface system 162 can receive the robotic system data 170 from the sensing system 164, where the sensing system includes a device that is the same as, or similar to, one or more medical instruments supported by manipulator arms (e.g., manipulator arms that are the same as, or similar to, manipulator arms 535A-535D of FIG. 5), such as, for example, an imaging device (e.g., an endoscope, an ultrasound tool, etc.) or a sensing instrument (e.g., a force-sensing surgical instrument) as described herein.
  • a medical procedure refers to a surgical procedure or operation performed in a medical environment (e.g., a medical or surgical theater, etc. that is the same as, or similar to, the medical environment 500 of FIG. 5) by or using one or more of a medical staff, a robotic system, or a medical instrument.
  • examples of medical staff include surgeons, nurses, support staff, and so on (e.g., individuals that can be the same as, or similar to, surgeon 530A or additional medical personnel 530B-530D of FIG. 5).
  • the robotic systems include the robotic medical system or the robotic surgical system described herein such as, for example, one or more devices of the medical environment 500 (e.g., robotic medical system 524).
  • Examples of medical instruments include the medical instruments supported by the manipulator arms 535A-535D.
  • Medical procedures can have various modalities, including robotic (e.g., using at least one robotic system), non-robotic laparoscopic, non-robotic open, and so on.
  • the robotic system data 170 and medical instrument data 172 collected during a medical procedure also refer to, or include, robotic system data 170 and medical instrument data 172 collected by one or more devices in a medical environment (e.g., a medical environment 500) in which the medical procedure is performed and for one or more of medical staff, robotic system, or medical instrument performing or used in performing the medical procedure.
  • the user interface system 162 can receive and process data sources or data streams including one or more of video data 166, robotic system data 170, and medical instrument data 172 collected for a training procedure or a medical procedure. For example, the user interface system 162 can acquire data streams of the video data 166, robotic system data 170, and medical instrument data 172 in real-time. In some examples, the user interface system 162 can utilize all types of robotic system data 170 and medical instrument data 172 collected, obtained, determined, or calculated for a medical procedure when generating one or more user interfaces (UIs) as described herein.
  • In some embodiments, the user interface system 162 receives the video data 166, the robotic system data 170, or the medical instrument data 172 during operation of the robotic system.
  • the user interface system 162 can receive the video data 166 from the sensing system 164 during operation of the robotic system.
  • the video data 166 can be associated with one or more images captured individually or continuously by the imaging device included in the sensing system 164.
  • the imaging device includes a visual image endoscope, laparoscopic ultrasound, camera, etc. Other suitable imaging devices are also contemplated.
  • the sensing system 164 includes a repositionable assembly including one or more linkages supported by the robotic system.
  • the sensing system 164 can include a repositionable assembly including one or more linkages that can be articulated by the robotic system based on the input provided by the surgeons via the user input system 168 described herein.
  • the user interface system 162 can receive the robotic system data 170 from a robotic system (e.g., from one or more components of a robotic system).
  • the robotic system data 170 includes a system event stream, the system event stream further including data associated with one or more system events (e.g., states of one or more devices such as whether one or more devices or medical instruments are connected to the robotic system, whether the one or more devices are operating as expected, error messages, or the like).
  • the robotic system can include one or more devices or components of a robotic medical system (e.g., a robotic medical system that is the same as, or similar to, the robotic medical system 524 of FIG. 5).
  • the one or more images captured by the imaging device included in the sensing system 164 can show at least a portion of at least one medical instrument (tools, surgical instruments, or the like) within a field of view of the imaging device.
  • the sensing system 164 can include an imaging device that is supported along a distal portion of a tool (e.g., a tool that is supported by (e.g., installed on) a robotic medical system 524).
  • the sensing system 164 can be operated by medical staff during a training session where the medical staff are familiarizing themselves with the robotic system or practicing certain maneuvers using the robotic system.
  • the sensing system 164 can be operated by medical staff during a surgery where a surgeon is operating the user input system 168.
  • the robotic system data 170 can be associated with the state of the control of one or more devices of the robotic system based on inputs received by the user input system 168. For example, as the user input system 168 communicates with the robotic system to control the at least one medical instrument, the robotic system can generate and provide the robotic system data 170 to the user interface system 162.
  • the robotic system data 170 can represent whether the user input system 168 is controlling one or more of the medical instruments, whether the user input system 168 is generating control signals that are configured to cause the maneuvering of one or more medical instruments within the field of view of the sensing system, the torque being applied at one or more joints involved in supporting one or more of the medical instruments, or the like.
  • the robotic system data 170 can be associated with force exerted by the robotic system on one or more anatomical structures.
  • the robotic system data 170 can be generated by the robotic system based on movement of one or more linkages or one or more components of the medical instruments of the robotic system.
  • one or more sensors corresponding to the one or more linkages can generate sensor signals representative of the force exerted by the linkages when repositioning the medical instrument.
  • the sensor signals can be included in the robotic system data 170 which, in turn, is included in the data stream.
  • one or more different sensors (e.g., encoders or the like) can generate sensor data indicative of the position of the linkages, and this sensor data can later be used to derive the position of medical instruments supported by the linkages.
  • the medical instrument data 172 can be associated with one or more states of one or more medical instruments of the robotic system.
  • the medical instrument data 172 can be associated with one or more states of one or more medical instruments controlled during teleoperation of the robotic system by the user input system 168.
  • the one or more states can represent whether or not the one or more medical instruments of the robotic system are performing one or more functions.
  • functions can include tool activations, movement of medical instruments, or the like as described herein.
  • the one or more states can represent whether or not one or more medical instruments are being controlled by the robotic system based on inputs received by the user input system 168.
  • the user interface system 162 receives the data stream of the medical procedure, the data stream comprising data associated with force vectors that represent directions and magnitudes of force interactions between medical instruments and anatomical structures involved in the medical procedure.
  • the user interface system 162 can receive the data associated with force vectors at a plurality of time steps during the medical procedure.
  • the plurality of time steps can be instantaneous time steps (e.g., the procedure can be occurring in real time).
  • the user interface system 162 can receive the data associated with force vectors at a plurality of time steps prior to the instantaneous time step.
  • the data can be generated during the current medical procedure or a previous medical procedure (e.g., a previous medical procedure associated with the patient involved in the current medical procedure).
  • the user interface system 162 determines the amount of force to be applied to anatomical structures based at least in part on the force vectors captured at the plurality of time steps between the medical instruments and the anatomical structures involved in the medical procedure.
  • the user interface system 162 determines the amount of force to be applied to the anatomical structures where the amount of force is to be applied to the anatomical structure in at least one direction. For example, where an instrument is interacting with an anatomical structure in a first direction, the user interface system 162 can determine the amount of force to be applied to the anatomical structure based at least in part on continued interaction between the instrument and the anatomical structure (e.g., continued pushing of the anatomical structure in a direction, continued clamping or grasping of at least a portion of the anatomical structure, etc.).
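As an illustrative sketch of how force vectors captured at a plurality of time steps could be reduced to an applied-force quantity, the following Python snippet computes per-sample magnitudes and a cumulative force over time. The ForceSample layout, field names, and example values are assumptions made for illustration only; the disclosure does not prescribe a particular data format.

```python
from dataclasses import dataclass
from math import sqrt
from typing import List

@dataclass
class ForceSample:
    """One time step of force data from the data stream (hypothetical layout)."""
    t: float   # time in seconds
    fx: float  # force components in newtons
    fy: float
    fz: float

    def magnitude(self) -> float:
        return sqrt(self.fx ** 2 + self.fy ** 2 + self.fz ** 2)

def cumulative_force(samples: List[ForceSample]) -> float:
    """Integrate force magnitude over time (newton-seconds) using the trapezoidal rule."""
    total = 0.0
    for prev, curr in zip(samples, samples[1:]):
        dt = curr.t - prev.t
        total += 0.5 * (prev.magnitude() + curr.magnitude()) * dt
    return total

# Example: a grasp that ramps from 1 N to 3 N over 0.2 s in the x direction.
stream = [ForceSample(0.0, 1.0, 0.0, 0.0),
          ForceSample(0.1, 2.0, 0.0, 0.0),
          ForceSample(0.2, 3.0, 0.0, 0.0)]
print(cumulative_force(stream))  # ≈ 0.4 newton-seconds
```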
  • the user interface system 162 receives the data stream of the medical procedure, the data stream comprising data associated with a skill level of the one or more surgeons involved in the medical procedure.
  • the user interface system 162 can receive the data associated with the skill level of the one or more surgeons involved in the medical procedure based at least in part on the one or more interactions between the medical instruments of the robotic system and anatomical structures involved in the medical procedure.
  • the skill level can represent, for example, an amount of previous interactions involving similar medical instruments and anatomical structures during previous medical procedures conducted by the surgeons, scores representing patient outcomes specific to the interactions involving similar medical instruments and anatomical structures during previous medical procedures conducted by the surgeons, force vectors associated with previously-performed interactions involving the surgeon, or other historical information that can be used to determine a force limit.
  • the user interface system 162 receives the data stream of the medical procedure, where the data stream includes robotic system data 170 that is associated with kinematic information or system event information corresponding to the operation of the robotic system.
  • the user interface system 162 receives the data stream of the medical procedure, where the data stream includes data associated with one or more aspects of the medical procedure (e.g., a type of medical procedure, a complexity level associated with the medical procedure, a segment of the medical procedure associated with a phase, task, or step, or the like).
  • the user interface system 162 receives patient data associated with information about the patient, such as their age, demographics, whether the patient has a compromised immune system (or is sick at the time of the medical procedure), or any other information that can be relevant to the determination of one or more force limits as described herein.
  • the user interface system 162 identifies a type of an anatomical structure involved in the medical procedure. For example, the user interface system 162 can identify a type of an anatomical structure involved in the medical procedure based at least in part on the data stream (e.g., one or more aspects of the data represented by the data stream). The user interface system 162 can identify a type of an anatomical structure involved based on the type of medical procedure.
  • the user interface system 162 can determine the type of the anatomical structure involved based at least in part on the one or more anatomical structures for which access can be gained by the robotic system during the medical procedure.
  • the user interface system 162 identifies the type of the anatomical structure involved based at least in part on one or more models trained with machine learning. For example, the user interface system 162 can receive the video data 166 from the sensing system 164 during the medical procedure. In this example, the user interface system 162 can provide the video data 166 to the one or more models to cause the one or more models to generate an output.
  • the output can represent one or more classifications, the one or more classifications corresponding to identifiers of the one or more anatomical features represented by the video data 166.
  • the one or more classifications can be made on a pixel basis.
  • the one or more classifications can be made based at least in part on one or more groups of pixels.
  • the one or more classifications can be associated with one or more segmentation masks that correspond to groups of pixels representing the one or more anatomical structures.
  • the user interface system 162 identifies the type of the anatomical structure on which the medical procedure is performed and a type of an interaction with the anatomical structure using the one or more models trained with machine learning.
  • the user interface system 162 can receive the video data 166 from the sensing system 164 during the medical procedure and the user interface system 162 can provide the video data 166 to the one or more models to cause the one or more models to generate an output.
  • the output can represent one or more classifications corresponding to identifiers of the one or more anatomical features or the one or more medical instruments represented by the video data 166.
  • the one or more classifications can correspond to identifiers of the one or more interactions represented by the video data 166 between the one or more anatomical features or the one or more medical instruments.
  • one or more images associated with the video data 166 can be provided to the one or more models to cause the one or more models to generate an output, the output representing the movement of the anatomical structure by one or more medical instruments involved in the medical procedure.
  • the output can be further represented as an indication of the type of the interaction, where the type includes one or more of a grab interaction (e.g., grabbing at least a portion of an anatomical structure using jaws of an end effector supported by a medical instrument), a retract interaction (e.g., holding back or separating tissue associated with the one or more anatomical structures), a cut interaction (e.g., cutting at least a portion of an anatomical structure), or a cauterize interaction (e.g., burning tissue associated with the one or more anatomical systems using, for example, electrocautery systems, chemical cauterization systems, or the like).
  • the user interface system 162 can provide robotic system data 170 or medical instrument data 172 to the one or more models to cause the one or more models to identify the type of the anatomical structure on which the medical procedure is performed, and a type of an interaction with the anatomical structure.
  • the one or more models can be trained on training data that includes the video data 166 from prior medical procedures, the robotic system data 170 from prior medical procedures, or the medical instrument data 172 from prior medical procedures.
  • the user interface system 162 can provide the data from the data stream received during the surgical procedure to the one or more models to cause the one or more models to generate the outputs discussed above.
  • the user interface system 162 can provide previously-generated video data 166 from prior medical procedures, the robotic system data 170 from prior medical procedures, or the medical instrument data 172 from prior medical procedures to the one or more models to cause the one or more models to generate the outputs described herein.
  • the user interface system 162 can provide previously-generated video data 166 from prior medical procedures, the robotic system data 170 from prior medical procedures, or the medical instrument data 172 from prior medical procedures, where the previously-generated data corresponds to one or more earlier points in time and one or more tags.
  • the one or more tags can be determined based on inputs received by individuals annotating the previously generated data.
  • the annotations can correspond to, for example, the types of anatomical structures involved at a given point in time, the locations of the anatomical structures involved at the given point in time, or the type of interaction involved at the given point in time.
  • the user interface system 162 can then compare the output of the one or more models to the input of the one or more models (e.g., the classifications generated by the one or more models to the tags corresponding to the inputs to the one or more models) and determine a difference between the output and the input.
  • the user interface system 162 can then update the one or more models by changing one or more of the weights associated with the one or more models and repeat the training process until the one or more models converge.
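A minimal training-loop sketch of the compare-output-to-tags, update-weights, repeat-until-convergence process described above is shown below, using PyTorch as an assumed framework and a toy per-pixel classifier as a stand-in for the actual models; tensor shapes, class counts, and hyperparameters are illustrative assumptions, not values from the disclosure.

```python
import torch
from torch import nn, optim

NUM_CLASSES = 4  # e.g., background plus three anatomical-structure types (assumed)

model = nn.Sequential(                       # toy per-pixel segmentation head
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, NUM_CLASSES, kernel_size=1),
)
criterion = nn.CrossEntropyLoss()            # compares model output to annotated tags
optimizer = optim.Adam(model.parameters(), lr=1e-3)

# Stand-ins for annotated frames from prior procedures: RGB images and
# per-pixel tags produced by human annotators.
frames = torch.randn(8, 3, 64, 64)
tags = torch.randint(0, NUM_CLASSES, (8, 64, 64))

for epoch in range(5):                       # repeat until the model converges
    optimizer.zero_grad()
    logits = model(frames)                   # per-pixel class scores
    loss = criterion(logits, tags)           # difference between output and tags
    loss.backward()                          # update weights from that difference
    optimizer.step()
```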
  • the user interface system 162 determines the magnitude of the force interaction between the anatomical structure and the medical instrument involved in the medical procedure based on the data stream. For example, the user interface system 162 can determine the magnitude of the force interaction based at least in part on the robotic system data 170. In such an example, the user interface system 162 can determine the magnitude of the force based at least in part on the sensor signals that are representative of the force exerted by the linkages when repositioning the medical instrument or when remaining in contact with the anatomical structures (e.g., when grabbing or moving the anatomical structures).
  • different sensors can generate sensor data indicative of a position of the linkages relative to one another and to the robotic system, and this sensor data can be included in the data stream.
  • the sensor data can later be used to derive the position of medical instruments supported by the linkages. For example, when the position or pose of the robotic system (e.g., the one or more components of the robotic system) are registered relative to a patient, the sensors data indicative of a position of the linkages relative to one another and the robotic system can be used to determine the relative position of medical instruments and the anatomical structures of the patient.
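The following sketch illustrates, in simplified planar form, how encoder readings from the linkages could be turned into an instrument tip position via forward kinematics. A real robotic medical system has a much richer kinematic chain and a registration step relating it to the patient; the two-link model, angles, and lengths here are assumptions for illustration only.

```python
from math import cos, sin

def planar_forward_kinematics(joint_angles, link_lengths):
    """Return the (x, y) tip position of a planar serial linkage.

    joint_angles: joint encoder readings in radians (assumed already calibrated)
    link_lengths: link lengths in meters
    """
    x = y = 0.0
    angle = 0.0
    for theta, length in zip(joint_angles, link_lengths):
        angle += theta            # accumulate joint angles along the chain
        x += length * cos(angle)  # advance the tip position link by link
        y += length * sin(angle)
    return x, y

# Example: two 0.3 m links with encoders reading roughly 30 and 45 degrees.
print(planar_forward_kinematics([0.5236, 0.7854], [0.3, 0.3]))
```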
  • the user interface system 162 determines an amount of force to be applied to the anatomical structure based at least in part on the type of the anatomical structure and the type of the interaction with the anatomical structure. For example, the user interface system 162 can determine the amount of force to be applied to the anatomical structure based at least in part on comparing a magnitude of a force interaction between the anatomical structure and a medical instrument involved in the medical procedure to one or more of: a historical magnitude of a prior force interaction (e.g., a prior interaction between a similar medical instrument and a similar anatomical structure), a force magnitude associated with a force limit (e.g., a predetermined amount of newtons or the like which should not be exceeded to avoid damage to the anatomical structure at a point in time), an optimal magnitude associated with an optimal force (e.g., a predetermined amount of newtons or the like associated with a particular interaction or goal of an interaction), a cumulative force associated with magnitudes of one or more previous force interactions, a first interaction-type magnitude associated with a primary interaction type for the medical instrument involved in the interaction with the anatomical structure, or a second interaction-type magnitude associated with a secondary interaction type for the medical instrument involved in the interaction with the anatomical structure.
  • the user interface system 162 determines an amount of force to be applied to the anatomical structure based at least in part on a lookup table. For example, the user interface system 162 can determine the amount of force to be applied to the anatomical structure based at least in part on a lookup table, where the lookup table is associated with (e.g., represents) a plurality of force limits corresponding to a plurality of types of anatomical structures.
  • for example, an anatomical structure (e.g., a liver) can be associated with a higher force limit than a different anatomical structure (e.g., a kidney).
  • the liver can be associated with the higher force limit because repositioning the liver has a lower chance of resulting in the inadvertent release of catecholamine hormones (e.g., by inadvertent contact between the medical instrument or the liver with the adrenal glands) than repositioning the kidney, and as a result carries a lesser relative risk of complications.
  • the plurality of force limits can be predetermined based at least in part on input from one or more users.
  • the plurality of force limits can be predetermined based at least in part on users (e.g., surgeons, institutions, and/or the like) providing input to set the force limits for the types of anatomical structures.
  • a surgeon that is an expert in performing operations on kidneys can set a force limit associated with force applied to portions of the kidneys so that the surgeon or other surgeons (experts and non-experts) can implement the force limit set by the expert surgeon.
  • one or more users can select one or more force limits.
  • the one or more users can select the one or more force limits based at least in part on input from users setting force limits for one or more anatomical structures.
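A minimal sketch of a lookup-table approach to force limits is shown below; the structure/interaction keys and newton values are illustrative assumptions rather than values from the disclosure, and the override mechanism stands in for user- or institution-set limits.

```python
# Force limits keyed by (anatomical structure, interaction type); values are
# illustrative assumptions, not clinically validated limits.
FORCE_LIMITS_N = {
    ("liver", "retract"): 8.0,
    ("liver", "grab"): 5.0,
    ("kidney", "retract"): 4.0,
    ("upper_intestine", "manipulate"): 2.0,
}
DEFAULT_LIMIT_N = 3.0  # conservative fallback when no entry exists (assumed)

def force_limit(structure: str, interaction: str, overrides=None) -> float:
    """Resolve a force limit, letting user- or institution-set overrides take precedence."""
    if overrides and (structure, interaction) in overrides:
        return overrides[(structure, interaction)]
    return FORCE_LIMITS_N.get((structure, interaction), DEFAULT_LIMIT_N)

# Example: an expert surgeon tightens the kidney retraction limit for their cases.
expert_overrides = {("kidney", "retract"): 3.0}
print(force_limit("kidney", "retract", expert_overrides))  # 3.0
```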
  • the user interface system 162 determines an optimal magnitude associated with an optimal force. For example, the user interface system 162 can determine the optimal magnitude based on one or more objective performance indicators (OPIs) that can be determined based on analyzing the data stream or one or more aspects of the patient (e.g., vital signs or the like). In some embodiments, the user interface system 162 determines the optimal magnitude associated with the optimal force prior to (e.g., before) the beginning of the medical procedure.
  • the user interface system 162 can determine the optimal magnitude associated with the optimal force prior to the beginning of the medical procedure based at least on the patient data, the data associated with a skill level of the one or more surgeons involved in the medical procedure, and a probability of a negative outcome associated with an interaction involved in the medical procedure.
  • the probability of a negative outcome can be determined using one or more machine learning models trained to predict the probability of negative outcomes based on a force signature (e.g., amounts of force associated with a particular surgeon or a particular medical procedure), or the like.
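As a rough, hypothetical illustration of combining patient data, surgeon skill level, and a predicted probability of a negative outcome into an optimal force magnitude, the following snippet scales a baseline limit by simple factors. The weights and normalization are placeholder assumptions, not a validated clinical model and not values from the disclosure.

```python
def optimal_force_magnitude(base_limit_n: float, skill_score: float,
                            negative_outcome_prob: float, patient_frailty: float) -> float:
    """Scale a baseline force limit down for predicted risk and patient frailty,
    and up modestly for surgeon skill; all inputs are assumed normalized to [0, 1]."""
    skill_factor = 0.8 + 0.2 * skill_score            # experts get slightly more headroom
    risk_factor = 1.0 - 0.5 * negative_outcome_prob   # predicted risk reduces the target
    frailty_factor = 1.0 - 0.3 * patient_frailty      # frail patients reduce it further
    return base_limit_n * skill_factor * risk_factor * frailty_factor

# Example with assumed inputs: 5 N baseline, highly skilled surgeon, modest risk.
print(optimal_force_magnitude(5.0, skill_score=0.9,
                              negative_outcome_prob=0.2, patient_frailty=0.4))
```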
  • the user interface system 162 determines an amount of force to be applied to the anatomical structure based at least in part on the type of the anatomical structure and the type of the interaction with the anatomical structure. For example, where the type of the anatomical structure is associated with a more delicate anatomical structure that is susceptible to mechanical damage (e.g., the upper intestines) when compared to other anatomical structures that are less susceptible to mechanical damage (e.g., the liver), the user interface system 162 can determine a lower amount of force to be applied to the anatomical structure (e.g., the more delicate anatomical structure) as opposed to a higher amount of force which can be determined for the other anatomical structures (e.g., the less delicate anatomical structures).
  • the user interface system 162 can determine the amount of force to be applied to the anatomical structure based at least in part on the type of the anatomical structure and the type of the interaction with the anatomical structure, where the amount of force represents a range of forces.
  • the user interface system 162 determines the amount of force to be applied to at least a portion of the anatomical structure based at least in part on the type of the anatomical structure and the type of the interaction with the anatomical structure. In an example, where at least a portion of an anatomical structure is delicate relative to another anatomical structure, the user interface system 162 can determine a lower amount of force to be applied during a first interaction (e.g., moving the anatomical structure). In this example, the user interface system 162 can determine a second, higher amount of force to be applied during a second interaction (e.g., grabbing the anatomical structure) involving the same anatomical structure.
  • the user interface system 162 can determine a lower amount of force to be applied to the upper intestines when simply moving the upper intestines to gain access to other anatomical structures, and the user interface system 162 can determine a higher amount of force to be applied to the upper intestines when grabbing or cutting the upper intestines.
  • the user interface system 162 provides an indication of the amount of force to be applied to control performance of the medical procedure with the robotic system.
  • the user interface system 162 can provide the indication of the amount of force to be applied based at least in part on the user interface system 162 generating data associated with a user interface.
  • the user interface system 162 can generate the user interface based at least in part on the indication of the amount of force to control performance of the medical procedure.
  • the user interface system 162 can further generate data associated with the user interface that is configured to cause a display device (e.g., a display device of the user input system 168) to provide an output representing the user interface.
  • the user interface can also include images representing the medical instruments and anatomical structures that are in a field of view of an imaging device of the sensing system 164.
  • the user interface system 162 generates a graphical user interface based at least in part on images representing medical instruments or anatomical structures that are in a field of view of an imaging device.
  • the user interface system 162 can generate a graphical user interface based at least in part on images representing medical instruments or anatomical structures that are in a field of view of an imaging device, where the imaging device is included in the sensing system 164.
  • the images can be captured at one or more time steps as described herein.
  • the user interface system 162 generates the graphical user interface based at least in part on the images and the indication of the amount of force to control the performance of the medical procedure.
  • the indication includes a timer.
  • the user interface system 162 can generate a graphical user interface comprising the timer, where the timer represents an amount of time during which the amount of force to be applied is applied to the anatomical structure.
  • the user interface system 162 can generate a graphical user interface comprising the timer, where the timer represents an amount of time remaining during which the amount of force to be applied will satisfy a cumulative force threshold.
  • the user interface system 162 can update the timer based on the user interface system 162 updating the amount of time remaining in response to changes to the amount of force to be applied at one or more time steps (e.g., one or more time steps during which the timer is counting down).
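One way such a timer could be derived, sketched below under the assumption of a cumulative force budget expressed in newton-seconds, is to divide the remaining budget by the currently applied force; the budget value and the constant-force assumption between updates are illustrative.

```python
def remaining_time_seconds(budget_newton_seconds: float,
                           spent_newton_seconds: float,
                           current_force_newtons: float) -> float:
    """Seconds remaining before a cumulative force budget is spent, assuming the
    currently applied force is held constant until the next update."""
    remaining_budget = max(budget_newton_seconds - spent_newton_seconds, 0.0)
    if current_force_newtons <= 0.0:
        return float("inf")  # no force applied; the countdown is effectively paused
    return remaining_budget / current_force_newtons

# Example: 60 N*s budget, 45 N*s already applied, currently pressing at 3 N.
print(remaining_time_seconds(60.0, 45.0, 3.0))  # 5.0 seconds remaining
```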
  • the indication includes a color-coded or binary indicator.
  • the indicator can be associated with an area of a graphical user interface that is colored one color when an amount of force to be applied to the anatomical structure satisfies (e.g., is within) a force limit, and colored with a different color when the amount of force to be applied to the anatomical structure does not satisfy the force limit.
  • the indicator can be associated with an area of a graphical user interface that is colored yet another color (e.g., a third color) when an amount of force to be applied to the anatomical structure is approaching the force limit.
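A minimal sketch of the color-coded indicator logic is shown below; the specific colors and the 80% warning fraction are assumptions chosen for illustration.

```python
def indicator_color(force_n: float, limit_n: float, warn_fraction: float = 0.8) -> str:
    """Map the current force to an indicator color relative to its limit."""
    if force_n > limit_n:
        return "red"      # limit exceeded
    if force_n >= warn_fraction * limit_n:
        return "yellow"   # approaching the limit
    return "green"        # within the limit

print(indicator_color(2.0, 5.0))   # green
print(indicator_color(4.5, 5.0))   # yellow
print(indicator_color(6.0, 5.0))   # red
```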
  • the indication can include a numerical representation of a scale or speed with which one or more medical instruments are moving. For example, as a surgeon causes a medical instrument to move toward an anatomical structure, or when the medical instrument moves while contacting the anatomical structure, the indication can include a speed (e.g., in cm/s, mm/s, or the like) of at least a portion of the medical instrument.
  • the user interface system 162 determines the speed based at least on the relative motion of at least a portion of the medical instrument in comparison with the anatomical structures or the patient.
  • the indication can be associated with haptic feedback or audible feedback that is based at least in part on the force being applied to an anatomical structure.
  • the user interface system 162 can determine an amount of force to be applied by the instrument as described herein.
  • the user interface system 162 can then provide an indication of the amount of force to control performance of the medical procedure based at least in part on the amount of force and one or more force limits.
  • the indication can be associated with increasing vibration at the manipulators of the surgeon console.
  • the indication can be associated with audio signals that are generated (e.g., by a speaker associated with the surgeon console).
  • the audio signals can form one or more patterns that are updated based at least in part on the indication of the amount of force to control performance of the medical procedure.
  • the indication can be provided by a user device (e.g., a tablet, a cell phone, a laptop computer, a desktop computer, etc.).
  • one or more users can stream a surgical procedure in real-time or after the surgical procedure (e.g., during playback of the surgical procedure).
  • the indication can be associated with haptic feedback or audible feedback as described herein.
  • the indication can be associated with increasing vibration generated by the user device (e.g., by an eccentric rotating mass vibration motor, by a piezoelectric vibration motor, etc.).
  • the indication can be associated with audio signals that are generated by a speaker of the user device.
  • the audio signals can form one or more patterns that are updated based at least in part on the indication of the amount of force to control performance of the medical procedure.
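As a simple illustration of mapping applied force to haptic (or, analogously, audio) feedback intensity, the following snippet ramps a vibration amplitude as the force approaches and exceeds its limit; the mapping and thresholds are assumptions made for illustration.

```python
def haptic_amplitude(force_n: float, limit_n: float, max_amplitude: float = 1.0) -> float:
    """Map applied force to a vibration amplitude in [0, max_amplitude].

    Silent below half the limit, ramping linearly to full amplitude at the limit;
    the mapping itself is an illustrative assumption.
    """
    if limit_n <= 0:
        return max_amplitude
    ratio = force_n / limit_n
    return max(0.0, min(max_amplitude, 2.0 * (ratio - 0.5)))

print(haptic_amplitude(2.0, 5.0))  # 0.0 (well within the limit)
print(haptic_amplitude(4.0, 5.0))  # 0.6 (approaching the limit)
print(haptic_amplitude(5.5, 5.0))  # 1.0 (at or over the limit)
```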
  • the user interface system 162 provides one or more images associated with the data stream that correspond to the anatomical structure to be displayed via a graphical user interface.
  • the user interface system 162 can generate data associated with the user interface that is configured to cause a display device to provide an output representing the user interface, where the user interface at least in part represents one or more images associated with the data stream that correspond to the anatomical structure.
  • the user interface system 162 can update the graphical user interface based at least in part on updates to the amount of force to be applied.
  • the user interface system 162 can update the graphical user interface by updating one or more pixels of the one or more images based at least in part on updates to the amount of force to be applied.
  • the one or more pixels can correspond to at least a portion of the anatomical structure represented by the one or more images of medical instruments and anatomical structures that are in a field of view of the imaging device.
  • the one or more pixels can correspond to at least a portion of the user interface that further corresponds to portions of the field of view that are in proximity to the anatomical structures or that previously illustrated portions of the anatomical structure.
  • the user interface system 162 determines at least one area associated with at least one overlay. For example, the user interface system 162 can determine at least one area associated with at least one overlay, where the at least one area corresponds to anatomical structures represented by the one or more images.
  • the overlay can be configured to cause the graphical user interface to update a representation of the at least one area when displayed via the display.
  • the user interface system 162 then updates one or more pixels associated with the graphical user interface (e.g., one or more pixels of the images associated with the graphical user interface) based at least in part on the at least one overlay.
  • the user interface system 162 then updates one or more pixels associated with the graphical user interface by tinting the one or more pixels with one or more shades or one or more colors.
  • the user interface system 162 then updates a plurality of pixels associated with the graphical user interface by tinting the plurality of pixels in accordance with a segmentation mask (discussed above).
  • the user interface system 162 can generate graphical user interfaces that, for example, color code specific anatomical structures, or the like.
  • the overlay can be associated with one or more colors or shades that represent amounts of force that are being, or can be, applied to anatomical structures involved in the medical procedure.
  • the user interface system 162 constructs a heatmap.
  • the user interface system 162 can construct a heatmap based at least in part on the at least one area and the amount of force to be applied.
  • the heatmap can include one or more regions of tinted shades or colors.
  • the heatmap can include one or more regions of gradients of shades or colors.
  • the heatmap can be a gradient of a color (e.g., red) that corresponds to the instant or cumulative force applied to the anatomical structure during the medical procedure.
  • the user interface system 162 then updates the one or more pixels (e.g., of the images associated with the video data 166) based at least in part on the at least one overlay or the heatmap.
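A minimal sketch of tinting masked pixels in proportion to force, which could underlie the overlay and heatmap behavior described above, is shown below using NumPy; the red-tint blending rule, alpha cap, and toy frame are assumptions for illustration.

```python
import numpy as np

def tint_force_heatmap(frame_rgb: np.ndarray, mask: np.ndarray,
                       force_n: float, limit_n: float, max_alpha: float = 0.6) -> np.ndarray:
    """Blend red into masked pixels in proportion to force relative to its limit."""
    alpha = max_alpha * min(force_n / limit_n, 1.0) if limit_n > 0 else max_alpha
    out = frame_rgb.astype(np.float32)
    red = np.array([255.0, 0.0, 0.0])
    out[mask] = (1.0 - alpha) * out[mask] + alpha * red  # per-pixel tint on the structure
    return out.astype(np.uint8)

frame = np.zeros((4, 4, 3), dtype=np.uint8)  # stand-in for an endoscope frame
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                        # pixels showing the anatomical structure
print(tint_force_heatmap(frame, mask, force_n=3.0, limit_n=5.0)[1, 1])
```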
  • the user interface system 162 determines at least one first area where the amount of force to be applied satisfies a first threshold; and the user interface system 162 determines at least one second area where the amount of force to be applied satisfies a second threshold.
  • where multiple anatomical features are in a field of view of the imaging device of the sensing system 164, the at least one first area can correspond to a first anatomical feature and the at least one second area can correspond to a second anatomical feature.
  • the user interface system 162 can then update one or more pixels of the images generated by the imaging device based at least in part on the at least one first area and the at least one second area.
  • pixels associated with the first area can be updated by tinting the pixels using a first color or first shade; and pixels associated with the second area can be updated by tinting the pixels using a second color or second shade.
  • the user interface system 162 detects a change to an anatomical structure during a medical procedure. For example, the user interface system 162 can detect movement of an anatomical structure (e.g., adjustment of the orientation or position of the anatomical structure) based on a detected interaction between a medical instrument and the anatomical structure. In this example, the user interface system 162 can then update the amount of force to be applied to the anatomical structure based at least in part on the change to the anatomical structure.
  • the user interface system 162 can detect deformations to the surface or structure of the anatomical structure (e.g., detents, tears, cuts, and/or the like) and whether the detected deformations were intentional or unintentional based on the detected interaction between the medical instrument and the anatomical structure. In these examples, the user interface system 162 can then update the amount of force to be applied to the anatomical structure based at least in part on the change to the anatomical structure. In some embodiments, the user interface system 162 can provide a second indication of the amount of force to control performance of the medical procedure with the robotic medical system based at least on the update to the amount of force to be applied to the anatomical structure.
  • the user interface system 162 can initiate a timer as described herein and update the timer at each time step during which the medical instrument is determined to remain in contact with the anatomical structure.
  • the user interface system 162 can index and store data associated with the medical procedure and the interactions between the robotic system (e.g., medical instruments of the robotic system) and anatomical structures of the patient. For example, the user interface system 162 can index and store data associated with instant or cumulative amounts of force applied to the anatomical structure involved in the medical procedure. The user interface system 162 can then determine an expected amount of time for recovery for the patient based at least on the instant or cumulative amounts of force applied to the anatomical structure, or based at least on one or more other aspects of the interactions between medical instruments and anatomical structures during the medical procedure.
  • the user interface system 162 can receive data associated with patient feedback. For example, patients can provide feedback indicating how long their recovery process was, whether they experienced discomfort, the degree to which they experienced discomfort, or the like. The user interface system 162 can then correlate the patient feedback with the data associated with the medical procedure (e.g., amounts of force applied to the anatomical structures) and the interactions between the robotic system and anatomical structures of the patient and update one or more of the force limits as described herein for the patient or for other patients.
  • the robotic system data 170 includes or is indicative of robotic system events corresponding to a state or an activity of an attribute or an aspect of a robotic system.
  • the robotic system data 170 of a robotic system can be generated by the robotic system (e.g., in the form of a robotic system log) in its normal course of operations.
  • the robotic system data is determined based at least on input received by the user input system 168 of the robotic system from a user or sensor data of a sensor on the robotic system.
  • the robotic system can include one or more sensors (e.g., cameras, infrared sensors, ultrasonic sensors, etc.), actuators, interfaces, or consoles that can output information used to detect such a system event.
  • FIG. 2 is a flowchart diagram illustrating an example method 200 for updating a user interface based on force applied by a medical instrument during teleoperation, according to some embodiments.
  • the method 200 can be performed by one or more systems, devices, or components depicted in FIG. 1A, FIG. 1B, FIG. 3, FIG. 5, and FIG. 6 including, for example, the user interface system 162 of FIG. 1B.
  • a data stream of a medical procedure performed with a robotic medical system is received. For example, a user interface system (e.g., the user interface system 162) can receive the data stream of the medical procedure performed with the robotic medical system.
  • a type of an anatomical structure on which the medical procedure is performed and a type of an interaction with the anatomical structure are identified using the data stream and with one or more models trained with machine learning.
  • a user interface system (e.g., a user interface system 162) can identify, using the data stream and with one or more models trained with machine learning, a type of an anatomical structure on which the medical procedure is performed, and a type of an interaction with the anatomical structure.
  • an amount of force to be applied to the anatomical structure is determined based at least in part on the type of the anatomical structure and the type of the interaction with the anatomical structure. For example, a user interface system (e.g., a user interface system 162) can determine the amount of force to be applied to the anatomical structure.
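Tying the steps of method 200 together, the following self-contained sketch runs one update cycle: it identifies the structure and interaction (here with a hypothetical stand-in for the trained models), looks up an amount of force, and produces an indication; all names, keys, and values are illustrative assumptions.

```python
def identify_structure_and_interaction(frame):
    """Hypothetical stand-in for the trained models; a real implementation would run
    segmentation/classification over the endoscope frame."""
    return "kidney", "retract"

FORCE_LIMITS_N = {("kidney", "retract"): 4.0}   # illustrative values only

def run_force_ui_step(frame, measured_force_n: float) -> dict:
    structure, interaction = identify_structure_and_interaction(frame)       # identify types
    limit_n = FORCE_LIMITS_N.get((structure, interaction), 3.0)              # determine force
    status = "over limit" if measured_force_n > limit_n else "within limit"  # indication
    return {"structure": structure, "interaction": interaction,
            "force_limit_n": limit_n, "status": status}

print(run_force_ui_step(frame=None, measured_force_n=4.5))
```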
  • FIG. 3 is an image of an example graphical user interface 400, according to some embodiments.
  • the graphical user interface 400 shows four anatomical structures 402, 404, 406, 408 as well as other anatomical structures.
  • the four anatomical structures 402, 404, 406, 408 are each shown as having overlays associated with different colors (e.g., a first color, a second color, a third color, and a fourth color, respectively).
  • the graphical user interface 400 also includes a label 410 that corresponds to a point in time during the performance of a medical procedure.
  • FIG. 4 is a graph of example force limits, according to some embodiments.
  • interactions between medical instruments and anatomical structures can be associated with interaction types, labeled along the X-axis as “dissect”, “drive needle”, “manipulate” (e.g., move), “retract”, and “tie suture”.
  • each interaction type can be further associated with one or more sub-limits that correspond to particular aspects of each interaction type.
  • the force limits can be between 0 newtons and 14 newtons.
  • FIG. 5 is a diagram of a medical environment, according to some embodiments.
  • the medical environment 500 can refer to or include a surgical environment or surgical system.
  • the medical environment 500 can include a robotic medical system 524, a user control system 510, and an auxiliary system 515 communicatively coupled one to another.
  • a visualization tool 520 can be connected to the auxiliary system 515, which in turn can be connected to the robotic medical system 524.
  • the visualization tool 520 can be considered connected to the robotic medical system.
  • the visualization tool 520 can be directly connected to the robotic medical system 524.
  • a user interface system 162 can be connected to the user control system 510 which in turn can be connected to the robotic medical system 524.
  • the user interface system 162 can be connected directly to the robotic medical system 524.
  • in any of these arrangements, the user interface system 162 can be considered connected to the robotic medical system.
  • the medical environment 500 can be used to perform a computer-assisted medical procedure with a patient 525.
  • the surgical team can include a surgeon 530A and additional medical personnel 530B-530D, such as a medical assistant, nurse, anesthesiologist, and other suitable team members who can assist with the surgical procedure or medical session.
  • the medical session can include the surgical procedure being performed on the patient 525, as well as any pre-operative (e.g., which can include setup of the medical environment 500, including preparation of the patient 525 for the procedure), and postoperative (e.g., which can include clean up or post care of the patient), or other processes during the medical session.
  • the medical environment 500 can be implemented in a non-surgical procedure, or other types of medical procedures or diagnostics that can benefit from the accuracy and convenience of the surgical system.
  • the robotic medical system 524 can include a plurality of manipulator arms 535A-535D to which a plurality of medical instruments (e.g., the instruments described herein) can be coupled, installed, or otherwise supported.
  • the plurality of manipulator arms 535A-535D can include one or more linkages.
  • Each medical instrument can be any suitable surgical tool (e.g., a tool having tissue-interaction functions), imaging device (e.g., an endoscope, an ultrasound tool, etc.), sensing instrument (e.g., a force-sensing surgical instrument), diagnostic instrument, or other suitable instrument that can be used for a computer-assisted surgical procedure on the patient 525 (e.g., by being at least partially inserted into the patient and manipulated to perform a computer-assisted surgical procedure on the patient).
  • although the robotic medical system 524 is shown as including four manipulator arms (e.g., the manipulator arms 535A-535D), in other embodiments the robotic medical system can include more or fewer than four manipulator arms. Further, not all manipulator arms need have a medical instrument installed thereto at all times of the medical session. Moreover, in some embodiments, a medical instrument installed on a manipulator arm can be replaced with another medical instrument as suitable.
  • One or more of the manipulator arms 535A-535D or the medical instruments attached to manipulator arms can include one or more displacement transducers, orientational sensors, positional sensors, or other types of sensors and devices to measure parameters or generate kinematics information.
  • One or more components of the medical environment 500 can be configured to use the measured parameters or the kinematics information to track (e.g., determine poses of) or control the medical instruments, as well as anything connected to the medical instruments or the manipulator arms 535A-535D.
  • the user control system 510 can be used by the surgeon 530A to control (e.g., move) one or more of the manipulator arms 535A-535D or the medical instruments connected to the manipulator arms.
  • the user control system 510 can include a display that can provide the surgeon 530A with imagery (e.g., high-definition 3D imagery) of a surgical site associated with the patient 525 as captured by a medical instrument installed to one of the manipulator arms 535A-535D.
  • the user control system 510 can include a stereo viewer having two or more displays where stereoscopic images of a surgical site associated with the patient 525 and generated by a stereoscopic imaging system can be viewed by the surgeon 530A. In some embodiments, the user control system 510 can also receive images from the auxiliary system 515 and the visualization tool 520.
  • the surgeon 530A can use the imagery displayed by the user control system 510 to perform one or more procedures with one or more medical instruments attached to the manipulator arms 535A-535D.
  • the user control system 510 can include a set of controls. These controls can be manipulated by the surgeon 530A to control movement of the manipulator arms 535A-535D or the medical instruments installed thereto.
  • the controls can be configured to detect a wide variety of hand, wrist, and finger movements by the surgeon 530A to allow the surgeon to intuitively perform a procedure on the patient 525 using one or more medical instruments installed to the manipulator arms 535A-535D.
  • the auxiliary system 515 can include one or more computer systems (e.g., computing devices that are the same as, or similar to the computing device 600 of FIG. 6) configured to perform processing operations within the medical environment 500.
  • the one or more computer systems can control or coordinate operations performed by various other components (e.g., the robotic medical system 524, the user control system 510) of the medical environment 500.
  • a computer system included in the user control system 510 can transmit instructions to the robotic medical system 524 by way of the one or more computing devices of the auxiliary system 515.
  • the auxiliary system 515 can receive and process image data representative of imagery captured by one or more imaging devices (e.g., medical instruments) attached to the robotic medical system 524, as well as other data stream sources received from the visualization tool.
  • one or more image capture devices can be located within the medical environment 500. These image capture devices can capture images from various viewpoints within the medical environment 500. These images (e.g., video streams) can be transmitted to the visualization tool 520, which can then pass those images through to the auxiliary system 515 as a single combined data stream. The auxiliary system 515 can then transmit the single video stream (including any data stream received from the medical instrument(s) of the robotic medical system 524) to present on a display of the user control system 510.
  • the auxiliary system 515 can be configured to present visual content (e.g., the single combined data stream) to other team members (e.g., the medical personnel 530B-530D) who may not have access to the user control system 510.
  • the auxiliary system 515 can include a display 540 configured to display one or more user interfaces, such as images of the surgical site, information associated with the patient 525 or the surgical procedure, or any other visual content (e.g., the single combined data stream).
  • the display 540 can be a touchscreen display or include other features to allow the medical personnel 530B-530D to interact with the auxiliary system 515.
  • the robotic medical system 524, the user control system 510, and the auxiliary system 515 can be communicatively coupled one to another in any suitable manner.
  • the robotic medical system 524, the user control system 510, and the auxiliary system 515 can be communicatively coupled by way of control lines 545, which can represent any wired or wireless communication link as can serve a particular implementation.
  • the robotic medical system 524, the user control system 510, and the auxiliary system 515 can each include one or more wired or wireless communication interfaces, such as one or more local area network interfaces, Wi-Fi network interfaces, cellular interfaces, etc.
  • the medical environment 500 can include other or additional components or elements that can be needed or considered desirable to have for the medical session for which the surgical system is being used.
  • FIG. 6 is a block diagram depicting an architecture for a computing device 600 that can be employed to implement elements of the systems and methods described and illustrated herein, including aspects of the systems depicted in FIGS. 1A-1B, 3, or 5, and the method depicted in FIG. 2.
  • the user interface system 162, the sensing system 164, the user input system 168, and the devices described with respect to medical environment 500 can include one or more component or functionality of computing device 600.
  • the computing device 600 can be any computing device used herein and can include or be used to implement a data processing system or its components.
  • the computing device 600 includes at least one bus 605 or other communication component or interface for communicating information between various elements of the computer system.
  • the computer system further includes at least one processor 610 or processing circuit coupled to the bus 605 for processing information.
  • the computing device 600 also includes at least one main memory 615, such as a random-access memory (RAM) or other dynamic storage device, coupled to the bus 605 for storing information, and instructions to be executed by the processor 610.
  • the main memory 615 can be used for storing information during execution of instructions by the processor 610.
  • the computing device 600 can further include at least one read only memory (ROM) 620 or other static storage device coupled to the bus 605 for storing static information and instructions for the processor 610.
  • a storage device 625 such as a solid-state device, magnetic disk or optical disk, can be coupled to the bus 605 to persistently store information and instructions.
  • the computing device 600 can be coupled via the bus 605 to a display 630, such as a liquid crystal display, or active-matrix display, for displaying information.
  • An input device 635 such as a keyboard or voice interface can be coupled to the bus 605 for communicating information and commands to the processor 610.
  • the input device 635 can include a touch screen display (e.g., the display 630).
  • the input device 635 can include sensors to detect gestures.
  • the input device 635 can also include a cursor control, such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor 610 and for controlling cursor movement on the display 630.
  • the processes, systems and methods described herein can be implemented by the computing device 600 in response to the processor 610 executing an arrangement of instructions contained in the main memory 615. Such instructions can be read into the main memory 615 from another computer-readable medium, such as the storage device 625. Execution of the arrangement of instructions contained in the main memory 615 causes the computing device 600 to perform the illustrative processes described herein. One or more processors in a multi-processing arrangement can also be employed to execute the instructions contained in the main memory 615. Hard-wired circuitry can be used in place of or in combination with software instructions together with the systems and methods described herein. Systems and methods described herein are not limited to any specific combination of hardware circuitry and software.
  • the processor 610 can execute one or more instructions associated with the system 100.
  • the processor 610 can include an electronic processor, an integrated circuit, or the like including one or more of digital logic, analog logic, digital sensors, analog sensors, communication buses, volatile memory, nonvolatile memory, and the like.
  • the processor 610 can include, but is not limited to, at least one microcontroller unit (MCU), microprocessor unit (MPU), central processing unit (CPU), graphics processing unit (GPU), physics processing unit (PPU), embedded controller (EC), or the like.
  • the processor 610 can include, or be associated with, a main memory 615 operable to store or storing one or more non- transitory computer-readable instructions for operating components of the system 100 and operating components operably coupled to the processor 610.
  • the one or more instructions can include at least one of firmware, software, hardware, operating systems, or embedded operating systems, for example.
  • the processor 610 or the system 100 generally can include at least one communication bus controller to effect communication between the system processor and the other elements of the system.
  • the main memory 615 can include one or more hardware memory devices to store binary data, digital data, or the like.
  • the main memory 615 can include one or more electrical components, electronic components, programmable electronic components, reprogrammable electronic components, integrated circuits, semiconductor devices, flip flops, arithmetic units, or the like.
  • the main memory 615 can include at least one of a non-volatile memory device, a solid-state memory device, a flash memory device, a NAND memory device, a volatile memory device, etc.
  • the main memory 615 can include one or more addressable memory regions disposed on one or more physical memory arrays.
  • any two components so associated can also be viewed as being “operably connected,” or “operably coupled,” to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being “operably couplable,” to each other to achieve the desired functionality.
  • examples of operably couplable components include, but are not limited to, physically mateable or physically interacting components, wirelessly interactable or wirelessly interacting components, or logically interacting or logically interactable components.

Abstract

The arrangements disclosed herein relate to receiving a data stream of a medical procedure performed with a robotic medical system; identifying, using the data stream and with one or more models trained with machine learning, a type of an anatomical structure on which the medical procedure is performed, and a type of an interaction with the anatomical structure; and determining an amount of force to be applied to the anatomical structure based at least in part on the type of the anatomical structure and the type of the interaction with the anatomical structure. Arrangements also relate to providing an indication of the amount of force to control performance of the medical procedure with the robotic medical system.

Description

UPDATING A USER INTERFACE BASED ON FORCE APPLIED BY AN INSTRUMENT DURING TELEOPERATION
CROSS-REFERENCES TO RELATED APPLICATIONS
[0001] This application claims the benefit of priority under 35 U.S.C. § 119 to U.S. Provisional Patent Application No. 63/559,772, filed February 29, 2024, which is hereby incorporated by reference herein in its entirety.
BACKGROUND
[0002] Teleoperation of robotic systems can provide teleoperators with multiple advantages. For example, teleoperators can operate such robotic systems with greater control and precision than would otherwise be achievable using conventional techniques. Further, teleoperators can suspend movements made by such instruments when addressing fatigue, and continue operation when the fatigue has subsided. But because teleoperators are not in physical control of these instruments, it can be difficult to determine whether, and to what degree, the instruments are exerting force on one or more objects in proximity to the instruments.
SUMMARY
[0003] Technical solutions disclosed herein are generally related to systems and methods for updating a user interface based on force applied by an instrument (e.g., a medical instrument) during teleoperation. For example, described herein are specific techniques for updating graphical user interfaces during teleoperation to establish a specific teleoperation experience. The graphical user interface can be generated based at least in part on video data received from a sensing system during a medical procedure. The graphical user interface can include one or more indications of the amount of force to control performance of the medical procedure with the robotic medical system. These indications can be in the form of numbers, letters, or the like, or in the form of updates to the images represented by the video data that occur in accordance with the techniques described herein.
[0004] Aspects of the technical solutions are directed to a system. The system can include one or more processors, coupled with memory. The one or more processors can receive a data stream of a medical procedure performed with a robotic medical system. The one or more processors can identify, using the data stream and with one or more models trained with machine learning, a type of an anatomical structure on which the medical procedure is performed, and a type of an interaction with the anatomical structure. The one or more processors can determine an amount of force to be applied to the anatomical structure based at least in part on the type of the anatomical structure and the type of the interaction with the anatomical structure. The one or more processors can provide an indication of the amount of force to control performance of the medical procedure with the robotic medical system. [0005] In some aspects, the one or more processors can generate a graphical user interface based at least in part on images representing instruments and anatomical structures that are in a field of view of an imaging device, the images captured at a plurality of time steps. In aspects, the graphical interface comprises a timer representing an amount of time during which the amount of force to be applied is applied to the anatomical structure or a remaining amount of time during which the amount of force to be applied satisfies a cumulative force threshold.
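By way of a non-limiting illustration only, the following Python sketch shows one way such a timer could be derived from sampled force data: it accumulates applied force over time and estimates the remaining time before a cumulative force threshold would be satisfied. The function name, sampling interval, units, and threshold values are illustrative assumptions and are not prescribed by this disclosure.

```python
# Minimal sketch: estimating elapsed and remaining time for a cumulative
# force budget. All names and values are illustrative assumptions.

def remaining_time(force_samples_newtons, sample_period_s, cumulative_limit_ns):
    """Return (elapsed_s, remaining_s) given force samples in newtons.

    The cumulative force budget is expressed in newton-seconds; the
    remaining time assumes the most recent force level persists.
    """
    elapsed_s = len(force_samples_newtons) * sample_period_s
    accumulated_ns = sum(f * sample_period_s for f in force_samples_newtons)
    budget_left_ns = max(cumulative_limit_ns - accumulated_ns, 0.0)
    current_force = force_samples_newtons[-1] if force_samples_newtons else 0.0
    remaining_s = budget_left_ns / current_force if current_force > 0 else float("inf")
    return elapsed_s, remaining_s


# Example: 2 N applied for 3 samples at 0.5 s each, with a 10 N·s budget.
print(remaining_time([2.0, 2.0, 2.0], 0.5, 10.0))  # (1.5, 3.5)
```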
[0006] In some aspects, the one or more processors can receive the data stream of the medical procedure. The data stream can comprise data associated with force vectors that represent directions and magnitudes of force interactions between instruments and anatomical structures involved in the medical procedure, the force vectors captured at a plurality of time steps. The one or more processors can determine the amount of force to be applied to the anatomical structure based at least in part on the force vectors.
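As a minimal, hedged sketch of how force vectors captured at a plurality of time steps could be reduced to scalar amounts of force, the following Python example computes per-time-step magnitudes along with simple peak and mean summaries; the vector format and the particular reduction are assumptions for illustration only.

```python
# Minimal sketch: reducing a time series of 3-D force vectors to scalar
# magnitudes. Vector format and the peak/mean reduction are assumptions.
import math

def force_magnitudes(force_vectors):
    """force_vectors: iterable of (fx, fy, fz) tuples in newtons."""
    return [math.sqrt(fx * fx + fy * fy + fz * fz) for fx, fy, fz in force_vectors]

vectors = [(0.5, 0.0, 0.0), (0.3, 0.4, 0.0), (0.0, 0.0, 1.2)]
mags = force_magnitudes(vectors)
print(max(mags), sum(mags) / len(mags))  # peak and mean magnitude
```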
[0007] In some aspects, the one or more processors can receive the data stream of the medical procedure, the data stream comprising data associated with a skill level of the one or more surgeons or force vectors involved in previous medical procedures involving the one or more surgeons. The one or more processors can determine the amount of force to be applied to the anatomical structure based at least in part on the skill level of the one or more surgeons or the force vectors involved in the previous medical procedures involving the one or more surgeons. [0008] In aspects, the one or more processors can determine the amount of force to be applied to the anatomical structure based at least in part on comparing a magnitude of a force interaction between the anatomical structure and an instrument involved in the medical procedure to one or more of: a historical magnitude of a prior force interaction, a force magnitude associated with a force limit, an optimal magnitude associated with an optimal force, a cumulative force associated with magnitudes of one or more previous force interactions, a first interaction-type magnitude associated with a primary interaction type for an instrument involved in the interaction with the anatomical structure, or a second interaction-type magnitude associated with a secondary interaction type for the instrument involved in the interaction with the anatomical structure.
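The comparison described above can be pictured with the following illustrative Python sketch, which compares a measured force magnitude against several named reference magnitudes; the dictionary keys, values, and output format are assumptions rather than required elements of the disclosure.

```python
# Minimal sketch: comparing a measured force magnitude against several
# reference magnitudes named in the paragraph above. Keys and values are
# illustrative assumptions only.

def compare_force(measured_n, references):
    """references: mapping of reference name -> magnitude in newtons."""
    return {name: measured_n - ref for name, ref in references.items()}

refs = {
    "historical": 1.1,            # prior force interaction
    "force_limit": 2.0,           # magnitude associated with a force limit
    "optimal": 0.8,               # magnitude associated with an optimal force
    "cumulative": 5.0,            # sum of previous interaction magnitudes
    "primary_interaction": 1.5,   # primary interaction type for the instrument
    "secondary_interaction": 0.6, # secondary interaction type for the instrument
}
print(compare_force(1.3, refs))  # positive values exceed the reference
```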
[0009] In some aspects, the one or more processors can provide, for display via a graphical user interface, one or more images associated with the data stream that correspond to the anatomical structure. The one or more processors can update one or more pixels of the one or more images based at least in part on the amount of force to be applied. In aspects, the one or more pixels correspond to at least a portion of the anatomical structure represented by the one or more images of instruments and anatomical structures that are in a field of view of the imaging device.
[0010] The one or more processors can determine at least one area associated with at least one overlay, the at least one area corresponding to the anatomical structure represented by the one or more images. The overlay can be configured to cause the graphical user interface to update a representation of the at least one area when displayed via the display. The one or more processors can update the one or more pixels based at least in part on the at least one overlay.
[0011] In aspects, the one or more processors can determine at least one area associated with at least one overlay, the at least one area corresponding to the anatomical structures represented by the one or more images, construct a heatmap based at least in part on the at least one area and the amount of force to be applied; and update the one or more pixels based at least in part on the at least one overlay and the heatmap. [0012] The one or more processors can determine at least one first area where the amount of force to be applied satisfies a first threshold. The one or more processors can determine at least one second area where the amount of force to be applied satisfies a second threshold. The one or more processors can update the one or more pixels based at least in part on the at least one first area and the at least one second area.
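A minimal sketch of updating pixels against two thresholds is shown below; it tints pixels of an image where a per-pixel force estimate satisfies a first (warning) threshold or a second (limit) threshold, approximating an overlay or heatmap. The array shapes, colors, and threshold values are illustrative assumptions.

```python
# Minimal sketch: tinting image pixels where a per-pixel force estimate
# satisfies one of two thresholds, approximating an overlay/heatmap.
import numpy as np

def apply_force_overlay(image_rgb, force_map, warn_threshold, limit_threshold):
    """image_rgb: HxWx3 uint8 image; force_map: HxW float array (newtons)."""
    out = image_rgb.copy()
    warn = (force_map >= warn_threshold) & (force_map < limit_threshold)
    limit = force_map >= limit_threshold
    out[warn] = (0.5 * out[warn] + 0.5 * np.array([255, 255, 0])).astype(np.uint8)   # yellow tint
    out[limit] = (0.5 * out[limit] + 0.5 * np.array([255, 0, 0])).astype(np.uint8)   # red tint
    return out

image = np.zeros((4, 4, 3), dtype=np.uint8)
forces = np.array([[0.0, 0.5, 1.0, 2.5]] * 4)
print(apply_force_overlay(image, forces, 1.0, 2.0)[0])
```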
[0013] The one or more processors can generate data associated with a graphical user interface based at least in part on the indication of the amount of force to control performance of the medical procedure with the robotic medical system and images representing instruments and anatomical structures that are in a field of view of an imaging device. The one or more processors can provide the data associated with the graphical user interface to a display device. The data can be associated with the graphical user interface configured to cause the display device to provide an output representing the graphical user interface.
[0014] In some aspects, the one or more processors can determine an amount of force to be applied to the anatomical structure based at least on a lookup table, where the lookup table represents a plurality of force limits corresponding to a plurality of types of anatomical structures.
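One simple way to represent such a lookup table is sketched below in Python; the structure names and force limits shown are placeholders for illustration and are not clinical values.

```python
# Minimal sketch: a lookup table mapping anatomical structure types to
# force limits. Names and limits (in newtons) are illustrative only.

FORCE_LIMITS_N = {
    "liver": 1.0,
    "bowel": 0.6,
    "connective_tissue": 1.8,
}

def force_limit_for(structure_type, default_n=0.5):
    # Fall back to a conservative default for unknown structure types.
    return FORCE_LIMITS_N.get(structure_type, default_n)

print(force_limit_for("bowel"))    # 0.6
print(force_limit_for("unknown"))  # 0.5
```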
[0015] The one or more processors can determine the amount of force to be applied to the anatomical structure based at least in part on the type of the anatomical structure and the type of the interaction with the anatomical structure, the amount of force to be applied representing a range of forces to be applied.
[0016] In aspects, the one or more processors can determine the amount of force to be applied to at least a portion of the anatomical structure based at least in part on the type of the anatomical structure and the type of the interaction with the anatomical structure.
[0017] The one or more processors can determine the type of the interaction with the anatomical structure as a grab interaction, a retract interaction, a cut interaction, or a cauterize interaction.
[0018] The one or more processors can detect a change to the anatomical structure during the medical procedure. The one or more processors can update the amount of force to be applied to the anatomical structure based at least in part on the change to the anatomical structure. The one or more processors can provide a second indication of the amount of force to control performance of the medical procedure with the robotic medical system based at least on the update to the amount of force to be applied to the anatomical structure.
[0019] In aspects, the interaction with the anatomical structure can involve contact between an instrument of the robotic medical system and the anatomical structure.
[0020] Aspects of the technical solutions are directed to a method. The method can include the one or more processors receiving a data stream of a medical procedure performed with a robotic medical system. The method can include the one or more processors identifying, using the data stream and with one or more models trained with machine learning, a type of an anatomical structure on which the medical procedure is performed, and a type of an interaction with the anatomical structure. The method can include the one or more processors determining an amount of force to be applied to the anatomical structure based at least in part on the type of the anatomical structure and the type of the interaction with the anatomical structure. The method can include the one or more processors providing an indication of the amount of force to control performance of the medical procedure with the robotic medical system.
[0021] In aspects, the method can include the one or more processors generating a graphical user interface based at least in part on images representing instruments and anatomical structures that are in a field of view of an imaging device, the images captured at a plurality of time steps. In some aspects, the graphical user interface comprises a timer representing an amount of time during which the amount of force to be applied is applied to the anatomical structure or a remaining amount of time during which the amount of force to be applied satisfies a cumulative force threshold.
[0022] In some aspects, the method can include the one or more processors receiving the data stream of the medical procedure. The data stream can comprise data associated with force vectors that represent directions and magnitudes of force interactions between instruments and anatomical structures involved in the medical procedure, the force vectors captured at a plurality of time steps. [0023] In aspects, the method can include the one or more processors determining the amount of force to be applied to the anatomical structure based at least in part on the force vectors.
[0024] Aspects of the technical solutions are directed to a non-transitory computer- readable medium storing processor-executable instructions that, when executed by one or more processors, cause the one or more processors to receive a data stream of a medical procedure performed with a robotic medical system. The instructions can include instructions to identify, using the data stream and with one or more models trained with machine learning, a type of an anatomical structure on which the medical procedure is performed, and a type of an interaction with the anatomical structure. The instructions can include instructions to determine an amount of force to be applied to the anatomical structure based at least in part on the type of the anatomical structure and the type of the interaction with the anatomical structure. The instructions can include instructions to provide an indication of the amount of force to control performance of the medical procedure with the robotic medical system.
[0025] These and other aspects and implementations are discussed in detail below. The foregoing information and the following detailed description include illustrative examples of various aspects and implementations and provide an overview or framework for understanding the nature and character of the claimed aspects and implementations. The drawings provide illustration and a further understanding of the various aspects and implementations and are incorporated in and constitute a part of this specification. The foregoing information and the following detailed description and drawings include illustrative examples and should not be considered as limiting.
BRIEF DESCRIPTION OF THE DRAWINGS
[0026] FIG. 1A depicts an example system to update a user interface based on force applied by an instrument during teleoperation of robotic systems.
[0027] FIG. 1B illustrates a schematic block diagram of an example environment for updating a user interface based on force applied by an instrument during teleoperation using a system, according to some embodiments; [0028] FIG. 2 illustrates a flowchart diagram illustrating an example method for updating a user interface based on force applied by an instrument during teleoperation, according to some embodiments;
[0029] FIG. 3 illustrates an image of an example graphical user interface, according to some embodiments;
[0030] FIG. 4 illustrates a graph of example force limits, according to some embodiments;
[0031] FIG. 5 illustrates a diagram of a medical environment, according to some embodiments; and
[0032] FIG. 6 illustrates a block diagram depicting an architecture for a computer system that can be employed to implement elements of the systems and methods described and illustrated herein.
DETAILED DESCRIPTION
[0033] Following below are more detailed descriptions of various concepts related to, and implementations of, methods, apparatuses, and systems for updating a user interface based on force applied by an instrument during teleoperation. The various concepts introduced above and discussed in greater detail below can be implemented in any of numerous ways.
[0034] Although the present disclosure is discussed in the context of a surgical procedure, in some embodiments, the present disclosure can be applicable to other medical sessions or environments or activities, as well as non-medical activities where removal of irrelevant information is desired.
[0035] Systems, methods, apparatuses, and non-transitory computer-readable media are provided for updating a user interface based on force applied by an instrument (e.g., a medical instrument) during teleoperation. In some embodiments, methods described herein include receiving a data stream of a medical procedure performed with a robotic medical system; identifying, using the data stream and with one or more models trained with machine learning, a type of an anatomical structure on which the medical procedure is performed, and a type of an interaction with the anatomical structure; and determining an amount of force to be applied to the anatomical structure based at least in part on the type of the anatomical structure and the type of the interaction with the anatomical structure. Arrangements also relate to providing an indication of the amount of force to control performance of the medical procedure with the robotic medical system.
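For illustration only, the following Python sketch outlines this overall flow: consuming frames of a data stream, inferring the anatomy and interaction types with trained models, looking up an amount of force, and emitting an indication. The model interfaces, helper names, and default values are assumptions and do not represent the disclosed implementation.

```python
# Minimal, hedged sketch of the overall flow described above. The model
# callables and the force table are stand-ins for trained models and a
# configured force-limit source; they are assumptions, not the disclosed API.

def process_stream(frames, anatomy_model, interaction_model, force_table):
    indications = []
    for frame in frames:
        anatomy = anatomy_model(frame)          # e.g., "liver"
        interaction = interaction_model(frame)  # e.g., "retract"
        amount_n = force_table.get((anatomy, interaction), 0.5)
        indications.append({"anatomy": anatomy,
                            "interaction": interaction,
                            "force_to_apply_n": amount_n})
    return indications

# Stub models standing in for models trained with machine learning.
frames = ["frame0", "frame1"]
table = {("liver", "retract"): 1.0}
print(process_stream(frames, lambda f: "liver", lambda f: "retract", table))
```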
[0036] Because medical procedures such as surgeries can be performed using robotic systems as described herein, the surgeons controlling such robotic surgical systems often need to estimate the force applied by medical instruments involved in the surgeries almost solely based on the images displayed for the surgeon during the surgery via a user input system (described herein). To address the inefficiencies associated with estimating forces applied by medical instruments in this manner, the present disclosure includes systems and methods that enable surgeons to quickly quantify an amount of force that is or can be applied to one or more anatomical structures involved in the surgery. The described techniques can improve the perception of the way the robotic system is interacting with patients during medical procedures, reduce the chances of applying force to anatomical structures in a way that unnecessarily results in adverse effects to the short- and long-term health outcomes of patients, and generally improve overall patient outcomes. And by virtue of understanding the force that can be applied to anatomical structures, a surgeon may be able to move faster than they otherwise would while avoiding unintended damage to the anatomical structures involved in the medical procedure.
[0037] FIG. 1 A depicts an example system 100 to update a user interface based on force applied by a medical instrument during teleoperation of robotic systems such as, for example, robotic medical systems used in robot-assisted surgeries. The example system 100 can include a combination of hardware and software for generating indications of an amount of force during operation of a robotic system. For example, the example system 100 can include a network 101, a medical environment 102, and a data processing system 130 as described herein.
[0038] The example system 100 can include a medical environment 102 (e.g., a medical environment that is the same as, or similar to, the example medical environment 500 of FIG. 5) including one or more data capture devices 110, medical instruments 112, visualization tools 114, displays 116 and robotic medical systems (RMSs) 120. RMS 120 can include or generate various types of data streams 158 that are described herein, and can operate using system configurations 122. One or more RMSs 120 can be communicatively coupled with one or more data processing systems 130. [0039] The RMS 120 can be deployed in any medical environment 102. The medical environment 102 can include any space or facility for performing medical procedures, such as a surgical facility, or an operating room. The medical environment 102 can include medical instruments 112 (e.g., surgical tools used for specialized tasks) that the RMS 120 can use for performing operational procedures, such as surgical patient procedures, whether invasive, non-invasive, or any in-patient or out-patient procedures. RMS 120 can be centralized or distributed across a plurality of computing devices or systems, such as computing devices 600 (e.g., used on servers, network devices or cloud computing products) to implement various functionalities of the RMS 120, including communicating or processing data streams 158 across various devices via the network 101.
[0040] The medical environment 102 can include one or more data capture devices 110 (e.g., optical devices, such as cameras or sensors or other types of sensors or detectors) for capturing data streams 158. The data streams 158 can include any sensor data, such as images or videos of a surgery, kinematics data on any movement of medical instruments 112, or any events data, such as installation, configuration or selection events corresponding to medical instruments 112. The medical environment 102 can include one or more visualization tools 114 to gather the captured data streams 158 and process it for display to the user (e.g., a surgeon, a medical professional or an engineer or a technician configuring RMS) via one or more (e.g., touchscreen) displays 116. A display 116 can present data stream 158 (e.g., images or video frames) of a medical procedure (e.g., surgery) being performed using the RMS 120 while handling, manipulating, holding or otherwise utilizing medical instruments 112 to perform surgical tasks at the surgical site. RMS 120 can include system configurations 122 based on which RMS 120 can operate, and the functionality of which can impact the data flow of the data streams 158.
[0041] The system 100 can include one or more data capture devices 110 (e.g., video cameras, sensors or detectors) for collecting any data stream 158, that can be used for machine learning, including detection of objects from sensor data (e.g., video frames or force or feedback data), detection of particular events (e.g., user interface selection of, or a surgeon’s engaging of, a medical instrument 112) or detection of kinematics (e.g., movements of the medical instrument 112). The data capture devices 110 can include cameras or other image capture devices for capturing videos or images from a particular viewpoint within the medical environment 102. The data capture devices 110 can be positioned, mounted, or otherwise located to capture content from any viewpoint that facilitates the data processing system capturing various surgical tasks or actions.
[0042] The data capture devices 110 can include any of a variety of detectors, sensors, cameras, video imaging devices, infrared imaging devices, visible light imaging devices, intensity imaging devices (e.g., black, color, grayscale imaging devices, etc.), depth imaging devices (e.g., stereoscopic imaging devices, time-of-flight imaging devices, etc.), medical imaging devices such as endoscopic imaging devices, ultrasound imaging devices, etc., non- visible light imaging devices, any combination or sub-combination of the above mentioned imaging devices, or any other type of imaging devices that can be suitable for the purposes described herein. The data capture devices 110 can include cameras that a surgeon can use to perform a surgery and observe manipulation components within a purview of field of view suitable for the given task performance. The data capture devices can output any type of data streams 158, including data streams 158 of kinematics data (e.g., kinematics data streams), data streams 158 of events data (e.g., events data streams) and data streams 158 of sensor data (e.g., sensors data streams).
[0043] For example, data capture devices 110 can capture, detect, or acquire sensor data such as videos or images, including for example, still images, video images, vector images, bitmap images, other types of images, or combinations thereof. The data capture devices 110 can capture the images at any suitable predetermined capture rate or frequency. Settings, such as zoom settings or resolution, of each of the data capture devices 110 can vary as desired to capture suitable images from any viewpoint. For instance, data capture devices 110 can have fixed viewpoints, locations, positions, or orientations. The data capture devices 110 can be portable, or otherwise configured to change orientation or telescope in various directions. The data capture devices 110 can be part of a multi-sensor architecture including multiple sensors, with each sensor being configured to detect, measure, or otherwise capture a particular parameter (e.g., sound, images, or pressure).
[0044] The data capture devices 110 can generate sensor data from any type and form of a sensor, such as a positioning sensor, a biometric sensor, a velocity sensor, an acceleration sensor, a vibration sensor, a motion sensor, a pressure sensor, a light sensor, a distance sensor, a current sensor, a focus sensor, a temperature or pressure sensor or any other type and form of sensor used for providing data on the medical instruments 112, or the data capture devices (e.g., optical devices). For example, a data capture device 110 can include a location sensor, a distance sensor or a positioning sensor providing coordinate locations of a medical instrument 112 (e.g., kinematics data). The data capture device 110 can include a sensor providing information or data on a location, position or spatial orientation of an object (e.g., medical instrument 112 or a lens of data capture device 110) with respect to a reference point for kinematics data. The reference point can include any fixed, defined location used as the starting point for measuring distances and positions in a specific direction, serving as the origin from which all other points or locations can be determined.
[0045] The display 116 can show, illustrate or play the data stream 158, such as a video stream, in which the medical instruments 112 at or near surgical sites are shown. For example, the display 116 can display a rectangular image of a surgical site along with at least a portion of the medical instruments 112 being used to perform surgical tasks. The display 116 can provide compiled or composite images generated by the visualization tool 114 from a plurality of data capture devices 110 to provide a visual feedback from one or more points of view.
[0046] The visualization tool 114 can be configured or designed to receive any number of different data streams 158 from any number of data capture devices 110 and combine them into a single data stream displayed on a display 116. The visualization tool 114 can be configured to receive a plurality of data stream components and combine the plurality of data stream components into a single data stream 158. For instance, the visualization tool 114 can receive a visual sensor data from one or more of the medical instruments 112, sensors or cameras with respect to a surgical site or an area in which a surgery is performed. The visualization tool 114 can incorporate, combine or utilize multiple types of data (e.g., positioning data of a medical instrument 112 along sensor readings of pressure, temperature, vibration or any other data) to generate an output to present on a display 116. The visualization tool 114 can present locations of medical instruments 112 along with locations of any reference points or surgical sites, including locations of anatomical parts of the patient (e.g., organs, glands or bones).
[0047] The medical instruments 112 can be any type and form of tool or instrument used for surgery, medical procedures or a tool in an operating room or environment. The medical instrument 112 can be imaged by, associated with, or include an image capture device. For instance, a medical instrument 112 can be a tool for making incisions, a tool for suturing a wound, an endoscope for visualizing organs or tissues, an imaging device, a needle and a thread for stitching a wound, a surgical scalpel, forceps, scissors, retractors, graspers, or any other tool or instrument to be used during a surgery. The medical instruments 112 can include hemostats, trocars, surgical drills, suction devices or any instruments for use during a surgery. The medical instrument 112 can include other or additional types of therapeutic or diagnostic medical imaging implements. The medical instrument 112 can be configured to be installed in, coupled with, or manipulated by an RMS 120, such as by manipulator arms or other components for holding, using and manipulating the medical instruments. In some embodiments, the medical instruments 112 can be the same as, or similar to, the medical instruments discussed with respect to FIG. 5.
[0048] The RMS 120 can be a computer-assisted system configured to perform a surgical or medical procedure or activity on a patient via or using or with the assistance of one or more robotic components or the medical instruments 112. The RMS 120 can include any number of manipulator arms for grasping, holding or manipulating various medical instruments 112 and performing computer-assisted medical tasks using the medical instruments 112 controlled by the manipulator arms.
[0049] The data streams 158 can be generated by the RMS 120. For instance, sensor data associated with the data streams 158 can include images (e.g., video images) captured by a medical instrument 112 that can be sent to the visualization tool 114. For instance, a display 116 (e.g., a touchscreen) can be used by a surgeon to select, engage, or configure a particular medical instrument 112, thereby triggering an event that can be indicated or included in data packets of a data stream 158. The RMS 120 can include one or more input ports to receive direct or indirect connection of one or more auxiliary devices. For example, the visualization tool 114 can be connected to the RMS 120 to receive the images from the medical instrument when the medical instrument is installed in the RMS 120 (e.g., on a manipulator arm for handling medical instruments 112). For example, the data stream 158 can include data indicative of positioning and movement of the medical instruments 112 that can be captured or identified by data packets of kinematics data. The visualization tool 114 can combine the data stream components from the data capture devices 110 and the medical instrument 112 into a single combined data stream 158 which can be indicated or presented on a display 116. In some embodiments, the RMS 120 provides the data streams 158 to the data processing system 130 periodically, continuously, or in real-time.
[0050] Data packets can include a unit of data in a data stream 158. The data packets can include the actual information being sent and metadata, such as a source and a destination address, a port identifier or any other information for transmitting data. The data packets can include a data (e.g., a payload) corresponding to an event (e.g., installation, uninstallation, engagement or setup of a medical instrument 112). The data packets can include data corresponding to sensor information (e.g., a video frame captured by a camera), or data on movement of a medical instrument 112. The data packets can be transmitted in the data streams 158 that can be separated or combined. For instance, a data stream 158 for kinematics data (e.g., a kinematics data stream) can include a plurality of data packets indicative of movement of robotic system components or features.
[0051] Data packets can include one or more timestamps, which can indicate a particular time when particular events took place. Timestamps can include time indications expressed in any combination of nanoseconds, microseconds, milliseconds, seconds, hours, days, months or years. Timestamps can be included in the payload or metadata of data packets and can indicate the time when a data packet was generated, the time when the data packet was transmitted from the device that generated the data packet, the time when the data packet was received by another device (e.g., a system within the RMS 120, or another device on a network) or a time when the data packet is stored into a data repository 132.
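A minimal sketch of such a data packet, with a payload, routing metadata, and a timestamp, might look like the following; the field names and defaults are illustrative assumptions only.

```python
# Minimal sketch of a data-stream packet carrying a payload, routing
# metadata, and a timestamp. Field names are illustrative assumptions.
from dataclasses import dataclass, field
import time

@dataclass
class StreamPacket:
    stream_type: str                 # e.g., "kinematics", "events", "video"
    payload: dict                    # e.g., joint positions or an event record
    source: str = "rms"
    destination: str = "data_processing_system"
    timestamp_s: float = field(default_factory=time.time)

pkt = StreamPacket("events", {"event": "instrument_installed", "arm": 2})
print(pkt.stream_type, pkt.payload, round(pkt.timestamp_s, 3))
```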
[0052] The data repository 132 can include one or more data files, data structures, arrays, values, or other information that facilitates operation of the data processing system 130. The data repository 132 can include one or more local or distributed databases and can include a database management system. The data repository 132 can include, maintain, or manage one or more data streams 158. The data streams 158 can include or be formed from one or more of a video stream, image stream, stream of sensor measurements, event stream, or kinematics stream. The data streams 158 can include data collected by one or more data capture devices 110, such as a set of 3D sensors from a variety of angles or vantage points with respect to the procedure activity (e.g., point or area of surgery). [0053] The data stream 158 can include any stream of data. The data streams 158 can include a video stream, including a series of video frames or organized into video fragments, such as video fragments of about 1, 2, 3, 4, 5, 10 or 15 seconds of a video. Each second of the video can include, for example, 30, 45, 60, 90 or 120 video frames per second. The data streams 158 can include an event stream which can include a stream of event data or information, such as packets, which identify or convey a state of the RMS 120 or an event that occurred in association with the RMS 120. For example, data stream 158 can include any portion of system configuration 122, including information on operations on data streams 158, data on installation, uninstallation, calibration, set up, attachment, detachment or any other action performed by or on an RMS 120 with respect to the medical instruments 112. [0054] The data stream 158 can include data about an event, such as a state of the RMS 120 indicating whether the medical instrument 112 is calibrated, adjusted or includes a manipulator arm installed on the RMS 120. A data stream 158 representing event data (e.g., event data stream) can include data on whether an RMS 120 was fully functional (e.g., without errors) during the procedure. For example, when a medical instrument 112 is installed on a manipulator arm of the RMS 120, a signal or data packet(s) can be generated indicating that the medical instrument 112 has been installed on the manipulator arm of the RMS 120.
[0055] The data stream 158 can include a stream of kinematics data which can refer to or include data associated with one or more of the manipulator arms or medical instruments 112 attached to the manipulator arms, such as arm locations or positioning. The data corresponding to the medical instruments 112 can be captured or detected by one or more displacement transducers, orientational sensors, positional sensors, or other types of sensors and devices to measure parameters or generate kinematics information. The kinematics data can include sensor data along with time stamps and an indication of the medical instrument 112 or type of medical instrument 112 associated with the data stream 158.
[0056] The data repository 132 can store sensor data having video frames that can include one or more static images or frames extracted from a sequence of images of a video file. A video frame can represent a specific moment in time and can be identified by metadata including a timestamp. Video frames can display visual content of the video of a medical procedure being analyzed by the data processing system 130 (e.g., by the anatomy detector 146), including content indicative of the performance of the surgeon performing the procedure. For example, in a video file capturing a robotic surgical procedure, a video frame can depict a snapshot of the surgical task, illustrating a movement or usage of a medical instrument 112 such as a robotic arm manipulating a surgical tool within the patient's body.
[0057] The data streams 158 corresponding to sensor data (e.g., videos), events, and kinematics can include related, corresponding or duplicate information that can be used for cross-data comparisons and verification that all three data sources are in agreement. For instance, the detection function can implement a check for consistency between diverse data types and data sources by mapping and comparing timestamps between different data types to facilitate if they consistently progress over time, such as in accordance with expected flow and correlation of events, video stream details and kinematics values.
[0058] For example, an installation of a medical instrument 112 can be recorded as a system event and provided in a data stream 158 of events data. At the same or similar expected time frame, the installed medical instrument 112 can show up in sensor data (e.g., in a video), which can be detected by the data processing system 130, which can include a computer vision model. Kinematics data can confirm movements of the medical instrument 112 according to the movements detected by the data processing system 130. Using these cross-data stream correlation techniques, the data processing system 130 can verify time synchronization across the three data sources (e.g., three data streams 158).
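A simplified version of this cross-stream consistency check is sketched below: it verifies that an instrument-installation event is followed, within a tolerance window, by a corresponding detection in the video stream and by motion in the kinematics stream. The field names and tolerance are assumptions for illustration.

```python
# Minimal sketch: cross-stream consistency check across event, video, and
# kinematics timestamps. Names and the tolerance window are assumptions.

def streams_consistent(event_ts, video_detection_ts, kinematics_motion_ts,
                       tolerance_s=2.0):
    """Return True when video and kinematics observations follow the event
    within the tolerance window (all timestamps in seconds)."""
    return (0.0 <= video_detection_ts - event_ts <= tolerance_s and
            0.0 <= kinematics_motion_ts - event_ts <= tolerance_s)

print(streams_consistent(event_ts=100.0,
                         video_detection_ts=100.8,
                         kinematics_motion_ts=101.2))  # True
```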
[0001] With continued reference to FIG. 1A, the data processing system 130 can include any combination of hardware or software that performs one or more of the functions described herein. For example, the data processing system 130 can include any combination of hardware and software for updating a user interface based on force applied by a medical instrument during teleoperation. The data processing system 130 can include any computing device (e.g., a computing device that is the same as, or similar to, the computing device 600 of FIG. 6) and can include one or more servers, virtual machines, or can be part of or include a cloud computing environment. The data processing system 130 can be provided via a centralized computing device or be provided via distributed computing components, such as including multiple, logically grouped servers and facilitating distributed computing techniques. The logical group of servers can be referred to as a data center, server farm or a machine farm. The servers, which can include virtual machines, can also be geographically dispersed. A data center or machine farm can be administered as a single entity, or the machine farm can include a plurality of machine farms. The servers within each machine farm can be heterogeneous - one or more of the servers or machines can operate according to one or more types of operating system platform.
[0002] The data processing system 130, or components thereof, can include a physical or virtual computer system operatively coupled to, or associated with, the medical environment 102. In some embodiments, the data processing system 130, or components thereof, can be coupled, or associated with, the medical environment 102 via a network 101, either directly or indirectly through an intermediate computing device or system. The network 101 can be any type or form of network. The geographical scope of the network can vary widely and can include a body area network (BAN), a personal area network (PAN), a local-area network (LAN) (e.g., Intranet), a metropolitan area network (MAN), a wide area network (WAN), or the Internet. The topology of the network 101 can assume any form such as point-to-point, bus, star, ring, mesh, tree, etc. The network 101 can utilize different techniques and layers or stacks of protocols, including, for example, the Ethernet protocol, the internet protocol suite (TCP/IP), the ATM (Asynchronous Transfer Mode) technique, the SONET (Synchronous Optical Networking) protocol, the SDH (Synchronous Digital Hierarchy) protocol, etc. The TCP/IP internet protocol suite can include application layer, transport layer, internet layer (including, e.g., IPv6), or the link layer. The network 101 can be a type of a broadcast network, a telecommunications network, a data communication network, a computer network, a Bluetooth network, or other types of wired and wireless networks.
[0059] The data processing system 130, or components thereof, can be located at least partially at the location of the surgical facility associated with the medical environment 102 or remotely therefrom. Elements of the data processing system 130, or components thereof, can be accessible via portable devices such as laptops, mobile devices, wearable smart devices, etc. The data processing system 130, or components thereof, can include other or additional elements that can be considered desirable to have in performing the functions described herein. The data processing system 130, or components thereof, can include, or be associated with, one or more components or functionality of a computing device including, for example, one or more processors coupled with memory that can store instructions, data or commands for implementing the functionalities of the data processing system 130 discussed herein.
[0060] In some embodiments, the data processing system 130 can include a data collector 144, an anatomy detector 146, an interaction classifier 148, a force predictor 150, a performance controller 152, or a data repository 132. In some embodiments, the performance controller 152 can include a timer 154 or a user interface 156. The data processing system 130 can be communicatively coupled with one or more data processing systems 130. In some embodiments, the data processing system 130 can be implemented by one or more components of the medical environment 102. In some embodiments, the data processing system 130 can receive one or more data streams 158 that are described herein, and can monitor operation of the RMS 120 using the system configurations 122. One or more RMSs 120 can be communicatively coupled with one or more data processing systems 130. In some embodiments, the data repository 132 can be configured to receive, store, and provide the data streams 158 (e.g., one or more data packets associated with the data streams 158) before, during, or after a medical procedure. In some embodiments, the data repository 132 stores data associated with one or more of a machine learning (ML) model 134, historical data 136 associated with one or more previously performed medical procedures involving the RMS 120, types 138 (e.g., one or more force types), thresholds 140 (e.g., thresholds representing force limits), or tables 142 (e.g., tables representing one or more sets of force limits).
[0061] In some embodiments, the data collector 144 can be implemented by the data processing system 130 or can be a device that is the same as, or similar to, the computing device 600 of FIG. 6. The data collector 144 can receive the data streams 158. For example, the data collector 144 can receive the data streams via the network 101. In examples, the data collector 144 can receive the data streams 158 from the data processing system 130. In some embodiments, the data collector 144 can receive the data streams 158 of a medical procedure performed with the RMS 120. In some embodiments, the one or more packets associated with the data streams 158 can represent one or more images during a medical procedure. The one or more images can be captured or otherwise obtained by the visualization tool 114. The one or more images can represent one or more anatomical features or one or more medical instruments as described herein. In some embodiments, the data collector 144 can provide the data streams 158 (e.g., one or more packets of the data streams 158) to the anatomy detector 146 or the interaction classifier 148.
[0062] In some embodiments, the anatomy detector 146 can be implemented by the data processing system 130 or can be a device that is the same as, or similar to, the computing device 600 of FIG. 6. The anatomy detector 146 can receive the data streams 158. For example, the anatomy detector 146 can receive the data streams 158 from the data collector 144. In some embodiments, the anatomy detector 146 can identify a type of an anatomical structure on which a medical procedure is performed. For example, the anatomy detector 146 can identify the type of the anatomical structure on which a medical procedure is performed based on the data streams 158. In some embodiments, the anatomy detector 146 can identify the type of the anatomical structure on which a medical procedure is performed based on the data streams 158 and an ML model. For example, the anatomy detector 146 can provide the data streams 158 to the ML model to cause the ML model to provide an output, the output representing the type of the anatomical structure on which the medical procedure is performed. In some embodiments, the anatomy detector 146 can provide data associated with the type of the anatomical structure to the force predictor 150 or the performance controller 152.
[0063] In some embodiments, the interaction classifier 148 can be implemented by the data processing system 130 or can be a device that is the same as, or similar to, the computing device 600 of FIG. 6. The interaction classifier 148 can receive the data streams 158. For example, the interaction classifier 148 can receive the data streams 158 from the data collector 144. In some embodiments, the interaction classifier 148 can identify a type of an interaction involving an anatomical structure. For example, the interaction classifier 148 can identify the type of the interaction involving the anatomical structure based on the data streams 158. In some embodiments, the interaction classifier 148 can identify the type of the interaction with the anatomical structure based on the data streams 158 and an ML model. For example, the interaction classifier 148 can provide the data streams 158 to the ML model to cause the ML model to provide an output, the output representing the type of the interaction with the anatomical structure on which the medical procedure is performed. In some embodiments, the interaction classifier 148 can provide data associated with the type of the interaction to the force predictor 150 or the performance controller 152.
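By way of illustration, the two classification steps performed by the anatomy detector 146 and the interaction classifier 148 might resemble the following Python sketch, which assumes a generic classifier interface with a predict method; the feature extraction and stub models are assumptions, and the disclosure does not prescribe a particular model family.

```python
# Minimal sketch of the two classification steps described above, using a
# generic classifier interface (predict on a feature vector). The stub
# models and feature vector are illustrative assumptions only.

def classify_frame(features, anatomy_model, interaction_model):
    """features: 1-D feature vector derived from a frame of the data stream."""
    anatomy_type = anatomy_model.predict([features])[0]
    interaction_type = interaction_model.predict([features])[0]
    return anatomy_type, interaction_type

class _StubModel:
    def __init__(self, label):
        self._label = label
    def predict(self, rows):
        return [self._label for _ in rows]

print(classify_frame([0.1, 0.2, 0.3],
                     _StubModel("liver"),
                     _StubModel("grab")))  # ('liver', 'grab')
```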
[0064] In some embodiments, the force predictor 150 can be implemented by the data processing system 130 or can be a device that is the same as, or similar to, the computing device 600 of FIG. 6. The force predictor 150 can receive the data streams 158. For example, the force predictor 150 can receive the data streams 158 from the data collector 144. In some embodiments, the force predictor 150 can receive data associated with the type of the anatomical structure from the anatomy detector 146 or the force predictor 150 can receive data associated with the type of the interaction from the interaction classifier 148. In some embodiments, the force predictor 150 can determine an amount of force to be applied to the anatomical structure. For example, the force predictor 150 can determine the amount of force to be applied to the anatomical structure based on the type of the anatomical structure or the type of the interaction with the anatomical structure. In some embodiments, the force predictor 150 provides data associated with the amount of force to be applied to the anatomical structure to the performance controller 152.
[0065] In some embodiments, the performance controller 152 can be implemented by the data processing system 130 or can be a device that is the same as, or similar to, the computing device 600 of FIG. 6. The performance controller 152 can receive the data associated with the amount of force to be applied to the anatomical structure from the force predictor 150. In some embodiments, the performance controller 152 can determine an indication of the amount of force to control performance of the medical procedure with the robotic medical system. For example, the performance controller 152 can determine a timer 154. In examples, the performance controller 152 determines the timer 154 periodically (e.g., every 1 second, every 2 seconds, etc.). In some examples, the performance controller 152 determines the timer 154 continuously. In some embodiments, the performance controller provides the indication of the amount of force to be applied, where the indication represents the timer 154. [0066] In examples, the performance controller 152 can determine a user interface 156. In examples, the performance controller 152 determines the user interface 156 periodically (e.g., every 1 second, every 2 seconds, etc.). In some examples, the performance controller 152 determines the user interface 156 continuously. In some embodiments, the performance controller provides the indication of the amount of force to be applied, where the indication represents the user interface 156. In some embodiments, the performance controller 152 can provide data associated with the indication of the amount of force to cause a device to display the indication of the amount of force. For example, the performance controller 152 can provide the data associated with the indication of the amount of force to cause display 116 to display the indication of the amount of force. In this example, the data associated with the indication of the amount of force can be configured to cause the display 116 to display the indication.
[0067] In some embodiments, the data repository 132 can be implemented by the data processing system 130 or can be a device that is the same as, or similar to, the computing device 600 of FIG. 6. The data repository 132 can receive data from any of the devices of FIG. 1A either directly or indirectly. In some embodiments, the data includes the ML model 134, the historical data 136, the types 138, the thresholds 140, or the tables 142. In examples, the data stored by the data repository 132 is associated with a previously-performed medical procedure. In some examples, the data stored by the data repository 132 is associated with a current medical procedure. In some embodiments, the data repository 132 can receive the data streams 158 or the system configurations 122 and store the data streams 158 or the system configurations 122. The data repository 132 can then provide the data streams 158 or the system configurations 122 (e.g., one or more data packets thereof) to one or more of the components of the data processing system 130.
[0068] FIG. 1B is a schematic block diagram illustrating an example environment 160 in which devices, systems, methods, or products described herein can be implemented, according to some embodiments. As shown, the environment 160 includes a user interface system 162, a sensing system 164, and a user input system 168. In some embodiments, the user interface system 162 is the same as, or similar to, the data processing system 130 of FIG. 1A. In some embodiments, the sensing system 164 is the same as, or similar to, one or more data capture devices 110 of FIG. 1A.
[0069] The user interface system 162 can receive video data 166 from the sensing system 164. Additionally, the user interface system 162 can receive robotic system data 170, and medical instrument data 172. In examples, the user interface system 162 can receive the video data 166, the robotic system data 170, or the medical instrument data 172 as part of a data stream. The data stream can be received from a robotic medical system that is the same as, or similar to, the robotic medical system 120 of FIG. 1 A. The data stream (e.g., one or more packets included in the data stream) can be the same as, or similar to, the data streams 158 of FIG. 1A.
[0070] The user interface system 162 can also communicate (e.g., establish communication connections to exchange data) with the user input system 168. The user interface system 162, the sensing system 164, or the user input system 168 can include or be implemented by one or more suitable computing systems, such as the computing device 600 of FIG. 6. For example, user interface system 162, sensing system 164, or user input system 168 can include one or more components that are the same as, or similar to, one or more of the components of the computing device 600. In some embodiments, the user interface system 162, sensing system 164, or user input system 168 can be configured to communicate (e.g., to establish communication connections to exchange data). In some embodiments, the system 100 can include one or more devices or systems that are the same as, or similar to, one or more devices or systems discussed with respect to example medical environment 500 of FIG. 5.
[0071] In some examples, the processes described herein, such as the generation of one or more user interfaces, can be implemented by the user interface system 162. Some or all of the processes implemented by the user interface system 162 can be implemented by one or more other devices (alone or in cooperation with the user interface system 162) such as, for example, the sensing system 164 or the user input system 168, which can be the same as, or similar to, the computing device 600 of FIG. 6. While the user interface system 162 is illustrated as a separate system from the user input system 168, in examples, the user interface system 162 can be included in (e.g., implemented by) the user input system 168. Accordingly, one or more of the functions described herein as being performed by the user interface system 162 can similarly be performed by the user input system 168.
[0072] In some embodiments, the user interface system 162 can be the same as, or similar to, the computing device 600 of FIG. 6. In some embodiments, the user input system 168 can be the same as, or similar to, the user control system 510 of FIG. 5 or the computing device 600 of FIG. 6. In some embodiments, the sensing system 164 can be the same as, or similar to, the computing device 600 of FIG. 6. In some embodiments, the user interface system 162 can receive the robotic system data 170 from the sensing system 164, where the sensing system includes a device that is the same as, or similar to, one or more medical instruments supported by manipulator arms (e.g., manipulator arms that are the same as, or similar to, manipulator arms 535A-535D of FIG. 5) such as, for example, an imaging device (e.g., an endoscope, an ultrasound tool, etc.) or a sensing instrument (e.g., a force-sensing surgical instrument) as described herein.
[0073] As used herein, a medical procedure refers to a surgical procedure or operation performed in a medical environment (e.g., a medical or surgical theater, etc. that is the same as, or similar to, the medical environment 500 of FIG. 5) by or using one or more of a medical staff, a robotic system, or a medical instrument. Examples of the medical staff include surgeons, nurses, support staff, and so on (e.g., individuals that can be the same as, or similar to, surgeon 530A or additional medical personnel 530B-530D of FIG. 5). Examples of the robotic systems include the robotic medical system or the robot surgical system described herein such as, for example, one or more device of medical environment 500 (e.g., robotic medical system 524). Examples of medical instruments include the medical instruments supported by the manipulator arms 535A-535D. Medical procedures can have various modalities, including robotic (e.g., using at least one robotic system), non-robotic laparoscopic, non-robotic open, and so on. The robotic system data 170, and medical instrument data 172 collected during a medical procedure also refers to, or includes, robotic system data 170, and medical instrument data 172 collected by one or more devices in a medical environment (e.g., a medical environment 500) in which the medical procedure is performed and for one or more of medical staff, robotic system, or medical instrument performing or used in performing the medical procedure.
[0074] The user interface system 162 can receive and process data sources or data streams including one or more of video data 166, robotic system data 170, and medical instrument data 172 collected for a training procedure or a medical procedure. For example, the user interface system 162 can acquire data streams of the video data 166, robotic system data 170, and medical instrument data 172 in real-time. In some examples, the user interface system 162 can utilize all types of robotic system data 170, and medical instrument data 172 collected, obtained, determined, or calculated for a medical procedure when generating one or more user interfaces (UIs) as described herein. [0075] In some embodiments, the user interface system 162 receives the video data 166, the robotic system data 170, or the medical instrument data 172 during operation of the robotic system. For example, the user interface system 162 can receive the video data 166 from the sensing system 164 during operation of the robotic system. The video data 166 can be associated with one or more images captured individually or continuously by the imaging device included in the sensing system 164. In some embodiments, the imaging device includes a visual image endoscope, laparoscopic ultrasound, camera, etc. Other suitable imaging devices are also contemplated. In some embodiments, the sensing system 164 includes a repositionable assembly including one or more linkages supported by the robotic system. For example, the sensing system 164 can include a repositionable assembly including one or more linkages that can be articulated by the robotic system based on the input provided by the surgeons via the user input system 168 described herein.
[0076] In examples, the user interface system 162 can receive the robotic system data 170 from a robotic system (e.g., from one or more components of a robotic system). In some embodiments, the robotic system data 170 includes a system event stream, the system event stream further including data associated with one or more system events (e.g., states of one or more devices such as whether one or more devices or medical instruments are connected to the robotic system, whether the one or more devices are operating as expected, error messages, or the like). The robotic system can include one or more devices or components of a robotic medical system (e.g., a robotic medical system that is the same as, or similar to, the robotic medical system 524 of FIG. 5) having one or more tools (e.g., one or more tools that are the same as, or similar to, the medical instruments supported by the manipulator arms 535A-535D of FIG. 5) supported thereon. In an example, the user interface system 162 can receive the medical instrument data 172 where the medical instrument data 172 is associated with a state of the one or more of the tools supported by the robotic medical system. In some embodiments, the robotic system can include a user input system 168 (e.g., a first surgeon console that is the same as, or similar to, the user control system 510 of FIG. 5).
[0077] The one or more images captured by the imaging device included in the sensing system 164 can show at least a portion of at least one medical instrument (tools, surgical instruments, or the like) within a field of view of the imaging device. For example, the sensing system 164 can include an imaging device that is supported along a distal portion of a tool (e.g., a tool that is supported by (e.g., installed on) a robotic medical system 524). The sensing system 164 can be operated by medical staff during a training session where the medical staff are familiarizing themselves with the robotic system or practicing certain maneuvers using the robotic system. The sensing system 164 can be operated by medical staff during a surgery where a surgeon is operating the user input system 168. In these examples, the sensing system 164 can be operated such that the imaging device of the sensing system 164 is positioned to capture and generate images of at least a distal portion of at least one medical instrument in a field of view of the imaging device included in the sensing system 164, where the field of view is directed to at least a portion of tissue of a patient. The images can be captured as the one or more medical instruments are controlled by the robotic system based on input received by the user input system 168.
[0078] The robotic system data 170 can be associated with the state of the control of one or more devices of the robotic system based on inputs received by the user input system 168. For example, as the user input system 168 communicates with the robotic system to control the at least one medical instrument, the robotic system can generate and provide the robotic system data 170 to the user interface system 162. The robotic system data 170 can represent whether the user input system 168 is controlling one or more of the medical instruments, whether the user input system 168 is generating control signals that are configured to cause the maneuvering of one or more medical instruments within the field of view of the sensing system, the torque being applied at one or more joints involved in supporting one or more of the medical instruments, or the like. In some embodiments, the robotic system data 170 can be associated with force exerted by the robotic system on one or more anatomical structures. For example, the robotic system data 170 can be generated by the robotic system based on movement of one or more linkages or one or more components of the medical instruments of the robotic system. In one illustrative example, where the one or more linkages of the robotic system are configured to support the positioning and repositioning of a medical instrument, one or more sensors corresponding to the one or more linkages can generate sensor signals representative of the force exerted by the linkages when repositioning the medical instrument. The sensor signals can be included in the robotic system data 170 which, in turn, is included in the data stream. One or more different sensors (e.g., encoders or the like) can be used to generate sensor data indicative of a position of the linkages relative to one another and the robotic system; this sensor data can later be used to derive the position of medical instruments supported by the linkages.
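By way of a non-limiting illustration, the following Python sketch shows one possible way a Cartesian force at an instrument tip could be estimated from joint-torque sensor signals of the supporting linkages. It assumes a kinematic Jacobian for the linkages is available and that gravity and friction contributions have already been compensated; the names estimate_tip_force, joint_torques, and jacobian are hypothetical and are not required by the embodiments described herein.

    import numpy as np

    def estimate_tip_force(joint_torques, jacobian):
        """Estimate the Cartesian wrench at the instrument tip from joint torques.

        Uses the quasi-static relation tau = J^T * f, solved in a least-squares
        sense as f = pinv(J^T) @ tau. Assumes gravity and friction have already
        been compensated out of joint_torques.
        """
        wrench = np.linalg.pinv(jacobian.T) @ joint_torques
        return wrench[:3]  # translational force components (newtons)

    # Example with six joints and a placeholder 6x6 Jacobian for illustration only.
    tau = np.array([0.12, -0.05, 0.30, 0.01, 0.00, 0.02])
    J = np.eye(6)
    print(estimate_tip_force(tau, J))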
[0079] The medical instrument data 172 can be associated with one or more states of one or more medical instruments of the robotic system. For example, the medical instrument data 172 can be associated with one or more states of one or more medical instruments controlled during teleoperation of the robotic system by the user input system 168. The one or more states can represent whether or not the one or more medical instruments of the robotic system are performing one or more functions. As an example, functions can include tool activations, movement of medical instruments, or the like as described herein. The one or more states can represent whether or not one or more medical instruments are being controlled by the robotic system based on inputs received by the user input system 168.
[0080] With continued reference to FIGS. 1A-1B, in some embodiments, the user interface system 162 receives the data stream of the medical procedure, the data stream comprising data associated with force vectors that represent directions and magnitudes of force interactions between medical instruments and anatomical structures involved in the medical procedure. For example, the user interface system 162 can receive the data associated with force vectors at a plurality of time steps during the medical procedure. In this example, the plurality of time steps can be instantaneous time steps (e.g., the procedure can be occurring in real time). In some embodiments, the user interface system 162 can receive the data associated with force vectors at a plurality of time steps prior to the instantaneous time step. For example, when the user interface system 162 determines a cumulative force associated with an interaction between one or more medical instruments and one or more anatomical structures, the user interface system 162 can receive the data associated with force vectors at a plurality of time steps prior to the instantaneous time step. In this example, the data can be generated during the current medical procedure or a previous medical procedure (e.g., a previous medical procedure associated with the patient involved in the current medical procedure). In some embodiments, the user interface system 162 determines the amount of force to be applied to anatomical structures based at least in part on the force vectors captured at the plurality of time steps between the medical instruments and the anatomical structures involved in the medical procedure. In some embodiments, the user interface system 162 determines the amount of force to be applied to the anatomical structures where the amount of force is to be applied to the anatomical structure in at least one direction. For example, where an instrument is interacting with an anatomical structure in a first direction, the user interface system 162 can determine the amount of force to be applied to the anatomical structure based at least in part on continued interaction between the instrument and the anatomical structure (e.g., continued pushing of the anatomical structure in a direction, continued clamping or grasping of at least a portion of the anatomical structure, etc.).
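As one non-limiting sketch of how a cumulative force could be determined from force vectors received at a plurality of time steps, the following Python example integrates force magnitudes over time; the function name and the newton-second units are assumptions for illustration only.

    import numpy as np

    def cumulative_force(force_vectors, timestamps):
        """Accumulate force magnitudes over prior time steps.

        force_vectors: (T, 3) array of force vectors at each time step.
        timestamps:    (T,) array of sample times in seconds.
        Returns a time-integrated magnitude (newton-seconds) that can be
        compared against a cumulative force threshold.
        """
        magnitudes = np.linalg.norm(force_vectors, axis=1)
        dt = np.diff(timestamps)
        # Trapezoidal integration over possibly non-uniform time steps.
        return float(np.sum(0.5 * (magnitudes[1:] + magnitudes[:-1]) * dt))

    samples = np.array([[0.0, 0.0, 1.0], [0.0, 0.5, 1.5], [0.2, 0.5, 2.0]])
    times = np.array([0.0, 0.1, 0.2])
    print(cumulative_force(samples, times))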
[0081] In some embodiments, the user interface system 162 receives the data stream of the medical procedure, the data stream comprising data associated with a skill level of the one or more surgeons involved in the medical procedure. For example, the user interface system 162 can receive the data associated with the skill level of the one or more surgeons involved in the medical procedure based at least in part on the one or more interactions between the medical instruments of the robotic system and anatomical structures involved in the medical procedure. The skill level can represent, for example, an amount of previous interactions involving similar medical instruments and anatomical structures during previous medical procedures conducted by the surgeons, scores representing patient outcomes specific to the interactions involving similar medical instruments and anatomical structures during previous medical procedures conducted by the surgeons, force vectors associated with previously-performed interactions involving the surgeon, or other historical information that can be used to determine a force limit.
[0082] In some embodiments, the user interface system 162 receives the data stream of the medical procedure, where the data stream includes robotic system data 170 that is associated with kinematic information or system event information corresponding to the operation of the robotic system. In embodiments, the user interface system 162 receives the data stream of the medical procedure, where the data stream includes data associated with one or more aspects of the medical procedure (e.g., a type of medical procedure, a complexity level associated with the medical procedure, a segment of the medical procedure associated with a phase, task, or step, or the like). In some embodiments, the user interface system 162 receives patient data associated with information about the patient such as their age, demographics, whether the patient has a compromised immune system (or is sick at the time of the medical procedure), or any other such information as can be relevant to the determination of one or more force limits as described herein.

[0083] In some embodiments, the user interface system 162 identifies a type of an anatomical structure involved in the medical procedure. For example, the user interface system 162 can identify a type of an anatomical structure involved in the medical procedure based at least in part on the data stream (e.g., one or more aspects of the data represented by the data stream). The user interface system 162 can identify a type of an anatomical structure involved based on the type of medical procedure. For example, where a medical procedure involves addressing a hernia in an abdomen of a patient, the user interface system 162 can determine the type of the anatomical structure involved based at least in part on the one or more anatomical structures for which access can be gained by the robotic system during the medical procedure.
[0084] In some embodiments, the user interface system 162 identifies the type of the anatomical structure involved based at least in part on one or more models trained with machine learning. For example, the user interface system 162 can receive the video data 166 from the sensing system 164 during the medical procedure. In this example, the user interface system 162 can provide the video data 166 to the one or more models to cause the one or more models to generate an output. The output can represent one or more classifications, the one or more classifications corresponding to identifiers of the one or more anatomical features represented by the video data 166. In some embodiments, the one or more classifications can be made on a pixel basis. The one or more classifications can be made based at least in part on one or more groups of pixels. For example, the one or more classifications can be associated with one or more segmentation masks that correspond to groups of pixels representing the one or more anatomical structures.
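A minimal sketch of how per-pixel classifications and segmentation masks could be obtained from a trained model is shown below in Python; the model itself is represented by a stand-in callable, and the function and label names are hypothetical assumptions for illustration.

    import numpy as np

    def anatomical_masks(frame, model, class_names):
        """Run a trained segmentation model on one video frame and return
        per-structure boolean masks.

        model is assumed to map an (H, W, 3) image to (H, W, num_classes)
        per-pixel scores; its internals are outside the scope of this sketch.
        """
        scores = model(frame)                    # (H, W, C) per-pixel class scores
        class_map = np.argmax(scores, axis=-1)   # most likely class per pixel
        return {name: class_map == idx for idx, name in enumerate(class_names)}

    labels = ["background", "liver", "kidney"]
    fake_model = lambda img: np.random.rand(img.shape[0], img.shape[1], len(labels))
    masks = anatomical_masks(np.zeros((4, 4, 3)), fake_model, labels)
    print({name: int(mask.sum()) for name, mask in masks.items()})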
[0085] In some embodiments, the user interface system 162 identifies the type of the anatomical structure on which the medical procedure is performed and a type of an interaction with the anatomical structure using the one or more models trained with machine learning. For example, the user interface system 162 can receive the video data 166 from the sensing system 164 during the medical procedure and the user interface system 162 can provide the video data 166 to the one or more models to cause the one or more models to generate an output. In this example, the output can represent one or more classifications corresponding to identifiers of the one or more anatomical features or the one or more medical instruments represented by the video data 166. The one or more classifications can correspond to identifiers of the one or more interactions represented by the video data 166 between the one or more anatomical features or the one or more medical instruments. In one illustrative example, where a medical instrument is being used to move an anatomical structure during a medical procedure, one or more images associated with the video data 166 can be provided to the one or more models to cause the one or more models to generate an output, the output representing the movement of the anatomical structure by one or more medical instruments involved in the medical procedure. In some embodiments, the output can be further represented as an indication of the type of the interaction, where the type includes one or more of a grab interaction (e.g., grabbing at least a portion of an anatomical structure using jaws of an end effector supported by a medical instrument), a retract interaction (e.g., holding back or separating tissue associated with the one or more anatomical structures), a cut interaction (e.g., cutting at least a portion of an anatomical structure), or a cauterize interaction (e.g., burning tissue associated with the one or more anatomical structures using, for example, electrocautery systems, chemical cauterization systems, or the like).
[0086] In some embodiments, the user interface system 162 can provide robotic system data 170 or medical instrument data 172 to the one or more models to cause the one or more models to identify the type of the anatomical structure on which the medical procedure is performed, and a type of an interaction with the anatomical structure. For example, during training (discussed below) the one or more models can be trained on training data that includes the video data 166 from prior medical procedures, the robotic system data 170 from prior medical procedures, or the medical instrument data 172 from prior medical procedures. In these examples, the user interface system 162 can provide the data from the data stream received during the surgical procedure to the one or more models to cause the one or more models to generate the outputs discussed above.
[0087] In some embodiments, the user interface system 162 can provide previously-generated video data 166 from prior medical procedures, the robotic system data 170 from prior medical procedures, or the medical instrument data 172 from prior medical procedures to the one or more models to cause the one or more models to generate the outputs described herein. For example, the user interface system 162 can provide previously-generated video data 166 from prior medical procedures, the robotic system data 170 from prior medical procedures, or the medical instrument data 172 from prior medical procedures, where the previously-generated data corresponds to one or more earlier points in time and one or more tags. The one or more tags can be determined based on inputs received by individuals annotating the previously-generated data. The annotations can correspond to, for example, the types of anatomical structures involved at a given point in time, the locations of the anatomical structures involved at the given point in time, or the type of interaction involved at the given point in time. The user interface system 162 can then compare the output of the one or more models to the input of the one or more models (e.g., the classifications generated by the one or more models to the tags corresponding to the inputs to the one or more models) and determine a difference between the output and the input. The user interface system 162 can then update the one or more models by changing one or more of the weights associated with the one or more models and repeat the training process until the one or more models converge.
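The training loop described above (compare model output to annotation tags, determine a difference, change the weights, and repeat until convergence) could, for a toy classifier, look like the following Python sketch; the logistic-regression form and all names are assumptions for illustration and do not limit the models that can be used.

    import numpy as np

    def train_until_converged(features, tags, lr=0.1, tol=1e-4, max_epochs=1000):
        """Toy training loop over annotated examples.

        features: (N, D) numeric inputs derived from the data stream.
        tags:     (N,) binary labels supplied by human annotators.
        """
        rng = np.random.default_rng(0)
        weights = rng.normal(size=features.shape[1])
        prev_loss = np.inf
        for _ in range(max_epochs):
            probs = 1.0 / (1.0 + np.exp(-(features @ weights)))    # model output
            loss = -np.mean(tags * np.log(probs + 1e-9)
                            + (1 - tags) * np.log(1 - probs + 1e-9))
            grad = features.T @ (probs - tags) / len(tags)          # output vs. tags
            weights -= lr * grad                                    # change the weights
            if abs(prev_loss - loss) < tol:                         # convergence check
                break
            prev_loss = loss
        return weights

    X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]])
    y = np.array([1.0, 0.0, 1.0, 0.0])
    print(train_until_converged(X, y))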
[0088] In some embodiments, the user interface system 162 determines the magnitude of the force interaction between the anatomical structure and the medical instrument involved in the medical procedure based on the data stream. For example, the user interface system 162 can determine the magnitude of the force interaction based at least in part on the robotic system data 170. In such an example, the user interface system 162 can determine the magnitude of the force based at least in part on the sensor signals that are representative of the force exerted by the linkages when repositioning the medical instrument or when remaining in contact with the anatomical structures (e.g., when grabbing or moving the anatomical structures).
[0089] Different sensors can generate sensor data that is indicative of a position of the linkages relative to one another and the robotic system and that is included in the data stream. The sensor data can later be used to derive the position of medical instruments supported by the linkages. For example, when the position or pose of the robotic system (e.g., the one or more components of the robotic system) is registered relative to a patient, the sensor data indicative of a position of the linkages relative to one another and the robotic system can be used to determine the relative position of medical instruments and the anatomical structures of the patient.
[0090] In some embodiments, the user interface system 162 determines an amount of force to be applied to the anatomical structure based at least in part on the type of the anatomical structure and the type of the interaction with the anatomical structure. For example, the user interface system 162 can determine the amount of force to be applied to the anatomical structure based at least in part on comparing a magnitude of a force interaction between the anatomical structure and a medical instrument involved in the medical procedure to one or more of: a historical magnitude of a prior force interaction (e.g., a prior interaction between a similar medical instrument and a similar anatomical structure), a force magnitude associated with a force limit (e.g., a predetermined amount of newtons or the like which should not be exceeded to avoid damage to the anatomical structure at a point in time), an optimal magnitude associated with an optimal force (e.g., a predetermined amount of newtons or the like associated with a particular interaction or goal of an interaction which should not be exceeded to avoid irritation or minor damage to the anatomical structure during an interaction between a medical instrument and an anatomical structure), a cumulative force associated with magnitudes of one or more previous force interactions (e.g., one or more force interactions associated with one or more time steps occurring at one or more speeds (e.g., tempos) during the instant medical procedure or previous medical procedures), a first interaction-type magnitude associated with a primary interaction type for a medical instrument involved in the interaction with the anatomical structure (e.g., an interaction based on an expected use for the medical instrument such as cutting with a sharp edge of a blade), or a second interaction-type magnitude associated with a secondary interaction type for the medical instrument involved in the interaction with the anatomical structure (e.g., an interaction based on a possible use for the medical instrument such as repositioning an anatomical structure with a back or dull portion of a blade).
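One way the comparison against several reference magnitudes could be combined into a single governing limit is sketched below in Python; the reference names ("force_limit", "optimal", and so on) are hypothetical labels for the quantities listed above.

    def allowed_force(current_magnitude, references):
        """Return the most restrictive applicable limit and whether the
        measured magnitude satisfies it.

        references may include entries such as 'historical', 'force_limit',
        'optimal', 'cumulative_remaining', 'primary_use', or 'secondary_use';
        any subset may be present.
        """
        governing = min(value for value in references.values() if value is not None)
        return {
            "governing_limit_newtons": governing,
            "within_limit": current_magnitude <= governing,
            "headroom_newtons": governing - current_magnitude,
        }

    print(allowed_force(
        current_magnitude=2.4,
        references={"force_limit": 6.0, "optimal": 3.0, "secondary_use": 4.5},
    ))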
[0091] In some embodiments, the user interface system 162 determines an amount of force to be applied to the anatomical structure based at least in part on a lookup table. For example, the user interface system 162 can determine the amount of force to be applied to the anatomical structure based at least in part on a lookup table, where the lookup table is associated with (e.g., represents) a plurality of force limits corresponding to a plurality of types of anatomical structures. In one illustrative example, an anatomical structure (e.g., a liver) can be associated with a higher force limit as compared to a different anatomical structure (e.g., a kidney). In this particular example, the liver can be associated with the higher force limit because repositioning the liver has a lower chance of resulting in the inadvertent release of catecholamine hormone (e.g., by inadvertent contact between the medical instrument or the liver with the adrenal glands) as opposed to repositioning the kidney, and as a result a lesser relative risk of complications. In some embodiments, the plurality of force limits can be predetermined based at least in part on input from one or more users. For example, the plurality of force limits can be predetermined based at least in part on users (e.g., surgeons, institutions, and/or the like) providing input to set the force limits for the types of anatomical structures. In one illustrative example, a surgeon that is an expert in performing operations on kidneys can set a force limit associated with force applied to portions of the kidneys so that the surgeon or other surgeons (experts and non-experts) can implement the force limit set by the expert surgeon. In some embodiments, one or more users can select one or more force limits. For example, the one or more users can select the one or more force limits based at least in part on input from users setting force limits for one or more anatomical structures.
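A lookup table of force limits keyed by anatomical structure type (and, optionally, interaction type) could be as simple as the following Python sketch; the structure names and newton values are illustrative placeholders rather than clinically validated limits. Such a table could be populated from inputs provided by expert users, as described above.

    # Hypothetical force-limit lookup table (values in newtons, illustrative only).
    FORCE_LIMITS = {
        "liver":            {"retract": 8.0, "grab": 5.0},
        "kidney":           {"retract": 5.0, "grab": 3.0},
        "upper_intestines": {"retract": 2.5, "grab": 4.0},
    }

    def lookup_force_limit(structure_type, interaction_type, default=1.0):
        """Return the configured limit for a (structure, interaction) pair,
        falling back to a conservative default when no entry exists."""
        return FORCE_LIMITS.get(structure_type, {}).get(interaction_type, default)

    print(lookup_force_limit("kidney", "retract"))   # 5.0
    print(lookup_force_limit("spleen", "retract"))   # falls back to 1.0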
[0092] In some embodiments, the user interface system 162 determines an optimal magnitude associated with an optimal force. For example, the user interface system 162 can determine the optimal magnitude based on one or more objective performance indicators (OPIs) that can be determined based on analyzing the data stream or one or more aspects of the patient (e.g., vital signs or the like). In some embodiments, the user interface system 162 determines the optimal magnitude associated with the optimal force prior to (e.g., before) the beginning of the medical procedure. For example, the user interface system 162 can determine the optimal magnitude associated with the optimal force prior to the beginning of the medical procedure based at least on the patient data, the data associated with a skill level of the one or more surgeons involved in the medical procedure, and a probability of a negative outcome associated with an interaction involved in the medical procedure. In some embodiments, the probability of a negative outcome can be determined using one or more machine learning models trained to predict the probability of negative outcomes based on a force signature (e.g., amounts of force associated with a particular surgeon or a particular medical procedure), or the like.
[0093] In some embodiments, the user interface system 162 determines an amount of force to be applied to the anatomical structure based at least in part on the type of the anatomical structure and the type of the interaction with the anatomical structure. For example, where the type of the anatomical structure is associated with a more delicate anatomical structure that is susceptible to mechanical damage (e.g., the upper intestines) when compared to other anatomical structures that are less susceptible to mechanical damage (e.g., the liver), the user interface system 162 can determine a lower amount of force to be applied to the anatomical structure (e.g., the more delicate anatomical structure) as opposed to a higher amount of force which can be determined for the other anatomical structures (e.g., the less delicate anatomical structures). In some embodiments, the user interface system 162 can determine the amount of force to be applied to the anatomical structure based at least in part on the type of the anatomical structure and the type of the interaction with the anatomical structure, where the amount of force represents a range of forces.
[0094] In some embodiments, the user interface system 162 determines the amount of force to be applied to at least a portion of the anatomical structure based at least in part on the type of the anatomical structure and the type of the interaction with the anatomical structure. In an example, where at least a portion of an anatomical structure is delicate relative to another anatomical structure, the user interface system 162 can determine a lower amount of force to be applied during a first interaction (e.g., moving the anatomical structure). In this example, the user interface system 162 can determine a second, higher amount of force to be applied during a second interaction (e.g., grabbing the anatomical structure) involving the same anatomical structure. In one illustrative example, the user interface system 162 can determine a lower amount of force to be applied to the upper intestines when simply moving the upper intestines to gain access to other anatomical structures, and the user interface system 162 can determine a higher amount of force to be applied to the upper intestines when grabbing or cutting the upper intestines.
[0095] In some embodiments, the user interface system 162 provides an indication of the amount of force to be applied to control performance of the medical procedure with the robotic system. For example, the user interface system 162 can provide the indication of the amount of force to be applied based at least in part on the user interface system 162 generating data associated with a user interface. In such an example, the user interface system 162 can generate the user interface based at least in part on the indication of the amount of force to control performance of the medical procedure. In such an example, the user interface system 162 can further generate data associated with the user interface that is configured to cause a display device (e.g., a display device of the user input system 168) to provide an output representing the user interface. In some embodiments, the user interface can also include images representing the medical instruments and anatomical structures that are in a field of view of an imaging device of the sensing system 164.
[0096] In some embodiments, the user interface system 162 generates a graphical user interface based at least in part on images representing medical instruments or anatomical structures that are in a field of view of an imaging device. For example, the user interface system 162 can generate a graphical user interface based at least in part on images representing medical instruments or anatomical structures that are in a field of view of an imaging device, where the imaging device is included in the sensing system 164. In this example, the images can be captured at one or more time steps as described herein. In some embodiments, the user interface system 162 generates the graphical user interface based at least in part on the images and the indication of the amount of force to control the performance of the medical procedure.
[0097] In some embodiments, the indication includes a timer. For example, the user interface system 162 can generate a graphical user interface comprising the timer, where the timer represents an amount of time during which the amount of force to be applied is applied to the anatomical structure. The user interface system 162 can generate a graphical user interface comprising the timer, where the timer represents an amount of time remaining during which the amount of force to be applied will satisfy a cumulative force threshold. In this example, the user interface system 162 can update the timer based on the user interface system 162 updating the amount of time remaining in response to changes to the amount of force to be applied at one or more time steps (e.g., one or more time steps during which the timer is counting down).
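A sketch of a countdown timer driven by a cumulative force threshold is shown below in Python; the newton-second budget and the class name are assumptions for illustration only.

    class CumulativeForceTimer:
        """Track how long the current force can continue before a cumulative
        force threshold (in newton-seconds) would be exceeded."""

        def __init__(self, cumulative_threshold_ns):
            self.threshold = cumulative_threshold_ns
            self.accumulated = 0.0

        def update(self, force_newtons, dt_seconds):
            """Advance one time step and return seconds remaining at this force."""
            self.accumulated += force_newtons * dt_seconds
            remaining_budget = max(self.threshold - self.accumulated, 0.0)
            if force_newtons <= 0.0:
                return float("inf")  # no force applied, no countdown
            return remaining_budget / force_newtons

    timer = CumulativeForceTimer(cumulative_threshold_ns=10.0)
    print(timer.update(force_newtons=2.0, dt_seconds=0.5))  # 4.5 seconds remaining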
[0098] In some embodiments, the indication includes a color-coded or binary indicator. For example, the indicator can be associated with an area of a graphical user interface that is colored one color when an amount of force to be applied to the anatomical structure satisfies (e.g., is within) a force limit, and colored with a different color when the amount of force to be applied to the anatomical structure does not satisfy the force limit. In some embodiments, the indicator can be associated with an area of a graphical user interface that is colored yet another color (e.g., a third color) when an amount of force to be applied to the anatomical structure is approaching the force limit.
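A color-coded indicator of this kind could be driven by a mapping such as the following Python sketch, where the warning fraction is a hypothetical tuning parameter.

    def indicator_color(force_newtons, force_limit, warn_fraction=0.8):
        """Map the applied force relative to its limit onto a three-state color code."""
        if force_newtons > force_limit:
            return "red"     # force limit not satisfied
        if force_newtons >= warn_fraction * force_limit:
            return "yellow"  # approaching the force limit
        return "green"       # within the force limit

    for force in (1.0, 4.2, 6.5):
        print(force, indicator_color(force, force_limit=5.0))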
[0099] In some embodiments, the indication can include a numerical representation of a scale or speed with which one or more medical instruments are moving. For example, as a surgeon causes a medical instrument to move toward an anatomical structure, or when the medical instrument moves while contacting the anatomical structure, the indication can include a speed (e.g., in cm/s, mm/s, or the like) of at least a portion of the medical instrument. In some embodiments, the user interface system 162 determines the speed based at least on the relative motion of at least a portion of the medical instrument in comparison with the anatomical structures or the patient.
[0100] In some embodiments, the indication can be associated with haptic feedback or audible feedback that is based at least in part on the force being applied to an anatomical structure. For example, as a surgeon engages with manipulators at a surgeon console during a surgical procedure, the user interface system 162 can determine an amount of force to be applied by the instrument as described herein. The user interface system 162 can then provide an indication of the amount of force to control performance of the medical procedure based at least in part on the amount of force and one or more force limits. In an illustrative example, as an amount of force approaches a force limit during an interaction between an instrument and an anatomical structure, the indication can be associated with increasing vibration at the manipulators of the surgeon console. In another illustrative example, as the amount of force approaches the force limit during the interaction between the instrument and the anatomical structure, the indication can be associated with audio signals that are generated (e.g., by a speaker associated with the surgeon console). In this illustrative example, the audio signals can form one or more patterns that are updated based at least in part on the indication of the amount of force to control performance of the medical procedure.
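By way of illustration only, the haptic and audible feedback could be scaled as in the following Python sketch, where the vibration amplitude and beep interval are hypothetical output channels and the scaling is an assumption rather than a required implementation.

    def feedback_levels(force_newtons, force_limit):
        """Scale haptic vibration amplitude and audio beep rate as the applied
        force approaches the force limit."""
        ratio = min(max(force_newtons / force_limit, 0.0), 1.0)
        vibration_amplitude = ratio                 # 0.0 (off) through 1.0 (maximum)
        beep_interval_seconds = 1.0 - 0.9 * ratio   # beeps speed up near the limit
        return vibration_amplitude, beep_interval_seconds

    print(feedback_levels(4.5, force_limit=5.0))  # strong vibration, rapid beeps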
[0101] In some embodiments, the indication can be provided by a user device (e.g., a tablet, a cell phone, a laptop computer, a desktop computer, etc.). For example, one or more users can stream a surgical procedure in real-time or after the surgical procedure (e.g., during playback of the surgical procedure). In this example, the indication can be associated with haptic feedback or audible feedback as described herein. In one illustrative example, as the amount of force approaches a force limit during an interaction between an instrument and an anatomical structure, the indication can be associated with increasing vibration generated by the user device (e.g., by an eccentric rotating mass vibration motor, by a piezoelectric vibration motor, etc.). In another illustrative example, as the amount of force approaches the force limit during the interaction between the instrument and the anatomical structure, the indication can be associated with audio signals that are generated by a speaker of the user device. In this illustrative example, the audio signals can form one or more patterns that are updated based at least in part on the indication of the amount of force to control performance of the medical procedure.
[0102] In some embodiments, the user interface system 162 provides one or more images associated with the data stream that correspond to the anatomical structure to be displayed via a graphical user interface. For example, the user interface system 162 can generate data associated with the user interface that is configured to cause a display device to provide an output representing the user interface, where the user interface at least in part represents one or more images associated with the data stream that correspond to the anatomical structure. As the medical procedure continues and medical instruments interact with the anatomical structures, the user interface system 162 can update the graphical user interface based at least in part on updates to the amount of force to be applied. For example, the user interface system 162 can update the graphical user interface by updating one or more pixels of the one or more images based at least in part on updates to the amount of force to be applied. In this example, the one or more pixels can correspond to at least a portion of the anatomical structure represented by the one or more images of medical instruments and anatomical structures that are in a field of view of the imaging device. In examples, where the anatomical structure is not in the field of view of the imaging device or the anatomical structure was moved out of the field of view of the imaging device, the one or more pixels can correspond to at least a portion of the user interface that further corresponds to portions of the field of view that are in proximity to the anatomical structures or that previously illustrated portions of the anatomical structure.
[0103] In some embodiments, the user interface system 162 determines at least one area associated with at least one overlay. For example, the user interface system 162 can determine at least one area associated with at least one overlay, where the at least one area corresponds to anatomical structures represented by the one or more images. In this example, the overlay can be configured to cause the graphical user interface to update a representation of the at least one area when displayed via the display. In some embodiments, the user interface system 162 then updates one or more pixels associated with the graphical user interface (e.g., one or more pixels of the images associated with the graphical user interface) based at least in part on the at least one overlay. In one illustrative example, the user interface system 162 then updates one or more pixels associated with the graphical user interface by tinting the one or more pixels with one or more shades or one or more colors. In another illustrative example, the user interface system 162 then updates a plurality of pixels associated with the graphical user interface by tinting the plurality of pixels in accordance with a segmentation mask (discussed above). In this way, the user interface system 162 can generate graphical user interfaces that, for example, color code specific anatomical structures, or the like. In some embodiments, the overlay can be associated with one or more colors or shades that represent amounts of force that are being, or can be, applied to anatomical structures involved in the medical procedure.
[0104] In some embodiments, the user interface system 162 constructs a heatmap. For example, the user interface system 162 can construct a heatmap based at least in part on the at least one area and the amount of force to be applied. In some embodiments, the heatmap can include one or more regions of tinted shades or colors. The heatmap can include one or more regions of gradients of shades or colors. In one illustrative example, the heatmap can be a gradient of a color (e.g., red) that corresponds to the instant or cumulative force applied to the anatomical structure during the medical procedure. In some embodiments, the user interface system 162 then updates the one or more pixels (e.g., of the images associated with the video data 166) based at least in part on the at least one overlay or the heatmap.
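A minimal Python sketch of tinting the pixels of a segmentation mask with a force-dependent red gradient, one possible realization of the overlay and heatmap described above, is shown below; the blending weights are illustrative assumptions.

    import numpy as np

    def apply_force_heatmap(image, mask, force_newtons, force_limit):
        """Tint the pixels covered by a segmentation mask with a red gradient
        whose intensity tracks the force applied to that structure."""
        out = image.astype(np.float32).copy()
        intensity = min(force_newtons / force_limit, 1.0)   # 0..1 gradient
        red = np.array([255.0, 0.0, 0.0])
        # Blend the overlay color into the masked pixels only.
        out[mask] = (1.0 - 0.5 * intensity) * out[mask] + (0.5 * intensity) * red
        return out.astype(np.uint8)

    frame = np.full((4, 4, 3), 128, dtype=np.uint8)
    structure_mask = np.zeros((4, 4), dtype=bool)
    structure_mask[1:3, 1:3] = True
    print(apply_force_heatmap(frame, structure_mask, force_newtons=3.0, force_limit=6.0)[1, 1])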
[0105] In some embodiments, the user interface system 162 determines at least one first area where the amount of force to be applied satisfies a first threshold; and the user interface system 162 determines at least one second area where the amount of force to be applied satisfies a second threshold. For example, where multiple anatomical features are in a field of view of the imaging device of the sensing system 164, at least one first area can correspond to a first anatomical feature and at least one second area can correspond to a second anatomical feature. The user interface system 162 can then update one or more pixels of the images generated by the imaging device based at least on the at least one first area and the at least one second area. In one illustrative example, pixels associated with the first area can be updated by tinting the pixels using a first color or first shade; and pixels associated with the second area can be updated by tinting the pixels using a second color or second shade.

[0106] In some embodiments, the user interface system 162 detects a change to an anatomical structure during a medical procedure. For example, the user interface system 162 can detect movement of an anatomical structure (e.g., adjustment of the orientation or position of the anatomical structure) based on a detected interaction between a medical instrument and the anatomical structure. In this example, the user interface system 162 can then update the amount of force to be applied to the anatomical structure based at least in part on the change to the anatomical structure. In examples, the user interface system 162 can detect deformations to the surface or structure of the anatomical structure (e.g., dents, tears, cuts, and/or the like) and whether the detected deformations were intentional or unintentional based on the detected interaction between the medical instrument and the anatomical structure. In these examples, the user interface system 162 can then update the amount of force to be applied to the anatomical structure based at least in part on the change to the anatomical structure. In some embodiments, the user interface system 162 can provide a second indication of the amount of force to control performance of the medical procedure with the robotic medical system based at least on the update to the amount of force to be applied to the anatomical structure. For example, where a medical instrument is determined to have initiated contact with an anatomical structure, the user interface system 162 can initiate a timer as described herein and update the timer at each time step during which the medical instrument is determined to remain in contact with the anatomical structure.
[0107] In some embodiments, the user interface system 162 can index and store data associated with the medical procedure and the interactions between the robotic system (e.g., medical instruments of the robotic system) and anatomical structures of the patient. For example, the user interface system 162 can index and store data associated with instant or cumulative amounts of force applied to the anatomical structure involved in the medical procedure. The user interface system 162 can then determine an expected amount of time for recovery for the patient based at least on the instant or cumulative amounts of force applied to the anatomical structure. The user interface system 162 can then determine an expected amount of time for recovery for the patient based at least on one or more aspects of the interactions between medical instruments and anatomical structures during the medical procedure.
[0108] In some embodiments, the user interface system 162 can receive data associated with patient feedback. For example, patients can provide feedback indicating how long their recovery process was, whether they experienced discomfort, the degree to which they experienced discomfort, or the like. The user interface system 162 can then correlate the patient feedback with the data associated with the medical procedure (e.g., amounts of force applied to the anatomical structures) and the interactions between the robotic system and anatomical structures of the patient and update one or more of the force limits as described herein for the patient or for other patients.
[0109] The robotic system data 170 includes or is indicative of robotic system events corresponding to a state or an activity of an attribute or an aspect of a robotic system. The robotic system data 170 of a robotic system can be generated by the robotic system (e.g., in the form of a robotic system log) in its normal course of operations. The robotic system data is determined based at least on input received by the user input system 168 of the robotic system from a user, or on sensor data of a sensor on the robotic system. The robotic system can include one or more sensors (e.g., cameras, infrared sensors, ultrasonic sensors, etc.), actuators, interfaces, or consoles that can output information used to detect such a system event.
[0110] FIG. 2 is a flowchart diagram illustrating an example method 200 for updating a user interface based on force applied by a medical instrument during teleoperation, according to some embodiments. The method 200 can be performed by one or more systems, devices, or components depicted in FIG. 1A, FIG. 1B, FIG. 3, FIG. 5, and FIG. 6 including, for example, the user interface system 162 of FIG. 1B.
[0111] At operation 210, a data stream is received of a medical procedure performed with a robotic medical system. For example, a user interface system (e.g., user interface system 162) can receive the data stream of a medical procedure performed with a robotic medical system.
[0112] At operation 220, a type of an anatomical structure on which the medical procedure is performed and a type of an interaction with the anatomical structure are identified using the data stream and with one or more models trained with machine learning. For example, a user interface system (e.g., a user interface system 162) can identify, using the data stream and with one or more models trained with machine learning, a type of an anatomical structure on which the medical procedure is performed, and a type of an interaction with the anatomical structure.
[0113] At operation 230, an amount of force is determined to be applied to the anatomical structure based at least in part on the type of the anatomical structure and the type of the interaction with the anatomical structure. For example, a user interface system (e.g., a user interface system 162) can determine an amount of force to be applied to the anatomical structure based at least in part on the type of the anatomical structure and the type of the interaction with the anatomical structure.
[0114] At operation 240, an indication is provided of the amount of force to control performance of the medical procedure with the robotic medical system. For example, a user interface system (e.g., a user interface system 162) can provide an indication of the amount of force to control performance of the medical procedure with the robotic medical system.

[0115] FIG. 5 is an image of an example graphical user interface 400, according to some embodiments. As illustrated, the graphical user interface 400 shows four anatomical structures 402, 404, 406, 408 as well as other anatomical structures. The four anatomical structures 402, 404, 406, 408 are each shown as having overlays associated with different colors (e.g., a first color, a second color, a third color, and a fourth color, respectively). The graphical user interface 400 also includes a label 410 that corresponds to a point in time during the performance of a medical procedure.
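Referring back to the method 200 of FIG. 2, the following Python sketch strings operations 210 through 240 together for a single data stream sample; the classifier callables, the lookup table, and the indication callback are caller-supplied stand-ins and are hypothetical rather than required implementations.

    def run_force_aware_ui_step(sample, classify_structure, classify_interaction,
                                force_limits, show_indication):
        """One pass through operations 210-240 using caller-supplied helpers."""
        # Operation 210: receive a data stream sample of the medical procedure.
        frame = sample["video_frame"]
        kinematics = sample["robotic_system_data"]
        # Operation 220: identify structure and interaction types with trained models.
        structure = classify_structure(frame)
        interaction = classify_interaction(frame, kinematics)
        # Operation 230: determine the amount of force to be applied.
        allowed = force_limits.get((structure, interaction), 1.0)
        # Operation 240: provide an indication to control performance of the procedure.
        show_indication(structure, interaction, allowed)
        return allowed

    # Stand-in classifiers and display callback, for illustration only.
    run_force_aware_ui_step(
        {"video_frame": None, "robotic_system_data": None},
        classify_structure=lambda frame: "kidney",
        classify_interaction=lambda frame, kinematics: "retract",
        force_limits={("kidney", "retract"): 5.0},
        show_indication=lambda s, i, n: print(f"{s}/{i}: limit {n} N"),
    )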
[0116] FIG. 4 is a graph of example force limits, according to some embodiments. As illustrated, interactions between medical instruments and anatomical structures can be associated with interaction types, labeled along the X-axis as “dissect”, “drive needle”, “manipulate” (e.g., move), “retract”, and “tie suture”. Each interaction type can be further associated with one or more sub-limits that correspond to particular aspects of each interaction type. As illustrated, the force limits can be between 0 newtons and 14 newtons.
[0003] FIG. 5 is a diagram of a medical environment, according to some embodiments. The medical environment 500 can refer to or include a surgical environment or surgical system. The medical environment 500 can include a robotic medical system 524, a user control system 510, and an auxiliary system 515 communicatively coupled one to another. A visualization tool 520 can be connected to the auxiliary system 515, which in turn can be connected to the robotic medical system 524. Thus, when the visualization tool 520 is connected to the auxiliary system 515 and this auxiliary system is connected to the robotic medical system 524, the visualization tool can be considered connected to the robotic medical system. In some embodiments, the visualization tool 520 can be directly connected to the robotic medical system 524. A user interface system 162 can be connected to the user control system 510, which in turn can be connected to the robotic medical system 524. The user interface system 162 can also be connected directly to the robotic medical system 524. Thus, when the user interface system 162 is connected to the user control system 510 and the user control system 510 is connected to the robotic medical system 524, the user interface system 162 can be considered connected to the robotic medical system.
[0004] The medical environment 500 can be used to perform a computer-assisted medical procedure with a patient 525. In some embodiments, a surgical team can include a surgeon 530A and additional medical personnel 530B-530D such as a medical assistant, a nurse, an anesthesiologist, and other suitable team members who can assist with the surgical procedure or medical session. The medical session can include the surgical procedure being performed on the patient 525, as well as any pre-operative processes (e.g., which can include setup of the medical environment 500, including preparation of the patient 525 for the procedure), post-operative processes (e.g., which can include clean up or post care of the patient), or other processes during the medical session. Although described in the context of a surgical procedure, the medical environment 500 can be implemented in a non-surgical procedure, or other types of medical procedures or diagnostics that can benefit from the accuracy and convenience of the surgical system.
[0005] The robotic medical system 524 can include a plurality of manipulator arms 535A-535D to which a plurality of medical instruments (e.g., the instruments described herein) can be coupled, installed, or otherwise supported. In some embodiments, the plurality of manipulator arms 535A-535D can include one or more linkages. Each medical instrument can be any suitable surgical tool (e.g., a tool having tissue-interaction functions), imaging device (e.g., an endoscope, an ultrasound tool, etc.), sensing instrument (e.g., a force-sensing surgical instrument), diagnostic instrument, or other suitable instrument that can be used for a computer-assisted surgical procedure on the patient 525 (e.g., by being at least partially inserted into the patient and manipulated to perform a computer-assisted surgical procedure on the patient). Although the robotic medical system 524 is shown as including four manipulator arms (e.g., the manipulator arms 535A-535D), in other embodiments, the robotic medical system can include greater than or fewer than four manipulator arms. Further, not every manipulator arm needs to have a medical instrument installed thereto at all times of the medical session. Moreover, in some embodiments, a medical instrument installed on a manipulator arm can be replaced with another medical instrument as suitable.
[0006] One or more of the manipulator arms 535A-535D or the medical instruments attached to manipulator arms can include one or more displacement transducers, orientational sensors, positional sensors, or other types of sensors and devices to measure parameters or generate kinematics information. One or more components of the medical environment 500 can be configured to use the measured parameters or the kinematics information to track (e.g., determine poses of) or control the medical instruments, as well as anything connected to the medical instruments or the manipulator arms 535A-535D.
[0007] The user control system 510 can be used by the surgeon 530A to control (e.g., move) one or more of the manipulator arms 535A-535D or the medical instruments connected to the manipulator arms. To facilitate control of the manipulator arms 535A-535D and track progression of the medical session, the user control system 510 can include a display that can provide the surgeon 530A with imagery (e.g., high-definition 3D imagery) of a surgical site associated with the patient 525 as captured by a medical instrument installed to one of the manipulator arms 535A-535D. The user control system 510 can include a stereo viewer having two or more displays where stereoscopic images of a surgical site associated with the patient 525 and generated by a stereoscopic imaging system can be viewed by the surgeon 530A. In some embodiments, the user control system 510 can also receive images from the auxiliary system 515 and the visualization tool 520.
[0008] The surgeon 530A can use the imagery displayed by the user control system 510 to perform one or more procedures with one or more medical instruments attached to the manipulator arms 535A-535D. To facilitate control of the manipulator arms 535A-535D or the medical instruments installed thereto, the user control system 510 can include a set of controls. These controls can be manipulated by the surgeon 530A to control movement of the manipulator arms 535A-535D or the medical instruments installed thereto. The controls can be configured to detect a wide variety of hand, wrist, and finger movements by the surgeon 530A to allow the surgeon to intuitively perform a procedure on the patient 525 using one or more medical instruments installed to the manipulator arms 535A-535D.
[0009] The auxiliary system 515 can include one or more computer systems (e.g., computing devices that are the same as, or similar to, the computing device 600 of FIG. 6) configured to perform processing operations within the medical environment 500. For example, the one or more computer systems can control or coordinate operations performed by various other components (e.g., the robotic medical system 524, the user control system 510) of the medical environment 500. A computer system included in the user control system 510 can transmit instructions to the robotic medical system 524 by way of the one or more computing devices of the auxiliary system 515. The auxiliary system 515 can receive and process image data representative of imagery captured by one or more imaging devices (e.g., medical instruments) attached to the robotic medical system 524, as well as other data stream sources received from the visualization tool 520. For example, one or more image capture devices can be located within the medical environment 500. These image capture devices can capture images from various viewpoints within the medical environment 500. These images (e.g., video streams) can be transmitted to the visualization tool 520, which can then pass through those images to the auxiliary system 515 as a single combined data stream. The auxiliary system 515 can then transmit the single video stream (including any data stream received from the medical instrument(s) of the robotic medical system 524) to present on a display of the user control system 510.
[0010] In some embodiments, the auxiliary system 515 can be configured to present visual content (e.g., the single combined data stream) to other team members (e.g., the medical personnel 530B-530D) who might not have access to the user control system 510. Thus, the auxiliary system 515 can include a display 540 configured to display one or more user interfaces, such as images of the surgical site, information associated with the patient 525 or the surgical procedure, or any other visual content (e.g., the single combined data stream). In some embodiments, the display 540 can be a touchscreen display or include other features to allow the medical personnel 530B-530D to interact with the auxiliary system 515.

[0011] The robotic medical system 524, the user control system 510, and the auxiliary system 515 can be communicatively coupled one to another in any suitable manner. For example, in some embodiments, the robotic medical system 524, the user control system 510, and the auxiliary system 515 can be communicatively coupled by way of control lines 545, which can represent any wired or wireless communication link as can serve a particular implementation. Thus, the robotic medical system 524, the user control system 510, and the auxiliary system 515 can each include one or more wired or wireless communication interfaces, such as one or more local area network interfaces, Wi-Fi network interfaces, cellular interfaces, etc.
[0012] It is to be understood that the medical environment 500 can include other or additional components or elements that can be needed or considered desirable to have for the medical session for which the surgical system is being used.
[0117] FIG. 6 is a block diagram depicting an architecture for a computing device 600 that can be employed to implement elements of the systems and methods described and illustrated herein, including aspects of the systems depicted in FIGS. 1A-1B, 3, or 5, and the method depicted in FIG. 2. For example, the user interface system 162, the sensing system 164, the user input system 168, and the devices described with respect to the medical environment 500 can include one or more components or functionality of the computing device 600. The computing device 600 can be any computing device used herein and can include or be used to implement a data processing system or its components. The computing device 600 includes at least one bus 605 or other communication component or interface for communicating information between various elements of the computer system. The computer system further includes at least one processor 610 or processing circuit coupled to the bus 605 for processing information. The computing device 600 also includes at least one main memory 615, such as a random-access memory (RAM) or other dynamic storage device, coupled to the bus 605 for storing information, and instructions to be executed by the processor 610. The main memory 615 can be used for storing information during execution of instructions by the processor 610. The computing device 600 can further include at least one read only memory (ROM) 620 or other static storage device coupled to the bus 605 for storing static information and instructions for the processor 610. A storage device 625, such as a solid-state device, magnetic disk or optical disk, can be coupled to the bus 605 to persistently store information and instructions.
[0118] The computing device 600 can be coupled via the bus 605 to a display 630, such as a liquid crystal display, or active-matrix display, for displaying information. An input device 635, such as a keyboard or voice interface can be coupled to the bus 605 for communicating information and commands to the processor 610. The input device 635 can include a touch screen display (e.g., the display 630). The input device 635 can include sensors to detect gestures. The input device 635 can also include a cursor control, such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor 610 and for controlling cursor movement on the display 630.
[0119] The processes, systems and methods described herein can be implemented by the computing device 600 in response to the processor 610 executing an arrangement of instructions contained in the main memory 615. Such instructions can be read into the main memory 615 from another computer-readable medium, such as the storage device 625. Execution of the arrangement of instructions contained in the main memory 615 causes the computing device 600 to perform the illustrative processes described herein. One or more processors in a multi-processing arrangement can also be employed to execute the instructions contained in the main memory 615. Hard-wired circuitry can be used in place of or in combination with software instructions together with the systems and methods described herein. Systems and methods described herein are not limited to any specific combination of hardware circuitry and software.
[0120] The processor 610 can execute one or more instructions associated with the system 100. The processor 610 can include an electronic processor, an integrated circuit, or the like including one or more of digital logic, analog logic, digital sensors, analog sensors, communication buses, volatile memory, nonvolatile memory, and the like. The processor 610 can include, but is not limited to, at least one microcontroller unit (MCU), microprocessor unit (MPU), central processing unit (CPU), graphics processing unit (GPU), physics processing unit (PPU), embedded controller (EC), or the like. The processor 610 can include, or be associated with, a main memory 615 operable to store one or more non-transitory computer-readable instructions for operating components of the system 100 and operating components operably coupled to the processor 610. The one or more instructions can include at least one of firmware, software, hardware, operating systems, or embedded operating systems, for example. The processor 610 or the system 100 generally can include at least one communication bus controller to effect communication between the system processor and the other elements of the system 100.
[0121] The main memory 615 can include one or more hardware memory devices to store binary data, digital data, or the like. The main memory 615 can include one or more electrical components, electronic components, programmable electronic components, reprogrammable electronic components, integrated circuits, semiconductor devices, flip flops, arithmetic units, or the like. The main memory 615 can include at least one of a non-volatile memory device, a solid-state memory device, a flash memory device, a NAND memory device, a volatile memory device, etc. The main memory 615 can include one or more addressable memory regions disposed on one or more physical memory arrays.
[0122] Although an example computing system has been described in FIG. 6, the subject matter including the operations described in this specification can be implemented in other types of digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them.
[0123] The herein described subject matter sometimes illustrates different components contained within, or connected with, different other components. It is to be understood that such depicted architectures are illustrative, and that in fact many other architectures can be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being “operably connected,” or “operably coupled,” to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being “operably couplable,” to each other to achieve the desired functionality. Specific examples of operably couplable include but are not limited to physically mateable or physically interacting components or wirelessly interactable or wirelessly interacting components or logically interacting or logically interactable components.

[0124] With respect to the use of plural or singular terms herein, those having skill in the art can translate from the plural to the singular or from the singular to the plural as is appropriate to the context or application. The various singular/plural permutations can be expressly set forth herein for sake of clarity.
[0125] It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as “open” terms (e.g., the term “including” should be interpreted as “including but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes but is not limited to,” etc.).
[0126] Although the figures and description can illustrate a specific order of method steps, the order of such steps can differ from what is depicted and described, unless specified differently above. Also, two or more steps can be performed concurrently or with partial concurrence, unless specified differently above. Such variation can depend, for example, on the software and hardware systems chosen and on designer choice. All such variations are within the scope of the disclosure. Likewise, software implementations of the described methods can be accomplished with standard programming techniques with rule-based logic and other logic to accomplish the various connection steps, processing steps, comparison steps, and decision steps.
[0127] It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation, no such intent is present. For example, as an aid to understanding, the following appended claims can contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to inventions containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (e.g., “a” or “an” should typically be interpreted to mean “at least one” or “one or more”); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of “two recitations,” without other modifiers, typically means at least two recitations, or two or more recitations).
[0128] Furthermore, in those instances where a convention analogous to “at least one of A, B, and C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, and C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B, and C together, etc.). In those instances where a convention analogous to “at least one of A, B, or C, etc.” is used, in general, such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, or C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B, and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase “A or B” will be understood to include the possibilities of “A” or “B” or “A and B.”
[0129] Further, unless otherwise noted, the use of the words “approximate,” “about,” “around,” “substantially,” etc., means plus or minus ten percent.
[0130] The foregoing description of illustrative implementations has been presented for purposes of illustration and of description. It is not intended to be exhaustive or limiting with respect to the precise form disclosed, and modifications and variations are possible in light of the above teachings or can be acquired from practice of the disclosed implementations. It is intended that the scope of the invention be defined by the claims appended hereto and their equivalents.

Claims

What is claimed is:
1. A system, comprising: one or more processors, coupled with memory, to: receive a data stream of a medical procedure performed with a robotic medical system; identify, using the data stream and with one or more models trained with machine learning, a type of an anatomical structure on which the medical procedure is performed, and a type of an interaction with the anatomical structure; determine an amount of force to be applied to the anatomical structure based at least in part on the type of the anatomical structure and the type of the interaction with the anatomical structure; and provide an indication of the amount of force to control performance of the medical procedure with the robotic medical system.
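For illustration only, the following Python sketch shows one way the processing recited in claim 1 could be organized: a per-frame pipeline that classifies the anatomy and the interaction with trained models and then looks up a force amount. The model interfaces, labels, and newton values are hypothetical assumptions, not part of the disclosed system.

```python
# Illustrative sketch only; the models, labels, and force values below are
# hypothetical placeholders, not the claimed implementation.
from dataclasses import dataclass


@dataclass
class ForceIndication:
    anatomy_type: str
    interaction_type: str
    max_force_newtons: float


# Hypothetical force limits keyed by (anatomy type, interaction type).
FORCE_LIMITS = {
    ("liver", "retract"): 2.0,
    ("liver", "grab"): 1.5,
    ("bowel", "grab"): 0.8,
}


def process_frame(frame, anatomy_model, interaction_model):
    """Classify one data-stream frame and return a force indication.

    anatomy_model and interaction_model stand in for models trained with
    machine learning that expose a predict() method returning a label.
    """
    anatomy_type = anatomy_model.predict(frame)          # e.g. "liver"
    interaction_type = interaction_model.predict(frame)  # e.g. "retract"
    max_force = FORCE_LIMITS.get((anatomy_type, interaction_type), 1.0)
    return ForceIndication(anatomy_type, interaction_type, max_force)
```

The returned indication could then drive the user-interface, haptic, or audible outputs described in the dependent claims.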
2. The system of claim 1, wherein the one or more processors are further to: generate a graphical user interface based at least in part on images representing instruments and anatomical structures that are in a field of view of an imaging device, the images captured at a plurality of time steps, the graphical user interface comprising a timer representing an amount of time during which the amount of force to be applied is applied to the anatomical structure or a remaining amount of time during which the amount of force to be applied satisfies a cumulative force threshold.
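As a non-authoritative sketch of the timer recited in claim 2, the function below estimates how much longer the currently applied force can continue before a cumulative force threshold is reached; the sampling interval, units, and threshold are assumed values.

```python
# Sketch only; units and thresholds are assumptions (newtons, seconds, N*s).
def remaining_time_under_threshold(force_history_n, dt_s, cumulative_limit_ns, current_force_n):
    """Return seconds remaining before the cumulative force limit is reached.

    force_history_n: force magnitude sampled at each prior time step (N)
    dt_s: sampling interval (s)
    cumulative_limit_ns: cumulative force budget (N*s)
    current_force_n: force currently being applied (N)
    """
    accumulated = sum(f * dt_s for f in force_history_n)       # N*s applied so far
    remaining_budget = max(cumulative_limit_ns - accumulated, 0.0)
    if current_force_n <= 0.0:
        return float("inf")  # nothing applied, so no countdown
    return remaining_budget / current_force_n
```

A graphical user interface timer could count down this value at each time step.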
3. The system of claim 1, wherein the one or more processors are configured to: receive the data stream of the medical procedure, the data stream comprising data associated with force vectors that represent directions and magnitudes of force interactions between instruments and anatomical structures involved in the medical procedure, the force vectors captured at a plurality of time steps; and determine the amount of force to be applied to the anatomical structure based at least in part on the force vectors.
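One possible representation of the force-vector data recited in claims 3 and 5 is sketched below: each time step carries a direction and magnitude, from which per-step magnitudes and a cumulative force along a single direction can be derived. The numeric values are hypothetical.

```python
import numpy as np

# Hypothetical force vectors (Fx, Fy, Fz) in newtons, one row per time step,
# describing an instrument's interaction with an anatomical structure.
force_vectors = np.array([
    [0.1, 0.0, 0.4],
    [0.2, 0.1, 0.5],
    [0.2, 0.1, 0.6],
])

magnitudes = np.linalg.norm(force_vectors, axis=1)  # magnitude at each time step
peak_force = magnitudes.max()                       # largest single-step force
cumulative_along_z = force_vectors[:, 2].sum()      # cumulative force in one direction
```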
4. The system of claim 3, wherein the one or more processors are further configured to: receive the data stream of the medical procedure, the data stream comprising data associated with a skill level of one or more surgeons or the force vectors involved in previous medical procedures involving the one or more surgeons; and determine the amount of force to be applied to the anatomical structure based at least in part on the skill level of the one or more surgeons or the force vectors involved in the previous medical procedures involving the one or more surgeons.
5. The system of claim 3 or 4, wherein the amount of force to be applied represents a cumulative force to be applied to the anatomical structure in at least one direction.
6. The system of claim 1, wherein the one or more processors are further configured to: determine the amount of force to be applied to the anatomical structure based at least in part on comparing a magnitude of a force interaction between the anatomical structure and an instrument involved in the medical procedure to one or more of: a historical magnitude of a prior force interaction, a force magnitude associated with a force limit, an optimal magnitude associated with an optimal force, a cumulative force associated with magnitudes of one or more previous force interactions, a first interaction-type magnitude associated with a primary interaction type for an instrument involved in the interaction with the anatomical structure, or a second interaction-type magnitude associated with a secondary interaction type for the instrument involved in the interaction with the anatomical structure.
7. The system of claim 1, wherein the one or more processors are further configured to: provide, for display via a graphical user interface, one or more images associated with the data stream; and update one or more pixels of the one or more images that are in proximity to the anatomical structure based at least in part on the amount of force to be applied, the one or more pixels corresponding to at least a portion of the graphical user interface represented by the one or more images associated with the data stream, the anatomical structure being in proximity to a field of view of an imaging device that captured the one or more images.
8. The system of claim 1, wherein the one or more processors are further configured to: provide, for display via a graphical user interface, one or more images associated with the data stream that correspond to the anatomical structure; and update one or more pixels of the one or more images based at least in part on the amount of force to be applied, the one or more pixels corresponding to at least a portion of the anatomical structure represented by the one or more images associated with the data stream, the anatomical structure being in a field of view of an imaging device that captured the one or more images.
9. The system of claim 8, wherein the one or more processors are further configured to: determine at least one area associated with at least one overlay, the at least one area corresponding to the anatomical structure represented by the one or more images, the overlay configured to cause the graphical user interface to update a representation of the at least one area when displayed via the display; and update the one or more pixels based at least in part on the at least one overlay.
10. The system of claim 8, wherein the one or more processors are further configured to: determine at least one area associated with at least one overlay, the at least one area corresponding to the anatomical structure represented by the one or more images; construct a heatmap based at least in part on the at least one area and the amount of force to be applied; and update the one or more pixels based at least in part on the at least one overlay and the heatmap.
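A minimal sketch of the heatmap overlay in claim 10, assuming an RGB image, a boolean mask marking the pixels that correspond to the anatomical structure, and a force expressed as a fraction of the amount to be applied; these inputs and the red color choice are illustrative assumptions.

```python
import numpy as np

def build_force_heatmap(image_rgb, anatomy_mask, force_fraction):
    """Blend a red overlay onto the masked anatomy pixels.

    image_rgb: HxWx3 uint8 endoscopic image
    anatomy_mask: HxW boolean mask of the anatomical structure
    force_fraction: applied force divided by the force to be applied (0..1)
    """
    overlay = image_rgb.astype(np.float32).copy()
    alpha = float(np.clip(force_fraction, 0.0, 1.0)) * 0.6  # cap blend strength
    red = np.array([255.0, 0.0, 0.0])
    overlay[anatomy_mask] = (1.0 - alpha) * overlay[anatomy_mask] + alpha * red
    return overlay.astype(np.uint8)
```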
11. The system of claim 8, wherein the one or more processors are further configured to: determine at least one first area where the amount of force to be applied satisfies a first threshold; determine at least one second area where the amount of force to be applied satisfies a second threshold; and update the one or more pixels based at least on the at least one first area and the at least one second area.
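The two-threshold behavior in claim 11 could, under an assumed per-pixel force attribution, be sketched as a label map that distinguishes a first (warning) area from a second (limit) area; the threshold names and rendering choices are placeholders.

```python
import numpy as np

def label_force_areas(force_map, first_threshold, second_threshold):
    """Return an HxW label map: 0 below both thresholds, 1 where the force
    satisfies the first threshold, 2 where it also satisfies the second."""
    labels = np.zeros(force_map.shape, dtype=np.uint8)
    labels[force_map >= first_threshold] = 1   # first area, e.g. rendered yellow
    labels[force_map >= second_threshold] = 2  # second area, e.g. rendered red
    return labels
```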
12. The system of claim 1, wherein the one or more processors are further configured to: generate data associated with a graphical user interface based at least in part on the indication of the amount of force to control performance of the medical procedure with the robotic medical system and images representing instruments and anatomical structures that are in a field of view of an imaging device; and provide the data associated with the graphical user interface to a display device, the data associated with the graphical user interface configured to cause the display device to provide an output representing the graphical user interface.
13. The system of claim 1, wherein the one or more processors are further configured to: generate data associated with haptic or audible feedback based at least in part on the indication of the amount of force to control performance of the medical procedure; and provide the data associated with the haptic or audible feedback to a device associated with a surgeon console, the data associated with the haptic or audible feedback configured to cause the device to provide an output representing the haptic or audible feedback.
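As a hedged illustration of the feedback data in claim 13, the helper below maps the ratio of applied force to the indicated amount onto a haptic amplitude and an audible-alert flag; the field names and the 90% alert point are assumptions.

```python
def feedback_payload(applied_force_n, indicated_force_n):
    """Build a feedback record for a surgeon-console device (sketch only)."""
    ratio = 1.0 if indicated_force_n <= 0 else min(applied_force_n / indicated_force_n, 1.0)
    return {
        "haptic_amplitude": ratio,      # vibration grows as the limit is approached
        "audible_alert": ratio >= 0.9,  # tone once within 10% of the indicated amount
    }
```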
14. The system of claim 1, wherein the one or more processors are further configured to: determine the amount of force to be applied to the anatomical structure based at least on a lookup table, where the lookup table represents a plurality of force limits corresponding to a plurality of types of anatomical structures.
15. The system of claim 14, wherein the lookup table represents the plurality of force limits that at least partially comprise predetermined force limits corresponding to the plurality of types of anatomical structures, wherein the predetermined force limits are based at least in part on input from users associated with the system.
16. The system of claim 1, wherein the one or more processors are further configured to: determine the amount of force to be applied to the anatomical structure based at least in part on the type of the anatomical structure and the type of the interaction with the anatomical structure, the amount of force to be applied representing a range of forces to be applied.
17. The system of claim 1, wherein the one or more processors are further configured to: determine the amount of force to be applied to at least a portion of the anatomical structure based at least in part on the type of the anatomical structure and the type of the interaction with the anatomical structure.
18. The system of claim 1, wherein the one or more processors are further configured to: determine the type of the interaction with the anatomical structure as a grab interaction, a retract interaction, a cut interaction, or a cauterize interaction.
19. The system of claim 1, wherein the one or more processors are further configured to: detect a change to the anatomical structure during the medical procedure; update the amount of force to be applied to the anatomical structure based at least in part on the change to the anatomical structure; and provide a second indication of the amount of force to control performance of the medical procedure with the robotic medical system based at least on the update to the amount of force to be applied to the anatomical structure.
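The update path in claim 19 might look like the sketch below, where a detected change to the anatomical structure (for example, tissue that has been partially transected or cauterized) triggers a lookup of a new force amount and a second indication; the dictionary fields and lookup structure are hypothetical.

```python
def update_on_anatomy_change(current_indication, new_anatomy_state, force_limits):
    """Recompute the force indication after a detected change to the anatomy.

    current_indication: dict with "interaction_type" and "max_force_newtons"
    new_anatomy_state: label for the changed structure, e.g. "liver_transected"
    force_limits: dict keyed by (anatomy state, interaction type)
    """
    key = (new_anatomy_state, current_indication["interaction_type"])
    return {
        "anatomy_type": new_anatomy_state,
        "interaction_type": current_indication["interaction_type"],
        "max_force_newtons": force_limits.get(key, current_indication["max_force_newtons"]),
    }
```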
20. The system of any of claims 1, 4, 9-11, and 14-19, where the interaction with the anatomical structure involves contact between an instrument of the robotic medical system and the anatomical structure.
21. A method, comprising: receiving, by one or more processors, a data stream of a medical procedure performed with a robotic medical system; identifying, by the one or more processors, using the data stream and with one or more models trained with machine learning, a type of an anatomical structure on which the medical procedure is performed, and a type of an interaction with the anatomical structure; determining, by the one or more processors, an amount of force to be applied to the anatomical structure based at least in part on the type of the anatomical structure and the type of the interaction with the anatomical structure; and providing, by the one or more processors, an indication of the amount of force to control performance of the medical procedure with the robotic medical system.
22. The method of claim 21 further comprising: generating, by the one or more processors, a graphical user interface based at least in part on images representing instruments and anatomical structures that are in a field of view of an imaging device, the images captured at a plurality of time steps, the graphical user interface comprising a timer representing an amount of time during which the amount of force to be applied is applied to the anatomical structure or a remaining amount of time during which the amount of force to be applied satisfies a cumulative force threshold.
23. The method of claim 21, further comprising: receiving, by the one or more processors, the data stream of the medical procedure, the data stream comprising data associated with force vectors that represent directions and magnitudes of force interactions between instruments and anatomical structures involved in the medical procedure, the force vectors captured at a plurality of time steps; and determining, by the one or more processors, the amount of force to be applied to the anatomical structure based at least in part on the force vectors.
24. A non-transitory computer-readable medium storing processor-executable instructions that, when executed by one or more processors, cause the one or more processors to: receive a data stream of a medical procedure performed with a robotic medical system; identify, using the data stream and with one or more models trained with machine learning, a type of an anatomical structure on which the medical procedure is performed, and a type of an interaction with the anatomical structure; determine an amount of force to be applied to the anatomical structure based at least in part on the type of the anatomical structure and the type of the interaction with the anatomical structure; and provide an indication of the amount of force to control performance of the medical procedure with the robotic medical system.
PCT/US2025/017648 2024-02-29 2025-02-27 Updating a user interface based on force applied by an instrument during teleoperation Pending WO2025184378A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202463559772P 2024-02-29 2024-02-29
US63/559,772 2024-02-29

Publications (1)

Publication Number Publication Date
WO2025184378A1 (en)

Family

ID=95064253

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2025/017648 Pending WO2025184378A1 (en) 2024-02-29 2025-02-27 Updating a user interface based on force applied by an instrument during teleoperation

Country Status (1)

Country Link
WO (1) WO2025184378A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020055435A1 (en) * 2018-09-12 2020-03-19 Verb Surgical Inc. Machine-learning-based visual-haptic feedback system for robotic surgical platforms
US20200268469A1 (en) * 2019-02-21 2020-08-27 Theator inc. Image-based system for estimating surgical contact force


Similar Documents

Publication Publication Date Title
US20220336078A1 (en) System and method for tracking a portion of the user as a proxy for non-monitored instrument
US12349999B2 (en) Graphical user guidance for a robotic surgical system
CN112804958A (en) Indicator system
US20130178868A1 (en) Surgical robot and method for controlling the same
JP2018511359A (en) Operating room and surgical site recognition
CN113194866A (en) Navigation assistance
CN105078576A (en) Surgical robots and control methods thereof
JP2012529970A (en) Virtual measurement tool for minimally invasive surgery
JP2012529971A (en) Virtual measurement tool for minimally invasive surgery
CN115443108A (en) Surgical procedure guidance system
US20250318882A1 (en) Method and system for estimating positional data in images
CN115135270A (en) Robotic surgical system and method for providing a stadium-style view with arm set guidance
JP2023506355A (en) Computer-assisted surgical system, surgical control device and surgical control method
US20250134610A1 (en) Systems and methods for remote mentoring in a robot assisted medical system
CN115297799A (en) System and method for optimizing configuration of a computer-assisted surgery system for accessibility of a target object
EP4143844A1 (en) System and method for tracking a portion of the user as a proxy for non-monitored instrument
WO2025184378A1 (en) Updating a user interface based on force applied by an instrument during teleoperation
WO2025184368A1 (en) Anatomy based force feedback and instrument guidance
US20240070875A1 (en) Systems and methods for tracking objects crossing body wallfor operations associated with a computer-assisted system
JP2024513991A (en) System and method for changing a surgical field display overlay based on a trigger event
WO2025245005A1 (en) Endoscopic surgical navigation with a 3d surgical workspace
WO2025136980A1 (en) Surgical performance management via triangulation adjustment
WO2025199083A1 (en) Extended reality contextual switching for robotic medical systems
WO2025194117A1 (en) Interaction detection between robotic medical instruments and anatomical structures
WO2024020223A1 (en) Changing mode of operation of an instrument based on gesture detection

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 25713100

Country of ref document: EP

Kind code of ref document: A1