WO2025145069A1 - Intelligent data collection for medical environments - Google Patents
Intelligent data collection for medical environments
- Publication number
- WO2025145069A1 (PCT/US2024/062134)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- robotic
- medical
- stream
- streams
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/30—Surgical robots
- A61B34/37—Leader-follower robots
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H10/00—ICT specially adapted for the handling or processing of patient-related medical or healthcare data
- G16H10/60—ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H15/00—ICT specially adapted for medical reports, e.g. generation or transmission thereof
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/30—Surgical robots
Definitions
- FIG. 1A is a schematic view of various elements appearing in a surgical theater during a surgical operation, as may occur in relation to some embodiments;
- FIG. 1B is a schematic view of various elements appearing in a surgical theater during a surgical operation employing a robotic surgical system, as may occur in relation to some embodiments;
- FIG. 2A is a schematic depth map rendering from an example theater-wide sensor perspective, as may be used in some embodiments;
- FIG. 2B is a schematic top-down view of objects in the theater of FIG. 2A, with corresponding sensor locations;
- FIG. 2C is a pair of images depicting a grid-like pattern of orthogonal rows and columns in perspective, as captured from a theater-wide visual image sensor having a rectilinear view and a theater-wide visual image sensor having a fisheye view, each of which may be used in connection with some embodiments;
- FIG. 3 is a schematic representation of a series of surgical procedures within a surgical theater, their intervening nonoperative periods, and corresponding theater-wide sensor datasets for one such nonoperative period, as may occur in connection with some embodiments;
- FIG. 4 is a schematic block diagram illustrating an example deployment topology for a nonoperative periods analysis system, as may be implemented in some embodiments;
- FIG. 5A is a schematic representation of a collection of metrics intervals, as may be used to assess nonoperative team performance in some embodiments;
- FIG. 5B is a schematic processing diagram indicating full-day relations of various intervals, including intervals from FIG. 5A, as may be applied in some embodiments;
- FIG. 5C is a schematic block diagram indicating possible activity analysis class groupings, as may be used in connection with some embodiments.
- FIG. 6 is a table of example task action temporal definitions, as may be used in some embodiments.
- FIG. 7 is a table of additional example task action temporal definitions, as may be used in some embodiments.
- FIG. 8 is a schematic block diagram illustrating various metrics and their relation in constructing a composite score (referred to as an OR analysis “ORA” score), as may be used in some embodiments;
- FIG. 9A is a schematic block diagram depicting a general nonoperative analysis system processing flow, as may be implemented in some embodiments;
- FIG. 9B is a schematic block diagram depicting elements in a more detailed example nonoperative analysis system processing flow than the flow depicted in FIG. 9A, as may be implemented in some embodiments;
- FIG. 9C is a flow diagram illustrating various operations in an example overall process for analyzing theater-wide sensor data during nonoperative periods, as may be implemented in some embodiments;
- FIG. 10 is a flow diagram illustrating various operations in an example nonoperative segment detection process, as may be performed in some embodiments.
- FIG. 11A is a schematic block diagram illustrating an example information processing flow for performing object detection, as may be used in connection with some embodiments;
- FIG. 11B is a flow diagram illustrating various operations in an example process for performing object detection, as may be used in some embodiments;
- FIG. 18B is a schematic representation of arrow graphical elements, as may be used in, e.g., the element of FIG. 18A in some embodiments;
- FIG. 21A is a plot of example metric values as acquired in connection with an example prototype implementation of an embodiment;
- FIG. 21B is a plot of example metric values as acquired in connection with an example prototype implementation of an embodiment;
- FIG. 22 is a schematic representation of example GUI elements for providing metrics- derived feedback, as may be used in some embodiments.
- FIG. 23 is an example schematic data processing overview diagram corresponding to aspects of FIG. 4, as may be used in connection with some embodiments;
- FIG. 25 is a screenshot of a feedback interface corresponding to aspects of FIG. 22, as may be used in connection with some embodiments;
- FIG. 28 is a collection of color plots corresponding to aspects of the plots of FIGs. 19A-B and 20A-B;
- FIG. 33 is an example timeline including robotic system data (e.g., system events) over time, according to some embodiments.
- FIG. 34 is a flowchart diagram illustrating an example method for providing smart data collection in a medical environment for a medical procedure, according to some embodiments.
- FIG. 1A is a schematic view of various elements appearing in a surgical theater 100a during a surgical operation as may occur in relation to some embodiments.
- FIG. 1A depicts a non-robotic surgical theater 100a, wherein a patient-side surgeon 105a performs an operation upon a patient 120 with the assistance of one or more assisting members 105b, who may themselves be surgeons, physician’s assistants, nurses, technicians, etc.
- the surgeon 105a may perform the operation using a variety of tools, e.g., a visualization tool 110b such as a laparoscopic ultrasound, visual image acquiring endoscope, etc., and a mechanical instrument 110a such as scissors, retractors, a dissector, etc.
- the visualization output from visualization tool 110b may be recorded and stored for future review, e.g., using hardware or software on the visualization tool 110b itself, capturing the visualization output in parallel as it is provided to display 125, or capturing the output from display 125 once it appears on-screen, etc. While two-dimensional video capture with visualization tool 110b may be discussed extensively herein, as when visualization tool 110b is a visual image endoscope, one will appreciate that, in some embodiments, visualization tool 110b may capture depth data instead of, or in addition to, two-dimensional image data (e.g., with a laser rangefinder, stereoscopy, etc.).
- the surgical operation of theater 100b may require that tools 140a-d, including the visualization tool 140d, be removed or replaced for various tasks as well as new tools, e.g., new tool 165, be introduced.
- Depth values 205l corresponding to a movable dolly and a boom with a lighting system's depth values 205k also appear within the field of view.
- the theater-wide sensor capturing the perspective 205 may be only one of several sensors placed throughout the theater.
- FIG. 2B is a schematic top-down view of objects in the theater at a given moment during the surgical operation.
- the perspective 205 may have been captured via a theater-wide sensor 220a with corresponding field of view 225a.
- cabinet depth values 205c may correspond to cabinet 210c
- electronics/control console depth values 205a may correspond to electronics/control console 210a
- tray depth values 205b may correspond to tray 210b.
- Robotic system 210e may correspond to depth values 205e, and each of the individual team members 210d, 210g, 210h, and 210i may correspond to depth values 205d, 205g, 205h, and 205i, respectively.
- dolly 210l may correspond to depth values 205l.
- Depth values 205j may correspond to table 210j (with an outline of a patient shown here for clarity, though the patient has not yet been placed upon the table corresponding to depth values 205j in the example perspective 205).
- a top-down representation of the boom corresponding to depth values 205k is not shown for clarity, though one will appreciate that the boom may likewise be considered in various embodiments.
- each of the sensors 220a, 220b, 220c is associated with different fields of view 225a, 225b, and 225c, respectively.
- the fields of view 225a-c may sometimes have complementary characters, providing different perspectives of the same object, or providing a view of an object from one perspective when it is outside, or occluded within, another perspective.
- Complementarity between the perspectives may be dynamic both spatially and temporally. Such dynamic character may result from movement of an object being tracked, but also from movement of intervening occluding objects (and, in some cases, movement of the sensors themselves). For example, at the moment depicted in FIGs. 2A and 2B,
- the field of view 225a has only a limited view of the table 210j, as the electronics/control console 210a substantially occludes that portion of the field of view 225a. Consequently, in the depicted moment, the field of view 225b is better able to view the surgical table 210j.
- neither field of view 225b nor 225a has an adequate view of the operator 210n in console 210k.
- field of view 225c may be more suitable. However, over the course of the data capture, these complementary relationships may change.
- the theater- wide sensors may take a variety of forms and may, e.g., be configured to acquire visual image data, depth data, both visual and depth data, etc.
- visual and depth image captures may likewise take on a variety of forms, e.g., to afford increased visibility of different portions of the theater.
- FIG. 2C is a pair of images 250b, 255b depicting a grid-like pattern of orthogonal rows and columns in perspective, as captured from a theater-wide sensor having a rectilinear view and a theater-wide sensor having a fisheye view, respectively.
- some theater-wide sensors may capture rectilinear visual images or rectilinear depth frames, e.g., via appropriate lenses, post-processing, combinations of lenses and post-processing, etc., while other theater-wide sensors may instead, e.g., acquire fisheye or distorted visual images or rectilinear depth frames, via appropriate lenses, post-processing, combinations of lenses and post-processing, etc.
- image 250b depicts a checkerboard pattern in perspective from a rectilinear theater-wide sensor. Accordingly, the orthogonal rows and columns 250a, shown here in perspective, retain linear relations with their vanishing points. In contrast, image 255b depicts the same checkerboard pattern in the same perspective, but from a fisheye theater-wide sensor perspective.
- the orthogonal rows and columns 255a, while in reality retaining a linear relationship with their vanishing points (as they appear in image 250b), appear here in the sensor data as having curved relations with their vanishing points.
- each type of sensor, and other sensor types, may be used alone or, in some instances, in combination, in connection with various embodiments.
- checkered patterns, or other calibration fiducials may facilitate determination of a given theater-wide sensor’s intrinsic parameters.
- the focal point of the fisheye lens, and other details of the theater-wide sensor may vary between devices and even across the same device over time.
- the rectilinear view may be achieved by undistorting the fisheye view once the intrinsic parameters of the camera are known (which may be useful, e.g., to normalize disparate sensor systems to a similar form recognized by a machine learning architecture).
- a fisheye view may allow the system and users to more readily perceive a wider field of view than in the case of the rectilinear perspective
- the differing perspectives may be normalized to a common perspective form (e.g., mapping all the rectilinear data to a fisheye representation or vice versa).
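- For illustration only, the following is a minimal sketch (not part of the claimed embodiments) of such a normalization step using OpenCV's fisheye camera model; the intrinsic matrix and distortion coefficients shown are hypothetical placeholders that would, in practice, come from a calibration such as the checkerboard procedure described above.

```python
import cv2
import numpy as np

# Hypothetical intrinsics for one fisheye theater-wide sensor (placeholders;
# real values would come from checkerboard/fiducial calibration).
K = np.array([[400.0, 0.0, 640.0],
              [0.0, 400.0, 360.0],
              [0.0, 0.0, 1.0]])
D = np.array([0.05, -0.01, 0.002, -0.0005]).reshape(4, 1)

def to_rectilinear(fisheye_frame: np.ndarray) -> np.ndarray:
    """Undistort a fisheye frame so disparate sensors share a common,
    rectilinear form (e.g., before feeding a machine learning architecture)."""
    return cv2.fisheye.undistortImage(fisheye_frame, K, D, Knew=K)
```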
- FIG. 3 depicts a state of a single operating room over time 305, e.g., over the course of a day.
- the team may prepare the operating room for the day’s procedures, collecting appropriate equipment, reviewing scheduled tasks, etc.
- a nonoperative inter-operative period 310b will follow wherein the team performs the turnover from the operating room configuration for performing the surgery 315a to the configuration for performing the surgery 315b.
- nonoperative inter-surgical period 310c here follows the second surgery 315b, etc.
- the team may perform any final maintenance operations, may secure and put away equipment, deactivate devices, upload data, etc., during the post-operative period 310d.
- Ellipsis 310e indicates the possibility of additional intervening operative and nonoperative states (though, naturally, in some theaters there may instead be only one surgery during the day). Because of the theater operations’ sequential character, an error in an upstream period can cause errors and delays to cascade through downstream periods. For example, improper alignment of equipment during pre-surgical period 310a may result in a delay during surgery 315a.
- This delay may itself require nonoperative period 310b to be shortened, providing a team member insufficient time to perform proper cleaning procedures, thereby placing the health of the patient of surgery 315b at risk.
- inefficiencies early in the day may result in the delay, poor execution, or rescheduling of downstream actions.
- efficiencies early in the day may provide tolerance downstream for unexpected events, facilitating more predictable operation outcomes and other benefits.
- Each of the theater states including both the operative periods 315a, 315b, etc. and nonoperative periods 310a, 310b, 310c, 310d, etc. may be divided into a collection of tasks.
- the nonoperative period 310c may be divided into the tasks 320a, 320b, 320c, 320d, and 320e (with intervening tasks represented by ellipsis 320f).
- at least three theater-wide sensors were present in the OR, each sensor capturing at least visual image data (though one will appreciate that there may be fewer than three streams, or more, as indicated by ellipses 370q).
- a first theater-wide sensor captured a collection of visual images 325a (e.g., visual image video) during the first nonoperative task 320a, a collection of visual images 325b during the second nonoperative task 320b, a collection of visual images 325c during the third nonoperative task 320c, a collection of visual images 325d during the fourth nonoperative task 320d, and the collection of visual images 325e during the last nonoperative task 320e (again, intervening groups of frames may have been acquired for other tasks as indicated by ellipsis 325f).
- task 320c which may be a “turnover” or “patient out” task
- a team member escorts the patient out of the operating room. While the theater-wide sensor associated with collection 325c has a clear view of the departing patient, the theater-wide sensor associated with the collection 335c may be too far away to observe the departure in detail. Similarly, the collection 330c only indicates that the patient is no longer on the operating table.
- task 320d which may be a “setup” task
- a team member positions equipment which will be used in the next operative period (e.g., the final surgery 315c if there are no intervening periods in the ellipsis 310e).
- task 320e which may be a “sterile prep” task before the initial port placements and beginning of the next surgery (again, e.g., surgery 315c)
- the theater-wide sensor associated with collection 330e is able to perceive the pose of the robotic system and its arms, as well as the state of the new patient.
- collections 325e and 335e may provide wider contextual information regarding the state of the theater.
- Consolidating theater-wide data into this taxonomy, in conjunction with various other operations disclosed herein, may more readily facilitate analysis in a manner amenable to larger efficiency review, as described in greater detail herein.
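- Purely as an illustration of the taxonomy just described, the sketch below shows one possible in-memory grouping of theater-wide captures, first by nonoperative task and then by sensor stream; the task names, sensor names, and frame labels are hypothetical.

```python
# Hypothetical grouping: task name -> sensor stream -> captured frames.
theater_data = {
    "patient out": {"sensor_a": ["frame_325c_0", "frame_325c_1"],
                    "sensor_b": ["frame_330c_0"],
                    "sensor_c": ["frame_335c_0"]},
    "setup":       {"sensor_a": ["frame_325d_0"],
                    "sensor_b": ["frame_330d_0"],
                    "sensor_c": ["frame_335d_0"]},
}

def frames_for_task(task: str) -> dict:
    """All sensors' captures for one task, e.g., for side-by-side review."""
    return theater_data.get(task, {})
```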
- organizing data in this manner may facilitate comparisons across different days of the week, over the course of the month, across theaters, surgery configurations (both robotic and non-robotic), and teams, with specific emphasis upon particular ones of these intervals 550a-d appearing in the corresponding nonoperative periods.
- it may still be useful to determine the duration of the surgery in interval 550e as the duration may inform the efficiency or inefficiency of the preceding or succeeding nonoperative period.
- some of the disclosed metrics may consider events and actions in this interval 550e, even when seeking ultimately to assess the efficiency of a nonoperative period.
- This interval may, e.g., begin when the first personnel enters the theater for the day and may end when the patient enters the theater for the first surgery. Accordingly, as shown by the arrow 555c, this may result in a transition to the first instance of the “patient in to skin cut” interval 550d. From there, as indicated by the circular relation, the data may be cyclically grouped into instances of the intervals 550a-e, e.g., in accordance with the alternating periods 315a, 310b, 315b, 310c, etc., until the period 315c.
- the system may instead transition to a final “patient out to day end” interval 555b, as shown by the arrow 555d (which may be used to assess nonoperative post-operative period 310d).
- the “patient out to day end” interval 555b may end when the last team member leaves the theater or the data acquisition concludes.
- a machine learning classifier may, e.g., be trained to distinguish actions in the interval 555b from the corresponding data of interval 550b (naturally, conclusion of the data stream may also be used in some embodiments to infer the presence of interval 555b). Though concluding the day’s actions, analysis of interval 555b may still be appropriate in some embodiments, as actions taken at the end of one day may affect the following day’s performance.
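- As a hedged illustration of the cyclic grouping shown in FIG. 5B, the sketch below labels a day's intervals from an ordered list of boundary events; the event names are hypothetical stand-ins for whatever an upstream segment detector emits, and only a subset of the intervals 550a-e and 555a-b is represented.

```python
def label_intervals(boundaries):
    """boundaries: ordered (timestamp, event) pairs, e.g.
    [(t0, "first_personnel_in"), (t1, "patient_in"), (t2, "skin_cut"),
     (t3, "patient_out"), ..., (tn, "day_end")] (hypothetical event names)."""
    intervals = []
    for (t_a, ev_a), (t_b, ev_b) in zip(boundaries, boundaries[1:]):
        if ev_a == "first_personnel_in" and ev_b == "patient_in":
            name = "day start to patient in"
        elif ev_a == "patient_in" and ev_b == "skin_cut":
            name = "patient in to skin cut"      # cf. interval 550d
        elif ev_a == "patient_out" and ev_b == "patient_in":
            name = "patient out to patient in"   # inter-case turnover
        elif ev_a == "patient_out" and ev_b == "day_end":
            name = "patient out to day end"      # cf. interval 555b
        else:
            name = f"{ev_a} to {ev_b}"
        intervals.append((name, t_a, t_b))
    return intervals
```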
- FIG. 5C depicts four high-level task action classes or groupings of tasks, referred to for example as phases or stages: post-surgery 520, turnover 525, pre-surgery 510, and surgery 515.
- Surgery 515 may include the tasks or actions 515a-i.
- FIGs. 6 and 7 provide various example temporal definitions for the actions, though for the reader’s appreciation, brief summaries will be provided here.
- the task “first cut” 515a may correspond to a time when the first incision upon the patient occurs (consider, e.g., the duration 605a).
- the task “docking” 515e may correspond to a duration starting when a team member begins docking a robotic system and concludes when the robotic system is docked (consider, e.g., the duration 605e).
- the task “surgery” 515f may correspond with a duration starting with the first incision and ending with the final closure of the patient (consider, e.g., the durations 705a-c for respective contemplated surgeries, specifically the robotic surgery 705a and non-robotic surgeries 705b and 705c).
- these action blocks may be further broken down into considerably more action and task divisions in accordance with the analyst’s desired focus (e.g., if the action “port placement” 515b were associated with an inefficiency, a supplemental taxonomy wherein each port’s placement were a distinct action, with its own measured duration, may be appropriate for refining the analysis).
- the task “clean” 525a may correspond to a duration starting when the first team member begins cleaning equipment in the theater and concludes when the last team member (which may be the same team member) completes the last cleaning of any equipment (consider, e.g., the duration 705j).
- the task “idle” 525b may correspond to a duration that starts when team members are not performing any other task and concludes when they begin performing another task (consider, e.g., the duration 705k).
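- For illustration, the temporal definitions above can be represented as simple start/end timestamps from which durations are derived, as in the minimal sketch below; the task names and timestamps are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class TaskInterval:
    name: str        # e.g., "docking", "clean", "idle" (illustrative)
    start_s: float   # seconds from the start of the theater-wide recording
    end_s: float

    @property
    def duration_s(self) -> float:
        return self.end_s - self.start_s

docking = TaskInterval("docking", start_s=3605.0, end_s=3790.0)
print(docking.duration_s)  # 185.0 seconds from docking start to docked
```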
- a “case volume” scoring metric 810a includes the mean or median number of cases operated per OR, per day, for a team, theater, or hospital, normalized by the expected case volume for a typical OR (e.g., again, as designated in a historical dataset benchmark, such as a mean or median).
- a “first case turnovers” scoring metric 810b is the ratio of first cases in an operating day that were turned over compared to the total number of first cases captured from a team, theater, or hospital.
- a more general “case turnovers” metric is the ratio of all cases that were turned over compared to the total number of cases as performed by a team, in a theater, or in a hospital.
- each of the metrics 805a-c, 810a-c, 815a-c, and 820a-b may be considered individually to assess nonoperative period performances, or in combinations of multiple of the metrics. As discussed above with respect to EQN. 1, some embodiments consider an “ORA score” 830 reflecting an integrated 825 representation of all these metrics.
- ORA score may provide a readily discernible means for reviewers to quickly and intuitively assess the relative performance of surgical teams, surgical theaters, hospitals and hospital systems, etc.
- the weight 850g may be upscaled relative to the other weights.
- when the ORA score 830 across procedures is compared in connection with the durations of one or more of the intervals in FIGs. 5A-C for the groups of surgeries, the reviewer can more readily discern whether there exists a relation between the head count and undesirable interval durations.
- one will appreciate that weight adjustment, as well as particular consideration of specific interval durations, may be used to assess other performance characteristics.
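- One hedged reading of the composite score construction described above, consistent with the weighted integration 825 of benchmark-normalized metrics, is sketched below; the metric names, benchmark values, and weights are illustrative placeholders rather than the disclosed EQN. 1.

```python
def ora_score(values, benchmarks, weights):
    """Weighted combination of benchmark-normalized metric values.
    values, benchmarks, weights: dicts keyed by metric name (illustrative)."""
    total, weight_sum = 0.0, 0.0
    for name, value in values.items():
        normalized = value / benchmarks[name] if benchmarks[name] else 0.0
        w = weights.get(name, 1.0)   # upscaling a weight emphasizes its metric
        total += w * normalized
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

score = ora_score(
    values={"case_volume": 3.0, "first_case_turnovers": 0.8},
    benchmarks={"case_volume": 4.0, "first_case_turnovers": 1.0},
    weights={"case_volume": 1.0, "first_case_turnovers": 2.0},
)
```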
- the results of the analysis may then be presented via component 910l (e.g., sent over a network to one or more of applications 450f) for presentation to the reviewer.
- application algorithms may consume the determined metrics and nonoperative data and propose customized actionable coaching for each individual in the team, as well as the team as a whole, based upon metrics analysis results (though such coaching or feedback may first be determined on the computer system 910b in some embodiments).
- Example recommendations include, e.g.: changes in the OR layout at various points in time, changes in OR scheduling, changes in communication systems between team members, changes in numbers of staff involved in various tasks, etc.
- FIG. 9C is a flow diagram illustrating various operations in an example overall process 920 for analyzing theater-wide data.
- the computer system may receive the theater-wide sensor data for the theater to be examined.
- the system may perform pre-processing on the data, e.g., reconciling theater-wide data to a common format, as when fisheye and rectilinear sensor data are both to be processed.
- the system may perform operative and nonoperative period recognitions, e.g., identifying each of the segments 310a-d and 315a-c from the raw theater-wide sensor data. In some embodiments, such divisions may be recognized, or verified, via ancillary data, e.g., console data, instrument kinematics data, etc. (which may, e.g., be active only during operative periods).
- the system may then iterate over the detected nonoperative periods (e.g., periods 310a, 310b) at blocks 920d and 925a.
- operative periods may also be included in the iteration, e.g., to determine metric values that may inform the analysis of the nonoperative segments, though many embodiments will consider only the nonoperative periods.
- the system may identify the relevant tasks and intervals at block 925b, e.g., the intervals, groups, and actions of FIGs. 5A-C.
- the system may iterate over the corresponding portions of the theater data for the respectively identified tasks and intervals, performing object detections at block 925f, motion detection at block 925g, and corresponding metrics generation at block 925h.
- the metrics may thus be generated at the action task level, as well as at the other intervals described in FIGs. 5A-C.
- the metrics may simply be determined for the nonoperative period (e.g., where the durations of the intervals 550a-e are the only metrics to be determined).
- the system may create any additional metric values (e.g., metrics including the values determined at block 925h across multiple tasks as their component values) at block 925d.
- the system may perform holistic metrics generation at block 930a (e.g., metrics whose component values depend upon the period metrics of block 925d and block 925h, such as certain composite metrics described herein).
- the system may analyze the metrics generated at blocks 930a, 925d, and at block 925h. As discussed, many metrics (possibly at each of blocks 930a, 925h, and 925d) will consider historical values, e.g., to normalize the specific values here, in their generation. Similarly, at block 930b the system may determine outliers as described in greater detail herein, by considering the metrics results in connection with historical values. Finally, at block 930c, the system may publish its analysis for use, e.g., in applications 450f.
- the absence of kinematics and system events data from robotic surgical system consoles or instruments may indicate a prolonged separation between the surgeon and patient or between a robotic platform and the patient, which may suffice to indicate that an inter-surgical nonoperative period has begun (or provide verification of a machine learning system’s parallel determination).
- some embodiments consider instead, or in addition, employing machine learning systems for performing the nonoperative period detection.
- some embodiments employ spatiotemporal model architectures, e.g., a transformer architecture such as that described in Bertasius, Gedas, Heng Wang, and Lorenzo Torresani, “Is Space-Time Attention All You Need for Video Understanding?”, arXiv preprint arXiv:2102.05095 (2021).
- the spatial segment transformer architecture may be designed to learn features from frames of theater-wide data (e.g., visual image video data, depth frame video data, visual image and depth frame video data, etc.).
- the temporal segment may be based upon a gated recurrent unit (GRU) method and designed to learn the sequence of actions in a long video and may, e.g., be trained in a fully supervised manner (again, where data labelling may be assisted by the activation of surgical instrument data).
- OR theater-wide data may be first annotated by a human expert to create ground truth labels and then fed to the model for supervised training.
- Some embodiments may employ a two-stage model training strategy: first training the backbone transformer model to extract features and then training the temporal model to learn a sequence.
- Input to the model training may be long sequences of theater-wide data (e.g., many hours of visual image video) with output time-stamps for each segment (e.g., the nonoperative segments) or activity (e.g., intervals and tasks of FIGs. 5A-C) of interest.
- Some models may operate on individual visual images, individual depth frames, groups of image frames (e.g., segments of video), groups of depth frames (e.g., segments of depth frame video), combinations of visual video and depth video, etc.
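- As a hedged illustration of the two-stage strategy described above (the disclosure does not prescribe a specific framework), the sketch below shows a GRU temporal head operating on per-frame features produced by a separately trained spatial backbone; the dimensions, class count, and PyTorch implementation are assumptions.

```python
import torch
import torch.nn as nn

class TemporalSegmenter(nn.Module):
    """Stage-two temporal model: a GRU over per-frame features emitting a
    segment/activity class per timestep (feature and class sizes assumed)."""
    def __init__(self, feat_dim: int = 768, hidden: int = 256, num_classes: int = 8):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, frame_features):          # (batch, time, feat_dim)
        seq, _ = self.gru(frame_features)       # (batch, time, hidden)
        return self.head(seq)                   # per-timestep class logits

# Stage one (not shown): train a spatial backbone (e.g., a video transformer)
# to produce frame_features, then freeze it before training this head.
features = torch.randn(2, 120, 768)             # 2 clips, 120 timesteps each
logits = TemporalSegmenter()(features)          # shape: (2, 120, 8)
```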
- FIG. 10 is a flow diagram illustrating various operations in an example process 1005 for performing nonoperative period detection in some embodiments.
- the system may instead consider the streams individually, or in smaller groups, and then analyze the collective results, e.g., in combination with smoothing operations, so as to assign a categorization to the segment under consideration.
- the system may iterate over the data in intervals at blocks 1005b and 1005c. For example, the system may consider the streams in successive segments (e.g., 30-second, one-minute, or two-minute intervals), though the data therein may be downsampled depending upon the framerate of its acquisition.
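- For illustration, the sketch below shows one simple way the per-segment labels produced by such interval-wise analysis might be smoothed before a final categorization is assigned; the majority-vote window and labels are illustrative assumptions.

```python
from collections import Counter

def smooth_labels(labels, radius=2):
    """Majority vote over a sliding window of 2*radius+1 per-segment labels."""
    smoothed = []
    for i in range(len(labels)):
        window = labels[max(0, i - radius): i + radius + 1]
        smoothed.append(Counter(window).most_common(1)[0][0])
    return smoothed

raw = ["operative", "operative", "nonoperative", "operative", "operative"]
print(smooth_labels(raw))  # the isolated "nonoperative" label is smoothed away
```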
- FIG. 11 A is a schematic block diagram illustrating an example information processing flow as may be used for performing object detection in connection with some embodiments.
- the system may present the image’s raw pixel or depth values to a convolutional network 1105a trained to produce image features 1105b.
- the sensor data streams may be considered in turn at blocks 1220c and 1220d, performing the applicable detection and tracking method at block 1220e (one will appreciate that alternatively, in some embodiments, the streams may be first integrated before applying the object detection and tracking systems, as when simultaneously acquired depth frames from multiple sensors are consolidated into a single virtual model). As mentioned, some methods may benefit from considering temporal and spatial continuity across the theater-wide sensors, and so reconciliation methods for the particular tracking application may be applied at block 1220f.
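- As a hedged illustration of cross-sensor reconciliation at block 1220f, the sketch below merges per-sensor track centroids that fall within a small radius once expressed in a common theater frame; the merge radius and track representation are assumptions, not the disclosed method.

```python
import numpy as np

def reconcile_tracks(tracks_by_sensor, merge_radius_m=0.5):
    """tracks_by_sensor: {sensor_id: list of (x, y, z) track centroids already
    mapped into the common theater frame}. Returns one centroid per object."""
    merged = []                                  # list of centroid groups
    for centroids in tracks_by_sensor.values():
        for p in map(np.asarray, centroids):
            for group in merged:
                if np.linalg.norm(group[0] - p) < merge_radius_m:
                    group.append(p)              # same object, another sensor
                    break
            else:
                merged.append([p])               # previously unseen object
    return [np.mean(g, axis=0) for g in merged]

tracks = {"cam_1": [(1.0, 2.0, 0.0)],
          "cam_2": [(1.1, 2.0, 0.0), (4.0, 1.0, 0.0)]}
print(len(reconcile_tracks(tracks)))             # 2 distinct objects
```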
- a metadata section 2205a indicating the identity of the case (“Case 1”), the state of the theater (though a surgical operation, “Gastric Bypass”, is shown here in anticipation of the upcoming surgery, the nonoperative actions and intervals of FIGs. 5A-C may be shown here additionally or alternatively), the date and time of the data acquisition (“May 27, 20XX 07:49:42”), and the number of identified personnel (here “2” as determined, e.g., in accordance with component 910h and, e.g., the methods of FIGs. 11A-B).
- the reviewer may readily perceive a corpus of results while simultaneously analyzing the state of a specific instance (e.g., as may have been called to the user’s attention based upon, e.g., correspondingly determined metric values or pattern similarities).
- the one or more memory components 3015 and one or more storage devices 3025 may be computer-readable storage media.
- the one or more memory components 3015 or one or more storage devices 3025 may store instructions, which may perform or cause to be performed various of the operations discussed herein.
- the instructions stored in memory 3015 can be implemented as software and/or firmware. These instructions may be used to perform operations on the one or more processors 3010 to carry out processes described herein. In some embodiments, such instructions may be provided to the one or more processors 3010 by downloading the instructions from another system, e.g., via network adapter 3030.
- Systems, methods, apparatuses, and non-transitory computer-readable media are provided for intelligent recording of data collected for medical procedures and in medical environments such as ORs.
- data collected for medical procedures and in medical environments include video data collected by one or more visual sensors arranged in the medical environments, depth data or three-dimensional point cloud data collected by one or more depth sensors arranged in the medical environments, data (e.g., kinematics data, system event data, sensor data) collected by one or more robotic systems located within the medical environments and used to perform the medical procedures, data (e.g., endoscopic video data) collected by one or more instruments located within the medical environments and used to perform the medical procedures, metrics and workflow analytics determined based on the multimodal data, and so on.
- Trigger events can be implemented to identify redundant streams of data, based on which recording of the data stream(s) considered to be the best source of information is enabled while recording of the rest of the data stream(s) is disabled. Trigger events can be implemented to identify different qualities of streams of data, based on which recording of the data stream(s) having the best quality is enabled while recording of the rest of the data stream(s) is disabled.
- the trigger events can be implemented to identify noteworthy or important events such as outlier cases, adverse events, private information (e.g., personal health information (PHI)) exposure, and so on, based on which recording of the data stream(s) capturing noteworthy or important events is enabled while recording of the data stream(s) not capturing noteworthy or important events is disabled.
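- For illustration, the sketch below shows one possible form of such redundancy- and quality-based selection, in which only the best-scoring stream of each redundant group keeps recording; the stream identifiers, quality scores, and enable/disable representation are hypothetical.

```python
def select_recording_streams(streams, quality_scores, redundant_groups):
    """streams: stream ids; quality_scores: id -> score; redundant_groups:
    list of sets of ids judged to carry redundant information."""
    enabled = set(streams)
    for group in redundant_groups:
        best = max(group, key=lambda s: quality_scores.get(s, 0.0))
        enabled -= (set(group) - {best})     # disable all but the best source
    return {s: (s in enabled) for s in streams}

decision = select_recording_streams(
    streams=["cam_1", "cam_2", "depth_1"],
    quality_scores={"cam_1": 0.9, "cam_2": 0.6, "depth_1": 0.8},
    redundant_groups=[{"cam_1", "cam_2"}],
)
print(decision)  # {'cam_1': True, 'cam_2': False, 'depth_1': True}
```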
- FIG. 31 is a schematic block diagram illustrating an example data collection and analysis system 3100 for providing smart data collection in a medical environment for a medical procedure, according to some embodiments.
- the data collection and analysis system 3100 can be implemented using one or more suitable computing systems, such as one or more computing systems 190a, 190b, 450b, and 3000.
- the computing systems 190a and 190b can facilitate data collection, data processing, and so on of the multimodal data.
- the processing systems 450b can perform automated inference 450c, including the detection of objects in the medical environment (e.g., theater), such as personnel and equipment, as well as segmentation of the theater-wide data into distinct steps 450d (which can correspond to the groupings and their respective actions discussed herein with respect to FIGs. 5A-C).
- code, instructions, data structures, weights, biases, parameters, and other information that define the data collection and analysis system 3100 can be stored in one or more memory systems such as one or more memory components 3015 and/or one or more storage systems 3025.
- the processes of the data collection and analysis system 3100, such as determining whether to enable or disable recording of one or more types of data in the multimodal data, can be performed using one or more processors such as one or more processors 3010.
- a medical procedure refers to a surgical procedure or operation performed in a medical environment (e.g., a medical or surgical theater 100a or 100b, OR, etc.) by or using one or more of a medical staff, a robotic system, or an instrument.
- a medical staff include surgeons, nurses, support staff, and so on, such as the patient-side surgeon 105a and the assisting members 105b.
- the robotic systems include the robotic medical system or the robot surgical system described herein.
- instruments include the mechanical instrument 110a or the visualization tool 110b.
- Medical procedures can have various modalities, including robotic (e.g., using at least one robotic system), non-robotic laparoscopic, non-robotic open, and so on.
- the multimodal data 3102, 3104, 3106, 3108, and 3110 collected for a medical procedure also refers to or includes multimodal data collected in a medical environment in which the medical procedure is performed and for one or more of a medical staff, robotic system, or instrument performing or used in performing the medical procedure.
- the data collection and analysis system 3100 can receive and digest data sources or data streams including one or more of video data 3102, robotic system data 3104, instrument data 3106, metadata 3108, and depth data 3110 collected for a medical procedure.
- the data collection and analysis system 3100 can acquire data streams of the multimodal data 3102, 3104, 3106, 3108, and 3110 in real time (e.g., acquired at 450a, received at 910a, 915e, 920a, 1005a, 1110a, 1215a, 1405a, and so on).
- the data collection and analysis system 3100 can utilize all types of multimodal data 3102, 3104, 3106, 3108, and 3110 collected, obtained, determined, or calculated for a medical procedure to determine trigger events based on which recording of the multimodal data 3102, 3104, 3106, 3108, and 3110 can be enabled or disabled. In some examples, the data collection and analysis system 3100 can utilize at least two types of the multimodal data 3102, 3104, 3106, 3108, and 3110 collected, obtained, determined, or calculated for a medical procedure to determine a trigger event. In some examples, although at least one type of the multimodal data 3102, 3104, 3106, 3108, and 3110 may not be available for a medical procedure, the data collection and analysis system 3100 can nevertheless determine a trigger event using the available information for that medical procedure.
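- As a hedged illustration of combining whichever modalities are available to determine a trigger event, the sketch below applies a simple rule over the modalities present; the event names and decision rule are illustrative assumptions, not the disclosed logic.

```python
def determine_trigger(available):
    """available: modality name -> latest derived observation (or None when
    that modality is absent), e.g. {"robotic_system": "tool_off",
    "video": "staff_at_table", "depth": None} (hypothetical labels)."""
    signals = {k: v for k, v in available.items() if v is not None}
    if signals.get("robotic_system") == "tool_off":
        return "enable_recording"        # e.g., to analyze instrument handling
    if len(signals) >= 2 and all(v == "idle" for v in signals.values()):
        return "disable_recording"       # multiple quiet, redundant streams
    return None                          # no trigger from the available data

print(determine_trigger({"robotic_system": "tool_off", "video": None}))
```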
- the video data 3102 includes two-dimensional visual video data such as color (RGB) image or video data, grayscale image or video data, and so on of a medical procedure.
- the video data 3102 can include videos (e.g., structured video data) captured during a medical procedure.
- the video data 3102 include two-dimensional visual video data obtained using visual image sensors placed within and/or around at least one medical environment (e.g., the theaters 100a and 100b) to capture visual image videos of the medical procedure performed within the at least one medical environment.
- video data 3102 examples include medical environment video data such as OR video data, visual image/video data, theater-wide video data captured by the visual image sensors, visual images 325a-325e, 330a-330e, 335a-335e, visual frames, and so on.
- the visual image sensors used to acquire the structured video data can be fixed relative to the at least one medical environment (e.g., placed on walls or ceilings of the medical environment).
- the instrument data 3106 includes instrument imaging data, instrument kinematics data, and so on collected using an instrument.
- the instrument imaging data can be a part of the video data 3102 for which recording can be enabled or disabled in the manner described herein.
- the instrument imaging data can include instrument image and/or video data (e.g., endoscopic images, endoscopic video data, etc.), ultrasound data (e.g., ultrasound images, ultrasound video data), and so on obtained using imaging devices which can be operated by human operators or robot systems.
- Such instrument imaging data may depict surgical field of views (e.g., field of view of internal anatomy of patients).
- the positions, orientations, and/or poses of imaging devices can be controlled or manipulated by a human operator (e.g., a surgeon or a medical staff member) teleoperationally via robotic systems.
- an imaging instrument can be coupled to or supported by a manipulator of a robotic system and a human operator can teleoperationally manipulate the imaging instrument by controlling the robotic system.
- the instrument imaging data can be captured using manually manipulated imaging instruments, such as a laparoscopic ultrasound device or a laparoscopic visual image/video acquiring endoscope.
- the metadata 3108 includes information of various aspects and attributes of the medical procedure, including at least one of identifying information of the medical procedure, identifying information of one or more medical environments (e.g., theaters, ORs, hospitals, and so on) in which the medical procedure is performed, identifying information of medical staff by whom the medical procedure is performed, the experience level of the medical staff, schedules of the medical staff and the medical environments, patient complexity of patients subject to the medical procedure, patient health parameters or indicators, identifying information of one or more robotic systems or instruments used in the medical procedure, identifying information of one or more sensors used to capture the multimodal data.
- the experience level of the medical staff members includes a role, length of time for practicing medicine, length of time for performing certain types of medical procedures, length of time for using a certain type of robotic systems, certifications, and credentials of each of one or more surgeons, nurses, healthcare team name or ID, and so on.
- the schedules of the medical staff and the medical environments include allocation of the medical staff and the medical environments to perform certain procedures (e.g., defined by types of surgery, surgery name, surgery ID, or surgery reference number, specialty, modality), names of medical staff members, and corresponding time.
- patient complexity refers to conditions that a patient has that may influence the care of other conditions.
- patient health parameters or indicators include various parameters or indicators such as body mass index (BMI), percentage body fat (%BF), blood serum cholesterol (BSC), systolic blood pressure (SBP), height, stage of sickness, organ information, outcome of the medical procedure, and so on.
- the identifying information of the one or more robotic systems or instruments includes at least one of a name, model, or version of each of the one or more robotic systems or instruments or an attribute of each of the one or more robotic systems or instruments.
- the identifying information of at least one sensor includes at least one of a name of each of the at least one sensor or a modality of each of the at least one sensor.
- the system events of a robotic system include different activities, kinematics/motions, sequences of actions, and so on of the robotic system and timestamps thereof.
- the metadata 3108 can be stored in a memory device (e.g., the memory component 3015) or a database.
- the memory device or the database can be provided for a scheduling or work allocation application that schedules the medical staff and the medical procedures in medical environments.
- a user can input using an input system (e.g., of the input/output system 3020) the metadata 3108, or the metadata 3108 can be automatically generated using an automated scheduling application.
- the metadata 3108 can be associated with the video data 3102, the robotic system data 3104, the instrument data 3106, depth data 3110, and so on.
- the depth data 3110 includes three-dimensional medical procedure data captured for a medical procedure.
- the depth data 3110 can include three-dimensional video data obtained using depth-acquiring sensors placed within and/or around the at least one medical environment (e.g., the theaters 100a and 100b).
- the depth data 3110 include theater-wide data (e.g., depth data, depth frame, or depth frame data) collected using theater-wide sensors (e.g., depth-acquiring sensors).
- the depth data 3110 for a depth-acquiring sensor with a certain pose can indicate distance measured between the depth-acquiring sensor and points on objects and/or intensity value of the points on objects.
- the depth data 3110 from multiple depth-acquiring sensors with different poses as shown and described relative to FIGS. 2B, 2C, and 3 can be fused into a higher accuracy dataset through registration of depth-acquiring sensors.
- An intensity value can indicate a reflected signal strength for a point in the three-dimensional point cloud or an object in the three-dimensional point cloud.
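- For illustration, the sketch below shows one minimal way the depth data 3110 from multiple registered depth-acquiring sensors might be fused into a single point cloud; the registration transforms are assumed to come from a prior calibration step, and the array layout is an assumption.

```python
import numpy as np

def fuse_point_clouds(clouds, transforms):
    """clouds: {sensor_id: (N_i, 4) array of x, y, z, intensity};
    transforms: {sensor_id: 4x4 sensor-to-theater registration transform}."""
    fused = []
    for sensor_id, cloud in clouds.items():
        homogeneous = np.c_[cloud[:, :3], np.ones(len(cloud))]
        xyz = (transforms[sensor_id] @ homogeneous.T).T[:, :3]
        fused.append(np.c_[xyz, cloud[:, 3]])    # keep the intensity column
    return np.vstack(fused)
```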
- In some examples, a tool-off event is detected.
- recording of at least some of the multimodal data can be enabled or triggered to analyze medical staff activities in connection with instrument cleaning, re-loading, and troubleshooting, and to determine potential intraoperative disruptions and root causes of the same.
- For example, recording of one or more of the video data 3102 or the depth data 3110 can be enabled.
- In some examples, a table (e.g., a surgical table supporting a patient undergoing a medical operation performed using a robotic medical system) or a robotic system is moved within the medical environment.
- the surgical table’s orientation and/or pose may be changed and one or more manipulators of the robotic medical system may change their configuration (e.g., pose, orientation, etc.) so as to remain docked to the patient during the table’s motion.
- the table-moving-start event can be detected using a sensor located within the medical environment or based on user input from a UI or console.
- In some examples, disabling the recording of a data stream of the multimodal data includes erasing previously recorded data of the data stream.
- a certain amount of data is needed to perform certain processes in determining whether to disable recording of the data stream, including segmenting the medical procedure into phases and tasks and determining metrics for those phases and tasks.
- the decision to disable the recording of the data stream based on determined phases, tasks, and metrics can be made during or even after a medical procedure, phase, or task for which data recording is to be disabled.
- the data collection and analysis system 3100 can retroactively remove, erase, delete, or destroy the data stream.
- the data stream can be identified according to the timestamps that define a phase or a task.
- the data collection and analysis system 3100 can send the disable command 3140 which includes an erase command that identifies the data to be erased.
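- As a hedged illustration of the retroactive disable-and-erase behavior described above, the sketch below removes an already recorded span identified by phase or task timestamps; the in-memory recording representation is a placeholder for the system's actual storage.

```python
def erase_span(recording, start_s, end_s):
    """recording: list of (timestamp_s, frame) pairs; returns the recording
    with the [start_s, end_s) span removed, as an erase command might direct."""
    return [(t, f) for (t, f) in recording if not (start_s <= t < end_s)]

recording = [(0.0, "f0"), (1.0, "f1"), (2.0, "f2"), (3.0, "f3")]
print(erase_span(recording, 1.0, 3.0))  # [(0.0, 'f0'), (3.0, 'f3')]
```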
Landscapes
- Health & Medical Sciences (AREA)
- Engineering & Computer Science (AREA)
- Public Health (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Primary Health Care (AREA)
- Epidemiology (AREA)
- Surgery (AREA)
- Life Sciences & Earth Sciences (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Robotics (AREA)
- Biomedical Technology (AREA)
- Animal Behavior & Ethology (AREA)
- Veterinary Medicine (AREA)
- Molecular Biology (AREA)
- Heart & Thoracic Surgery (AREA)
- Radiology & Medical Imaging (AREA)
- Endoscopes (AREA)
Abstract
The arrangements disclosed herein relate to receiving a plurality of streams of multimodal data comprising robotic system data of a robotic medical system, instrument data of an instrument, video data of a medical environment, and depth data of the medical environment, determining at least one phase and at least one task in each of the at least one phase using the plurality of streams of the multimodal data, determining a trigger event based at least in part on the at least one phase and the at least one task, and enabling or disabling recording of at least one stream of the plurality of streams of the multimodal data in response to the trigger event.
Description
INTELLIGENT DATA COLLECTION FOR MEDICAL ENVIRONMENTS
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application claims the benefit of, and priority to, U.S. Provisional Patent Application No. 63/616,232, filed December 29, 2023, the full disclosure of which is incorporated herein in its entirety.
TECHNICAL FIELD
[0002] Various of the disclosed embodiments relate to systems, apparatuses, methods, and non-transitory computer-readable media for providing intelligent data collection for medical environments.
BACKGROUND
[0003] The significance of recording data collected in medical environments for medical procedures for the purposes of training, education, development, and workflow improvement is tremendous. With that said, continuous recording of such data, including processing, transferring, saving, and storing the data, can be exceptionally costly.
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] Various of the embodiments introduced herein may be better understood by referring to the following Detailed Description in conjunction with the accompanying drawings, in which like reference numerals indicate identical or functionally similar elements:
[0005] FIG. 1A is a schematic view of various elements appearing in a surgical theater during a surgical operation, as may occur in relation to some embodiments;
[0006] FIG. 1B is a schematic view of various elements appearing in a surgical theater during a surgical operation employing a robotic surgical system, as may occur in relation to some embodiments;
[0007] FIG. 2A is a schematic depth map rendering from an example theater-wide sensor perspective, as may be used in some embodiments;
[0008] FIG. 2B is a schematic top-down view of objects in the theater of FIG. 2A, with corresponding sensor locations;
[0009] FIG. 2C is a pair of images depicting a grid-like pattern of orthogonal rows and columns in perspective, as captured from a theater-wide visual image sensor having a rectilinear view and a theater-wide visual image sensor having a fisheye view, each of which may be used in connection with some embodiments;
[0010] FIG. 3 is a schematic representation of a series of surgical procedures within a surgical theater, their intervening nonoperative periods, and corresponding theater-wide sensor datasets for one such nonoperative period, as may occur in connection with some embodiments;
[0011] FIG. 4 is a schematic block diagram illustrating an example deployment topology for a nonoperative periods analysis system, as may be implemented in some embodiments;
[0012] FIG. 5A is a schematic representation of a collection of metrics intervals, as may be used to assess nonoperative team performance in some embodiments;
[0013] FIG. 5B is a schematic processing diagram indicating full-day relations of various intervals, including intervals from FIG. 5A, as may be applied in some embodiments;
[0014] FIG. 5C is a schematic block diagram indicating possible activity analysis class groupings, as may be used in connection with some embodiments;
[0015] FIG. 6 is a table of example task action temporal definitions, as may be used in some embodiments;
[0016] FIG. 7 is a table of additional example task action temporal definitions, as may be used in some embodiments;
[0017] FIG. 8 is a schematic block diagram illustrating various metrics and their relation in constructing a composite score (referred to as an OR analysis “ORA” score), as may be used in some embodiments;
[0018] FIG. 9A is a schematic block diagram depicting a general nonoperative analysis system processing flow, as may be implemented in some embodiments;
[0019] FIG. 9B is a schematic block diagram depicting elements in a more detailed example nonoperative analysis system processing flow than the flow depicted in FIG. 9A, as may be implemented in some embodiments;
[0020] FIG. 9C is a flow diagram illustrating various operations in an example overall process for analyzing theater-wide sensor data during nonoperative periods, as may be implemented in some embodiments;
[0021] FIG. 10 is a flow diagram illustrating various operations in an example nonoperative segment detection process, as may be performed in some embodiments;
[0022] FIG. 11A is a schematic block diagram illustrating an example information processing flow for performing object detection, as may be used in connection with some embodiments;
[0023] FIG. 11B is a flow diagram illustrating various operations in an example process for performing object detection, as may be used in some embodiments;
[0024] FIG. 12A is a schematic block diagram illustrating an example object tracking information processing flow, as may be used in connection with some embodiments;
[0025] FIG. 12B is a flow diagram illustrating various operations in an example process for performing object tracking, as may be used in connection with some embodiments;
[0026] FIG. 13A is a schematic visual image and depth frame theater-wide data pair, from theater-wide data video, with an indication of the optical-flow derived correspondence, as may be used in some embodiments;
[0027] FIG. 13B is a schematic top-down view of the scene depicted in FIG. 13A;
[0028] FIG. 13C is a schematic pair of visual images showing team member motion distant from and near to an imaging sensor;
[0029] FIG. 13D is a schematic top-down view depicting the three-dimensional team member motion presented in the visual images of FIG. 13C;
[0030] FIG. 14 is a flow diagram illustrating various operations in an example process for performing motion analysis of nonoperative periods from theater-wide data, as may be used in connection with some embodiments;
[0031] FIG. 15 is a flow diagram illustrating various operations in an example process for performing clustering and outlier determination analysis based upon metric values, such as those disclosed herein, as may be performed in some embodiments;
[0032] FIG. 16 is a flow diagram illustrating various operations in an example process for providing coaching feedback based upon determined metric values, as may be performed in some embodiments;
[0033] FIG. 17 is a schematic representation of GUI elements in an example dashboard interface layout for nonoperative metrics quick review, as may be implemented in some embodiments;
[0034] FIG. 18A is a schematic representation of a GUI element in an example global nonoperative metrics quick review dashboard, as may be implemented in some embodiments;
[0035] FIG. 18B is a schematic representation of arrow graphical elements, as may be used in, e.g., the element of FIG. 18A in some embodiments;
[0036] FIG. 18C is a schematic representation of an example global nonoperative metrics quick review dashboard layout, as may be implemented in some embodiments;
[0037] FIG. 19A is a plot of example interval metric values acquired in connection with an example prototype implementation of an embodiment;
[0038] FIG. 19B is a plot of example interval metric values as acquired in connection with an example prototype implementation of an embodiment;
[0039] FIG. 20A is a plot of example interval metric values as acquired in connection with an example prototype implementation of an embodiment;
[0040] FIG. 20B is a plot of example interval metric values as acquired in connection with an example prototype implementation of an embodiment;
[0041] FIG. 21A is a plot of example metric values as acquired in connection with an example prototype implementation of an embodiment;
[0042] FIG. 21B is a plot of example metric values as acquired in connection with an example prototype implementation of an embodiment;
[0043] FIG. 21C is a plot of example metric values as acquired in connection with an example prototype implementation of an embodiment;
[0044] FIG. 22 is a schematic representation of example GUI elements for providing metrics-derived feedback, as may be used in some embodiments;
[0045] FIG. 23 is an example schematic data processing overview diagram corresponding to aspects of FIG. 4, as may be used in connection with some embodiments;
[0046] FIG. 24 is a screenshot of a feedback interface corresponding to aspects of FIG. 22, as may be used in connection with some embodiments;
[0047] FIG. 25 is a screenshot of a feedback interface corresponding to aspects of FIG. 22, as may be used in connection with some embodiments;
[0048] FIG. 26 is a screenshot of a feedback interface corresponding to aspects of FIG. 17, as may be used in connection with some embodiments;
[0049] FIG. 27 is a collection of color image plots for example metric values corresponding to aspects of FIGs. 21A-C, as acquired in connection with an example prototype implementation of an embodiment;
[0050] FIG. 28 is a collection of color plots corresponding to aspects of the plots of FIGs. 19A-B and 20A-B;
[0051] FIG. 29A is a collection of photographs of theater-wide sensor visual images captured in a surgical theater during various tasks;
[0052] FIG. 29B is a visual image and a depth frame each captured with a theater-wide sensor and related photographs of an example theater-wide sensor data capture platform, as may be used in some embodiments;
[0053] FIG. 30 is a block diagram of an example computer system as may be used in conjunction with some of the embodiments;
[0054] FIG. 31 is a schematic block diagram illustrating an example system for providing smart data collection in a medical environment for a medical procedure, according to some embodiments;
[0055] FIG. 32 is a diagram illustrating example robotic systems, a data collection and analysis system 3100, a sensing system, and a storage system, according to various arrangements;
[0056] FIG. 33 is an example timeline including robotic system data (e.g., system events) over time, according to some embodiments; and
[0057] FIG. 34 is a flowchart diagram illustrating an example method for providing smart data collection in a medical environment for a medical procedure, according to some embodiments.
[0058] The specific examples depicted in the drawings have been selected to facilitate understanding. Consequently, the disclosed embodiments should not be restricted to the specific details in the drawings or the corresponding disclosure. For example, the drawings may not be drawn to scale, the dimensions of some elements in the figures may have been adjusted to facilitate understanding, and the operations of the embodiments associated with the flow diagrams may encompass additional, alternative, or fewer operations than those depicted here. Thus, some components and/or operations may be separated into different blocks or combined into a single block in a manner other than as depicted. The embodiments are intended to cover all modifications, equivalents, and alternatives falling within the scope of the disclosed examples, rather than limit the embodiments to the particular examples described or depicted.
DETAILED DESCRIPTION
[0059] Accordingly, there exists a need for systems and methods to overcome challenges and difficulties such as those described above. For example, there exists a need for systems and methods to process disparate forms of surgical theater data acquired during nonoperative periods so as to facilitate reviewer analysis and feedback generation based upon team member inefficiencies identified therein.
Example Surgical Theaters Overview
[0060] FIG. 1A is a schematic view of various elements appearing in a surgical theater 100a during a surgical operation as may occur in relation to some embodiments. Particularly, FIG. 1A depicts a non-robotic surgical theater 100a, wherein a patient-side surgeon 105a performs an operation upon a patient 120 with the assistance of one or more assisting members 105b, who may themselves be surgeons, physician’s assistants, nurses, technicians, etc. The surgeon 105a may perform the operation using a variety of tools, e.g., a visualization tool 110b such as a laparoscopic ultrasound, visual image acquiring endoscope, etc., and a mechanical instrument 110a such as scissors, retractors, a dissector, etc.
[0061] The visualization tool 110b provides the surgeon 105a with an interior view of the patient 120, e.g., by displaying visualization output from an imaging device mechanically and electrically coupled with the visualization tool 110b. The surgeon may view the visualization output, e.g., through an eyepiece coupled with visualization tool 110b or upon a display 125 configured to receive the visualization output. For example, where the visualization tool 110b is a visual image acquiring endoscope, the visualization output may be a color or grayscale image. Display 125 may allow assisting member 105b to monitor surgeon 105a’s progress during the surgery. The visualization output from visualization tool 110b may be recorded and stored for future review, e.g., using hardware or software on the visualization tool 110b itself, capturing the visualization output in parallel as it is provided to display 125, or capturing the output from display 125 once it appears on-screen, etc. While two-dimensional video capture with visualization tool 110b may be discussed extensively herein, as when visualization tool 110b is a visual image endoscope, one will appreciate that, in some embodiments, visualization
tool 110b may capture depth data instead of, or in addition to, two-dimensional image data (e.g., with a laser rangefinder, stereoscopy, etc.).
[0062] A single surgery may include the performance of several groups (e.g., phases or stages) of actions, each group of actions forming a discrete unit referred to herein as a task. For example, locating a tumor may constitute a first task, excising the tumor a second task, and closing the surgery site a third task. Each task may include multiple actions, e.g., a tumor excision task may require several cutting actions and several cauterization actions. While some surgeries require that tasks assume a specific order (e.g., excision occurs before closure), the order and presence of some tasks in some surgeries may be allowed to vary (e.g., the elimination of a precautionary task or a reordering of excision tasks where the order has no effect). Transitioning between tasks may require the surgeon 105a to remove tools from the patient, replace tools with different tools, or introduce new tools. Some tasks may require that the visualization tool 110b be removed and repositioned relative to its position in a previous task. While some assisting members 105b may assist with surgery-related tasks, such as administering anesthesia 115 to the patient 120, assisting members 105b may also assist with these task transitions, e.g., anticipating the need for a new tool 110c.
[0063] Advances in technology have enabled procedures such as that depicted in FIG. 1A to also be performed with robotic systems, as well as the performance of procedures unable to be performed in non-robotic surgical theater 100a. Specifically, FIG. 1B is a schematic view of various elements appearing in a surgical theater 100b during a surgical operation employing a robotic surgical system, such as a da Vinci™ surgical system, as may occur in relation to some embodiments. Here, patient side cart 130 having tools 140a, 140b, 140c, and 140d attached to each of a plurality of arms 135a, 135b, 135c, and 135d, respectively, may take the position of patient-side surgeon 105a. As before, one or more of tools 140a, 140b, 140c, and 140d may include a visualization tool (here visualization tool 140d), such as a visual image endoscope, laparoscopic ultrasound, etc. An operator 105c, who may be a surgeon, may view the output of visualization tool 140d through a display 160a upon a surgeon console 155. By manipulating a hand-held input mechanism 160b and pedals 160c, the operator 105c may remotely communicate with tools 140a-d on patient side cart 130 so as to perform the surgical procedure on patient 120. Indeed, the operator 105c may or may not be in the same physical location as
patient side cart 130 and patient 120 since the communication between surgeon console 155 and patient side cart 130 may occur across a telecommunication network in some embodiments. An electronics/control console 145 may also include a display 150 depicting patient vitals and/or the output of visualization tool 140d.
[0064] Similar to the task transitions of non-robotic surgical theater 100a, the surgical operation of theater 100b may require that tools 140a-d, including the visualization tool 140d, be removed or replaced for various tasks as well as new tools, e.g., new tool 165, be introduced. As before, one or more assisting members 105d may now anticipate such changes, working with operator 105c to make any necessary adjustments as the surgery progresses.
[0065] Also similar to the non-robotic surgical theater 100a, the output from the visualization tool 140d may here be recorded, e.g., at patient side cart 130, surgeon console 155, from display 150, etc. While some tools 110a, 110b, 110c in non-robotic surgical theater 100a may record additional data, such as temperature, motion, conductivity, energy levels, etc., the presence of surgeon console 155 and patient side cart 130 in theater 100b may facilitate the recordation of considerably more data than the output of the visualization tool 140d alone. For example, operator 105c’s manipulation of hand-held input mechanism 160b, activation of pedals 160c, eye movement with respect to display 160a, etc., may all be recorded. Similarly, patient side cart 130 may record tool activations (e.g., the application of radiative energy, closing of scissors, etc.), movement of instruments, etc., throughout the surgery. In some embodiments, the data may have been recorded using an in-theater recording device, which may capture and store sensor data locally or at a networked location (e.g., software, firmware, or hardware configured to record surgeon kinematics data, console kinematics data, instrument kinematics data, system events data, patient state data, etc., during the surgery).
[0066] Within each of theaters 100a, 100b, or in network communication with the theaters from an external location, may be computer systems 190a and 190b, respectively (in some embodiments, computer system 190b may be integrated with the robotic surgical system, rather than serving as a standalone workstation). As will be discussed in greater detail herein, the computer systems 190a and 190b may facilitate, e.g., data collection, data processing, etc.
[0067] Similarly, many of theaters 100a, 100b may include sensors placed around the theater, such as sensors 170a and 170c, respectively, configured to record activity within the surgical theater from the perspectives of their respective fields of view 170b and 170d. Sensors 170a and 170c may be, e.g., visual image sensors (e.g., color or grayscale image sensors), depth-acquiring sensors (e.g., via stereoscopically acquired visual image pairs, via time-of-flight with a laser rangefinder, structured light, etc.), or a multimodal sensor including a combination of a visual image sensor and a depth-acquiring sensor (e.g., a red-green-blue-depth (RGB-D) sensor). In some embodiments, sensors 170a and 170c may also include audio acquisition sensors, or sensors specifically dedicated to audio acquisition may be placed around the theater. A plurality of such sensors may be placed within theaters 100a, 100b, possibly with overlapping fields of view and sensing range, to achieve a more holistic assessment of the surgery. For example, depth-acquiring sensors may be strategically placed around the theater so that their resulting depth frames at each moment may be consolidated into a single three-dimensional virtual element model depicting objects in the surgical theater. Examples of a three-dimensional virtual element model include a three-dimensional point cloud (also referred to as three-dimensional point cloud data). Similarly, sensors may be strategically placed in the theater to focus upon regions of interest. For example, sensors may be attached to display 125, display 150, or patient side cart 130 with fields of view focusing upon the patient 120’s surgical site, attached to the walls or ceiling, etc. Similarly, sensors may be placed upon console 155 to monitor the operator 105c. Sensors may likewise be placed upon movable platforms specifically designed to facilitate orienting of the sensors in various poses within the theater.
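By way of non-limiting illustration, the following Python sketch shows one way in which depth frames from several theater-wide sensors might be back-projected and merged into a single three-dimensional point cloud. It assumes a pinhole depth-camera model with known intrinsic parameters (fx, fy, cx, cy) and known sensor-to-theater extrinsic poses; the function and variable names are hypothetical and do not correspond to any particular embodiment.

import numpy as np

def depth_frame_to_points(depth, fx, fy, cx, cy):
    # Back-project an (H, W) depth frame (in meters) into an N x 3 point set
    # using a pinhole model with focal lengths fx, fy and principal point cx, cy.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # discard pixels with no depth return

def consolidate_depth_sensors(frames, extrinsics):
    # `frames` maps a sensor id to (depth_frame, (fx, fy, cx, cy)); `extrinsics`
    # maps the same id to a 4x4 pose of that sensor in a common theater frame.
    clouds = []
    for sensor_id, (depth, intrinsics) in frames.items():
        pts = depth_frame_to_points(depth, *intrinsics)
        pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])
        clouds.append((pts_h @ extrinsics[sensor_id].T)[:, :3])
    return np.vstack(clouds)  # single three-dimensional point cloud of the theater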
[0068] As used herein, a “pose” refers to a position or location and an orientation of a body. For example, a pose refers to the translational position and rotational orientation of a body. For example, in a three-dimensional space, one may represent a pose with six total degrees of freedom. One will readily appreciate that poses may be represented using a variety of data structures, e.g., with matrices, with quaternions, with vectors, with combinations thereof, etc. Thus, in some situations, when there is no rotation, a pose may include only a translational component. Conversely, when there is no translation, a pose may include only a rotational component.
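For illustration only, the following sketch represents such a pose as a translation vector together with a unit quaternion and composes the two into a 4x4 homogeneous transform; the use of SciPy and the particular helper name are assumptions made for the example rather than features of any embodiment.

import numpy as np
from scipy.spatial.transform import Rotation

def pose_to_matrix(translation, quaternion_xyzw):
    # Compose a 4x4 homogeneous transform from a 3-vector translation and a unit
    # quaternion in (x, y, z, w) order; either component alone also defines a pose.
    T = np.eye(4)
    T[:3, :3] = Rotation.from_quat(quaternion_xyzw).as_matrix()
    T[:3, 3] = translation
    return T

# Example: a sensor mounted 2.5 m above the floor, rotated 90 degrees about the
# vertical axis (the values are purely illustrative).
sensor_pose = pose_to_matrix(
    [0.0, 0.0, 2.5],
    Rotation.from_euler("z", 90, degrees=True).as_quat())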
[0069] Similarly, for clarity, “theater-wide” sensor data refers herein to data acquired from one or more sensors configured to monitor a specific region of the theater (the region encompassing all, or a portion, of the theater) exterior to the patient, to personnel, to equipment, or to any other objects in the theater, such that the sensor can perceive the presence within, or passage through, at least a portion of the region of the patient, personnel, equipment, or other objects, throughout the surgery. Sensors so configured to collect such “theater-wide” data are referred to herein as “theater-wide sensors.” For clarity, one will appreciate that the specific region need not be rigidly fixed throughout the procedure, as, e.g., some sensors may cyclically pan their field of view so as to augment the size of the specific region, even though this may result in temporal lacunae for portions of the region in the sensor’s data (lacunae which may be remedied by the coordinated panning of fields of view of other nearby sensors). Similarly, in some cases, personnel or robotics systems may be able to relocate theater-wide sensors, changing the specific region, throughout the procedure, e.g., to better capture different tasks. Accordingly, sensors 170a and 170c are theater-wide sensors configured to produce theater-wide data. “Visualization data” refers herein to visual image or depth image data captured from a sensor. Thus, visualization data may or may not be theater-wide data. For example, visualization data captured at sensors 170a and 170c is theater-wide data, whereas visualization data captured via visualization tool 140d would not be theater-wide data (for at least the reason that the data is not exterior to the patient).
Example Theater-Wide Sensor Topologies
[0070] For further clarity regarding theater-wide sensor deployment, FIG. 2A is a schematic depth map rendering from an example theater-wide sensor perspective 205 as may be used in some embodiments. Specifically, this example depicts depth values corresponding to an electronics/control console 205a (e.g., the electronics/control console 145) and a nearby tray 205b, and cabinet 205c. Also within the field of view are depth values associated with a first technician 205d, presently adjusting a robotic arm (associated with depth values 205f) upon a robotic surgical system (associated with depth values 205e). Team members, with corresponding depth values 205g, 205h, and 205i, likewise appear in the field of view, as does a portion of the surgical table 205j. Depth values 205l corresponding to a movable dolly and a boom with a lighting system’s depth values 205k also appear within the field of view.
[0071] The theater-wide sensor capturing the perspective 205 may be only one of several sensors placed throughout the theater. For example, FIG. 2B is a schematic top-down view of objects in the theater at a given moment during the surgical operation. Specifically, the perspective 205 may have been captured via a theater-wide sensor 220a with corresponding field of view 225a. Thus, for clarity, cabinet depth values 205c may correspond to cabinet 210c, electronics/control console depth values 205a may correspond to electronics/control console 210a, and tray depth values 205b may correspond to tray 210b. Robotic system 210e may correspond to depth values 205e, and each of the individual team members 210d, 210g, 210h, and 210i may correspond to depth values 205d, 205g, 205h, and 205i, respectively. Similarly, dolly 210l may correspond to depth values 205l. Depth values 205j may correspond to table 210j (with an outline of a patient shown here for clarity, though the patient has not yet been placed upon the table corresponding to depth values 205j in the example perspective 205). A top-down representation of the boom corresponding to depth values 205k is not shown for clarity, though one will appreciate that the boom may likewise be considered in various embodiments.
[0072] As indicated, each of the sensors 220a, 220b, 220c is associated with different fields of view 225a, 225b, and 225c, respectively. The fields of view 225a-c may sometimes have complementary characters, providing different perspectives of the same object, or providing a view of an object from one perspective when it is outside, or occluded within, another perspective. Complementarity between the perspectives may be dynamic both spatially and temporally. Such dynamic character may result from movement of an object being tracked, but also from movement of intervening occluding objects (and, in some cases, movement of the sensors themselves). For example, at the moment depicted in FIGs. 2A and 2B, the field of view 225a has only a limited view of the table 210j, as the electronics/control console 210a substantially occludes that portion of the field of view 225a. Consequently, in the depicted moment, the field of view 225b is better able to view the surgical table 210j. However, neither field of view 225b nor 225a has an adequate view of the operator 210n in console 210k. To observe the operator 210n (e.g., when they remove their head in accordance with “head out” events), field of view 225c may be more suitable. However, over the course of the data capture, these complementary relationships may change. For example, before the procedure begins, electronics/control console 210a may be removed and the robotic system 210e moved into the
position 210m. In this configuration, field of view 225a may instead be much better suited for viewing the patient table 210j than the field of view 225b. As another example, movement of the console 210k to the presently depicted pose of electronics/control console 210a may render field of view 225a more suitable for viewing operator 210n than field of view 225c. Suitability of a field of view may thus depend upon the number and duration of occlusions, quality of the field of view (e.g., how close the object of interest is to the sensor), and movement of the object of interest within the theater. Such changes may be transitory and short in duration, as when a team member moving in the theater briefly occludes a sensor, or they may be chronic or sustained, as when equipment is moved into a fixed position throughout the duration of the procedure.
[0073] As mentioned, the theater-wide sensors may take a variety of forms and may, e.g., be configured to acquire visual image data, depth data, both visual and depth data, etc. One will appreciate that visual and depth image captures may likewise take on a variety of forms, e.g., to afford increased visibility of different portions of the theater. For example, FIG. 2C is a pair of images 250b, 255b depicting a grid-like pattern of orthogonal rows and columns in perspective, as captured from a theater-wide sensor having a rectilinear view and a theater-wide sensor having a fisheye view, respectively. More specifically, some theater-wide sensors may capture rectilinear visual images or rectilinear depth frames, e.g., via appropriate lenses, post-processing, combinations of lenses and post-processing, etc., while other theater-wide sensors may instead, e.g., acquire fisheye or distorted visual images or rectilinear depth frames, via appropriate lenses, post-processing, combinations of lenses and post-processing, etc. For clarity, image 250b depicts a checkerboard pattern in perspective from a rectilinear theater-wide sensor. Accordingly, the orthogonal rows and columns 250a, shown here in perspective, retain linear relations with their vanishing points. In contrast, image 255b depicts the same checkerboard pattern in the same perspective, but from a fisheye theater-wide sensor perspective. Accordingly, the orthogonal rows and columns 255a, while in reality retaining a linear relationship with their vanishing points (as they appear in image 250b), appear here from the sensor data as having curved relations with their vanishing points. Thus, each type of sensor, and other sensor types, may be used alone, or in some instances, in combination, in connection with various embodiments.
[0074] Similarly, one will appreciate that not all sensors may acquire perfectly rectilinear, fisheye, or other desired mappings. Accordingly, checkered patterns, or other calibration fiducials (such as known shapes for depth systems), may facilitate determination of a given theater-wide sensor’s intrinsic parameters. For example, the focal point of the fisheye lens, and other details of the theater-wide sensor (principal points, distortion coefficients, etc.), may vary between devices and even across the same device over time. Thus, it may be necessary to recalibrate various processing methods for the particular device at issue, anticipating the device variation when training and configuring a system for machine learning tasks. Additionally, one will appreciate that the rectilinear view may be achieved by undistorting the fisheye view once the intrinsic parameters of the camera are known (which may be useful, e.g., to normalize disparate sensor systems to a similar form recognized by a machine learning architecture). Thus, while a fisheye view may allow the system and users to more readily perceive a wider field of view than in the case of the rectilinear perspective, when a processing system is considering data from some sensors acquiring undistorted perspectives and other sensors acquiring distorted perspectives, the differing perspectives may be normalized to a common perspective form (e.g., mapping all the rectilinear data to a fisheye representation or vice versa).
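As a non-limiting illustration of such normalization, the sketch below remaps a fisheye theater-wide image to a rectilinear view using OpenCV's fisheye camera model, assuming the camera matrix K and distortion coefficients D were previously estimated from checkerboard captures (e.g., with cv2.fisheye.calibrate); the helper name and the default balance parameter are illustrative choices only.

import cv2
import numpy as np

def undistort_fisheye(image, K, D, balance=0.0):
    # Remap a fisheye theater-wide image to a rectilinear view, given the camera
    # matrix K and fisheye distortion coefficients D estimated during calibration.
    h, w = image.shape[:2]
    new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
        K, D, (w, h), np.eye(3), balance=balance)
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
    return cv2.remap(image, map1, map2, interpolation=cv2.INTER_LINEAR)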
Example Surgical Theater Nonoperative Data
[0075] As discussed above, granular and meaningful assessment of team member actions and performance during nonoperative periods in a theater may reveal opportunities to improve efficiency and to avoid inefficient behavior having the potential to affect downstream operative and nonoperative periods. For context, FIG. 3 depicts a state of a single operating room over time 305, e.g., over the course of a day. In this example, during an initial pre-surgical period 310a, the team may prepare the operating room for the day’s procedures, collecting appropriate equipment, reviewing scheduled tasks, etc. After performing the day’s first surgery 315a, a nonoperative inter-operative period 310b will follow wherein the team performs the turnover from the operating room configuration for performing the surgery 315a to the configuration for performing the surgery 315b. Such alternating nonoperative and operative periods may continue throughout the day, e.g., nonoperative inter-surgical period 310c here follows the second surgery 315b, etc. After the final procedure 315c is performed for the day, the team
may perform any final maintenance operations, may secure and put away equipment, deactivate devices, upload data, etc., during the post-operative period 310d. Ellipsis 310e indicates the possibility of additional intervening operative and nonoperative states (though, naturally, in some theaters there may instead be only one surgery during the day). Because of the theater operations’ sequential character, an error in an upstream period can cause errors and delays to cascade through downstream periods. For example, improper alignment of equipment during pre-surgical period 310a may result in a delay during surgery 315a. This delay may itself require nonoperative period 310b to be shortened, providing a team member insufficient time to perform proper cleaning procedures, thereby placing the health of the patient of surgery 315b at risk. Thus, inefficiencies early in the day may result in the delay, poor execution, or rescheduling of downstream actions. Conversely, efficiencies early in the day may provide tolerance downstream for unexpected events, facilitating more predictable operation outcomes and other benefits.
[0076] Each of the theater states, including both the operative periods 315a, 315b, etc. and nonoperative periods 310a, 310b, 310c, 310d, etc. may be divided into a collection of tasks. For example, the nonoperative period 310c may be divided into the tasks 320a, 320b, 320c, 320d, and 320e (with intervening tasks represented by ellipsis 320f). In this example, at least three theater-wide sensors were present in the OR, each sensor capturing at least visual image data (though one will appreciate that there may be fewer than three streams, or more, as indicated by ellipses 370q). Specifically, a first theater-wide sensor captured a collection of visual images 325a (e.g., visual image video) during the first nonoperative task 320a, a collection of visual images 325b during the second nonoperative task 320b, a collection of visual images 325c during the third nonoperative task 320c, a collection of visual images 325d during the fourth nonoperative task 320d, and the collection of visual images 325e during the last nonoperative task 320e (again, intervening groups of frames may have been acquired for other tasks as indicated by ellipsis 325f).
[0077] Contemporaneously during each of the tasks of the second nonoperative period 310c, the second theater-wide sensor may acquire the data collections 330a-e (ellipsis 330f depicting possible intervening collections), and the third theater-wide sensor may acquire the collections of 335a-e (ellipsis 335f depicting possible intervening collections). Thus, one will appreciate,
e.g., that the data in sets 325a, 330a, and 335a may be acquired contemporaneously by the three theater-wide sensors during the task 320a (and, similarly, each of the other columns of collected data associated with each respective nonoperative task). Again, though visual images are shown in this example, one will appreciate that other data, such as depth frames, may alternatively, or additionally, be likewise acquired in each collection.
[0078] Thus, in task 320a, which may be an initial “cleaning” task following the surgery 315b, the sensor associated with collections 325a-e depicts a team member and the patient in a first perspective. In contrast, the sensor capturing collections 335a-e is located on the opposite side of the theater and provides a fisheye view from a different perspective. Consequently, the second sensor’s perception of the patient is more limited. The sensor associated with collections 330a-e is focused upon the patient; however, this sensor’s perspective does not depict the team member well in the collection 330a, whereas the collection 325a does provide a clear view of the team member.
[0079] Similarly, in task 320b, which may be a “roll-back” task, moving the robotic system away from the patient, the theater-wide sensor associated with collections 330a-e depicts that the patient is no longer subject to anesthesia, but does not depict the state of the team member relocating the robotic system. Rather, the collections 325b and 335b each depict the team member and the new pose of the robotic system at a point distant from the patient and operating table (though the sensor associated with the stream collections 335a-e is better positioned to observe the robot in its post-rollback pose).
[0080] In task 320c, which may be a “turnover” or “patient out” task, a team member escorts the patient out of the operating room. While the theater-wide sensor associated with collection 325c has a clear view of the departing patient, the theater-wide sensor associated with the collection 335c may be too far away to observe the departure in detail. Similarly, the collection 330c only indicates that the patient is no longer on the operating table.
[0081] In task 320d, which may be a “setup” task, a team member positions equipment which will be used in the next operative period (e.g., the final surgery 315c if there are no intervening periods in the ellipsis 310e).
[0082] Finally, in task 320e, which may be a “sterile prep” task before the initial port placements and beginning of the next surgery (again, e.g., surgery 315c), the theater-wide sensor associated with collection 330e is able to perceive the pose of the robotic system and its arms, as well as the state of the new patient. Conversely, collections 325e and 335e may provide wider contextual information regarding the state of the theater.
[0083] Thus, one can appreciate the holistic benefit of multiple sensor perspectives, as the combined views of the streams 325a-e, 330a-e, and 335a-e may provide overlapping situational awareness. Again, as mentioned, not all of the sensors may acquire data in exactly the same manner. For example, the sensor associated with collections 335a-e may acquire data from a fisheye perspective, whereas the sensors associated with collections 325a-e and 330a-e may acquire rectilinear data. Similarly, there may be fewer or more theater-wide sensors and streams than are depicted here. Generally, because each collection is timestamped, it will be possible for a reviewing system to correlate respective streams’ representations, even when they are of disparate forms. Thus, data directed to different theater regions may be reconciled and reviewed. Unfortunately, as mentioned, unlike periods 315a-c, surgical instruments, robotic systems, etc., may no longer be capturing data during the nonoperative periods (e.g., periods 310a-d). Accordingly, systems and reviewers regularly accustomed to analyzing the copious datasets available from periods 315a-c may find it especially difficult to review the more sparse data of periods 310a-d, as they may need to rely only upon the disparate theater-wide streams 325a-e, 330a-e, and 335a-e. As the reader may have perceived in considering this figure, manually reconciling disparate but contemporaneously captured perspectives may be cognitively taxing upon a human reviewer.
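By way of illustration only, the following sketch correlates timestamped frames across disparate theater-wide streams by nearest-timestamp matching; the stream layout (sorted lists of (timestamp, frame) pairs) and the tolerance value are assumptions made for the example.

import bisect

def align_streams(reference, others, tolerance_s=0.05):
    # For each timestamped frame in `reference`, find the nearest-in-time frame
    # from each other stream (within `tolerance_s`), yielding aligned tuples.
    # Each stream is a list of (timestamp_seconds, frame) pairs sorted by time.
    other_times = [[t for t, _ in stream] for stream in others]
    for t_ref, frame_ref in reference:
        row = [frame_ref]
        for stream, times in zip(others, other_times):
            i = bisect.bisect_left(times, t_ref)
            candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
            if not candidates:
                row.append(None)
                continue
            j = min(candidates, key=lambda k: abs(times[k] - t_ref))
            row.append(stream[j][1] if abs(times[j] - t_ref) <= tolerance_s else None)
        yield t_ref, row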
Example Nonoperative Activity Data Processing Overview
[0084] Various embodiments employ a processing pipeline facilitating analysis of nonoperative periods, and may include methods to facilitate iterative improvement of the surgical team’s performance during these periods. Particularly, some embodiments include computer systems configured to automatically measure and analyze nonoperative activities in surgical operating rooms and recommend customized actionable feedback to operating room staff or hospital management based upon historical dataset patterns so as, e.g., to improve workflow efficiency. Such systems can also help hospital management assess the impact of
new personnel, equipment, facilities, etc., as well as scale their review to a larger number, and more disparate types, of surgical theaters and surgeries, consequently driving down workflow variability. As discussed, various embodiments may be applied to surgical theaters having more than one modality, e.g., robotic, non-robotic laparoscopic, non-robotic open. Neither are various of the disclosed approaches limited to nonoperative periods associated with specific types of surgical procedures (e.g., prostatectomy, cholecystectomy, etc.).
[0085] FIG. 4 is a schematic block diagram illustrating an example deployment topology 450 for a nonoperative periods analysis system of certain embodiments. As described herein, during realtime acquisition 450a, data may be collected from one or more theater-wide sensors in one or more perspectives. Multimodal (e.g., visual image and depth) sensor suites within a surgical theater (whether robotic or non-robotic) produce a wide variety of data. Consolidating this data into elemental and composite OR metrics, as described herein, may more readily facilitate analysis. To determine these metrics, the data may be provided to a processing system 450b, described in greater detail herein, to perform automated inference 450c, including the detection of objects in the theater, such as personnel and equipment, as well as to segment the theater-wide data into distinct steps 450d (which may, e.g., correspond to the groupings and their respective actions discussed herein with respect to FIGs. 5A-C). The discretization of the theater-wide data into the steps 450d may facilitate more meaningful and granular determinations of metrics from the theater-wide data via various workflow analytics 450e, e.g., to ascertain surgical theater efficiency, to provide actionable coaching recommendations, etc.
[0086] Following the generation of such metrics during workflow analysis 450e, embodiments also disclose software and algorithms for presentation of the metric values along with other suitable information to users (e.g., consultants, students, medical staff, and so on) and for outlier detection within the metric values relative to historical patterns. As used herein, information of a plurality of medical procedures (e.g., procedure-related information or data, case-related information or data, information or data related to medical environments such as the ORs, and so on) refers to metric values and other associated information determined in the manners described herein. These analytics results may then be used to provide coaching and feedback via various applications 450f. Software applications 450f may present various
metrics and derived analysis disclosed herein in various interfaces as part of the actionable feedback, a more rigorous and comprehensive solution than the prior use of human reviewers alone. One will appreciate that such applications 450f may be provided upon any suitable computer system, including desktop applications, tablets, augmented reality devices, etc. Such computer system can be located remote from the surgical theaters 100a and 100b in some examples. In other examples, such computer system can be located within the surgical theaters 100a and 100b (e.g., within the OR or the medical facility in which the hospital or OR processes occur). In one example, a consultant can review the information of a plurality of medical procedures via the applications 450f to provide feedback. In another example, a student can review the information of a plurality of medical procedures via the applications 450f to improve learning experience and to provide feedback. This feedback may result in the adjustment of the theater operation such that subsequent application of the steps 450a-f identifies new or more subtle inefficiencies in the team’s workflow. Thus, the cycle may continue again, such that the iterative, automated OR workflow analytics facilitate gradual improvement in the team’s performance, allowing the team to adapt contextually based upon the respective adjustments. Such iterative application may also help reviewers to better track the impact of the feedback to the team, analyze the effect of changes to the theater composition and scheduling, as well as for the system to consider historical patterns in future assessments and metrics generation.
Example Nonoperative Interval Divisions
[0087] FIG. 5A is a schematic representation of a collection of metrics intervals as may be used to assess nonoperative team performance in some embodiments. One will appreciate that the intervals may be applied cyclically in accordance with the alternating character of the operative and nonoperative periods in the theater described above in FIG. 3. For example, initially, the surgical operation 315b may correspond to the interval 550e. Following the operation 315b’s completion, actions and corresponding data in the theater may be allocated to consecutive intervals 550a-d during the subsequent nonoperative period 310c. Data and actions in the next surgery (e.g., surgery 315c, if there are no intervening periods in ellipsis 310e), may then be ascribed again to a second instance of the interval 550e, and so forth (consequently, data from each of the nonoperative periods 310b, 310c will be allocated to instances of intervals
550a-d). Intervals may also be grouped into larger intervals, as is the case here with the “wheels out to wheels in” interval 550f, which groups the intervals 550b and 550c, sharing the start time of interval 550b and the end time of interval 550c. Consolidating theater-wide data into this taxonomy, in conjunction with various other operations disclosed herein, may more readily facilitate analysis in a manner amenable to larger efficiency review, as described in greater detail herein. For example, organizing data in this manner may facilitate comparisons with different days of the week over the course of the month across theaters, surgery configurations (both robotic and non-robotic), and teams, with specific emphasis upon particular ones of these intervals 550a-d appearing in the corresponding nonoperative periods. Though not part of the nonoperative period, in some embodiments, it may still be useful to determine the duration of the surgery in interval 550e, as the duration may inform the efficiency or inefficiency of the preceding or succeeding nonoperative period. Accordingly, in some embodiments, some of the disclosed metrics may consider events and actions in this interval 550e, even when seeking ultimately to assess the efficiency of a nonoperative period.
[0088] For further clarity in the reader’s understanding, FIG. 5B is a schematic block diagram indicating full-day relations of the elements from FIG. 5A. Specifically, as discussed above, instances of the intervals of FIG. 5A may be created cyclically in accordance with the alternating operative and nonoperative periods of FIG. 3. In some embodiments, when considering full day data (e.g., data including the nonoperative pre-operative period 310a, nonoperative post-operative period 310d, and all intervening periods), the system may accordingly anticipate a preliminary interval “day start to patient in” 555a to account for actions within the pre-operative period 310a. This interval may, e.g., begin when the first personnel enters the theater for the day and may end when the patient enters the theater for the first surgery. Accordingly, as shown by the arrow 555c, this may result in a transition to the first instance of the “patient in to skin cut” interval 550d. From there, as indicated by the circular relation, the data may be cyclically grouped into instances of the intervals 550a-e, e.g., in accordance with the alternating periods 315a, 310b, 315b, 310c, etc., until the period 315c.
[0089] At the conclusion of the final surgery for the day (e.g., surgery 315c), and following the last instance of the interval 550a after that surgery, then rather than continue with additional cyclical data allocations among instances of the intervals 550a-e, the system may instead
transition to a final “patient out to day end” interval 555b, as shown by the arrow 555d (which may be used to assess nonoperative post-operative period 310d). The “patient out to day end” interval 555b may end when the last team member leaves the theater or the data acquisition concludes. One will appreciate that various of the disclosed computer systems may be trained to distinguish actions in the interval 555b from the corresponding data of interval 550b (naturally, conclusion of the data stream may also be used in some embodiments to infer the presence of interval 555b). Though concluding the day’s actions, analysis of interval 555b may still be appropriate in some embodiments, as actions taken at the end of one day may affect the following day’s performance.
Example Task to Interval Assignments and Action Temporal Intervals
[0090] In some embodiments, the durations of each of intervals 550a-e may be determined based upon respective start and end times of various tasks or actions within the theater. Naturally, when the intervals 550a-e are used consecutively, the end time for a preceding interval (e.g., the end of interval 550c) may be the start time of the succeeding interval (e.g., the beginning of interval 550d). When coupled with a task action grouping ontology, theater-wide data may be readily grouped into meaningful divisions for downstream analysis. This may facilitate, e.g., consistency in verifying that team members have been adhering to proposed feedback, as well as computer-based verification of the same, across disparate theaters, team configurations, etc. As will be explained, some task actions may occur over a period of time (e.g., cleaning), while others may occur at a specific moment (e.g., entrance of a team member).
[0091] Specifically, FIG. 5C depicts four high-level task action classes or groupings of tasks, referred to for example as phases or stages: post-surgery 520, turnover 525, pre-surgery 510, and surgery 515. Surgery 515 may include the tasks or actions 515a-i. As will be discussed, FIGs. 6 and 7 provide various example temporal definitions for the actions, though for the reader’s appreciation, brief summaries will be provided here. Specifically, the task “first cut” 515a, may correspond to a time when the first incision upon the patient occurs (consider, e.g., the duration 605a). The task “port placement” 515b, may correspond to a duration between the time when a first port is placed into the patient and the time when the last port is placed (consider, e.g., the duration 605b). The task “rollup” 515c, may correspond to the duration in which a team member begins moving a robotic system to a time when the robotic system
assumes the pose it will use during at least an initial portion of the surgical procedure (consider, e.g., the duration 605c). The task “room prep” 515d, may correspond to a duration beginning with the first surgery preparation action specific to the surgery being performed and may conclude with the last preparation action specific to the surgery being performed (consider, e.g., the duration 605d). The task “docking” 515e, may correspond to a duration starting when a team member begins docking a robotic system and concludes when the robotic system is docked (consider, e.g., the duration 605e). The task “surgery” 515f, may correspond to a duration starting with the first incision and ending with the final closure of the patient (consider, e.g., the durations 705a-c for respective contemplated surgeries, specifically the robotic surgery 705a and non-robotic surgeries 705b and 705c). Naturally, in many taxonomies, these action blocks may be further broken down into considerably more action and task divisions in accordance with the analyst’s desired focus (e.g., if the action “port placement” 515b were associated with an inefficiency, a supplemental taxonomy wherein each port’s placement were a distinct action, with its own measured duration, may be appropriate for refining the analysis). Here, however, as nonoperative period actions are the subject of review, the general task “surgery” 515f (e.g., one of durations 705a-c) may suffice, despite surgery’s encompassing many constituent actions. The task “undocking” 515g, may correspond to a duration beginning when a team member starts to undock a robotic system and concludes when the robotic system is undocked (consider, e.g., the duration 705d). The task “rollback” 515h, may correspond to a duration when a team member begins moving a robotic system away from a patient and concludes when the robotic system assumes a pose it will retain until turnover begins (consider, e.g., the duration 705e). The task “patient close” 515i, may correspond to a duration (e.g., duration 705f) when the surgeon observes the patient during rollback (e.g., one will appreciate by this example that some action durations may overlap and proceed in parallel).
[0092] Within the post-surgical class grouping 520, the task “robot undraping” 520a may correspond to a duration when a team member first begins undraping a robotic system and ends when the robotic system is undraped (consider, e.g., the duration 705g). The task “patient out” 520b, may correspond to a time, or duration, during which the patient leaves the theater (consider, e.g., the duration 705h). The task “patient undraping” 520c, may correspond to a duration beginning when a team member begins undraping the patient and ends when the patient is undraped (consider, e.g., the duration 705i).
[0093] Within the turnover class grouping 525, the task “clean” 525a, may correspond to a duration starting when the first team member begins cleaning equipment in the theater and concludes when the last team member (which may be the same team member) completes the last cleaning of any equipment (consider, e.g., the duration 705j). The task “idle” 525b, may correspond to a duration that starts when team members are not performing any other task and concludes when they begin performing another task (consider, e.g., the duration 705k). The task “turnover” 505a may correspond to a duration that starts when the first team member begins resetting the theater from the last procedure and concludes when the last team member (which may be the same team member) finishes the reset (consider, e.g., the duration 615a). The task “setup” 505b may correspond to a duration that starts when the first team member begins changing the pose of equipment to be used in a surgery, and concludes when the last team member (which may be the same team member) finishes the last equipment pose adjustment (consider, e.g., the duration 615b). The task “sterile prep” 505c, may correspond to a duration that starts when the first team member begins cleaning the surgical area and concludes when the last team member (which may be the same team member) finishes cleaning the surgical area (consider, e.g., the duration 615c). Again, while shown here in linear sequences, one will appreciate that task actions within the classes may proceed in orders other than that shown or, in some instances, may refer to temporal periods which may overlap and may proceed in parallel (e.g., when performed by different team members).
[0094] Within pre-surgery class grouping 510, the task “patient in” 510a may correspond to a duration that starts and ends when the patient first enters the theater (consider, e.g., the duration 620a). The task “robot draping” 510b may correspond to a duration that starts when a team member begins draping the robotic system and concludes when draping is complete (consider, e.g., the duration 620b). The task “intubate” 510c may correspond to a duration that starts when intubation of the patient begins and concludes when intubation is complete (consider, e.g., the duration 620c). The task “patient prep” 510d may correspond to a duration that starts when a team member begins preparing the patient for surgery and concludes when preparations are complete (consider, e.g., the duration 620d). The task “patient draping” 510e may correspond to a duration that starts when a team member begins draping the patient and concludes when the patient is draped (consider, e.g., the duration 620e).
[0095] Though not discussed herein, as mentioned, one will appreciate the possibility of additional or different task actions. For example, the durations of “Imaging” 720a and “Walk In” 720b, though not part of the example taxonomy of FIG. 5C, may also be determined in some embodiments.
[0096] Thus, as indicated by the respective arrows in FIG. 5C, the intervals of FIG. 5A may be allocated as follows. “Skin-close to patient-out” 550a may begin at the last closing operation 515i of the previous surgery interval and concludes with the patient’s departure from the theater (e.g., from the end of the last suture at block 515i until the patient has departed at block 520b). Similarly, the interval “Patient-out to case-open” 550b may begin with the patient’s departure from the theater at block 520b and concludes with the start of sterile prep at block 505c for the next case.
[0097] The interval “case-open to patient-in” 550c may begin with the start of the sterile prep at block 505c and conclude with the new patient entering the theater at block 510a. The interval “patient-in to skin cut” 550d may begin when the new patient enters the theater at block 510a and concludes at the start of the first cut at block 515a. The surgery itself may occur during the interval 550e as shown.
[0098] As previously discussed, the “wheels out to wheels in” interval 550f begins with the start of “patient-out to case-open” 550b and concludes with the end of “case-open to patient-in” 550c.
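For illustration, the following sketch derives the interval durations of FIG. 5A from detected task boundary timestamps in the manner just described; the dictionary keys are placeholder task names and the data layout is an assumption made for the example rather than a feature of any embodiment.

def nonoperative_interval_durations(events):
    # `events` maps a detected task name to a (start, end) timestamp pair, in
    # seconds; the names are placeholders for the tasks of FIG. 5C.
    intervals = {
        "skin_close_to_patient_out": (events["patient_close"][1], events["patient_out"][1]),
        "patient_out_to_case_open":  (events["patient_out"][1],  events["sterile_prep"][0]),
        "case_open_to_patient_in":   (events["sterile_prep"][0], events["patient_in"][0]),
        "patient_in_to_skin_cut":    (events["patient_in"][0],   events["first_cut"][0]),
    }
    durations = {name: end - start for name, (start, end) in intervals.items()}
    # "Wheels out to wheels in" groups the middle two intervals.
    durations["wheels_out_to_wheels_in"] = (
        events["patient_in"][0] - events["patient_out"][1])
    return durations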
Example Nonoperative Metric Generation and Scoring
[0099] After the nonoperative segments have been identified (e.g., using systems and methods discussed herein with respect to FIGs. 9A-C and FIG. 10), the number and location of objects (e.g., using systems and methods discussed herein with respect to FIGs. 9A-C and FIGs. 11A-B), such as personnel, within each segment, and their respective motions have been identified (e.g., using systems and methods discussed herein with respect to FIGs. 9A-C, 12A-B, 13A-D, and 14), the system may generate one or more metric values. As mentioned, the duration and relative times of the intervals, classes, and task actions of FIGs. 5A-C may themselves serve as metrics.
[0100] Various embodiments may also determine “composite” metric scores based upon various of the other determined metrics. These metrics assume the functional form of EQN. 1:
s = f(m) (EQN. 1)
where s refers to the composite metric score value, which may be confined to a range, e.g., from 0 to 1, from 0 to 100, etc., and f(·) represents the mapping from individual metrics to the composite score. For example, m may be a vector of metrics computed using various data streams and models as disclosed herein. In such composite scores, in some embodiments, the constituent metrics may fall within one of temporal workflow, scheduling, human resource, or other groupings disclosed herein.
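As one non-limiting instantiation of the mapping of EQN. 1, the sketch below normalizes each constituent metric by a benchmark value, takes a weighted average, and confines the result to the range 0 to 100; the benchmark normalization and clipping are illustrative choices rather than requirements of any embodiment.

import numpy as np

def composite_score(m, weights, benchmarks):
    # One possible mapping f(m): normalize each metric by a benchmark value,
    # take a weighted average, and confine the result to the range [0, 100].
    m, weights, benchmarks = (np.asarray(a, dtype=float) for a in (m, weights, benchmarks))
    s = 100.0 * np.dot(weights, m / benchmarks) / weights.sum()
    return float(np.clip(s, 0.0, 100.0))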
[0101] Specifically, FIG. 8 is a schematic block diagram illustrating various metrics and their relations in constructing an “ORA score” as may be performed in some embodiments. Within the temporal grouping 805, an “efficiency” scoring metric 805a may combine the nonoperative metrics that measure temporal workflow efficiency in an OR, e.g., the duration of one or more of the six temporal interval metrics of FIG. 5A. More specifically, the nonoperative metrics, averaged, as a mean or median, over all cases collected from a team, theater, or hospital, may be compared to the top 20% teams, theaters, or hospitals (e.g., as manually indicated by reviewers or from historical patterns via iterations of topology 450) in a database as a benchmark. A “consistency” metric 805b may combine (e.g., sum or find the mean or median) the standard deviations of nonoperative metrics (e.g., the six temporal interval metrics of FIG. 5A) across all cases collected from a current team, theater, or hospital. An “adverse event” metric 805c may combine (e.g., sum) negative outliers, e.g., as detected in terms of the interval metrics of FIGs. 5A-B. Outliers may, e.g., be detected using statistical analysis algorithms (e.g., clustering, distribution analysis, regression, etc., as discussed herein with reference to FIGs. 15, 16, 19A-B, and 20A-B). Negative outliers may be identified as those for which at least one of the nonoperative interval metrics of FIGs. 5A-B is outside a threshold, such as a standard deviation, from the relevant team, theater, or hospital median or mean (e.g., based on a threshold specified by an expert reviewer or upon historical patterns from past iterations of topology 450). Examples of such outliers are discussed herein, e.g., with respect to FIGs. 19A-B and FIGs. 20A-B.
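By way of illustration, the following sketch computes a summed-standard-deviation consistency value and flags negative outlier cases lying more than a chosen number of standard deviations above the median, in the spirit of the consistency and adverse event metrics described above; the threshold and data layout are assumptions made for the example.

import numpy as np

def consistency_metric(interval_durations):
    # Sum the standard deviations of each interval metric across all cases.
    # `interval_durations` maps an interval name to a list of per-case durations.
    return sum(float(np.std(v)) for v in interval_durations.values())

def negative_outlier_cases(interval_durations, n_std=1.0):
    # Flag case indices whose duration exceeds the median by more than
    # `n_std` standard deviations for at least one interval metric.
    flagged = set()
    for durations in interval_durations.values():
        d = np.asarray(durations, dtype=float)
        threshold = np.median(d) + n_std * np.std(d)
        flagged.update(np.flatnonzero(d > threshold).tolist())
    return sorted(flagged)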
[0102] Within the scheduling grouping 810, a “case volume” scoring metric 810a includes the mean or median number of cases operated per OR, per day, for a team, theater, or hospital, normalized by the expected case volume for a typical OR (e.g., again, as designated in a historical dataset benchmark, such as a mean or median). A “first case turnovers” scoring metric 810b is the ratio of first cases in an operating day that were turned over compared to the total number of first cases captured from a team, theater, or hospital. Alternatively, a more general “case turnovers” metric is the ratio of all cases that were turned over compared to the total number of cases as performed by a team, in a theater, or in a hospital. A “delay” scoring metric 810c is a mean or median positive (behind a scheduled start time of an action) or negative (before a scheduled start time of an action) departure from a scheduled time in minutes for each case, normalized by the acceptable delay (e.g., a historical mean or median benchmark). Naturally, the negative or positive definition may be reversed (e.g., wherein starting late is instead negative and starting early is instead positive) if other contextual parameters are likewise adjusted.
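For illustration only, the sketch below computes the scheduling-group values described above (a normalized case volume, the more general case turnover ratio, and a normalized delay); the per-case field names and the use of medians are assumptions made for the example.

import numpy as np

def scheduling_metrics(cases, expected_case_volume, acceptable_delay_min):
    # Each case is assumed to be a dict with keys: 'day', 'turned_over' (bool),
    # 'scheduled_start' and 'actual_start' (minutes since midnight).
    by_day = {}
    for c in cases:
        by_day.setdefault(c["day"], []).append(c)
    case_volume = np.median([len(v) for v in by_day.values()]) / expected_case_volume
    turnover_ratio = sum(c["turned_over"] for c in cases) / len(cases)
    delays = [c["actual_start"] - c["scheduled_start"] for c in cases]
    delay = float(np.median(delays)) / acceptable_delay_min
    return {"case_volume": float(case_volume),
            "case_turnovers": turnover_ratio,
            "delay": delay}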
[0103] Within the human resource metrics grouping 815, a “headcount to complete tasks” scoring metric 815a combines the mean or median headcount (the largest number of detected personnel throughout the procedure in the OR at one time) needed to complete each of the temporal nonoperative tasks for each case, over all cases collected for the team, theater, or hospital, normalized by the recommended headcount for each task (e.g., a historical benchmark median or mean). An “OR Traffic” scoring metric 815b measures the mean amount of motion in the OR during each case, averaged (itself as a median or mean) over all cases collected for the team, theater, or hospital, normalized by the recommended amount of traffic (e.g., based upon a historical benchmark as described above). For example, this metric may receive (two or three-dimensional) optical flow and convert such raw data to a single numerical value, e.g., an entropy representation, a mean magnitude, a median magnitude, etc.
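As a non-limiting illustration, the following sketch reduces a dense optical-flow field to a single traffic value as a mean magnitude, a median magnitude, or an entropy-style summary of the magnitude histogram; two-dimensional flow, the bin count, and the field shape are assumptions made for the example.

import numpy as np

def traffic_value(flow, mode="mean"):
    # Reduce a dense optical-flow field (H x W x 2, in pixels per frame) to a
    # single traffic value: mean or median magnitude, or a histogram entropy.
    magnitude = np.linalg.norm(flow, axis=-1).ravel()
    if mode == "mean":
        return float(magnitude.mean())
    if mode == "median":
        return float(np.median(magnitude))
    counts, _ = np.histogram(magnitude, bins=32)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())  # entropy-style summary of the motion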
[0104] Within the “other” metrics grouping 820, a “room layout” scoring metric 820a includes a ratio of robotic cases with multi-part roll-ups or roll-backs, normalized by the total number of robotic cases for the team, theater, or hospital. That is, ideally, each roll up or back of the robotic system would include a single motion. When, instead, the team member moves the robotic system back and forth, such a “multi-part” roll implies an inefficiency, and so the
number of such multi-part rolls relative to all the roll up and roll back events may provide an indication of the proportion of inefficient attempts. As indicated by this example, some metrics may be unique to robotic theaters, just as some metrics may be unique to non-robotic theaters. In some embodiments, correspondences between metrics unique to each theater-type may be specified to facilitate their comparison. A “modality conversion” scoring metric 820b includes a ratio of cases that have both robotic and non-robotic modalities normalized by the total number of cases for the team, theater, or hospital. For example, this metric may count the number of conversions, e.g., transitioning from a planned robotic configuration to a non-robotic configuration, and vice versa, and then divide the total number of such cases with such a conversion by the total cases. Whether occurring in operative or nonoperative periods, such conversions may be reflective of inefficiencies in nonoperative periods (e.g., improper actions in a prior nonoperative period may have rendered the planned robotic procedure in the operative period impractical). Thus, this metric may capture inefficiencies in planning, in equipment, or in unexpected complications in the original surgical plan.
[0105] While each of the metrics 805a-c, 810a-c, 815a-c, and 820a-b may be considered individually to assess nonoperative period performances, or in combinations of multiple of the metrics, as discussed above with respect to EQN. 1, some embodiments consider an “ORA score” 830 reflecting an integrated 825 representation of all these metrics. When, e.g., presented in combination with data of the duration of one or more of the intervals in FIGs. 5A-C, the ORA score may provide a readily discernible means for reviewers to quickly and intuitively assess the relative performance of surgical teams, surgical theaters, hospitals and hospital systems, etc. during nonoperative periods, across theaters, across teams, across types of surgical procedures (nonoperative periods before or after prostatectomies, hernia repair, etc.), types of surgical modalities (nonoperative periods preparing for, or resetting after, non-robotic laparoscopic procedures, non-robotic open procedures, robotic procedures, etc.), hospital systems, etc.
[0106] Accordingly, while some embodiments may employ more complicated relationships (e.g., employing any suitable mathematical functions and operations) between the metrics 805a-c, 810a-c, 815a-c, and 820a-b in forming the ORA score 830, in this example, each of the metrics may be weighted by a corresponding weighting value 850a-j such that the integrating
825 is a weighted sum of each of the metrics. The weights may be selected, e.g., by a hospital administrator or reviewers in accordance with which of the metrics are discerned to be more vital to current needs for efficiency improvement. For example, in a system where reviewers wish to assess whether reports that limited staffing is affecting efficiency are accurate, the weight 850g may be upscaled relative to the other weights. Thus, when the ORA score 830 across procedures is compared in connection with the durations of one or more of the intervals in FIGs. 5A-C for the groups of surgeries, the reviewer can more readily discern if there exists a relation between the head count and undesirable interval durations. Naturally, one will appreciate other choices and combinations of weight adjustment, as well as particular consideration of specific interval durations, to assess other performance characteristics.
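As a minimal, non-limiting sketch of such a weighted-sum integration, the following Python snippet illustrates the idea; the metric names, default weights, and example values are placeholders standing in for metrics 805a-c, 810a-c, 815a-c, 820a-b and weighting values 850a-j, not values from this disclosure:

```python
# Hypothetical metric names and weights; all values are placeholders.
DEFAULT_WEIGHTS = {
    "case_volume": 1.0,
    "first_case_turnovers": 1.0,
    "case_delay": 1.0,
    "headcount_to_complete_tasks": 1.0,
    "or_traffic": 1.0,
    "room_layout": 1.0,
    "modality_conversion": 1.0,
}

def ora_score(normalized_metrics: dict, weights: dict = DEFAULT_WEIGHTS) -> float:
    """Integrate normalized metric values into a single weighted-sum ORA score."""
    return sum(weights[name] * normalized_metrics.get(name, 0.0) for name in weights)

# A reviewer investigating staffing concerns might upweight the headcount metric
# (analogous to increasing weight 850g relative to the other weights):
staffing_focus = dict(DEFAULT_WEIGHTS, headcount_to_complete_tasks=3.0)
score = ora_score({"headcount_to_complete_tasks": 1.4, "or_traffic": 0.9}, staffing_focus)
```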
Example Metric Scoring Methodologies - ORA Significance Assessment
[0107] In some datasets, higher ORA composite scores may positively correlate with increased system utilization u and reduced OR minutes per case t for the hospitals in a database, e.g., as represented by EQN. 2:
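The body of EQN. 2 does not appear in this text. For orientation only, one plausible correlation-style reading of the stated relationship is given below in LaTeX; the precise functional form of the original equation is an assumption here and may differ:

```latex
% Assumed rendering only; the published EQN. 2 may take a different form.
\mathrm{corr}\!\left(\mathrm{ORA},\, u\right) > 0,
\qquad
\mathrm{corr}\!\left(\mathrm{ORA},\, t\right) < 0
\tag{2}
```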
[0108] Thus, the ORA composite score may be used for a variety of analysis and feedback applications. For example, the ORA composite score may be used to detect negative trends and prioritize hospitals, theaters, teams, or team members, that need workflow optimizations. The ORA composite score may also be used to monitor workflow optimizations, e.g., to verify adherence to requested adjustments, as well as to verify that the desired improvements are, in fact, occurring. The ORA composite score may also be used to provide an objective measure of efficiency for when teams perform new types of surgeries for the first time.
Example Metric Scoring Methodologies - Additional Metrics
[0109] Additional metrics to assess workflow efficiency may be generated by compositing time, staff count, and motion metrics. For example, a composite score may consider scheduling efficiency (e.g., a composite formed from one or more of case volume 810a, first case turnovers 810b, and case delay 810c) and one or both of modality conversion 820b and an “idle time” metric, which is a mean or median of the idle time (for individual members or teams collectively) over a period (e.g., during action 525b).
[0110] Though, for convenience, sometimes described as considering the behavior of one or more team members, one will appreciate that the metrics described herein may be used to compare the performances of individual members, teams, theaters (across varying teams and modalities), hospitals, hospital systems, etc. Similarly, metrics calculated at the individual, team, or hospital level may be aggregated for assessments of a higher level. For example, to compare hospital systems, metrics for team members within each of the systems, across the system’s hospitals, may be determined, and then averaged (e.g., a mean, median, sum weighted by characteristics of the team members, etc.) for a system-to-system comparison.
Example Nonoperative Data Processing Workflow
[0111] FIG. 9A is a schematic block diagram depicting a general processing flow as may be implemented in some embodiments. Specifically, this example flow employs various machine learning consolidation systems for producing elemental OR metrics (such as temporal interval durations, personnel presence, personnel motion, equipment motion, etc., from which other metrics, e.g., as described in FIG. 8, may be generated) from the raw multimodal theater-wide sensor data.
[0112] In some embodiments (e.g., where the data has not been pre-processed), a nonoperative segment detection module 905a may be used to detect nonoperative segments from full-day theater-wide data. A personnel count detection module 905b may then be used to detect a number of people involved in each of the detected nonoperative segments/activities of the theater-wide data (e.g., a spatiotemporal machine learning algorithm employing a three-dimensional convolutional network for handling visual image and depth data over time, e.g., as appearing in video). A motion assessment module 905c may then be used to measure the amount of motion (e.g., people, equipment, etc.) observed in each of the nonoperative segments/activities (e.g., using optical flow methods, a machine learning tracking system, etc.). A metrics generation component 905d may then be used to generate metrics, e.g., as disclosed herein (e.g., determining as metrics the temporal durations of each of the intervals and actions of FIGs. 5A-C and the metrics as discussed in FIG. 8). While metrics results may be presented directly to the reviewer in some embodiments, as described herein, some embodiments may instead provide some initial analytical assessment of the metric values, e.g., determining standard deviations relative to historical values, prioritizing larger tolerance departures for presentation to the reviewer, determining whether metric values (e.g., motion) indicate that it would be desirable to perform a more refined analysis of the data (e.g., determining team member movement paths, object collision event detections, etc.), etc. Accordingly, a metrics analysis component 905e may then analyze the generated metrics, e.g., to determine outliers relative to historical patterns.
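By way of a purely illustrative sketch, the chaining of such modules might be expressed as follows (Python; the class name, the callables standing in for modules 905a-905e, and the assumed `duration_s` attribute are hypothetical and introduced only for illustration):

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

@dataclass
class NonoperativePipeline:
    """Illustrative chaining of stand-in callables analogous to modules 905a-905e;
    each would be backed by its own machine learning model or rule-based logic."""
    detect_segments: Callable[[Any], List[Any]]                       # cf. 905a
    count_personnel: Callable[[Any], int]                             # cf. 905b
    assess_motion: Callable[[Any], float]                             # cf. 905c
    generate_metrics: Callable[[Dict[str, float]], Dict[str, float]]  # cf. 905d
    analyze_metrics: Callable[[Dict[str, float]], Dict[str, Any]]     # cf. 905e

    def run(self, theater_data: Any) -> List[Dict[str, Any]]:
        results = []
        for segment in self.detect_segments(theater_data):
            elemental = {
                "duration_s": float(getattr(segment, "duration_s", 0.0)),
                "headcount": float(self.count_personnel(segment)),
                "motion": self.assess_motion(segment),
            }
            metrics = self.generate_metrics(elemental)
            results.append(self.analyze_metrics(metrics))
        return results
```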
[0113] FIG. 9B is a schematic block diagram depicting elements in a more detailed example processing flow than the flow depicted in FIG. 9A, as may be implemented in some embodiments. One will appreciate that each depicted component may be logic or may be one or more machine learning systems, as discussed in greater detail herein. The computer system 910b may receive the theater-wide sensor data 910a and first perform the nonoperative period detection 910c (e.g., identifying the periods 310a, 310b, 310c, 310d, though some systems may be configured to only detect nonoperative periods of the character of periods 310b and 310c). Once the portions of the theater-wide data corresponding to the nonoperative periods have been detected, the data may then be further segmented into corresponding action tasks or intervals (e.g., the intervals 550a-d and/or groupings 510, 515, 520, 525 and respective action tasks) at block 910d.
[0114] Using object detection (and in some embodiments, tracking) machine learning systems 910e, the system may perform object detection using machine learning methods, such as detection of equipment 910f or personnel 910h (ellipsis 910g indicating the possibility of other machine learning systems). In some embodiments, only personnel detection 910h is performed, as only the number of personnel and their motion are needed for the desired metrics. Motion detection component 910i may then analyze the objects detected at block 910e to determine their respective motions, e.g., using various machine learning methods, optical flow, combinations thereof, etc. disclosed herein.
[0115] Using the number of objects, detected motion, and determined interval durations, a metric generation system 910j may generate metrics (e.g., the interval durations may themselves serve as metrics, the values of FIG. 8 may be calculated, etc.). The metric values
may then be analyzed via component 910k to determine, e.g., outliers and other deviations from historical data (e.g., previous iterations of the topology 450). The system may consider 915a, 915c historical sensor data 915e and historical metrics data 915f when performing the historical comparison at block 910k (e.g., clustering historical metric values around efficient and inefficient nodes, then assessing the newly arrived data’s distance to these nodes). In this manner, the system may infer that entire teams, groups of members, or individual members performed subpar compared to historical metrics data for similar roles, team member compositions, or individual team members. Conversely, the processed and raw theater-wide sensor data may be provided 915b to the historical data storage 915e for use in future analysis. Similarly, the metrics results and outlier determinations may be recorded 915d in the historical metrics database 915f for future reference.
[0116] The results of the analysis may then be presented via component 910l (e.g., sent over a network to one or more of applications 550f) for presentation to the reviewer. For example, application algorithms may consume the determined metrics and nonoperative data and propose customized actionable coaching for each individual in the team, as well as the team as a whole, based upon metrics analysis results (though such coaching or feedback may first be determined on the computer system 910b in some embodiments). Example recommendations include, e.g.: changes in the OR layout at various points in time, changes in OR scheduling, changes in communication systems between team members, changes in numbers of staff involved in various tasks, etc. In some embodiments, such coaching and feedback may be generated by comparing the metric values to a finite corpus of known inefficient patterns (or conversely, known efficient patterns) and corresponding remediations to be proposed (e.g., slow port placement and excess headcount may be correlated with an inefficiency resolved by reducing head count for that task).
[0117] For further clarity, FIG. 9C is a flow diagram illustrating various operations in an example overall process 920 for analyzing theater-wide data. At block 920a, the computer system may receive the theater-wide sensor data for the theater to be examined. At block 920b, the system may perform pre-processing on the data, e.g., reconciling theater-wide data to a common format, as when fisheye and rectilinear sensor data are both to be processed.
[0118] At block 920c, the system may perform operative and nonoperative period recognitions, e.g., identifying each of the segments 310a-d and 315a-c from the raw theater-wide sensor data. In some embodiments, such divisions may be recognized, or verified, via ancillary data, e.g., console data, instrument kinematics data, etc. (which may, e.g., be active only during operative periods).
[0119] The system may then iterate over the detected nonoperative periods (e.g., periods 310a, 310b) at blocks 920d and 925a. In some embodiments, operative periods may also be included in the iteration, e.g., to determine metric values that may inform the analysis of the nonoperative segments, though many embodiments will consider only the nonoperative periods. For each period, the system may identify the relevant tasks and intervals at block 925b, e.g., the intervals, groups, and actions of FIGs. 5A-C.
[0120] At blocks 925c and 925e, the system may iterate over the corresponding portions of the theater data for the respectively identified tasks and intervals, performing object detections at block 925f, motion detection at block 925g, and corresponding metrics generation at block 925h. In some embodiments, at block 925f, only a number of personnel in the theater may be determined, without determining their roles or identities. Again, the metrics may thus be generated at the action task level, as well as at the other intervals described in FIGs. 5A-C. In alternative embodiments, the metrics may simply be determined for the nonoperative period (e.g., where the durations of the intervals 550a-e are the only metrics to be determined).
[0121] After all the relevant tasks and intervals have been considered for the current period at block 925c, then the system may create any additional metric values (e.g., metrics including the values determined at block 925h across multiple tasks as their component values) at block 925d. Once all the periods have been considered at block 920d the system may perform holistic metrics generation at block 930a (e.g., metrics whose component values depend upon the period metrics of block 925d and block 925h, such as certain composite metrics described herein).
[0122] At block 930b, the system may analyze the metrics generated at blocks 930a, 925d, and at block 925h. As discussed, many metrics (possibly at each of blocks 930a, 925h, and 925d) will consider historical values, e.g., to normalize the specific values here, in their generation.
Similarly, at block 930b the system may determine outliers as described in greater detail herein, by considering the metrics results in connection with historical values. Finally, at block 930c, the system may publish its analysis for use, e.g., in applications 450f.
Example Nonoperative Theater-Wide Data Processing - Nonoperative Data Recognition
[0123] One will appreciate a number of systems and methods sufficient for performing the operative / nonoperative period detection of components 905a or 910c and activity / task / interval segmentation of block 910d (e.g., identifying the actions, tasks, or intervals of FIGs. 5A-C). Indeed, as mentioned, in some embodiments, alternative signals than the theater-wide data or monitoring of gross-signals in the theater-wide data may suffice for distinguishing periods 310a-d from periods 315a-d. For example, in some embodiments, a team member may provide explicit notification. Similarly, the absence of kinematics and system events data from robotic surgical system consoles or instruments may indicate a prolonged separation between the surgeon and patient or between a robotic platform and the patient, which may suffice to indicate that an inter-surgical nonoperative period has begun (or provide verification of a machine learning system's parallel determination).
[0124] However, some embodiments consider instead, or in addition, employing machine learning systems for performing the nonoperative period detection. For example, some embodiments employ spatiotemporal model architectures, e.g., a transformer architecture such as that described in Bertasius, Gedas, Heng Wang, and Lorenzo Torresani. “Is Space-Time Attention All You Need for Video Understanding?” arXiv preprint arXiv:2102.05095 (2021). Such approaches may also be especially useful for automatic activity detection from long sequences of theater-wide sensor data. The spatial segment transformer architecture may be designed to learn features from frames of theater-wide data (e.g., visual image video data, depth frame video data, visual image and depth frame video data, etc.). The temporal segment may be based upon a gated recurrent unit (GRU) method and designed to learn the sequence of actions in a long video and may, e.g., be trained in a fully supervised manner (again, where data labelling may be assisted by the activation of surgical instrument data). For example, OR theater-wide data may be first annotated by a human expert to create ground truth labels and then fed to the model for supervised training.
[0125] Some embodiments may employ a two-stage model training strategy: first training the backbone transformer model to extract features and then training the temporal model to learn a sequence. Input to the model training may be long sequences of theater-wide data (e.g., many hours of visual image video) with output time-stamps for each segment (e.g., the nonoperative segments) or activity (e.g., intervals and tasks of FIGs. 5A-C) of interest. One will appreciate that some models may operate on individual visual images, individual depth frames, groups of image frames (e.g., segments of video), groups of depth frames (e.g., segments of depth frame video), combinations of visual video and depth video, etc.
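As a non-authoritative sketch of the second (temporal) training stage only, assuming per-frame feature vectors have already been produced by the separately trained backbone, one might write the following (Python with the PyTorch library; the 768-dimensional feature size, class count, clip length, and hyperparameters are placeholders chosen for illustration):

```python
import torch
import torch.nn as nn

class TemporalSegmenter(nn.Module):
    """GRU over per-frame backbone features, emitting a frame-wise label
    (e.g., operative vs. nonoperative, or a task identifier)."""

    def __init__(self, feature_dim: int = 768, hidden_dim: int = 256, num_classes: int = 2):
        super().__init__()
        self.gru = nn.GRU(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        hidden, _ = self.gru(features)   # features: (batch, time, feature_dim)
        return self.head(hidden)         # logits:   (batch, time, num_classes)

# Fully supervised training step against expert-annotated frame labels:
model = TemporalSegmenter()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
features = torch.randn(1, 120, 768)      # stand-in for one clip of backbone features
labels = torch.randint(0, 2, (1, 120))   # stand-in ground-truth segment labels
loss = criterion(model(features).reshape(-1, 2), labels.reshape(-1))
loss.backward()
optimizer.step()
```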
[0126] As another example, FIG. 10 is a flow diagram illustrating various operations in an example process 1005 for performing nonoperative period detection in some embodiments. Specifically, as the number of theater-wide sensors may change across theaters, or across time in the same theater, it may be undesirable to invest in training a machine learning system configured to receive only a specific number of theater-wide data inputs. Thus, in these embodiments, where the classifier is not configured to consider the theater-wide sensor data from all the available streams at once, the system may instead consider the streams individually, or in smaller groups, and then analyze the collective results, e.g., in combination with smoothing operations, so as to assign a categorization to the segment under consideration.
[0127] For example, after receiving the theater-wide data at block 1005a (e.g., all of the streams 325a-e, 330a-e, and 335a-e) the system may iterate over the data in intervals at blocks 1005b and 1005c. For example, the system may consider the streams in successive segments (e.g., 30-second, one-minute, or two-minute intervals), though the data therein may be downsampled depending upon the framerate of its acquisition. For each interval of data, the system may iterate over the portion of the interval data associated with the respective sensor's streams at blocks 1010a and 1010b (e.g., each of streams 325a-e, 330a-e, and 335a-e or groups thereof, possibly considering the same stream more than once in different groupings). For each stream, the system may determine the classification results at block 1010c as pertaining to an operative or nonoperative interval. After all the streams have been considered, at block 1010d, the system may consider the final classification of the interval. For example, the system may take a majority vote of the individual stream classifications of block 1010c, resolving ties and
smoothing the results based upon continuity with previous (and possibly subsequently determined) classifications.
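A minimal sketch of such a per-interval majority vote with continuity-based tie-breaking and smoothing follows (Python; the function names, window size, and example votes are illustrative assumptions rather than elements of this disclosure):

```python
from collections import Counter
from typing import List, Optional

def classify_interval(per_stream_labels: List[str], previous_label: Optional[str]) -> str:
    """Majority vote over per-stream classifications of a single interval,
    breaking ties by continuity with the previously classified interval."""
    counts = Counter(per_stream_labels)
    best = max(counts.values())
    winners = [label for label, n in counts.items() if n == best]
    if len(winners) == 1:
        return winners[0]
    return previous_label if previous_label in winners else winners[0]

def smooth(labels: List[str], window: int = 3) -> List[str]:
    """Relabel each interval to the local majority of its neighborhood,
    suppressing one-interval 'blips' that break temporal continuity."""
    smoothed = list(labels)
    for i in range(len(labels)):
        lo, hi = max(0, i - window), min(len(labels), i + window + 1)
        smoothed[i] = Counter(labels[lo:hi]).most_common(1)[0][0]
    return smoothed

# Usage: classify each interval from its streams, then smooth the sequence.
stream_votes = [["nonoperative", "nonoperative", "operative"],
                ["operative", "operative", "operative"]]
labels: List[str] = []
for votes in stream_votes:
    labels.append(classify_interval(votes, labels[-1] if labels else None))
labels = smooth(labels)
```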
[0128] After all the theater-wide data has been considered at block 1005b, then at block 1015a the system may consolidate the classification results (e.g., performing smoothing and continuity harmonization for all the data, analogous to that discussed with respect to block 1010d, but here for larger smoothing windows, e.g., one to two hours). At block 1015b, the system may perform any supplemental data verification before publishing the results. For example, if supplemental data indicates time intervals with known classifications, the classification assignments may be hardcoded for these true positives and the smoothing rerun.
Example Nonoperative Theater-Wide Data Processing - Object Recognition
[0129] Like nonoperative and operative theater-wide data segmentation, one will likewise appreciate a number of ways for performing object detection (e.g., at block 905b or component 910e). Again, in some embodiments, object detection includes merely a count of personnel, and so a You Only Look Once (YOLO) style network (e.g., as described in Redmon, Joseph, et al. “You Only Look Once: Unified, Real-Time Object Detection.” arXiv preprint arXiv:1506.02640 (2015)), perhaps applied iteratively, may suffice. However, some embodiments consider using groups of visual images or depth frames. For example, some embodiments employ a transformer based spatial model to process frames of the theater-wide data, detecting all humans present and reporting the number. An example of such architecture is described in Carion, Nicolas, et al. “End-to-End Object Detection with Transformers.” arXiv preprint arXiv:2005.12872 (2020).
[0130] To clarify this specific approach, FIG. 11A is a schematic block diagram illustrating an example information processing flow as may be used for performing object detection in connection with some embodiments. Given a visual or depth frame image 1105f, the system may present the image's raw pixel or depth values to a convolutional network 1105a trained to produce image features 1105b. These features may in turn be provided to a transformer encoder-decoder 1105c and the bipartite matching loss 1105d used to make predictions 1105e for the location and number of objects (e.g., personnel or equipment) in the image, reflected here by bounding boxes within the augmented image 1105g (one will appreciate that an actual
augmented image may not be produced by the system, but rather, only indications of the object locations and, in some embodiments, of the type of object found therein).
[0131] FIG. 11B is a flow diagram illustrating various operations in an example process 1100 for performing object detection as may be used in connection with some embodiments. At block 1110a, the system may receive the theater-wide data (visual image data, depth data, etc.). At blocks 1110b and 1110c, as in the process 1005, the system may iterate over the nonoperative periods, considering the data in discrete, successive intervals (as mentioned, in some embodiments the operative periods may be considered as well, e.g., to verify continuity with the object detections and recognitions at the beginnings or ends of the nonoperative periods).
[0132] At blocks 1110d and 1115a the system may consider groups of theater-wide data. For example, some embodiments may consider every moment of data capture, whereas other embodiments may consider every other capture or captures at intervals, since some theater sensors may employ high data acquisition rates (indeed, not all sensors in the theater may apply a same rate and so normalization may be applied so as to consolidate the data). For such high rates, it may not be necessary to interpolate object locations between data captures if the data capture rate is sufficiently larger than the movement speeds of objects in the theater. Similarly, some theater sensors' data captures may not be perfectly synchronized, or may capture data at different rates, obligating the system to interpolate or to select data captures sufficiently corresponding in time so as to perform detection and metrics calculations.
[0133] At blocks 1115b and 1115c, the system may consider the data in the separate theater-wide sensor data streams and perform object detection at block 1115d, e.g., as described above with respect to FIG. 11A, or using a YOLO network, etc. After object detection has been performed for each stream for the group under consideration, the system may perform postprocessing at block 1115e. For example, if the relative poses of the theater-wide sensors are known within the theater, then their respective object detections may be reconciled to better confirm the location of the object in a three-dimensional representation such as a three-dimensional point cloud. Similarly, the relative data captures may be used to verify one another's determinations and to resolve occlusions based upon temporal continuity (e.g., as when a team member occludes one sensor's perspective, but not another sensor's).
[0134] After all of the temporal groups have been considered at block 1110d, then at block 1110e, additional verification may be performed, e.g., using temporal information from across the intervals of block 1110d to reconcile occlusions and lacunae in the object detections of block 1115d. Once all the nonoperative periods of interest have been considered at block 1110b, at block 1120a, the system may perform holistic post-processing and verification in-filling. For example, knowledge regarding object presence between periods or based upon a type of theater or operation may inform the expected numbers and relative locations of objects to be recognized. To this end, even though some embodiments may be interested in analyzing nonoperative periods exclusively, the beginning and end of operative periods may help inform or verify the nonoperative period object detections, and may be considered. For example, if four personnel are consistently recognized throughout an operative period, then the system should expect to identify four personnel at the end of the preceding, and the beginning of the succeeding, nonoperative periods.
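For illustration only, one simple way to reconcile per-stream personnel counts across sensors and time might resemble the following sketch (Python; the max-then-median heuristic, window size, and function name are assumptions introduced here, not a statement of the disclosed post-processing):

```python
import numpy as np

def reconcile_headcounts(per_stream_counts: np.ndarray, window: int = 5) -> np.ndarray:
    """per_stream_counts: array of shape (num_streams, num_frames) holding the
    number of people each stream's detector reported per frame.

    An occluded stream can generally only under-count, so take the per-frame
    maximum across streams, then median-filter over time to fill brief gaps
    and suppress detector flicker."""
    per_frame = per_stream_counts.max(axis=0).astype(float)
    smoothed = np.empty_like(per_frame)
    for i in range(per_frame.size):
        lo, hi = max(0, i - window), min(per_frame.size, i + window + 1)
        smoothed[i] = np.median(per_frame[lo:hi])
    return smoothed.round().astype(int)
```

Counts produced this way could then be checked against neighboring operative periods for continuity, as in the four-personnel example above.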
Example Nonoperative Theater-Wide Data Processing - Object Tracking
[0135] As with segmentation of the raw data into nonoperative periods (e.g., as performed by nonoperative period detection component 910c), and the detection of objects, such as personnel, within those periods (e.g., via component 910e), one will appreciate a number of ways to perform tracking and motion detection. For example, object detection, as described, e.g., in FIG. 11B, in combination with optical flow analysis (with complementary stream perspectives resolving ambiguities) may readily be used to recognize each particular object's movement throughout the theater. As another example, some embodiments may employ multi-object machine learning tracking algorithms, which involve detecting and tracking multiple objects within a sequence of theater-wide data. These approaches may identify and locate objects of interest in each frame and then associate those objects across frames to keep track of their movements over time. For example, some embodiments may use an implementation analogous to that described in Meinhardt, Tim, et al. “TrackFormer: Multi-Object Tracking with Transformers.” arXiv preprint arXiv:2101.02702 (2021).
[0136] As an example in accordance with the approach of Meinhardt, et al., FIG. 12A is a schematic block diagram illustrating an example tracking information processing flow as may be used in connection with some embodiments. In a first visual image or depth frame 1205a, the system may apply a tracking framework collection 1210a of a convolutional neural network, transformer encoders and decoders, and initial object detection (e.g., with the assistance of the object detection method of FIG. 11A). Iterative application 1210b and 1210c of the tracking framework to subsequent images or frames 1205b and 1205c may produce object detections, such as personnel, with a record of the positions across the frames 1205a, 1205b, 1205c (ellipsis 1205d reflecting the presence of intervening frames and tracking recognitions).
[0137] FIG. 12B is a flow diagram illustrating various operations in an example process 1215 for performing object tracking as may be used in connection with some embodiments. At block 1215a, the system may receive the theater-wide data, e.g., following nonoperative period identification. At blocks 1215b and 1215c the system may iterate over the nonoperative periods and for each period, iterate over the contemplated detection and tracking methods at blocks 1220a and 1220b. For each method, the sensor data streams may be considered in turn at blocks 1220c and 1220d, performing the applicable detection and tracking method at block 1220e (one will appreciate that alternatively, in some embodiments, the streams may be first integrated before applying the object detection and tracking systems, as when simultaneously acquired depth frames from multiple sensors are consolidated into a single virtual model). As mentioned, some methods may benefit from considering temporal and spatial continuity across the theater-wide sensors, and so reconciliation methods for the particular tracking application may be applied at block 1220f.
[0138] Similarly, reconciliation between the tracking methods’ findings across the period may be performed at block 1225a. For example, determined locations for objects found by the various methods may be averaged. Similarly, the number of objects may be determined by taking a majority vote among the methods, possibly weighted by uncertainty or confidence values associated with the methods. Similarly, after all the nonoperative periods have been considered, the system may perform holistic reconciliation at block 1225b, e.g., ensuring that the initial and final object counts and locations agree with those of neighboring periods or action groups.
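As a minimal, hypothetical sketch of such reconciliation between methods (position averaging and a confidence-weighted vote on the object count), one might write (Python; the function names and example values are assumptions for illustration):

```python
import numpy as np

def fuse_positions(positions_by_method: list) -> np.ndarray:
    """Average the (x, y, z) positions that different tracking methods report
    for the same objects (arrays assumed aligned by object index, shape (N, 3))."""
    return np.mean(np.stack(positions_by_method, axis=0), axis=0)

def fuse_object_count(counts: list, confidences: list) -> int:
    """Confidence-weighted vote on the number of tracked objects."""
    tallies = {}
    for count, conf in zip(counts, confidences):
        tallies[count] = tallies.get(count, 0.0) + conf
    return max(tallies, key=tallies.get)

# e.g., three tracking methods report 4, 4, and 5 people with varying confidence:
count = fuse_object_count([4, 4, 5], [0.9, 0.7, 0.4])                  # -> 4
positions = fuse_positions([np.zeros((4, 3)), np.ones((4, 3)) * 0.1])  # averaged locations
```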
[0139] As one will note when comparing FIG. 12B and FIG. 9C, object detection, tracking, or motion detection may be performed at the period level (and then associated with tasks / actions
/ intervals for metrics calculation if desired) or may be performed after the actions, tasks, or intervals have been identified, and upon corresponding data specifically.
Example Nonoperative Theater-Wide Data Processing - Motion Assessment
[0140] While some tracking systems may readily facilitate motion analysis at motion detection component 910i, some embodiments may alternatively, or in parallel, perform motion detection and analysis using visual image and depth frame data. In some embodiments, simply the amount of motion (in magnitude, regardless of its direction component) within the theater in three-dimensional space of any objects, or of only objects of interest, may be useful for determining meaningful metrics during nonoperative periods. However, more refined motion analysis may facilitate more refined inquiries, such as team member path analysis, collision detection, etc.
[0141] As an example optical-flow-based motion assessment, FIG. 13A is a schematic visual image 1305a and depth frame 1305b theater-wide data pair, with an indication of the optical-flow-derived correspondence as may be used in some embodiments. Specifically, the data processing system may review sequences of visual image data to detect optical flow. Here, the system has detected that the team member 1310b is moving from the right to the left of the image as indicated by arrow 1310a and by the pixel border around the pixels having optical flow around team member 1310b.
[0142] While some embodiments may consider motion based upon the optical flow from visual images alone, it may sometimes be desirable to “standardize” the motion. Specifically, turning to FIG. 13C, movement 1345a far from the camera, as shown in image 1340a may result in a smaller number of pixels (the pixels depicting the member 1350a) being associated with the optical flow. Conversely, as shown in image 1340b, when the team member 1350b is very close to the sensor, their motion 1345b may result in an optical flow affecting many more pixels.
[0143] Rather than allow the number of visual image pixels involved in the flow to affect the motion determination, some embodiments may standardize the motion associated with the optical flow to three-dimensional space. That is, with reference to FIG. 13D, the motions 1345a and 1345b may be the same in magnitude in three-dimensional space, as
the team members move from locations 1355a, 1360a to locations 1355b, 1360b, respectively. While the locations 1360a-b are a smaller distance 1370b from the sensor 1365 than the distance 1370a from the sensor 1365 to the locations 1355a-b, some embodiments may seek to identify the same amount of motion 1345a, 1345b in each instance. Specifically, downstream metrics may treat the speed of the motions 1345a, 1345b equally, regardless of their distance from the capturing sensor.
[0144] To accomplish this, returning to FIG. 13A, for each portion of the visual image 1305a associated with the optical flow, the system may consider the corresponding portions of the simultaneously acquired depth image 1305b, here, where the team member 1310b and their motion, indicated by arrow 1315a, will also be manifest. That is, in this example the pixels 1310c associated with the optical flow may correspond 1320 to the depth values 1315c. By considering these depth values 1320, the system may infer the distance to the object precipitating the optical flow (e.g., one of distances 1370b and 1370a). That is, with reference to FIG. 13B, the system may be able to infer the "standardized" motion 1325c in three-dimensional space for the object moving from position 1325a to position 1325b, once the distances 1330a and 1330b from the capturing sensor 1335 have been inferred from the depth data. In some embodiments, in lieu of first detecting optical flow in the two-dimensional visual image, optical flow in the three-dimensional depth data may instead be used and the standardized motion determined mutatis mutandis.
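A minimal sketch of such depth-based standardization, assuming a pinhole-camera approximation, a depth frame registered to the visual image, and a known or calibrated focal length in pixels (all of which are assumptions for illustration rather than requirements of this disclosure), might read:

```python
import numpy as np

def standardized_motion(flow: np.ndarray,
                        depth_m: np.ndarray,
                        mask: np.ndarray,
                        focal_px: float) -> float:
    """Convert 2-D optical flow (pixels per frame) inside an artifact mask into
    an approximate metric displacement (meters per frame) by scaling each
    pixel's flow by its depth under a pinhole model: meters ~= pixels * Z / f.

    flow:    (H, W, 2) per-pixel (dx, dy) displacements
    depth_m: (H, W) depth values in meters, registered to the visual image
    mask:    (H, W) boolean mask of the optical-flow artifact
    """
    magnitudes_px = np.linalg.norm(flow, axis=-1)   # per-pixel flow magnitude
    meters = magnitudes_px * depth_m / focal_px     # depth-standardized motion
    return float(meters[mask].mean())               # artifact-level motion value
```

With this scaling, the same physical movement yields roughly the same motion value whether it occurs near to or far from the capturing sensor.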
[0145] FIG. 14 is a flow diagram illustrating various operations in an example process 1400 for performing motion analysis from theater-wide data, as may be applied in some embodiments. At blocks 1405b and 1405c, the system may iterate over the theater-wide data received at block 1405a. For example, theater-wide data may be downsampled and considered in discrete data sets of temporally successive visual image and depth frame pairs. Where one or more optical flow artifacts (contiguous regions with optical flow above a threshold, detected in either the visual images or the depth frames) are detected within the data set at block 1405d, the system may iterate over the artifacts at blocks 1410a and 1410b. Many artifacts may not correspond to objects of interest for preparing metrics. For example, incidental motion of some equipment, adjustment of some lights, opening of some doors, etc., may not be relevant to the downstream analysis. Accordingly, at block 1410c, the system may verify that the artifact is associated with one or more of the objects of interest (e.g., the personnel or equipment detected using the methods disclosed herein via the machine learning systems of component 910e, e.g., including the systems and methods of FIGs. 11A-B and 12A-B). For example, pixels corresponding to the optical flow may be compared with pixels identified in, e.g., a YOLO network object detection. In some cases, a single optical flow artifact may be associated with more than one object, e.g., when one moving object occludes another moving object. Assessment of the corresponding depth values may reveal the identities of the respective objects appearing in the artifact or at least their respective locations and trajectories.
[0146] Thus, where the artifact corresponds to an object of interest (e.g., team personnel), then at block 1415a, the system may determine the corresponding depth values and may standardize the detected motion at block 1415b to be in three-dimensional space (e.g., the same motion value regardless of the distance from the sensor) rather than in the two-dimensional plane of a visual image optical flow, e.g., using the techniques discussed herein with respect to FIGs. 13A-D. The resulting motion may then be recorded at block 1415c for use in subsequent metrics calculation as discussed in greater detail herein.
Example Nonoperative Theater-Wide Metrics Analysis - Outlier Detection
[0147] Following metrics generation (e.g., at metric generation system 910j) some embodiments may seek to recognize outlier behavior (e.g., at metric analysis system 910k) in each team / operating room / hospital / etc. based upon the above metrics, including the durations of the actions and intervals in FIGs. 5A-C, the numbers of people involved in each theater, the amount of motion observed, etc. For example, FIG. 15 is a flow diagram illustrating various operations in an example process 1500 for outlier analysis based upon the determined metric values, as may be implemented in some embodiments.
[0148] At block 1505a, the system may acquire historical datasets, e.g., for use with metrics having component values (such as normalizations) based upon historical data. At block 1505b, the system may determine metrics results for each nonoperative period as a whole (e.g., cumulative motion within the period, regardless of whether it occurred in association with any particular task or interval). At block 1505c, the system may determine metrics results for specific tasks and intervals within each of the nonoperative segments (e.g., the durations of actions and intervals in FIGs. 5A-C). At block 1505d, the system may then determine composite metric values from the previously determined metrics (e.g., the ORA score 830 discussed in FIG. 8).
[0149] At block 1505e, clusters of metric values corresponding to patterns of inefficient or negative nonoperative theater states, as well as clusters of metric values corresponding to patterns of efficient or positive nonoperative theater states, may be included in the historical data of block 1505a. Such clusters may be used to find, for metric scores and for patterns of metric scores, both the distance from ideal clusters and the distance from undesirable clusters (e.g., where the distance is the Euclidean distance and each metric of a group is considered as a separate dimension).
[0150] Thus, the system may then iterate over the metrics individually, or in groups, at blocks 1510a and 1510b to determine if the metrics or groups exceed a tolerance at block 1510c relative to the historical data clusters (naturally, the nature of the tolerance may change with each expected grouping and may be based upon a historical benchmark, such as one or more standard deviations from a median or mean). Where such tolerance is exceeded (e.g., metric values or groups of metric values are either too close to inefficient clusters or too far from efficient clusters), the system may document the departure at block 1510d for future use in coaching and feedback as described herein.
[0151] For clarity, as mentioned, the cluster may occur in an N-dimensional space where there are N respective metrics considered in the group (though alternative spaces and surfaces for comparing metric values may also be used). Such an algorithm may be applied to detect outliers for each team / operating room / hospital based upon the above metrics. Cluster algorithms (e.g., based upon K-means, using machine learning classifiers, etc.) may both reveal groupings and identify outliers, the former for recognizing common inefficient / efficient patterns in the values, and the latter for recognizing, e.g., departures from ideal performances or acceptable avoidance of undesirable states.
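By way of a hedged, illustrative sketch of such a K-means-style comparison (Python with scikit-learn; the number of clusters, the stand-in data, the tolerance value, and the function name are assumptions, and the assignment of clusters to "efficient" or "inefficient" states would be made separately):

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_distances(historical_metrics: np.ndarray, new_case: np.ndarray, k: int = 2) -> np.ndarray:
    """Cluster historical metric vectors (rows = cases, columns = metrics) and
    return the new case's Euclidean distance to each cluster centroid."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(historical_metrics)
    return np.linalg.norm(km.cluster_centers_ - new_case, axis=1)

historical = np.random.rand(200, 4)              # stand-in: 200 cases, 4 metrics
new_case = np.array([0.9, 0.8, 0.7, 0.95])
distances = cluster_distances(historical, new_case)
outlier = distances.min() > 1.0                  # tolerance chosen per grouping
```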
[0152] Thus the system may determine whether the metrics individually, or in groups, are associated (e.g., within a threshold distance of, such as the cluster's standard deviation, largest principal component, etc.) with an inefficient, or efficient, cluster at block 1515a, and if so,
document the cluster for future coaching and feedback at block 1515b. For example, raw metric values, composite metric values, outliers, distances to or from clusters, correlated remediations, etc., may be presented in a GUI interface, e.g., as will be described herein with respect to FIGs. 17 or 18A-C.
Example Nonoperative Data Analysis - Coaching
[0153] Following outlier detection and clustering, in some embodiments, the system may also seek to consolidate the results into a form suitable for use by feedback and coaching (e.g., by the applications 550f). For example, remediating actions may already be known for tolerance breaches (e.g., at block 1510c) or nearness to adverse metrics clusters (e.g., at block 1515a). Here, coaching may, e.g., simply include the known remediation when reporting the breach or clustering association.
[0154] Some embodiments may recognize higher level associations in the metric values, from which remediations may be proposed. For example, after considering a new dataset from a theater in a previously unconsidered hospital, various embodiments may determine that a specific surgical specialty (e.g., Urology) in that theater possesses a large standard deviation in its nonoperative time metrics. Various algorithms disclosed herein may consume such large standard deviations, other data points, and historical data and suggest corrective action regarding the scheduling or staffing model. For example, a regression model may be used that employs historical data to infer potential solutions based upon the data distribution.
[0155] As another example, FIG. 16 is a flow diagram illustrating various operations in an example process 1600 for providing coaching feedback based upon the determined metric values, as may be implemented in some embodiments. While focusing on relations between metric values and adverse / inefficient patterns in this example, one will appreciate variations that instead determine relations to desirable / efficient patterns (with corresponding remediations when the metrics depart too far from these preferred states). Similarly, in some embodiments, metrics and groups of metrics may be directly compared to known patterns without first identifying tolerance departures and cluster distances, as in the example process 1600.
[0156] Here, at blocks 1615a and 1615b, the system may iterate over all the previously identified tolerance departures (e.g., as determined at block 1510c) for the groupings of one or more metric results and consider whether they correspond with a known inefficient pattern at block 1615c (e.g., taking an inner product of the metric values with a known inefficient pattern vector). For example, a protracted "case open to patient in" duration in combination with certain delay 810c and case volume 810a values may, e.g., be indicative of a scheduling inefficiency where adjusting the scheduling regularly resolves the undesirable state. Note that the metric or metrics used for mapping to inefficient patterns for remediation may, or may not, be the same as the metric or metrics which departed from the tolerance (e.g., at block 1615a) or approached the undesirable clustering (e.g., at block 1620a), e.g., the latter may instead indicate that the former may correspond to an inefficient pattern. For example, an outlier in one duration metric from FIG. 5A may imply an inefficient pattern derived from a combination of metrics from FIG. 8.
[0157] Accordingly, the system may iterate through the possible inefficient patterns at blocks 1615c and 1615d to consider how the corresponding metric values resemble the inefficient pattern. For example, the Euclidean distance from the metrics to the pattern may be taken at block 1615e. At block 1615f, the system may record the similarity (e.g., the distance) between the inefficient pattern and the metrics group associated with the tolerance departure.
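A minimal sketch of such a comparison against a catalogue of known inefficient patterns follows (Python; the catalogue contents, pattern names, and the choice of negative Euclidean distance as the similarity are illustrative assumptions; an inner product or cosine similarity could be substituted as noted above):

```python
import numpy as np

def pattern_similarities(metric_values: np.ndarray, patterns: dict) -> dict:
    """Compare a group of metric values against known inefficient patterns,
    returning a similarity per pattern (here the negative Euclidean distance)."""
    return {name: -float(np.linalg.norm(metric_values - vector))
            for name, vector in patterns.items()}

# Hypothetical catalogue; each entry would map elsewhere to a proposed remediation.
catalogue = {
    "scheduling_inefficiency": np.array([1.8, 0.4, 1.2]),
    "understaffed_turnover":   np.array([1.1, 1.5, 0.9]),
}
similarities = pattern_similarities(np.array([1.7, 0.5, 1.1]), catalogue)
closest = max(similarities, key=similarities.get)   # best-matching inefficient pattern
```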
[0158] Similarly, following consideration of the tolerance departures, the system may consider metrics score combinations with clusters near adverse / inefficient events (e.g., as determined at block 1515a) at blocks 1620a and 1620b. As was done previously, the system may iterate over the possible known inefficient patterns at blocks 1620c and 1620d, again determining the inefficient pattern correspondence to the respective metric values (which may or may not be the same group of metric values identified in the cluster association of block 1620a) at block 1620e (again, e.g., the Euclidean or other appropriate similarity metric) and recording the degree of correspondence at block 1620f.
[0159] Based upon the distances and correspondences determined at blocks 1615e and 1620e, respectively, the system may determine a priority ordering for the detected inefficient patterns at block 1625a. At block 1625b, the system may return the most significant threshold number of inefficient pattern associations. For example, each inefficient pattern may be associated
with a priority (e.g., high priority modes may be those with a potential for causing a downstream cascade of inefficiencies, patient harm, damage to equipment, etc., whereas lower priority modes may simply lead to temporal delays) and presented accordingly to reviewers. Consequently, each association may be scored as the similarity between the metric values and the metric values associated with the inefficient pattern, weighted by the severity / priority of the inefficient pattern. In this manner, the most significant of the possible failures may be identified and returned first to the reviewer. The iterative nature of topology 450 may facilitate reconsideration and reweighting of the priorities for process 1600 as reviewers observe the impact of the proposed feedback over time. Similarly, the iterations may provide opportunities to identify additional remediation and inefficient pattern correspondences.
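For illustration only, such a priority-weighted ordering might be sketched as follows (Python; the field names, scale of the similarity and priority values, and the top-N cutoff are assumptions introduced here):

```python
def rank_pattern_associations(associations: list, top_n: int = 5) -> list:
    """Each association is assumed to carry a 'similarity' to a known inefficient
    pattern and that pattern's 'priority' (e.g., risk of cascading delays, patient
    harm, or equipment damage). Rank by similarity weighted by priority and
    return the most significant associations first."""
    return sorted(associations,
                  key=lambda a: a["similarity"] * a["priority"],
                  reverse=True)[:top_n]

# e.g., two candidate associations competing for reviewer attention:
ranked = rank_pattern_associations([
    {"pattern": "scheduling_inefficiency", "similarity": 0.8, "priority": 1.0},
    {"pattern": "understaffed_turnover",   "similarity": 0.6, "priority": 3.0},
])
```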
Example GUI Nonoperative Metrics Analysis Feedback Elements
[0160] Presentation of the analysis results, e.g., at block 910l, may take a variety of forms in various embodiments. For example, FIG. 17 is a schematic representation of GUI elements in a quick review dashboard interface for nonoperative metrics review as may be implemented in some embodiments. In this example GUI 1705, selectors 1710a-d are provided for the user to select the temporal range of nonoperative period performance data that they wish to analyze. In this example, the user has selected to review the data captured during the past year. Following such a temporal selection, a "Nonoperative Metrics" region, a "Case Mix" region, and a "Metadata" region may be populated with values corresponding to the nonoperative periods for the selected range of data.
[0161] The “Case Mix” region may provide a general description of the data filtered from the temporal selection. Here, for example, there are 205 total cases (nonoperative periods) under consideration as indicated by label 1715a. A decomposition of those 205 cases is then provided by type of surgery via labels 1715b-d (specifically, that of the 205 nonoperative periods, 15 were associated with preparation for open surgeries, 180 with preparation for a robotic surgery, and 10 with preparation for a laparoscopic surgery). The nonoperative periods under consideration may be those occurring before and after the 205 surgeries, only those before, or only those after, etc., depending upon the user’s selection.
[0162] The “Metadata” region may likewise be populated with various parameters describing the selected data, such as the number of ORs involved (8 per label 1720a), the number of specialties (4 per label 1720b), the number of procedure types (10 per label 1720c) and the number of different surgeons involved in the surgeries (27 per label 1720d).
[0163] Within the "Nonoperative Metrics" region, a holistic composite score, such as an ORA score, may be presented in region 1725a using the methods described herein (e.g., as described with respect to FIG. 8). Regions 1725b-f may show corresponding statistics for various of the intervals of FIG. 5A.
[0164] Some embodiments may also present scoring metrics results comprehensively, e.g., to allow reviewers to quickly scan the feedback and to identify effective and ineffective aspects of the nonoperative theater performance. For example, FIG. 18A is a schematic representation of a GUI element 1805 as may be used for global quick review feedback in some embodiments. Specifically, individual metrics score values, composite metric scores, departures from tolerances, nearness to desirable or undesirable clustering, etc. may be indicated in a numerical region 1805d. The name of the metrics, etc., may be indicated in the name region 1805a and a desired feedback in region 1805b. A quick review icon 1805e may also be included to facilitate ready identification of the nature of the numerical feedback. A quality relation arrow region 1805c may be used to indicate whether the numerical value in region 1805d is above or below an operational point or tolerance, or trending upward or downward over time, and whether this is construed as indicative of improving or decreasing efficiency.
[0165] Specifically, FIG. 18B is a schematic representation of arrow elements as may be used in the quality relation arrow region 1805c of FIG. 18A in some embodiments. The arrow may be, e.g., color-coded to indicate whether the value is efficient (e.g., green) or inefficient (e.g., red). Thus, a rising arrow 1810a may indicate that the value in region 1805d is above a lower bound (e.g., when an idle time following successful completion of a task has increased above a historical average). Similarly, the falling arrow 1810b may indicate that the value in region 1805d is below an upper bound (e.g., when a preparation time has decreased below a historical average). Conversely, a falling arrow 1810c may indicate that the value in region 1805d is below a desired minimum value (e.g., when a number of personnel ready for a required step is below a historical average). Similarly, the rising arrow 1810d may indicate that the value in region 1805d is above a desired upper bound (e.g., when a preparation time has increased beyond a historical average).
[0166] By associating relational value both with the arrow direction and highlighting (such as by color, bolding, animation, etc.), reviewers may readily scan a large number of values and discern results indicating efficient or inefficient feedback. Highlighting may also take on a variety of degrees (e.g., alpha values, degree of bolding, frequency of an animation, etc.) to indicate a priority associated with an efficient or inefficient value. For example, FIG. 18C is a schematic representation of GUI elements in a quick review feedback interface 1820 as may be used in some embodiments. Here, the individual quick review feedbacks (instances of the element 1805) may be arranged in a grid and sized so that the reviewer may perceive multiple items at one time. Each element may be selectable, presenting details for the value determination, including, e.g., the corresponding historical data, theater-wide data, intermediate metrics calculation results, etc. One will appreciate that the figure is merely schematic and each “Action” or “Feedback” text may be replaced with one of the metrics described herein (e.g., a duration of intervals from FIGs. 5A-C) and remediations, respectively (though in some configurations the feedback may be omitted from all or some of the elements).
[0167] FIG. 19A is a plot of example analytic values as acquired in connection with a prototype implementation of an embodiment. Specifically, FIG. 19A shows results following processing for various of the intervals of FIG. 5A. Here, an outlier value 1905a clearly indicates a deviation in the "skin close to patient out" interval from the median duration of ~10 minutes (taking instead approximately ~260 minutes). FIG. 19B is similarly a plot of example operating room analytic values as acquired in connection with a prototype implementation of an embodiment. Here, standard deviation intervals may be shown to guide the reviewer in recognizing outlier values (e.g., whether they reflect a longer or shorter duration than the standard deviation interval).
[0168] FIG. 20A is a plot of example values as acquired in connection with a prototype implementation of an embodiment. Even without performing the outlier detection and inefficient pattern recognition methods disclosed herein, one can readily determine by inspection that various of the values are outliers. For example, within the “case open to patient in” interval, the cases 2005a-c clearly indicate outliers above the standard deviation. For the
“patient in to skin cut” interval, the case 2010a is shorter than the standard deviation interval. For the “skin close to patient out” interval, the cases 2015a-b were outside the standard deviation for the selected historical cases. For the “patient out to case open” interval, the case 2020a lies far outside the standard deviation for the selected historical cases. For the “wheels out to wheels in” interval, the cases 2025a and 2025b lie outside the standard deviation for the selected historical cases. FIG. 20B is similarly a plot of example operating room analytic values as acquired in connection with a prototype implementation of an embodiment.
[0169] Similarly, FIG. 21A is a plot of example operating room analytic values as acquired in connection with an example prototype implementation of an embodiment. FIG. 21B is a plot of example operating room analytic values as acquired in connection with an example prototype implementation of an embodiment, in a horizontal format. FIG. 21C is also a plot of example operating room analytic values as acquired in connection with an example prototype implementation of an embodiment.
[0170] FIG. 22 is a schematic representation of example elements in a graphical user interface for providing metrics-derived feedback, as may be used in some embodiments. In this example, the interface includes two elements: a theater-wide sensor playback element 2205; and a consolidated timeline element 2210 depicting the durations of various intervals within a plurality of nonoperative periods. For example, each of temporal interval breakdowns 2210a-g may indicate the durations of intervals 550a-d for ready comparison (though seven periods are shown in this example, one may readily envision variations with many more rows, as well as more instances of playback element 2205).
[0171] Within the theater-wide sensor playback element 2205 may be a metadata section 2205a indicating the identity of the case ("Case 1"), the state of the theater (though a surgical operation, "Gastric Bypass", is shown here in anticipation of the upcoming surgery, the nonoperative actions and intervals of FIGs. 5A-C may be shown here additionally or alternatively), the date and time of the data acquisition ("May 27, 20XX 07:49:42") and the number of identified personnel (here "2" as determined, e.g., in accordance with component 910h and, e.g., the methods of FIGs. 11A-B). The theater-wide sensor playback element 2205 may also display the data from one or more theater-wide sensors in the playback section 2205b with bounding boxes (e.g., boxes 2205c and 2205d), overlays, outlines, or other suitable indications of the personnel detected. In the consolidated timeline element 2210 a plurality of temporal intervals 2210a-g may be rendered, indicating, e.g., a plurality of interval durations (e.g., the durations of intervals 550a-d for seven nonoperative periods). The playback in the region 2205 may correspond to a selection of one of the intervals in the temporal interval breakdowns 2210a-g (e.g., depicting corresponding theater-wide data playback for that interval). In this manner, the reviewer may readily perceive a corpus of results while simultaneously analyzing the state of a specific instance (e.g., as may have been called to the user's attention based upon, e.g., correspondingly determined metric values or pattern similarities).
Screenshots and Materials Associated with Prototype Implementations of Various Embodiments
[0172] FIG. 23 is an example schematic data processing overview diagram corresponding to aspects of FIG. 4, as may be used in connection with some embodiments. FIG. 24 is a screenshot of a feedback interface corresponding to aspects of FIG. 22, as may be used in connection with some embodiments. FIG. 25 is a screenshot of a feedback interface corresponding to aspects of FIG. 22, as may be used in connection with some embodiments. FIG. 26 is a screenshot of a feedback interface corresponding to aspects of FIG. 17, as may be used in connection with some embodiments. FIG. 27 is a collection of color image plots for example metric values corresponding to aspects of FIGs. 21A-C, as acquired in connection with an example prototype implementation of an embodiment. FIG. 28 is a collection of color plots corresponding to aspects of the plots of FIGs. 19A-B and 20A-B. One will appreciate that dates appearing in the screenshots of FIGs. 23-25, 27, 28 refer to the date of data capture. Accordingly, to better ensure privacy, each instance is here replaced with 20XX.
[0173] FIG. 29A is a collection of photographs of theater-wide sensor depth and image frames captured in a surgical theater during various of the tasks. FIG. 29B is a collection of theater-wide sensor images captured of a surgical theater during deployment of an example prototype implementation of an embodiment and related photographs of an example theater-wide sensor platform. Specifically, image 2910a depicts a depth frame acquired from a theater-wide sensor wherein the depth values have been color coded to facilitate the reader's visualization. A visual image 2910b acquired from another theater-wide sensor is also provided. In photograph 2910c, an elevated stand 2910g for mounting two theater-wide sensors 2910d and 2910e is shown. The image 2910f shows the elevated stand 2910g and sensors 2910d, 2910e from a second perspective.
Computer System
[0174] FIG. 30 is a block diagram of an example computer system 3000 as may be used in conjunction with some of the embodiments. In some examples, each of the processing systems 450b can be implemented using the computing system 3000. In some examples, the application 450f can be executed using the computing system 3000. The computing system 3000 may include an interconnect 3005, connecting several components, such as, e.g., one or more processors 3010, one or more memory components 3015, one or more input/output systems 3020, one or more storage systems 3025, one or more network adaptors 3030, etc. The interconnect 3005 may be, e.g., one or more bridges, traces, busses (e.g., an ISA, SCSI, PCI, I2C, Firewire bus, etc.), wires, adapters, or controllers.
[0175] The one or more processors 3010 may include, e.g., a general-purpose processor (e.g., x86 processor, RISC processor, etc.), a math coprocessor, a graphics processor, etc. The one or more memory components 3015 may include, e.g., a volatile memory (RAM, SRAM, DRAM, etc.), a non-volatile memory (EPROM, ROM, Flash memory, etc.), or similar devices. The one or more input/output devices 3020 may include, e.g., display devices, keyboards, pointing devices, touchscreen devices, etc. The one or more storage devices 3025 may include, e.g., cloud-based storages, removable Universal Serial Bus (USB) storage, disk drives, etc. In some systems memory components 3015 and storage devices 3025 may be the same components. Network adapters 3030 may include, e.g., wired network interfaces, wireless interfaces, Bluetooth™ adapters, line-of-sight interfaces, etc.
[0176] One will recognize that only some of the components, alternative components, or additional components than those depicted in FIG. 30 may be present in some embodiments. Similarly, the components may be combined or serve dual-purposes in some systems. The components may be implemented using special-purpose hardwired circuitry such as, for example, one or more ASICs, PLDs, FPGAs, etc. Thus, some embodiments may be implemented in, for example, programmable circuitry (e.g., one or more microprocessors)
programmed with software and/or firmware, or entirely in special-purpose hardwired (nonprogrammable) circuitry, or in a combination of such forms.
[0177] In some embodiments, data structures and message structures may be stored or transmitted via a data transmission medium, e.g., a signal on a communications link, via the network adapters 3030. Transmission may occur across a variety of mediums, e.g., the Internet, a local area network, a wide area network, or a point-to-point dial-up connection, etc. Thus, “computer readable media” can include computer-readable storage media (e.g., “non- transitory” computer-readable media) and computer-readable transmission media.
[0178] The one or more memory components 3015 and one or more storage devices 3025 may be computer-readable storage media. In some embodiments, the one or more memory components 3015 or one or more storage devices 3025 may store instructions, which may perform or cause to be performed various of the operations discussed herein. In some embodiments, the instructions stored in memory 3015 can be implemented as software and/or firmware. These instructions may be used to perform operations on the one or more processors 3010 to carry out processes described herein. In some embodiments, such instructions may be provided to the one or more processors 3010 by downloading the instructions from another system, e.g., via network adapter 3030.
[0179] For clarity, one will appreciate that while a computer system may be a single machine, residing at a single location, having one or more of the components of FIG. 30, this need not be the case. For example, distributed network computer systems may include multiple individual processing workstations, each workstation having some, or all, of the components depicted in FIG. 30. Processing and various operations described herein may accordingly be spread across the one or more workstations of such a computer system. For example, one will appreciate that a process amenable to being run in a single thread upon a single workstation may instead be separated into an arbitrary number of sub-threads across one or more workstations, such sub-threads then run in serial or in parallel to achieve a same, or substantially similar, result as the process run within the single thread. Similarly, one will appreciate that while a non-transitory computer readable medium may stand alone (e.g., in a single USB storage device), or reside within a single workstation (e.g., in the workstation’s random access memory or disk storage), such a medium need not reside at a single geographic
location, but may include, e.g., multiple memory storage units residing across geographically separated workstations of a computer system in network communication with one another or across geographically separated storage devices.
Intelligent Data Collection for Medical Environments
[0180] Systems, methods, apparatuses, and non-transitory computer-readable media are provided for intelligent recording of data collected for medical procedures and in medical environments such as ORs. Examples of data collected for medical procedures and in medical environments (collectively referred to as multimodal data) include video data collected by one or more visual sensors arranged in the medical environments, depth data or three-dimensional point cloud data collected by one or more depth sensors arranged in the medical environments, data (e.g., kinematics data, system event data, sensor data) collected by one or more robotic systems located within the medical environments and used to perform the medical procedures, data (e.g., endoscopic video data) collected by one or more instruments located within the medical environments and used to perform the medical procedures, metrics and workflow analytics determined based on the multimodal data, and so on.
[0181] Given that it is exceptionally costly in terms of storage resources, network resources, and computational resources to record all data streams capturing aspects of medical environments, the embodiments described herein can, based on detected trigger events, intelligently enable and disable recording of the data streams to conserve storage resources, network resources, and computational resources. Trigger events can be implemented to identify redundant streams of data, based on which recording of the data stream(s) considered to be the best source of information is enabled while recording of the rest of the data stream(s) is disabled. Trigger events can be implemented to identify different qualities of streams of data, based on which recording of the data stream(s) having the best quality is enabled while recording of the rest of the data stream(s) is disabled. The trigger events can be implemented to identify noteworthy or important events such as outlier cases, adverse events, private information (e.g., personal health information (PHI)) exposure, and so on, based on which recording of the data stream(s) capturing noteworthy or important events is enabled while recording of the data stream(s) not capturing noteworthy or important events is disabled. Thus,
the embodiments described herein can further preserve privacy of individuals while recording data for medical procedures and medical environments.
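Purely for illustration, the redundancy- and quality-based selection just described might be sketched as below; the stream identifiers, quality scores, and function names are hypothetical assumptions, and how a quality score would actually be computed is not specified here.

```python
from dataclasses import dataclass

@dataclass
class StreamStatus:
    stream_id: str          # e.g., "theater_cam_1" (hypothetical identifier)
    redundancy_group: str   # streams in the same group capture overlapping content
    quality_score: float    # higher is better; the scoring method is left open

def select_streams_to_record(streams):
    """Enable only the highest-quality stream in each redundancy group; disable the rest."""
    best_per_group = {}
    for s in streams:
        current = best_per_group.get(s.redundancy_group)
        if current is None or s.quality_score > current.quality_score:
            best_per_group[s.redundancy_group] = s
    enabled_ids = {s.stream_id for s in best_per_group.values()}
    return {s.stream_id: (s.stream_id in enabled_ids) for s in streams}

# Two overlapping theater-wide cameras and one endoscope stream (illustrative values).
statuses = [
    StreamStatus("theater_cam_1", "room_view", 0.72),
    StreamStatus("theater_cam_2", "room_view", 0.91),
    StreamStatus("endoscope", "surgical_field", 0.88),
]
print(select_streams_to_record(statuses))
# {'theater_cam_1': False, 'theater_cam_2': True, 'endoscope': True}
```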
[0182] FIG. 31 is a schematic block diagram illustrating an example data collection and analysis system 3100 for providing smart data collection in a medical environment for a medical procedure, according to some embodiments. The data collection and analysis system 3100 can be implemented using one or more suitable computing systems, such as one or more computing systems 190a, 190b, 450b, and 3000. For example, the computing systems 190a and 190b can facilitate data collection, data processing, and so on of the multimodal data. For example, the processing systems 450b can perform automated inference 450c, including the detection of objects in the medical environment (e.g., theater), such as personnel and equipment, as well as segmentation of the theater-wide data into distinct steps 450d (which can correspond to the groupings and their respective actions discussed herein with respect to FIGs. 5A-C). For example, code, instructions, data structures, weights, biases, parameters, and other information that define the data collection and analysis system 3100 can be stored in one or more memory systems such as one or more memory components 3015 and/or one or more storage systems 3025. In some examples, the processes of the data collection and analysis system 3100, such as determining whether to enable or disable recording of one or more types of data in the multimodal data, can be performed using one or more processors such as one or more processors 3010.
[0183] As used herein, a medical procedure refers to a surgical procedure or operation performed in a medical environment (e.g., a medical or surgical theater 110a or 110b, OR, etc.) by or using one or more of a medical staff, a robotic system, or an instrument. Examples of the medical staff include surgeons, nurses, support staff, and so on, such as the patient-side surgeon 105a and the assisting members 105b. Examples of the robotic systems include the robotic medical system or the robot surgical system described herein. Examples of instruments include the mechanical instrument 110a or the visualization tool 110b. Medical procedures can have various modalities, including robotic (e.g., using at least one robotic system), non-robotic laparoscopic, non-robotic open, and so on. The multimodal data 3102, 3104, 3106, 3108, and 3110 collected for a medical procedure also refers to or includes multimodal data collected in a medical environment in which the medical procedure is performed and for one
or more of medical staff, robotic system, or instrument performing or used in performing the medical procedure.
[0184] The data collection and analysis system 3100 can receive and digest data sources or data streams including one or more of video data 3102, robotic system data 3104, instrument data 3106, metadata 3108, and depth data 3110 collected for a medical procedure. For example, the data collection and analysis system 3100 can acquire data streams of the multimodal data 3102, 3104, 3106, 3108, and 3110 in real time (e.g., acquired at 450a, or received at 910a, 915e, 920a, 1005a, 1110a, 1215a, 1405a, and so on). In some examples, the data collection and analysis system 3100 can utilize all types of multimodal data 3102, 3104, 3106, 3108, and 3110 collected, obtained, determined, or calculated for a medical procedure to determine trigger events based on which recording of the multimodal data 3102, 3104, 3106, 3108, and 3110 can be enabled or disabled. In some examples, the data collection and analysis system 3100 can utilize at least two types of multimodal data 3102, 3104, 3106, 3108, and 3110 collected, obtained, determined, or calculated for a medical procedure to determine a trigger event. In some examples, although at least one type of multimodal data 3102, 3104, 3106, 3108, and 3110 may not be available for a medical procedure, the data collection and analysis system 3100 can nevertheless determine a trigger event using the available information for that medical procedure.
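As a non-authoritative sketch of the availability handling described above, a trigger detector might consume whichever streams happen to be present; the stream names, dictionary keys, and detector logic below are illustrative assumptions rather than part of the disclosure.

```python
def detect_triggers(streams: dict) -> list:
    """Evaluate trigger detectors using whichever multimodal streams are available.

    `streams` maps hypothetical names ('video', 'robotic', 'instrument',
    'metadata', 'depth') to the latest data for each stream; streams that are
    not available for a given procedure are simply absent from the dict.
    """
    events = []
    # Detector that cross-checks two stream types (robotic log plus depth data).
    if "robotic" in streams and "depth" in streams:
        near_console = streams["depth"].get("surgeon_to_console_m", 99.0) < 0.5
        if streams["robotic"].get("last_event") == "HeadIn" and near_console:
            events.append("surgeon_head_in")
    # Detector that still works when only the robotic stream is available.
    if "robotic" in streams and streams["robotic"].get("last_event") == "FirstToolOn":
        events.append("tool_on")
    return events

# Depth data unavailable here; a trigger is still determined from the robotic log alone.
print(detect_triggers({"robotic": {"last_event": "FirstToolOn"}}))  # ['tool_on']
```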
[0185] The sensing system 3130 can include various types of sensors (e.g., the theater-wide sensors) described herein configured to capture and collect data such as the video data 3102, the depth data 3110, and so on within a medical environment. In some examples, the sensing system 3130 can include computing systems 190a and 190b to facilitate data collection and data processing of the video data 3102 and the depth data 3110.
[0186] The video data 3102 includes two-dimensional visual video data such as color (RGB) image or video data, grayscale image or video data, and so on of a medical procedure. In other words, the video data 3102 can include videos (e.g., structured video data) captured during a medical procedure. The video data 3102 includes two-dimensional visual video data obtained using visual image sensors placed within and/or around at least one medical environment (e.g., the theaters 110a and 110b) to capture visual image videos of the medical procedure performed within the at least one medical environment. Examples of video data 3102 include medical
environment video data such as OR video data, visual image/video data, theater-wide video data captured by the visual image sensors, visual images 325a-325e, 330a-330e, 335a-335e, visual frames, and so on. The visual image sensors used to acquire the structured video data can be fixed relative to the at least one medical environment (e.g., placed on walls or ceilings of the medical environment).
[0187] The instrument data 3106 includes instrument imaging data, instrument kinematics data, and so on collected using an instrument. In some examples, the instrument imaging data can be a part of the video data 3102 for which recording can be enabled or disabled in the manner described herein. For example, the instrument imaging data can include instrument image and/or video data (e.g., endoscopic images, endoscopic video data, etc.), ultrasound data (e.g., ultrasound images, ultrasound video data), and so on obtained using imaging devices which can be operated by human operators or robotic systems. Such instrument imaging data may depict surgical fields of view (e.g., a field of view of internal anatomy of patients). The positions, orientations, and/or poses of imaging devices can be controlled or manipulated by a human operator (e.g., a surgeon or a medical staff member) teleoperationally via robotic systems. For instance, an imaging instrument can be coupled to or supported by a manipulator of a robotic system and a human operator can teleoperationally manipulate the imaging instrument by controlling the robotic system. Alternatively, or in addition, the instrument imaging data can be captured using manually manipulated imaging instruments, such as a laparoscopic ultrasound device or a laparoscopic visual image/video acquiring endoscope.
[0188] The metadata 3108 includes information on various aspects and attributes of the medical procedure, including at least one of identifying information of the medical procedure, identifying information of one or more medical environments (e.g., theaters, ORs, hospitals, and so on) in which the medical procedure is performed, identifying information of medical staff by whom the medical procedure is performed, the experience level of the medical staff, schedules of the medical staff and the medical environments, patient complexity of patients subject to the medical procedure, patient health parameters or indicators, identifying information of one or more robotic systems or instruments used in the medical procedure, and identifying information of one or more sensors used to capture the multimodal data.
[0189] In some examples, the identifying information of the medical procedure includes at least one of a name or type of the medical procedure, a time at which or a time duration in which the medical procedure is performed, or a modality of the medical procedure. In some examples, the identifying information of the one or more ORs includes a name of each of the one or more ORs. In some examples, the identifying information of the one or more hospitals includes a name of each of the one or more hospitals. In some examples, the identifying information of the medical staff members includes a name, specialty, job title, ID, and so on of each of one or more surgeons, nurses, healthcare teams, and so on. In some examples, the experience level of the medical staff members includes a role, length of time practicing medicine, length of time performing certain types of medical procedures, length of time using a certain type of robotic system, certifications, and credentials of each of one or more surgeons, nurses, healthcare teams, and so on. The schedules of the medical staff and the medical environments include allocation of the medical staff and the medical environments to perform certain procedures (e.g., defined by type of surgery, surgery name, surgery ID or surgery reference number, specialty, modality), names of medical staff members, and corresponding times.
[0190] In some examples, patient complexity refers to conditions that a patient has that may influence the care of other conditions. In some examples, patient health parameters or indicators include various parameters or indicators such as body mass index (BMI), percentage body fat (%BF), blood serum cholesterol (BSC), systolic blood pressure (SBP), height, stage of sickness, organ information, outcome of the medical procedure, and so on. In some examples, the identifying information of the one or more robotic systems or instruments includes at least one of a name, model, or version of each of the one or more robotic systems or instruments or an attribute of each of the one or more robotic systems or instruments. In some examples, the identifying information of at least one sensor includes at least one of a name of each of the at least one sensor or a modality of each of the at least one sensor. In some examples, the system events of a robotic system include different activities, kinematics/motions, sequences of actions, and so on of the robotic system and timestamps thereof.
[0191] As shown in FIG. 25, the metadata 3108 includes information such as case ID, date of procedure, weekday, OR ID, case number, type of medical procedure (surgery name), surgery
code, surgeon name, surgeon ID, procedure modality (robotic), scheduled time, robotic system type, duration of the medical procedure, whether a hybrid medical procedure is implemented, whether the medical procedure is implemented entirely using robotic systems, and whether turnover has occurred.
[0192] In some examples, the metadata 3108 can be stored in a memory device (e.g., the memory component 3015) or a database. The memory device or the database can be provided for a scheduling or work allocation application that schedules the medical staff and the medical procedures in medical environments. For example, a user can input using an input system (e.g., of the input/output system 3020) the metadata 3108, or the metadata 3108 can be automatically generated using an automated scheduling application. The metadata 3108 can be associated with the video data 3102, the robotic system data 3104, the instrument data 3106, depth data 3110, and so on. For example, the other types of the multimodal data captured for the same procedure time or scheduled time, in the same medical environment, with the same procedure name, with the same robot or instrument, by the same medical staff, or so on can be associated with the corresponding metadata 3108 and can be processed together by the data collection and analysis system 3100 to determine the insights.
[0193] The depth data 3110 includes three-dimensional medical procedure data captured for a medical procedure. Examples of the depth data 3110 can include three-dimensional video data obtained using depth-acquiring sensors placed within and/or around the at least one medical environment (e.g., the theaters 110a and 110b). For example, the depth data 3110 includes theater-wide data (e.g., depth data, depth frame, or depth frame data) collected using theater-wide sensors (e.g., depth-acquiring sensors). In some examples, three-dimensional representations (e.g., point clouds) can be generated by inputting the depth data 3110 into at least one of suitable extrapolation methods, mapping methods, and machine learning models. For example, the depth data 3110 for a depth-acquiring sensor with a certain pose can indicate distances measured between the depth-acquiring sensor and points on objects and/or intensity values of the points on objects. The depth data 3110 from multiple depth-acquiring sensors with different poses, as shown and described relative to FIGs. 2B, 2C, and 3, can be fused into a higher accuracy dataset through registration of the depth-acquiring sensors. An intensity value can indicate a reflected signal strength for a point in the three-dimensional point cloud or an object in the three-dimensional point cloud. The depth data 3110 can therefore be used to define a three-dimensional point cloud or a three-dimensional point cloud representation corresponding to the medical environment that can be used to track the location and the number of objects such as the medical staff (personnel) and equipment, for example, using the method described in FIG. 11A. The three-dimensional point cloud includes points or pixels, each of which is defined by an intensity value (e.g., a gray-scale intensity value) and three-dimensional coordinates of that pixel with respect to a coordinate frame of the depth-acquiring sensor.
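The back-projection of a depth frame into such a point cloud could be sketched as below; the pinhole-camera intrinsics (fx, fy, cx, cy), the function name, and the tiny synthetic frame are assumptions for illustration and are not prescribed by the text.

```python
import numpy as np

def depth_frame_to_point_cloud(depth_m, intensity, fx, fy, cx, cy):
    """Back-project a depth frame into an (N, 4) array of [x, y, z, intensity]
    points expressed in the depth sensor's own coordinate frame, assuming a
    pinhole camera model with the given (hypothetical) intrinsics."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z, intensity], axis=-1).reshape(-1, 4)
    return points[points[:, 2] > 0]           # drop pixels with no depth return

# Tiny synthetic 2x2 depth/intensity frame with made-up intrinsics.
depth = np.array([[1.0, 1.2], [0.0, 2.0]])
inten = np.array([[80.0, 90.0], [0.0, 120.0]])
cloud = depth_frame_to_point_cloud(depth, inten, fx=500.0, fy=500.0, cx=1.0, cy=1.0)
print(cloud.shape)                             # (3, 4); the zero-depth pixel is discarded
```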
[0194] For example, the data collection and analysis system 3100 can execute computer vision algorithms that process the depth data 3110 and provide one or more of temporal activities data and human actions data associated with a medical procedure, sometimes performed using a robotic system and/or an instrument. In some examples, the data collection and analysis system 3100 can perform temporal activity recognition to recognize temporal activities data, including phases and tasks within a nonoperative or inter-operative period. Examples of a nonoperative period include the nonoperative periods 310a, 310b, 310c, 310d. In some embodiments, the nonoperative periods can be detected at 910c and 920c. Examples of a task within a nonoperative period include the tasks 320a, 320b, 320c, 320d, 320f, and 320e. As described herein, two or more tasks can be grouped as a phase or a stage. Examples of a phase include post-surgery 520, turnover 525, pre-surgery 510, and surgery 515, and so on. Accordingly, the data streams such as the video data 3102, the robotic system data 3104, the instrument data 3106, and the depth data 3110 obtained from the theater-wide sensors can be segmented into a plurality of periods, including operative periods and nonoperative periods. Each nonoperative period can include at least one phase, and each phase includes at least one task.
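A minimal sketch of the period/phase/task hierarchy just described is shown below; the class names, fields, and time units are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    name: str            # e.g., "sterile preparation"
    start_s: float       # offset into the recording, in seconds (illustrative)
    end_s: float

@dataclass
class Phase:
    name: str            # e.g., "turnover"
    tasks: List[Task] = field(default_factory=list)

@dataclass
class Period:
    kind: str            # "operative" or "nonoperative"
    phases: List[Phase] = field(default_factory=list)

# One nonoperative period containing a turnover phase with two tasks.
period = Period("nonoperative", [Phase("turnover", [Task("cleaning", 0.0, 420.0),
                                                    Task("setup", 420.0, 960.0)])])
print(sum(t.end_s - t.start_s for p in period.phases for t in p.tasks))  # 960.0
```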
[0195] In some examples, to obtain the human actions data, the data collection and analysis system 3100 can perform human detection to detect at least one individual (e.g., personnel, a medical staff member, a patient, and so on) in each frame of the video data 3102 and/or the depth data 3110 collected by the theater-wide sensor. For example, at 910h, personnel detection can be performed by the machine learning systems 910e or at 925f to determine a number of personnel and their motion to determine one or more metrics as described herein. In some examples, the motion detection component 910i can then analyze the objects (including the equipment at 910f and the personnel at 910h) detected at block 910e to determine their respective motions, e.g., using various machine learning methods, optical flow, combinations thereof, etc. disclosed herein.
[0196] The data collection and analysis system 3100 processes the metadata 3108, the temporal activities data, and the human actions data to determine metrics (e.g., nonoperative metrics) and statistical information. For example, the statistical information can include a number of personnel involved in completion of each task or phase of the nonoperative period, which is computed from the number of personnel detected in each frame of the output of the theater-wide sensor. The data collection and analysis system 3100 can determine the metrics based on the activities of personnel, equipment, patient, and so on as evidenced in the temporal activities data and the human actions data.
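For example, the per-task personnel statistic mentioned above might be computed roughly as follows; the data layout, names, and units are hypothetical.

```python
from statistics import mean

def personnel_per_task(frame_counts, task_windows):
    """Average the per-frame personnel counts over each task's time window.

    `frame_counts` is a list of (timestamp_s, n_personnel) pairs derived from the
    theater-wide sensor output; `task_windows` maps task names to (start_s, end_s).
    """
    result = {}
    for task, (start, end) in task_windows.items():
        counts = [n for t, n in frame_counts if start <= t < end]
        result[task] = mean(counts) if counts else 0.0
    return result

frames = [(0, 2), (10, 3), (20, 3), (30, 4)]
print(personnel_per_task(frames, {"cleaning": (0, 20), "setup": (20, 40)}))
# {'cleaning': 2.5, 'setup': 3.5}
```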
[0197] As described herein, metrics 3120 (e.g., a metric value or a range of metric values) can be determined via the workflow analytics 450e using the multimodal data 3102, 3104, 3106, 3108, and 3110. The metrics 3120 are indicative of the spatial and temporal efficiency of the medical procedure for which the multimodal data 3102, 3104, 3106, 3108, and 3110 is collected. Examples of the metrics include the metrics 805a, 805b, 805c, 810a, 810b, 810c, 815a, 815b, 820a, 820b. For example, with respect to a given medical procedure, at least one metric value or range of metric values can be determined for the entire medical procedure, for a period of the medical procedure, for a phase of the medical procedure, for a task of the medical procedure, for a surgeon, for a care team, for a medical staff member, and so on. For example, with respect to a given medical procedure, at least one metric value or range of metric values can be determined for temporal workflow efficiency, for a number of medical staff members, for the time duration of each segment (e.g., phase or task) of the medical procedure, for motion, for room size and layout, for timeline, for non-operative periods or adverse events, and so on. In some examples, the metrics 3120 can be provided for each temporal segment (e.g., period, phase, task, and so on) of a medical procedure. Accordingly, for a given medical procedure, a metric value or a range of metric values can be provided for each of two or more temporal segments (e.g., periods, phases, and tasks) of the medical procedure.
[0198] In some embodiments, metrics 3120 such as the ORA score can be provided for each OR, hospital, surgeon, healthcare team, procedure type, and so on, over multiple medical procedures. For example, a metric value or a range of metric values can be provided for each OR, hospital, surgeon, healthcare team, procedure type, and so on. In some examples, a procedure type of a medical procedure can be defined based on one or more of a modality (robotic, open, lap, etc.), operation type (e.g., prostatectomy, nephrectomy, etc.), procedure workflow efficiency rating (e.g., high-efficiency, low-efficiency, etc.), a certain type of hospital setting (e.g., academic, outpatient, training, etc.), and so on.
[0199] FIG. 32 is a diagram illustrating example robotic systems 3210, 3220, 3230, and 3240, a data collection and analysis system 3100, a medical environment sensing system 3260, and a storage system 3260, according to various arrangements. An example of the data collection and analysis system 3100 can be a robotic hub that is communicably coupled to the robotic systems 3210, 3220, 3230, and 3240 via one or more suitable networks. For example, the data collection and analysis system 3100 can be implemented using one or more suitable computing systems, such as one or more computing systems 450b and 3000. For example, code, instructions, data structures, weights, biases, parameters, and other information that define the data collection and analysis system 3100 can be stored in one or more memory systems such as one or more memory components 3015 and/or one or more storage systems 3025. In some examples, the processes of the data collection and analysis system 3100, such as determining whether to enable or disable recording of one or more types of data in the multimodal data, can be performed using one or more processors such as one or more processors 3010.
[0200] The storage system 3260 can be a memory device, database, datacenter, or another storage facility similar to the storage devices 3025. The storage system 3260 is communicably coupled to the hub 3250 and the medical environment sensing system 3260 (e.g., the sensors and systems thereof) via one or more suitable networks. The storage system 3260 can receive (over the one or more networks), save, and/or store streams of data of the sensors of the medical environment sensing system 3260 and/or streams of data of robotic systems 3210, 3220, 3230, and 3240. For example, the storage system 3260 can receive streams of data of the robotic systems 3210, 3220, 3230, and 3240 from the data collection and analysis system 3100. The storage system 3260 can receive streams of data directly from the sensors of the medical environment sensing system 3260 or indirectly from the data collection and analysis system 3100, which receives the streams of data directly from the sensors of the medical environment sensing system 3260. The data collection and analysis system 3100 can send, forward, or
transfer the streams of data of the robotic systems 3210, 3220, 3230, and 3240 and of the data collection and analysis system 3100 to the storage system 3260. In some examples, the data collection and analysis system 3100 can enable recording of a data stream by the storage system 3260 by sending, transferring, or otherwise providing the data stream to the storage system 3260, and can disable recording of a data stream by the storage system 3260 by ceasing to send, transfer, or otherwise provide the data stream to the storage system 3260. In some examples, the data collection and analysis system 3100 can enable recording of a data stream by the storage system 3260 by sending an enable command identified for the data stream to the storage system 3260, and can disable recording of a data stream by the storage system 3260 by sending a disable command identified for the data stream to the storage system 3260.
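The two enable/disable mechanisms described in this paragraph (gating the forwarding of the stream itself, versus sending an explicit per-stream command) might be sketched as follows; the wire format, stream identifier, and function names are assumptions for illustration.

```python
import json
from typing import Callable

def gate_by_forwarding(enabled: bool, chunk: bytes, send: Callable[[bytes], None]) -> None:
    """Mechanism 1: recording is enabled by forwarding the stream's bytes to the
    storage system and disabled simply by no longer sending them."""
    if enabled:
        send(chunk)

def gate_by_command(record: bool, stream_id: str, send: Callable[[bytes], None]) -> None:
    """Mechanism 2: the stream may keep flowing, but an explicit enable/disable
    command identified for the stream tells the storage system whether to persist it."""
    send(json.dumps({"stream_id": stream_id, "record": record}).encode() + b"\n")

sent = []
gate_by_forwarding(False, b"frame-bytes", sent.append)    # nothing is forwarded
gate_by_command(True, "video_3102", sent.append)          # an enable command is sent
print(sent)  # [b'{"stream_id": "video_3102", "record": true}\n']
```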
[0201] Each of the robotic systems 3210, 3220, 3230, and 3240 can be any one of the robotic systems described herein. The robotic systems 3210, 3220, 3230, and 3240 can communicate with the data collection and analysis system 3100 in real time to provide the robotic system data 3104 to the data collection and analysis system 3100 via a suitable wired or wireless connection. In some examples, the sensing system 3260 is an example of the sensing system 3130 and can include various types of sensors (e.g., the theater-wide sensors) described herein configured to capture and collect data such as the video data 3102, the depth data 3110, and so on. In some examples, the medical environment sensing system 3260 can include computing systems 190a and 190b to facilitate data collection, data processing, and so on of the video data 3102 and the depth data 3110. In some examples, the sensing system 3260 can provide the video data 3102 and the depth data 3110 to the data collection and analysis system 3100 via a suitable wired or wireless connection. In some examples, the data collection and analysis system 3100 can provide instructions or commands to enable or disable collection of at least one type of data by at least one sensor to the sensing system 3260 via a suitable wired or wireless connection. In that regard, the robotic systems 3210, 3220, 3230, and 3240 and the sensing system 3260 can communicate in real time, enabling the intelligent data collection based on particular trigger events.
[0202] In some examples, the robotic systems 3210, 3220, 3230, and 3240, the data collection and analysis system 3100, and the sensing system 3260 (e.g., the sensors and/or the computing systems thereof) are located in a same medical environment (e.g., a same OR). In some
examples, the data collection and analysis system 3100 can be offsite, outside of the medical environment in which the robotic systems 3210, 3220, 3230, and 3240 and the sensing system 3260 (e.g., the sensors and/or the computing systems thereof) are located.
[0203] The robotic system data 3104 includes or is indicative of robotic system events corresponding to a state or an activity of an attribute or an aspect of a robotic system. The robotic system data includes a timeline based on timestamps of system events that are time-aligned. The robotic system data 3104 of a robotic system can be generated by the robotic system (e.g., in the form of a robotic system log) in its normal course of operations. The robotic system data is determined based on at least one of input received by a console of the robotic system from a user or sensor data of a sensor on the robotic system. The robotic system can include one or more sensors (e.g., camera, infrared sensor, ultrasonic sensors, etc.), actuators, interfaces, consoles, and so on that can output information used to detect such a system event. The system events can serve as trigger events to trigger enabling and disabling of the recording of the multimodal data.
[0204] FIG. 33 is an example timeline 3300 including robotic system data (e.g., system events) over time, according to some embodiments. Each system event can be described or logged as a string describing the system event, an associated timestamp, and additional information. In some examples in which the system event is an instrument-on event, the additional information can include an identity of the instrument.
[0205] As shown, examples of the system events include a head-in event (“HeadIn”), a head-out event (“HeadOut”), a robot-in-follow event (“Targeting SusMovingStart”), a tool-on event (“FirstToolOn”), a tool-off event (“FirstToolOff”), a first-cannula-on event (“FirstCannulaOn”), a last-cannula-off event (“LastCannulaOff”), a table-moving-start event (“TableMovingStart”), a table-moving-end event (“TableMovingEnd”), a cart-drive-enabled event (“CartDriveMovingOn”), a cart-drive-disabled event (“CartDriveMovingOff”), a first or all sterile-adapter-engaged event (“FirstSterileAdapterOn” or “AllSterileAdaptersOn”), and a first or all sterile-adapter-disengaged event (“FirstSterileAdapterOff” or “AllSterileAdaptersOff”), to name a few.
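Purely as an illustration of consuming such a log, the sketch below parses one log entry and checks it against enable/disable groupings; the line format, field separators, example date, and the particular groupings are assumptions (the groupings loosely mirror the event-by-event discussion in the following paragraphs).

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict

@dataclass
class SystemEvent:
    name: str                                            # e.g., "FirstToolOn"
    timestamp: datetime
    extra: Dict[str, str] = field(default_factory=dict)  # e.g., the instrument identity

def parse_log_line(line: str) -> SystemEvent:
    """Parse a line of the assumed form 'ISO_TIMESTAMP|EventName|key=value;key=value'."""
    ts, name, *rest = line.strip().split("|")
    extra = dict(kv.split("=", 1) for kv in rest[0].split(";")) if rest and rest[0] else {}
    return SystemEvent(name, datetime.fromisoformat(ts), extra)

# Events after which theater-wide recording might be enabled vs. disabled
# (condensed, non-exhaustive, and not an authoritative mapping).
ENABLE_EVENTS = {"HeadOut", "FirstToolOff", "LastCannulaOff", "TableMovingStart",
                 "CartDriveMovingOn", "FirstSterileAdapterOn"}
DISABLE_EVENTS = {"HeadIn", "FirstToolOn", "FirstSterileAdapterOff"}

event = parse_log_line("2024-05-27T07:49:42|FirstToolOn|instrument=needle_driver")  # arbitrary date
print(event.name in DISABLE_EVENTS, event.extra)  # True {'instrument': 'needle_driver'}
```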
[0206] In addition, a system event can be detected using information outputted from a sensor located within the medical environment. The system event can be detected based on motion data or a location of two objects within the medical environment. For example, in response to determining, based on the 3D depth map of the medical environment, that a surgeon (e.g., a detected individual) has moved to a distance less than a threshold away from a detected robotic system, a surgeon head-in event has occurred. For example, in response to determining, based on the 3D depth map of the medical environment, that a surgeon (e.g., a detected individual) has moved at least a distance greater than a threshold away from a detected robotic system, a surgeon head-out event has occurred.
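A minimal sketch of the distance-threshold test described above, assuming the detected surgeon and the detected robotic system have already been reduced to 3D centroids; the 0.5 m threshold, the function name, and the example coordinates are illustrative assumptions.

```python
import numpy as np

def head_in_or_out(surgeon_xyz, console_xyz, threshold_m=0.5):
    """Classify head-in vs. head-out from the 3D positions (e.g., point-cloud
    centroids) of the detected surgeon and the detected robotic system/console."""
    distance = float(np.linalg.norm(np.asarray(surgeon_xyz) - np.asarray(console_xyz)))
    return "head_in" if distance < threshold_m else "head_out"

print(head_in_or_out([1.0, 0.2, 0.0], [1.3, 0.2, 0.0]))  # head_in  (0.3 m apart)
print(head_in_or_out([3.0, 0.2, 0.0], [1.3, 0.2, 0.0]))  # head_out (1.7 m apart)
```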
[0207] For example, in a head-in event, the surgeon has entered or begins operating a console of the robotic system. As described, the surgeon (e.g., operator 105c) can view the output of visualization tool 140d through a display 160a on a surgeon console 155 in a “head-in” posture. Thus, a head-in event corresponds to the state in which the surgeon begins to observe the visualization tool 140d or the state in which the surgeon is starting to operate the robotic system or the surgeon console 155. The head-in event can be detected using a sensor of the robotic system, a sensor located within the medical environment, or user input from a UI or the console. For example, the surgeon console 155 can include a laser sensor, an infrared sensor, or a camera (through computer vision) that can detect the presence and absence of an object (e.g., the surgeon’s head) adjacent to the console 155. For example, the surgeon can provide user input specifying the head-in event by manipulating a user interactive element (e.g., touchscreen, buttons, dials, keyboards, mouse, microphone, and so on) provided on the surgeon console 155. For example, in response to determining, based on the three-dimensional depth data 3110 (e.g., depth map or point cloud) of the medical environment, that a surgeon (e.g., a detected individual) or a head thereof has moved to a distance less than a threshold from a detected robotic system or the console thereof, a surgeon head-in event has been detected. In response to the head-in event, recording of at least some of the multimodal data can be disabled or stopped, given that the surgeon is not moving in the medical environment while operating the robotic system, and that the most relevant data when the surgeon is operating the robotic system is captured by the robotic system and the instruments thereon, e.g., robotic system data 3104 and instrument data 3106 (e.g., kinematic data, any imaging data (e.g., RGB image, ultrasound, etc.)), and so on.
For example, one or more of the video data 3102 or the depth data 3110 can be disabled. Accordingly, redundant data can be avoided.
[0208] For example, in a head-out event, the surgeon has exited from a console of the robotic system. For example, the surgeon (e.g., operator 105c) can view the output of visualization tool 140d through a display 160a upon a surgeon console 155 in a “head-in” posture. Thus, a head-out event corresponds to the state in which the surgeon no longer observes the visualization tool 140d or the state in which the surgeon no longer operates the robotic system or the surgeon console 155. The head-out event can be detected using a sensor of the robotic system, a sensor located within the medical environment, or based on user input from a UI or console. For example, the surgeon console 155 can include a proximity sensor, a laser sensor, an infrared sensor, or a camera (through computer vision) that can detect the presence and absence of an object (e.g., the surgeon’s head) adjacent to the console 155. For example, the surgeon can provide user input specifying the head-out event by manipulating a user interactive element provided on the surgeon console 155. For example, in response to determining, based on the three-dimensional data 3110 of the medical environment (if enabled at that time), that a surgeon (e.g., a detected individual) or a head thereof has moved at least a distance greater than a threshold away from a detected robotic system or the console thereof, a surgeon head-out event has been detected. In response to the head-out event, recording of at least some of the multimodal data can be enabled or triggered to analyze surgeon activities while the surgeon is out of the console 155 during the procedure to determine potential intraoperative disruptions and root causes of the same. For example, one or more of the video data 3102 or the depth data 3110 can be enabled.
[0209] For example, in a robot-in-follow event, a surgeon operates the robotic system by controlling the instruments attached thereon to perform a medical procedure, moving robotic arms and manipulators on which the instruments are attached, and so on. Thus, a robot-in-follow event corresponds to the state in which the surgeon is operating the robotic system via the surgeon console 155. The robot-in-follow event can be detected using a sensor of the robotic system, a sensor located within the medical environment, or based on user input from a UI or console. For example, the surgeon console 155 can include a laser sensor, an infrared sensor, or a camera (through computer vision) that can detect the presence and absence of an object (e.g., the surgeon’s hand) adjacent to a portion of the console 155 where the surgeon interacts using hands. For example, the surgeon can provide user input specifically indicating the robot-in-follow event by manipulating a user interactive element provided on the surgeon console 155. The user input can include an explicit indication that the surgeon begins to operate the robotic system or an implicit indication including commands and instructions to the robotic system that operates the instruments thereof. For example, in response to determining, based on the three-dimensional depth data 3110 of the medical environment, that a surgeon (e.g., a detected individual) or a hand thereof has moved to a distance less than a threshold from a detected robotic system or the console thereof, a robot-in-follow event has been detected. In response to the robot-in-follow event, recording of at least some of the multimodal data can be disabled or stopped, given that the surgeon is not moving in the medical environment while operating the robotic system, and that the most relevant data when the surgeon is operating the robotic system is captured by the robotic system and the instruments thereon, e.g., robotic system data 3104 and instrument data 3106 (e.g., instrument kinematic data, any imaging data (e.g., RGB image, ultrasound, etc.)), and so on. Accordingly, redundant data can be avoided. For example, one or more of the video data 3102 or the depth data 3110 can be disabled.
[0210] For example, in a tool-on event, a tool (instrument) is attached to the robotic system. As described, an instrument can be attached to the robotic system for performing imaging and surgical tasks. Thus, a tool-on event corresponds to the state in which the surgeon has initiated a segment of the medical procedure involving the instrument. The tool-on event can be detected using a sensor of the robotic system, a sensor located within the medical environment, or based on user input from a UI or console. For example, an interface on a robotic manipulator or arm on which an instrument is attached can include a switch, an actuator, a laser sensor, an infrared sensor, or a camera (through computer vision) that can detect the presence and absence of an instrument on the interface. For example, an instrument properly attaching to the interface (e.g., a mechanical fit) can trigger the switch or actuator to send a signal to the console that the instrument has been successfully attached. For example, the surgeon can provide user input specifically indicating the tool-on event by manipulating a user interactive element provided on the surgeon console 155. For example, in response to determining, based on the three-dimensional depth data 3110 of the medical environment, that an instrument (e.g., a detected object) has moved to a distance less than a threshold from a detected robotic system or the arm/manipulator/interface thereof, a tool-on event has been detected. In response to the tool-on event, recording of at least some of the multimodal data can be disabled or stopped, given that the surgeon is not moving in the medical environment while operating the instrument, and that the most relevant data when the surgeon is operating the robotic system is captured by the robotic system and the instruments thereon, e.g., robotic system data 3104 and instrument data 3106 (e.g., kinematic data, any imaging data (e.g., RGB image, ultrasound, etc.)), and so on. Accordingly, redundant data can be avoided. For example, one or more of the video data 3102 or the depth data 3110 can be disabled.
[0211] For example, in a tool-off event, a tool (instrument) is removed from the robotic system. As described, an instrument can be attached to the robotic system for performing imaging and surgical tasks. Thus, a tool-off event corresponds to the state in which the surgeon has concluded a segment of the medical procedure involving the instrument, or the instrument has malfunctioned or requires cleaning or reloading, and so on. The tool-off event can be detected using a sensor of the robotic system, a sensor located within the medical environment, or based on user input from a UI or console. For example, an interface on a robotic manipulator or arm on which an instrument is attached can include a switch, an actuator, a laser sensor, an infrared sensor, or a camera (through computer vision) that can detect the presence and absence of an instrument on the interface. For example, an instrument properly attaching to the interface (e.g., a mechanical fit) can trigger the switch or actuator to send a signal to the console that the instrument has been successfully attached, and without such a signal, the interface is determined to be free of any instrument. For example, the surgeon can provide user input specifically indicating the tool-off event by manipulating a user interactive element provided on the surgeon console 155. For example, in response to determining, based on the three-dimensional depth data 3110 (if enabled at that time) of the medical environment, that an instrument (e.g., a detected object) has moved at least a distance greater than a threshold away from a detected robotic system or the arm/manipulator/interface thereof, a tool-off event has been detected. In response to the tool-off event, recording of at least some of the multimodal data can be enabled or triggered to analyze medical staff activities in connection with instrument cleaning, re-loading, or troubleshooting and to determine potential intraoperative disruptions and root causes of the same. For example, one or more of the video data 3102 or the depth data 3110 can be enabled.
[0212] For example, in a tool-change event, a first tool (instrument) is removed from the robotic system and a second tool is attached to the robotic system. Different instruments can be attached to the robotic system for performing different tasks, such as imaging tasks and surgical tasks. Thus, a tool-change event corresponds to the state in which the surgeon has concluded a first segment of the medical procedure involving the first instrument and is about to begin a second segment of the medical procedure involving the second instrument. The tool-change event can be detected using a sensor of the robotic system, a sensor located within the medical environment, or based on user input from a UI or console. For example, an interface on a robotic manipulator or arm on which the first instrument is attached can include a switch, an actuator, a laser sensor, an infrared sensor, or a camera (through computer vision) that can detect the presence and absence of the first instrument and the second instrument on the interface. For example, the first instrument or the second instrument properly attaching to the interface (e.g., a mechanical fit) can trigger the switch or actuator to send a signal to the console that the first instrument or the second instrument has been successfully attached. The tool-change event corresponds to the sequence of detecting the presence of the first instrument, detecting an absence of the first instrument, and then detecting the presence of the second instrument. For example, the surgeon can provide user input specifically indicating the tool-change event by manipulating a user interactive element provided on the surgeon console 155. For example, in response to determining, based on the three-dimensional depth data 3110 (if enabled at that time) of the medical environment, that 1) a first instrument (e.g., a detected object) has moved to a distance less than a threshold from a detected robotic system or the arm/manipulator/interface thereof, 2) the first instrument (e.g., the same detected object) has moved at least a distance greater than a threshold away from the detected robotic system or the arm/manipulator/interface thereof, and 3) a second instrument (e.g., another detected object) has moved to a distance less than a threshold from the detected robotic system or the arm/manipulator/interface thereof, a tool-change event has been detected. In response to the tool-change event, one or more of the video data 3102 or the depth data 3110 can be disabled.
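The present, then absent, then present sequence described above for a tool-change might be detected with a small state tracker such as the sketch below; the instrument identifiers and the observation format are hypothetical.

```python
def detect_tool_change(presence_sequence):
    """Detect the present -> absent -> present pattern from a chronological
    sequence of (instrument_id, present) observations at a single manipulator
    interface. Returns (removed_id, attached_id) or None."""
    last_present = None
    removed = None
    for instrument_id, present in presence_sequence:
        if present and removed is not None and instrument_id != removed:
            return removed, instrument_id        # a different instrument was attached
        if present:
            last_present = instrument_id
            removed = None                       # (re)attachment of the same instrument
        elif last_present is not None:
            removed = last_present               # the first instrument was detached
    return None

seq = [("grasper", True), ("grasper", False), ("scissors", True)]
print(detect_tool_change(seq))  # ('grasper', 'scissors')
```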
[0213] For example, in a last-cannula-off event, a last cannula is removed from the patient, marking the end of a medical procedure. Thus, a last-cannula-off event corresponds to the state in which the medical procedure concludes and post-surgery activities (e.g., cleaning) begin. The last-cannula-off event can be detected using a sensor located within the medical environment or based on user input from a UI or console. For example, the surgeon can provide user input specifically indicating the last-cannula-off event by manipulating a user interactive element provided on the surgeon console 155. For example, in response to determining, based on the three-dimensional depth data 3110 (if enabled at that time) of the medical environment, that a cannula (e.g., a detected object) has moved at least a distance greater than a threshold away from a detected patient, a last-cannula-off event has been detected. In response to the last-cannula-off event, recording of at least some of the multimodal data can be enabled or triggered to analyze medical staff activities in connection with non-operative activities post-surgery. For example, one or more of the video data 3102 or the depth data 3110 can be enabled.
[0214] For example, in a table-moving-start event, a table (e.g., a surgical table supporting a patient undergoing a medical operation performed using a robotic medical system) that is paired with the robotic system is moved within the medical environment. For instance, the surgical table’s orientation and/or pose may be changed and one or more manipulators of the robotic medical system may change their configuration (e.g., pose, orientation, etc.) so as to remain docked to the patient during the table’s motion. The table-moving-start event can be detected using a sensor located within the medical environment or based on user input from a UI or console. For example, the surgeon can provide user input specifically indicating the table-moving-start event by manipulating a user interactive element provided on the surgeon console 155. For example, in response to determining, based on the three-dimensional depth data 3110 (if enabled at that time) of the medical environment, that a pose of the table (e.g., a detected object) has changed from its original pose, a table-moving-start event has been detected. In response to the table-moving-start event, recording of at least some of the multimodal data can be enabled or triggered to analyze activities and potential collisions while the table is being moved. For example, one or more of the video data 3102 or the depth data 3110 can be enabled.
[0215] For example, in a cart-drive-enabled (robot setup/roll-up start) event, a cart of the robotic system is moved or repositioned within the medical environment. The cart-drive-enabled event can be detected using a sensor located within the medical environment or based on user input from a UI or console. For example, the surgeon can provide user input specifically indicating the cart-drive-enabled event by manipulating a user interactive element provided on the surgeon console 155. For example, in response to determining, based on the three-dimensional depth data 3110 (if enabled at that time) of the medical environment, that the robotic system or a cart thereof has moved at least a distance greater than a threshold away from an original position, a cart-drive-enabled event has been detected. In response to the cart-drive-enabled event, recording of at least some of the multimodal data can be enabled or triggered to analyze activities and potential collisions while the cart is being moved and to provide pathing guidance for the cart. For example, the cart can include guidance features that allow the cart to move automatically and avoid collisions. For example, one or more of the video data 3102 or the depth data 3110 can be enabled.
[0216] For example, in a first-sterile-adapter-engaged (robot draping start) event, robot draping is triggered for a sterilization process to cover at least a portion of the robotic system. The first-sterile-adapter-engaged event can be detected using a sensor located within the medical environment or based on user input from a UI or console. For example, the surgeon can provide user input specifically indicating the first-sterile-adapter-engaged event (e.g., initiating a robot draping start command) by manipulating a user interactive element provided on the surgeon console 155. For example, in response to determining, based on the three-dimensional depth data 3110 (if enabled at that time) of the medical environment, that a first sterile adapter is at least a distance greater than a threshold away from an original undraped position, a first-sterile-adapter-engaged event has been detected. In response to the first-sterile-adapter-engaged event, recording of at least some of the multimodal data can be enabled or triggered to analyze the draping procedure to identify any sterilization breaches. For example, one or more of the video data 3102 or the depth data 3110 can be enabled.
[0217] For example, in a first-sterile-adapter-disengaged (robot draping off) event, robot undraping is triggered to retrieve the draping from the at least a portion of the robotic system. The first-sterile-adapter-disengaged event can be detected using a sensor located within the medical environment or based on user input from a UI or console. For example, the surgeon can provide user input specifically indicating the first-sterile-adapter-disengaged event (e.g., initiating a robot undraping start command) by manipulating a user interactive element provided on the surgeon console 155. For example, in response to determining, based on the three-dimensional depth data 3110 (if enabled at that time) of the medical environment, that a first sterile adapter is at least a distance greater than a threshold away from an original draped position, a first-sterile-adapter-disengaged event has been detected. In response to the first-sterile-adapter-disengaged event, recording of at least some of the multimodal data can be disabled. For example, one or more of the video data 3102 or the depth data 3110 can be disabled.
[0218] In some examples, a medical staff member (e.g., a surgeon) can provide user input specifically indicating to stop or to start recording of at least some of the multimodal data by manipulating a user interactive element provided on the surgeon console 155.
[0219] In some embodiments, each trigger event can also correspond to a segment (e.g., period, phase, task) of a medical procedure. In some embodiments, an Artificial Intelligence (AI) system communicably coupled to the sensing system 3260 for the medical environments can enable or disable recording of the multimodal data in response to determining a trigger event, such as OR events, phases, tasks, periods, and so on. For example, in response to determining that a phase, task, event, activity, or period is allowed to be recorded according to user input or predefined rules (e.g., based on privacy considerations), at least one stream of the plurality of streams of the multimodal data can be triggered to be recorded. On the other hand, in response to determining that a phase, task, event, activity, or period is disallowed to be recorded according to user input or predefined rules, recording of the at least one stream of the plurality of streams of the multimodal data can be disabled. The user input can be provided via an input device of the input/output system 3020.
[0220] Examples of tasks or activities include, for example, turnover, setup, sterile preparation, robot draping, patient in, patient preparation, intubation, patient draping, first cut, port placement, room preparation, robotic system rollup, robotic system docking, robotic system surgery, robotic system undocking, robotic system rollback, patient close, robotic system undraping, patient undraping, patient out, cleaning, and idle. Examples of phases or activity groups include turnover, pre-surgery, surgery, and post-surgery. The phase turnover includes tasks turnover, setup, sterile preparation, cleaning, and idle. The phase pre-surgery includes tasks robot draping, patient in, patient preparation, intubation, and patient draping. The phase surgery includes tasks first cut, port placement, room preparation, robotic system rollup, robotic system docking, robotic system surgery, robotic system undocking, robotic system
rollback, and patient close. The phase post-surgery includes tasks robotic system undraping, patient undraping, and patient out.
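The phase-to-task grouping enumerated above can be captured directly as a lookup structure, e.g., for mapping a recognized task back to its phase when applying phase-level recording rules. The sketch below simply restates that list; the variable names are illustrative.

```python
# Phase-to-task grouping as enumerated above.
PHASE_TASKS = {
    "turnover":     ["turnover", "setup", "sterile preparation", "cleaning", "idle"],
    "pre-surgery":  ["robot draping", "patient in", "patient preparation",
                     "intubation", "patient draping"],
    "surgery":      ["first cut", "port placement", "room preparation",
                     "robotic system rollup", "robotic system docking",
                     "robotic system surgery", "robotic system undocking",
                     "robotic system rollback", "patient close"],
    "post-surgery": ["robotic system undraping", "patient undraping", "patient out"],
}

# Inverted index: recognized task -> phase, for applying phase-level rules.
TASK_TO_PHASE = {task: phase for phase, tasks in PHASE_TASKS.items() for task in tasks}
print(TASK_TO_PHASE["patient in"])  # pre-surgery
```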
[0221] In some examples, based on user input or predefined rules, at least one data stream (e.g., the video data 3102 and the depth data 3110) of the plurality of data streams of the multimodal data can be enabled for a post-surgery phase. In response to determining that a current phase is a post-surgery phase, an enable command 3140 can be sent to the sensing system 3130 (e.g., to the sensors configured to output the at least one data stream) and/or to the storage system 3260 to enable recording of the at least one data stream.
[0222] In some examples, based on user input or predefined rules, at least one data stream (e.g., the video data 3102) of the plurality of data streams of the multimodal data is disabled for a surgery phase. The depth data 3110 can be enabled for the surgery phase given that the depth data 3110 does not capture PHI. In response to determining that a current phase is a surgery phase, a disable command 3140 can be sent to the sensing system 3130 (e.g., to the sensors configured to output the at least one data stream) and/or to the storage system 3260 to disable recording of the at least one data stream.
[0223] In some examples, based on user input or predefined rules, at least one data stream (e.g., the video data 3102) of the plurality of data streams of the multimodal data is disabled for a patient in task. The depth data 3110 can be enabled for the patient in task given that the depth data 3110 does not capture PHI. In response to determining that a current task is a patient in task, a disable command 3140 can be sent to the sensing system 3130 (e.g., to the sensors configured to output the at least one data stream) and/or to the storage system 3260 to disable recording of the at least one data stream.
[0224] In some examples, based on user input or predefined rules, at least one data stream (e.g., one or more of the video data 3102 or the depth data 3110) of the plurality of data streams of the multimodal data can be enabled for a patient draping task. In response to determining that a current task is a patient draping task, an enable command 3140 can be sent to the sensing system 3130 (e.g., to the sensors configured to output the at least one data stream) and/or to the storage system 3260 to enable recording of the at least one data stream. Thus, recording of
the at least one data stream (e.g., the video data 3102) of the plurality of data streams is disabled between the patient in task and the patient draping task.
[0225] In some examples, based on user input or predefined rules, at least one data stream (e.g., the video data 3102) of the plurality of data streams of the multimodal data is disabled for a patient undraping task. The depth data 3110 can be enabled for the patient undraping task given that the depth data 3110 does not capture PHI. In response to determining that a current task is a patient undraping task, a disable command 3140 can be sent to the sensing system 3130 (e.g., to the sensors configured to output the at least one data stream) and/or to the storage system 3260 to disable recording of the at least one data stream. Thus, recording of the at least one data stream (e.g., the video data 3102 and the depth data 3110) of the plurality of data streams is enabled between the patient draping task and the patient undraping task.
[0226] In some examples, based on user input or predefined rules, at least one data stream (e.g., the video data 3102) of the plurality of data streams of the multimodal data can be enabled for a patient out task. In response to determining that a current task is a patient out task, an enable command 3140 can be sent to the sensing system 3130 (e.g., to the sensors configured to output the at least one data stream) and/or to the storage system 3260 to enable recording of the at least one data stream. Thus, recording of the at least one data stream (e.g., the video data 3102) of the plurality of data streams is disabled between the patient undraping task and the patient out task. Flexibly defining phases and tasks at which recording can be enabled or disabled allows an operator to collect PHI-free data.
[0227] In some examples, a first data stream (e.g., video data 3102) is disabled at a given phase or task while a second data stream (e.g., depth data 3110) is enabled at the same phase or task. In some examples, a first data stream (e.g., depth data 3110) is enabled at a given phase or task while a modified second data stream (e.g., video data 3102) is enabled at the same phase or task. For example, in a modified video data stream, the focus of the camera capturing the modified video data stream may change as compared to another phase or task, objects such as human faces can be removed, blurred, or tokenized by the data collection and analysis system 3100, and so on. In other words, the rules and user inputs for enabling, disabling, or modifying a plurality of data streams can be individually specified for each data stream.
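By way of illustration only, a per-stream policy of the kind described above could be sketched as follows. The segment and stream names, the policy table, and the blur_faces placeholder are hypothetical; the modification step is only a stand-in for removal, blurring, or tokenization.

```python
from typing import Callable, Dict, Optional, Tuple, Union

Frame = bytes  # placeholder for one unit of stream data


def blur_faces(frame: Frame) -> Frame:
    """Placeholder for a de-identification step (removal, blurring, tokenization)."""
    return frame


# policy[(segment, stream)] -> "enable", "disable", or a modifier callable.
POLICY: Dict[Tuple[str, str], Union[str, Callable[[Frame], Frame]]] = {
    ("surgery", "depth_3110"): "enable",
    ("surgery", "video_3102"): blur_faces,    # record a modified stream
    ("patient in", "video_3102"): "disable",
}


def apply_policy(segment: str, stream: str, frame: Frame) -> Optional[Frame]:
    """Return the frame to record, a modified frame, or None when disabled."""
    action = POLICY.get((segment, stream), "disable")  # conservative default
    if action == "enable":
        return frame
    if callable(action):
        return action(frame)
    return None  # disabled: the frame is not recorded


print(apply_policy("patient in", "video_3102", b"frame"))  # None
```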
[0228] In some embodiments, user input or predefined rules can specify particular procedure modalities for which at least one data stream of the multimodal data can or cannot be recorded. In some examples, user input or predefined rules specify that at least one data stream can be recorded for medical procedures performed using robotic systems or a particular type of robotic systems. The data collection and analysis system 3100 can determine, based on the metadata 3108, whether a robotic system is used in a medical procedure or the particular type of the robotic system used in a medical procedure. In some examples, an AI system (including a machine learning model) can detect, using machine vision algorithms and based on the video data 3102 and/or the depth data 3110, that a robotic system or a particular type of robotic systems is present in the medical environment. In response to determining that a robotic system or a particular type of robotic systems is present in the medical environment and used in a medical procedure, recording of the at least one data stream can be enabled. In response to determining that a robotic system or a particular type of robotic systems is not present in the medical environment or is not used in a medical procedure, recording of the at least one data stream can be disabled.
[0229] In some examples, user input or predefined rules specify that at least one data stream cannot be recorded for a period of time during the medical procedure in which at least one individual, such as a patient, is present. In some examples, an AI system (including a machine learning model) can detect, using machine vision algorithms and based on the video data 3102 and/or the depth data 3110, that an individual is present in the medical environment. The AI system can further distinguish an individual who is a patient from an individual who is a member of the medical staff, using machine vision algorithms and based on the video data 3102 and/or the depth data 3110. In response to determining that an individual is lying down on a surgical table or a gurney, the AI system determines that the individual is a patient; in response to determining that an individual is standing up or operating a robotic system, the AI system determines that the individual is a member of the medical staff. In response to determining that an individual is present in the medical environment, recording of the at least one data stream can be disabled. In response to determining that no individual is present in the medical environment, recording of the at least one data stream can be enabled.
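By way of illustration only, the posture-based distinction above could be expressed as the rule sketched below. The DetectedPerson fields are hypothetical outputs of a machine-vision detector; the detection itself is abstracted away.

```python
from dataclasses import dataclass


@dataclass
class DetectedPerson:
    """Hypothetical output of a machine-vision detector on video/depth data."""
    posture: str              # e.g., "lying" or "standing"
    on_table_or_gurney: bool
    operating_robot: bool


def classify(person: DetectedPerson) -> str:
    """Posture-based rule from the text: lying on a table/gurney -> patient."""
    if person.posture == "lying" and person.on_table_or_gurney:
        return "patient"
    if person.posture == "standing" or person.operating_robot:
        return "medical staff"
    return "unknown"


def should_record(people) -> bool:
    """One possible rule: disable recording while a patient is present."""
    return not any(classify(p) == "patient" for p in people)


print(should_record([DetectedPerson("lying", True, False)]))  # False
```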
[0230] In some examples, the multimodal data can be used to generate the metrics 3120 in the manner described. User input or predefined rules can indicate that the multimodal data based on which the metrics 3120 are generated can be discarded after the metrics 3120 are generated.
[0231] In some embodiments, the recording of the multimodal data can be based on institutional data, analytics, and metrics. For example, the data collection and analysis system 3100 can determine metrics for a segment (e.g., tasks, phases, and periods) in real time, and recording of multimodal data can be triggered for medical procedures, tasks, phases, and periods having metrics that are outliers (e.g., having significantly more efficient or less efficient metric values) as compared to national, institutional, or other standards. The multimodal data for outlier cases is valuable for further analysis.
[0232] In some examples, the national, institutional, or other standards for a given metric can be implemented using one or more thresholds calculated based on statistical information of metric values over a given volume of medical procedures corresponding to the national, institutional, or other standards. In some embodiments, the statistical information can be generated across a large volume of historic medical procedures (e.g., as they are performed or after-the-fact) to build a database of metrics for different medical procedures, different medical environments (e.g., different theaters, different ORs, different hospitals), different types of medical procedures, different medical staff members or care teams, different experience levels of the medical staff members or care teams, different types of patients, different robotic systems, different instruments, different regions, countries, and so on. The diversity of historical knowledge captured in such metrics allows calculation of statistics of interest (e.g., mean, median, x-percentile, standard deviation, and so on) for these metrics to serve as thresholds. A set of medical procedures can serve as the basis for determining a threshold for a given metric. The set can be collected based on the types of metadata 3108 (e.g., medical procedure type, medical environments, medical staff, experience level, robotic systems, instruments, and so on).
[0233] In some embodiments, a threshold of a metric can be set as one or more standard deviations above or below the mean, the median, or a percentile. In examples in which the value so computed for the efficiency metric of the “drape” task, over a set of medical procedures that have occurred in a given OR, hospital, hospital group, region, or country, is 5 minutes, the threshold of the efficiency metric for the “drape” task can be set to 5 minutes. In some examples, the threshold can be predefined based on expert opinion, laws, and regulations. In response to determining that a metric value for a medical procedure or a segment thereof is above or below a given threshold, the metric value and the corresponding medical procedure or segment can be considered an outlier. In response to determining an outlier case, recording of the at least one data stream of the multimodal data can be enabled or disabled depending on the type of metric and predetermined rules.
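By way of illustration only, the threshold-and-outlier logic described above could be sketched as follows: a threshold is derived from statistics over a reference set of procedures (here, the mean plus k standard deviations), and a segment whose metric value exceeds the threshold is flagged as an outlier. The historical values below are hypothetical.

```python
from statistics import mean, stdev


def threshold_from_history(values, k=1.0):
    """Set the threshold k standard deviations above the mean of historical values."""
    return mean(values) + k * stdev(values)


def is_outlier(value, threshold):
    return value > threshold


# Hypothetical historical "drape" task durations (minutes) for a hospital group.
history = [4.2, 5.1, 4.8, 5.5, 4.9, 5.0, 4.7]
thr = threshold_from_history(history, k=2.0)
print(is_outlier(9.3, thr))  # True -> recording could be triggered for analysis
```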
[0234] In some embodiments, recording of multimodal data can be based on certain types of events detected using the multimodal data. In response to determining the occurrence of such an event, the recording of at least one data stream of the multimodal data can be enabled or disabled. In some examples, in response to determining that the “adverse event” metric 805c for a given medical procedure or segment (e.g., a phase or a task) is above a threshold, an adverse event can be detected, and the recording of at least one data stream of the multimodal data can be enabled or disabled as a response.
[0235] In some embodiments, recording of multimodal data can be based on data quality of the multimodal data. In some examples, in response to determining that data quality is below a threshold, the recording of at least one data stream of the multimodal data can be disabled as a response. Data quality can include frame rate, resolution, throughput, and so on. In some examples, a machine learning algorithm can determine a quality score based on occlusion of the view of the sensor.
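By way of illustration only, a data-quality gate of this kind could be sketched as follows. The quality fields and thresholds are assumptions; the occlusion score would come from a machine learning model as noted above.

```python
from dataclasses import dataclass


@dataclass
class StreamQuality:
    """Hypothetical quality measurements for one data stream."""
    frame_rate: float       # frames per second
    resolution: int         # e.g., vertical pixel count
    occlusion_score: float  # 0.0 (clear view) .. 1.0 (fully occluded)


def quality_ok(q: StreamQuality) -> bool:
    """Illustrative thresholds; recording would be disabled when this is False."""
    return q.frame_rate >= 15.0 and q.resolution >= 480 and q.occlusion_score <= 0.5


print(quality_ok(StreamQuality(frame_rate=8.0, resolution=1080, occlusion_score=0.1)))
# False -> disable recording of this stream
```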
[0236] In some embodiments, the data collection and analysis system 3100 (e.g., the robotic hub) described herein can enable (e.g., start) or disable (e.g., stop) recording multimodal data in response to determining a trigger event by sending an enable or disable command 3140 to the sensing system 3130 (e.g., to one or more of the sensors thereof) located within the medical environment. In some examples, disabling recording of a stream of the multimodal data includes one or more of not receiving the stream of data, disabling or switching off the sensors collecting the stream of data by sending instructions and commands to the sensors, not processing the stream of data (e.g., using a suitable processing component such as the processors 3010) if received, not transferring or sending the stream of data to another memory device (e.g., the memory component 3015 of another computing system or cloud storage), not
saving the stream of data in a memory device (e.g., the memory component 3015), and not storing the stream of data in a memory device. In some examples, enabling recording of a stream of the multimodal data includes one or more of receiving the stream of data, enabling or switching on the sensors collecting the stream of data by sending instructions and commands to the sensors, processing the stream of data (e.g., using a suitable processing component such as the processors 3010), transferring or sending the stream of data to another memory device (e.g., the memory component 3015 of another computing system or cloud storage), saving the stream of data in a memory device (e.g., the memory component 3015), and storing the stream of data in a memory device. The sensors enabled and disabled can be sensors that provide one or more of the video data 3102, the depth data 3110, the metadata 3108, the instrument data 3106, and so on.
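By way of illustration only, the sketch below shows one way an enable/disable command could be represented on the hub side, covering the alternative stages listed above (receive, process, transfer, store). The component and stream names are hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class StreamControl:
    """Stages that an enable/disable command can affect for one stream."""
    receive: bool = True
    process: bool = True
    transfer: bool = True
    store: bool = True


@dataclass
class Hub:
    """Hypothetical stand-in for the data collection and analysis system."""
    streams: dict = field(default_factory=dict)

    def command(self, stream_id: str, enable: bool) -> None:
        # Here a single command flips every stage for the stream; a real system
        # might flip only some stages (e.g., keep receiving but stop storing).
        self.streams[stream_id] = StreamControl(enable, enable, enable, enable)


hub = Hub()
hub.command("video_3102", enable=False)  # e.g., during the surgery phase
hub.command("depth_3110", enable=True)
print(hub.streams["video_3102"])  # all stages disabled for this stream
```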
[0237] In examples in which a system (e.g., a datacenter) in addition to the sensing system 3130 and the data collection and analysis system 3100 performs some aspect of recording the data, an enable/disable command 3140 can likewise be sent to that system, and the sensing system 3130 and the data collection and analysis system 3100 can disable providing (e.g., sending) the data to that system.
[0238] In some embodiments, disabling the recording of a data stream of the multimodal data includes erasing previously recorded data of the data stream. A certain amount of data is needed to perform certain processes involved in determining whether to disable the data stream, including segmenting the medical procedure into phases and tasks and determining metrics for those phases and tasks. The decision to disable recording of the data stream based on determined phases, tasks, and metrics is therefore made during or even after a medical procedure, phase, or task for which data recording is to be disabled. In such situations, the data collection and analysis system 3100 can retroactively remove, erase, delete, or destroy the data stream. The data stream can be identified according to the timestamps that define a phase or a task. In examples in which the data to be erased is stored in a system different from the data collection and analysis system 3100, the data collection and analysis system 3100 can send the disable command 3140, which includes an erase command that identifies the data to be erased.
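By way of illustration only, retroactive erasure by segment timestamps could be sketched as follows; storage here is simply an in-memory list of (timestamp, data) records, which is an assumption made for brevity.

```python
def erase_segment(records, start_ts, end_ts):
    """Return the records with everything inside [start_ts, end_ts] removed."""
    return [(ts, data) for ts, data in records if not (start_ts <= ts <= end_ts)]


records = [(10.0, b"a"), (20.0, b"b"), (30.0, b"c")]
# Erase data recorded during a segment later determined to be non-recordable.
records = erase_segment(records, start_ts=15.0, end_ts=25.0)
print(records)  # [(10.0, b'a'), (30.0, b'c')]
```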
[0239] FIG. 34 is a flowchart diagram illustrating an example method 3400 for providing smart data collection in a medical environment for a medical procedure, according to some
embodiments. At 3410, a plurality of streams of multimodal data is received. The multimodal data includes at least the robotic system data 3104 of a robotic medical system, instrument data 3106 of an instrument, video data 3102 of the medical environment, and depth data 3110 of the medical environment. At 3420, at least one phase, and at least one task in each of the at least one phase, are determined using the plurality of streams of the multimodal data. At 3430, a trigger event is determined based at least in part on the at least one phase and the at least one task. At 3440, recording of at least one stream of the plurality of streams of the multimodal data is enabled or disabled in response to the trigger event.
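By way of illustration only, the overall flow of method 3400 could be sketched as follows. The segmentation model, trigger rules, and recorder below are hypothetical stand-ins rather than the components described in the disclosure.

```python
def method_3400(streams, segment_model, trigger_rules, recorder):
    # 3410: receive the plurality of streams of multimodal data.
    multimodal = {key: streams[key] for key in
                  ("robotic_system_3104", "instrument_3106",
                   "video_3102", "depth_3110")}
    # 3420: determine at least one phase and at least one task in each phase.
    phase, task = segment_model(multimodal)
    # 3430: determine a trigger event based on the phase and task.
    decisions = trigger_rules(phase, task)   # e.g., {"video_3102": False, ...}
    # 3440: enable or disable recording of at least one stream in response.
    for stream_id, enable in decisions.items():
        (recorder.enable if enable else recorder.disable)(stream_id)


class _Recorder:
    def __init__(self):
        self.state = {}

    def enable(self, stream_id):
        self.state[stream_id] = True

    def disable(self, stream_id):
        self.state[stream_id] = False


rec = _Recorder()
method_3400(
    streams={"robotic_system_3104": {}, "instrument_3106": {},
             "video_3102": {}, "depth_3110": {}},
    segment_model=lambda data: ("surgery", "robotic system surgery"),
    trigger_rules=lambda phase, task: {"video_3102": False, "depth_3110": True},
    recorder=rec,
)
print(rec.state)  # {'video_3102': False, 'depth_3110': True}
```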
Remarks
[0240] The drawings and description herein are illustrative. Consequently, neither the description nor the drawings should be construed so as to limit the disclosure. For example, titles or subtitles have been provided simply for the reader’s convenience and to facilitate understanding. Thus, the titles or subtitles should not be construed so as to limit the scope of the disclosure, e.g., by grouping features which were presented in a particular order or together simply to facilitate understanding. Unless otherwise defined herein, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. In the case of conflict, this document, including any definitions provided herein, will control. A recital of one or more synonyms herein does not exclude the use of other synonyms. The use of examples anywhere in this specification including examples of any term discussed herein is illustrative only and is not intended to further limit the scope and meaning of the disclosure or of any exemplified term.
[0241] Similarly, despite the particular presentation in the figures herein, one skilled in the art will appreciate that actual data structures used to store information may differ from what is shown. For example, the data structures may be organized in a different manner, may contain more or less information than shown, may be compressed and/or encrypted, etc. The drawings and disclosure may omit common or well-known details in order to avoid confusion. Similarly, the figures may depict a particular series of operations to facilitate understanding, which are simply exemplary of a wider class of such collections of operations. Accordingly, one will readily recognize that additional, alternative, or fewer operations may often be used to achieve the same purpose or effect depicted in some of the flow diagrams. For example, data may be
encrypted, though not presented as such in the figures; items may be considered in different looping patterns (“for” loop, “while” loop, etc.), or sorted in a different manner, to achieve the same or similar effect; and so on.
[0242] Reference herein to “an embodiment” or “one embodiment” means that at least one embodiment of the disclosure includes a particular feature, structure, or characteristic described in connection with the embodiment. Thus, the phrase “in one embodiment” in various places herein is not necessarily referring to the same embodiment in each of those various places. Separate or alternative embodiments may not be mutually exclusive of other embodiments. One will recognize that various modifications may be made without deviating from the scope of the embodiments.
Claims
1. A system, comprising: one or more processors, coupled with memory, to: receive a plurality of streams of multimodal data comprising robotic system data of a robotic medical system, instrument data of an instrument, video data of a medical environment, and depth data of the medical environment; determine at least one phase and at least one task in each of the at least one phase using the plurality of streams of the multimodal data; determine a trigger event based at least in part on the at least one phase and the at least one task; and enable recording or disable recording of at least one stream of the plurality of streams of the multimodal data in response to the trigger event.
2. The system of claim 1, wherein the instrument data comprises at least one of instrument imaging data or instrument kinematics data.
3. The system of claim 1, wherein the depth data is determined using two or more depth acquiring sensors located in or around the medical environment in which the medical procedure is performed.
4. The system of claim 1, wherein the video data is received from two or more visual image sensors located in or around the medical environment in which the medical procedure is performed.
5. The system of claim 1, wherein the plurality of streams of the multimodal data further comprises metadata, wherein the metadata comprises at least one of identifying information of the medical procedure, identifying information of at least one medical environment in which the medical procedure is performed, identifying information of medical staff by which the medical procedure is performed, experience level of the medical staff, patient complexity,
patient health parameters or indicators, or identifying information of at least one robotic medical system or at least one instrument used in the medical procedure.
6. The system of claim 1, wherein the robotic system data indicates system events of the robotic medical system corresponding to a state or an activity of the robotic medical system; and the robotic system data comprises a timeline based on timestamps of the system events.
7. The system of claim 1, wherein the robotic system data is determined based on at least one of: input received by a console of the robotic medical system from a user; or sensor data of a sensor on the robotic medical system.
8. The system of claim 1, wherein determining a trigger event comprises determining that a user of a console of the robotic medical system is in the console based on at least one of a sensor of the robotic medical system, a sensor located within the medical environment, or user input of the user received by the console; and the recording of at least one stream of the plurality of streams of the multimodal data is disabled when or in response to the user being in the console.
9. The system of claim 1, wherein determining a trigger event comprises determining that a user of a console of the robotic medical system is out of the console based on at least one of a sensor of the robotic medical system, a sensor located within the medical environment, or user input of the user received by the console; and the recording of at least one stream of the plurality of streams of the multimodal data is enabled when or in response to the user being out of the console.
10. The system of claim 1, wherein
determining a trigger event comprises determining that a user of a console of the robotic medical system is operating the robotic medical system or the instrument attached to the robotic medical system based on at least one of a sensor of the robotic medical system, a sensor located within the medical environment, or user input of the user received by the console; and the recording of at least one stream of the plurality of streams of the multimodal data is disabled when or in response to the user operating the robotic medical system or the instrument.
11. The system of claim 1, wherein determining a trigger event comprises determining that the instrument is attached to the robotic medical system based on at least one of a sensor of the robotic medical system, a sensor located within the medical environment, or user input of the user received by a console of the robotic medical system; and the recording of at least one stream of the plurality of streams of the multimodal data is disabled when or in response to the instrument being attached to the robotic medical system.
12. The system of claim 1, wherein determining a trigger event comprises determining that the instrument is removed from the robotic medical system based on at least one of a sensor of the robotic medical system, a sensor located within the medical environment, or user input of the user received by a console of the robotic medical system; and the recording of at least one stream of the plurality of streams of the multimodal data is enabled when or in response to the instrument being removed from the robotic medical system.
13. The system of claim 1, wherein determining a trigger event comprises determining that a last cannula is removed from a patient based on at least one of a sensor located within the medical environment or user input of the user received by a console of the robotic medical system; and the recording of at least one stream of the plurality of streams of the multimodal data is enabled when or in response to the last cannula being removed from the patient.
14. The system of claim 1, wherein
determining a trigger event comprises determining that a table paired with the robotic medical system has initiated moving based on at least one of a sensor located within the medical environment or user input of the user received by a console of the robotic medical system; and the recording of at least one stream of the plurality of streams of the multimodal data is enabled when or in response to initiating moving the table.
15. The system of claim 1, wherein determining a trigger event comprises determining that the robotic medical system has initiated moving based on at least one of a sensor located within the medical environment or user input of the user received by a console of the robotic medical system; and the recording of at least one stream of the plurality of streams of the multimodal data is enabled when or in response to initiating moving the robotic medical system.
16. The system of claim 1, wherein determining a trigger event comprises determining, based on at least one of a sensor located within the medical environment or user input of the user received by a console of the robotic medical system, that the robotic medical system has initiated draping of at least a portion of the robotic medical system; and the recording of at least one stream of the plurality of streams of the multimodal data is enabled when or in response to initiating the draping.
17. The system of claim 1, wherein determining a trigger event comprises determining, based on user input of the user received by a console of the robotic medical system, that the recording of at least one stream of the plurality of streams of the multimodal data is enabled or disabled.
18. The system of claim 1, wherein the one or more processors determine a metric value for each of the at least one phase and the at least one task.
19. The system of claim 18, wherein the metric value comprises one or more of: a metric value associated with temporal workflow; a metric value associated with scheduling;
a metric value associated with human resources; a metric value for an adverse event; a metric value for a headcount in a medical environment; or a metric value for traffic in the medical environment.
20. The system of claim 18, wherein the metric value comprises one or more of: an Efficiency metric value; a Consistency metric value; an Adverse Events metric value; a Case Volume metric value; a First Case Turnovers metric value; a Delay metric value; an OR Traffic metric value; a Room Layout metric value; and a Modality Conversion metric value.
21. The system of claim 18, wherein the trigger event comprises determining that the metric value for a phase of the at least one phase or a task of the at least one task is above or below a threshold, wherein the threshold is determined using metric values of a plurality of phases or tasks of a plurality of medical procedures.
22. The system of claim 1, wherein at least one of: the trigger event comprises enabling recording of the at least one stream for a first phase or a first task, and the at least one stream is enabled in response to detecting the first phase or the first task; or the trigger event comprises disabling recording of the at least one stream for a second phase or a second task, and the at least one stream is disabled in response to detecting the second phase or the second task.
23. The system of claim 1, wherein the trigger event comprises detecting the robotic medical system;
the one or more processors: detect the robotic medical system using at least one of the depth data or the video data; and enable recording of the at least one stream in response to detecting the robotic medical system.
24. The system of claim 1, wherein the trigger event comprises detecting an individual; the one or more processors: detect the individual using at least one of the depth data or the video data; and disable recording of the at least one stream in response to detecting the individual.
25. The system of claim 1, wherein the trigger event comprises detecting an adverse event; the one or more processors: detect the adverse event using at least one of the depth data or the video data; and disable or enable recording of the at least one stream in response to detecting the adverse event.
26. The system of claim 1, wherein the trigger event comprises detecting data quality of the at least one stream; the one or more processors: detect the data quality of the at least one stream is below a threshold; and disable recording of the at least one stream in response to detecting that the data quality of the at least one stream is below the threshold.
27. The system of claim 1, wherein recording of at least one stream of the plurality of streams of the multimodal data comprises one or more of: processing of the at least one stream of the plurality of streams of the multimodal data;
transferring the at least one stream of the plurality of streams of the multimodal data to another memory device; saving the at least one stream of the plurality of streams of the multimodal data; or storing the at least one stream of the plurality of streams of the multimodal data.
28. A non-transitory computer-readable medium comprising instructions configured to cause the one or more processors of the system of any one of claims 1-27 to perform the operations of the one or more processors in claims 1-27.
29. A method, comprising: receiving a plurality of streams of multimodal data comprising robotic system data of a robotic medical system, instrument data of an instrument, video data of a medical environment, and depth data of the medical environment; determining at least one phase and at least one task in each of the at least one phase using the plurality of streams of the multimodal data; determining a trigger event based at least in part on the at least one phase and the at least one task; and enabling or disabling recording of at least one stream of the plurality of streams of the multimodal data in response to the trigger event.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202363616232P | 2023-12-29 | 2023-12-29 | |
| US63/616,232 | 2023-12-29 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025145069A1 true WO2025145069A1 (en) | 2025-07-03 |
Family
ID=94393927
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2024/062134 Pending WO2025145069A1 (en) | 2023-12-29 | 2024-12-27 | Intelligent data collection for medical environments |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025145069A1 (en) |
- 2024-12-27 WO PCT/US2024/062134 patent/WO2025145069A1/en active Pending
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20180271615A1 (en) * | 2017-03-21 | 2018-09-27 | Amit Mahadik | Methods and systems to automate surgical interventions |
| WO2023203104A1 (en) * | 2022-04-20 | 2023-10-26 | Covidien Lp | Dynamic adjustment of system features, control, and data logging of surgical robotic systems |
Non-Patent Citations (3)
| Title |
|---|
| CARION, NICOLAS ET AL.: "End-to-End Object Detection with Transformers", arXiv preprint arXiv:2005.12872, 2020 |
| MEINHARDT, TIM ET AL.: "TrackFormer: Multi-Object Tracking with Transformers", arXiv preprint arXiv:2101.02702, 2021 |
| REDMON, JOSEPH ET AL.: "You Only Look Once: Unified, Real-Time Object Detection", arXiv preprint arXiv:1506.02640, 2015 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Padoy | Machine and deep learning for workflow recognition during surgery | |
| US11348682B2 (en) | Automated assessment of surgical competency from video analyses | |
| US20220334787A1 (en) | Customization of overlaid data and configuration | |
| JP6949128B2 (en) | system | |
| EP3505129A1 (en) | Control of a surgical system through a surgical barrier | |
| US12220181B2 (en) | Camera control systems and methods for a computer-assisted surgical system | |
| Ebert et al. | Invisible touch—Control of a DICOM viewer with finger gestures using the Kinect depth camera | |
| US20240324852A1 (en) | Systems and interfaces for computer-based internal body structure assessment | |
| US20240390103A1 (en) | Analysis of video data for addressing time sensitive situations in surgical procedures | |
| US20250204987A1 (en) | Generative artificial intelligence for generating irreversible, synthetic medical procedures videos | |
| US20250157636A1 (en) | Graphical user interface for discovering efficiency information for surgical and hospital processes | |
| Stauder et al. | Surgical data processing for smart intraoperative assistance systems | |
| US12094205B2 (en) | User switching detection during robotic surgeries using deep learning | |
| EP4355247B1 (en) | Joint identification and pose estimation of surgical instruments | |
| WO2025145069A1 (en) | Intelligent data collection for medical environments | |
| WO2025019239A1 (en) | Surgical theater nonoperative period analysis and optimization | |
| Lim et al. | Contagious infection-free medical interaction system with machine vision controlled by remote hand gesture during an operation | |
| US20250316371A1 (en) | Ai-based inventory prediction and optimization for medical procedures | |
| US20250226084A1 (en) | Automated root cause analysis for medical procedures and robotic surgery program optimization | |
| US20250299835A1 (en) | System architecture and method for data-free ai model deployment | |
| US20250384993A1 (en) | System architecture and methods for generating sterile processing analytics | |
| WO2025019245A1 (en) | Digital management and coordination of surgical theater tasks | |
| KR102856164B1 (en) | Apparatus for tracking operating room tools based on artificial intelligence | |
| WO2025015087A1 (en) | Clustered representational learning for surgical theater data | |
| WO2025049409A1 (en) | Latent space training for surgical theater data |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 24847335 Country of ref document: EP Kind code of ref document: A1 |