
WO2025036993A1 - Dynamic view selector for multiple operating room observation - Google Patents


Info

Publication number
WO2025036993A1
Authority
WO
WIPO (PCT)
Prior art keywords
operating room
display
priority
data
data streams
Prior art date
Legal status
Pending
Application number
PCT/EP2024/073048
Other languages
English (en)
Inventor
Danail V. Stoyanov
Petros GIATAGANAS
Gauthier Camille Louis GRAS
Imanol Luengo Muntion
Current Assignee
Digital Surgery Ltd
Original Assignee
Digital Surgery Ltd
Priority date
Filing date
Publication date
Application filed by Digital Surgery Ltd filed Critical Digital Surgery Ltd

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 40/20 - ICT specially adapted for the management or administration of healthcare resources or facilities, e.g. managing hospital staff or surgery rooms
    • G16H 20/40 - ICT specially adapted for therapies or health-improving plans relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
    • G16H 40/63 - ICT specially adapted for the operation of medical equipment or devices for local operation
    • G16H 40/67 - ICT specially adapted for the operation of medical equipment or devices for remote operation
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 - Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/25 - User interfaces for surgical systems

Definitions

  • the present disclosure is generally related to computing technology, and particularly to improvements to systems to provide multiple operating room observation.
  • Medical centers (e.g., hospitals, clinics, outpatient surgery centers, etc.) face demands on services, such as performing surgical procedures, based on limited health care resources, such as operating rooms, surgeons, staff, instruments, etc.
  • Inefficiency in the process of patient care involving surgical services could be extremely costly to medical centers, patients, and society in general.
  • a computer-implemented method includes receiving a plurality of operating room data streams associated with a plurality of operating rooms at an operating room dashboard interface and determining, by a display priority selector, a display order priority of two or more sources selected from the operating room data streams based on an importance score.
  • the method also includes arranging, by the display priority selector, an order of display of the two or more sources on the operating room dashboard interface based on the display order priority.
  • the method further includes changing, by the display priority selector, the order of display of the two or more sources on the operating room dashboard interface based on detecting a priority change condition.
  • a computer program product includes a memory device having computer executable instructions stored thereon, which when executed by one or more processors cause the one or more processors to perform operations including receiving a plurality of operating room data streams associated with a plurality of operating rooms at an operating room dashboard interface, arranging an order of display of two or more of the operating room data streams on the operating room dashboard interface based on a display order priority, and changing the order of display of the two or more operating room data streams on the operating room dashboard interface based on detecting a priority change condition.
  • a system includes a memory system and a processing system coupled to the memory system.
  • the processing system is configured to execute a plurality of instructions to determine a display order priority of two or more sources selected from a plurality of operating room data streams associated with a plurality of operating rooms at an operating room dashboard interface, arrange an order of display of the two or more sources on the operating room dashboard interface based on the display order priority, and change the order of display of the two or more sources on the operating room dashboard interface based on detecting a priority change condition.
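  • As a non-limiting illustration (not part of the disclosure), the following Python sketch shows one way the display order priority described above could be computed and re-evaluated when a priority change condition is detected; the stream fields, the example scores, and the change test are assumptions.

    # Minimal sketch of the claimed display-ordering flow; all names, fields,
    # and the priority-change rule are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class ORDataStream:
        operating_room: str
        source: str                # e.g., "endoscopic", "room camera", "instrument data"
        importance_score: float    # assumed to come from an importance model

    def display_order(streams):
        """Arrange sources in descending order of importance (display order priority)."""
        return sorted(streams, key=lambda s: s.importance_score, reverse=True)

    def priority_change_detected(old_order, new_order):
        """A simple priority change condition: the ranking of sources differs."""
        return [(s.operating_room, s.source) for s in old_order] != \
               [(s.operating_room, s.source) for s in new_order]

    # Example usage with made-up scores.
    streams = [
        ORDataStream("OR 1", "endoscopic", 0.4),
        ORDataStream("OR 2", "endoscopic", 0.9),
        ORDataStream("OR 3", "room camera", 0.6),
    ]
    current = display_order(streams)
    streams[0].importance_score = 0.95   # e.g., a complication is detected in OR 1
    updated = display_order(streams)
    if priority_change_detected(current, updated):
        current = updated                # rearrange the dashboard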
  • FIG. 1 depicts a computer-assisted surgery (CAS) system according to one or more aspects
  • FIG. 2 depicts an operating room data stream collector according to one or more aspects
  • FIG. 3 depicts a system for prediction generation that can be incorporated according to one or more aspects;
  • FIG. 4 depicts an example of an operating room dashboard interface according to one or more aspects;
  • FIG. 5 depicts an example of a display priority adjustment for an operating room dashboard interface according to one or more aspects
  • FIG. 6 depicts another example of a display priority adjustment for an operating room dashboard interface according to one or more aspects
  • FIG. 7 depicts a flowchart of a method for dynamic view adjustment for multiple operating room observation according to one or more aspects.
  • FIG. 8 depicts a computer system according to one or more aspects.
  • Exemplary aspects of technical solutions described herein relate to, among other things, devices, systems, methods, computer-readable media, techniques, and methodologies for dynamic view adjustment for multiple operating room observation.
  • machine learning and/or computer vision can be used to interpret operating room data streams to improve a computerized management system of a medical center.
  • aspects of the technical solutions herein improve computerized management of operating rooms in the medical center by determining which of the operating room data streams are the highest priority for display and observation on an operating room dashboard interface.
  • An operating room dashboard interface can provide insights into current activities and utilization of a medical center’s resources, such as operating rooms, and other resources (human and materials) used during surgical procedures.
  • the operating room dashboard interface can provide such insights based on surgical video(s) captured of the surgical procedures being performed.
  • the video(s) can include intracorporeal video from within a patient’s body and extracorporeal video captured from within the operating room(s).
  • the insights provided can be automatically determined to facilitate operators, such as staff, administrators, surgeons, etc. of the medical center to take one or more responsive actions.
  • a dynamic view selector can use machine learning results from localized analysis at the operating room level to make higher level decisions about which sources are the most useful to present and/or highlight on the operating room dashboard interface.
  • structures can be predicted dynamically and substantially in real-time as surgical data is being captured and analyzed by technical solutions described herein.
  • a predicted structure can be an anatomical structure, a surgical instrument, etc.
  • Exemplary aspects of technical solutions described herein further facilitate generating augmented views of surgical sites using semantic surgical representations based on the predictions of the one or more structures in the surgical data.
  • technical solutions herein can determine phases, actions, and other specific aspects of a surgical procedure being performed. Based on the determination of such information about the ongoing surgical procedures, aspects of the technical solutions herein facilitate providing insights (e.g., notifications, highlights, etc.) for an administrator to manage one or more resources.
  • resources can include scheduling resources, assigning resources, replacing resources, etc. As noted elsewhere, resources can include human resources (e.g., surgeons, staff, etc.) and/or material resources (e.g., operating rooms, tools, equipment, pharmaceuticals, dyes, etc.).
  • FIG. 1 depicts an example of a system 100 according to one or more aspects.
  • the system 100 includes at least a surgical data capture system 102, a video/audio recording system 104, and a surgical instrumentation system 106, in each operating room (OR) 101.
  • actor 112 can be medical personnel that uses the system 100 to perform a surgical procedure on a patient 110.
  • Medical personnel can be a surgeon, assistant, nurse, administrator, or any other actor that interacts with the system 100 in a surgical environment.
  • the surgical procedure can be any type of surgery such as but not limited to cataract surgery, laparoscopic cholecystectomy, endoscopic endonasal transsphenoidal approach (eTSA) to resection of pituitary adenomas, or any other surgical procedure.
  • the actor 112 can be a technician, an administrator, an engineer, or any other such personnel that interacts with the system 100.
  • the actor 112 can record data from the system 100, configure/update one or more attributes of the system 100, review past performance of the system 100, repair the system 100, etc.
  • a surgical procedure can include multiple phases, and each phase can include one or more surgical actions.
  • a “surgical action” can include an incision, a compression, a stapling, a clipping, a suturing, a cauterization, a sealing, or any other such actions performed to complete a phase in the surgical procedure.
  • a “phase” represents a surgical event that is composed of a series of steps (e.g., closure).
  • a “step” refers to the completion of a named surgical objective (e.g., hemostasis).
  • certain surgical instruments 108 (e.g., forceps) can be used to perform one or more of the surgical actions.
  • the surgical instrumentation system 106 provides electrical energy to operate one or more surgical instruments 108 to perform the surgical actions.
  • the electrical energy triggers an activation in the surgical instrument 108.
  • the electrical energy can be provided in the form of an electrical current or an electrical voltage.
  • the activation can cause a surgical action to be performed.
  • the surgical instrumentation system 106 can further include electrical energy sensors, electrical impedance sensors, force sensors, bubble and occlusion sensors, and various other types of sensors.
  • the electrical energy sensors can measure and indicate an amount of electrical energy applied to one or more surgical instruments 108 being used for the surgical procedure.
  • the impedance sensors can indicate an amount of impedance measured by the surgical instruments 108, for example, from the tissue being operated upon.
  • the force sensors can indicate an amount of force being applied by the surgical instruments 108. Measurements from various other sensors, such as position sensors, pressure sensors, flow meters, can also be input.
  • the video/audio recording system 104 shown in FIG. 1 includes one or more cameras 105, such as operating room cameras, endoscopic cameras, etc.
  • the cameras 105 capture video data of the surgical procedure being performed.
  • the video/audio recording system 104 includes one or more video capture devices that can include cameras 105 placed in the surgical room to capture events surrounding (i.e., outside) the patient being operated upon.
  • the video/audio recording system 104 further includes cameras 105 that are passed inside (e.g., endoscopic cameras) the patient 110 to capture endoscopic data.
  • the endoscopic data provides video and images of the surgical procedure.
  • the video/audio recording system 104 can also include one or more microphones 107, which can be located on a central console, affixed (e.g., via a clip or other means) to medical personnel or objects in the operating room, and/or attached to or integrated into one or more devices in the operating room. Examples of devices in the operating room can include, but are not limited to surgical tools, video recorders, cameras, goggles, personal computers, smart watches, and/or smart phones.
  • the microphones 107 capture audio data, and can be wired or wireless or a combination of both.
  • the video data captured by the cameras 105 and the audio data captured by the microphones 107 can both include timestamps (or other indicia) that are used to correlate the video data and the audio data.
  • the timestamps can be used to correlate, or synchronize, the sounds captured in the operating room with the images of the medical procedure performed in the operating room.
  • the surgical data capture system 102 includes one or more memory devices, one or more processors, a user interface device, among other components.
  • the surgical data capture system 102 can execute one or more computer executable instructions. The execution of the instructions facilitates the surgical data capture system 102 to perform one or more methods, including those described herein.
  • the surgical data capture system 102 can communicate with other computing systems via a wired and/or a wireless network.
  • the surgical data capture system 102 includes one or more trained machine learning models that can detect and/or predict features of/from the surgical procedure that is being performed, or has been performed earlier.
  • Features can include structures, such as anatomical structures and surgical instruments 108, in the surgical procedure.
  • Features can further include events, such as phases and actions, in the surgical procedure.
  • Features that are detected can further include the actor 112 and the patient 110.
  • the surgical data capture system 102, in one or more examples, can provide recommendations for subsequent actions to be taken by actor 112. Alternatively, or in addition, the surgical data capture system 102 can provide one or more reports based on the detections.
  • the detections by the machine learning models can be performed in an autonomous or semi-autonomous manner.
  • the machine learning models can include artificial neural networks, such as deep neural networks, convolutional neural networks, recurrent neural networks, encoders, decoders, or any other type of machine learning models.
  • the machine learning models can be trained in a supervised, unsupervised, or hybrid manner.
  • the machine learning models can be trained to perform detection and/or prediction using one or more types of data acquired by the system 100.
  • the machine learning models can use the video data captured via the video/audio recording system 104.
  • the machine learning models use the surgical instrumentation data from the surgical instrumentation system 106.
  • the machine learning models use a combination of the video and the surgical instrumentation data.
  • the machine learning models can also use audio data captured during the surgical procedure.
  • the audio data can include sounds emitted by the surgical instrumentation system 106 while activating one or more surgical instruments 108.
  • the audio data can include voice commands, snippets, or dialog from one or more actors 112.
  • the audio data can further include sounds made by the surgical instruments 108 during their use.
  • the machine learning models can detect surgical actions, surgical phases, anatomical structures, surgical instruments, and various other features from the data associated with a surgical procedure. The detection can be performed in real-time in some examples.
  • the surgical data capture system 102 analyzes the surgical data, i.e., the various types of data captured during the surgical procedure, in an offline manner (e.g., post-surgery).
  • the surgical data capture system 102 from each operating room 101 can be in communication with an operating room dashboard interface 120.
  • the operating room dashboard interface 120 can be (or be a part of) a computing system.
  • the operating room dashboard interface 120 can be a part of a hospital management system (HMS), or any other such enterprise level system.
  • the operating room dashboard interface 120 can include the machine learning capabilities that are described herein.
  • the operating room dashboard interface 120 receives information and commands from the one or more surgical data capture systems 102. Further, in some aspects, the operating room dashboard interface 120 provides data and commands to the surgical data capture system 102.
  • the operating room dashboard interface 120 can interact with a display priority selector 122 and an importance model 124 that may be collectively referred to as a dynamic view selector 125.
  • the display priority selector 122 can determine which operating room data streams and/or specific data sources from the operating room data streams should have a higher viewing priority for a user of the operating room dashboard interface 120.
  • the importance model 124 can determine an importance score associated with a surgical procedure being performed in each of the operating rooms 101. The importance score can quantify a medical challenge according to a scale.
  • the importance model 124 can use machine learning summary data provided with at least one of the operating room data streams to determine the importance score.
  • the surgical data capture system 102 of each operating room 101 may incorporate one or more machine learning models that provide insights about a current and/or predicted state of a surgical procedure. This information can be output to the dynamic view selector 125 for use by the importance model 124 to assess an importance score and compare the importance scores across operating rooms 101 that are currently active.
  • phase information can be used to enable viewers of the operating room dashboard interface 120 to observe closing phases of a surgical procedure, thereby knowing the operation is ending or has been successful (or the opposite for the start of the surgical procedure); observe a commonly complicated phase in order to see if a complication occurs or if a specialist assistant needs to be called in; and/or limit the display to procedures currently under way that are not in setup or closing phases, e.g., relevant procedures of interest where observation may be useful.
  • a higher priority can be assigned by the display priority selector 122 to displaying phases where a complication has occurred or where there is an event (e.g., excessive bleeding or smoke) indicating that a complication may occur.
  • This information can prioritize highlighting a specific video/source view.
  • Phases of particular interest may also be based on instrument use or utilization; for example, a phase containing stapling or multiple stapling lines may be prioritized to monitor the number of staples used or the instrument use itself with regard to safety.
  • the display priority selector 122 and/or importance model 124 can determine priority and/or importance scores based in part on procedural complexity.
  • procedural complexity may be graded according to the Parkland grading scale (or other scales).
  • the procedural grade complexity can then be used to prioritize the observation of complex cases over more routine ones.
  • prioritization can also be based on anatomical detection systems, for example automated video detection of inflamed anatomies or adhesions or other physiological signs that could be surrogate measures of factors leading to surgical complexity.
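  • To make this concrete, the following hedged sketch shows one way an importance model could combine the current phase, detected events (e.g., bleeding or smoke), and a procedural complexity grade such as a Parkland grade into a single importance score; the weights, phase names, and combination rule are illustrative assumptions, not values given in the disclosure.

    # Illustrative importance scoring; phase weights, event weights, and the use
    # of a 1-5 complexity grade (e.g., Parkland grade) are assumptions.
    PHASE_WEIGHT = {"setup": 0.1, "dissection": 0.6, "stapling": 0.8, "closing": 0.2}
    EVENT_WEIGHT = {"bleeding": 0.9, "smoke": 0.5}

    def importance_score(current_phase, detected_events, complexity_grade):
        """Combine phase, event, and complexity contributions into one score."""
        score = PHASE_WEIGHT.get(current_phase, 0.3)
        score += sum(EVENT_WEIGHT.get(e, 0.0) for e in detected_events)
        score += 0.1 * complexity_grade   # e.g., Parkland grade 1-5
        return score

    # Example: a stapling phase with detected bleeding in a grade-4 case.
    print(importance_score("stapling", ["bleeding"], 4))   # 0.8 + 0.9 + 0.4 ≈ 2.1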
  • FIG. 2 depicts an operating room data stream collector 200 according to one or more aspects.
  • the operating room data stream collector 200 can combine multiple sources of data from the operating room data streams of operating rooms 101 of FIG. 1.
  • a video stream 202, patient sensor data stream 204, and/or a surgical instrument data stream 206 can be combined by a data concentrator 208 to produce an operating room data stream 210 for one or more of the operating rooms 101.
  • the video stream 202 can include video from multiple cameras 105 of FIG. 1, such as one or more of an endoscopic camera and a room camera. The video streams can also include video from cameras 105 positioned to observe the surgical team.
  • the patient sensor data stream 204 can include patient condition data collected from multiple sensor systems with respect to time.
  • the surgical instrument data stream 206 can include current information, settings, and/or use information about the surgical instruments 108 of FIG. 1.
  • Collectively, the video stream 202, patient sensor data stream 204, and/or surgical instrument data stream 206 represent different sources that can be available for viewing on a per-operating room 101 basis.
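  • A minimal sketch of how a data concentrator such as the data concentrator 208 could bundle the per-room sources into a single operating room data stream is shown below; the dictionary-based representation and field names are hypothetical assumptions.

    # Hypothetical data concentrator: bundles the per-room sources described
    # above into one operating room data stream keyed by room identifier.
    def concentrate(room_id, video_frames, patient_samples, instrument_records):
        """Combine video, patient sensor, and instrument sources for one room."""
        return {
            "operating_room": room_id,
            "video_stream": video_frames,               # e.g., endoscopic + room cameras
            "patient_sensor_stream": patient_samples,   # time-stamped vitals
            "instrument_stream": instrument_records,    # settings and use information
        }

    or_stream = concentrate("OR 3", video_frames=[], patient_samples=[], instrument_records=[])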
  • the dynamic view selector 125 of FIG. 1 can select one or more of the sources to be displayed to a user of the operating room dashboard interface 120.
  • the user can also manually select one or more of the sources to be displayed on the operating room dashboard interface 120.
  • One or more machine learning models can be applied at the device level, at the operating room 101 level, and/or across multiple operating rooms 101 to infer/predict information and insights based on one or more sources of the operating room data stream 210.
  • In FIG. 3, a system 300 for analyzing data that includes video data and/or other surgical data is generally shown according to one or more aspects.
  • the video data can be captured from video/audio recording system 104 of FIG. 1.
  • the analysis can result in predicting surgical phases and structures (e.g., instruments, anatomical structures, etc.) in the video data using machine learning.
  • System 300 can be the system 100 of FIG. 1, or a part thereof in one or more examples.
  • System 300 uses data streams in the surgical data to identify procedural states according to some aspects.
  • System 300 includes a data reception system 305 that collects surgical data, including the video data and surgical instrumentation data.
  • the data reception system 305 can include one or more devices (e.g., one or more user devices and/or servers) located within and/or associated with a surgical operating room and/or control center.
  • the data reception system 305 can receive surgical data in real-time, i.e., as the surgical procedure is being performed.
  • the data reception system 305 can receive or access surgical data in an offline manner, for example, by accessing data that is stored in a data collection system.
  • System 300 further includes a machine learning processing system 310 that processes the surgical data using one or more machine learning models to identify one or more features, such as surgical phase, instrument, anatomical structure, etc., in the surgical data.
  • machine learning processing system 310 can include one or more devices (e.g., one or more servers), each of which can be configured to include part or all of one or more of the depicted components of the machine learning processing system 310.
  • a part or all of the machine learning processing system 310 is in the cloud and/or remote from an operating room and/or physical location corresponding to a part or all of data reception system 305.
  • the components of the machine learning processing system 310 are depicted and described herein. However, the components are just one example structure of the machine learning processing system 310, and in other examples, the machine learning processing system 310 can be structured using a different combination of the components. Such variations in the combination of the components are encompassed by the technical solutions described herein.
  • the machine learning processing system 310 includes a machine learning training system 325, which can be a separate device (e.g., server) that stores its output as one or more trained machine learning models 330.
  • the machine learning models 330 are accessible by a machine learning execution system 340.
  • the machine learning execution system 340 can be separate from the machine learning training system 325 in some examples.
  • devices that “train” the models are separate from devices that “infer,” i.e., perform real-time processing of surgical data using the trained machine learning models 330.
  • Machine learning processing system 310 further includes a data generator 315 to generate simulated surgical data, such as a set of virtual images, or record the video data from the video/audio recording system 104, to train the machine learning models 330.
  • Data generator 315 can access (read/write) a data store 320 to record data, including multiple images and/or multiple videos.
  • the images and/or videos can include images and/or videos collected during one or more procedures (e.g., one or more surgical procedures). For example, the images and/or video may have been collected by a user device worn by the actor 112 of FIG. 1.
  • the data store 320 can be part of a data collection system that records data for future use.
  • Each of the images and/or videos recorded in the data store 320 for training the machine learning models 330 can be defined as a base image and can be associated with other data that characterizes an associated procedure and/or rendering specifications.
  • the other data can identify a type of procedure, a location of a procedure, one or more people involved in performing the procedure, surgical objectives, and/or an outcome of the procedure.
  • the other data can indicate a stage of the procedure with which the image or video corresponds, rendering specification with which the image or video corresponds and/or a type of imaging device that captured the image or video (e.g., and/or, if the device is a wearable device, a role of a particular person wearing the device, etc.).
  • the other data can include image-segmentation data that identifies and/or characterizes one or more objects (e.g., tools, anatomical objects, etc.) that are depicted in the image or video.
  • the characterization can indicate the position, orientation, or pose of the object in the image.
  • the characterization can indicate a set of pixels that correspond to the object and/or a state of the object resulting from a past or current user handling. Localization can be performed using a variety of techniques for identifying objects in one or more coordinate systems.
  • the machine learning training system 325 uses the recorded data in the data store 320, which can include the simulated surgical data (e.g., set of virtual images) and actual surgical data to train the machine learning models 330.
  • the machine learning model 330 can be defined based on a type of model and a set of hyperparameters (e.g., defined based on input from a client device).
  • the machine learning models 330 can be configured based on a set of parameters that can be dynamically defined based on (e.g., continuous or repeated) training (i.e., learning, parameter tuning).
  • Machine learning training system 325 can use one or more optimization algorithms to define the set of parameters to minimize or maximize one or more loss functions.
  • the set of (learned) parameters can be stored as part of a trained machine learning model 330 using a specific data structure for that trained machine learning model 330.
  • the data structure can also include one or more non-learnable variables (e.g., hyperparameters and/or model definitions).
  • Examples of the trained machine learning model 330 can include a surgical video monitoring model 332 and a surgical data monitoring model 334, where the surgical video monitoring model 332 and surgical data monitoring model 334 can be surgical monitoring modules of the surgical data capture system 102.
  • the surgical video monitoring model 332 can be trained to classify and/or detect features in a surgical video stream.
  • the surgical data monitoring model 334 can be trained to classify and/or detect features in other sources of surgical data, such as sensor data, instrument data, audio data, and/or other data accessible to the system 100 of FIG. 1.
  • Machine learning execution system 340 can access the data structure(s) of the machine learning models 330 and accordingly configure the machine learning models 330 for inference (i.e., prediction).
  • the machine learning models 330 can include, for example, a fully convolutional network adaptation, an adversarial network model, an encoder, a decoder, or other types of machine learning models.
  • the type of the machine learning models 330 can be indicated in the corresponding data structures.
  • the machine learning model 330 can be configured in accordance with one or more hyperparameters and the set of learned parameters.
  • the one or more machine learning models 330 receive, as input, surgical data to be processed and subsequently generate one or more inferences according to the training.
  • the video data captured by the video/audio recording system 104 of FIG. 1 can include data streams (e.g., an array of intensity, depth, and/or RGB values) for a single image or for each of a set of frames (e.g., including multiple images or an image with sequencing data) representing a temporal window of fixed or variable length in a video.
  • the video data that is captured by the video/audio recording system 104 can be received by the data reception system 305, which can include one or more devices located within an operating room where the surgical procedure is being performed.
  • the data reception system 305 can include devices that are located remotely, to which the captured video data is streamed live during the performance of the surgical procedure.
  • the data reception system 305 accesses the data in an offline manner from a data collection system or from any other data source (e.g., local or remote storage device).
  • the data reception system 305 can process the video and/or other data received.
  • the processing can include decoding when a video stream is received in an encoded format such that data for a sequence of images can be extracted and processed.
  • the data reception system 305 can also process other types of data included in the input surgical data.
  • the surgical data can include additional data streams, such as audio data, RFID data, textual data, measurements from one or more surgical instruments/sensors, etc., that can represent stimuli/procedural states from the operating room.
  • the data reception system 305 synchronizes the different inputs from the different devices/sensors before inputting them in the machine learning processing system 310.
  • audio data can also be used as a data source to generate predictions.
  • Synchronization can be achieved by using a common reference clock to generate time stamps alongside each data stream.
  • the clocks can be shared via network protocols or through hardware locking or through any other means.
  • Such time stamps can be associated with any processed data format, such as, but not limited to text or other discrete data created from the audio signal.
  • Additional synchronization can be performed by linking actions, events, or phase segments that have been automatically processed from the raw signals using machine learning models. For example, text generated from an audio signal can be associated with specific phases of the procedure that are extracted from that audio or any other data stream signal. Text generated may be captured and/or displayed through a user interface.
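  • The following sketch illustrates one way timestamps generated against a shared reference clock could be used to align samples from two streams; the nearest-match strategy and tolerance value are assumptions rather than the disclosed mechanism.

    # Illustrative synchronization of two time-stamped streams against a common
    # reference clock; nearest-neighbour matching within a tolerance is an assumption.
    import bisect

    def align(reference_stream, other_stream, tolerance_s=0.05):
        """Pair each reference sample with the closest-in-time sample of the other stream."""
        other_times = [t for t, _ in other_stream]
        pairs = []
        for t_ref, ref_sample in reference_stream:
            i = bisect.bisect_left(other_times, t_ref)
            candidates = [j for j in (i - 1, i) if 0 <= j < len(other_times)]
            if not candidates:
                continue
            j = min(candidates, key=lambda k: abs(other_times[k] - t_ref))
            if abs(other_times[j] - t_ref) <= tolerance_s:
                pairs.append((ref_sample, other_stream[j][1]))
        return pairs

    video = [(0.00, "frame0"), (0.04, "frame1"), (0.08, "frame2")]
    audio = [(0.01, "chunk0"), (0.05, "chunk1")]
    print(align(video, audio))   # [('frame0', 'chunk0'), ('frame1', 'chunk1'), ('frame2', 'chunk1')]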
  • the machine learning models 330 can analyze the input surgical data, and in one or more aspects, predict and/or characterize structures included in the video data included with the surgical data.
  • the video data can include sequential images and/or encoded video data (e.g., using digital video file/stream formats and/or codecs, such as MP4, MOV, AVI, WEBM, AVCHD, OGG, etc.).
  • the prediction and/or characterization of the structures can include segmenting the video data or predicting the localization of the structures with a probabilistic heatmap.
  • the one or more machine learning models include or are associated with a preprocessing or augmentation (e.g., intensity normalization, resizing, cropping, etc.) that is performed prior to segmenting the video data.
  • An output of the one or more machine learning models can include image-segmentation or probabilistic heatmap data that indicates which (if any) of a defined set of structures are predicted within the video data, a location and/or position and/or pose of the structure(s) within the video data, and/or state of the structure(s).
  • the location can be a set of coordinates in an image/frame in the video data.
  • the coordinates can provide a bounding box.
  • the coordinates can provide boundaries that surround the structure(s) being predicted.
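  • As an illustration of the kind of output described (which structures are predicted, where they are located, and in what state), the following sketch defines a hypothetical per-frame prediction record; the field names and example labels are assumptions.

    # Hypothetical structure of a per-frame prediction record; field names and
    # example labels are illustrative assumptions.
    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class StructurePrediction:
        label: str                                  # e.g., "grasper", "gallbladder"
        confidence: float                           # prediction confidence score
        bounding_box: Optional[Tuple[int, int, int, int]] = None  # (x_min, y_min, x_max, y_max)
        state: Optional[str] = None                 # e.g., "inflamed", "clamped"

    detections = [
        StructurePrediction("grasper", 0.93, bounding_box=(120, 80, 260, 210)),
        StructurePrediction("gallbladder", 0.88, bounding_box=(300, 150, 520, 400), state="inflamed"),
    ]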
  • the trained machine learning models 330 in one or more examples, are trained to perform higher-level predictions and tracking, such as predicting a phase of a surgical procedure and tracking one or more surgical instruments used in the surgical procedure.
  • the machine learning processing system 310 includes a detector 350 that uses the machine learning models to identify a phase within the surgical procedure (“procedure”).
  • Detector 350 uses a particular procedural tracking data structure 355 from a list of procedural tracking data structures. Detector 350 selects the procedural tracking data structure 355 based on the type of surgical procedure that is being performed. In one or more examples, the type of surgical procedure is predetermined or input by actor 112. The procedural tracking data structure 355 identifies a set of potential phases that can correspond to a part of the specific type of procedure.
  • the procedural tracking data structure 355 can be a graph that includes a set of nodes and a set of edges, with each node corresponding to a potential phase.
  • the edges can provide directional connections between nodes that indicate (via the direction) an expected order during which the phases will be encountered throughout an iteration of the procedure.
  • the procedural tracking data structure 355 may include one or more branching nodes that feed to multiple next nodes and/or can include one or more points of divergence and/or convergence between the nodes.
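  • A minimal sketch of a procedural tracking data structure represented as a directed graph of phase nodes is shown below; the adjacency-list form, the phase names, and the branching are made-up assumptions for illustration only.

    # Illustrative procedural tracking data structure: a directed graph whose nodes
    # are potential phases and whose edges give the expected order of phases.
    procedural_graph = {
        "setup":      ["access"],
        "access":     ["dissection"],
        "dissection": ["clipping", "hemostasis"],   # branching node
        "hemostasis": ["clipping"],                  # convergence back toward clipping
        "clipping":   ["extraction"],
        "extraction": ["closing"],
        "closing":    [],
    }

    def allowed_next_phases(current_phase):
        """Phases a detector would expect to see after the current one."""
        return procedural_graph.get(current_phase, [])

    print(allowed_next_phases("dissection"))   # ['clipping', 'hemostasis']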
  • a phase indicates a procedural action (e.g., surgical action) that is being performed or has been performed and/or indicates a combination of actions that have been performed.
  • a phase relates to a biological state of a patient undergoing a surgical procedure.
  • the biological state can indicate a complication (e.g., blood clots, clogged arteries/veins, etc.) and/or a pre-condition (e.g., lesions, polyps, etc.).
  • the machine learning models 330 are trained to detect an “abnormal condition,” such as hemorrhaging, arrhythmias, blood vessel abnormality, etc.
  • Each node within the procedural tracking data structure 355 can identify one or more characteristics of the phase corresponding to that node.
  • the characteristics can include visual characteristics.
  • the node identifies one or more tools that are typically in use or availed for use (e.g., on a tool tray) during the phase.
  • the node also identifies one or more roles of people who are typically performing a surgical task, a typical type of movement (e.g., of a hand or tool), etc.
  • detector 350 can use the segmented data generated by machine learning execution system 340 that indicates the presence and/or characteristics of particular objects within a field of view to identify an estimated node to which the real image data corresponds.
  • Identification of the node can further be based upon previously detected phases for a given procedural iteration and/or other detected input (e.g., verbal audio data that includes person-to-person requests or comments, explicit identifications of a current or past phase, information requests, etc.).
  • the detector 350 outputs the prediction associated with a portion of the video data that is analyzed by the machine learning processing system 310.
  • the prediction is associated with the portion of the video data by identifying a start time and an end time of the portion of the video that is analyzed by the machine learning execution system 340.
  • the prediction that is output can include an identity of a surgical phase, activity, or event as detected by the detector 350 based on the output of the machine learning execution system 340.
  • the prediction in one or more examples, can include identities of the structures (e.g., instrument, anatomy, etc.) that are identified by the machine learning execution system 340 in the portion of the video that is analyzed.
  • the prediction can also include a confidence score of the prediction.
  • Various types of information in the prediction that can be output may include phases, actions, and/or events associated with a surgical procedure.
  • the technical solutions described herein can be applied to analyze video and image data captured by cameras that are not endoscopic (i.e., cameras external to the patient’s body) when performing open surgeries (i.e., not laparoscopic surgeries).
  • the video and image data can be captured by cameras that are mounted on one or more personnel in the operating room, e.g., surgeon.
  • the cameras can be mounted on surgical instruments, walls, or other locations in the operating room.
  • FIG. 4 depicts an example of an operating room dashboard interface 400 according to one or more aspects.
  • the operating room dashboard interface 400 is a visual example of a configuration of the operating room dashboard interface 120 of FIG. 1 with four active views.
  • a highest priority display 402 can have a larger size than lower priority displays 404, 406, 408.
  • the display priority selector 122 of FIG. 1 can determine an order and priority of arrangement of displays 402-408, which can change over time.
  • a user of the operating room dashboard interface 400 can manually override the priority of lower priority displays 404, 406, 408 through manual overrides 405, 407, 409 respectively.
  • Insights identified by machine learning models 330 at the operating room 101 level can appear, for instance, as AI overlays 410, 412, which can be graphical overlays (e.g., AI overlay 410) or text overlays (e.g., AI overlay 412).
  • if an importance level, as determined by importance model 124 of FIG. 1, of a phase of a surgical procedure associated with the highest priority display 402 diminishes and an importance level of a phase of a surgical procedure associated with a lower priority display 404-408 increases, the associated streams can be rearranged automatically on the operating room dashboard interface 400.
  • a user of the operating room dashboard interface 400 may be prompted to approve of a change in display order before rearranging, adding, or removing content from the operating room dashboard interface 400.
  • FIG. 5 depicts an example of a display priority adjustment for an operating room dashboard interface according to one or more aspects.
  • an operating room dashboard interface 500 includes a highest priority display portion 502, and lower priority display portions 504, 506, 508. As priority of a source or operating room data stream decreases, the associated content can be shifted from the highest priority display portion 502 to the lower priority display portion 504 and progressively be demoted through lower priority display portions 506, 508. There may be a maximum number of display portions defined for a particular configuration of the operating room dashboard interface 500, and operating room data streams having a lower priority may not be visible based on the relative priority of other available operating room data streams. When a priority changes, any of the available operating room data streams may be promoted to the highest priority display portion 502, e.g., transitioning from lower priority display portion 508 directly to the highest priority display portion 502 by the display priority selector 122 of FIG. 1.
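  • Concretely, the arrangement described for FIG. 5 could be modeled as a ranked list of display portions, with streams ranked below the portion count left undisplayed; the following is a hedged sketch assuming four portions and score-based ranking.

    # Illustrative mapping of ranked streams onto a fixed number of display portions
    # (portion 0 is the highest-priority display); the portion count is an assumption.
    def assign_portions(scored_streams, num_portions=4):
        """Return {portion_index: stream} ordered by descending importance score."""
        ranked = sorted(scored_streams, key=lambda item: item[1], reverse=True)
        return {i: stream for i, (stream, _) in enumerate(ranked[:num_portions])}

    layout = assign_portions([("OR 1", 0.7), ("OR 2", 0.9), ("OR 3", 0.4),
                              ("OR 4", 0.2), ("OR 5", 0.1)])
    print(layout)   # {0: 'OR 2', 1: 'OR 1', 2: 'OR 3', 3: 'OR 4'}; 'OR 5' is not displayed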
  • FIG. 6 depicts another example of a display priority adjustment for an operating room dashboard interface according to one or more aspects.
  • Operating room dashboard interface 600 may have multiple higher priority operating room data streams appearing in foreground views 602, 604, 606, 608. There need not be a single highest priority operating room data stream identified. Lower priority operating room data streams having a priority below a minimum level can be in background views 612A, 612B, 612C, ..., 612N.
  • the background views 612A-612N may not be directly visible on the operating room dashboard interface 600 or may appear as reduced size views (e.g., thumbnail views). Where background views 612A-612N are visible, the background views 612A-612N may have a reduced frame rate compared to the foreground views 602-608.
  • one or more of operating room data streams of the background views 612A-612N can be promoted to one or more foreground views 602-608.
  • An operating room data stream that is replaced can be shifted to one of the background views 612A-612N.
  • a user of the operating room dashboard interface 600 can lock one or more of the operating room data streams to a desired position to avoid having the associated operating room data stream shift in position automatically.
  • a position locked display may be visually identified by a status icon 610 or other such indicator.
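  • The foreground/background behavior of FIG. 6 could be sketched as follows, with a minimum-score threshold for foreground placement, a reduced frame rate for background views, and user-locked streams kept in the foreground; the threshold, frame rates, and locking mechanism are assumptions for illustration.

    # Illustrative foreground/background assignment with a reduced frame rate for
    # background views and position locking; thresholds and rates are assumptions.
    def classify_views(scores, threshold=0.5, locked=frozenset()):
        """Split streams into foreground and background view lists."""
        foreground, background = [], []
        for stream, score in scores.items():
            if stream in locked or score >= threshold:
                foreground.append({"stream": stream, "frame_rate": 30})
            else:
                background.append({"stream": stream, "frame_rate": 5, "thumbnail": True})
        return foreground, background

    fg, bg = classify_views({"OR 1": 0.8, "OR 2": 0.3, "OR 3": 0.2},
                            locked=frozenset({"OR 2"}))
    # OR 2 is kept in the foreground because it is locked; OR 3 becomes a
    # reduced-frame-rate background (thumbnail) view.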
  • In FIG. 7, a flowchart of a method 700 of dynamic view adjustment for multiple operating room observation is generally shown in accordance with one or more aspects. All or a portion of method 700 can be implemented, for example, by all or a portion of system 100 of FIG. 1, the system 300 of FIG. 3, and/or computer system 800 of FIG. 8. The method 700 is described in reference to FIGS. 1-8.
  • a plurality of operating room data streams (e.g., two or more operating room data streams 210) associated with a plurality of operating rooms 101 can be received at an operating room dashboard interface 120, 400, 500, 600 of a dynamic view selector 125.
  • a display priority selector 122 can determine a display order priority of two or more sources (e.g., a video stream 202, patient sensor data stream 204, and/or a surgical instrument data stream 206) selected from the operating room data streams based on an importance score.
  • the importance score can be determined by the importance model 124.
  • the display priority selector 122 can arrange an order of display of the two or more sources on the operating room dashboard interface 120, 400, 500, 600 based on the display order priority.
  • the display priority selector 122 can change the order of display of the two or more sources on the operating room dashboard interface 120, 400, 500, 600 based on detecting a priority change condition.
  • the importance model 124 can determine the importance score associated with a surgical procedure being performed in each of the operating rooms 101.
  • a higher priority can be assigned in the display priority order to the operating room data streams having a higher importance score.
  • One or more of the operating room data streams having an importance score below a minimum display threshold can be assigned as background views 612A-612N.
  • One or more of the operating room data streams having an importance score above the minimum display threshold can be assigned as foreground views 602-608.
  • a display frame rate of the background views 612A-612N can be reduced relative to the foreground views 602-608.
  • the importance model 124 can adjust the importance score based on a current surgical phase and/or a detected event.
  • One or more of patient sensor data streams, surgical instrument usage, and video streams from the operating rooms 101 can be monitored to identify the current surgical phase and/or the detected event.
  • the detected event can be, for example, a deviation from an expected performance metric.
  • one or more of the operating room data streams can be filtered to prevent display based on determining that a current surgical phase is a setup phase or a closing phase.
  • Other filters can be configured to sort views based on surgery type, surgeon, anatomy, and other such parameters.
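  • A hedged sketch of the filtering and sorting mentioned above follows; which phases are excluded and which sort keys are available are assumptions, not requirements of the disclosure.

    # Illustrative filtering of streams in setup/closing phases and sorting of the
    # remainder by a configurable key; phase names and keys are assumptions.
    EXCLUDED_PHASES = {"setup", "closing"}

    def filter_and_sort(streams, sort_key="surgery_type"):
        """Drop streams in excluded phases and sort the rest by the requested key."""
        visible = [s for s in streams if s["phase"] not in EXCLUDED_PHASES]
        return sorted(visible, key=lambda s: s.get(sort_key, ""))

    streams = [
        {"room": "OR 1", "phase": "closing", "surgery_type": "cholecystectomy"},
        {"room": "OR 2", "phase": "dissection", "surgery_type": "appendectomy"},
        {"room": "OR 3", "phase": "stapling", "surgery_type": "colectomy"},
    ]
    print([s["room"] for s in filter_and_sort(streams)])   # ['OR 2', 'OR 3']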
  • the importance score can be adjusted by the importance model 124 based on a detected anatomical structure and a condition of the detected anatomical structure.
  • the display order priority can be determined based on one or more of: phases, events, and procedural complexity.
  • the processing shown in FIG. 7 is not intended to indicate that the operations are to be executed in any particular order or that all of the operations shown in FIG. 7 are to be included in every case. Additionally, the processing shown in FIG. 7 can include any suitable number of additional operations.
  • the computer system 800 can be an electronic computer framework comprising and/or employing any number and combination of computing devices and networks utilizing various communication technologies, as described herein.
  • the computer system 800 can be easily scalable, extensible, and modular, with the ability to change to different services or reconfigure some features independently of others.
  • the computer system 800 may be, for example, a server, desktop computer, laptop computer, tablet computer, or smartphone.
  • computer system 800 may be a cloud computing node.
  • Computer system 800 may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer system.
  • program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types.
  • Computer system 800 may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network.
  • program modules may be located in both local and remote computer system storage media, including memory storage devices.
  • the computer system 800 has one or more central processing units (CPU(s)) 801a, 801b, 801c, etc. (collectively or generically referred to as processor(s) 801).
  • the processors 801 can be a single-core processor, multi-core processor, computing cluster, or any number of other configurations.
  • the processors 801 can be any type of circuitry capable of executing instructions.
  • the processors 801, also referred to as processing circuits, are coupled via a system bus 802 to a system memory 803 and various other components.
  • the system memory 803 can include one or more memory devices, such as read-only memory (ROM) 804 and a random-access memory (RAM) 805.
  • the ROM 804 is coupled to the system bus 802 and may include a basic input/output system (BIOS), which controls certain basic functions of the computer system 800.
  • the RAM 805 is read-write memory coupled to the system bus 802 for use by the processors 801.
  • the system memory 803 provides temporary memory space for operations of said instructions during operation.
  • the system memory 803 can include random access memory (RAM), read-only memory, flash memory, or any other suitable memory systems.
  • the computer system 800 comprises an input/output (I/O) adapter 806 and a communications adapter 807 coupled to the system bus 802.
  • the I/O adapter 806 may be a small computer system interface (SCSI) adapter that communicates with a hard disk 808 and/or any other similar component.
  • the I/O adapter 806 and the hard disk 808 are collectively referred to herein as a mass storage 810.
  • Software 811 for execution on the computer system 800 may be stored in the mass storage 810.
  • the mass storage 810 is an example of a tangible storage medium readable by the processors 801, where the software 811 is stored as instructions for execution by the processors 801 to cause the computer system 800 to operate, such as is described hereinbelow with respect to the various Figures. Examples of computer program products and the execution of such instructions are discussed herein in more detail.
  • the communications adapter 807 interconnects the system bus 802 with a network 812, which may be an outside network, enabling the computer system 800 to communicate with other such systems.
  • a portion of the system memory 803 and the mass storage 810 collectively store an operating system, which may be any appropriate operating system to coordinate the functions of the various components shown in FIG. 8.
  • Additional input/output devices are shown as connected to the system bus 802 via a display adapter 815 and an interface adapter 816.
  • the adapters 806, 807, 815, and 816 may be connected to one or more I/O buses that are connected to the system bus 802 via an intermediate bus bridge (not shown).
  • a display 819 (e.g., a screen or a display monitor) can be connected to the system bus 802 via a display adapter 815, which may include a graphics controller to improve the performance of graphics-intensive applications and a video controller.
  • a keyboard, a mouse, a touchscreen, one or more buttons, a speaker, etc. can be interconnected to the system bus 802 via the interface adapter 816, which may include, for example, a Super I/O chip integrating multiple device adapters into a single integrated circuit.
  • Suitable I/O buses for connecting peripheral devices such as hard disk controllers, network adapters, and graphics adapters typically include common protocols, such as the Peripheral Component Interconnect (PCI).
  • the computer system 800 includes processing capability in the form of the processors 801, storage capability including the system memory 803 and the mass storage 810, input means such as the buttons and touchscreen, and output capability including the speaker 823 and the display 819.
  • the communications adapter 807 can transmit data using any suitable interface or protocol, such as the internet small computer system interface, among others.
  • the network 812 may be a cellular network, a radio network, a wide area network (WAN), a local area network (LAN), or the Internet, among others.
  • An external computing device may connect to the computer system 800 through the network 812.
  • an external computing device may be an external web server or a cloud computing node.
  • The block diagram of FIG. 8 is not intended to indicate that the computer system 800 is to include all of the components shown in FIG. 8. Rather, the computer system 800 can include any appropriate fewer or additional components not illustrated in FIG. 8 (e.g., additional memory components, embedded controllers, modules, additional network interfaces, etc.). Further, the aspects described herein with respect to computer system 800 may be implemented with any appropriate logic, wherein the logic, as referred to herein, can include any suitable hardware (e.g., a processor, an embedded controller, or an application-specific integrated circuit, among others), software (e.g., an application, among others), firmware, or any suitable combination of hardware, software, and firmware, in various aspects. Various aspects can be combined to include two or more of the aspects described herein.
  • a computer program product can include a memory device having computer executable instructions stored thereon, which when executed by one or more processors cause the one or more processors to perform operations including receiving a plurality of operating room data streams associated with a plurality of operating rooms 101 at an operating room dashboard interface 120, 400, 500, 600.
  • the operations can also include arranging an order of display of two or more of the operating room data streams on the operating room dashboard interface 120, 400, 500, 600 based on a display order priority.
  • the operations can further include changing the order of display of the two or more operating room data streams on the operating room dashboard interface 120, 400, 500, 600 based on detecting a priority change condition.
  • each of the operating room data streams can include one or more sources including one or more of an endoscopic camera, a room camera, a patient sensor data stream, and a surgical instrument data stream. Further operations can include determining an importance score associated with a surgical procedure being performed in each of the operating rooms 101 based on analyzing two or more of the sources per operating room 101, and assigning a higher priority in the display priority order to the operating room data streams having a higher importance score.
  • Operations can also include assigning one or more of the operating room data streams having an importance score below a minimum display threshold as background views, assigning one or more of the operating room data streams having an importance score above the minimum display threshold as foreground views, and displaying the foreground views with a greater prominence on the operating room dashboard interface 120, 400, 500, 600 than the background views.
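A short sketch of the foreground/background split; the threshold value is an assumption chosen only for illustration:

```python
MIN_DISPLAY_THRESHOLD = 0.4  # assumed value; the minimum display threshold is configurable


def split_views(importance_scores: dict[str, float]) -> tuple[list[str], list[str]]:
    """Split operating rooms into foreground and background views by importance score."""
    foreground = sorted(
        (room for room, score in importance_scores.items() if score >= MIN_DISPLAY_THRESHOLD),
        key=lambda room: importance_scores[room],
        reverse=True,
    )
    background = [room for room, score in importance_scores.items()
                  if score < MIN_DISPLAY_THRESHOLD]
    # Foreground views are rendered with greater prominence (e.g., larger tiles);
    # background views stay available but are de-emphasized.
    return foreground, background
```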
  • operations can include determining an importance score associated with a surgical procedure being performed in each of the operating rooms, and adjusting the importance score based on a current surgical phase and/or a detected event.
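One way such a phase- and event-based adjustment might look; the phase weights, event names, and boost values below are illustrative assumptions rather than values from this disclosure:

```python
PHASE_WEIGHTS = {"preparation": 0.2, "dissection": 0.6, "critical_view": 0.9, "closure": 0.3}
EVENT_BOOSTS = {"bleeding_detected": 0.5, "instrument_alarm": 0.3}


def adjust_importance(base_score: float, current_phase: str, detected_events: list[str]) -> float:
    """Adjust an importance score using the current surgical phase and any detected events."""
    score = base_score * (0.5 + PHASE_WEIGHTS.get(current_phase, 0.5))
    for event in detected_events:
        score += EVENT_BOOSTS.get(event, 0.0)
    return min(score, 1.0)  # keep the adjusted score in a normalized range
```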
  • operations can include monitoring user interactions with the operating room dashboard interface 120, 400, 500, 600, and adapting selection of the two or more of the operating room data streams based on the user interactions.
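A minimal sketch of interaction-driven adaptation, where the boost factor is an assumed tuning parameter:

```python
from collections import Counter


class InteractionAdapter:
    """Monitors which rooms a user interacts with and biases stream selection toward them."""

    def __init__(self, boost: float = 0.1) -> None:
        self.boost = boost
        self.interactions: Counter = Counter()

    def record(self, room_id: str) -> None:
        self.interactions[room_id] += 1

    def adapted_score(self, room_id: str, importance_score: float) -> float:
        # Rooms the user returns to often receive a small selection boost.
        total = sum(self.interactions.values()) or 1
        return importance_score + self.boost * (self.interactions[room_id] / total)
```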
  • a system can include a memory system and a processing system coupled to the memory system.
  • the processing system can be configured to execute a plurality of instructions to determine a display order priority of two or more sources selected from a plurality of operating room data streams associated with a plurality of operating rooms 101 at an operating room dashboard interface 120, 400, 500, 600; arrange an order of display of the two or more sources on the operating room dashboard interface 120, 400, 500, 600 based on the display order priority; and change the order of display of the two or more sources on the operating room dashboard interface 120, 400, 500, 600 based on detecting a priority change condition.
  • one or more of the two or more sources selected for display can be displayed with a highlighting aspect, a position aspect, and/or a size aspect that differs based on the display order priority.
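A sketch of mapping a stream's rank in the display order priority to highlighting, position, and size aspects; the specific tile values are assumptions:

```python
def tile_aspects(rank: int) -> dict:
    """Map a stream's rank in the display order priority to presentation aspects."""
    if rank == 0:
        return {"size": "large", "position": "top-left", "highlight": True}
    if rank <= 3:
        return {"size": "medium", "position": "top-row", "highlight": False}
    return {"size": "small", "position": "grid", "highlight": False}
```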
  • the processing system can be configured to execute instructions to apply an importance model 124 to determine an importance score associated with a surgical procedure being performed in each of the operating rooms 101, where the importance model 124 uses machine learning summary data provided with at least one of the operating room data streams to determine the importance score, and assign a lower priority in the display order priority to the operating room data streams having a lower importance score.
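A sketch of an importance model that scores each room from machine learning summary data; the summary fields and weights are assumptions, since the disclosure does not fix what the summary contains:

```python
FEATURE_WEIGHTS = {"phase_criticality": 0.5, "anomaly_likelihood": 0.3, "staff_activity": 0.2}


def importance_from_summary(ml_summary: dict) -> float:
    """Compute an importance score as a weighted sum over machine learning summary fields."""
    return sum(weight * float(ml_summary.get(name, 0.0))
               for name, weight in FEATURE_WEIGHTS.items())


def rank_rooms(summaries: dict[str, dict]) -> list[str]:
    """Order rooms so that lower importance scores receive lower display priority."""
    return sorted(summaries, key=lambda room: importance_from_summary(summaries[room]),
                  reverse=True)
```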
  • the processing system can be configured to execute instructions to display the two or more sources from a same operating room on the operating room dashboard interface 120, 400, 500, 600, and arrange outputs of the two or more sources according to one or more sorting priorities.
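A sketch of sorting several sources from the same operating room; the per-source priorities are assumed values, not an ordering specified by this disclosure:

```python
SOURCE_SORT_PRIORITY = {"endoscopic_camera": 0, "surgical_instrument": 1,
                        "patient_sensors": 2, "room_camera": 3}


def arrange_sources(sources: list[str]) -> list[str]:
    """Arrange outputs of two or more sources from one operating room by sorting priority."""
    return sorted(sources, key=lambda s: SOURCE_SORT_PRIORITY.get(s, len(SOURCE_SORT_PRIORITY)))
```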
  • the operating room dashboard interface 120, 400, 500, 600 can be configured to automatically present action options or user interfaces on the operating room data streams depending on the most common user-taken actions for a current state or condition. This can result in suggested actions consistent with previously observed similar scenarios.
  • the operating room dashboard interface 120, 400, 500, 600 can include user tracking to learn behaviors and generate suggestions for user actions based on a role/identity of the user and locations of interest of the user on the operating room dashboard interface 120, 400, 500, 600.
  • eye tracking or facial recognition can be used to adapt display types and selected sources for display.
  • Examples described herein facilitate providing a user-interactive system to visualize and analyze substantial amounts of data associated with the system 100. Generating such user-interactive interfaces with the substantial amounts of data is not practical for a human, and hence, the technical solutions described herein provide a practical application to address technical challenges and provide improvements to computing systems, such as operating room management systems, hospital management systems, etc. For example, the technical solutions described herein enable service providers to review surgical procedures performed at a medical center in real time and provide feedback to the hospital, actors, or any other stakeholder.
  • the present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration
  • the computer program product may include a computer-readable storage medium (or media) having computer-readable program instructions thereon for causing a processor to carry out aspects of the present invention
  • the computer-readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device.
  • the computer-readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • a non-exhaustive list of more specific examples of the computer-readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing.
  • a computer-readable storage medium is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
  • Computer-readable program instructions described herein can be downloaded to respective computing/processing devices from a computer-readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • the wireless network(s) can include, but are not limited to, fifth generation (5G) and sixth generation (6G) protocols and connections.
  • a network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium within the respective computing/processing device.
  • Computer-readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source-code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk, C++, high-level languages such as Python, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages.
  • the computer-readable program instructions may execute entirely on the user’s computer, partly on the user’s computer, as a stand-alone software package, partly on the user’s computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user’s computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
  • These computer-readable program instructions may be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer-implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • the flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various aspects of the present invention.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the blocks may occur out of the order noted in the Figures.
  • two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
  • the term “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any aspect or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs.
  • the terms “at least one” and “one or more” may be understood to include any integer number greater than or equal to one, i.e., one, two, three, four, etc.
  • the term “a plurality” may be understood to include any integer number greater than or equal to two, i.e., two, three, four, five, etc.
  • the term “connection” may include both an indirect “connection” and a direct “connection.”
  • the described techniques may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit.
  • Computer-readable media may include non-transitory computer-readable media, which corresponds to a tangible medium such as data storage media (e.g., RAM, ROM, EEPROM, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer).
  • the instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), graphics processing units (GPUs), general-purpose microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry.
  • the term “processor” may refer to any of the foregoing structures or any other physical structure suitable for implementation of the described techniques. Also, the techniques could be fully implemented in one or more circuits or logic elements.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • General Business, Economics & Management (AREA)
  • Business, Economics & Management (AREA)
  • Surgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Animal Behavior & Ethology (AREA)
  • Molecular Biology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Robotics (AREA)
  • Human Computer Interaction (AREA)
  • Urology & Nephrology (AREA)
  • Image Analysis (AREA)

Abstract

Techniques for dynamic view selection for multiple operating room observation are described. Aspects can include determining a display order priority of two or more sources selected from a plurality of operating room data streams associated with a plurality of operating rooms at an operating room dashboard interface, arranging an order of display of the two or more sources on the operating room dashboard interface based on the display order priority, and changing the order of display of the two or more sources on the operating room dashboard interface based on detecting a priority change condition.
PCT/EP2024/073048 2023-08-17 2024-08-16 Dynamic view selector for multiple operating room observation Pending WO2025036993A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GR20230100681 2023-08-17
GR20230100681 2023-08-17

Publications (1)

Publication Number Publication Date
WO2025036993A1 true WO2025036993A1 (fr) 2025-02-20

Family

ID=92543131

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2024/073048 Pending WO2025036993A1 (fr) 2023-08-17 2024-08-16 Dynamic view selector for multiple operating room observation

Country Status (1)

Country Link
WO (1) WO2025036993A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007007040A (ja) * 2005-06-29 2007-01-18 Hitachi Medical Corp Surgery support system (手術支援システム)
US20140267658A1 (en) * 2013-03-15 2014-09-18 Arthrex, Inc. Surgical Imaging System And Method For Processing Surgical Images
US20220331047A1 (en) * 2021-04-14 2022-10-20 Cilag Gmbh International Method for intraoperative display for surgical systems

Similar Documents

Publication Publication Date Title
US12387005B2 (en) De-identifying data obtained from microphones
US20250143806A1 (en) Detecting and distinguishing critical structures in surgical procedures using machine learning
US20240037949A1 (en) Surgical workflow visualization as deviations to a standard
CN118120224A Low-latency video capture and overlay
EP4388506A1 Position-aware temporal graph networks for surgical phase recognition on laparoscopic videos
US20240153269A1 (en) Identifying variation in surgical approaches
WO2024105050A1 Spatio-temporal network for semantic video segmentation in surgical videos
EP4616332A1 Action segmentation with shared-private representation of multiple data sources
US20240252263A1 (en) Pose estimation for surgical instruments
CN118216156A Feature-based surgical video compression
WO2025036993A1 Dynamic view selector for multiple operating room observation
US20240161934A1 (en) Quantifying variation in surgical approaches
US20250014717A1 (en) Removing redundant data from catalogue of surgical video
US20240428956A1 (en) Query similar cases based on video information
WO2025036994A1 Context-conditional streaming notification
WO2024213571A1 Surgeon swap control
WO2025036995A1 Annotation overlay via a streaming interface
WO2024223462A1 User interface for participant selection during streaming of a surgical operation
EP4623446A1 Video analytics dashboard for case review
WO2025088222A1 Processing of video-based features for statistical modeling of surgical timings
WO2024189115A1 Markov transition matrices for identifying deviation points for surgical procedures
WO2024213771A1 Surgical data dashboard
WO2025210185A1 Multimedia stored and displayed with a surgical video
WO2024105054A1 Hierarchical segmentation of surgical scenes
EP4612705A1 Operating room dashboard

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24761568

Country of ref document: EP

Kind code of ref document: A1