WO2025120487A1 - Reinforcement learning for surgical procedures and surgical plans
- Publication number
- WO2025120487A1 (PCT/IB2024/062100)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- surgical
- data
- machine learning
- learning model
- processor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G16H20/40—ICT specially adapted for therapies or health-improving plans relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
- G06N20/00—Machine learning
- G06N3/045—Neural networks; combinations of networks
- G06N3/08—Neural networks; learning methods
- G16H30/40—ICT specially adapted for the handling or processing of medical images, e.g. editing
- G16H40/60—ICT specially adapted for the management or operation of medical equipment or devices
Definitions
- Surgical robots may assist a surgeon or other medical provider in carrying out a surgical procedure and/or may complete one or more surgical procedures autonomously.
- the surgical procedure(s) may be performed using one or more surgical instruments or tools.
- a surgeon or other medical provider may manually select the one or more surgical instruments or tools prior to and for performing the surgical procedure(s).
- a surgeon or other medical provider may determine one or more steps to take for a surgical procedure. Such steps may be determined during the surgical procedure or during planning of the surgical procedure. Surgical plans can be updated by the surgeon or other medical provider, if needed.
- Embodiments of the present disclosure contemplate the utilization of pre- and intraoperative information from an ecosystem of surgical products to update, or reinforce, machine learning-based models.
- models can be updated to reflect changes in how surgical products (e.g., tools, surgical robots, surgical navigation systems, etc.) are used, patient demographics, disease states, or even surgeon-specific preferences without having to start from scratch with a large volume of curated and annotated data.
- Embodiments of the present disclosure also contemplate a reinforcement learning approach which may include: (1) a process to store surgical procedure data; (2) the ability to annotate/curate stored surgical procedure data; (3) a process to update algorithms or features based on newly collected surgical procedure data (e.g., by applying reinforcement learning); and (4) metrics to verify the effectiveness of updated algorithms.
- Example aspects of the present disclosure include: [0008] A system that includes: a processor; and a memory storing data for processing by the processor, the data, when processed, causes the processor to: store a data set comprising surgical procedure data in a data repository; retrain a machine learning model based on curating the data set, where retraining the machine learning model includes applying reinforcement learning; verify a performance metric associated with the retraining of the machine learning model; and provide or modify one or more features associated with an ecosystem of surgical products in response to retraining the machine learning model and verifying the performance metric.
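- As a minimal illustration (not the claimed implementation), the store/retrain/verify/provide loop described above could be organized as in the following Python sketch; the class and function names, the `model.update` call, and the acceptance threshold are assumptions made for the example.

```python
# Hypothetical sketch of the store -> curate -> retrain -> verify -> provide loop.
# All names and the acceptance threshold are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class DataRepository:
    records: list = field(default_factory=list)

    def store(self, surgical_procedure_data: dict) -> None:
        self.records.append(surgical_procedure_data)

    def curated(self) -> list:
        # Keep only records that have been annotated/curated.
        return [r for r in self.records if r.get("annotated")]

def retrain_with_rl(model, curated_data):
    """Placeholder for a reinforcement learning pass over curated episodes."""
    for episode in curated_data:
        model.update(episode)  # e.g., a policy-gradient or Q-learning step
    return model

def run_pipeline(model, repo: DataRepository, verify_metric, threshold=0.8):
    model = retrain_with_rl(model, repo.curated())
    score = verify_metric(model)  # e.g., measured effectiveness or a patient outcome
    # Provide or modify ecosystem features only when the metric is verified.
    return model if score >= threshold else None
```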
- the performance metric includes at least one of a measured effectiveness and a patient outcome.
- the one or more features associated with the ecosystem of the surgical products include an algorithm implemented by a surgical robot, an algorithm implemented by a surgical navigation system, an imaging technique implemented by an imaging device, a surgical plan implemented by a care provider, a surgical instrument selected for use during a surgical procedure, a device implanted into a patient, and a layout for an operating room.
- the one or more features include an image segmentation approach.
- the one or more features include a therapy delivery technique.
- the reinforcement learning includes a policy-based reinforcement learning and at least one policy is provided to the machine learning model during the retraining.
- the reinforcement learning includes a value-function-based reinforcement learning and at least one function is defined for the machine learning model during the retraining.
- the at least one function includes an optimization function that seeks to optimize a patient outcome after a surgical procedure.
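- As one hedged illustration of such a value-function-based setup, the sketch below defines a reward that scores a post-operative outcome and a tabular TD(0) state-value update; the reward terms, their weights, and the state encoding are invented for the example.

```python
# Illustrative value-function-based reinforcement learning update (tabular TD(0)).
# The outcome fields and their weights are assumptions made for this sketch.
from collections import defaultdict

def outcome_reward(outcome: dict) -> float:
    # Higher is better: reward fast recovery, penalize complications.
    return outcome.get("recovery_score", 0.0) - 2.0 * outcome.get("complications", 0)

V = defaultdict(float)   # state-value estimates, V(s)
alpha, gamma = 0.1, 0.9  # learning rate and discount factor

def td0_update(state, next_state, outcome) -> None:
    reward = outcome_reward(outcome)
    V[state] += alpha * (reward + gamma * V[next_state] - V[state])
```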
- the memory stores further data for processing by the processor that, when processed, causes the processor to: replace the machine learning model with an updated machine learning model in response to the updated machine learning model exhibiting an improvement in the performance metric as compared to the machine learning model.
- the memory stores further data for processing by the processor that, when processed, causes the processor to: develop a virtual model for the ecosystem of surgical products; provide the data set to the virtual model; compare a performance of the virtual model with the performance metric; determine the performance of the virtual model is better than the machine learning model based on the comparison; and replace the machine learning model with the virtual model in response to determining that the performance of the virtual model is better than the machine learning model.
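- The replace-if-better logic described in the two items above might look like the following sketch, where `evaluate` stands in for whichever performance metric is verified; the candidate may be an updated machine learning model or the virtual model of the ecosystem, and all names are illustrative.

```python
# Hypothetical champion/challenger comparison between the current model and a
# candidate (an updated model or a virtual model of the ecosystem).
def maybe_replace(current_model, candidate_model, evaluate, data_set):
    current_score = evaluate(current_model, data_set)
    candidate_score = evaluate(candidate_model, data_set)
    # Keep the incumbent unless the candidate exhibits an improvement in the metric.
    return candidate_model if candidate_score > current_score else current_model
```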
- the memory stores further data for processing by the processor that, when processed, causes the processor to: receive feedback after the surgical procedure for the patient is performed; and further update the data set with the feedback.
- the feedback is used, in part, to train the machine learning model.
- the data set includes data from a surgical robot and data from a surgical navigation system.
- the data from the surgical robot and the data from the surgical navigation system are curated to indicate an association with a common surgical procedure.
- a method including: storing a data set comprising surgical procedure data in a data repository; retraining a machine learning model based on curating the data set, where retraining the machine learning model comprises applying reinforcement learning; verifying a performance metric associated with the retraining of the machine learning model; and providing or modifying one or more features associated with an ecosystem of surgical products in response to retraining the machine learning model and verifying the performance metric.
- the reinforcement learning includes at least one of a policy-based reinforcement learning, a value-function-based reinforcement learning, an associative reinforcement learning, a deep reinforcement learning, an adversarial deep reinforcement learning, a fuzzy reinforcement learning, an inverse reinforcement learning, and a safe reinforcement learning.
- the one or more features associated with the ecosystem of the surgical products include an algorithm implemented by a surgical robot, an algorithm implemented by a surgical navigation system, an imaging technique implemented by an imaging device, a surgical plan implemented by a care provider, a surgical instrument selected for use during a surgical procedure, a device implanted into a patient, and a layout for an operating room.
- a method of implementing a machine learning pipeline including: receiving surgical procedure data; annotating the surgical procedure data; providing the annotated surgical procedure data to a machine learning model; training the machine learning model with reinforcement learning; verifying a performance metric associated with the training of the machine learning model; and updating or replacing the machine learning model with an updated machine learning model, where the updated machine learning model provides at least one output associated with improving an ecosystem of surgical products.
- surgical procedure data includes data from a surgical robot and a surgical navigation system and wherein the at least one output includes a suggested surgical plan that utilizes at least one of the surgical robot and the surgical navigation system.
- each of the expressions “at least one of A, B and C”, “at least one of A, B, or C”, “one or more of A, B, and C”, “one or more of A, B, or C” and “A, B, and/or C” means A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together.
- each one of A, B, and C in the above expressions refers to an element, such as X, Y, and Z, or a class of elements, such as X1-Xn, Y1-Ym, and Z1-Zo.
- the phrase is intended to refer to a single element selected from X, Y, and Z, a combination of elements selected from the same class (e.g., X1 and X2) as well as a combination of elements selected from two or more classes (e.g., Y1 and Zo).
- FIG. 1 is a block diagram of a system according to at least one embodiment of the present disclosure.
- FIG. 2 is a diagram of a workflow according to at least one embodiment of the present disclosure.
- FIG. 3 is a diagram of a system according to at least one embodiment of the present disclosure.
- FIG. 4 is a diagram of a data pipeline according to at least one embodiment of the present disclosure.
- FIG. 5 is a flowchart of a first method according to at least one embodiment of the present disclosure.
- FIG. 6 is a flowchart of a second method according to at least one embodiment of the present disclosure.
- FIG. 7 is a flowchart of a third method according to at least one embodiment of the present disclosure.
- FIG. 8 is a flowchart of a fourth method according to at least one embodiment of the present disclosure.
- FIG. 9 is a flowchart of a fifth method according to at least one embodiment of the present disclosure.
- the described methods, processes, and techniques may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. Alternatively or additionally, functions may be implemented using machine learning models, neural networks, artificial neural networks, or combinations thereof (alone or in combination with instructions).
- Computer-readable media may include non-transitory computer-readable media, which corresponds to a tangible medium such as data storage media (e.g., RAM, ROM, EEPROM, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer).
- processors, such as one or more digital signal processors (DSPs), general purpose microprocessors (e.g., Intel Core i3, i5, i7, or i9 processors; Intel Celeron processors; Intel Xeon processors; Intel Pentium processors; AMD Ryzen processors; AMD Athlon processors; AMD Phenom processors; Apple A10 or A10X Fusion processors; Apple A11, A12, A12X, A12Z, or A13 Bionic processors; or any other general purpose microprocessors), graphics processing units (e.g., Nvidia GeForce RTX 2000-series processors, Nvidia GeForce RTX 3000-series processors, AMD Radeon RX 5000-series processors, AMD Radeon RX 6000-series processors, or any other graphics processing units), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry.
- proximal and distal are used in this disclosure with their conventional medical meanings, proximal being closer to the operator or user of the system, and further from the region of surgical interest in or on the patient, and distal being closer to the region of surgical interest in or on the patient, and further from the operator or user of the system.
- a minimally invasive procedure may be performed for the treatment of pathological fractures of the vertebral body (e.g., spine and associated elements) due to osteoporosis, cancer, benign lesions, or other ailments.
- the minimally invasive procedure may include a corpectomy (e.g., a surgical procedure that involves removing all or part of the vertebral body, usually as a way to decompress the spinal cord and nerves), kyphoplasty (e.g., a surgical procedure used to treat a spinal compression fracture based on inserting an inflatable balloon tamp into a fractured vertebra to restore height to the collapsed vertebra), vertebroplasty (e.g., a procedure for stabilizing compression fractures in the spine based on injecting bone cement into vertebrae that have cracked or broken), radiofrequency ablation (e.g., a medical procedure in which part of the electrical conduction system of the heart, tumor, or other dysfunctional tissue is ablated using the heat generated from a medium frequency alternating current to treat a range of conditions, including chronic back and neck pain), or another procedure not explicitly listed herein.
- the surgical procedures described herein may more generally include spine surgeries, cranial surgeries, heart surgeries, or another type of surgical procedure.
- the surgical procedures may comprise a number of steps. It is to be understood that the surgical procedures described herein may not include all of the steps described or may include additional steps not listed. More generally, the surgical procedures may include presurgical planning to ensure that all the correct surgical instruments and disposables are ready for the surgical procedure, working with radiology imaging to ensure a correct scan format is used, and troubleshooting any communication issues with integrated third-party systems.
- the surgical procedures may include a before surgery time to provide system setup and functional verification for the surgical procedure, a during surgery time for troubleshooting and resolving equipment and instruments issues and for providing real-time guidance and training to operating room staff, and a post-surgery time for checking and stowing any equipment used during the procedure and for reviewing case questions with the operating room staff.
- a surgeon or other medical provider may choose the appropriate and correct medical instruments prior to and for performing the surgical procedures.
- the surgeon or other medical provider may also choose the appropriate and correct implant device or collection of implant devices, if the surgical procedure includes the placement of an implant.
- the surgeon or other medical provider may also choose the surgical technique for utilizing the selected instrument to place the implant device or collection of implant devices.
- the surgeon or other medical provider may determine the appropriate and correct medical instruments based on the disease state for a given patient, which may depend on various factors, such as angle, position, depth, level of deterioration, size of tumor, etc.
- surgical instruments may be available for performing the surgical procedure. For example, more than 200 surgical instruments and/or devices may be available for spine surgeries, and more than 100 surgical instruments and/or devices may be available for cranial surgeries. Hundreds more surgical instruments and devices may be available for other surgeries (e.g., heart surgeries, gastrointestinal surgeries, orthopedic surgeries, organ surgeries, etc.).
- the features may be improved by training and/or retraining a machine learning model (e.g., an artificial intelligence (AI)-based learning model or algorithm) with reinforcement learning.
- a performance metric associated with the retraining of the machine learning model may be verified.
- the feature(s) associated with the ecosystem of surgical products may be provided or modified.
- a performance metric or multiple performance metrics may include a measured effectiveness (e.g., impact on a patient) of any algorithms that are updated based on the retraining of the machine learning model.
- Example features that may be provided or modified in response to retraining the machine learning model may include: (1) any algorithms implemented by the system in association with focused operations (e.g., image segmentation, therapy delivery, navigation, etc.) provided by the ecosystem and (2) operational features (e.g., imaging, therapy delivery techniques, tracking, etc.) related to a surgical product/tool included in the ecosystem.
- Embodiments of the present disclosure provide solutions to one or more of the problems of (1) prolonged surgical procedure durations, (2) increased exposure to anesthesia and/or radiation for a patient, and (3) higher chances of misdiagnoses or improperly performed surgical procedures. More simply, embodiments of the present disclosure aim to provide improvements to safety and efficacy for surgical procedures.
- the techniques described herein may enable better selection of a surgical plan, may assist with utilizing a particular device in an ecosystem of surgical products, and may help with improving aspects of the ecosystem of surgical products, which may result in shorter procedure durations, reduced anesthesia dosage and time for the patient, reduced radiation exposure (e.g., to confirm implant positioning), and faster recovery.
- the techniques may be driven by an intelligent learning model that is normalized and optimized to meet clinical demand, thereby reducing the chances of misdiagnosis; given the complexity of the surgical procedures, the patient may benefit from both time and cost perspectives.
- FIG. 1 is a block diagram of a system 100 according to at least one embodiment of the present disclosure.
- the system 100 may include one or more inputs 102 that are used by a processor 104 to generate one or more outputs 106.
- the processor 104 may be part of a computing device or a different device. Additionally, the processor 104 may be any processor described herein or any similar processor.
- the processor 104 may be configured to execute instructions or data stored in a memory, which instructions or data may cause the processor 104 to carry out one or more computing steps utilizing or based on the inputs 102 to generate the outputs 106.
- the inputs 102 may include a set of surgical procedure data 108 for a surgical procedure for a patient.
- the set of surgical procedure data 108 may include patient demographic data, one or more radiological images, pathology data, or a combination thereof.
- the surgical procedure data may include historical and/or real-time data from an ecosystem of surgical products.
- data sources that may provide the surgical procedure data include a surgical navigation system 122, a surgical robotic system 124, operating room data 126, clinician data 128, patient data 130, imaging device(s) 132, and provider data 134.
- Data from one or more of the data sources may be provided to a data curation unit 116, where the data is formatted with a formatting engine 118 and annotated with an annotation engine 120.
- Operation of the formatting engine 118 may be autonomous (e.g., performed automatically) or may be supported with manual inputs (e.g., may include a manual process).
- the surgical procedure data 108 may correspond to curated and/or annotated data from multiple different data sources in an ecosystem of surgical products.
- the multiple different data sources may also include data sources not related to the ecosystem of surgical products, but related to a patient, clinician (e.g., surgeon), care provider, operating room, etc.
- Non-limiting examples of surgical procedure data 108 that may be received from the surgical navigation system 122 include historical data from surgeries in which the surgical navigation system 122 was used, image segmentation data, anatomical object data, surgical plans used during a surgical procedure, surgical devices used during a surgical procedure, information describing changes to surgical plans, information describing movement of objects during a surgical procedure, information describing object locations in a coordinate space, etc.
- the surgical navigation system 122 may also provide information describing a location and movement of a surgeon, nurse, surgical robot, or other object in an operating room during a surgical procedure.
- the surgical navigation system 122 may also provide information describing whether or not a surgical procedure followed a surgical plan and, if not, what deviations from the surgical plan were made during the surgical procedure.
- Surgical procedure data may also include data from non-imaging sources.
- surgical procedure data may include data received from an anesthesia machine, a heart monitor, a blood pressure monitor, combinations thereof, etc.
- Non-limiting examples of surgical procedure data 108 that may be received from the surgical robotic system 124 include historical data from surgeries in which the surgical robotic system 124 was used, information describing surgical instruments attached to the surgical robot during a surgical procedure, information describing maneuvers of a robotic arm during a surgical procedure, information describing end effectors used during a surgical procedure, information describing whether the surgical robotic system 124 was utilized in an autonomous mode or semi- autonomous mode, information describing operating parameters of the surgical robotic system 124, etc.
- Non-limiting examples of surgical procedure data 108 that may be received from the operating room data 126 may include a time of a surgical procedure, a location of the operating room, conditions of the operating room during a surgical procedure (e.g., temperature, humidity, lighting conditions, number of personnel in the operating room, types of devices in the operating room, etc.), duration of the surgical procedure, insurance codes associated with a surgical procedure, etc.
- Non-limiting examples of surgical procedure data 108 that may be received from clinician data 128 include a name of a clinician or surgeon, a practice with which a clinician or surgeon is associated, a patient history for the clinician or surgeon, patient outcome statistics for the clinician or surgeon, an educational history for the clinician or surgeon, surgical techniques used by the clinician or surgeon, etc.
- the clinician data 128 may be anonymized such that no personally identifiable information (PII) or any other personal data associated with a clinician, surgeon, or patient is exposed to the data curation unit 116.
- the anonymization of data received from the clinician data 128 may be performed by the formatting engine 118.
- the patient data 130 may be formatted by the formatting engine 118 to remove any PII therefrom.
- the patient data 130 may be anonymized to remove all PII therefrom.
- Non-limiting examples of surgical procedure data 108 that may be received from the patient data 130 include patient zip code, patient demographics, patient height, patient weight, patient conditions, patient feedback (e.g., inputs received from a patient before or after a surgical procedure to help define a patient outcome or patient comfort), etc.
- Non-limiting examples of surgical procedure data 108 that may be received from the imaging device(s) 132 include raw signal data (e.g., signals received directly from the imaging device(s) 132), raw image data, formatted image data, information describing settings of the imaging device(s) 132, segmented images, filtered images, image streams, video images, pre-operative images, intra-operative images, post-operative images, etc. It should be appreciated that some surgical procedure data 108 received from an imaging device 132 may be correlated to the surgical navigation system 122 and/or surgical robotic system 124 that was being used when an image was captured with an imaging device 132. In other words, surgical procedure data 108 received from the imaging device(s) 132 may be formatted and/or annotated to include information describing what type of surgical navigation system 122 and surgical robotic system 124 were being used when an image was captured.
- Non-limiting examples of surgical procedure data 108 that may be received from the provider data 134 include any type of data related to insurance providers or carriers with which a surgical procedure was documented.
- the provider data 134 may include insurance codes, patient co-pays, prescriptions used before/during/after the surgical procedure, treatments given during a surgical procedure, etc.
- the data curation unit 116 may utilize the formatting engine 118 to remove any unnecessary and/or problematic information from the data sources prior to committing the data as surgical procedure data 108 to the ML input(s) 102. Alternatively or additionally, the data curation unit 116 may annotate the data with the annotation engine 120 to help associate the various types of data from different data sources to a common surgical procedure. Data curation and annotation may help normalize the data from different sources before being processed by the processor 104.
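- A minimal sketch of such a curation pass is shown below, assuming invented field names and an invented PII list: the formatting step strips PII-like fields, and the annotation step tags each record with a common surgical-procedure identifier.

```python
# Illustrative data curation: format (strip PII-like fields), then annotate
# (associate each record with a common surgical procedure).
PII_FIELDS = {"patient_name", "surgeon_name", "date_of_birth", "address"}

def format_record(record: dict) -> dict:
    """Formatting engine: drop unnecessary/problematic fields."""
    return {k: v for k, v in record.items() if k not in PII_FIELDS}

def annotate_record(record: dict, procedure_id: str) -> dict:
    """Annotation engine: associate the record with a common procedure."""
    return dict(record, procedure_id=procedure_id, annotated=True)

def curate(records: list, procedure_id: str) -> list:
    return [annotate_record(format_record(r), procedure_id) for r in records]
```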
- the processor 104 may use the set of surgical procedure data 108 to predict an exact disease state for the patient and for the surgical procedure based on a machine learning model 110 (e.g., machine learning algorithm, AI-based algorithm or model, etc.).
- the machine learning model 110 may be created based on available historical data of previously performed surgical procedures, which includes procedure and instrument flow and abnormalities (e.g., which surgical instruments were used, in which order the surgical instruments were used, any abnormalities that were present, etc.), radiology images and annotations (e.g., MRI scans, CT scans or images, X-rays, etc.), demographic information of the patients that underwent the previously performed surgical procedures, and 3D anatomical models (e.g., indicating angles, positions, dimensions, etc.).
- the machine learning model 110 may be continuously improved based on continuous feedback (e.g., from surgeons and/or patients) after surgical procedures are completed.
- the processor 104 may use the machine learning model 110 to compare the set of surgical procedure data 108 with the available historical data. Based on the comparison using the machine learning model 110, the processor 104 may generate a list of features associated with an ecosystem of surgical products to display to the surgeon (e.g., or other medical provider).
- the list of features may be an algorithm implemented by a surgical robot, an algorithm implemented by a surgical navigation system, an imaging technique implemented by an imaging device, a surgical plan implemented by a care provider, a surgical instrument selected for use during a surgical procedure, a device implanted into a patient, and/or a layout for an operating room.
- the list of features may alternatively or additionally include an image segmentation approach to use with the imaging device(s) 132, a therapy delivery technique, a type of therapy, and the like.
- the closest matching surgical procedures may be compared to the surgical procedure for which the set of surgical procedure data 108 are provided and compared between each other with a similarity index that comprises a positional coordinate correlation (e.g., surgery anatomical position, implant location, etc.), a deformation coefficient, demographic similarity (e.g., body mass index (BMI) and/or other demographic information for the associated patients), 3D model similarity, inventory stock matching (e.g., whether same surgical instruments are available for the surgical procedure that were available and used for the closest matching surgical procedures), or a combination thereof.
- different similarity index values for the different components of the similarity indexes of each of the closest matching surgical procedures may be displayed to the surgeon to indicate how similar each of the individual components are between the surgical procedure and the closest matching surgical procedures. Additionally or alternatively, an overall similarity index may be displayed indicating how similar the surgical procedure is in relation to each of the closest matching surgical procedures.
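- One way to combine the component scores into an overall similarity index is a weighted sum, as in the sketch below; the component names mirror the list above, but the weights and the assumption that each component is pre-normalized to [0, 1] are invented for the example.

```python
# Hypothetical weighted similarity index over the components described above.
WEIGHTS = {
    "positional_correlation": 0.30,
    "deformation_coefficient": 0.15,
    "demographic_similarity": 0.20,
    "model_3d_similarity": 0.25,
    "inventory_match": 0.10,
}

def overall_similarity(components: dict) -> float:
    return sum(w * components.get(name, 0.0) for name, w in WEIGHTS.items())

# Per-component values can be displayed individually alongside the overall index:
case = {"positional_correlation": 0.9, "deformation_coefficient": 0.7,
        "demographic_similarity": 0.8, "model_3d_similarity": 0.85,
        "inventory_match": 1.0}
print(overall_similarity(case))  # approximately 0.85
```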
- the surgeon can choose the closest possible previously executed surgery, which will help to map and to identify the surgical flow.
- the processor 104 may provide a surgical procedure suggestion 112 as part of the output(s) 106 (e.g., suggestion of which surgical instruments to use based on which surgical instruments were used for the chosen closest possible previously executed surgery).
- the processor 104 may display (e.g., via a user interface) a suggestion of devices to use from a surgical ecosystem.
- the surgeon can edit or accept the surgical procedure suggestion 112 corresponding to the closest possible previously executed surgery.
- the processor 104 may distribute information to various parties describing the surgical procedure to be used for the patient and which products from the ecosystem of surgical products should be used. Additionally or alternatively, the processor 104 may simply provide an output that indicates which surgical instruments to place or load on a surgical robotic system 124. The surgeon can then complete the surgery and provide the feedback back to the machine learning model 110 as part of a feedback loop, which will help to further mature the machine learning model 110.
- the machine learning model 110 may be trained or retrained using reinforcement learning instead of supervised learning.
- the machine learning model 110 may be developed based on the surgical procedure data 108 (e.g., patient input data, such as radiology and physiology images and data) and previous surgical data of similar procedures (e.g., stored in a database), where the previous surgical data may include 3D models, angle and position, depth of implant and dimension, annotations, treatment plans, radiology diagnostic imaging, an implant used with respect to the 3D models, or a combination thereof for each of the similar procedures.
- the processor 104 may then use the machine learning model 110 to suggest one or more surgery plans to the surgeon, including the disease state for the patient (e.g., angle, depth, and position of the targeted area) based on the similar surgical data.
- the suggested surgery plans may be suggested or displayed to the surgeon in a 3D model view.
- the surgeon may select one of the similar procedures to follow.
- the surgeon may be able to make changes to the suggested surgical plan (e.g., based on differences between the selected procedure and the surgical procedure to be performed). Changes made to a surgical plan may be considered as part of training or retraining the machine learning model 110.
- the processor 104 may provide the surgical procedure suggestion 112 to indicate all details associated with a future surgical procedure (e.g., surgical plan, device(s) to use, operating room setup details, suggested personnel, surgical navigation system 122 settings, imaging device 132 settings, surgical robotic system 124 settings, etc.).
- a rendering of the patient’s radiology images and mapping (e.g., acquired from the set of surgical procedure data 108) with the machine learning model 110 may enable the processor 104 to display a first cut sectional view for the surgery planning.
- the processor may then also suggest a closest match of the previously conducted surgeries based on a comparison of the cuts and/or incisions made for the previously conducted surgeries and the first cut sectional view, allowing the surgeon to select between the various options of the previously conducted surgeries based on the similarity index(es) and a visualization of the previously conducted surgeries with respect to the surgical procedure to be performed.
- the availability of data and planning information from previous surgeries can be used as training material for end users and employees.
- FIG. 2 is a diagram of a workflow 200 according to at least one embodiment of the present disclosure.
- the workflow 200 may implement aspects of or may be implemented by aspects of FIG. 1.
- the workflow 200 may be a more detailed view of the system 100, where a machine learning model uses inputs for a surgery to determine a surgical plan and a surgical procedure suggestion based on historical data of previously performed surgeries.
- the workflow 200 may be performed by a processor described herein, such as the processor 104 as described with reference to FIG. 1.
- one or more inputs for a given surgical procedure for a patient may be provided or received.
- the one or more inputs may include demographic information for the patient, radiology and physiology images and data of the patient for the given surgical procedure, pathology data for the patient, or a combination thereof.
- the inputs for surgery 202 may include pre-operative images (e.g., CT images, CBCT images, MRI images, x-ray images, ultrasound images, pictures, etc.).
- the inputs for surgery 202 may also include notes from a clinician, insurance data, patient health history, operating room availability, products available from an ecosystem of surgical products, etc.
- the inputs for surgery 202 may include some or all of the types of data inputs provided to the data curation unit 116 described in connection with FIG. 1.
- a predictive position and treatment for the patient and given surgical procedure may be provided.
- planning of a surgical procedure for the patient may be launched.
- the planning may be launched or may be based on the machine learning model as described herein.
- the machine learning model may include or may be trained based on historical data 226 (e.g., stored in a database or cloud database) of previously performed surgeries, including surgery data 224.
- the machine learning model may be trained using reinforcement learning, which may include any one of a policy-based reinforcement learning, a value-function-based reinforcement learning, an associative reinforcement learning, a deep reinforcement learning, an adversarial deep reinforcement learning, a fuzzy reinforcement learning, an inverse reinforcement learning, and a safe reinforcement learning.
- the workflow 200 may perform operation 208 to execute a comparison between the given surgical procedure and the historical data 226 of the previously performed surgeries.
- the processor may display (e.g., via a user interface) and list the closest procedure match(es) from the previously performed surgeries that are most similar to the given surgical procedure to be performed.
- the processor may display similarity index(es) indicating how similar each of the previously performed surgeries are to the given surgical procedure to be performed and/or how similar different aspects of the previously performed surgeries are to corresponding aspects of the given surgical procedure to be performed.
- one or multiple of the closest procedure matches may be selected (e.g., by a surgeon or other medical provider) based on the similarity index(es).
- a surgical plan and instrument suggestion may be previewed and displayed by the processor to the surgeon (e.g., via a user interface) at operation 214.
- the surgical plan may include a disease state for the patient, such as an angle, depth, and position of a targeted area in the patient to be accessed as part of the surgical procedure.
- the processor may suggest a position of the patient for performing the given surgical procedure (e.g., to reduce load on the targeted area, fractured bone, etc.), such as on their side, on their stomach, etc.
- the surgeon may edit and/or accept the surgical plan that is based on the selected closest procedure. For example, after making the selection, the surgeon may be able to make changes to the suggested surgical plan (e.g., based on differences between the selected procedure and the surgical procedure to be performed, differences between the patient for the given surgical procedure and the patient for which the selected procedure was performed, etc.).
- the processor may distribute the surgical plan. Additionally or alternatively, the processor may display information describing the surgical plan to the patient, surgeon, operating room personnel, nurses, insurance company, and the like.
- the surgical plan may be transmitted via electronic communications to various computing devices of the entities mentioned above.
- the surgeon may perform and complete the surgical procedure using products from an ecosystem of surgical products, as suggested, displayed, and/or implemented based on the selected closest procedure and suggested surgical plan.
- the surgeon, patient, and/or operating room personnel may provide feedback for the machine learning model (e.g., which products from the ecosystem of surgical products were or were not used, performance data for the suggested surgical plan, additional data, etc.) to further train and/or update the machine learning model.
- the feedback may include surgery data 224 for the completed surgical procedure, such as 3D models, angle and position of the surgery to reach a targeted area of the patient for the surgical procedure, dimensions, annotations, treatment plans, radiology diagnostic imaging, an implant used with respect to the 3D models, imaging used, object motion captured by the surgical navigation system, etc.
- the surgery data 224 may also include similar information from the historical data 226 for the previously performed surgeries.
- the historical data 226 and the surgery data 224 may be used to train the machine learning model at operation 228 in a continuous feedback loop to mature and continually refine the machine learning model (e.g., including performing validation and testing of the machine learning model).
- the training may include reinforcement learning in which policies, values, functions, and/or objectives are provided to the learning agent.
- the purpose of reinforcement learning is for the agent to learn an optimal, or nearly optimal, policy that maximizes the "reward function" or other user-provided reinforcement signal that accumulates from the immediate rewards.
- the machine learning model may be created and updated at operation 230 of the workflow 200 after training based on the reinforcement learning, which may be driven by historical data 226 and the surgical procedure data 224.
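- For concreteness, a minimal tabular Q-learning loop of the kind the feedback cycle above could use is sketched below; the states, actions, and reward signal (e.g., derived from patient outcomes) are placeholders, not elements of the disclosure.

```python
# Minimal tabular Q-learning sketch: the learned policy is pushed toward
# maximizing accumulated reward from a user-provided reinforcement signal.
import random
from collections import defaultdict

Q = defaultdict(float)                  # action-value estimates, Q(s, a)
alpha, gamma, epsilon = 0.1, 0.95, 0.1  # step size, discount, exploration rate

def choose_action(state, actions):
    if random.random() < epsilon:                     # explore occasionally
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])  # otherwise exploit

def q_update(state, action, reward, next_state, actions) -> None:
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```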
- Referring now to FIG. 3, a diagram of a system 300 according to at least one embodiment of the present disclosure is shown.
- the system 300 may be used to suggest a surgical plan and/or products from an ecosystem of surgical products for performing a surgical procedure.
- the system 300 is illustrated to include a computing device 302, one or more imaging devices 312, a robot 314, a navigation system 318, a database 330, and/or a cloud or other network 334.
- One, some, or all of the components of system 300 may be configured to provide surgical procedure data 108 to the data curation unit 116 for eventual inputs to a machine learning process.
- Systems according to other embodiments of the present disclosure may comprise more or fewer components than the system 300.
- system 300 may not include the imaging device 312, the robot 314, the navigation system 318, one or more components of the computing device 302, the database 330, and/or the cloud 334.
- the various components of system 300 may be considered products within an ecosystem of surgical products, each of which support a surgical procedure and may be selected for use with a particular surgical plan.
- the computing device 302 comprises a processor 304, a memory 306, a communication interface 308, and a user interface 310.
- Computing devices according to other embodiments of the present disclosure may comprise more or fewer components than the computing device 302.
- the processor 304 of the computing device 302 may be any processor described herein or any similar processor.
- the processor 304 may be configured to execute instructions stored in the memory 306, which instructions may cause the processor 304 to carry out one or more computing steps utilizing or based on data received from the imaging device 312, the robot 314, the navigation system 318, the database 330, and/or the cloud 334.
- the memory 306 may be or comprise RAM, DRAM, SDRAM, other solid-state memory, any memory described herein, or any other tangible, non-transitory memory for storing computer- readable data and/or instructions.
- the memory 306 may store information or data useful for completing, for example, any step of the methods described herein, or of any other methods.
- the memory 306 may store, for example, instructions and/or machine learning models that support one or more functions of the robot 314.
- the memory 306 may store content (e.g., instructions and/or machine learning models) that, when executed by the processor 304, enable surgical plan determination 320, surgical plan selection 322, surgical robot settings 324, and navigation/imaging settings 326.
- the surgical plan determination 320 enables the processor 304 to receive a set of inputs for a surgical procedure for a patient and to determine one or more potential plans for the surgical procedure based at least in part on the set of inputs and a machine learning model.
- the set of inputs for the surgical procedure may comprise patient demographic data, one or more radiological images, pathology data, or a combination thereof.
- the one or more potential plans may be determined based at least in part on a disease state corresponding to the surgical procedure for a patient.
- the surgical plan determination 320 enables the processor 304 to compare the set of inputs for the surgical procedure with historical data of previously performed surgical procedures to determine the one or more potential plans based at least in part on a similarity index.
- the historical data of previously performed surgical procedures may comprise procedure and instrument flow and abnormalities, radiology images and annotations, demographic information, three-dimensional anatomical models, angles, positions, dimensions, implants used with respect to the three-dimensional models, instrument availability information, treatment plans, or a combination thereof, for the previously performed surgical procedures.
- the similarity index may comprise a positional coordinate correlation, a deformation coefficient, demographic information, three-dimensional model, inventory stock matching of available surgical instruments, or a combination thereof.
- the surgical plan selection 322 enables the processor 304 to receive a selection of a plan from the one or more potential plans. Additionally, the surgical plan selection 322 enables the processor 304 to display (e.g., via the user interface 310) one or more similarity index values for each of the one or more potential plans, where the selection of the plan is based at least in part on the similarity index values. In some embodiments, the surgical plan selection 322 enables the processor 304 to provide a position suggestion for the surgical procedure for the patient corresponding to the plan from the selection.
- the surgical plan selection 322 enables the processor 304 to provide a cut sectional view for the surgical procedure for the patient based at least in part on the set of inputs and to provide one or more cut sectional views for the one or more potential plans based at least in part on historical surgical data corresponding to the one or more potential plans, where the selection of the plan from the one or more potential plans is received based at least in part on a comparison of the cut sectional view for the surgical procedure and the one or more cut sectional views for the one or more potential plans.
- the surgical robot settings 324 enables the processor 304 to determine one or more settings to be used by the robot 314 during a surgical procedure.
- Surgical robot settings 324 may define, for example, end effectors to use with the robotic arm(s) 316, instruments to use with the surgical robot 314, navigation trackers to connect to the robot arm(s) 316, no fly zone(s) for the robot arm(s) 316, a number of surgical robots 314 to use in the surgical procedure, a number of operators for the robot(s) 314, and the like.
- the surgical robot settings 324 enables the processor 304 to receive one or more changes to a surgical plan from the selection, and the interactions between the robot 314 and other surgical products in the ecosystem of surgical products may be defined.
- the navigation/imaging settings 326 enables the processor 304 to provide an output that indicates settings for the surgical navigation system 318 and/or imaging device(s) 312 during a surgical procedure.
- the navigation/imaging settings 326 may be selected based at least in part on one or more surgical robot settings 324.
- different navigation and/or imaging settings 326 may be selected based on surgical robot settings 324 that are being used for a surgical procedure.
- the surgical robot settings 324 and navigation/imaging settings 326 may be co-dependent on one another, meaning that a change in one of the settings 324, 326 may result or require a change in the other of the settings 324, 326.
- the interactions between products in an ecosystem of surgical products are complex and may depend on desired patient outcomes, surgeon preferences, operating room capabilities, product availability, etc.
- the surgical plan selection 322 can depend on the surgical robot settings 324, which may depend on navigation/imaging settings 326, and vice versa.
- the output indicates which products to use from an ecosystem of surgical products and settings to utilize for each of the products.
- the surgical plan selection 322, surgical robot settings 324, and navigation/imaging settings 326 may be provided to the processor 304 for feedback after the surgical procedure for the patient is performed based at least in part on the output.
- the feedback may also include surgeon, patient, and/or operating room personnel feedback regarding outcomes, challenges, complications, resolutions of problems, etc. Accordingly, the feedback may be used, in part, to train the machine learning model.
- Content stored in the memory 306, if provided as instructions, may, in some embodiments, be organized into one or more applications, modules, packages, layers, or engines.
- the memory 306 may store other types of content or data (e.g., machine learning models, artificial neural networks, deep neural networks, etc.) that can be processed by the processor 304 to carry out the various method and features described herein.
- although various contents of memory 306 may be described as instructions, it should be appreciated that functionality described herein can be achieved through use of instructions, algorithms, and/or machine learning models.
- the data, algorithms, and/or instructions may cause the processor 304 to manipulate data stored in the memory 306 and/or received from or via the imaging device 312, the robot 314, the database 330, and/or the cloud 334.
- the computing device 302 may also comprise a communication interface 308.
- the communication interface 308 may be used for receiving image data or other information from an external source (such as the imaging device 312, the robot 314, the navigation system 318, the database 330, the cloud 334, and/or any other system or component not part of the system 300), and/or for transmitting instructions, images, or other information to an external system or device (e.g., another computing device 302, the imaging device 312, the robot 314, the navigation system 318, the database 330, the cloud 334, and/or any other system or component not part of the system 300).
- the communication interface 308 may comprise one or more wired interfaces (e.g., a USB port, an Ethernet port, a Firewire port) and/or one or more wireless transceivers or interfaces (configured, for example, to transmit and/or receive information via one or more wireless communication protocols such as 802.11a/b/g/n, Bluetooth, NFC, ZigBee, and so forth).
- the communication interface 308 may be useful for enabling the device 302 to communicate with one or more other processors 304 or computing devices 302, whether to reduce the time needed to accomplish a computing-intensive task or for any other reason.
- the computing device 302 may also comprise one or more user interfaces 310.
- the user interface 310 may be or comprise a keyboard, mouse, trackball, monitor, television, screen, touchscreen, and/or any other device for receiving information from a user and/or for providing information to a user.
- the user interface 310 may be used, for example, to receive a user selection or other user input regarding any step of any method described herein. Notwithstanding the foregoing, any required input for any step of any method described herein may be generated automatically by the system 300 (e.g., by the processor 304 or another component of the system 300) or received by the system 300 from a source external to the system 300.
- the user interface 310 may be useful to allow a surgeon or other user to modify instructions to be executed by the processor 304 according to one or more embodiments of the present disclosure, and/or to modify or adjust a setting of other information displayed on the user interface 310 or corresponding thereto.
- although the user interface 310 is shown as part of the computing device 302, in some embodiments, the computing device 302 may utilize a user interface 310 that is housed separately from one or more remaining components of the computing device 302. In some embodiments, the user interface 310 may be located proximate one or more other components of the computing device 302, while in other embodiments, the user interface 310 may be located remotely from one or more other components of the computing device 302.
- the imaging device 312 may be operable to image anatomical feature(s) (e.g., a bone, veins, tissue, etc.) and/or other aspects of patient anatomy to yield image data (e.g., image data depicting or corresponding to a bone, veins, tissue, etc.).
- image data refers to the data generated or captured by an imaging device 312, including in a machine-readable form, a graphical/visual form, and in any other form.
- the image data may comprise data corresponding to an anatomical feature of a patient, or to a portion thereof.
- the image data may be or comprise a preoperative image, an intraoperative image, a postoperative image, or an image taken independently of any surgical procedure.
- a first imaging device 312 may be used to obtain first image data (e.g., a first image) at a first time, and a second imaging device 312 may be used to obtain second image data (e.g., a second image) at a second time after the first time.
- the imaging device 312 may be capable of taking a 2D image or a 3D image to yield the image data.
- the imaging device 312 may be or comprise, for example, an ultrasound scanner (which may comprise, for example, a physically separate transducer and receiver, or a single ultrasound transceiver), an O-arm, a C-arm, a G-arm, or any other device utilizing X-ray-based imaging (e.g., a fluoroscope, a CT scanner, or other X-ray machine), a magnetic resonance imaging (MRI) scanner, an optical coherence tomography (OCT) scanner, an endoscope, a microscope, an optical camera, a thermographic camera (e.g., an infrared camera), a radar system (which may comprise, for example, a transmitter, a receiver, a processor, and one or more antennae), a cone beam computed tomography (CBCT) imaging device, or any other imaging device 312 suitable for obtaining images of an anatomical feature of a patient.
- the imaging device 312 may be contained entirely within a single housing, or may comprise a transmitter/emitter and a receiver/detector that are housed separately.
- the imaging device 312 may comprise more than one imaging device 312.
- a first imaging device may provide first image data and/or a first image
- a second imaging device may provide second image data and/or a second image.
- the same imaging device may be used to provide both the first image data and the second image data, and/or any other image data described herein.
- the imaging device 312 may be operable to generate a stream of image data.
- the imaging device 312 may be configured to operate with an open shutter, or with a shutter that continuously alternates between open and shut so as to capture successive images.
- image data may be considered to be continuous and/or provided as an image data stream if the image data represents two or more frames per second.
- the robot 314 may be any surgical robot or surgical robotic system.
- the robot 314 may be or comprise, for example, the Mazor X™ Stealth Edition robotic guidance system.
- the robot 314 may be configured to position the imaging device 312 at one or more precise position(s) and orientation(s), and/or to return the imaging device 312 to the same position(s) and orientation(s) at a later point in time.
- the robot 314 may additionally or alternatively be configured to manipulate a surgical tool (whether based on guidance from the navigation system 318 or not) to accomplish or to assist with a surgical task.
- the robot 314 may be configured to hold and/or manipulate an anatomical element during or in connection with a surgical procedure.
- the robot 314 may comprise one or more robotic arms 316.
- the robotic arms 316 may comprise a first robotic arm and a second robotic arm, though the robot 314 may comprise more than two robotic arms.
- one or more of the robotic arms 316 may be used to hold and/or maneuver the imaging device 312.
- where the imaging device 312 comprises two or more physically separate components (e.g., a transmitter and a receiver), one robotic arm 316 may hold one such component, and another robotic arm 316 may hold another such component.
- Each robotic arm 316 may be positionable independently of the other robotic arm.
- the robotic arms 316 may be controlled in a single, shared coordinate space, or in separate coordinate spaces.
- the robot 314, together with the robotic arm 316, may have, for example, one, two, three, four, five, six, seven, or more degrees of freedom. Further, the robotic arm 316 may be positioned or positionable in any pose, plane, and/or focal point. The pose includes a position and an orientation. As a result, an imaging device 312, surgical tool, or other object held by the robot 314 (or, more specifically, by the robotic arm 316) may be precisely positionable in one or more needed and specific positions and orientations.
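- By way of illustration only, the following minimal Python sketch shows one way a pose (a position paired with an orientation) might be represented in software; the field names, units, and values are hypothetical and are not part of the disclosed system.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """A pose pairs a 3D position with an orientation (here, a unit quaternion)."""
    position: tuple[float, float, float]             # x, y, z (e.g., millimeters)
    orientation: tuple[float, float, float, float]   # quaternion (w, x, y, z)

# A hypothetical target pose for a robot-held imaging device:
target = Pose(position=(120.0, -45.5, 310.2), orientation=(1.0, 0.0, 0.0, 0.0))
print(target)
```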
- the robotic arm(s) 316 may comprise one or more sensors that enable the processor 304 (or a processor of the robot 314) to determine a precise pose in space of the robotic arm (as well as any object or element held by or secured to the robotic arm).
- reference markers may be placed on the robot 314 (including, e.g., on the robotic arm 316), the imaging device 312, or any other object in the surgical space.
- the reference markers may be tracked by the navigation system 318, and the results of the tracking may be used by the robot 314 and/or by an operator of the system 300 or any component thereof.
- the navigation system 318 can be used to track other components of the system (e.g., imaging device 312) and the system can operate without the use of the robot 314 (e.g., with the surgeon manually manipulating the imaging device 312 and/or one or more surgical tools, based on information and/or instructions generated by the navigation system 318, for example).
- Historical information describing robot 314 motion during a surgical procedure may be provided as part of surgical procedure data 108.
- Information describing patient motion and/or surgeon motion during the surgical procedure may also be provided from the surgical navigation system 122 as surgical procedure data 108.
- the navigation system 318 may provide navigation for a surgeon and/or a surgical robot during an operation.
- the navigation system 318 may be any now-known or future-developed navigation system, including, for example, the Medtronic StealthStation™ S8 surgical navigation system or any successor thereof.
- the navigation system 318 may include one or more cameras or other sensor(s) for tracking one or more reference markers, navigated trackers, or other objects within the operating room or other room in which some or all of the system 300 is located.
- the one or more cameras may be optical cameras, infrared cameras, or other cameras.
- the navigation system 318 may comprise one or more sets of electromagnetic transmitters and sensors.
- the navigation system 318 may be used to track a position and orientation (e.g., a pose) of the imaging device 312, the robot 314 and/or robotic arm 316, and/or one or more surgical tools (or, more particularly, to track a pose of a navigated tracker attached, directly or indirectly, in fixed relation to one or more of the foregoing).
- the navigation system 318 may include a display for displaying one or more images from an external source (e.g., the computing device 302, imaging device 312, or other source) or for displaying an image and/or video stream from the one or more cameras or other sensors of the navigation system 318.
- the system 300 can operate without the use of the navigation system 318.
- the navigation system 318 may be configured to provide guidance to a surgeon or other user of the system 300 or a component thereof, to the robot 314, or to any other element of the system 300 regarding, for example, a pose of one or more anatomical elements, whether or not a tool is in the proper trajectory, and/or how to move a tool into the proper trajectory to carry out a surgical task according to a preoperative or other surgical plan.
- the database 330 may store information that correlates one coordinate system to another (e.g., one or more robotic coordinate systems to a patient coordinate system and/or to a navigation coordinate system).
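- As a purely illustrative sketch of the kind of correlation the database 330 might store, the following Python example applies a hypothetical 4x4 homogeneous transform to map a point from a navigation coordinate system into a robotic coordinate system; the matrix and point values are invented for illustration.

```python
import numpy as np

# Hypothetical rigid transform (a rotation about x by 90 degrees plus a
# translation) correlating a navigation coordinate system to a robotic
# coordinate system, stored as a 4x4 homogeneous matrix.
nav_to_robot = np.array([
    [1.0, 0.0,  0.0,  50.0],
    [0.0, 0.0, -1.0,  10.0],
    [0.0, 1.0,  0.0, -25.0],
    [0.0, 0.0,  0.0,   1.0],
])

def transform_point(T: np.ndarray, p: np.ndarray) -> np.ndarray:
    """Map a 3D point from one coordinate system to another."""
    return (T @ np.append(p, 1.0))[:3]

point_in_nav = np.array([100.0, 20.0, 5.0])
print(transform_point(nav_to_robot, point_in_nav))  # [150.   5.  -5.]
```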
- the database 330 may additionally or alternatively store, for example, one or more surgical plans (including, for example, pose information about a target and/or image information about a patient’s anatomy at and/or proximate the surgical site, for use by the robot 314, the navigation system 318, and/or a user of the computing device 302 or of the system 300); one or more images useful in connection with a surgery to be completed by or with the assistance of one or more other components of the system 300; and/or any other useful information.
- the database 330 may be configured to provide any such information to the computing device 302 or to any other device of the system 300 or external to the system 300, whether directly or via the cloud 328.
- the database 330 may be or comprise part of a hospital image storage system, such as a picture archiving and communication system (PACS), a health information system (HIS), and/or another system for collecting, storing, managing, and/or transmitting electronic medical records including image data.
- the cloud 328 may be or represent the Internet or any other wide area network.
- the computing device 302 may be connected to the cloud 328 via the communication interface 308, using a wired connection, a wireless connection, or both.
- the computing device 302 may communicate with the database 330 and/or an external device (e.g., a computing device) via the cloud 328.
- the system 300 or similar systems may be used, for example, to carry out one or more aspects of any of the methods 400, 500, and/or 600 described herein.
- the system 300 or similar systems may also be used for other purposes.
- Fig. 4 illustrates a data pipeline 400 that may be used to apply reinforcement learning as part of providing and/or modifying one or more features associated with an ecosystem of surgical products.
- the ecosystem of surgical products may include one or more of a surgical navigation system, a surgical robot, a surgical technique, a surgical instrument, an implant, a therapy delivery technique, a therapy delivery device, and the like.
- the one or more features associated with the ecosystem of the surgical products include an algorithm implemented by a surgical robot, an algorithm implemented by a surgical navigation system, an imaging technique implemented by an imaging device, a surgical plan implemented by a care provider, a surgical instrument selected for use during a surgical procedure, a device implanted into a patient, a therapy delivery technique, an image segmentation approach, and a layout for an operating room.
- the data pipeline 400 includes data storage 402 in which surgical procedure data is stored.
- the data storage 402 may utilize any memory, database, or cloud storage device depicted and/or described herein.
- storage of data in the data storage 402 may be ongoing, and the stored data may be updated as machine learning models are updated.
- memory management approaches may be utilized to delete unnecessary, expired, and/or unwanted data that is not impacting the machine learning process.
- the data pipeline 400 may further include data normalization and annotation 404, in which surgical procedure data is curated, formatted, and/or annotated for further use within the pipeline 400.
- the data normalization may include formatting the data such that personally identifiable information (PII) is removed from all surgical procedure data, conforming data fields to predefined formats, etc.
- the annotation may include defining any previously undefined data fields, organizing data in a stream that is efficient to process, removing unnecessary bits of data, adding metadata that associates the surgical procedure data with other surgical procedure data (e.g., binds the surgical procedure data from different devices to a common surgical procedure), etc.
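- The following is a minimal, hypothetical Python sketch of such normalization and annotation; the field names (e.g., PII_FIELDS, procedure_id) are assumptions for illustration and do not reflect any particular schema.

```python
# Hypothetical PII field names; a real pipeline would follow the site's schema.
PII_FIELDS = {"patient_name", "date_of_birth", "medical_record_number"}

def normalize_and_annotate(record: dict, procedure_id: str) -> dict:
    """Strip PII, conform fields to predefined formats, and add metadata that
    binds the record to a common surgical procedure."""
    clean = {k: v for k, v in record.items() if k not in PII_FIELDS}
    # Conform a free-form duration field to integer minutes, if present.
    if "duration" in clean:
        clean["duration_min"] = int(float(clean.pop("duration")))
    clean["meta"] = {"procedure_id": procedure_id,
                     "source": record.get("source", "unknown")}
    return clean

robot_record = {"patient_name": "Jane Doe", "duration": "92.5",
                "source": "surgical_robot"}
print(normalize_and_annotate(robot_record, procedure_id="proc-001"))
```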
- the data pipeline 400 may further include reinforcement learning 406, in which the annotated surgical procedure data is subjected to one or more reinforcement learning approaches.
- the reinforcement learning may include a policy-based reinforcement learning where one or more policies are provided to the machine learning model during training or retraining. Examples of policies that may be provided in a policy-based reinforcement learning include, without limitation, insurance policy requirements, hospital policy requirements, patient-defined policies, surgeon-defined policies, etc.
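- As a non-limiting sketch, the following Python example shows one way provided policies might constrain the actions available to a learning agent; the policy rules and action names are hypothetical.

```python
import random

# Hypothetical policies: each returns True if the candidate action is allowed.
def hospital_policy(action: str) -> bool:
    return action != "skip_preop_imaging"       # hospital requires preop imaging

def insurance_policy(action: str) -> bool:
    return action != "use_noncovered_implant"   # insurer covers listed implants only

POLICIES = [hospital_policy, insurance_policy]

def select_action(candidate_actions: list[str]) -> str:
    """Choose among actions that satisfy every policy supplied during training."""
    allowed = [a for a in candidate_actions if all(p(a) for p in POLICIES)]
    return random.choice(allowed) if allowed else "defer_to_surgeon"

print(select_action(["skip_preop_imaging", "standard_workflow",
                     "use_noncovered_implant"]))  # "standard_workflow"
```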
- the reinforcement learning may include a value-function-based reinforcement learning where one or more functions are defined for the machine learning model during training or retraining.
- the function(s) used in a value-function-based reinforcement learning may include an optimization function that seeks to optimize a patient outcome after a surgical procedure.
- the optimization function may include a maximization function that maximizes one or more of: patient happiness (e.g., as measured in patient survey data), surgeon approval (e.g., as measured in surgeon survey data), insurance coverage for the surgical procedure, etc.
- the optimization function may include a minimization function that minimizes one or more of: patient cost, elapsed time of the surgical procedure, patient discomfort (e.g., as measured in patient survey data), number of surgical products used from an ecosystem of surgical products, number of follow-up visits, number of surgical procedures required to complete a larger surgical plan, etc.
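- A minimal sketch of such an optimization function follows, combining maximization terms (positive weights) and minimization terms (negative weights); all field names and weights are illustrative assumptions, not values from this disclosure.

```python
# Positive weights are maximized; negative weights are minimized.
WEIGHTS = {
    "patient_happiness": +1.0,    # survey score, 0-10 (maximize)
    "surgeon_approval":  +0.5,    # survey score, 0-10 (maximize)
    "patient_cost":      -0.001,  # dollars (minimize)
    "elapsed_minutes":   -0.05,   # procedure duration (minimize)
    "followup_visits":   -0.5,    # count (minimize)
}

def outcome_reward(outcome: dict) -> float:
    """Score a surgical outcome; higher is better for the learning agent."""
    return sum(w * outcome.get(k, 0.0) for k, w in WEIGHTS.items())

print(outcome_reward({"patient_happiness": 9, "patient_cost": 12000,
                      "elapsed_minutes": 95}))  # -7.75
```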
- the reinforcement learning may include a model-based reinforcement learning where one or more virtual models are developed for the ecosystem of surgical products.
- surgical procedure data may be provided to multiple virtual models along with other machine learning models in production (e.g., currently in use for recommending surgical procedure data).
- the data pipeline 400 may further include a comparison 408 in which new model(s) or models that are subject to training/retraining are compared with models currently in production (e.g., models in use).
- performance metrics of the models in production may be compared with performance metrics of the models being trained or retrained.
- performance of the different models (actual and virtual) may be compared over time to determine which model(s) is performing better than other models.
- a model currently in use may be replaced with a virtual model if outcomes achieved with the virtual model are better as compared with outcomes achieved with the model in use.
- the data pipeline 400 may further include a model update 410 in which model(s) in production may be updated, retrained, and/or replaced, depending upon the outcome of the comparison 408.
- if performance metrics associated with a model in training are better than performance metrics associated with a model in production by at least a predetermined amount or percentage, then the model in production may be replaced with the model in training.
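- A hedged sketch of such a threshold-based replacement test follows; the 5% improvement threshold is an invented example of a "predetermined amount or percentage."

```python
def should_replace(prod_metric: float, trained_metric: float,
                   min_improvement_pct: float = 5.0) -> bool:
    """Replace the production model only when the model in training beats it
    by at least a predetermined percentage (5% here, purely illustrative)."""
    if prod_metric <= 0:
        return trained_metric > prod_metric
    improvement = (trained_metric - prod_metric) / prod_metric * 100.0
    return improvement >= min_improvement_pct

print(should_replace(prod_metric=0.80, trained_metric=0.86))  # True (+7.5%)
print(should_replace(prod_metric=0.80, trained_metric=0.82))  # False (+2.5%)
```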
- verifying the efficacy of a model may include verifying one or more performance metrics associated with use of the machine learning model.
- the performance metric may include a measured effectiveness (e.g., impact on a patient) of any algorithms that are updated based on the retraining of the machine learning model.
- the data pipeline 400 may be a closed-loop pipeline such that updates and verifications may be stored in the data storage 402 as new surgical procedure data 108 and the process may be repeated/iterated.
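- One illustrative (and deliberately toy) rendering of this closed loop in Python follows; the scoring rule stands in for the training and comparison stages 402-410 and is not a real training procedure.

```python
class PipelineState:
    """Toy stand-in for the data storage 402 and the model in production."""
    def __init__(self) -> None:
        self.data: list[dict] = []
        self.production_score = 0.70

def run_iteration(state: PipelineState, new_records: list[dict]) -> None:
    state.data.extend(new_records)                        # store (402)
    curated = [r for r in state.data if "outcome" in r]   # curate/annotate (404)
    candidate_score = 0.65 + 0.01 * len(curated)          # "retrain" (406), mocked
    if candidate_score > state.production_score:          # compare (408)
        state.production_score = candidate_score          # model update (410)

state = PipelineState()
run_iteration(state, [{"outcome": 0.9}] * 10)
print(round(state.production_score, 2))  # 0.75: the candidate was promoted
```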
- Fig. 5 depicts, in accordance with at least one embodiment, a method 500 that may be used to suggest a surgery plan and provide or modify one or more features associated with an ecosystem of surgical products.
- the method 500 (and/or one or more steps thereof) may be carried out or otherwise performed, for example, by at least one processor.
- the at least one processor may be the same as or similar to the processor(s) 304 of the computing device 302 described above.
- the at least one processor may include the processor 104.
- the at least one processor may be part of a robot (such as a robot 314) or part of a navigation system (such as a navigation system 318).
- a processor other than any processor described herein may also be used to execute the method 500.
- the at least one processor may perform the method 500 by executing elements stored in a memory such as the memory 306.
- the elements stored in the memory and executed by the processor may cause the processor to execute one or more steps of the method 500.
- One or more portions of the method 500 may be performed by the processor executing any of the contents of memory, such as a surgical plan determination 320, a surgical plan selection 322, surgical robot settings 324, and/or navigation/imaging settings 326.
- the method 500 comprises receiving surgical procedure data (step 504).
- the surgical procedure data may be received as one or more data sets from a data repository.
- the surgical procedure data may include data of one or many different types and may originate from one or many different sources of surgical procedure data.
- method 500 may further include storing the updated data if the data is received as a data stream or from an external data source (step 508).
- the method 500 also comprises curating and annotating the surgical procedure data (step 512).
- the curating and/or annotating may be performed by the data curation unit 116 and may support additional processing of the data during reinforcement learning.
- the method 500 also comprises training or retraining a machine learning model with reinforcement learning or reinforcement training based on the updated data set (step 516).
- the surgical procedure data may be provided to a machine learning model that is being trained or retrained using a reinforcement learning approach.
- the method 500 also comprises providing one or more performance metrics associated with retraining of the machine learning model (step 520).
- the performance metric(s) may be verified during training or retraining of the machine learning model. For instance, during reinforcement learning, penalties may be applied to a training agent when models are changed in a way that deteriorates a performance metric, whereas benefits or rewards may be provided to a training agent when models are changed in a way that enhances a performance metric. As performance is measured and re-measured, the performance metrics may be verified to enable additional penalties and/or rewards to be applied to the training agent.
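- A minimal sketch of such reward shaping follows; a simple sign-of-the-difference rule is only one of many possible designs.

```python
def shaped_reward(metric_before: float, metric_after: float) -> float:
    """Reward the training agent when a model change enhances a performance
    metric; penalize it when the metric deteriorates."""
    delta = metric_after - metric_before
    if delta > 0:
        return +1.0   # reward/benefit
    if delta < 0:
        return -1.0   # penalty
    return 0.0

print(shaped_reward(0.78, 0.82))  # +1.0: the change enhanced the metric
print(shaped_reward(0.78, 0.70))  # -1.0: the change deteriorated the metric
```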
- the method 500 also comprises providing an output that includes a definition of one or more products to use from an ecosystem of surgical products during a surgical procedure (step 524).
- the output may include providing or modifying one or more features associated with the ecosystem of surgical products.
- providing or modifying one or more features may include defining, changing, suggesting, or amending at least one of: an algorithm implemented by a surgical robot, an algorithm implemented by a surgical navigation system, an imaging technique implemented by an imaging device, a surgical plan implemented by a care provider, a surgical instrument selected for use during a surgical procedure, a device implanted into a patient, and a layout for an operating room.
- the present disclosure encompasses embodiments of the method 500 that comprise more or fewer steps than those described above, and/or one or more steps that are different than the steps described above.
- Fig. 6 depicts a method 600 that may be used, for example, to train or retrain one or more machine learning models and to implement a change in an ecosystem of surgical products.
- the method 600 (and/or one or more steps thereof) may be carried out or otherwise performed, for example, by at least one processor.
- the at least one processor may be the same as or similar to other processor(s) depicted and described herein.
- a processor other than any processor described herein may also be used to execute the method 600.
- the at least one processor may perform the method 600 by executing elements stored in a memory such as the memory 306.
- the elements stored in memory and executed by the processor may cause the processor to execute one or more steps of the method 600.
- One or more portions of the method 600 may be performed by the processor executing any of the contents of memory, such as a surgical plan determination 320, a surgical plan selection 322, surgical robot settings 324, and/or navigation/imaging settings 326.
- the method 600 comprises receiving surgical procedure data (step 604).
- the method 600 also comprises annotating the surgical procedure data (step 608).
- Steps 604 and 608 may implement similar aspects of steps 504 and 512, respectively, as described with reference to Fig. 5.
- the method 600 may also comprise inputting the annotated surgical procedure data to one or more machine learning models (step 612).
- the annotated surgical procedure data may be provided to the machine learning models as part of a training or retraining process.
- the method 600 may further include receiving an output from one or more of the machine learning models (step 616). The output generated by the machine learning models may be generated in response to the machine learning models processing the annotated surgical procedure data.
- the method 600 may further include implementing reinforcement learning on the machine learning model(s) (step 620).
- the reinforcement learning may be implemented based, at least in part, on an analysis of the outputs received from the machine learning models. Unlike supervised learning, the reinforcement learning may apply rewards and/or penalties to an agent that simultaneously tracks a state of the ecosystem of surgical products to determine whether certain behaviors are viewed positively or negatively by users of the ecosystem of surgical products.
- the machine learning model(s) may be updated, as appropriate, to implement a change in an ecosystem of surgical products (step 624).
- the updates made to the machine learning models may be monitored and performance metrics of the machine learning models may be analyzed to verify whether or not the updates resulted in an improvement to the performance metrics and/or a deterioration to the performance metrics.
- the present disclosure encompasses embodiments of the method 600 that comprise more or fewer steps than those described above, and/or one or more steps that are different than the steps described above.
- Fig. 7 depicts a method 700 that may be used, for example, to train or retrain one or more machine learning models utilizing a policy-based reinforcement learning approach.
- the method 700 (and/or one or more steps thereof) may be carried out or otherwise performed, for example, by at least one processor.
- the at least one processor may be the same as or similar to other processor(s) depicted and described herein.
- a processor other than any processor described herein may also be used to execute the method 700.
- the at least one processor may perform the method 700 by executing elements stored in a memory such as the memory 306.
- the elements stored in memory and executed by the processor may cause the processor to execute one or more steps of the method 700.
- One or more portions of the method 700 may be performed by the processor executing any of the contents of memory, such as a surgical plan determination 320, a surgical plan selection 322, surgical robot settings 324, and/or navigation/imaging settings 326.
- the method 700 comprises receiving surgical procedure data (step 704).
- the surgical procedure data may include data received from a surgical robot and/or a surgical navigation system.
- the data may be curated and annotated as described herein.
- when the surgical procedure data is received from a surgical robot and a surgical navigation system for a common surgical procedure, the data from both the surgical robot and the surgical navigation system may be annotated to indicate that the surgical procedure data received from both data sources is associated with the common surgical procedure.
- the method 700 may further comprise providing the surgical procedure data to one or more machine learning models (step 708).
- the surgical procedure data may be provided to machine learning models that are being trained, retrained, or the like.
- the method 700 may also comprise providing one or more policies to the machine learning models (step 712).
- the policies may be used as part of implementing a machine learning approach for the machine learning models.
- the policies may define behaviors of a system that are required.
- the policies may define behaviors of a system that are prohibited.
- the policies may include conditional policies that should be complied with, depending upon circumstances and/or conditions.
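- The following sketch shows one hypothetical way required, prohibited, and conditional policies might be represented and checked; the Policy structure and example behaviors are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Policy:
    """A behavior that is required or prohibited; the condition makes the
    policy conditional (it only applies when the condition holds)."""
    kind: str          # "required" or "prohibited"
    behavior: str
    condition: Callable[[dict], bool] = field(default=lambda ctx: True)

def violates(policy: Policy, behaviors: set[str], context: dict) -> bool:
    if not policy.condition(context):
        return False                        # conditional policy not triggered
    if policy.kind == "required":
        return policy.behavior not in behaviors
    return policy.behavior in behaviors     # "prohibited"

# Illustrative: a navigation check-in is required only for spinal procedures.
p = Policy(kind="required", behavior="navigation_checkin",
           condition=lambda ctx: ctx.get("procedure") == "spinal_fusion")
print(violates(p, {"imaging"}, {"procedure": "spinal_fusion"}))  # True
```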
- the method 700 may also comprise updating the surgical procedure data and/or parameters of the machine learning models during training and/or retraining of the machine learning models (step 716).
- the surgical procedure data may be updated to indicate any such changes.
- in some embodiments, machine learning models may be replaced by other machine learning models, whereas in other embodiments a machine learning model may be updated by having its operating parameters (e.g., coefficients) modified.
- the present disclosure encompasses embodiments of the method 700 that comprise more or fewer steps than those described above, and/or one or more steps that are different than the steps described above.
- Fig. 8 depicts a method 800 that may be used, for example, to train or retrain one or more machine learning models utilizing a value-function-based reinforcement learning approach.
- the method 800 (and/or one or more steps thereof) may be carried out or otherwise performed, for example, by at least one processor.
- the at least one processor may be the same as or similar to other processor(s) depicted and described herein.
- a processor other than any processor described herein may also be used to execute the method 800.
- the at least one processor may perform the method 800 by executing elements stored in a memory such as the memory 306.
- the elements stored in memory and executed by the processor may cause the processor to execute one or more steps of the method 800.
- One or more portions of the method 800 may be performed by the processor executing any of the contents of memory, such as a surgical plan determination 320, a surgical plan selection 322, surgical robot settings 324, and/or navigation/imaging settings 326.
- the method 800 comprises receiving surgical procedure data (step 804).
- the method 800 may further comprise providing the surgical procedure data to one or more machine learning models (step 808).
- Steps 804 and 808 may be similar to steps 704 and 708 of method 700, although the steps do not have to be the same.
- the method 800 may also comprise defining one or more functions for reinforcement learning (step 812).
- the functions defined for reinforcement learning may include maximization functions, minimization functions, averaging functions, objective functions, a plurality of maximization and minimization functions, or combinations thereof.
- the one or more functions may be used to train and/or retrain the machine learning models as the machine learning models process the surgical procedure data provided thereto.
- new outputs may be generated by the machine learning models.
- the output(s) of the trained or retrained machine learning models may be used to update a surgical plan, update surgical procedure data, and/or provide suggestions for changes to an ecosystem of surgical products (step 816).
- the method 800 may also comprise replacing one or more machine learning models with an updated machine learning model if one or more performance metrics associated with the machine learning model being trained improve an output of a reinforcement learning function (step 820).
- if the machine learning model being trained outperforms a machine learning model currently in use (e.g., a model in production) with respect to the reinforcement learning function, then the machine learning model being trained may replace the model in production. If a replacement of models occurs, the replacement machine learning model's performance may be analyzed and verified.
- the present disclosure encompasses embodiments of the method 800 that comprise more or fewer steps than those described above, and/or one or more steps that are different than the steps described above.
- Fig. 9 depicts a method 900 that may be used, for example, to train or retrain one or more machine learning models utilizing a model-based reinforcement learning approach.
- the method 900 (and/or one or more steps thereof) may be carried out or otherwise performed, for example, by at least one processor.
- the at least one processor may be the same as or similar to other processor(s) depicted and described herein.
- a processor other than any processor described herein may also be used to execute the method 900.
- the at least one processor may perform the method 900 by executing elements stored in a memory such as the memory 306.
- the elements stored in memory and executed by the processor may cause the processor to execute one or more steps of the method 900.
- One or more portions of the method 900 may be performed by the processor executing any of the contents of memory, such as a surgical plan determination 320, a surgical plan selection 322, surgical robot settings 324, and/or navigation/imaging settings 326.
- the method 900 comprises receiving surgical procedure data (step 904).
- Step 904 may be similar to step 704 of method 700, although the steps do not have to be the same.
- the method 900 may also comprise developing one or more virtual models for an ecosystem of surgical products (step 908).
- the one or more virtual models may be similar to a machine learning model in use, but may have different coefficients, may have been trained with different training data, may not have received all surgical procedure data that has been processed by the machine learning model in use, and may be implemented in a virtual environment (e.g., without an ability to make an actual impact on an ecosystem of surgical products).
- the method 900 may further comprise providing the surgical procedure data to the virtual model (step 912). This step may also include providing the surgical procedure data to the machine learning model(s) that are currently in use. The surgical procedure data may be curated and/or annotated prior to being provided to the virtual model(s).
- the method 900 may further comprise receiving outputs from the virtual models and the machine learning model(s) currently in use. Performance metrics associated with the outputs from each of the virtual models and the machine learning model(s) currently in use may be compared with one another. The method 900 may then identify a best-performing machine learning model among the virtual model(s) (step 916). The method 900 may also compare a performance of the best-performing machine learning model with a performance of the machine learning model currently in use (step 920). If the performance metrics associated with the best-performing virtual model are better than the performance metrics associated with the machine learning model currently in use, the method 900 may continue by replacing the machine learning model currently in use with a virtual model (step 924).
- the method 900 may include analyzing and verifying that the new machine learning model continues to provide better performance characteristics than the machine learning model that it replaced. If this improvement is verified, then the change may be maintained. If the improvement is not verified, then the previous machine learning model may be brought back into production (e.g., to replace the virtual model that previously replaced it).
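- As an illustrative sketch (not the disclosed verification procedure), the following Python example keeps a replacement model only if its evaluated performance continues to exceed that of the model it replaced; the scores and noise model are invented for illustration.

```python
import random

def verify_or_rollback(previous_model: str, replacement_model: str,
                       evaluate, trials: int = 20) -> str:
    """Keep the replacement only if it continues to outperform the model it
    replaced; otherwise bring the previous model back into production."""
    new_score = sum(evaluate(replacement_model) for _ in range(trials)) / trials
    old_score = sum(evaluate(previous_model) for _ in range(trials)) / trials
    return replacement_model if new_score > old_score else previous_model

# Illustrative evaluation: a fixed per-model score plus measurement noise.
scores = {"previous": 0.74, "replacement": 0.81}
chosen = verify_or_rollback("previous", "replacement",
                            evaluate=lambda m: scores[m] + random.gauss(0, 0.01))
print(chosen)  # almost always "replacement": the improvement is verified
```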
- the present disclosure encompasses embodiments of the method 900 that comprise more or fewer steps than those described above, and/or one or more steps that are different than the steps described above.
- the present disclosure encompasses methods with fewer than all of the steps identified in Figs. 5, 6, 7, 8, and 9 (and the corresponding description of the methods 500, 600, 700, 800, and 900), as well as methods that include additional steps beyond those identified in Figs. 5, 6, 7, 8, and 9 (and the corresponding description of the methods 500, 600, 700, 800, and 900).
- the present disclosure also encompasses methods that comprise one or more steps from one method described herein, and one or more steps from another method described herein. Any correlation described herein may be or comprise a registration or any other correlation.
- Example 1 A system, comprising: a processor; and a memory storing data for processing by the processor, the data, when processed, causes the processor to: store a data set comprising surgical procedure data in a data repository; retrain a machine learning model based on curating the data set, wherein retraining the machine learning model comprises applying reinforcement learning; verify a performance metric associated with the retraining of the machine learning model; and provide or modify one or more features associated with an ecosystem of surgical products in response to retraining the machine learning model and verifying the performance metric.
- Example 2 The system of example 1, wherein the performance metric comprises at least one of a measured effectiveness and a patient outcome.
- Example 3 The system of example 1 or 2, wherein the one or more features associated with the ecosystem of the surgical products include an algorithm implemented by a surgical robot, an algorithm implemented by a surgical navigation system, an imaging technique implemented by an imaging device, a surgical plan implemented by a care provider, a surgical instrument selected for use during a surgical procedure, a device implanted into a patient, and a layout for an operating room.
- Example 4 The system of any preceding example, wherein the one or more features comprises an image segmentation approach.
- Example 5 The system of any preceding example, wherein the one or more features comprises a therapy delivery technique.
- Example 6 The system of any preceding example, wherein the reinforcement learning comprises a policy-based reinforcement learning and wherein at least one policy is provided to the machine learning model during the retraining.
- Example 7 The system of any preceding example, wherein the reinforcement learning comprises a value-function-based reinforcement learning and wherein at least one function is defined for the machine learning model during the retraining.
- Example 8 The system of example 7, wherein the at least one function comprises an optimization function that seeks to optimize a patient outcome after a surgical procedure.
- Example 9 The system of any preceding example, wherein the memory stores further data for processing by the processor that, when processed, causes the processor to: replace the machine learning model with an updated machine learning model in response to the updated machine learning model exhibiting an improvement in the performance metric as compared to the machine learning model.
- Example 10 The system of any preceding example, wherein the memory stores further data for processing by the processor that, when processed, causes the processor to: develop a virtual model for the ecosystem of surgical products; provide the data set to the virtual model; compare a performance of the virtual model with the performance metric; determine the performance of the virtual model is better than the machine learning model based on the comparison; and replace the machine learning model with the virtual model in response to determining that the performance of the virtual model is better than the machine learning model.
- Example 11 The system of any preceding example, wherein the memory stores further data for processing by the processor that, when processed, causes the processor to: receive feedback after the surgical procedure for the patient is performed; and further update the data set with the feedback.
- Example 12 The system of example 11, wherein the feedback is used, in part, to train the machine learning model.
- Example 13 The system of any preceding example, wherein the data set comprises data from a surgical robot and data from a surgical navigation system.
- Example 14 The system of example 13, wherein the data from the surgical robot and the data from the surgical navigation system are curated to indicate an association with a common surgical procedure.
- Example 15 The system of any preceding example, wherein the ecosystem of surgical products comprises at least one of a surgical robot, a surgical navigation system, an imaging device, a surgical instrument, and an implant.
- Example 16 A method, comprising: storing a data set comprising surgical procedure data in a data repository; retraining a machine learning model based on curating the data set, wherein retraining the machine learning model comprises applying reinforcement learning; verifying a performance metric associated with the retraining of the machine learning model; and providing or modifying one or more features associated with an ecosystem of surgical products in response to retraining the machine learning model and verifying the performance metric.
- Example 17 The method of example 16, wherein the reinforcement learning comprises at least one of a policy-based reinforcement learning, a value- function-based reinforcement learning, an associative reinforcement learning, a deep reinforcement learning, an adversarial deep reinforcement learning, a fuzzy reinforcement learning, an inverse reinforcement learning, and a safe reinforcement learning.
- Example 18 The method of example 16 or 17, wherein the one or more features associated with the ecosystem of the surgical products include an algorithm implemented by a surgical robot, an algorithm implemented by a surgical navigation system, an imaging technique implemented by an imaging device, a surgical plan implemented by a care provider, a surgical instrument selected for use during a surgical procedure, a device implanted into a patient, and a layout for an operating room.
- Example 19 A method of implementing a machine learning pipeline, comprising: receiving surgical procedure data; annotating the surgical procedure data; providing the annotated surgical procedure data to a machine learning model; training the machine learning model with reinforcement learning; verifying a performance metric associated with the training of the machine learning model; and updating or replacing the machine learning model with an updated machine learning model, wherein the updated machine learning model provides at least one output associated with improving an ecosystem of surgical products.
- Example 20 The method of example 19, wherein surgical procedure data comprises data from a surgical robot and a surgical navigation system and wherein the at least one output comprises a suggested surgical plan that utilizes at least one of the surgical robot and the surgical navigation system.
Abstract
A system and techniques are provided for performing reinforcement learning. In some embodiments, the method may include storing a data set comprising surgical procedure data in a data repository, retraining a machine learning model based on curating the data set, where retraining the machine learning model includes applying reinforcement learning, verifying a performance metric associated with the retraining of the machine learning model, and providing or modifying one or more features associated with an ecosystem of surgical products in response to retraining the machine learning model and verifying the performance metric.
Description
REINFORCEMENT LEARNING FOR SURGICAL PROCEDURES AND SURGICAL PLANS
BACKGROUND
[0001] This application claims the benefit of U.S. Provisional Patent Application Serial No. 63/605,975, filed 4 December 2023, the entire content of which is incorporated herein by reference.
[0002] The present disclosure is generally directed to surgical procedures, and relates more particularly to reinforcement learning to improve surgical procedures, including surgical procedures assisted by surgical robots and/or surgical navigation systems.
[0003] Surgical robots may assist a surgeon or other medical provider in carrying out a surgical procedure and/or may complete one or more surgical procedures autonomously. The surgical procedure(s) may be performed using one or more surgical instruments or tools. In some cases, a surgeon or other medical provider may manually select the one or more surgical instruments or tools prior to and for performing the surgical procedure(s). Additionally, a surgeon or other medical provider may determine one or more steps to take for a surgical procedure. Such steps may be determined during the surgical procedure or during planning of the surgical procedure. Surgical plans can be updated by the surgeon or other medical provider, if needed.
BRIEF SUMMARY
[0004] The use of machine learning is increasing in many fields. In surgical procedures, there is a significant push to leverage multitudes of data to achieve better outcomes for patients, reduce costs, and streamline overhead burden within the healthcare system.
[0005] Embodiments of the present disclosure contemplate the utilization of pre- and intraoperative information from an ecosystem of surgical products to update, or reinforce machine learning based models. In reinforcement learning, models can be updated to reflect changes in how surgical products (e.g., tools, surgical robots, surgical navigation systems, etc.) are used, patient demographics, disease states, or even surgeon specific preferences without having to start from scratch with a large volume of curated and annotated data.
[0006] Embodiments of the present disclosure also contemplate a reinforcement learning approach which may include: (1) a process to store surgical procedure data; (2) the ability to annotate/curate stored surgical procedure data; (3) a process to update algorithms or features based on newly collected surgical procedure data (e.g., by applying reinforcement learning); and (4) metrics to verify the effectiveness of updated algorithms.
[0007] Example aspects of the present disclosure include:
[0008] A system that includes: a processor; and a memory storing data for processing by the processor, the data, when processed, causes the processor to: store a data set comprising surgical procedure data in a data repository; retrain a machine learning model based on curating the data set, where retraining the machine learning model includes applying reinforcement learning; verify a performance metric associated with the retraining of the machine learning model; and provide or modify one or more features associated with an ecosystem of surgical products in response to retraining the machine learning model and verifying the performance metric.
[0009] In some aspects, the performance metric includes at least one of a measured effectiveness and a patient outcome.
[0010] In some aspects, the one or more features associated with the ecosystem of the surgical products include an algorithm implemented by a surgical robot, an algorithm implemented by a surgical navigation system, an imaging technique implemented by an imaging device, a surgical plan implemented by a care provider, a surgical instrument selected for use during a surgical procedure, a device implanted into a patient, and a layout for an operating room.
[0011] In some aspects, the one or more features include an image segmentation approach.
[0012] In some aspects, the one or more features include a therapy delivery technique.
[0013] In some aspects, the reinforcement learning includes a policy-based reinforcement learning and at least one policy is provided to the machine learning model during the retraining.
[0014] In some aspects, the reinforcement learning includes a value-function-based reinforcement learning and at least one function is defined for the machine learning model during the retraining.
[0015] In some aspects, the at least one function includes an optimization function that seeks to optimize a patient outcome after a surgical procedure.
[0016] In some aspects, the memory stores further data for processing by the processor that, when processed, causes the processor to: replace the machine learning model with an updated machine learning model in response to the updated machine learning model exhibiting an improvement in the performance metric as compared to the machine learning model.
[0017] In some aspects, the memory stores further data for processing by the processor that, when processed, causes the processor to: develop a virtual model for the ecosystem of surgical products; provide the data set to the virtual model; compare a performance of the virtual model with the performance metric; determine the performance of the virtual model is better than the machine learning model based on the comparison; and replace the machine learning model with the virtual model in response to determining that the performance of the virtual model is better than the machine learning model.
[0018] In some aspects, the memory stores further data for processing by the processor that, when processed, causes the processor to: receive feedback after the surgical procedure for the patient is performed; and further update the data set with the feedback.
[0019] In some aspects, the feedback is used, in part, to train the machine learning model.
[0020] In some aspects, the data set includes data from a surgical robot and data from a surgical navigation system.
[0021] In some aspects, the data from the surgical robot and the data from the surgical navigation system are curated to indicate an association with a common surgical procedure.
[0022] In some aspects, the ecosystem of surgical products comprises at least one of a surgical robot, a surgical navigation system, an imaging device, a surgical instrument, and an implant.
[0023] A method, including: storing a data set comprising surgical procedure data in a data repository; retraining a machine learning model based on curating the data set, where retraining the machine learning model comprises applying reinforcement learning; verifying a performance metric associated with the retraining of the machine learning model; and providing or modifying one or more features associated with an ecosystem of surgical products in response to retraining the machine learning model and verifying the performance metric.
[0024] In some aspects, the reinforcement learning includes at least one of a policy-based reinforcement learning, a value-function-based reinforcement learning, an associative reinforcement learning, a deep reinforcement learning, an adversarial deep reinforcement learning, a fuzzy reinforcement learning, an inverse reinforcement learning, and a safe reinforcement learning.
[0025] In some aspects, the one or more features associated with the ecosystem of the surgical products include an algorithm implemented by a surgical robot, an algorithm implemented by a surgical navigation system, an imaging technique implemented by an imaging device, a surgical plan implemented by a care provider, a surgical instrument selected for use during a surgical procedure, a device implanted into a patient, and a layout for an operating room.
[0026] A method of implementing a machine learning pipeline, the method including: receiving surgical procedure data; annotating the surgical procedure data; providing the annotated surgical procedure data to a machine learning model; training the machine learning model with reinforcement learning; verifying a performance metric associated with the training of the machine learning model; and updating or replacing the machine learning model with an updated machine learning model, where the updated machine learning model provides at least one output associated with improving an ecosystem of surgical products.
[0027] In some aspects, surgical procedure data includes data from a surgical robot and a surgical navigation system and wherein the at least one output includes a suggested surgical plan that utilizes at least one of the surgical robot and the surgical navigation system.
[0028] Any aspect in combination with any one or more other aspects.
[0029] Any one or more of the features disclosed herein.
[0030] Any one or more of the features as substantially disclosed herein.
[0031] Any one or more of the features as substantially disclosed herein in combination with any one or more other features as substantially disclosed herein.
[0032] Any one of the aspects/features/embodiments in combination with any one or more other aspects/features/embodiments.
[0033] Use of any one or more of the aspects or features as disclosed herein.
[0034] It is to be appreciated that any feature described herein can be claimed in combination with any other feature(s) as described herein, regardless of whether the features come from the same described embodiment.
[0035] The details of one or more aspects of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the techniques described in this disclosure will be apparent from the description and drawings, and from the claims.
[0036] The phrases “at least one”, “one or more”, and “and/or” are open-ended expressions that are both conjunctive and disjunctive in operation. For example, each of the expressions “at least one of A, B and C”, “at least one of A, B, or C”, “one or more of A, B, and C”, “one or more of A, B, or C” and “A, B, and/or C” means A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together. When each one of A, B, and C in the above expressions refers to an element, such as X, Y, and Z, or class of elements, such as X1-Xn, Y1-Ym, and Z1-Zo, the phrase is intended to refer to a single element selected from X, Y, and Z, a combination of elements selected from the same class (e.g., X1 and X2) as well as a combination of elements selected from two or more classes (e.g., Y1 and Zo).
[0037] The term “a” or “an” entity refers to one or more of that entity. As such, the terms “a” (or “an”), “one or more” and “at least one” can be used interchangeably herein. It is also to be noted that the terms “comprising”, “including”, and “having” can be used interchangeably.
[0038] The preceding is a simplified summary of the disclosure to provide an understanding of some aspects of the disclosure. This summary is neither an extensive nor exhaustive overview of the disclosure and its various aspects, embodiments, and configurations. It is intended neither to identify key or critical elements of the disclosure nor to delineate the scope of the disclosure but to present selected concepts of the disclosure in a simplified form as an introduction to the more detailed description presented below. As will be appreciated, other aspects, embodiments, and configurations of the disclosure are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
[0039] Numerous additional features and advantages of the present disclosure will become apparent to those skilled in the art upon consideration of the embodiment descriptions provided hereinbelow.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
[0040] The accompanying drawings are incorporated into and form a part of the specification to illustrate several examples of the present disclosure. These drawings, together with the description, explain the principles of the disclosure. The drawings simply illustrate preferred and alternative examples of how the disclosure can be made and used and are not to be construed as limiting the disclosure to only the illustrated and described examples. Further features and advantages will become apparent from the following, more detailed, description of the various aspects, embodiments, and configurations of the disclosure, as illustrated by the drawings referenced below.
[0041] Fig. 1 is a block diagram of a system according to at least one embodiment of the present disclosure;
[0042] Fig. 2 is a diagram of a workflow according to at least one embodiment of the present disclosure;
[0043] Fig. 3 is a diagram of a system according to at least one embodiment of the present disclosure;
[0044] Fig. 4 is a diagram of a data pipeline according to at least one embodiment of the present disclosure;
[0045] Fig. 5 is a flowchart of a first method according to at least one embodiment of the present disclosure;
[0046] Fig. 6 is a flowchart of a second method according to at least one embodiment of the present disclosure;
[0047] Fig. 7 is a flowchart of a third method according to at least one embodiment of the present disclosure;
[0048] Fig. 8 is a flowchart of a fourth method according to at least one embodiment of the present disclosure; and
[0049] Fig. 9 is a flowchart of a fifth method according to at least one embodiment of the present disclosure.
DETAILED DESCRIPTION
[0050] It should be understood that various aspects disclosed herein may be combined in different combinations than the combinations specifically presented in the description and accompanying drawings. It should also be understood that, depending on the example or embodiment, certain acts or events of any of the processes or methods described herein may be performed in a different sequence, and/or may be added, merged, or left out altogether (e.g., all described acts or events may not be necessary to carry out the disclosed techniques according to different embodiments of the present disclosure). In addition, while certain aspects of this disclosure are described as being performed by a single module or unit for purposes of clarity, it should be understood that the techniques of this disclosure may be performed by a combination of units or modules associated with, for example, a computing device and/or a medical device.
[0051] In one or more examples, the described methods, processes, and techniques may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. Alternatively or additionally, functions may be implemented using machine learning models, neural networks, artificial neural networks, or combinations thereof (alone or in combination with instructions). Computer-readable media may include non-transitory computer-readable media, which corresponds to a tangible medium such as data storage media (e.g., RAM, ROM, EEPROM, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer).
[0052] Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors (e.g., Intel Core i3, i5, i7, or i9 processors; Intel Celeron processors; Intel Xeon processors; Intel Pentium processors; AMD Ryzen processors; AMD Athlon processors; AMD Phenom processors; Apple A10 or A10X Fusion processors; Apple A11, A12, A12X, A12Z, or A13 Bionic processors; or any other general purpose microprocessors), graphics processing units (e.g., Nvidia GeForce RTX 2000-series processors, Nvidia GeForce RTX 3000-series processors, AMD Radeon RX 5000-series processors, AMD Radeon RX 6000-series processors, or any other graphics processing units), application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry.
Accordingly, the term “processor” as used herein may refer to any of the foregoing structure or any other physical structure suitable for implementation of the described techniques. Also, the techniques could be fully implemented in one or more circuits or logic elements.
[0053] Before any embodiments of the disclosure are explained in detail, it is to be understood that the disclosure is not limited in its application to the details of construction and the arrangement of components set forth in the following description or illustrated in the drawings. The disclosure is capable of other embodiments and of being practiced or of being carried out in various ways. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” or “having” and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. Further, the present disclosure may use examples to illustrate one or more aspects thereof. Unless explicitly stated otherwise, the use or listing of one or more examples (which may be denoted by “for example,” “by way of example,” “e.g.,” “such as,” or similar language) is not intended to and does not limit the scope of the present disclosure.
[0054] The terms proximal and distal are used in this disclosure with their conventional medical meanings, proximal being closer to the operator or user of the system, and further from the region of surgical interest in or on the patient, and distal being closer to the region of surgical interest in or on the patient, and further from the operator or user of the system.
[0055] In some surgical procedures (e.g., robotic-assisted surgeries), a minimally invasive procedure may be performed for the treatment of pathological fractures of the vertebral body (e.g., spine and associated elements) due to osteoporosis, cancer, benign lesions, or other ailments. For example, the minimally invasive procedure may include a corpectomy (e.g., a surgical procedure that involves removing all or part of the vertebral body, usually as a way to decompress the spinal cord and nerves), kyphoplasty (e.g., a surgical procedure used to treat a spinal compression fracture based on inserting an inflatable balloon tamp into a fractured vertebra to restore height to the collapsed vertebra), vertebroplasty (e.g., a procedure for stabilizing compression fractures in the spine based on injecting bone cement into vertebrae that have cracked or broken), radiofrequency ablation (e.g., a medical procedure in which part of the electrical conduction system of the heart, tumor, or other dysfunctional tissue is ablated using the heat generated from a medium frequency alternating current to treat a range of conditions, including chronic back and neck pain), or another procedure not explicitly listed herein. Additionally or alternatively, the surgical procedures described herein may more generally include spine surgeries, cranial surgeries, heart surgeries, or another type of surgical procedure.
[0056] The surgical procedures may comprise a number of steps. It is to be understood that the surgical procedures described herein may not include all of the steps described or may include additional steps not listed. More generally, the surgical procedures may include presurgical planning to ensure that all the correct surgical instruments and disposables are ready for the surgical procedure, working with radiology imaging to ensure a correct scan format is used, and troubleshooting any communication issues with integrated third-party systems. Additionally, the surgical procedures may include a before-surgery time to provide system setup and functional verification for the surgical procedure, a during-surgery time for troubleshooting and resolving equipment and instrument issues and for providing real-time guidance and training to operating room staff, and a post-surgery time for checking and stowing any equipment used during the procedure and for reviewing case questions with the operating room staff.
[0057] In any of the examples of the surgical procedures described herein, a surgeon or other medical provider may choose the appropriate and correct medical instruments prior to and for performing the surgical procedures. The surgeon or other medical provider may also choose the appropriate and correct implant device or collection of implant devices, if the surgical procedure includes the placement of an implant. The surgeon or other medical provider may also choose the surgical technique for utilizing the selected instrument to place the implant device or collection of implant devices. For example, the surgeon or other medical provider may determine the appropriate and correct medical instruments based on the disease state for a given patient, which may depend on various factors, such as angle, position, depth, level of deterioration, size of tumor, etc.
[0058] However, a large number of surgical instruments may be available for performing the surgical procedure. For example, more than 200 surgical instruments and/or devices may be available for spine surgeries, and more than 100 surgical instruments and/or devices may be available for cranial surgeries. Hundreds more surgical instruments and devices may be available for other surgeries (e.g., heart surgeries, gastrointestinal surgeries, orthopedic surgeries, organ surgeries, etc.).
[0059] As described herein, a machine learning model (e.g., artificial intelligence (AI)-based learning model or algorithm) is provided for improving one or more features associated with an ecosystem of surgical products. The features may be improved by training and/or retraining a machine learning model with reinforcement learning. During the retraining, a performance metric associated with the retraining of the machine learning model may be verified. Based on the verification of the performance metric, the feature(s) associated with the ecosystem of surgical products may be provided or modified.
[0060] A performance metric or multiple performance metrics may include a measured effectiveness (e.g., impact on a patient) of any algorithms that are updated based on the retraining of the machine learning model. Example features that may be provided or modified in response to retraining the machine learning model may include: (1) any algorithms implemented by the system in association with focused operations (e.g., image segmentation, therapy delivery, navigation, etc.) provided by the ecosystem and (2) operational features (e.g., imaging, therapy delivery techniques, tracking, etc.) related to a surgical product/tool included in the ecosystem.
[0061] Embodiments of the present disclosure provide solutions to one or more of the problems of (1) prolonged surgical procedure durations, (2) increased exposure to anesthesia and/or radiation for a patient, and (3) higher chances of misdiagnoses or improperly performed surgical procedures. More simply, embodiments of the present disclosure aim to improve safety and efficacy for surgical procedures. For example, the techniques described herein may enable better selection of a surgical plan, may assist with utilizing a particular device in an ecosystem of surgical products, and may help improve aspects of the ecosystem of surgical products, which may result in shorter procedure durations, reduced anesthesia dosage and duration for the patient, reduced radiation exposure (e.g., to confirm implant positioning), and faster recovery. Additionally, the techniques may be driven by an intelligent learning model that is normalized and optimized to meet clinical demand, thereby reducing the chances of misdiagnosis; given the complexity of the surgical procedures, the patient may benefit from both time and cost perspectives.
[0062] Fig. 1 is a block diagram of a system 100 according to at least one embodiment of the present disclosure. The system 100 may include one or more inputs 102 that are used by a processor 104 to generate one or more outputs 106. The processor 104 may be part of a computing device or a different device. Additionally, the processor 104 may be any processor described herein or any similar processor. The processor 104 may be configured to execute instructions or data stored in a memory, which instructions or data may cause the processor 104 to carry out one or more computing steps utilizing or based on the inputs 102 to generate the outputs 106.
[0063] As described herein, the inputs 102 may include a set of surgical procedure data 108 for a surgical procedure for a patient. For example, the set of surgical procedure data 108 may include patient demographic data, one or more radiological images, pathology data, or a combination thereof. Alternatively or additionally, the surgical procedure data may include historical and/or real-time data from an ecosystem of surgical products. Non-limiting examples of data sources that may provide the surgical procedure data include a surgical navigation system 122, a surgical robotic system 124, operating room data 126, clinician data 128, patient data 130, imaging device(s) 132,
and provider data 134. Data from one or more of the data sources may be provided to a data curation unit 116, where the data is formatted with a formatting engine 118 and annotated with an annotation engine 120. Operation of the formatting engine 118 may be autonomous (e.g., performed automatically) or may be supported with manual inputs (e.g., may include a manual process). Specifically, the surgical procedure data 108 may correspond to curated and/or annotated data from multiple different data sources in an ecosystem of surgical products. The multiple different data sources may also include data sources not related to the ecosystem of surgical products, but related to a patient, clinician (e.g., surgeon), care provider, operating room, etc.
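By way of a non-limiting illustration, the following Python sketch shows one way the two-stage curation flow of Fig. 1 (formatting engine 118 followed by annotation engine 120) might be realized in software. The record structure, field names, and PII list below are assumptions introduced for illustration only and are not part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import Any

# Hypothetical PII fields stripped by the formatting stage.
PII_FIELDS = {"name", "address", "date_of_birth", "medical_record_number"}

@dataclass
class CuratedRecord:
    source: str                 # e.g., "navigation", "robot", "imaging"
    procedure_id: str           # metadata binding the record to one procedure
    payload: dict[str, Any] = field(default_factory=dict)

def format_record(raw: dict[str, Any]) -> dict[str, Any]:
    """Formatting engine: drop PII and conform keys to a common schema."""
    return {k.lower(): v for k, v in raw.items() if k.lower() not in PII_FIELDS}

def annotate_record(source: str, procedure_id: str,
                    formatted: dict[str, Any]) -> CuratedRecord:
    """Annotation engine: add metadata associating the record with a
    common surgical procedure so data from different devices can be joined."""
    return CuratedRecord(source=source, procedure_id=procedure_id,
                         payload=formatted)

def curate(source: str, procedure_id: str, raw: dict[str, Any]) -> CuratedRecord:
    """Curation: formatting followed by annotation, as in Fig. 1."""
    return annotate_record(source, procedure_id, format_record(raw))
```

A call such as `curate("navigation", "case-0042", raw_record)` would then yield a record bound, via its metadata, to a common surgical procedure.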
[0064] The types of surgical procedure data received from the different data sources may vary and need not be limited to any particular data type or data format. Non-limiting examples of surgical procedure data 108 that may be received from the surgical navigation system 122 include historical data from surgeries in which the surgical navigation system 122 was used, image segmentation data, anatomical object data, surgical plans used during a surgical procedure, surgical devices used during a surgical procedure, information describing changes to surgical plans, information describing movement of objects during a surgical procedure, information describing object locations in a coordinate space, etc. The surgical navigation system 122 may also provide information describing a location and movement of a surgeon, nurse, surgical robot, or other object in an operating room during a surgical procedure. The surgical navigation system 122 may also provide information describing whether or not a surgical procedure followed a surgical plan and, if not, what deviations from the surgical plan were made during the surgical procedure. Surgical procedure data may also include data from non-imaging sources. For instance, surgical procedure data may include data received from an anesthesia machine, a heart monitor, a blood pressure monitor, combinations thereof, etc.
[0065] Non-limiting examples of surgical procedure data 108 that may be received from the surgical robotic system 124 include historical data from surgeries in which the surgical robotic system 124 was used, information describing surgical instruments attached to the surgical robot during a surgical procedure, information describing maneuvers of a robotic arm during a surgical procedure, information describing end effectors used during a surgical procedure, information describing whether the surgical robotic system 124 was utilized in an autonomous mode or semi-autonomous mode, information describing operating parameters of the surgical robotic system 124, etc.
[0066] Non-limiting examples of surgical procedure data 108 that may be received from the operating room data 126 may include a time of a surgical procedure, a location of the operating
room, conditions of the operating room during a surgical procedure (e.g., temperature, humidity, lighting conditions, number of personnel in the operating room, types of devices in the operating room, etc.), duration of the surgical procedure, insurance codes associated with a surgical procedure, etc.
[0067] Non-limiting examples of surgical procedure data 108 that may be received from clinician data 128 include a name of a clinician or surgeon, a practice with which a clinician or surgeon is associated, a patient history for the clinician or surgeon, patient outcome statistics for the clinician or surgeon, an educational history for the clinician or surgeon, surgical techniques used by the clinician or surgeon, etc. The clinician data 128 may be anonymized such that no personally identifiable information (PII) or any other personal data associated with a clinician, surgeon, or patient is exposed to the data curation unit 116. The anonymization of data received from the clinician data 128 may be performed by the formatting engine 118.
[0068] The patient data 130, much like the clinician data 128, may be formatted by the formatting engine 118 to remove any PII therefrom. In some embodiments, the patient data 130 may be anonymized to remove all PII therefrom. Non-limiting examples of surgical procedure data 108 that may be received from the patient data 130 include patient zip code, patient demographics, patient height, patient weight, patient conditions, patient feedback (e.g., inputs received from a patient before or after a surgical procedure to help define a patient outcome or patient comfort), etc.
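As a hedged illustration of the anonymization performed by the formatting engine 118 on the clinician data 128 and patient data 130, one possible approach replaces direct identifiers with salted, non-reversible tokens, so that records from the same clinician or patient remain correlatable without exposing PII. The salt, field names, and token length below are assumptions, not requirements of the disclosure.

```python
import hashlib

SALT = b"site-specific-secret"   # assumed per-deployment secret

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable, non-reversible token so
    records from the same person can still be correlated."""
    return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()[:16]

def anonymize(record: dict) -> dict:
    """Strip assumed identifier fields, keeping a pseudonymous token."""
    out = dict(record)
    for pii_field in ("surgeon_name", "patient_name"):   # hypothetical fields
        if pii_field in out:
            out[pii_field + "_token"] = pseudonymize(out.pop(pii_field))
    return out
```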
[0069] Non-limiting examples of surgical procedure data 108 that may be received from the imaging device(s) 132 include raw signal data (e.g., signals received directly from the imaging device(s) 132), raw image data, formatted image data, information describing settings of imaging device(s) 132, segmented images, filtered images, image streams, video images, pre-operative images, intra-operative images, post-operative images, etc. It should be appreciated that some surgical procedure data 108 received from an imaging device 132 may be correlated to the surgical navigation system 122 and/or surgical robotic system 124 that was being used when an image was captured with an imaging device 132. In other words, surgical procedure data 108 received from the imaging device(s) 132 may be formatted and/or annotated to include information describing what type of surgical navigation system 122 and surgical robotic system 124 were being used when an image was captured.
[0070] Non-limiting examples of surgical procedure data 108 that may be received from the provider data 134 include any type of data related to insurance providers or carriers with which a surgical procedure was documented. In other words, the provider data 134 may include insurance
codes, patient co-pays, prescriptions used before/during/after the surgical procedure, treatments given during a surgical procedure, etc.
[0071] The data curation unit 116 may utilize the formatting engine 118 to remove any unnecessary and/or problematic information from the data sources prior to committing the data as surgical procedure data 108 to the ML input(s) 102. Alternatively or additionally, the data curation unit 116 may annotate the data with the annotation engine 120 to help associate the various types of data from different data sources to a common surgical procedure. Data curation and annotation may help normalize the data from different sources before being processed by the processor 104.
[0072] The processor 104 may use the set of surgical procedure data 108 to predict an exact disease state for the patient and for the surgical procedure based on a machine learning model 110 (e.g., machine learning algorithm, AI-based algorithm or model, etc.). For example, the machine learning model 110 may be created based on available historical data of previously performed surgical procedures, which includes procedure and instrument flow and abnormalities (e.g., which surgical instruments were used, in which order the surgical instruments were used, any abnormalities that were present, etc.), radiology images and annotations (e.g., MRI scans, CT scans or images, X-rays, etc.), demographic information of the patients that underwent the previously performed surgical procedures, 3D anatomical models (e.g., indicating angles, positions, dimensions, etc. for the previously performed surgical procedures), instruments availability information (e.g., hospital available inventory data indicating which surgical instruments are or were available for use), or a combination thereof for each of the previously performed surgical procedures. In some embodiments, the machine learning model 110 may be continuously improved based on continuous feedback (e.g., from surgeons and/or patients) after surgical procedures are completed.
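The following sketch illustrates, under assumed field names, how one historical case of the kind recited above (instrument flow, radiology annotations, demographics, 3D anatomical models, and instrument availability) might be split into model inputs and supervision targets when building the machine learning model 110; the disclosure does not prescribe this particular schema.

```python
# Hypothetical assembly of one training example from the historical data
# categories recited above. All field names are placeholders.
def to_training_example(case: dict) -> tuple[dict, dict]:
    """Split one historical case into model inputs and supervision targets."""
    features = {
        "instrument_flow": case.get("instrument_flow", []),   # order of use
        "abnormalities": case.get("abnormalities", []),
        "imaging": case.get("radiology_annotations", {}),     # MRI/CT/X-ray
        "demographics": case.get("demographics", {}),
        "anatomy_3d": case.get("anatomical_model", {}),       # angles, depths
        "inventory": case.get("available_instruments", []),
    }
    targets = {
        "plan_used": case.get("surgical_plan"),
        "outcome_score": case.get("outcome_score"),           # feedback loop
    }
    return features, targets
```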
[0073] In some embodiments, the processor 104 may use the machine learning model 110 to compare the set of surgical procedure data 108 with the available historical data. Based on the comparison using the machine learning model 110, the processor 104 may generate a list of features associated with an ecosystem of surgical products to display to the surgeon (or other medical provider). The list of features may include an algorithm implemented by a surgical robot, an algorithm implemented by a surgical navigation system, an imaging technique implemented by an imaging device, a surgical plan implemented by a care provider, a surgical instrument selected for use during a surgical procedure, a device implanted into a patient, and/or a layout for an operating room. The list of features may alternatively or additionally include an image segmentation approach to use with the imaging device(s) 132, a therapy delivery technique, a type of therapy, and the like.
[0074] The closest matching surgical procedures may be compared to the surgical procedure for which the set of surgical procedure data 108 is provided, and to each other, using a similarity index that comprises a positional coordinate correlation (e.g., surgery anatomical position, implant location, etc.), a deformation coefficient, demographic similarity (e.g., body mass index (BMI) and/or other demographic information for the associated patients), 3D model similarity, inventory stock matching (e.g., whether the same surgical instruments that were available and used for the closest matching surgical procedures are also available for the surgical procedure), or a combination thereof. In some examples, similarity index values for the different components of the similarity indexes of each of the closest matching surgical procedures may be displayed to the surgeon to indicate how similar each of the individual components is between the surgical procedure and the closest matching surgical procedures. Additionally or alternatively, an overall similarity index may be displayed indicating how similar the surgical procedure is to each of the closest matching surgical procedures.
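One minimal way to realize the similarity index described above is as a weighted combination of per-component scores, as in the sketch below. The component weights are illustrative assumptions, and each component score is assumed to be pre-normalized to the range [0, 1].

```python
# Illustrative weights for the similarity components named above;
# the disclosure does not prescribe these values.
COMPONENT_WEIGHTS = {
    "positional_correlation": 0.30,   # anatomical position / implant location
    "deformation_coefficient": 0.15,
    "demographic_similarity": 0.20,   # e.g., BMI and other demographics
    "model_3d_similarity": 0.20,
    "inventory_match": 0.15,          # same instruments available in stock
}

def overall_similarity(components: dict[str, float]) -> float:
    """Combine per-component similarity scores into one index in [0, 1]."""
    return sum(COMPONENT_WEIGHTS[name] * components.get(name, 0.0)
               for name in COMPONENT_WEIGHTS)

# Example: a prior case matching well on position and 3D model shape.
case_score = overall_similarity({
    "positional_correlation": 0.92,
    "deformation_coefficient": 0.70,
    "demographic_similarity": 0.85,
    "model_3d_similarity": 0.88,
    "inventory_match": 1.00,
})  # about 0.88, displayable to the surgeon as 88%
```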
[0075] Subsequently, based on the similarity index(es), the surgeon can choose the closest possible previously executed surgery, which will help to map and to identify the surgical flow. As part of the identified surgical flow, the processor 104 may provide a surgical procedure suggestion 112 as part of the output(s) 106 (e.g., suggestion of which surgical instruments to use based on which surgical instruments were used for the chosen closest possible previously executed surgery). For example, the processor 104 may display (e.g., via a user interface) a suggestion of devices to use from a surgical ecosystem.
[0076] Based on these inputs (e.g., surgical flow, surgical procedure suggestion 112, position of the patient, etc.), the surgeon can edit or accept the surgical procedure suggestion 112 corresponding to the closest possible previously executed surgery. In some embodiments, after a surgical plan is confirmed and/or edited, the processor 104 may distribute information to various parties describing the surgical procedure to be used for the patient and which products from the ecosystem of surgical products should be used. Additionally or alternatively, the processor 104 may simply provide an output that indicates which surgical instruments to place or load on a surgical robotic system 124. The surgeon can then complete the surgery and provide the feedback back to the machine learning model 110 as part of a feedback loop, which will help to further mature the machine learning model 110. As will be described in further detail herein, the machine learning model 110 may be trained or retrained using reinforcement learning instead of supervised learning.
[0077] Accordingly, as described herein, the machine learning model 110 may be developed based on the surgical procedure data 108 (e.g., patient input data, such as radiology and physiology images
and data) and previous surgical data of similar procedures (e.g., stored in a database), where the previous surgical data may include 3D models, angle and position, depth of implant and dimension, annotations, treatment plans, radiology diagnostic imaging, an implant used with respect to the 3D models, or a combination thereof for each of the similar procedures. The processor 104 may then use the machine learning model 110 to suggest one or more surgery plans to the surgeon, including the disease state for the patient (e.g., angle, depth, and position of the targeted area) based on the similar surgical data. In some examples, the suggested surgery plans may be suggested or displayed to the surgeon in a 3D model view.
[0078] Depending on the similar procedures and which surgical instruments are available at a hospital in which the surgical procedure is to be performed (e.g., a hospital inventory of surgical instruments), the processor 104 may present one or more similarity index(es) to the surgeon indicating a closest match of the available previous surgeries to the surgical procedure to be performed. For example, the similarity index(es) may be percentage(s) of how close different aspects of each similar procedure are to the surgical procedure to be performed, such as a similarity of percentage deterioration between the surgical procedures, a location similarity, an implant depth percentage, a disease correlation, or other comparable aspects between the similar procedures and the surgical procedure to be performed.
[0079] Based on the similarity index(es), the surgeon may select one of the similar procedures to follow. In some embodiments, after making the selection, the surgeon may be able to make changes to the suggested surgical plan (e.g., based on differences between the selected procedure and the surgical procedure to be performed). Changes made to a surgical plan may be considered as part of training or retraining the machine learning model 110. After confirming the surgical plan (e.g., with any changes made), the processor 104 may provide the surgical procedure suggestion 112 to indicate all details associated with a future surgical procedure (e.g., surgical plan, device(s) to use, operating room setup details, suggested personnel, surgical navigation system 122 settings, imaging device 132 settings, surgical robotic system 124 settings, etc.).
[0080] In some embodiments, a rendering of the patient’s radiology images and mapping (e.g., acquired from the set of surgical procedure data 108) with the machine learning model 110 may enable the processor 104 to display a first cut sectional view for the surgery planning. The processor may then also suggest a closest match of the previously conducted surgeries based on a comparison of the cuts and/or incisions made for the previously conducted surgeries and the first cut sectional view, allowing the surgeon to select between the various options of the previously conducted surgeries based on the similarity index(es) and a visualization of the previously conducted surgeries
with respect to the surgical procedure to be performed. Additionally, in some embodiments, the availability of the previous surgeries data and planning information can be used as a training material for end users and employees.
[0081] Fig. 2 is a diagram of a workflow 200 according to at least one embodiment of the present disclosure. In some examples, the workflow 200 may implement aspects of or may be implemented by aspects of Fig. 1. For example, the workflow 200 may be a more detailed view of the system 100, where a machine learning model uses inputs for a surgery to determine a surgical plan and a surgical procedure suggestion based on historical data of previously performed surgeries. In some examples, the workflow 200 may be performed by a processor described herein, such as the processor 104 as described with reference to Fig. 1.
[0082] At operation 202 of the workflow 200, one or more inputs for a given surgical procedure for a patient may be provided or received. For example, the one or more inputs may include demographic information for the patient, radiology and physiology images and data of the patient for the given surgical procedure, pathology data for the patient, or a combination thereof. The inputs for surgery 202 may include pre-operative images (e.g., CT images, CBCT images, MRI images, x-ray images, ultrasound images, pictures, etc.). The inputs for surgery 202 may also include notes from a clinician, insurance data, patient health history, operating room availability, products available from an ecosystem of surgical products, etc. The inputs for surgery 202 may include some or all of the types of data inputs provided to the data curation unit 116 described in connection with Fig. 1.
[0083] At operation 204 of the workflow 200, a predictive position and treatment for the patient and given surgical procedure may be provided. At operation 206, planning of a surgical procedure for the patient may be launched. In some examples, the planning may be launched or may be based on the machine learning model as described herein. For example, the machine learning model may include or may be trained based on historical data 226 (e.g., stored in a database or cloud database) of previously performed surgeries, including surgery data 224. The machine learning model may be trained using reinforcement learning, which may include any one of a policy-based reinforcement learning, a value-function-based reinforcement learning, an associative reinforcement learning, a deep reinforcement learning, an adversarial deep reinforcement learning, a fuzzy reinforcement learning, an inverse reinforcement learning, or a safe reinforcement learning.
[0084] Subsequently, the workflow 200 may perform operation 208 to execute a comparison between the given surgical procedure and the historical data 226 of the previously performed surgeries. At operation 210, the processor may display (e.g., via a user interface) and list the closest procedure match(es) from the previously performed surgeries that are most similar to the given
surgical procedure to be performed. In some embodiments, the processor may display similarity index(es) indicating how similar each of the previously performed surgeries are to the given surgical procedure to be performed and/or how similar different aspects of the previously performed surgeries are to corresponding aspects of the given surgical procedure to be performed.
[0085] At operation 212 of the workflow 200, one or multiple of the closest procedure matches may be selected (e.g., by a surgeon or other medical provider) based on the similarity index(es). Based on the selected closest procedure, a surgical plan and instrument suggestion may be previewed and displayed by the processor to the surgeon (e.g., via a user interface) at operation 214. In some embodiments, the surgical plan may include a disease state for the patient, such as an angle, depth, and position of a targeted area in the patient to be accessed as part of the surgical procedure. Additionally, at operation 216, the processor may suggest a position of the patient for performing the given surgical procedure (e.g., to reduce load on the targeted area, fractured bone, etc.), such as on their side, on their stomach, etc.
[0086] At operation 218 of the workflow 200, the surgeon may edit and/or accept the surgical plan that is based on the selected closest procedure. For example, after making the selection, the surgeon may be able to make changes to the suggested surgical plan (e.g., based on differences between the selected procedure and the surgical procedure to be performed, differences between the patient for the given surgical procedure and the patient for which the selected procedure was performed, etc.). At operation 220, the processor may distribute the surgical plan. Additionally or alternatively, the processor may display information describing the surgical plan to the patient, surgeon, operating room personnel, nurses, insurance company, and the like. The surgical plan may be transmitted via electronic communications to various computing devices of the entities mentioned above.
[0087] At operation 222, the surgeon may perform and complete the surgical procedure using products from an ecosystem of surgical products, as suggested, displayed, and/or implemented based on the selected closest procedure and suggested surgical plan. After completing the surgical procedure, the surgeon, patient, and/or operating room personnel may provide feedback for the machine learning model (e.g., which products from the ecosystem of surgical products were or were not used, performance data for the suggested surgical plan, additional data, etc.) to further train and/or update the machine learning model. In some examples, the feedback may include surgery data 224 for the completed surgical procedure, such as 3D models, angle and position of the surgery to reach a targeted area of the patient for the surgical procedure, dimensions, annotations, treatment plans, radiology diagnostic imaging, an implant used with respect to the 3D models, imaging used,
object motion captured by the surgical navigation system, etc. The surgery data 224 may also include similar information from the historical data 226 for the previously performed surgeries. [0088] The historical data 226 and the surgery data 224 may be used to train the machine learning model at operation 228 in a continuous feedback loop to mature and continually refine the machine learning model (e.g., including performing validation and testing of the machine learning model). The training may include reinforcement learning in which policies, values, functions, and/or objectives are provided to the learning agent. The purpose of reinforcement learning is for the agent to learn an optimal, or nearly optimal, policy that maximizes the “reward function” or other user-provided reinforcement signal that accumulates from the immediate rewards. Accordingly, the machine learning model may be created and updated at operation 230 of the workflow 200 after training based on the reinforcement learning, which may be driven by the historical data 226 and the surgery data 224.
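As a concrete, non-limiting instance of the reward-accumulation idea described above, the sketch below shows a tabular Q-learning agent that learns a policy maximizing discounted cumulative reward. The states, candidate actions, and hyperparameters are placeholders; the disclosure contemplates several reinforcement learning variants, of which this is only one.

```python
# Minimal tabular Q-learning loop illustrating reward accumulation.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1     # assumed hyperparameters
ACTIONS = ["plan_a", "plan_b", "plan_c"]  # e.g., candidate surgical plans
Q = defaultdict(float)                    # Q[(state, action)] -> value

def choose_action(state) -> str:
    """Epsilon-greedy policy over the current value estimates."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state) -> None:
    """One temporal-difference update toward the accumulated reward."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                   - Q[(state, action)])
```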
[0089] Turning to Fig. 3, a diagram of a system 300 according to at least one embodiment of the present disclosure is shown. The system 300 may be used to suggest a surgical plan and/or products from an ecosystem of surgical products for performing a surgical procedure. The system 300 is illustrated to include a computing device 302, one or more imaging devices 312, a robot 314, a navigation system 318, a database 330, and/or a cloud or other network 328. One, some, or all of the components of system 300 may be configured to provide surgical procedure data 108 to the data curation unit 116 for eventual inputs to a machine learning process. Systems according to other embodiments of the present disclosure may comprise more or fewer components than the system 300. For example, the system 300 may not include the imaging device 312, the robot 314, the navigation system 318, one or more components of the computing device 302, the database 330, and/or the cloud 328. The various components of system 300 may be considered products within an ecosystem of surgical products, each of which supports a surgical procedure and may be selected for use with a particular surgical plan.
[0090] The computing device 302 comprises a processor 304, a memory 306, a communication interface 308, and a user interface 310. Computing devices according to other embodiments of the present disclosure may comprise more or fewer components than the computing device 302.
[0091] The processor 304 of the computing device 302 may be any processor described herein or any similar processor. The processor 304 may be configured to execute instructions stored in the memory 306, which instructions may cause the processor 304 to carry out one or more computing steps utilizing or based on data received from the imaging device 312, the robot 314, the navigation system 318, the database 330, and/or the cloud 328.
[0092] The memory 306 may be or comprise RAM, DRAM, SDRAM, other solid-state memory, any memory described herein, or any other tangible, non-transitory memory for storing computer-readable data and/or instructions. The memory 306 may store information or data useful for completing, for example, any step of the methods described herein, or of any other methods. The memory 306 may store, for example, instructions and/or machine learning models that support one or more functions of the robot 314. For instance, the memory 306 may store content (e.g., instructions and/or machine learning models) that, when executed by the processor 304, enable surgical plan determination 320, surgical plan selection 322, surgical robot settings 324, and navigation/imaging settings 326.
[0093] The surgical plan determination 320 enables the processor 304 to receive a set of inputs for a surgical procedure for a patient and to determine one or more potential plans for the surgical procedure based at least in part on the set of inputs and a machine learning model. For example, the set of inputs for the surgical procedure may comprise patient demographic data, one or more radiological images, pathology data, or a combination thereof. Additionally, the one or more potential plans may be determined based at least in part on a disease state corresponding to the surgical procedure for a patient.
[0094] In some embodiments, the surgical plan determination 320 enables the processor 304 to compare the set of inputs for the surgical procedure with historical data of previously performed surgical procedures to determine the one or more potential plans based at least in part on a similarity index. For example, the historical data of previously performed surgical procedures may comprise procedure and instrument flow and abnormalities, radiology images and annotations, demographic information, three-dimensional anatomical models, angles, positions, dimensions, implants used with respect to the three-dimensional models, instrument availability information, treatment plans, or a combination thereof, for the previously performed surgical procedures. Additionally, the similarity index may comprise a positional coordinate correlation, a deformation coefficient, demographic similarity, three-dimensional model similarity, inventory stock matching of available surgical instruments, or a combination thereof.
[0095] The surgical plan selection 322 enables the processor 304 to receive a selection of a plan from the one or more potential plans. Additionally, the surgical plan selection 322 enables the processor 304 to display (e.g., via the user interface 310) one or more similarity index values for each of the one or more potential plans, where the selection of the plan is based at least in part on the similarity index values. In some embodiments, the surgical plan selection 322 enables the processor
304 to provide a position suggestion for the surgical procedure for the patient corresponding to the plan from the selection.
[0096] In some embodiments, the surgical plan selection 322 enables the processor 304 to provide a cut sectional view for the surgical procedure for the patient based at least in part on the set of inputs and to provide one or more cut sectional views for the one or more potential plans based at least in part on historical surgical data corresponding to the one or more potential plans, where the selection of the plan from the one or more potential plans is received based at least in part on a comparison of the cut sectional view for the surgical procedure and the one or more cut sectional views for the one or more potential plans.
[0097] The surgical robot settings 324 enables the processor 304 to determine one or more settings to be used by the robot 314 during a surgical procedure. Surgical robot settings 324 may define, for example, end effectors to use with the robotic arm(s) 316, instruments to use with the surgical robot 314, navigation trackers to connect to the robot arm(s) 316, no-fly zone(s) for the robot arm(s) 316, a number of surgical robots 314 to use in the surgical procedure, a number of operators for the robot(s) 314, and the like. In some embodiments, the surgical robot settings 324 enables the processor 304 to receive one or more changes to a surgical plan from the selection, and interactions between the robot 314 and other surgical products in the ecosystem of surgical products may be defined.
[0098] The navigation/imaging settings 326 enables the processor 304 to provide an output that indicates settings for the surgical navigation system 318 and/or imaging device(s) 312 during a surgical procedure. In some embodiments, the navigation/imaging settings 326 may be selected based at least in part upon one or more surgical robot settings 324. In particular, different navigation and/or imaging settings 326 may be selected based on surgical robot settings 324 that are being used for a surgical procedure. The surgical robot settings 324 and navigation/imaging settings 326 may be co-dependent on one another, meaning that a change in one of the settings 324, 326 may result in or require a change in the other of the settings 324, 326. The interactions between products in an ecosystem of surgical products are complex and may depend on desired patient outcomes, surgeon preferences, operating room capabilities, product availability, etc. This means that the surgical plan selection 322 can depend on the surgical robot settings 324, which may depend on navigation/imaging settings 326, and vice versa. In some embodiments, the output indicates which products to use from an ecosystem of surgical products and settings to utilize for each of the products. In some embodiments, the surgical plan selection 322, surgical robot settings 324, and navigation/imaging settings 326 may be provided to the processor 304 for feedback after the surgical
procedure for the patient is performed based at least in part on the output. The feedback may also include surgeon, patient, and/or operating room personnel feedback regarding outcomes, challenges, complications, resolutions of problems, etc. Accordingly, the feedback may be used, in part, to train the machine learning model.
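The co-dependency between the surgical robot settings 324 and the navigation/imaging settings 326 can be pictured as an iteration to a mutually consistent fixed point, as in the following sketch. Both adjustment rules shown are invented placeholders used only to illustrate how a change in one group of settings might propagate to the other.

```python
# Hypothetical resolution of co-dependent robot and navigation/imaging
# settings: propagate changes until the pair is mutually consistent.
def adjust_navigation(robot: dict, nav: dict) -> dict:
    nav = dict(nav)
    # Placeholder rule: more robot arms in use demands a wider tracking volume.
    nav["tracking_volume"] = "wide" if robot.get("arms_in_use", 1) > 1 else "standard"
    return nav

def adjust_robot(robot: dict, nav: dict) -> dict:
    robot = dict(robot)
    # Placeholder rule: fluoroscopic imaging forces a no-fly zone for the arms.
    robot["no_fly_zone"] = nav.get("imaging_mode") == "fluoroscopy"
    return robot

def resolve(robot: dict, nav: dict, max_rounds: int = 10) -> tuple[dict, dict]:
    """Iterate until neither side needs a further change (a fixed point)."""
    for _ in range(max_rounds):
        new_nav = adjust_navigation(robot, nav)
        new_robot = adjust_robot(robot, new_nav)
        if new_nav == nav and new_robot == robot:
            break
        robot, nav = new_robot, new_nav
    return robot, nav
```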
[0099] Content stored in the memory 306, if provided as an instruction, may, in some embodiments, be organized into one or more applications, modules, packages, layers, or engines. Alternatively or additionally, the memory 306 may store other types of content or data (e.g., machine learning models, artificial neural networks, deep neural networks, etc.) that can be processed by the processor 304 to carry out the various methods and features described herein. Thus, although various contents of memory 306 may be described as instructions, it should be appreciated that functionality described herein can be achieved through use of instructions, algorithms, and/or machine learning models. The data, algorithms, and/or instructions may cause the processor 304 to manipulate data stored in the memory 306 and/or received from or via the imaging device 312, the robot 314, the database 330, and/or the cloud 328.
[0100] The computing device 302 may also comprise a communication interface 308. The communication interface 308 may be used for receiving image data or other information from an external source (such as the imaging device 312, the robot 314, the navigation system 318, the database 330, the cloud 328, and/or any other system or component not part of the system 300), and/or for transmitting instructions, images, or other information to an external system or device (e.g., another computing device 302, the imaging device 312, the robot 314, the navigation system 318, the database 330, the cloud 328, and/or any other system or component not part of the system 300). The communication interface 308 may comprise one or more wired interfaces (e.g., a USB port, an Ethernet port, a Firewire port) and/or one or more wireless transceivers or interfaces (configured, for example, to transmit and/or receive information via one or more wireless communication protocols such as 802.11a/b/g/n, Bluetooth, NFC, ZigBee, and so forth). In some embodiments, the communication interface 308 may be useful for enabling the device 302 to communicate with one or more other processors 304 or computing devices 302, whether to reduce the time needed to accomplish a computing-intensive task or for any other reason.
[0101] The computing device 302 may also comprise one or more user interfaces 310. The user interface 310 may be or comprise a keyboard, mouse, trackball, monitor, television, screen, touchscreen, and/or any other device for receiving information from a user and/or for providing information to a user. The user interface 310 may be used, for example, to receive a user selection or other user input regarding any step of any method described herein. Notwithstanding the foregoing,
any required input for any step of any method described herein may be generated automatically by the system 300 (e.g., by the processor 304 or another component of the system 300) or received by the system 300 from a source external to the system 300. In some embodiments, the user interface 310 may be useful to allow a surgeon or other user to modify instructions to be executed by the processor 304 according to one or more embodiments of the present disclosure, and/or to modify or adjust a setting of other information displayed on the user interface 310 or corresponding thereto. [0102] Although the user interface 310 is shown as part of the computing device 302, in some embodiments, the computing device 302 may utilize a user interface 310 that is housed separately from one or more remaining components of the computing device 302. In some embodiments, the user interface 310 may be located proximate one or more other components of the computing device 302, while in other embodiments, the user interface 310 may be located remotely from one or more other components of the computer device 302.
[0103] The imaging device 312 may be operable to image anatomical feature(s) (e.g., a bone, veins, tissue, etc.) and/or other aspects of patient anatomy to yield image data (e.g., image data depicting or corresponding to a bone, veins, tissue, etc.). “Image data” as used herein refers to the data generated or captured by an imaging device 312, including in a machine-readable form, a graphical/visual form, and in any other form. In various examples, the image data may comprise data corresponding to an anatomical feature of a patient, or to a portion thereof. The image data may be or comprise a preoperative image, an intraoperative image, a postoperative image, or an image taken independently of any surgical procedure. In some embodiments, a first imaging device 312 may be used to obtain first image data (e.g., a first image) at a first time, and a second imaging device 312 may be used to obtain second image data (e.g., a second image) at a second time after the first time. The imaging device 312 may be capable of taking a 2D image or a 3D image to yield the image data. The imaging device 312 may be or comprise, for example, an ultrasound scanner (which may comprise, for example, a physically separate transducer and receiver, or a single ultrasound transceiver), an O-arm, a C-arm, a G-arm, or any other device utilizing X-ray-based imaging (e.g., a fluoroscope, a CT scanner, or other X-ray machine), a magnetic resonance imaging (MRI) scanner, an optical coherence tomography (OCT) scanner, an endoscope, a microscope, an optical camera, a thermographic camera (e.g., an infrared camera), a radar system (which may comprise, for example, a transmitter, a receiver, a processor, and one or more antennae), a cone beam computed tomography (CBCT) imaging device, or any other imaging device 312 suitable for obtaining images of an anatomical feature of a patient. The imaging device 312 may be contained entirely within a single
housing, or may comprise a transmitter/emitter and a receiver/detector that are in separate housings or are otherwise physically separated.
[0104] In some embodiments, the imaging device 312 may comprise more than one imaging device 312. For example, a first imaging device may provide first image data and/or a first image, and a second imaging device may provide second image data and/or a second image. In still other embodiments, the same imaging device may be used to provide both the first image data and the second image data, and/or any other image data described herein. The imaging device 312 may be operable to generate a stream of image data. For example, the imaging device 312 may be configured to operate with an open shutter, or with a shutter that continuously alternates between open and shut so as to capture successive images. For purposes of the present disclosure, unless specified otherwise, image data may be considered to be continuous and/or provided as an image data stream if the image data represents two or more frames per second.
[0105] The robot 314 may be any surgical robot or surgical robotic system. The robot 314 may be or comprise, for example, the Mazor X™ Stealth Edition robotic guidance system. The robot 314 may be configured to position the imaging device 312 at one or more precise position(s) and orientation(s), and/or to return the imaging device 312 to the same position(s) and orientation(s) at a later point in time. The robot 314 may additionally or alternatively be configured to manipulate a surgical tool (whether based on guidance from the navigation system 318 or not) to accomplish or to assist with a surgical task. In some embodiments, the robot 314 may be configured to hold and/or manipulate an anatomical element during or in connection with a surgical procedure. The robot 314 may comprise one or more robotic arms 316. In some embodiments, the robotic arm 316 may comprise a first robotic arm and a second robotic arm, though the robot 314 may comprise more than two robotic arms. In some embodiments, one or more of the robotic arms 316 may be used to hold and/or maneuver the imaging device 312. In embodiments where the imaging device 312 comprises two or more physically separate components (e.g., a transmitter and receiver), one robotic arm 316 may hold one such component, and another robotic arm 316 may hold another such component. Each robotic arm 316 may be positionable independently of the other robotic arm. The robotic arms 316 may be controlled in a single, shared coordinate space, or in separate coordinate spaces.
[0106] The robot 314, together with the robotic arm 316, may have, for example, one, two, three, four, five, six, seven, or more degrees of freedom. Further, the robotic arm 316 may be positioned or positionable in any pose, plane, and/or focal point. The pose includes a position and an orientation. As a result, an imaging device 312, surgical tool, or other object held by the robot 314 (or, more
specifically, by the robotic arm 316) may be precisely positionable in one or more needed and specific positions and orientations.
[0107] The robotic arm(s) 316 may comprise one or more sensors that enable the processor 304 (or a processor of the robot 314) to determine a precise pose in space of the robotic arm (as well as any object or element held by or secured to the robotic arm).
[0108] In some embodiments, reference markers (e.g., navigation markers) may be placed on the robot 314 (including, e.g., on the robotic arm 316), the imaging device 312, or any other object in the surgical space. The reference markers may be tracked by the navigation system 318, and the results of the tracking may be used by the robot 314 and/or by an operator of the system 300 or any component thereof. In some embodiments, the navigation system 318 can be used to track other components of the system (e.g., imaging device 312) and the system can operate without the use of the robot 314 (e.g., with the surgeon manually manipulating the imaging device 312 and/or one or more surgical tools, based on information and/or instructions generated by the navigation system 318, for example). Historical information describing robot 314 motion during a surgical procedure may be provided as part of surgical procedure data 108. Information describing patient motion and/or surgeon motion during the surgical procedure may also be provided from the surgical navigation system 122 as surgical procedure data 108.
[0109] The navigation system 318 may provide navigation for a surgeon and/or a surgical robot during an operation. The navigation system 318 may be any now-known or future-developed navigation system, including, for example, the Medtronic StealthStation™ S8 surgical navigation system or any successor thereof. The navigation system 318 may include one or more cameras or other sensor(s) for tracking one or more reference markers, navigated trackers, or other objects within the operating room or other room in which some or all of the system 300 is located. The one or more cameras may be optical cameras, infrared cameras, or other cameras. In some embodiments, the navigation system 318 may comprise one or more set of electromagnetic transmitters and sensors. In various embodiments, the navigation system 318 may be used to track a position and orientation (e.g., a pose) of the imaging device 312, the robot 314 and/or robotic arm 316, and/or one or more surgical tools (or, more particularly, to track a pose of a navigated tracker attached, directly or indirectly, in fixed relation to the one or more of the foregoing). The navigation system 318 may include a display for displaying one or more images from an external source (e.g., the computing device 302, imaging device 312, or other source) or for displaying an image and/or video stream from the one or more cameras or other sensors of the navigation system 318. In some embodiments, the system 300 can operate without the use of the navigation system 318. The navigation system 318
may be configured to provide guidance to a surgeon or other user of the system 300 or a component thereof, to the robot 314, or to any other element of the system 300 regarding, for example, a pose of one or more anatomical elements, whether or not a tool is in the proper trajectory, and/or how to move a tool into the proper trajectory to carry out a surgical task according to a preoperative or other surgical plan.
[0110] The database 330 may store information that correlates one coordinate system to another (e.g., one or more robotic coordinate systems to a patient coordinate system and/or to a navigation coordinate system). The database 330 may additionally or alternatively store, for example, one or more surgical plans (including, for example, pose information about a target and/or image information about a patient’s anatomy at and/or proximate the surgical site, for use by the robot 314, the navigation system 318, and/or a user of the computing device 302 or of the system 300); one or more images useful in connection with a surgery to be completed by or with the assistance of one or more other components of the system 300; and/or any other useful information. The database 330 may be configured to provide any such information to the computing device 302 or to any other device of the system 300 or external to the system 300, whether directly or via the cloud 328. In some embodiments, the database 330 may be or comprise part of a hospital image storage system, such as a picture archiving and communication system (PACS), a health information system (HIS), and/or another system for collecting, storing, managing, and/or transmitting electronic medical records including image data.
[0111] The cloud 328 may be or represent the Internet or any other wide area network. The computing device 302 may be connected to the cloud 328 via the communication interface 308, using a wired connection, a wireless connection, or both. In some embodiments, the computing device 302 may communicate with the database 330 and/or an external device (e.g., a computing device) via the cloud 328.
[0112] The system 300 or similar systems may be used, for example, to carry out one or more aspects of any of the methods 400, 500, and/or 600 described herein. The system 300 or similar systems may also be used for other purposes.
[0113] Fig. 4 illustrates a data pipeline 400 that may be used to utilize reinforcement learning as part of providing and/or modifying one or more features associated with an ecosystem of surgical products. The ecosystem of surgical products may include one or more of a surgical navigation system, a surgical robot, a surgical technique, a surgical instrument, an implant, a therapy delivery technique, a therapy delivery device, and the like. The one or more features associated with the ecosystem of the surgical products include an algorithm implemented by a surgical robot, an
algorithm implemented by a surgical navigation system, an imaging technique implemented by an imaging device, a surgical plan implemented by a care provider, a surgical instrument selected for use during a surgical procedure, a device implanted into a patient, a therapy delivery technique, an image segmentation approach, and a layout for an operating room.
[0114] The data pipeline 400 includes data storage 402 in which surgical procedure data is stored. The data storage 402 may utilize any memory, database, or cloud storage device depicted and/or described herein. Storage of data in the data storage 402 may be ongoing, and the stored data may be updated as machine learning models are updated. Moreover, memory management approaches may be utilized to delete unnecessary, expired, and/or unwanted data that is not impacting the machine learning process.
[0115] The data pipeline 400 may further include a data normalization and annotation 404 in which surgical procedure data is curated, formatted, and/or annotated for further use within the pipeline 400. The data normalization may include formatting the data such that PII is removed from all surgical procedure data, conforming data fields to predefined formats, etc. The annotation may include defining any previously undefined data fields, organizing data in a stream that is efficient to process, removing unnecessary bits of data, adding metadata that associates the surgical procedure data with other surgical procedure data (e.g., binds the surgical procedure data from different devices to a common surgical procedure), etc.
[0116] The data pipeline 400 may further include a reinforcement learning 406 in which the annotated surgical procedure data is subjected to one or more reinforcement learning approaches. In some embodiments, the reinforcement learning may include a policy-based reinforcement learning where one or more policies are provided to the machine learning model during training or retraining. Examples of policies that may be provided in a policy-based reinforcement learning include, without limitation, insurance policy requirements, hospital policy requirements, patient-defined policies, surgeon-defined policies, etc.
[0117] In some embodiments, the reinforcement learning may include a value-function-based reinforcement learning where one or more functions are defined for the machine learning model during training or retraining. The function(s) used in a value-function-based reinforcement learning may include an optimization function that seeks to optimize a patient outcome after a surgical procedure. Illustratively, and without limitation, the optimization function may include a maximization function that maximizes one or more of: patient happiness (e.g., as measured in patient survey data), surgeon approval (e.g., as measured in surgeon survey data), insurance coverage for the surgical procedure, etc. Alternatively or additionally, the optimization function may include a minimization function that minimizes one or more of: patient cost, elapsed time of the surgical
procedure, patient discomfort (e.g., as measured in patient survey data), number of surgical products used from an ecosystem of surgical products, number of follow-up visits, number of surgical procedures required to complete a larger surgical plan, etc.
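A value-function-based reward of the kind described above might combine the maximization and minimization terms into a single scalar, as sketched below. The weights and outcome field names are assumptions for illustration, and all inputs are assumed to be pre-normalized to [0, 1].

```python
# Sketch of a reward combining the maximization and minimization terms
# listed above; weights and field names are illustrative assumptions.
MAXIMIZE = {"patient_happiness": 0.4, "surgeon_approval": 0.3,
            "insurance_coverage": 0.3}
MINIMIZE = {"patient_cost": 0.3, "procedure_minutes": 0.3,
            "patient_discomfort": 0.2, "followup_visits": 0.2}

def reward(outcome: dict[str, float]) -> float:
    """Inputs assumed pre-normalized to [0, 1]; higher reward is better."""
    gain = sum(w * outcome.get(k, 0.0) for k, w in MAXIMIZE.items())
    cost = sum(w * outcome.get(k, 0.0) for k, w in MINIMIZE.items())
    return gain - cost
```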
[0118] In some embodiments, the reinforcement learning may include a model-based reinforcement learning where one or more virtual models are developed for the ecosystem of surgical products. In a model-based reinforcement learning approach, surgical procedure data may be provided to multiple virtual models along with other machine learning models in production (e.g., currently in-use for recommending surgical procedure data).
[0119] The data pipeline 400 may further include a comparison 408 in which new model(s) or models that are subject to training/retraining are compared with models currently in production (e.g., models in use). In the comparison 408, performance metrics of the models in production may be compared with performance metrics of the models being trained or retrained. In the example of model-based reinforcement learning, performance of the different models (actual and virtual) may be compared over time to determine which model(s) are performing better than the other models. Eventually, in a model-based reinforcement learning approach, a model currently in use may be replaced with a virtual model if outcomes achieved with the virtual model are better as compared with outcomes achieved with the model in use.
[0120] The data pipeline 400 may further include a model update 410 in which model(s) in production may be updated, retrained, and/or replaced, depending upon the outcome of the comparison 408. In some embodiments, if performance metrics associated with a model in training are better than performance metrics associated with a model in production, by at least a predetermined amount or percentage, then the model in production may be replaced with the model in training.
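By way of non-limiting illustration, the predetermined-margin gate of the model update 410 might be sketched as follows; the metric interface and the 5% relative margin are assumptions chosen for illustration:

```python
def should_replace(production_metric: float, candidate_metric: float,
                   min_relative_gain: float = 0.05) -> bool:
    """Replace the production model only if the candidate beats it by at
    least a predetermined percentage (here, an illustrative 5%)."""
    if production_metric <= 0:
        return candidate_metric > 0
    gain = (candidate_metric - production_metric) / production_metric
    return gain >= min_relative_gain
```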
[0121] As models are retrained, replaced, or updated, the efficacy of the model(s) may be verified 412. In some embodiments, verifying the efficacy of a model may include verifying one or more performance metrics associated with use of the machine learning model. The performance metric may include a measured effectiveness (e.g., impact on a patient) of any algorithms that are updated based on the retraining of the machine learning model.
[0122] As shown in Fig. 4, the data pipeline 400 may be a closed-loop pipeline such that updates and verifications may be stored 402 as new surgical procedure data 108 and the process may be repeated/iterated.
[0123] Fig. 5 depicts, in accordance with at least one embodiment, a method 500 that may be used to suggest a surgery plan and provide or modify one or more features associated with an ecosystem of surgical products.
[0124] The method 500 (and/or one or more steps thereof) may be carried out or otherwise performed, for example, by at least one processor. The at least one processor may be the same as or similar to the processor(s) 304 of the computing device 302 described above. Alternatively or additionally, the at least one processor may include the processor 104. Alternatively or additionally, the at least one processor may be part of a robot (such as a robot 314) or part of a navigation system (such as a navigation system 318). A processor other than any processor described herein may also be used to execute the method 500. The at least one processor may perform the method 500 by executing elements stored in a memory such as the memory 306. The elements stored in the memory and executed by the processor may cause the processor to execute one or more steps of a function as shown in method 500. One or more portions of a method 500 may be performed by the processor executing any of the contents of memory, such as a surgical plan determination 320, a surgical plan selection 322, a surgical robot settings 324, and/or a navigation/imaging settings 326.
[0125] The method 500 comprises receiving surgical procedure data (step 504). For example, the surgical procedure data may be received as one or more data sets from a data repository. The surgical procedure data may include data of one or many different types and may originate from one or many different sources of surgical procedure data. In some embodiments, method 500 may further include storing the updated data if the data is received as a data stream or from an external data source (step 508).
[0126] The method 500 also comprises curating and annotating the surgical procedure data (step 512). The curating and/or annotating may be performed by the data curation unit 116 and may support additional processing of the data during reinforcement learning.
[0127] The method 500 also comprises training or retraining a machine learning model with reinforcement learning or reinforcement training based on the updated data set (step 516). In other words, the surgical procedure data may be provided to a machine learning model that is being trained or retrained using a reinforcement learning approach.
[0128] The method 500 also comprises providing one or more performance metrics associated with retraining of the machine learning model (step 520). In some embodiments, the performance metric(s) may be verified during training or retraining of the machine learning model. For instance, during reinforcement learning, penalties may be applied to a training agent when model changes deteriorate a performance metric, whereas benefits or rewards may be provided to a training agent when model changes enhance a performance metric. As performance is measured and re-measured, the performance metrics may be verified to enable additional penalties and/or rewards to be applied to the training agent.
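By way of non-limiting illustration, a reward signal derived from changes in performance metrics might be sketched as follows; the metric dictionary interface and the heavier weighting of deteriorations are illustrative assumptions:

```python
def metric_delta_reward(previous: dict, current: dict) -> float:
    """Reward the training agent for model changes that enhance a
    performance metric; penalize changes that deteriorate it.
    Metric names and the 2x penalty weighting are assumptions."""
    signal = 0.0
    for name, prev_value in previous.items():
        delta = current.get(name, prev_value) - prev_value
        # Improvements add to the reward; deteriorations are penalized
        # more heavily (an illustrative design choice)
        signal += delta if delta >= 0 else 2.0 * delta
    return signal
```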
[0129] The method 500 also comprises providing an output that includes a definition of one or more products to use from an ecosystem of surgical products during a surgical procedure (step 524). In some examples, the output may include providing or modifying one or more features associated with the ecosystem of surgical products. In some embodiments, providing or modifying one or more features may include defining, changing, suggesting, or amending at least one of: an algorithm implemented by a surgical robot, an algorithm implemented by a surgical navigation system, an imaging technique implemented by an imaging device, a surgical plan implemented by a care provider, a surgical instrument selected for use during a surgical procedure, a device implanted into a patient, and a layout for an operating room.
[0130] The present disclosure encompasses embodiments of the method 500 that comprise more or fewer steps than those described above, and/or one or more steps that are different than the steps described above.
[0131] Fig. 6 depicts a method 600 that may be used, for example, to train or retrain one or more machine learning models and to implement a change in an ecosystem of surgical products.
[0132] The method 600 (and/or one or more steps thereof) may be carried out or otherwise performed, for example, by at least one processor. The at least one processor may be the same as or similar to other processor(s) depicted and described herein. A processor other than any processor described herein may also be used to execute the method 600. The at least one processor may perform the method 600 by executing elements stored in a memory such as the memory 306. The elements stored in memory and executed by the processor may cause the processor to execute one or more steps of a function as shown in method 600. One or more portions of a method 600 may be performed by the processor executing any of the contents of memory, such as a surgical plan determination 320, a surgical plan selection 322, a surgical robot settings 324, and/or a navigation/imaging settings 326.
[0133] The method 600 comprises receiving surgical procedure data (step 604). The method 600 also comprises annotating the surgical procedure data (step 608). Steps 604 and 608 may implement similar aspects of steps 504 and 512, respectively, as described with reference to Fig. 5.
[0134] The method 600 may also comprise inputting the annotated surgical procedure data to one or more machine learning models (step 612). The annotated surgical procedure data may be provided to the machine learning models as part of a training or retraining process.
[0135] The method 600 may further include receiving an output from one or more of the machine learning models (step 616). The output generated by the machine learning models may be generated in response to the machine learning models processing the annotated surgical procedure data.
[0136] The method 600 may further include implementing reinforcement learning on the machine learning model(s) (step 620). The reinforcement learning may be implemented based, at least in part, on an analysis of the outputs received from the machine learning models. Unlike supervised training, the reinforcement learning may apply rewards and/or penalties to an agent that simultaneously tracks a state of the ecosystem of surgical products to determine whether certain behaviors are viewed positively or negatively by users of the ecosystem of surgical products.
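By way of non-limiting illustration, an agent that tracks a (coarse, hypothetical) state of the ecosystem and learns from user-derived rewards might use a standard tabular Q-learning update, as in the following sketch; the states, actions, and hyperparameters are assumptions:

```python
from collections import defaultdict

q_table = defaultdict(float)  # (state, action) -> estimated value
ALPHA, GAMMA = 0.1, 0.9       # illustrative learning rate and discount factor

def update(state: str, action: str, reward: float, next_state: str,
           actions: list[str]) -> None:
    """Standard Q-learning update; the reward is assumed to be derived
    from whether users of the ecosystem viewed the behavior positively."""
    best_next = max(q_table[(next_state, a)] for a in actions)
    q_table[(state, action)] += ALPHA * (
        reward + GAMMA * best_next - q_table[(state, action)]
    )

# Example: positive user feedback after suggesting an instrument change
update("instrument_selected", "suggest_alternative", reward=1.0,
       next_state="instrument_accepted",
       actions=["suggest_alternative", "keep_current"])
```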
[0137] Based on the rewards and/or penalties, the machine learning model(s) may be updated, as appropriate, to implement a change in an ecosystem of surgical products (step 624). In some embodiments, the updates made to the machine learning models may be monitored and performance metrics of the machine learning models may be analyzed to verify whether the updates resulted in an improvement or a deterioration of the performance metrics.
[0138] The present disclosure encompasses embodiments of the method 600 that comprise more or fewer steps than those described above, and/or one or more steps that are different than the steps described above.
[0139] Fig. 7 depicts a method 700 that may be used, for example, to train or retrain one or more machine learning models utilizing a policy-based reinforcement learning approach.
[0140] The method 700 (and/or one or more steps thereof) may be carried out or otherwise performed, for example, by at least one processor. The at least one processor may be the same as or similar to other processor(s) depicted and described herein. A processor other than any processor described herein may also be used to execute the method 700. The at least one processor may perform the method 700 by executing elements stored in a memory such as the memory 306. The elements stored in memory and executed by the processor may cause the processor to execute one or more steps of a function as shown in method 700. One or more portions of a method 700 may be performed by the processor executing any of the contents of memory, such as a surgical plan determination 320, a surgical plan selection 322, a surgical robot settings 324, and/or a navigation/imaging settings 326.
[0141] The method 700 comprises receiving surgical procedure data (step 704). In some embodiments, the surgical procedure data may include data received from a surgical robot and/or a surgical navigation system. The data may be curated and annotated as described herein. In some
embodiments, when the surgical procedure data is received from a surgical robot and a surgical navigation system for a common surgical procedure, the data from both the surgical robot and the surgical navigation system may be annotated to indicate that the surgical procedure data received from both data sources is associated with the common surgical procedure.
[0142] The method 700 may further comprise providing the surgical procedure data to one or more machine learning models (step 708). The surgical procedure data may be provided to machine learning models that are being trained, retrained, or the like.
[0143] The method 700 may also comprise providing one or more policies to the machine learning models (step 712). The policies may be used as part of implementing a machine learning approach for the machine learning models. In some embodiments, the policies may define behaviors of a system that are required. In some embodiments, the policies may define behaviors of a system that are prohibited. In some embodiments, the policies may include conditional policies that should be complied with, depending upon circumstances and/or conditions.
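By way of non-limiting illustration, required, prohibited, and conditional policies might be realized as action masking, as in the following sketch; the policy content and action names are purely illustrative assumptions:

```python
def allowed_actions(candidate_actions: set[str], context: dict) -> set[str]:
    """Filter candidate actions against required, prohibited, and
    conditional policies. Policy content is purely illustrative."""
    prohibited = {"skip_preoperative_imaging"}   # behavior that is prohibited
    required = {"verify_patient_identity"}       # behavior that is required
    allowed = (candidate_actions - prohibited) | required
    # Conditional policy: complied with only under certain circumstances,
    # e.g., an insurer-specific constraint
    if context.get("insurance_requires_prior_authorization", False):
        allowed.discard("schedule_surgery_without_authorization")
    return allowed
```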
[0144] The method 700 may also comprise updating the surgical procedure data and/or parameters of the machine learning models during training and/or retraining of the machine learning models (step 716). Specifically, but without limitation, as changes are suggested for an ecosystem of surgical products or different surgical procedures are recommended by a system, surgical procedure data may be updated to indicate such changes. In some embodiments, machine learning models may be replaced by other machine learning models, whereas in other embodiments a machine learning model may be updated by having its operating parameters (e.g., coefficients) modified.
[0145] The present disclosure encompasses embodiments of the method 700 that comprise more or fewer steps than those described above, and/or one or more steps that are different than the steps described above.
[0146] Fig. 8 depicts a method 800 that may be used, for example, to train or retrain one or more machine learning models utilizing a value-function-based reinforcement learning approach.
[0147] The method 800 (and/or one or more steps thereof) may be carried out or otherwise performed, for example, by at least one processor. The at least one processor may be the same as or similar to other processor(s) depicted and described herein. A processor other than any processor described herein may also be used to execute the method 800. The at least one processor may perform the method 800 by executing elements stored in a memory such as the memory 306. The elements stored in memory and executed by the processor may cause the processor to execute one or more steps of a function as shown in method 800. One or more portions of a method 800 may be performed by the processor executing any of the contents of memory, such as a surgical plan
determination 320, a surgical plan selection 322, a surgical robot settings 324, and/or a navigation/imaging settings 326.
[0148] The method 800 comprises receiving surgical procedure data (step 804). The method 800 may further comprise providing the surgical procedure data to one or more machine learning models (step 808). Steps 804 and 808 may be similar to steps 704 and 708 of method 700, although the steps do not have to be the same.
[0149] The method 800 may also comprise defining one or more functions for reinforcement learning (step 812). The functions defined for reinforcement learning may include maximization functions, minimization functions, averaging functions, objective functions, a plurality of maximization and minimization functions, or combinations thereof.
[0150] The one or more functions may be used to train and/or retrain the machine learning models as the machine learning models process the surgical procedure data provided thereto. In some embodiments, as the machine learning models are trained and/or retrained, new outputs may be generated by the machine learning models. The output(s) of the trained or retrained machine learning models may be used to update a surgical plan, update surgical procedure data, and/or provide suggestions for changes to an ecosystem of surgical products (step 816).
[0151] The method 800 may also comprise replacing one or more machine learning models with an updated machine learning model if one or more performance metrics associated with the machine learning model being trained improves an output of a reinforcement learning function (step 820). In other words, if one or more machine learning models being trained is capable of satisfying a function defined in step 812 better than an existing machine learning model (e.g., a model in production), then the machine learning model being trained may replace the model in production. If a replacement of models occurs, the performance of the replacement machine learning model may be analyzed and verified.
[0152] The present disclosure encompasses embodiments of the method 800 that comprise more or fewer steps than those described above, and/or one or more steps that are different than the steps described above.
[0153] Fig. 9 depicts a method 900 that may be used, for example, to train or retrain one or more machine learning models utilizing a model-based reinforcement learning approach.
[0154] The method 900 (and/or one or more steps thereof) may be carried out or otherwise performed, for example, by at least one processor. The at least one processor may be the same as or similar to other processor(s) depicted and described herein. A processor other than any processor described herein may also be used to execute the method 900. The at least one processor may
perform the method 900 by executing elements stored in a memory such as the memory 306. The elements stored in memory and executed by the processor may cause the processor to execute one or more steps of a function as shown in method 900. One or more portions of a method 900 may be performed by the processor executing any of the contents of memory, such as a surgical plan determination 320, a surgical plan selection 322, a surgical robot settings 324, and/or a navigation/imaging settings 326.
[0155] The method 900 comprises receiving surgical procedure data (step 904). Step 904 may be similar to step 704 of method 700, although the steps do not have to be the same.
[0156] The method 900 may also comprise developing one or more virtual models for an ecosystem of surgical products (step 908). The one or more virtual models may be similar to a machine learning model in use, but may have different coefficients, may have been trained with different training data, may not have received all surgical procedure data that has been processed by the machine learning model in use, and may be implemented in a virtual environment (e.g., without an ability to make an actual impact on an ecosystem of surgical products).
[0157] The method 900 may further comprise providing the surgical procedure data to the virtual model (step 912). This step may also include providing the surgical procedure data to the machine learning model(s) that are currently in use. The surgical procedure data may be curated and/or annotated prior to being provided to the virtual model(s).
[0158] The method 900 may further comprise receiving outputs from the virtual models and the machine learning model(s) currently in use. Performance metrics associated with the outputs from each of the virtual models and the machine learning model(s) currently in use may be compared with one another. The method 900 may then identify a best-performing machine learning model among the virtual model(s) (step 916). The method 900 may also compare a performance of the best-performing machine learning model with a performance of the machine learning model currently in use (step 920). If the performance metrics associated with the best-performing virtual model are better than the performance metrics associated with the machine learning model currently in use, the method 900 may continue by replacing the machine learning model currently in use with a virtual model (step 924).
[0159] After a machine learning model is replaced with a virtual model, the method 900 may include analyzing and verifying that the new machine learning model continues to provide better performance characteristics than the machine learning model that it replaced. If this improvement is verified, then the change may be maintained. If the improvement is not verified, then the previous
machine learning model may be brought back into production (e.g., to replace the virtual model that previously replaced it).
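By way of non-limiting illustration, the comparison, replacement, and rollback logic of steps 916–924 and the verification described above might be sketched as follows; `evaluate` and `verify` are assumed callables that score a model and confirm sustained improvement, respectively:

```python
def compare_and_replace(production, virtual_models, evaluate, verify):
    """Identify the best-performing virtual model, replace the production
    model if it scores higher, then verify the change and roll back if the
    improvement does not hold. Interfaces are illustrative assumptions."""
    best = max(virtual_models, key=evaluate)       # step 916
    if evaluate(best) <= evaluate(production):     # step 920: compare
        return production                          # keep the model in use
    previous = production
    production = best                              # step 924: replace
    if not verify(production):                     # post-replacement check
        production = previous                      # roll back to prior model
    return production
```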
[0160] The present disclosure encompasses embodiments of the method 900 that comprise more or fewer steps than those described above, and/or one or more steps that are different than the steps described above.
[0161] As noted above, the present disclosure encompasses methods with fewer than all of the steps identified in Figs. 5, 6, 7, 8, and 9 (and the corresponding description of the methods 500, 600, 700, 800, and 900), as well as methods that include additional steps beyond those identified in Figs. 5, 6, 7, 8, and 9 (and the corresponding description of the methods 500, 600, 700, 800, and 900). The present disclosure also encompasses methods that comprise one or more steps from one method described herein, and one or more steps from another method described herein. Any correlation described herein may be or comprise a registration or any other correlation.
[0162] The foregoing is not intended to limit the disclosure to the form or forms disclosed herein. In the foregoing Detailed Description, for example, various features of the disclosure are grouped together in one or more aspects, embodiments, and/or configurations for the purpose of streamlining the disclosure. The features of the aspects, embodiments, and/or configurations of the disclosure may be combined in alternate aspects, embodiments, and/or configurations other than those discussed above. This method of disclosure is not to be interpreted as reflecting an intention that the claims require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed aspect, embodiment, and/or configuration. Thus, the following claims are hereby incorporated into this Detailed Description, with each claim standing on its own as a separate preferred embodiment of the disclosure.
[0163] Moreover, though the foregoing has included description of one or more aspects, embodiments, and/or configurations and certain variations and modifications, other variations, combinations, and modifications are within the scope of the disclosure, e.g., as may be within the skill and knowledge of those in the art, after understanding the present disclosure. It is intended to obtain rights which include alternative aspects, embodiments, and/or configurations to the extent permitted, including alternate, interchangeable and/or equivalent structures, functions, ranges or steps to those claimed, whether or not such alternate, interchangeable and/or equivalent structures, functions, ranges or steps are disclosed herein, and without intending to publicly dedicate any patentable subject matter.
[0164] The techniques of this disclosure may also be described in the following examples.
[0165] Example 1. A system, comprising: a processor; and a memory storing data for processing by the processor, the data, when processed, causes the processor to: store a data set comprising surgical procedure data in a data repository; retrain a machine learning model based on curating the data set, wherein retraining the machine learning model comprises applying reinforcement learning; verify a performance metric associated with the retraining of the machine learning model; and provide or modify one or more features associated with an ecosystem of surgical products in response to retraining the machine learning model and verifying the performance metric.
[0166] Example 2. The system of example 1, wherein the performance metric comprises at least one of a measured effectiveness and a patient outcome.
[0167] Example 3. The system of example 1 or 2, wherein the one or more features associated with the ecosystem of the surgical products include an algorithm implemented by a surgical robot, an algorithm implemented by a surgical navigation system, an imaging technique implemented by an imaging device, a surgical plan implemented by a care provider, a surgical instrument selected for use during a surgical procedure, a device implanted into a patient, and a layout for an operating room.
[0168] Example 4. The system of any preceding example, wherein the one or more features comprises an image segmentation approach.
[0169] Example 5. The system of any preceding example, wherein the one or more features comprises a therapy delivery technique.
[0170] Example 6. The system of any preceding example, wherein the reinforcement learning comprises a policy-based reinforcement learning and wherein at least one policy is provided to the machine learning model during the retraining.
[0171] Example 7. The system of any preceding example, wherein the reinforcement learning comprises a value-function-based reinforcement learning and wherein at least one function is defined for the machine learning model during the retraining.
[0172] Example 8. The system of example 7, wherein the at least one function comprises an optimization function that seeks to optimize a patient outcome after a surgical procedure.
[0173] Example 9. The system of any preceding example, wherein the memory stores further data for processing by the processor that, when processed, causes the processor to: replace the machine learning model with an updated machine learning model in response to the updated machine learning model exhibiting an improvement in the performance metric as compared to the machine learning model.
[0174] Example 10. The system of any preceding example, wherein the memory stores further data for processing by the processor that, when processed, causes the processor to: develop a virtual model for the ecosystem of surgical products; provide the data set to the virtual model; compare a performance of the virtual model with the performance metric; determine the performance of the virtual model is better than the machine learning model based on the comparison; and replace the machine learning model with the virtual model in response to determining that the performance of the virtual model is better than the machine learning model.
[0175] Example 11. The system of any preceding example, wherein the memory stores further data for processing by the processor that, when processed, causes the processor to: receive feedback after the surgical procedure for the patient is performed; and further update the data set with the feedback.
[0176] Example 12. The system of example 11, wherein the feedback is used, in part, to train the machine learning model.
[0177] Example 13. The system of any preceding example, wherein the data set comprises data from a surgical robot and data from a surgical navigation system.
[0178] Example 14. The system of example 13, wherein the data from the surgical robot and the data from the surgical navigation system are curated to indicate an association with a common surgical procedure.
[0179] Example 15. The system of any preceding example, wherein the ecosystem of surgical products comprises at least one of a surgical robot, a surgical navigation system, an imaging device, a surgical instrument, and an implant.
[0180] Example 16. A method, comprising: storing a data set comprising surgical procedure data in a data repository; retraining a machine learning model based on curating the data set, wherein retraining the machine learning model comprises applying reinforcement learning; verifying a performance metric associated with the retraining of the machine learning model; and providing or modifying one or more features associated with an ecosystem of surgical products in response to retraining the machine learning model and verifying the performance metric.
[0181] Example 17. The method of example 16, wherein the reinforcement learning comprises at least one of a policy-based reinforcement learning, a value-function-based reinforcement learning, an associative reinforcement learning, a deep reinforcement learning, an adversarial deep reinforcement learning, a fuzzy reinforcement learning, an inverse reinforcement learning, and a safe reinforcement learning.
[0182] Example 18. The method of example 16 or 17, wherein the one or more features associated with the ecosystem of the surgical products include an algorithm implemented by a surgical robot, an algorithm implemented by a surgical navigation system, an imaging technique implemented by an imaging device, a surgical plan implemented by a care provider, a surgical instrument selected for use during a surgical procedure, a device implanted into a patient, and a layout for an operating room.
[0183] Example 19. A method of implementing a machine learning pipeline, comprising: receiving surgical procedure data; annotating the surgical procedure data; providing the annotated surgical procedure data to a machine learning model; training the machine learning model with reinforcement learning; verifying a performance metric associated with the training of the machine learning model; and updating or replacing the machine learning model with an updated machine learning model, wherein the updated machine learning model provides at least one output associated with improving an ecosystem of surgical products.
[0184] Example 20. The method of example 19, wherein surgical procedure data comprises data from a surgical robot and a surgical navigation system and wherein the at least one output comprises a suggested surgical plan that utilizes at least one of the surgical robot and the surgical navigation system.
Claims
1. A system, comprising: a processor; and a memory storing data for processing by the processor, the data, when processed, causes the processor to: store a data set comprising surgical procedure data in a data repository; retrain a machine learning model based on curating the data set, wherein retraining the machine learning model comprises applying reinforcement learning; verify a performance metric associated with the retraining of the machine learning model; and provide or modify one or more features associated with an ecosystem of surgical products in response to retraining the machine learning model and verifying the performance metric.
2. The system of claim 1, wherein the performance metric comprises at least one of a measured effectiveness and a patient outcome.
3. The system of claim 1 or 2, wherein the one or more features associated with the ecosystem of the surgical products include an algorithm implemented by a surgical robot, an algorithm implemented by a surgical navigation system, an imaging technique implemented by an imaging device, a surgical plan implemented by a care provider, a surgical instrument selected for use during a surgical procedure, a device implanted into a patient, and a layout for an operating room.
4. The system of any preceding claim, wherein the one or more features comprises an image segmentation approach.
5. The system of any preceding claim, wherein the one or more features comprises a therapy delivery technique.
6. The system of any preceding claim, wherein the reinforcement learning comprises a policy-based reinforcement learning and wherein at least one policy is provided to the machine learning model during the retraining.
7. The system of any preceding claim, wherein the reinforcement learning comprises a value-function-based reinforcement learning and wherein at least one function is defined for the machine learning model during the retraining.
8. The system of claim 7, wherein the at least one function comprises an optimization function that seeks to optimize a patient outcome after a surgical procedure.
9. The system of any preceding claim, wherein the memory stores further data for processing by the processor that, when processed, causes the processor to: replace the machine learning model with an updated machine learning model in response to the updated machine learning model exhibiting an improvement in the performance metric as compared to the machine learning model.
10. The system of any preceding claim, wherein the memory stores further data for processing by the processor that, when processed, causes the processor to: develop a virtual model for the ecosystem of surgical products; provide the data set to the virtual model; compare a performance of the virtual model with the performance metric; determine the performance of the virtual model is better than the machine learning model based on the comparison; and replace the machine learning model with the virtual model in response to determining that the performance of the virtual model is better than the machine learning model.
11. The system of any preceding claim, wherein the memory stores further data for processing by the processor that, when processed, causes the processor to: receive feedback after the surgical procedure for the patient is performed; and further update the data set with the feedback.
12. The system of claim 11, wherein the feedback is used, in part, to train the machine learning model.
13. The system of any preceding claim, wherein the data set comprises data from a surgical robot and data from a surgical navigation system and wherein the data from the surgical robot and the data from the surgical navigation system are curated to indicate an association with a common surgical procedure.
14. The system of any preceding claim, wherein the ecosystem of surgical products comprises at least one of a surgical robot, a surgical navigation system, an imaging device, a surgical instrument, and an implant.
15. A method, comprising: storing a data set comprising surgical procedure data in a data repository; retraining a machine learning model based on curating the data set, wherein retraining the machine learning model comprises applying reinforcement learning; verifying a performance metric associated with the retraining of the machine learning model; and providing or modifying one or more features associated with an ecosystem of surgical products in response to retraining the machine learning model and verifying the performance metric.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202363605975P | 2023-12-04 | 2023-12-04 | |
| US63/605,975 | 2023-12-04 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025120487A1 (en) | 2025-06-12 |
Family
ID=93924925
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/IB2024/062100 (WO2025120487A1, pending) | Reinforcement learning for surgical procedures and surgical plans | 2023-12-04 | 2024-12-02 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025120487A1 (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20200337648A1 (en) * | 2019-04-24 | 2020-10-29 | GE Precision Healthcare LLC | Medical machine time-series event data processor |
| US20230252631A1 (en) * | 2022-02-07 | 2023-08-10 | Microvention, Inc. | Neural network apparatus for identification, segmentation, and treatment outcome prediction for aneurysms |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24827183; Country of ref document: EP; Kind code of ref document: A1 |