WO2023067587A1 - Respiration analysis and synchronization of the operation of automated medical devices therewith - Google Patents
Respiration analysis and synchronization of the operation of automated medical devices therewith
- Publication number
- WO2023067587A1 (PCT/IL2022/051063)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- respiration
- data
- medical instrument
- subject
- triggering
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/10—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges for stereotaxic surgery, e.g. frame-based stereotaxis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/30—Surgical robots
- A61B34/32—Surgical robots operating autonomously
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/08—Measuring devices for evaluating the respiratory organs
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7271—Specific aspects of physiological measurement analysis
- A61B5/7285—Specific aspects of physiological measurement analysis for synchronizing or triggering a physiological measurement or image acquisition with a physiological event or waveform, e.g. an ECG signal
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G06N20/10—Machine learning using kernel methods, e.g. support vector machines [SVM]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G06N20/20—Ensemble learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
- G06N3/0442—Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/0455—Auto-encoder networks; Encoder-decoder networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0475—Generative networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/094—Adversarial learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/01—Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N7/00—Computing arrangements based on specific mathematical models
- G06N7/01—Probabilistic graphical models, e.g. probabilistic networks
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B17/00—Surgical instruments, devices or methods
- A61B2017/00017—Electrical control of surgical instruments
- A61B2017/00022—Sensing or detecting at the treatment site
- A61B2017/00039—Electric or electromagnetic phenomena other than conductivity, e.g. capacity, inductivity, Hall effect
- A61B2017/00044—Sensing electrocardiography, i.e. ECG
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B17/00—Surgical instruments, devices or methods
- A61B2017/00017—Electrical control of surgical instruments
- A61B2017/00022—Sensing or detecting at the treatment site
- A61B2017/00075—Motion
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B17/00—Surgical instruments, devices or methods
- A61B2017/00017—Electrical control of surgical instruments
- A61B2017/00203—Electrical control of surgical instruments with speech control or speech recognition
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B17/00—Surgical instruments, devices or methods
- A61B2017/00681—Aspects not otherwise provided for
- A61B2017/00694—Aspects not otherwise provided for with means correcting for movement of or for synchronisation with the body
- A61B2017/00699—Aspects not otherwise provided for with means correcting for movement of or for synchronisation with the body correcting for movement caused by respiration, e.g. by triggering
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B17/00—Surgical instruments, devices or methods
- A61B2017/00681—Aspects not otherwise provided for
- A61B2017/00725—Calibration or performance testing
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/101—Computer-aided simulation of surgical operations
- A61B2034/105—Modelling of the patient, e.g. for ligaments or bones
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/107—Visualisation of planned trajectories or target regions
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/06—Measuring instruments not otherwise provided for
- A61B2090/064—Measuring instruments not otherwise provided for for measuring force, pressure or mechanical tension
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B2090/364—Correlation of different images or relation of image positions in respect to the body
- A61B2090/365—Correlation of different images or relation of image positions in respect to the body augmented reality, i.e. correlating a live optical image with another image
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/37—Surgical systems with images on a monitor during operation
- A61B2090/374—NMR or MRI
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/37—Surgical systems with images on a monitor during operation
- A61B2090/376—Surgical systems with images on a monitor during operation using X-rays, e.g. fluoroscopy
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/37—Surgical systems with images on a monitor during operation
- A61B2090/376—Surgical systems with images on a monitor during operation using X-rays, e.g. fluoroscopy
- A61B2090/3762—Surgical systems with images on a monitor during operation using X-rays, e.g. fluoroscopy using computed tomography systems [CT]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/37—Surgical systems with images on a monitor during operation
- A61B2090/376—Surgical systems with images on a monitor during operation using X-rays, e.g. fluoroscopy
- A61B2090/3762—Surgical systems with images on a monitor during operation using X-rays, e.g. fluoroscopy using computed tomography systems [CT]
- A61B2090/3764—Surgical systems with images on a monitor during operation using X-rays, e.g. fluoroscopy using computed tomography systems [CT] with a rotating C-arm having a cone beam emitting source
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/37—Surgical systems with images on a monitor during operation
- A61B2090/378—Surgical systems with images on a monitor during operation using ultrasound
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B2562/00—Details of sensors; Constructional details of sensor housings or probes; Accessories for sensors
- A61B2562/02—Details of sensors specially adapted for in-vivo measurements
- A61B2562/0219—Inertial sensors, e.g. accelerometers, gyroscopes, tilt switches
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B2562/00—Details of sensors; Constructional details of sensor housings or probes; Accessories for sensors
- A61B2562/02—Details of sensors specially adapted for in-vivo measurements
- A61B2562/0233—Special features of optical sensors or probes classified in A61B5/00
- A61B2562/0238—Optical sensor arrangements for performing transmission measurements on body tissue
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B2562/00—Details of sensors; Constructional details of sensor housings or probes; Accessories for sensors
- A61B2562/02—Details of sensors specially adapted for in-vivo measurements
- A61B2562/0247—Pressure sensors
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B2562/00—Details of sensors; Constructional details of sensor housings or probes; Accessories for sensors
- A61B2562/02—Details of sensors specially adapted for in-vivo measurements
- A61B2562/0261—Strain gauges
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B2562/00—Details of sensors; Constructional details of sensor housings or probes; Accessories for sensors
- A61B2562/02—Details of sensors specially adapted for in-vivo measurements
- A61B2562/0261—Strain gauges
- A61B2562/0266—Optical strain gauges
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/06—Devices, other than using radiation, for detecting or locating foreign bodies ; Determining position of diagnostic devices within or on the body of the patient
- A61B5/061—Determining position of a probe within the body employing means separate from the probe, e.g. sensing internal probe position employing impedance electrodes on the surface of the body
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
- A61B5/1126—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb using a particular sensing technique
- A61B5/1128—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb using a particular sensing technique using image analysis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
- A61B5/113—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb occurring during breathing
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7221—Determining signal validity, reliability or quality
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7246—Details of waveform analysis using correlation, e.g. template matching or determination of similarity
Definitions
- the present disclosure relates to computer-implemented methods and systems for analyzing respiration activity of subjects and synchronizing operation of automated medical devices and/or imaging systems therewith. More specifically, the disclosed methods and systems include collecting data related to respiration cycle(s) of subject(s), analyzing the collected data and synchronizing the operation of automated medical devices and/or imaging systems with the respiration cycle, to facilitate planning, insertion and/or steering of a medical instrument toward an internal target.
- Various diagnostic and therapeutic procedures used in clinical practice involve the insertion of medical instruments, such as needles and catheters, percutaneously to a subject’s body, and in many cases further involve the steering of the medical instruments within the body, to reach a target region.
- the target region can be, for example but not limited to, a lesion, a tumor, an organ and/or a vessel; such a target may be any object a user indicates as a target.
- procedures requiring insertion and steering of such medical instruments include vaccinations, blood/fluid sampling, drug delivery, regional anesthesia, tissue biopsy, catheter insertion, cryogenic ablation, electrolytic ablation, brachytherapy, neurosurgery, deep brain stimulation, various minimally invasive surgeries, and the like.
- Some automated systems are based on manipulating robotic arm(s) and some utilize a robotic device which can be attached to the patient’s body or positioned in close proximity thereto. These automated systems typically assist the physician in aligning the medical instrument with a selected insertion point at a desired insertion angle, while the insertion itself is carried out manually by the physician.
- Some automated systems further include an insertion mechanism that can insert the instrument toward the target, typically in a linear manner.
- More advanced automated systems further include non-linear steering capabilities, as described, for example, in U.S. Patents Nos. 8,348,861, 8,663,130 and 10,507,067, and in co-owned U.S. Patent No. US 10,245,110, co-owned U.S. Patent Application Publication No. 2019/290,372, and co-owned International Patent Application Publication No. WO 2021/105,992, all of which are incorporated herein by reference in their entireties.
- in such procedures, it is typically required that imaging be executed at the same point/phase during the breathing cycle, so as to minimize the effect of breathing-related movement on the target, internal tissues, organs, and the like, and to allow proper analysis of the scanned volume and planning and execution of the interventional procedure.
- to increase the synchronization between imaging initiation and/or medical instrument insertion and the patient’s breathing behavior, patients are typically given breathing instructions, which include holding their breath at a specific point/phase of their breathing cycle.
- patients do not always manage to hold their breath at the exact same point/phase of the cycle.
- providing breathing instructions may not be applicable if the patient is sedated, or if the patient is unable to follow breathing instructions due to a medical or mental condition, or if the patient is a child, for example.
- while active systems may sometimes be used to enforce desired breathing patterns in such cases, the use of such systems may lead to increased stress or discomfort for the patient during the medical procedure.
- the present disclosure is directed to systems and computer-implemented methods for determination/identification/prediction of respiration behavior of subject(s) and synchronization of the operation of an imaging system and/or an automated medical device with the identified respiration behavior.
- the methods and systems may include collecting data related to respiration of the subject(s) and determining and/or predicting the respiration cycle/behavior of the subject, to thereby allow synchronizing the operation of the imaging system and/or the automated medical device with specific points/states along the breathing cycle, to facilitate planning, insertion and/or steering of a medical instrument toward an internal target.
- the systems and methods disclosed herein advantageously increase the accuracy of inserting and/or steering of the medical instrument and the corresponding image acquisition (such as, for example, CT scanning) by facilitating the performance/execution thereof at the same points/states of the respiration cycle.
- various datasets related to respiration of the subject may be obtained and consequently manipulated and/or utilized to generate algorithms or learning-based models to one or more of: identify/determine stages of the breathing cycle, identify/determine patterns in or of the breathing cycle, predict future behavior of the breathing cycle, and the like.
- the generated algorithms and/or models may consequently be used to synchronize the operation of automated medical devices and/or imaging systems with the determined or predicted breathing cycle, so as to ensure that specific steps (e.g., image acquisition, instrument insertion/steering), in the medical procedure are performed at the same points/stages of the respiration cycle, thereby increasing accuracy, safety and efficiency of the medical procedure.
- the computerized methods for the determination and/or prediction of the breathing cycle of the subject may utilize specific algorithms which may be generated using machine learning tools, deep learning tools, data wrangling tools, and, more generally, AI and data analysis tools.
- the specific algorithms may be implemented using artificial neural network(s) (ANN), such as convolutional neural networks (CNN), recurrent neural networks (RNN), long short-term memory (LSTM) networks, auto-encoders (AE), generative adversarial networks (GAN), reinforcement learning (RL) and the like, as further detailed below.
- the specific algorithms may be implemented using machine learning methods, such as support vector machine (SVM), decision tree (DT), random forest (RF), and the like. Both “supervised” and “unsupervised” methods may be implemented.
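- For illustration only (the disclosure does not prescribe a particular architecture), the following minimal Python/PyTorch sketch shows one way an LSTM could map a window of past respiration-sensor samples to a predicted future window; the class name, sampling rate and hyperparameters are assumptions, not part of the disclosure.

```python
# Minimal illustrative sketch (not the disclosed implementation): an LSTM mapping a window of
# past respiration-sensor samples to a predicted future window of samples.
import torch
import torch.nn as nn

class RespirationLSTM(nn.Module):            # hypothetical name
    def __init__(self, hidden_size=64, horizon=50):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, horizon)   # regress `horizon` future samples

    def forward(self, x):                    # x: (batch, past_len, 1)
        _, (h_n, _) = self.lstm(x)           # h_n: (num_layers, batch, hidden_size)
        return self.head(h_n[-1])            # (batch, horizon)

# Example: predict 5 s ahead from 15 s of history, assuming a 10 Hz respiration signal.
model = RespirationLSTM()
history = torch.randn(8, 150, 1)             # 8 synthetic history windows
predicted = model(history)                   # torch.Size([8, 50])
print(predicted.shape)
```

- Any of the other listed architectures or classical methods (SVM, decision trees, random forests) could fill the same role; the choice is a design decision driven by the collected respiration datasets.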
- data related to the breathing behavior may be collected prior to, during or resulting from procedures performed by automated medical devices.
- data related to the breathing behavior may be collected prior to, during or resulting from procedures performed manually by physicians.
- the collected data may be used to generate algorithms/models which may consequently provide, for example, information or prediction regarding the breathing cycle and specific stages thereof, which may further be used for controlling, instructing, enhancing, alerting or providing recommendations regarding various operations and/or operating parameters and/or other parameters related to automated medical devices.
- a data-analysis algorithm may be generated, to provide output which is indicative or predictive of the breathing cycle of the subject, that can consequently enhance the operation of the automated medical devices (and optionally related imaging systems) and/or the decisions of the users (e.g., physicians) of such devices.
- the automated medical devices are devices for insertion and steering of medical instruments (for example, needles, introducers or probes) in a subject’s body for various diagnostic and/or therapeutic purposes.
- the automated medical device may utilize real-time instrument position detection and real-time trajectory updating. For example, when utilizing real-time trajectory updating and instrument steering according thereto, the most effective spatio-temporal and safe route of the medical instrument to the target within the body may be achieved. Further, safety may be increased as it reduces the risk of harming non-target regions and tissues within the subject’s body, as the trajectory update may take into account obstacles or any other regions along the route, and moreover, it may take into account changes in the real-time location of such obstacles.
- robotic steering following trajectory updating improves the accuracy of the procedures, thus enabling the reaching of small and hard-to-reach targets. This is of particular importance in early detection of malignant neoplasms, for example. In addition, it provides increased safety for the patient, as there is a significantly lower risk of human error. Further, in some embodiments, the automated device may be remotely controlled, i.e., from outside the procedure room, such that the procedure may be safer for the medical personnel, as their exposure to harmful radiation and/or pathogens during the procedure is minimized.
- the automated medical devices are configured to insert and steer a medical instrument (in particular, the tip of the medical instrument) in the body of the subject, to reach a target region within the subject’s body, to perform various medical procedures, such as ablations, biopsies, fluid drainage, etc.
- the operation of the medical devices may be controlled by at least one processor configured to provide instructions, in real-time, to steer the medical instrument (e.g., the tip thereof) toward the target, according to a planned and/or the updated trajectory, while taking into consideration the breathing cycle of the subject as determined according to the methods disclosed herein and the plausible effects thereof on various parameters related to the planning or steering of the medical instrument.
- the steering may be controlled by the processor, via a suitable controller.
- the steering may be controlled in a closed-loop manner, whereby the processor generates motion commands to the steering device via a suitable controller and receives feedback regarding the real-time location of the medical instrument and/or the target.
- the processor(s) may be able to predict the location and/or movement pattern of the target, e.g., using AI-based algorithm(s).
- the automated medical device may be configured to operate in conjunction with an imaging system, which may include any type of imaging system, including, but not limited to: X-ray fluoroscopy, CT, cone beam CT, CT fluoroscopy, MRI, ultrasound, or any other suitable imaging modality.
- the processor is configured to calculate a trajectory for the medical instrument based on a target, entry point and, optionally, obstacles en route (such as bones or blood vessels), which may be manually marked by the user, or automatically identified by the processor, on one or more obtained images.
- at least one of the steps of initiating imaging of a region of interest, planning a trajectory, updating a trajectory, inserting and/or steering a medical tool may be synchronized with the breathing cycle of the subject.
- the respiration related primary datasets collected and utilized by the systems and methods disclosed herein may be used to generate data-analysis algorithm(s) and/or learning-based model(s), which may output, inter alia, prediction of a future time window of the breathing cycle (e.g., any time window between 2 seconds and 10 seconds), which may be used to provide operating instructions for the automated medical device and/or the imaging system and/or instructions/alerts/recommendations to the user of the automated medical device and/or the imaging system.
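- As a hedged illustration of how such a predicted window could be turned into operating instructions (the concrete triggering logic is not specified here; all threshold values below are assumptions), a sketch might scan the predicted signal for intervals that stay close to a reference respiration state and report candidate triggering times:

```python
# Illustrative sketch only (assumed tolerance and run-length values): find candidate triggering
# indices in a predicted respiration window, i.e., times at which the predicted signal stays
# close to a reference respiration state (e.g., the state at which the planning scan was taken).
import numpy as np

def candidate_trigger_indices(predicted, reference_value, tolerance=0.1, min_run=3):
    """Return start indices of runs of >= min_run samples with |predicted - reference| <= tolerance."""
    ok = np.abs(np.asarray(predicted, dtype=float) - reference_value) <= tolerance
    starts, run = [], 0
    for i, flag in enumerate(ok):
        run = run + 1 if flag else 0
        if run == min_run:
            starts.append(i - min_run + 1)
    return starts

t = np.linspace(0, 5, 50)                         # a 5-second predicted window at 10 Hz (assumed)
predicted = np.sin(2 * np.pi * 0.25 * t)          # toy respiration-like prediction
print(candidate_trigger_indices(predicted, reference_value=1.0))
```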
- the collected datasets and/or the data derived therefrom may be used for the generation of a training set, which may be part of the generated algorithm/model, or utilized for the generation of the model/algorithm and/or the validation or update thereof.
- the training step may be performed in an “offline” manner, i.e., the model may be trained/generated based on a static dataset.
- the training step may be performed in an “online” or incremental/continuous manner, in which the model is continuously updated with each new incoming data sample.
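- The offline/online distinction can be illustrated with a generic incremental learner (the feature construction and the use of scikit-learn below are assumptions for illustration, not the disclosed training procedure):

```python
# Illustrative sketch (assumed feature set): "offline" training on a static dataset versus
# "online"/incremental updates with each new incoming sample, using scikit-learn.
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)
X_static = rng.normal(size=(1000, 20))                      # e.g., windows of past respiration samples
y_static = X_static[:, -1] + 0.1 * rng.normal(size=1000)    # toy target: next sample value

offline_model = SGDRegressor(max_iter=1000, tol=1e-3).fit(X_static, y_static)  # offline training

online_model = SGDRegressor()
for _ in range(200):                                        # simulate a stream of new data
    X_new = rng.normal(size=(1, 20))
    y_new = X_new[:, -1] + 0.1 * rng.normal(size=1)
    online_model.partial_fit(X_new, y_new)                  # model updated with every new sample
```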
- according to some embodiments, there is provided a computer-implemented method of generating a data analysis algorithm for determining or predicting respiratory behavior of a subject, which may further be used for providing instructions, recommendations and/or alerts related to insertion of a medical instrument toward a target in a body of a patient.
- FIG. 1 shows illustration of exemplary breathing behavior of a subject as measured using a pressure sensor, according to some embodiments
- FIGS. 2A-2B show perspective views of an exemplary device (FIG. 2A) and an exemplary console (FIG. 2B) of a system for inserting a medical instrument toward a target, according to some embodiments;
- FIG. 3 shows an exemplary non-linear trajectory for a medical instrument to reach an internal target within the body of the subject, according to some embodiments
- FIGS. 4A-4D demonstrate real-time updating of a trajectory and steering of an automated medical instrument according thereto, according to some embodiments.
- FIG. 5 shows a flowchart of steps in an exemplary method for planning and steering a medical instrument, utilizing respiration behavior analyzed using data-analysis algorithm(s) to trigger scanning and instrument insertion at a specific time or state/phase of the respiration cycle, according to some embodiments;
- FIGS. 6A-6B show an exemplary training module (FIG. 6A) and an exemplary training process (FIG. 6B) for training a data-analysis algorithm, according to some embodiments;
- FIGS. 7A-7B show an exemplary inference module (FIG. 7A) and an exemplary inference process (FIG. 7B) for utilizing a data-analysis algorithm, according to some embodiments;
- FIG. 8 shows a block diagram illustrating an exemplary method of training a triggering determination model, according to some embodiments
- FIG. 9 shows a block diagram illustrating an exemplary method of training a respiration prediction model, according to some embodiments.
- FIG. 10 shows a flowchart illustrating the steps of an exemplary method of utilizing a respiration prediction model, according to some embodiments
- FIGS. 11A-11C show line graphs of measured respiration activity of a subject and predicted respiration activity generated using a respiration prediction model, according to some embodiments;
- FIG. 12 shows a flowchart illustrating the steps of an exemplary method of steering a medical instrument toward a moving target utilizing a dynamic trajectory model and further utilizing respiration behavior analyzed using data-analysis algorithm(s) to trigger scanning and instrument insertion at a specific respiration state, according to some embodiments;
- the valid triggering events may be determined for a future time window/segment of the respiratory cycle of the subject predicted using data analysis algorithms and/or AI-based algorithms.
- FIG. 1 shows an illustration of several respiratory cycles of a subject measured over a period of time.
- An exemplary breathing cycle 6 includes an exemplary inhalation (inspiration) stage 2, in which air is drawn into the airways/lungs, and an exemplary exhalation (expiration) stage 4, in which air is expelled from the airways/lungs.
- the breathing cycles may be similar or identical to one another, or may differ from one another, with respect to one or more parameters, such as length, amplitude and/or shape.
- the respiratory behavior is known to be non-stationary by nature, i.e., its characteristics can vary with time. The breathing behavior may also vary between patients.
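- For illustration (thresholds and sampling rate are assumed), per-cycle parameters such as length and amplitude of a pressure-sensor trace like the one of FIG. 1 could be extracted with simple peak detection:

```python
# Illustrative sketch (assumed thresholds): segmenting a pressure-sensor respiration trace
# into cycles and extracting per-cycle length and amplitude.
import numpy as np
from scipy.signal import find_peaks

fs = 20.0                                            # assumed sampling rate [Hz]
t = np.arange(0, 60, 1 / fs)
signal = np.sin(2 * np.pi * 0.25 * t) + 0.05 * np.random.randn(t.size)  # toy ~15 breaths/min

peaks, _ = find_peaks(signal, distance=int(2 * fs))   # end-of-inhalation peaks, >= 2 s apart
troughs, _ = find_peaks(-signal, distance=int(2 * fs))  # end-of-exhalation troughs

cycle_lengths = np.diff(t[peaks])                     # peak-to-peak cycle length [s]
amplitude = signal[peaks].mean() - signal[troughs].mean()
print(f"mean cycle length: {cycle_lengths.mean():.2f} s, mean amplitude: {amplitude:.2f}")
```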
- automated medical device 20 may include a housing (also referred to as “cover”) 21 accommodating therein at least a portion of the steering mechanism.
- the steering mechanism may include at least one moveable platform (not shown) and at least two moveable arms 26A and 26B, configured to allow or control movement of an end effector (also referred to as “control head”) 24, at any one of desired movement angles or axes, to provide several degrees of freedom.
- the steering mechanism may provide up to five degrees of freedom: forward-backward and left-right linear translations, front-back and left-right rotations, and longitudinal needle translation toward the target.
- six degrees of freedom may be provided: forward-backward and left-right linear translations, front-back and left-right rotations, longitudinal needle translation toward the target, and longitudinal needle drilling toward the target.
- the moveable arms 26A and 26B may be configured as piston mechanisms.
- a suitable medical instrument (not shown) may be connected to the end effector 24, either directly or by means of a suitable insertion module.
- the medical instrument may be any suitable instrument capable of being inserted and steered within the body of the subject, to reach a designated target, wherein the control of the operation and movement of the medical instrument is effected by the end effector 24.
- the end effector 24 may include at least a portion of a driving mechanism (also referred to as “insertion mechanism”) configured to advance the medical instrument toward the target in the patient’s body.
- the end effector 24 may be controlled by a suitable control system, as detailed herein.
- the medical instrument may be selected from, but not limited to: a needle, probe (e.g., an ablation probe), port, introducer, catheter (such as a drainage needle catheter), cannula, surgical tool, fluid delivery tool, or any other suitable insertable tool configured to be inserted into a subject’s body for diagnostic and/or therapeutic purposes.
- the medical instrument includes a distal tip at a distal end thereof (i.e., the end which is inserted into the subject’s body).
- the automated medical device 20 may have a plurality of degrees of freedom (DOF) in operating and controlling the movement of the medical instrument along one or more axes.
- the device may have up to six degrees of freedom.
- the device may have at least five degrees of freedom.
- the device may have five degrees of freedom, including two linear translation DOF (in a first axis), a longitudinal linear translation DOF (in a second axis substantially perpendicular to the first axis) and two rotational DOF.
- the device may have forward-backward and left-right linear translations facilitated by two moveable platforms, front-back and left-right rotations facilitated by two moveable arms (e.g., piston mechanism), and longitudinal translation toward the subject’s body facilitated by the insertion mechanism.
- a control system (i.e., processor and/or controller) may control the steering mechanism (including the moveable platforms and the moveable arms) and the insertion mechanism simultaneously, thus enabling non-linear steering of the medical instrument, i.e., enabling the medical instrument to reach the target by following a non-linear trajectory.
- the device may have six degrees of freedom, including the five degrees of freedom described above and, in addition, rotation of the medical instrument about its longitudinal axis (e.g., for drilling purposes). In some embodiments, rotation of the medical instrument about its longitudinal axis may be facilitated by a designated rotation mechanism.
- in such embodiments, the control system (i.e., processor and/or controller) may control the steering mechanism, the insertion mechanism and the rotation mechanism simultaneously.
- the device may further include a base 23, which allows positioning of the device on or in close proximity to the subject’s body.
- the device may be configured for attachment to the subject’s body either directly or via a suitable mounting surface. Attachment of the automated medical device 20 to the mounting surface may be carried out using dedicated latches, such as latches 27A and 27B.
- the device may be couplable to a dedicated arm or base which is secured to the patient’s bed, to a cart positioned adjacent the patient’s bed or to an imaging device (if used), and held on the subject’s body or in close proximity thereto.
- the device may include electronic components and motors (not shown) allowing the controlled operation of the automated medical device 20 in inserting and steering the medical instrument.
- the device may include one or more Printed Circuit Board (PCB) (not shown) and electrical cables/wires (not shown) to provide electrical connection between a controller (not shown) and the motors of the device and other electronic components thereof.
- the controller may be embedded, at least in part, within automated medical device 20.
- the controller may be a separate component.
- the automated medical device 20 may include a power supply (e.g., one or more batteries) (not shown).
- the automated medical device 20 may be configured to communicate wirelessly with the controller and/or processor.
- automated medical device 20 may include one or more sensors, such as a force sensor and/or an acceleration sensor (not shown).
- in some embodiments, the device may include sensor/s for sensing parameters associated with the interaction between a medical instrument and a bodily tissue, e.g., a force sensor, and the sensor data may be utilized for monitoring and/or guiding the insertion of the instrument and/or for initiating imaging.
- the housing 21 is configured to cover and protect, at least partially, the mechanical and/or electronic components of automated medical device 20 from being damaged or otherwise compromised.
- the housing 21 may include at least one adjustable cover, and it may be configured to protect the device from being soiled by dirt, as well as by blood and/or other bodily fluids, thus preventing/minimizing the risk of cross-contamination between patients.
- the device may further include registration elements disposed at specific locations on the automated medical device 20, such as registration elements 29A and 29B, for registration of the device to the image space, in image-guided procedures.
- registration elements may be disposed on the mounting surface to which device 20 may be coupled, either instead or in addition to registration elements disposed on device 20.
- the device may include a CCD/CMOS camera mounted on the device and/or on the device’s frame and/or as a separate apparatus, allowing the collection of visual images and/or videos of the patient’s body during a medical procedure.
- the medical instrument is configured to be removably couplable to the device 20, such that the device can be used repeatedly with new medical instruments.
- the medical instruments are disposable. In some embodiments, the medical instruments are reusable.
- automated medical device 20 is part of a system for inserting and steering a medical instrument in a subject’s body based on a preplanned and, optionally, real-time updated trajectory.
- the system may include the steering and insertion device 20, as disclosed herein, and a control unit (or - “workstation” or “console”) configured to allow control of the operating parameters of device 20.
- the user may operate the device 20 using a pedal or an activation button.
- the system may include a remote control unit, which may enable the user to activate the device 20 from a remote location, such as the control room adjacent the procedure room (e.g., CT suite), a different location at the medical facility or even a location outside the medical facility.
- the user may operate the device using voice commands.
- FIG. 2B shows an exemplary workstation (also referred to as “console”) 25 of an insertion system for inserting a medical instrument toward a target, according to some embodiments.
- the workstation 25 may include a display 252 and a user interface (not shown).
- the user interface may be in the form of buttons, switches, keys, keyboard, computer mouse, joystick, touch-sensitive screen, and the like.
- the monitor and user interface may be two separate components, or they may form together a single component (e.g., in the form of a touch-screen).
- the workstation 25 may include one or more suitable processors (for example, in the form of a PC) and one or more suitable controllers, configured to functionally interact with automated medical device 20, to determine and control the operation thereof.
- the one or more processors may be implemented in the form of a computer (such as a workstation, a server, a PC, a laptop, a tablet, a smartphone or any other processor-based device).
- the workstation 25 may be portable (e.g., by having or being placed on a movable platform 254).
- the one or more processors may be configured to perform one or more of: determine (plan) a trajectory for the medical instrument to reach the target; update the trajectory in real-time; present the planned and/or updated trajectory on the monitor 252; control the movement (insertion/steering) of the medical instrument based on the planned and/or updated trajectory by providing executable instructions (directly or via the one or more controllers) to the device; determine the actual location of a tip of medical instrument by performing required compensation calculations; receive, process and visualize on the monitor images or image-views created from a set of images (between which the user may be able to scroll), control operating parameters, and the like; or any combination thereof.
- the use of AI-based models (e.g., machine-learning and/or deep-learning based models) requires a “training” stage in which collected data is used to create (train) the models.
- the generated (trained) models may later be used for “inference” to obtain specific insights, predictions, alerts and/or recommendations when applied to new data during the clinical procedure or at any later time.
- the insertion and steering system and the system creating (training) the algorithms/models may be separate systems (i.e., each of the systems includes a different set of processors, memory modules, etc.). In some embodiments, the insertion and steering system, and the system creating the algorithms/models may be the same system. In some embodiments, the insertion and steering system, and the system creating the algorithms/models may share one or more resources (such as, processors, memory modules, GUI, and the like). In some embodiments, the insertion and steering system, and the system creating the algorithms/models may be physically and/or functionally associated. Each possibility is a separate embodiment.
- the insertion and steering system and the system utilizing the algorithms/models for inference may be separate systems (i.e., each of the systems includes a different set of processors, memory modules, etc.). In some embodiments, the insertion and steering system, and the system utilizing the algorithms/models for inference may be the same system. In some embodiments, the insertion and steering system, and the system utilizing the algorithms/models for inference may share one or more resources (such as, processors, memory modules, GUI, and the like). In some embodiments, the insertion and steering system, and the system utilizing the algorithms/models for inference may be physically and/or functionally associated. Each possibility is a separate embodiment.
- the device may be configured to operate in conjunction with an imaging system, including, but not limited to: X-Ray, CT, cone beam CT, CT fluoroscopy, MRI, ultrasound, or any other suitable imaging modality, such that inserting and steering of the medical instrument based on a planned and, optionally, real-time updated 2D or 3D trajectory of the medical instrument, is image-guided.
- various types of data may be generated, accumulated and/or collected, for further use and/or manipulation.
- the data may be divided into various types/sets of data, including, for example, data related to operating parameters of the device, data related to clinical procedures, data related to the treated patient, data related to administrative information, and the like, or any combination thereof.
- such datasets may be collected from one or more (i.e., a plurality) of automated medical devices, operating under various circumstances (for example, different procedures, different medical instruments, different patients, different locations and operating staff, etc.), to thereby generate a large database (“big data”) that can be used, utilizing suitable data analysis tools and/or AI-based tools, to ultimately generate algorithms/models that allow performance enhancements, automatic control or affecting control (i.e., by providing recommendations) of the medical devices.
- FIG. 3 schematically shows a trajectory planned using a processor, such as the processor(s) described above, for delivering a medical instrument to a target within a body of a subject, using an automated medical device, such as the automated device of FIG. 2A.
- the planned trajectory may be linear or substantially linear.
- the trajectory may be a non-linear trajectory having any suitable/acceptable degree of curvature.
- the one or more processors may calculate a planned trajectory for the medical instrument to reach the target.
- the planning of the trajectory and the controlled steering of the medical instrument according to the planned trajectory may be based on a model of the medical instrument as a flexible beam having a plurality of virtual springs connected laterally thereto to simulate lateral forces exerted by the tissue on the instrument, thereby calculating the trajectory through the tissue on the basis of the influence of the plurality of virtual springs on the instrument, and utilizing an inverse kinematics solution applied to the virtual springs model to calculate the required motion to be imparted to the instrument to follow the planned trajectory.
- the processor may then provide motion commands to the automated medical device, for example via a controller.
- steering of the medical instrument may be controlled in a closed-loop manner, whereby the processor generates motion commands to the automated medical device and receives feedback regarding the real-time location of the medical instrument (e.g., the distal tip thereof), which is then used for real-time trajectory corrections. For example, if the medical instrument has deviated from the planned trajectory aiming at the target, the processor may calculate the motion to be applied to the robot to reduce the deviation in order to reach the target.
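- As a simplified illustration of such a feedback correction (the gain, units and clipping below are assumptions; the actual correction is computed by the trajectory/kinematics algorithms described herein), a proportional correction step might look like:

```python
# Illustrative closed-loop correction sketch (assumed proportional gain; not the disclosed controller):
# measured tip deviation from the planned trajectory -> lateral correction command.
import numpy as np

def correction_command(planned_point, measured_tip, gain=0.5, max_step_mm=1.0):
    """Proportional correction toward the planned trajectory, clipped to a safe step size."""
    deviation = np.asarray(planned_point, dtype=float) - np.asarray(measured_tip, dtype=float)  # [mm]
    step = gain * deviation
    norm = np.linalg.norm(step)
    if norm > max_step_mm:                        # limit the per-cycle correction for safety
        step = step * (max_step_mm / norm)
    return step

# Example feedback iteration: tip reported 2 mm off the planned trajectory point.
print(correction_command(planned_point=[10.0, 0.0], measured_tip=[10.0, 2.0]))
```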
- the real-time location of the medical instrument and/or the corrections may be calculated and/or applied using data-analysis models/algorithms.
- certain deviations of the medical instrument from the planned trajectory, for example deviations which exceed a predetermined threshold, may require recalculation of the trajectory for the remainder of the procedure, as described in further detail hereinbelow.
- a trajectory 32 is planned between an entry point 36 and a target 38.
- the planning of the trajectory 32 may take into account various variables, including, but not limited to: the type of the medical instrument to be used and its characteristics, the dimensions of the medical instrument (e.g., length, gauge), the type of imaging modality (such as, CT, CBCT, MRI, X-Ray, CT fluoroscopy, ultrasound and the like), the tissues through which the medical instrument is to be inserted, the location of the target, the size of the target, the insertion point, the angle of insertion (relative to one or more axes), milestone points (“secondary targets” through which the medical instrument should pass) and the like, or any combination thereof.
- At least one of the milestone points may be a pivot point, i.e., a predefined point along the trajectory in which the deflection of the medical instrument is prevented or minimized, to maintain minimal pressure on the tissue (even if this results in a larger deflection of the instrument in other parts of the trajectory).
- the planned trajectory is an optimal trajectory based on one or more of these parameters. Further taken into account in determining the trajectory may be various obstacles 39A-39C, which may be identified along the path and which should be avoided, to prevent damage to neighboring tissues and/or to the medical instrument.
- safety margins 34 may be marked along the planned trajectory 32, to ensure a minimal distance between the trajectory 32 and potential obstacles en route.
- the width of the safety margins may be symmetrical in relation to the trajectory 32.
- the width of the safety margins 34 may be asymmetrical in relation to the trajectory 32.
- the width of the safety margins 34 may be preprogrammed.
- the width of the safety margins 34 may be automatically set, or recommended to the user, by the processor, based on data obtained from previous procedures using a data analysis algorithm.
- the width of the safety margins 34 may be determined and/or adjusted by the user.
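- A margin check of this kind can be illustrated as follows (the margin value and geometry are assumed; this is not the disclosed planning algorithm):

```python
# Illustrative sketch (assumed margin value): verify that every trajectory point keeps at least
# the configured safety margin from marked obstacle points.
import numpy as np
from scipy.spatial.distance import cdist

trajectory = np.column_stack([np.linspace(0, 80, 100), np.linspace(0, 40, 100)])  # 2D path [mm]
obstacles = np.array([[30.0, 18.0], [55.0, 25.0]])                                # marked obstacles [mm]
safety_margin_mm = 5.0

distances = cdist(trajectory, obstacles).min(axis=1)   # nearest-obstacle distance per trajectory point
violations = np.flatnonzero(distances < safety_margin_mm)
print("margin violated at trajectory indices:", violations)
```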
- also shown in FIG. 3 is an end of an end effector 30 of the exemplary automated medical device, to which the medical instrument (not shown in FIG. 3) is coupled, as virtually displayed on the monitor, to indicate its position and orientation.
- the trajectory 32 shown in FIG. 3 is a planar trajectory (i.e., two-dimensional).
- steering of the medical instrument is carried out according to a planar trajectory, for example trajectory 32.
- the calculated planar trajectory may be superpositioned with one or more additional planar trajectories, to form a three-dimensional (3D) trajectory.
- additional planar trajectories may be planned on one or more different planes, which may be perpendicular to the plane of the first planar trajectory (e.g., trajectory 32) or otherwise angled relative thereto.
- the 3D trajectory may include any type of trajectory, including a linear trajectory or a non-linear trajectory.
- the steering of the medical instrument is carried out in a 3D space, wherein the motion instructions are determined on each of the planes of the superpositioned planar trajectories, and are then superpositioned to form the steering in the three-dimensional space.
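- The superposition described above can be illustrated with a toy example (the coordinate convention, in which both planar trajectories share the longitudinal insertion axis z, is an assumption for illustration):

```python
# Illustrative sketch (assumed coordinate convention): superposition of two planar trajectories,
# each sharing the longitudinal insertion axis z, into a single 3D trajectory.
import numpy as np

z = np.linspace(0, 100, 200)                 # insertion depth [mm]
x_of_z = 5.0 * np.sin(np.pi * z / 100)       # planar trajectory in the x-z plane (lateral deflection)
y_of_z = 2.0 * (z / 100) ** 2                # planar trajectory in the y-z plane

trajectory_3d = np.column_stack([x_of_z, y_of_z, z])   # (200, 3): the superpositioned 3D path
print(trajectory_3d.shape)
```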
- the data/parameters/values thus obtained during the steering of the medical instrument by the automated medical device can be used as data/parameters/values for the generation/training and/or utilization/inference of the data-analysis model(s)/algorithm(s).
- FIGS. 4A-4D demonstrate real-time updating of a trajectory and steering a medical instrument according thereto, according to some embodiments.
- the exemplary planned and updated trajectories presented may be calculated using a processor executing the models and methods disclosed herein, such as the processor(s) of the insertion system described in FIG. 2B, and the insertion and steering of the medical instrument toward the predicted target location according to the planned and updated trajectories may be executed using an automated medical device, such as the automated device of FIG. 2A.
- the trajectories shown in FIGS. 4A-4D are shown on CT image-views; however, it can be appreciated that the planning can be carried out similarly on images obtained from other imaging systems, such as ultrasound, MRI and the like.
- FIG. 4A shows an automated medical device 150 mounted on a subject’s body (a cross-section of which is shown in FIGS. 4A-4D) and a planned (initial) trajectory 160 from an entry point (not shown) toward the initial position of a target 162.
- checkpoints along the trajectory may be set.
- Checkpoints may be used to pause the insertion of the automated medical device 150 and initiate imaging of the target (region of interest), to verify the position of a medical instrument 170 (specifically, in order to verify that the medical instrument (e.g., the distal tip thereof) follows the planned trajectory), to monitor the location of marked obstacles and/or identify previously unmarked obstacles along the trajectory, and to verify the target’s position, such that recalculation of the trajectory may be initiated, if the user chooses to do so, before advancing the instrument to the next checkpoint/the target.
- the checkpoints may be manually set by the user, or they may be automatically set or recommended by the processor, as described in further detail hereinbelow.
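- The checkpoint flow can be sketched as follows; the imaged tip position and the re-aiming step below are trivial stand-ins for illustration only (imaging, tip localization and trajectory recalculation are performed by the system components described herein):

```python
# Flow sketch only: the imaged tip position and the re-aiming step are stand-ins,
# not a real device or imaging API; they only illustrate the pause/verify/recalculate logic.
import numpy as np

def run_insertion(checkpoints, target, tolerance_mm=2.0):
    rng = np.random.default_rng(1)
    points = [np.asarray(p, dtype=float) for p in checkpoints]
    for i, checkpoint in enumerate(points):
        # advance to the checkpoint, pause, and "image" the tip (stand-in: checkpoint + noise)
        tip = checkpoint + rng.normal(scale=0.5, size=2)
        if np.linalg.norm(tip - checkpoint) > tolerance_mm:
            # stand-in recalculation: re-aim the remaining checkpoints from the actual tip to the target
            remaining = len(points) - i - 1
            points[i + 1:] = [tip + (np.asarray(target) - tip) * s
                              for s in np.linspace(0.5, 1.0, remaining)]
    return points

print(run_insertion([[0.0, 0.0], [20.0, 10.0], [40.0, 20.0]], target=[60.0, 30.0]))
```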
- the planned trajectory 160 may be a linear or substantially linear trajectory. In some embodiments, if necessitated (for example, due to obstacles), the planned trajectory may be a non-linear trajectory. As further detailed below, the planned trajectory may be updated in real-time based on the real-time position of the medical instrument (for example, the distal tip thereof) and/or the real-time position of the target and/or the real-time positions of obstacle/s, or based on tissue and target movement predictions generated by one or more machine learning models. The initial target location may be obtained manually (i.e., marked by the user) or automatically (i.e., determined by the processor).
- FIG. 4B shows medical instrument 170 being inserted into the subject’s body, along the planned trajectory 160.
- the target has moved from its initial position to a new (updated) position 162’ as a result of, for example but not limited to, the advancement of the medical instrument within the tissue, respiration cycle behavior and/or patient movements, as detailed herein.
- the determination of the real-time location of the target may be performed manually by the user, i.e., the user visually identifies the target in images (continuously or manually or automatically initiated, for example when the instrument reaches a checkpoint), and marks the new target position on the image using the GUI.
- the determination of the real-time target location may be performed automatically by a processor using image processing techniques and/or data-analysis algorithm(s).
- the trajectory may be updated based on the determined real-time position of the target.
- the subsequent movement of the target is predicted, for example using a target movement model, and the trajectory may then be updated based on the predicted location (e.g., the end-point location) of the target.
- the updating of the trajectory based on the predicted location of the target may be performed automatically, by utilizing one or more of the Al models, including the respiration behavior model, tissue movement model, target movement model, trajectory model and any suitable sub-model (or individual model).
- recalculation of the trajectory may also be required if, for example, an obstacle is identified along the trajectory.
- an obstacle may be an obstacle which was marked (manually or automatically) prior to the calculation of the planned trajectory, but tissue movement (resulting, for example and not limited to, from the advancement of the instrument within the tissue, respiration cycle behavior, or patient movements) caused the obstacle to move such that it entered the planned path.
- the obstacle may be a new obstacle, i.e., an obstacle which was not visible in the image (or set of images) based upon which the planned trajectory was calculated, and became visible during the insertion procedure.
- the user may be prompted to initiate an update (recalculation) of the trajectory.
- recalculation of the trajectory, if required, is executed automatically by the processor, and the insertion of the medical instrument automatically continues according to the updated trajectory.
- recalculation of the trajectory, if required, is executed automatically by the processor; however, the user is prompted to confirm the recalculated trajectory before advancement of the medical instrument (e.g., to the next checkpoint or to the target) according to the updated trajectory can be resumed.
- an updated trajectory 160' may be calculated based on the predicted end-point location of the target 162”, to facilitate the medical instrument 170 reaching the target at its end-point location.
- the trajectory may be updated based on the real-time location of the target. In such embodiments, the trajectory may first be updated so as to reach the target at its position 162’, and then updated again so as to reach the target at its end-point location 162” after movement of the target from position 162’ to end-point location 162” has been detected.
- the recalculation of the trajectory e.g., using a learning-based model, due to movement of the target, resulted in the medical instrument 170, specifically the distal tip of medical instrument 170, following a non-linear trajectory to accurately reach the target.
- FIG. 4D summarizes the target movement during the procedure shown in FIGS. 4A-4C, from an initial target location 162 to an updated target location 162’ and finally to an end-point target location 162”.
- the movement of the target during the procedure may be predicted by a target movement model, which may be further used (optionally with additional models, such as, breathing behavior model and/or tissue movement model) to update the trajectory utilizing the trajectory model, to thereby facilitate the medical instrument 170 reaching the target at its endpoint location in an optimal manner, as detailed herein.
- also shown in FIG. 4D are the planned trajectory 160 and the updated trajectory 160’, which allowed the medical instrument 170 to reach the moving target without having to remove and re-insert the instrument.
- the target, insertion point and, optionally, obstacle/s may be marked manually by the user.
- the processor of the insertion system (or of a separate system) may be configured to identify and mark at least one of the target, the insertion point and the obstacle/s, and the user may, optionally, be prompted to confirm or adjust the processor’s proposed markings.
- the target and/or obstacle/s may be identified using known image processing techniques and/or data-analysis models/algorithms, based on data obtained from previous procedures.
- the insertion point may be suggested based solely on the obtained images, or, alternatively or additionally, on data obtained from previous procedures using data-analysis models/algorithms.
- FIG. 5 is a flowchart 60 of an exemplary method for planning a medical instrument trajectory and steering the medical instrument according to the planned trajectory, and utilizing respiration behavior analyzed using data-analysis algorithm(s) to trigger scanning and instrument insertion at a specific time or state/phase of the respiration cycle, according to some embodiments.
- respiration behavior of the patient is analyzed using a data analysis algorithm.
- the respiration behavior data analysis algorithm may be used to predict a future segment of the patient’s respiration, which allows triggering of various consequent medical actions, including, for example, imaging and/or insertion and/or steering of a medical tool, such that the triggering is performed at a specific time (time point or time range) during the respiration cycle, thereby ensuring synchronization between the execution of different actions prior or during the medical procedure and the respiration cycle.
- the respiration behavior is analyzed prior to commencement of the medical procedure, and a respiration behavior baseline is established for the specific patient.
- the patient may be requested to cough, clear his/her throat, perform a sudden movement, etc., so as to analyze how the patient’s breathing activity is influenced by such events.
- the breathing behavior of the patient may change during the course of the procedure, for example, due to the gradual effect of sedation on the patient, due to a change in the patient’s stress levels.
- the respiration of the patient is continuously monitored throughout the procedure (e.g., by means of a respiration sensor), and the respiration activity is continuously analyzed and taken into consideration in subsequent steps of the disclosed method.
- Time ttrig of the respiration cycle may be any desired or suitable time point during the respiration cycle, wherein ttrig is a specific respiration time or state of the respiration cycle.
- ttrig may be determined to be at the peak region/point of the inhalation phase, at a specific point/region of the exhalation phase, at the beginning of the pause duration between two consecutive respiration cycles (also referred to as “Lull” or “automatic pause”), and the like, or any other suitable region/point/state during the respiration cycle.
- ttrig is determined to be the start/onset of a triggering event (e.g., a lull period, but not limited thereto).
- the analysis of the patient’s respiration activity may also assist the user (e.g., physician, technician) in determining the imaging duration and/or the scan dose, and may thus enable reducing the radiation dose to which the patient and the medical staff are exposed.
- triggering planning imaging may refer to automatic initiation of the scan, for example, via direct interface between a processor of the automated medical system and the imaging system. In other embodiments, triggering planning imaging may refer to generating an alert/instruction to the user (e.g., physician) to manually initiate the imaging.
- the triggering may be in the form of a countdown (for example, a countdown from 5, a countdown from 4, a countdown from 3, etc.), so as to allow the user to be prepared to timely initiate the imaging and minimize any possible delay.
- a radiation sensor/detector may be used to verify the exact start and end points of actual image acquisition to allow proper synchronization between the scan and the respiration cycle while minimizing possible errors due to scanner latencies, scanner speed, human operator reaction time, etc.
- a scanner-specific calibration step may be required to configure the system to accurately perform the described flow while taking into consideration the exact characteristics of the imaging system used during the procedure, such as imaging system latencies, imaging speed, etc.
- a triggering deviation error ε (epsilon) may be defined and used during the triggering prediction process to allow some variability of the trigger location on the time axis, which will comply with ttrig ± ε.
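- To make the tolerance concrete, the minimal Python sketch below checks whether a candidate trigger time complies with ttrig ± ε; the function and variable names are illustrative assumptions, not part of the disclosed system.

```python
# Minimal sketch: accept a candidate trigger only if it falls within t_trig +/- epsilon.
# All names (t_trig, epsilon, candidate_time) are illustrative, not from the source.

def trigger_is_valid(candidate_time: float, t_trig: float, epsilon: float) -> bool:
    """Return True if the candidate trigger time complies with t_trig +/- epsilon."""
    return abs(candidate_time - t_trig) <= epsilon

# Example: a trigger predicted 80 ms after the nominal t_trig, with a 100 ms tolerance.
print(trigger_is_valid(candidate_time=12.58, t_trig=12.50, epsilon=0.10))  # True
```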
- a trajectory is calculated for the medical instrument from an entry point to a target in the patient’s body, as detailed above (for example, in FIG. 3 and FIGs. 4A-4D), using, inter alia, the planning scan.
- data-analysis algorithm(s) such as learning-based models, may be used to determine or recommend to the user one or more of the location of the target, an optimal entry point position, location of “no- fly” zones and an optimal trajectory for the procedure.
- checkpoints may be set along the trajectory.
- Checkpoints may be used to pause the insertion/steering of the medical instrument and initiate imaging of the region of interest, to verify the position of the medical instrument (specifically, in order to verify that the instrument (e.g., the distal tip thereof) follows the planned trajectory), to monitor the location of the marked obstacles and/or identify previously unmarked obstacles along the trajectory, and to verify the target’s position, such that recalculation of the trajectory may be initiated, if the user chooses to do so, before advancing the medical instrument to the next checkpoint/to the target.
- the checkpoints may be manually set by the user, or they may be automatically set or recommended by the processor.
- the checkpoints may be spaced apart (including the first checkpoint from the entry point and the last checkpoint from the target) at an essentially similar distance along the trajectory, for example every 20 millimeters (mm), every 30 millimeters, every 40 millimeters, or any other appropriate distance.
- the characteristics of the patient’s breathing behavior, including, for example, the average duration of a single breathing cycle and the average duration of a triggering event (e.g., the lull period), may be used in the determination (or recommendation) of the number and location of checkpoints along the trajectory.
- the processor may recommend to the user to set the checkpoints 15mm or 20mm apart, so that the instrument can be advanced from one checkpoint to the next during a single lull period.
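- As a rough illustration of this checkpoint-spacing logic, the sketch below derives a spacing from an assumed average lull duration and insertion speed; the safety factor and all numeric values are hypothetical.

```python
# Illustrative sketch (assumed parameters): recommend a checkpoint spacing such that the
# instrument can traverse one checkpoint-to-checkpoint segment within a single lull
# (gating window), given the average lull duration and the planned insertion speed.

def recommended_checkpoint_spacing_mm(avg_lull_s: float,
                                      insertion_speed_mm_s: float,
                                      safety_factor: float = 0.8) -> float:
    """Largest spacing (mm) coverable in one lull, derated by a safety factor."""
    return avg_lull_s * insertion_speed_mm_s * safety_factor

# Example: a 2.5 s lull and a 10 mm/s insertion speed give a ~20 mm spacing.
print(round(recommended_checkpoint_spacing_mm(2.5, 10.0), 1))
```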
- registration imaging (for example, a registration CT scan) is triggered at time ttrig to register the automated medical device, to which the medical instrument is coupled (as shown in FIGS. 5C-5D), to the image space.
- the automated medical device may include registration elements, which are visible in generated images, disposed at specific locations on the device. Thereby, synchronization of distinct imaging events is achieved as they are triggered at the same time (i.e., time ttrig during the breathing cycle).
- insertion/steering of the medical instrument is triggered at time ttrig, to thereby ensure that the insertion/steering of the medical instrument and corresponding imaging (e.g., scan) are executed at the same point/phase of the breathing cycle (i.e., time ttrig), thus ensuring that the state of the anatomical volume during the insertion/steering of the medical instrument matches the state of the anatomical volume captured during the planning and/or registration imaging.
- the medical instrument is advanced to the next checkpoint during a single triggering event (also referred to as “gating window”).
- the characteristics of the patient’s breathing behavior may be taken into consideration in the insertion/steering of the instrument. For example, if the duration of the triggering event does not allow advancement of the instrument to the next checkpoint in a single triggering event, the insertion steps may be split into two or more segments, such that the insertion segments are executed during consecutive triggering events (i.e., initiated at consecutive ttrig occurrences). Alternatively, the insertion speed may be increased in certain insertion steps, in order to enable the instrument to reach the next checkpoint during a single triggering event.
- the characteristics of the patient’s breathing behavior may be taken into consideration, for example by increasing/decreasing the insertion speed or splitting the insertion step (e.g., distance between consecutive checkpoints) into two or more segments, when the instrument is to be advanced in sensitive areas or conditions, such as transition between tissue layers, insertion into or in close proximity to the pleura, insertion when there is a detected risk of clinical complications such as pneumothorax or bleeding, the final insertion step before reaching a small target, etc.
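- A minimal sketch of the segment-splitting idea described above follows, assuming a constant insertion speed and a fixed gating-window duration; the function name and numbers are illustrative only.

```python
import math

# Illustrative sketch (assumed parameters): if the distance to the next checkpoint cannot
# be covered within one gating window, split the step into the smallest number of equal
# segments that each fit within a single triggering event.

def split_insertion_step(step_mm: float, speed_mm_s: float, gating_window_s: float):
    """Return a list of segment lengths (mm), each traversable in one gating window."""
    max_per_window_mm = speed_mm_s * gating_window_s
    n_segments = max(1, math.ceil(step_mm / max_per_window_mm))
    return [step_mm / n_segments] * n_segments

# Example: a 30 mm step at 8 mm/s with a 2 s gating window -> two 15 mm segments.
print(split_insertion_step(30.0, 8.0, 2.0))
```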
- a desired triggering confidence level may be defined by the user for the entire procedure and/or for specific stages of the procedure and/or for specific insertion steps/steering legs according to the procedure planning or based on decisions made in real-time during the procedure.
- Such user-specific triggering confidence settings may be useful in scenarios where a medical operation includes a relatively high risk or is relatively complex and possible errors or deviations in respiration triggering should be minimized or avoided completely.
- in such scenarios, a high triggering confidence level may be required before any possible automated operation (e.g., steering of the medical instrument) is triggered; in less critical scenarios, the user can decrease the required triggering confidence level and thereby possibly decrease procedure duration, procedure cost, etc.
- triggering insertion/steering of the medical instrument may refer to automatic activation of the automated device.
- triggering insertion/steering of the medical instrument may refer to generating an instruction/alert to the user to manually activate the automated device, either via the workstation (e.g., by depressing a pedal) or via a remote control unit (e.g., by pressing or rotating an activation button).
- the triggering may be in the form of a countdown, so as to allow the user to be prepared to timely activate the automated device and minimize any possible delay.
- confirmation imaging (e.g., CT scan) is triggered at time ttrig.
- the confirmation imaging may be triggered upon the instrument reaching a checkpoint set by the user or the processor along the trajectory.
- the insertion/steering of the instrument may be executed continuously, and the confirmation imaging may be triggered at predetermined time(s) and/or expected instrument positions along the trajectory.
- triggering confirmation imaging may refer to automatic initiation of the imaging, for example, via direct interface between a processor of the automated medical device and the imaging system.
- triggering confirmation imaging may refer to generating an alert/instruction to the user to manually initiate the imaging.
- the triggering may be in the form of a countdown, as described above, so as to allow the user to be prepared to timely initiate the imaging and minimize any possible delay.
- a radiation sensor/detector may be used to verify proper synchronization between the scan and the respiration cycle.
- the real-time (actual) position of the medical instrument (e.g., of the distal tip thereof) and the target may be determined, as detailed herein.
- various other real-time parameters may be determined, including, for example, reaching of checkpoints, the position of known or new obstacles/no-fly zones, and the like.
- at step 608, based on the determined real-time positions of the target and the medical instrument, it is determined whether the medical instrument has reached the target. If it is determined that the medical instrument has reached the target, the insertion/steering process ends (step 609). If it is determined that the medical instrument has not reached the target, then, at step 610, the trajectory is updated, if required due to target movement, and steps 605-608 are repeated until the target is reached.
- the method disclosed in FIG. 5 allows synchronization of various steps, including imaging and inserting/steering of the medical instrument, with the patient’s breathing cycle, by triggering the execution thereof at a specific time/state of the respiration cycle.
- the triggered medical operation/action (such as, for example, triggering a scan or triggering insertion/steering of the medical instrument) may be performed automatically.
- triggering the medical operation/action may include issuing instructions or alerts to the user to execute the action.
- various data sets related to respiration of the subject(s) may be obtained.
- the various obtained datasets may be used for the training, construction and/or validation of respiration related algorithm(s) or learning-based models, as detailed herein.
- the respiration related datasets or data parameters or values may be obtained from one or more sensor types, including, for example, but not limited to: respiration sensor, stretch sensor, pressure sensor, accelerometer, ECG sensor, motion sensor positioned on the automated device and/or the patient, etc.
- Data may further be obtained from optical devices (e.g., camera, laser), real-time or semi real-time medical imaging, etc.
- the datasets may include such data parameters or values as, but not limited to: voltage, pressure, stretch level, power, acceleration, speed, coordinates, frequency, etc.
- the datasets may further include patient-specific data parameters or values, including, for example, age, gender, race, relevant medical history, vital signs before/after/during the procedure, body dimensions (height, weight, BMI, circumference, etc.), current medical condition, pregnancy, smoking habits, demographic data, and the like, or any combination thereof.
- a training module may be used to train an Al model (e.g., machine learning (ML) or deep learning (DL)-based model) to be used in an inference module, based on the datasets and/or the features extracted therefrom and/or additional metadata, in the form of annotations (e.g., labels, bounding-boxes, segmentation maps, visual location markings, etc.).
- the training module may constitute part of the inference module or it may be a separate module.
- a training process (step) may precede the inference process (step).
- the training process may be on-going and may be used to update/validate/enhance the inference step (see “active-learning” approach described herein).
- the inference module and/or the training module may be located on a local server (“on premise”), a remote server (such as, a server farm or a cloud-based server) or on a computer associated with the automated medical device.
- the training module and the inference module may be implemented using separate computational resources.
- the training module may be located on a server (local or remote) and the inference module may be located on a local computational resource (computer), or vice versa.
- both the training module and the inference module may be implemented using common computational resources, i.e., processors and memory components shared therebetween.
- the inference module and/or the training module may be located or associated with a controller (or steering system) of an automated medical device.
- a plurality of inference modules and/or learning modules (each associated with a medical device or a group of medical devices), may interact to share information therebetween, for example, utilizing a communication network.
- the model(s) may be updated periodically (for example, every 1-36 weeks, every 1-12 months, etc.).
- the model(s) may be updated based on other business logic.
- in some embodiments, the model(s) may be executed by the processor(s) of the automated medical device (e.g., the processor of the insertion system).
- the learning module may be used to construct a suitable algorithm (such as, a classification algorithm), by establishing relations/connections/patterns/correspondences/correlations between one or more variables of the primary datasets and/or between parameters derived therefrom.
- the learning may be supervised learning (e.g., classification, object detection, segmentation and the like).
- the learning may be unsupervised learning (e.g., clustering, anomaly detection, dimensionality reduction and the like).
- the learning may be reinforcement learning.
- the learning may use a self-learning approach.
- the learning process is automatic. In some embodiments, the learning process is semi-automatic. In some embodiments, the learning is manually supervised. In some embodiments, at least some variables of the learning process may be manually supervised/confirmed, for example, by a user (such as a physician).
- the training stage may be an offline process, during which a database of annotated training data is assembled and used for the creation of data-analysis model(s)/algorithm(s), which may then be used in the inference stage. In some embodiments, the training stage may be performed "online", as detailed herein.
- the generated algorithm may essentially constitute at least any suitable specialized software (including, for example, but not limited to: image recognition and analysis software, statistical analysis software, regression algorithms (linear, non-linear, or logistic etc.), and the like).
- the generated algorithm may be implemented using an artificial neural network (ANN), such as a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN), long short-term memory (LSTM), auto-encoder (AE), generative adversarial network (GAN), Reinforcement-Learning (RL) and the like, decision tree (DT), random forest (RF), decision graph, association rule learning, support vector machine (SVM), boosting algorithms, linear regression, logistic regression, clustering algorithms, inductive logic programming, Bayesian networks, instance-based learning, manifold learning, sub-space learning, and the like, or any combination thereof.
- the algorithm or model may be generated using machine learning tools, data wrangling tools, deep learning tools, and, more generally, data-analysis tools.
- a training module 70 may include two main hardware components/units: at least one memory 72 and at least one processing unit 74, which are functionally and/or physically associated. Training module 70 may be configured to train a model based on data.
- Memory 72 may include any type of accessible memory (volatile and/or non-volatile), configured to receive, store and/or provide various types of data, to be processed by processing unit 74, which may include any type of at least one suitable processor, as detailed below.
- the memory and the processing units may be functionally or physically integrated, for example, in the form of a Static Random Access Memory (SRAM) array.
- the memory is a non-volatile memory having stored therein executable instructions (for example, in the form of a code, service, executable program and/or a model file).
- the memory unit 72 may be configured to receive, store and/or provide various types of data values or parameters related to the data.
- Memory 72 may store or accept raw (primary) data 722 that has been collected, as detailed herein. Additionally, metadata 724, related to the raw data 722 may also be collected/stored in memory 72.
- Such metadata 724 may include a variety of parameters/values related to the raw data, such as, but not limited to: the specific device which was used to provide the data, the time the data was obtained, the place the data was obtained (such as a specific procedure/operating room, specific institution, etc.), and the like.
- Memory 72 may further be configured to store/collect data annotations (e.g., labels) 726.
- the collected data may require additional steps for the generation of data annotations that will be used for the generation of the machine-learning or deep-learning models or other statistical or predictive algorithms as disclosed herein.
- such data annotations may include labels describing the clinical procedure’s characteristics, the automated device’s operation and computer-vision related annotations, such as segmentation masks, target marking, organs and tissues marking, and the like.
- the different annotations may be generated in an “online” manner, which is performed while the data is being collected, or in an “offline” manner, which is performed at a later time after sufficient data has been collected.
- the memory 72 may further include features database 728.
- the features database 728 may include a database (“store”) of previously known or generated features that may be used in the training/generation of the models.
- the memory 72 of training module 70 may further, optionally, include pre-trained models 729.
- the pre-trained models 729 include existing pre-trained algorithms which may be used to automatically annotate a portion of the data and/or to ease training of new models using “transfer-learning” methods.
- processing unit 74 of training module 70 may include at least one processor, configured to process the data and allow/provide model training by various processing steps (detailed in FIG. 6B).
- processing unit 74 may be configured at least to perform pre-processing of the data 742.
- Pre-processing of the data may include actions for preparing the data stored in memory 72 for downstream processing, such as, but not limited to, checking for and handling null values, imputation, standardization, handling categorical variables, one-hot encoding, resampling, scaling, filtering, outlier removal etc.
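- As an illustration of such pre-processing, the following sketch builds a small scikit-learn pipeline that imputes missing values, standardizes numeric columns and one-hot encodes a categorical column; the column names and data are hypothetical stand-ins, not the actual schema.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical column names; the actual raw-data schema is not specified in the source.
numeric_cols = ["breathing_rate", "amplitude"]
categorical_cols = ["sensor_type"]

preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric_cols),
    ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                      ("onehot", OneHotEncoder(handle_unknown="ignore"))]), categorical_cols),
])

df = pd.DataFrame({"breathing_rate": [14.0, np.nan, 18.0],
                   "amplitude": [1.2, 0.9, np.nan],
                   "sensor_type": ["pressure", "stretch", "pressure"]})
print(preprocess.fit_transform(df))
```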
- Processing unit 74 may further, optionally, be configured to perform feature extraction 744, in order to reduce the raw data dimension and/or add informative domain-knowledge into the training process and allow the use of additional machine-learning algorithms not suitable for training on raw data and/or optimization of existing or new models by training them on both the raw data and the extracted features.
- Feature extraction may be executed using dimensionality reduction methods, for example, Principal Components Analysis (PCA), Independent Component Analysis (ICA), Linear Discriminant Analysis (LDA), Locally Linear Embedding (LLE), t-distributed Stochastic Neighbor Embedding (t-SNE), Unified Manifold Approximation and Projection (UMAP) and/or Autoencoders, etc.
- Feature extraction may also be executed using feature engineering methods in which mathematical tools are used to extract domain-knowledge features from the raw data, for example: statistical features, such as mean, variance, ratio, frequency, etc., and/or visual features, such as dimension or shape of certain objects in an image.
- Another optional technique which may be executed by the processing unit 74 to reduce the number of features in the dataset is feature selection, in which the importance of the existing features in the dataset is ranked and the less important features are discarded (i.e., no new features are created).
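- The sketch below contrasts, on synthetic data, dimensionality reduction (PCA, which creates new components) with feature selection (which only ranks and keeps existing features); the sample sizes, labels and chosen k are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif

# Random data stands in for pre-processed respiration-related features (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))          # 200 samples, 12 raw features
y = rng.integers(0, 2, size=200)        # hypothetical binary labels

# Dimensionality reduction: project onto the components explaining most of the variance.
X_pca = PCA(n_components=5).fit_transform(X)

# Feature selection: rank existing features and keep the top k (no new features created).
X_selected = SelectKBest(score_func=f_classif, k=5).fit_transform(X, y)

print(X_pca.shape, X_selected.shape)    # (200, 5) (200, 5)
```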
- Processing unit 74 may further be configured to execute model training 746.
- the collected datasets may first require an Extract-Transform-Load (ETL) or ELT process that may be used to (1) Extract the data from a single or multiple data sources (including, but not limited to, the automated medical device itself, Picture Archiving and Communication System (PACS), Radiology Information System (RIS), imaging device, healthcare facility’s Electronic Health Record (EHR) system, etc.), (2) Transform the data by applying one or more of the following steps: handling missing values, checking for duplicates, converting data types as needed, encoding values, joining data from multiple sources, aggregating data, translating coded values, etc., and (3) Load the transformed data into the data store used for training.
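- A schematic ETL sketch follows; since the actual PACS/RIS/EHR interfaces are not specified here, the sources are stand-in, in-memory tables and the column names are purely hypothetical.

```python
import pandas as pd

def extract():
    """Extract: gather raw records from multiple (here, in-memory stand-in) sources."""
    device_log = pd.DataFrame({"procedure_id": [1, 1, 2], "insertion_depth_mm": [10, 20, 15]})
    ehr = pd.DataFrame({"procedure_id": [1, 2], "age": [64, 57]})
    return device_log, ehr

def transform(device_log, ehr):
    """Transform: drop duplicates, join the sources and handle missing values."""
    df = device_log.drop_duplicates().merge(ehr, on="procedure_id", how="left")
    return df.dropna()

def load(df, store):
    """Load: append the cleaned rows to the training data store (a list, for the sketch)."""
    store.append(df)
    return store

training_store = load(transform(*extract()), [])
print(training_store[0])
```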
- the ETL process may be automatic and triggered with every new data collected. In other embodiments, the ETL process may be triggered at a predefined schedule, such as once a day or once a week, for example. In some embodiments, another business logic may be used to decide when to trigger the ETL process.
- the data may be cleaned to ensure high quality data by, for example, removal of duplicates, removal or modification of incorrect and/or incomplete and/or irrelevant data samples, etc.
- the data is annotated.
- the different annotations may be generated in an “online” manner, which is performed while the data is being collected, or in an “offline” manner, which is performed at a later time after sufficient data has been collected.
- the data annotations may be generated automatically using an “active learning” approach, in which existing pre-trained algorithms are used to automatically annotate a portion of the data.
- the data annotations may be generated using a partially automated approach with “human in the loop”, i.e., human approval or human annotations will be required in cases where the annotation confidence is low, or per other business logic decision or metric.
- the data annotations may be generated in a manual approach, i.e., using human annotators to generate the required annotations using convenient annotation tools.
- the annotated data is pre-processed, for example, by one or more of checking for and handling null values, imputation, standardization, handling categorical variables, one-hot encoding, resampling, scaling, filtering, outlier removal and other data manipulations, to prepare the data for further processing.
- the data is split into training data, which will be used to train the model, and testing data, which will not be introduced into the model during model training, so it can be used as “hold-out” data to test the final trained model before deployment.
- the training data may be further divided into a “train set” and a “validation set”, where the train set is used to train the model and the validation set is used to validate the model’s performance on unseen data, to allow optimization/fine-tuning of the training process’ configuration/hyperparameters during the training process.
- the training process may include the use of Cross-Validation (CV) methods, in which the training data is divided into a “train set” and a “validation set”; however, upon training completion, the training process may be repeated multiple times with different selections of the “train set” and “validation set” out of the original training data.
- the use of CV may allow a better validation of the model during the training process as the model is being validated against different selections of validation data.
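- The following sketch illustrates the hold-out split and cross-validation described above on synthetic data; the split ratio, fold count and model choice are assumptions made only for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, cross_val_score, train_test_split

# Synthetic stand-in data; real features/labels would come from the annotated datasets.
rng = np.random.default_rng(1)
X, y = rng.normal(size=(300, 8)), rng.integers(0, 2, size=300)

# Hold out a test set that is never seen during training or hyperparameter tuning.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)

# Cross-validation: repeatedly re-split the training data into train/validation folds.
model = RandomForestClassifier(n_estimators=50, random_state=1)
cv_scores = cross_val_score(model, X_train, y_train,
                            cv=KFold(n_splits=5, shuffle=True, random_state=1))
print(cv_scores.mean())
```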
- data augmentation is performed. Data augmentation may include, for example, generation of additional data from/based on the collected or annotated data.
- exemplary augmentations that may be used for image data are: rotation, flip, noise addition, color distribution change, crop, stretch, etc. Augmentations may also be generated for other types of data, for example by adding noise or applying a variety of mathematical operations. In some embodiments, augmentation may be used to generate synthetic data samples using synthetic data generation approaches, such as distribution based, Monte-Carlo, Variational Autoencoder (VAE), Generative-Adversarial-Network (GAN), etc.
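- A minimal, numpy-only augmentation sketch for image data is shown below; a production pipeline might use a dedicated augmentation or synthetic-data library, and the example image is random stand-in data.

```python
import numpy as np

rng = np.random.default_rng(2)
image = rng.random((64, 64))  # stand-in for a collected image

augmented = [
    np.rot90(image),                                   # rotation (90 degrees)
    np.fliplr(image),                                  # horizontal flip
    image + rng.normal(scale=0.05, size=image.shape),  # additive Gaussian noise
    image[8:56, 8:56],                                 # crop
]
print([a.shape for a in augmented])
```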
- the model is trained, wherein the training may be performed “from scratch” (i.e., an initial/primary model with initialized weights is trained based on all relevant data) and/or utilizing existing pre-trained models as starting points and training them only on new data.
- Model validation may include evaluation of different model performance metrics, such as accuracy, precision, recall, F1 score, AUC-ROC, etc., and comparison of the trained model against other existing models, to allow deployment of the model which best fits the desired solution.
- the evaluation of the model at this step is performed using the testing data (“test set”) which was not used for model training nor for hyperparameters optimization and best represents the real-world (unseen) data.
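- The metric evaluation on the hold-out set might look like the following sketch, computed here on hypothetical predictions rather than real procedure data.

```python
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

# Hypothetical hold-out labels, hard predictions and predicted probabilities.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
y_score = [0.9, 0.2, 0.8, 0.4, 0.3, 0.6, 0.7, 0.1]

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1       :", f1_score(y_true, y_pred))
print("AUC-ROC  :", roc_auc_score(y_true, y_score))
```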
- the trained model is deployed and integrated or utilized with the inference module to generate output based on newly collected data, as detailed herein. According to some embodiments, as more data is collected, the training database may grow in size and may be updated.
- the updated database may then be used to re-train the model, thereby updating/enhancing/improving the model’s output.
- the new instances in the training database may be obtained from new clinical cases or procedures or from previous (existing) procedures that have not been previously used for training.
- an identified shift in the collected data’s distribution may serve as a trigger for the re-training of the model.
- an identified shift in the deployed model’s performance may serve as a trigger for the re-training of the model.
- the training database may be a centralized database (for example, a cloud-based database), or it may be a local database (for example, for a specific healthcare facility).
- learning and updating may be performed continuously or periodically on a remote location (for example, a cloud server), which may be shared among various users (for example, between various institutions, such as hospitals).
- learning and updating may be performed continuously or periodically on a single or on a cohort of medical devices, which may constitute an internal network (for example, of an institution, such as a hospital).
- a validated model may be executed locally on processors of one or more medical systems operating in a defined environment (for example, a designated institution, such as a hospital), or on local online servers of the designated institution.
- the model may be continuously updated based on data obtained from the specific institution ("local data"), or periodically updated based on the local data and/or on additional external data, obtained from other resources.
- federated learning may be used to update a local model with a model that has been trained on data from multiple facilities/tenants without requiring the local data to leave the facility or the institution.
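- As a generic illustration of federated averaging (one common federated-learning scheme, not necessarily the one used by the disclosed system), the sketch below averages per-facility weight vectors, weighted by local sample counts, so that only weights, not patient data, leave each facility.

```python
import numpy as np

def federated_average(local_weights, sample_counts):
    """Weighted average of per-facility model weight vectors (FedAvg-style)."""
    counts = np.asarray(sample_counts, dtype=float)
    stacked = np.stack(local_weights)
    return (stacked * counts[:, None]).sum(axis=0) / counts.sum()

# Hypothetical weights from two facilities, with different local dataset sizes.
w_hospital_a = np.array([0.2, 1.0, -0.5])
w_hospital_b = np.array([0.4, 0.8, -0.3])
print(federated_average([w_hospital_a, w_hospital_b], sample_counts=[300, 100]))
```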
- FIGS. 7A-7B show an exemplary inference module (FIG. 7A) and an exemplary inference process (FIG. 7B), according to some embodiments.
- inference module 80 may include two main hardware components/units: at least one memory unit 82 and at least one processing unit 84, which are functionally and/or physically associated. Inference module 80 is essentially configured to run collected data through the trained model to calculate/process an output/prediction.
- Memory 82 may include any type of accessible memory (volatile and/or non-volatile), configured to receive, store and/or provide various types of data and executable instructions, to be processed by processing unit 84, which may include any type of at least one suitable processor.
- the memory 82 and the processing unit 84 may be functionally or physically integrated, for example, in the form of a Static Random Access Memory (SRAM) array.
- the memory is a non-volatile memory having stored therein executable instructions (for example, in the form of a code, service, executable program and/or a model file containing the model architecture and/or weights) that can be used to perform a variety of tasks, such as data cleaning, required pre-processing steps and inference operation (as detailed below) on new data to obtain the model’s prediction or result.
- memory 82 may be configured to accept/receive, store and/or provide various types of data values or parameters related to the data as well as executable algorithms (in the case of machine learning based algorithms, these may be referred to as “trained models”).
- Memory unit 82 may store or accept new acquired data 822, which may be raw (primary) data that has been collected, as detailed herein.
- Memory module 82 may further store metadata 824 related to the raw data.
- metadata may include a variety of parameters/values related to the raw data, such as, but not limited to: the specific device which was used to provide the data, the time the data was obtained, the place the data was obtained (such as specific operation room, specific institution, etc.), and the like.
- Memory 82 may further store the trained model(s) 826.
- the trained models may be the models generated and deployed by a training module, such as training module 70 of FIG. 6A.
- the trained model(s) may be stored, for example in the form of executable instructions and/or model file containing the model’s weights, capable of being executed by processing unit 84.
- Processing unit 84 of inference module 80 may include at least one processor, configured to process the new obtained data and execute a trained model to provide corresponding results (detailed in FIG. 7B).
- processing unit 84 is configured at least to perform pre-processing of the data 842, which may include actions for preparing the data stored in memory 82 for downstream processing, such as, but not limited to, checking for and handling null values, imputation, standardization, handling categorical variables, one-hot encoding, resampling, scaling, filtering, outlier removal etc.
- processing unit 84 may further be configured to extract features 844 from the acquired data, using techniques such as, but not limited to, Principal Components Analysis (PCA), Independent Component Analysis (ICA), Linear Discriminant Analysis (LDA), Locally Linear Embedding (LLE), t-distributed Stochastic Neighbor Embedding (t-SNE), Unified Manifold Approximation and Projection (UMAP) and/or Autoencoders, etc.
- Feature extraction may be executed using feature engineering methods in which mathematical tools are used to extract domain-knowledge features from the raw data, for example: statistical features such as mean, variance, ratio, frequency etc. and/or visual features such as dimension or shape of certain objects in an image.
- processing unit 84 may be configured to perform feature selection.
- Processing unit 84 may further be configured to execute the model on the collected data and/or features extracted therefrom, to obtain model results 846.
- the processing unit 84 may further be configured to execute a business logic 848, which can provide further fine-tuning of the model results and/or utilization of the model’s results for a variety of automated decisions, guidelines or recommendations supplied to the user.
- FIG. 7B shows steps in an exemplary inference process 86, executed by a suitable inference module (such as inference module 80 of FIG. 7A).
- new data is acquired/collected from or related to newly executed medical procedures.
- the new data may include any type of raw (primary) data, as detailed herein.
- suitable trained model(s) (generated, for example, by a suitable training module in a corresponding training process) may be loaded, per task(s). This step may be required in instances in which computational resources are limited and only a subset of the required models or algorithms can be loaded into RAM memory to be used for inference.
- the inference process may require an additional management step responsible to load the required models from storage memory for a specific subset of inference tasks/jobs, and once inference is completed, the loaded models are replaced with other models that will be loaded to allow an additional subset of inference tasks/jobs.
- the raw data collected in step 861 is pre-processed.
- the pre-processing steps may be similar or identical to the pre-processing steps performed in the training process (by the training module), to thereby allow the data to be processed similarly by the two modules (i.e., training module and inference module).
- this step may include actions such as, but not limited to, checking for and handling null values, imputation, standardization, handling categorical variables, one-hot encoding, etc., to prepare the input data for analysis by the model(s).
- extraction of features from the data may be performed using, for example, Principal Components Analysis (PCA), Independent Component Analysis (ICA), Linear Discriminant Analysis (LDA), Locally Linear Embedding (LLE), t-distributed Stochastic Neighbor Embedding (t-SNE), Unified Manifold Approximation and Projection (UMAP) and/or Autoencoders, etc.
- optionally, feature selection may be performed, as described above, to reduce the number of features used for inference.
- the results of the model are obtained, i.e., the model is executed on the processed data to provide corresponding results.
- fine-tuning of the model results may be performed, whereby post-inference business logic is executed.
- Execution of post-inference business logic refers to the utilization of the model’s results for a variety of automated decisions, guidelines or recommendations supplied to the user.
- Postinference business logic may be configured to accommodate specific business and/or clinical needs or metrics, and can vary between different scenarios or institutions based on users’ or institutions’ requests or needs.
- the model results may be utilized in various ways, including, for example, providing operating instructions to automated medical devices and/or imaging systems (including triggering operation thereof at specific time points or states of the respiration cycle), providing instructions and/or recommendations and/or alerts to users regarding various device operations (including instructions to initiate operation of the automated medical device and/or the imaging system at specific time points or states of the respiration cycle), and the like, as further detailed hereinabove.
- inference operation may be performed on a single data instance. In other embodiments, inference operation may be performed using a batch of multiple data instances to receive multiple predictions or results for all data instances in the batch. In some embodiments, an ensemble of models or algorithms can be used for inference, where the same input data is processed by a group of different models and results are being aggregated using averaging, majority voting or the like. In some embodiments, the model can be designed in a hierarchical manner where input data is processed by a primary model and based on the prediction or result of the primary model’s inference, the data is processed by a secondary model. In some embodiments, multiple secondary models may be used, and hierarchy may have more than two levels.
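- The aggregation options mentioned above (probability averaging and majority voting across an ensemble) can be sketched as follows, using stand-in model outputs.

```python
import numpy as np

# Stand-in class-probability outputs from three ensemble members for one data instance.
ensemble_probs = np.array([[0.7, 0.3],   # model 1: P(class 0), P(class 1)
                           [0.6, 0.4],   # model 2
                           [0.3, 0.7]])  # model 3

averaged = ensemble_probs.mean(axis=0)   # probability averaging across the ensemble
votes = ensemble_probs.argmax(axis=1)    # each model's predicted class
majority = np.bincount(votes).argmax()   # majority voting

print(averaged, majority)
```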
- the methods and systems disclosed herein utilize data-driven methods to create algorithms based, at least in part, on various breath related datasets.
- artificial intelligence (e.g., machine-learning or deep-learning) algorithms are used to learn the complex mapping/correlation/correspondence between the multimodal (e.g., data obtained from different modalities, such as images, logs, sensory data, etc.) input dataset parameters (procedure, clinical, operation, patient related and/or administrative information), to optimize the clinical procedure’s outcome or any other desired functionalities, by allowing, inter alia, synchronization thereof in accordance with specific states of the breathing cycle.
- the systems and methods disclosed herein may determine such optimal mapping using various approaches, such as, for example, a statistical approach, and utilizing machine-learning algorithms to learn the mapping/correlation/correspondence from the training datasets.
- the algorithm may be a generic algorithm, which is agnostic to specific procedure characteristics, such as type of procedure, user, service provider or patient.
- the algorithm may be customized to a specific user (for example, preferences of a specific healthcare provider), a specific service provider (for example, preferences of a specific hospital), a specific population (for example, preferences of different age groups), a specific patient (for example, preferences of a specific patient), and the like.
- the algorithm may combine a generic portion and a customized portion.
- FIG. 8 shows a block diagram 90 illustrating an exemplary method of generating (training) a trigger determination model, according to some embodiments.
- triggering imaging and/or insertion/steering of a medical instrument may be performed at a specific time/state during or along the breathing cycle, to thereby minimize breathing effects/artifacts on the procedures and increase their accuracy and efficiency.
- a corresponding model may be trained for determination or prediction of the triggering event.
- respiration related data 901 is used to train the trigger determination model.
- the input respiration data 901 may be obtained from previous procedures, from a current procedure and/or from external datasets/databases.
- the respiration related data/datasets may be obtained, for example, from one or more breath sensors or breath monitoring devices as detailed herein.
- the input data may include a prediction/forecast of the respiration behavior for a future segment of the respiration cycle (for example, future time window Tforecast). Feature extraction/engineering 902 of the input data may be performed.
- Exemplary features may include such features as, but not limited to: breathing stability, breathing rate, breathing amplitude, gating window length (i.e., compliant length of time between two selected points of the breathing cycle where a synchronized medical operation can be executed), patient movement, standard statistical parameters such as mean, median, standard-deviation, histogram, phase, integral, slope, FFT, correlation, and the like, or any combination thereof.
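- A minimal feature-engineering sketch over a synthetic respiration trace is shown below; the sampling rate, peak-detection settings and the signal itself are assumptions, and only a few of the listed features are computed.

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic respiration trace: ~0.25 Hz breathing plus a little noise (assumed values).
fs = 50                                        # sampling rate in Hz
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(3)
signal = np.sin(2 * np.pi * 0.25 * t) + 0.05 * rng.normal(size=t.size)

peaks, _ = find_peaks(signal, height=0.5, distance=2 * fs)  # roughly one peak per breath
freqs = np.fft.rfftfreq(signal.size, 1 / fs)
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))

features = {
    "breathing_rate_bpm": 60 * len(peaks) / (t[-1] - t[0]),
    "breathing_amplitude": float(signal[peaks].mean() - signal.mean()),
    "mean": float(signal.mean()),
    "std": float(signal.std()),
    "dominant_freq_hz": float(freqs[spectrum.argmax()]),
}
print(features)
```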
- ground truth trigger labels 903 may be obtained and utilized for the training process. Ground truth trigger labels may be obtained, for example, from previous procedures, external datasets/databases, ongoing procedure, and the like.
- Exemplary ground truth trigger labels may include, for example, timestamps of valid triggers, gating windows’ start and end timestamps, gating windows’ length, etc.
- the input respiration data 901, extracted/engineered features 902 and/or ground truth labels 903 are used for training model 904 to predict the next valid trigger event(s).
- the trigger determination model may be trained to predict the next valid triggering event(s) during Tforecast. Training the trigger determination model may be established using machine learning (ML)/deep learning (DL) tools, as detailed above.
- the model is trained to output an accurate trigger event which will reduce, minimize or diminish breathing artifacts, by allowing a procedure to be performed at the same state (e.g., ttrig) of the breathing cycle.
- the training may utilize a loss function 905 to calculate the loss, aimed to minimize the trigger prediction error and/or the gating window duration error etc.
- loss function 905 may be a Multi-Loss scheme.
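- A possible multi-loss formulation, combining a trigger-time error term with a gating-window duration error term, is sketched below; the weighting scheme and all values are illustrative assumptions.

```python
import numpy as np

def multi_loss(pred_trigger_t, true_trigger_t, pred_window_s, true_window_s,
               w_trigger=1.0, w_window=0.5):
    """Weighted sum of a trigger-time error and a gating-window duration error."""
    trigger_loss = np.mean((pred_trigger_t - true_trigger_t) ** 2)
    window_loss = np.mean((pred_window_s - true_window_s) ** 2)
    return w_trigger * trigger_loss + w_window * window_loss

# Hypothetical predicted vs. ground-truth trigger times (s) and gating-window lengths (s).
print(multi_loss(np.array([12.4, 20.1]), np.array([12.5, 20.0]),
                 np.array([2.1, 1.8]), np.array([2.0, 2.0])))
```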
- the training may be executed using a Multi-Output regression/classification approach, for example, to utilize the respiration analysis capabilities of the model to predict not only the next valid triggering event, but also breathing-related events in which triggering is to be avoided, such as the patient coughing or a sudden movement of the patient. If such events are predicted then automatically triggered actions may be aborted or halted (if already initiated) and/or the user may be alerted before manually initiating the medical action or while the action is in process (if already initiated).
- the model may be further trained to predict breathing-related anomalies, predict possible breathing complications or other clinical complications (e.g., pneumothorax), monitor the subject’s stress level and the like.
- the generated/calculated loss (e.g., prediction error) may be used to fine-tune or adjust the trigger determination model, as part of the training process.
- the determination/prediction of the next valid trigger(s) is based on raw (or minimally processed) respiration data and/or on a forecasted breathing signal.
- a one-step process may be used, and a trigger may be provided if a medical action (e.g., imaging, instrument insertion/steering) is to be executed at that time point (i.e., a yes/no answer as to whether an action is to be triggered at this time point/phase) and/or an indication of the exact timing of the next valid triggering event is provided.
- a suitable classification and/or regression algorithm may be used to determine, based on the input data (parameter-based), if a trigger is to be executed and/or what would be the timing of the next valid triggering event.
- the next valid trigger(s) may be based on a forecast/prediction of the breathing signal. Such forecast/prediction may be performed for a future time window (Tforecast), which may be, for example, the next 3 seconds, the next 6 seconds, the next 9 seconds, or any other appropriate time window.
- the next valid trigger(s) may be predicted for future time window Tforecast.
- the classification and/or regression algorithm/model may predict the length of the gating window (also referred to as the “triggering period”), i.e., the start time and end time of the gating window in which a medical action can be executed in synchronization with the subject’s respiratory behavior.
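- One plausible way to derive such a gating window from a forecasted breathing trace is to locate the longest low-motion stretch of the forecast, as in the sketch below; the slope threshold and the synthetic forecast are assumptions, not the system's actual logic.

```python
import numpy as np

fs = 50                                             # samples per second, assumed
t = np.arange(0, 6, 1 / fs)                         # a 6 s forecast window (Tforecast)
forecast = np.clip(np.sin(2 * np.pi * 0.25 * t), -0.2, None)  # flat-bottomed "lull"

slope = np.abs(np.gradient(forecast, 1 / fs))
quiet = slope < 0.05                                # low-motion samples

# The longest run of quiet samples defines the gating window's start and end.
best_start, best_len, run_start = 0, 0, None
for i, q in enumerate(np.append(quiet, False)):
    if q and run_start is None:
        run_start = i
    elif not q and run_start is not None:
        if i - run_start > best_len:
            best_start, best_len = run_start, i - run_start
        run_start = None
print("gating window: %.2f s to %.2f s" % (t[best_start], t[best_start + best_len - 1]))
```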
- the medical action may be performed automatically once a valid trigger is determined.
- the user may be provided with an alert/instruction/recommendation to perform the required medical action at the next predicted time point/state (ttrig), during the next time window (Tforecast).
- information regarding the confidence and/or the quality of the forecasted triggering period may be utilized to optimize automatic triggering of a medical action. In some embodiments, information regarding the confidence and/or the quality of the forecasted triggering period may be presented to the user, to optimize the user’s decision-making or execution.
- FIG. 9 shows a block diagram 100 illustrating an exemplary method of generating (training) a respiration prediction model, according to some embodiments.
- triggering imaging and/or insertion/steering of a medical instrument may be performed at a specific time/state during or along the breathing cycle and, in some embodiments, triggering determination is for a future time window for which respiration behavior is predicted/forecasted.
- a corresponding algorithm/model may be trained.
- respiration related data 1001 is used to train the respiration prediction model.
- the input respiration data 1001 may be obtained from previous procedures, from a current procedure and/or from external datasets/databases.
- the respiration data 1001 may be obtained, for example, from one or more breath sensors or breath monitoring devices, including, for example, pressure sensor, stretch sensor, motion sensor, accelerometer, ECG sensor, and the like. Data may further be obtained from optical devices (e.g., camera, laser), real-time or semi real-time medical imaging, etc.
- the sensor/monitor may be associated with or placed on the subject’s body, in close proximity to the subject’s body, in association with the automated medical device, in association with the imaging system, in association with other devices, and the like. In some embodiments, the sensor/monitor may operate autonomously or in conjunction with the automated medical device, the imaging device, medical systems, and the like. In some embodiments, the sensor/monitor may operate continuously or periodically.
- the operation of the respiration sensor/monitor may be at least partially controlled by the processor or controller of the automated medical device.
- the datasets may include such data parameters or values as, but not limited to: voltage, pressure, stretch level, power, acceleration, speed, coordinates, frequency, etc.
- Pre-processing of the data may include any suitable pre-processing method of the raw data to prepare the data for downstream processing, including, for example, but not limited to: checking for and handling null values, imputation, standardization, handling categorical variables, one -hot encoding, resampling, scaling, filtering, outlier removal, and the like.
- feature extraction/feature engineering 1003 of the input data may be performed.
- Exemplary features may include such features as, but not limited to: breathing stability, breathing rate, breathing amplitude, gating window length (i.e., compliant length of time between two selected points of the breathing cycle where a synchronized medical operation can be executed), patient movement, standard statistical parameters such as mean, median, standard-deviation, histogram, phase, integral, slope, FFT, correlation, and the like, or any combination thereof.
- the pre-processed data is processed/analyzed by a signal quality analysis algorithm/model 1004.
- Such signal quality analysis algorithm/model utilizes data related to the breathing measurements (e.g., breathing activity sampled by an acquisition device) to estimate the quality of the signal. Short-term and/or long-term signal quality analysis may be performed to analyze current state and/or possible trends.
- Degradation in signal quality may result from, for example, movement of the sensor, misplacement of the sensor, functional defects of the sensor, functional defects of the host system, electromagnetic interferences from external sources and the like.
- the output of the signal quality analysis step includes valuable information about the respiration signal’s quality that will be utilized during the training process of the respiration prediction model.
- the output of the signal quality algorithm/model may be used to provide information and/or instructions to the user related to the quality of the respiration signal and/or the need to take action/s or perform adjustments to increase the signal’s estimated quality.
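- A simple signal-quality estimate is sketched below: it reports the fraction of spectral power falling inside an expected respiration band; the band limits, the synthetic signals and any acceptance threshold are illustrative assumptions.

```python
import numpy as np

def respiration_signal_quality(signal, fs, band=(0.1, 0.5)):
    """Fraction of spectral power inside the expected respiration band (0 to 1)."""
    signal = np.asarray(signal, dtype=float) - np.mean(signal)
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return power[in_band].sum() / power.sum()

fs = 50
t = np.arange(0, 60, 1 / fs)
clean = np.sin(2 * np.pi * 0.25 * t)                                  # ~15 breaths/min
noisy = clean + np.random.default_rng(4).normal(scale=1.0, size=t.size)
print(respiration_signal_quality(clean, fs), respiration_signal_quality(noisy, fs))
```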
- ground truth data 1005 is obtained and utilized for training the model.
- the ground truth data is the actual respiration behavior during the time period/window Tforecast for which the prediction is desired.
- the pre-processed respiration data 1002, ground truth data 1005 and, optionally, extracted features 1003 and/or signal quality analysis output 1004 are used for training the respiration model 1006, which may predict/forecast the breathing behavior for the next time period/window (for example, next 3 seconds, next 6 seconds, next 9 seconds, or any other appropriate time window).
- Training respiration model 1006 may be executed using machine learning (ML)/deep learning (DL) tools, as detailed above.
- the model 1006 may be trained to output as accurate a prediction as possible of the characteristics of one or more future breathing cycles or specific portions/segments thereof, including, for example, size, amplitude, length, dispersion, and the like.
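- A minimal forecasting sketch in this spirit follows: past windows of a synthetic respiration trace serve as inputs and the subsequent Tforecast-length window as the target; the window lengths and the choice of regressor are assumptions made only for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

fs = 10                                             # Hz, assumed
t = np.arange(0, 300, 1 / fs)
rng = np.random.default_rng(5)
signal = np.sin(2 * np.pi * 0.25 * t) + 0.05 * rng.normal(size=t.size)

past, horizon = 6 * fs, 3 * fs                      # 6 s of history -> predict 3 s ahead
starts = range(0, signal.size - past - horizon, fs)
X = np.array([signal[i:i + past] for i in starts])
y = np.array([signal[i + past:i + past + horizon] for i in starts])

model = RandomForestRegressor(n_estimators=30, random_state=0).fit(X, y)
forecast = model.predict(X[-1:])                    # predicted next 3 s of respiration
print(forecast.shape)                               # (1, 30)
```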
- the respiration prediction model may be used, for example, in the prediction of triggering event(s), to reduce, minimize or diminish breathing artifacts, by allowing steps of the procedure to be performed at the same time or state (e.g., ttrig) of the predicted breathing cycle.
- the training may utilize a loss function 1007 to calculate the loss, aimed to minimize the respiration prediction error and/or the gating window duration error etc.
- loss function 1007 may be a Multi-Loss scheme.
- the training may be executed using a Multi-Output regression/classification approach, for example, to utilize the respiration analysis capabilities of the model to predict breathing-related anomalies, possible breathing complications or other clinical complications (e.g., pneumothorax, bleeding), monitor the subject’s stress level and the like.
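A possible PyTorch-style sketch of such a multi-loss scheme is shown below, combining a waveform prediction error with a gating-window duration error; the gating_window() helper, the lull threshold and the loss weights are assumptions introduced only for illustration.

```python
# Illustrative multi-loss combining waveform prediction error with a gating-window duration error.
import torch

def gating_window(pred: torch.Tensor, threshold: float = 0.2) -> torch.Tensor:
    """Approximate gating-window length as the (soft) fraction of samples below a lull threshold."""
    return torch.sigmoid((threshold - pred) * 20.0).mean(dim=-1)

def multi_loss(pred: torch.Tensor, target: torch.Tensor,
               w_wave: float = 1.0, w_gate: float = 0.5) -> torch.Tensor:
    wave_err = torch.nn.functional.mse_loss(pred, target)                       # respiration prediction error
    gate_err = torch.nn.functional.l1_loss(gating_window(pred), gating_window(target))  # gating-window error
    return w_wave * wave_err + w_gate * gate_err
```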
- the model may be used to identify respiration patterns, which may then be used to classify patients according to their respiration patterns. Such classification may be used, for example, to adapt or personalize the trigger determination model and/or the insertion/steering procedure to the patient’s respiration classification.
- the generated/calculated loss (e.g., prediction error) may be used to fine-tune or adjust the respiration prediction model 1006, as part of the training process.
- FIG. 10 shows a flowchart 110 illustrating an exemplary method for predicting/forecasting respiration of a subject utilizing the respiration prediction model, according to some embodiments.
- patient respiration activity is obtained.
- the respiration activity data may be obtained, for example, from one or more breath sensors, including, for example, pressure sensor, stretch sensor, motion sensor, imaging sensor, accelerometer, ECG sensor, camera, and the like.
- the sensor/monitor may be associated with or be placed on the subject’s body, in close proximity to the subject’s body, in association with the automated medical device, in association with the imaging system, in association with other devices, and the like.
- the sensor/monitor may operate autonomously or in conjunction with the automated medical device, the imaging device, medical systems, and the like. In some embodiments, the sensor/monitor may operate continuously or periodically. In some embodiments, the operation of the respiration sensor/monitor may be at least partially controlled by the processor or controller of the automated medical device. As shown in FIG. 10, at calibration step 1102, patient specific respiration parameters are calculated in order to ensure that the characteristics of the obtained respiration signal allow successful completion of the analysis flow.
- such patient specific respiration parameters may include various features, patterns and/or respiratory related statistics, such as breathing stability, noise level, breathing rate, breathing amplitude, gating window length (i.e., compliant length of time between two selected points of the breathing cycle where a synchronized medical operation can be executed) statistics, patient movement, standard statistical parameters such as mean, median, standard-deviation, histogram, phase, integral, slope, FFT, correlation, and the like, or any combination thereof.
- step 1104 may be employed, whereby an indication to improve breathing-related signal acquisition (for example, by adjusting sensor location, sensor contact, or sensor operating parameters (for example, sensitivity)) is issued prior to repeating step 1102.
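The calibration check of steps 1102-1103 could, for example, be reduced to a simple rule over the computed patient-specific parameters, as in the sketch below; the thresholds and feature names are assumptions made only for the example.

```python
# Illustrative calibration check (steps 1102-1103): thresholds below are assumptions.
def calibration_ok(features: dict) -> bool:
    """Decide whether the patient-specific respiration parameters support the analysis flow."""
    return (
        features.get("breathing_rate_bpm", 0.0) > 6.0          # physiologically plausible rate
        and features.get("breathing_stability", 1e9) < 1.5      # cycle-length jitter (s)
        and features.get("quality", 0.0) > 0.6                  # signal quality score (0..1)
    )
```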
- the generic model/algorithm, which, in some embodiments, has been trained on a (large) population of subjects, may be fine-tuned and optimized for the specific patient by including patient-specific respiration data that is used for re-training and/or fine-tuning of the model/algorithm, such that a new personalized model/algorithm is generated that better corresponds to the specific respiratory profile of the patient, while still utilizing the more generic respiration analysis capabilities obtained from training on a larger population.
- re-training or fine-tuning of the model requires a deployment step of the ML/DL patient-specific model/algorithm 1106; this deployment step marks the transition from the re-training step to the inference step, in which the new, re-trained model is used for prediction on new data.
- the new, re-trained/fine-tuned model may replace the original, more generic model.
- the re-trained model may be used in combination with the original model or with multiple other models.
- if the signal prediction confidence is not adequate, step 1107 is repeated until the prediction confidence level is satisfactory. If the signal prediction confidence is adequate, respiration activity prediction is provided, in step 1109. The respiration activity prediction may, for example, be presented to a user and/or used in further calculations, predictions and/or for controlling operations of medical devices. For example, the respiration activity prediction may be used as input for the generation of a triggering determination algorithm/model, as described hereinabove. In some embodiments, pre-processing stages may be used during the described flow in order to prepare the data for the analysis process.
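One possible, non-limiting way to organize the confidence-gated inference of steps 1107-1109 is sketched below; the model interface (a predict() returning a confidence, and a fine_tune() method) and the threshold are hypothetical and introduced only for illustration.

```python
# Illustrative inference loop with a confidence check; the model interface is hypothetical.
def predict_with_confidence(model, history, confidence_threshold: float = 0.8, max_retries: int = 3):
    """Return a respiration forecast only once its estimated confidence is adequate."""
    for _ in range(max_retries):
        forecast, confidence = model.predict(history)   # assumed to return (forecast, confidence)
        if confidence >= confidence_threshold:
            return forecast                              # step 1109: provide the prediction
        model.fine_tune(history)                         # step 1107: re-train on new patient data
    raise RuntimeError("Prediction confidence remained below threshold")
```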
- Such pre-processing of the data may include any suitable pre-processing of the raw data to prepare the data for downstream processing, including, for example, but not limited to: checking for and handling null values, imputation, standardization, handling categorical variables, one -hot encoding, resampling, scaling, filtering, outlier removal, and the like.
- FIGS. 11A-11C illustrate graphs of actual (measured) and predicted respiratory activity of subjects.
- shown in FIG. 11A is respiration behavior input data of a subject with relatively stable breathing behavior, represented by line graph 1202.
- the input data is used for predicting breathing behavior utilizing the methods disclosed herein.
- the predicted breathing behavior during time window Tforecast is represented by graph line 1204 and the actual (measured) breathing activity during time window Tforecast is represented by graph line 1206.
- the predicted respiratory behavior (breath cycle) closely matches the actual respiratory behavior.
- shown in FIG. 11B is respiration behavior input data of a subject with relatively unstable breathing behavior (represented by line graph 1212).
- the input data is used for predicting breathing behavior utilizing the methods disclosed herein.
- the predicted breathing behavior during Tforecast is represented by graph line 1214 and the actual (measured) breathing activity during Tforecast is represented by graph line 1216.
- the predicted respiratory behavior (breath cycle) closely matches the actual respiratory behavior, despite the relative instability identified in the input data.
- shown in FIG. 11C is respiration behavior input data of a subject (represented by line graph 1222).
- the input data is used for predicting breathing behavior utilizing the methods disclosed herein.
- the predicted breathing behavior is represented by graph line 1224 and the actual (measured) breathing activity is represented by graph line 1226.
- ttrig 1228 is the selected state to trigger an event (for example, imaging, insertion/steering of a medical instrument, etc.).
- the ttrig is at the beginning of the pause (lull) phases of the breathing cycle.
- the ttrig marked on the predicted respiratory behavior very closely matches the ttrig marked on the actual respiratory behavior, further substantiating the accuracy of the respiration prediction models disclosed herein.
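For illustration only, the sketch below shows one simple way a ttrig at the onset of the lull (pause) phase could be located in a predicted respiration segment; the amplitude threshold and minimum lull duration are assumptions.

```python
# Illustrative detection of ttrig at the onset of the lull (pause) phase in a predicted segment.
import numpy as np

def find_ttrig(pred: np.ndarray, fs: float, lull_threshold: float = 0.2, min_lull_s: float = 0.5):
    """Return the time (s) of the first sample that starts a sufficiently long lull, or None."""
    below = pred < lull_threshold
    run = 0
    for i, flag in enumerate(below):
        run = run + 1 if flag else 0
        if run >= int(min_lull_s * fs):
            return (i - run + 1) / fs   # onset of the lull period within the predicted window
    return None
```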
- FIG. 12 shows a flowchart 1300 illustrating steps of an exemplary method of steering of a medical instrument toward a predicted location of a moving target, in synchronization with a respiratory cycle of a subject, according to some embodiments.
- respiration behavior of the patient is analyzed using a data analysis algorithm.
- the respiration behavior data analysis algorithm may be used to predict a future segment of the patient’s respiration, which allows triggering of various consequent medical actions, including, for example, imaging and/or insertion and/or steering of a medical tool, such that the triggering is performed at a specific time (time point or time range) during the respiration cycle, thereby ensuring synchronization between the execution of different actions prior or during the medical procedure and the respiration cycle.
- the respiration behavior is analyzed prior to commencement of the medical procedure, and a respiration behavior baseline is established for the specific patient.
- the breathing behavior of the patient may change during the course of the procedure, for example, due to the gradual effect of sedation on the patient, due to a change in the patient’s stress levels, or due to a change in the patient’s physical conditions.
- the respiration of the patient is continuously monitored throughout the procedure (e.g., by means of one or more respiration/breath sensors, including, for example, a pressure sensor, stretch sensor, motion sensor, imaging sensor, accelerometer, ECG sensor, camera, and the like), and the respiration activity is continuously analyzed and taken into consideration in subsequent steps of the disclosed method.
- Time ttrig of the respiration cycle may be any desired or suitable time point during the respiration cycle, wherein ttrig is a specific respiration time or state of the respiration cycle.
- ttrig may be determined to be at the peak region/point of the inhalation phase, at a specific point/region of the exhalation phase, at the beginning of the automatic pause (“lull”) period between two consecutive respiration cycles, and the like, or any other suitable region/point/state during the respiration cycle.
- ttrig is determined to be the start/onset of a triggering event (e.g., lull period).
- triggering planning imaging may refer to automatic initiation of the scan, for example, via direct interface between a processor of an automated medical device and the imaging system.
- triggering planning imaging may refer to generating an alert/instruction to a user (e.g., physician, technician) to manually initiate the imaging.
- the triggering may be in the form of a countdown (for example, a countdown from 5, a countdown from 4, a countdown from 3, etc.), so as to allow the user to be prepared to timely initiate the imaging and minimize any possible delay.
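A trivial sketch of such a countdown-style trigger is shown below, assuming the time remaining until the predicted ttrig is already known; the timing values and printed messages are illustrative only.

```python
# Illustrative countdown alert ahead of a predicted triggering time (ttrig).
import time

def countdown_to_trigger(seconds_to_ttrig: float, count_from: int = 3):
    """Announce a countdown so the user can manually initiate imaging at ttrig."""
    time.sleep(max(0.0, seconds_to_ttrig - count_from))   # wait until count_from seconds remain
    for n in range(count_from, 0, -1):
        print(f"Initiate imaging in {n}...")
        time.sleep(1.0)
    print("Initiate imaging now")
```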
- a trajectory is calculated for a medical instrument, which is coupled to the automated medical device as shown in Figs. 5C-5D, from an entry point to a target in the patient’s body, as detailed above based, inter alia, on the planning scan.
- data-analysis algorithm(s), such as learning-based models, may be used to determine or recommend to the user one or more of: the location of the target, an optimal entry point position, the location of “no-fly” zones and an optimal trajectory for the procedure.
- checkpoints may be set along the trajectory.
- a registration imaging (for example, registration CT-scan) may be triggered at time ttrig, to register the automated medical device to the image space.
- to achieve synchronization, distinct imaging events are triggered to occur at the same time (i.e., time ttrig) along/over the breathing cycle(s).
- insertion/steering of the medical instrument at time ttrig is triggered, to thereby ensure that the insertion/steering of the medical instrument and corresponding imaging (e.g., scan) are executed at the same point/phase of the breathing cycle (i.e., time ttrig), thus ensuring that the state of the anatomical volume during the insertion/steering of the medical instrument matches the state of the anatomical volume captured during the planning and/or registration imaging.
- triggering steering of the medical instrument may refer to automatic activation of the automated device.
- triggering steering of the medical instrument may refer to generating an instruction/alert to the user to manually activate the automated device, either via a workstation (e.g., by depressing a pedal, pushing a button, and the like) or via a remote control unit (e.g., by pressing or rotating an activation button or any suitable activation mechanism).
- the triggering may be in the form of a countdown, so as to allow the user to be prepared to timely activate the automated medical device and minimize any possible delay.
- a confirmation imaging (e.g., CT-scan) is triggered at time ttrig.
- the confirmation imaging may be triggered upon the medical instrument reaching a checkpoint set by the user or the processor along the trajectory.
- the steering of the medical instrument may be executed continuously, and the confirmation imaging may be triggered at predetermined time(s) and/or expected medical instrument positions along the trajectory.
- triggering confirmation imaging may refer to automatic initiation of the imaging.
- triggering confirmation imaging may refer to generating an alert/instruction to the user to manually initiate the imaging.
- the triggering may be in the form of a countdown, as described above, so as to allow the user to be prepared to timely initiate the imaging and minimize any possible delay.
- the real-time position of the medical instrument and the target may be determined (as detailed herein).
- a dynamic trajectory model (DTM) is applied to update the trajectory (if needed).
- the dynamic trajectory model may include one or more algorithms and/or AI-based models, each of which may be configured to provide information, predictions, estimations and/or calculations regarding various parameters and variables that may affect tissue and target movement and the consequent trajectory.
- Such algorithms and models may provide parameters such as, predicted/estimated tissue movement and predicted/estimated target movement, to ultimately predict the estimated target spatiotemporal location during and/or at the end of the procedure, to thereby allow the planning and/or updating of a corresponding trajectory to facilitate the medical instrument reaching the target at its predicted location.
- estimation of tissue movement may take into account tissue movement resulting from the patient’s respiratory cycle.
- the patient’s respiratory cycle and/or the tissue movement resulting from the patient’s respiratory cycle may be predicted using a separate algorithm/model.
- the dynamic trajectory model may include algorithms/models to predict the movement of previously determined “no-fly” zones and/or algorithms/models to update the “no-fly” zones map according to the predicted tissue and target movement.
- the dynamic trajectory model may include determining if a calculated trajectory is optimal, based on various parameters as described herein, such that the output of the model is the optimal trajectory. It can be appreciated that different trajectories may be considered as “optimal”, depending on the chosen parameters, the weight given to each parameter, user preferences, etc.
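The notion that the “optimal” trajectory depends on the chosen parameters and their weights can be illustrated with the following sketch, in which candidate trajectories are scored by a weighted combination of illustrative criteria; the metric names and weights are assumptions, not parameters of the present disclosure.

```python
# Illustrative weighted scoring of candidate trajectories in a dynamic trajectory model.
def score_trajectory(traj: dict, weights: dict) -> float:
    """Lower is better; traj holds pre-computed metrics for one candidate trajectory."""
    return (
        weights["length"] * traj["length_mm"]
        + weights["curvature"] * traj["max_curvature"]
        + weights["no_fly_margin"] * (1.0 / max(traj["min_no_fly_distance_mm"], 1e-3))
    )

def select_optimal(candidates: list, weights: dict) -> dict:
    """Return the candidate trajectory with the best (lowest) weighted score."""
    return min(candidates, key=lambda t: score_trajectory(t, weights))
```

Changing the weights (for example, favoring a larger margin from “no-fly” zones over a shorter path) can change which candidate is considered “optimal”, consistent with the statement above.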
- at step 1320, it is determined whether the medical instrument has reached the target. If it is determined that the medical instrument has reached the target, the steering process ends 1322. If it is determined that the medical instrument has not reached the target, steps 1312-1320 are repeated until the target is reached.
- the method disclosed in FIG. 12 allows synchronization of various steps, including imaging and steering of the medical instrument, with the patient’s breathing cycle, by triggering the execution thereof at a specific time/state along the respiration cycle. Implementations of the systems, devices and methods described above may further include any of the features described in the present disclosure, including any of the features described hereinabove in relation to other system, device and method implementations.
- computer-readable storage medium having stored therein data-analysis algorithm(s), executable by one or more processors, for generating one or more models for prediction of respiration behavior and for providing recommendations, operating instructions and/or functional enhancements related to the operation of automated medical devices and/or related imaging systems.
- the embodiments described in the present disclosure may be implemented in digital electronic circuitry, or in computer software, firmware or hardware, or in combinations thereof.
- the disclosed embodiments may be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on computer storage medium for execution by, or to control the operation of, one or more data processing apparatus.
- the computer program instructions may be encoded on an artificially generated propagated signal, for example, a machine-generated electrical, optical or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus.
- a computer storage medium can be, or be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of any one or more of the above.
- while a computer storage medium is not a propagated signal, a computer storage medium can be a source or destination of computer program instructions encoded in an artificially generated propagated signal.
- the computer storage medium can also be, or be included in, one or more separate physical components or media (for example, multiple CDs, disks, or other storage devices).
- the operations described in the present disclosure can be implemented as operations performed by a data processing apparatus on data stored on one or more computer-readable storage devices or received from other sources.
- the term “data processing apparatus” as used herein may encompass all types of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, a system on a chip/s, or combinations thereof.
- the data processing apparatus can include special purpose logic circuitry, for example, an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
- the apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, for example, code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or combinations thereof.
- the apparatus and execution environment can realize various different computing model infrastructures, such as web services, distributed computing and grid computing infrastructures.
- a computer program (also referred to as a program, software, software application, script or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a standalone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment.
- a computer program may, but need not, correspond to a file in a file system.
- a computer program can be stored in a portion of a file that holds other programs or data, in a single file dedicated to the program in question, or in multiple coordinated files (for example, files that store one or more modules, sub-programs or portions of code).
- a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
- the processes and logic flows described herein can be performed by one or more programmable processors, executing one or more computer programs to perform actions by operating on input data and generating output.
- the processes and logic flows can also be performed by, and an apparatus can also be implemented as, special purpose logic circuitry, for example, an FPGA or an ASIC.
- Processors suitable for the execution of a computer program include both general and special purpose microprocessors, and any one or more processors of any type of digital computer.
- a processor will receive instructions and data from a read-only memory or a random access memory or both.
- the essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data.
- a computer may, optionally, also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, for example, magnetic, magneto-optical discs, or optical discs.
- a computer can be embedded in another device, for example, a mobile phone, a tablet, a personal digital assistant (PDA), a game console, a Global Positioning System (GPS) receiver, or a portable storage device (for example, a USB flash drive).
- Non-volatile memory, media and memory devices suitable for storing computer program instructions and data include, by way of example: semiconductor memory devices, for example, EPROM, EEPROM, random access memories (RAMs), including SRAM, DRAM, embedded DRAM (eDRAM) and Hybrid Memory Cube (HMC), and flash memory devices; magnetic discs, for example, internal hard discs or removable discs; magneto-optical discs; read-only memories (ROMs), including CD-ROM and DVD-ROM discs; solid state drives (SSDs); and cloud-based storage.
- the processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
- the term “cloud computing” is generally used to describe a computing model which enables on-demand access to a shared pool of computing resources, such as computer networks, servers, software applications, and services, and which allows for rapid provisioning and release of resources with minimal management effort or service provider interaction.
- terms such as “processing”, “computing”, “calculating”, “determining”, “estimating”, “assessing” or the like may refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulate and/or transform data, represented as physical (e.g. electronic) quantities within the computing system’s registers and/or memories, into other data similarly represented as physical quantities within the computing system’s memories, registers or other such information storage, transmission or display devices.
- the term “moving target” relates to a mobile target, i.e., a target that is capable of moving within the body of the subject, independently or at least partially due to or during a medical procedure.
- the terms “respiratory cycle”, and “breathing cycle” may be used interchangeably.
- In some embodiments, the terms “model”, “algorithm”, “data-analysis algorithm” and “data-based algorithm” may be used interchangeably.
- the terms “triggering event” and “gating window” may be used interchangeably.
- the terms “user”, “doctor”, “physician”, “clinician”, “technician”, “medical personnel” and “medical staff” are used interchangeably throughout this disclosure and may refer to any person taking part in the performed medical procedure.
- the terms “subject” and “patient” may be used interchangeably, and they may refer either to a human subject or to an animal subject.
- the words “include” and “have”, and forms thereof, are not limited to members in a list with which the words may be associated.
Abstract
Provided are computer-implemented methods and systems for determining or predicting respiration behavior of a subject and synchronization of the operation of automated medical devices and/or imaging systems therewith, to optimize insertion and/or steering of a medical instrument toward a target in a body of a patient.
Description
RESPIRATION ANALYSIS AND SYNCHRONIZATION OF THE OPERATION OF AUTOMATED MEDICAL DEVICES THEREWITH
FIELD OF THE INVENTION
The present disclosure relates to computer-implemented methods and systems for analyzing respiration activity of subjects and synchronizing operation of automated medical devices and/or imaging systems therewith. More specifically, the disclosed methods and systems include collecting data related to respiration cycle(s) of subject(s), analyzing the collected data and synchronizing the operation of automated medical devices and/or imaging systems with the respiration cycle, to facilitate planning, insertion and/or steering of a medical instrument toward an internal target.
BACKGROUND
Various diagnostic and therapeutic procedures used in clinical practice involve the insertion of medical instruments, such as needles and catheters, percutaneously to a subject’s body, and in many cases further involve the steering of the medical instruments within the body, to reach a target region. The target region can be, for example but not limited to, a lesion, a tumor, an organ and/or a vessel; such a target may be any object a user indicates as a target. Examples of procedures requiring insertion and steering of such medical instruments include vaccinations, blood/fluid sampling, drug delivery, regional anesthesia, tissue biopsy, catheter insertion, cryogenic ablation, electrolytic ablation, brachytherapy, neurosurgery, deep brain stimulation, various minimally invasive surgeries, and the like.
The guidance and steering of medical instruments in the body is a complicated task that requires good three-dimensional coordination, knowledge of the patient’s anatomy and a high level of experience. Thus, image-guided automated systems (e.g., robotic) have been proposed for performing these functions.
Some automated systems are based on manipulating robotic arm(s) and some utilize a robotic device which can be attached to the patient’s body or positioned in close proximity thereto. These automated systems typically assist the physician in aligning the medical instrument with a selected insertion point at a desired insertion angle, and the insertion itself is carried out manually by the physician. Some automated systems further include an
insertion mechanism that can insert the instrument toward the target, typically in a linear manner. More advanced automated systems further include non-linear steering capabilities, as described, for example, in U.S. Patents Nos. 8,348,861, 8,663,130 and 10,507,067, and in co-owned U.S. Patent No. US 10,245,110, co-owned U.S. Patent Application Publication No. 2019/290,372, and co-owned International Patent Application Publication No. WO 2021/105,992, all of which are incorporated herein by reference in their entireties.
During the operation of such automated medical devices in various procedures and in various settings, various considerations are taken into account. For example, motion of patient’s organs and tissues due to respiration behavior can have a significant impact in radiology-based interventional procedures, where the appearance and location of tissues are critical to properly analyze the scanned volume and determine the proper timing for inserting the medical instrument toward the target in the subject’s body.
Furthermore, important details, such as location and shape of the target and its surrounding tissues, may not be visualized or detected during part of the breathing cycle when image acquisition takes place (such as, for example, CT scanning). Accordingly, it is of utmost importance that the imaging be executed at the same point/phase during the breathing cycle, so as to minimize the effect of breathing-related movement on the target, internal tissues, organs, and the like and allow proper analysis of the scanned volume and planning and execution of the interventional procedure.
In current practice, breathing instructions are typically given to patients, which include instructions to hold their breath at a specific point/phase during their breathing cycle, to increase the synchronization between imaging initiation and/or medical instrument insertion and the patient’s breathing behavior. However, patients do not always manage to hold their breath at the exact same point/phase of the cycle. Further, providing breathing instructions may not be applicable if the patient is sedated, or if the patient is unable to follow breathing instructions due to a medical or mental condition, or if the patient is a child, for example. Although active systems may sometimes be used to enforce desired breathing patterns in such cases, the use of such systems may lead to increased stress or discomfort for the patient during the medical procedure.
The use of robotic systems for insertion and/or steering of a medical instrument (e.g., needle, probe) in interventional radiology procedures further increases the importance of respiration synchronization due to the need to have a proper and consistent registration
between the robot, the target tissue and the planned trajectory, and the need to control the timing of the insertion and/or steering of the medical instrument using the robotic system.
Thus, there is a need for methods and systems for collecting, processing and analyzing data related to respiration activity of subjects, and synchronizing the operation of imaging systems and/or automated medical devices therewith, so as to facilitate accurate insertion and/or steering of medical instruments by the automated devices and accurately reaching an internal target in a safe and efficient manner.
SUMMARY
According to some embodiments, the present disclosure is directed to systems and computer-implemented methods for determination/identification/prediction of respiration behavior of subject(s) and synchronization of the operation of an imaging system and/or an automated medical device with the identified respiration behavior. The methods and systems may include collecting data related to respiration of the subject(s) and determining and/or predicting the respiration cycle/behavior of the subject, to accordingly allow synchronization of the operation of the imaging system and/or the automated medical device with specific points/states along the breathing cycle, to facilitate planning, insertion and/or steering of a medical instrument toward an internal target. The systems and methods disclosed herein advantageously increase the accuracy of inserting and/or steering of the medical instrument and the corresponding image acquisition (such as, for example, CT scanning) by facilitating the performance/execution thereof at the same points/states of the respiration cycle.
According to some embodiments, for the determination or prediction of respiration behavior or respiration cycle of the subject, various datasets related to respiration of the subject may be obtained and consequently manipulated and/or utilized to generate algorithms or learning-based models to one or more of: identify/determine stages of the breathing cycle, identify/determine patterns in or of the breathing cycle, predict future behavior of the breathing cycle, and the like. The generated algorithms and/or models may consequently be used to synchronize the operation of automated medical devices and/or imaging systems with the determined or predicted breathing cycle, so as to ensure that specific steps (e.g., image acquisition, instrument insertion/steering), in the medical procedure are performed at the same points/stages of the respiration cycle, thereby increasing accuracy, safety and efficiency
of the medical procedure. In some embodiments, the computerized methods for the determination and/or prediction of the breathing cycle of the subject may utilize specific algorithms which may be generated using machine learning tools, deep learning tools, data wrangling tools, and, more generally, AI and data analysis tools. In some embodiments, the specific algorithms may be implemented using artificial neural network(s) (ANN), such as convolutional neural network (CNN), recurrent neural network (RNN), long-short term memory (LSTM), auto-encoder (AE), generative adversarial network (GAN), Reinforcement-Learning (RL) and the like, as further detailed below. In other embodiments, the specific algorithms may be implemented using machine learning methods, such as support vector machine (SVM), decision tree (DT), random forest (RF), and the like. Both “supervised” and “unsupervised” methods may be implemented.
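By way of a non-limiting example, one of the listed "supervised" machine learning methods (a random forest) could be trained to forecast a future respiration segment from a history window, as sketched below; the window sizes, sampling rate and placeholder data are assumptions made only for the example.

```python
# Illustrative "supervised" baseline for respiration forecasting using a random forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

fs = 25                                   # sampling rate (Hz), assumed
n_hist, n_fut = 12 * fs, 3 * fs           # 12 s of history -> 3 s forecast
X_train = np.random.rand(500, n_hist)     # placeholder windows; real data comes from the sensors
y_train = np.random.rand(500, n_fut)      # placeholder ground-truth future segments

model = RandomForestRegressor(n_estimators=100)   # natively supports multi-output regression
model.fit(X_train, y_train)
forecast = model.predict(X_train[:1])             # predicted next 3 s of the breathing signal
```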
In some embodiments, data related to the breathing behavior may be collected prior to, during or resulting from procedures performed by automated medical devices. In some embodiments, data related to the breathing behavior may be collected prior to, during or resulting from procedures performed manually by physicians. In some embodiments, the collected data may be used to generate algorithms/models which may consequently provide, for example, information or prediction regarding the breathing cycle and specific stages thereof, which may further be used for controlling, instructing, enhancing, alerting or providing recommendations regarding various operations and/or operating parameters and/or other parameters related to automated medical devices. Thus, based at least on some of the collected primary data related to the breathing of the subject (also referred to as "raw data") and/or metadata and/or data and/or features derived therefrom ("manipulated data") and/or annotations generated manually or automatically, a data-analysis algorithm may be generated, to provide output which is indicative or predictive of the breathing cycle of the subject, that can consequently enhance the operation of the automated medical devices (and optionally related imaging systems) and/or the decisions of the users (e.g., physicians) of such devices.
In some exemplary embodiments, the automated medical devices are devices for insertion and steering of medical instruments (for example, needles, introducers or probes) in a subject’s body for various diagnostic and/or therapeutic purposes. In some embodiments, the automated medical device may utilize real-time instrument position detection and real-time trajectory updating. For example, when utilizing real-time trajectory updating and
instrument steering according thereto, the most effective spatio-temporal and safe route of the medical instrument to the target within the body may be achieved. Further, safety may be increased as it reduces the risk of harming non-target regions and tissues within the subject’s body, as the trajectory update may take into account obstacles or any other regions along the route, and moreover, it may take into account changes in the real-time location of such obstacles. Additionally, robotic steering following trajectory updating (in a closed-loop or semi-closed loop manner) improves the accuracy of the procedures, thus enabling the reaching of small and hard to reach targets. This is of particular importance in early detection of malignant neoplasms, for example. In addition, it provides increased safety for the patient, as there is a significantly lower risk of human error. Further, in some embodiments, the automated device may be remotely controlled, i.e., from outside the procedure room, such that the procedure may be safer for the medical personnel, as their exposure to harmful radiation and/or pathogens during the procedure is minimized. In some embodiments, the automated medical devices are configured to insert and steer a medical instrument (in particular, the tip of the medical instrument) in the body of the subject, to reach a target region within the subject’s body, to perform various medical procedures, such as ablations, biopsies, fluid drainage, etc. In some embodiments, the operation of the medical devices may be controlled by at least one processor configured to provide instructions, in real-time, to steer the medical instrument (e.g., the tip thereof), toward the target, according to a planned and/or the updated trajectory, while taking into consideration the breathing cycle of the subject as determined according to the methods disclosed herein and the plausible effects thereof on various parameters related to the planning or steering of the medical instrument. In some embodiments, the steering may be controlled by the processor, via a suitable controller. In some embodiments, the steering may be controlled in a closed-loop manner, whereby the processor generates motion commands to the steering device via a suitable controller and receives feedback regarding the real-time location of the medical instrument and/or the target. In some embodiments, the processor(s) may be able to predict the location and/or movement pattern of the target, e.g., using AI-based algorithm(s). In some embodiments, the automated medical device may be configured to operate in conjunction with an imaging system, which may include any type of imaging system, including, but not limited to: X-ray fluoroscopy, CT, cone beam CT, CT fluoroscopy, MRI, ultrasound, or any other suitable imaging modality. In some embodiments, the processor is configured to calculate a trajectory for the medical instrument based on a target, entry point and, optionally, obstacles en route (such as
bones or blood vessels), which may be manually marked by the user, or automatically identified by the processor, on one or more obtained images. In some embodiments, as detailed herein, at least one of the steps of initiating imaging of a region of interest, planning a trajectory, updating a trajectory, inserting and/or steering a medical tool, may be synchronized with the breathing cycle of the subject.
In some embodiments, the respiration related primary datasets collected and utilized by the systems and methods disclosed herein may be used to generate data-analysis algorithm(s) and/or learning-based model(s), which may output, inter alia, prediction of a future time window of the breathing cycle (e.g., any time window between 2 seconds and 10 seconds), which may be used to provide operating instructions for the automated medical device and/or the imaging system and/or instructions/alerts/recommendations to the user of the automated medical device and/or the imaging system.
According to some embodiments, the collected datasets and/or the data derived therefrom may be used for the generation of a training set, which may be part of the generated algorithm/model, or utilized for the generation of the model/algorithm and/or the validation or update thereof. In some embodiments, the training step may be performed in an “offline” manner, i.e., the model may be trained/generated based on a static dataset. In some embodiments, the training step may be performed in an “online” or incremental/continuous manner, in which the model is continuously updated with new incoming data.
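The difference between the "offline" and "online"/incremental training modes can be illustrated with the following sketch, using generic scikit-learn estimators and placeholder data; the specific estimators and array shapes are assumptions, not the models of the present disclosure.

```python
# Illustrative contrast between "offline" training and "online"/incremental updating.
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.svm import SVR

X_static, y_static = np.random.rand(1000, 50), np.random.rand(1000)  # placeholder training data

# Offline: the model is fitted once on a static dataset.
offline_model = SVR().fit(X_static, y_static)

# Online/incremental: the model is updated with every new batch of incoming data.
online_model = SGDRegressor()
for X_batch, y_batch in zip(np.array_split(X_static, 10), np.array_split(y_static, 10)):
    online_model.partial_fit(X_batch, y_batch)
```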
According to some embodiments, there is thus provided a computer-implemented method of generating a data analysis algorithm for determining or predicting respiratory behavior of a subject, which may further be used for providing instructions, recommendations and/or alerts related to insertion of a medical instrument toward a target in a body of a patient.
Certain embodiments of the present disclosure may include some, all, or none of the above advantages. One or more other technical advantages may be readily apparent to those skilled in the art from the figures, descriptions, and claims included herein. Moreover, while specific advantages have been enumerated above, various embodiments may include all, some, or none of the enumerated advantages.
BRIEF DESCRIPTION OF THE DRAWINGS
Some exemplary implementations of the methods and systems of the present disclosure are described with reference to the accompanying drawings. In the drawings, like reference numbers indicate identical or substantially similar elements.
FIG. 1 shows illustration of exemplary breathing behavior of a subject as measured using a pressure sensor, according to some embodiments;
FIGS. 2A-2B show perspective views of an exemplary device (FIG. 2A) and an exemplary console (FIG. 2B) of a system for inserting a medical instrument toward a target, according to some embodiments;
FIG. 3 shows an exemplary non-linear trajectory for a medical instrument to reach an internal target within the body of the subject, according to some embodiments;
FIGS. 4A-4D demonstrate real-time updating of a trajectory and steering of an automated medical instrument according thereto, according to some embodiments.
FIG. 5 shows a flowchart of steps in an exemplary method for planning and steering a medical instrument, utilizing respiration behavior analyzed using data-analysis algorithm(s) to trigger scanning and instrument insertion at a specific time or state/phase of the respiration cycle, according to some embodiments;
FIGS. 6A-6B show an exemplary training module (FIG. 6A) and an exemplary training process (FIG. 6B) for training a data-analysis algorithm, according to some embodiments;
FIGS. 7A-7B show an exemplary inference module (FIG. 7A) and an exemplary inference process (FIG. 7B) for utilizing a data-analysis algorithm, according to some embodiments;
FIG. 8 shows a block diagram illustrating an exemplary method of training a triggering determination model, according to some embodiments;
FIG. 9 shows a block diagram illustrating an exemplary method of training a respiration prediction model, according to some embodiments;
FIG. 10 shows a flowchart illustrating the steps of an exemplary method of utilizing a respiration prediction model, according to some embodiments;
FIGS. 11A-11C show line graphs of measured respiration activity of a subject and predicted respiration activity generated using a respiration prediction model, according to some embodiments;
FIG. 12 shows a flowchart illustrating the steps of an exemplary method of steering a medical instrument toward a moving target utilizing a dynamic trajectory model and further utilizing respiration behavior analyzed using data-analysis algorithm(s) to trigger scanning and instrument insertion at a specific respiration state, according to some embodiments;
DETAILED DESCRIPTION
The principles, uses and implementations of the teachings herein may be better understood with reference to the accompanying description and figures. Upon perusal of the description and figures present herein, one skilled in the art will be able to implement the teachings herein without undue effort or experimentation. In the figures, same reference numerals refer to same parts throughout.
In the following description, various aspects of the invention will be described. For the purpose of explanation, specific details are set forth in order to provide a thorough understanding of the invention. However, it will also be apparent to one skilled in the art that the invention may be practiced without specific details being presented herein. Furthermore, well-known features may be omitted or simplified in order not to obscure the invention.
In some embodiments, there are provided computerized systems and methods for generating and using data analysis algorithms and/or AI-based algorithms for determining or predicting valid triggering events during the respiratory cycle of the subject.
In some embodiments, the valid triggering events may be determined for a future time window/segment of the respiratory cycle of the subject predicted using data analysis algorithms and/or AI-based algorithms.
Reference is now made to FIG. 1, which shows an illustration of several respiratory cycles of a subject measured over a period of time. An exemplary breathing cycle 6 is shown, including an exemplary inhalation (inspiration) stage 2, in which air is inhaled (inserted into the airways/lungs), and an exemplary exhalation (expiration) stage 4, in which air is exhaled (removed from the airways/lungs). The breathing cycles may be similar or identical to, or different from, one another with respect to one or more parameters, such as length, amplitude and/or shape.
The respiratory behavior is known to be non-stationary by nature, i.e., its characteristics can vary with time. The breathing behavior may also vary between patients.
Reference is now made to FIG. 2A, which shows an exemplary automated medical device for inserting and/or steering a medical instrument in a body of a subject, according to some embodiments. As shown in FIG. 2A, automated medical device 20 may include a housing (also referred to as “cover”) 21 accommodating therein at least a portion of the steering mechanism. The steering mechanism may include at least one moveable platform (not shown) and at least two moveable arms 26A and 26B, configured to allow or control movement of an end effector (also referred to as “control head”) 24, at any one of desired movement angles or axes, to provide several degrees of freedom. For example, the steering mechanism may provide up to five degrees of freedom - forward-backward and left-right linear translations, front-back and left-right rotations, and longitudinal needle translation toward the target. In some embodiments six degrees of freedom may be provided - forward-backward and left-right linear translations, front-back and left-right rotations, longitudinal needle translation toward the target, and longitudinal needle drilling toward the target. The moveable arms 26A and 26B may be configured as piston mechanisms. To an end 28 of end effector 24, a suitable medical instrument (not shown) may be connected, either directly or by means of a suitable insertion module. The medical instrument may be any suitable instrument capable of being inserted and steered within the body of the subject, to reach a designated target, wherein the control of the operation and movement of the medical instrument is effected by the end effector 24. The end effector 24 may include at least a portion of a driving mechanism (also referred to as “insertion mechanism”) configured to advance the medical instrument toward the target in the patient’s body. The end effector 24 may be controlled by a suitable control system, as detailed herein.
According to some embodiments, the medical instrument may be selected from, but not limited to: a needle, probe (e.g., an ablation probe), port, introducer, catheter (such as a drainage needle catheter), cannula, surgical tool, fluid delivery tool, or any other suitable insertable tool configured to be inserted into a subject’s body for diagnostic and/or therapeutic purposes. In some embodiments, the medical instrument includes a distal tip at a distal end thereof (i.e., the end which is inserted into the subject’s body).
In some embodiments, the automated medical device 20 may have a plurality of degrees of freedom (DOF) in operating and controlling the movement of the medical
instrument along one or more axes. For example, the device may have up to six degrees of freedom. For example, the device may have at least five degrees of freedom. For example, the device may have five degrees of freedom, including two linear translation DOF (in a first axis), a longitudinal linear translation DOF (in a second axis substantially perpendicular to the first axis) and two rotational DOF. For example, the device may have forward-backward and left-right linear translations facilitated by two moveable platforms, front-back and left-right rotations facilitated by two moveable arms (e.g., piston mechanism), and longitudinal translation toward the subject’s body facilitated by the insertion mechanism. In some embodiments, a control system (i.e., processor and/or controller) may be capable of controlling the steering mechanism (including the moveable platforms and the moveable arms) and the insertion mechanism simultaneously, thus enabling non-linear steering of the medical instrument, i.e., enabling the medical instrument to reach the target by following a non-linear trajectory. In some embodiments, the device may have six degrees of freedom, including the five degrees of freedom described above and, in addition, rotation of the medical instrument about its longitudinal axis (e.g., for drilling purposes). In some embodiments, rotation of the medical instrument about its longitudinal axis may be facilitated by a designated rotation mechanism. In some embodiments, the control system (i.e., processor and/or controller) may be capable of controlling the steering mechanism, the insertion mechanism and the rotation mechanism simultaneously.
In some embodiments, the device may further include a base 23, which allows positioning of the device on or in close proximity to the subject’s body. In some embodiments, the device may be configured for attachment to the subject’s body either directly or via a suitable mounting surface. Attachment of the automated medical device 20 to the mounting surface may be carried out using dedicated latches, such as latches 27A and 27B. In some embodiments, the device may be couplable to a dedicated arm or base which is secured to the patient’s bed, to a cart positioned adjacent the patient’s bed or to an imaging device (if used), and held on the subject’s body or in close proximity thereto.
In some embodiments, the device may include electronic components and motors (not shown) allowing the controlled operation of the automated medical device 20 in inserting and steering the medical instrument. In some exemplary embodiments, the device may include one or more Printed Circuit Board (PCB) (not shown) and electrical cables/wires (not shown) to provide electrical connection between a controller (not shown) and the motors of
the device and other electronic components thereof. In some embodiments, the controller may be embedded, at least in part, within automated medical device 20. In some embodiments, the controller may be a separate component. In some embodiments, the automated medical device 20 may include a power supply (e.g., one or more batteries) (not shown). In some embodiments, the automated medical device 20 may be configured to communicate wirelessly with the controller and/or processor. In some embodiments, automated medical device 20 may include one or more sensors, such as a force sensor and/or an acceleration sensor (not shown). Sensor/s for sensing parameters associated with the interaction between a medical instrument and a bodily tissue, e.g., a force sensor, may be used, and the sensor data may be utilized for monitoring and/or guiding the insertion of the instrument and/or for initiating imaging.
In some embodiments, the housing 21 is configured to cover and protect, at least partially, the mechanical and/or electronic components of automated medical device 20 from being damaged or otherwise compromised. In some embodiments, the housing 21 may include at least one adjustable cover, and it may be configured to protect the device from being soiled by dirt, as well as by blood and/or other bodily fluids, thus preventing/minimizing the risk of cross-contamination between patients.
In some embodiments, the device may further include registration elements disposed at specific locations on the automated medical device 20, such as registration elements 29A and 29B, for registration of the device to the image space, in image-guided procedures. In some embodiments, registration elements may be disposed on the mounting surface to which device 20 may be coupled, either instead or in addition to registration elements disposed on device 20. In some embodiments, the device may include a CCD/CMOS camera mounted on the device and/or on the device’s frame and/or as a separate apparatus, allowing the collection of visual images and/or videos of the patient’s body during a medical procedure.
In some embodiments, the medical instrument is configured to be removably couplable to the device 20, such that the device can be used repeatedly with new medical instruments. In some embodiments, the medical instruments are disposable. In some embodiments, the medical instruments are reusable.
In some embodiments, automated medical device 20 is part of a system for inserting and steering a medical instrument in a subject’s body based on a preplanned and, optionally, real-time updated trajectory. In some embodiments, the system may include the steering and
insertion device 20, as disclosed herein, and a control unit (or - “workstation” or “console”) configured to allow control of the operating parameters of device 20. In some embodiments, the user may operate the device 20 using a pedal or an activation button. In some embodiments, the system may include a remote control unit, which may enable the user to activate the device 20 from a remote location, such as the control room adjacent the procedure room (e.g., CT suite), a different location at the medical facility or even a location outside the medical facility. In some embodiments, the user may operate the device using voice commands.
Reference is now made to FIG. 2B, which shows an exemplary workstation (also referred to as “console”) 25 of an insertion system for inserting a medical instrument toward a target, according to some embodiments. The workstation 25 may include a display 252 and a user interface (not shown). In some embodiments, the user interface may be in the form of buttons, switches, keys, keyboard, computer mouse, joystick, touch-sensitive screen, and the like. The monitor and user interface may be two separate components, or they may form together a single component (e.g., in the form of a touch-screen). The workstation 25 may include one or more suitable processors (for example, in the form of a PC) and one or more suitable controllers, configured to functionally interact with automated medical device 20, to determine and control the operation thereof. The one or more processors may be implemented in the form of a computer (such as a workstation, a server, a PC, a laptop, a tablet, a smartphone or any other processor-based device). In some embodiments, the workstation 25 may be portable (e.g., by having or being placed on a movable platform 254).
In some embodiments, the one or more processors may be configured to perform one or more of: determine (plan) a trajectory for the medical instrument to reach the target; update the trajectory in real-time; present the planned and/or updated trajectory on the monitor 252; control the movement (insertion/steering) of the medical instrument based on the planned and/or updated trajectory by providing executable instructions (directly or via the one or more controllers) to the device; determine the actual location of a tip of medical instrument by performing required compensation calculations; receive, process and visualize on the monitor images or image-views created from a set of images (between which the user may be able to scroll), control operating parameters, and the like; or any combination thereof.
In some embodiments, use of AI-based models (e.g., machine-learning and/or deep-learning based models) requires a “training” stage in which collected data is used to create
(train) models. The generated (trained) models may later be used for “inference” to obtain specific insights, predictions, alerts and/or recommendations when applied to new data during the clinical procedure or at any later time.
In some embodiments, the insertion and steering system and the system creating (training) the algorithms/models may be separate systems (i.e., each of the systems includes a different set of processors, memory modules, etc.). In some embodiments, the insertion and steering system, and the system creating the algorithms/models may be the same system. In some embodiments, the insertion and steering system, and the system creating the algorithms/models may share one or more resources (such as, processors, memory modules, GUI, and the like). In some embodiments, the insertion and steering system, and the system creating the algorithms/models may be physically and/or functionally associated. Each possibility is a separate embodiment.
In some embodiments, the insertion and steering system and the system utilizing the algorithms/models for inference may be separate systems (i.e., each of the systems includes a different set of processors, memory modules, etc.). In some embodiments, the insertion and steering system, and the system utilizing the algorithms/models for inference may be the same system. In some embodiments, the insertion and steering system, and the system utilizing the algorithms/models for inference may share one or more resources (such as, processors, memory modules, GUI, and the like). In some embodiments, the insertion and steering system, and the system utilizing the algorithms/models for inference may be physically and/or functionally associated. Each possibility is a separate embodiment.
In some embodiments, the device may be configured to operate in conjunction with an imaging system, including, but not limited to: X-Ray, CT, cone beam CT, CT fluoroscopy, MRI, ultrasound, or any other suitable imaging modality, such that inserting and steering of the medical instrument based on a planned and, optionally, real-time updated 2D or 3D trajectory of the medical instrument, is image-guided.
According to some embodiments, during the operation of the automated medical device, various types of data may be generated, accumulated and/or collected, for further use and/or manipulation. In some embodiments, the data may be divided into various types/sets of data, including, for example, data related to operating parameters of the device, data related to clinical procedures, data related to the treated patient, data related to administrative information, and the like, or any combination thereof.
In some embodiments, such collected datasets may be collected from one or more (i.e., a plurality) of automated medical devices, operating under various circumstances (for example, different procedures, different medical instruments, different patients, different locations and operating staff, etc.), to thereby generate a large database ("big data") that can be used, utilizing suitable data analysis tools and/or AI-based tools, to ultimately generate algorithms/models that allow performance enhancements, automatic control or affecting control (i.e., by providing recommendations) of the medical devices. Thus, by generating such advantageous and specialized algorithms/models, enhanced control and/or operation of the medical device may be achieved.
Reference is now made to FIG. 3, which schematically shows a trajectory planned using a processor, such as the processor(s) described above, for delivering a medical instrument to a target within a body of a subject, using an automated medical device, such as the automated device of FIG. 2A. In some embodiments, the planned trajectory may be linear or substantially linear. In some embodiments, and as shown in FIG. 3, the trajectory may be a non-linear trajectory having any suitable/acceptable degree of curvature.
In some embodiments, the one or more processors may calculate a planned trajectory for the medical instrument to reach the target. The planning of the trajectory and the controlled steering of the medical instrument according to the planned trajectory may be based on a model of the medical instrument as a flexible beam having a plurality of virtual springs connected laterally thereto to simulate lateral forces exerted by the tissue on the instrument, thereby calculating the trajectory through the tissue on the basis of the influence of the plurality of virtual springs on the instrument, and utilizing an inverse kinematics solution applied to the virtual springs model to calculate the required motion to be imparted to the instrument to follow the planned trajectory. The processor may then provide motion commands to the automated medical device, for example via a controller. In some embodiments, steering of the medical instrument may be controlled in a closed-loop manner, whereby the processor generates motion commands to the automated medical device and receives feedback regarding the real-time location of the medical instrument (e.g., the distal tip thereof), which is then used for real-time trajectory corrections. For example, if the medical instrument has deviated from the planned trajectory toward the target, the processor may calculate the motion to be applied to the robot to reduce the deviation in order to reach the target. The real-time location of the medical instrument and/or the corrections may be
calculated and/or applied using data-analysis models/algorithms. In some embodiments, certain deviations of the medical instrument from the planned trajectory, for example deviations which exceed a predetermined threshold, may require recalculation of the trajectory for the remainder of the procedure, as described in further detail hereinbelow.
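By way of a non-limiting illustration, the following minimal sketch (in Python) shows one possible closed-loop correction step of the kind described above; it uses a simple proportional correction and a fixed replanning threshold rather than the virtual-springs/inverse-kinematics solution, and all function names, thresholds and gains are hypothetical.

```python
import numpy as np

# Hypothetical closed-loop correction step: the proportional-correction rule and
# all numeric values are illustrative assumptions, not the virtual-springs /
# inverse-kinematics solution described in the text.
def steering_correction(tip_xy, planned_xy, replan_threshold_mm=5.0, gain=0.5):
    """Return (lateral_command_mm, needs_replan) for one control cycle.

    tip_xy     -- measured real-time tip position (x, y) in mm
    planned_xy -- nearest point on the planned trajectory (x, y) in mm
    """
    deviation = np.asarray(tip_xy, dtype=float) - np.asarray(planned_xy, dtype=float)
    if float(np.linalg.norm(deviation)) > replan_threshold_mm:
        # Deviation exceeds the predetermined threshold: request trajectory recalculation.
        return np.zeros(2), True
    # Otherwise command a small lateral motion back toward the planned path.
    return -gain * deviation, False

command, replan = steering_correction(tip_xy=(12.0, 3.5), planned_xy=(12.0, 2.0))
print(command, replan)  # small corrective command, no replanning needed
```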
As shown in FIG. 3, a trajectory 32 is planned between an entry point 36 and a target 38. The planning of the trajectory 32 may take into account various variables, including, but not limited to: the type of the medical instrument to be used and its characteristics, the dimensions of the medical instrument (e.g., length, gauge), the type of imaging modality (such as, CT, CBCT, MRI, X-Ray, CT fluoroscopy, ultrasound and the like), the tissues through which the medical instrument is to be inserted, the location of the target, the size of the target, the insertion point, the angle of insertion (relative to one or more axes), milestone points (“secondary targets” through which the medical instrument should pass) and the like, or any combination thereof. In some embodiments, at least one of the milestone points may be a pivot point, i.e., a predefined point along the trajectory in which the deflection of the medical instrument is prevented or minimized, to maintain minimal pressure on the tissue (even if this results in a larger deflection of the instrument in other parts of the trajectory). In some embodiments, the planned trajectory is an optimal trajectory based on one or more of these parameters. Further taken into account in determining the trajectory may be various obstacles 39A-39C, which may be identified along the path and which should be avoided, to prevent damage to neighboring tissues and/or to the medical instrument. According to some embodiments, safety margins 34 may be marked along the planned trajectory 32, to ensure a minimal distance between the trajectory 32 and potential obstacles en route. The width of the safety margins may be symmetrical in relation to the trajectory 32. The width of the safety margins 34 may be asymmetrical in relation to the trajectory 32. According to some embodiments, the width of the safety margins 34 may be preprogrammed. According to some embodiments, the width of the safety margins 34 may be automatically set, or recommended to the user, by the processor, based on data obtained from previous procedures using a data analysis algorithm. According to some embodiments, the width of the safety margins 34 may be determined and/or adjusted by the user. Further shown in FIG. 3 is an end of an end effector 30 of the exemplary automated medical device, to which the medical instrument (not shown in FIG. 3) is coupled, as virtually displayed on the monitor, to indicate its position and orientation.
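The following is a non-limiting sketch, under assumed geometry and example values, of how a candidate trajectory could be checked against marked obstacles and a safety margin; the circular obstacle representation, the point-sampling scheme and the numeric values are illustrative only.

```python
import numpy as np

# Illustrative clearance check for a candidate trajectory; obstacle radii,
# margin values and the sampling of the path are assumptions for the sketch.
def trajectory_is_clear(trajectory_pts, obstacles, safety_margin_mm=3.0):
    """trajectory_pts: (N, 2) array of points sampled along the planned path.
    obstacles: list of (center_xy, radius_mm) tuples for obstacle regions."""
    pts = np.asarray(trajectory_pts, dtype=float)
    for center, radius in obstacles:
        dists = np.linalg.norm(pts - np.asarray(center, dtype=float), axis=1)
        if np.any(dists < radius + safety_margin_mm):
            return False  # the path violates the safety margin around this obstacle
    return True

path = np.stack([np.linspace(0, 100, 50), np.linspace(0, 40, 50)], axis=1)
print(trajectory_is_clear(path, obstacles=[((50.0, 10.0), 5.0)]))  # True
```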
The trajectory 32 shown in FIG. 3 is a planar trajectory (i.e., two dimensional). In some embodiments, steering of the medical instrument is carried out according to a planar trajectory, for example trajectory 32. In some embodiments, the calculated planar trajectory may be superpositioned with one or more additional planar trajectories, to form a three-dimensional (3D) trajectory. Such additional planar trajectories may be planned on one or more different planes, which may be perpendicular to the plane of the first planar trajectory (e.g., trajectory 32) or otherwise angled relative thereto. According to some embodiments, the 3D trajectory may include any type of trajectory, including a linear trajectory or a non-linear trajectory.
According to some embodiments, the steering of the medical instrument is carried out in a 3D space, wherein the motion instructions are determined on each of the planes of the superpositioned planar trajectories, and are then superpositioned to form the steering in the three-dimensional space. The data/parameters/values thus obtained during the steering of the medical instrument by the automated medical device can be used as data/parameters/values for the generation/training and/or utilization/inference of the data-analysis model(s)/algorithm(s).
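The following minimal sketch illustrates, under the assumption that each planar trajectory is parameterized by insertion depth, how two planar trajectories could be superpositioned into a single three-dimensional trajectory; the parameterization and values are illustrative only.

```python
import numpy as np

# Minimal sketch of combining two planar trajectories defined on perpendicular
# planes into a single 3D trajectory; the parameterization by insertion depth z
# is an assumption made for illustration.
def superpose_planar_trajectories(z, x_of_z, y_of_z):
    """z: (N,) insertion depths; x_of_z, y_of_z: lateral offsets planned on the
    x-z and y-z planes respectively. Returns an (N, 3) 3D trajectory."""
    return np.stack([x_of_z, y_of_z, z], axis=1)

z = np.linspace(0.0, 80.0, 5)     # insertion depth in mm
x = 0.002 * z ** 2                # curved planar trajectory in the x-z plane
y = np.zeros_like(z)              # straight planar trajectory in the y-z plane
print(superpose_planar_trajectories(z, x, y))
```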
Reference is now made to FIGS. 4A-4D, which demonstrate real-time updating of a trajectory and steering a medical instrument according thereto, according to some embodiments. As detailed herein, the exemplary planned and updated trajectories presented may be calculated using a processor executing the models and methods disclosed herein, such as the processor(s) of the insertion system described in FIG. 2B, and the insertion and steering of the medical instrument toward the predicted target location according to the planned and updated trajectories may be executed using an automated medical device, such as the automated device of FIG. 2A. The trajectories shown in FIGS. 4A-4D are shown on CT image-views, however it can be appreciated that the planning can be carried out similarly on images obtained from other imaging systems, such as ultrasound, MRI and the like.
FIG. 4A shows an automated medical device 150 mounted on a subject’s body (a cross-section of which is shown in FIGS. 4A-4D) and a planned (initial) trajectory 160 from an entry point (not shown) toward the initial position of a target 162. According to some embodiments, once the planned trajectory 160 has been determined, checkpoints along the trajectory may be set. Checkpoints may be used to pause the insertion of the automated medical device 150 and initiate imaging of the target (region of interest), to verify the position
of a medical instrument 170, (specifically, in order to verify that the medical instrument (e.g., the distal tip thereof) follows the planned trajectory), to monitor the location of marked obstacles and/or identify previously unmarked obstacles along the trajectory, and to verify the target’s position, such that recalculation of the trajectory may be initiated, if the user chooses to do so, before advancing the instrument to the next checkpoint/the target. The checkpoints may be manually set by the user, or they may be automatically set or recommended by the processor, as described in further detail hereinbelow.
In some embodiments, the planned trajectory 160 may be a linear or substantially linear trajectory. In some embodiments, if necessitated (for example, due to obstacles), the planned trajectory may be a non-linear trajectory. As further detailed below, the planned trajectory may be updated in real-time based on the real-time position of the medical instrument (for example, the distal tip thereof) and/or the real-time position of the target and/or the real-time positions of obstacle/s, or based on tissue and target movement predictions generated by one or more machine learning models. The initial target location may be obtained manually (i.e., marked by the user) or automatically (i.e., determined by the processor).
FIG. 4B shows medical instrument 170 being inserted into the subject’s body, along the planned trajectory 160. As shown in FIG. 4B, the target has moved from its initial position to new (updated) position 162’ as a result of, for example but not limited to, the advancement of the medical instrument within the tissue, respiration cycle behavior or patient movements, as detailed herein. In some embodiments, the determination of the real-time location of the target may be performed manually by the user, i.e., the user visually identifies the target in images (acquired continuously, or initiated manually or automatically, for example when the instrument reaches a checkpoint), and marks the new target position on the image using the GUI. In some embodiments, the determination of the real-time target location may be performed automatically by a processor using image processing techniques and/or data-analysis algorithm(s). In some embodiments, the trajectory may be updated based on the determined real-time position of the target. In some embodiments, the subsequent movement of the target is predicted, for example using a target movement model, and the trajectory may then be updated based on the predicted location (e.g., the end-point location) of the target. In some embodiments, the updating of the trajectory based on the predicted location of the target may be performed automatically, by utilizing one or more of the AI models, including the
respiration behavior model, tissue movement model, target movement model, trajectory model and any suitable sub-model (or individual model).
According to some embodiments, recalculation of the trajectory may also be required if, for example, an obstacle is identified along the trajectory. Such an obstacle may be an obstacle which was marked (manually or automatically) prior to the calculation of the planned trajectory but tissue movement, e.g., tissue movement resulting from, for example but not limited to, the advancement of the instrument within the tissue, respiration cycle behavior or patient movements, caused the obstacle to move such that it entered the planned path. In some embodiments, the obstacle may be a new obstacle, i.e., an obstacle which was not visible in the image (or set of images) based upon which the planned trajectory was calculated, and became visible during the insertion procedure.
In some embodiments, the user may be prompted to initiate an update (recalculation) of the trajectory. In some embodiments, recalculation of the trajectory, if required, is executed automatically by the processor and the insertion of the medical instrument automatically continues according to the updated trajectory. In some embodiments, recalculation of the trajectory, if required, is executed automatically by the processor, however the user is prompted to confirm the recalculated trajectory before advancement of the medical instrument (e.g., to the next checkpoint or to the target) according to the updated trajectory can be resumed.
As shown in FIG. 4C, an updated trajectory 160' may be calculated based on the predicted end-point location of the target 162’’, to facilitate the medical instrument 170 reaching the target at its end-point location. In some embodiments, the trajectory may be updated based on the real-time location of the target. In such embodiments, the trajectory may first be updated so as to reach the target at its position 162’, and then updated again so as to reach the target at its end-point location 162’’ after movement of the target from position 162’ to end-point location 162’’ has been detected. As shown, although the preplanned trajectory 160 was linear, the recalculation of the trajectory, e.g., using a learning-based model, due to movement of the target, resulted in the medical instrument 170, specifically the distal tip of medical instrument 170, following a non-linear trajectory to accurately reach the target.
FIG. 4D summarizes the target movement during the procedure shown in FIGS. 4A-4C, from an initial target location 162 to an updated target location 162’ and finally to an
end-point target location 162’’. In some embodiments, the movement of the target during the procedure may be predicted by a target movement model, which may be further used (optionally with additional models, such as, breathing behavior model and/or tissue movement model) to update the trajectory utilizing the trajectory model, to thereby facilitate the medical instrument 170 reaching the target at its end-point location in an optimal manner, as detailed herein. Also shown in FIG. 4D are the planned trajectory 160 and the updated trajectory 160’, which allowed the medical instrument 170 to reach the moving target, without having to remove and re-insert the instrument.
According to some embodiments, the target, insertion point and, optionally, obstacle/s, may be marked manually by the user. According to other embodiments, the processor of the insertion system (or of a separate system) may be configured to identify and mark at least one of the target, the insertion point and the obstacle/s, and the user may, optionally, be prompted to confirm or adjust the processor’s proposed markings. In such embodiments, the target and/or obstacle/s may be identified using known image processing techniques and/or data-analysis models/algorithms, based on data obtained from previous procedures. The insertion point may be suggested based solely on the obtained images, or, alternatively or additionally, on data obtained from previous procedures using data-analysis models/algorithms.
Reference is now made to FIG. 5, which is a flowchart 60 of an exemplary method for planning a medical instrument trajectory and steering the medical instrument according to the planned trajectory, and utilizing respiration behavior analyzed using data-analysis algorithm(s) to trigger scanning and instrument insertion at a specific time or state/phase of the respiration cycle, according to some embodiments.
At step 601, respiration behavior of the patient is analyzed using a data analysis algorithm. In some embodiments, as further detailed hereinbelow, the respiration behavior data analysis algorithm may be used to predict a future segment of the patient’s respiration, which allows triggering of various consequent medical actions, including, for example, imaging and/or insertion and/or steering of a medical tool, such that the triggering is performed at a specific time (time point or time range) during the respiration cycle, thereby ensuring synchronization between the execution of different actions prior or during the medical procedure and the respiration cycle. In some embodiments, the respiration behavior is analyzed prior to commencement of the medical procedure, and a respiration behavior
baseline is established for the specific patient. In some embodiments, the patient may be requested to cough, clear his/her throat, perform a sudden movement, etc., so as to analyze how the patient’s breathing activity is influenced by such events. In some embodiments, the breathing behavior of the patient may change during the course of the procedure, for example, due to the gradual effect of sedation on the patient or due to a change in the patient’s stress levels. Thus, in some embodiments, the respiration of the patient is continuously monitored throughout the procedure (e.g., by means of a respiration sensor), and the respiration activity is continuously analyzed and taken into consideration in subsequent steps of the disclosed method.
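By way of a non-limiting illustration, the following sketch analyzes a synthetic respiration signal to estimate the average cycle duration and the fraction of low-motion ("lull") samples; the sampling rate, the crude peak-detection rule and the thresholds are assumptions and not part of the disclosed method.

```python
import numpy as np

# Illustrative baseline analysis of a respiration signal: estimate cycle duration
# from peaks and flag low-motion ("lull") samples. Sampling rate and thresholds
# are assumed example values.
def analyze_respiration(signal, fs_hz=50.0, lull_slope_threshold=0.02):
    signal = np.asarray(signal, dtype=float)
    t = np.arange(signal.size) / fs_hz
    # crude peak detection: local maxima above the signal mean
    peaks = [i for i in range(1, signal.size - 1)
             if signal[i] > signal[i - 1] and signal[i] > signal[i + 1]
             and signal[i] > signal.mean()]
    cycle_s = float(np.mean(np.diff(t[peaks]))) if len(peaks) > 1 else float("nan")
    slope = np.abs(np.gradient(signal, t))
    lull_fraction = float(np.mean(slope < lull_slope_threshold))
    return {"mean_cycle_s": cycle_s, "lull_fraction": lull_fraction}

t = np.arange(0, 30, 1 / 50.0)
chest = np.sin(2 * np.pi * t / 4.0)   # synthetic ~4 s breathing cycle
print(analyze_respiration(chest))
```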
At step 602, planning imaging (such as, for example, a CT scan) of a region of interest is triggered, at time ttrig of the respiration cycle. Time ttrig of the respiration cycle may be any desired or suitable time point during the respiration cycle, wherein ttrig is a specific respiration time or state of the respiration cycle. For example, ttrig may be determined to be at the peak region/point of the inhalation phase, at a specific point/region of the exhalation phase, at the beginning of the pause duration between two consecutive respiration cycles (also referred to as “Lull” or “automatic pause”), and the like, or any other suitable region/point/state during the respiration cycle. In some embodiments, ttrig is determined to be the start/onset of a triggering event (e.g., lull period, but not limited to). In some embodiments, the analysis of the patient’s respiration activity may assist the user (e.g., physician, technician) in determining also the imaging duration and/or the scan dose, and may thus enable reducing the radiation dose to which the patient and the medical staff are exposed. In some embodiments, triggering planning imaging may refer to automatic initiation of the scan, for example, via direct interface between a processor of the automated medical system and the imaging system. In other embodiments, triggering planning imaging may refer to generating an alert/instruction to the user (e.g., physician) to manually initiate the imaging. In such embodiments, the triggering may be in the form of a countdown (for example, a countdown from 5, a countdown from 4, a countdown from 3, etc.), so as to allow the user to be prepared to timely initiate the imaging and minimize any possible delay. In some embodiments, a radiation sensor/detector may be used to verify the exact start and end points of actual image acquisition to allow proper synchronization between the scan and the respiration cycle while minimizing possible errors due to scanner latencies, scanner speed, human operator reaction time, etc. In some embodiments, a scanner-specific calibration step may be required to
configure the system to accurately perform the described flow while taking into consideration the exact characteristics of the imaging system used during the procedure, such as imaging system latencies, imaging speed, etc. In some embodiments, a triggering deviation error ε (epsilon) may be defined and used during the triggering prediction process to allow some variability of trigger location on the time axis which will comply with ttrig ± ε.
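The following is a minimal sketch of the triggering-window test described above, assuming a fixed scanner latency and example values for ε; the prediction of ttrig itself and all numeric values are assumptions made for illustration.

```python
# Minimal sketch of the triggering-window test; the latency value and the way
# t_trig is predicted are assumptions for illustration.
def should_trigger(t_now_s, t_trig_s, epsilon_s=0.15, scanner_latency_s=0.30):
    """Fire the trigger when the action, issued now and delayed by the scanner
    latency, would land within t_trig +/- epsilon."""
    t_effective = t_now_s + scanner_latency_s
    return abs(t_effective - t_trig_s) <= epsilon_s

# e.g. a lull predicted to start 0.32 s from now -> trigger immediately,
# so the scan begins inside the gating window.
print(should_trigger(t_now_s=0.0, t_trig_s=0.32))   # True
print(should_trigger(t_now_s=0.0, t_trig_s=1.50))   # False
```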
Next, at step 603, a trajectory is calculated for the medical instrument from an entry point to a target in the patient’s body, as detailed above (for example, in FIG. 3 and FIGs. 4A-4D), using, inter alia, the planning scan. In some embodiments, data-analysis algorithm(s), such as learning-based models, may be used to determine or recommend to the user one or more of the location of the target, an optimal entry point position, location of “no-fly” zones and an optimal trajectory for the procedure. In some embodiments, once the trajectory has been calculated, checkpoints may be set along the trajectory. Checkpoints may be used to pause the insertion/steering of the medical instrument and initiate imaging of the region of interest, to verify the position of the medical instrument (specifically, in order to verify that the instrument (e.g., the distal tip thereof) follows the planned trajectory), to monitor the location of the marked obstacles and/or identify previously unmarked obstacles along the trajectory, and to verify the target’s position, such that recalculation of the trajectory may be initiated, if the user chooses to do so, before advancing the medical instrument to the next checkpoint/to the target. The checkpoints may be manually set by the user, or they may be automatically set or recommended by the processor. In some embodiments, the checkpoints may be spaced apart (including the first checkpoint from the entry point and the last checkpoint from the target) at an essentially similar distance along the trajectory, for example every 20 millimeters (mm), every 30 millimeters, every 40 millimeters, or any other appropriate distance. In some embodiments, the characteristics of the patient’s breathing behavior, including, for example, the average duration of a single breathing cycle and the average duration of triggering event (e.g., the lull period), may be used in the determination (or recommendation) of the number and location of checkpoints along the trajectory. For example, if the lull period is chosen as the triggering event, and a certain patient has short lull periods, the processor may recommend to the user to set the checkpoints 15mm or 20mm apart, so that the instrument can be advanced from one checkpoint to the next during a single lull period.
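A non-limiting sketch of how checkpoint spacing could be recommended from the patient's lull duration and an assumed insertion speed is given below; the speed, lull duration and maximum spacing are illustrative example values.

```python
# Illustrative checkpoint-spacing recommendation: space checkpoints so one segment
# can be traversed within a single lull period. Speed, lull duration and the
# spacing cap are assumed example values.
def recommend_checkpoint_spacing_mm(lull_duration_s, insertion_speed_mm_s=10.0,
                                    max_spacing_mm=40.0):
    spacing = lull_duration_s * insertion_speed_mm_s
    return min(spacing, max_spacing_mm)

def place_checkpoints(entry_to_target_mm, spacing_mm):
    n_segments = max(1, round(entry_to_target_mm / spacing_mm))
    step = entry_to_target_mm / n_segments
    return [round(step * i, 1) for i in range(1, n_segments)]

spacing = recommend_checkpoint_spacing_mm(lull_duration_s=1.8)   # short lulls -> 18 mm
print(spacing, place_checkpoints(entry_to_target_mm=90.0, spacing_mm=spacing))
```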
Optionally, if required, registration imaging (for example, registration CT scan) is triggered at optional step 604, at time ttrig, to register the automated medical device, to which the medical instrument is coupled as shown in Figs. 5C-5D, to the image space. In some embodiments, the automated medical device may include registration elements, which are visible in generated images, disposed at specific locations on the device. Thereby, synchronization of distinct imaging events is achieved as they are triggered at the same time (i.e., time ttrig during the breathing cycle).
At step 605, insertion/steering of the medical instrument is triggered at time ttrig, to thereby ensure that the insertion/steering of the medical instrument and corresponding imaging (e.g., scan) are executed at the same point/phase of the breathing cycle (i.e., time ttrig), thus ensuring that the state of the anatomical volume during the insertion/steering of the medical instrument matches the state of the anatomical volume captured during the planning and/or registration imaging. In some embodiments, the medical instrument is advanced to the next checkpoint during a single triggering event (also referred to as “gating window”). In some embodiments, the characteristics of the patient’s breathing behavior, including, for example, the average duration of a single breathing cycle and the average duration of the triggering event (e.g., the lull period but not limited to), may be taken into consideration in the insertion/steering of the instrument. For example, if the duration of the triggering event does not allow advancement of the instrument to the next checkpoint in a single triggering event, the insertion steps may be split into two or more segments, such that the insertion segments are executed during consecutive triggering events (i.e., initiated at consecutive ttrig occurrences). Alternatively, the insertion speed may be increased in certain insertion steps, in order to enable the instrument to reach the next checkpoint during a single triggering event. In some embodiments, the characteristics of the patient’s breathing behavior, including the characteristics of the triggering event, may be taken into consideration, for example by increasing/decreasing the insertion speed or splitting the insertion step (e.g., distance between consecutive checkpoints) into two or more segments, when the instrument is to be advanced in sensitive areas or conditions, such as, transition between tissue layers, insertion into or in close proximity to the pleura, insertion when there is a detected risk of clinical complications such as pneumothorax or bleeding, final insertion step before reaching a small target, etc. In some embodiments, a desired triggering confidence level may be defined by the user for the entire procedure and/or for specific stages of the procedure and/or for specific insertion
steps/steering legs according to the procedure planning or based on decisions made in real-time during the procedure. Such user-specific triggering confidence settings may be useful in scenarios where a medical operation includes a relatively high risk or is relatively complex and possible errors or deviations in respiration triggering should be minimized or avoided completely. For example, by increasing the required respiration triggering confidence, any possible automated operation (e.g., steering of the medical instrument) will be performed only in case the trigger prediction confidence is sufficiently high, while refraining from triggering medical operation as long as confidence is lower than the threshold defined by the user. On the other hand, in scenarios where respiration synchronization is less critical, the user can decrease the required triggering confidence level and by that possibly decrease procedure duration, procedure cost, etc.
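The following sketch illustrates, with assumed values, how an insertion step could be split across consecutive triggering events and gated by a user-defined triggering confidence threshold; it is an illustration of the logic described above, not a prescribed implementation.

```python
import math

# Illustrative gating logic combining segment splitting with the user-defined
# triggering confidence threshold; all numeric values are assumptions.
def plan_gated_insertion(step_mm, gating_window_s, speed_mm_s,
                         trigger_confidence, min_confidence=0.9):
    if trigger_confidence < min_confidence:
        return None  # refrain from triggering until prediction confidence recovers
    max_per_window_mm = gating_window_s * speed_mm_s
    n_segments = max(1, math.ceil(step_mm / max_per_window_mm))
    return [step_mm / n_segments] * n_segments   # executed on consecutive t_trig events

print(plan_gated_insertion(step_mm=30.0, gating_window_s=1.2, speed_mm_s=10.0,
                           trigger_confidence=0.95))   # [10.0, 10.0, 10.0]
print(plan_gated_insertion(step_mm=30.0, gating_window_s=1.2, speed_mm_s=10.0,
                           trigger_confidence=0.6))    # None
```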
In some embodiments, triggering insertion/steering of the medical instrument may refer to automatic activation of the automated device. In other embodiments, triggering insertion/steering of the medical instrument may refer to generating an instruction/alert to the user to manually activate the automated device, either via the workstation (e.g., by depressing a pedal) or via a remote control unit (e.g., by pressing or rotating an activation button). In such embodiments, the triggering may be in the form of a countdown, so as to allow the user to be prepared to timely activate the automated device and minimize any possible delay.
At step 606, confirmation imaging (e.g., CT scan) is triggered at time ttrig. In some embodiments, the confirmation imaging may be triggered upon the instrument reaching a checkpoint set by the user or the processor along the trajectory. In some embodiments, the insertion/steering of the instrument may be executed continuously, and the confirmation imaging may be triggered at predetermined time(s) and/or expected instrument positions along the trajectory. In some embodiments, triggering confirmation imaging may refer to automatic initiation of the imaging, for example, via direct interface between a processor of the automated medical device and the imaging system. In other embodiments, triggering confirmation imaging may refer to generating an alert/instruction to the user to manually initiate the imaging. In such embodiments, the triggering may be in the form of a countdown, as described above, so as to allow the user to be prepared to timely initiate the imaging and minimize any possible delay. In some embodiments, a radiation sensor/detector may be used to verify proper synchronization between the scan and the respiration cycle.
At step 607, based on the obtained confirmation imaging, the real-time (actual) position of the medical instrument (e.g., of the distal tip thereof) and the target may be determined, as detailed herein. In some embodiments, various other real-time parameters may be determined, including, for example, reaching of checkpoints, the position of known or new obstacles/no-fly zones, and the like.
At step 608, based on the determined real-time positions of the target and the medical instrument, it is determined if the medical instrument reached the target. If it is determined that the medical instrument has reached the target, the insertion/steering process ends 609. If it is determined that the medical instrument has not reached the target then, at step 610, the trajectory is updated, if required due to target movement, and steps 605-608 are repeated until the target is reached. Thus, the method disclosed in FIG. 5 allows synchronization of various steps, including imaging and inserting/steering of the medical instrument, with the patient’s breathing cycle, by triggering the execution thereof at a specific time/state of the respiration cycle. According to some embodiments, the triggered medical operation/action (such as, for example, triggering a scan or triggering insertion/steering of the medical instrument) may be performed automatically. According to some embodiments, triggering the medical operation/action may include issuing instructions or alerts to the user to execute the action.
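For illustration only, the loop of steps 605-610 could be organized as in the following schematic sketch, where every callable is a placeholder for an operation described in the text rather than an actual system API.

```python
# Schematic of the loop in steps 605-610; every function name below is a
# placeholder for an operation described in the text, not an actual system API.
def insertion_loop(trigger, advance_instrument, acquire_confirmation_image,
                   localize, target_reached, update_trajectory, max_steps=50):
    for _ in range(max_steps):
        trigger()                              # step 605: trigger at t_trig
        advance_instrument()                   # advance to the next checkpoint
        image = acquire_confirmation_image()   # step 606: confirmation imaging at t_trig
        tip, target = localize(image)          # step 607: real-time positions
        if target_reached(tip, target):        # step 608
            return True                        # step 609: procedure ends
        update_trajectory(tip, target)         # step 610: update if the target moved
    return False

# Toy usage with trivial placeholder callables.
done = insertion_loop(
    trigger=lambda: None,
    advance_instrument=lambda: None,
    acquire_confirmation_image=lambda: "image",
    localize=lambda img: ((0.0, 95.0), (0.0, 100.0)),
    target_reached=lambda tip, tgt: abs(tip[1] - tgt[1]) < 10.0,
    update_trajectory=lambda tip, tgt: None)
print(done)  # True
```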
According to some embodiments, in order to analyze the respiration behavior, various data sets related to respiration of the subject(s) may be obtained. According to some embodiments, the various obtained datasets may be used for the training, construction and/or validation of respiration related algorithm(s) or learning-based models, as detailed herein. The respiration related datasets or data parameters or values may be obtained from one or more sensor types, including, for example, but not limited to: respiration sensor, stretch sensor, pressure sensor, accelerometer, ECG sensor, motion sensor positioned on the automated device and/or the patient, etc. Data may further be obtained from optical devices (e.g., camera, laser), real-time or semi real-time medical imaging, etc. In some embodiments, the datasets may include such data parameters or values as, but not limited to: voltage, pressure, stretch level, power, acceleration, speed, coordinates, frequency, etc. In some embodiments, the datasets may further include patient- specific data parameters or values, including, for example, age, gender, race, relevant medical history, vital signs before/after/during the procedure, body dimensions (height, weight, BMI, circumference,
etc.), current medical condition, pregnancy, smoking habits, demographic data, and the like, or any combination thereof. Each possibility is a separate embodiment.
According to some embodiments, a training module (also referred to as "learning module") may be used to train an AI model (e.g., machine learning (ML) or deep learning (DL)-based model) to be used in an inference module, based on the datasets and/or the features extracted therefrom and/or additional metadata, in the form of annotations (e.g., labels, bounding-boxes, segmentation maps, visual locations markings, etc.). In some embodiments, the training module may constitute part of the inference module or it may be a separate module. In some embodiments, a training process (step) may precede the inference process (step). In some embodiments, the training process may be on-going and may be used to update/validate/enhance the inference step (see “active-learning” approach described herein). In some embodiments, the inference module and/or the training module may be located on a local server (“on premise”), a remote server (such as, a server farm or a cloud-based server) or on a computer associated with the automated medical device. According to some embodiments, the training module and the inference module may be implemented using separate computational resources. In some embodiments, the training module may be located on a server (local or remote) and the inference module may be located on a local computational resource (computer), or vice versa. According to some embodiments, both the training module and the inference module may be implemented using common computational resources, i.e., processors and memory components shared therebetween. In some embodiments, the inference module and/or the training module may be located or associated with a controller (or steering system) of an automated medical device. In such embodiments, a plurality of inference modules and/or learning modules (each associated with a medical device or a group of medical devices), may interact to share information therebetween, for example, utilizing a communication network. In some embodiments, the model(s) may be updated periodically (for example, every 1-36 weeks, every 1-12 months, etc.). In some embodiments, the model(s) may be updated based on other business logic. In some embodiments, the processor(s) of the automated medical device (e.g., the processor of the insertion system) may run/execute the model(s) locally, including updating and/or enhancing the model(s).
According to some embodiments, during training of the model (as detailed below), the learning module (either implemented as a separate module or as a portion of the inference
module), may be used to construct a suitable algorithm (such as, a classification algorithm), by establishing relations/connections/patterns/correspondences/correlations between one or more variables of the primary datasets and/or between parameters derived therefrom. In some embodiments, the learning may be supervised learning (e.g., classification, object detection, segmentation and the like). In some embodiments, the learning may be unsupervised learning (e.g., clustering, anomaly detection, dimensionality reduction and the like). In some embodiments, the learning may be reinforcement learning. In some embodiments, the learning may use a self-learning approach. In some embodiments, the learning process is automatic. In some embodiments, the learning process is semi-automatic. In some embodiments, the learning is manually supervised. In some embodiments, at least some variables of the learning process may be manually supervised/confirmed, for example, by a user (such as a physician). In some embodiments, the training stage may be an offline process, during which a database of annotated training data is assembled and used for the creation of data-analysis model(s)/algorithm(s), which may then be used in the inference stage. In some embodiments, the training stage may be performed "online", as detailed herein.
According to some embodiments, the generated algorithm may essentially constitute at least any suitable specialized software (including, for example, but not limited to: image recognition and analysis software, statistical analysis software, regression algorithms (linear, non-linear, or logistic etc.), and the like). According to some embodiments, the generated algorithm may be implemented using an artificial neural network (ANN), such as a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN), long short-term memory (LSTM), auto-encoder (AE), generative adversarial network (GAN), Reinforcement-Learning (RL) and the like, decision tree (DT), random forest (RF), decision graph, association rule learning, support vector machine (SVM), boosting algorithms, linear regression, logistic regression, clustering algorithms, inductive logic programming, Bayesian networks, instance-based learning, manifold learning, sub-space learning, and the like, or any combination thereof. The algorithm or model may be generated using machine learning tools, data wrangling tools, deep learning tools, and, more generally, data science and artificial intelligence (AI) learning tools, as elaborated hereinbelow.
Reference is now made to FIGS. 6A-6B, which show an exemplary training module (FIG. 6A) and an exemplary training process (FIG. 6B), according to some embodiments.
As shown in FIG. 6A, a training module 70 may include two main hardware components/units: at least one memory 72 and at least one processing unit 74, which are functionally and/or physically associated. Training module 70 may be configured to train a model based on data. Memory 72 may include any type of accessible memory (volatile and/or non-volatile), configured to receive, store and/or provide various types of data, to be processed by processing unit 74, which may include any type of at least one suitable processor, as detailed below. In some embodiments, the memory and the processing units may be functionally or physically integrated, for example, in the form of a Static Random Access Memory (SRAM) array. In some embodiments, the memory is a non-volatile memory having stored therein executable instructions (for example, in the form of a code, service, executable program and/or a model file). As shown in FIG. 6A, the memory unit 72 may be configured to receive, store and/or provide various types of data values or parameters related to the data. Memory 72 may store or accept raw (primary) data 722 that has been collected, as detailed herein. Additionally, metadata 724, related to the raw data 722 may also be collected/stored in memory 72. Such metadata 724 may include a variety of parameters/values related to the raw data, such as, but not limited to: the specific device which was used to provide the data, the time the data was obtained, the place the data was obtained (such as a specific procedure/operating room, specific institution, etc.), and the like. Memory 72 may further be configured to store/collect data annotations (e.g., labels) 726. In some embodiments, the collected data may require additional steps for the generation of data annotations that will be used for the generation of the machine-learning, deep-learning models or other statistical or predictive algorithms as disclosed herein. In some embodiments, such data annotations may include labels describing the clinical procedure’s characteristics, the automated device’s operation and computer-vision related annotations, such as segmentation masks, target marking, organs and tissues marking, and the like. The different annotations may be generated in an “online” manner, which is performed while the data is being collected, or in an “offline” manner, which is performed at a later time after sufficient data has been collected. The memory 72 may further include features database 728. The features database 728 may include a database ("store") of previously known or generated features that may be used in the training/generation of the models. The memory 72 of training module 70 may further, optionally, include pre-trained models 729. The pre-trained models 729 include existing pre-trained algorithms which may be used to automatically annotate a portion of the data and/or to ease training of new models using “transfer-learning” methods
and/or to shorten training time by using the pre-trained models as starting points for the training process on new data and/or to evaluate and compare performance metrics of existing versus newly developed models before deployment of new model to production, as detailed hereinbelow.
In some embodiments, processing unit 74 of training module 70 may include at least one processor, configured to process the data and allow/provide model training by various processing steps (detailed in FIG. 6B). Thus, as shown in FIG. 6A, processing unit 74 may be configured at least to perform pre-processing of the data 742. Pre-processing of the data may include actions for preparing the data stored in memory 72 for downstream processing, such as, but not limited to, checking for and handling null values, imputation, standardization, handling categorical variables, one-hot encoding, resampling, scaling, filtering, outlier removal etc. Processing unit 74 may further, optionally, be configured to perform feature extraction 744, in order to reduce the raw data dimension and/or add informative domain knowledge into the training process and allow the use of additional machine-learning algorithms not suitable for training on raw data and/or optimization of existing or new models by training them on both the raw data and the extracted features. Feature extraction may be executed using dimensionality reduction methods, for example, Principal Components Analysis (PCA), Independent Component Analysis (ICA), Linear Discriminant Analysis (LDA), Locally Linear Embedding (LLE), t-distributed Stochastic Neighbor Embedding (t-SNE), Unified Manifold Approximation and Projection (UMAP) and/or Autoencoders, etc. Feature extraction may be executed using feature engineering methods in which mathematical tools are used to extract domain-knowledge features from the raw data, for example, statistical features, such as mean, variance, ratio, frequency etc. and/or visual features, such as dimension or shape of certain objects in an image. Another optional technique which may be executed by the processing unit 74 to reduce the number of features in the dataset is feature selection, in which the importance of the existing features in the dataset is ranked and the less important features are discarded (i.e., no new features are created). Processing unit 74 may further be configured to execute model training 746.
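A minimal sketch of the pre-processing 742 and feature-extraction 744 steps, assuming the scikit-learn library is available and using a synthetic feature matrix in place of the collected respiration-related data, is given below.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Minimal sketch of pre-processing (standardization) and feature extraction (PCA),
# assuming scikit-learn; the synthetic matrix stands in for collected sensor data.
rng = np.random.default_rng(0)
raw = rng.normal(size=(200, 32))   # 200 samples x 32 raw sensor-derived values

scaled = StandardScaler().fit_transform(raw)            # pre-processing 742
features = PCA(n_components=8).fit_transform(scaled)    # feature extraction 744
print(features.shape)                                    # (200, 8)
```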
Reference is now made to FIG. 6B, which shows steps in an exemplary training process 76, executed by a suitable training module (such as training module 70 of FIG. 6A). As shown in FIG. 6B, at optional step 761, the collected datasets may first require an Extract-Transform-Load (ETL) or ELT process that may be used to (1) Extract the data from a single
or multiple data sources (including, but not limited to, the automated medical device itself, Picture Archiving and Communication System (PACS), Radiology Information System (RIS), imaging device, healthcare facility’s Electronic Health Record (EHR) system, etc.), (2) Transform the data by applying one or more of the following steps: handling missing values, checking for duplicates, converting data types as needed, encoding values, joining data from multiple sources, aggregating data, translating coded values etc. and (3) Load the data to a variety of data storage devices (on-premise or at a remote location (such as a cloud server)) and/or to a variety of data stores, such as file systems, SQL databases, no-SQL databases, distributed databases, object storage, etc. In some embodiments, the ETL process may be automatic and triggered with every new data collected. In other embodiments, the ETL process may be triggered at a predefined schedule, such as once a day or once a week, for example. In some embodiments, another business logic may be used to decide when to trigger the ETL process.
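By way of a non-limiting illustration, and assuming the pandas library, a toy ETL step might look as follows; the file names, column names and the CSV destination are assumptions made for the sketch.

```python
import pandas as pd

# Toy ETL sketch (step 761): extract from two hypothetical sources, transform,
# and load into a local store. All file and column names are illustrative.
def run_etl(device_log_csv, ehr_csv, out_path):
    device = pd.read_csv(device_log_csv)                        # Extract: device log
    ehr = pd.read_csv(ehr_csv)                                  # Extract: EHR export
    merged = device.merge(ehr, on="procedure_id", how="left")   # Transform: join sources
    merged = merged.drop_duplicates()                           # Transform: duplicates
    merged = merged.fillna({"patient_age": merged["patient_age"].median()})
    merged.to_csv(out_path, index=False)                        # Load: training data store
    return merged.shape
```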
At step 762, the data may be cleaned to ensure high quality data by, for example, removal of duplicates, removal or modification of incorrect and/or incomplete and/or irrelevant data samples, etc. At step 763, the data is annotated. The different annotations may be generated in an “online” manner, which is performed while the data is being collected, or in an “offline” manner, which is performed at a later time after sufficient data has been collected. In some embodiments, the data annotations may be generated automatically using an “active learning” approach, in which existing pre-trained algorithms are used to automatically annotate a portion of the data. In some embodiments, the data annotations may be generated using a partially automated approach with “human in the loop”, i.e., human approval or human annotations will be required in cases where the annotation confidence is low, or per other business logic decision or metric. In some embodiments, the data annotations may be generated in a manual approach, i.e., using human annotators to generate the required annotations using convenient annotation tools. Next, at step 764, the annotated data is pre-processed, for example, by one or more of checking for and handling null values, imputation, standardization, handling categorical variables, one-hot encoding, resampling, scaling, filtering, outlier removal and other data manipulations, to prepare the data for further processing. At optional step 765, extraction (or selection) of various features of the data may be performed, as explained hereinabove. At step 766, the data and/or features extracted therefrom is divided into training data (“training set”), which will be used to train the model,
and testing data (“testing set”), which will not be introduced into the model during model training so it can be used as “hold-out” data to test the final trained model before deployment. The training data may be further divided into a “train set” and a “validation set”, where the train set is used to train the model and the validation set is used to validate the model’s performance on unseen data, to allow optimization/fine-tuning of the training process’ configuration/hyperparameters during the training process. Examples for such hyperparameters may be the learning-rate, weights regularization, model architecture, optimizer selection, etc. In some embodiments, the training process may include the use of Cross-Validation (CV) methods in which the training data is divided into a “train set” and a “validation set”, however, upon training completion, the training process may repeat multiple times with different selections of “train set” and “validation set” out of the original training data. The use of CV may allow a better validation of the model during the training process as the model is being validated against different selections of validation data. At optional step 767, data augmentation is performed. Data augmentation may include, for example, generation of additional data from/based on the collected or annotated data. Possible augmentations that may be used for image data are: rotation, flip, noise addition, color distribution change, crop, stretch, etc. Augmentations may also be generated using other types of data, for example by adding noise or applying a variety of mathematical operations. In some embodiments, augmentation may be used to generate synthetic data samples using synthetic data generation approaches, such as distribution based, Monte-Carlo, Variational Autoencoder (VAE), Generative-Adversarial-Network (GAN), etc. Next, at step 768, the model is trained, wherein the training may be performed “from scratch” (i.e., an initial/primary model with initialized weights is trained based on all relevant data) and/or utilizing existing pre-trained models as starting points and training them only on new data. At step 769, the generated model is validated. Model validation may include evaluation of different model performance metrics, such as accuracy, precision, recall, F1 score, AUC-ROC, etc., and comparison of the trained model against other existing models, to allow deployment of the model which best fits the desired solution. The evaluation of the model at this step is performed using the testing data (“test set”) which was not used for model training nor for hyperparameters optimization and best represents the real-world (unseen) data. At step 770, the trained model is deployed and integrated or utilized with the inference module to generate output based on newly collected data, as detailed herein.
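The following sketch illustrates the split, cross-validation and hold-out evaluation flow of steps 766-769, assuming scikit-learn; the synthetic data and the choice of an SVM classifier are stand-ins for the datasets and model families named above.

```python
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.svm import SVC

# Illustrative split / cross-validation / hold-out evaluation flow (steps 766-769),
# assuming scikit-learn; data and the SVM choice are illustrative stand-ins.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
model = SVC(kernel="rbf", C=1.0)
cv_scores = cross_val_score(model, X_train, y_train, cv=5)   # validation during training
model.fit(X_train, y_train)
print(cv_scores.mean(), model.score(X_test, y_test))         # hold-out "test set" metric
```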
According to some embodiments, as more data is collected, the training database may grow in size and may be updated. The updated database may then be used to re-train the model, thereby updating/enhancing/improving the model’s output. In some embodiments, the new instances in the training database may be obtained from new clinical cases or procedures or from previous (existing) procedures that have not been previously used for training. In some embodiments, an identified shift in the collected data’s distribution may serve as a trigger for the re-training of the model. In other embodiments, an identified shift in the deployed model’s performance may serve as a trigger for the re-training of the model. In some embodiments, the training database may be a centralized database (for example, a cloud-based database), or it may be a local database (for example, for a specific healthcare facility). In some embodiments, learning and updating may be performed continuously or periodically on a remote location (for example, a cloud server), which may be shared among various users (for example, between various institutions, such as hospitals). In some embodiments, learning and updating may be performed continuously or periodically on a single or on a cohort of medical devices, which may constitute an internal network (for example, of an institution, such as a hospital). For example, in some instances, a validated model may be executed locally on processors of one or more medical systems operating in a defined environment (for example, a designated institution, such as a hospital), or on local online servers of the designated institution. In such case, the model may be continuously updated based on data obtained from the specific institution ("local data"), or periodically updated based on the local data and/or on additional external data, obtained from other resources. In some embodiments, federated learning may be used to update a local model with a model that has been trained on data from multiple facilities/tenants without requiring the local data to leave the facility or the institution.
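As a non-limiting illustration of a data-distribution-shift trigger for re-training, the following sketch applies a two-sample statistical test to one feature, assuming the scipy library; the test choice and the p-value threshold are assumptions, not a method prescribed by the disclosure.

```python
import numpy as np
from scipy.stats import ks_2samp

# Illustrative data-drift check that could serve as a re-training trigger;
# the two-sample KS test and the threshold are assumed for the sketch.
def needs_retraining(training_feature, new_feature, p_threshold=0.01):
    statistic, p_value = ks_2samp(training_feature, new_feature)
    return p_value < p_threshold

rng = np.random.default_rng(2)
old = rng.normal(0.0, 1.0, size=2000)
new = rng.normal(0.4, 1.0, size=2000)   # shifted distribution from newer procedures
print(needs_retraining(old, new))        # True: distribution shift detected
```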
Reference is now made to FIGS. 7A-7B, which show an exemplary inference module (FIG. 7A) and an exemplary inference process (FIG. 7B), according to some embodiments.
As shown in FIG. 7A, inference module 80 may include two main hardware components/units: at least one memory unit 82 and at least one processing unit 84, which are functionally and/or physically associated. Inference module 80 is essentially configured to run collected data through the trained model to calculate/process an output/prediction. Memory 82 may include any type of accessible memory (volatile and/or non-volatile), configured to receive, store and/or provide various types of data and executable instructions, to be
processed by processing unit 84, which may include any type of at least one suitable processor. In some embodiments, the memory 82 and the processing unit 84 may be functionally or physically integrated, for example, in the form of a Static Random Access Memory (SRAM) array. In some embodiments, the memory is a non-volatile memory having stored therein executable instructions (for example, in the form of a code, service, executable program and/or a model file containing the model architecture and/or weights) that can be used to perform a variety of tasks, such as data cleaning, required pre-processing steps and inference operation (as detailed below) on new data to obtain the model’s prediction or result. As shown in FIG. 7A, memory 82 may be configured to accept/receive, store and/or provide various types of data values or parameters related to the data as well as executable algorithms (in the case of machine learning based algorithms, these may be referred to as “trained models”). Memory unit 82 may store or accept new acquired data 822, which may be raw (primary) data that has been collected, as detailed herein. Memory module 82 may further store metadata 824 related to the raw data. Such metadata may include a variety of parameters/values related to the raw data, such as, but not limited to: the specific device which was used to provide the data, the time the data was obtained, the place the data was obtained (such as specific operation room, specific institution, etc.), and the like. Memory 82 may further store the trained model(s) 826. The trained models may be the models generated and deployed by a training module, such as training module 70 of FIG. 6A. The trained model(s) may be stored, for example in the form of executable instructions and/or model file containing the model’s weights, capable of being executed by processing unit 84. Processing unit 84 of inference module 80 may include at least one processor, configured to process the new obtained data and execute a trained model to provide corresponding results (detailed in FIG. 7B). Thus, as shown in FIG. 7A, processing unit 84 is configured at least to perform pre-processing of the data 842, which may include actions for preparing the data stored in memory 82 for downstream processing, such as, but not limited to, checking for and handling null values, imputation, standardization, handling categorical variables, one-hot encoding, resampling, scaling, filtering, outlier removal etc. In some embodiments, processing unit 84 may further be configured to extract features 844 from the acquired data, using techniques such as, but not limited to, Principal Components Analysis (PCA), Independent Component Analysis (ICA), Linear Discriminant Analysis (LDA), Locally Linear Embedding (LLE), t-distributed Stochastic Neighbor Embedding (t-SNE), Unified Manifold Approximation and Projection (UMAP) and/or Autoencoders, etc. Feature extraction may be executed using
feature engineering methods in which mathematical tools are used to extract domain-knowledge features from the raw data, for example: statistical features such as mean, variance, ratio, frequency etc. and/or visual features such as dimension or shape of certain objects in an image. Alternatively, or additionally, the processing unit 84 may be configured to perform feature selection. Processing unit 84 may further be configured to execute the model on the collected data and/or features extracted therefrom, to obtain model results 846. In some embodiments, the processing unit 84 may further be configured to execute a business logic 848, which can provide further fine-tuning of the model results and/or utilization of the model’s results to a variety of automated decisions, guidelines or recommendations supplied to the user.
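A minimal sketch of an inference path that mirrors the training-time pre-processing and applies a post-inference business-logic rule is shown below, assuming scikit-learn; the pipeline composition, the synthetic data and the recommendation threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Minimal sketch of an inference path mirroring training-time pre-processing,
# assuming scikit-learn; pipeline composition and the threshold are assumptions.
rng = np.random.default_rng(3)
X_train = rng.normal(size=(400, 16))
y_train = (X_train[:, 0] > 0).astype(int)

trained_model = make_pipeline(StandardScaler(), PCA(n_components=4),
                              LogisticRegression()).fit(X_train, y_train)

new_sample = rng.normal(size=(1, 16))                  # newly acquired procedure data
probability = trained_model.predict_proba(new_sample)[0, 1]
# post-inference business logic 848: surface a recommendation only above a threshold
print("recommend trigger" if probability > 0.8 else "no recommendation", probability)
```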
Reference is now made to FIG. 7B, which shows steps in an exemplary inference process 86, executed by a suitable inference module (such as inference module 80 of FIG. 7A). As shown in FIG. 7B, at step 861, new data is acquired/collected from or related to newly executed medical procedures. The new data may include any type of raw (primary) data, as detailed herein. At optional step 862, suitable trained model(s) (generated, for example by a suitable training model in a corresponding training process) may be loaded, per task(s). This step may be required in instances in which computational resources are limited and only a subset of the required models or algorithms can be loaded into RAM memory to be used for inference. In such cases, the inference process may require an additional management step responsible to load the required models from storage memory for a specific subset of inference tasks/jobs, and once inference is completed, the loaded models are replaced with other models that will be loaded to allow an additional subset of inference tasks/jobs. Next, at step 863, the raw data collected in step 861 is pre-processed. In some embodiments, the pre-processing steps may be similar or identical to the pre-processing step performed in the training process (by the training module), to thereby allow the data to be processed similarly by the two modules (i.e., training module and inference module). In some embodiments, this step may include actions such as, but not limited to, checking for and handling null values, imputation, standardization, handling categorical variables, one-hot encoding, etc., to prepare the input data for analysis by the model(s). Next, at optional step 864, extraction of features from the data may be performed using, for example, Principal Components Analysis (PCA), Independent Component Analysis (ICA), Linear Discriminant Analysis (LDA), Locally Linear Embedding (LLE), t-distributed Stochastic Neighbor
Embedding (t-SNE), Unified Manifold Approximation and Projection (UMAP) and/or Autoencoders, etc. Alternatively, or additionally, feature selection may be executed. At inference step 865, the results of the model are obtained, i.e., the model is executed on the processed data to provide corresponding results. At optional step 866, fine-tuning of the model results may be performed, whereby post-inference business logic is executed. Execution of post-inference business logic refers to the utilization of the model's results for a variety of automated decisions, guidelines or recommendations supplied to the user. Post-inference business logic may be configured to accommodate specific business and/or clinical needs or metrics, and can vary between different scenarios or institutions based on users' or institutions' requests or needs.
At step 867, the model results may be utilized in various ways, including, for example, providing operating instructions to automated medical devices and/or imaging systems (including triggering operation thereof at specific time points or states of the respiration cycle), providing instructions and/or recommendations and/or alerts to users regarding various device operations (including instructions to initiate operation of the automated medical device and/or the imaging system at specific time points or states of the respiration cycle), and the like, as further detailed hereinabove.
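For illustration, the inference flow of steps 861-867 can be summarized in a short sketch; the helper functions passed in (preprocess, extract_features, business_logic) and the use of joblib for model loading are assumptions standing in for whichever components a deployed inference module actually provides.

```python
# Schematic rendering of inference steps 861-867 (FIG. 7B); all helpers are placeholders.
import joblib

def run_inference(raw_batch, model_path, preprocess, extract_features, business_logic):
    model = joblib.load(model_path)          # step 862: load the trained model (optional / on demand)
    clean = preprocess(raw_batch)            # step 863: pre-process the raw data
    features = extract_features(clean)       # step 864: optional feature extraction
    results = model.predict(features)        # step 865: obtain model results
    decisions = business_logic(results)      # step 866: post-inference business logic / fine-tuning
    return decisions                         # step 867: used for instructions, alerts, triggering, etc.
```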
In some embodiments, inference operation may be performed on a single data instance. In other embodiments, inference operation may be performed using a batch of multiple data instances to receive multiple predictions or results for all data instances in the batch. In some embodiments, an ensemble of models or algorithms can be used for inference, where the same input data is processed by a group of different models and the results are aggregated using averaging, majority voting or the like. In some embodiments, the model can be designed in a hierarchical manner, where input data is processed by a primary model and, based on the prediction or result of the primary model's inference, the data is processed by a secondary model. In some embodiments, multiple secondary models may be used, and the hierarchy may have more than two levels.
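A minimal sketch of the ensemble and hierarchical schemes mentioned above is given below; majority voting and the two-level routing are merely example aggregation choices, and integer class labels are assumed.

```python
# Ensemble aggregation (majority vote) and two-level hierarchical routing, as examples only.
import numpy as np

def ensemble_predict(models, features):
    votes = np.stack([m.predict(features) for m in models])   # same input through all models
    # majority vote per data instance (averaging would be used instead for regression outputs)
    return np.apply_along_axis(lambda col: np.bincount(col.astype(int)).argmax(), 0, votes)

def hierarchical_predict(primary, secondaries, features):
    route = primary.predict(features)                          # primary model selects a branch
    return np.array([secondaries[r].predict(f.reshape(1, -1))[0]
                     for r, f in zip(route, features)])        # routed secondary model produces the result
```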
According to some embodiments, the methods and systems disclosed herein utilize data-driven methods to create algorithms based, at least in part, on various breath-related datasets. In some embodiments, artificial intelligence (e.g., machine-learning or deep-learning) algorithms are used to learn the complex mapping/correlation/correspondence of the multimodal (e.g., data obtained from different modalities, such as images, logs, sensory data, etc.) input dataset parameters (procedure, clinical, operation, patient-related and/or administrative information), to optimize the clinical procedure's outcome or any other desired functionalities, by allowing, inter alia, synchronization thereof in accordance with specific states of the breathing cycle. In some embodiments, the systems and methods disclosed herein may determine such optimal mapping using various approaches, such as, for example, a statistical approach, and utilizing machine-learning algorithms to learn the mapping/correlation/correspondence from the training datasets.
In some embodiments, the algorithm may be a generic algorithm, which is agnostic to specific procedure characteristics, such as type of procedure, user, service provider or patient. In some embodiments, the algorithm may be customized to a specific user (for example, preferences of a specific healthcare provider), a specific service provider (for example, preferences of a specific hospital), a specific population (for example, preferences of different age groups), a specific patient (for example, preferences of a specific patient), and the like. In some embodiments, the algorithm may combine a generic portion and a customized portion.
Reference is now made to FIG. 8, which shows a block diagram 90 illustrating an exemplary method of generating (training) a trigger determination model, according to some embodiments. As described hereinabove, triggering imaging and/or insertion/steering of a medical instrument may be performed at a specific time/state during or along the breathing cycle, to thereby minimize breathing effects/artifacts on the procedures and increase their accuracy and efficiency. For determination or prediction of the triggering event a corresponding model may be trained. As shown in FIG. 8, respiration related data 901 is used to train the trigger determination model. The input respiration data 901 may be obtained from previous procedures, from a current procedure and/or from external datasets/databases. In some embodiments, the respiration related data/datasets may be obtained, for example, from one or more breath sensors or breath monitoring devices as detailed herein. In some embodiments, as further detailed hereinbelow, the input data may include a prediction/forecast of the respiration behavior for a future segment of the respiration cycle (for example, future time window Tforecast). Feature extraction/engineering 902 of the input data may be performed. Exemplary features may include such features as, but not limited to: breathing stability, breathing rate, breathing amplitude, gating window length (i.e., compliant length of time between two selected points of the breathing cycle where a synchronized
medical operation can be executed), patient movement, standard statistical parameters such as mean, median, standard-deviation, histogram, phase, integral, slope, FFT, correlation, and the like, or any combination thereof. Additionally, ground truth trigger labels 903 may be obtained and utilized for the training process. Ground truth trigger labels may be obtained, for example, from previous procedures, external datasets/databases, the ongoing procedure, and the like. Exemplary ground truth trigger labels may include, for example, timestamps of valid triggers, gating windows' start and end timestamps, gating windows' length, etc. The input respiration data 901, extracted/engineered features 902 and/or ground truth labels 903 are used for training model 904 to predict the next valid trigger event(s). In embodiments in which the input data includes a prediction/forecast of the respiration behavior for a future segment of the respiration cycle Tforecast, the trigger determination model may be trained to predict the next valid triggering event(s) during Tforecast. Training the trigger determination model may be established using machine learning (ML)/deep learning (DL) tools, as detailed above. In some embodiments, the model is trained to output an accurate trigger event which will reduce, minimize or diminish breathing artifacts, by allowing a procedure to be performed at the same state (e.g., ttrig) of the breathing cycle. In some embodiments, the training may utilize a loss function 905 to calculate the loss, aimed to minimize the trigger prediction error and/or the gating window duration error, etc. In some embodiments, loss function 905 may be a Multi-Loss scheme. In some embodiments, the training may be executed using a Multi-Output regression/classification approach, for example, to utilize the respiration analysis capabilities of the model to predict not only the next valid triggering event, but also breathing-related events in which triggering is to be avoided, such as the patient coughing or a sudden movement of the patient. If such events are predicted, then automatically triggered actions may be aborted or halted (if already initiated) and/or the user may be alerted before manually initiating the medical action or while the action is in process (if already initiated). In some embodiments, the model may be further trained to predict breathing-related anomalies, predict possible breathing complications or other clinical complications (e.g., pneumothorax), monitor the subject's stress level and the like. In some embodiments, the generated/calculated loss (e.g., prediction error) may be used to fine-tune or adjust the trigger determination model, as part of the training process.
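For illustration, training a trigger determination model from engineered respiration features and ground-truth trigger labels might be sketched as follows; the gradient-boosting classifier, the train/validation split and the log-loss metric are example choices among the ML/DL tools mentioned, not the disclosed implementation, and the feature semantics are hypothetical.

```python
# Minimal sketch of training a trigger determination model (FIG. 8).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss

# X: per-window features (e.g., breathing rate, amplitude, stability, gating-window length)
# y: ground-truth labels, 1 if a valid trigger occurred in the window, else 0
def train_trigger_model(X: np.ndarray, y: np.ndarray):
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
    model = GradientBoostingClassifier().fit(X_tr, y_tr)
    val_loss = log_loss(y_val, model.predict_proba(X_val))   # loss used to monitor/adjust training
    return model, val_loss
```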
According to some embodiments, the determination/prediction of the next valid trigger(s) is based on raw (or minimally processed) respiration data and/or on a forecasted
breathing signal. According to some embodiments, when determining a valid triggering event using respiration data, a one step process may be used and a trigger may be provided if a medical action (e.g., imaging, instrument insertion/steering) is to be executed at that time point (i.e., a yes/no answer as to whether an action is to be triggered at this time point/phase) and/or an indication of the exact timing of the next valid triggering event is provided. In such instances, a suitable classification and/or regression algorithm may be used to determine, based on the input data (parameter-based), if a trigger is to be executed and/or what would be the timing of the next valid triggering event. According to other embodiments, the next valid trigger(s) may be based on a forecast/prediction of the breathing signal. Such forecast/prediction may be performed for a future time window (Tforecast), which may be, for example, the next 3 seconds, the next 6 seconds, the next 9 seconds, or any other appropriate time window. In such embodiments, the next valid trigger(s) may be predicted for future time window Tforecast. In some embodiments, the classification and/or regression algorithm/model may predict the length of the gating window (also referred to as the “triggering period”), i.e., the start time and end time of the gating window in which a medical action can be executed in synchronization with the subject’s respiratory behavior. According to some embodiments, the medical action may be performed automatically once a valid trigger is determined. In some embodiments, when a valid trigger is determined, the user may be provided with an alert/instruction/recommendation to perform the required medical action at the next predicted time point/state (ttrig), during the next time window (Tforecast). In some embodiments, information regarding the confidence and/or the quality of the forecasted triggering period may be utilized to optimize automatic triggering of a medical action. In some embodiments, information regarding the confidence and/or the quality of the forecasted triggering period may be presented to the user, to optimize the user’s decision-making or execution.
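One way to picture the determination of a gating window on a forecasted breathing signal is the heuristic sketch below, which approximates the lull as the longest run of low-slope samples; the slope threshold and the lull heuristic itself are assumptions made purely for the example, not the disclosed classification/regression model.

```python
# Toy gating-window detector on a forecasted breathing signal sampled at rate `fs` (Hz).
import numpy as np

def find_gating_window(forecast: np.ndarray, fs: float, slope_thresh: float = 0.02):
    slope = np.abs(np.gradient(forecast))          # per-sample slope of the forecasted signal
    quiet = slope < slope_thresh * np.max(slope)   # samples that look like a pause/lull
    best_start, best_len, start, run = 0, 0, 0, 0
    for i, q in enumerate(quiet):
        if q:
            start = i if run == 0 else start       # remember where the quiet run began
            run += 1
            if run > best_len:
                best_start, best_len = start, run
        else:
            run = 0
    t_trig = best_start / fs                       # start of the gating window -> candidate trigger time
    return t_trig, best_len / fs                   # (trigger time, gating-window duration) in seconds
```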
Reference is now made to FIG. 9, which shows a block diagram 100 illustrating an exemplary method of generating (training) a respiration prediction model, according to some embodiments. As described hereinabove, triggering imaging and/or insertion/steering of a medical instrument may be performed at a specific time/state during or along the breathing cycle and, in some embodiments, triggering determination is for a future time window for which respiration behavior is predicted/forecasted. For predicting breathing behavior, a corresponding algorithm/model may be trained. As shown in FIG. 9, respiration related data
1001 is used to train the respiration prediction model. The input respiration data 1001 may be obtained from previous procedures, from a current procedure and/or from external datasets/databases. In some embodiments, the respiration data 1001 may be obtained, for example, from one or more breath sensors or breath monitoring devices, including, for example, a pressure sensor, stretch sensor, motion sensor, accelerometer, ECG sensor, and the like. Data may further be obtained from optical devices (e.g., camera, laser), real-time or semi real-time medical imaging, etc. The sensor/monitor may be associated with or placed on the subject's body, in close proximity to the subject's body, in association with the automated medical device, in association with the imaging system, in association with other devices, and the like. In some embodiments, the sensor/monitor may operate autonomously or in conjunction with the automated medical device, the imaging device, medical systems, and the like. In some embodiments, the sensor/monitor may operate continuously or periodically. In some embodiments, the operation of the respiration sensor/monitor may be at least partially controlled by the processor or controller of the automated medical device. In some embodiments, the datasets may include such data parameters or values as, but not limited to: voltage, pressure, stretch level, power, acceleration, speed, coordinates, frequency, etc.
The obtained respiration related data is then pre-processed 1002. Pre-processing of the data may include any suitable pre-processing method of the raw data to prepare the data for downstream processing, including, for example, but not limited to: checking for and handling null values, imputation, standardization, handling categorical variables, one-hot encoding, resampling, scaling, filtering, outlier removal, and the like. Optionally, feature extraction/feature engineering 1003 of the input data may be performed. Exemplary features may include such features as, but not limited to: breathing stability, breathing rate, breathing amplitude, gating window length (i.e., compliant length of time between two selected points of the breathing cycle where a synchronized medical operation can be executed), patient movement, standard statistical parameters such as mean, median, standard-deviation, histogram, phase, integral, slope, FFT, correlation, and the like, or any combination thereof. Optionally, the pre-processed data is processed/analyzed by a signal quality analysis algorithm/model 1004. Such signal quality analysis algorithm/model utilizes data related to the breathing measurements (e.g., breathing activity sampled by an acquisition device) to estimate the quality of the signal. Short-term and/or long-term signal quality analysis may be
performed to analyze the current state and/or possible trends. Degradation in signal quality may result from, for example, movement of the sensor, misplacement of the sensor, functional defects of the sensor, functional defects of the host system, electromagnetic interferences from external sources and the like. In some embodiments, the output of the signal quality analysis step includes valuable information about the respiration signal's quality that will be utilized during the training process of the respiration prediction model. In some embodiments, the output of the signal quality algorithm/model may be used to provide information and/or instructions to the user related to the quality of the respiration signal and/or the need to take action/s or perform adjustments to increase the signal's estimated quality. Additionally, ground truth data 1005 is obtained and utilized for training the model. When training a respiration prediction model to predict respiration behavior for the next time period/window Tforecast, the ground truth data is the actual respiration behavior during the time period/window Tforecast for which the prediction is desired. The pre-processed respiration data 1002, ground truth data 1005 and, optionally, extracted features 1003 and/or signal quality analysis output 1004 are used for training the respiration model 1006, which may predict/forecast the breathing behavior for the next time period/window (for example, the next 3 seconds, the next 6 seconds, the next 9 seconds, or any other appropriate time window). Training respiration model 1006 may be executed using machine learning (ML)/deep learning (DL) tools, as detailed above. In some embodiments, the model 1006 may be trained to output as accurate as possible a prediction of the characteristics of one or more future breathing cycles or specific portions/segments thereof, including, for example, size, amplitude, length, dispersion, and the like. Such a respiration prediction model may be used, for example, in the prediction of triggering event(s), to reduce, minimize or diminish breathing artifacts, by allowing steps of the procedure to be performed at the same time or state (e.g., ttrig) of the predicted breathing cycle. In some embodiments, the training may utilize a loss function 1007 to calculate the loss, aimed to minimize the respiration prediction error and/or the gating-window duration error, etc. In some embodiments, loss function 1007 may be a Multi-Loss scheme. In some embodiments, the training may be executed using a Multi-Output regression/classification approach, for example, to utilize the respiration analysis capabilities of the model to predict breathing-related anomalies, possible breathing complications or other clinical complications (e.g., pneumothorax, bleeding), monitor the subject's stress level and the like. In some embodiments, the model may be used to identify respiration patterns, which may then be used to classify patients according to their respiration patterns. Such
classification may be used, for example, to adapt or personalize the trigger determination model and/or the insertion/steering procedure to the patient's respiration classification. In some embodiments, the generated/calculated loss (e.g., prediction error) may be used to fine-tune or adjust the respiration prediction model 1006, as part of the training process.
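As an illustrative sketch of such a respiration prediction model, a small PyTorch LSTM that maps a window of past breathing samples to the next Tforecast samples and is trained with an MSE loss could look as follows; the architecture, horizon and hyper-parameters are examples only and do not represent the disclosed model.

```python
# Compact respiration forecaster: past samples in, next `horizon` samples out.
import torch
import torch.nn as nn

class RespirationForecaster(nn.Module):
    def __init__(self, hidden: int = 64, horizon: int = 60):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizon)        # predicts the next `horizon` samples

    def forward(self, x):                             # x: (batch, past_samples, 1)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])                       # (batch, horizon)

def train_step(model, optimizer, past, future):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(past), future)   # respiration prediction error (loss 1007)
    loss.backward()
    optimizer.step()
    return loss.item()
```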
Reference is now made to FIG. 10, which shows a flowchart 110 illustrating an exemplary method for predicting/forecasting respiration of a subject utilizing the respiration prediction model, according to some embodiments. As shown in FIG. 10, at step 1101, patient respiration activity is obtained. As detailed above, the respiration activity data may be obtained, for example, from one or more breath sensors, including, for example, a pressure sensor, stretch sensor, motion sensor, imaging sensor, accelerometer, ECG sensor, camera, and the like. The sensor/monitor may be associated with or be placed on the subject's body, in close proximity to the subject's body, in association with the automated medical device, in association with the imaging system, in association with other devices, and the like. In some embodiments, the sensor/monitor may operate autonomously or in conjunction with the automated medical device, the imaging device, medical systems, and the like. In some embodiments, the sensor/monitor may operate continuously or periodically. In some embodiments, the operation of the respiration sensor/monitor may be at least partially controlled by the processor or controller of the automated medical device. As shown in FIG. 10, at calibration step 1102, patient-specific respiration parameters are calculated in order to ensure that the characteristics of the obtained respiration signal allow successful completion of the analysis flow. In some embodiments, such patient-specific respiration parameters may include various features, patterns and/or respiratory related statistics, such as breathing stability, noise level, breathing rate, breathing amplitude, gating window length (i.e., compliant length of time between two selected points of the breathing cycle where a synchronized medical operation can be executed) statistics, patient movement, standard statistical parameters such as mean, median, standard-deviation, histogram, phase, integral, slope, FFT, correlation, and the like, or any combination thereof. At step 1103 it is determined if the calibration process is satisfactorily completed. If the calibration process is not completed, step 1102 is repeated. Optionally, step 1104 may be employed, whereby an indication to improve breathing related signal acquisition (for example, by adjusting sensor location, sensor contact and/or sensor operating parameters (for example, sensitivity)) is issued prior to repeating step 1102. Once the calibration phase is satisfactorily completed,
respiration activity is predicted using the corresponding ML/DL model/algorithm 1107. Optionally, at step 1105, the generic model/algorithm, which, in some embodiments, has been trained on a (large) population of subjects, may be fine-tuned and optimized for the specific patient, by including patient-specific respiration data that is used for re-training and/or fine-tuning of the model/algorithm, such that a new personalized model/algorithm is generated that better corresponds to the specific respiratory profile of the patient, while still utilizing the more generic respiration analysis capabilities obtained from training on a larger population. In some embodiments, re-training or fine-tuning of the model will require a deployment step of the ML/DL patient-specific model/algorithm 1106; such a deployment step marks the transition from the re-training step to the inference step, where the new, re-trained model will be used for prediction on new data. In some embodiments, the new, re-trained/fine-tuned model may replace the original, more generic model. In some embodiments, the re-trained model may be used in combination with the original model or with multiple other models. After the patient's respiratory activity is predicted, in step 1107, it is determined, in step 1108, if the signal prediction confidence is satisfactory (for example, if it is sufficiently high relative to a predetermined or patient-specific threshold). If the signal prediction confidence is not satisfactory, step 1107 is repeated, until the prediction confidence level is satisfactory. If the signal prediction confidence is adequate, the respiration activity prediction is provided, in step 1109. The respiration activity prediction may, for example, be presented to a user and/or be used in further calculations, predictions and/or controlling operations of medical devices. For example, the respiration activity prediction may be used as input for the generation of a triggering determination algorithm/model, as described hereinabove. In some embodiments, pre-processing stages may be used during the described flow in order to prepare the data for the analysis process. Such pre-processing of the data may include any suitable pre-processing of the raw data to prepare the data for downstream processing, including, for example, but not limited to: checking for and handling null values, imputation, standardization, handling categorical variables, one-hot encoding, resampling, scaling, filtering, outlier removal, and the like.
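The calibration, personalization and confidence-checking loop of FIG. 10 can be outlined schematically as below; `calibrated`, `confidence` and `fine_tune` are hypothetical placeholder functions standing in for the corresponding steps, not APIs defined by the disclosure.

```python
# Schematic of steps 1101-1109: calibrate, optionally personalize, predict until confident.
def predict_respiration(signal_stream, model, calibrated, confidence, fine_tune,
                        conf_thresh=0.8, patient_data=None):
    window = next(signal_stream)                     # step 1101: acquire respiration activity
    while not calibrated(window):                    # steps 1102-1104: repeat calibration
        window = next(signal_stream)                 # (optionally after adjusting the sensor)
    if patient_data is not None:
        model = fine_tune(model, patient_data)       # steps 1105-1106: personalize and redeploy
    prediction = model(window)                       # step 1107: predict respiration activity
    while confidence(prediction) < conf_thresh:      # step 1108: confidence check
        window = next(signal_stream)
        prediction = model(window)
    return prediction                                # step 1109: provide the prediction downstream
```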
Reference is now made to FIGS. 11A-11C, which illustrate graphs of actual (measured) and predicted respiratory activity of subjects. Shown in FIG. 11A is respiration behavior input data of a subject with relatively stable breathing behavior (represented by line graph 1202). The input data is used for predicting breathing behavior
utilizing the methods disclosed herein. The predicted breathing behavior during time window Tforecast is represented by graph line 1204 and the actual (measured) breathing activity during time window Tforecast is represented by graph line 1206. As shown in FIG. 11A, the predicted respiratory behavior (breath cycle) closely matches the actual respiratory behavior.
Shown in FIG. 11B is respiration behavior input data of a subject with relatively unstable breathing behavior (represented by line graph 1212). The input data is used for predicting breathing behavior utilizing the methods disclosed herein. The predicted breathing behavior during Tforecast is represented by graph line 1214 and the actual (measured) breathing activity during Tforecast is represented by graph line 1216. As shown in FIG. 11B, the predicted respiratory behavior (breath cycle) closely matches the actual respiratory behavior, despite the relative instability identified in the input data.
Shown in FIG. 11C is respiration behavior input data of a subject (represented by line graph 1222). The input data is used for predicting breathing behavior utilizing the methods disclosed herein. The predicted breathing behavior is represented by graph line 1224 and the actual (measured) breathing activity is represented by graph line 1226. Further shown is ttrig 1228, which is the selected state to trigger an event (for example, imaging, insertion/steering of a medical instrument, etc.). In the example shown, ttrig is at the beginning of the pause (lull) phase of the breathing cycle. As shown in FIG. 11C, the ttrig marked on the predicted respiratory behavior (breath cycle) very closely matches the ttrig marked on the actual respiratory behavior, further substantiating the accuracy of the respiration prediction models disclosed herein.
Reference is now made to FIG. 12, which shows a flowchart 1300 illustrating steps of an exemplary method of steering of a medical instrument toward a predicted location of a moving target, in synchronization with a respiratory cycle of a subject, according to some embodiments. As shown in FIG. 12, at step 1302, respiration behavior of the patient is analyzed using a data analysis algorithm. In some embodiments, as further detailed hereinbelow, the respiration behavior data analysis algorithm may be used to predict a future segment of the patient’s respiration, which allows triggering of various consequent medical actions, including, for example, imaging and/or insertion and/or steering of a medical tool, such that the triggering is performed at a specific time (time point or time range) during the respiration cycle, thereby ensuring synchronization between the execution of different actions prior or during the medical procedure and the respiration cycle. In some
embodiments, the respiration behavior is analyzed prior to commencement of the medical procedure, and a respiration behavior baseline is established for the specific patient. In some embodiments, the breathing behavior of the patient may change during the course of the procedure, for example, due to the gradual effect of sedation on the patient, due to a change in the patient's stress levels, or due to a change in the patient's physical condition. Thus, in some embodiments, the respiration of the patient is continuously monitored throughout the procedure (e.g., by means of one or more respiration/breath sensors, including, for example, a pressure sensor, stretch sensor, motion sensor, imaging sensor, accelerometer, ECG sensor, camera, and the like), and the respiration activity is continuously analyzed and taken into consideration in subsequent steps of the disclosed method.
In step 1304, planning imaging (such as, for example, a CT scan) of a region of interest is triggered at time ttrig of the respiration cycle. Time ttrig of the respiration cycle may be any desired or suitable time point during the respiration cycle, wherein ttrig is a specific respiration time or state of the respiration cycle. For example, ttrig may be determined to be at the peak region/point of the inhalation phase, at a specific point/region of the exhalation phase, at the beginning of the automatic pause (“lull”) period between two consecutive respiration cycles, and the like, or any other suitable region/point/state during the respiration cycle. In some embodiments, ttrig is determined to be the start/onset of a triggering event (e.g., lull period). In some embodiments, triggering planning imaging may refer to automatic initiation of the scan, for example, via a direct interface between a processor of an automated medical device and the imaging system. In other embodiments, triggering planning imaging may refer to generating an alert/instruction to a user (e.g., physician, technician) to manually initiate the imaging. In such embodiments, the triggering may be in the form of a countdown (for example, a countdown from 5, a countdown from 4, a countdown from 3, etc.), so as to allow the user to be prepared to timely initiate the imaging and minimize any possible delay.
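A simple sketch of triggering an action at a predicted ttrig, either automatically or via a user countdown, is given below; the timing logic and the default countdown length are illustrative assumptions only.

```python
# Trigger an action at a predicted ttrig (epoch seconds), automatically or with a countdown alert.
import time

def trigger_at(t_trig_epoch: float, action, alert=print, countdown: int = 3, automatic: bool = True):
    lead = 0 if automatic else countdown
    delay = t_trig_epoch - lead - time.time()
    if delay > 0:
        time.sleep(delay)                       # wait until ttrig (or ttrig minus the countdown)
    if automatic:
        action()                                # e.g., start the planning scan via a direct interface
    else:
        for n in range(countdown, 0, -1):       # alert the user: "3, 2, 1, trigger now"
            alert(n)
            time.sleep(1)
        alert(0)
```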
Next, at step 1306, a trajectory is calculated for a medical instrument, which is coupled to the automated medical device as shown in FIGS. 5C-5D, from an entry point to a target in the patient's body, as detailed above, based, inter alia, on the planning scan. In some embodiments, data-analysis algorithm(s), such as learning-based models, may be used to determine or recommend to the user one or more of the location of the target, an optimal entry point position, the location of “no-fly” zones and an optimal trajectory for the procedure.
In some embodiments, once the trajectory has been calculated, checkpoints may be set along the trajectory.
Optionally, if required, registration imaging (for example, a registration CT-scan) at optional step 1308 may be triggered at time ttrig, to register the automated medical device to the image space. Thereby, distinct imaging events are triggered to occur at the same state (i.e., at time ttrig) along/over the breathing cycle(s).
At consequent step 1310, insertion/steering of the medical instrument at time ttrig is triggered, to thereby ensure that the insertion/steering of the medical instrument and corresponding imaging (e.g., scan) are executed at the same point/phase of the breathing cycle (i.e., time ttrig), thus ensuring that the state of the anatomical volume during the insertion/steering of the medical instrument matches the state of the anatomical volume captured during the planning and/or registration imaging. In some embodiments, triggering steering of the medical instrument may refer to automatic activation of the automated device. In other embodiments, triggering steering of the medical instrument may refer to generating an instruction/alert to the user to manually activate the automated device, either via a workstation (e.g., by depressing a pedal, pushing a button, and the like) or via a remote control unit (e.g., by pressing or rotating an activation button or any suitable activation mechanism). In such embodiments, the triggering may be in the form of a countdown, so as to allow the user to be prepared to timely activate the automated medical device and minimize any possible delay.
At step 1312, a confirmation imaging (e.g., CT-scan) is triggered at time ttrig. In some embodiments, the confirmation imaging may be triggered upon the medical instrument reaching a checkpoint set by the user or the processor along the trajectory. In some embodiments, the steering of the medical instrument may be executed continuously, and the confirmation imaging may be triggered at predetermined time(s) and/or expected medical instrument positions along the trajectory. In some embodiments, triggering confirmation imaging may refer to automatic initiation of the imaging. In other embodiments, triggering confirmation imaging may refer to generating an alert/instruction to the user to manually initiate the imaging. In such embodiments, the triggering may be in the form of a countdown, as described above, so as to allow the user to be prepared to timely initiate the imaging and minimize any possible delay.
At step 1314, based, inter alia, on the obtained confirmation imaging, the real-time position of the medical instrument and the target may be determined (as detailed herein). At step 1316, a dynamic trajectory model (DTM) is applied to update the trajectory (if needed). The dynamic trajectory model may include one or more algorithms and/or AI-based models, each of which may be configured to provide information, predictions, estimations and/or calculations regarding various parameters and variables that may affect tissue and target movement and the consequent trajectory. Such algorithms and models may provide parameters such as predicted/estimated tissue movement and predicted/estimated target movement, to ultimately predict the estimated target spatiotemporal location during and/or at the end of the procedure, to thereby allow the planning and/or updating of a corresponding trajectory to facilitate the medical instrument reaching the target at its predicted location. In some embodiments, estimation of tissue movement may take into account tissue movement resulting from the patient's respiratory cycle. In some embodiments, the patient's respiratory cycle and/or the tissue movement resulting from the patient's respiratory cycle may be predicted using a separate algorithm/model. In some embodiments, the dynamic trajectory model may include algorithms/models to predict the movement of previously determined “no-fly” zones and/or algorithms/models to update the “no-fly” zones map according to the predicted tissue and target movement. In some embodiments, the dynamic trajectory model may include determining if a calculated trajectory is optimal, based on various parameters as described herein, such that the output of the model is the optimal trajectory. It can be appreciated that different trajectories may be considered “optimal”, depending on the chosen parameters, the weight given to each parameter, user preferences, etc.
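Purely as a toy illustration of a trajectory update driven by predicted respiration-induced target movement, the following sketch shifts the expected target by a predicted displacement and re-aims straight-line waypoints toward it; a real dynamic trajectory model would be considerably richer, and all names and the linear interpolation are assumptions.

```python
# Toy trajectory update: shift the target by a predicted displacement, re-aim the waypoints.
import numpy as np

def update_trajectory(current_tip: np.ndarray, target: np.ndarray,
                      predicted_target_shift: np.ndarray, n_waypoints: int = 10):
    predicted_target = target + predicted_target_shift             # estimated future target location
    steps = np.linspace(0.0, 1.0, n_waypoints)[:, None]
    return current_tip + steps * (predicted_target - current_tip)  # straight-line waypoints to the new target
```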
Based on the results of the dynamic trajectory model, steering of the medical instrument is triggered at time ttrig, at step 1318. At step 1320, it is determined if the medical instrument has reached the target. If it is determined that the medical instrument has reached the target, the steering process ends 1322. If it is determined that the medical instrument has not reached the target, steps 1312-1320 are repeated, until the target is reached. Thus, the method disclosed in FIG. 12 allows synchronization of various steps, including imaging and steering of the medical instrument, with the patient's breathing cycle, by triggering the execution thereof at a specific time/state along the respiration cycle.
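The closed loop of steps 1312-1320 can be outlined, for illustration, as follows; every argument is a placeholder for the corresponding subsystem (imaging trigger, localization, dynamic trajectory model, steering controller), and the loop structure is only a schematic reading of FIG. 12.

```python
# High-level sketch of the confirmation-imaging / steering loop of FIG. 12.
def steer_until_target(next_ttrig, trigger_imaging, localize, update_trajectory, steer,
                       reached, max_iterations: int = 50):
    for _ in range(max_iterations):
        image = trigger_imaging(next_ttrig())    # step 1312: confirmation imaging at ttrig
        tip, target = localize(image)            # step 1314: real-time instrument and target positions
        if reached(tip, target):                 # step 1320: target reached?
            return True                          # step 1322: end of the steering process
        plan = update_trajectory(tip, target)    # step 1316: dynamic trajectory model
        steer(plan, next_ttrig())                # step 1318: steering triggered at ttrig
    return False
```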
Implementations of the systems, devices and methods described above may further include any of the features described in the present disclosure, including any of the features described hereinabove in relation to other system, device and method implementations.
According to some embodiments, there is provided a computer-readable storage medium having stored therein data-analysis algorithm(s), executable by one or more processors, for generating one or more models for prediction of respiration behavior and for providing recommendations, operating instructions and/or functional enhancements related to the operation of automated medical devices and/or related imaging systems.
The embodiments described in the present disclosure may be implemented in digital electronic circuitry, or in computer software, firmware or hardware, or in combinations thereof. The disclosed embodiments may be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on computer storage medium for execution by, or to control the operation of, one or more data processing apparatus. Alternatively or in addition, the computer program instructions may be encoded on an artificially generated propagated signal, for example, a machine-generated electrical, optical or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. A computer storage medium can be, or be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of any one or more of the above. Furthermore, while a computer storage medium is not a propagated signal, a computer storage medium can be a source or destination of computer program instructions encoded in an artificially generated propagated signal. The computer storage medium can also be, or be included in, one or more separate physical components or media (for example, multiple CDs, disks, or other storage devices).
The operations described in the present disclosure can be implemented as operations performed by a data processing apparatus on data stored on one or more computer-readable storage devices or received from other sources.
The term “data processing apparatus” as used herein may encompass all types of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, a system on a chip/s, or combinations thereof. The data processing apparatus can include special purpose logic circuitry, for example, an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). The apparatus
can also include, in addition to hardware, code that creates an execution environment for the computer program in question, for example, code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or combinations thereof. The apparatus and execution environment can realize various different computing model infrastructures, such as web services, distributed computing and grid computing infrastructures.
A computer program (also referred to as a program, software, software application, script or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a standalone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A computer program can be stored in a portion of a file that holds other programs or data, in a single file dedicated to the program in question, or in multiple coordinated files (for example, files that store one or more modules, sub programs or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described herein can be performed by one or more programmable processors, executing one or more computer programs to perform actions by operating on input data and generating output. The processes and logic flows can also be performed by, and an apparatus can also be implemented as, special purpose logic circuitry, for example, an FPGA or an ASIC. Processors suitable for the execution of a computer program include both general and special purpose microprocessors, and any one or more processors of any type of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data. A computer may, optionally, also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, for example, magnetic, magneto optical discs, or optical discs. Moreover, a computer can be embedded in another device, for example, a mobile phone, a tablet, a personal digital assistant (PDA), a game console, a Global Positioning System (GPS) receiver, or a portable storage device (for example, a USB flash
drive). Devices suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including semiconductor memory devices, for example, EPROM, EEPROM, random access memories (RAMs), including SRAM, DRAM, embedded DRAM (eDRAM) and Hybrid Memory Cube (HMC), and flash memory devices; magnetic discs, for example, internal hard discs or removable discs; magneto optical discs; read-only memories (ROMs), including CD-ROM and DVD-ROM discs; solid state drives (SSDs); and cloud-based storage. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
The processes and logic flows described herein may be performed in whole or in part in a cloud computing environment. For example, some or all of a given disclosed process may be executed by a secure cloud-based system comprised of co-located and/or geographically distributed server systems. The term “cloud computing” is generally used to describe a computing model which enables on-demand access to a shared pool of computing resources, such as computer networks, servers, software applications, and services, and which allows for rapid provisioning and release of resources with minimal management effort or service provider interaction.
Unless specifically stated otherwise, as apparent from the disclosure, it is appreciated that, according to some embodiments, terms such as “processing”, “computing”, “calculating”, “determining”, “estimating”, “assessing” or the like, may refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulate and/or transform data, represented as physical (e.g. electronic) quantities within the computing system’s registers and/or memories, into other data similarly represented as physical quantities within the computing system’s memories, registers or other such information storage, transmission or display devices.
It is to be understood that although some examples used throughout this disclosure relate to procedures for insertion of a needle into a subject’s body, this is done for simplicity reasons alone, and the scope of this disclosure is not meant to be limited to insertion of a needle into the subject’s body, but is understood to include insertion of any medical tool/instrument into the subject’s body for diagnostic and/or therapeutic purposes, including a port, probe (e.g., an ablation probe), introducer, catheter (e.g., drainage needle catheter), cannula, surgical tool, fluid delivery tool, or any other such insertable tool.
In some embodiments, the terms “medical instrument” and “medical tool” may be used interchangeably.
In some embodiments, the term “moving target” relates to a mobile target, i. e., a target that is capable of moving within the body of the subject, independently or at least partially due to or during a medical procedure.
In some embodiments, the terms "respiratory cycle", and “breathing cycle” may be used interchangeably.
In some embodiments, the terms "model", "algorithm", “data-analysis algorithm” and “data-based algorithm” may be used interchangeably.
In some embodiments, the terms “triggering event” and “gating window” may be used interchangeably.
In some embodiments, the terms “user”, “doctor”, “physician”, “clinician”, “technician”, “medical personnel” and “medical staff” are used interchangeably throughout this disclosure and may refer to any person taking part in the performed medical procedure.
It can be appreciated that the terms “subject” and “patient” may be used interchangeably, and they may refer either to a human subject or to an animal subject.
In the description and claims of the application, the words “include” and “have”, and forms thereof, are not limited to members in a list with which the words may be associated.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. In case of conflict, the patent specification, including definitions, governs. As used herein, the indefinite articles “a” and “an” mean “at least one” or “one or more” unless the context clearly dictates otherwise.
It is appreciated that certain features of the disclosure, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the disclosure, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable sub-combination or as suitable in any other described embodiment of the disclosure. No feature described in the context of an embodiment is to be considered an essential feature of that embodiment, unless explicitly specified as such.
Although steps of methods according to some embodiments may be described in a specific sequence, methods of the disclosure may include some or all of the described steps carried out in a different order. The methods of the disclosure may include a few of the steps described or all of the steps described. No particular step in a disclosed method is to be considered an essential step of that method, unless explicitly specified as such.
The phraseology and terminology employed herein are for descriptive purposes and should not be regarded as limiting. Citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the disclosure. Section headings are used herein to ease understanding of the specification and should not be construed as necessarily limiting.
Claims
1. A method of generating a respiration behavior model for allowing synchronization of an operation of an automated medical device and/or an imaging system in an image-guided medical procedure, with a specific state of a respiration cycle of a subject, the method comprising: collecting one or more datasets related to respiration of the subject; creating a training set comprising at least a portion of the one or more respiration related datasets of the subject, and one or more parameters related to respiration behavior in one or more databases and/or in one or more previously performed procedures utilizing the automated medical device; and training the respiration behavior model to output a prediction of the respiration behavior of the subject.
2. The method according to claim 1, further comprising pre-processing the datasets.
3. The method according to any one of the previous claims, further comprising extracting features from the one or more datasets.
4. The method according to any one of the previous claims, wherein the training further comprises using a signal quality analysis algorithm/trained model.
5. The method according to any one of the previous claims, wherein at least one of the one or more respiratory related datasets comprises data collected from a respiration sensor.
6. The method according to claim 5, wherein the respiration sensor is selected from: optical sensor, stretch sensor, pressure sensor, accelerometer, camera and motion sensor.
7. The method according to any one of the previous claims, wherein the training set further comprises one or more additional datasets selected from clinical procedure related dataset, patient related dataset and administrative related dataset.
8. The method of any one of the previous claims, wherein the respiration behavior model is generated utilizing artificial intelligence tools comprising one or more of: machine
learning tools, data wrangling tools, deep learning tools, artificial neural network (ANN), deep neural network (DNN), convolutional neural network (CNN), recurrent neural network (RNN), long short term memory network (LSTM), decision trees or graphs, association rule learning, support vector machines, inductive logic programming, Bayesian networks, instance-based learning, manifold learning, sub-space learning, dictionary learning, reinforcement learning (RL), generative adversarial network (GAN), clustering algorithms, or any combination thereof.
9. The method of any one of the previous claims, wherein training the respiration behavior model comprises using one or more of: loss function, Ensemble Learning methods, Multi-Task Learning, Multi-Output regression and Multi-Output classification.
10. The method of any one of the previous claims, further comprising training a triggering determination model using at least a portion of the one or more datasets, to determine timing of a specific triggering event (ttrig) during the respiration cycle based, in part, on the output of the respiration behavior model.
11. The method according to claim 10, wherein the triggering determination model is configured to predict the next valid trigger(s).
12. The method according to any one of the previous claims, further comprising triggering operation of the medical device in synchronization with the timing of the specific state of the respiration cycle.
13. The method according to claim 12, wherein the automated medical device is configured to insert and/or steer a medical instrument toward a target in the subject body.
14. The method according to claim 13, wherein the insertion and/or steering of the medical instrument is performed according to a planned path/trajectory.
15. The method according to claim 14, wherein the path/trajectory is configured to be updated in real time.
16. The method according to any one of the previous claims, further comprising triggering operation of an imaging system at time ttrig.
17. The method of any one of the previous claims, wherein generating the respiration behavior model is executed by a training module comprising a memory and one or more processors.
18. A method of synchronizing insertion of a medical instrument toward a target in a subject body by an automated medical device in image guided procedures, with specific state of breathing cycle of a subject, the method comprising: predicting respiration behavior of the subject based on an output of a respiration behavior model utilizing one or more data sets related to respiration of the subject; determining timing of the specific state (ttrig) of the respiration cycle based on the output of the respiration behavior model; and triggering insertion of the medical instrument at ttrig, towards the target in the subject body.
19. The method according to claim 18, further comprising triggering operation of an imaging system.
20. The method according to claim 19, wherein the imaging system is selected from: X-ray fluoroscopy, CT, cone beam CT, CT fluoroscopy, MRI and ultrasound.
21. The method according to any one of claims 18-20, wherein the insertion of the medical instrument is performed according to a planned trajectory.
22. The method according to any one of claims 18-21, wherein the trajectory is configured to be updated in real time.
23. The method according to any one of claims 18-22, further comprising executing a dynamic trajectory model.
24. The method according to any one of claims 19-23, wherein the triggering comprises alerting or instructing a user to operate the medical device and/or the imaging system at ttrig.
25. The method according to any one of claims 19-23, wherein triggering comprises automatically affecting the medical device and/or the imaging system to operate at ttrig.
26. The method according to any one of claims 19-25, wherein triggering comprises triggering imaging, triggering a scan, triggering insertion of the medical instrument and/or triggering steering of the medical instrument in the subject body.
27. The method according to any one of claims 19-26, wherein triggering further comprises triggering registration of the automated medical device to an image space, wherein distinct imaging events are synchronized to occur at time (ttrig) along/over the breathing cycle(s).
28. A method of synchronizing insertion of a medical instrument toward a target in a subject body in image guided procedures with specific state of breathing cycle of a subject, the method comprising: predicting respiration behavior of the subject based on an output of a respiration behavior model utilizing one or more data sets related to respiration of the subject; determining timing of a specific state (ttrig) of the respiration cycle based on the output of the respiration behavior model; and triggering insertion of the medical instrument at ttrig, towards the target in the subject body.
29. A system for generating a respiration behavior model for allowing synchronization of operation of an automated medical device in an image guided procedure, with a specific state of respiration cycle of a subject, the system comprising: a training module comprising: a memory configured to store one or more respiration related datasets; and one or more processors configured to execute the method of any one of claims 1 to 17.
30. The system of claim 29, wherein the training module is located on a remote server, an “on premise” server or a computer associated with the automated medical device.
31. The system of claim 30, wherein the remote server is a cloud server.
32. A system for steering a medical instrument toward a moving target in a body of a subject, the system comprising: an automated medical device configured for steering the medical instrument toward a moving target, the automated device comprising one or more actuators and an end effector configured for coupling the medical instrument thereto; and a processor configured for executing the method of any one of claims 20 to 27.
33. The system according to claim 32, further comprising a controller configured to control the operation of the device.
34. A system for synchronizing operation of a medical instrument with specific state of a respiration cycle of a subject, the system comprising: an automated medical device configured for inserting and steering the medical instrument toward a target in the body of the subject, the automated device comprising one or more actuators and an end effector configured for coupling the medical instrument thereto; and a processor configured for executing the method of any one of claims 18 to 27.
35. The system according to claim 34, further comprising a controller configured to control the operation of the automated medical device.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202163257800P | 2021-10-20 | 2021-10-20 | |
| US63/257,800 | 2021-10-20 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2023067587A1 (en) | 2023-04-27 |
Family
ID=86058939
Family Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/IL2022/051063 Ceased WO2023067587A1 (en) | 2021-10-20 | 2022-10-06 | Respiration analysis and synchronization of the operation of automated medical devices therewith |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2023067587A1 (en) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20170000571A1 (en) * | 2013-12-12 | 2017-01-05 | Koninklijke Philips N.V. | Method and system for respiratory monitoring during ct-guided interventional procedures |
| US20180221098A1 (en) * | 2012-06-21 | 2018-08-09 | Globus Medical, Inc. | Surgical robotic systems with target trajectory deviation monitoring and related methods |
| US20210057083A1 (en) * | 2018-11-16 | 2021-02-25 | Elekta, Inc. | Real-time motion monitoring using deep neural network |
| CN113112499A (en) * | 2021-04-29 | 2021-07-13 | 中国科学院深圳先进技术研究院 | Displacement prediction method, device and system for internal tissues of liver and electronic equipment |
| WO2021214754A1 (en) * | 2020-04-19 | 2021-10-28 | Xact Robotics Ltd. | Optimizing checkpoint locations along an insertion trajectory of a medical instrument using data analysis |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22883102; Country of ref document: EP; Kind code of ref document: A1 |
| | DPE2 | Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101) | |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08.08.2024) |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 22883102; Country of ref document: EP; Kind code of ref document: A1 |