WO2025090636A1 - Ultrasound and machine learning based junctional tourniquet
- Publication number
- WO2025090636A1 (PCT/US2024/052603)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- tourniquet
- ultrasonic probe
- ultrasonic
- junctional
- ultrasound
- Prior art date
- Legal status
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/48—Diagnostic techniques
- A61B8/488—Diagnostic techniques involving Doppler signals
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B17/00—Surgical instruments, devices or methods
- A61B17/12—Surgical instruments, devices or methods for ligaturing or otherwise compressing tubular parts of the body, e.g. blood vessels or umbilical cord
- A61B17/132—Tourniquets
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/37—Surgical systems with images on a monitor during operation
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/50—Supports for surgical instruments, e.g. articulated arms
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B17/00—Surgical instruments, devices or methods
- A61B2017/00017—Electrical control of surgical instruments
- A61B2017/00115—Electrical control of surgical instruments with audible or visual output
- A61B2017/00119—Electrical control of surgical instruments with audible or visual output alarm; indicating an abnormal situation
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B17/00—Surgical instruments, devices or methods
- A61B2017/00017—Electrical control of surgical instruments
- A61B2017/00115—Electrical control of surgical instruments with audible or visual output
- A61B2017/00128—Electrical control of surgical instruments with audible or visual output related to intensity or progress of surgical action
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B17/00—Surgical instruments, devices or methods
- A61B2017/00017—Electrical control of surgical instruments
- A61B2017/00132—Setting operation time of a device
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/06—Measuring instruments not otherwise provided for
- A61B2090/064—Measuring instruments not otherwise provided for for measuring force, pressure or mechanical tension
- A61B2090/065—Measuring instruments not otherwise provided for for measuring force, pressure or mechanical tension for measuring contact or contact pressure
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/37—Surgical systems with images on a monitor during operation
- A61B2090/378—Surgical systems with images on a monitor during operation using ultrasound
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/08—Clinical applications
- A61B8/0891—Clinical applications for diagnosis of blood vessels
Definitions
- the present disclosure relates to the treatment of severe wounds. More particularly, the present disclosure relates to devices and methods for the treatment of severe wounds.
- Hemorrhage is the leading cause of preventable death in trauma casualties, both civilian and in combat.
- the increased use of long-range artillery, man-portable rockets and high-explosive charges results in more serious multi-organ hemorrhagic injuries, highlighting the importance of hemorrhage control, especially with proximal limb and pelvic injuries.
- While the abundant use of tourniquets has greatly diminished the mortality from extremity hemorrhaging, junctional hemorrhage remains a largely unsolved problem.
- junctional hemorrhage is defined as hemorrhage from the areas connecting the extremities to the torso - axillae, shoulders, groin, buttocks and proximal thighs, as well as from the neck.
- Resuscitative endovascular balloon occlusion of the aorta is a highly invasive procedure that requires skill and is only relevant to the lower body.
- Junctional tourniquets (JTs), utilizing the same principles as MPP, are designed to maintain ongoing pressure on the vessel pending a definitive surgical solution. While massive bleeding from more distal injuries can be effectively managed with a tourniquet, JTs like the SAM junctional tourniquet, the Combat-Ready Clamp (CRoC), the Abdominal Aortic Junctional Tourniquet (AAJT) and others have so far shown very limited success.
- the principle of their operation is occluding a major artery by exerting mechanical pressure on the artery against a bony prominence - the bony pelvis, the first rib or the transverse processes of the cervical vertebrae.
- JTs have been used in the military, for example, but the events in which they were used are very few, and experience gained from training exercises shows they are very hard to place correctly due to the difficulty of precisely locating the artery or other structure and the underlying bone using a "blind technique." Moreover, JTs are likely to move from the correct position, which must be accurate and maintained in order to "trap" the artery between the device and the bone. Application on the aorta will be especially useful for pelvic hemorrhage, including obstetric hemorrhage. Accordingly, studies have shown their utilization to be cumbersome and time consuming, with a high failure rate in training and real combat scenarios, for example. Moreover, their effectiveness drops dramatically once the patient is transported.
- [0008] Junctional and pelvic hemorrhaging continue to be a significant cause of early preventable death among trauma casualties, and there is a need for a device that can provide rapid and reliable hemostasis without requiring advanced expertise.
- FIG. 1 depicts a simple geometry phantom with a single channel, in accordance with various embodiments of the present disclosure.
- FIG. 2 depicts an anatomical phantom with skeletal and vascular features, in accordance with various embodiments of the present disclosure.
- FIGs. 3, 4 and 5 illustrate phantom tissue training systems, in accordance with various embodiments of the present disclosure.
- FIGs. 6A-6C illustrate results using the simple phantom of FIG. 1, in accordance with various embodiments of the present disclosure.
- FIGs. 7A-7E illustrate results using the femoral phantom of FIG. 2, in accordance with various embodiments of the present disclosure.
- FIG. 8 illustrates strain properties of various ballistic gelatin compositions, in accordance with various embodiments of the present disclosure.
- FIG. 9 illustrates an object detection model network architecture, in accordance with various embodiments of the present disclosure.
- FIG. 10 illustrates AI model predictions, in accordance with various embodiments of the present disclosure.
- FIG. 11 depicts Ex vivo swine data collection, in accordance with various embodiments of the present disclosure.
- FIG. 12 depicts ultrasound image results for a 2-class model trained with a dataset from an ex-vivo swine model, in accordance with various embodiments of the present disclosure.
- FIG. 13 summarizes an example image capture protocol, in accordance with various embodiments of the present disclosure.
- FIG. 14 summarizes performance metrics for ShrapML models, in accordance with various embodiments of the present disclosure.
- FIGs. 15A-D illustrate confusion matrices for MobileNetV2 and ShrapML, in accordance with various embodiments of the present disclosure.
- FIG. 16 illustrates performance matrices for MobileNetV2 and ShrapML, in accordance with various embodiments of the present disclosure.
- FIG. 17 illustrates gradient-weighted class activation maps for trained two category models, in accordance with various embodiments of the present disclosure.
- FIGs. 18A and 18B illustrate three category ShrapML performance, in accordance with various embodiments of the present disclosure.
- FIGs. 19A-D illustrate confusion matrix and receiver operating characteristic (ROC) curve for ShrapML, in accordance with various embodiments of the present disclosure.
- FIG. 20 provides a summary of performance metrics for ShrapML trained with swine image sets, in accordance with various embodiments of the present disclosure.
- FIGs. 21A-H illustrate GradCAM overlays for ShrapML model, in accordance with various embodiments of the present disclosure.
- FIG. 22 is a flowchart of a method of training, in accordance with various embodiments of the present disclosure.
- FIG. 23 is a logic diagram 2300 of ultrasound (US) paired with artificial intelligence (AI) machine learning (ML) models to visualize and guide proper junctional occlusion, in accordance with various embodiments of the present disclosure.
- FIGs. 24-31 illustrate process flows of a junctional tourniquet(s) having US and ML capabilities, in accordance with various embodiments of the present disclosure.
- FIG. 32 depicts a flowchart detailing an application of a junctional tourniquet for managing the junctional tourniquet AI models, streaming the ultrasound signal, and providing instructions to the end user for proper junctional occlusion, in accordance with various embodiments of the present disclosure.
- FIG. 33 depicts an example graphical user interface, in accordance with various embodiments of the present disclosure.
- FIG. 34 illustrates a functional block diagram, in accordance with various embodiments of the present disclosure.
- FIGs. 35, 40-45 illustrate various frameworks for securing a junctional tourniquet to a body, in accordance with various embodiments of the present disclosure.
- FIGs. 36 and 37 illustrate actuation of an ultrasound probe in the z-axis direction, in accordance with various embodiments of the present disclosure.
- FIGs. 38 and 39 illustrate user interfaces that provide user guidance, in accordance with various embodiments of the present disclosure.
- FIG. 46 illustrates results of junctional tourniquet testing, in accordance with various embodiments of the present disclosure.
- FIG. 47 illustrates a performance comparison of improved junctional tourniquets with other tourniquets, in accordance with various embodiments of the present disclosure.
- the artificial intelligence (AI) provided by machine learning (ML) algorithms includes models for guiding junctional compression and for guiding to the location. There is cross-talk or communication between these ML models that allows them to function harmoniously.
- Ultrasound (US) is paired with artificial intelligence (AI)/machine learning (ML) models to visualize and guide proper junctional occlusion.
- object detection (OD) AI models can be trained and used to provide guidance to a junctional pressure point, and classification models can be trained and used to identify and help maintain occlusion.
- AI and ML are used interchangeably herein.
- Machine learning (ML) based methodologies serve to guide a junctional tourniquet that can confirm and track appropriate pressure while continuously monitoring the effectiveness of applied pressure.
- Integrating ultrasound (US) technology into junctional tourniquet devices lowers the skill threshold for precise pressure point occlusion when accompanied by trained AI models to guide localization and confirm and maintain the occlusion state.
- Junctional tourniquets employing ultrasound and trained ML algorithms satisfy the need to provide rapid and reliable hemostasis without requiring advanced expertise on the part of the user/operator. Integrating ultrasound (US) technology into junctional tourniquet devices can therefore lower the skill threshold for precise pressure point occlusion when accompanied by trained AI models to guide localization and confirm occlusion states.
- junctional tourniquets are improved by adding a “smart” component and accompanying methodologies that facilitate accurate application, monitoring, and adjusting of occlusal pressure on compressible structures to prevent recurrence of bleeding that may otherwise occur while the casualty is being moved or handled while enroute to other care.
- a junctional tourniquet device guides the user/operator in the correct placement of the tourniquet, monitors appropriate arterial occlusion, and potentially auto-adjusts to maintain constant hemostasis until vascular surgery is available, for example.
- Such a hemorrhage control device in military, commercial and civilian settings treats junctional hemorrhaging from the pelvis, uterus, groin, buttocks, limbs, armpit and neck by finding the correct location to apply and maintain pressure, whether that be the abdominal aorta or the femoral, subclavian or carotid arteries. Further, machine learning algorithms are applied in real time to ultrasound imaging to create a closed-loop/provider-in-loop system for the purpose of providing accurate pressure on major arteries as well as ongoing monitoring of its accuracy and effectiveness.
- the disclosure accordingly describes a device, method and system that guides placement of an ultrasound probe to a position of maximal occlusion of major arteries, pressing the arteries against bony prominences in junctional areas and continuously monitors effective hemostasis to verify effective arterial occlusion.
- the device receives sonographic images taken, gathered or acquired by the ultrasound probe and uses real-time machine learning algorithms to guide probe placement either by user interface or an autonomous motor system. This will simplify hemorrhage control from junctional bleeds, reduce both the dedicated manpower (once device is placed) and cognitive burden on medics/first responders, as well as enable inexperienced bystanders to render potentially lifesaving aid.
- the application of machine-learning based real-time image analysis is used to guide an ultrasound to apply pressure on major arteries against bony prominences. It is to be embodied in the form of a device, which will be a "smart version" of a junctional tourniquet, and in the described systems and methodologies.
- This junctional tourniquet will use an ultrasound probe as the pressure-exerting component.
- the ultrasound sonographic images will be transferred in real time to a controller that will utilize machine learning algorithms to: 1. Guide the ultrasound probe to correct placement on the vessel (femoral, subclavian, aortic), such as veins or arteries, either by the user with a user interface with guidance arrow lights or with an autonomous motor system; 2. Continuously verify appropriate arterial occlusion; and 3. Adjust the applied pressure as needed to maintain occlusion.
- the machine learning algorithms include: 1. Image classification, differentiating between an occluded and a non-occluded artery; and 2. Object detection, locating the artery and the bone (or other hard surface of the patient sufficiently firm to withstand the occlusion pressures described herein) in order to direct the movement of the pressure-exerting probe to the correct location. The system will operate in real time as a closed-loop/provider-in-loop system, with the ultrasound images as its input and user guidance/autonomous motor action as its output. As used herein, the terms sonographic images and ultrasound images both refer to images taken by an ultrasound probe and may be used interchangeably.
- the artificial intelligence (AI) provided by the ML algorithms thus includes models for guiding junctional compression and for guiding to the location for junctional compression. There is cross-talk or communication between these ML models that allows them to function harmoniously.
- a custom test platform suitable for tourniquet testing in the femoral junction region was developed to train classification models to identify proper femoral vessel occlusion. Trained object detection AI models help identify the correct pressure point by identifying key anatomical features and their positions.
- a test platform was developed for collecting guidance and occlusion AI training datasets. The test platform consisted of a linear actuator for occluding flow with a base for different custom tissue-mimicking models. Three different models were used, including:
- An ultrasonically compatible phantom, also known herein as a tissue phantom, for image collection to train and test the algorithm is presented.
- a phantom collects US image data for training and testing AI models.
- the phantom was made using clear ballistic gel with latex tubing, for example, connected to a pump simulating a vessel.
- the phantom is a 3D printed rendering of patient junctional areas, such as human junctional areas, using surgical tubing for blood vessels, under a covering of ballistic hydrogel modified to closely mimic the composition of muscle, fat, and skin over the vessel. Imaging and flow data are collected from healthy controls and used for training the algorithm on actual human anatomy.
- the simple tissue phantom of FIG. 1 was first created for data collection in the system of FIG. 3.
- the phantom comprised synthetic gelatin (gel) with a single channel through the center that can be placed over an aluminum bar or other hard surface sufficiently firm to withstand occlusion pressures as described herein.
- the tissue phantom platform must be (i) ultrasound compliant, (ii) mechanically robust enough to withstand the compressive forces at the point of occlusion, (iii) mechanically tough enough to withstand repeated junctional tourniquet applications, (iv) anatomically accurate for the junctional occlusion site as needed to develop guidance AI models, (v) capable of physiological fluid flow through artery and vein features, and (vi) easy to fabricate for various anatomies and replicated experiments.
- [0057] As the ultrasound probe pushes, the vessel is occluded against the aluminum bar, and the occlusion can be observed by ultrasound. Images were collected with pulsatile flow through the phantom at physiological pressures.
- the probe was lowered incrementally until pressure distal to the phantom dropped by at least 90%, denoting occlusion.
- Ultrasound clips were collected at different angles and placements on the surface to create a large database of images for algorithm training. Phantom image collection was conducted. Changes to the setup are ongoing to acquire better datasets for training the algorithm.
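- As a rough illustration of the occlusion criterion used above only (the disclosure does not provide code, and the variable names below are hypothetical), the 90% distal-pressure drop could be flagged as follows:

```matlab
% Minimal sketch: flag occlusion when pressure distal to the phantom
% drops by at least 90% from its pre-compression baseline.
baselineP  = mean(distalPressure(1:100));        % baseline from initial samples
occluded   = distalPressure < 0.10 * baselineP;  % >= 90% drop denotes occlusion
onsetIndex = find(occluded, 1, 'first');         % first sample marked occluded
```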
- In FIG. 3, a system 300 for collection of data from the simple tissue phantom of FIG. 1 is shown.
- a data acquisition controller, microcontroller, or computer is connected to both the ultrasound machine USM 310 and a flow sensor. Also shown in the loop is a controlled pressing device CPD 320, controlled by the ultrasound machine USM 310 and configured to press the ultrasound probe USP 330 in a controlled manner into the simple phantom SP 370, which is connected to a fluid reservoir 360 and pump 350.
- Pressure sensor 340 monitors the pressure of CPD 320.
- In FIG. 4, a more anatomically realistic tissue phantom, including accurate bone and vessel placement, is shown.
- an anatomical phantom of the hip and thigh area made from a 3D-printed hip bone and femur and ultrasound compliant ballistic gel (gelatin), with femoral vessels in anatomical positions, is shown. It too connects to the same data acquisition system as in FIG. 3.
- junctional tissue phantoms of FIGs. 1 and 2 were developed because commercially available tissue phantoms were inadequate for mechanical compression, ultrasound compatibility, and/or as a means of controlling flow through the phantom at desired blood pressures. An overview of how the phantoms are made and their features is found herein.
- a tissue phantom is comprised of relevant artery, vein, and bones in a custom-made anatomical mold.
- Anatomical molds were vacuum formed based on available mannikins/molds. However, the mannikin molds had to be modified for use in this application. Molds were fitted with 3D printed or purchased bones at proper anatomical locations. Openings in the vacuum-formed mold at the distal and proximal ends of the occlusion site were created to allow rigid tubing to act as a placeholder for vessel channels. Rigid tubing was removed after phantom casting to allow soft, flexible tubing to be placed in the artery and vein locations. These were then attached to a flow loop capable of creating pulsatile flow, a hemorrhage site distal to the occlusion point, physiologically relevant pressure as determined by a patient monitor, and bypass flow when occlusion is applied to the phantom.
- Phantoms have been constructed for femoral, subclavian, and aortic junctional tourniquet sites. Phantoms have been created from synthetic ballistic gelatin, silicone elastomers, or Styrene-block-(ethylene-co-butylene)-block-styrene (SEBS) copolymers, for example. SEBS copolymers are biocompatible elastomers able to remain stable when subjected to UV radiation.
- a phantom was created with 10% clear ballistic gel (CBG) and a 3-D printed mold (Raise3d Pro2Plus). The volume of the mold was calculated and used to determine the cube size to be cut from the main block with some added volume for a residual estimate.
- the CBG was then cut into small pieces, placed into a 500 mL container, and warmed to 130°C in an oven (HERATherm) for approximately 2 hours or until the gel was de-bubbled. Due to the high temperature needed to melt the CBG, a polycarbonate filament (PC 1500 FR, Jabil) was selected to print the mold to ensure the form was kept. Using a biopsy punch to hold the place of a vessel, the CBG was slowly poured into the silicone-oil-lined mold and left to cool at room temperature. Once cooled, the phantom was removed from the mold and placed over a wax block, which acted as a bone to allow the vessel to occlude.
- Stress-strain properties 800 of the various ballistic gelatin compositions were evaluated as shown in FIG. 8, with ratios between clear ballistic gelatin (CBG) and other synthetic gelatin material types tested. Additionally, there are other qualitative properties of interest, such as whether the material can reach junctional occlusion and withstand repeated use without ripping.
- Anatomical tissue phantom 540 has an arterial side and a venous side, is formed of ultrasound compliant material, and has one or more compressible structures within the ultrasound compliant material that accommodate fluid flow through the anatomical tissue phantom; the compressible structures, as previously mentioned, would include anatomical vessels, arteries, veins, nerves, and bones formed within the tissue phantom.
- the ultrasound compliant material of the tissue phantom may be a synthetic gelatin, a ballistic gelatin, a ballistic hydrogel, or a clear ballistic gelatin, as will be described.
- the anatomical tissue phantom may be a femoral, a subclavian or an aortic tissue phantom having compressible structures like compressible tubing representative of vessels, arteries, veins, nerves, and bones.
- Fluid reservoir 510 houses ultrasonic compliant fluid that is pumped by pump 520 to the tissue phantom 540.
- Pressure sensor(s) 530 is configured to receive ultrasonic compliant fluid from the pump 520, measure pressure of the received ultrasonic compliant fluid, and provide the ultrasonic compliant fluid to an arterial side of the tissue phantom 540.
- a flow sensor 550 is coupled to the tissue phantom 540 and the fluid reservoir 510 and is configured to measure the flow of the ultrasonic compliant fluid and provide the ultrasonic compliant fluid to the fluid reservoir 510.
- a hydrostatic reservoir 560 such as a hydrostatic IV bag provides hydrostatic fluid to the venous side of the tissue phantom.
- Ultrasonic compliant fluid flows in a flow loop of the system: fluid pumped by the pump from the fluid reservoir is provided to the pressure sensor, then to the arterial side of the tissue phantom; the fluid flows through the arterial side of the tissue phantom, is measured by the flow sensor at an output of the tissue phantom, and flows back to the fluid reservoir. Responsive to pressure on the ultrasound compliant material of the anatomical tissue phantom at a position that is proximal a location of a compressible structure of the one or more compressible structures, the compressible structure is compressed against the hard surface of the anatomical tissue phantom to at least partially occlude fluid flow of the ultrasonic compliant fluid in the compressible structure.
- the phantom flow loop of FIG. 5 can be used with a peristaltic pump or piston pump (SuperPump) 520 using different flow sensors 550, pressure sensors 530, or a mass balance.
- Tissue Phantom Imaging
- [0070] The phantom was fitted with latex tubing (Penrose) to act as a vessel, which was connected to the flow loop described above in FIG. 5. This loop consisted of a peristaltic pump (Masterflex) that took Doppler compliant fluid (CIRS) from a reservoir and fed it to the phantom; a pressure sensor (ADI) that connected directly to the data acquisition unit (ADI) was placed downstream of the phantom. Between the pump and the phantom there was a bypass line. The vessel in the phantom was kept underwater during imaging. Ultrasound imaging was performed using an ultrasound probe (Terason) from a Terason500 US imaging system (Terason).
- In FIGs. 6A-6C and FIGs. 7A-7E, results using the simple phantom of FIG. 1 and the femoral phantom of FIG. 2, respectively, are shown.
- In FIG. 6A, the simple phantom of FIG. 1 was used to train classification models for image interpretation, using the color overlay feature of the ultrasound system. Representative images with and without color flow are shown.
- the AI models were trained to classify images between two classes: baseline (full flow) and occlusion; or three classes, adding progress (partial occlusion). Confusion matrices for both models are shown in FIGs. 6B and 6C.
- Representative images for each category are shown in FIG. 7B, along with a normalized confusion matrix from model predictions on the test dataset in FIG. 7A.
- Ultrasound images are processed for training AI models for both occlusion and guidance applications.
- the captured US scans will be split into frames, cropped to remove non-US image information, and resized.
- MATLAB is used to approximate the reduction in flow rate relative to baseline rates measured by the Doppler device, and each US frame is labeled with its flow reduction percentage.
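- A minimal MATLAB sketch of this labeling step, assuming hypothetical, time-synced flow samples (tFlow, flowSignal) and frame timestamps (tFrames):

```matlab
% Approximate per-frame flow reduction relative to a pre-occlusion baseline.
baselineFlow  = mean(flowSignal(tFlow < tOcclusionStart));  % baseline flow rate
flowAtFrame   = interp1(tFlow, flowSignal, tFrames);        % flow at each US frame
flowReduction = 100 * (1 - flowAtFrame ./ baselineFlow);    % percent reduction per frame
```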
- the categorical classification between flow and occluded conditions is optimized based on percent flow reduction to determine the effect of this hyperparameter on training performance.
- the AI model may be re-configured for a regression output layer, where the AI model for occlusion estimates the current flow based on the US image. This architecture may be optimized for these applications until an acceptable average accuracy, such as 85% or higher, is achieved.
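- One way such a re-configuration might look in MATLAB's Deep Learning Toolbox is sketched below; the layer names ('fc_out', 'softmax', 'classoutput') are hypothetical placeholders, not the actual layer names of the model:

```matlab
% Swap the classification head for a single-output regression head that
% estimates current flow (as a fraction of baseline) from a US image.
lgraph = layerGraph(net);                                        % trained classifier
lgraph = replaceLayer(lgraph, 'fc_out', ...
                      fullyConnectedLayer(1, 'Name', 'fc_reg')); % one regression output
lgraph = removeLayers(lgraph, {'softmax', 'classoutput'});       % drop classification layers
lgraph = addLayers(lgraph, regressionLayer('Name', 'reg_out'));
lgraph = connectLayers(lgraph, 'fc_reg', 'reg_out');
```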
- the guidance US scans will be labeled with bounding box overlays for the regions of interest, specifically the artery, vein, and underlying bony surfaces, for example. Individual frames will then be used as training input for re-tuning the model architecture, such as a YOLOv7tiny model architecture, for example. Data augmentation and model modifications may be used until blind test performance surpasses the acceptable average accuracy, such as 85% or higher sensitivity across all scan points.
- a You Only Look Once (YOLOv3, for example) object detection network may be used to identify artery, vein, and bone objects in an ultrasound image.
- the prototype will guide the end-user to move the probe to the proper anatomical location and move the ultrasound probe until the AI detects an artery in view. From there it will guide the user to go left or right until the artery is centered in the US image, followed by adjusting the angle until the bone is also visible. The proper location is reached when the bone is centered under the artery.
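- The decision rule described above could be expressed as a simple function; the sketch below is illustrative only, with hypothetical string commands and a hypothetical centering tolerance 'tol' (in pixels):

```matlab
% Map object-detection output to a guidance command. bboxes are [x y w h]
% rows from the detector; classes are predicted labels; W is image width.
function cmd = guidanceCommand(bboxes, classes, W, tol)
    artery = bboxes(classes == "artery", :);
    bone   = bboxes(classes == "bone", :);
    if isempty(artery)
        cmd = "sweep";                    % keep scanning until an artery appears
        return
    end
    cx = artery(1,1) + artery(1,3)/2;     % x center of the artery box
    if cx < W/2 - tol
        cmd = "move left";                % center the artery laterally
    elseif cx > W/2 + tol
        cmd = "move right";
    elseif isempty(bone)
        cmd = "tilt";                     % adjust angle until bone is visible
    else
        cmd = "hold";                     % bone centered under the artery
    end
end
```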
- This model is also useful for successful shrapnel detection in ultrasound images. A description of the methods for this AI model from that paper is below.
- frames of all video clips were extracted using an implementation of FFmpeg with a Ruby script, yielding 90 individual frames per video.
- Duplicate frames were removed, and all images were processed with MATLAB's image processing toolbox (MathWorks, Natick, MA, USA), in which a function was written to crop images to remove ultrasound settings from view and then resize them to 512 x 512 x 3, for example.
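- A minimal sketch of that crop-and-resize function (the crop rectangle is a hypothetical parameter chosen to exclude the on-screen ultrasound settings):

```matlab
% Crop away ultrasound UI elements, then resize to the model input size.
function out = preprocessFrame(img, cropRect)
    cropped = imcrop(img, cropRect);       % cropRect = [x y width height]
    out = imresize(cropped, [512 512]);    % 512 x 512, color channels retained
end
```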
- the object detection model, ShrapOD, used a SqueezeNet neural network backbone with modifications to include YOLOv3 object detection heads, as shown in FIG. 9.
- This network architecture 900 was built based on MATLAB-provided object detection code.
- the feature extraction network in SqueezeNet was modified to use an image input layer 902 (such as 512 x 512 x 3, for example) followed by a convolutional block 904 containing a convolutional layer with rectified linear unit (ReLU) activation and a max pooling layer 908. This is followed by four Fire blocks 906, 910 prior to the network splitting after Fire block (x2) 910 to integrate the YOLOv3 object detection heads.
- Fire modules, shown in blocks 930-942, per the SqueezeNet architecture comprised a single (1 x 1) convolutional squeeze layer 930 (with ReLU activator block 932) followed by expanding layers 934, 936 consisting of a mixture of (1 x 1) and (3 x 3) convolutional layers in parallel to increase the depth and width for higher detection accuracy. These parallel layers are concatenated prior to the next layer in the network architecture to reduce the number of model parameters.
- Five additional Fire blocks 914 are used on the YOLOv3 class output layer pathway, followed by a convolutional layer 916 with batch normalization and ReLU activation. See the left pathway of FIG. 9.
- Flow 900 of FIG. 9 provides an overview of an example ShrapOD model network architecture.
- Diagram for the object detection algorithm using SqueezeNet as the classification backbone, with added YOLOv3 outputs to generate bounding boxes and class predictions is shown.
- individual layers are shown as well as “blocks” that consist of multiple layers.
- the convolutional block (904) has a convolutional layer, ReLU activation layer, and a max pooling layer (908, 912).
- the Fire blocks (906, 910, 914) - that repeat two or five times as indicated - begin with a convolutional layer with ReLU activation and then split into parallel chains with varying convolutional filter sizes (1 x 1 and 3 x 3) with ReLU activation.
- the parallel chains come back together using a depth concatenation layer 924, which is followed by a backend convolutional block 926.
- the feature convolutional block 918 and both backend convolutional blocks 904 and 916 are identical in layer content, beginning with a convolutional layer, followed by a batch normalization and ending with a ReLU activator.
- Both output layers, for class output layer 920 and bounding box output layer 928 are also convolutional layers.
- bounding box output layer 928 was used for bounding box predictions, in which the network was fused after the Fire block 9 concatenation with Fire block 8, with an additional convolutional block 918 for feature resizing (block 922).
- the model contained a final concatenation layer and convolutional block to align the predicted bounding box coordinates to the output image. See the right pathway of FIG. 9.
- the YOLOv3 network also used optimized anchor boxes to help the network predict boxes more accurately.
- the object detection algorithm can be trained using the LOSO (leave one subject out) methodology, in which a single subject is left out of training instances, allowing assessment of model overfitting. The predictions across each LOSO model were then aggregated, and over 85% blind accuracy was achieved, compared to 70% accuracy without the LOSO and aggregation methods. This approach was combined with a live US feed and implemented on a single-board computer (SBC) for real-time prediction with inference times under one second per image.
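- Conceptually, LOSO training can be sketched as the loop below; 'trainDetectorFn' and 'detectFn' are hypothetical stand-ins for the training and inference routines, not the authors' code:

```matlab
% Leave-one-subject-out: hold out each subject in turn, train on the rest,
% and aggregate the blind predictions across folds.
subjects = unique(imageTable.SubjectID);
allPreds = [];
for s = 1:numel(subjects)
    holdOut  = imageTable.SubjectID == subjects(s);
    detector = trainDetectorFn(imageTable(~holdOut, :));    % train without subject s
    preds    = detectFn(detector, imageTable(holdOut, :));  % blind fold predictions
    allPreds = [allPreds; preds];                           %#ok<AGROW>
end
```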
- Model training was performed using MATLAB R2022b with the deep learning and machine learning toolboxes for the base phantom and then repeated for the modified, neurovascular phantom.
- For the base phantom use case only images containing shrapnel were used in this example.
- For the neurovascular phantom images were taken from datasets with and without shrapnel. Images were cropped to remove ultrasound file information, sized to 512 x 512 x 3 and then datasets were split into 75% training, 10% validation and 15% testing quantities.
- Augmentation of the training datasets included random X/Y axis reflection, +/- 20% scaling, and +/- 360° rotation. These augmentation steps were written into a function that also applied them to the bounding box data, as sketched below.
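- In MATLAB, this kind of paired image/bounding-box augmentation is commonly written with randomAffine2d, imwarp, and bboxwarp; the sketch below mirrors the stated transform ranges but is not necessarily the authors' function:

```matlab
% Apply one random affine transform jointly to an image and its boxes.
function [imgOut, boxesOut] = augmentPair(img, boxes)
    tform = randomAffine2d('XReflection', true, 'YReflection', true, ...
                           'Scale', [0.8 1.2], 'Rotation', [-360 360]);
    ref      = affineOutputView(size(img), tform, 'BoundsStyle', 'CenterOutput');
    imgOut   = imwarp(img, tform, 'OutputView', ref);
    boxesOut = bboxwarp(boxes, tform, ref);   % keep boxes aligned with warped pixels
end
```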
- Training was performed using a stochastic gradient descent with momentum (SGDM) solver, 23 anchors, 125 epochs, L2 regularization of 0.0005, with a penalty threshold of less than 0.5 Intersection over Union (IoU), a validation frequency of 79 iterations, and an image batch size of 16 images.
- the learning rate started at 0.001 and, after a warmup period of 1000 iterations, followed a scheduled slowdown as a function of the iteration number.
- Training parameters were adapted from MATLAB-provided object detection code.
- Training was performed using the CPU on an HP workstation (Hewlett-Packard, Palo Alto, CA, USA) running Windows 10 Pro (Microsoft, Redmond, WA, USA) with an Intel Xeon W-2123 processor (3.6 GHz, 4 core; Intel, Santa Clara, CA, USA) and 64 GB RAM.
- Model performance was evaluated from true positive (TP), false positive (FP), and false negative (FN) counts.
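- For reference, the standard definitions of the metrics built from these counts are:

```latex
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall} = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2\,\mathrm{Precision}\cdot\mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}, \qquad
\mathrm{IoU} = \frac{\lvert A \cap B \rvert}{\lvert A \cup B \rvert}
```

where A and B denote the predicted and ground-truth bounding box regions.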
- Mean IoU (mIoU) scores were calculated across each object class, and, for the multi-object model, mean AP (mAP) and an average mIoU across the object classes were determined.
- Ex Vivo Swine Data Collection
- As part of a euthanized model study focused on automated central vascular access devices, ultrasound images were collected while applying a junctional tourniquet in a euthanized swine model, a biological structure with artery and vein flow, as shown in FIG. 11. Images were collected as clips recorded via a computer or controller interface for the ultrasound systems at different positions and vessel occlusion amounts, creating a robust dataset for algorithm training.
- the ex-vivo swine model was set up using euthanized swine tissue.
- the model consisted of lumbar-to-shin swine tissue.
- 8Fr feeding tubes (Covidien, Mansfield, MA, USA) were used to guide along the arterial and venous vessels distally towards the back of the knee. Small dissections were made into the muscle fascia which allowed the muscle layers to separate and create a flap to further visualize the vessels while preserving the femoral sheath.
- the distal vessels were cannulated using 14G IV catheters (MedOfficeDirect, Naples, FL, USA) and held in place using Perma-Hand silk ligatures (Ethicon, Raritan, NJ, USA).
- the proximal vessels were cannulated using 8Fr PCI introducers (Argon Medical Devices, Athens, TX, USA) and held in place similarly to the distal vessels.
- the distal vessels were connected using a shunt loop that consisted of tubing connected using a 3-way stopcock.
- the proximal cannulations were connected to a Vivitro SuperPump AR Series (Vivitro Labs, Victoria, BC, Canada) and a hydrostatic reservoir, such as an IV bag.
- Doppler fluid (CIRS Tissue Simulation Technology, Norfolk, Virginia, USA) was used in the flow loop.
- a third occlusion classification model was trained with a dataset from an ex-vivo swine model.
- Representative ultrasound image results 1200 for a 2-class model, along with the Grad-CAM overlay for each prediction, are shown in FIG. 12.
- AI models are trained, for example, on a two-class classification: flow and occlusion (defined as 90% flow reduction); or a three-class classification: flow, progress (50-90% flow reduction or partial occlusion) and occlusion; and on object detection for artery, vein, and bone features and other compressible anatomical features.
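- Given a per-frame percent flow reduction (as computed from the time-synced Doppler signal earlier), the class assignments might be sketched as:

```matlab
% Two-class scheme: flow vs. occlusion (>= 90% reduction).
label2 = repmat("flow", size(flowReduction));
label2(flowReduction >= 90) = "occlusion";

% Three-class scheme: flow, progress (50-90%), occlusion (>= 90%).
label3 = repmat("flow", size(flowReduction));
label3(flowReduction >= 50 & flowReduction < 90) = "progress";
label3(flowReduction >= 90) = "occlusion";
```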
- Use of the junctional tourniquet phantom and methods therefor can be broken into ultrasound tissue phantom data collection, ex vivo swine data collection, and algorithm training and testing.
- the object detection and image classification functionalities of the machine learning model(s) are trained on phantom data.
- occlusion AI data is collected from the flow loop with a PowerLab Data Acquisition system via the LabChart software. Inputs to the software include data from the flow sensor, pressure sensor, force sensor, and portable ultrasound system, such as shown in FIG. 5.
- One example data collection run consists of running the flow loop with an active bleed while recording data; then, after 30 seconds, the actuated test platform occludes the artery (the actuator extends for 10 seconds). Once occluded, data is recorded for an additional 10 seconds before allowing the actuator to retract. Recording in LabChart for one run stops after flow and pressure values have returned to baseline.
- the data from the flow sensor is time-synced with the ultrasound video and the flow sensor output is used to divide ultrasound video frames by flow level which correlates to occlusion levels.
- This data is labeled with bounding boxes around three major features: vein, artery, and bone, and is used to train the object detection model of the guidance Al model.
- a phantom in the flow loop of FIG. 5 collects US image data for training and testing AI models.
- the phantom could be made using clear ballistic gel with latex tubing connected to a pump simulating a vessel. Flow is occluded by applying pressure with the US probe until flow is reduced by 90% of the initial rate, for example. Occlusion US images were labeled as positive or negative for occlusion based on flow reduction. Performance metrics were measured to determine an optimal or preferred percent occlusion threshold. A phantom that replicated the physiology of the human inguinal crease was also developed.
- the guidance US scanning protocol in an example consisted of 1) sliding probe lateral to medial along the inguinal crease, 2) rotating probe 180°, and 3) tilting probe ⁇ 45° with the vessels in view.
- OD models were trained to identify vessels and underlying bone surface. These models can then guide the user to the proper location and their performance was assessed.
- a table 1300 summarizing this example image capture protocol is shown in FIG. 13.
- ultrasound images for guidance are captured using a protocol in each region to ensure the vessels and underlying bony surfaces are captured in the US scan for training the AI models.
- pressure is slowly applied at the pressure point until occlusion is reached, as determined by continuous waveform Doppler distal of the pressure point at either the ankle (femoral), wrist (subclavian), or femoral artery (aorta). Pressure will be held for a brief period of time, such as 3 or 5 seconds (3 s or 5 s), followed by slow release while simultaneously measuring distal flow using Doppler.
- the guidance US scans will be labeled with bounding box overlays for the regions of interest, specifically the artery, vein, and underlying bony surfaces. Individual frames will then be used as training input for re-tuning the YOLOv7tiny model architecture. Data augmentation and model modifications will be used until blind test performance surpasses 85% sensitivity across all scan points.
- Neural network model development and evaluation were performed using Matlab v2022a on an AMD Ryzen 9 5900HX 3.3GHz, 32 GB RAM, and NVIDIA RTX 3800 16GB VRAM computer system (Lenovo).
- Two neural network architectures were used: (1) a ShrapML model optimized for shrapnel identification in ultrasound images and (2) MobileNetV2, a neural network model that performed best for shrapnel identification in ultrasound images.
- Each model was fitted with a 512 x 512 x 3 image input layer and a two or three category classification output layer, depending on the image sets used.
- Model training was performed for up to 100 epochs using a Root Mean Squared Propagation (RMSProp) optimizer with a 0.001 learn rate.
- a batch size of 32 was used throughout with evaluation of validation loss performed at the end of each epoch.
- a validation patience of five was used, which meant that if the validation loss was not further reduced within five epochs, training ceased early and the model with the lowest validation loss was selected as an optimal or preferred model. Training was repeated three times with different random image splits for each training strategy, and each model was independently evaluated for determining overall performance. It is noted that parameters other than those set forth may be used without departing from the scope of the disclosure.
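- A trainingOptions call matching these stated settings might look like the following sketch (the datastore variables and network graph are hypothetical placeholders):

```matlab
% RMSProp, 0.001 learn rate, batch size 32, up to 100 epochs, patience 5,
% returning the network with the lowest validation loss.
opts = trainingOptions('rmsprop', ...
    'InitialLearnRate', 1e-3, ...
    'MaxEpochs', 100, ...
    'MiniBatchSize', 32, ...
    'Shuffle', 'every-epoch', ...
    'ValidationData', valDatastore, ...
    'ValidationPatience', 5, ...
    'OutputNetwork', 'best-validation-loss');
net = trainNetwork(trainDatastore, lgraph, opts);
```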
- Gradient-weighted Class Activation Mapping (Grad-CAM) overlays were created for 1/24th of the testing images for each model.
- Grad-CAM overlays are used to produce an approximate localization heat map identifying "hot spots" for regions important to the model prediction, as a means of making models more explainable and confirming that irrelevant image artifacts are not being tracked.
- Grad-CAM was performed using a built-in Matlab command for every 24th test image, and the overlays were saved according to the ground truth and prediction labels. Representative images were selected for highlighting regions of the images the models identified when making a classification prediction.
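- A sketch of that step using the built-in gradCAM function from MATLAB's Deep Learning Toolbox (the file list and preprocessing helper are hypothetical):

```matlab
% Compute and overlay a class-activation heat map for every 24th test image.
for i = 1:24:numel(testFiles)
    img   = preprocessFrame(imread(testFiles{i}), cropRect);
    label = classify(net, img);               % model prediction for this frame
    map   = gradCAM(net, img, label);         % localization heat map
    imshow(img); hold on
    imagesc(map, 'AlphaData', 0.5); colormap jet; hold off
end
```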
- FIGs. 15A-D illustrate confusion matrices for MobileNetV2 (FIGs. 15A, 15C) and ShrapML (FIGs. 15B, 15D) for ultrasound tracking junctional tourniquet application.
- two different model architectures were used - ShrapML and MobileNetV2 - each with and without affine transformations for data augmentation.
- Average confusion matrices for 3 trained models are shown for (FIG. 15A, FIG. 15C) MobileNetV2 and (FIG. 15B, FIG. 15D) ShrapML, (FIG. 15A, FIG. 15B) without data augmentation and (FIG. 15C, FIG. 15D) with data augmentation.
- Confusion matrix values are expressed as percentages across each ground truth category.
- MobileNetV2 with or without augmentation trended toward false positive (no flow) results, resulting in recall metrics of 0.675 and 0.646 without and with data augmentation, respectively, as shown in the table of FIG. 16.
- MobileNetV2 was strong at identifying baseline, full flow conditions, with specificity reaching 0.990 and 0.996 without and with data augmentation, respectively. In contrast, augmentation had a more pronounced effect on ShrapML training.
- Without augmentation, ShrapML had a high false negative (full flow) rate, with a specificity of 0.683. Augmentation with ShrapML solved this false negative bias, increasing specificity to 0.991 without impacting the false positive rate. Overall, ShrapML with augmentation had the strongest accuracy (0.934) and F1 score (0.918) and was selected as a preferred configuration for this application.
- FIG. 17 illustrates gradient-weighted class activation maps (GradCAM) for trained two category models for ultrasound tracking junctional tourniquet occlusion.
- In column 1, base ultrasound images are shown for reference, as well as (left to right) GradCAMs for MobileNetV2 without and with data augmentation and ShrapML without and with data augmentation.
- Four representative ultrasound images are shown: two identified as full flow and two identified as no flow. When looking at full flow ultrasound images, most of the models accurately tracked the vessel patency as the key feature with the exception of ShrapML without augmentation.
- FIGs. 18A and 18B show three category ShrapML performance for tracking junctional tourniquet occlusion.
- FIG. 18A shows a confusion matrix for three categories - no occlusion, partial occlusion, full flow - ShrapML with affine transformations for data augmentation.
- FIG. 18B illustrates GradCAM overlays for representative ultrasound images for each of these three categories.
- [0123] However, this was at the expense of the partial flow category, as over 75% of the predictions were incorrect. This is further highlighted through GradCAM overlays.
- the full flow and no flow identified images were still tracking the vessel placement and phantom compression, respectively, as shown in FIGs. 18A and 18B.
- the partial flow designation was not identifying any obvious trends in the ultrasound image, including frequently tracking features outside of the tissue phantom. As a result, the two category methodology was identified as more suitable for this application.
- FIGs. 19A-19D illustrate the confusion matrix and receiver operating characteristic (ROC) curve for ShrapML trained with swine image sets for tracking junctional tourniquet application. Results are shown for three replicated trained models, shown as average values for the (FIG. 19A, FIG. 19C) confusion matrix and (FIG. 19B, FIG. 19D) individual ROC curves for a (FIG. 19A, FIG. 19B) 70% occlusion threshold or a (FIG. 19C, FIG. 19D) 90% occlusion threshold.
- Results across the three replicated trained models were consistent, each with similar area under ROC in FIGs. 19A-19D and with low standard deviations for each performance metric as shown in FIG. 20.
- In the table of FIG. 20, a summary of performance metrics for the ShrapML model trained with swine image sets for tracking junctional tourniquet application is shown. Results are shown as averages and standard deviations across three replicate trained models for a 70% or 90% occlusion threshold. Accuracy was over 90% for swine image sets, similar to the tissue phantom performance. For comparison, the model was also trained using the most aggressive 90% distal pressure reduction threshold for occlusion. Overall, performance was minimally impacted when using swine images with this new occlusion threshold (FIG. 19C, FIG. 19D).
- FIGs. 21A-H illustrate GradCAM overlays for the ShrapML model trained with swine image sets. Representative images are shown for (FIGs. 21A-21D) a 70% occlusion threshold or (FIGs. 21E-21H) a 90% occlusion threshold. Full flow and no flow ultrasound representative image designations are shown with and without GradCAM overlays, which highlight the image regions most responsible for the image classification outcome.
- To summarize a method of training to generate trained machine learning model(s) having both image classification and object detection components, refer to flow 2200 of FIG. 22.
- the database of ultrasound imaging includes ultrasound images.
- a logic diagram 2300 illustrates ultrasound (US) paired with artificial intelligence (AI) machine learning (ML) models to visualize and guide proper junctional occlusion in a junctional tourniquet.
- the machine learning algorithm(s) are applied to guide an ultrasound probe to the right location, actuate the probe to bring the needed occlusion to bear on compressible structures of the patient at the determined, correct location in proximity to a wound, and maintain the correct position, by means of movement and/or tilt of the ultrasound probe in the x-, y-, and/or z-axis directions, and the correct occlusion pressure.
- the junctional tourniquet by means of its ML capabilities is able to perform guidance and/or actuation of the probe at the necessary location and pressure and is also able to maintain desired location and pressure of the ultrasound probe by virtue of constant monitoring and adjustments provided by the ML algorithms.
- FIGs. 24-31 illustrate the process flows of a junctional tourniquet having US and ML capabilities.
- flow 2400 for treating a patient starts with the acquisition of sonographic (ultrasound) images of a wound of a patient at Block 2410. These images may preferably be provided in real time by the ultrasound probe of the junctional tourniquet but could also be stored and retrieved images.
- a predicted location of a compressible structure of the patient, normally close to or proximal to the wound is determined.
- the junctional tourniquet is guided in applying pressure at the predicted location to occlude the structure and reduce blood flow to the wound by the ML algorithms of the junctional tourniquet at block 2430.
- the ML algorithms for image classification and object detection have been trained in accordance with the previous description.
- Flow 2500 of FIG. 25 describes a method of using the junctional tourniquet with trained ML algorithms.
- the movement of the ultrasonic probe of the junctional tourniquet is guided to a position that is proximal a location of compressible structure(s) of a wound of a patient in accordance with analysis of images performed by a machine learning model of the junctional tourniquet.
- the ultrasonic probe is actuated at the position to apply pressure to the structure of the wound and compress the structure against a hard surface of the patient, at least partially occluding fluid flow in the structure in accordance with analysis of the images performed by the machine learning model.
- Flow 2600 of FIG. 26 provides more detail on this operation.
- sonographic (ultrasound) images of a wound of a patient having one or more compressible structures are acquired by ultrasound. These images may be acquired by the ultrasound probe of the junctional tourniquet in real time or they may be stored images that have been retrieved.
- the sonographic images are analyzed by a trained machine learning model to generate a prediction of a location of compressible structures and determine a lateral actuation and/or a directional actuation of the ultrasonic probe of the junctional tourniquet needed to maintain at least partial occlusion of fluid flow in the compressible structures.
- the ultrasonic probe is guided in accordance with the predicted location of one or more compressible structures.
- the ultrasonic probe is actuated at the position in accordance with lateral actuation and/or directional actuation to apply pressure to the compressible structure(s) of the wound and compress the structure against a hard surface of the patient to at least partially occlude fluid flow in the compressible structure(s).
- Flow 2700 of FIG. 27 describes that the ultrasonic probe of the junctional tourniquet can be guided by the user or autonomously in accordance with an autonomous guidance module of the junctional tourniquet.
- sonographic (ultrasound) images of a wound of a patient are collected (gathered).
- an ultrasonic probe of a junctional tourniquet is guided to a position of a vessel of the wound in accordance with object detection analysis of sonographic images performed by a machine learning model of the junctional tourniquet, where a user guides the ultrasonic probe in accordance with a user interface of the junctional tourniquet or in accordance with an autonomous guidance module of the junctional tourniquet.
- the ultrasonic probe is actuated in a z-axis to apply pressure at the position to the vessel and at least partially occlude fluid flow in the vessel in accordance with image classification analysis performed by the machine learning model.
- the machine learning model continuously monitors an occlusion status of the vessel and the location of the ultrasonic probe relative to the vessel.
- the pressure (z-axis) and the direction of pressure (x- and/or y-axis) applied by the ultrasonic probe against the vessel is adjusted to at least partially occlude fluid flow in the vessel.
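- At a conceptual level, this continuous monitor-and-adjust cycle can be sketched as a closed loop; every function below is a placeholder for the device's actual ML and actuation interfaces, not a disclosed API:

```matlab
% Closed-loop sketch: classify occlusion state, locate anatomy, and adjust
% probe position/pressure whenever occlusion is lost.
while deviceActive
    frame  = acquireUltrasoundFrame();        % live US input
    status = classifyOcclusion(frame);        % "flow" | "occlusion"
    [boxes, classes] = detectAnatomy(frame);  % artery/vein/bone locations
    if status ~= "occlusion"
        adjustProbe(boxes, classes);          % x/y reposition and/or z pressure
    end
    pause(0.1)                                % run at the inference rate
end
```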
- In flow 2800 of FIG. 28, the user can guide and actuate the ultrasound probe of the junctional tourniquet at blocks 2810, 2820.
- the user actuates the probe as guided by the user interface.
- the user can secure the ultrasonic probe at block 2830 and the ML algorithm continuously monitors for effective occlusion using object detection and image classification in a “manual” version as will be described, at block 2840. If or when the occlusion is no longer effective, the ML algorithm activates an alarm.
- the user manually adjusts the pressure (z-axis) and/or the direction of pressure (x- and/or y- axis) applied by the ultrasonic probe against the vessel to at least partially occlude fluid flow in the vessel.
- Flow 2900 of FIG. 29 refers to a “semi-automatic” version of the junctional tourniquet in which a user secures the ultrasonic probe after guiding it to the correct position.
- a user via a user interface guides an ultrasonic probe of a junctional tourniquet to a position of a vessel of the wound in accordance with object detection analysis of images performed by a machine learning model of the junctional tourniquet.
- the user secures the ultrasonic probe.
- the junctional tourniquet autonomously actuates the ultrasonic probe in a z-axis to apply pressure at the position to the vessel and at least partially occlude fluid flow in the vessel in accordance with image classification analysis performed by the machine learning model.
- the machine learning model of the junctional tourniquet continuously monitors an occlusion status of the vessel and the location of the ultrasonic probe relative to the vessel.
- the pressure (z-axis) applied by the ultrasonic probe against the vessel is adjusted autonomously by the junctional tourniquet via a motor or a pump to maintain at least partial occlusion of fluid flow in the vessel.
- In flow 3000 of FIG. 30, a user via a user interface guides an ultrasonic probe of a junctional tourniquet to a position of a vessel of the wound in accordance with object detection analysis of images performed by a machine learning model of the junctional tourniquet. The user can then secure the ultrasonic probe at block 3020.
- the ultrasonic probe is autonomously actuated via a motor or a pump in a z-axis to apply pressure at the position to the vessel and at least partially occlude fluid flow in the vessel in accordance with image classification analysis performed by the machine learning model.
- the machine learning model of the junctional tourniquet continuously monitors an occlusion status of the vessel and the location of the ultrasonic probe relative to the vessel.
- the pressure (z-axis) and/or the direction of pressure (x- and/or y-axis) applied by the ultrasonic probe against the vessel is adjusted autonomously via a motor or a pump to maintain at least partial occlusion of fluid flow in the vessel.
- Flow 3100 of FIG. 31 describes a scenario in which the junctional tourniquet is not guided and actuated by a user using a user interface. Rather, using the ML capabilities of the junctional tourniquet for object detection and image classification, the junctional tourniquet performs these tasks automatically/autonomously.
- an ultrasonic probe of a junctional tourniquet is guided to a position of a vessel of the wound by an autonomous guidance module of the junctional tourniquet in accordance with object detection analysis of images performed by a machine learning model of the junctional tourniquet.
- the ultrasonic probe is autonomously actuated via a motor or a pump in a z-axis to apply pressure at the position to the vessel and at least partially occlude fluid flow in the vessel in accordance with image classification analysis performed by the machine learning model.
- the machine learning model of the junctional tourniquet continuously monitors an occlusion status of the vessel and the location of the ultrasonic probe relative to the vessel.
- the pressure (z-axis) and/or the direction of pressure (x- and/or y-axis) applied by the ultrasonic probe against the vessel is adjusted autonomously via a motor or a pump to maintain at least partial occlusion of fluid flow in the vessel.
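- One way to read the autonomous z-axis adjustment of flows 2900-3100 is as a bounded stepper loop driven by the occlusion classifier. The sketch below is illustrative only; `MotorDriver`, `max_steps` and `is_occluded` are assumed names standing in for the motor or pump assembly described herein, and the travel-limit alert mirrors the maximum-pressure alert of the application flow.

```python
# Illustrative-only z-axis stepper loop for the autonomous version;
# MotorDriver and max_steps are assumptions standing in for the
# motor or pump assembly named in this disclosure.
class MotorDriver:
    def __init__(self):
        self.steps = 0

    def step_down(self):
        """Advance the actuator one step, increasing pressure."""
        self.steps += 1

    def step_up(self):
        """Back the actuator off one step, reducing pressure."""
        self.steps = max(0, self.steps - 1)

def maintain_occlusion(motor, is_occluded, max_steps=200):
    """Tighten until occluded; alert at the mechanical travel limit."""
    while not is_occluded():
        if motor.steps >= max_steps:
            raise RuntimeError("ALERT: actuation pressure at maximum")
        motor.step_down()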
- the junctional tourniquet is a device that has an ultrasound probe as the pressure-exerting component and machine learning algorithms for ensuring proper placement and vessel occlusion, with the benefits described herein. Two separate machine learning algorithms are used in the functionality described herein: one for guidance to the proper location and a second for measuring vessel occlusion.
- the ultrasound images are transferred in real time to a controller that utilizes the machine learning algorithms to:
- the junctional tourniquet device may guide the user (via light or audio indicators) to the correct location and pressure for proper junctional tourniquet operation. It may additionally integrate z-axis automation for reaching and maintaining proper pressure after user guidance (light and/or audio indicators) to the correct location has occurred. Further, the junctional tourniquet may integrate x-, y- and z-axis automation for reaching and maintaining proper location and pressure. Again, reference FIGs. 24-31.
- the artificial intelligence (AI) provided by the machine learning algorithms thus includes models for guiding junctional compression and for guiding to the location. There is cross-talk or communication between these models that allows them to function harmoniously.
- the first algorithm uses an object detection framework to identify where the artery, vein, and bone are in the tissue and uses that information to identify when the vessels are aligned over the bone. Using that information, three product versions are described below and other embodiments within the scope of the disclosure are contemplated:
- a Manual (“Decision Support”) Version guides the end-user (by activating a user interface such as arrow-shaped lights) on which direction to move the probe and changes in probe angle to ensure proper probe placement.
- the second algorithm is an image classification framework that looks at a single ultrasound image and determines if the image of the vessels is positive or negative for occlusion. Using this algorithm, the z-position is adjusted to ensure vessel occlusion and is continuously used to track if the vessel enables flow, which instructs further (manual) tightening in the z-axis. After occlusion is achieved, the probe is secured in place (see “Mechanical prototype” below) and the algorithm continuously monitors for effective occlusion.
- the algorithm activates an alarm, providing actionable information to either tighten or reposition the device to restore occlusion.
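- A hedged sketch of this monitoring duty of the second algorithm follows; `classify`, `grab_frame` and `alarm` are placeholder callables rather than a disclosed API, and the 10 second loss threshold is taken from the application flow described herein.

```python
# Hedged sketch of the occlusion-monitoring duty of the second
# (image classification) algorithm; classify, grab_frame and alarm
# are placeholder callables. The 10 s loss threshold comes from the
# application flow described herein.
import time

LOSS_WINDOW_S = 10.0

def monitor_occlusion(classify, grab_frame, alarm):
    """classify(frame) -> True if occluded; alarm() is any UI signal."""
    lost_since = None
    while True:
        if classify(grab_frame()):
            lost_since = None                # occlusion intact
        elif lost_since is None:
            lost_since = time.monotonic()    # start the loss timer
        elif time.monotonic() - lost_since > LOSS_WINDOW_S:
            alarm()                          # tighten or reposition
            lost_since = None
        time.sleep(0.1)
```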
- the user does not need to be trained in the use of ultrasound, and no screen is needed, only an easy-to-understand interface such as arrow-shaped lights on the side of the probe. Reference is made to flow 2800 of FIG. 28.
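- Purely as an illustration of how object detection output might drive arrow-shaped lights in the decision-support version, consider the hypothetical mapping below; the pixel coordinates and the deadband fraction are assumed tuning values, not disclosed parameters.

```python
# Hypothetical mapping from object detection output to the
# arrow-shaped guidance lights; coordinates are image pixels and the
# deadband fraction is an assumed tuning value.
def guidance_light(artery_cx, image_width, deadband_frac=0.05):
    """Return which arrow light to activate for the detected artery
    center, or None when the artery is centered under the probe."""
    center = image_width / 2
    deadband = image_width * deadband_frac
    if artery_cx < center - deadband:
        return "LEFT"
    if artery_cx > center + deadband:
        return "RIGHT"
    return None   # centered: hold position

# e.g. an artery detected at pixel 200 of a 640-pixel-wide image
assert guidance_light(200, 640) == "LEFT"
```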
- junctional tourniquets such as a CRoC junctional tourniquet
- a “smart” component that facilitates accurate application, continuously monitors the occlusion and adjusts to prevent recurrence of the bleeding including during casualty movement and transport, as these actions can result in a 50% decrease in effectiveness of conventional junctional tourniquets.
- the ultrasound probe is then attached to the mechanical framework which includes the motor components required for the Semi- or Fully- automatic version.
- the prototype could be further configured for use in resource-limited situations by allowing the ultrasound probe to be removed for use on other casualties while the mechanical prototype helps the junctional tourniquet remain applied.
- an application of the junctional tourniquet for managing the junctional tourniquet AI models, streaming the ultrasound signal, and providing instructions to the end user for proper junctional occlusion is envisioned.
- A flowchart detailing the application framework is shown in FIG. 32.
- a Clarius handheld US device will be used as the US probe device and has an open API (application programming interface) for developing the device.
- the application will prompt the user to select the subclavian, femoral, or aortic sites, and between guidance or occlusion modes.
- initial prompts will indicate to place the ultrasound probe near the clavicle, inguinal crease, or upper abdomen followed by directions to move the probe laterally until the artery is centered in the US image.
- prompts will direct the user to tilt the probe until the relevant bony surface is in view under the artery.
- Correct placement will trigger the occlusion mode of the application. In this mode, the application will prompt the user to apply pressure until the vessel is occluded, at which point the user will be guided to maintain pressure.
- the occlusion functionality will continue indefinitely so that the end user will always be alerted if occlusion has been lost, defined as no occlusion for more than 10 seconds. If that occurs, the application will prompt to restart the Guidance mode. The application will be set up to run AI predictions in real-time using the tuned guidance and occlusion models, detailed herein.
- a user of the junctional tourniquet selects a scan point on the patient. This will typically be close to or at the wound site of the patient.
- the user next selects either a guidance mode or an occlusion mode at decision block 3206. If the guidance mode with its object detection ML algorithm is selected, at block 3208 the user places the ultrasound probe at the scan point.
- a clock starts at block 3210. If the time is more than 90 seconds (90s) at decision block 3212 and there is no action, then an ALERT that the location of a structure cannot be identified is issued at block 3218.
- This alert may be displayed to the user of the junctional tourniquet visually or audibly, for example. If the time is less than 90 seconds (90s) at decision block 3212, the next inquiry, at decision block 3214, is whether the structure, in this case an artery, is centered within the displayed US image. If no, then the ultrasound probe must go left or right to be centered at block 3216. This may be accomplished by adjustment indicators displayed to the user/operator of the junctional tourniquet or it may occur autonomously via an autonomous guidance module of the junctional tourniquet. Once the structure is centered, the inquiry at decision block 3220 is whether the structure is centered over the bone or other hard surface of the patient. If no, then the probe must be tilted at block 3222.
- occlusion is ready to begin.
- a clock is started at block 3226 after which time the occlusion mode is selected at block 3228.
- a clock is started.
- the query at block 3232 is whether the structure is already occluded. If yes, then the actuation positioning is to be maintained and the clock reset at blocks 3234 and 3236.
- the action of maintaining actuation positioning at block 3234 may be conveyed to a user/operator of the junctional tourniquet such as via an audible or visual user interface. If the structure (again, an artery in this example) is not occluded, at decision block 3238 the query is whether the state of nonocclusion has been for less than 10 seconds (10s). If yes, the actuator pressure is increased at block 3240.
- the pressure could be increased by the user/operator or it could be increased in stages autonomously by the junctional tourniquet, such as by an autonomous motor assembly of the junctional tourniquet. If maximum pressure is indicated at decision block 3242, then an ALERT that the actuation pressure is at maximum is issued at block 3248. This alert may be displayed to the user of the junctional tourniquet visually or audibly, or by movement (vibration), for example. If the indication at decision block 3238 is that the time in a nonoccluded state is not less than 10s, then an ALERT to reorient the ultrasound probe is issued to the user/operator of the junctional tourniquet at block 3244. This alert may be displayed to the user of the junctional tourniquet visually or audibly, for example. In order to reorient the ultrasound probe, the guidance mode is selected at block 3246.
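- The FIG. 32 application flow can be read as a small state machine, sketched below under stated assumptions: the 90 second guidance timeout and 10 second non-occlusion threshold come from this description, while `centered`, `over_bone`, `occluded` and `ui` are placeholder callables, not a disclosed interface.

```python
# Non-authoritative sketch of the FIG. 32 application flow as a
# small state machine: guidance mode with a 90 s timeout, occlusion
# mode with a 10 s non-occlusion threshold. All callables are
# placeholders for the disclosed prompts and model predictions.
import time

def run_application(centered, over_bone, occluded, ui,
                    guidance_timeout_s=90, occlusion_loss_s=10):
    state = "GUIDANCE"
    t0 = time.monotonic()
    while True:
        if state == "GUIDANCE":
            if time.monotonic() - t0 > guidance_timeout_s:
                ui("ALERT: cannot identify structure location")
                return
            if not centered():
                ui("move probe left/right to center the artery")
            elif not over_bone():
                ui("tilt probe until bone is in view under artery")
            else:
                state, t0 = "OCCLUSION", time.monotonic()
        elif state == "OCCLUSION":
            if occluded():
                ui("maintain pressure")
                t0 = time.monotonic()        # reset the loss clock
            elif time.monotonic() - t0 < occlusion_loss_s:
                ui("increase actuator pressure")
            else:
                ui("ALERT: reorient probe")  # re-enter guidance mode
                state, t0 = "GUIDANCE", time.monotonic()
        time.sleep(0.1)
```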
- An example of a user interface 3300, in this case a graphical user interface (GUI), is displayed in FIG. 33.
- the GUI allows the user/operator to choose between guidance and occlusion modes for femoral, subclavian and aortic scan sites.
- the guidance mode has been selected for a femoral tourniquet junction.
- the 2D ultrasound view is shown in which bounding boxes have been placed around two structures of interest in the patient.
- FIGs. 38 and 39 further illustrate user guidance provided by user interfaces that are different from the GUI of FIG. 33.
- a screen for feedback and input to a user/operator with directions on how to move the ultrasound probe is provided.
- the screen may be coupled directly to the ultrasound probe as shown or may be more remotely coupled but still available to the user/operator.
- FIG. 39 is a top-down view of the ultrasonic probe with corner lights in a rectangular pattern that light up indicating whether movement is to be in an x- or a y-axis. This allows the user to make x-axis, y-axis or angle (tilt) adjustments of the ultrasound probe.
- This is an example of a user interface that communicates guidance to the user/operator visually without a screen.
- the on-board nature of these user interfaces provides simplicity in the junctional tourniquet design.
- An ultrasound (US) transducer of an ultrasound probe may be connected to a controller, such as a computer, microcontroller, single board computer or the like, by a wireless connectivity block 3420 or a wired connection 3430.
- Controller 3440 has the guidance AI (ML) model 3445 and an image classification AI (ML) model 3450, force sensors 3455 and software 3460, such as the application methodology set forth in FIG. 32, and is configured to control manual control, linear actuator and multi-directional automation modules 3470, 3480, 3490, respectively.
- Controller 3440 is also coupled to and configured to control user interfaces used by a user/operator of the junctional tourniquet.
- the user interface is illustrated as the GUI of FIG. 33 but other types of user interfaces may be used to communicate with the user/operator visually, audibly or by touch (vibration, for example).
- US transducer 3410 of an ultrasonic probe is configured to collect images, sonographic images, of a wound of a patient.
- Controller 3440 is configured to receive the images captured by the US transducer 3410 and, in accordance with analysis of the plurality of images by ML models 3445 and 3450, controller 3440 is configured to: guide movement of the ultrasonic probe to a position that is proximal a location of a compressible structure in accordance with analysis of the images performed by the guidance ML model 3445; and actuate the ultrasonic probe at the position to apply pressure to the structure of the wound and compress the structure against a hard surface of the patient to at least partially occlude fluid flow in the structure in accordance with analysis of the images performed by the image classification ML model 3450.
- Guidance ML model 3445 of controller 3440 is configured to guide the ultrasonic probe of the junctional tourniquet to the position proximal the location of the structure in accordance with object detection performed by the guidance ML model, including determining the location of the structure. The controller then guides movement of the ultrasonic probe to the position proximal the location of the structure.
- the guidance provided by guidance ML model 3445 may be provided by a user interface 3495 and/or by autonomous guidance provided by multi-directional automation module 3490, as controlled by controller 3440.
- the automation module 3490 may be an autonomous motor assembly of the junctional tourniquet. Guided movement may be movement in the x- and y-axes as well as at any angle (tilt).
- the analysis by image classification ML model 3450 allows the controller 3440 to actuate the ultrasonic probe at the determined position to apply pressure to the structure of the wound and compress the structure against a hard surface of the patient to at least partially occlude fluid flow in the structure in accordance with analysis of images performed by Image classification ML model 3450.
- actuation of the linear actuator at the position is performed in real time in response to analysis of the images by the image classification ML model 3450.
- the image classification performed by ML model 3450 differentiates between occluded and non-occluded status of the compressible structure.
- the controller can laterally actuate the ultrasound probe by controlling the linear actuator 3480.
- Linear actuator 3480 may be an autonomous motor assembly, such as a motor or a pump, that actuates the ultrasonic probe in a z-axis of the junctional tourniquet, as illustrated in FIGs. 36 and 37, respectively.
- actuation of the ultrasound probe in the z-axis direction is possible by moving a motor-driven actuator in the z-axis.
- the motor-driven actuator is coupled to the ultrasound probe.
- z- axis actuation may be accomplished by a pump and bellows configuration coupled to the ultrasound probe. The bellows expands and contracts based on an air pump inflation controlled by the image classification ML model 3450 of controller 3440.
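- A minimal sketch of the pump-and-bellows option is given below, assuming a pump API with `inflate` and `hold` operations (an assumption, not a disclosed interface); the occlusion classifier's output gates inflation.

```python
# Sketch of the pump-and-bellows z-axis option: the occlusion
# classifier's output gates inflation. The pump API (inflate/hold)
# is an assumption, not a disclosed interface.
def bellows_step(pump, is_occluded, burst_ms=100):
    """Run one control step: inflate the bellows in a short burst
    while the vessel is not occluded, otherwise hold pressure."""
    if not is_occluded():
        pump.inflate(burst_ms)   # expand bellows, raise z-axis pressure
    else:
        pump.hold()              # maintain the current pressure
```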
- junctional tourniquets operating in accordance with the functional block diagram of FIG. 34 were very effective.
- a commercial phantom was perfused through a flow-loop with a peristaltic pump, pressure sensor, bleed site and flow sensor, such as shown in FIG. 5.
- Testing was performed in four stages: 1) active bleed and tourniquet placement, 2) tourniquet placed at junctional occlusion site reducing flow rate at bleed site, 3) occlusion maintained for at least 5 minutes, and 4) tourniquet release. These results 4600 are shown in FIG. 46.
- FIGs. 35, 40-45 illustrate a framework in a crane-like arrangement in which the ultrasound probe is free to move laterally in a z-axis motion and in which the ultrasound probe may be secured after it is manually positioned.
- the ultrasound probe is free to move laterally in a z-axis motion.
- the ultrasound probe is removable after manual positioning, but this may not permit occlusion to be maintained.
- FIG. 41 shows a universal strap-like design. In the example of the strap configuration shown, there are three points of contact for the ultrasound probe of the junctional tourniquet with the human body: the inguinal crease, the aorta, and the femur.
- the junctional tourniquet may have different configurations of a mechanical attachment module coupled to the controller and configured to removably attach the junctional tourniquet to the patient.
- the mechanical attachment module may have a linear actuator which may be an autonomous motor assembly configured to laterally actuate the ultrasonic probe. Further the mechanical attachment module may be configured to releasably secure the ultrasound probe after the ultrasound probe at least partially occludes fluid flow in the structure.
- the mechanical attachment module includes a base coupled to a frame and/or straps.
- a number of prototypes were designed and tested with various design criteria, listed in priority from high to low, including: total time taken to occlude; stability of the junctional tourniquet at the anatomical site; effectiveness of occlusion by the junctional tourniquet; ultrasound compatibility of the junctional tourniquet; versatility and ease of use of the junctional tourniquet; durability and portability of the junctional tourniquet; reusability of the junctional tourniquet; power - battery life of the junctional tourniquet; and total cost of the junctional tourniquet. Two prototypes performed well according to these criteria and are shown in FIGs. 42-45.
- In FIGs. 42-43, a base and tightening straps (BaTS) embodiment of a junctional tourniquet is shown.
- the BaTS junctional tourniquet of FIG. 43 includes a wireless ultrasound probe 4310 (such as Clarius) with probe case 4320, strap(s) 4340 with rotating collar(s) 4330 coupled to the strap(s) 4340, an encased linear actuator 4350 for movement in the z-axis, and an electronics box (controller) 4360.
- One or more strap(s) 4340 are configured to releasably secure the ultrasound probe 4310 after the ultrasound probe 4310 at least partially occludes fluid flow in the structure.
- the junctional tourniquet concept is controlled by the underlying ML guidance and occlusion algorithms that are held on a single board computer, microcontroller device, or other controller to allow for miniaturized deployment of the AI/ML models.
- the single board computer is housed in the electronics box 4360, along with interface cables to the actuator and wireless (such as Bluetooth) connectivity to the ultrasound device; while wireless has obvious advantages in this setting, wired connectivity could also be used.
- the AI/ML guidance and occlusion models within the electronics box 4360 (or single board computer, for example) will process ultrasound images and inform actuation decisions to maintain occlusion.
- a user interface may also connect to a display that provides instructions to the end-user/operator as displayed within a graphical user interface or other user interface.
- FIGs. 44-45 show a frame reinforced tourniquet (FReT) junctional tourniquet.
- the FReT junctional tourniquet 4500 has a wireless ultrasound probe 4510 (such as Clarius), a rigid frame 4520, a linear occlusion actuator 4530, an electronics box (controller) 4540, and a base plate 4550.
- the ultrasound probe 4510 assists with guidance in accordance with the AI (ML) guidance model of the junctional tourniquet.
- the crane-like rigid frame 4520 supports the linear occlusion actuator 4530 above the patient (at least in this orientation shown).
- Base 4550 is configured to be placed under the patient and the frame 4520.
- Linear occlusion actuator 4530 allows for automated compression control as needed to reach occlusion, as controlled and determined by the AI (ML) occlusion model of the junctional tourniquet.
- the junctional tourniquet concept is controlled by the underlying ML guidance and occlusion algorithms that are held on a single board computer, microcontroller device, or other controller to allow for miniaturized deployment of the AI/ML models.
- the single board computer is housed in the electronics box 4540, along with interface cables to the actuator and wireless (such as Bluetooth) connectivity to the ultrasound device; while wireless has obvious advantages in this setting, wired connectivity could also be used.
- the AI/ML guidance and occlusion models within the electronics box 4540 (or single board computer, for example) will process ultrasound images and inform actuation decisions to maintain occlusion.
- a user interface may also connect to a display that provides instructions to the end-user/operator as displayed within a graphical user interface or other user interface.
- A performance comparison of the BaTS and FReT improved junctional tourniquets with commercially available SAM and CRoC junctional tourniquets is shown in FIG. 47. A summary of time metrics obtained for each run with the different tourniquets is illustrated. While all tested junctional tourniquets were able to reach occlusion with a hemorrhage reduction of greater than 97%, the FReT and BaTS improved junctional tourniquets were overall quickest to use and had lower variability compared to the SAM and CRoC commercial options.
- algorithm(s) that analyze sonographic images in real time, guiding the user to press an artery or other compressible structure in the right location, with the right occlusive form, and monitoring the effectiveness of this pressure.
- ML algorithms may be integrated into an ultrasound probe, making this probe an effective pressure head as part of a junctional tourniquet, without the need for medical and/or ultrasound expertise on the part of the user/operator of the junctional tourniquet. Constant monitoring of occlusal effectiveness will allow for rapid or automated response to displacement, an especially important advantage when transporting the patient.
- This automated occlusion junctional tourniquet device utilizes ultrasound (US) to apply pressure to stop hemorrhage, allowing for AI-driven guidance to the proper pressure point based on US feedback.
- AI-driven occlusion algorithms will provide feedback to the medical provider when enough pressure has been applied to ensure hemorrhage control.
- This device, system and methodology can improve speed, efficiency, and accuracy in administering junctional hemorrhage control and improve safety by preventing excessive application force and likely tissue or bone damage.
- the AI-guided approach will be critical to standardize results across different providers and potentially aid with junctional hemorrhage control training.
- AI/ML models have the potential to monitor vessel occlusion (above 90% overall accuracy) and track key anatomical features using object detection AI models. Both AI models, for occlusion and guidance, predict on animal tissue and/or human volunteers. Advancement of this technology will simplify tourniquet use and help reduce the high mortality associated with junctional hemorrhage.
- treatment of junctional hemorrhage includes packing with hemostatic bandages or sponges (which is ineffective in case of a significant arterial hemorrhage), junctional tourniquets (which are difficult to use and tend to lose efficacy once the casualty is moved) and REBOA (which is a highly invasive procedure, requires skill and is only relevant to the lower body).
- the embodiments presented herein can allow for an abdominal aortic junctional tourniquet as a non-invasive alternative to the invasive resuscitative endovascular balloon occlusion of the aorta (REBOA) procedure.
- Another significant advantage of the use of ML over the “standard” or non- ML use of ultrasound for this purpose is its ability for continuous monitoring.
- the ultrasound junctional tourniquet system maintains “visualization” of the obstructed vessel and can raise an alarm in case this obstruction is no longer effective.
- the first sign of failure might be a pool of blood forming under the casualty or clinical deterioration in the casualty’s mental status or vital signs, all signifying loss of a substantial amount of precious blood.
- ML is demonstrated as useful for ultrasound guidance and monitoring of pressure against major vessels, to be used as part of a “smart” junctional tourniquet.
- the tourniquet has a controller; an ultrasonic probe portion of the ultrasonic junctional tourniquet configured to acquire images of a wound of a patient using ultrasound Doppler; and a tourniquet portion of the ultrasonic junctional tourniquet, where the controller determines from the images a location of a compressible structure of a patient that is proximal to the wound and controls the tourniquet portion to apply pressure at the location of the structure to occlude the structure and reduce blood flow to the wound.
- the images are sonographic images.
- the structure is a blood vessel, artery, vein, nerve, bone, or other physiological pressure point of the patient.
- the tourniquet portion includes the ultrasonic probe portion and the controller controls the ultrasonic probe portion to apply pressure at the location of the structure to occlude the structure and reduce blood flow to the wound.
- the controller is configured to predict the location of the structure from the ultrasonic images acquired by the ultrasonic probe portion of the ultrasonic junctional tourniquet.
- the controller is configured to predict the location of the structure from ultrasonic images acquired by the ultrasonic probe portion of the ultrasonic junctional tourniquet.
- the controller uses machine learning to predict the location of the structure from the ultrasonic images acquired by the ultrasonic probe portion of the ultrasonic junctional tourniquet.
- the tourniquet portion of the ultrasonic junctional tourniquet includes the ultrasonic probe portion and the controller controls the ultrasonic probe portion to apply pressure to the structure at the predicted location to occlude the structure and reduce blood flow to the wound.
- the controller is configured to determine the location of the structure using machine learning on the images.
- the controller is integrated with the ultrasonic probe portion and uses machine learning to analyze the plurality of images to determine the location of the structure.
- the controller is configured to monitor the plurality of images as acquired and updates the determined location of the structure of the patient using machine learning.
- the controller is configured to guide the tourniquet portion of the ultrasonic junctional tourniquet to the location of the structure and controls the tourniquet portion to compress the structure at the location against a hard surface of the patient that is proximal the location of the structure.
- the controller uses machine learning in processing the plurality of images to guide the tourniquet portion to the location of the structure and to control the tourniquet portion to compress the structure at the location.
- the controller is configured to: determine the location of the structure as an x,y location in a cartesian coordinate system; determine an angle of the structure within the cartesian coordinate system; and control the tourniquet portion to apply pressure to the structure at the determined x,y location and the determined angle.
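- As a hedged illustration of determining an x,y location and angle in a cartesian coordinate system, the sketch below derives both from the endpoints of a detected vessel axis; the endpoint inputs and function name are assumptions for illustration, not a disclosed computation.

```python
# Hedged sketch of deriving an (x, y) location and angle for the
# tourniquet portion from a detected vessel segment; the axis
# endpoints in image coordinates are assumed inputs.
import math

def target_pose(p1, p2):
    """p1, p2: (x, y) endpoints of the vessel axis in the image.
    Returns the midpoint and the axis angle in degrees."""
    x = (p1[0] + p2[0]) / 2
    y = (p1[1] + p2[1]) / 2
    angle = math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))
    return (x, y), angle

# Example: a vessel running diagonally across the image.
xy, theta = target_pose((100, 200), (300, 240))  # (200, 220), ~11.3 deg
```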
- the ultrasound probe portion is a removable portion of the ultrasonic junctional tourniquet and configured to be removed from the ultrasonic junctional tourniquet after the tourniquet portion is controlled to apply pressure to the structure at the determined x,y location and the determined angle.
- the ultrasound probe portion is a removable portion of the ultrasonic junctional tourniquet configured to be removed from the ultrasonic junctional tourniquet after the tourniquet portion is controlled to apply pressure to the structure at the determined location.
- the method includes acquiring sonographic images of a wound of a patient; determining from the plurality of images a location of a compressible structure of a patient that is proximal to the wound; and guiding an ultrasonic junctional tourniquet in applying pressure at the location of the structure to occlude the structure and reduce blood flow to the wound.
- an ultrasonic probe of the ultrasonic junctional tourniquet acquiring the plurality of images.
- the images are sonographic images.
- the structure is a blood vessel, artery, vein, nerve, bone, or other physiological pressure point of the patient.
- the ultrasonic probe applying pressure at the predicted location of the structure to occlude the structure and reduce blood flow to the wound.
- an ultrasonic probe of the ultrasonic junctional tourniquet applying pressure at the location of the structure to occlude the structure and reduce blood flow to the wound.
- applying pressure at the location of the structure further includes guiding a tourniquet of the ultrasonic junctional tourniquet to the location of the structure and the tourniquet compressing at the location to press the structure against a hard surface of the patient that is proximal the location of the structure.
- the hard surface is a bone of the patient proximal the location.
- guiding the tourniquet further includes determining the location of the structure as an x,y location in a cartesian coordinate system; determining an angle of the structure within the cartesian coordinate system; and applying the tourniquet to the structure at the determined x,y location and the determined angle.
- the tourniquet including: an ultrasonic probe configured to collect a plurality of images of a wound of a patient, the plurality of images being sonographic images; and a controller of the junctional tourniquet, coupled to the ultrasonic probe, configured to receive the plurality of images.
- the controller is configured to: guide movement of the ultrasonic probe to a position that is proximal a location of a structure of a plurality of compressible structures of the wound in accordance with analysis of the plurality of images performed by the machine learning model; and actuate the ultrasonic probe at the position to apply pressure to the structure of the wound and compress the structure against a hard surface of the patient to at least partially occlude fluid flow in the structure in accordance with analysis of the plurality of images performed by the machine learning model.
- the controller is configured to guide the ultrasonic probe to the position proximal the location of the structure in accordance with object detection performed by the machine learning model, the object detection performed by the machine learning model includes determining the location of the structure and the controller configured to guide movement of the ultrasonic probe to the position proximal the location of the structure.
- the ultrasonic probe is guided to the position that is proximal to the location of the structure in accordance with one or more of a user interface and an autonomous guidance module of the junctional tourniquet, the user interface and the autonomous guidance module controlled by the controller.
- the autonomous guidance module is an autonomous motor assembly of the junctional tourniquet controlled by the controller.
- the controller is configured to guide movement including an angle of the ultrasonic probe with respect to the structure.
- the controller is configured to actuate the ultrasonic probe in accordance with image classification that differentiates between an occluded status and a non-occluded status of the structure.
- a linear actuator coupled to the controller configured to linearly actuate the ultrasonic probe as controlled by the controller.
- the linear actuator is an autonomous motor assembly that actuates the ultrasonic probe in a z-axis of the junctional tourniquet.
- the linear actuator is a motor or a pump assembly.
- the controller of the junctional tourniquet configured to guide movement of the ultrasonic probe to the position and actuate the ultrasonic probe at the position in real time responsive to analysis of the plurality of images by the machine learning model.
- the junctional tourniquet having a user interface controlled by the controller, where the ultrasonic probe is guided to the position that is proximal the location of the structure of the wound by a user in accordance with a user interface of the junctional tourniquet.
- the user interface includes one or more guidance indicators generated by the controller in accordance with analysis of the plurality of images performed by the machine learning model.
- the user interface includes a screen that displays the one or more guidance indicators.
- the screen is coupled to the ultrasound probe.
- the one or more guidance indicators include one or more of audio and visual indicators on the ultrasound probe or on a housing of the ultrasound probe or junctional tourniquet.
- audio indicators include voice prompts, beeps or alarms and where the visual indicators include guidance lights or arrows generated by the controller in accordance with the machine learning model of the junctional tourniquet.
- the one or more guidance indicators of the user interface are displayed on the ultrasound probe.
- the one or more guidance indicators of the user interface prompt the user to: move the ultrasonic probe to center the structure in a sonographic image; and adjust an angle of the ultrasonic probe to center the hard surface under the structure in a sonographic image.
- a screen of the user interface displays the one or more guidance indicators.
- the controller is configured to laterally actuate the ultrasonic probe at the position to apply pressure to the structure.
- the controller is configured to control a linear actuator to laterally actuate the ultrasonic probe along a Z-axis of the ultrasonic probe.
- the linear actuator is an autonomous motor assembly that actuates the ultrasonic probe in a z-axis of the junctional tourniquet.
- the linear actuator is a motor or a pump assembly.
- the controller further configured to monitor the position of the ultrasonic probe proximal the location of the structure and an occlusion status of the structure.
- the machine learning model continuously monitors the position of the ultrasonic probe and the occlusion status of the structure.
- one or more of the pressure applied to the structure by the ultrasonic probe and the direction of the pressure applied to the structure is adjusted to maintain at least partial occlusion of fluid flow in the structure.
- the controller is configured to guide movement including an angle of the ultrasonic probe with respect to the structure to maintain at least partial occlusion of fluid flow in the structure.
- the controller configured to generate an alarm when the ultrasonic probe does not maintain at least partial occlusion of fluid flow in the structure.
- the alarm is conveyed to a user of the junctional tourniquet via a user interface of the junctional tourniquet.
- lateral actuation of the ultrasonic probe along a Z-axis of the ultrasonic probe controls the pressure applied to the structure and where directional actuation of the ultrasonic probe along one or more of an x-axis and a y- axis of the ultrasonic probe controls a direction of the pressure applied to the structure to maintain at least partial occlusion of fluid flow in the structure.
- the controller is configured to control an autonomous motor assembly to actuate one or more of lateral actuation and directional actuation of the ultrasonic probe.
- the junctional tourniquet having a user interface controlled by a user that includes one or more guidance indicators configured to guide one or more of lateral actuation of the ultrasonic probe along a Z-axis of the ultrasonic probe to control the pressure applied to the structure and directional actuation of the ultrasonic probe along one or more of an x-axis and a y-axis of the ultrasonic probe to control a direction of the pressure applied to the structure.
- the user interface includes a screen that displays the one or more guidance indicators.
- the one or more guidance indicators include one or more of audio and visual indicators on the ultrasound probe or on a housing of the ultrasound probe or junctional tourniquet.
- audio indicators include voice prompts, beeps or alarms and where the visual indicators include guidance lights or arrows generated by the controller in accordance with the machine learning model of the junctional tourniquet.
- directional actuation includes an angle of the ultrasonic probe.
- the controller configured to monitor the occlusion status of the structure responsive to force readings provided by one or more force sensors.
- the ultrasound probe includes the one or more force sensors.
- one or more of the pressure and a direction of the ultrasound probe in applying pressure to the structure is adjusted by a user in accordance with one or more of a user interface of the junctional tourniquet configured to convey adjustment instructions to the user of the junctional tourniquet and an autonomous module of the junctional tourniquet, the user interface and the autonomous module controlled by the controller.
- the controller is configured to laterally actuate the ultrasonic probe along a Z-axis of the ultrasonic probe to adjust the pressure applied by the ultrasound probe to the structure.
- the controller controls a motor or pump assembly to laterally actuate the ultrasonic probe along a Z-axis of the ultrasonic probe to adjust the pressure applied by the ultrasound probe to the structure.
- one or more force sensors that sense force, where force readings received from the force sensors are processed by the controller to determine an occlusion status of the structure.
- a mechanical attachment module coupled to the controller and configured to removably attach the junctional tourniquet to the patient, the mechanical attachment module having a linear actuator.
- the linear actuator is an autonomous motor assembly configured to laterally actuate the ultrasonic probe.
- the mechanical attachment module is configured to releasably secure the ultrasound probe after the ultrasound probe at least partially occludes fluid flow in the structure.
- the mechanical attachment module includes a base coupled to one or more of a frame and one or more straps, the base configured to be placed under the wound of the patient and the frame and the one or more straps configured to releasably secure the ultrasound probe after the ultrasound probe at least partially occludes fluid flow in the structure.
- the frame is a rigid frame.
- the mechanical attachment module including one or more rotating collars coupled to the one or more straps.
- the ultrasound probe coupled to the controller via a wired or a wireless connection.
- a controller-implemented method for using a junctional tourniquet including: acquiring sonographic images of a wound of a patient having one or more compressible structures, the sonographic images acquired by ultrasound; a trained machine learning model analyzing the plurality of sonographic images to generate a prediction of a location of one or more structures of the one or more compressible structures and one or more of a lateral actuation and a directional actuation of an ultrasonic probe of a junctional tourniquet needed to maintain at least partial occlusion of fluid flow in the one or more structures; guiding movement of the ultrasonic probe in accordance with the predicted location of one or more structures of the one or more compressible structures; and actuating the ultrasonic probe at the position in accordance with one or more of the lateral actuation and the directional actuation to apply pressure to the one or more structures of the wound and compress the one or more structures against a hard surface of the patient, at least partially occluding fluid flow in the structure
- lateral actuation of the ultrasonic probe along a Z-axis of the ultrasonic probe controls the pressure applied to the structure and where directional actuation of the ultrasonic probe along one or more of an x-axis and a y-axis of the ultrasonic probe controls a direction of the pressure applied to the structure to maintain at least partial occlusion of fluid flow in the structure.
- communicating including one or more of creating an audio message of the prediction of the location, the lateral actuation and the directional actuation and displaying the location, the lateral actuation and the directional actuation in a user interface.
- a controller-implemented method for using a junctional tourniquet including guiding movement of an ultrasonic probe of the junctional tourniquet to a position that is proximal a location of a structure of a plurality of compressible structures of a wound of a patient in accordance with analysis of a plurality of images performed by a machine learning model of the junctional tourniquet; and actuating the ultrasonic probe at the position to apply pressure to the structure of the wound and compress the structure against a hard surface of the patient, at least partially occluding fluid flow in the structure in accordance with analysis of the plurality of images performed by the machine learning model.
- actuating further including actuating the ultrasonic probe in accordance with image classification that differentiates between an occluded status and a non-occluded status of the structure.
- a linear actuator is an autonomous motor assembly that performs said laterally actuating the ultrasonic probe.
- the linear actuator is a motor or a pump assembly.
- said guiding further includes: the machine learning model performing object detection of the plurality of images to identify the location of the structure; and guiding movement of the ultrasonic probe to the position proximal the location of the structure.
- guiding further includes one or more of: a user guiding movement of the ultrasonic probe to the position proximal the location of the structure using a user interface of the junctional tourniquet; and an autonomous guidance module of the junctional tourniquet performing said guiding movement of the ultrasonic probe to the position proximal the structure.
- the machine learning model performing object detection of the plurality of images to guide a user including displaying the one or more guidance indicators in the user interface prompting the user to: move the ultrasonic probe to center the structure in a sonographic image; and adjust an angle of the ultrasonic probe to center the hard surface under the structure in a sonographic image.
- a screen of the user interface displaying the one or more guidance indicators.
- the one or more guidance indicators include one or more of audio and visual indicators on the ultrasound probe or on a housing of the ultrasound probe or junctional tourniquet.
- the audio indicators include voice prompts, beeps or alarms and the visual indicators include guidance lights or arrows.
- monitoring includes the machine learning model continuously monitoring the position of the ultrasonic probe proximal the location of the structure and the occlusion status of the structure; and adjusting one or more of the pressure applied to the structure by the ultrasonic probe and a direction of the pressure applied by the ultrasound probe to the structure to maintain at least partial occlusion of fluid flow in the structure.
- adjusting includes one or more of adjusting lateral actuation of the ultrasonic probe along a Z-axis of the ultrasonic probe to control the pressure applied to the structure and adjusting directional actuation of the ultrasonic probe along one or more of an x-axis and a y-axis of the ultrasonic probe to control a direction of the pressure applied to the structure.
- adjusting directional actuation of the ultrasonic probe includes adjusting an angle of the ultrasonic probe to maintain at least partial occlusion of fluid flow in the structure.
- the method further generating one or more guidance indicators in accordance with the machine learning model of the junctional tourniquet and displaying the one or more guidance indicators in a user interface presented to a user, the user performing one or more of adjusting lateral actuation of the ultrasonic probe along a Z-axis of the ultrasonic probe to control the pressure applied to the structure and adjusting directional actuation of the ultrasonic probe along one or more of an x-axis and a y- axis of the ultrasonic probe to control a direction of the pressure applied to the structure in accordance with the one or more guidance indicators displayed in the user interface.
- the one or more guidance indicators include one or more of audio and visual indicators on the ultrasound probe or on a housing of the ultrasound probe or junctional tourniquet.
- the audio indicators include voice prompts, beeps or alarms and the visual indicators include guidance lights or arrows generated by the controller in accordance with the machine learning model of the junctional tourniquet.
- adjusting further includes adjusting one or more of the pressure and a direction of the ultrasound probe in applying pressure to the structure by a user in accordance with a user interface of the junctional tourniquet or autonomously by an autonomous module of the junctional tourniquet.
- monitoring the occlusion status of the structure includes processing force readings and determining the occlusion status of the structure from the processed force readings.
- actuating further includes adjusting actuation of the ultrasonic probe in compressing the structure against a hard surface of the patient to at least partially occlude fluid flow in the structure.
- adjusting actuation of the ultrasonic probe in compressing the structure against the hard surface of the patient to at least partially occlude fluid flow in the structure includes adjusting one or more of lateral actuation of the ultrasonic probe along a Z-axis of the ultrasonic probe to control pressure applied to the structure and directional actuation of the ultrasonic probe along one or more of an x-axis and a y-axis of the ultrasonic probe to control a direction of actuation of the ultrasound probe to the structure.
- a tissue phantom system including an anatomical tissue phantom, having an arterial side and a venous side, formed of ultrasound compliant material and having one or more compressible structures within the ultrasound compliant material that accommodate fluid flow through the anatomical tissue phantom; a fluid reservoir housing ultrasonic compliant fluid; a pump configured to receive ultrasonic compliant fluid from the fluid reservoir and pump ultrasonic compliant fluid to the tissue phantom; a pressure sensor configured to receive ultrasonic compliant fluid from the pump, measure pressure of the received ultrasonic compliant fluid, and provide the ultrasonic compliant fluid to an arterial side of the tissue phantom, where flow of the ultrasonic compliant fluid through the pressure sensor, the pump and the fluid reservoir forms a fluid bypass loop of the system; a flow sensor coupled to the tissue phantom and the fluid reservoir, the flow sensor configured to measure the flow of the ultrasonic compliant fluid and provide the ultrasonic compliant fluid to the fluid reservoir; and a hard surface
- the ultrasonic compliant fluid flows in a flow loop of the system, where ultrasonic compliant fluid pumped by the pump from the fluid reservoir is provided to the pressure sensor, the pump provides the pumped ultrasonic compliant fluid to the arterial side of the tissue phantom, the ultrasonic compliant fluid flows through the arterial side of the tissue phantom, is measured by the flow sensor at an output of the tissue phantom, and flows back to the fluid reservoir, and where responsive to pressure on the ultrasound compliant material of the anatomical tissue phantom at a position that is proximal a location of a compressible structure of the one or more compressible structures, the compressible structure is compressed against the hard surface of the anatomical tissue phantom to at least partially occlude fluid flow of the ultrasonic compliant fluid in the compressible structure.
- the tissue phantom is configured to provide ultrasonic compliant fluid to the flow sensor, the flow sensor configured to measure the flow of the ultrasonic compliant fluid and to provide the ultrasonic compliant fluid to the fluid reservoir.
- the structure is representative of a blood vessel, artery, vein, nerve, bone, or other physiological pressure point.
- the ultrasound compliant material of the tissue phantom is one or more of a synthetic gelatin, a ballistic gelatin, a ballistic hydrogel, and a clear ballistic gelatin.
- the anatomical tissue phantom is one or more of a femoral, a subclavian and an aortic tissue phantom and the one or more compressible structures are representative of one or more of vessels, arteries, veins, nerves, and bones.
- the one or more compressible structures are compressible tubing.
- the method includes: analyzing a database of ultrasound imaging and flow data points representative of one or more compressible structures of an anatomical structure subjected to a plurality of levels of flow of ultrasonic compliant fluid therethrough, including occlusion of the one or more compressible structures, the database of ultrasound imaging including a plurality of ultrasound images; sorting each ultrasound image of the plurality of ultrasound images of the database into a plurality of classification categories based on a measured distal pressure of an ultrasound image, the measured distal pressure a measure of flow of the ultrasonic compliant fluid through the compressible structure of the ultrasound image; processing the sorted plurality of classification categories of the plurality of ultrasound images into processed classification categories; and training a machine learning model on a training dataset of the processed classification categories to generate a trained machine learning model, including providing the machine learning model with an image input layer of the training dataset and generating an output layer with the two or more classification categories.
- the full flow classification category and the full occlusion classification category are separated by a percent reduction of measured distal pressure.
- the full flow classification category and the full occlusion classification category are separated by a range of 50 to 90% reduction of measured distal pressure.
- each of the plurality of ultrasound images are further sorted into classification categories of full flow, partial occlusion or full occlusion of ultrasonic compliant fluid flow through a compressible structure of the ultrasound image.
- a full flow classification category is characterized as unobstructed flow to a 10% reduction in measured distal pressure
- a partial occlusion classification category is a range of approximately 50 to 90% reduction in measured distal pressure
- a full occlusion classification category is approximately 90% or more reduction in measured distal pressure
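- A minimal sketch of sorting images into these categories by measured distal pressure reduction follows; the cutoffs are taken from the description above, while the treatment of the unassigned 10-50% band (lumped with partial occlusion here) is an assumption made purely for illustration.

```python
# Sketch of sorting ultrasound images by measured distal pressure
# reduction into the three categories stated above; the 10-50% band
# is not assigned in this description, so the sketch lumps it with
# partial occlusion purely for illustration.
def classify_by_pressure(reduction_pct):
    if reduction_pct < 10:
        return "full_flow"          # unobstructed to ~10% reduction
    if reduction_pct < 90:
        return "partial_occlusion"  # ~50-90% band (and below, assumed)
    return "full_occlusion"         # ~90% or more reduction

assert classify_by_pressure(5) == "full_flow"
assert classify_by_pressure(70) == "partial_occlusion"
assert classify_by_pressure(95) == "full_occlusion"
```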
- processing further including processing the plurality of ultrasound images sorted into classification categories by cropping to remove ultrasound image information, resizing the cropped plurality of ultrasound images, and converting the cropped and resized ultrasound images to grey scale images.
- processing further including processing the ultrasound images sorted into classification categories by cropping to remove ultrasound image information and then converting the cropped ultrasound images to grey scale images.
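- For illustration, the described preprocessing steps might look like the sketch below using the Pillow imaging library; the crop box and target size are illustrative assumptions, not disclosed values.

```python
# Minimal preprocessing sketch consistent with the steps described
# above (crop away on-screen annotations, resize, convert to grey
# scale); the crop box and target size are illustrative assumptions.
from PIL import Image

def preprocess(path, crop_box=(100, 50, 740, 530), size=(224, 224)):
    img = Image.open(path)
    img = img.crop(crop_box)   # remove overlaid text and measurements
    img = img.resize(size)     # uniform input size for the model
    return img.convert("L")    # single-channel grey scale
```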
- positive predictions are full occlusion images and negative predictions are full flow images of the testing dataset.
- the structure is a blood vessel, artery, vein, nerve, bone, or other physiological pressure point of the patient.
- the anatomical structure is a biological structure.
- the anatomical structure is an anatomical tissue phantom with the plurality of compressible structures and the measured distal pressure is a measure of flow of the ultrasonic compliant fluid through the compressible structure distal to the anatomical tissue phantom.
- the method further collecting the database of ultrasound imaging and flow data points using an ultrasound probe actuated against the plurality of compressible structures that accommodate flow of ultrasonic compliant fluid therethrough, where the ultrasound probe performs collecting the database from a plurality of angles, placements and pressures actuated by the ultrasound probe against the plurality of compressible structures against a hard surface of the anatomical tissue phantom.
- the anatomical tissue phantom has an arterial side and a venous side in a system having the anatomical tissue phantom, a pump, a fluid reservoir, and a flow sensor, where in a flow loop of the system the ultrasonic compliant fluid pumped by the pump from the fluid reservoir is provided to the pressure sensor, the pump provides the pumped ultrasonic compliant fluid to the arterial side of the tissue phantom, the ultrasonic compliant fluid flows through the arterial side of the tissue phantom, is measured by the flow sensor at an output of the tissue phantom, and flows back to the fluid reservoir and where responsive to pressure by the ultrasound probe on the ultrasound compliant material of the anatomical tissue phantom at a position that is proximal a location of a compressible structure of the one or more compressible structures, the compressible structure is compressed against the hard surface of the anatomical tissue phantom to at least partially occlude fluid flow of the ultrasonic compliant fluid in the compressible structure.
- training the machine learning model to generate the trained machine learning model further includes for each ultrasound image of the plurality of ultrasound images in the image input layer: providing a plurality of bounding boxes for the one or more compressible structures and labeling the one or more compressible structures in each of the bounding boxes.
- the output layer and the bounding box prediction output layer are both convolutional layers of the machine learning model.
- the output layer and the bounding box prediction output layer both include a convolution layer, a rectified linear unit (ReLU) activation layer, and a max pooling layer.
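- A hedged PyTorch sketch of a network consistent with the description above follows: a shared convolutional trunk feeding a classification output layer and a bounding-box prediction output layer, each built from convolution, ReLU activation and max pooling. The layer sizes, class count and box count are illustrative assumptions, not the disclosed architecture.

```python
# Hedged sketch of a two-head network per the description above: a
# shared conv trunk, a classification output layer and a bounding
# box prediction output layer, each using conv + ReLU + max pooling.
# All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class GuidanceNet(nn.Module):
    def __init__(self, n_classes=2, n_boxes=3):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # classification head (per-image class logits)
        self.cls_head = nn.Sequential(
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.LazyLinear(n_classes),
        )
        # bounding-box head: (x, y, w, h) for each labeled structure
        self.box_head = nn.Sequential(
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.LazyLinear(n_boxes * 4),
        )

    def forward(self, x):
        feats = self.trunk(x)
        return self.cls_head(feats), self.box_head(feats)

# Example: one 224x224 grey-scale image.
logits, boxes = GuidanceNet()(torch.zeros(1, 1, 224, 224))
```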
Abstract
An ultrasound junctional tourniquet having a controller, an ultrasonic probe portion of the ultrasonic junctional tourniquet configured to acquire images of a wound of a patient using ultrasound Doppler, and a tourniquet portion of the ultrasonic junctional tourniquet. The controller determines from the images a location of a compressible structure of a patient that is proximal to the wound and controls the tourniquet portion to apply pressure at the location of the structure to occlude the structure and reduce blood flow to the wound. The controller works to guide the ultrasonic probe portion to the location using object detection machine learning (ML) and controls the tourniquet portion to apply pressure using image classification machine learning (ML).
Description
Ultrasound and Machine Learning Based Junctional Tourniquet
STATEMENT OF GOVERNMENT INTEREST
[0001] The invention described herein may be manufactured, used and licensed by or for the United States Government.
PRIORITY CLAIM
[0002] This application claims the benefit of provisional application serial number 63/593,597 filed October 27, 2023 and titled “Ultrasound and Machine Learning Based Junctional Tourniquet” and provisional application serial number 63/669,014 filed July 9, 2024 and titled “Ultrasound and Machine Learning Based Junctional Tourniquet,” the entire contents of which are hereby incorporated by reference.
BACKGROUND
[0001] The present disclosure relates to the treatment of severe wounds. More particularly, the present disclosure relates to devices and methods for the treatment of severe wounds.
[0002] Hemorrhage is the leading cause of preventable death in trauma casualties, both civilian and combat. The increased use of long-range artillery, man-portable rockets and high-explosive charges results in more serious multi-organ hemorrhagic injuries, highlighting the importance of hemorrhage control, especially with proximal limb and pelvic injuries. While the abundant use of tourniquets has greatly diminished mortality from extremity hemorrhaging, junctional hemorrhage remains a largely unsolved problem.
Junctional hemorrhage is defined as hemorrhage from the areas connecting the extremities to the torso - axillae, shoulders, groin, buttocks and proximal thighs, as well as from the neck.
[0003] While limb tourniquets have proven highly effective at controlling hemorrhage, treating junctional hemorrhages presents a unique challenge requiring precise pressure point occlusion, which carries both high skill and physical demands for healthcare providers. Since these junctional areas are not amenable to tourniquet placement, several alternative solutions are in use, including:
[0004] Packing - The creation of pressure inside the bleeding wound in order to press against the bleeding vessel, with materials ranging from gauze supplemented with procoagulant substances to expanding sponges inserted by a syringe into the wound. While packing techniques (with or without hemostatic agents) are effective against hemorrhage from a venous source, they have limited effectiveness against bleeding from a major artery, such as the femoral, subclavian/axillary or carotid.
[0005] Manual pressure points (MPP) - The pressing of a major artery, proximal to the hemorrhage source, against a bony surface to stop the blood flow to the wound and beyond. While a single study has described this practice as ineffective, leading to its elimination from most clinical practice guidelines, more recent studies show promising results. However, this technique should be used only as a brief solution, as a provider's fatigue may allow the hemorrhage to resume.
[0006] Resuscitative endovascular balloon occlusion of the aorta (REBOA) is a highly invasive procedure that requires skill and is only relevant to the lower body.
[0007] Junctional tourniquets - Utilizing the same principles as MPP, these devices are designed to maintain ongoing pressure on the vessel, pending a definitive surgical solution. While massive bleeding from more distal injuries can be effectively managed with a tourniquet, junctional tourniquets (JTs) like the SAM junctional tourniquet, the Combat-Ready Clamp (CRoC), the Abdominal Aortic Junctional Tourniquet (AAJT) and others have so far shown very limited success. The principle of their operation is occluding a major artery by exerting mechanical pressure on the artery against a bony prominence - the bony pelvis, the first rib or the transverse processes of cervical vertebrae. These JTs have been used in the military, for example, but events in which they were used are very few, and experience gained from training exercises shows they are very hard to place correctly due to the difficulty of precisely locating the artery or other structure and the underlying bone using a "blind technique." Moreover, JTs are likely to move from the correct position, which needs to be accurate and maintained in order to "trap" the artery between the device and the bone. Application on the aorta will be especially useful for pelvic hemorrhage, including obstetric hemorrhage. Accordingly, studies have shown their utilization to be cumbersome and time consuming, with a high failure rate in training and real combat scenarios, for example. Moreover, their effectiveness drops dramatically once the patient is transported.
[0008] Junctional and pelvic hemorrhaging continue to be a significant cause of early preventable death among trauma casualties, and there is a need for a device that can provide rapid and reliable hemostasis without requiring advanced expertise.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] FIG. 1 depicts a simple geometry phantom with a single channel, in accordance with various embodiments of the present disclosure.
[0010] FIG. 2 depicts an anatomical phantom with skeletal and vascular features, in accordance with various embodiments of the present disclosure.
[0011] FIGs. 3, 4 and 5 illustrate phantom tissue training systems, in accordance with various embodiments of the present disclosure.
[0012] FIGs. 6A-6C illustrate results using the simple phantom of FIG. 1, in accordance with various embodiments of the present disclosure.
[0013] FIGs. 7A-7E illustrate results using the femoral phantom of FIG. 2, in accordance with various embodiments of the present disclosure.
[0014] FIG. 8 illustrates strain properties of various ballistic gelatin compositions, in accordance with various embodiments of the present disclosure.
[0015] FIG. 9 illustrates an object detection model network architecture, in accordance with various embodiments of the present disclosure.
[0016] FIG. 10 illustrates AI model predictions, in accordance with various embodiments of the present disclosure.
[0017] FIG. 11 depicts ex vivo swine data collection, in accordance with various embodiments of the present disclosure.
[0018] FIG. 12 depicts ultrasound image results for a 2-class model trained with a dataset from an ex-vivo swine model, in accordance with various embodiments of the present disclosure.
[0019] FIG. 13 summarizes an example image capture protocol, in accordance with various embodiments of the present disclosure.
[0020] FIG. 14 summarizes performance metrics for ShrapML models, in accordance with various embodiments of the present disclosure.
[0021] FIGs. 15A-D illustrate confusion matrices for MobileNetV2 and ShrapML, in accordance with various embodiments of the present disclosure.
[0022] FIG. 16 illustrates performance matrices for MobileNetV2 and ShrapML, in accordance with various embodiments of the present disclosure.
[0023] FIG. 17 illustrates gradient-weighted class activation maps for trained two category models, in accordance with various embodiments of the present disclosure.
[0024] FIGs. 18A and 18B illustrate three category ShrapML performance, in accordance with various embodiments of the present disclosure.
[0025] FIGs. 19A-D illustrate confusion matrix and receiver operating characteristic (ROC) curve for ShrapML, in accordance with various embodiments of the present disclosure.
[0026] FIG. 20 provides a summary of performance metrics for ShrapML trained, in accordance with various embodiments of the present disclosure.
[0027] FIGs. 21A-H illustrate GradCAM overlays for ShrapML model, in accordance with various embodiments of the present disclosure.
[0028] FIG. 22 is a flowchart of a method of training, in accordance with various embodiments of the present disclosure.
[0029] FIG. 23 is a logic diagram 2300 of ultrasound (US) paired with artificial intelligence (AI) machine learning (ML) models to visualize and guide proper junctional occlusion, in accordance with various embodiments of the present disclosure.
[0030] FIGs. 24-31 illustrate process flows of a junctional tourniquet(s) having US and ML capabilities, in accordance with various embodiments of the present disclosure.
[0031] FIG. 32 depicts a flowchart detailing an application of a junctional tourniquet for managing the junctional tourniquet AI models, streaming the ultrasound signal, and providing instructions to the end user for proper junctional occlusion, in accordance with various embodiments of the present disclosure.
[0032] FIG. 33 depicts an example graphical user interface, in accordance with various embodiments of the present disclosure.
[0033] FIG. 34 illustrates a functional block diagram, in accordance with various embodiments of the present disclosure.
[0034] FIGs. 35 and 40-45 illustrate various frameworks for securing a junctional tourniquet to a body, in accordance with various embodiments of the present disclosure.
[0035] FIGs. 36 and 37 illustrate actuation of an ultrasound probe in the z-axis direction, in accordance with various embodiments of the present disclosure.
[0036] FIGs. 38 and 39 illustrate user interfaces that provide user guidance, in accordance with various embodiments of the present disclosure.
[0037] FIG. 46 illustrates results of junctional tourniquet testing, in accordance with various embodiments of the present disclosure.
[0038] FIG. 47 illustrates a performance comparison of improved junctional tourniquets with other tourniquets, in accordance with various embodiments of the present disclosure.
DETAILED DESCRIPTION
[0039] Embodiments of the present disclosure will now be described with reference to the drawing figures, in which like reference numerals refer to like parts throughout.
[0040] In accordance with the detailed description, the artificial intelligence (AI) provided by machine learning (ML) algorithms includes models for guiding junctional compression and for guiding to the location. There is cross-talk or communication between these ML models that allows them to function harmoniously. Ultrasound (US) is paired with artificial intelligence (AI)/machine learning (ML) models to visualize and guide proper junctional occlusion. Using custom tissue phantoms, object detection (OD) AI models can be trained and used to provide guidance to a junctional pressure point, and classification models can be trained and used to identify and help maintain occlusion. The terms AI and ML are used interchangeably herein.
[0041] Machine learning (ML) based methodologies embodied in image classification and object detection algorithm(s) serve to guide a junctional tourniquet that can confirm and track appropriate pressure while continuously monitoring the effectiveness of the applied pressure. Integrating ultrasound (US) technology into junctional tourniquet devices lowers the skill threshold for precise pressure point occlusion when accompanied by trained AI models to guide localization and to confirm and maintain the occlusion state. Junctional tourniquets employing ultrasound and trained ML algorithms satisfy the need to provide rapid and reliable hemostasis without requiring advanced expertise on the part of the user/operator.
[0042] The use of ML for ultrasound guidance of medical interventions for hemorrhage control has not been described before, potentially due to the relatively high cost of ultrasound machines. With the gradual decrease in cost and size of ultrasound technology, the use of ultrasound for hemorrhage control becomes feasible, making expertise the limiting factor for use. ML models offer a pathway to overcome this limitation, allowing healthcare providers not trained in sonography to utilize ultrasound technology to treat junctional hemorrhaging.
[0043] Therefore, in accordance with various embodiments presented herein, junctional tourniquets are improved by adding a "smart" component and accompanying methodologies that facilitate accurate application, monitoring, and adjusting of occlusal pressure on compressible structures to prevent recurrence of bleeding that may otherwise occur while the casualty is being moved or handled while enroute to other care. A junctional tourniquet device guides the user/operator in the correct placement of the tourniquet, monitors appropriate arterial occlusion, and potentially auto-adjusts to maintain constant hemostasis until vascular surgery is available, for example. Such a hemorrhage control device in military, commercial and civilian settings treats junctional hemorrhaging from the pelvis, uterus, groin, buttocks, limbs, armpit and neck by finding the correct location to apply and maintain pressure, whether that be the abdominal aorta or the femoral, subclavian or carotid arteries. Further, machine learning algorithms are applied in real time to ultrasound imaging to create a closed-loop/provider-in-loop system for the purpose of providing accurate pressure on major arteries as well as ongoing monitoring of its accuracy and effectiveness.
[0044] The disclosure accordingly describes a device, method and system that guides placement of an ultrasound probe to a position of maximal occlusion of major arteries, pressing the arteries against bony prominences in junctional areas and continuously monitors effective hemostasis to verify effective arterial occlusion. The device receives sonographic images taken, gathered or acquired by the ultrasound probe and uses real-time machine
learning algorithms to guide probe placement either by user interface or an autonomous motor system. This will simplify hemorrhage control from junctional bleeds, reduce both the dedicated manpower (once the device is placed) and the cognitive burden on medics/first responders, and enable inexperienced bystanders to render potentially lifesaving aid.
[0045] As described herein, the application of machine-learning based real-time image analysis is used to guide an ultrasound to apply pressure on major arteries against bony prominences. It is to be embodied in the form of a device, which will be a "smart version" of a junctional tourniquet, and described systems and methodologies. This junctional tourniquet will use an ultrasound probe as the pressure-exerting component. The ultrasound sonographic images will be transferred in real time to a controller that will utilize machine learning algorithms to: 1. Guide the ultrasound probe to correct placement (either by the user with a user interface with guidance arrow lights or with an autonomous motor system) on the vessel (femoral, subclavian, aortic), such as veins and arteries, 2. Continuously verify appropriate arterial occlusion, 3. Correct the pressure direction either autonomously with a motor system or by directing the user, and 4. Raise an alarm when occlusion is lost and the system cannot restore it automatically. The machine learning algorithms include: 1. Image classification, differentiating between an occluded and a non-occluded artery. 2. Object detection, locating the artery and the bone (or other hard surface of the patient sufficiently firm to withstand occlusion pressures described herein) in order to direct the movement of the pressure-exerting probe to the correct location. It will operate in real time as a closed-loop/provider-in-loop system, with the ultrasound images as its input and user guidance / autonomous motor action as its output. As used herein, the terms sonographic images and ultrasound images refer to images taken by an ultrasound probe and may be used interchangeably. The artificial intelligence (AI) provided by the ML algorithms thus includes models for guiding junctional compression and for guiding to the location for junctional compression. There is cross-talk or communication between these ML models that allows them to function harmoniously.
[0046] Test Platform
[0047] A custom test platform suitable for tourniquet testing in the femoral junction region was developed to train classification models to identify proper femoral vessel occlusion. Trained object detection AI models help identify the correct pressure point by identifying key anatomical features and their positions.
[0048] A test platform was developed for collecting guidance and occlusion AI training datasets. The test platform consisted of a linear actuator for occluding flow with a base for different custom tissue-mimicking models. Three different models were used, including:
[0049] 1. A simple geometry phantom with a single channel, as shown in FIG. 1.
[0050] 2. An anatomical phantom with skeletal and vascular features, as shown in FIG. 2.
[0051] 3. An ex-vivo swine model with artery and vein flow.
[0052] These models were then connected to a flow loop that included a peristaltic pump, pressure sensor, flow sensor and doppler compliant fluid, the flow loop illustrated in the training systems shown in FIGs. 3, 4, and 5.
[0053] Tissue Phantom
[0054] An ultrasonically compatible phantom, also referred to herein as a tissue phantom, for image collection to train and test the algorithm is presented. A phantom collects US image data for training and testing AI models. The phantom was made using clear ballistic gel with latex tubing, for example, connected to a pump simulating a vessel.
[0055] The phantom is a 3D printed rendering of patient junctional areas, such as human junctional areas, using surgical tubing for blood vessels, under a covering of ballistic hydrogel modified to closely mimic the composition of muscle, fat, and skin over the vessel. Imaging and flow data are collected from healthy controls and used for training the algorithm on actual human anatomy.
[0056] The simple tissue phantom of FIG. 1 was first created for data collection in the system of FIG. 3. The phantom comprised synthetic gelatin (gel) with a single channel through the center that can be placed over an aluminum bar or other hard surface sufficiently firm to withstand occlusion pressures as described herein. The tissue phantom platform must be (i) ultrasound compliant, (ii) mechanically robust enough to withstand the compressive forces at the point of occlusion, (iii) mechanically tough enough to withstand repeated junctional tourniquet applications, (iv) anatomically accurate for the junctional occlusion site as needed to develop guidance AI models, (v) capable of physiological fluid flow through artery and vein features, and (vi) easy to fabricate for various anatomies and replicated experiments.
[0057] As the ultrasound probe pushes, the vessel is occluded against the aluminum bar, and the occlusion can be observed by ultrasound. Images were collected with pulsatile flow through the phantom at physiological pressures. The probe was lowered incrementally until pressure distal to the phantom dropped by at least 90%, denoting occlusion. Ultrasound clips were collected at different angles and placements on the surface to create a large database of images for algorithm training. Phantom image collection was conducted, and changes to the setup are ongoing to acquire better datasets for training the algorithm.
[0058] Referring now to FIG. 3, a system 300 for collection of data from the simple tissue phantom of FIG. 1 is shown. A data acquisition controller or microcontroller, or computer, is connected to both the ultrasound machine USM 310 and a flow sensor. Also shown in the loop is a controlled pressing device CPD 320, controlled by the ultrasound machine USM 310 and configured to press the ultrasound probe USP 330, connected to a fluid reservoir 360 and pump 350, in a controlled manner into the simple phantom SP 370. Pressure sensor 340 monitors the pressure of CPD 320.
[0059] Referring to FIG. 4, a more anatomically realistic tissue phantom, including accurate bone and vessel placement is shown. In an example, an anatomical phantom of hip and thigh area, made from 3D-printed hip bone and femur and ultrasound compliant ballistic gel (gelatin), with femoral vessels in anatomical positions is shown. It too connects to the same data acquisition system as in FIG. 3.
[0060] The junctional tissue phantoms of FIGs. 1 and 2 were developed because commercially available tissue phantoms were inadequate for mechanical compression, ultrasound compatibility, and/or as a means of controlling flow through the phantom at desired blood pressures. An overview of how the phantoms are made and their features is found herein.
[0061] A tissue phantom is composed of relevant artery, vein, and bone features in a custom-made anatomical mold. Anatomical molds were vacuum formed based on available mannikins/molds. However, the mannikin molds had to be modified for use in this application. Molds were fitted with 3D printed or purchased bones at proper anatomical locations. Openings in the vacuum-formed mold at the distal and proximal ends of the occlusion site were created to allow rigid tubing to act as a placeholder for vessel channels. Rigid tubing was removed after phantom casting to allow soft, flexible tubing to be placed in the artery and vein locations. These were then attached to a flow loop that is capable of
creating pulsatile flow, hemorrhage site distal to the occlusion point, physiologically relevant pressure as determined by patient monitor, and bypass flow when occlusion is applied to the phantom.
[0062] Phantoms have been constructed for femoral, subclavian, and aortic junctional tourniquet sites. Phantoms have been created from synthetic ballistic gelatin, silicone elastomers, or styrene-block-(ethylene-co-butylene)-block-styrene (SEBS) copolymers, for example. SEBS copolymers are biocompatible elastomers able to remain stable when subjected to UV radiation. In certain example embodiments, a phantom was created with 10% clear ballistic gel (CBG) and a 3-D printed mold (Raise3d Pro2Plus). The volume of the mold was calculated and used to determine the cube size to be cut from the main block, with some added volume for a residual estimate. The CBG was then cut into small pieces and placed into a 500 mL container to be warmed to 130°C using an oven (HERATherm) for approximately 2 hours or until the gel was de-bubbled. Due to the high temperature needed to melt the CBG, a polycarbonate filament (PC 1500 FR, Jabil) was selected to print the mold to ensure the form was kept. Using a ¼" OD biopsy punch to hold the place of a vessel, the CBG was slowly poured into the silicone oil lined mold and left to cool at room temperature. Once cooled, the phantom was removed from the mold and placed over a wax block, which acted as a bone to allow the vessel to occlude.
[0063] These different approaches were tried and evaluated to identify which material properties have the optimal performance for mechanical compression and ultrasound image quality. Testing was performed via a uniaxial testing system and by testing the junctional compression device to occlusion to determine whether the tissue phantom would be damaged during the process. The synthetic ballistic gelatin formulas performed best for the femoral site, but other materials are contemplated as well.
[0064] Stress-strain properties 800 of the various ballistic gelatin compositions were evaluated as shown in FIG. 8, with ratios between clear ballistic gelatin (CBG) and other synthetic gelatin material types tested. Additionally, there are other qualitative properties of interest, such as whether the material can reach junctional occlusion under repeated use without ripping.
[0065] Referring now to FIG. 5, a flow diagram of a tissue phantom test system 500 with a flow bypass loop is shown. Anatomical tissue phantom 540 has an arterial side and a venous side, is formed of ultrasound compliant material, and has one or more compressible structures within the ultrasound compliant material that accommodate fluid flow through the anatomical tissue phantom; the compressible structures, as previously mentioned, would include anatomical vessels, arteries, veins, nerves, and bones formed within the tissue phantom. The ultrasound compliant material of the tissue phantom may be a synthetic gelatin, a ballistic gelatin, a ballistic hydrogel, or a clear ballistic gelatin, as will be described. The anatomical tissue phantom may be a femoral, a subclavian or an aortic tissue phantom having compressible structures like compressible tubing representative of vessels, arteries, veins, nerves, and bones.
[0066] Fluid reservoir 510 houses ultrasonic compliant fluid that is pumped by pump 520 to the tissue phantom 540. Pressure sensor(s) 530 is configured to receive ultrasonic compliant fluid from the pump 520, measure the pressure of the received ultrasonic compliant fluid, and provide the ultrasonic compliant fluid to an arterial side of the tissue phantom 540. A flow sensor 550 is coupled to the tissue phantom 540 and the fluid reservoir 510 and is configured to measure the flow of the ultrasonic compliant fluid and provide the ultrasonic compliant fluid to the fluid reservoir 510. A hydrostatic reservoir 560, such as a hydrostatic IV bag, provides hydrostatic fluid to the venous side of the tissue phantom.
[0067] Ultrasonic compliant fluid flows in a flow loop of the system: ultrasonic compliant fluid pumped by the pump from the fluid reservoir is provided to the pressure sensor, the pump provides the pumped ultrasonic compliant fluid to the arterial side of the tissue phantom, the ultrasonic compliant fluid flows through the arterial side of the tissue phantom, is measured by the flow sensor at an output of the tissue phantom, and flows back to the fluid reservoir. Responsive to pressure on the ultrasound compliant material of the anatomical tissue phantom at a position that is proximal to a location of a compressible structure of the one or more compressible structures, the compressible structure is compressed against the hard surface of the anatomical tissue phantom to at least partially occlude fluid flow of the ultrasonic compliant fluid in the compressible structure. The phantom flow loop of FIG. 5 can be used with a peristaltic pump or piston pump (SuperPump) 520 using different flow sensors 550, pressure sensors 530, or a mass balance to measure ongoing hemorrhage rate.
[0068] Flow of the ultrasonic compliant fluid through the pressure sensor, the pump and the fluid reservoir forms a fluid bypass loop of the system.
[0069] Tissue Phantom Imaging
[0070] The phantom was fitted with latex (Penrose) tubing to act as a vessel, which was connected to the flow loop described above in FIG. 5. This loop consisted of a peristaltic pump (Masterflex) that took Doppler compliant fluid (CIRS) from a reservoir and fed it to the phantom; a pressure sensor (ADI) that connected directly to the data acquisition unit (ADI) was placed downstream of the phantom. Between the pump and the phantom there was a bypass line. The vessel in the phantom was kept underwater during imaging. Ultrasound imaging was performed using an ultrasound probe (Terason) from a Terason500 US imaging system (Terason). The real time video feed from the US screen was recorded with the LabChart (ADI) software using a video capture box. The color imaging modality was used to confirm the presence or absence of flow in the vessel. With the vessel in view and the phantom placed on top of the wax block, the ultrasound probe was used to compress the vessel until no flow was going through, confirmed by the lack of color and a pressure drop of at least 90% from baseline.
[0071] Referring to FIGs. 6A-6C and FIGs. 7A-7E, results using the simple phantom of FIG. 1 and the femoral phantom of FIG. 2, respectively, are shown. In FIG. 6A, the simple phantom of FIG. 1 was used to train classification models for image interpretation, using the color overlay feature of the ultrasound system. Representative images with and without color flow are shown below. The AI models were trained to classify images between two classes: baseline (full flow) and occlusion; or three classes, adding progress (partial occlusion). Confusion matrices for both models are shown in FIGs. 6B and 6C.
[0072] Similarly, the femoral phantom of FIG. 2 was used to collect images for training a classification AI model for tracking occlusion. Data were split into three categories: baseline (full flow), progress (partial occlusion), and occlusion. Representative images for each category are shown in FIG. 7B, along with a normalized confusion matrix from model predictions on the test dataset in FIG. 7A.
[0073] Ultrasound images are processed for training AI models for both occlusion and guidance applications. The captured US scans will be split into frames, cropped to remove non-US image information, and resized. Starting with occlusion datasets, MATLAB is used to approximate the reduction in flow rate relative to baseline rates measured by the Doppler device, and each US frame is labeled with its flow reduction percentage. Similar to training the initial occlusion model, the categorical classification between flow and occluded conditions is optimized based on percent flow reduction to determine the effect of this hyperparameter on training performance. Additionally, the AI model may be re-configured for a regression output layer where the AI model for occlusion estimates the current flow based on the US image. This architecture may be optimized for these applications until an acceptable average accuracy, such as 85% or higher, is achieved.
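By way of non-limiting illustration, such a re-configuration might be sketched in MATLAB as follows, assuming a trained classification network net whose final layers are named 'fc_final', 'softmax', and 'classoutput'; the layer names and training variables are hypothetical placeholders, not the actual network of this disclosure:

```matlab
% Minimal sketch: swap a classification head for a regression head so the
% occlusion model estimates percent flow reduction from a US image.
% Layer names below are assumptions; inspect with analyzeNetwork(net).
lgraph = layerGraph(net);
lgraph = replaceLayer(lgraph, 'fc_final', ...
    fullyConnectedLayer(1, 'Name', 'fc_flow'));   % single regression value
lgraph = removeLayers(lgraph, 'softmax');         % softmax not used for regression
lgraph = removeLayers(lgraph, 'classoutput');
lgraph = addLayers(lgraph, regressionLayer('Name', 'flow_regression'));
lgraph = connectLayers(lgraph, 'fc_flow', 'flow_regression');

% Retrain on frames labeled with their measured flow-reduction percentages.
opts = trainingOptions('rmsprop', 'InitialLearnRate', 1e-3, 'MaxEpochs', 50);
regNet = trainNetwork(trainFrames, flowReductionLabels, lgraph, opts);
```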
[0074] The guidance US scans will be labeled with bounding box overlays for the regions of interest, specifically the artery, vein, and underlying bony surfaces, for example. Individual frames will then be used as training input for re-tuning the model architecture, such as a YOLOv7tiny model architecture, for example. Data augmentation and model modifications may be used until blind test performance surpasses the acceptable average accuracy, such as 85% or higher sensitivity across all scan points.
[0075] Test Platform - Object Detection
[0076] Next, the femoral phantom model was used to train an object detection AI model. Predicted bounding boxes are shown in FIGs. 7D and 7E, with instances of the model accurately identifying all features and instances with mislabeled or missed features. A summary of performance is highlighted by the confusion matrix in FIG. 7C.
[0077] Consider now a second algorithm that uses object detection to guide a junctional tourniquet to a proper location. As an example, a You Only Look Once (YOLOv3, for example) object detection network may be used to identify artery, vein, and bone objects in an ultrasound image. The overall idea is that the prototype will guide the end user to move the probe to the proper anatomical location, moving the ultrasound probe until the AI detects an artery in view. From there it will guide the user to go left or right until the artery is centered in the US image, followed by adjusting the angle until the bone is also visible. The proper location equals bone centered under the artery. This model is also useful for successful shrapnel detection in ultrasound images. A description of methods from that paper on this AI model is below. It is noted that while the following description is described for finding shrapnel objects, it is applicable herein to the identification of compressible structures, including blood vessels, arteries, veins, nerves, bones, or other physiological pressure points at, in or near a wound of a patient, as is clear from the detailed description.
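A minimal sketch of this guidance logic follows, assuming a trained MATLAB object detector (detector, e.g., a yolov3ObjectDetector) and a hypothetical live frame source getUltrasoundFrame(); the pixel tolerance and the displayed messages are illustrative stand-ins for the arrow-light or motor outputs described herein:

```matlab
% Guidance loop sketch: center the artery, then tilt until bone is visible.
centerTol = 40;                                  % allowed off-center error, pixels
while true
    frame = getUltrasoundFrame();                % hypothetical live US frame grab
    [bboxes, ~, labels] = detect(detector, frame);
    aIdx = find(labels == "artery", 1);
    if isempty(aIdx)
        disp('Sweep probe along the inguinal crease until an artery is detected.');
        continue
    end
    % Horizontal offset of the artery box center from the image center.
    cx = bboxes(aIdx,1) + bboxes(aIdx,3)/2;
    offset = cx - size(frame,2)/2;
    if offset < -centerTol
        disp('Move probe left');                 % or drive a motor / LED arrow
    elseif offset > centerTol
        disp('Move probe right');
    elseif ~any(labels == "bone")
        disp('Tilt probe until the bone surface is visible.');
    else
        disp('Proper location: bone centered under artery. Apply pressure.');
        break
    end
end
```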
[0078] Image Processing and Bounding Boxes
[0079] In a particular example, frames of all video clips were extracted using an implementation of FFmpeg with a Ruby script, yielding 90 individual frames per video. Duplicate frames were removed, and all images were processed with MATLAB's image processing toolbox (MathWorks, Natick, MA, USA), in which a function was written to crop images to remove ultrasound settings from view and then resize them to 512 x 512 x 3, for example. MATLAB was also used for the manual addition of bounding boxes to all images for all the objects. Individual rectangular boxes were drawn enclosing the smallest area around the shrapnel, vein, artery, or nerve (n = 6734 images base phantom; n = 10,777 images modified neurovascular phantom). It is understood that while all frames were extracted and processed as described herein, fewer than all could also be extracted and processed in this manner.
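A sketch of such a cropping and resizing helper, assuming MATLAB's Image Processing Toolbox and a crop rectangle chosen to exclude the on-screen ultrasound settings, is:

```matlab
% Preprocess one US frame: crop away on-screen settings, resize for the
% network input. cropRect is an assumption standing in for the region that
% contains only the ultrasound image.
function imgOut = preprocessUSFrame(imgIn, cropRect)
    % cropRect = [xmin ymin width height] bounding only the US image region
    img = imcrop(imgIn, cropRect);      % remove ultrasound settings from view
    imgOut = imresize(img, [512 512]);  % network input is 512 x 512 x 3
    if size(imgOut, 3) == 1             % replicate grayscale to 3 channels
        imgOut = repmat(imgOut, [1 1 3]);
    end
end
```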
[0080] ShrapOD Architecture
[0081] The object detection model, ShrapOD, used a SqueezeNet neural network backbone with modifications to include YOLOv3 object detection heads, as shown in FIG. 9. This network architecture 900 was built based on MATLAB-provided object detection code. The feature extraction network in SqueezeNet was modified to use an image input layer 902 (such as 512 x 512 x 3, for example) followed by a convolutional block 904 containing a convolutional layer with rectified linear unit (ReLU) activation and a max pooling layer 908. This is followed by 4 Fire blocks 906, 910 prior to the network splitting after Fire block (x2) 910 to integrate the YOLOv3 object detection heads. Fire modules, shown in blocks 930-942, per the SqueezeNet architecture, comprised a single convolutional squeeze layer 930 (1 x 1, with ReLU activator block 932) followed by expanding layers 934, 936 consisting of a mixture of (1 x 1) and (3 x 3) convolutional layers in parallel to increase the depth and width for higher detection accuracy. These parallel layers are concatenated prior to the next layer in the network architecture to reduce the number of model parameters. Five additional Fire blocks 914 are used on the YOLOv3 class output layer pathway, followed by a convolutional layer 916 with batch normalization and ReLU activation. See the left pathway of FIG. 9.
[0082] Flow 900 of FIG. 9 provides an overview of an example ShrapOD model network architecture. A diagram for the object detection algorithm using SqueezeNet as the classification backbone, with added YOLOv3 outputs to generate bounding boxes and class predictions, is shown. In the diagram, individual layers are shown as well as "blocks" that consist of multiple layers. The convolutional block (904) has a convolutional layer, a ReLU activation layer, and a max pooling layer (908, 912). The Fire blocks (906, 910, 914) - which repeat two or five times as indicated - begin with a convolutional layer with ReLU activation and then split into parallel chains with varying convolutional filter sizes (1 x 1 and 3 x 3) with ReLU activation. As depicted in the first occurrence, the parallel chains then come back together using a depth concatenation layer 924, which is followed by a backend convolutional block 926. The feature convolutional block 918 and both backend convolutional blocks 904 and 916 are identical in layer content, beginning with a convolutional layer, followed by batch normalization and ending with a ReLU activator. Both output layers, the class output layer 920 and the bounding box output layer 928, are also convolutional layers.
[0083] It can be seen that an additional output layer, bounding box output layer 928, was used for bounding box predictions, in which the network was fused after the Fire block 9 concatenation with Fire block 8, with an additional convolutional block 918 for feature resizing (block 922). The model contained a final concatenation layer and convolutional block to align the predicted bounding box coordinates to the output image. See the right pathway of FIG. 9. The YOLOv3 network also used optimized anchor boxes to help the network predict boxes more accurately.
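For illustration, one Fire block of the kind described above might be assembled in MATLAB roughly as follows; the filter counts and layer names are assumptions for the sketch rather than the exact ShrapOD parameters:

```matlab
% Sketch of a SqueezeNet-style Fire block: a 1x1 squeeze convolution
% followed by parallel 1x1 and 3x3 expand convolutions merged by depth
% concatenation, preceded here by a front-end convolutional block.
lgraph = layerGraph([
    imageInputLayer([512 512 3], 'Name', 'input')
    convolution2dLayer(3, 64, 'Padding', 'same', 'Name', 'conv1')
    reluLayer('Name', 'relu1')
    maxPooling2dLayer(2, 'Stride', 2, 'Name', 'pool1')
    convolution2dLayer(1, 16, 'Name', 'fire_squeeze')   % squeeze layer
    reluLayer('Name', 'fire_squeeze_relu')]);

% Parallel expand chains with 1x1 and 3x3 filters.
lgraph = addLayers(lgraph, [ ...
    convolution2dLayer(1, 64, 'Name', 'expand1x1')
    reluLayer('Name', 'expand1x1_relu')]);
lgraph = addLayers(lgraph, [ ...
    convolution2dLayer(3, 64, 'Padding', 'same', 'Name', 'expand3x3')
    reluLayer('Name', 'expand3x3_relu')]);
lgraph = addLayers(lgraph, depthConcatenationLayer(2, 'Name', 'fire_concat'));

% Wire the squeeze output into both expand chains, then concatenate.
lgraph = connectLayers(lgraph, 'fire_squeeze_relu', 'expand1x1');
lgraph = connectLayers(lgraph, 'fire_squeeze_relu', 'expand3x3');
lgraph = connectLayers(lgraph, 'expand1x1_relu', 'fire_concat/in1');
lgraph = connectLayers(lgraph, 'expand3x3_relu', 'fire_concat/in2');
```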
[0084] It is noted that the object detection algorithm can be trained using the LOSO (leave one subject out) methodology, in which a single subject is left out of training instances, allowing model overfitting to be assessed. The predictions across each LOSO model were then aggregated, and over 85% blind accuracy was achieved, compared to 70% accuracy without the LOSO and aggregation methods. This approach was combined with a live US feed and implemented into a single-board computer (SBC) for real-time prediction with inference times under one second per image.
[0085] Multiple object detection models have been trained for foreign body detection to compare commonly used architectures for speed and performance. A range of object detection architectures were evaluated, and YOLOv7tiny was found to be optimal based on mean average precision performance and inference speed; see FIG. 10. Identifying a region of interest in the image was used to measure how close foreign bodies were to neurovascular features as a triage support tool. These object detection models will be used for junctional guidance.
[0086] ShrapOD Training Overview
[0087] Model training was performed using MATLAB R2022b with the deep-learning and machine-learning toolboxes for the base phantom and then repeated for the modified, neurovascular phantom. For the base phantom use case, only images containing shrapnel were used in this example. For the neurovascular phantom, images were taken from datasets with and without shrapnel. Images were cropped to remove ultrasound file information, sized to 512 x 512 x 3, and then the datasets were split into 75% training, 10% validation and 15% testing quantities. Augmentation of the training datasets included random X/Y axis reflection, +/- 20% scaling, and +/- 360° rotation. These augmentation steps were written into a function that also applied them to the bounding box data. Validation and testing set images were not augmented. Training was performed using a stochastic gradient descent with momentum (SGDM) solver, 23 anchors, 125 epochs, L2 regularization of 0.0005, with a penalty threshold of less than 0.5 Intersection over Union (IoU), a validation frequency of 79 iterations, and an image batch size of 16 images. The learning rate started at 0.001 and, after a warmup period of 1000 iterations, began a scheduled slowdown given by learning rate x (iteration / warmup period). Training parameters were adapted from MATLAB object detection example code [21]. Training was performed using the CPU on an HP workstation (Hewlett-Packard, Palo Alto, CA, USA) running Windows 10 Pro (Microsoft, Redmond, WA, USA) and an Intel Xeon W-2123 (3.6 GHz, 4 core, Santa Clara, CA, USA) processor with 64 GB RAM.
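The fixed hyperparameters above map onto MATLAB's trainingOptions roughly as sketched below; the warmup-based learning rate schedule in the source was implemented in example-code training loops, so the piecewise schedule here is only an approximation of that behavior:

```matlab
% Hedged sketch of the ShrapOD training configuration described above.
opts = trainingOptions('sgdm', ...
    'InitialLearnRate', 1e-3, ...
    'L2Regularization', 5e-4, ...
    'MaxEpochs', 125, ...
    'MiniBatchSize', 16, ...
    'ValidationFrequency', 79, ...
    'LearnRateSchedule', 'piecewise', ...   % stand-in for the warmup slowdown
    'LearnRateDropFactor', 0.5, ...
    'LearnRateDropPeriod', 25, ...
    'ExecutionEnvironment', 'cpu');         % training was CPU-only per the text
```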
[0088] Evaluating ShrapOD Performance
[0089] After ShrapOD model training, blind test (15%) images were used to measure model performance. For the ShrapOD model trained on the original phantom image sets (shrapnel only object class, for example), 1010 images were used for testing, while 1617 images were used in the multi-object trained model from the neurovascular phantom image sets. Predictions were compared to ground truth images to generate precision-recall curves using the evaluateDetectionPrecision function in MATLAB. The area under the precision-recall curve was found for determining average precision (AP) [22,23]. For intersection over union (IoU), a bboxoverlay function (MATLAB) was used for the test images. While calculating IoU scores, true positive (TP) counts were identified as having a prediction and ground truth with an IoU score greater than or equal to 0.50. False positive (FP) and false negative (FN) counts were based on this same IoU criterion of 0.50 for when no prediction exceeded this threshold and a ground truth was present, or when there was a prediction without a ground truth, respectively. Additionally, false positives were counted when multiple predictions for a single ground truth were detected. Precision, recall, and F1 scores were then calculated with this IoU gating of 0.50. Mean IoU (mIoU) scores were calculated across each object class, and, for the multi-object model, mean AP (mAP) and an average mIoU across the object classes were determined.
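A sketch of this IoU-gated scoring, using MATLAB's bboxOverlapRatio in place of the helper named above and assuming per-image cell arrays of predicted and ground-truth boxes as [xmin ymin w h] rows, is:

```matlab
% IoU-gated TP/FP/FN counting and F1 score, approximating the 0.50 gating
% described above. predBoxes and truthBoxes are assumed cell arrays with
% one entry per test image.
iouThresh = 0.50;
tp = 0; fp = 0; fn = 0;
for k = 1:numel(predBoxes)
    nTruth = size(truthBoxes{k}, 1);
    overlaps = bboxOverlapRatio(predBoxes{k}, truthBoxes{k});  % IoU matrix
    matched = max(overlaps, [], 2) >= iouThresh;  % each prediction vs. any truth
    % Extra predictions on one ground truth are counted as false positives.
    tp = tp + min(sum(matched), nTruth);
    fp = fp + sum(~matched) + max(sum(matched) - nTruth, 0);
    fn = fn + max(nTruth - sum(matched), 0);
end
precision = tp / (tp + fp);
recall    = tp / (tp + fn);
f1        = 2 * precision * recall / (precision + recall);
```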
[0090] Ex vivo Swine Data Collection
[0091] As part of a euthanized model focused on automated central vascular access devices, ultrasound images were collected while applying a junctional tourniquet in a euthanized swine model, a biological structure with artery and vein flow, as shown in FIG. 11. Images were collected as clips recorded via a computer or controller interface for the ultrasound systems at different positions and vessel occlusion amounts to create a robust dataset for algorithm training.
[0092] This ex-vivo pseudo-perfused swine model, allowing for realistic image acquisition with control over arterial and venous flow and pressures, was developed for assessment of automated central vascular access devices and may be used with the same data acquisition system shown in FIG. 5.
[0093] The ex-vivo swine model was set up using euthanized swine tissue. In an example embodiment, the model consisted of lumbar-to-shin swine tissue. 8Fr feeding tubes (Covidien, Mansfield, MA, USA) were used to guide along the arterial and venous vessels distally towards the back of the knee. Small dissections were made into the muscle fascia, which allowed the muscle layers to separate and create a flap to further visualize the vessels while preserving the femoral sheath. The distal vessels were cannulated using 14G IV catheters (MedOfficeDirect, Naples, FL, USA) and held in place using Perma-Hand silk ligatures (Ethicon, Raritan, NJ, USA). The proximal vessels were cannulated using 8Fr PCI introducers (Argon Medical Devices, Athens, TX, USA) and held in place similarly to the distal vessels. The distal vessels were connected using a shunt loop that consisted of tubing connected using a 3-way stopcock. The proximal cannulations were connected to a Vivitro SuperPump AR Series (Vivitro Labs, Victoria, BC, Canada) and a hydrostatic reservoir, such as an IV bag. Doppler fluid (CIRS Tissue Simulation Technology, Norfolk, Virginia, USA) was pumped through the vessels to prime them. After flow was initiated, leaks were identified and stopped by ligating the leaking vessels.
[0094] In an example, a third occlusion classification model was trained with a dataset from an ex-vivo swine model. Representative ultrasound image results 1200 for a 2-class model, along with the Grad-CAM overlay for each prediction, are shown in FIG. 12.
[0095] It can be seen that AI models are trained, for example, on a two-class classification: flow and occlusion (defined as 90% flow reduction); or a three-class classification: flow, progress (50 - 90% flow reduction, or partial occlusion) and occlusion; and object detection for artery, vein, and bone features and other compressible anatomical features. Use of the junctional tourniquet phantom and methods therefor can be broken into ultrasound tissue phantom data collection, ex vivo swine data collection, and algorithm training and testing. The object detection and image classification functionalities of the machine learning model(s) are trained on phantom data.
[0096] Algorithm training has been performed with tissue phantom images. Images were split into three categories - full flow, partial occlusion (50 to 90% pressure reduction), and full occlusion (90% pressure reduction) - based on the pressure distal to the phantom. Images were then cropped and converted to grayscale, split into training and validation sets, followed by training a ShrapML model using an RMSprop optimizer, a learn rate of 0.001, and a batch size between 18 and 25 images. Trained algorithms reached 90+% accuracy with split validation images. The resulting trained model was then used in real time by acquiring frames from an ultrasound video, making a prediction, and, using text-to-speech, announcing the prediction in real time. The trained model may not in some instances make accurate predictions in real time if the algorithm is extracting features not relevant to vessel occlusion.
[0097] Ultrasound Image Processing
[0098] For the occlusion AI, data is collected from the flow loop with a PowerLab Data Acquisition system via the LabChart software. Inputs to the software include data from the flow sensor, pressure sensor, force sensor, and portable ultrasound system, such as shown in FIG. 5. One example data collection run consists of running the flow loop with an active bleed while recording data; after 30 seconds, the actuated test platform occludes the artery (the actuator extends for 10 seconds). Once occluded, data is recorded for an additional 10 seconds before the actuator is allowed to retract. Recording in LabChart for one run stops after flow and pressure values have returned to baseline. For curation, the data from the flow sensor is time-synced with the ultrasound video, and the flow sensor output is used to divide ultrasound video frames by flow level, which correlates to occlusion levels.
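The time-syncing and binning step might look like the following sketch, where flowTime and flowSignal are the recorded flow-sensor trace; the 30 fps frame rate, variable names, and 70% cutoff (matching the threshold selected with FIG. 14) are assumptions for illustration:

```matlab
% Sync the flow-sensor trace to ultrasound frame times and bin each frame
% by fractional flow reduction relative to the pre-occlusion baseline.
fps = 30;                                         % assumed video frame rate
t = (0:numFrames-1) / fps;                        % frame timestamps (s)
flowAtFrame = interp1(flowTime, flowSignal, t);   % resample flow to frame times
baseline = mean(flowAtFrame(t < 30));             % pre-occlusion baseline flow
reduction = 1 - flowAtFrame / baseline;           % fractional flow reduction
labels = strings(1, numFrames);
labels(reduction < 0.10) = "full_flow";
labels(reduction >= 0.10 & reduction < 0.70) = "partial_flow";
labels(reduction >= 0.70) = "no_flow";            % 70% threshold per FIG. 14
```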
[0099] For the guidance AI, data from the portable ultrasound system is recorded using three motions along the vessels of interest. For femoral data, this includes the following:
1. Medial to lateral movements along the inguinal crease, starting with the vessels in view and continuing until they are at the edge of the probe's field of view.
2. With the vessels and bone centered in view, tilting the probe +/- 45° within 10 seconds
3. Starting with the vessels and bone centered in view, rotating the probe 180° so that the vessels are in view cross-sectionally, then in-plane, and finally cross-sectionally again.
[0100] This data is labeled with bounding boxes around three major features: vein, artery, and bone, and is used to train the object detection model of the guidance AI model.
[0101] Accordingly, a phantom in the flow loop of FIG. 5 collects US image data for training and testing AI models. As stated, the phantom could be made using clear ballistic gel with latex tubing connected to a pump simulating a vessel. Flow is occluded by applying pressure with the US probe until flow is reduced by 90% of the initial rate, for example. Occlusion US images were labeled as positive or negative for occlusion based on flow reduction. Performance metrics were measured to determine an optimal or preferred percent occlusion threshold. A phantom that replicated the physiology of the human inguinal crease was also developed. After connecting the phantom to a pump, the guidance US scanning protocol in an example consisted of 1) sliding the probe lateral to medial along the inguinal crease, 2) rotating the probe 180°, and 3) tilting the probe ±45° with the vessels in view. OD models were trained to identify vessels and the underlying bone surface. These models can then guide the user to the proper location, and their performance was assessed. A table 1300 summarizing this example image capture protocol is shown in FIG. 13.
[0102] After images were collected, videos were exported from LabChart, as well as mean distal pressure readings downsampled to 10 Hz to match the frame rate of the recorded video. Frames were identified as full flow, partial flow, or no flow based on the distal pressure at the time the frame was captured relative to the starting, unobstructed pressure. Using Matlab v2022a, mean pressure vs. time data were plotted for each recording, and three regions were identified - (i) beginning and (ii) end regions for unobstructed pressure measurement and (iii) end of probe occlusion of the vessel. A mean unobstructed pressure was measured in the identified region, which was then used to create gates for the classification categories. For two-class scenarios, full flow and no flow categories were separated by a percent reduction of the distal pressure, ranging from 90 to 50% in different experimental setups. For three-class scenarios, full flow was characterized as unobstructed flow to 10% reduction in mean pressure, partial flow was 10% reduction to the no flow marker (50 to 90% reduction, depending on the experimental setup), and no flow was any pressure below this marker. During frame categorization, images were cropped to remove ultrasound image information from the image and then resized to 512 x 512 x 3. This process was repeated for each recorded ultrasound video for tissue phantom and swine.
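A sketch of this gating, assuming a per-frame mean distal pressure vector meanPressure and a baseline pBase measured from regions (i) and (ii), with the no-flow marker set per the experimental setup, is:

```matlab
% Gate frames into two- and three-class categories from distal pressure.
noFlowMark = 0.70;                               % e.g., 70% pressure reduction
reduction = 1 - meanPressure / pBase;            % per-frame pressure reduction

% Two-class gating: full flow vs. no flow.
class2 = repmat("full_flow", size(reduction));
class2(reduction >= noFlowMark) = "no_flow";

% Three-class gating: full flow (<10% reduction), partial flow, no flow.
class3 = repmat("partial_flow", size(reduction));
class3(reduction < 0.10) = "full_flow";
class3(reduction >= noFlowMark) = "no_flow";
```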
[0103] Neural Network Training
[0104] As previously described, ultrasound (sonographic) images for guidance are captured using a protocol in each region to ensure the vessels and underlying bony surfaces are captured in the US scan for training the AI models. Next, pressure is slowly applied at the pressure point until occlusion is reached, as determined by continuous waveform Doppler distal to the pressure point at either the ankle (femoral), wrist (subclavian), or femoral artery (aorta). Pressure will be held for a brief period of time, such as 3 or 5 seconds (3 s or 5 s), followed by slow release while simultaneously measuring distal flow using Doppler.
[0105] The guidance US scans will be labeled with bounding box overlays for the regions of interest, specifically the artery, vein, and underlying bony surfaces. Individual frames will then be used as training input for re-tuning the YOLOv7tiny model architecture. Data augmentation and model modifications will be used until blind test performance surpasses 85% sensitivity across all scan points.
[0106] Neural network model development and evaluation were performed using Matlab v2022a on an AMD Ryzen 9 5900HX 3.3GHz, 32 GB RAM, and NVIDIA RTX 3800 16GB VRAM computer system (Lenovo). Two neural network architectures were used: (1) ShrapML, a model optimized for shrapnel identification in ultrasound images, and (2) MobileNetV2, a neural network model that performed best for shrapnel identification in ultrasound images. Each model was fitted with a 512 x 512 x 3 image input layer and a two- or three-category classification output layer, depending on the image sets used.
[0107] For phantom training, image sets were loaded and randomly split 80:20 for training and validation, while a phantom image set was completely held out for blind testing. For swine training, a single image set was loaded and randomly split 60:20:20 for training, validation, and testing, respectively. In some training cases, data augmentation in the form of affine transformations was randomly introduced to training images. Specifically, reflections and translations in the X- (-128 to 128 pixels) or Y-direction (-64 to 64 pixels) were introduced randomly in these data augmentation training scenarios.
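In MATLAB, this augmentation policy corresponds roughly to the following sketch; the datastore names are placeholders:

```matlab
% Affine augmentation per the ranges stated above, applied only to the
% training datastore; validation and test images are left unaugmented.
augmenter = imageDataAugmenter( ...
    'RandXReflection', true, ...
    'RandXTranslation', [-128 128], ...   % pixels
    'RandYTranslation', [-64 64]);
augTrain = augmentedImageDatastore([512 512 3], trainImds, ...
    'DataAugmentation', augmenter);
```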
[0108] In another training example using swine phantoms, AI (ML) models for guidance and image classification for swine were developed, requiring 28 swine subjects to surpass 85% blind testing accuracy. These ML models were developed using a LOSO
methodology in which subjects were split into 5 or 6 clusters and aggregated so that one cluster remained for blind testing for each testing iteration.
[0109] Model training was performed for up to 100 epochs using a Root Mean Squared Propagation (RMSProp) optimizer with a 0.001 learn rate. A batch size of 32 was used throughout, with evaluation of validation loss performed at the end of each epoch. A validation patience of five was used, meaning that if the validation loss was not further reduced within five epochs, training ceased early and the model with the lowest validation loss was selected as an optimal or preferred model. Training was repeated three times with different random image splits for each training strategy, and each model was independently evaluated for determining overall performance. It is noted that parameters other than those set forth may be used without departing from the scope of the disclosure.
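These settings map onto MATLAB's trainingOptions approximately as follows; the once-per-epoch validation frequency is expressed here via the training-set size, which is an assumption of the sketch:

```matlab
% Hedged sketch of the classification training configuration above.
opts = trainingOptions('rmsprop', ...
    'InitialLearnRate', 1e-3, ...
    'MaxEpochs', 100, ...
    'MiniBatchSize', 32, ...
    'ValidationData', valImds, ...
    'ValidationFrequency', floor(numTrainImages/32), ... % about once per epoch
    'ValidationPatience', 5, ...           % stop early after 5 stagnant epochs
    'OutputNetwork', 'best-validation-loss');
net = trainNetwork(augTrain, layers, opts);
```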
[0110] Evaluation of Neural Network Model Performance
[0111] Model performance was evaluated with test images held out from the training and validation process. Predictions and confidences were calculated for each test image and compared to ground truth labels in order to build a confusion matrix using GraphPad Prism 9 (San Diego, CA, USA). For two-category models, positive predictions were no flow or occlusion images, while negative predictions were full flow images. Using these identifications, accuracy, precision, recall, specificity, and F1 scores were calculated. Confidences were used to construct a receiver operating characteristic (ROC) curve and measure the area under the ROC (AUROC). Performance metrics were found for triplicated models and are shown as average values throughout.
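The ROC/AUROC computation described above can be sketched with MATLAB's perfcurve; the positive class name "no_flow" and the datastore variables are placeholders:

```matlab
% ROC curve and AUROC from per-image prediction confidences.
[predLabels, allScores] = classify(net, testImds);   % labels and class scores
classes = net.Layers(end).Classes;                   % output layer class order
scores = allScores(:, classes == "no_flow");         % positive-class confidence
[fpr, tpr, ~, auroc] = perfcurve(testLabels, scores, "no_flow");
plot(fpr, tpr);
xlabel('False positive rate'); ylabel('True positive rate');
title(sprintf('ROC (AUROC = %.3f)', auroc));
```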
[0112] In addition, Gradient-weighted Class Activation Mapping (Grad-CAM) overlays were created for 1/24th of the testing images for each model. Grad-CAM is used to produce an approximate localization heat map identifying "hot spots" for regions important to the model prediction, as a means of making models more explainable and confirming that irrelevant image artifacts are not being tracked. Grad-CAM was performed using a built-in Matlab command for every 24th test image, and the overlays were saved according to the ground truth and prediction labels. Representative images were selected to highlight regions of the images the models identified when making a classification prediction.
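A sketch of that built-in command in use, sampling every 24th test image, follows; the variable names are placeholders, and frames are assumed to be already preprocessed to the network input size:

```matlab
% Grad-CAM overlays for every 24th held-out test image.
files = testImds.Files(1:24:end);
for k = 1:numel(files)
    img = imread(files{k});
    label = classify(net, img);           % model prediction for this frame
    map = gradCAM(net, img, label);       % localization heat map (R2021a+)
    figure; imshow(img); hold on;
    imagesc(map, 'AlphaData', 0.5);       % semi-transparent heat map overlay
    colormap jet; hold off;
    title(string(label));
end
```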
[0113] Results:
[0114] Determination of the Optimal or Preferred Threshold for Occlusion
[0115] To develop a machine learning model for monitoring JT occlusion, the occlusion threshold most suitable for distinguishing flow and no flow conditions was first identified. Using a tissue phantom model, training performance was compared with thresholds set at 50 - 90% distal pressure reduction for occlusion, as shown in FIG. 14, a performance metrics summary 1400 for ShrapML models with training sets split at different pressure thresholds. In this table, models consisted of two categories - full flow and no flow - with affine transformations randomly applied for data augmentation. Metrics are shown as average results for n=3 trained models. A color or shading map is overlaid on each row to highlight the higher performance metrics.
[0116] Lower threshold values had higher accuracy and improved performance in most performance metrics, while performance at the 80 and 90% threshold conditions was reduced. As the highest occlusion threshold is ideal, 70% was selected as the threshold for testing: the differences from lower thresholds were minimal, while the performance reduction at higher thresholds was avoided.
[0117] ShrapML and MobileNetV2 Performance for Tracking Tissue Phantom Vessel Occlusion
[0118] Next, various deep-learning model setups were used for classifying JT ultrasound images for flow or no flow due to tourniquet application, as shown in FIGs. 15A-D, which illustrate confusion matrices for MobileNetV2 (FIGs. 15A, 15C) and ShrapML (FIGs. 15B, 15D) for ultrasound tracking of junctional tourniquet application. Thus, two different model architectures were used - ShrapML and MobileNetV2 - each with and without affine transformations for data augmentation. Average confusion matrices for 3 trained models are shown for MobileNetV2 (FIG. 15A, FIG. 15C) and ShrapML (FIG. 15B, FIG. 15D), without data augmentation (FIG. 15A, FIG. 15B) and with data augmentation (FIG. 15C, FIG. 15D). Confusion matrix values are expressed as percentages across each ground truth category.
[0119] MobileNetV2 with or without augmentation trended toward false positive (no flow) results, with recall metrics of 0.675 and 0.646 without and with data augmentation, respectively, as shown in the table of FIG. 16, which lists performance metrics for MobileNetV2 and ShrapML for ultrasound tracking of junctional tourniquet application. Average (n=3 trained models) performance metrics and standard deviations are shown for MobileNetV2 and ShrapML with and without data augmentation. However, MobileNetV2 was strong at identifying baseline, full flow conditions, with specificity reaching 0.990 and 0.996 without and with data augmentation, respectively. In contrast, augmentation had a more pronounced effect on ShrapML training. Without augmentation, ShrapML had a high false negative (full flow) rate, with a specificity of 0.683. Augmentation with ShrapML solved this false negative bias, increasing specificity to 0.991 without impacting the false positive rate. Overall, ShrapML with augmentation had the strongest accuracy (0.934) and F1 score (0.918) and was selected as a preferred configuration for this application.
[0120] To further understand model performance, gradient-weighted class activation maps (Grad-CAM) were constructed to highlight the regions of ultrasound images most critical to the model prediction. FIG. 17 illustrates gradient-weighted class activation maps (GradCAM) for trained two-category models for ultrasound tracking of junctional tourniquet occlusion. (Column 1) Base ultrasound images are shown for reference, as well as (left to right) GradCAMs for MobileNetV2 without and with data augmentation and ShrapML without and with data augmentation. Four representative ultrasound images are shown: two identified as full flow and two identified as no flow. When looking at full flow ultrasound images, most of the models accurately tracked the vessel patency as the key feature, with the exception of ShrapML without augmentation. The magnitude of tracking the vessel was reduced in images where less or no Doppler signal was present. For no flow image classes, model trends were less consistent. MobileNet models tracked features at the edges of the image (no augmentation) or below the tissue phantom (with augmentation). ShrapML without augmentation had no strong feature correlation, indicating no flow was identified as an absence of key feature extraction. ShrapML with data augmentation successfully tracked the compression of the tissue phantom, but the precise feature being tracked was not obvious.
[0121] Effect of Three Categories on ShrapML Model Performance for Tracking Junctional Tourniquet Occlusion
[0122] An alternative model design was assessed in which an added third category of partial flow represented a 10 to 70% distal pressure reduction. This improved the full flow and no flow true prediction rates. Reference FIGs. 18A and 18B, in which three-category ShrapML performance for tracking junctional tourniquet occlusion is shown. FIG. 18A shows a confusion matrix for the three categories - full flow, partial flow, no flow - for ShrapML with affine transformations for data augmentation. FIG. 18B illustrates Grad-CAM overlays for representative ultrasound images for each of these three categories.
[0123] However, this came at the expense of the partial flow category, as over 75% of its predictions were incorrect. This is further highlighted through the Grad-CAM overlays. The full flow and no flow identified images were still tracking the vessel placement and phantom compression, respectively, as shown in FIGs. 18A and 18B. The partial flow designation was not identifying any obvious trends in the ultrasound image, and it frequently tracked features outside of the tissue phantom. As a result, the two-category methodology was identified as more suitable for this application.
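For reference, the three-way labeling described above could be sketched as below, with the partial flow band spanning a 10 to 70% distal pressure reduction as stated; the boundary handling is an assumption for illustration.

```python
def label_frame_three_way(reduction: float) -> str:
    """Hypothetical three-category label from distal pressure reduction:
    full flow below 10%, partial flow from 10 to 70%, no flow above 70%."""
    if reduction < 0.10:
        return "full_flow"
    if reduction < 0.70:
        return "partial_flow"
    return "no_flow"
```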
[0124] Performance of ShrapML Model for Tracking Junctional Tourniquet Application in an ex vivo Swine Model
[0125] Lastly, the ShrapML two-category network design was retrained for use with junctional tourniquet datasets collected with an ex vivo swine model. Multiple ultrasound clips were collected in a single ex vivo swine subject, and 20% of the images were held out for testing model performance. Overall, the model performed similarly to the tissue phantom case, with a slight bias towards false positive (no flow) results, as shown in FIGs. 19A-19D, which illustrate the confusion matrix and receiver operating characteristic (ROC) curve for ShrapML trained with swine image sets for tracking junctional tourniquet application. Results are shown for three replicate trained models, shown as average values for the confusion matrix (FIGs. 19A, 19C) and individual ROC curves (FIGs. 19B, 19D) for a 70% occlusion threshold (FIGs. 19A, 19B) or a 90% occlusion threshold (FIGs. 19C, 19D).
[0126] Results across the three replicate trained models were consistent, each with similar area under the ROC curve in FIGs. 19A-19D and with low standard deviations for each performance metric, as shown in FIG. 20. The table of FIG. 20 summarizes performance metrics for ShrapML trained with swine image sets for tracking junctional tourniquet application. Results are shown as averages and standard deviations across three replicate trained models for a 70% or 90% occlusion threshold. Accuracy was over 90% for swine image sets, similar to the tissue phantom performance. For comparison, the model was also trained using the most aggressive 90% distal pressure reduction threshold for occlusion. Overall, performance was minimally impacted when using swine images with this new occlusion threshold (FIG. 19B, FIG. 19C). As additional swine subjects were not evaluated, Grad-CAM overlays were used to track whether the features the model identified were subject specific or tracked the vessel as it occluded, as previously observed in the tissue phantom. In full flow images, structures such as the artery and vein were sometimes tracked together, while at other times the artery alone was the primary structure or feature responsible for the model prediction. In no flow images, the trends were less obvious, but in general the bottom tissue features were tracked when they appeared higher in the ultrasound image due to tissue compression from the junctional tourniquet. This provided proof-of-concept that the model can work in more complex animal tissue, but more images and subject variability would provide for more robust performance.
[0127] FIGs. 21A-21H illustrate Grad-CAM overlays for the ShrapML model trained with swine image sets. Representative images are shown for a 70% occlusion threshold (FIGs. 21A-21D) or a 90% occlusion threshold (FIGs. 21E-21H). Representative full flow and no flow ultrasound image designations are shown with and without Grad-CAM overlays, which highlight the image regions most responsible for the image classification outcome.
[0128] Further, as shown by Grad-CAM (gradient-weighted class activation map) overlays, AI model predictions identified the correct regions of interest, as shown in FIG. 10. In addition, the guidance models may pair with the occlusion models. These ML models may be integrated into portable ultrasound systems already on the market, improving future manufacturing feasibility, or may be integrated with improved junctional tourniquet configurations, such as those demonstrated in FIGs. 36-45.
[0129] To summarize a method of training to generate a trained machine learning model(s) having both image classification and object detection components, refer to flow 2200 of FIG. 22. At block 2210, analyze a database of ultrasound imaging and flow data points representative of one or more compressible structures of an anatomical structure subjected to levels of flow of ultrasonic compliant fluid therethrough, including occlusion of the one or more compressible structures. The database of ultrasound imaging includes ultrasound images. At block 2220, sort each ultrasound image of the ultrasound images of the database into classification categories based on a measured distal pressure of the ultrasound image, the measured distal pressure being a measure of flow of the ultrasonic compliant fluid through the compressible structure of the ultrasound image. At block 2230, process the sorted classification categories of the ultrasound images into processed classification categories. At block 2240, train a machine learning model on a training dataset of the processed classification categories to generate a trained machine learning model, including providing the machine learning model with an image input layer of the training dataset and generating an output layer with two or more classification categories.
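A compact sketch of blocks 2210-2240 is given below using PyTorch with stand-in data; the small network is a placeholder rather than the ShrapML architecture, and the processing at block 2230 is reduced to simple normalization for illustration.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Blocks 2210/2220: analyze the database and sort each image into a
# classification category from its measured distal pressure reduction
# (label 1 = occluded / "no flow" at a 70% threshold).
images = torch.rand(64, 1, 128, 128)          # stand-in ultrasound frames
pressure_reduction = torch.rand(64)           # stand-in flow measurements
labels = (pressure_reduction >= 0.70).long()

# Block 2230: process the sorted categories (here, simple normalization).
images = (images - images.mean()) / images.std()

# Block 2240: train a model with an image input layer and a two-category
# output layer.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for x, y in DataLoader(TensorDataset(images, labels), batch_size=16):
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()
```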
[0130] A Junctional Tourniquet
[0131] Referring now to FIG. 23, a logic diagram 2300 illustrates ultrasound (US) paired with artificial intelligence (AI) machine learning (ML) models to visualize and guide proper junctional occlusion in a junctional tourniquet. Following US image acquisition, the machine learning algorithm(s) are applied to guide the ultrasound probe to the correct location, actuate the probe to bring the needed occlusion to bear on compressible structures of the patient at the determined, correct location in proximity to a wound, and maintain the correct position, by means of movement and/or tilt of the ultrasound probe in the x-, y-, and/or z-axis directions, and the correct occlusion pressure.
[0132] In real time, then, the junctional tourniquet by means of its ML capabilities is able to perform guidance and/or actuation of the probe at the necessary location and pressure, and is also able to maintain the desired location and pressure of the ultrasound probe by virtue of constant monitoring and adjustments provided by the ML algorithms.
[0133] FIGs. 24-31 illustrate the process flows of a junctional tourniquet having US and ML capabilities. Referring first to FIG. 24, flow 2400 for treating a patient starts with the acquisition of sonographic (ultrasound) images of a wound of a patient at block 2410. These images are preferably provided in real time by the ultrasound probe of the junctional tourniquet but could also be stored and retrieved images. At block 2420, a predicted location of a compressible structure of the patient, typically proximal to the wound, is determined. At block 2430, the junctional tourniquet is guided by its ML algorithms in applying pressure at the predicted location to occlude the structure and reduce blood flow to the wound. The ML algorithms for image classification and object detection have been trained in accordance with the previous description.
[0134] Flow 2500 of FIG. 25 describes a method of using the junctional tourniquet with trained ML algorithms. At block 2510, the movement of the ultrasonic probe of the junctional tourniquet is guided to a position that is proximal a location of compressible structure(s) of a wound of a patient in accordance with analysis of images performed by a machine learning model of the junctional tourniquet. At block 2520, the ultrasonic probe is actuated at the position to apply pressure to the structure of the wound and compress the structure against a hard surface of the patient, at least partially occluding fluid flow in the
structure in accordance with analysis of the images performed by the machine learning model.
[0135] Flow 2600 of FIG. 26 provides more detail on this operation. At block 2610, sonographic (ultrasound) images of a wound of a patient having one or more compressible structures are acquired by ultrasound. These images may be acquired by the ultrasound probe of the junctional tourniquet in real time, or they may be stored images that have been retrieved. At block 2620, the sonographic images are analyzed by a trained machine learning model to generate a prediction of a location of the compressible structures and determine a lateral actuation and/or a directional actuation of the ultrasonic probe of the junctional tourniquet needed to maintain at least partial occlusion of fluid flow in the compressible structures. At block 2630, movement of the ultrasonic probe is guided in accordance with the predicted location of the one or more compressible structures. Finally, at block 2640, the ultrasonic probe is actuated at the position in accordance with the lateral actuation and/or directional actuation to apply pressure to the compressible structure(s) of the wound and compress the structure(s) against a hard surface of the patient to at least partially occlude fluid flow in the compressible structure(s).
[0136] Flow 2700 of FIG. 27 describes that the ultrasonic probe of the junctional tourniquet can be guided by the user or autonomously, in accordance with an autonomous guidance module of the junctional tourniquet. At block 2710, sonographic (ultrasound) images of a wound of a patient are collected (gathered). At block 2720, an ultrasonic probe of a junctional tourniquet is guided to a position of a vessel of the wound in accordance with object detection analysis of the sonographic images performed by a machine learning model of the junctional tourniquet, where a user guides the ultrasonic probe in accordance with a user interface of the junctional tourniquet or in accordance with an autonomous guidance module of the junctional tourniquet. At block 2730, the ultrasonic probe is actuated in a z-axis to apply pressure at the position to the vessel and at least partially occlude fluid flow in the vessel in accordance with image classification analysis performed by the machine learning model. At block 2740, the machine learning model continuously monitors an occlusion status of the vessel and the location of the ultrasonic probe relative to the vessel. At block 2750, the pressure (z-axis) and the direction of pressure (x- and/or y-axis) applied by the ultrasonic probe against the vessel are adjusted to at least partially occlude fluid flow in the vessel.
[0137] Referring now to flow 2800 of FIG. 28, the user can guide and actuate the ultrasound probe of the junctional tourniquet at blocks 2810, 2820. At block 2820, the user actuates the probe as guided by the user interface. After occlusion by the probe is achieved, the user can secure the ultrasonic probe at block 2830, and the ML algorithm continuously monitors for effective occlusion using object detection and image classification in a "manual" version, as will be described, at block 2840. If or when the occlusion is no longer effective, the ML algorithm activates an alarm. At block 2850, responsive to adjustment indicator(s), the user manually adjusts the pressure (z-axis) and/or the direction of pressure (x- and/or y-axis) applied by the ultrasonic probe against the vessel to at least partially occlude fluid flow in the vessel.
[0138] Flow 2900 of FIG. 29 refers to a "semi-automatic" version of the junctional tourniquet in which a user secures the ultrasonic probe after guiding it to the correct position. At block 2910, a user via a user interface guides an ultrasonic probe of a junctional tourniquet to a position of a vessel of the wound in accordance with object detection analysis of images performed by a machine learning model of the junctional tourniquet. At block 2920, the user secures the ultrasonic probe. At block 2930, the junctional tourniquet autonomously actuates the ultrasonic probe in a z-axis to apply pressure at the position to the vessel and at least partially occlude fluid flow in the vessel in accordance with image classification analysis performed by the machine learning model. At block 2940, the machine learning model of the junctional tourniquet continuously monitors an occlusion status of the vessel and the location of the ultrasonic probe relative to the vessel. At block 2950, responsive to adjustment indicator(s), the pressure (z-axis) applied by the ultrasonic probe against the vessel is adjusted autonomously by the junctional tourniquet via a motor or a pump to maintain at least partial occlusion of fluid flow in the vessel.
[0139] In an “automated” version of the junctional tourniquet, additional motor components automatically actuate motion in the x- and y-axes as well, ensuring proper occlusion in the monitoring phase. In flow 3000 of FIG. 30, at block 3010 a user via a user interface guides an ultrasonic probe of a junctional tourniquet to a position of a vessel of the wound in accordance with object detection analysis of images performed by a machine learning model of the junctional tourniquet. The user can then secure the ultrasonic probe at 3020. Next at block 3030, via a motor or a pump the ultrasonic probe is autonomously actuated in a z-axis to apply pressure at the position to the vessel and at least partially occlude fluid flow in the vessel in accordance with image classification analysis performed by the machine learning model. At block 3040, the machine learning model of the junctional tourniquet continuously monitors an occlusion status of the vessel and the location of the
ultrasonic probe relative to the vessel. Finally, at block 3050, responsive to adjustment indicator(s), the pressure (z-axis) and/or the direction of pressure (x- and/or y-axis) applied by the ultrasonic probe against the vessel is adjusted autonomously via a motor or a pump to maintain at least partial occlusion of fluid flow in the vessel.
[0140] Flow 3100 of FIG. 31 describes a scenario in which the junctional tourniquet is not guided and actuated by a user using a user interface. Rather, using the ML capabilities of the junctional tourniquet for object detection and image classification, the junctional tourniquet performs these tasks automatically/autonomously. At block 3110, an ultrasonic probe of a junctional tourniquet is guided to a position of a vessel of the wound by an autonomous guidance module of the junctional tourniquet in accordance with object detection analysis of images performed by a machine learning model of the junctional tourniquet. At block 3120, the ultrasonic probe is autonomously actuated, via a motor or a pump, in a z-axis to apply pressure at the position to the vessel and at least partially occlude fluid flow in the vessel in accordance with image classification analysis performed by the machine learning model. At block 3130, the machine learning model of the junctional tourniquet continuously monitors an occlusion status of the vessel and the location of the ultrasonic probe relative to the vessel. At block 3140, responsive to adjustment indicator(s), the pressure (z-axis) and/or the direction of pressure (x- and/or y-axis) applied by the ultrasonic probe against the vessel is adjusted autonomously via a motor or a pump to maintain at least partial occlusion of fluid flow in the vessel.
[0141] The junctional tourniquet is a device that has an ultrasound probe as the pressure-exerting component and machine learning algorithms for ensuring proper placement and vessel occlusion, with benefits described herein. Two separate machine learning algorithms are used in the functionality described herein: one for guidance to the proper location and a second for measuring vessel occlusion.
[0142] The ultrasound images are transferred in real time to a controller that utilizes the machine learning algorithms to:
(1) guide the ultrasound probe for correct placement. Proper placement may be either user-guided using audio/light indicators, referred to herein as adjustment indicators, or accomplished using a fully-autonomous motor system of the junctional tourniquet. Arrow lights on the probe, or on a housing holding the probe and/or attaching to the tourniquet, such as shown in FIG. 38, or an environment that utilizes voice prompts, are all contemplated;
(2) continuously verify appropriate arterial occlusion;
(3) correct occlusion pressure and/or direction as needed, either autonomously with a motor system or by directing the user (for example, user guidance only, z-axis autonomy, and 3-axis autonomy are contemplated, as explained below); and
(4) provide an alarm when occlusion is lost and the system cannot restore needed occlusion automatically (a minimal control-loop sketch combining these four functions follows this list).
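The sketch below ties the four functions together in a single loop; every interface name (us, guidance_model, occlusion_model, actuator, alarm) is a hypothetical placeholder rather than the actual controller API.

```python
def control_loop(us, guidance_model, occlusion_model, actuator, alarm):
    """Hedged sketch of the controller's four functions; all objects are
    hypothetical stand-ins for the ultrasound stream, ML models, motor
    system, and alert interface."""
    while True:
        frame = us.next_frame()
        dx, dy, tilt = guidance_model.suggest_move(frame)  # (1) placement guidance
        actuator.move(dx, dy, tilt)
        if occlusion_model.is_occluded(frame):             # (2) verify occlusion
            continue
        if actuator.can_tighten():
            actuator.tighten()                             # (3) correct pressure
        else:
            alarm.raise_alert("occlusion lost")            # (4) alarm the user
```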
[0143] As described herein, the junctional tourniquet device may guide the user (light or audio indicators) to the correct location and pressure for proper junctional tourniquet operation. It may additionally include the integration of z-axis automation for reaching and maintaining proper pressure after user guidance (light and/or audio indicators) to the correct location for proper junctional tourniquet operation has occurred. Further, the junctional tourniquet may include the integration of x-, y-, and z-axis automation for reaching and maintaining proper location and pressure for proper junctional tourniquet operation. Again, reference FIGs. 24-31.
[0144] The artificial intelligence (AI) provided by the machine learning algorithms thus includes models for guiding junctional compression and for guiding to the location. There is cross-talk, or communication, between these models that allows them to function harmoniously. The first algorithm uses an object detection framework to identify where the artery, vein, and bone are in the tissue and uses that information to identify when the vessels are aligned over the bone. Using that information, three product versions are described below; other embodiments within the scope of the disclosure are contemplated:
[0145] 1. A Manual ("Decision Support") Version guides the end-user (by activating a user interface such as arrow-shaped lights) on which direction to move the probe and on changes in probe angle to ensure proper probe placement. The second algorithm is an image classification framework that looks at a single ultrasound image and determines whether the image of the vessels is positive or negative for occlusion. Using this algorithm, the z-position is adjusted to ensure vessel occlusion, and the algorithm is continuously used to track whether flow resumes in the vessel, which instructs further (manual) tightening in the z-axis. After occlusion is achieved, the probe is secured in place (see "Mechanical prototype" below) and the algorithm continuously monitors for effective occlusion. If or when the occlusion is no longer effective, the algorithm activates an alarm, providing actionable information to either tighten or reposition the device to restore occlusion. Importantly, the user does not need to be trained in the use of ultrasound, and no screen is needed, only an easy-to-understand interface such as arrow-shaped lights on the side of the probe. Reference is made to flow 2800 of FIG. 28.
[0146] 2. A Semi-automatic Version - a user guides the probe to the correct position as directed by the algorithm and secures it (see "Mechanical prototype" below). Then, a motor component actuates z-axis motion for appropriate tightening. Monitoring is performed similarly to the manual version, but tightening is adjusted automatically. Reference is made to flow 2900 of FIG. 29.
[0147] 3. A Fully Automatic Version - similar to the Semi-automatic version, with additional motor components to automatically actuate motion in the x- and y-axes as well, ensuring proper occlusion in the monitoring phase in a "fire and forget" fashion. Reference is made to flow 3000 of FIG. 30.
[0148] Mechanical prototype: A number of approaches are suggested for securing the probe in place, thus integrating the various components into an automated junctional tourniquet device. One example integrates the ultrasound methodology with a combat-ready clamp (CRoC) junctional tourniquet. The ultrasound probe is integrated into the mechanical "crane-like" framing so that the ultrasound probe itself is the object responsible for occlusion. The technology improvements described herein, including the device, system, and method of use described herein, improve on current junctional tourniquets such as a CRoC junctional tourniquet by adding a "smart" component that facilitates accurate application, continuously monitors the occlusion, and adjusts to prevent recurrence of the bleeding, including during casualty movement and transport, as these actions can result in a 50% decrease in effectiveness of conventional junctional tourniquets. Following proper placement of the probe as described above, the ultrasound probe is then attached to the mechanical framework, which includes the motor components required for the Semi- or Fully-automatic version. The prototype could be further configured for use in resource-limited situations by allowing the ultrasound probe to be removed for use on other casualties while the mechanical prototype assists the junctional tourniquet in remaining applied. However, this would lose the autonomous feedback mechanisms in the tourniquet design, which are needed to ensure proper occlusion during casualty transportation. This is one of many methodologies by which the algorithm-ultrasound mechanisms can be configured for implementation in junctional tourniquet prototypes.
[0149] A software application of the junctional tourniquet for managing the junctional tourniquet AI models, streaming the ultrasound signal, and providing instructions to the end user for proper junctional occlusion is envisioned. A flowchart detailing the application framework is shown in FIG. 32. In an example embodiment, a Clarius handheld US device will be used as the US probe device, which has an open API (application programming interface) for developing against the device. Next, the application will prompt the user to select the subclavian, femoral, or aortic site, and between guidance and occlusion modes. For guidance, initial prompts will indicate to place the ultrasound probe near the clavicle, inguinal crease, or upper abdomen, followed by directions to move the probe laterally until the artery is centered in the US image. Then, if necessary, prompts will direct the user to tilt the probe until the relevant bony surface is in view under the artery. Correct placement will trigger the occlusion mode of the application. In this mode, the application will prompt the user to apply pressure until the vessel is occluded, at which point the user will be guided to maintain pressure. The occlusion functionality will continue indefinitely so that the end user will always be alerted if occlusion has been lost, defined as no occlusion for more than 10 seconds. If that occurs, the application will prompt to restart the guidance mode. The application will be set up to run AI predictions in real time using the tuned guidance and occlusion models, detailed herein.
[0150] More specifically, referring now to flow 3200 of FIG. 32, the methodology of a junctional tourniquet with guidance and occlusion ML models is set forth. The flow starts at 3202. At block 3204, a user of the junctional tourniquet selects a scan point on the patient. This will typically be close to or at the wound site of the patient. The user next selects either a guidance mode or an occlusion mode at decision block 3206. If the guidance mode, with its object detection ML algorithm, is selected, at block 3208 the user places the ultrasound probe at the scan point. A clock starts at block 3210. If the time is more than 90 seconds (90s) at decision block 3212 and there is no action, then an ALERT that the location of a structure cannot be identified is issued at block 3218. This alert may be displayed to the user of the junctional tourniquet visually or audibly, for example. If the time is less than 90 seconds (90s) at decision block 3212, the next inquiry, at decision block 3214, is whether the structure, in this case an artery, is centered within the displayed US image. If no, then the ultrasound probe must move left or right to become centered at block 3216. This may be accomplished by adjustment indicators displayed to the user/operator of the junctional tourniquet, or it may occur autonomously via an autonomous guidance module of the junctional tourniquet. Once the structure is centered, the inquiry at decision block 3220 is whether the structure is centered over the bone or other hard surface of the patient. If no, then the probe must be tilted at block 3222. This may be accomplished by adjustment indicators displayed to the user/operator of the junctional tourniquet, or it may occur autonomously via an autonomous guidance module of the junctional tourniquet. Once the structure is both centered and centered over the bone, at block 3224 occlusion is ready to begin. A clock is started at block 3226, after which the occlusion mode is selected at block 3228.
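A hedged sketch of this guidance-mode logic follows, with the 90-second timeout from decision block 3212; the detector, indicator, and alert callables, along with the centering tolerance and arrow directions, are illustrative assumptions.

```python
import time

def guidance_mode(detect, indicate, alert):
    """Sketch of FIG. 32 guidance mode: center the artery laterally, then
    tilt until bone sits under it, within a 90 s window."""
    start = time.monotonic()
    while time.monotonic() - start < 90:           # decision block 3212
        artery, bone_under_artery = detect()       # object detection output
        if artery is None:
            continue                               # nothing found yet
        if abs(artery["x_offset"]) > 0.05:         # blocks 3214/3216
            indicate("move left" if artery["x_offset"] > 0 else "move right")
        elif not bone_under_artery:                # blocks 3220/3222
            indicate("tilt probe")
        else:
            return "occlusion"                     # block 3224: ready to occlude
    alert("structure location cannot be identified")  # block 3218
```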
[0151] Returning to decision block 3206, if the occlusion mode is selected, then the flow continues to block 3230, where a clock is started. The query at block 3232 is whether the structure is already occluded. If yes, then the actuation positioning is to be maintained and the clock reset, at blocks 3234 and 3236. The action of maintaining actuation positioning at block 3234 may be conveyed to a user/operator of the junctional tourniquet, such as via an audible or visual user interface. If the structure (again, an artery in this example) is not occluded, at decision block 3238 the query is whether the state of non-occlusion has persisted for less than 10 seconds (10s). If yes, the actuator pressure is increased at block 3240. The pressure could be increased by the user/operator, or it could be increased in stages autonomously by the junctional tourniquet, such as by an autonomous motor assembly of the junctional tourniquet. If maximum pressure is indicated at decision block 3242, then an ALERT that the actuation pressure is at maximum is issued at block 3248. This alert may be conveyed to the user of the junctional tourniquet visually, audibly, or by movement (vibration), for example. If the indication at decision block 3238 is that the time in a non-occluded state is not less than 10s, then an ALERT to reorient the ultrasound probe is issued to the user/operator of the junctional tourniquet at block 3244. This alert may be displayed to the user of the junctional tourniquet visually or audibly, for example. In order to reorient the ultrasound probe, the guidance mode is selected at block 3246.
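The occlusion-mode branch can likewise be sketched as below, using the 10-second non-occlusion window from decision block 3238; the classifier and actuator callables are hypothetical placeholders.

```python
import time

def occlusion_mode(is_occluded, increase_pressure, at_max_pressure, alert):
    """Sketch of FIG. 32 occlusion mode with the 10 s non-occlusion rule."""
    lost_since = None
    while True:
        if is_occluded():                       # decision block 3232
            lost_since = None                   # blocks 3234/3236: maintain, reset
            continue
        lost_since = lost_since or time.monotonic()
        if time.monotonic() - lost_since < 10:  # decision block 3238
            if at_max_pressure():               # decision block 3242
                alert("actuation pressure at maximum")  # block 3248
            else:
                increase_pressure()             # block 3240
        else:
            alert("reorient ultrasound probe")  # block 3244
            return "guidance"                   # block 3246: back to guidance
```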
[0152] An example of a user interface 3300, in this case a graphical user interface (GUI), is displayed in FIG. 33. The GUI allows the user/operator to choose between guidance and occlusion modes for femoral, subclavian, and aortic scan sites. There are multiple views that may be employed, such as a 2D ultrasound, a continuous wave (CW) Doppler, a Doppler (DP) view, and a lower temperature view (indicated by the snowflake). In the particular example GUI shown, the guidance mode has been selected for the femoral scan site. The 2D ultrasound view is shown, in which bounding boxes have been placed around two structures of interest in the patient.
[0153] FIGs. 38 and 39 further illustrate user guidance provided by user interfaces that are different from the GUI of FIG. 33. In FIG. 38, a screen provides feedback and input to a user/operator, with directions on how to move the ultrasound probe. The screen may be coupled directly to the ultrasound probe, as shown, or may be more remotely coupled but still available to the user/operator. FIG. 39 is a top-down view of the ultrasonic probe with corner lights in a rectangular pattern that light up to indicate whether movement is to be in an x- or a y-axis. This allows the user to make x-axis, y-axis, or angle (tilt) adjustments of the ultrasound probe. This is an example of a user interface that communicates guidance to the user/operator visually without a screen. The on-board nature of these user interfaces provides simplicity in the junctional tourniquet design.
[0154] Referring now to the functional block diagram 3400 of FIG. 34, a junctional tourniquet has several functional blocks. An ultrasound (US) transducer 3410 of an ultrasound probe may be connected to a controller, such as a computer, microcontroller, single board computer, or the like, by a wireless connectivity block 3420 or a wired connection 3430. Controller 3440 has the guidance AI (ML) model 3445, an image classification AI (ML) model 3450, force sensors 3455, and software 3460, such as the application methodology set forth in FIG. 32, and is configured to control manual control, linear actuator, and multi-directional automation modules 3470, 3480, 3490, respectively. Controller 3440 is also coupled to and configured to control user interfaces used by a user/operator of the junctional tourniquet. In this example, the user interface is illustrated as the GUI of FIG. 33, but other types of user interfaces may be used to communicate with the user/operator visually, audibly, or by touch (vibration, for example).
[0155] US transducer 3410 of an ultrasonic probe is configured to collect images, sonographic images, of a wound of a patient. Controller 3440 is configured to receive the images captured by the US transducer 3410, and in accordance with analysis of the plurality of images by ML models 3445 and 3450, controller 3440 is configured to: guide movement of the ultrasonic probe to a position that is proximal a location of a compressible structure in accordance with analysis of the images performed by the guidance ML model 3445; and actuate the ultrasonic probe at the position to apply pressure to the structure of the wound and compress the structure against a hard surface of the patient to at least partially occlude fluid flow in the structure in accordance with analysis of the images performed by the image classification ML model 3450.
[0156] Guidance ML model 3445 of controller 3440 is configured to guide the ultrasonic probe of the junctional tourniquet to the position proximal the location of the structure in accordance with object detection performed by the guidance ML model, including determining the location of the structure. The controller then guides movement of the ultrasonic probe to the position proximal the location of the structure. The guidance provided by guidance ML model 3445 may be provided by a user interface 3495 and/or autonomous guidance provided by the multi-directional automation module 3490 as controlled by controller 3440. The automation module 3490 may be an autonomous motor assembly of the junctional tourniquet. Guided movement may be movement in the x- and y-axes as well as at any angle (tilt).
[0157] The analysis by image classification ML model 3450 allows the controller 3440 to actuate the ultrasonic probe at the determined position to apply pressure to the structure of the wound and compress the structure against a hard surface of the patient to at least partially occlude fluid flow in the structure in accordance with analysis of images performed by the image classification ML model 3450. Preferably, actuation of the linear actuator at the position is performed in real time in response to analysis of the images by the image classification ML model 3450. The image classification performed by ML model 3450 differentiates between an occluded status and a non-occluded status of the compressible structure.
[0158] The controller can effect lateral actuation of the ultrasound probe by control of the linear actuator 3480. Linear actuator 3480 may be an autonomous motor assembly, such as a motor or a pump, that actuates the ultrasonic probe in a z-axis of the junctional tourniquet, as illustrated in FIGs. 36 and 37, respectively. In FIG. 36, actuation of the ultrasound probe in the z-axis direction is accomplished by moving a motor-driven actuator in the z-axis. As shown in this embodiment, the motor-driven actuator is coupled to the ultrasound probe. In FIG. 37, z-axis actuation may be accomplished by a pump-and-bellows configuration coupled to the ultrasound probe. The bellows expands and contracts based on air pump inflation controlled by the image classification ML model 3450 of controller 3440.
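The two z-axis actuation options of FIGs. 36 and 37 could be abstracted behind a common tighten interface, as in the sketch below; both classes, and the step sizes and pressure cap, are hypothetical stand-ins rather than the prototype's firmware.

```python
class MotorZAxis:
    """Hypothetical motor-driven z-axis actuator (FIG. 36 style)."""
    def __init__(self, step_mm: float = 0.5):
        self.position_mm, self.step_mm = 0.0, step_mm
    def tighten(self):
        self.position_mm += self.step_mm   # advance the probe along the z-axis

class PumpBellows:
    """Hypothetical pump-and-bellows z-axis actuator (FIG. 37 style)."""
    def __init__(self, step_kpa: float = 5.0, max_kpa: float = 60.0):
        self.pressure_kpa, self.step_kpa, self.max_kpa = 0.0, step_kpa, max_kpa
    def tighten(self):
        # Inflate the bellows in steps, capped at a maximum pressure.
        self.pressure_kpa = min(self.pressure_kpa + self.step_kpa, self.max_kpa)
```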
[0159] Junctional tourniquets operating in accordance with the functional block diagram of FIG. 34 were very effective. For testing the custom-designed junctional tourniquets, a commercial phantom was perfused through a flow loop with a peristaltic pump, pressure sensor, bleed site, and flow sensor, such as shown in FIG. 5. Testing was performed in four stages: 1) active bleed and tourniquet placement, 2) tourniquet placed at the junctional occlusion site, reducing flow rate at the bleed site, 3) occlusion maintained for at least 5 minutes, and 4) tourniquet release. These results 4600 are shown in FIG. 46.
[0160] The junctional tourniquet may be secured with respect to the body of the patient in a variety of ways, as indicated by FIGs. 35 and 40-45. FIGs. 35, 40, 44, and 45 illustrate a framework in a crane-like arrangement in which the ultrasound probe is free to move laterally in a z-axis motion and in which the ultrasound probe may be secured after it is manually positioned. The ultrasound probe is removable after manual positioning, but this may not permit occlusion to be maintained. FIG. 41 is a universal strap-like design. In the example of the strap configuration shown, there are three points of contact for the ultrasound probe of the junctional tourniquet with the human body: the inguinal crease, the aorta, and the femur.
[0161] The junctional tourniquet may have different configurations of a mechanical attachment module coupled to the controller and configured to removably attach the junctional tourniquet to the patient. The mechanical attachment module may have a linear actuator, which may be an autonomous motor assembly configured to laterally actuate the ultrasonic probe. Further, the mechanical attachment module may be configured to releasably secure the ultrasound probe after the ultrasound probe at least partially occludes fluid flow in the structure.
[0162] More particularly, the mechanical attachment module includes a base coupled to a frame and/or straps. A number of prototypes were designed and tested with various design criteria, listed in priority from high to low, including: total time taken to occlude; stability of the junctional tourniquet at the anatomical site; effectiveness of occlusion by the junctional tourniquet; ultrasound compatibility of the junctional tourniquet; versatility and ease of use of the junctional tourniquet; durability and portability of the junctional tourniquet; reusability of the junctional tourniquet; power - battery life of the junctional tourniquet; and total cost of the junctional tourniquet. Two prototypes performed well according to these criteria and are shown in FIGs. 42-45.
[0163] In FIGs. 42-43, a base and tightening straps (BaTS) embodiment of a junctional tourniquet is shown. The BaTS junctional tourniquet of FIG. 43 includes a wireless ultrasound probe 4310 (such as Clarius) with probe case 4320, strap(s) 4340 with rotating collar(s) 4330 coupled to the strap(s) 4340, an encased linear actuator 4350 for movement in the z-axis, and an electronics box (controller) 4360. One or more strap(s) 4340
are configured to releasably secure the ultrasound probe 4310 after the ultrasound probe 4310 at least partially occludes fluid flow in the structure.
[0164] With respect to the electronics box/controller 4360, the junctional tourniquet concept is controlled by the underlying ML guidance and occlusion algorithms that are held on a single board computer, microcontroller device, or other controller to allow for miniaturized deployment of the AI/ML models. The single board computer is housed in the electronics box 4360, along with interface cables to the actuator and wireless (such as Bluetooth) connectivity to the ultrasound device; while wireless has obvious advantages in this setting, wired connectivity could also be used. The AI/ML guidance and occlusion models within the electronics box 4360 (or single board computer, for example) will process ultrasound images and inform actuation decisions to maintain occlusion. A user interface may also connect to a display that provides instructions to the end-user/operator as displayed within a graphical user interface or other user interface.
[0165] FIGs. 44-45 show a frame reinforced tourniquet (FReT) junctional tourniquet. As illustrated in FIG. 45, the FReT junctional tourniquet 4500 has a wireless ultrasound probe 4510 (such as Clarius), a rigid frame 4520, a linear occlusion actuator 4530, an electronics box (controller) 4540, and a base plate 4550. The ultrasound probe 4510 assists with guidance in accordance with the AI (ML) guidance model of the junctional tourniquet. The crane-like rigid frame 4520 supports the linear occlusion actuator 4530 above the patient (at least in the orientation shown). Base plate 4550 is configured to be placed under the patient and the frame 4520. Linear occlusion actuator 4530 allows for automated compression control as needed to reach occlusion, as controlled and determined by the AI (ML) occlusion model of the junctional tourniquet.
[0166] With respect to the electronics box/controller 4540, the junctional tourniquet concept is controlled by the underlying ML guidance and occlusion algorithms that are held on a single board computer, microcontroller device, or other controller to allow for miniaturized deployment of the AI/ML models. The single board computer is housed in the electronics box 4540, along with interface cables to the actuator and wireless (such as Bluetooth) connectivity to the ultrasound device; while wireless has obvious advantages in this setting, wired connectivity could also be used. The AI/ML guidance and occlusion models within the electronics box 4540 (or single board computer, for example) will process ultrasound images and inform actuation decisions to maintain occlusion. A user interface may
also connect to a display that provides instructions to the end-user/operator as displayed within a graphical user interface or other user interface.
[0167] A performance comparison of the BaTS and FReT improved junctional tourniquets with commercially available SAM and CRoC junctional tourniquets is shown in FIG. 47, which illustrates a summary of time metrics obtained for each run with the different tourniquets. While all tested junctional tourniquets were able to reach occlusion with a hemorrhage reduction of greater than 97%, the FReT and BaTS improved junctional tourniquets were overall quickest to use and had lower variability compared to the SAM and CRoC commercial options. It is readily seen that integration of actuation automation functionality with each prototype, as well as testing across various junctional pressure points and integration of AI model-based guidance, enables a fully automated junctional tourniquet for hemorrhage control on the battlefield and in other military, commercial, and civilian applications.
[0168] Described herein is a machine learning (ML) methodology, implemented as algorithm(s) that analyze sonographic images in real time, guiding the user to press an artery or other compressible structure in the right location with the right occlusive force, and monitoring the effectiveness of this pressure. These ML algorithms may be integrated into an ultrasound probe, making the probe an effective pressure head as part of a junctional tourniquet, without the need for medical and/or ultrasound expertise on the part of the user/operator of the junctional tourniquet. Constant monitoring of occlusion effectiveness will allow for rapid or automated response to displacement, an especially important advantage when transporting the patient.
[0169] This automated occlusion junctional tourniquet device utilizes ultrasound (US) to apply pressure to stop hemorrhage, allowing for AI-driven guidance to the proper pressure point based on US feedback. AI-driven occlusion algorithms will provide feedback to the medical provider when enough pressure has been applied to ensure hemorrhage control. This device, system, and methodology can improve speed, efficiency, and accuracy in administering junctional hemorrhage control, and can improve safety by preventing excessive application force and the tissue or bone damage likely to result. Furthermore, the AI-guided approach will be critical to standardizing results across different providers and may aid with junctional hemorrhage control training.
[0170] As has been shown, AI/ML models have the potential to monitor vessel occlusion (above 90% overall accuracy) and track key anatomical features using object detection AI models. Both AI models, for occlusion and guidance, predict on animal tissue and/or human volunteers. Advancement of this technology will simplify tourniquet use and help reduce the high mortality associated with junctional hemorrhage.
[0171] As previously described, current solutions for junctional hemorrhage include packing with hemostatic bandages or sponges (which is ineffective in the case of a significant arterial hemorrhage), junctional tourniquets (which are difficult to use and tend to lose efficacy once the casualty is moved), and REBOA (which is a highly invasive procedure, requires skill, and is only relevant to the lower body). The embodiments presented herein can allow for an abdominal aortic junctional tourniquet as a non-invasive alternative to the invasive resuscitative endovascular balloon occlusion of the aorta (REBOA) procedure.
[0172] The use of ML for ultrasound guidance of medical interventions for hemorrhage control has never been described before, potentially due to the relatively high cost of ultrasound machines. With the gradual decrease in their cost and size, the use of ultrasound for hemorrhage control begins to appear feasible, making expertise the limiting factor for use. ML offers a pathway to overcome this limit, to allow healthcare providers not trained in sonography to utilize ultrasound technology to treat junctional hemorrhaging.
[0173] Another significant advantage of the use of ML over the “standard” or non- ML use of ultrasound for this purpose is its ability for continuous monitoring. Using ML, the ultrasound junctional tourniquet system maintains “visualization” of the obstructed vessel and can raise an alarm in case this obstruction is no longer effective. In the absence of such an alarm, the first sign of failure might be a pool of blood forming under the casualty or clinical deterioration in the casualty’s mental status or vital signs, all signifying loss of a substantial amount of precious blood.
[0174] In conclusion, ML is demonstrated as useful for ultrasound guidance and monitoring of pressure against major vessels, to be used as part of a “smart” junctional tourniquet.
[0175] Embodiments of the present disclosure described above and summarized below are combinable.
[0176] In one embodiment of an ultrasound junctional tourniquet, the tourniquet has a controller; an ultrasonic probe portion of the ultrasonic junctional tourniquet configured to
acquire images of a wound of a patient using ultrasound doppler; and a tourniquet portion of the ultrasonic junctional tourniquet, where the controller determines from the images a location of a compressible structure of a patient that is proximal to the wound and controls the tourniquet portion to apply pressure at the location of the structure to occlude the structure and reduce blood flow to the wound.
[0177] In another embodiment of the tourniquet, the images are sonographic images.
[0178] In another embodiment of the tourniquet, the structure is a blood vessel, artery, vein, nerve, bone, or other physiological pressure point of the patient.
[0179] In another embodiment of the tourniquet, the tourniquet portion includes the ultrasonic probe portion and the controller controls the ultrasonic probe portion to apply pressure at the location of the structure to occlude the structure and reduce blood flow to the wound.
[0180] In another embodiment of the tourniquet, the controller is configured to predict the location of the structure from the ultrasonic images acquired by the ultrasonic probe portion of the ultrasonic junctional tourniquet.
[0181] In another embodiment of the tourniquet, the controller is configured to predict the location of the structure from ultrasonic images acquired by the ultrasonic probe portion of the ultrasonic junctional tourniquet.
[0182] In another embodiment of the tourniquet, the controller uses machine learning to predict the location of the structure from the ultrasonic images acquired by the ultrasonic probe portion of the ultrasonic junctional tourniquet.
[0183] In another embodiment of the tourniquet, the tourniquet portion of the ultrasonic junctional tourniquet includes the ultrasonic probe portion and the controller controls the ultrasonic probe portion to apply pressure to the structure at the predicted location to occlude the structure and reduce blood flow to the wound.
[0184] In another embodiment of the tourniquet, the controller is configured to determine the location of the structure using machine learning on the images.
[0185] In another embodiment of the tourniquet, the controller is integrated with the ultrasonic probe portion and uses machine learning to analyze the plurality of images to determine the location of the structure.
[0186] In another embodiment of the tourniquet, the controller is configured to monitor the plurality of images as acquired and updates the determined location of the structure of the patient using machine learning.
[0187] In another embodiment of the tourniquet, the controller is configured to guide the tourniquet portion of the ultrasonic junctional tourniquet to the location of the structure and controls the tourniquet portion to compress the structure at the location against a hard surface of the patient that is proximal the location of the structure.
[0188] In another embodiment of the tourniquet, where the hard surface is a bone of the patient proximal the location.
[0189] In another embodiment of the tourniquet, the controller uses machine learning in processing the plurality of images to guide the tourniquet portion to the location of the structure and to control the tourniquet portion to compress the structure at the location.
[0190] In another embodiment of the tourniquet, the controller is configured to: determine the location of the structure as an x,y location in a cartesian coordinate system; determine an angle of the structure within the cartesian coordinate system; and control the tourniquet portion to apply pressure to the structure at the determined x,y location and the determined angle.
[0191] In another embodiment of the tourniquet, the ultrasound probe portion is a removable portion of the ultrasonic junctional tourniquet and configured to be removed from the ultrasonic junctional tourniquet after the tourniquet portion is controlled to apply pressure to the structure at the determined x,y location and the determined angle.
[0192] In another embodiment of the tourniquet, the ultrasound probe portion is a removable portion of the ultrasonic junctional tourniquet configured to be removed from the ultrasonic junctional tourniquet after the tourniquet portion is controlled to apply pressure to the structure at the determined location.
[0193] In one embodiment of a method for treating a patient with a wound, the method includes acquiring sonographic images of a wound of a patient; determining from the plurality of images a location of a compressible structure of a patient that is proximal to the wound; and guiding an ultrasonic junctional tourniquet in applying pressure at the location of the structure to occlude the structure and reduce blood flow to the wound.
[0194] In another embodiment of the method, an ultrasonic probe of the ultrasonic junctional tourniquet acquiring the plurality of images.
[0195] In another embodiment of the method, the images are sonographic images.
[0196] In another embodiment of the method, the structure is a blood vessel, artery, vein, nerve, bone, or other physiological pressure point of the patient.
[0197] In another embodiment of the method, further including predicting the location of the structure of the patient that is proximal to the wound from a plurality of ultrasonic images acquired by an ultrasonic probe of the ultrasonic junctional tourniquet.
[0198] In another embodiment of the method, the ultrasonic probe applying pressure at the predicted location of the structure to occlude the structure and reduce blood flow to the wound.
[0199] In another embodiment of the method, an ultrasonic probe of the ultrasonic junctional tourniquet applying pressure at the location of the structure to occlude the structure and reduce blood flow to the wound.
[0200] In another embodiment of the method, further including predicting the location of the structure of the patient that is proximal to the wound from ultrasonic images acquired by the ultrasonic probe of the ultrasonic junctional tourniquet.
[0201] In another embodiment of the method, further including using machine learning to determine from the plurality of images the location of the structure.
[0202] In another embodiment of the method, further including monitoring the plurality of images as acquired and updating the determined location of the structure of the patient using machine learning.
[0203] In another embodiment of the method, applying pressure at the location of the structure further includes guiding a tourniquet of the ultrasonic junctional tourniquet to the location of the structure and the tourniquet compressing at the location to press the structure against a hard surface of the patient that is proximal the location of the structure.
[0204] In another embodiment of the method, further including, after the tourniquet is guided to the location and compresses at the location, removing an ultrasound probe of the ultrasonic junctional tourniquet that provided the images used in said determining the location of the structure.
[0205] In another embodiment of the method, the hard surface is a bone of the patient proximal the location.
[0206] In another embodiment of the method, guiding the tourniquet further includes determining the location of the structure as an x,y location in a cartesian coordinate system; determining an angle of the structure within the cartesian coordinate system; and applying the tourniquet to the structure at the determined x,y location and the determined angle.
[0207] In another embodiment of the method, further including removing an ultrasound probe of the ultrasonic junctional tourniquet that provided the plurality of images used in said determining the location of the structure.
[0208] In one embodiment of a junctional tourniquet, the tourniquet including: an ultrasonic probe configured to collect a plurality of images of a wound of a patient, the plurality of images being sonographic images; and a controller of the junctional tourniquet, coupled to the ultrasonic probe, configured to receive the plurality of images. In accordance with analysis of the plurality of images by a machine learning model of the junctional tourniquet the controller is configured to: guide movement of the ultrasonic probe to a position that is proximal a location of a structure of a plurality of compressible structures of the wound in accordance with analysis of the plurality of images performed by the machine learning model; and actuate the ultrasonic probe at the position to apply pressure to the structure of the wound and compress the structure against a hard surface of the patient to at least partially occlude fluid flow in the structure in accordance with analysis of the plurality of images performed by the machine learning model.
[0209] In another embodiment of the tourniquet, the controller is configured to guide the ultrasonic probe to the position proximal the location of the structure in accordance with object detection performed by the machine learning model, the object detection performed by the machine learning model includes determining the location of the structure and the controller configured to guide movement of the ultrasonic probe to the position proximal the location of the structure.
[0210] In another embodiment of the tourniquet, the ultrasonic probe is guided to the position that is proximal to the location of the structure in accordance with one or more of a user interface and an autonomous guidance module of the junctional tourniquet, the user interface and the autonomous guidance module controlled by the controller.
[0211] In another embodiment of the tourniquet, the autonomous guidance module is an autonomous motor assembly of the junctional tourniquet controlled by the controller.
[0212] In another embodiment of the tourniquet, the controller is configured to guide movement including an angle of the ultrasonic probe with respect to the structure.
[0213] In another embodiment of the tourniquet, the controller is configured to actuate the ultrasonic probe in accordance with image classification that differentiates between an occluded status and a non-occluded status of the structure.
[0214] In another embodiment of the tourniquet, a linear actuator is coupled to the controller and configured to linearly actuate the ultrasonic probe as controlled by the controller.
[0215] In another embodiment of the tourniquet, the linear actuator is an autonomous motor assembly that actuates the ultrasonic probe in a z-axis of the junctional tourniquet.
[0216] In another embodiment of the tourniquet, the linear actuator is a motor or a pump assembly.
[0217] In another embodiment of the tourniquet, the controller of the junctional tourniquet configured to guide movement of the ultrasonic probe to the position and actuate the ultrasonic probe at the position in real time responsive to analysis of the plurality of images by the machine learning model.
[0218] In another embodiment of the tourniquet, the junctional tourniquet having a user interface controlled by the controller, where the ultrasonic probe is guided to the position that is proximal the location of the structure of the wound by a user in accordance with a user interface of the junctional tourniquet.
[0219] In another embodiment of the tourniquet, the user interface includes one or more guidance indicators generated by the controller in accordance with analysis of the plurality of images performed by the machine learning model.
[0220] In another embodiment of the tourniquet, the user interface includes a screen that displays the one or more guidance indicators.
[0221] In another embodiment of the tourniquet, the screen is coupled to the ultrasound probe.
[0222] In another embodiment of the tourniquet, the one or more guidance indicators include one or more of audio and visual indicators on the ultrasound probe or on a housing of the ultrasound probe or junctional tourniquet.
[0223] In another embodiment of the tourniquet, the audio indicators include voice prompts, beeps, or alarms, and the visual indicators include guidance lights or arrows generated by the controller in accordance with the machine learning model of the junctional tourniquet.
[0224] In another embodiment of the tourniquet, the one or more guidance indicators of the user interface are displayed on the ultrasound probe.
[0225] In another embodiment of the tourniquet, the one or more guidance indicators of the user interface prompt the user to: move the ultrasonic probe to center the structure in a sonographic image; and adjust an angle of the ultrasonic probe to center the hard surface under the structure in a sonographic image.
[0226] In another embodiment of the tourniquet, a screen of the user interface displays the one or more guidance indicators.
[0227] In another embodiment of the tourniquet, the controller is configured to laterally actuate the ultrasonic probe at the position to apply pressure to the structure.
[0228] In another embodiment of the tourniquet, the controller is configured to control a linear actuator to laterally actuate the ultrasonic probe along a Z-axis of the ultrasonic probe.
[0229] In another embodiment of the tourniquet, the linear actuator is an autonomous motor assembly that actuates the ultrasonic probe in a z-axis of the junctional tourniquet.
[0230] In another embodiment of the tourniquet, the linear actuator is a motor or a pump assembly.
[0231] In another embodiment of the tourniquet, the controller further configured to monitor the position of the ultrasonic probe proximal the location of the structure and an occlusion status of the structure.
[0232] In another embodiment of the tourniquet, the machine learning model continuously monitors the position of the ultrasonic probe and the occlusion status of the structure.
[0233] In another embodiment of the tourniquet, one or more of the pressure applied to the structure by the ultrasonic probe and the direction of the pressure applied to the structure is adjusted to maintain at least partial occlusion of fluid flow in the structure.
[0234] In another embodiment of the tourniquet, the controller is configured to guide movement including an angle of the ultrasonic probe with respect to the structure to maintain at least partial occlusion of fluid flow in the structure.
[0235] In another embodiment of the tourniquet, the controller configured to generate an alarm when the ultrasonic probe does not maintain at least partial occlusion of fluid flow in the structure.
[0236] In another embodiment of the tourniquet, the alarm is conveyed to a user of the junctional tourniquet via a user interface of the junctional tourniquet.
[0237] In another embodiment of the tourniquet, lateral actuation of the ultrasonic probe along a Z-axis of the ultrasonic probe controls the pressure applied to the structure and where directional actuation of the ultrasonic probe along one or more of an x-axis and a y-axis of the ultrasonic probe controls a direction of the pressure applied to the structure to maintain at least partial occlusion of fluid flow in the structure.
[0238] In another embodiment of the tourniquet, the controller is configured to control an autonomous motor assembly to actuate one or more of lateral actuation and directional actuation of the ultrasonic probe.
[0239] In another embodiment of the tourniquet, the junctional tourniquet having a user interface controlled by a user that includes one or more guidance indicators configured to guide one or more of lateral actuation of the ultrasonic probe along a Z-axis of the ultrasonic probe to control the pressure applied to the structure and directional actuation of the ultrasonic probe along one or more of an x-axis and a y-axis of the ultrasonic probe to control a direction of the pressure applied to the structure.
[0240] In another embodiment of the tourniquet, the user interface includes a screen that displays the one or more guidance indicators.
[0241] In another embodiment of the tourniquet, the one or more guidance indicators include one or more of audio and visual indicators on the ultrasound probe or on a housing of the ultrasound probe or junctional tourniquet.
[0242] In another embodiment of the tourniquet, audio indicators include voice prompts, beeps or alarms and where the visual indicators include guidance lights or arrows generated by the controller in accordance with the machine learning model of the junctional tourniquet.
[0243] In another embodiment of the tourniquet, directional actuation includes an angle of the ultrasonic probe.
[0244] In another embodiment of the tourniquet, the controller configured to monitor the occlusion status of the structure responsive to force readings provided by one or more force sensors.
[0245] In another embodiment of the tourniquet, the ultrasound probe includes the one or more force sensors.
[0246] In another embodiment of the tourniquet, one or more of the pressure and a direction of the ultrasound probe in applying pressure to the structure is adjusted by a user in accordance with one or more of a user interface of the junctional tourniquet configured to convey adjustment instructions to the user of the junctional tourniquet and an autonomous module of the junctional tourniquet, the user interface and the autonomous module controlled by the controller.
[0247] In another embodiment of the tourniquet, the controller is configured to laterally actuate the ultrasonic probe along a Z-axis of the ultrasonic probe to adjust the pressure applied by the ultrasound probe to the structure.
[0248] In another embodiment of the tourniquet, the controller controls a motor or pump assembly to laterally actuate the ultrasonic probe along a Z-axis of the ultrasonic probe to adjust the pressure applied by the ultrasound probe to the structure.
[0249] In another embodiment of the tourniquet, one or more force sensors that sense force, where force readings received from the force sensors are processed by the controller to determine an occlusion status of the structure.
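As a non-limiting sketch of how force readings might be processed into an occlusion status, the example below applies a simple moving average against a calibrated threshold; the threshold and window values are hypothetical and would be set empirically for a real device.

```python
from collections import deque

class ForceOcclusionEstimator:
    """Estimate occlusion status from force-sensor readings (illustrative only)."""

    def __init__(self, threshold_newtons=30.0, window=10):
        # threshold_newtons is a hypothetical calibration value, not a figure
        # taken from the disclosure.
        self.threshold = threshold_newtons
        self.readings = deque(maxlen=window)

    def update(self, force_newtons):
        """Add one reading and return the current occlusion status."""
        self.readings.append(force_newtons)
        mean = sum(self.readings) / len(self.readings)
        return "occluded" if mean >= self.threshold else "not occluded"

est = ForceOcclusionEstimator()
for f in (5.0, 12.0, 28.0, 35.0, 36.0):
    status = est.update(f)
print(status)   # moving average 23.2 N -> 'not occluded'
```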
[0250] In another embodiment of the tourniquet, further including a mechanical attachment module coupled to the controller and configured to removably attach the junctional tourniquet to the patient, the mechanical attachment module having a linear actuator.
[0251] In another embodiment of the tourniquet, the linear actuator is an autonomous motor assembly configured to laterally actuate the ultrasonic probe.
[0252] In another embodiment of the tourniquet, the mechanical attachment module is configured to releasably secure the ultrasound probe after the ultrasound probe at least partially occludes fluid flow in the structure.
[0253] In another embodiment of the tourniquet, the mechanical attachment module includes a base coupled to one or more of a frame and one or more straps, the base configured to be placed under the wound of the patient and the frame and the one or more straps configured to releasably secure the ultrasound probe after the ultrasound probe at least partially occludes fluid flow in the structure.
[0254] In another embodiment of the tourniquet, the frame is a rigid frame.
[0255] In another embodiment of the tourniquet, the mechanical attachment module including one or more rotating collars coupled to the one or more straps.
[0256] In another embodiment of the tourniquet, the ultrasound probe coupled to the controller via a wired or a wireless connection.
[0257] In one embodiment of a controller-implemented method for using a junctional tourniquet, the method including: acquiring a plurality of sonographic images of a wound of a patient having one or more compressible structures, the plurality of sonographic images acquired by ultrasound; a trained machine learning model analyzing the plurality of sonographic images to generate a prediction of a location of one or more structures of the one or more compressible structures and one or more of a lateral actuation and a directional actuation of an ultrasonic probe of a junctional tourniquet needed to maintain at least partial occlusion of fluid flow in the one or more structures; guiding movement of the ultrasonic probe in accordance with the predicted location of one or more structures of the one or more compressible structures; and actuating the ultrasonic probe at the position in accordance with one or more of the lateral actuation and the directional actuation to apply pressure to the one or more structures of the wound and compress the one or more structures against a hard surface of the patient, at least partially occluding fluid flow in the structure.
[0258] In another embodiment of the method, acquiring in real time the plurality of sonographic images.
[0259] In another embodiment of the method, lateral actuation of the ultrasonic probe along a Z-axis of the ultrasonic probe controls the pressure applied to the structure and where directional actuation of the ultrasonic probe along one or more of an x-axis and a y-axis of
the ultrasonic probe controls a direction of the pressure applied to the structure to maintain at least partial occlusion of fluid flow in the structure.
[0260] In another embodiment of the method, further communicating the prediction of the location and the one or more of the lateral actuation and the directional actuation.
[0261] In another embodiment of the method, communicating including one or more of creating an audio message of the prediction of the location, the lateral actuation and the directional actuation and displaying the location, the lateral actuation and the directional actuation in a user interface.
[0262] In one embodiment of a controller-implemented method for using a junctional tourniquet, the method including guiding movement of an ultrasonic probe of the junctional tourniquet to a position that is proximal a location of a structure of a plurality of compressible structures of a wound of a patient in accordance with analysis of a plurality of images performed by a machine learning model of the junctional tourniquet; and actuating the ultrasonic probe at the position to apply pressure to the structure of the wound and compress the structure against a hard surface of the patient, at least partially occluding fluid flow in the structure in accordance with analysis of the plurality of images performed by the machine learning model.
[0263] In another embodiment of the method, actuating further including actuating the ultrasonic probe in accordance with image classification that differentiates between an occluded status and a non-occluded status of the structure.
[0264] In another embodiment of the method, laterally actuating the ultrasonic probe along a Z-axis of the ultrasonic probe to adjust the pressure applied by the ultrasound probe to the structure.
[0265] In another embodiment of the method, a linear actuator is an autonomous motor assembly that performs said laterally actuating the ultrasonic probe.
[0266] In another embodiment of the method, the linear actuator is a motor or a pump assembly.
[0267] In another embodiment of the method, said guiding further includes: the machine learning model performing object detection of the plurality of images to identify the location of the structure; and guiding movement of the ultrasonic probe to the position proximal the location of the structure.
[0268] In another embodiment of the method, guiding further includes one or more of: a user guiding movement of the ultrasonic probe to the position proximal the location of the structure using a user interface of the junctional tourniquet; and an autonomous guidance module of the junctional tourniquet performing said guiding movement of the ultrasonic probe to the position proximal the structure.
[0269] In another embodiment of the method, further generating one or more guidance indicators in accordance with the machine learning model of the junctional tourniquet and displaying the one or more guidance indicators in the user interface, the user guiding movement of the ultrasonic probe to the position proximal the location of the structure in accordance with the one or more guidance indicators displayed in the user interface.
[0270] In another embodiment of the method, the machine learning model performing object detection of the plurality of images to guide a user including displaying the one or more guidance indicators in the user interface prompting the user to: move the ultrasonic probe to center the structure in a sonographic image; and adjust an angle of the ultrasonic probe to center the hard surface under the structure in a sonographic image.
[0271] In another embodiment of the method, a screen of the user interface displaying the one or more guidance indicators.
[0272] In another embodiment of the method, displaying the one or more guidance indicators on a screen of the user interface.
[0273] In another embodiment of the method, the one or more guidance indicators include one or more of audio and visual indicators on the ultrasound probe or on a housing of the ultrasound probe or junctional tourniquet.
[0274] In another embodiment of the method, the audio indicators include voice prompts, beeps or alarms and the visual indicators include guidance lights or arrows.
[0275] In another embodiment of the method, guiding movement of the ultrasound probe to the position and actuating the ultrasonic probe at the position occurring in real time responsive to analysis of the plurality of images by the machine learning model.
[0276] In another embodiment of the method, further monitoring the position of the ultrasonic probe proximal the location of the structure and an occlusion status of the structure.
[0277] In another embodiment of the method, monitoring the position of the ultrasound probe and the occlusion status of the structure occurring in real time responsive to analysis of the plurality of images by the machine learning model.
[0278] In another embodiment of the method, monitoring includes the machine learning model continuously monitoring the position of the ultrasonic probe proximal the location of the structure and the occlusion status of the structure; and adjusting one or more of the pressure applied to the structure by the ultrasonic probe and a direction of the pressure applied by the ultrasound probe to the structure to maintain at least partial occlusion of fluid flow in the structure.
[0279] In another embodiment of the method, further generating an alarm when the ultrasonic probe does not maintain at least partial occlusion of fluid flow in the structure.
[0280] In another embodiment of the method, adjusting includes one or more of adjusting lateral actuation of the ultrasonic probe along a Z-axis of the ultrasonic probe to control the pressure applied to the structure and adjusting directional actuation of the ultrasonic probe along one or more of an x-axis and a y-axis of the ultrasonic probe to control a direction of the pressure applied to the structure.
[0281] In another embodiment of the method, adjusting directional actuation of the ultrasonic probe includes adjusting an angle of the ultrasonic probe to maintain at least partial occlusion of fluid flow in the structure.
[0282] In another embodiment of the method, further generating one or more guidance indicators in accordance with the machine learning model of the junctional tourniquet and displaying the one or more guidance indicators in a user interface presented to a user, the user performing one or more of adjusting lateral actuation of the ultrasonic probe along a Z-axis of the ultrasonic probe to control the pressure applied to the structure and adjusting directional actuation of the ultrasonic probe along one or more of an x-axis and a y-axis of the ultrasonic probe to control a direction of the pressure applied to the structure in accordance with the one or more guidance indicators displayed in the user interface.
[0283] In another embodiment of the method, including displaying the one or more guidance indicators on a screen of the user interface.
[0284] In another embodiment of the method, the one or more guidance indicators include one or more of audio and visual indicators on the ultrasound probe or on a housing of the ultrasound probe or junctional tourniquet.
[0285] In another embodiment of the method, the audio indicators include voice prompts, beeps or alarms and the visual indicators include guidance lights or arrows generated by the controller in accordance with the machine learning model of the junctional tourniquet.
[0286] In another embodiment of the method, adjusting further includes adjusting one or more of the pressure and a direction of the ultrasound probe in applying pressure to the structure by a user in accordance with a user interface of the junctional tourniquet or autonomously by an autonomous module of the junctional tourniquet.
[0287] In another embodiment of the method, further determining that an occlusion status is below an occlusion threshold for at least a period of time and in response to said determining activating an alarm.
[0288] In another embodiment of the method, responsive to activating the alarm, adjusting one or more of the pressure and a direction of the pressure applied by the ultrasound probe to the structure by a user in accordance with a user interface of the junctional tourniquet or autonomously by an autonomous module of the junctional tourniquet.
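The two preceding embodiments describe alarm logic that fires only when the occlusion status stays below a threshold for at least a period of time, after which pressure or direction is adjusted. A minimal, non-limiting sketch of that dwell-time logic follows; the threshold value and period are placeholder assumptions.

```python
class OcclusionAlarm:
    """Activate an alarm when occlusion stays below threshold for period_s."""

    def __init__(self, threshold=0.9, period_s=2.0):
        self.threshold = threshold     # hypothetical occlusion threshold
        self.period_s = period_s       # hypothetical dwell period, seconds
        self.below_since = None

    def update(self, occlusion_score, now_s):
        """Return True when the alarm should activate."""
        if occlusion_score < self.threshold:
            if self.below_since is None:
                self.below_since = now_s     # start timing the excursion
            return (now_s - self.below_since) >= self.period_s
        self.below_since = None              # occlusion recovered; reset timer
        return False

alarm = OcclusionAlarm()
for t, score in enumerate([0.95, 0.8, 0.8, 0.8]):
    if alarm.update(score, float(t)):
        print(f"t={t}s: alarm - adjust probe pressure or direction")
```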
[0289] In another embodiment of the method, monitoring the occlusion status of the structure includes processing force readings and determining the occlusion status of the structure from the processed force readings.
[0290] In another embodiment of the method, receiving the force readings from one or more force sensors of the ultrasonic probe.
[0291] In another embodiment of the method, further receiving the plurality of images and in accordance with a machine learning model of the junctional tourniquet guiding the ultrasonic probe to the position proximal the location of the structure.
[0292] In another embodiment of the method, actuating further includes adjusting actuation of the ultrasonic probe in compressing the structure against a hard surface of the patient to at least partially occlude fluid flow in the structure.
[0293] In another embodiment of the method, adjusting actuation of the ultrasonic probe in compressing the structure against the hard surface of the patient to at least partially occlude fluid flow in the structure includes adjusting one or more of lateral actuation of the ultrasonic probe along a Z-axis of the ultrasonic probe to control pressure applied to the structure and directional actuation of the ultrasonic probe along one or more of an x-axis and
a y-axis of the ultrasonic probe to control a direction of actuation of the ultrasound probe to the structure.
[0294] In another embodiment of the method, further collecting the plurality of images of a wound of a patient by the ultrasonic probe.
[0295] In another embodiment of the method, further releasably securing the ultrasound probe to a mechanical attachment module following at least partially occluding fluid flow in the structure.
[0296] In one embodiment of a tissue phantom system, the system including an anatomical tissue phantom, having an arterial side and a venous side, formed of ultrasound compliant material and having one or more compressible structures within the ultrasound compliant material that accommodate fluid flow through the anatomical tissue phantom; a fluid reservoir housing ultrasonic compliant fluid; a pump configured to receive ultrasonic compliant fluid from the fluid reservoir and pump ultrasonic compliant fluid to the tissue phantom; a pressure sensor configured to receive ultrasonic compliant fluid from the pump, measure pressure of the received ultrasonic compliant fluid, and provide the ultrasonic compliant fluid to an arterial side of the tissue phantom, where flow of the ultrasonic compliant fluid through the pressure sensor, the pump and the fluid reservoir forms a fluid bypass loop of the system; a flow sensor coupled to the tissue phantom and the fluid reservoir, the flow sensor configured to measure the flow of the ultrasonic compliant fluid and provide the ultrasonic compliant fluid to the fluid reservoir; and a hard surface of the tissue phantom. The ultrasonic compliant fluid flows in a flow loop of the system, where ultrasonic compliant fluid pumped by the pump from the fluid reservoir is provided to the pressure sensor, the pump provides the pumped ultrasonic compliant fluid to the arterial side of the tissue phantom, the ultrasonic compliant fluid flows through the arterial side of the tissue phantom, is measured by the flow sensor at an output of the tissue phantom, and flows back to the fluid reservoir and where responsive to pressure on the ultrasound compliant material of the anatomical tissue phantom at a position that is proximal a location of a compressible structure of the one or more compressible structures, the compressible structure is compressed against the hard surface of the anatomical tissue phantom to at least partially occlude fluid flow of the ultrasonic compliant fluid in the compressible structure.
[0297] In another embodiment of the system, including a hydrostatic reservoir configured to provide hydrostatic fluid to the venous side of the tissue phantom.
[0298] In another embodiment of the system, the tissue phantom configured to provide ultrasonic compliant fluid to the flow sensor, the sensor configured to measure the flow of the ultrasonic compliant fluid and to provide the ultrasonic compliant fluid to the fluid reservoir.
[0299] In another embodiment of the system, the structure is representative of a blood vessel, artery, vein, nerve, bone, or other physiological pressure point.
[0300] In another embodiment of the system, the ultrasound compliant material of the tissue phantom is one or more of a synthetic gelatin, a ballistic gelatin, a ballistic hydrogel, and a clear ballistic gelatin.
[0301] In another embodiment of the system, the anatomical tissue phantom is one or more of a femoral, a subclavian and an aortic tissue phantom and the one or more compressible structures are representative of one or more of vessels, arteries, veins, nerves, and bones.
[0302] In another embodiment of the system, the one or more compressible structures are compressible tubing.
[0303] In one embodiment of a computer-implemented method for training a machine learning model in image classification and object detection of ultrasound generated images, the method includes: analyzing a database of ultrasound imaging and flow data points representative of one or more compressible structures of an anatomical structure subjected to a plurality of levels of flow of ultrasonic compliant fluid therethrough, including occlusion of the one or more compressible structures, the database of ultrasound imaging including a plurality of ultrasound images; sorting each ultrasound image of the plurality of ultrasound images of the database into a plurality of classification categories based on a measured distal pressure of an ultrasound image, the measured distal pressure a measure of flow of the ultrasonic compliant fluid through the compressible structure of the ultrasound image; processing the sorted plurality of classification categories of the plurality of ultrasound images into processed classification categories; and training a machine learning model on a training dataset of the processed classification categories to generate a trained machine learning model, including providing the machine learning model with an image input layer of the training dataset and generating an output layer with the two or more classification categories.
[0304] In another embodiment of the method, each of the plurality of ultrasound images is sorted into classification categories of full flow or full occlusion of ultrasonic compliant fluid flow through a compressible structure of the ultrasound image.
[0305] In another embodiment of the method, the full flow classification category and the full occlusion classification category are separated by a percent reduction of measured distal pressure.
[0306] In another embodiment of the method, the full flow classification category and the full occlusion classification category are separated by a range of 50 to 90% reduction of measured distal pressure.
[0307] In another embodiment of the method, each of the plurality of ultrasound images is further sorted into classification categories of full flow, partial occlusion or full occlusion of ultrasonic compliant fluid flow through a compressible structure of the ultrasound image.
[0308] In another embodiment of the method, a full flow classification category is characterized as unobstructed flow to a 10% reduction in measured distal pressure, a partial occlusion classification category is a range of approximately 50 to 90% reduction in measured distal pressure, and a full occlusion classification category is approximately 90% or more reduction in measured distal pressure.
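The category boundaries recited above can be expressed directly as a sorting rule on the percent reduction in measured distal pressure. In the illustrative Python sketch below, images whose reduction falls between roughly 10% and 50% match none of the recited categories and are left unsorted; that handling is an assumption, since the disclosure does not describe that band.

```python
def flow_category(baseline_pressure, measured_distal_pressure):
    """Sort an ultrasound image by percent reduction in measured distal pressure."""
    reduction = 100.0 * (1.0 - measured_distal_pressure / baseline_pressure)
    if reduction <= 10.0:
        return "full_flow"            # unobstructed flow up to ~10% reduction
    if 50.0 <= reduction < 90.0:
        return "partial_occlusion"    # approximately 50-90% reduction
    if reduction >= 90.0:
        return "full_occlusion"       # approximately 90% or more reduction
    return None                       # 10-50%: outside the described categories

print(flow_category(100.0, 95.0))    # full_flow
print(flow_category(100.0, 30.0))    # partial_occlusion
print(flow_category(100.0, 5.0))     # full_occlusion
```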
[0309] In another embodiment of the method, processing further including processing the plurality of ultrasound images sorted into classification categories by cropping to remove ultrasound image information, resizing the cropped plurality of ultrasound images, and converting the cropped and resized ultrasound images to grey scale images.
[0310] In another embodiment of the method, resizing to 512 x 512 x 3.
[0311] In another embodiment of the method, processing further including processing the ultrasound images sorted into classification categories by cropping to remove ultrasound image information and then converting the cropped ultrasound images to grey scale images.
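A minimal sketch of the crop, resize and grey-scale processing described in the preceding paragraphs, using Pillow and NumPy. The crop box that removes on-screen ultrasound annotations is a hypothetical region, and the single grey channel is replicated so the result matches the 512 x 512 x 3 size recited above; both choices are assumptions.

```python
import numpy as np
from PIL import Image

def preprocess(path, crop_box=(100, 80, 740, 560), size=(512, 512)):
    """Crop away on-screen annotations, resize, and convert to grey scale.

    crop_box is (left, upper, right, lower) in pixels and is a placeholder;
    a real pipeline would locate the sonographic region of each display.
    """
    img = Image.open(path).crop(crop_box).convert("L").resize(size)
    grey = np.asarray(img, dtype=np.float32) / 255.0     # 512 x 512, in [0, 1]
    return np.repeat(grey[..., None], 3, axis=-1)        # 512 x 512 x 3

# Example: x = preprocess("frame_0001.png"); x.shape == (512, 512, 3)
```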
[0312] In another embodiment of the method, further splitting the processed plurality of classification categories into at least the training dataset and a testing dataset.
[0313] In another embodiment of the method, further splitting the processed plurality of classification categories into the training dataset, the testing dataset, and a validation dataset.
[0314] In another embodiment of the method, further validating the trained machine learning model on the validation dataset.
[0315] In another embodiment of the method, further randomly augmenting the testing dataset with affine transformations including one or more of reflection in the x-axis, reflection in the y-axis, scaling and rotation.
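One way to realize the dataset split and the random affine augmentation recited above (reflection in the x-axis, reflection in the y-axis, scaling and rotation) is sketched below with NumPy and SciPy; the split fractions, scale range, angle range and axis conventions are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(seed=42)

def split(items, train=0.7, test=0.15):
    """Shuffle and split items into training, testing and validation subsets."""
    idx = rng.permutation(len(items))
    n_tr, n_te = int(train * len(items)), int(test * len(items))
    pick = lambda sl: [items[i] for i in sl]
    return pick(idx[:n_tr]), pick(idx[n_tr:n_tr + n_te]), pick(idx[n_tr + n_te:])

def random_affine(img):
    """Randomly reflect, scale and rotate one grey-scale image (H x W array)."""
    if rng.random() < 0.5:
        img = img[:, ::-1]                    # reflection in the x-axis
    if rng.random() < 0.5:
        img = img[::-1, :]                    # reflection in the y-axis
    s = rng.uniform(0.9, 1.1)                 # scaling about the image center
    c = (np.array(img.shape) - 1) / 2.0
    mat = np.eye(2) / s
    img = ndimage.affine_transform(img, mat, offset=c - mat @ c, mode="nearest")
    angle = rng.uniform(-15.0, 15.0)          # rotation in degrees
    return ndimage.rotate(img, angle, reshape=False, mode="nearest")
```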
[0316] In another embodiment of the method, for each image of the testing dataset, determining predictions and calculations for the machine learning model.
[0317] In another embodiment of the method, for a two classification category machine learning model, positive predictions are full occlusion images and negative predictions are full flow images of the testing dataset.
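For the two-category model, treating full-occlusion images as positives and full-flow images as negatives yields the usual confusion-matrix tallies from which accuracy, sensitivity and specificity follow; a small illustrative sketch:

```python
def confusion(y_true, y_pred, positive="full_occlusion"):
    """Tally predictions with full occlusion as positive, full flow as negative."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / max(len(y_true), 1)
    sensitivity = tp / max(tp + fn, 1)   # recall on fully occluded images
    specificity = tn / max(tn + fp, 1)   # recall on full-flow images
    return dict(tp=tp, tn=tn, fp=fp, fn=fn, accuracy=accuracy,
                sensitivity=sensitivity, specificity=specificity)
```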
[0318] In another embodiment of the method, for a portion of the images of the testing dataset, creating a gradient-weighted class activation mapping (Grad-CAM) overlay, generating an approximate localization heat map from the Grad-CAM overlay, and using the localization heat map in identifying representative images useful to improve predictions by the machine learning model.
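Grad-CAM is a published technique (Selvaraju et al.) with a standard recipe: take the gradient of the predicted class score with respect to the last convolutional feature map, average the gradient per channel, and use those averages to weight the feature map. A minimal Keras sketch follows, assuming a model whose last convolutional layer is named "last_conv"; the layer name and the surrounding model are assumptions, not details from the disclosure.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

def grad_cam(model, image, last_conv_layer_name="last_conv"):
    """Return a normalized localization heat map for the predicted class."""
    grad_model = keras.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        class_score = preds[:, int(tf.argmax(preds[0]))]
    grads = tape.gradient(class_score, conv_out)      # d(score)/d(feature map)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))   # per-channel average
    heatmap = tf.reduce_sum(conv_out[0] * weights, axis=-1)
    heatmap = tf.maximum(heatmap, 0.0)                # keep positive influence
    return (heatmap / (tf.reduce_max(heatmap) + 1e-8)).numpy()
```

Upsampling the heat map to the image size and overlaying it highlights the regions the classifier relied on, which is one way representative images could be identified.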
[0319] In another embodiment of the method, further determining an occlusion threshold to distinguish between full occlusion and full flow images in the testing dataset.
[0320] In another embodiment of the method, training the machine learning model at a learn rate of 0.001 and using a batch size of between 18 and 32 ultrasound images of the plurality of ultrasound images.
[0321] In another embodiment of the method, further providing the machine learning model with a convolution layer with a rectified linear unit (ReLU) activation layer and a max pooling layer.
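A hedged sketch of a classifier built from the recited convolution / ReLU / max-pooling composition, compiled at a learn rate of 0.001 and trained with a batch size in the 18 to 32 range; the filter counts and network depth are illustrative choices, not values from the disclosure.

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(512, 512, 3)),          # processed ultrasound images
    layers.Conv2D(16, 3, activation="relu"),   # convolution + ReLU activation
    layers.MaxPooling2D(),                     # max pooling layer
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),
    layers.Dense(2, activation="softmax"),     # full flow vs full occlusion
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_x, train_y, batch_size=32, validation_data=(val_x, val_y))
```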
[0322] In another embodiment of the method, the structure is a blood vessel, artery, vein, nerve, bone, or other physiological pressure point of the patient.
[0323] In another embodiment of the method, the anatomical structure is a biological structure.
[0324] In another embodiment of the method, the anatomical structure is an anatomical tissue phantom with the plurality of compressible structures and the measured distal pressure is a measure of flow of the ultrasonic compliant fluid through the compressible structure distal to the anatomical tissue phantom, the method further collecting the database
of ultrasound imaging and flow data points using an ultrasound probe actuated against the plurality of compressible structures that accommodate flow of ultrasonic compliant fluid therethrough, where the ultrasound probe performs collecting the database from a plurality of angles, placements and pressures actuated by the ultrasound probe against the plurality of compressible structures against a hard surface of the anatomical tissue phantom.
[0325] In another embodiment of the method, the anatomical tissue phantom has an arterial side and a venous side in a system having the anatomical tissue phantom, a pump, a fluid reservoir, a pressure sensor, and a flow sensor, where in a flow loop of the system the ultrasonic compliant fluid pumped by the pump from the fluid reservoir is provided to the pressure sensor, the pump provides the pumped ultrasonic compliant fluid to the arterial side of the tissue phantom, the ultrasonic compliant fluid flows through the arterial side of the tissue phantom, is measured by the flow sensor at an output of the tissue phantom, and flows back to the fluid reservoir and where responsive to pressure by the ultrasound probe on the ultrasound compliant material of the anatomical tissue phantom at a position that is proximal a location of a compressible structure of the one or more compressible structures, the compressible structure is compressed against the hard surface of the anatomical tissue phantom to at least partially occlude fluid flow of the ultrasonic compliant fluid in the compressible structure, the method further including collecting the database of ultrasound imaging and flow data points upon actuation of the ultrasound probe against the tissue phantom.
[0326] In another embodiment of the method, training the machine learning model to generate the trained machine learning model further includes for each ultrasound image of the plurality of ultrasound images in the image input layer: providing a plurality of bounding boxes for the one or more compressible structures and labeling the one or more compressible structures in each of the bounding boxes.
[0327] In another embodiment of the method, further predicting the plurality of bounding boxes for the one or more compressible structures to generate a plurality of predicted bounding boxes.
[0328] In another embodiment of the method, further generating a bounding box prediction output layer with the plurality of predicted bounding boxes.
[0329] In another embodiment of the method, the output layer and the bounding box prediction output layer are both convolutional layers of the machine learning model.
[0330] In another embodiment of the method, the output layer and the bounding box prediction output layer both include a convolution layer, a rectified linear unit (ReLU) activation layer, and a max pooling layer.
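The dual output layers described in the last few paragraphs suggest a single backbone with two heads: one emitting the classification categories and one emitting the bounding-box predictions. In the illustrative Keras sketch below, each head is a small convolution / ReLU / max-pooling stack followed by pooling and a final projection, loosely mirroring the recited composition; all layer sizes, the normalized (x, y, w, h) box encoding, and the loss choices are assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(512, 512, 3))
x = layers.Conv2D(16, 3, activation="relu")(inputs)   # shared backbone
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.MaxPooling2D()(x)

# Classification head: convolution + ReLU + max pooling, then class scores.
c = layers.Conv2D(32, 3, activation="relu")(x)
c = layers.MaxPooling2D()(c)
c = layers.GlobalAveragePooling2D()(c)
class_out = layers.Dense(2, activation="softmax", name="occlusion_class")(c)

# Bounding-box head: same composition, then normalized (x, y, w, h).
b = layers.Conv2D(32, 3, activation="relu")(x)
b = layers.MaxPooling2D()(b)
b = layers.GlobalAveragePooling2D()(b)
bbox_out = layers.Dense(4, activation="sigmoid", name="bbox")(b)

model = keras.Model(inputs, [class_out, bbox_out])
model.compile(optimizer="adam",
              loss={"occlusion_class": "sparse_categorical_crossentropy",
                    "bbox": "mse"})
```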
[0331] While implementations of the disclosure are susceptible to embodiment in many different forms, there is shown in the drawings and will herein be described in detail specific embodiments, with the understanding that the present disclosure is to be considered as an example of the principles of the disclosure and not intended to limit the disclosure to the specific embodiments shown and described. In the description above, like reference numerals may be used to describe the same, similar or corresponding parts in the several views of the drawings.
[0332] In this document, relational terms such as first and second, top and bottom, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms “comprises,” “comprising,” “includes,” “including,” “has,” “having,” or any other variations thereof, are intended to cover a nonexclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by “comprises . . . a” does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises the element.
[0333] Reference throughout this document to “one embodiment,” “certain embodiments,” “an embodiment,” “implementation(s),” “aspect(s),” or similar terms means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of such phrases in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments without limitation.
[0334] The term “or” as used herein is to be interpreted as an inclusive or meaning any one or any combination. Therefore, “A, B or C” means “any of the following: A; B; C; A and B; A and C; B and C; A, B and C.” An exception to this definition will occur only when a combination of elements, functions, steps or acts are in some way inherently mutually
exclusive. Also, grammatical conjunctions are intended to express any and all disjunctive and conjunctive combinations of conjoined clauses, sentences, words, and the like, unless otherwise stated or clear from the context. Thus, the term “or” should generally be understood to mean “and/or” and so forth. References to items in the singular should be understood to include items in the plural, and vice versa, unless explicitly stated otherwise or clear from the text.
[0335] Recitation of ranges of values herein are not intended to be limiting, referring instead individually to any and all values falling within the range, unless otherwise indicated, and each separate value within such a range is incorporated into the specification as if it were individually recited herein. The words “about,” “approximately,” or the like, when accompanying a numerical value, are to be construed as indicating a deviation as would be appreciated by one of ordinary skill in the art to operate satisfactorily for an intended purpose. Ranges of values and/or numeric values are provided herein as examples only, and do not constitute a limitation on the scope of the described embodiments. The use of any and all examples, or exemplary language (“e.g.,” “such as,” “for example,” or the like) provided herein, is intended merely to better illuminate the embodiments and does not pose a limitation on the scope of the embodiments. No language in the specification should be construed as indicating any unclaimed element as essential to the practice of the embodiments.
[0336] For simplicity and clarity of illustration, reference numerals may be repeated among the figures to indicate corresponding or analogous elements. Numerous details are set forth to provide an understanding of the embodiments described herein. The embodiments may be practiced without these details. In other instances, well-known methods, procedures, and components have not been described in detail to avoid obscuring the embodiments described. The description is not to be considered as limited to the scope of the embodiments described herein.
[0337] In the following description, it is understood that terms such as “first,” “second,” “top,” “bottom,” “up,” “down,” “above,” “below,” and the like, are words of convenience and are not to be construed as limiting terms. Also, the terms apparatus, device, system, etc. may be used interchangeably in this text.
[0338] The many features and advantages of the disclosure are apparent from the detailed specification, and, thus, it is intended by the appended claims to cover all such features and advantages of the disclosure which fall within the scope of the disclosure.
Further, since numerous modifications and variations will readily occur to those skilled in the art, it is not desired to limit the disclosure to the exact construction and operation illustrated and described, and, accordingly, all suitable modifications and equivalents may be resorted to that fall within the scope of the disclosure.
Claims
1. An ultrasound junctional tourniquet, comprising: a controller; an ultrasonic probe portion of the ultrasonic junctional tourniquet configured to acquire a plurality of images of a wound of a patient using ultrasound Doppler; and a tourniquet portion of the ultrasonic junctional tourniquet, where the controller determines from the plurality of images a location of a compressible structure of a patient that is proximal to the wound and controls the tourniquet portion to apply pressure at the location of the structure to occlude the structure and reduce blood flow to the wound.
2. The tourniquet of claim 1, where the plurality of images are sonographic images.
3. The tourniquet of claim 1, where the structure is a blood vessel, artery, vein, nerve, bone, or other physiological pressure point of the patient.
4. The tourniquet of claim 1, where the tourniquet portion includes the ultrasonic probe portion and the controller controls the ultrasonic probe portion to apply pressure at the location of the structure to occlude the structure and reduce blood flow to the wound.
5. The tourniquet of claim 4, where the controller is configured to predict the location of the structure from a plurality of ultrasonic images acquired by the ultrasonic probe portion of the ultrasonic junctional tourniquet.
6. The tourniquet of claim 1, where the controller is configured to predict the location of the structure from a plurality of ultrasonic images acquired by the ultrasonic probe portion of the ultrasonic junctional tourniquet.
7. The tourniquet of claim 6, where the controller uses machine learning to predict the location of the structure from the plurality of ultrasonic images acquired by the ultrasonic probe portion of the ultrasonic junctional tourniquet.
8. The tourniquet of claim 6, where the tourniquet portion of the ultrasonic junctional tourniquet includes the ultrasonic probe portion and the controller controls the ultrasonic probe portion to apply pressure to the structure at the predicted location to occlude the structure and reduce blood flow to the wound.
9. The tourniquet of claim 1, where the controller is configured to determine the location of the structure using machine learning on the plurality of images.
10. The tourniquet of claim 9, where the controller is integrated with the ultrasonic probe portion and uses machine learning to analyze the plurality of images to determine the location of the structure.
11. The tourniquet of claim 9, where the controller is configured to monitor the plurality of images as acquired and updates the determined location of the structure of the patient using machine learning.
12. The tourniquet of claim 1, where the controller is configured to guide the tourniquet portion of the ultrasonic junctional tourniquet to the location of the structure and controls the tourniquet portion to compress the structure at the location against a hard surface of the patient that is proximal the location of the structure.
13. The tourniquet of claim 12, where the hard surface is a bone of the patient proximal the location.
14. The tourniquet of claim 12, where the controller uses machine learning in processing the plurality of images to guide the tourniquet portion to the location of the structure and to control the tourniquet portion to compress the structure at the location.
15. The tourniquet of claim 12, where the controller is configured to: determine the location of the structure as an x,y location in a cartesian coordinate system; determine an angle of the structure within the cartesian coordinate system; and
control the tourniquet portion to apply pressure to the structure at the determined x,y location and the determined angle.
16. The tourniquet of claim 15, the ultrasound probe portion is a removable portion of the ultrasonic junctional tourniquet and configured to be removed from the ultrasonic junctional tourniquet after the tourniquet portion is controlled to apply pressure to the structure at the determined x,y location and the determined angle.
17. The tourniquet of claim 1, the ultrasound probe portion is a removable portion of the ultrasonic junctional tourniquet configured to be removed from the ultrasonic junctional tourniquet after the tourniquet portion is controlled to apply pressure to the structure at the determined location.
18. A method of treating a patient with a wound, comprising: acquiring a plurality of sonographic images of a wound of a patient; determining from the plurality of images a location of a compressible structure of a patient that is proximal to the wound; and guiding an ultrasonic junctional tourniquet in applying pressure at the location of the structure to occlude the structure and reduce blood flow to the wound.
19. The method of claim 18, an ultrasonic probe of the ultrasonic junctional tourniquet acquiring the plurality of images.
20. The method of claim 18, where the plurality of images are sonographic images.
21. The method of claim 18, where the structure is a blood vessel, artery, vein, nerve, bone, or other physiological pressure point of the patient.
22. The method of claim 18, further comprising predicting the location of the structure of the patient that is proximal to the wound from a plurality of ultrasonic images acquired by an ultrasonic probe of the ultrasonic junctional tourniquet.
23. The method of claim 22, the ultrasonic probe applying pressure at the predicted location of the structure to occlude the structure and reduce blood flow to the wound.
24. The method of claim 18, an ultrasonic probe of the ultrasonic junctional tourniquet applying pressure at the location of the structure to occlude the structure and reduce blood flow to the wound.
25. The method of claim 24, further comprising predicting the location of the structure of the patient that is proximal to the wound from a plurality of ultrasonic images acquired by the ultrasonic probe of the ultrasonic junctional tourniquet.
26. The method of claim 18, further comprising using machine learning to determine from the plurality of images the location of the structure.
27. The method of claim 26, further comprising monitoring the plurality of images as acquired and updating the determined location of the structure of the patient using machine learning.
28. The method of claim 18, where said applying pressure at the location of the structure further includes guiding a tourniquet of the ultrasonic junctional tourniquet to the location of the structure and the tourniquet compressing at the location to press the structure against a hard surface of the patient that is proximal the location of the structure.
29. The method of claim 28, further comprising after the tourniquet is guided to the location and compresses at the location: removing an ultrasound probe of the ultrasonic junctional tourniquet that provided the plurality of images used in said determining the location of the structure.
30. The method of claim 28, where the hard surface is a bone of the patient proximal the location.
31. The method of claim 28, where guiding the tourniquet further comprises:
determining the location of the structure as an x,y location in a cartesian coordinate system; determining an angle of the structure within the cartesian coordinate system; and applying the tourniquet to the structure at the determined x,y location and the determined angle.
32. The method of claim 31, further comprising: removing an ultrasound probe of the ultrasonic junctional tourniquet that provided the plurality of images used in said determining the location of the structure.
33. A junctional tourniquet, including: an ultrasonic probe configured to collect a plurality of images of a wound of a patient, the plurality of images being sonographic images; and a controller of the junctional tourniquet, coupled to the ultrasonic probe, configured to receive the plurality of images and in accordance with analysis of the plurality of images by a machine learning model of the junctional tourniquet the controller is configured to: guide movement of the ultrasonic probe to a position that is proximal a location of a structure of a plurality of compressible structures of the wound in accordance with analysis of the plurality of images performed by the machine learning model; and actuate the ultrasonic probe at the position to apply pressure to the structure of the wound and compress the structure against a hard surface of the patient to at least partially occlude fluid flow in the structure in accordance with analysis of the plurality of images performed by the machine learning model.
34. The junctional tourniquet of claim 33, where the controller is configured to guide the ultrasonic probe to the position proximal the location of the structure in accordance with object detection performed by the machine learning model, the object detection performed by
the machine learning model includes determining the location of the structure and the controller configured to guide movement of the ultrasonic probe to the position proximal the location of the structure.
35. The junctional tourniquet of claim 34, where the ultrasonic probe is guided to the position that is proximal to the location of the structure in accordance with one or more of a user interface and an autonomous guidance module of the junctional tourniquet, the user interface and the autonomous guidance module controlled by the controller.
36. The junctional tourniquet of claim 35, where the autonomous guidance module is an autonomous motor assembly of the junctional tourniquet controlled by the controller.
37. The junctional tourniquet of claim 33, where the controller is configured to guide movement including an angle of the ultrasonic probe with respect to the structure.
38. The junctional tourniquet of claim 33, where the controller is configured to actuate the ultrasonic probe in accordance with image classification that differentiates between an occluded status and a non-occluded status of the structure.
39. The junctional tourniquet of claim 38, further comprising a linear actuator coupled to the controller configured to linearly actuate the ultrasonic probe as controlled by the controller.
40. The junctional tourniquet of claim 39, where the linear actuator is an autonomous motor assembly that actuates the ultrasonic probe in a z-axis of the junctional tourniquet.
41. The junctional tourniquet of claim 40, where the linear actuator is a motor or a pump assembly.
42. The junctional tourniquet of claim 33, the controller of the junctional tourniquet configured to guide movement of the ultrasonic probe to the position and actuate the ultrasonic probe at the position in real time responsive to analysis of the plurality of images by the machine learning model.
43. The junctional tourniquet of claim 33, the junctional tourniquet having a user interface controlled by the controller, where the ultrasonic probe is guided to the position that is proximal the location of the structure of the wound by a user in accordance with a user interface of the junctional tourniquet.
44. The junctional tourniquet of claim 43, where the user interface includes one or more guidance indicators generated by the controller in accordance with analysis of the plurality of images performed by the machine learning model.
45. The junctional tourniquet of claim 44, where the user interface includes a screen that displays the one or more guidance indicators.
46. The junctional tourniquet of claim 45, where the screen is coupled to the ultrasound probe.
47. The junctional tourniquet of claim 44, where the one or more guidance indicators include one or more of audio and visual indicators on the ultrasound probe or on a housing of the ultrasound probe or junctional tourniquet.
48. The junctional tourniquet of claim 47, where audio indicators include voice prompts, beeps or alarms and where the visual indicators include guidance lights or arrows generated by the controller in accordance with the machine learning model of the junctional tourniquet.
49. The junctional tourniquet of claim 44, where the one or more guidance indicators of the user interface are displayed on the ultrasound probe.
50. The junctional tourniquet of claim 44, where the one or more guidance indicators of the user interface prompt the user to: move the ultrasonic probe to center the structure in a sonographic image; and adjust an angle of the ultrasonic probe to center the hard surface under the structure in a sonographic image.
51. The junctional tourniquet of claim 50, where a screen of the user interface displays the one or more guidance indicators.
52. The junctional tourniquet of claim 33, where the controller is configured to laterally actuate the ultrasonic probe at the position to apply pressure to the structure.
53. The junctional tourniquet of claim 52, where the controller is configured to control a linear actuator to laterally actuate the ultrasonic probe along a Z-axis of the ultrasonic probe.
54. The junctional tourniquet of claim 53, where the linear actuator is an autonomous motor assembly that actuates the ultrasonic probe in a z-axis of the junctional tourniquet.
55. The junctional tourniquet of claim 54, where the linear actuator is a motor or a pump assembly.
56. The junctional tourniquet of claim 33, the controller further configured to monitor the position of the ultrasonic probe proximal the location of the structure and an occlusion status of the structure.
57. The junctional tourniquet of claim 56, where the machine learning model continuously monitors the position of the ultrasonic probe and the occlusion status of the structure.
58. The junctional tourniquet of claim 56, where one or more of the pressure applied to the structure by the ultrasonic probe and the direction of the pressure applied to the structure is adjusted to maintain at least partial occlusion of fluid flow in the structure.
59. The junctional tourniquet of claim 58, where the controller is configured to guide movement including an angle of the ultrasonic probe with respect to the structure to maintain at least partial occlusion of fluid flow in the structure.
60. The junctional tourniquet of claim 58, the controller further configured to generate an alarm when the ultrasonic probe does not maintain at least partial occlusion of fluid flow in the structure.
61. The junctional tourniquet of claim 60, where the alarm is conveyed to a user of the junctional tourniquet via a user interface of the junctional tourniquet.
62. The junctional tourniquet of claim 58, where lateral actuation of the ultrasonic probe along a Z-axis of the ultrasonic probe controls the pressure applied to the structure and where directional actuation of the ultrasonic probe along one or more of an x-axis and a y-axis of the ultrasonic probe controls a direction of the pressure applied to the structure to maintain at least partial occlusion of fluid flow in the structure.
63. The junctional tourniquet of claim 62, the controller is configured to control an autonomous motor assembly to actuate one or more of lateral actuation and directional actuation of the ultrasonic probe.
64. The junctional tourniquet of claim 62, the junctional tourniquet having a user interface controlled by a user that includes one or more guidance indicators configured to guide one or more of lateral actuation of the ultrasonic probe along a Z-axis of the ultrasonic probe to control the pressure applied to the structure and directional actuation of the ultrasonic probe along one or more of an x-axis and a y-axis of the ultrasonic probe to control a direction of the pressure applied to the structure.
65. The junctional tourniquet of claim 64, where the user interface includes a screen that displays the one or more guidance indicators.
66. The junctional tourniquet of claim 64, where the one or more guidance indicators include one or more of audio and visual indicators on the ultrasound probe or on a housing of the ultrasound probe or junctional tourniquet.
67. The junctional tourniquet of claim 66, where audio indicators include voice prompts, beeps or alarms and where the visual indicators include guidance lights or arrows generated by the controller in accordance with the machine learning model of the junctional tourniquet.
68. The junctional tourniquet of claim 64, where directional actuation includes an angle of the ultrasonic probe.
69. The junctional tourniquet of claim 57, the controller configured to monitor the occlusion status of the structure responsive to force readings provided by one or more force sensors.
70. The junctional tourniquet of claim 59, where the ultrasound probe includes the one or more force sensors.
71. The junctional tourniquet of claim 33, where one or more of the pressure and a direction of the ultrasound probe in applying pressure to the structure is adjusted by a user in accordance with one or more of a user interface of the junctional tourniquet configured to convey adjustment instructions to the user of the junctional tourniquet and an autonomous module of the junctional tourniquet, the user interface and the autonomous module controlled by the controller.
72. The junctional tourniquet of claim 71, where the controller is configured to laterally actuate the ultrasonic probe along a Z-axis of the ultrasonic probe to adjust the pressure applied by the ultrasound probe to the structure.
73. The junctional tourniquet of claim 72, where the controller controls a motor or pump assembly to laterally actuate the ultrasonic probe along a Z-axis of the ultrasonic probe to adjust the pressure applied by the ultrasound probe to the structure.
74. The junctional tourniquet of claim 33, further comprising one or more force sensors that sense force, where force readings received from the force sensors are processed by the controller to determine an occlusion status of the structure.
75. The junctional tourniquet of claim 33, further comprising a mechanical attachment module coupled to the controller and configured to removably attach the junctional tourniquet to the patient, the mechanical attachment module having a linear actuator.
76. The junctional tourniquet of claim 75, where the linear actuator is an autonomous motor assembly configured to laterally actuate the ultrasonic probe.
77. The junctional tourniquet of claim 75, where the mechanical attachment module is configured to releasably secure the ultrasound probe after the ultrasound probe at least partially occludes fluid flow in the structure.
78. The junctional tourniquet of claim 75, where the mechanical attachment module includes a base coupled to one or more of a frame and one or more straps, the base configured to be placed under the wound of the patient and the frame and the one or more straps configured to releasably secure the ultrasound probe after the ultrasound probe at least partially occludes fluid flow in the structure.
79. The junctional tourniquet of claim 78, where the frame is a rigid frame.
80. The junctional tourniquet of claim 78, the mechanical attachment module including one or more rotating collars coupled to the one or more straps.
81. The junctional tourniquet of claim 33, where the ultrasound probe is coupled to the controller via a wired or a wireless connection.
82. A controller-implemented method for using a junctional tourniquet, comprising: acquiring a plurality of sonographic images of a wound of a patient having one or more compressible structures, the plurality of sonographic images acquired by ultrasound; a trained machine learning model analyzing the plurality of sonographic images to generate a prediction of a location of one or more structures of the one or more compressible structures and one or more of a lateral actuation and a directional actuation of an ultrasonic probe of a junctional tourniquet needed to maintain at least partial occlusion of fluid flow in the one or more structures;
guiding movement of the ultrasonic probe in accordance with the predicted location of one or more structures of the one or more compressible structures; and actuating the ultrasonic probe at the position in accordance with one or more of the lateral actuation and the directional actuation to apply pressure to the one or more structures of the wound and compress the one or more structures against a hard surface of the patient, at least partially occluding fluid flow in the structure.
83. The method of claim 82, said acquiring in real time the plurality of sonographic images.
84. The method of claim 83, where lateral actuation of the ultrasonic probe along a Z-axis of the ultrasonic probe controls the pressure applied to the structure and where directional actuation of the ultrasonic probe along one or more of an x-axis and a y-axis of the ultrasonic probe controls a direction of the pressure applied to the structure to maintain at least partial occlusion of fluid flow in the structure.
85. The method of claim 82, further comprising communicating the prediction of the location and the one or more of the lateral actuation and the directional actuation.
86. The method of claim 85, said communicating including one or more of creating an audio message of the prediction of the location, the lateral actuation and the directional actuation and displaying the location, the lateral actuation and the directional actuation in a user interface.
87. A controller-implemented method for using a junctional tourniquet, comprising: guiding movement of an ultrasonic probe of the junctional tourniquet to a position that is proximal a location of a structure of a plurality of compressible structures of a wound of a
patient in accordance with analysis of a plurality of images performed by a machine learning model of the junctional tourniquet; and actuating the ultrasonic probe at the position to apply pressure to the structure of the wound and compress the structure against a hard surface of the patient, at least partially occluding fluid flow in the structure in accordance with analysis of the plurality of images performed by the machine learning model.
88. The method of claim 87, where said actuating further comprises actuating the ultrasonic probe in accordance with image classification that differentiates between an occluded status and a non-occluded status of the structure.
89. The method of claim 88, further comprising laterally actuating the ultrasonic probe along a Z-axis of the ultrasonic probe to adjust the pressure applied by the ultrasound probe to the structure.
90. The method of claim 89, where a linear actuator is an autonomous motor assembly that performs said laterally actuating the ultrasonic probe.
91. The method of claim 90, where the linear actuator is a motor or a pump assembly.
92. The method of claim 87, where said guiding further comprises: the machine learning model performing object detection of the plurality of images to identify the location of the structure; and guiding movement of the ultrasonic probe to the position proximal the location of the structure.
93. The method of claim 87, where said guiding further comprises one or more of: a user guiding movement of the ultrasonic probe to the position proximal the location of the structure using a user interface of the junctional tourniquet; and
an autonomous guidance module of the junctional tourniquet performing said guiding movement of the ultrasonic probe to the position proximal the structure.
94. The method of claim 93, further comprising generating one or more guidance indicators in accordance with the machine learning model of the junctional tourniquet and displaying the one or more guidance indicators in the user interface, the user guiding movement of the ultrasonic probe to the position proximal the location of the structure in accordance with the one or more guidance indicators displayed in the user interface.
95. The method of claim 94, the machine learning model performing object detection of the plurality of images to guide a user including displaying the one or more guidance indicators in the user interface prompting the user to: move the ultrasonic probe to center the structure in a sonographic image; and adjust an angle of the ultrasonic probe to center the hard surface under the structure in a sonographic image.
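As a hedged sketch of how the two prompts of claim 95 could be derived from object-detection output, assuming boxes in (x_min, y_min, x_max, y_max) pixel format and a screen left/right convention; `guidance_prompts` is a hypothetical name, not the disclosure's:

```python
def guidance_prompts(structure_box, surface_box, image_w, tol=0.05):
    """Derive user prompts from detected boxes; box = (x_min, y_min, x_max, y_max)."""
    def cx(box):
        return (box[0] + box[2]) / 2.0
    prompts = []
    # First prompt: move the probe to center the structure in the image.
    dx = cx(structure_box) / image_w - 0.5
    if abs(dx) > tol:
        prompts.append("move probe " + ("left" if dx > 0 else "right"))
    # Second prompt: adjust the angle so the hard surface sits under the structure.
    skew = (cx(surface_box) - cx(structure_box)) / image_w
    if abs(skew) > tol:
        prompts.append("tilt probe " + ("left" if skew > 0 else "right"))
    return prompts or ["hold position"]
```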
96. The method of claim 95, including a screen of the user interface displaying the one or more guidance indicators.
97. The method of claim 94, including displaying the one or more guidance indicators on a screen of the user interface.
98. The method of claim 94, where the one or more guidance indicators include one or more of audio and visual indicators on the ultrasound probe or on a housing of the ultrasound probe or junctional tourniquet.
99. The method of claim 98, where the audio indicators include voice prompts, beeps or alarms and the visual indicators include guidance lights or arrows.
100. The method of claim 87, said guiding movement of the ultrasound probe to the position and actuating the ultrasonic probe at the position occurring in real time responsive to analysis of the plurality of images by the machine learning model.
101. The method of claim 87, further comprising monitoring the position of the ultrasonic probe proximal the location of the structure and an occlusion status of the structure.
102. The method of claim 101, said monitoring the position of the ultrasound probe and the occlusion status of the structure occurring in real time responsive to analysis of the plurality of images by the machine learning model.
103. The method of claim 101, where said monitoring includes the machine learning model continuously monitoring the position of the ultrasonic probe proximal the location of the structure and the occlusion status of the structure; and adjusting one or more of the pressure applied to the structure by the ultrasonic probe and a direction of the pressure applied by the ultrasound probe to the structure to maintain at least partial occlusion of fluid flow in the structure.
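A minimal sketch, assuming a classifier that returns an occlusion probability, of the continuous monitor-and-adjust behavior of claim 103; the proportional-correction scheme and all names are assumptions rather than the claimed control law:

```python
def monitor_and_adjust(probe, model, target=0.90, gain=0.5, step_mm=0.2):
    """Continuously track occlusion status and nudge Z-axis pressure to hold it."""
    while True:
        frame = probe.acquire_image()
        p = model.occlusion_probability(frame)  # image-classification output in [0, 1]
        error = target - p
        if abs(error) > 0.05:
            # Press further along Z when under-occluded; back off when over-occluded.
            probe.actuate(z=gain * error * step_mm, xy=(0.0, 0.0))
```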
104. The method of claim 103, further comprising generating an alarm when the ultrasonic probe does not maintain at least partial occlusion of fluid flow in the structure.
105. The method of claim 103, said adjusting includes one or more of adjusting lateral actuation of the ultrasonic probe along a Z-axis of the ultrasonic probe to control the pressure applied to the structure and adjusting directional actuation of the ultrasonic probe along one or more of an x-axis and a y-axis of the ultrasonic probe to control a direction of the pressure applied to the structure.
106. The method of claim 105, where adjusting directional actuation of the ultrasonic probe includes adjusting an angle of the ultrasonic probe to maintain at least partial occlusion of fluid flow in the structure.
107. The method of claim 105, further comprising generating one or more guidance indicators in accordance with the machine learning model of the junctional tourniquet and displaying the one or more guidance indicators in a user interface presented to a user, the user performing one or more of adjusting lateral actuation of the ultrasonic probe along a Z-axis of
the ultrasonic probe to control the pressure applied to the structure and adjusting directional actuation of the ultrasonic probe along one or more of an x-axis and a y-axis of the ultrasonic probe to control a direction of the pressure applied to the structure in accordance with the one or more guidance indicators displayed in the user interface.
108. The method of claim 107, including displaying the one or more guidance indicators on a screen of the user interface.
109. The method of claim 107, where the one or more guidance indicators include one or more of audio and visual indicators on the ultrasound probe or on a housing of the ultrasound probe or junctional tourniquet.
110. The method of claim 109, where the audio indicators include voice prompts, beeps or alarms and the visual indicators include guidance lights or arrows generated by the controller in accordance with the machine learning model of the junctional tourniquet.
111. The method of claim 103, where said adjusting further comprises adjusting one or more of the pressure and a direction of the ultrasound probe in applying pressure to the structure by a user in accordance with a user interface of the junctional tourniquet or autonomously by an autonomous module of the junctional tourniquet.
112. The method of claim 103, further including determining that an occlusion status is below an occlusion threshold for at least a period of time and in response to said determining activating an alarm.
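A minimal sketch of claim 112's timed alarm, assuming an occlusion status normalized to [0, 1] and hypothetical `read_status`/`alarm` callables; the 3-second hold is an illustrative choice:

```python
import time

def occlusion_alarm(read_status, alarm, threshold=0.5, hold_s=3.0):
    """Raise the alarm only when status stays below threshold for hold_s seconds."""
    below_since = None
    while True:
        if read_status() < threshold:
            below_since = below_since or time.monotonic()
            if time.monotonic() - below_since >= hold_s:
                alarm()                  # audible/visual alarm per claims 98-99
        else:
            below_since = None           # status recovered; reset the timer
        time.sleep(0.1)
```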
113. The method of claim 112, responsive to activating the alarm, adjusting one or more of the pressure and a direction of the pressure applied by the ultrasound probe to the structure by a user in accordance with a user interface of the junctional tourniquet or autonomously by an autonomous module of the junctional tourniquet.
114. The method of claim 101, where said monitoring the occlusion status of the structure includes processing force readings and determining the occlusion status of the structure from the processed force readings.
115. The method of claim 114, further comprising receiving the force readings from one or more force sensors of the ultrasonic probe.
116. The method of claim 87, further comprising receiving the plurality of images and in accordance with a machine learning model of the junctional tourniquet guiding the ultrasonic probe to the position proximal the location of the structure.
117. The method of claim 87, where said actuating further comprises: adjusting actuation of the ultrasonic probe in compressing the structure against a hard surface of the patient to at least partially occlude fluid flow in the structure.
118. The method of claim 117, where adjusting actuation of the ultrasonic probe in compressing the structure against the hard surface of the patient to at least partially occlude fluid flow in the structure includes adjusting one or more of lateral actuation of the ultrasonic probe along a Z-axis of the ultrasonic probe to control pressure applied to the structure and directional actuation of the ultrasonic probe along one or more of an x-axis and a y-axis of the ultrasonic probe to control a direction of actuation of the ultrasound probe to the structure.
119. The method of claim 87, further including collecting the plurality of images of a wound of a patient by the ultrasonic probe.
120. The method of claim 87, further comprising releasably securing the ultrasound probe to a mechanical attachment module following at least partially occluding fluid flow in the structure.
121. A tissue phantom system, comprising:
an anatomical tissue phantom, having an arterial side and a venous side, formed of ultrasound compliant material and having one or more compressible structures within the ultrasound compliant material that accommodate fluid flow through the anatomical tissue phantom; a fluid reservoir housing ultrasonic compliant fluid; a pump configured to receive ultrasonic compliant fluid from the fluid reservoir and pump ultrasonic compliant fluid to the tissue phantom; a pressure sensor configured to receive ultrasonic compliant fluid from the pump, measure pressure of the received ultrasonic compliant fluid, and provide the ultrasonic compliant fluid to the arterial side of the tissue phantom, where flow of the ultrasonic compliant fluid through the pressure sensor, the pump and the fluid reservoir forms a fluid bypass loop of the system; a flow sensor coupled to the tissue phantom and the fluid reservoir, the flow sensor configured to measure the flow of the ultrasonic compliant fluid and provide the ultrasonic compliant fluid to the fluid reservoir; and a hard surface of the tissue phantom, where the ultrasonic compliant fluid flows in a flow loop of the system, in which ultrasonic compliant fluid pumped by the pump from the fluid reservoir is provided to the pressure sensor, the pressure sensor provides the pumped ultrasonic compliant fluid to the arterial side of the tissue phantom, the ultrasonic compliant fluid flows through the arterial side of the tissue phantom, is measured by the flow sensor at an output of the tissue phantom, and flows back to the fluid reservoir, and where, responsive to pressure on the ultrasound compliant material of the anatomical tissue phantom at a position that is proximal a location of a compressible structure of the one or more compressible structures, the compressible structure is compressed against the hard surface of the anatomical tissue phantom to at least partially occlude fluid flow of the ultrasonic compliant fluid in the compressible structure.
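As a hedged aside, the flow loop's pressure sensor allows occlusion in the phantom to be quantified as a fractional reduction in measured distal pressure against an unobstructed baseline, the same metric the training method of claims 128-133 sorts on; `pressure_reduction` is a hypothetical helper, not the disclosure's code:

```python
def pressure_reduction(baseline_mmHg, measured_mmHg):
    """Fractional drop in distal pressure, e.g. 0.92 for a 92% reduction."""
    if baseline_mmHg <= 0:
        raise ValueError("baseline pressure must be positive")
    return max(0.0, 1.0 - measured_mmHg / baseline_mmHg)

# Example: pressure_reduction(120.0, 9.6) -> 0.92 (full occlusion per claim 133)
```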
122. The system of claim 121, further including a hydrostatic reservoir configured to provide hydrostatic fluid to the venous side of the tissue phantom.
123. The system of claim 121, the tissue phantom configured to provide ultrasonic compliant fluid to the flow sensor, the flow sensor configured to measure the flow of the ultrasonic compliant fluid and to provide the ultrasonic compliant fluid to the fluid reservoir.
124. The system of claim 121, where the structure is representative of a blood vessel, artery, vein, nerve, bone, or other physiological pressure point.
125. The system of claim 121, where the ultrasound compliant material of the tissue phantom is one or more of a synthetic gelatin, a ballistic gelatin, a ballistic hydrogel, and a clear ballistic gelatin.
126. The system of claim 121, where the anatomical tissue phantom is one or more of a femoral, a subclavian and an aortic tissue phantom and the one or more compressible structures are representative of one or more of vessels, arteries, veins, nerves, and bones.
127. The system of claim 126, where the one or more compressible structures are compressible tubing.
128. A computer-implemented method for training a machine learning model in image classification and object detection of ultrasound generated images, the method comprising: analyzing a database of ultrasound imaging and flow data points representative of one or more compressible structures of an anatomical structure subjected to a plurality of levels of flow of ultrasonic compliant fluid therethrough, including occlusion of the one or more compressible structures, the database of ultrasound imaging including a plurality of ultrasound images; sorting each ultrasound image of the plurality of ultrasound images of the database into a plurality of classification categories based on a measured distal pressure of an ultrasound
image, the measured distal pressure a measure of flow of the ultrasonic compliant fluid through the compressible structure of the ultrasound image; processing the sorted plurality of classification categories of the plurality of ultrasound images into processed classification categories; and training a machine learning model on a training dataset of the processed classification categories to generate a trained machine learning model, including providing the machine learning model with an image input layer of the training dataset and generating an output layer with two or more of the classification categories.
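A minimal sketch of claim 128's sorting step, assuming each record pairs an image with its baseline and measured distal pressures; the `categorize` function is elaborated after claim 133 below, and all names are hypothetical:

```python
def sort_images(records, categorize):
    """records: iterable of (image_path, baseline_mmHg, distal_mmHg) tuples."""
    bins = {}
    for path, baseline, distal in records:
        reduction = 1.0 - distal / baseline       # fractional distal pressure drop
        bins.setdefault(categorize(reduction), []).append(path)
    return bins                                   # classification category -> images
```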
129. The method of claim 128, where each of the plurality of ultrasound images is sorted into classification categories of full flow or full occlusion of ultrasonic compliant fluid flow through a compressible structure of the ultrasound image.
130. The method of claim 129, where the full flow classification category and the full occlusion classification category are separated by a percent reduction of measured distal pressure.
131. The method of claim 130, where the full flow classification category and the full occlusion classification category are separated by a range of 50 to 90% reduction of measured distal pressure.
132. The method of claim 129, where each of the plurality of ultrasound images is further sorted into classification categories of full flow, partial occlusion or full occlusion of ultrasonic compliant fluid flow through a compressible structure of the ultrasound image.
133. The method of claim 132, where a full flow classification category is characterized as unobstructed flow to a 10% reduction in measured distal pressure, a partial occlusion classification category is a range of approximately 50 to 90% reduction in measured distal pressure, and a full occlusion classification category is approximately 90% or more reduction in measured distal pressure.
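One hedged encoding of claim 133's bands: full flow up to a 10% reduction, partial occlusion approximately 50-90%, full occlusion approximately 90% or more. Readings between 10% and 50% fall outside the named bands; treating them as partial occlusion here is an assumption, not the disclosure's rule:

```python
def categorize(reduction):
    """Map fractional distal-pressure reduction to a classification category."""
    if reduction < 0.10:
        return "full_flow"          # unobstructed flow to 10% reduction
    if reduction >= 0.90:
        return "full_occlusion"     # approximately 90% or more reduction
    return "partial_occlusion"      # ~50-90% band (gap handling assumed)
```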
134. The method of claim 128, said processing further comprising processing the plurality of ultrasound images sorted into classification categories by cropping to remove ultrasound image information, resizing the cropped plurality of ultrasound images, and converting the cropped and resized ultrasound images to grey scale images.
135. The method of claim 134, said resizing to 512 x 512 x 3.
136. The method of claim 128, said processing further comprising processing the plurality of ultrasound images sorted into classification categories by cropping to remove ultrasound image information and then converting the cropped ultrasound images to grey scale images.
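A minimal sketch of the preprocessing of claims 134-136 with Pillow; the crop box is an assumption, since on-screen ultrasound overlays vary by device:

```python
from PIL import Image

def preprocess(path, crop_box=(100, 60, 860, 620)):
    img = Image.open(path).crop(crop_box)   # crop away on-screen ultrasound info
    img = img.resize((512, 512))            # claim 135's 512 x 512 resize
    return img.convert("L")                 # grey scale conversion per claim 134
```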
137. The method of claim 128, further comprising splitting the processed plurality of classification categories into at least the training dataset and a testing dataset.
138. The method of claim 137, further comprising splitting the processed plurality of classification categories into the training dataset, the testing dataset, and a validation dataset.
139. The method of claim 138, further comprising validating the trained machine learning model on the validation dataset.
140. The method of claim 137, further comprising randomly augmenting the testing dataset with affine transformations including one or more of reflection in the x-axis, reflection in the y-axis, scaling and rotation.
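A hedged torchvision rendering of claim 140's random affine augmentation; probabilities and ranges are illustrative choices:

```python
import torchvision.transforms as T

augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),                 # reflection across the y-axis
    T.RandomVerticalFlip(p=0.5),                   # reflection across the x-axis
    T.RandomAffine(degrees=15, scale=(0.9, 1.1)),  # random rotation and scaling
])
```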
141. The method of claim 137, further comprising, for each image of the testing dataset, determining predictions of the machine learning model and calculating performance metrics.
142. The method of claim 141, where for a two classification category machine learning model, positive predictions are full occlusion images and negative predictions are full flow images of the testing dataset.
143. The method of claim 141, for a portion of the images of the testing dataset, creating a gradient-weighted class activation mapping (Grad-CAM) overlay, generating an approximate localization heat map from the Grad-CAM overlay, and using the localization heat map in
identifying representative images useful to improve predictions by the machine learning model.
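A minimal PyTorch sketch of the Grad-CAM overlay of claim 143, assuming a CNN classifier and a chosen last convolutional layer; this is the standard Grad-CAM recipe, not code from the disclosure:

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, conv_layer, class_idx):
    """Return an approximate localization heat map for one (C, H, W) image."""
    feats, grads = [], []
    h1 = conv_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = conv_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    try:
        score = model(image.unsqueeze(0))[0, class_idx]
        model.zero_grad()
        score.backward()                                   # gradients of class score
    finally:
        h1.remove(); h2.remove()
    weights = grads[0].mean(dim=(2, 3), keepdim=True)      # gradient-weighted channels
    cam = F.relu((weights * feats[0]).sum(dim=1))          # weighted activation map
    cam = F.interpolate(cam.unsqueeze(1), size=image.shape[-2:], mode="bilinear")
    return (cam / (cam.max() + 1e-8)).squeeze()            # normalized heat map
```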
144. The method of claim 141, further comprising determining an occlusion threshold to distinguish between full occlusion and full flow images in the testing dataset.
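A hedged sketch of claim 144's occlusion threshold: sweep candidate probability cutoffs over the testing dataset and keep the one that best separates full occlusion from full flow; accuracy as the selection criterion is an assumption:

```python
import numpy as np

def best_threshold(probs, labels):
    """probs: predicted occlusion probabilities; labels: 1 = full occlusion."""
    probs, labels = np.asarray(probs), np.asarray(labels)
    candidates = np.linspace(0.0, 1.0, 101)
    accuracies = [((probs >= t) == labels).mean() for t in candidates]
    return candidates[int(np.argmax(accuracies))]
```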
145. The method of claim 128, said training the machine learning model at a learn rate of 0.001 and using a batch size of between 18 and 32 processed ultrasound images of the processed plurality of ultrasound images.
146. The method of claim 128, further comprising providing the machine learning model with a convolution layer with a rectified linear unit (ReLU) activation layer and a max pooling layer.
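A minimal PyTorch sketch combining claims 145-146: a convolution layer with a ReLU activation layer and a max pooling layer, trained at a learn rate of 0.001 with a batch size inside the 18-32 range; depth and channel counts are assumptions:

```python
import torch.nn as nn
import torch.optim as optim

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 512 x 512 grey scale input
    nn.ReLU(),                                   # ReLU activation layer
    nn.MaxPool2d(2),                             # max pooling layer
    nn.Flatten(),
    nn.Linear(16 * 256 * 256, 2),                # output layer: two categories
)
optimizer = optim.Adam(model.parameters(), lr=0.001)  # claim 145 learn rate
batch_size = 32                                       # within claim 145's range
```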
147. The method of claim 128, where the structure is a blood vessel, artery, vein, nerve, bone, or other physiological pressure point of the patient.
148. The method of claim 128, where the anatomical structure is a biological structure.
149. The method of claim 128, where the anatomical structure is an anatomical tissue phantom with the plurality of compressible structures and the measured distal pressure is a measure of flow of the ultrasonic compliant fluid through the compressible structure distal to the anatomical tissue phantom, the method further comprising collecting the database of ultrasound imaging and flow data points using an ultrasound probe actuated against the plurality of compressible structures that accommodate flow of ultrasonic compliant fluid therethrough, where the ultrasound probe performs collecting the database from a plurality of angles, placements and pressures actuated by the ultrasound probe against the plurality of compressible structures against a hard surface of the anatomical tissue phantom.
150. The method of claim 149, where the anatomical tissue phantom has an arterial side and a venous side in a system having the anatomical tissue phantom, a pump, a fluid reservoir, a pressure sensor, and a flow sensor, where in a flow loop of the system the ultrasonic compliant fluid pumped by the pump from the fluid reservoir is provided to the pressure sensor, the pressure sensor provides the pumped ultrasonic compliant fluid to the arterial side of the tissue phantom, the ultrasonic compliant fluid flows through the arterial side of the tissue phantom, is measured by the flow sensor at an output of the tissue phantom, and flows back to the fluid reservoir, and where, responsive to pressure by the ultrasound probe on the ultrasound compliant material of the anatomical tissue phantom at a position that is proximal a location of a compressible structure of the one or more compressible structures, the compressible structure is compressed against the hard surface of the anatomical tissue phantom to at least partially occlude fluid flow of the ultrasonic compliant fluid in the compressible structure, the method further comprising collecting the database of ultrasound imaging and flow data points upon actuation of the ultrasound probe against the tissue phantom.
151. The method of claim 128, where training the machine learning model to generate the trained machine learning model further comprises for each ultrasound image of the plurality of ultrasound images in the image input layer: providing a plurality of bounding boxes for the one or more compressible structures; and labeling the one or more compressible structures in each of the plurality of bounding boxes.
152. The method of claim 151, further comprising predicting the plurality of bounding boxes for the one or more compressible structures to generate a plurality of predicted bounding boxes.
153. The method of claim 151, further comprising generating a bounding box prediction output layer with the plurality of predicted bounding boxes.
154. The method of claim 153, where the output layer and the bounding box prediction output layer are both convolutional layers of the machine learning model.
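A hedged sketch of the two output layers of claims 151-154: a shared backbone feeding a convolutional classification head and a convolutional bounding box prediction head; every size and channel count is an assumption, not the disclosure's architecture:

```python
import torch.nn as nn

class DualHeadModel(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        # Classification output layer (convolutional, per claim 154).
        self.cls_head = nn.Sequential(
            nn.Conv2d(16, n_classes, 1), nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Bounding box prediction output layer: one (x, y, w, h) per image.
        self.box_head = nn.Sequential(
            nn.Conv2d(16, 4, 1), nn.AdaptiveAvgPool2d(1), nn.Flatten())

    def forward(self, x):
        feats = self.backbone(x)
        return self.cls_head(feats), self.box_head(feats)
```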
155. The method of claim 153, where the output layer and the bounding box prediction output layer both include a convolution layer, a rectified linear unit (ReLU) activation layer, and a max pooling layer.
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202363593597P | 2023-10-27 | 2023-10-27 | |
| US63/593,597 | 2023-10-27 | | |
| US202463669014P | 2024-07-09 | 2024-07-09 | |
| US63/669,014 | 2024-07-09 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025090636A1 true WO2025090636A1 (en) | 2025-05-01 |
Family
ID=95516379
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2024/052603 Pending WO2025090636A1 (en) | 2023-10-27 | 2024-10-23 | Ultrasound and machine learning based junctional tourniquet |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025090636A1 (en) |
Patent Citations (13)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20130296704A1 (en) * | 2005-02-08 | 2013-11-07 | Volcano Corporation | Apparatus and Methods for Low-Cost Intravascular Ultrasound Imaging and for Crossing Severe Vascular Occlusions |
| US20160000446A1 (en) * | 2010-04-21 | 2016-01-07 | The Regents Of The University Of Michigan | Fluoroscopy-independent, endovascular aortic occlusion system |
| US20130162796A1 (en) * | 2010-10-14 | 2013-06-27 | The Arizona Board Of Regents On Behalf Of The University Of Arizona | Methods and apparatus for imaging, detecting, and monitoring surficial and subdermal inflammation |
| US20190151029A1 (en) * | 2011-02-15 | 2019-05-23 | Intuitive Surgical Operations, Inc. | System for moveable element position indication |
| US20130296921A1 (en) * | 2012-04-18 | 2013-11-07 | Board Of Regents Of The University Of Texas System | Junctional tourniquet |
| US20220367034A1 (en) * | 2013-03-15 | 2022-11-17 | Djo, Llc | Personalized image-based guidance for energy-based therapeutic devices |
| US20150327836A1 (en) * | 2014-05-16 | 2015-11-19 | University Of Virginia Patent Foundation | Endovascular occlusion device and method of use |
| US20160120607A1 (en) * | 2014-11-03 | 2016-05-05 | Michael Sorotzkin | Ultrasonic imaging device for examining superficial skin structures during surgical and dermatological procedures |
| US20230001141A1 (en) * | 2015-03-19 | 2023-01-05 | Prytime Medical Devices, Inc. | System and method for low-profile occlusion balloon catheter |
| US20170135760A1 (en) * | 2015-11-17 | 2017-05-18 | Covidien Lp | Systems and methods for ultrasound image-guided ablation antenna placement |
| US20190357917A1 (en) * | 2017-02-17 | 2019-11-28 | Philip M. Chun | Junctional hemorrhage control device |
| US20220313273A1 (en) * | 2019-09-17 | 2022-10-06 | Yair Galili | System and method for temporarily stopping blood flow through a blood vessel |
| US20220203014A1 (en) * | 2020-11-27 | 2022-06-30 | J&M Shuler Medical Inc. | Wound therapy systems |
Non-Patent Citations (5)
| Title |
|---|
| AVITAL GUY, HERNANDEZ TORRES SOFIA I., KNOWLTON ZECHARIAH J., BEDOLLA CARLOS, SALINAS JOSE, SNIDER ERIC J.: "Toward Smart, Automated Junctional Tourniquets—AI Models to Interpret Vessel Occlusion at Physiological Pressure Points", BIOENGINEERING, MDPI AG, vol. 11, no. 2, pages 109, XP093312573, ISSN: 2306-5354, DOI: 10.3390/bioengineering11020109 * |
| DAVENPORT: "Haemorrhage control of the pre-hospital trauma patient.", SCANDINAVIAN JOURNAL OF TRAUMA, RESUSCITATION AND EMERGENCY MEDICINE, vol. 22, 2014, pages A4, XP021191234, Retrieved from the Internet <URL:https://link.springer.com/content/pdf/10.1186/1757-7241-22-S1-A4.pdf> [retrieved on 20250115], DOI: 10.1186/1757-7241-22-S1-A4 * |
| HUMPHRIES RHIANNON, NAUMANN DAVID N., AHMED ZUBAIR: "Use of Haemostatic Devices for the Control of Junctional and Abdominal Traumatic Haemorrhage: A Systematic Review", TRAUMA CARE, MDPI AG, vol. 2, no. 1, pages 23 - 34, XP093312574, ISSN: 2673-866X, DOI: 10.3390/traumacare2010003 * |
| JÜSTEL DOMINIK, IRL HEDWIG, HINTERWIMMER FLORIAN, DEHNER CHRISTOPH, SIMSON WALTER, NAVAB NASSIR, SCHNEIDER GERHARD, NTZIACHRISTOS : "Spotlight on Nerves: Portable Multispectral Optoacoustic Imaging of Peripheral Nerve Vascularization and Morphology", ADVANCED SCIENCE, JOHN WILEY & SONS, INC, GERMANY, vol. 10, no. 19, 1 July 2023 (2023-07-01), Germany, XP093312575, ISSN: 2198-3844, DOI: 10.1002/advs.202301322 * |
| NACHMAN DEAN, DULCE DOR, WAGNERT-AVRAHAM LINN, GAVISH LILACH, MARK NOY, GERRASI RAFI, GERTZ S DAVID, EISENKRAFT ARIK: "Assessment of the Efficacy and Safety of a Novel, Low-Cost, Junctional Tourniquet in a Porcine Model of Hemorrhagic Shock", MILITARY MEDICINE, ASSOCIATION OF MILITARY SURGEONS OF THE US, BETHESDA, MD, US, vol. 185, no. Supplement_1, 7 January 2020 (2020-01-07), US , pages 96 - 102, XP093312571, ISSN: 0026-4075, DOI: 10.1093/milmed/usz351 * |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20230179680A1 (en) | Reality-augmented morphological procedure | |
| US20250253058A1 (en) | Methods For Surgical Simulation | |
| US11683577B2 (en) | Systems and methods for registering headset system | |
| EP1652145B1 (en) | Method for monitoring labor parameters | |
| KR101926123B1 (en) | Device and method for segmenting surgical image | |
| EP3592242B1 (en) | Blood vessel obstruction diagnosis apparatus | |
| KR20230165284A (en) | Systems and methods for processing electronic medical images for diagnostic or interventional use | |
| US10206575B2 (en) | Apparatus and method for using internal inclusion for mechanical characterization of soft materials | |
| US11660142B2 (en) | Method for generating surgical simulation information and program | |
| US20170272699A1 (en) | Systems and methods for communicating with a fetus | |
| US20170265807A1 (en) | Systems and methods for fetal monitoring | |
| EP4437931A1 (en) | Surgery assisting system, surgery assisting method, and surgery assisting program | |
| CA2989910C (en) | Obstetrical imaging at the point of care for untrained or minimally trained operators | |
| US20210327305A1 (en) | System for validating and training invasive interventions | |
| US20240268784A1 (en) | Ultrasound probe | |
| WO2025090636A1 (en) | Ultrasound and machine learning based junctional tourniquet | |
| US20240000511A1 (en) | Visually positioned surgery | |
| EP4315303B1 (en) | Simulation doll | |
| CN119606502B (en) | Intelligent monitoring method and system for evaluating safety of caesarean operation | |
| EP4321101A1 (en) | Patient motion detection in diagnostic imaging | |
| US11986248B2 (en) | Apparatus and method for matching the real surgical image with the 3D-based virtual simulated surgical image based on POI definition and phase recognition | |
| KR20250012331A (en) | Mixed reality-based ultrasound image output system and method | |
| WO2024127403A1 (en) | Imaging probe | |
| HK40021307B (en) | Blood vessel obstruction diagnosis apparatus | |
| HK40021307A (en) | Blood vessel obstruction diagnosis apparatus |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24883255; Country of ref document: EP; Kind code of ref document: A1 |