WO2025114068A1 - Systems and methods of quantifying b-lines and merged b-lines in lung ultrasound images - Google Patents
- Publication number
- WO2025114068A1 (PCT/EP2024/082769)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- line
- video loop
- merged
- interest
- lung ultrasound
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/08—Clinical applications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/52—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/5215—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
- A61B8/5223—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for extracting a diagnostic or physiological parameter from medical diagnostic data
Definitions
- the present disclosure relates generally to ultrasound imaging systems and methods of processing ultrasound images, and more specifically to ultrasound imaging systems and methods that improve the quantification of certain pathological features seen in lung ultrasound images.
- Lung ultrasound is an imaging technique that can be used at the point-of-care to assess the lungs in a variety of settings, including emergency medicine and critical care. This technique has been used widely as a portable, non-invasive, and radiation-free modality for evaluation of pulmonary and infectious diseases.
- One characteristic pathological feature seen in lung ultrasounds is the B-line.
- B-lines are defined as discrete, vertical, hyperechoic artifacts that appear as long bands originating at the pleural line and extending vertically the length of the image.
- the number of B-lines (i.e., the B-line count) is known to correlate with accumulation of fluid in the interstitial spaces of the lung.
- the present disclosure relates to ultrasound imaging systems and methods that improve the quantification of B-lines and merged B-lines, which are pathological features seen in lung ultrasound images. Because B-lines and merged B-lines may be observed in a variety of conditions, automated evaluation of these pathological features can play an important role in screening, diagnosis, and/or management of disease progression, for example, by standardizing interpretation among experts and/or enabling adoption by under-trained lung ultrasound users.
- a system for viewing and analyzing lung ultrasound images including an ultrasound imaging device comprising one or more ultrasound imaging transducers configured to generate a lung ultrasound video loop of a subject, the lung ultrasound video loop comprising a plurality of lung ultrasound imaging frames; and an electronic device in communication with the ultrasound imaging device.
- the electronic device can include a display device configured to display a graphical user interface, a computer-readable storage medium having stored thereon computer-readable instructions to be executed by one or more processors, and one or more processors configured by the computer-readable instructions stored on the computer-readable storage medium to perform the following operations: (i) obtain a lung ultrasound video loop of a subject, the lung ultrasound video loop comprising a plurality of lung ultrasound imaging frames; (ii) analyze the lung ultrasound video loop using a B-line classifier and a merged B-line classifier to generate B-line data for the lung ultrasound video loop; and (iii) output, via the display device, a graphical user interface comprising the B-line data generated for the lung ultrasound video loop.
- the B-line data for the lung ultrasound video loop can be generated by: pre-processing each imaging frame of the lung ultrasound video loop to obtain a pre-processed lung ultrasound video loop; determining a B-line analysis region-of-interest for each imaging frame of the pre-processed lung ultrasound video loop; analyzing the B-line analysis region-of-interest of each imaging frame to identify one or more B-line candidates; for each B-line candidate, extracting a set of B-line features from the imaging frames of the pre-processed lung ultrasound video loop; classifying each of the B-line candidates, based on the corresponding set of B-line features, using the B-line classifier and the merged B-line classifier to predict a likelihood that the B-line candidate is a probable B-line and/or a probable merged B-line; identifying one or more probable B-lines and/or probable merged B-lines in each imaging frame of the pre-processed ultrasound video loop based on the classification of each of the B-line candidates; and generating the B-line data for the lung ultrasound video loop based on the one or more probable B-lines and/or probable merged B-lines identified.
- the B-line classifier can be a first trained machine learning model configured to receive a plurality of B-line features as an input and output a likelihood that a B-line candidate is a probable B-line.
- the merged B-line classifier can be a second trained machine learning model configured to receive a plurality of B-line features as an input and output a likelihood that a B-line candidate is a probable merged B-line.
- the lung ultrasound video loop is positive for merged B-lines if the number of imaging frames of the pre-processed lung ultrasound video loop that contain a probable merged B-line meets or exceeds a predefined minimum number of imaging frames.
- identifying one or more B-line candidates within the B-line analysis region-of-interest can include performing the following operations for each imaging frame of the pre-processed lung ultrasound imaging video loop: smoothing an intensity profile of the B-line analysis region-of-interest for the corresponding imaging frame; identifying one or more local peaks along the smoothed intensity profile of the B-line analysis region-of-interest for the corresponding imaging frame; and defining a B-line candidate region-of-interest for each of the one or more local peaks identified, wherein each B-line candidate region-of-interest corresponds to a B-line candidate.
- a set of B-line features for each B-line candidate can be extracted from the B-line candidate region-of-interest defined in the imaging frames of the pre-processed lung ultrasound video loop.
- identifying one or more B-line candidates within the B-line analysis region-of-interest can include performing the following operations for each imaging frame of the pre-processed lung ultrasound imaging video loop: smoothing an intensity profile of the B-line analysis region-of-interest for the corresponding imaging frame using a first smoothing kernel; identifying one or more local peaks along the intensity profile smoothed using the first smoothing kernel; defining a B-line candidate region-of-interest for each of the one or more local peaks identified in the intensity profile smoothed using the first smoothing kernel, wherein each B-line candidate region-of-interest corresponds to a B-line candidate; smoothing the intensity profile of the B-line analysis region-of-interest for the corresponding imaging frame using a second smoothing kernel, wherein the second smoothing kernel is a different size than the first smoothing kernel; identifying one or more local peaks along the intensity profile smoothed using the second smoothing kernel; and defining a B-line candidate region-of-interest for each of the one or more local peaks identified in the intensity profile smoothed using the second smoothing kernel, wherein each B-line candidate region-of-interest corresponds to a B-line candidate.
- a first set of B-line features for each B-line candidate can be extracted from the B-line candidate regions-of-interest defined based on the intensity profile smoothed using the first smoothing kernel, and a second set of B-line features for each B-line candidate can be extracted from the B-line candidate regions-of-interest defined based on the intensity profile smoothed using the second smoothing kernel.
- the set of B-line features extracted from the imaging frames of the pre-processed lung ultrasound video loop can include at least one B-line feature measured at two or more different spatial scales.
- an image processing method including pre-processing each imaging frame of a lung ultrasound video loop to obtain a pre-processed lung ultrasound video loop, wherein the lung ultrasound video loop comprises a plurality of image frames; determining a B-line analysis region-of-interest for each imaging frame of the pre-processed lung ultrasound video loop; analyzing the B-line analysis region-of-interest of each imaging frame to identify one or more B-line candidates; for each B-line candidate, extracting a set of B-line features from the imaging frames of the pre-processed lung ultrasound video loop; classifying each of the B-line candidates, based on the corresponding set of B-line features, using a B-line classifier and a merged B-line classifier to predict a likelihood that the B-line candidate is a probable B-line and/or a probable merged B-line; identifying one or more probable B-lines and/or probable merged B-lines in each imaging frame of the pre-processed ultrasound video loop based on the classification of each of the B-line candidates.
- the B-line classifier can be a first trained machine learning model configured to receive a plurality of B-line features as an input and output a likelihood that a B-line candidate is a probable B-line.
- the merged B-line classifier can be a second trained machine learning model configured to receive a plurality of B-line features as an input and output a likelihood that a B-line candidate is a probable merged B-line.
- the lung ultrasound video loop is positive for merged B-lines if the number of imaging frames of the pre-processed lung ultrasound video loop that contain a probable merged B-line meets or exceeds a predefined minimum number of imaging frames.
- identifying one or more B-line candidates within the B-line analysis region-of-interest can include performing the following operations for each imaging frame of the pre-processed lung ultrasound imaging video loop: smoothing an intensity profile of the B-line analysis region-of-interest for the corresponding imaging frame; identifying one or more local peaks along the smoothed intensity profile of the B-line analysis region-of-interest for the corresponding imaging frame; and defining a B-line candidate region-of-interest for each of the one or more local peaks identified, wherein each B-line candidate region-of-interest corresponds to a B-line candidate.
- identifying one or more B-line candidates within the B-line analysis region-of-interest can include performing the following operations for each imaging frame of the pre-processed lung ultrasound imaging video loop: smoothing an intensity profile of the B-line analysis region-of-interest for the corresponding imaging frame using a first smoothing kernel; identifying one or more local peaks along the intensity profile smoothed using the first smoothing kernel; defining a B-line candidate region-of-interest for each of the one or more local peaks identified in the intensity profile smoothed using the first smoothing kernel, wherein each B-line candidate region-of-interest corresponds to a B-line candidate; smoothing the intensity profile of the B-line analysis region-of-interest for the corresponding imaging frame using a second smoothing kernel, wherein the second smoothing kernel is a different size than the first smoothing kernel; identifying one or more local peaks along the intensity profile smoothed using the second smoothing kernel; and defining a B-line candidate region-of-interest for each of the one or more local peaks identified in the intensity profile smoothed using the second smoothing kernel, wherein each B-line candidate region-of-interest corresponds to a B-line candidate.
- a computer program product can include a non-transitory computer-readable storage medium having stored thereon computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following operations: (i) obtain a lung ultrasound video loop of a subject, the lung ultrasound video loop comprising a plurality of lung ultrasound imaging frames; (ii) pre-process each imaging frame of the lung ultrasound video loop to obtain a pre-processed lung ultrasound video loop; (iii) determine a B-line analysis region-of-interest for each imaging frame of the pre-processed lung ultrasound video loop; (iv) analyze the B-line analysis region-of-interest of each imaging frame to identify one or more B-line candidates; (v) for each B-line candidate, extract a set of B-line features from the imaging frames of the pre-processed lung ultrasound video loop; (vi) classify each of the B-line candidates, based on the corresponding set of B-line features, using a B-line classifier and a merged B-line classifier to predict a likelihood that the B-line candidate is a probable B-line and/or a probable merged B-line.
- FIG. 1 is a series of lung ultrasound imaging frames presenting with worsening lung condition severity in accordance with aspects of the present disclosure.
- FIG. 2 is a block diagram illustrating an improved system configured to quantify B-lines and merged B-lines in lung ultrasound examinations in accordance with aspects of the present disclosure.
- FIG. 3 is a flowchart illustrating a method for quantifying B-lines and merged B-lines in lung ultrasound examinations in accordance with aspects of the present disclosure.
- FIG. 4 is a block diagram illustrating the components of an electronic device for use in connection with an ultrasound imaging device in accordance with aspects of the present disclosure.
- FIG. 5 is a flowchart illustrating a method of analyzing an ultrasound video loop for B- lines and merged B-lines and visualizing the results in accordance with aspects of the present disclosure.
- FIG. 6 is a flow diagram illustrating the process of analyzing an ultrasound video loop for B-lines and merged B-lines and visualizing the results in accordance with certain aspects of the present disclosure.
- FIG. 7A shows two lung ultrasound image frames before and after intensity normalization is applied in accordance with aspects of the present disclosure.
- FIG. 7B shows another two lung ultrasound image frames before and after intensity normalization is applied in accordance with further aspects of the present disclosure.
- FIG. 8 is a flow diagram illustrating a process for pleural line detection and tracking that may be used to determine B-line analysis regions-of-interest in accordance with aspects of the present disclosure.
- FIG. 9 shows two pre-processed lung ultrasound image frames annotated to illustrate a B-line analysis region-of-interest, a corresponding intensity profile, and a plurality of B-line candidate regions-of-interest in accordance with aspects of the present disclosure.
- FIG. 10 shows a pre-processed lung ultrasound image frame with a B-line analysis region-of-interest and two corresponding intensity profiles that are smoothed using two different smoothing kernels in accordance with aspects of the present disclosure.
- FIG. 11 shows a pre-processed lung ultrasound image frame annotated with two narrow B-line candidate regions-of-interest and two expanded B-line candidate regions-of-interest in accordance with aspects of the present disclosure.
- FIG. 12 is a flow diagram illustrating the process of analyzing a lung ultrasound imaging frame to detect and quantify B-lines and merged B-lines using a single smoothing kernel in accordance with aspects of the present disclosure.
- FIG. 13 is a flow diagram illustrating the process of analyzing a lung ultrasound imaging frame to detect and quantify B-lines and merged B-lines using two different smoothing kernels in accordance with aspects of the present disclosure.
- FIG. 14 is an illustration of a first graphical user interface containing B-line analysis results in accordance with aspects of the present disclosure.
- FIG. 15 is an illustration of a second graphical user interface containing B-line analysis results in accordance with aspects of the present disclosure.
- FIG. 16 is an illustration of a third graphical user interface containing B-line analysis results in accordance with aspects of the present disclosure.
- B-lines are a pathological feature that can be seen in lung ultrasound images, which are known to correlate with an accumulation of fluid in the interstitial spaces of the lung.
- B-lines are discrete, vertical, hyperechoic artifacts that appear in lung ultrasound images as long bands originating at the pleural line and extending vertically the length of the image.
- as the level of interstitial fluid builds up and becomes more severe, more B-lines will become present, eventually causing separate B-lines to merge.
- in FIG. 1, four lung ultrasound images are shown along a continuum of increasing lung fluid associated with worsening severity. In the left-most lung ultrasound image (labeled ‘A’), there are no B-lines evident in this portion of the lung.
- an improved system 100 for obtaining, analyzing, and viewing lung ultrasound images is provided in accordance with certain aspects and embodiments of the present disclosure.
- the system 100 is configured to detect and differentiate between B-lines and merged B-lines in real-time.
- the system 100 can be used to evaluate pulmonary and infectious diseases in emergency medicine and critical care situations.
- the system 100 includes an ultrasound imaging device 102 configured to generate lung ultrasound data 104 of a subject (e.g., a patient) 106.
- the ultrasound imaging device 102 can be a handheld ultrasound device comprising one or more ultrasound imaging transducers (not shown).
- the lung ultrasound data 104 can be one or more lung ultrasound video loops of a particular region of a lung of the subject 106, and each video loop can include a plurality of lung ultrasound imaging frames.
- the system 100 further includes an electronic device 108 in communication with the ultrasound imaging device 102.
- the electronic device 108 may include, for example and without limitation, a display device 110, a computer-readable storage medium (e.g., memory 404 shown in FIG. 4) having stored thereon computer-readable instructions to be executed by one or more processors, and one or more processors (e.g., processors 402 shown in FIG. 4) configured by the computer-readable instructions to perform one or more steps of the methods described herein.
- the one or more processors may be configured by the computer-readable instructions stored on the computer-readable storage medium to perform the following operations: (i) obtain a lung ultrasound video loop 104 of a subject 106, the lung ultrasound video loop 104 comprising a plurality of lung ultrasound imaging frames; (ii) analyze the lung ultrasound video loop 104 using a B-line classifier and a merged B-line classifier to generate B-line data for the lung ultrasound video loop 104; and (iii) output, via the display device 110, a graphical user interface comprising the B-line data generated for the lung ultrasound video loop 104.
- the B-line data generated for the ultrasound video loop 104 can include, for example, a discrete B-line count for each imaging frame, a maximum B-line count for all imaging frames of the video loop 104, and/or a merged B-line indicator that indicates the presence of merged B-lines in the video loop 104.
- the one or more processors may be configured by the computer-readable instructions stored on the computer-readable storage medium (e.g., memory 404 shown in FIG. 4) to perform one or more operations of the method 300 illustrated in FIG. 3.
- the method 300 can include: in a step 310, obtaining a lung ultrasound video loop 104; in a step 320, analyzing the lung ultrasound video loop 104 to generate B-line data for the video loop 104; in a step 330, determining whether the video loop 104 contains any B-lines; optionally, in a step 340, outputting a B-line count of zero (e.g., via the user interface) if the video loop 104 does not contain any B-lines; in a step 350, determining whether at least N frames of the video loop 104 contain merged B-lines; optionally, in a step 360, outputting a maximum B-line count for the video loop 104 (e.g., via the user interface) if fewer than N frames of the video loop 104 contain merged B-lines; and in a step 370, outputting a merged B-line indicator (e.g., via the user interface) if at least N frames of the video loop 104 contain merged B-lines.
- N can be considered a frame threshold for the video loop 104.
- N can be an integer greater than zero.
- N may be a predetermined number that is independent of the size / number of frames in a video loop 104.
- N may be a function of the size / number of frames in a video loop 104.
- a merged B-line indicator may be output (i.e., in the step 370) if more than 25 frames in a 250-frame lung ultrasound video loop 104 contain a merged B-line.
- a merged B-line indicator may be output (i.e., in the step 370) if more than 50% of the frames in a lung ultrasound video loop 104 contain a merged B-line.
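The following sketch is an illustration of how the frame threshold N and the merged B-line indicator decision described above could be implemented; it is not the reference implementation. The function names and the default fraction are assumptions chosen to mirror the examples in the text (more than 25 of 250 frames, or more than 50% of frames).

```python
# Illustrative sketch, not the patent's reference implementation.

def merged_frame_threshold(num_frames: int, fraction: float = 0.10) -> int:
    """Return N, the minimum number of frames that must contain a merged B-line."""
    return max(1, int(num_frames * fraction))

def should_output_merged_indicator(merged_frame_count: int, num_frames: int,
                                   fraction: float = 0.10) -> bool:
    """True if at least N frames of the video loop contain a probable merged B-line."""
    return merged_frame_count >= merged_frame_threshold(num_frames, fraction)

# Example: a 250-frame loop in which 30 frames contain a merged B-line
print(should_output_merged_indicator(30, 250))                 # True  (N = 25)
print(should_output_merged_indicator(30, 250, fraction=0.5))   # False (N = 125)
```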
- the electronic device 108 can include one or more processors 402 and a computer-readable memory 404 interconnected and/or in communication via a system bus 406 containing conductive circuit pathways through which instructions (e.g., machine-readable signals) may travel to effectuate communication, tasks, storage, and the like.
- the electronic device 108 can be connected to a power source (not shown), which can include an internal power supply and/or an external power supply.
- the electronic device 108 can also include one or more additional components, such as a display 110, an input device 112, an input/output (I/O) interface 412, a networking unit 414, and the like, including combinations thereof. As shown, each of these components may be interconnected and/or in communication via the system bus 406, for example.
- the one or more processors 402 can include one or more high-speed data processors adequate to execute the program components described herein and/or perform one or more operations of the methods described herein.
- the one or more processors 402 may include a microprocessor, a multi-core processor, a multithreaded processor, an ultra-low voltage processor, an embedded processor, and/or the like, including combinations thereof.
- the one or more processors 402 can include multiple processor cores on a single die and/or may be a part of a system on a chip (SoC) in which the processor 402 and other components are formed into a single integrated circuit, or a single package. That is, the one or more processors 402 may be a single processor, multiple independent processors, or multiple processor cores on a single die.
- the display device 110 may be configured to display information, including text, graphs, and/or the like.
- the display device 110 may be configured to display a graphical user interface comprising the B-line data generated for one or more lung ultrasound video loops 104.
- the display device 110 can include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, a touch screen or other touch-enabled display, a foldable display, a projection display, and so on, or combinations thereof.
- the input device 112 may be configured to receive various forms of input from a user associated with the electronic device 108.
- the input device 112 can include, but is not limited to, one or more of a keyboard, keypad, trackpad, trackball(s), capacitive keyboard, controller (e.g., a gaming controller), computer mouse, computer stylus / pen, a voice input device, and/or the like, including combinations thereof.
- the input/output (I/O) interface 412 may be configured to connect and/or enable communication with one or more peripheral devices (not shown), including but not limited to additional machine-readable memory devices, diagnostic equipment, and other attachable devices.
- the I/O interface 412 may include one or more I/O ports that provide a physical connection to the one or more peripheral devices.
- the I/O interface 412 may include one or more serial ports.
- the networking unit 414 may include one or more types of networking interfaces that facilitate wired and/or wireless communication between the electronic device 108 and one or more external devices. That is, the networking unit 414 may operatively connect the electronic device 108 to one or more types of communications networks 416, which can include a direct interconnection, the Internet, a local area network (“LAN”), a metropolitan area network (“MAN”), a wide area network (“WAN”), a wired or Ethernet connection, a wireless connection, a cellular network, and similar types of communications networks, including combinations thereof.
- the electronic device 108 may communicate with one or more remote / cloud-based servers and/or cloud-based services, such as remote server 418, via the communications network 416.
- the memory 404 can be variously embodied in one or more forms of machine accessible and machine-readable memory.
- the memory 404 includes a storage device (not shown), which can include, but is not limited to, a non-transitory storage medium, a magnetic disk storage, an optical disk storage, an array of storage devices, a solid-state memory device, and/or the like, as well as combinations thereof.
- the memory 404 may also include one or more other types of memory, such as dynamic random-access memory (DRAM), static random-access memory (SRAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), Flash memory, and/or the like, as well as combinations thereof.
- the memory 404 may include one or more types of transitory and/or non-transitory memory.
- the electronic device 108 can be configured by software components stored in the memory 404 to perform one or more processes of the methods described herein. More specifically, the memory 404 can be configured to store data / information 420 and computer-readable instructions 422 that, when executed by the one or more processors 402, cause the electronic device 108 to (i) obtain a lung ultrasound video loop 104 of a subject 106, the lung ultrasound video loop 104 comprising a plurality of lung ultrasound imaging frames; (ii) analyze the lung ultrasound video loop 104 using a B-line classifier and a merged B-line classifier to generate B-line data for the lung ultrasound video loop 104; and (iii) output, via the display device 110, a graphical user interface comprising the B-line data generated for the lung ultrasound video loop 104.
- Such data 420 and the computer-readable instructions 422 stored in the memory 404 may form a B-line analysis package 424 that may be incorporated into, loaded from, loaded onto, or otherwise operatively available to and from the electronic device 108.
- the B-line analysis package 424 and/or one or more individual software packages may be stored in a local storage device of the memory 404.
- the B-line analysis package 424 and/or one or more individual software packages may be loaded onto and/or updated from a remote server or service, such as server 418, via the communications network 416.
- the B-line analysis package 424 includes at least one B-line classifier (e.g., B-line classifier 610 shown in FIGS. 6, 12, and 13) and at least one merged B-line classifier (e.g., merged B-line classifier 612 shown in FIGS. 6, 12, and 13).
- Each of the classifiers 610, 612 may be one or more trained models, such as one or more trained machine learning models.
- the B-line classifier 610 and the merged B-line classifier 612 can be trained machine learning models that are configured to receive a plurality of B-line features (i.e., feature scores for a plurality of image features) associated with a single B-line candidate and generate a likelihood that the single B-line candidate is a probable B-line or probable merged B-line.
- each classifier 610, 612 is a separately trained logistic regression model. That is, the B-line classifier 610 can be a first trained model and the merged B-line classifier 612 can be a second trained model that is different from the first trained model.
- the electronic device 108 may also include an operating system component 426, which may be stored in the memory 404.
- the operating system component 426 may be an executable program facilitating the operation of the electronic device 108 and/or the ultrasound device 102.
- the operating system component 426 can facilitate access of the I/O interface 412, network interface 414, the input device 112, and the display 110, and can communicate or control other components of the electronic device 108.
- a computer program product 424 comprising a non-transitory computer-readable storage medium 404 having stored thereon computer-readable instructions 422 that, when executed by one or more processors (such as processors 402), cause the one or more processors to perform one or more operations of the methods described below.
- the computer-readable storage medium 404 may include computer-readable instructions 422 that, when executed by one or more processors (such as processors 402), cause the one or more processors to perform the following operations: (i) obtain a lung ultrasound video loop 104 of a subject 106, the lung ultrasound video loop 104 comprising a plurality of lung ultrasound imaging frames; (ii) analyze the lung ultrasound video loop 104 using a B-line classifier and a merged B-line classifier to generate B-line data for the lung ultrasound video loop 104; and (iii) output, via the display device 110, a graphical user interface comprising the B-line data generated for the lung ultrasound video loop 104.
- the computer-readable storage medium 404 may include computer-readable instructions 422 that, when executed by one or more processors (such as processors 402), cause the one or more processors to perform the following operations: (i) obtain a lung ultrasound video loop of a subject, the lung ultrasound video loop comprising a plurality of lung ultrasound imaging frames; (ii) pre-process each imaging frame of the lung ultrasound video loop to obtain a pre-processed lung ultrasound video loop; (iii) determine a B-line analysis region-of-interest for each imaging frame of the pre-processed lung ultrasound video loop; (iv) analyze the B-line analysis region-of-interest of each imaging frame to identify one or more B-line candidates; (v) for each B-line candidate, extract a set of B-line features from the imaging frames of the pre-processed lung ultrasound video loop; (vi) classify each of the B-line candidates, based on the corresponding set of B-line features, using a B-line classifier and a merged B-line classifier to predict a likelihood that the B-line candidate is a probable B-line and/or a probable merged B-line.
- the computer-readable storage medium 404 can include computer-readable instructions 422 that, when executed by one or more processors (such as processors 402), cause the one or more processors to perform an improved method for detecting and distinguishing B- lines and merged B-lines in lung ultrasound images in accordance with the various aspects described herein.
- the method 500 can include: in a step 510, pre-processing each imaging frame of the lung ultrasound video loop to obtain a pre-processed lung ultrasound video loop; in a step 520, determining a B-line analysis region-of-interest (ROI) for each imaging frame of the pre-processed lung ultrasound video loop; in a step 530, analyzing the B-line analysis ROI of each imaging frame to identify one or more B-line candidates; in a step 540, extracting a set of B-line features for each B-line candidate from / based on the imaging frames of the pre-processed lung ultrasound video loop; in a step 550, classifying each of the B-line candidates, based on the corresponding set of B-line features, using a B-line classifier and a merged B-line classifier to predict a likelihood that the B-line candidate is a probable B-line and/or a probable merged B-line.
- each of the steps of the methods 300, 500 described herein may be implemented in several ways, including in one or more sub-steps.
- FIG. 6 one implementation of the method 500 is illustrated in accordance with certain aspects of the present disclosure.
- an ultrasound video loop 602 comprising a plurality of lung ultrasound imaging frames is presented / obtained.
- the ultrasound video loop 602 may be obtained, for example, from an ultrasound imaging device 102 comprising one or more ultrasound imaging transducers.
- the ultrasound video loop 602 can include native (i.e., pre-scan converted) ultrasound line scan data so as to maintain greater visual consistency across different transducer types (e.g., sector, linear, and curvilinear) and to allow for a consistent set of image feature extraction steps to be used across all transducers.
- each frame of the ultrasound video loop 602 may be pre-processed to improve the robustness of subsequent B-line detection steps, including across different transducer types.
- the pre-processing steps can include, but are not limited to, image normalization, noise normalization, TGC normalization, frame blending, and/or the like, including combinations thereof.
- input frames of the ultrasound video loop 602 may be normalized (i.e. rescaled) to a fixed intensity distribution prior to the B-line analysis.
- this may be performed on an image frame by first computing the mean and variance of the image frame after excluding outlier pixels (i.e., pixels with intensity values at the low and high extremes) from the distribution estimate.
- the image frame can then be standardized to zero mean and unit variance.
- pixels outside of a certain number (+/-) of standard deviations are truncated, and the image is then rescaled to a 0-255 intensity scale.
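A minimal sketch of the intensity-normalization step described in the preceding bullets: estimate the mean and variance with outlier pixels excluded, standardize to zero mean and unit variance, truncate at a fixed number of standard deviations, and rescale to 0-255. The percentile cutoffs and the truncation value k are assumptions, not values from the disclosure.

```python
import numpy as np

def normalize_frame(frame: np.ndarray, low_pct: float = 1.0, high_pct: float = 99.0,
                    k: float = 3.0) -> np.ndarray:
    frame = frame.astype(np.float32)

    # Exclude outlier pixels (intensity extremes) from the distribution estimate.
    lo, hi = np.percentile(frame, [low_pct, high_pct])
    inliers = frame[(frame >= lo) & (frame <= hi)]
    mean, std = inliers.mean(), inliers.std() + 1e-6

    # Standardize to zero mean / unit variance, then truncate at +/- k std devs.
    z = np.clip((frame - mean) / std, -k, k)

    # Rescale to a 0-255 intensity scale.
    return ((z + k) / (2 * k) * 255.0).astype(np.uint8)
```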
- the input frames of the ultrasound video loop 602 may also be pre-processed for noise normalization because different transducer types will generate images with different noise and speckle characteristics, which can affect B-line detection and analysis.
- the noise distribution may be computed from images acquired with one transducer type and used to de-noise images collected with another transducer such that a similar noise distribution is achieved.
- de-noising could be done by Gaussian smoothing, median filtering, and/or the like.
- because the time-gain compensation (TGC) profiles of the input frames of the ultrasound video loop 602 may be very different across different transducer types and can therefore impact B-line detection and analysis, the TGC profiles may also be normalized.
- the TGC profiles may be computed from images acquired from one transducer type and used to adjust the TGC on images from a different transducer type.
- frame blending may be applied as another pre-processing step to reduce frame-to-frame “jitter” and improve the consistency of the results of the B-line analysis.
- frame blending may be achieved by: (1) averaging n consecutive frames by computing mean pixel intensities over n consecutive frames, and then (2) projecting maximum intensity across n consecutive frames by computing maximum intensities over n consecutive frames.
- the value n may be set to three frames, which would capture the current frame plus the two previous frames.
- other values are contemplated.
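The sketch below illustrates one plausible reading of the frame-blending step described above: each output frame is blended with the n-1 previous frames by (1) computing mean pixel intensities and (2) taking a maximum-intensity projection over those n frames. Combining the two results by simple averaging is an assumption made here for illustration only.

```python
import numpy as np

def blend_frames(frames: np.ndarray, n: int = 3) -> np.ndarray:
    """frames: array of shape (num_frames, rows, cols). Returns blended frames."""
    blended = np.empty_like(frames, dtype=np.float32)
    for i in range(frames.shape[0]):
        window = frames[max(0, i - n + 1): i + 1].astype(np.float32)
        mean_img = window.mean(axis=0)           # (1) mean over n consecutive frames
        max_img = window.max(axis=0)             # (2) max-intensity projection
        blended[i] = 0.5 * (mean_img + max_img)  # assumed combination of the two
    return blended
```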
- one or more B-line analysis regions-of-interest 604 are determined for the ultrasound video loop 602. That is, a B-line analysis region-of-interest (ROI) 604 is determined for each imaging frame of the (pre-processed) ultrasound video loop 602.
- the B-line analysis ROI 604 may be determined based on the pleural line of the subject 108 seen in the imaging frames of the ultrasound video loop 602.
- the term “pleural line” refers to the interface between the soft tissues (fluid-rich) of the chest wall and the lung tissue (gas-rich).
- the pleural line presents as a hyperechoic line and represents the junction of the visceral and the parietal pleura. Because B-lines appear in lung ultrasound images to originate from the pleural line, the B-line analysis ROI 604 may be sectioned off after detecting and tracking the pleural line through the frames of the ultrasound video loop 602.
- this process begins, in a step 810, with identifying the presence and location of the pleural line from the first few frames (e.g., n frames) of the ultrasound video loop 602.
- the pleural line detection is based on analyzing at least two features, including but not limited to: (1) the intensity profile along the (vertical) depth direction within a region of the image frame likely containing the pleural line, and (2) the motion of the lung above and below the likely pleural line candidate within this region of the frame.
- the region of the image frame most likely to contain the pleural line is typically the upper half of the image, approximately 1 to 5 cm in depth.
- the process then includes, in a step 820, tracking the position of the pleural line throughout the remaining frames of the ultrasound video loop 602.
- the pleural line tracking algorithm may be similar in design to the pleural line detection algorithm, but operates on fewer frames, uses a smaller search range (e.g., region 815B instead of region 815A), and applies a reduced image sampling density to speed up the processing time required for each frame.
- a B-line analysis ROI 604 can be defined and updated in each frame of the ultrasound video loop 602 based on the location of the tracked pleural line.
- the B-line analysis ROI 604 may be set to originate a predefined distance beneath the detected pleural line and extend toward the bottom of the image frame to a fixed image depth.
- the predefined starting position of the B-line analysis ROI 604 may be 0.25 cm beneath the detected pleural line.
- the fixed image depth may be transducer-dependent.
- a default B-line analysis ROI 604 may be defined based on a default ROI.
- the default ROI may be based on the specifications of the ultrasound imaging probe 102 used to obtain the ultrasound video loop 602.
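A short sketch of how the B-line analysis ROI could be positioned from the tracked pleural line, per the description above: the ROI starts a predefined distance (e.g., 0.25 cm) beneath the detected pleural line and extends down to a fixed, transducer-dependent image depth. The pixel-spacing parameter and helper names are assumptions.

```python
def bline_analysis_roi(pleural_line_row: int, num_rows: int, cm_per_pixel: float,
                       offset_cm: float = 0.25, max_depth_cm: float = 12.0) -> tuple[int, int]:
    """Return (top_row, bottom_row) of the B-line analysis ROI for one frame."""
    top = pleural_line_row + int(round(offset_cm / cm_per_pixel))      # below pleural line
    bottom = min(num_rows, int(round(max_depth_cm / cm_per_pixel)))    # fixed image depth
    return top, bottom
```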
- once B-line analysis ROIs 604 have been determined for each frame of the ultrasound video loop 602, the B-line and merged B-line analysis may then proceed.
- the B-line analysis ROI of each image frame of the ultrasound video loop 602 may then be analyzed to identify one or more B-line candidates 606.
- B-line candidates 606 can be discrete B-line candidates and/or merged B-line candidates.
- the one or more B-line candidates 606 for each frame of the ultrasound video loop 602 may be identified based on a local “peak” in the intensity profile computed inside of the B-line analysis ROI 604 defined for the corresponding frame.
- for example, with reference to FIG. 9, an annotated example of an image frame 900A of an ultrasound video loop 602 is illustrated according to certain aspects of this operation. As shown in the example of FIG. 9, the image frame 900A is annotated to show the pleural line and the B-line analysis ROI 604 defined for the image frame 900A. Below the image frame 900A, a representative intensity profile 910 for the ROI 604 is illustrated, which can be computed as the median of each column of the ROI 604.
- an intensity profile 910 for the B-line analysis ROI 604 for each image frame (e.g., frame 900A) of the ultrasound video loop 602 can be generated and one or more B-line candidates 606 may be detected by identifying one or more local peaks in the intensity profile 910.
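A minimal sketch of this candidate-detection step for a single frame: compute a column-wise median intensity profile of the B-line analysis ROI, smooth it with an averaging kernel, and treat local peaks as B-line candidates, each with a small candidate ROI around the peak. The kernel size, peak prominence, and half-width values are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d
from scipy.signal import find_peaks

def find_bline_candidates(roi: np.ndarray, kernel: int = 5, half_width: int = 3):
    """roi: 2D array (depth x lateral). Returns a list of (left_col, right_col) candidate ROIs."""
    profile = np.median(roi, axis=0)                    # median of each column of the ROI
    smoothed = uniform_filter1d(profile, size=kernel)   # moving-average smoothing
    peaks, _ = find_peaks(smoothed, prominence=1.0)     # local peaks = B-line candidates
    cols = roi.shape[1]
    return [(max(0, p - half_width), min(cols - 1, p + half_width)) for p in peaks]
```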
- a set of B-line features may be extracted from the imaging frames of the ultrasound video loop 602 for each of the B-line candidates 606 identified, as discussed in more detail below.
- each B-line candidate 606 may be associated with a B-line candidate region-of-interest (i.e., a B-line candidate ROI).
- in FIG. 9, an annotated image frame 900B is illustrated showing seven B-line candidate ROIs 912 associated with the seven B-line candidates 606 seen in the image frame 900A.
- a B-line candidate ROI that is a smaller, local region-of-interest and is centered around each B-line candidate 606 may be computed prior to extracting the sets of B-line features.
- a B-line candidate ROI 912 may be defined for each of the one or more B-line candidates 606 (i.e., each of the one or more local peaks identified), wherein each B-line candidate ROI 912 corresponds to a B-line candidate 606.
- the B-line features may then be extracted from within each B-line candidate ROI 912 for subsequent B-line and merged B-line classification.
- each B-line candidate ROI 912 may be the length of the B-line analysis ROI 604, while the width of each B-line candidate ROI 912 may be a predefined number of columns (i.e., pixels) to the left and right of the identified local peak (e.g., three pixels to the left and three pixels to the right of the peak’s local maximum, etc.).
- the size of the B-line candidate ROI 912 may be varied, as described in more detail below.
- the detection of B-line candidates 606 may be computed at different spatial scales, including but not limited to, two or more different spatial scales.
- the step of detecting one or more B-line candidates 606 within the B-line analysis ROI 604 of an image frame of the ultrasound video loop 602 may include: (i) smoothing the intensity profile of the B-line analysis ROI 604 of the image frame at two or more spatial scales; (ii) identifying one or more local peaks along the smoothed intensity profile(s); and (iii) defining a B-line candidate ROI 912 for each local peak identified, wherein each B-line candidate ROI 912 corresponds to a B-line candidate 606.
- the smoothing may be applied at two or more spatial scales using two or more smoothing kernels of different sizes, such that a smaller smoothing kernel results in less smoothing while a larger smoothing kernel results in greater smoothing.
- an example of an image frame 1000A with intensity profiles 1010A, 1010B that are processed at two different spatial scales is illustrated according to aspects of the present disclosure.
- when a first smoothing kernel (i.e., a small smoothing kernel) is applied, four discrete B-line candidates 606 can be identified from the local peaks of the median intensity profile 1010A.
- when a second smoothing kernel (i.e., a larger smoothing kernel) is applied, a single, wide merged B-line candidate is identified from the local peak of the median intensity profile 1010B.
- the second smoothing kernel may be larger than the first smoothing kernel by at least about 50%, including at least about 75%, at least about 100%, at least about 150%, at least about 200%, and/or at least about 300%.
- the size ratio of the first smoothing kernel to the second smoothing kernel may be about 1: 1.5, about 1: 1.75, about 1:2, about 1:2.5, about 1:3, and/or about 1:4.
- the size of the second smoothing kernel can be at least two to three times larger than the size of the first smoothing kernel.
- smoothing the intensity profile of a B-line candidate ROI 604 can involve an averaging-type filter (e.g., a moving average filter).
- other smoothing filters with different smoothing kernels may also be used, including but not limited to a Gaussian smoothing filter.
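The following sketch illustrates detection at two spatial scales as described above: a small kernel preserves discrete B-line peaks, while a kernel two to three times larger merges neighboring peaks so that a wide merged B-line appears as a single local peak. The specific kernel sizes shown are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d
from scipy.signal import find_peaks

def candidates_at_two_scales(profile: np.ndarray, small_kernel: int = 5,
                             large_kernel: int = 15):
    """Return peak indices at a small scale (discrete B-lines) and a large scale (merged B-lines)."""
    discrete_peaks, _ = find_peaks(uniform_filter1d(profile, size=small_kernel),
                                   prominence=1.0)
    merged_peaks, _ = find_peaks(uniform_filter1d(profile, size=large_kernel),
                                 prominence=1.0)
    return discrete_peaks, merged_peaks
```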
- the B-line candidate ROIs 912 may also be defined at two or more different spatial scales. That is, in embodiments, one or more B-line candidate ROIs 912 may be defined at a first scale and at a second scale that is different from the first scale.
- an image frame 1100 presenting with two B-line candidates 606 is shown and annotated with B-line candidate ROIs 1110, 1120.
- each B-line candidate 606 has a narrower B-line candidate ROI 1110 (indicated using a solid line) as well as an expanded / wider B-line candidate ROI 1120 (indicated using a dotted line).
- the width of the expanded B-line candidate ROIs 1120 may be at least about 50% larger than the width of the narrower B-line candidate ROIs 1110, including but not limited to, at least about 100% larger, at least about 150% larger, and/or at least about 200% larger than the width of the narrower B-line candidate ROIs 1110. Put another way, in specific embodiments, the width of the expanded B-line candidate ROIs 1120 may be two or three times larger than the width of the narrower B-line candidate ROIs 1110.
- a set of B-line features 608 may be extracted from the imaging frames of the ultrasound video loop 602 for each of the B-line candidates 606 identified. That is, for each B-line candidate identified in an ultrasound video loop 602, a set of B- line features 608 are extracted from the corresponding B-line candidate ROI.
- a plurality of sets of B-line features 608 may be generated that correspond to a plurality of B-line candidates 606 identified in an ultrasound video loop 602.
- each set of B-line features can include one or more image features extracted from the local image region around the B-line candidate 606 (i.e., the B-line candidate ROIs).
- the image features can be descriptive of the visual appearance of the B-line candidates.
- each image feature may be represented as a continuous floating point value(s) or feature score(s).
- One or more of the image features may be applied at multiple spatial resolution scales (e.g., a smaller spatial scale and a larger spatial scale). Thus, where an image feature is computed at two or more spatial scales, two or more feature scores are determined instead of one.
- the set of B-line features can include feature scores for one or a combination of image features, which may be statistical features of the ultrasound image.
- feature scores can be determined for a number of B-line features, which can include, but is not limited to, a B-line extent, a B-line intensity variance, a temporal profile, a peak local prominence, a peak amplitude, a B-line min/max intensity, a B-line homogeneity, and/or a B-line width.
- the set of B-line features includes feature scores for at least the B-line extent for each of the B-line candidates.
- a “B-line extent” refers to the extent to which a B-line candidate extends vertically to the bottom of the ultrasound image.
- a “B-line extent” may be calculated as a percentage of pixel values along the center of the B-line candidate having an intensity that exceeds the background intensity around the candidate.
- other ways of determining the B-line extent may be implemented.
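A minimal sketch of one such calculation of the B-line extent: the percentage of pixels along the center column of the candidate ROI whose intensity exceeds the background intensity around the candidate. Estimating the background from the ROI's outermost columns is an assumption made for illustration.

```python
import numpy as np

def bline_extent(candidate_roi: np.ndarray) -> float:
    """candidate_roi: 2D array (depth x width) centered on the B-line candidate."""
    center = candidate_roi[:, candidate_roi.shape[1] // 2]
    background = np.median(np.concatenate([candidate_roi[:, 0], candidate_roi[:, -1]]))
    return float(np.mean(center > background) * 100.0)  # percent of depth exceeding background
```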
- one or more of these features may be calculated at different spatial resolution scales and therefore result in two or more feature scores for the same image feature that can be passed to the B-line and merged B-line classifiers 610, 612.
- at least the peak local prominence image feature and/or the B-line width image feature may be computed at two or more different spatial resolution scales.
- one or more image features may result in two or more feature scores at the same spatial resolution scale.
- At least the B-line min/max intensity image feature and the B-line homogeneity image feature can result in two feature scores that are included in the set of B-line features 608 and passed to the B-line and/or merged B-line classifiers 610, 612.
- each of the classifiers 610, 612 may be one or more trained models, such as one or more trained machine learning models.
- the B-line classifier 610 may be a trained machine learning model that is configured to receive a plurality of B-line features (i.e., feature scores for a plurality of image features) associated with a single B-line candidate and generate a likelihood that the single B-line candidate is a probable B-line.
- the merged B-line classifier 612 may be a different trained machine learning model that is configured to receive a plurality of B-line features (i.e., feature scores for a plurality of image features) associated with a single B-line candidate and generate a likelihood that the single B-line candidate is a probable merged B-line.
- a set of B-line features 608 associated with that candidate can be provided as a floating-point vector (X_Features) to a first machine learning classifier 610 (i.e., the B-line classifier).
- the B-line classifier 610 can be configured to output a single floating-point prediction score between 0 and 1 (y_Bline), where a higher score indicates a higher predicted likelihood of the candidate being a B-line.
- the B-line classifier 610 is a logistic regression model with a plurality of optimizable parameters that are tuned during training of the model on training datasets.
- the optimizable parameters can include, for example and without limitation, feature weights (W_Bline) that determine the relative contribution of each feature to the prediction and a final classification threshold (T_Bline).
- the input vector (X_Features) can have 14 parameters.
- the logistic regression model can have 15 optimizable parameters, including 14 feature weights (W_Bline) that determine the relative contribution of each feature to the prediction and a final classification threshold (T_Bline).
- B-line candidates 606 with a prediction score meeting or exceeding the classification threshold (y_Bline ≥ T_Bline) may be classified as probable B-lines 614, whereas B-line candidates 606 with scores that do not meet the threshold may be classified as false candidates and ignored.
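A rough sketch of such a logistic regression classifier: a 14-element feature vector X_Features is mapped to a score y_Bline in [0, 1] via a sigmoid of a weighted sum, and candidates with y_Bline ≥ T_Bline are kept as probable B-lines. Omitting a bias term follows the 14-weights-plus-threshold parameterization described above; the default threshold value is an assumption.

```python
import numpy as np

def logistic_score(x_features: np.ndarray, weights: np.ndarray) -> float:
    """Sigmoid of the weighted sum of the 14 B-line feature scores (y_Bline)."""
    return float(1.0 / (1.0 + np.exp(-np.dot(weights, x_features))))

def classify_bline(x_features: np.ndarray, weights: np.ndarray,
                   threshold: float = 0.5) -> bool:
    """True if the candidate is a probable B-line (y_Bline >= T_Bline)."""
    return logistic_score(x_features, weights) >= threshold
```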
- more than one classification threshold value may be used by the B-line classifier 610. That is, in some embodiments, two classification threshold values (T_Bline and T_Bline_initial) may be used, where T_Bline_initial > T_Bline. The higher threshold (T_Bline_initial) may be applied to the B-line candidate having the highest prediction score among all candidates in a given image frame, that is, max(y_Bline) ≥ T_Bline_initial. According to these embodiments, all remaining B-line candidates within the given image frame are compared against the default threshold T_Bline only if this initial condition (max(y_Bline) ≥ T_Bline_initial) is true. As a result, better (i.e., more specific) filtering of negative videos (i.e., maximum B-line count of zero) from positive videos (i.e., maximum count of at least 1) may be obtained.
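A short sketch of this two-threshold, frame-level filtering: the higher threshold T_Bline_initial is applied to the highest-scoring candidate in the frame, and only if that candidate passes are the remaining candidates compared against the default threshold T_Bline. Returning no B-lines when the initial condition fails, as well as the threshold values shown, are assumptions.

```python
def filter_frame_candidates(scores: list[float], t_bline: float = 0.5,
                            t_bline_initial: float = 0.7) -> list[int]:
    """Return indices of candidates kept as probable B-lines for one frame."""
    if not scores or max(scores) < t_bline_initial:
        return []  # initial condition not met: frame treated as having no B-lines
    return [i for i, y in enumerate(scores) if y >= t_bline]
```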
- the set of B-line features 608 associated with that candidate can also be provided as a floating-point vector (X_Features) to a second machine learning classifier 612 (i.e., the merged B-line classifier).
- the B-line features 608 provided to the merged B-line classifier 612 may include feature scores for the same image features that were provided to the B-line classifier 610.
- the B-line features 608 provided to the merged B-line classifier 612 may be different from those provided to the B-line classifier 610 (i.e., may include one or more different feature scores and/or feature scores for different image features).
- the merged B-line classifier 612 outputs a single floating-point prediction score between 0 and 1 (y_Merged), where a higher score indicates a higher predicted likelihood of the candidate being a merged B-line.
- the merged B-line classifier 612 is a logistic regression model with a plurality of optimizable parameters that are tuned during training of the model on training datasets.
- the optimizable parameters can include, for example and without limitation, feature weights (W_Merged) that determine the relative contribution of each feature to the prediction and a final classification threshold (T_Merged).
- the input vector (X_Features) can have 14 parameters.
- the logistic regression model can have 15 optimizable parameters, including 14 feature weights (W_Merged) that determine the relative contribution of each feature to the prediction and a final classification threshold (T_Merged). In embodiments, B-line candidates 606 with a prediction score meeting or exceeding the classification threshold (y_Merged ≥ T_Merged) may be classified as probable merged B-lines 616, whereas B-line candidates 606 with scores that do not meet the threshold may be classified as discrete B-lines (or false candidates) and ignored.
- one or more smoothing kernels may be applied to the intensity profiles of the image frames of the ultrasound video loop 602 as part of the process of detecting and classifying one or more B-line candidates 606. Accordingly, the conditions under which a candidate is counted as a probable B-line or a probable merged B-line may depend on how many smoothing kernels are applied, as illustrated in FIGS. 12 and 13.
- the process 1200 includes: defining a B-line analysis ROI 1202 for the image frame; computing an intensity profile 1204 within the B-line analysis ROI 1202; applying one smoothing kernel to the intensity profile 1204 to generate a smoothed intensity profile 1206; detecting one or more B-line candidates 1208 based on the smoothed intensity profile 1206; defining a B-line candidate ROI 1210 for each of the one or more B-line candidates 1208 detected; extracting a set of B-line features 1212 for each B-line candidate 1208 based on the corresponding B-line candidate ROIs 1210; and independently passing these sets of B-line features 1212 to a B-line classifier 610 and a merged B-line classifier 612.
- the B-line classifier 610 will determine whether each of the B-line candidates 1208 is a probable B-line 614, while the merged B-line classifier 612 will determine whether each of the B-line candidates 1208 is a probable merged B-line 616.
- the process 1200 includes checking each probable B-line 614 to determine whether the underlying B-line candidate 1208 was also positively classified as a probable merged B-line 616. If so, the merged B-line classification takes precedence and the B-line candidate 1208 will be recorded as a merged B-line rather than a B-line. Otherwise, the B-line classification may be recorded.
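A minimal sketch of this per-candidate reconciliation: if a candidate is positively classified by both classifiers, the merged B-line classification takes precedence; otherwise the discrete B-line classification is recorded. The simple label strings are assumptions for illustration.

```python
def reconcile_labels(is_bline: list[bool], is_merged: list[bool]) -> list[str]:
    """Per-candidate labels for one frame, with merged B-lines taking precedence."""
    labels = []
    for bline, merged in zip(is_bline, is_merged):
        if merged:
            labels.append("merged B-line")  # merged classification takes precedence
        elif bline:
            labels.append("B-line")
        else:
            labels.append("none")
    return labels
```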
- the process 1200 illustrated in FIG. 12 may be repeated for each image frame of one or more ultrasound video loops 602. Once the process 1200 is repeated for a particular ultrasound video loop 602, video-level assessments may be performed (as shown in FIG. 5 and discussed in more detail below).
- two or more smoothing kernels may be applied to the intensity profiles of the image frames of an ultrasound video loop 602 as part of the process of detecting one or more B-line candidates 606.
- a process 1300 for detecting and classifying B-line candidates 1308 of a single image frame using two smoothing kernels is illustrated.
- the process 1300 includes: defining a B-line analysis ROI 1302 for the image frame; computing an intensity profile 1304 within the B-line analysis ROI 1302; applying a first smoothing kernel to the intensity profile 1304 to generate a first smoothed intensity profile 1306; detecting one or more B-line candidates 1308 based on the first smoothed intensity profile 1306; defining a B-line candidate ROI 1310 for each of the one or more B-line candidates 1308 detected; extracting a set of B-line features 1312 for each B-line candidate 1308 based on the corresponding B-line candidate ROIs 1310; passing these sets of B-line features 1312 to a B-line classifier 610; applying a second smoothing kernel to the intensity profile 1304 to generate a second smoothed intensity profile 1316; detecting one or more B-line candidates 1318 based on the second smoothed intensity profile 1316; defining a B-line candidate ROI 1320 for each of the one or more B-line candidates 1318 detected; extracting a set of B-line features for each B-line candidate 1318 based on the corresponding B-line candidate ROIs 1320; and passing these sets of B-line features to the merged B-line classifier 612.
- the B-line classifier 610 will determine whether each of the B-line candidates 1308 is a probable B-line 614, while the merged B-line classifier 612 will determine whether each of the B-line candidates 1318 is a probable merged B-line 616.
- the process 1300 also includes, in a step 1324, searching the image frame to determine whether any of the detected B-lines 614 correspond to (i.e., overlap with) a detected merged B-line 616. If so, the merged B-line classification takes precedence and the B-line candidate 1308 will be recorded as a merged B-line rather than a B-line. Otherwise, the B-line classification may be recorded.
- the process 1300 illustrated in FIG. 13 may be repeated for each image frame of one or more ultrasound video loops 602. Once the process 1300 is repeated for a particular ultrasound video loop 602, video-level assessments may be performed (as shown in FIG. 5 and discussed in more detail below).
- a video-level assessment of the probable B-lines and a video-level assessment of the merged B-lines may be performed for a given ultrasound video loop 602.
- the process 500 can include: in a step 570, processing the ultrasound video loop 602 to determine one or more video-level B-line parameters; and in a step 580, processing the ultrasound video loop 602 to determine one or more video-level merged B-line parameters.
- the step 570 can include computing at least a first video-level parameter, such as a “maximum B-line count” for the video loop 602.
- the maximum B-line count may be computed as the maximum number of discrete B-lines 614 appearing in any single frame of the video loop.
- the maximum B-line count may be reported (i.e., output via an electronic device 108) in the form of categories, namely: “0 B-lines”, “1-2 B-lines”, “3+ B-lines”, and/or the like.
- the raw integer count may be reported (i.e., output) instead.
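In code, the maximum B-line count and its reported category could be computed along the following lines; the category labels mirror the examples above, and everything else is illustrative rather than prescribed by the disclosure.

```python
def max_b_line_count(per_frame_b_lines):
    """per_frame_b_lines: one list of probable discrete B-lines per imaging frame."""
    return max((len(b_lines) for b_lines in per_frame_b_lines), default=0)

def count_to_category(count):
    """Bin the maximum count into the reporting categories named above."""
    if count == 0:
        return "0 B-lines"
    if count <= 2:
        return "1-2 B-lines"
    return "3+ B-lines"
```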
- the step 580 can include computing at least a second video-level parameter, such as a determination of whether the video loop 602 is positive for merged B-lines.
- whether a video loop 602 is positive for merged B-lines may depend on one or more of the following: (i) the number of frames containing at least one merged B-line; (ii) the average or total number of merged B-lines detected throughout the video; (iii) the average or total width of all merged B-lines throughout the video; (iv) the average or total prediction confidence score for all merged B-lines throughout the video; and/or (v) any combination of the above.
- a video loop 602 may be determined to be positive for merged B-lines if the number of imaging frames containing at least one probable merged B-line meets or exceeds a predefined minimum number of frames (e.g., a threshold optimized for each transducer type).
- the system 100 may be configured to report the maximum B-line count (or associated category) only if the ultrasound video loop 602 is not positive for merged B-lines. If the video loop 602 is positive, a separate category (“Merged (1)”) can be reported instead. Alternatively, the maximum B-line count (or associated category) may be reported along with the “Merged” category in all cases.
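The video-level merged B-line decision and the reporting rule just described could be sketched as follows. The threshold values and the choice of which criteria to combine are placeholders; the disclosure only notes that such thresholds may be optimized per transducer type.

```python
def is_merged_positive(per_frame_merged, min_frames=25, min_mean_conf=None):
    """per_frame_merged: one list of (width, confidence) tuples per imaging frame,
    holding the probable merged B-lines detected in that frame."""
    frames_with_merged = sum(1 for merged in per_frame_merged if merged)
    positive = frames_with_merged >= min_frames           # criterion (i)
    if positive and min_mean_conf is not None:            # optional criterion (iv)
        confs = [conf for merged in per_frame_merged for _, conf in merged]
        positive = bool(confs) and sum(confs) / len(confs) >= min_mean_conf
    return positive

def video_level_report(count_category, merged_positive):
    """Reporting rule described above: the merged category supersedes the count category."""
    return "Merged" if merged_positive else count_category
```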
- users 114 of the system 100 may be able to more quickly and more accurately visualize lung ultrasound imaging data 104 for a patient 106, including in real-time.
- while evaluation of B-lines and merged B-lines is important in screening, diagnosis, and management of disease progression and treatment, it is appreciated that it can be difficult or impossible to quickly and accurately detect certain pathological features seen in lung ultrasounds, even for experienced users.
- the systems and methods disclosed herein not only improve the quantification of B-lines and merged B-lines in lung ultrasound examinations, but also provide more consistent lung ultrasound interpretations and facilitate utilization by more novice users.
- the systems and methods described herein can include generating a graphical user interface comprising B-line data produced for one or more lung ultrasound video loops 104, 602, and displaying the graphical user interface on a display device 110. For example, as shown in the example of FIG. 3.
- the process 300 can include: in a step 340, outputting a zero B-line count if none of the image frames of an ultrasound video loop 104, 602 contain any B-lines; in a step 360, outputting the maximum B-line count for the video loop 104, 602 if the ultrasound video loop 104, 602 is not positive for merged B-lines; and in a step 370, outputting a merged B-line indicator if the ultrasound video loop 104, 602 is positive for merged B-lines.
- steps 340, 360, 370, and/or one or more other steps may be summarized in the step 590 of the process 500, which includes outputting frame-level as well as video-level results.
- the outputted results can include the frame-level and video-level assessments described above, as well as one or more representative image frames (with or without annotation).
- the graphical user interface can include one or more of the following: one or more image frames from the ultrasound video loop 104, 602; a video-level output category; a lung zone indicator; an overlay of B-line and/or merged B-line indicators; an overlay indicating the B-line analysis ROI; and/or the like, including combinations thereof.
- FIG. 14 illustrates a first exemplary graphical user interface 1400 comprising one or more image frames from the ultrasound video loop 104, 602, a video-level output category, a lung zone indicator, an overlay of B-line and/or merged B-line indicators, and an overlay indicating the B-line analysis ROI.
- the graphical user interface can include a lung zone summary for one or more ultrasound video loops 104, 602 taken across all scanned lung zones.
- a second exemplary graphical user interface 1500 is illustrated according to aspects of the present disclosure.
- the graphical user interface 1500 includes a visual overlay of eight front-facing lung zones (R1, R2, R3, R4, L1, L2, L3, and L4) along with video-level output categories for each of these lung zones, and a visual overlay of four back-facing lung zones (R5, R6, L5, L6) along with video-level output categories for each of these lung zones.
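The per-zone summary behind such a display could be as simple as a mapping from zone label to video-level category; the category values below are purely illustrative.

```python
# Hypothetical lung zone summary backing a FIG. 15-style overlay.
zone_summary = {
    "R1": "0 B-lines", "R2": "1-2 B-lines", "R3": "3+ B-lines", "R4": "Merged",
    "L1": "0 B-lines", "L2": "0 B-lines",   "L3": "1-2 B-lines", "L4": "0 B-lines",
    "R5": "0 B-lines", "R6": "1-2 B-lines", "L5": "0 B-lines",   "L6": "Merged",
}

def zones_with_findings(summary):
    """Example consumer: list zones whose category indicates notable findings."""
    return [zone for zone, category in summary.items()
            if category in ("3+ B-lines", "Merged")]
```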
- the graphical user interface can include multiple image frames from a lung ultrasound video loop 104, 602 that correspond to ultrasound line scan data obtained using different transducers of the ultrasound imaging probe 102.
- the graphical user interface 1600 includes image frames from three separate transducers of an ultrasound probe 102 (labeled transducers S4-1, L12-4, and C5-2), which are annotated to indicate the B-lines and merged B-lines, the B-line analysis ROI, and the video-level output category.
- the phrase “at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements.
- This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified.
- the present disclosure can be implemented as a system, a method, and/or a computer program product at any possible technical detail level of integration
- the computer program product can include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure.
- the computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device.
- the computer readable storage medium can be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- a non-exhaustive list of more specific examples of the computer readable storage medium comprises the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing.
- a computer readable storage medium is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
- Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network.
- the network can comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers.
- a network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
- Computer readable program instructions for carrying out operations of the present disclosure can be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, comprising an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages.
- the computer readable program instructions can execute entirely on the user’s computer, partly on the user’s computer, as a standalone software package, partly on the user’s computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer can be connected to the user's computer through any type of network, comprising a local area network (LAN) or a wide area network (WAN), or the connection can be made to an external computer (for example, through the Internet using an Internet Service Provider).
- electronic circuitry comprising, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) can execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure.
- the computer readable program instructions can be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- These computer readable program instructions can also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture comprising instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
- the computer readable program instructions can also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
- the flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various examples of the present disclosure.
- each block in the flowchart or block diagrams can represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
- the functions noted in the blocks can occur out of the order noted in the Figures.
- two blocks shown in succession can, in fact, be executed substantially concurrently, or the blocks can sometimes be executed in the reverse order, depending upon the functionality involved.
- each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
- inventive embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, inventive embodiments may be practiced otherwise than as specifically described and claimed.
- inventive embodiments of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein.
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Biomedical Technology (AREA)
- Medical Informatics (AREA)
- Veterinary Medicine (AREA)
- Biophysics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Pathology (AREA)
- Radiology & Medical Imaging (AREA)
- Public Health (AREA)
- Heart & Thoracic Surgery (AREA)
- Physics & Mathematics (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Physiology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Ultrasonic Diagnosis Equipment (AREA)
Abstract
Provided herein are systems and methods of quantifying B-lines and merged B-lines in lung ultrasound images that improve upon the efficiency and accuracy of detecting such B-lines and merged B-lines, thereby facilitating the use of this imaging modality in additional point-of- care settings such as emergency medicine and critical care settings. The systems described herein include an ultrasound imaging device and an electronic device in communication with the ultrasound imaging device that is configured to analyze one or more lung ultrasound video loops using a trained B-line classifier and a trained merged B-line classifier. In particular aspects, the one or more ultrasound video loops may be analyzed at two or more spatial resolution scales to extract the B-line image features that are used by the B-line and merged B-line classifiers to predict a likelihood that a B-line candidate is either a B-line or a merged B-line.
Description
SYSTEMS AND METHODS OF QUANTIFYING B-LINES AND MERGED B-LINES IN LUNG ULTRASOUND IMAGES
GOVERNMENT INTEREST
[0001] This invention was made with United States government support awarded by the United States Department of Health and Human Services under the grant number HHS/ASPR/BARDA 75A50120C00097. The United States has certain rights in this invention.
Field of the Disclosure
[0002] The present disclosure relates generally to ultrasound imaging systems and methods of processing ultrasound images, and more specifically to ultrasound imaging systems and methods that improve the quantification of certain pathological features seen in lung ultrasound images.
Background
[0003] Lung ultrasound is an imaging technique that can be used at the point-of-care to assess the lungs in a variety of settings, including emergency medicine and critical care. This technique has been used widely as a portable, non-invasive, and radiation-free modality for evaluation of pulmonary and infectious diseases. One characteristic pathological feature seen in lung ultrasounds is the B-line. B-lines are defined as discrete, vertical, hyperechoic artifacts that appear as long bands originating at the pleural line and extend vertically the length of the image. The number of B-lines (i.e., B-line count) is known to correlate with accumulation of fluid in the interstitial spaces of the lung. As the level of interstitial fluid builds up and becomes more severe, more B-lines will become present, which causes the B-lines to eventually merge. As a result, these merged B-lines (otherwise known as confluent B-lines) become difficult to distinguish from one another and impossible to count accurately.
Summary of the Disclosure
[0004] As described in more detail below, the present disclosure relates to ultrasound imaging systems and methods that improve the quantification of B-lines and merged B-lines, which are
pathological features seen in lung ultrasound images. Because B-lines and merged B-lines may be observed in a variety of conditions, automated evaluation of these pathological features can play an important role in screening, diagnosis, and/or management of disease progression, for example, by standardizing interpretation among experts and/or enabling adoption by under-trained lung ultrasound users.
[0005] According to some embodiments of the present disclosure, a system for viewing and analyzing lung ultrasound images is provided, including an ultrasound imaging device comprising one or more ultrasound imaging transducers configured to generate a lung ultrasound video loop of a subject, the lung ultrasound video loop comprising a plurality of lung ultrasound imaging frames; and an electronic device in communication with the ultrasound imaging device. The electronic device can include a display device configured to display a graphical user interface, a computer-readable storage medium having stored thereon computer-readable instructions to be executed by one or more processors, and one or more processors configured by the computer- readable instructions stored on the computer-readable storage medium to perform the following operations: (i) obtain a lung ultrasound video loop of a subject, the lung ultrasound video loop comprising a plurality of lung ultrasound imaging frames; (ii) analyze the lung ultrasound video loop using a B-line classifier and a merged B-line classifier to generate B-line data for the lung ultrasound video loop; and (iii) output, via the display device, a graphical user interface comprising the B-line data generated for the lung ultrasound video loop.
[0006] In an aspect, the B-line data for the lung ultrasound video loop can be generated by: pre-processing each imaging frame of the lung ultrasound video loop to obtain a pre-processed lung ultrasound video loop; determining a B-line analysis region-of-interest for each imaging frame of the pre-processed lung ultrasound video loop; analyzing the B-line analysis region-of- interest of each imaging frame to identify one or more B-line candidates; for each B-line candidate, extracting a set of B-line features from the imaging frames of the pre-processed lung ultrasound video loop; classifying each of the B-line candidates, based on the corresponding set of B-line features, using the B-line classifier and the merged B-line classifier to predict a likelihood that the B-line candidate is a probable B-line and/or a probable merged B-line; identifying one or more probable B-lines and/or probable merged B-lines in each imaging frame of the pre-processed ultrasound video loop based on the classification of each of the B-line candidates; determining a maximum B-line count for the lung ultrasound video loop, wherein the maximum B-line count is
the maximum number of probable B-lines appearing in any single imaging frame of the pre- processed lung ultrasound video loop; and determining whether the lung ultrasound video loop is positive for merged B-lines based on the classification of each of the B-line candidates.
[0007] In an aspect, the B-line classifier can be a first trained machine learning model configured to receive a plurality of B-line features as an input and output a likelihood that a B-line candidate is a probable B-line. Further, the merged B-line classifier can be a second trained machine learning model configured to receive a plurality of B-line features as an input and output a likelihood that a B-line candidate is a probable merged B-line.
[0008] In an aspect, the lung ultrasound video loop is positive for merged B-lines if the number of imaging frames of the pre-processed lung ultrasound video loop that contain a probable merged B-line meets or exceeds a predefined minimum number of imaging frames.
[0009] In an aspect, identifying one or more B-line candidates within the B-line analysis region-of-interest can include performing the following operations for each imaging frame of the pre-processed lung ultrasound imaging video loop: smoothing an intensity profile of the B-line analysis region-of-interest for the corresponding imaging frame; identifying one or more local peaks along the smoothed intensity profile of the B-line analysis region-of-interest for the corresponding imaging frame; and defining a B-line candidate region-of-interest for each of the one or more local peaks identified, wherein each B-line candidate region-of-interest corresponds to a B-line candidate.
[0010] In an aspect, a set of B-line features for each B-line candidate can be extracted from the B-line candidate region-of-interest defined in the imaging frames of the pre-processed lung ultrasound video loop.
[0011] In an aspect, identifying one or more B-line candidates within the B-line analysis region-of-interest can include performing the following operations for each imaging frame of the pre-processed lung ultrasound imaging video loop: smoothing an intensity profile of the B-line analysis region-of-interest for the corresponding imaging frame using a first smoothing kernel; identifying one or more local peaks along the intensity profile smoothed using the first smoothing kernel; defining a B-line candidate region-of-interest for each of the one or more local peaks identified in the intensity profile smoothed using the first smoothing kernel, wherein each B-line candidate region-of-interest corresponds to a B-line candidate; smoothing the intensity profile of the B-line analysis region-of-interest for the corresponding imaging frame using a second
smoothing kernel, wherein the second smoothing kernel is a different size than the first smoothing kernel; identifying one or more local peaks along the intensity profile smoothed using the second smoothing kernel; and defining a B-line candidate region-of-interest for each of the one or more local peaks identified in the intensity profile smoothed using the second smoothing kernel, wherein each B-line candidate region-of-interest corresponds to an additional B-line candidate.
[0012] In an aspect, a first set of B-line features for each B-line candidate can be extracted from the B-line candidate regions-of-interest defined based on the intensity profile smoothed using the first smoothing kernel, and a second set of B-line features for each B-line candidate can be extracted from the B-line candidate regions-of-interest defined based on the intensity profile smoothed using the second smoothing kernel.
[0013] In an aspect, the set of B-line features extracted from the imaging frames of the pre- processed lung ultrasound video loop can include at least one B-line feature measured at two or more different spatial scales.
[0014] According to other embodiments of the present disclosure, an image processing method is provided, including pre-processing each imaging frame of a lung ultrasound video loop to obtain a pre-processed lung ultrasound video loop, wherein the lung ultrasound video loop comprises a plurality of image frames; determining a B-line analysis region-of-interest for each imaging frame of the pre-processed lung ultrasound video loop; analyzing the B-line analysis region-of-interest of each imaging frame to identify one or more B-line candidates; for each B-line candidate, extracting a set of B-line features from the imaging frames of the pre-processed lung ultrasound video loop; classifying each of the B-line candidates, based on the corresponding set of B-line features, using a B-line classifier and a merged B-line classifier to predict a likelihood that the B- line candidate is a probable B-line and/or a probable merged B-line; identifying one or more probable B-lines and/or probable merged B-lines in each imaging frame of the pre-processed ultrasound video loop based on the classification of each of the B-line candidates; determining a maximum B-line count for the lung ultrasound video loop, wherein the maximum B-line count is the maximum number of probable B-lines appearing in any single imaging frame of the pre- processed lung ultrasound video loop; and determining whether the lung ultrasound video loop is positive for merged B-lines based on the classification of each of the B-line candidates.
[0015] In an aspect, the B-line classifier can be a first trained machine learning model configured to receive a plurality of B-line features as an input and output a likelihood that a B-line
candidate is a probable B-line. Further, the merged B-line classifier can be a second trained machine learning model configured to receive a plurality of B-line features as an input and output a likelihood that a B-line candidate is a probable merged B-line.
[0016] In an aspect, the lung ultrasound video loop is positive for merged B-lines if the number of imaging frames of the pre-processed lung ultrasound video loop that contain a probable merged B-line meets or exceeds a predefined minimum number of imaging frames.
[0017] In an aspect, identifying one or more B-line candidates within the B-line analysis region-of-interest can include performing the following operations for each imaging frame of the pre-processed lung ultrasound imaging video loop: smoothing an intensity profile of the B-line analysis region-of-interest for the corresponding imaging frame; identifying one or more local peaks along the smoothed intensity profile of the B-line analysis region-of-interest for the corresponding imaging frame; and defining a B-line candidate region-of-interest for each of the one or more local peaks identified, wherein each B-line candidate region-of-interest corresponds to a B-line candidate.
[0018] In an aspect, identifying one or more B-line candidates within the B-line analysis region-of-interest can include performing the following operations for each imaging frame of the pre-processed lung ultrasound imaging video loop: smoothing an intensity profile of the B-line analysis region-of-interest for the corresponding imaging frame using a first smoothing kernel; identifying one or more local peaks along the intensity profile smoothed using the first smoothing kernel; defining a B-line candidate region-of-interest for each of the one or more local peaks identified in the intensity profile smoothed using the first smoothing kernel, wherein each B-line candidate region-of-interest corresponds to a B-line candidate; smoothing the intensity profile of the B-line analysis region-of-interest for the corresponding imaging frame using a second smoothing kernel, wherein the second smoothing kernel is a different size than the first smoothing kernel; identifying one or more local peaks along the intensity profile smoothed using the second smoothing kernel; and defining a B-line candidate region-of-interest for each of the one or more local peaks identified in the intensity profile smoothed using the second smoothing kernel, wherein each B-line candidate region-of-interest corresponds to an additional B-line candidate.
[0019] According to still another embodiment of the present disclosure, a computer program product is provided. The computer program product can include a non-transitory computer- readable storage medium having stored thereon computer-readable instructions that, when
executed by one or more processors, cause the one or more processors to perform the following operations: (i) obtain a lung ultrasound video loop of a subject, the lung ultrasound video loop comprising a plurality of lung ultrasound imaging frames; (ii) pre-process each imaging frame of the lung ultrasound video loop to obtain a pre-processed lung ultrasound video loop; (iii) determine a B-line analysis region-of-interest for each imaging frame of the pre-processed lung ultrasound video loop; (iv) analyze the B-line analysis region-of-interest of each imaging frame to identify one or more B-line candidates; (v) for each B-line candidate, extract a set of B-line features from the imaging frames of the pre-processed lung ultrasound video loop; (vi) classify each of the B- line candidates, based on the corresponding set of B-line features, using a B-line classifier and a merged B-line classifier to predict a likelihood that the B-line candidate is a probable B-line and/or a probable merged B-line; (vii) identify one or more probable B-lines and/or probable merged B- lines in each imaging frame of the pre-processed ultrasound video loop based on the classification of each of the B-line candidates; (viii) determine a maximum B-line count for the lung ultrasound video loop, wherein the maximum B-line count is the maximum number of probable B-lines appearing in any single imaging frame of the pre-processed lung ultrasound video loop; (ix) determine whether the lung ultrasound video loop is positive for merged B-lines based on the classification of each of the B-line candidates; and (x) output, via a display device, a graphical user interface comprising the maximum B-line count and/or the determination of whether the lung ultrasound video loop is positive for merged B-lines.
[0020] These and other aspects of the various embodiments will be apparent from and elucidated with reference to the embodiments described hereinafter.
Brief Description of the Drawings
[0021] In the drawings, like reference characters generally refer to the same parts throughout the different views. Also, the drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the various embodiments.
[0022] FIG. 1 is a series of lung ultrasound imaging frames presenting with worsening lung condition severity in accordance with aspects of the present disclosure.
[0023] FIG. 2 is a block diagram illustrating an improved system configured to quantify B-lines and merged B-lines in lung ultrasound examinations in accordance with aspects of the present disclosure.
[0024] FIG. 3 is a flowchart illustrating a method for quantifying B-lines and merged B-lines in lung ultrasound examinations in accordance with aspects of the present disclosure.
[0025] FIG. 4 is a block diagram illustrating the components of an electronic device for use in connection with an ultrasound imaging device in accordance with aspects of the present disclosure.
[0026] FIG. 5 is a flowchart illustrating a method of analyzing an ultrasound video loop for B-lines and merged B-lines and visualizing the results in accordance with aspects of the present disclosure.
[0027] FIG. 6 is a flow diagram illustrating the process of analyzing an ultrasound video loop for B-lines and merged B-lines and visualizing the results in accordance with certain aspects of the present disclosure.
[0028] FIG. 7A shows two lung ultrasound image frames before and after intensity normalization is applied in accordance with aspects of the present disclosure.
[0029] FIG. 7B shows another two lung ultrasound image frames before and after intensity normalization is applied in accordance with further aspects of the present disclosure.
[0030] FIG. 8 is a flow diagram illustrating a process for pleural line detection and tracking that may be used to determine B-line analysis regions-of-interest in accordance with aspects of the present disclosure.
[0031] FIG. 9 shows two pre-processed lung ultrasound image frames annotated to illustrate a B-line analysis region-of-interest, a corresponding intensity profile, and a plurality of B-line candidate regions-of-interest in accordance with aspects of the present disclosure.
[0032] FIG. 10 shows a pre-processed lung ultrasound image frame with a B-line analysis region-of-interest and two corresponding intensity profiles that are smoothed using two different smoothing kernels in accordance with aspects of the present disclosure.
[0033] FIG. 11 shows a pre-processed lung ultrasound image frame annotated with two narrow B-line candidate regions-of-interest and two expanded B-line candidate regions-of-interest in accordance with aspects of the present disclosure.
[0034] FIG. 12 is a flow diagram illustrating the process of analyzing a lung ultrasound imaging frame to detect and quantify B-lines and merged B-lines using a single smoothing kernel in accordance with aspects of the present disclosure.
[0035] FIG. 13 is a flow diagram illustrating the process of analyzing a lung ultrasound imaging frame to detect and quantify B-lines and merged B-lines using two different smoothing kernels in accordance with aspects of the present disclosure.
[0036] FIG. 14 is an illustration of a first graphical user interface containing B-line analysis results in accordance with aspects of the present disclosure.
[0037] FIG. 15 is an illustration of a second graphical user interface containing B-line analysis results in accordance with aspects of the present disclosure.
[0038] FIG. 16 is an illustration of a third graphical user interface containing B-line analysis results in accordance with aspects of the present disclosure.
Detailed Description of Embodiments
[0039] According to the present disclosure, systems and methods that improve the quantification of B-lines and merged B-lines in lung ultrasound examinations are provided. As mentioned above, lung ultrasound is an imaging technique that can be used at the point-of-care to assess the lungs in a variety of settings, including emergency medicine and critical care. This technique has been used widely as a portable, non-invasive, and radiation-free modality for evaluation of pulmonary and infectious diseases.
[0040] However, it can be difficult or impossible to quickly and accurately detect certain pathological features seen in lung ultrasounds. For example, it can be difficult to detect B-lines and merged B-lines quickly and accurately. Thus, while automated evaluation of these pathological features can play an important role in screening, diagnosis, and/or management of disease progression, conventional systems and methods are limited in their ability to detect and distinguish between B-lines and merged B-lines, which leads to inconsistent lung ultrasound interpretation among experts and under-utilization of lung ultrasound by more novice users.
[0041] As described herein, B-lines are a pathological feature that can be seen in lung ultrasound images, which are known to correlate with an accumulation of fluid in the interstitial spaces of the lung. B-lines are discrete, vertical, hyperechoic artifacts that appear in lung ultrasound images as long bands originating at the pleural line and extend vertically the length of the image. However, as the level of interstitial fluid builds up and becomes more severe, more B- lines will become present and eventually causes separate B-lines to merge.
[0042] For example, with reference to FIG. 1, four lung ultrasound images are shown along a continuum of increasing lung fluid associated with worsening severity. In the left-most lung ultrasound image (labeled ‘A’), there are no B-lines evident in this portion of the lung. In the second left-most lung ultrasound image (labeled ‘B’), there are two discrete B-lines evident in this portion of the lung, indicating some build-up of interstitial fluid. In the third left-most lung ultrasound image (labeled ‘C’), there are four discrete B-lines evident in this portion of the lung, indicating worsening build-up of interstitial fluid. And in the right-most lung ultrasound image (labeled ‘D’), a merged B-line has formed due to the still worsening build-up of interstitial fluid in this portion of the lung. As can be seen, it is important to not only identify discrete B-lines, but also differentiate between discrete B-lines and merged B-lines in order to properly screen, diagnose, and/or manage disease progression in patients. It is also important that users be able to quickly visualize these pathological features, otherwise lung ultrasound imaging loses effectiveness, for example, in evaluating pulmonary and infectious diseases in emergency medicine and critical care situations.
[0043] Thus, with reference to FIG. 2, an improved system 100 for obtaining, analyzing, and viewing lung ultrasound images is provided in accordance with certain aspects and embodiments of the present disclosure. Preferably, the system 100 is configured to detect and differentiate between B-lines and merged B-lines in real-time. As a result, the system 100 can be used to evaluate pulmonary and infectious diseases in emergency medicine and critical care situations.
[0044] In many embodiments, the system 100 includes an ultrasound imaging device 102 configured to generate lung ultrasound data 104 of a subject (e.g., a patient) 106. The ultrasound imaging device 102 can be a handheld ultrasound device comprising one or more ultrasound imaging transducers (not shown). In embodiments, the lung ultrasound data 104 can be one or more lung ultrasound video loops of a particular region of a lung of the subject 106, and each video loop can include a plurality of lung ultrasound imaging frames.
[0045] The system 100 further includes an electronic device 108 in communication with the ultrasound imaging device 102. In some embodiments, the electronic device 108 may include, for example and without limitation, a display device 110, a computer-readable storage medium (e.g., memory 404 shown in FIG. 4) having stored thereon computer-readable instructions to be executed by one or more processors, and one or more processors (e.g., processors 402 shown in FIG. 4)
configured by the computer-readable instructions to perform one or more steps of the methods described herein.
[0046] In particular embodiments, the one or more processors may be configured by the computer-readable instructions stored on the computer-readable storage medium to perform the following operations: (i) obtain a lung ultrasound video loop 104 of a subject 106, the lung ultrasound video loop 104 comprising a plurality of lung ultrasound imaging frames; (ii) analyze the lung ultrasound video loop 104 using a B-line classifier and a merged B-line classifier to generate B-line data for the lung ultrasound video loop 104; and (iii) output, via the display device 110, a graphical user interface comprising the B-line data generated for the lung ultrasound video loop 104. In embodiments, the B-line data generated for the ultrasound video loop 104 can include, for example, a discrete B-line count for each imaging frame, a maximum B-line count for all imaging frames of the video loop 104, and/or a merged B-line indicator that indicates the presence of merged B-lines in the video loop 104.
[0047] More specifically, with reference to FIG. 3, the one or more processors (e.g., processors 402 shown in FIG. 4) may be configured by the computer-readable instructions stored on the computer-readable storage medium (e.g., memory 404 shown in FIG. 4) to perform the following operations: in a step 310, obtain a lung ultrasound video loop 104; in a step 320, analyze the lung ultrasound video loop 104 to generate B-line data for the video loop 104; in a step 330, determine whether the video loop 104 contains any B-lines; optionally, in a step 340, output a B-line count of zero (e.g., via the user interface) if the video loop 104 does not contain any B-lines; in a step 350, determine whether at least N frames of the video loop 104 contain merged B-lines; optionally, in a step 360, output a maximum B-line count for the video loop 104 (e.g., via the user interface) if fewer than N frames of the video loop 104 contain merged B-lines; and in a step 370, output a merged B-line indicator (e.g., via the user interface) if at least N frames of the video loop 104 contain merged B-lines.
[0048] As described herein, N can be considered a frame threshold for the video loop 104. In embodiments, N can be an integer greater than zero. In some embodiments, N may be a predetermined number that is independent of the size / number of frames in a video loop 104. In other embodiments, N may be a function of the size / number of frames in a video loop 104. For example, in some embodiments, a merged B-line indicator may be output (i.e., in the step 370) if more than 25 frames in a 250-frame lung ultrasound video loop 104 contain a merged B-line. In
other embodiments, for example, a merged B-line indicator may be output (i.e., in the step 370) if more than 50% of the frames in a lung ultrasound video loop 104 contain a merged B-line.
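Both parameterizations of the frame threshold N can be captured in a few lines of code; the specific numbers used in the comments are only the examples given in the preceding paragraph.

```python
def frame_threshold(n_frames_in_loop, fixed_n=None, fraction=None):
    """Return N either as a predetermined count or as a fraction of the loop length."""
    if fixed_n is not None:
        return fixed_n                          # e.g., 25 frames regardless of loop length
    return int(fraction * n_frames_in_loop)     # e.g., 0.5 -> half of the frames

def loop_positive_for_merged(frames_with_merged, n_frames_in_loop, **threshold_kwargs):
    """Output a merged B-line indicator when more than N frames contain a merged B-line."""
    return frames_with_merged > frame_threshold(n_frames_in_loop, **threshold_kwargs)

# loop_positive_for_merged(30, 250, fixed_n=25)     -> True  (more than 25 of 250 frames)
# loop_positive_for_merged(120, 240, fraction=0.5)  -> False (not more than 50% of frames)
```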
[0049] As shown in the example of FIG. 4, the electronic device 108 can include one or more processors 402 and a computer-readable memory 404 interconnected and/or in communication via a system bus 406 containing conductive circuit pathways through which instructions (e.g., machine-readable signals) may travel to effectuate communication, tasks, storage, and the like. The electronic device 108 can be connected to a power source (not shown), which can include an internal power supply and/or an external power supply. In embodiments, the electronic device 108 can also include one or more additional components, such as a display 110, an input device 112, an input/output (I/O) interface 412, a networking unit 414, and the like, including combinations thereof. As shown, each of these components may be interconnected and/or in communication via the system bus 406, for example.
[0050] In embodiments, the one or more processors 402 can include one or more high-speed data processors adequate to execute the program components described herein and/or perform one or more operations of the methods described herein. The one or more processors 402 may include a microprocessor, a multi-core processor, a multithreaded processor, an ultra-low voltage processor, an embedded processor, and/or the like, including combinations thereof. The one or more processors 402 can include multiple processor cores on a single die and/or may be a part of a system on a chip (SoC) in which the processor 402 and other components are formed into a single integrated circuit, or a single package. That is, the one or more processors 402 may be a single processor, multiple independent processors, or multiple processor cores on a single die.
[0051] In embodiments, the display device 110 may be configured to display information, including text, graphs, and/or the like. In particular embodiments, the display device 110 may be configured to display a graphical user interface comprising the B-line data generated for one or more lung ultrasound video loops 104. The display device 110 can include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, a touch screen or other touch-enabled display, a foldable display, a projection display, and so on, or combinations thereof.
[0052] In embodiments, the input device 112 may be configured to receive various forms of input from a user associated with the electronic device 108. The input device 112 can include, but is not limited to, one or more of a keyboard, keypad, trackpad, trackball(s), capacitive keyboard,
controller (e.g., a gaming controller), computer mouse, computer stylus / pen, a voice input device, and/or the like, including combinations thereof.
[0053] In embodiments, the input/output (I/O) interface 412 may be configured to connect and/or enable communication with one or more peripheral devices (not shown), including but not limited to additional machine-readable memory devices, diagnostic equipment, and other attachable devices. The I/O interface 412 may include one or more I/O ports that provide a physical connection to the one or more peripheral devices. In some embodiments, the I/O interface 412 may include one or more serial ports.
[0054] In embodiments, the networking unit 414 may include one or more types of networking interfaces that facilitate wired and/or wireless communication between the electronic device 108 and one or more external devices. That is, the networking unit 414 may operatively connect the electronic device 108 to one or more types of communications networks 416, which can include a direct interconnection, the Internet, a local area network (“LAN”), a metropolitan area network (“MAN”), a wide area network (“WAN”), a wired or Ethernet connection, a wireless connection, a cellular network, and similar types of communications networks, including combinations thereof. In some embodiments, the electronic device 108 may communicate with one or more remote / cloud-based servers and/or cloud-based services, such as remote server 418, via the communications network 416.
[0055] In embodiments, the memory 404 can be variously embodied in one or more forms of machine accessible and machine-readable memory. In some embodiments, the memory 404 includes a storage device (not shown), which can include, but is not limited to, a non-transitory storage medium, a magnetic disk storage, an optical disk storage, an array of storage devices, a solid-state memory device, and/or the like, as well as combinations thereof. The memory 404 may also include one or more other types of memory, such as dynamic random-access memory (DRAM), static random-access memory (SRAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), Flash memory, and/or the like, as well as combinations thereof. In embodiments, the memory 404 may include one or more types of transitory and/or non-transitory memory.
[0056] The electronic device 108 can be configured by software components stored in the memory 404 to perform one or more processes of the methods described herein. More specifically, the memory 404 can be configured to store data / information 420 and computer-readable
instructions 422 that, when executed by the one or more processors 402, causes the electronic device 108 to (i) obtain a lung ultrasound video loop 104 of a subject 106, the lung ultrasound video loop 104 comprising a plurality of lung ultrasound imaging frames; (ii) analyze the lung ultrasound video loop 104 using a B-line classifier and a merged B-line classifier to generate B- line data for the lung ultrasound video loop 104; and (iii) output, via the display device 110, a graphical user interface comprising the B-line data generated for the lung ultrasound video loop 104. Such data 420 and the computer-readable instructions 422 stored in the memory 404 may form a B-line analysis package 424 that may be incorporated into, loaded from, loaded onto, or otherwise operatively available to and from the electronic device 108. Thus, in some embodiments, the B-line analysis package 424 and/or one or more individual software packages may be stored in a local storage device of the memory 404. However, in other embodiments, the B-line analysis package 424 and/or one or more individual software packages may be loaded onto and/or updated from a remote server or service, such as server 418, via the communications network 416.
[0057] In particular embodiments, the B-line analysis package 424 includes at least one B-line classifier (e.g., B-line classifier 610 shown in FIGS. 6, 12, and 13) and at least one merged B-line classifier (e.g., merged B-line classifier 612 shown in FIGS. 6, 12, and 13). Each of the classifiers 610, 612 may be one or more trained models, such as one or more trained machine learning models. As described in more detail below, the B-line classifier 610 and the merged B-line classifier 612 can be trained machine learning models that are configured to receive a plurality of B-line features (i.e., feature scores for a plurality of image features) associated with a single B-line candidate and generate a likelihood that the single B-line candidate is a probable B-line or probable merged B- line. In specific embodiments, each classifier 610, 612 is a separately trained logistic regression model. That is, the B-line classifier 610 can be a first trained model and the merged B-line classifier 612 can be a second trained model that is different from the first trained model.
[0058] The electronic device 108 may also include an operating system component 426, which may be stored in the memory 404. The operating system component 426 may be an executable program facilitating the operation of the electronic device 108 and/or the ultrasound device 102. Typically, the operating system component 426 can facilitate access of the I/O interface 412, network interface 414, the input device 112, and the display 110, and can communicate or control other components of the electronic device 108.
[0059] Accordingly, provided herein is a computer program product 424 comprising a non- transitory computer-readable storage medium 404 having stored thereon computer-readable instructions 422 that, when executed by one or more processors (such as processors 402), cause the one or more processors to perform one or more operations of the methods described below. For example, in specific embodiments, the computer-readable storage medium 404 may include computer-readable instructions 422 that, when executed by one or more processors (such as processors 402), cause the one or more processors to perform the following operations: (i) obtain a lung ultrasound video loop 104 of a subject 106, the lung ultrasound video loop 104 comprising a plurality of lung ultrasound imaging frames; (ii) analyze the lung ultrasound video loop 104 using a B-line classifier and a merged B-line classifier to generate B-line data for the lung ultrasound video loop 104; and (iii) output, via the display device 110, a graphical user interface comprising the B-line data generated for the lung ultrasound video loop 104.
[0060] In further embodiments, the computer-readable storage medium 404 may include computer-readable instructions 422 that, when executed by one or more processors (such as processors 402), cause the one or more processors to perform the following operations: (i) obtain a lung ultrasound video loop of a subject, the lung ultrasound video loop comprising a plurality of lung ultrasound imaging frames; (ii) pre-process each imaging frame of the lung ultrasound video loop to obtain a pre-processed lung ultrasound video loop; (iii) determine a B-line analysis region- of-interest for each imaging frame of the pre-processed lung ultrasound video loop; (iv) analyze the B-line analysis region-of-interest of each imaging frame to identify one or more B-line candidates; (v) for each B-line candidate, extract a set of B-line features from the imaging frames of the pre-processed lung ultrasound video loop; (vi) classify each of the B-line candidates, based on the corresponding set of B-line features, using a B-line classifier and a merged B-line classifier to predict a likelihood that the B-line candidate is a probable B-line and/or a probable merged B- line; (vii) identify one or more probable B-lines and/or probable merged B-lines in each imaging frame of the pre-processed ultrasound video loop based on the classification of each of the B-line candidates; (viii) determine a maximum B-line count for the lung ultrasound video loop, wherein the maximum B-line count is the maximum number of probable B-lines appearing in any single imaging frame of the pre-processed lung ultrasound video loop; (ix) determine whether the lung ultrasound video loop is positive for merged B-lines based on the classification of each of the B- line candidates; and (x) output, via a display device, a graphical user interface comprising the
maximum B-line count and/or the determination of whether the lung ultrasound video loop is positive for merged B-lines.
[0061] That is, the computer-readable storage medium 404 can include computer-readable instructions 422 that, when executed by one or more processors (such as processors 402), cause the one or more processors to perform an improved method for detecting and distinguishing B- lines and merged B-lines in lung ultrasound images in accordance with the various aspects described herein.
[0062] For example, with reference to FIG. 5, a method 500 for detecting and distinguishing B-lines and merged B-lines in lung ultrasound images is illustrated in accordance with certain aspects of the present disclosure. As shown, the method 500 can include: in a step 510, preprocessing each imaging frame of the lung ultrasound video loop to obtain a pre-processed lung ultrasound video loop; in a step 520, determining a B-line analysis region-of-interest (ROI) for each imaging frame of the pre-processed lung ultrasound video loop; in a step 530, analyzing the B-line analysis ROI of each imaging frame to identify one or more B-line candidates; in a step 540, extracting a set of B-line features for each B-line candidate from / based on the imaging frames of the pre-processed lung ultrasound video loop; in a step 550, classifying each of the B-line candidates, based on the corresponding set of B-line features, using a B-line classifier and a merged B-line classifier to predict a likelihood that the B-line candidate is a probable B-line and/or a probable merged B-line; in a step 560, determining whether each B-line candidate qualifies as a probable B-line and/or a probable merged B-line based on the classification of the B-line candidate; in a step 570, assessing the B-lines detected on a video loop level; in a step 580, assessing the merged B-lines detected on a video loop level; and in a step 590, outputting the frame-level and video-level results.
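A high-level orchestration of steps 510-590 might look like the sketch below; every stage is injected as a callable so the sketch does not presume any particular implementation of the pre-processing, ROI detection, candidate detection, or feature extraction steps, and the 0.5 decision threshold is an assumption.

```python
def analyze_video_loop(frames, preprocess, find_roi, detect_candidates,
                       extract_features, bline_clf, merged_clf):
    """Sketch of method 500: per-frame detection/classification, then video-level rollups."""
    frame_results = []
    for frame in frames:
        pre = preprocess(frame)                            # step 510
        roi = find_roi(pre)                                # step 520
        b_lines, merged = [], []
        for cand in detect_candidates(pre, roi):           # step 530
            feats = extract_features(pre, roi, cand)       # step 540
            if merged_clf.predict_proba([feats])[0, 1] >= 0.5:    # steps 550-560
                merged.append(cand)
            elif bline_clf.predict_proba([feats])[0, 1] >= 0.5:
                b_lines.append(cand)
        frame_results.append((b_lines, merged))
    max_count = max((len(b) for b, _ in frame_results), default=0)   # step 570
    merged_frames = sum(1 for _, m in frame_results if m)            # step 580
    return frame_results, max_count, merged_frames                   # inputs to step 590
```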
[0063] According to aspects of present disclosure, each of the steps of the methods 300, 500 described herein may be implemented in several ways, including in one or more sub-steps. For example, as shown in FIG. 6, one implementation of the method 500 is illustrated in accordance with certain aspects of the present disclosure. As shown, an ultrasound video loop 602 comprising a plurality of lung ultrasound imaging frames is presented / obtained. The ultrasound video loop 602 may be obtained, for example, from an ultrasound imaging device 102 comprising one or more ultrasound imaging transducers. In embodiments, the ultrasound video loop 602 can include native (i.e., pre-scan converted) ultrasound line scan data so as to maintain greater visual consistency
across different transducer types (e.g., sector, linear, and curvilinear) and to allow for a consistent set of image feature extraction steps to be used across all transducers.
[0064] In embodiments, each frame of the ultrasound video loop 602 may be pre-processed to improve the robustness of subsequent B-line detection steps, including across different transducer types. The pre-processing steps can include, but are not limited to, image normalization, noise normalization, TGC normalization, frame blending, and/or the like, including combinations thereof.
[0065] More specifically, for example, input frames of the ultrasound video loop 602 may be normalized (i.e., rescaled) to a fixed intensity distribution prior to the B-line analysis. In embodiments, this may be performed on an image frame by first computing the mean and variance of the image frame after excluding outlier pixels (i.e., pixels with intensity values at the low and high extremes) from the distribution estimate. The image frame can then be standardized to zero mean and unit variance. In some embodiments, pixels outside of a certain number (±n) of standard deviations are truncated, and the image is then rescaled to a 0-255 intensity scale. As a result, the influence of low and high transducer gain is greatly reduced, and images acquired with different transducer types can be made to have similar intensity distributions. As shown in FIGS. 7A and 7B, for example, images acquired with low and high transducer gains are shown before and after image normalization.
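A minimal NumPy sketch of this normalization is shown below; the percentile bounds used to exclude outliers and the clipping range of three standard deviations are assumptions, since the disclosure does not fix specific values.

```python
import numpy as np

def normalize_intensity(frame, low_pct=1.0, high_pct=99.0, n_std=3.0):
    """Robust standardization followed by clipping and rescaling to 0-255."""
    img = frame.astype(np.float64)
    lo, hi = np.percentile(img, [low_pct, high_pct])
    core = img[(img >= lo) & (img <= hi)]            # exclude extreme pixels from the estimate
    mu, sigma = core.mean(), core.std() + 1e-8
    z = np.clip((img - mu) / sigma, -n_std, n_std)   # zero mean, unit variance, truncated
    return ((z + n_std) / (2.0 * n_std) * 255.0).astype(np.uint8)
```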
[0066] In further embodiments, the input frames of the ultrasound video loop 602 may also be pre-processed for noise normalization because different transducer types will generate images with different noise and speckle characteristics, which can affect B-line detection and analysis. To normalize noise levels across the transducer types, the noise distribution may be computed from images acquired with one transducer type and used to de-noise images collected with another transducer such that a similar noise distribution is achieved. For example, de-noising could be done by Gaussian smoothing, median filtering, and/or the like.
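As a non-limiting sketch, such noise normalization might be approximated with standard filters. Whether Gaussian smoothing or median filtering is used, and the filter parameters shown below, are assumptions made for illustration; in practice they would be derived from the measured noise distribution of a reference transducer type.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def denoise_to_reference(frame, sigma=1.0, use_median=False, size=3):
    """De-noise a frame so that its noise level approximates that of a
    reference transducer type. The sigma/size values are placeholders."""
    if use_median:
        return median_filter(frame, size=size)
    return gaussian_filter(frame.astype(np.float64), sigma=sigma)
```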
[0067] In still further embodiments, because the time-gain compensation (TGC) profiles of the input frames of the ultrasound video loop 602 may be very different across different transducer types and can therefore impact B-line detection and analysis, the TGC profiles may also be normalized. The TGC profiles may be computed from images acquired from one transducer type and used to adjust the TGC on images from a different transducer type.
[0068] In yet further embodiments, frame blending may be applied as another pre-processing step to reduce frame-to-frame “jitter” and improve the consistency of the results of the B-line analysis. For example, frame blending may be achieved by: (1) averaging n consecutive frames by computing mean pixel intensities over n consecutive frames, and then (2) projecting maximum intensity across n consecutive frames by computing maximum intensities over n consecutive frames. In embodiments, the value n may be set to three frames, which would capture the current frame plus the two previous frames. However, other values are contemplated.
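One possible reading of the two-step blending described above is sketched below. Whether the maximum projection is taken over the raw frames or over the running averages (as assumed here), as well as the helper name, are illustrative choices rather than requirements of the disclosure.

```python
import numpy as np

def blend_frames(frames, n=3):
    """Blend each frame with its (n - 1) predecessors.

    Step (1) computes the mean pixel intensities over the n most recent
    frames; step (2) projects the maximum of those running averages over
    the same window. n = 3 follows the example in the text (current frame
    plus the two previous frames).
    """
    frames = np.asarray(frames, dtype=np.float64)  # shape (num_frames, H, W)
    averaged = np.stack([frames[max(0, i - n + 1):i + 1].mean(axis=0)
                         for i in range(len(frames))])
    blended = np.stack([averaged[max(0, i - n + 1):i + 1].max(axis=0)
                        for i in range(len(averaged))])
    return blended
```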
[0069] After obtaining and optionally pre-processing the ultrasound video loop 602 as described herein, one or more B-line analysis regions-of-interest 604 are determined for the ultrasound video loop 602. That is, a B-line analysis region-of-interest (ROI) 604 is determined for each imaging frame of the (pre-processed) ultrasound video loop 602. In particular embodiments, the B-line analysis ROI 604 may be determined based on the pleural line of the subject 106 seen in the imaging frames of the ultrasound video loop 602.
[0070] As used herein, the term “pleural line” refers to the interface between the soft tissues (fluid-rich) of the chest wall and the lung tissue (gas-rich). In a lung ultrasound, the pleural line presents as a hyperechoic line and represents the junction of the visceral and the parietal pleura. Because B-lines appear in lung ultrasound images to originate from the pleural line, the B-line analysis ROI 604 may be sectioned off after detecting and tracking the pleural line through the frames of the ultrasound video loop 602.
[0071] With reference to FIG. 8, one approach for detecting and tracking the pleural line of a subject 106 using the imaging frames of the ultrasound video loop 602 is illustrated in accordance with certain aspects of the present disclosure. As shown, this process begins, in a step 810, with identifying the presence and location of the pleural line from the first few frames (e.g., n frames) of the ultrasound video loop 602. In embodiments, the pleural line detection is based on analyzing at least two features, including but not limited to: (1) the intensity profile along the (vertical) depth direction within a region of the image frame likely containing the pleural line, and (2) the motion of the lung above and below the likely pleural line candidate within this region of the frame. In particular, the region of the image frame most likely to contain the pleural line is typically the upper half of the image, approximately 1 to 5 cm in depth.
[0072] If the pleural line is successfully detected, the process then includes, in a step 820, tracking the position of the pleural line throughout the remaining frames of the ultrasound video
loop 602. In embodiments, the pleural line tracking algorithm may be similar in design to the pleural line detection algorithm, but operates on fewer frames, uses a smaller search range (e.g., region 815B instead of region 815A), and applies a reduced image sampling density to speed up the processing time required for each frame.
[0073] Then, in a step 830, a B-line analysis ROI 604 can be defined and updated in each frame of the ultrasound video loop 602 based on the location of the tracked pleural line. In specific embodiments, the B-line analysis ROI 604 may be set to originate a predefined distance beneath the detected pleural line and extend toward the bottom of the image frame to a fixed image depth. For example, in some embodiments, the predefined starting position of the B-line analysis ROI 604 may be 0.25 cm beneath the detected pleural line. In further embodiments, the fixed image depth may be transducer-dependent.
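A minimal sketch of this ROI placement is given below, assuming the pleural line depth is already known in centimeters. The function name, the pixel-spacing argument, and the default maximum depth are assumptions made for the example; as noted above, the fixed image depth may in practice be transducer-dependent.

```python
def define_bline_analysis_roi(pleural_depth_cm, image_depth_cm,
                              pixels_per_cm, offset_cm=0.25,
                              fixed_depth_cm=12.0):
    """Return (top_row, bottom_row) of the B-line analysis ROI in pixels.

    The ROI starts a predefined distance beneath the tracked pleural line
    (0.25 cm in the example above) and extends down to a fixed,
    transducer-dependent image depth.
    """
    top_cm = pleural_depth_cm + offset_cm
    bottom_cm = min(fixed_depth_cm, image_depth_cm)
    top_row = int(round(top_cm * pixels_per_cm))
    bottom_row = int(round(bottom_cm * pixels_per_cm))
    return top_row, bottom_row
```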
[0074] However, it should be appreciated that other methods of defining the B-line analysis ROI 604 for each image frame of the ultrasound video loop 602 may be implemented. For example, as shown in FIG. 8, if no pleural line is detected, then the B-line analysis ROI 604 may be set to a default ROI. In some embodiments, the default ROI may be based on the specifications of the ultrasound imaging probe 102 used to obtain the ultrasound video loop 602.
[0075] Once the B-line analysis ROIs 604 have been determined for each frame of the ultrasound video loop 602, the B-line and merged B-line analysis may then proceed. For example, as shown in FIG. 6, the B-line analysis ROI of each image frame of the ultrasound video loop 602 may then be analyzed to identify one or more B-line candidates 606. In particular embodiments, B-line candidates 606 can be discrete B-line candidates and/or merged B-line candidates.
[0076] In embodiments, the one or more B-line candidates 606 for each frame of the ultrasound video loop 602 may be identified based on a local “peak” in the intensity profile computed inside of the B-line analysis ROI 604 defined for the corresponding frame. For example, with reference to FIG. 9, an annotated example of an image frame 900A of an ultrasound video loop 602 is illustrated according to certain aspects of this operation. As shown in the example of FIG. 9, the image frame 900A is annotated to show the pleural line and the B-line analysis ROI 604 defined for the image frame 900A. Below the image frame 900A, a representative intensity profile 910 for the ROI 604 is illustrated, which can be computed as the median of each column of the ROI 604. Accordingly, an intensity profile 910 for the B-line analysis ROI 604 for each image frame (e.g.,
frame 900A) of the ultrasound video loop 602 can be generated and one or more B-line candidates 606 may be detected by identifying one or more local peaks in the intensity profile 910.
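For illustration, computing the column-wise median profile and locating its local peaks might look like the following sketch (Python with SciPy). The prominence threshold is an assumed value introduced for the example, not a parameter specified by the present disclosure.

```python
import numpy as np
from scipy.signal import find_peaks

def detect_bline_candidates(roi, min_prominence=5.0):
    """Detect B-line candidate columns within a B-line analysis ROI.

    The intensity profile is the median of each image column inside the
    ROI; candidate positions are the local peaks of that profile.
    `roi` is a 2-D array (rows = depth samples, columns = lateral position).
    """
    profile = np.median(roi, axis=0)                     # one value per column
    peaks, _ = find_peaks(profile, prominence=min_prominence)
    return profile, peaks                                # peak column indices
```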
[0077] In embodiments, a set of B-line features may be extracted from the imaging frames of the ultrasound video loop 602 for each of the B-line candidates 606 identified, as discussed in more detail below. However, in particular embodiments, each B-line candidate 606 may be associated with a B-line candidate region-of-interest (i.e., a B-line candidate ROI). As shown in FIG. 9, for example, an annotated image frame 900B is illustrated showing seven B-line candidate ROIs 912 associated with the seven B-line candidates 606 seen in the image frame 900A. As such, in embodiments, a B-line candidate ROI that is a smaller, local region-of-interest and is centered around each B-line candidate 606 may be computed prior to extracting the sets of B-line features. Put another way, a B-line candidate ROI 912 may be defined for each of the one or more B-line candidates 606 (i.e., each of the one or more local peaks identified), wherein each B-line candidate ROI 912 corresponds to a B-line candidate 606. The B-line features may then be extracted from within each B-line candidate ROI 912 for subsequent B-line and merged B-line classification.
[0078] In particular embodiments, the height of each B-line candidate ROI 912 may be the length of the B-line analysis ROI 604, while the width of each B-line candidate ROI 912 may be a predefined number of columns (i.e., pixels) to the left and right of the identified local peak (e.g., three pixels to the left and three pixels to the right of the peak’s local maximum, etc.). However, it should be appreciated that the size of the B-line candidate ROI 912 may be varied, as described in more detail below.
[0079] According to further aspects of the present disclosure, the detection of B-line candidates 606 (including B-line candidates and merged B-line candidates) may be computed at different spatial scales, including but not limited to, two or more different spatial scales. For example, in certain embodiments, the step of detecting one or more B-line candidates 606 within the B-line analysis ROI 604 of an image frame of the ultrasound video loop 602 may include: (i) smoothing the intensity profile of the B-line analysis ROI 604 of the image frame at two or more spatial scales; (ii) identifying one or more local peaks along the smoothed intensity profile(s); and (iii) defining a B-line candidate ROI 912 for each local peak identified, wherein each B-line candidate ROI 912 corresponds to a B-line candidate 606. In embodiments, the smoothing may be applied at two or more spatial scales using two or more smoothing kernels of different sizes, such
that a smaller smoothing kernel results in less smoothing while a larger smoothing kernel results in greater smoothing.
[0080] For example, with reference to FIG. 10, an example of an image frame 1000A with intensity profiles 1010A, 1010B that are processed at two different spatial scales is illustrated according to aspects of the present disclosure. As shown, by applying a first smoothing kernel (i.e., a small smoothing kernel) to the intensity profile 1010A, four discrete B-line candidates 606 can be identified from the local peaks of the median intensity profile 1010A. However, by applying a second smoothing kernel (i.e., a larger smoothing kernel), a single / wide merged B-line candidate is identified from the local peak of the median intensity profile 1010B.
[0081] In embodiments, the second smoothing kernel may be larger than the first smoothing kernel by at least about 50%, including at least about 75%, at least about 100%, at least about 150%, at least about 200%, and/or at least about 300%. Put another way, the size ratio of the first smoothing kernel to the second smoothing kernel may be about 1:1.5, about 1:1.75, about 1:2, about 1:2.5, about 1:3, and/or about 1:4. As a result, in particular embodiments, the size of the second smoothing kernel can be at least two to three times larger than the size of the first smoothing kernel.
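A sketch of detection at two spatial scales is given below, assuming a simple moving-average smoother and an approximately 1:3 kernel-size ratio consistent with the ranges above. The specific kernel sizes are illustrative assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def smooth_profile(profile, kernel_size):
    """Moving-average smoothing of a 1-D intensity profile."""
    kernel = np.ones(kernel_size) / kernel_size
    return np.convolve(profile, kernel, mode="same")

def candidates_at_two_scales(profile, small_kernel=5, large_kernel=15):
    """Detect candidate peaks at two spatial scales.

    The smaller kernel preserves narrow peaks (discrete B-line candidates);
    the larger kernel merges adjacent peaks so that a wide merged B-line
    appears as a single peak.
    """
    fine_peaks, _ = find_peaks(smooth_profile(profile, small_kernel))
    coarse_peaks, _ = find_peaks(smooth_profile(profile, large_kernel))
    return fine_peaks, coarse_peaks
```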
[0082] As described herein, smoothing the intensity profile of a B-line analysis ROI 604 can involve an averaging-type filter (e.g., a moving average filter). However, it should be appreciated that other types of smoothing filters with different smoothing kernels may be used, including but not limited to a Gaussian smoothing filter.
[0083] In particular embodiments, the B-line candidate ROIs 912 may also be defined at two or more different spatial scales. That is, in embodiments, one or more B-line candidate ROIs 912 may be defined at a first scale and at a second scale that is different from the first scale. For example, with reference to FIG. 11, an image frame 1100 presenting with two B-line candidates 606 is shown and annotated with B-line candidate ROIs 1110, 1120. As shown, each B-line candidate 606 has a narrower B-line candidate ROI 1110 (indicated using a solid line) as well as an expanded / wider B-line candidate ROI 1120 (indicated using a dotted line).
[0084] In particular embodiments, the width of the expanded B-line candidate ROIs 1120 may be at least about 50% larger than the width of the narrower B-line candidate ROIs 1110, including but not limited to, at least about 100% larger, at least about 150% larger, and/or at least about 200% larger than the width of the narrower B-line candidate ROIs 1110. Put another way, in specific
embodiments, the width of the expanded B-line candidate ROIs 1120 may be two or three times larger than the width of the narrower B-line candidate ROIs 1110.
[0085] Returning to FIG. 6, after the B-line candidates 606 and B-line candidate ROIs (e.g., ROIs 912, 1110, 1120) are detected and computed, a set of B-line features 608 may be extracted from the imaging frames of the ultrasound video loop 602 for each of the B-line candidates 606 identified. That is, for each B-line candidate identified in an ultrasound video loop 602, a set of B-line features 608 is extracted from the corresponding B-line candidate ROI. Thus, in embodiments, a plurality of sets of B-line features 608 may be generated that correspond to a plurality of B-line candidates 606 identified in an ultrasound video loop 602.
[0086] In embodiments, each set of B-line features can include one or more image features extracted from the local image region around the B-line candidate 606 (i.e., the B-line candidate ROIs). The image features can be descriptive of the visual appearance of the B-line candidates. In certain embodiments, each image feature may be represented as a continuous floating point value(s) or feature score(s). One or more of the image features may be applied at multiple spatial resolution scales (e.g., a smaller spatial scale and a larger spatial scale). Thus, where an image feature is computed at two or more spatial scales, two or more feature scores are determined instead of one.
[0087] In specific embodiments, the set of B-line features can include feature scores for one or a combination of statistical image features of the ultrasound image. For example, in some embodiments, feature scores can be determined for a number of B-line features, which can include, but is not limited to, a B-line extent, a B-line intensity variance, a temporal profile, a peak local prominence, a peak amplitude, a B-line min/max intensity, a B-line homogeneity, and/or a B-line width. In particular embodiments, the set of B-line features includes feature scores for at least the B-line extent for each of the B-line candidates. As described herein, a “B-line extent” refers to the extent to which a B-line candidate extends vertically to the bottom of the ultrasound image. In specific embodiments, a “B-line extent” may be calculated as a percentage of pixel values along the center of the B-line candidate having an intensity that exceeds the background intensity around the candidate. However, it should be appreciated that other ways of determining the B-line extent may be implemented.
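As a non-limiting sketch, the B-line extent feature described above could be approximated as follows. How the local background intensity is estimated (here, the mean of the outermost columns of the candidate ROI) is an assumption made only for this example.

```python
import numpy as np

def bline_extent(candidate_roi, background_cols=2):
    """Approximate the 'B-line extent' feature for one candidate ROI.

    Returns the fraction of pixels along the candidate's center column
    whose intensity exceeds the local background intensity estimated from
    the outermost columns of the ROI at the same depth.
    """
    roi = candidate_roi.astype(np.float64)
    center = roi[:, roi.shape[1] // 2]
    background = np.concatenate(
        [roi[:, :background_cols], roi[:, -background_cols:]],
        axis=1).mean(axis=1)
    return float(np.mean(center > background))
```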
[0088] In embodiments, one or more of these features may be calculated at different spatial resolution scales and therefore result in two or more feature scores for the same image feature that can be passed to the B-line and merged B-line classifiers 610, 612. For example, in particular
embodiments, at least the peak local prominence image feature and/or the B-line width image feature may be computed at two or more different spatial resolution scales. In further embodiments, one or more image features may result in two or more feature scores at the same spatial resolution scale. For example, at least the B-line min/max intensity image feature and the B-line homogeneity image feature can result in two feature scores that are included in the set of B-line features 608 and passed to the B-line and/or merged B-line classifiers 610, 612.
[0089] Although specific image features are described herein, it should be appreciated that fewer image features may be included in the sets of B-line features 608 in some embodiments, while additional image features may be included in the sets of B-line features 608 in other embodiments. Further, it should be appreciated that the B-line features 608 extracted for some B-line candidates may be different from the B-line features 608 extracted for other B-line candidates. For example, one or more B-line features 608 may be specific to B-line candidates that are possibly merged B-lines and/or vice versa.
[0090] After a set of B-line features 608 is extracted from the ultrasound video loop 602 for each of the B-line candidates 606, these sets of B-line features 608, including one or more feature scores corresponding to one or more image features, are provided to the classifiers 610, 612 for B-line and merged B-line classification. In embodiments, each of the classifiers 610, 612 may be one or more trained models, such as one or more trained machine learning models. For example, in some embodiments, the B-line classifier 610 may be a trained machine learning model that is configured to receive a plurality of B-line features (i.e., feature scores for a plurality of image features) associated with a single B-line candidate and generate a likelihood that the single B-line candidate is a probable B-line. In further embodiments, the merged B-line classifier 612 may be a different trained machine learning model that is configured to receive a plurality of B-line features (i.e., feature scores for a plurality of image features) associated with a single B-line candidate and generate a likelihood that the single B-line candidate is a probable merged B-line.
[0091] Thus, for example, for each B-line candidate 606 identified, a set of B-line features 608 associated with that candidate can be provided as a floating-point vector (X_Features) to a first machine learning classifier 610 (i.e., the B-line classifier). The B-line classifier 610 can be configured to output a single floating-point prediction score between 0 and 1 (y_Bline), where a higher score indicates a higher predicted likelihood of the candidate being a B-line.
[0092] In specific embodiments, the B-line classifier 610 is a logistic regression model with a plurality of optimizable parameters that are tuned during training of the model on training datasets. The optimizable parameters can include, for example and without limitation, feature weights (W_Bline) that determine the relative contribution of each feature to the prediction and a final classification threshold (T_Bline). For example, in certain embodiments, the input vector (X_Features) can have 14 parameters, and the logistic regression model can have 15 optimizable parameters, including 14 feature weights (W_Bline) that determine the relative contribution of each feature to the prediction and a final classification threshold (T_Bline). B-line candidates 606 with a prediction score meeting or exceeding the classification threshold (y_Bline ≥ T_Bline) may be classified as probable B-lines 614, whereas B-line candidates 606 with scores that do not meet the threshold may be classified as false candidates and ignored.
[0093] Optionally, more than one classification threshold value may be used by the B-line classifier 610. That is, in some embodiments, two classification threshold values (T_Bline and T_BlineInitial) may be used, where T_BlineInitial > T_Bline. The higher threshold (T_BlineInitial) may be applied to the B-line candidate having the highest prediction score among all candidates in a given image frame, that is, max(y_Bline) > T_BlineInitial. According to these embodiments, all remaining B-line candidates within the given image frame are compared against the default threshold T_Bline only if this initial condition (max(y_Bline) > T_BlineInitial) is true. As a result, better (i.e., more specific) filtering of negative videos (i.e., maximum B-line count of zero) from positive videos (i.e., maximum count of at least 1) may be obtained.
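The scoring and thresholding described in the two preceding paragraphs can be sketched as follows. The logistic-regression form follows the text, while the bias term, the function names, and the example threshold values are assumptions introduced for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def score_candidates(x_features, w_bline, bias=0.0):
    """Return y_Bline scores in [0, 1] for each candidate.

    x_features: (num_candidates, num_features) array, e.g. 14 features per
    candidate as in the example above; w_bline: (num_features,) weights.
    """
    return sigmoid(np.asarray(x_features) @ np.asarray(w_bline) + bias)

def classify_frame(y_bline, t_bline=0.5, t_bline_initial=0.7):
    """Apply the optional two-threshold rule.

    Candidates in a frame are compared against the default threshold
    t_bline only if the best-scoring candidate exceeds the stricter
    threshold t_bline_initial (t_bline_initial > t_bline); otherwise the
    frame is reported with no probable B-lines.
    """
    scores = np.asarray(y_bline, dtype=float)
    if scores.size == 0 or not (scores.max() > t_bline_initial):
        return np.zeros_like(scores, dtype=bool)
    return scores >= t_bline
```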
[0094] Further, for each B-line candidate 606 identified, the set of B-line features 608 associated with that candidate can also be provided as a floating-point vector (X_Features) to a second machine learning classifier 612 (i.e., the merged B-line classifier). It should be appreciated that the B-line features 608 provided to the merged B-line classifier 612 may include feature scores for the same image features that were provided to the B-line classifier 610. However, it is also contemplated that the B-line features 608 provided to the merged B-line classifier 612 may be different from those provided to the B-line classifier 610 (i.e., may include one or more different feature scores and/or feature scores for different image features). The merged B-line classifier 612 outputs a single floating-point prediction score between 0 and 1 (y_Merged), where a higher score indicates a higher predicted likelihood of the candidate being a merged B-line.
[0095] In specific embodiments, the merged B-line classifier 612 is a logistic regression model with a plurality of optimizable parameters that are tuned during training of the model on training datasets. The optimizable parameters can include, for example and without limitation, feature weights (W_Merged) that determine the relative contribution of each feature to the prediction and a final classification threshold (T_Merged). For example, in certain embodiments, the input vector (X_Features) can have 14 parameters, and the logistic regression model can have 15 optimizable parameters, including 14 feature weights (W_Merged) that determine the relative contribution of each feature to the prediction and a final classification threshold (T_Merged). In embodiments, B-line candidates 606 with a prediction score meeting or exceeding the classification threshold (y_Merged ≥ T_Merged) may be classified as probable merged B-lines 616, whereas B-line candidates 606 with scores that do not meet the threshold may be classified as discrete B-lines (or false candidates) and ignored.
[0096] As described above, one or more smoothing kernels may be applied to the intensity profiles of the image frames of the ultrasound video loop 602 as part of the process of detecting and classifying one or more B-line candidates 606. Accordingly, the conditions for counting a B-line candidate 606 as a true B-line or a true merged B-line may depend on how many smoothing kernels are applied, as illustrated in the examples of FIGS. 12 and 13 below.
[0097] For example, with reference to FIG. 12, a process 1200 for detecting and classifying B-line candidates 1208 of a single image frame using one smoothing kernel is illustrated. As shown, the process 1200 includes: defining a B-line analysis ROI 1202 for the image frame; computing an intensity profile 1204 within the B-line analysis ROI 1202; applying one smoothing kernel to the intensity profile 1204 to generate a smoothed intensity profile 1206; detecting one or more B-line candidates 1208 based on the smoothed intensity profile 1206; defining a B-line candidate ROI 1210 for each of the one or more B-line candidates 1208 detected; extracting a set of B-line features 1212 for each B-line candidate 1208 based on the corresponding B-line candidate ROIs 1210; and independently passing these sets of B-line features 1212 to a B-line classifier 610 and a merged B-line classifier 612. In the example of FIG. 12, the B-line classifier 610 will determine whether each of the B-line candidates 1208 is a probable B-line 614, while the merged B-line classifier 612 will determine whether each of the B-line candidates 1208 is a probable merged B-line 616. After the probable B-lines 614 and probable merged B-lines 616 are obtained, the process 1200 includes checking each probable B-line 614 to determine whether the underlying B-line candidate 1208 was also positively classified as a probable merged B-line 616. If so, the merged
B-line classification takes precedence and the B-line candidate 1208 will be recorded as a merged B-line rather than a B-line. Otherwise, the B-line classification may be recorded.
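For the single-kernel pipeline of FIG. 12, the precedence rule just described can be expressed compactly as follows; the function and label names below are assumptions introduced for illustration.

```python
def resolve_candidate_labels(is_probable_bline, is_probable_merged):
    """Resolve the final per-candidate label for one image frame.

    Both inputs are boolean sequences over the same candidates (outputs of
    the B-line and merged B-line classifiers). A positive merged B-line
    classification takes precedence over a positive B-line classification.
    """
    labels = []
    for bline, merged in zip(is_probable_bline, is_probable_merged):
        if merged:
            labels.append("merged_b_line")
        elif bline:
            labels.append("b_line")
        else:
            labels.append("none")
    return labels
```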
[0098] In embodiments, the process 1200 illustrated in FIG. 12 may be repeated for each image frame of one or more ultrasound video loops 602. Once the process 1200 is repeated for a particular ultrasound video loop 602, video-level assessments may be performed (as shown in FIG. 5 and discussed in more detail below).
[0099] In other embodiments, two or more smoothing kernels may be applied to the intensity profiles of the image frames of an ultrasound video loop 602 as part of the process of detecting one or more B-line candidates 606. For example, with reference to FIG. 13, a process 1300 for detecting and classifying B-line candidates 1308 of a single image frame using two smoothing kernels is illustrated. As shown, the process 1300 includes: defining a B-line analysis ROI 1302 for the image frame; computing an intensity profile 1304 within the B-line analysis ROI 1302; applying a first smoothing kernel to the intensity profile 1304 to generate a first smoothed intensity profile 1306; detecting one or more B-line candidates 1308 based on the first smoothed intensity profile 1306; defining a B-line candidate ROI 1310 for each of the one or more B-line candidates 1308 detected; extracting a set of B-line features 1312 for each B-line candidate 1308 based on the corresponding B-line candidate ROIs 1310; passing these sets of B-line features 1312 to a B-line classifier 610; applying a second smoothing kernel to the intensity profile 1304 to generate a second smoothed intensity profile 1316; detecting one or more B-line candidates 1318 based on the second smoothed intensity profile 1316; defining a B-line candidate ROI 1320 for each of the one or more B-line candidates 1318 detected; extracting a set of B-line features 1322 for each B-line candidate 1318 based on the corresponding B-line candidate ROIs 1320; and passing these sets of B-line features 1322 to a merged B-line classifier 612. As described above, the first smoothing kernel can be smaller than the second smoothing kernel.
[00100] In the example of FIG. 13, the B-line classifier 610 will determine whether each of the B-line candidates 1308 is a probable B-line 614, while the merged B-line classifier 612 will determine whether each of the B-line candidates 1318 is a probable merged B-line 616. After the probable B-lines 614 and probable merged B-lines 616 are obtained, the process 1300 includes, in a step 1324, searching the image frame to determine whether any of the detected B-lines 614 correspond (i.e., overlap) with a detected merged B-line 616. If so, the merged B-line classification
takes precedence and the B-line candidate 1308 will be recorded as a merged B-line rather than a B-line. Otherwise, the B-line classification may be recorded.
[0100] In embodiments, the process 1300 illustrated in FIG. 13 may be repeated for each image frame of one or more ultrasound video loops 602. Once the process 1300 is repeated for a particular ultrasound video loop 602, video-level assessments may be performed (as shown in FIG. 5 and discussed in more detail below).
[0101] In embodiments, a video-level assessment of the probable B-lines and a video-level assessment of the merged B-lines may be performed for a given ultrasound video loop 602. For example, with further reference to FIG. 5, the process 500 can include: in a step 570, processing the ultrasound video loop 602 to determine one or more video-level B-line parameters; and in a step 580, processing the ultrasound video loop 602 to determine one or more video-level merged B-line parameters.
[0102] In particular embodiments, the step 570 can include computing at least a first video-level parameter, such as a “maximum B-line count” for the video loop 602. The maximum B-line count may be computed as the maximum number of discrete B-lines 614 appearing in any single frame of the video loop. In certain embodiments, the maximum B-line count may be reported (i.e., output via an electronic device 108) in the form of categories, namely: “0 B-lines”, “1-2 B-lines”, “3+ B-lines”, and/or the like. Alternatively, the raw integer count may be reported (i.e., output) instead.
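A minimal sketch of this video-level computation is shown below; the category labels follow the text, and the function name is an assumption.

```python
def max_bline_count(per_frame_bline_counts):
    """Return the maximum B-line count over a video loop and its category.

    per_frame_bline_counts holds the number of probable discrete B-lines
    detected in each frame of the loop.
    """
    count = max(per_frame_bline_counts) if per_frame_bline_counts else 0
    if count == 0:
        category = "0 B-lines"
    elif count <= 2:
        category = "1-2 B-lines"
    else:
        category = "3+ B-lines"
    return count, category
```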
[0103] In further embodiments, the step 580 can include computing at least a second video-level parameter, such as a determination of whether the video loop 602 is positive for merged B-lines. According to various aspects of the present disclosure, whether a video loop 602 is positive for merged B-lines may depend on one or more of the following: (i) the number of frames containing at least one merged B-line; (ii) the average or total number of merged B-lines detected throughout the video; (iii) the average or total width of all merged B-lines throughout the video; (iv) the average or total prediction confidence score for all merged B-lines throughout the video; and/or (v) any combination of the above.
[0104] For example, in particular embodiments, a video loop 602 may be determined to be positive for merged B-lines if the number of imaging frames containing at least one probable merged B-line meets or exceeds a predefined minimum number of frames (e.g., a minimum frame count that may be optimized for each transducer type).
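As an illustrative sketch of criterion (i) above, the frame-count test might be expressed as follows; the default minimum number of frames is an assumed placeholder, since the text indicates this value may be optimized per transducer type.

```python
def is_merged_positive(per_frame_merged_counts, min_frames_with_merged=5):
    """Return True if the video loop is positive for merged B-lines.

    per_frame_merged_counts holds the number of probable merged B-lines
    detected in each frame; the loop is positive when at least
    min_frames_with_merged frames contain one or more merged B-lines.
    """
    frames_with_merged = sum(1 for m in per_frame_merged_counts if m >= 1)
    return frames_with_merged >= min_frames_with_merged
```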
[0105] According to certain aspects of the present disclosure, the system 100 may be configured to report the maximum B-line count (or associated category) only if the ultrasound video loop 602 is not positive for merged B-lines. If the video loop 602 is positive, a separate category (“Merged (1)”) can be reported instead. Alternatively, the maximum B-line count (or associated category) may be reported along with the “Merged” category in all cases.
[0106] It is a particular feature of the present disclosure that users 114 of the system 100 may be able to more quickly and more accurately visualize lung ultrasound imaging data 104 for a patient 106, including in real-time. Although evaluation of B-lines and merged B-lines is important in screening, diagnosis, and management of disease progression and treatment, it is appreciated that it can be difficult or impossible to quickly and accurately detect certain pathological features seen in lung ultrasounds, even for experienced users. Thus, the systems and methods disclosed herein not only improve the quantification of B-lines and merged B-lines in lung ultrasound examinations, but also provide more consistent lung ultrasound interpretations and facilitate utilization by more novice users.
[0107] Thus, according to aspects of the present disclosure, the systems and methods described herein can include generating a graphical user interface comprising B-line data produced for one or more lung ultrasound video loops 104, 602, and displaying the graphical user interface on a display device 110. For example, as shown in the example of FIG. 3, the process 300 can include: in a step 340, outputting a zero B-line count if none of the image frames of an ultrasound video loop 104, 602 contain any B-lines; in a step 360, outputting the maximum B-line count for the video loop 104, 602 if the ultrasound video loop 104, 602 is not positive for merged B-lines; and in a step 370, outputting a merged B-line indicator if the ultrasound video loop 104, 602 is positive for merged B-lines. As shown in the example of FIG. 5, one or more of these steps 340, 360, 370, and/or one or more other steps may be summarized in the step 590 of the process 500, which includes outputting frame-level as well as video-level results. In embodiments, the outputted results (e.g., results 618 shown in FIG. 6) can include the frame-level and video-level assessments described above, as well as one or more representative image frames (with or without annotation).
[0108] In particular embodiments, the graphical user interface can include one or more of the following: one or more image frames from the ultrasound video loop 104, 602; a video-level output category; a lung zone indicator; an overlay of B-line and/or merged B-line indicators; an overlay indicating the B-line analysis ROI; and/or the like, including combinations thereof. For example, with reference to FIG. 14, a first exemplary graphical user interface 1400 is illustrated comprising one or more image frames from the ultrasound video loop 104, 602, a video-level output category, a lung zone indicator, an overlay of B-line and/or merged B-line indicators, and an overlay indicating the B-line analysis ROI.
[0109] In further embodiments, the graphical user interface can include a lung zone summary for one or more ultrasound video loops 104, 602 taken across all scanned lung zones. For example, with reference to FIG. 15, a second exemplary graphical user interface 1500 is illustrated according to aspects of the present disclosure. As shown, the graphical user interface 1500 includes a visual overlay of eight front-facing lung zones (R1, R2, R3, R4, L1, L2, L3, and L4) along with video-level output categories for each of these lung zones, and a visual overlay of four back-facing lung zones (R5, R6, L5, L6) along with video-level output categories for each of these lung zones.
[0110] In still further embodiments, the graphical user interface can include multiple image frames from a lung ultrasound video loop 104, 602 that correspond to ultrasound line scan data obtained using different transducers of the ultrasound imaging probe 102. As shown in the example of FIG. 16, the graphical user interface 1600 includes image frames from three separate transducers of an ultrasound probe 102 (labeled transducers S4-1, L12-4, and C5-2), which are annotated to indicate the B-lines and merged B-lines, the B-line analysis ROI, and the video-level output category.
[0111] It should be appreciated that all combinations of the foregoing concepts and additional concepts discussed in greater detail below (provided such concepts are not mutually inconsistent) are contemplated as being part of the inventive subject matter disclosed herein. In particular, all combinations of claimed subject matter appearing at the end of this disclosure are contemplated as being part of the inventive subject matter disclosed herein. It should also be appreciated that terminology explicitly employed herein that also may appear in any disclosure incorporated by reference should be accorded a meaning most consistent with the particular concepts disclosed herein.
[0112] All definitions, as defined and used herein, should be understood to control over dictionary definitions, definitions in documents incorporated by reference, and/or ordinary meanings of the defined terms.
[0113] The indefinite articles “a” and “an,” as used herein in the specification and in the claims, unless clearly indicated to the contrary, should be understood to mean “at least one.”
[0114] The phrase “and/or,” as used herein in the specification and in the claims, should be understood to mean “either or both” of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with “and/or” should be construed in the same fashion, i.e., “one or more” of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the “and/or” clause, whether related or unrelated to those elements specifically identified.
[0115] As used herein in the specification and in the claims, “or” should be understood to have the same meaning as “and/or” as defined above. For example, when separating items in a list, “or” or “and/or” shall be interpreted as being inclusive, i.e., the inclusion of at least one, but also including more than one, of a number or list of elements, and, optionally, additional unlisted items. Only terms clearly indicated to the contrary, such as “only one of” or “exactly one of,” or, when used in the claims, “consisting of,” will refer to the inclusion of exactly one element of a number or list of elements. In general, the term “or” as used herein shall only be interpreted as indicating exclusive alternatives (i.e. “one or the other but not both”) when preceded by terms of exclusivity, such as “either,” “one of,” “only one of,” or “exactly one of.”
[0116] As used herein in the specification and in the claims, the phrase “at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified.
[0117] As used herein, although the terms first, second, third, etc. may be used herein to describe various elements or components, these elements or components should not be limited by
these terms. These terms are only used to distinguish one element or component from another element or component. Thus, a first element or component discussed below could be termed a second element or component without departing from the teachings of the inventive concept.
[0118] Unless otherwise noted, when an element or component is said to be “connected to,” “coupled to,” or “adjacent to” another element or component, it will be understood that the element or component can be directly connected or coupled to the other element or component, or intervening elements or components may be present. That is, these and similar terms encompass cases where one or more intermediate elements or components may be employed to connect two elements or components. However, when an element or component is said to be “directly connected” to another element or component, this encompasses only cases where the two elements or components are connected to each other without any intermediate or intervening elements or components.
[0119] In the claims, as well as in the specification above, all transitional phrases such as “comprising,” “including,” “carrying,” “having,” “containing,” “involving,” “holding,” “composed of,” and the like are to be understood to be open-ended, i.e., to mean including but not limited to. Only the transitional phrases “consisting of” and “consisting essentially of” shall be closed or semi-closed transitional phrases, respectively.
[0120] It should also be understood that, unless clearly indicated to the contrary, in any methods claimed herein that include more than one step or act, the order of the steps or acts of the method is not necessarily limited to the order in which the steps or acts of the method are recited.
[0121] The above-described examples of the described subject matter can be implemented in any of numerous ways. For example, some aspects can be implemented using hardware, software or a combination thereof. When any aspect is implemented at least in part in software, the software code can be executed on any suitable processor or collection of processors, whether provided in a single device or computer or distributed among multiple devices/computers.
[0122] The present disclosure can be implemented as a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product can include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure.
[0123] The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium
can be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium comprises the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
[0124] Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network can comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
[0125] Computer readable program instructions for carrying out operations of the present disclosure can be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, comprising an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions can execute entirely on the user’s computer, partly on the user’s computer, as a standalone software package, partly on the user’s computer and partly on a remote computer or entirely
on the remote computer or server. In the latter scenario, the remote computer can be connected to the user's computer through any type of network, comprising a local area network (LAN) or a wide area network (WAN), or the connection can be made to an external computer (for example, through the Internet using an Internet Service Provider). In some examples, electronic circuitry comprising, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) can execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure.
[0126] Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to examples of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
[0127] The computer readable program instructions can be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions can also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture comprising instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
[0128] The computer readable program instructions can also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
[0129] The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various examples of the present disclosure. In this regard, each block in the flowchart or block diagrams can represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks can occur out of the order noted in the Figures. For example, two blocks shown in succession can, in fact, be executed substantially concurrently, or the blocks can sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
[0130] Other implementations are within the scope of the following claims and other claims to which the applicant can be entitled.
[0131] While several inventive embodiments have been described and illustrated herein, those of ordinary skill in the art will readily envision a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein, and each of such variations and/or modifications is deemed to be within the scope of the inventive embodiments described herein. More generally, those skilled in the art will readily appreciate that all parameters, dimensions, materials, and configurations described herein are meant to be exemplary and that the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the inventive teachings is/are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific inventive embodiments described herein. It is, therefore, to be understood that the foregoing embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, inventive embodiments may be practiced otherwise than as specifically described and claimed. Inventive embodiments of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or
methods are not mutually inconsistent, is included within the inventive scope of the present disclosure.
Claims
1. A system (100) for viewing and analyzing lung ultrasound images, the system comprising: an ultrasound imaging device (102) comprising one or more ultrasound imaging transducers, wherein the ultrasound imaging device (102) is configured to generate a lung ultrasound video loop (104) of a subject (106), the lung ultrasound video loop (104, 602) comprising a plurality of lung ultrasound imaging frames; and an electronic device (108) in communication with the ultrasound imaging device (102), wherein the electronic device (108) comprises: a display device (110) configured to display a graphical user interface; a computer-readable storage medium (404) having stored thereon computer-readable instructions (422) to be executed by one or more processors; and one or more processors (402) configured by the computer-readable instructions (422) stored on the computer-readable storage medium (404) to perform the following operations: (i) obtain a lung ultrasound video loop of a subject, the lung ultrasound video loop comprising a plurality of lung ultrasound imaging frames; (ii) analyze the lung ultrasound video loop using a B-line classifier (610) and a merged B-line classifier (612) to generate B-line data for the lung ultrasound video loop; and (iii) output, via the display device, a graphical user interface comprising the B-line data generated for the lung ultrasound video loop.
2. The system (100) of claim 1, wherein the B-line data for the lung ultrasound video loop (104, 602) is generated by: pre-processing each imaging frame of the lung ultrasound video loop (104) to obtain a pre-processed lung ultrasound video loop (602); determining a B-line analysis region-of-interest for each imaging frame of the pre-processed lung ultrasound video loop (602); analyzing the B-line analysis region-of-interest of each imaging frame to identify one or more B-line candidates;
for each B-line candidate, extracting a set of B-line features from the imaging frames of the pre-processed lung ultrasound video loop (602); classifying each of the B-line candidates, based on the corresponding set of B-line features, using the B-line classifier (610) and the merged B-line classifier (612) to predict a likelihood that the B-line candidate is a probable B-line and/or a probable merged B-line; identifying one or more probable B-lines and/or probable merged B-lines in each imaging frame of the pre-processed ultrasound video loop (602) based on the classification of each of the B-line candidates; determining a maximum B-line count for the lung ultrasound video loop (104), wherein the maximum B-line count is the maximum number of probable B-lines appearing in any single imaging frame of the pre-processed lung ultrasound video loop (602); and determining whether the lung ultrasound video loop (104) is positive for merged B-lines based on the classification of each of the B-line candidates.
3. The system (100) of claim 2, wherein the B-line classifier (610) is a first trained machine learning model configured to receive a plurality of B-line features as an input and output a likelihood that a B-line candidate is a probable B-line, and wherein the merged B-line classifier (612) is a second trained machine learning model configured to receive a plurality of B-line features as an input and output a likelihood that a B-line candidate is a probable merged B-line.
4. The system (100) of claim 2, wherein the lung ultrasound video loop (104) is positive for merged B-lines if the number of imaging frames of the pre-processed lung ultrasound video loop (602) that contain a probable merged B-line meets or exceeds a predefined minimum number of imaging frames.
5. The system (100) of claim 2, wherein identifying one or more B-line candidates within the B-line analysis region-of-interest includes performing the following operations for each imaging frame of the pre-processed lung ultrasound imaging video loop (602): smoothing an intensity profile of the B-line analysis region-of-interest for the corresponding imaging frame;
identifying one or more local peaks along the smoothed intensity profile of the B-line analysis region-of-interest for the corresponding imaging frame; and defining a B-line candidate region-of-interest for each of the one or more local peaks identified, wherein each B-line candidate region-of-interest corresponds to a B-line candidate.
6. The system (100) of claim 5, wherein the set of B-line features for each B-line candidate is extracted from the B-line candidate region-of-interest defined in the imaging frames of the pre-processed lung ultrasound video loop.
7. The system (100) of claim 2, wherein identifying one or more B-line candidates within the B-line analysis region-of-interest includes performing the following operations for each imaging frame of the pre-processed lung ultrasound imaging video loop (602): smoothing an intensity profile of the B-line analysis region-of-interest for the corresponding imaging frame using a first smoothing kernel; identifying one or more local peaks along the intensity profile smoothed using the first smoothing kernel; defining a B-line candidate region-of-interest for each of the one or more local peaks identified in the intensity profile smoothed using the first smoothing kernel, wherein each B-line candidate region-of-interest corresponds to a B-line candidate; smoothing the intensity profile of the B-line analysis region-of-interest for the corresponding imaging frame using a second smoothing kernel, wherein the second smoothing kernel is a different size than the first smoothing kernel; identifying one or more local peaks along the intensity profile smoothed using the second smoothing kernel; and defining a B-line candidate region-of-interest for each of the one or more local peaks identified in the intensity profile smoothed using the second smoothing kernel, wherein each B-line candidate region-of-interest corresponds to an additional B-line candidate.
8. The system (100) of claim 7, wherein a first set of B-line features for each B-line candidate is extracted from the B-line candidate regions-of-interest defined based on the intensity profile smoothed using the first smoothing kernel, and a second set of B-line features for each B-line candidate is extracted from the B-line candidate regions-of-interest defined based on the intensity profile smoothed using the second smoothing kernel.
9. The system (100) of claim 2, wherein the set of B-line features extracted from the imaging frames of the pre-processed lung ultrasound video loop (602) includes at least one B-line feature measured at two or more different spatial scales.
10. An image processing method (500) comprising: pre-processing (510) each imaging frame of a lung ultrasound video loop of a subject to obtain a pre-processed lung ultrasound video loop, wherein the lung ultrasound video loop comprises a plurality of imaging frames; determining (520) a B-line analysis region-of-interest for each imaging frame of the pre-processed lung ultrasound video loop; analyzing (530) the B-line analysis region-of-interest of each imaging frame to identify one or more B-line candidates; for each B-line candidate, extracting (540) a set of B-line features from the imaging frames of the pre-processed lung ultrasound video loop; classifying (550) each of the B-line candidates, based on the corresponding set of B-line features, using a B-line classifier and a merged B-line classifier to predict a likelihood that the B-line candidate is a probable B-line and/or a probable merged B-line; identifying (560) one or more probable B-lines and/or probable merged B-lines in each imaging frame of the pre-processed ultrasound video loop based on the classification of each of the B-line candidates; determining (570) a maximum B-line count for the lung ultrasound video loop, wherein the maximum B-line count is the maximum number of probable B-lines appearing in any single imaging frame of the pre-processed lung ultrasound video loop; and determining (580) whether the lung ultrasound video loop is positive for merged B-lines based on the classification of each of the B-line candidates.
11. The image processing method (500) of claim 10, wherein the B-line classifier (610) is a first trained machine learning model configured to receive a plurality of B-line features as an input and output a likelihood that a B-line candidate is a probable B-line, and wherein the merged B-line classifier (612) is a second trained machine learning model configured to receive a plurality of B-line features as an input and output a likelihood that a B-line candidate is a probable merged B-line.
12. The image processing method (500) of claim 10, wherein the lung ultrasound video loop is positive for merged B-lines if the number of imaging frames of the pre-processed lung ultrasound video loop that contain a probable merged B-line meets or exceeds a predefined minimum number of imaging frames.
13. The image processing method (500) of claim 10, wherein identifying (530) one or more B-line candidates within the B-line analysis region-of-interest includes performing the following operations for each imaging frame of the pre-processed lung ultrasound imaging video loop: smoothing an intensity profile of the B-line analysis region-of-interest for the corresponding imaging frame; identifying one or more local peaks along the smoothed intensity profile of the B-line analysis region-of-interest for the corresponding imaging frame; and defining a B-line candidate region-of-interest for each of the one or more local peaks identified, wherein each B-line candidate region-of-interest corresponds to a B-line candidate.
14. The image processing method (500) of claim 10, wherein identifying (530) one or more B-line candidates within the B-line analysis region-of-interest includes performing the following operations for each imaging frame of the pre-processed lung ultrasound imaging video loop: smoothing an intensity profile of the B-line analysis region-of-interest for the corresponding imaging frame using a first smoothing kernel; identifying one or more local peaks along the intensity profile smoothed using the first smoothing kernel; defining a B-line candidate region-of-interest for each of the one or more local peaks identified in the intensity profile smoothed using the first smoothing kernel, wherein each B-line candidate region-of-interest corresponds to a B-line candidate; smoothing the intensity profile of the B-line analysis region-of-interest for the corresponding imaging frame using a second smoothing kernel, wherein the second smoothing kernel is a different size than the first smoothing kernel; identifying one or more local peaks along the intensity profile smoothed using the second smoothing kernel; and defining a B-line candidate region-of-interest for each of the one or more local peaks identified in the intensity profile smoothed using the second smoothing kernel, wherein each B-line candidate region-of-interest corresponds to an additional B-line candidate.
15. A computer program product (424) comprising: a non-transitory computer-readable storage medium (404) having stored thereon computer-readable instructions (422) that, when executed by one or more processors, cause the one or more processors to perform the following operations: (i) obtain a lung ultrasound video loop of a subject, the lung ultrasound video loop comprising a plurality of lung ultrasound imaging frames; (ii) pre-process each imaging frame of the lung ultrasound video loop to obtain a pre-processed lung ultrasound video loop; (iii) determine a B-line analysis region-of-interest for each imaging frame of the pre-processed lung ultrasound video loop; (iv) analyze the B-line analysis region-of-interest of each imaging frame to identify one or more B-line candidates; (v) for each B-line candidate, extract a set of B-line features from the imaging frames of the pre-processed lung ultrasound video loop; (vi) classify each of the B-line candidates, based on the corresponding set of B-line features, using a B-line classifier and a merged B-line classifier to predict a likelihood that the B-line candidate is a probable B-line and/or a probable merged B-line; (vii) identify one or more probable B-lines and/or probable merged B-lines in each imaging frame of the pre-processed ultrasound video loop based on the classification of each of the B-line candidates; (viii) determine a maximum B-line count for the lung ultrasound video loop, wherein the maximum B-line count is the maximum number of probable B-lines appearing in any single imaging frame of the pre-processed lung ultrasound video loop; (ix) determine whether the lung ultrasound video loop is positive for merged B-lines based on the classification of each of the B-line candidates; and (x) output, via a display device, a graphical user interface comprising the maximum B-line count and/or the determination of whether the lung ultrasound video loop is positive for merged B-lines.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202363603875P | 2023-11-29 | 2023-11-29 | |
| US63/603,875 | 2023-11-29 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025114068A1 (en) | 2025-06-05 |
Family
ID=93607769
Family Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/EP2024/082769 (WO2025114068A1, pending) | 2023-11-29 | 2024-11-19 | Systems and methods of quantifying b-lines and merged b-lines in lung ultrasound images |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025114068A1 (en) |
Patent Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20200054306A1 (en) * | 2018-08-17 | 2020-02-20 | Inventive Government Solutions, Llc | Automated ultrasound video interpretation of a body part, such as a lung, with one or more convolutional neural networks such as a single-shot-detector convolutional neural network |
Non-Patent Citations (1)
| Title |
|---|
| SHEA DANIEL E ET AL: "Deep Learning Video Classification of Lung Ultrasound Features Associated with Pneumonia", 2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW), IEEE, 17 June 2023 (2023-06-17), pages 3103 - 3112, XP034397050, DOI: 10.1109/CVPRW59228.2023.00312 * |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Zotin et al. | Edge detection in MRI brain tumor images based on fuzzy C-means clustering | |
| Frank et al. | Integrating domain knowledge into deep networks for lung ultrasound with applications to COVID-19 | |
| Singh et al. | Retinal blood vessels segmentation by using Gumbel probability distribution function based matched filter | |
| US8774479B2 (en) | System and method for automated segmentation, characterization, and classification of possibly malignant lesions and stratification of malignant tumors | |
| JP6196922B2 (en) | Image processing apparatus, image processing method, and image processing program | |
| US9159127B2 (en) | Detecting haemorrhagic stroke in CT image data | |
| Roy et al. | An effective method for computerized prediction and segmentation of multiple sclerosis lesions in brain MRI | |
| EP3642797A1 (en) | Segmentation of retinal blood vessels in optical coherence tomography angiography images | |
| Agarwal et al. | A novel approach to detect glaucoma in retinal fundus images using cup-disk and rim-disk ratio | |
| US20210158523A1 (en) | Method and system for standardized processing of mr images | |
| US8831311B2 (en) | Methods and systems for automated soft tissue segmentation, circumference estimation and plane guidance in fetal abdominal ultrasound images | |
| Balaji et al. | Detection of heart muscle damage from automated analysis of echocardiogram video | |
| Sánchez et al. | Mixture model-based clustering and logistic regression for automatic detection of microaneurysms in retinal images | |
| Raghesh Krishnan et al. | Automatic classification of liver diseases from ultrasound images using GLRLM texture features | |
| Abdushkour et al. | Enhancing fine retinal vessel segmentation: Morphological reconstruction and double thresholds filtering strategy | |
| Ribeiro et al. | Handling inter-annotator agreement for automated skin lesion segmentation | |
| Tavakoli et al. | Unsupervised automated retinal vessel segmentation based on Radon line detector and morphological reconstruction | |
| CN102247144A (en) | Time intensity characteristic-based computer aided method for diagnosing benign and malignant breast lesions | |
| Nagaraj et al. | Segmentation of intima media complex from carotid ultrasound images using wind driven optimization technique | |
| KR20130090740A (en) | Apparatus and method processing image | |
| Nagaraj et al. | Carotid wall segmentation in longitudinal ultrasound images using structured random forest | |
| Medrano-Gracia et al. | An atlas for cardiac MRI regional wall motion and infarct scoring | |
| Snehkunj et al. | Brain MRI/CT images feature extraction to enhance abnormalities quantification | |
| Sulas et al. | Impact of pulsed-wave-Doppler velocity-envelope tracing techniques on classification of complete fetal cardiac cycles | |
| WO2025114068A1 (en) | Systems and methods of quantifying b-lines and merged b-lines in lung ultrasound images |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24809611; Country of ref document: EP; Kind code of ref document: A1 |