WO2025231468A1 - Multi-faceted transducer - Google Patents
Multi-faceted transducer
- Publication number
- WO2025231468A1 (PCT/US2025/027740)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- imaging data
- probe
- imaging
- array
- transducers
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/44—Constructional features of the ultrasonic, sonic or infrasonic diagnostic device
- A61B8/4477—Constructional features of the ultrasonic, sonic or infrasonic diagnostic device using several separate ultrasound transducers or probes
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/44—Constructional features of the ultrasonic, sonic or infrasonic diagnostic device
- A61B8/4444—Constructional features of the ultrasonic, sonic or infrasonic diagnostic device related to the probe
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/06—Measuring blood flow
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/08—Clinical applications
- A61B8/0875—Clinical applications for diagnosis of bone
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/42—Details of probe positioning or probe attachment to the patient
- A61B8/4209—Details of probe positioning or probe attachment to the patient by using holders, e.g. positioning frames
- A61B8/4218—Details of probe positioning or probe attachment to the patient by using holders, e.g. positioning frames characterised by articulated arms
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/42—Details of probe positioning or probe attachment to the patient
- A61B8/4245—Details of probe positioning or probe attachment to the patient involving determining the position of the probe, e.g. with respect to an external reference frame or to the patient
- A61B8/4263—Details of probe positioning or probe attachment to the patient involving determining the position of the probe, e.g. with respect to an external reference frame or to the patient using sensors not mounted on the probe, e.g. mounted on an external reference frame
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/42—Details of probe positioning or probe attachment to the patient
- A61B8/4272—Details of probe positioning or probe attachment to the patient involving the acoustic interface between the transducer and the tissue
- A61B8/429—Details of probe positioning or probe attachment to the patient involving the acoustic interface between the transducer and the tissue characterised by determining or monitoring the contact between the transducer and the tissue
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/44—Constructional features of the ultrasonic, sonic or infrasonic diagnostic device
- A61B8/4416—Constructional features of the ultrasonic, sonic or infrasonic diagnostic device related to combined acquisition of different diagnostic modalities, e.g. combination of ultrasound and X-ray acquisitions
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/44—Constructional features of the ultrasonic, sonic or infrasonic diagnostic device
- A61B8/4483—Constructional features of the ultrasonic, sonic or infrasonic diagnostic device characterised by features of the ultrasound transducer
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/46—Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient
- A61B8/461—Displaying means of special interest
- A61B8/462—Displaying means of special interest characterised by constructional features of the display
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/48—Diagnostic techniques
- A61B8/483—Diagnostic techniques involving the acquisition of a 3D volume of data
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/52—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/5215—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
- A61B8/5223—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for extracting a diagnostic or physiological parameter from medical diagnostic data
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/56—Details of data transmission or power supply
- A61B8/565—Details of data transmission or power supply involving data transmission via a network
Definitions
- the usefulness of ultrasound images for making clinical diagnoses depends on the quality of the images and the completeness of the set of images provided to a clinician.
- the quality of images depends on a variety of factors, including imaging frequency, inter-element pitch, performance of the ultrasound beamforming and image formation processes, distance between the ultrasound transducer and the organ, artifacts from obstructions such as bones and gas, artifacts produced from intervening tissue such as fascia, and the active aperture size.
- the shape and size of the ultrasound probe typically depend on the circumstances, including the depth and size of the anatomy being scanned, the anatomy’s location relative to obstructions (such as ribs and gas), the acoustic/mechanical properties of intervening tissue between the probe and the structure and the quantity of intervening tissue between the probe and the anatomy, which may depend on the body habitus of the patient.
- the transducer arrays are located at opposing ends of the probe, such that the transducer arrays are substantially parallel to one another.
- the probe is typically elongated, such that the operator can manually grip the upright large surfaces of the probe.
- the technology is generally directed to a probe, such as an ultrasound probe, having two or more transducer arrays.
- the transducer arrays are configured on the probe such that a first array is transverse to a second array.
- Each transducer array may be of different sizes, such that the first transducer array has a larger aperture as compared with the second transducer array.
- the probe may be handheld and, therefore, positioned via a user.
- the probe is coupled to a robotic positioning system that can automatically and/or semi-automatically position the probe to capture imaging data of a region of interest.
- One aspect of the technology is directed to a device, comprising a first array of transducers on a first surface of a probe, the first array of transducers having a first imaging aperture configured to capture first imaging data of target anatomy and a second array of transducers on a second surface of the probe, the second array of transducers having a second imaging aperture configured to capture second imaging data of the target anatomy.
- the first surface of the probe may be transverse to the second surface of the probe.
- the first surface of the probe may be defined by a first length and a first width, and the second surface of the probe may be defined by a second length and the first width.
- the device may be configured to be a hand-held device or coupled to a robotic arm.
- the device may further comprise a third array of transducers on a third surface of the probe, the third array of transducers having a third imaging aperture configured to capture third imaging data of the target anatomy, wherein the third surface of the probe is transverse to at least one of the first surface or the second surface of the probe.
- the first imaging aperture may be larger than the third imaging aperture, and the second imaging aperture may be smaller than the third imaging aperture.
- At least one surface of the probe may comprise a coupling mechanism configured to releasably couple the probe to a robotic arm.
- the first array of transducers may be a MEMS transducer array and the second array of transducers may be a single crystal piezoelectric transducer array.
- the first imaging aperture may be larger than the second imaging aperture.
- the probe may comprise a first array of transducers on a first surface of a probe, the first array of transducers having a first imaging aperture configured to capture first imaging data of target anatomy and a second array of transducers on a second surface of the probe, the second array of transducers having a second imaging aperture configured to capture second imaging data of the target anatomy, wherein the first surface of the probe is transverse to the second surface of the probe.
- the system may further comprise one or more processors in communication with the probe. The one or more processors may be configured to receive the first imaging data of the target anatomy, receive the second imaging data of the target anatomy, and generate, based on the first and second imaging data, a representation of the target anatomy.
- the one or more processors may be further configured to determine, based on the first imaging data, whether additional imaging data is needed to generate the representation of the target anatomy, and provide for output, based on the determination, a notification to obtain the additional imaging data.
- the additional imaging data may be the second imaging data.
- the one or more processors may be configured to segment the first imaging data, identify, based on the segmented first imaging data, anatomical structures within the first imaging data, provide the identified anatomical structures as input into an artificial intelligence (AI) model, and predict, by executing the AI model, regions of missing imaging data.
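- purely as an illustrative sketch of the pipeline described above (the function and model names below are hypothetical placeholders, not components named in this disclosure), the missing-data prediction may be organized as follows:

```python
# Hypothetical sketch of the missing-data prediction pipeline (illustrative only).
def find_missing_regions(first_imaging_data, segmenter, structure_identifier, ai_model):
    """Segment the scan, identify anatomical structures, and predict missing regions."""
    segments = segmenter.segment(first_imaging_data)       # e.g., per-organ masks
    structures = structure_identifier.identify(segments)   # labeled anatomical structures
    return ai_model.predict(structures)                    # regions of missing imaging data

def needs_additional_imaging(missing_regions):
    # Any flagged region triggers a notification to reimage, e.g., with the second array.
    return len(missing_regions) > 0
```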
- the one or more processors may determine that the additional imaging data is needed.
- the notification may comprise instructions to obtain the additional imaging data using the second array of transducers.
- the notification may comprise a visual representation of at least one of a location for capturing the additional imaging information or what surface of the probe to use to obtain the additional imaging information.
- the first array of transducers may be a MEMS transducer array and the second array of transducers may be a single crystal piezoelectric transducer array.
- the representation of the target anatomy may be a two-dimensional or three-dimensional representation of the target anatomy.
- the one or more processors may be further configured to provide for output the generated representation of the target anatomy.
- the first imaging aperture may be larger than the second imaging aperture.
- Yet another aspect of the technology is directed to a method, comprising receiving, by one or more processors in communication with a probe, first imaging data of target anatomy, wherein the first imaging data is captured by a first array of transducers having a first imaging aperture, the first array of transducers being positioned on a first surface of the probe, and receiving, by the one or more processors, second imaging data of the target anatomy.
- the second imaging data may be captured by a second array of transducers having a second imaging aperture, the second array of transducers being positioned on a second surface of the probe.
- the first surface of the probe may be transverse to the second surface of the probe.
- the method may further comprise generating, by the one or more processors based on the first and second imaging data, a representation of the target anatomy.
- Yet another aspect of the technology is directed to one or more non-transitory computer-readable storage media encoding instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising receiving first imaging data of target anatomy, wherein the first imaging data is captured by a first array of transducers having a first imaging aperture, the first array of transducers being positioned on a first surface of a probe, and receiving second imaging data of the target anatomy.
- the second imaging data may be captured by a second array of transducers having a second imaging aperture, the second array of transducers being positioned on a second surface of the probe.
- the first surface of the probe may be transverse to the second surface of the probe.
- the one or more processors may further perform operations comprising generating, based on the first and second imaging data, a representation of the target anatomy.
- the probe may comprise a first array of transducers on a first surface of a probe, the first array of transducers having a first imaging aperture configured to capture first imaging data of target anatomy, and a second array of transducers on a second surface of the probe, the second array of transducers having a second imaging aperture configured to capture second imaging data of the target anatomy, wherein the first surface of the probe is transverse to the second surface of the probe.
- the system may further comprise one or more processors in communication with the probe.
- the one or more processors may be configured to receive first imaging data of target anatomy, determine, based on the first imaging data, the target anatomy is at least partially incomplete or of poor quality, provide for output, based on the determination, a notification that additional imaging data is needed, receive, in response to the notification, second imaging data of the target anatomy, and generate, based on the first and second imaging data, a representation of the target anatomy.
- the one or more processors are further configured to segment the first imaging data, identify, based on the segmented first imaging data, anatomical structures within the first imaging data, provide the identified anatomical structures as input into an artificial intelligence (AI) model, and predict, by executing the AI model, regions of missing imaging data or imaging data of poor quality.
- the notification may comprise instructions to obtain the additional imaging data using the second array of transducers.
- the notification may comprise a visual representation of at least one of a location to capture the additional imaging information or what surface of the probe to use to obtain the additional imaging information.
- the one or more processors may be further configured to identify one or more anatomical features present in both the first imaging data and the second imaging data, and align, based on the one or more anatomical features, the first imaging data with the second imaging data.
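- as a rough, non-limiting illustration of such alignment, the sketch below estimates a rigid transform from 3D anatomical landmarks detected in both datasets; the use of a least-squares (Kabsch) fit is an assumption for illustration, not a method specified in this disclosure:

```python
import numpy as np

def align_rigid(landmarks_a, landmarks_b):
    """Estimate rotation R and translation t mapping matched 3D anatomical
    landmarks from the first dataset onto the second (Kabsch algorithm)."""
    a = np.asarray(landmarks_a, dtype=float)
    b = np.asarray(landmarks_b, dtype=float)
    ca, cb = a.mean(axis=0), b.mean(axis=0)        # centroids
    H = (a - ca).T @ (b - cb)                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t                                    # apply as: R @ p + t
```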
- the first array of transducers may be a MEMS transducer array and the second array of transducers may be a single crystal piezoelectric transducer array.
- the one or more processors may be further configured to provide for output the generated representation of the target anatomy.
- the first array of transducers may have a first imaging aperture larger than a second imaging aperture of the second array of transducers.
- the one or more processors may be further configured to capture the first and second imaging data based on at least one of a standard view of the target anatomy, a standard slice plane of the target anatomy, or a slice plane corresponding to a slice plane of a previous imaging procedure.
- Yet another aspect of the technology is directed to a method, comprising receiving, by one or more processors from a first array of transducers on a first surface of a probe, first imaging data of target anatomy, determining, by the one or more processors based on the first imaging data, the target anatomy is at least partially incomplete or of poor image quality, providing for output, by the one or more processors based on the determination, a notification that additional imaging data is needed, receiving, in response to the notification, by the one or more processors from a second array of transducers on a second surface of the probe, second imaging data of the target anatomy, wherein the first surface of the probe is transverse to the second surface of the probe, and generating, by the one or more processors based on the first and second imaging data, a representation of the target anatomy.
- Yet another aspect of the technology is directed to one or more non-transitory computer-readable storage media encoding instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising receiving first imaging data of target anatomy, wherein the first imaging data is captured by a first array of transducers on a first surface of a probe, determining, based on the first imaging data, the target anatomy is at least partially incomplete or of poor image quality, providing for output, based on the determination, a notification that additional imaging data is needed, and receiving, in response to the notification, second imaging data of the target anatomy.
- the second imaging data may be captured by a second array of transducers on a second surface of the probe, and the first surface of the probe may be transverse to the second surface of the probe.
- the one or more processors further perform operations comprising generating, based on the first and second imaging data, a representation of the target anatomy.
- Figure 1 is an example configuration of a probe for use with the system in Figure 29, according to aspects of the disclosure.
- Figures 2A and 2B are another example configuration of a probe for use with the system in Figure 29, according to aspects of the disclosure.
- Figures 3A and 3B are another example configuration of a probe for use with the system in Figure 29, according to aspects of the disclosure.
- Figures 4A and 4B are another example configuration of a probe for use with the system in Figure 29, according to aspects of the disclosure.
- Figures 5A and 5B are another example configuration of a probe for use with the system in Figure 29, according to aspects of the disclosure.
- Figures 6A and 6B are another example configuration of a probe for use with the system in Figure 29, according to aspects of the disclosure.
- Figures 7A and 7B are another example configuration of a probe for use with the system in Figure 29, according to aspects of the disclosure.
- Figure 8 is another example configuration of a probe for use with the system in Figure 29, according to aspects of the disclosure.
- Figure 9 is another example configuration of a probe for use with the system in Figure 29, according to aspects of the disclosure.
- Figure 10 is another example configuration of a probe for use with the system in Figure 29, according to aspects of the disclosure.
- Figure 11 is another example configuration of a probe for use with the system in Figure 29, according to aspects of the disclosure.
- Figures 12A and 12B are another example configuration of a probe for use with the system in Figure 29, according to aspects of the disclosure.
- Figures 13A and 13B are another example configuration of a probe for use with the system in Figure 29, according to aspects of the disclosure.
- Figure 14 is another example configuration of a probe for use with the system in Figure 29, according to aspects of the disclosure.
- Figure 15 is another example configuration of a probe for use with the system in Figure 29, according to aspects of the disclosure.
- Figure 16 is another example configuration of a probe for use with the system in Figure 29, according to aspects of the disclosure.
- Figure 17 is another example configuration of a probe for use with the system in Figure 29, according to aspects of the disclosure.
- Figure 18 is another example configuration of a probe for use with the system in Figure 29, according to aspects of the disclosure.
- Figure 19 is another example configuration of a probe for use with the system in Figure 29, according to aspects of the disclosure.
- Figures 20A-20F illustrate an example use case of a probe, according to aspects of the disclosure.
- Figures 21A-21C illustrate another example use case of a probe, according to aspects of the disclosure.
- Figures 22A-22F illustrate another example use case of a probe, according to aspects of the disclosure.
- Figures 23A-23C illustrate another example use case of a probe, according to aspects of the disclosure.
- Figures 24A-24C illustrate another example use case of a probe, according to aspects of the disclosure.
- Figures 25A-25C illustrate another example use case of a probe, according to aspects of the disclosure.
- Figure 26 is another example configuration of a probe for use with the system in Figure 29, according to aspects of the disclosure.
- Figure 27 is another example configuration of a probe for use with the system in Figure 29, according to aspects of the disclosure.
- Figure 28 is another example configuration of a probe for use with the system in Figure 29, according to aspects of the disclosure.
- Figure 29 is a block diagram of an example imaging system according to aspects of the disclosure.
- Figures 30A and 30B illustrate an example use of a robotic positioning system of Figure 29 capturing imaging data, according to aspects of the disclosure.
- Figure 31 is an example robotic positioning system, such as the robotic positioning system in Figure 29, according to aspects of the disclosure.
- Figure 32 is an example body map overlaid with probe poses recorded using the system of Figure 29, according to aspects of the disclosure.
- Figures 33A and 33B are examples of selected slice planes from within imaging data captured using the system of Figure 29, according to aspects of the disclosure.
- Figures 34A and 34B illustrate an example slice plane optimization using the system of Figure 29, according to aspects of the disclosure.
- Figure 35A illustrates an example of capturing full volumetric images using the imaging system of Figure 29, according to aspects of the disclosure.
- Figure 35B illustrates an example of capturing a target plane using the system of Figure 29, according to aspects of the disclosure.
- Figure 36 is an example of a slice plane overlaid on a body map, according to aspects of the disclosure.
- Figure 37 is an example body map with a plurality of standard probe poses, according to aspects of the disclosure.
- Figure 38 is a flow diagram of an example method for generating a representation of target anatomy using the system of Figure 29, according to aspects of the disclosure.
- Figure 39 is a flow diagram of another example method for generating a representation of target anatomy using the system of Figure 29, according to aspects of the disclosure.
- Figure 40 is a block diagram illustrating an example ultrasound imaging pipeline, according to aspects of the disclosure.
- the technology generally relates to a probe having one or more surfaces configured to transmit and receive signals that can be used to generate image data.
- the probe may be, for example, an ultrasound probe.
- the probe may be configured to be handheld by a user, such as a physician, or coupled to a robot, such as a robotic arm or body of a robot, etc.
- At least two surfaces of the probe may include transducer arrays configured to transmit and receive signals, e.g., ultrasonic signals.
- the surfaces of the probe having the transducer arrays are arranged transverse to one another. Surfaces that are transverse to one another are surfaces that are at an angle less than 180 degrees but greater than 0 degrees to one another.
- An edge of the surfaces of the probe having transducer arrays may, in some examples, be adjacent, touching, separated via another surface, or the like.
- the transducer arrays may have a size substantially corresponding to the surface area of the surface on which the transducer array is located. In some examples, the transducer arrays may have a size smaller than the surface area of the surface on which the transducer is located. According to some examples, the surfaces and, therefore, the transducer arrays may vary in size. For example, one surface may have a larger surface area than another surface such that the surface having the larger surface area has a larger aperture and the surface having the smaller surface area has a smaller aperture, comparatively. In some examples, the surfaces and/or transducer arrays may be substantially the same size.
- the area of the first imaging aperture may be 50% or more larger than the area of the second imaging aperture.
- the area of the first imaging aperture may be 100%, 125%, 200%, 250%, or some other percentage larger than the area of the second imaging aperture.
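- as a purely hypothetical numerical example (dimensions not taken from this disclosure): a first aperture of 40 mm × 30 mm has an area of 1200 mm², while a second aperture of 20 mm × 15 mm has an area of 300 mm², so the first aperture area is 300% larger than the second.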
- the probe may be removably coupled with a robotic arm.
- the robotic arm may be communicatively coupled with a computing system such that the robotic arm can receive instructions from the computing system.
- the instructions may be, for example, for the robotic arm to position, move, and/or rotate the probe to capture imaging data at a given location.
- the location may be, in some examples, a predetermined location.
- the predetermined locations may be determined and/or known based on sensors that can capture data used to generate a map of the patient. According to some examples, the map may be generated and/or updated in real-time based on the location of the patient, movement of the patient, or the like.
- the computing system may receive an input to capture imaging data of a liver of a patient.
- the computing system may provide instructions to the robotic arm to move the probe to the region of the patient corresponding to the liver of the patient.
- the robotic arm may cause the probe to make contact with the patient such that imaging data of the liver can be captured.
- the computing system may determine based on the location which surface of the probe to use. Continuing with the example of imaging the liver, the computing system may determine that the liver requires a large aperture, as compared to the appendix. Based on the determination that a large aperture would efficiently capture imaging data of the liver, the computing system may transmit instructions to the robotic arm to rotate the probe such that the surface of the probe with the largest aperture is used to capture the image data.
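- a minimal sketch of this aperture-selection logic is shown below; the face names, target list, and robotic-arm interface are assumptions for illustration only:

```python
# Hypothetical sketch: choose the probe face whose aperture suits the target organ.
APERTURE_AREAS_MM2 = {"face_large": 1200, "face_medium": 600, "face_small": 300}

def choose_face(target_anatomy):
    """Deep or large organs (e.g., liver) get the largest aperture; small or
    obstructed targets (e.g., appendix) get the smallest."""
    large_targets = {"liver", "kidney", "uterus"}
    pick = max if target_anatomy in large_targets else min
    return pick(APERTURE_AREAS_MM2, key=APERTURE_AREAS_MM2.get)

def image_target(robot_arm, probe, target_anatomy, target_location):
    face = choose_face(target_anatomy)
    robot_arm.move_to(target_location)    # position over the mapped body region
    robot_arm.rotate_probe_to(face)       # present the chosen aperture to the skin
    return probe.capture(face)
```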
- the robotic arm may be used for repeating, reproducing, and/or standardizing imaging data captured by the probe. For example, based on the map of the patient generated using the sensor data, the robotic arm can position the probe at a predetermined location and/or at a predetermined orientation on the surface of the patient's body, based on instructions from the computing system. The amount of force applied and/or movement of the probe may be controlled based on the instructions from the computing system.
- FIG. 1 is a perspective view of an example probe 100.
- probe 100 is a substantially rectangular prism having six surfaces, surfaces 101-106. Each surface of the probe 100 is transverse to at least one other surface, e.g., surface 101 is transverse to surfaces 102, 103. At least two of the surfaces include a transducer array.
- surface 101 may include a transducer array and surface 103 may include a transducer array.
- the transducer arrays may have a surface area substantially corresponding to the surface area of surface 101, 103. As shown, the surface area of surface 101, its respective transducer array, and, therefore, its aperture is larger than the surface area of surface 103, its respective transducer array, and, therefore, its aperture.
- surfaces 102, 104-106 may, additionally or alternatively, include a transducer array having a surface area substantially corresponding to the surface area of surface 102, 104- 106, respectively.
- the probe 100 can include transducers arrays on any of the surfaces 101-106 in any configuration as long as the probe includes transducer arrays on at least two surfaces that are transverse to one another.
- at least one of the surfaces 101-106 is a curved surface. The curvature of the surface may be convex or concave.
- at least one of the surfaces 101-106 comprises a display. The display may be configured to output a representation of the target anatomy that is generated based on the image data captured by the transducer arrays on the other surfaces of the probe 100.
- the probe is configured to be used to capture a large imaging area with a first transducer array on a first surface of the probe and a smaller imaging area with a second transducer array on a second surface of the probe.
- the imaging area is determined based on the size of the surface and, therefore, the size of the corresponding transducer array on the surface.
- the first surface and the second surface are transverse to one another such that the first surface can be used to capture imaging data of the target anatomy and the probe can be rotated such that the second surface is also used to capture imaging data of the target anatomy.
- the smaller second surface may be used to capture imaging data of the target anatomy between bony structure.
- the smaller second surface may be used to press harder on the body near target anatomy, thereby moving any fatty or interfering structures between the smaller probe surface and the target anatomy. The use of the smaller second surface, as compared to the larger first surface, allows for pressure to be applied more easily and effectively in the region near the target anatomy.
- Smaller apertures may improve image quality by enabling the probe to be positioned in an optimal location away from obstructions.
- Smaller probe footprints make it easier to compress the tissue in between the probe and the anatomy and make it easier to establish contact between the entire aperture and the patient’s body, which enables all transceivers to send and receive acoustic energy between the transducer and the patient’s body.
- the orientation of a smaller footprint probe is easier to manipulate whilst maintaining contact with the patient’s body.
- These attributes of a small footprint and smaller aperture size are useful for intercostal imaging, imaging through thick layers of intervening tissue that cause reverberations and aberrations, imaging at a highly acute angle relative to the patient’s body, imaging within small acoustic windows created by obstructions (e.g., ribs, lungs, and gastrointestinal gas), and the like.
- a larger aperture can improve the image quality of a matrix array ultrasound transducer.
- the extent to which an ultrasound system can focus in a particular direction is dependent on the length of the aperture in that direction.
- Wider apertures in the elevation plane are useful for imaging with matrix array transducers, as the width of the aperture not only improves the elevational field of view but also the focusing ability, which improves resolution in the elevation plane and reduces out of plane artifacts in azimuthal plane images. Longer apertures (in the azimuthal plane) can also improve resolution and contrast.
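- as a concrete point of reference, the standard diffraction-limited approximation (a textbook relation, not a formula stated in this disclosure) relates lateral beam width at focal depth z to wavelength λ and aperture length D in the focusing direction:

```latex
\delta_{\mathrm{lat}} \approx \lambda \, \frac{z}{D}
```

- under this approximation, doubling the aperture length in the elevation direction roughly halves the elevational beam width at a given depth, consistent with the improved elevational focusing and resolution described above.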
- an imager with the larger aperture may generate 3D or 4D data (also referred to as “volumetric data” herein) on a first ‘scout scan.’
- the scout scan may involve one or several placements of and imaging by the larger aperture to build a sufficient 3D model. If several placements of the larger aperture are performed, the placements may follow a grid pattern, tiling, or learned sequence.
- the volumetric data from the scout scan data can be used to inform the selection and/or positioning of a second imager (or imager face if a multifaceted probe) having a smaller aperture that can take higher fidelity image slice planes (relative to the larger aperture), such as 2D for example.
- the volumetric data generated by the scout scan can be used to identify locations where higher fidelity images are needed.
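- a minimal sketch of such a scout-then-refine workflow, under assumed interfaces (the reconstruction and gap-finding callables are placeholders, not components of this disclosure):

```python
# Hypothetical sketch: volumetric scout scan, then targeted high-fidelity slices.
def scout_then_refine(probe, robot_arm, grid_poses, reconstruct, find_gaps):
    captures = []
    for pose in grid_poses:                    # grid, tiling, or learned sequence
        robot_arm.move_to(pose)
        captures.append(probe.capture("face_large"))   # 3D/4D volumetric data
    model = reconstruct(captures)              # assumed 3D reconstruction step
    slices = []
    for location in find_gaps(model):          # where higher fidelity is needed
        robot_arm.move_to(location)
        slices.append(probe.capture("face_small"))     # high-fidelity 2D slice planes
    return model, slices
```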
- the transducer array on the surface 106 may be activated such that the transducer array on surface 106 is being used to actively capture imaging data, while the transducer arrays on the other surfaces, e.g., any of surfaces 101-105, may be deactivated.
- a transducer array may be activated based on a determination that the transducer array is in contact with the surface of the body of a patient.
- the system may determine that the transducer array is or is not in contact with the body of the patient based on spectral analysis (e.g., in “k-space”) of the received signals in the region of focus, by contact force detection or other methods.
- the system may determine that the transducer array is in contact with the body of the patient.
- the transducer array may be deactivated based on a determination that the transducer array is not in contact with the surface of the body of the patient.
- the probe and/or robotic positioning system may include a force sensor configured to determine the magnitude of force exerted between the transducer array and the body of the patient. If the magnitude of the force is above a threshold, e.g., above zero (within a tolerance), the system may determine that the transducer array is in contact with the body of the patient and, therefore, should be activated. However, if the magnitude of the force is zero (within a tolerance), the system may determine that the transducer array is not in contact with the body of the patient and, therefore, should be deactivated. Based on the determination of whether a given transducer array should be activated and/or deactivated, the system may activate and/or deactivate the respective transducer arrays. By de-activating the transducer arrays that are not being used to capture image data, resources, e.g., power, are saved, as compared to having all transducer arrays of the probe in an active state even when not in use.
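- a minimal sketch of this contact-gated activation, assuming a hypothetical force-sensor interface and an illustrative tolerance value:

```python
# Hypothetical sketch of contact-based transducer array activation (illustrative only).
FORCE_TOLERANCE_N = 0.05  # readings within this tolerance of zero count as "no contact"

def update_array_states(arrays, force_sensor):
    """Activate arrays in contact with the patient's body; deactivate the rest."""
    for array in arrays:
        force = force_sensor.read(array.surface_id)  # contact force magnitude (N)
        in_contact = force > FORCE_TOLERANCE_N
        if in_contact and not array.active:
            array.activate()      # begin transmitting/receiving
        elif not in_contact and array.active:
            array.deactivate()    # save power while off the body
```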
- Rectangular apertures are common for 1D arrays where imaging is done in a single plane and elevation focusing is accomplished with an acoustic lens.
- Matrix array ultrasound transducers are able to steer the ultrasound beam beyond a single plane, which allows them to create volumetric images (also referred to as “volumetric ultrasound datasets”) without moving the probe.
- Rectangular matrix array apertures with unequal aperture dimensions, are useful in that they have an extended size in one direction, thereby enhancing image quality and field of view in the azimuthal plane, whilst maintaining some of the benefits of a smaller probe footprint.
- Square or cylindrical apertures are useful in matrix array imaging in that they enable more isotropic resolution in the azimuthal and elevation direction and reduce the dependence of the images captured on the orientation of the probe.
- Convex apertures are useful because they are easier to manipulate (e.g., to tilt, rock and compress tissue) whilst in contact with a patient’s body. They also may reduce the effect of reverberation artifacts on image quality.
- Concave apertures are useful in that they conform better to convex surfaces on a patient’s body. They also make it easier to apply variable amounts of compression over the surface of the probe, which can be helpful in some circumstances (e.g., when part of the probe is located subcostal and part of the probe is located above the rib cage).
- Figure 1 illustrates the probe 100 as being a substantially rectangular prism.
- the probe can be of any shape, size, and/or configuration.
- the probe may be a cube, spherical, cylindrical, a three-dimensional polygon, or the like.
- Probes can be made in forms that include multiple apertures of different shapes, sizes, and curvatures, or can contain apertures with variable curvatures.
- the various shapes, sizes, and/or curvatures of the probe enables an imaging system to optimize the aperture and footprint shape, size, curvature and frequency for a particular circumstance by rotating the probe.
- different apertures may operate at different transmitting and receiving frequencies.
- the imaging system can rotate the probe to choose the aperture with optimal frequency for a particular situation.
- the area taken up by the probe during rotation may be referred to, for example, as the working area.
- Figures 2A-19 and 26-28 illustrate additional shapes and/or configurations of the probe.
- Figures 2A and 2B illustrate a probe 200A in which at least one surface 203A includes a radius of curvature.
- the radius of curvature of surface 203A is convex.
- the radius of curvature may be concave and/or complex (e.g., a combination of concave and convex curves).
- two or more of the surfaces 201A-206A may include a transducer array such that the surface is configured to capture imaging data, e.g., ultrasound imaging data.
- At least two of the surfaces 201A-206A that are transverse to each other include transducer arrays such that the probe 200A has a larger aperture, e.g., surfaces 201A, 206A, and a smaller aperture, e.g., surfaces 202A-205A.
- Figures 3A and 3B illustrate a probe 200B in which two surfaces 203B, 205B include a radius of curvature.
- two or more of the surfaces 201B-206B may include a transducer array such that the surface is configured to capture imaging data, e.g., ultrasound imaging data.
- At least two of the surfaces 201B-206B that are transverse to each other include transducer arrays such that the probe 200B has a larger aperture, e.g., surfaces 201B, 206B, and a smaller aperture, e.g., surfaces 202B-205B.
- the curved surfaces, e.g., surfaces 203B and 205B are transverse to each other. For example, a tangent line taken from surface 203B is transverse to a tangent line of surface 205B.
- Figures 4A and 4B illustrate a probe 200C in which at least one surface 205C includes a radius of curvature.
- the radius of curvature of surface 205C is concave.
- the radius of curvature may be convex and/or complex (e.g., a combination of concave and convex curves).
- two or more of the surfaces 201C-206C may include a transducer array such that the surface is configured to capture imaging data, e.g., ultrasound imaging data.
- At least two of the surfaces 201C-206C that are transverse to each other include transducer arrays such that the probe 200C has a larger aperture, e.g., surfaces 201C, 206C, and a smaller aperture, e.g., surfaces 202C-205C.
- Figures 5A and 5B illustrate a probe 200D in which at least one surface 205D includes a radius of curvature.
- Probe 200D is similar to probe 200C except surface 202D creates an acute angle at the intersection of surface 202D and surface 204D and an obtuse angle at the intersection of surface 202D and surface 203D.
- the radius of curvature of surface 203D is concave. However, the radius of curvature may be convex and/or complex (e.g., a combination of concave and convex curves).
- two or more of the surfaces 201D-206D may include a transducer array such that the surface is configured to capture imaging data, e.g., ultrasound imaging data.
- At least two of the surfaces 201D-206D that are transverse to each other include transducer arrays such that the probe 200D has a larger aperture, e.g., surfaces 201D, 206D, and a smaller aperture, e.g., surfaces 202D-205D.
- Figures 6A and 6B illustrate a probe 200E in which at least one surface 202E includes a radius of curvature.
- probe 200E substantially corresponds to a cylinder.
- Surface 202E is substantially perpendicular to surfaces 201E and 206E.
- Either or both surfaces 201E and 206E include a transducer array.
- Surface 202E includes a transducer array such that at least two surfaces that are transverse to each other include a transducer array.
- surface 202E and 206E both include a transducer array as surface 202E is transverse to surface 206E.
- Surface 202E and surface 201E both include a transducer array as surface 202E is transverse to surface 201E.
- surfaces 201E and/or 206E may include a radius of curvature (e.g., concave, convex, and/or complex).
- surfaces that are transverse to one another are determined based on a line tangent from a first surface being transverse to a line tangent from another surface.
- Figures 7A and 7B illustrate a probe 200F in which at least one surface 202F includes a radius of curvature.
- Probe 200F is similar to probe 200E except instead of surfaces 201E, 206E being substantially circular, surfaces 201F, 206F are oblong, e.g., oval.
- Surface 202F is substantially perpendicular to surfaces 201F and 206F. Either or both surfaces 201F and 206F include a transducer array.
- Surface 202F includes a transducer array such that at least two surfaces that are transverse to each other include a transducer array.
- surfaces 201F and/or 206F may include a radius of curvature (e.g., concave, convex, and/or complex).
- surfaces that are transverse to one another are determined based on a line tangent from a first surface being transverse to a line tangent from another surface.
- Figures 8, 9, and 11 illustrate a top planar view of example probes 200G, 200H, 200J in which at least one surface 205G, 205H, 205J includes a radius of curvature and at least one other surface, e.g., surface 203G, 203H, 203J, is substantially planar.
- the radii of curvature of surfaces 205G, 205H, 205J may be consistent throughout the surface or may change along the surface 205G, 205H, 205J.
- At least two surfaces of the probe 200G, 200H, 200J that are transverse to one another include transducer arrays.
- Figure 10 illustrates a top planar view of an example probe 200I in which two surfaces 205I, 202I include a radius of curvature and at least two other surfaces, e.g., surfaces 203I, 204I, are substantially planar.
- the radii of curvature of surfaces 205I, 202I may be consistent throughout the surface or may change along the surface 205I, 202I.
- At least two surfaces of the probe 200I that are transverse to one another include transducer arrays.
- Figures 12A, 12B, and 16 illustrate example probes 200K, 200O.
- Probes 200K, 200O substantially correspond to a triangular prism.
- surfaces 201K, 201O have a shape substantially corresponding to a triangle.
- the z-axis, e.g., height, when extended causes the probe to have a shape substantially corresponding to a triangular prism.
- Surfaces 203K-205K, 203O-205O may include transducer arrays having a smaller aperture as compared to the transducer arrays of surfaces 201K, 201O and/or 206K, 206O.
- At least two transverse surfaces of the probe 200K, 200O include transducer arrays, in which one surface is smaller than the other such that one surface has a smaller aperture than the other, larger surface.
- Figures 13A and 13B illustrate another example probe 200L. Similar to probe 200K, probe 200L substantially corresponds to a triangular prism. As compared to probe 200K, probe 200L has rounded points and/or surfaces, e.g., the intersections of surfaces 203L-205L are curved as compared to coming to a point. Surfaces 203L-205L may include transducer arrays having a smaller aperture as compared to the transducer arrays of surfaces 201L and/or 206L. At least two transverse surfaces of the probe 200L include transducer arrays, in which one surface is smaller than the other such that one surface has a smaller aperture than the other, larger surface.
- the rounded surfaces 207L-209L may be considered separate surfaces such that the rounded surfaces 207L-209L include a separate transducer array.
- the separate transducer array may be, for example, a distinct transducer array as compared to the transducer arrays on surfaces 203L-205L. Including transducer arrays on any of the rounded surfaces 207L-209L in addition to transducer arrays on any of the surfaces 203L-205L allows for the probe 200L to be rotated while capturing imaging data from the transducer arrays without losing contact with the body of the patient.
- surface 203L may be used to capture imaging data.
- the probe 200L may be rotated such that surface 203L maintains at least some contact with the surface of the patient until surface 209L is in contact with the surface of the patient. Accordingly, the use of probe 200L can negate having to remove the probe from the surface of the patient and, therefore, avoid an interruption in capturing imaging data.
- Figures 14, 15, and 17 illustrate example probes 200M, 200N, 200P, respectively.
- Probes 200M, 200N, 200P are similar to probes 200K, 200L in which the probes 200M, 200N, 200P substantially correspond to a triangular prism.
- surfaces 201M, 201N, 201P have a shape substantially corresponding to a triangle.
- probes 200M, 200N, 200P include various radii of curvature where surfaces 203M-205M, 203N-205N, 203P-205P intersect.
- Figures 18 and 19 illustrate example probes 200Q, 200R in which the surface 201Q, 201R defines a polygonal shape.
- One or more edges of the surface 201Q, 201R may include a radius of curvature.
- surface 203R includes a convex curve.
- the probe shapes 200Q, 200R are not limited to the examples shown; the probes may include any number of surfaces, e.g., surfaces 201Q-206Q, 201R-206R.
- Figures 26 and 27 illustrate example probes 200S, 200T in which two or more edges of surface 201S, 201T include a radius of curvature.
- the radius of curvature may be used to allow for the probe 200S, 200T to correspond to the shape of the surface of the body of the patient.
- surfaces 205S, 205T have a convex radius of curvature while surfaces 202S, 202T have a concave radius of curvature.
- Figure 27 illustrates an example probe 200T in which a third surface, e.g., surface 204T, includes a radius of curvature.
- the radius of curvature of surface 204T may be concave.
- Figure 28 illustrates an example probe 200U in which the surface 201U defines a polygonal shape.
- surfaces 202U1, 202U2 may be angled towards surface 205U such that surface 201U appears as if a portion of a rectangle is cut out, or removed.
- the probe has a thickness along the z-axis. The thickness may be consistent and/or varying throughout the length, e.g., the y-axis, and the width, e.g., the x-axis.
- surface 201, e.g., the surface shown in the planar view, and the opposing surface may be planar and/or include a radius of curvature.
- the thickness may vary throughout the length and/or width.
- the lengths, widths, and/or thicknesses may vary such that they are not consistent and/or equal.
- the radii of curvature for each surface of the probe may be different such that some surfaces are curved while others are substantially straight.
- some radii of curvature may be a concave curve, some may be a convex curve, and some may be a complex curve, e.g., a combination of concave and convex curves, or the like.
- the example probe shapes and/or configurations shown in Figures 1-19 and 26-28 are, therefore, just some examples of what shape and/or configuration the probe may be and are not intended to be limiting.
- the probes discussed above and herein include at least two apertures that are transverse to one another.
- Apertures that are transverse to one another include, for example, apertures in which the angle between average face normal vectors of each aperture is less than 180 degrees.
- apertures that are transverse to one another are apertures in which the average normal vectors from the surfaces associated with the aperture are within a range of 30 degrees to 150 degrees.
- the probe can be used at a first orientation using the first aperture and a second orientation using the second aperture. Rotating the probe from the first orientation to the second orientation includes a rotation less than 180 degrees from one orientation to the other.
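- as a concrete illustration of this transversality test (an assumed helper, not part of this disclosure), the angle between average face normal vectors can be checked against the 30-150 degree range:

```python
import numpy as np

def apertures_are_transverse(normal_a, normal_b, lo_deg=30.0, hi_deg=150.0):
    """True if the angle between two average face normals lies in [lo_deg, hi_deg]."""
    a = np.asarray(normal_a, dtype=float)
    b = np.asarray(normal_b, dtype=float)
    cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return lo_deg <= angle <= hi_deg

# Perpendicular faces of a rectangular probe (90 degrees) are transverse:
print(apertures_are_transverse([1.0, 0.0, 0.0], [0.0, 0.0, 1.0]))  # True
```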
- a 2D array, e.g., a matrix array, and a 1D array may be integrated into a single probe.
- a first aperture may be a 2D array and a second aperture may be a 1D array.
- the apertures may include MEMS transducers, single crystal piezoelectric transducers, or the like.
- one aperture of the probe may include MEMS transducers and another aperture of the probe may include 1D arrays or single crystal piezoelectric transducers.
- the larger apertures may include MEMS transducers and the smaller apertures may include 1D arrays or single crystal piezoelectric transducers.
- the probe may include a surface, e.g., aperture or transducer array, with a complex curvature, variable curvature, concave curvature, convex curvature, or no curvature.
- Complex curvature corresponds to a curvature that includes both concave and convex curvatures.
- Variable curvatures correspond to a curvature in which the radius of curvature changes such that one part of the aperture is more concave or convex than another part of the aperture.
- the probe may include two or more apertures, e.g., transducer arrays, on different surfaces of the probe.
- the apertures are each within a different plane, or surface, of the probe.
- the apertures may be on surfaces with different normal vectors as compared to the other faces.
- Each of the apertures may include different curvatures or no curvature.
- any intersection of surfaces of the probe can include a distinct transducer array, e.g., separate from the adjacent surfaces.
- the intersection of surfaces may form its own surface between the other surfaces, may be a sub-portion of the adjacent surfaces, or the like.
- Such an arrangement, e.g., having a transducer array at the intersection of surfaces, as described with respect to Figures 13A and 13B, allows for imaging data to be continuously captured during the rotation of the probe.
- the probe can be rotated from one transducer array to another transducer array without lifting the probe off the surface of the patient.
- previous probes having two transducer arrays required the probe to be removed from the body of the patient and rotated 180 degrees before imaging data could be captured again.
- Figures 20A-20F illustrate an example use of a probe for capturing imaging data.
- the probe 2000 may include at least two transducer arrays, each on a different surface of the probe 2000.
- the surfaces having the transducer arrays are transverse to one another.
- the surface with the larger surface area and, therefore, the larger transducer array and aperture may, for example, capture trans-abdominal imaging data, as shown.
- the large aperture may capture imaging through multiple intercostal and subcostal spaces.
- in Figure 20B, the large aperture may capture imaging data through a large subcostal window.
- Figures 20C-20F illustrate an example use of a smaller aperture of the probe 2000 being used to capture trans-abdominal imaging data.
- the smaller aperture may, in some examples, be a 2D, e.g., matrix, array.
- Figure 20C illustrates the smaller aperture of probe 2000 being used to capture imaging data whilst substantially compressing the intervening tissue.
- Figures 20D-20F illustrate the smaller, convex aperture of probe 2000 being used to capture subcostal acute angle imaging data (Figure 20D), intercostal imaging data (Figure 20E), and high compression imaging data (Figure 20F).
- the probe 2000 in Figures 20A-F may have a shape similar to the shape of probe 200A in Figures 2A and 2B.
- Figures 21 A-21C illustrate another example use of a probe for capturing imaging data.
- the probe 2100 may include at least two transducer arrays, each on a different surface of the probe 2100.
- the surfaces having the transducer arrays are transverse to one another.
- the surface with the larger surface area and, therefore, the larger transducer array and aperture may, for example, capture trans-abdominal imaging data, as shown.
- One of the surfaces may have a convex surface.
- Figures 21A-21C illustrate the convex aperture being used to capture imaging data.
- the probe 2100 in Figures 21A-21C may have a shape similar to the shape of probe 200U in Figure 28.
- Figures 22A-22F illustrate another example use of a probe for capturing imaging data.
- the probe 2200 may include at least two transducer arrays, each on a different surface of the probe 2200. The surfaces having the transducer arrays are transverse to one another. At least one of the apertures may be used to capture trans-abdominal imaging data, as shown. For example, a long rectangular surface of the probe 2200 may be used to capture the imaging data.
- Figure 22A illustrates the aperture capturing imaging data through a large subcostal window.
- Figure 22B illustrates the aperture capturing imaging data through a large subcostal window at an angle relative to the patient’s body.
- Figure 22C illustrates the aperture capturing imaging data through multiple intercostal and subcostal spaces.
- Figure 22D illustrates the aperture capturing imaging data through a large subcostal acoustic window.
- Figure 22E illustrates an aperture capturing parasternal views of the heart.
- Figure 22F illustrates the aperture capturing apical views of the heart.
- the parasternal and apical views may be used to capture imaging data associated with the heart, e.g., cardiac imaging data.
- the probe 2200 in Figures 22A-F may have a shape similar to the shape of probe 100 in Figure 1.
- FIGS 23A-23C illustrate another example use of a probe for capturing imaging data.
- the probe 2300 may include at least two transducer arrays, each on a different surface of the probe 2300.
- the surfaces having the transducer arrays are transverse to one another.
- the different transducer arrays may have different apertures.
- the apertures may be determined based on the size of the transducer array.
- the size of the transducer array may correspond to the surface area of the surface of the probe.
- a smaller transducer array of the probe 2300, e.g., a transducer array on a surface of the probe 2300 that has a surface area smaller than the largest surface of the probe, may be used to capture cardiac imaging data.
- Figure 23A illustrates the aperture capturing apical views of imaging data.
- Figure 23B illustrates the aperture capturing parasternal views of imaging data.
- Figure 23C illustrates the aperture capturing subcostal views of imaging data.
- the smaller aperture may be used to capture additional imaging data after using a larger aperture to capture imaging data, such as in Figures 22A-22F.
- the probe 2300 in Figures 23A-23C may have a shape similar to the shape of probe 100 in Figure 1.
- FIGS 24A-25C illustrate another example use of a probe for capturing imaging data.
- the probe 2400, 2500 may include at least two transducer arrays, each on a different surface of the probe.
- the surfaces having the transducer arrays are transverse to one another.
- the different transducer arrays may have different apertures.
- the apertures may be determined based on the size of the transducer array.
- At least one surface having a transducer array, e.g., an aperture, may be curved. The curved surface may allow for the probe 2400 to substantially correspond to the curvature of the body of the patient.
- Figure 24A illustrates the curved aperture capturing apical views of imaging data.
- the aperture may capture imaging data through multiple intercostal spaces and/or around obstructions, such as gas within the lung.
- Figure 24B illustrates the curved aperture capturing parasternal and subcostal views of imaging data.
- the aperture may capture imaging data through multiple intercostal spaces.
- Figure 24C illustrates the curved aperture capturing cardiac imaging data, which may include parasternal and apical views. Examples of probes with curved apertures can be found, for example, in Figures 26 and 27.
- Figures 25A-25C illustrate example uses of the curved aperture capturing obstetric imaging data.
- the probe 2500 may be similar in shape to probe 200S and/or 200T shown in Figures 26 and 27.
- Figure 25A illustrates the large, curved aperture in an inferior-superior position.
- Figure 25B illustrates the large, curved aperture in a medial-lateral position.
- Figure 25C illustrates the smaller aperture of probe 2500 being used to capture imaging data.
- the imaging system 300 may determine that additional imaging data is required.
- the imaging system 300 may automatically position probe 2500 and/or provide outputs corresponding to instructions to reposition the probe to capture the additional imaging data using the smaller aperture of probe 2500.
- Imaging system 300 can include server computing device 303, client computing device 333, robotic positioning system 323, storage system 350, and probe 100.
- the client computing device 333 may be a full-sized personal computing device and/or a mobile computing device capable of wirelessly exchanging data with a server over a network, such as the Internet.
- client computing device 333 may be a mobile phone or device, such as a smartphone, wireless-enabled PDA, a tablet PC, laptop, etc.
- client computing device 333 may be a computer or computing device coupled to a patient imaging device, such as probe 100, or any other patient imaging device, e.g., an x-ray machine, magnetic resonance imaging (“MRI”) machine, computerized tomography (“CT”) scanner, etc.
- Client computing device 333 may include one or more processors 332, memory 334, data 336, instructions 338, inputs 320, outputs 322, communications interface 324, and other components typically present in general purpose computing devices.
- Memory 334 of client computing device 333 can store information accessible by the one or more processors 332, including instructions 338 that can be executed by the processors 332.
- Memory can also include data 336 that can be retrieved, manipulated, or stored by the processor 332.
- the memory 334 can be of any non-transitory type capable of storing information accessible by the processor, such as a hard-drive, memory card, ROM, RAM, DVD, CD-ROM, write-capable, and read-only memories.
- the instructions 338 can be any set of instructions to be executed directly, such as machine code, or indirectly, such as scripts, by the one or more processors 332.
- the terms “instructions,” “application,” “steps,” and “programs” can be used interchangeably herein.
- the instructions can be stored in object code format for direct processing by a processor, or in any other computing device language including scripts or collections of independent source code modules that are interpreted on demand or compiled in advance. Functions, methods, and routines of the instructions are explained in more detail below.
- Data 336 may be retrieved, stored, or modified by the one or more processors 332 in accordance with the instructions 338.
- the data can be stored in computer registers, in a relational database as a table having many different fields and records, or XML documents.
- the data can also be formatted in any computing device-readable format such as, but not limited to, binary values, ASCII or Unicode.
- the data can comprise any information sufficient to identify the relevant information, such as numbers, descriptive text, proprietary codes, pointers, references to data stored in other memories such as at other network locations, or information that is used by a function to calculate the relevant data.
- the one or more processors 332 can be any conventional processors, such as a commercially available CPU or GPU. Alternatively, the processors can be dedicated components such as an application specific integrated circuit (“ASIC”) or other hardware-based processor.
- Although Figure 29 functionally illustrates the processor 332, memory 334, and other elements of client computing device 333 as being within the same block, the processor, computer, computing device, or memory can comprise multiple processors, computers, computing devices, or memories that may or may not be stored within the same physical housing.
- the memory can be a hard drive or other storage media located in housings different from that of the client computing device 333. Accordingly, references to a processor, computer, computing device, or memory will be understood to include references to a collection of processors, computers, computing devices, or memories that may or may not operate in parallel.
- the inputs 320 may be, for example, a touchscreen, keyboard, mouse, microphone, etc.
- the inputs 320 may receive user interaction with patient imaging data, a map of the patient, or the like.
- the inputs 320 may receive a user input corresponding to a selection of a location on the map of the body of the patient for imaging.
- the client computing device, or another component of the imaging system 300, e.g., server computing device 303 may transmit instructions to the robotic positioning system 323 to position probe 100 to capture the imaging data.
- the outputs 322 may include, for example, a display, such as a monitor having a screen, a projector, a television, etc. that is capable of providing a visual output to a user.
- the visual output may electronically display information to a user via a user interface, such as a graphical user interface.
- the displays may electronically display the image data captured by probe 100, instructions for positioning probe 100, a body map of the patient generated based on the data captured by sensors 3340, or the like.
- the outputs 322 may include audio outputs, such as speakers, which are capable of providing an audible output to the user.
- the outputs 322 may include haptic outputs, such as a haptic actuator, configured to provide haptic outputs to the user.
- the imaging system 300 may include any number of client computing devices 333, with each being at a different node of network 340.
- a client computing device may be at a location where the patient imaging data is captured while another client computing device is at a physician's office being used to interact with the patient imaging data, receive inputs for what imaging data to capture, or the like.
- the client computing devices 333, server computing device 303, and robotic positioning system 323 can be at different nodes of a network 340 and capable of directly and indirectly communicating with other nodes of network 340. Although only a few computing devices are depicted in Figure 3, it should be appreciated that a typical system can include many connected computing devices, with each different computing device being at a different node of the network 340.
- the network 340 and intervening nodes described herein can be interconnected using various protocols and systems, such that the network can be part of the Internet, World Wide Web, specific intranets, wide area networks, or local networks.
- the network can utilize standard communications protocols, such as Ethernet, WiFi and HTTP, protocols that are proprietary to one or more companies, and various combinations of the foregoing.
- server computing device 303, client computing device 333, and robotic positioning system 323 may include a communications interface 314, 324, 335 or web servers, capable of communicating with storage system 350, as well as other computing devices via network 340.
- server computing devices 303 may use network 340 to transmit and present information to a user on an output 322 of the client computing device 333.
- the imaging system 300 may include one or more server computing devices 303.
- Server computing device 303 may include processors 302, memory 304, data 306, and instructions 308 that operate in the same or similar fashion as the processors 332, memory 334, data 336, and instructions 338 of client computing device 333.
- server computing device 303 may operate as a load-balanced server farm, distributed system, etc.
- Storage system 350 can be any type of computerized storage capable of storing information accessible by the server computing device 303 and/or client computing device 333, such as a hard-drive, memory card, ROM, RAM, DVD, CD-ROM, write-capable, and read-only memories.
- storage system 350 may include a distributed storage system where data is stored on a plurality of different storage devices which may be physically located at the same or different geographic locations.
- Storage system 350 may be connected to server computing device 303, client computing device 333, and robotic positioning system 323 via network 340 as shown in Figure 29 and/or may be directly connected thereto (not shown).
- the probe 100 may be in communication with the client computing device 333, server computing device 303, and/or robotic positioning system 323 via network 340.
- the ultrasound signals transmitted and received by the probe 100 can be processed by one or more components of the imaging system 300, e.g., client computing device 333, server computing device 303, and/or robotic positioning system 323.
- the imaging system 300 may be configured to receive the imaging data from the probe 100, whether captured by the first or second surface, and generate a representation of the target anatomy.
- the representation of the target anatomy may be a two-dimensional (“2D”) or three-dimensional (“3D”) representation of the target anatomy.
- the imaging system 300 may be configured to combine the imaging data captured by two different surfaces of the probe, collected at two different times, into a representation of the target anatomy.
- the imaging system 300 may be configured to combine imaging data captured by the probe 100 at two different image capture locations.
- the imaging data from the apertures of the probe 100 may be aligned.
- image based registration may be used to align and/or combine the imaging data.
- Image based registration includes, for example, an optimization of a registration metric determined based on imaging data and/or features extracted from the imaging data. The features may be identified after the imaging data is segmented.
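- To make the registration step concrete, the following is a minimal sketch, not the disclosed implementation, of image-based registration: it exhaustively searches small integer translations and keeps the shift that maximizes a normalized cross-correlation registration metric. The function names, the translation-only search, and the choice of metric are illustrative assumptions.

```python
# Illustrative sketch of image-based registration: exhaustively search
# integer translations and keep the one maximizing normalized
# cross-correlation (the registration metric being optimized).
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two equally sized images."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def register_translation(fixed: np.ndarray, moving: np.ndarray, max_shift: int = 8):
    """Return the (dy, dx) shift of `moving` that best matches `fixed`."""
    best_score, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = ncc(fixed, shifted)
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift, best_score
```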
- vascular flow imaging data may be used to register the imaging data captured by the different apertures of the probe.
- the vascular flow imaging data may be captured using pulsed wave Doppler, blood speckle tracking, or the like.
- the system may analyze the vascular flow imaging data to identify vascular features, such as veins, arteries, or the like. The identified vascular features may be provided as input to one or more models configured to register the imaging data captured by the different apertures of the probe.
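- As an illustration of feature-based registration, the sketch below assumes that corresponding vascular landmarks (e.g., branch points extracted from the vascular flow imaging data) have already been matched between the two apertures, and recovers the rigid rotation and translation between them with the Kabsch algorithm. The correspondence step and all names are assumptions for illustration.

```python
import numpy as np

def rigid_align(src: np.ndarray, dst: np.ndarray):
    """Kabsch algorithm: rigid rotation R and translation t mapping src -> dst.
    src, dst: (N, 3) arrays of corresponding vascular landmark points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```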
- high-quality two-dimensional (2D) imaging data may be obtained in addition to larger field-of-view three-dimensional (3D) imaging data.
- the 2D imaging data may be, for example, ultrasound images.
- the system may provide for output of the 2D image such that the 2D imaging data can be analyzed by a user.
- the imaging data used to generate the 2D images may contain a 2D array of pixel values, a set of arrays of pixel values, or the like. Each array of pixel values in the set may correspond to imaging data acquired at different times, with different transducer arrays, or the like.
- the 2D image generated using the imaging data may be a B-mode image, color flow image, or the like.
- the images, e.g., images provided for output may be static images or video (“cine”) clips illustrating an image changing in time.
- the 3D imaging data may be, for example, imaging data that contains a 3D array of pixel values or a set of 3D arrays of pixel values (e.g., “4D data”). Each array of pixel values in the set may correspond to imaging data acquired at different times, with different transducer arrays, or the like.
- the 3D imaging data typically has a larger field of view than equivalent 2D imaging data and, therefore, can be used to improve the accuracy of the image registration models.
- two different 3D imaging datasets may substantially overlap and the regions in which the datasets overlap may be used to train and/or improve the accuracy of the image registration models.
- the 3D imaging data may be aligned with the 2D imaging data.
- the system may rotate and/or translate the imaging data captured by the different apertures of the probe based on data from the robotic positioning system.
- the sensors 3340 of the robotic positioning system may be configured to determine the pose, e.g., the location and/or orientation, at which the imaging data was captured by each of the apertures. Based on the location and/or orientation associated with the imaging data, the imaging system 300 can rotate and/or translate the imaging data from each of the apertures such that the imaging data substantially aligns, e.g., has corresponding orientations.
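- A minimal sketch of this pose-based alignment, assuming each aperture's pose is available from the robotic positioning system as a rotation matrix and translation vector: imaging data from each aperture is mapped through its pose into a shared frame so the datasets have corresponding orientations. Names are illustrative.

```python
import numpy as np

def pose_to_matrix(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def to_common_frame(points: np.ndarray, probe_pose: np.ndarray) -> np.ndarray:
    """Map (N, 3) points from an aperture's frame into the common (room) frame."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (probe_pose @ homog.T).T[:, :3]
```

Data from a first aperture with pose T_A and a second aperture with pose T_B, both mapped through `to_common_frame`, end up in one coordinate system and can then be rotated and/or translated into substantial alignment.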
- the imaging system 300 may compensate for the motion of the patient when aligning the imaging data.
- the imaging system 300 may track the respiratory cycle of the patient when imaging data is being captured.
- the motion of the chest wall during the respiratory cycle may be determined, e.g., measured.
- the motion of the anatomy, e.g., lungs, ribcage, diaphragm, etc., that occurs during human respiration affects the position and/or orientation of abdominal organs, such as the liver and kidneys, as well as the acoustic windows used in imaging.
- the respiratory cycle can also affect the shape of these organs, which are deformable.
- capturing imaging data at substantially corresponding phases of the respiratory cycle can ensure that the organs and/or acoustic windows align. Moreover, capturing imaging data at substantially corresponding phases of the respiratory cycle improves the degree to which tomographic images, e.g., slices taken from the imaging data, can be aligned to one another as the underlying anatomical structures are in a substantially similar state. Tracking the motion of the chest wall, or other anatomical features, during the respiratory cycle allows the system to determine when to capture imaging data such that the imaging data is captured during substantially the same phases of the respiratory cycle.
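- One simple way to approximate this respiratory gating, sketched below under the assumption that chest-wall displacement is available as a sampled signal: normalize the displacement between end-exhale and end-inhale and trigger capture only when it falls near the target phase. Real systems may use richer phase models; this amplitude-based gate is illustrative only.

```python
import numpy as np

def gate_capture(chest_signal: np.ndarray, target_phase: float, tol: float = 0.05):
    """Return sample indices at which capture may be triggered.
    chest_signal: chest-wall displacement over time; 'phase' here is the
    displacement normalized to 0 (end-exhale) .. 1 (end-inhale), a common
    simplification of true respiratory phase."""
    lo, hi = chest_signal.min(), chest_signal.max()
    phase = (chest_signal - lo) / (hi - lo + 1e-9)
    return np.flatnonzero(np.abs(phase - target_phase) < tol)
```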
- the representation of the target anatomy may be provided for output via output 322 on the client computing device 333.
- robotic positioning system 323 and/or probe 100 may include one or more outputs for outputting the representation of the target anatomy.
- the imaging system 300 may include one or more robotic positioning systems 323.
- Robotic positioning system 323 may include processors 322, memory 324, data 326, and instructions 328 that operate in the same or similar fashion as the processors 332, memory 334, data 336, and instructions 338 of client computing device 333.
- the robotic positioning system 323 may include one or more sensors 3340.
- the sensors 3340 may include, for example, optical cameras, stereo cameras, LIDAR sensors, or the like.
- the sensors 3340 may be configured to capture mapping data for generating a map of the body of the patient. For example, the sensors 3340 may be positioned on a vertical column coupled to and/or associated with the robotic arm.
- the sensors 3340 may be positioned at one or more links or pivoting joints of the robotic arm. In some examples, the sensors 3340 are positioned at known locations within the imaging room. The sensors 3340 may be positioned such that the patient is within at least a portion of the field of view of the sensors 3340.
- the imaging system 300 may be configured to receive and process the mapping data from the sensors 3340 to generate a map of the body of the patient being imaged.
- the mapping data may be captured in real-time by the sensors 3340 such that the map of the body of the patient can be generated and/or updated in real-time.
- the map may be, in some examples, a two-dimensional and/or three-dimensional map of the patient. In some examples, one or more locations of known anatomical features may be identified on the map.
- indicators may be provided on or relative to anatomical features of the patient, such as the liver, kidneys, heart, lungs, appendix, or the like.
- the identified anatomical features may be determined based on the imaging procedure.
- the two-dimensional and/or three-dimensional map and/or indicators may be provided for output via one or more outputs, e.g., output 322 on client computing device 333.
- Figure 30A illustrates an example robotic positioning system, such as the robotic positioning system 323 of Figure 29.
- the robotic positioning system may include a robotic arm 442 with one or more joints 444.
- the joints 444 may define sections of the robotic arm 442 such that the robotic arm 442 can be adjusted, positioned, moved, etc. with a plurality of degrees of freedom to place the probe 100 at any position on the body 440 of the patient.
- the probe 100 may be removably coupled to the robotic arm 442 via a connection mechanism 446.
- the robotic arm 442 may be moved to a given position based on instructions received from client computing device 333 and/or robotic positioning system 323.
- sensor 3340 may capture mapping data such that a map of the body 440 of the patient can be generated.
- maps, such as a 3D map, of the surface of the body 440 of the patient may be generated by one or more image processing algorithms based on the data captured by sensors 3340, camera(s) 550, and/or optical sensors 552 (Figure 31).
- stereovision depth cameras may project a light pattern onto the imaging bed 452, imaging room 448 and patient.
- the system may capture the light pattern from two different perspectives. For example, the system may capture an image of the light pattern.
- the images of the light patterns may be used to estimate the depth of the various surfaces, e.g., imaging bed 452, surface of the body 440 of the patient, etc.
- the lidar systems may scan a laser over the field of view, e.g., imaging bed 452 and/or imaging room 448. The system may measure the time of flight of the laser beam to obtain a map of depth across the field.
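- The time-of-flight depth computation is straightforward; a minimal sketch, assuming the sensor reports the round-trip time of the laser pulse:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_depth(round_trip_seconds: float) -> float:
    """Depth from lidar time of flight: the pulse travels to the surface and
    back, so the range is half of (speed of light x round-trip time)."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# Example: a 10 ns round trip corresponds to a surface about 1.5 m away.
print(tof_depth(10e-9))  # ~1.499 m
```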
- multiple images from a single method or multiple images from different methods may be combined, e.g., optical imaging and/or lidar, to expand the field of view and/or to improve the resolution and accuracy of the 3D map.
- the sensors 3340 are positioned throughout imaging room 448, including on vertical column 450. While a plurality of sensors 3340 are shown in imaging room 448, any number of sensors, e.g., one or more, may be within imaging room 448, on a robot, such as on robotic arm 442, and/or on the robotic positioning system.
- the vertical column 450 may be part of the robotic positioning system 323.
- the vertical column 450, in conjunction with robotic arm 442, may include vision system sensors, force and/or torque sensors 554, joint angle encoders, or the like.
- the vision system sensors and/or joint angle encoders may be positioned on or within joints 444.
- the robotic positioning system 323 may include, in some examples, cameras 550, such as lidar depth cameras.
- the robotic positioning system 323 may include an optical tracker 552.
- the cameras 550 and/or optical trackers 552 may be configured to capture mapping data to generate a map of the surface of the body of the patient.
- the force and/or torque sensors 554 may be configured to determine the amount of force being applied by the probe 100 on the surface of the body of the patient.
- the robotic positioning system 323 may be configured to adjust the position of the probe 100 to increase the amount of force, e.g., push harder down on the body of the patient, or reduce the amount of force, e.g., move the probe 100 away from the body of the patient.
- the generated map may be provided for output via a display, such as display 556 in Figure 31.
- the display may be an interactive display such that the display is configured to receive inputs.
- the display may be a touch screen such that the display is configured to receive touch inputs.
- the display may receive one or more touch inputs identifying areas on the map of the body 440 for capturing image data.
- the robotic positioning system may position the probe 100 on the surface of the body corresponding to the received inputs such that the probe 100 can capture imaging data at the selected locations.
- the display may be in communication with processors configured to receive user inputs via a mouse, keyboard, or other input mechanism. Similarly, based on the inputs received, the robotic positioning system may position the probe via robotic arm 442 to capture imaging data at the selected locations.
- the robotic positioning system may be configured to determine the location of the probe 100 with respect to the body 440 of the patient based on a tracking element associated with the probe.
- probe 100 may include a tracking element that is configured to be tracked by the sensors 3340.
- the sensors 3340 can determine the location of the probe 100 based on the tracking element.
- the sensors 3340 can determine the location of the probe 100 relative to the body 440 of the patient such that the robotic positioning system can move robotic arm 442 to place probe 100 at a given location on the body 440 of the patient.
- the robotic positioning system 323 may be configured to determine the location of the probe 100 based on a tracking system.
- the tracking system may be, for example, an optical or electromagnetic tracking system.
- the sensors 3340 may include optical or electromagnetic sensors for tracking the location of the probe 100.
- the robotic positioning system may be configured to determine the location of the probe based on encoders associated with the joints 444 of the robotic arm 442.
- the encoders may be configured to determine a position of the joint relative to a given location, e.g., a point on the vertical column 450, the surface of the imaging bed 452, or the like. Based on the location of a given joint 444 with respect to a given location, the robotic positioning system 323 can determine the location of the probe 100 relative to the given location.
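- As an illustration of how joint encoder readings yield the probe location, the sketch below chains one homogeneous transform per joint (encoder angle followed by a link offset) to obtain the probe pose in the base frame, e.g., relative to the vertical column. A real arm has joints about multiple axes; this planar simplification and all names are assumptions.

```python
import numpy as np

def rot_z(theta: float) -> np.ndarray:
    """4x4 homogeneous rotation about the local z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[0, 0], T[0, 1], T[1, 0], T[1, 1] = c, -s, s, c
    return T

def trans(x: float, y: float, z: float) -> np.ndarray:
    """4x4 homogeneous translation (e.g., a rigid link offset)."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def probe_pose(joint_angles, link_lengths) -> np.ndarray:
    """Chain per-joint transforms (encoder angle, then link offset) to get
    the probe-mount pose in the arm's base frame."""
    T = np.eye(4)
    for theta, length in zip(joint_angles, link_lengths):
        T = T @ rot_z(theta) @ trans(length, 0.0, 0.0)
    return T
```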
- the robotic positioning system 323 is configured to move the probe 100 to a given location on the body 440 to capture imaging data.
- the robotic positioning system 323 may position the probe 100 with a given orientation based on the target anatomy to be imaged.
- the storage system 350 may be configured to store a plurality of probe poses, e.g., positions and orientations, associated with target anatomy.
- the stored probe poses may be updated as part of a feedback loop as the robotic positioning system is used to collect imaging data. For example, if an initial probe pose does not collect adequate data for certain target anatomy, e.g., the kidney, the stored probe poses for imaging the target anatomy may be updated based on changes to the pose during the actual imaging.
- the robotic positioning system 323 may identify one or more landmarks when determining the position of the probe 100.
- the landmarks may be, for example, anatomical landmarks or fiducial landmarks.
- Anatomical landmarks may include, for example, typical bony or visual structures that are easily identified on the body of the patient.
- anatomical landmarks may include the umbilicus, xiphoid process, lips, eyes, fingers, toes, joints, or the like.
- Fiducial landmarks may include, for example, markers that are placed on the body 440 of the patient prior to the imaging procedure that can be easily identified by sensors 3340 and/or probe 100.
- the landmarks may be used by the robotic positioning system 323 to determine a location on the body of the patient corresponding to the target anatomy for imaging. For example, based on the location of the landmark and the target anatomy to be imaged, the robotic positioning system 323 may determine how far in a given direction, e.g., proximal, distal, anterior, posterior, medial, lateral, to move the probe relative to the landmark to reach the region of the target anatomy. The distance, e.g., how far, may be determined based on the coordinate system established by the sensors 3340.
- a new map of the body 440 may be generated, in real-time and/or automatically, based on the data captured by sensors 3340.
- the robotic positioning system 323 can then adjust the position of the probe 100 based on the updated map.
- the probe 100, once positioned on the body 440, may be configured to capture imaging data.
- the probe 100 includes transducer arrays on two or more transverse surfaces of the probe, e.g., surfaces 102, 104, 106.
- the transducer array on the surface may substantially correspond to the size of the surface.
- the transducer arrays may be configured to have different apertures.
- surface 104 may have a smaller transducer array and therefore a smaller aperture as compared to the transducer array and aperture associated with surface 102 and/or 106.
- Imaging data may be captured by probe 100 via a first surface 106 and, therefore, a first transducer array.
- the imaging system 300 may be configured to determine, based on image data captured by the first surface 106 of the probe 100, that additional imaging data is needed to generate a complete representation of the target anatomy.
- the imaging system 300 may receive imaging data captured by the first surface 106 of the probe 100.
- the first surface 106 of the probe 100 may be large such that its aperture extends across one or more bony or hard objects within the region of the target anatomy.
- the imaging data of the target anatomy captured by the first surface 106 of the probe may be blocked by the bony or hard objects.
- the imaging system 300 may detect that the imaging data of the target anatomy is not complete.
- the imaging system 300 may be configured to segment anatomical structures within the image data captured by the first surface 106 of probe 100.
- the imaging system 300 then classifies (also referred to as “identifies”) the anatomical structures within the segmented image data.
- the segmentation and classification may be performed, for example, by the imaging system 300 executing an artificial intelligence (“AI”) model, such as a machine learning (“ML”) model.
- segmentation may be performed with a convolutional neural network using a U-Net architecture or using a vision transformer. Segmentation algorithms may directly take volumetric images as an input or may segment slices of volumetric images individually.
- Image classification may be performed using a convolutional neural network, vision transformer or other architecture.
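- For illustration, a toy PyTorch encoder-decoder segmenter is sketched below as a stand-in for the U-Net or vision-transformer segmentation described above: it downsamples, extracts features, upsamples, and emits per-pixel class logits. The architecture is deliberately minimal and is not the disclosed model; a classification network would be analogous with a pooled classification head.

```python
import torch
import torch.nn as nn

class TinySegmenter(nn.Module):
    """Minimal encoder-decoder segmenter, illustrative only."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                         # downsample, U-Net style
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(32, n_classes, 3, padding=1),  # per-pixel class logits
        )

    def forward(self, x):
        return self.decode(self.encode(x))

# One grayscale ultrasound slice -> per-pixel anatomy labels.
logits = TinySegmenter()(torch.randn(1, 1, 128, 128))  # (1, 2, 128, 128)
labels = logits.argmax(dim=1)                           # segmentation mask
```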
- the model may be trained using data that has been previously identified as accurately segmented such that, once trained, the model can segment at least a portion of the anatomy, e.g., a particular organ, that is visible within the imaging data.
- the model may, additionally or alternatively, be trained using high-quality and/or low-quality images of the same anatomy.
- High-quality images may be images that are above a threshold quality, which may be measured in pixels, visible anatomy, or the like.
- Low-quality images may be images that are below the threshold quality, which may also be measured in pixels, visible anatomy, or the like.
- the image quality may be classified in other ways, e.g., by determining an amount of noise within the imaging data and comparing the amount of noise to a threshold.
- the model may be trained, based on the training data, to identify one or more regions in which the imaging data is incomplete, missing, of quality below a threshold, or the like.
- the segmented imaging data and/or the classified anatomical structures of the currently captured imaging data may be provided as input into an AI model trained to predict regions of missing image data.
- the AI model may be trained based on imaging data that has been annotated for training purposes.
- the AI model may be trained based on imaging data from other modalities, such as magnetic resonance imaging (“MRI”) and/or computed tomography (“CT”) imaging data.
- MRI and/or CT imaging data may be used as the baseline imaging for completeness of target anatomy such that the AI model may compare imaging data from probe 100 to the MRI and/or CT imaging data to determine whether additional imaging data is needed.
- the AI model may be trained as part of a feedback loop from imaging data captured by the robotic positioning system 323.
- robotic positioning system 323 may identify when additional imaging data is captured using one or more surfaces of the probe.
- the robotic positioning system 323 may note, or annotate, when the additional imaging data was captured, e.g., after a given probe surface was used, the target anatomy being imaged, etc., and how the additional imaging data was captured, e.g., whether a large aperture surface or a small aperture surface was used, the probe pose, or the like.
- the information obtained by the robotic positioning system 323 may be provided as training data to the AI model such that the AI model can be updated.
- the AI model may be trained to predict regions of missing data based on image quality.
- the model may be, for example, a segmentation algorithm.
- the segmentation of the imaging data may identify a region of imaging data that corresponds to the area of the target anatomy that has been captured with high quality.
- High imaging quality may be, for example, imaging data and/or regions of imaging data in which the contrast to noise ratio and/or the spatial resolution is above a threshold.
- High imaging quality includes imaging data and/or regions of imaging data that contains useful information for clinical diagnosis purposes.
- high quality imaging data may be based on clear boundaries of anatomical features and/or structures. Poor image quality may be, for example, regions of the image that do not contain useful information for clinical diagnosis.
- the poor image quality may be due to noise (e.g. clutter).
- the noise may be seen as regions of the imaging data with poor contrast to noise ratio.
- the poor image quality may be due to low intensity, the imaging data having above a threshold amount of artifacts, poor spatial resolution, or the like.
- the poor image quality may include unclear boundaries of anatomical features and/or structures. If the system determines that the imaging data includes at least some data that is of poor image quality, the model may determine that the imaging data is incomplete. In some examples, the model may determine that the imaging data is incomplete due to the target anatomy being outside the field of view of the probe. In some examples, an AI model may rate the image quality of the entire image.
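- A minimal sketch of the contrast-to-noise test described above, assuming two regions of interest (e.g., target anatomy versus background) have already been delineated; the threshold value is an arbitrary placeholder:

```python
import numpy as np

def contrast_to_noise(region_a: np.ndarray, region_b: np.ndarray) -> float:
    """CNR between two regions (e.g., target anatomy vs. background)."""
    return abs(region_a.mean() - region_b.mean()) / np.sqrt(
        region_a.var() + region_b.var() + 1e-12)

def is_poor_quality(region_a, region_b, cnr_threshold: float = 1.0) -> bool:
    """Flag the imaging data as poor quality when CNR falls below a threshold."""
    return contrast_to_noise(region_a, region_b) < cnr_threshold
```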
- the model may be a classification and/or feature detection algorithm.
- the classification and/or feature detection algorithm may be trained to detect features within an image.
- the features may include, for example, organs, structures within organs, vessels, obstructions, or the like.
- the model may be a slice plane inference algorithm trained to evaluate the proximity to standard slice planes and/or slice planes that have been previously captured.
- the model may be an image registration algorithm trained to align 3D imaging data with 3D imaging data, 3D imaging data with 2D imaging data, and/or 2D imaging data with 2D imaging data.
- the model may be an algorithm trained to predict optimal probe placement, aperture type, and/or imaging parameters.
- the model may be a real-time sonography algorithm that is trained to enable the system to manipulate the position and/or orientation of the probe, magnitude of force, imaging parameters, patient pose, respiratory phase, etc. to find and capture optimal views.
- a machine learning algorithm may be trained to predict the transformation (translation and/or orientation) between a given image and a target imaging plane.
- an algorithm may be trained to predict the location of a standard imaging plane given a set of images and probe poses at which those images were acquired.
- An algorithm may be used to optimize the quality and correctness of an imaging plane using Bayesian optimization, by sampling the state space, or by a combination of these methods with other methods.
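- As a simplified stand-in for the optimization described above, the sketch below samples the state space of candidate plane poses at random and keeps the best-scoring pose; a Bayesian optimizer would instead fit a surrogate model to past evaluations to choose each next sample. The six-parameter pose encoding and the scoring function are assumptions.

```python
import random

def optimize_plane(score_fn, n_samples: int = 200, seed: int = 0):
    """Sample candidate plane poses (here: 3 rotation angles + 3 offsets,
    normalized to [-1, 1]) and keep the best-scoring one."""
    rng = random.Random(seed)
    best_pose, best_score = None, float("-inf")
    for _ in range(n_samples):
        pose = [rng.uniform(-1.0, 1.0) for _ in range(6)]
        s = score_fn(pose)  # e.g., image quality and plane-correctness metric
        if s > best_score:
            best_pose, best_score = pose, s
    return best_pose, best_score
```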
- the model may be trained using data that has been previously segmented and labeled, high-quality and/or low-quality images of the same target anatomy, simulated imaging data, real-time ultrasound exam data, or the like.
- the imaging system 300 may provide, for output, a notification to the user to capture additional imaging data using another probe surface.
- the notification may include directions for capturing additional data.
- the notification may indicate that additional imaging data should be captured at a certain angle, using a certain surface of the probe, that the additional imaging data should be of a certain region near the target anatomy, or the like.
- the imaging system 300 may include outputs, such as a display, haptic actuators, speakers, or the like.
- the notifications may be provided for output via one or more outputs.
- imaging system 300 may provide for output to a display textual and/or visual directions directing the user how and/or where to capture the additional data.
- the robotic positioning system 323 may automatically adjust the position of the robotic arm 442 and/or probe 100 to capture the additional imaging data.
- Figure 30B illustrates an example in which the positioning of the robotic arm 442 and/or probe 100 of the robotic positioning system 323 has been adjusted as compared to Figure 30A.
- the robotic positioning system 323 may automatically adjust the position of the robotic arm 442 and/or probe 100 in response to a determination that additional imaging data was required to generate a complete representation of the target anatomy.
- robotic positioning system 323 may automatically adjust the position of the robotic arm 442 and/or probe 100 in response to receiving user input to capture additional imaging data.
- robotic positioning system 323 may adjust the position of the robotic arm 442 and/or probe 100 based on the pre-procedural imaging plan.
- the robotic arm 442 may change the position of probe 100 such that a second surface, e.g., surface 104, is configured to be in contact with the body 440 of the patient in the region of the target anatomy.
- the additional imaging data may be captured by the second surface 104.
- the robotic arm 442 may adjust the position of probe 100 such that the first surface, e.g., surface 106, captures the additional imaging data.
- the probe 100 may not lose contact with the surface of the patient during rotation of the probe to a different transducer array. For example, if the transducer arrays are on surfaces of the probe that are adjacent and/or substantially adjacent (e.g., touching), the probe 100 may be rotated such that at least a portion of the first surface of the probe 100 remains in contact with the body 440 of the patient until the second surface of the probe and, therefore, the second transducer array is in contact with the body 440 of the patient. In this way, the imaging system 300 can continuously capture imaging data even while the orientation of the probe 100 is changing, e.g., the probe is being rotated.
- previous imaging probes having more than one transducer array are configured with the transducer arrays on opposing parallel surfaces.
- the first array of the previous probe configuration would have to lose contact with the body while the previous probe configuration is rotated 180 degrees to use the second array to capture imaging data.
- the rotation of the probe 100 may be confined to a working area.
- the working area may be the three-dimensional space taken up by the probe during rotation.
- for the previous probe configuration, the working area would span a full 180 degree rotation from one end to the other.
- the three-dimensional space would, therefore, have at least a semi-circular shape with a diameter corresponding to a length of the previous probe configuration and a width corresponding to the width of the previous probe configuration.
- the working space of the probe 100 disclosed above and herein is substantially reduced.
- the working space may not require a full 180 degree rotation in examples where the transducer arrays are transverse to one another. In such an example, the rotation is less than 180 degrees.
- the working area is still reduced as compared to the previous probe configuration.
- probes having the transducer arrays on opposing substantially parallel surfaces are typically elongated to allow for a user to hold the device when capturing imaging data.
- the probes described above and herein are small in size, comparatively, and configured to be handheld and/or coupled to a robotic positioning system 323.
- the compact nature of the probes described above and herein allow for the probes to be rotated and positioned on the body 440 of the patient without being blocked by the imaging bed 452 and/or body 440.
- the compact nature of the probe 100 in conjunction with the multiple degrees of rotation of the robotic arm 442 allow for the probe 100 to be positioned in the required, standard, and/or preferred orientation and/or location.
- the imaging data collected from the first surface 106 and/or the second surface 104 may be used to generate a representation of the target anatomy.
- the imaging system 300 may be configured to align the imaging data captured by the first surface 106 and/or second surface 104. For example, the imaging system 300 may identify anatomical features in imaging data captured by the first surface 106 and corresponding anatomical features within imaging data captured by the second surface 104. According to some examples, the anatomical features may be extracted using image segmentation.
- the imaging system 300 may use the corresponding anatomical features to align the imaging data captured by the first and second surfaces 106, 104.
- vascular flow imaging data may be captured and used to align, or register, the imaging data captured by the first and second surfaces 106, 104.
- the imaging data captured by the first and second surfaces 106, 104 may be 2D imaging data. Additional imaging data, such as larger aperture or field of view 3D imaging data, may be captured and used to improve the performance of aligning the imaging data captured by the first and second surfaces 106, 104.
- the methods described herein include the use of a robotic positioning system 323.
- the robotic positioning system 323 may be autonomous, e.g., configured to automatically capture imaging data based on one or more received inputs, or semi-autonomous, e.g., the movements of the robotic arm 442 are controlled by user inputs.
- the probe 100 may be separate from the robotic arm 442 of the robotic positioning system such that the probe 100 is handheld, e.g., the placement of the probe 100 is controlled by a user.
- the robotic positioning system 323 may still capture data via sensors 3340 to track the position and/or pose of the probe 100, the magnitude of force exerted by the probe on the body of the patient, or the like.
- the robotic positioning system 323 can provide, as output, feedback to the user controlling the probe.
- the probe 100 may be configured to communicate to imaging system 300.
- the probe may be configured to transmit the imaging data to the imaging system 300 for image processing via a wireless or wired connection to the system.
- the probe 100 may comprise one or more processors such that the processors of the probe are configured to process the image data.
- the system and/or the probe may include outputs, such as a display, speakers, haptic actuators, or the like. The representation generated by the system and/or probe may be provided for output to the display.
- the imaging system 300 of Figure 29 allows for imaging procedures to be reproducible and/or standardized. Repeatability and standardization allow a clinician, such as a radiologist, to have more certainty in diagnosis and to identify changes that might otherwise be missed.
- Reproducibility, or repeatability, of imaging procedures generally corresponds to being able to obtain consistent results, or images, regardless of who the operator is.
- reproducibility corresponds to the extent to which an imaging procedure repeated on the same patient, without any significant anatomical changes or progression of disease, produces the same set of images. More generally, given the need of clinicians to repeat imaging over time (e.g., to track the progression of a disease), reproducibility can refer to the extent to which an exam performed at one time and an exam repeated at a later time, will produce images that can be compared to one another.
- Standardization generally corresponds to making something consistent or regular.
- standardization generally corresponds to producing images that capture anatomy according to a procedure that is used across multiple patients.
- institutions may create protocols for sonographers to capture standardized imaging planes, which are views of specific anatomical features from a particular orientation.
- Standardization can also refer to a procedure for making measurements that are consistent across patients.
- the robotic positioning system 323 may capture mapping data of the patient.
- sensors 3340 may capture the mapping data and the imaging system 300 may generate a map of the body of the patient.
- the location and orientation of the probe 100 may be tracked via external sensors 3340 and/or sensors internal to probe 100.
- the location and orientation of the probe 100 may be tracked using joint encoders, optical and/or electromagnetic trackers, or the like.
- the location and orientation of the probe may be recorded and associated with the imaging data captured by probe 100.
- Figure 32 illustrates an example of the map of the body overlaid with the various probe positions and/or orientation recorded using the robotic positioning system 323.
- the robotic positioning system 323 may capture new mapping data via sensors 3340 such that a new map of the body of the patient can be generated.
- the imaging system 300, e.g., robotic positioning system 323, may compare the new map with the previous map to determine a coordinate transformation to align the surface of the body of the patient of the new map with the surface of the body of the patient of the previous map.
- the robotic positioning system 323 can then position the probe 100 to capture new imaging data in substantially the same location and/or orientation as the location and/or orientation of the previous imaging procedure. This may include, for example, determining one or more surfaces, e.g., transducer arrays, for capturing the imaging data.
- the robotic positioning system 323 may automatically position the probe 100 in the location and/or orientation to reproduce the results of the previous imaging procedure. According to some examples, if the previous imaging procedure used more than one transducer array on the probe 100 to capture the imaging data, the robotic positioning system 323 may use the same transducer arrays to capture the subsequent imaging data. In this regard, the captured imaging data may correspond to the quality, size, configuration, etc. of the previously captured imaging data. In another example, the robotic positioning system 323 may provide for output instructions for an operator to manually position the probe in the location and/or orientation to reproduce the results of the previous imaging procedure.
- imaging data captured during an initial imaging procedure may be analyzed to identify slice plane(s) captured as part of the imaging procedure, as shown in Figures 33A-33B and/or Figures 34A-34B.
- a slice plane corresponds to a two-dimensional plane that captures a particular cross-section of anatomy.
- the location and/or orientation of the slice planes of the captured imaging data may be associated with the captured imaging data, e.g., stored as metadata associated with the imaging data.
- the locations and/or orientations of the slice planes from the initial imaging procedure may be used to position the probe 100 in subsequent imaging procedures.
- the robotic positioning system 323 may, based on the map of the patient generated via mapping data captured by sensors 3340, position probe 100 in substantially the same location and/or orientation during a subsequent imaging procedure based on the slice planes determined during the initial imaging procedure. In some examples, the robotic positioning system 323 may rotate probe 100 such that a given transducer of the plurality of transducer arrays on the probe 100 is used to capture the imaging data.
- imaging data may be captured during an initial imaging procedure.
- the imaging data may be volumetric images.
- a matrix array on probe 100 may be used to capture volumetric images.
- a volumetric image corresponds to a 3D imaging, e.g., ultrasound, dataset containing pixel intensity values across three dimensions of points, e.g., voxels.
- the volumetric images may be analyzed to identify standard slice planes, such as shown in Figures 33A-33B and/or Figures 34A-34B. Standard slice planes may, for example, be views of anatomical features from a particular orientation.
- imaging protocols are created and/or implemented to capture images in standardized slice planes.
- the imaging protocols may be stored in the memory of the imaging system 300 such that the robotic positioning system 323 can position the probe 100 based on the imaging protocols.
- a matrix array on probe 100 may be used to capture volumetric images.
- the volumetric images may be analyzed by imaging system 300 during the imaging procedure, e.g., in real-time, to identify slice planes in the current imaging data that were captured during previous imaging procedures, as shown in Figures 33A-33B and/or Figures 34A-34B.
- the imaging system 300 may compare the most recently captured imaging data to previously captured imaging data to identify a degree of similarity.
- Figures 33A-33B and/or Figures 34A-34B illustrate example imaging data that has been processed to identify slice planes. The location and/or orientation of the imaging data from previous imaging procedures may initiate the search.
- the location and/or orientation of slice planes from the previous imaging procedure may be used to guide the current imaging procedure.
- the matrix array on probe 100 may be positioned such that its location and/or orientation is substantially the same as the location and/or orientation in the previous imaging procedure.
- the slice planes and/or a region of interest within the slice planes of the current imaging procedure correspond to those from the previous imaging procedure.
- a non-optimal slice plane may be identified by the imaging system 300.
- the imaging system 300 may be configured to determine a degree of similarity between the non-optimal slice plane and a previously captured slice plane and/or a standardized slice plane.
- the system may determine a similarity metric for a given 2D cross-sectional image of the imaging data as compared to a standard view of the target anatomy and/or a previously acquired image.
- the system may determine, based on the similarity metric, an amount to adjust the orientation and/or slice plane of the 2D cross-sectional image to optimize the similarity metric.
- Optimizing the similarity metric may include, for example, adjusting the orientation and/or slice plane of the 2D cross-sectional image until it substantially corresponds to the orientation and/or slice plane of the standard view of the anatomy and/or a previously acquired image.
- the imaging system 300 may translate and/or rotate the slice plane of the currently captured imaging data to correspond to the slice plane of the previously captured imaging data and/or the standardized slice plane to improve the image grade.
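- One way to realize this translate-and-rotate refinement, sketched below as greedy coordinate ascent over the slice plane parameters (three rotations and three translations): each parameter is nudged in turn and a change is kept only if it raises the similarity metric. The parameterization and step schedule are illustrative assumptions, not the disclosed algorithm.

```python
import numpy as np

def refine_slice_plane(pose, similarity_fn, step: float = 0.01, iters: int = 50):
    """Greedy coordinate ascent: nudge each plane parameter and keep changes
    that raise the similarity metric; shrink the step when stuck."""
    pose = np.asarray(pose, dtype=float)
    best = similarity_fn(pose)
    for _ in range(iters):
        improved = False
        for i in range(len(pose)):
            for delta in (+step, -step):
                trial = pose.copy()
                trial[i] += delta
                s = similarity_fn(trial)
                if s > best:
                    pose, best, improved = trial, s, True
        if not improved:
            step /= 2.0  # no single move helped; search more finely
    return pose, best
```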
- An algorithm, or set of algorithms, which determines adjustment actions may take as input a 2D image, multiple 2D images, a 3D volume or multiple 3D volumes, slice plane orientations and poses, and maps of a subject's body, as well as other data, and output a prediction of the location and orientation of the ideal slice plane.
- the algorithm(s) may take as an input a desired image, or features of a desired image, and output the predicted position and orientation of the desired slice plane relative to the slice plane(s) of image(s) that have been acquired.
- These algorithms may be machine learning algorithms using a convolutional neural network, a vision transformer or another architecture. These algorithms may also be action policies trained through reinforcement learning, outputting incremental translations and rotations of the 2D plane.
- the imaging system 300 may capture additional imaging data. For example, if the similarity is below a threshold similarity, the system may position the probe 100 at the location and/or orientation on the body of the patient to capture imaging data corresponding to the standard view of the anatomy and/or a previously acquired image. A similarity metric, indicating commonalities between the additional imaging data and (i) the standard view and/or (ii) a previously acquired image, may be determined. If the similarity metric of the additional imaging data is above a threshold and/or the additional imaging data can be translated and/or rotated into alignment, the system may determine that additional imaging data is not required. However, if the similarity metric of the additional imaging data is below a threshold, the system may continue to capture additional imaging data until the similarity metric is above a threshold.
- Focused imaging within a slice plane allows for an increased density of scanlines with tighter transmit and receive focusing, thereby improving the imaging quality.
- the volumetric images captured by the matrix array on probe 100 may be analyzed by imaging system 300 during the imaging procedure, e.g., in real-time, to identify standard slice planes, as shown in Figures 33A-33B, Figures 34A-34B, Figures 35A-35B, and/or Figure 36.
- the slice planes may be optimized using an AI model.
- the model may be trained to localize standard slice planes from within imaging datasets, such as 3D ultrasound datasets.
- the model may be trained to identify slice planes, in real-time.
- the predicted slice planes can be used by the robotic positioning system 323 to position probe 100 to capture imaging data focused within those slice planes and/or within a region of interest within the slice planes.
- the predicted slice planes may be used to determine which aperture, e.g., the surface of the probe 100, should be used to capture the imaging data. In some examples, the use of a smaller aperture may result in improved image quality.
- the volumetric images captured by the matrix array on probe 100 may be analyzed by imaging system 300 during the imaging procedure, e.g., in real-time, to identify slice planes in the current imaging data that correspond to slice planes captured during previous imaging procedures as shown in Figures 33A-33B, Figures 34A-34B, and/or Figure 36.
- the imaging system 300 may compare the most recently captured imaging data to previously captured imaging data to identify a degree of similarity.
- the location and/or orientation of the imaging data from previous imaging procedures may initiate the search. The location and/or orientation of slice planes from the previous imaging procedure may be used to guide the current imaging procedure.
- the robotic positioning system 323 may automatically and/or semi-autonomously position the surface of the probe 100 having the smaller aperture, e.g., a surface of the probe having a transducer array with a surface area smaller than the largest surface of the probe having a transducer array, to capture imaging data.
- the smaller transducer array may be used to focus the region of interest for the newly captured imaging data to within the slice plane of the previous imaging data.
- the use of the smaller aperture may improve image quality for a focused region of interest as compared to the use of a larger aperture.
- the volumetric images captured by the matrix array on probe 100 may be analyzed by imaging system 300 during the imaging procedure, e.g., in real-time.
- the imaging system 300 may identify one or more previously captured slice planes by virtually sweeping and/or rotating the imaging plane of the current imaging procedure.
- the imaging system 300 may determine a similarity index, e.g., a degree of similarity, between the currently captured imaging data and previously captured imaging data. Once the degree of similarity is optimized, final imaging data may be captured.
- the system optimizes the degree of similarity by adjusting the imaging plane (e.g. rotating and translating it).
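- The virtual sweep can be illustrated as follows: oblique 2D slices are resampled from the captured 3D volume along candidate plane poses (using SciPy's map_coordinates for interpolation), each candidate is scored against a previously captured reference slice, and the best-scoring plane is kept. The plane parameterization, helper names, and similarity function are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def extract_slice(volume, center, u_axis, v_axis, size: int = 64):
    """Sample an oblique 2D slice from a 3D volume. `center` is the plane
    origin in voxel coordinates; `u_axis`/`v_axis` are unit in-plane
    direction vectors."""
    u = np.arange(size) - size / 2
    v = np.arange(size) - size / 2
    uu, vv = np.meshgrid(u, v, indexing="ij")
    coords = (np.asarray(center)[:, None, None]
              + np.asarray(u_axis)[:, None, None] * uu
              + np.asarray(v_axis)[:, None, None] * vv)   # shape (3, size, size)
    return map_coordinates(volume, coords, order=1)        # linear interpolation

def sweep_for_best_slice(volume, reference, planes, similarity_fn):
    """Virtually sweep candidate planes (center, u_axis, v_axis) and keep the
    one most similar to a previously captured reference slice."""
    scored = ((similarity_fn(extract_slice(volume, *p), reference), p)
              for p in planes)
    return max(scored, key=lambda x: x[0])
```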
- the volumetric images captured by the matrix array on probe 100 may be analyzed by imaging system 300 during the imaging procedure, e.g., in real-time.
- the imaging system 300 may identify one or more standard slice planes by virtually sweeping and/or rotating the imaging plane of the current imaging procedure.
- the imaging system 300 may determine a similarity index, e.g., a degree of similarity, between the currently captured imaging data and the standard slice planes. Once the degree of similarity is optimized, final imaging data may be captured.
- the degree of similarity may be optimized, for example, by rotating and/or translating the currently captured imaging data to correspond to the orientation of the standard slice planes. Once the orientation of the currently captured imaging data substantially corresponds to the orientation of the standard slice planes, the degree of similarity may be considered to be optimized.
- the imaging system 300 may capture additional imaging data. For example, if the similarity is below a threshold similarity, the system may position the probe 100 at the location and/or orientation on the body of the patient to capture imaging data corresponding to standard slice planes. A similarity metric of the additional imaging data and the standard slice planes may be determined. If the similarity metric of the additional imaging data is above a threshold and/or can be translated and/or rotated into alignment, the system may determine that additional imaging data is not required. However, if the similarity metric of the additional imaging data is below a threshold, the system may continue to capture additional imaging data until the similarity metric is above a threshold.
- the optimization of the slice plane may be accomplished through neural networks using supervised learning, reinforcement learning, by aligning imaging data with standard anatomical models, through Bayesian optimization and/or through other techniques.
- a supervised learning algorithm may be trained to predict the pose (position and orientation) of a standard slice plane given a dataset consisting of image(s) from other slice planes and tracking data.
- the supervised learning algorithm may include a convolutional neural network, a transformer, a diffusion model or another type.
- a reinforcement learning algorithm may be trained to generate actions which correspond to movements (translations and rotations) of the slice plane, which bring the slice plane close to a desired standard plane.
- a reinforcement learning model may take the form of a Q function, or an action policy, for instance.
- This model may be trained on in-vivo data from human subjects and/or may be trained in a virtual environment with simulated ultrasound images.
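- A minimal sketch of how such an action policy would be rolled out at exam time, assuming a trained policy that maps the current observation (image, tracked pose, etc.) to an incremental slice-plane motion; the observation and action interfaces are illustrative:

```python
def roll_out_policy(policy, observe, apply_action, steps: int = 20):
    """Apply a trained action policy: each step maps the current observation
    to an incremental translation/rotation of the slice plane, moving it
    toward the desired standard plane."""
    for _ in range(steps):
        observation = observe()        # e.g., current image + tracked probe pose
        action = policy(observation)   # e.g., (dx, dy, dz, droll, dpitch, dyaw)
        apply_action(action)           # nudge the imaging plane accordingly
```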
- a model of the anatomy such as a statistical shape model, may be aligned with the captured volumetric images and used to identify the likely location of standard slice planes.
- a single model (such as a neural network) for optimizing the slice plane may take in additional inputs, beyond ultrasound images and tracking data, including RGB data (e.g., maps of the patient's body and the environment), contact force data, and proprioceptive data describing the configuration of the body.
- the probe 100 may be positioned at a particular location and/or orientation on the surface of the body of the patient.
- the imaging system 300 may provide, as output, instructions for the patient to breathe in and out.
- the imaging system 300 may capture and analyze the imaging data in real-time during the respiratory cycle of the patient. At the point in the respiratory cycle when the similarity between the currently captured imaging data and previously acquired imaging data is highest, the final imaging data may be captured.
- the optimization algorithm may terminate based on many different criteria, including when a loss function related to the similarity index and other factors (such as image quality and image completeness) reaches a certain target value, or when the time the system has spent on the task exceeds a certain value (see the sketch below). Alternatively, the optimization algorithm may capture multiple candidate images for a given target slice plane.
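- A minimal sketch of those two stopping rules, with placeholder target and budget values (the disclosure leaves the actual criteria open):

```python
import time


def should_terminate(loss: float, start_time: float,
                     target_loss: float = 0.05, budget_s: float = 60.0) -> bool:
    """Stop when the similarity-related loss hits its target or time runs out.

    `target_loss` and `budget_s` are placeholders; additional terms such as
    image quality or completeness could be folded into `loss`.
    """
    if loss <= target_loss:
        return True  # loss target reached
    if time.monotonic() - start_time > budget_s:
        return True  # time budget exhausted
    return False
```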
- the imaging system 300 may be configured to determine the amount of force the probe 100 exerts on the surface of the body of the patient based on data captured by force sensor 554. The magnitude of force between the probe 100 and the surface of the body of the patient may be recorded by the imaging system 300 while the imaging data is captured. In subsequent imaging procedures, the robotic positioning system 323 may be configured to position the transducer so as to match the magnitude of force applied during previous imaging procedure(s) within a threshold tolerance of error.
- the robotic positioning system may be configured to provide, as output to a user, an indication to increase or decrease the magnitude of force, such as when the probe 100 is being manually operated, e.g., handheld, or when the robotic positioning system 323 is being controlled by a user, e.g., in a semi-autonomous state.
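- For illustration, the force matching and the increase/decrease indication could be reduced to a comparison against the recorded force; the tolerance value in newtons is an assumption:

```python
def force_guidance(measured_n: float, recorded_n: float,
                   tolerance_n: float = 0.5) -> str:
    """Compare the current contact force to a recorded one and advise the user.

    `tolerance_n` is an assumed threshold; the disclosure only requires
    matching the prior force within some tolerance of error.
    """
    delta = measured_n - recorded_n
    if abs(delta) <= tolerance_n:
        return "hold"  # within tolerance; recorded force is matched
    return "decrease force" if delta > 0 else "increase force"
```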
- a sequence of standard positions and/or orientations of the probe 100 may be used across patients, such as the positions 3700 shown in Figure 37.
- the scanned surface of the patient's body should include a substantial portion of the acoustic windows available for imaging the anatomy, or region, of interest.
- the imaging system 300 may be configured to analyze the body surface map to determine these positions and orientations. For example, the imaging system 300 may identify anatomical landmarks, such as the xiphoid process, inferior portion of the rib cage, umbilicus, iliac crest, etc.
- the location of the landmarks, along with the 3D body map, and/or other data may be provided as input into a model trained to predict organ location and plan an imaging procedure based on the predicted organ location.
- the robotic positioning system 323 can then execute the imaging procedure by positioning the probe 100 in the determined location(s) at the specified orientation(s) using the specified transducer array(s).
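- As a toy stand-in for the trained organ-location model, the sketch below derives one candidate probe position by interpolating between two landmarks; the landmark names, interpolation factor, and orientation are illustrative assumptions only:

```python
import numpy as np


def plan_probe_poses(landmarks: dict) -> list:
    """Derive a candidate probe pose from anatomical landmark coordinates.

    Places one assumed liver window partway between the xiphoid process and
    the right iliac crest; a trained model would replace this heuristic.
    """
    xiphoid = np.asarray(landmarks["xiphoid_process"])
    iliac = np.asarray(landmarks["right_iliac_crest"])
    window = xiphoid + 0.4 * (iliac - xiphoid)  # assumed interpolation factor
    return [{"position_mm": window, "orientation_deg": (0.0, 15.0, 0.0)}]


poses = plan_probe_poses({
    "xiphoid_process": (0.0, 0.0, 0.0),
    "right_iliac_crest": (120.0, -80.0, 0.0),
})
```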
- the motion of the body of the patient may be monitored, e.g., tracked, using the sensors 3340 of the robotic positioning system 323.
- the motion of the chest accompanying inhalation and exhalation may be monitored, such as by a vision system, e.g., camera 55, or time of flight distance sensor.
- the imaging system 300 may capture imaging data at the same or substantially the same phase of the respiratory cycle at which the previous imaging data was captured.
- the imaging system 300 may capture imaging data based on the portion of the respiratory cycle. For example, some anatomical structures are clearer in imaging data when the imaging data is captured during a given portion of the respiratory cycle.
- the imaging system 300 may track the respiratory cycle of the patient and, based on the phases of the respiratory cycle and the target anatomy, capture imaging data.
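- One way to gate acquisition on respiratory phase, assuming a smoothed chest-displacement signal from the vision system or distance sensor (the phase definition and tolerance are assumptions of this sketch):

```python
import numpy as np
from scipy.signal import find_peaks


def capture_indices_at_phase(displacement: np.ndarray, target_phase: float,
                             tol: float = 0.05) -> np.ndarray:
    """Return sample indices where the breath cycle is near `target_phase`.

    Phase is defined as the fraction of the interval between consecutive
    inhalation peaks (0 at one peak, approaching 1 at the next).
    """
    peaks, _ = find_peaks(displacement)
    hits = []
    for start, end in zip(peaks[:-1], peaks[1:]):
        idx = np.arange(start, end)
        phase = (idx - start) / (end - start)
        hits.extend(idx[np.abs(phase - target_phase) < tol])
    return np.asarray(hits, dtype=int)
```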
- a data file, e.g., a file containing one or more instructions for execution by imaging system 300, may be generated for the imaging procedure.
- the data file may include information intended to enhance the degree to which the images acquired during the imaging procedure can be reproduced in subsequent imaging procedures.
- the information may include, for example, transducer type, contact forces and torques during image acquisition, position and orientation of the probe during each image acquisition, body surface map of the patient, phase of respiratory cycle at which images were acquired, patient pose (e.g., supine, decubitus, etc.), imaging settings (e.g., imaging frequencies, beamforming sequence, field of view, image depth, focal depths, dynamic range, gain(s), output power level, etc.), or the like; one possible record schema is sketched below.
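- One purely illustrative schema for such a data file; the field names, units, and JSON encoding are assumptions, since the disclosure specifies the content but not the format:

```python
import json
from dataclasses import asdict, dataclass, field


@dataclass
class ExamRecord:
    """Reproducibility metadata for one acquisition (illustrative schema)."""

    transducer_type: str
    contact_force_n: float
    contact_torque_nm: float
    probe_position_mm: tuple
    probe_orientation_quat: tuple
    respiratory_phase: float  # 0..1 fraction of the breath cycle
    patient_pose: str         # e.g., "supine", "decubitus"
    imaging_settings: dict = field(default_factory=dict)


record = ExamRecord(
    transducer_type="matrix",
    contact_force_n=4.2,
    contact_torque_nm=0.1,
    probe_position_mm=(102.0, -35.5, 12.0),
    probe_orientation_quat=(1.0, 0.0, 0.0, 0.0),
    respiratory_phase=0.25,
    patient_pose="supine",
    imaging_settings={"frequency_mhz": 3.5, "depth_cm": 16, "gain_db": 40},
)
print(json.dumps(asdict(record), indent=2))  # persist for later replay
```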
- the imaging system 300 may spatially and/or temporally align imaging data of the same anatomy captured from another placement.
- the initial imaging data may be incomplete, for example, due to a limited field of view, a substance that impedes the transmission of sound (e.g., bone or gas), or the like.
- the alignment of imaging data captured from different probe placements may be done using image processing techniques, probe tracking data, body surface maps, and/or respiratory phase tracking.
- the imaging system 300 may record the location and/or orientation of the probe 100 at the time imaging data is captured by the probe 100.
- the location and/or orientation of the probe 100 can be used by the system to generate a coordinate transformation.
- the coordinate transformation determined based on the location and/or orientation of the probe 100 can be used by the imaging system 300 to transform images acquired at different times, e.g., previously and/or subsequently acquired imaging data, into a global coordinate system.
- Comparing the body maps for the previous and current imaging procedures can be used to generate coordinate transformations.
- the coordinate transformations determined based on the body map comparisons can be used to compensate for patient motion during the imaging procedure.
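- A minimal sketch of that transform, assuming the tracking system reports the probe pose as a rotation matrix and translation (frames and units are assumptions):

```python
import numpy as np


def pose_to_transform(position: np.ndarray, rotation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a tracked probe pose."""
    T = np.eye(4)
    T[:3, :3] = rotation  # 3x3 rotation matrix from the tracking data
    T[:3, 3] = position   # probe position in the global frame
    return T


def to_global(points_probe: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Map Nx3 image points from the probe frame into the global frame."""
    homo = np.c_[points_probe, np.ones(len(points_probe))]  # Nx4 homogeneous
    return (T @ homo.T).T[:, :3]
```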
- the respiratory cycle can cause the position, orientation, and/or the shape of organs in the abdominal cavity to change based on the phase of the respiratory cycle. Accordingly, capturing imaging data at substantially corresponding phases of the respiratory cycle can ensure that the organs and/or acoustic windows align. Moreover, capturing imaging data at substantially corresponding phases of the respiratory cycle improves the degree to which image frames, e.g., frames taken from the imaging data, can be aligned to one another as the underlying anatomical structures are in a substantially similar state.
- Tracking the motion of the chest wall, or other anatomical features, during the respiratory cycle allows the system to determine when to capture imaging data such that the imaging data is captured during substantially the same phases of the respiratory cycle.
- Spatially and/or temporally aligning imaging data captured from two different probe placements may allow for the representation of the target anatomy to be a complete representation.
- the alignment increases the accuracy and repeatability of standard measurements, such as distances and volumes.
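- A sketch of the final fusion step, assuming both volumes have already been resampled into a shared global grid (e.g., via transforms like those above) with NaN marking voxels a given placement could not see:

```python
import numpy as np


def fuse_aligned_volumes(vol_a: np.ndarray, vol_b: np.ndarray) -> np.ndarray:
    """Merge two co-registered volumes into one representation.

    Overlapping voxels are averaged; each volume fills the other's gaps.
    The NaN-for-missing convention is an assumption of this sketch.
    """
    both = 0.5 * (vol_a + vol_b)                    # NaN wherever either is NaN
    fused = np.where(np.isnan(vol_a), vol_b, both)  # fill A's gaps from B
    return np.where(np.isnan(vol_b), vol_a, fused)  # fill B's gaps from A
```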
- Figure 38 is a flow diagram for generating a representation of the target anatomy using the imaging system 300 of Figure 29.
- the following operations do not have to be performed in the precise order described below. Rather, various operations can be handled in a different order or simultaneously, and operations may be added or omitted.
- the imaging system 300 may determine, based on the first imaging data, whether additional imaging data is needed to generate a representation of the target anatomy. For example, when determining whether additional imaging data is needed, the first imaging data may be segmented. Based on the segmented first imaging data, anatomical structures within the first imaging data may be identified. The identified anatomical structures may be provided as input into an artificial intelligence (AI) model, such as a machine learning (ML) model. The AI model may be trained to predict missing image data.
- a neural network may, for example, be trained on images of anatomy, such as the heart, from which regions have been intentionally masked. The network may be trained to predict the cross section of the original image from the modified images.
- a machine learning algorithm may create a 3D or 4D map representing the quality of imaging for different regions of the organ.
- This model may be aligned with a standard anatomical model of the heart, such as a statistical shape model, and regions within the standard model which are missing may be identified.
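- The masking-based training described above might look like the following PyTorch sketch; the tiny encoder-decoder, patch size, and random data are placeholders for whatever completion network is actually used:

```python
import torch
import torch.nn as nn


def random_mask(images: torch.Tensor, patch: int = 32) -> torch.Tensor:
    """Zero out one random square patch per image (the 'missing' region)."""
    masked = images.clone()
    n, _, h, w = images.shape
    for i in range(n):
        y = torch.randint(0, h - patch, (1,)).item()
        x = torch.randint(0, w - patch, (1,)).item()
        masked[i, :, y:y + patch, x:x + patch] = 0.0
    return masked


# A deliberately tiny network stands in for the completion model.
net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-4)

images = torch.rand(4, 1, 128, 128)  # placeholder cardiac frames
masked = random_mask(images)
loss = nn.functional.mse_loss(net(masked), images)  # predict the original
loss.backward()
opt.step()
```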
- the imaging system 300 may determine that additional imaging data is needed.
- a notification to obtain the additional imaging data may be provided for output.
- the notification may include instructions to obtain the additional imaging data using a second array of transducers.
- the notification may include a visual representation of at least one of a location to capture the additional imaging information or which surface of the probe to use to obtain the additional imaging information.
- second imaging data of the target anatomy is received.
- the second imaging data may be, for example, the additional imaging data captured in response to the notification to obtain additional imaging data.
- the second imaging data may be captured by a second array of transducers on a second surface of a probe.
- the second array of transducers may have a second imaging aperture configured to capture the second imaging data of the target anatomy.
- the second array of transducers may be, for example, a single crystal piezoelectric transducer array.
- the first surface and the second surface of the probe are transverse to one another.
- the first imaging aperture is larger than the second imaging aperture.
- a representation of the target anatomy is generated based on the first and second imaging data.
- the representation may be, for example, a two-dimensional or three-dimensional representation of the target anatomy.
- first imaging data of the target anatomy may be received from a first array of transducers on a first surface of a probe.
- the system determines whether the imaging data of the target anatomy is at least partially incomplete or of poor quality.
- Poor image quality may be, for example, regions of the image that do not contain useful information for clinical diagnosis.
- the poor image quality may be due to noise (e.g., clutter).
- the noise may be seen as regions of the imaging data with a poor contrast-to-noise ratio.
- the poor image quality may be due to low intensity, the imaging data having above a threshold amount of artifacts, poor spatial resolution, or the like.
- the poor image quality may include unclear boundaries of anatomical features and/or structures.
- the target anatomy may be partially blocked by anatomical features, such as bones, gas, or the like or may be of poor image quality for a different reason.
- the first imaging data may be segmented to determine the regions of the image with poor image quality.
- Anatomical structures, e.g., bones, may be identified based on the segmented first imaging data.
- the identified anatomical structures may be provided as input into an AI model trained to predict regions of missing imaging data.
- the AI model may predict the regions of missing imaging data in the first imaging data.
- a neural network may, for example, be trained on images of anatomy, such as the heart, to which noise has intentionally been added. The network may be trained to predict the regions where noise was added.
- a machine learning algorithm may create a 3D or 4D map representing the quality of imaging for different regions of the organ.
- This model may be aligned with a standard anatomical model of the heart, such as a statistical shape model. Regions within the standard model which are of poor quality (e.g., statistically different than the standard anatomical model) may be identified.
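- As a crude stand-in for the learned quality map, per-tile contrast-to-noise screening could flag candidate poor-quality regions; the tile size, threshold, and use of the whole image as the background reference are assumptions of this sketch:

```python
import numpy as np


def cnr(roi: np.ndarray, background: np.ndarray) -> float:
    """Contrast-to-noise ratio of a region against a background reference."""
    return abs(float(roi.mean()) - float(background.mean())) / \
        (float(background.std()) + 1e-9)


def poor_quality_mask(image: np.ndarray, tile: int = 32,
                      threshold: float = 1.0) -> np.ndarray:
    """Flag tiles whose CNR against the whole image falls below `threshold`."""
    h, w = image.shape
    mask = np.zeros((h // tile, w // tile), dtype=bool)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            patch = image[i * tile:(i + 1) * tile, j * tile:(j + 1) * tile]
            mask[i, j] = cnr(patch, image) < threshold
    return mask
```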
- High imaging quality may be, for example, imaging data and/or regions of imaging data in which the contrast-to-noise ratio and/or the spatial resolution is above a threshold.
- High imaging quality includes imaging data and/or regions of imaging data that contain useful information for clinical diagnosis purposes.
- high quality imaging data may be characterized by clear boundaries of anatomical features and/or structures.
- when at least one region is predicted to be a region of missing imaging data, the imaging system 300 may provide, as output, a notification to obtain additional imaging data.
- the notification may be to obtain the additional imaging data using the second array of transducers.
- the notification may include a visual representation of at least one of a location to capture the additional imaging information or which surface of the probe to use to obtain the additional imaging information.
- second imaging data of the target anatomy is received from a second array of transducers on a second surface of the probe.
- the first surface of the probe is transverse to the second surface of the probe.
- the first array of transducers has a first imaging aperture larger than a second imaging aperture of the second array of transducers.
- the first array of transducers is a MEMS transducer array.
- the second array of transducers is a single-crystal piezoelectric transducer array.
- Figure 40 is a block diagram illustrating an example ultrasound imaging pipeline 4001 of the imaging system 300 of Figure 29.
- the components and functionalities illustrated in Figure 40 may be performed by a computing device, such as client computing device 333, server computing device 303, and/or robotic positioning system 323.
- US probe 4027 and external probe 4026 may correspond to probe 100.
- ultrasound (US) images 4003 are captured by a probe, such as US probe 4027 or an external probe 4026.
- Probe 4027 may be a probe attached to robotic arm actuators 4023 of a robotic arm.
- positioning of the US probe 4027 may be controlled by the robotic arm actuators 4023 and/or via manual robotic controls 4025 provided via user inputs.
- the positioning of the US probe 4027 may be manually done, such as by a user physically moving components of the robotic arm to which the US probe 4027 is attached.
- an external probe, separate from imaging system 300, such as a handheld probe may be used instead of, or in addition to, US probe 4027.
- the positioning of the US probe 4027 may be controlled by path planning & motion control 4021.
- Path planning & motion control 4021 may receive probe motion commands from the autonomous image acquisition unit 4011, as well as robotic telemetry data 4009.
- Robotic telemetry data 4009 may include data related to the components of the robotic arm, such as information from tactile sensors, joint encoders, joint torques, contact force sensors, etc., which indicate the operational states of the robotic arm and/or components therein or otherwise attached.
- the path planning and motion control 4021 may receive additional information, such as joint angle trajectories for the robotic arm actuators, generated by the autonomous image acquisition unit 4011.
- the path planning & motion control 4021 may control the operation of the robotic arm actuators 4023 to position the US probe 4027 as needed.
- the probe motion commands 4015, which provide the path planning & motion control 4021 with instructions on where the US probe 4027 needs to be positioned, may be determined based on user input(s) 4002, exam protocol(s) 4004, machine vision 4005, patient measurements 4007, and US images 4003.
- Exam protocol(s) 4004 may define imagery that needs to be captured during an imaging procedure. For instance, the exam protocol may define images required to properly capture a particular organ or organs.
- User input(s) 4002 may provide commands that control the autonomous image acquisition unit, such as stopping and starting procedures, requesting additional images, requesting retakes of images, etc.
- Machine Vision 4005 may include information from sensors, such as sensors 3340, that provide mapping data for generating a map of the body of the patient.
- Patient measurements 4007 may include information about the patient, such as ECG measurements. Such patient information may be input into the autonomous image acquisition unit to help build 4D models, to ensure the clip length contains enough periods of a physical characteristic (e.g., enough heartbeats), etc.
- the autonomous image acquisition unit may determine the sequence of positions where the US probe 4027 needs to be positioned and output probe motion commands 4015 accordingly.
- the path planning & motion control 4021 may control the robotic arm actuators 4023 to move the US probe to these positions.
- the autonomous image acquisition unit 4011 may also control imaging parameter adjustments 4012 of the US probe 4027, change transducer arrays via transducer array change commands 4013, trigger image capture via clip capture commands 4014, and adjust the virtual plane via virtual plane adjustment commands 4016. Additionally, the autonomous image acquisition unit 4011 may control one or more user interfaces (UI) to provide commands to patients, such as breathing and positioning instructions, as shown by UI: Patient Requests 4017. A possible shape for these command messages is sketched below.
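- One purely illustrative way to type the command fan-out in Figure 40; the message shapes and the `path_planner`/`probe` interfaces are assumptions keyed to the reference numerals above:

```python
from dataclasses import dataclass


@dataclass
class ProbeMotionCommand:       # cf. probe motion commands 4015
    target_pose: tuple          # desired position/orientation of US probe 4027


@dataclass
class TransducerArrayChange:    # cf. transducer array change commands 4013
    array_id: str               # e.g., "matrix" or "single-crystal"


@dataclass
class ClipCapture:              # cf. clip capture commands 4014
    duration_s: float           # long enough for, e.g., several heartbeats


def dispatch(command, path_planner, probe) -> None:
    """Route commands the way Figure 40's arrows suggest (an assumption)."""
    if isinstance(command, ProbeMotionCommand):
        path_planner.move_to(command.target_pose)  # hypothetical interface
    elif isinstance(command, TransducerArrayChange):
        probe.select_array(command.array_id)       # hypothetical interface
    elif isinstance(command, ClipCapture):
        probe.capture_clip(command.duration_s)     # hypothetical interface
```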
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Pathology (AREA)
- Radiology & Medical Imaging (AREA)
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Heart & Thoracic Surgery (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Gynecology & Obstetrics (AREA)
- Ultrasonic Diagnosis Equipment (AREA)
Abstract
The technology generally relates to a probe, such as an ultrasound probe, having two or more transducer arrays. The transducer arrays are arranged on the probe such that a first array is transverse to a second array. Each transducer array may be of a different size, such that the first transducer array has a larger aperture compared to the second transducer array. The probe may be handheld and, therefore, positioned by a user. In some examples, the probe is coupled to a robotic positioning system that can automatically and/or semi-automatically position the probe to capture imaging data of a region of interest.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202463642242P | 2024-05-03 | 2024-05-03 | |
| US63/642,242 | 2024-05-03 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| WO2025231468A1 (fr) | 2025-11-06 |
| WO2025231468A9 WO2025231468A9 (fr) | 2025-12-26 |
Family
ID=96306369
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2025/027740 (WO2025231468A1, pending) | Multi-faceted transducer | 2024-05-03 | 2025-05-05 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025231468A1 (fr) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20090048520A1 (en) * | 2007-08-17 | 2009-02-19 | Jean-Michel Marteau | Multi-headed imaging probe and imaging system using same |
| US20110125022A1 (en) * | 2009-11-25 | 2011-05-26 | Siemens Medical Solutions Usa, Inc. | Synchronization for multi-directional ultrasound scanning |
| US20170258445A1 (en) * | 2014-11-25 | 2017-09-14 | Koninklijke Philips N.V. | A multi-sensor ultrasound probe and related methods |
| US20180168550A1 (en) * | 2016-12-19 | 2018-06-21 | Samsung Medison Co., Ltd. | Ultrasound imaging apparatus and method of controlling the same |
| WO2023031438A1 (fr) * | 2021-09-03 | 2023-03-09 | Diagnoly | Dispositif et procédé de guidage dans l'évaluation ultrasonore d'un organe |
Non-Patent Citations (1)
| Title |
|---|
| KIM RIA ET AL: "Robot-Assisted Semi-Autonomous Ultrasound Imaging With Tactile Sensing and Convolutional Neural-Networks", IEEE TRANSACTIONS ON MEDICAL ROBOTICS AND BIONICS, IEEE, vol. 3, no. 1, 24 December 2020 (2020-12-24), pages 96 - 105, XP011839586, [retrieved on 20210219], DOI: 10.1109/TMRB.2020.3047154 * |
Similar Documents
| Publication | Title |
|---|---|
| CN105813573B (zh) | Imaging view manipulation using model-based segmentation |
| US12236582B2 | Breast mapping and abnormality localization |
| KR101908520B1 (ko) | Landmark detection using spatial and temporal constraints in medical imaging |
| EP2934328B1 (fr) | Anatomically intelligent echocardiography for point-of-care |
| US10290076B2 | System and method for automated initialization and registration of navigation system |
| JP2023519878A (ja) | Systems and methods for correlating regions of interest in multiple imaging modalities |
| JP6974354B2 (ja) | Synchronized surface and internal tumor detection |
| EP3162292B1 (fr) | Ultrasound imaging apparatus and method of controlling the same |
| US11712224B2 | Method and systems for context awareness enabled ultrasound scanning |
| JP6501796B2 (ja) | Acquisition-orientation-dependent features for model-based segmentation of ultrasound images |
| WO2025231468A1 (fr) | Multi-faceted transducer |
| WO2025231468A9 (fr) | Multi-faceted transducer |
| Li et al. | 3D ultrasound shape completion and anatomical feature detection for minimally invasive spine surgery |
| JP2023073109A (ja) | Information processing apparatus, medical image diagnostic apparatus, program, and storage medium |
| DE112021006546T5 (de) | Ultrasound imaging with anatomy-based acoustic settings |
| US12182929B2 | Systems and methods for volume reconstructions using a priori patient data |
| Chatelain | Quality-driven control of a robotized ultrasound probe |
| Chen | QUiLT (Quantitative Ultrasound in Longitudinal Tissue Tracking): Stitching 2D images into 3D Volumes for Organ Health Monitoring |
| IL301424B2 (en) | System and methods for performing remote ultrasound examinations |
| WO2025255165A1 (fr) | Autonomous TEE probe with graph model generation and landmark-based navigation |
| Yang et al. | Closing the Sim-to-Real Gap: An End-to-End Robotic Ultrasound System Leveraging In Vivo Reinforcement Learning and 3D-Prior Guided Hybrid Control |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 25736063; Country of ref document: EP; Kind code of ref document: A1 |