
WO2025098851A1 - Angioplasty imaging, analysis, and procedure facilitation - Google Patents


Info

Publication number
WO2025098851A1
Authority
WO
WIPO (PCT)
Prior art keywords
implantable device
view
processor
vessel wall
images
Prior art date
Legal status
Pending
Application number
PCT/EP2024/080702
Other languages
French (fr)
Inventor
Amin FEIZPOUR
Ayushi Sinha
Javad Fotouhi
Brian Curtis LEE
Leili SALEHI
Current Assignee
Koninklijke Philips NV
Original Assignee
Koninklijke Philips NV
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips NV filed Critical Koninklijke Philips NV
Publication of WO2025098851A1


Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10Computer-aided planning, simulation or modelling of surgical operations
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61FFILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
    • A61F2/00Filters implantable into blood vessels; Prostheses, i.e. artificial substitutes or replacements for parts of the body; Appliances for connecting them with the body; Devices providing patency to, or preventing collapsing of, tubular structures of the body, e.g. stents
    • A61F2/82Devices providing patency to, or preventing collapsing of, tubular structures of the body, e.g. stents
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10Computer-aided planning, simulation or modelling of surgical operations
    • A61B2034/101Computer-aided simulation of surgical operations
    • A61B2034/102Modelling of surgical devices, implants or prosthesis
    • A61B2034/104Modelling the effect of the tool, e.g. the effect of an implanted prosthesis or for predicting the effect of ablation or burring
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36Image-producing devices or illumination devices not otherwise provided for
    • A61B90/37Surgical systems with images on a monitor during operation
    • A61B2090/373Surgical systems with images on a monitor during operation using light, e.g. by using optical scanners
    • A61B2090/3735Optical coherence tomography [OCT]
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36Image-producing devices or illumination devices not otherwise provided for
    • A61B90/37Surgical systems with images on a monitor during operation
    • A61B2090/378Surgical systems with images on a monitor during operation using ultrasound
    • A61B2090/3782Surgical systems with images on a monitor during operation using ultrasound transmitter or receiver in catheter or minimal invasive instrument
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10116X-ray image
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30052Implant; Prosthesis
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular

Definitions

  • Examples generally relate to identifying views to observe characteristics of a medical procedure in real-time. In detail, examples relate to determining views that illustrate distances between the vasculature and an implantable device, and between an instrument associated with the implantable device and the vasculature.
  • a catheter may be inserted into a patient’s blood vessels to deliver a stent (e.g., flow diverter) to a target site (e.g., for an aneurysm or stroke treatment).
  • a stent may be a small mesh tube that holds open passages in the body (e.g., for weak or narrowed arteries).
  • the stent may be attached to the target site with an instrument (e.g., an angioplasty balloon and catheter) and remains at the target site to operate as a flow diverter when the instrument is removed.
  • Malapposition, in this context, is defined as any deviation from perfect attachment of the stent to the vessel walls.
  • the techniques described herein relate to a system, including a processor in communication with memory.
  • the processor is configured to receive one or more images of an implantable device within a vasculature; extract, from the one or more images, at least one image feature indicating a position of the implantable device relative to a vessel wall of the vasculature; and select, based on the at least one image feature, at least one view for optimally observing a medical procedure to deploy the implantable device to the vessel wall.
  • the techniques described herein relate to a non-transitory computer-readable storage medium having stored a computer program comprising instructions.
  • the instructions when executed by a processor, cause the processor to receive one or more images of an implantable device within a vasculature; extract, from the one or more images, at least one image feature indicating a position of the implantable device relative to a vessel wall of the vasculature; and select, based on the at least one image feature, at least one view for optimally observing a medical procedure to deploy the implantable device to the vessel wall.
  • the techniques described herein relate to a method including receiving one or more images of an implantable device within a vasculature; extracting, from the one or more images, at least one image feature indicating a position of the implantable device relative to a vessel wall of the vasculature; and selecting, based on the at least one image feature, at least one view for optimally observing a medical procedure to deploy the implantable device to the vessel wall.
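The receive/extract/select pipeline recited in the claims above can be sketched in a few plain functions. This is an illustrative sketch only: the function names, the brute-force nearest-wall distance computation, and the "largest visible gap" selection rule are assumptions standing in for the claimed processor logic, not the patented implementation.

```python
import numpy as np

def extract_position_features(device_mask, wall_mask, spacing=1.0):
    """Illustrative feature extraction: for each implantable-device voxel,
    the Euclidean distance to the nearest vessel-wall voxel (brute force)."""
    device_pts = np.argwhere(device_mask)
    wall_pts = np.argwhere(wall_mask)
    # pairwise device-to-wall distances; keep the nearest wall voxel per device voxel
    d = np.linalg.norm(device_pts[:, None, :] - wall_pts[None, :, :], axis=-1)
    return d.min(axis=1) * spacing

def select_view(per_view_features):
    """Pick the view index whose largest device-to-wall gap is greatest,
    i.e., the view that best exposes a potential malapposition."""
    return int(np.argmax([f.max() for f in per_view_features]))
```

A view exposing a larger gap is preferred here because it is the view in which a candidate malapposition is most observable for confirmation.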
  • FIG. 1 is a diagram of an example of stent appositions according to an embodiment.
  • FIG. 2 is a diagram of an example of an enhanced imaging process 150 according to an embodiment.
  • FIG. 3 is a diagram of an example of a method to identify and obtain images during a procedure according to an embodiment.
  • FIG. 4 is a block diagram of an example of an imaging architecture according to an embodiment.
  • examples include a structured and flexible approach that operates in real-time to identify when a stent is properly attached to a blood vessel during a procedure, or whether stent malapposition (e.g., incomplete stent apposition), has occurred.
  • Stent apposition may refer to the proximity of struts of the stent to the vascular wall.
  • An adequate stent apposition is when the stent is sufficiently close to the vascular wall (e.g., contacts the vascular wall) to preclude blood flow between any strut of the stent and the underlying artery.
  • Stent malapposition is a separation of any strut from the intimal surface of the vascular wall (e.g., arterial wall) that is, potentially, not overlapping a side branch.
  • imaging may identify position(s) of the stent relative to the vascular wall.
  • Examples include a technical and efficient manner to select at least one view to image the stent to detect whether stent malapposition exists.
  • examples may provide a technical enhancement over existing examples. For example, existing examples may employ inefficient methods to image the vasculature and stent, consuming resources (e.g., imaging resources), increasing latency and potentially overlooking stent malapposition resulting in potential health complications for patients.
  • existing examples operate in a clinic to employ 3D imaging techniques (e.g., VasoCT and optical coherence tomography) to: 1) diagnose if a malapposition exists; and 2) select the right view-angle to correct the malapposition through angioplasty.
  • Such existing processes are heavily manual, resulting in higher error rates, and are cumbersome and time-consuming while a patient is under anesthesia. Examples automate these processes and remove the above impediments.
  • present examples implement an automation that optimizes the processes (e.g., finds the ideal view for angioplasty, which is difficult to do manually for the above reasons and prone to error).
  • a neural network may be trained to output a 3D heatmap of the distance between the stent (or other flow diverter) and the blood vessel surfaces, identify two-dimensional (2D) slices (e.g., view angles) at which high-risk defects are observable (for clinical verification of the pathology) and the closest possible working projections (allowed by system geometry and still showing both 1) the vessel and 2) the malapposition) transmitted to one or two arms of a mono-plane or bi-plane imaging system, respectively.
  • a catheter is inserted into a patient's blood vessels to deliver the stent to the target site.
  • Image capture may be executed immediately after or during device deployment to determine if the stent is properly attached to the vessel walls. Doing so may be a clinical routine in most patient care settings. Malapposition of a stent may cause thrombosis and lead to fatal results.
  • IVUS Intravascular Ultrasound
  • OCT optical coherence tomography
  • VasoCT vascular CT
  • a malapposition may hinder blood flow and cause thrombosis and/or movement of the stent.
  • stent appositions 100 are illustrated.
  • a baseline 102 and follow-up 104 are illustrated.
  • the baseline 102 is an initial image of a stent attached to a blood vessel, taken shortly (e.g., within minutes or an hour) after the stent is attached to the blood vessel, and the follow-up 104 is taken some time (e.g., days, weeks, or months) after the initial image.
  • the stent appositions 100 includes baseline incomplete stent apposition 106.
  • the baseline incomplete stent apposition 106 may include an incomplete stent apposition 108.
  • the incomplete stent apposition 108 may result in no vascular remodeling 110.
  • No vascular remodeling 110 includes a resolved but incomplete apposition.
  • the incomplete stent apposition 108 may also result in no vascular remodeling 112.
  • the no vascular remodeling 112 is a persistent, incomplete apposition.
  • a late-acquired incomplete stent apposition 114 includes a complete stent apposition 116.
  • the complete stent apposition 116 may result in no vascular remodeling 118.
  • the no vascular remodeling 118 includes a late-acquired incomplete apposition (e.g., thrombus dissolution).
  • the complete stent apposition 116 may also result in a positive vascular remodeling 120.
  • the positive vascular remodeling 120 includes a late-acquired incomplete apposition.
  • the incomplete stent apposition 108 and complete stent apposition 116 may result in different appositions and health complications. Therefore, detecting the incomplete stent apposition 108 and complete stent apposition 116 initially may result in better outcomes, since the incomplete stent apposition 108 and complete stent apposition 116 may be corrected expediently and prior to the development of the no vascular remodeling 110, no vascular remodeling 112, no vascular remodeling 118 and positive vascular remodeling 120.
  • a decision may be made about the presence of a malapposition and the requirement for fixing the malapposition through angioplasty before the procedure is finalized. If the malapposition is detected, optimal view angles to allow visualization of an angioplasty balloon will be selected so that the provider may attempt to push and attach the stent to the vessel walls based on images from X-ray working projections associated with the optimal view angles. Afterwards, another round of imaging is performed to ensure that the corrective angioplasty has fixed the malapposition.
  • examples herein receive one or more images of an implantable device (e.g., a stent) within a vasculature, generate, from the one or more images, a representation indicating distances of the implantable device from a vessel wall, and select at least one view to observe the vasculature based on the distances and an amount of visibility of the vessel wall around the locations of malapposition.
  • some examples generate: 1) slices/view-angles of a 3D image that most clearly visualize the malapposition, for the clinician's verification, and 2) view-angles/working-projections of the 3D image that would most clearly visualize the vessel, an angioplasty balloon and the malapposition in a live X-ray/fluoroscopy image series, to be transmitted to the C-arm(s) of an imaging device before the angioplasty operation to reduce the malapposition begins.
  • examples may provide an analysis of the images in real-time to guide and facilitate a provider’s actions.
  • In FIG. 2, an enhanced imaging process 150 is illustrated.
  • an operator may try to rapidly investigate images and to determine if the entirety of an implantable device 168 (e.g., a stent) is fully attached to vessel walls 170 of a vasculature 182.
  • a decision is made about the presence of a malapposition and whether an attempt will be made to fix the malapposition through further angioplasty before the procedure is finalized. If a malapposition is detected, optimal view angles that allow visualization of an angioplasty balloon, the vessel and, potentially, the malapposition may be selected so that the operator may push and attach the implantable device 168 to the vessel walls 170. Afterwards, another round of imaging may be performed to determine if the further angioplasty fixed the malapposition.
  • the enhanced imaging process 150 includes a series of images, such as image 184, that form a VasoCT volume 152 (e.g., images) of the implantable device 168 (e.g., a stent) within vessel walls 170 (e.g., a blood vein) of the vasculature 182.
  • a neural network 154 (shown as “NN”) analyzes the VasoCT volume 152.
  • the VasoCT volume 152 may be a series of images of the implantable device 168 and the vessel walls 170.
  • a machine learning model may be substituted for the neural network 154 and operate similarly to the neural network 154 as described herein. It will be understood that rules-based processors may also be substituted for the neural network 154.
  • the enhanced imaging process 150 is an automated process that receives the VasoCT volume 152 (e.g., an input imaging data).
  • the enhanced imaging process 150 outputs the presence of any potential malapposition in stent placement, the 2D view angles and/or 3D image slices at which the malapposition may be best viewed and confirmed by an operator (e.g., interventionalists), and at what optimal view angles an angioplasty balloon 176, the vasculature 182, implantable device 168, and potentially, the malapposition may be viewed (e.g., in live fluoroscopy), to allow for expedient and facile decision making and performance of the procedure. Doing so may result in better health outcomes, reduced time and less consumed resources from avoiding excessive imaging and time to complete the procedure.
  • a VasoCT volume 152 may be captured with an imaging device 160 (e.g., IVUS device, OCT device and/or VasoCT device, etc.) which may image flow diverters (e.g., stents) and blood vessels with differentiable contrasts.
  • the neural network may be trained to output a three-dimensional (3D) distance map 156 (e.g., a representation) showing, at each voxel of the images, a distance (e.g., shortest distance, longest distance, etc.) between the implantable device 168 and the vessel walls 170.
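A minimal sketch of such a voxel-wise distance map, using a Euclidean distance transform as a stand-in for the trained network's output. The binary device/wall masks, voxel spacing, and the convention of zeroing non-device voxels are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

def stent_to_wall_distance_map(device_mask, wall_mask, spacing=(1.0, 1.0, 1.0)):
    """At every voxel, the Euclidean distance to the nearest vessel-wall
    voxel; restricted to device voxels it gives the stent-to-wall gap
    that flags candidate malapposition sites."""
    # EDT on the complement of the wall mask: value = distance to nearest wall voxel
    dist_to_wall = ndimage.distance_transform_edt(
        ~wall_mask.astype(bool), sampling=spacing)
    # report the gap only where the implantable device sits
    return np.where(device_mask.astype(bool), dist_to_wall, 0.0)
```

Thresholding this map (e.g., gap above a strut thickness) would yield the candidate malapposition locations discussed below.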
  • the neural network 154 may be trained to also output optimal viewing angles 180 (e.g., view angles) and viewing images 158 (e.g., corresponding image slices at which a malapposition may be best viewed).
  • an output of the neural network 154 may be the optimal viewing angles 180 to view the angioplasty balloon 176 and the malapposition.
  • the viewing images may correspond to slices/view-angles of the VasoCT volume 152 (e.g., the 3D image) that most clearly visualize the malapposition for the clinician's verification.
  • the optimal viewing angles 180 correspond to view-angles/working-projections that would most clearly visualize the vasculature 182, the angioplasty balloon 176 and the malapposition in a live X-ray/fluoroscopy image series, to be transmitted to C-arm(s) of the imaging device 160 before the angioplasty operation to fix the malapposition begins. While the above examples are described with respect to a VasoCT volume 152, the processes described are also applicable to any other volume imaging that shows the implantable device 168 and the vessel walls 170 of the vasculature 182.
  • the optimal viewing angles 180 may be transmitted to the imaging device 160 (e.g., a c-arm), which may then image a patient that is undergoing the procedure by positioning detectors (e.g., imaging device arms such as C-arms) 166, 164 at the optimal view angles.
  • the optimal viewing angles 180 provided to imaging device 160 may facilitate post-deployment angioplasty.
  • the optimal viewing angles 180 (e.g., view angles) provide the best possible visibility of the angioplasty balloon 176 that is used to implant the implantable device 168.
  • the optimal viewing angles 180 (e.g., view angles) also show the implantable device 168 and vessel walls 170 malapposition.
  • the neural network 154 (or a machine learning and/or deep learning model) may be trained to optimize and identify the relevant views and view angles.
  • the neural network 154 may be trained to generate the 3D distance map 156.
  • the 3D distance map 156 may include distances from positions of the implantable device 168 to the nearest points of the vessel walls 170.
  • the neural network 154 may also select a first image 172 and second image 174 from the VasoCT volume 152 (e.g., a plurality of images containing distances).
  • the first image 172 and the second image 174 may be selected based on distances of the 3D distance map 156 and an amount of visibility of the vessel walls 170 around the locations.
  • the neural network 154 may select the first image 172 to be displayed to the operator based on the first image 172 (e.g., a first view) corresponding to a largest distance of the distances from the 3D distance map 156. That is, the distance in the first image 172 between the vessel walls 170 and the implantable device 168 may be the largest distance in the 3D distance map 156. Other images (not shown) may have shorter distances between the implantable device 168 and the vessel walls 170 and are bypassed for display to the operator.
  • the second image 174 (e.g., a second view) may be selected for display to the operator based on an identification that a clarity measurement of the second image 174 meets a clarity threshold, where the second image 174 includes an instrument to position the implantable device.
  • the instrument is the angioplasty balloon 176 that pushes the implantable device 168 against the vessel walls 170. That is, the operator may seek to identify where the angioplasty balloon 176 is positioned in order to accurately maneuver the angioplasty balloon 176.
  • the neural network 154 may further extract, from the VasoCT volume 152 (e.g., one or more images), image features indicating a position of the implantable device 168 relative to vessel walls 170 of the vasculature 182. Based on the image features, the neural network 154 generates a representation, referred to as the 3D distance map 156, indicating distances of locations of the implantable device 168 from the vessel walls 170. The neural network 154 selects viewing images 158 (e.g., slices of the 3D image at certain view angles) to show the malapposition based on the distances and an amount of visibility of the malapposed stent and the vessel wall (e.g., clarity).
  • the neural network 154 may also identify view angles, stored as the optimal viewing angles 180, to obtain new views of the implantable device 168 and the vessel walls 170 based on the distances, and a visibility prediction of an instrument, such as the angioplasty balloon 176, that is to re-position and correct the malapposed implantable device 168. Examples further orient the imaging device 160 based on the predicted optimal view angles, and obtain, with the imaging device 160 that is oriented based on the view angles, new images of the implantable device 168 and the vessel walls 170 and/or vasculature 182. Doing so may facilitate further angioplasty operations and highlight the relevant sections of different images.
  • the neural network 154 may be trained in a supervised manner using image data that includes expert annotation of the best view angles in retrospective images. In some examples, training may be done in an unsupervised or semi-supervised manner.
  • the loss function for the neural network 154 (e.g., for such an unsupervised learning example) may be based on the metrics associated with malapposition and vessel visibility (e.g., balloon visibility).
  • Such metrics to determine the optimal view-angle may include a largest malapposition (e.g., location(s) on a stent that have the largest distance from vessel wall), and largest observable (non-obstructed) vessel cross-section around the coordinates of the stent location.
  • the loss function may begin at either of such view angles (e.g., created using above suggested methods or other available methods) and search for a view angle where both (or more) of such above metrics may be maximized. This optimization may then identify optimal viewing angles 180 to satisfy both the best malapposition visibility and the best angioplasty balloon visibility, which may be automatically transmitted to the imaging device 160 to execute imaging based on the angles.
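The search described above can be sketched as a scored sweep over candidate view angles. The two visibility metrics are assumed callables (stand-ins for the learned loss terms), and the equal weighting is an illustrative choice, not specified by the source.

```python
import numpy as np

def best_working_projection(candidate_angles, malapposition_visibility,
                            vessel_visibility, w_gap=0.5, w_vessel=0.5):
    """Score each candidate view angle by a weighted sum of (a) how
    clearly the malapposition gap is seen and (b) how unobstructed the
    vessel (and balloon path) is, then return the best-scoring angle."""
    scores = [w_gap * malapposition_visibility(a) + w_vessel * vessel_visibility(a)
              for a in candidate_angles]
    return candidate_angles[int(np.argmax(scores))]
```

In practice the candidate set would be constrained to angles the C-arm geometry actually allows, as the source notes.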
  • the view angles (or system geometry) are saved during a post-operation angioplasty procedure, or displayed to the user to configure the imaging device 160.
  • the neural network 154 generates the 3D distance map 156 and provides viewing images 158 that emphasize (e.g., optimally show) the malapposition between the implantable device 168 and vessel walls 170.
  • the implantable device 168 is shown in cross-hatch, and a relevant part of the vessel walls 170 is shown in black with dots.
  • the vessel walls 170 include portions in white as well.
  • View angles 2 and 3 in Fig. 2 are examples of VasoCT slices that this system would automatically detect and show to a clinician to confirm the presence of a significant stent malapposition, as shown in example slices 172 and 174.
  • In this example, significant malapposition is detected, and angioplasty is required.
  • the neural network 154 further provides optimal viewing angles 180 to perform corrective angioplasty.
  • the optimal viewing angles 180 can include predicted fluoroscopy images such as image 186 in Fig. 2.
  • the imaging device 160 may then be controlled based on the optimal viewing angles 180 to obtain images with the detectors 166, 164 being positioned at the optimal viewing angles 180 to achieve image series such as 186 in which the vessel, the balloon and the malapposition are as visible as possible.
  • the loss function of the neural network 154 may be weighted to put more emphasis on producing accurate outputs around a proximal side of the implantable device 168, as opposed to the distal side of the implantable device 168.
  • Proximal means “nearer the center of the body” as opposed to “distal” meaning “away from the center of the body”.
  • Devices are typically inserted from the proximal side of a condition (e.g., aneurysm) toward the distal side.
  • the proximal side of the implantable device 168 usually has a greater risk when the direction of blood flow is proximal to distal.
  • the neural network 154 may be trained to focus on the greater interest areas, and/or to investigate, focus and analyze the proximal side more precisely.
  • the neural network 154 may be trained to automatically associate a stronger attention (e.g., a greater emphasis) to, more deeply, analyze the proximal side and/or detect whether a malapposition has occurred on the proximal side.
  • Such examples may achieve the above by providing independent annotations of proximal and distal sides of the implantable device 168 and optimizing the loss function to apply different weights to the different regions (e.g., weights for the proximal side that are greater than the weights for the distal side).
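Region-weighted training of this kind can be sketched with a per-voxel weight mask. The squared-error form and the 2:1 proximal-to-distal weighting below are illustrative choices; the source specifies only that proximal-side weights exceed distal-side weights.

```python
import numpy as np

def region_weighted_loss(pred, target, proximal_mask, w_proximal=2.0, w_distal=1.0):
    """Squared error with a heavier weight on voxels annotated as the
    proximal side of the device, pushing the model toward higher
    accuracy where the clinical risk is higher."""
    weights = np.where(proximal_mask, w_proximal, w_distal)
    return float(np.mean(weights * (np.asarray(pred) - np.asarray(target)) ** 2))
```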
  • the neural network 154 may generate an accuracy metric that indicates how clearly the proximal side of the implantable device 168 will be shown from images at different angles.
  • An angle associated with the largest accuracy metric (e.g., predicted to most clearly show the proximal side) and/or that meets a threshold may be selected and provided to the imaging device 160 to cause the imaging device 160 to obtain an image at the angle.
  • examples may select a view based on an accuracy metric associated with the proximal side of the implantable device 168 meeting a threshold.
  • the neural network 154 may be designed to predict a stent malapposition during implantable device 168 and/or angioplasty balloon 176 deployment, and before the implantable device 168 is detached from a delivery device.
  • a 2D fluoroscopy image series may be included as an input to the neural network 154 running in real-time during stent deployment.
  • the neural network 154 may be designed to learn the relationship between stent behavior during deployment and the outcomes as seen in post-deployment imaging, such as VasoCT. Therefore, by learning what kinds of device behavior during deployment would lead to malapposition, the output of the neural network 154 may warn the operator during deployment how likely it is that the implantable device 168 may not perfectly match the vessel walls 170.
  • a provider may have a chance to reposition and fix stent placement of the implantable device 168 before detaching the implantable device 168 from a delivery device.
  • Some examples of the neural network 154 may provide a notification that further angioplasty should be performed even before any other imaging, such as VasoCT, is performed for quality control. That is, in some examples the neural network 154 may identify a likelihood that the implantable device 168 will fail to fit the vessel walls perfectly, determines that the likelihood meets a threshold, and provides a warning to an operator based on the likelihood meeting the threshold.
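The warning described above reduces to a threshold test on the predicted likelihood. The function name, message text, and 0.5 default threshold are illustrative assumptions:

```python
def malapposition_warning(likelihood, threshold=0.5):
    """Return a warning string for the operator when the predicted
    probability that the device will fail to appose meets the
    threshold; otherwise return None."""
    if likelihood >= threshold:
        return (f"Warning: predicted malapposition risk "
                f"{likelihood:.0%} meets threshold {threshold:.0%}")
    return None
```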
  • the neural network 154 additionally outputs suggestions on where the angioplasty balloon 176 may be placed and deployed to increase the probability of the angioplasty having a positive outcome. In some examples, the neural network 154 may provide such an update when the neural network 154 determines that such an update will provide a better outcome and is more likely to result in apposition.
  • a second neural network separate from the neural network 154 may have an input that is an output of the neural network 154 (e.g., whether angioplasty is necessary, view angles that should be used during angioplasty, etc.) and outputs landmarks for where the distal end of the angioplasty balloon 176 may be placed before deployment or shows an outline of a deployed balloon that may be followed by the interventionalists. Prior successful angioplasty procedures may be used to generate ground truth to train the second neural network.
  • the second neural network may provide a location to position an angioplasty balloon 176 based on an angle associated with the viewing images 158, where the angioplasty balloon 176 is associated with the implantable device 168. The angle may be the imaging angle for the detectors 164, 166 to obtain the viewing images 158.
  • FIG. 3 illustrates a method 300 to identify and obtain images during a procedure.
  • One or more aspects of method 300 may be implemented as part of and/or in conjunction with the enhanced imaging process 150 (FIG. 1).
  • Method 300 may be implemented in a computing device, computing system (e.g., hardware, configurable logic, fixed-function logic hardware, at least one computer readable storage medium comprising a set of instructions for execution, etc.).
  • Illustrated processing block 310 receives one or more images of an implantable device within a vasculature.
  • Illustrated processing block 320 extracts, from the one or more images, image features indicating a position of the implantable device relative to a vessel wall of the vasculature.
  • Illustrated processing block 330 generates, based on the image features, a representation indicating distances of locations of the implantable device from the vessel wall.
  • Illustrated processing block 340 selects at least one view to observe the vasculature based on the distances and an amount of visibility of the vessel wall around the locations.
  • the method 300 selects a first view to be part of the at least one view based on the first view corresponding to a largest distance of the distances, and selects a second view to be part of the at least one view based on an identification that a clarity measurement of the second view meets a clarity threshold, where the second view includes an instrument to position the implantable device.
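The two-view selection rule described above can be sketched as follows. The view representation, field names, and the 0.8 clarity threshold are illustrative assumptions rather than details from the disclosure.

```python
def select_views(views, clarity_threshold=0.8):
    """Select two views: one showing the largest device-to-wall distance,
    and one clear view of the instrument positioning the device."""
    # First view: the one corresponding to the largest of the distances.
    first = max(views, key=lambda v: v["max_distance_mm"])
    # Second view: any view that includes the instrument and whose
    # clarity measurement meets the clarity threshold.
    second = next(
        (v for v in views
         if v["shows_instrument"] and v["clarity"] >= clarity_threshold),
        None,
    )
    return first, second

# Toy data (angles and scores are made up for illustration):
views = [
    {"angle": 30, "max_distance_mm": 0.4, "clarity": 0.90, "shows_instrument": False},
    {"angle": 75, "max_distance_mm": 1.2, "clarity": 0.60, "shows_instrument": True},
    {"angle": 110, "max_distance_mm": 0.7, "clarity": 0.85, "shows_instrument": True},
]
first, second = select_views(views)
# first is the 75-degree view (largest distance);
# second is the 110-degree view (clear view of the instrument).
```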
  • processing blocks 330 and 340 include executing a machine learning model that generates the representation and selects the at least one view.
  • the method 300 includes identifying a likelihood that the implantable device will fail based on the representation, determining that the likelihood meets a threshold, and providing a warning to an operator based on the likelihood meeting the threshold.
  • processing block 340 further comprises selecting the at least one view based on an accuracy metric associated with a proximal side of the implantable device meeting a threshold.
  • the method 300 includes providing a location to position a balloon based on an angle associated with the at least one view, where the balloon is associated with the implantable device, identifying view angles to obtain new views of the implantable device and the vasculature based on the distances, and a visibility measurement of an instrument that is to position the implantable device, orienting an imaging system based on the view angles, and obtaining, with the imaging system that is oriented based on the view angles, new images of the implantable device and the vasculature.
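Orienting an imaging system to identified view angles implies mapping a desired 3D viewing direction to gantry angles. The sketch below uses an assumed, illustrative patient-frame convention (the disclosure does not specify one), so the axis assignments and angle signs are assumptions.

```python
import math

def view_direction_to_c_arm_angles(d):
    """Convert a unit viewing direction (x, y, z) in a patient-centered
    frame to (primary, secondary) gantry angles in degrees.

    Assumed convention (illustrative only): x points to the patient's
    left, y points anterior, z points toward the head. The primary angle
    rotates the detector in the axial plane; the secondary angle tilts it
    out of the axial plane (cranial positive, caudal negative).
    """
    x, y, z = d
    primary = math.degrees(math.atan2(x, y))   # rotation in the axial plane
    secondary = math.degrees(math.asin(z))     # tilt out of the axial plane
    return primary, secondary

# A pure anterior-posterior view maps to zero angulation:
# view_direction_to_c_arm_angles((0.0, 1.0, 0.0)) -> (0.0, 0.0)
```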
  • FIG. 4 shows a more detailed example of an imaging architecture 650.
  • the imaging architecture 650 may be readily implemented in conjunction with the enhanced imaging process 150 (FIG. 2) and/or method 300 (FIG. 3).
  • the imaging architecture 650 may include a display 654.
  • the display 654 may display images to an operator.
  • a neural network 662 may identify relevant images, generate 3D distance maps, determine view angles to obtain further images, etc.
  • the neural network 662 may correspond to the neural network 154 (FIG. 1).
  • the neural network 662 may include a processor 662a (e.g., embedded controller, central processing unit/CPU) and a memory 662b (e.g., non-volatile memory/NVM and/or volatile memory).
  • the memory 662b contains a set of instructions, which when executed by the processor 662a, cause the neural network 662 to operate as described herein.
  • An imaging device 664 may receive the view angles from the neural network 662, position cameras of the imaging device 664 at the view angles, and obtain images.
  • the imaging device 664 may correspond to the imaging device 160 (FIG. 1).
  • the imaging device 664 may include a processor 664a (e.g., embedded controller, central processing unit/CPU) and a memory 664b (e.g., non-volatile memory/NVM and/or volatile memory).
  • the memory 664b contains a set of instructions, which when executed by the processor 664a, cause the imaging device 664 to operate as described herein.
  • Example 1 includes a system, comprising a processor, and a memory containing a set of instructions, which when executed by the processor, cause the system to receive one or more images of an implantable device within a vasculature, extract, from the one or more images, image features indicating a position of the implantable device relative to a vessel wall of the vasculature, and select at least one view to observe a largest distance of distances between the implantable device and the vessel wall.
  • Example 2 includes the system of Example 1, wherein the set of instructions, which when executed by the processor, cause the system to select a first view to be part of the at least one view based on an identification that a clarity measurement of one or more of an angioplasty balloon, the vessel wall and a malapposition of the first view meets a clarity threshold.
  • Example 3 includes the system of Example 1, wherein to extract the image features and select the at least one view, the set of instructions, which when executed by the processor, cause the system to execute a machine learning model that is to extract the image features and select the at least one view.
  • Example 4 includes the system of Example 1, wherein the set of instructions, which when executed by the processor, cause the system to identify a likelihood, during device deployment, that the implantable device will fail based on the image features, determine that the likelihood meets a threshold, and provide a warning to an operator based on the likelihood meeting the threshold.
  • Example 5 includes the system of Example 1, wherein to select the at least one view, the set of instructions, which when executed by the processor, cause the system to select the at least one view based on an accuracy metric associated with a proximal side of the implantable device meeting a threshold.
  • Example 6 includes the system of Example 1, wherein the set of instructions, which when executed by the processor, cause the system to provide a location to position an angioplasty balloon based on an angle associated with the at least one view, wherein the angioplasty balloon is associated with the implantable device.
  • Example 7 includes the system of Example 1, wherein the set of instructions, which when executed by the processor, cause the system to identify view angles to obtain new views of the implantable device and the vasculature based on the distances, and a visibility prediction of an angioplasty balloon that is to position the implantable device, orient an imaging system based on the view angles, and obtain, with the imaging system that is oriented based on the view angles, new images of the angioplasty balloon, the implantable device and the vasculature.
  • Example 8 includes at least one computer readable storage medium comprising a set of instructions, which when executed by a computing device, cause the computing device to receive one or more images of an implantable device within a vasculature, extract, from the one or more images, image features indicating a position of the implantable device relative to a vessel wall of the vasculature, and select at least one view to observe a largest distance of distances between the implantable device and the vessel wall.
  • Example 9 includes the at least one computer readable storage medium of Example 8, wherein the instructions, when executed, cause the computing device to select a first view to be part of the at least one view based on an identification that a clarity measurement of one or more of an angioplasty balloon, the vessel wall and a malapposition of the first view meets a clarity threshold.
  • Example 10 includes the at least one computer readable storage medium of Example 8, wherein to extract the image features and select the at least one view, the instructions, when executed, cause the computing device to execute a machine learning model that is to extract the image features and select the at least one view.
  • Example 11 includes the at least one computer readable storage medium of Example 8, wherein the instructions, when executed, cause the computing device to identify a likelihood, during device deployment, that the implantable device will fail based on the image features, determine that the likelihood meets a threshold, and provide a warning to an operator based on the likelihood meeting the threshold.
  • Example 12 includes the at least one computer readable storage medium of Example 8, wherein to select the at least one view, the instructions, when executed, cause the computing device to select the at least one view based on an accuracy metric associated with a proximal side of the implantable device meeting a threshold.
  • Example 13 includes the at least one computer readable storage medium of Example 8, wherein the instructions, when executed, cause the computing device to provide a location to position an angioplasty balloon based on an angle associated with the at least one view, wherein the angioplasty balloon is associated with the implantable device.
  • Example 14 includes the at least one computer readable storage medium of Example 8, wherein the instructions, when executed, cause the computing device to identify view angles to obtain new views of the implantable device and the vasculature based on the distances, and a visibility prediction of an angioplasty balloon that is to position the implantable device, orient an imaging system based on the view angles, and obtain, with the imaging system that is oriented based on the view angles, new images of the angioplasty balloon, the implantable device and the vasculature.
  • Example 15 includes a method comprising receiving one or more images of an implantable device within a vasculature, extracting, from the one or more images, image features indicating a position of the implantable device relative to a vessel wall of the vasculature, and selecting at least one view to observe a largest distance of distances between the implantable device and the vessel wall.
  • Example 16 includes the method of Example 15, further comprising selecting a first view to be part of the at least one view based on an identification that a clarity measurement of one or more of an angioplasty balloon, the vessel wall and a malapposition of the first view meets a clarity threshold.
  • Example 17 includes the method of Example 15, wherein the extracting and the selecting further comprises executing a machine learning model that extracts the image features and selects the at least one view.
  • Example 18 includes the method of Example 15, further comprising identifying a likelihood, during device deployment, that the implantable device will fail based on the image features, determining that the likelihood meets a threshold, and providing a warning to an operator based on the likelihood meeting the threshold.
  • Example 19 includes the method of Example 15, wherein the selecting further comprises selecting the at least one view based on an accuracy metric associated with a proximal side of the implantable device meeting a threshold.
  • Example 20 includes the method of Example 15, further comprising providing a location to position an angioplasty balloon based on an angle associated with the at least one view, wherein the angioplasty balloon is associated with the implantable device, identifying view angles to obtain new views of the implantable device and the vasculature based on the distances, and a visibility prediction of the angioplasty balloon, orienting an imaging system based on the view angles, and obtaining, with the imaging system that is oriented based on the view angles, new images of the angioplasty balloon, the implantable device and the vasculature.
  • the above described methods and systems may be readily combined together if desired.
  • The term “coupled” may be used herein to refer to any type of relationship, direct or indirect, between the components in question, and may apply to electrical, mechanical, fluid, optical, electromagnetic, electromechanical or other connections.
  • The terms “first”, “second”, etc. may be used herein only to facilitate discussion, and carry no particular temporal or chronological significance unless otherwise indicated.


Abstract

A system includes a processor in communication with memory. The processor is configured to receive one or more images of an implantable device within a vasculature; extract, from the one or more images, at least one image feature indicating a position of the implantable device relative to a vessel wall of the vasculature; and select, based on the at least one image feature, at least one view for optimally observing a medical procedure to deploy the implantable device to the vessel wall.

Description

ANGIOPLASTY IMAGING, ANALYSIS, AND PROCEDURE FACILITATION
FIELD OF THE INVENTION
[0001] Examples generally relate to identifying views to observe characteristics of a medical procedure in real-time. In detail, examples relate to determining views that illustrate distances between an implantable device and the vasculature, and between an instrument associated with the implantable device and the vasculature.
BACKGROUND
[0002] During an endovascular procedure, a catheter may be inserted into a patient’s blood vessels to deliver a stent (e.g., flow diverter) to a target site (e.g., for an aneurysm or stroke treatment). A stent may be a small mesh tube that holds open passages in the body (e.g., for weak or narrowed arteries). Thus, the stent may be attached to the target site with an instrument (e.g., an angioplasty balloon and catheter) and remains at the target site to operate as a flow diverter when the instrument is removed. Malapposition, in this context, is defined as any deviation from the perfect attachment of the stent to the vessel walls.
SUMMARY
[0003] In some aspects, the techniques described herein relate to a system, including a processor in communication with memory. The processor is configured to receive one or more images of an implantable device within a vasculature; extract, from the one or more images, at least one image feature indicating a position of the implantable device relative to a vessel wall of the vasculature; and select, based on the at least one image feature, at least one view for optimally observing a medical procedure to deploy the implantable device to the vessel wall.
[0004] In some aspects, the techniques described herein relate to a non-transitory computer-readable storage medium having stored a computer program comprising instructions. The instructions, when executed by a processor, cause the processor to receive one or more images of an implantable device within a vasculature; extract, from the one or more images, at least one image feature indicating a position of the implantable device relative to a vessel wall of the vasculature; and select, based on the at least one image feature, at least one view for optimally observing a medical procedure to deploy the implantable device to the vessel wall.
[0005] In some aspects, the techniques described herein relate to a method including receiving one or more images of an implantable device within a vasculature; extracting, from the one or more images, at least one image feature indicating a position of the implantable device relative to a vessel wall of the vasculature; and selecting, based on the at least one image feature, at least one view for optimally observing a medical procedure to deploy the implantable device to the vessel wall.
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] The various advantages of the embodiments of the present disclosure will become apparent to one skilled in the art by reading the following specification and appended claims, and by referencing the following drawings, in which:
[0007] FIG. 1 is a diagram of an example of stent appositions according to an embodiment;
[0008] FIG. 2 is a diagram of an example of an enhanced imaging process 150 according to an embodiment;
[0009] FIG. 3 is a diagram of an example of a method to identify and obtain images during a procedure according to an embodiment; and
[0010] FIG. 4 is a block diagram of an example of an imaging architecture according to an embodiment.
DETAILED DESCRIPTION OF THE EMBODIMENTS
[0011] From a technical point of view, examples include a structured and flexible approach that operates in real-time to identify when a stent is properly attached to a blood vessel during a procedure, or whether stent malapposition (e.g., incomplete stent apposition) has occurred. Stent apposition may refer to the proximity of struts of the stent to the vascular wall. An adequate stent apposition is when the stent is sufficiently close to the vascular wall (e.g., contacts the vascular wall) to preclude blood flow between any strut of the stent and the underlying artery. Stent malapposition is a separation of any strut from the intimal surface of the vascular wall (e.g., arterial wall) that is, potentially, not overlapping a side branch.
[0012] In some cases, imaging may identify position(s) of the stent relative to the vascular wall. In doing so, a provider (e.g., a doctor that is implanting the stent) may determine whether a stent malapposition exists and correct the stent, if a stent malapposition does exist. Examples include a technical and efficient manner to select at least one view to image the stent to detect whether stent malapposition exists. Thus, examples may provide a technical enhancement over existing examples. For example, existing examples may employ inefficient methods to image the vasculature and stent, consuming resources (e.g., imaging resources), increasing latency and potentially overlooking stent malapposition, resulting in health complications for patients. For example, existing examples operate in a clinic to employ 3D imaging techniques (e.g., VasoCT and optical coherence tomography) to: 1) diagnose if a malapposition exists; and 2) select the right view-angle to correct the malapposition through angioplasty. Such existing processes are heavily manual, cumbersome and time-consuming, and result in higher error rates while a patient is under anesthesia. Examples automate these processes and remove the above impediments. Additionally, as noted above, present examples implement an automation that optimizes the processes (e.g., finds the ideal view for angioplasty, which is difficult and error-prone to do manually for the above reasons).
[0013] If a minor mismatch is missed and a required angioplasty is skipped, blood clots can form around the stent, with complications potentially leading to fatal results for the patient. Examples automate the process of investigating the intra-operation or posttreatment images in a quantitative manner. A neural network may be trained to output a 3D heatmap of the distance between the stent (or other flow diverter) and the blood vessel surfaces, identify two-dimensional (2D) slices (e.g., view angles) at which high-risk defects are observable (for clinical verification of the pathology) and the closest possible working projections (allowed by system geometry and still showing both 1) the vessel and 2) the malapposition) transmitted to one or two arms of a mono-plane or bi-plane imaging system, respectively.
[0014] As noted above, during an endovascular procedure, a catheter is inserted into a patient’s blood vessels to deliver the stent to the target site. Image capture may be executed immediately after or during device deployment to determine if the stent is properly attached to the vessel walls. Doing so may be a clinical routine in most patient care settings. Malapposition of a stent may cause thrombosis and lead to fatal results. Depending on the procedure type and the clinical site’s capabilities, various imaging modalities (e.g., Intravascular Ultrasound (IVUS), optical coherence tomography (OCT), VasoCT, etc.) are used after or during device deployment and before the catheter is withdrawn to ensure a perfect apposition by looking for a mismatch (e.g., space) between the stent and the blood vessel walls. Particularly on the proximal side of the stent, a malapposition may hinder blood flow and cause thrombosis and/or movement of the stent.
[0015] Turning now to FIG. 1, examples of stent appositions 100 are illustrated. In the stent appositions 100, a baseline 102 and follow-up 104 are illustrated. The baseline 102 is an initial image of a stent attached to a blood vessel taken shortly (e.g., within an hour, minutes, etc.) after the stent is attached to the blood vessel, and the follow-up 104 is taken some time (e.g., weeks, months, days, etc.) after the initial image.
[0016] The stent appositions 100 includes baseline incomplete stent apposition 106. The baseline incomplete stent apposition 106 may include an incomplete stent apposition 108. The incomplete stent apposition 108 may result in no vascular remodeling 110. No vascular remodeling 110 includes a resolved but incomplete apposition.
[0017] The incomplete stent apposition 108 may also result in no vascular remodeling 112. The no vascular remodeling 112 is a persistent, incomplete apposition.
[0018] A late-acquired incomplete stent apposition 114 includes a complete stent apposition 116. The complete stent apposition 116 may result in no vascular remodeling 118. The no vascular remodeling 118 includes a late-acquired incomplete apposition (e.g., thrombus dissolution). The complete stent apposition 116 may also result in a positive vascular remodeling 120. The positive vascular remodeling 120 includes a late-acquired incomplete apposition.
[0019] Thus, the incomplete stent apposition 108 and complete stent apposition 116 may result in different appositions and health complications. Therefore, detecting the incomplete stent apposition 108 and complete stent apposition 116 initially may result in better outcomes, since the incomplete stent apposition 108 and complete stent apposition 116 may be corrected expediently and prior to the development of the no vascular remodeling 110, no vascular remodeling 112, no vascular remodeling 118 and positive vascular remodeling 120.
[0020] After image acquisition, a provider (e.g., interventionalist doctor) may attempt to rapidly investigate the incomplete stent apposition 108 and no vascular remodeling 110 and make sure the entirety of the stent is fully attached to the vessel walls. At this stage, a decision may be made about the presence of a malapposition and the requirement for fixing the malapposition through angioplasty before the procedure is finalized. If the malapposition is detected, optimal view angles to allow visualization of an angioplasty balloon will be selected so that the provider may attempt to push and attach the stent to the vessel walls based on images from X-ray working projections associated with the optimal view angles. Afterwards, another round of imaging is performed to ensure that the corrective angioplasty has fixed the malapposition.
[0021] Manual investigations and decision-making processes take a significant amount of time (e.g., 5-15 minutes per stent placement procedure) and are prone to error due to human subjectivity and voluminous data (e.g., numerous images to analyze, such as many slices of a 3D image), resulting in costly operations and sub-optimal health outcomes. That is, in addition to being time-consuming, which results in costs for patients and the healthcare system, the accuracy of such manual processes is operator-dependent and may be sub-optimal at times. As will be discussed with respect to enhanced imaging process 150 (FIG. 2), examples herein address the aforementioned drawbacks of the existing examples. In detail, examples herein receive one or more images of an implantable device (e.g., a stent) within a vasculature, extract image features from the one or more images, generate a representation indicating distances of the implantable device from a vessel wall, and select at least one view to observe the vasculature based on the distances and an amount of visibility of the vessel wall around the locations of malapposition. That is, some examples generate: 1) slices/view-angles of a 3D image that most clearly visualize the malapposition, for a clinician's verification, and 2) view-angles/working-projections of the 3D image that would most clearly visualize the vessel, an angioplasty balloon and the malapposition in a live X-ray/fluoroscopy image series, to be transmitted to the C-arm(s) of an imaging device before the angioplasty operation to reduce the malapposition begins. In doing so, examples may provide an analysis of the images in real-time to guide and facilitate a provider’s actions. Turning now to FIG. 2, an enhanced imaging process 150 is illustrated.
After image acquisition, an operator (e.g., interventionalist) may try to rapidly investigate images and determine if the entirety of an implantable device 168 (e.g., a stent) is fully attached to vessel walls 170 of a vasculature 182. At this stage, a decision is made about the presence of a malapposition and whether an attempt will be made to fix the malapposition through further angioplasty before the procedure is finalized. If a malapposition is detected, optimal view angles, to allow visualization of an angioplasty balloon, the vessel and, potentially, the malapposition, may be selected so that the operator may push and attach the implantable device 168 to the vessel walls 170. Afterwards, another round of imaging may be performed to determine if the further angioplasty fixed the malapposition.
[0022] The enhanced imaging process 150 includes a series of images, such as image 184, that form a VasoCT volume 152 (e.g., images) of the implantable device 168 (e.g., a stent) within vessel walls 170 (e.g., a blood vessel) of the vasculature 182. A neural network 154 (shown as “NN”) analyzes the VasoCT volume 152. The VasoCT volume 152 may be a series of images of the implantable device 168 and the vessel walls 170. In some examples, a machine learning model may be substituted for the neural network 154 and operate similarly to the neural network 154 as described herein. It will be understood that rules-based processors may also be substituted for the neural network 154.
[0023] The enhanced imaging process 150 is an automated process that receives the VasoCT volume 152 (e.g., an input imaging data). The enhanced imaging process 150 outputs the presence of any potential malapposition in stent placement, the 2D view angles and/or 3D image slices at which the malapposition may be best viewed and confirmed by an operator (e.g., interventionalists), and at what optimal view angles an angioplasty balloon 176, the vasculature 182, implantable device 168, and potentially, the malapposition may be viewed (e.g., in live fluoroscopy), to allow for expedient and facile decision making and performance of the procedure. Doing so may result in better health outcomes, reduced time and less consumed resources from avoiding excessive imaging and time to complete the procedure.
[0024] A VasoCT volume 152 (e.g., representation) may be captured with an imaging device 160 (e.g., IVUS device, OCT device and/or VasoCT device, etc.) which may image flow diverters (e.g., stents) and blood vessels with differentiable contrasts. As illustrated, using the VasoCT volume 152 together with annotations of any observed mismatches between the implantable device 168 and vessel walls 170, the neural network 154 may be trained to output a three-dimensional (3D) distance map 156 (e.g., a representation) showing, at each voxel of the images, a distance (e.g., shortest distance, longest distance, etc.) between the implantable device 168 and the vessel walls 170.
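For intuition, a per-voxel device-to-wall distance map can also be computed directly from segmentation masks, as sketched below. This NumPy/SciPy sketch is an illustrative alternative to the learned 3D distance map 156; the function name and mask inputs are assumptions, not elements of the disclosure.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def device_to_wall_distance_map(device_mask, wall_mask, voxel_spacing=(1.0, 1.0, 1.0)):
    """For each voxel on the implantable device, return the Euclidean
    distance (in the units of voxel_spacing) to the nearest vessel-wall
    voxel; voxels not on the device are set to NaN.

    device_mask, wall_mask: boolean 3D arrays of the same shape.
    """
    # Distance from every voxel to the nearest wall voxel (EDT of the
    # complement of the wall mask, scaled by the voxel spacing).
    dist_to_wall = distance_transform_edt(~wall_mask, sampling=voxel_spacing)
    out = np.full(device_mask.shape, np.nan)
    out[device_mask] = dist_to_wall[device_mask]
    return out
```

A downstream step could then flag voxels whose distance exceeds a clinical malapposition threshold.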
[0025] Additionally, the neural network 154 may be trained to also output optimal viewing angles 180 (e.g., view angles) and viewing images 158 (e.g., corresponding image slices at which a malapposition may be best viewed). Using the viewing images 158 together with input images and annotations (the annotations may be provided by the operator) of the best view angles for angioplasty where the angioplasty balloon 176 and malapposition may be best viewed from the VasoCT volume 152, an output of the neural network 154 may be the optimal viewing angles 180 to view the angioplasty balloon 176 and the malapposition. The viewing images may correspond to slices/view-angles of the VasoCT volume 152 (e.g., 3D image) that most clearly visualize the malapposition for a clinician's verification. The optimal viewing angles 180 correspond to view-angles/working-projections that would most clearly visualize the vasculature 182, the angioplasty balloon 176 and the malapposition in a live X-ray/fluoroscopy image series, to be transmitted to C-arm(s) of the imaging device 160 before the angioplasty operation to fix the malapposition begins. While the above examples are described with respect to a VasoCT volume 152, the processes described are also applicable to any other volume imaging that shows the implantable device 168 and the vessel walls 170 of the vasculature 182.
[0026] The optimal viewing angles 180 may be transmitted to the imaging device 160 (e.g., a c-arm), which may then image a patient that is undergoing the procedure by positioning detectors (e.g., imaging device arms such as C-arms) 166, 164 at the optimal view angles.
[0027] The optimal viewing angles 180 provided to imaging device 160 may facilitate post-deployment angioplasty. The optimal viewing angles 180 (e.g., view angles) provide the best possible visibility of the angioplasty balloon 176 that is used to implant the implantable device 168. The optimal viewing angles 180 (e.g., view angles) also show the implantable device 168 and vessel walls 170 malapposition. To do so, the neural network 154 (or a machine learning and/or deep learning model) may be trained to optimize and identify the relevant views and view angles.
[0028] Notably, the neural network 154 may be trained to generate the 3D distance map 156. The 3D distance map 156 may include distances from positions of the implantable device 168 to the nearest points of the vessel walls 170. The neural network 154 may also select a first image 172 and a second image 174 from the VasoCT volume 152 (e.g., a plurality of images containing distances). The first image 172 and the second image 174 may be selected based on distances of the 3D distance map 156 and an amount of visibility of the vessel walls 170 around locations. For example, the neural network 154 may select the first image 172 to be displayed to the operator based on the first image 172 (e.g., a first view) corresponding to a largest distance of the distances from the 3D distance map 156. That is, the distance in the first image 172 between the vessel walls 170 and the implantable device 168 may be the largest distance in the 3D distance map 156. Other images (not shown) may have shorter distances between the implantable device 168 and the vessel walls 170 and are bypassed for display to the operator. The second image 174 (e.g., a second view) may be selected for display to the operator based on an identification that a clarity measurement of the second image 174 meets a clarity threshold, where the second image 174 includes an instrument to position the implantable device. In this example, the instrument is the angioplasty balloon 176 that pushes the implantable device 168 against the vessel walls 170. That is, the operator may seek to identify where the angioplasty balloon 176 is positioned in order to accurately maneuver the angioplasty balloon 176.
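The two-view selection rule described above (largest device-to-wall gap for the first view, clarity plus instrument visibility for the second) can be sketched as follows. This is a minimal illustration only; the field names `max_gap_mm`, `clarity`, and `shows_balloon` are hypothetical stand-ins for per-slice model outputs, not part of the disclosure.

```python
def select_views(views, clarity_threshold=0.8):
    """Pick (first, second) views from candidate slice descriptors.

    views: list of dicts with hypothetical keys 'max_gap_mm' (largest
    device-to-wall distance visible in the slice), 'clarity' (a visibility
    score in [0, 1]), and 'shows_balloon' (whether the instrument is visible).
    """
    # First view: the slice showing the largest device-to-wall gap,
    # i.e., the most severe malapposition.
    first = max(views, key=lambda v: v["max_gap_mm"])
    # Second view: a sufficiently clear slice that also shows the
    # instrument (e.g., the angioplasty balloon) used to re-position
    # the device; slices failing either test are bypassed.
    second = next((v for v in views
                   if v["shows_balloon"] and v["clarity"] >= clarity_threshold),
                  None)
    return first, second
```

A second view may be absent (`None`) when no slice both meets the clarity threshold and shows the instrument.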
[0029] The neural network 154 may further extract, from the VasoCT volume 152 (e.g., one or more images), image features indicating a position of the implantable device 168 relative to vessel walls 170 of the vasculature 182. Based on the image features, the neural network 154 generates a representation, referred to as the 3D distance map 156, indicating distances of locations of the implantable device 168 from the vessel walls 170. The neural network 154 selects viewing images 158 (e.g., slices of the 3D image at certain view angles) to show the malapposition based on the distances and an amount of visibility of the malapposed stent and the vessel wall (e.g., clarity). The neural network 154 may also identify view angles, stored as the optimal viewing angles 180, to obtain new views of the implantable device 168 and the vessel walls 170 based on the distances, and a visibility prediction of an instrument, such as the angioplasty balloon 176, that is to re-position and correct the malapposed implantable device 168. Examples further orient the imaging device 160 based on the predicted optimal view angles, and obtain, with the imaging device 160 that is oriented based on the view angles, new images of the implantable device 168 and the vessel walls 170 and/or vasculature 182. Doing so may facilitate further angioplasty operations and highlight the relevant sections of different images.
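One way to realize a distance map of the kind described above, assuming binary segmentation masks of the implantable device and the vessel wall are available, is a Euclidean distance transform. The helper name and the choice of a distance transform are illustrative assumptions, not the disclosed implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def device_to_wall_distance_map(device_mask, wall_mask,
                                voxel_spacing=(1.0, 1.0, 1.0)):
    """For each voxel on the implantable device, the distance (in physical
    units) to the nearest vessel-wall voxel; non-device voxels are NaN."""
    # EDT of the complement of the wall: the value at each voxel is the
    # distance to the nearest wall voxel, scaled by the voxel spacing.
    dist_to_wall = distance_transform_edt(~wall_mask, sampling=voxel_spacing)
    dmap = np.full(device_mask.shape, np.nan)
    dmap[device_mask] = dist_to_wall[device_mask]
    return dmap
```

Large values in the resulting map correspond to candidate malapposition sites; the largest value drives the first-view selection discussed above.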
[0030] In some examples, the neural network 154 may be trained in a supervised manner using image data that includes expert annotation of the best view angles in retrospective images. In some examples, training may be done in an unsupervised or semi-supervised manner. The loss function for the neural network 154 (e.g., for such an unsupervised learning example) may be minimized when the metrics associated with malapposition and vessel visibility (e.g., balloon visibility) are met. Such metrics to determine the optimal view angle may include a largest malapposition (e.g., location(s) on a stent that have the largest distance from the vessel wall), and a largest observable (non-obstructed) vessel cross-section around the coordinates of the stent location. As such, the optimization may begin at either of such view angles (e.g., created using the above suggested methods or other available methods) and search for a view angle where both (or more) of the above metrics are maximized. This optimization may then identify optimal viewing angles 180 that satisfy both the best malapposition visibility and the best angioplasty balloon visibility, which may be automatically transmitted to the imaging device 160 to execute imaging based on the angles. In some examples, the view angles (or system geometry) are saved during a post-operation angioplasty procedure, or displayed to the user to configure the imaging device 160.
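The search over view angles described above can be sketched as maximizing a combined score (the negative of the loss) over a grid of candidate angles. The weights, the grid search, and the callable `visibility_fn` are assumptions made for illustration; the disclosure does not fix a particular optimizer.

```python
def view_angle_score(malapposition_visibility, vessel_visibility,
                     w_malapposition=0.5, w_vessel=0.5):
    # The loss is minimized when both visibility metrics are high, so the
    # score is a weighted sum of the two metrics (weights are illustrative).
    return (w_malapposition * malapposition_visibility
            + w_vessel * vessel_visibility)

def search_optimal_angle(visibility_fn, angle_grid):
    """visibility_fn(angle) -> (malapposition_vis, vessel_vis), both in [0, 1].
    Returns the angle on the grid that maximizes the combined score."""
    return max(angle_grid, key=lambda a: view_angle_score(*visibility_fn(a)))
```

In practice the visibility metrics would come from the distance map and a balloon/vessel visibility prediction; here a toy function suffices to show the mechanics.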
[0031] In this example, the neural network 154 generates the 3D distance map 156 and provides viewing images 158 that emphasize (e.g., optimally show) the malapposition between the implantable device 168 and vessel walls 170. The implantable device 168 is shown in cross-hatch, and a relevant part of the vessel walls 170 is shown in black with dots. The vessel walls 170 include portions in white as well.
[0032] View angles 2 and 3 in Fig. 2 are examples of VasoCT slices that this system would automatically detect and show to a clinician to confirm the presence of a significant stent malapposition, as shown in example slices 172 and 174. In this example, significant malapposition is detected, and angioplasty is required. Thus, the neural network 154 further provides optimal viewing angles 180 to perform corrective angioplasty. The optimal viewing angles 180 can be accompanied by predicted fluoroscopy images such as image 186 in Fig. 2. The imaging device 160 may then be controlled based on the optimal viewing angles 180 to obtain images with the detectors 166, 164 positioned at the optimal viewing angles 180, achieving an image series such as image 186 in which the vessel, the balloon and the malapposition are as visible as possible.
[0033] In some examples, the loss function of the neural network 154 may be weighted to put more emphasis on producing accurate outputs around a proximal side of the implantable device 168, as opposed to the distal side of the implantable device 168. Proximal means "nearer the center of the body," as opposed to "distal," meaning "away from the center of the body." Devices are typically inserted from the proximal side of a condition (e.g., aneurysm) toward the distal side. The proximal side of the implantable device 168 usually carries a greater risk when the direction of blood flow is proximal to distal. Thus, the neural network 154 may be trained to focus on the areas of greater interest, and/or to investigate, focus on and analyze the proximal side more precisely. In this example, the neural network 154 may be trained to automatically apply stronger attention (e.g., a greater emphasis) to more deeply analyze the proximal side and/or detect whether a malapposition has occurred on the proximal side. Such examples may achieve the above by providing independent annotations of the proximal and distal sides of the implantable device 168 and optimizing the loss function to apply different weights to the different regions (e.g., weights for the proximal side that are greater than the weights for the distal side). For example, the neural network 154 may generate an accuracy metric that indicates how clearly the proximal side of the implantable device 168 will be shown from images at different angles. An angle associated with the largest accuracy metric (e.g., predicted to most clearly show the proximal side) and/or that meets a threshold may be selected and provided to the imaging device 160 to cause the imaging device 160 to obtain an image at the angle. Thus, examples may select a view based on an accuracy metric associated with the proximal side of the implantable device 168 meeting a threshold.
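A region-weighted loss of the kind described above can be sketched as a weighted mean of per-region squared errors. The specific weights, the squared-error form, and the function name are illustrative assumptions; the disclosure only requires that the proximal region receive a larger weight than the distal region.

```python
def weighted_malapposition_loss(errors_proximal, errors_distal,
                                w_proximal=2.0, w_distal=1.0):
    """Mean squared error with a larger weight on the proximal side,
    mirroring the proximal emphasis described above."""
    def mse(errs):
        # guard against an empty region annotation
        return sum(e * e for e in errs) / max(len(errs), 1)

    total = w_proximal + w_distal
    return (w_proximal * mse(errors_proximal)
            + w_distal * mse(errors_distal)) / total
```

With `w_proximal > w_distal`, the same prediction error costs more on the proximal side, steering training toward the higher-risk region.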
[0034] In another example, the neural network 154 may be designed to predict a stent malapposition during implantable device 168 and/or angioplasty balloon 176 deployment, and before the implantable device 168 is detached from a delivery device. In such an example, a 2D fluoroscopy image series may be included as an input to the neural network 154 running in real-time during stent deployment. The neural network 154 may be designed to learn the relationship between stent behavior during deployment and the outcomes as seen in post-deployment imaging, such as VasoCT. Therefore, by learning what kinds of device behavior during deployment would lead to malapposition, the output of the neural network 154 may warn the operator during deployment how likely it is that the implantable device 168 may not perfectly match the vessel walls 170. In this case, a provider may have a chance to reposition and fix stent placement of the implantable device 168 before detaching the implantable device 168 from a delivery device. Some examples of the neural network 154 may provide a notification that further angioplasty should be performed even before any other imaging, such as VasoCT, is performed for quality control. That is, in some examples the neural network 154 may identify a likelihood that the implantable device 168 will fail to fit the vessel walls perfectly, determine that the likelihood meets a threshold, and provide a warning to an operator based on the likelihood meeting the threshold.
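The thresholded warning described above reduces to a simple check on the predicted likelihood. The threshold value and the message wording below are illustrative placeholders, not values fixed by the disclosure.

```python
def malapposition_warning(likelihood, threshold=0.5):
    """Return an operator-facing warning string when the predicted
    malapposition likelihood meets the threshold, else None."""
    if likelihood >= threshold:
        return ("Warning: predicted malapposition likelihood "
                f"{likelihood:.0%} meets the threshold; consider "
                "repositioning before detaching the device.")
    return None
```

In a real-time system this check would run on each model update during deployment, so the operator is alerted while repositioning is still possible.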
[0035] In some examples, the neural network 154 additionally outputs suggestions on where the angioplasty balloon 176 may be placed and deployed to increase the probability of the angioplasty having a positive outcome. In some examples, the neural network 154 may provide such an update when the neural network 154 determines that such an update will provide a better outcome and is more likely to result in apposition. In some examples, a second neural network separate from the neural network 154 may have an input that is an output of the neural network 154 (e.g., whether angioplasty is necessary, view angles that should be used during angioplasty, etc.) and outputs landmarks for where the distal end of the angioplasty balloon 176 may be placed before deployment or shows an outline of a deployed balloon that may be followed by the interventionalists. Prior successful angioplasty procedures may be used to generate ground truth to train the second neural network. For example, the second neural network may provide a location to position an angioplasty balloon 176 based on an angle associated with the viewing images 158, where the angioplasty balloon 176 is associated with the implantable device 168. The angle may be the imaging angle for the detectors 164, 166 to obtain the viewing images 158.
[0036] FIG. 3 illustrates a method 300 to identify and obtain images during a procedure. One or more aspects of method 300 may be implemented as part of and/or in conjunction with the enhanced imaging process 150 (FIG. 1). Method 300 may be implemented in a computing device, computing system (e.g., hardware, configurable logic, fixed-function logic hardware, at least one computer readable storage medium comprising a set of instructions for execution, etc.).
[0037] Illustrated processing block 310 receives one or more images of an implantable device within a vasculature. Illustrated processing block 320 extracts, from the one or more images, image features indicating a position of the implantable device relative to a vessel wall of the vasculature. Illustrated processing block 330, based on the image features, generates a representation indicating distances of locations of the implantable device from the vessel wall. Illustrated processing block 340 selects at least one view to observe the vasculature based on the distances and an amount of visibility of the vessel wall around the locations.
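Blocks 310 through 340 can be sketched as a simple pipeline, with each stage supplied as a callable. The function and parameter names are illustrative; the three callables stand in for the model components described above.

```python
def method_300(images, extract_features, build_distance_map, select_view):
    """Skeleton of processing blocks 310-340 (names are illustrative)."""
    features = extract_features(images)           # block 320
    distance_map = build_distance_map(features)   # block 330
    return select_view(distance_map)              # block 340
```

Receiving the images (block 310) is the caller's responsibility; the pipeline then threads them through feature extraction, distance-map generation, and view selection.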
[0038] In some examples, the method 300 selects a first view to be part of the at least one view based on the first view corresponding to a largest distance of the distances, and selects a second view to be part of the at least one view based on an identification that a clarity measurement of the second view meets a clarity threshold, where the second view includes an instrument to position the implantable device. In some examples, processing blocks 330 and 340 include executing a machine learning model that generates the representation and selects the at least one view. In some examples, the method 300 includes identifying a likelihood that the implantable device will fail based on the representation, determining that the likelihood meets a threshold, and providing a warning to an operator based on the likelihood meeting the threshold. In some examples, processing block 340 further comprises selecting the at least one view based on an accuracy metric associated with a proximal side of the implantable device meeting a threshold.
[0039] In some examples, the method 300 includes providing a location to position a balloon based on an angle associated with the at least one view, where the balloon is associated with the implantable device, identifying view angles to obtain new views of the implantable device and the vasculature based on the distances, and a visibility measurement of an instrument that is to position the implantable device, orienting an imaging system based on the view angles, and obtaining, with the imaging system that is oriented based on the view angles, new images of the implantable device and the vasculature.
[0040] FIG. 4 shows a more detailed example of an imaging architecture 650. The imaging architecture 650 may be readily implemented in conjunction with the enhanced imaging process 150 (FIG. 1) and/or method 300 (FIG. 3).
[0041] In the illustrated example, the imaging architecture 650 may include a display 654. The display 654 may display images to an operator.
[0042] A neural network 662 may identify relevant images, 3D distance maps, determine view angles to obtain further images, etc. The neural network 662 may correspond to the neural network 154 (FIG. 1). The neural network 662 may include a processor 662a (e.g., embedded controller, central processing unit/CPU) and a memory 662b (e.g., non-volatile memory/NVM and/or volatile memory). The memory 662b contains a set of instructions, which when executed by the processor 662a, cause the neural network 662 to operate as described herein.
[0043] An imaging device 664 may receive the view angles from the neural network 662, position cameras of the imaging device 664 to the view angles and obtain images. The imaging device 664 may correspond to the imaging device 160 (FIG. 1). The imaging device 664 may include a processor 664a (e.g., embedded controller, central processing unit/CPU) and a memory 664b (e.g., non-volatile memory/NVM and/or volatile memory). The memory 664b contains a set of instructions, which when executed by the processor 664a, cause the imaging device 664 to operate as described herein.
[0044] Further, the disclosure comprises additional examples as detailed in the Examples below.
[0045] Example 1 includes a system, comprising a processor, and a memory containing a set of instructions, which when executed by the processor, cause the system to receive one or more images of an implantable device within a vasculature, extract, from the one or more images, image features indicating a position of the implantable device relative to a vessel wall of the vasculature, and select at least one view to observe a largest distance of distances between the implantable device and the vessel wall.
[0046] Example 2 includes the system of Example 1, wherein the set of instructions, which when executed by the processor, cause the system to select a first view to be part of the at least one view based on an identification that a clarity measurement of one or more of an angioplasty balloon, the vessel wall and a malapposition of the first view meets a clarity threshold.
[0047] Example 3 includes the system of Example 1, wherein to extract the image features and select the at least one view, the set of instructions, which when executed by the processor, cause the system to execute a machine learning model that is to extract the image features and select the at least one view.
[0048] Example 4 includes the system of Example 1, the set of instructions, which when executed by the processor, cause the system to identify a likelihood, during device deployment, that the implantable device will fail based on the image features, determine that the likelihood meets a threshold, and provide a warning to an operator based on the likelihood meeting the threshold.
[0049] Example 5 includes the system of Example 1, wherein to select the at least one view, the set of instructions, which when executed by the processor, cause the system to select the at least one view based on an accuracy metric associated with a proximal side of the implantable device meeting a threshold.
[0050] Example 6 includes the system of Example 1, wherein the set of instructions, which when executed by the processor, cause the system to provide a location to position an angioplasty balloon based on an angle associated with the at least one view, wherein the angioplasty balloon is associated with the implantable device.
[0051] Example 7 includes the system of Example 1, wherein the set of instructions, which when executed by the processor, cause the system to identify view angles to obtain new views of the implantable device and the vasculature based on the distances, and a visibility prediction of an angioplasty balloon that is to position the implantable device, orient an imaging system based on the view angles, and obtain, with the imaging system that is oriented based on the view angles, new images of the angioplasty balloon, the implantable device and the vasculature.
[0052] Example 8 includes at least one computer readable storage medium comprising a set of instructions, which when executed by a computing device, cause the computing device to receive one or more images of an implantable device within a vasculature, extract, from the one or more images, image features indicating a position of the implantable device relative to a vessel wall of the vasculature, and select at least one view to observe a largest distance of distances between the implantable device and the vessel wall.
[0053] Example 9 includes the at least one computer readable storage medium of Example 8, wherein the instructions, when executed, cause the computing device to select a first view to be part of the at least one view based on an identification that a clarity measurement of one or more of an angioplasty balloon, the vessel wall and a malapposition of the first view meets a clarity threshold.
[0054] Example 10 includes the at least one computer readable storage medium of Example 8, wherein to extract the image features and select the at least one view, the instructions, when executed, cause the computing device to execute a machine learning model that is to extract the image features and select the at least one view.
[0055] Example 11 includes the at least one computer readable storage medium of Example 8, wherein the instructions, when executed, cause the computing device to identify a likelihood, during device deployment, that the implantable device will fail based on the image features, determine that the likelihood meets a threshold, and provide a warning to an operator based on the likelihood meeting the threshold.
[0056] Example 12 includes the at least one computer readable storage medium of Example 8, wherein to select the at least one view, the instructions, when executed, cause the computing device to select the at least one view based on an accuracy metric associated with a proximal side of the implantable device meeting a threshold.
[0057] Example 13 includes the at least one computer readable storage medium of Example 8, wherein the instructions, when executed, cause the computing device to provide a location to position an angioplasty balloon based on an angle associated with the at least one view, wherein the angioplasty balloon is associated with the implantable device.
[0058] Example 14 includes the at least one computer readable storage medium of Example 8, wherein the instructions, when executed, cause the computing device to identify view angles to obtain new views of the implantable device and the vasculature based on the distances, and a visibility prediction of an angioplasty balloon that is to position the implantable device, orient an imaging system based on the view angles, and obtain, with the imaging system that is oriented based on the view angles, new images of the angioplasty balloon, the implantable device and the vasculature.
[0059] Example 15 includes a method comprising receiving one or more images of an implantable device within a vasculature, extracting, from the one or more images, image features indicating a position of the implantable device relative to a vessel wall of the vasculature, and selecting at least one view to observe a largest distance of distances between the implantable device and the vessel wall.
[0060] Example 16 includes the method of Example 15, further comprising selecting a first view to be part of the at least one view based on an identification that a clarity measurement of one or more of an angioplasty balloon, the vessel wall and a malapposition of the first view meets a clarity threshold.
[0061] Example 17 includes the method of Example 15, wherein the extracting and the selecting further comprise executing a machine learning model that extracts the image features and selects the at least one view.
[0062] Example 18 includes the method of Example 15, further comprising identifying a likelihood, during device deployment, that the implantable device will fail based on the image features, determining that the likelihood meets a threshold, and providing a warning to an operator based on the likelihood meeting the threshold.
[0063] Example 19 includes the method of Example 15, wherein the selecting further comprises selecting the at least one view based on an accuracy metric associated with a proximal side of the implantable device meeting a threshold.
[0064] Example 20 includes the method of Example 15, further comprising providing a location to position an angioplasty balloon based on an angle associated with the at least one view, wherein the angioplasty balloon is associated with the implantable device, identifying view angles to obtain new views of the implantable device and the vasculature based on the distances, and a visibility prediction of the angioplasty balloon, orienting an imaging system based on the view angles, and obtaining, with the imaging system that is oriented based on the view angles, new images of the angioplasty balloon, the implantable device and the vasculature.
[0065] The above described methods and systems may be readily combined together if desired. The term “coupled” may be used herein to refer to any type of relationship, direct or indirect, between the components in question, and may apply to electrical, mechanical, fluid, optical, electromagnetic, electromechanical or other connections. In addition, the terms “first”, “second”, etc. may be used herein only to facilitate discussion, and carry no particular temporal or chronological significance unless otherwise indicated.
[0066] Those skilled in the art will appreciate from the foregoing description that the broad techniques of the embodiments of the present disclosure can be implemented in a variety of forms. Therefore, while the embodiments of this disclosure have been described in connection with particular examples thereof, the true scope of the embodiments of the disclosure should not be so limited since other modifications will become apparent to the skilled practitioner upon a study of the drawings, specification, and following claims.

Claims

We claim:
1. A system, comprising: a processor in communication with memory, the processor configured to: receive one or more images of an implantable device within a vasculature; extract, from the one or more images, at least one image feature indicating a position of the implantable device relative to a vessel wall of the vasculature; and select, based on the at least one image feature, at least one view for optimally observing a medical procedure to deploy the implantable device to the vessel wall.
2. The system of claim 1, wherein the processor is further configured to: select a first view to be part of the at least one view based on identification that a clarity measurement of one or more of an angioplasty balloon, the vessel wall, and a malapposition of the first view meets a clarity threshold.
3. The system of claim 1, wherein, to extract the at least one image feature and select the at least one view, the processor is further configured to: execute a machine learning model configured to receive the at least one image feature and predict the at least one view for optimally observing the medical procedure to deploy the implantable device to the vessel wall.
4. The system of claim 1, wherein the processor is further configured to: determine a likelihood of failure of deployment of the implantable device based on the at least one image feature; and provide a warning based on the likelihood meeting a threshold.
5. The system of claim 1, wherein, to select the at least one view, the processor is further configured to: select the at least one view based on an accuracy metric associated with the implantable device meeting a threshold.
6. The system of claim 1, wherein: an angioplasty balloon is associated with the implantable device; and the processor is further configured to provide a location to position the angioplasty balloon based on an angle associated with the at least one view.
7. The system of claim 1, wherein the processor is further configured to: identify view angles to obtain new views of the implantable device based on a distance between the implantable device and the vessel wall and a visibility prediction of an angioplasty balloon; orient an imaging system based on the view angles; and obtain, with the oriented imaging system, new images of the angioplasty balloon, the implantable device, and the vasculature.
8. A non-transitory computer-readable storage medium having stored thereon a computer program comprising instructions, which, when executed by a processor, cause the processor to: receive one or more images of an implantable device within a vasculature; extract, from the one or more images, at least one image feature indicating a position of the implantable device relative to a vessel wall of the vasculature; and select, based on the at least one image feature, at least one view for optimally observing a medical procedure to deploy the implantable device to the vessel wall.
9. The non-transitory computer-readable storage medium of claim 8, wherein the instructions, when executed by the processor, further cause the processor to: select a first view to be part of the at least one view based on identification that a clarity measurement of one or more of an angioplasty balloon, the vessel wall, and a malapposition of the first view meets a clarity threshold.
10. The non-transitory computer-readable storage medium of claim 8, wherein the instructions, when executed by the processor, further cause the processor to: execute a machine learning model configured to receive the at least one image feature and predict the at least one view for optimally observing the medical procedure to deploy the implantable device to the vessel wall.
11. The non-transitory computer-readable storage medium of claim 8, wherein the instructions, when executed by the processor, further cause the processor to: determine a likelihood of failure of deployment of the implantable device based on the at least one image feature; and provide a warning based on the likelihood meeting a threshold.
12. The non-transitory computer-readable storage medium of claim 8, wherein to select the at least one view, the instructions, when executed by the processor, further cause the processor to: select the at least one view based on an accuracy metric associated with the implantable device meeting a threshold.
13. The non-transitory computer-readable storage medium of claim 8, wherein the instructions, when executed by the processor, further cause the processor to: provide a location to position an angioplasty balloon, associated with the implantable device, based on an angle associated with the at least one view.
14. The non-transitory computer-readable storage medium of claim 8, wherein the instructions, when executed by the processor, further cause the processor to: identify view angles to obtain new views of the implantable device based on a distance between the implantable device and the vessel wall and a visibility prediction of an angioplasty balloon; orient an imaging system based on the view angles; and obtain, with the oriented imaging system, new images of the angioplasty balloon, the implantable device, and the vasculature.
15. A method comprising: receiving one or more images of an implantable device within a vasculature; extracting, from the one or more images, at least one image feature indicating a position of the implantable device relative to a vessel wall of the vasculature; and selecting, based on the at least one image feature, at least one view for optimally observing a medical procedure to deploy the implantable device to the vessel wall.
16. The method of claim 15, further comprising: selecting a first view to be part of the at least one view based on identification that a clarity measurement of one or more of an angioplasty balloon, the vessel wall, and a malapposition of the first view meets a clarity threshold.
17. The method of claim 15, further comprising: executing a machine learning model configured to receive the at least one image feature and predict the at least one view for optimally observing the medical procedure to deploy the implantable device to the vessel wall.
18. The method of claim 15, further comprising: determining a likelihood of failure of deployment of the implantable device based on the at least one image feature; and providing a warning based on the likelihood meeting a threshold.
19. The method of claim 15, further comprising: selecting the at least one view based on an accuracy metric associated with the implantable device meeting a threshold.
20. The method of claim 15, further comprising: providing a location to position an angioplasty balloon, associated with the implantable device, based on an angle associated with the at least one view; identifying one or more view angles to obtain one or more new views of the implantable device based on a distance between the implantable device and the vessel wall and a visibility prediction of the angioplasty balloon; orienting an imaging system based on the one or more view angles; and obtaining, with the oriented imaging system, one or more new images of the angioplasty balloon, the implantable device, and the vasculature.
PCT/EP2024/080702 2023-11-06 2024-10-30 Angioplasty imaging, analysis, and procedure facilitation Pending WO2025098851A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363596323P 2023-11-06 2023-11-06
US63/596,323 2023-11-06

Publications (1)

Publication Number Publication Date
WO2025098851A1 true WO2025098851A1 (en) 2025-05-15


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200142575A1 (en) * 2015-07-25 2020-05-07 Lightlab Imaging, Inc. Intravascular Data Visualization and Interface Systems and Methods
EP2964089B1 (en) * 2013-03-08 2021-11-03 Lightlab Imaging, Inc. Stent visualization and malapposition detection systems, devices, and methods
EP3361933B1 (en) * 2015-10-13 2023-03-01 Lightlab Imaging, Inc. Intravascular imaging system and methods to determine cut plane view angle of side branch
EP4179998A1 (en) * 2021-11-10 2023-05-17 Koninklijke Philips N.V. Control of robotic endovascular devices to align to target vessels with fluoroscopic feedback
EP3229695B1 (en) * 2014-12-10 2023-07-19 Koninklijke Philips N.V. Systems for in-stent restenosis prediction
Similar Documents

Publication Publication Date Title
US11694330B2 (en) Medical image processing apparatus, system, and method
US11842439B2 (en) Method and system for 3D reconstruction of coronary artery based on vascular branch registration
EP3657437B1 (en) Automatic detection and quantification of the aorta from medical images
JP2022169579A (en) Diagnostically useful results in real time
US20140364720A1 (en) Systems and methods for interactive magnetic resonance imaging
EP2757528A1 (en) Method and apparatus for tracking objects in a target area of a moving organ
US20190029519A1 (en) Enhanced personalized evaluation of coronary artery disease using an integration of multiple medical imaging techniques
CN110490835B (en) Method for automatic inspection of superimposed images, computing unit and medical imaging device
US11335017B2 (en) Registration facility, method for registering, corresponding computer program and computer-readable storage medium
US20160066795A1 (en) Stenosis therapy planning
US20230000364A1 (en) Methods and systems for assessing a vasculature
US20150260819A1 (en) Transfer of validated cad training data to amended mr contrast levels
US11783477B2 (en) Medical image process apparatus, medical image learning method, and medical image process method
US11657519B2 (en) Method for deformation correction
FR3037785A1 (en) METHOD AND SYSTEM FOR GUIDING A ENDOVASCULAR TOOL IN VASCULAR STRUCTURES
KR102103281B1 (en) Ai based assistance diagnosis system for diagnosing cerebrovascular disease
KR20200056105A (en) Method and apparatus for calculating coronary artery calcium scoring
Colley et al. A methodology for non-invasive 3-d surveillance of arteriovenous fistulae using freehand ultrasound
US12207882B2 (en) Methods and systems for planning a surgical procedure
WO2025098851A1 (en) Angioplasty imaging, analysis, and procedure facilitation
CN118634033B (en) Method, system and storage medium for guiding interventional procedures
Thamm et al. An algorithm for the labeling and interactive visualization of the cerebrovascular system of ischemic strokes
US9642535B2 (en) Medical image processing apparatus and medical image processing method
EP4281929B1 (en) Vessel shape
CN113229936A (en) Method and system for improving liver intervention target positioning accuracy

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24799225

Country of ref document: EP

Kind code of ref document: A1