WO2025224427A1 - Automated diagnosis aid - Google Patents
Info
- Publication number
- WO2025224427A1 (PCT/GB2025/050816)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- model
- cae
- reinforcement learning
- mesh
- agent
- Prior art date
- Legal status
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/50—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30048—Heart; Cardiac
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2211/00—Image generation
- G06T2211/40—Computed tomography
- G06T2211/441—AI-based methods, deep learning or artificial neural networks
Abstract
A method (100) of providing an automated diagnostic aid. The method comprises the steps of providing scan data comprising image data, segmenting the scan data (104) to identify different regions of the scanned organ, meshing the segmented scan data (106) to provide a meshed CAE model of the at least part of the human or animal body organ, and applying properties and boundary conditions (108) to the meshed CAE model. The meshed CAE model is solved (110) and used as a diagnostic tool. At least one of the steps of segmenting, meshing and applying properties and boundary conditions is carried out by a reinforcement learning trained agent comprising an artificial neural network.
Description
Automated diagnosis aid
Technical Field
[0001 ] The present application is concerned with an automated diagnosis aid. More specifically, the present invention is concerned with a computer implemented diagnosis aid for diagnosis of conditions within organs with fluid passages or fluid chambers (hereinafter referred to as 'fluid openings').
Background Art
[0002] There are several organs in the human and animal body that contain fluid openings, and / or have fluid passages connecting them to other parts of the body. One such organ is the human heart. As well as having several chambers itself, the human heart has muscles fed by the coronary arteries. Occlusion of these arteries is the cause of coronary heart disease, and can be catastrophic for the organ, starving the muscles of oxygen and causing harmful or fatal cardiac events.
[0003] Coronary heart disease (CHD) is a leading cause of death worldwide: on average, CHD affects more than 1/6th of the world's population (World Health Organisation).
[0004] Occlusion can be caused by e.g. atherosclerosis, arteriosclerosis, or arteriolosclerosis. All of these conditions are caused by the build-up of plaques within the arteries by, for example, deposition of cholesterol causing stenosis.
[0005] There are various metrics for measuring the degree of occlusion of the coronary arteries.
• One such metric is the fractional flow reserve (FFR) which is the ratio of blood pressure downstream of the occlusion as compared to the pressure upstream thereof. It is expressed as a decimal fraction, with a figure less than one indicative of an occlusion.
• Another metric is the instantaneous wave-free ratio or instant flow reserve (iFR). The iFR is the ratio of the pressure downstream of the occlusion to that of the aorta (which feeds the left and right coronary arteries). It is measured over a wave-free period of diastole. Again, it is expressed as a decimal fraction, with a figure less than one indicative of an occlusion.
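By way of illustration only, both metrics reduce to a simple ratio of two measured pressures. The short Python sketch below shows the arithmetic; the pressure values used in the example are purely illustrative and are not taken from this document.

```python
def fractional_flow_reserve(p_distal: float, p_aortic: float) -> float:
    """FFR: pressure downstream of the occlusion divided by the upstream
    (aortic) pressure. A value below one is indicative of an occlusion."""
    return p_distal / p_aortic


def instantaneous_wave_free_ratio(p_distal_wf: float, p_aortic_wf: float) -> float:
    """iFR: the same ratio, but taken over the wave-free period of diastole."""
    return p_distal_wf / p_aortic_wf


# Illustration: 72 mmHg downstream of a stenosis against 95 mmHg in the aorta
# gives a ratio of roughly 0.76, i.e. below one and suggestive of an occlusion.
print(fractional_flow_reserve(72.0, 95.0))
```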
[0006] Clinicians typically use FFR to assess the presence and severity of an occlusion- this is the "gold standard" at the time of writing. iFR is less common, but is increasingly being recognised as an equally good, if not better, assessment tool.
[0007] In the prior art, measurement is undertaken by angiogram. Angiograms are invasive procedures but are currently the gold standard approach. Each angiogram costs the UK NHS circa £2000.
[0008] EP3399501A1 discloses multi-scale deep reinforcement machine learning for N-dimensional segmentation in medical imaging. The method utilises patient scan images (such as CT or MR). An AI agent is applied to the scan dataset. The boundaries of the object represented in the medical dataset are found by iteratively evolving a shape of the object using the learned policy and a shape descriptor.
[0009] EP3635683B1 discloses a computer-implemented method of anatomic structure segmentation in image analysis. EP'683 discusses the limitations of traditional convolutional neural networks (CNNs) and in particular their inability to 'understand' that components to be segmented may not have holes. The generation of models with holes is a symptom of pixel / voxel scale segmentation that EP'683 attempts to deal with by creating CNNs with sub-pixel / sub-voxel segmentation. The CNNs are supervised / semi-supervised models that utilise regression to achieve sub-pixel / sub-voxel accuracy.
[0010] US9167974B2 provides a non-invasive assessment of coronary artery stenosis comprising the steps of determining a reduced order model to define a pressure drop across a stenosis as a function of blood flow velocity. Non-invasive clinical measurements are used to determine patient specific boundary conditions and to thereby perform an assessment of the stenosis.
[0011] US10052158B2 discloses a method of image processing to determine FFR through a stenosis by creating a CFD simulation using scan data.
[0012] It is an aim of the invention to provide an improved method of diagnosing heart conditions, and any conditions relating to fluid flow through, to or from organs.
Summary of Invention
[0013] According to a first aspect of the present invention there is provided an automated diagnostic method comprising the steps of:
providing scan data comprising image data from at least one scan of at least part of a human or animal body organ having at least one fluid opening; segmenting the scan data to identify different regions of the scanned organ; meshing the segmented scan data to provide a meshed CAE model of the at least part of the human or animal body organ; applying properties and boundary conditions to the meshed CAE model; selecting the CAE model characteristics and solver; solving the meshed CAE model; analysing the solved CAE model to determine at least one diagnostic metric; wherein at least one of the steps of: segmenting, meshing, applying properties and boundary conditions; selecting the CAE model characteristics; and, analysing the solved CAE model is carried out by a reinforcement learning trained agent comprising an artificial neural network.
[0014] Advantageously the present invention utilises an RL-trained agent to undertake at least one step of preparing a model of the organ, which can be analysed to determine at least one diagnostic metric which can then be used for diagnosis of a condition. Advantageously, the ability to automatically create and run such models based on non-invasive scan data avoids the cost and complications of the invasive procedures known in the art.
[0015] Reinforcement Learning (RL) is a field of machine learning (ML) where an agent learns to make decisions by interacting with an environment. Unlike supervised learning, which relies on a labelled dataset to make predictions, or unsupervised learning, which seeks patterns in unlabelled data, RL is about decision-making. Self-improvement through trial-and-error sets RL apart from other ML methods. The agent learns optimal behaviour through its own experiences, making it well-suited for complex problems.
[0016] Even more specifically, the present invention utilises an actor-critic RL-trained agent. By 'agent' we mean a software or hardware implemented artificial neural network (ANN).
[0017] Preferably the step of providing scan data comprises the step of providing a plurality of offset cross-sectional scans and the step of segmenting comprises defining the environment as a three-dimensional space containing the cross-sectional scans.
[0018] Preferably the step of segmenting is carried out by a reinforcement learning trained agent, the agent trained by: positioning a pointer at a voxel within the environment; undertaking an action by moving the pointer in the environment; determining a reward based on the new position of the pointer.
[0019] Preferably the reward is based on a greyscale value of a plurality of voxels proximate the pointer.
[0020] Preferably the step of meshing is carried out by a reinforcement learning trained agent, the agent trained by: providing a basic mesh comprising a plurality of nodes; undertaking an action by adding or moving nodes to create a modified mesh; undertaking a CAE simulation based on the modified mesh; determining a reward based on the results of the CAE simulation.
[0021] Preferably the reward is determined based on the independence of the mesh to the results of the CAE simulation.
[0022] In one embodiment, mesh independence from a previous iteration is penalised.
[0023] In another embodiment, mesh independence from a previously validated CAE model is rewarded.
[0024] Preferably the step of applying properties and / or boundary conditions is carried out by a reinforcement learning trained agent, the agent trained by:
applying a set of properties and / or boundary conditions to the meshed CAE model; undertaking a CAE simulation of the model; determining a reward based on the proximity of the results of the CAE simulation compared to an independently calculated diagnostic metric.
[0025] Preferably the meshed CAE model is an axisymmetric model.
[0026] Preferably a correction factor is applied to the results of the solved CAE model to account for asymmetry.
[0027] According to a second aspect there is provided a method of training a reinforcement learning model for use in a medical diagnostic procedure, the method comprising the steps of: training the reinforcement learning model to carry out model segmentation by carrying out the following steps for a plurality of patient scan data: providing a three dimensional environment comprising patient scan data; positioning a pointer at a voxel within the environment; undertaking an action by moving the pointer in the environment; determining a reward based on the new position of the pointer.
[0028] Preferably the reward is based on a greyscale value of a plurality of voxels proximate the pointer.
[0029] According to a third aspect there is provided a method of training a reinforcement learning model for use in a medical diagnostic procedure, the method comprising the steps of: training the reinforcement learning model to carry out model meshing by carrying out the following steps for a plurality of segmented models: providing a basic mesh comprising a plurality of nodes; undertaking an action by adding or moving nodes to create a modified mesh;
undertaking a CAE simulation based on the modified mesh; determining a reward based on the results of the CAE simulation.
[0030] Preferably the reward is determined based on the independence of the mesh to the results of the CAE simulation.
[0031] Preferably mesh independence from a previous iteration is penalised.
[0032] Alternatively mesh independence from a previously validated CAE model is rewarded.
[0033] According to a fourth aspect there is provided a method of training a reinforcement learning model for use in a medical diagnostic procedure, the method comprising the steps of: training the reinforcement learning model to select properties and / or boundary conditions of a meshed model by carrying out the following steps for a plurality of meshed models: applying a set of properties and / or boundary conditions to the meshed CAE model; undertaking a CAE simulation of the model; determining a reward based on the proximity of the results of the CAE simulation compared to an independently calculated diagnostic metric.
[0034] Preferably the meshed CAE model is an axisymmetric model.
[0035] Preferably a correction factor is applied to the results of the solved CAE model to account for asymmetry.
[0036] The invention also provides a computer implemented method according to the first aspect, a data processing apparatus comprising means for carrying out the method, a computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method and a computer-readable data carrier having stored thereon the said computer program.
Brief Description of Drawings
[0037] An embodiment of the present invention will now be described with reference to the following figures, in which:
FIGURE 1 is a flow chart of a first generic method according to the present invention;
FIGURES 2a to 2c are flow charts of training methods for segmentation agents for use in the method of Figure 1 ;
FIGURES 3a to 3b are flow charts of training methods for meshing agents for use in the method of Figure 1 ;
FIGURES 4a to 4b are flow charts of training methods for solving agents for use in the method of Figure 1 ; and,
Figures 5a to 5c show various hardware on which the present invention may be implemented.
Description of the first embodiment
[0038] Referring to Figure 1, a generic method 100 for computer implemented diagnosis of a specific patient's heart is shown. It will be understood from the disclosure below that the present invention (namely the use of RL-trained models) can be implemented in one or more steps of the method 100.
[0039] The method 100 may be carried out on a local computer 10 (Figure 5a) operated by a user 12. Alternatively the method 100 may be carried out on a plurality of networked computers 14 (Figure 5b) such as a cloud computing system 16 (Figure 5c).
[0040] In step 102, image data in the form of 2D slices or 3D scan data is provided. The data may be in many formats, but in this embodiment is 2D slice DICOM image files from a CT scan of the specific patient's heart. Preferably, for obtaining FFR / iFR, contrast CT (particularly coronary computed tomography angiography / CCTA) is used to highlight the coronary arteries, which are very small in scale with respect to the overall scan.
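As a purely illustrative sketch of this step, and assuming the pydicom and numpy Python libraries (neither of which is mandated by the method), the 2D DICOM slices could be stacked into a single greyscale volume as follows:

```python
from pathlib import Path

import numpy as np
import pydicom


def load_ct_volume(dicom_dir: str) -> tuple[np.ndarray, tuple[float, float, float]]:
    """Stack a folder of 2D DICOM slices into a 3D greyscale array and
    return it together with the (slice, row, column) voxel spacing."""
    slices = [pydicom.dcmread(path) for path in Path(dicom_dir).glob("*.dcm")]
    # Order slices by their physical position along the scan axis.
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array for s in slices]).astype(np.float32)
    row_spacing, col_spacing = (float(v) for v in slices[0].PixelSpacing)
    slice_spacing = abs(float(slices[1].ImagePositionPatient[2])
                        - float(slices[0].ImagePositionPatient[2]))
    return volume, (slice_spacing, row_spacing, col_spacing)
```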
[0041] In step 104, the image data is segmented- i.e. separated into regions of pixels representative of different physical areas or structures (such as tissue types) and in particular the fluid domain.
[0042] In step 106, the image data is meshed. This involves taking the regions defined in the segmentation step and separating them into many elements or cells. Solid tissue may be split into many simple elements, and the fluid domain into many simple fluid cells.
[0043] In step 108, properties are conferred on the materials making up the model, and boundary conditions applied.
[0044] In step 110, the model is run to obtain a pressure solution.
[0045] In step 112, diagnostic metrics are obtained from the model- for example FFR and iFR as discussed above.
[0046] In step 114, the metrics are interpreted by a clinician to determine whether medical intervention is required, and if so what type of intervention (e.g. use and selection of stents, surgery etc).
[0047] In the present invention, agents trained by reinforcement learning ("RL agents") are used in one or more of the steps 104, 106, 108, 110, 112. Such agents may be used in each of the steps, or a subset thereof depending on availability, resources and commercial factors. To be more specific:
• In step 104, an RL-trained model may be used to carry out segmentation- i.e. to generate a two- or three-dimensional model of the biological structure in question based on patient specific scan data. The intention is to create an accurate model;
• In step 106, an RL-trained model may be used to carry out meshing of the segmented geometry. The intention is to create a mesh that is a suitable balance between accuracy and computational efficiency;
• In step 108, an RL-trained model may be used to select parameters and / or features of the CFD solver. These may include the fluid / solid models used, material properties, boundary conditions and so on;
• In step 110, an RL-trained model may be used to select an appropriate CFD solver;
• In step 112, an RL-trained model may be used to interpret the results and offer an intervention option to a clinician.
[0048] The following description will provide examples of how RL agents may be trained to perform each of the steps. A brief overview of reinforcement learning is provided first.
Reinforcement learning
[0049] RL agents comprise software or hardware based artificial neural networks (ANNs). RL agents are distinguished from supervised or unsupervised learning models by the fact that they do not require labelled datasets to learn. Instead, the agent, during the learning process, takes an action and receives a reward for performing the action and influencing the state of an environment.
[0050] The key features of RL trained models are the agent and the environment in which the agent acts. The agent takes action, which leads to a change in the state of the environment. The agent then perceives a reward based on that state. The agent seeks to maximise cumulative reward.
[0051] Many different types of RL trained models are available at the time of writing. These include model-free RL, such as policy optimisation and Q-learning, as well as model-based RL.
[0052] In the following examples, an actor-critic RL methodology is used to train the models, but other methods are envisaged.
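For orientation only, the sketch below shows a generic single-step actor-critic update in Python using PyTorch. The network sizes, discount factor and loss weighting are illustrative assumptions and do not describe the specific agents trained in the embodiments.

```python
import torch
import torch.nn as nn


class ActorCritic(nn.Module):
    """Shared body with a policy head (actor) and a value head (critic)."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.policy_head = nn.Linear(hidden, n_actions)  # action logits
        self.value_head = nn.Linear(hidden, 1)           # state-value estimate

    def forward(self, obs: torch.Tensor):
        h = self.body(obs)
        return self.policy_head(h), self.value_head(h)


def one_step_update(net, optimiser, obs, action, reward, next_obs, done, gamma=0.99):
    """Single actor-critic update from one transition (batched tensors)."""
    logits, value = net(obs)
    value = value.squeeze(-1)
    with torch.no_grad():
        _, next_value = net(next_obs)
        target = reward + gamma * next_value.squeeze(-1) * (1.0 - done)
    advantage = target - value
    log_prob = torch.distributions.Categorical(logits=logits).log_prob(action)
    loss = (-(log_prob * advantage.detach()) + advantage.pow(2)).mean()
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```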
RL controlled segmentation
[0053] In Figure 2a, an RL agent for undertaking the segmentation step 104 is trained. The intention is to produce a model that can be fed the image data from step 102 and to produce a segmented geometric model for meshing in step 106 within the process of Figure 1 .
[0054] Figure 2a shows a generic segmentation RL training process according to the present invention, whereas Figures 2b and 2c show preferred embodiments in the form of workflows.
[0055] Referring to Figure 2a, at step 202 image data is provided (for example CT or MRI data). At step 204, a three dimensional data array is created from the image data. The array makes up the agent's environment. In the case of 2D 'slices', these are arranged in three dimensional space separated by the actual physical spacing at which they were captured.
Spacing may vary between the images depending on the technique used, although more commonly each slice is a voxel in width, the stack thus creating a 3D voxel environment. Typically there are several tens of slices for a coronary artery scan, and often over one hundred such images. The stack of slices within the 3D space forms the agent's operating environment.
[0056] At step 206, an element of size 1 voxel called the 'pointer' is placed within the environment generated in step 204. Positioning of the initial location of the pointer may be fixed or random. Either way, the starting point is positioned within the coronary lumen based on previous scans.
[0057] At step 208, an agent (or group of agents) controls the pointer to carry out segmentation as detailed in the workflows below. Depending on the type of agent, one or more action spaces can be present. For example, one set of outputs for controlling direction, one set for controlling different 'stop' actions or for assigning values. All agents in this work use the actor-critic method to learn.
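A minimal sketch of one way such a pointer environment and discrete action space could be arranged is given below. The six axis-aligned moves plus a single 'stop' action are an assumed layout for illustration; as noted above, the embodiments may use further action sets, for example for assigning values.

```python
import numpy as np

MOVES = {
    0: (1, 0, 0), 1: (-1, 0, 0),   # +/- along the slice axis
    2: (0, 1, 0), 3: (0, -1, 0),   # +/- along rows
    4: (0, 0, 1), 5: (0, 0, -1),   # +/- along columns
}
STOP = 6  # the 'stop' action ends the episode for the current pointer


class PointerEnv:
    """The state is the pointer's voxel coordinate within the scan volume."""

    def __init__(self, volume: np.ndarray, start: tuple[int, int, int]):
        self.volume = volume
        self.pointer = np.array(start, dtype=int)

    def step(self, action: int) -> tuple[np.ndarray, bool]:
        if action == STOP:
            return self.pointer.copy(), True
        self.pointer = np.clip(self.pointer + MOVES[action],
                               0, np.array(self.volume.shape) - 1)
        return self.pointer.copy(), False
```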
[0058] At step 210, a voxel volume representing the fluid opening in question is defined.
[0059] There are two workflows, and both are intended to create an axisymmetric model of the structure in question. Axisymmetric models are 'pseudo-2D' models of flow channels in which axial flow is assumed to be identical at all angles about the flow centreline. Local asymmetry can be accounted for with the use of correction factors as will be discussed below.
RL controlled segmentation - workflow 1
[0060] Referring to Figure 2b, the generic process is carried out to step 206. A pointer is positioned in the environment.
[0061] At step 208a', the agent starts to move the pointer to place it along the expected location of the coronary artery (or other fluid channel of interest).
[0062] At step 208b', the same agent (or a different agent) stops the movement of the pointer.
[0063] At step 208c', the greyscale value of the voxel on which the pointer is stopped is determined (the reference value). Greyscale values of nearby voxels within a spherical volume of a predetermined radius are determined, within a predetermined range of the reference value. The agent is rewarded based on the number of voxels within the volume within the range. The aim of the agent is therefore to stop the pointer within the voxel volume (representing the fluid opening). In this way, the volume of the opening can be segmented.
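A sketch of this reward calculation is given below. The spherical radius and the greyscale tolerance are illustrative assumptions standing in for the predetermined values mentioned above.

```python
import numpy as np


def stop_reward(volume: np.ndarray, pointer: np.ndarray,
                radius: int = 3, tolerance: float = 50.0) -> float:
    """Count voxels inside a sphere around the stopped pointer whose
    greyscale lies within `tolerance` of the value at the pointer."""
    # For brevity the sketch assumes the pointer is at least `radius` voxels
    # from the volume edge; a full implementation would clip the patch.
    reference = volume[tuple(pointer)]
    z0, y0, x0 = pointer
    patch = volume[z0 - radius:z0 + radius + 1,
                   y0 - radius:y0 + radius + 1,
                   x0 - radius:x0 + radius + 1]
    z, y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1, -radius:radius + 1]
    sphere = (z ** 2 + y ** 2 + x ** 2) <= radius ** 2
    in_range = np.abs(patch - reference) <= tolerance
    return float(np.count_nonzero(in_range & sphere))
```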
[0064] At step 208d' the same agent (or a different agent) checks whether voxels pertaining to the coronary arteries have been selected and segmented. If not, a new pointer is provided (step 206), if so, then the process ends.
RL controlled segmentation - workflow 2
[0065] Referring to Figure 2c, at step 208a", the agent starts to move the pointer to place it along the expected location of the centreline of the coronary artery (or other fluid channel of interest).
[0066] At step 208b", the same agent (or a different agent) stops the movement of the pointer.
[0067] At step 208c" the same agent (or a different agent) checks whether all pointers making up the centreline have been placed. If not, a new pointer is provided (step 206), if so, then the process ends.
Axisymmetric model meshing
[0068] In workflow 2 above, the models are axisymmetric (for computational efficiency).
[0069] For workflow 2 - version 1 , skeletonization is carried out and a surface mesh is generated using the segmented volume. A CFD solver uses a combination of axisymmetric Navier-Stokes equations for fluid mechanics and non-linear elasticity equations for wall mechanics to simulate blood flow and thereby determine FFR / iFR as required.
[0070] For workflow 2 - version 2, rough contours for each slice are found using the centreline. Perpendicular distances are found between the centreline points and a 3D object made through interpolation of the contours, which is meshed. A CFD solver uses a combination of axisymmetric Navier-Stokes equations for fluid mechanics and non-linear elasticity equations for wall mechanics to simulate blood flow and thereby determine FFR / iFR as required.
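As a purely illustrative sketch, one way of reducing the segmented geometry to an axisymmetric radius profile is shown below; taking the mean centreline-to-contour distance as the local radius is a simplifying assumption made for the example.

```python
import numpy as np


def radius_profile(centreline: np.ndarray, contours: list[np.ndarray]) -> np.ndarray:
    """centreline: (N, 3) points ordered along the vessel;
    contours: one (M_i, 3) array of lumen contour points per centreline point.
    Returns an (N, 2) array of cumulative axial position s and local radius r(s)."""
    radii = np.array([np.mean(np.linalg.norm(contour - point, axis=1))
                      for point, contour in zip(centreline, contours)])
    segment_lengths = np.linalg.norm(np.diff(centreline, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(segment_lengths)])
    return np.column_stack([s, radii])
```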
RL controlled meshing
[0071] The following description provides two alternative workflows for training an RL meshing agent. The type of training used is the actor-critic reinforcement learning method.
RL controlled meshing - workflow 1
[0072] In workflow 1, Figure 3a, an RL agent for undertaking the meshing step 106 is trained. The intention is to feed the agent the segmented data from step 104 and to produce a mesh for step 106. The intention is that the mesh represents a balance between accuracy and computational efficiency.
[0073] In workflow 1 a basic mesh is provided at step 302 to the RL agent at 304, along with the input data 305 from step 102 (i.e. the medical images). At step 306 the RL agent alters the basic mesh (in generic terms performing an action in the environment) by, for example, adding or moving nodes.
[0074] At step 308, the CFD model is run using the altered mesh. Results of the model are assessed at the points where the alterations have been made at step 310. It can then be determined (by comparison to the previous analysis) whether the alterations have significantly altered the results (the model is not mesh independent) 312 or have not significantly altered the results (the model is mesh independent) 314. Mesh independence is measured by comparing the results of the simulation, such as the change in volumetric flow rate (Q), pressure (P) or area (A) in the CFD model, to a predetermined threshold. If the change is less than the predetermined threshold, mesh independence can be assumed.
[0075] In this embodiment, mesh dependence is rewarded with a reward to the RL agent (path 313) and independence is penalised with a negative reward to the RL agent (path 315). Mesh independence is penalised because the agent should quit and stop the loop (step 316) if mesh independence has been achieved, practically stating that the optimal mesh has been achieved.
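A sketch of this reward logic is given below. The relative-change threshold and the unit reward and penalty values are illustrative assumptions.

```python
def mesh_reward(previous: dict, current: dict,
                threshold: float = 0.01) -> tuple[float, bool]:
    """previous / current hold e.g. {'Q': ..., 'P': ..., 'A': ...} from the CFD run.
    Returns (reward, independent); 'independent' signals the stop at step 316."""
    rel_change = max(abs(current[k] - previous[k]) / (abs(previous[k]) + 1e-12)
                     for k in current)
    independent = rel_change < threshold
    reward = -1.0 if independent else 1.0   # path 315 vs path 313
    return reward, independent
```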
[0076] The process of Figure 3a is repeated over several different geometries (sets of data 306), and as this happens the agent will learn to generate a suitable mesh in a few (ideally one) iterations, which is what happens when implementing the process of Figure 1.
[0077] The trained RL model can then be used in step 106 of the method of Figure 1 .
RL controlled meshing - workflow 2
[0078] This workflow is a combination of generative and reinforcement learning.
[0079] In Workflow 1 , the agent 304 adds or moves nodes in a pre-existing base/initial mesh (302) iteratively. Workflow 2 is generative in nature- it creates the entire mesh in one forward step.
[0080] In workflow 2, Figure 3b, the agent directly generates a full mesh in one go at step 402 from the image data at 405. At step 408 the CFD model is run using the generated mesh. At the same time, a pre-generated mesh 418 of known, high accuracy (and fineness) is provided and run. At step 412 the results from the agent generated mesh and the pregenerated mesh are compared.
[0081] During the training phase, the agent is trained by iteration to generate a suitable mesh 'first time'. Although there are iterative steps in the training phase shown in Figure 3b, it will be noted that when deployed the idea is that the agent will create a mesh based on the geometry in one step.
[0082] At step 412 the reward policy varies from workflow 1. In workflow 1, the agent gets rewarded if it moves towards mesh independence. However, if it does not stop upon reaching a mesh that can be deemed independent, then it gets penalised. In workflow 2, the reward policy is based on how accurate and efficient the mesh generated in one go is, as compared to the pre-generated mesh 418. The less computationally intensive and the more accurate the mesh is, the higher the reward.
[0083] Therefore a high degree of similarity between the flow parameters calculated in agent-generated model from step 402 and the pre-generated model 418 has a positive reward generated at 412 and a low degree of similarity has a negative reward. Lower mesh fineness compared to pre-generated mesh 418 also has a positive reward.
[0084] The idea is to have a reward policy that avoids the kind of extremely fine mesh of the pre-generated mesh 418 but at the same time provides accuracy of the same level. The benefit of workflow two is that the "right first time" meshing concept is faster than workflow 1 in practice (i.e. when deployed).
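A sketch of such a reward policy is given below. The weighting between accuracy and computational cost, and the use of cell count as a proxy for cost, are assumptions made for illustration.

```python
def generative_mesh_reward(agent_results: dict, reference_results: dict,
                           agent_cells: int, reference_cells: int,
                           w_accuracy: float = 1.0, w_cost: float = 0.2) -> float:
    """Reward accuracy relative to the fine reference mesh (418) and give a
    bonus for using fewer cells than the reference."""
    error = max(abs(agent_results[k] - reference_results[k])
                / (abs(reference_results[k]) + 1e-12) for k in reference_results)
    accuracy_term = -error                              # closer to reference is better
    cost_term = 1.0 - agent_cells / reference_cells     # coarser than reference is better
    return w_accuracy * accuracy_term + w_cost * cost_term
```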
RL controlled CFD
[0085] The following description provides two alternative workflows for training an RL parameter control agent.
RL controlled parameters - workflow 1
[0086] Referring to Figure 4a, in workflow 1, an RL agent 502 is provided with a meshed model at step 505 created from patient data 500. Patient data may include scan data and other data such as cuff blood pressure, height and weight that can be easily measured without special sensors. The agent is also provided with the patient data 500 as an input so that it can learn how to choose parameters (hence the direct link between 500 & 502).
[0087] At step 504 the model provides the model parameters and / or boundary conditions which include, but are not limited to:
• Coronary vascular bed parameters:
o Intramyocardial Resistances;
o Intramyocardial Compliances;
• Heart model parameters:
o Peak Pressure;
o End Diastolic Pressure;
o Time Constant.
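By way of illustration, the agent's action at step 504 can be thought of as a vector that is decoded into a named set of parameters of the kinds listed above. The parameter ranges and units in the sketch below are assumptions and are not values taken from the embodiments.

```python
from dataclasses import dataclass


@dataclass
class SolverParameters:
    intramyocardial_resistance: float   # e.g. mmHg·s/mL
    intramyocardial_compliance: float   # e.g. mL/mmHg
    peak_pressure: float                # mmHg
    end_diastolic_pressure: float       # mmHg
    time_constant: float                # s


def decode_action(agent_output: list[float]) -> SolverParameters:
    """Map normalised agent outputs in [0, 1] onto physical parameter ranges."""
    low = SolverParameters(50.0, 0.01, 90.0, 5.0, 0.2)
    high = SolverParameters(200.0, 0.10, 180.0, 20.0, 1.0)
    values = [lo + a * (hi - lo)
              for a, lo, hi in zip(agent_output, vars(low).values(), vars(high).values())]
    return SolverParameters(*values)
```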
[0088] At step 507, in parallel with the control actions at step 504, the RL agent determines regions of interest in the model in the form of areas of asymmetry (in the case of occlusions). These are fed to a corrective agent model at step 509 which determines a correction factor to be applied to the axisymmetric model once solved, in the region of asymmetry. The correction factors get added downstream of step 506.
[0089] At step 506 the model is run in a CFD solver.
[0090] At step 508, a comparative step is carried out between predetermined output parameters of the model and outputs from a reference 510, also generated from data 505.
The comparison is carried out downstream of the regions of the model identified in steps 507, 509. The reference 510 may be one of:
o Results from a predetermined, validated cFFR / cIFR model; and,
o Clinical diagnostic results (ground truth), for example from an angiogram procedure.
[0091] Comparison may be between parameters such as the volumetric flow rate (Q), pressure (P) or area (A) in the CFD model.
[0092] The output of the comparison at step 508 is converted into a reward for the agent 502 at step 512. In this instance, the model is rewarded at branch 513 for its proximity to the reference model 510.
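A sketch of a proximity-based reward of this kind is given below; the exponential shaping and its scale are illustrative assumptions.

```python
import math


def proximity_reward(model_outputs: dict, reference_outputs: dict,
                     scale: float = 0.05) -> float:
    """1.0 for a perfect match to the reference; decays towards 0 as the mean
    relative error over the compared quantities (e.g. Q, P, A) grows."""
    rel_errors = [abs(model_outputs[k] - reference_outputs[k])
                  / (abs(reference_outputs[k]) + 1e-12) for k in reference_outputs]
    return math.exp(-sum(rel_errors) / len(rel_errors) / scale)
```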
RL controlled parameters - workflow 2
[0093] Workflow 2 functions in the same way as Workflow 1, except that separate agents control each section, and the asymmetry analysis may or may not be carried out by an RL agent.
[0094] Referring to Figure 4b there are provided three RL agents: 602a - an RL-agent (heart model), 602b - an RL-agent (vascular bed) and 602c - an RL-agent (region selection). The former two are fed by medical image patient data 600, and the latter by a mesh 601. Patient data may include scan data and other data such as cuff blood pressure, height and weight that can be easily measured without special sensors.
[0095] The agents 602a, 602b vary different controls / parameters- for example 602a varies parameters of the heart model (e.g. boundary conditions) and 602b the vascular bed (properties thereof). Both input to the solver at step 606 which outputs a result to a comparator at step 608. As well as the results, the comparator software receives information from the RL-agent (region selection) which, at corrective agent step 604c, identifies areas of the model with possible occlusions. These areas are identified by determining the radii of the structure along the centreline- deviations in radii are indicators of occlusions.
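A sketch of this radius-deviation check is given below; the smoothing window and the drop threshold are illustrative assumptions.

```python
import numpy as np


def flag_occlusion_regions(radii: np.ndarray, window: int = 11,
                           drop_fraction: float = 0.3) -> np.ndarray:
    """Return indices along the centreline where the local radius falls more
    than `drop_fraction` below its smoothed trend (edge effects ignored)."""
    kernel = np.ones(window) / window
    trend = np.convolve(radii, kernel, mode="same")
    relative_drop = (trend - radii) / (trend + 1e-12)
    return np.flatnonzero(relative_drop > drop_fraction)
```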
[0096] As with workflow 1, at step 608, a comparative step is carried out between predetermined output parameters of the model and outputs from a reference 610, also generated from data 600.
[0097] The comparison is carried out immediately downstream of the regions of the model identified at step 604c. The reference 610 may be one of:
- results from a predetermined, validated cFFR / cIFR model; and,
- clinical diagnostic results (ground truth).
[0098] Comparison may be between parameters such as the volumetric flow rate (Q), pressure (P) or area (A) in the CFD model.
[0099] The output of the comparison at step 608 is converted into a reward for the agents 602a, 602b and 602c at step 612. In this instance, the agents are rewarded at branch 613 for the proximity of the model to the reference model 610. The agents are also rewarded for working in unison: in practice this means the agents are rewarded for taking the same, or an equivalent (based on some normalised value), number of steps.
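One possible form of such a combined reward is sketched below; treating "unison" as a small spread in normalised step counts, and the weighting `w_unison`, are assumptions.

```python
def joint_reward(proximity, steps_per_agent, w_unison=0.2):
    """Combine proximity to the reference (branch 613) with a 'unison' term.

    proximity       : reward from the step-608 comparison (larger is better).
    steps_per_agent : iterable of (normalised) step counts taken by agents
                      602a, 602b and 602c in this episode.
    w_unison        : weighting of the agreement term (assumption).
    """
    steps = list(steps_per_agent)
    mean_steps = sum(steps) / len(steps)
    spread = max(steps) - min(steps)
    unison_term = -w_unison * spread / (mean_steps + 1e-9)  # zero when all agents agree
    return proximity + unison_term
```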
[0100] This workflow is memory heavy and computationally intensive, especially given the need to achieve the required harmony between the individual agents.
[0101] Clearly, the above methods can be used to train agents to segment, mesh and / or apply parameters such as boundary conditions to the model in steps 104, 106 and 108.
[0102] As described above, once the agents have prepared the model from the input image data, in step 110, the model is run to obtain a solution. The solved model can then be used to extract diagnostic metrics: in this embodiment, FFR and iFR at identified occlusions.
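For illustration, a minimal sketch of extracting these metrics from pressures sampled on the solved model; FFR is taken here as the mean distal-to-aortic pressure ratio from a solve representing hyperaemic conditions, and the iFR-like index restricts the same ratio to a precomputed diastolic wave-free window (passing that window in as sample indices is an assumption).

```python
def compute_ffr(distal_pressures, aortic_pressures):
    """FFR at an occlusion: mean distal pressure / mean aortic pressure."""
    mean_pd = sum(distal_pressures) / len(distal_pressures)
    mean_pa = sum(aortic_pressures) / len(aortic_pressures)
    return mean_pd / mean_pa

def compute_ifr(distal_pressures, aortic_pressures, wave_free_indices):
    """iFR-like index: the same ratio restricted to the diastolic wave-free period."""
    pd = [distal_pressures[i] for i in wave_free_indices]
    pa = [aortic_pressures[i] for i in wave_free_indices]
    return (sum(pd) / len(pd)) / (sum(pa) / len(pa))
```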
[0103] In step 114, the metrics are interpreted by a clinician to determine whether medical intervention is required and, if so, what type of intervention (e.g. use of stents, surgery, etc.).
[0104] As well as this specific outcome, the model can also be used to determine various other properties of the modelled system, for example ventricular mass or properties of the auricle.
[0105] Further, once treatment has been carried out, further scans can be taken and used to assess the impact of the intervention on FFR and iFR. Data from such a study can be collated in a database and used to select treatment options, such as the type of stent used, in future treatments based on the initially modelled properties and diagnosis.
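Purely as an illustration of such a lookup (not a clinical recommendation), a nearest-neighbour selection over the collated records might look as follows; the field names and the similarity measure are assumptions.

```python
def suggest_stent_type(new_case, past_cases):
    """Pick the stent type used in the most similar past case, judged here by
    modelled pre-treatment FFR / iFR (field names are assumed for illustration).

    new_case   : dict with "pre_ffr" and "pre_ifr" from the current model.
    past_cases : list of dicts with "pre_ffr", "pre_ifr", "post_ffr" and
                 "stent_type" collated from follow-up studies.
    """
    def distance(case):
        return (abs(case["pre_ffr"] - new_case["pre_ffr"])
                + abs(case["pre_ifr"] - new_case["pre_ifr"]))

    best = min(past_cases, key=distance)
    return best["stent_type"]
```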
Variations
[0106] In RL controlled parameters - workflow 2, the agent 602c may not be an RL agent.
[0107] The embodiments above utilise actor-critic RL trained agents, but other RL trained models may be employed, including but not limited to the following (a minimal tabular Q-learning update is sketched after the list):
• Model-free RL such as policy optimisation and Q-learning models;
• Model-based RL.
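As an example of the model-free alternatives listed above, a minimal sketch of the standard tabular Q-learning update is given below; the encoding of states and actions for the segmentation, meshing or parameter-setting tasks is left abstract.

```python
def q_learning_update(q_table, state, action, reward, next_state, next_actions,
                      alpha=0.1, gamma=0.99):
    """One tabular Q-learning step:
    Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a)).

    q_table      : dict mapping (state, action) -> value; missing entries count as 0.
    next_actions : discrete actions available in next_state (assumption).
    """
    current = q_table.get((state, action), 0.0)
    best_next = max((q_table.get((next_state, a), 0.0) for a in next_actions),
                    default=0.0)
    q_table[(state, action)] = current + alpha * (reward + gamma * best_next - current)
    return q_table[(state, action)]
```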
[0108] The above method is described in relation to diagnosis of conditions in the coronary arteries, although it will be understood by the skilled person that the same technique may be used for other blood vessels (arterial or venous), as well as other fluid passages and chambers within the human or animal body.
[0109] In the embodiments, the term "penalised" is used, which is synonymous with a "negative reward". It will be noted that the phrases "penalised" or "negative reward" may also be construed as limiting an RL agent's ability to collect further rewards. In other words, instead of providing a negative reward value, the immediate learning episode may be stopped. The agent (during learning) always tries to increase its total rewards: if that particular episode is stopped, then it is prevented from collecting more rewards.
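A minimal sketch of the episode-termination form of penalisation, under the assumption of a gym-style step that returns a reward and a done flag:

```python
def step_with_early_termination(reward, violated_constraint):
    """Illustrate 'penalising' by ending the episode rather than returning a
    negative reward: on a constraint violation the step returns done=True with
    zero reward, so the agent forfeits all further reward in that episode."""
    if violated_constraint:
        return 0.0, True    # (reward, done): episode stopped, no more reward collectable
    return reward, False
```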
Claims
1. An automated diagnostic method comprising the steps of: providing scan data comprising image data from at least one scan of at least part of a human or animal body organ having at least one fluid opening; segmenting the scan data to identify different regions of the scanned organ; meshing the segmented scan data to provide a meshed CAE model of the at least part of the human or animal body organ; applying properties and boundary conditions to the meshed CAE model; selecting the CAE model characteristics and solver; solving the meshed CAE model; analysing the solved CAE model to determine at least one diagnostic metric; wherein at least one of the steps of: segmenting, meshing, applying properties and boundary conditions; selecting the CAE model characteristics; and, analysing the solved CAE model is carried out by a reinforcement learning trained agent comprising an artificial neural network.
2. An automated diagnostic method according to claim 1, wherein the reinforcement learning trained agent is trained by the actor-critic method.
3. An automated diagnostic method according to claim 1 or 2, wherein: the step of providing scan data comprises the step of providing a plurality of offset cross-sectional scans; and, the step of segmenting comprises defining the environment as a three dimensional space containing the cross-sectional scans.
4. An automated diagnostic method according to claim 3, wherein the step of segmenting is carried out by a reinforcement learning trained agent, the agent trained by: positioning a pointer at a voxel within the environment; undertaking an action by moving the pointer in the environment; determining a reward based on the new position of the pointer.
5. An automated diagnostic method according to claim 4, wherein the reward is based on a greyscale value of a plurality of voxels proximate the pointer.
6. An automated diagnostic method according to any preceding claim, wherein the step of meshing is carried out by a reinforcement learning trained agent, the agent trained by: providing a basic mesh comprising a plurality of nodes; undertaking an action by adding or moving nodes to create a modified mesh; undertaking a CAE simulation based on the modified mesh; determining a reward based on the results of the CAE simulation.
7. An automated diagnostic method according to claim 6, wherein the reward is determined based on the independence of the mesh to the results of the CAE simulation.
8. An automated diagnostic method according to claim 7, wherein mesh independence from a previous iteration is penalised.
9. An automated diagnostic method according to claim 7, wherein mesh independence from a previously validated CAE model is rewarded.
10. An automated diagnostic method according to any preceding claim, wherein the step of applying properties and / or boundary conditions is carried out by a reinforcement learning trained agent, the agent trained by: applying a set of properties and / or boundary conditions to the meshed CAE model; undertaking a CAE simulation of the model; determining a reward based on the proximity of the results of the CAE simulation compared to an independently calculated diagnostic metric.
11. An automated diagnostic method according to any preceding claim, wherein the meshed CAE model is an axisymmetric model.
12. An automated diagnostic method according to claim 11, wherein a correction factor is applied to the results of the solved CAE model to account for asymmetry.
13. A method of training a reinforcement learning model for use in a medical diagnostic procedure, the method comprising the steps of: training the reinforcement learning model to carry out model segmentation by carrying out the following steps for a plurality of patient scan data: providing a three dimensional environment comprising patient scan data; positioning a pointer at a voxel within the environment; undertaking an action by moving the pointer in the environment; determining a reward based on the new position of the pointer.
14. A method of training a reinforcement learning model according to claim 13, wherein the reward is based on a greyscale value of a plurality of voxels proximate the pointer.
15. A method of training a reinforcement learning model for use in a medical diagnostic procedure, the method comprising the steps of: training the reinforcement learning model to carry out model meshing by carrying out the following steps for a plurality of segmented models: providing a basic mesh comprising a plurality of nodes; undertaking an action by adding or moving nodes to create a modified mesh; undertaking a CAE simulation based on the modified mesh; determining a reward based on the results of the CAE simulation.
16. A method of training a reinforcement learning model according to claim 15, wherein the reward is determined based on the independence of the mesh to the results of the CAE simulation.
17. A method of training a reinforcement learning model according to claim 16, wherein mesh independence from a previous iteration is penalised.
18. A method of training a reinforcement learning model according to claim 15, wherein mesh independence from a previously validated CAE model is rewarded.
19. A method of training a reinforcement learning model for use in a medical diagnostic procedure, the method comprising the steps of: training the reinforcement learning model to select properties and / or boundary conditions of a meshed model by carrying out the following steps for a plurality of meshed models: applying a set of properties and / or boundary conditions to the meshed CAE model; undertaking a CAE simulation of the model; determining a reward based on the proximity of the results of the CAE simulation compared to an independently calculated diagnostic metric.
20. A method of training a reinforcement learning model according to claim 19, wherein the meshed CAE model is an axisymmetric model.
21. A method of training a reinforcement learning model according to claim 20, wherein a correction factor is applied to the results of the solved CAE model to account for asymmetry.
22. A computer implemented method according to any of claims 1 to 21.
23. A data processing apparatus comprising means for carrying out the method of claim 22.
24. A computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method of claim 22.
25. A computer-readable data carrier having stored thereon the computer program according to claim 24.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GBGB2405632.7A GB202405632D0 (en) | 2024-04-22 | 2024-04-22 | Automated diagnosis aid |
| GB2405632.7 | 2024-04-22 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025224427A1 true WO2025224427A1 (en) | 2025-10-30 |
Family
ID=91275254
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/GB2025/050816 (pending; published as WO2025224427A1) | Automated diagnosis aid | 2024-04-22 | 2025-04-16 |
Country Status (2)
| Country | Link |
|---|---|
| GB (1) | GB202405632D0 (en) |
| WO (1) | WO2025224427A1 (en) |
Patent Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9167974B2 (en) | 2010-08-12 | 2015-10-27 | Heartflow, Inc. | Method and system for patient-specific modeling of blood flow |
| US10052158B2 (en) | 2010-08-12 | 2018-08-21 | Heartflow, Inc. | Method and system for image processing to determine patient-specific blood flow characteristics |
| WO2014111929A1 (en) * | 2013-01-15 | 2014-07-24 | Cathworks Ltd. | Calculating a fractional flow reserve |
| EP3399501A1 (en) | 2017-05-03 | 2018-11-07 | Siemens Healthcare GmbH | Multi-scale deep reinforcement machine learning for n-dimensional segmentation in medical imaging |
| EP3635683B1 (en) | 2017-05-09 | 2022-07-06 | HeartFlow, Inc. | Systems and methods for anatomic structure segmentation in image analysis |
| WO2023278180A2 (en) * | 2021-06-29 | 2023-01-05 | Varian Medical Systems, Inc. | Artificial intelligence enabled preference learning |
| WO2023215758A2 (en) * | 2022-05-02 | 2023-11-09 | Washington University | Absolute perfusion reserve |
| WO2023237553A1 (en) * | 2022-06-07 | 2023-12-14 | Pie Medical Imaging Bv | Method and system for assessing functionally significant vessel obstruction based on machine learning |
Also Published As
| Publication number | Publication date |
|---|---|
| GB202405632D0 (en) | 2024-06-05 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 25722276; Country of ref document: EP; Kind code of ref document: A1 |